AUTHORS
=======

119Vik
Adam Gandelman
Alberto Planas
Alessandro Pilotti
Alex Meade
Alexander Gordeev
Alexis Lee
Ana Krivokapic
Andreas Jaeger
Andreas Jaeger
Andrew Bogott
Andrey Kurilin
Angus Thomas
Anita Kuno
Anne Gentle
Anson Y.W
Anton Arefiev
Anusha Ramineni
Anusha Ramineni
Aparna
Arata Notsu
Armando Migliaccio
Artem Rozumenko
Bernard Van De Walle
Bertrand Lallau
Bob Ball
Boris Pavlovic
Brian Elliott
Brian Waldon
Bruno Cornec
Béla Vancsics
Caio Oliveira
Cameron.C
Chang Bo Guo
ChangBo Guo(gcb)
Chris Behrens
Chris Dearborn
Chris Jones
Chris Krelle
Chris Krelle
Chris Krelle
Chris St. Pierre
Christian Berendt
Chuck Short
Claudiu Belu
Clif Houck
Dan Prince
Dan Smith
Dan Smith
Daryl Walleck
Davanum Srinivas
Davanum Srinivas
David Hewson
David Kang
David McNally
David Shrewsbury
Davide Guerri
Debayan Ray
Derek Higgins
Devananda van der Veen
Dima Shulyak
Dirk Mueller
Dmitry Nikishov
Dmitry Tantsur
Dmitry Tantsur
DongCan
Dongdong Zhou
Doug Hellmann
Edwin Zhai
Ellen Hui
Erhan Ekici
Eric Guo
Eric Windisch
Faizan Barmawer
Fang Jinxing
Fengqian Gao
Gabriel Assis Bezerra
Ghe Rivero
Ghe Rivero
Ghe Rivero
Gonéri Le Bouder
Gregory Haynes
Grzegorz Grasza
Hadi Bannazadeh
Hans Lindgren
Haomeng, Wang
Harshada Mangesh Kakad
He Yongli
Hironori Shiina
Hugo Nicodemos
IWAMOTO Toshihiro
Ihar Hrachyshka
Ilya Pekelny
Imre Farkas
Ionut Balutoiu
Jacek Tomasiak
James E. Blair
James Slagle
Jason Kölker
Jay Faulkner
Jeremy Stanley
Jesse Andrews
Jim Rollenhagen
Jing Sun
Joe Gordon
Johannes Erdfelt
John Garbutt
John L. Villalovos
John Trowbridge
Josh Gachnang
Joshua Harlow
Joshua Harlow
Julia Kreger
Julien Danjou
Junya Akahira
KATO Tomoyuki
Kan
Ken Igarashi
Kun Huang
Kurt Taylor
Kurt Taylor
Kyle Stevenson
Laura Moore
Lenny Verkhovsky
Lilia
Lilia Sampaio
Lin Tan
Lin Tan
Lucas Alvares Gomes
Marco Morais
Marcus Rafael
Mario Villaplana
Mark Atwood
Mark Goddard
Mark McLoughlin
Mark Silence
Martin Kletzander
Martyn Taylor
Mathieu Gagné
Mathieu Mitchell
Matt Joyce
Matt Keeann
Matt Wagner
Matthew Gilliard
Matthew Treinish
Mauro S. M. Rodrigues
Max Lobur
Max Lobur
Michael Davies
Michael Kerrin
Michael Krotscheck
Michael Still
Michael Turek
Michey Mehta michey.mehta@hp.com
Mike Turek
Mikhail Durnosvistov
Mikyung Kang
Miles Gould
Mitsuhiro SHIGEMATSU
Mitsuhiro SHIGEMATSU
Monty Taylor
Motohiro OTSUKA
Motohiro Otsuka
Naohiro Tamura
Nguyen Hung Phuong
Nisha Agarwal
Om Kumar
Ondřej Nový
Pablo Fernando Cargnelutti
Pavlo Shchelokovskyy
Peeyush Gupta
Peng Yong
Phil Day
Pádraig Brady
Rafi Khardalian
Rakesh H S
Ramakrishnan G
Rick Harris
Robert Collins
Robert Collins
Rohan Kanade
Rohan Kanade
Roman Bogorodskiy
Roman Dashevsky
Roman Podoliaka
Roman Prykhodchenko
Roman Prykhodchenko
Ruby Loo
Russell Bryant
Russell Haering
SHIGEMATSU Mitsuhiro
Sam Betts
Sana Khan
Sandhya Balakrishnan
Sandy Walsh
Sanjay Kumar Singh
Sascha Peilicke
Sascha Peilicke
Satoru Moriya
Sean Dague
Sean Dague
Serge Kovaleff
Sergey Lukjanov
Sergey Lupersolsky
Sergey Lupersolsky
Sergey Nikitin
Sergey Vilgelm
Shane Wang
Shilla Saebi
Shinn'ya Hoshino
Shivanand Tendulker
Shuangtai Tian
Shuichiro MAKIGAKI
Shuquan Huang
Sinval Vieira
Sirushti Murugesan
Srinivasa Acharya
Stanislaw Pitucha
Steven Dake
Steven Hardy
Stig Telfer
Tan Lin
Thiago Paiva
Thierry Carrez
Tom Fifield
Tony Breeds
Tushar Kalra
Vic Howard
Victor Lowther
Victor Sergeyev
Vikas Jain
Vinay B S
Vishvananda Ishaya
Vladyslav Drok
Wang Wei
Wanghua
Wei Du
Xian Dong, Meng
Xian Dong, Meng
Xiaobin Qu
Yatin Kumbhare
Yuiko Takada
Yun Mao
Yuriy Taraday
Yuriy Zveryanskyy
Yushiro FURUKAWA
Zhang Yang
Zhao Lei
Zhenguo Niu
Zhenguo Niu
Zhenzan Zhou
ZhiQiang Fan
ZhiQiang Fan
Zhongyue Luo
Zhongyue Luo
baiyuan
chenghang
chenglch
dekehn
divakar-padiyar-nandavar
dparalen
gaoxiaoyong
houming-wang
jiangfei
jiangwt100
jinxingfang
jxiaobin
lei-zhang-99cloud
linggao
lvdongbing
max_lobur
ryo.kurahashi
saripurigopi
sjing
sonu.kumar
stephane
tanlin
vishal mahajan
vmud213
vsaienko
whaom
whitekid
yangxurong
yunhong jiang
zouyee

CHANGES
=======

5.1.0
-----

* Documentation update for partition image support
* Append 'Openstack-Request-Id' header to the response
* Add disk_label and node_uuid for agent drivers
* Fix sphinx docs build
* Agent: Out-of-band power off on deploy
* Document partition image support with agent_ilo
* Add support for partition images in agent drivers
* Update the text in user guide of ironic
* Translate requests exception to IronicException
* Extend the Conductor RPC object
* Make sure target state is cleared on stable states
* Removes redundant "to"
* Install apparmor b/c Docker.io has undeclared dep
* Don't depend on existing file perm for qemu hook
* Devstack: add check of chassis creating
* Adds doc - firmware update(iLO) manual clean step
* Add ensure_thread_contain_context() to task_manager
* [devstack] Do not die if neutron is disabled
* Follow-up of firmware update(iLO) as manual cleaning step
* Updating driver docs with DL hardwares requirements
* Remove unneeded 'wait=False' to be more clean and consistent
* Pass region_name to SwiftAPI
* Uses jsonschema library to verify clean steps
* Fix important typo in the ipmitool documentation
* DevStack: Allow configuring the authentication strategy
* Add documentation for RAID

5.0.0
-----

* Add documentation about the disk_label capability
* SSH driver: Remove pipes from virsh's list_{all, running}
* Add documentation for the IPMITool driver
* Fix error in cleaning docs
* Replace deprecated tempest-lib with tempest.lib
* Add new 'disk_label' capability
* Fix JSON string in example of starting manual cleaning
* Remove 'grub2' option in creating whole-disk-images
* Update iRMC driver doc for inspection
* Don't use token for glance & check for some unset vars
* Use 'baremetal' flavor in devstack
* [devstack] Fix IPA source build on Fedora
* DevStack: Enable VirtualBMC logs
* Support for passing CA certificate in Ironic Glance Communication
* Updated from global requirements
* Firmware update(iLO) as manual cleaning step
* Updated from global requirements
* Remove code duplication
* Update iLO documentation for clean step 'reset_ilo'
* Refactor the management verbs check to utils
* Updated from global requirements
* Remove duplicate doc in ironic.conf.sample
* Prep for 5.0 release
* Fix unittests after new releases of libraries
* Updating docs with support for DL class servers
* Update CIMC driver docs to install ImcSdk from PyPi
* Add returns to send_raw() ipmitool function
* Add function for dump SDR to ipmitool driver
* Add clean step in iLO drivers to activate iLO license
* Update proliantutils version to 2.1.7 for Mitaka release
* ipxe: add --timeout parameter to kernel and initrd
* Updated iLO driver documentation to recommend ipmitool version
* Refactor driver loading to load a driver instance per node
* Clean up driver loading in init_host
* add wipefs to ironic-lib.filters
* Updated from global requirements
* Use assertEqual/Greater/Less/IsNone
* Follow up nits of 3429e3824c060071e59a117c19c95659c78e4c8b
* API to list nodes using the same driver
* [devstack] set ipa-debug=1 for greater debugability
* Loose python-oneviewclient version requirement
* Set node last_error in TaskManager
* Add possible values for config options
* Follow up nits of irmc oob inspection
* Enable removing name when updating node
* Make some agent functions require exclusive lock
* Add db api layer for CRUD operations on node tags
* Update proliantutils version required for Mitaka release
* Add deprecated_for_removal config info in ironic.conf.sample
* Update ironic.conf.sample
* Tolerate roles in context.RequestContext
* Switch to Futurist library for asynchronous execution and periodic tasks
* Move _from_db_object() into base class
* Add ironic_tempest_plugin to the list of packages in setup.cfg
* Fix gate broken by sudden remove of SERVICE_TENANT_NAME variable
* Add manual cleaning to documentation
* Import host option in base test module
* Fixes automated cleaning failure in iLO drivers
* Updated from global requirements
* DevStack: Add support for deploying nodes with pxe_ipmitool
* Change the libvirt NIC driver to virtio
* DevStack: Support to install diskimage-builder from source
* [Devstack]Add ability to enable ironic node pty console
* Use 'node' directly in update_port()
* Add links to the standalone configdrive documentation
* DevStack: Install squashfs-tools
* [DevStack] fix restart of nova compute
* Use http_{root, url} config from "deploy" instead of "pxe"
* During cleaning, store clean step index
* Use oslo_config.fixture in unit tests
* Introduce driver_internal_info in code-contribution-guide
* Updated from global requirements
* Correct instance parameter description
* Add node.uuid to InstanceDeploy error message
* Set existing ports pxe_enabled=True when adding pxe_enabled column
* Augmenting the hashing strategy
* Add hardware inspection module for iRMC driver
* Document possible access problems with custom IRONIC_VM_LOG_DIR path
* Add documentation for proxies usage with IPA
* Updated from global requirements
* Devstack: create endpoint in catalog unconditionally
* Comment out test options that already exist on tempest's tree
* Replace config 'clean_nodes' with 'automated_clean'
* Remove 'zapping' from code
* Cache agent clean steps on node
* API to manually clean nodes
* Replace ifconfig with ip
* Updated iLO documentation for boot mode capability
* Agent vendor handles manual cleaning
* Remove downgrade support from migrations
* Enable tinyipa for devstack Ironic
* Disable clean step 'reset_ilo' for iLO drivers by default
* Add proxy related parameters to agent driver
* Update ironic.conf.sample
* Fix genconfig "tempdir" inconsistency
* Update the home page
* Follow-up on dracclient refactor
* Log warning if ipmi_username/ipmi_password missing
* Add portgroups to support LAG interfaces - net
* Add portgroups to support LAG interfaces - RPC
* Add portgroups to support LAG interfaces - objs
* Add portgroups to support LAG interfaces - DB
* Fix missing lookup() vendor method error for pxe_drac
* Refresh ssh verification mechanism
* Refactor install-guide to configure API/Conductor separately
* Enable Ironic Inspector for Cisco Drivers
* Fix doc8's "duplicated target names" (D000) error
* Remove conditional checking the auth_strategy values
* Extend root device hints to support device name
* Fix spawn error hook in "continue_node_clean" RPC method
* Enable doc8 style checker for *.rst files
* Updated from global requirements
* Show transitions initiated by API requests
* Remove hard-coded DEPLOYWAIT timeout from Baremetal Scenario
* Fix tiny format issue with install_guide
* Add priority to manual clean step example
* Use node uuid in some exception log
* Fix error message in devstack
* Updated from global requirements
* [devstack] Restart nova compute before checking hypervisor stats
* Imported Translations from Zanata
* Fix minor typo
* DRAC: cleanup after switch to python-dracclient
* API service logs access requests again
* Updated from global requirements
* Correct port_id parameter description
* Remove duplicate words in API version history
* Remove unneeded enable_service in dev-quickstart.rst
* Clarify that size in root device hints and local_gb are often different
* Update ImcSdk requirement to use PyPi
* Clean up 'no_proxy' unit tests
* Add more unit tests for NO_PROXY validation
* Add ability to cache swift temporary URLs
* DRAC: switch to python-dracclient on vendor-passthru
* Migrate Tempest tests into Ironic tree
* Use Tempest plugin interface
* Fix issues with uefi-ipxe booting
* Update links to OpenStack manuals
* Fix issue where system hostname can impact genconfig
* Add choices option to several options
* Reorganize the developer's main page
* Document backwards compat for passthru methods
* Drop MANIFEST.in - it's not needed by pbr
* Clean up unneeded deprecated_group
* Devstack: replace 'http' with SERVICE_PROTOCOL
* Clarify rejected status in RFE contribution docs
* Bring UP baremetal bridge
* Document the process of proposing new features
* Updated from global requirements
* Use assertTrue/False instead of assertEqual(T/F)
* devstack 'cleanup-node' script should delete OVS bridges
* Change default IRONIC_VM_SPECS_RAM to 1024
* Remove release differences from flavor creation docs
* Add documentation for standalone ilo drivers
* Devstack: Make sure libvirt's hooks directory exists
* Update the ironic.conf.sample file
* Follow-up on refactor DRAC management interface
* Allow user to set arch for the baremetal flavor and ironic node
* tox: make it possible to run pep8 on current patch only
* Devstack: Use [deploy] erase_devices_priority config option
* Remove bashate from envlist
* Use ironic-lib's util methods
* Refactor objects into a magic registry
* Don't return tracebacks in API response in debug mode
* Updated from global requirements
* Change assertTrue(isinstance()) by optimal assert
* Remove */openstack/common* in tox
* Remove vim headers in source files
* Trivial: Remove unused logging import
* Use ironic-lib's qemu_img_info() & convert_image()
* Update "Developer Quick-Start" guide for Fedora 23+
* Enable ironic devstack plugin in local.conf sample
* Correct a tiny issue in install-guide
* Install 'shellinabox' package for Ironic
* Fix translations in driver base
* Run flake8 against the python scripts under tools/ and devstack/tools
* Add UEFI support for iPXE
* Add console feature to ssh driver
* Conductor handles manual cleaning
* Add extensions to the scripts at devstack/tools/ironic/scripts
* Fix "No closing quotation" error when building with tox
* Devstack: Remove QEMU hook at ./unstack
* Run bashate as part of the pep8 command
* Fix bashate errors in grenade plugin
* Fix syntax errors in the shell scripts under devstack/tools
* Use the apache-ironic.template from our tree
* Fix typo in ironic/conductor/manager.py
* genconfig: Debug info for unknown config types
* Keep the console logs for all boots
* Use imageutils from oslo.utils
* Add documentation for user inputs as HTTPS URLs
* Add bashate tox command
* Updated from global requirements
* Add documentation for swiftless intermediate images
* DRAC: switch to python-dracclient on management interface
* DRAC: switch to python-dracclient on power interface
* Follow up nits of Exception to str type conversion
* Clean up variables in plugin.sh
* Replace assertEqual(None, *) with assertIsNone in tests
* Add utility function to validate NO_PROXY
* Add bifrost as an optional project in Service overview
* Sequence diagrams for iLo driver documentation
* Refactor ilo documentation for duplicate information
* Update swift HTTPs information in ilo documentation
* Updated from global requirements
* Deprecated tox -downloadcache option removed
* Remove override-defaults
* Use 'service_type' of 'network'. Not 'neutron'
* Update ironic.conf.sample by applying the bug fix #1522841
* Add grenade plugin
* Follow up patch to correct code-contribute-guide
* Fix iPXE template for whole disk image
* Add devstack plugin
* Copy devstack code to ironic tree
* Add FSM.is_stable() method
* Explicitly depend on WebTest>=2.0
* Always pass keystone credentials to neutronclient
* Remove extra space in 'host' config comment
* Add oslo_config.Opt support in Ironic config generator
* Refactor disk partitioner code from ironic and use ironic-lib
* Simplifies exception message assurance for oneview.common tests
* Use node.uuid directly in stop_console()
* Correct NotImplemented to NotImplementedError in rpcapi.py
* Adding oneview.common tests for some method not well tested
* Add port option support for ipmitool
* Numerous debug messages due to iso8601 log level
* Handle deprecated opts' group correctly
* Updated from global requirements
* Clarify what changes need a release note
* Remove wsgi reset_pool_size_to_default test
* Add Mitaka release notes page
* Update python-scciclient version number
* Add release notes from Icehouse to Liberty
* Add Code Contribution Guide for Ironic
* Replace HTTP 'magic numbers' with constants
* Documentation points to official release notes

4.3.0
-----

* Fix awake AMT unit test
* Fix bug where clean steps do not run
* Add reno for AMT wakeup patch
* Updating OneView driver requirements and docs
* Correct the db connection string in dev-quickstart
* Split BaseConductorManager from ConductorManager
* Validate arguments to clean_step() decorator
* test: Remove _BaseTestCase
* Wake up AMT interface before send request
* Fall back to old boot.ipxe behaviour if inc command is not found
* Only mention IPA in the quick start and user guides for DevStack
* Improve options help for image caching
* Add troubleshooting docs for "no valid host found"
* change mysql url in dev-quickstart doc
* Extend FAQ with answer of how to create a new release note
* Sync ironic.conf sample
* Comment spelling error in ironic-images.filters file
* Updated from global requirements
* Add a developer FAQ
* Add tests for RequestContextSerializer
* Add a test to enforce object version bump correctly
* force releasenotes warnings to be treated as errors
* Avoid RequestContextSerializer from oslo.messaging
* Follow up patch for the first commit of iRMC new boot I/F
* Move iso8601 as a test dependency only
* Catch up release notes for Mitaka
* Move common code from ironic.conductor.manager to ironic.conductor.utils
* Add deprecated config info in ironic.conf.sample
* Add switch to enable/disable streaming raw images for IPA
* SwiftAPI constructor should read CONF variables at runtime
* Take over console session if enabled
* Drop some outdated information from our quick start guide
* Refactor IRMCVirtualMediaAgentDeploy by applying new BootInterface
* Refactor IRMCVirtualMediaIscsiDeploy by applying new BootInterface
* Updated from global requirements
* Fix: Next cleaning hangs if the previous cleaning was aborted
* Add clean up method for the DHCP factory
* Add missing packages to dev-quickstart
* Support arguments for clean step methods
* Validate all tcp/udp port numbers
* Add manual cleaning to state machine
* Specifying target provision states in fsm
* Use server_profile_template_uri at scheduling
* Check shellinabox started successfully or not
* Add SSL support to the Ironic API
* Updated from global requirements
* Use wsgi from oslo.service for Ironic API
* Remove duplicated unit tests in test_manager
* Get mandatory patch attrs from WSME properties
* Add and document two new root device hints: wwn_{with, vendor}_extension
* Sort root device hints when parsing
* add "unreleased" release notes page
* Follow up patch for 39e40ef12b016a1aeb37a3fe755b9978d3f9934f
* Document 'erase_devices_iterations' config option
* Update iLO documentation
* Adds test case for the iscsi_ilo recreate boot iso
* Refactor agent_ilo driver to use new boot interface
* Updated from global requirements
* Refactor iLO driver console interface into new module
* Add reno for release notes management
* Add choices to temp_url_endpoint_type config option
* Fix oslo namespace in default log level
* Remove __name__ attribute from WSME user types
* refine the ironic installation guide
* Revert "Add Pillow to test-requirements.txt"
* Update etc/ironic/ironic.conf.sample
* Make task parameter mandatory in get_supported_boot_devices
* Follow up patch for Ib8968418a1835a4131f2f22fb3e4df5ecb9b0dc5
* Check shellinabox process during stopping console
* Add whole disk image creation command to Installation Guide
* Fix docker.io bug in the Install Guide
* Updated from global requirements
* Node's last_error to show the actual error from sync_power_state
* Updated from global requirements
* Rename test_conductor_utils.py to test_utils.py
* Follow up patch for 8c3e102fc5736bfcf98525ebab59b6598a69b428
* Add agent_iboot entrypoint
* Validate console port number in a valid range
* iboot: add wait loop for pstate to activate
* Don't reraise the exception in _set_console_mode
* Check seamicro terminal port as long as it specified
* Add missing unit tests for some PXE drivers
* Validate the input of properties of nodes
* Add documentation for Ceph Object Gateway support
* Refactor iscsi_ilo driver to use new boot interface
* Fix comments on DRAC BIOS vendor_passthru
* cautiously fail on unhandled heartbeat exception
* Add "agent_wol" (AgentAndWakeOnLanDriver)
* Added unit tests for CORS middleware
* Use oslo_config new type PortOpt for port options
* Fix markup error in deploy/drivers.rst
* Update the Configuration Reference to Liberty in doc
* Updated from global requirements
* Use self.__class__.X instead of self.X
* Rename utils.py to mgr_utils.py to avoid namespace collision
* XenAPI: Add support for XenServer VMs
* Add PortOpt to config generator
* Imported Translations from Zanata
* Move hash_ring refresh logic out of sync_local_state
* Move ironic.tests.unit.base to ironic.tests.base
* Change required version of ImcSdk to 0.7.2
* Add an iboot reboot_delay setting
* iPXE document about the existence of prebuilt images
* Fix a typo
* Switched order of CORS middleware
* DRAC BIOS vendor_passthru: enable rebooting the node
* Replace deprecated LOG.warn with warning
* Add db migration and model for tags table
* Add OneView driver documentation
* Fix snmp property descriptions
* Updated from global requirements
* Slightly reword README
* Remove unused functions from agent driver
* mocking syscalls to make the tests run on OS X
* Enable cmd/api & cmd/conductor to be launched directly
* Add reboot_delay option to snmp driver
* Add self.raid for iSCSI based drivers
* Move test_pxe.py inside unit/drivers/modules directory
* Move pxe._parse_instance_info() to deploy_utils
* Add note about driver API breakage
* Fix a missing detail in install guide
* Enable radosgw support in ironic
* Updated from global requirements
* Add agent_amt docs
* Add release notes for 4.2.1
* Convert set() to list in ListType
* remove lxml requirement
* Update python-oneviewclient version
* Fix an annoying detail in the developer quick-start
* Updated from global requirements
* Expose versioning information on GET / endpoint
* Fixes logging of failure in deletion of swift temporary object
* ucs_hostname changed to ucs_address
* Updated from global requirements
* Remove functions: _cleanse_dict & format_message
* Move FakeOneViewDriver to the fake.py module
* Add testresources and testscenarios used by oslo.db fixture
* Add agent_amt driver
* Imported Translations from Zanata
* Stop adding translation function to builtins
* Fix tests giving erroneous output during os-testr run
* OneView Driver for Ironic
* Fix agent_ilo to remove temporary images
* Updated from global requirements
* iPXE: Fix assumption that ${mac} is the MAC of the NIC it's booting
* Prevent iRMC unit test from potential failure at the gate
* Add secret=True to password option
* Fix a bug error by passwords only includes numbers
* Add support for in-band cleaning in ISCSIDeploy
* Fix typo in document
* Remove unused import of oslo_log
* Use power manager to reboot in agent deployments
* Add retries to ssh._get_hosts_name_for_node
* Refactor deploy_utils methods
* Fix irmc driver unit test
* PXE: Support Extra DHCP Options for IPv6
* Use standard locale when executing 'parted' command
* Updated from global requirements
* To run a specific unit test with ostestr use -r
* Add .eggs to gitignore
* Fix log formatting issue in agent base
* Add notes to functions which are in ironic-lib
* Allow empty password for ipmitool console
* Update help string on tftp_root option
* Updated from global requirements
* Fix conductor deregistration on non init conductor
* Imported Translations from Zanata
* Add Pillow to test-requirements.txt
* Add agent inspection support for IPMI and SSH drivers
* Python 3.4 unit tests fail with LANG=C
* Fix ubuntu install command in install guide
* Move unit tests to correct directory
* Add 'whitelist_externals = bash' for two testenvs
* Rename 'message' attribute to '_msg_fmt' in IronicException
* Follow up for: Prepare for functional testing patch
* Fix documentation for installing mariaDB
* Update help strings for DRAC configs
* Switch tox unit test command to use ostestr
* Use standard locale when executing 'dd' command
* Imported Translations from Zanata
* Fix typo: add a missing white space
* Prepare for functional testing
* Fix some iBoot strings
* Replace six.iteritems() with .items()
* Make generation of ironic.conf.sample deterministic
* Cached file should not be deleted if time equal to master

4.2.0
-----

* Cleanup of Translations
* Update architecture docs to mention new driver interfaces
* Add 4.2.0 release notes
* Update docs for Fedora 22
* Add i18n _ import to cimc common
* Update proliantutils version required for L release
* Use of 'the Bare Metal service' in guide
* Update install guide to reflect latest code
* Implement indirection_api
* Add 'abort' to state machine diagram
* Unit test environment setup clarification
* Make end-points discoverable via Ironic API
* Updated from global requirements
* Allow unsetting node.target_raid_config
* Allow abort for CLEANWAIT states
* Clean up CIMC driver docs and comments
* Add Cisco IMC PXE Driver
* Fix final comments in RAID commits
* Refactor agent {prepare,tear_down}_cleaning into deploy_utils
* Handle unquoted node names from virt types
* Fix iRMC vmedia deploy failure due to already attached image
* Implement take_over for iscsi_ilo driver
* Fix typo in vendor method dev documentation
* Fix incorrect urls
* Check image size before provisioning for agent driver
* Help patch authors to remember to update version docs
* Add constraint target to tox.ini
* Add IPMINative vendor methods to *IPMINative drivers
* Fix string formatting issues
* Remove DictMatches custom matcher from unit tests
* Imported Translations from Zanata
* Remove unused object function
* Use oslo.versionedobjects remotable decorators
* Base IronicObject on VersionedObject
* Update descriptions in RAID config schema
* Document GET ...raid/logical_disk_properties
* Convert functools.wraps() usage to six.wraps()
* Remove comment about exception decorator
* Replace metaclass registry with explicit opt-in registry from oslo
* Add config option to override url for links
* Fix iBoot test__switch_retries test to not waste time sleeping
* Allow tftpd usage of '--secure' by using symlinks
* Add support for inband raid configuration agent ramdisk
* Agent supports post-clean-step operations
* Update 'Installation Guide' for RHEL7/CentOS7/Fedora
* Fix docs about --is-public parameter for glance image-create
* Fix indentation of the console docs
* Fix heading levels in the install-guide
* Cache the description of RAID properties
* Remove the hard dependency of swift from ilo drivers
* Fix mistakes in comments
* Updated from global requirements
* Fix object field type calling conventions
* Add version info for pyghmi in driver-requirements.txt

4.1.0
-----

* Add 4.1.0 release notes
* Try to standardize retrieval of an Exception's description
* Add description how to restart ironic services in Fedora/RHEL7/CentOS7
* Improve the ability to resolve capability value
* Add supported environment 'VMware' to comments
* Updated from global requirements
* Remove policy 'admin' rule support
* Handle missing is_whole_disk_image in pxe._build_pxe_config_options
* Raise InvalidParameterValue when ipmi_terminal_port is ''
* Fix doc typo
* Remove executable permission from irmc.py
* Add APIs for RAID configuration
* agent_ilo fails to bring up instance
* Updated from global requirements
* Remove 'is_valid_event' method
* Set boot device in PXE Boot interface method prepare_instance()
* Revert "Do not overwrite the iPXE boot script on every deployment"
* Add vendor interface to ipminative driver
* When boot option is not persisted, set boot on next power on
* Document nodes in enroll state, in install guide
* Added CORS support middleware to Ironic
* Refactor map_color()
* Removes unused posix-ipc requirement
* Add retry options to iBoot power driver
* Trusted boot doc
* Prevent ilo drivers powering off active nodes during take over
* Add release notes for 4.0.0
* Clean up cleaning error handling on heartbeats
* Use vendor mixin in IPMITool drivers
* Use oslo.messaging serializers
* Add RPC APIs for RAID configuration
* Add new method validate_raid_config to RAIDInterface
* Fix docker package name in Ubuntu 14.04 in Install Guide
* Updated from global requirements
* Do not overwrite the iPXE boot script on every deployment
* Reset tempdir config option after NestedTempfile fixture applied
* Remove unused dep discover from test reqs
* Add deprecation warning to periodic tasks with parallel=False
* Use six.text_type in parse_image_ref
* Ensure that pass_deploy_info() always calls boot.prepare_instance()
* Add minimum and maximum on port option
* Update ironic.conf.sample with tox -egenconfig
* Update documentation to install grub2 when creating the user image
* Fix logging and exceptions messages in ipminative driver
* Fix minor spelling/grammar errors
* Put py34 first in the env order of tox
* format links in the readme to work with the release notes tools
* Periodically checks for nodes being cleaned
* Add links for UEFI secure boot support to iLO driver documentation
* Add cleanup in console utils tests
* Follow up the nits in iRMC vmedia driver merged patch
* Refactor agent driver with pxe boot interface
* Update tests to reflect WSME 0.8 fixes
* Remove ObjectListBase
* Remove broken workaround code for old mock
* Create a versions.py file
* Improve comparison operators for api/controllers/base.py
* Switch to post-versioning

4.0.0
-----

* Fix improper exception catching
* Fix nits from 'HTTP constants' patch
* Use JsonEncoded{Dict,List} from oslo_db
* Move tests into correct directories
* Fix logging levels in do_node_deploy
* Fix misspelling from "applicatin" to "application"
* Updated from global requirements
* Remove unneeded module variable '__all__'
* Updated from global requirements
* Change and edit of Ironic Installation Guide
* Remove the --autofree option from boot.ipxe
* Switch from deprecated timeutils.isotime
* Fix "tox -egenconfig" by avoiding the MODULEPATH env variable
* Improve logging for agent driver
* Refactor the essential prop list of inspect driver
* Reset clean_step if error occurs in CLEANWAIT
* Fix bug sending sensor data for drivers w/o management
* Replace HTTP 'magic numbers' with constants
* Address final comments on update image cache based on update time
* 'updated_at' field shows old value after resource is saved
* Increase size of nodes.driver column
* Add better dbapi support for querying reservation
* Allow digits in IPA driver names
* Updated from global requirements
* Add documentation for iRMC virtual media driver
* Add copyright notice to iRMC driver source code
* Remove CONF.agent.agent_pxe_bootfile_name
* Update single letter release names to full names
* Enforce flake8 E711
* Update docstring for agent deploy's take_over
* Update cached images based on update time
* Updated from global requirements
* Add RAIDInterface for RAID configuration
* get_supported_boot_devices() returns static device list
* add ironic client and ironic inspector projects into contribution list
* Updated from global requirements
* Use the oslo_utils.timeutils 'StopWatch' class
* Update the documentation to use IPA as deploy ramdisk
* Inspector inspection fails due to node locked error
* Prevent power actions when the node is in CLEANWAIT state
* Imported Translations from Transifex
* Remove unnecessary trailing backslash in Installation Guide
* Refactor some minor issues to improve code readability
* Fix misspelling in comment
* Make app.wsgi more like ironic.cmd.api
* Migrate IronicObjectSerializer to subclass from oslo
* Updated from global requirements
* Fix warnings on doc builds
* Change vagrant.yml to vagrant.yaml
* Developer quickstart documentation fixes
* Document configuring ironic-api behind mod_wsgi
* Updated from global requirements
* Add deprecation messages on the bash ramdisk endpoints
* Document API versioning
* Log configuration values as DEBUG, not INFO
* Update ironic.conf.sample
* Update ironic.conf.sample
* Add information 'node_uuid' in debug logs to facilitate the reader's life
* Clean up instance_uuid as part of the node's tear down
* Fix a trusted boot test bug
* Add more info level log to deploy_utils.work_on_disk() method
* Fix broken agent virtual media drivers
* Updated from global requirements
* Fix apache wsgi import
* Add raises docstring tag into object.Ports methods
* Only take exclusive lock in sync_power_state if node is updated
* Secure boot support for pxe_ilo driver
* UCS: node-get-boot-device is failing for Cisco servers
* grub2 bootloader support for uefi boot mode
* Add Nova scheduler_tracks_instance_changes config to docs
* Use automaton's converters/pydot
* enroll/verify/cleanwait in state machine diagram
* Save and re-raise exception
* Cache Keystone client instance
* Refactor pxe - New PXEBoot and ISCSIDeploy interfaces
* Don't prevent updates if power transition is in progress
* Follow-on to b6ed09e297 to fix docstrings/comments
* Make inspector driver test correctly
* Allow inspector driver to work in standalone mode
* Remove outdated TODO.rst file
* Updated from global requirements
* Introduce support for APC MasterSwitchPlus and Rack PDU
* Allow agent lookup to directly accept node UUID
* Add CLEANWAIT state
* Allow updates in VERIFYING state
* Allow deleting nodes in ENROLL state
* Updated from global requirements
* Fixes a testcase related to trusted boot in UEFI boot mode
* Clarify inspection upgrade guide
* Refactor refresh method in objects for reuse
* Imported Translations from Transifex
* Use utils.mkfs directly in deploy_utils
* Updated from global requirements
* Migrate ObjectListBase to subclass from the Oslo one
* Clean up tftp files if agent deployed disk image
* Don't do a premature reservation check in the provision API
* Move the http_url and http_root to deploy config
* Allow upgrading shared lock to an exclusive one
* Fix the DEPLOYWAIT check for agent_* drivers
* Add a missing comma in Vendor Methods of Developer Guide
* Replacing dict.iteritems() with dict.items()
* Updated from global requirements
* db: use new EngineFacade feature of oslo.db
* Address minor comments on the ENROLL patch
* Remove requirements.txt from tox.ini deps
* Updated from global requirements
* Replace common.fileutils with oslo_utils.fileutils
* Updated from global requirements
* Switch to the oslo_utils.fileutils
* Start using new ENROLL state
* Add .idea to .gitignore
* Periodically checks the status of nodes in DEPLOYING state
* Add IPA support for iscsi_irmc driver
* Updated from global requirements
* Vagrant configuration generation now uses pymysql
* Remove deprecated code for driver vendor passthru
* Add DRAC BIOS config vendor passthru API
* Use DEPLOYWAIT while waiting for agent to write image
* Fix unittests due mock 1.1.0 release
* Migrate RPC objects to oslo.versionedobjects Fields
* Imported Translations from Transifex
* Updated from global requirements
* Mock the file creation for the GetConfigdriveTestCase tests
* Address follow-up comments
* Clear ilo_boot_iso before deploy for glance images
* Enable translation for config option help messages
* Replace is_hostname_safe with a better check
* Initial oslo.versionedobjects conversion
* Add whole disk image support for iscsi_irmc driver
* Add localboot support for iscsi_irmc driver
* Add iRMC Virtual Media Deploy module for iRMC Driver
* add python-scciclient version number requirement
* Remove db connection string env variable from tox.ini
* Make use of tempdir configuration
* Updated from global requirements
* Fix failing unit tests under py34
* Allow vendor methods to serve static files
* Allow updates when node is on ERROR provision state
* Add sequence diagrams for pxe_ipmi driver
* Fix logging for soft power off failures
* Mute ipmi debug log output
* Validate IPMI protocol version for IPMIShellinaboxConsole
* Image service should not be set in ImageCache constructor
* Clean nodes stuck in DEPLOYING state when ir-cond restarts
* Add ability to filter nodes by provision_state via API
* Refactor check_allow_management_verbs
* Add node fields for raid configuration
* Switch to oslo.service
* Fix "boot_mode_support" hyper link in Installation Guide
* Log configuration options on ironic-conductor startup
* Allow deleting even associated and active node in maintenance mode
* Use oslo_log
* Replace self.assertEqual(None,*) to self.assertIsNone()
* Improve warning message in conductor.utils.node_power_action()
* Add a new boot section 'trusted_boot' for PXE
* use versionutils from oslo_utils
* Make task_manager logging more helpful
* Add IPMI 1.5 support for the ipmitool power driver
* Add iBoot driver documentation
* Updated from global requirements
* Add unit test for ilo_deploy _configure_vmedia_boot()
* Do not use "private" attribute in AuthTokenMiddleware
* API: Get a subset of fields from Ports and Chassis
* Save disk layout information when deploying
* Add ENROLL and related states to the state machine
* Refactor method to add or update capability string
* Use LOGDIR instead of SCREEN_LOGDIR in docs
* Always allow removing instance_uuid from node in maintenance mode
* API: Get a subset of fields from Nodes
* Switch from MySQL-python to PyMySQL
* Updated from global requirements
* copy editing of ironic deploy docs
* Transition state machine to use automaton oslo lib
* Finish switch to inspector and inspector-client
* Rename ilo_power._attach_boot_iso to improve readability
* Expose current clean step in the API
* Fix broken ACL tests
* Add option to configure passes in erase_devices
* Refactor node's and driver's vendor passthru to a common place
* Change return value of [driver_]vendor_passthru to dict
* Add Wake-On-Lan driver documentation
* Fixes a bug on the iLO driver tutorial
* Address follow-up comments on ucs drivers
* Added documentation to Vagrantfile
* Updated from global requirements
* Addresses UcsSdk install issue
* Don't raise exception from set_failed_state()
* Add disk layout check on re-provisioning
* Add boot interface in Ironic
* Fix Cisco UCS slow tests
* Validate capability in properties and instance_info
* Pass environment variables of proxy to tox
* DRAC: fix set/get boot device for 11g
* Enable flake8 checking of ironic/nova/*
* Remove tools/flakes.py
* Wake-On-Lan Power interface
* IPA: Do a soft power off at the end of deployment
* Remove unnecessary validation in PXE
* Add additional logging around cleaning
* remove unneeded sqlalchemy-migrate requirement
* Add vendor-passthru to attach and boot an ISO
* Updated from global requirements
* Sync with latest oslo-incubator
* Add pxe_ucs and agent_ucs drivers to manage Cisco UCS servers
* Doc: Use --notest for creating venv
* Updated from global requirements
* Fix DRAC driver job completion detection
* Add additional required RPMs to dev instructions
* Update docs for usage of python-ironicclient
* Install guide reflects changes on master branch
* Remove auth token saving from iLO driver
* Don't support deprecated drivers' vendor_passthru
* Updated from global requirements
* Enforce flake8 E123/6/7/8 in ironic
* Change driver_info to driver_internal_info in conductor
* Use svg as it looks better/scales better than png
* Updated from global requirements
* Use oslo config import methods for Keystone options
* Add documentation for getting a node's console
* fix node-get-console returns url always start with http
* Update the config drive doc to replace deprecated value
* Updated from global requirements
* Remove bogus conditional from node_update
* Prevent node delete based on provision, not power, state
* Revert "Add simplegeneric to py34 requirements"
* Do not save auth token on TFTP server in PXE driver
* Updated from global requirements
* Update iLO documentation for UEFI secure boot
* ironic-discoverd is being renamed to ironic-inspector
* Update doc "install from packages" section to include Red Hat
* Improve strictness of iLO test cases error checking
* Remove deprecated pxe_deploy_{kernel, ramdisk}
* Get admin auth token for Glance client in image_service
* Fix: iSCSI iqn name RFC violation
* Update documentation index.rst
* Update AMT Driver doc
* Refactor ilo.common._prepare_floppy_image()
* Do not add auth token in context for noauth API mode
* DRAC: config options for retry values
* Disable meaningless sort keys in list command
* Update pyremotevbox documentation
* Fix drac implementation of set_boot_device
* Update to hacking 0.10.x
* Prepare for hacking 0.10.x
* Rename gendocs tox environment
* Add simplegeneric to py34 requirements
* Reduce AMT Driver's dependence on new release of Openwsman
* Fixes some docstring warnings
* Slight changes to Vagrant developer configs
* Delete neutron ports when the node cleaning fails
* Update docstring DHCPNotFound -> DHCPLoadError
* Wrap all DHCP provider load errors
* Add partition number to list_partitions() output fields
* Added vagrant VM for developer use
* Execute "parted" from root in list_partitions()
* Remove unused CONF variable in test_ipminative.py
* Ironic doesn't use cacert while talking to Swift
* Fix chainloading iPXE (undionly.kpxe)
* Updated from global requirements
* Improve root partition size check in deploy_partition_image
* ironic/tests/drivers: Add autospec=True and spec_set=
* Fix and enhance "Exercising the Services Locally" docs
* Fix typos in Ironic docs
* Fix spelling error in docstring
* Remove deprecated exceptions
* Check temp dir is usable for ipmitool driver
* Improve strictness of AMT test cases error checking
* Improve strictness of iRMC test cases error checking
* Fix Python 3.4 test failure
* Remove unneeded usage of '# noqa'
* Drop use of 'oslo' namespace package
* Updated from global requirements
* Specify environment variables needed for a standalone usage
* Adds OCS Power and Management interfaces
* Run tests in py34 environment
* Adds docstrings to some functions in ironic/conductor/manager.py
* Add section header to state machines page
* Update config generator to use oslo released libs
* Use oslo_log lib
* Include graphviz in install prerequisites
* Link to config reference in our docs
* Adopt config generator
* Remove cleanfail->cleaning from state diagram
* Imported Translations from Transifex
* Return HTTP 400 for invalid sort_key
* Update the Vendor Passthru documentation
* Add maintenance mode example with reason
* Add logical name example to install-guide
* Improve strictness of DRAC test cases error checking
* Add a venv that can generate/write/update the states diagram
* Log attempts while trying to sync power state
* Disable clean_step if config option is set to 0
* Improve iSCSI deployment logs
* supports alembic migration for db2
* Updated from global requirements
* Update iLO documentation for capabilities

2015.1.0
--------

* ironic/tests/drivers/amt: Add autospec=True to mocks
* Updated from global requirements
* ironic/tests/drivers/irmc: Add spec_set & autospec=True
* Updated from global requirements
* ironic/tests/drivers/drac: Add spec_set= or autospec=True
* Create a 3rd party mock specs file
* Release Import of Translations from Transifex
* Document how to configure Neutron with iPXE
* Remove state transition: CLEANFAIL -> CLEANING
* Remove scripts for migrating nova baremetal
* Add a missing comma and correct some typos
* Remove API reboot from cleaning docs
* Remove scripts for migrating nova baremetal
* Fixed is_glance_image(image_href) predicate logic
* Rearrange some code in PXEDeploy.prepare
* Fixes typo in ironic/api/hooks.py and removes unnecessary parenthesis
* update .gitreview for stable/kilo
* Add cleaning network docs
* Remove ironic compute driver and sched manager
* ironic/tests/drivers/ilo: Add spec= & autospec=True to mocks
* Replace 'metrics' with 'meters' in option
* Update some config option's help strings
* document "scheduler_use_baremetal_filters" option in nova.conf
* Fix heartbeat when clean step in progress
* Fix heartbeat when clean step in progress
* Update ilo drivers documentation for inspection
* Open Liberty development

2015.1.0rc1
-----------

* Local boot note about updated deploy ramdisk
* Convert internal RPC continue_node_cleaning to a "cast"
* iLO driver documentation for node cleaning
* Fix typos in vendor-passthru.rst
* Add Ceilometer to Ironic's Conceptual Architecture
* Improve AMT driver doc
* iLO driver documentation for UEFI secure boot
* Fix for automated boot iso issue with IPA ramdisk
* Update session headers during initialization of AgentClient
* Agent driver fails without Ironic-managed TFTP
* Add notes about upgrading juno->kilo to docs
* Address comments on I5cc41932acd75cf5e9e5b626285331f97126932e
* Use mock patch decorator for eventlet.greenthread.sleep
* Cleanup DHCPFactory._dhcp_provider after tests
* Follow-up to "Add retry logic to _exec_ipmitool"
* Nit fixes for boot_mode being overwritten
* Update installation service overview
* Don't pass boot_option: local for whole disk images
* Fixup post-merge comments on cleaning document
* Use hexhyp instead of hexraw iPXE type
* Fix exception handling in Glance image service
* Update proliantutils version required for K release
* Fix type of value in error middleware response header
* Imported Translations from Transifex
* Fix mocks not being stopped as intended
* Add maintenance check before call do_node_deploy
* Fix VM stuck when deploying with pxe_ssh + local boot
* Fix bad quoting in quickstart guide
* Set hash seed to 0 in gendocs environment
* boot_mode is overwritten in node properties
* Add retry logic to _exec_ipmitool
* Check status of bootloader installation for DIB ramdisk
* Add missing mock for test_create_cleaning_ports_fail
* Shorten time for unittest test_download_with_retries
* Disable XML now that we have WSME/Pecan support
* tests/db: Add autospec=True to mocks
* Sync with oslo.incubator
* Enable cleaning by default
* Improve error handling when JSON is not returned by agent
* Fix help string for glance auth_strategy option
* Document ports creating configuration for in-band inspection
* Remove DB tests workarounds
* Fix formatting issue in install guide
* Add missing test for DB migration 2fb93ffd2af1
* Regenerate states diagram after addition of CLEANING
* Fix UnicodeEncodeError issue when the language is not en_US
* pxe deploy fails for whole disk images in UEFI
* Remove setting language to en_US for 'venv'
* Add config drive documentation
* Refactor test code to reduce duplication
* Mock time.sleep() for two unittests
* Clarify message for power action during cleaning
* Add display-name option to example apache2 configuration
* New field 'name' not supported in port REST API
* Update doc for test database migrations
* Add PXE-AMT driver's support of IPA ramdisk
* Fix cleaning nits
* Update docs: No power actions during cleaning
* Prevent power actions on node in cleaning
* Followup to comments on Cleaning Docs
* Remove inspect_ports from ilo inspection
* Removed hardcoded IDs from "chassis" test resources
* Fix is_hostname_safe for RFC compliance
* Enable pxe_amt driver with localboot
* Improve backwards compat on API behaviour
* Use node UUID in logs instead of node ID
* Add IPA to enable drivers doc's page
* Top level unit tests: Use autospec=True for mocks
* DRAC: power on during reboot if powered off
* Update pythonseamicroclient package version
* A wrong variable format used in msg of ilo:
* Add documentation for Cleaning
* Explicitly state that reboot is expected to work with powered off nodes
* Prevent updating the node's driver if console is enabled
* Agent driver: no-op heartbeat for maintenanced node
* Deploys post whole disk image deploy fails
* Allow node.instance_uuid to be removed during cleaning
* Attach ilo_boot_iso only if node is active
* Ensure configdrive isn't mounted for ilo drivers
* Ensure configdrive isn't mounted for ipxe/elilo
* Correct update_dhcp_opts methods
* Fix broken unittests usage of sort()
* Add root device hints documentation
* Ensure configdrive isn't mounted in CoreOS ramdisks
* Add local boot with partition images documentation
* Add a return after saving node power state
* Fix formatting error in states_to_dot
* pxe partition image deploy fails in UEFI boot mode
* Updated from global requirements
* Fix common misspellings
* Ilo drivers sets capabilities:boot_mode in node
* Add whole disk image support for iscsi_ilo using agent ramdisk
* Fixed nits for secure boot support for iLO Drivers
* Fix typos in ironic/ironic/drivers/modules
* fix invalid asserts in tests
* Fail deploy if root uuid or disk id isn't available
* Hide new fields via single method
* Update "Ironic as a standalone service" documentation
* DRAC: add retry capability to wsman client operations
* Secure boot support for agent_ilo driver
* Secure boot support for iscsi_ilo driver
* Changes for secure boot support for iLO drivers

2015.1.0b3
----------

* follow up patch for ilo capabilities
* Support agent_ilo driver to perform cleaning
* Implement cleaning/zapping for the agent driver
* Add Cleaning Operations for iLO drivers
* Automate uefi boot iso creation for iscsi_ilo driver
* Generate keystone_authtoken options in sample config file
* Use task.spawn_after to maintain lock during cleaning
* is_whole_disk_image might not exist for previous instances
* Hide inspection_*_at fields if version < 1.6
* Disable cleaning by default
* Suppress urllib3.connection INFO level logging
* Allow periods (".") in hostnames
* iscsi_ilo driver do not validate boot_option
* Sync from oslo.incubator
* Common changes for secure boot support
* Add pxe_irmc to the sending IPMI sensor data driver list
* iLO driver updates node capabilities during inspection
* iLO implementation for hardware inspection
* Address nits in uefi agent iscsi deploy commit
* Raise exception for Agent Deploy driver when using partition images
* Add uefi support for agent iscsi deploy
* Enable agent_ilo for uefi-bios switching
* Fixup log message for discoverd
* Update unittests and use NamedTemporaryFile
* Rename _continue_deploy() to pass_deploy_info()
* Write documentation for hardware inspection
* Start using in-band inspection
* Log message is missing a blank space
* Address comments on cleaning commit
* IPA: Add support for root device hints
* Use Mock.patch decorator to handle patching amt management module
* iscsi_ilo driver to support agent ramdisk
* Enhance AMT driver documentation, pt 2
* Implement execute clean steps
* Add missing exceptions to destroy_node docstrings
* Force LANGUAGE=en_US in test runs
* Add validations for root device hints
* Add localboot support for uefi boot mode
* ironic port deletion fails even if node is locked by same process
* Add whole disk image support in iscsi_ilo driver
* Enhance AMT driver documentation
* Use oslo_policy package
* Use oslo_context package
* Adds support for deploying whole disk images
* Add AMT-PXE driver doc
* Fix two typos
* Add node UUID to deprecated log message
* Fix wrong chown command in deployment guide
* PXE driver: Deprecate pxe_deploy_{ramdisk, kernel}
* Add label to virtual floppy image
* Make sure we don't log the full content of the config drive
* Update API doc to reflect node uuid or name
* Fix typo agaist->against
* Use strutils from oslo_utils
* Updated from global requirements
* Add AMT-PXE-Driver Power&Management&Vendor Interface
* Fix wrong log output in ironic/ironic/conductor/manager.py
* Refactor agent iscsi deploy out of pxe driver
* Tiny improvement of efficient
* Make try block shorter for _make_password_file
* Add module for in-band inspection using ironic-discoverd
* Fix take over for agent driver
* Add server-supported min and max API version to HTTPNotAcceptable(406)
* Updated from global requirements
* Add tftp mapfile configuration in install-guide
* Fix nits in cleaning
* Fix nits for supporting non-glance images
* Follow-up patch for generic node inspection
* Add a note to dev-quickstart
* Add iter_nodes() helper to the conductor manager
* Implement Cleaning in DriverInterfaces
* Update install-guide for Ubuntu 14.10 package changes
* Use mock instead of fixtures when appropriate
* Generic changes for Node Inspection
* Fix typo in "Enabling Drivers"
* Support for non-Glance image references
* Create new config for pecan debug mode
* Local boot support for IPA
* PXE drivers support for IPA
* Update documentation on VirtualBox drivers
* Add localboot support for iscsi_ilo driver
* Improve last_error for async exceptions
* Fix IPMI support documentation
* Root partition should be bootable for localboot
* Updated from global requirements
* Add iRMC Management module for iRMC Driver
* Spelling error in Comment
* Remove unused code from agent vendor lookup()
* Add documentation for VirtualBox drivers
* Implement Cleaning States
* Missing mock causing long tests
* Add support for 'latest' in microversion header
* Add tests for ilo_deploy driver
* Fix reboot logic of iRMC Power Driver
* Update the states generator and regenerate the image
* Ensure state values are 15 characters or less
* Minor changes to InspectInterface
* INSPECTFAIL value is more readable
* Disable n-novnc, heat, cinder and horizon on devstack
* Return required properties for agent deploy driver
* Remove unused modules from ironic/openstack/common
* Use functions from oslo.utils
* Update Ilo drivers to use REST API interface to iLO
* Add dhcp-all-interfaces to get IP to NIC other than eth0
* Log exception on tear_down failure
* Fix PEP8 E124 & E125 errors
* Mock sleep function for OtherFunctionTestCase
* Log node UUID rather than node object
* Updated from global requirements
* Add InspectInterface for node-introspection
* Correctly rebuild the PXE file during takeover of ACTIVE nodes
* Fix PEP8 E121 & E122 errors
* Add documentation for the IPMI retry timeout option
* Use oslo_utils replace oslo.utils
* Avoid deregistering conductor following SIGUSR1
* Add states required for node-inspection
* For flake8 check, make the 'E12' ignore be more granular
* add retry logic to is_block_device function
* Imported Translations from Transifex
* Move oslo.config references to oslo_config
* Add AMT-PXE-Driver Common Library
* Fix typos in documentation: Capabilities
* Removed unused image file
* Address final comments of a4cf7149fb
* Add concept of stable states to the state machine
* Fix ml2_conf.ini settings
* Vendorpassthru doesn't get correct 'self'
* Remove docs in proprietary formats
* Fix file permissions in project
* Imported Translations from Transifex
* Updated from global requirements
* Remove deploy_is_done() from AgentClient
* AgentVendorInterface: Move to a common place
* Stop console at first if console is enabled when destroy node
* fixed typos from eligable to eligible and delition to deletion
* Add logical name support to Ironic
* Add support for local boot
* Fix chown invalid option -- 'p'
* ipmitool drivers fail with integer passwords
* Add the subnet creation step to the install guide

2015.1.0b2
----------

* improve iSCSI connection check
* Remove min and max from base.Version
* Add list of python driver packages
* Add policy show_password to mask passwords in driver_info
* Conductor errors if enabled_drivers are not found
* Add MANAGEABLE state and associated transitions
* Raise minimum API version to 1.1
* Correct typo in agent_client
* Fix argument value for work_on_disk() in unit test
* Documentation: Describe the 'spacing' argument
* update docstring for driver_periodic_task's parallel param
* Use prolianutils module for ilo driver tests
* Add documentation on parallel argument for driver periodic tasks
* Rename provision_state to power_state in test_manager.py
* Refactor ilo.deploy._get_single_nic_with_vif_port_id()
* Update agent driver with new field driver_internal_info
* Updated from global requirements
* Add support for driver-specific periodic tasks
* Partial revert of 4606716 until we debug further
* Clean driver_internal_info when changes nodes' driver
* Add Node.driver_internal_info
* Move oslo.config references to oslo_config
* Move oslo.db references to oslo_db
* Revert "Do not pass PXE net config from bootloader to ramdisk"
* Bump oslo.rootwrap to 1.5.0
* Drop deprecated namespace for oslo.rootwrap
* Add VirtualBox drivers and its modules
* region missing in endpoint selection
* Add :raises: for Version constructor docstring
* Improve testing of the Node's REST API
* Rename NOSTATE to AVAILABLE
* Add support for API microversions
* Address final comments of edf532db91
* Add missing exceptions into function docstring
* Fix typos in commit I68c9f9f86f5f113bb111c0f4fd83216ae0659d36
* Add logic to store the config drive passed by Nova
* Do not POST conductor_affinity in tests
* Add 'irmc_' prefix to optional properties
* Actively check iSCSI connection after login
* Updated from global requirements
* Add iRMC Driver and its iRMC Power module
* Fix drivers.rst doc format error
* Improve test assertion for get_glance_image_properties
* Do not pass PXE net config from bootloader to ramdisk
* Adds get_glance_image_properties
* Fix filter_query in drac/power interface
* Updated from global requirements
* Simplify policy.json
* Replace DIB installation step from git clone to pip
* Add a TODO file
* Updated from global requirements
* Fix function docstring of _get_boot_iso_object_name()
_get_boot_iso_object_name() * Improve ironic-dbsync help strings * Clear locks on conductor startup * Remove argparse from requirements * Use oslo_serialization replace oslo.serialization * Agent driver fails with Swift Multiple Containers * Add ipmitool to quickstart guide for Ubuntu * Allow operations on DEPLOYFAIL'd nodes * Allow associate an instance independent of the node power state * Improve docstrings about TaskManager's spawning feature * DracClient to handle ReturnValue validation * Fix instance_info parameters clearing * DRAC: Fix wsman host verification * Updated from global requirements * Clean up ilo's parse_driver_info() * Fix ssh _get_power_status as it returned status for wrong node * Fix RPCService and Ironic Conductor so they shut down gracefully * Remove jsonutils from openstack.common * Remove lockfile from dependencies * Remove IloPXEDeploy.validate() * Force glance recheck for kernel/ramdisk on rebuild * iboot power driver: unbound variable error * Remove unused state transitions * PXE: Add configdrive support * Rename localrc for local.conf * DracClient to handle ClientOptions creation * Ensure we don't have stale power state in database after power action * Remove links autogenerated from module names * Make DD block size adjustable * Improve testing of state transitions * Convert drivers to use process_event() * Update service.py to support graceful Service shutdown * Ensure that image link points to the correct image * Raise SSH failure messages to the error level * Make 'method' explicit for VendorInterface.validate() * Updated from global requirements * Provided backward compat for enforcing admin policy * Allow configuration of neutronclient retries * Convert check_deploy_timeout to use process_event * Add requests to requirements.txt * Enable async callbacks from task.process_event() * Document dependency on `fuser` for pxe driver * Distinguish between prepare + deploy errors * Avoid querying the power state twice * Add state machine to documentation * Updated from global requirements * Adjust the help strings to better reflect usage * Updated from global requirements * Updated from global requirements * Update etc/ironic/ironic.conf.sample * Fix policy enforcement to properly detect admin * Minor changes to state model * Add documentation to create in RegionOne * Delete unnecessary document files * Updated from global requirements * display error logging should be improved * Refactor async helper methods in conductor/manager.py * Hide oslo.messaging DEBUG logs by default * add comments for NodeStates fields * Stop conductor if no drivers were loaded * Fix typo in install-guide.rst * Reuse methods from netutils * Use get_my_ipv4 from oslo.utils * improve the neutron configuration in install-guide * Refactoring for Ironic policy * PXE: Pass root device hints via kernel cmdline * Extend API multivalue fields * Add a fsm state -> dot diagram generator * Updated from global requirements * Update command options in the Installation Guide 2015.1.0b1 ---------- * Improve Agent deploy driver validation * Add new enrollment and troubleshooting doc sections * Begin using the state machine for node deploy/teardown * Add base state machine * Updated from global requirements * Get rid of set_failed_state duplication * Remove Python 2.6 from setup.cfg * Updated from global requirements * Update dev quick-start for devstack * Updated from global requirements * Correct vmware ssh power manager * rename oslo.concurrency to oslo_concurrency * Remove duplicate dependencies 
from dev-quickstart docs * Do not strip 'glance://' prefix from image hrefs * Updated from global requirements * Fix image_info passed to IPA for image download * Use Literal Blocks to write code sample in docstring * Workflow documentation is now in infra-manual * Add tests to iscsi_deploy.build_deploy_ramdisk_options * Fix for broken deploy of iscsi_ilo driver * Updated from global requirements * Add info on creating a tftp map file * Add documentation for SeaMicro driver * Fixed typo in Drac management driver test * boot_devices.PXE value should match with pyghmi define * Add decorator that requires a lock for Drac management driver * Remove useless deprecation warning for node-update maintenance * Ilo tests refactoring * Change some exceptions from invalid to missing * Add decorator that requires a lock for Drac power driver * Change methods from classmethod to staticmethod * iLO Management Interface * Improve docs for running IPA in Devstack * Update 'Introduction to Ironic' document * Avoid calling _parse_driver_info in every test * Updated from global requirements * Correct link in user guide * Minor fix to install guide for associating k&r to nodes * Add serial console feature to seamicro driver * Support configdrive in agent driver * Add driver_validate() * Update drivers VendorInterface validate() method * Adds help for installing prerequisites on RHEL * Add documentation about Vendor Methods * Make vendor methods discoverable via the Ironic API * Fix PXEDeploy class docstring * Updated from global requirements * Vendor endpoints to support different HTTP methods * Add ipmitool as dependency on RHEL/Fedora systems * dev-quickstart.rst update to add required packages * Add gendocs tox job for generating the documentation * Add gettext to packages needed in dev quickstart * Convert qcow2 image to raw format when deploy * Update iLO driver documentation * Disable IPMI timeout before setting boot device * Updated from global requirements * ConductorManager catches Exceptions * Remove unused variable in agent._get_interfaces() * Enable hacking rule E265 * Add sync and async support for passthru methods * Fix documentation on Standard driver interfaces * Add a mechanism to route vendor methods * Remove redundant FunctionalTest usage in API tests * Use wsme.Unset as default value for API objects * Fix traceback on rare agent error case * Make _send_sensor_data more cooperative * Updated from global requirements * Add logging to driver vendor_passthru functions * Support ipxe with Dnsmasq * Correct "returns" line in PXE deploy method * Remove all redundant setUp() methods * Update install guide to install tftp * Remove duplicated _fetch_images function * Change the force_raw_image config usage * Clear maintenance_reason when setting maintenance=False * Removed hardcoded IDs from "port" test resources * Switch to oslo.concurrency * Updated from global requirements * Use docstrings for attributes in api/controllers * Put nodes-related API in same section * Fix get_test_node attributes set incorrectly * Get new auth token for ramdisk if old will expire soon * Delete unused 'use_ipv6' config option * Updated from global requirements * Add maintenance to RESTful web API documentation * Updated from global requirements * Iterate over glance API servers * Add API endpoint to set/unset the node maintenance mode * Removed hardcoded IDs from "node" test resources * Add maintenance_reason when setting maintenance mode * Add Node.maintenance_reason * Fix F811 error in pep8 * Improve hash ring value 
conversion * Add SNMP driver for Aten PDU's * Update node-validate error messages * Store image disk_format and container_format * Continue heartbeating after DB connection failure * TestAgentVendor to use the fake_agent driver * Put a cap on our cyclomatic complexity * More helpful failure for tests on noexec /tmp * Update doc headers at end of Juno * Fix E131 PEP8 errors 2014.2 ------ * Add the PXE VendorPassthru interface to PXEDracDriver * Add documentation for iLO driver(s) * Enable E111 PEP8 check * Updated from global requirements * Fix F812 PEP8 error * Enable H305 PEP8 check * Enable H307 PEP8 check * Updated from global requirements * Enable H405 PEP8 check * Enable H702 PEP8 check * Enable H904 PEP8 check * Migration to oslo.serialization * Add the PXE VendorPassthru interface to PXEDracDriver * Adds instructions for deploying instances on real hardware * Fix pep8 test * Add missing attributes to sample API objects * Fix markup-related issues in documentation * Add documentation for PXE UEFI setup 2014.2.rc2 ---------- * Clear hash ring cache in get_topic_for* * Fix exceptions names and messages for Keystone errors * Remove unused change_node_maintenance_mode from rpcapi * Imported Translations from Transifex * Clear hash ring cache in get_topic_for* * Move database fixture to a separate test case * KeyError from AgentVendorInterface._heartbeat() * Validate the power interface before deployment * Cleans up some Sphinx rST warnings in Ironic * Remove kombu as a dependency for Ironic 2014.2.rc1 ---------- * Make hash ring mapping be more consistent * Add periodic task to rebuild conductor local state * Open Kilo development * Add "affinity" tracking to nodes and conductors * ilo* drivers to use only ilo credentials * Update hacking version in test requirements * Add a call to management.validate(task) * Replace custom lazy loading by stevedore * Updated from global requirements * Remove useless variable in migration * Use DbTestCase as test base when context needed * For convention rename the first classmethod parameter to cls * Always reset target_power_state in node_power_action * Imported Translations from Transifex * Stop running check_uptodate in the pep8 testenv * Add HashRingManager to wrap hash ring singleton * Fix typo in agent validation code * Conductor changes target_power_state before starting work * Adds openSUSE support for developer documentation * Updated from global requirements * Remove untranslated PO files * Update ironic.conf.sample * Remove unneeded context initialization in tests * Force the SSH commands to use their default language * Add parameter to override locale to utils.execute * Refactor PXE clean up tests * Updated from global requirements * Don't reraise Exceptions from agent driver * Add documentation for ironic-dbsync command * Do not return 'id' in REST API error messages * Separate the agent driver config from the base localrc config * pxe_ilo driver to call iLO set_boot_device * Remove redundant context parameter * Update docs with new dbsync command * Update devstack docs, require Ubuntu 14.04 * Do not use the context parameter on refresh() * Pass ipa-driver-name to agent ramdisk * Do not set the context twice when forming RPC objects * Make context mandatory when instantiating a RPC object * Neutron DHCP implementation to raise exception if no ports have VIF * Do not cache auth token in Neutron DHCP provider * Imported Translations from Transifex * add_node_capability and rm_node_capability unable to save changes to db * Updated from global 
requirements * Handle SNMP exception error.PySnmpError * Use standard locale in list_partitions * node_uuid should not be used to create test port * Revert "Revert "Search line with awk itself and avoid grep"" * Fix code error in pxe_ilo driver * Add unit tests for SNMPClient * Check whether specified FS is supported * Sync the doc with latest code * Add a doc note about the vendor_passthru endpoint * Remove 'incubated' documentation theme * Import modules for fake IPMINative/iBoot drivers * Allow clean_up with missing image ref * mock.called_once_with() is not a valid method * Fix Devstack docs for zsh users * Fix timestamp column migration * Update ironic states and documentation * Stop using intersphinx * Updated from global requirements * Remove the objectify decorator * Add reserve() and release() to Node object * Add uefi boot mode support in IloVirtualMediaIscsiDeploy * Don't write python bytecode while testing * Support for setting boot mode in pxe_ilo driver * Remove bypassing of H302 for gettextutils markers * Revert "Search line with awk itself and avoid grep" * Search line with awk itself and avoid grep * Add list_by_node_id() to Port object * Remove unused modules from openstack-common.conf * Sync the document with the current implementation * Unify the sensor data format * Updated from global requirements * Deprecate Ironic compute driver and sched manager * Log ERROR power state in node_power_action() * Fix compute_driver and scheduler_host_manager in install-guide * Use oslo.utils instead of ironic.openstack.common * Use expected, actual order for PXE template test * Fix agent PXE template * Translator functions cleanup part 3 * Translator functions cleanup part 2 * Imported Translations from Transifex * Updated from global requirements * Remove XML from api doc samples * Update ironic.conf.sample * Fix race conditions running pxe_utils tests in parallel * Switch to "incubating" doc theme * Minor fixes for ipminative console support * Translator functions cleanup part 4 * Translator functions cleanup part 1 * Remove unnecessary mapping from Agent drivers * mock.assert_called_once() is not valid method * Use models.TimestampMixin from oslo.db * Updated from global requirements 2014.2.b3 --------- * Driver merge review comments from 111425 * Nova review updates for _node_resource * Ignore backup files * IloVirtualMediaAgent deploy driver * IloVirtualMediaIscsi deploy driver * Unbreak debugging via testr * Interactive console support for ipminative driver * Add UEFI based deployment support in Ironic * Adds SNMP power driver * Control extra space for images conversion in image_cache * Use metadata.create_all() to initialise DB schema * Fix minor issues in the DRAC driver * Add send-data-to-ceilometer support for pxe_ipminative driver * Reduce redundancy in conductor manager docstrings * Fix typo in PXE driver docstrings * Update installation guide for syslinux 6 * Updated from global requirements * Imported Translations from Transifex * Avoid deadlock when logging network_info * Implements the DRAC ManagementInterface for get/set boot device * Rewrite images tests with mock * Add boot_device support for vbox * Remove gettextutils _ injection * Make DHCP provider pluggable * DRAC wsman_{enumerate, invoke}() to return an ElementTree object * Remove futures from requirements * Script to migrate Nova BM data to Ironic * Imported Translations from Transifex * Updated from global requirements * Fix unit tests with keystoneclient master * Add support for interacting with swift * 
properly format user guide in RST * Updated from global requirements * Fix typo in user-guide.rst * Add console interface to agent_ipmitool driver * Add support for creating vfat and iso images * Check ERROR state from driver in _do_sync_power_state * Set PYTHONHASHSEED for venv tox environment * Add iPXE Installation Guide documentation * Add management interface for agent drivers * Add driver name on driver load exception * Take iSCSI deploy out of pxe driver * Set ssh_virt_type to vmware * Update nova driver's power_off() parameters * return power state ERROR instead of an exception * handle invalid seamicro_api_version * Imported Translations from Transifex * Nova ironic driver review update requests to p4 * Allow rebuild of node in ERROR and DEPLOYFAIL state * Use cache in node_is_available() * Query full node details and cache * Add in text for text mode on trusty * Add Parallels virtualisation type * IPMI double bridging functionality * Add DracDriver and its DracPower module * use MissingParameterValue exception in iboot * Update compute driver macs_for_instance per docs * Update DevStack guide when querying the image UUID * Updated from global requirements * Fix py3k-unsafe code in test_get_properties() * Fix tear_down a node with missing info * Remove d_info param from _destroy_images * Add docs for agent driver with devstack * Removes get_port_by_vif * Update API document with BootDevice * Replace incomplete "ilo" driver with pxe_ilo and fake_ilo * Handle all exceptions from _exec_ipmitool * Remove objectify decorator from dbapi's {get, register}_conductor() * Improve exception handling in console code * Use valid exception in start_shellinabox_console * Remove objectify decorator from dbapi.update_* methods * Add list() to Chassis, Node, Port objects * Raise MissingParameterValue when validating glance info * Mechanism to cleanup all ImageCaches * Driver merge review comments from 111425-2-3 * Raise MissingParameterValue instead of Invalid * Import fixes from the Nova driver reviews * Imported Translations from Transifex * Use auth_token from keystonemiddleware * Make swift tempurl key secret * Add method for deallocating networks on reschedule * Reduce running time of test_different_sizes * Remove direct calls to dbapi's get_node_by_instance * Add create() and destroy() to Port object * Correct `op.drop_constraint` parameters * Use timeutils from one place * Add create() and destroy() to Chassis object * Add iPXE support for Ironic * Imported Translations from Transifex * Add posix_ipc to requirements * backport reviewer comments on nova.virt.ironic.patcher * Move the 'instance_info' fields to GenericDriverFields * Migration to oslo.utils library * Fix self.fields on API Port object * Fix self.fields on API Chassis object * Sync oslo.incubator modules * Updated from global requirements * Expose {set,get}_boot_device in the API * Check if boot device is persistent on ipminative * Sync oslo imageutils, strutils to Ironic * Add charset and engine settings to every table * Imported Translations from Transifex * Remove dbapi calls from agent driver * Fix not attribute '_periodic_last_run' * Implements send-data-to-ceilometer * Port iBoot PDU driver from Nova * Log exception with translation * Add ironic-python-agent deploy driver * Updated from global requirements * Imported Translations from Transifex * Clean up calls to get_port() * Clean up calls to get_chassis() * Do not rely on hash ordering in tests * Update_port should expect MACAlreadyExists * Imported Translations from 
Transifex * Adding swift temp url support * Push the image cache ttl way up * Imported Translations from Transifex * SSH virsh to use the new ManagementInterface * Split test case in ironic.tests.conductor.test_manager * Tune down node_locked_retry_{attempts,interval} config for tests * Add RPC version to test_get_driver_properties 2014.2.b2 --------- * Import fixes from the Nova driver reviews * Generalize exception handling in Nova driver * Fix nodes left in an inconsistent state if no workers * IPMINative to use the new ManagementInterface * Backporting nova host manager changes into ironic * Catch oslo.db error instead of sqlalchemy error * Add a test case for DB schema comparison * remove ironic-manage-ipmi.filters * Implement API to get driver properties * Add drivers.base.BaseDriver.get_properties() * Implement retry on NodeLocked exceptions * SeaMicro to use the new ManagementInterface * Import fixes from Nova scheduler reviews * Rename/update common/tftp.py to common/pxe_utils.py * Imported Translations from Transifex * Factor out deploy info from PXE driver * IPMITool to use the new ManagementInterface * Use mock.assert_called_once_with() * Add missing docstrings * Raise appropriate errors on duplicate Node, Port and Chassis creation * Add IloDriver and its IloPower module * Add methods to ipmitool driver * Use opportunistic approach for migration testing * Use oslo.db library * oslo.i18n migration * Import a few more fixes from the Nova driver * Set a more generous default image cache size * Fix wrong test fixture for Node.properties * Make ComputeCapabilitiesFilter work with Ironic * Add more INFO logging to ironic/common/service.py * Clean up nova virt driver test code * Fix node to chassis and port to node association * Allow Ironic URL from config file * Imported Translations from Transifex * Update webapi doc with link and console * REST API 'limit' parameter to only accept positive values * Update docstring for api...node.validate * Document 'POST /v1/.../vendor_passthru' * ManagementInterface {set, get}_boot_device() to support 'persistent' * Use my_ip for neutron URL * Updated from global requirements * Add more INFO logging to ironic/conductor * Specify rootfstype=ramfs deploy kernel parameter * Add set_spawn_error_hook to TaskManager * Imported Translations from Transifex * Updates the Ironic on Devstack dev documentation * Simplify error handling * Add gettextutils._L* to import_exceptions * Fix workaround for the "device is busy" problem * Allow noauth for Neutron * Minor cleanups to nova virt driver and tests * Update nova rebuild to account for new image * Updated from global requirements * pep8 cleanup of Nova code * PEP fixes for the Nova driver * Fix glance endpoint tests * Update Nova's available resources at termination * Fix the section name in CONTRIBUTING.rst * Add/Update docstrings in the Nova Ironic Driver * Update Nova Ironic Driver destroy() method * Nova Ironic driver get_info() to return memory stats in KBytes * Updates Ironic Guide with deployment information * Add the remaining unittests to the ClientWrapper class * Wait for Neutron port updates when using SSHPower * Fix 'fake' driver unable to finish a deploy * Update "Exercising the Services Locally" doc * Fixing hardcoded glance protocol * Remove from_chassis/from_nodes from the API doc * Prevent updating UUID of Node, Port and Chassis on DB API level * Imported Translations from Transifex * Do not delete pxe_deploy_{kernel, ramdisk} on tear down * Implement security groups and firewall filtering
methods * Add genconfig tox job for sample config file generation * Mock pyghmi lib in unit tests if not present * PXE to pass hints to ImageCache on how much space to reclaim * Add some real-world testing on DiskPartitioner * Eliminate races in Conductor _check_deploy_timeouts * Use temporary dir for image conversion * Updated from global requirements * Move PXE instance level parameters to instance_info * Clarify doc: API is admin only * Mock time.sleep for the IPMI tests * Destroy instance to clear node state on failure * Add 'context' parameter to get_console_output() * Cleanup virt driver tests and verify final spawn * Test fake console driver * Allow overriding the log level for ironicclient * Virt driver logging improvements * ipmitool driver raises DriverLoadError * VendorPassthru.validate()s call _parse_driver_info * Enforce a minimum time between all IPMI commands * Remove 'node' parameter from the validate() methods * Test for membership should be 'not in' * Replace mknod() with chmod() * Factoring out PXE and TFTP functions * Let ipmitool natively retry commands * Sync processutils from oslo code * Driver interface's validate should return nothing * Use .png instead of .gif images * Fix utils.execute() for consistency with Oslo code * remove default=None for config options 2014.2.b1 --------- * Stop ipmitool.validate from touching the BMC * Set instance default_ephemeral_device * Add unique constraint to instance_uuid * Add node id to DEBUG messages in ipmitool * Remove 'node' parameter from the Console and Rescue interfaces * TaskManager: Only support single node locking * Allow more time for API requests to be completed * Add retry logic to iscsiadm commands * Wipe any metadata from a nodes disk * Rework make_partitions logic when preserve_ephemeral is set * Fix host manager node detection logic * Add missing stats to IronicNodeState * Update IronicHostManager tests to better match how code works * Update Nova driver's list_instance_uuids() * Remove 'fake' and 'ssh' drivers from default enabled list * Work around iscsiadm delete failures * Mock seamicroclient lib in unit tests if not present * Cleanup mock patch without `with` part 2 * Add __init__.py for nova scheduler filters * Skip migrations test_walk_versions instead of pass * Improving unit tests for _do_sync_power_state * Fix AttributeError when calling create_engine() * Reuse validate_instance_and_node() Nova ironic Driver * Fix the logging message to identify node by uuid * Fix concurrent deletes in virt driver * Log exceptions from deploy and tear_down * PXE driver to validate the requested image in Glance * Return the HTTP Location for accepted requests * Return the HTTP Location for newly created resources * Fix tests with new keystoneclient * list_instances() to return a list of instances names * Pass kwargs to ClientWrapper's call() method * Remove 'node' parameter from the Power interface * Set the correct target versions for the RPC methods * Consider free disk space before downloading images into cache * Change NodeLocked status code to a client-side error * Remove "node" parameter from methods handling power state in docs * Add parallel_image_downloads option * Synced jsonutils from oslo-incubator * Fix chassis bookmark link url * Remove 'node' parameter from the Deploy interface * Imported Translations from Transifex * Remove all mostly untranslated PO files * Cleanup images after deployment * Fix wrong usage of mock methods * Using system call for downloading files * Run keepalive in a dedicated thread *
Don't translate debug level logs * Update dev quickstart guide for ephemeral testing * Speed up Nova Ironic driver tests * Renaming ironicclient exceptions in nova driver * Fix bad Mock calls to assert_called_once() * Cleanup mock patch without `with` part 1 * Corrects a typo in RESTful Web API (v1) document * Updated from global requirements * Clean up openstack-common.conf * Remove non-existent 'pxe_default_format' parameter from patcher * Remove explicit dependency on amqplib * Pin RPC client version min == max * Check requested image size * Fix 'pxe_preserve_ephemeral' parameter leakage * RPC_API_VERSION out of sync * Simplify calls to ImageCache in PXE module * Implement the reboot command on the Ironic Driver * Place root partition last so that it can always be expanded * Stop creating a swap partition when none was specified * Virt driver change to use API retry config value * Implement more robust caching for master images * Decouple state inspection and availability check * Updated from global requirements * Fix ironic node state comparison * Add create() and destroy() to Node * Fix typo in rpcapi.driver_vendor_passthru * Support serial console access * Remove 'node' parameter from the VendorPassthru interface * Updated from global requirements * Synced jsonutils from oslo-incubator * Fix chassis-node relationship * Implement instance rebuild in nova.virt.driver * Sync oslo logging * Add ManagementInterface * Clean oslo dependencies files * Return error immediately if set_console_mode is not supported * Fix bypassed reference to node state values * Updated from global requirements * Port to oslo.messaging * Drivers may expose a top-level passthru API * Overwrite instance_exists in Nova Ironic Driver * Update Ironic User Guide post landing for 41af7d6b * Spawn support for TaskManager and 2 locking fixes * Document ClusteredComputeManager * Clean up calls to get_node() * nova.virt.ironic passes ephemeral_gb to ironic * Implement list_instance_uuids() in Nova driver * Modify the get console API * Complete wrapping ironic client calls * Add worker threads limit to _check_deploy_timeouts task * Use DiskPartitioner * Better handling of missing drivers * Remove hardcoded node id value * cleanup docstring for drivers.utils.get_node_mac_addresses * Update ironic.conf.sample * Make sync_power_states yield * Refactor sync_power_states tests to not use DB * Add DiskPartitioner * Some minor clean up of various doc pages * Fix message preventing overwrite the instance_uuid * Install guide for Ironic * Refactor the driver fields mapping * Imported Translations from Transifex * Fix conductor.manager test assertion order * Overwriting node_is_available in IronicDriver * Sync oslo/common/excutils * Sync oslo/config/generator * Cherry pick oslo rpc HA fixes * Add Ironic User Guide * Remove a DB query for get_ports_by_node() * Fix missed stopping of conductor service * Encapsulate Ironic client retry logic * Do not sync power state for new invalidated nodes * Make tests use Node object instead of dict * Sync object list stuff from Nova * Fix Node object version * Cleanup running conductor services in tests * Factor hash ring management out of the conductor * Replace sfdisk with parted * Handling validation in conductor consistently * JsonPatch add operation on existing property * Updated from global requirements * Remove usage of Glance from PXE clean_up() * Fix hosts mapping for conductor's periodic tasks * Supports filtering port by address * Fix seamicro power.validate() method definition * Update 
tox.ini to also run nova tests * Updated from global requirements * Fix messages formatting for _sync_power_states * Refactor nova.virt.ironic.driver get_host_stats * Use xargs -0 instead of --null * Change admin_url help in ironic driver * Sync base object code with Nova's * Add Node.instance_info field * Fix self.fields on API Node object * Show maintenance field in GET /nodes * Move duplicated _get_node(s)_mac_addresses() * Fix grammar in error string in pxe driver * Reduce logging output from non-Ironic libraries * Open Juno development 2014.1.rc1 ---------- * Fix spelling error in conductor/manager * Improved coverage for ironic API * Manually update all translated strings * Check that all po/pot files are valid * If no swap is specified default to 1MB * Fix Nova rescheduling tear down problem * Remove obsolete po entries - they break translation jobs * Add note to ssh about impact on ci testing * Adds exact match filters to nova scheduler * Clean up IronicNodeStates.update_from_compute_node * ironic_host_manager was missing two stats * Imported Translations from Transifex * Fix seamicro validate() method definition * Remove some obsolete settings from DevStack doc * Raise unexpected exceptions during destroy() * Start using oslosphinx theme for docs * Provide a new ComputeManager for Ironic * Nova Ironic driver to set pxe_swap_mb in Ironic * Fix strings post landing for c63e1d9f6 * Run periodic_task in a with a dynamic timer * Update SeaMicro to use MixinVendorInterface * Run ipmi power status less aggressively * Avoid API root controller dependency on v1 dir * Update Neutron if mac address of the port changed * Replace fixtures with mock in test_keystone.py * Decrease running time of SeaMicro driver tests * Remove logging of exceptions from controller's methods * Imported Translations from Transifex * Fix missed exception raise in _add_driver_fields * Speed up ironic tests * Pass no arguments to _wait_for_provision_state() * Adds max retry limit to sync_power_state task * Updated from global requirements * Imported Translations from Transifex * Stop incorrectly returning rescue: supported * Correct version.py and update current version string * Documentation for deploying DevStack /w Ironic * Hide rescue interface from validate() output * Change set_console_mode() to use greenthreads * Fix help string for a glance option * Expose API for fetching a single driver * Change JsonEncodedType.impl to TEXT * Fix traceback hook for avoid duplicate traces * Fix 'spacing' parameters for periodic tasks * Permit passing SSH keys into the Ironic API * Better instance-not-found handling within IronicDriver * Make sure auth_url exists and is not versionless * Conductor de-registers on shutdown * Change deploy validation exception handling * Suppress conductor logging of expected exceptions * Remove unused method from timeutils * Add admin_auth_token option for nova driver * Remove redundant nova virt driver test * Process public API list as regular expressions * Enable pep8 tests for the Nova Ironic Driver * Fix typo tenet -> tenant * Stop logging paramiko's DEBUG and INFO messages * Set boot device to PXE when deploying * Driver utils should raise unsupported method * Delete node while waiting for deploy * Check BMC availability in ipmitool 'validate' method * SeaMicro use device parameter for set_boot_device * Make the Nova Ironic driver to wait for ACTIVE * Fix misspelled impi to ipmi * Do not use __builtin__ in python3 * Use range instead xrange to keep python 3.X compatibility * Set the 
database.connection option default value * PXE validate() to fail if no Ironic API URL * Improve Ironic Conductor threading & locks * Generic MixinVendorInterface using static mapping * Conductor logs better error if seamicroclient missing * Add TaskManager lock on change port data * Nova ironic driver to retry on HTTP 503 * Mark hash_replicas as experimental * do_node_deploy() to use greenthreads * Move v1 API tests to separate v1 directory * Pin iso8601 logging to WARN * Only fetch node once for vif actions * Fix how nova ironic driver gets flavor information * Imported Translations from Transifex * API: Add sample() method to remaining models * Import Nova "ironic" driver * Remove errors from API documentation * Add libffi-dev(el) dependency to quickstart * Updated from global requirements * Remove redundant default value None for dict.get 2014.1.b3 --------- * Refactor vendor_passthru to use conductor async workers * Fix wrong exception raised by conductor for node * Fix params order in assertEqual * Sync the log_handler from oslo * Fix SeaMicro driver post landing for ba207b4aa0 * Implements SeaMicro VendorPassThru functionality * Implement the SeaMicro Power driver * Fix provision_updated_at deserialization * Remove jsonutils from test_rpcapi * Do not delete a Node which is not powered off * Add provision_updated_at to node's resource * Prevent a node in maintenance from being deployed * Allow clients to mark a node as in maintenance * Support preserve_ephemeral * Updated from global requirements * API: Expose a way to start/stop the console * Add option to sync node power state from DB * Make the PXE driver understand ephemeral disks * Log deploy_utils.deploy() errors in the PXE driver * Removing get_node_power_state, bumping RPC version * Add timeout for waiting callback from deploy ramdisk * Prevent GET /v1/nodes returning maintenance field * Suggested improvements to _set_boot_device * Move ipminative _set_boot_device to VendorPassthru * Sync common db code from Oslo * PXE clean_up() to remove the pxe_deploy_key parameter * Add support for custom libvirt uri * Python 3: replace "im_self" by "__self__" * Fix race condition when deleting a node * Remove extraneous vim configuration comments for ironic * Do not allow POST ports and chassis internal attributes * Do not allow POST node's internal attributes * Unused 'pxe_key_data' & 'pxe_instance_name' info * Add provision_updated_at field to nodes table * Exclude nodes in DEPLOYWAIT state from _sync_power_states * Sync common config module from Oslo * Get rid object model `dict` methods part 4 * Sync Oslo rpc module to Ironic * Clarify and fix the dev-quickstart doc some more * Do not use CONF as a default parameter value * Simplify locking around acquiring Node resources * Improve help strings * Remove shebang lines from code * Use six.moves.urllib.parse instead of urlparse * Add string representation method to MultiType * Fix test migrations for alembic * Sync Oslo gettextutils module to Ironic * NodeLocked returns 503 error status * Supports OPERATOR priv level for ipmitool driver * Correct assertEqual order from patch e69e41c99fb * PXE and SSH validate() method to check for a port * Task object as parameter to validate() methods * Fix dev-quick-start.rst post landing for 9d81333fd0 * API validates driver name for both POST and PATCH * Sync Oslo service module to Ironic * Move ipmitool _set_boot_device to VendorPassthru * Use six.StringIO/BytesIO instead of StringIO.StringIO * Add JSONEncodedType with enforced type checking *
Correct PXEPrivateMethodsTestCase.setUp * Don't raise MySQL 2013 'Lost connection' errors * Use the custom wsme BooleanType on the nodes api * Add wsme custom BooleanType type * Fix task_manager acquire post landing for c4f2f26ed * Add common.service config options to sample * Removes use of timeutils.set_time_override * Replace assertEqual(None, *) with assertIsNone in tests * Replace nonexistent mock assert methods with real ones * Log IPMI power on/off timeouts * Remove None as default value for dict get() * Fix autodoc formatting in pxe.py * Fix race condition when changing node states * Use StringType from WSME * Add testing and doc sections to docs/dev-quickstart * Implement _update_neutron in PXE driver * Remove _load_one_plugin fallback * SSHPower driver support VMware ESXi * Make ironic-api not single threaded * Remove POST calls in tests for resource creation * Add topic to the change_node_maintenance_mode() RPC method * Fix API inconsistence when changing node's states * Add samples to serve API through Apache mod_wsgi * Add git dependency to quickstart docs * Add get_console() method * Remove unnecessary json dumps/loads from tests * Add parameter for filtering nodes by maintenance mode * Rename and update ironic-deploy-helper rootwrap * Remove tox locale overrides * Updated from global requirements * Move eventlet monkeypatch out of cmd/ * Fix misspellings in ironic * Ensure parameter order of assertEqual correct * Return correct HTTP response codes for create ops * Fix broken doc links on the index page * Allow to tear-down a node waiting to be deployed * Improve NodeLocked exception message * Expose 'reservation' field of a node via API * Implement a multiplexed VendorPassthru example * Fix log and test for NeutronAPI.update_port_dhcp_opts * Fix 'run_as_root' parameter check in utils * Handle multiple exceptions raised by jsonpatch * API tests to check for the return codes * Imported Translations from Transifex * Move test__get_nodes_mac_addresses * Removed duplicated function to create a swap fs * Updated from global requirements * Add futures to requirements * Fix missing keystone option in ironic.conf.sample * Adds Neutron support to Ironic * Replace CONF.set_default with self.config * Fix ssh_port type in _parse_driver_info() from ssh.py * Improve handling of invalid input in HashRing class * Sync db.sqlalchemy code from Oslo * Add lockfile>=0.8 to requirements.txt * Remove net_config_template options * Remove deploy kernel and ramdisk global config * Update docstrings in ssh.py * SSHPower driver raises IronicExceptions * mock's return value for processutils.ssh_execute * API: Add sample() method on Node * Update method doc strings in pxe.py * Minor documentation update * Removed unused exceptions * Bump version of sphinxcontrib-pecanwsme * Add missing parameter in call to _load_one_plugin * Docstrings for ipmitool * alembic with initial migration and tests * Update RPC version post-landing for 9bc5f92fb * ipmitool's _power_status raises IPMIFailure 2014.1.b2 --------- * Add [keystone_authtoken] to ironic.conf.sample * Updated from global requirements * Add comment about node.instance_uuid * Run mkfs as root * Remove the absolute paths from ironic-deploy-helper.filters * PXE instance_name is no longer mandatory * Remove unused config option - pxe_deploy_timeout * Delete the iscsi target * Imported Translations from Transifex * Fix non-unique tftp dir instance_uuid * Fix non-unique pxe driver 'instance_name' * Add missing "Filters" section to the ironic-images.filters *
Use oslo.rootwrap library instead of local copy * Replace assertTrue with explicit assertIsInstance * Disallow new provision for nodes in maintenance * Add RPC method for node maintenance mode * Fix keystone get_service_url filtering * Use same MANAGER_TOPIC variable * Implement consistent hashing of nodes to conductors * PXEAndSSH driver lacked vendor_passthru * Use correct auth context inside pxe driver * sync_power_states handles missing driver info * Enable $pybasedir value in pxe.py * Correct SSHPowerDriver validate() exceptions * API to check the requested power state * Improve the node driver interfaces validation output * Remove copyright from empty files * Make param descriptions more consistent in API * Imported Translations from Transifex * Fix wrong message of pxe validator * Remove unused dict BYTE_MULTIPLIERS * Implement API for provisioning * API to validate UUID parameters * Make chassis_uuid field of nodes optional * Add unit tests for get_nodeinfo_list * Improve error handling in PXE _continue_deploy * Make param names more consistent in API * Sync config module from oslo * Fix wrong message of MACAlreadyExists * Avoid a race when associating instance_uuid * Move and rename ValidTypes * Convert trycmd() to oslo's processutils * Improve error handling in validate_vendor_action * Passing nodes more consistently * Add 'next' link when GET maximum number of items * Check connectivity in SSH driver 'validate' method * GET /drivers to show a list of active conductors * Improve method to get list of active conductors * Refactor /node//state * Reworks Chassis validations * Reworks Node validations * Developer doc index page points to correct API docs * Fix auto-generated REST API formatting * Method to generate PXE options for Neutron ports * Strip '/' from api_url string for PXE driver * Add driver interfaces validation * Command call should log the stdout and stderr * Add prepare, clean_up, take_over methods to deploy * PEP8-ify imports in test_ipmitool * API: Add sample() method on Port and PortCollection * API: Validate and normalize address * Handle DBDuplicateEntry on Ports with same address * Imported Translations from Transifex * removed wrap_exception method from ironic/common/exception.py * Rework patch validation on Ports * Add JsonPatchType class * Change default API auth to keystone-based * Clean up duplicated change-building code in objects * Add -U to pip install command in tox.ini * Updated from global requirements * Add config option for # of conductor replicas * Port StringType class from WSME trunk * Add tools/conf/check_uptodate to tox.ini 2014.1.b1 --------- * Correct error with unicode mac address * Expose created_at/updated_at properties in the REST API * Import heartbeat_interval opt in API * Add power control to PXE driver * Implement sync_power_state periodic task * Set the provision_state to DEPLOYFAIL * Save PKI token in a file for PXE deploy ramdisk * API ports update for WSME 0.5b6 compliance * Add heartbeat_interval to new 'conductor' cfg group * Add missing hash_partition_exponent config option * If no block devices abort deployment * Add missing link for drivers resource * Apply comments to 58558/4 post-landing * Replace removed xrange in Python3 * Imported Translations from Transifex * Use addCleanup() in test_deploy_utils * Allow Pecan to use 'debuginfo' response field * Do not allow API to expose error stacktrace * Add port address unique constraint for sqlite * Implement consistent hashing common methods * Sync some db changes from Oslo * Bump 
required version of sqlalchemy-migrate * Update ironic.conf.sample * Import uuidutils unit tests from oslo * Allow FakePower to return node objects power_state * Adds doc strings to API FunctionalTest class * Use oslo's execute() and ssh_execute() methods * Remove openstack.common.uuidutils * Sync common.context changes from oslo * Remove oslo uuidutils.is_uuid_like call * Remove oslo uuidutils.generate_uuid() call * Add troubleshoot option to PXE template * Imported Translations from Transifex * Add tftp_server pattern in ironic.conf * Import HasLength object * ipmitool SHOULD accept empty username/password * Imported Translations from Transifex * Add missing ConfigNotFound exception * Imported Translations from Transifex * Add hooks to auto-generate REST API docs * Imported Translations from Transifex * Redefined default value of allowed_rpc_exception_modules * Add last_error usage to deploy and teardown methods * Support building wheels (PEP-427) * Import missing gettext _ to fix Sphinx error * sync common.service from oslo * sync common.periodic_task from oslo * sync common.notifier.* from oslo * sync common.log from oslo * sync common.local from oslo * Sync common utils from Oslo * Rename parameters * Accessing a subresource that parent does not exist * Imported Translations from Transifex * Changes power_state and adds last_error field * Update openstack/common/lockutils * sync common.context from oslo * sync common.config.generator from oslo * Remove sqlalchemy-migrate 0.7.3 patching * Fix integer division compatibility in middleware * Fix node lock in PXE driver * Imported Translations from Transifex * Register API options under the 'api' group * Supporting both Python 2 and Python 3 with six * Supports get node by instance uuid in API * Imported Translations from Transifex * Check invalid uuid for get-by-instance db api * Fix error handling in ssh driver * Replace __metaclass__ * Supporting both Python 2 and Python 3 with six * Pass Ironic API url to deploy ramdisk in PXE driver * Remove 'basestring' from objects utils * Allows unicode description for chassis * Fix a typo in the name of logger method exception * Don't use deprecated module commands * Comply with new hacking requirements * Improve the API doc spec for chassis * Improve the API doc spec for node * Updated from global requirements * Fix i18N compliance * Add wrapper for keystone service catalog * Fix test node manager * Expose /drivers on the API * Update mailmap for Joe Gordon * Add mailmap file * Implement /nodes/UUID/vendor_passthru in the API * Add context to TaskManager * Regenerate the sample config file * Conductors maintain driver list in the DB * Group and unify ipmi configurations * Fix a few missing i18n * Fix status codes in node controller * Fix exceptions handling in controllers * Updated from global requirements * Support uniform MAC address with colons * Remove redundant test stubs from conductor/manager * Remove several old TODO messages * Supports paginate query for two get nodes DB APIs * Remove _driver_factory class attribute * Fixes RootController to allow URL without version tag * Don't allow deletion of associated node * Remove duplicated db_api.get_instance() from tests * Updated from global requirements * Do not use string concatenation for localized strings * Remove the NULL state * Add DriverFactory * Adjust native ipmi default wait time * Be more patient with IPMI and BMC * Implement db get_[un]associated_nodes * Remove unused nova specific files * Removes unwanted mox and fixture files *
Removes stubs from unit tests * Remove unused class/file * Remove driver validation on node update * Consolidates TestCase and BaseTestCase * Fix policies * Improve error message for ssh * Fix datetime format in FakeCache * Fix power_state set to python object repr * Updated from global requirements * Replaces mox with mock for test_deploy_utils * Replaces mox with mock in api's unit tests * Replaces mox with mock in objects' unit tests * Replaces mox with mock for conductor unit tests * fix ssh driver exec command issues * Fix exceptions error codes * Remove obsolete redhat-eventlet.patch * Replaces mox with mock for test_utils * Replaces mox with mock for ssh driver unit tests * Remove nested 'ipmi' dict from driver_info * Replace tearDown with addCleanup in unit tests * Remove nested 'ssh' dict from driver_info * Remove nested 'pxe' dict from driver_info * Save and validate deployment key in PXE driver * Implement deploy and tear_down conductor methods * Use mock to do unit tests for pxe driver * Code clean in node controller * Use mock to do unit tests for ipminative driver * Replaces mox with mock for ipmitool driver unit tests * Fix parameter name in wsexpose * Rename start_power_state_change to change_node_power_state * Mount iSCSI target and 'dd' in PXE driver * Add tests for api/utils.py * Check for required fields on ports * Replace Cheetah with Jinja2 * Update from global requirements * Upgrade tox to 1.6 * Add API uuid <-> id mapping * Doc string and minor clean up for 41976 * Update error return code to match new Pecan release * Add vendor_passthru method to RPC API * Integer types support in api * Add native ipmi driver * API GET to return only minimal data * Fix broken links * Collection named based on resource type * Remove nova specific tests * Changes documentation hyperlinks to be relative * Replace OpenStack LLC with OpenStack Foundation * Force textmode consoles * Implemented start_power_state_change In Conductor * Updates documentation for tox use * Drop setuptools_git dependency * Fix tests return codes * Fix misused assertTrue in unit tests * Prevent updates while state change is in progress * Use localisation where user visible strings are used * Update only the changed fields * Improve parameters validate in PXE driver * Rename ipmi driver to ipmitool * Remove jsonutils from PXE driver * Expose the vendor_passthru resource * Driver's validation during node update process implemented * Public API * Remove references for the 'task_state' property * Use 'provision_state' in PXE driver * Updating resources with PATCH * Add missing unique constraint * Fix docstring typo * Removed templates directory in api config * Added upper version boundary for six * Sync models with migrations * Optimization reserve and release nodes db api methods * Add missing foreign key * Porting nova pxe driver to ironic * API Nodes states * Fix driver loading * Move glance image service client from nova and cinder into ironic * Implement the root and v1 entry points of the API * Expose subresources for Chassis and Node * Add checks locked nodes to db api * Update the dev docs with driver interface description * Add missing tests for chassis API * Delete controller to make code easy to read and understood * Disable deleting a chassis that contains nodes * Update API documentation * Add Pagination of collections across the API * Fix typo in conductor manager * Remove wsme validate decorator from API * Add missing tests for ports API * Modify is_valid_mac() for support unicode strings * Add DB
and RPC method doc strings to hook.py * Delete unused templates * Use fixture from Oslo * Move "opportunistic" db migrations tests from Nova * Build unittests for nodes api * make api test code more readable * Add links to API Objects * Delete Ironic context * Add tests for existing db migrations * Add common code from Oslo for db migrations test * Remove extra pep8/flake8/pyflakes requirements * Sync requirements with OpenStack/requirements * Fix up API tests before updating hacking checks * Add RPC methods for updating nodes * Run extract_messages * Keystone authentication * Add serializer param to RPC service * Import serialization and nesting from Nova Objects * Implement chassis api actions * update requires to prevent version cap * Change validate() to raise instead of returning T/F * Add helpers for single-node tasks * Implement port api action * Modify gitignore to ignore sqlite * Update resource manager for fixed stevedore issue * Add dbapi functions * Remove suds requirement * Sync install_venv_common from oslo * Move mysql_engine option to [database] group * Re-define 'extra' as dict_or_none * Added Python-2.6 to the classifier * Rename "manager" to "conductor" * Port from nova: Fix local variable 'root_uuid' ref * Created a package for API controllers V1 * Sync requirements with OpenStack/requirements * Remove unused APICoverage class * Sync fileutils from oslo-incubator * Sync strutils from oslo-incubator * Add license header * Update get_by_uuid function doc in chassis * Fix various Python 2.x->3.x compat issues * Improve unit tests for API * Add Chassis object * Add Chassis DB model and DB-API * Delete associated ports after deleting a node * Virtual power driver is superseded by ssh driver * Add conf file generator * Refactored query filters * Add troubleshoot to baremetal PXE template * Add err_msg param to baremetal_deploy_helper * Retry the sfdisk command up to 3 times * Updated API Spec for new Drivers * Improve IPMI's _make_password_file method * Remove spurious print statement from update_node * Port middleware error handler from ceilometer API * Add support for GET /v1/nodes to return a list * Add object support to API service * Remove the unused plugin framework * Improve tests for Node and Port DB objects * SSH driver doesn't need to query database * Create Port object * Add uuid to Port DB model * Delete Flask Dependence * Writing Error: nodess to nodes * Create the Node object * Restructuring driver API and inheritance * Remove explicit distribute depend * Bump version of PBR * Remove deleted[_at] from base object * Make object actions pass positional arguments * Fix relative links in architecture doc * Reword architecture driver description * Remove duplication from README, add link to docs * Port base object from Nova * Fix ironic-rootwrap capability * Add ssh power manager * Prevent IPMI actions from colliding * Add TaskManager tests and fix decorator * Mocked NodeManager can load and mock real drivers * Add docs for task_manager and tests/manager/utils * Fix one typo in index.rst * Add missing 'extra' field to models.nodes * More doc updates * Remove the old README * More doc updates * Minor fixes to sphinx docs * Added API v1 Specification * Add initial sphinx docs, based on README * Initial skeleton for an RPC layer * Log configuration values on API startup * Don't use pecan to configure logging * Move database.backend option import * Remove unused authentication CLI options * Rename TestCase.flags() to TestCase.config() * Copy the RHEL6 eventlet workaround
from Oslo * Sync new database config group from oslo-incubator * Minor doc change for manager and resource_manager * Add support for Sphinx Docs * Update IPMI driver to work with resource manager * Add validate_driver_info to driver classes * Implement Task and Resource managers * Update [reserve|release]_nodes to accept a tag * More updates to the README * Reimplement reserve_nodes and release_nodes * Rename the 'ifaces' table to 'ports' * Change 'nodes' to use more driver-specific JSON * Update driver names and base class * Stop creating a new db IMPL for every request * Fix double "host" option * Sync safe changes from oslo-incubator * Sync rpc changes from oslo-incubator * Sync log changes from oslo-incubator * Sync a rootwrap KillFilter fix from oslo-incubator * Sync oslo-incubator python3 changes * Add steps to README.rst * Fix fake bmc driver * move ironic docs to top level for ease of discovery * Update the README file development section * Add some API definitions to the README * Update the distribute dependency version * Add information to the project README * Fixes test_update_node by testing updated node * Fix pep8 errors and make it pass Jenkins tests * Update IPMI driver for new base class * Add new base and fake driver classes * Delete old base and fake classes * Add a few fixes for the API * Move strong nova dependencies into temporary dir * Update IPMI for new DB schema * Add unit tests for DB API * Remove tests for old DB * Add tests for ironic-dbsync * Remove ironic_manage * Implement GET /node/ifaces/ in API * Update exception.py * Update db models and API * Implement skeleton for a new DB backend * Remove the old db implementation * Implement initial skeleton of a manager service * Implement initial draft of a Pecan-based API * Fix IPMI tests * Move common things to ironic.common * Fix failing db and deploy_helper tests * un-split the db backend * Rename files and fix things * Import add'l files from Nova * update openstack-common.conf and import from oslo * Added .testr.conf * Renamed nova to ironic * Fixed hacking, pep8 and pyflakes errors * Added project infrastructure needs * Fix baremetal get_available_nodes * Improve Python 3.x compatibility * Import and convert to oslo loopingcall * baremetal: VirtualPowerDriver uses mac addresses in bm_interfaces * baremetal: Change input for sfdisk * baremetal: Change node api related to prov_mac_address * Remove "undefined name" pyflake errors * Remove unnecessary LOG initialisation * Define LOG globally in baremetal_deploy_helper * Only call getLogger after configuring logging * baremetal: Integrate provisioning and non-provisioning interfaces * Move console scripts to entrypoints * baremetal: Drop unused columns in bm_nodes * Remove print statements * Delete tests.baremetal.util.new_bm_deployment() * Adds Tilera back-end for baremetal * Change type of ssh_port option from Str to Int * Virtual Power Driver list running vms quoting error * xenapi: Fix reboot with hung volumes * Make bm model's deleted column match database * Correct substring matching of baremetal VPD node names * Read baremetal images from extra_specs namespace * Compute manager should remove dead resources * Add ssh port and key based auth to VPD * Add instance_type_get() to virt api * Don't blindly skip first migration * BM Migration 004: Actually drop column * Update OpenStack LLC to Foundation * Sync nova with oslo DB exception cleanup * Fix exception handling in baremetal API * BM Migrations 2 & 3: Fix drop_column statements * Remove function
redefinitions * Move some context checking code from sqlalchemy * Baremetal driver returns accurate list of instance * Identify baremetal nodes by UUID * Improve performance of baremetal list_instances * Better error handling in baremetal spawn & destroy * Wait for baremetal deploy inside driver.spawn * Add better status to baremetal deployments * Use oslo-config-2013.1b4 * Delete baremetal interfaces when their parent node is deleted * VirtualPowerDriver catches ProcessExecutionError * Don't modify injected_files inside PXE driver * Remove nova.db call from baremetal PXE driver * Add a virtual PowerDriver for Baremetal testing * Recache or rebuild missing images on hard_reboot * Use oslo database code * Fixes 'not in' operator usage * Make sure there are no unused import * Enable N302: Import modules only * Correct a format string in virt/baremetal/ipmi.py * Add REST api to manage bare-metal nodes * Baremetal/utils should not log certain exceptions * PXE driver should rmtree directories it created * Add support for Option Groups in LazyPluggable * Remove obsolete baremetal override of MAC addresses * PXE driver should not accept empty kernel UUID * Correcting improper use of the word 'an' * Export the MAC addresses of nodes for bare-metal * Break out a helper function for working with bare metal nodes * Keep self and context out of error notification payload * Tests for PXE bare-metal provisioning helper server * Change ComputerDriver.legacy_nwinfo to raise by default * fix new N402 errors * Remove unused baremetal PXE options * Move global service networking opts to new module * Fix N402 for nova/virt * Cope better with out of sync bm data * Fix baremetal VIFDriver * CLI for bare-metal database sync * attach/detach_volume() take instance as a parameter * Convert short doc strings to be on one line * Check admin context in bm_interface_get_all() * Provide a PXE NodeDriver for the Baremetal driver * Refactor periodic tasks * Add helper methods to nova.paths * Move global path opts in nova.paths * Removes unused imports * Improve baremetal driver error handling * baremetal power driver takes **kwargs * Implement IPMI sub-driver for baremetal compute * Fix tests/baremetal/test_driver.py * Move baremetal options to [BAREMETAL] OptGroup * Remove session.flush() and session.query() monkey patching * Remove unused imports * Removed unused imports * Parameterize database connection in test.py * Baremetal VIF and Volume sub-drivers * New Baremetal provisioning framework * Move baremetal database tests to fixtures * Add exceptions to baremetal/db/api * Add blank nova/virt/baremetal/__init__.py * Move sql options to nova.db.sqlalchemy.session * Use CONF.import_opt() for nova.config opts * Remove nova.config.CONF * remove old baremetal driver * Remove nova.flags * Fix a couple uses of FLAGS * Added separate bare-metal MySQL DB * Switch from FLAGS to CONF in tests * Updated scheduler and compute for multiple capabilities * Switch from FLAGS to CONF in nova.virt * Make ComputeDrivers send hypervisor_hostname * Introduce VirtAPI to nova/virt * Migrate to fileutils and lockutils * Remove ComputeDriver.update_host_status() * Rename imagebackend arguments * Move ensure_tree to utils * Keep the ComputeNode model updated with usage * Don't stuff non-db data into instance dict * Making security group refresh more specific * Use dict style access for image_ref * Remove unused InstanceInfo class * Remove list_instances_detail from compute drivers * maint: remove an unused import in libvirt.driver * Fixes 
bare-metal spawn error * Refactoring required for blueprint xenapi-live-migration * refactor baremetal/proxy => baremetal/driver * Switch to common logging * Make libvirt LoopingCalls actually wait() * Imports cleanup * Unused imports cleanup (folsom-2) * convert virt drivers to fully dynamic loading * cleanup power state (partially implements bp task-management) * clean-up of the bare-metal framework * Added a instance state update notification * Update pep8 dependency to v1.1 * Alphabetize imports in nova/tests/ * Make use of openstack.common.jsonutils * Alphabetize imports in nova/virt/ * Replaces exceptions.Error with NovaException * Log instance information for baremetal * Improved localization testing * remove unused flag: baremetal_injected_network_template baremetal_uri baremetal_allow_project_net_traffic * Add periodic_fuzzy_delay option * HACKING fixes, TODO authors * Add pybasedir and bindir options * Only raw string literals should be used with _() * Remove unnecessary setting up and down of mox and stubout * Remove unnecessary variables from tests * Move get_info to taking an instance * Exception cleanup * Backslash continuations (nova.tests) * Replace ApiError with new exceptions * Standardize logging delaration and use * remove unused and buggy function from baremetal proxy * Backslash continuations (nova.virt.baremetal) * Remove the last of the gflags shim layer * Implements blueprint heterogeneous-tilera-architecture-support * Deleting test dir from a pull from trunk * Updated to remove built docs * initial commit ironic-5.1.0/tools/0000775000567000056710000000000012674513633015313 5ustar jenkinsjenkins00000000000000ironic-5.1.0/tools/flake8wrap.sh0000775000567000056710000000073512674513466017727 0ustar jenkinsjenkins00000000000000#!/bin/bash # # A simple wrapper around flake8 which makes it possible # to ask it to only verify files changed in the current # git HEAD patch. # # Intended to be invoked via tox: # # tox -epep8 -- -HEAD # if test "x$1" = "x-HEAD" ; then shift files=$(git diff --name-only HEAD~1 | tr '\n' ' ') echo "Running flake8 on ${files}" diff -u --from-file /dev/null ${files} | flake8 --diff "$@" else echo "Running flake8 on all files" exec flake8 "$@" fi ironic-5.1.0/tools/states_to_dot.py0000775000567000056710000001103212674513466020544 0ustar jenkinsjenkins00000000000000#!/usr/bin/env python # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import optparse import os import sys from automaton.converters import pydot from ironic.common import states top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir)) sys.path.insert(0, top_dir) def print_header(text): print("*" * len(text)) print(text) print("*" * len(text)) def map_color(text, key='fontcolor'): """Map the text to a color. The text is mapped to a color. :param text: string of text to be mapped to a color. 'error' and 'fail' in the text will map to 'red'. 
:param key: in returned dictionary, the key to use that corresponds to the color :returns: A dictionary with one entry, key = color. If no color is associated with the text, an empty dictionary. """ # If the text contains 'error'/'fail' then we'll return red... if 'error' in text or 'fail' in text: return {key: 'red'} else: return {} def main(): parser = optparse.OptionParser() parser.add_option("-f", "--file", dest="filename", help="write output to FILE", metavar="FILE") parser.add_option("-T", "--format", dest="format", help="output in given format (default: png)", default='png') parser.add_option("--no-labels", dest="labels", help="do not include labels", action='store_false', default=True) (options, args) = parser.parse_args() if options.filename is None: options.filename = 'states.%s' % options.format def node_attrs(state): """Attributes used for drawing the nodes (states). The user can perform actions on stable states (and in a few other cases), so we distinguish the stable states from the other states by highlighting the node. Non-stable states are labelled with gray. This is a callback method used by pydot.convert(). :param state: name of state :returns: A dictionary with graphic attributes used for displaying the state. """ attrs = map_color(state) if source.is_stable(state): attrs['penwidth'] = 1.7 else: if 'fontcolor' not in attrs: attrs['fontcolor'] = 'gray' return attrs def edge_attrs(start_state, event, end_state): """Attributes used for drawing the edges (transitions). There are two types of transitions; the ones that the user can initiate and the ones that are done internally by the conductor. The user-initiated ones are shown with '(via API'); the others are in gray. This is a callback method used by pydot.convert(). :param start_state: name of the start state :param event: the event, a string :param end_state: name of the end state (unused) :returns: A dictionary with graphic attributes used for displaying the transition. """ if not options.labels: return {} translations = {'delete': 'deleted', 'deploy': 'active'} attrs = {} attrs['fontsize'] = 12 attrs['label'] = translations.get(event, event) if (source.is_stable(start_state) or 'fail' in start_state or event in ('abort', 'delete')): attrs['label'] += " (via API)" else: attrs['fontcolor'] = 'gray' return attrs source = states.machine graph_name = '"Ironic states"' graph_attrs = {'size': 0} g = pydot.convert(source, graph_name, graph_attrs=graph_attrs, node_attrs_cb=node_attrs, edge_attrs_cb=edge_attrs) print_header(graph_name) print(g.to_string().strip()) g.write(options.filename, format=options.format) print_header("Created %s at '%s'" % (options.format, options.filename)) if __name__ == '__main__': main() ironic-5.1.0/tools/config/0000775000567000056710000000000012674513633016560 5ustar jenkinsjenkins00000000000000ironic-5.1.0/tools/config/generate_sample.sh0000775000567000056710000001007312674513466022257 0ustar jenkinsjenkins00000000000000#!/usr/bin/env bash # Generate sample configuration for your project. # # Aside from the command line flags, it also respects a config file which # should be named oslo.config.generator.rc and be placed in the same directory. # # You can then export the following variables: # IRONIC_CONFIG_GENERATOR_EXTRA_MODULES: list of modules to interrogate for options. # IRONIC_CONFIG_GENERATOR_EXTRA_LIBRARIES: list of libraries to discover. # IRONIC_CONFIG_GENERATOR_EXCLUDED_FILES: list of files to remove from automatic listing. 
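# Example invocation (the paths below are illustrative, not requirements):
#
#   tools/config/generate_sample.sh -b . -p ironic -o etc/ironic
#
# With these options the script scans the 'ironic' package under the
# current directory and writes etc/ironic/ironic.conf.sample.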
print_hint() { echo "Try \`${0##*/} --help' for more information." >&2 } PARSED_OPTIONS=$(getopt -n "${0##*/}" -o hb:p:m:l:o: \ --long help,base-dir:,package-name:,output-dir:,module:,library: -- "$@") if [ $? != 0 ] ; then print_hint ; exit 1 ; fi eval set -- "$PARSED_OPTIONS" while true; do case "$1" in -h|--help) echo "${0##*/} [options]" echo "" echo "options:" echo "-h, --help show brief help" echo "-b, --base-dir=DIR project base directory" echo "-p, --package-name=NAME project package name" echo "-o, --output-dir=DIR file output directory" echo "-m, --module=MOD extra python module to interrogate for options" echo "-l, --library=LIB extra library that registers options for discovery" exit 0 ;; -b|--base-dir) shift BASEDIR=`echo $1 | sed -e 's/\/*$//g'` shift ;; -p|--package-name) shift PACKAGENAME=`echo $1` shift ;; -o|--output-dir) shift OUTPUTDIR=`echo $1 | sed -e 's/\/*$//g'` shift ;; -m|--module) shift MODULES="$MODULES -m $1" shift ;; -l|--library) shift LIBRARIES="$LIBRARIES -l $1" shift ;; --) break ;; esac done BASEDIR=${BASEDIR:-`pwd`} if ! [ -d $BASEDIR ] then echo "${0##*/}: missing project base directory" >&2 ; print_hint ; exit 1 elif [[ $BASEDIR != /* ]] then BASEDIR=$(cd "$BASEDIR" && pwd) fi PACKAGENAME=${PACKAGENAME:-$(python setup.py --name)} TARGETDIR=$BASEDIR/$PACKAGENAME if ! [ -d $TARGETDIR ] then echo "${0##*/}: invalid project package name" >&2 ; print_hint ; exit 1 fi OUTPUTDIR=${OUTPUTDIR:-$BASEDIR/etc} # NOTE(bnemec): Some projects put their sample config in etc/, # some in etc/$PACKAGENAME/ if [ -d $OUTPUTDIR/$PACKAGENAME ] then OUTPUTDIR=$OUTPUTDIR/$PACKAGENAME elif ! [ -d $OUTPUTDIR ] then echo "${0##*/}: cannot access \`$OUTPUTDIR': No such file or directory" >&2 exit 1 fi BASEDIRESC=`echo $BASEDIR | sed -e 's/\//\\\\\//g'` find $TARGETDIR -type f -name "*.pyc" -delete FILES=$(find $TARGETDIR -type f -name "*.py" ! -path "*/tests/*" ! -path "*/nova/*" \ -exec grep -l "Opt(" {} + | sed -e "s/^$BASEDIRESC\///g" | sort -u) RC_FILE="`dirname $0`/oslo.config.generator.rc" if test -r "$RC_FILE" then source "$RC_FILE" fi for filename in ${IRONIC_CONFIG_GENERATOR_EXCLUDED_FILES}; do FILES="${FILES[@]/$filename/}" done for mod in ${IRONIC_CONFIG_GENERATOR_EXTRA_MODULES}; do MODULES="$MODULES -m $mod" done for lib in ${IRONIC_CONFIG_GENERATOR_EXTRA_LIBRARIES}; do LIBRARIES="$LIBRARIES -l $lib" done export EVENTLET_NO_GREENDNS=yes OS_VARS=$(set | sed -n '/^OS_/s/=[^=]*$//gp' | xargs) [ "$OS_VARS" ] && eval "unset \$OS_VARS" DEFAULT_CONFIG_GENERATOR=ironic.common.config_generator.generator CONFIG_GENERATOR=${CONFIG_GENERATOR:-$DEFAULT_CONFIG_GENERATOR} OUTPUTFILE=$OUTPUTDIR/$PACKAGENAME.conf.sample python -m $CONFIG_GENERATOR $MODULES $LIBRARIES $FILES > $OUTPUTFILE if [ $? 
!= 0 ] then echo "Can not generate $OUTPUTFILE" exit 1 fi # Hook to allow projects to append custom config file snippets CONCAT_FILES=$(ls $BASEDIR/tools/config/*.conf.sample 2>/dev/null) for CONCAT_FILE in $CONCAT_FILES; do cat $CONCAT_FILE >> $OUTPUTFILE done ironic-5.1.0/tools/config/oslo.config.generator.rc0000664000567000056710000000053112674513466023316 0ustar jenkinsjenkins00000000000000export IRONIC_CONFIG_GENERATOR_EXTRA_LIBRARIES='oslo.db oslo.messaging oslo.middleware.cors keystonemiddleware.auth_token oslo.concurrency oslo.policy oslo.log oslo.service.service oslo.service.periodic_task oslo.service.sslutils' export IRONIC_CONFIG_GENERATOR_EXTRA_MODULES='ironic_lib.disk_utils ironic_lib.disk_partitioner ironic_lib.utils' ironic-5.1.0/tools/config/check_uptodate.sh0000775000567000056710000000132012674513466022101 0ustar jenkinsjenkins00000000000000#!/usr/bin/env bash PROJECT_NAME=${PROJECT_NAME:-ironic} CFGFILE_NAME=${PROJECT_NAME}.conf.sample if [ -e etc/${PROJECT_NAME}/${CFGFILE_NAME} ]; then CFGFILE=etc/${PROJECT_NAME}/${CFGFILE_NAME} elif [ -e etc/${CFGFILE_NAME} ]; then CFGFILE=etc/${CFGFILE_NAME} else echo "${0##*/}: can not find config file" exit 1 fi TEMPDIR=`mktemp -d /tmp/${PROJECT_NAME}.XXXXXX` trap "rm -rf $TEMPDIR" EXIT tools/config/generate_sample.sh -b ./ -p ${PROJECT_NAME} -o ${TEMPDIR} if [ $? != 0 ] then exit 1 fi if ! diff -u ${TEMPDIR}/${CFGFILE_NAME} ${CFGFILE} then echo "${0##*/}: ${PROJECT_NAME}.conf.sample is not up to date." echo "${0##*/}: Please run ${0%%${0##*/}}generate_sample.sh." exit 1 fi ironic-5.1.0/tools/__init__.py0000664000567000056710000000000012674513466017416 0ustar jenkinsjenkins00000000000000ironic-5.1.0/tools/run_bashate.sh0000775000567000056710000000242012674513466020147 0ustar jenkinsjenkins00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. find "$@" -not \( -type d -name .?\* -prune \) \ -type f \ -not -name \*.swp \ -not -name \*~ \ -not -name \*.xml \ -not -name \*.template \ -not -name \*.py \ \( \ -name \*.sh -or \ -wholename \*/lib/\* -or \ -wholename \*/tools/\* \ \) \ -print0 | xargs -0 bashate -v -iE006 -eE005,E042 ironic-5.1.0/tools/with_venv.sh0000775000567000056710000000033212674513466017665 0ustar jenkinsjenkins00000000000000#!/bin/bash tools_path=${tools_path:-$(dirname $0)} venv_path=${venv_path:-${tools_path}} venv_dir=${venv_name:-/../.venv} TOOLS=${tools_path} VENV=${venv:-${venv_path}/${venv_dir}} source ${VENV}/bin/activate && "$@" ironic-5.1.0/doc/0000775000567000056710000000000012674513633014720 5ustar jenkinsjenkins00000000000000ironic-5.1.0/doc/source/0000775000567000056710000000000012674513633016220 5ustar jenkinsjenkins00000000000000ironic-5.1.0/doc/source/drivers/0000775000567000056710000000000012674513633017676 5ustar jenkinsjenkins00000000000000ironic-5.1.0/doc/source/drivers/wol.rst0000664000567000056710000000713312674513466021241 0ustar jenkinsjenkins00000000000000.. 
_WOL: ================== Wake-On-Lan driver ================== Overview ======== Wake-On-Lan is a standard that allows a computer to be powered on by a network message. This is widely available and doesn't require any fancy hardware to work with [1]_. The Wake-On-Lan driver is a **testing** driver not meant for production. It is useful for users who want to try Ironic with real bare metal instead of virtual machines. It's important to note that Wake-On-Lan is only capable of powering on the machine. When power off is called, the driver won't take any action and will just log a message; powering off requires manual intervention. Also, since Wake-On-Lan does not offer any means to determine the current power state of the machine, the driver relies on the power state set in the Ironic database. Any calls to the API to get the power state of the node will return the value from Ironic's database. Drivers ======= pxe_wol ^^^^^^^ Overview ~~~~~~~~ The ``pxe_wol`` driver uses the Wake-On-Lan technology to control the power state, PXE/iPXE technology for booting and the iSCSI methodology for deploying the node. Requirements ~~~~~~~~~~~~ * Wake-On-Lan should be enabled in the BIOS Configuring and Enabling the driver ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1. Add ``pxe_wol`` to the list of ``enabled_drivers`` in */etc/ironic/ironic.conf*. For example:: [DEFAULT] ... enabled_drivers = pxe_ipmitool,pxe_wol 2. Restart the Ironic conductor service:: service ironic-conductor restart Registering a node with the Wake-On-Lan driver ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Nodes configured for the Wake-On-Lan driver should have the ``driver`` property set to ``pxe_wol``. The node should have at least one port registered with it because the Wake-On-Lan driver will use the MAC address of the ports to create the magic packet [2]_. The following configuration values are optional and can be added to the node's ``driver_info`` as needed to match the network configuration: - ``wol_host``: The broadcast IP address; defaults to **255.255.255.255**. - ``wol_port``: The destination port; defaults to **9**. .. note:: Say the ``ironic-conductor`` is connected to more than one network and the node you are trying to wake up is in the ``192.0.2.0/24`` range. The ``wol_host`` configuration should be set to **192.0.2.255** (the broadcast IP) so the packets will get routed correctly. The following sequence of commands can be used to enroll a node with the Wake-On-Lan driver. 1. Create node:: ironic node-create -d pxe_wol [-i wol_host=<broadcast ip> [-i wol_port=<destination port>]] The above command ``ironic node-create`` will return the UUID of the node, which is the value of *$NODE* in the following command. 2. Associate port with the node created:: ironic port-create -n $NODE -a <MAC address> agent_wol ^^^^^^^^^ Overview ~~~~~~~~ The ``agent_wol`` driver uses the Wake-On-Lan technology to control the power state, PXE/iPXE technology for booting and the Ironic Python Agent for deploying the node. Additional requirements ~~~~~~~~~~~~~~~~~~~~~~~ * Boot device order should be set to "PXE, DISK" in the BIOS setup * BIOS must try the next boot device if PXE boot failed * Automated cleaning should be disabled, see :ref:`automated_cleaning` * Node should be powered off before start of deploy Configuration steps are the same as for the ``pxe_wol`` driver; just replace "pxe_wol" with "agent_wol". References ========== .. [1] Wake-On-Lan - https://en.wikipedia.org/wiki/Wake-on-LAN .. 
[2] Magic packet - https://en.wikipedia.org/wiki/Wake-on-LAN#Sending_the_magic_packet ironic-5.1.0/doc/source/drivers/oneview.rst0000664000567000056710000001777512674513466022121 0ustar jenkinsjenkins00000000000000.. _oneview: =============== OneView drivers =============== Overview ======== HP OneView [1]_ is a single integrated platform, packaged as an appliance that implements a software-defined approach to managing physical infrastructure. The appliance supports scenarios such as deploying bare metal servers. In this context, the ``HP OneView driver`` for Ironic enables the users of OneView to use Ironic as a bare metal provider to their managed physical hardware. Currently there are two OneView drivers: * ``iscsi_pxe_oneview`` * ``agent_pxe_oneview`` The ``iscsi_pxe_oneview`` and ``agent_pxe_oneview`` drivers implement the core interfaces of an Ironic Driver [2]_, and use the ``python-oneviewclient`` [3]_ to provide communication between Ironic and OneView through OneView's Rest API. To provide a bare metal instance there are four components involved in the process: * Ironic service * python-oneviewclient * OneView appliance * iscsi_pxe_oneview/agent_pxe_oneview driver The role of Ironic is to serve as a bare metal provider to OneView's managed physical hardware and to provide communication with other necessary OpenStack services such as Nova and Glance. When Ironic receives a boot request, it works together with the Ironic OneView driver to access a machine in OneView, the ``python-oneviewclient`` being responsible for the communication with the OneView appliance. Prerequisites ============= The following requirements apply for both ``iscsi_pxe_oneview`` and ``agent_pxe_oneview`` drivers: * ``OneView appliance`` is the HP physical infrastructure manager to be integrated with the OneView drivers. Minimum version supported is 2.0. * ``python-oneviewclient`` is a python package containing a client to manage the communication between Ironic and OneView. Install the ``python-oneviewclient`` module to enable the communication. Minimum version required is 2.0.2 but it is recommended to install the most up-to-date version.:: $ pip install "python-oneviewclient<3.0.0,>=2.0.2" Tested platforms ================ * The OneView appliance used for testing was the OneView 2.0. * The Enclosure used for testing was the ``BladeSystem c7000 Enclosure G2``. * The drivers should work on HP ProLiant Gen8 and Gen9 servers supported by OneView 2.0 and above, or any hardware whose network can be managed by OneView's ServerProfile. It has been tested with the following servers: - ProLiant BL460c Gen8 - ProLiant BL465c Gen8 - ProLiant DL360 Gen9 (starting with python-oneviewclient 2.1.0) Note that for the driver to work correctly with Gen8 and Gen9 DL servers in general, the hardware also needs to run version 4.2.3 of iLO, with Redfish. Drivers ======= iscsi_pxe_oneview driver ^^^^^^^^^^^^^^^^^^^^^^^^ Overview ~~~~~~~~ ``iscsi_pxe_oneview`` driver uses PXEBoot for boot and ISCSIDeploy for deploy. Configuring and enabling the driver ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1. Add ``iscsi_pxe_oneview`` to the list of ``enabled_drivers`` in ``/etc/ironic/ironic.conf``. For example:: enabled_drivers = iscsi_pxe_oneview 2. Update the [oneview] section of your ``ironic.conf`` file with your OneView credentials and CA certificate files information. 3. Restart the Ironic conductor service. For Ubuntu users, do:: $ sudo service ironic-conductor restart See [5]_ for more information.
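As an illustration of step 2 above, a filled-in ``[oneview]`` section of ``ironic.conf`` might look like the following sketch. The values are examples only, and the option names (``manager_url``, ``username``, ``password``, ``tls_cacert_file``, ``allow_insecure_connections``) are assumptions to be checked against the sample configuration file shipped with your ironic release::

   [oneview]
   manager_url = https://oneview.example.com
   username = oneview-user
   password = secret
   tls_cacert_file = /etc/ironic/oneview_cacert.pem
   allow_insecure_connections = false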
Deploy process ~~~~~~~~~~~~~~ Here is an overview of the deploy process for this driver: 1. Admin configures the ProLiant baremetal node to use ``iscsi_pxe_oneview`` driver. 2. Ironic gets a request to deploy a Glance image on the baremetal node. 3. Driver sets the boot device to PXE. 4. Driver powers on the baremetal node. 5. Ironic downloads the deploy and user images from a TFTP server. 6. Driver reboots the baremetal node. 7. User image is now deployed. 8. Driver powers off the machine. 9. Driver sets boot device to Disk. 10. Driver powers on the machine. 11. Baremetal node is active and ready to be used. agent_pxe_oneview driver ^^^^^^^^^^^^^^^^^^^^^^^^ Overview ~~~~~~~~ ``agent_pxe_oneview`` driver uses PXEBoot for boot and AgentDeploy for deploy. Configuring and enabling the driver ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1. Add ``agent_pxe_oneview`` to the list of ``enabled_drivers`` in ``/etc/ironic/ironic.conf``. For example:: enabled_drivers = fake,pxe_ssh,pxe_ipmitool,agent_pxe_oneview 2. Update the [oneview] section of your ``ironic.conf`` file with your OneView credentials and CA certificate files information. 3. Restart the Ironic conductor service. For Ubuntu users, do:: $ service ironic-conductor restart See [5]_ for more information. Deploy process ~~~~~~~~~~~~~~ Here is an overview of the deploy process for this driver: 1. Admin configures the ProLiant baremetal node to use ``agent_pxe_oneview`` driver. 2. Ironic gets a request to deploy a Glance image on the baremetal node. 3. Driver sets the boot device to PXE. 4. Driver powers on the baremetal node. 5. Node downloads the agent deploy images. 6. Agent downloads the user images and writes them to disk. 7. Driver reboots the baremetal node. 8. User image is now deployed. 9. Driver powers off the machine. 10. Driver sets boot device to Disk. 11. Driver powers on the machine. 12. Baremetal node is active and ready to be used. Registering a OneView node in Ironic ==================================== Nodes configured to use any of the OneView drivers should have the ``driver`` property set to ``iscsi_pxe_oneview`` or ``agent_pxe_oneview``. Considering our context, a node is the representation of a ``Server Hardware`` in OneView, and should be consistent with all its properties and related components, such as ``Server Hardware Type``, ``Server Profile Template``, ``Enclosure Group``, etc. In this case, to be enrolled, the node must have the following parameters: * In ``driver_info`` - ``server_hardware_uri``: URI of the Server Hardware on OneView. * In ``properties/capabilities`` - ``server_hardware_type_uri``: URI of the Server Hardware Type of the Server Hardware. - ``server_profile_template_uri``: URI of the Server Profile Template used to create the Server Profile of the Server Hardware. - ``enclosure_group_uri`` (optional): URI of the Enclosure Group of the Server Hardware. To enroll a node with any of the OneView drivers, do:: $ ironic node-create -d $DRIVER_NAME To update the ``driver_info`` field of a newly enrolled OneView node, do:: $ ironic node-update $NODE_UUID add \ driver_info/server_hardware_uri=$SH_URI To update the ``properties/capabilities`` namespace of a newly enrolled OneView node, do:: $ ironic node-update $NODE_UUID add \ properties/capabilities=server_hardware_type_uri:$SHT_URI,enclosure_group_uri:$EG_URI,server_profile_template_uri:$SPT_URI In order to deploy, a Server Profile consistent with the Server Profile Template of the node MUST be applied to the Server Hardware it represents.
Server Profile Templates and Server Profiles to be utilized for deployments MUST have configuration such that their **first Network Interface** ``boot`` property is set to "Primary" and connected to Ironic's provisioning network. To tell Ironic which NIC should be connected to the provisioning network, do:: $ ironic port-create -n $NODE_UUID -a $MAC_ADDRESS For more information on the enrollment process of an Ironic node, see [4]_. For more information on the definitions of ``Server Hardware``, ``Server Profile``, ``Server Profile Template`` and many other OneView entities, see [1]_ or browse Help in your OneView appliance menu. References ========== .. [1] HP OneView - http://www8.hp.com/us/en/business-solutions/converged-systems/oneview.html .. [2] Driver interfaces - http://docs.openstack.org/developer/ironic/dev/architecture.html#drivers .. [3] python-oneviewclient - https://pypi.python.org/pypi/python-oneviewclient .. [4] Enrollment process of a node - http://docs.openstack.org/developer/ironic/deploy/install-guide.html#enrollment-process .. [5] Ironic install guide - http://docs.openstack.org/developer/ironic/deploy/install-guide.html#installation-guide ironic-5.1.0/doc/source/drivers/ipmitool.rst0000664000567000056710000001350012674513466022267 0ustar jenkinsjenkins00000000000000.. _IPMITOOL: =============== IPMITool driver =============== Overview ======== The IPMITool driver enables managing nodes by using the Intelligent Platform Management Interface (IPMI) versions 2.0 or 1.5. The name of the driver comes from the utility ``ipmitool``, which is an open-source command-line interface (CLI) for controlling IPMI-enabled devices. Currently there are 2 IPMITool drivers: * ``agent_ipmitool`` * ``pxe_ipmitool`` Glossary ======== * IPMI - Intelligent Platform Management Interface. * IPMB - Intelligent Platform Management Bus/Bridge. * BMC - Baseboard Management Controller. * RMCP - Remote Management Control Protocol. Prerequisites ============= * The `ipmitool utility <http://sourceforge.net/projects/ipmitool/>`_ should be installed on the ironic conductor node. On most distros, this is provided as part of the ``ipmitool`` package. Enabling the IPMITool driver(s) =============================== .. note:: The ``pxe_ipmitool`` driver is the default driver in Ironic, so if no extra configuration is provided the driver will be enabled. #. Add ``pxe_ipmitool`` and/or ``agent_ipmitool`` to the list of ``enabled_drivers`` in */etc/ironic/ironic.conf*. For example:: [DEFAULT] ... enabled_drivers = pxe_ipmitool,agent_ipmitool #. Restart the Ironic conductor service:: service ironic-conductor restart Registering a node with the IPMItool driver =========================================== Nodes configured to use the IPMItool drivers should have the ``driver`` property set to ``pxe_ipmitool`` or ``agent_ipmitool``. The following configuration value is required and has to be added to the node's ``driver_info`` field: - ``ipmi_address``: The IP address or hostname of the BMC. Other options may be needed to match the configuration of the BMC. The following options are optional, but in most cases it's considered good practice to have them set: - ``ipmi_username``: The username to access the BMC; defaults to the *NULL* user. - ``ipmi_password``: The password to access the BMC; defaults to *NULL*. - ``ipmi_port``: The remote IPMI RMCP port. By default, ipmitool will use port *623*. .. note:: It is highly recommended that you set up a username and password for your BMC.
The ``ironic node-create`` command can be used to enroll a node with the IPMITool driver. For example:: ironic node-create -d pxe_ipmitool -i ipmi_address=<address>
-i ipmi_username=<username> -i ipmi_password=<password> Advanced configuration ====================== When a simple configuration such as providing the ``address``, ``username`` and ``password`` is not enough, the IPMItool driver contains many other options that can be used to address special usages. Single/Double bridging functionality ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. note:: A version of ``ipmitool`` equal to or higher than 1.8.12 is required to use the bridging functionality. There are two different bridging functionalities supported by the IPMITool driver: *single* bridge and *dual* bridge. The following configuration values need to be added to the node's ``driver_info`` field so bridging can be used: - ``ipmi_bridging``: The bridging type; default is *no*; other supported values are *single* for single bridge or *dual* for double bridge. - ``ipmi_local_address``: The local IPMB address for bridged requests. Used only if ``ipmi_bridging`` is set to *single* or *dual*. This configuration is optional; if not specified, it will be auto-discovered by ipmitool. - ``ipmi_target_address``: The destination address for bridged requests. Required only if ``ipmi_bridging`` is set to *single* or *dual*. - ``ipmi_target_channel``: The destination channel for bridged requests. Required only if ``ipmi_bridging`` is set to *single* or *dual*. Double bridge specific options: - ``ipmi_transit_address``: The transit address for bridged requests. Required only if ``ipmi_bridging`` is set to *dual*. - ``ipmi_transit_channel``: The transit channel for bridged requests. Required only if ``ipmi_bridging`` is set to *dual*. The parameter ``ipmi_bridging`` should specify the type of bridging required: *single* or *dual* to access the bare metal node. If the parameter is not specified, the default value will be set to *no*. The ``ironic node-update`` command can be used to set the required bridging information on the Ironic node enrolled with the IPMItool driver. For example: * Single Bridging:: ironic node-update <node> add driver_info/ipmi_local_address=<local_address>
driver_info/ipmi_bridging=single driver_info/ipmi_target_channel=<target_channel> driver_info/ipmi_target_address=<target_address> * Double Bridging:: ironic node-update <node> add driver_info/ipmi_local_address=<local_address>
driver_info/ipmi_bridging=dual driver_info/ipmi_transit_channel=<transit_channel> driver_info/ipmi_transit_address=<transit_address> driver_info/ipmi_target_channel=<target_channel> driver_info/ipmi_target_address=<target_address> Changing the version of the IPMI protocol ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The IPMItool driver works with the versions *2.0* and *1.5* of the IPMI protocol. By default, the version *2.0* is used. In order to change the IPMI protocol version in the bare metal node, the following option needs to be set in the node's ``driver_info`` field: - ``ipmi_protocol_version``: The version of the IPMI protocol; default is *2.0*. Supported values are *1.5* or *2.0*. The ``ironic node-update`` command can be used to set the desired protocol version:: ironic node-update <node> add driver_info/ipmi_protocol_version=<version> .. warning:: The version *1.5* of the IPMI protocol does not support encryption, so it's strongly recommended that version *2.0* be used. .. TODO(lucasagomes): Write about privilege level .. TODO(lucasagomes): Write about force boot device ironic-5.1.0/doc/source/drivers/ucs.rst0000664000567000056710000000636212674513466021231 0ustar jenkinsjenkins00000000000000.. _UCS: =========== UCS drivers =========== Overview ======== The UCS driver is targeted for UCS Manager managed Cisco UCS B/C series servers. The ``pxe_ucs`` and ``agent_ucs`` drivers enable you to take advantage of UCS Manager by using the python SDK. ``pxe_ucs`` driver uses PXE/iSCSI (just like ``pxe_ipmitool`` driver) to deploy the image and uses UCS to do all management operations on the baremetal node (instead of using IPMI). ``agent_ucs`` driver uses the IPA ramdisk (just like ``agent_ipmitool`` and ``agent_ipminative`` drivers) to deploy the image and uses UCS to do all management operations on the baremetal node (instead of using IPMI). The UCS drivers can use the Ironic Inspector service for in-band inspection of equipment. For more information see the `Ironic Inspector documentation <https://github.com/openstack/ironic-inspector>`_. Prerequisites ============= * ``UcsSdk`` is a python package version of the XML API SDK available to manage Cisco UCS Managed B/C-series servers. Install the ``UcsSdk`` [1]_ module on the Ironic conductor node. Required version is 0.8.2.2:: $ pip install "UcsSdk==0.8.2.2" Tested Platforms ~~~~~~~~~~~~~~~~ This driver works on Cisco UCS Manager Managed B/C-series servers. It has been tested with the following servers: UCS Manager version: 2.2(1b), 2.2(3d). * UCS B22M, B200M3 * UCS C220M3. All the Cisco UCS B/C-series servers managed by UCSM 2.1 or later are supported by this driver. Configuring and Enabling the driver ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1. Add ``pxe_ucs`` and/or ``agent_ucs`` to the list of ``enabled_drivers`` in ``/etc/ironic/ironic.conf``. For example:: enabled_drivers = pxe_ipmitool,pxe_ucs,agent_ucs 2. Restart the Ironic conductor service:: service ironic-conductor restart Registering UCS node in Ironic ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Nodes configured for the UCS driver should have the ``driver`` property set to ``pxe_ucs`` or ``agent_ucs``. The following configuration values are also required in ``driver_info``: - ``ucs_address``: IP address or hostname of the UCS Manager - ``ucs_username``: UCS Manager login user name with administrator or server_profile privileges. - ``ucs_password``: UCS Manager login password for the above UCS Manager user. - ``deploy_kernel``: The Glance UUID of the deployment kernel. - ``deploy_ramdisk``: The Glance UUID of the deployment ramdisk. - ``ucs_service_profile``: Distinguished name (DN) of the service_profile being enrolled.
The following sequence of commands can be used to enroll a UCS node. Create Node:: ironic node-create -d <pxe_ucs/agent_ucs> -i ucs_address=<UCS Manager address> -i ucs_username=<username> -i ucs_password=<password> -i ucs_service_profile=<service profile DN> -i deploy_kernel=<glance UUID of the deploy kernel> -i deploy_ramdisk=<glance UUID of the deploy ramdisk> -p cpus=<cpus> -p memory_mb=<memory_mb> -p local_gb=<local_gb> -p cpu_arch=<cpu_arch> The above command ``ironic node-create`` will return the UUID of the node, which is the value of $NODE in the following command. Associate port with the node created:: ironic port-create -n $NODE -a <MAC address> References ========== .. [1] UcsSdk - https://pypi.python.org/pypi/UcsSdk ironic-5.1.0/doc/source/drivers/xenserver.rst0000664000567000056710000000224412674513466022457 0ustar jenkinsjenkins00000000000000.. _xenserver: .. _bug 1498576: https://bugs.launchpad.net/diskimage-builder/+bug/1498576 ================= XenServer drivers ================= Overview ======== XenServer drivers can be used to deploy hosts with Ironic by using XenServer VMs to simulate bare metal nodes. Ironic provides support via the ``pxe_ssh`` and ``agent_ssh`` drivers for using a XenServer VM as a bare metal target and doing provisioning on it. It works by connecting via SSH into the XenServer host and running commands using the 'xe' utility. This is particularly useful for deploying overclouds that use XenServer for VM hosting, as the Compute node must be run as a virtual machine on the XenServer host it will be controlling. In this case, one VM per hypervisor needs to be installed. This support has been tested with XenServer 6.5. Usage ===== * Install the VMs using the "Other Install Media" template, which will ensure that they are HVM guests * Set the HVM guests to boot from network first * If your generated initramfs does not have the fix for `bug 1498576`_, disable the Xen PV drivers as a workaround:: xe vm-param-set uuid=<vm uuid> xenstore-data:vm-data="vm_data/disable_pf: 1" ironic-5.1.0/doc/source/drivers/seamicro.rst0000664000567000056710000001004712674513466022240 0ustar jenkinsjenkins00000000000000.. _SeaMicro: =============== SeaMicro driver =============== Overview ======== The SeaMicro power driver enables you to take advantage of power cycle management of servers (nodes) within the SeaMicro chassis. The SeaMicro driver is targeted for SeaMicro Fabric Compute systems. Prerequisites ============= * ``python-seamicroclient`` is a python package which contains a set of modules for managing SeaMicro Fabric Compute systems. Install the ``python-seamicroclient`` [1]_ module on the Ironic conductor node. Minimum version required is 0.4.0.:: $ pip install "python-seamicroclient>=0.4.0" Drivers ======= pxe_seamicro driver ^^^^^^^^^^^^^^^^^^^ Overview ~~~~~~~~ ``pxe_seamicro`` driver uses PXE/iSCSI (just like ``pxe_ipmitool`` driver) to deploy the image and uses SeaMicro to do all management operations on the baremetal node (instead of using IPMI). Target Users ~~~~~~~~~~~~ * Users who want to use PXE/iSCSI for deployment in their environment. * Users who want to use SeaMicro Fabric Compute systems. Tested Platforms ~~~~~~~~~~~~~~~~ This driver works on SeaMicro Fabric Compute systems. It has been tested with the following servers: * SeaMicro SM15000-XN * SeaMicro SM15000-OP Requirements ~~~~~~~~~~~~ None. Configuring and Enabling the driver ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1. Build or download a deploy image, see :ref:`BuildingDeployRamdisk` 2.
Upload these images to Glance:: glance image-create --name deploy-ramdisk.kernel --disk-format aki --container-format aki < deploy-ramdisk.kernel glance image-create --name deploy-ramdisk.initramfs --disk-format ari --container-format ari < deploy-ramdisk.initramfs 3. Add ``pxe_seamicro`` to the list of ``enabled_drivers`` in ``/etc/ironic/ironic.conf``. For example:: enabled_drivers = pxe_ipmitool,pxe_seamicro 4. Restart the Ironic conductor service:: service ironic-conductor restart Registering SeaMicro node in Ironic ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Nodes configured for the SeaMicro driver should have the ``driver`` property set to ``pxe_seamicro``. The following configuration values are also required in ``driver_info``: - ``seamicro_api_endpoint``: IP address or hostname of the SeaMicro with a valid URL as http://<hostname>/v2.0 - ``seamicro_server_id``: SeaMicro Server ID. Expected format is <serial number>/<server id> - ``seamicro_username``: SeaMicro Username with administrator privileges. - ``seamicro_password``: Password for the above SeaMicro user. - ``deploy_kernel``: The Glance UUID of the deployment kernel. - ``deploy_ramdisk``: The Glance UUID of the deployment ramdisk. - ``seamicro_api_version``: (optional) SeaMicro API version; defaults to "2". - ``seamicro_terminal_port``: (optional) Node's UDP port for console access. Any unused port on the Ironic conductor node may be used. The following sequence of commands can be used to enroll a SeaMicro node and boot an instance on it: Create a nova baremetal flavor corresponding to the SeaMicro server's config:: nova flavor-create baremetal auto <memory_size_in_MB> <disk_size_in_GB> <number_of_cpus> Create Node:: ironic node-create -d pxe_seamicro -i seamicro_api_endpoint=https://<hostname>/ -i seamicro_server_id=<serial number>/<server id> -i seamicro_username=<username> -i seamicro_password=<password> -i seamicro_api_version=<api_version> -i seamicro_terminal_port=<terminal_port> -i deploy_kernel=<glance UUID of the deploy kernel> -i deploy_ramdisk=<glance UUID of the deploy ramdisk> -p cpus=<cpus> -p memory_mb=<memory_mb> -p local_gb=<local_gb> -p cpu_arch=<cpu_arch> Associate port with the node created:: ironic port-create -n $NODE -a <MAC address> Associate properties with the flavor:: nova flavor-key baremetal set "cpu_arch"=<cpu_arch> Boot the Instance:: nova boot --flavor baremetal --image test-image instance-1 References ========== .. [1] Python-seamicroclient - https://pypi.python.org/pypi/python-seamicroclient .. [2] DiskImage-Builder - https://github.com/openstack/diskimage-builder ironic-5.1.0/doc/source/drivers/snmp.rst0000664000567000056710000000740112674513466021413 0ustar jenkinsjenkins00000000000000=========== SNMP driver =========== The SNMP power driver enables control of power distribution units of the type frequently found in data centre racks. PDUs frequently have a management ethernet interface and SNMP support enabling control of the power outlets. The SNMP power driver works with the PXE driver for network deployment and network-configured boot. List of supported devices ========================= This is a non-exhaustive list of supported devices. Any device not listed in this table could possibly work using a similar driver. Please report any device status. ============== ========== ========== ===================== Manufacturer Model Supported?
Driver name ============== ========== ========== ===================== APC AP7920 Yes apc_masterswitch APC AP9606 Yes apc_masterswitch APC AP9225 Yes apc_masterswitchplus APC AP7155 Yes apc_rackpdu APC AP7900 Yes apc_rackpdu APC AP7901 Yes apc_rackpdu APC AP7902 Yes apc_rackpdu APC AP7911a Yes apc_rackpdu APC AP7930 Yes apc_rackpdu APC AP7931 Yes apc_rackpdu APC AP7932 Yes apc_rackpdu APC AP7940 Yes apc_rackpdu APC AP7941 Yes apc_rackpdu APC AP7951 Yes apc_rackpdu APC AP7960 Yes apc_rackpdu APC AP7990 Yes apc_rackpdu APC AP7998 Yes apc_rackpdu APC AP8941 Yes apc_rackpdu APC AP8953 Yes apc_rackpdu APC AP8959 Yes apc_rackpdu APC AP8961 Yes apc_rackpdu APC AP8965 Yes apc_rackpdu Aten all? Yes aten CyberPower all? Untested cyberpower EatonPower all? Untested eatonpower Teltronix all? Yes teltronix ============== ========== ========== ===================== Software Requirements ===================== - The PySNMP package must be installed, variously referred to as ``pysnmp`` or ``python-pysnmp`` Enabling the SNMP Power Driver ============================== - Add ``pxe_snmp`` to the list of ``enabled_drivers`` in ``/etc/ironic/ironic.conf`` - Ironic Conductor must be restarted for the new driver to be loaded. Ironic Node Configuration ========================= Nodes are configured for SNMP control by setting the Ironic node object's ``driver`` property to be ``pxe_snmp``. Further configuration values are added to ``driver_info``: - ``snmp_driver``: PDU manufacturer driver - ``snmp_address``: the IPv4 address of the PDU controlling this node. - ``snmp_port``: (optional) A non-standard UDP port to use for SNMP operations. If not specified, the default port (161) is used. - ``snmp_outlet``: The power outlet on the PDU (1-based indexing). - ``snmp_protocol``: (optional) SNMP protocol version (permitted values ``1``, ``2c`` or ``3``). If not specified, SNMPv1 is chosen. - ``snmp_community``: (Required for SNMPv1 and SNMPv2c) SNMP community parameter for reads and writes to the PDU. - ``snmp_security``: (Required for SNMPv3) SNMP security string. PDU Configuration ================= This version of the SNMP power driver does not support handling PDU authentication credentials. When using SNMPv3, the PDU must be configured for ``NoAuthentication`` and ``NoEncryption``. The security name is used analogously to the SNMP community in early SNMP versions. ironic-5.1.0/doc/source/drivers/cimc.rst0000664000567000056710000000643312674513466021355 0ustar jenkinsjenkins00000000000000.. _CIMC: ============ CIMC drivers ============ Overview ======== The CIMC drivers are targeted for standalone Cisco UCS C series servers. These drivers enable you to take advantage of CIMC by using the python SDK. ``pxe_iscsi_cimc`` driver uses PXE boot + iSCSI deploy (just like ``pxe_ipmitool`` driver) to deploy the image and uses CIMC to do all management operations on the baremetal node (instead of using IPMI). ``pxe_agent_cimc`` driver uses PXE boot + Agent deploy (just like ``agent_ipmitool`` and ``agent_ipminative`` drivers.) to deploy the image and uses CIMC to do all management operations on the baremetal node (instead of using IPMI). Unlike with iSCSI deploy in Agent deploy, the ramdisk is responsible for writing the image to the disk, instead of the conductor. The CIMC drivers can use the Ironic Inspector service for in-band inspection of equipment. For more information see the `Ironic Inspector documentation `_. Prerequisites ============= * ``ImcSdk`` is a python SDK for the CIMC HTTP/HTTPS XML API used to control CIMC. 
Install the ``ImcSdk`` module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. note:: Install the ``ImcSdk`` module on the Ironic conductor node. Required version is 0.7.2. #. Install it:: $ pip install "ImcSdk>=0.7.2" Tested Platforms ~~~~~~~~~~~~~~~~ This driver works with UCS C-Series servers and has been tested with: * UCS C240M3S Configuring and Enabling the driver ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1. Add ``pxe_iscsi_cimc`` and/or ``pxe_agent_cimc`` to the list of ``enabled_drivers`` in ``/etc/ironic/ironic.conf``. For example:: enabled_drivers = pxe_ipmitool,pxe_iscsi_cimc,pxe_agent_cimc 2. Restart the Ironic conductor service: For Ubuntu/Debian systems:: $ sudo service ironic-conductor restart or for RHEL/CentOS/Fedora:: $ sudo systemctl restart openstack-ironic-conductor Registering CIMC Managed UCS node in Ironic ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Nodes configured for the CIMC driver should have the ``driver`` property set to ``pxe_iscsi_cimc`` or ``pxe_agent_cimc``. The following configuration values are also required in ``driver_info``: - ``cimc_address``: IP address or hostname for CIMC - ``cimc_username``: CIMC login user name - ``cimc_password``: CIMC login password for the above CIMC user. - ``deploy_kernel``: Identifier for the deployment kernel e.g. a Glance UUID - ``deploy_ramdisk``: Identifier for the deployment ramdisk e.g. a Glance UUID The following sequence of commands can be used to enroll a UCS Standalone node. Create Node:: ironic node-create -d <pxe_iscsi_cimc/pxe_agent_cimc> -i cimc_address=<CIMC address> -i cimc_username=<username> -i cimc_password=<password> -i deploy_kernel=<glance UUID of the deploy kernel> -i deploy_ramdisk=<glance UUID of the deploy ramdisk> -p cpus=<cpus> -p memory_mb=<memory_mb> -p local_gb=<local_gb> -p cpu_arch=<cpu_arch> The above command ``ironic node-create`` will return the UUID of the node, which is the value of $NODE in the following command. Associate port with the node created:: ironic port-create -n $NODE -a <MAC address> For more information about enrolling nodes see "Enrolling a node" in the :ref:`install-guide` ironic-5.1.0/doc/source/drivers/ilo.rst0000664000567000056710000016670012674513470021222 0ustar jenkinsjenkins00000000000000.. _ilo: =========== iLO drivers =========== Overview ======== iLO drivers enable you to take advantage of features of the iLO management engine in HPE ProLiant servers. iLO drivers are targeted for HPE ProLiant Gen 8 systems and above which have the iLO 4 management engine. For more details, please refer to the iLO driver documentation of the Juno, Kilo and Liberty releases, and for up-to-date information (like tested platforms, known issues, etc), please check the `iLO driver wiki page <https://wiki.openstack.org/wiki/Ironic/Drivers/iLODrivers>`_. Currently there are 3 iLO drivers: * ``iscsi_ilo`` * ``agent_ilo`` * ``pxe_ilo``. The ``iscsi_ilo`` and ``agent_ilo`` drivers provide security enhanced PXE-less deployment by using iLO virtual media to boot up the bare metal node. These drivers send management info through the management channel and separate it from the data channel which is used for deployment. ``iscsi_ilo`` and ``agent_ilo`` drivers use a deployment ramdisk built from ``diskimage-builder``. The ``iscsi_ilo`` driver deploys from the ironic conductor and supports both net-boot and local-boot of the instance. ``agent_ilo`` deploys from the bare metal node and supports both net-boot and local-boot of the instance. ``pxe_ilo`` driver uses PXE/iSCSI for deployment (just like the normal PXE driver) and deploys from the ironic conductor. Additionally it supports automatic setting of the requested boot mode from nova. This driver doesn't require an iLO Advanced license. Prerequisites ============= * `proliantutils <https://pypi.python.org/pypi/proliantutils>`_ is a python package which contains a set of modules for managing HPE ProLiant hardware.
Install ``proliantutils`` module on the ironic conductor node. Minimum version required is 2.1.7.:: $ pip install "proliantutils>=2.1.7" * ``ipmitool`` command must be present on the service node(s) where ``ironic-conductor`` is running. On most distros, this is provided as part of the ``ipmitool`` package. Refer to `Hardware Inspection Support`_ for more information on recommended version. Different Configuration for ilo drivers ======================================= Glance Configuration ^^^^^^^^^^^^^^^^^^^^ 1. `Configure Glance image service with its storage backend as Swift `_. 2. Set a temp-url key for Glance user in Swift. For example, if you have configured Glance with user ``glance-swift`` and tenant as ``service``, then run the below command:: swift --os-username=service:glance-swift post -m temp-url-key:mysecretkeyforglance 3. Fill the required parameters in the ``[glance]`` section in ``/etc/ironic/ironic.conf``. Normally you would be required to fill in the following details.:: [glance] swift_temp_url_key=mysecretkeyforglance swift_endpoint_url=https://10.10.1.10:8080 swift_api_version=v1 swift_account=AUTH_51ea2fb400c34c9eb005ca945c0dc9e1 swift_container=glance The details can be retrieved by running the below command: .. code-block:: bash $ swift --os-username=service:glance-swift stat -v | grep -i url StorageURL: http://10.10.1.10:8080/v1/AUTH_51ea2fb400c34c9eb005ca945c0dc9e1 Meta Temp-Url-Key: mysecretkeyforglance 4. Swift must be accessible with the same admin credentials configured in Ironic. For example, if Ironic is configured with the below credentials in ``/etc/ironic/ironic.conf``.:: [keystone_authtoken] admin_password = password admin_user = ironic admin_tenant_name = service Ensure ``auth_version`` in ``keystone_authtoken`` to 2. Then, the below command should work.: .. code-block:: bash $ swift --os-username ironic --os-password password --os-tenant-name service --auth-version 2 stat Account: AUTH_22af34365a104e4689c46400297f00cb Containers: 2 Objects: 18 Bytes: 1728346241 Objects in policy "policy-0": 18 Bytes in policy "policy-0": 1728346241 Meta Temp-Url-Key: mysecretkeyforglance X-Timestamp: 1409763763.84427 X-Trans-Id: tx51de96a28f27401eb2833-005433924b Content-Type: text/plain; charset=utf-8 Accept-Ranges: bytes 5. Restart the Ironic conductor service.:: $ service ironic-conductor restart Web server configuration on conductor ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ * The HTTP(S) web server can be configured in many ways. For apache web server on Ubuntu, refer `here `_ * Following config variables need to be set in ``/etc/ironic/ironic.conf``: * ``use_web_server_for_images`` in ``[ilo]`` section:: [ilo] use_web_server_for_images = True * ``http_url`` and ``http_root`` in ``[deploy]`` section:: [deploy] # Ironic compute node's http root path. (string value) http_root=/httpboot # Ironic compute node's HTTP server URL. Example: # http://192.1.2.3:8080 (string value) http_url=http://192.168.0.2:8080 ``use_web_server_for_images``: If the variable is set to ``false``, ``iscsi_ilo`` and ``agent_ilo`` uses swift containers to host the intermediate floppy image and the boot ISO. If the variable is set to ``true``, these drivers uses the local web server for hosting the intermediate files. The default value for ``use_web_server_for_images`` is False. ``http_url``: The value for this variable is prefixed with the generated intermediate files to generate a URL which is attached in the virtual media. 
``http_root``: It is the directory location to which ironic conductor copies the intermediate floppy image and the boot ISO. .. note:: HTTPS is strongly recommended over HTTP web server configuration for security enhancement. The ``iscsi_ilo`` and ``agent_ilo`` will send the instance's configdrive over an encrypted channel if web server is HTTPS enabled. Enable driver ============= 1. Build a deploy ISO (and kernel and ramdisk) image, see :ref:`BuildingDibBasedDeployRamdisk` 2. See `Glance Configuration`_ for configuring glance image service with its storage backend as ``swift``. 3. Upload this image to Glance.:: glance image-create --name deploy-ramdisk.iso --disk-format iso --container-format bare < deploy-ramdisk.iso 4. Add the driver name to the list of ``enabled_drivers`` in ``/etc/ironic/ironic.conf``. For example, for `iscsi_ilo` driver:: enabled_drivers = fake,pxe_ssh,pxe_ipmitool,iscsi_ilo Similarly it can be added for ``agent_ilo`` and ``pxe_ilo`` drivers. 5. Restart the ironic conductor service.:: $ service ironic-conductor restart Drivers ======= iscsi_ilo driver ^^^^^^^^^^^^^^^^ Overview ~~~~~~~~ ``iscsi_ilo`` driver was introduced as an alternative to ``pxe_ipmitool`` and ``pxe_ipminative`` drivers for HPE ProLiant servers. ``iscsi_ilo`` uses virtual media feature in iLO to boot up the bare metal node instead of using PXE or iPXE. Target Users ~~~~~~~~~~~~ * Users who do not want to use PXE/TFTP protocol on their data centres. * Users who have concerns with PXE protocol's security issues and want to have a security enhanced PXE-less deployment mechanism. The PXE driver passes management information in clear-text to the bare metal node. However, if swift proxy server has an HTTPS endpoint (See :ref:`EnableHTTPSinSwift` for more information), the ``iscsi_ilo`` driver provides enhanced security by passing management information to and from swift endpoint over HTTPS. The management information, deploy ramdisk and boot images for the instance will be retrieved over encrypted management network via iLO virtual media. Tested Platforms ~~~~~~~~~~~~~~~~ This driver should work on HPE ProLiant Gen8 Servers and above with iLO 4. It has been tested with the following servers: * ProLiant DL380e Gen8 * ProLiant DL580 Gen8 UEFI * ProLiant DL180 Gen9 UEFI * ProLiant DL360 Gen9 UEFI * ProLiant DL380 Gen9 UEFI For more up-to-date information on server platform support info, refer `iLO driver wiki page `_. Features ~~~~~~~~ * PXE-less deploy with virtual media. * Automatic detection of current boot mode. * Automatic setting of the required boot mode, if UEFI boot mode is requested by the nova flavor's extra spec. * Supports booting the instance from virtual media (netboot) as well as booting locally from disk. By default, the instance will always boot from virtual media for partition images. * UEFI Boot Support * UEFI Secure Boot Support * Passing management information via secure, encrypted management network (virtual media) if swift proxy server has an HTTPS endpoint. See :ref:`EnableHTTPSinSwift` for more info. User image provisioning is done using iSCSI over data network, so this driver has the benefit of security enhancement with the same performance. It segregates management info from data channel. * Support for out-of-band cleaning operations. * Remote Console * HW Sensors * Works well for machines with resource constraints (lesser amount of memory). * Support for out-of-band hardware inspection. * Swiftless deploy for intermediate images * HTTP(S) Based Deploy. 
* iLO drivers with standalone ironic. Requirements ~~~~~~~~~~~~ * **iLO 4 Advanced License** needs to be installed on iLO to enable the Virtual Media feature. * **Swift Object Storage Service** - iLO driver uses swift to store temporary FAT images as well as boot ISO images. * **Glance Image Service with swift configured as its backend** - When using ``iscsi_ilo`` driver, the image containing the deploy ramdisk is retrieved from swift directly by the iLO. Deploy Process ~~~~~~~~~~~~~~ Refer to `Netboot with glance and swift`_ and `Localboot with glance and swift for partition images`_ for the deploy process of partition image and `Localboot with glance and swift`_ for the deploy process of whole disk image. Configuring and Enabling the driver ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Refer to `Glance Configuration`_ and `Enable driver`_. Registering ProLiant node in ironic ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Nodes configured for the iLO driver should have the ``driver`` property set to ``iscsi_ilo``. The following configuration values are also required in ``driver_info``: - ``ilo_address``: IP address or hostname of the iLO. - ``ilo_username``: Username for the iLO with administrator privileges. - ``ilo_password``: Password for the above iLO user. - ``ilo_deploy_iso``: The glance UUID of the deploy ramdisk ISO image. - ``client_port``: (optional) Port to be used for iLO operations if you are using a custom port on the iLO. Default port used is 443. - ``client_timeout``: (optional) Timeout for iLO operations. Default timeout is 60 seconds. - ``console_port``: (optional) Node's UDP port for console access. Any unused port on the ironic conductor node may be used. For example, you could run a similar command like below to enroll the ProLiant node:: ironic node-create -d iscsi_ilo -i ilo_address=<address> -i ilo_username=<username> -i ilo_password=<password> -i ilo_deploy_iso=<glance UUID of the deploy ISO> Boot modes ~~~~~~~~~~ Refer to `Boot mode support`_ section for more information. UEFI Secure Boot ~~~~~~~~~~~~~~~~ Refer to `UEFI Secure Boot Support`_ section for more information. Node cleaning ~~~~~~~~~~~~~ Refer to `Node Cleaning Support`_ for more information. Hardware Inspection ~~~~~~~~~~~~~~~~~~~ Refer to `Hardware Inspection Support`_ for more information. Swiftless deploy for intermediate deploy and boot images ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Refer to `Swiftless deploy for intermediate images`_ for more information. HTTP(S) Based Deploy ~~~~~~~~~~~~~~~~~~~~ Refer to `HTTP(S) Based Deploy Support`_ for more information. iLO drivers with standalone ironic ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Refer to `Support for iLO drivers with Standalone Ironic`_ for more information. RAID Configuration ~~~~~~~~~~~~~~~~~~ Refer to `RAID Support`_ for more information. agent_ilo driver ^^^^^^^^^^^^^^^^ Overview ~~~~~~~~ ``agent_ilo`` driver was introduced as an alternative to ``agent_ipmitool`` and ``agent_ipminative`` drivers for HPE ProLiant servers. ``agent_ilo`` driver uses the virtual media feature in HPE ProLiant bare metal servers to boot up the Ironic Python Agent (IPA) on the bare metal node instead of using PXE. For more information on IPA, refer to https://wiki.openstack.org/wiki/Ironic-python-agent. Target Users ~~~~~~~~~~~~ * Users who do not want to use PXE/TFTP protocol in their data centres. * Users who have concerns about the PXE based agent driver's security and want to have a security enhanced PXE-less deployment mechanism. The PXE based agent drivers pass management information in clear-text to the bare metal node.
However, if the swift proxy server has an HTTPS endpoint (see :ref:`EnableHTTPSinSwift` for more information), the ``agent_ilo`` driver provides enhanced security by passing the authtoken and management information to and from the swift endpoint over HTTPS. The management information and deploy ramdisk will be retrieved over an encrypted management network via iLO. Tested Platforms ~~~~~~~~~~~~~~~~ This driver should work on HPE ProLiant Gen8 Servers and above with iLO 4. It has been tested with the following servers: * ProLiant DL380e Gen8 * ProLiant DL580e Gen8 * ProLiant DL360 Gen9 UEFI * ProLiant DL380 Gen9 UEFI * ProLiant DL180 Gen9 UEFI For more up-to-date information, check the `iLO driver wiki page `_. Features ~~~~~~~~ * PXE-less deploy with virtual media using the Ironic Python Agent (IPA). * Support for out-of-band cleaning operations. * Remote Console * HW Sensors * IPA runs on the bare metal node and pulls the image directly from swift. * Supports booting the instance from virtual media (netboot) as well as booting locally from disk. By default, the instance will always boot from virtual media for partition images. * Segregates management info from the data channel. * UEFI Boot Support * UEFI Secure Boot Support * Support to use the default in-band cleaning operations supported by the Ironic Python Agent. For more details, see :ref:`InbandvsOutOfBandCleaning`. * Support for out-of-band hardware inspection. * Swiftless deploy for intermediate images. * HTTP(S) Based Deploy. * iLO drivers with standalone ironic. Requirements ~~~~~~~~~~~~ * **iLO 4 Advanced License** needs to be installed on iLO to enable the virtual media feature. * **Swift Object Storage Service** - iLO driver uses swift to store temporary FAT images as well as boot ISO images. * **Glance Image Service with swift configured as its backend** - When using the ``agent_ilo`` driver, the image containing the agent is retrieved from swift directly by the iLO. Deploy Process ~~~~~~~~~~~~~~ Refer to `Netboot with glance and swift`_ and `Localboot with glance and swift for partition images`_ for the deploy process of partition images, and to `Localboot with glance and swift`_ for the deploy process of whole disk images. Configuring and Enabling the driver ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Refer to `Glance Configuration`_ and `Enable driver`_. Registering ProLiant node in ironic ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Nodes configured for the iLO driver should have the ``driver`` property set to ``agent_ilo``. The following configuration values are also required in ``driver_info``: - ``ilo_address``: IP address or hostname of the iLO. - ``ilo_username``: Username for the iLO with administrator privileges. - ``ilo_password``: Password for the above iLO user. - ``ilo_deploy_iso``: The glance UUID of the deploy ramdisk ISO image. - ``client_port``: (optional) Port to be used for iLO operations if you are using a custom port on the iLO. Default port used is 443. - ``client_timeout``: (optional) Timeout for iLO operations. Default timeout is 60 seconds. - ``console_port``: (optional) Node's UDP port for console access. Any unused port on the ironic conductor node may be used. For example, you could run a command similar to the one below to enroll the ProLiant node:: ironic node-create -d agent_ilo -i ilo_address=<ilo-address> -i ilo_username=<ilo-username> -i ilo_password=<ilo-password> -i ilo_deploy_iso=<glance-uuid-of-deploy-iso> Boot modes ~~~~~~~~~~ Refer to `Boot mode support`_ section for more information. UEFI Secure Boot ~~~~~~~~~~~~~~~~ Refer to `UEFI Secure Boot Support`_ section for more information. 
Node Cleaning ~~~~~~~~~~~~~ Refer to `Node Cleaning Support`_ for more information. Hardware Inspection ~~~~~~~~~~~~~~~~~~~ Refer to `Hardware Inspection Support`_ for more information. Swiftless deploy for intermediate deploy and boot images ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Refer to `Swiftless deploy for intermediate images`_ for more information. HTTP(S) Based Deploy ~~~~~~~~~~~~~~~~~~~~ Refer to `HTTP(S) Based Deploy Support`_ for more information. iLO drivers with standalone ironic ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Refer to `Support for iLO drivers with Standalone Ironic`_ for more information. RAID Configuration ~~~~~~~~~~~~~~~~~~ Refer to `RAID Support`_ for more information. pxe_ilo driver ^^^^^^^^^^^^^^ Overview ~~~~~~~~ ``pxe_ilo`` driver uses PXE/iSCSI (just like the ``pxe_ipmitool`` driver) to deploy the image and uses iLO to do power and management operations on the bare metal node (instead of using IPMI). Target Users ~~~~~~~~~~~~ * Users who want to use PXE/iSCSI for deployment in their environment or who don't have an Advanced License in their iLO. * Users who don't want to configure boot mode manually on the bare metal node. Tested Platforms ~~~~~~~~~~~~~~~~ This driver should work on HPE ProLiant Gen8 Servers and above with iLO 4. It has been tested with the following servers: * ProLiant DL380e Gen8 * ProLiant DL580 Gen8 (BIOS/UEFI) * ProLiant DL360 Gen9 UEFI * ProLiant DL380 Gen9 UEFI For more up-to-date information, check the `iLO driver wiki page `_. Features ~~~~~~~~ * Automatic detection of the current boot mode. * Automatic setting of the required boot mode, if UEFI boot mode is requested by the nova flavor's extra spec. * Support for out-of-band cleaning operations. * Support for out-of-band hardware inspection. * Supports UEFI Boot mode * Supports UEFI Secure Boot * HTTP(S) Based Deploy. Requirements ~~~~~~~~~~~~ None. Configuring and Enabling the driver ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1. Build a deploy image, see :ref:`BuildingDibBasedDeployRamdisk` 2. Upload this image to glance.:: glance image-create --name deploy-ramdisk.kernel --disk-format aki --container-format aki < deploy-ramdisk.kernel glance image-create --name deploy-ramdisk.initramfs --disk-format ari --container-format ari < deploy-ramdisk.initramfs 3. Add ``pxe_ilo`` to the list of ``enabled_drivers`` in ``/etc/ironic/ironic.conf``. For example:: enabled_drivers = fake,pxe_ssh,pxe_ipmitool,pxe_ilo 4. Restart the ironic conductor service.:: service ironic-conductor restart Registering ProLiant node in ironic ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Nodes configured for the iLO driver should have the ``driver`` property set to ``pxe_ilo``. The following configuration values are also required in ``driver_info``: - ``ilo_address``: IP address or hostname of the iLO. - ``ilo_username``: Username for the iLO with administrator privileges. - ``ilo_password``: Password for the above iLO user. - ``deploy_kernel``: The glance UUID of the deployment kernel. - ``deploy_ramdisk``: The glance UUID of the deployment ramdisk. - ``client_port``: (optional) Port to be used for iLO operations if you are using a custom port on the iLO. Default port used is 443. - ``client_timeout``: (optional) Timeout for iLO operations. Default timeout is 60 seconds. - ``console_port``: (optional) Node's UDP port for console access. Any unused port on the ironic conductor node may be used. 
For example, you could run a command similar to the one below to enroll the ProLiant node:: ironic node-create -d pxe_ilo -i ilo_address=<ilo-address> -i ilo_username=<ilo-username> -i ilo_password=<ilo-password> -i deploy_kernel=<glance-uuid-of-pxe-deploy-kernel> -i deploy_ramdisk=<glance-uuid-of-deploy-ramdisk> Boot modes ~~~~~~~~~~ Refer to `Boot mode support`_ section for more information. UEFI Secure Boot ~~~~~~~~~~~~~~~~ Refer to `UEFI Secure Boot Support`_ section for more information. Node Cleaning ~~~~~~~~~~~~~ Refer to `Node Cleaning Support`_ for more information. Hardware Inspection ~~~~~~~~~~~~~~~~~~~ Refer to `Hardware Inspection Support`_ for more information. HTTP(S) Based Deploy ~~~~~~~~~~~~~~~~~~~~ Refer to `HTTP(S) Based Deploy Support`_ for more information. iLO drivers with standalone ironic ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Refer to `Support for iLO drivers with Standalone Ironic`_ for more information. RAID Configuration ~~~~~~~~~~~~~~~~~~ Refer to `RAID Support`_ for more information. Functionalities across drivers ============================== Boot mode support ^^^^^^^^^^^^^^^^^ The following drivers support automatic detection and setting of boot mode (Legacy BIOS or UEFI): * ``pxe_ilo`` * ``iscsi_ilo`` * ``agent_ilo`` * When the boot mode capability is not configured: - If the pending boot mode is set on the node, then iLO drivers use that boot mode for provisioning the baremetal ProLiant servers. - If the pending boot mode is not set on the node, then iLO drivers use ``uefi`` boot mode for UEFI capable servers and ``bios`` when UEFI is not supported. * When the boot mode capability is configured, the driver sets the pending boot mode to the configured value. * Only one boot mode (either ``uefi`` or ``bios``) can be configured for the node. * If the operator wants a node to always boot in ``uefi`` mode or ``bios`` mode, then they may use the ``capabilities`` parameter within the ``properties`` field of an ironic node. To configure a node in ``uefi`` mode, set ``capabilities`` as below:: ironic node-update <node-uuid> add properties/capabilities='boot_mode:uefi' Nodes having ``boot_mode`` set to ``uefi`` may be requested by adding an ``extra_spec`` to the nova flavor:: nova flavor-key ironic-test-3 set capabilities:boot_mode="uefi" nova boot --flavor ironic-test-3 --image test-image instance-1 If ``capabilities`` is used in ``extra_spec`` as above, the nova scheduler (``ComputeCapabilitiesFilter``) will match only ironic nodes which have the ``boot_mode`` set appropriately in ``properties/capabilities``. It will filter out the rest of the nodes. The above facility for matching in nova can be used in heterogeneous environments where there is a mix of ``uefi`` and ``bios`` machines, and the operator wants to provide a choice to the user regarding boot modes. If the flavor doesn't contain ``boot_mode``, then the nova scheduler will not consider boot mode as a placement criterion, hence the user may get either a BIOS or a UEFI machine matching the specified flavor. The automatic boot ISO creation for UEFI boot mode has been enabled in Kilo. The manual creation of a boot ISO for UEFI boot mode is also supported. For the latter, the boot ISO for the deploy image needs to be built separately and the deploy image's ``boot_iso`` property in glance should contain the glance UUID of the boot ISO. To build the boot ISO, add the ``iso`` element to the diskimage-builder command. 
For example:: disk-image-create ubuntu baremetal iso UEFI Secure Boot Support ^^^^^^^^^^^^^^^^^^^^^^^^ The following drivers support UEFI secure boot deploy: * ``pxe_ilo`` * ``iscsi_ilo`` * ``agent_ilo`` UEFI secure boot can be configured in ironic by adding the ``secure_boot`` parameter to the ``capabilities`` parameter within the ``properties`` field of an ironic node. ``secure_boot`` is a boolean parameter and takes the value ``true`` or ``false``. To enable ``secure_boot`` on a node, add it to ``capabilities`` as below:: ironic node-update <node-uuid> add properties/capabilities='secure_boot:true' Alternatively, see `Hardware Inspection Support`_ for how to automatically populate the secure boot capability. Nodes having ``secure_boot`` set to ``true`` may be requested by adding an ``extra_spec`` to the nova flavor:: nova flavor-key ironic-test-3 set capabilities:secure_boot="true" nova boot --flavor ironic-test-3 --image test-image instance-1 If ``capabilities`` is used in ``extra_spec`` as above, the nova scheduler (``ComputeCapabilitiesFilter``) will match only ironic nodes which have the ``secure_boot`` set appropriately in ``properties/capabilities``. It will filter out the rest of the nodes. The above facility for matching in nova can be used in heterogeneous environments where there is a mix of machines supporting and not supporting UEFI secure boot, and the operator wants to provide a choice to the user regarding secure boot. If the flavor doesn't contain ``secure_boot``, then the nova scheduler will not consider secure boot mode as a placement criterion, hence the user may get a secure-boot-capable machine matching the specified flavor, but the deployment would not use its secure boot capability. Secure boot deploy happens only when it is explicitly specified through the flavor. Use the ``ubuntu-signed`` or ``fedora`` element to build a signed deploy ISO and user images with `diskimage-builder `_. Refer to :ref:`BuildingDibBasedDeployRamdisk` for more information on building the deploy ramdisk. The below command creates files named cloud-image-boot.iso, cloud-image.initrd, cloud-image.vmlinuz and cloud-image.qcow2 in the current working directory.:: cd <path-to-diskimage-builder> ./bin/disk-image-create -o cloud-image ubuntu-signed baremetal iso .. note:: In UEFI secure boot, the digitally signed bootloader should be able to validate the digital signatures of the kernel during the boot process. This requires that the bootloader contains the digital signatures of the kernel. For the ``iscsi_ilo`` driver, it is recommended that the ``boot_iso`` property for the user image contain the glance UUID of the boot ISO. If the ``boot_iso`` property is not updated in glance for the user image, the driver would create the ``boot_iso`` using the bootloader from the deploy ISO. This ``boot_iso`` will be able to boot the user image in a UEFI secure boot environment only if the bootloader is signed and can validate the digital signatures of the user image kernel. Ensure the public key of the signed image is loaded into the bare metal machine to deploy signed images. For HPE ProLiant Gen9 servers, one can enroll the public key using the iLO System Utilities UI. Please refer to the section ``Accessing Secure Boot options`` in the `HP UEFI System Utilities User Guide `_. One can also refer to the white paper `Secure Boot for Linux on HP ProLiant servers `_ for additional details. For more up-to-date information, refer to the `iLO driver wiki page `_ .. 
_ilo_node_cleaning: Node Cleaning Support ^^^^^^^^^^^^^^^^^^^^^ The following iLO drivers support node cleaning: * ``pxe_ilo`` * ``iscsi_ilo`` * ``agent_ilo`` For more information on node cleaning, see :ref:`cleaning` Supported **Automated** Cleaning Operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * The automated cleaning operations supported are: ``reset_bios_to_default``: Resets system ROM settings to default. By default, enabled with priority 10. This clean step is supported only on Gen9 and above servers. ``reset_secure_boot_keys_to_default``: Resets secure boot keys to the manufacturer's defaults. This step is supported only on Gen9 and above servers. By default, enabled with priority 20. ``reset_ilo_credential``: Resets the iLO password, if ``ilo_change_password`` is specified as part of the node's driver_info. By default, enabled with priority 30. ``clear_secure_boot_keys``: Clears all secure boot keys. This step is supported only on Gen9 and above servers. By default, this step is disabled. ``reset_ilo``: Resets the iLO. By default, this step is disabled. * For in-band cleaning operations supported by the ``agent_ilo`` driver, see :ref:`InbandvsOutOfBandCleaning`. * All the automated cleaning steps have an explicit configuration option for priority. In order to disable or change the priority of an automated clean step, the respective configuration option for its priority should be updated in ironic.conf. * Updating a clean step's priority to 0 will disable that particular clean step; it will not run during automated cleaning. * Configuration options for the automated clean steps are listed under the ``[ilo]`` section in ironic.conf :: - clean_priority_reset_ilo=0 - clean_priority_reset_bios_to_default=10 - clean_priority_reset_secure_boot_keys_to_default=20 - clean_priority_clear_secure_boot_keys=0 - clean_priority_reset_ilo_credential=30 - clean_priority_erase_devices=10 For more information on node automated cleaning, see :ref:`automated_cleaning` Supported **Manual** Cleaning Operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * The manual cleaning operations supported are: ``activate_license``: Activates the iLO Advanced license. This is an out-of-band manual cleaning step associated with the ``management`` interface. See `Activating iLO Advanced license as manual clean step`_ for user guidance on usage. Please note that this operation cannot be performed using virtual media based drivers like ``iscsi_ilo`` and ``agent_ilo``, as they need this type of advanced license to already be active in order to use the virtual media feature to boot into the ramdisk and start the cleaning operation; virtual media is itself an advanced feature. If an advanced license is already active and the user wants to overwrite the current license key, for example in case of a multi-server activation key delivered with a flexible-quantity kit or after completing an Activation Key Agreement (AKA), then these drivers can still be used for executing this cleaning step. ``update_firmware``: Updates the firmware of the devices. Also an out-of-band step associated with the ``management`` interface. See `Initiating firmware update as manual clean step`_ for user guidance on usage. The supported devices for firmware update are: ``ilo``, ``cpld``, ``power_pic``, ``bios`` and ``chassis``. Refer to the table below for their commonly used descriptions. .. 
csv-table:: :header: "Device", "Description" :widths: 30, 80 "``ilo``", "BMC for HPE ProLiant servers" "``cpld``", "System programmable logic device" "``power_pic``", "Power management controller" "``bios``", "HPE ProLiant System ROM" "``chassis``", "System chassis device" The firmware of some devices cannot be updated via this method; examples are storage controllers, host bus adapters, disk drives, network interfaces and the Onboard Administrator (OA). * iLO firmware version 1.5 or higher is required to support all the operations. For more information on node manual cleaning, see :ref:`manual_cleaning` Hardware Inspection Support ^^^^^^^^^^^^^^^^^^^^^^^^^^^ The following iLO drivers support hardware inspection: * ``pxe_ilo`` * ``iscsi_ilo`` * ``agent_ilo`` .. note:: * RAID needs to be pre-configured prior to inspection; otherwise proliantutils returns 0 for the disk size. The inspection process will discover the following essential properties (properties required for scheduling deployment): * ``memory_mb``: memory size * ``cpus``: number of cpus * ``cpu_arch``: cpu architecture * ``local_gb``: disk size Inspection can also discover the following extra capabilities for iLO drivers: * ``ilo_firmware_version``: iLO firmware version * ``rom_firmware_version``: ROM firmware version * ``secure_boot``: whether secure boot is supported. The possible values are 'true' or 'false'. The value is returned as 'true' if secure boot is supported by the server. * ``server_model``: server model * ``pci_gpu_devices``: number of GPU devices connected to the bare metal. * ``nic_capacity``: the maximum speed of the embedded NIC adapter. .. note:: * The capability ``nic_capacity`` can only be discovered if ipmitool version >= 1.8.15 is used on the conductor. The latest version can be downloaded from `here `__. * The iLO firmware version needs to be 2.10 or above for nic_capacity to be discovered. The operator can specify these capabilities in the nova flavor for the node to be selected for scheduling:: nova flavor-key my-baremetal-flavor set capabilities:server_model="<in> Gen8" nova flavor-key my-baremetal-flavor set capabilities:pci_gpu_devices="> 0" nova flavor-key my-baremetal-flavor set capabilities:nic_capacity="10Gb" nova flavor-key my-baremetal-flavor set capabilities:ilo_firmware_version="<in> 2.10" nova flavor-key my-baremetal-flavor set capabilities:secure_boot="true" Swiftless deploy for intermediate images ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The ``iscsi_ilo`` and ``agent_ilo`` drivers can deploy and boot the server with and without ``swift`` being used for hosting the intermediate temporary floppy image (holding metadata for the deploy kernel and ramdisk) and the boot ISO (which is required for ``iscsi_ilo`` only). A local HTTP(S) web server on each conductor node needs to be configured; see the configuration sketch below. Refer to `Web server configuration on conductor`_ for more information. The HTTPS web server needs to be enabled (instead of an HTTP web server) in order to send management information and images over an encrypted channel. .. note:: This feature assumes that the user inputs are in glance, which uses swift as its backend. If the swift dependency has to be eliminated, refer to `HTTP(S) Based Deploy Support`_ as well. Deploy Process ~~~~~~~~~~~~~~ Refer to `Netboot in swiftless deploy for intermediate images`_ for partition image support and to `Localboot in swiftless deploy for intermediate images`_ for whole disk image support. 
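As a minimal, hedged sketch only of the web server configuration mentioned above (the ``[deploy]`` section and its ``http_url``/``http_root`` option names are assumptions based on `Web server configuration on conductor`_, and the URL and path below are hypothetical), the conductor side of a swiftless setup in ironic.conf could look like::

    [deploy]
    # Base URL of this conductor's local web server; use HTTPS so that
    # intermediate images travel over an encrypted channel.
    http_url = https://192.168.0.2:8443
    # Directory into which the conductor copies the intermediate floppy
    # image and the boot ISO; it must be the web server's document root.
    http_root = /httpboot

With something like this in place, the iLO fetches the intermediate images from the conductor's web server instead of from swift tempURLs.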
HTTP(S) Based Deploy Support ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The user input for the images given in ``driver_info`` like ``ilo_deploy_iso``, ``deploy_kernel`` and ``deploy_ramdisk`` and in ``instance_info`` like ``image_source``, ``kernel``, ``ramdisk`` and ``ilo_boot_iso`` may also be given as HTTP(S) URLs. The HTTP(S) web server can be configured in many ways. For the Apache web server on Ubuntu, refer `here `_. The web server may reside on a different system than the conductor nodes, but its URL must be reachable by the conductor and the bare metal nodes. Deploy Process ~~~~~~~~~~~~~~ Refer to `Netboot with HTTP(S) based deploy`_ for partition image boot and to `Localboot with HTTP(S) based deploy`_ for whole disk image boot. Support for iLO drivers with Standalone Ironic ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ It is possible to use ironic as a standalone service without other OpenStack services. iLO drivers can be used in standalone ironic. This feature is referred to as ``iLO drivers with standalone ironic`` in this document and is supported by the following drivers: * ``pxe_ilo`` * ``iscsi_ilo`` * ``agent_ilo`` Configuration ~~~~~~~~~~~~~ The HTTP(S) web server needs to be configured as described in `HTTP(S) Based Deploy Support`_, and the conductor web server needs to be configured for hosting intermediate images as described in `Web server configuration on conductor`_ and `Swiftless deploy for intermediate images`_. Deploy Process ~~~~~~~~~~~~~~ ``iscsi_ilo`` and ``agent_ilo`` support both netboot and localboot. Refer to `Netboot in standalone ironic`_ and `Localboot in standalone ironic`_ for details of the deploy process for netboot and localboot respectively. For ``pxe_ilo``, the deploy process is the same as that of the native ``pxe_ipmitool`` driver. Deploy Process ============== Netboot with glance and swift ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. 
seqdiag:: :scale: 80 diagram { Glance; Conductor; Baremetal; Swift; IPA; iLO; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Conductor -> iLO [label = "Powers off the node"]; Conductor -> Glance [label = "Download user image"]; Conductor -> Glance [label = "Get the metadata for deploy ISO"]; Conductor -> Conductor [label = "Generates swift tempURL for deploy ISO"]; Conductor -> Conductor [label = "Creates the FAT32 image containing Ironic API URL and driver name"]; Conductor -> Swift [label = "Uploads the FAT32 image"]; Conductor -> Conductor [label = "Generates swift tempURL for FAT32 image"]; Conductor -> iLO [label = "Attaches the FAT32 image swift tempURL as virtual media floppy"]; Conductor -> iLO [label = "Attaches the deploy ISO swift tempURL as virtual media CDROM"]; Conductor -> iLO [label = "Sets one time boot to CDROM"]; Conductor -> iLO [label = "Reboot the node"]; iLO -> Swift [label = "Downloads deploy ISO"]; Baremetal -> iLO [label = "Boots deploy kernel/ramdisk from iLO virtual media CDROM"]; IPA -> Conductor [label = "Lookup node"]; Conductor -> IPA [label = "Provides node UUID"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> IPA [label = "Exposes the disk over iSCSI"]; Conductor -> Conductor [label = "Connects to bare metal's disk over iSCSI and writes image"]; Conductor -> Conductor [label = "Generates the boot ISO"]; Conductor -> Swift [label = "Uploads the boot ISO"]; Conductor -> Conductor [label = "Generates swift tempURL for boot ISO"]; Conductor -> iLO [label = "Attaches boot ISO swift tempURL as virtual media CDROM"]; Conductor -> iLO [label = "Sets boot device to CDROM"]; Conductor -> IPA [label = "Power off the node"]; Conductor -> iLO [label = "Power on the node"]; iLO -> Swift [label = "Downloads boot ISO"]; iLO -> Baremetal [label = "Boots the instance kernel/ramdisk from iLO virtual media CDROM"]; Baremetal -> Baremetal [label = "Instance kernel finds root partition and continues booting from disk"]; } Localboot with glance and swift for partition images ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. 
seqdiag:: :scale: 80 diagram { Glance; Conductor; Baremetal; Swift; IPA; iLO; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Conductor -> iLO [label = "Powers off the node"]; Conductor -> Glance [label = "Get the metadata for deploy ISO"]; Glance -> Conductor [label = "Returns the metadata for deploy ISO"]; Conductor -> Conductor [label = "Generates swift tempURL for deploy ISO"]; Conductor -> Conductor [label = "Creates the FAT32 image containing ironic API URL and driver name"]; Conductor -> Swift [label = "Uploads the FAT32 image"]; Conductor -> Conductor [label = "Generates swift tempURL for FAT32 image"]; Conductor -> iLO [label = "Attaches the FAT32 image swift tempURL as virtual media floppy"]; Conductor -> iLO [label = "Attaches the deploy ISO swift tempURL as virtual media CDROM"]; Conductor -> iLO [label = "Sets one time boot to CDROM"]; Conductor -> iLO [label = "Reboot the node"]; iLO -> Swift [label = "Downloads deploy ISO"]; Baremetal -> iLO [label = "Boots deploy kernel/ramdisk from iLO virtual media CDROM"]; IPA -> Conductor [label = "Lookup node"]; Conductor -> IPA [label = "Provides node UUID"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> IPA [label = "Sends the user image HTTP(S) URL"]; IPA -> Swift [label = "Retrieves the user image on bare metal"]; IPA -> IPA [label = "Writes user image to root partition"]; IPA -> IPA [label = "Installs boot loader"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> Baremetal [label = "Sets boot device to disk"]; Conductor -> IPA [label = "Power off the node"]; Conductor -> iLO [label = "Power on the node"]; Baremetal -> Baremetal [label = "Boot user image from disk"]; } Localboot with glance and swift ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. 
seqdiag:: :scale: 80 diagram { Glance; Conductor; Baremetal; Swift; IPA; iLO; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Conductor -> iLO [label = "Powers off the node"]; Conductor -> Glance [label = "Get the metadata for deploy ISO"]; Glance -> Conductor [label = "Returns the metadata for deploy ISO"]; Conductor -> Conductor [label = "Generates swift tempURL for deploy ISO"]; Conductor -> Conductor [label = "Creates the FAT32 image containing ironic API URL and driver name"]; Conductor -> Swift [label = "Uploads the FAT32 image"]; Conductor -> Conductor [label = "Generates swift tempURL for FAT32 image"]; Conductor -> iLO [label = "Attaches the FAT32 image swift tempURL as virtual media floppy"]; Conductor -> iLO [label = "Attaches the deploy ISO swift tempURL as virtual media CDROM"]; Conductor -> iLO [label = "Sets one time boot to CDROM"]; Conductor -> iLO [label = "Reboot the node"]; iLO -> Swift [label = "Downloads deploy ISO"]; Baremetal -> iLO [label = "Boots deploy kernel/ramdisk from iLO virtual media CDROM"]; IPA -> Conductor [label = "Lookup node"]; Conductor -> IPA [label = "Provides node UUID"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> IPA [label = "Sends the user image HTTP(S) URL"]; IPA -> Swift [label = "Retrieves the user image on bare metal"]; IPA -> IPA [label = "Writes user image to disk"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> Baremetal [label = "Sets boot device to disk"]; Conductor -> IPA [label = "Power off the node"]; Conductor -> iLO [label = "Power on the node"]; Baremetal -> Baremetal [label = "Boot user image from disk"]; } Netboot in swiftless deploy for intermediate images ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. seqdiag:: :scale: 80 diagram { Glance; Conductor; Baremetal; ConductorWebserver; IPA; iLO; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Conductor -> iLO [label = "Powers off the node"]; Conductor -> Glance [label = "Download user image"]; Conductor -> Glance [label = "Get the metadata for deploy ISO"]; Conductor -> Conductor [label = "Generates swift tempURL for deploy ISO"]; Conductor -> Conductor [label = "Creates the FAT32 image containing Ironic API URL and driver name"]; Conductor -> ConductorWebserver [label = "Uploads the FAT32 image"]; Conductor -> iLO [label = "Attaches the FAT32 image URL as virtual media floppy"]; Conductor -> iLO [label = "Attaches the deploy ISO swift tempURL as virtual media CDROM"]; Conductor -> iLO [label = "Sets one time boot to CDROM"]; Conductor -> iLO [label = "Reboot the node"]; iLO -> Swift [label = "Downloads deploy ISO"]; Baremetal -> iLO [label = "Boots deploy kernel/ramdisk from iLO virtual media CDROM"]; IPA -> Conductor [label = "Lookup node"]; Conductor -> IPA [label = "Provides node UUID"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> IPA [label = "Exposes the disk over iSCSI"]; Conductor -> Conductor [label = "Connects to bare metal's disk over iSCSI and writes image"]; Conductor -> Conductor [label = "Generates the boot ISO"]; Conductor -> ConductorWebserver [label = "Uploads the boot ISO"]; Conductor -> iLO [label = "Attaches boot ISO URL as virtual media CDROM"]; Conductor -> iLO [label = "Sets boot device to CDROM"]; Conductor -> IPA [label = "Power off the node"]; Conductor -> iLO [label = "Power on the node"]; iLO -> ConductorWebserver [label = "Downloads boot ISO"]; iLO -> Baremetal [label = "Boots the instance kernel/ramdisk 
from iLO virtual media CDROM"]; Baremetal -> Baremetal [label = "Instance kernel finds root partition and continues booting from disk"]; } Localboot in swiftless deploy for intermediate images ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. seqdiag:: :scale: 80 diagram { Glance; Conductor; Baremetal; ConductorWebserver; IPA; iLO; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Conductor -> iLO [label = "Powers off the node"]; Conductor -> Glance [label = "Get the metadata for deploy ISO"]; Glance -> Conductor [label = "Returns the metadata for deploy ISO"]; Conductor -> Conductor [label = "Generates swift tempURL for deploy ISO"]; Conductor -> Conductor [label = "Creates the FAT32 image containing Ironic API URL and driver name"]; Conductor -> ConductorWebserver [label = "Uploads the FAT32 image"]; Conductor -> iLO [label = "Attaches the FAT32 image URL as virtual media floppy"]; Conductor -> iLO [label = "Attaches the deploy ISO swift tempURL as virtual media CDROM"]; Conductor -> iLO [label = "Sets one time boot to CDROM"]; Conductor -> iLO [label = "Reboot the node"]; iLO -> Swift [label = "Downloads deploy ISO"]; Baremetal -> iLO [label = "Boots deploy kernel/ramdisk from iLO virtual media CDROM"]; IPA -> Conductor [label = "Lookup node"]; Conductor -> IPA [label = "Provides node UUID"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> IPA [label = "Sends the user image HTTP(S) URL"]; IPA -> Swift [label = "Retrieves the user image on bare metal"]; IPA -> IPA [label = "Writes user image to disk"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> Baremetal [label = "Sets boot device to disk"]; Conductor -> IPA [label = "Power off the node"]; Conductor -> Baremetal [label = "Power on the node"]; Baremetal -> Baremetal [label = "Boot user image from disk"]; } Netboot with HTTP(S) based deploy ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. 
seqdiag:: :scale: 80 diagram { Webserver; Conductor; Baremetal; Swift; IPA; iLO; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Conductor -> iLO [label = "Powers off the node"]; Conductor -> Webserver [label = "Download user image"]; Conductor -> Conductor [label = "Creates the FAT32 image containing Ironic API URL and driver name"]; Conductor -> Swift [label = "Uploads the FAT32 image"]; Conductor -> Conductor [label = "Generates swift tempURL for FAT32 image"]; Conductor -> iLO [label = "Attaches the FAT32 image swift tempURL as virtual media floppy"]; Conductor -> iLO [label = "Attaches the deploy ISO URL as virtual media CDROM"]; Conductor -> iLO [label = "Sets one time boot to CDROM"]; Conductor -> iLO [label = "Reboot the node"]; iLO -> Webserver [label = "Downloads deploy ISO"]; Baremetal -> iLO [label = "Boots deploy kernel/ramdisk from iLO virtual media CDROM"]; IPA -> Conductor [label = "Lookup node"]; Conductor -> IPA [label = "Provides node UUID"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> IPA [label = "Exposes the disk over iSCSI"]; Conductor -> Conductor [label = "Connects to bare metal's disk over iSCSI and writes image"]; Conductor -> Conductor [label = "Generates the boot ISO"]; Conductor -> Swift [label = "Uploads the boot ISO"]; Conductor -> Conductor [label = "Generates swift tempURL for boot ISO"]; Conductor -> iLO [label = "Attaches boot ISO swift tempURL as virtual media CDROM"]; Conductor -> iLO [label = "Sets boot device to CDROM"]; Conductor -> IPA [label = "Power off the node"]; Conductor -> iLO [label = "Power on the node"]; iLO -> Swift [label = "Downloads boot ISO"]; iLO -> Baremetal [label = "Boots the instance kernel/ramdisk from iLO virtual media CDROM"]; Baremetal -> Baremetal [label = "Instance kernel finds root partition and continues booting from disk"]; } Localboot with HTTP(S) based deploy ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. seqdiag:: :scale: 80 diagram { Webserver; Conductor; Baremetal; Swift; IPA; iLO; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Conductor -> iLO [label = "Powers off the node"]; Conductor -> Conductor [label = "Creates the FAT32 image containing ironic API URL and driver name"]; Conductor -> Swift [label = "Uploads the FAT32 image"]; Conductor -> Conductor [label = "Generates swift tempURL for FAT32 image"]; Conductor -> iLO [label = "Attaches the FAT32 image swift tempURL as virtual media floppy"]; Conductor -> iLO [label = "Attaches the deploy ISO URL as virtual media CDROM"]; Conductor -> iLO [label = "Sets one time boot to CDROM"]; Conductor -> iLO [label = "Reboot the node"]; iLO -> Webserver [label = "Downloads deploy ISO"]; Baremetal -> iLO [label = "Boots deploy kernel/ramdisk from iLO virtual media CDROM"]; IPA -> Conductor [label = "Lookup node"]; Conductor -> IPA [label = "Provides node UUID"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> IPA [label = "Sends the user image HTTP(S) URL"]; IPA -> Webserver [label = "Retrieves the user image on bare metal"]; IPA -> IPA [label = "Writes user image to disk"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> Baremetal [label = "Sets boot device to disk"]; Conductor -> IPA [label = "Power off the node"]; Conductor -> Baremetal [label = "Power on the node"]; Baremetal -> Baremetal [label = "Boot user image from disk"]; } Netboot in standalone ironic ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. 
seqdiag:: :scale: 80 diagram { Webserver; Conductor; Baremetal; ConductorWebserver; IPA; iLO; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Conductor -> iLO [label = "Powers off the node"]; Conductor -> Webserver [label = "Download user image"]; Conductor -> Conductor [label = "Creates the FAT32 image containing Ironic API URL and driver name"]; Conductor -> ConductorWebserver[label = "Uploads the FAT32 image"]; Conductor -> iLO [label = "Attaches the FAT32 image URL as virtual media floppy"]; Conductor -> iLO [label = "Attaches the deploy ISO URL as virtual media CDROM"]; Conductor -> iLO [label = "Sets one time boot to CDROM"]; Conductor -> iLO [label = "Reboot the node"]; iLO -> Webserver [label = "Downloads deploy ISO"]; Baremetal -> iLO [label = "Boots deploy kernel/ramdisk from iLO virtual media CDROM"]; IPA -> Conductor [label = "Lookup node"]; Conductor -> IPA [label = "Provides node UUID"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> IPA [label = "Exposes the disk over iSCSI"]; Conductor -> Conductor [label = "Connects to bare metal's disk over iSCSI and writes image"]; Conductor -> Conductor [label = "Generates the boot ISO"]; Conductor -> ConductorWebserver [label = "Uploads the boot ISO"]; Conductor -> iLO [label = "Attaches boot ISO URL as virtual media CDROM"]; Conductor -> iLO [label = "Sets boot device to CDROM"]; Conductor -> IPA [label = "Power off the node"]; Conductor -> iLO [label = "Power on the node"]; iLO -> ConductorWebserver [label = "Downloads boot ISO"]; iLO -> Baremetal [label = "Boots the instance kernel/ramdisk from iLO virtual media CDROM"]; Baremetal -> Baremetal [label = "Instance kernel finds root partition and continues booting from disk"]; } Localboot in standalone ironic ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. seqdiag:: :scale: 80 diagram { Webserver; Conductor; Baremetal; ConductorWebserver; IPA; iLO; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Conductor -> iLO [label = "Powers off the node"]; Conductor -> Conductor [label = "Creates the FAT32 image containing Ironic API URL and driver name"]; Conductor -> ConductorWebserver [label = "Uploads the FAT32 image"]; Conductor -> Conductor [label = "Generates URL for FAT32 image"]; Conductor -> iLO [label = "Attaches the FAT32 image URL as virtual media floppy"]; Conductor -> iLO [label = "Attaches the deploy ISO URL as virtual media CDROM"]; Conductor -> iLO [label = "Sets one time boot to CDROM"]; Conductor -> iLO [label = "Reboot the node"]; iLO -> Webserver [label = "Downloads deploy ISO"]; Baremetal -> iLO [label = "Boots deploy kernel/ramdisk from iLO virtual media CDROM"]; IPA -> Conductor [label = "Lookup node"]; Conductor -> IPA [label = "Provides node UUID"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> IPA [label = "Sends the user image HTTP(S) URL"]; IPA -> Webserver [label = "Retrieves the user image on bare metal"]; IPA -> IPA [label = "Writes user image to disk"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> Baremetal [label = "Sets boot device to disk"]; Conductor -> IPA [label = "Power off the node"]; Conductor -> Baremetal [label = "Power on the node"]; Baremetal -> Baremetal [label = "Boot user image from disk"]; } Activating iLO Advanced license as manual clean step ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ iLO drivers can activate the iLO Advanced license key as a manual cleaning step. 
Any manual cleaning step can only be initiated when a node is in the ``manageable`` state. Once the manual cleaning is finished, the node will be put in the ``manageable`` state again. Users can follow the steps from :ref:`manual_cleaning` to initiate a manual cleaning operation on a node. An example of a manual clean step with ``activate_license`` as the only clean step could be:: 'clean_steps': [{ 'interface': 'management', 'step': 'activate_license', 'args': { 'ilo_license_key': 'ABC12-XXXXX-XXXXX-XXXXX-YZ345' } }] The different attributes of the ``activate_license`` clean step are as follows: .. csv-table:: :header: "Attribute", "Description" :widths: 30, 120 "``interface``", "Interface of clean step, here ``management``" "``step``", "Name of clean step, here ``activate_license``" "``args``", "Keyword-argument entry (<name>: <value>) being passed to clean step" "``args.ilo_license_key``", "iLO Advanced license key to activate enterprise features. This is mandatory." Initiating firmware update as manual clean step ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ iLO drivers can invoke secure firmware update as a manual cleaning step. Any manual cleaning step can only be initiated when a node is in the ``manageable`` state. Once the manual cleaning is finished, the node will be put in the ``manageable`` state again. A user can follow the steps from :ref:`manual_cleaning` to initiate a manual cleaning operation on a node. An example of a manual clean step with ``update_firmware`` as the only clean step could be:: 'clean_steps': [{ 'interface': 'management', 'step': 'update_firmware', 'args': { 'firmware_update_mode': 'ilo', 'firmware_images':[ { 'url': 'file:///firmware_images/ilo/1.5/CP024444.scexe', 'checksum': 'a94e683ea16d9ae44768f0a65942234d', 'component': 'ilo' }, { 'url': 'swift://firmware_container/cpld2.3.rpm', 'checksum': '<md5-checksum-of-this-file>', 'component': 'cpld' }, { 'url': 'http://my_address:port/firmwares/bios_vLatest.scexe', 'checksum': '<md5-checksum-of-this-file>', 'component': 'bios' }, { 'url': 'https://my_secure_address_url/firmwares/chassis_vLatest.scexe', 'checksum': '<md5-checksum-of-this-file>', 'component': 'chassis' }, { 'url': 'file:///home/ubuntu/firmware_images/power_pic/pmc_v3.0.bin', 'checksum': '<md5-checksum-of-this-file>', 'component': 'power_pic' } ] } }] The different attributes of the ``update_firmware`` clean step are as follows: .. csv-table:: :header: "Attribute", "Description" :widths: 30, 120 "``interface``", "Interface of clean step, here ``management``" "``step``", "Name of clean step, here ``update_firmware``" "``args``", "Keyword-argument entry (<name>: <value>) being passed to clean step" "``args.firmware_update_mode``", "Mode (or mechanism) of out-of-band firmware update. Supported value is ``ilo``. This is mandatory." "``args.firmware_images``", "Ordered list of dictionaries of images to be flashed. This is mandatory." Each firmware image block is represented by a dictionary (JSON), in the form:: { 'url': '<url of firmware image file>', 'checksum': '<md5 checksum of firmware image file to verify the image>', 'component': '<device on which firmware image will be applied>' } All the fields in the firmware image block are mandatory. * The supported firmware URL schemes are: ``file``, ``http``, ``https`` and ``swift``. .. note:: This feature assumes that when using the ``file`` URL scheme, the file path is on the conductor controlling the node. * The firmware components that can be updated are: ``ilo``, ``cpld``, ``power_pic``, ``bios`` and ``chassis``. * The firmware images will be updated in the order given by the operator. If there is an error while processing any of the firmware images in the list, none of the firmware updates will occur. 
The processing error could happen during image download, image checksum verification or image extraction. The logic is to process each of the firmware files and update them on the devices only if all the files are processed successfully. If, during the update (uploading and flashing) process, an update fails, then the remaining updates, if any, in the list will be aborted. It is then recommended to triage and fix the failure and re-attempt the manual clean step ``update_firmware`` for the aborted ``firmware_images``. The devices for which the firmware has been updated successfully will start functioning with their newly updated firmware. * For troubleshooting the complete process, check the ironic conductor logs carefully to see if there are any firmware processing or update related errors; these may help in root-causing or in understanding where things were left off or where things failed. You can then fix or work around the problem and try again. A common cause of update failure is an HPE Secure Digital Signature check failure for the firmware image file. * To compute the ``md5`` checksum for your image file, you can use the following command:: $ md5sum image.rpm 66cdb090c80b71daa21a67f06ecd3f33 image.rpm RAID Support ^^^^^^^^^^^^ The in-band RAID functionality is now supported by iLO drivers. See :ref:`raid` for more information. .. _DIB_raid_support: DIB support for Proliant Hardware Manager ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To create an agent ramdisk with ``Proliant Hardware Manager``, use the ``proliant-tools`` element in DIB:: disk-image-create -o proliant-agent-ramdisk ironic-agent fedora proliant-tools ironic-5.1.0/doc/source/drivers/ipa.rst .. _IPA: =================== Ironic Python Agent =================== Overview ======== *Ironic Python Agent* (also often called *IPA* or just *agent*) is a Python-based agent which handles *ironic* bare metal nodes in a variety of actions such as inspect, configure, clean and deploy images. IPA is distributed to nodes inside a ramdisk; booting this ramdisk on the node starts the agent. For more information see the `ironic-python-agent documentation `_. Drivers ======= Starting with the Kilo release all drivers (except for fake ones) are using IPA for deployment. There are two types, which can be distinguished by prefix: * For drivers with the ``pxe_`` or ``iscsi_`` prefix, IPA exposes the root hard drive as an iSCSI share and calls back to the ironic conductor. The conductor mounts the share and copies an image there. It then signals back to IPA for post-installation actions like setting up a bootloader for local boot support. * For drivers with the ``agent_`` prefix, the conductor prepares a swift temporary URL for an image. IPA then handles the whole deployment process: downloading an image from swift, putting it on the machine and doing any post-deploy actions. Which one to choose depends on your environment. iSCSI-based drivers put a higher load on conductors, while agent-based drivers currently require the whole image to fit in the node's memory; a configuration sketch for the swift temporary URLs used by the latter follows the Requirements section below. .. todo: other differences? .. todo: explain configuring swift for temporary URL's Requirements ------------ Using IPA requires it to be present and configured on the deploy ramdisk, see :ref:`BuildingDeployRamdisk` for details. 
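As a hedged sketch only of the swift temporary URL setup required by the ``agent_`` drivers (the ``[glance]`` option names are taken from this release's sample configuration; the key value, endpoint and account below are placeholders, not real values), one would set a secret key on the swift account holding glance images and mirror it in the conductor's configuration::

    # Set the temp-URL key on the swift account (example secret value).
    $ swift post -m "Temp-URL-Key:secret-tempurl-key"

and in ironic.conf::

    [glance]
    swift_temp_url_key = secret-tempurl-key
    swift_temp_url_duration = 1200
    swift_endpoint_url = https://swift.example.com
    swift_account = AUTH_<account-id>
    swift_container = glance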
Using proxies for image download in agent drivers ================================================= Overview -------- IPA supports using proxies while downloading the user image. For example, this could be used to speed up downloads by using a caching proxy. Steps to enable proxies ----------------------- #. Configure the proxy server of your choice (for example `Squid `_, `Apache Traffic Server `_). This will probably require you to configure the proxy server to cache the content even if the requested URL contains a query, and to raise the maximum cached file size, as images can be pretty big. If you have HTTPS enabled in swift (see `swift deployment guide `_), it is possible to configure the proxy server to talk to swift via HTTPS to download the image, store it in the cache unencrypted and return it to the node via HTTPS again. Because the image will be stored unencrypted in the cache, this approach is recommended only for images that do not contain sensitive information. Refer to your proxy server's documentation to complete this step. #. Set ``[glance]swift_temp_url_cache_enabled`` in the ironic conductor config file to ``True``. The conductor will reuse the cached swift temporary URLs instead of generating new ones each time an image is requested, so that the proxy server does not create new cache entries for the same image, based on the query part of the URL (as it contains some query parameters that change each time it is regenerated). #. Set the ``[glance]swift_temp_url_expected_download_start_delay`` option in the ironic conductor config file to the value appropriate for your hardware. This is the delay (in seconds) from the time of the deploy request (when the swift temporary URL is generated) to when the URL is used for the image download. You can think of it as roughly the time needed for the IPA ramdisk to start up and begin the download. This value is used to check if the swift temporary URL duration is large enough to let the image download begin. Also, if temporary URL caching is enabled, this determines whether a cached entry will still be valid when the download starts. It is used only if ``[glance]swift_temp_url_cache_enabled`` is ``True``. #. Increase the ``[glance]swift_temp_url_duration`` option in the ironic conductor config file, as only non-expired links to images will be returned from the swift temporary URLs cache. This means that if ``swift_temp_url_duration=1200``, then after 20 minutes a new image will be cached by the proxy server, as the query in its URL will change. The value of this option must be greater than or equal to ``[glance]swift_temp_url_expected_download_start_delay``. #. Add one or more of ``image_http_proxy``, ``image_https_proxy``, ``image_no_proxy`` to the driver_info properties in each node that will use the proxy. Please refer to the ``ironic driver-properties`` output of the ``agent_*`` driver you're using for descriptions of these properties. Advanced configuration ====================== Out-of-band vs. in-band power off on deploy ------------------------------------------- After deploying an image onto the node's hard disk, Ironic will reboot the machine into the new image. By default this power action happens ``in-band``, meaning that the ironic-conductor will instruct the IPA ramdisk to power itself off. Some hardware may have a problem with the default approach and would require Ironic to talk directly to the management controller to switch the power off and on again. 
In order to tell Ironic to do that, you have to update the node's ``driver_info`` field and set the ``deploy_forces_oob_reboot`` parameter to **True**. For example, the below command sets this configuration in a specific node:: ironic node-update <node-uuid> add driver_info/deploy_forces_oob_reboot=True ironic-5.1.0/doc/source/drivers/vbox.rst .. _vbox: ================== VirtualBox drivers ================== Overview ======== VirtualBox drivers can be used to test Ironic by using VirtualBox VMs to simulate bare metal nodes. Ironic provides support via the ``pxe_ssh`` and ``agent_ssh`` drivers for using a VirtualBox VM as a bare metal target and provisioning it. It works by connecting via SSH into the VirtualBox host and running commands using VBoxManage. This works well if you have VirtualBox installed on a Linux box. But when VirtualBox is installed on a Windows box, configuring and getting SSH to work with VBoxManage is difficult (if not impossible) due to the following reasons: * Windows doesn't come with native SSH support and one needs to use some third-party software to enable SSH support on Windows. * Even after configuring SSH, VBoxManage doesn't work remotely due to how Windows manages user accounts -- the native Windows user account is different from the corresponding SSH user account, and VBoxManage doesn't work properly when used with the SSH user account. * Even after tweaking the policies of the VirtualBox application, the remote VBoxManage and VBoxSvc don't sync with each other properly, which often results in a crash. VirtualBox drivers use SOAP to talk to the VirtualBox web service running on the VirtualBox host. These drivers are primarily intended for Ironic developers running Windows on their laptops/desktops, although they can be used on other operating systems as well. Using these drivers, a developer could configure a cloud controller on one VirtualBox VM and use other VMs in the same VirtualBox as bare metals for that cloud controller. The following VirtualBox drivers are available: * ``pxe_vbox``: uses the iSCSI-based deployment mechanism. * ``agent_vbox``: uses the agent-based deployment mechanism. * ``fake_vbox``: uses VirtualBox for power and management, but uses fake deploy. Setting up development environment ================================== * Install VirtualBox on your desktop or laptop. * Create a VM for the cloud controller. Do not power on the VM now. For example, ``cloud-controller``. * In VirtualBox Manager, Select ``cloud-controller`` VM -> Click Settings -> Network -> Adapter 2 -> Select 'Enable Network Adapter' -> Select Attached to: Internal Network -> Select Name: intnet * Create a VM in VirtualBox to act as bare metal. A VM with 1 CPU and 1 GB memory should be sufficient. Let's name this VM ``baremetal``. * In VirtualBox Manager, Select ``baremetal`` VM -> Click Settings -> Network -> Adapter 1 -> Select 'Enable Network Adapter' -> Select Attached to: Internal Network -> Select Name: intnet * Configure the VirtualBox web service to disable authentication. (This is only a suggestion. If you want, enable authentication with the appropriate web service authentication library.) :: VBoxManage setproperty websrvauthlibrary null * Run the VirtualBox web service:: C:\Program Files\Oracle\VirtualBox\VBoxWebSrv.exe * Power on the ``cloud-controller`` VM. * All the following instructions are to be done in the ``cloud-controller`` VM. * Install the GNU/Linux distribution of your choice. 
* Set up devstack. * Install pyremotevbox:: sudo pip install "pyremotevbox>=0.5.0" * Enable one (or more) of the VirtualBox drivers (``pxe_vbox``, ``agent_vbox``, or ``fake_vbox``) via the ``enabled_drivers`` configuration option in ``/etc/ironic/ironic.conf``, and restart the Ironic conductor. * Set up flat networking on ``eth1``. For details on how to do this, see :ref:`NeutronFlatNetworking`. * Enroll a VirtualBox node. The following examples use the ``pxe_vbox`` driver. :: ironic node-create -d pxe_vbox -i virtualbox_host='10.0.2.2' -i virtualbox_vmname='baremetal' If you are using authentication with the VirtualBox web service, your username and password need to be provided. The ironic node-create command will look like:: ironic node-create -d pxe_vbox -i virtualbox_host='10.0.2.2' -i virtualbox_vmname='baremetal' -i virtualbox_username=<username> -i virtualbox_password=<password> If the VirtualBox web service is listening on a different port than the default 18083, then that port may be specified using the driver_info parameter ``virtualbox_port``. * Add other node properties and trigger provisioning on the bare metal node. .. note:: When a newly created bare metal VM is powered on for the first time by Ironic (during provisioning), VirtualBox will automatically pop up a dialog box asking to 'Select start-up disk'. Just press 'Cancel' to continue booting the VM. ironic-5.1.0/doc/source/drivers/iboot.rst .. _IBOOT: ============ iBoot driver ============ Overview ======== The iBoot power driver enables you to take advantage of power cycle management of nodes using Dataprobe iBoot devices over the DxP protocol. Drivers ======= There are two iBoot drivers: * The ``pxe_iboot`` driver uses iBoot to control the power state of the node, PXE/iPXE technology for booting and the iSCSI methodology for deploying the node. * The ``agent_iboot`` driver uses iBoot to control the power state of the node, PXE/iPXE technology for booting and the Ironic Python Agent for deploying an image to the node. Requirements ~~~~~~~~~~~~ * The ``python-iboot`` library should be installed - https://github.com/darkip/python-iboot Tested platforms ~~~~~~~~~~~~~~~~ * iBoot-G2 Configuring and enabling the driver ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1. Add ``pxe_iboot`` and/or ``agent_iboot`` to the list of ``enabled_drivers`` in */etc/ironic/ironic.conf*. For example:: [DEFAULT] ... enabled_drivers = pxe_iboot,agent_iboot 2. Restart the Ironic conductor service:: service ironic-conductor restart Registering a node with the iBoot driver ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Nodes configured for the iBoot driver should have the ``driver`` property set to ``pxe_iboot`` or ``agent_iboot``. The following configuration values are also required in ``driver_info``: - ``iboot_address``: The IP address of the iBoot PDU. - ``iboot_username``: User name used for authentication. - ``iboot_password``: Password used for authentication. In addition, there are optional properties in ``driver_info``: - ``iboot_port``: iBoot PDU port. Defaults to 9100. - ``iboot_relay_id``: iBoot PDU relay ID. This option is useful in order to support multiple nodes attached to a single PDU. Defaults to 1. The following sequence of commands can be used to enroll a node with the iBoot driver. 1. Create node:: ironic node-create -d pxe_iboot -i iboot_username=<username> -i iboot_password=<password> -i iboot_address=<address> 
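The optional ``driver_info`` properties can be supplied at enrollment time or added later with ``ironic node-update``. As a hypothetical example (the node UUID and relay number below are placeholders), the following attaches an already-enrolled node to relay 2 of a shared PDU::

    ironic node-update <node-uuid> add driver_info/iboot_relay_id=2 driver_info/iboot_port=9100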
References ========== .. [1] iBoot-G2 official documentation - http://dataprobe.com/support_iboot-g2.html ironic-5.1.0/doc/source/drivers/amt.rst .. _amt: =========== AMT drivers =========== Overview ======== AMT (Active Management Technology) drivers extend Ironic's range to the desktop. AMT/vPro is widely used in desktops to remotely control their power, similar to IPMI in servers. AMT drivers use the WS-MAN protocol to interact with AMT clients. They work on AMT 7.0/8.0/9.0. AMT 7.0 was released in 2010, so AMT drivers should work on most PCs with vPro. There are two AMT drivers: * ``pxe_amt`` uses AMT for power management and deploys the user image over iSCSI from the conductor * ``agent_amt`` uses AMT for power management and deploys the user image directly to the node via HTTP. Set up your environment ======================= A detailed reference is available here, and a short guide follows below: https://software.intel.com/en-us/articles/intel-active-management-technology-start-here-guide-intel-amt-9#4.2 * Set up AMT Client * Choose a system which supports Intel AMT / vPro. Desktop and laptop systems that support this can often be identified by looking at the "Intel" tag for the word ``vPro``. * During boot, press Ctrl+P to enter Intel MEBx management. * Reset password -- default is ``admin``. The new password must contain at least one upper case letter, one lower case letter, one digit and one special character, and be at least eight characters. * Go to Intel AMT Configuration: * Enable all features under SOL/IDER/KVM section * Select User Consent and choose None (No password is needed) * Select Network Setup section and set IP * Activate Network Access * MEBx Exit * Restart and enable PXE boot in BIOS * Install ``openwsman`` on servers where ``ironic-conductor`` is running: * Fedora/RHEL: ``openwsman-python``. * Ubuntu: ``python-openwsman``; its most recent version, 2.4.3, is sufficient. * Or build it yourself from: https://github.com/Openwsman/openwsman * Enable the ``pxe_amt`` or ``agent_amt`` driver by adding it to the configuration option ``enabled_drivers`` (typically located at ``/etc/ironic/ironic.conf``) and restart the ``ironic-conductor`` process:: service ironic-conductor restart * Enroll an AMT node * Specify these driver_info properties for the node: ``amt_password``, ``amt_address``, and ``amt_username`` * Boot an instance .. note:: It is recommended that nodes using the pxe_amt driver be deployed with the `local boot`_ option. This is because the AMT firmware currently has no support for setting a persistent boot device. Nodes deployed without the `local boot`_ option could fail to boot if they are restarted outside of Ironic's control (i.e. rebooted by a local user) because the node will not attempt to PXE / network boot the kernel; using `local boot`_ solves this known issue. .. _`local boot`: http://docs.openstack.org/developer/ironic/deploy/install-guide.html#local-boot-with-partition-images ironic-5.1.0/doc/source/drivers/irmc.rst .. _irmc: ============ iRMC drivers ============ Overview ======== The iRMC drivers enable control of FUJITSU PRIMERGY servers via ServerView Common Command Interface (SCCI). There are 3 iRMC drivers: * ``pxe_irmc`` 
* ``iscsi_irmc`` * ``agent_irmc`` Prerequisites ============= * Install the ``python-scciclient`` package:: $ pip install "python-scciclient>=0.3.0" Drivers ======= pxe_irmc driver ^^^^^^^^^^^^^^^ This driver enables PXE deploy and power control via ServerView Common Command Interface (SCCI). Enabling the driver ~~~~~~~~~~~~~~~~~~~ - Add ``pxe_irmc`` to the list of ``enabled_drivers`` in the ``[DEFAULT]`` section of ``/etc/ironic/ironic.conf``. - Ironic Conductor must be restarted for the new driver to be loaded. Node configuration ~~~~~~~~~~~~~~~~~~ * Each node is configured for iRMC with PXE deploy by setting the following ironic node object's properties: - ``driver`` property to be ``pxe_irmc`` - ``driver_info/irmc_address`` property to be ``IP address`` or ``hostname`` of the iRMC. - ``driver_info/irmc_username`` property to be ``username`` for the iRMC with administrator privileges. - ``driver_info/irmc_password`` property to be ``password`` for irmc_username. - ``properties/capabilities`` property to be ``boot_mode:uefi`` if UEFI boot is required. * All nodes are configured by setting the following configuration options in the ``[irmc]`` section of ``/etc/ironic/ironic.conf``: - ``port``: Port to be used for iRMC operations; either 80 or 443. The default value is 443. Optional. - ``auth_method``: Authentication method for iRMC operations; either ``basic`` or ``digest``. The default value is ``basic``. Optional. - ``client_timeout``: Timeout (in seconds) for iRMC operations. The default value is 60. Optional. - ``sensor_method``: Sensor data retrieval method; either ``ipmitool`` or ``scci``. The default value is ``ipmitool``. Optional. * The following options are only required for inspection: - ``snmp_version``: SNMP protocol version; either ``v1``, ``v2c`` or ``v3``. The default value is ``v2c``. Optional. - ``snmp_port``: SNMP port. The default value is ``161``. Optional. - ``snmp_community``: SNMP community required for versions ``v1`` and ``v2c``. The default value is ``public``. Optional. - ``snmp_security``: SNMP security name required for version ``v3``. Optional. * Each node can be further configured by setting the following ironic node object's properties which override the parameter values in the ``[irmc]`` section of ``/etc/ironic/ironic.conf``: - ``driver_info/irmc_port`` property overrides ``port``. - ``driver_info/irmc_auth_method`` property overrides ``auth_method``. - ``driver_info/irmc_client_timeout`` property overrides ``client_timeout``. - ``driver_info/irmc_sensor_method`` property overrides ``sensor_method``. - ``driver_info/irmc_snmp_version`` property overrides ``snmp_version``. - ``driver_info/irmc_snmp_port`` property overrides ``snmp_port``. - ``driver_info/irmc_snmp_community`` property overrides ``snmp_community``. - ``driver_info/irmc_snmp_security`` property overrides ``snmp_security``. iscsi_irmc driver ^^^^^^^^^^^^^^^^^ This driver enables Virtual Media deploy of images built by Diskimage Builder, and power control via ServerView Common Command Interface (SCCI). Enabling the driver ~~~~~~~~~~~~~~~~~~~ - Add ``iscsi_irmc`` to the list of ``enabled_drivers`` in the ``[DEFAULT]`` section of ``/etc/ironic/ironic.conf``. - Ironic Conductor must be restarted for the new driver to be loaded. Node configuration ~~~~~~~~~~~~~~~~~~ * Each node is configured for iRMC with Virtual Media deploy by setting the following ironic node object's properties: - ``driver`` property to be ``iscsi_irmc`` - ``driver_info/irmc_address`` property to be ``IP address`` or ``hostname`` of the iRMC.
- ``driver_info/irmc_username`` property to be ``username`` for the iRMC with administrator privileges. - ``driver_info/irmc_password`` property to be ``password`` for irmc_username. - ``properties/capabilities`` property to be ``boot_mode:uefi`` if UEFI boot is required. - ``driver_info/irmc_deploy_iso`` property to be either ``deploy iso file name``, ``Glance UUID``, ``Glance URL`` or ``Image Service URL``. - ``instance_info/irmc_boot_iso`` property to be either ``boot iso file name``, ``Glance UUID``, ``Glance URL`` or ``Image Service URL``. This is an optional property for ``netboot``. * All nodes are configured by setting the following configuration options in the ``[irmc]`` section of ``/etc/ironic/ironic.conf``: - ``port``: Port to be used for iRMC operations; either ``80`` or ``443``. The default value is ``443``. Optional. - ``auth_method``: Authentication method for iRMC operations; either ``basic`` or ``digest``. The default value is ``basic``. Optional. - ``client_timeout``: Timeout (in seconds) for iRMC operations. The default value is 60. Optional. - ``sensor_method``: Sensor data retrieval method; either ``ipmitool`` or ``scci``. The default value is ``ipmitool``. Optional. - ``remote_image_share_root``: Ironic conductor node's ``NFS`` or ``CIFS`` root path. The default value is ``/remote_image_share_root``. - ``remote_image_server``: IP of the remote image server. - ``remote_image_share_type``: Share type of virtual media, either ``NFS`` or ``CIFS``. The default is ``CIFS``. - ``remote_image_share_name``: Share name of ``remote_image_server``. The default value is ``share``. - ``remote_image_user_name``: User name of ``remote_image_server``. - ``remote_image_user_password``: Password of ``remote_image_user_name``. - ``remote_image_user_domain``: Domain name of ``remote_image_user_name``. * The following options are only required for inspection: - ``snmp_version``: SNMP protocol version; either ``v1``, ``v2c`` or ``v3``. The default value is ``v2c``. Optional. - ``snmp_port``: SNMP port. The default value is ``161``. Optional. - ``snmp_community``: SNMP community required for versions ``v1`` and ``v2c``. The default value is ``public``. Optional. - ``snmp_security``: SNMP security name required for version ``v3``. Optional. * Each node can be further configured by setting the following ironic node object's properties which override the parameter values in the ``[irmc]`` section of ``/etc/ironic/ironic.conf``: - ``driver_info/irmc_port`` property overrides ``port``. - ``driver_info/irmc_auth_method`` property overrides ``auth_method``. - ``driver_info/irmc_client_timeout`` property overrides ``client_timeout``. - ``driver_info/irmc_sensor_method`` property overrides ``sensor_method``. - ``driver_info/irmc_snmp_version`` property overrides ``snmp_version``. - ``driver_info/irmc_snmp_port`` property overrides ``snmp_port``. - ``driver_info/irmc_snmp_community`` property overrides ``snmp_community``. - ``driver_info/irmc_snmp_security`` property overrides ``snmp_security``. agent_irmc driver ^^^^^^^^^^^^^^^^^ This driver enables Virtual Media deploy with IPA (Ironic Python Agent) and power control via ServerView Common Command Interface (SCCI). Enabling the driver ~~~~~~~~~~~~~~~~~~~ - Add ``agent_irmc`` to the list of ``enabled_drivers`` in the ``[DEFAULT]`` section of ``/etc/ironic/ironic.conf``. - Ironic Conductor must be restarted for the new driver to be loaded.
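For example, the relevant portion of ``/etc/ironic/ironic.conf`` might then look like the following sketch, where ``pxe_ipmitool`` simply stands in for whatever other drivers are already enabled on your conductor::

    [DEFAULT]
    enabled_drivers = pxe_ipmitool,agent_irmc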
Node configuration ~~~~~~~~~~~~~~~~~~ * Each node is configured for iRMC with Virtual Media deploy by setting the following ironic node object's properties: - ``driver`` property to be ``agent_irmc`` - ``driver_info/irmc_address`` property to be ``IP address`` or ``hostname`` of the iRMC. - ``driver_info/irmc_username`` property to be ``username`` for the iRMC with administrator privileges. - ``driver_info/irmc_password`` property to be ``password`` for irmc_username. - ``properties/capabilities`` property to be ``boot_mode:uefi`` if UEFI boot is required. - ``driver_info/irmc_deploy_iso`` property to be either ``deploy iso file name``, ``Glance UUID``, ``Glance URL`` or ``Image Service URL``. * All nodes are configured by setting the following configuration options in the ``[irmc]`` section of ``/etc/ironic/ironic.conf``: - ``port``: Port to be used for iRMC operations; either 80 or 443. The default value is 443. Optional. - ``auth_method``: Authentication method for iRMC operations; either ``basic`` or ``digest``. The default value is ``basic``. Optional. - ``client_timeout``: Timeout (in seconds) for iRMC operations. The default value is 60. Optional. - ``sensor_method``: Sensor data retrieval method; either ``ipmitool`` or ``scci``. The default value is ``ipmitool``. Optional. - ``remote_image_share_root``: Ironic conductor node's ``NFS`` or ``CIFS`` root path. The default value is ``/remote_image_share_root``. - ``remote_image_server``: IP of the remote image server. - ``remote_image_share_type``: Share type of virtual media, either ``NFS`` or ``CIFS``. The default is ``CIFS``. - ``remote_image_share_name``: Share name of ``remote_image_server``. The default value is ``share``. - ``remote_image_user_name``: User name of ``remote_image_server``. - ``remote_image_user_password``: Password of ``remote_image_user_name``. - ``remote_image_user_domain``: Domain name of ``remote_image_user_name``. * The following options are only required for inspection: - ``snmp_version``: SNMP protocol version; either ``v1``, ``v2c`` or ``v3``. The default value is ``v2c``. Optional. - ``snmp_port``: SNMP port. The default value is ``161``. Optional. - ``snmp_community``: SNMP community required for versions ``v1`` and ``v2c``. The default value is ``public``. Optional. - ``snmp_security``: SNMP security name required for version ``v3``. Optional. * Each node can be further configured by setting the following ironic node object's properties which override the parameter values in the ``[irmc]`` section of ``/etc/ironic/ironic.conf``: - ``driver_info/irmc_port`` property overrides ``port``. - ``driver_info/irmc_auth_method`` property overrides ``auth_method``. - ``driver_info/irmc_client_timeout`` property overrides ``client_timeout``. - ``driver_info/irmc_sensor_method`` property overrides ``sensor_method``. - ``driver_info/irmc_snmp_version`` property overrides ``snmp_version``. - ``driver_info/irmc_snmp_port`` property overrides ``snmp_port``. - ``driver_info/irmc_snmp_community`` property overrides ``snmp_community``. - ``driver_info/irmc_snmp_security`` property overrides ``snmp_security``. Supported platforms =================== This driver supports FUJITSU PRIMERGY BX S4 or RX S8 servers and above. - PRIMERGY BX920 S4 - PRIMERGY BX924 S4 - PRIMERGY RX300 S8 ironic-5.1.0/doc/source/deploy/0000775000567000056710000000000012674513633017514 5ustar jenkinsjenkins00000000000000ironic-5.1.0/doc/source/deploy/user-guide.rst0000664000567000056710000003417012674513470022323 0ustar jenkinsjenkins00000000000000..
_user-guide: ======================= Introduction to Ironic ======================= Ironic is an OpenStack project which provisions physical hardware as opposed to virtual machines. Ironic provides several reference drivers which leverage common technologies like PXE and IPMI to cover a wide range of hardware. Ironic's pluggable driver architecture also allows vendor-specific drivers to be added for improved performance or functionality not provided by reference drivers. If one thinks of traditional hypervisor functionality (e.g., creating a VM, enumerating virtual devices, managing the power state, loading an OS onto the VM, and so on), then Ironic may be thought of as a hypervisor API gluing together multiple drivers, each of which implements some portion of that functionality with respect to physical hardware. OpenStack's Ironic project makes physical servers as easy to provision as virtual machines in a cloud, which in turn will open up new avenues for enterprises and service providers. Ironic's driver replaces the Nova "bare metal" driver (in the Grizzly through Juno releases). Ironic is available for use and is supported by the Ironic developers starting with the Juno release. It is officially integrated with OpenStack as of the Kilo release. See https://wiki.openstack.org/wiki/Ironic for links to the project's current development status. Why Provision Bare Metal ======================== Here are a few use-cases for bare metal (physical server) provisioning in a cloud; there are doubtless many more interesting ones: - High-performance computing clusters - Computing tasks that require access to hardware devices which can't be virtualized - Database hosting (some databases run poorly in a hypervisor) - Single tenant, dedicated hardware for performance, security, dependability and other regulatory requirements - Or, rapidly deploying a cloud infrastructure Conceptual Architecture ======================= The following diagram shows the relationships and how all services come into play during the provisioning of a physical server. (Note that Ceilometer and Swift can be used with Ironic, but are missing from this diagram.) .. figure:: ../images/conceptual_architecture.png :alt: ConceptualArchitecture Logical Architecture ==================== The diagram below shows the logical architecture. It shows the basic components that form the Ironic service, the relation of the Ironic service to other OpenStack services, and the logical flow of a boot instance request resulting in the provisioning of a physical server. .. figure:: ../images/logical_architecture.png :alt: Logical Architecture The Ironic service is composed of the following components: #. a RESTful API service, by which operators and other services may interact with the managed bare metal servers. #. a Conductor service, which does the bulk of the work. Functionality is exposed via the API service. The Conductor and API services communicate via RPC. #. various Drivers that support heterogeneous hardware #. a Message Queue #. a Database for storing information about the resources. Among other things, this includes the state of the conductors, nodes (physical servers), and drivers. As shown in Figure 1.2 (Logical Architecture), a user request to boot an instance is passed to the Nova Compute service via the Nova API and Nova Scheduler. The Compute service hands over this request to the Ironic service, where the request passes from the Ironic API, to the Conductor, to a Driver to successfully provision a physical server for the user.
Just as the Nova Compute service talks to various OpenStack services like Glance, Neutron, Swift, etc. to provision a virtual machine instance, here the Ironic service talks to the same OpenStack services for image, network and other resource needs to provision a bare metal instance. Key Technologies for Bare Metal Hosting ======================================= PXE ----- Preboot Execution Environment (PXE) is part of the Wired for Management (WfM) specification developed by Intel and Microsoft. PXE enables a system's BIOS and network interface card (NIC) to bootstrap a computer from the network in place of a disk. Bootstrapping is the process by which a system loads the OS into local memory so that it can be executed by the processor. This capability of allowing a system to boot over a network simplifies server deployment and server management for administrators. DHCP ------ Dynamic Host Configuration Protocol (DHCP) is a standardized networking protocol used on Internet Protocol (IP) networks for dynamically distributing network configuration parameters, such as IP addresses for interfaces and services. Using PXE, the BIOS uses DHCP to obtain an IP address for the network interface and to locate the server that stores the network bootstrap program (NBP). NBP ------ The Network Bootstrap Program (NBP) is equivalent to GRUB (GRand Unified Bootloader) or LILO (LInux LOader) - loaders which are traditionally used in local booting. Like the boot program in a hard drive environment, the NBP is responsible for loading the OS kernel into memory so that the OS can be bootstrapped over a network. TFTP ------ Trivial File Transfer Protocol (TFTP) is a simple file transfer protocol that is generally used for automated transfer of configuration or boot files between machines in a local environment. In a PXE environment, TFTP is used to download the NBP over the network using information from the DHCP server. IPMI ------ Intelligent Platform Management Interface (IPMI) is a standardized computer system interface used by system administrators for out-of-band management of computer systems and monitoring of their operation. It is a method to manage systems that may be unresponsive or powered off by using only a network connection to the hardware rather than to an operating system. Ironic Deployment Architecture ============================== The Ironic RESTful API service is used to enroll hardware that Ironic will manage. A cloud administrator usually registers the hardware, specifying its attributes such as MAC addresses and IPMI credentials. There can be multiple instances of the API service. The Ironic conductor service does the bulk of the work. For security reasons, it is advisable to place the conductor service on an isolated host, since it is the only service that requires access to both the data plane and IPMI control plane. There can be multiple instances of the conductor service to support various classes of drivers and also to manage failover. Instances of the conductor service should be on separate nodes. Each conductor can itself run many drivers to operate heterogeneous hardware. This is depicted in the following figure. The API exposes a list of supported drivers and the names of conductor hosts servicing them. .. figure:: ../images/deployment_architecture_2.png :alt: Deployment Architecture 2 Understanding Bare Metal Deployment =================================== What happens when a boot instance request comes in?
The diagram below walks through the steps involved in provisioning a bare metal instance. The following prerequisites must be met before the deployment process starts: - Dependent packages (such as tftp-server, ipmi, and syslinux) must be configured on the Bare Metal service node(s) where ironic-conductor is running, for bare metal provisioning. - Nova must be configured to make use of the bare metal service endpoint, and the compute driver on the Nova compute node(s) must be configured to use the ironic driver. - Flavors must be created for the available hardware; Nova must know the flavor to boot from. - Images must be made available in Glance. Listed below are some image types required for successful bare metal deployment: + bm-deploy-kernel + bm-deploy-ramdisk + user-image + user-image-vmlinuz + user-image-initrd - Hardware must be enrolled via the Ironic RESTful API service. .. figure:: ../images/deployment_steps.png :alt: Deployment Steps Deploy Process ----------------- #. A boot instance request comes in via the Nova API, through the message queue to the Nova scheduler. #. The Nova scheduler applies filters and finds an eligible compute node. The Nova scheduler uses flavor extra_specs details such as 'cpu_arch', 'baremetal:deploy_kernel_id', 'baremetal:deploy_ramdisk_id' etc. to match the target physical node. #. A spawn task is placed by the driver; it contains all information, such as which image to boot from. It invokes driver.spawn from the virt layer of Nova compute. #. Information about the bare metal node is retrieved from the bare metal database and the node is reserved. #. Images from Glance are pulled down to the local disk of the Ironic conductor servicing the bare metal node. #. For pxe_* drivers these include all images: both the deploy ramdisk and user instance images. #. For agent_* drivers only the deploy ramdisk is stored locally. Temporary URLs in OpenStack's Object Storage service are created for user instance images. #. Virtual interfaces are plugged in and the Neutron API updates the DHCP port to support PXE/TFTP options. #. Nova's ironic driver issues a deploy request via the Ironic API to the Ironic conductor servicing the bare metal node. #. The PXE driver prepares the TFTP bootloader. #. The IPMI driver issues commands to enable network boot of the node and power it on. #. The node boots the deploy ramdisk. Next, depending on the exact driver used, either the conductor copies the image over iSCSI to the physical node (pxe_* group of drivers) or the deploy ramdisk downloads the image from a temporary URL (agent_* group of drivers), which can be generated by a variety of object stores, e.g. *swift*, *radosgw*, etc., and uploaded to OpenStack's Object Storage service. In the former case, the conductor connects to the iSCSI endpoint, partitions the volume, "dd"s the image, and closes the iSCSI connection. The deployment is done. The Ironic conductor will switch the PXE config to service mode and notify the ramdisk agent of the successful deployment. #. The IPMI driver reboots the bare metal node. Note that there are two power cycles during bare metal deployment: the first time, the node is powered on and the images get deployed as mentioned in step 9; the second time, as in this step, the node is rebooted after the images have been deployed. #. The bare metal node status is updated and the node instance is made available. Example 1: PXE Boot and iSCSI Deploy Process -------------------------------------------- This process is used with the pxe_* family of drivers. ..
seqdiag:: :scale: 80 :alt: pxe_ipmi diagram { Nova; API; Conductor; Neutron; "TFTP/HTTPd"; Node; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Nova -> API [label = "Set instance_info", note = "image_source\n,root_gb,etc."]; Nova -> API [label = "Set provision_state"]; API -> Conductor [label = "do_node_deploy()"]; Conductor -> Conductor [label = "Cache images"]; Conductor -> Conductor [label = "Build TFTP config"]; Conductor -> Neutron [label = "Update DHCPBOOT"]; Conductor -> Node [label = "IPMI power-on"]; Node -> Neutron [label = "DHCP request"]; Neutron -> Node [label = "next-server = Conductor"]; Node -> Conductor [label = "Attempts to tftpboot from Conductor"]; "TFTP/HTTPd" -> Node [label = "Send deploy kernel, ramdisk\nand config"]; Node -> Node [label = "Runs deploy\nramdisk"]; Node -> Node [label = "Exposes disks\nvia iSCSI"]; Node -> API [label = "POST /vendor_passthru?method=pass_deploy_info"]; API -> Conductor [label = "Continue deploy"]; Conductor -> Node [label = "iSCSI attach"]; Conductor -> Node [label = "Copies user image"]; Conductor -> Node [label = "iSCSI detach"]; Conductor -> Node [label = "Sends 'DONE' message"]; Conductor -> Conductor [label = "Mark node as\nACTIVE"]; Node -> Node [label = "Terminates iSCSI endpoint"]; Node -> Node [label = "Reboots into\nuser instance"]; } (From a `talk`_ and `slides`_) Example 2: PXE Boot and Direct Deploy Process ---------------------------------------------- This process is used with agent_* family of drivers. .. seqdiag:: :scale: 80 :alt: pxe_ipmi_agent diagram { Nova; API; Conductor; Neutron; "TFTP/HTTPd"; Node; activation = none; edge_length = 250; span_height = 1; default_note_color = white; default_fontsize = 14; Nova -> API [label = "Set instance_info", note = "image_source\n,root_gb,etc."]; Nova -> API [label = "Set provision_state"]; API -> Conductor [label = "do_node_deploy()"]; Conductor -> Conductor [label = "Cache images"]; Conductor -> Conductor [label = "Update pxe,\ntftp configs"]; Conductor -> Neutron [label = "Update DHCPBOOT"]; Conductor -> Node [label = "power on"]; Node -> Neutron [label = "DHCP request"]; Neutron -> Node [label = "next-server = Conductor"]; Node -> Conductor [label = "Attempts tftpboot"]; "TFTP/HTTPd" -> Node [label = "Send deploy kernel, ramdisk and config"]; Node -> Node [label = "Runs agent\nramdisk"]; Node -> API [label = "lookup()"]; API -> Conductor [label = "..."]; Conductor -> Node [label = "Pass UUID"]; Node -> API [label = "Heartbeat (UUID)"]; API -> Conductor [label = "Heartbeat"]; Conductor -> Node [label = "Continue deploy: Pass image, disk info"]; === Node downloads image, writes to disk === Node -> API [label = "Heartbeat periodically"]; API -> Conductor [label = "..."]; Conductor -> Node [label = "Is deploy done yet?"]; Node -> Conductor [label = "Still working..."]; === When deploy is done === Conductor -> Neutron [label = "Clear DHCPBOOT"]; Conductor -> Node [label = "Set bootdev HDD"]; Conductor -> Node [label = "Reboot"]; Node -> Node [label = "Reboots into\nuser instance"]; } (From a `talk`_ and `slides`_) .. _talk: https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/isn-and-039t-it-ironic-the-bare-metal-cloud .. _slides: http://devananda.github.io/talks/isnt-it-ironic.html ironic-5.1.0/doc/source/deploy/cleaning.rst0000664000567000056710000002777212674513466022051 0ustar jenkinsjenkins00000000000000.. 
_cleaning: ============= Node cleaning ============= Overview ======== Ironic provides two modes for node cleaning: ``automated`` and ``manual``. ``Automated cleaning`` is automatically performed before the first workload has been assigned to a node and when hardware is recycled from one workload to another. ``Manual cleaning`` must be invoked by the operator. .. _automated_cleaning: Automated cleaning ================== When hardware is recycled from one workload to another, ironic performs automated cleaning on the node to ensure it's ready for another workload. This ensures the tenant will get a consistent bare metal node deployed every time. Ironic implements automated cleaning by collecting a list of cleaning steps to perform on a node from the Power, Deploy, Management, and RAID interfaces of the driver assigned to the node. These steps are then ordered by priority and executed on the node when the node is moved to ``cleaning`` state, if automated cleaning is enabled. With automated cleaning, nodes move to ``cleaning`` state when moving from ``active`` -> ``available`` state (when the hardware is recycled from one workload to another). Nodes also traverse cleaning when going from ``manageable`` -> ``available`` state (before the first workload is assigned to the nodes). For a full understanding of all state transitions into cleaning, please see :ref:`states`. Ironic added support for automated cleaning in the Kilo release. .. _enabling-cleaning: Enabling automated cleaning --------------------------- To enable automated cleaning, ensure that your ironic.conf is set as follows. (Prior to Mitaka, this option was named 'clean_nodes'.):: [conductor] automated_clean=true This will enable the default set of cleaning steps, based on your hardware and ironic drivers. If you're using an agent_* driver, this includes, by default, erasing all of the previous tenant's data. You may also need to configure a `Cleaning Network`_. Cleaning steps -------------- Cleaning steps used for automated cleaning are ordered from higher to lower priority, where a larger integer is a higher priority. In case of a conflict between priorities across drivers, the following resolution order is used: Power, Management, Deploy, and RAID interfaces. You can skip a cleaning step by setting the priority for that cleaning step to zero or 'None'. You can reorder the cleaning steps by modifying the integer priorities of the cleaning steps. See `How do I change the priority of a cleaning step?`_ for more information. .. _manual_cleaning: Manual cleaning =============== ``Manual cleaning`` is typically used to handle long-running, manual, or destructive tasks that an operator wishes to perform either before the first workload has been assigned to a node or between workloads. When initiating a manual clean, the operator specifies the cleaning steps to be performed. Manual cleaning can only be performed when a node is in the ``manageable`` state. Once the manual cleaning is finished, the node will be put in the ``manageable`` state again. Ironic added support for manual cleaning in the 4.4 (Mitaka series) release. Setup ----- In order for manual cleaning to work, you may need to configure a `Cleaning Network`_. Starting manual cleaning via API -------------------------------- Manual cleaning can only be performed when a node is in the ``manageable`` state. The REST API request to initiate it is available in API version 1.15 and higher:: PUT /v1/nodes/<node_ident>/states/provision (Additional information is available in the Ironic REST API documentation.)
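As a concrete sketch, assuming the API is reachable at ``localhost:6385``, a valid token is in ``$TOKEN``, and ``$NODE_UUID`` identifies the node, the request can be issued with curl; the body format and a fuller example are described below::

    curl -X PUT http://localhost:6385/v1/nodes/$NODE_UUID/states/provision \
         -H "Content-Type: application/json" \
         -H "X-Auth-Token: $TOKEN" \
         -H "X-OpenStack-Ironic-API-Version: 1.15" \
         -d '{"target": "clean", "clean_steps": [{"interface": "deploy", "step": "erase_devices"}]}'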
This API will allow operators to put a node directly into ``cleaning`` provision state from ``manageable`` state via 'target': 'clean'. The PUT will also require the argument 'clean_steps' to be specified. This is an ordered list of cleaning steps. A cleaning step is represented by a dictionary (JSON), in the form:: { 'interface': <interface>, 'step': <name of cleaning step>, 'args': {<keyword1>: <value1>, ..., <keywordN>: <valueN>} } The 'interface' and 'step' keys are required for all steps. If a cleaning step method takes keyword arguments, the 'args' key may be specified. It is a dictionary of keyword variable arguments, with each keyword-argument entry being <keyword>: <value>. If any step is missing a required keyword argument, manual cleaning will not be performed and the node will be put in ``clean failed`` provision state with an appropriate error message. If, during the cleaning process, a cleaning step determines that it has incorrect keyword arguments, all earlier steps will be performed and then the node will be put in ``clean failed`` provision state with an appropriate error message. An example of the request body for this API:: { "target":"clean", "clean_steps": [{ "interface": "raid", "step": "create_configuration", "args": {"create_nonroot_volumes": "False"} }, { "interface": "deploy", "step": "erase_devices" }] } In the above example, the driver's RAID interface would configure hardware RAID without non-root volumes, and then all devices would be erased (in that order). Starting manual cleaning via ``ironic`` CLI ------------------------------------------- Manual cleaning is supported in the ``ironic node-set-provision-state`` command, starting with python-ironicclient 1.2. The target/verb is 'clean' and the argument 'clean-steps' must be specified. Its value is one of: - a JSON string - the path to a JSON file whose contents are passed to the API - '-', to read from stdin. This allows piping in the clean steps. Using '-' to signify stdin is common in Unix utilities. Keep in mind that manual cleaning is only supported in API version 1.15 and higher. An example of doing this with a JSON string:: ironic --ironic-api-version 1.15 node-set-provision-state \ <node> clean --clean-steps '[...]' Or with a file:: ironic --ironic-api-version 1.15 node-set-provision-state \ <node> clean --clean-steps my-clean-steps.txt Or with stdin:: cat my-clean-steps.txt | ironic --ironic-api-version 1.15 \ node-set-provision-state <node> clean --clean-steps - Cleaning Network ================ If you are using the Neutron DHCP provider (the default) you will also need to ensure you have configured a cleaning network. This network will be used to boot the ramdisk for in-band cleaning. You can use the same network as your tenant network. For steps to set up the cleaning network, please see :ref:`CleaningNetworkSetup`. .. _InbandvsOutOfBandCleaning: In-band vs out-of-band ====================== Ironic uses two main methods to perform actions on a node: in-band and out-of-band. Ironic supports using both methods to clean a node. In-band ------- In-band steps are performed by ironic making API calls to a ramdisk running on the node using a Deploy driver. Currently, only the ironic-python-agent ramdisk used with an agent_* driver supports in-band cleaning. By default, ironic-python-agent ships with a minimal cleaning configuration, only erasing disks. However, with this ramdisk, you can add your own cleaning steps and/or override default cleaning steps with a custom Hardware Manager. There is currently no support for in-band cleaning using the ironic pxe ramdisk.
Out-of-band ----------- Out-of-band steps are actions performed by your management controller, such as IPMI, iLO, or DRAC. Out-of-band steps will be performed by ironic using a Power or Management driver. Which steps are performed depends on the driver and hardware. For out-of-band cleaning operations supported by iLO drivers, refer to :ref:`ilo_node_cleaning`. FAQ === How are cleaning steps ordered? ------------------------------- For automated cleaning, cleaning steps are ordered by integer priority, where a larger integer is a higher priority. In case of a conflict between priorities across drivers, the following resolution order is used: Power, Management, Deploy, and RAID interfaces. For manual cleaning, the cleaning steps should be specified in the desired order. How do I skip a cleaning step? ------------------------------ For automated cleaning, cleaning steps with a priority of 0 or None are skipped. How do I change the priority of a cleaning step? ------------------------------------------------ For manual cleaning, specify the cleaning steps in the desired order. For automated cleaning, it depends on whether the cleaning steps are out-of-band or in-band. Most out-of-band cleaning steps have an explicit configuration option for priority. Changing the priority of an in-band (ironic-python-agent) cleaning step requires use of a custom HardwareManager. The only exception is ``erase_devices``, which can have its priority set in ironic.conf. For instance, to disable erase_devices, you'd set the following configuration option:: [deploy] erase_devices_priority=0 To enable/disable the in-band disk erase using the ``agent_ilo`` driver, use the following configuration option:: [ilo] clean_priority_erase_devices=0 The generic hardware manager first tries to perform ATA disk erase by using the ``hdparm`` utility. If ATA disk erase is not supported, it performs software-based disk erase using the ``shred`` utility. By default, the number of iterations performed by ``shred`` for software-based disk erase is 1. To configure the number of iterations, use the following configuration option:: [deploy] erase_devices_iterations=1 What cleaning step is running? ------------------------------ To check what cleaning step the node is performing or attempted to perform and failed, either query the node endpoint for the node or run ``ironic node-show $node_ident`` and look in the `driver_internal_info` field. The `clean_steps` field will contain a list of all remaining steps with their priorities, and the first one listed is the step currently in progress or that the node failed before going into ``clean failed`` state. Should I disable automated cleaning? ------------------------------------ Automated cleaning is recommended for ironic deployments; however, there are some tradeoffs to having it enabled. For instance, ironic cannot deploy a new instance to a node that is currently cleaning, and cleaning can be a time-consuming process. To mitigate this, we suggest using disks with support for cryptographic ATA Security Erase, as typically the erase_devices step in the deploy driver takes the longest time to complete of all cleaning steps. Why can't I power on/off a node while it's cleaning? ---------------------------------------------------- During cleaning, nodes may be performing actions that shouldn't be interrupted, such as BIOS or Firmware updates. As a result, operators are forbidden from changing the power state via the ironic API while a node is cleaning.
Troubleshooting =============== If cleaning fails on a node, the node will be put into ``clean failed`` state and placed in maintenance mode, to prevent ironic from taking actions on the node. Nodes in ``clean failed`` will not be powered off, as the node might be in a state such that powering it off could damage the node or remove useful information about the nature of the cleaning failure. A ``clean failed`` node can be moved to ``manageable`` state, where it cannot be scheduled by nova and you can safely attempt to fix the node. To move a node from ``clean failed`` to ``manageable``: ``ironic node-set-provision-state $node_ident manage``. You can now take actions on the node, such as replacing a bad disk drive. Strategies for determining why a cleaning step failed include checking the ironic conductor logs, viewing logs on the still-running ironic-python-agent (if an in-band step failed), or performing general hardware troubleshooting on the node. When the node is repaired, you can move the node back to ``available`` state, to allow it to be scheduled by nova. :: # First, move it out of maintenance mode ironic node-set-maintenance $node_ident false # Now, make the node available for scheduling by nova ironic node-set-provision-state $node_ident provide The node will begin automated cleaning from the start, and move to ``available`` state when complete. ironic-5.1.0/doc/source/deploy/upgrade-guide.rst0000664000567000056710000001033112674513466022772 0ustar jenkinsjenkins00000000000000.. _upgrade-guide: ===================================== Bare Metal Service Upgrade Guide ===================================== This document outlines various steps and notes for operators to consider when upgrading their Ironic-driven clouds from previous versions of OpenStack. The Ironic service is tightly coupled with the Ironic driver that is shipped with Nova. Currently, some special considerations must be taken into account when upgrading your cloud from previous versions of OpenStack. Upgrading from Kilo to Liberty ============================== In-band Inspection ------------------ If you used in-band inspection with **ironic-discoverd**, you have to install **python-ironic-inspector-client** during the upgrade. This package contains a client module for the in-band inspection service, which was previously part of the **ironic-discoverd** package. Ironic Liberty supports the **ironic-discoverd** service, but does not support its in-tree client module. Please refer to the ironic-inspector version support matrix for details on which Ironic version can work with which **ironic-inspector**/**ironic-discoverd** version. It's also highly recommended that you switch to using **ironic-inspector**, which is a newer (and API-compatible) version of the same service. The **ironic-discoverd** to **ironic-inspector** upgrade procedure: #. Install **ironic-inspector** on the machine where you have **ironic-discoverd** (usually the same host as the conductor). #. (Recommended) update the **ironic-inspector** configuration file to stop using deprecated configuration options, as marked by the comments in the example configuration file (example.conf). The file name is provided on the command line when starting **ironic-discoverd**, and the previously recommended default was ``/etc/ironic-discoverd/discoverd.conf``. In this case, for the sake of consistency it's recommended you move the configuration file to ``/etc/ironic-inspector/inspector.conf``. #. Shutdown **ironic-discoverd**, start **ironic-inspector**. #. During upgrade of each conductor instance: #. Shutdown the conductor #.
Uninstall **ironic-discoverd**, install **python-ironic-inspector-client** #. Update the conductor Kilo -> Liberty #. (Recommended) update ``ironic.conf`` to use the ``[inspector]`` section instead of ``[discoverd]`` (option names are the same) #. Start the conductor Upgrading from Juno to Kilo =========================== When upgrading a cloud from Juno to Kilo, users must ensure the Nova service is upgraded prior to upgrading the Ironic service. Additionally, users need to set a special config flag in Nova prior to upgrading to ensure the newer version of Nova is not attempting to take advantage of new Ironic features until the Ironic service has been upgraded. The steps for upgrading your Nova and Ironic services are as follows: - Edit nova.conf and ensure force_config_drive=False is set in the [DEFAULT] group. Restart nova-compute if necessary. - Install new Nova code, run database migrations - Install new python-ironicclient code. - Restart Nova services. - Install new Ironic code, run database migrations, restart Ironic services. - Edit nova.conf and set force_config_drive to your liking, restarting nova-compute if necessary. Note that during the period between Nova's upgrade and Ironic's upgrades, instances can still be provisioned to nodes; however, any attempt by users to specify a config drive for an instance will cause an error until Ironic's upgrade has completed. Cleaning -------- A new feature in Kilo is support for the automated cleaning of nodes between workloads to ensure the node is ready for another workload. This can include erasing the hard drives, updating firmware, and other steps. For more information, see :ref:`automated_cleaning`. If Ironic is configured with automated cleaning enabled (defaults to True) and to use Neutron as the DHCP provider (also the default), you will need to set the `cleaning_network_uuid` option in the Ironic configuration file before starting the Kilo Ironic service. See :ref:`CleaningNetworkSetup` for information on how to set up the cleaning network for Ironic. ironic-5.1.0/doc/source/deploy/troubleshooting.rst0000664000567000056710000000564012674513466023502 0ustar jenkinsjenkins00000000000000.. _troubleshooting: ====================== Troubleshooting Ironic ====================== Nova returns "No valid host was found" Error ============================================ Sometimes the Nova Conductor log file "nova-conductor.log" or a message returned from the Nova API contains the following error:: NoValidHost: No valid host was found. There are not enough hosts available. "No valid host was found" means that the Nova Scheduler could not find a bare metal node suitable for booting the new instance. This in turn usually means some mismatch between resources that Nova expects to find and resources that Ironic advertised to Nova. A few things should be checked in this case: #. Inspection should have succeeded for you before, or you should have entered the required Ironic node properties manually. For each node with available state in ``ironic node-list --provision-state available`` use :: ironic node-show <node> and make sure that the ``properties`` JSON field has valid values for the keys ``cpus``, ``cpu_arch``, ``memory_mb`` and ``local_gb``. #. The Nova flavor that you are using does not match any properties of the available Ironic nodes. Use :: nova flavor-show <flavor> to compare. If you're using exact match filters in Nova Scheduler, please make sure the flavor and the node properties match exactly.
Regarding the extra specs in the flavor, you should make sure they map to ``node.properties['capabilities']``. #. Make sure that enough nodes are in ``available`` state according to ``ironic node-list --provision-state available``. #. Make sure nodes you're going to deploy to are not in maintenance mode. Again, use ``ironic node-list`` to check. A node automatically going to maintenance mode usually means wrong power credentials for this node. Check them and then remove maintenance mode:: ironic node-set-maintenance <node> off #. After making changes to nodes in Ironic, it takes time for those changes to propagate from Ironic to Nova. Check that :: nova hypervisor-stats correctly shows the total amount of resources in your system. You can also check ``nova hypervisor-list`` to see the status of individual Ironic nodes as reported to Nova. And you can correlate the Nova "hypervisor hostname" to the Ironic node UUID. #. If none of the above helped, check the Ironic conductor log carefully to see if there are any conductor-related errors which are the root cause for "No valid host was found". If there are any "Error in deploy of node <node-uuid>: [Errno 28] ..." error messages in the Ironic conductor log, it means the conductor ran into a special error during deployment. So you can check the log carefully to fix or work around the error and then try again. API Errors ========== The `debug_tracebacks_in_api` config option may be set to return tracebacks in the API response for all 4xx and 5xx errors. ironic-5.1.0/doc/source/deploy/drivers.rst0000664000567000056710000000275712674513466021733 0ustar jenkinsjenkins00000000000000.. _drivers: ================= Enabling drivers ================= Ironic-Python-Agent (agent) --------------------------- Ironic-Python-Agent is an agent that handles *ironic* bare metal nodes for various actions, such as inspection and deployment of such nodes, and runs processes inside of a ramdisk. For more information on this, see :ref:`IPA`. IPMITool -------- .. toctree:: :maxdepth: 1 ../drivers/ipmitool DRAC ---- DRAC with PXE deploy ^^^^^^^^^^^^^^^^^^^^ - Add ``pxe_drac`` to the list of ``enabled_drivers`` in ``/etc/ironic/ironic.conf`` - Install the python-dracclient package AMT ---- .. toctree:: :maxdepth: 1 ../drivers/amt SNMP ---- .. toctree:: :maxdepth: 1 ../drivers/snmp iLO driver ---------- .. toctree:: :maxdepth: 1 ../drivers/ilo SeaMicro driver --------------- .. toctree:: :maxdepth: 1 ../drivers/seamicro iRMC ---- .. toctree:: :maxdepth: 1 ../drivers/irmc VirtualBox drivers ------------------ .. toctree:: :maxdepth: 1 ../drivers/vbox Cisco UCS driver ---------------- .. toctree:: :maxdepth: 1 ../drivers/ucs Wake-On-Lan driver ------------------ .. toctree:: :maxdepth: 1 ../drivers/wol iBoot driver ------------ .. toctree:: :maxdepth: 1 ../drivers/iboot CIMC driver ------------ .. toctree:: :maxdepth: 1 ../drivers/cimc OneView driver -------------- .. toctree:: :maxdepth: 1 ../drivers/oneview XenServer ssh driver -------------------- .. toctree:: :maxdepth: 1 ../drivers/xenserver ironic-5.1.0/doc/source/deploy/install-guide.rst0000664000567000056710000031437612674513470023022 0ustar jenkinsjenkins00000000000000.. _install-guide: ================== Installation Guide ================== This document is continually updated and reflects the latest available code of the Bare Metal service (ironic). Users of releases may encounter differences and are encouraged to look at earlier versions of this document for guidance.
Service overview ================ The Bare Metal service is a collection of components that provides support to manage and provision physical machines. Also known as the ``ironic`` project, the Bare Metal service may, depending upon configuration, interact with several other OpenStack services. This includes: - the OpenStack Telemetry module (ceilometer) for consuming the IPMI metrics - the OpenStack Identity service (keystone) for request authentication and to locate other OpenStack services - the OpenStack Image service (glance) from which to retrieve images and image meta-data - the OpenStack Networking service (neutron) for DHCP and network configuration - the OpenStack Compute service (nova) works with the Bare Metal service and acts as a user-facing API for instance management, while the Bare Metal service provides the admin/operator API for hardware management. The OpenStack Compute service also provides scheduling facilities (matching flavors <-> images <-> hardware), tenant quotas, IP assignment, and other services which the Bare Metal service does not, in and of itself, provide. - the OpenStack Block Storage service (cinder) provides volumes, but this aspect is not yet available. The Bare Metal service includes the following components: - ironic-api: A RESTful API that processes application requests by sending them to the ironic-conductor over RPC. - ironic-conductor: Adds/edits/deletes nodes; powers on/off nodes with ipmi or ssh; provisions/deploys/decommissions bare metal nodes. - ironic-python-agent: A python service which is run in a temporary ramdisk to provide ironic-conductor service(s) with remote access and in-band hardware control. - python-ironicclient: A command-line interface (CLI) for interacting with the Bare Metal service. Additionally, the Bare Metal service has certain external dependencies, which are very similar to other OpenStack services: - A database to store hardware information and state. You can set the database back-end type and location. A simple approach is to use the same database back end as the Compute service. Another approach is to use a separate database back-end to further isolate bare metal resources (and associated metadata) from users. - A queue. A central hub for passing messages, such as RabbitMQ. It should use the same implementation as that of the Compute service. Optionally, one may wish to utilize the following associated projects for additional functionality: - ironic-inspector_; An associated service which performs in-band hardware introspection by PXE booting unregistered hardware into a "discovery ramdisk". - diskimage-builder_; May be used to customize machine images, and to create deploy and discovery ramdisks, if necessary. - bifrost_; a set of Ansible playbooks that automates the task of deploying a base image onto a set of known hardware using ironic. .. _ironic-inspector: https://github.com/openstack/ironic-inspector .. _diskimage-builder: https://github.com/openstack/diskimage-builder .. _bifrost: https://github.com/openstack/bifrost .. todo: include coreos-image-builder reference here, once the split is done Install and configure prerequisites =================================== The Bare Metal service is a collection of components that provides support to manage and provision physical machines. You can configure these components to run on separate nodes or the same node. In this guide, the components run on one node, typically the Compute Service's compute node. This section shows you how to install and configure the components.
It assumes that the Identity, Image, Compute, and Networking services have already been set up. Configure the Identity service for the Bare Metal service --------------------------------------------------------- #. Create the Bare Metal service user (for example, ``ironic``). The service uses this to authenticate with the Identity service. Use the ``service`` tenant and give the user the ``admin`` role:: openstack user create --password IRONIC_PASSWORD \ --email ironic@example.com ironic openstack role add --project service --user ironic admin #. You must register the Bare Metal service with the Identity service so that other OpenStack services can locate it. To register the service:: openstack service create --name ironic --description \ "Ironic baremetal provisioning service" baremetal #. Use the ``id`` property that is returned from the Identity service when registering the service (above) to create the endpoint, and replace IRONIC_NODE with your Bare Metal service's API node:: openstack endpoint create --region RegionOne \ --publicurl http://IRONIC_NODE:6385 \ --internalurl http://IRONIC_NODE:6385 \ --adminurl http://IRONIC_NODE:6385 \ baremetal Set up the database for Bare Metal ---------------------------------- The Bare Metal service stores information in a database. This guide uses the MySQL database that is used by other OpenStack services. #. In MySQL, create an ``ironic`` database that is accessible by the ``ironic`` user. Replace IRONIC_DBPASSWORD with a suitable password:: # mysql -u root -p mysql> CREATE DATABASE ironic CHARACTER SET utf8; mysql> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \ IDENTIFIED BY 'IRONIC_DBPASSWORD'; mysql> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \ IDENTIFIED BY 'IRONIC_DBPASSWORD'; Install the Bare Metal service ------------------------------ #. Install from packages and configure services:: Ubuntu 14.04 (trusty) or higher: sudo apt-get install ironic-api ironic-conductor python-ironicclient Fedora 21/RHEL7/CentOS7: sudo yum install openstack-ironic-api openstack-ironic-conductor \ python-ironicclient sudo systemctl enable openstack-ironic-api openstack-ironic-conductor sudo systemctl start openstack-ironic-api openstack-ironic-conductor Fedora 22 or higher: sudo dnf install openstack-ironic-api openstack-ironic-conductor \ python-ironicclient sudo systemctl enable openstack-ironic-api openstack-ironic-conductor sudo systemctl start openstack-ironic-api openstack-ironic-conductor Configure the Bare Metal service ================================ The Bare Metal service is configured via its configuration file. This file is typically located at ``/etc/ironic/ironic.conf``. Although some configuration options are mentioned here, it is recommended that you review all the available options so that the Bare Metal service is configured for your needs. It is possible to set up the ironic-api and ironic-conductor services on the same host or on different hosts. Users can also add new ironic-conductor hosts to deal with an increasing number of bare metal nodes, but the additional ironic-conductor services should be at the same version as the existing ironic-conductor services. Configuring ironic-api service ------------------------------ #. The Bare Metal service stores information in a database. This guide uses the MySQL database that is used by other OpenStack services. Configure the location of the database via the ``connection`` option.
In the following, replace IRONIC_DBPASSWORD with the password of your ``ironic`` user, and replace DB_IP with the IP address where the DB server is located:: [database] ... # The SQLAlchemy connection string used to connect to the # database (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic?charset=utf8 #. Configure the ironic-api service to use the RabbitMQ message broker by setting one or more of these options. Replace RABBIT_HOST with the address of the RabbitMQ server:: [DEFAULT] ... # The messaging driver to use, defaults to rabbit. Other # drivers include qpid and zmq. (string value) #rpc_backend=rabbit [oslo_messaging_rabbit] ... # The RabbitMQ broker address where a single node is used # (string value) rabbit_host=RABBIT_HOST # The RabbitMQ userid (string value) #rabbit_userid=guest # The RabbitMQ password (string value) #rabbit_password=guest #. Configure the ironic-api service to use these credentials with the Identity service. Replace IDENTITY_IP with the IP of the Identity server, and replace IRONIC_PASSWORD with the password you chose for the ``ironic`` user in the Identity service:: [DEFAULT] ... # Authentication strategy used by ironic-api: one of # "keystone" or "noauth". "noauth" should not be used in a # production environment because all authentication will be # disabled. (string value) #auth_strategy=keystone [keystone_authtoken] ... # Complete public Identity API endpoint (string value) auth_uri=http://IDENTITY_IP:5000/ # Complete admin Identity API endpoint. This should specify # the unversioned root endpoint e.g. https://localhost:35357/ # (string value) identity_uri=http://IDENTITY_IP:35357/ # Service username. (string value) admin_user=ironic # Service account password. (string value) admin_password=IRONIC_PASSWORD # Service tenant name. (string value) admin_tenant_name=service #. Create the Bare Metal service database tables:: ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema #. Restart the ironic-api service:: Fedora/RHEL7/CentOS7: sudo systemctl restart openstack-ironic-api Ubuntu: sudo service ironic-api restart Configuring ironic-conductor service ------------------------------------ #. Replace HOST_IP with the IP of the conductor host, and replace DRIVERS with a comma-separated list of drivers you chose for the conductor service as follows:: [DEFAULT] ... # IP address of this host. If unset, will determine the IP # programmatically. If unable to do so, will use "127.0.0.1". # (string value) my_ip = HOST_IP # Specify the list of drivers to load during service # initialization. Missing drivers, or drivers which fail to # initialize, will prevent the conductor service from # starting. The option default is a recommended set of # production-oriented drivers. A complete list of drivers # present on your system may be found by enumerating the # "ironic.drivers" entrypoint. An example may be found in the # developer documentation online. (list value) enabled_drivers=DRIVERS .. note:: If a conductor host has multiple IPs, ``my_ip`` should be set to the IP which is on the same network as the bare metal nodes. #. Configure the ironic-api service URL. Replace IRONIC_API_IP with the IP of the ironic-api service as follows:: [conductor] ... # URL of Ironic API service. If not set ironic can get the # current value from the keystone service catalog. (string # value) api_url=http://IRONIC_API_IP:6385 #. Configure the location of the database. Ironic-conductor should use the same configuration as ironic-api.
Replace IRONIC_DBPASSWORD with the password of your ``ironic`` user, and replace DB_IP with the IP address where the DB server is located:: [database] ... # The SQLAlchemy connection string to use to connect to the # database. (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic?charset=utf8 #. Configure the ironic-conductor service to use the RabbitMQ message broker by setting one or more of these options. Ironic-conductor should use the same configuration as ironic-api. Replace RABBIT_HOST with the address of the RabbitMQ server:: [DEFAULT] ... # The messaging driver to use, defaults to rabbit. Other # drivers include qpid and zmq. (string value) #rpc_backend=rabbit [oslo_messaging_rabbit] ... # The RabbitMQ broker address where a single node is used. # (string value) rabbit_host=RABBIT_HOST # The RabbitMQ userid. (string value) #rabbit_userid=guest # The RabbitMQ password. (string value) #rabbit_password=guest #. Configure the ironic-conductor service so that it can communicate with the Image service. Replace GLANCE_IP with the hostname or IP address of the Image service:: [glance] ... # Default glance hostname or IP address. (string value) glance_host=GLANCE_IP .. note:: A Swift backend for the Image service should be installed and configured for ``agent_*`` drivers. Starting with Mitaka the Bare Metal service also supports Ceph Object Gateway (RADOS Gateway) as the Image service's backend (:ref:`radosgw support`). #. Set the URL (replace NEUTRON_IP) for connecting to the Networking service to be the Networking service endpoint:: [neutron] ... # URL for connecting to neutron. (string value) url=http://NEUTRON_IP:9696 To configure the network for the ironic-conductor service to perform node cleaning, see `CleaningNetworkSetup`_. #. Configure the ironic-conductor service to use these credentials with the Identity service. Ironic-conductor should use the same configuration as ironic-api. Replace IDENTITY_IP with the IP of the Identity server, and replace IRONIC_PASSWORD with the password you chose for the ``ironic`` user in the Identity service:: [keystone_authtoken] ... # Complete public Identity API endpoint (string value) auth_uri=http://IDENTITY_IP:5000/ # Complete admin Identity API endpoint. This should specify # the unversioned root endpoint e.g. https://localhost:35357/ # (string value) identity_uri=http://IDENTITY_IP:35357/ # Service username. (string value) admin_user=ironic # Service account password. (string value) admin_password=IRONIC_PASSWORD # Service tenant name. (string value) admin_tenant_name=service #. Make sure that the ``qemu-img`` and ``iscsiadm`` (when using the iscsi-deploy driver) binaries are installed, and prepare the host system as described in `Setup the drivers for the Bare Metal service`_ #. Restart the ironic-conductor service:: Fedora/RHEL7/CentOS7: sudo systemctl restart openstack-ironic-conductor Ubuntu: sudo service ironic-conductor restart Configuring ironic-api behind mod_wsgi -------------------------------------- The Bare Metal service comes with an example file for configuring the ``ironic-api`` service to run behind Apache with mod_wsgi. 1. Install the apache service:: Fedora 21/RHEL7/CentOS7: sudo yum install httpd Fedora 22 (or higher): sudo dnf install httpd Debian/Ubuntu: apt-get install apache2 2. Copy the ``etc/apache2/ironic`` file under the apache sites:: Fedora/RHEL7/CentOS7: sudo cp etc/apache2/ironic /etc/httpd/conf.d/ironic.conf Debian/Ubuntu: sudo cp etc/apache2/ironic /etc/apache2/sites-available/ironic.conf 3.
Edit the recently copied ``ironic.conf`` file:

   - Modify the ``WSGIDaemonProcess``, ``APACHE_RUN_USER`` and
     ``APACHE_RUN_GROUP`` directives to set the user and group values to
     an appropriate user on your server.

   - Modify the ``WSGIScriptAlias`` directive to point to the
     *ironic/api/app.wsgi* script.

   - Modify the ``Directory`` directive to set the path to the Ironic API code.

4. Enable the apache ``ironic`` site and reload::

    Fedora/RHEL7/CentOS7:
      sudo systemctl reload httpd

    Debian/Ubuntu:
      sudo a2ensite ironic
      sudo service apache2 reload

.. note::
   The file ironic/api/app.wsgi is installed with the rest of the Bare Metal
   service application code, and should not need to be modified.

Configure Compute to use the Bare Metal service
===============================================

The Compute service needs to be configured to use the Bare Metal service's
driver. The configuration file for the Compute service is typically located
at ``/etc/nova/nova.conf``. *This configuration file must be modified on the
Compute service's controller nodes and compute nodes.*

1. Change these configuration options in the ``DEFAULT`` section, as follows::

    [DEFAULT]

    # Driver to use for controlling virtualization. Options
    # include: libvirt.LibvirtDriver, xenapi.XenAPIDriver,
    # fake.FakeDriver, baremetal.BareMetalDriver,
    # vmwareapi.VMwareESXDriver, vmwareapi.VMwareVCDriver (string
    # value)
    #compute_driver=
    compute_driver=nova.virt.ironic.IronicDriver

    # Firewall driver (defaults to hypervisor specific iptables
    # driver) (string value)
    #firewall_driver=
    firewall_driver=nova.virt.firewall.NoopFirewallDriver

    # The scheduler host manager class to use (string value)
    #scheduler_host_manager=nova.scheduler.host_manager.HostManager
    scheduler_host_manager=nova.scheduler.ironic_host_manager.IronicHostManager

    # Virtual ram to physical ram allocation ratio which affects
    # all ram filters. This configuration specifies a global ratio
    # for RamFilter. For AggregateRamFilter, it will fall back to
    # this configuration value if no per-aggregate setting found.
    # (floating point value)
    #ram_allocation_ratio=1.5
    ram_allocation_ratio=1.0

    # Amount of memory in MB to reserve for the host (integer value)
    #reserved_host_memory_mb=512
    reserved_host_memory_mb=0

    # Full class name for the Manager for compute (string value)
    #compute_manager=nova.compute.manager.ComputeManager
    compute_manager=ironic.nova.compute.manager.ClusteredComputeManager

    # Flag to decide whether to use baremetal_scheduler_default_filters or not.
    # (boolean value)
    #scheduler_use_baremetal_filters=False
    scheduler_use_baremetal_filters=True

    # Determines if the Scheduler tracks changes to instances to help with
    # its filtering decisions (boolean value)
    #scheduler_tracks_instance_changes=True
    scheduler_tracks_instance_changes=False

2. Change these configuration options in the ``ironic`` section. Replace:

   - IRONIC_PASSWORD with the password you chose for the ``ironic`` user
     in the Identity Service
   - IRONIC_NODE with the hostname or IP address of the ironic-api node
   - IDENTITY_IP with the IP of the Identity server

   ::

    [ironic]

    # Ironic keystone admin name
    admin_username=ironic

    # Ironic keystone admin password.
    admin_password=IRONIC_PASSWORD

    # keystone API endpoint
    admin_url=http://IDENTITY_IP:35357/v2.0

    # Ironic keystone tenant name.
    admin_tenant_name=service

    # URL for Ironic API endpoint.
    api_endpoint=http://IRONIC_NODE:6385/v1

3.
On the Compute service's controller nodes, restart the ``nova-scheduler`` process:: Fedora/RHEL7/CentOS7: sudo systemctl restart openstack-nova-scheduler Ubuntu: sudo service nova-scheduler restart 4. On the Compute service's compute nodes, restart the ``nova-compute`` process:: Fedora/RHEL7/CentOS7: sudo systemctl restart openstack-nova-compute Ubuntu: sudo service nova-compute restart .. _NeutronFlatNetworking: Configure Networking to communicate with the bare metal server ============================================================== You need to configure Networking so that the bare metal server can communicate with the Networking service for DHCP, PXE boot and other requirements. This section covers configuring Networking for a single flat network for bare metal provisioning. You will also need to provide Bare Metal service with the MAC address(es) of each node that it is provisioning; Bare Metal service in turn will pass this information to Networking service for DHCP and PXE boot configuration. An example of this is shown in the `Enrollment`_ section. #. Edit ``/etc/neutron/plugins/ml2/ml2_conf.ini`` and modify these:: [ml2] type_drivers = flat tenant_network_types = flat mechanism_drivers = openvswitch [ml2_type_flat] flat_networks = physnet1 [ml2_type_vlan] network_vlan_ranges = physnet1 [securitygroup] firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver enable_security_group = True [ovs] bridge_mappings = physnet1:br-eth2 # Replace eth2 with the interface on the neutron node which you # are using to connect to the bare metal server #. If neutron-openvswitch-agent runs with ``ovs_neutron_plugin.ini`` as the input config-file, edit ``ovs_neutron_plugin.ini`` to configure the bridge mappings by adding the [ovs] section described in the previous step, and restart the neutron-openvswitch-agent. #. Add the integration bridge to Open vSwitch:: ovs-vsctl add-br br-int #. Create the br-eth2 network bridge to handle communication between the OpenStack services (and the Bare Metal services) and the bare metal nodes using eth2. Replace eth2 with the interface on the network node which you are using to connect to the Bare Metal service:: ovs-vsctl add-br br-eth2 ovs-vsctl add-port br-eth2 eth2 #. Restart the Open vSwitch agent:: service neutron-plugin-openvswitch-agent restart #. On restarting the Networking service Open vSwitch agent, the veth pair between the bridges br-int and br-eth2 is automatically created. Your Open vSwitch bridges should look something like this after following the above steps:: ovs-vsctl show Bridge br-int fail_mode: secure Port "int-br-eth2" Interface "int-br-eth2" type: patch options: {peer="phy-br-eth2"} Port br-int Interface br-int type: internal Bridge "br-eth2" Port "phy-br-eth2" Interface "phy-br-eth2" type: patch options: {peer="int-br-eth2"} Port "eth2" Interface "eth2" Port "br-eth2" Interface "br-eth2" type: internal ovs_version: "2.3.0" #. Create the flat network on which you are going to launch the instances:: neutron net-create --tenant-id $TENANT_ID sharednet1 --shared \ --provider:network_type flat --provider:physical_network physnet1 #. Create the subnet on the newly created network:: neutron subnet-create sharednet1 $NETWORK_CIDR --name $SUBNET_NAME \ --ip-version=4 --gateway=$GATEWAY_IP --allocation-pool \ start=$START_IP,end=$END_IP --enable-dhcp .. _CleaningNetworkSetup: Configure the Bare Metal service for cleaning ============================================= #. 
If you configure Bare Metal service to use :ref:`cleaning` (which is enabled by default), you will need to set the ``cleaning_network_uuid`` configuration option. Note the network UUID (the `id` field) of the network you created in :ref:`NeutronFlatNetworking` or another network you created for cleaning::

    neutron net-list

#. Configure the cleaning network UUID via the ``cleaning_network_uuid`` option in the Bare Metal service configuration file (/etc/ironic/ironic.conf). In the following, replace NETWORK_UUID with the UUID you noted in the previous step::

    [neutron]
    ...
    # UUID of the network to create Neutron ports on, when booting
    # to a ramdisk for cleaning using Neutron DHCP. (string value)
    #cleaning_network_uuid=
    cleaning_network_uuid = NETWORK_UUID

#. Restart the Bare Metal service's ironic-conductor::

    Fedora/RHEL7/CentOS7:
      sudo systemctl restart openstack-ironic-conductor

    Ubuntu:
      sudo service ironic-conductor restart

.. _ImageRequirement:

Image requirements
==================

Bare Metal provisioning requires two sets of images: the deploy images and the user images. The deploy images are used by the Bare Metal service to prepare the bare metal server for actual OS deployment, while the user images are installed on the bare metal server to be used by the end user. Below are the steps to create the required images and add them to the Image service:

1. The `disk-image-builder`_ can be used to create images required for deployment and the actual OS which the user is going to run.

.. _disk-image-builder: https://github.com/openstack/diskimage-builder

   *Note:* `tripleo-incubator`_ provides a `script`_ to install all the dependencies for the disk-image-builder.

.. _tripleo-incubator: https://github.com/openstack/tripleo-incubator
.. _script: https://github.com/openstack/tripleo-incubator/blob/master/scripts/install-dependencies

   - Install the diskimage-builder package (use a virtualenv if you don't want to install anything globally)::

       sudo pip install diskimage-builder

   - Build the image your users will run (an Ubuntu image is taken as an example here)::

       Partition images:
         disk-image-create ubuntu baremetal dhcp-all-interfaces grub2 -o my-image

       Whole disk images:
         disk-image-create ubuntu vm dhcp-all-interfaces -o my-image

     The partition image command creates *my-image.qcow2*, *my-image.vmlinuz* and *my-image.initrd* files. The *grub2* element in the partition image creation command is only needed if local boot will be used to deploy *my-image.qcow2*; otherwise the images *my-image.vmlinuz* and *my-image.initrd* will be used for PXE booting after deploying the bare metal with *my-image.qcow2*.

     If you want to use a Fedora image, replace *ubuntu* with *fedora* in the chosen command.

   - To build the deploy image take a look at the `Building or downloading a deploy ramdisk image`_ section.

2. Add the user images to the Image service

   Load all the images created in the above steps into the Image service, and note the image UUID in the Image service for each one as it is generated.

   - Add the kernel and ramdisk images to the Image service::

       glance image-create --name my-kernel --visibility public \
         --disk-format aki --container-format aki < my-image.vmlinuz

     Store the image UUID obtained from the above step as *$MY_VMLINUZ_UUID*.

     ::

       glance image-create --name my-image.initrd --visibility public \
         --disk-format ari --container-format ari < my-image.initrd

     Store the image UUID obtained from the above step as *$MY_INITRD_UUID*.

   - Add the *my-image* to the Image service, which is going to be the OS that the user is going to run. Also associate the images created above with this OS image. These two operations can be done by executing the following command::

       glance image-create --name my-image --visibility public \
         --disk-format qcow2 --container-format bare --property \
         kernel_id=$MY_VMLINUZ_UUID --property \
         ramdisk_id=$MY_INITRD_UUID < my-image.qcow2

   - *Note:* To deploy a whole disk image, a kernel_id and a ramdisk_id shouldn't be associated with the image. An example is as follows::

       glance image-create --name my-whole-disk-image --visibility public \
         --disk-format qcow2 \
         --container-format bare < my-whole-disk-image.qcow2

3. Add the deploy images to the Image service

   Add the *my-deploy-ramdisk.kernel* and *my-deploy-ramdisk.initramfs* images to the Image service::

     glance image-create --name deploy-vmlinuz --visibility public \
       --disk-format aki --container-format aki < my-deploy-ramdisk.kernel

   Store the image UUID obtained from the above step as *$DEPLOY_VMLINUZ_UUID*.

   ::

     glance image-create --name deploy-initrd --visibility public \
       --disk-format ari --container-format ari < my-deploy-ramdisk.initramfs

   Store the image UUID obtained from the above step as *$DEPLOY_INITRD_UUID*.
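As a quick sanity check (an optional step, not part of the original workflow), you can list the registered images and confirm that the UUIDs you noted are all present::

    glance image-list

For the partition image example above, the listing should include *my-kernel*, *my-image.initrd*, *my-image*, *deploy-vmlinuz* and *deploy-initrd*.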
Flavor creation
===============

You'll need to create a special bare metal flavor in the Compute service. The flavor is mapped to the bare metal node through the hardware specifications.

#. Change these to match your hardware::

    RAM_MB=1024
    CPU=2
    DISK_GB=100
    ARCH={i686|x86_64}

#. Create the bare metal flavor by executing the following command::

    nova flavor-create my-baremetal-flavor auto $RAM_MB $DISK_GB $CPU

   *Note: You can replace auto with your own flavor id.*

#. Set the architecture as extra_specs information of the flavor. This will be used to match against the properties of bare metal nodes::

    nova flavor-key my-baremetal-flavor set cpu_arch=$ARCH

#. Associate the deploy ramdisk and kernel images with the ironic node::

    ironic node-update $NODE_UUID add \
      driver_info/deploy_kernel=$DEPLOY_VMLINUZ_UUID \
      driver_info/deploy_ramdisk=$DEPLOY_INITRD_UUID
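To double-check the flavor before relying on it (an optional verification, assuming the flavor name used above), display it::

    nova flavor-show my-baremetal-flavor

The ``extra_specs`` field in the output should contain ``cpu_arch`` set to the architecture you exported as $ARCH.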
Setup the drivers for the Bare Metal service
============================================

PXE setup
---------

If you will be using PXE, it needs to be set up on the Bare Metal service node(s) where ``ironic-conductor`` is running.

#. Make sure the tftp root directory exists and can be written to by the user the ``ironic-conductor`` is running as. For example::

    sudo mkdir -p /tftpboot
    sudo chown -R ironic /tftpboot

#. Install the tftp server and the syslinux package with the PXE boot images::

    Ubuntu: (Up to and including 14.04)
      sudo apt-get install tftpd-hpa syslinux-common syslinux

    Ubuntu: (14.10 and after)
      sudo apt-get install tftpd-hpa syslinux-common pxelinux

    Fedora 21/RHEL7/CentOS7:
      sudo yum install tftp-server syslinux-tftpboot

    Fedora 22 or higher:
      sudo dnf install tftp-server syslinux-tftpboot

#. Set up the tftp server to serve ``/tftpboot``.

#. Copy the PXE image to ``/tftpboot``. The PXE image might be found at [1]_::

    Ubuntu (Up to and including 14.04):
      sudo cp /usr/lib/syslinux/pxelinux.0 /tftpboot

    Ubuntu (14.10 and after):
      sudo cp /usr/lib/PXELINUX/pxelinux.0 /tftpboot

#. If whole disk images need to be deployed via PXE-netboot, copy the chain.c32 image to ``/tftpboot`` to support it. The chain.c32 image might be found at::

    Ubuntu (Up to and including 14.04):
      sudo cp /usr/lib/syslinux/chain.c32 /tftpboot

    Ubuntu (14.10 and after):
      sudo cp /usr/lib/syslinux/modules/bios/chain.c32 /tftpboot

    Fedora/RHEL7/CentOS7:
      sudo cp /boot/extlinux/chain.c32 /tftpboot

#. If the version of syslinux is **greater than** 4, we also need to make sure that we copy the library modules into the ``/tftpboot`` directory [2]_ [1]_::

    Ubuntu:
      sudo cp /usr/lib/syslinux/modules/*/ldlinux.* /tftpboot

#. Create a map file in the tftp boot directory (``/tftpboot``)::

    echo 're ^(/tftpboot/) /tftpboot/\1' > /tftpboot/map-file
    echo 're ^/tftpboot/ /tftpboot/' >> /tftpboot/map-file
    echo 're ^(^/) /tftpboot/\1' >> /tftpboot/map-file
    echo 're ^([^/]) /tftpboot/\1' >> /tftpboot/map-file

#. Enable the tftp map file: modify ``/etc/xinetd.d/tftp`` as below and restart the xinetd service::

    server_args = -v -v -v -v -v --map-file /tftpboot/map-file /tftpboot

.. [1] On **Fedora/RHEL** the ``syslinux-tftpboot`` package already installs the library modules and PXE image at ``/tftpboot``. If the TFTP server is configured to listen to a different directory you should copy the contents of ``/tftpboot`` to the configured directory.
.. [2] http://www.syslinux.org/wiki/index.php/Library_modules

PXE UEFI setup
--------------

If you want to deploy on UEFI-supported bare metal, perform these additional steps on the ironic conductor node to configure the PXE UEFI environment.

#. Download and untar the elilo bootloader version >= 3.16 from http://sourceforge.net/projects/elilo/::

    sudo tar zxvf elilo-3.16-all.tar.gz

#. Copy the elilo boot loader image to the ``/tftpboot`` directory::

    sudo cp ./elilo-3.16-x86_64.efi /tftpboot/elilo.efi

#. Grub2 is an alternate UEFI bootloader supported in Bare Metal service. Install the grub2 and shim packages::

    Ubuntu: (14.04LTS and later)
      sudo apt-get install grub-efi-amd64-signed shim-signed

    Fedora 21/RHEL7/CentOS7:
      sudo yum install grub2-efi shim

    Fedora 22 or higher:
      sudo dnf install grub2-efi shim

#. Copy the grub and shim boot loader images to the ``/tftpboot`` directory::

    Ubuntu: (14.04LTS and later)
      sudo cp /usr/lib/shim/shim.efi.signed /tftpboot/bootx64.efi
      sudo cp /usr/lib/grub/x86_64-efi-signed/grubnetx64.efi.signed \
        /tftpboot/grubx64.efi

    Fedora: (21 and later)
      sudo cp /boot/efi/EFI/fedora/shim.efi /tftpboot/bootx64.efi
      sudo cp /boot/efi/EFI/fedora/grubx64.efi /tftpboot/grubx64.efi

    CentOS: (7 and later)
      sudo cp /boot/efi/EFI/centos/shim.efi /tftpboot/bootx64.efi
      sudo cp /boot/efi/EFI/centos/grubx64.efi /tftpboot/grubx64.efi

#. Create the master grub.cfg::

    Ubuntu: Create grub.cfg under the ``/tftpboot/grub`` directory.
      GRUB_DIR=/tftpboot/grub

    Fedora: Create grub.cfg under the ``/tftpboot/EFI/fedora`` directory.
      GRUB_DIR=/tftpboot/EFI/fedora

    CentOS: Create grub.cfg under the ``/tftpboot/EFI/centos`` directory.
      GRUB_DIR=/tftpboot/EFI/centos

   Create the GRUB_DIR directory::

    sudo mkdir -p $GRUB_DIR

   This file is used to redirect grub to the bare metal node-specific config file; the redirection is based on the DHCP IP assigned to the bare metal node.

   .. literalinclude:: ../../../ironic/drivers/modules/master_grub_cfg.txt

   Change the permission of grub.cfg::

    sudo chmod 644 $GRUB_DIR/grub.cfg

#. Update the bootfile and template file configuration parameters for UEFI PXE boot in the Bare Metal service's configuration file (/etc/ironic/ironic.conf)::

    [pxe]

    # Bootfile DHCP parameter for UEFI boot mode. (string value)
    uefi_pxe_bootfile_name=bootx64.efi

    # Template file for PXE configuration for UEFI boot loader.
    # (string value)
    uefi_pxe_config_template=$pybasedir/drivers/modules/pxe_grub_config.template
#. Update the bare metal node with the ``boot_mode`` capability in the node's properties field::

    ironic node-update <node-uuid> add properties/capabilities='boot_mode:uefi'

#. Make sure that the bare metal node is configured to boot in UEFI boot mode and that the boot device is set to network/pxe.

   NOTE: The ``pxe_ilo`` driver supports automatic setting of the UEFI boot mode and boot device on the bare metal node, so this step is not required for the ``pxe_ilo`` driver.

For more information on configuring boot modes, refer to boot_mode_support_.

iPXE setup
----------

An alternative to PXE boot, iPXE was introduced in the Juno release (2014.2.0) of Bare Metal service. If you will be using iPXE to boot instead of PXE, iPXE needs to be set up on the Bare Metal service node(s) where ``ironic-conductor`` is running.

#. Make sure these directories exist and can be written to by the user the ``ironic-conductor`` is running as. For example::

    sudo mkdir -p /tftpboot
    sudo mkdir -p /httpboot
    sudo chown -R ironic /tftpboot
    sudo chown -R ironic /httpboot

#. Create a map file in the tftp boot directory (``/tftpboot``)::

    echo 'r ^([^/]) /tftpboot/\1' > /tftpboot/map-file
    echo 'r ^(/tftpboot/) /tftpboot/\1' >> /tftpboot/map-file

#. Set up the TFTP and HTTP servers. These servers should be running and configured to use the local /tftpboot and /httpboot directories respectively, as their root directories. (Setting up these servers is outside the scope of this install guide.)

   These root directories need to be mounted locally to the ``ironic-conductor`` services, so that the services can access them.

   The Bare Metal service's configuration file (/etc/ironic/ironic.conf) should be edited accordingly to specify the TFTP and HTTP root directories and server addresses. For example::

    [pxe]

    # Ironic compute node's tftp root path. (string value)
    tftp_root=/tftpboot

    # IP address of Ironic compute node's tftp server. (string
    # value)
    tftp_server=192.168.0.2

    [deploy]
    # Ironic compute node's http root path. (string value)
    http_root=/httpboot

    # Ironic compute node's HTTP server URL. Example:
    # http://192.1.2.3:8080 (string value)
    http_url=http://192.168.0.2:8080

#. Install the iPXE package with the boot images::

    Ubuntu:
      apt-get install ipxe

    Fedora 21/RHEL7/CentOS7:
      yum install ipxe-bootimgs

    Fedora 22 or higher:
      dnf install ipxe-bootimgs

#. Copy the iPXE boot image (``undionly.kpxe`` for **BIOS** and ``ipxe.efi`` for **UEFI**) to ``/tftpboot``. The binary might be found at::

    Ubuntu:
      cp /usr/lib/ipxe/{undionly.kpxe,ipxe.efi} /tftpboot

    Fedora/RHEL7/CentOS7:
      cp /usr/share/ipxe/{undionly.kpxe,ipxe.efi} /tftpboot

   .. note::
      If the packaged version of the iPXE boot image doesn't work, you can download a prebuilt one from http://boot.ipxe.org or build one image from source; see http://ipxe.org/download for more information.

#. Enable/Configure iPXE in the Bare Metal service's configuration file (/etc/ironic/ironic.conf)::

    [pxe]

    # Enable iPXE boot. (boolean value)
    ipxe_enabled=True

    # Neutron bootfile DHCP parameter. (string value)
    pxe_bootfile_name=undionly.kpxe

    # Bootfile DHCP parameter for UEFI boot mode. (string value)
    uefi_pxe_bootfile_name=ipxe.efi

    # Template file for PXE configuration. (string value)
    pxe_config_template=$pybasedir/drivers/modules/ipxe_config.template

    # Template file for PXE configuration for UEFI boot loader.
    # (string value)
    uefi_pxe_config_template=$pybasedir/drivers/modules/ipxe_config.template

#. Restart the ``ironic-conductor`` process::

    Fedora/RHEL7/CentOS7:
      sudo systemctl restart openstack-ironic-conductor

    Ubuntu:
      sudo service ironic-conductor restart
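Before deploying with iPXE, it can be worth verifying that the HTTP server really serves files out of ``/httpboot``. A minimal smoke test, assuming the example ``http_url`` value from the configuration above (the file name is arbitrary)::

    sudo sh -c 'echo ok > /httpboot/healthcheck.txt'
    curl http://192.168.0.2:8080/healthcheck.txt
    sudo rm /httpboot/healthcheck.txt

If ``curl`` prints ``ok``, nodes should be able to fetch their boot artifacts over HTTP.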
Networking service configuration
--------------------------------

DHCP requests from iPXE need to have a DHCP tag called ``ipxe``, in order for the DHCP server to tell the client to get the boot.ipxe script via HTTP. Otherwise, if the tag isn't there, the DHCP server will tell the DHCP client to chainload the iPXE image (undionly.kpxe). The Networking service needs to be configured to create this DHCP tag, since it isn't created by default.

#. Create a custom ``dnsmasq.conf`` file with a setting for the ipxe tag. For example, create the file ``/etc/dnsmasq-ironic.conf`` with the content::

    # Create the "ipxe" tag if request comes from iPXE user class
    dhcp-userclass=set:ipxe,iPXE

    # Alternatively, create the "ipxe" tag if request comes from DHCP option 175
    # dhcp-match=set:ipxe,175

#. In the Networking service DHCP Agent configuration file (typically located at /etc/neutron/dhcp_agent.ini), set the custom ``/etc/dnsmasq-ironic.conf`` file as the dnsmasq configuration file::

    [DEFAULT]
    dnsmasq_config_file = /etc/dnsmasq-ironic.conf

#. Restart the ``neutron-dhcp-agent`` process::

    service neutron-dhcp-agent restart

IPMI support
------------

If using the IPMITool driver, the ``ipmitool`` command must be present on the service node(s) where ``ironic-conductor`` is running. On most distros, this is provided as part of the ``ipmitool`` package. Source code is available at http://ipmitool.sourceforge.net/

Note that certain distros, notably Mac OS X and SLES, install ``openipmi`` instead of ``ipmitool`` by default. THIS DRIVER IS NOT COMPATIBLE WITH ``openipmi`` AS IT RELIES ON ERROR HANDLING OPTIONS NOT PROVIDED BY THIS TOOL.

Check that you can connect to, and authenticate with, the IPMI controller in your bare metal server by using ``ipmitool``::

    ipmitool -I lanplus -H <ip-address> -U <username> -P <password> chassis power status

    <ip-address> = The IP of the IPMI controller you want to access

*Note:*

#. This is not the bare metal node's main IP. The IPMI controller should have its own unique IP.

#. In case the above command doesn't return the power status of the bare metal server, check for these:

   - ``ipmitool`` is installed.
   - The IPMI controller on your bare metal server is turned on.
   - The IPMI controller credentials passed in the command are right.
   - The conductor node has a route to the IPMI controller. This can be checked by just pinging the IPMI controller IP from the conductor node.

.. note::
   If there are slow or unresponsive BMCs in the environment, the retry_timeout configuration option in the [ipmi] section may need to be lowered. The default is fairly conservative, as setting this timeout too low can cause older BMCs to crash and require a hard-reset.

Bare Metal service supports sending IPMI sensor data to Telemetry with the pxe_ipmitool, pxe_ipminative, agent_ipmitool, agent_pyghmi, agent_ilo, iscsi_ilo and pxe_ilo drivers, and with the pxe_irmc driver starting from the Kilo release. By default, support for sending IPMI sensor data to Telemetry is disabled. If you want to enable it, you should make the following two changes in ``ironic.conf``:

* ``notification_driver = messaging`` in the ``DEFAULT`` section
* ``send_sensor_data = true`` in the ``conductor`` section

If you want to customize the sensor types which will be sent to Telemetry, change the ``send_sensor_data_types`` option. For example, the following setting will send only temperature, fan and voltage sensor data to Telemetry:

* send_sensor_data_types=Temperature,Fan,Voltage

If the default value ``All`` is used, all the sensor types supported by Telemetry are sent:

* Temperature, Fan, Voltage, Current
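Putting these pieces together, the sensor-related settings can be collected in one ``ironic.conf`` fragment. A sketch only, assuming you want the three sensor types from the example above::

    [DEFAULT]
    # Send IPMI sensor notifications to Telemetry.
    notification_driver = messaging

    [conductor]
    # Enable sending sensor data and restrict it to three types.
    send_sensor_data = true
    send_sensor_data_types = Temperature,Fan,Voltage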
Configure node web console
--------------------------

The web console can be configured in Bare Metal service in the following way:

* Install shellinabox on the ironic conductor node. For RHEL/CentOS, the shellinabox package is not present in the base repositories; the EPEL repository must be enabled first, see the `FedoraProject page`_ for details. Installation example::

    Ubuntu:
      sudo apt-get install shellinabox

    Fedora 21/RHEL7/CentOS7:
      sudo yum install shellinabox

    Fedora 22 or higher:
      sudo dnf install shellinabox

  You can find more about shellinabox on the `shellinabox page`_.

  You can optionally use an SSL certificate with shellinabox. If you want to do so, you should install openssl and generate the SSL certificate.

  1. Install openssl, for example::

      Ubuntu:
        sudo apt-get install openssl

      Fedora 21/RHEL7/CentOS7:
        sudo yum install openssl

      Fedora 22 or higher:
        sudo dnf install openssl

  2. Generate the SSL certificate. Here is an example; you can find more about openssl on the `openssl page`_::

      cd /tmp/ca
      openssl genrsa -des3 -out my.key 1024
      openssl req -new -key my.key -out my.csr
      cp my.key my.key.org
      openssl rsa -in my.key.org -out my.key
      openssl x509 -req -days 3650 -in my.csr -signkey my.key -out my.crt
      cat my.crt my.key > certificate.pem

* Customize the console section in the Bare Metal service configuration file (/etc/ironic/ironic.conf). If you want to use the SSL certificate in shellinabox, you should specify ``terminal_cert_dir``, for example::

    [console]

    #
    # Options defined in ironic.drivers.modules.console_utils
    #

    # Path to serial console terminal program (string value)
    #terminal=shellinaboxd

    # Directory containing the terminal SSL cert(PEM) for serial
    # console access (string value)
    terminal_cert_dir=/tmp/ca

    # Directory for holding terminal pid files. If not specified,
    # the temporary directory will be used. (string value)
    #terminal_pid_dir=

    # Time interval (in seconds) for checking the status of
    # console subprocess. (integer value)
    #subprocess_checking_interval=1

    # Time (in seconds) to wait for the console subprocess to
    # start. (integer value)
    #subprocess_timeout=10

* Append console parameters for bare metal PXE boot in the Bare Metal service configuration file (/etc/ironic/ironic.conf), including the right serial port terminal and serial speed. The serial speed should be the same as the serial speed configured in the BIOS settings, so that the OS boot process can be seen in the web console, for example::

    pxe_* driver:

      [pxe]
      # Additional append parameters for bare metal PXE boot. (string value)
      pxe_append_params = nofb nomodeset vga=normal console=tty0 console=ttyS0,115200n8

    agent_* driver:

      [agent]
      # Additional append parameters for bare metal PXE boot. (string value)
      agent_pxe_append_params = nofb nomodeset vga=normal console=tty0 console=ttyS0,115200n8

* Configure the node web console. Enable the web console, for example::

    ironic node-update <node-uuid> add driver_info/<terminal_port>=<customized-port>
    ironic node-set-console-mode <node-uuid> true

  Check whether the console is enabled, for example::

    ironic node-validate <node-uuid>

  Disable the web console, for example::

    ironic node-set-console-mode <node-uuid> false
    ironic node-update <node-uuid> remove driver_info/<terminal_port>

  The ``<terminal_port>`` is driver dependent.
The actual name of this field can be checked in the driver properties, for example::

    ironic driver-properties <driver_name>

  For ``*_ipmitool`` and ``*_ipminative`` drivers, this option is ``ipmi_terminal_port``. For the ``seamicro`` driver, this option is ``seamicro_terminal_port``. Give a customized port number to ``<customized-port>``, for example ``8023``; this customized port is used in the web console URL.

* Get the web console information::

    ironic node-get-console <node-uuid>
    +-----------------+----------------------------------------------------------------------+
    | Property        | Value                                                                |
    +-----------------+----------------------------------------------------------------------+
    | console_enabled | True                                                                 |
    | console_info    | {u'url': u'http://<ip-address>:<port>', u'type': u'shellinabox'}     |
    +-----------------+----------------------------------------------------------------------+

  You can open the web console using the above ``url`` through a web browser. If ``console_enabled`` is ``false``, ``console_info`` is ``None`` and the web console is disabled. If you want to launch the web console, refer to the ``Enable web console`` part above.

.. _`shellinabox page`: https://code.google.com/p/shellinabox/
.. _`openssl page`: https://www.openssl.org/
.. _`FedoraProject page`: https://fedoraproject.org/wiki/Infrastructure/Mirroring

.. _boot_mode_support:

Boot mode support
-----------------

The following drivers support setting of boot mode (Legacy BIOS or UEFI):

* ``pxe_ipmitool``

The boot modes can be configured in Bare Metal service in the following way:

* When no boot mode setting is provided, these drivers default the boot_mode to Legacy BIOS.

* Only one boot mode (either ``uefi`` or ``bios``) can be configured for the node.

* If the operator wants a node to always boot in ``uefi`` mode or ``bios`` mode, then they may use the ``capabilities`` parameter within the ``properties`` field of a bare metal node. The operator must manually set the appropriate boot mode on the bare metal node.

  To configure a node in ``uefi`` mode, set ``capabilities`` as below::

    ironic node-update <node-uuid> add properties/capabilities='boot_mode:uefi'

  Nodes having ``boot_mode`` set to ``uefi`` may be requested by adding an ``extra_spec`` to the Compute service flavor::

    nova flavor-key ironic-test-3 set capabilities:boot_mode="uefi"
    nova boot --flavor ironic-test-3 --image test-image instance-1

  If ``capabilities`` is used in ``extra_spec`` as above, the nova scheduler (``ComputeCapabilitiesFilter``) will match only bare metal nodes which have the ``boot_mode`` set appropriately in ``properties/capabilities``. It will filter out the rest of the nodes.

The above facility for matching in the Compute service can be used in heterogeneous environments where there is a mix of ``uefi`` and ``bios`` machines, and the operator wants to provide a choice to the user regarding boot modes. If the flavor doesn't contain ``boot_mode`` and ``boot_mode`` is configured for bare metal nodes, then the nova scheduler will consider all nodes and the user may get either a ``bios`` or a ``uefi`` machine.
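Keep in mind that ``properties/capabilities`` is stored as a single comma-separated string, so setting it again replaces any previously set capabilities. If a node needs several capabilities at once (for example, ``boot_mode`` together with the ``disk_label`` or ``boot_option`` capabilities described below), it is safest to pass them in one call, for example::

    ironic node-update <node-uuid> add properties/capabilities='boot_mode:uefi,boot_option:local'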
.. _choosing_the_disk_label:

Choosing the disk label
-----------------------

.. note::
   The term ``disk label`` is historically used in Ironic and was taken from `parted `_. Apparently everyone seems to have a different word for ``disk label`` - these are all the same thing: disk type, partition table, partition map and so on...

Ironic allows operators to choose which disk label they want their bare metal node to be deployed with when Ironic is responsible for partitioning the disk; therefore choosing the disk label does not apply when the image being deployed is a ``whole disk image``.

There are some edge cases where someone may want to choose a specific disk label for the images being deployed, including but not limited to:

* For machines in ``bios`` boot mode with disks larger than 2 terabytes it's recommended to use a ``gpt`` disk label. That's because a capacity beyond 2 terabytes is not addressable by using the MBR partitioning type. But, although GPT claims to be backward compatible with legacy BIOS systems, `that's not always the case `_.

* Operators may want to force the partitioning to be always MBR (even if the machine is deployed with boot mode ``uefi``) to avoid breakage of applications and tools running on those instances.

The disk label can be configured in two ways: when Ironic is used with the Compute service, or in standalone mode. The following bullet points and sections describe both methods:

* When no disk label is provided, Ironic will configure it according to the `boot mode `_; ``bios`` boot mode will use ``msdos`` and ``uefi`` boot mode will use ``gpt``.

* Only one disk label - either ``msdos`` or ``gpt`` - can be configured for the node.

When used with Compute service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When Ironic is used with the Compute service, the disk label should be set in the node's ``properties/capabilities`` field and also in the flavor which will request such a capability, for example::

    ironic node-update <node-uuid> add properties/capabilities='disk_label:gpt'

As for the flavor::

    nova flavor-key baremetal set capabilities:disk_label="gpt"

When used in standalone mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When used without the Compute service, the disk label should be set directly in the node's ``instance_info`` field, as below::

    ironic node-update <node-uuid> add instance_info/capabilities='{"disk_label": "gpt"}'

Local boot with partition images
================================

Starting with the Kilo release, Bare Metal service supports local boot with partition images, meaning that after the deployment the node's subsequent reboots won't happen via PXE or Virtual Media. Instead, it will boot from a local boot loader installed on the disk.

It's important to note that in order for this to work the image being deployed with Bare Metal service **must** contain ``grub2`` installed within it.

Enabling local boot is different when Bare Metal service is used with the Compute service and without it. The following sections describe both methods.

.. note::
   The local boot feature is dependent upon an updated deploy ramdisk built with diskimage-builder_ **version >= 0.1.42** or ironic-python-agent_ in the kilo-era.

Enabling local boot with Compute service
----------------------------------------

To enable local boot we need to set a capability on the bare metal node, for example::

    ironic node-update <node-uuid> add properties/capabilities="boot_option:local"

Nodes having ``boot_option`` set to ``local`` may be requested by adding an ``extra_spec`` to the Compute service flavor, for example::

    nova flavor-key baremetal set capabilities:boot_option="local"

.. note::
   If the node is configured to use ``UEFI``, Bare Metal service will create an ``EFI partition`` on the disk and switch the partition table format to ``gpt``. The ``EFI partition`` will be used later by the boot loader (which is installed from the deploy ramdisk).

Enabling local boot without Compute
-----------------------------------

Since adding ``capabilities`` to the node's properties is only used by the nova scheduler to perform more advanced scheduling of instances, we need a way to enable local boot when Compute is not present.
To do that we can simply specify the capability via the ``instance_info`` attribute of the node, for example::

    ironic node-update <node-uuid> add instance_info/capabilities='{"boot_option": "local"}'

Enrollment
==========

After all the services have been properly configured, you should enroll your hardware with the Bare Metal service, and confirm that the Compute service sees the available hardware. The nodes will be visible to the Compute service once they are in the ``available`` provision state.

.. note::
   After enrolling nodes with the Bare Metal service, the Compute service will not be immediately notified of the new resources. The Compute service's resource tracker syncs periodically, and so any changes made directly to the Bare Metal service's resources will become visible in the Compute service only after the next run of that periodic task. More information is in the `Troubleshooting`_ section below.

.. note::
   Any bare metal node that is visible to the Compute service may have a workload scheduled to it, if both the ``power`` and ``deploy`` interfaces pass the ``validate`` check. If you wish to exclude a node from the Compute service's scheduler, for instance so that you can perform maintenance on it, you can set the node to "maintenance" mode. For more information see the `Maintenance Mode`_ section below.

Enrollment process
------------------

This section describes the main steps to enroll a node and make it available for provisioning. Some steps are shown separately for illustration purposes, and may be combined if desired.

#. Create a node in the Bare Metal service. At a minimum, you must specify the driver name (for example, "pxe_ipmitool"). This will return the node UUID along with other information about the node. The node's provision state will be ``available``. (The example assumes that the client is using the default API version.)::

    ironic node-create -d pxe_ipmitool
    +--------------+--------------------------------------+
    | Property     | Value                                |
    +--------------+--------------------------------------+
    | uuid         | dfc6189f-ad83-4261-9bda-b27258eb1987 |
    | driver_info  | {}                                   |
    | extra        | {}                                   |
    | driver       | pxe_ipmitool                         |
    | chassis_uuid |                                      |
    | properties   | {}                                   |
    | name         | None                                 |
    +--------------+--------------------------------------+

    ironic node-show dfc6189f-ad83-4261-9bda-b27258eb1987
    +------------------------+--------------------------------------+
    | Property               | Value                                |
    +------------------------+--------------------------------------+
    | target_power_state     | None                                 |
    | extra                  | {}                                   |
    | last_error             | None                                 |
    | maintenance_reason     | None                                 |
    | provision_state        | available                            |
    | uuid                   | dfc6189f-ad83-4261-9bda-b27258eb1987 |
    | console_enabled        | False                                |
    | target_provision_state | None                                 |
    | provision_updated_at   | None                                 |
    | maintenance            | False                                |
    | power_state            | None                                 |
    | driver                 | pxe_ipmitool                         |
    | properties             | {}                                   |
    | instance_uuid          | None                                 |
    | name                   | None                                 |
    | ...                    | ...                                  |
    +------------------------+--------------------------------------+

   Beginning with the Kilo release a node may also be referred to by a logical name as well as its UUID. To utilize this new feature a name must be assigned to the node. This can be done when the node is created by adding the ``-n`` option to the ``node-create`` command or by updating an existing node with the ``node-update`` command. See `Logical Names`_ for examples.

   Beginning with the Liberty release, with API version 1.11 and above, a newly created node will have an initial provision state of ``enroll`` as opposed to ``available``. See `Enrolling a node`_ for more details.
#. Update the node ``driver_info`` so that Bare Metal service can manage the node. Different drivers may require different information about the node. You can determine this with the ``driver-properties`` command, as follows::

    ironic driver-properties pxe_ipmitool
    +----------------------+---------------------------------------------------------------------------+
    | Property             | Description                                                               |
    +----------------------+---------------------------------------------------------------------------+
    | ipmi_address         | IP address or hostname of the node. Required.                             |
    | ipmi_password        | password. Optional.                                                       |
    | ipmi_username        | username; default is NULL user. Optional.                                 |
    | ...                  | ...                                                                       |
    | deploy_kernel        | UUID (from Glance) of the deployment kernel. Required.                    |
    | deploy_ramdisk       | UUID (from Glance) of the ramdisk that is mounted at boot time. Required. |
    +----------------------+---------------------------------------------------------------------------+

    ironic node-update $NODE_UUID add \
      driver_info/ipmi_username=$USER \
      driver_info/ipmi_password=$PASS \
      driver_info/ipmi_address=$ADDRESS

   .. note::
      If IPMI is running on a port other than 623 (the default), the port must be added to ``driver_info`` by specifying the ``ipmi_port`` value. Example::

        ironic node-update $NODE_UUID add driver_info/ipmi_port=$PORT_NUMBER

   Note that you may also specify all ``driver_info`` parameters during ``node-create`` by passing the **-i** option multiple times.

#. Update the node's properties to match the bare metal flavor you created earlier::

    ironic node-update $NODE_UUID add \
      properties/cpus=$CPU \
      properties/memory_mb=$RAM_MB \
      properties/local_gb=$DISK_GB \
      properties/cpu_arch=$ARCH

   As above, these can also be specified at node creation by passing the **-p** option to ``node-create`` multiple times.

#. If you wish to perform more advanced scheduling of the instances based on hardware capabilities, you may add metadata to each node that will be exposed to the nova scheduler (see: `ComputeCapabilitiesFilter`_). A full explanation of this is outside of the scope of this document. It can be done through the special ``capabilities`` member of node properties::

    ironic node-update $NODE_UUID add \
      properties/capabilities=key1:val1,key2:val2

#. As mentioned in the `Flavor Creation`_ section, if using the Kilo or later release of Bare Metal service, you should specify a deploy kernel and ramdisk which correspond to the node's driver, for example::

    ironic node-update $NODE_UUID add \
      driver_info/deploy_kernel=$DEPLOY_VMLINUZ_UUID \
      driver_info/deploy_ramdisk=$DEPLOY_INITRD_UUID

#. You must also inform Bare Metal service of the network interface cards which are part of the node by creating a port with each NIC's MAC address. These MAC addresses are passed to the Networking service during instance provisioning and used to configure the network appropriately::

    ironic port-create -n $NODE_UUID -a $MAC_ADDRESS

#.
To check if Bare Metal service has the minimum information necessary for a node's driver to function, you may ``validate`` it:: ironic node-validate $NODE_UUID +------------+--------+--------+ | Interface | Result | Reason | +------------+--------+--------+ | console | True | | | deploy | True | | | management | True | | | power | True | | +------------+--------+--------+ If the node fails validation, each driver will return information as to why it failed:: ironic node-validate $NODE_UUID +------------+--------+-------------------------------------------------------------------------------------------------------------------------------------+ | Interface | Result | Reason | +------------+--------+-------------------------------------------------------------------------------------------------------------------------------------+ | console | None | not supported | | deploy | False | Cannot validate iSCSI deploy. Some parameters were missing in node's instance_info. Missing are: ['root_gb', 'image_source'] | | management | False | Missing the following IPMI credentials in node's driver_info: ['ipmi_address']. | | power | False | Missing the following IPMI credentials in node's driver_info: ['ipmi_address']. | +------------+--------+-------------------------------------------------------------------------------------------------------------------------------------+ #. If using API version 1.11 or above, the node was created in the ``enroll`` provision state. In order for the node to be available for deploying a workload (for example, by the Compute service), it needs to be in the ``available`` provision state. To do this, it must be moved into the ``manageable`` state and then moved into the ``available`` state. The `API version 1.11 and above`_ section describes the commands for this. .. _ComputeCapabilitiesFilter: http://docs.openstack.org/developer/nova/devref/filter_scheduler.html?highlight=computecapabilitiesfilter Enrolling a node ---------------- In the Liberty cycle, starting with API version 1.11, the Bare Metal service added a new initial provision state of ``enroll`` to its state machine. Existing automation tooling that use an API version lower than 1.11 are not affected, since the initial provision state is still ``available``. However, using API version 1.11 or above may break existing automation tooling with respect to node creation. The default API version used by (the most recent) python-ironicclient is 1.9. The examples below set the API version for each command. To set the API version for all commands, you can set the environment variable ``IRONIC_API_VERSION``. API version 1.10 and below ~~~~~~~~~~~~~~~~~~~~~~~~~~ Below is an example of creating a node with API version 1.10. After creation, the node will be in the ``available`` provision state. Other API versions below 1.10 may be substituted in place of 1.10. 
::

    ironic --ironic-api-version 1.10 node-create -d agent_ilo -n pre11
    +--------------+--------------------------------------+
    | Property     | Value                                |
    +--------------+--------------------------------------+
    | uuid         | cc4998a0-f726-4927-9473-0582458c6789 |
    | driver_info  | {}                                   |
    | extra        | {}                                   |
    | driver       | agent_ilo                            |
    | chassis_uuid |                                      |
    | properties   | {}                                   |
    | name         | pre11                                |
    +--------------+--------------------------------------+

    ironic --ironic-api-version 1.10 node-list
    +--------------------------------------+-------+---------------+-------------+--------------------+-------------+
    | UUID                                 | Name  | Instance UUID | Power State | Provisioning State | Maintenance |
    +--------------------------------------+-------+---------------+-------------+--------------------+-------------+
    | cc4998a0-f726-4927-9473-0582458c6789 | pre11 | None          | None        | available          | False       |
    +--------------------------------------+-------+---------------+-------------+--------------------+-------------+

API version 1.11 and above
~~~~~~~~~~~~~~~~~~~~~~~~~~

Beginning with API version 1.11, the initial provision state for newly created nodes is ``enroll``. In the examples below, other API versions above 1.11 may be substituted in place of 1.11.

::

    ironic --ironic-api-version 1.11 node-create -d agent_ilo -n post11
    +--------------+--------------------------------------+
    | Property     | Value                                |
    +--------------+--------------------------------------+
    | uuid         | 0eb013bb-1e4b-4f4c-94b5-2e7468242611 |
    | driver_info  | {}                                   |
    | extra        | {}                                   |
    | driver       | agent_ilo                            |
    | chassis_uuid |                                      |
    | properties   | {}                                   |
    | name         | post11                               |
    +--------------+--------------------------------------+

    ironic --ironic-api-version 1.11 node-list
    +--------------------------------------+--------+---------------+-------------+--------------------+-------------+
    | UUID                                 | Name   | Instance UUID | Power State | Provisioning State | Maintenance |
    +--------------------------------------+--------+---------------+-------------+--------------------+-------------+
    | 0eb013bb-1e4b-4f4c-94b5-2e7468242611 | post11 | None          | None        | enroll             | False       |
    +--------------------------------------+--------+---------------+-------------+--------------------+-------------+

In order for nodes to be available for deploying workloads on them, nodes must be in the ``available`` provision state. To do this, nodes created with API version 1.11 and above must be moved from the ``enroll`` state to the ``manageable`` state and then to the ``available`` state. To move a node to a different provision state, use the ``node-set-provision-state`` command.

.. note::
   Since it is an asynchronous call, the response for ``ironic node-set-provision-state`` will not indicate whether the transition succeeded or not. You can check the status of the operation via ``ironic node-show``. If it was successful, ``provision_state`` will be in the desired state. If it failed, there will be information in the node's ``last_error``.

After creating a node and before moving it from its initial provision state of ``enroll``, basic power and port information needs to be configured on the node. The Bare Metal service needs this information because it verifies that it is capable of controlling the node when transitioning the node from ``enroll`` to ``manageable`` state.
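For example, for a node using one of the ``*_ipmitool`` drivers, the basic power and port information could be supplied with the same commands shown in the enrollment steps above (the variable values are illustrative)::

    ironic node-update $NODE_UUID add \
        driver_info/ipmi_address=$ADDRESS \
        driver_info/ipmi_username=$USER \
        driver_info/ipmi_password=$PASS
    ironic port-create -n $NODE_UUID -a $MAC_ADDRESS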
To move a node from ``enroll`` to ``manageable`` provision state:: ironic --ironic-api-version 1.11 node-set-provision-state $NODE_UUID manage ironic node-show $NODE_UUID +------------------------+--------------------------------------------------------------------+ | Property | Value | +------------------------+--------------------------------------------------------------------+ | ... | ... | | provision_state | manageable | <- verify correct state | uuid | 0eb013bb-1e4b-4f4c-94b5-2e7468242611 | | ... | ... | +------------------------+--------------------------------------------------------------------+ When a node is moved from the ``manageable`` to ``available`` provision state, the node will go through automated cleaning if configured to do so (see :ref:`CleaningNetworkSetup`). To move a node from ``manageable`` to ``available`` provision state:: ironic --ironic-api-version 1.11 node-set-provision-state $NODE_UUID provide ironic node-show $NODE_UUID +------------------------+--------------------------------------------------------------------+ | Property | Value | +------------------------+--------------------------------------------------------------------+ | ... | ... | | provision_state | available | < - verify correct state | uuid | 0eb013bb-1e4b-4f4c-94b5-2e7468242611 | | ... | ... | +------------------------+--------------------------------------------------------------------+ For more details on the Bare Metal service's state machine, see the `state machine `_ documentation. Logical names ------------- Beginning with the Kilo release a Node may also be referred to by a logical name as well as its UUID. Names can be assigned either when creating the node by adding the ``-n`` option to the ``node-create`` command or by updating an existing node with the ``node-update`` command. Node names must be unique, and conform to: - rfc952_ - rfc1123_ - wiki_hostname_ The node is named 'example' in the following examples: :: ironic node-create -d agent_ipmitool -n example or:: ironic node-update $NODE_UUID add name=example Once assigned a logical name, a node can then be referred to by name or UUID interchangeably. :: ironic node-create -d agent_ipmitool -n example +--------------+--------------------------------------+ | Property | Value | +--------------+--------------------------------------+ | uuid | 71e01002-8662-434d-aafd-f068f69bb85e | | driver_info | {} | | extra | {} | | driver | agent_ipmitool | | chassis_uuid | | | properties | {} | | name | example | +--------------+--------------------------------------+ ironic node-show example +------------------------+--------------------------------------+ | Property | Value | +------------------------+--------------------------------------+ | target_power_state | None | | extra | {} | | last_error | None | | updated_at | 2015-04-24T16:23:46+00:00 | | ... | ... | | instance_info | {} | +------------------------+--------------------------------------+ .. _rfc952: http://tools.ietf.org/html/rfc952 .. _rfc1123: http://tools.ietf.org/html/rfc1123 .. _wiki_hostname: http://en.wikipedia.org/wiki/Hostname .. _inspection: Hardware Inspection ------------------- Starting with the Kilo release, Bare Metal service supports hardware inspection that simplifies enrolling nodes. Inspection allows Bare Metal service to discover required node properties once required ``driver_info`` fields (for example, IPMI credentials) are set by an operator. Inspection will also create the Bare Metal service ports for the discovered ethernet MACs. 
Operators will have to manually delete the Bare Metal service ports for which physical media is not connected. This is required due to the `bug 1405131 `_.

There are two kinds of inspection supported by Bare Metal service:

#. Out-of-band inspection is currently implemented by iLO drivers, listed at :ref:`ilo`.

#. In-band inspection is performed by utilizing the ironic-inspector_ project. This is supported by the following drivers::

    pxe_drac
    pxe_ipmitool
    pxe_ipminative
    pxe_ssh

   This feature needs to be explicitly enabled in the configuration by setting ``enabled = True`` in the ``[inspector]`` section. You must additionally install python-ironic-inspector-client_ to use this functionality. You must set ``service_url`` if the ironic-inspector service is being run on a separate host from the ironic-conductor service, or is using a non-standard port.

   In order to ensure that ports in Bare Metal service are synchronized with NIC ports on the node, the following settings in the ironic-inspector configuration file must be set::

    [processing]
    add_ports = all
    keep_ports = present

   .. note::
      During the Kilo cycle an older version of Inspector called ironic-discoverd_ was used. Inspector is expected to be a mostly drop-in replacement, and the same client library should be used to connect to both.

      For Kilo, install ironic-discoverd_ of version 1.1.0 or higher instead of python-ironic-inspector-client and use the ``[discoverd]`` option group in both the Bare Metal service and ironic-discoverd configuration files instead of the ones provided above.

Inspection can be initiated using node-set-provision-state. The node should be in the MANAGEABLE state before inspection is initiated.

* Move the node to the manageable state::

    ironic node-set-provision-state <node-uuid> manage

* Initiate inspection::

    ironic node-set-provision-state <node-uuid> inspect

.. note::
   The above commands require the python-ironicclient_ to be version 0.5.0 or greater.

.. _ironic-discoverd: https://pypi.python.org/pypi/ironic-discoverd
.. _python-ironic-inspector-client: https://pypi.python.org/pypi/python-ironic-inspector-client
.. _python-ironicclient: https://pypi.python.org/pypi/python-ironicclient

Specifying the disk for deployment
==================================

Starting with the Kilo release, Bare Metal service supports passing hints to the deploy ramdisk about which disk it should pick for the deployment. The list of supported hints is:

* model (STRING): device identifier
* vendor (STRING): device vendor
* serial (STRING): disk serial number
* size (INT): size of the device in GiB

  .. note::
     A node's 'local_gb' property is often set to a value 1 GiB less than the actual disk size to account for partitioning (this is how DevStack, TripleO and Ironic Inspector work, to name a few). However, in this case ``size`` should be the actual size. For example, for a 128 GiB disk ``local_gb`` will be 127, but the size hint will be 128.

* wwn (STRING): unique storage identifier
* wwn_with_extension (STRING): unique storage identifier with the vendor extension appended
* wwn_vendor_extension (STRING): unique vendor storage identifier
* name (STRING): the device name, e.g. /dev/md0

.. warning::
   The root device hint name should only be used for devices with constant names (e.g. RAID volumes). For SATA, SCSI and IDE disk controllers this hint is not recommended because the order in which the device nodes are added in Linux is arbitrary, resulting in devices like /dev/sda and /dev/sdb `switching around at boot time `_.
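To find suitable values for these hints, you can inspect the disks of the node from a running environment such as the deploy ramdisk or a rescue system. One way to do this, assuming a reasonably recent ``util-linux`` that provides these ``lsblk`` columns::

    lsblk -o NAME,MODEL,VENDOR,SERIAL,SIZE,WWN

Note that ``lsblk`` reports sizes in human-readable units by default, while the ``size`` hint expects a value in GiB.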
To associate one or more hints with a node, update the node's properties with a ``root_device`` key, for example::

    ironic node-update <node-uuid> add properties/root_device='{"wwn": "0x4000cca77fc4dba1"}'

That will guarantee that Bare Metal service will pick the disk device that has the ``wwn`` equal to the specified wwn value, or fail the deployment if it cannot be found.

.. note::
   If multiple hints are specified, a device must satisfy all the hints.

.. _EnableHTTPSinSwift:

Enabling HTTPS in Swift
=======================

The drivers using virtual media use swift for storing boot images and node configuration information (which contains sensitive information needed by the Ironic conductor to provision bare metal hardware). By default, HTTPS is not enabled in swift. HTTPS is required to encrypt all communication between swift and the Ironic conductor, and between swift and the bare metal node (via virtual media). It can be enabled in one of the following ways:

* `Using an SSL termination proxy `_

* `Using native SSL support in swift `_ (recommended by swift for testing purposes only).

Using Bare Metal service as a standalone service
================================================

Starting with the Kilo release, it's possible to use Bare Metal service without other OpenStack services. You should make the following changes to ``/etc/ironic/ironic.conf``:

#. To disable usage of Identity service tokens::

    [DEFAULT]
    ...
    auth_strategy=none

#. If you want to disable the Networking service, you should have your network pre-configured to serve DHCP and TFTP for machines that you're deploying. To disable it, change the following lines::

    [dhcp]
    ...
    dhcp_provider=none

   .. note::
      If you disabled the Networking service and the driver that you use is supported by at most one conductor, PXE boot will still work for your nodes without any manual config editing. This is because you know all the DHCP options that will be used for deployment and can set up your DHCP server appropriately.

      If you have multiple conductors per driver, it would be better to use Networking since it will do all the dynamically changing configurations for you.

If you don't use Image service, it's possible to provide images to Bare Metal service via hrefs.

.. note::
   At the moment, only two types of hrefs are acceptable instead of Image service UUIDs: HTTP(S) hrefs (for example, "http://my.server.net/images/img") and file hrefs (file:///images/img).

There are however some limitations for different drivers:

* If you're using one of the drivers that use the agent deploy method (namely, ``agent_ilo``, ``agent_ipmitool``, ``agent_pyghmi``, ``agent_ssh`` or ``agent_vbox``) you have to know the MD5 checksum of your instance image. To compute it, you can use the following command::

    md5sum image.qcow2
    ed82def8730f394fb85aef8a208635f6  image.qcow2

  Apart from that, because of the way the agent deploy method works, image hrefs can use only the HTTP(S) protocol.

* If you're using the ``iscsi_ilo`` or ``agent_ilo`` driver, the Object Storage service is required, as these drivers need to store a floppy image that is used to pass parameters to the deployment ISO. For this method, too, only HTTP(S) hrefs are acceptable, as HP iLO servers cannot attach other types of hrefs as virtual media.

* Other drivers use the PXE deploy method and there are no special requirements in this case.

Steps to start a deployment are pretty similar to those when using Compute:

#. To use the `ironic CLI `_, set up these environment variables. Since no authentication strategy is being used, the value can be any string for OS_AUTH_TOKEN.
   IRONIC_URL is the URL of the ironic-api process. For example::

    export OS_AUTH_TOKEN=fake-token
    export IRONIC_URL=http://localhost:6385/

#. Create a node in Bare Metal service. At minimum, you must specify the
   driver name (for example, "pxe_ipmitool"). You can also specify all the
   required driver parameters in one command. This will return the node
   UUID::

    ironic node-create -d pxe_ipmitool -i ipmi_address=ipmi.server.net \
        -i ipmi_username=user -i ipmi_password=pass \
        -i deploy_kernel=file:///images/deploy.vmlinuz \
        -i deploy_ramdisk=http://my.server.net/images/deploy.ramdisk

    +--------------+--------------------------------------------------------------------------+
    | Property     | Value                                                                    |
    +--------------+--------------------------------------------------------------------------+
    | uuid         | be94df40-b80a-4f63-b92b-e9368ee8d14c                                     |
    | driver_info  | {u'deploy_ramdisk': u'http://my.server.net/images/deploy.ramdisk',      |
    |              | u'deploy_kernel': u'file:///images/deploy.vmlinuz', u'ipmi_address':    |
    |              | u'ipmi.server.net', u'ipmi_username': u'user', u'ipmi_password':        |
    |              | u'******'}                                                              |
    | extra        | {}                                                                       |
    | driver       | pxe_ipmitool                                                             |
    | chassis_uuid |                                                                          |
    | properties   | {}                                                                       |
    +--------------+--------------------------------------------------------------------------+

   Note that here deploy_kernel and deploy_ramdisk contain links to images
   instead of Image service UUIDs.

#. As in the case of the Compute service, you can also provide
   ``capabilities`` in the node properties, but they will be used only by
   the Bare Metal service (for example, boot mode). You don't need to add
   properties like ``memory_mb``, ``cpus`` etc., as the Bare Metal service
   only requires the UUID of the node you're going to deploy.

#. Then inform the Bare Metal service of the network interface cards which
   are part of the node by creating a port with each NIC's MAC address. In
   this case, they're used for naming of PXE configs for a node::

    ironic port-create -n $NODE_UUID -a $MAC_ADDRESS

#. As there is no Compute service flavor and the instance image is not
   provided with the nova boot command, you also need to specify some fields
   in ``instance_info``. For PXE deployment, they are ``image_source``,
   ``kernel``, ``ramdisk`` and ``root_gb``::

    ironic node-update $NODE_UUID add instance_info/image_source=$IMG \
        instance_info/kernel=$KERNEL instance_info/ramdisk=$RAMDISK \
        instance_info/root_gb=10

   Here $IMG, $KERNEL, $RAMDISK can also be HTTP(S) or file hrefs. For agent
   drivers, you don't need to specify kernel and ramdisk, but the MD5
   checksum of the instance image is required::

    ironic node-update $NODE_UUID add instance_info/image_checksum=$MD5HASH

#. Validate that all parameters are correct::

    ironic node-validate $NODE_UUID

    +------------+--------+----------------------------------------------------------------+
    | Interface  | Result | Reason                                                         |
    +------------+--------+----------------------------------------------------------------+
    | console    | False  | Missing 'ipmi_terminal_port' parameter in node's driver_info.  |
    | deploy     | True   |                                                                |
    | management | True   |                                                                |
    | power      | True   |                                                                |
    +------------+--------+----------------------------------------------------------------+

#. Now you can start the deployment; run::

    ironic node-set-provision-state $NODE_UUID active

   You can manage provisioning by issuing this command. Valid provision
   states are ``active``, ``rebuild`` and ``deleted``.

For iLO drivers, fields that should be provided are:

* ``ilo_deploy_iso`` under ``driver_info``;

* ``ilo_boot_iso``, ``image_source``, ``root_gb`` under ``instance_info``.
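For example, these fields could be set with ``node-update`` before starting
the deployment (a sketch only; the image hrefs below are placeholders for
your own locations)::

    ironic node-update $NODE_UUID add \
        driver_info/ilo_deploy_iso=http://my.server.net/images/deploy.iso \
        instance_info/ilo_boot_iso=http://my.server.net/images/boot.iso \
        instance_info/image_source=http://my.server.net/images/user-image.qcow2 \
        instance_info/root_gb=10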
.. note::
   Before the Liberty release, Ironic was not able to track content changes
   of non-Glance images. Starting with Liberty, it is possible to do so
   using the image modification date. For example, for an HTTP image, if the
   'Last-Modified' header value from the response to a HEAD request to
   "http://my.server.net/images/deploy.ramdisk" is greater than the cached
   image modification time, Ironic will re-download the content. For
   "file://" images, the file system modification time is used.

Other references
----------------

* `Enabling local boot without Compute`_

Enabling the configuration drive (configdrive)
==============================================

Starting with the Kilo release, the Bare Metal service supports exposing a
configuration drive image to the instances.

The configuration drive is usually used in conjunction with the Compute
service, but the Bare Metal service also offers a standalone way of using
it. The following sections will describe both methods.

When used with Compute service
------------------------------

To enable the configuration drive when deploying an instance, pass the
``--config-drive true`` parameter to the ``nova boot`` command, for
example::

    nova boot --config-drive true --flavor baremetal --image test-image instance-1

It's also possible to enable the configuration drive automatically on all
instances by configuring the ``OpenStack Compute service`` to always create
a configuration drive by setting the following option in the
``/etc/nova/nova.conf`` file, for example::

    [DEFAULT]
    ...
    force_config_drive=True

When used standalone
--------------------

When used without the Compute service, the operator needs to create a
configuration drive and provide the file or HTTP URL to the Bare Metal
service.

For the format of the configuration drive, Bare Metal service expects a
``gzipped`` and ``base64`` encoded ISO 9660 [*]_ file with a ``config-2``
label. The ironic client can generate a configuration drive in the
`expected format`_. Just pass a directory path containing the files that
will be injected into it via the ``--config-drive`` parameter of the
``node-set-provision-state`` command, for example::

    ironic node-set-provision-state --config-drive /dir/configdrive_files $node_identifier active

Accessing the configuration drive data
--------------------------------------

When the configuration drive is enabled, the Bare Metal service will create
a partition on the instance disk and write the configuration drive image
onto it. The configuration drive must be mounted before use. This is
performed automatically by many tools, such as cloud-init and
cloudbase-init. To mount it manually on a Linux distribution that supports
accessing devices by labels, simply run the following::

    mkdir -p /mnt/config
    mount /dev/disk/by-label/config-2 /mnt/config

If the guest OS doesn't support accessing devices by labels, you can use
other tools such as ``blkid`` to identify which device corresponds to the
configuration drive and mount it, for example::

    CONFIG_DEV=$(blkid -t LABEL="config-2" -odevice)
    mkdir -p /mnt/config
    mount $CONFIG_DEV /mnt/config

.. [*] A config drive could also be a data block with a VFAT filesystem on
   it instead of ISO 9660. But it's unlikely that it would be needed since
   ISO 9660 is widely supported across operating systems.

Cloud-init integration
----------------------

The configuration drive can be especially useful when used with cloud-init,
but in order to use it a few rules should be followed:

* ``Cloud-init`` data should be organized in the `expected format`_.
* Since the Bare Metal service uses a disk partition as the configuration
  drive, it will only work with cloud-init version >= 0.7.5.

* ``Cloud-init`` has a collection of data source modules, so when building
  the image with `disk-image-builder`_ we have to define the
  ``DIB_CLOUD_INIT_DATASOURCES`` environment variable and set the
  appropriate sources to enable the configuration drive, for example::

    DIB_CLOUD_INIT_DATASOURCES="ConfigDrive, OpenStack" disk-image-create -o fedora-cloud-image fedora baremetal

  For more information see the disk-image-builder documentation on how to
  configure cloud-init data sources.

.. _`expected format`: http://docs.openstack.org/user-guide/cli_config_drive.html#openstack-metadata-format

.. _BuildingDeployRamdisk:

Building or downloading a deploy ramdisk image
==============================================

Ironic depends on having an image with the ironic-python-agent_ (IPA)
service running on it for controlling and deploying bare metal nodes.

You can download a pre-built version of the deploy ramdisk built with the
`CoreOS tools`_:

* CoreOS deploy kernel
* CoreOS deploy ramdisk

Building from source
--------------------

There are two known methods for creating the deployment image with the IPA
service:

.. _BuildingCoreOSDeployRamdisk:

CoreOS tools
~~~~~~~~~~~~

#. Clone the ironic-python-agent_ project::

    git clone https://github.com/openstack/ironic-python-agent

#. Install the requirements::

    Fedora 21/RHEL7/CentOS7:
        sudo yum install docker gzip util-linux cpio findutils grep gpg

    Fedora 22 or higher:
        sudo dnf install docker gzip util-linux cpio findutils grep gpg

    Ubuntu 14.04 (trusty) or higher:
        sudo apt-get install docker.io gzip uuid-runtime cpio findutils grep gnupg

#. Change directory to ``imagebuild/coreos``::

    cd ironic-python-agent/imagebuild/coreos

#. Start the docker daemon::

    Fedora/RHEL7/CentOS7:
        sudo systemctl start docker

    Ubuntu:
        sudo service docker start

#. Create the image::

    sudo make

#. Or, create an ISO image to boot with virtual media::

    sudo make iso

.. note::
   Once built, the deploy ramdisk and kernel will appear inside a directory
   called ``UPLOAD``.

.. _BuildingDibBasedDeployRamdisk:

disk-image-builder
~~~~~~~~~~~~~~~~~~

#. Install disk-image-builder_ from pip or from your distro's packages::

    sudo pip install diskimage-builder

#. Create the image::

    disk-image-create ironic-agent fedora -o ironic-deploy

   The above command creates the deploy ramdisk and kernel named
   ``ironic-deploy.vmlinuz`` and ``ironic-deploy.initramfs`` in your current
   directory.

#. Or, create an ISO image to boot with virtual media::

    disk-image-create ironic-agent fedora iso -o ironic-deploy

   The above command creates the deploy ISO named ``ironic-deploy.iso`` in
   your current directory.

.. note::
   Fedora was used as an example for the base operating system. Please check
   the `diskimage-builder documentation`_ for other supported operating
   systems.

.. _`diskimage-builder documentation`: http://docs.openstack.org/developer/diskimage-builder
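If images are served from the Image service rather than via standalone
hrefs, the kernel and ramdisk built above still need to be uploaded to
Glance before they can be referenced from a node's ``driver_info``. A
sketch, assuming the disk-image-builder output above and the v2 images API
(the image names are arbitrary)::

    glance image-create --name deploy-vmlinuz --visibility public \
        --disk-format aki --container-format aki < ironic-deploy.vmlinuz
    glance image-create --name deploy-initrd --visibility public \
        --disk-format ari --container-format ari < ironic-deploy.initramfs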
Trusted boot with partition image
=================================

Starting with the Liberty release, Ironic supports trusted boot with
partition images. This means that at the end of the deployment process, when
the node is rebooted with the new user image, ``trusted boot`` will be
performed. It will measure the node's BIOS, boot loader, Option ROM and the
Kernel/Ramdisk, to determine whether a bare metal node deployed by Ironic
should be trusted.

It's important to note that in order for this to work the node being
deployed **must** have Intel `TXT`_ hardware support. The image being
deployed with Ironic must have ``oat-client`` installed within it.

The following will describe how to enable ``trusted boot`` and boot with PXE
and Nova:

#. Create a customized user image with ``oat-client`` installed::

    disk-image-create -u fedora baremetal oat-client -o $TRUST_IMG

   For more information on creating customized images, see
   `ImageRequirement`_.

#. Enable VT-x, VT-d, TXT and TPM on the node. This can be done manually
   through the BIOS. Depending on the platform, several reboots may be
   needed.

#. Enroll the node and update the node capability value::

    ironic node-create -d pxe_ipmitool

    ironic node-update $NODE_UUID add properties/capabilities={'trusted_boot':true}

#. Create a special flavor::

    nova flavor-key $TRUST_FLAVOR_UUID set 'capabilities:trusted_boot'=true

#. Prepare `tboot`_ and mboot.c32 and put them into the tftp_root or
   http_root directory on all nodes with the ironic-conductor processes::

    Ubuntu:
        cp /usr/lib/syslinux/mboot.c32 /tftpboot/

    Fedora:
        cp /usr/share/syslinux/mboot.c32 /tftpboot/

   *Note: The actual location of mboot.c32 varies among different
   distribution versions.*

   tboot can be downloaded from
   https://sourceforge.net/projects/tboot/files/latest/download

#. Install an OAT Server. An `OAT Server`_ should be running and configured
   correctly.

#. Boot an instance with Nova::

    nova boot --flavor $TRUST_FLAVOR_UUID --image $TRUST_IMG --user-data $TRUST_SCRIPT trusted_instance

   *Note* that the node will be measured during ``trusted boot`` and the
   hash values saved into the `TPM`_. An example of TRUST_SCRIPT can be
   found in the `trust script example`_.

#. Verify the result via the OAT Server. This is outside the scope of
   Ironic. At the moment, users can manually verify the result by following
   the `manual verify steps`_.

.. _`TXT`: http://en.wikipedia.org/wiki/Trusted_Execution_Technology
.. _`tboot`: https://sourceforge.net/projects/tboot
.. _`TPM`: http://en.wikipedia.org/wiki/Trusted_Platform_Module
.. _`OAT Server`: https://github.com/OpenAttestation/OpenAttestation/wiki
.. _`trust script example`: https://wiki.openstack.org/wiki/Bare-metal-trust#Trust_Script_Example
.. _`manual verify steps`: https://wiki.openstack.org/wiki/Bare-metal-trust#Manual_verify_result

Troubleshooting
===============

Once all the services are running and configured properly, and a node has
been enrolled with the Bare Metal service and is in the ``available``
provision state, the Compute service should detect the node as an available
resource and expose it to the scheduler.

.. note::
   There is a delay, and it may take up to a minute (one periodic task
   cycle) for the Compute service to recognize any changes in the Bare Metal
   service's resources (both additions and deletions).

In addition to watching ``nova-compute`` log files, you can see the
available resources by looking at the list of Compute hypervisors. The
resources reported therein should match the bare metal node properties, and
the Compute service flavor.
Here is an example set of commands to compare the resources in Compute
service and Bare Metal service::

    $ ironic node-list
    +--------------------------------------+---------------+-------------+--------------------+-------------+
    | UUID                                 | Instance UUID | Power State | Provisioning State | Maintenance |
    +--------------------------------------+---------------+-------------+--------------------+-------------+
    | 86a2b1bb-8b29-4964-a817-f90031debddb | None          | power off   | available          | False       |
    +--------------------------------------+---------------+-------------+--------------------+-------------+

    $ ironic node-show 86a2b1bb-8b29-4964-a817-f90031debddb
    +------------------------+----------------------------------------------------------------------+
    | Property               | Value                                                                |
    +------------------------+----------------------------------------------------------------------+
    | instance_uuid          | None                                                                 |
    | properties             | {u'memory_mb': u'1024', u'cpu_arch': u'x86_64', u'local_gb': u'10',  |
    |                        | u'cpus': u'1'}                                                       |
    | maintenance            | False                                                                |
    | driver_info            | { [SNIP] }                                                           |
    | extra                  | {}                                                                   |
    | last_error             | None                                                                 |
    | created_at             | 2014-11-20T23:57:03+00:00                                            |
    | target_provision_state | None                                                                 |
    | driver                 | pxe_ipmitool                                                         |
    | updated_at             | 2014-11-21T00:47:34+00:00                                            |
    | instance_info          | {}                                                                   |
    | chassis_uuid           | 7b49bbc5-2eb7-4269-b6ea-3f1a51448a59                                 |
    | provision_state        | available                                                            |
    | reservation            | None                                                                 |
    | power_state            | power off                                                            |
    | console_enabled        | False                                                                |
    | uuid                   | 86a2b1bb-8b29-4964-a817-f90031debddb                                 |
    +------------------------+----------------------------------------------------------------------+

    $ nova hypervisor-show 1
    +-------------------------+--------------------------------------+
    | Property                | Value                                |
    +-------------------------+--------------------------------------+
    | cpu_info                | baremetal cpu                        |
    | current_workload        | 0                                    |
    | disk_available_least    | -                                    |
    | free_disk_gb            | 10                                   |
    | free_ram_mb             | 1024                                 |
    | host_ip                 | [ SNIP ]                             |
    | hypervisor_hostname     | 86a2b1bb-8b29-4964-a817-f90031debddb |
    | hypervisor_type         | ironic                               |
    | hypervisor_version      | 1                                    |
    | id                      | 1                                    |
    | local_gb                | 10                                   |
    | local_gb_used           | 0                                    |
    | memory_mb               | 1024                                 |
    | memory_mb_used          | 0                                    |
    | running_vms             | 0                                    |
    | service_disabled_reason | -                                    |
    | service_host            | my-test-host                         |
    | service_id              | 6                                    |
    | state                   | up                                   |
    | status                  | enabled                              |
    | vcpus                   | 1                                    |
    | vcpus_used              | 0                                    |
    +-------------------------+--------------------------------------+

Maintenance mode
----------------

Maintenance mode may be used if you need to take a node out of the resource
pool. Putting a node in maintenance mode will prevent Bare Metal service
from executing periodic tasks associated with the node. This will also
prevent Compute service from placing a tenant instance on the node by not
exposing the node to the nova scheduler. Nodes can be placed into
maintenance mode with the following command. ::

    $ ironic node-set-maintenance $NODE_UUID on

As of the Kilo release, a maintenance reason may be included with the
optional ``--reason`` command line option. This is a free form text field
that will be displayed in the ``maintenance_reason`` section of the
``node-show`` command. ::

    $ ironic node-set-maintenance $UUID on --reason "Need to add ram."

    $ ironic node-show $UUID
    +------------------------+--------------------------------------+
    | Property               | Value                                |
    +------------------------+--------------------------------------+
    | target_power_state     | None                                 |
    | extra                  | {}                                   |
    | last_error             | None                                 |
    | updated_at             | 2015-04-27T15:43:58+00:00            |
    | maintenance_reason     | Need to add ram.                     |
    | ...                    | ...                                  |
    | maintenance            | True                                 |
    | ...                    | ...                                  |
    +------------------------+--------------------------------------+
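To see which nodes are currently in maintenance mode, ``node-list`` can
filter on the maintenance flag (a sketch; assumes a python-ironicclient
version that supports the ``--maintenance`` option)::

    $ ironic node-list --maintenance True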
To remove maintenance mode and clear any ``maintenance_reason`` use the
following command. ::

    $ ironic node-set-maintenance $NODE_UUID off

.. _disk-image-builder: https://github.com/openstack/diskimage-builder
.. _ironic-python-agent: https://github.com/openstack/ironic-python-agent

.. _radosgw support:

===========================
Ceph Object Gateway support
===========================

Overview
========

The Ceph project is a powerful distributed storage system. It contains an
object store and provides a RADOS Gateway Swift API which is compatible with
the OpenStack Swift API. These two APIs use different formats for their
temporary URLs. Ironic added support for RADOS Gateway temporary URLs in the
Mitaka release.

Configure Ironic and Glance with RADOS Gateway
==============================================

#. Install Ceph storage with RADOS Gateway. See the Ceph documentation.

#. Create RADOS Gateway credentials for Glance by executing the following
   commands on the RADOS Gateway admin host::

    sudo radosgw-admin user create --uid="GLANCE_USERNAME" --display-name="User for Glance"
    sudo radosgw-admin subuser create --uid=GLANCE_USERNAME --subuser=GLANCE_USERNAME:swift --access=full
    sudo radosgw-admin key create --subuser=GLANCE_USERNAME:swift --key-type=swift --secret=STORE_KEY
    sudo radosgw-admin user modify --uid=GLANCE_USERNAME --temp-url-key=TEMP_URL_KEY

   Replace GLANCE_USERNAME with a user name for Glance access, and replace
   STORE_KEY and TEMP_URL_KEY with suitable keys.

   Note: Do not use the "--gen-secret" CLI parameter because it will cause
   the "radosgw-admin" utility to generate keys with slash symbols which do
   not work with Glance.

#. Configure the Glance API service for the RADOS Swift API as a backend.
   Edit the configuration file for the Glance API service (typically located
   at ``/etc/glance/glance-api.conf``). Replace RADOS_IP and PORT with the
   IP/port of the RADOS Gateway API service::

    [glance_store]
    stores = file, http, swift
    default_store = swift
    swift_store_auth_version = 1
    swift_store_auth_address = http://RADOS_IP:PORT/auth/1.0
    swift_store_user = GLANCE_USERNAME:swift
    swift_store_key = STORE_KEY
    swift_store_container = glance
    swift_store_create_container_on_put = True

   Note: RADOS Gateway uses the FastCGI protocol for interacting with an
   HTTP server. Read your HTTP server documentation if you want to enable
   HTTPS support.

#. Restart the Glance API service and upload all needed images.

#. Change the Ironic configuration file on the conductor host(s) as
   follows::

    [glance]
    swift_container = glance
    swift_api_version = v1
    swift_endpoint_url = http://RADOS_IP:PORT
    swift_temp_url_key = TEMP_URL_KEY
    temp_url_endpoint_type=radosgw

#. Restart the Ironic conductor service(s).
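The credentials created above can be checked directly against the RADOS
Gateway Swift API with the standard ``swift`` client (a sketch; assumes
python-swiftclient is installed and uses the same placeholders as above)::

    swift -A http://RADOS_IP:PORT/auth/1.0 -U GLANCE_USERNAME:swift -K STORE_KEY stat

A successful response prints the account headers; an authentication error
indicates that the user, sub-user or key does not match what was created
with ``radosgw-admin``.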
.. _raid:

==================
RAID Configuration
==================

Overview
========

Ironic supports RAID configuration for bare metal nodes. It allows operators
to specify the desired RAID configuration via the Ironic CLI or REST API.
The desired RAID configuration is applied on the bare metal during manual
cleaning.

Prerequisites
=============

The bare metal node needs to use a driver that supports RAID configuration.
Drivers may implement RAID configuration either in-band or out-of-band.
Currently, no upstream driver supports out-of-band RAID configuration.

In-band RAID configuration is done using the Ironic Python Agent ramdisk.
For in-band RAID configuration using the agent ramdisk, a hardware manager
which supports RAID should be bundled with the ramdisk. The drivers
supporting RAID configuration can be found using the ironic CLI
``ironic node-validate <node-uuid>``.

Build agent ramdisk which supports RAID configuration
=====================================================

For doing in-band RAID configuration, Ironic needs an agent ramdisk bundled
with a hardware manager which supports RAID configuration for your hardware.
For example, the :ref:`DIB_raid_support` should be used for HPE Proliant
Servers.

RAID configuration JSON format
==============================

The desired RAID configuration and current RAID configuration are
represented in JSON format.

Target RAID configuration
-------------------------

This is the desired RAID configuration on the bare metal node. Using the
Ironic CLI or REST API, the operator sets the ``target_raid_config`` field
of the node. The target RAID configuration will be applied during manual
cleaning.

Target RAID configuration is a dictionary having ``logical_disks`` as the
key. The value for the ``logical_disks`` is a list of JSON dictionaries. It
looks like::

    {
     'logical_disks': [
       {},
       {},
       .
       .
       .
     ]
    }

If the ``target_raid_config`` is an empty dictionary, it unsets the value of
``target_raid_config`` if the value was set by a previous RAID configuration
on the node.

Each logical disk dictionary contains the desired properties of the logical
disk supported by the driver. These properties are discoverable by using the
Ironic CLI or REST API::

    Ironic CLI:
        ironic --ironic-api-version 1.15 driver-raid-logical-disk-properties <driver>

    Ironic REST API:
        curl -X GET -H "Content-Type: application/json" -H "X-Auth-Token: $AUTH_TOKEN" -H "X-OpenStack-Ironic-API-Version: 1.15" http://<ironic-address>/v1/drivers/<driver>/raid/logical_disk_properties

The RAID feature is available in ironic API version 1.15 and above. If
``--ironic-api-version`` is not used in the CLI, it will error out with the
following message::

    No API version was specified and the requested operation was not
    supported by the client's negotiated API version 1.9.
    Supported version range is: 1.1 to ...

where the "..." in the above error message would be the maximum version
supported by the service.

The RAID properties can be split into 4 different types:

#. Mandatory properties. These properties must be specified for each logical
   disk and have no default values.

   - ``size_gb`` - Size (Integer) of the logical disk to be created in GiB.
     ``MAX`` may be specified if the logical disk should use all of the
     remaining space available. This can be used only when backing physical
     disks are specified (see below).

   - ``raid_level`` - RAID level for the logical disk. Ironic supports the
     following RAID levels: 0, 1, 2, 5, 6, 1+0, 5+0, 6+0.

#. Optional properties. These properties have default values and they may be
   overridden in the specification of any logical disk.

   - ``volume_name`` - Name of the volume. Should be unique within the Node.
     If not specified, the volume name will be auto-generated.

   - ``is_root_volume`` - Set to ``true`` if this is the root volume. At
     most one logical disk can have this set to ``true``; the other logical
     disks must have this set to ``false``. The ``root device hint`` will be
     saved, if the driver is capable of retrieving it. This is ``false`` by
     default.
#. Backing physical disk hints. These hints are specified for each logical
   disk to let Ironic find the desired disks for RAID configuration. This is
   machine-independent information. This serves the use-case where the
   operator doesn't want to provide individual details for each bare metal
   node.

   - ``share_physical_disks`` - Set to ``true`` if this logical disk can
     share physical disks with other logical disks. The default value is
     ``false``.

   - ``disk_type`` - ``hdd`` or ``ssd``. If this is not specified, disk type
     will not be a criterion to find backing physical disks.

   - ``interface_type`` - ``sata`` or ``scsi`` or ``sas``. If this is not
     specified, interface type will not be a criterion to find backing
     physical disks.

   - ``number_of_physical_disks`` - Integer, number of disks to use for the
     logical disk. Defaults to the minimum number of disks required for the
     particular RAID level.

#. Backing physical disks. This is the actual machine-dependent information.
   This is suitable for environments where the operator wants to automate
   the selection of physical disks with a 3rd-party tool based on a wider
   range of attributes (eg. S.M.A.R.T. status, physical location). The
   values for these properties are hardware dependent.

   - ``controller`` - The name of the controller as read by the driver.

   - ``physical_disks`` - A list of physical disks to use as read by the
     driver.

.. note::
   If properties from both "Backing physical disk hints" and "Backing
   physical disks" are specified, they should be consistent with each other.
   If they are not consistent, then the RAID configuration will fail
   (because the appropriate backing physical disks could not be found).

Examples for ``target_raid_config``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

*Example 1*. Single RAID disk of RAID level 5 with all of the space
available. Make this the root volume to which Ironic deploys the image::

    {
     'logical_disks': [
       {
         'size_gb': 'MAX',
         'raid_level': '5',
         'is_root_volume': true
       }
     ]
    }

*Example 2*. Two RAID disks. One with RAID level 5 of 100 GiB and make it
root volume and use SSD. Another with RAID level 1 of 500 GiB and use HDD::

    {
     'logical_disks': [
       {
         'size_gb': 100,
         'raid_level': '5',
         'is_root_volume': true,
         'disk_type': 'ssd'
       },
       {
         'size_gb': '500',
         'raid_level': '1',
         'disk_type': 'hdd'
       }
     ]
    }

*Example 3*. Single RAID disk. I know which disks and controller to use::

    {
     'logical_disks': [
       {
         'size_gb': 100,
         'raid_level': '5',
         'controller': 'Smart Array P822 in Slot 3',
         'physical_disks': ['6I:1:5', '6I:1:6', '6I:1:7'],
         'is_root_volume': true
       }
     ]
    }

*Example 4*. Using backing physical disks::

    {
      'logical_disks': [
        {
          'size_gb': 50,
          'raid_level': '1+0',
          'controller': 'RAID.Integrated.1-1',
          'volume_name': 'root_volume',
          'is_root_volume': 'true',
          'physical_disks': [
            'Disk.Bay.0:Encl.Int.0-1:RAID.Integrated.1-1',
            'Disk.Bay.1:Encl.Int.0-1:RAID.Integrated.1-1'
          ]
        },
        {
          'size_gb': 100,
          'raid_level': '5',
          'controller': 'RAID.Integrated.1-1',
          'volume_name': 'data_volume',
          'physical_disks': [
            'Disk.Bay.2:Encl.Int.0-1:RAID.Integrated.1-1',
            'Disk.Bay.3:Encl.Int.0-1:RAID.Integrated.1-1',
            'Disk.Bay.4:Encl.Int.0-1:RAID.Integrated.1-1'
          ]
        }
      ]
    }
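Any of the examples above can be applied by saving the JSON to a file and
passing it to the ``node-set-target-raid-config`` command described in the
Workflow_ section below (a usage sketch; ``raid_conf.json`` is a placeholder
file name)::

    ironic --ironic-api-version 1.15 node-set-target-raid-config <node-uuid> raid_conf.json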
Current RAID configuration
--------------------------

After the target RAID configuration is applied on the bare metal node,
Ironic populates the current RAID configuration. This is populated in the
``raid_config`` field of the Ironic node. This contains the details about
every logical disk after they were created on the bare metal node. It
contains details like the RAID controller used, the backing physical disks
used, the WWN of each logical disk, etc. It also contains information about
each physical disk found on the bare metal node.

To get the current RAID configuration::

    Ironic CLI:
        ironic --ironic-api-version 1.15 node-show <node-uuid>

    REST API:
        curl -X GET -H "Content-Type: application/json" -H "X-Auth-Token: $AUTH_TOKEN" -H "X-OpenStack-Ironic-API-Version: 1.15" http://<ironic-address>/v1/nodes/<node-uuid>/states

Workflow
========

* Operator configures the bare metal node with a driver that has a
  ``RAIDInterface``.

* For in-band RAID configuration, operator builds an agent ramdisk which
  supports RAID configuration by bundling the hardware manager with the
  ramdisk. See `Build agent ramdisk which supports RAID configuration`_ for
  more information.

* Operator prepares the desired target RAID configuration as mentioned in
  `Target RAID configuration`_. The target RAID configuration is set on the
  Ironic node::

    Ironic CLI:
        ironic --ironic-api-version 1.15 node-set-target-raid-config <node-uuid> <JSON file containing target RAID configuration>

    REST API:
        curl -X PUT -H "Content-Type: application/json" -H "X-Auth-Token: $AUTH_TOKEN" -H "X-OpenStack-Ironic-API-Version: 1.15" -d '<target_raid_config>' http://<ironic-address>/v1/nodes/<node-uuid>/states/raid

  The Ironic CLI can also accept the input from standard input::

    ironic --ironic-api-version 1.15 node-set-target-raid-config <node-uuid> -

* Create a JSON file with the RAID clean steps for manual cleaning. Add
  other clean steps as desired::

    {
      "clean_steps": [
        {
          "interface": "raid",
          "step": "delete_configuration"
        },
        {
          "interface": "raid",
          "step": "create_configuration"
        }]
    }

  .. note::
     'create_configuration' doesn't remove existing disks. It is recommended
     to add 'delete_configuration' before 'create_configuration' to make
     sure that only the desired logical disks exist in the system after
     manual cleaning.

* Bring the node to the ``manageable`` state and do a ``clean`` action to
  start cleaning on the node::

    Ironic CLI:
        ironic --ironic-api-version 1.15 node-set-provision-state <node-uuid> clean --clean-steps <JSON file containing clean steps>

    REST API:
        curl -X PUT -H "Content-Type: application/json" -H "X-Auth-Token: $AUTH_TOKEN" -H "X-OpenStack-Ironic-API-Version: 1.15" -d '{"target": "clean", "clean_steps": <clean steps as JSON>}' http://<ironic-address>/v1/nodes/<node-uuid>/states/provision

* After manual cleaning is complete, the current RAID configuration can be
  viewed using::

    Ironic CLI:
        ironic --ironic-api-version 1.15 node-show <node-uuid>

    REST API:
        curl -X GET -H "Content-Type: application/json" -H "X-Auth-Token: $AUTH_TOKEN" -H "X-OpenStack-Ironic-API-Version: 1.15" http://<ironic-address>/v1/nodes/<node-uuid>/states

Using RAID in nova flavor for scheduling
========================================

The operator can specify the `raid_level` capability in a nova flavor for a
node to be selected for scheduling::

    nova flavor-key my-baremetal-flavor set capabilities:raid_level="1+0"

Developer documentation
=======================

In-band RAID configuration is done using the IPA ramdisk. The IPA ramdisk
supports pluggable hardware managers, which can be used to extend the
functionality offered by the IPA ramdisk using stevedore plugins. For more
information, see the Ironic Python Agent `Hardware Manager`_ documentation.

.. _`Hardware Manager`: http://docs.openstack.org/developer/ironic-python-agent/#hardware-managers

The hardware manager that supports RAID configuration should do the
following:

#. Implement a method named ``create_configuration``. This method creates
   the RAID configuration as given in ``target_raid_config``. After
   successful RAID configuration, it returns the current RAID configuration
   information which ironic uses to set ``node.raid_config``.

#. Implement a method named ``delete_configuration``. This method deletes
   all the RAID disks on the bare metal.
#. Return these two clean steps in the ``get_clean_steps`` method with
   priority as 0. Example::

    return [{'step': 'create_configuration',
             'interface': 'raid',
             'priority': 0},
            {'step': 'delete_configuration',
             'interface': 'raid',
             'priority': 0}]
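Putting the three methods together, a skeleton of such a hardware manager
might look like the following (an illustrative sketch only: the module and
class names are made up and the actual RAID operations are left as comments,
while the ``HardwareManager`` base class and ``HardwareSupport`` levels come
from ironic-python-agent)::

    # example_hardware_manager.py - hypothetical module name
    from ironic_python_agent import hardware


    class ExampleRAIDHardwareManager(hardware.HardwareManager):
        """Sketch of a hardware manager exposing the two RAID clean steps."""

        HARDWARE_MANAGER_VERSION = '1.0'

        def evaluate_hardware_support(self):
            # A real manager would probe for its specific RAID controller
            # before claiming support.
            return hardware.HardwareSupport.SERVICE_PROVIDER

        def create_configuration(self, node, ports):
            target = node.get('target_raid_config', {})
            # ... drive the vendor tooling here to create the logical
            # disks described by `target` ...
            # Return the resulting configuration; ironic stores it in
            # node.raid_config.
            return target

        def delete_configuration(self, node, ports):
            # ... remove all logical disks via the vendor tooling ...
            return {}

        def get_clean_steps(self, node, ports):
            return [{'step': 'create_configuration',
                     'interface': 'raid',
                     'priority': 0},
                    {'step': 'delete_configuration',
                     'interface': 'raid',
                     'priority': 0}]

The manager is then exposed to IPA through a stevedore entry point (the
entry point name is deployment-specific) so that it is loaded when the
ramdisk boots.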
ɀ,f?NwWOtunwկ~vUWuأɦV /ԑIbz5Ef%z;>ptzoΥ*RݴWo2w Fc's㝹1ϻC""u=EFv0@`èZ^VEn6Xsno_-DdW'ο+4vU#C9]Io矗~+ͿwwٕͶbxѦg}yE{YIU@Y{+ys$Ɩbq7&4>?Mr)ة8%.DGh.5GZZJ?D/ӾTꂌxI.+Fڟs{3r=(NqrGj<k};nMЈ̯$?Iƾ1m|%~-u#76{}+71??][h9EEq)ir8}ݬ%b>^1z ^wi繙'ܝKu:MpxyG&YzP)jKWrھwxd ;fvꂰY+@5gr97%:3Q"f>7{$sFTq6;7ݙeIlLknL>7+ǚ?-94-)6ԓܢ渑dn{3?􀿃h[i\c52+F/^w 䴣 >"\r7Vu;Hëfyj'g91;Ʒ4r:J^v6꺉֒Z, =t[Q>՜PXsno柯C(+Ӊcc|M[sZ3s;$I'eɵ4N^r:Y$$,9S7\KԈf;S&zh¤vZ`F7QׂYgff\[SŦ\Jq!\mRf.RuҙQof}@KZq tz= A99^lRv ׂeUuAEsqqREդ_崍 SG(J$+c-0v>S$|ߒf߉ +_E+Vo??a;'mscb4'S0k@"IQTHE&iz5I#mlD:HU7yڟs{3t-Xd[KE&w/5N3v!Ss֙/jl3$9V(ȳ$X[[5NEe5EOA[#~ ( b&-72fB۪cϹ TĪO2 hJ'9#3QF&ƗUL~I ey䶉YsBEFLqgr}K\Mq} 7[/|7\Kjdz3_׾կ"{\ X`LcNnZͻ3, /tvAz),Mb]*O BlgK^F&[Hm'tok~nT2\7(7$ԥ߻m"$8Ic$ǷIJ[(#r;mF  ra՗z3z[tת^1z ^wiŷD> |z&F!HEqc ~o6oMko=1dXsno湷aHOVuoTç>gy9]j^1ZT0˒-^g &lG`V5VmD9_s CA]3#!x'Fk-ŴXTVEzc4(,`fc]ꝝ]4YW5|^h hRcϹWε[~ FKzh O53˟(^ڃ }[nQJZZV|SrzUd;[ [zZr@352"oJZ^ ^@N:X iYr񎁬!܀pG7ͺjĊLMzE:sLkE֖FJWf*ۛ9w5 xMLWfy>cb,/ bsParwjA.C4F$dsvDZj9p8qB/T\y,Ih'TiWAj+^ ,7sJ@fcbČ6;̎00X[)I4ZQL=g К.o? hfrt+Aau'@oSY+FSI*%dmNp*rz8Ǟ r-v\E1yq}+JeswfM[v `kkl`W3_TȭlR+?^ nl~ VE+F\1JV| 2(%cvfͭ׋_8 h͛3/ (ӟkX<ͦeVNKjӫWC_m&[1h+3Ixb\/?5lX#$kHUv8lǎCkx0Y vs4;;45F8iȎ;EXnvyNL5[UBDlkaz܀V^/䴿ujwٕͶbxѦg}yE#̕no,g޼_o;8ڍ[ۚ;ȹ1r,pCͬT2^抑Ӯ*un3VfvZI23|n(9|$"<.^ۛ_o15Z޼D6;?y-9itMNhr۫B9ʹ!wo;Q8z`u-qMZ;tMa<̧%>T'@N{\cUTXa,RGoICE2G1+C Л␛i+"9=#OsC9*ZgRy:|FNhE23 k T='%Č܀ʟ\7ik.H7Vnơ 0B{gkCIwLn;T*O 7»0;!@o&T+FolkW/Ώ7:ZH.Hla}+߀7^zA9E5;Y?qu}03azd]!cQMn ̍Y 7U(2Wr:"tFtwdgٱhfv}3q;oo;cF+Wae`Da@NqN܀պfHCi:3f,0#$HhS*zha8fQ.ph9z35.Vm^B+y9.w21d#YI>:ݳ} E$qn~Z XuK-=Faş}76T ֘r1 g9ۛ`q06ۨ ~eljG Pz3ù a`LxtZ&GEA! O =86]+o0?\^7bohjWX%f=nOUKӀhY#c*GEƲ_CJHFNG̋/?3N~L2q |E}ʎ[E9 Xa^7bohjWX%f=nOUKFJYdqzҎ1}~1ڲ` ?Rk$/gY)fì܀/ȷ/] 5z3-66&f0e[VN;r*(9-y_< tv*MNV`dz'+Co6raH ew8XΨY x1 q̠ U՜ RWat{3O Лi745+FC,ro@?z3XhC%,iөԂ$fdƑfvAE;L ǭ8&Qu]'g䥲&YY dشs@*0r~þ "BtAZN&@o&߆n`Mr̛p) ~@M{EIzXb{7/K7B9DӿIIӥ6ƛ߈J&oItcY#6j4ڝpgd/؁4aKn2+fa!t\JGz9 &C@N{Zqr]m3QmkUsrhVn\f Cf~&@of8kW^dހ~x9X.=m:R?Xslb~lkZ$1_܊:.0Ӷo #욝_7*lmر(m8NqٟM^*oTp:=*.V7 YxJ9iUzܶj^(ދ2rs͜&@o˶Àjgӝ8#a 0 n@7rڣʚu\mE+uY=i<+huCQqetT|Pe2<~) xMpZXf/;Խ`xn 7 YxJ9iULEn*tq(ĥ5{hc+/avt:xm4˼9#Nc͔iz3dހ~?F0RnpɐӞV\e9<'7$:ٛ'F% NNϾĜZ^cϹ] ^7%wcWef/;Խ~@7 YxJ9iՐ@o,8gTv;Y)hT:7ጜ΋1$٩ɘh;5=ۛ^ ^73Xf/;Խ=& &C@N{Zqut8@=܌Bh 7?fzгC ߝ^0#N Rω<ۛkfJ6, 2EN{gA! O =zr:^S,{w6t#3%#ts<~zĄx~pD)īRcϹG/ ЛҦWwWR[`` Sk܀:h2d)䴧WSNGtLά0H^Ir9JeLxzwqۜƉ/|_;3}|g`97W z=+F/ 2uo@c?XvDڡnN|udSiO+]>ۛ7tyM(kW^dހ*ol={8<9A! O =8t7ۛkfZn ̍In)8[>_C9݉*x1n@nmx'tM!@7XaG^7bohjWX%f=nOހf?Xl|j6I(_%9| ☷O͞4;3Y=;ws.=Er\f*o".mv RC܀/ȷ/] & }0ƑYwbmAr޳mkٺW8Ւ7 O!nw@9UeAEci{3/o:{!kDU*TɏzŨy`6g8L >})XinHǟiP:݌>_06%C&k +_ ynde>MD ETK RCa2jtAjK6V\vW_=O>yԩyH@UUB-zzŨGa*`68䮜~R8UyYaP0?lSv(Mte׬; r:6 ײu;/8Ւrz z' ;@>cۯ Ν;w̙<$;{ޞRjmT cb@< 03WN?v)X*͋&FZ?fl7?Lc;)P{HGߡB'pT9k枛c? v' xrz @9=Z]it>xw^kK/c*NէJr*Tɏր( 0[9(S>u)X&}Ox4G6>4I"F'_w,&mڌp-wJgq'Ғx ;T™A@N:9C#L ׯK]|yC8Uy; Mk6 Yr{on#tfM)A'aj$S5t5T}Dii[lMQIn ]9=&xsý {4k M NJiu5Z<1Մ&fE2鿾&T™A@N:9Cl& 0z\1Zb`sP\9ŀ5 rzPQ x![)7Xo{Yk(fȕu1 xM9=;"PeTu.HUb"LLA`nzj저ZhСCdD@N{gGNw߂i_j*O`ab 'pĉ~>zhE\9^@Nw߂i_j 9ښBN/!0.}7*V'? /1Y+5[ 9J9ko`ab @ɓǎ[0\!Xvw^@NM0 Y/BT.0q" 8r*C|W%\9ki/NB9 v2E1ab kB@5ĺ&m;;;wq[D:@S~5. 
rz Q `j EZДkUڅ1=@KfKBWN?ˀ5/?'yk9[Ct(1#裏kOO^W+3gըyە; ^@N{q|+G -f015$5{RkXEݖ>442A_~N#xM@TZq~R E9[ab kK@ZZQƫL䘄_SBڵk|SN}C8UR^Ly@N/E4(-֜DަևO8Qkś<}!_pܹsjOyH@wٽ=U*Tպ3"i `ab W^yekkғZzM*So[#z=z>w}իK//|`*NէJr*Tպg5e<g f014Y Yp 7|L^Pf8p@Zz%HץDiC8Uy; Mk7l"LLA$9%9:B{=xux֓',ƙ?C@#{'])U*QZi89l:"LLA4#Zk.^-ՠ娆5|q94=k2 ټ& -FLi+bE֨dEj,qe13 ~eMnhY#rLC=}yU*1!@@Nwy%[ ab hB@35v-ժ58.Pv[۱ctHŭܵ[IZ#MQqottiy{7#B,g:=ZxlQV+hAlm'ODq+wm]JnV2qkh @@NwyUYhBSWe}j`Gu*1e @@N7%cz &hz^&kMPhCA @  ;,4%Os`̪2XZ_G+֮Mq)( @ t {4,M k) @'y9jUnژz}mKA!@  [Cٗ!Mk_>2[s@ H9#6֔K_RixmIBC[ɔ  @%^-.?ӧǼaU=xu @pOַf/>|Xs3-}hzt@ @@ӝ&3ii)Fm&%$5(+ !@ TIjMS8qBuLFײ'% @ TpW^:x^ʴF4A?z(SWqja @kN9.'f;k!no_j5u_S"@  rz8uё'ҜҜzXstnKSا-v4=ɪ?# IH@H`\#11Wˤzmw18 "5`}_Gƞ+o[f}@b+U2K_z}yg12{/ʕSeϾ y)r#Oݷ_rrrƕ"Oɖbq$XXM,_V}Yb%}Ͽ8qqR>lM5Wn!CfO/smi٫u7I w ?LGSE4j`n]N @@6LZ&_̤Khl# x$cF~4ɯ9$2E!<"P?Vy?R}v={ɃZ{EIINUucHOY.-9  bd#ѯx [Y-'2KǗU5+❂4}YVrUe{|}~@0~{Yr}%$<|w3_Ni&guTI,6m+^Jߵ[TR@()a9nɡリe{i:1WǷn,%ya+9aS@/#ľhaHr:;FYYc,;>#G{iOfv]XGgV.\Wbiu_FY8^]5Hڵ+lɮl,rԏRޏ+Cb%Zڵi)/=iNs?}.C㞐^0^@+`dKKJ\Rzt 5GbwI~( *e֯JequetcA}J%D~Y[\O {M~\e̓,OڔkFKm؅hɜ?JFΩs$!>^f}@^ys<55;䞇8S6/Ͻl+,EΤi B@B(`dˏQ$&]j˿+~O0iX=ޙ"K 'JޏFՓZZGb]yt9Y]ztnҘֱz|i~]"]}8:gi ߲Vd|޾AW.&ָK5s"mj['(˹UX]^:G Ҭ<$55E{MdW[qҨA=] ^sB=8ҁN#OV{J\׮طtM;UY2VX|3EEk-}Fn_t41->e͂x.;jha)4[և\Ew do],cuI߳5OEͱa[v>i?[x+9W˹_̖ 5hļ[-̚|4 |Z᧜n1q 2gxb$y;7s̰A7Ng/z@ 8*_Jޠr/y):i ɊF tֲE/&8ۇRTNK+1|3GorrՓ&|``S?or# + u9R ..l۸WsZY)v1<QYzHA@ɓ~O6Nڛ  opv 3`< fJ> y<|t5WdӐpz68avPN}WG7nhql]Yf%=REGY| cyTYvG_0.u\=IN*ZlBi(=ect@@jFE]ŷm]f$;n1_z?lt´&:hU/ŪYͬt.+҇?\d;(zz78䭧ŲTW쀒-r>YwEy}KzY l E m6*9TaDn݅kK|\^ K>r8D{@ ٲv3nmmP=2OS   n'['.++ K}2:@@!e  sXFX,#, pV͙   @HB-L   $[Qv   0#  @T lEe4  Zd+ԏ  Q)@ag   jP S?  DVTA#  @HB-L   $[Qv,KK˰@$[C+i-l )ak@ H@|ɕR4a! ٲol vuILLLۥ T  @1-.@@bJe\".HZSRҪlT ɖcD@ 4l^&N@c٨U5@  x Erdrpo:0-M< 8Wd˹ @3]$PM(uA Cb}cE@ :H#ΌjK>{{XD@d$[\ d˟$\@"Aa@@@W   R%   @5  @d+T  lq   ! *U"  $[\   @HBJ      R%   @5  @d+T  lq   ! *U"  $[\   @HBJ      R%   @5  @d+T  lq   ! *U"  $[\   @HBJ      R%   @5  @d+T  x   @p '!(!xf;ürI @IV۷֭[&Gd+<δ ppq9HeذaRn]_*Y͞=[ 9%)/b@眀`5녵KÆ &Gd+<δ p_,l޼Yk_P h-G#X~Y7zh1W3f/]TF%Ç1cƘƎ+͚5Ν;o}+FСC%''|33gmڴ'x¼Gi+Voٱc\{%K.]k׮裏pN93r@   @8ƍ1b MZ,x=<ܹs=qqqW_}su_SNCyliժÞ3gFOEy㱒!ϼyh³fO?֭>|ᇞkƴm%t+93m-k4im޽syyc%r+y3='M[ `f+Ԕ@!.YYY'T鬔wJ䦛n4pРAjmϥRtϖ5j駟.L:U7o.V&K,+9KvdΜ9r}ٵw}!mf͚b%ib%vfVKgmV켊 W@ɖAo@h۶Ix߁O>-[JrrqUCjժ^Jf5MtB̚%3ɛ~Y3S2p@iݺ,_\ԩch^իW5fѣts*^TA"Q&@eg ʕ+-R=mJ~䣏>2 O4;=wb-?oYnٸq/,/[f+77לVf͒/\?KV=Y @Ǝ+͚5Ν;o}BkO<~Ϟ=Í̟?_?{EgôB2Bz6ȈK@yGo>{-@6g/q e +@g vlI AaK_$'4n9lm 3XF̸k@KvS]ٗY5=5, H?zD.^ݱ),ҷHZ%>F@ l9$}ܳ}T=>$S) ;HWFD@톧٭`Lڿsj\!Ύ|72@ 0Uqèaƌҿ-E\ÇƗ[t@*[{ ԨSD-@$[7̙#^ziԌ7q7o^4 F@-e|#YVӥ۽i*8@H&Xbt99: &w>wkǦ/]UY8@$- 6l ͚5xtu˖-0=EOsjK"4C@t-yLM8}}1eDe{ޙPҝ !D@$[&@; TX2(svKg*55p H, nisړ~: 3@J @p@Mޭ{K=b~mNj @H%I= Н n(+caZ:/   $ ZvVd.jm [غt@y$[΋=F(@\|TYOl[_l݁PgS*VNA@t-@Qv=els=]@-@lQC@brqл3@XVIJb>1|)((pɖBF@*"PAsؽM/܁ZxfddHLLԬY5l0ٲeK ՔxY+8{|3 'f͚IϞ=_6 :T͛#8pX1vXc=&[޽{˚5k|Ǿ{Ҿ}{iӦ[.39rtYtM v ٲC @XyZ;%:E).9s&ꪫ$33S^|Eիu]&Izwdܹ2|߫WO?]}MթS'%֭3eƌO˂ ̱{zHMf&M$~L>ݜI&{$|#,@ x$[&@ UwPo)SɓeժU&[L'u&7tԩSeҲeKIHHAɄ k1HitךE8=Og4 ͷ͛ˢEdɒ%ҪU+ĵkN̙#wɻk?0zQ)@awzSy_[N [! +Ə/?7Zj{ӦMRN5 Hǎ>'u]WkΫRp͛7Kbb䘯;Ot &e˗/7hտ3@pɖ3D/.?)Z\}.فoa:r "3K `٢K/CY/))P-zə 0@3o63i7n4t`/^VC@&?vA%nTm۶5ZJ}.On9]t1j92Ů]JdʕrK>}'};SeEg@1sh*..)9x]VZhjB[vĺD?D֢]}f+)K/zmFs6XkGjz#wlllCUr @ AU~} JrrLsU?EgիgR@;l! 
郝~&[dD->>|ؗlo8i1[5e&_ٸqciԨܹ'u.Oygi-4 >{?K/仩<җi G@!2`(RGz)_D[t&A[N΂=fViIwa2IKgɴhm҆&V/_͌/2bv4  ZR{}YWʕ}-z]23W*U27dk믋-%'zUYOn?iC38loСC!z@@H lEH^,\Mלzt!_:i]v 2U4Q&2k]r[ݩJީNj,>bɒ?mx.5cG@|@_9cn3XtOO|Wz_lv!zGh߾}f_2 V;zJynSN1qyɒGK/ 9ta|Fz|9oӧ,\pW)vK|tYǓ{j1ޗ\ߡ|#v_P⌺I'ݔA֬Y#\uU2ݥ\}N.tajĈon6,}o|?`wg8z錫n?(Nu3lw:x_=‘m;%o=D7ޡo,D-%--MY6(!kyݼ|ryM%c:hoB׭!fk֮=z֣<د6vmηf|mL8ѼfmnPUk uĚTVc%H5R fJ̱f<kfc"d}2Z*."uaJu٠h3Z5:3e%=E7V,FZZ=No]Uo̲a .˳֭{³f͚ pBCa2yYt_$[anɖ. ai%vohkm۶5޽{hvl.~IcE-AaIx_BqI\r{ iSL1p89 d' ř< ֲZLmڴG [~ uT7u0ѢErE%@; ݛ V];k74,{8 pjGo[\g{0^w@( 96} 0c#{gviN2Q YT {xFvr'G߲qP7 l߾<0aÆˈGU\gQn`Iw20ܑM 8  Dc+uV^9g2BgǏ#  MHl   rv=  Td˦[   l-gǏ#  MHl   rv=  Td˦[   l-gǏ#  MHl   rv=  Td˦c$;;ێ]O t@@-. 5k&6lxtƵQF0=E@ @ ٥;w.ݡAXvh|)   k袋d̙g4Pڻwh:cF@B&Jjb##n`ޱcmV,Y"mڴ!.Y.]ȷ~+-[tɨ; |W!8#pp3[nbP^=_*#FCU @^^ƕD+ԍ (VdOv-Ǐg+ O-Md٢MRB/σЋ}zcZpJ'b3[yq%0}t8pqɓ'O?Ķ6:t{wӌ3Y:x饗h3c +i׮祗^X J, E]T8п P#<"V7" L`jR dݰbK=|תUߛ6m:u*ؠAر|g2qDΚU*v^*U7o,c;<%Y3VM_hSRR|~u@?ȧ~jo+֌$''qjE=|Ќ @$H"N @$l/ӂf=WIIIfh{.Fg5  Xd+T  )20ĝi0 W 끮q۱c1)Ž3rel} Wv)`]Ϸy;Z7ӭԪ!\: CLxsM޽YOiٰVԭ 5KRB'<3]+g*IiV+ұsTǹu6?IkT@<>lu݆ /4֩JpȜH +Gm:JChu`AnLgw6mjjgr.Ow>|ŽW1e{DcLA@ $[PM[ y?W}X _eTtFk yZ5"͚ͬX}>{رҬY3ܹ۾!zϞ=̓w!C &M.u=ޓۛkiRR7ph򤿔Xl1Cy8pYn~sn&={X}>hgOR8.]*FǢ1cJ%Xvifb1XS@>gZ+mڃ&kyi~}zƐ'777hmP}-v큰vσ_~өS'ϡC<[ljcxukYJSIn߾c%qJ۲rp챒(a5)((}9ƍX 9zQFbX966NS\\W_X+vc5}_"@E*s {zKv=jժrÝM!a=ZNI+١? 4H'u]WꅢEuYfɓ2S h5s   6m9BO>Znb?:bt/]wkׯ ҄?33Xeeen-3Ff͒K.$a 0@exr7-"7ntSIjEgު$1EљgyƌZx>kF4s   6m9F74}g:.+f塽{6\kNz!wuT\Yz%Ҷm[.3%/x]v5;pf}XRZֽTfN4iӥstS/3`:5No}k9@p 0h\0 o0Jq>TX2`E٤Wh:۸[EKIZ'Y6M-EY.5ꫯߥ7R1׊"'2r=e' p99z;V>ӭodCzڟ+2Hn}#{ $WXU%AMZU0*R?x󠬰vO?ԨQ<8<%R1Xi30h}_:3' CҤX Ͷn MDuoWڇFo}{qg)A?.R1@0 :$T,:/يθ{Ԛh}ٲD܈69qaF&!qu77nDո @$[ =;3Zdn+~"g  )@U&x-]:HqpڟA1@@$[6SapSX?7 L@@H6∣ ŝ[wFQ! @$H"ﰶu{w;>-VOჅU~NvyڮtvU%>.F=ٝQysTn~ @ $[=P)vS`^߰k,Wfwd>:({sVwTʩt+8\K$UB6# $[\;MٺWÙCO䐭q5JK8~$Y2WI6MM( Zd+2$yZ2#e[/I!OdvG(/>~ok^Wkoٝ#k\.2c~!jLy.#ֿ!7_Zyvn]?]-?w|1C2d!b#׃f alE%ѽm%Vݴ.vĞq" يth\1֯ ][,b uQĦRlھI#Gˏ+u?};wtڎksWϐqco0G왇GȜɞ[rcYBbއ]&,~ 8)p[6U?3+Pfo3 %ݢ   2mDn;/%Eކ~o^gjeɪoHuȕhfbm ׃O`ʛ4aV@t/=Wer۲v֍Sl?: N-w5*F,nOҭxE˦_0طK^z/19$@|U}d-JS 9~ok׹Ԩ]ߺ*ںWXK􏖴5S;[H̫G2+*8Q@ZuYAd=+YSJ+ jUMc%z@p X?I0\FY;ߛfg-|mzfjn`6x_ie'IRzblHY}R)77 !7k[4q{&UZ3^OYDf~'׎|ļze[x1{PwJSig"{߃/Eu1)Y]u3 ܣσ_v`ܹrٳo߾n}c|\gC{Aqli3yC|LO*&MYVԚ*]qU?|_7$͟dht4yLbKsM],M_vO]/ p灓"`4h >\x y.skx_Yt^,#θjԧutv^1~KL{9y_oc%EJ޿UF]>!Q[fOdY <.…裏_~cIjL/7 ~ @(XFJݣue. Yu-+Q4hZ,!zt }Vu%66ĪuRŕl&/Ou^iи$%Ո2-*+H :$V+W>l%7vә/M̾ eHBK]V$O^؆Y7DKzgJjСCM3B;qO߸@F2@8@:坱:Y/K3uX X-dž#h彇kĈ tR! @4Ѻ+=Xeh7M9E(Mdk@b}֋/=Ze ;ålp@ *K#N@D'Zj:.A%2@ jH6 p-]:زeː {BJ @ lE]0 \M:訽piFr  Z@p&ZVyw t:å-|r $[\ (:5(f%I= @ lE_1 ( ût0܉.3fpEVi@ fG`xw)> Wdk@ _ p+q @t lEw= `KHl/#Fc8R( ^/YQ1-r觗 gTA:W@kVHǿYmMt>uЗh/םsSC6HE/zy{p>o]:?Ќ۟] &o.Fc?a<@.$[v(Q`ҭkF q** +3Z+בMɌ5Ⱥ}O:}`zd,\P%R4 Rsi\OerSyJ˓z7(9\kguHʩR%]Bˊ G.@eEq8$ru$?Ͽߞ;,:)8RTF<c `WuIlL%)fVP DKgv^x&8YƋ2TC2םx\ pzYٜl[֟׊Z/;V;`y7ЄsƌŎښ)֮EwHNAT^]N@ C+m ِi1Fr sh*'H* $5>՜x$?GزL[ q W^-[&q:Gt]GHaKftk[7Jxߤ&LX{DzYu7;zw븇K[x^/$0~^nXȈzÚ՗LV0.)ԄkgVLY^>ٴUrIiҢJf9d!48_w_okG)~gVo_VJ&iދ> [PN;}< ,@j 8:^f2B]BhfK>Kg׋e^=f[2Z3Q=hR~[Lz:=byH|}6Xl+*5;n<%zڴi2qLruIqRJ(Z4Il @wWsuX$[,.\!6|K`y7ǨT)FԬ[[tk VƪJ Վm+-I\),Y{8rܿS7TM"IfyD},>ZJ8S5֗|y7(u-<0#٪"ILɖl>X|[rނc Tuނ㒭:/M [ɖn}sOY_E[PTNJ\ܤYN趒i=-3CV'gׯgH/#iIۥ pEOCw[zm&Ѫ#5&V5? 
<ke,-kVb̌.6͹'_6NCߙ[>Y\E']& ӬI=DHJ|u\ҢVvn۠YOլq[VlX_xV y3}]XfR@k)aabh]hw&Z-9NmҺUT6İ~$;gɊ{17vh<Ӻ'SI$~xqm8o-.[ ˶R~e9My~S9S߶])V!LO$$JJc[Ϟ\C>}9W,MIo/@ $[ԧm@RK K!VpM2E&O,V2 -b֭tM&SNK˖-%!!VB8=Og4 {h}͛7Eɒ%KUV&k׮̙3G>˓w}w q &3\$Z'FI7]yѣZj{ӦMRN5]4h ;v>L&N(]w]o^*Uo޼Y%''|wyK5)[|9GB".@@ xgt#R"*5گ_?PgQ^r%B.uh{赢_wyWTP c et̞\<]+pؙGKծm)?͒m +M \σ͊G]tjժU2?vCt _鿝YFÆ %99y:[EtիW,5W*"uV=̖scG@pc M HwiRH-~O:>כ6mPR pW . tP5-̙rAg@[ l*t@P>ػtW^!Cӝ>F7,( lq DKa0gtFK̙B9_`BL( 8E$}W0-' "Yl;V.\hcꫥE?OҤIML.]Pw|G}-=YYz>'k{oFkmG);waÆɊ+1" 5-!@ {g^0⿀nϮ;j23\s|C+IRwO?/R}]Yv2e#<"/ܹSk6m4i9f[xMFK5G" 3 P@YR}3Z駟.gy٪Q҄KyfYIa:cٶmItvzZO-SN5;곸!KvdΜ9fLzIɖB@(ϦE-f&?v[tU;fҩS'xC:vh?]5-z>/+''|wy2p@pRNp߿❧@ $[!@J@xgtd'Vc;^~?YwnjѤKg DV 0]gn/,/[Νkft"ɖ"B@*$}6͈+:ȌV:Y'wKaiLSO=%vԫWlѻwoiР$$$出P7С\$k}'|NN/C cp4mR ̞\<]+pؙGKծm)?͒m.m1>!^w{*WkXg40X~xx[ҝ>ƍtӌL9|lR4袋mPnqzVѢ3]RÊޗ\rql+@@{K nAK%66֗hit }Mgtkw}n&^ݻTRDKU4ъ"@lEK' p1x3Z:h aÆ9]r{-/N ܌3@$pi5)$Z6.܃l5@-8@p}+kCG@@(@Ĩg@z_-\ƌczOHfͤgϞ/׆*bb Sرc=nZz-k֬{IM6n-pȑҹsgY6 YdelQ/KfΜ)Iz*̔_|Qz%wuIy;w̟?իO_$Y:u]n:~[f̘!O?,Xw^y衇dڴikҤI͹diM¢>H  kH\ZȔ)Sdɲj*pr-[nrM7IZZL:U,-[4hL0AH錔&L^{[4tfK /|Kk޼,ZH,Y"Z2I\vdΜ9r}I^^ 8vaCZ =ݰbK=|CU7m$u]!UAұcGdĉru#۾}{TbyfILLuy.AԤlM !`  $[Q| ,5lѥ\R,-~IIIfh{Zv^&:U~}Z7hذ$'';OgtyhљzꙥX<\xGm#Z.S`?ˈy\}lQ+I,: f34) $ XDKphM6D@Kd+h@@@ L$[a@@.7E6zO@pɖ(A`޼yfHݝ+dh@Vȉi >|X>Ѥ YdelQ!0tbeŊf@f͚IϞ=CGK{g…W_}hBƎ+ӟI&a4{Ҿ}{|-mG>yȑҹsg6lEgv郐qҥ2j(>|3&* @@}$[)#2O?]}3j}VֲeˤSNfywdܹի}:4m޼-[Gr=,=y/"G޽{ӦM3_&M?POntɞ&K$hHƏ/5k֔)SLyE|]wEYD. Ed-dQ+p5Ȍ3̬&=^{yԩSe9X 2h 0a_NyffFfKN;$hZ>CkѢEdiժIڵk's̑^'md}֭tMW9@ɖ"BPAұcGdĉruי6m$uզOAAA۷=}hVjbi•(9993[n-˗/7jտd}UV Kd^7K@gtɞ>,qp̙f+;;[f͚e]ʧɎ~_-]~o0`Yr3^rlܸQ}a/,_Y/]zXZ)@@; l1* 4%}NzT۶m&ҽ{wuvtJ[t9.ڵtAV\){>}'};SK>C@(㱊;>2,~%{|T::hM3tƫ~%֩3Ru-W{:gsVѢ3]3K )eџ:r (7m:@e g2 /-J}% pEuVtƝQG&B%ZPDKϭR |he1P@@E$[. &CA@@l'@@\$@`2@@ɖ}bAOpf9t@V#@   J-WA!  @H"G@@W l2 @@"-@>  Rd˕aeP   iHG@@@$[ +B@@H lE:   reX  DZd+}@@pɖ+ʠ@@@ $[#  +H\V   يth@@\)@ʰ2(@@V#@   J-WA!  @H"G@@W l2 @@"-@>  Rd˕aeP   i G`ŊcǎRk}GAn@|+*7:Do""2"4@ l95_,k׮-V?nײe Ju `G>/ `Wd+޵jՒk^uAFaZ SO=d+T]ҪU+m&ժU akTvk??ңGIlڴIԩ&NlAұcGdĉrukuֲ|rs&\/ՃRRR|Ɗ㑲oIޗԧmCd+q~7M ~{[zP~RC]zx%u6K&%%~mUZx}f n`-eRKx_Zn_3_) |'zgնm[iӦ}Z `OhLg1|I۷;.rzKyYgjޗ~[ Z 4#eӁm^9)Gs"%?@bl&H8gz͞\<]H۰SGKծԥb},ئp! cmU vK XQ+(1Hl׸TY;ltfD4qFW$Z}/ZoY'g~8 ֶGr$ۚݣW ZrfWX'9IWI,= CjC>N(.dK;vr9ʬZKΨIJC#٪`pUIՑe_Hb} 6غ%s$&*I;dˡL)Lڰ 5EmP_ٙ8vid.o?"P˶$Zy[ۙ[>d1t(Py.r[7h9<%k,dgj$o<Hg4њ~l:xD+8-ذIM]( KtFp1f˯ Mzdr8e#_-|:uF!oB*|A `$[z.K@gvgsנM-t*|EhAt+0pvWT % D$0AН挭 n=Z!h& fܣU!B۞C D$])<ЍJwS`?:Hqlsvgd"lw dvUt{w;@v~srE7PW}B?jyOOVNF \;՜-iuMwo_W?y}7B /?OroȽ}.%k27/ fԅ8Jdp}9wοˁ7]V*+V|,WBB m\z 6E"5{(]իH59Տ?lsx |;<}͒!##\v˭r`y{eVq H8/7O{<ڣ2dԵ OIdEY/جCëcɢ$rxsZ8<{2﷎wp۵s[+ Ͽ-lQ&_gy㱗Eg*:U`_$%>Ś٪zZYK{xg=ߑ^g; S1ˤK}=?by\3_&y{p*v@Nv5a_ gs^%GҬCqRGuZ3 ΗG >ؽ迥7ƌ/gΔdž_#c|ز]{+Ryd^[صK^r+ˇ?g;zb! HkǦm QJZٻG0>?Oλl꯾-)//_[BX />M#Ix7cs4ƿdl)FnN{1b+Itϙ+҄lͪtU' -$]vp0Ue7ߖX'_̞+[앚i˟>) WdoR%[oֱ̼YsWяGWJٕq@46*B PkטjZҡgb2T8YF<:j5aך{RҬ_ 9Ǜ0-yy򃕨}9s/=]֭\),2W󷏔-w'O&s~KÆdMݽuLϋV}:TG"@UqۆߌժWKb*ֿƞzу [˸O^K|˛X\6Դ~kߕS-Ƃd`ѻ5y7Mٴy>߱{v얋+( P]?m;I e}Aֽ[$!hY;[|%IS͡wx3s+66~IuִڰYآ^k2Yd6-[+1@ b6k6hnCZu4KoE|8M{-s+v|֡Cr[o?>+ |U+͟V|!5՗/T5o.;~mXZO*l.Dil%v_oo-bx4Q)@U$ o<%ziyZkz{Y7vy4iLRK5c2vq}F^fRRo-_MAyGdrd_vl_rER2nf |fvh+ k֐Jgs6-%MbOLkI׿y+7Գt{IZ5#~̼S'@0NZ3H޲ZI/MxJ*O=UnY|=fKF-WN3|+lC g.I{(reTˬs:_ W|س[^/rGGS25˗:@b o cӬ0S:5[$!j[^I].UkV+=Ȟ%-s䮧,<5^:Y:Xuo>Rɺ_ ~b5k\*7 ɭzMȿ|dZkjʊfrg.8M93bLKެf3Pߣ9͈< ;9bdx)>>sCa.id%U:'+?'n4[7vx~I.\"7yvpײz?p>/lzi-;G Ru#ݏ\tu\|?[A]a}a{hm-{o=֗.KM~t7w*ԑ|nf,jյ?L}y /}RtxݽvfVJs>9/$<\xu2W|1m4>TGEKov݋"k˔.3} O~݀/"L:t祷k jכx8(X9%dΖFKڟyyqSxU/ٚ/###]/ԫ+yMg)c .@H#ӄgJ-1L}ɹr{ .9{$95YV.^._Yd=khh,qyٷdkSfZr3${`<[tiwJuOOi=leSZb+U+{cgIMeJurĺgK8J`mԕvc Nt .  
٪^Y;9RCPήoShe}*σ=;IϔI~^[WJ7!)Vpx"HU!l@C(N ;bluUrE]d+䱩NH6_ZZxJ[:@?L>!*:Ĥ8Ypi!J xpKd]d4   `-n   HOF  6 ٲI   Kd]d4   `-n   HOF  6 ٲI   Kd]d4   `-n   HOF  6 ٲI   Kd]d4   `-n   HOF  6 ٲI   Kd]d4   `-n   HOF  6 ٲI   Kd]d4   `-n   HOF  6 ٲI   Kd]d4   `-n   HOF  6 ٲI   Kd]d4   `-n   HOFQ&#'Fp@pɖ3D/@R$+3&}8WRV!,@g x9/`8?[SIzN rr;D@ɲ{ǡw x{3ZUHM @ lEq:8_^*s!+p`A?5@ $[n"c@R=IkրOUBfgJM $[F.h߭lZOvn;0H I3G Ndu!e@ m:q9e[RVv Tdt;[cR9"q. @-.@59CGd5drPrA|=#yf&ow3I,MRAZ>@ bh+_GKծC܊{=xH;q;ሯncٷ+zWds]¥ρRn41ߝePM8"9 #F֋?Iq`t3 s'_zQN!NZLUq2kùL@U1?ζ>xWyxvf}^v<: +Kzͦ"IR)6&8*G-o<8S"%7Rhdq t,# T@".3[mzr-Lxί'lrDe-/)V @XHL# \WHV-fje 7K44ߏ㟼Pŀz@J_ @W&![T6nƣKΦ.@)@̸k@Zwmfn0hP'@"&@1zF.)M06kd`@ @8@PFu @$H"O AR-I @")@I}Fn )̭7F Dd@z@H lER@@@$[ -C@@H lER@@@$[ -C@@H lER@@@$[ -C@@H lER@@@$[ -C@@H lER@@@$[ -C@@H lER@@@$[ -C@@H lER@@@$[ -C@@H lER@@@$[ -C@@H lER@@@$[ -C@@H lER@"'=SdZC9OVPڢL#@HI vH "X4Mٌ`.@( ي 3D@ɩ&yNjF3U_nɮ퇤AӪvƇ `3- O}RRٙi*kYXؽA*,@* @U' Q#@5f %qiKJFTs! -Hl: Pͪ;Y,,# Gd+8Ԃ `ظJҸEuӓTkš*۠Wt@ ZH5p@V5:vo2,@l9%R@/7}KZ~A Jd+Tԋ 1/>^i@@*pV chuthqq' i׋Cm8"6dȾ݇%9nx.}Gԭ? b݃uTj5HH6`93#"MӨ8:fx@T-_Ynٷ0qۿ'K6W|^ts@" $[FQ$?񯒾 ]A*{3MEe@@$$[\ @|6̍Ѻ˕KpɖG@[{89D@gt)(@$[ B*apcvPImSٶl``@8-. @ tA;v -! e w >΁1*@$[."C@9Ze 9ֹ ~-ǘ"  @H"N   ~-ǘ"  @H"N   ~-ǘ"  @H"N   ~-ǘ"  @H"N   ~-ǘ"  @H"N   ~-ǘ"  @H"N   ~-#q0@@ يx;rI)qȾi@@ ي;fɩ r`_cKG@@P- T,w}? _`s咡ss_I˦ é  z@FUdCW`ApZٴʼn v ٲ{Td| @ -eؓeד#IZJ@ يθ;n՗Mkmw:.{lZyO oXe[-"_ž/wrR1.XZX>ABV#@~ $4_lVKA>[A@`s,dӖuNJƝpզ'ֱ:L[mef8+zl P @XHL#Y7Uι8" &;6ÇrxW0 se~w^dr; W Y_eۗұw?Fmε]C@XB壥j!n%׭wl9 veYʑyI~z|PFid*^+L}fIi2o<6.YhUjr4@o2="園%f6g֫Q10 ĸg00d+ .jE,a( ?|ֱdppAL| mؓd+Lq! ;!rg\ 6ݲZ1X=m讄ǵ@ *H" p/ N9Sf\$=ew؍ZHSsn6tӌ;Ch2cD$h &SEf䪑>doÍcۿݘbs6 @lEI& &÷`׽aTpMZjx.o]t@AF04KPmTek 2l"Gv 266;a wƵQ1UG@@!@U4NA@@ *K#   *   e l%@@@rlS@@@H   @9Hʁ)   @Y$[e }@@@$[@@@@,>   Prq    PVYB|@@(V98@@(Kd,!  Cdh8M !1i]~ [?8 Vi@jp7I{aHh@ P@8p@I5]GJub  ي:m"ahtJu s4M@(V98p@ZDӰӺMY7UN @$[6 B-j/BN=FuEGd%A\ )RQ FC6Ĥ` Hl S@="Naux0CSƨZdiҲtLՔD+4ԊU cHe'd,-UF^brA7k/PłwW[vN!_d1f #֝ȱ̌76|K֦?Dɖ=B@g{⪴Krʭ 3OLCVP9 @ ٻ%(Im9\*%(|bR{|;o $[F8^ 6 IVAWq2@ l9/f@@ @ E@@pɖbF@@@$[]D@@ l9/f@@ @ E@@pɖbF@@@$[]D@@ l9/f@@ @ E@@pɖbF@@@$[]D@@ l9/f@@ @ E@@pɖbF@@@$[]D@@ l9/f@@ @ E@@pɖbF@@@$[]D@@ l9/f@@ @ E@@pɖbF@@@$[]D@@ l9/f@@ @ E@@pɖbF@@@$[]D@@ l9/f@@ @ E@@pɖbF@@@$[]D@@ l9/f@@ @ E@@pɖbF@@@$[]D@@ l9/f@@ @ E@@pɖbF@@@$[]D@@ l9/f@@ @ E@@pɖbF@@@$[]D@@ l9/f@@ @ E@@pɖbF@@@$[]D@@ l9/f@@ 㱊.f,-Uv|^goxqs g}$&-Ánc=#qE_B݉XCRI|kI@LLcw x63[N}F@!Tu?rd"ɳ-'&ZZ75F[ΥrW%s[=h يH3N@,?e ̭\0j.`RW4lEC# ?)i2>EԉkH\Z @=ZDҥOd?'B@t3 -1 s^+Kd/&B@@tA;vs` l*@vޝN#{90F@HBJ Dۻ -# VM@@-.@@@ $[!@J@@@dk@@VP@@ @@@B @TD@@H@@@l*@~~jD@VP#5k} 6Ll_q'k@@ $[7 ڵK#钕%ƍ h_C%   ي=-H ..Nt6jٲetR5j >\ƌcFOHfͤgϞ/׆*) 8PVXa>vXc=&[޽{˚5k|Ǿ{Ҿ}{iӦ[.39rtYt[   (p+bpM8ƬYd̙ꫯ窫LyW^r]w$wޑs^ZN?tyMׯ_oN:Gbd_~m1c<Ӳ`s޽{塇iӦI&ɇ~(ӧO]ިI&{$,&WDp3 Td+ H*R? 
-DL"'OUV[n1]֭tM&SNK˖-%!!A $&Lk$R:# ӵ^kOZL̖&a^x[k޼,ZH,Y"Z2I\vdΜ9r}I^^=?<)D@ =ݍ5 Z}T0ĪRb5h"a矗=zU7m$u]!UAұcGdĉruיcdIdپ}{TbyfILLuy.AԤlMnC?pA@t`dތ,/뽾_38pTt Do'DVbD CK4aG~Y*]zx%l.KJJ23TZIǖU=`z&gz.C2`9p|f&mƍflrwcu+77׎$Q'>~lV'Y׵ySdڝa{Wso.Cs_@޽;J.@t+Nr3~nC7 ܶkB{ڶmk6HMM}jz_!ZU{cstbrie]v:ʕ+/>}ȓO>)}5_wy#;AxlVi$)rl)M.]H5ENMIT{(DK{OMh <9,yKIݚMN%|1kcѢ3\3_AVNmY[|YV#?ج{u /D;hWgtFKw Ǹx",VNFZ;M?H<+qpJS\/NTI|Y8*5yкde:7Ӥ[& rמL}4ْ`ֲBMwn"7҂~5k{kYwR,_lo9ݚz|G~ˬzudnkZO:t)&vZיG1wJKAŢ<͖Veֲn56ԖO3XMklivML=3Dm5KO?-ovdK;n8sO>?R7nҳgOOKbH;Aev}p/gy9><(?c?|ɈDo2D(yrYXGnyC.LL}]OPz9d :5Rdŏere%ZȐ?#SNm^K^I^۸[_D$.ӢuotU7q=ϒS_*$MK:K^6IJ uavԭ6|~w+ ҁX7~,sfu|QvF&zj,/$OiND%zNԣG9묳D/R@^hP|w!\4}v&]|4iD.Ӗ~OoT%W_}m|T"eOZ_q`ře԰hэ+ο.A<=W쓄Xiި$nӮ v1Zt#2L7(O)3sʺwY~DA ي3 VNoj׮-'NlQ~믅;`TN*6ɗnۄ ̡ݺunD+ѥ2p@|\%iۢgDk;KNZF VЫWI.HVuOH]J3]DK_Y2dC+ov ѲK4G$H"OۮН>T4iYJG0F*}@.C<=_6m:uzs}$@-{ k"kFDh ي3 @7xC^y5rHywf͚ɑ&dh-[׶Vү_?9s3;;,CKS '0͛voZ(>~ ,@ @8Rt aÊ}W [je6hӦ_ڵ+ѺnݺRJ1bDz s֫h<#`FN@JQ\ ,6$ P@zzI]*ͪ\5\~.PC"` 2‚F Cgh$V@\@OקM|ϋD1L@@ $[ԥn@@Z =G@@P lR@@@ jH6 @@B)@J]F@@ ي3p@@V(u@@Vd+jC@@@ $[ԥn@@Z =G@@P lR@@@ jH6 @ ĥrj(?;Lda 8A 6I!P)F9Sd+:Ψ@@lJOMi l9,`t@'*ÏNU}1S8S0jC@@lr:t PsDNM"6"D-'F># &Kj)]G@cl?r X @@ ÒPTOa-Mb+EA$[#@@Y;$w ?Mw&a nhtоg ٲw|  8T{8   ٲw|  8Tdˡ   `o-{LJ!  CH8   ٲw|  8Tdˡ   `o-{LJ!  CH8   ٲw|  8Tdˡ   `o-{LJ!  CH8   ٲw|  8Tdˡ   `o-{LJ!  CH8   ٲw|  8Tdˡ   `o-{LJ!  CH8   ٲw|  8Tdˡ   `o-{LJ!  CH8   ٲw|  8Tdˡ   `o-{LJ!  CH8  h*~ IENDB`ironic-5.1.0/doc/source/images/logical_architecture.png0000664000567000056710000011230512674513466024355 0ustar jenkinsjenkins00000000000000PNG  IHDRMsRGBgAMA a pHYs+ZIDATx^o]}'G3ƥLIlj.[MGjXāJ$p)v@P\yQ}Q21!~a3(ȃ[.2v@IGkἸQ2Z{u9ph|Zϟom|Ⱦ/ @ @ z @ @!ޅ@ @)a @ @!5@ @)a @ @!5@ @)a @ @!5@ @)a @ @!5@ @)a @ @!5@ @)a @ @!5@ @)a @ @!5@ @)a @ @!5@ @)a @ @!5@ @)a @ @!5@ @)a @ @!5@ @)a @ @!5@ @)a @ @!5@ @)a @ @!5@ @)a^ :x;v>mnqdBɓ +ݺx o*Fʌc e#w* @B|~I=y~~}_ e{Yhغ}Ζ6UX= Y_WC?`v;- lkyYms6~z.ҟtBَ,nv0]㭳>Vob?LhMe}fun9L @ B``!~v}gl3g'[_og7綘7?_7&vV~}rO_5F*#kA}g sfȞx"riOv_s nWIԧo>`Km ΅Cl)9ŇuyVUp @Mn%/etxO=^8+(GmrfA~.7[.7'koIs S(=hgՖF׌ _v8Xlct6]K4 @ V+o$7WO]Wߚ=Ȯxjt0SYئJ @@#[Ϟt>yJ|g}zn|,'  @6n%~to:Z-3YZ @ ЖC|[%@ @ l;}Z'@ @m mj @4( 7) @ Цߦ  @ @@B|"@ @m mj @4( 7) @ Цߦ  @ @@B|"@ @m mj @4( 7) @ Цߦ  @ @@B|"@ @m mj @4( 7) @ Цߦ  @ @@B|"@ @m mj @4( 7) @ Цߦ  @ @@B|"@ @m mj @4( 7) @ Цߦ  @ @@B|"@ @m mj @4( 7) @ Цߦ  @ @@B|S!E @}U99 o|#$@ @ XoXnUe| @ @``B|-5M @ -kpͱ- W @ B*0tM @ @@B|ũ44G @  D NIS @ P1 M4C @ F LU @ У_Ͱf5t @A@EN%@ @].4 @@@G 4N!@ @-q }%  @ @!~ Bt c؂̡ @ Р_3XJ9 @K cS J @ @@ 7o<}'^XG篆sӟGjZ @ 5Ia;1Fv @)]J8&J1p @D į(Bӡ7O?m//~Ĥ|v~_ZXdsצ! @ @t_p<~!u+_u%7=v @K@G!>7'{Yw}${(^g&}-AMviu{w @) t L|Ų> ي}ë'ڇnugy}Ȟ~ow 46u4 @, gBO»zȶ2}ǧ!+J/S~gs2J|WS>M @ s~Wn/Uo/gB};o2O?_́mCss< @J| wɏT/vV܋必?v/slwtZ'@ @mF?pkE~c  @ (WKos%gT @.0?0;W8 @* Vy  @Mw/ۿPA @Fֽ^~@ @/6۽XM @͗F @ @@, iKUy+G @ @OAxtKf @ @@ hK]u;g @ @ Ax!%ð @ Ж`B%²9K- @ @IAxKbMH @ɇxa%l۳2 @$L9 @ @ E!ӛ;Qw @ 0CMEO%ȧdE` @!pFU| _sk/#@ @!$rWc @ Ю߮uad] @ @`B|E7i @* @ @_vq-U @ R[o%@ @GuA @ -zZ&  @ @ B!lZmy @HH@oXUV| @ @ R!lz]ٮR#@ @Ⱦ5f;WmFe @p.7 98}{sYwMT;bno9lSlj{vn49ƦjTj|ަy%d /+b?;!JAh 'w+:(»? ;#j w . wX!Cl] Љ@ N и' N<f @`X~VM͆ on)1jbDZV. @ _A* @V@[o uO B?b*GUnM\@OOC~~J`[!~[1 @ @uK§T-c%@@uI+!+i @ @B|M@ @ @CDlOpMWsuC @ uO @:;  @+ t> @H@Z7 @ @_W @&?q qŨ ?) @" @ @0  @ @ @ @0  @ @ @ ^B8u@8pT0 - fG,=߷'ƴ(u_q~Q.jZw̮9T⫹9HM'ng,؟h'npwJ\ey^a'fJsșg/U?ٙ* ķʫq/kh@i;z6uF1>r*tᔭ]+'I=·Y9…-u}l8ZGsF\@w @xpa "8~'|grlJSs[h½ƅ٪t bN-o럞wlg+.3v}VPnw>Ock{)cjkҭ3G|lعpBjwnOvmq=L~??ƪ3@ΧGg3'2 /$|}N½Y;Ymv[=[緍wv}pjzѳob:م1LyYg&lxb^~EvS s>yr~uc.yuiT+9pb}Z? 
痶 yPl/xv`w[݄dz>:; Ź-1]xbvqŭWu5r;(,YuѢ8\çcۊ?Y1Evǧ[g*<1ydX׍y1 ue :_`gRGDm{,m_~4l;k| IWK~[}fN j5-;7fok|R[aȅD~Z-ֿS_0q,o1cئ%@`BX+o @`L;:ּE~w'Cwb~fVWXOC_;=.{h%o.Awbo=n;+͓CQ0ڹL0얉wKԪ6![eyP7"X1:c(s9q 㪷 @ ϶g_xXlK?zna,+ mY_>^ o,ϖk?;ާ;o}ѕˬ:L3}V~:97{$ïygݘak<' 0t!~6? @`"pU=6h~elupol]wguh [/2s?tm^1=N/Lv6=s K]uϽשբ>nuư l׍O3 䵗Ǡω?w'#(2?nUV-' `%>"" @.yۥl' ohNr+  @ -g~6U=w~K~Vɓ#@  !|0 [s @ bǵ8TM @ @@B|E7< @ @@D>bb,|Gwl $N*)s}ar/} @ 5 M8j:C>jqosbV ! /=ۛ9t! WT1 9bNc:BۍOxk|}C- @&-5SEBȖE)\{ Z%oha~ @n)"AdqvXj%2%o6r _E9 @Y!~ &ֆp[E֏Gxo^|z @n³PPZ]/\!\qj(IvFNp|knۜ媖TeSa @-!~P`zqҮvS)8WE*W2koTٞyw]7aTDZoT @`?'4?ye^~Of1AJ@Ϥj^f~nj&no%[UA50C S}[߯  @c}s $i]G!: @`lnoH۶fbj, 6{R?F @@[ ˉK[/fUfnE] @:хxpA @]FF`o}uO1: @F]TΩԣ+z NNl @FvcW\CP& @ oQ'hB8k֩ x3% WZ2Z @@vir, @%0v )M6uk[ڳ2 @@wl @4)0$ @ @@V86M+q ?hx|`xL'?}<f?6  @#tY-M@C'tЏ.b8쥘X÷?xx %0o%udzpZ6cǗVӜQ @HQ`!>b3\ 7L~p%a)pϤ[<6WcN럙=yJuďb87W?q<D.NuB_mzo/~)_ _艝~~;|-%_?ݟΩJxr6CC!|gdӐ'σpiP7}7 i{XܣG?/f};_)~6ق ܯ~ṛ[Wm9hs  @XYuӏI y?b5P';dW^ gA>W'}7oo,;w3A/UZ>;ɓ簦o\Zq}O?d[i @ ZA)^]_X~z$ЁcU΂΂4OyUt#p}䃳;]̭.0h]?~lbX/V]  ]Ow%X? ojNVIPl?Oo;}eI' @\|w~''/>-M{}5:9.]G!j?ܟ?ܹoeo-]}4%V \=,<{ ɏO~U<R46[~zscm_7s|+`%_hZJ|Ӣ#МߜZr ͛ݢ]IF@I!IMmhV`P酠f/ @ @@\Z⻻7om4[v_ 1pg=+|:+qh @`Zň?6:{B}' @O!O}} 0*VQᬘ79{CdVOG@O͔#VCК*_N{9'G @Ws3&WؽYh7ܳᡧ'Nxxd:3&- )_+WnχoM~8_FEt% w% d+gLGw?`ojcRxݹ':_)oCso,of|ߺ6`xBV@4F#c_ }^ g?Vϯ϶go,p8,nXY;1kbX#[Rm^)v eGf]/wͳS/LͮKggIׁ  @E!+o}q'|b@?6 5?kiC \N`_$`\6K0pܡᥥc9tN~L`tSW۹h0|x3$Ws#Y0߄ٳ(^`'@F( ķPv(+V?/<0zxb8Lۿν4\VI[c* i>] ['ڕp1[YW].[1`)互.G/, y"(L}ūG rw߄?gzlp]o1 wmۦǼ?m: 4'p7i}i;Bok5hzxgZ^<7'7IK`s; PEXq/Ӱ ;X|S|v @Q *ǰowx5[Yσ|^f}xzsFqd4Wasm_=hk!qe+_<#hw9b~1+lW90ESV}tؑpjr @ *wdb+_dY5w}|}?<\ fI &?~2^sWxeﴛ'} ewtU^sT2SY`y ; =g➅B<c#@b8p#mPuƓ?Ԫϯg: '˞<@}>'vohWNVFfox|k!^uYeW) Ͻx_{.k`U@Lߔ>c9_q:N#0(+ sg+>m~>>Bn5E |>b77?qsx͏?+f>"~C!@,l+}xcﻳW~nV[ªpQ&YP  @B|WDavEs>{Xk[S L?SJ|L @ : n)+{~7^b/2{⻺b=Uxzr6aXFb@C ~*Ic%6' @l]" @ @ !>B& @d k;Zeݝu=_$@╖ @N~3 hU@oW @ @9xYHx6Y5e;fuufD @x3'@/7k @D`!~+b}\Pɾ?{= @Q`!>Fpc"@`7.=>ə @@V2]g5q4~Ծ @|Ĝk@"@ @`nd_c Scfs(V{OnpܾJƫz8X+n|1!]VɃyMdG(g){ M \ B}$(9UZ|UZwI5 @I@SuG@w1 @ bՌ_ @( H @@sB|sZ\@@G @F!~# @ @?  @)8'*fݬ'mPk @tN]@BG* X+c\B]YV"@ 0z!~Ԍ @ A * @ $ Ļ$# &B @>BKc h @l]" ɗ @() ėrX|u1a yҰ&d6 @7 c̟hKmj@}!z{@% @ s< @CRɑCIM @B #>R( @- -jYYO @ fF5j~T6Y @.cp @t, w |y+G @ 0!~uNn|r%3` @:;@vv^&@ @`<BxjL$d @Ν>1ttk7 + Dr)i @Q'!_C3 @ @}!}c=l]" @(' ėsuԪZi;cX*a @ ķ\"nȫtm^oSc @@@e|:z] j @-æ[n' 0'ɦ~z^HM 0j!W U`J+ ĪT=o&H 0H!j8D-.˾(C4>hb  Ѝq *B| l)|Iv֓$f @ χP;EË,񮪾yUĹ  @`B|ER #Xo=IDJ б1Vw|-ўk`G k @Q&!WF4[OR @# >\k;O=a̽|,8fH (pF<7z`idot!~>Ƹ5 {H?fu8{:S$@ bG\C/U޷>o0| @c_V5J\ߴj$׼>zDaS_~O`S7h[J|'P]@`fШ0R 0KA+|-}M((:B|= Ѕ߅>Tp3pl1̵V .)7Ub#w]_CV滾ߟ  @@[Vd>pV_FбC} u^X @f^XXOֈnCa3 @ чx+l/M.yW6.6]+]>6.mE Y`!/u[7m|9bc} uCf́ ؅UQmGx>$0?k\ @) .wӛrC;goE|5C{ʯsJC$01?> yp/CjN @8FȱT[QT+g @CMJb C75G @Fq=Kۜ*ɩ @ >űLتZoW^y9 @`;AxLow1l{4m;^Ռ[57g @llG7q&lCmeH~>^޵l @@ CbdK @EAxE&@7fgslQ+ @ hޏ{e< @` Xr߯  @M xJS!мB|4d󧏇õ;߻u~8_ }G @ bB|[N -2ܞ? @ 1pZcǗV[7V~y?-jgߞ?g @! 
U WֲGoL{&ݚ䱹w]tT^8l5ug['|}>:ٺ_+קAn/ſ$8>o49Mm(p­\Z|x-lcOi@c-ސ1N @B Ӌ^W~rj,O|w%vdw}|p4|;~Yiux<+V?&_>?Jj|.ax7NEu'_Yp{̓W/O gJyvVM/ھ7g__ c_B}O0"U?6o?9*t},ԿD4I;j|N, 6yn%~qE8`G540ijbe|cF Z>N5+oʯ/Ue\B_*8~Çj5tdyo7S*n܁HN wrtL#-L˃*oH "P^ $@z)|7N?= @I_^>zx8ܞ D!`Kw?eޞUlkY޼ƶg3X/ t} =w>z!@o>⮏HM@4:<9{P|ql#{|VU7m_K8vzYu{{;g++x xJ.S6Fmj Y`A|!ayl`?O[>y|WM޼xha._DH[n뗢wb_ ze:N]׋a}C- 03mU1YUcYȝ+vf"zŭC&/2}==HlewTR7;t+U{szGq/;UD @ ^!i~h~[<.ۗg}}lZ \9w~wy˵~ˍQ/ (]C>یN-Z,,D`Ʉ4B}> Bm,ntW`˹b|D?<$M87,lI6U`!~R=: WbxnSvkW·g+ON?L,[æ~zś7C0  x#b3mT&0˾ϯlU}}//x=?R14]qf4Zk7N`F?ʚ d)yhniM @@d$ C !VsW+ ]]KCӨXuڴj{n5gjV[mq6Ti5gU|fKM7tk[k d>_ @,F@ E!>Ū3f MA>͓1J|EY?$20#GVp%@ u|52 c໯:H uݳege7yի٪k~c+_oi4jMv^cwHUJ|3nx~nxTz:_"@_ު#c?^q qŨD# oWmVo9oWG @ |*vI`lImH[AczXƤ8-ƯFou~{ rj $[8+1mMM^jKe[ V)-hT@oSc-MPDU}P _"R @@ZɆxa" MҨQ:{uXCg 5C@LTB NdC|FE`y`#([bS/;geG @@zIxA" N}Oэ1ӛl_w>J7z!@!>Ut&0l)|W>- (]?'@`ɇ.j|c[_zy՘)kijI W ?ҙ9xL|xc$@ A>*S[2hvDѬgJ CW |Jlc`83sOӰV -0+ ڕu >nb]pi.]"eV盺R w|[n?s7MNs^\O<`<0\\(so2śl[A~2CCvϦ=<ٹO<8w0 SٶlK?BIо;TsJ8 ] W/09G^W'o  W]DDz7 ypH:hul]t/03 \LqC/nn1Wc[4oߝi}=׮OCb{i/Pӽ6= |5RbLlۿ0}Cnon :?wឝd{WW&ATv?} ?m$RLϋXWbWA̳ݘ : ]|{Z&@K>_~=<-m~|fԽ@-]-~vKY^Zpǭ ~G_ 륫׌~f *0?кش:_v)7?qsxfA~nƗ_ z'qqG=soݬT eǭ{#GEq.3| 0^71L?f7[*ϝ>Q{uye輁__rMu',!dJupD>vy!۹7<&p䡑óOk)̺;gi=h2߂?=O?nK i U`4!>/ _2j@jM!W5aOOӚM1C̣МZ}ޏS_ WO>8dJ{fcw|QM#v<ةr_}zFfoX̌F^wl 4 ??+UM K0W/>!_᎛><@ri%`>2}~gXܗw}r} *SO>/a % :{g*XӜgNG~u6o\\~#pFa<|c3ݻwy7 @&}DZUOϝx =]|^i0cyμv%\,ݕU1Տq$%0WJP]}rIul\~ǣ5FRU;~m+OfwK썀l+}1+?DW`—q= z;k>m£+oe--}C5b_7fѯc!0w -~k]wJ{帟WrowC?k]} K~c|z\elh;=7McS-_U~_y[G @ !~JeUz,LibȷT6U\u+}SXngd=uor)U<mqn~æ tpY'A´؝Zi k5!-ן@lX:3Gf!ο|pJ!Т7:Z4`- 5Yyߢ%@`"PFM!t|쳩hwG`M/ϨG“IO|ofOоzxgss';dYGi_ٛwt fxnietҏy%XAlV#m~j [ x m`EyS^Ny[ȡ(Pf=UÇ>6 ]wdG|~4p價 1ٖGS1y;gIkRX?v mn;IE@4|40QPWSg8Hmn/xHdu{;O.NN~x^xaq;p~vJdzбl|nol9~q9F( B@oU>lLS$$PVR;<_<.;qswnw?鹭{W7,/Y msdmWDoVa$@{%߳_Yr_ɶ{_Onpkϝ椝^/^xerxRlϟx?_?v!i XO~FOF֭+uVd,+ jdbE>̏k&>o(Pv=b~sӘc8쥱L< $' 'W2&@@uxH]I@Vz㭐[##C- -j  jՔ zp5Lg @I!IMm @gM/j3  й7U:'!$$fH}]- W@p_VGm+xD Pf;-ƿn>+Ŋp!b{O?wD(z'@`xʊ{7 )a @:;'! %7w # C arN"@`* Ļ U /Gv1 {+&p穧' [!yV{v}XiLv^g*׆{⻽F`!~-ǎ^J`/&ؗqc yՖ*Z4XʦqB|U!w{6B6Z@}PI(ߏ)W*|ޖpukPIWݑ{[B *B|5 Z@%7^T/ħ:9-v\[5!ķ @U!'cx_FӺ!6EZK=ۼC|:u:z@ ]ORJpo2W~E:bJz5Y5hW.#^aW )|B)}?nʼQ @$`%~L6׉@}]l.NXu~f{{ԽU:n^sVYO1V}sb^Ū}(NʼQ}WQ^ 0L+ìY- 5*U/2atȡJ|Eu 6բߖlz5ЌV]@0)_cUB|^}wՂҕt{M4c~ |c |Zp_mmo#6US$뭞D=?g +xW`}e/lO/+U!fPMdg +㨳Y[ q X.FUC@x݋g{/ v _ `{ڲ_U.ު(Ķ8*B|5D- į.؃˶{pMbP_-xmWT_Bvuv4.lR[_ '+>ζ-FI^o!?4m]j,`%~j|0)o^=UZ%ܶrzYʥ_Ǝ"Ї߇>˓5 寁6%!šm6S(Œq!\S~j(}}Q~{Ρy}ڷuF!>@?S$m]cnW___/B_{o_I5 k֙]>_w'c1UHB`Xl7zj6|rպͳ˼f&@?Cjrl6Bk(xʒo,aeǬz"I, ď =U䘙cscZMl^L7bV2i:!b . 8M<Խ:~+)o}Ig @] h:XXu/Afmi ƹ'z5xGR1'Vr\V7 % } =nP&kU|UGBb'02aBpcZM7}- u_{F, @ v!> ߎbTYpW,~%A_#01ϽM-pF5nOELLe.. me]7b+ _*ﵴ6j#r T+7q |nŽK7jbkLȽS% U!#)So2s, @`WvzWCV+S~yoUX:uS?wTϺ{޹n  0F!~UOl|7T(MB|{lL@=k4-`;}Ӣ#0<#) x@W@_ OfOnd\m s){vS{Q׮G(' ėsr@tf; 듭|Gg/0)ߥ۽mݛ c'0>}y} gl*{7o do a}ͅ{;vzfW @66T@-ܕx/wj|`V܇10*~7?]OB0~w @@!c-0Y^}W?>;gqEJX^Z+~ߝvWt$P_*|}C- @ esO<|c'0h7A`d'_X^g1u!j YJkn `S @ A X 2E+ Ļ* 0b2~ @ ѕĀ Џm c  0 2 @! &AzV9 Еߕ~ Ua(P xQ_) Y|Fy~*|"@9 HJf @!@ F?b* 0H!~e5)l(}-AJ@JZ?Y*|= @!DM'jٮk{?]Zv1 @`H^*|%4 @D@w! @`dVGVp%@% Ü֕w]#P @!g @{a) @5!5Z  @ .㪇 @UԜӹ-ݐsƹ^w% @@wB|wz"@@ouVϝ>۸uL ( Ļ"JnuV @!g @7: Z @+xFRV)v\jG%_ @@B|z @@gwF# ZP]^K\@OcUfγYϘZ>j  @!Gt, x6αXZ K% Оߞ  ЛUuLhU@oWm XE˯_lg KMx @G[aL\VGnR9*|*2N @BfΈL@ ݮ Jh)B ЌߌVzLS9js}JJ?J`"6  Еt% ۛ9#bA5ZV'  @ z!> ܨo{ Vc @qJDy`ZaхP7 9 @@"B|"2>tϰ n팜 k뼓8k Gs* @ q!>fڱw0#lV=͆ ýcc_E\M @"#-a#0UΫ VZ?wDhFN?f!Xi.+y @8qݬ3pcwUkhE ЖߖvH)4d.j>2 @!!!Hͤ/s@ylW>Hkd @@~X\,HpJ @`Bl]W]ƙ] XJZ? @4ȾQSSOm=WzX~{{)CzU^ PC̳T- -jub xSRfeRo31N?*# HJfUAd& Cl^M͈ PU@*<$`uK%@ @@@M @! 
ďfI@"V)a @z{-<5A I@$Az@% @ !>"" ?! @ ! Em @A bj* yRD@B|z @Zs @J9 @, \ 0n㮿 @s<HavNhhƚ!@+ t>05A?""  @- ďfO@V#(! @)a 0N-̈́ еߵ @ @@E!" PE*|5 @Bk-a}uf&  @#GVp%@}2a=ٝyRhz @(% ėbr zp6 @Ⱦ0Qs$@`8wzjɼqѭϩzªS>_?wDUHPX-av]eMSoUԆuNk`ճ%c6 @Vs< TYeoj@Mm>S [^wFl @ e+)W D`~)1ۥh|}]/{\|34" @m!ma @c$jblX~L˟DX}Jdн  @܌ .D@jMu>ϹO .S3#!@ VƸ P@Mc_Vd>(SEbq@ku gŘR ilU49- @e2J!@|ʗɨN,sϨ g @ !>B 9/תЮZs2{PEs |Niy(+#+D& GV!0bV*_) Y"@6 |ʪ>jןת.w?*#sޟႲ@Ep7g @GkCsx_-?y#-_}vdz<ݘ n|2I7z!PE@~CO ,s6*fc~L6W--jq< Xs6hJ@oJR;F( /P|RB?N|Z#@ ۊ9PjԶٶge N@w} `8jq794- 7-=H[57g/ _##$@a êhU@˯_ٳʲRǼ9K- @MB&!'@ 1#7Il[_@oXUG NsO90"@ .J 4\=YX?wew|^Mv᧳O|p~~}N\1͆6Ԙkc4 @@|B||51"8|cy]v+퇎'yI~<;ھO@&FD` /cÉ3~{/?ޯ7}`ϼM~4޹ wk: @@B|Z#@`;-™oVaOKsᅅm o; @hW.N@jUx[gu3=ݶOnpe+ xZ5Ϻmw}ZȷӻE`M!`%S _lt?`-S#@H@Z7$'"@ G@4UG3k=~u!j 0$!~H4 @? Z @4% 7% @ в2  @ @@SB|S!0=Uܗk/߿~ִZFKHQ@OjL Y_ < MO>}uWƒdKi~>|Íg_t_ PI@$C\4#>}c+8qÁ/:L"@` NsȌLpL-p~믄'U<ς?nc4Vmt++ _;ig?+V2ܣG?/f};Jm~W}_4ݪj;yEs'h @@!c/.O)4ObK|(V~b?uɱ^^h@[vތ4?9iG“]>;ɓ|u07e?ӭwihJ!R⹧m~]Gg'AJxS!;O.N~^xarN^U,k7?= "@& C z^o̷׿j߷f+fA>'gy~~wl#ޙ\۳l$_,q%|{eྞkק!~WnqW_z1|=o"{ r@B-sC6?f+Gݧ~e/ӟeS8Y>O{v~w2W&ATv?%H(AdO2B}Snm/gvC~'s7w>r'׬7{ oLkn/V_5t @ J!>ʲ nqssWl|Krqq'TJl5t)'n=,|pwc;ڴt/pF}z$@ F[UnUOy?õWi}m'?W|_+pKNtg? @ H@P1M@]uϿ  @cPes$@ @A(I_ f+e @) )W  @ @`TBm^ B/ @88`p~Jw @RS @ @hі  @K;>0<3w|{}?pED* GZ"Ч;ގ'?ׯ~~h 7gD @c!cp @y*yjVC;ܕA V.i @`Bn s/9>{lð޿ь@';93#x @[x%.nztӏy̑.㸧bt @@B|: @ ?)HY@OzNem7÷NG{a;?rxxhӖt&n @@!S @@Sy0ݸq D-Eܣ Wk j@B|:# AJc\c-ۺ_Vq㮏э[@w͞-߯ 4) 7- x]Z#@gg?z-h\@oT @`_!Ak-)} )UX  @mmO A^oL7X] 0XAҲmV @` @A ͖{Z#'bq?K/ x6Sco4 PGs.F 0 /^}# @@B|#@ @l4 @ еߵ @ @@E!" @ @@B|#@ @pN#@ 0TI| g] @] @ @ !>B&/y @   @ H  @ @ H } ;}"//} @{~_) Y"@ @{x @HD@OPI>lS_NVD_Uy @ @c!cp @ UVθ  PN*|9'G[@'@@B|B2Tl! oP= =@ @7Ⱦ? 0fs}s'@`hcVq%@@kSSX@CB|C!@XʏM@NrM`* Ļ @¼1!otc"л{ c6mlY6hMm}ocWMm+iۧEu^˺U6˚5h [8 @ @R>b @ @@B|50 @ PJ@/  @ п  @K19 @/ _# @ @RL"@ @  @ @@)! @ @@B|50 @ PJ@/  @ п  @K19 @/ _# @ @RL"@ @  @ @@)! @ @@B|50 @ PJ@/  @ п  @K19 @/ _# @ @RL"@ @  @ @@)! @ @@B|50 @ PJ@/  @ п  @K19 @/ _# @ @RL"@ @  @ @@)! @ @@B|50 @ PJ@/  @ п  @K19 @/ _# @ @RL"@ @  @ @@)! @ @@B|50 @ PJ@/  @ п  @K19 @/ _# @ @RL"@ @  @ @@)! @ @@B|50 @ PJ@/  @ п  @K19 @/ _# @ @RL"@ @  @ @@)! @ @@B|50 @ PJ@/  @ п  @K19 @/ _# @ @RL"@ @  @ @@)! 
@ @@B|50 @ PJ@/  @ п  @K19 @/ _# @ @RL"@ @  @ @@)ej}|IENDB`ironic-5.1.0/doc/source/images/states.svg0000664000567000056710000005430712674513466021526 0ustar jenkinsjenkins00000000000000 Ironic states enroll enroll verifying verifying enroll->verifying manage (via API) manageable manageable verifying->manageable done verifying->enroll fail cleaning cleaning manageable->cleaning provide (via API) manageable->cleaning clean (via API) inspecting inspecting manageable->inspecting inspect (via API) available available cleaning->available done clean failed clean failed cleaning->clean failed fail clean wait clean wait cleaning->clean wait wait cleaning->manageable manage inspecting->manageable done inspect failed inspect failed inspecting->inspect failed fail deploying deploying available->deploying active (via API) available->manageable manage (via API) deploy failed deploy failed deploying->deploy failed fail wait call-back wait call-back deploying->wait call-back wait active active deploying->active done active->deploying rebuild (via API) deleting deleting active->deleting deleted (via API) error error deleting->error error deleting->cleaning clean error->deploying rebuild (via API) error->deleting deleted (via API) deploy failed->deploying rebuild (via API) deploy failed->deploying active (via API) deploy failed->deleting deleted (via API) wait call-back->deploying resume wait call-back->deploy failed fail wait call-back->deleting deleted (via API) clean failed->manageable manage (via API) clean wait->clean failed fail clean wait->clean failed abort (via API) clean wait->cleaning resume inspect failed->manageable manage (via API) inspect failed->inspecting inspect (via API) ironic-5.1.0/doc/source/index.rst0000664000567000056710000000372712674513466020076 0ustar jenkinsjenkins00000000000000============================================ Welcome to Ironic's developer documentation! ============================================ Introduction ============ Ironic is an OpenStack project which provisions bare metal (as opposed to virtual) machines by leveraging common technologies such as PXE boot and IPMI to cover a wide range of hardware, while supporting pluggable drivers to allow vendor-specific functionality to be added. If one thinks of traditional hypervisor functionality (eg, creating a VM, enumerating virtual devices, managing the power state, loading an OS onto the VM, and so on), then Ironic may be thought of as a *hypervisor API* gluing together multiple drivers, each of which implement some portion of that functionality with respect to physical hardware. The documentation provided here is continually kept up-to-date based on the latest code, and may not represent the state of the project at any specific prior release. For information on any current or prior version of Ironic, see `the release notes`_. .. _the release notes: http://docs.openstack.org/releasenotes/ironic/ Administrator's Guide ===================== .. toctree:: :maxdepth: 1 deploy/user-guide Installation Guide Upgrade Guide Configuration Reference (Liberty) drivers/ipa deploy/drivers deploy/cleaning deploy/raid deploy/troubleshooting Release Notes Commands and API References =========================== .. toctree:: :maxdepth: 1 cmds/ironic-dbsync webapi/v1 dev/drivers Developer's Guide ================= .. 
.. toctree::
   :maxdepth: 1

   dev/architecture
   dev/states
   dev/contributing
   dev/code-contribution-guide
   dev/dev-quickstart
   dev/vendor-passthru
   dev/faq

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
ironic-5.1.0/doc/source/cmds/0000775000567000056710000000000012674513633017146 5ustar jenkinsjenkins00000000000000ironic-5.1.0/doc/source/cmds/ironic-dbsync.rst0000664000567000056710000001220712674513466022451 0ustar jenkinsjenkins00000000000000=============
ironic-dbsync
=============

The :command:`ironic-dbsync` utility is used to create the database schema
tables that the ironic services will use for storage. It can also be used to
upgrade (or downgrade) existing database tables when migrating between
different versions of ironic.

The `Alembic library <https://alembic.readthedocs.org/en/latest/>`_ is used
to perform the database migrations.

Options
=======

This is a partial list of the most useful options. To see the full list,
run the following::

    ironic-dbsync --help

.. program:: ironic-dbsync

.. option:: -h, --help

  Show help message and exit.

.. option:: --config-dir

  Path to a config directory with configuration files.

.. option:: --config-file

  Path to a configuration file to use.

.. option:: -d, --debug

  Print debugging output.

.. option:: -v, --verbose

  Print more verbose output.

.. option:: --version

  Show the program's version number and exit.

.. option:: upgrade, downgrade, stamp, revision, version, create_schema

  The :ref:`command <dbsync_cmds>` to run.

Usage
=====

Options for the various :ref:`commands <dbsync_cmds>` for
:command:`ironic-dbsync` are listed when the :option:`-h` or :option:`--help`
option is used after the command. For example::

    ironic-dbsync create_schema --help

Information about the database is read from the ironic configuration file
used by the API server and conductor services. This file must be specified
with the :option:`--config-file` option::

    ironic-dbsync --config-file /path/to/ironic.conf create_schema

The configuration file defines the database backend to use with the
*connection* database option::

    [database]
    connection=mysql+pymysql://root@localhost/ironic

If no configuration file is specified with the :option:`--config-file`
option, :command:`ironic-dbsync` assumes an SQLite database.

.. _dbsync_cmds:

Command Options
===============

:command:`ironic-dbsync` is given a command that tells the utility what
actions to perform. These commands can take arguments. Several commands are
available:

.. _create_schema:

create_schema
-------------

.. program:: create_schema

.. option:: -h, --help

  Show help for create_schema and exit.

This command will create database tables based on the most current version.
It assumes that there are no existing tables.

An example of creating database tables with the most recent version::

    ironic-dbsync --config-file=/etc/ironic/ironic.conf create_schema

downgrade
---------

.. program:: downgrade

.. option:: -h, --help

  Show help for downgrade and exit.

.. option:: --revision

  The revision number you want to downgrade to.

This command will revert existing database tables to a previous version.
The version can be specified with the :option:`--revision` option.

An example of downgrading to table versions at revision 2581ebaf0cb2::

    ironic-dbsync --config-file=/etc/ironic/ironic.conf downgrade --revision 2581ebaf0cb2

revision
--------

.. program:: revision

.. option:: -h, --help

  Show help for revision and exit.

.. option:: -m <message>, --message <message>

  The message to use with the revision file.

.. option:: --autogenerate

  Compares table metadata in the application with the status of the database
  and generates migrations based on this comparison.

This command will create a new revision file. You can use the
:option:`--message` option to comment the revision.

This is really only useful for ironic developers making changes that require
modifying the database schema. This revision file is used during database
migration and will specify the changes that need to be made to the database
tables. Further discussion is beyond the scope of this document.

stamp
-----

.. program:: stamp

.. option:: -h, --help

  Show help for stamp and exit.

.. option:: --revision

  The revision number.

This command will 'stamp' the revision table with the version specified with
the :option:`--revision` option. It will not run any migrations.
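An example of stamping the database at a specific revision (this reuses the
revision ID shown in the downgrade example above for illustration; substitute
the revision appropriate to your deployment)::

    ironic-dbsync --config-file=/etc/ironic/ironic.conf stamp --revision 2581ebaf0cb2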
upgrade
-------

.. program:: upgrade

.. option:: -h, --help

  Show help for upgrade and exit.

.. option:: --revision

  The revision number to upgrade to.

This command will upgrade existing database tables to the most recent
version, or to the version specified with the :option:`--revision` option.
If there are no existing tables, then new tables are created, beginning
with the oldest known version, and successively upgraded using all of the
database migration files, until they are at the specified version. Note
that this behavior is different from the :ref:`create_schema` command that
creates the tables based on the most recent version.

An example of upgrading to the most recent table versions::

    ironic-dbsync --config-file=/etc/ironic/ironic.conf upgrade

.. note:: This command is the default if no command is given to
          :command:`ironic-dbsync`.

.. warning:: The upgrade command is not compatible with SQLite databases
             since it uses ALTER TABLE commands to upgrade the database
             tables. SQLite supports only a limited subset of ALTER TABLE.

version
-------

.. program:: version

.. option:: -h, --help

  Show help for version and exit.

This command will output the current database version.
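An example of printing the current database version (a usage sketch; the
output is the Alembic revision ID of your database, so it will differ per
deployment)::

    ironic-dbsync --config-file=/etc/ironic/ironic.conf version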
ironic-5.1.0/doc/source/dev/0000775000567000056710000000000012674513633016776 5ustar jenkinsjenkins00000000000000ironic-5.1.0/doc/source/dev/architecture.rst0000664000567000056710000001155012674513466022220 0ustar jenkinsjenkins00000000000000.. _architecture:

===================
System Architecture
===================

High Level description
======================

An Ironic deployment will be composed of the following components:

- An admin-only RESTful `API service`_, by which privileged users, such as
  cloud operators and other services within the cloud control plane, may
  interact with the managed bare metal servers.
- A `Conductor service`_, which does the bulk of the work. Functionality is
  exposed via the `API service`_. The Conductor and API services communicate
  via RPC.
- A Database and `DB API`_ for storing the state of the Conductor and
  Drivers.
- A Deployment Ramdisk or Deployment Agent, which provides control over the
  hardware which is not available remotely to the Conductor. A ramdisk
  should be built which contains one of these agents, e.g. with
  `diskimage-builder`_. This ramdisk can be booted on-demand.

  - **NOTE:** The agent is never run inside a tenant instance.

Drivers
=======

The internal driver API provides a consistent interface between the Conductor
service and the driver implementations. A driver is defined by a class
inheriting from the `BaseDriver`_ class, defining certain interfaces; each
interface is an instance of the relevant driver module.

For example, a fake driver class might look like this::

    class FakePower(base.PowerInterface):
        def get_properties(self):
            return {}

        def validate(self, task):
            pass

        def get_power_state(self, task):
            return states.NOSTATE

        def set_power_state(self, task, power_state):
            pass

        def reboot(self, task):
            pass

    class FakeDriver(base.BaseDriver):
        def __init__(self):
            self.power = FakePower()

There are three categories of driver interfaces:

- `Core` interfaces provide the essential functionality for Ironic within
  OpenStack, and may be depended upon by other services. All drivers must
  implement these interfaces. The Core interfaces are `power` and `deploy`.
- `Standard` interfaces provide functionality beyond the needs of OpenStack,
  but which have been standardized across all drivers and become part of
  Ironic's API. If a driver implements this interface, it must adhere to the
  standard. This is presented to encourage vendors to work together with the
  Ironic project and implement common features in a consistent way, thus
  reducing the burden on consumers of the API. The Standard interfaces are
  `management`, `console`, `boot`, `inspect`, and `raid`.
- The `Vendor` interface allows an exemption to the API contract when a
  vendor wishes to expose unique functionality provided by their hardware
  and is unable to do so within the `Core` or `Standard` interfaces. In this
  case, Ironic will merely relay the message from the API service to the
  appropriate driver.

Driver-Specific Periodic Tasks
------------------------------

Drivers may run their own periodic tasks, i.e. actions run repeatedly after
a certain amount of time. Such a task is created by decorating a method on
the driver itself or on any interface with the periodic_ decorator, e.g. ::

    from futurist import periodics

    class FakePower(base.PowerInterface):
        @periodics.periodic(spacing=42)
        def task(self, manager, context):
            pass  # do something

    class FakeDriver(base.BaseDriver):
        def __init__(self):
            self.power = FakePower()

        @periodics.periodic(spacing=42)
        def task2(self, manager, context):
            pass  # do something

Here the ``spacing`` argument is a period in seconds for a given periodic
task. For example 'spacing=5' means every 5 seconds.

Message Routing
===============

Each Conductor registers itself in the database upon start-up, and
periodically updates the timestamp of its record. Contained within this
registration is a list of the drivers which this Conductor instance
supports. This allows all services to maintain a consistent view of which
Conductors and which drivers are available at all times.

Based on their respective driver, all nodes are mapped across the set of
available Conductors using a `consistent hashing algorithm`_. Node-specific
tasks are dispatched from the API tier to the appropriate conductor using
conductor-specific RPC channels. As Conductor instances join or leave the
cluster, nodes may be remapped to different Conductors, thus triggering
various driver actions such as take-over or clean-up.
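To make the mapping concrete, here is a deliberately simplified sketch of
consistent hashing. This is *not* Ironic's actual implementation (the real
code lives in `ironic.common.hash_ring`_ and differs in detail, e.g. in
partitioning and replica handling); the host names and UUID below are
made-up examples::

    import bisect
    import hashlib

    def _position(key):
        # Hash a string onto an integer position on the ring.
        return int(hashlib.md5(key.encode('utf-8')).hexdigest(), 16)

    class SimpleHashRing(object):
        def __init__(self, conductors, replicas=16):
            # Place each conductor on the ring several times ("replicas")
            # so that load spreads evenly and few nodes are remapped when
            # membership changes.
            self._ring = sorted(
                (_position('%s-%d' % (host, i)), host)
                for host in conductors
                for i in range(replicas))
            self._positions = [pos for pos, _host in self._ring]

        def conductor_for(self, node_uuid):
            # Walk clockwise from the node's position to the next
            # conductor placed on the ring.
            index = bisect.bisect(
                self._positions, _position(node_uuid)) % len(self._ring)
            return self._ring[index][1]

    ring = SimpleHashRing(['cond-1.example.com', 'cond-2.example.com'])
    print(ring.conductor_for('1be26c0b-03f2-4d2e-ae87-c02d7f33c123'))

Removing a conductor and rebuilding the ring remaps only the nodes that
hashed to it; the rest keep their assignment, which is what keeps take-over
and clean-up actions tractable.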
.. _API service: ../webapi/v1.html
.. _BaseDriver: ../api/ironic.drivers.base.html#ironic.drivers.base.BaseDriver
.. _Conductor service: ../api/ironic.conductor.manager.html
.. _DB API: ../api/ironic.db.api.html
.. _diskimage-builder: https://github.com/openstack/diskimage-builder
.. _consistent hashing algorithm: ../api/ironic.common.hash_ring.html
.. _ironic.common.hash_ring: ../api/ironic.common.hash_ring.html
.. _periodic: http://docs.openstack.org/developer/futurist/api.html#futurist.periodics.periodic
ironic-5.1.0/doc/source/dev/faq.rst0000664000567000056710000000367012674513466020305 0ustar jenkinsjenkins00000000000000.. _faq:

==========================================
Developer FAQ (frequently asked questions)
==========================================

Here are some answers to frequently-asked questions from IRC and elsewhere.

.. contents::
   :local:
   :depth: 2

How do I...
===========

...create a migration script template?
--------------------------------------

Using the ``alembic revision`` command, e.g::

    $ cd ironic/ironic/db/sqlalchemy
    $ alembic revision -m "create foo table"

For more information see the `alembic documentation`_.

.. _`alembic documentation`: https://alembic.readthedocs.org/en/latest/tutorial.html#create-a-migration-script

...know if a release note is needed for my change?
--------------------------------------------------

`Reno documentation`_ contains a description of what can be added to each
section of a release note. If, after reading this, you're still unsure about
whether to add a release note for your change or not, keep in mind that it
is intended to contain information for deployers, so changes to unit tests
or documentation are unlikely to require one.

...create a new release note?
-----------------------------

By running the ``reno`` command via tox, e.g::

    $ tox -e venv -- reno new version-foo
    venv create: /home/foo/ironic/.tox/venv
    venv installdeps: -r/home/foo/ironic/test-requirements.txt
    venv develop-inst: /home/foo/ironic
    venv runtests: PYTHONHASHSEED='0'
    venv runtests: commands[0] | reno new version-foo
    Created new notes file in releasenotes/notes/version-foo-ecb3875dc1cbf6d9.yaml
    venv: commands succeeded
    congratulations :)

    $ git status
    On branch test
    Untracked files:
      (use "git add <file>..." to include in what will be committed)
      releasenotes/notes/version-foo-ecb3875dc1cbf6d9.yaml

Then edit the result file. For more information see the
`reno documentation`_.

.. _`reno documentation`: http://docs.openstack.org/developer/reno/usage.html
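A freshly generated file contains commented-out sections. A minimal edited
note might look like this -- a sketch only: the section names follow reno's
conventions, the driver name is invented for illustration, and you keep only
the sections that apply to your change::

    ---
    features:
      - Added support for the frobnicator driver.
    upgrade:
      - The frobnicator driver requires the ``frob`` utility to be
        installed on the conductor host.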
ironic-5.1.0/doc/source/dev/dev-quickstart.rst0000664000567000056710000005102012674513466022500 0ustar jenkinsjenkins00000000000000.. _dev-quickstart:

=====================
Developer Quick-Start
=====================

This is a quick walkthrough to get you started developing code for Ironic.
This assumes you are already familiar with submitting code reviews to an
OpenStack project.

The gate currently runs the unit tests under both Python 2.7 and Python 3.4.
It is strongly encouraged to run the unit tests locally under one, the
other, or both prior to submitting a patch.

.. note:: Do not run unit tests on the same environment as devstack due to
          conflicting configuration with system dependencies.

.. seealso::

    http://docs.openstack.org/infra/manual/developers.html#development-workflow

Install prerequisites (for python 2.7):

- Ubuntu/Debian::

    sudo apt-get install python-dev libssl-dev python-pip libmysqlclient-dev libxml2-dev libxslt-dev libpq-dev git git-review libffi-dev gettext ipmitool psmisc graphviz libjpeg-dev

- Fedora 21/RHEL7/CentOS7::

    sudo yum install python-devel openssl-devel python-pip mysql-devel libxml2-devel libxslt-devel postgresql-devel git git-review libffi-devel gettext ipmitool psmisc graphviz gcc libjpeg-turbo-devel

  If using RHEL and yum reports "No package python-pip available" and "No
  package git-review available", use the EPEL software repository.
  Instructions can be found at ``_.

- Fedora 22 or higher::

    sudo dnf install python-devel openssl-devel python-pip mysql-devel libxml2-devel libxslt-devel postgresql-devel git git-review libffi-devel gettext ipmitool psmisc graphviz gcc libjpeg-turbo-devel

  Additionally, if using Fedora 23, the ``redhat-rpm-config`` package should
  be installed so that the development virtualenv can be built successfully.

- openSUSE/SLE 12::

    sudo zypper install git git-review libffi-devel libmysqlclient-devel libopenssl-devel libxml2-devel libxslt-devel postgresql-devel python-devel python-nose python-pip gettext-runtime psmisc

  Graphviz is only needed for generating the state machine diagram. To
  install it on openSUSE or SLE 12, see ``_.

To use Python 3.4, follow the instructions above to install prerequisites
and additionally install the following packages:

- On Ubuntu/Debian::

    sudo apt-get install python3-dev

- On Fedora 21/RHEL7/CentOS7::

    sudo yum install python3-devel

- On Fedora 22 and higher::

    sudo dnf install python3-devel

If your distro has at least tox 1.8, use a similar command to install the
``python-tox`` package. Otherwise install this on all distros::

    sudo pip install -U tox

You may need to explicitly upgrade virtualenv if you've installed the one
from your OS distribution and it is too old (tox will complain).
You can upgrade it individually, if you need to::

    sudo pip install -U virtualenv

Ironic source code should be pulled directly from git::

    # from your home or source directory
    cd ~
    git clone https://git.openstack.org/openstack/ironic
    cd ironic

Setting up a local environment for development and testing should be done
with tox, for example::

    # create a virtualenv for development
    tox -evenv --notest

All unit tests should be run using tox. To run Ironic's entire test suite::

    # run all tests (unit under both py27 and py34, and pep8)
    tox

To run the unit tests under py27 and also run the pep8 tests::

    # run all tests (unit under py27 and pep8)
    tox -epy27 -epep8

To run the unit tests under py34 and also run the pep8 tests::

    # run all tests (unit under py34 and pep8)
    tox -epy34 -epep8

You may pass options to the test programs using positional arguments. To run
a specific unit test, this passes the -r option and desired test (regex
string) to `os-testr <https://pypi.python.org/pypi/os-testr>`_::

    # run a specific test for Python 2.7
    tox -epy27 -- -r test_conductor

To run only the pep8/flake8 syntax and style checks::

    tox -epep8

===============================
Exercising the Services Locally
===============================

If you would like to exercise the Ironic services in isolation within a
local virtual environment, you can do this without starting any other
OpenStack services. For example, this is useful for rapidly prototyping and
debugging interactions over the RPC channel, testing database migrations,
and so forth.

Step 1: System Dependencies
---------------------------

There are two ways to install the required system dependencies: manually, or
by using the included Vagrant file.

Option 1: Manual Install
########################

#. Install a few system prerequisites::

    # install rabbit message broker
    # Ubuntu/Debian:
    sudo apt-get install rabbitmq-server

    # Fedora 21/RHEL7/CentOS7:
    sudo yum install rabbitmq-server
    sudo systemctl start rabbitmq-server.service

    # Fedora 22 or higher:
    sudo dnf install rabbitmq-server
    sudo systemctl start rabbitmq-server.service

    # openSUSE/SLE 12:
    sudo zypper install rabbitmq-server
    sudo systemctl start rabbitmq-server.service

    # optionally, install mysql-server
    # Ubuntu/Debian:
    # sudo apt-get install mysql-server
    # Fedora 21/RHEL7/CentOS7:
    # sudo yum install mariadb mariadb-server
    # sudo systemctl start mariadb.service
    # Fedora 22 or higher:
    # sudo dnf install mariadb mariadb-server
    # sudo systemctl start mariadb.service
    # openSUSE/SLE 12:
    # sudo zypper install mariadb
    # sudo systemctl start mysql.service

#. Clone the ``Ironic`` repository and install it within a virtualenv::

    # activate the virtualenv
    cd ~
    git clone https://git.openstack.org/openstack/ironic
    cd ironic
    tox -evenv --notest
    source .tox/venv/bin/activate

    # install ironic within the virtualenv
    python setup.py develop
#. Create a configuration file within the ironic source directory::

    # copy sample config and modify it as necessary
    cp etc/ironic/ironic.conf.sample etc/ironic/ironic.conf.local

    # disable auth since we are not running keystone here
    sed -i "s/#auth_strategy=keystone/auth_strategy=noauth/" etc/ironic/ironic.conf.local

    # Use the 'fake_ipmitool' test driver
    sed -i "s/#enabled_drivers=pxe_ipmitool/enabled_drivers=fake_ipmitool/" etc/ironic/ironic.conf.local

    # set a fake host name [useful if you want to test multiple services on the same host]
    sed -i "s/#host=.*/host=test-host/" etc/ironic/ironic.conf.local

    # turn off the periodic sync_power_state task, to avoid getting NodeLocked exceptions
    sed -i "s/#sync_power_state_interval=60/sync_power_state_interval=-1/" etc/ironic/ironic.conf.local

#. Initialize the ironic database (optional)::

    # ironic defaults to storing data in ./ironic/ironic.sqlite
    # If using MySQL, you need to create the initial database
    mysql -u root -pMYSQL_ROOT_PWD -e "create schema ironic"
    # and switch the DB connection from sqlite to something else, eg. mysql
    sed -i "s/#connection=.*/connection=mysql\+pymysql:\/\/root:MYSQL_ROOT_PWD@localhost\/ironic/" etc/ironic/ironic.conf.local

At this point, you can continue to Step 2.

Option 2: Vagrant, VirtualBox, and Ansible
##########################################

This option requires `virtualbox <https://www.virtualbox.org>`_,
`vagrant <https://www.vagrantup.com>`_, and
`ansible <https://www.ansible.com>`_. You may install these using your
favorite package manager, or by downloading from the provided links.

Next, run vagrant::

    vagrant up

This will create a VM available to your local system at `192.168.99.11`,
install all the necessary service dependencies, and configure some default
users. It will also generate `./etc/ironic/ironic.conf.local` preconfigured
for local dev work. We recommend you compare and familiarize yourself with
the settings in `./etc/ironic/ironic.conf.sample` so you can adjust it to
meet your own needs.

Step 2: Start the API
---------------------

#. Activate the virtual environment created in the previous section to run
   the API::

    # switch to the ironic source (Not necessary if you followed Option 1)
    cd ironic

    # activate the virtualenv
    source .tox/venv/bin/activate

    # install ironic within the virtualenv
    python setup.py develop

    # This creates the database tables.
    ironic-dbsync --config-file etc/ironic/ironic.conf.local create_schema

#. Start the API service in debug mode and watch its output::

    # start the API service
    ironic-api -v -d --config-file etc/ironic/ironic.conf.local

Step 3: Install the Client
--------------------------

#. Clone the ``python-ironicclient`` repository and install it within a
   virtualenv::

    # from your home or source directory
    cd ~
    git clone https://git.openstack.org/openstack/python-ironicclient
    cd python-ironicclient
    tox -evenv --notest
    source .tox/venv/bin/activate

#. Export some ENV vars so the client will connect to the local services::

    export OS_AUTH_TOKEN=fake-token
    export IRONIC_URL=http://localhost:6385/

Step 4: Start the Conductor Service
-----------------------------------

Open one more window (or screen session), again activate the venv, and then
start the conductor service and watch its output::

    # activate the virtualenv
    cd ironic
    source .tox/venv/bin/activate

    # start the conductor service
    ironic-conductor -v -d --config-file etc/ironic/ironic.conf.local

You should now be able to interact with Ironic via the python client
(installed in Step 3) and observe both services' debug outputs in the other
two windows.
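As a quick sanity check that the API is up, you can also query it directly.
This is a sketch assuming the noauth configuration above; the exact response
body will vary by version::

    # the token value is ignored with auth_strategy=noauth
    curl -H "X-Auth-Token: fake-token" http://localhost:6385/v1/

This should return a JSON document describing the v1 API resources.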
This is a good way to test new features or play with the functionality without necessarily starting DevStack. To get started, list the available commands and resources:: # get a list of available commands ironic help # get the list of drivers currently supported by the available conductor(s) ironic driver-list # get a list of nodes (should be empty at this point) ironic node-list Here is an example walkthrough of creating a node:: MAC="aa:bb:cc:dd:ee:ff" # replace with the MAC of a data port on your node IPMI_ADDR="1.2.3.4" # replace with a real IP of the node BMC IPMI_USER="admin" # replace with the BMC's user name IPMI_PASS="pass" # replace with the BMC's password # enroll the node with the "fake" deploy driver and the "ipmitool" power driver # Note that driver info may be added at node creation time with "-i" NODE=$(ironic node-create -d fake_ipmitool -i ipmi_address=$IPMI_ADDR -i ipmi_username=$IPMI_USER | grep ' uuid ' | awk '{print $4}') # driver info may also be added or updated later on ironic node-update $NODE add driver_info/ipmi_password=$IPMI_PASS # add a network port ironic port-create -n $NODE -a $MAC # view the information for the node ironic node-show $NODE # request that the node's driver validate the supplied information ironic node-validate $NODE # you have now enrolled a node sufficiently to be able to control # its power state from ironic! ironic node-set-power-state $NODE on If you make some code changes and want to test their effects, install again with "python setup.py develop", stop the services with Ctrl-C, and restart them. ============================== Deploying Ironic with DevStack ============================== DevStack may be configured to deploy Ironic, set up Nova to use the Ironic driver, and provide hardware resources (network, baremetal compute nodes) using a combination of OpenVSwitch and libvirt. It is highly recommended to deploy on an expendable virtual machine and not on your personal workstation. Deploying Ironic with DevStack requires a machine running Ubuntu 14.04 (or later) or Fedora 20 (or later). .. seealso:: http://docs.openstack.org/developer/devstack/ Devstack will no longer create the user 'stack' with the desired permissions, but does provide a script to perform the task:: git clone https://github.com/openstack-dev/devstack.git devstack sudo ./devstack/tools/create-stack-user.sh Switch to the stack user and clone DevStack:: sudo su - stack git clone https://github.com/openstack-dev/devstack.git devstack Create devstack/local.conf with minimal settings required to enable Ironic. You can use either of two driver families for deployment, pxe_* or agent_*; see :ref:`IPA` for an explanation. An example local.conf that enables both types of drivers and uses the ``pxe_ssh`` driver by default:: cd devstack cat >local.conf <<END Node Vendor Passthru -------------------- Drivers may implement a passthrough API, which is accessible via the ``/v1/nodes/<node_ident>/vendor_passthru?method={METHOD}`` endpoint. Beyond basic checking, Ironic does not introspect the message body and simply "passes it through" to the relevant driver. A method: * can support one or more HTTP methods (for example, GET, POST) * is asynchronous or synchronous + For asynchronous methods, a 202 (Accepted) HTTP status code is returned to indicate that the request was received, accepted and is being acted upon. No body is returned in the response. + For synchronous methods, a 200 (OK) HTTP status code is returned to indicate that the request was fulfilled. The response may include a body.
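As an illustrative sketch (not part of Ironic's documented examples), a synchronous passthru method could be invoked from Python with the ``requests`` library; the node UUID and the method name below are placeholders:

.. code-block:: python

    import requests

    node_uuid = '1be26c0b-03f2-4d2e-ae87-c02d7f33c123'  # illustrative only
    url = 'http://localhost:6385/v1/nodes/%s/vendor_passthru' % node_uuid

    # The method name is passed as a query parameter; the body is handed
    # through to the driver unmodified.
    resp = requests.post(url, params={'method': 'send_raw'},
                         json={'raw_bytes': '0x01 0x02'})

    # Expect 200 (OK) for synchronous methods and 202 (Accepted) for
    # asynchronous ones.
    print(resp.status_code)
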
While performing the request, a lock is held on the node, and other requests for the node will be delayed and may fail with an HTTP 409 (Conflict) error code. This endpoint exposes a node's driver directly, and as such, it is expressly not part of Ironic's standard REST API. There is only a single HTTP endpoint exposed, and the semantics of the message body are determined solely by the driver. Ironic makes no guarantees about backwards compatibility; this is solely up to the discretion of each driver's author. To get information about all the methods available via the vendor_passthru endpoint for a particular node, you can issue an HTTP GET request:: GET /v1/nodes/<node_ident>/vendor_passthru/methods The response's JSON body will contain information for each method, such as the method's name, a description, the HTTP methods supported, and whether it's asynchronous or synchronous. Driver Vendor Passthru ---------------------- Drivers may implement an API for requests not related to any node, at ``/v1/drivers/<driver_name>/vendor_passthru?method={METHOD}``. A method: * can support one or more HTTP methods (for example, GET, POST) * is asynchronous or synchronous + For asynchronous methods, a 202 (Accepted) HTTP status code is returned to indicate that the request was received, accepted and is being acted upon. No body is returned in the response. + For synchronous methods, a 200 (OK) HTTP status code is returned to indicate that the request was fulfilled. The response may include a body. .. note:: Unlike methods in `Node Vendor Passthru`_, a request does not lock any resource, so it will not delay other requests and will not fail with an HTTP 409 (Conflict) error code. Ironic makes no guarantees about the semantics of the message body sent to this endpoint. That is left up to each driver's author. To get information about all the methods available via the driver vendor_passthru endpoint, you can issue an HTTP GET request:: GET /v1/drivers/<driver_name>/vendor_passthru/methods The response's JSON body will contain information for each method, such as the method's name, a description, the HTTP methods supported, and whether it's asynchronous or synchronous. ironic-5.1.0/doc/source/dev/states.rst0000664000567000056710000000201512674513466021035 0ustar jenkinsjenkins00000000000000.. _states: ====================== Ironic's State Machine ====================== State Machine Diagram ===================== The diagram below shows the provisioning states that an Ironic node goes through during its lifetime. The diagram also depicts the events that transition the node to different states. Stable states are highlighted with a thicker border. All transitions from stable states are initiated by API requests. There are a few other API-initiated transitions that are possible from non-stable states. The events for these API-initiated transitions are indicated with '(via API)'. Internally, the conductor initiates the other transitions (depicted in gray). .. figure:: ../images/states.svg :width: 660px :align: left :alt: Ironic state transitions .. note:: For more information about the states, see the specification located at `ironic-state-machine`_. .. _ironic-state-machine: http://specs.openstack.org/openstack/ironic-specs/specs/kilo-implemented/new-ironic-state-machine.html ironic-5.1.0/doc/source/dev/vendor-passthru.rst0000664000567000056710000001214312674513466022701 0ustar jenkinsjenkins00000000000000..
_vendor-passthru: ============== Vendor Methods ============== This document is a quick tutorial on writing vendor-specific methods for a driver. The first thing to note is that the Ironic API supports two vendor endpoints: a driver vendor passthru and a node vendor passthru. * The driver vendor passthru allows drivers to expose custom top-level functionality which is not specific to a Node. For example, let's say the driver `pxe_ipmitool` exposed a method called `authentication_types` that would return the authentication types it supports. It could be accessed via the Ironic API like: :: GET http://<address>:<port>
/v1/drivers/pxe_ipmitool/vendor_passthru/authentication_types * The node vendor passthru allows drivers to expose custom functionality on a per-node basis. For example, the same driver `pxe_ipmitool` could expose a method called `send_raw` that sends raw bytes to the BMC; the method also receives a parameter called `raw_bytes`, whose value is the bytes to be sent. It could be accessed via the Ironic API like: :: POST {'raw_bytes': '0x01 0x02'} http://<address>:<port>
/v1/nodes/<node_ident>/vendor_passthru/send_raw Writing Vendor Methods ====================== Writing a custom vendor method in Ironic should be simple. The first thing to do is write a class inheriting from the `VendorInterface`_ class: .. code-block:: python class ExampleVendor(VendorInterface): def get_properties(self): return {} def validate(self, task, **kwargs): pass `get_properties` is a method that all driver interfaces have; it should return a dictionary of <property>:<description> pairs, with each description telling whether that property is required or optional, so the node can be manageable by that driver. For example, a required property for an `ipmi` driver would be `ipmi_address`, which is the IP address or hostname of the node. We are returning an empty dictionary in our example to make it simpler. The `validate` method is responsible for validating the parameters passed to the vendor methods. Ironic will not introspect into what is passed to the drivers; it's up to the developers writing the vendor method to validate that data. Let's extend the `ExampleVendor` class to support two methods: `authentication_types`, which will be exposed on the driver vendor passthru endpoint, and `send_raw`, which will be exposed on the node vendor passthru endpoint: .. code-block:: python class ExampleVendor(VendorInterface): def get_properties(self): return {} def validate(self, task, method, **kwargs): if method == 'send_raw': if 'raw_bytes' not in kwargs: raise MissingParameterValue() @base.driver_passthru(['GET'], async=False) def authentication_types(self, context, **kwargs): return {"types": ["NONE", "MD5", "MD2"]} @base.passthru(['POST']) def send_raw(self, task, **kwargs): raw_bytes = kwargs.get('raw_bytes') ... That's it! Writing a node or driver vendor passthru method is pretty much the same; the only differences are how you decorate the method and its first parameter (ignoring self). A method decorated with the `@passthru` decorator should expect a Task object as its first parameter, and a method decorated with the `@driver_passthru` decorator should expect a Context object as its first parameter. Both decorators accept the same parameters: * http_methods: A list of the HTTP methods supported by that vendor function. To know which HTTP method the function was invoked with, look at the `http_method` parameter that will be present in the `kwargs`. Supported HTTP methods are *POST*, *PUT*, *GET* and *PATCH*. * method: By default the exposed method name is the name of the python function; if you want to use a different name, this parameter is where that name can be set. For example: .. code-block:: python @passthru(['PUT'], method="alternative_name") def name(self, task, **kwargs): ... * description: A string containing a nice description about what that method is supposed to do. Defaults to "" (empty string). * async: A boolean value to determine whether this method should run asynchronously or synchronously. Defaults to True (asynchronously). .. _VendorInterface: ../api/ironic.drivers.base.html#ironic.drivers.base.VendorInterface .. WARNING:: Please avoid having a synchronous method for slow/long-running operations **or** if the method does talk to a BMC; BMCs are flaky and very easy to break. .. WARNING:: Each asynchronous request consumes a worker thread in the ``ironic-conductor`` process. This can lead to starvation of the thread pool, resulting in a denial of service. Backwards Compatibility ======================= There is no requirement that changes to a vendor method be backwards compatible.
However, for your users' sakes, we highly recommend that you do so. If you are changing the exceptions being raised, you might want to ensure that the same HTTP code is being returned to the user. If you do make a backwards-incompatible change, please make sure you add a release note that indicates this. ironic-5.1.0/doc/source/dev/contributing.rst0000664000567000056710000000251612674513466022247 0ustar jenkinsjenkins00000000000000.. _contributing: ====================== Contributing to Ironic ====================== If you're interested in contributing to the Ironic project, the following will help get you started. Contributor License Agreement ----------------------------- .. index:: single: license; agreement In order to contribute to the Ironic project, you need to have signed OpenStack's contributor's agreement. .. seealso:: * http://docs.openstack.org/infra/manual/developers.html * http://wiki.openstack.org/CLA LaunchPad Project ----------------- Most of the tools used for OpenStack depend on a launchpad.net ID for authentication. .. seealso:: * https://launchpad.net * https://launchpad.net/ironic Related Projects ----------------- * https://launchpad.net/ironic-inspector * https://launchpad.net/python-ironicclient * https://launchpad.net/python-ironic-inspector-client * https://launchpad.net/bifrost Project Hosting Details ------------------------- Bug tracker http://launchpad.net/ironic Mailing list (prefix subjects with ``[ironic]`` for faster responses) http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Wiki http://wiki.openstack.org/Ironic Code Hosting https://github.com/openstack/ironic Code Review https://review.openstack.org/#/q/status:open+project:openstack/ironic,n,z ironic-5.1.0/doc/source/dev/code-contribution-guide.rst0000664000567000056710000002070212674513466024257 0ustar jenkinsjenkins00000000000000.. _code-contribution-guide: ======================= Code Contribution Guide ======================= This document provides some necessary points for developers to consider when writing and reviewing Ironic code. The checklist will help developers get things right. Adding new features =================== Starting with the Mitaka development cycle, Ironic tracks new features using RFEs (Requests for Feature Enhancements) instead of blueprints. These are bugs with the 'rfe' tag, and they should be submitted before a spec or code is proposed. When a member of the `ironic-drivers launchpad team `_ decides that the proposal is worth implementing, a spec (if needed) and code should be submitted, referencing the RFE bug. Contributors are welcome to submit a spec and/or code before the RFE is approved; however, those patches will not land until the RFE is approved. Here is a list of steps to follow when adding a new feature to Ironic: #. Submit a bug report at https://bugs.launchpad.net/ironic/+filebug. There are two fields that must be filled: 'summary' and 'further information'. The 'summary' must be brief enough to fit in one line: if you can’t describe it in a few words it may mean that you are either trying to capture more than one RFE at once, or that you are having a hard time defining what you are trying to solve at all. #. Describe the proposed change in the 'further information' field.
The description should provide enough details for a knowledgeable developer to understand what existing problem in the current platform needs to be addressed, or what enhancement would make the platform more capable, both from a functional and a non-functional standpoint. #. Submit the bug, add an 'rfe' tag to it and assign yourself or whoever is going to work on this feature. #. As soon as a member of the ironic-drivers team acknowledges the bug, it will be moved into the 'Triaged' state. The importance will be set to 'Wishlist' to signal the fact that the report is indeed a feature and there is no severity associated with it. Discussion about the RFE, and whether to approve it, happens in bug comments while in the 'Triaged' state. #. The ironic-drivers team will evaluate the RFE and may advise the submitter to file a spec in ironic-specs to elaborate on the feature request, in case the RFE requires extra scrutiny, more design discussion, etc. For the spec submission process, please see the `specs process `_ wiki page. #. If a spec is not required, once the discussion has happened and there is positive consensus among the ironic-drivers team on the RFE, the RFE is 'approved', and its tag will move from 'rfe' to 'rfe-approved'. This means that the feature is approved and the related code may be merged. #. If a spec is required, the spec must be submitted (with the bug properly referenced as 'Partial-Bug' in the commit message), reviewed, and merged before the RFE will be 'approved' (and the tag changed to 'rfe-approved'). #. The bug then goes through the usual process -- first to 'In progress' when the spec/code is being worked on, then 'Fix Released' when it is implemented. #. If the RFE is rejected, the ironic-drivers team will move the bug to "Won't Fix" status. When working on an RFE, please be sure to tag your commits properly: "Partial-Bug: #xxxx" or "Related-Bug: #xxxx" for intermediate commits for the feature, and "Closes-Bug: #xxxx" for the final commit. It is also helpful to set a consistent review topic, such as "bug/xxxx", for all patches related to the RFE. If the RFE spans several projects (e.g. ironic and python-ironicclient), but the main work is going to happen within ironic, please use the same bug for all the code you're submitting; there is no need to create a separate RFE in every project. Note that currently the Ironic bug tracker is managed by the open 'ironic-bugs' team, not the ironic-drivers team. This means that anyone may edit bug details, and there is room to game the system here. **RFEs may only be approved by members of the ironic-drivers team**. Attempts to sneak around this rule will not be tolerated, and will be called out in public on the mailing list. Live Upgrade Related Concerns ============================= Ironic implements upgrade with the same methodology as Nova: http://docs.openstack.org/developer/nova/upgrade.html Ironic API RPC Versions ----------------------- * When the signature (arguments) of an RPC method is changed, the following things need to be considered: - The RPC version must be incremented and be the same value for both the client (conductor/rpcapi.py, used by ironic-api) and the server (conductor/manager.py, used by ironic-conductor). - New arguments of the method can only be added as optional. Existing arguments cannot be removed or changed in incompatible ways (with the method in older RPC versions).
- Client-side code can pin a version cap by passing ``version_cap`` to the constructor of oslo_messaging.RPCClient. Methods which change arguments should run client.can_send_version() to see if the version of the request is compatible with the version cap of the RPC client; otherwise the request needs to be created to work with a previous version that is supported. - Server-side code should tolerate older versions of requests in order to keep working during the progress of a live upgrade. The behavior of the server side should depend on the input parameters passed from the client side. Object Versions --------------- * When Object classes (subclasses of ironic.objects.base.IronicObject) are modified, the following things need to be considered: - A change to the fields or to the signature of a remotable method needs a bump of the object version. - The arguments of methods can only be added as optional; they cannot be removed or changed in an incompatible way. - Field types cannot be changed. If it is a must, create a new field and deprecate the old one. - When new version objects communicate with old version objects, obj_make_compatible() will be called to convert objects to the target version during serialization. So objects should implement their own obj_make_compatible() to remove/alter attributes which were added/changed after the target version. - There is a test (object/test_objects.py) to generate the hash of object fields and the signatures of remotable methods, which helps developers to check if the change of objects needs a version bump. The object fingerprint should only be updated with a version bump. Driver Internal Info ==================== The ``driver_internal_info`` node field was introduced in the Kilo release. It allows driver developers to store internal information that can not be modified by end users. Here is the list of existing common and agent driver attributes: Common attributes: * ``is_whole_disk_image``: A Boolean value to indicate whether the user image contains ramdisk/kernel. * ``clean_steps``: An ordered list of clean steps that will be performed on the node. * ``instance``: A list of dictionaries containing the disk layout values. * ``root_uuid_or_disk_id``: A String value of the bare metal node's root partition uuid or disk id. * ``persistent_boot_device``: A String value of device from ``ironic.common.boot_devices``. * ``is_next_boot_persistent``: A Boolean value to indicate whether the next boot device is ``persistent_boot_device``. Agent driver attributes: * ``agent_url``: A String value of the IPA API URL so that Ironic can talk to the IPA ramdisk. * ``agent_last_heartbeat``: An Integer value of the last agent heartbeat time. * ``hardware_manager_version``: A String value of the version of the hardware manager in the IPA ramdisk. * ``target_raid_config``: A Dictionary containing the target RAID configuration. This is a copy of the attribute of the same name in the Node object. But this one is never actually saved into the DB and is only read by the IPA ramdisk. .. note:: These are only some of the fields in use. Other vendor drivers might expose more ``driver_internal_info`` properties; please check their development documentation and/or module docstring for details. It is important for developers to make sure these properties follow the precedent of prefixing their variable names with a specific interface name (e.g., iboot_bar, amt_xyz), so as to minimize or avoid any conflicts between interfaces.
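As a sketch of that prefixing convention, and of how ``driver_internal_info`` is typically updated from a driver (the helper name and key below are illustrative, not an existing Ironic API):

.. code-block:: python

    def _store_foo_session_id(task, session_id):
        # 'task' is the usual TaskManager task passed into driver methods.
        node = task.node
        info = node.driver_internal_info
        # Prefix the key with the interface/driver name to avoid clashes
        # with other interfaces.
        info['foo_session_id'] = session_id
        # Reassign the whole dict so the object layer notices the change,
        # then persist it.
        node.driver_internal_info = info
        node.save()
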
ironic-5.1.0/doc/source/releasenotes/0000775000567000056710000000000012674513633020711 5ustar jenkinsjenkins00000000000000ironic-5.1.0/doc/source/releasenotes/index.rst0000664000567000056710000000217412674513466022560 0ustar jenkinsjenkins00000000000000============= Release Notes ============= The official location for release notes is: http://docs.openstack.org/releasenotes/ironic. This page is outdated, but is retained to prevent links to it from breaking. 4.2.1 ===== Release notes: http://docs.openstack.org/releasenotes/ironic/liberty.html#V4-2-1 4.2.0 ===== Release notes: http://docs.openstack.org/releasenotes/ironic/liberty.html#V4-2-0 4.1.0 ===== Release notes: http://docs.openstack.org/releasenotes/ironic/liberty.html#V4-1-0 4.0.0 First semver release ============================ Release notes: http://docs.openstack.org/releasenotes/ironic/liberty.html#V4.0.0 2015.1.0 OpenStack "Kilo" Release ==================================== Release notes: https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#OpenStack_Bare_Metal_service_.28Ironic.29 2014.2.0 OpenStack "Juno" Release ==================================== Release notes: https://wiki.openstack.org/wiki/Ironic/ReleaseNotes/Juno 2014.1.0 OpenStack "Icehouse" Release ======================================== Release notes: https://wiki.openstack.org/wiki/Ironic/ReleaseNotes/Icehouse ironic-5.1.0/doc/source/images_src/0000775000567000056710000000000012674513633020334 5ustar jenkinsjenkins00000000000000ironic-5.1.0/doc/source/images_src/deployment_steps.svg0000664000567000056710000046551212674513466024464 0ustar jenkinsjenkins00000000000000[SVG diagram source: the Ironic deployment workflow across the Nova API, Message Queue, Nova Conductor, Nova Scheduler, Nova Database, Nova Compute, Neutron, Glance, Ironic API, Ironic Conductor, Ironic Database, the IPMI and PXE drivers, and the Bare Metal Nodes: 1. Nova boot; 2. Apply filters & find available compute host node; 3. Compute Manager calls driver.spawn(); 4. Get info and claim bare metal node; 5. Fetch images; 6. Plug VIFs; 7. Deploy bare metal node; 8. Deploy (active boot loader); 9. Power on bare metal node; 10. Write image; 11. Reboot; 12. Update status of bare metal node.] ironic-5.1.0/doc/source/webapi/0000775000567000056710000000000012674513633017467 5ustar jenkinsjenkins00000000000000ironic-5.1.0/doc/source/webapi/v1.rst0000664000567000056710000001675712674513466020543 0ustar jenkinsjenkins00000000000000===================== RESTful Web API (v1) ===================== API Versioning ============== Starting with the Kilo release, ironic supports versioning of the API. A version is defined as a string of 2 integers separated by a dot: **X.Y**. Here ``X`` is a major version, always equal to ``1`` at the moment of writing, and ``Y`` is a minor version. The server minor version is increased every time the API behavior is changed (note `Exceptions from Versioning`_). `Nova versioning documentation`_ has a nice guide on when to bump an API version. The server indicates its minimum and maximum supported API versions in the ``X-OpenStack-Ironic-API-Minimum-Version`` and ``X-OpenStack-Ironic-API-Maximum-Version`` headers respectively, returned with every response. A client may request a specific API version by providing the ``X-OpenStack-Ironic-API-Version`` header with the request. If no version is requested by the client, the minimum supported version, **1.1**, is assumed. The client is only exposed to those API features that are supported in the requested (explicitly or implicitly) API version (again note `Exceptions from Versioning`_, they are not covered by this rule).
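For illustration, a minimal sketch of version negotiation from Python using the ``requests`` library (this assumes a locally running, noauth-configured ironic-api; the version value is just an example):

.. code-block:: python

    import requests

    headers = {'X-OpenStack-Ironic-API-Version': '1.16'}
    resp = requests.get('http://localhost:6385/v1/nodes', headers=headers)

    # Every response carries the server's supported version range.
    print(resp.headers['X-OpenStack-Ironic-API-Minimum-Version'])
    print(resp.headers['X-OpenStack-Ironic-API-Maximum-Version'])
    print(resp.status_code)
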
We recommend that clients requiring a stable API always request a specific version of the API. However, a special value ``latest`` can be requested instead, which always requests the newest supported API version. .. _Nova versioning documentation: http://docs.openstack.org/developer/nova/api_microversion_dev.html#when-do-i-need-a-new-microversion API Versions History -------------------- **1.16** Add ability to filter nodes by driver. **1.15** Add ability to do manual cleaning when a node is in the manageable provision state via PUT v1/nodes/<node_ident>/states/provision, target:clean, clean_steps:[...]. **1.14** Make the following endpoints discoverable via Ironic API: * '/v1/nodes/<node_ident>/states' * '/v1/drivers/<driver_name>/properties' **1.13** Add a new verb ``abort`` to the API used to abort nodes in ``CLEANWAIT`` state. **1.12** This API version adds the following abilities: * Get/set ``node.target_raid_config`` and get ``node.raid_config``. * Retrieve the logical disk properties for the driver. **1.11** (breaking change) Newly registered nodes begin in the ``enroll`` provision state by default, instead of ``available``. To get them to the ``available`` state, the ``manage`` action must first be run to verify basic hardware control. On success the node moves to the ``manageable`` provision state. Then the ``provide`` action must be run. Automated cleaning of the node is done and the node is made ``available``. **1.10** Logical node names support all RFC 3986 unreserved characters. Previously only valid fully qualified domain names could be used. **1.9** Add ability to filter nodes by provision state. **1.8** Add ability to return a subset of resource fields. **1.7** Add node ``clean_step`` field. **1.6** Add :ref:`inspection` process: introduce ``inspecting`` and ``inspectfail`` provision states, and an ``inspect`` action that can be used when a node is in the ``manageable`` provision state. **1.5** Add logical node names that can be used to address a node in addition to the node UUID. The name is expected to be a valid `fully qualified domain name`_ in this version of the API. **1.4** Add ``manageable`` state and ``manage`` transition, which can be used to move a node to ``manageable`` state from ``available``. The node cannot be deployed in ``manageable`` state. This change is mostly a preparation for future inspection work and the introduction of the ``enroll`` provision state. **1.3** Add node ``driver_internal_info`` field. **1.2** (breaking change) Renamed NOSTATE (``None`` in Python, ``null`` in JSON) node state to ``available``. This is needed to reduce confusion around the ``None`` state, especially when future additions to the state machine land. **1.1** This was the initial version when API versioning was introduced. Includes the following changes from the Kilo release cycle: * Add node ``maintenance_reason`` field and an API endpoint to set/unset the node maintenance mode. * Add sync and async support for vendor passthru methods. * Vendor passthru endpoints support different HTTP methods, not only ``POST``. * Make vendor methods discoverable via the Ironic API. * Add logic to store the config drive passed by Nova. This has been the minimum supported version since versioning was introduced. **1.0** This version denotes the Juno API and was never explicitly supported, as API versioning was not implemented in Juno, and **1.1** became the minimum supported version in Kilo. ..
_fully qualified domain name: https://en.wikipedia.org/wiki/Fully_qualified_domain_name Exceptions from Versioning -------------------------- The following API-visible things are not covered by the API versioning: * Current node state is always exposed as it is, even if not supported by the requested API version, with exception of ``available`` state, which is returned in version 1.1 as ``None`` (in Python) or ``null`` (in JSON). * Data within free-form JSON attributes: ``properties``, ``driver_info``, ``instance_info``, ``driver_internal_info`` fields on a node object; ``extra`` fields on all objects. * Addition of new drivers. * All vendor passthru methods. Chassis ======= .. rest-controller:: ironic.api.controllers.v1.chassis:ChassisController :webprefix: /v1/chassis .. autotype:: ironic.api.controllers.v1.chassis.ChassisCollection :members: .. autotype:: ironic.api.controllers.v1.chassis.Chassis :members: Drivers ======= .. rest-controller:: ironic.api.controllers.v1.driver:DriversController :webprefix: /v1/drivers .. rest-controller:: ironic.api.controllers.v1.driver:DriverRaidController :webprefix: /v1/drivers/(driver_name)/raid .. rest-controller:: ironic.api.controllers.v1.driver:DriverPassthruController :webprefix: /v1/drivers/(driver_name)/vendor_passthru .. autotype:: ironic.api.controllers.v1.driver.DriverList :members: .. autotype:: ironic.api.controllers.v1.driver.Driver :members: Links ===== .. autotype:: ironic.api.controllers.link.Link :members: Nodes ===== .. rest-controller:: ironic.api.controllers.v1.node:NodesController :webprefix: /v1/nodes .. rest-controller:: ironic.api.controllers.v1.node:NodeMaintenanceController :webprefix: /v1/nodes/(node_ident)/maintenance .. rest-controller:: ironic.api.controllers.v1.node:BootDeviceController :webprefix: /v1/nodes/(node_ident)/management/boot_device .. rest-controller:: ironic.api.controllers.v1.node:NodeStatesController :webprefix: /v1/nodes/(node_ident)/states .. rest-controller:: ironic.api.controllers.v1.node:NodeConsoleController :webprefix: /v1/nodes/(node_ident)/states/console .. rest-controller:: ironic.api.controllers.v1.node:NodeVendorPassthruController :webprefix: /v1/nodes/(node_ident)/vendor_passthru .. autotype:: ironic.api.controllers.v1.node.ConsoleInfo :members: .. autotype:: ironic.api.controllers.v1.node.Node :members: .. autotype:: ironic.api.controllers.v1.node.NodeCollection :members: .. autotype:: ironic.api.controllers.v1.node.NodeStates :members: Ports ===== .. rest-controller:: ironic.api.controllers.v1.port:PortsController :webprefix: /v1/ports .. autotype:: ironic.api.controllers.v1.port.PortCollection :members: .. autotype:: ironic.api.controllers.v1.port.Port :members: ironic-5.1.0/doc/source/conf.py0000664000567000056710000000554612674513466017535 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # # -- General configuration ---------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode', 'sphinxcontrib.httpdomain', 'sphinxcontrib.pecanwsme.rest', 'sphinxcontrib.seqdiag', 'wsmeext.sphinxext', 'oslosphinx', ] wsme_protocols = ['restjson'] # autodoc generation is a bit aggressive and a nuisance when doing heavy # text edit cycles. # execute "export SPHINX_DEBUG=1" in your terminal to disable # Add any paths that contain templates here, relative to this directory. 
templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Ironic' copyright = u'OpenStack Foundation' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. from ironic import version as ironic_version # The full version, including alpha/beta/rc tags. release = ironic_version.version_info.release_string() # The short X.Y version. version = ironic_version.version_info.version_string() # A list of ignored prefixes for module index sorting. modindex_common_prefix = ['ironic.'] # If true, '()' will be appended to :func: etc. cross-reference text. add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = True # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # NOTE(cinerama): mock out nova modules so docs can build without warnings import mock import sys MOCK_MODULES = ['nova', 'nova.compute', 'nova.context'] for module in MOCK_MODULES: sys.modules[module] = mock.Mock() # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. #html_theme_path = ["."] #html_theme = '_theme' #html_static_path = ['_static'] # Output file base name for HTML help builder. htmlhelp_basename = '%sdoc' % project # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass # [howto/manual]). latex_documents = [ ( 'index', '%s.tex' % project, u'%s Documentation' % project, u'OpenStack Foundation', 'manual' ), ] # -- Options for seqdiag ------------------------------------------------------ seqdiag_html_image_format = "SVG" ironic-5.1.0/ironic_tempest_plugin/0000775000567000056710000000000012674513633020555 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/common/0000775000567000056710000000000012674513633022045 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/common/waiters.py0000664000567000056710000000346712674513466024113 0ustar jenkinsjenkins00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import time from tempest.lib.common.utils import misc as misc_utils from tempest.lib import exceptions as lib_exc def wait_for_bm_node_status(client, node_id, attr, status): """Waits for a baremetal node attribute to reach given status. The client should have a show_node(node_uuid) method to get the node. 
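Example (illustrative usage; the argument values are placeholders)::

    wait_for_bm_node_status(client, node_id, 'provision_state', 'available')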
""" _, node = client.show_node(node_id) start = int(time.time()) while node[attr] != status: time.sleep(client.build_interval) _, node = client.show_node(node_id) status_curr = node[attr] if status_curr == status: return if int(time.time()) - start >= client.build_timeout: message = ('Node %(node_id)s failed to reach %(attr)s=%(status)s ' 'within the required time (%(timeout)s s).' % {'node_id': node_id, 'attr': attr, 'status': status, 'timeout': client.build_timeout}) message += ' Current state of %s: %s.' % (attr, status_curr) caller = misc_utils.find_test_caller() if caller: message = '(%s) %s' % (caller, message) raise lib_exc.TimeoutException(message) ironic-5.1.0/ironic_tempest_plugin/common/__init__.py0000664000567000056710000000000012674513466024150 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/README.rst0000664000567000056710000000133512674513466022252 0ustar jenkinsjenkins00000000000000===================== Ironic tempest plugin ===================== This directory contains Tempest tests to cover the Ironic project, as well as a plugin to automatically load these tests into tempest. See the tempest plugin docs for information on using it: http://docs.openstack.org/developer/tempest/plugin.html#using-plugins To run all tests from this plugin, install ironic into your environment and run:: $ tox -e all-plugin -- ironic To run a single test case, run with the test case name, for example:: $ tox -e all-plugin -- ironic_tempest_plugin.tests.scenario.test_baremetal_basic_ops.BaremetalBasicOps.test_baremetal_server_ops To run all tempest tests including this plugin, run:: $ tox -e all-plugin ironic-5.1.0/ironic_tempest_plugin/services/0000775000567000056710000000000012674513633022400 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/services/baremetal/0000775000567000056710000000000012674513633024334 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/services/baremetal/base.py0000664000567000056710000001553312674513466025633 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import functools from oslo_serialization import jsonutils as json import six from six.moves.urllib import parse as urllib from tempest.lib.common import rest_client def handle_errors(f): """A decorator that allows to ignore certain types of errors.""" @functools.wraps(f) def wrapper(*args, **kwargs): param_name = 'ignore_errors' ignored_errors = kwargs.get(param_name, tuple()) if param_name in kwargs: del kwargs[param_name] try: return f(*args, **kwargs) except ignored_errors: # Silently ignore errors pass return wrapper class BaremetalClient(rest_client.RestClient): """Base Tempest REST client for Ironic API.""" uri_prefix = '' def serialize(self, object_dict): """Serialize an Ironic object.""" return json.dumps(object_dict) def deserialize(self, object_str): """Deserialize an Ironic object.""" return json.loads(object_str) def _get_uri(self, resource_name, uuid=None, permanent=False): """Get URI for a specific resource or object. 
:param resource_name: The name of the REST resource, e.g., 'nodes'. :param uuid: The unique identifier of an object in UUID format. :returns: Relative URI for the resource or object. """ prefix = self.uri_prefix if not permanent else '' return '{pref}/{res}{uuid}'.format(pref=prefix, res=resource_name, uuid='/%s' % uuid if uuid else '') def _make_patch(self, allowed_attributes, **kwargs): """Create a JSON patch according to RFC 6902. :param allowed_attributes: An iterable object that contains a set of allowed attributes for an object. :param **kwargs: Attributes and new values for them. :returns: A JSON patch that sets values of the specified attributes to the new ones. """ def get_change(kwargs, path='/'): for name, value in six.iteritems(kwargs): if isinstance(value, dict): for ch in get_change(value, path + '%s/' % name): yield ch else: if value is None: yield {'path': path + name, 'op': 'remove'} else: yield {'path': path + name, 'value': value, 'op': 'replace'} patch = [ch for ch in get_change(kwargs) if ch['path'].lstrip('/') in allowed_attributes] return patch def _list_request(self, resource, permanent=False, **kwargs): """Get the list of objects of the specified type. :param resource: The name of the REST resource, e.g., 'nodes'. :param **kwargs: Parameters for the request. :returns: A tuple with the server response and the deserialized JSON list of objects. """ uri = self._get_uri(resource, permanent=permanent) if kwargs: uri += "?%s" % urllib.urlencode(kwargs) resp, body = self.get(uri) self.expected_success(200, resp['status']) return resp, self.deserialize(body) def _show_request(self, resource, uuid, permanent=False, **kwargs): """Gets a specific object of the specified type. :param uuid: Unique identifier of the object in UUID format. :returns: Serialized object as a dictionary. """ if 'uri' in kwargs: uri = kwargs['uri'] else: uri = self._get_uri(resource, uuid=uuid, permanent=permanent) resp, body = self.get(uri) self.expected_success(200, resp['status']) return resp, self.deserialize(body) def _create_request(self, resource, object_dict): """Create an object of the specified type. :param resource: The name of the REST resource, e.g., 'nodes'. :param object_dict: A Python dict that represents an object of the specified type. :returns: A tuple with the server response and the deserialized created object. """ body = self.serialize(object_dict) uri = self._get_uri(resource) resp, body = self.post(uri, body=body) self.expected_success(201, resp['status']) return resp, self.deserialize(body) def _delete_request(self, resource, uuid): """Delete specified object. :param resource: The name of the REST resource, e.g., 'nodes'. :param uuid: The unique identifier of an object in UUID format. :returns: A tuple with the server response and the response body. """ uri = self._get_uri(resource, uuid) resp, body = self.delete(uri) self.expected_success(204, resp['status']) return resp, body def _patch_request(self, resource, uuid, patch_object): """Update specified object with JSON-patch. :param resource: The name of the REST resource, e.g., 'nodes'. :param uuid: The unique identifier of an object in UUID format. :returns: A tuple with the server response and the serialized patched object.
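Example (illustrative usage; the patch follows RFC 6902)::

    resp, node = self._patch_request(
        'nodes', uuid,
        [{'op': 'replace', 'path': '/driver', 'value': 'fake'}])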
""" uri = self._get_uri(resource, uuid) patch_body = json.dumps(patch_object) resp, body = self.patch(uri, body=patch_body) self.expected_success(200, resp['status']) return resp, self.deserialize(body) @handle_errors def get_api_description(self): """Retrieves all versions of the Ironic API.""" return self._list_request('', permanent=True) @handle_errors def get_version_description(self, version='v1'): """Retrieves the desctription of the API. :param version: The version of the API. Default: 'v1'. :returns: Serialized description of API resources. """ return self._list_request(version, permanent=True) def _put_request(self, resource, put_object): """Update specified object with JSON-patch.""" uri = self._get_uri(resource) put_body = json.dumps(put_object) resp, body = self.put(uri, body=put_body) self.expected_success(202, resp['status']) return resp, body ironic-5.1.0/ironic_tempest_plugin/services/baremetal/v1/0000775000567000056710000000000012674513633024662 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/services/baremetal/v1/json/0000775000567000056710000000000012674513633025633 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/services/baremetal/v1/json/baremetal_client.py0000664000567000056710000002711612674513466031512 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from ironic_tempest_plugin.services.baremetal import base class BaremetalClient(base.BaremetalClient): """Base Tempest REST client for Ironic API v1.""" version = '1' uri_prefix = 'v1' @base.handle_errors def list_nodes(self, **kwargs): """List all existing nodes.""" return self._list_request('nodes', **kwargs) @base.handle_errors def list_chassis(self): """List all existing chassis.""" return self._list_request('chassis') @base.handle_errors def list_chassis_nodes(self, chassis_uuid): """List all nodes associated with a chassis.""" return self._list_request('/chassis/%s/nodes' % chassis_uuid) @base.handle_errors def list_ports(self, **kwargs): """List all existing ports.""" return self._list_request('ports', **kwargs) @base.handle_errors def list_node_ports(self, uuid): """List all ports associated with the node.""" return self._list_request('/nodes/%s/ports' % uuid) @base.handle_errors def list_nodestates(self, uuid): """List all existing states.""" return self._list_request('/nodes/%s/states' % uuid) @base.handle_errors def list_ports_detail(self, **kwargs): """Details list all existing ports.""" return self._list_request('/ports/detail', **kwargs) @base.handle_errors def list_drivers(self): """List all existing drivers.""" return self._list_request('drivers') @base.handle_errors def show_node(self, uuid): """Gets a specific node. :param uuid: Unique identifier of the node in UUID format. :return: Serialized node as a dictionary. """ return self._show_request('nodes', uuid) @base.handle_errors def show_node_by_instance_uuid(self, instance_uuid): """Gets a node associated with given instance uuid. :param uuid: Unique identifier of the node in UUID format. 
:return: Serialized node as a dictionary. """ uri = '/nodes/detail?instance_uuid=%s' % instance_uuid return self._show_request('nodes', uuid=None, uri=uri) @base.handle_errors def show_chassis(self, uuid): """Gets a specific chassis. :param uuid: Unique identifier of the chassis in UUID format. :return: Serialized chassis as a dictionary. """ return self._show_request('chassis', uuid) @base.handle_errors def show_port(self, uuid): """Gets a specific port. :param uuid: Unique identifier of the port in UUID format. :return: Serialized port as a dictionary. """ return self._show_request('ports', uuid) @base.handle_errors def show_port_by_address(self, address): """Gets a specific port by address. :param address: MAC address of the port. :return: Serialized port as a dictionary. """ uri = '/ports/detail?address=%s' % address return self._show_request('ports', uuid=None, uri=uri) def show_driver(self, driver_name): """Gets a specific driver. :param driver_name: Name of driver. :return: Serialized driver as a dictionary. """ return self._show_request('drivers', driver_name) @base.handle_errors def create_node(self, chassis_id=None, **kwargs): """Create a baremetal node with the specified parameters. :param cpu_arch: CPU architecture of the node. Default: x86_64. :param cpus: Number of CPUs. Default: 8. :param local_gb: Disk size. Default: 1024. :param memory_mb: Available RAM. Default: 4096. :param driver: Driver name. Default: "fake" :return: A tuple with the server response and the created node. """ node = {'chassis_uuid': chassis_id, 'properties': {'cpu_arch': kwargs.get('cpu_arch', 'x86_64'), 'cpus': kwargs.get('cpus', 8), 'local_gb': kwargs.get('local_gb', 1024), 'memory_mb': kwargs.get('memory_mb', 4096)}, 'driver': kwargs.get('driver', 'fake')} return self._create_request('nodes', node) @base.handle_errors def create_chassis(self, **kwargs): """Create a chassis with the specified parameters. :param description: The description of the chassis. Default: test-chassis :return: A tuple with the server response and the created chassis. """ chassis = {'description': kwargs.get('description', 'test-chassis')} return self._create_request('chassis', chassis) @base.handle_errors def create_port(self, node_id, **kwargs): """Create a port with the specified parameters. :param node_id: The ID of the node which owns the port. :param address: MAC address of the port. :param extra: Meta data of the port. Default: {'foo': 'bar'}. :param uuid: UUID of the port. :return: A tuple with the server response and the created port. """ port = {'extra': kwargs.get('extra', {'foo': 'bar'}), 'uuid': kwargs['uuid']} if node_id is not None: port['node_uuid'] = node_id if kwargs['address'] is not None: port['address'] = kwargs['address'] return self._create_request('ports', port) @base.handle_errors def delete_node(self, uuid): """Deletes a node having the specified UUID. :param uuid: The unique identifier of the node. :return: A tuple with the server response and the response body. """ return self._delete_request('nodes', uuid) @base.handle_errors def delete_chassis(self, uuid): """Deletes a chassis having the specified UUID. :param uuid: The unique identifier of the chassis. :return: A tuple with the server response and the response body. """ return self._delete_request('chassis', uuid) @base.handle_errors def delete_port(self, uuid): """Deletes a port having the specified UUID. :param uuid: The unique identifier of the port. :return: A tuple with the server response and the response body. 
""" return self._delete_request('ports', uuid) @base.handle_errors def update_node(self, uuid, **kwargs): """Update the specified node. :param uuid: The unique identifier of the node. :return: A tuple with the server response and the updated node. """ node_attributes = ('properties/cpu_arch', 'properties/cpus', 'properties/local_gb', 'properties/memory_mb', 'driver', 'instance_uuid') patch = self._make_patch(node_attributes, **kwargs) return self._patch_request('nodes', uuid, patch) @base.handle_errors def update_chassis(self, uuid, **kwargs): """Update the specified chassis. :param uuid: The unique identifier of the chassis. :return: A tuple with the server response and the updated chassis. """ chassis_attributes = ('description',) patch = self._make_patch(chassis_attributes, **kwargs) return self._patch_request('chassis', uuid, patch) @base.handle_errors def update_port(self, uuid, patch): """Update the specified port. :param uuid: The unique identifier of the port. :param patch: List of dicts representing json patches. :return: A tuple with the server response and the updated port. """ return self._patch_request('ports', uuid, patch) @base.handle_errors def set_node_power_state(self, node_uuid, state): """Set power state of the specified node. :param node_uuid: The unique identifier of the node. :state: desired state to set (on/off/reboot). """ target = {'target': state} return self._put_request('nodes/%s/states/power' % node_uuid, target) @base.handle_errors def validate_driver_interface(self, node_uuid): """Get all driver interfaces of a specific node. :param uuid: Unique identifier of the node in UUID format. """ uri = '{pref}/{res}/{uuid}/{postf}'.format(pref=self.uri_prefix, res='nodes', uuid=node_uuid, postf='validate') return self._show_request('nodes', node_uuid, uri=uri) @base.handle_errors def set_node_boot_device(self, node_uuid, boot_device, persistent=False): """Set the boot device of the specified node. :param node_uuid: The unique identifier of the node. :param boot_device: The boot device name. :param persistent: Boolean value. True if the boot device will persist to all future boots, False if not. Default: False. """ request = {'boot_device': boot_device, 'persistent': persistent} resp, body = self._put_request('nodes/%s/management/boot_device' % node_uuid, request) self.expected_success(204, resp.status) return body @base.handle_errors def get_node_boot_device(self, node_uuid): """Get the current boot device of the specified node. :param node_uuid: The unique identifier of the node. """ path = 'nodes/%s/management/boot_device' % node_uuid resp, body = self._list_request(path) self.expected_success(200, resp.status) return body @base.handle_errors def get_node_supported_boot_devices(self, node_uuid): """Get the supported boot devices of the specified node. :param node_uuid: The unique identifier of the node. """ path = 'nodes/%s/management/boot_device/supported' % node_uuid resp, body = self._list_request(path) self.expected_success(200, resp.status) return body @base.handle_errors def get_console(self, node_uuid): """Get connection information about the console. :param node_uuid: Unique identifier of the node in UUID format. """ resp, body = self._show_request('nodes/states/console', node_uuid) self.expected_success(200, resp.status) return resp, body @base.handle_errors def set_console_mode(self, node_uuid, enabled): """Start and stop the node console. :param node_uuid: Unique identifier of the node in UUID format. 
:param enabled: Boolean value; whether to enable or disable the console. """ enabled = {'enabled': enabled} resp, body = self._put_request('nodes/%s/states/console' % node_uuid, enabled) self.expected_success(202, resp.status) return resp, body ironic-5.1.0/ironic_tempest_plugin/services/baremetal/v1/json/__init__.py0000664000567000056710000000000012674513466027736 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/services/baremetal/v1/__init__.py0000664000567000056710000000000012674513466026765 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/services/baremetal/__init__.py0000664000567000056710000000000012674513466026437 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/services/__init__.py0000664000567000056710000000000012674513466024503 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/config.py0000664000567000056710000000621412674513466022403 0ustar jenkinsjenkins00000000000000# Copyright 2015 NEC Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from tempest import config # noqa baremetal_group = cfg.OptGroup(name='baremetal', title='Baremetal provisioning service options', help='When enabling baremetal tests, Nova ' 'must be configured to use the Ironic ' 'driver. The following parameters for the ' '[compute] section must be disabled: ' 'console_output, interface_attach, ' 'live_migration, pause, rescue, resize, ' 'shelve, snapshot, and suspend') # NOTE(maurosr): Until liberty-eol we need to keep config options and tests # on tempest's tree to test stable branches and thus we have to comment the # options below to avoid duplication. Only new options should live here. BaremetalGroup = [ # cfg.StrOpt('catalog_type', # default='baremetal', # help="Catalog type of the baremetal provisioning service"), # cfg.BoolOpt('driver_enabled', # default=True, # help="Whether the Ironic nova-compute driver is enabled"), # cfg.StrOpt('driver', # default='fake', # help="Driver name which Ironic uses"), # cfg.StrOpt('endpoint_type', # default='publicURL', # choices=['public', 'admin', 'internal', # 'publicURL', 'adminURL', 'internalURL'], # help="The endpoint type to use for the baremetal provisioning" # " service"), cfg.IntOpt('deploywait_timeout', default=15, help="Timeout for Ironic node to reach the " "wait-callback state after powering on."), # cfg.IntOpt('active_timeout', # default=300, # help="Timeout for Ironic node to completely provision"), # cfg.IntOpt('association_timeout', # default=30, # help="Timeout for association of Nova instance and Ironic " # "node"), # cfg.IntOpt('power_timeout', # default=60, # help="Timeout for Ironic power transitions."), # cfg.IntOpt('unprovision_timeout', # default=300, # help="Timeout for unprovisioning an Ironic node.
" # "Takes longer since Kilo as Ironic performs an extra " # "step in Node cleaning.") ] ironic-5.1.0/ironic_tempest_plugin/__init__.py0000664000567000056710000000000012674513466022660 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/tests/0000775000567000056710000000000012674513633021717 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/tests/scenario/0000775000567000056710000000000012674513633023522 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/tests/scenario/baremetal_manager.py0000664000567000056710000001422312674513466027530 0ustar jenkinsjenkins00000000000000# Copyright 2012 OpenStack Foundation # Copyright 2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from tempest.common import waiters from tempest import config from tempest.lib import exceptions as lib_exc from tempest.scenario import manager # noqa import tempest.test from ironic_tempest_plugin import clients CONF = config.CONF # power/provision states as of icehouse class BaremetalPowerStates(object): """Possible power states of an Ironic node.""" POWER_ON = 'power on' POWER_OFF = 'power off' REBOOT = 'rebooting' SUSPEND = 'suspended' class BaremetalProvisionStates(object): """Possible provision states of an Ironic node.""" NOSTATE = None INIT = 'initializing' ACTIVE = 'active' BUILDING = 'building' DEPLOYWAIT = 'wait call-back' DEPLOYING = 'deploying' DEPLOYFAIL = 'deploy failed' DEPLOYDONE = 'deploy complete' DELETING = 'deleting' DELETED = 'deleted' ERROR = 'error' class BaremetalScenarioTest(manager.ScenarioTest): credentials = ['primary', 'admin'] @classmethod def skip_checks(cls): super(BaremetalScenarioTest, cls).skip_checks() if not CONF.baremetal.driver_enabled: msg = 'Ironic not available or Ironic compute driver not enabled' raise cls.skipException(msg) @classmethod def setup_clients(cls): super(BaremetalScenarioTest, cls).setup_clients() cls.baremetal_client = clients.Manager().baremetal_client @classmethod def resource_setup(cls): super(BaremetalScenarioTest, cls).resource_setup() # allow any issues obtaining the node list to raise early cls.baremetal_client.list_nodes() def _node_state_timeout(self, node_id, state_attr, target_states, timeout=10, interval=1): if not isinstance(target_states, list): target_states = [target_states] def check_state(): node = self.get_node(node_id=node_id) if node.get(state_attr) in target_states: return True return False if not tempest.test.call_until_true(check_state, timeout, interval): msg = ("Timed out waiting for node %s to reach %s state(s) %s" % (node_id, state_attr, target_states)) raise lib_exc.TimeoutException(msg) def wait_provisioning_state(self, node_id, state, timeout): self._node_state_timeout( node_id=node_id, state_attr='provision_state', target_states=state, timeout=timeout) def wait_power_state(self, node_id, state): self._node_state_timeout( node_id=node_id, state_attr='power_state', target_states=state, timeout=CONF.baremetal.power_timeout) def wait_node(self, instance_id): 
"""Waits for a node to be associated with instance_id.""" def _get_node(): node = None try: node = self.get_node(instance_id=instance_id) except lib_exc.NotFound: pass return node is not None if (not tempest.test.call_until_true( _get_node, CONF.baremetal.association_timeout, 1)): msg = ('Timed out waiting to get Ironic node by instance id %s' % instance_id) raise lib_exc.TimeoutException(msg) def get_node(self, node_id=None, instance_id=None): if node_id: _, body = self.baremetal_client.show_node(node_id) return body elif instance_id: _, body = self.baremetal_client.show_node_by_instance_uuid( instance_id) if body['nodes']: return body['nodes'][0] def get_ports(self, node_uuid): ports = [] _, body = self.baremetal_client.list_node_ports(node_uuid) for port in body['ports']: _, p = self.baremetal_client.show_port(port['uuid']) ports.append(p) return ports def add_keypair(self): self.keypair = self.create_keypair() def verify_connectivity(self, ip=None): if ip: dest = self.get_remote_client(ip) else: dest = self.get_remote_client(self.instance) dest.validate_authentication() def boot_instance(self): self.instance = self.create_server( key_name=self.keypair['name']) self.wait_node(self.instance['id']) self.node = self.get_node(instance_id=self.instance['id']) self.wait_power_state(self.node['uuid'], BaremetalPowerStates.POWER_ON) self.wait_provisioning_state( self.node['uuid'], [BaremetalProvisionStates.DEPLOYWAIT, BaremetalProvisionStates.ACTIVE], timeout=CONF.baremetal.deploywait_timeout) self.wait_provisioning_state(self.node['uuid'], BaremetalProvisionStates.ACTIVE, timeout=CONF.baremetal.active_timeout) waiters.wait_for_server_status(self.servers_client, self.instance['id'], 'ACTIVE') self.node = self.get_node(instance_id=self.instance['id']) self.instance = (self.servers_client.show_server(self.instance['id']) ['server']) def terminate_instance(self): self.servers_client.delete_server(self.instance['id']) self.wait_power_state(self.node['uuid'], BaremetalPowerStates.POWER_OFF) self.wait_provisioning_state( self.node['uuid'], BaremetalProvisionStates.NOSTATE, timeout=CONF.baremetal.unprovision_timeout) ironic-5.1.0/ironic_tempest_plugin/tests/scenario/__init__.py0000664000567000056710000000000012674513466025625 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/tests/scenario/test_baremetal_basic_ops.py0000664000567000056710000001223712674513466031122 0ustar jenkinsjenkins00000000000000# # Copyright 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from tempest.common import waiters from tempest import config from tempest import test from ironic_tempest_plugin.tests.scenario import baremetal_manager CONF = config.CONF LOG = logging.getLogger(__name__) class BaremetalBasicOps(baremetal_manager.BaremetalScenarioTest): """This smoke test tests the pxe_ssh Ironic driver. 
It follows this basic set of operations: * Creates a keypair * Boots an instance using the keypair * Monitors the associated Ironic node for power and expected state transitions * Validates Ironic node's port data has been properly updated * Verifies SSH connectivity using created keypair via fixed IP * Associates a floating ip * Verifies SSH connectivity using created keypair via floating IP * Verifies instance rebuild with ephemeral partition preservation * Deletes instance * Monitors the associated Ironic node for power and expected state transitions """ def rebuild_instance(self, preserve_ephemeral=False): self.rebuild_server(server_id=self.instance['id'], preserve_ephemeral=preserve_ephemeral, wait=False) node = self.get_node(instance_id=self.instance['id']) # We should remain on the same node self.assertEqual(self.node['uuid'], node['uuid']) self.node = node waiters.wait_for_server_status( self.servers_client, server_id=self.instance['id'], status='REBUILD', ready_wait=False) waiters.wait_for_server_status( self.servers_client, server_id=self.instance['id'], status='ACTIVE') def verify_partition(self, client, label, mount, gib_size): """Verify a labeled partition's mount point and size.""" LOG.info("Looking for partition %s mounted on %s" % (label, mount)) # Validate we have a device with the given partition label cmd = "/sbin/blkid | grep '%s' | cut -d':' -f1" % label device = client.exec_command(cmd).rstrip('\n') LOG.debug("Partition device is %s" % device) self.assertNotEqual('', device) # Validate the mount point for the device cmd = "mount | grep '%s' | cut -d' ' -f3" % device actual_mount = client.exec_command(cmd).rstrip('\n') LOG.debug("Partition mount point is %s" % actual_mount) self.assertEqual(actual_mount, mount) # Validate the partition size matches what we expect numbers = '0123456789' devnum = device.replace('/dev/', '') cmd = "cat /sys/block/%s/%s/size" % (devnum.rstrip(numbers), devnum) num_bytes = client.exec_command(cmd).rstrip('\n') num_bytes = int(num_bytes) * 512 actual_gib_size = num_bytes / (1024 * 1024 * 1024) LOG.debug("Partition size is %d GiB" % actual_gib_size) self.assertEqual(actual_gib_size, gib_size) def get_flavor_ephemeral_size(self): """Returns size of the ephemeral partition in GiB.""" f_id = self.instance['flavor']['id'] flavor = self.flavors_client.show_flavor(f_id)['flavor'] ephemeral = flavor.get('OS-FLV-EXT-DATA:ephemeral') if not ephemeral or ephemeral == 'N/A': return None return int(ephemeral) def validate_ports(self): for port in self.get_ports(self.node['uuid']): n_port_id = port['extra']['vif_port_id'] body = self.ports_client.show_port(n_port_id) n_port = body['port'] self.assertEqual(n_port['device_id'], self.instance['id']) self.assertEqual(n_port['mac_address'], port['address']) @test.idempotent_id('549173a5-38ec-42bb-b0e2-c8b9f4a08943') @test.services('baremetal', 'compute', 'image', 'network') def test_baremetal_server_ops(self): self.add_keypair() self.boot_instance() self.validate_ports() ip_address = self.get_server_ip(self.instance) self.get_remote_client(ip_address).validate_authentication() vm_client = self.get_remote_client(ip_address) # We expect the ephemeral partition to be mounted on /mnt and to have # the same size as our flavor definition. 
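        # A worked example of the partition check below (illustrative
        # numbers only): for a flavor with a 10 GiB ephemeral disk, blkid
        # might report a device such as /dev/sda2 (hypothetical name); its
        # /sys/block/sda/sda2/size entry counts 512-byte sectors, so
        # verify_partition() reads 10 * 1024**3 / 512 = 20971520 sectors,
        # i.e. 10737418240 bytes, and asserts a size of 10 GiB.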
eph_size = self.get_flavor_ephemeral_size() if eph_size: self.verify_partition(vm_client, 'ephemeral0', '/mnt', eph_size) # Create the test file self.create_timestamp( ip_address, private_key=self.keypair['private_key']) self.terminate_instance() ironic-5.1.0/ironic_tempest_plugin/tests/api/0000775000567000056710000000000012674513633022470 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/tests/api/admin/0000775000567000056710000000000012674513633023560 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/tests/api/admin/base.py0000664000567000056710000001511412674513466025052 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import functools from tempest import config from tempest.lib.common.utils import data_utils from tempest.lib import exceptions as lib_exc from tempest import test from ironic_tempest_plugin import clients CONF = config.CONF # NOTE(adam_g): The baremetal API tests exercise operations such as enroll # node, power on, power off, etc. Testing against real drivers (ie, IPMI) # will require passing driver-specific data to Tempest (addresses, # credentials, etc). Until then, only support testing against the fake driver, # which has no external dependencies. SUPPORTED_DRIVERS = ['fake'] # NOTE(jroll): resources must be deleted in a specific order, this list # defines the resource types to clean up, and the correct order. RESOURCE_TYPES = ['port', 'node', 'chassis'] def creates(resource): """Decorator that adds resources to the appropriate cleanup list.""" def decorator(f): @functools.wraps(f) def wrapper(cls, *args, **kwargs): resp, body = f(cls, *args, **kwargs) if 'uuid' in body: cls.created_objects[resource].add(body['uuid']) return resp, body return wrapper return decorator class BaseBaremetalTest(test.BaseTestCase): """Base class for Baremetal API tests.""" credentials = ['admin'] @classmethod def skip_checks(cls): super(BaseBaremetalTest, cls).skip_checks() if CONF.baremetal.driver not in SUPPORTED_DRIVERS: skip_msg = ('%s skipped as Ironic driver %s is not supported for ' 'testing.' 
% (cls.__name__, CONF.baremetal.driver)) raise cls.skipException(skip_msg) @classmethod def setup_clients(cls): super(BaseBaremetalTest, cls).setup_clients() cls.client = clients.Manager().baremetal_client @classmethod def resource_setup(cls): super(BaseBaremetalTest, cls).resource_setup() cls.driver = CONF.baremetal.driver cls.power_timeout = CONF.baremetal.power_timeout cls.created_objects = {} for resource in RESOURCE_TYPES: cls.created_objects[resource] = set() @classmethod def resource_cleanup(cls): """Ensure that all created objects get destroyed.""" try: for resource in RESOURCE_TYPES: uuids = cls.created_objects[resource] delete_method = getattr(cls.client, 'delete_%s' % resource) for u in uuids: delete_method(u, ignore_errors=lib_exc.NotFound) finally: super(BaseBaremetalTest, cls).resource_cleanup() @classmethod @creates('chassis') def create_chassis(cls, description=None, expect_errors=False): """Wrapper utility for creating test chassis. :param description: A description of the chassis. if not supplied, a random value will be generated. :return: Created chassis. """ description = description or data_utils.rand_name('test-chassis') resp, body = cls.client.create_chassis(description=description) return resp, body @classmethod @creates('node') def create_node(cls, chassis_id, cpu_arch='x86', cpus=8, local_gb=10, memory_mb=4096): """Wrapper utility for creating test baremetal nodes. :param cpu_arch: CPU architecture of the node. Default: x86. :param cpus: Number of CPUs. Default: 8. :param local_gb: Disk size. Default: 10. :param memory_mb: Available RAM. Default: 4096. :return: Created node. """ resp, body = cls.client.create_node(chassis_id, cpu_arch=cpu_arch, cpus=cpus, local_gb=local_gb, memory_mb=memory_mb, driver=cls.driver) return resp, body @classmethod @creates('port') def create_port(cls, node_id, address, extra=None, uuid=None): """Wrapper utility for creating test ports. :param address: MAC address of the port. :param extra: Meta data of the port. If not supplied, an empty dictionary will be created. :param uuid: UUID of the port. :return: Created port. """ extra = extra or {} resp, body = cls.client.create_port(address=address, node_id=node_id, extra=extra, uuid=uuid) return resp, body @classmethod def delete_chassis(cls, chassis_id): """Deletes a chassis having the specified UUID. :param uuid: The unique identifier of the chassis. :return: Server response. """ resp, body = cls.client.delete_chassis(chassis_id) if chassis_id in cls.created_objects['chassis']: cls.created_objects['chassis'].remove(chassis_id) return resp @classmethod def delete_node(cls, node_id): """Deletes a node having the specified UUID. :param uuid: The unique identifier of the node. :return: Server response. """ resp, body = cls.client.delete_node(node_id) if node_id in cls.created_objects['node']: cls.created_objects['node'].remove(node_id) return resp @classmethod def delete_port(cls, port_id): """Deletes a port having the specified UUID. :param uuid: The unique identifier of the port. :return: Server response. 
""" resp, body = cls.client.delete_port(port_id) if port_id in cls.created_objects['port']: cls.created_objects['port'].remove(port_id) return resp def validate_self_link(self, resource, uuid, link): """Check whether the given self link formatted correctly.""" expected_link = "{base}/{pref}/{res}/{uuid}".format( base=self.client.base_url, pref=self.client.uri_prefix, res=resource, uuid=uuid) self.assertEqual(expected_link, link) ironic-5.1.0/ironic_tempest_plugin/tests/api/admin/test_drivers.py0000664000567000056710000000264212674513466026657 0ustar jenkinsjenkins00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from tempest import config from tempest import test from ironic_tempest_plugin.tests.api.admin import base CONF = config.CONF class TestDrivers(base.BaseBaremetalTest): """Tests for drivers.""" @classmethod def resource_setup(cls): super(TestDrivers, cls).resource_setup() cls.driver_name = CONF.baremetal.driver @test.idempotent_id('5aed2790-7592-4655-9b16-99abcc2e6ec5') def test_list_drivers(self): _, drivers = self.client.list_drivers() self.assertIn(self.driver_name, [d['name'] for d in drivers['drivers']]) @test.idempotent_id('fb3287a3-c4d7-44bf-ae9d-1eef906d78ce') def test_show_driver(self): _, driver = self.client.show_driver(self.driver_name) self.assertEqual(self.driver_name, driver['name']) ironic-5.1.0/ironic_tempest_plugin/tests/api/admin/test_ports.py0000664000567000056710000002407212674513466026351 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import six from tempest.lib.common.utils import data_utils from tempest.lib import exceptions as lib_exc from tempest import test from ironic_tempest_plugin.tests.api.admin import base class TestPorts(base.BaseBaremetalTest): """Tests for ports.""" def setUp(self): super(TestPorts, self).setUp() _, self.chassis = self.create_chassis() _, self.node = self.create_node(self.chassis['uuid']) _, self.port = self.create_port(self.node['uuid'], data_utils.rand_mac_address()) def _assertExpected(self, expected, actual): # Check that the expected keys/values exist in the actual response body for key, value in six.iteritems(expected): if key not in ('created_at', 'updated_at'): self.assertIn(key, actual) self.assertEqual(value, actual[key]) @test.idempotent_id('83975898-2e50-42ed-b5f0-e510e36a0b56') def test_create_port(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() _, port = self.create_port(node_id=node_id, address=address) _, body = self.client.show_port(port['uuid']) self._assertExpected(port, body) @test.idempotent_id('d1f6b249-4cf6-4fe6-9ed6-a6e84b1bf67b') def test_create_port_specifying_uuid(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() uuid = data_utils.rand_uuid() _, port = self.create_port(node_id=node_id, address=address, uuid=uuid) _, body = self.client.show_port(uuid) self._assertExpected(port, body) @test.idempotent_id('4a02c4b0-6573-42a4-a513-2e36ad485b62') def test_create_port_with_extra(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() extra = {'str': 'value', 'int': 123, 'float': 0.123, 'bool': True, 'list': [1, 2, 3], 'dict': {'foo': 'bar'}} _, port = self.create_port(node_id=node_id, address=address, extra=extra) _, body = self.client.show_port(port['uuid']) self._assertExpected(port, body) @test.idempotent_id('1bf257a9-aea3-494e-89c0-63f657ab4fdd') def test_delete_port(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() _, port = self.create_port(node_id=node_id, address=address) self.delete_port(port['uuid']) self.assertRaises(lib_exc.NotFound, self.client.show_port, port['uuid']) @test.idempotent_id('9fa77ab5-ce59-4f05-baac-148904ba1597') def test_show_port(self): _, port = self.client.show_port(self.port['uuid']) self._assertExpected(self.port, port) @test.idempotent_id('7c1114ff-fc3f-47bb-bc2f-68f61620ba8b') def test_show_port_by_address(self): _, port = self.client.show_port_by_address(self.port['address']) self._assertExpected(self.port, port['ports'][0]) @test.idempotent_id('bd773405-aea5-465d-b576-0ab1780069e5') def test_show_port_with_links(self): _, port = self.client.show_port(self.port['uuid']) self.assertIn('links', port.keys()) self.assertEqual(2, len(port['links'])) self.assertIn(port['uuid'], port['links'][0]['href']) @test.idempotent_id('b5e91854-5cd7-4a8e-bb35-3e0a1314606d') def test_list_ports(self): _, body = self.client.list_ports() self.assertIn(self.port['uuid'], [i['uuid'] for i in body['ports']]) # Verify self links. 
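        # (A self link is expected to look roughly like
        # http://<ironic-api>:6385/v1/ports/<uuid> -- an illustrative
        # shape; validate_self_link() in base.py builds the exact
        # expectation from the client's base_url and uri_prefix.)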
for port in body['ports']: self.validate_self_link('ports', port['uuid'], port['links'][0]['href']) @test.idempotent_id('324a910e-2f80-4258-9087-062b5ae06240') def test_list_with_limit(self): _, body = self.client.list_ports(limit=3) next_marker = body['ports'][-1]['uuid'] self.assertIn(next_marker, body['next']) @test.idempotent_id('8a94b50f-9895-4a63-a574-7ecff86e5875') def test_list_ports_details(self): node_id = self.node['uuid'] uuids = [ self.create_port(node_id=node_id, address=data_utils.rand_mac_address()) [1]['uuid'] for i in range(0, 5)] _, body = self.client.list_ports_detail() ports_dict = dict((port['uuid'], port) for port in body['ports'] if port['uuid'] in uuids) for uuid in uuids: self.assertIn(uuid, ports_dict) port = ports_dict[uuid] self.assertIn('extra', port) self.assertIn('node_uuid', port) # never expose the node_id self.assertNotIn('node_id', port) # Verify self link. self.validate_self_link('ports', port['uuid'], port['links'][0]['href']) @test.idempotent_id('8a03f688-7d75-4ecd-8cbc-e06b8f346738') def test_list_ports_details_with_address(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() self.create_port(node_id=node_id, address=address) for i in range(0, 5): self.create_port(node_id=node_id, address=data_utils.rand_mac_address()) _, body = self.client.list_ports_detail(address=address) self.assertEqual(1, len(body['ports'])) self.assertEqual(address, body['ports'][0]['address']) @test.idempotent_id('9c26298b-1bcb-47b7-9b9e-8bdd6e3c4aba') def test_update_port_replace(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() extra = {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'} _, port = self.create_port(node_id=node_id, address=address, extra=extra) new_address = data_utils.rand_mac_address() new_extra = {'key1': 'new-value1', 'key2': 'new-value2', 'key3': 'new-value3'} patch = [{'path': '/address', 'op': 'replace', 'value': new_address}, {'path': '/extra/key1', 'op': 'replace', 'value': new_extra['key1']}, {'path': '/extra/key2', 'op': 'replace', 'value': new_extra['key2']}, {'path': '/extra/key3', 'op': 'replace', 'value': new_extra['key3']}] self.client.update_port(port['uuid'], patch) _, body = self.client.show_port(port['uuid']) self.assertEqual(new_address, body['address']) self.assertEqual(new_extra, body['extra']) @test.idempotent_id('d7e7fece-6ed9-460a-9ebe-9267217e8580') def test_update_port_remove(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() extra = {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'} _, port = self.create_port(node_id=node_id, address=address, extra=extra) # Removing one item from the collection self.client.update_port(port['uuid'], [{'path': '/extra/key2', 'op': 'remove'}]) extra.pop('key2') _, body = self.client.show_port(port['uuid']) self.assertEqual(extra, body['extra']) # Removing the collection self.client.update_port(port['uuid'], [{'path': '/extra', 'op': 'remove'}]) _, body = self.client.show_port(port['uuid']) self.assertEqual({}, body['extra']) # Assert nothing else was changed self.assertEqual(node_id, body['node_uuid']) self.assertEqual(address, body['address']) @test.idempotent_id('241288b3-e98a-400f-a4d7-d1f716146361') def test_update_port_add(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() _, port = self.create_port(node_id=node_id, address=address) extra = {'key1': 'value1', 'key2': 'value2'} patch = [{'path': '/extra/key1', 'op': 'add', 'value': extra['key1']}, {'path': '/extra/key2', 'op': 'add', 'value': 
extra['key2']}] self.client.update_port(port['uuid'], patch) _, body = self.client.show_port(port['uuid']) self.assertEqual(extra, body['extra']) @test.idempotent_id('5309e897-0799-4649-a982-0179b04c3876') def test_update_port_mixed_ops(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() extra = {'key1': 'value1', 'key2': 'value2'} _, port = self.create_port(node_id=node_id, address=address, extra=extra) new_address = data_utils.rand_mac_address() new_extra = {'key1': 0.123, 'key3': {'cat': 'meow'}} patch = [{'path': '/address', 'op': 'replace', 'value': new_address}, {'path': '/extra/key1', 'op': 'replace', 'value': new_extra['key1']}, {'path': '/extra/key2', 'op': 'remove'}, {'path': '/extra/key3', 'op': 'add', 'value': new_extra['key3']}] self.client.update_port(port['uuid'], patch) _, body = self.client.show_port(port['uuid']) self.assertEqual(new_address, body['address']) self.assertEqual(new_extra, body['extra']) ironic-5.1.0/ironic_tempest_plugin/tests/api/admin/test_api_discovery.py0000664000567000056710000000324712674513466030043 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from tempest import test from ironic_tempest_plugin.tests.api.admin import base class TestApiDiscovery(base.BaseBaremetalTest): """Tests for API discovery features.""" @test.idempotent_id('a3c27e94-f56c-42c4-8600-d6790650b9c5') def test_api_versions(self): _, descr = self.client.get_api_description() expected_versions = ('v1',) versions = [version['id'] for version in descr['versions']] for v in expected_versions: self.assertIn(v, versions) @test.idempotent_id('896283a6-488e-4f31-af78-6614286cbe0d') def test_default_version(self): _, descr = self.client.get_api_description() default_version = descr['default_version'] self.assertEqual(default_version['id'], 'v1') @test.idempotent_id('abc0b34d-e684-4546-9728-ab7a9ad9f174') def test_version_1_resources(self): _, descr = self.client.get_version_description(version='v1') expected_resources = ('nodes', 'chassis', 'ports', 'links', 'media_types') for res in expected_resources: self.assertIn(res, descr) ironic-5.1.0/ironic_tempest_plugin/tests/api/admin/test_chassis.py0000664000567000056710000000661512674513466026642 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
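# NOTE: for reference, create_chassis() (defined in base.py) posts a
# body of the rough form {"description": "test-chassis-..."} and the
# response carries at least "uuid" and "description" -- a sketch based
# on the fields these tests assert, not an exhaustive schema.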
import six from tempest.lib.common.utils import data_utils from tempest.lib import exceptions as lib_exc from tempest import test from ironic_tempest_plugin.tests.api.admin import base class TestChassis(base.BaseBaremetalTest): """Tests for chassis.""" @classmethod def resource_setup(cls): super(TestChassis, cls).resource_setup() _, cls.chassis = cls.create_chassis() def _assertExpected(self, expected, actual): # Check that the expected keys/values exist in the actual response body for key, value in six.iteritems(expected): if key not in ('created_at', 'updated_at'): self.assertIn(key, actual) self.assertEqual(value, actual[key]) @test.idempotent_id('7c5a2e09-699c-44be-89ed-2bc189992d42') def test_create_chassis(self): descr = data_utils.rand_name('test-chassis') _, chassis = self.create_chassis(description=descr) self.assertEqual(chassis['description'], descr) @test.idempotent_id('cabe9c6f-dc16-41a7-b6b9-0a90c212edd5') def test_create_chassis_unicode_description(self): # Use a unicode string for testing: # 'We ♡ OpenStack in Ukraine' descr = u'В Україні ♡ OpenStack!' _, chassis = self.create_chassis(description=descr) self.assertEqual(chassis['description'], descr) @test.idempotent_id('c84644df-31c4-49db-a307-8942881f41c0') def test_show_chassis(self): _, chassis = self.client.show_chassis(self.chassis['uuid']) self._assertExpected(self.chassis, chassis) @test.idempotent_id('29c9cd3f-19b5-417b-9864-99512c3b33b3') def test_list_chassis(self): _, body = self.client.list_chassis() self.assertIn(self.chassis['uuid'], [i['uuid'] for i in body['chassis']]) @test.idempotent_id('5ae649ad-22d1-4fe1-bbc6-97227d199fb3') def test_delete_chassis(self): _, body = self.create_chassis() uuid = body['uuid'] self.delete_chassis(uuid) self.assertRaises(lib_exc.NotFound, self.client.show_chassis, uuid) @test.idempotent_id('cda8a41f-6be2-4cbf-840c-994b00a89b44') def test_update_chassis(self): _, body = self.create_chassis() uuid = body['uuid'] new_description = data_utils.rand_name('new-description') _, body = (self.client.update_chassis(uuid, description=new_description)) _, chassis = self.client.show_chassis(uuid) self.assertEqual(chassis['description'], new_description) @test.idempotent_id('76305e22-a4e2-4ab3-855c-f4e2368b9335') def test_chassis_node_list(self): _, node = self.create_node(self.chassis['uuid']) _, body = self.client.list_chassis_nodes(self.chassis['uuid']) self.assertIn(node['uuid'], [n['uuid'] for n in body['nodes']]) ironic-5.1.0/ironic_tempest_plugin/tests/api/admin/test_nodestates.py0000664000567000056710000000460212674513466027350 0ustar jenkinsjenkins00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
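# NOTE: the power tests below follow this outline (sketch only):
#
#     client.set_node_power_state(node_uuid, 'rebooting')
#     # ... poll show_node() until node['power_state'] matches ...
#
# A node asked to reboot settles back to 'power on', which is why
# _validate_power_state() maps 'rebooting' to 'power on' before
# polling against power_timeout.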
from oslo_utils import timeutils from tempest.lib import exceptions from tempest import test from ironic_tempest_plugin.tests.api.admin import base class TestNodeStates(base.BaseBaremetalTest): """Tests for baremetal NodeStates.""" @classmethod def resource_setup(cls): super(TestNodeStates, cls).resource_setup() _, cls.chassis = cls.create_chassis() _, cls.node = cls.create_node(cls.chassis['uuid']) def _validate_power_state(self, node_uuid, power_state): # Validate that power state is set within timeout if power_state == 'rebooting': power_state = 'power on' start = timeutils.utcnow() while timeutils.delta_seconds( start, timeutils.utcnow()) < self.power_timeout: _, node = self.client.show_node(node_uuid) if node['power_state'] == power_state: return message = ('Failed to set power state within ' 'the required time: %s sec.' % self.power_timeout) raise exceptions.TimeoutException(message) @test.idempotent_id('cd8afa5e-3f57-4e43-8185-beb83d3c9015') def test_list_nodestates(self): _, nodestates = self.client.list_nodestates(self.node['uuid']) for key in nodestates: self.assertEqual(nodestates[key], self.node[key]) @test.idempotent_id('fc5b9320-0c98-4e5a-8848-877fe5a0322c') def test_set_node_power_state(self): _, node = self.create_node(self.chassis['uuid']) states = ["power on", "rebooting", "power off"] for state in states: # Set power state self.client.set_node_power_state(node['uuid'], state) # Check power state after state is set self._validate_power_state(node['uuid'], state) ironic-5.1.0/ironic_tempest_plugin/tests/api/admin/test_nodes.py0000664000567000056710000001565612674513466026322 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
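# NOTE: nodes in these tests carry scheduler-style properties, e.g.
# (values as used by test_create_node below):
#
#     {'cpu_arch': 'x86_64', 'cpus': '12',
#      'local_gb': '10', 'memory_mb': '1024'}
#
# local_gb is a disk size in GiB and memory_mb is RAM in MiB; the tests
# pass all values as strings.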
import six from tempest.lib.common.utils import data_utils from tempest.lib import exceptions as lib_exc from tempest import test from ironic_tempest_plugin.common import waiters from ironic_tempest_plugin.tests.api.admin import base class TestNodes(base.BaseBaremetalTest): """Tests for baremetal nodes.""" def setUp(self): super(TestNodes, self).setUp() _, self.chassis = self.create_chassis() _, self.node = self.create_node(self.chassis['uuid']) def _assertExpected(self, expected, actual): # Check that the expected keys/values exist in the actual response body for key, value in six.iteritems(expected): if key not in ('created_at', 'updated_at'): self.assertIn(key, actual) self.assertEqual(value, actual[key]) def _associate_node_with_instance(self): self.client.set_node_power_state(self.node['uuid'], 'power off') waiters.wait_for_bm_node_status(self.client, self.node['uuid'], 'power_state', 'power off') instance_uuid = data_utils.rand_uuid() self.client.update_node(self.node['uuid'], instance_uuid=instance_uuid) self.addCleanup(self.client.update_node, uuid=self.node['uuid'], instance_uuid=None) return instance_uuid @test.idempotent_id('4e939eb2-8a69-4e84-8652-6fffcbc9db8f') def test_create_node(self): params = {'cpu_arch': 'x86_64', 'cpus': '12', 'local_gb': '10', 'memory_mb': '1024'} _, body = self.create_node(self.chassis['uuid'], **params) self._assertExpected(params, body['properties']) @test.idempotent_id('9ade60a4-505e-4259-9ec4-71352cbbaf47') def test_delete_node(self): _, node = self.create_node(self.chassis['uuid']) self.delete_node(node['uuid']) self.assertRaises(lib_exc.NotFound, self.client.show_node, node['uuid']) @test.idempotent_id('55451300-057c-4ecf-8255-ba42a83d3a03') def test_show_node(self): _, loaded_node = self.client.show_node(self.node['uuid']) self._assertExpected(self.node, loaded_node) @test.idempotent_id('4ca123c4-160d-4d8d-a3f7-15feda812263') def test_list_nodes(self): _, body = self.client.list_nodes() self.assertIn(self.node['uuid'], [i['uuid'] for i in body['nodes']]) @test.idempotent_id('85b1f6e0-57fd-424c-aeff-c3422920556f') def test_list_nodes_association(self): _, body = self.client.list_nodes(associated=True) self.assertNotIn(self.node['uuid'], [n['uuid'] for n in body['nodes']]) self._associate_node_with_instance() _, body = self.client.list_nodes(associated=True) self.assertIn(self.node['uuid'], [n['uuid'] for n in body['nodes']]) _, body = self.client.list_nodes(associated=False) self.assertNotIn(self.node['uuid'], [n['uuid'] for n in body['nodes']]) @test.idempotent_id('18c4ebd8-f83a-4df7-9653-9fb33a329730') def test_node_port_list(self): _, port = self.create_port(self.node['uuid'], data_utils.rand_mac_address()) _, body = self.client.list_node_ports(self.node['uuid']) self.assertIn(port['uuid'], [p['uuid'] for p in body['ports']]) @test.idempotent_id('72591acb-f215-49db-8395-710d14eb86ab') def test_node_port_list_no_ports(self): _, node = self.create_node(self.chassis['uuid']) _, body = self.client.list_node_ports(node['uuid']) self.assertEmpty(body['ports']) @test.idempotent_id('4fed270a-677a-4d19-be87-fd38ae490320') def test_update_node(self): props = {'cpu_arch': 'x86_64', 'cpus': '12', 'local_gb': '10', 'memory_mb': '128'} _, node = self.create_node(self.chassis['uuid'], **props) new_p = {'cpu_arch': 'x86', 'cpus': '1', 'local_gb': '10000', 'memory_mb': '12300'} _, body = self.client.update_node(node['uuid'], properties=new_p) _, node = self.client.show_node(node['uuid']) self._assertExpected(new_p, node['properties']) 
@test.idempotent_id('cbf1f515-5f4b-4e49-945c-86bcaccfeb1d') def test_validate_driver_interface(self): _, body = self.client.validate_driver_interface(self.node['uuid']) core_interfaces = ['power', 'deploy'] for interface in core_interfaces: self.assertIn(interface, body) @test.idempotent_id('5519371c-26a2-46e9-aa1a-f74226e9d71f') def test_set_node_boot_device(self): self.client.set_node_boot_device(self.node['uuid'], 'pxe') @test.idempotent_id('9ea73775-f578-40b9-bc34-efc639c4f21f') def test_get_node_boot_device(self): body = self.client.get_node_boot_device(self.node['uuid']) self.assertIn('boot_device', body) self.assertIn('persistent', body) self.assertTrue(isinstance(body['boot_device'], six.string_types)) self.assertTrue(isinstance(body['persistent'], bool)) @test.idempotent_id('3622bc6f-3589-4bc2-89f3-50419c66b133') def test_get_node_supported_boot_devices(self): body = self.client.get_node_supported_boot_devices(self.node['uuid']) self.assertIn('supported_boot_devices', body) self.assertTrue(isinstance(body['supported_boot_devices'], list)) @test.idempotent_id('f63b6288-1137-4426-8cfe-0d5b7eb87c06') def test_get_console(self): _, body = self.client.get_console(self.node['uuid']) con_info = ['console_enabled', 'console_info'] for key in con_info: self.assertIn(key, body) @test.idempotent_id('80504575-9b21-4670-92d1-143b948f9437') def test_set_console_mode(self): self.client.set_console_mode(self.node['uuid'], True) _, body = self.client.get_console(self.node['uuid']) self.assertEqual(True, body['console_enabled']) @test.idempotent_id('b02a4f38-5e8b-44b2-aed2-a69a36ecfd69') def test_get_node_by_instance_uuid(self): instance_uuid = self._associate_node_with_instance() _, body = self.client.show_node_by_instance_uuid(instance_uuid) self.assertEqual(len(body['nodes']), 1) self.assertIn(self.node['uuid'], [n['uuid'] for n in body['nodes']]) ironic-5.1.0/ironic_tempest_plugin/tests/api/admin/test_ports_negative.py0000664000567000056710000003223212674513466030230 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
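# NOTE: these tests feed deliberately invalid input -- e.g. the literal
# 'malformed:mac' where a MAC such as 'aa:bb:cc:dd:ee:ff' (made-up
# value) is required -- and assert that the API rejects it with
# BadRequest, NotFound or Conflict instead of accepting the data.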
from tempest.lib.common.utils import data_utils from tempest.lib import exceptions as lib_exc from tempest import test from ironic_tempest_plugin.tests.api.admin import base class TestPortsNegative(base.BaseBaremetalTest): """Negative tests for ports.""" def setUp(self): super(TestPortsNegative, self).setUp() _, self.chassis = self.create_chassis() _, self.node = self.create_node(self.chassis['uuid']) @test.attr(type=['negative']) @test.idempotent_id('0a6ee1f7-d0d9-4069-8778-37f3aa07303a') def test_create_port_malformed_mac(self): node_id = self.node['uuid'] address = 'malformed:mac' self.assertRaises(lib_exc.BadRequest, self.create_port, node_id=node_id, address=address) @test.attr(type=['negative']) @test.idempotent_id('30277ee8-0c60-4f1d-b125-0e51c2f43369') def test_create_port_nonexistent_node_id(self): node_id = str(data_utils.rand_uuid()) address = data_utils.rand_mac_address() self.assertRaises(lib_exc.BadRequest, self.create_port, node_id=node_id, address=address) @test.attr(type=['negative']) @test.idempotent_id('029190f6-43e1-40a3-b64a-65173ba653a3') def test_show_port_malformed_uuid(self): self.assertRaises(lib_exc.BadRequest, self.client.show_port, 'malformed:uuid') @test.attr(type=['negative']) @test.idempotent_id('0d00e13d-e2e0-45b1-bcbc-55a6d90ca793') def test_show_port_nonexistent_uuid(self): self.assertRaises(lib_exc.NotFound, self.client.show_port, data_utils.rand_uuid()) @test.attr(type=['negative']) @test.idempotent_id('4ad85266-31e9-4942-99ac-751897dc9e23') def test_show_port_by_mac_not_allowed(self): self.assertRaises(lib_exc.BadRequest, self.client.show_port, data_utils.rand_mac_address()) @test.attr(type=['negative']) @test.idempotent_id('89a34380-3c61-4c32-955c-2cd9ce94da21') def test_create_port_duplicated_port_uuid(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() uuid = data_utils.rand_uuid() self.create_port(node_id=node_id, address=address, uuid=uuid) self.assertRaises(lib_exc.Conflict, self.create_port, node_id=node_id, address=address, uuid=uuid) @test.attr(type=['negative']) @test.idempotent_id('65e84917-733c-40ae-ae4b-96a4adff931c') def test_create_port_no_mandatory_field_node_id(self): address = data_utils.rand_mac_address() self.assertRaises(lib_exc.BadRequest, self.create_port, node_id=None, address=address) @test.attr(type=['negative']) @test.idempotent_id('bcea3476-7033-4183-acfe-e56a30809b46') def test_create_port_no_mandatory_field_mac(self): node_id = self.node['uuid'] self.assertRaises(lib_exc.BadRequest, self.create_port, node_id=node_id, address=None) @test.attr(type=['negative']) @test.idempotent_id('2b51cd18-fb95-458b-9780-e6257787b649') def test_create_port_malformed_port_uuid(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() uuid = 'malformed:uuid' self.assertRaises(lib_exc.BadRequest, self.create_port, node_id=node_id, address=address, uuid=uuid) @test.attr(type=['negative']) @test.idempotent_id('583a6856-6a30-4ac4-889f-14e2adff8105') def test_create_port_malformed_node_id(self): address = data_utils.rand_mac_address() self.assertRaises(lib_exc.BadRequest, self.create_port, node_id='malformed:nodeid', address=address) @test.attr(type=['negative']) @test.idempotent_id('e27f8b2e-42c6-4a43-a3cd-accff716bc5c') def test_create_port_duplicated_mac(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() self.create_port(node_id=node_id, address=address) self.assertRaises(lib_exc.Conflict, self.create_port, node_id=node_id, address=address) @test.attr(type=['negative']) 
@test.idempotent_id('8907082d-ac5e-4be3-b05f-d072ede82020') def test_update_port_by_mac_not_allowed(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() extra = {'key': 'value'} self.create_port(node_id=node_id, address=address, extra=extra) patch = [{'path': '/extra/key', 'op': 'replace', 'value': 'new-value'}] self.assertRaises(lib_exc.BadRequest, self.client.update_port, address, patch) @test.attr(type=['negative']) @test.idempotent_id('df1ac70c-db9f-41d9-90f1-78cd6b905718') def test_update_port_nonexistent(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() extra = {'key': 'value'} _, port = self.create_port(node_id=node_id, address=address, extra=extra) port_id = port['uuid'] _, body = self.client.delete_port(port_id) patch = [{'path': '/extra/key', 'op': 'replace', 'value': 'new-value'}] self.assertRaises(lib_exc.NotFound, self.client.update_port, port_id, patch) @test.attr(type=['negative']) @test.idempotent_id('c701e315-aa52-41ea-817c-65c5ca8ca2a8') def test_update_port_malformed_port_uuid(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() self.create_port(node_id=node_id, address=address) new_address = data_utils.rand_mac_address() self.assertRaises(lib_exc.BadRequest, self.client.update_port, uuid='malformed:uuid', patch=[{'path': '/address', 'op': 'replace', 'value': new_address}]) @test.attr(type=['negative']) @test.idempotent_id('f8f15803-34d6-45dc-b06f-e5e04bf1b38b') def test_update_port_add_nonexistent_property(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() _, port = self.create_port(node_id=node_id, address=address) port_id = port['uuid'] self.assertRaises(lib_exc.BadRequest, self.client.update_port, port_id, [{'path': '/nonexistent', 'op': 'add', 'value': 'value'}]) @test.attr(type=['negative']) @test.idempotent_id('898ec904-38b1-4fcb-9584-1187d4263a2a') def test_update_port_replace_node_id_with_malformed(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() _, port = self.create_port(node_id=node_id, address=address) port_id = port['uuid'] patch = [{'path': '/node_uuid', 'op': 'replace', 'value': 'malformed:node_uuid'}] self.assertRaises(lib_exc.BadRequest, self.client.update_port, port_id, patch) @test.attr(type=['negative']) @test.idempotent_id('2949f30f-5f59-43fa-a6d9-4eac578afab4') def test_update_port_replace_mac_with_duplicated(self): node_id = self.node['uuid'] address1 = data_utils.rand_mac_address() address2 = data_utils.rand_mac_address() _, port1 = self.create_port(node_id=node_id, address=address1) _, port2 = self.create_port(node_id=node_id, address=address2) port_id = port2['uuid'] patch = [{'path': '/address', 'op': 'replace', 'value': address1}] self.assertRaises(lib_exc.Conflict, self.client.update_port, port_id, patch) @test.attr(type=['negative']) @test.idempotent_id('97f6e048-6e4f-4eba-a09d-fbbc78b77a77') def test_update_port_replace_node_id_with_nonexistent(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() _, port = self.create_port(node_id=node_id, address=address) port_id = port['uuid'] patch = [{'path': '/node_uuid', 'op': 'replace', 'value': data_utils.rand_uuid()}] self.assertRaises(lib_exc.BadRequest, self.client.update_port, port_id, patch) @test.attr(type=['negative']) @test.idempotent_id('375022c5-9e9e-4b11-9ca4-656729c0c9b2') def test_update_port_replace_mac_with_malformed(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() _, port = self.create_port(node_id=node_id, 
address=address) port_id = port['uuid'] patch = [{'path': '/address', 'op': 'replace', 'value': 'malformed:mac'}] self.assertRaises(lib_exc.BadRequest, self.client.update_port, port_id, patch) @test.attr(type=['negative']) @test.idempotent_id('5722b853-03fc-4854-8308-2036a1b67d85') def test_update_port_replace_nonexistent_property(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() _, port = self.create_port(node_id=node_id, address=address) port_id = port['uuid'] patch = [{'path': '/nonexistent', 'op': 'replace', 'value': 'value'}] self.assertRaises(lib_exc.BadRequest, self.client.update_port, port_id, patch) @test.attr(type=['negative']) @test.idempotent_id('ae2696ca-930a-4a7f-918f-30ae97c60f56') def test_update_port_remove_mandatory_field_mac(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() _, port = self.create_port(node_id=node_id, address=address) port_id = port['uuid'] self.assertRaises(lib_exc.BadRequest, self.client.update_port, port_id, [{'path': '/address', 'op': 'remove'}]) @test.attr(type=['negative']) @test.idempotent_id('5392c1f0-2071-4697-9064-ec2d63019018') def test_update_port_remove_mandatory_field_port_uuid(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() _, port = self.create_port(node_id=node_id, address=address) port_id = port['uuid'] self.assertRaises(lib_exc.BadRequest, self.client.update_port, port_id, [{'path': '/uuid', 'op': 'remove'}]) @test.attr(type=['negative']) @test.idempotent_id('06b50d82-802a-47ef-b079-0a3311cf85a2') def test_update_port_remove_nonexistent_property(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() _, port = self.create_port(node_id=node_id, address=address) port_id = port['uuid'] self.assertRaises(lib_exc.BadRequest, self.client.update_port, port_id, [{'path': '/nonexistent', 'op': 'remove'}]) @test.attr(type=['negative']) @test.idempotent_id('03d42391-2145-4a6c-95bf-63fe55eb64fd') def test_delete_port_by_mac_not_allowed(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() self.create_port(node_id=node_id, address=address) self.assertRaises(lib_exc.BadRequest, self.client.delete_port, address) @test.attr(type=['negative']) @test.idempotent_id('0629e002-818e-4763-b25b-ae5e07b1cb23') def test_update_port_mixed_ops_integrity(self): node_id = self.node['uuid'] address = data_utils.rand_mac_address() extra = {'key1': 'value1', 'key2': 'value2'} _, port = self.create_port(node_id=node_id, address=address, extra=extra) port_id = port['uuid'] new_address = data_utils.rand_mac_address() new_extra = {'key1': 'new-value1', 'key3': 'new-value3'} patch = [{'path': '/address', 'op': 'replace', 'value': new_address}, {'path': '/extra/key1', 'op': 'replace', 'value': new_extra['key1']}, {'path': '/extra/key2', 'op': 'remove'}, {'path': '/extra/key3', 'op': 'add', 'value': new_extra['key3']}, {'path': '/nonexistent', 'op': 'replace', 'value': 'value'}] self.assertRaises(lib_exc.BadRequest, self.client.update_port, port_id, patch) # patch should not be applied _, body = self.client.show_port(port_id) self.assertEqual(address, body['address']) self.assertEqual(extra, body['extra']) ironic-5.1.0/ironic_tempest_plugin/tests/api/admin/__init__.py0000664000567000056710000000000012674513466025663 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/tests/api/__init__.py0000664000567000056710000000000012674513466024573 0ustar 
jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/tests/__init__.py0000664000567000056710000000000012674513466024022 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic_tempest_plugin/plugin.py0000664000567000056710000000256512674513466022441 0ustar jenkinsjenkins00000000000000# Copyright 2015 NEC Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from tempest import config from tempest.test_discover import plugins from ironic_tempest_plugin import config as project_config class IronicTempestPlugin(plugins.TempestPlugin): def load_tests(self): base_path = os.path.split(os.path.dirname( os.path.abspath(__file__)))[0] test_dir = "ironic_tempest_plugin/tests" full_test_dir = os.path.join(base_path, test_dir) return full_test_dir, base_path def register_opts(self, conf): config.register_opt_group(conf, project_config.baremetal_group, project_config.BaremetalGroup) def get_opt_lists(self): return [(project_config.baremetal_group.name, project_config.BaremetalGroup)] ironic-5.1.0/ironic_tempest_plugin/clients.py0000664000567000056710000000260212674513466022574 0ustar jenkinsjenkins00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from tempest import clients from tempest.common import credentials_factory as common_creds from tempest import config from ironic_tempest_plugin.services.baremetal.v1.json.baremetal_client import \ BaremetalClient CONF = config.CONF ADMIN_CREDS = common_creds.get_configured_credentials('identity_admin') class Manager(clients.Manager): def __init__(self, credentials=ADMIN_CREDS, service=None, api_microversions=None): super(Manager, self).__init__(credentials, service) self.baremetal_client = BaremetalClient( self.auth_provider, CONF.baremetal.catalog_type, CONF.identity.region, endpoint_type=CONF.baremetal.endpoint_type, **self.default_params_with_timeout_values) ironic-5.1.0/test-requirements.txt0000664000567000056710000000157312674513466020426 0ustar jenkinsjenkins00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. 
hacking<0.11,>=0.10.0 coverage>=3.6 # Apache-2.0 doc8 # Apache-2.0 fixtures>=1.3.1 # Apache-2.0/BSD mock>=1.2 # BSD Babel>=1.3 # BSD PyMySQL>=0.6.2 # MIT License iso8601>=0.1.9 # MIT oslotest>=1.10.0 # Apache-2.0 psycopg2>=2.5 # LGPL/ZPL python-ironicclient>=1.1.0 # Apache-2.0 python-subunit>=0.0.18 # Apache-2.0/BSD testtools>=1.4.0 # MIT os-testr>=0.4.1 # Apache-2.0 testresources>=0.2.4 # Apache-2.0/BSD testscenarios>=0.4 # Apache-2.0/BSD WebTest>=2.0 # MIT bashate>=0.2 # Apache-2.0 # Doc requirements sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD sphinxcontrib-pecanwsme>=0.8 # Apache-2.0 sphinxcontrib-seqdiag # BSD oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0 reno>=0.1.1 # Apache2 ironic-5.1.0/.testr.conf0000664000567000056710000000043712674513466016251 0ustar jenkinsjenkins00000000000000[DEFAULT] test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ ${TESTS_DIR:-./ironic/tests/unit/} $LISTOPT $IDOPTION test_id_option=--load-list $IDFILE test_list_option=--list ironic-5.1.0/LICENSE0000664000567000056710000002363712674513466015177 0ustar jenkinsjenkins00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. 
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. ironic-5.1.0/README.rst0000664000567000056710000000176212674513466015654 0ustar jenkinsjenkins00000000000000Ironic ====== Ironic is an integrated OpenStack project which aims to provision bare metal machines instead of virtual machines, forked from the Nova Baremetal driver. It is best thought of as a bare metal hypervisor **API** and a set of plugins which interact with the bare metal hypervisors. By default, it will use PXE and IPMI together to provision and turn on/off machines, but Ironic also supports vendor-specific plugins which may implement additional functionality. ----------------- Project Resources ----------------- * Free software: Apache license * Documentation: http://docs.openstack.org/developer/ironic * Source: http://git.openstack.org/cgit/openstack/ironic * Bugs: http://bugs.launchpad.net/ironic * Wiki: https://wiki.openstack.org/wiki/Ironic Project status, bugs, and blueprints are tracked on Launchpad: http://launchpad.net/ironic Anyone wishing to contribute to an OpenStack project should find a good reference here: http://docs.openstack.org/infra/manual/developers.html ironic-5.1.0/requirements.txt0000664000567000056710000000303012674513466017437 0ustar jenkinsjenkins00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. 
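# An illustrative note (not part of the upstream file): an entry such as
# "eventlet!=0.18.3,>=0.18.2  # MIT" below accepts any eventlet at or above
# 0.18.2 except the known-bad 0.18.3; the trailing "# MIT" only records the
# package's license and is ignored by pip.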
pbr>=1.6 # Apache-2.0 SQLAlchemy<1.1.0,>=1.0.10 # MIT alembic>=0.8.0 # MIT automaton>=0.5.0 # Apache-2.0 eventlet!=0.18.3,>=0.18.2 # MIT WebOb>=1.2.3 # MIT greenlet>=0.3.2 # MIT netaddr!=0.7.16,>=0.7.12 # BSD paramiko>=1.16.0 # LGPL python-neutronclient!=4.1.0,>=2.6.0 # Apache-2.0 python-glanceclient>=2.0.0 # Apache-2.0 python-keystoneclient!=1.8.0,!=2.1.0,>=1.6.0 # Apache-2.0 ironic-lib>=1.1.0 # Apache-2.0 python-swiftclient>=2.2.0 # Apache-2.0 pytz>=2013.6 # MIT stevedore>=1.5.0 # Apache-2.0 pysendfile>=2.0.0 # MIT websockify>=0.6.1 # LGPLv3 oslo.concurrency>=3.5.0 # Apache-2.0 oslo.config>=3.7.0 # Apache-2.0 oslo.context>=0.2.0 # Apache-2.0 oslo.db>=4.1.0 # Apache-2.0 oslo.rootwrap>=2.0.0 # Apache-2.0 oslo.i18n>=2.1.0 # Apache-2.0 oslo.log>=1.14.0 # Apache-2.0 oslo.middleware>=3.0.0 # Apache-2.0 oslo.policy>=0.5.0 # Apache-2.0 oslo.serialization>=1.10.0 # Apache-2.0 oslo.service>=1.0.0 # Apache-2.0 oslo.utils>=3.5.0 # Apache-2.0 pecan>=1.0.0 # BSD requests!=2.9.0,>=2.8.1 # Apache-2.0 six>=1.9.0 # MIT jsonpatch>=1.1 # BSD WSME>=0.8 # MIT Jinja2>=2.8 # BSD License (3 clause) keystonemiddleware!=4.1.0,>=4.0.0 # Apache-2.0 oslo.messaging>=4.0.0 # Apache-2.0 retrying!=1.3.0,>=1.2.3 # Apache-2.0 oslo.versionedobjects>=1.5.0 # Apache-2.0 jsonschema!=2.5.0,<3.0.0,>=2.0.0 # MIT psutil<2.0.0,>=1.1.1 # BSD futurist>=0.11.0 # Apache-2.0 ironic-5.1.0/ironic/0000775000567000056710000000000012674513633015436 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/objects/0000775000567000056710000000000012674513633017067 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/objects/base.py0000664000567000056710000001174612674513466020370 0ustar jenkinsjenkins00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Ironic common internal object model""" from oslo_utils import versionutils from oslo_versionedobjects import base as object_base from ironic import objects from ironic.objects import fields as object_fields class IronicObjectRegistry(object_base.VersionedObjectRegistry): def registration_hook(self, cls, index): # NOTE(jroll): blatantly stolen from nova # NOTE(danms): This is called when an object is registered, # and is responsible for maintaining ironic.objects.$OBJECT # as the highest-versioned implementation of a given object. version = versionutils.convert_version_to_tuple(cls.VERSION) if not hasattr(objects, cls.obj_name()): setattr(objects, cls.obj_name(), cls) else: cur_version = versionutils.convert_version_to_tuple( getattr(objects, cls.obj_name()).VERSION) if version >= cur_version: setattr(objects, cls.obj_name(), cls) class IronicObject(object_base.VersionedObject): """Base class and object factory. This forms the base of all objects that can be remoted or instantiated via RPC. Simply defining a class that inherits from this base class will make it remotely instantiatable. Objects should implement the necessary "get" classmethod routines as well as "save" object methods as appropriate. 
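    A minimal sketch of such a subclass (illustrative only; ``Widget`` and
    its single field are hypothetical, not part of Ironic)::

        @IronicObjectRegistry.register
        class Widget(IronicObject):
            VERSION = '1.0'
            fields = {
                'id': object_fields.IntegerField(),
            }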
""" OBJ_SERIAL_NAMESPACE = 'ironic_object' OBJ_PROJECT_NAMESPACE = 'ironic' # TODO(lintan) Refactor these fields and create PersistentObject and # TimeStampObject like Nova when it is necessary. fields = { 'created_at': object_fields.DateTimeField(nullable=True), 'updated_at': object_fields.DateTimeField(nullable=True), } def as_dict(self): return dict((k, getattr(self, k)) for k in self.fields if hasattr(self, k)) def obj_refresh(self, loaded_object): """Applies updates for objects that inherit from base.IronicObject. Checks for updated attributes in an object. Updates are applied from the loaded object column by column in comparison with the current object. """ for field in self.fields: if (self.obj_attr_is_set(field) and self[field] != loaded_object[field]): self[field] = loaded_object[field] @staticmethod def _from_db_object(obj, db_object): """Converts a database entity to a formal object. :param obj: An object of the class. :param db_object: A DB model of the object :return: The object of the class with the database entity added """ for field in obj.fields: obj[field] = db_object[field] obj.obj_reset_changes() return obj class IronicObjectIndirectionAPI(object_base.VersionedObjectIndirectionAPI): def __init__(self): super(IronicObjectIndirectionAPI, self).__init__() # FIXME(xek): importing here due to a cyclical import error from ironic.conductor import rpcapi as conductor_api self._conductor = conductor_api.ConductorAPI() def object_action(self, context, objinst, objmethod, args, kwargs): return self._conductor.object_action(context, objinst, objmethod, args, kwargs) def object_class_action(self, context, objname, objmethod, objver, args, kwargs): # NOTE(xek): This method is implemented for compatibility with # oslo.versionedobjects 0.10.0 and older. It will be replaced by # object_class_action_versions. versions = object_base.obj_tree_get_versions(objname) return self.object_class_action_versions( context, objname, objmethod, versions, args, kwargs) def object_class_action_versions(self, context, objname, objmethod, object_versions, args, kwargs): return self._conductor.object_class_action_versions( context, objname, objmethod, object_versions, args, kwargs) def object_backport_versions(self, context, objinst, object_versions): return self._conductor.object_backport_versions(context, objinst, object_versions) class IronicObjectSerializer(object_base.VersionedObjectSerializer): # Base class to use for object hydration OBJ_BASE_CLASS = IronicObject ironic-5.1.0/ironic/objects/port.py0000664000567000056710000003144012674513466020433 0ustar jenkinsjenkins00000000000000# coding=utf-8 # # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_utils import strutils from oslo_utils import uuidutils from oslo_versionedobjects import base as object_base from ironic.common import exception from ironic.common import utils from ironic.db import api as dbapi from ironic.objects import base from ironic.objects import fields as object_fields @base.IronicObjectRegistry.register class Port(base.IronicObject, object_base.VersionedObjectDictCompat): # Version 1.0: Initial version # Version 1.1: Add get() and get_by_id() and get_by_address() and # make get_by_uuid() only work with a uuid # Version 1.2: Add create() and destroy() # Version 1.3: Add list() # Version 1.4: Add list_by_node_id() # Version 1.5: Add list_by_portgroup_id() and new fields # local_link_connection, portgroup_id and pxe_enabled VERSION = '1.5' dbapi = dbapi.get_instance() fields = { 'id': object_fields.IntegerField(), 'uuid': object_fields.UUIDField(nullable=True), 'node_id': object_fields.IntegerField(nullable=True), 'address': object_fields.MACAddressField(nullable=True), 'extra': object_fields.FlexibleDictField(nullable=True), 'local_link_connection': object_fields.FlexibleDictField( nullable=True), 'portgroup_id': object_fields.IntegerField(nullable=True), 'pxe_enabled': object_fields.BooleanField() } @staticmethod def _from_db_object_list(db_objects, cls, context): """Converts a list of database entities to a list of formal objects.""" return [Port._from_db_object(cls(context), obj) for obj in db_objects] # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def get(cls, context, port_id): """Find a port. Find a port based on its id or uuid or MAC address and return a Port object. :param port_id: the id *or* uuid *or* MAC address of a port. :returns: a :class:`Port` object. :raises: InvalidIdentity """ if strutils.is_int_like(port_id): return cls.get_by_id(context, port_id) elif uuidutils.is_uuid_like(port_id): return cls.get_by_uuid(context, port_id) elif utils.is_valid_mac(port_id): return cls.get_by_address(context, port_id) else: raise exception.InvalidIdentity(identity=port_id) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def get_by_id(cls, context, port_id): """Find a port based on its integer id and return a Port object. :param port_id: the id of a port. :returns: a :class:`Port` object. :raises: PortNotFound """ db_port = cls.dbapi.get_port_by_id(port_id) port = Port._from_db_object(cls(context), db_port) return port # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def get_by_uuid(cls, context, uuid): """Find a port based on uuid and return a :class:`Port` object. :param uuid: the uuid of a port. :param context: Security context :returns: a :class:`Port` object. :raises: PortNotFound """ db_port = cls.dbapi.get_port_by_uuid(uuid) port = Port._from_db_object(cls(context), db_port) return port # NOTE(xek): We don't want to enable RPC on this call just yet. 
Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def get_by_address(cls, context, address): """Find a port based on address and return a :class:`Port` object. :param address: the address of a port. :param context: Security context :returns: a :class:`Port` object. :raises: PortNotFound """ db_port = cls.dbapi.get_port_by_address(address) port = Port._from_db_object(cls(context), db_port) return port # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def list(cls, context, limit=None, marker=None, sort_key=None, sort_dir=None): """Return a list of Port objects. :param context: Security context. :param limit: maximum number of resources to return in a single result. :param marker: pagination marker for large data sets. :param sort_key: column to sort results by. :param sort_dir: direction to sort. "asc" or "desc". :returns: a list of :class:`Port` object. :raises: InvalidParameterValue """ db_ports = cls.dbapi.get_port_list(limit=limit, marker=marker, sort_key=sort_key, sort_dir=sort_dir) return Port._from_db_object_list(db_ports, cls, context) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def list_by_node_id(cls, context, node_id, limit=None, marker=None, sort_key=None, sort_dir=None): """Return a list of Port objects associated with a given node ID. :param context: Security context. :param node_id: the ID of the node. :param limit: maximum number of resources to return in a single result. :param marker: pagination marker for large data sets. :param sort_key: column to sort results by. :param sort_dir: direction to sort. "asc" or "desc". :returns: a list of :class:`Port` object. """ db_ports = cls.dbapi.get_ports_by_node_id(node_id, limit=limit, marker=marker, sort_key=sort_key, sort_dir=sort_dir) return Port._from_db_object_list(db_ports, cls, context) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def list_by_portgroup_id(cls, context, portgroup_id, limit=None, marker=None, sort_key=None, sort_dir=None): """Return a list of Port objects associated with a given portgroup ID. :param context: Security context. :param portgroup_id: the ID of the portgroup. :param limit: maximum number of resources to return in a single result. :param marker: pagination marker for large data sets. :param sort_key: column to sort results by. :param sort_dir: direction to sort. "asc" or "desc". :returns: a list of :class:`Port` object. """ db_ports = cls.dbapi.get_ports_by_portgroup_id(portgroup_id, limit=limit, marker=marker, sort_key=sort_key, sort_dir=sort_dir) return Port._from_db_object_list(db_ports, cls, context) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. 
# Implications of calling new remote procedures should be thought through. # @object_base.remotable def create(self, context=None): """Create a Port record in the DB. :param context: Security context. NOTE: This should only be used internally by the indirection_api. Unfortunately, RPC requires context as the first argument, even though we don't use it. A context should be set when instantiating the object, e.g.: Port(context) :raises: MACAlreadyExists if 'address' column is not unique :raises: PortAlreadyExists if 'uuid' column is not unique """ values = self.obj_get_changes() db_port = self.dbapi.create_port(values) self._from_db_object(self, db_port) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable def destroy(self, context=None): """Delete the Port from the DB. :param context: Security context. NOTE: This should only be used internally by the indirection_api. Unfortunately, RPC requires context as the first argument, even though we don't use it. A context should be set when instantiating the object, e.g.: Port(context) :raises: PortNotFound """ self.dbapi.destroy_port(self.uuid) self.obj_reset_changes() # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable def save(self, context=None): """Save updates to this Port. Updates will be made column by column based on the result of self.what_changed(). :param context: Security context. NOTE: This should only be used internally by the indirection_api. Unfortunately, RPC requires context as the first argument, even though we don't use it. A context should be set when instantiating the object, e.g.: Port(context) :raises: PortNotFound :raises: MACAlreadyExists if 'address' column is not unique """ updates = self.obj_get_changes() updated_port = self.dbapi.update_port(self.uuid, updates) self._from_db_object(self, updated_port) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable def refresh(self, context=None): """Loads updates for this Port. Loads a port with the same uuid from the database and checks for updated attributes. Updates are applied from the loaded port column by column, if there are any updates. :param context: Security context. NOTE: This should only be used internally by the indirection_api. Unfortunately, RPC requires context as the first argument, even though we don't use it. A context should be set when instantiating the object, e.g.: Port(context) :raises: PortNotFound """ current = self.__class__.get_by_uuid(self._context, uuid=self.uuid) self.obj_refresh(current) ironic-5.1.0/ironic/objects/fields.py0000664000567000056710000000370512674513466020720 0ustar jenkinsjenkins00000000000000# Copyright 2015 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ast import six from oslo_versionedobjects import fields as object_fields from ironic.common import utils class IntegerField(object_fields.IntegerField): pass class UUIDField(object_fields.UUIDField): pass class StringField(object_fields.StringField): pass class DateTimeField(object_fields.DateTimeField): pass class BooleanField(object_fields.BooleanField): pass class ListOfStringsField(object_fields.ListOfStringsField): pass class FlexibleDict(object_fields.FieldType): @staticmethod def coerce(obj, attr, value): if isinstance(value, six.string_types): value = ast.literal_eval(value) return dict(value) class FlexibleDictField(object_fields.AutoTypedField): AUTO_TYPE = FlexibleDict() # TODO(lucasagomes): In our code we've always translated None to {}, # this method makes this field to work like this. But probably won't # be accepted as-is in the oslo_versionedobjects library def _null(self, obj, attr): if self.nullable: return {} super(FlexibleDictField, self)._null(obj, attr) class MACAddress(object_fields.FieldType): @staticmethod def coerce(obj, attr, value): return utils.validate_and_normalize_mac(value) class MACAddressField(object_fields.AutoTypedField): AUTO_TYPE = MACAddress() ironic-5.1.0/ironic/objects/node.py0000664000567000056710000004021612674513466020375 0ustar jenkinsjenkins00000000000000# coding=utf-8 # # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import strutils from oslo_utils import uuidutils from oslo_versionedobjects import base as object_base from ironic.common import exception from ironic.common.i18n import _ from ironic.db import api as db_api from ironic.objects import base from ironic.objects import fields as object_fields REQUIRED_INT_PROPERTIES = ['local_gb', 'cpus', 'memory_mb'] @base.IronicObjectRegistry.register class Node(base.IronicObject, object_base.VersionedObjectDictCompat): # Version 1.0: Initial version # Version 1.1: Added instance_info # Version 1.2: Add get() and get_by_id() and make get_by_uuid() # only work with a uuid # Version 1.3: Add create() and destroy() # Version 1.4: Add get_by_instance_uuid() # Version 1.5: Add list() # Version 1.6: Add reserve() and release() # Version 1.7: Add conductor_affinity # Version 1.8: Add maintenance_reason # Version 1.9: Add driver_internal_info # Version 1.10: Add name and get_by_name() # Version 1.11: Add clean_step # Version 1.12: Add raid_config and target_raid_config # Version 1.13: Add touch_provisioning() # Version 1.14: Add _validate_property_values() and make create() # and save() validate the input of property values. 
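    # A note on the FlexibleDictField entries below (see
    # ironic.objects.fields): string values are coerced with
    # ast.literal_eval, so, illustratively, assigning "{'foo': 1}" stores
    # {'foo': 1}, and a None value reads back as {} because
    # FlexibleDictField overrides _null().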
VERSION = '1.14' dbapi = db_api.get_instance() fields = { 'id': object_fields.IntegerField(), 'uuid': object_fields.UUIDField(nullable=True), 'name': object_fields.StringField(nullable=True), 'chassis_id': object_fields.IntegerField(nullable=True), 'instance_uuid': object_fields.UUIDField(nullable=True), 'driver': object_fields.StringField(nullable=True), 'driver_info': object_fields.FlexibleDictField(nullable=True), 'driver_internal_info': object_fields.FlexibleDictField(nullable=True), # A clean step dictionary, indicating the current clean step # being executed, or None, indicating cleaning is not in progress # or has not yet started. 'clean_step': object_fields.FlexibleDictField(nullable=True), 'raid_config': object_fields.FlexibleDictField(nullable=True), 'target_raid_config': object_fields.FlexibleDictField(nullable=True), 'instance_info': object_fields.FlexibleDictField(nullable=True), 'properties': object_fields.FlexibleDictField(nullable=True), 'reservation': object_fields.StringField(nullable=True), # a reference to the id of the conductor service, not its hostname, # that has most recently performed some action which could require # local state to be maintained (eg, built a PXE config) 'conductor_affinity': object_fields.IntegerField(nullable=True), # One of states.POWER_ON|POWER_OFF|NOSTATE|ERROR 'power_state': object_fields.StringField(nullable=True), # Set to one of states.POWER_ON|POWER_OFF when a power operation # starts, and set to NOSTATE when the operation finishes # (successfully or unsuccessfully). 'target_power_state': object_fields.StringField(nullable=True), 'provision_state': object_fields.StringField(nullable=True), 'provision_updated_at': object_fields.DateTimeField(nullable=True), 'target_provision_state': object_fields.StringField(nullable=True), 'maintenance': object_fields.BooleanField(), 'maintenance_reason': object_fields.StringField(nullable=True), 'console_enabled': object_fields.BooleanField(), # Any error from the most recent (last) asynchronous transaction # that started but failed to finish. 'last_error': object_fields.StringField(nullable=True), 'inspection_finished_at': object_fields.DateTimeField(nullable=True), 'inspection_started_at': object_fields.DateTimeField(nullable=True), 'extra': object_fields.FlexibleDictField(nullable=True), } def _validate_property_values(self, properties): """Check if the input of local_gb, cpus and memory_mb are valid. :param properties: a dict contains the node's information. """ if not properties: return invalid_msgs_list = [] for param in REQUIRED_INT_PROPERTIES: value = properties.get(param) if value is None: continue try: int_value = int(value) assert int_value >= 0 except (ValueError, AssertionError): msg = (('%(param)s=%(value)s') % {'param': param, 'value': value}) invalid_msgs_list.append(msg) if invalid_msgs_list: msg = (_('The following properties for node %(node)s ' 'should be non-negative integers, ' 'but provided values are: %(msgs)s') % {'node': self.uuid, 'msgs': ', '.join(invalid_msgs_list)}) raise exception.InvalidParameterValue(msg) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def get(cls, context, node_id): """Find a node based on its id or uuid and return a Node object. :param node_id: the id *or* uuid of a node. :returns: a :class:`Node` object. 
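        :raises: InvalidIdentity if node_id is neither int-like nor
            uuid-like.

        Dispatch sketch (illustrative only; ``ctx`` is a hypothetical
        security context)::

            node = Node.get(ctx, 42)         # int-like -> get_by_id()
            node = Node.get(ctx, some_uuid)  # uuid-like -> get_by_uuid()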
""" if strutils.is_int_like(node_id): return cls.get_by_id(context, node_id) elif uuidutils.is_uuid_like(node_id): return cls.get_by_uuid(context, node_id) else: raise exception.InvalidIdentity(identity=node_id) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def get_by_id(cls, context, node_id): """Find a node based on its integer id and return a Node object. :param node_id: the id of a node. :returns: a :class:`Node` object. """ db_node = cls.dbapi.get_node_by_id(node_id) node = Node._from_db_object(cls(context), db_node) return node # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def get_by_uuid(cls, context, uuid): """Find a node based on uuid and return a Node object. :param uuid: the uuid of a node. :returns: a :class:`Node` object. """ db_node = cls.dbapi.get_node_by_uuid(uuid) node = Node._from_db_object(cls(context), db_node) return node # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def get_by_name(cls, context, name): """Find a node based on name and return a Node object. :param name: the logical name of a node. :returns: a :class:`Node` object. """ db_node = cls.dbapi.get_node_by_name(name) node = Node._from_db_object(cls(context), db_node) return node # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def get_by_instance_uuid(cls, context, instance_uuid): """Find a node based on the instance uuid and return a Node object. :param uuid: the uuid of the instance. :returns: a :class:`Node` object. """ db_node = cls.dbapi.get_node_by_instance(instance_uuid) node = Node._from_db_object(cls(context), db_node) return node # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def list(cls, context, limit=None, marker=None, sort_key=None, sort_dir=None, filters=None): """Return a list of Node objects. :param context: Security context. :param limit: maximum number of resources to return in a single result. :param marker: pagination marker for large data sets. :param sort_key: column to sort results by. :param sort_dir: direction to sort. "asc" or "desc". :param filters: Filters to apply. :returns: a list of :class:`Node` object. """ db_nodes = cls.dbapi.get_node_list(filters=filters, limit=limit, marker=marker, sort_key=sort_key, sort_dir=sort_dir) return [Node._from_db_object(cls(context), obj) for obj in db_nodes] # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. 
# Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def reserve(cls, context, tag, node_id): """Get and reserve a node. To prevent other ManagerServices from manipulating the given Node while a Task is performed, mark it reserved by this host. :param context: Security context. :param tag: A string uniquely identifying the reservation holder. :param node_id: A node id or uuid. :raises: NodeNotFound if the node is not found. :returns: a :class:`Node` object. """ db_node = cls.dbapi.reserve_node(tag, node_id) node = Node._from_db_object(cls(context), db_node) return node # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def release(cls, context, tag, node_id): """Release the reservation on a node. :param context: Security context. :param tag: A string uniquely identifying the reservation holder. :param node_id: A node id or uuid. :raises: NodeNotFound if the node is not found. """ cls.dbapi.release_node(tag, node_id) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable def create(self, context=None): """Create a Node record in the DB. Column-wise updates will be made based on the result of self.what_changed(). If target_power_state is provided, it will be checked against the in-database copy of the node before updates are made. :param context: Security context. NOTE: This should only be used internally by the indirection_api. Unfortunately, RPC requires context as the first argument, even though we don't use it. A context should be set when instantiating the object, e.g.: Node(context) :raises: InvalidParameterValue if some property values are invalid. """ values = self.obj_get_changes() self._validate_property_values(values.get('properties')) db_node = self.dbapi.create_node(values) self._from_db_object(self, db_node) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable def destroy(self, context=None): """Delete the Node from the DB. :param context: Security context. NOTE: This should only be used internally by the indirection_api. Unfortunately, RPC requires context as the first argument, even though we don't use it. A context should be set when instantiating the object, e.g.: Node(context) """ self.dbapi.destroy_node(self.uuid) self.obj_reset_changes() # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable def save(self, context=None): """Save updates to this Node. Column-wise updates will be made based on the result of self.what_changed(). If target_power_state is provided, it will be checked against the in-database copy of the node before updates are made. :param context: Security context. NOTE: This should only be used internally by the indirection_api. 
Unfortunately, RPC requires context as the first argument, even though we don't use it. A context should be set when instantiating the object, e.g.: Node(context) :raises: InvalidParameterValue if some property values are invalid. """ updates = self.obj_get_changes() self._validate_property_values(updates.get('properties')) if 'driver' in updates and 'driver_internal_info' not in updates: # Clean driver_internal_info when changes driver self.driver_internal_info = {} updates = self.obj_get_changes() self.dbapi.update_node(self.uuid, updates) self.obj_reset_changes() # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable def refresh(self, context=None): """Refresh the object by re-fetching from the DB. :param context: Security context. NOTE: This should only be used internally by the indirection_api. Unfortunately, RPC requires context as the first argument, even though we don't use it. A context should be set when instantiating the object, e.g.: Node(context) """ current = self.__class__.get_by_uuid(self._context, self.uuid) self.obj_refresh(current) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable def touch_provisioning(self, context=None): """Touch the database record to mark the provisioning as alive.""" self.dbapi.touch_node_provisioning(self.id) ironic-5.1.0/ironic/objects/conductor.py0000664000567000056710000001250212674513466021445 0ustar jenkinsjenkins00000000000000# coding=utf-8 # # Copyright 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_versionedobjects import base as object_base from ironic.common.i18n import _ from ironic.db import api as db_api from ironic.objects import base from ironic.objects import fields as object_fields @base.IronicObjectRegistry.register class Conductor(base.IronicObject, object_base.VersionedObjectDictCompat): # Version 1.0: Initial version # Version 1.1: Add register() and unregister(), make the context parameter # to touch() optional. VERSION = '1.1' dbapi = db_api.get_instance() fields = { 'id': object_fields.IntegerField(), 'drivers': object_fields.ListOfStringsField(nullable=True), 'hostname': object_fields.StringField(), } # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def get_by_hostname(cls, context, hostname): """Get a Conductor record by its hostname. :param hostname: the hostname on which a Conductor is running :returns: a :class:`Conductor` object. 
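        Illustrative only (``ctx`` is a hypothetical security context)::

            cond = Conductor.get_by_hostname(ctx, 'conductor-host-1')
            cond.touch()  # mark this conductor's record as up to date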
""" db_obj = cls.dbapi.get_conductor(hostname) conductor = Conductor._from_db_object(cls(context), db_obj) return conductor def save(self, context): """Save is not supported by Conductor objects.""" raise NotImplementedError( _('Cannot update a conductor record directly.')) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable def refresh(self, context=None): """Loads and applies updates for this Conductor. Loads a :class:`Conductor` with the same uuid from the database and checks for updated attributes. Updates are applied from the loaded chassis column by column, if there are any updates. :param context: Security context. NOTE: This should only be used internally by the indirection_api. Unfortunately, RPC requires context as the first argument, even though we don't use it. A context should be set when instantiating the object, e.g.: Conductor(context) """ current = self.__class__.get_by_hostname(self._context, hostname=self.hostname) self.obj_refresh(current) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable def touch(self, context=None): """Touch this conductor's DB record, marking it as up-to-date.""" self.dbapi.touch_conductor(self.hostname) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable @classmethod def register(cls, context, hostname, drivers, update_existing=False): """Register an active conductor with the cluster. :param hostname: the hostname on which the conductor will run :param drivers: the list of drivers enabled in the conductor :param update_existing: When false, registration will raise an exception when a conflicting online record is found. When true, will overwrite the existing record. Default: False. :raises: ConductorAlreadyRegistered :returns: a :class:`Conductor` object. """ db_cond = cls.dbapi.register_conductor({'hostname': hostname, 'drivers': drivers}, update_existing=update_existing) return Conductor._from_db_object(cls(context), db_cond) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable def unregister(self, context=None): """Remove this conductor from the service registry.""" self.dbapi.unregister_conductor(self.hostname) ironic-5.1.0/ironic/objects/__init__.py0000664000567000056710000000254412674513466021211 0ustar jenkinsjenkins00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. # NOTE(comstud): You may scratch your head as you see code that imports # this module and then accesses attributes for objects such as Node, # etc, yet you do not see these attributes in here. Never fear, there is # a little bit of magic. When objects are registered, an attribute is set # on this module automatically, pointing to the newest/latest version of # the object. def register_all(): # NOTE(danms): You must make sure your object gets imported in this # function in order for it to be registered by services that may # need to receive it via RPC. __import__('ironic.objects.chassis') __import__('ironic.objects.conductor') __import__('ironic.objects.node') __import__('ironic.objects.port') __import__('ironic.objects.portgroup') ironic-5.1.0/ironic/objects/portgroup.py0000664000567000056710000003054612674513466021516 0ustar jenkinsjenkins00000000000000# coding=utf-8 # # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import strutils from oslo_utils import uuidutils from oslo_versionedobjects import base as object_base from ironic.common import exception from ironic.common import utils from ironic.db import api as dbapi from ironic.objects import base from ironic.objects import fields as object_fields @base.IronicObjectRegistry.register class Portgroup(base.IronicObject, object_base.VersionedObjectDictCompat): # Version 1.0: Initial version VERSION = '1.0' dbapi = dbapi.get_instance() fields = { 'id': object_fields.IntegerField(), 'uuid': object_fields.UUIDField(nullable=True), 'name': object_fields.StringField(nullable=True), 'node_id': object_fields.IntegerField(nullable=True), 'address': object_fields.MACAddressField(nullable=True), 'extra': object_fields.FlexibleDictField(nullable=True), } @staticmethod def _from_db_object_list(db_objects, cls, context): """Converts a list of database entities to a list of formal objects.""" return [Portgroup._from_db_object(cls(context), obj) for obj in db_objects] # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def get(cls, context, portgroup_ident): """Find a portgroup based on its id, uuid, name or address. :param portgroup_ident: The id, uuid, name or address of a portgroup. :param context: Security context :returns: A :class:`Portgroup` object. 
:raises: InvalidIdentity """ if strutils.is_int_like(portgroup_ident): return cls.get_by_id(context, portgroup_ident) elif uuidutils.is_uuid_like(portgroup_ident): return cls.get_by_uuid(context, portgroup_ident) elif utils.is_valid_mac(portgroup_ident): return cls.get_by_address(context, portgroup_ident) elif utils.is_valid_logical_name(portgroup_ident): return cls.get_by_name(context, portgroup_ident) else: raise exception.InvalidIdentity(identity=portgroup_ident) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def get_by_id(cls, context, portgroup_id): """Find a portgroup based on its integer id and return a Portgroup object. :param portgroup_id: The id of a portgroup. :param context: Security context :returns: A :class:`Portgroup` object. :raises: PortgroupNotFound """ db_portgroup = cls.dbapi.get_portgroup_by_id(portgroup_id) portgroup = Portgroup._from_db_object(cls(context), db_portgroup) return portgroup # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def get_by_uuid(cls, context, uuid): """Find a portgroup based on uuid and return a :class:`Portgroup` object. :param uuid: The uuid of a portgroup. :param context: Security context :returns: A :class:`Portgroup` object. :raises: PortgroupNotFound """ db_portgroup = cls.dbapi.get_portgroup_by_uuid(uuid) portgroup = Portgroup._from_db_object(cls(context), db_portgroup) return portgroup # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def get_by_address(cls, context, address): """Find a portgroup based on address and return a :class:`Portgroup` object. :param address: The MAC address of a portgroup. :param context: Security context :returns: A :class:`Portgroup` object. :raises: PortgroupNotFound """ db_portgroup = cls.dbapi.get_portgroup_by_address(address) portgroup = Portgroup._from_db_object(cls(context), db_portgroup) return portgroup # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def get_by_name(cls, context, name): """Find a portgroup based on name and return a :class:`Portgroup` object. :param name: The name of a portgroup. :param context: Security context :returns: A :class:`Portgroup` object. :raises: PortgroupNotFound """ db_portgroup = cls.dbapi.get_portgroup_by_name(name) portgroup = Portgroup._from_db_object(cls(context), db_portgroup) return portgroup # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def list(cls, context, limit=None, marker=None, sort_key=None, sort_dir=None): """Return a list of Portgroup objects.
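        A sketch of a typical call (illustrative only; ``ctx`` is a
        hypothetical security context)::

            portgroups = Portgroup.list(ctx, limit=50,
                                        sort_key='created_at',
                                        sort_dir='desc')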
:param context: Security context. :param limit: Maximum number of resources to return in a single result. :param marker: Pagination marker for large data sets. :param sort_key: Column to sort results by. :param sort_dir: Direction to sort. "asc" or "desc". :returns: A list of :class:`Portgroup` object. :raises: InvalidParameterValue """ db_portgroups = cls.dbapi.get_portgroup_list(limit=limit, marker=marker, sort_key=sort_key, sort_dir=sort_dir) return Portgroup._from_db_object_list(db_portgroups, cls, context) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def list_by_node_id(cls, context, node_id, limit=None, marker=None, sort_key=None, sort_dir=None): """Return a list of Portgroup objects associated with a given node ID. :param context: Security context. :param node_id: The ID of the node. :param limit: Maximum number of resources to return in a single result. :param marker: Pagination marker for large data sets. :param sort_key: Column to sort results by. :param sort_dir: Direction to sort. "asc" or "desc". :returns: A list of :class:`Portgroup` object. :raises: InvalidParameterValue """ db_portgroups = cls.dbapi.get_portgroups_by_node_id(node_id, limit=limit, marker=marker, sort_key=sort_key, sort_dir=sort_dir) return Portgroup._from_db_object_list(db_portgroups, cls, context) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable def create(self, context=None): """Create a Portgroup record in the DB. :param context: Security context. NOTE: This should only be used internally by the indirection_api. Unfortunately, RPC requires context as the first argument, even though we don't use it. A context should be set when instantiating the object, e.g.: Portgroup(context) :raises: DuplicateName, MACAlreadyExists, PortgroupAlreadyExists """ values = self.obj_get_changes() db_portgroup = self.dbapi.create_portgroup(values) self._from_db_object(self, db_portgroup) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable def destroy(self, context=None): """Delete the Portgroup from the DB. :param context: Security context. NOTE: This should only be used internally by the indirection_api. Unfortunately, RPC requires context as the first argument, even though we don't use it. A context should be set when instantiating the object, e.g.: Portgroup(context) :raises: PortgroupNotEmpty, PortgroupNotFound """ self.dbapi.destroy_portgroup(self.uuid) self.obj_reset_changes() # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable def save(self, context=None): """Save updates to this Portgroup. Updates will be made column by column based on the result of self.what_changed(). :param context: Security context. NOTE: This should only be used internally by the indirection_api. 
Unfortunately, RPC requires context as the first argument, even though we don't use it. A context should be set when instantiating the object, e.g.: Portgroup(context) :raises: PortgroupNotFound, DuplicateName, MACAlreadyExists """ updates = self.obj_get_changes() updated_portgroup = self.dbapi.update_portgroup(self.uuid, updates) self._from_db_object(self, updated_portgroup) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable def refresh(self, context=None): """Loads updates for this Portgroup. Loads a portgroup with the same uuid from the database and checks for updated attributes. Updates are applied from the loaded portgroup column by column, if there are any updates. :param context: Security context. NOTE: This should only be used internally by the indirection_api. Unfortunately, RPC requires context as the first argument, even though we don't use it. A context should be set when instantiating the object, e.g.: Portgroup(context) :raises: PortgroupNotFound """ current = self.__class__.get_by_uuid(self._context, uuid=self.uuid) self.obj_refresh(current) ironic-5.1.0/ironic/objects/chassis.py0000664000567000056710000002132112674513466021101 0ustar jenkinsjenkins00000000000000# coding=utf-8 # # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import strutils from oslo_utils import uuidutils from oslo_versionedobjects import base as object_base from ironic.common import exception from ironic.db import api as dbapi from ironic.objects import base from ironic.objects import fields as object_fields @base.IronicObjectRegistry.register class Chassis(base.IronicObject, object_base.VersionedObjectDictCompat): # Version 1.0: Initial version # Version 1.1: Add get() and get_by_id() and make get_by_uuid() # only work with a uuid # Version 1.2: Add create() and destroy() # Version 1.3: Add list() VERSION = '1.3' dbapi = dbapi.get_instance() fields = { 'id': object_fields.IntegerField(), 'uuid': object_fields.UUIDField(nullable=True), 'extra': object_fields.FlexibleDictField(nullable=True), 'description': object_fields.StringField(nullable=True), } # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def get(cls, context, chassis_id): """Find a chassis based on its id or uuid and return a Chassis object. :param chassis_id: the id *or* uuid of a chassis. :returns: a :class:`Chassis` object. """ if strutils.is_int_like(chassis_id): return cls.get_by_id(context, chassis_id) elif uuidutils.is_uuid_like(chassis_id): return cls.get_by_uuid(context, chassis_id) else: raise exception.InvalidIdentity(identity=chassis_id) # NOTE(xek): We don't want to enable RPC on this call just yet. 
Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def get_by_id(cls, context, chassis_id): """Find a chassis based on its integer id and return a Chassis object. :param chassis_id: the id of a chassis. :returns: a :class:`Chassis` object. """ db_chassis = cls.dbapi.get_chassis_by_id(chassis_id) chassis = Chassis._from_db_object(cls(context), db_chassis) return chassis # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def get_by_uuid(cls, context, uuid): """Find a chassis based on uuid and return a :class:`Chassis` object. :param uuid: the uuid of a chassis. :param context: Security context :returns: a :class:`Chassis` object. """ db_chassis = cls.dbapi.get_chassis_by_uuid(uuid) chassis = Chassis._from_db_object(cls(context), db_chassis) return chassis # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable_classmethod @classmethod def list(cls, context, limit=None, marker=None, sort_key=None, sort_dir=None): """Return a list of Chassis objects. :param context: Security context. :param limit: maximum number of resources to return in a single result. :param marker: pagination marker for large data sets. :param sort_key: column to sort results by. :param sort_dir: direction to sort. "asc" or "desc". :returns: a list of :class:`Chassis` objects. """ db_chassis = cls.dbapi.get_chassis_list(limit=limit, marker=marker, sort_key=sort_key, sort_dir=sort_dir) return [Chassis._from_db_object(cls(context), obj) for obj in db_chassis] # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable def create(self, context=None): """Create a Chassis record in the DB. Column values are taken from the result of self.what_changed(). :param context: Security context. NOTE: This should only be used internally by the indirection_api. Unfortunately, RPC requires context as the first argument, even though we don't use it. A context should be set when instantiating the object, e.g.: Chassis(context) """ values = self.obj_get_changes() db_chassis = self.dbapi.create_chassis(values) self._from_db_object(self, db_chassis) # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable # methods can be used in the future to replace current explicit RPC calls. # Implications of calling new remote procedures should be thought through. # @object_base.remotable def destroy(self, context=None): """Delete the Chassis from the DB. :param context: Security context. NOTE: This should only be used internally by the indirection_api. Unfortunately, RPC requires context as the first argument, even though we don't use it.
    # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable
    # methods can be used in the future to replace current explicit RPC calls.
    # Implications of calling new remote procedures should be thought through.
    # @object_base.remotable
    def save(self, context=None):
        """Save updates to this Chassis.

        Updates will be made column by column based on the result of
        self.obj_get_changes().

        :param context: Security context. NOTE: This should only
                        be used internally by the indirection_api.
                        Unfortunately, RPC requires context as the first
                        argument, even though we don't use it.
                        A context should be set when instantiating the
                        object, e.g.: Chassis(context)
        """
        updates = self.obj_get_changes()
        updated_chassis = self.dbapi.update_chassis(self.uuid, updates)
        self._from_db_object(self, updated_chassis)

    # NOTE(xek): We don't want to enable RPC on this call just yet. Remotable
    # methods can be used in the future to replace current explicit RPC calls.
    # Implications of calling new remote procedures should be thought through.
    # @object_base.remotable
    def refresh(self, context=None):
        """Loads and applies updates for this Chassis.

        Loads a :class:`Chassis` with the same uuid from the database and
        checks for updated attributes. Updates are applied from
        the loaded chassis column by column, if there are any updates.

        :param context: Security context. NOTE: This should only
                        be used internally by the indirection_api.
                        Unfortunately, RPC requires context as the first
                        argument, even though we don't use it.
                        A context should be set when instantiating the
                        object, e.g.: Chassis(context)
        """
        current = self.__class__.get_by_uuid(self._context, uuid=self.uuid)
        self.obj_refresh(current)
ironic-5.1.0/ironic/locale/0000775000567000056710000000000012674513633016675 5ustar jenkinsjenkins00000000000000
ironic-5.1.0/ironic/locale/ironic-log-critical.pot0000664000567000056710000000125512674513466023262 0ustar jenkinsjenkins00000000000000
# Translations template for ironic.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the ironic project.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2015.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: ironic 2015.2.0.dev476\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2015-08-11 06:21+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.0\n"

#: ironic/conductor/manager.py:313
msgid "Failed to start keepalive"
msgstr ""
ironic-5.1.0/ironic/locale/ironic-log-info.pot0000664000567000056710000002175212674513466022427 0ustar jenkinsjenkins00000000000000
# Translations template for ironic.
# Copyright (C) 2016 ORGANIZATION
# This file is distributed under the same license as the ironic project.
# FIRST AUTHOR , 2016.
# #, fuzzy msgid "" msgstr "" "Project-Id-Version: ironic 4.3.1.dev202\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" "POT-Creation-Date: 2016-01-27 06:37+0000\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language-Team: LANGUAGE \n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=utf-8\n" "Content-Transfer-Encoding: 8bit\n" "Generated-By: Babel 2.2.0\n" #: ironic/common/driver_factory.py:139 #, python-format msgid "Loaded the following drivers: %s" msgstr "" #: ironic/common/service.py:88 #, python-format msgid "Created RPC server for service %(service)s on host %(host)s." msgstr "" #: ironic/common/service.py:106 #, python-format msgid "Stopped RPC server for service %(service)s on host %(host)s." msgstr "" #: ironic/common/service.py:111 #, python-format msgid "" "Got signal SIGUSR1. Not deregistering on next shutdown of service " "%(service)s on host %(host)s." msgstr "" #: ironic/conductor/base_manager.py:156 #, python-format msgid "Successfully started conductor with hostname %(hostname)s." msgstr "" #: ironic/conductor/base_manager.py:181 #, python-format msgid "Successfully stopped conductor with hostname %(hostname)s." msgstr "" #: ironic/conductor/base_manager.py:187 #, python-format msgid "Not deregistering conductor with hostname %(hostname)s." msgstr "" #: ironic/conductor/manager.py:625 #, python-format msgid "Successfully unprovisioned node %(node)s with instance %(instance)s." msgstr "" #: ironic/conductor/manager.py:847 #, python-format msgid "" "Automated cleaning is disabled, node %s has been successfully moved to " "AVAILABLE state." msgstr "" #: ironic/conductor/manager.py:924 #, python-format msgid "Executing %(state)s on node %(node)s, remaining steps: %(steps)s" msgstr "" #: ironic/conductor/manager.py:934 #, python-format msgid "Executing %(step)s on node %(node)s" msgstr "" #: ironic/conductor/manager.py:961 #, python-format msgid "" "Clean step %(step)s on node %(node)s being executed asynchronously, " "waiting for driver." msgstr "" #: ironic/conductor/manager.py:973 #, python-format msgid "Node %(node)s finished clean step %(step)s" msgstr "" #: ironic/conductor/manager.py:991 #, python-format msgid "Node %s cleaning complete" msgstr "" #: ironic/conductor/manager.py:1108 #, python-format msgid "" "The current clean step \"%(clean_step)s\" for node %(node)s is not " "abortable. Adding a flag to abort the cleaning after the clean step is " "completed." msgstr "" #: ironic/conductor/manager.py:1203 #, python-format msgid "" "During sync_power_state, node %(node)s was not found and presumed deleted" " by another process." msgstr "" #: ironic/conductor/manager.py:1207 #, python-format msgid "" "During sync_power_state, node %(node)s was already locked by another " "process. Skip." msgstr "" #: ironic/conductor/manager.py:1489 #, python-format msgid "Successfully deleted node %(node)s." msgstr "" #: ironic/conductor/manager.py:1509 #, python-format msgid "" "Successfully deleted port %(port)s. The node associated with the port was" " %(node)s" msgstr "" #: ironic/conductor/manager.py:1583 #, python-format msgid "No console action was triggered because the console is already %s" msgstr "" #: ironic/conductor/manager.py:2194 #, python-format msgid "Successfully deployed node %(node)s with instance %(instance)s." msgstr "" #: ironic/conductor/manager.py:2310 #, python-format msgid "" "During sync_power_state, node %(node)s has no previous known state. " "Recording current state '%(state)s'." 
msgstr "" #: ironic/conductor/manager.py:2380 #, python-format msgid "Successfully inspected node %(node)s" msgstr "" #: ironic/conductor/utils.py:136 #, python-format msgid "Successfully set node %(node)s power state to %(state)s." msgstr "" #: ironic/drivers/modules/agent.py:425 #: ironic/drivers/modules/oneview/vendor.py:53 #, python-format msgid "Image successfully written to node %s" msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:249 #, python-format msgid "" "Node %s detected a clean version mismatch, resetting clean steps and " "rebooting the node." msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:285 #, python-format msgid "" "Agent on node %s returned cleaning command success, moving to next clean " "step" msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:430 #, python-format msgid "" "Initial lookup for node %s succeeded, agent is running and waiting for " "commands" msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:612 #: ironic/drivers/modules/iscsi_deploy.py:646 #: ironic/drivers/modules/oneview/vendor.py:113 #, python-format msgid "Deployment to node %s done" msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:659 #, python-format msgid "Local boot successfully configured for node %s" msgstr "" #: ironic/drivers/modules/image_cache.py:144 #, python-format msgid "Master cache miss for image %(href)s, starting download" msgstr "" #: ironic/drivers/modules/image_cache.py:281 #, python-format msgid "" "After cleaning up cache dir %(dir)s cache size %(actual)d is still larger" " than threshold %(expected)d" msgstr "" #: ironic/drivers/modules/image_cache.py:415 #, python-format msgid "" "Image %(href)s was last modified at %(remote_time)s. Deleting the cached " "copy \"%(cached_file)s since it was last modified at %(local_time)s and " "may be outdated." msgstr "" #: ironic/drivers/modules/inspector.py:75 #, python-format msgid "" "Inspection via ironic-inspector is disabled in configuration for driver " "%s. To enable, change [inspector] enabled = True." msgstr "" #: ironic/drivers/modules/inspector.py:167 #, python-format msgid "Node %s was sent to inspection to ironic-inspector" msgstr "" #: ironic/drivers/modules/inspector.py:214 #, python-format msgid "Inspection finished successfully for node %s" msgstr "" #: ironic/drivers/modules/ipmitool.py:160 #, python-format msgid "Option %(opt)s is not supported by ipmitool" msgstr "" #: ironic/drivers/modules/ipmitool.py:164 #, python-format msgid "Option %(opt)s is supported by ipmitool" msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:611 #, python-format msgid "Bootloader successfully installed on node %s" msgstr "" #: ironic/drivers/modules/wol.py:155 #, python-format msgid "" "Power off called for node %s. Wake-On-Lan does not support this " "operation. Manual intervention required to perform this action." msgstr "" #: ironic/drivers/modules/wol.py:178 #, python-format msgid "" "Reboot called for node %s. Wake-On-Lan does not fully support this " "operation. Trying to power on the node." msgstr "" #: ironic/drivers/modules/amt/management.py:95 #, python-format msgid "Successfully set boot device %(boot_device)s for node %(node_id)s" msgstr "" #: ironic/drivers/modules/amt/management.py:148 #, python-format msgid "Successfully enabled boot config for node %(node_id)s." 
msgstr "" #: ironic/drivers/modules/amt/power.py:118 #, python-format msgid "Power state set to %(state)s for node %(node_id)s" msgstr "" #: ironic/drivers/modules/ilo/common.py:351 #, python-format msgid "Attached virtual media %s successfully." msgstr "" #: ironic/drivers/modules/ilo/common.py:369 #, python-format msgid "Node %(uuid)s pending boot mode is %(boot_mode)s." msgstr "" #: ironic/drivers/modules/ilo/common.py:381 #, python-format msgid "Node %(uuid)s boot mode is set to %(boot_mode)s." msgstr "" #: ironic/drivers/modules/ilo/common.py:492 #: ironic/drivers/modules/irmc/boot.py:384 #, python-format msgid "Setting up node %s to boot from virtual media" msgstr "" #: ironic/drivers/modules/ilo/common.py:649 #, python-format msgid "Changed secure boot to %(mode)s for node %(node)s" msgstr "" #: ironic/drivers/modules/ilo/inspect.py:56 #, python-format msgid "Port created for MAC address %(address)s for node %(node)s" msgstr "" #: ironic/drivers/modules/ilo/inspect.py:207 #, python-format msgid "The node %s is not powered on. Powering on the node for inspection." msgstr "" #: ironic/drivers/modules/ilo/inspect.py:247 #, python-format msgid "Node %s inspected." msgstr "" #: ironic/drivers/modules/ilo/inspect.py:250 #, python-format msgid "" "The node %s was powered on for inspection. Powered off the node as " "inspection completed." msgstr "" #: ironic/drivers/modules/ilo/management.py:257 #, python-format msgid "" "Missing 'ilo_change_password' parameter in driver_info. Clean step " "'reset_ilo_credential' is not performed on node %s." msgstr "" #: ironic/drivers/modules/irmc/boot.py:458 #, python-format msgid "Attached virtual cdrom successfully for node %s" msgstr "" #: ironic/drivers/modules/irmc/boot.py:481 #, python-format msgid "Detached virtual cdrom successfully for node %s" msgstr "" #: ironic/drivers/modules/irmc/boot.py:514 #, python-format msgid "Attached virtual floppy successfully for node %s" msgstr "" #: ironic/drivers/modules/irmc/boot.py:537 #, python-format msgid "Detached virtual floppy successfully for node %s" msgstr "" ironic-5.1.0/ironic/locale/ja/0000775000567000056710000000000012674513633017267 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/locale/ja/LC_MESSAGES/0000775000567000056710000000000012674513633021054 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/locale/ja/LC_MESSAGES/ironic.po0000664000567000056710000026171212674513466022714 0ustar jenkinsjenkins00000000000000# Translations template for ironic. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the ironic project. # # Translators: # Tomoyuki KATO , 2013 # Akihiro Motoki , 2015. #zanata # KATO Tomoyuki , 2015. 
#zanata msgid "" msgstr "" "Project-Id-Version: ironic 4.3.1.dev202\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" "POT-Creation-Date: 2016-01-27 02:09+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2015-10-09 05:15+0000\n" "Last-Translator: KATO Tomoyuki \n" "Language: ja\n" "Plural-Forms: nplurals=1; plural=0;\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.7.3\n" "Language-Team: Japanese\n" #, python-format msgid " after the completion of step \"%s\"" msgstr "手順「%s」の完了後" #, fuzzy msgid " and " msgstr " および " #, python-format msgid "\"%s\" is not a valid \"msftocs_base_url\"" msgstr "「%s」は有効な「msftocs_base_url」ではありません。" #, python-format msgid "\"%s\" is not a valid \"msftocs_blade_id\"" msgstr "「%s」は有効な「msftocs_blade_id」ではありません。" #, python-format msgid "\"msftocs_blade_id\" must be greater than 0. The provided value is: %s" msgstr "「msftocs_blade_id」は 0 より大きい必要があります。指定された値: %s" msgid "\"swift_temp_url_duration\" must be a positive integer." msgstr "「swift_temp_url_duration」は正の整数である必要があります。" #, python-format msgid "%(err)s" msgstr "%(err)s" #, python-format msgid "%(error_msg)s. Missing are: %(missing_info)s" msgstr "%(error_msg)s。不足情報: %(missing_info)s" #, python-format msgid "" "%(exec_error)s\n" "Command: %(command)s" msgstr "" "%(exec_error)s\n" "コマンド: %(command)s" #, python-format msgid "%(operation)s failed, error: %(error)s" msgstr "%(operation)s が失敗しました。エラー: %(error)s" #, python-format msgid "%(operation)s not supported. error: %(error)s" msgstr "%(operation)s はサポートされていません。エラー: %(error)s" #, python-format msgid "%(param)s not provided" msgstr "%(param)s が指定されていません" #, python-format msgid "%s is not JSON serializable" msgstr "%s が JSON シリアライズ可能ではありません" #, python-format msgid "%s is not an integer value." msgstr "%s は整数値ではありません。" #, python-format msgid "'%s' contains non-ASCII symbol." msgstr "「%s」に非 ASCII 記号が含まれています。" #, python-format msgid "'%s' is a mandatory attribute and can not be removed" msgstr "「%s」は必須属性であり、削除できません" #, python-format msgid "'%s' is an internal attribute and can not be updated" msgstr "「%s」は内部属性であり、更新できません" #, python-format msgid "'%s' not supplied to DracDriver." msgstr "「%s」が DracDriver に提供されていません。" msgid "'add' and 'replace' operations need a value" msgstr "'add' と 'replace' 処理には、値が必要です。" msgid "'drac_path' contains non-ASCII symbol." msgstr "「drac_path」に非 ASCII 記号が含まれています。" msgid "'drac_protocol' contains non-ASCII symbol." msgstr "「drac_protocol」に非 ASCII 記号が含まれています。" msgid "'irmc_auth_method' has unsupported value." msgstr "「irmc_auth_method」にサポートされない値があります。" msgid "'irmc_client_timeout' is not integer type." msgstr "「irmc_client_timeout」が整数ではありません。" msgid "'irmc_port' has unsupported value." msgstr "「irmc_port」にサポートされない値があります。" msgid "'irmc_sensor_method' has unsupported value." msgstr "「irmc_sensor_method」にサポートされない値があります。" msgid "'qemu-img info' parsing failed." msgstr "「qemu-img info」の解析に失敗しました。" #, python-format msgid "'set_power_state' called with invalid power state '%s'" msgstr "無効な電源状態「%s」で「set_power_state」が呼び出されました" msgid "" "(list of) filename(s) of optional private key(s) for authentication. One of " "this, ssh_key_contents, or ssh_password must be specified." msgstr "" "認証に使用するオプションの秘密鍵のファイル名 (の一覧)。この中の 1 つ、" "ssh_key_contents、または ssh_password のいずれかを指定する必要があります。" #, python-format msgid "A chassis with UUID %(uuid)s already exists." msgstr "UUID %(uuid)s のシャーシは既に存在します。" #, python-format msgid "A node with UUID %(uuid)s already exists." 
msgstr "UUID %(uuid)s のノードは既に存在します。" #, python-format msgid "A node with name %(name)s already exists." msgstr "名前が %(name)s のノードは既に存在します。" #, python-format msgid "A port with MAC address %(mac)s already exists." msgstr "MAC アドレス %(mac)s のポートは既に存在します。" #, python-format msgid "A port with UUID %(uuid)s already exists." msgstr "UUID %(uuid)s のポートは既に存在します。" #, python-format msgid "AMT call failed: %(cmd)s." msgstr "AMT 呼び出しに失敗しました: %(cmd)s。" #, python-format msgid "AMT driver requires the following to be set in node's driver_info: %s." msgstr "" "AMT ドライバーはノードの driver_info に次のパラメーターの設定を必要とします: " "%s。" msgid "API endpoint. Required." msgstr "API エンドポイント。必須。" msgid "API version to use for communicating with the ramdisk agent." msgstr "ramdisk エージェントと通信するために使用する API バージョン。" #, python-format msgid "" "Adding a config drive is only supported when setting provision state to %s" msgstr "" "コンフィグドライブの追加は、配備状態が %s に設定されている場合にのみサポート" "されます" #, python-format msgid "Adding a new attribute (%s) to the root of the resource is not allowed" msgstr "リソースのルートへの新規属性 (%s) の追加は許可されません" msgid "Additional append parameters for baremetal PXE boot." msgstr "ベアメタル PXE ブート向けの追加の append パラメーター。" msgid "Agent driver requires agent_url in driver_internal_info" msgstr "" "エージェントドライバーは driver_internal_info 内に agent_url を必要とします" #, python-format msgid "Agent on node %(node)s returned bad command result: %(result)s" msgstr "" "ノード %(node)s 上のエージェントから正しくないコマンド結果が返されました: " "%(result)s" #, python-format msgid "" "Agent returned error for clean step %(step)s on node %(node)s : %(err)s." msgstr "" "エージェントからノード %(node)s でのクリーニングステップ %(step)s に関するエ" "ラーが返されました: %(err)s。" #, fuzzy, python-format msgid "" "Agent returned unknown status for clean step %(step)s on node %(node)s : " "%(err)s." msgstr "" "エージェントからノード %(node)s でのクリーニング・ステップ %(step)s に関して" "不明な状況が返されました: %(err)s。" msgid "" "An exclusive lock is required, but the current context has a shared lock." msgstr "" "排他ロックが要求されましたが、現在のコンテキストには共有ロックがあります。" msgid "" "An integer value between 0 and 32 is required for " "swift_store_multiple_containers_seed." msgstr "" "swift_store_multiple_containers_seed には 0 から 32 までの整数値が必要です。" msgid "An unknown exception occurred." msgstr "不明な例外が発生しました。" #, python-format msgid "" "Another job with ID %(job_id)s is already created to configure %(target)s. " "Wait until existing job is completed or is canceled" msgstr "" "別の ID %(job_id)s のジョブが、すでに %(target)s を設定するために作成されてい" "ます。既存のジョブが完了するか、取り消されるまで待ってください。" #, python-format msgid "Asynchronous exception for node %(node)s: %(msg)s exception: %(e)s" msgstr "ノード %(node)s の非同期例外: %(msg)s 例外: %(e)s" msgid "" "Authentication method for iRMC operations; either 'basic' or 'digest'. The " "default value is 'basic'. Optional." msgstr "" "iRMC 操作の認証方式。「basic」または「digest」。デフォルト値は「basic」。オプ" "ション。" msgid "" "Authentication method to be used for iRMC operations, either \"basic\" or " "\"digest\"" msgstr "iRMC の操作に使用する認証方式。「basic」か「digest」のどちらか。" msgid "Available commands" msgstr "利用できるコマンド" msgid "" "Base url of the OCS chassis manager REST API, e.g.: http://10.0.0.1:8000. " "Required." msgstr "" "OCS シャーシマネージャーの REST API のベース URL。例: http://10.0.0.1:8000。" "必須。" msgid "" "Blade id, must be a number between 1 and the maximum number of blades " "available in the chassis. Required." msgstr "" "ブレード ID。有効な値は 1 からシャーシ内で利用できるブレードの最大数の間で" "す。必須。" #, python-format msgid "Boot ISO file, %(boot_iso)s, not found for node: %(node)s." 
msgstr "" "ノード %(node)s のブート ISO ファイル %(boot_iso)s が見つかりませんでした。" msgid "Bootfile DHCP parameter for UEFI boot mode." msgstr "UEFI ブートモードの起動ファイルの DHCP のパラメーター。" msgid "Bootfile DHCP parameter." msgstr "起動ファイルの DHCP のパラメーター。" msgid "Broadcast IP address; defaults to 255.255.255.255. Optional." msgstr "" "ブロードキャスト IP アドレス。デフォルトは 255.255.255.255。オプション。" msgid "CIMC Manager admin username. Required." msgstr "CIMC Manager の管理ユーザー名。必須。" msgid "CIMC Manager password. Required." msgstr "CIMC Manager のパスワード。必須。" #, python-format msgid "Cannot compile public API routes: %s" msgstr "パブリック API ルートをコンパイルできません: %s" #, python-format msgid "" "Cannot complete the requested action because chassis %(chassis)s contains " "nodes." msgstr "" "シャーシ %(chassis)s にノードが含まれているため、要求されたアクションを完了で" "きません。" #, python-format msgid "" "Cannot continue cleaning on %(node)s, node is in %(state)s state, should be " "%(clean_state)s" msgstr "" "%(node)s のクリーニングを続行できません。ノードが %(state)s 状態で" "す。%(clean_state)s でなければなりません" #, python-format msgid "" "Cannot create directory '%(path)s' for console PID file. Reason: %(reason)s." msgstr "" "コンソール PID ファイルのディレクトリー「%(path)s」を作成できません。理由: " "%(reason)s。" #, python-format msgid "Cannot create node with invalid name %(name)s" msgstr "無効な名前 %(name)s でノードを作成することはできません" msgid "Cannot deploy whole disk image with swap or ephemeral size set" msgstr "" "スワップまたは一時サイズ設定ではディスクイメージ全体をデプロイできません" msgid "" "Cannot determine image size as there is no Content-Length header specified " "in response to HEAD request." msgstr "" "HEAD 要求に対する応答で Content-Length ヘッダーが指定されていないため、イメー" "ジ容量を判別できません。" msgid "Cannot overwrite UUID for an existing Chassis." msgstr "既存のシャーシの UUID を上書きできません。" msgid "Cannot overwrite UUID for an existing Node." msgstr "既存のノードの UUID を上書きできません。" msgid "Cannot overwrite UUID for an existing Port." msgstr "既存のポートの UUID を上書きできません。" msgid "Cannot update a conductor record directly." msgstr "conductor のレコードを直接更新することはできません。" msgid "" "Cannot validate PXE bootloader. Some parameters were missing in node's " "driver_info" msgstr "" "PXE ブートローダーを検証できません。いくつかのパラメーターがノードの " "driver_info に不足しています。" msgid "" "Cannot validate iSCSI deploy. Some parameters were missing in node's " "instance_info" msgstr "" "iSCSI デプロイを検証できません。ノードの instance_info でいくつかのパラメー" "ターが不足しています。" #, python-format msgid "" "Cannot validate parameter for iSCSI deploy. Invalid parameter %(param)s. " "Reason: %(reason)s" msgstr "" "iSCSI デプロイのパラメーターを検証できません。無効なパラメーター: %(param)s、" "理由: %(reason)s" #, python-format msgid "Chassis %(chassis)s could not be found." msgstr "シャーシ %(chassis)s が見つかりませんでした。" msgid "Chassis id not specified." msgstr "シャーシ ID が指定されていません。" #, python-format msgid "Cisco IMC exception occurred for node %(node)s: %(error)s" msgstr "Cisco IMC 例外がノード %(node)s で発生しました: %(error)s" #, python-format msgid "" "Cisco UCS client: connection failed for node %(node)s. Reason: %(error)s" msgstr "" "Cisco UCS クライアント: %(node)s ノードの接続に失敗しました。理由: %(error)s" #, python-format msgid "" "Cisco UCS client: operation %(operation)s failed for node %(node)s. Reason: " "%(error)s" msgstr "" "Cisco UCS クライアント: %(node)s ノードの %(operation)s 処理に失敗しました。" "理由: %(error)s" #, python-format msgid "Clean step %(step)s failed on node %(node)s with error: %(err)s" msgstr "" "クリーニングステップ %(step)s がノード %(node)s で失敗しました。エラー: " "%(err)s" #, python-format msgid "Clean step '%s' not found. 'proliantutils' package needs to be updated." 
msgstr "" "クリーニングステップ「%s」が見つかりません。「proliantutils」パッケージは更新" "が必要です。" #, python-format msgid "Cleanup failed for node %(node)s after deploy timeout: %(error)s" msgstr "" "デプロイのタイムアウト後、ノード %(node)s のクリーンアップに失敗しました: " "%(error)s" #, python-format msgid "" "Command: %(command)s.\n" "Exit code: %(return_code)s.\n" "Stdout: %(stdout)r\n" "Stderr: %(stderr)r" msgstr "" "コマンド: %(command)s。\n" "終了コード: %(return_code)s。\n" "標準出力: %(stdout)r\n" "標準エラー: %(stderr)r" #, python-format msgid "Conductor %(conductor)s already registered." msgstr "コンダクター %(conductor)s は既に登録されています。" #, python-format msgid "" "Conductor %(conductor)s cannot be started because no drivers were loaded." msgstr "" "ドライバーが読み込まれていないため、コンダクター %(conductor)s を開始できませ" "ん。" #, python-format msgid "Conductor %(conductor)s could not be found." msgstr "コンダクター %(conductor)s が見つかりませんでした。" msgid "Conflict." msgstr "競合があります。" #, python-format msgid "" "Conflict: Whole disk image being used for deploy, but cannot be used with " "node %(node_uuid)s configured to use UEFI boot with netboot option" msgstr "" "競合: ディスク・イメージ全体がデプロイに使用されていますが、netboot オプショ" "ン指定で UEFI を使用するように構成されたノード %(node_uuid)s では使用できませ" "ん" msgid "Connection failed" msgstr "接続が失敗しました" #, python-format msgid "Connection to glance host %(host)s:%(port)s failed: %(reason)s" msgstr "Glance ホスト %(host)s:%(port)s への接続に失敗しました: %(reason)s" #, python-format msgid "Console access is not enabled on node %(node)s" msgstr "ノード %(node)s でコンソールアクセスが有効になっていません" #, python-format msgid "Console subprocess failed to start. %(error)s" msgstr "コンソールのサブプロセスの開始に失敗しました。%(error)s" #, python-format msgid "Converted to raw, but format is now %s" msgstr "ロー形式に変換されましたが、現在の形式は %s です" msgid "Copying floppy image file" msgstr "フロッピーイメージファイルのコピー中" #, python-format msgid "Could not authorize in Keystone: %s" msgstr "keystone で認可できませんでした: %s" #, python-format msgid "" "Could not create cleaning port on network %(net)s from %(node)s. %(exc)s" msgstr "" "%(node)s からネットワーク %(net)s にクリーニングポートを作成できませんでし" "た。%(exc)s" #, python-format msgid "Could not find config at %(path)s" msgstr "%(path)s に config がありませんでした" #, python-format msgid "Could not find pid in pid file %(pid_path)s" msgstr "pid ファイル %(pid_path)s に pid が見つかりませんでした" #, python-format msgid "Could not find the following driver(s): %(driver_name)s." msgstr "次のドライバーが見つかりませんでした。%(driver_name)s" #, python-format msgid "" "Could not get cleaning network vif for %(node)s from Neutron, possible " "network issue. %(exc)s" msgstr "" "%(node)s のクリーニングネットワークの仮想インターフェースを Neutron から取得" "できませんでした。ネットワークの問題が考えられます。%(exc)s" #, python-format msgid "" "Could not remove cleaning ports on network %(net)s from %(node)s, possible " "network issue. %(exc)s" msgstr "" "ネットワーク %(net)s 上でクリーニング・ポートを %(node)s から削除できませんで" "した。ネットワークの問題が考えられます。%(exc)s" #, python-format msgid "Could not restart cleaning on node %(node)s: %(err)s." msgstr "ノード %(node)s でクリーニングを再開できませんでした: %(err)s。" #, python-format msgid "Could not stop the console for node '%(node)s'. Reason: %(err)s." msgstr "ノード「%(node)s」のコンソールを停止できませんでした。理由: %(err)s。" #, python-format msgid "Couldn't apply patch '%(patch)s'. Reason: %(reason)s" msgstr "パッチ「%(patch)s」を適用できませんでした。理由: %(reason)s" #, python-format msgid "" "Couldn't determine the UUID of the root partition or the disk identifier " "after deploying node %s" msgstr "" "ノード %s のデプロイ後にルートパーティションの UUID またはディスク ID を認識" "できませんでした" #, python-format msgid "" "Couldn't get the URL of the Ironic API service from the configuration file " "or keystone catalog. 
Keystone error: %s" msgstr "" "Ironic API サービスの URL を構成ファイルまたは keystone カタログから取得でき" "ませんでした。keystone エラー: %s" msgid "Create a new alembic revision. Use --message to set the message string." msgstr "" "新しい Alembic バージョンを作成します。--message を使用して、メッセージ文字列" "を設定します。" msgid "Create the database schema." msgstr "データベースのスキーマを作成します。" #, python-format msgid "Creating %(image_type)s image failed: %(error)s" msgstr "%(image_type)s イメージの作成に失敗しました: %(error)s" msgid "" "DHCP provider to use. \"neutron\" uses Neutron, and \"none\" uses a no-op " "provider." msgstr "" "使用する DHCP プロバイダー。「neutron」は Neutron、「none」は何もしないプロバ" "イダーを使用します。" #, python-format msgid "" "DRAC client failed. Last error (cURL error code): %(last_error)s, fault " "string: \"%(fault_string)s\" response_code: %(response_code)s" msgstr "" "DRAC クライアントで障害が発生しました。最終エラー (cURL エラーコード): " "%(last_error)s、エラー文字列: \"%(fault_string)s\"、応答コード: " "%(response_code)s" #, python-format msgid "" "DRAC operation yielded return value %(actual_return_value)s that is neither " "error nor expected %(expected_return_value)s" msgstr "" "DRAC 処理が、エラーでも予期された %(expected_return_value)s でもない戻り値 " "%(actual_return_value)s を受け取りました。" msgid "Default glance hostname or IP address." msgstr "デフォルトの glance ホスト名または IP アドレス。" msgid "Default glance port." msgstr "デフォルトの glance ポート。" msgid "" "Default protocol to use when connecting to glance. Set to https for SSL." msgstr "" "glance 接続時に使用するデフォルトのプロトコル。SSL には https を設定します。" #, python-format msgid "Deploy ISO file, %(deploy_iso)s, not found for node: %(node)s." msgstr "" "ノード %(node)s のデプロイ ISO ファイル %(deploy_iso)s が見つかりませんでし" "た。" #, python-format msgid "Deploy failed for instance %(instance)s. Error: %(error)s" msgstr "インスタンス %(instance)s のデプロイに失敗しました。エラー: %(error)s" msgid "Deploy iso didn't contain efiboot.img or grub.cfg" msgstr "デプロイ ISO に efiboot.img も grub.cfg も含まれていませんでした" #, python-format msgid "Deploy key %(key_sent)s does not match with %(expected_key)s" msgstr "デプロイキー %(key_sent)s が %(expected_key)s に一致しません" msgid "Deploy key does not match" msgstr "デプロイキーが一致しません" msgid "" "Deploy timed out, but an unhandled exception was encountered while aborting. " "More info may be found in the log file." msgstr "" "デプロイがタイムアウトになりましたが、中止中に未処理例外が発生しました。詳細" "情報はログファイルに記録されている可能性があります。" msgid "Deployment ISO image file name. Required." msgstr "デプロイ ISO イメージのファイル名。必須。" msgid "Destination port; defaults to 9. Optional." msgstr "宛先ポート。デフォルトは 9。オプション。" #, python-format msgid "Directory %(dir)s is not writable." msgstr "ディレクトリー %(dir)s が書き込みできません。" msgid "Directory where ironic binaries are installed." msgstr "ironic のバイナリーがインストールされるディレクトリー。" msgid "Directory where the ironic python module is installed." msgstr "ironic python モジュールがインストールされるディレクトリー。" #, python-format msgid "" "Disk volume where '%(path)s' is located doesn't have enough disk space. " "Required %(required)d MiB, only %(actual)d MiB available space present." msgstr "" "「%(path)s」があるディスク・ボリュームに十分なディスク容量がありませ" "ん。%(required)d Mib が必要ですが、使用可能な容量は%(actual)d MiB のみです。" msgid "" "Downgrade the database schema to the oldest revision. While optional, one " "should generally use --revision to specify the alembic revision string to " "downgrade to." msgstr "" "データベースのスキーマを最も古いバージョンにダウングレードします。オプション" "ですが、 一般的には --revision を使用して、ダウングレードする Alembic バー" "ジョン文字列を指定すべきです。" #, python-format msgid "Driver %(driver)s could not be loaded. Reason: %(reason)s." 
msgstr "%(driver)s ドライバーを読み込めませんでした。理由: %(reason)s。" #, python-format msgid "" "Driver %(driver)s does not support %(extension)s (disabled or not " "implemented)." msgstr "" "ドライバー %(driver)s では %(extension)s はサポートされていません (有効化され" "ていないか、実装されていません)。" #, python-format msgid "During inspection, driver returned unexpected state %(state)s" msgstr "検査中にドライバーから予期しない状態 %(state)s が返されました" #, python-format msgid "" "During sync_power_state, max retries exceeded for node %(node)s, node state " "%(actual)s does not match expected state '%(state)s'. Updating DB state to " "'%(actual)s' Switching node to maintenance mode." msgstr "" "sync_power_state 中にノード %(node)s の最大再試行回数を超えました。ノード状" "態 %(actual)s が予期された状態「%(state)s」に一致しません。DB 状態を" "「%(actual)s」に更新し、ノードを保守モードに切り替えます。" #, python-format msgid "Eject virtual media %s" msgstr "仮想メディア %s を取り出します" msgid "Ejecting virtual cdrom" msgstr "仮想 CD-ROM の取り出し中" msgid "Ejecting virtual floppy" msgstr "仮想フロッピーの取り出し中" msgid "Enable iPXE boot." msgstr "iPXE ブートを有効化します。" msgid "" "Enable pecan debug mode. WARNING: this is insecure and should not be used in " "a production environment." msgstr "" "pecan デバッグモードを有効化します。警告: これはセキュリティーを低下させま" "す。本番環境において使用すべきではありません。" #, python-format msgid "Error %(op)s the console on node %(node)s. Reason: %(error)s" msgstr "" "ノード %(node)s のコンソール %(op)s にエラーが発生しました。理由: %(error)s" #, python-format msgid "" "Error parsing capabilities from Node %s instance_info field. A dictionary or " "a \"jsonified\" dictionary is expected." msgstr "" "ノード %s の instance_info フィールドのケイパビリティーの解析エラー。ディク" "ショナリーまたは「jsonified」ディクショナリーが必要です。" #, python-format msgid "Error rebooting node %(node)s after deploy. Error: %(error)s" msgstr "" "ノード %(node)s のデプロイ後の再起動がエラーになりました。エラー: %(error)s" #, python-format msgid "Error returned from deploy ramdisk: %s" msgstr "デプロイ RAM ディスクからエラーが返されました: %s" msgid "" "Error validating iLO virtual media deploy. Some parameters were missing in " "node's driver_info" msgstr "" "iLO 仮想メディアデプロイの検証エラー。いくつかのパラメーターがノードの" "driver_info にありませんでした" msgid "" "Error validating iRMC virtual media deploy. Some parameters were missing in " "node's driver_info" msgstr "" "iRMC 仮想メディアデプロイの検証でエラーが発生しました。いくつかのパラメーター" "がノードの driver_info にありませんでした。" msgid "" "Error validating input for boot_into_iso vendor passthru. Some parameters " "were not provided: " msgstr "" "boot_into_iso ベンダーパススルーの入力で検証エラーになりました。いくつかのパ" "ラメーターが指定されていません。" #, python-format msgid "ErrorDocumentMiddleware received an invalid status %s" msgstr "ErrorDocumentMiddleware が無効なステータス %s を受け取りました" #, python-format msgid "" "Essential properties are expected to be in dictionary format, received " "%(properties)s from node %(node)s." msgstr "" "必須プロパティーはディクショナリー形式でなければなりません。%(properties)s を" "ノード %(node)s から受け取りました。" #, python-format msgid "Expected a MAC address but received %(mac)s." msgstr "MAC アドレスが必要ですが、%(mac)s を受け取りました。" #, python-format msgid "Expected a logical name but received %(name)s." msgstr "論理名が必要ですが、%(name)s を受け取りました。" #, python-format msgid "Expected a logical name or uuid but received %(name)s." msgstr "論理名または UUID が必要ですが、%(name)s を受け取りました。" #, python-format msgid "Expected a uuid but received %(uuid)s." msgstr "UUID が必要ですが、%(uuid)s を受け取りました。" #, python-format msgid "Expected an uuid or int but received %(identity)s." msgstr "UUID または整数が必要ですが、%(identity)s を受け取りました。" msgid "Failed checking if deploy is done." msgstr "デプロイが実行されたかどうかの検査に失敗しました。" #, python-format msgid "Failed to change power state to '%(target)s'. 
Error: %(error)s" msgstr "電源状態を「%(target)s」に変更できませんでした。エラー: %(error)s" #, python-format msgid "" "Failed to change the boot device to %(boot_dev)s when deploying node " "%(node)s. Error: %(error)s" msgstr "" "ノード %(node)s のデプロイ中にブートデバイスを %(boot_dev)s に変更できません" "でした。エラー: %(error)s" #, python-format msgid "Failed to clean node %(node)s: %(reason)s" msgstr "ノード %(node)s のクリーニングに失敗しました: %(reason)s" #, python-format msgid "Failed to connect to Glance to get the properties of the image %s" msgstr "" "イメージ %s のプロパティーを取得するために Glance に接続できませんでした" msgid "Failed to continue agent deployment." msgstr "エージェントのデプロイの続行に失敗しました。" msgid "Failed to continue iSCSI deployment." msgstr "iSCSI デプロイメントを続行できませんでした。" #, python-format msgid "Failed to create a file system. File system %(fs)s is not supported." msgstr "" "ファイルシステムの作成に失敗しました。ファイルシステム %(fs)s はサポートされ" "ていません。" #, python-format msgid "Failed to create cleaning ports for node %(node)s" msgstr "ノード %(node)s のクリーニングポートの作成に失敗しました" #, python-format msgid "Failed to create the password file. %(error)s" msgstr "パスワードファイルの作成に失敗しました。%(error)s" #, python-format msgid "Failed to deploy instance: %(reason)s" msgstr "インスタンスをデプロイできませんでした: %(reason)s" #, python-format msgid "Failed to deploy. Error: %s" msgstr "デプロイに失敗しました。エラー: %s" #, python-format msgid "Failed to download image %(image_href)s, reason: %(reason)s" msgstr "イメージ %(image_href)s のダウンロードに失敗しました。理由: %(reason)s" #, python-format msgid "Failed to establish SSH connection to host %(host)s." msgstr "ホスト %(host)s への SSH 接続を確立できませんでした。" #, python-format msgid "Failed to execute command via SSH: %(cmd)s." msgstr "SSH を介したコマンド %(cmd)s の実行に失敗しました。" #, python-format msgid "Failed to get IP address for any port on node %s." msgstr "ノード %s 上のどのポートの IP アドレスも取得できませんでした。" #, python-format msgid "Failed to get sensor data for node %(node)s. Error: %(error)s" msgstr "" "ノード %(node)s のセンサーデータの取得に失敗しました。エラー: %(error)s" #, python-format msgid "Failed to inspect hardware. Reason: %(error)s" msgstr "ハードウェアの検査に失敗しました。理由: %(error)s" #, python-format msgid "" "Failed to install a bootloader when deploying node %(node)s. Error: %(error)s" msgstr "" "ノード %(node)s のデプロイ中にブートローダーのインストールに失敗しました。エ" "ラー: %(error)s" #, python-format msgid "Failed to install bootloader on node %(node)s. Error: %(error)s." msgstr "" "ノード %(node)s においてブートローダーのインストールに失敗しました。エラー: " "%(error)s。" #, python-format msgid "" "Failed to notify ramdisk to reboot after bootloader installation. Error: %s" msgstr "" "ブートローダーのインストール後に再起動するよう RAM ディスクに通知できませんで" "した。エラー: %s" #, python-format msgid "Failed to parse sensor data for node %(node)s. Error: %(error)s" msgstr "" "ノード %(node)s のセンサーデータの解析に失敗しました。エラー: %(error)s" #, python-format msgid "Failed to prepare node %(node)s for cleaning: %(e)s" msgstr "ノード %(node)s のクリーニングのための準備に失敗しました: %(e)s" #, python-format msgid "Failed to prepare to deploy. Error: %s" msgstr "デプロイの準備に失敗しました。エラー: %s" #, python-format msgid "" "Failed to send Wake-On-Lan magic packets to node %(node)s port %(port)s. " "Error: %(error)s" msgstr "" "Wake-On-Lan のマジックパケットをノード %(node)s のポート %(port)s に送信でき" "ませんでした。エラー: %(error)s" #, python-format msgid "Failed to set DHCP BOOT options for any port on node %s." msgstr "ノード %s 上のポートの DHCP BOOT オプションの設定に失敗しました。" #, python-format msgid "Failed to set node power state to %(pstate)s." msgstr "ノードの電源状態を %(pstate)s に設定できませんでした。" #, python-format msgid "Failed to start inspection: %s" msgstr "検査の開始に失敗しました: %s" #, python-format msgid "" "Failed to start the iSCSI target to deploy the node %(node)s. 
Error: " "%(error)s" msgstr "" "ノード %(node)s をデプロイする iSCSI ターゲットの開始に失敗しました。エラー: " "%(error)s" #, python-format msgid "Failed to tear down from cleaning for node %s" msgstr "ノード %s のクリーニングからの取り外しに失敗しました" #, python-format msgid "Failed to tear down. Error: %s" msgstr "取り外しに失敗しました。エラー: %s" #, python-format msgid "Failed to toggle maintenance-mode flag for node %(node)s: %(reason)s" msgstr "" "ノード %(node)s の保守モードのフラグの切り替えに失敗しました: %(reason)s" #, python-format msgid "" "Failed to upload %(image_name)s image to web server %(web_server)s, reason: " "%(reason)s" msgstr "" "%(image_name)s イメージの Web サーバー %(web_server)s へのアップロードに失敗" "しました。理由: %(reason)s" #, python-format msgid "Failed to upload the configdrive to Swift. Error: %s" msgstr "コンフィグドライブの Swift へのアップロードに失敗しました。エラー: %s" #, python-format msgid "" "Failed to validate power driver interface. Can not clean node %(node)s. " "Error: %(msg)s" msgstr "" "電源ドライバーインターフェースの検証に失敗しました。ノード %(node)s をクリー" "ニングできません。エラー: %(msg)s" #, python-format msgid "" "Failed to validate power driver interface. Can not delete instance. Error: " "%(msg)s" msgstr "" "電源ドライバーインターフェースを検証できませんでした。インスタンスを削除でき" "ません。エラー: %(msg)s" #, python-format msgid "Field(s) \"%s\" are not valid" msgstr "フィールド \"%s\" が有効ではありません。" msgid "For heartbeat operation, \"agent_url\" must be specified." msgstr "ハートビート操作には「agent_url」を指定する必要があります。" msgid "Get boot device" msgstr "ブートデバイスの取得" #, python-format msgid "Get secure boot mode for node %s." msgstr "ノード %s のセキュアブートのモードを取得します。" #, python-format msgid "Got HTTP code %s instead of 200 in response to GET request." msgstr "" "GET 要求に対する応答として HTTP コード 200 ではなく %s を受け取りました。" #, python-format msgid "Got HTTP code %s instead of 200 in response to HEAD request." msgstr "" "HEAD 要求に対する応答として HTTP コード 200 ではなく %s を受け取りました。" #, python-format msgid "HTTP call failed: %s" msgstr "HTTP コールに失敗しました: %s" msgid "IP address of ironic-conductor node's TFTP server." msgstr "ironic-conductor ノードの TFTP サーバーの IP アドレス。" msgid "IP address of the node. Required." msgstr "ノードの IP アドレス。必須。" msgid "" "IP address of this host. If unset, will determine the IP programmatically. " "If unable to do so, will use \"127.0.0.1\"." msgstr "" "このホストの IP アドレス。設定されていない場合、プログラムにより自動判定され" "ます。判定できない場合、127.0.0.1 を使用します。" msgid "IP address or host name of the node. Required." msgstr "ノードの IP アドレスまたはホスト名。必須。" msgid "IP address or hostname of the DRAC card. Required." msgstr "DRAC カードの IP アドレスまたはホスト名。必須。" msgid "IP address or hostname of the VirtualBox host. Required." msgstr "VirtualBox ホストの IP アドレスまたはホスト名。必須。" msgid "IP address or hostname of the iLO. Required." msgstr "iLO の IP アドレスまたはホスト名。必須。" msgid "IP address or hostname of the iRMC. Required." msgstr "iRMC の IP アドレスまたはホスト名。必須。" msgid "IP address or hostname of the node to ssh into. Required." msgstr "SSH 接続先のノードの IP アドレスまたはホスト名。必須。" msgid "IP address or hostname of the node. Required." msgstr "ノードの IP アドレスまたはホスト名。必須。" msgid "IP of the node's BMC. Required." msgstr "ノードの BMC の IP。必須。" msgid "IP or Hostname of the CIMC. Required." msgstr "CIMC の IP またはホスト名。必須。" msgid "IP or Hostname of the UCS Manager. Required." msgstr "UCS Manager の IP またはホスト名。必須。" #, python-format msgid "IPMI call failed: %(cmd)s." msgstr "IPMI 呼び出しが失敗しました: %(cmd)s." #, python-format msgid "" "IPMI get power state failed for node %(node_id)s with the following error: " "%(error)s" msgstr "" "ノード %(node_id)s の IPMI 電源状態取得に次のエラーで失敗しました: %(error)s" msgid "IPMI password. Required." 
msgstr "IPMI パスワード。必須。" #, python-format msgid "" "IPMI power off failed for node %(node_id)s with the following error: " "%(error)s" msgstr "" "ノード %(node_id)s の IPMI 電源オフに次のエラーで失敗しました: %(error)s" #, python-format msgid "" "IPMI power reboot failed for node %(node_id)s with the following error: " "%(error)s" msgstr "ノード %(node_id)s の IPMI 再起動に次のエラーで失敗しました: %(error)s" #, python-format msgid "" "IPMI send raw bytes '%(bytes)s' failed for node %(node_id)s with the " "following error: %(error)s" msgstr "" "IPMI が raw バイト \"%(bytes)s\" をノード %(node_id)s に送信しましたが、次の" "エラーで失敗しました: %(error)s" msgid "IPMI username. Required." msgstr "IPMI ユーザー名。必須。" msgid "If True, convert backing images to \"raw\" disk image format." msgstr "" "True の場合、バックエンドのイメージを「raw」ディスクイメージ形式に変換しま" "す。" #, python-format msgid "Image %(image)s is missing the following properties: %(properties)s" msgstr "イメージ %(image)s に次のプロパティーがありません: %(properties)s" #, python-format msgid "Image %(image_id)s could not be found." msgstr "イメージ %(image_id)s が見つかりませんでした。" #, python-format msgid "Image %(image_id)s is unacceptable: %(reason)s" msgstr "イメージ %(image_id)s は受け入れられません: %(reason)s" #, python-format msgid "Image %s can not be found." msgstr "イメージ %s が見つかりません。" #, python-format msgid "Image download protocol %s is not supported." msgstr "イメージダウンロードのプロトコル %s はサポートされていません。" msgid "Inserting virtual cdrom" msgstr "仮想 CD-ROM の挿入中" msgid "Inserting virtual floppy" msgstr "仮想フロッピーの挿入中" #, python-format msgid "Inserting virtual media %s" msgstr "仮想メディア %s の接続中" #, python-format msgid "Inspecting hardware (get_power_state) on %s" msgstr "%s でハードウェア (get_power_state) を検査中" #, python-format msgid "Instance %(instance)s could not be found." msgstr "インスタンス %(instance)s が見つかりませんでした。" #, python-format msgid "" "Instance %(instance_uuid)s is already associated with a node, it cannot be " "associated with this other node %(node)s" msgstr "" "インスタンス %(instance_uuid)s は既にノードに関連付けられており、この他のノー" "ド %(node)s に関連付けることはできません。" msgid "Invalid 'seamicro_api_endpoint' parameter in node's driver_info." msgstr "" "ノードの driver_info にある「seamicro_api_endpoint」パラメーターが無効です。" #, python-format msgid "Invalid 'seamicro_api_version' parameter. Reason: %s." msgstr "無効な「seamicro_api_version」パラメーター。理由: %s。" msgid "" "Invalid 'seamicro_server_id' parameter in node's driver_info. Expected " "format of 'seamicro_server_id' is /" msgstr "" "ノードの driver_info にある「seamicro_server_id」パラメーターが無効です。期待" "される「seamicro_server_id」の形式は / です" #, python-format msgid "" "Invalid IPMI protocol version value %(version)s, the valid value can be one " "of %(valid_versions)s" msgstr "" "無効な IPMI プロトコルバージョンの値 %(version)s。有効な値は " "%(valid_versions)s のどれかです。" #, python-format msgid "Invalid IPv4 address %(ip_address)s." msgstr "IPv4 アドレス %(ip_address)s は無効です。" #, python-format msgid "Invalid VirtualMachine method '%s' passed to '_run_virtualbox_method'." msgstr "" "無効な VirtualMachine メソッド「%s」が「_run_virtualbox_method」に渡されまし" "た。" #, python-format msgid "Invalid XML: %s" msgstr "無効な XML: %s" #, python-format msgid "Invalid boot device %s specified." msgstr "無効なブートデバイス %s が指定されました。" #, python-format msgid "Invalid capabilities string '%s'." msgstr "無効なケイパビリティー文字列「%s」。" #, python-format msgid "Invalid configuration file. %(error_msg)s" msgstr "設定ファイルが無効です。%(error_msg)s" msgid "Invalid data supplied to HashRing.get_hosts." msgstr "HashRing.get_hosts に無効なデータが指定されました。" #, fuzzy, python-format msgid "" "Invalid filter dialect '%(invalid_filter)s'. 
Supported options are " "%(supported)s" msgstr "" "フィルター方言「%(invalid_filter)s」は無効です。サポートされるオプションは " "%(supported)s です" msgid "Invalid hosts supplied when building HashRing." msgstr "HashRing の構築時に無効なホストが指定されました。" #, python-format msgid "Invalid image href %(image_href)s." msgstr "無効なイメージ href %(image_href)s。" msgid "Invalid private key" msgstr "無効な秘密鍵" #, python-format msgid "" "Invalid privilege level value:%(priv_level)s, the valid value can be one of " "%(valid_levels)s" msgstr "" "特権レベルの値 %(priv_level)s は無効です。有効な値は %(valid_levels)s の いず" "れかです。" #, python-format msgid "Invalid protocol %s." msgstr "無効なプロトコル %s" #, python-format msgid "Invalid raw bytes string: '%s'" msgstr "無効な raw バイト文字列: '%s'" msgid "Invalid resource state." msgstr "リソース状態が無効です。" #, python-format msgid "Invalid sort direction: %s. Acceptable values are 'asc' or 'desc'" msgstr "%s は無効なソート方向です。「asc」または「desc」が有効です。" #, python-format msgid "Invalid value for %s header" msgstr "無効な %s ヘッダーの値" #, python-format msgid "" "Invalid value for ipmi_bridging: %(bridging_type)s, the valid value can be " "one of: %(bridging_types)s" msgstr "" "ipmi_bridging の値 %(bridging_type)s は無効です。有効な値は " "%(bridging_types)s のいずれかです。" msgid "Invalid volume id specified" msgstr "無効なボリューム ID が指定されました" msgid "Keystone API endpoint is missing" msgstr "keystone API エンドポイントがありません。" msgid "Limit must be positive" msgstr "「limit」は正の値でなければなりません" msgid "MSFT OCS call failed." msgstr "MSFT OCS コールに失敗しました。" #, python-format msgid "Malformed network interfaces lookup: %s" msgstr "ネットワークインターフェースのルックアップの形式が正しくありません: %s" msgid "Maximum interval (in seconds) for agent heartbeats." msgstr "エージェントのハートビートの最大間隔 (秒単位)。" msgid "Maximum retries for iBoot operations" msgstr "iBoot 処理の最大試行回数" msgid "Method not specified" msgstr "メソッドが指定されていません" msgid "Method not specified when calling vendor extension." msgstr "ベンダー拡張機能の呼び出し時にメソッドが指定されていません" msgid "Missing 'console_port' parameter in node's driver_info." msgstr "ノードの driver_info に「console_port」パラメーターがありません。" msgid "Missing 'ipmi_terminal_port' parameter in node's driver_info." msgstr "" "ノードの driver_info に「ipmi_terminal_port」パラメーターがありません。" msgid "Missing 'seamicro_terminal_port' parameter in node's driver_info" msgstr "" "ノードの driver_info に「seamicro_terminal_port」パラメーターがありません" msgid "Missing parameter version" msgstr "パラメーターのバージョンがありません" #, python-format msgid "Missing the following IPMI credentials in node's driver_info: %s." msgstr "ノードの driver_info に次の IPMI クレデンシャルがありません: %s。" #, python-format msgid "Missing the following iBoot credentials in node's driver_info: %s." msgstr "ノードの driver_info に次の iBoot クレデンシャルがありません: %s。" #, python-format msgid "Missing the following iRMC parameters in node's driver_info: %s." msgstr "ノードの driver_info に次の iRMC パラメーターがありません: %s。" #, python-format msgid "" "Mutually exclusive versions requested. Version %(ver)s requested but not " "supported by this service. The supported version range is: [%(min)s, " "%(max)s]." msgstr "" "相互排他的なバージョンが要求されました。バージョン %(ver)s が要求されました" "が、このサービスではサポートされていません。サポートされるバージョンの範囲は " "[%(min)s、%(max)s] です。" msgid "MySQL engine to use." msgstr "使用する MySQL エンジン。" msgid "Name of the VM in VirtualBox. Required." msgstr "VirtualBox の仮想マシン名。必須。" msgid "Neutron auth_strategy should be either \"noauth\" or \"keystone\"." 
msgstr "" "Neutron の auth_strategy は「noauth」または「keystone」のいずれかでなければな" "りません。" msgid "No Keystone service catalog loaded" msgstr "keystone のサービスカタログが読み込まれていません" #, python-format msgid "" "No VIFs found for node %(node)s when attempting to update DHCP BOOT options." msgstr "" "DHCP BOOT オプションの更新を試行中に、ノード %(node)s の仮想インターフェース" "が見つかりませんでした。" #, python-format msgid "No conductor service registered which supports driver %s." msgstr "ドライバー %s をサポートするコンダクターサービスが登録されていません。" msgid "No free conductor workers available" msgstr "コンダクターの利用可能な空きワーカーがありません。" #, python-format msgid "No handler for method %s" msgstr "%s メソッドのハンドラーがありません" msgid "No storage pools found for ironic" msgstr "ironic 用のストレージプールが見つかりません" #, python-format msgid "No valid host was found. Reason: %(reason)s" msgstr "有効なホストが見つかりませんでした。理由: %(reason)s" msgid "No vlan id provided" msgstr "VLAN ID が指定されていません" msgid "No volume size provided for creating volume" msgstr "ボリュームの作成でボリューム容量が指定されていません" #, python-format msgid "Node %(node)s could not be found." msgstr "ノード %(node)s が見つかりませんでした。" #, python-format msgid "Node %(node)s didn't return MACs %(macs)s in dictionary format." msgstr "" "ノード %(node)s からディクショナリー形式の MAC %(macs)s が返されませんでし" "た。" #, python-format msgid "Node %(node)s failed step %(step)s: %(exc)s" msgstr "ノード %(node)s でステップ %(step)s が実行されませんでした: %(exc)s" #, python-format msgid "Node %(node)s found not to be locked on release" msgstr "ノード %(node)s はリリース状態で、ロックされていませんでした。" #, python-format msgid "Node %(node)s got an invalid last step for %(state)s: %(step)s." msgstr "" "ノード %(node)s が %(state)s に関して無効な最終ステップを受け取りました: " "%(step)s。" #, python-format msgid "Node %(node)s is associated with instance %(instance)s." msgstr "ノード %(node)s はインスタンス %(instance)s に関連付けられています。" #, python-format msgid "" "Node %(node)s is configured to use the %(driver)s driver which currently " "does not support deploying partition images." msgstr "" "ノード %(node)s は、分割イメージのデプロイを現在サポートしていない " "%(driver)s ドライバーを使用するように設定されています。" #, python-format msgid "" "Node %(node)s is locked by host %(host)s, please retry after the current " "operation is completed." msgstr "" "ノード %(node)s がホスト %(host)s によりロックされています。現在の操作の完了" "後に再試行してください。" #, python-format msgid "Node %(node)s: Cannot change name to invalid name '%(name)s'" msgstr "" "ノード %(node)s: 名前を無効な名前「%(name)s」に変更することはできません" #, python-format msgid "Node %s can not be updated while a state transition is in progress." msgstr "状態遷移の進行中にノード %s を更新することはできません。" #, python-format msgid "" "Node %s can not update the driver while the console is enabled. Please stop " "the console first." msgstr "" "ノード %s は、コンソール有効時、ドライバーを更新できません。まずコンソールを" "停止してください。" #, python-format msgid "Node %s does not have any port associated with it." msgstr "ノード %s にポートが関連付けられていません。" #, python-format msgid "" "Node %s failed to validate deploy image info. Some parameters were missing" msgstr "" "ノード %s はデプロイイメージ情報の検証に失敗しました。いくつかのパラメーター" "が指定されていません" msgid "Node failed to check cleaning progress." msgstr "ノードが、クリーニング進行状況の確認に失敗しました。" msgid "Node failed to get image for deploy." msgstr "ノードがデプロイ用のイメージの取得に失敗しました。" msgid "Node failed to move to active state." msgstr "ノードを稼働状態に移行できませんでした。" msgid "Node failed to start the next cleaning step." msgstr "ノードが、次のクリーニング手順の開始に失敗しました。" msgid "Node identifier not specified." msgstr "ノード ID が指定されていません。" #, fuzzy, python-format msgid "Not authorized for image %(image_id)s." msgstr "イメージ %(image_id)s では許可されません。" msgid "Not authorized in Keystone." msgstr "keystone において許可されていません。" msgid "Not authorized." 
msgstr "権限がありません。" msgid "Number of retries when downloading an image from glance." msgstr "glance からイメージをダウンロードするとき、再試行する回数。" msgid "" "On ironic-conductor node, template file for PXE configuration for UEFI boot " "loader." msgstr "" "ironic-conductor ノードでの、UEFI ブートローダーの PXE 設定のテンプレートファ" "イル。" msgid "On ironic-conductor node, template file for PXE configuration." msgstr "ironic-conductor ノードでの、PXE 設定のテンプレートファイル。" msgid "On ironic-conductor node, the path to the main iPXE script file." msgstr "" "ironic-conductor ノードにおける、メインの iPXE スクリプトファイルのパス。" msgid "" "On the ironic-conductor node, directory where images are stored on disk." msgstr "" "ironic-conductor ノードにおいて、イメージがディスク上に保存されるディレクト" "リー。" #, python-format msgid "Operation failed: %s" msgstr "処理に失敗しました: %s" msgid "Operation not permitted." msgstr "操作が許可されていません。" msgid "PDU IPv4 address or hostname. Required." msgstr "PDU のIPv4 アドレスまたはホスト名。必須。" msgid "PDU manufacturer driver. Required." msgstr "PDU の製造元ドライバー。必須。" msgid "PDU power outlet index (1-based). Required." msgstr "PDU 電源アウトレット番号 (基準は 1)。必須。" #, python-format msgid "Parameter 'bar' not passed to method '%s'." msgstr "「bar」パラメーターが「%s」メソッドに渡されませんでした。" msgid "Parameter 'bar' not passed to method 'first_method'." msgstr "bar パラメーターが first_method メソッドに渡されませんでした。" msgid "Parameter raw_bytes (string of bytes) was not specified." msgstr "raw_bytes (バイトのストリング) パラメーターが指定されませんでした。" #, python-format msgid "Parameters %s were not passed to ironic for deploy." msgstr "パラメーター %s がデプロイのために ironic に渡されませんでした。" #, python-format msgid "Parent device '%s' not found" msgstr "親デバイス「%s」が見つかりません" msgid "Password for 'virtualbox_username'. Default value is ''. Optional." msgstr "" "「virtualbox_username」のパスワード。デフォルト値は空白です。オプション。" msgid "Password for irmc_username. Required." msgstr "irmc_username のパスワード。必須。" msgid "Password to access the chassis manager REST API. Required." msgstr "シャーシマネージャーの REST API にアクセスするパスワード。必須。" msgid "Password. Required." msgstr "パスワード。必須。" #, python-format msgid "Path %(dir)s does not exist." msgstr "パス %(dir)s が存在しません。" msgid "Path to isolinux binary file." msgstr "isolinux バイナリーファイルのパス。" msgid "Path to serial console terminal program" msgstr "シリアルコンソールのターミナルプログラムのパス" #, python-format msgid "Port %(port)s could not be found." msgstr "ポート %(port)s が見つかりませんでした。" msgid "Port on which VirtualBox web service is listening." msgstr "VirtualBox web サービスがリッスンしているポート。" msgid "Port on which VirtualBox web service is listening. Optional." msgstr "VirtualBox Web サービスがリッスンしているポート。オプション。" msgid "Port to be used for iLO operations" msgstr "iLO 操作に使用するポート" msgid "Port to be used for iRMC operations, either 80 or 443" msgstr "iRMC の操作に使用するポート。80 か 443 のどちらか。" msgid "" "Port to be used for iRMC operations; either 80 or 443. The default value is " "443. Optional." msgstr "" "iRMC の操作に使用するポート。80 または 443。デフォルト値は 443。オプション。" #, python-format msgid "" "Ports matching mac addresses match multiple nodes. MACs: %(macs)s. Port ids: " "%(port_ids)s" msgstr "" "MAC アドレスに一致するポートが、複数のノードに一致します。MAC: %(macs)s、ポー" "ト ID: %(port_ids)s" msgid "Power driver returned ERROR state while trying to sync power state." msgstr "電源状態の同期を試行中に電源ドライバーが ERROR 状態を返しました。" msgid "Print the current version information and exit." msgstr "現在のバージョン情報を表示して、終了します。" msgid "Priority for reset_bios_to_default clean step." msgstr "reset_bios_to_default クリーニング手順の優先度。" msgid "Priority for reset_ilo clean step." 
msgstr "reset_ilo クリーニング手順の優先度。" msgid "Protocol used for AMT endpoint, support http/https" msgstr "" "AMT エンドポイントで使用されるプロトコル。http/https をサポートします。" msgid "" "Protocol used for AMT endpoint. one of http, https; default is \"http\". " "Optional." msgstr "" "AMT エンドポイントに使用するプロトコル。http または https のいずれかです。デ" "フォルトは「http」です。オプション。" #, python-format msgid "Provision state \"%s\" is not valid" msgstr "プロビジョニング状態 \"%s\" が有効ではありません。" #, python-format msgid "RAID config validation error: %s" msgstr "RAID 設定の検証エラー: %s" #, python-format msgid "" "RPC do_node_deploy failed to validate deploy or power info. Error: %(msg)s" msgstr "" "RPC do_node_deploy はデプロイ情報または電源情報の検証に失敗しました。エラー: " "%(msg)s" #, python-format msgid "" "RPC inspect_hardware failed to validate inspection or power info. Error: " "%(msg)s" msgstr "" "RPC inspect_hardware による検査情報または電源情報の検証に失敗しました。エ" "ラー: %(msg)s" #, python-format msgid "" "Raid config cannot have more than one root volume. %d root volumes were " "specified" msgstr "" "RAID 設定は複数のルートボリュームを持てません。%d 個のルートボリュームが指定" "されました。" msgid "Raw bytes string requires two bytes at least." msgstr "raw バイト文字列は少なくとも 2 バイト必要です。" msgid "Request not acceptable." msgstr "要求は受け入れられませんでした。" msgid "Requested OpenStack Images API is forbidden" msgstr "要求された OpenStack Image API は禁止されています" msgid "" "Requested action cannot be performed due to lack of free conductor workers." msgstr "" "コンダクターの空きワーカーが不足しているため、要求されたアクションを実行でき" "ません。" msgid "Resource already exists." msgstr "リソースがすでに存在します。" msgid "Resource could not be found." msgstr "リソースが見つかりませんでした。" msgid "Resource temporarily unavailable, please retry." msgstr "リソースが一時的に使用できません。再試行してください。" #, python-format msgid "Retrieve IP address on port: %(port_id)s failed." msgstr "ポート %(port_id)s での IP アドレスの取得に失敗しました。" msgid "Root device hint \"size\" is not an integer value." msgstr "ルートデバイスヒント「size」が整数値ではありません。" #, python-format msgid "SNMP driver requires snmp_community to be set for version %s." msgstr "" "SNMP ドライバーのバージョン %s は snmp_community の設定を必要とします。" #, python-format msgid "SNMP driver requires snmp_security to be set for version %s." msgstr "SNMP ドライバーのバージョン %s は snmp_security の設定を必要とします。" #, python-format msgid "" "SNMP driver requires the following parameters to be set in node's " "driver_info: %s." msgstr "" "SNMP ドライバーはノードの driver_info で次のパラメーターの設定を必要としま" "す: %s。" #, python-format msgid "SNMP operation '%(operation)s' failed: %(error)s" msgstr "SNMP の操作「%(operation)s」が失敗しました: %(error)s" #, python-format msgid "SNMP port, default %(port)d" msgstr "SNMP ポート。デフォルトは %(port)d。" #, python-format msgid "SNMP security name. Required for version %(v3)s" msgstr "SNMP セキュリティー名。バージョン %(v3)s に必須。" #, python-format msgid "SNMPPowerDriver: SNMP UDP port out of range: %d" msgstr "SNMPPowerDriver: SNMP の UDP ポートが範囲外です: %d" #, python-format msgid "SNMPPowerDriver: unknown SNMP version: '%s'" msgstr "SNMPPowerDriver: 不明な SNMP バージョン: %s" #, python-format msgid "SNMPPowerDriver: unknown driver: '%s'" msgstr "SNMPPowerDriver: 不明なドライバー: %s" #, python-format msgid "SSH connection cannot be established: %s" msgstr "SSH 接続を確立できません: %s" #, python-format msgid "SSH key file %s not found." msgstr "SSH 鍵ファイル %s が見つかりません。" #, python-format msgid "SSHPowerDriver '%(virt_type)s' is not a valid virt_type, " msgstr "SSHPowerDriver「%(virt_type)s」は有効な virt_type ではありません。" #, python-format msgid "SSHPowerDriver '%(virt_type)s' is not a valid virt_type." 
msgstr "SSHPowerDriver「%(virt_type)s」は有効な virt_type ではありません。" msgid "" "SSHPowerDriver requires one and only one of password, key_contents and " "key_filename to be set." msgstr "" "SSHPowerDriver は、唯一のパスワード、key_contents、および key_filename の設定" "を必要とします。" #, python-format msgid "" "SSHPowerDriver requires the following parameters to be set in node's " "driver_info: %s." msgstr "" "SSHPowerDriver はノードの driver_info で次のパラメーターの設定を必要としま" "す: %s" #, python-format msgid "" "SeaMicro driver requires the following parameters to be set in node's " "driver_info: %s." msgstr "" "SeaMicro ドライバーはノードの driver_info で次のパラメーターの設定を必要とし" "ます: %s。" msgid "Seconds between conductor heart beats." msgstr "conductor のハートビート間隔の秒数" msgid "Seconds between running periodic tasks." msgstr "定期タスクの実行間隔 (秒単位)。" msgid "Sensor data retrieval method, either \"ipmitool\" or \"scci\"" msgstr "センサーデータの取得方式。「ipmitool」か「scci」のどちらか。" msgid "" "Sensor data retrieval method; either 'ipmitool' or 'scci'. The default value " "is 'ipmitool'. Optional." msgstr "" "センサーデータ取得方式。「ipmitool」または「scci」のいずれかを指定します。デ" "フォルト値は「ipmitool」です。オプション。" #, python-format msgid "Server didn't return the key(s): %(key)s" msgstr "サーバーからキー %(key)s が返されませんでした" #, python-format msgid "" "Service type %(service_type)s with endpoint type %(endpoint_type)s not found " "in keystone service catalog." msgstr "" "エンドポイントタイプ %(endpoint_type)s のサービスタイプ %(service_type)s が " "keystone サービスカタログに見つかりませんでした。" #, python-format msgid "Setting %s as boot device" msgstr "%s をブートデバイスとして設定中" #, python-format msgid "Setting %s as boot mode" msgstr "ブートモードを %s に設定中" #, python-format msgid "Setting secure boot to %(flag)s for node %(node)s." msgstr "ノード %(node)s のセキュアブートを %(flag)s に設定中。" msgid "" "Some mandatory input missing in 'pass_bootloader_info' vendor passthru from " "ramdisk." msgstr "" "一部の必須入力が RAM ディスクの pass_bootloader_info ベンダーパススルーにあり" "ません。" msgid "Specified image file not found." msgstr "指定されたイメージファイルが見つかりません。" #, python-format msgid "Swift operation '%(operation)s' failed: %(error)s" msgstr "Swift の操作「%(operation)s」が失敗しました: %(error)s" msgid "" "Swift temporary URLs require a Swift account string. You must provide " "\"swift_account\" as a config option." msgstr "" "Swift 一時 URL は Swift アカウント文字列を必要とします。設定オプションとして" "「swift_account」を指定する必要があります。" msgid "" "Swift temporary URLs require a Swift endpoint URL. You must provide " "\"swift_endpoint_url\" as a config option." msgstr "" "Swift 一時 URL は Swift エンドポイント URL を必要とします。設定オプションとし" "て「swift_endpoint_url」を指定する必要があります。" msgid "" "Swift temporary URLs require a shared secret to be created. You must provide " "\"swift_temp_url_key\" as a config option." msgstr "" "Swift 一時 URL は共有秘密鍵の作成を必要とします。設定オプションとして" "「swift_temp_url_key」を指定する必要があります。" #, python-format msgid "Target state '%s' does not exist" msgstr "ターゲット状態「%s」は存在しません" #, python-format msgid "Target state '%s' is not a 'stable' state" msgstr "ターゲット状態「%s」は「stable」状態ではありません" msgid "Template file for grub configuration file." msgstr "Template file for grub configuration file." msgid "Template file for isolinux configuration file." msgstr "isolinux 設定ファイルのテンプレートファイル。" msgid "Test if the value of bar is baz" msgstr "bar の値が baz かどうかをテストします" msgid "Test if the value of bar is kazoo" msgstr "bar の値が kazoo かどうかをテストします" msgid "Test if the value of bar is meow" msgstr "bar の値が meow かどうかをテストします" #, python-format msgid "" "The %(op)s operation can't be performed on node %(node)s because it's in " "maintenance mode." 
msgstr "" "ノード %(node)s は保守モードになっているため、%(op)s 操作を実行できません。" msgid "The IP address on which ironic-api listens." msgstr "ironic-api がリッスンする IP アドレス。" msgid "The Swift iLO container to store data." msgstr "データを保存するための Swift iLO コンテナー。" msgid "The TCP port on which ironic-api listens." msgstr "ironic-api がリッスンする TCP ポート。" #, python-format msgid "The driver '%s' is unknown." msgstr "ドライバー「%s」は不明です。" #, python-format msgid "The following errors were encountered while parsing config file:%s" msgstr "設定ファイルの構文解析中に以下のエラーが発生しました。%s" #, python-format msgid "" "The following errors were encountered while parsing driver_info:\n" "%s" msgstr "" "driver_info の解析中に次のエラーが検出されました:\n" "%s" #, python-format msgid "" "The following iLO parameters from the node's driver_info should be integers: " "%s" msgstr "" "ノードの driver_info からの次の iLO パラメーターは整数でなければなりません: " "%s" #, python-format msgid "The following parameters are missing in driver_info: %s" msgstr "次のパラメーターが driver_info にありません: %s" #, python-format msgid "The following parameters were missing: %s" msgstr "パラメーターが不足しています: %s" #, python-format msgid "" "The following required iLO parameters are missing from the node's " "driver_info: %s" msgstr "ノードの driver_info に次の必須 iLO パラメーターがありません: %s" #, python-format msgid "" "The following type errors were encountered while parsing driver_info:\n" "%s" msgstr "" "driver_info の解析中に次の形式エラーが検出されました:\n" "%s" #, python-format msgid "The given image info does not have a valid image id: %s" msgstr "指定されたイメージ情報には有効なイメージ ID がありません: %s" #, python-format msgid "" "The hints \"%(invalid_hints)s\" are invalid. Valid hints are: " "\"%(valid_hints)s\"" msgstr "" "ヒント「%(invalid_hints)s」は無効です。有効なヒントは以下のとおりです: " "「%(valid_hints)s」" msgid "" "The maximum number of items returned in a single response from a collection " "resource." msgstr "コレクションリソースから 1 回の応答で最大数の項目が返されました。" #, python-format msgid "The method %(method)s does not support HTTP %(http)s" msgstr "メソッド %(method)s では HTTP %(http)s はサポートされていません" #, python-format msgid "The node %s didn't return 'macs' as the key with inspection." msgstr "ノード %s から検査に用いるキーとして「macs」が返されませんでした。" #, python-format msgid "The node %s didn't return 'properties' as the key with inspection." msgstr "" "ノード %s から検査に用いるキーとして「properties」が返されませんでした。" msgid "The provided endpoint is invalid" msgstr "指定されたエンドポイントが無効です" msgid "The region used for getting endpoints of OpenStack services." msgstr "OpenStack サービスのエンドポイントを取得するために使用するリージョン。" #, python-format msgid "" "The requested action \"%(action)s\" can not be performed on node \"%(node)s" "\" while it is in state \"%(state)s\"." msgstr "" "要求されたアクション「%(action)s」は、ノード「%(node)s」が状態「%(state)s」で" "ある間はそのノードに対して実行できません。" #, python-format msgid "The requested action \"%(action)s\" could not be understood." msgstr "要求されたアクション「%(action)s」を認識できませんでした。" #, python-format msgid "The sort_key value %(key)s is an invalid field for sorting" msgstr "sort_key の値 %(key)s が、並び替えでは使用できないフィールドです" msgid "Time (in seconds) to wait for the console subprocess to start." msgstr "コンソールのサブプロセスの起動を待機する時間 (秒単位)" msgid "Timeout (in seconds) for iLO operations" msgstr "iLO 操作のタイムアウト (秒単位)" msgid "Timeout (in seconds) for iRMC operations" msgstr "iRMC 処理のタイムアウト (秒単位)" msgid "" "Timeout (in seconds) for iRMC operations. The default value is 60. Optional." msgstr "iRMC 操作のタイムアウト (秒)。デフォルト値は 60。オプション。" #, python-format msgid "Timeout reached while waiting for callback for node %s" msgstr "ノード %s に対するコールバックを待機中にタイムアウトになりました" msgid "Top-level directory for maintaining ironic's state." 
msgstr "ironic の状態を維持するための最上位ディレクトリー。" msgid "UCS Manager admin/server-profile username. Required." msgstr "UCS Manager の管理者/サーバープロファイルのユーザー名。必須。" msgid "UCS Manager password. Required." msgstr "UCS Manager のパスワード。必須。" msgid "UCS Manager service-profile name. Required." msgstr "UCS Manager サービスプロファイル名。必須。" msgid "URL for connecting to neutron." msgstr "neutron に接続するための URL。" msgid "" "URL of Ironic API service. If not set ironic can get the current value from " "the keystone service catalog." msgstr "" "Ironic API サービスの URL。設定されていない場合、ironic は keystone のサービ" "スカタログから現在の値を取得できます。" msgid "UUID (from Glance) of the deployment ISO. Required." msgstr "デプロイメント ISO の (Glance からの) UUID。必須。" msgid "UUID (from Glance) of the deployment kernel. Required." msgstr "デプロイカーネルの (Glance からの) UUID。必須。" msgid "" "UUID (from Glance) of the ramdisk that is mounted at boot time. Required." msgstr "ブート時にマウントされた RAM ディスクの (Glance からの) UUID。必須。" msgid "" "UUID (from Glance) of the ramdisk with agent that is used at deploy time. " "Required." msgstr "" "デプロイ時に使用されるエージェントのある RAM ディスクの (Glance からの) " "UUID。必須。" msgid "Unable to communicate with the server." msgstr "サーバーと通信できません。" #, python-format msgid "" "Unable to decode response as JSON.\n" "Request URL: %(url)s\n" "Request body: \"%(body)s\"\n" "Response status code: %(code)s\n" "Response: \"%(response)s\"" msgstr "" "応答として JSON をデコードできません。\n" "リクエスト URL: %(url)s\n" "リクエストボディー: \"%(body)s\"\n" "応答ステータスコード: %(code)s\n" "応答: \"%(response)s\"" msgid "Unable to import ImcSdk library" msgstr "ImcSdk ライブラリーをインポートできません" msgid "Unable to import UcsSdk library" msgstr "UcsSdk ライブラリーをインポートできません" msgid "Unable to import iboot library" msgstr "iboot ライブラリーをインポートできません" msgid "Unable to import proliantutils library" msgstr "proliantutils ライブラリーをインポートできません" msgid "Unable to import pyghmi IPMI library" msgstr "pyghmi IPMI ライブラリーをインポートできません" msgid "Unable to import pyghmi library" msgstr "pyghmi ライブラリーをインポートできません" msgid "Unable to import pyremotevbox library" msgstr "pyremotevbox ライブラリーをインポートできません" msgid "Unable to import pysnmp library" msgstr "pysnmp ライブラリーをインポートできません" msgid "Unable to import python-scciclient library" msgstr "python-scciclient ライブラリーをインポートできません" msgid "Unable to import pywsman library" msgstr "pywsman ライブラリーをインポートできません" msgid "Unable to import seamicroclient library" msgstr "seamicroclient ライブラリーをインポートできません" msgid "" "Unable to locate usable ipmitool command in the system path when checking " "ipmitool version" msgstr "" "ipmitool バージョンの確認時に、使用可能な ipmitool コマンドがシステムのパスに" "見つかりませんでした" msgid "Unacceptable parameters." msgstr "受け入れられないパラメーター。" #, fuzzy, python-format msgid "Unknown lookup payload version: %s" msgstr "ルックアップ・ペイロード・バージョンが不明です: %s" #, python-format msgid "Unsupported target_state: %s" msgstr "サポートされない target_state: %s" #, python-format msgid "Update DHCP options on port: %(port_id)s failed." msgstr "ポート %(port_id)s での DHCP オプションの更新に失敗しました。" #, python-format msgid "Update MAC address on port: %(port_id)s failed." msgstr "ポート %(port_id)s での MAC アドレスの更新に失敗しました。" msgid "" "Upgrade the database schema to the latest version. Optionally, use --" "revision to specify an alembic revision string to upgrade to." msgstr "" "データベースのスキーマを最新バージョンにアップグレードします。オプションとし" "て、--revision を使用して、アップグレードする Alembic バージョン文字列を指定" "することもできます。" msgid "Username for the VirtualBox host. Default value is ''. Optional." msgstr "VirtualBox ホストのユーザー名。デフォルト値は空白です。オプション。" msgid "Username for the iRMC with administrator privileges. Required." 
msgstr "管理者権限を持つ iRMC のユーザー名。必須。" msgid "Username to access the chassis manager REST API. Required." msgstr "シャーシマネージャーの REST API にアクセスするユーザー名。必須。" msgid "Username to log into AMT system. Required." msgstr "AMT システムにログインするためのユーザー名。必須。" msgid "Valid cleaning network UUID not provided" msgstr "有効なクリーニングネットワーク UUID が指定されていません" #, python-format msgid "Validation of image href %(image_href)s failed, reason: %(reason)s" msgstr "イメージ href %(image_href)s の検証に失敗しました。理由: %(reason)s" #, python-format msgid "" "Value '%s' for remote_image_share_root isn't a directory or doesn't exist." msgstr "" "remote_image_share_root の値「%s」が、ディレクトリーではないか、存在しませ" "ん。" #, python-format msgid "" "Value '%s' for remote_image_share_type is not supported value either 'NFS' " "or 'CIFS'." msgstr "" "remote_image_share_type の値「%s」はサポートされません。「NFS」と「CIFS」がサ" "ポートされます。" #, python-format msgid "" "Value for ipmi_bridging is provided as %s, but IPMI bridging is not " "supported by the IPMI utility installed on host. Ensure ipmitool version is " "> 1.8.11" msgstr "" "ipmi_bridging の値に %s が指定されましたが、IPMI ブリッジングはホストにインス" "トールされた IPMI ユーティリティーではサポートされていません。ipmitool バー" "ジョンが 1.8.11 より上であることを確認してください。" #, python-format msgid "" "Version %(ver)s was requested but the minor version is not supported by this " "service. The supported version range is: [%(min)s, %(max)s]." msgstr "" "バージョン %(ver)s が要求されましたが、このサービスでは、マイナーバージョンは" "サポートされていません。サポートされるバージョン範囲は [%(min)s、%(max)s] で" "す。" #, python-format msgid "VirtualBox operation '%(operation)s' failed. Error: %(error)s" msgstr "VirtualBox の処理「%(operation)s」が失敗しました。エラー: %(error)s" msgid "" "Wake-On-Lan needs at least one port resource to be registered in the node" msgstr "" "Wake-On-Lan は、少なくとも 1 つのポートがノードに登録されている必要があります" #, python-format msgid "" "When creating cleaning ports, DHCP provider didn't return VIF port ID for %s" msgstr "" "クリーニングポートの作成中、DHCP プロバイダーが %s の仮想インターフェースの" "ポート ID を返しませんでした。" #, python-format msgid "" "While executing step %(step)s on node %(node)s, step returned invalid value: " "%(val)s" msgstr "" "ノード %(node)s でステップ %(step)s を実行中に無効な値が返されました: %(val)s" #, python-format msgid "_set_power_state called with invalid power state '%s'" msgstr "無効な電源状態「%s」で _set_power_state が呼び出されました" #, python-format msgid "bad response: %s" msgstr "不正な応答: %s" msgid "" "bridging_type; default is \"no\". One of \"single\", \"dual\", \"no\". " "Optional." msgstr "" "bridging_type; デフォルトは「no」です。「single」、「dual」、「no」のいずれか" "を指定します。オプション。" msgid "delete object" msgstr "delete オブジェクト" msgid "" "destination address for bridged request. Required only if ipmi_bridging is " "set to \"single\" or \"dual\"." msgstr "" "ブリッジ要求の宛先アドレス。ipmi_bridging が「single」または「dual」に設定さ" "れている場合にのみ必要です。" msgid "" "destination channel for bridged request. Required only if ipmi_bridging is " "set to \"single\" or \"dual\"." msgstr "" "ブリッジ要求の宛先チャネル。ipmi_bridging が「single」または「dual」に設定さ" "れている場合にのみ必要です。" msgid "disabled" msgstr "無効化" msgid "disabling" msgstr "無効化中" msgid "enabled" msgstr "有効化" msgid "enabling" msgstr "有効化中" #, fuzzy, python-format msgid "fmt=%(fmt)s backed by: %(backing_file)s" msgstr "fmt=%(fmt)s の基盤: %(backing_file)s" #, python-format msgid "get_clean_steps for node %(node)s returned invalid result: %(result)s" msgstr "" "ノード %(node)s の get_clean_steps から無効な結果が返されました: %(result)s" msgid "getting boot device" msgstr "ブートデバイスの取得中" msgid "getting power status" msgstr "電源状態の取得中" msgid "head account" msgstr "head アカウント" msgid "head object" msgstr "head オブジェクト" msgid "iBoot PDU port; default is 9100. Optional." 
msgstr "iBoot PDU ポート。デフォルトは 9100。オプション。" msgid "iBoot PDU relay id must be an integer." msgstr "iBoot PDU リレー ID は整数でなければなりません。" msgid "iBoot PDU relay id; default is 1. Optional." msgstr "iBoot PDU リレー ID。デフォルトは 1。オプション。" msgid "iLO get_power_status" msgstr "iLO get_power_status" msgid "iLO license check" msgstr "iLO ライセンスチェック" msgid "iLO set_power_state" msgstr "iLO set_power_state" msgid "iPXE boot is enabled but no HTTP URL or HTTP root was specified." msgstr "" "iPXE ブートが有効になっていますが、HTTP URL も HTTP ルートも指定されませんで" "した。" #, python-format msgid "iRMC %(operation)s failed. Reason: %(error)s" msgstr "iRMC %(operation)s が失敗しました。理由: %(error)s" msgid "iRMC set_power_state" msgstr "iRMC 電源操作" #, python-format msgid "iRMC shared file system '%(share)s' is not mounted." msgstr "iRMC 共有ファイルシステム「%(share)s」がマウントされていません。" #, python-format msgid "" "iSCSI connection did not become active after attempting to verify %d times." msgstr "iSCSI 接続は %d 回の検証試行後にアクティブになりませんでした。" #, python-format msgid "" "iSCSI connection was not seen by the file system after attempting to verify " "%d times." msgstr "" "iSCSI 接続は %d 回の検証試行後にファイルシステムによって参照されませんでし" "た。" #, python-format msgid "" "image_source's image_checksum must be provided in instance_info for node %s" msgstr "" "image_source の image_checksum をノード %s の instance_info で指定する必要が" "あります" msgid "" "ironic-inspector HTTP endpoint. If this is not set, the ironic-inspector " "client default (http://127.0.0.1:5050) will be used." msgstr "" "ironic-inspector の HTTP エンドポイント。設定されていない場合、ironic-" "inspector クライアントのデフォルト (http://127.0.0.1:5050) が使用されます。" #, python-format msgid "ironic-inspector inspection failed: %s" msgstr "ironic-inspector による検査に失敗しました: %s" msgid "ironic-inspector support is disabled" msgstr "ironic-inspector のサポートが無効化されています" msgid "" "local IPMB address for bridged requests. Used only if ipmi_bridging is set " "to \"single\" or \"dual\". Optional." msgstr "" "ブリッジ要求のローカル IPMB アドレス。ipmi_bridging が「single」または" "「dual」に設定されている場合にのみ使用されます。オプション。" msgid "" "new password for iLO. Required if the clean step 'reset_ilo_credential' is " "enabled." msgstr "" "iLO の新規パスワード。クリーニングステップ「reset_ilo_credential」が有効に" "なっている場合は必須です。" #, python-format msgid "node %(node)s command status errored: %(error)s" msgstr "ノード %(node)s のコマンドステータスがエラーでした: %(error)s" msgid "node's UDP port to connect to. Only required for console access." msgstr "接続先のノードの UDP ポート。コンソールアクセスのみに必要です。" msgid "not supported" msgstr "サポートされていません" #, python-format msgid "" "parse ipmi sensor data failed, get nothing with input data: %(sensors_data)s" msgstr "" "ipmi センサーデータの解析に失敗しました。入力データ %(sensors_data)s から何も" "得られません" #, python-format msgid "" "parse ipmi sensor data failed, unknown sensor type data: %(sensors_data)s" msgstr "" "ipmi センサーデータの解析に失敗しました。不明なセンサータイプデータ: " "%(sensors_data)s" msgid "password for ilo_username. Required." msgstr "ilo_username のパスワード。必須。" msgid "" "password to use for authentication or for unlocking a private key. One of " "this, ssh_key_contents, or ssh_key_filename must be specified." msgstr "" "認証または秘密鍵のロック解除に使用するパスワード。この中の 1 つ、" "ssh_key_contents、または ssh_key_filename のいずれかを指定する必要がありま" "す。" msgid "password used for authentication. Required." msgstr "認証に使用するパスワード。必須。" msgid "password. Optional." msgstr "パスワード。オプション。" msgid "password. Required." msgstr "パスワード。必須。" msgid "path used for WS-Man endpoint; default is \"/wsman\". Optional." msgstr "" "WS-Man エンドポイントに使用するパス。デフォルトは「/wsman」。オプション。" msgid "port on the node to connect to; default is 22. Optional." 
msgstr "接続先のノードのポート。デフォルトは 22。オプション。" msgid "port to be used for iLO operations. Optional." msgstr "iLO 操作に使用するポート。オプション。" msgid "port used for WS-Man endpoint; default is 443. Optional." msgstr "WS-Man エンドポイントに使用するポート。デフォルトは 443。オプション。" msgid "post object" msgstr "post オブジェクト" msgid "" "private key(s). One of this, ssh_key_filename, or ssh_password must be " "specified." msgstr "" "秘密鍵。ssh_key_filename または ssh_password のどちらかを指定する必要がありま" "す。" #, python-format msgid "privilege level; default is ADMINISTRATOR. One of %s. Optional." msgstr "" "特権レベル。デフォルトは ADMINISTRATOR です。%s のいずれかです。オプション。" msgid "" "protocol used for WS-Man endpoint; one of http, https; default is \"https\". " "Optional." msgstr "" "WS-Man エンドポイントで使用するプロトコル。http または https のいずれかを指定" "します。デフォルトは「https」です。オプション。" msgid "provisioning" msgstr "プロビジョニング" msgid "put container" msgstr "put コンテナー" msgid "put object" msgstr "put オブジェクト" msgid "python-ironic-inspector-client Python module not found" msgstr "python-ironic-inspector-client Python モジュールが見つかりません" msgid "rebooting" msgstr "再起動中" msgid "server ID. Required." msgstr "サーバー ID。必須。" #, python-format msgid "" "set_boot_device called with invalid device %(device)s for node %(node_id)s." msgstr "" "set_boot_device がノード %(node_id)s の無効なデバイス %(device)s を使用して呼" "び出されました。" #, python-format msgid "set_power_state called for %(node)s with invalid state %(state)s" msgstr "" "%(node)s の set_power_state が、無効な状態 %(state)s で呼び出されました" #, python-format msgid "" "set_power_state called for Node %(node)s with invalid power state %(pstate)s." msgstr "" "set_power_state が、無効な電源状態 %(pstate)s を持つノード %(node)s に対して" "呼び出されました。" #, python-format msgid "set_power_state called with an invalid power state: %s." msgstr "無効な電源状態 %s で set_power_state が呼び出されました。" #, python-format msgid "set_power_state called with an invalid powerstate: %s." msgstr "set_power_state が無効な電源状態で呼び出されました: %s" #, python-format msgid "set_power_state called with invalid power state %s." msgstr "無効な電源状態 %s で set_power_state が呼び出されました。" #, python-format msgid "set_power_state called with invalid power state '%s'" msgstr "set_power_state が無効な電源状態「%s」で呼び出されました。" msgid "set_power_state called with invalid power state." msgstr "set_power_state が無効な電源状態で呼び出されました。" msgid "setting boot device" msgstr "ブートデバイスの設定中" msgid "setting power status" msgstr "電源状態の設定中" msgid "skipping non-root volumes" msgstr "非ルートボリュームのスキップ中" msgid "skipping root volume" msgstr "ルートボリュームのスキップ中" msgid "" "the version of the IPMI protocol; default is \"2.0\". One of \"1.5\", " "\"2.0\". Optional." msgstr "" "IPMI プロトコルのバージョン。デフォルトは \"2.0\"。 \"1.5\"、\"2.0\" のいず" "れか。オプション。" msgid "timeout (in seconds) for iLO operations. Optional." msgstr "iLO 操作のタイムアウト (秒)。オプション。" msgid "timeout reached while inspecting the node" msgstr "ノードの検査中にタイムアウトに達しました" msgid "" "transit address for bridged request. Required only if ipmi_bridging is set " "to \"dual\"." msgstr "" "ブリッジ要求の通過アドレス。ipmi_bridging が「dual」に設定されている場合にの" "み必要です。" msgid "" "transit channel for bridged request. Required only if ipmi_bridging is set " "to \"dual\"." msgstr "" "ブリッジ要求の通過チャネル。ipmi_bridging が「dual」に設定されている場合にの" "み必要です。" msgid "username for the iLO with administrator privileges. Required." msgstr "管理者権限を持つ iLO のユーザー名。必須。" msgid "username to authenticate as. Required." msgstr "認証するユーザー名。必須。" msgid "username used for authentication. Required." msgstr "認証に使用するユーザー名。必須。" msgid "username. Required." msgstr "ユーザー名。必須。" msgid "username; default is NULL user. Optional." 
msgstr "ユーザー名。デフォルトは NULL。オプション。" msgid "version of SeaMicro API client; default is 2. Optional." msgstr "SeaMicro API クライアントのバージョン。デフォルトは 2。オプション。" msgid "whether to enable inspection using ironic-inspector" msgstr "ironic-inspector を使用した検査を有効化するかどうか" ironic-5.1.0/ironic/locale/ja/LC_MESSAGES/ironic-log-critical.po0000664000567000056710000000154612674513466025260 0ustar jenkinsjenkins00000000000000# Translations template for ironic. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the ironic project. # # Translators: # Masaharu Miyamoto , 2015 # KATO Tomoyuki , 2015. #zanata msgid "" msgstr "" "Project-Id-Version: ironic 4.3.1.dev202\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" "POT-Creation-Date: 2016-01-27 02:09+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2015-02-13 05:11+0000\n" "Last-Translator: Masaharu Miyamoto \n" "Language: ja\n" "Plural-Forms: nplurals=1; plural=0;\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.7.3\n" "Language-Team: Japanese\n" msgid "Failed to start keepalive" msgstr "キープアライブの起動に失敗しました" ironic-5.1.0/ironic/locale/ironic.pot0000664000567000056710000031422312674513466020715 0ustar jenkinsjenkins00000000000000# Translations template for ironic. # Copyright (C) 2016 ORGANIZATION # This file is distributed under the same license as the ironic project. # FIRST AUTHOR , 2016. # #, fuzzy msgid "" msgstr "" "Project-Id-Version: ironic 4.3.1.dev202\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" "POT-Creation-Date: 2016-01-27 06:37+0000\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language-Team: LANGUAGE \n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=utf-8\n" "Content-Transfer-Encoding: 8bit\n" "Generated-By: Babel 2.2.0\n" #: ironic/netconf.py:28 msgid "" "IP address of this host. If unset, will determine the IP " "programmatically. If unable to do so, will use \"127.0.0.1\"." msgstr "" #: ironic/api/__init__.py:23 msgid "The IP address on which ironic-api listens." msgstr "" #: ironic/api/__init__.py:26 msgid "The TCP port on which ironic-api listens." msgstr "" #: ironic/api/__init__.py:29 msgid "" "The maximum number of items returned in a single response from a " "collection resource." msgstr "" #: ironic/api/__init__.py:33 msgid "" "Public URL to use when building the links to the API resources (for " "example, \"https://ironic.rocks:6384\"). If None the links will be built " "using the request's host URL. If the API is operating behind a proxy, you" " will want to change this to represent the proxy's URL. Defaults to None." msgstr "" #: ironic/api/__init__.py:40 msgid "" "Number of workers for OpenStack Ironic API service. The default is equal " "to the number of CPUs available if that can be determined, else a default" " worker count of 1 is returned." msgstr "" #: ironic/api/__init__.py:46 msgid "" "Enable the integrated stand-alone API to service requests via HTTPS " "instead of HTTP. If there is a front-end service performing HTTPS " "offloading from the service, this option should be False; note, you will " "want to change public API endpoint to represent SSL termination URL with " "'public_endpoint' option." msgstr "" #: ironic/api/app.py:33 msgid "" "Authentication strategy used by ironic-api: one of \"keystone\" or " "\"noauth\". \"noauth\" should not be used in a production environment " "because all authentication will be disabled." 
msgstr "" #: ironic/api/app.py:38 msgid "" "Return server tracebacks in the API response for any error responses. " "WARNING: this is insecure and should not be used in a production " "environment." msgstr "" #: ironic/api/app.py:43 msgid "" "Enable pecan debug mode. WARNING: this is insecure and should not be used" " in a production environment." msgstr "" #: ironic/api/controllers/base.py:104 #, python-format msgid "Invalid value for %s header" msgstr "" #: ironic/api/controllers/v1/__init__.py:150 #, python-format msgid "" "Mutually exclusive versions requested. Version %(ver)s requested but not " "supported by this service. The supported version range is: [%(min)s, " "%(max)s]." msgstr "" #: ironic/api/controllers/v1/__init__.py:159 #, python-format msgid "" "Version %(ver)s was requested but the minor version is not supported by " "this service. The supported version range is: [%(min)s, %(max)s]." msgstr "" #: ironic/api/controllers/v1/chassis.py:178 #: ironic/api/controllers/v1/node.py:901 ironic/api/controllers/v1/port.py:209 #, python-format msgid "The sort_key value %(key)s is an invalid field for sorting" msgstr "" #: ironic/api/controllers/v1/node.py:432 ironic/conductor/manager.py:511 msgid "provisioning" msgstr "" #: ironic/api/controllers/v1/node.py:454 #, python-format msgid "Adding a config drive is only supported when setting provision state to %s" msgstr "" #: ironic/api/controllers/v1/node.py:480 #, python-format msgid "The requested action \"%(action)s\" could not be understood." msgstr "" #: ironic/api/controllers/v1/node.py:889 msgid "Chassis id not specified." msgstr "" #: ironic/api/controllers/v1/node.py:988 #, python-format msgid "" "Node %s can not update the driver while the console is enabled. Please " "stop the console first." msgstr "" #: ironic/api/controllers/v1/node.py:1131 #, python-format msgid "Cannot create node with invalid name %(name)s" msgstr "" #: ironic/api/controllers/v1/node.py:1175 #, python-format msgid "Node %s can not be updated while a state transition is in progress." msgstr "" #: ironic/api/controllers/v1/node.py:1181 #, python-format msgid "Node %(node)s: Cannot change name to invalid name '%(name)s'" msgstr "" #: ironic/api/controllers/v1/port.py:197 msgid "Node identifier not specified." msgstr "" #: ironic/api/controllers/v1/types.py:145 #, python-format msgid "%s is not JSON serializable" msgstr "" #: ironic/api/controllers/v1/types.py:242 #, python-format msgid "'%s' is an internal attribute and can not be updated" msgstr "" #: ironic/api/controllers/v1/types.py:246 #, python-format msgid "'%s' is a mandatory attribute and can not be removed" msgstr "" #: ironic/api/controllers/v1/types.py:251 msgid "'add' and 'replace' operations need a value" msgstr "" #: ironic/api/controllers/v1/utils.py:46 msgid "Limit must be positive" msgstr "" #: ironic/api/controllers/v1/utils.py:53 #, python-format msgid "Invalid sort direction: %s. 
Acceptable values are 'asc' or 'desc'" msgstr "" #: ironic/api/controllers/v1/utils.py:63 #, python-format msgid "Adding a new attribute (%s) to the root of the resource is not allowed" msgstr "" #: ironic/api/controllers/v1/utils.py:143 msgid "Method not specified" msgstr "" #: ironic/api/controllers/v1/utils.py:187 #, python-format msgid "Field(s) \"%s\" are not valid" msgstr "" #: ironic/api/controllers/v1/utils.py:214 #, python-format msgid "Provision state \"%s\" is not valid" msgstr "" #: ironic/api/middleware/auth_token.py:44 #, python-format msgid "Cannot compile public API routes: %s" msgstr "" #: ironic/api/middleware/parsable_error.py:52 #, python-format msgid "ErrorDocumentMiddleware received an invalid status %s" msgstr "" #: ironic/cmd/dbsync.py:60 msgid "" "Upgrade the database schema to the latest version. Optionally, use " "--revision to specify an alembic revision string to upgrade to." msgstr "" #: ironic/cmd/dbsync.py:68 msgid "" "Downgrade the database schema to the oldest revision. While optional, one" " should generally use --revision to specify the alembic revision string " "to downgrade to." msgstr "" #: ironic/cmd/dbsync.py:80 msgid "Create a new alembic revision. Use --message to set the message string." msgstr "" #: ironic/cmd/dbsync.py:88 msgid "Print the current version information and exit." msgstr "" #: ironic/cmd/dbsync.py:93 msgid "Create the database schema." msgstr "" #: ironic/cmd/dbsync.py:99 msgid "Available commands" msgstr "" #: ironic/common/dhcp_factory.py:25 msgid "" "DHCP provider to use. \"neutron\" uses Neutron, and \"none\" uses a no-op" " provider." msgstr "" #: ironic/common/driver_factory.py:31 msgid "" "Specify the list of drivers to load during service initialization. " "Missing drivers, or drivers which fail to initialize, will prevent the " "conductor service from starting. The option default is a recommended set " "of production-oriented drivers. A complete list of drivers present on " "your system may be found by enumerating the \"ironic.drivers\" " "entrypoint. An example may be found in the developer documentation " "online." msgstr "" #: ironic/common/exception.py:38 msgid "" "Used if there is a formatting error when generating an exception message " "(a programming error). If True, raise an exception; if False, use the " "unformatted message." msgstr "" #: ironic/common/exception.py:59 msgid "An unknown exception occurred." msgstr "" #: ironic/common/exception.py:115 msgid "Not authorized." msgstr "" #: ironic/common/exception.py:120 msgid "Operation not permitted." msgstr "" #: ironic/common/exception.py:124 msgid "Unacceptable parameters." msgstr "" #: ironic/common/exception.py:129 msgid "Conflict." msgstr "" #: ironic/common/exception.py:134 msgid "Resource temporarily unavailable, please retry." msgstr "" #: ironic/common/exception.py:140 msgid "Request not acceptable." msgstr "" #: ironic/common/exception.py:145 msgid "Invalid resource state." msgstr "" #: ironic/common/exception.py:149 #, python-format msgid "A node with UUID %(uuid)s already exists." msgstr "" #: ironic/common/exception.py:153 #, python-format msgid "A port with MAC address %(mac)s already exists." msgstr "" #: ironic/common/exception.py:157 #, python-format msgid "A chassis with UUID %(uuid)s already exists." msgstr "" #: ironic/common/exception.py:161 #, python-format msgid "A port with UUID %(uuid)s already exists." 
msgstr "" #: ironic/common/exception.py:165 #, python-format msgid "" "Instance %(instance_uuid)s is already associated with a node, it cannot " "be associated with this other node %(node)s" msgstr "" #: ironic/common/exception.py:170 #, python-format msgid "A node with name %(name)s already exists." msgstr "" #: ironic/common/exception.py:174 #, python-format msgid "Expected a uuid but received %(uuid)s." msgstr "" #: ironic/common/exception.py:178 #, python-format msgid "Expected a logical name or uuid but received %(name)s." msgstr "" #: ironic/common/exception.py:182 #, python-format msgid "Expected a logical name but received %(name)s." msgstr "" #: ironic/common/exception.py:186 #, python-format msgid "Expected an uuid or int but received %(identity)s." msgstr "" #: ironic/common/exception.py:190 #, python-format msgid "Expected a MAC address but received %(mac)s." msgstr "" #: ironic/common/exception.py:194 #, python-format msgid "" "The requested action \"%(action)s\" can not be performed on node " "\"%(node)s\" while it is in state \"%(state)s\"." msgstr "" #: ironic/common/exception.py:199 #, python-format msgid "Couldn't apply patch '%(patch)s'. Reason: %(reason)s" msgstr "" #: ironic/common/exception.py:203 #, python-format msgid "Failed to deploy instance: %(reason)s" msgstr "" #: ironic/common/exception.py:207 ironic/common/exception.py:211 #, python-format msgid "Image %(image_id)s is unacceptable: %(reason)s" msgstr "" #: ironic/common/exception.py:217 ironic/common/exception.py:221 #, python-format msgid "%(err)s" msgstr "" #: ironic/common/exception.py:225 msgid "Resource already exists." msgstr "" #: ironic/common/exception.py:229 msgid "Resource could not be found." msgstr "" #: ironic/common/exception.py:234 #, python-format msgid "Failed to load DHCP provider %(dhcp_provider_name)s, reason: %(reason)s" msgstr "" #: ironic/common/exception.py:239 #, python-format msgid "Could not find the following driver(s): %(driver_name)s." msgstr "" #: ironic/common/exception.py:243 #, python-format msgid "Image %(image_id)s could not be found." msgstr "" #: ironic/common/exception.py:247 #, python-format msgid "No valid host was found. Reason: %(reason)s" msgstr "" #: ironic/common/exception.py:251 #, python-format msgid "Instance %(instance)s could not be found." msgstr "" #: ironic/common/exception.py:255 #, python-format msgid "Node %(node)s could not be found." msgstr "" #: ironic/common/exception.py:259 #, python-format msgid "Node %(node)s is associated with instance %(instance)s." msgstr "" #: ironic/common/exception.py:263 #, python-format msgid "Port %(port)s could not be found." msgstr "" #: ironic/common/exception.py:267 #, python-format msgid "Update DHCP options on port: %(port_id)s failed." msgstr "" #: ironic/common/exception.py:271 #, python-format msgid "Clean up DHCP options on node: %(node)s failed." msgstr "" #: ironic/common/exception.py:275 #, python-format msgid "Retrieve IP address on port: %(port_id)s failed." msgstr "" #: ironic/common/exception.py:279 #, python-format msgid "Invalid IPv4 address %(ip_address)s." msgstr "" #: ironic/common/exception.py:283 #, python-format msgid "Update MAC address on port: %(port_id)s failed." msgstr "" #: ironic/common/exception.py:287 #, python-format msgid "Chassis %(chassis)s could not be found." msgstr "" #: ironic/common/exception.py:291 #, python-format msgid "Conductor %(conductor)s cannot be started because no drivers were loaded." 
msgstr "" #: ironic/common/exception.py:296 #, python-format msgid "Conductor %(conductor)s could not be found." msgstr "" #: ironic/common/exception.py:300 #, python-format msgid "Conductor %(conductor)s already registered." msgstr "" #: ironic/common/exception.py:304 #, python-format msgid "Failed to set node power state to %(pstate)s." msgstr "" #: ironic/common/exception.py:308 msgid "An exclusive lock is required, but the current context has a shared lock." msgstr "" #: ironic/common/exception.py:313 #, python-format msgid "Failed to toggle maintenance-mode flag for node %(node)s: %(reason)s" msgstr "" #: ironic/common/exception.py:318 #, python-format msgid "Console access is not enabled on node %(node)s" msgstr "" #: ironic/common/exception.py:322 #, python-format msgid "" "The %(op)s operation can't be performed on node %(node)s because it's in " "maintenance mode." msgstr "" #: ironic/common/exception.py:327 #, python-format msgid "" "Cannot complete the requested action because chassis %(chassis)s contains" " nodes." msgstr "" #: ironic/common/exception.py:332 #, python-format msgid "IPMI call failed: %(cmd)s." msgstr "" #: ironic/common/exception.py:336 msgid "" "Failed to connect to AMT service. This could be caused by the wrong " "amt_address or bad network environment." msgstr "" #: ironic/common/exception.py:341 #, python-format msgid "AMT call failed: %(cmd)s." msgstr "" #: ironic/common/exception.py:345 msgid "MSFT OCS call failed." msgstr "" #: ironic/common/exception.py:349 #, python-format msgid "Failed to establish SSH connection to host %(host)s." msgstr "" #: ironic/common/exception.py:353 #, python-format msgid "Failed to execute command via SSH: %(cmd)s." msgstr "" #: ironic/common/exception.py:357 #, python-format msgid "" "Driver %(driver)s does not support %(extension)s (disabled or not " "implemented)." msgstr "" #: ironic/common/exception.py:362 #, python-format msgid "Connection to glance host %(host)s:%(port)s failed: %(reason)s" msgstr "" #: ironic/common/exception.py:367 #, python-format msgid "Not authorized for image %(image_id)s." msgstr "" #: ironic/common/exception.py:371 #, python-format msgid "Invalid image href %(image_href)s." msgstr "" #: ironic/common/exception.py:375 #, python-format msgid "Validation of image href %(image_href)s failed, reason: %(reason)s" msgstr "" #: ironic/common/exception.py:380 #, python-format msgid "Failed to download image %(image_href)s, reason: %(reason)s" msgstr "" #: ironic/common/exception.py:384 msgid "Not authorized in Keystone." msgstr "" #: ironic/common/exception.py:392 #, python-format msgid "" "Service type %(service_type)s with endpoint type %(endpoint_type)s not " "found in keystone service catalog." msgstr "" #: ironic/common/exception.py:397 msgid "Connection failed" msgstr "" #: ironic/common/exception.py:401 msgid "Requested OpenStack Images API is forbidden" msgstr "" #: ironic/common/exception.py:409 msgid "The provided endpoint is invalid" msgstr "" #: ironic/common/exception.py:413 msgid "Unable to communicate with the server." msgstr "" #: ironic/common/exception.py:429 #, python-format msgid "Could not find config at %(path)s" msgstr "" #: ironic/common/exception.py:433 #, python-format msgid "" "Node %(node)s is locked by host %(host)s, please retry after the current " "operation is completed." 
msgstr "" #: ironic/common/exception.py:438 #, python-format msgid "Node %(node)s found not to be locked on release" msgstr "" #: ironic/common/exception.py:442 msgid "" "Requested action cannot be performed due to lack of free conductor " "workers." msgstr "" #: ironic/common/exception.py:452 #, python-format msgid "Invalid configuration file. %(error_msg)s" msgstr "" #: ironic/common/exception.py:456 #, python-format msgid "Driver %(driver)s could not be loaded. Reason: %(reason)s." msgstr "" #: ironic/common/exception.py:464 #, python-format msgid "Could not find pid in pid file %(pid_path)s" msgstr "" #: ironic/common/exception.py:468 #, python-format msgid "Console subprocess failed to start. %(error)s" msgstr "" #: ironic/common/exception.py:472 #, python-format msgid "Failed to create the password file. %(error)s" msgstr "" #: ironic/common/exception.py:480 #, python-format msgid "%(operation)s failed, error: %(error)s" msgstr "" #: ironic/common/exception.py:484 #, python-format msgid "%(operation)s not supported. error: %(error)s" msgstr "" #: ironic/common/exception.py:488 #, python-format msgid "DRAC operation failed. Reason: %(error)s" msgstr "" #: ironic/common/exception.py:496 #, python-format msgid "" "DRAC client failed. Last error (cURL error code): %(last_error)s, fault " "string: \"%(fault_string)s\" response_code: %(response_code)s" msgstr "" #: ironic/common/exception.py:503 #, python-format msgid "DRAC operation failed. _msg_fmt: %(_msg_fmt)s" msgstr "" #: ironic/common/exception.py:507 #, python-format msgid "" "DRAC operation yielded return value %(actual_return_value)s that is " "neither error nor expected %(expected_return_value)s" msgstr "" #: ironic/common/exception.py:513 #, python-format msgid "" "Another job with ID %(job_id)s is already created to configure " "%(target)s. Wait until existing job is completed or is canceled" msgstr "" #: ironic/common/exception.py:519 #, python-format msgid "" "Invalid filter dialect '%(invalid_filter)s'. Supported options are " "%(supported)s" msgstr "" #: ironic/common/exception.py:524 #, python-format msgid "Failed to get sensor data for node %(node)s. Error: %(error)s" msgstr "" #: ironic/common/exception.py:529 #, python-format msgid "Failed to parse sensor data for node %(node)s. Error: %(error)s" msgstr "" #: ironic/common/exception.py:534 #, python-format msgid "" "Disk volume where '%(path)s' is located doesn't have enough disk space. " "Required %(required)d MiB, only %(actual)d MiB available space present." msgstr "" #: ironic/common/exception.py:540 #, python-format msgid "Creating %(image_type)s image failed: %(error)s" msgstr "" #: ironic/common/exception.py:544 #, python-format msgid "Swift operation '%(operation)s' failed: %(error)s" msgstr "" #: ironic/common/exception.py:548 #, python-format msgid "" "Swift object %(object)s from container %(container)s not found. Operation" " '%(operation)s' failed." msgstr "" #: ironic/common/exception.py:553 #, python-format msgid "SNMP operation '%(operation)s' failed: %(error)s" msgstr "" #: ironic/common/exception.py:557 #, python-format msgid "Failed to create a file system. File system %(fs)s is not supported." msgstr "" #: ironic/common/exception.py:562 #, python-format msgid "iRMC %(operation)s failed. Reason: %(error)s" msgstr "" #: ironic/common/exception.py:566 #, python-format msgid "iRMC shared file system '%(share)s' is not mounted." msgstr "" #: ironic/common/exception.py:570 #, python-format msgid "VirtualBox operation '%(operation)s' failed. 
Error: %(error)s" msgstr "" #: ironic/common/exception.py:575 #, python-format msgid "Failed to inspect hardware. Reason: %(error)s" msgstr "" #: ironic/common/exception.py:579 #, python-format msgid "Failed to clean node %(node)s: %(reason)s" msgstr "" #: ironic/common/exception.py:583 #, python-format msgid "Path %(dir)s does not exist." msgstr "" #: ironic/common/exception.py:587 #, python-format msgid "Directory %(dir)s is not writable." msgstr "" #: ironic/common/exception.py:591 #, python-format msgid "" "Cisco UCS client: operation %(operation)s failed for node %(node)s. " "Reason: %(error)s" msgstr "" #: ironic/common/exception.py:596 #, python-format msgid "Cisco UCS client: connection failed for node %(node)s. Reason: %(error)s" msgstr "" #: ironic/common/exception.py:605 #, python-format msgid "" "Failed to upload %(image_name)s image to web server %(web_server)s, " "reason: %(reason)s" msgstr "" #: ironic/common/exception.py:610 #, python-format msgid "Cisco IMC exception occurred for node %(node)s: %(error)s" msgstr "" #: ironic/common/exception.py:614 #, python-format msgid "OneView exception occurred. Error: %(error)s" msgstr "" #: ironic/common/fsm.py:78 #, python-format msgid "State '%s' does not exist" msgstr "" #: ironic/common/fsm.py:124 #, python-format msgid "Target state '%s' does not exist" msgstr "" #: ironic/common/fsm.py:127 #, python-format msgid "Target state '%s' is not a 'stable' state" msgstr "" #: ironic/common/hash_ring.py:31 msgid "" "Exponent to determine number of hash partitions to use when distributing " "load across conductors. Larger values will result in more even " "distribution of load and less load when rebalancing the ring, but more " "memory usage. Number of partitions per conductor is " "(2^hash_partition_exponent). This determines the granularity of " "rebalancing: given 10 hosts, and an exponent of the 2, there are 40 " "partitions in the ring.A few thousand partitions should make rebalancing " "smooth in most cases. The default is suitable for up to a few hundred " "conductors. Too many partitions has a CPU impact." msgstr "" #: ironic/common/hash_ring.py:45 msgid "" "[Experimental Feature] Number of hosts to map onto each hash partition. " "Setting this to more than one will cause additional conductor services to" " prepare deployment environments and potentially allow the Ironic cluster" " to recover more quickly if a conductor instance is terminated." msgstr "" #: ironic/common/hash_ring.py:53 msgid "Interval (in seconds) between hash ring resets." msgstr "" #: ironic/common/hash_ring.py:90 msgid "Invalid hosts supplied when building HashRing." msgstr "" #: ironic/common/hash_ring.py:121 msgid "Invalid data supplied to HashRing.get_hosts." msgstr "" #: ironic/common/hash_ring.py:208 #, python-format msgid "The driver '%s' is unknown." msgstr "" #: ironic/common/image_service.py:48 msgid "Default glance hostname or IP address." msgstr "" #: ironic/common/image_service.py:51 msgid "Default glance port." msgstr "" #: ironic/common/image_service.py:54 msgid "Default protocol to use when connecting to glance. Set to https for SSL." msgstr "" #: ironic/common/image_service.py:57 msgid "" "A list of the glance api servers available to ironic. Prefix with " "https:// for SSL-based glance API servers. Format is [hostname|IP]:port." msgstr "" #: ironic/common/image_service.py:62 msgid "Allow to perform insecure SSL (https) requests to glance." 
msgstr "" #: ironic/common/image_service.py:66 msgid "Number of retries when downloading an image from glance." msgstr "" #: ironic/common/image_service.py:70 msgid "" "Authentication strategy to use when connecting to glance. Only " "\"keystone\" and \"noauth\" are currently supported by ironic." msgstr "" #: ironic/common/image_service.py:145 #, python-format msgid "Got HTTP code %s instead of 200 in response to HEAD request." msgstr "" #: ironic/common/image_service.py:168 #, python-format msgid "Got HTTP code %s instead of 200 in response to GET request." msgstr "" #: ironic/common/image_service.py:193 msgid "" "Cannot determine image size as there is no Content-Length header " "specified in response to HEAD request." msgstr "" #: ironic/common/image_service.py:235 msgid "Specified image file not found." msgstr "" #: ironic/common/image_service.py:315 #, python-format msgid "Image download protocol %s is not supported." msgstr "" #: ironic/common/images.py:46 msgid "If True, convert backing images to \"raw\" disk image format." msgstr "" #: ironic/common/images.py:50 msgid "Path to isolinux binary file." msgstr "" #: ironic/common/images.py:53 msgid "Template file for isolinux configuration file." msgstr "" #: ironic/common/images.py:56 msgid "Template file for grub configuration file." msgstr "" #: ironic/common/images.py:343 msgid "'qemu-img info' parsing failed." msgstr "" #: ironic/common/images.py:350 #, python-format msgid "fmt=%(fmt)s backed by: %(backing_file)s" msgstr "" #: ironic/common/images.py:365 #, python-format msgid "Converted to raw, but format is now %s" msgstr "" #: ironic/common/images.py:552 msgid "Deploy iso didn't contain efiboot.img or grub.cfg" msgstr "" #: ironic/common/keystone.py:27 msgid "The region used for getting endpoints of OpenStack services." msgstr "" #: ironic/common/keystone.py:54 msgid "Keystone API endpoint is missing" msgstr "" #: ironic/common/keystone.py:78 #, python-format msgid "Could not authorize in Keystone: %s" msgstr "" #: ironic/common/keystone.py:123 msgid "No Keystone service catalog loaded" msgstr "" #: ironic/common/paths.py:28 msgid "Directory where the ironic python module is installed." msgstr "" #: ironic/common/paths.py:32 msgid "Directory where ironic binaries are installed." msgstr "" #: ironic/common/paths.py:35 msgid "Top-level directory for maintaining ironic's state." msgstr "" #: ironic/common/pxe_utils.py:118 #, python-format msgid "Failed to get IP address for any port on node %s." msgstr "" #: ironic/common/raid.py:42 #, python-format msgid "" "Raid config cannot have more than one root volume. %d root volumes were " "specified" msgstr "" #: ironic/common/raid.py:67 #, python-format msgid "RAID config validation error: %s" msgstr "" #: ironic/common/service.py:43 msgid "Seconds between running periodic tasks." msgstr "" #: ironic/common/service.py:46 msgid "" "Name of this node. This can be an opaque identifier. It is not " "necessarily a hostname, FQDN, or IP address. However, the node name must " "be valid within an AMQP key, and if using ZeroMQ, a valid hostname, FQDN," " or IP address." msgstr "" #: ironic/common/service.py:167 #, python-format msgid "api_workers value of %d is invalid, must be greater than 0." msgstr "" #: ironic/common/swift.py:31 msgid "Maximum number of times to retry a Swift request, before failing." 
msgstr "" #: ironic/common/swift.py:103 msgid "put container" msgstr "" #: ironic/common/swift.py:114 msgid "put object" msgstr "" #: ironic/common/swift.py:134 msgid "head account" msgstr "" #: ironic/common/swift.py:163 msgid "delete object" msgstr "" #: ironic/common/swift.py:184 msgid "head object" msgstr "" #: ironic/common/swift.py:199 msgid "post object" msgstr "" #: ironic/common/utils.py:48 msgid "" "Path to the rootwrap configuration file to use for running commands as " "root." msgstr "" #: ironic/common/utils.py:52 msgid "Temporary working directory, default is Python temp dir." msgstr "" #: ironic/common/utils.py:123 msgid "Invalid private key" msgstr "" #: ironic/common/utils.py:614 #, python-format msgid "" "Cannot update capabilities. The new capabilities should be in a " "dictionary. Provided value is %s" msgstr "" #: ironic/common/utils.py:627 #, python-format msgid "Invalid capabilities string '%s'." msgstr "" #: ironic/common/utils.py:660 #, python-format msgid "%(port_name)s \"%(port)s\" is not a valid integer." msgstr "" #: ironic/common/utils.py:664 #, python-format msgid "" "%(port_name)s \"%(port)s\" is out of range. Valid port numbers must be " "between 1 and 65535." msgstr "" #: ironic/common/glance_service/v2/image_service.py:31 msgid "" "A list of URL schemes that can be downloaded directly via the direct_url." " Currently supported schemes: [file]." msgstr "" #: ironic/common/glance_service/v2/image_service.py:40 msgid "" "The secret token given to Swift to allow temporary URL downloads. " "Required for temporary URLs." msgstr "" #: ironic/common/glance_service/v2/image_service.py:45 msgid "" "The length of time in seconds that the temporary URL will be valid for. " "Defaults to 20 minutes. If some deploys get a 401 response code when " "trying to download from the temporary URL, try raising this duration." msgstr "" #: ironic/common/glance_service/v2/image_service.py:52 msgid "" "The \"endpoint\" (scheme, hostname, optional port) for the Swift URL of " "the form \"endpoint_url/api_version/[account/]container/object_id\". Do " "not include trailing \"/\". For example, use " "\"https://swift.example.com\". If using RADOS Gateway, endpoint may also " "contain /swift path; if it does not, it will be appended. Required for " "temporary URLs." msgstr "" #: ironic/common/glance_service/v2/image_service.py:62 msgid "" "The Swift API version to create a temporary URL for. Defaults to \"v1\". " "Swift temporary URL format: " "\"endpoint_url/api_version/[account/]container/object_id\"" msgstr "" #: ironic/common/glance_service/v2/image_service.py:67 msgid "" "The account that Glance uses to communicate with Swift. The format is " "\"AUTH_uuid\". \"uuid\" is the UUID for the account configured in the " "glance-api.conf. Required for temporary URLs when Glance backend is " "Swift. For example: \"AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30\". Swift " "temporary URL format: " "\"endpoint_url/api_version/[account/]container/object_id\"" msgstr "" #: ironic/common/glance_service/v2/image_service.py:77 msgid "" "The Swift container Glance is configured to store its images in. Defaults" " to \"glance\", which is the default in glance-api.conf. Swift temporary " "URL format: \"endpoint_url/api_version/[account/]container/object_id\"" msgstr "" #: ironic/common/glance_service/v2/image_service.py:84 msgid "" "This should match a config by the same name in the Glance configuration " "file. When set to 0, a single-tenant store will only use one container to" " store all images. 
When set to an integer value between 1 and 32, a " "single-tenant store will use multiple containers to store images, and " "this value will determine how many containers are created." msgstr "" #: ironic/common/glance_service/v2/image_service.py:94 msgid "" "Type of endpoint to use for temporary URLs. If the Glance backend is " "Swift, use \"swift\"; if it is CEPH with RADOS gateway, use \"radosgw\"." msgstr "" #: ironic/common/glance_service/v2/image_service.py:155 #, python-format msgid "The given image info does not have a valid image id: %s" msgstr "" #: ironic/common/glance_service/v2/image_service.py:174 #, python-format msgid "" "Swift endpoint URL should only contain scheme, hostname, optional port " "and optional /swift path without trailing slash; provided value is: %s" msgstr "" #: ironic/common/glance_service/v2/image_service.py:195 msgid "" "Swift temporary URLs require a shared secret to be created. You must " "provide \"swift_temp_url_key\" as a config option." msgstr "" #: ironic/common/glance_service/v2/image_service.py:199 msgid "" "Swift temporary URLs require a Swift endpoint URL. You must provide " "\"swift_endpoint_url\" as a config option." msgstr "" #: ironic/common/glance_service/v2/image_service.py:204 msgid "" "Swift temporary URLs require a Swift account string. You must provide " "\"swift_account\" as a config option." msgstr "" #: ironic/common/glance_service/v2/image_service.py:208 msgid "\"swift_temp_url_duration\" must be a positive integer." msgstr "" #: ironic/common/glance_service/v2/image_service.py:213 msgid "" "An integer value between 0 and 32 is required for " "swift_store_multiple_containers_seed." msgstr "" #: ironic/conductor/base_manager.py:44 msgid "The size of the workers greenthread pool." msgstr "" #: ironic/conductor/base_manager.py:47 msgid "Seconds between conductor heart beats." msgstr "" #: ironic/conductor/base_manager.py:147 #, python-format msgid "The deployment can't be resumed by conductor %s. Moving to fail state." msgstr "" #: ironic/conductor/manager.py:78 msgid "" "URL of Ironic API service. If not set ironic can get the current value " "from the keystone service catalog." msgstr "" #: ironic/conductor/manager.py:83 msgid "" "Maximum time (in seconds) since the last check-in of a conductor. A " "conductor is considered inactive when this time has been exceeded." msgstr "" #: ironic/conductor/manager.py:88 msgid "Interval between syncing the node power state to the database, in seconds." msgstr "" #: ironic/conductor/manager.py:92 msgid "Interval between checks of provision timeouts, in seconds." msgstr "" #: ironic/conductor/manager.py:96 msgid "" "Timeout (seconds) to wait for a callback from a deploy ramdisk. Set to 0 " "to disable timeout." msgstr "" #: ironic/conductor/manager.py:100 msgid "" "During sync_power_state, should the hardware power state be set to the " "state recorded in the database (True) or should the database be updated " "based on the hardware state (False)." msgstr "" #: ironic/conductor/manager.py:106 msgid "" "During sync_power_state failures, limit the number of times Ironic should" " try syncing the hardware node power state with the node power state in " "DB" msgstr "" #: ironic/conductor/manager.py:112 msgid "" "Maximum number of worker threads that can be started simultaneously by a " "periodic task. Should be less than RPC thread pool size." msgstr "" #: ironic/conductor/manager.py:117 msgid "Number of attempts to grab a node lock." 
msgstr "" #: ironic/conductor/manager.py:120 msgid "Seconds to sleep between node lock attempts." msgstr "" #: ironic/conductor/manager.py:123 msgid "Enable sending sensor data message via the notification bus" msgstr "" #: ironic/conductor/manager.py:127 msgid "" "Seconds between conductor sending sensor data message to ceilometer via " "the notification bus." msgstr "" #: ironic/conductor/manager.py:131 msgid "" "List of comma separated meter types which need to be sent to Ceilometer. " "The default value, \"ALL\", is a special value meaning send all the " "sensor data." msgstr "" #: ironic/conductor/manager.py:136 msgid "" "When conductors join or leave the cluster, existing conductors may need " "to update any persistent local state as nodes are moved around the " "cluster. This option controls how often, in seconds, each conductor will " "check for nodes that it should \"take over\". Set it to a negative value " "to disable the check entirely." msgstr "" #: ironic/conductor/manager.py:145 msgid "Whether to upload the config drive to Swift." msgstr "" #: ironic/conductor/manager.py:148 msgid "" "Name of the Swift container to store config drive data. Used when " "configdrive_use_swift is True." msgstr "" #: ironic/conductor/manager.py:152 msgid "Timeout (seconds) for waiting for node inspection. 0 - unlimited." msgstr "" #: ironic/conductor/manager.py:156 msgid "" "Cleaning is a configurable set of steps, such as erasing disk drives, " "that are performed on the node to ensure it is in a baseline state and " "ready to be deployed to. This is done after instance deletion, and during" " the transition from a \"managed\" to \"available\" state. When enabled, " "the particular steps performed to clean a node depend on which driver " "that node is managed by; see the individual driver's documentation for " "details. NOTE: The introduction of the cleaning operation causes instance" " deletion to take significantly longer. In an environment where all " "tenants are trusted (eg, because there is only one tenant), this option " "could be safely disabled." msgstr "" #: ironic/conductor/manager.py:173 msgid "" "Timeout (seconds) to wait for a callback from the ramdisk doing the " "cleaning. If the timeout is reached the node will be put in the \"clean " "failed\" provision state. Set to 0 to disable timeout." msgstr "" #: ironic/conductor/manager.py:324 ironic/conductor/manager.py:404 #: ironic/drivers/utils.py:87 #, python-format msgid "No handler for method %s" msgstr "" #: ironic/conductor/manager.py:329 ironic/conductor/manager.py:409 #, python-format msgid "The method %(method)s does not support HTTP %(http)s" msgstr "" #: ironic/conductor/manager.py:546 #, python-format msgid "RPC do_node_deploy failed to validate deploy or power info. Error: %(msg)s" msgstr "" #: ironic/conductor/manager.py:594 #, python-format msgid "" "Failed to validate power driver interface. Can not delete instance. " "Error: %(msg)s" msgstr "" #: ironic/conductor/manager.py:620 #, python-format msgid "Failed to tear down. Error: %s" msgstr "" #: ironic/conductor/manager.py:669 #, python-format msgid "Node %(node)s got an invalid last step for %(state)s: %(step)s." msgstr "" #: ironic/conductor/manager.py:716 msgid "cleaning" msgstr "" #: ironic/conductor/manager.py:727 #, python-format msgid "" "RPC do_node_clean failed to validate power info. Cannot clean node " "%(node)s. 
Error: %(msg)s" msgstr "" #: ironic/conductor/manager.py:778 #, python-format msgid "" "Cannot continue cleaning on %(node)s, node is in %(state)s state, should " "be %(clean_state)s" msgstr "" #: ironic/conductor/manager.py:820 msgid "Failed to run next clean step" msgstr "" #: ironic/conductor/manager.py:857 #, python-format msgid "" "Failed to validate power driver interface. Can not clean node %(node)s. " "Error: %(msg)s" msgstr "" #: ironic/conductor/manager.py:874 #, python-format msgid "Failed to prepare node %(node)s for cleaning: %(e)s" msgstr "" #: ironic/conductor/manager.py:902 #, python-format msgid "Cannot clean node %(node)s. Error: %(msg)s" msgstr "" #: ironic/conductor/manager.py:939 #, python-format msgid "Node %(node)s failed step %(step)s: %(exc)s" msgstr "" #: ironic/conductor/manager.py:968 #, python-format msgid "" "While executing step %(step)s on node %(node)s, step returned invalid " "value: %(val)s" msgstr "" #: ironic/conductor/manager.py:985 #, python-format msgid "Failed to tear down from cleaning for node %s" msgstr "" #: ironic/conductor/manager.py:1006 #, python-format msgid "" "Failed to validate power driver interface for node %(node)s. Error: " "%(msg)s" msgstr "" #: ironic/conductor/manager.py:1013 #, python-format msgid "Failed to get power state for node %(node)s. Error: %(msg)s" msgstr "" #: ironic/conductor/manager.py:1040 msgid "Failed to tear down cleaning after aborting the operation" msgstr "" #: ironic/conductor/manager.py:1047 #, python-format msgid "Clean operation aborted for node %s" msgstr "" #: ironic/conductor/manager.py:1048 msgid "By request, the clean operation was aborted" msgstr "" #: ironic/conductor/manager.py:1050 #, python-format msgid " after the completion of step \"%s\"" msgstr "" #: ironic/conductor/manager.py:1308 #, python-format msgid "Failed to start console while taking over the node %(node)s: %(err)s." msgstr "" #: ironic/conductor/manager.py:1339 msgid "" "Timeout reached while cleaning the node. Please check if the ramdisk " "responsible for the cleaning is running on the node." msgstr "" #: ironic/conductor/manager.py:1431 msgid "not supported" msgstr "" #: ironic/conductor/manager.py:1475 #, python-format msgid "" "Can not delete node \"%(node)s\" while it is in provision state " "\"%(state)s\". Valid provision states to perform deletion are: " "\"%(valid_states)s\"" msgstr "" #: ironic/conductor/manager.py:1582 msgid "enabled" msgstr "" #: ironic/conductor/manager.py:1582 msgid "disabled" msgstr "" #: ironic/conductor/manager.py:1604 msgid "enabling" msgstr "" #: ironic/conductor/manager.py:1604 msgid "disabling" msgstr "" #: ironic/conductor/manager.py:1605 #, python-format msgid "Error %(op)s the console on node %(node)s. Reason: %(error)s" msgstr "" #: ironic/conductor/manager.py:1890 #, python-format msgid "" "RPC inspect_hardware failed to validate inspection or power info. Error: " "%(msg)s" msgstr "" #: ironic/conductor/manager.py:1923 msgid "timeout reached while inspecting the node" msgstr "" #: ironic/conductor/manager.py:2164 #, python-format msgid "Failed to upload the configdrive to Swift. Error: %s" msgstr "" #: ironic/conductor/manager.py:2175 #, python-format msgid "Failed to prepare to deploy. Error: %s" msgstr "" #: ironic/conductor/manager.py:2184 #, python-format msgid "Failed to deploy. 
Error: %s" msgstr "" #: ironic/conductor/manager.py:2223 #, python-format msgid "" "During sync_power_state, max retries exceeded for node %(node)s, node " "state %(actual)s does not match expected state '%(state)s'. Updating DB " "state to '%(actual)s' Switching node to maintenance mode." msgstr "" #: ironic/conductor/manager.py:2232 #, python-format msgid " Error: %s" msgstr "" #: ironic/conductor/manager.py:2275 msgid "Power driver returned ERROR state while trying to sync power state." msgstr "" #: ironic/conductor/manager.py:2383 #, python-format msgid "During inspection, driver returned unexpected state %(state)s" msgstr "" #: ironic/conductor/rpcapi.py:118 #, python-format msgid "No conductor service registered which supports driver %s." msgstr "" #: ironic/conductor/rpcapi.py:635 ironic/conductor/rpcapi.py:661 #: ironic/conductor/rpcapi.py:686 msgid "Incompatible conductor version - please upgrade ironic-conductor first" msgstr "" #: ironic/conductor/utils.py:83 ironic/conductor/utils.py:130 #, python-format msgid "Failed to change power state to '%(target)s'. Error: %(error)s" msgstr "" #: ironic/conductor/utils.py:151 #, python-format msgid "Timeout reached while waiting for callback for node %s" msgstr "" #: ironic/conductor/utils.py:157 #, python-format msgid "Cleanup failed for node %(node)s after deploy timeout: %(error)s" msgstr "" #: ironic/conductor/utils.py:167 msgid "" "Deploy timed out, but an unhandled exception was encountered while " "aborting. More info may be found in the log file." msgstr "" #: ironic/conductor/utils.py:194 ironic/conductor/utils.py:244 msgid "No free conductor workers available" msgstr "" #: ironic/conductor/utils.py:368 #, python-format msgid "node does not support this clean step: %(step)s" msgstr "" #: ironic/conductor/utils.py:378 #, python-format msgid "clean step %(step)s has these invalid arguments: %(invalid)s" msgstr "" #: ironic/conductor/utils.py:392 #, python-format msgid "clean step %(step)s is missing these required keyword arguments: %(miss)s" msgstr "" #: ironic/db/sqlalchemy/api.py:141 #, python-format msgid "The sort_key value \"%(key)s\" is an invalid field for sorting" msgstr "" #: ironic/db/sqlalchemy/api.py:333 msgid "Cannot overwrite UUID for an existing Node." msgstr "" #: ironic/db/sqlalchemy/api.py:432 msgid "Cannot overwrite UUID for an existing Port." msgstr "" #: ironic/db/sqlalchemy/api.py:492 msgid "Cannot overwrite UUID for an existing Chassis." msgstr "" #: ironic/db/sqlalchemy/models.py:38 msgid "MySQL engine to use." msgstr "" #: ironic/dhcp/neutron.py:38 msgid "URL for connecting to neutron." msgstr "" #: ironic/dhcp/neutron.py:41 msgid "Timeout value for connecting to neutron in seconds." msgstr "" #: ironic/dhcp/neutron.py:44 msgid "Client retries in the case of a failed request." msgstr "" #: ironic/dhcp/neutron.py:47 msgid "" "Default authentication strategy to use when connecting to neutron. Can be" " either \"keystone\" or \"noauth\". Running neutron in noauth mode " "(related to but not affected by this setting) is insecure and should only" " be used for testing." msgstr "" #: ironic/dhcp/neutron.py:53 msgid "" "UUID of the network to create Neutron ports on when booting to a ramdisk " "for cleaning/zapping using Neutron DHCP" msgstr "" #: ironic/dhcp/neutron.py:74 msgid "Neutron auth_strategy should be either \"noauth\" or \"keystone\"." msgstr "" #: ironic/dhcp/neutron.py:168 #, python-format msgid "" "No VIFs found for node %(node)s when attempting to update DHCP BOOT " "options." 
msgstr "" #: ironic/dhcp/neutron.py:182 #, python-format msgid "Failed to set DHCP BOOT options for any port on node %s." msgstr "" #: ironic/dhcp/neutron.py:294 msgid "Valid cleaning network UUID not provided" msgstr "" #: ironic/dhcp/neutron.py:310 #, python-format msgid "Could not create cleaning port on network %(net)s from %(node)s. %(exc)s" msgstr "" #: ironic/dhcp/neutron.py:319 #, python-format msgid "Failed to create cleaning ports for node %(node)s" msgstr "" #: ironic/dhcp/neutron.py:340 #, python-format msgid "" "Could not get cleaning network vif for %(node)s from Neutron, possible " "network issue. %(exc)s" msgstr "" #: ironic/dhcp/neutron.py:354 #, python-format msgid "" "Could not remove cleaning ports on network %(net)s from %(node)s, " "possible network issue. %(exc)s" msgstr "" #: ironic/drivers/agent.py:144 ironic/drivers/fake.py:230 #: ironic/drivers/pxe.py:285 msgid "Unable to import pyremotevbox library" msgstr "" #: ironic/drivers/agent.py:166 ironic/drivers/drac.py:39 #: ironic/drivers/fake.py:185 ironic/drivers/fake.py:258 #: ironic/drivers/pxe.py:307 msgid "Unable to import pywsman library" msgstr "" #: ironic/drivers/agent.py:189 ironic/drivers/fake.py:280 #: ironic/drivers/pxe.py:346 msgid "Unable to import UcsSdk library" msgstr "" #: ironic/drivers/agent.py:212 ironic/drivers/fake.py:293 #: ironic/drivers/pxe.py:368 msgid "Unable to import ImcSdk library" msgstr "" #: ironic/drivers/agent.py:250 ironic/drivers/fake.py:159 #: ironic/drivers/pxe.py:192 msgid "Unable to import iboot library" msgstr "" #: ironic/drivers/base.py:1004 #, python-format msgid "\"argsinfo\" must be a dictionary instead of \"%s\"" msgstr "" #: ironic/drivers/base.py:1009 #, python-format msgid "Argument \"%(arg)s\" must be a dictionary instead of \"%(val)s\"." msgstr "" #: ironic/drivers/base.py:1016 #, python-format msgid "" "For argument \"%(arg)s\", \"description\" must be a string value instead " "of \"%(value)s\"." msgstr "" #: ironic/drivers/base.py:1023 #, python-format msgid "" "For argument \"%(arg)s\", \"required\" must be a Boolean value instead of" " \"%(value)s\"." msgstr "" #: ironic/drivers/base.py:1028 #, python-format msgid "" "Argument \"%(arg)s\" has an invalid key named \"%(key)s\". It must be " "\"description\" or \"required\"." msgstr "" #: ironic/drivers/base.py:1033 #, python-format msgid "Argument \"%(arg)s\" is missing a \"description\"." 
msgstr "" #: ironic/drivers/base.py:1099 #, python-format msgid "\"priority\" must be an integer value instead of \"%s\"" msgstr "" #: ironic/drivers/base.py:1106 #, python-format msgid "\"abortable\" must be a Boolean value instead of \"%s\"" msgstr "" #: ironic/drivers/drac.py:44 ironic/drivers/fake.py:190 msgid "Unable to import python-dracclient library" msgstr "" #: ironic/drivers/fake.py:118 msgid "Unable to import pyghmi IPMI library" msgstr "" #: ironic/drivers/fake.py:133 ironic/drivers/pxe.py:164 msgid "Unable to import seamicroclient library" msgstr "" #: ironic/drivers/fake.py:171 ironic/drivers/ilo.py:47 ironic/drivers/ilo.py:72 #: ironic/drivers/pxe.py:212 msgid "Unable to import proliantutils library" msgstr "" #: ironic/drivers/fake.py:205 ironic/drivers/pxe.py:237 msgid "Unable to import pysnmp library" msgstr "" #: ironic/drivers/fake.py:217 ironic/drivers/irmc.py:45 #: ironic/drivers/irmc.py:69 ironic/drivers/pxe.py:260 msgid "Unable to import python-scciclient library" msgstr "" #: ironic/drivers/fake.py:314 ironic/drivers/oneview.py:49 #: ironic/drivers/oneview.py:78 msgid "Unable to import python-oneviewclient library" msgstr "" #: ironic/drivers/pxe.py:127 msgid "Unable to import pyghmi library" msgstr "" #: ironic/drivers/utils.py:81 msgid "Method not specified when calling vendor extension." msgstr "" #: ironic/drivers/utils.py:233 #, python-format msgid "Value of 'capabilities' must be string. Got %s" msgstr "" #: ironic/drivers/utils.py:241 #, python-format msgid "Malformed capabilities value: %s" msgstr "" #: ironic/drivers/modules/agent.py:42 msgid "" "DEPRECATED. Additional append parameters for baremetal PXE boot. This " "option is deprecated and will be removed in Mitaka release. Please use " "[pxe]pxe_append_params instead." msgstr "" #: ironic/drivers/modules/agent.py:49 msgid "" "DEPRECATED. Template file for PXE configuration. This option is " "deprecated and will be removed in Mitaka release. Please use " "[pxe]pxe_config_template instead." msgstr "" #: ironic/drivers/modules/agent.py:56 msgid "" "Whether Ironic will manage booting of the agent ramdisk. If set to False," " you will need to configure your mechanism to allow booting the agent " "ramdisk." msgstr "" #: ironic/drivers/modules/agent.py:62 msgid "" "The memory size in MiB consumed by agent when it is booted on a bare " "metal node. This is used for checking if the image can be downloaded and " "deployed on the bare metal node after booting agent ramdisk. This may be " "set according to the memory consumed by the agent ramdisk image." msgstr "" #: ironic/drivers/modules/agent.py:70 msgid "" "Whether the agent ramdisk should stream raw images directly onto the disk" " or not. By streaming raw images directly onto the disk the agent ramdisk" " will not spend time copying the image to a tmpfs partition (therefore " "consuming less memory) prior to writing it to the disk. Unless the disk " "where the image will be copied to is really slow, this option should be " "set to True. Defaults to True." msgstr "" #: ironic/drivers/modules/agent.py:90 ironic/drivers/modules/pxe.py:99 msgid "UUID (from Glance) of the deployment kernel. Required." msgstr "" #: ironic/drivers/modules/agent.py:92 msgid "" "UUID (from Glance) of the ramdisk with agent that is used at deploy time." " Required." msgstr "" #: ironic/drivers/modules/agent.py:163 #, python-format msgid "" "Memory size is too small for requested image, if it is less than (image " "size + reserved RAM size), will break the IPA deployments. 
Image size: " "%(image_size)d MiB, Memory size: %(memory_size)d MiB, Reserved size: " "%(reserved_size)d MiB." msgstr "" #: ironic/drivers/modules/agent.py:204 #, python-format msgid "Node %s failed to validate deploy image info. Some parameters were missing" msgstr "" #: ironic/drivers/modules/agent.py:210 #, python-format msgid "" "image_source's image_checksum must be provided in instance_info for node " "%s" msgstr "" #: ironic/drivers/modules/agent.py:219 #, python-format msgid "" "Node %(node)s is configured to use the %(driver)s driver which currently " "does not support deploying partition images." msgstr "" #: ironic/drivers/modules/agent.py:419 #: ironic/drivers/modules/oneview/vendor.py:47 #, python-format msgid "node %(node)s command status errored: %(error)s" msgstr "" #: ironic/drivers/modules/agent.py:480 #, python-format msgid "Node %s has no target RAID configuration." msgstr "" #: ironic/drivers/modules/agent.py:489 msgid "skipping root volume" msgstr "" #: ironic/drivers/modules/agent.py:492 msgid "skipping non-root volumes" msgstr "" #: ironic/drivers/modules/agent.py:499 msgid " and " msgstr "" #: ironic/drivers/modules/agent.py:501 #, python-format msgid "Node %(node)s has empty target RAID configuration after %(msg)s." msgstr "" #: ironic/drivers/modules/agent.py:537 #, python-format msgid "" "Agent ramdisk didn't return a proper command result while cleaning " "%(node)s. It returned '%(result)s' after command execution." msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:45 msgid "Maximum interval (in seconds) for agent heartbeats." msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:48 msgid "" "Number of times to retry getting power state to check if bare metal node " "has been powered off after a soft power off." msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:53 msgid "" "Amount of time (in seconds) to wait between polling power state after " "trigger soft poweroff." msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:201 msgid "Missing parameter version" msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:204 #, python-format msgid "Unknown lookup payload version: %s" msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:240 #, python-format msgid "Agent returned error for clean step %(step)s on node %(node)s : %(err)s." msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:255 #, python-format msgid "Could not restart cleaning on node %(node)s: %(err)s." msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:275 #, python-format msgid "" "For node %(node)s, post clean step hook %(method)s failed for clean step " "%(step)s.Error: %(error)s" msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:289 #, python-format msgid "" "Agent returned unknown status for clean step %(step)s on node %(node)s : " "%(err)s." msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:320 msgid "For heartbeat operation, \"agent_url\" must be specified." msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:329 msgid "Failed checking if deploy is done." msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:338 msgid "Node failed to get image for deploy." msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:342 msgid "Node failed to move to active state." msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:356 msgid "Node failed to start the next cleaning step." msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:360 msgid "Node failed to check cleaning progress." 
msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:365 #, python-format msgid "Asynchronous exception for node %(node)s: %(msg)s exception: %(e)s" msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:477 #, python-format msgid "Malformed network interfaces lookup: %s" msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:506 #, python-format msgid "No ports matching the given MAC addresses %s exist in the database." msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:554 #, python-format msgid "" "Ports matching mac addresses match multiple nodes. MACs: %(macs)s. Port " "ids: %(port_ids)s" msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:606 #: ironic/drivers/modules/oneview/vendor.py:107 #, python-format msgid "Error rebooting node %(node)s after deploy. Error: %(error)s" msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:644 #, python-format msgid "" "Failed to install a bootloader when deploying node %(node)s. Error: " "%(error)s" msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:653 #, python-format msgid "" "Failed to change the boot device to %(boot_dev)s when deploying node " "%(node)s. Error: %(error)s" msgstr "" #: ironic/drivers/modules/agent_client.py:26 msgid "API version to use for communicating with the ramdisk agent." msgstr "" #: ironic/drivers/modules/agent_client.py:49 msgid "Agent driver requires agent_url in driver_internal_info" msgstr "" #: ironic/drivers/modules/agent_client.py:78 #, python-format msgid "" "Unable to decode response as JSON.\n" "Request URL: %(url)s\n" "Request body: \"%(body)s\"\n" "Response status code: %(code)s\n" "Response: \"%(response)s\"" msgstr "" #: ironic/drivers/modules/console_utils.py:45 msgid "Path to serial console terminal program" msgstr "" #: ironic/drivers/modules/console_utils.py:47 msgid "Directory containing the terminal SSL cert(PEM) for serial console access" msgstr "" #: ironic/drivers/modules/console_utils.py:50 msgid "" "Directory for holding terminal pid files. If not specified, the temporary" " directory will be used." msgstr "" #: ironic/drivers/modules/console_utils.py:55 msgid "Time interval (in seconds) for checking the status of console subprocess." msgstr "" #: ironic/drivers/modules/console_utils.py:59 msgid "Time (in seconds) to wait for the console subprocess to start." msgstr "" #: ironic/drivers/modules/console_utils.py:89 #, python-format msgid "" "Cannot create directory '%(path)s' for console PID file. Reason: " "%(reason)s." msgstr "" #: ironic/drivers/modules/console_utils.py:132 #, python-format msgid "Could not stop the console for node '%(node)s'. Reason: %(err)s." msgstr "" #: ironic/drivers/modules/console_utils.py:220 #, python-format msgid "" "%(exec_error)s\n" "Command: %(command)s" msgstr "" #: ironic/drivers/modules/console_utils.py:238 #, python-format msgid "" "Command: %(command)s.\n" "Exit code: %(return_code)s.\n" "Stdout: %(stdout)r\n" "Stderr: %(stderr)r" msgstr "" #: ironic/drivers/modules/console_utils.py:251 #, python-format msgid "Timeout while waiting for console subprocessto start for node %s." msgstr "" #: ironic/drivers/modules/deploy_utils.py:63 msgid "" "Priority to run in-band erase devices via the Ironic Python Agent " "ramdisk. If unset, will use the priority set in the ramdisk (defaults to " "10 for the GenericHardwareManager). If set to 0, will not run during " "cleaning." msgstr "" #: ironic/drivers/modules/deploy_utils.py:72 msgid "Number of iterations to be run for erasing devices." 
msgstr "" #: ironic/drivers/modules/deploy_utils.py:151 #, python-format msgid "" "iSCSI connection was not seen by the file system after attempting to " "verify %d times." msgstr "" #: ironic/drivers/modules/deploy_utils.py:175 #, python-format msgid "" "iSCSI connection did not become active after attempting to verify %d " "times." msgstr "" #: ironic/drivers/modules/deploy_utils.py:338 #: ironic/drivers/modules/iscsi_deploy.py:249 #, python-format msgid "" "Root partition is too small for requested image. Image virtual size: " "%(image_mb)d MB, Root size: %(root_mb)d MB" msgstr "" #: ironic/drivers/modules/deploy_utils.py:388 #, python-format msgid "Parent device '%s' not found" msgstr "" #: ironic/drivers/modules/deploy_utils.py:439 #, python-format msgid "%(error_msg)s. Missing are: %(missing_info)s" msgstr "" #: ironic/drivers/modules/deploy_utils.py:533 #, python-format msgid "" "Error parsing capabilities from Node %s instance_info field. A dictionary" " or a \"jsonified\" dictionary is expected." msgstr "" #: ironic/drivers/modules/deploy_utils.py:574 #, python-format msgid "get_clean_steps for node %(node)s returned invalid result: %(result)s" msgstr "" #: ironic/drivers/modules/deploy_utils.py:616 #, python-format msgid "Agent on node %(node)s returned bad command result: %(result)s" msgstr "" #: ironic/drivers/modules/deploy_utils.py:688 #, python-format msgid "" "The hints \"%(invalid_hints)s\" are invalid. Valid hints are: " "\"%(valid_hints)s\"" msgstr "" #: ironic/drivers/modules/deploy_utils.py:698 msgid "Root device hint \"size\" is not an integer value." msgstr "" #: ironic/drivers/modules/deploy_utils.py:802 #, python-format msgid "" "The parameter '%(capability)s' from %(field)s has an invalid value: " "'%(value)s'. Acceptable values are: %(valid_values)s." msgstr "" #: ironic/drivers/modules/deploy_utils.py:854 #, python-format msgid "Failed to connect to Glance to get the properties of the image %s" msgstr "" #: ironic/drivers/modules/deploy_utils.py:858 #, python-format msgid "Image %s can not be found." msgstr "" #: ironic/drivers/modules/deploy_utils.py:870 #, python-format msgid "Image %(image)s is missing the following properties: %(properties)s" msgstr "" #: ironic/drivers/modules/deploy_utils.py:922 #, python-format msgid "" "When creating cleaning ports, DHCP provider didn't return VIF port ID for" " %s" msgstr "" #: ironic/drivers/modules/deploy_utils.py:1070 #, python-format msgid "" "Cannot validate image information for node %s because one or more " "parameters are missing from its instance_info." msgstr "" #: ironic/drivers/modules/fake.py:49 #, python-format msgid "set_power_state called with an invalid powerstate: %s." msgstr "" #: ironic/drivers/modules/fake.py:119 msgid "Parameter 'bar' not passed to method 'first_method'." msgstr "" #: ironic/drivers/modules/fake.py:123 msgid "Test if the value of bar is baz" msgstr "" #: ironic/drivers/modules/fake.py:139 #, python-format msgid "Parameter 'bar' not passed to method '%s'." 
msgstr "" #: ironic/drivers/modules/fake.py:143 msgid "Test if the value of bar is kazoo" msgstr "" #: ironic/drivers/modules/fake.py:148 msgid "Test if the value of bar is meow" msgstr "" #: ironic/drivers/modules/fake.py:186 ironic/drivers/modules/ipminative.py:465 #: ironic/drivers/modules/ipmitool.py:842 #: ironic/drivers/modules/seamicro.py:570 ironic/drivers/modules/ssh.py:741 #: ironic/drivers/modules/virtualbox.py:338 #: ironic/drivers/modules/ilo/management.py:203 #: ironic/drivers/modules/irmc/management.py:146 #: ironic/drivers/modules/oneview/management.py:105 #, python-format msgid "Invalid boot device %s specified." msgstr "" #: ironic/drivers/modules/iboot.py:42 msgid "Maximum retries for iBoot operations" msgstr "" #: ironic/drivers/modules/iboot.py:45 msgid "Time (in seconds) between retry attempts for iBoot operations" msgstr "" #: ironic/drivers/modules/iboot.py:50 msgid "" "Time (in seconds) to sleep between when rebooting (powering off and on " "again)." msgstr "" #: ironic/drivers/modules/iboot.py:63 msgid "IP address of the node. Required." msgstr "" #: ironic/drivers/modules/iboot.py:64 ironic/drivers/modules/seamicro.py:73 msgid "username. Required." msgstr "" #: ironic/drivers/modules/iboot.py:65 ironic/drivers/modules/seamicro.py:71 msgid "password. Required." msgstr "" #: ironic/drivers/modules/iboot.py:68 msgid "iBoot PDU relay id; default is 1. Optional." msgstr "" #: ironic/drivers/modules/iboot.py:69 msgid "iBoot PDU port; default is 9100. Optional." msgstr "" #: ironic/drivers/modules/iboot.py:80 #, python-format msgid "Missing the following iBoot credentials in node's driver_info: %s." msgstr "" #: ironic/drivers/modules/iboot.py:92 msgid "iBoot PDU relay id must be an integer." msgstr "" #: ironic/drivers/modules/iboot.py:260 ironic/drivers/modules/ipmitool.py:758 #: ironic/drivers/modules/snmp.py:708 ironic/drivers/modules/ssh.py:654 #: ironic/drivers/modules/oneview/power.py:116 #, python-format msgid "set_power_state called with invalid power state %s." msgstr "" #: ironic/drivers/modules/image_cache.py:47 msgid "Run image downloads and raw format conversions in parallel." msgstr "" #: ironic/drivers/modules/inspector.py:38 msgid "whether to enable inspection using ironic-inspector" msgstr "" #: ironic/drivers/modules/inspector.py:41 msgid "" "ironic-inspector HTTP endpoint. If this is not set, the ironic-inspector " "client default (http://127.0.0.1:5050) will be used." msgstr "" #: ironic/drivers/modules/inspector.py:46 msgid "period (in seconds) to check status of nodes on inspection" msgstr "" #: ironic/drivers/modules/inspector.py:82 msgid "ironic-inspector support is disabled" msgstr "" #: ironic/drivers/modules/inspector.py:86 msgid "python-ironic-inspector-client Python module not found" msgstr "" #: ironic/drivers/modules/inspector.py:164 #, python-format msgid "Failed to start inspection: %s" msgstr "" #: ironic/drivers/modules/inspector.py:210 #, python-format msgid "ironic-inspector inspection failed: %s" msgstr "" #: ironic/drivers/modules/ipminative.py:50 msgid "" "Maximum time in seconds to retry IPMI operations. There is a tradeoff " "when setting this value. Setting this too low may cause older BMCs to " "crash and require a hard reset. However, setting too high can cause the " "sync power state periodic task to hang when there are slow or " "unresponsive BMCs." msgstr "" #: ironic/drivers/modules/ipminative.py:58 msgid "" "Minimum time, in seconds, between IPMI operations sent to a server. 
There" " is a risk with some hardware that setting this too low may cause the BMC" " to crash. Recommended setting is 5 seconds." msgstr "" #: ironic/drivers/modules/ipminative.py:69 msgid "IP of the node's BMC. Required." msgstr "" #: ironic/drivers/modules/ipminative.py:70 msgid "IPMI password. Required." msgstr "" #: ironic/drivers/modules/ipminative.py:71 msgid "IPMI username. Required." msgstr "" #: ironic/drivers/modules/ipminative.py:73 #: ironic/drivers/modules/ipmitool.py:101 msgid "" "Whether Ironic should specify the boot device to the BMC each time the " "server is turned on, eg. because the BMC is not capable of remembering " "the selected boot device across power cycles; default value is False. " "Optional." msgstr "" #: ironic/drivers/modules/ipminative.py:83 #: ironic/drivers/modules/ipmitool.py:111 ironic/drivers/modules/seamicro.py:82 #: ironic/drivers/modules/ilo/common.py:91 msgid "node's UDP port to connect to. Only required for console access." msgstr "" #: ironic/drivers/modules/ipminative.py:107 #: ironic/drivers/modules/ipmitool.py:253 #, python-format msgid "Missing the following IPMI credentials in node's driver_info: %s." msgstr "" #: ironic/drivers/modules/ipminative.py:145 #, python-format msgid "" "IPMI power on failed for node %(node_id)s with the following error: " "%(error)s" msgstr "" #: ironic/drivers/modules/ipminative.py:162 #: ironic/drivers/modules/ipminative.py:194 #: ironic/drivers/modules/ipminative.py:228 #, python-format msgid "bad response: %s" msgstr "" #: ironic/drivers/modules/ipminative.py:177 #, python-format msgid "" "IPMI power off failed for node %(node_id)s with the following error: " "%(error)s" msgstr "" #: ironic/drivers/modules/ipminative.py:211 #, python-format msgid "" "IPMI power reboot failed for node %(node_id)s with the following error: " "%(error)s" msgstr "" #: ironic/drivers/modules/ipminative.py:248 #, python-format msgid "" "IPMI get power state failed for node %(node_id)s with the following " "error: %(error)s" msgstr "" #: ironic/drivers/modules/ipminative.py:320 #, python-format msgid "Invalid raw bytes string: '%s'" msgstr "" #: ironic/drivers/modules/ipminative.py:323 msgid "Raw bytes string requires two bytes at least." msgstr "" #: ironic/drivers/modules/ipminative.py:338 #, python-format msgid "" "IPMI send raw bytes '%(bytes)s' failed for node %(node_id)s with the " "following error: %(error)s" msgstr "" #: ironic/drivers/modules/ipminative.py:398 #, python-format msgid "set_power_state called with an invalid power state: %s." msgstr "" #: ironic/drivers/modules/ipminative.py:575 #: ironic/drivers/modules/ipmitool.py:1081 msgid "Missing 'ipmi_terminal_port' parameter in node's driver_info." msgstr "" #: ironic/drivers/modules/ipminative.py:656 #: ironic/drivers/modules/ipmitool.py:1046 msgid "Parameter raw_bytes (string of bytes) was not specified." msgstr "" #: ironic/drivers/modules/ipmitool.py:76 msgid "IP address or hostname of the node. Required." msgstr "" #: ironic/drivers/modules/ipmitool.py:79 msgid "password. Optional." msgstr "" #: ironic/drivers/modules/ipmitool.py:80 msgid "remote IPMI RMCP port. Optional." msgstr "" #: ironic/drivers/modules/ipmitool.py:81 #, python-format msgid "privilege level; default is ADMINISTRATOR. One of %s. Optional." msgstr "" #: ironic/drivers/modules/ipmitool.py:83 msgid "username; default is NULL user. Optional." msgstr "" #: ironic/drivers/modules/ipmitool.py:84 msgid "" "bridging_type; default is \"no\". One of \"single\", \"dual\", \"no\". " "Optional." 
msgstr "" #: ironic/drivers/modules/ipmitool.py:86 msgid "" "transit channel for bridged request. Required only if ipmi_bridging is " "set to \"dual\"." msgstr "" #: ironic/drivers/modules/ipmitool.py:88 msgid "" "transit address for bridged request. Required only if ipmi_bridging is " "set to \"dual\"." msgstr "" #: ironic/drivers/modules/ipmitool.py:90 msgid "" "destination channel for bridged request. Required only if ipmi_bridging " "is set to \"single\" or \"dual\"." msgstr "" #: ironic/drivers/modules/ipmitool.py:93 msgid "" "destination address for bridged request. Required only if ipmi_bridging " "is set to \"single\" or \"dual\"." msgstr "" #: ironic/drivers/modules/ipmitool.py:96 msgid "" "local IPMB address for bridged requests. Used only if ipmi_bridging is " "set to \"single\" or \"dual\". Optional." msgstr "" #: ironic/drivers/modules/ipmitool.py:99 msgid "" "the version of the IPMI protocol; default is \"2.0\". One of \"1.5\", " "\"2.0\". Optional." msgstr "" #: ironic/drivers/modules/ipmitool.py:274 #, python-format msgid "" "Invalid IPMI protocol version value %(version)s, the valid value can be " "one of %(valid_versions)s" msgstr "" #: ironic/drivers/modules/ipmitool.py:293 #, python-format msgid "" "Value for ipmi_bridging is provided as %s, but IPMI bridging is not " "supported by the IPMI utility installed on host. Ensure ipmitool version " "is > 1.8.11" msgstr "" #: ironic/drivers/modules/ipmitool.py:316 #, python-format msgid "%(param)s not provided" msgstr "" #: ironic/drivers/modules/ipmitool.py:319 #, python-format msgid "" "Invalid value for ipmi_bridging: %(bridging_type)s, the valid value can " "be one of: %(bridging_types)s" msgstr "" #: ironic/drivers/modules/ipmitool.py:327 #, python-format msgid "" "Invalid privilege level value:%(priv_level)s, the valid value can be one " "of %(valid_levels)s" msgstr "" #: ironic/drivers/modules/ipmitool.py:595 #, python-format msgid "parse ipmi sensor data failed, unknown sensor type data: %(sensors_data)s" msgstr "" #: ironic/drivers/modules/ipmitool.py:635 #, python-format msgid "" "parse ipmi sensor data failed, get nothing with input data: " "%(sensors_data)s" msgstr "" #: ironic/drivers/modules/ipmitool.py:683 #, python-format msgid "" "Ipmitool drivers need to be able to create temporary files to pass " "password to ipmitool. Encountered error: %s" msgstr "" #: ironic/drivers/modules/ipmitool.py:700 #: ironic/drivers/modules/ipmitool.py:795 #: ironic/drivers/modules/ipmitool.py:969 #: ironic/drivers/modules/ipmitool.py:1062 msgid "" "Unable to locate usable ipmitool command in the system path when checking" " ipmitool version" msgstr "" #: ironic/drivers/modules/ipmitool.py:1086 msgid "" "Serial over lan only works with IPMI protocol version 2.0. Check the " "'ipmi_protocol_version' parameter in node's driver_info" msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:53 msgid "Additional append parameters for baremetal PXE boot." msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:56 msgid "Default file system format for ephemeral partition, if one is created." msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:60 msgid "On the ironic-conductor node, directory where images are stored on disk." msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:64 msgid "" "On the ironic-conductor node, directory where master instance images are " "stored on disk. Setting to disables image caching." 
msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:69 msgid "Maximum size (in MiB) of cache for master images, including those in use." msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:75 msgid "Maximum TTL (in minutes) for old master images in cache." msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:79 msgid "The disk devices to scan while doing the deploy." msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:131 #, python-format msgid "" " Deployed value of %(param)s was %(param_value)s but requested value is " "%(request_value)s." msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:137 #, python-format msgid "" "The following parameters have different values from previous " "deployment:%(error_msg)s" msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:188 msgid "" "Cannot validate iSCSI deploy. Some parameters were missing in node's " "instance_info" msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:197 #, python-format msgid "" "Cannot validate parameter for iSCSI deploy. Invalid parameter %(param)s. " "Reason: %(reason)s" msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:203 #, python-format msgid "%s is not an integer value." msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:210 msgid "Cannot deploy whole disk image with swap or ephemeral size set" msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:303 msgid "Deploy key does not match" msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:325 #, python-format msgid "Parameters %s were not passed to ironic for deploy." msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:371 #, python-format msgid "Error returned from deploy ramdisk: %s" msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:392 #, python-format msgid "Deploy failed for instance %(instance)s. Error: %(error)s" msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:400 #, python-format msgid "" "Couldn't determine the UUID of the root partition or the disk identifier " "after deploying node %s" msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:443 #, python-format msgid "" "Failed to start the iSCSI target to deploy the node %(node)s. Error: " "%(error)s" msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:554 #, python-format msgid "" "Couldn't get the URL of the Ironic API service from the configuration " "file or keystone catalog. Keystone error: %s" msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:578 msgid "" "Some mandatory input missing in 'pass_bootloader_info' vendor passthru " "from ramdisk." msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:585 #, python-format msgid "Deploy key %(key_sent)s does not match with %(expected_key)s" msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:604 #, python-format msgid "Failed to install bootloader on node %(node)s. Error: %(error)s." msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:631 #, python-format msgid "" "Failed to notify ramdisk to reboot after bootloader installation. Error: " "%s" msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:884 #, python-format msgid "" "Encountered exception for node %(node)s while initiating cleaning. Error:" " %(error)s" msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:944 msgid "Failed to continue iSCSI deployment." msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:983 msgid "Failed to continue agent deployment." msgstr "" #: ironic/drivers/modules/pxe.py:48 msgid "On ironic-conductor node, template file for PXE configuration." 
msgstr "" #: ironic/drivers/modules/pxe.py:53 msgid "" "On ironic-conductor node, template file for PXE configuration for UEFI " "boot loader." msgstr "" #: ironic/drivers/modules/pxe.py:57 msgid "IP address of ironic-conductor node's TFTP server." msgstr "" #: ironic/drivers/modules/pxe.py:60 msgid "" "ironic-conductor node's TFTP root path. The ironic-conductor must have " "read/write access to this path." msgstr "" #: ironic/drivers/modules/pxe.py:65 msgid "" "On ironic-conductor node, directory where master TFTP images are stored " "on disk. Setting to disables image caching." msgstr "" #: ironic/drivers/modules/pxe.py:72 msgid "Bootfile DHCP parameter." msgstr "" #: ironic/drivers/modules/pxe.py:75 msgid "Bootfile DHCP parameter for UEFI boot mode." msgstr "" #: ironic/drivers/modules/pxe.py:78 msgid "Enable iPXE boot." msgstr "" #: ironic/drivers/modules/pxe.py:82 msgid "On ironic-conductor node, the path to the main iPXE script file." msgstr "" #: ironic/drivers/modules/pxe.py:87 msgid "" "The IP version that will be used for PXE booting. Can be either 4 or 6. " "Defaults to 4. EXPERIMENTAL" msgstr "" #: ironic/drivers/modules/pxe.py:101 msgid "UUID (from Glance) of the ramdisk that is mounted at boot time. Required." msgstr "" #: ironic/drivers/modules/pxe.py:164 msgid "" "Cannot validate PXE bootloader. Some parameters were missing in node's " "driver_info" msgstr "" #: ironic/drivers/modules/pxe.py:299 #, python-format msgid "" "Conflict: Whole disk image being used for deploy, but cannot be used with" " node %(node_uuid)s configured to use UEFI boot with netboot option" msgstr "" #: ironic/drivers/modules/pxe.py:316 #, python-format msgid "" "Trusted boot is only supported in BIOS boot mode with netboot and without" " whole_disk_image, but Node %(node_uuid)s was configured with boot_mode: " "%(boot_mode)s, boot_option: %(boot_option)s, is_whole_disk_image: " "%(is_whole_disk_image)s: at least one of them is wrong, and this can be " "caused by enable secure boot." msgstr "" #: ironic/drivers/modules/pxe.py:395 ironic/drivers/modules/ssh.py:598 #, python-format msgid "Node %s does not have any port associated with it." msgstr "" #: ironic/drivers/modules/pxe.py:404 msgid "iPXE boot is enabled but no HTTP URL or HTTP root was specified." msgstr "" #: ironic/drivers/modules/seamicro.py:50 msgid "Maximum retries for SeaMicro operations" msgstr "" #: ironic/drivers/modules/seamicro.py:53 ironic/drivers/modules/snmp.py:60 msgid "Seconds to wait for power action to be completed" msgstr "" #: ironic/drivers/modules/seamicro.py:70 msgid "API endpoint. Required." msgstr "" #: ironic/drivers/modules/seamicro.py:72 msgid "server ID. Required." msgstr "" #: ironic/drivers/modules/seamicro.py:76 msgid "version of SeaMicro API client; default is 2. Optional." msgstr "" #: ironic/drivers/modules/seamicro.py:103 #, python-format msgid "Invalid 'seamicro_api_version' parameter. Reason: %s." msgstr "" #: ironic/drivers/modules/seamicro.py:119 #, python-format msgid "" "SeaMicro driver requires the following parameters to be set in node's " "driver_info: %s." msgstr "" #: ironic/drivers/modules/seamicro.py:135 msgid "" "Invalid 'seamicro_server_id' parameter in node's driver_info. Expected " "format of 'seamicro_server_id' is /" msgstr "" #: ironic/drivers/modules/seamicro.py:142 msgid "Invalid 'seamicro_api_endpoint' parameter in node's driver_info." 
msgstr "" #: ironic/drivers/modules/seamicro.py:337 msgid "Invalid volume id specified" msgstr "" #: ironic/drivers/modules/seamicro.py:353 msgid "No storage pools found for ironic" msgstr "" #: ironic/drivers/modules/seamicro.py:423 msgid "set_power_state called with invalid power state." msgstr "" #: ironic/drivers/modules/seamicro.py:464 msgid "No vlan id provided" msgstr "" #: ironic/drivers/modules/seamicro.py:505 msgid "No volume size provided for creating volume" msgstr "" #: ironic/drivers/modules/seamicro.py:632 msgid "Missing 'seamicro_terminal_port' parameter in node's driver_info" msgstr "" #: ironic/drivers/modules/snmp.py:66 msgid "" "Time (in seconds) to sleep between when rebooting (powering off and on " "again)" msgstr "" #: ironic/drivers/modules/snmp.py:82 msgid "PDU manufacturer driver. Required." msgstr "" #: ironic/drivers/modules/snmp.py:83 msgid "PDU IPv4 address or hostname. Required." msgstr "" #: ironic/drivers/modules/snmp.py:84 msgid "PDU power outlet index (1-based). Required." msgstr "" #: ironic/drivers/modules/snmp.py:88 #, python-format msgid "" "SNMP protocol version: %(v1)s, %(v2c)s or %(v3)s (optional, default " "%(v1)s)" msgstr "" #: ironic/drivers/modules/snmp.py:92 #, python-format msgid "SNMP port, default %(port)d" msgstr "" #: ironic/drivers/modules/snmp.py:94 #, python-format msgid "SNMP community. Required for versions %(v1)s and %(v2c)s" msgstr "" #: ironic/drivers/modules/snmp.py:97 #, python-format msgid "SNMP security name. Required for version %(v3)s" msgstr "" #: ironic/drivers/modules/snmp.py:586 #, python-format msgid "" "SNMP driver requires the following parameters to be set in node's " "driver_info: %s." msgstr "" #: ironic/drivers/modules/snmp.py:595 #, python-format msgid "SNMPPowerDriver: unknown driver: '%s'" msgstr "" #: ironic/drivers/modules/snmp.py:601 #, python-format msgid "SNMPPowerDriver: unknown SNMP version: '%s'" msgstr "" #: ironic/drivers/modules/snmp.py:610 #, python-format msgid "SNMPPowerDriver: SNMP UDP port out of range: %d" msgstr "" #: ironic/drivers/modules/snmp.py:617 #, python-format msgid "SNMP driver requires snmp_community to be set for version %s." msgstr "" #: ironic/drivers/modules/snmp.py:623 #, python-format msgid "SNMP driver requires snmp_security to be set for version %s." msgstr "" #: ironic/drivers/modules/ssh.py:54 msgid "libvirt URI." msgstr "" #: ironic/drivers/modules/ssh.py:57 msgid "" "Number of attempts to try to get VM name used by the host that " "corresponds to a node's MAC address." msgstr "" #: ironic/drivers/modules/ssh.py:61 msgid "" "Number of seconds to wait between attempts to get VM name used by the " "host that corresponds to a node's MAC address." msgstr "" #: ironic/drivers/modules/ssh.py:72 msgid "IP address or hostname of the node to ssh into. Required." msgstr "" #: ironic/drivers/modules/ssh.py:74 msgid "username to authenticate as. Required." msgstr "" #: ironic/drivers/modules/ssh.py:75 msgid "" "virtualization software to use; one of vbox, virsh, vmware, parallels, " "xenserver. Required." msgstr "" #: ironic/drivers/modules/ssh.py:79 msgid "" "private key(s). One of this, ssh_key_filename, or ssh_password must be " "specified." msgstr "" #: ironic/drivers/modules/ssh.py:81 msgid "" "(list of) filename(s) of optional private key(s) for authentication. One " "of this, ssh_key_contents, or ssh_password must be specified." msgstr "" #: ironic/drivers/modules/ssh.py:84 msgid "" "password to use for authentication or for unlocking a private key. 
One of" " this, ssh_key_contents, or ssh_key_filename must be specified." msgstr "" #: ironic/drivers/modules/ssh.py:87 msgid "port on the node to connect to; default is 22. Optional." msgstr "" #: ironic/drivers/modules/ssh.py:92 msgid "" "node's UDP port to connect to. Only required for console access and only " "applicable for 'virsh'." msgstr "" #: ironic/drivers/modules/ssh.py:129 #, python-format msgid "SSHPowerDriver '%(virt_type)s' is not a valid virt_type." msgstr "" #: ironic/drivers/modules/ssh.py:270 #, python-format msgid "SSHPowerDriver '%(virt_type)s' is not a valid virt_type, " msgstr "" #: ironic/drivers/modules/ssh.py:365 #, python-format msgid "" "SSHPowerDriver requires the following parameters to be set in node's " "driver_info: %s." msgstr "" #: ironic/drivers/modules/ssh.py:399 msgid "" "SSHPowerDriver requires one and only one of password, key_contents and " "key_filename to be set." msgstr "" #: ironic/drivers/modules/ssh.py:408 #, python-format msgid "SSH key file %s not found." msgstr "" #: ironic/drivers/modules/ssh.py:514 #, python-format msgid "" "SSH driver was not able to find a VM with any of the specified MACs: " "%(macs)s for node %(node)s." msgstr "" #: ironic/drivers/modules/ssh.py:603 #, python-format msgid "SSH connection cannot be established: %s" msgstr "" #: ironic/drivers/modules/ssh.py:823 msgid "not supported for non-virsh types" msgstr "" #: ironic/drivers/modules/ssh.py:827 msgid "Missing 'ssh_terminal_port' parameter in node's 'driver_info'" msgstr "" #: ironic/drivers/modules/virtualbox.py:52 msgid "Port on which VirtualBox web service is listening." msgstr "" #: ironic/drivers/modules/virtualbox.py:60 msgid "Name of the VM in VirtualBox. Required." msgstr "" #: ironic/drivers/modules/virtualbox.py:61 msgid "IP address or hostname of the VirtualBox host. Required." msgstr "" #: ironic/drivers/modules/virtualbox.py:66 msgid "Username for the VirtualBox host. Default value is ''. Optional." msgstr "" #: ironic/drivers/modules/virtualbox.py:68 msgid "Password for 'virtualbox_username'. Default value is ''. Optional." msgstr "" #: ironic/drivers/modules/virtualbox.py:70 msgid "Port on which VirtualBox web service is listening. Optional." msgstr "" #: ironic/drivers/modules/virtualbox.py:112 #, python-format msgid "The following parameters are missing in driver_info: %s" msgstr "" #: ironic/drivers/modules/virtualbox.py:167 #, python-format msgid "Invalid VirtualMachine method '%s' passed to '_run_virtualbox_method'." msgstr "" #: ironic/drivers/modules/virtualbox.py:243 #, python-format msgid "'set_power_state' called with invalid power state '%s'" msgstr "" #: ironic/drivers/modules/wol.py:38 msgid "Broadcast IP address; defaults to 255.255.255.255. Optional." msgstr "" #: ironic/drivers/modules/wol.py:40 msgid "Destination port; defaults to 9. Optional." msgstr "" #: ironic/drivers/modules/wol.py:74 #, python-format msgid "" "Failed to send Wake-On-Lan magic packets to node %(node)s port %(port)s. " "Error: %(error)s" msgstr "" #: ironic/drivers/modules/wol.py:92 msgid "Wake-On-Lan needs at least one port resource to be registered in the node" msgstr "" #: ironic/drivers/modules/wol.py:159 #, python-format msgid "" "set_power_state called for Node %(node)s with invalid power state " "%(pstate)s." msgstr "" #: ironic/drivers/modules/amt/common.py:39 msgid "IP address or host name of the node. Required." msgstr "" #: ironic/drivers/modules/amt/common.py:40 msgid "Password. Required." 
msgstr "" #: ironic/drivers/modules/amt/common.py:41 msgid "Username to log into AMT system. Required." msgstr "" #: ironic/drivers/modules/amt/common.py:44 msgid "" "Protocol used for AMT endpoint. one of http, https; default is \"http\". " "Optional." msgstr "" #: ironic/drivers/modules/amt/common.py:53 msgid "Protocol used for AMT endpoint, support http/https" msgstr "" #: ironic/drivers/modules/amt/common.py:58 msgid "" "Time interval (in seconds) for successive awake call to AMT interface, " "this depends on the IdleTimeout setting on AMT interface. AMT Interface " "will go to sleep after 60 seconds of inactivity by default. IdleTimeout=0" " means AMT will not go to sleep at all. Setting awake_interval=0 will " "disable awake call." msgstr "" #: ironic/drivers/modules/amt/common.py:175 #, python-format msgid "AMT driver requires the following to be set in node's driver_info: %s." msgstr "" #: ironic/drivers/modules/amt/common.py:184 #, python-format msgid "Invalid protocol %s." msgstr "" #: ironic/drivers/modules/amt/management.py:195 #: ironic/drivers/modules/msftocs/management.py:68 #, python-format msgid "" "set_boot_device called with invalid device %(device)s for node " "%(node_id)s." msgstr "" #: ironic/drivers/modules/amt/power.py:41 msgid "Maximum number of times to attempt an AMT operation, before failing" msgstr "" #: ironic/drivers/modules/amt/power.py:45 msgid "Amount of time (in seconds) to wait, before retrying an AMT operation" msgstr "" #: ironic/drivers/modules/amt/power.py:166 #: ironic/drivers/modules/msftocs/power.py:85 #, python-format msgid "Unsupported target_state: %s" msgstr "" #: ironic/drivers/modules/cimc/common.py:24 msgid "IP or Hostname of the CIMC. Required." msgstr "" #: ironic/drivers/modules/cimc/common.py:25 msgid "CIMC Manager admin username. Required." msgstr "" #: ironic/drivers/modules/cimc/common.py:26 msgid "CIMC Manager password. Required." msgstr "" #: ironic/drivers/modules/cimc/common.py:45 #: ironic/drivers/modules/ucs/helper.py:80 #, python-format msgid "%s driver requires these parameters to be set in the node's driver_info." msgstr "" #: ironic/drivers/modules/cimc/power.py:31 #: ironic/drivers/modules/ilo/power.py:40 #: ironic/drivers/modules/ucs/power.py:39 msgid "Number of times a power operation needs to be retried" msgstr "" #: ironic/drivers/modules/cimc/power.py:35 #: ironic/drivers/modules/ilo/power.py:44 #: ironic/drivers/modules/ucs/power.py:43 msgid "Amount of time in seconds to wait in between power operations" msgstr "" #: ironic/drivers/modules/cimc/power.py:143 #, python-format msgid "set_power_state called for %(node)s with invalid state %(state)s" msgstr "" #: ironic/drivers/modules/drac/client.py:35 msgid "" "In case there is a communication failure, the DRAC client resends the " "request as many times as defined in this setting." msgstr "" #: ironic/drivers/modules/drac/client.py:40 msgid "" "In case there is a communication failure, the DRAC client waits for as " "many seconds as defined in this setting before resending the request." msgstr "" #: ironic/drivers/modules/drac/common.py:30 msgid "IP address or hostname of the DRAC card. Required." msgstr "" #: ironic/drivers/modules/drac/common.py:31 msgid "username used for authentication. Required." msgstr "" #: ironic/drivers/modules/drac/common.py:32 msgid "password used for authentication. Required." msgstr "" #: ironic/drivers/modules/drac/common.py:35 msgid "port used for WS-Man endpoint; default is 443. Optional." 
msgstr "" #: ironic/drivers/modules/drac/common.py:36 msgid "path used for WS-Man endpoint; default is \"/wsman\". Optional." msgstr "" #: ironic/drivers/modules/drac/common.py:38 msgid "" "protocol used for WS-Man endpoint; one of http, https; default is " "\"https\". Optional." msgstr "" #: ironic/drivers/modules/drac/common.py:65 #, python-format msgid "'%s' not supplied to DracDriver." msgstr "" #: ironic/drivers/modules/drac/common.py:67 #, python-format msgid "'%s' contains non-ASCII symbol." msgstr "" #: ironic/drivers/modules/drac/common.py:75 msgid "'drac_path' contains non-ASCII symbol." msgstr "" #: ironic/drivers/modules/drac/common.py:82 msgid "'drac_protocol' must be either 'http' or 'https'." msgstr "" #: ironic/drivers/modules/drac/common.py:85 msgid "'drac_protocol' contains non-ASCII symbol." msgstr "" #: ironic/drivers/modules/drac/common.py:88 #, python-format msgid "" "The following errors were encountered while parsing driver_info:\n" "%s" msgstr "" #: ironic/drivers/modules/drac/job.py:51 #, python-format msgid "" "Unfinished config jobs found: %(jobs)r. Make sure they are completed " "before retrying." msgstr "" #: ironic/drivers/modules/drac/management.py:191 #, python-format msgid "" "set_boot_device called with invalid device '%(device)s' for node " "%(node_id)s." msgstr "" #: ironic/drivers/modules/ilo/boot.py:46 msgid "UUID (from Glance) of the deployment ISO. Required." msgstr "" #: ironic/drivers/modules/ilo/boot.py:68 msgid "" "Error validating iLO virtual media deploy. Some parameters were missing " "in node's driver_info" msgstr "" #: ironic/drivers/modules/ilo/common.py:53 msgid "Timeout (in seconds) for iLO operations" msgstr "" #: ironic/drivers/modules/ilo/common.py:56 msgid "Port to be used for iLO operations" msgstr "" #: ironic/drivers/modules/ilo/common.py:59 msgid "The Swift iLO container to store data." msgstr "" #: ironic/drivers/modules/ilo/common.py:62 msgid "Amount of time in seconds for Swift objects to auto-expire." msgstr "" #: ironic/drivers/modules/ilo/common.py:66 msgid "" "Set this to True to use http web server to host floppy images and " "generated boot ISO. This requires http_root and http_url to be configured" " in the [deploy] section of the config file. If this is set to False, " "then Ironic will use Swift to host the floppy images and generated " "boot_iso." msgstr "" #: ironic/drivers/modules/ilo/common.py:81 msgid "IP address or hostname of the iLO. Required." msgstr "" #: ironic/drivers/modules/ilo/common.py:82 msgid "username for the iLO with administrator privileges. Required." msgstr "" #: ironic/drivers/modules/ilo/common.py:84 msgid "password for ilo_username. Required." msgstr "" #: ironic/drivers/modules/ilo/common.py:87 msgid "port to be used for iLO operations. Optional." msgstr "" #: ironic/drivers/modules/ilo/common.py:88 msgid "timeout (in seconds) for iLO operations. Optional." msgstr "" #: ironic/drivers/modules/ilo/common.py:95 msgid "" "new password for iLO. Required if the clean step 'reset_ilo_credential' " "is enabled." 
msgstr "" #: ironic/drivers/modules/ilo/common.py:161 #, python-format msgid "" "The following required iLO parameters are missing from the node's " "driver_info: %s" msgstr "" #: ironic/drivers/modules/ilo/common.py:185 #, python-format msgid "" "The following iLO parameters from the node's driver_info should be " "integers: %s" msgstr "" #: ironic/drivers/modules/ilo/common.py:233 msgid "iLO license check" msgstr "" #: ironic/drivers/modules/ilo/common.py:347 #, python-format msgid "Inserting virtual media %s" msgstr "" #: ironic/drivers/modules/ilo/common.py:377 #: ironic/drivers/modules/ilo/common.py:426 #, python-format msgid "Setting %s as boot mode" msgstr "" #: ironic/drivers/modules/ilo/common.py:532 #, python-format msgid "Eject virtual media %s" msgstr "" #: ironic/drivers/modules/ilo/common.py:581 #, python-format msgid "Get secure boot mode for node %s." msgstr "" #: ironic/drivers/modules/ilo/common.py:614 #, python-format msgid "Setting secure boot to %(flag)s for node %(node)s." msgstr "" #: ironic/drivers/modules/ilo/console.py:43 msgid "Missing 'console_port' parameter in node's driver_info." msgstr "" #: ironic/drivers/modules/ilo/deploy.py:41 msgid "" "Priority for erase devices clean step. If unset, it defaults to 10. If " "set to 0, the step will be disabled and will not run during cleaning." msgstr "" #: ironic/drivers/modules/ilo/inspect.py:100 #, python-format msgid "Server didn't return the key(s): %(key)s" msgstr "" #: ironic/drivers/modules/ilo/inspect.py:105 #, python-format msgid "" "Essential properties are expected to be in dictionary format, received " "%(properties)s from node %(node)s." msgstr "" #: ironic/drivers/modules/ilo/inspect.py:111 #, python-format msgid "The node %s didn't return 'properties' as the key with inspection." msgstr "" #: ironic/drivers/modules/ilo/inspect.py:117 #, python-format msgid "Node %(node)s didn't return MACs %(macs)s in dictionary format." msgstr "" #: ironic/drivers/modules/ilo/inspect.py:122 #, python-format msgid "The node %s didn't return 'macs' as the key with inspection." msgstr "" #: ironic/drivers/modules/ilo/inspect.py:202 #, python-format msgid "Inspecting hardware (get_power_state) on %s" msgstr "" #: ironic/drivers/modules/ilo/management.py:50 msgid "Priority for reset_ilo clean step." msgstr "" #: ironic/drivers/modules/ilo/management.py:53 msgid "Priority for reset_bios_to_default clean step." msgstr "" #: ironic/drivers/modules/ilo/management.py:56 msgid "" "Priority for reset_secure_boot_keys clean step. This step will reset the " "secure boot keys to manufacturing defaults." msgstr "" #: ironic/drivers/modules/ilo/management.py:61 msgid "" "Priority for clear_secure_boot_keys clean step. This step is not enabled " "by default. It can be enabled to to clear all secure boot keys enrolled " "with iLO." msgstr "" #: ironic/drivers/modules/ilo/management.py:66 msgid "" "Priority for reset_ilo_credential clean step. This step requires " "\"ilo_change_password\" parameter to be updated in nodes's driver_info " "with the new password." msgstr "" #: ironic/drivers/modules/ilo/management.py:94 #, python-format msgid "Clean step '%s' not found. 'proliantutils' package needs to be updated." 
msgstr "" #: ironic/drivers/modules/ilo/management.py:105 #, python-format msgid "Clean step %(step)s failed on node %(node)s with error: %(err)s" msgstr "" #: ironic/drivers/modules/ilo/management.py:171 msgid "Get boot device" msgstr "" #: ironic/drivers/modules/ilo/management.py:214 #, python-format msgid "Setting %s as boot device" msgstr "" #: ironic/drivers/modules/ilo/power.py:98 msgid "iLO get_power_status" msgstr "" #: ironic/drivers/modules/ilo/power.py:162 #: ironic/drivers/modules/irmc/power.py:66 #, python-format msgid "_set_power_state called with invalid power state '%s'" msgstr "" #: ironic/drivers/modules/ilo/power.py:171 msgid "iLO set_power_state" msgstr "" #: ironic/drivers/modules/ilo/vendor.py:116 #, python-format msgid "" "The requested action 'boot_into_iso' can be performed only when node " "%(node_uuid)s is in %(state)s state or in 'maintenance' mode" msgstr "" #: ironic/drivers/modules/ilo/vendor.py:123 msgid "" "Error validating input for boot_into_iso vendor passthru. Some parameters" " were not provided: " msgstr "" #: ironic/drivers/modules/irmc/boot.py:55 msgid "Ironic conductor node's \"NFS\" or \"CIFS\" root path" msgstr "" #: ironic/drivers/modules/irmc/boot.py:57 msgid "IP of remote image server" msgstr "" #: ironic/drivers/modules/irmc/boot.py:60 msgid "Share type of virtual media, either \"NFS\" or \"CIFS\"" msgstr "" #: ironic/drivers/modules/irmc/boot.py:63 msgid "share name of remote_image_server" msgstr "" #: ironic/drivers/modules/irmc/boot.py:65 msgid "User name of remote_image_server" msgstr "" #: ironic/drivers/modules/irmc/boot.py:67 msgid "Password of remote_image_user_name" msgstr "" #: ironic/drivers/modules/irmc/boot.py:70 msgid "Domain name of remote_image_user_name" msgstr "" #: ironic/drivers/modules/irmc/boot.py:78 msgid "Deployment ISO image file name. Required." msgstr "" #: ironic/drivers/modules/irmc/boot.py:97 #, python-format msgid "Value '%s' for remote_image_share_root isn't a directory or doesn't exist." msgstr "" #: ironic/drivers/modules/irmc/boot.py:102 #, python-format msgid "" "Value '%s' for remote_image_share_type is not supported value either " "'NFS' or 'CIFS'." msgstr "" #: ironic/drivers/modules/irmc/boot.py:106 #, python-format msgid "The following errors were encountered while parsing config file:%s" msgstr "" #: ironic/drivers/modules/irmc/boot.py:129 msgid "" "Error validating iRMC virtual media deploy. Some parameters were missing " "in node's driver_info" msgstr "" #: ironic/drivers/modules/irmc/boot.py:138 #: ironic/tests/unit/drivers/modules/irmc/test_boot.py:134 #, python-format msgid "Deploy ISO file, %(deploy_iso)s, not found for node: %(node)s." msgstr "" #: ironic/drivers/modules/irmc/boot.py:170 #: ironic/tests/unit/drivers/modules/irmc/test_boot.py:231 #, python-format msgid "Boot ISO file, %(boot_iso)s, not found for node: %(node)s." 
msgstr "" #: ironic/drivers/modules/irmc/boot.py:341 msgid "Copying floppy image file" msgstr "" #: ironic/drivers/modules/irmc/boot.py:454 msgid "Inserting virtual cdrom" msgstr "" #: ironic/drivers/modules/irmc/boot.py:477 msgid "Ejecting virtual cdrom" msgstr "" #: ironic/drivers/modules/irmc/boot.py:510 msgid "Inserting virtual floppy" msgstr "" #: ironic/drivers/modules/irmc/boot.py:533 msgid "Ejecting virtual floppy" msgstr "" #: ironic/drivers/modules/irmc/common.py:30 msgid "Port to be used for iRMC operations, either 80 or 443" msgstr "" #: ironic/drivers/modules/irmc/common.py:34 msgid "" "Authentication method to be used for iRMC operations, either \"basic\" or" " \"digest\"" msgstr "" #: ironic/drivers/modules/irmc/common.py:38 msgid "Timeout (in seconds) for iRMC operations" msgstr "" #: ironic/drivers/modules/irmc/common.py:41 msgid "Sensor data retrieval method, either \"ipmitool\" or \"scci\"" msgstr "" #: ironic/drivers/modules/irmc/common.py:50 msgid "IP address or hostname of the iRMC. Required." msgstr "" #: ironic/drivers/modules/irmc/common.py:51 msgid "Username for the iRMC with administrator privileges. Required." msgstr "" #: ironic/drivers/modules/irmc/common.py:53 msgid "Password for irmc_username. Required." msgstr "" #: ironic/drivers/modules/irmc/common.py:56 msgid "" "Port to be used for iRMC operations; either 80 or 443. The default value " "is 443. Optional." msgstr "" #: ironic/drivers/modules/irmc/common.py:58 msgid "" "Authentication method for iRMC operations; either 'basic' or 'digest'. " "The default value is 'basic'. Optional." msgstr "" #: ironic/drivers/modules/irmc/common.py:61 msgid "" "Timeout (in seconds) for iRMC operations. The default value is 60. " "Optional." msgstr "" #: ironic/drivers/modules/irmc/common.py:63 msgid "" "Sensor data retrieval method; either 'ipmitool' or 'scci'. The default " "value is 'ipmitool'. Optional." msgstr "" #: ironic/drivers/modules/irmc/common.py:89 #, python-format msgid "Missing the following iRMC parameters in node's driver_info: %s." msgstr "" #: ironic/drivers/modules/irmc/common.py:103 msgid "'irmc_auth_method' has unsupported value." msgstr "" #: ironic/drivers/modules/irmc/common.py:106 msgid "'irmc_port' has unsupported value." msgstr "" #: ironic/drivers/modules/irmc/common.py:109 msgid "'irmc_client_timeout' is not integer type." msgstr "" #: ironic/drivers/modules/irmc/common.py:112 msgid "'irmc_sensor_method' has unsupported value." msgstr "" #: ironic/drivers/modules/irmc/common.py:114 #, python-format msgid "" "The following type errors were encountered while parsing driver_info:\n" "%s" msgstr "" #: ironic/drivers/modules/irmc/power.py:75 msgid "iRMC set_power_state" msgstr "" #: ironic/drivers/modules/msftocs/common.py:26 msgid "" "Base url of the OCS chassis manager REST API, e.g.: http://10.0.0.1:8000." " Required." msgstr "" #: ironic/drivers/modules/msftocs/common.py:28 msgid "" "Blade id, must be a number between 1 and the maximum number of blades " "available in the chassis. Required." msgstr "" #: ironic/drivers/modules/msftocs/common.py:31 msgid "Username to access the chassis manager REST API. Required." msgstr "" #: ironic/drivers/modules/msftocs/common.py:33 msgid "Password to access the chassis manager REST API. Required." 
msgstr "" #: ironic/drivers/modules/msftocs/common.py:81 #, python-format msgid "The following parameters were missing: %s" msgstr "" #: ironic/drivers/modules/msftocs/common.py:99 #, python-format msgid "\"%s\" is not a valid \"msftocs_base_url\"" msgstr "" #: ironic/drivers/modules/msftocs/common.py:106 #, python-format msgid "\"%s\" is not a valid \"msftocs_blade_id\"" msgstr "" #: ironic/drivers/modules/msftocs/common.py:109 #, python-format msgid "\"msftocs_blade_id\" must be greater than 0. The provided value is: %s" msgstr "" #: ironic/drivers/modules/msftocs/msftocsclient.py:72 #, python-format msgid "HTTP call failed: %s" msgstr "" #: ironic/drivers/modules/msftocs/msftocsclient.py:87 #, python-format msgid "Invalid XML: %s" msgstr "" #: ironic/drivers/modules/msftocs/msftocsclient.py:91 #, python-format msgid "Operation failed: %s" msgstr "" #: ironic/drivers/modules/oneview/common.py:37 msgid "URL where OneView is available" msgstr "" #: ironic/drivers/modules/oneview/common.py:39 msgid "OneView username to be used" msgstr "" #: ironic/drivers/modules/oneview/common.py:42 msgid "OneView password to be used" msgstr "" #: ironic/drivers/modules/oneview/common.py:45 msgid "Option to allow insecure connection with OneView" msgstr "" #: ironic/drivers/modules/oneview/common.py:48 msgid "Path to CA certificate" msgstr "" #: ironic/drivers/modules/oneview/common.py:51 msgid "Max connection retries to check changes on OneView" msgstr "" #: ironic/drivers/modules/oneview/common.py:58 msgid "Server Hardware URI. Required in driver_info." msgstr "" #: ironic/drivers/modules/oneview/common.py:62 msgid "Server Hardware Type URI. Required in properties/capabilities." msgstr "" #: ironic/drivers/modules/oneview/common.py:72 msgid "Enclosure Group URI. Optional in properties/capabilities." msgstr "" #: ironic/drivers/modules/oneview/common.py:75 msgid "" "Server Profile Template URI to clone from. Deprecated in driver_info. " "Required in properties/capabilities." msgstr "" #: ironic/drivers/modules/oneview/common.py:146 msgid "" "Missing 'server_profile_template_uri' parameter value in " "properties/capabilities" msgstr "" #: ironic/drivers/modules/oneview/common.py:221 #, python-format msgid "Error validating node resources with OneView: %s" msgstr "" #: ironic/drivers/modules/oneview/common.py:253 #, python-format msgid "" "Missing the keys for the following OneView data in node's %(namespace)s: " "%(missing_keys)s." msgstr "" #: ironic/drivers/modules/oneview/common.py:267 #, python-format msgid "Missing parameter value for: '%s'" msgstr "" #: ironic/drivers/modules/oneview/common.py:292 #, python-format msgid "A Server Profile is not associated with node %s." msgstr "" #: ironic/drivers/modules/oneview/management.py:114 #, python-format msgid "Error setting boot device on OneView. Error: %s" msgstr "" #: ironic/drivers/modules/oneview/management.py:145 #, python-format msgid "Error getting boot device from OneView. Error: %s" msgstr "" #: ironic/drivers/modules/oneview/management.py:155 #, python-format msgid "Unsupported boot Device %(device)s for Node: %(node)s" msgstr "" #: ironic/drivers/modules/oneview/power.py:120 #, python-format msgid "Error setting power state: %s" msgstr "" #: ironic/drivers/modules/ucs/helper.py:34 msgid "IP or Hostname of the UCS Manager. Required." msgstr "" #: ironic/drivers/modules/ucs/helper.py:35 msgid "UCS Manager admin/server-profile username. Required." msgstr "" #: ironic/drivers/modules/ucs/helper.py:36 msgid "UCS Manager password. Required." 
msgstr "" #: ironic/drivers/modules/ucs/helper.py:37 msgid "UCS Manager service-profile name. Required." msgstr "" #: ironic/drivers/modules/ucs/management.py:100 msgid "setting boot device" msgstr "" #: ironic/drivers/modules/ucs/management.py:134 msgid "getting boot device" msgstr "" #: ironic/drivers/modules/ucs/power.py:130 msgid "getting power status" msgstr "" #: ironic/drivers/modules/ucs/power.py:155 #, python-format msgid "set_power_state called with invalid power state '%s'" msgstr "" #: ironic/drivers/modules/ucs/power.py:172 msgid "setting power status" msgstr "" #: ironic/drivers/modules/ucs/power.py:204 msgid "rebooting" msgstr "" #: ironic/objects/conductor.py:63 msgid "Cannot update a conductor record directly." msgstr "" #: ironic/objects/node.py:135 #, python-format msgid "" "The following properties for node %(node)s should be non-negative " "integers, but provided values are: %(msgs)s" msgstr "" ironic-5.1.0/ironic/locale/ironic-log-error.pot0000664000567000056710000003761712674513466022634 0ustar jenkinsjenkins00000000000000# Translations template for ironic. # Copyright (C) 2016 ORGANIZATION # This file is distributed under the same license as the ironic project. # FIRST AUTHOR , 2016. # #, fuzzy msgid "" msgstr "" "Project-Id-Version: ironic 4.3.1.dev202\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" "POT-Creation-Date: 2016-01-27 06:37+0000\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language-Team: LANGUAGE \n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=utf-8\n" "Content-Transfer-Encoding: 8bit\n" "Generated-By: Babel 2.2.0\n" #: ironic/api/middleware/parsable_error.py:80 #, python-format msgid "Error parsing HTTP response: %s" msgstr "" #: ironic/common/exception.py:89 msgid "Exception in string format operation" msgstr "" #: ironic/common/images.py:151 #, python-format msgid "vfat image creation failed. Error: %s" msgstr "" #: ironic/common/images.py:221 ironic/common/images.py:287 msgid "Creating the filesystem root failed." msgstr "" #: ironic/common/images.py:236 ironic/common/images.py:313 msgid "Creating ISO image failed." msgstr "" #: ironic/common/images.py:531 msgid "mounting the deploy iso failed." msgstr "" #: ironic/common/images.py:545 msgid "examining the deploy iso failed." msgstr "" #: ironic/common/service.py:97 #, python-format msgid "Service error occurred when stopping the RPC server. Error: %s" msgstr "" #: ironic/common/service.py:102 #, python-format msgid "Service error occurred when cleaning up the RPC manager. Error: %s" msgstr "" #: ironic/common/utils.py:462 #, python-format msgid "Could not remove tmpdir: %s" msgstr "" #: ironic/common/glance_service/base_image_service.py:128 #, python-format msgid "" "Error contacting glance server '%(host)s:%(port)s' for '%(method)s', " "attempt %(attempt)s of %(num_attempts)s failed." msgstr "" #: ironic/conductor/base_manager.py:103 #, python-format msgid "" "Conductor %s cannot be started because no drivers were loaded. This " "could be because no drivers were specified in 'enabled_drivers' config " "option." msgstr "" #: ironic/conductor/manager.py:617 #, python-format msgid "Error in tear_down of node %(node)s: %(err)s" msgstr "" #: ironic/conductor/manager.py:1037 #, python-format msgid "" "Failed to tear down cleaning for node %(node)s after aborting the " "operation. Error: %(err)s" msgstr "" #: ironic/conductor/manager.py:1485 #, python-format msgid "Failed to stop console while deleting the node %(node)s: %(err)s." 
msgstr "" #: ironic/conductor/manager.py:2162 #, python-format msgid "Error while uploading the configdrive for %(node)s to Swift" msgstr "" #: ironic/conductor/manager.py:2173 #, python-format msgid "Error while preparing to deploy to node %(node)s: %(err)s" msgstr "" #: ironic/conductor/manager.py:2183 #, python-format msgid "Error in deploy of node %(node)s: %(err)s" msgstr "" #: ironic/conductor/manager.py:2200 #, python-format msgid "Unexpected state %(state)s returned while deploying node %(node)s." msgstr "" #: ironic/conductor/manager.py:2333 #, python-format msgid "" "Failed to change power state of node %(node)s to '%(state)s', attempt " "%(attempt)s of %(retries)s." msgstr "" #: ironic/conductor/manager.py:2367 #, python-format msgid "Failed to inspect node %(node)s: %(err)s" msgstr "" #: ironic/conductor/utils.py:221 #, python-format msgid "Failed to tear down cleaning on node %(uuid)s, reason: %(err)s" msgstr "" #: ironic/dhcp/neutron.py:125 #, python-format msgid "Failed to update Neutron port %s." msgstr "" #: ironic/dhcp/neutron.py:140 #, python-format msgid "Failed to update MAC address on Neutron port %s." msgstr "" #: ironic/dhcp/neutron.py:213 #, python-format msgid "Failed to Get IP address on Neutron port %s." msgstr "" #: ironic/dhcp/neutron.py:229 #, python-format msgid "Neutron returned invalid IPv4 address %s." msgstr "" #: ironic/dhcp/neutron.py:233 #, python-format msgid "No IP address assigned to Neutron port %s." msgstr "" #: ironic/dhcp/neutron.py:376 #, python-format msgid "Failed to rollback cleaning port changes for node %s" msgstr "" #: ironic/drivers/base.py:633 #, python-format msgid "vendor_passthru failed with method %s" msgstr "" #: ironic/drivers/modules/agent.py:128 #, python-format msgid "" "Agent deploy supports only HTTP(S) URLs as instance_info['image_source']." " Either %s is not a valid HTTP(S) URL or is not reachable." msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:514 #, python-format msgid "Could not find matching node for the provided MACs %s." msgstr "" #: ironic/drivers/modules/deploy_utils.py:394 #: ironic/drivers/modules/deploy_utils.py:400 #, python-format msgid "Deploy to address %s failed." msgstr "" #: ironic/drivers/modules/deploy_utils.py:395 #, python-format msgid "Command: %s" msgstr "" #: ironic/drivers/modules/deploy_utils.py:396 #, python-format msgid "StdOut: %r" msgstr "" #: ironic/drivers/modules/deploy_utils.py:397 #, python-format msgid "StdErr: %r" msgstr "" #: ironic/drivers/modules/deploy_utils.py:482 #, python-format msgid "" "Internal error. Node %(node)s in provision state \"%(state)s\" could not " "transition to a failed state." msgstr "" #: ironic/drivers/modules/deploy_utils.py:490 #, python-format msgid "" "Node %s failed to power off while handling deploy failure. This may be a " "serious condition. Node should be removed from Ironic or put in " "maintenance mode until the problem is resolved." 
msgstr "" #: ironic/drivers/modules/inspector.py:156 #, python-format msgid "" "Exception during contacting ironic-inspector for inspection of node " "%(node)s: %(err)s" msgstr "" #: ironic/drivers/modules/inspector.py:191 #, python-format msgid "" "Unexpected exception while getting inspection status for node %s, will " "retry later" msgstr "" #: ironic/drivers/modules/inspector.py:207 #, python-format msgid "Inspection failed for node %(uuid)s with error: %(err)s" msgstr "" #: ironic/drivers/modules/ipminative.py:282 #, python-format msgid "" "IPMI get sensor data failed for node %(node_id)s with the following " "error: %(error)s" msgstr "" #: ironic/drivers/modules/ipminative.py:484 #, python-format msgid "" "IPMI set boot device failed for node %(node_id)s with the following " "error: %(error)s" msgstr "" #: ironic/drivers/modules/ipminative.py:530 #, python-format msgid "" "IPMI get boot device failed for node %(node_id)s with the following " "error: %(error)s" msgstr "" #: ironic/drivers/modules/ipmitool.py:426 #, python-format msgid "IPMI Error while attempting \"%(cmd)s\"for node %(node)s. Error: %(error)s" msgstr "" #: ironic/drivers/modules/ipmitool.py:501 #, python-format msgid "" "IPMI power %(state)s timed out after %(tries)s retries on node " "%(node_id)s." msgstr "" #: ironic/drivers/modules/ipmitool.py:664 #, python-format msgid "IPMI \"raw bytes\" failed for node %(node_id)s with error: %(error)s." msgstr "" #: ironic/drivers/modules/ipmitool.py:1019 #, python-format msgid "IPMI \"bmc reset\" failed for node %(node_id)s with error: %(error)s." msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:628 #: ironic/drivers/modules/iscsi_deploy.py:941 #: ironic/drivers/modules/iscsi_deploy.py:980 #, python-format msgid "Deploy failed for instance %(instance)s. Error: %(error)s" msgstr "" #: ironic/drivers/modules/pxe.py:297 msgid "Whole disk image with netboot is not supported in UEFI boot mode." msgstr "" #: ironic/drivers/modules/seamicro.py:195 #, python-format msgid "SeaMicro client exception %(msg)s for node %(uuid)s" msgstr "" #: ironic/drivers/modules/seamicro.py:476 #: ironic/drivers/modules/seamicro.py:515 #, python-format msgid "SeaMicro client exception: %s" msgstr "" #: ironic/drivers/modules/seamicro.py:579 #, python-format msgid "" "Seamicro set boot device failed for node %(node)s with the following " "error: %(error)s" msgstr "" #: ironic/drivers/modules/ssh.py:346 #, python-format msgid "Cannot execute SSH cmd %(cmd)s. Reason: %(err)s." msgstr "" #: ironic/drivers/modules/ssh.py:751 #, python-format msgid "" "Failed to set boot device for node %(node)s, virt_type %(vtype)s does not" " support this operation" msgstr "" #: ironic/drivers/modules/virtualbox.py:158 #, python-format msgid "" "Failed while creating a VirtualMachine object for node %(node_id)s. " "Error: %(error)s." msgstr "" #: ironic/drivers/modules/virtualbox.py:174 #, python-format msgid "'%(ironic_method)s' failed for node %(node_id)s with error: %(error)s." 
msgstr "" #: ironic/drivers/modules/virtualbox.py:216 #, python-format msgid "VirtualBox returned unknown state '%(state)s' for node %(node)s" msgstr "" #: ironic/drivers/modules/virtualbox.py:312 #, python-format msgid "VirtualBox returned unknown boot device '%(device)s' for node %(node)s" msgstr "" #: ironic/drivers/modules/virtualbox.py:353 #, python-format msgid "'set_boot_device' failed for node %(node_id)s with error: %(error)s" msgstr "" #: ironic/drivers/modules/amt/common.py:119 #, python-format msgid "Call to AMT with URI %(uri)s failed: got Fault %(fault)s" msgstr "" #: ironic/drivers/modules/amt/common.py:143 #, python-format msgid "" "Call to AMT with URI %(uri)s and method %(method)s failed: return value " "was %(value)s" msgstr "" #: ironic/drivers/modules/amt/common.py:248 #, python-format msgid "Unable to awake AMT interface on node %(node_id)s. Error: %(error)s" msgstr "" #: ironic/drivers/modules/amt/management.py:90 #, python-format msgid "" "Failed to set boot device %(boot_device)s for node %(node_id)s with " "error: %(error)s." msgstr "" #: ironic/drivers/modules/amt/management.py:144 #, python-format msgid "Failed to enable boot config for node %(node_id)s with error: %(error)s." msgstr "" #: ironic/drivers/modules/amt/power.py:113 #, python-format msgid "" "Failed to set power state %(state)s for node %(node_id)s with error: " "%(error)s." msgstr "" #: ironic/drivers/modules/amt/power.py:137 #, python-format msgid "Failed to get power state for node %(node_id)s with error: %(error)s." msgstr "" #: ironic/drivers/modules/drac/bios.py:93 #, python-format msgid "" "DRAC driver failed to get the BIOS settings for node %(node_uuid)s. " "Reason: %(error)s." msgstr "" #: ironic/drivers/modules/drac/bios.py:120 #, python-format msgid "" "DRAC driver failed to set the BIOS settings for node %(node_uuid)s. " "Reason: %(error)s." msgstr "" #: ironic/drivers/modules/drac/bios.py:144 #, python-format msgid "" "DRAC driver failed to commit the pending BIOS changes for node " "%(node_uuid)s. Reason: %(error)s." msgstr "" #: ironic/drivers/modules/drac/bios.py:163 #, python-format msgid "" "DRAC driver failed to delete the pending BIOS settings for node " "%(node_uuid)s. Reason: %(error)s." msgstr "" #: ironic/drivers/modules/drac/job.py:44 #, python-format msgid "" "DRAC driver failed to get the list of unfinished jobs for node " "%(node_uuid)s. Reason: %(error)s." msgstr "" #: ironic/drivers/modules/drac/management.py:69 #, python-format msgid "" "DRAC driver failed to get next boot mode for node %(node_uuid)s. Reason: " "%(error)s." msgstr "" #: ironic/drivers/modules/drac/management.py:115 #, python-format msgid "" "DRAC driver failed to change boot device order for node %(node_uuid)s. " "Reason: %(error)s." msgstr "" #: ironic/drivers/modules/drac/power.py:58 #, python-format msgid "" "DRAC driver failed to get power state for node %(node_uuid)s. Reason: " "%(error)s." msgstr "" #: ironic/drivers/modules/drac/power.py:104 #, python-format msgid "" "DRAC driver failed to set power state for node %(node_uuid)s to " "%(power_state)s. Reason: %(error)s." msgstr "" #: ironic/drivers/modules/ilo/boot.py:130 #, python-format msgid "" "Virtual media deploy accepts only Glance images or HTTP(S) URLs as " "instance_info['ilo_boot_iso']. Either %s is not a valid HTTP(S) URL or is" " not reachable." 
msgstr "" #: ironic/drivers/modules/ilo/boot.py:158 #, python-format msgid "" "Unable to find kernel or ramdisk for image %(image)s to generate boot ISO" " for %(node)s" msgstr "" #: ironic/drivers/modules/ilo/boot.py:218 #, python-format msgid "Failed to clean up boot ISO for node %(node)s. Error: %(error)s." msgstr "" #: ironic/drivers/modules/ilo/boot.py:399 #, python-format msgid "Cannot get boot ISO for node %s" msgstr "" #: ironic/drivers/modules/ilo/common.py:528 #, python-format msgid "" "Error while ejecting virtual media %(device)s from node %(uuid)s. Error: " "%(error)s" msgstr "" #: ironic/drivers/modules/ilo/common.py:558 #, python-format msgid "" "Error while deleting temporary swift object %(object_name)s from " "%(container)s associated with virtual floppy. Error: %(error)s" msgstr "" #: ironic/drivers/modules/ilo/power.py:95 #, python-format msgid "iLO get_power_state failed for node %(node_id)s with error: %(error)s." msgstr "" #: ironic/drivers/modules/ilo/power.py:167 #, python-format msgid "" "iLO set_power_state failed to set state to %(tstate)s for node " "%(node_id)s with error: %(error)s" msgstr "" #: ironic/drivers/modules/ilo/power.py:180 #, python-format msgid "iLO failed to change state to %(tstate)s within %(timeout)s sec" msgstr "" #: ironic/drivers/modules/irmc/boot.py:451 #, python-format msgid "Error while inserting virtual cdrom into node %(uuid)s. Error: %(error)s" msgstr "" #: ironic/drivers/modules/irmc/boot.py:474 #, python-format msgid "Error while ejecting virtual cdrom from node %(uuid)s. Error: %(error)s" msgstr "" #: ironic/drivers/modules/irmc/boot.py:507 #, python-format msgid "Error while inserting virtual floppy into node %(uuid)s. Error: %(error)s" msgstr "" #: ironic/drivers/modules/irmc/boot.py:530 #, python-format msgid "Error while ejecting virtual floppy from node %(uuid)s. Error: %(error)s" msgstr "" #: ironic/drivers/modules/irmc/management.py:64 #, python-format msgid "" "SCCI get sensor data failed for node %(node_id)s with the following " "error: %(error)s" msgstr "" #: ironic/drivers/modules/irmc/power.py:71 #, python-format msgid "" "iRMC set_power_state failed to set state to %(tstate)s for node " "%(node_id)s with error: %(error)s" msgstr "" #: ironic/drivers/modules/msftocs/msftocsclient.py:85 #, python-format msgid "XML parsing failed: %s" msgstr "" #: ironic/drivers/modules/msftocs/power.py:87 #, python-format msgid "Changing the power state to %(pstate)s failed. Error: %(err_msg)s" msgstr "" #: ironic/drivers/modules/msftocs/power.py:104 #, python-format msgid "Reboot failed. Error: %(err_msg)s" msgstr "" #: ironic/drivers/modules/oneview/common.py:285 #, python-format msgid "" "Failed to get server profile from OneView appliance fornode %(node)s. " "Error: %(message)s" msgstr "" #: ironic/drivers/modules/oneview/power.py:79 #, python-format msgid "Error getting power state for node %(node)s. Error:%(error)s" msgstr "" #: ironic/drivers/modules/ucs/helper.py:118 #, python-format msgid "Cisco client: service unavailable for node %(uuid)s." msgstr "" #: ironic/drivers/modules/ucs/management.py:96 #, python-format msgid "%(driver)s: client failed to set boot device %(device)s for node %(uuid)s." msgstr "" #: ironic/drivers/modules/ucs/management.py:131 #, python-format msgid "%(driver)s: client failed to get boot device for node %(uuid)s." msgstr "" #: ironic/drivers/modules/ucs/power.py:126 #, python-format msgid "" "%(driver)s: get_power_state operation failed for node %(uuid)s with " "error: %(msg)s." 
msgstr "" #: ironic/drivers/modules/ucs/power.py:168 #, python-format msgid "" "%(driver)s: set_power_state operation failed for node %(uuid)s with " "error: %(msg)s." msgstr "" #: ironic/drivers/modules/ucs/power.py:179 #, python-format msgid "" "%(driver)s: driver failed to change node %(uuid)s power state to " "%(state)s within %(timeout)s seconds." msgstr "" #: ironic/drivers/modules/ucs/power.py:201 #, python-format msgid "%(driver)s: driver failed to reset node %(uuid)s power state." msgstr "" #: ironic/drivers/modules/ucs/power.py:212 #, python-format msgid "" "%(driver)s: driver failed to reboot node %(uuid)s within %(timeout)s " "seconds." msgstr "" #: ironic/tests/unit/db/sqlalchemy/test_migrations.py:169 #, python-format msgid "Failed to migrate to version %(version)s on engine %(engine)s" msgstr "" ironic-5.1.0/ironic/locale/ko_KR/0000775000567000056710000000000012674513633017702 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/locale/ko_KR/LC_MESSAGES/0000775000567000056710000000000012674513633021467 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/locale/ko_KR/LC_MESSAGES/ironic-log-critical.po0000664000567000056710000000152112674513466025664 0ustar jenkinsjenkins00000000000000# Translations template for ironic. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the ironic project. # # Translators: # Mario Cho , 2014 # OpenStack Infra , 2015. #zanata msgid "" msgstr "" "Project-Id-Version: ironic 4.3.1.dev202\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" "POT-Creation-Date: 2016-01-27 02:09+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2014-10-08 04:00+0000\n" "Last-Translator: Mario Cho \n" "Language: ko-KR\n" "Plural-Forms: nplurals=1; plural=0;\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.7.3\n" "Language-Team: Korean (South Korea)\n" msgid "Failed to start keepalive" msgstr "활성 상태를 시작하지 못했습니다. " ironic-5.1.0/ironic/locale/fr/0000775000567000056710000000000012674513633017304 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/locale/fr/LC_MESSAGES/0000775000567000056710000000000012674513633021071 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/locale/fr/LC_MESSAGES/ironic-log-critical.po0000664000567000056710000000160012674513466025264 0ustar jenkinsjenkins00000000000000# Translations template for ironic. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the ironic project. # # Translators: # Maxime COQUEREL , 2014 # Andrew Melim , 2014 # OpenStack Infra , 2015. #zanata msgid "" msgstr "" "Project-Id-Version: ironic 4.3.1.dev202\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" "POT-Creation-Date: 2016-01-27 02:09+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2014-09-25 03:41+0000\n" "Last-Translator: Maxime COQUEREL \n" "Language: fr\n" "Plural-Forms: nplurals=2; plural=(n > 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.7.3\n" "Language-Team: French\n" msgid "Failed to start keepalive" msgstr "Echec de démarrage keepalive" ironic-5.1.0/ironic/locale/ironic-log-warning.pot0000664000567000056710000003220212674513466023131 0ustar jenkinsjenkins00000000000000# Translations template for ironic. # Copyright (C) 2016 ORGANIZATION # This file is distributed under the same license as the ironic project. # FIRST AUTHOR , 2016. 
# #, fuzzy msgid "" msgstr "" "Project-Id-Version: ironic 4.3.1.dev202\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" "POT-Creation-Date: 2016-01-27 06:37+0000\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language-Team: LANGUAGE \n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=utf-8\n" "Content-Transfer-Encoding: 8bit\n" "Generated-By: Babel 2.2.0\n" #: ironic/common/exception.py:76 #, python-format msgid "" "Exception class: %s Using the 'message' attribute in an exception has " "been deprecated. The exception class should be modified to use the " "'_msg_fmt' attribute." msgstr "" #: ironic/common/utils.py:470 #, python-format msgid "Failed to remove dir %(path)s, error: %(e)s" msgstr "" #: ironic/common/utils.py:487 #, python-format msgid "Failed to create symlink from %(source)s to %(link)s, error: %(e)s" msgstr "" #: ironic/common/utils.py:501 #, python-format msgid "" "Failed to remove trailing character. Returning original object. Supplied " "object is not a string: %s," msgstr "" #: ironic/conductor/base_manager.py:129 #, python-format msgid "" "A conductor with hostname %(hostname)s was previously registered. " "Updating registration" msgstr "" #: ironic/conductor/base_manager.py:242 msgid "Conductor could not connect to database while heartbeating." msgstr "" #: ironic/conductor/manager.py:881 msgid "" "Returning CLEANING for asynchronous prepare cleaning has been deprecated." " Please use CLEANWAIT instead." msgstr "" #: ironic/conductor/manager.py:950 msgid "" "Returning CLEANING for asynchronous clean steps has been deprecated. " "Please use CLEANWAIT instead." msgstr "" #: ironic/conductor/manager.py:1268 #, python-format msgid "" "During checking for deploying state, node %s was not found and presumed " "deleted by another process. Skipping." msgstr "" #: ironic/conductor/manager.py:1273 #, python-format msgid "" "During checking for deploying state, when releasing the lock of the node " "%s, it was locked by another process. Skipping." msgstr "" #: ironic/conductor/manager.py:1279 #, python-format msgid "" "During checking for deploying state, when releasing the lock of the node " "%s, it was already unlocked." msgstr "" #: ironic/conductor/manager.py:1646 #, python-format msgid "" "No VIF found for instance %(instance)s port %(port)s when attempting to " "update port MAC address." msgstr "" #: ironic/conductor/manager.py:1704 #, python-format msgid "" "get_sensors_data is not implemented for driver %(driver)s, node_uuid is " "%(node)s" msgstr "" #: ironic/conductor/manager.py:1709 #, python-format msgid "" "During get_sensors_data, could not parse sensor data for node %(node)s. " "Error: %(err)s." msgstr "" #: ironic/conductor/manager.py:1714 #, python-format msgid "" "During get_sensors_data, could not get sensor data for node %(node)s. " "Error: %(err)s." msgstr "" #: ironic/conductor/manager.py:1719 #, python-format msgid "" "During send_sensor_data, node %(node)s was not found and presumed deleted" " by another process." msgstr "" #: ironic/conductor/manager.py:1724 #, python-format msgid "Failed to get sensor data for node %(node)s. Error: %(error)s" msgstr "" #: ironic/conductor/manager.py:2284 #, python-format msgid "" "During sync_power_state, could not get power state for node %(node)s, " "attempt %(attempt)s of %(retries)s. Error: %(err)s." msgstr "" #: ironic/conductor/manager.py:2323 #, python-format msgid "" "During sync_power_state, node %(node)s state '%(actual)s' does not match " "expected state. 
Changing hardware state to '%(state)s'." msgstr "" #: ironic/conductor/manager.py:2341 #, python-format msgid "" "During sync_power_state, node %(node)s state does not match expected " "state '%(state)s'. Updating recorded state to '%(actual)s'." msgstr "" #: ironic/conductor/task_manager.py:402 #, python-format msgid "Task's on_error hook failed to call %(method)s on node %(node)s" msgstr "" #: ironic/conductor/utils.py:104 #, python-format msgid "" "Not going to change node power state because current state = requested " "state = '%(state)s'." msgstr "" #: ironic/conductor/utils.py:111 #, python-format msgid "Driver returns ERROR power state for node %s." msgstr "" #: ironic/conductor/utils.py:196 #, python-format msgid "" "No free conductor workers available to perform an action on node " "%(node)s, setting node's provision_state back to %(prov_state)s and " "target_provision_state to %(tgt_prov_state)s." msgstr "" #: ironic/conductor/utils.py:246 #, python-format msgid "" "No free conductor workers available to perform an action on node " "%(node)s, setting node's power state back to %(power_state)s." msgstr "" #: ironic/db/sqlalchemy/api.py:582 #, python-format msgid "Cleared reservations held by %(hostname)s: %(nodes)s" msgstr "" #: ironic/dhcp/neutron.py:186 #, python-format msgid "" "Some errors were encountered when updating the DHCP BOOT options for node" " %(node)s on the following ports: %(ports)s." msgstr "" #: ironic/dhcp/neutron.py:250 #, python-format msgid "No VIFs found for node %(node)s when attempting to get port IP address." msgstr "" #: ironic/dhcp/neutron.py:279 #, python-format msgid "" "Some errors were encountered on node %(node)s while retrieving IP address" " on the following ports: %(ports)s." msgstr "" #: ironic/drivers/base.py:1145 msgid "" "Using periodic tasks with parallel=False is deprecated, \"parallel\" " "argument will be ignored starting with the Mitaka release" msgstr "" #: ironic/drivers/utils.py:146 #, python-format msgid "Ignoring malformed capability '%s'. Format should be 'key:val'." msgstr "" #: ironic/drivers/modules/agent.py:149 #, python-format msgid "" "Skip the image size check as memory_mb is not defined in properties on " "node %s." msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:491 #, python-format msgid "Malformed MAC: %s" msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:534 #, python-format msgid "MAC address %s not found in database" msgstr "" #: ironic/drivers/modules/agent_base_vendor.py:599 #: ironic/drivers/modules/oneview/vendor.py:96 #, python-format msgid "" "Failed to soft power off node %(node_uuid)s in at least %(timeout)d " "seconds. Error: %(error)s" msgstr "" #: ironic/drivers/modules/console_utils.py:136 #, python-format msgid "" "Console process for node %s is not running but pid file exists while " "trying to stop shellinabox console." msgstr "" #: ironic/drivers/modules/console_utils.py:189 #, python-format msgid "" "Failed to kill the old console process before starting a new shellinabox " "console for node %(node)s. Reason: %(err)s" msgstr "" #: ironic/drivers/modules/console_utils.py:275 #, python-format msgid "No console pid found for node %s while trying to stop shellinabox console." msgstr "" #: ironic/drivers/modules/deploy_utils.py:660 #, python-format msgid "" "ipmitool is unable to set boot device while the node %s is in UEFI boot " "mode. Please set the boot device manually." 
msgstr "" #: ironic/drivers/modules/iboot.py:122 #, python-format msgid "" "Reached maximum number of attempts (%(attempts)d) to set power state for " "node %(node)s to \"%(op)s\"" msgstr "" #: ironic/drivers/modules/iboot.py:135 #, python-format msgid "" "Cannot call set power state for node '%(node)s' at relay '%(relay)s'. " "iBoot switch() failed." msgstr "" #: ironic/drivers/modules/iboot.py:170 #, python-format msgid "" "Reached maximum number of attempts (%(attempts)d) to get power state for " "node %(node)s" msgstr "" #: ironic/drivers/modules/iboot.py:186 #, python-format msgid "" "Cannot get power state for node '%(node)s' at relay '%(relay)s'. iBoot " "get_relays() failed." msgstr "" #: ironic/drivers/modules/image_cache.py:206 #, python-format msgid "" "Cache clean up was unable to reclaim %(required)d MiB of disk space, " "still %(left)d MiB required" msgstr "" #: ironic/drivers/modules/image_cache.py:233 #: ironic/drivers/modules/image_cache.py:272 #, python-format msgid "Unable to delete file %(name)s from master image cache: %(exc)s" msgstr "" #: ironic/drivers/modules/image_cache.py:407 #, python-format msgid "" "Image service couldn't determine last modification time of %(href)s, " "considering cached image up to date." msgstr "" #: ironic/drivers/modules/ipminative.py:263 #, python-format msgid "" "IPMI get power state for node %(node_id)s returns the following details: " "%(detail)s" msgstr "" #: ironic/drivers/modules/ipmitool.py:433 #, python-format msgid "" "IPMI Error encountered, retrying \"%(cmd)s\" for node %(node)s. Error: " "%(error)s" msgstr "" #: ironic/drivers/modules/ipmitool.py:490 #, python-format msgid "IPMI power %(state)s failed for node %(node)s." msgstr "" #: ironic/drivers/modules/ipmitool.py:555 #, python-format msgid "IPMI power status failed for node %(node_id)s with error: %(error)s." msgstr "" #: ironic/drivers/modules/ipmitool.py:869 #, python-format msgid "" "IPMI set boot device failed for node %(node)s when executing \"ipmitool " "%(cmd)s\". Error: %(error)s" msgstr "" #: ironic/drivers/modules/ipmitool.py:912 #, python-format msgid "" "IPMI get boot device failed for node %(node)s when executing \"ipmitool " "%(cmd)s\". Error: %(error)s" msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:856 #: ironic/drivers/modules/iscsi_deploy.py:902 #, python-format msgid "" "The node %s is using the bash deploy ramdisk for its deployment. This " "deploy ramdisk has been deprecated. Please use the ironic-python-agent " "(IPA) ramdisk instead." msgstr "" #: ironic/drivers/modules/iscsi_deploy.py:876 #, python-format msgid "" "Bash deploy ramdisk doesn't support in-band cleaning. Please use the " "ironic-python-agent (IPA) ramdisk instead for node %s. " msgstr "" #: ironic/drivers/modules/pxe.py:139 #, python-format msgid "" "The CONF option [agent]agent_%(opt_name)s is deprecated and will be " "removed in Mitaka release of Ironic. Please use [pxe]%(opt_name)s " "instead." 
msgstr "" #: ironic/drivers/modules/pxe.py:499 #, python-format msgid "" "Could not get deploy image info to clean up images for node %(node)s: " "%(err)s" msgstr "" #: ironic/drivers/modules/pxe.py:539 #, python-format msgid "" "The UUID for the root partition can't be found, unable to switch the pxe " "config from deployment mode to service (boot) mode for node %(node)s" msgstr "" #: ironic/drivers/modules/pxe.py:545 #, python-format msgid "" "The disk id for the whole disk image can't be found, unable to switch the" " pxe config from deployment mode to service (boot) mode for node %(node)s" msgstr "" #: ironic/drivers/modules/pxe.py:583 #, python-format msgid "" "Could not get instance image info to clean up images for node %(node)s: " "%(err)s" msgstr "" #: ironic/drivers/modules/seamicro.py:231 #, python-format msgid "Power-on failed for node %s." msgstr "" #: ironic/drivers/modules/seamicro.py:271 #, python-format msgid "Power-off failed for node %s." msgstr "" #: ironic/drivers/modules/seamicro.py:312 #, python-format msgid "Reboot failed for node %s." msgstr "" #: ironic/drivers/modules/snmp.py:373 #, python-format msgid "SNMP PDU %(addr)s outlet %(outlet)s: unrecognised power state %(state)s." msgstr "" #: ironic/drivers/modules/snmp.py:539 #, python-format msgid "" "Eaton Power SNMP PDU %(addr)s outlet %(outlet)s: unrecognised power state" " %(state)s." msgstr "" #: ironic/drivers/modules/ssh.py:787 #, python-format msgid "" "Failed to get boot device for node %(node)s, virt_type %(vtype)s does not" " support this operation" msgstr "" #: ironic/drivers/modules/amt/power.py:180 #, python-format msgid "" "AMT failed to set power state %(state)s after %(tries)s retries on node " "%(node_id)s." msgstr "" #: ironic/drivers/modules/amt/power.py:190 #, python-format msgid "" "AMT set power state %(state)s for node %(node)s - Attempt %(attempt)s " "times of %(max_attempt)s failed." msgstr "" #: ironic/drivers/modules/drac/client.py:91 #, python-format msgid "" "Empty response on calling %(action)s on client. Last error (cURL error " "code): %(last_error)s, fault string: \"%(fault_string)s\" response_code: " "%(response_code)s. Retry attempt %(count)d" msgstr "" #: ironic/drivers/modules/ilo/boot.py:350 #, python-format msgid "The UUID for the root partition could not be found for node %s" msgstr "" #: ironic/drivers/modules/ilo/common.py:555 #, python-format msgid "" "Temporary object associated with virtual floppy was already deleted from " "Swift. Error: %s" msgstr "" #: ironic/drivers/modules/ilo/deploy.py:142 #, python-format msgid "Secure boot mode is not supported for node %s" msgstr "" #: ironic/drivers/modules/ilo/inspect.py:59 #, python-format msgid "Port already exists for MAC address %(address)s for node %(node)s" msgstr "" #: ironic/drivers/modules/ilo/management.py:101 #, python-format msgid "" "'%(step)s' clean step is not supported on node %(uuid)s. Skipping the " "clean step." msgstr "" #: ironic/drivers/modules/oneview/common.py:138 #, python-format msgid "" "Using 'server_profile_template_uri' in driver_info is now deprecated and " "will be ignored in future releases. Node %s should have it in its " "properties/capabilities instead." 
msgstr "" ironic-5.1.0/ironic/locale/pt_BR/0000775000567000056710000000000012674513633017703 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/locale/pt_BR/LC_MESSAGES/0000775000567000056710000000000012674513633021470 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/locale/pt_BR/LC_MESSAGES/ironic-log-critical.po0000664000567000056710000000153712674513466025674 0ustar jenkinsjenkins00000000000000# Translations template for ironic. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the ironic project. # # Translators: # Lucas Alvares Gomes , 2015 # OpenStack Infra , 2015. #zanata msgid "" msgstr "" "Project-Id-Version: ironic 4.3.1.dev202\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" "POT-Creation-Date: 2016-01-27 02:09+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2015-03-30 09:01+0000\n" "Last-Translator: Lucas Alvares Gomes \n" "Language: pt-BR\n" "Plural-Forms: nplurals=2; plural=(n > 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.7.3\n" "Language-Team: Portuguese (Brazil)\n" msgid "Failed to start keepalive" msgstr "Falha ao inicar o keep alive" ironic-5.1.0/ironic/drivers/0000775000567000056710000000000012674513633017114 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/raid_config_schema.json0000664000567000056710000001002012674513466023570 0ustar jenkinsjenkins00000000000000{ "title": "raid configuration json schema", "type": "object", "properties": { "logical_disks": { "type": "array", "items": { "type": "object", "properties": { "raid_level": { "type": "string", "enum": [ "0", "1", "2", "5", "6", "1+0", "5+0", "6+0" ], "description": "RAID level for the logical disk. Valid values are '0', '1', '2', '5', '6', '1+0', '5+0' and '6+0'. Required." }, "size_gb": { "anyOf": [{ "type": "integer", "minimum": 0, "exclusiveMinimum": true }, { "type": "string", "enum": [ "MAX" ] }], "description": "Size in GiB (Integer) for the logical disk. Use 'MAX' as size_gb if this logical disk is supposed to use the rest of the space available. Required." }, "volume_name": { "type": "string", "description": "Name of the volume to be created. If this is not specified, it will be auto-generated. Optional." }, "is_root_volume": { "type": "boolean", "description": "Specifies whether this disk is a root volume. By default, this is False. Optional." }, "share_physical_disks": { "type": "boolean", "description": "Specifies whether other logical disks can share physical disks with this logical disk. By default, this is False. Optional." }, "disk_type": { "type": "string", "enum": [ "hdd", "ssd" ], "description": "The type of disk preferred. Valid values are 'hdd' and 'ssd'. If this is not specified, disk type will not be a selection criterion for choosing backing physical disks. Optional." }, "interface_type": { "type": "string", "enum": [ "sata", "scsi", "sas" ], "description": "The interface type of disk. Valid values are 'sata', 'scsi' and 'sas'. If this is not specified, interface type will not be a selection criterion for choosing backing physical disks. Optional." }, "number_of_physical_disks": { "type": "integer", "minimum": 0, "exclusiveMinimum": true, "description": "Number of physical disks to use for this logical disk. By default, the driver uses the minimum number of disks required for that RAID level. Optional." }, "controller": { "type": "string", "description": "Controller to use for this logical disk. 
If not specified, the driver will choose a suitable RAID controller on the bare metal node. Optional." }, "physical_disks": { "type": "array", "items": { "type": "string" }, "description": "The physical disks to use for this logical disk. If not specified, the driver will choose suitable physical disks to use. Optional." } }, "required": ["raid_level", "size_gb"], "additionalProperties": false, "dependencies": { "physical_disks": ["controller"] } }, "minItems": 1 } }, "required": ["logical_disks"], "additionalProperties": false } ironic-5.1.0/ironic/drivers/base.py0000664000567000056710000012560412674513466020414 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Abstract base classes for drivers. """ import abc import collections import copy import inspect import json import os from futurist import periodics from oslo_config import cfg from oslo_log import log as logging from oslo_utils import excutils import six from ironic.common import exception from ironic.common.i18n import _, _LE, _LW from ironic.common import raid LOG = logging.getLogger(__name__) RAID_CONFIG_SCHEMA = os.path.join(os.path.dirname(__file__), 'raid_config_schema.json') CONF = cfg.CONF @six.add_metaclass(abc.ABCMeta) class BaseDriver(object): """Base class for all drivers. Defines the `core`, `standardized`, and `vendor-specific` interfaces for drivers. Any loadable driver must implement all `core` interfaces. Actual implementation may instantiate one or more classes, as long as the interfaces are appropriate. """ core_interfaces = [] standard_interfaces = [] power = None core_interfaces.append('power') """`Core` attribute for managing power state. A reference to an instance of :class:PowerInterface. """ deploy = None core_interfaces.append('deploy') """`Core` attribute for managing deployments. A reference to an instance of :class:DeployInterface. """ console = None standard_interfaces.append('console') """`Standard` attribute for managing console access. A reference to an instance of :class:ConsoleInterface. May be None, if unsupported by a driver. """ rescue = None # NOTE(deva): hide rescue from the interface list in Icehouse # because the API for this has not been created yet. # standard_interfaces.append('rescue') """`Standard` attribute for accessing rescue features. A reference to an instance of :class:RescueInterface. May be None, if unsupported by a driver. """ management = None """`Standard` attribute for management related features. A reference to an instance of :class:ManagementInterface. May be None, if unsupported by a driver. """ standard_interfaces.append('management') boot = None """`Standard` attribute for boot related features. A reference to an instance of :class:BootInterface. May be None, if unsupported by a driver. """ standard_interfaces.append('boot') vendor = None """Attribute for accessing any vendor-specific extensions. A reference to an instance of :class:VendorInterface. 
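Agent-based drivers, for example, expose their 'lookup' and 'heartbeat' passthru methods through this interface.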
May be None, if the driver does not implement any vendor extensions. """ inspect = None """`Standard` attribute for inspection related features. A reference to an instance of :class:InspectInterface. May be None, if unsupported by a driver. """ standard_interfaces.append('inspect') raid = None """`Standard` attribute for RAID related features. A reference to an instance of :class:RaidInterface. May be None, if unsupported by a driver. """ standard_interfaces.append('raid') @abc.abstractmethod def __init__(self): pass @property def all_interfaces(self): return self.core_interfaces + self.standard_interfaces + ['vendor'] @property def non_vendor_interfaces(self): return self.core_interfaces + self.standard_interfaces def get_properties(self): """Get the properties of the driver. :returns: dictionary of : entries. """ properties = {} for iface_name in self.all_interfaces: iface = getattr(self, iface_name, None) if iface: properties.update(iface.get_properties()) return properties class BareDriver(BaseDriver): """A bare driver object which will have interfaces attached later. Any composable interfaces should be added as class attributes of this class, as well as appended to core_interfaces or standard_interfaces here. """ def __init__(self): pass class BaseInterface(object): """A base interface implementing common functions for Driver Interfaces.""" interface_type = 'base' def __new__(cls, *args, **kwargs): # Get the list of clean steps when the interface is initialized by # the conductor. We use __new__ instead of __init___ # to avoid breaking backwards compatibility with all the drivers. # We want to return all steps, regardless of priority. super_new = super(BaseInterface, cls).__new__ if super_new is object.__new__: instance = super_new(cls) else: instance = super_new(cls, *args, **kwargs) instance.clean_steps = [] for n, method in inspect.getmembers(instance, inspect.ismethod): if getattr(method, '_is_clean_step', False): # Create a CleanStep to represent this method step = {'step': method.__name__, 'priority': method._clean_step_priority, 'abortable': method._clean_step_abortable, 'argsinfo': method._clean_step_argsinfo, 'interface': instance.interface_type} instance.clean_steps.append(step) LOG.debug('Found clean steps %(steps)s for interface %(interface)s', {'steps': instance.clean_steps, 'interface': instance.interface_type}) return instance def get_clean_steps(self, task): """Get a list of (enabled and disabled) clean steps for the interface. This function will return all clean steps (both enabled and disabled) for the interface, in an unordered list. :param task: A TaskManager object, useful for interfaces overriding this function :raises NodeCleaningFailure: if there is a problem getting the steps from the driver. For example, when a node (using an agent driver) has just been enrolled and the agent isn't alive yet to be queried for the available clean steps. :returns: A list of clean step dictionaries """ return self.clean_steps def execute_clean_step(self, task, step): """Execute the clean step on task.node. A clean step must take a single positional argument: a TaskManager object. It may take one or more keyword variable arguments (for use with manual cleaning only.) A step can be executed synchronously or asynchronously. A step should return None if the method has completed synchronously or states.CLEANWAIT if the step will continue to execute asynchronously. 
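As an illustrative sketch only (the step name and body here are hypothetical), an interface would declare such a step with the ``clean_step`` decorator, which sets the ``_clean_step_*`` metadata that ``__new__`` above collects::

            @clean_step(priority=10, abortable=True)
            def erase_example_settings(self, task):
                # Synchronous step: do the work, then return None.
                pass
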
If the step executes asynchronously, it should issue a call to the 'continue_node_clean' RPC, so the conductor can begin the next clean step. :param task: A TaskManager object :param step: The clean step dictionary representing the step to execute :returns: None if this method has completed synchronously, or states.CLEANWAIT if the step will continue to execute asynchronously. """ args = step.get('args') if args is not None: return getattr(self, step['step'])(task, **args) else: return getattr(self, step['step'])(task) @six.add_metaclass(abc.ABCMeta) class DeployInterface(BaseInterface): """Interface for deploy-related actions.""" interface_type = 'deploy' @abc.abstractmethod def get_properties(self): """Return the properties of the interface. :returns: dictionary of : entries. """ @abc.abstractmethod def validate(self, task): """Validate the driver-specific Node deployment info. This method validates whether the 'driver_info' property of the task's node contains the required information for this driver to deploy images to the node. If invalid, raises an exception; otherwise returns None. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue :raises: MissingParameterValue """ @abc.abstractmethod def deploy(self, task): """Perform a deployment to the task's node. Perform the necessary work to deploy an image onto the specified node. This method will be called after prepare(), which may have already performed any preparatory steps, such as pre-caching some data for the node. :param task: a TaskManager instance containing the node to act on. :returns: status of the deploy. One of ironic.common.states. """ @abc.abstractmethod def tear_down(self, task): """Tear down a previous deployment on the task's node. Given a node that has been previously deployed to, do all cleanup and tear down necessary to "un-deploy" that node. :param task: a TaskManager instance containing the node to act on. :returns: status of the deploy. One of ironic.common.states. """ @abc.abstractmethod def prepare(self, task): """Prepare the deployment environment for the task's node. If preparation of the deployment environment ahead of time is possible, this method should be implemented by the driver. If implemented, this method must be idempotent. It may be called multiple times for the same node on the same conductor, and it may be called by multiple conductors in parallel. Therefore, it must not require an exclusive lock. This method is called before `deploy`. :param task: a TaskManager instance containing the node to act on. """ @abc.abstractmethod def clean_up(self, task): """Clean up the deployment environment for the task's node. If preparation of the deployment environment ahead of time is possible, this method should be implemented by the driver. It should erase anything cached by the `prepare` method. If implemented, this method must be idempotent. It may be called multiple times for the same node on the same conductor, and it may be called by multiple conductors in parallel. Therefore, it must not require an exclusive lock. This method is called before `tear_down`. :param task: a TaskManager instance containing the node to act on. """ @abc.abstractmethod def take_over(self, task): """Take over management of this task's node from a dead conductor. 
If conductors' hosts maintain a static relationship to nodes, this method should be implemented by the driver to allow conductors to perform the necessary work during the remapping of nodes to conductors when a conductor joins or leaves the cluster. For example, the PXE driver has an external dependency: Neutron must forward DHCP BOOT requests to a conductor which has prepared the tftpboot environment for the given node. When a conductor goes offline, another conductor must change this setting in Neutron as part of remapping that node's control to itself. This is performed within the `takeover` method. :param task: a TaskManager instance containing the node to act on. """ def prepare_cleaning(self, task): """Prepare the node for cleaning tasks. For example, nodes that use the Ironic Python Agent will need to boot the ramdisk in order to do in-band cleaning tasks. If the function is asynchronous, the driver will need to handle settings node.driver_internal_info['clean_steps'] and node.clean_step, as they would be set in ironic.conductor.manager._do_node_clean, but cannot be set when this is asynchronous. After, the interface should make an RPC call to continue_node_cleaning to start cleaning. NOTE(JoshNang) this should be moved to BootInterface when it gets implemented. :param task: a TaskManager instance containing the node to act on. :returns: If this function is going to be asynchronous, should return `states.CLEANWAIT`. Otherwise, should return `None`. The interface will need to call _get_cleaning_steps and then RPC to continue_node_cleaning """ pass def tear_down_cleaning(self, task): """Tear down after cleaning is completed. Given that cleaning is complete, do all cleanup and tear down necessary to allow the node to be deployed to again. NOTE(JoshNang) this should be moved to BootInterface when it gets implemented. :param task: a TaskManager instance containing the node to act on. """ pass @six.add_metaclass(abc.ABCMeta) class BootInterface(object): """Interface for boot-related actions.""" @abc.abstractmethod def get_properties(self): """Return the properties of the interface. :returns: dictionary of : entries. """ @abc.abstractmethod def validate(self, task): """Validate the driver-specific info for booting. This method validates the driver-specific info for booting the ramdisk and instance on the node. If invalid, raises an exception; otherwise returns None. :param task: a task from TaskManager. :returns: None :raises: InvalidParameterValue :raises: MissingParameterValue """ @abc.abstractmethod def prepare_ramdisk(self, task, ramdisk_params): """Prepares the boot of Ironic ramdisk. This method prepares the boot of the deploy ramdisk after reading relevant information from the node's database. :param task: a task from TaskManager. :param ramdisk_params: the options to be passed to the ironic ramdisk. Different implementations might want to boot the ramdisk in different ways by passing parameters to them. For example, - When DIB ramdisk is booted to deploy a node, it takes the parameters iscsi_target_iqn, deployment_id, ironic_api_url, etc. - When Agent ramdisk is booted to deploy a node, it takes the parameters ipa-driver-name, ipa-api-url, root_device, etc. Other implementations can make use of ramdisk_params to pass such information. Different implementations of boot interface will have different ways of passing parameters to the ramdisk. :returns: None """ @abc.abstractmethod def clean_up_ramdisk(self, task): """Cleans up the boot of ironic ramdisk. 
This method cleans up the environment that was set up for booting the deploy ramdisk. :param task: a task from TaskManager. :returns: None """ @abc.abstractmethod def prepare_instance(self, task): """Prepares the boot of the instance. This method prepares the boot of the instance after reading relevant information from the node's database. :param task: a task from TaskManager. :returns: None """ @abc.abstractmethod def clean_up_instance(self, task): """Cleans up the boot of the instance. This method cleans up the environment that was set up for booting the instance. :param task: a task from TaskManager. :returns: None """ @six.add_metaclass(abc.ABCMeta) class PowerInterface(BaseInterface): """Interface for power-related actions.""" interface_type = 'power' @abc.abstractmethod def get_properties(self): """Return the properties of the interface. :returns: dictionary of <property name>:<property description> entries. """ @abc.abstractmethod def validate(self, task): """Validate the driver-specific Node power info. This method validates whether the 'driver_info' property of the supplied node contains the required information for this driver to manage the power state of the node. If invalid, raises an exception; otherwise, returns None. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue :raises: MissingParameterValue """ @abc.abstractmethod def get_power_state(self, task): """Return the power state of the task's node. :param task: a TaskManager instance containing the node to act on. :raises: MissingParameterValue if a required parameter is missing. :returns: a power state. One of :mod:`ironic.common.states`. """ @abc.abstractmethod def set_power_state(self, task, power_state): """Set the power state of the task's node. :param task: a TaskManager instance containing the node to act on. :param power_state: Any power state from :mod:`ironic.common.states`. :raises: MissingParameterValue if a required parameter is missing. """ @abc.abstractmethod def reboot(self, task): """Perform a hard reboot of the task's node. Drivers are expected to properly handle the case when the node is powered off by powering it on. :param task: a TaskManager instance containing the node to act on. :raises: MissingParameterValue if a required parameter is missing. """ @six.add_metaclass(abc.ABCMeta) class ConsoleInterface(object): """Interface for console-related actions.""" @abc.abstractmethod def get_properties(self): """Return the properties of the interface. :returns: dictionary of <property name>:<property description> entries. """ @abc.abstractmethod def validate(self, task): """Validate the driver-specific Node console info. This method validates whether the 'driver_info' property of the supplied node contains the required information for this driver to provide console access to the Node. If invalid, raises an exception; otherwise returns None. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue :raises: MissingParameterValue """ @abc.abstractmethod def start_console(self, task): """Start a remote console for the task's node. :param task: a TaskManager instance containing the node to act on. """ @abc.abstractmethod def stop_console(self, task): """Stop the remote console session for the task's node. :param task: a TaskManager instance containing the node to act on. """ @abc.abstractmethod def get_console(self, task): """Get connection information about the console. This method should return the necessary information for the client to access the console.
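For example, a shellinabox-based implementation might return something like this (illustrative values only)::

    {'type': 'shellinabox', 'url': 'http://192.168.0.10:8023'}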
:param task: a TaskManager instance containing the node to act on. :returns: the console connection information. """ @six.add_metaclass(abc.ABCMeta) class RescueInterface(object): """Interface for rescue-related actions.""" @abc.abstractmethod def get_properties(self): """Return the properties of the interface. :returns: dictionary of <property name>:<property description> entries. """ @abc.abstractmethod def validate(self, task): """Validate the rescue info stored in the node's properties. If invalid, raises an exception; otherwise returns None. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue :raises: MissingParameterValue """ @abc.abstractmethod def rescue(self, task): """Boot the task's node into a rescue environment. :param task: a TaskManager instance containing the node to act on. """ @abc.abstractmethod def unrescue(self, task): """Tear down the rescue environment, and return to normal. :param task: a TaskManager instance containing the node to act on. """ # Representation of a single vendor method metadata VendorMetadata = collections.namedtuple('VendorMetadata', ['method', 'metadata']) def _passthru(http_methods, method=None, async=True, driver_passthru=False, description=None, attach=False): """A decorator for registering a function as a passthru function. The decorator ensures the function is ready to catch any ironic exceptions and reraise them after logging the issue. It also catches non-ironic exceptions, reraising them as a VendorPassthruException after writing a log. Logs need to be added because even though the exception is being reraised, it won't be handled if it is an asynchronous call. :param http_methods: A list of the HTTP methods supported by the vendor function. :param method: an arbitrary string describing the action to be taken. :param async: Boolean value. If True invoke the passthru function asynchronously; if False, synchronously. If a passthru function touches the BMC, we strongly recommend that it run asynchronously. Defaults to True. :param driver_passthru: Boolean value. True if this is a driver vendor passthru method, and False if it is a node vendor passthru method. :param attach: Boolean value. True if the return value should be attached to the response object, and False if the return value should be returned in the response body. Defaults to False. :param description: a string briefly describing what the method does.
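For example, a node vendor passthru method would typically be registered through the public ``passthru`` decorator defined below (the ``do_thing`` method shown here is hypothetical)::

    @passthru(['POST'], description='Does a thing on the node')
    def do_thing(self, task, **kwargs):
        # kwargs carry the fields from the REST request body
        pass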
""" def handle_passthru(func): api_method = method if api_method is None: api_method = func.__name__ supported_ = [i.upper() for i in http_methods] description_ = description or '' metadata = VendorMetadata(api_method, {'http_methods': supported_, 'async': async, 'description': description_, 'attach': attach}) if driver_passthru: func._driver_metadata = metadata else: func._vendor_metadata = metadata passthru_logmessage = _LE('vendor_passthru failed with method %s') @six.wraps(func) def passthru_handler(*args, **kwargs): try: return func(*args, **kwargs) except exception.IronicException as e: with excutils.save_and_reraise_exception(): LOG.exception(passthru_logmessage, api_method) except Exception as e: # catch-all in case something bubbles up here LOG.exception(passthru_logmessage, api_method) raise exception.VendorPassthruException(message=e) return passthru_handler return handle_passthru def passthru(http_methods, method=None, async=True, description=None, attach=False): return _passthru(http_methods, method, async, driver_passthru=False, description=description, attach=attach) def driver_passthru(http_methods, method=None, async=True, description=None, attach=False): return _passthru(http_methods, method, async, driver_passthru=True, description=description, attach=attach) @six.add_metaclass(abc.ABCMeta) class VendorInterface(object): """Interface for all vendor passthru functionality. Additional vendor- or driver-specific capabilities should be implemented as a method in the class inheriting from this class and use the @passthru or @driver_passthru decorators. Methods decorated with @driver_passthru should be short-lived because it is a blocking call. """ def __new__(cls, *args, **kwargs): super_new = super(VendorInterface, cls).__new__ if super_new is object.__new__: inst = super_new(cls) else: inst = super_new(cls, *args, **kwargs) inst.vendor_routes = {} inst.driver_routes = {} for name, ref in inspect.getmembers(inst, predicate=inspect.ismethod): vmeta = getattr(ref, '_vendor_metadata', None) dmeta = getattr(ref, '_driver_metadata', None) if vmeta is not None: metadata = copy.deepcopy(vmeta.metadata) metadata['func'] = ref inst.vendor_routes.update({vmeta.method: metadata}) if dmeta is not None: metadata = copy.deepcopy(dmeta.metadata) metadata['func'] = ref inst.driver_routes.update({dmeta.method: metadata}) return inst @abc.abstractmethod def get_properties(self): """Return the properties of the interface. :returns: dictionary of : entries. """ @abc.abstractmethod def validate(self, task, method=None, **kwargs): """Validate vendor-specific actions. If invalid, raises an exception; otherwise returns None. :param task: a task from TaskManager. :param method: method to be validated :param kwargs: info for action. :raises: UnsupportedDriverExtension if 'method' can not be mapped to the supported interfaces. :raises: InvalidParameterValue if kwargs does not contain 'method'. :raises: MissingParameterValue """ def driver_validate(self, method, **kwargs): """Validate driver-vendor-passthru actions. If invalid, raises an exception; otherwise returns None. :param method: method to be validated :param kwargs: info for action. :raises: MissingParameterValue if kwargs does not contain certain parameter. :raises: InvalidParameterValue if parameter does not match. 
""" pass @six.add_metaclass(abc.ABCMeta) class ManagementInterface(BaseInterface): """Interface for management related actions.""" interface_type = 'management' @abc.abstractmethod def get_properties(self): """Return the properties of the interface. :returns: dictionary of : entries. """ @abc.abstractmethod def validate(self, task): """Validate the driver-specific management information. If invalid, raises an exception; otherwise returns None. :param task: a task from TaskManager. :raises: InvalidParameterValue :raises: MissingParameterValue """ @abc.abstractmethod def get_supported_boot_devices(self, task): """Get a list of the supported boot devices. :param task: a task from TaskManager. :returns: A list with the supported boot devices defined in :mod:`ironic.common.boot_devices`. """ @abc.abstractmethod def set_boot_device(self, task, device, persistent=False): """Set the boot device for a node. Set the boot device to use on next reboot of the node. :param task: a task from TaskManager. :param device: the boot device, one of :mod:`ironic.common.boot_devices`. :param persistent: Boolean value. True if the boot device will persist to all future boots, False if not. Default: False. :raises: InvalidParameterValue if an invalid boot device is specified. :raises: MissingParameterValue if a required parameter is missing """ @abc.abstractmethod def get_boot_device(self, task): """Get the current boot device for a node. Provides the current boot device of the node. Be aware that not all drivers support this. :param task: a task from TaskManager. :raises: MissingParameterValue if a required parameter is missing :returns: a dictionary containing: :boot_device: the boot device, one of :mod:`ironic.common.boot_devices` or None if it is unknown. :persistent: Whether the boot device will persist to all future boots or not, None if it is unknown. """ @abc.abstractmethod def get_sensors_data(self, task): """Get sensors data method. :param task: a TaskManager instance. :raises: FailedToGetSensorData when getting the sensor data fails. :raises: FailedToParseSensorData when parsing sensor data fails. :returns: returns a consistent format dict of sensor data grouped by sensor type, which can be processed by Ceilometer. eg, :: { 'Sensor Type 1': { 'Sensor ID 1': { 'Sensor Reading': 'current value', 'key1': 'value1', 'key2': 'value2' }, 'Sensor ID 2': { 'Sensor Reading': 'current value', 'key1': 'value1', 'key2': 'value2' } }, 'Sensor Type 2': { 'Sensor ID 3': { 'Sensor Reading': 'current value', 'key1': 'value1', 'key2': 'value2' }, 'Sensor ID 4': { 'Sensor Reading': 'current value', 'key1': 'value1', 'key2': 'value2' } } } """ @six.add_metaclass(abc.ABCMeta) class InspectInterface(object): """Interface for inspection-related actions.""" ESSENTIAL_PROPERTIES = {'memory_mb', 'local_gb', 'cpus', 'cpu_arch'} """The properties required by scheduler/deploy.""" @abc.abstractmethod def get_properties(self): """Return the properties of the interface. :returns: dictionary of : entries. """ @abc.abstractmethod def validate(self, task): """Validate the driver-specific inspection information. If invalid, raises an exception; otherwise returns None. :param task: a task from TaskManager. :raises: InvalidParameterValue :raises: MissingParameterValue """ @abc.abstractmethod def inspect_hardware(self, task): """Inspect hardware. Inspect hardware to obtain the essential & additional hardware properties. :param task: a task from TaskManager. :raises: HardwareInspectionFailure, if unable to get essential hardware properties. 
:returns: resulting state of the inspection, i.e., states.MANAGEABLE or None. """ class RAIDInterface(BaseInterface): interface_type = 'raid' def __init__(self): """Constructor for RAIDInterface class.""" with open(RAID_CONFIG_SCHEMA, 'r') as raid_schema_fobj: self.raid_schema = json.load(raid_schema_fobj) @abc.abstractmethod def get_properties(self): """Return the properties of the interface. :returns: dictionary of <property name>:<property description> entries. """ def validate(self, task): """Validates the RAID Interface. This method validates the properties defined by Ironic for RAID configuration. Driver implementations of this interface can override this method to perform additional validations (such as checking the BMC's credentials). :param task: a TaskManager instance. :raises: InvalidParameterValue, if the RAID configuration is invalid. :raises: MissingParameterValue, if some parameters are missing. """ target_raid_config = task.node.target_raid_config if not target_raid_config: return self.validate_raid_config(task, target_raid_config) def validate_raid_config(self, task, raid_config): """Validates the given RAID configuration. This method validates the given RAID configuration. Driver implementations of this interface can override this method to support custom parameters for RAID configuration. :param task: a TaskManager instance. :param raid_config: The RAID configuration to validate. :raises: InvalidParameterValue, if the RAID configuration is invalid. """ raid.validate_configuration(raid_config, self.raid_schema) @abc.abstractmethod def create_configuration(self, task, create_root_volume=True, create_nonroot_volumes=True): """Creates RAID configuration on the given node. This method creates a RAID configuration on the given node. It assumes that the target RAID configuration is already available in node.target_raid_config. Implementations of this interface are supposed to read the RAID configuration from node.target_raid_config. After the RAID configuration is done (either in this method OR in a call-back method), ironic.common.raid.update_raid_info() may be called to sync the node's RAID-related information with the RAID configuration applied on the node. :param task: a TaskManager instance. :param create_root_volume: Setting this to False indicates not to create the root volume that is specified in the node's target_raid_config. Default value is True. :param create_nonroot_volumes: Setting this to False indicates not to create non-root volumes (all except the root volume) in the node's target_raid_config. Default value is True. :returns: states.CLEANWAIT if RAID configuration is in progress asynchronously or None if it is complete. """ @abc.abstractmethod def delete_configuration(self, task): """Deletes RAID configuration on the given node. This method deletes the RAID configuration on the given node. After the RAID configuration is deleted, node.raid_config should be cleared by the implementation. :param task: a TaskManager instance. :returns: states.CLEANWAIT if deletion is in progress asynchronously or None if it is complete. """ def get_logical_disk_properties(self): """Get the properties that can be specified for logical disks. This method returns a dictionary containing the properties that can be specified for logical disks and a textual description for them. :returns: A dictionary containing properties that can be specified for logical disks and a textual description for them. """ return raid.get_logical_disk_properties(self.raid_schema) def _validate_argsinfo(argsinfo): """Validate args info.
This method validates args info, checking that the values are of the expected data types and that required values are specified. :param argsinfo: a dictionary of keyword arguments where key is the name of the argument and value is a dictionary as follows:: 'description': <description>. Required. This should include possible values. 'required': Boolean. Optional; default is False. True if this argument is required. If so, it must be specified in the clean request; False if it is optional. :raises InvalidParameterValue if any of the arguments are invalid """ if not argsinfo: return if not isinstance(argsinfo, dict): raise exception.InvalidParameterValue( _('"argsinfo" must be a dictionary instead of "%s"') % argsinfo) for (arg, info) in argsinfo.items(): if not isinstance(info, dict): raise exception.InvalidParameterValue( _('Argument "%(arg)s" must be a dictionary instead of ' '"%(val)s".') % {'arg': arg, 'val': info}) has_description = False for (key, value) in info.items(): if key == 'description': if not isinstance(value, six.string_types): raise exception.InvalidParameterValue( _('For argument "%(arg)s", "description" must be a ' 'string value instead of "%(value)s".') % {'arg': arg, 'value': value}) has_description = True elif key == 'required': if not isinstance(value, bool): raise exception.InvalidParameterValue( _('For argument "%(arg)s", "required" must be a ' 'Boolean value instead of "%(value)s".') % {'arg': arg, 'value': value}) else: raise exception.InvalidParameterValue( _('Argument "%(arg)s" has an invalid key named "%(key)s". ' 'It must be "description" or "required".') % {'key': key, 'arg': arg}) if not has_description: raise exception.InvalidParameterValue( _('Argument "%(arg)s" is missing a "description".') % {'arg': arg}) def clean_step(priority, abortable=False, argsinfo=None): """Decorator for cleaning steps. Cleaning steps may be used in manual or automated cleaning. For automated cleaning, only steps with priorities greater than 0 are used. These steps are ordered by priority from highest value to lowest value. Steps with the same priority are ordered by driver interface priority (see conductor.manager.CLEANING_INTERFACE_PRIORITY). execute_clean_step() will be called on each step. For manual cleaning, the clean steps will be executed in a similar fashion to automated cleaning, but the steps and order of execution must be explicitly specified by the user when invoking the cleaning API. Decorated clean steps must take, as their only positional argument, a TaskManager object. Clean steps used in manual cleaning may also take keyword variable arguments (as described in argsinfo). Clean steps can be either synchronous or asynchronous. If the step is synchronous, it should return `None` when finished, and the conductor will continue on to the next step. While the clean step is executing, the node will be in the `states.CLEANING` provision state. If the step is asynchronous, the step should return `states.CLEANWAIT` to the conductor before it starts the asynchronous work. When the step is complete, the step should make an RPC call to `continue_node_clean` to move to the next step in cleaning. The node will be in the `states.CLEANWAIT` provision state during the asynchronous work.
Examples:: class MyInterface(base.BaseInterface): # CONF.example_cleaning_priority should be an int CONF option @base.clean_step(priority=CONF.example_cleaning_priority) def example_cleaning(self, task): # do some cleaning @base.clean_step(priority=0, abortable=True, argsinfo= {'size': {'description': 'size of widget (MB)', 'required': True}}) def advanced_clean(self, task, **kwargs): # do some advanced cleaning :param priority: an integer priority, should be a CONF option :param abortable: Boolean value. Whether the clean step is abortable or not; defaults to False. :param argsinfo: a dictionary of keyword arguments where key is the name of the argument and value is a dictionary as follows:: 'description': <description>. Required. This should include possible values. 'required': Boolean. Optional; default is False. True if this argument is required. If so, it must be specified in the clean request; False if it is optional. :raises InvalidParameterValue if any of the arguments are invalid """ def decorator(func): func._is_clean_step = True if isinstance(priority, int): func._clean_step_priority = priority else: raise exception.InvalidParameterValue( _('"priority" must be an integer value instead of "%s"') % priority) if isinstance(abortable, bool): func._clean_step_abortable = abortable else: raise exception.InvalidParameterValue( _('"abortable" must be a Boolean value instead of "%s"') % abortable) _validate_argsinfo(argsinfo) func._clean_step_argsinfo = argsinfo return func return decorator def driver_periodic_task(**kwargs): """Decorator for a driver-specific periodic task. Deprecated, please use futurist directly. Example:: from futurist import periodics class MyDriver(base.BaseDriver): @periodics.periodic(spacing=42) def task(self, manager, context): # do some job :param kwargs: arguments to pass to @periodics.periodic """ LOG.warning(_LW('driver_periodic_task decorator is deprecated, please ' 'use futurist.periodics.periodic directly')) # Previously we accepted more arguments, make a backward compatibility # layer for out-of-tree drivers. new_kwargs = {} for arg in ('spacing', 'enabled', 'run_immediately'): try: new_kwargs[arg] = kwargs.pop(arg) except KeyError: pass # NOTE(jroll) this is here to avoid a circular import when a module # imports ironic.common.service. Normally I would balk at this, but this # option is deprecated for removal and this code only runs at startup. CONF.import_opt('periodic_interval', 'ironic.common.service') new_kwargs.setdefault('spacing', CONF.periodic_interval) if kwargs: LOG.warning(_LW('The following arguments are not supported by ' 'futurist.periodics.periodic and are ignored: %s'), ', '.join(kwargs)) return periodics.periodic(**new_kwargs) ironic-5.1.0/ironic/drivers/oneview.py0000664000567000056710000000713712674513466021156 0ustar jenkinsjenkins00000000000000# # Copyright 2015 Hewlett Packard Development Company, LP # Copyright 2015 Universidade Federal de Campina Grande # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ OneView Driver and supporting meta-classes.
""" from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.drivers import base from ironic.drivers.modules import agent from ironic.drivers.modules import iscsi_deploy from ironic.drivers.modules.oneview import common from ironic.drivers.modules.oneview import management from ironic.drivers.modules.oneview import power from ironic.drivers.modules.oneview import vendor from ironic.drivers.modules import pxe class AgentPXEOneViewDriver(base.BaseDriver): """Agent + OneView driver. This driver implements the `core` functionality, combining :class:`ironic.drivers.ov.OVPower` for power on/off and reboot of virtual machines, with :class:`ironic.driver.pxe.PXEBoot` for booting deploy kernel and ramdisk and :class:`ironic.driver.iscsi_deploy.ISCSIDeploy` for image deployment. Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): if not importutils.try_import('oneview_client.client'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import python-oneviewclient library")) # Checks connectivity to OneView and version compatibility on driver # initialization oneview_client = common.get_oneview_client() oneview_client.verify_oneview_version() oneview_client.verify_credentials() self.power = power.OneViewPower() self.management = management.OneViewManagement() self.boot = pxe.PXEBoot() self.deploy = agent.AgentDeploy() self.vendor = vendor.AgentVendorInterface() class ISCSIPXEOneViewDriver(base.BaseDriver): """PXE + OneView driver. This driver implements the `core` functionality, combining :class:`ironic.drivers.ov.OVPower` for power on/off and reboot of virtual machines, with :class:`ironic.driver.pxe.PXEBoot` for booting deploy kernel and ramdisk and :class:`ironic.driver.iscsi_deploy.ISCSIDeploy` for image deployment. Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): if not importutils.try_import('oneview_client.client'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import python-oneviewclient library")) # Checks connectivity to OneView and version compatibility on driver # initialization oneview_client = common.get_oneview_client() oneview_client.verify_oneview_version() oneview_client.verify_credentials() self.power = power.OneViewPower() self.management = management.OneViewManagement() self.boot = pxe.PXEBoot() self.deploy = iscsi_deploy.ISCSIDeploy() self.vendor = iscsi_deploy.VendorPassthru() ironic-5.1.0/ironic/drivers/agent.py0000664000567000056710000002411412674513466020572 0ustar jenkinsjenkins00000000000000# Copyright 2014 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.drivers import base from ironic.drivers.modules import agent from ironic.drivers.modules.amt import management as amt_management from ironic.drivers.modules.amt import power as amt_power from ironic.drivers.modules.cimc import management as cimc_mgmt from ironic.drivers.modules.cimc import power as cimc_power from ironic.drivers.modules import iboot from ironic.drivers.modules import inspector from ironic.drivers.modules import ipminative from ironic.drivers.modules import ipmitool from ironic.drivers.modules import pxe from ironic.drivers.modules import ssh from ironic.drivers.modules.ucs import management as ucs_mgmt from ironic.drivers.modules.ucs import power as ucs_power from ironic.drivers.modules import virtualbox from ironic.drivers.modules import wol from ironic.drivers import utils class AgentAndIPMIToolDriver(base.BaseDriver): """Agent + IPMITool driver. This driver implements the `core` functionality, combining :class:`ironic.drivers.modules.ipmitool.IPMIPower` (for power on/off and reboot) with :class:`ironic.drivers.modules.agent.AgentDeploy` (for image deployment). Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): self.power = ipmitool.IPMIPower() self.boot = pxe.PXEBoot() self.deploy = agent.AgentDeploy() self.management = ipmitool.IPMIManagement() self.console = ipmitool.IPMIShellinaboxConsole() self.agent_vendor = agent.AgentVendorInterface() self.ipmi_vendor = ipmitool.VendorPassthru() self.mapping = {'send_raw': self.ipmi_vendor, 'bmc_reset': self.ipmi_vendor, 'heartbeat': self.agent_vendor} self.driver_passthru_mapping = {'lookup': self.agent_vendor} self.vendor = utils.MixinVendorInterface( self.mapping, driver_passthru_mapping=self.driver_passthru_mapping) self.raid = agent.AgentRAID() self.inspect = inspector.Inspector.create_if_enabled( 'AgentAndIPMIToolDriver') class AgentAndIPMINativeDriver(base.BaseDriver): """Agent + IPMINative driver. This driver implements the `core` functionality, combining :class:`ironic.drivers.modules.ipminative.NativeIPMIPower` (for power on/off and reboot) with :class:`ironic.drivers.modules.agent.AgentDeploy` (for image deployment). Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): self.power = ipminative.NativeIPMIPower() self.boot = pxe.PXEBoot() self.deploy = agent.AgentDeploy() self.management = ipminative.NativeIPMIManagement() self.console = ipminative.NativeIPMIShellinaboxConsole() self.agent_vendor = agent.AgentVendorInterface() self.ipminative_vendor = ipminative.VendorPassthru() self.mapping = { 'send_raw': self.ipminative_vendor, 'bmc_reset': self.ipminative_vendor, 'heartbeat': self.agent_vendor, } self.driver_passthru_mapping = {'lookup': self.agent_vendor} self.vendor = utils.MixinVendorInterface(self.mapping, self.driver_passthru_mapping) self.raid = agent.AgentRAID() self.inspect = inspector.Inspector.create_if_enabled( 'AgentAndIPMINativeDriver') class AgentAndSSHDriver(base.BaseDriver): """Agent + SSH driver. NOTE: This driver is meant only for testing environments. This driver implements the `core` functionality, combining :class:`ironic.drivers.modules.ssh.SSH` (for power on/off and reboot of virtual machines tunneled over SSH), with :class:`ironic.drivers.modules.agent.AgentDeploy` (for image deployment). 
Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): self.power = ssh.SSHPower() self.boot = pxe.PXEBoot() self.deploy = agent.AgentDeploy() self.management = ssh.SSHManagement() self.vendor = agent.AgentVendorInterface() self.raid = agent.AgentRAID() self.inspect = inspector.Inspector.create_if_enabled( 'AgentAndSSHDriver') self.console = ssh.ShellinaboxConsole() class AgentAndVirtualBoxDriver(base.BaseDriver): """Agent + VirtualBox driver. NOTE: This driver is meant only for testing environments. This driver implements the `core` functionality, combining :class:`ironic.drivers.modules.virtualbox.VirtualBoxPower` (for power on/off and reboot of VirtualBox virtual machines), with :class:`ironic.drivers.modules.agent.AgentDeploy` (for image deployment). Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): if not importutils.try_import('pyremotevbox'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import pyremotevbox library")) self.power = virtualbox.VirtualBoxPower() self.boot = pxe.PXEBoot() self.deploy = agent.AgentDeploy() self.management = virtualbox.VirtualBoxManagement() self.vendor = agent.AgentVendorInterface() self.raid = agent.AgentRAID() class AgentAndAMTDriver(base.BaseDriver): """Agent + AMT driver. This driver implements the `core` functionality, combining :class:`ironic.drivers.modules.amt.power.AMTPower` for power on/off and reboot with :class:`ironic.drivers.modules.agent.AgentDeploy` for image deployment. Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): if not importutils.try_import('pywsman'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import pywsman library")) self.power = amt_power.AMTPower() self.boot = pxe.PXEBoot() self.deploy = agent.AgentDeploy() self.management = amt_management.AMTManagement() self.vendor = agent.AgentVendorInterface() class AgentAndUcsDriver(base.BaseDriver): """Agent + Cisco UCSM driver. This driver implements the `core` functionality, combining :class:`ironic.drivers.modules.ucs.power.Power` for power on/off and reboot with :class:`ironic.drivers.modules.agent.AgentDeploy` for image deployment. Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): if not importutils.try_import('UcsSdk'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import UcsSdk library")) self.power = ucs_power.Power() self.boot = pxe.PXEBoot() self.deploy = agent.AgentDeploy() self.management = ucs_mgmt.UcsManagement() self.vendor = agent.AgentVendorInterface() self.inspect = inspector.Inspector.create_if_enabled( 'AgentAndUcsDriver') class AgentAndCIMCDriver(base.BaseDriver): """Agent + Cisco CIMC driver. This driver implements the `core` functionality, combining :class:`ironic.drivers.modules.cimc.power.Power` for power on/off and reboot with :class:`ironic.drivers.modules.agent.AgentDeploy` for image deployment. Implementations are in those respective classes; this class is merely the glue between them.
""" def __init__(self): if not importutils.try_import('ImcSdk'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import ImcSdk library")) self.power = cimc_power.Power() self.boot = pxe.PXEBoot() self.deploy = agent.AgentDeploy() self.management = cimc_mgmt.CIMCManagement() self.vendor = agent.AgentVendorInterface() self.inspect = inspector.Inspector.create_if_enabled( 'AgentAndCIMCDriver') class AgentAndWakeOnLanDriver(base.BaseDriver): """Agent + WakeOnLan driver. This driver implements the `core` functionality, combining :class:`ironic.drivers.modules.wol.WakeOnLanPower` for power on with :class:'ironic.driver.modules.agent.AgentDeploy' (for image deployment.) Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): self.power = wol.WakeOnLanPower() self.boot = pxe.PXEBoot() self.deploy = agent.AgentDeploy() self.vendor = agent.AgentVendorInterface() class AgentAndIBootDriver(base.BaseDriver): """Agent + IBoot PDU driver. This driver implements the `core` functionality, combining :class:`ironic.drivers.modules.iboot.IBootPower` for power on/off and reboot with :class:'ironic.driver.modules.agent.AgentDeploy' (for image deployment.) Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): if not importutils.try_import('iboot'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import iboot library")) self.power = iboot.IBootPower() self.boot = pxe.PXEBoot() self.deploy = agent.AgentDeploy() self.vendor = agent.AgentVendorInterface() ironic-5.1.0/ironic/drivers/drac.py0000664000567000056710000000466312674513466020414 0ustar jenkinsjenkins00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ DRAC Driver for remote system management using Dell Remote Access Card. 
""" from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.drivers import base from ironic.drivers.modules.drac import management from ironic.drivers.modules.drac import power from ironic.drivers.modules.drac import vendor_passthru from ironic.drivers.modules import inspector from ironic.drivers.modules import iscsi_deploy from ironic.drivers.modules import pxe from ironic.drivers import utils class PXEDracDriver(base.BaseDriver): """Drac driver using PXE for deploy.""" def __init__(self): if not importutils.try_import('dracclient'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_('Unable to import python-dracclient library')) self.power = power.DracPower() self.boot = pxe.PXEBoot() self.deploy = iscsi_deploy.ISCSIDeploy() self.management = management.DracManagement() self.iscsi_vendor = iscsi_deploy.VendorPassthru() self.drac_vendor = vendor_passthru.DracVendorPassthru() self.mapping = {'pass_deploy_info': self.iscsi_vendor, 'heartbeat': self.iscsi_vendor, 'pass_bootloader_install_info': self.iscsi_vendor, 'get_bios_config': self.drac_vendor, 'set_bios_config': self.drac_vendor, 'commit_bios_config': self.drac_vendor, 'abandon_bios_config': self.drac_vendor, } self.driver_passthru_mapping = {'lookup': self.iscsi_vendor} self.vendor = utils.MixinVendorInterface(self.mapping, self.driver_passthru_mapping) self.inspect = inspector.Inspector.create_if_enabled( 'PXEDracDriver') ironic-5.1.0/ironic/drivers/pxe.py0000664000567000056710000004045212674513466020273 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ PXE Driver and supporting meta-classes. 
""" from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.drivers import base from ironic.drivers.modules import agent from ironic.drivers.modules.amt import management as amt_management from ironic.drivers.modules.amt import power as amt_power from ironic.drivers.modules.amt import vendor as amt_vendor from ironic.drivers.modules.cimc import management as cimc_mgmt from ironic.drivers.modules.cimc import power as cimc_power from ironic.drivers.modules import iboot from ironic.drivers.modules.ilo import console as ilo_console from ironic.drivers.modules.ilo import deploy as ilo_deploy from ironic.drivers.modules.ilo import inspect as ilo_inspect from ironic.drivers.modules.ilo import management as ilo_management from ironic.drivers.modules.ilo import power as ilo_power from ironic.drivers.modules.ilo import vendor as ilo_vendor from ironic.drivers.modules import inspector from ironic.drivers.modules import ipminative from ironic.drivers.modules import ipmitool from ironic.drivers.modules.irmc import inspect as irmc_inspect from ironic.drivers.modules.irmc import management as irmc_management from ironic.drivers.modules.irmc import power as irmc_power from ironic.drivers.modules import iscsi_deploy from ironic.drivers.modules.msftocs import management as msftocs_management from ironic.drivers.modules.msftocs import power as msftocs_power from ironic.drivers.modules import pxe from ironic.drivers.modules import seamicro from ironic.drivers.modules import snmp from ironic.drivers.modules import ssh from ironic.drivers.modules.ucs import management as ucs_mgmt from ironic.drivers.modules.ucs import power as ucs_power from ironic.drivers.modules import virtualbox from ironic.drivers.modules import wol from ironic.drivers import utils class PXEAndIPMIToolDriver(base.BaseDriver): """PXE + IPMITool driver. This driver implements the `core` functionality, combining :class:`ironic.drivers.modules.ipmi.IPMI` for power on/off and reboot with :class:`ironic.drivers.modules.iscsi_deploy.ISCSIDeploy` for image deployment. Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): self.power = ipmitool.IPMIPower() self.console = ipmitool.IPMIShellinaboxConsole() self.boot = pxe.PXEBoot() self.deploy = iscsi_deploy.ISCSIDeploy() self.management = ipmitool.IPMIManagement() self.inspect = inspector.Inspector.create_if_enabled( 'PXEAndIPMIToolDriver') self.iscsi_vendor = iscsi_deploy.VendorPassthru() self.ipmi_vendor = ipmitool.VendorPassthru() self.mapping = {'send_raw': self.ipmi_vendor, 'bmc_reset': self.ipmi_vendor, 'heartbeat': self.iscsi_vendor, 'pass_deploy_info': self.iscsi_vendor, 'pass_bootloader_install_info': self.iscsi_vendor} self.driver_passthru_mapping = {'lookup': self.iscsi_vendor} self.vendor = utils.MixinVendorInterface( self.mapping, driver_passthru_mapping=self.driver_passthru_mapping) self.raid = agent.AgentRAID() class PXEAndSSHDriver(base.BaseDriver): """PXE + SSH driver. NOTE: This driver is meant only for testing environments. This driver implements the `core` functionality, combining :class:`ironic.drivers.modules.ssh.SSH` for power on/off and reboot of virtual machines tunneled over SSH, with :class:`ironic.drivers.modules.iscsi_deploy.ISCSIDeploy` for image deployment. Implementations are in those respective classes; this class is merely the glue between them. 
""" def __init__(self): self.power = ssh.SSHPower() self.boot = pxe.PXEBoot() self.deploy = iscsi_deploy.ISCSIDeploy() self.management = ssh.SSHManagement() self.vendor = iscsi_deploy.VendorPassthru() self.inspect = inspector.Inspector.create_if_enabled( 'PXEAndSSHDriver') self.raid = agent.AgentRAID() self.console = ssh.ShellinaboxConsole() class PXEAndIPMINativeDriver(base.BaseDriver): """PXE + Native IPMI driver. This driver implements the `core` functionality, combining :class:`ironic.drivers.modules.ipminative.NativeIPMIPower` for power on/off and reboot with :class:`ironic.drivers.modules.iscsi_deploy.ISCSIDeploy` for image deployment. Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): if not importutils.try_import('pyghmi'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import pyghmi library")) self.power = ipminative.NativeIPMIPower() self.console = ipminative.NativeIPMIShellinaboxConsole() self.boot = pxe.PXEBoot() self.deploy = iscsi_deploy.ISCSIDeploy() self.management = ipminative.NativeIPMIManagement() self.iscsi_vendor = iscsi_deploy.VendorPassthru() self.ipminative_vendor = ipminative.VendorPassthru() self.mapping = { 'send_raw': self.ipminative_vendor, 'bmc_reset': self.ipminative_vendor, 'heartbeat': self.iscsi_vendor, 'pass_bootloader_install_info': self.iscsi_vendor, 'pass_deploy_info': self.iscsi_vendor, } self.driver_passthru_mapping = {'lookup': self.iscsi_vendor} self.vendor = utils.MixinVendorInterface(self.mapping, self.driver_passthru_mapping) self.inspect = inspector.Inspector.create_if_enabled( 'PXEAndIPMINativeDriver') self.raid = agent.AgentRAID() class PXEAndSeaMicroDriver(base.BaseDriver): """PXE + SeaMicro driver. This driver implements the `core` functionality, combining :class:`ironic.drivers.modules.seamicro.Power` for power on/off and reboot with :class:`ironic.drivers.modules.iscsi_deploy.ISCSIDeploy` for image deployment. Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): if not importutils.try_import('seamicroclient'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import seamicroclient library")) self.power = seamicro.Power() self.boot = pxe.PXEBoot() self.deploy = iscsi_deploy.ISCSIDeploy() self.management = seamicro.Management() self.seamicro_vendor = seamicro.VendorPassthru() self.pxe_vendor = iscsi_deploy.VendorPassthru() self.mapping = {'pass_deploy_info': self.pxe_vendor, 'attach_volume': self.seamicro_vendor, 'set_node_vlan_id': self.seamicro_vendor} self.vendor = utils.MixinVendorInterface(self.mapping) self.console = seamicro.ShellinaboxConsole() class PXEAndIBootDriver(base.BaseDriver): """PXE + IBoot PDU driver. This driver implements the `core` functionality, combining :class:`ironic.drivers.modules.iboot.IBootPower` for power on/off and reboot with :class:`ironic.drivers.modules.iscsi_deploy.ISCSIDeploy` for image deployment. Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): if not importutils.try_import('iboot'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import iboot library")) self.power = iboot.IBootPower() self.boot = pxe.PXEBoot() self.deploy = iscsi_deploy.ISCSIDeploy() self.vendor = iscsi_deploy.VendorPassthru() class PXEAndIloDriver(base.BaseDriver): """PXE + Ilo Driver using IloClient interface. 
This driver implements the `core` functionality using :class:`ironic.drivers.modules.ilo.power.IloPower` for power management and :class:`ironic.drivers.modules.ilo.deploy.IloPXEDeploy` for image deployment. """ def __init__(self): if not importutils.try_import('proliantutils'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import proliantutils library")) self.power = ilo_power.IloPower() self.boot = pxe.PXEBoot() self.deploy = ilo_deploy.IloPXEDeploy() self.vendor = ilo_vendor.VendorPassthru() self.console = ilo_console.IloConsoleInterface() self.management = ilo_management.IloManagement() self.inspect = ilo_inspect.IloInspect() self.raid = agent.AgentRAID() class PXEAndSNMPDriver(base.BaseDriver): """PXE + SNMP driver. This driver implements the 'core' functionality, combining :class:`ironic.drivers.modules.snmp.SNMPPower` for power on/off and reboot with :class:`ironic.drivers.modules.iscsi_deploy.ISCSIDeploy` for image deployment. Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): # Driver has a runtime dependency on PySNMP, abort load if it is absent if not importutils.try_import('pysnmp'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import pysnmp library")) self.power = snmp.SNMPPower() self.boot = pxe.PXEBoot() self.deploy = iscsi_deploy.ISCSIDeploy() self.vendor = iscsi_deploy.VendorPassthru() # PDUs have no boot device management capability. # Only PXE as a boot device is supported. self.management = None class PXEAndIRMCDriver(base.BaseDriver): """PXE + iRMC driver using SCCI. This driver implements the `core` functionality using :class:`ironic.drivers.modules.irmc.power.IRMCPower` for power management and :class:`ironic.drivers.modules.iscsi_deploy.ISCSIDeploy` for image deployment. """ def __init__(self): if not importutils.try_import('scciclient'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import python-scciclient library")) self.power = irmc_power.IRMCPower() self.console = ipmitool.IPMIShellinaboxConsole() self.boot = pxe.PXEBoot() self.deploy = iscsi_deploy.ISCSIDeploy() self.management = irmc_management.IRMCManagement() self.vendor = iscsi_deploy.VendorPassthru() self.inspect = irmc_inspect.IRMCInspect() class PXEAndVirtualBoxDriver(base.BaseDriver): """PXE + VirtualBox driver. NOTE: This driver is meant only for testing environments. This driver implements the `core` functionality, combining :class:`ironic.drivers.modules.virtualbox.VirtualBoxPower` for power on/off and reboot of VirtualBox virtual machines, with :class:`ironic.drivers.modules.iscsi_deploy.ISCSIDeploy` for image deployment. Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): if not importutils.try_import('pyremotevbox'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import pyremotevbox library")) self.power = virtualbox.VirtualBoxPower() self.boot = pxe.PXEBoot() self.deploy = iscsi_deploy.ISCSIDeploy() self.management = virtualbox.VirtualBoxManagement() self.vendor = iscsi_deploy.VendorPassthru() self.raid = agent.AgentRAID() class PXEAndAMTDriver(base.BaseDriver): """PXE + AMT driver. This driver implements the `core` functionality, combining :class:`ironic.drivers.modules.amt.power.AMTPower` for power on/off and reboot with :class:`ironic.drivers.modules.iscsi_deploy.ISCSIDeploy` for image deployment.
Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): if not importutils.try_import('pywsman'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import pywsman library")) self.power = amt_power.AMTPower() self.boot = pxe.PXEBoot() self.deploy = iscsi_deploy.ISCSIDeploy() self.management = amt_management.AMTManagement() self.vendor = amt_vendor.AMTPXEVendorPassthru() class PXEAndMSFTOCSDriver(base.BaseDriver): """PXE + MSFT OCS driver. This driver implements the `core` functionality, combining :class:`ironic.drivers.modules.msftocs.power.MSFTOCSPower` for power on/off and reboot with :class:`ironic.drivers.modules.iscsi_deploy.ISCSIDeploy` for image deployment. Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): self.power = msftocs_power.MSFTOCSPower() self.boot = pxe.PXEBoot() self.deploy = iscsi_deploy.ISCSIDeploy() self.management = msftocs_management.MSFTOCSManagement() self.vendor = iscsi_deploy.VendorPassthru() class PXEAndUcsDriver(base.BaseDriver): """PXE + Cisco UCSM driver. This driver implements the `core` functionality, combining :class:`ironic.drivers.modules.ucs.power.Power` for power on/off and reboot with :class:`ironic.drivers.modules.iscsi_deploy.ISCSIDeploy` for image deployment. Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): if not importutils.try_import('UcsSdk'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import UcsSdk library")) self.power = ucs_power.Power() self.boot = pxe.PXEBoot() self.deploy = iscsi_deploy.ISCSIDeploy() self.management = ucs_mgmt.UcsManagement() self.vendor = iscsi_deploy.VendorPassthru() self.inspect = inspector.Inspector.create_if_enabled( 'PXEAndUcsDriver') class PXEAndCIMCDriver(base.BaseDriver): """PXE + Cisco IMC driver. This driver implements the 'core' functionality, combining :class:`ironic.drivers.modules.cimc.power.Power` for power on/off and reboot with :class:`ironic.drivers.modules.pxe.PXEBoot` for booting the node and :class:`ironic.drivers.modules.iscsi_deploy.ISCSIDeploy` for image deployment. Implementations are in those respective classes; this class is merely the glue between them. """ def __init__(self): if not importutils.try_import('ImcSdk'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import ImcSdk library")) self.power = cimc_power.Power() self.boot = pxe.PXEBoot() self.deploy = iscsi_deploy.ISCSIDeploy() self.management = cimc_mgmt.CIMCManagement() self.vendor = iscsi_deploy.VendorPassthru() self.inspect = inspector.Inspector.create_if_enabled( 'PXEAndCIMCDriver') class PXEAndWakeOnLanDriver(base.BaseDriver): """PXE + WakeOnLan driver. This driver implements the `core` functionality, combining :class:`ironic.drivers.modules.wol.WakeOnLanPower` for power on with :class:`ironic.drivers.modules.iscsi_deploy.ISCSIDeploy` for image deployment. Implementations are in those respective classes; this class is merely the glue between them.
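For example, assuming the conventional ``pxe_wol`` entry point name, it can be enabled alongside other drivers::

    [DEFAULT]
    enabled_drivers = pxe_ipmitool,pxe_wol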
""" def __init__(self): self.power = wol.WakeOnLanPower() self.boot = pxe.PXEBoot() self.deploy = iscsi_deploy.ISCSIDeploy() self.vendor = iscsi_deploy.VendorPassthru() ironic-5.1.0/ironic/drivers/modules/0000775000567000056710000000000012674513633020564 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/modules/agent_config.template0000664000567000056710000000115212674513466024747 0ustar jenkinsjenkins00000000000000default deploy label deploy kernel {{ pxe_options.deployment_aki_path }} append initrd={{ pxe_options.deployment_ari_path }} text {{ pxe_options.pxe_append_params }} ipa-api-url={{ pxe_options['ipa-api-url'] }} ipa-driver-name={{ pxe_options['ipa-driver-name'] }}{% if pxe_options.root_device %} root_device={{ pxe_options.root_device }}{% endif %} coreos.configdrive=0 label boot_partition kernel {{ pxe_options.aki_path }} append initrd={{ pxe_options.ari_path }} root={{ ROOT }} ro text {{ pxe_options.pxe_append_params|default("", true) }} label boot_whole_disk COM32 chain.c32 append mbr:{{ DISK_IDENTIFIER }} ironic-5.1.0/ironic/drivers/modules/agent.py0000664000567000056710000007302712674513470022244 0ustar jenkinsjenkins00000000000000# Copyright 2014 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg from oslo_log import log from oslo_utils import excutils from oslo_utils import units import six.moves.urllib_parse as urlparse from ironic.common import dhcp_factory from ironic.common import exception from ironic.common.glance_service import service_utils from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LI from ironic.common.i18n import _LW from ironic.common import image_service from ironic.common import images from ironic.common import paths from ironic.common import raid from ironic.common import states from ironic.common import utils from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers import base from ironic.drivers.modules import agent_base_vendor from ironic.drivers.modules import deploy_utils agent_opts = [ cfg.StrOpt('agent_pxe_append_params', default='nofb nomodeset vga=normal', help=_('DEPRECATED. Additional append parameters for ' 'baremetal PXE boot. This option is deprecated and ' 'will be removed in Mitaka release. Please use ' '[pxe]pxe_append_params instead.')), cfg.StrOpt('agent_pxe_config_template', default=paths.basedir_def( 'drivers/modules/agent_config.template'), help=_('DEPRECATED. Template file for PXE configuration. ' 'This option is deprecated and will be removed ' 'in Mitaka release. Please use [pxe]pxe_config_template ' 'instead.')), cfg.BoolOpt('manage_agent_boot', default=True, deprecated_name='manage_tftp', help=_('Whether Ironic will manage booting of the agent ' 'ramdisk. 
If set to False, you will need to configure ' 'your mechanism to allow booting the agent ' 'ramdisk.')), cfg.IntOpt('memory_consumed_by_agent', default=0, help=_('The memory size in MiB consumed by the agent when it is ' 'booted on a bare metal node. This is used for ' 'checking if the image can be downloaded and deployed ' 'on the bare metal node after booting the agent ramdisk. ' 'This may be set according to the memory consumed by ' 'the agent ramdisk image.')), cfg.BoolOpt('stream_raw_images', default=True, help=_('Whether the agent ramdisk should stream raw images ' 'directly onto the disk or not. By streaming raw ' 'images directly onto the disk the agent ramdisk will ' 'not spend time copying the image to a tmpfs partition ' '(therefore consuming less memory) prior to writing it ' 'to the disk. Unless the disk where the image will be ' 'copied to is really slow, this option should be set ' 'to True. Defaults to True.')), ] CONF = cfg.CONF CONF.import_opt('my_ip', 'ironic.netconf') CONF.import_opt('erase_devices_priority', 'ironic.drivers.modules.deploy_utils', group='deploy') CONF.register_opts(agent_opts, group='agent') LOG = log.getLogger(__name__) REQUIRED_PROPERTIES = { 'deploy_kernel': _('UUID (from Glance) of the deployment kernel. ' 'Required.'), 'deploy_ramdisk': _('UUID (from Glance) of the ramdisk with agent that is ' 'used at deploy time. Required.'), } OPTIONAL_PROPERTIES = { 'image_http_proxy': _('URL of a proxy server for HTTP connections. ' 'Optional.'), 'image_https_proxy': _('URL of a proxy server for HTTPS connections. ' 'Optional.'), 'image_no_proxy': _('A comma-separated list of host names, IP addresses ' 'and domain names (with optional :port) that will be ' 'excluded from proxying. To denote a domain name, use ' 'a dot to prefix the domain name. This value will be ' 'ignored if ``image_http_proxy`` and ' '``image_https_proxy`` are not specified. Optional.'), } COMMON_PROPERTIES = REQUIRED_PROPERTIES.copy() COMMON_PROPERTIES.update(OPTIONAL_PROPERTIES) PARTITION_IMAGE_LABELS = ('kernel', 'ramdisk', 'root_gb', 'root_mb', 'swap_mb', 'ephemeral_mb', 'ephemeral_format', 'configdrive', 'preserve_ephemeral', 'image_type', 'deploy_boot_mode') def build_instance_info_for_deploy(task): """Build instance_info necessary for deploying to a node. :param task: a TaskManager object containing the node :returns: a dictionary containing the properties to be updated in instance_info :raises: exception.ImageRefValidationFailed if image_source is not a Glance href and is not an HTTP(S) URL.
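A sketch of some of the keys added for a Glance image (values illustrative)::

    {'image_url': '<Swift temp URL>',
     'image_checksum': '<checksum from Glance>',
     'image_disk_format': 'qcow2',
     'image_container_format': 'bare'}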
""" node = task.node instance_info = node.instance_info iwdi = node.driver_internal_info.get('is_whole_disk_image') image_source = instance_info['image_source'] if service_utils.is_glance_image(image_source): glance = image_service.GlanceImageService(version=2, context=task.context) image_info = glance.show(image_source) swift_temp_url = glance.swift_temp_url(image_info) LOG.debug('Got image info: %(info)s for node %(node)s.', {'info': image_info, 'node': node.uuid}) instance_info['image_url'] = swift_temp_url instance_info['image_checksum'] = image_info['checksum'] instance_info['image_disk_format'] = image_info['disk_format'] instance_info['image_container_format'] = ( image_info['container_format']) if not iwdi: instance_info['kernel'] = image_info['properties']['kernel_id'] instance_info['ramdisk'] = image_info['properties']['ramdisk_id'] else: try: image_service.HttpImageService().validate_href(image_source) except exception.ImageRefValidationFailed: with excutils.save_and_reraise_exception(): LOG.error(_LE("Agent deploy supports only HTTP(S) URLs as " "instance_info['image_source']. Either %s " "is not a valid HTTP(S) URL or " "is not reachable."), image_source) instance_info['image_url'] = image_source if not iwdi: instance_info['image_type'] = 'partition' i_info = deploy_utils.parse_instance_info(node) instance_info.update(i_info) else: instance_info['image_type'] = 'whole-disk-image' return instance_info def check_image_size(task, image_source): """Check if the requested image is larger than the ram size. :param task: a TaskManager instance containing the node to act on. :param image_source: href of the image. :raises: InvalidParameterValue if size of the image is greater than the available ram size. """ node = task.node properties = node.properties # skip check if 'memory_mb' is not defined if 'memory_mb' not in properties: LOG.warning(_LW('Skip the image size check as memory_mb is not ' 'defined in properties on node %s.'), node.uuid) return image_show = images.image_show(task.context, image_source) if CONF.agent.stream_raw_images and image_show.get('disk_format') == 'raw': LOG.debug('Skip the image size check since the image is going to be ' 'streamed directly onto the disk for node %s', node.uuid) return memory_size = int(properties.get('memory_mb')) image_size = int(image_show['size']) reserved_size = CONF.agent.memory_consumed_by_agent if (image_size + (reserved_size * units.Mi)) > (memory_size * units.Mi): msg = (_('Memory size is too small for requested image, if it is ' 'less than (image size + reserved RAM size), will break ' 'the IPA deployments. Image size: %(image_size)d MiB, ' 'Memory size: %(memory_size)d MiB, Reserved size: ' '%(reserved_size)d MiB.') % {'image_size': image_size / units.Mi, 'memory_size': memory_size, 'reserved_size': reserved_size}) raise exception.InvalidParameterValue(msg) def validate_image_proxies(node): """Check that the provided proxy parameters are valid. :param node: an Ironic node. :raises: InvalidParameterValue if any of the provided proxy parameters are incorrect. """ invalid_proxies = {} for scheme in ('http', 'https'): proxy_param = 'image_%s_proxy' % scheme proxy = node.driver_info.get(proxy_param) if proxy: chunks = urlparse.urlparse(proxy) # NOTE(vdrok) If no scheme specified, this is still a valid # proxy address. It is also possible for a proxy to have a # scheme different from the one specified in the image URL, # e.g. it is possible to use https:// proxy for downloading # http:// image. 
if chunks.scheme not in ('', 'http', 'https'): invalid_proxies[proxy_param] = proxy msg = '' if invalid_proxies: msg += _("Proxy URL should either have HTTP(S) scheme " "or no scheme at all, the following URLs are " "invalid: %s.") % invalid_proxies no_proxy = node.driver_info.get('image_no_proxy') if no_proxy is not None and not utils.is_valid_no_proxy(no_proxy): msg += _( "image_no_proxy should be a list of host names, IP addresses " "or domain names to exclude from proxying, the specified list " "%s is incorrect. To denote a domain name, prefix it with a dot " "(instead of e.g. '.*').") % no_proxy if msg: raise exception.InvalidParameterValue(msg) class AgentDeploy(base.DeployInterface): """Interface for deploy-related actions.""" def get_properties(self): """Return the properties of the interface. :returns: dictionary of : entries. """ return COMMON_PROPERTIES def validate(self, task): """Validate the driver-specific Node deployment info. This method validates whether the properties of the supplied node contain the required information for this driver to deploy images to the node. :param task: a TaskManager instance :raises: MissingParameterValue, if any of the required parameters are missing. :raises: InvalidParameterValue, if any of the parameters have invalid value. """ if CONF.agent.manage_agent_boot: task.driver.boot.validate(task) node = task.node params = {} image_source = node.instance_info.get('image_source') params['instance_info.image_source'] = image_source error_msg = _('Node %s failed to validate deploy image info. Some ' 'parameters were missing') % node.uuid deploy_utils.check_for_missing_params(params, error_msg) if not service_utils.is_glance_image(image_source): if not node.instance_info.get('image_checksum'): raise exception.MissingParameterValue(_( "image_source's image_checksum must be provided in " "instance_info for node %s") % node.uuid) check_image_size(task, image_source) # Validate the root device hints deploy_utils.parse_root_device_hints(node) # Validate node capabilities deploy_utils.validate_capabilities(node) validate_image_proxies(node) @task_manager.require_exclusive_lock def deploy(self, task): """Perform a deployment to a node. Perform the necessary work to deploy an image onto the specified node. This method will be called after prepare(), which may have already performed any preparatory steps, such as pre-caching some data for the node. :param task: a TaskManager instance. :returns: status of the deploy. One of ironic.common.states. """ manager_utils.node_power_action(task, states.REBOOT) return states.DEPLOYWAIT @task_manager.require_exclusive_lock def tear_down(self, task): """Tear down a previous deployment on the task's node. :param task: a TaskManager instance. :returns: status of the deploy. One of ironic.common.states. """ manager_utils.node_power_action(task, states.POWER_OFF) return states.DELETED @task_manager.require_exclusive_lock def prepare(self, task): """Prepare the deployment environment for this node. :param task: a TaskManager instance. """ # Nodes deployed by AgentDeploy always boot from disk now. So there # is nothing to be done in prepare() when it's called during # take over. 
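# NOTE(editor): descriptive comment -- the provision_state guard below
# means a conductor that merely takes over an ACTIVE node neither
# rebuilds instance_info nor re-prepares the deploy ramdisk; both happen
# only for a fresh deployment.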
node = task.node if node.provision_state != states.ACTIVE: node.instance_info = build_instance_info_for_deploy(task) node.save() if CONF.agent.manage_agent_boot: deploy_opts = deploy_utils.build_agent_options(node) task.driver.boot.prepare_ramdisk(task, deploy_opts) @task_manager.require_exclusive_lock def clean_up(self, task): """Clean up the deployment environment for this node. If preparation of the deployment environment ahead of time is possible, this method should be implemented by the driver. It should erase anything cached by the `prepare` method. If implemented, this method must be idempotent. It may be called multiple times for the same node on the same conductor, and it may be called by multiple conductors in parallel. Therefore, it must not require an exclusive lock. This method is called before `tear_down`. :param task: a TaskManager instance. """ if CONF.agent.manage_agent_boot: task.driver.boot.clean_up_ramdisk(task) provider = dhcp_factory.DHCPFactory() provider.clean_dhcp(task) def take_over(self, task): """Take over management of this node from a dead conductor. Since this deploy interface only does local boot, there's no need for this conductor to do anything when it takes over management of this node. :param task: a TaskManager instance. """ pass def get_clean_steps(self, task): """Get the list of clean steps from the agent. :param task: a TaskManager object containing the node :raises NodeCleaningFailure: if the clean steps are not yet available (cached), for example, when a node has just been enrolled and has not been cleaned yet. :returns: A list of clean step dictionaries """ new_priorities = { 'erase_devices': CONF.deploy.erase_devices_priority, } return deploy_utils.agent_get_clean_steps( task, interface='deploy', override_priorities=new_priorities) def execute_clean_step(self, task, step): """Execute a clean step asynchronously on the agent. :param task: a TaskManager object containing the node :param step: a clean step dictionary to execute :raises: NodeCleaningFailure if the agent does not return a command status :returns: states.CLEANWAIT to signify the step will be completed async """ return deploy_utils.agent_execute_clean_step(task, step) def prepare_cleaning(self, task): """Boot into the agent to prepare for cleaning. :param task: a TaskManager object containing the node :raises NodeCleaningFailure: if the previous cleaning ports cannot be removed or if new cleaning ports cannot be created :returns: states.CLEANWAIT to signify an asynchronous prepare """ return deploy_utils.prepare_inband_cleaning( task, manage_boot=CONF.agent.manage_agent_boot) def tear_down_cleaning(self, task): """Clean up the PXE and DHCP files after cleaning. 
:param task: a TaskManager object containing the node :raises NodeCleaningFailure: if the cleaning ports cannot be removed """ deploy_utils.tear_down_inband_cleaning( task, manage_boot=CONF.agent.manage_agent_boot) class AgentVendorInterface(agent_base_vendor.BaseAgentVendor): def deploy_has_started(self, task): commands = self._client.get_commands_status(task.node) for command in commands: if command['command_name'] == 'prepare_image': # deploy did start at some point return True return False def deploy_is_done(self, task): commands = self._client.get_commands_status(task.node) if not commands: return False last_command = commands[-1] if last_command['command_name'] != 'prepare_image': # catches race condition where prepare_image is still processing # so deploy hasn't started yet return False if last_command['command_status'] != 'RUNNING': return True return False @task_manager.require_exclusive_lock def continue_deploy(self, task, **kwargs): task.process_event('resume') node = task.node image_source = node.instance_info.get('image_source') LOG.debug('Continuing deploy for node %(node)s with image %(img)s', {'node': node.uuid, 'img': image_source}) image_info = { 'id': image_source.split('/')[-1], 'urls': [node.instance_info['image_url']], 'checksum': node.instance_info['image_checksum'], # NOTE(comstud): Older versions of ironic do not set # 'disk_format' nor 'container_format', so we use .get() # to maintain backwards compatibility in case code was # upgraded in the middle of a build request. 'disk_format': node.instance_info.get('image_disk_format'), 'container_format': node.instance_info.get( 'image_container_format'), 'stream_raw_images': CONF.agent.stream_raw_images, } proxies = {} for scheme in ('http', 'https'): proxy_param = 'image_%s_proxy' % scheme proxy = node.driver_info.get(proxy_param) if proxy: proxies[scheme] = proxy if proxies: image_info['proxies'] = proxies no_proxy = node.driver_info.get('image_no_proxy') if no_proxy is not None: image_info['no_proxy'] = no_proxy iwdi = node.driver_internal_info.get('is_whole_disk_image') if not iwdi: for label in PARTITION_IMAGE_LABELS: image_info[label] = node.instance_info.get(label) boot_option = deploy_utils.get_boot_option(node) boot_mode = deploy_utils.get_boot_mode_for_deploy(node) if boot_mode: image_info['deploy_boot_mode'] = boot_mode else: image_info['deploy_boot_mode'] = 'bios' image_info['boot_option'] = boot_option disk_label = deploy_utils.get_disk_label(node) if disk_label is not None: image_info['disk_label'] = disk_label image_info['node_uuid'] = node.uuid # Tell the client to download and write the image with the given args self._client.prepare_image(node, image_info) task.process_event('wait') def _get_uuid_from_result(self, task, type_uuid): command = self._client.get_commands_status(task.node)[-1] if command['command_result'] is not None: words = command['command_result']['result'].split() for word in words: if type_uuid in word: result = word.split('=')[1] if not result: msg = (_('Command result did not return %(type_uuid)s ' 'for node %(node)s. 
The version of the IPA ' 'ramdisk used in the deployment might not ' 'have support for provisioning of ' 'partition images.') % {'type_uuid': type_uuid, 'node': task.node.uuid}) LOG.error(msg) deploy_utils.set_failed_state(task, msg) return return result def check_deploy_success(self, node): # should only ever be called after we've validated that # the prepare_image command is complete command = self._client.get_commands_status(node)[-1] if command['command_status'] == 'FAILED': return command['command_error'] def reboot_to_instance(self, task, **kwargs): task.process_event('resume') node = task.node iwdi = task.node.driver_internal_info.get('is_whole_disk_image') error = self.check_deploy_success(node) if error is not None: # TODO(jimrollenhagen) power off if using neutron dhcp to # align with pxe driver? msg = (_('node %(node)s command status errored: %(error)s') % {'node': node.uuid, 'error': error}) LOG.error(msg) deploy_utils.set_failed_state(task, msg) return if not iwdi: root_uuid = self._get_uuid_from_result(task, 'root_uuid') if deploy_utils.get_boot_mode_for_deploy(node) == 'uefi': efi_sys_uuid = ( self._get_uuid_from_result(task, 'efi_system_partition_uuid')) else: efi_sys_uuid = None task.node.driver_internal_info['root_uuid_or_disk_id'] = root_uuid task.node.save() self.prepare_instance_to_boot(task, root_uuid, efi_sys_uuid) LOG.info(_LI('Image successfully written to node %s'), node.uuid) LOG.debug('Rebooting node %s to instance', node.uuid) if iwdi: manager_utils.node_set_boot_device(task, 'disk', persistent=True) self.reboot_and_finish_deploy(task) # NOTE(TheJulia): If we deployed a whole disk image, we # should expect a whole disk image and clean-up the tftp files # on-disk incase the node is disregarding the boot preference. # TODO(rameshg87): Not all in-tree drivers using reboot_to_instance # have a boot interface. So include a check for now. Remove this # check once all in-tree drivers have a boot interface. if task.driver.boot and iwdi: task.driver.boot.clean_up_ramdisk(task) class AgentRAID(base.RAIDInterface): """Implementation of RAIDInterface which uses agent ramdisk.""" def get_properties(self): """Return the properties of the interface.""" return {} @base.clean_step(priority=0) def create_configuration(self, task, create_root_volume=True, create_nonroot_volumes=True): """Create a RAID configuration on a bare metal using agent ramdisk. This method creates a RAID configuration on the given node. :param task: a TaskManager instance. :param create_root_volume: If True, a root volume is created during RAID configuration. Otherwise, no root volume is created. Default is True. :param create_nonroot_volumes: If True, non-root volumes are created. If False, no non-root volumes are created. Default is True. :returns: states.CLEANWAIT if operation was successfully invoked. :raises: MissingParameterValue, if node.target_raid_config is missing or was found to be empty after skipping root volume and/or non-root volumes. 
""" node = task.node LOG.debug("Agent RAID create_configuration invoked for node %(node)s " "with create_root_volume=%(create_root_volume)s and " "create_nonroot_volumes=%(create_nonroot_volumes)s with the " "following target_raid_config: %(target_raid_config)s.", {'node': node.uuid, 'create_root_volume': create_root_volume, 'create_nonroot_volumes': create_nonroot_volumes, 'target_raid_config': node.target_raid_config}) if not node.target_raid_config: raise exception.MissingParameterValue( _("Node %s has no target RAID configuration.") % node.uuid) target_raid_config = node.target_raid_config.copy() error_msg_list = [] if not create_root_volume: target_raid_config['logical_disks'] = [ x for x in target_raid_config['logical_disks'] if not x.get('is_root_volume')] error_msg_list.append(_("skipping root volume")) if not create_nonroot_volumes: error_msg_list.append(_("skipping non-root volumes")) target_raid_config['logical_disks'] = [ x for x in target_raid_config['logical_disks'] if x.get('is_root_volume')] if not target_raid_config['logical_disks']: error_msg = _(' and ').join(error_msg_list) raise exception.MissingParameterValue( _("Node %(node)s has empty target RAID configuration " "after %(msg)s.") % {'node': node.uuid, 'msg': error_msg}) # Rewrite it back to the node object, but no need to save it as # we need to just send this to the agent ramdisk. node.driver_internal_info['target_raid_config'] = target_raid_config LOG.debug("Calling agent RAID create_configuration for node %(node)s " "with the following target RAID configuration: %(target)s", {'node': node.uuid, 'target': target_raid_config}) step = node.clean_step return deploy_utils.agent_execute_clean_step(task, step) @staticmethod @agent_base_vendor.post_clean_step_hook( interface='raid', step='create_configuration') def _create_configuration_final(task, command): """Clean step hook after a RAID configuration was created. This method is invoked as a post clean step hook by the Ironic conductor once a create raid configuration is completed successfully. The node (properties, capabilities, RAID information) will be updated to reflect the actual RAID configuration that was created. :param task: a TaskManager instance. :param command: A command result structure of the RAID operation returned from agent ramdisk on query of the status of command(s). :raises: InvalidParameterValue, if 'current_raid_config' has more than one root volume or if node.properties['capabilities'] is malformed. :raises: IronicException, if clean_result couldn't be found within the 'command' argument passed. """ try: clean_result = command['command_result']['clean_result'] except KeyError: raise exception.IronicException( _("Agent ramdisk didn't return a proper command result while " "cleaning %(node)s. It returned '%(result)s' after command " "execution.") % {'node': task.node.uuid, 'result': command}) raid.update_raid_info(task.node, clean_result) @base.clean_step(priority=0) def delete_configuration(self, task): """Deletes RAID configuration on the given node. :param task: a TaskManager instance. :returns: states.CLEANWAIT if operation was successfully invoked """ LOG.debug("Agent RAID delete_configuration invoked for node %s.", task.node.uuid) step = task.node.clean_step return deploy_utils.agent_execute_clean_step(task, step) @staticmethod @agent_base_vendor.post_clean_step_hook( interface='raid', step='delete_configuration') def _delete_configuration_final(task, command): """Clean step hook after RAID configuration was deleted. 
This method is invoked as a post clean step hook by the Ironic conductor once a delete raid configuration is completed successfully. It sets node.raid_config to empty dictionary. :param task: a TaskManager instance. :param command: A command result structure of the RAID operation returned from agent ramdisk on query of the status of command(s). :returns: None """ task.node.raid_config = {} task.node.save() ironic-5.1.0/ironic/drivers/modules/agent_client.py0000664000567000056710000001621712674513470023600 0ustar jenkinsjenkins00000000000000# Copyright 2014 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg from oslo_log import log from oslo_serialization import jsonutils import requests from ironic.common import exception from ironic.common.i18n import _ agent_opts = [ cfg.StrOpt('agent_api_version', default='v1', help=_('API version to use for communicating with the ramdisk ' 'agent.')) ] CONF = cfg.CONF CONF.register_opts(agent_opts, group='agent') LOG = log.getLogger(__name__) class AgentClient(object): """Client for interacting with nodes via a REST API.""" def __init__(self): self.session = requests.Session() self.session.headers.update({'Content-Type': 'application/json'}) def _get_command_url(self, node): agent_url = node.driver_internal_info.get('agent_url') if not agent_url: # (lintan) Keep backwards compatible with booted nodes before this # change. Remove this after Kilo. agent_url = node.driver_info.get('agent_url') if not agent_url: raise exception.IronicException(_('Agent driver requires ' 'agent_url in ' 'driver_internal_info')) return ('%(agent_url)s/%(api_version)s/commands' % {'agent_url': agent_url, 'api_version': CONF.agent.agent_api_version}) def _get_command_body(self, method, params): return jsonutils.dumps({ 'name': method, 'params': params, }) def _command(self, node, method, params, wait=False): url = self._get_command_url(node) body = self._get_command_body(method, params) request_params = { 'wait': str(wait).lower() } LOG.debug('Executing agent command %(method)s for node %(node)s', {'node': node.uuid, 'method': method}) try: response = self.session.post(url, params=request_params, data=body) except requests.RequestException as e: msg = (_('Error invoking agent command %(method)s for node ' '%(node)s. 
Error: %(error)s') % {'method': method, 'node': node.uuid, 'error': e}) LOG.error(msg) raise exception.IronicException(msg) # TODO(russellhaering): real error handling try: result = response.json() except ValueError: msg = _( 'Unable to decode response as JSON.\n' 'Request URL: %(url)s\nRequest body: "%(body)s"\n' 'Response status code: %(code)s\n' 'Response: "%(response)s"' ) % ({'response': response.text, 'body': body, 'url': url, 'code': response.status_code}) LOG.error(msg) raise exception.IronicException(msg) LOG.debug('Agent command %(method)s for node %(node)s returned ' 'result %(res)s, error %(error)s, HTTP status code %(code)d', {'node': node.uuid, 'method': method, 'res': result.get('command_result'), 'error': result.get('command_error'), 'code': response.status_code}) return result def get_commands_status(self, node): url = self._get_command_url(node) LOG.debug('Fetching status of agent commands for node %s', node.uuid) resp = self.session.get(url) result = resp.json()['commands'] status = '; '.join('%(cmd)s: result "%(res)s", error "%(err)s"' % {'cmd': r.get('command_name'), 'res': r.get('command_result'), 'err': r.get('command_error')} for r in result) LOG.debug('Status of agent commands for node %(node)s: %(status)s', {'node': node.uuid, 'status': status}) return result def prepare_image(self, node, image_info, wait=False): """Call the `prepare_image` method on the node.""" LOG.debug('Preparing image %(image)s on node %(node)s.', {'image': image_info.get('id'), 'node': node.uuid}) params = {'image_info': image_info} # this should be an http(s) URL configdrive = node.instance_info.get('configdrive') if configdrive is not None: params['configdrive'] = configdrive return self._command(node=node, method='standby.prepare_image', params=params, wait=wait) def start_iscsi_target(self, node, iqn): """Expose the node's disk as an ISCSI target.""" params = {'iqn': iqn} return self._command(node=node, method='iscsi.start_iscsi_target', params=params, wait=True) def install_bootloader(self, node, root_uuid, efi_system_part_uuid=None): """Install a boot loader on the image.""" params = {'root_uuid': root_uuid, 'efi_system_part_uuid': efi_system_part_uuid} return self._command(node=node, method='image.install_bootloader', params=params, wait=True) def get_clean_steps(self, node, ports): params = { 'node': node.as_dict(), 'ports': [port.as_dict() for port in ports] } return self._command(node=node, method='clean.get_clean_steps', params=params, wait=True) def execute_clean_step(self, step, node, ports): params = { 'step': step, 'node': node.as_dict(), 'ports': [port.as_dict() for port in ports], 'clean_version': node.driver_internal_info.get( 'hardware_manager_version') } return self._command(node=node, method='clean.execute_clean_step', params=params) def power_off(self, node): """Soft powers off the bare metal node by shutting down ramdisk OS.""" return self._command(node=node, method='standby.power_off', params={}) def sync(self, node): """Flush file system buffers forcing changed blocks to disk.""" return self._command(node=node, method='standby.sync', params={}, wait=True) ironic-5.1.0/ironic/drivers/modules/ssh.py0000664000567000056710000010435612674513466021750 0ustar jenkinsjenkins00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Ironic SSH power manager. Provides basic power control of virtual machines via SSH. For use in dev and test environments. Currently supported environments are: Virtual Box (vbox) Virsh (virsh) VMware (vmware) Parallels (parallels) XenServer (xenserver) """ import os from oslo_concurrency import processutils from oslo_config import cfg from oslo_log import log as logging from oslo_utils import excutils import retrying from ironic.common import boot_devices from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LW from ironic.common import states from ironic.common import utils from ironic.conductor import task_manager from ironic.drivers import base from ironic.drivers.modules import console_utils from ironic.drivers import utils as driver_utils libvirt_opts = [ cfg.StrOpt('libvirt_uri', default='qemu:///system', help=_('libvirt URI.')), cfg.IntOpt('get_vm_name_attempts', default=3, help=_("Number of attempts to try to get VM name used by the " "host that corresponds to a node's MAC address.")), cfg.IntOpt('get_vm_name_retry_interval', default=3, help=_("Number of seconds to wait between attempts to get " "VM name used by the host that corresponds to a " "node's MAC address.")), ] CONF = cfg.CONF CONF.register_opts(libvirt_opts, group='ssh') LOG = logging.getLogger(__name__) REQUIRED_PROPERTIES = { 'ssh_address': _("IP address or hostname of the node to ssh into. " "Required."), 'ssh_username': _("username to authenticate as. Required."), 'ssh_virt_type': _("virtualization software to use; one of vbox, virsh, " "vmware, parallels, xenserver. Required.") } OTHER_PROPERTIES = { 'ssh_key_contents': _("private key(s). One of this, ssh_key_filename, " "or ssh_password must be specified."), 'ssh_key_filename': _("(list of) filename(s) of optional private key(s) " "for authentication. One of this, ssh_key_contents, " "or ssh_password must be specified."), 'ssh_password': _("password to use for authentication or for unlocking a " "private key. One of this, ssh_key_contents, or " "ssh_key_filename must be specified."), 'ssh_port': _("port on the node to connect to; default is 22. Optional.") } COMMON_PROPERTIES = REQUIRED_PROPERTIES.copy() COMMON_PROPERTIES.update(OTHER_PROPERTIES) CONSOLE_PROPERTIES = { 'ssh_terminal_port': _("node's UDP port to connect to. Only required for " "console access and only applicable for 'virsh'.") } # NOTE(dguerri) Generic boot device map. Virtualisation types that don't define # a more specific one, will use this. # This is left for compatibility with other modules and is still valid for # virsh and vmware. 
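# NOTE(editor): usage sketch (illustrative only) -- callers translate
# Ironic boot device names from ironic.common.boot_devices to
# hypervisor-specific tokens via _get_boot_device_map() below, e.g.:
#
#     vbox_map = _get_boot_device_map('vbox')
#     vbox_map[boot_devices.PXE]   # -> 'net'
#     vbox_map[boot_devices.DISK]  # -> 'disk'
#
# An unknown virt_type raises InvalidParameterValue rather than falling
# back to this generic map.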
_BOOT_DEVICES_MAP = { boot_devices.DISK: 'hd', boot_devices.PXE: 'network', boot_devices.CDROM: 'cdrom', } def _get_boot_device_map(virt_type): if virt_type in ('virsh', 'vmware'): return _BOOT_DEVICES_MAP elif virt_type == 'vbox': return { boot_devices.DISK: 'disk', boot_devices.PXE: 'net', boot_devices.CDROM: 'dvd', } elif virt_type == 'xenserver': return { boot_devices.DISK: 'c', boot_devices.PXE: 'n', boot_devices.CDROM: 'd', } elif virt_type == 'parallels': return { boot_devices.DISK: 'hdd0', boot_devices.PXE: 'net0', boot_devices.CDROM: 'cdrom0', } else: raise exception.InvalidParameterValue(_( "SSHPowerDriver '%(virt_type)s' is not a valid virt_type.") % {'virt_type': virt_type}) def _get_command_sets(virt_type): """Retrieves the virt_type-specific commands to control power Required commands are as follows: base_cmd: Used by most sub-commands as the primary executable list_all: Lists all VMs (by virt_type identifier) that can be managed. One name per line, must not be quoted. list_running: Lists all running VMs (by virt_type identifier). One name per line, can be quoted. start_cmd / stop_cmd: Starts or stops the identified VM get_node_macs: Retrieves all MACs for an identified VM. One MAC per line, any standard format (see _normalize_mac) get_boot_device / set_boot_device: Gets or sets the primary boot device """ if virt_type == 'vbox': return { 'base_cmd': 'LC_ALL=C /usr/bin/VBoxManage', 'start_cmd': 'startvm {_NodeName_}', 'stop_cmd': 'controlvm {_NodeName_} poweroff', 'reboot_cmd': 'controlvm {_NodeName_} reset', 'list_all': "list vms|awk -F'\"' '{print $2}'", 'list_running': 'list runningvms', 'get_node_macs': ( "showvminfo --machinereadable {_NodeName_} | " "awk -F '\"' '/macaddress/{print $2}'"), 'set_boot_device': ( '{_BaseCmd_} modifyvm {_NodeName_} ' '--boot1 {_BootDevice_}'), 'get_boot_device': ( "{_BaseCmd_} showvminfo " "--machinereadable {_NodeName_} | " "awk -F '\"' '/boot1/{print $2}'"), } elif virt_type == 'vmware': return { 'base_cmd': 'LC_ALL=C /bin/vim-cmd', 'start_cmd': 'vmsvc/power.on {_NodeName_}', 'stop_cmd': 'vmsvc/power.off {_NodeName_}', 'reboot_cmd': 'vmsvc/power.reboot {_NodeName_}', 'list_all': "vmsvc/getallvms | awk '$1 ~ /^[0-9]+$/ {print $1}'", # NOTE(arata): In spite of its name, list_running_cmd shows a # single vmid, not a list. But it is OK. 'list_running': ( "vmsvc/power.getstate {_NodeName_} | " "grep 'Powered on' >/dev/null && " "echo '\"{_NodeName_}\"' || true"), # NOTE(arata): `true` is needed to handle a false vmid, which can # be returned by list_cmd. In that case, get_node_macs # returns an empty list rather than fails with # non-zero status code. 'get_node_macs': ( "vmsvc/device.getdevices {_NodeName_} | " "grep macAddress | awk -F '\"' '{print $2}' || true"), } elif virt_type == "virsh": # NOTE(NobodyCam): changes to the virsh commands will impact CI # see https://review.openstack.org/83906 # Change-Id: I160e4202952b7551b855dc7d91784d6a184cb0ed # for more detail. 
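# NOTE(editor): a minimal sketch (hypothetical helper, not part of this
# module) of how the command sets built here are consumed later in this
# file: a sub-command is appended to base_cmd and the {_NodeName_}
# placeholder is substituted before the string is executed over SSH.
def _example_render_command(cmd_set, key, node_name):
    # e.g. _example_render_command(virsh_cmds, 'start_cmd', 'node-0')
    #      -> 'LC_ALL=C /usr/bin/virsh start node-0'
    cmd = '%s %s' % (cmd_set['base_cmd'], cmd_set[key])
    return cmd.replace('{_NodeName_}', node_name)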
virsh_cmds = { 'base_cmd': 'LC_ALL=C /usr/bin/virsh', 'start_cmd': 'start {_NodeName_}', 'stop_cmd': 'destroy {_NodeName_}', 'reboot_cmd': 'reset {_NodeName_}', 'list_all': 'list --all --name', 'list_running': 'list --name', 'get_node_macs': ( "dumpxml {_NodeName_} | " "awk -F \"'\" '/mac address/{print $2}'| tr -d ':'"), # The sed script deletes any existing <boot dev=.../> element and # inserts the requested boot device before </os>. 'set_boot_device': ( "EDITOR=\"sed -i '/<boot dev=/d;" "/<\/os>/i\<boot dev=\\\"{_BootDevice_}\\\"/>'\" " "{_BaseCmd_} edit {_NodeName_}"), 'get_boot_device': ( "{_BaseCmd_} dumpxml {_NodeName_} | " "awk '/boot dev=/ { gsub( \".*dev=\" Q, \"\" ); " "gsub( Q \".*\", \"\" ); print; }' " "Q=\"'\" RS=\"[<>]\" | " "head -1"), } if CONF.ssh.libvirt_uri: virsh_cmds['base_cmd'] += ' --connect %s' % CONF.ssh.libvirt_uri return virsh_cmds elif virt_type == 'parallels': return { 'base_cmd': 'LC_ALL=C /usr/bin/prlctl', 'start_cmd': 'start {_NodeName_}', 'stop_cmd': 'stop {_NodeName_} --kill', 'reboot_cmd': 'reset {_NodeName_}', 'list_all': "list -a -o name |tail -n +2", 'list_running': 'list -o name |tail -n +2', 'get_node_macs': ( "list -j -i \"{_NodeName_}\" | " "awk -F'\"' '/\"mac\":/ {print $4}' | " "sed 's/\\(..\\)\\(..\\)\\(..\\)\\(..\\)\\(..\\)\\(..\\)/" "\\1:\\2:\\3:\\4:\\5\\6/' | " "tr '[:upper:]' '[:lower:]'"), 'set_boot_device': ( "{_BaseCmd_} set {_NodeName_} " "--device-bootorder \"{_BootDevice_}\""), 'get_boot_device': ( "{_BaseCmd_} list -i {_NodeName_} | " "awk '/^Boot order:/ {print $3}'"), } elif virt_type == 'xenserver': return { 'base_cmd': 'LC_ALL=C /opt/xensource/bin/xe', # Note(bobba): XenServer appears to have a condition where # vm-start can return before the power-state # has been updated to 'running'. Ironic # expects the power-state to be updated # immediately, so may find that power-state # is still 'halted' and attempt to start the # VM a second time. Sleep to avoid the race. 'start_cmd': 'vm-start uuid={_NodeName_} && sleep 10s', 'stop_cmd': 'vm-shutdown uuid={_NodeName_} force=true', 'list_all': "vm-list --minimal | tr ',' '\n'", 'list_running': ( "vm-list power-state=running --minimal |" " tr ',' '\n'"), 'get_node_macs': ( "vif-list vm-uuid={_NodeName_}" " params=MAC --minimal | tr ',' '\n'"), 'set_boot_device': ( "{_BaseCmd_} vm-param-set uuid={_NodeName_}" " HVM-boot-params:order='{_BootDevice_}'"), 'get_boot_device': ( "{_BaseCmd_} vm-param-get uuid={_NodeName_}" " --param-name=HVM-boot-params param-key=order | cut -b 1"), } else: raise exception.InvalidParameterValue(_( "SSHPowerDriver '%(virt_type)s' is not a valid virt_type.") % {'virt_type': virt_type}) def _normalize_mac(mac): return mac.replace('-', '').replace(':', '').lower() def _get_boot_device(ssh_obj, driver_info): """Get the current boot device. :param ssh_obj: paramiko.SSHClient, an active ssh connection. :param driver_info: information for accessing the node. :raises: SSHCommandFailed on an error from ssh. :raises: NotImplementedError if the virt_type does not support getting the boot device. :raises: NodeNotFound if could not find a VM corresponding to any of the provided MACs.
""" cmd_to_exec = driver_info['cmd_set'].get('get_boot_device') if cmd_to_exec: boot_device_map = _get_boot_device_map(driver_info['virt_type']) node_name = _get_hosts_name_for_node(ssh_obj, driver_info) base_cmd = driver_info['cmd_set']['base_cmd'] cmd_to_exec = cmd_to_exec.replace('{_NodeName_}', node_name) cmd_to_exec = cmd_to_exec.replace('{_BaseCmd_}', base_cmd) stdout, stderr = _ssh_execute(ssh_obj, cmd_to_exec) return next((dev for dev, hdev in boot_device_map.items() if hdev == stdout), None) else: raise NotImplementedError() def _set_boot_device(ssh_obj, driver_info, device): """Set the boot device. :param ssh_obj: paramiko.SSHClient, an active ssh connection. :param driver_info: information for accessing the node. :param device: the boot device. :raises: SSHCommandFailed on an error from ssh. :raises: NotImplementedError if the virt_type does not support setting the boot device. :raises: NodeNotFound if could not find a VM corresponding to any of the provided MACs. """ cmd_to_exec = driver_info['cmd_set'].get('set_boot_device') if cmd_to_exec: node_name = _get_hosts_name_for_node(ssh_obj, driver_info) base_cmd = driver_info['cmd_set']['base_cmd'] cmd_to_exec = cmd_to_exec.replace('{_NodeName_}', node_name) cmd_to_exec = cmd_to_exec.replace('{_BootDevice_}', device) cmd_to_exec = cmd_to_exec.replace('{_BaseCmd_}', base_cmd) _ssh_execute(ssh_obj, cmd_to_exec) else: raise NotImplementedError() def _ssh_execute(ssh_obj, cmd_to_exec): """Executes a command via ssh. Executes a command via ssh and returns a list of the lines of the output from the command. :param ssh_obj: paramiko.SSHClient, an active ssh connection. :param cmd_to_exec: command to execute. :returns: list of the lines of output from the command. :raises: SSHCommandFailed on an error from ssh. """ try: output_list = processutils.ssh_execute(ssh_obj, cmd_to_exec)[0].split('\n') except Exception as e: LOG.error(_LE("Cannot execute SSH cmd %(cmd)s. Reason: %(err)s."), {'cmd': cmd_to_exec, 'err': e}) raise exception.SSHCommandFailed(cmd=cmd_to_exec) return output_list def _parse_driver_info(node): """Gets the information needed for accessing the node. :param node: the Node of interest. :returns: dictionary of information. :raises: InvalidParameterValue if any required parameters are incorrect. :raises: MissingParameterValue if any required parameters are missing. """ info = node.driver_info or {} missing_info = [key for key in REQUIRED_PROPERTIES if not info.get(key)] if missing_info: raise exception.MissingParameterValue(_( "SSHPowerDriver requires the following parameters to be set in " "node's driver_info: %s.") % missing_info) address = info.get('ssh_address') username = info.get('ssh_username') password = info.get('ssh_password') port = info.get('ssh_port', 22) port = utils.validate_network_port(port, 'ssh_port') key_contents = info.get('ssh_key_contents') key_filename = info.get('ssh_key_filename') virt_type = info.get('ssh_virt_type') terminal_port = info.get('ssh_terminal_port') if terminal_port is not None: terminal_port = utils.validate_network_port(terminal_port, 'ssh_terminal_port') # NOTE(deva): we map 'address' from API to 'host' for common utils res = { 'host': address, 'username': username, 'port': port, 'virt_type': virt_type, 'uuid': node.uuid, 'terminal_port': terminal_port } cmd_set = _get_command_sets(virt_type) res['cmd_set'] = cmd_set # Only one credential may be set (avoids complexity around having # precedence etc). 
if len([v for v in (password, key_filename, key_contents) if v]) != 1: raise exception.InvalidParameterValue(_( "SSHPowerDriver requires one and only one of password, " "key_contents and key_filename to be set.")) if password: res['password'] = password elif key_contents: res['key_contents'] = key_contents else: if not os.path.isfile(key_filename): raise exception.InvalidParameterValue(_( "SSH key file %s not found.") % key_filename) res['key_filename'] = key_filename return res def _get_power_status(ssh_obj, driver_info): """Returns a node's current power state. :param ssh_obj: paramiko.SSHClient, an active ssh connection. :param driver_info: information for accessing the node. :returns: one of ironic.common.states POWER_OFF, POWER_ON. :raises: NodeNotFound if could not find a VM corresponding to any of the provided MACs. """ power_state = None node_name = _get_hosts_name_for_node(ssh_obj, driver_info) # Get a list of vms running on the host. If the command supports # it, explicitly specify the desired node." cmd_to_exec = "%s %s" % (driver_info['cmd_set']['base_cmd'], driver_info['cmd_set']['list_running']) cmd_to_exec = cmd_to_exec.replace('{_NodeName_}', node_name) running_list = _ssh_execute(ssh_obj, cmd_to_exec) # Command should return a list of running vms. If the current node is # not listed then we can assume it is not powered on. quoted_node_name = '"%s"' % node_name for node in running_list: if not node: continue # 'node' here is a formatted output from the virt cli's. The # node name is either an exact match or quoted (optionally with # other information, e.g. vbox returns '"NodeName" {}') if (quoted_node_name in node) or (node_name == node): power_state = states.POWER_ON break if not power_state: power_state = states.POWER_OFF return power_state def _get_connection(node): """Returns an SSH client connected to a node. :param node: the Node. :returns: paramiko.SSHClient, an active ssh connection. """ return utils.ssh_connect(_parse_driver_info(node)) def _get_hosts_name_for_node(ssh_obj, driver_info): """Get the name the host uses to reference the node. :param ssh_obj: paramiko.SSHClient, an active ssh connection. :param driver_info: information for accessing the node. :returns: the name of the node. :raises: NodeNotFound if could not find a VM corresponding to any of the provided MACs """ @retrying.retry( retry_on_result=lambda v: v is None, retry_on_exception=lambda _: False, # Do not retry on SSHCommandFailed stop_max_attempt_number=CONF.ssh.get_vm_name_attempts, wait_fixed=CONF.ssh.get_vm_name_retry_interval * 1000) def _with_retries(): matched_name = None cmd_to_exec = "%s %s" % (driver_info['cmd_set']['base_cmd'], driver_info['cmd_set']['list_all']) full_node_list = _ssh_execute(ssh_obj, cmd_to_exec) LOG.debug("Retrieved Node List: %s" % repr(full_node_list)) # for each node check Mac Addresses for node in full_node_list: if not node: continue LOG.debug("Checking Node: %s's Mac address." 
% node) cmd_to_exec = "%s %s" % (driver_info['cmd_set']['base_cmd'], driver_info['cmd_set']['get_node_macs']) cmd_to_exec = cmd_to_exec.replace('{_NodeName_}', node) hosts_node_mac_list = _ssh_execute(ssh_obj, cmd_to_exec) for host_mac in hosts_node_mac_list: if not host_mac: continue for node_mac in driver_info['macs']: if _normalize_mac(host_mac) in _normalize_mac(node_mac): LOG.debug("Found Mac address: %s" % node_mac) matched_name = node break if matched_name: break if matched_name: break return matched_name try: return _with_retries() except retrying.RetryError: raise exception.NodeNotFound( _("SSH driver was not able to find a VM with any of the " "specified MACs: %(macs)s for node %(node)s.") % {'macs': driver_info['macs'], 'node': driver_info['uuid']}) def _power_on(ssh_obj, driver_info): """Power ON this node. :param ssh_obj: paramiko.SSHClient, an active ssh connection. :param driver_info: information for accessing the node. :returns: one of ironic.common.states POWER_ON or ERROR. """ current_pstate = _get_power_status(ssh_obj, driver_info) if current_pstate == states.POWER_ON: _power_off(ssh_obj, driver_info) node_name = _get_hosts_name_for_node(ssh_obj, driver_info) cmd_to_power_on = "%s %s" % (driver_info['cmd_set']['base_cmd'], driver_info['cmd_set']['start_cmd']) cmd_to_power_on = cmd_to_power_on.replace('{_NodeName_}', node_name) _ssh_execute(ssh_obj, cmd_to_power_on) current_pstate = _get_power_status(ssh_obj, driver_info) if current_pstate == states.POWER_ON: return current_pstate else: return states.ERROR def _power_off(ssh_obj, driver_info): """Power OFF this node. :param ssh_obj: paramiko.SSHClient, an active ssh connection. :param driver_info: information for accessing the node. :returns: one of ironic.common.states POWER_OFF or ERROR. """ current_pstate = _get_power_status(ssh_obj, driver_info) if current_pstate == states.POWER_OFF: return current_pstate node_name = _get_hosts_name_for_node(ssh_obj, driver_info) cmd_to_power_off = "%s %s" % (driver_info['cmd_set']['base_cmd'], driver_info['cmd_set']['stop_cmd']) cmd_to_power_off = cmd_to_power_off.replace('{_NodeName_}', node_name) _ssh_execute(ssh_obj, cmd_to_power_off) current_pstate = _get_power_status(ssh_obj, driver_info) if current_pstate == states.POWER_OFF: return current_pstate else: return states.ERROR class SSHPower(base.PowerInterface): """SSH Power Interface. This PowerInterface class provides a mechanism for controlling the power state of virtual machines via SSH. NOTE: This driver supports VirtualBox and Virsh commands. NOTE: This driver does not currently support multi-node operations. """ def get_properties(self): return COMMON_PROPERTIES def validate(self, task): """Check that the node's 'driver_info' is valid. Check that the node's 'driver_info' contains the requisite fields and that an SSH connection to the node can be established. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue if any connection parameters are incorrect or if ssh failed to connect to the node. :raises: MissingParameterValue if no ports are enrolled for the given node. """ if not driver_utils.get_node_mac_addresses(task): raise exception.MissingParameterValue( _("Node %s does not have any port associated with it." ) % task.node.uuid) try: _get_connection(task.node) except exception.SSHConnectFailed as e: raise exception.InvalidParameterValue(_("SSH connection cannot" " be established: %s") % e) def get_power_state(self, task): """Get the current power state of the task's node. 
Poll the host for the current power state of the task's node. :param task: a TaskManager instance containing the node to act on. :returns: power state. One of :class:`ironic.common.states`. :raises: InvalidParameterValue if any connection parameters are incorrect. :raises: MissingParameterValue when a required parameter is missing :raises: NodeNotFound if could not find a VM corresponding to any of the provided MACs. :raises: SSHCommandFailed on an error from ssh. :raises: SSHConnectFailed if ssh failed to connect to the node. """ driver_info = _parse_driver_info(task.node) driver_info['macs'] = driver_utils.get_node_mac_addresses(task) ssh_obj = _get_connection(task.node) return _get_power_status(ssh_obj, driver_info) @task_manager.require_exclusive_lock def set_power_state(self, task, pstate): """Turn the power on or off. Set the power state of the task's node. :param task: a TaskManager instance containing the node to act on. :param pstate: Either POWER_ON or POWER_OFF from :class: `ironic.common.states`. :raises: InvalidParameterValue if any connection parameters are incorrect, or if the desired power state is invalid. :raises: MissingParameterValue when a required parameter is missing :raises: NodeNotFound if could not find a VM corresponding to any of the provided MACs. :raises: PowerStateFailure if it failed to set power state to pstate. :raises: SSHCommandFailed on an error from ssh. :raises: SSHConnectFailed if ssh failed to connect to the node. """ driver_info = _parse_driver_info(task.node) driver_info['macs'] = driver_utils.get_node_mac_addresses(task) ssh_obj = _get_connection(task.node) if pstate == states.POWER_ON: state = _power_on(ssh_obj, driver_info) elif pstate == states.POWER_OFF: state = _power_off(ssh_obj, driver_info) else: raise exception.InvalidParameterValue( _("set_power_state called with invalid power state %s." ) % pstate) if state != pstate: raise exception.PowerStateFailure(pstate=pstate) @task_manager.require_exclusive_lock def reboot(self, task): """Cycles the power to the task's node. Power cycles a node. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue if any connection parameters are incorrect. :raises: MissingParameterValue when a required parameter is missing :raises: NodeNotFound if could not find a VM corresponding to any of the provided MACs. :raises: PowerStateFailure if it failed to set power state to POWER_ON. :raises: SSHCommandFailed on an error from ssh. :raises: SSHConnectFailed if ssh failed to connect to the node. """ driver_info = _parse_driver_info(task.node) driver_info['macs'] = driver_utils.get_node_mac_addresses(task) ssh_obj = _get_connection(task.node) # _power_on will turn the power off if it's already on. state = _power_on(ssh_obj, driver_info) if state != states.POWER_ON: raise exception.PowerStateFailure(pstate=states.POWER_ON) class SSHManagement(base.ManagementInterface): def get_properties(self): return COMMON_PROPERTIES def validate(self, task): """Check that 'driver_info' contains SSH credentials. Validates whether the 'driver_info' property of the supplied task's node contains the required credentials information. :param task: a task from TaskManager. :raises: InvalidParameterValue if any connection parameters are incorrect. :raises: MissingParameterValue if a required parameter is missing """ _parse_driver_info(task.node) def get_supported_boot_devices(self, task): """Get a list of the supported boot devices. :param task: a task from TaskManager. 
:returns: A list with the supported boot devices defined in :mod:`ironic.common.boot_devices`. """ return list(_BOOT_DEVICES_MAP.keys()) @task_manager.require_exclusive_lock def set_boot_device(self, task, device, persistent=False): """Set the boot device for the task's node. Set the boot device to use on next reboot of the node. :param task: a task from TaskManager. :param device: the boot device, one of :mod:`ironic.common.boot_devices`. :param persistent: Boolean value. True if the boot device will persist to all future boots, False if not. Default: False. Ignored by this driver. :raises: InvalidParameterValue if an invalid boot device is specified or if any connection parameters are incorrect. :raises: MissingParameterValue if a required parameter is missing :raises: SSHConnectFailed if ssh failed to connect to the node. :raises: SSHCommandFailed on an error from ssh. :raises: NotImplementedError if the virt_type does not support setting the boot device. :raises: NodeNotFound if could not find a VM corresponding to any of the provided MACs. """ node = task.node driver_info = _parse_driver_info(node) if device not in self.get_supported_boot_devices(task): raise exception.InvalidParameterValue(_( "Invalid boot device %s specified.") % device) driver_info['macs'] = driver_utils.get_node_mac_addresses(task) ssh_obj = _get_connection(node) boot_device_map = _get_boot_device_map(driver_info['virt_type']) try: _set_boot_device(ssh_obj, driver_info, boot_device_map[device]) except NotImplementedError: with excutils.save_and_reraise_exception(): LOG.error(_LE("Failed to set boot device for node %(node)s, " "virt_type %(vtype)s does not support this " "operation"), {'node': node.uuid, 'vtype': driver_info['virt_type']}) def get_boot_device(self, task): """Get the current boot device for the task's node. Provides the current boot device of the node. Be aware that not all drivers support this. :param task: a task from TaskManager. :raises: InvalidParameterValue if any connection parameters are incorrect. :raises: MissingParameterValue if a required parameter is missing :raises: SSHConnectFailed if ssh failed to connect to the node. :raises: SSHCommandFailed on an error from ssh. :raises: NodeNotFound if could not find a VM corresponding to any of the provided MACs. :returns: a dictionary containing: :boot_device: the boot device, one of :mod:`ironic.common.boot_devices` or None if it is unknown. :persistent: Whether the boot device will persist to all future boots or not, None if it is unknown. """ node = task.node driver_info = _parse_driver_info(node) driver_info['macs'] = driver_utils.get_node_mac_addresses(task) ssh_obj = _get_connection(node) response = {'boot_device': None, 'persistent': None} try: response['boot_device'] = _get_boot_device(ssh_obj, driver_info) except NotImplementedError: LOG.warning(_LW("Failed to get boot device for node %(node)s, " "virt_type %(vtype)s does not support this " "operation"), {'node': node.uuid, 'vtype': driver_info['virt_type']}) return response def get_sensors_data(self, task): """Get sensors data. Not implemented by this driver. :param task: a TaskManager instance. """ raise NotImplementedError() class ShellinaboxConsole(base.ConsoleInterface): """A ConsoleInterface that uses ssh and shellinabox.""" def get_properties(self): properties = COMMON_PROPERTIES.copy() properties.update(CONSOLE_PROPERTIES) return properties def validate(self, task): """Validate the Node console info. :param task: a task from TaskManager. 
:raises: MissingParameterValue if required ssh parameters are missing :raises: InvalidParameterValue if required parameters are invalid. """ driver_info = _parse_driver_info(task.node) if driver_info['virt_type'] != 'virsh': raise exception.InvalidParameterValue(_( "not supported for non-virsh types")) if not driver_info['terminal_port']: raise exception.MissingParameterValue(_( "Missing 'ssh_terminal_port' parameter in node's " "'driver_info'")) def start_console(self, task): """Start a remote console for the node. :param task: a task from TaskManager :raises: MissingParameterValue if required ssh parameters are missing :raises: ConsoleError if the directory for the PID file cannot be created :raises: ConsoleSubprocessFailed when invoking the subprocess failed :raises: InvalidParameterValue if required parameters are invalid. """ driver_info = _parse_driver_info(task.node) driver_info['macs'] = driver_utils.get_node_mac_addresses(task) ssh_obj = _get_connection(task.node) node_name = _get_hosts_name_for_node(ssh_obj, driver_info) ssh_cmd = ("/:%(uid)s:%(gid)s:HOME:virsh console %(node)s" % {'uid': os.getuid(), 'gid': os.getgid(), 'node': node_name}) console_utils.start_shellinabox_console(driver_info['uuid'], driver_info['terminal_port'], ssh_cmd) def stop_console(self, task): """Stop the remote console session for the node. :param task: a task from TaskManager :raises: ConsoleError if unable to stop the console """ console_utils.stop_shellinabox_console(task.node.uuid) def get_console(self, task): """Get the type and connection information about the console. :param task: a task from TaskManager :raises: MissingParameterValue if required ssh parameters are missing :raises: InvalidParameterValue if required parameter are invalid. """ driver_info = _parse_driver_info(task.node) url = console_utils.get_shellinabox_console_url( driver_info['terminal_port']) return {'type': 'shellinabox', 'url': url} ironic-5.1.0/ironic/drivers/modules/snmp.py0000664000567000056710000006401412674513466022124 0ustar jenkinsjenkins00000000000000# Copyright 2013,2014 Cray Inc # # Authors: David Hewson # Stig Telfer # Mark Goddard # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Ironic SNMP power manager. Provides basic power control using an SNMP-enabled smart power controller. Uses a pluggable driver model to support devices with different SNMP object models. 
""" import abc import time from oslo_config import cfg from oslo_log import log as logging from oslo_service import loopingcall from oslo_utils import importutils import six from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LW from ironic.common import states from ironic.common import utils from ironic.conductor import task_manager from ironic.drivers import base pysnmp = importutils.try_import('pysnmp') if pysnmp: from pysnmp.entity.rfc3413.oneliner import cmdgen from pysnmp import error as snmp_error from pysnmp.proto import rfc1902 else: cmdgen = None snmp_error = None rfc1902 = None opts = [ cfg.IntOpt('power_timeout', default=10, help=_('Seconds to wait for power action to be completed')), # NOTE(yuriyz): some of SNMP-enabled hardware have own options for pause # between off and on. This option guarantees minimal value. cfg.IntOpt('reboot_delay', default=0, min=0, help=_('Time (in seconds) to sleep between when rebooting ' '(powering off and on again)')) ] LOG = logging.getLogger(__name__) CONF = cfg.CONF CONF.register_opts(opts, group='snmp') SNMP_V1 = '1' SNMP_V2C = '2c' SNMP_V3 = '3' SNMP_PORT = 161 REQUIRED_PROPERTIES = { 'snmp_driver': _("PDU manufacturer driver. Required."), 'snmp_address': _("PDU IPv4 address or hostname. Required."), 'snmp_outlet': _("PDU power outlet index (1-based). Required."), } OPTIONAL_PROPERTIES = { 'snmp_version': _("SNMP protocol version: %(v1)s, %(v2c)s or %(v3)s " "(optional, default %(v1)s)") % {"v1": SNMP_V1, "v2c": SNMP_V2C, "v3": SNMP_V3}, 'snmp_port': _("SNMP port, default %(port)d") % {"port": SNMP_PORT}, 'snmp_community': _("SNMP community. Required for versions %(v1)s and %(v2c)s") % {"v1": SNMP_V1, "v2c": SNMP_V2C}, 'snmp_security': _("SNMP security name. Required for version %(v3)s") % {"v3": SNMP_V3}, } COMMON_PROPERTIES = REQUIRED_PROPERTIES.copy() COMMON_PROPERTIES.update(OPTIONAL_PROPERTIES) class SNMPClient(object): """SNMP client object. Performs low level SNMP get and set operations. Encapsulates all interaction with PySNMP to simplify dynamic importing and unit testing. """ def __init__(self, address, port, version, community=None, security=None): self.address = address self.port = port self.version = version if self.version == SNMP_V3: self.security = security else: self.community = community self.cmd_gen = cmdgen.CommandGenerator() def _get_auth(self): """Return the authorization data for an SNMP request. :returns: A :class:`pysnmp.entity.rfc3413.oneliner.cmdgen.CommunityData` object. """ if self.version == SNMP_V3: # Handling auth/encryption credentials is not (yet) supported. # This version supports a security name analogous to community. return cmdgen.UsmUserData(self.security) else: mp_model = 1 if self.version == SNMP_V2C else 0 return cmdgen.CommunityData(self.community, mpModel=mp_model) def _get_transport(self): """Return the transport target for an SNMP request. :returns: A :class: `pysnmp.entity.rfc3413.oneliner.cmdgen.UdpTransportTarget` object. :raises: snmp_error.PySnmpError if the transport address is bad. """ # The transport target accepts timeout and retries parameters, which # default to 1 (second) and 5 respectively. These are deemed sensible # enough to allow for an unreliable network or slow device. return cmdgen.UdpTransportTarget((self.address, self.port)) def get(self, oid): """Use PySNMP to perform an SNMP GET operation on a single object. :param oid: The OID of the object to get. :raises: SNMPFailure if an SNMP request fails. 
:returns: The value of the requested object. """ try: results = self.cmd_gen.getCmd(self._get_auth(), self._get_transport(), oid) except snmp_error.PySnmpError as e: raise exception.SNMPFailure(operation="GET", error=e) error_indication, error_status, error_index, var_binds = results if error_indication: # SNMP engine-level error. raise exception.SNMPFailure(operation="GET", error=error_indication) if error_status: # SNMP PDU error. raise exception.SNMPFailure(operation="GET", error=error_status.prettyPrint()) # We only expect a single value back name, val = var_binds[0] return val def get_next(self, oid): """Use PySNMP to perform an SNMP GET NEXT operation on a table object. :param oid: The OID of the object to get. :raises: SNMPFailure if an SNMP request fails. :returns: A list of values of the requested table object. """ try: results = self.cmd_gen.nextCmd(self._get_auth(), self._get_transport(), oid) except snmp_error.PySnmpError as e: raise exception.SNMPFailure(operation="GET_NEXT", error=e) error_indication, error_status, error_index, var_bind_table = results if error_indication: # SNMP engine-level error. raise exception.SNMPFailure(operation="GET_NEXT", error=error_indication) if error_status: # SNMP PDU error. raise exception.SNMPFailure(operation="GET_NEXT", error=error_status.prettyPrint()) return [val for row in var_bind_table for name, val in row] def set(self, oid, value): """Use PySNMP to perform an SNMP SET operation on a single object. :param oid: The OID of the object to set. :param value: The value of the object to set. :raises: SNMPFailure if an SNMP request fails. """ try: results = self.cmd_gen.setCmd(self._get_auth(), self._get_transport(), (oid, value)) except snmp_error.PySnmpError as e: raise exception.SNMPFailure(operation="SET", error=e) error_indication, error_status, error_index, var_binds = results if error_indication: # SNMP engine-level error. raise exception.SNMPFailure(operation="SET", error=error_indication) if error_status: # SNMP PDU error. raise exception.SNMPFailure(operation="SET", error=error_status.prettyPrint()) def _get_client(snmp_info): """Create and return an SNMP client object. :param snmp_info: SNMP driver info. :returns: A :class:`SNMPClient` object. """ return SNMPClient(snmp_info["address"], snmp_info["port"], snmp_info["version"], snmp_info.get("community"), snmp_info.get("security")) @six.add_metaclass(abc.ABCMeta) class SNMPDriverBase(object): """SNMP power driver base class. The SNMPDriver class hierarchy implements manufacturer-specific MIB actions over SNMP to interface with different smart power controller products. """ oid_enterprise = (1, 3, 6, 1, 4, 1) retry_interval = 1 def __init__(self, snmp_info): self.snmp_info = snmp_info self.client = _get_client(snmp_info) @abc.abstractmethod def _snmp_power_state(self): """Perform the SNMP request required to get the current power state. :raises: SNMPFailure if an SNMP request fails. :returns: power state. One of :class:`ironic.common.states`. """ @abc.abstractmethod def _snmp_power_on(self): """Perform the SNMP request required to set the power on. :raises: SNMPFailure if an SNMP request fails. """ @abc.abstractmethod def _snmp_power_off(self): """Perform the SNMP request required to set the power off. :raises: SNMPFailure if an SNMP request fails. """ def _snmp_wait_for_state(self, goal_state): """Wait for the power state of the PDU outlet to change. :param goal_state: The power state to wait for, one of :class:`ironic.common.states`. :raises: SNMPFailure if an SNMP request fails. 
:returns: power state. One of :class:`ironic.common.states`. """ def _poll_for_state(mutable): """Called at an interval until the node's power is consistent. :param mutable: dict object containing "state" and "next_time" :raises: SNMPFailure if an SNMP request fails. """ mutable["state"] = self._snmp_power_state() if mutable["state"] == goal_state: raise loopingcall.LoopingCallDone() mutable["next_time"] += self.retry_interval if mutable["next_time"] >= CONF.snmp.power_timeout: mutable["state"] = states.ERROR raise loopingcall.LoopingCallDone() # Pass state to the looped function call in a mutable form. state = {"state": None, "next_time": 0} timer = loopingcall.FixedIntervalLoopingCall(_poll_for_state, state) timer.start(interval=self.retry_interval).wait() LOG.debug("power state '%s'", state["state"]) return state["state"] def power_state(self): """Returns a node's current power state. :raises: SNMPFailure if an SNMP request fails. :returns: power state. One of :class:`ironic.common.states`. """ return self._snmp_power_state() def power_on(self): """Set the power state to this node to ON. :raises: SNMPFailure if an SNMP request fails. :returns: power state. One of :class:`ironic.common.states`. """ self._snmp_power_on() return self._snmp_wait_for_state(states.POWER_ON) def power_off(self): """Set the power state to this node to OFF. :raises: SNMPFailure if an SNMP request fails. :returns: power state. One of :class:`ironic.common.states`. """ self._snmp_power_off() return self._snmp_wait_for_state(states.POWER_OFF) def power_reset(self): """Reset the power to this node. :raises: SNMPFailure if an SNMP request fails. :returns: power state. One of :class:`ironic.common.states`. """ power_result = self.power_off() if power_result != states.POWER_OFF: return states.ERROR time.sleep(CONF.snmp.reboot_delay) power_result = self.power_on() if power_result != states.POWER_ON: return states.ERROR return power_result class SNMPDriverSimple(SNMPDriverBase): """SNMP driver base class for simple PDU devices. Here, simple refers to devices which provide a single SNMP object for controlling the power state of an outlet. The default OID of the power state object is of the form ... A different OID may be specified by overriding the _snmp_oid method in a subclass. """ def __init__(self, *args, **kwargs): super(SNMPDriverSimple, self).__init__(*args, **kwargs) self.oid = self._snmp_oid() @abc.abstractproperty def oid_device(self): """Device dependent portion of the power state object OID.""" @abc.abstractproperty def value_power_on(self): """Value representing power on state.""" @abc.abstractproperty def value_power_off(self): """Value representing power off state.""" def _snmp_oid(self): """Return the OID of the power state object. :returns: Power state object OID as a tuple of integers. """ outlet = int(self.snmp_info['outlet']) return self.oid_enterprise + self.oid_device + (outlet,) def _snmp_power_state(self): state = self.client.get(self.oid) # Translate the state to an Ironic power state. 
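        # NOTE: value_power_on/value_power_off are device-specific integers
        # supplied by each subclass below (e.g. Aten uses on=2/off=1 while
        # APC MasterSwitch uses on=1/off=2); any other value read from the
        # device falls through to ERROR.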
if state == self.value_power_on: power_state = states.POWER_ON elif state == self.value_power_off: power_state = states.POWER_OFF else: LOG.warning(_LW("SNMP PDU %(addr)s outlet %(outlet)s: " "unrecognised power state %(state)s."), {'addr': self.snmp_info['address'], 'outlet': self.snmp_info['outlet'], 'state': state}) power_state = states.ERROR return power_state def _snmp_power_on(self): value = rfc1902.Integer(self.value_power_on) self.client.set(self.oid, value) def _snmp_power_off(self): value = rfc1902.Integer(self.value_power_off) self.client.set(self.oid, value) class SNMPDriverAten(SNMPDriverSimple): """SNMP driver class for Aten PDU devices. SNMP objects for Aten PDU: 1.3.6.1.4.1.21317.1.3.2.2.2.2 Outlet Power Values: 1=Off, 2=On, 3=Pending, 4=Reset """ oid_device = (21317, 1, 3, 2, 2, 2, 2) value_power_on = 2 value_power_off = 1 def _snmp_oid(self): """Return the OID of the power state object. :returns: Power state object OID as a tuple of integers. """ outlet = int(self.snmp_info['outlet']) return self.oid_enterprise + self.oid_device + (outlet, 0,) class SNMPDriverAPCMasterSwitch(SNMPDriverSimple): """SNMP driver class for APC MasterSwitch PDU devices. SNMP objects for APC SNMPDriverAPCMasterSwitch PDU: 1.3.6.1.4.1.318.1.1.4.4.2.1.3 sPDUOutletCtl Values: 1=On, 2=Off, 3=PowerCycle, [...more options follow] """ oid_device = (318, 1, 1, 4, 4, 2, 1, 3) value_power_on = 1 value_power_off = 2 class SNMPDriverAPCMasterSwitchPlus(SNMPDriverSimple): """SNMP driver class for APC MasterSwitchPlus PDU devices. SNMP objects for APC SNMPDriverAPCMasterSwitchPlus PDU: 1.3.6.1.4.1.318.1.1.6.5.1.1.5 sPDUOutletControlMSPOutletCommand Values: 1=On, 3=Off, [...more options follow] """ oid_device = (318, 1, 1, 6, 5, 1, 1, 5) value_power_on = 1 value_power_off = 3 class SNMPDriverAPCRackPDU(SNMPDriverSimple): """SNMP driver class for APC RackPDU devices. SNMP objects for APC SNMPDriverAPCMasterSwitch PDU: # 1.3.6.1.4.1.318.1.1.12.3.3.1.1.4 rPDUOutletControlOutletCommand Values: 1=On, 2=Off, 3=PowerCycle, [...more options follow] """ oid_device = (318, 1, 1, 12, 3, 3, 1, 1, 4) value_power_on = 1 value_power_off = 2 class SNMPDriverCyberPower(SNMPDriverSimple): """SNMP driver class for CyberPower PDU devices. SNMP objects for CyberPower PDU: 1.3.6.1.4.1.3808.1.1.3.3.3.1.1.4 ePDUOutletControlOutletCommand Values: 1=On, 2=Off, 3=PowerCycle, [...more options follow] """ # NOTE(mgoddard): This device driver is currently untested, this driver has # been implemented based upon its published MIB # documentation. oid_device = (3808, 1, 1, 3, 3, 3, 1, 1, 4) value_power_on = 1 value_power_off = 2 class SNMPDriverTeltronix(SNMPDriverSimple): """SNMP driver class for Teltronix PDU devices. SNMP objects for Teltronix PDU: 1.3.6.1.4.1.23620.1.2.2.1.4 Outlet Power Values: 1=Off, 2=On """ oid_device = (23620, 1, 2, 2, 1, 4) value_power_on = 2 value_power_off = 1 class SNMPDriverEatonPower(SNMPDriverBase): """SNMP driver class for Eaton Power PDU. The Eaton power PDU does not follow the model of SNMPDriverSimple as it uses multiple SNMP objects. SNMP objects for Eaton Power PDU 1.3.6.1.4.1.534.6.6.7.6.6.1.2. outletControlStatus Read 0=off, 1=on, 2=pending off, 3=pending on 1.3.6.1.4.1.534.6.6.7.6.6.1.3. outletControlOffCmd Write 0 for immediate power off 1.3.6.1.4.1.534.6.6.7.6.6.1.4. outletControlOnCmd Write 0 for immediate power on """ # NOTE(mgoddard): This device driver is currently untested, this driver has # been implemented based upon its published MIB # documentation. 
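    # The full OID for any operation is assembled as oid_enterprise +
    # oid_device + an action-specific suffix + (outlet,). For example, the
    # status object for outlet 3 would be 1.3.6.1.4.1.534.6.6.7.6.6.1.2.3
    # (illustrative, derived from the OIDs documented above).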
oid_device = (534, 6, 6, 7, 6, 6, 1) oid_status = (2,) oid_poweron = (3,) oid_poweroff = (4,) status_off = 0 status_on = 1 status_pending_off = 2 status_pending_on = 3 value_power_on = 0 value_power_off = 0 def __init__(self, *args, **kwargs): super(SNMPDriverEatonPower, self).__init__(*args, **kwargs) # Due to its use of different OIDs for different actions, we only form # an OID that holds the common substring of the OIDs for power # operations. self.oid_base = self.oid_enterprise + self.oid_device def _snmp_oid(self, oid): """Return the OID for one of the outlet control objects. :param oid: The action-dependent portion of the OID, as a tuple of integers. :returns: The full OID as a tuple of integers. """ outlet = int(self.snmp_info['outlet']) return self.oid_base + oid + (outlet,) def _snmp_power_state(self): oid = self._snmp_oid(self.oid_status) state = self.client.get(oid) # Translate the state to an Ironic power state. if state in (self.status_on, self.status_pending_off): power_state = states.POWER_ON elif state in (self.status_off, self.status_pending_on): power_state = states.POWER_OFF else: LOG.warning(_LW("Eaton Power SNMP PDU %(addr)s outlet %(outlet)s: " "unrecognised power state %(state)s."), {'addr': self.snmp_info['address'], 'outlet': self.snmp_info['outlet'], 'state': state}) power_state = states.ERROR return power_state def _snmp_power_on(self): oid = self._snmp_oid(self.oid_poweron) value = rfc1902.Integer(self.value_power_on) self.client.set(oid, value) def _snmp_power_off(self): oid = self._snmp_oid(self.oid_poweroff) value = rfc1902.Integer(self.value_power_off) self.client.set(oid, value) # A dictionary of supported drivers keyed by snmp_driver attribute DRIVER_CLASSES = { 'apc': SNMPDriverAPCMasterSwitch, 'apc_masterswitch': SNMPDriverAPCMasterSwitch, 'apc_masterswitchplus': SNMPDriverAPCMasterSwitchPlus, 'apc_rackpdu': SNMPDriverAPCRackPDU, 'aten': SNMPDriverAten, 'cyberpower': SNMPDriverCyberPower, 'eatonpower': SNMPDriverEatonPower, 'teltronix': SNMPDriverTeltronix } def _parse_driver_info(node): """Parse a node's driver_info values. Return a dictionary of validated driver information, usable for SNMPDriver object creation. :param node: An Ironic node object. :returns: SNMP driver info. :raises: MissingParameterValue if any required parameters are missing. :raises: InvalidParameterValue if any parameters are invalid. 
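    Example driver_info input (illustrative values only)::

        {'snmp_driver': 'apc_masterswitch',
         'snmp_address': '192.0.2.10',
         'snmp_outlet': '3',
         'snmp_version': '2c',
         'snmp_community': 'private'}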
""" info = node.driver_info or {} missing_info = [key for key in REQUIRED_PROPERTIES if not info.get(key)] if missing_info: raise exception.MissingParameterValue(_( "SNMP driver requires the following parameters to be set in " "node's driver_info: %s.") % missing_info) snmp_info = {} # Validate PDU driver type snmp_info['driver'] = info.get('snmp_driver') if snmp_info['driver'] not in DRIVER_CLASSES: raise exception.InvalidParameterValue(_( "SNMPPowerDriver: unknown driver: '%s'") % snmp_info['driver']) # In absence of a version, default to SNMPv1 snmp_info['version'] = info.get('snmp_version', SNMP_V1) if snmp_info['version'] not in (SNMP_V1, SNMP_V2C, SNMP_V3): raise exception.InvalidParameterValue(_( "SNMPPowerDriver: unknown SNMP version: '%s'") % snmp_info['version']) # In absence of a configured UDP port, default to the standard port port_str = info.get('snmp_port', SNMP_PORT) snmp_info['port'] = utils.validate_network_port(port_str, 'snmp_port') if snmp_info['port'] < 1 or snmp_info['port'] > 65535: raise exception.InvalidParameterValue(_( "SNMPPowerDriver: SNMP UDP port out of range: %d") % snmp_info['port']) # Extract version-dependent required parameters if snmp_info['version'] in (SNMP_V1, SNMP_V2C): if 'snmp_community' not in info: raise exception.MissingParameterValue(_( "SNMP driver requires snmp_community to be set for version " "%s.") % snmp_info['version']) snmp_info['community'] = info.get('snmp_community') elif snmp_info['version'] == SNMP_V3: if 'snmp_security' not in info: raise exception.MissingParameterValue(_( "SNMP driver requires snmp_security to be set for version %s.") % (SNMP_V3)) snmp_info['security'] = info.get('snmp_security') # Target PDU IP address and power outlet identification snmp_info['address'] = info.get('snmp_address') snmp_info['outlet'] = info.get('snmp_outlet') return snmp_info def _get_driver(node): """Return a new SNMP driver object of the correct type for `node`. :param node: Single node object. :raises: InvalidParameterValue if node power config is incomplete or invalid. :returns: SNMP driver object. """ snmp_info = _parse_driver_info(node) cls = DRIVER_CLASSES[snmp_info['driver']] return cls(snmp_info) class SNMPPower(base.PowerInterface): """SNMP Power Interface. This PowerInterface class provides a mechanism for controlling the power state of a physical device using an SNMP-enabled smart power controller. """ def get_properties(self): """Return the properties of the interface. :returns: dictionary of : entries. """ return COMMON_PROPERTIES def validate(self, task): """Check that node.driver_info contains the requisite fields. :raises: MissingParameterValue if required SNMP parameters are missing. :raises: InvalidParameterValue if SNMP parameters are invalid. """ _parse_driver_info(task.node) def get_power_state(self, task): """Get the current power state. Poll the SNMP device for the current power state of the node. :param task: A instance of `ironic.manager.task_manager.TaskManager`. :raises: MissingParameterValue if required SNMP parameters are missing. :raises: InvalidParameterValue if SNMP parameters are invalid. :raises: SNMPFailure if an SNMP request fails. :returns: power state. One of :class:`ironic.common.states`. """ driver = _get_driver(task.node) power_state = driver.power_state() return power_state @task_manager.require_exclusive_lock def set_power_state(self, task, pstate): """Turn the power on or off. Set the power state of a node. :param task: A instance of `ironic.manager.task_manager.TaskManager`. 
:param pstate: Either POWER_ON or POWER_OFF from :class:`ironic.common.states`. :raises: MissingParameterValue if required SNMP parameters are missing. :raises: InvalidParameterValue if SNMP parameters are invalid or `pstate` is invalid. :raises: PowerStateFailure if the final power state of the node is not as requested after the timeout. :raises: SNMPFailure if an SNMP request fails. """ driver = _get_driver(task.node) if pstate == states.POWER_ON: state = driver.power_on() elif pstate == states.POWER_OFF: state = driver.power_off() else: raise exception.InvalidParameterValue(_("set_power_state called " "with invalid power " "state %s.") % str(pstate)) if state != pstate: raise exception.PowerStateFailure(pstate=pstate) @task_manager.require_exclusive_lock def reboot(self, task): """Cycles the power to a node. :param task: An instance of `ironic.conductor.task_manager.TaskManager`. :raises: MissingParameterValue if required SNMP parameters are missing. :raises: InvalidParameterValue if SNMP parameters are invalid. :raises: PowerStateFailure if the final power state of the node is not POWER_ON after the timeout. :raises: SNMPFailure if an SNMP request fails. """ driver = _get_driver(task.node) state = driver.power_reset() if state != states.POWER_ON: raise exception.PowerStateFailure(pstate=states.POWER_ON) ironic-5.1.0/ironic/drivers/modules/boot.ipxe0000664000567000056710000000136712674513466022421 0ustar jenkinsjenkins00000000000000#!ipxe # NOTE(lucasagomes): Loop over all network devices and boot from # the first one capable of booting. For more information see: # https://bugs.launchpad.net/ironic/+bug/1504482 set netid:int32 -1 :loop inc netid || chain pxelinux.cfg/${mac:hexhyp} || goto old_rom isset ${net${netid}/mac} || goto loop_done echo Attempting to boot from MAC ${net${netid}/mac:hexhyp} chain pxelinux.cfg/${net${netid}/mac:hexhyp} || goto loop :loop_done echo PXE boot failed! No configuration found for any of the present NICs. echo Press any key to reboot... prompt --timeout 180 reboot :old_rom echo PXE boot failed! No configuration found for NIC ${mac:hexhyp}. echo Please update your iPXE ROM and retry. echo Press any key to reboot...
prompt --timeout 180 reboot ironic-5.1.0/ironic/drivers/modules/pxe_grub_config.template0000664000567000056710000000216412674513466025470 0ustar jenkinsjenkins00000000000000set default=deploy set timeout=5 set hidden_timeout_quiet=false menuentry "deploy" { linuxefi {{ pxe_options.deployment_aki_path }} selinux=0 troubleshoot=0 text disk={{ pxe_options.disk }} iscsi_target_iqn={{ pxe_options.iscsi_target_iqn }} deployment_id={{ pxe_options.deployment_id }} deployment_key={{ pxe_options.deployment_key }} ironic_api_url={{ pxe_options.ironic_api_url }} {{ pxe_options.pxe_append_params|default("", true) }} boot_server={{pxe_options.tftp_server}} {% if pxe_options.root_device %}root_device={{ pxe_options.root_device }}{% endif %} ipa-api-url={{ pxe_options['ipa-api-url'] }} ipa-driver-name={{ pxe_options['ipa-driver-name'] }} boot_option={{ pxe_options.boot_option }} boot_mode={{ pxe_options['boot_mode'] }} coreos.configdrive=0 initrdefi {{ pxe_options.deployment_ari_path }} } menuentry "boot_partition" { linuxefi {{ pxe_options.aki_path }} root={{ ROOT }} ro text {{ pxe_options.pxe_append_params|default("", true) }} boot_server={{pxe_options.tftp_server}} initrdefi {{ pxe_options.ari_path }} } menuentry "boot_whole_disk" { linuxefi chain.c32 mbr:{{ DISK_IDENTIFIER }} } ironic-5.1.0/ironic/drivers/modules/pxe_config.template0000664000567000056710000000223312674513466024446 0ustar jenkinsjenkins00000000000000default deploy label deploy kernel {{ pxe_options.deployment_aki_path }} append initrd={{ pxe_options.deployment_ari_path }} selinux=0 disk={{ pxe_options.disk }} iscsi_target_iqn={{ pxe_options.iscsi_target_iqn }} deployment_id={{ pxe_options.deployment_id }} deployment_key={{ pxe_options.deployment_key }} ironic_api_url={{ pxe_options.ironic_api_url }} troubleshoot=0 text {{ pxe_options.pxe_append_params|default("", true) }} boot_option={{ pxe_options.boot_option }} {% if pxe_options.root_device %}root_device={{ pxe_options.root_device }}{% endif %} ipa-api-url={{ pxe_options['ipa-api-url'] }} ipa-driver-name={{ pxe_options['ipa-driver-name'] }} boot_mode={{ pxe_options['boot_mode'] }} coreos.configdrive=0 ipappend 3 label boot_partition kernel {{ pxe_options.aki_path }} append initrd={{ pxe_options.ari_path }} root={{ ROOT }} ro text {{ pxe_options.pxe_append_params|default("", true) }} label boot_whole_disk COM32 chain.c32 append mbr:{{ DISK_IDENTIFIER }} label trusted_boot kernel mboot append tboot.gz --- {{pxe_options.aki_path}} root={{ ROOT }} ro text {{ pxe_options.pxe_append_params|default("", true) }} intel_iommu=on --- {{pxe_options.ari_path}} ironic-5.1.0/ironic/drivers/modules/inspector.py0000664000567000056710000001761312674513466023160 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
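# A minimal sketch (an assumption, not part of the shipped tree) of the
# [inspector] configuration section that the options registered below
# map onto in ironic.conf:
#
#   [inspector]
#   enabled = True
#   # when unset, the client default http://127.0.0.1:5050 is used
#   service_url = http://127.0.0.1:5050
#   status_check_period = 60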
""" Modules required to work with ironic_inspector: https://pypi.python.org/pypi/ironic-inspector """ import eventlet from futurist import periodics from oslo_config import cfg from oslo_log import log as logging from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LI from ironic.common import keystone from ironic.common import states from ironic.conductor import task_manager from ironic.drivers import base LOG = logging.getLogger(__name__) inspector_opts = [ cfg.BoolOpt('enabled', default=False, help=_('whether to enable inspection using ironic-inspector'), deprecated_group='discoverd'), cfg.StrOpt('service_url', help=_('ironic-inspector HTTP endpoint. If this is not set, ' 'the ironic-inspector client default ' '(http://127.0.0.1:5050) will be used.'), deprecated_group='discoverd'), cfg.IntOpt('status_check_period', default=60, help=_('period (in seconds) to check status of nodes ' 'on inspection'), deprecated_group='discoverd'), ] CONF = cfg.CONF CONF.register_opts(inspector_opts, group='inspector') CONF.import_opt('auth_strategy', 'ironic.api.app') client = importutils.try_import('ironic_inspector_client') INSPECTOR_API_VERSION = (1, 0) class Inspector(base.InspectInterface): """In-band inspection via ironic-inspector project.""" @classmethod def create_if_enabled(cls, driver_name): """Create instance of Inspector if it's enabled. Reports log warning with given driver_name if it's not. :return: Inspector instance or None """ if CONF.inspector.enabled: return cls() else: LOG.info(_LI("Inspection via ironic-inspector is disabled in " "configuration for driver %s. To enable, change " "[inspector] enabled = True."), driver_name) def __init__(self): if not CONF.inspector.enabled: raise exception.DriverLoadError( _('ironic-inspector support is disabled')) if not client: raise exception.DriverLoadError( _('python-ironic-inspector-client Python module not found')) def get_properties(self): """Return the properties of the interface. :returns: dictionary of : entries. """ return {} # no properties def validate(self, task): """Validate the driver-specific inspection information. If invalid, raises an exception; otherwise returns None. :param task: a task from TaskManager. """ # NOTE(deva): this is not callable if inspector is disabled # so don't raise an exception -- just pass. pass def inspect_hardware(self, task): """Inspect hardware to obtain the hardware properties. This particular implementation only starts inspection using ironic-inspector. Results will be checked in a periodic task. :param task: a task from TaskManager. :returns: states.INSPECTING """ LOG.debug('Starting inspection for node %(uuid)s using ' 'ironic-inspector', {'uuid': task.node.uuid}) # NOTE(dtantsur): we're spawning a short-living green thread so that # we can release a lock as soon as possible and allow ironic-inspector # to operate on a node. 
eventlet.spawn_n(_start_inspection, task.node.uuid, task.context) return states.INSPECTING @periodics.periodic(spacing=CONF.inspector.status_check_period, enabled=CONF.inspector.enabled) def _periodic_check_result(self, manager, context): """Periodic task checking results of inspection.""" filters = {'provision_state': states.INSPECTING} node_iter = manager.iter_nodes(filters=filters) for node_uuid, driver in node_iter: try: lock_purpose = 'checking hardware inspection status' with task_manager.acquire(context, node_uuid, shared=True, purpose=lock_purpose) as task: _check_status(task) except (exception.NodeLocked, exception.NodeNotFound): continue def _call_inspector(func, uuid, context): """Wrapper around calls to inspector.""" # NOTE(dtantsur): due to bug #1428652 None is not accepted for base_url. kwargs = {'api_version': INSPECTOR_API_VERSION} if CONF.inspector.service_url: kwargs['base_url'] = CONF.inspector.service_url return func(uuid, auth_token=context.auth_token, **kwargs) def _start_inspection(node_uuid, context): """Call to inspector to start inspection.""" try: _call_inspector(client.introspect, node_uuid, context) except Exception as exc: LOG.exception(_LE('Exception during contacting ironic-inspector ' 'for inspection of node %(node)s: %(err)s'), {'node': node_uuid, 'err': exc}) # NOTE(dtantsur): if acquire fails our last option is to rely on # timeout lock_purpose = 'recording hardware inspection error' with task_manager.acquire(context, node_uuid, purpose=lock_purpose) as task: task.node.last_error = _('Failed to start inspection: %s') % exc task.process_event('fail') else: LOG.info(_LI('Node %s was sent to inspection to ironic-inspector'), node_uuid) def _check_status(task): """Check inspection status for node given by a task.""" node = task.node if node.provision_state != states.INSPECTING: return if not isinstance(task.driver.inspect, Inspector): return LOG.debug('Calling to inspector to check status of node %s', task.node.uuid) # NOTE(dtantsur): periodic tasks do not have proper tokens in context if CONF.auth_strategy == 'keystone': task.context.auth_token = keystone.get_admin_auth_token() try: status = _call_inspector(client.get_status, node.uuid, task.context) except Exception: # NOTE(dtantsur): get_status should not normally raise # let's assume it's a transient failure and retry later LOG.exception(_LE('Unexpected exception while getting ' 'inspection status for node %s, will retry later'), node.uuid) return error = status.get('error') finished = status.get('finished') if not error and not finished: return # If the inspection has finished or failed, we need to update the node, so # upgrade our lock to an exclusive one. task.upgrade_lock() node = task.node if error: LOG.error(_LE('Inspection failed for node %(uuid)s ' 'with error: %(err)s'), {'uuid': node.uuid, 'err': error}) node.last_error = (_('ironic-inspector inspection failed: %s') % error) task.process_event('fail') elif finished: LOG.info(_LI('Inspection finished successfully for node %s'), node.uuid) task.process_event('done') ironic-5.1.0/ironic/drivers/modules/ilo/0000775000567000056710000000000012674513633021347 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/modules/ilo/firmware_processor.py0000664000567000056710000004213212674513466025642 0ustar jenkinsjenkins00000000000000# Copyright 2016 Hewlett Packard Enterprise Development Company LP # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Firmware file processor """ import os import shutil import tempfile import types from oslo_config import cfg from oslo_log import log as logging from oslo_utils import excutils from oslo_utils import importutils import six import six.moves.urllib.parse as urlparse from ironic.common import exception from ironic.common.i18n import _, _LI from ironic.common import image_service from ironic.common import swift from ironic.drivers.modules.ilo import common as ilo_common # Supported components for firmware update when invoked # through manual clean step, ``update_firmware``. SUPPORTED_FIRMWARE_UPDATE_COMPONENTS = ['ilo', 'cpld', 'power_pic', 'bios', 'chassis'] # Mandatory fields to be provided as part of firmware image update # with manual clean step FIRMWARE_IMAGE_INFO_FIELDS = {'url', 'checksum', 'component'} CONF = cfg.CONF LOG = logging.getLogger(__name__) proliantutils_error = importutils.try_import('proliantutils.exception') proliantutils_utils = importutils.try_import('proliantutils.utils') def verify_firmware_update_args(func): """Verifies the firmware update arguments.""" @six.wraps(func) def wrapper(self, task, **kwargs): """Wrapper around ``update_firmware`` call. :param task: a TaskManager object. :raises: InvalidParameterValue if validation fails for input arguments of firmware update. """ firmware_update_mode = kwargs.get('firmware_update_mode') firmware_images = kwargs.get('firmware_images') if firmware_update_mode != 'ilo': msg = (_("Invalid firmware update mode '%(mode)s' provided for " "node: %(node)s. 'ilo' is the only supported firmware " "update mode.") % {'mode': firmware_update_mode, 'node': task.node.uuid}) LOG.error(msg) raise exception.InvalidParameterValue(msg) if not firmware_images: msg = _("Firmware images cannot be an empty list or None.") LOG.error(msg) raise exception.InvalidParameterValue(msg) return func(self, task, **kwargs) return wrapper def get_and_validate_firmware_image_info(firmware_image_info): """Validates the firmware image info and returns the retrieved values. :param firmware_image_info: dict object containing the firmware image info :raises: MissingParameterValue, for missing fields (or values) in image info. :raises: InvalidParameterValue, for unsupported firmware component :returns: tuple of firmware url, checksum, component """ image_info = firmware_image_info or {} LOG.debug("Validating firmware image info: %s ... in progress", image_info) missing_fields = [] for field in FIRMWARE_IMAGE_INFO_FIELDS: if not image_info.get(field): missing_fields.append(field) if missing_fields: msg = (_("Firmware image info: %(image_info)s is missing the " "required %(missing)s field/s.") % {'image_info': image_info, 'missing': ", ".join(missing_fields)}) LOG.error(msg) raise exception.MissingParameterValue(msg) component = image_info['component'] component = component.lower() if component not in SUPPORTED_FIRMWARE_UPDATE_COMPONENTS: msg = (_("Component for firmware update is not supported. Provided " "value: %(component)s. 
Supported values are: " "%(supported_components)s") % {'component': component, 'supported_components': ( ", ".join(SUPPORTED_FIRMWARE_UPDATE_COMPONENTS))}) LOG.error(msg) raise exception.InvalidParameterValue(msg) LOG.debug("Validating firmware image info: %s ... done", image_info) return image_info['url'], image_info['checksum'], component class FirmwareProcessor(object): """Firmware file processor This class helps in downloading the firmware file from url, extracting the firmware file (if its in compact format) and makes it ready for firmware update operation. In future, methods can be added as and when required to extend functionality for different firmware file types. """ def __init__(self, url): # :attribute ``self.parsed_url``: structure returned by urlparse self._fine_tune_fw_processor(url) def _fine_tune_fw_processor(self, url): """Fine tunes the firmware processor object based on specified url :param url: url of firmware file :raises: InvalidParameterValue, for unsupported firmware url """ parsed_url = urlparse.urlparse(url) self.parsed_url = parsed_url url_scheme = parsed_url.scheme if url_scheme == 'file': self._download_fw_to = types.MethodType( _download_file_based_fw_to, self) elif url_scheme in ('http', 'https'): self._download_fw_to = types.MethodType( _download_http_based_fw_to, self) elif url_scheme == 'swift': self._download_fw_to = types.MethodType( _download_swift_based_fw_to, self) else: raise exception.InvalidParameterValue( _('This method does not support URL scheme %(url_scheme)s. ' 'Invalid URL %(url)s. The supported firmware URL schemes ' 'are "file", "http", "https" and "swift"') % {'url': url, 'url_scheme': url_scheme}) def process_fw_on(self, node, expected_checksum): """Processes the firmware file from the url This is the template method which downloads the firmware file from url, verifies checksum and extracts the firmware and makes it ready for firmware update operation. ``_download_fw_to`` method is set in the firmware processor object creation factory method, ``get_fw_processor()``, based on the url type. :param node: a single Node. :param expected_checksum: checksum to be checked against. :returns: wrapper object of raw firmware image location :raises: IloOperationError, on failure to process firmware file. :raises: ImageDownloadFailed, on failure to download the original file. :raises: ImageRefValidationFailed, on failure to verify the checksum. :raises: SwiftOperationError, if upload to Swift fails. :raises: ImageUploadFailed, if upload to web server fails. """ filename = os.path.basename(self.parsed_url.path) # create a temp directory where firmware file will be downloaded temp_dir = tempfile.mkdtemp() target_file = os.path.join(temp_dir, filename) # Note(deray): Operations performed in here: # # 1. Download the firmware file to the target file. # 2. Verify the checksum of the downloaded file. # 3. Extract the raw firmware file from its compact format # try: LOG.debug("For firmware update, downloading firmware file " "%(src_file)s to: %(target_file)s ...", {'src_file': self.parsed_url.geturl(), 'target_file': target_file}) self._download_fw_to(target_file) LOG.debug("For firmware update, verifying checksum of file: " "%(target_file)s ...", {'target_file': target_file}) ilo_common.verify_image_checksum(target_file, expected_checksum) # Extracting raw firmware file from target_file ... 
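            # _extract_fw_from_file (defined below) returns a
            # FirmwareImageLocation wrapper plus a boolean saying whether
            # extraction produced a new file (True) or the download was
            # already a raw image (False).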
fw_image_location_obj, is_different_file = (_extract_fw_from_file( node, target_file)) except exception.IronicException: with excutils.save_and_reraise_exception(): # delete the target file along with temp dir and # re-raise the exception shutil.rmtree(temp_dir, ignore_errors=True) # Note(deray): In case of raw (no need for extraction) firmware files, # the same firmware file is returned from the extract method. # Hence, don't blindly delete the firmware file which gets passed on # to extraction operation after successful extract. Check whether the # file is same or not and then go ahead deleting it. if is_different_file: # delete the entire downloaded content along with temp dir. shutil.rmtree(temp_dir, ignore_errors=True) LOG.info(_LI("Final processed firmware location: %s"), fw_image_location_obj.fw_image_location) return fw_image_location_obj def _download_file_based_fw_to(self, target_file): """File based firmware file downloader (copier) It copies the file (url) to temporary location (file location). Original firmware file location (url) is expected in the format "file:///tmp/.." :param target_file: destination file for copying the original firmware file. :raises: ImageDownloadFailed, on failure to copy the original file. """ src_file = self.parsed_url.path with open(target_file, 'wb') as fd: image_service.FileImageService().download(src_file, fd) def _download_http_based_fw_to(self, target_file): """HTTP based firmware file downloader It downloads the file (url) to temporary location (file location). Original firmware file location (url) is expected in the format "http://.." :param target_file: destination file for downloading the original firmware file. :raises: ImageDownloadFailed, on failure to download the original file. """ src_file = self.parsed_url.geturl() with open(target_file, 'wb') as fd: image_service.HttpImageService().download(src_file, fd) def _download_swift_based_fw_to(self, target_file): """Swift based firmware file downloader It generates a temp url for the swift based firmware url and then downloads the firmware file via http based downloader to the target file. Expecting url as swift://containername/objectname :param target_file: destination file for downloading the original firmware file. :raises: SwiftOperationError, on failure to download from swift. :raises: ImageDownloadFailed, on failure to download the original file. """ # Extract container name and object name container = self.parsed_url.netloc objectname = os.path.basename(self.parsed_url.path) timeout = CONF.ilo.swift_object_expiry_timeout # Generate temp url using swift API tempurl = swift.SwiftAPI().get_temp_url(container, objectname, timeout) # set the parsed_url attribute to the newly created tempurl from swift and # delegate the downloading job to the http_based downloader self.parsed_url = urlparse.urlparse(tempurl) _download_http_based_fw_to(self, target_file) def _extract_fw_from_file(node, target_file): """Extracts firmware image file. Extracts the firmware image file through proliantutils and uploads it to the conductor webserver, if needed. :param node: an Ironic node object. :param target_file: firmware file to be extracted from :returns: tuple of: a) wrapper object of raw firmware image location b) a boolean, depending upon whether the raw firmware file was already in raw format (same file remains, no need to extract) or compact format (thereby extracted and hence different file). If uploaded, then it is also a different file. :raises: ImageUploadFailed, if upload to web server fails.
:raises: SwiftOperationError, if upload to Swift fails. :raises: IloOperationError, on failure to process firmware file. """ ilo_object = ilo_common.get_ilo_object(node) try: # Note(deray): Based upon different iLO firmwares, the firmware file # which needs to be updated has to be either an http/https or a simple # file location. If it has to be a http/https location, then conductor # will take care of uploading the firmware file to web server or # swift (providing a temp url). fw_image_location, to_upload, is_extracted = ( proliantutils_utils.process_firmware_image(target_file, ilo_object)) except (proliantutils_error.InvalidInputError, proliantutils_error.ImageExtractionFailed) as proliantutils_exc: operation = _("Firmware file extracting as part of manual cleaning") raise exception.IloOperationError(operation=operation, error=proliantutils_exc) is_different_file = is_extracted fw_image_filename = os.path.basename(fw_image_location) fw_image_location_obj = FirmwareImageLocation(fw_image_location, fw_image_filename) if to_upload: is_different_file = True try: if CONF.ilo.use_web_server_for_images: # upload firmware image file to conductor webserver LOG.debug("For firmware update on node %(node)s, hosting " "firmware file %(firmware_image)s on web server ...", {'firmware_image': fw_image_location, 'node': node.uuid}) fw_image_uploaded_url = ilo_common.copy_image_to_web_server( fw_image_location, fw_image_filename) fw_image_location_obj.fw_image_location = fw_image_uploaded_url fw_image_location_obj.remove = types.MethodType( _remove_webserver_based_me, fw_image_location_obj) else: # upload firmware image file to swift LOG.debug("For firmware update on node %(node)s, hosting " "firmware file %(firmware_image)s on swift ...", {'firmware_image': fw_image_location, 'node': node.uuid}) fw_image_uploaded_url = ilo_common.copy_image_to_swift( fw_image_location, fw_image_filename) fw_image_location_obj.fw_image_location = fw_image_uploaded_url fw_image_location_obj.remove = types.MethodType( _remove_swift_based_me, fw_image_location_obj) finally: if is_extracted: # Note(deray): remove the file `fw_image_location` irrespective # of status of uploading (success or failure) and only if # extracted (and not passed as in plain binary format). If the # file is passed in binary format, then the invoking method # takes care of handling the deletion of the file. ilo_common.remove_single_or_list_of_files(fw_image_location) LOG.debug("For firmware update on node %(node)s, hosting firmware " "file: %(fw_image_location)s ... done. Hosted firmware " "file: %(fw_image_uploaded_url)s", {'fw_image_location': fw_image_location, 'node': node.uuid, 'fw_image_uploaded_url': fw_image_uploaded_url}) else: fw_image_location_obj.remove = types.MethodType( _remove_file_based_me, fw_image_location_obj) return fw_image_location_obj, is_different_file class FirmwareImageLocation(object): """Firmware image location class This class acts as a wrapper class for the firmware image location. It primarily helps in removing the firmware files from their respective locations, made available for firmware update operation. """ def __init__(self, fw_image_location, fw_image_filename): """Keeps hold of image location and image filename""" self.fw_image_location = fw_image_location self.fw_image_filename = fw_image_filename def remove(self): """Exposed method to remove the wrapped firmware file This method gets overriden by the remove method for the respective type of firmware file location it wraps. 
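        For example, for a Swift-hosted firmware image,
        ``_extract_fw_from_file`` above rebinds this method to
        ``_remove_swift_based_me`` below.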
""" pass def _remove_file_based_me(self): """Removes file based firmware image location""" ilo_common.remove_single_or_list_of_files(self.fw_image_location) def _remove_swift_based_me(self): """Removes swift based firmware image location (by its object name)""" ilo_common.remove_image_from_swift(self.fw_image_filename, "firmware update") def _remove_webserver_based_me(self): """Removes webserver based firmware image location (by its file name)""" ilo_common.remove_image_from_web_server(self.fw_image_filename) ironic-5.1.0/ironic/drivers/modules/ilo/deploy.py0000664000567000056710000003222412674513466023224 0ustar jenkinsjenkins00000000000000# Copyright 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ iLO Deploy Driver(s) and supporting methods. """ from oslo_config import cfg from oslo_log import log as logging from ironic.common import boot_devices from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LW from ironic.common import states from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules import agent from ironic.drivers.modules import deploy_utils from ironic.drivers.modules.ilo import boot as ilo_boot from ironic.drivers.modules.ilo import common as ilo_common from ironic.drivers.modules import iscsi_deploy from ironic.drivers.modules import pxe LOG = logging.getLogger(__name__) CONF = cfg.CONF clean_opts = [ cfg.IntOpt('clean_priority_erase_devices', help=_('Priority for erase devices clean step. If unset, ' 'it defaults to 10. If set to 0, the step will be ' 'disabled and will not run during cleaning.')) ] CONF.import_opt('pxe_append_params', 'ironic.drivers.modules.iscsi_deploy', group='pxe') CONF.import_opt('swift_ilo_container', 'ironic.drivers.modules.ilo.common', group='ilo') CONF.register_opts(clean_opts, group='ilo') def _prepare_agent_vmedia_boot(task): """Ejects virtual media devices and prepares for vmedia boot.""" # Eject all virtual media devices, as we are going to use them # during deploy. ilo_common.eject_vmedia_devices(task) deploy_ramdisk_opts = deploy_utils.build_agent_options(task.node) deploy_iso = task.node.driver_info['ilo_deploy_iso'] ilo_common.setup_vmedia(task, deploy_iso, deploy_ramdisk_opts) manager_utils.node_power_action(task, states.REBOOT) def _disable_secure_boot(task): """Disables secure boot on node, if secure boot is enabled on node. This method checks if secure boot is enabled on node. If enabled, it disables same and returns True. :param task: a TaskManager instance containing the node to act on. :returns: It returns True, if secure boot was successfully disabled on the node. It returns False, if secure boot on node is in disabled state or if secure boot feature is not supported by the node. :raises: IloOperationError, if some operation on iLO failed. 
""" cur_sec_state = False try: cur_sec_state = ilo_common.get_secure_boot_mode(task) except exception.IloOperationNotSupported: LOG.debug('Secure boot mode is not supported for node %s', task.node.uuid) return False if cur_sec_state: LOG.debug('Disabling secure boot for node %s', task.node.uuid) ilo_common.set_secure_boot_mode(task, False) return True return False def _prepare_node_for_deploy(task): """Common preparatory steps for all iLO drivers. This method performs common preparatory steps required for all drivers. 1. Power off node 2. Disables secure boot, if it is in enabled state. 3. Updates boot_mode capability to 'uefi' if secure boot is requested. 4. Changes boot mode of the node if secure boot is disabled currently. :param task: a TaskManager instance containing the node to act on. :raises: IloOperationError, if some operation on iLO failed. """ manager_utils.node_power_action(task, states.POWER_OFF) # Boot mode can be changed only if secure boot is in disabled state. # secure boot and boot mode cannot be changed together. change_boot_mode = True # Disable secure boot on the node if it is in enabled state. if _disable_secure_boot(task): change_boot_mode = False if change_boot_mode: ilo_common.update_boot_mode(task) else: # Need to update boot mode that will be used during deploy, if one is # not provided. # Since secure boot was disabled, we are in 'uefi' boot mode. if deploy_utils.get_boot_mode_for_deploy(task.node) is None: instance_info = task.node.instance_info instance_info['deploy_boot_mode'] = 'uefi' task.node.instance_info = instance_info task.node.save() def _disable_secure_boot_if_supported(task): """Disables secure boot on node, does not throw if its not supported. :param task: a TaskManager instance containing the node to act on. :raises: IloOperationError, if some operation on iLO failed. """ try: ilo_common.update_secure_boot_mode(task, False) # We need to handle IloOperationNotSupported exception so that if # the user has incorrectly specified the Node capability # 'secure_boot' to a node that does not have that capability and # attempted deploy. Handling this exception here, will help the # user to tear down such a Node. except exception.IloOperationNotSupported: LOG.warning(_LW('Secure boot mode is not supported for node %s'), task.node.uuid) class IloVirtualMediaIscsiDeploy(iscsi_deploy.ISCSIDeploy): def get_properties(self): return {} @task_manager.require_exclusive_lock def tear_down(self, task): """Tear down a previous deployment on the task's node. Power off the node. All actual clean-up is done in the clean_up() method which should be called separately. :param task: a TaskManager instance containing the node to act on. :returns: deploy state DELETED. :raises: IloOperationError, if some operation on iLO failed. """ manager_utils.node_power_action(task, states.POWER_OFF) _disable_secure_boot_if_supported(task) return super(IloVirtualMediaIscsiDeploy, self).tear_down(task) def prepare(self, task): """Prepare the deployment environment for this task's node. :param task: a TaskManager instance containing the node to act on. :raises: IloOperationError, if some operation on iLO failed. """ if task.node.provision_state != states.ACTIVE: _prepare_node_for_deploy(task) super(IloVirtualMediaIscsiDeploy, self).prepare(task) def prepare_cleaning(self, task): """Boot into the agent to prepare for cleaning. :param task: a TaskManager object containing the node :returns: states.CLEANWAIT to signify an asynchronous prepare. 
:raises NodeCleaningFailure: if the previous cleaning ports cannot be removed or if new cleaning ports cannot be created :raises: IloOperationError, if some operation on iLO failed. """ # Powering off the Node before initiating boot for node cleaning. # If node is in system POST, setting boot device fails. manager_utils.node_power_action(task, states.POWER_OFF) return super(IloVirtualMediaIscsiDeploy, self).prepare_cleaning(task) class IloVirtualMediaAgentDeploy(agent.AgentDeploy): """Interface for deploy-related actions.""" def get_properties(self): """Return the properties of the interface. :returns: dictionary of : entries. """ return ilo_boot.COMMON_PROPERTIES @task_manager.require_exclusive_lock def tear_down(self, task): """Tear down a previous deployment on the task's node. :param task: a TaskManager instance. :returns: states.DELETED :raises: IloOperationError, if some operation on iLO failed. """ manager_utils.node_power_action(task, states.POWER_OFF) _disable_secure_boot_if_supported(task) return super(IloVirtualMediaAgentDeploy, self).tear_down(task) def prepare(self, task): """Prepare the deployment environment for this node. :param task: a TaskManager instance. :raises: IloOperationError, if some operation on iLO failed. """ if task.node.provision_state != states.ACTIVE: _prepare_node_for_deploy(task) super(IloVirtualMediaAgentDeploy, self).prepare(task) def prepare_cleaning(self, task): """Boot into the agent to prepare for cleaning. :param task: a TaskManager object containing the node :returns: states.CLEANWAIT to signify an asynchronous prepare. :raises NodeCleaningFailure: if the previous cleaning ports cannot be removed or if new cleaning ports cannot be created :raises: IloOperationError, if some operation on iLO failed. """ # Powering off the Node before initiating boot for node cleaning. # If node is in system POST, setting boot device fails. manager_utils.node_power_action(task, states.POWER_OFF) return super(IloVirtualMediaAgentDeploy, self).prepare_cleaning(task) def get_clean_steps(self, task): """Get the list of clean steps from the agent. :param task: a TaskManager object containing the node :raises NodeCleaningFailure: if the clean steps are not yet available (cached), for example, when a node has just been enrolled and has not been cleaned yet. :returns: A list of clean step dictionaries """ # TODO(stendulker): All drivers use CONF.deploy.erase_devices_priority # agent_ilo driver should also use the same. Defect has been filed for # the same. # https://bugs.launchpad.net/ironic/+bug/1515871 new_priorities = { 'erase_devices': CONF.ilo.clean_priority_erase_devices, } return deploy_utils.agent_get_clean_steps( task, interface='deploy', override_priorities=new_priorities) class IloPXEDeploy(iscsi_deploy.ISCSIDeploy): def prepare(self, task): """Prepare the deployment environment for this task's node. If the node's 'capabilities' property includes a boot_mode, that boot mode will be applied for the node. Otherwise, the existing boot mode of the node is used in the node's 'capabilities' property. PXEDeploys' prepare method is then called, to prepare the deploy environment for the node :param task: a TaskManager instance containing the node to act on. :raises: IloOperationError, if some operation on iLO failed. :raises: InvalidParameterValue, if some information is invalid. """ if task.node.provision_state != states.ACTIVE: _prepare_node_for_deploy(task) # Check if 'boot_option' is compatible with 'boot_mode' and image. 
# Whole disk image deploy is not supported in UEFI boot mode if # 'boot_option' is not 'local'. # If boot_mode is not set in the node properties/capabilities then # PXEDeploy.validate() would pass. # Boot mode gets updated in prepare stage. It is possible that the # deploy boot mode is 'uefi' after call to update_boot_mode(). # Hence a re-check is required here. pxe.validate_boot_option_for_uefi(task.node) super(IloPXEDeploy, self).prepare(task) def deploy(self, task): """Start deployment of the task's node. This method sets the boot device to 'NETWORK' and then calls PXEDeploy's deploy method to deploy on the given node. :param task: a TaskManager instance containing the node to act on. :returns: deploy state DEPLOYWAIT. """ manager_utils.node_set_boot_device(task, boot_devices.PXE) return super(IloPXEDeploy, self).deploy(task) @task_manager.require_exclusive_lock def tear_down(self, task): """Tear down a previous deployment on the task's node. :param task: a TaskManager instance. :returns: states.DELETED """ # Powering off the Node before disabling secure boot. If the node is # in POST, disabling secure boot will fail. manager_utils.node_power_action(task, states.POWER_OFF) _disable_secure_boot_if_supported(task) return super(IloPXEDeploy, self).tear_down(task) def prepare_cleaning(self, task): """Boot into the agent to prepare for cleaning. :param task: a TaskManager object containing the node :returns: states.CLEANWAIT to signify an asynchronous prepare. :raises NodeCleaningFailure: if the previous cleaning ports cannot be removed or if new cleaning ports cannot be created :raises: IloOperationError, if some operation on iLO failed. """ # Powering off the Node before initiating boot for node cleaning. # If node is in system POST, setting boot device fails. manager_utils.node_power_action(task, states.POWER_OFF) return super(IloPXEDeploy, self).prepare_cleaning(task) ironic-5.1.0/ironic/drivers/modules/ilo/boot.py0000664000567000056710000004072612674513466022675 0ustar jenkinsjenkins00000000000000# Copyright 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Boot Interface for iLO drivers and its supporting methods.
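Depending on configuration, the boot ISO used here may come from the node's
instance_info, from Glance image metadata, or be generated on the fly and
hosted on Swift or the conductor's web server (see _get_boot_iso below).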
""" import os import tempfile from ironic_lib import utils as ironic_utils from oslo_config import cfg from oslo_log import log as logging from oslo_utils import excutils import six.moves.urllib.parse as urlparse from ironic.common import boot_devices from ironic.common import exception from ironic.common.glance_service import service_utils from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LW from ironic.common import image_service from ironic.common import images from ironic.common import swift from ironic.conductor import utils as manager_utils from ironic.drivers import base from ironic.drivers.modules import deploy_utils from ironic.drivers.modules.ilo import common as ilo_common LOG = logging.getLogger(__name__) CONF = cfg.CONF REQUIRED_PROPERTIES = { 'ilo_deploy_iso': _("UUID (from Glance) of the deployment ISO. " "Required.") } COMMON_PROPERTIES = REQUIRED_PROPERTIES def parse_driver_info(node): """Gets the driver specific Node deployment info. This method validates whether the 'driver_info' property of the supplied node contains the required information for this driver to deploy images to the node. :param node: a single Node. :returns: A dict with the driver_info values. :raises: MissingParameterValue, if any of the required parameters are missing. """ info = node.driver_info d_info = {} d_info['ilo_deploy_iso'] = info.get('ilo_deploy_iso') error_msg = _("Error validating iLO virtual media deploy. Some parameters" " were missing in node's driver_info") deploy_utils.check_for_missing_params(d_info, error_msg) return d_info def _get_boot_iso_object_name(node): """Returns the boot iso object name for a given node. :param node: the node for which object name is to be provided. """ return "boot-%s" % node.uuid def _get_boot_iso(task, root_uuid): """This method returns a boot ISO to boot the node. It chooses one of the three options in the order as below: 1. Does nothing if 'ilo_boot_iso' is present in node's instance_info and 'boot_iso_created_in_web_server' is not set in 'driver_internal_info'. 2. Image deployed has a meta-property 'boot_iso' in Glance. This should refer to the UUID of the boot_iso which exists in Glance. 3. Generates a boot ISO on the fly using kernel and ramdisk mentioned in the image deployed. It uploads the generated boot ISO to Swift. :param task: a TaskManager instance containing the node to act on. :param root_uuid: the uuid of the root partition. :returns: boot ISO URL. Should be either of below: * A Swift object - It should be of format 'swift:'. It is assumed that the image object is present in CONF.ilo.swift_ilo_container; * A Glance image - It should be format 'glance://' or just ; * An HTTP URL. On error finding the boot iso, it returns None. :raises: MissingParameterValue, if any of the required parameters are missing in the node's driver_info or instance_info. :raises: InvalidParameterValue, if any of the parameters have invalid value in the node's driver_info or instance_info. :raises: SwiftOperationError, if operation with Swift fails. :raises: ImageCreationFailed, if creation of boot ISO failed. :raises: exception.ImageRefValidationFailed if ilo_boot_iso is not HTTP(S) URL. 
""" LOG.debug("Trying to get a boot ISO to boot the baremetal node") # Option 1 - Check if user has provided ilo_boot_iso in node's # instance_info driver_internal_info = task.node.driver_internal_info boot_iso_created_in_web_server = ( driver_internal_info.get('boot_iso_created_in_web_server')) if (task.node.instance_info.get('ilo_boot_iso') and not boot_iso_created_in_web_server): LOG.debug("Using ilo_boot_iso provided in node's instance_info") boot_iso = task.node.instance_info['ilo_boot_iso'] if not service_utils.is_glance_image(boot_iso): try: image_service.HttpImageService().validate_href(boot_iso) except exception.ImageRefValidationFailed: with excutils.save_and_reraise_exception(): LOG.error(_LE("Virtual media deploy accepts only Glance " "images or HTTP(S) URLs as " "instance_info['ilo_boot_iso']. Either %s " "is not a valid HTTP(S) URL or is " "not reachable."), boot_iso) return task.node.instance_info['ilo_boot_iso'] # Option 2 - Check if user has provided a boot_iso in Glance. If boot_iso # is a supported non-glance href execution will proceed to option 3. deploy_info = _parse_deploy_info(task.node) image_href = deploy_info['image_source'] image_properties = ( images.get_image_properties( task.context, image_href, ['boot_iso', 'kernel_id', 'ramdisk_id'])) boot_iso_uuid = image_properties.get('boot_iso') kernel_href = (task.node.instance_info.get('kernel') or image_properties.get('kernel_id')) ramdisk_href = (task.node.instance_info.get('ramdisk') or image_properties.get('ramdisk_id')) if boot_iso_uuid: LOG.debug("Found boot_iso %s in Glance", boot_iso_uuid) return boot_iso_uuid if not kernel_href or not ramdisk_href: LOG.error(_LE("Unable to find kernel or ramdisk for " "image %(image)s to generate boot ISO for %(node)s"), {'image': image_href, 'node': task.node.uuid}) return # NOTE(rameshg87): Functionality to share the boot ISOs created for # similar instances (instances with same deployed image) is # not implemented as of now. Creation/Deletion of such a shared boot ISO # will require synchronisation across conductor nodes for the shared boot # ISO. Such a synchronisation mechanism doesn't exist in ironic as of now. # Option 3 - Create boot_iso from kernel/ramdisk, upload to Swift # or web server and provide its name. deploy_iso_uuid = deploy_info['ilo_deploy_iso'] boot_mode = deploy_utils.get_boot_mode_for_deploy(task.node) boot_iso_object_name = _get_boot_iso_object_name(task.node) kernel_params = CONF.pxe.pxe_append_params with tempfile.NamedTemporaryFile(dir=CONF.tempdir) as fileobj: boot_iso_tmp_file = fileobj.name images.create_boot_iso(task.context, boot_iso_tmp_file, kernel_href, ramdisk_href, deploy_iso_uuid, root_uuid, kernel_params, boot_mode) if CONF.ilo.use_web_server_for_images: boot_iso_url = ( ilo_common.copy_image_to_web_server(boot_iso_tmp_file, boot_iso_object_name)) driver_internal_info = task.node.driver_internal_info driver_internal_info['boot_iso_created_in_web_server'] = True task.node.driver_internal_info = driver_internal_info task.node.save() LOG.debug("Created boot_iso %(boot_iso)s for node %(node)s", {'boot_iso': boot_iso_url, 'node': task.node.uuid}) return boot_iso_url else: container = CONF.ilo.swift_ilo_container swift_api = swift.SwiftAPI() swift_api.create_object(container, boot_iso_object_name, boot_iso_tmp_file) LOG.debug("Created boot_iso %s in Swift", boot_iso_object_name) return 'swift:%s' % boot_iso_object_name def _clean_up_boot_iso_for_instance(node): """Deletes the boot ISO if it was created for the instance. 
:param node: an ironic node object. """ ilo_boot_iso = node.instance_info.get('ilo_boot_iso') if not ilo_boot_iso: return if ilo_boot_iso.startswith('swift'): swift_api = swift.SwiftAPI() container = CONF.ilo.swift_ilo_container boot_iso_object_name = _get_boot_iso_object_name(node) try: swift_api.delete_object(container, boot_iso_object_name) except exception.SwiftOperationError as e: LOG.exception(_LE("Failed to clean up boot ISO for node " "%(node)s. Error: %(error)s."), {'node': node.uuid, 'error': e}) elif CONF.ilo.use_web_server_for_images: result = urlparse.urlparse(ilo_boot_iso) ilo_boot_iso_name = os.path.basename(result.path) boot_iso_path = os.path.join( CONF.deploy.http_root, ilo_boot_iso_name) ironic_utils.unlink_without_raise(boot_iso_path) def _parse_deploy_info(node): """Gets the instance and driver specific Node deployment info. This method validates whether the 'instance_info' and 'driver_info' properties of the supplied node contain the required information for this driver to deploy images to the node. :param node: a single Node. :returns: A dict with the instance_info and driver_info values. :raises: MissingParameterValue, if any of the required parameters are missing. :raises: InvalidParameterValue, if any of the parameters have an invalid value. """ info = {} info.update(deploy_utils.get_image_instance_info(node)) info.update(parse_driver_info(node)) return info class IloVirtualMediaBoot(base.BootInterface): def get_properties(self): return COMMON_PROPERTIES def validate(self, task): """Validate the deployment information for the task's node. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue, if some information is invalid. :raises: MissingParameterValue if 'kernel_id' and 'ramdisk_id' are missing in the Glance image, or 'kernel' and 'ramdisk' are not provided in instance_info for a non-Glance image. """ node = task.node d_info = _parse_deploy_info(node) if node.driver_internal_info.get('is_whole_disk_image'): props = [] elif service_utils.is_glance_image(d_info['image_source']): props = ['kernel_id', 'ramdisk_id'] else: props = ['kernel', 'ramdisk'] deploy_utils.validate_image_properties(task.context, d_info, props) def prepare_ramdisk(self, task, ramdisk_params): """Prepares the boot of the deploy ramdisk using virtual media. This method prepares the boot of the deploy ramdisk after reading relevant information from the node's driver_info and instance_info. :param task: a task from TaskManager. :param ramdisk_params: the parameters to be passed to the ramdisk. :returns: None :raises: MissingParameterValue, if some information is missing in node's driver_info or instance_info. :raises: InvalidParameterValue, if some information provided is invalid. :raises: IronicException, if some power or set boot device operation failed on the node. :raises: IloOperationError, if some operation on iLO failed. """ node = task.node # Clear ilo_boot_iso if it's a glance image, to force recreating # it (or reusing an existing one in Glance). # This is mainly for the rebuild scenario. if service_utils.is_glance_image( node.instance_info.get('image_source')): instance_info = node.instance_info instance_info.pop('ilo_boot_iso', None) node.instance_info = instance_info node.save() # Eject all virtual media devices, as we are going to use them # during deploy.
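        # NOTE: As a rough illustration (the MAC is hypothetical), once the
        # BOOTIF parameter is added below, ramdisk_params may look like:
        #     {'BOOTIF': 'aa:bb:cc:dd:ee:ff', ...}
        # together with any other driver-supplied options; these are passed
        # to the ramdisk via the virtual floppy built in setup_vmedia().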
ilo_common.eject_vmedia_devices(task) deploy_nic_mac = deploy_utils.get_single_nic_with_vif_port_id(task) ramdisk_params['BOOTIF'] = deploy_nic_mac deploy_iso = node.driver_info['ilo_deploy_iso'] ilo_common.setup_vmedia(task, deploy_iso, ramdisk_params) def prepare_instance(self, task): """Prepares the boot of instance. This method prepares the boot of the instance after reading relevant information from the node's instance_info. It does the following depending on boot_option for deploy: - If the boot_option requested for this deploy is 'local' or image is a whole disk image, then it sets the node to boot from disk. - Otherwise it finds/creates the boot ISO to boot the instance image, attaches the boot ISO to the bare metal and then sets the node to boot from CDROM. :param task: a task from TaskManager. :returns: None :raises: IloOperationError, if some operation on iLO failed. """ ilo_common.cleanup_vmedia_boot(task) # For iscsi_ilo driver, we boot from disk every time if the image # deployed is a whole disk image. node = task.node iwdi = node.driver_internal_info.get('is_whole_disk_image') if deploy_utils.get_boot_option(node) == "local" or iwdi: manager_utils.node_set_boot_device(task, boot_devices.DISK, persistent=True) else: drv_int_info = node.driver_internal_info root_uuid_or_disk_id = drv_int_info.get('root_uuid_or_disk_id') if root_uuid_or_disk_id: self._configure_vmedia_boot(task, root_uuid_or_disk_id) else: LOG.warning(_LW("The UUID for the root partition could not " "be found for node %s"), node.uuid) def clean_up_instance(self, task): """Cleans up the boot of instance. This method cleans up the environment that was setup for booting the instance. It ejects virtual media :param task: a task from TaskManager. :returns: None :raises: IloOperationError, if some operation on iLO failed. """ _clean_up_boot_iso_for_instance(task.node) driver_internal_info = task.node.driver_internal_info driver_internal_info.pop('boot_iso_created_in_web_server', None) driver_internal_info.pop('root_uuid_or_disk_id', None) task.node.driver_internal_info = driver_internal_info task.node.save() ilo_common.cleanup_vmedia_boot(task) def clean_up_ramdisk(self, task): """Cleans up the boot of ironic ramdisk. This method cleans up virtual media devices setup for the deploy ramdisk. :param task: a task from TaskManager. :returns: None :raises: IloOperationError, if some operation on iLO failed. """ ilo_common.cleanup_vmedia_boot(task) def _configure_vmedia_boot(self, task, root_uuid): """Configure vmedia boot for the node. :param task: a task from TaskManager. :param root_uuid: uuid of the root partition :returns: None :raises: IloOperationError, if some operation on iLO failed. """ node = task.node boot_iso = _get_boot_iso(task, root_uuid) if not boot_iso: LOG.error(_LE("Cannot get boot ISO for node %s"), node.uuid) return # Upon deploy complete, some distros cloud images reboot the system as # part of its configuration. Hence boot device should be persistent and # not one-time. ilo_common.setup_vmedia_for_boot(task, boot_iso) manager_utils.node_set_boot_device(task, boot_devices.CDROM, persistent=True) i_info = node.instance_info i_info['ilo_boot_iso'] = boot_iso node.instance_info = i_info node.save() ironic-5.1.0/ironic/drivers/modules/ilo/common.py0000664000567000056710000007225512674513466023230 0ustar jenkinsjenkins00000000000000# Copyright 2014 Hewlett-Packard Development Company, L.P. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Common functionalities shared between different iLO modules. """ import os import shutil import tempfile from ironic_lib import utils as ironic_utils from oslo_config import cfg from oslo_log import log as logging from oslo_utils import importutils import six import six.moves.urllib.parse as urlparse from six.moves.urllib.parse import urljoin from ironic.common import boot_devices from ironic.common import exception from ironic.common.glance_service import service_utils from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LI from ironic.common.i18n import _LW from ironic.common import images from ironic.common import swift from ironic.common import utils from ironic.conductor import utils as manager_utils from ironic.drivers.modules import deploy_utils ilo_client = importutils.try_import('proliantutils.ilo.client') ilo_error = importutils.try_import('proliantutils.exception') STANDARD_LICENSE = 1 ESSENTIALS_LICENSE = 2 ADVANCED_LICENSE = 3 opts = [ cfg.IntOpt('client_timeout', default=60, help=_('Timeout (in seconds) for iLO operations')), cfg.PortOpt('client_port', default=443, help=_('Port to be used for iLO operations')), cfg.StrOpt('swift_ilo_container', default='ironic_ilo_container', help=_('The Swift iLO container to store data.')), cfg.IntOpt('swift_object_expiry_timeout', default=900, help=_('Amount of time in seconds for Swift objects to ' 'auto-expire.')), cfg.BoolOpt('use_web_server_for_images', default=False, help=_('Set this to True to use http web server to host ' 'floppy images and generated boot ISO. This ' 'requires http_root and http_url to be configured ' 'in the [deploy] section of the config file. If this ' 'is set to False, then Ironic will use Swift ' 'to host the floppy images and generated ' 'boot_iso.')), ] CONF = cfg.CONF CONF.register_opts(opts, group='ilo') LOG = logging.getLogger(__name__) REQUIRED_PROPERTIES = { 'ilo_address': _("IP address or hostname of the iLO. Required."), 'ilo_username': _("username for the iLO with administrator privileges. " "Required."), 'ilo_password': _("password for ilo_username. Required.") } OPTIONAL_PROPERTIES = { 'client_port': _("port to be used for iLO operations. Optional."), 'client_timeout': _("timeout (in seconds) for iLO operations. Optional."), } CONSOLE_PROPERTIES = { 'console_port': _("node's UDP port to connect to. Only required for " "console access.") } CLEAN_PROPERTIES = { 'ilo_change_password': _("new password for iLO. Required if the clean " "step 'reset_ilo_credential' is enabled.") } COMMON_PROPERTIES = REQUIRED_PROPERTIES.copy() COMMON_PROPERTIES.update(OPTIONAL_PROPERTIES) DEFAULT_BOOT_MODE = 'LEGACY' BOOT_MODE_GENERIC_TO_ILO = {'bios': 'legacy', 'uefi': 'uefi'} BOOT_MODE_ILO_TO_GENERIC = dict( (v, k) for (k, v) in BOOT_MODE_GENERIC_TO_ILO.items()) def copy_image_to_web_server(source_file_path, destination): """Copies the given image to the http web server. This method copies the given image to the http_root location. 
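    For example, assuming [deploy]http_url = http://192.0.2.10 and
    [deploy]http_root = /httpboot (illustrative values), a destination of
    'boot-1234' results in the copy at /httpboot/boot-1234 being served
    at http://192.0.2.10/boot-1234.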
It enables read-write access to the image else the deploy fails as the image file at the web_server url is inaccessible. :param source_file_path: The absolute path of the image file which needs to be copied to the web server root. :param destination: The name of the file that will contain the copied image. :raises: ImageUploadFailed exception if copying the source file to the web server fails. :returns: image url after the source image is uploaded. """ image_url = urljoin(CONF.deploy.http_url, destination) image_path = os.path.join(CONF.deploy.http_root, destination) try: shutil.copyfile(source_file_path, image_path) except IOError as exc: raise exception.ImageUploadFailed(image_name=destination, web_server=CONF.deploy.http_url, reason=exc) os.chmod(image_path, 0o644) return image_url def remove_image_from_web_server(object_name): """Removes the given image from the configured web server. This method removes the given image from the http_root location, if the image exists. :param object_name: The name of the image file which needs to be removed from the web server root. """ image_path = os.path.join(CONF.deploy.http_root, object_name) ironic_utils.unlink_without_raise(image_path) def copy_image_to_swift(source_file_path, destination_object_name): """Uploads the given image to swift. This method copies the given image to swift. :param source_file_path: The absolute path of the image file which needs to be copied to swift. :param destination_object_name: The name of the object that will contain the copied image. :raises: SwiftOperationError, if any operation with Swift fails. :returns: temp url from swift after the source image is uploaded. """ container = CONF.ilo.swift_ilo_container timeout = CONF.ilo.swift_object_expiry_timeout object_headers = {'X-Delete-After': timeout} swift_api = swift.SwiftAPI() swift_api.create_object(container, destination_object_name, source_file_path, object_headers=object_headers) temp_url = swift_api.get_temp_url(container, destination_object_name, timeout) LOG.debug("Uploaded image %(destination_object_name)s to %(container)s.", {'destination_object_name': destination_object_name, 'container': container}) return temp_url def remove_image_from_swift(object_name, associated_with=None): """Removes the given image from swift. This method removes the given image name from swift. It deletes the image if it exists in CONF.ilo.swift_ilo_container :param object_name: The name of the object which needs to be removed from swift. :param associated_with: string to depict the component/operation this object is associated to. """ container = CONF.ilo.swift_ilo_container try: swift_api = swift.SwiftAPI() swift_api.delete_object(container, object_name) except exception.SwiftObjectNotFoundError as e: LOG.warning( _LW("Temporary object %(associated_with_msg)s" "was already deleted from Swift. Error: %(err)s"), {'associated_with_msg': ("associated with %s " % associated_with if associated_with else ""), 'err': e}) except exception.SwiftOperationError as e: LOG.exception( _LE("Error while deleting temporary swift object %(object_name)s " "%(associated_with_msg)s from %(container)s. Error: %(err)s"), {'object_name': object_name, 'container': container, 'associated_with_msg': ("associated with %s" % associated_with if associated_with else ""), 'err': e}) def parse_driver_info(node): """Gets the driver specific Node info. This method validates whether the 'driver_info' property of the supplied node contains the required information for this driver. :param node: an ironic Node object. 
:returns: a dict containing information from driver_info (or where applicable, config values). :raises: InvalidParameterValue if any parameters are incorrect :raises: MissingParameterValue if some mandatory information is missing on the node """ info = node.driver_info d_info = {} missing_info = [] for param in REQUIRED_PROPERTIES: try: d_info[param] = info[param] except KeyError: missing_info.append(param) if missing_info: raise exception.MissingParameterValue(_( "The following required iLO parameters are missing from the " "node's driver_info: %s") % missing_info) not_integers = [] for param in OPTIONAL_PROPERTIES: value = info.get(param, CONF.ilo.get(param)) if param == "client_port": d_info[param] = utils.validate_network_port(value, param) else: try: d_info[param] = int(value) except ValueError: not_integers.append(param) for param in CONSOLE_PROPERTIES: value = info.get(param) if value: # Currently there's only "console_port" parameter # in CONSOLE_PROPERTIES if param == "console_port": d_info[param] = utils.validate_network_port(value, param) if not_integers: raise exception.InvalidParameterValue(_( "The following iLO parameters from the node's driver_info " "should be integers: %s") % not_integers) return d_info def get_ilo_object(node): """Gets an IloClient object from proliantutils library. Given an ironic node object, this method gives back a IloClient object to do operations on the iLO. :param node: an ironic node object. :returns: an IloClient object. :raises: InvalidParameterValue on invalid inputs. :raises: MissingParameterValue if some mandatory information is missing on the node """ driver_info = parse_driver_info(node) ilo_object = ilo_client.IloClient(driver_info['ilo_address'], driver_info['ilo_username'], driver_info['ilo_password'], driver_info['client_timeout'], driver_info['client_port']) return ilo_object def get_ilo_license(node): """Gives the current installed license on the node. Given an ironic node object, this method queries the iLO for currently installed license and returns it back. :param node: an ironic node object. :returns: a constant defined in this module which refers to the current license installed on the node. :raises: InvalidParameterValue on invalid inputs. :raises: MissingParameterValue if some mandatory information is missing on the node :raises: IloOperationError if it failed to retrieve the installed licenses from the iLO. """ # Get the ilo client object, and then the license from the iLO ilo_object = get_ilo_object(node) try: license_info = ilo_object.get_all_licenses() except ilo_error.IloError as ilo_exception: raise exception.IloOperationError(operation=_('iLO license check'), error=str(ilo_exception)) # Check the license to see if the given license exists current_license_type = license_info['LICENSE_TYPE'] if current_license_type.endswith("Advanced"): return ADVANCED_LICENSE elif current_license_type.endswith("Essentials"): return ESSENTIALS_LICENSE else: return STANDARD_LICENSE def update_ipmi_properties(task): """Update ipmi properties to node driver_info :param task: a task from TaskManager. """ node = task.node info = node.driver_info # updating ipmi credentials info['ipmi_address'] = info.get('ilo_address') info['ipmi_username'] = info.get('ilo_username') info['ipmi_password'] = info.get('ilo_password') if 'console_port' in info: info['ipmi_terminal_port'] = info['console_port'] # saving ipmi credentials to task object task.node.driver_info = info def _get_floppy_image_name(node): """Returns the floppy image name for a given node. 
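    For example, a node whose UUID is 1be26c0b-03f2-4d2e-ae87-c02d7f33c123
    (illustrative) gets the object name
    'image-1be26c0b-03f2-4d2e-ae87-c02d7f33c123'.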
:param node: the node for which image name is to be provided. """ return "image-%s" % node.uuid def _prepare_floppy_image(task, params): """Prepares the floppy image for passing the parameters. This method prepares a temporary vfat filesystem image. Then it adds a file into the image which contains the parameters to be passed to the ramdisk. After adding the parameters, it then uploads the file either to Swift in 'swift_ilo_container', setting it to auto-expire after 'swift_object_expiry_timeout' seconds or in web server. Then it returns the temp url for the Swift object or the http url for the uploaded floppy image depending upon value of CONF.ilo.use_web_server_for_images. :param task: a TaskManager instance containing the node to act on. :param params: a dictionary containing 'parameter name'->'value' mapping to be passed to the deploy ramdisk via the floppy image. :raises: ImageCreationFailed, if it failed while creating the floppy image. :raises: ImageUploadFailed, if copying the source file to the web server fails. :raises: SwiftOperationError, if any operation with Swift fails. :returns: the HTTP image URL or the Swift temp url for the floppy image. """ with tempfile.NamedTemporaryFile( dir=CONF.tempdir) as vfat_image_tmpfile_obj: vfat_image_tmpfile = vfat_image_tmpfile_obj.name images.create_vfat_image(vfat_image_tmpfile, parameters=params) object_name = _get_floppy_image_name(task.node) if CONF.ilo.use_web_server_for_images: image_url = copy_image_to_web_server(vfat_image_tmpfile, object_name) else: image_url = copy_image_to_swift(vfat_image_tmpfile, object_name) return image_url def destroy_floppy_image_from_web_server(node): """Removes the temporary floppy image. It removes the floppy image created for deploy. :param node: an ironic node object. """ object_name = _get_floppy_image_name(node) remove_image_from_web_server(object_name) def attach_vmedia(node, device, url): """Attaches the given url as virtual media on the node. :param node: an ironic node object. :param device: the virtual media device to attach :param url: the http/https url to attach as the virtual media device :raises: IloOperationError if insert virtual media failed. """ ilo_object = get_ilo_object(node) try: ilo_object.insert_virtual_media(url, device=device) ilo_object.set_vm_status( device=device, boot_option='CONNECT', write_protect='YES') except ilo_error.IloError as ilo_exception: operation = _("Inserting virtual media %s") % device raise exception.IloOperationError( operation=operation, error=ilo_exception) LOG.info(_LI("Attached virtual media %s successfully."), device) def set_boot_mode(node, boot_mode): """Sets the node to boot using boot_mode for the next boot. :param node: an ironic node object. :param boot_mode: Next boot mode. :raises: IloOperationError if setting boot mode failed. 
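    A minimal usage sketch, assuming the generic boot mode names used
    throughout this module::

        set_boot_mode(task.node, 'uefi')  # requests iLO pending mode 'UEFI'
        set_boot_mode(task.node, 'bios')  # requests iLO pending mode 'LEGACY'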
""" ilo_object = get_ilo_object(node) try: p_boot_mode = ilo_object.get_pending_boot_mode() except ilo_error.IloCommandNotSupportedError: p_boot_mode = DEFAULT_BOOT_MODE if BOOT_MODE_ILO_TO_GENERIC[p_boot_mode.lower()] == boot_mode: LOG.info(_LI("Node %(uuid)s pending boot mode is %(boot_mode)s."), {'uuid': node.uuid, 'boot_mode': boot_mode}) return try: ilo_object.set_pending_boot_mode( BOOT_MODE_GENERIC_TO_ILO[boot_mode].upper()) except ilo_error.IloError as ilo_exception: operation = _("Setting %s as boot mode") % boot_mode raise exception.IloOperationError( operation=operation, error=ilo_exception) LOG.info(_LI("Node %(uuid)s boot mode is set to %(boot_mode)s."), {'uuid': node.uuid, 'boot_mode': boot_mode}) def update_boot_mode(task): """Update instance_info with boot mode to be used for deploy. This method updates instance_info with boot mode to be used for deploy if node properties['capabilities'] do not have boot_mode. It sets the boot mode on the node. :param task: Task object. :raises: IloOperationError if setting boot mode failed. """ node = task.node boot_mode = deploy_utils.get_boot_mode_for_deploy(node) if boot_mode is not None: LOG.debug("Node %(uuid)s boot mode is being set to %(boot_mode)s", {'uuid': node.uuid, 'boot_mode': boot_mode}) set_boot_mode(node, boot_mode) return LOG.debug("Check pending boot mode for node %s.", node.uuid) ilo_object = get_ilo_object(node) try: boot_mode = ilo_object.get_pending_boot_mode() except ilo_error.IloCommandNotSupportedError: boot_mode = 'legacy' if boot_mode != 'UNKNOWN': boot_mode = BOOT_MODE_ILO_TO_GENERIC[boot_mode.lower()] if boot_mode == 'UNKNOWN': # NOTE(faizan) ILO will return this in remote cases and mostly on # the nodes which supports UEFI. Such nodes mostly comes with UEFI # as default boot mode. So we will try setting bootmode to UEFI # and if it fails then we fall back to BIOS boot mode. try: boot_mode = 'uefi' ilo_object.set_pending_boot_mode( BOOT_MODE_GENERIC_TO_ILO[boot_mode].upper()) except ilo_error.IloError as ilo_exception: operation = _("Setting %s as boot mode") % boot_mode raise exception.IloOperationError(operation=operation, error=ilo_exception) LOG.debug("Node %(uuid)s boot mode is being set to %(boot_mode)s " "as pending boot mode is unknown.", {'uuid': node.uuid, 'boot_mode': boot_mode}) instance_info = node.instance_info instance_info['deploy_boot_mode'] = boot_mode node.instance_info = instance_info node.save() def setup_vmedia(task, iso, ramdisk_options): """Attaches virtual media and sets it as boot device. This method attaches the given bootable ISO as virtual media, prepares the arguments for ramdisk in virtual media floppy. :param task: a TaskManager instance containing the node to act on. :param iso: a bootable ISO image href to attach to. Should be either of below: * A Swift object - It should be of format 'swift:'. It is assumed that the image object is present in CONF.ilo.swift_ilo_container; * A Glance image - It should be format 'glance://' or just ; * An HTTP URL. :param ramdisk_options: the options to be passed to the ramdisk in virtual media floppy. :raises: ImageCreationFailed, if it failed while creating the floppy image. :raises: IloOperationError, if some operation on iLO failed. """ setup_vmedia_for_boot(task, iso, ramdisk_options) # In UEFI boot mode, upon inserting virtual CDROM, one has to reset the # system to see it as a valid boot device in persistent boot devices. # But virtual CDROM device is always available for one-time boot. 
# During enable/disable of secure boot settings, iLO internally resets # the server twice. But it retains one time boot settings across internal # resets. Hence no impact of this change for secure boot deploy. manager_utils.node_set_boot_device(task, boot_devices.CDROM) def setup_vmedia_for_boot(task, boot_iso, parameters=None): """Sets up the node to boot from the given ISO image. This method attaches the given boot_iso on the node and passes the required parameters to it via virtual floppy image. :param task: a TaskManager instance containing the node to act on. :param boot_iso: a bootable ISO image to attach to. Should be either of below: * A Swift object - It should be of format 'swift:'. It is assumed that the image object is present in CONF.ilo.swift_ilo_container; * A Glance image - It should be format 'glance://' or just ; * An HTTP(S) URL. :param parameters: the parameters to pass in the virtual floppy image in a dictionary. This is optional. :raises: ImageCreationFailed, if it failed while creating the floppy image. :raises: SwiftOperationError, if any operation with Swift fails. :raises: IloOperationError, if attaching virtual media failed. """ LOG.info(_LI("Setting up node %s to boot from virtual media"), task.node.uuid) if parameters: floppy_image_temp_url = _prepare_floppy_image(task, parameters) attach_vmedia(task.node, 'FLOPPY', floppy_image_temp_url) boot_iso_url = None parsed_ref = urlparse.urlparse(boot_iso) if parsed_ref.scheme == 'swift': swift_api = swift.SwiftAPI() container = CONF.ilo.swift_ilo_container object_name = parsed_ref.path timeout = CONF.ilo.swift_object_expiry_timeout boot_iso_url = swift_api.get_temp_url( container, object_name, timeout) elif service_utils.is_glance_image(boot_iso): boot_iso_url = ( images.get_temp_url_for_glance_image(task.context, boot_iso)) attach_vmedia(task.node, 'CDROM', boot_iso_url or boot_iso) def eject_vmedia_devices(task): """Ejects virtual media devices. This method ejects virtual media floppy and cdrom. :param task: a TaskManager instance containing the node to act on. :returns: None :raises: IloOperationError, if some error was encountered while trying to eject virtual media floppy or cdrom. """ ilo_object = get_ilo_object(task.node) for device in ('FLOPPY', 'CDROM'): try: ilo_object.eject_virtual_media(device) except ilo_error.IloError as ilo_exception: LOG.error(_LE("Error while ejecting virtual media %(device)s " "from node %(uuid)s. Error: %(error)s"), {'device': device, 'uuid': task.node.uuid, 'error': ilo_exception}) operation = _("Eject virtual media %s") % device.lower() raise exception.IloOperationError(operation=operation, error=ilo_exception) def cleanup_vmedia_boot(task): """Cleans a node after a virtual media boot. This method cleans up a node after a virtual media boot. It deletes the floppy image if it exists in CONF.ilo.swift_ilo_container or web server. It also ejects both virtual media cdrom and virtual media floppy. :param task: a TaskManager instance containing the node to act on. """ LOG.debug("Cleaning up node %s after virtual media boot", task.node.uuid) if not CONF.ilo.use_web_server_for_images: object_name = _get_floppy_image_name(task.node) remove_image_from_swift(object_name, 'virtual floppy') else: destroy_floppy_image_from_web_server(task.node) eject_vmedia_devices(task) def get_secure_boot_mode(task): """Retrieves current enabled state of UEFI secure boot on the node Returns the current enabled state of UEFI secure boot on the node. :param task: a task from TaskManager. 
:raises: MissingParameterValue if a required iLO parameter is missing. :raises: IloOperationError on an error from IloClient library. :raises: IloOperationNotSupported if UEFI secure boot is not supported. :returns: Boolean value indicating current state of UEFI secure boot on the node. """ operation = _("Get secure boot mode for node %s.") % task.node.uuid secure_boot_state = False ilo_object = get_ilo_object(task.node) try: current_boot_mode = ilo_object.get_current_boot_mode() if current_boot_mode == 'UEFI': secure_boot_state = ilo_object.get_secure_boot_mode() except ilo_error.IloCommandNotSupportedError as ilo_exception: raise exception.IloOperationNotSupported(operation=operation, error=ilo_exception) except ilo_error.IloError as ilo_exception: raise exception.IloOperationError(operation=operation, error=ilo_exception) LOG.debug("Get secure boot mode for node %(node)s returned %(value)s", {'value': secure_boot_state, 'node': task.node.uuid}) return secure_boot_state def set_secure_boot_mode(task, flag): """Enable or disable UEFI Secure Boot for the next boot Enable or disable UEFI Secure Boot for the next boot :param task: a task from TaskManager. :param flag: Boolean value. True if the secure boot to be enabled in next boot. :raises: IloOperationError on an error from IloClient library. :raises: IloOperationNotSupported if UEFI secure boot is not supported. """ operation = (_("Setting secure boot to %(flag)s for node %(node)s.") % {'flag': flag, 'node': task.node.uuid}) ilo_object = get_ilo_object(task.node) try: ilo_object.set_secure_boot_mode(flag) LOG.debug(operation) except ilo_error.IloCommandNotSupportedError as ilo_exception: raise exception.IloOperationNotSupported(operation=operation, error=ilo_exception) except ilo_error.IloError as ilo_exception: raise exception.IloOperationError(operation=operation, error=ilo_exception) def update_secure_boot_mode(task, mode): """Changes secure boot mode for next boot on the node. This method changes secure boot mode on the node for next boot. It changes the secure boot mode setting on node only if the deploy has requested for the secure boot. During deploy, this method is used to enable secure boot on the node by passing 'mode' as 'True'. During teardown, this method is used to disable secure boot on the node by passing 'mode' as 'False'. :param task: a TaskManager instance containing the node to act on. :param mode: Boolean value requesting the next state for secure boot :raises: IloOperationNotSupported, if operation is not supported on iLO :raises: IloOperationError, if some operation on iLO failed. """ if deploy_utils.is_secure_boot_requested(task.node): set_secure_boot_mode(task, mode) LOG.info(_LI('Changed secure boot to %(mode)s for node %(node)s'), {'mode': mode, 'node': task.node.uuid}) def remove_single_or_list_of_files(file_location): """Removes (deletes) the file or list of files. This method only accepts single or list of files to delete. If single file is passed, this method removes (deletes) the file. If list of files is passed, this method removes (deletes) each of the files iteratively. 
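    Usage sketch (paths are illustrative)::

        remove_single_or_list_of_files('/tmp/boot-1234.iso')
        remove_single_or_list_of_files(['/tmp/a.iso', '/tmp/b.img'])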
:param file_location: a single or a list of file paths """ # file_location is a list of files if isinstance(file_location, list): for location in file_location: ironic_utils.unlink_without_raise(location) # file_location is a single file path elif isinstance(file_location, six.string_types): ironic_utils.unlink_without_raise(file_location) def verify_image_checksum(image_location, expected_checksum): """Verifies checksum (md5) of image file against the expected one. This method generates the checksum of the image file on the fly and verifies it against the expected checksum provided as argument. :param image_location: location of image file whose checksum is verified. :param expected_checksum: checksum to be checked against :raises: ImageRefValidationFailed, if invalid file path or verification fails. """ try: with open(image_location, 'rb') as fd: actual_checksum = utils.hash_file(fd) except IOError as e: LOG.error(_LE("Error opening file: %(file)s"), {'file': image_location}) raise exception.ImageRefValidationFailed(image_href=image_location, reason=six.text_type(e)) if actual_checksum != expected_checksum: msg = (_('Error verifying image checksum. Image %(image)s failed to ' 'verify against checksum %(checksum)s. Actual checksum is: ' '%(actual_checksum)s') % {'image': image_location, 'checksum': expected_checksum, 'actual_checksum': actual_checksum}) LOG.error(msg) raise exception.ImageRefValidationFailed(image_href=image_location, reason=msg) ironic-5.1.0/ironic/drivers/modules/ilo/power.py0000664000567000056710000002123012674513466023057 0ustar jenkinsjenkins00000000000000# Copyright 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ iLO Power Driver """ from oslo_config import cfg from oslo_log import log as logging from oslo_service import loopingcall from oslo_utils import importutils from ironic.common import boot_devices from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common import states from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers import base from ironic.drivers.modules.ilo import common as ilo_common ilo_error = importutils.try_import('proliantutils.exception') opts = [ cfg.IntOpt('power_retry', default=6, help=_('Number of times a power operation needs to be ' 'retried')), cfg.IntOpt('power_wait', default=2, help=_('Amount of time in seconds to wait in between power ' 'operations')), ] CONF = cfg.CONF CONF.register_opts(opts, group='ilo') LOG = logging.getLogger(__name__) def _attach_boot_iso_if_needed(task): """Attaches boot ISO for a deployed node. This method checks the instance info of the baremetal node for a boot iso. If the instance info has a value of key 'ilo_boot_iso', it indicates that 'boot_option' is 'netboot'. Therefore it attaches the boot ISO on the baremetal node and then sets the node to boot from virtual media cdrom. :param task: a TaskManager instance containing the node to act on. 
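    For example, the ISO is re-attached when a netboot-deployed node in the
    ACTIVE provision state is powered back on; in other states (such as
    DEPLOYING during a rebuild) this method does nothing.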
""" i_info = task.node.instance_info node_state = task.node.provision_state # NOTE: On instance rebuild, ilo_boot_iso will be present in # instance_info but the node will be in DEPLOYING state. # In such a scenario, the ilo_boot_iso shouldn't be # attached to the node while powering on the node (the node # should boot from deploy ramdisk instead, which will already # be attached by the deploy driver). if 'ilo_boot_iso' in i_info and node_state == states.ACTIVE: ilo_common.setup_vmedia_for_boot(task, i_info['ilo_boot_iso']) manager_utils.node_set_boot_device(task, boot_devices.CDROM) def _get_power_state(node): """Returns the current power state of the node. :param node: The node. :returns: power state, one of :mod: `ironic.common.states`. :raises: InvalidParameterValue if required iLO credentials are missing. :raises: IloOperationError on an error from IloClient library. """ ilo_object = ilo_common.get_ilo_object(node) # Check the current power state. try: power_status = ilo_object.get_host_power_status() except ilo_error.IloError as ilo_exception: LOG.error(_LE("iLO get_power_state failed for node %(node_id)s with " "error: %(error)s."), {'node_id': node.uuid, 'error': ilo_exception}) operation = _('iLO get_power_status') raise exception.IloOperationError(operation=operation, error=ilo_exception) if power_status == "ON": return states.POWER_ON elif power_status == "OFF": return states.POWER_OFF else: return states.ERROR def _wait_for_state_change(node, target_state): """Wait for the power state change to get reflected.""" state = [None] retries = [0] def _wait(state): state[0] = _get_power_state(node) # NOTE(rameshg87): For reboot operations, initially the state # will be same as the final state. So defer the check for one retry. if retries[0] != 0 and state[0] == target_state: raise loopingcall.LoopingCallDone() if retries[0] > CONF.ilo.power_retry: state[0] = states.ERROR raise loopingcall.LoopingCallDone() retries[0] += 1 # Start a timer and wait for the operation to complete. timer = loopingcall.FixedIntervalLoopingCall(_wait, state) timer.start(interval=CONF.ilo.power_wait).wait() return state[0] def _set_power_state(task, target_state): """Turns the server power on/off or do a reboot. :param task: a TaskManager instance containing the node to act on. :param target_state: target state of the node. :raises: InvalidParameterValue if an invalid power state was specified. :raises: IloOperationError on an error from IloClient library. :raises: PowerStateFailure if the power couldn't be set to target_state. """ node = task.node ilo_object = ilo_common.get_ilo_object(node) # Trigger the operation based on the target state. try: if target_state == states.POWER_OFF: ilo_object.hold_pwr_btn() elif target_state == states.POWER_ON: _attach_boot_iso_if_needed(task) ilo_object.set_host_power('ON') elif target_state == states.REBOOT: _attach_boot_iso_if_needed(task) ilo_object.reset_server() target_state = states.POWER_ON else: msg = _("_set_power_state called with invalid power state " "'%s'") % target_state raise exception.InvalidParameterValue(msg) except ilo_error.IloError as ilo_exception: LOG.error(_LE("iLO set_power_state failed to set state to %(tstate)s " " for node %(node_id)s with error: %(error)s"), {'tstate': target_state, 'node_id': node.uuid, 'error': ilo_exception}) operation = _('iLO set_power_state') raise exception.IloOperationError(operation=operation, error=ilo_exception) # Wait till the state change gets reflected. 
state = _wait_for_state_change(node, target_state) if state != target_state: timeout = (CONF.ilo.power_wait) * (CONF.ilo.power_retry) LOG.error(_LE("iLO failed to change state to %(tstate)s " "within %(timeout)s sec"), {'tstate': target_state, 'timeout': timeout}) raise exception.PowerStateFailure(pstate=target_state) class IloPower(base.PowerInterface): def get_properties(self): return ilo_common.COMMON_PROPERTIES def validate(self, task): """Check if node.driver_info contains the required iLO credentials. :param task: a TaskManager instance. :param node: Single node object. :raises: InvalidParameterValue if required iLO credentials are missing. """ ilo_common.parse_driver_info(task.node) def get_power_state(self, task): """Gets the current power state. :param task: a TaskManager instance. :param node: The Node. :returns: one of :mod:`ironic.common.states` POWER_OFF, POWER_ON or ERROR. :raises: InvalidParameterValue if required iLO credentials are missing. :raises: IloOperationError on an error from IloClient library. """ return _get_power_state(task.node) @task_manager.require_exclusive_lock def set_power_state(self, task, power_state): """Turn the current power state on or off. :param task: a TaskManager instance. :param node: The Node. :param power_state: The desired power state POWER_ON,POWER_OFF or REBOOT from :mod:`ironic.common.states`. :raises: InvalidParameterValue if an invalid power state was specified. :raises: IloOperationError on an error from IloClient library. :raises: PowerStateFailure if the power couldn't be set to power_state. """ _set_power_state(task, power_state) @task_manager.require_exclusive_lock def reboot(self, task): """Reboot the node :param task: a TaskManager instance. :param node: The Node. :raises: PowerStateFailure if the final state of the node is not POWER_ON. :raises: IloOperationError on an error from IloClient library. """ node = task.node current_pstate = _get_power_state(node) if current_pstate == states.POWER_ON: _set_power_state(task, states.REBOOT) elif current_pstate == states.POWER_OFF: _set_power_state(task, states.POWER_ON) ironic-5.1.0/ironic/drivers/modules/ilo/vendor.py0000664000567000056710000001356112674513470023223 0ustar jenkinsjenkins00000000000000# Copyright 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Vendor Interface for iLO drivers and its supporting methods. 
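This module exposes vendor passthru methods such as pass_deploy_info and
boot_into_iso. As a hedged example of the latter, a client POSTs a body of
the following shape to the node's vendor passthru endpoint (the href is a
placeholder for a Glance UUID or an HTTP(S) URL)::

    {"boot_iso_href": "<glance-uuid-or-http-url>"}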
""" from oslo_config import cfg from oslo_log import log as logging from ironic.common import exception from ironic.common.i18n import _ from ironic.common import states from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers import base from ironic.drivers.modules import agent from ironic.drivers.modules import deploy_utils from ironic.drivers.modules.ilo import common as ilo_common from ironic.drivers.modules import iscsi_deploy LOG = logging.getLogger(__name__) CONF = cfg.CONF class IloVirtualMediaAgentVendorInterface(agent.AgentVendorInterface): """Interface for vendor passthru related actions.""" def reboot_to_instance(self, task, **kwargs): node = task.node LOG.debug('Preparing to reboot to instance for node %s', node.uuid) error = self.check_deploy_success(node) if error is None: # Set boot mode ilo_common.update_boot_mode(task) # Need to enable secure boot, if being requested ilo_common.update_secure_boot_mode(task, True) super(IloVirtualMediaAgentVendorInterface, self).reboot_to_instance(task, **kwargs) class VendorPassthru(iscsi_deploy.VendorPassthru): """Vendor-specific interfaces for iLO deploy drivers.""" def validate(self, task, method, **kwargs): """Validate vendor-specific actions. Checks if a valid vendor passthru method was passed and validates the parameters for the vendor passthru method. :param task: a TaskManager instance containing the node to act on. :param method: method to be validated. :param kwargs: kwargs containing the vendor passthru method's parameters. :raises: MissingParameterValue, if some required parameters were not passed. :raises: InvalidParameterValue, if any of the parameters have invalid value. """ if method == 'boot_into_iso': self._validate_boot_into_iso(task, kwargs) return super(VendorPassthru, self).validate(task, method, **kwargs) @base.passthru(['POST']) @task_manager.require_exclusive_lock def pass_deploy_info(self, task, **kwargs): """Continues the deployment of baremetal node over iSCSI. This method continues the deployment of the baremetal node over iSCSI from where the deployment ramdisk has left off. This updates boot mode and secure boot settings, if required. :param task: a TaskManager instance containing the node to act on. :param **kwargs: kwargs for performing iscsi deployment. :raises: InvalidState """ ilo_common.update_boot_mode(task) ilo_common.update_secure_boot_mode(task, True) super(VendorPassthru, self).pass_deploy_info(task, **kwargs) @task_manager.require_exclusive_lock def continue_deploy(self, task, **kwargs): """Method invoked when deployed with the IPA ramdisk. This method is invoked during a heartbeat from an agent when the node is in wait-call-back state. This updates boot mode and secure boot settings, if required. """ ilo_common.update_boot_mode(task) ilo_common.update_secure_boot_mode(task, True) super(VendorPassthru, self).continue_deploy(task, **kwargs) def _validate_boot_into_iso(self, task, kwargs): """Validates if attach_iso can be called and if inputs are proper.""" if not (task.node.provision_state == states.MANAGEABLE or task.node.maintenance is True): msg = (_("The requested action 'boot_into_iso' can be performed " "only when node %(node_uuid)s is in %(state)s state or " "in 'maintenance' mode") % {'node_uuid': task.node.uuid, 'state': states.MANAGEABLE}) raise exception.InvalidStateRequested(msg) d_info = {'boot_iso_href': kwargs.get('boot_iso_href')} error_msg = _("Error validating input for boot_into_iso vendor " "passthru. 
Some parameters were not provided: ") deploy_utils.check_for_missing_params(d_info, error_msg) deploy_utils.validate_image_properties( task.context, {'image_source': kwargs.get('boot_iso_href')}, []) @base.passthru(['POST']) @task_manager.require_exclusive_lock def boot_into_iso(self, task, **kwargs): """Attaches an ISO image in glance and reboots bare metal. This method accepts an ISO image href (a Glance UUID or an HTTP(S) URL) attaches it as virtual media and then reboots the node. This is useful for debugging purposes. This can be invoked only when the node is in manage state. :param task: A TaskManager object. :param kwargs: The arguments sent with vendor passthru. The expected kwargs are:: 'boot_iso_href': href of the image to be booted. This can be a Glance UUID or an HTTP(S) URL. """ ilo_common.setup_vmedia(task, kwargs['boot_iso_href'], ramdisk_options=None) manager_utils.node_power_action(task, states.REBOOT) ironic-5.1.0/ironic/drivers/modules/ilo/console.py0000664000567000056710000000331312674513466023367 0ustar jenkinsjenkins00000000000000# Copyright 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ iLO Deploy Driver(s) and supporting methods. """ from ironic.common import exception from ironic.common.i18n import _ from ironic.drivers.modules.ilo import common as ilo_common from ironic.drivers.modules import ipmitool class IloConsoleInterface(ipmitool.IPMIShellinaboxConsole): """A ConsoleInterface that uses ipmitool and shellinabox.""" def get_properties(self): props = ilo_common.REQUIRED_PROPERTIES.copy() props.update(ilo_common.CONSOLE_PROPERTIES) return props def validate(self, task): """Validate the Node console info. :param task: a task from TaskManager. :raises: InvalidParameterValue :raises: MissingParameterValue when a required parameter is missing """ node = task.node driver_info_dict = ilo_common.parse_driver_info(node) if 'console_port' not in driver_info_dict: raise exception.MissingParameterValue(_( "Missing 'console_port' parameter in node's driver_info.")) ilo_common.update_ipmi_properties(task) super(IloConsoleInterface, self).validate(task) ironic-5.1.0/ironic/drivers/modules/ilo/__init__.py0000664000567000056710000000000012674513466023452 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/modules/ilo/inspect.py0000664000567000056710000002357612674513466023407 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" iLO Inspect Interface """ from oslo_log import log as logging from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LI from ironic.common.i18n import _LW from ironic.common import states from ironic.common import utils from ironic.conductor import utils as conductor_utils from ironic.db import api as dbapi from ironic.drivers import base from ironic.drivers.modules.ilo import common as ilo_common ilo_error = importutils.try_import('proliantutils.exception') LOG = logging.getLogger(__name__) CAPABILITIES_KEYS = {'BootMode', 'secure_boot', 'rom_firmware_version', 'ilo_firmware_version', 'server_model', 'max_raid_level', 'pci_gpu_devices', 'sr_iov_devices', 'nic_capacity'} def _create_ports_if_not_exist(node, macs): """Create ironic ports for the mac addresses. Creates ironic ports for the mac addresses returned with inspection or as requested by operator. :param node: node object. :param macs: A dictionary of port numbers to mac addresses returned by node inspection. """ node_id = node.id sql_dbapi = dbapi.get_instance() for mac in macs.values(): port_dict = {'address': mac, 'node_id': node_id} try: sql_dbapi.create_port(port_dict) LOG.info(_LI("Port created for MAC address %(address)s for node " "%(node)s"), {'address': mac, 'node': node.uuid}) except exception.MACAlreadyExists: LOG.warning(_LW("Port already exists for MAC address %(address)s " "for node %(node)s"), {'address': mac, 'node': node.uuid}) def _get_essential_properties(node, ilo_object): """Inspects the node and get essential scheduling properties :param node: node object. :param ilo_object: an instance of proliantutils.ilo.IloClient :raises: HardwareInspectionFailure if any of the properties values are missing. :returns: The dictionary containing properties and MAC data. The dictionary possible keys are 'properties' and 'macs'. The 'properties' should contain keys as in IloInspect.ESSENTIAL_PROPERTIES. The 'macs' is a dictionary containing key:value pairs of """ try: # Retrieve the mandatory properties from hardware result = ilo_object.get_essential_properties() except ilo_error.IloError as e: raise exception.HardwareInspectionFailure(error=e) _validate(node, result) return result def _validate(node, data): """Validate the received value against the supported keys in ironic. :param node: node object. :param data: the dictionary received by querying server. 
:raises: HardwareInspectionFailure """ if data.get('properties'): if isinstance(data['properties'], dict): valid_keys = IloInspect.ESSENTIAL_PROPERTIES missing_keys = valid_keys - set(data['properties']) if missing_keys: error = (_( "Server didn't return the key(s): %(key)s") % {'key': ', '.join(missing_keys)}) raise exception.HardwareInspectionFailure(error=error) else: error = (_("Essential properties are expected to be in dictionary " "format, received %(properties)s from node " "%(node)s.") % {"properties": data['properties'], 'node': node.uuid}) raise exception.HardwareInspectionFailure(error=error) else: error = (_("The node %s didn't return 'properties' as the key with " "inspection.") % node.uuid) raise exception.HardwareInspectionFailure(error=error) if data.get('macs'): if not isinstance(data['macs'], dict): error = (_("Node %(node)s didn't return MACs %(macs)s " "in dictionary format.") % {"macs": data['macs'], 'node': node.uuid}) raise exception.HardwareInspectionFailure(error=error) else: error = (_("The node %s didn't return 'macs' as the key with " "inspection.") % node.uuid) raise exception.HardwareInspectionFailure(error=error) def _create_supported_capabilities_dict(capabilities): """Creates a capabilities dictionary from supported capabilities in ironic. :param capabilities: a dictionary of capabilities as returned by the hardware. :returns: a dictionary of the capabilities supported by ironic and returned by hardware. """ valid_cap = {} for key in CAPABILITIES_KEYS.intersection(capabilities): valid_cap[key] = capabilities.get(key) return valid_cap def _get_capabilities(node, ilo_object): """inspects hardware and gets additional capabilities. :param node: Node object. :param ilo_object: an instance of ilo drivers. :returns : a string of capabilities like 'key1:value1,key2:value2,key3:value3' or None. """ capabilities = None try: capabilities = ilo_object.get_server_capabilities() except ilo_error.IloError: LOG.debug(("Node %s did not return any additional capabilities."), node.uuid) return capabilities class IloInspect(base.InspectInterface): def get_properties(self): return ilo_common.REQUIRED_PROPERTIES def validate(self, task): """Check that 'driver_info' contains required ILO credentials. Validates whether the 'driver_info' property of the supplied task's node contains the required credentials information. :param task: a task from TaskManager. :raises: InvalidParameterValue if required iLO parameters are not valid. :raises: MissingParameterValue if a required parameter is missing. """ node = task.node ilo_common.parse_driver_info(node) def inspect_hardware(self, task): """Inspect hardware to get the hardware properties. Inspects hardware to get the essential and additional hardware properties. It fails if any of the essential properties are not received from the node. It doesn't fail if node fails to return any capabilities as the capabilities differ from hardware to hardware mostly. :param task: a TaskManager instance. :raises: HardwareInspectionFailure if essential properties could not be retrieved successfully. :raises: IloOperationError if system fails to get power state. :returns: The resulting state of inspection. 
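        As an illustration, a successful inspection updates node.properties
        with the essential scheduling keys (per ESSENTIAL_PROPERTIES, e.g.
        'memory_mb', 'cpus', 'local_gb', 'cpu_arch') and, when available,
        merges additional capabilities in the 'key1:value1,key2:value2'
        string format.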
""" power_turned_on = False ilo_object = ilo_common.get_ilo_object(task.node) try: state = task.driver.power.get_power_state(task) except exception.IloOperationError as ilo_exception: operation = (_("Inspecting hardware (get_power_state) on %s") % task.node.uuid) raise exception.IloOperationError(operation=operation, error=ilo_exception) if state != states.POWER_ON: LOG.info(_LI("The node %s is not powered on. Powering on the " "node for inspection."), task.node.uuid) conductor_utils.node_power_action(task, states.POWER_ON) power_turned_on = True # get the essential properties and update the node properties # with it. inspected_properties = {} result = _get_essential_properties(task.node, ilo_object) properties = result['properties'] for known_property in self.ESSENTIAL_PROPERTIES: inspected_properties[known_property] = properties[known_property] node_properties = task.node.properties node_properties.update(inspected_properties) task.node.properties = node_properties # Inspect the hardware for additional hardware capabilities. # Since additional hardware capabilities may not apply to all the # hardwares, the method inspect_hardware() doesn't raise an error # for these capabilities. capabilities = _get_capabilities(task.node, ilo_object) if capabilities: valid_cap = _create_supported_capabilities_dict(capabilities) capabilities = utils.get_updated_capabilities( task.node.properties.get('capabilities'), valid_cap) if capabilities: node_properties['capabilities'] = capabilities task.node.properties = node_properties task.node.save() # Create ports for the nics detected. _create_ports_if_not_exist(task.node, result['macs']) LOG.debug(("Node properties for %(node)s are updated as " "%(properties)s"), {'properties': inspected_properties, 'node': task.node.uuid}) LOG.info(_LI("Node %s inspected."), task.node.uuid) if power_turned_on: conductor_utils.node_power_action(task, states.POWER_OFF) LOG.info(_LI("The node %s was powered on for inspection. " "Powered off the node as inspection completed."), task.node.uuid) return states.MANAGEABLE ironic-5.1.0/ironic/drivers/modules/ilo/management.py0000664000567000056710000004432412674513466024050 0ustar jenkinsjenkins00000000000000# Copyright 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" iLO Management Interface """ from oslo_config import cfg from oslo_log import log as logging from oslo_utils import excutils from oslo_utils import importutils import six from ironic.common import boot_devices from ironic.common import exception from ironic.common.i18n import _, _LE, _LI, _LW from ironic.conductor import task_manager from ironic.drivers import base from ironic.drivers.modules.ilo import common as ilo_common from ironic.drivers.modules.ilo import firmware_processor from ironic.drivers.modules import ipmitool LOG = logging.getLogger(__name__) ilo_error = importutils.try_import('proliantutils.exception') BOOT_DEVICE_MAPPING_TO_ILO = { boot_devices.PXE: 'NETWORK', boot_devices.DISK: 'HDD', boot_devices.CDROM: 'CDROM' } BOOT_DEVICE_ILO_TO_GENERIC = { v: k for k, v in BOOT_DEVICE_MAPPING_TO_ILO.items()} MANAGEMENT_PROPERTIES = ilo_common.REQUIRED_PROPERTIES.copy() MANAGEMENT_PROPERTIES.update(ilo_common.CLEAN_PROPERTIES) clean_step_opts = [ cfg.IntOpt('clean_priority_reset_ilo', default=0, help=_('Priority for reset_ilo clean step.')), cfg.IntOpt('clean_priority_reset_bios_to_default', default=10, help=_('Priority for reset_bios_to_default clean step.')), cfg.IntOpt('clean_priority_reset_secure_boot_keys_to_default', default=20, help=_('Priority for reset_secure_boot_keys clean step. This ' 'step will reset the secure boot keys to manufacturing ' 'defaults.')), cfg.IntOpt('clean_priority_clear_secure_boot_keys', default=0, help=_('Priority for clear_secure_boot_keys clean step. This ' 'step is not enabled by default. It can be enabled to ' 'clear all secure boot keys enrolled with iLO.')), cfg.IntOpt('clean_priority_reset_ilo_credential', default=30, help=_('Priority for reset_ilo_credential clean step. This ' 'step requires "ilo_change_password" parameter to be ' 'updated in nodes\'s driver_info with the new ' 'password.')), ] CONF = cfg.CONF CONF.register_opts(clean_step_opts, group='ilo') def _execute_ilo_clean_step(node, step, *args, **kwargs): """Executes a particular clean step. :param node: an Ironic node object. :param step: a clean step to be executed. :param args: The args to be passed to the clean step. :param kwargs: The kwargs to be passed to the clean step. :raises: NodeCleaningFailure, on failure to execute step. """ ilo_object = ilo_common.get_ilo_object(node) try: clean_step = getattr(ilo_object, step) except AttributeError: # The specified clean step is not present in the proliantutils # package. Raise exception to update the proliantutils package # to newer version. raise exception.NodeCleaningFailure( _("Clean step '%s' not found. 'proliantutils' package needs to be " "updated.") % step) try: clean_step(*args, **kwargs) except ilo_error.IloCommandNotSupportedError: # This clean step is not supported on Gen8 and below servers. # Log the failure and continue with cleaning. LOG.warning(_LW("'%(step)s' clean step is not supported on node " "%(uuid)s. Skipping the clean step."), {'step': step, 'uuid': node.uuid}) except ilo_error.IloError as ilo_exception: raise exception.NodeCleaningFailure(_( "Clean step %(step)s failed " "on node %(node)s with error: %(err)s") % {'node': node.uuid, 'step': step, 'err': ilo_exception}) class IloManagement(base.ManagementInterface): def get_properties(self): return MANAGEMENT_PROPERTIES def validate(self, task): """Check that 'driver_info' contains required ILO credentials. Validates whether the 'driver_info' property of the supplied task's node contains the required credentials information. 
:param task: a task from TaskManager. :raises: InvalidParameterValue if required iLO parameters are not valid. :raises: MissingParameterValue if a required parameter is missing. """ ilo_common.parse_driver_info(task.node) def get_supported_boot_devices(self, task): """Get a list of the supported boot devices. :param task: a task from TaskManager. :returns: A list with the supported boot devices defined in :mod:`ironic.common.boot_devices`. """ return list(BOOT_DEVICE_MAPPING_TO_ILO.keys()) def get_boot_device(self, task): """Get the current boot device for a node. Returns the current boot device of the node. :param task: a task from TaskManager. :raises: MissingParameterValue if a required iLO parameter is missing. :raises: IloOperationError on an error from IloClient library. :returns: a dictionary containing: :boot_device: the boot device, one of the supported devices listed in :mod:`ironic.common.boot_devices` or None if it is unknown. :persistent: Whether the boot device will persist to all future boots or not, None if it is unknown. """ ilo_object = ilo_common.get_ilo_object(task.node) persistent = False try: # Return one time boot device if set, else return # the persistent boot device next_boot = ilo_object.get_one_time_boot() if next_boot == 'Normal': # One time boot is not set. Check for persistent boot. persistent = True next_boot = ilo_object.get_persistent_boot_device() except ilo_error.IloError as ilo_exception: operation = _("Get boot device") raise exception.IloOperationError(operation=operation, error=ilo_exception) boot_device = BOOT_DEVICE_ILO_TO_GENERIC.get(next_boot, None) if boot_device is None: persistent = None return {'boot_device': boot_device, 'persistent': persistent} @task_manager.require_exclusive_lock def set_boot_device(self, task, device, persistent=False): """Set the boot device for a node. Set the boot device to use on next reboot of the node. :param task: a task from TaskManager. :param device: the boot device, one of the supported devices listed in :mod:`ironic.common.boot_devices`. :param persistent: Boolean value. True if the boot device will persist to all future boots, False if not. Default: False. :raises: InvalidParameterValue if an invalid boot device is specified. :raises: MissingParameterValue if a required parameter is missing. :raises: IloOperationError on an error from IloClient library. """ try: boot_device = BOOT_DEVICE_MAPPING_TO_ILO[device] except KeyError: raise exception.InvalidParameterValue(_( "Invalid boot device %s specified.") % device) try: ilo_object = ilo_common.get_ilo_object(task.node) if not persistent: ilo_object.set_one_time_boot(boot_device) else: ilo_object.update_persistent_boot([boot_device]) except ilo_error.IloError as ilo_exception: operation = _("Setting %s as boot device") % device raise exception.IloOperationError(operation=operation, error=ilo_exception) LOG.debug("Node %(uuid)s set to boot from %(device)s.", {'uuid': task.node.uuid, 'device': device}) def get_sensors_data(self, task): """Get sensors data. :param task: a TaskManager instance. :raises: FailedToGetSensorData when getting the sensor data fails. :raises: FailedToParseSensorData when parsing sensor data fails. :raises: InvalidParameterValue if required ipmi parameters are missing. :raises: MissingParameterValue if a required parameter is missing. :returns: returns a dict of sensor data group by sensor type. 
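Example of the returned structure (an illustrative sketch; the sensor
names and readings are assumptions, not output from a real system)::

    {'Temperature': {'01-Inlet Ambient (0.1)': {
        'Sensor ID': '01-Inlet Ambient (0.1)',
        'Sensor Reading': '20 (+/- 0) degrees C'}}}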
""" ilo_common.update_ipmi_properties(task) ipmi_management = ipmitool.IPMIManagement() return ipmi_management.get_sensors_data(task) @base.clean_step(priority=CONF.ilo.clean_priority_reset_ilo) def reset_ilo(self, task): """Resets the iLO. :param task: a task from TaskManager. :raises: NodeCleaningFailure, on failure to execute step. """ return _execute_ilo_clean_step(task.node, 'reset_ilo') @base.clean_step(priority=CONF.ilo.clean_priority_reset_ilo_credential) def reset_ilo_credential(self, task): """Resets the iLO password. :param task: a task from TaskManager. :raises: NodeCleaningFailure, on failure to execute step. """ info = task.node.driver_info password = info.pop('ilo_change_password', None) if not password: LOG.info(_LI("Missing 'ilo_change_password' parameter in " "driver_info. Clean step 'reset_ilo_credential' is " "not performed on node %s."), task.node.uuid) return _execute_ilo_clean_step(task.node, 'reset_ilo_credential', password) info['ilo_password'] = password task.node.driver_info = info task.node.save() @base.clean_step(priority=CONF.ilo.clean_priority_reset_bios_to_default) def reset_bios_to_default(self, task): """Resets the BIOS settings to default values. Resets BIOS to default settings. This operation is currently supported only on HP Proliant Gen9 and above servers. :param task: a task from TaskManager. :raises: NodeCleaningFailure, on failure to execute step. """ return _execute_ilo_clean_step(task.node, 'reset_bios_to_default') @base.clean_step(priority=CONF.ilo. clean_priority_reset_secure_boot_keys_to_default) def reset_secure_boot_keys_to_default(self, task): """Reset secure boot keys to manufacturing defaults. Resets the secure boot keys to manufacturing defaults. This operation is supported only on HP Proliant Gen9 and above servers. :param task: a task from TaskManager. :raises: NodeCleaningFailure, on failure to execute step. """ return _execute_ilo_clean_step(task.node, 'reset_secure_boot_keys') @base.clean_step(priority=CONF.ilo.clean_priority_clear_secure_boot_keys) def clear_secure_boot_keys(self, task): """Clear all secure boot keys. Clears all the secure boot keys. This operation is supported only on HP Proliant Gen9 and above servers. :param task: a task from TaskManager. :raises: NodeCleaningFailure, on failure to execute step. """ return _execute_ilo_clean_step(task.node, 'clear_secure_boot_keys') @base.clean_step(priority=0, abortable=False, argsinfo={ 'ilo_license_key': { 'description': ( 'The HPE iLO Advanced license key to activate enterprise ' 'features.' ), 'required': True } }) def activate_license(self, task, **kwargs): """Activates iLO Advanced license. :param task: a TaskManager object. :raises: InvalidParameterValue, if any of the arguments are invalid. :raises: NodeCleaningFailure, on failure to execute clean step. """ ilo_license_key = kwargs.get('ilo_license_key') node = task.node if not isinstance(ilo_license_key, six.string_types): msg = (_("Value of 'ilo_license_key' must be a string instead of " "'%(value)s'. 
Step 'activate_license' is not executed " "for %(node)s.") % {'value': ilo_license_key, 'node': node.uuid}) LOG.error(msg) raise exception.InvalidParameterValue(msg) LOG.debug("Activating iLO license for node %(node)s ...", {'node': node.uuid}) _execute_ilo_clean_step(node, 'activate_license', ilo_license_key) LOG.info(_LI("iLO license activated for node %(node)s."), {'node': node.uuid}) @base.clean_step(priority=0, abortable=False, argsinfo={ 'firmware_update_mode': { 'description': ( "This argument indicates the mode (or mechanism) of firmware " "update procedure. Supported value is 'ilo'." ), 'required': True }, 'firmware_images': { 'description': ( "This argument represents the ordered list of JSON " "dictionaries of firmware images. Each firmware image " "dictionary consists of three mandatory fields, namely 'url', " "'checksum' and 'component'. These fields represent firmware " "image location URL, md5 checksum of image file and firmware " "component type respectively. The supported firmware URL " "schemes are 'file', 'http', 'https' and 'swift'. The " "supported values for firmware component are 'ilo', 'cpld', " "'power_pic', 'bios' and 'chassis'. The firmware images will " "be applied (in the order given) one by one on the baremetal " "server. For more information, see " "http://docs.openstack.org/developer/ironic/drivers/ilo.html#initiating-firmware-update-as-manual-clean-step" # noqa ), 'required': True } }) @firmware_processor.verify_firmware_update_args def update_firmware(self, task, **kwargs): """Updates the firmware. :param task: a TaskManager object. :raises: InvalidParameterValue if update firmware mode is not 'ilo'. Even applicable for invalid input cases. :raises: NodeCleaningFailure, on failure to execute step. """ node = task.node fw_location_objs_n_components = [] firmware_images = kwargs['firmware_images'] # Note(deray): Processing of firmware images happens here. As part # of processing checksum validation is also done for the firmware file. # Processing of firmware file essentially means downloading the file # on the conductor, validating the checksum of the downloaded content, # extracting the raw firmware file from its compact format, if it is, # and hosting the file on a web server or a swift store based on the # need of the baremetal server iLO firmware update method. try: for firmware_image_info in firmware_images: url, checksum, component = ( firmware_processor.get_and_validate_firmware_image_info( firmware_image_info)) LOG.debug("Processing of firmware file: %(firmware_file)s on " "node: %(node)s ... in progress", {'firmware_file': url, 'node': node.uuid}) fw_processor = firmware_processor.FirmwareProcessor(url) fw_location_obj = fw_processor.process_fw_on(node, checksum) fw_location_objs_n_components.append( (fw_location_obj, component)) LOG.debug("Processing of firmware file: %(firmware_file)s on " "node: %(node)s ... done", {'firmware_file': url, 'node': node.uuid}) except exception.IronicException as ilo_exc: # delete all the files extracted so far from the extracted list # and re-raise the exception for fw_loc_obj_n_comp_tup in fw_location_objs_n_components: fw_loc_obj_n_comp_tup[0].remove() LOG.error(_LE("Processing of firmware image: %(firmware_image)s " "on node: %(node)s ... failed"), {'firmware_image': firmware_image_info, 'node': node.uuid}) raise exception.NodeCleaningFailure(node=node.uuid, reason=ilo_exc) # Updating of firmware images happen here. 
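# For illustration only (an assumed input, not taken from a real node),
# the 'firmware_images' argument validated above would look like:
#
#     [{'url': 'http://fw.example.com/ilo/firmware.scexe',
#       'checksum': '<md5-of-firmware.scexe>',
#       'component': 'ilo'}]
#
# and each processed entry becomes a (fw_location_obj, component) tuple
# whose location is handed to the 'update_firmware' step below.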
try: for fw_location_obj, component in fw_location_objs_n_components: fw_location = fw_location_obj.fw_image_location LOG.debug("Firmware update for %(firmware_file)s on " "node: %(node)s ... in progress", {'firmware_file': fw_location, 'node': node.uuid}) _execute_ilo_clean_step( node, 'update_firmware', fw_location, component) LOG.debug("Firmware update for %(firmware_file)s on " "node: %(node)s ... done", {'firmware_file': fw_location, 'node': node.uuid}) except exception.NodeCleaningFailure: with excutils.save_and_reraise_exception(): LOG.error(_LE("Firmware update for %(firmware_file)s on " "node: %(node)s failed."), {'firmware_file': fw_location, 'node': node.uuid}) finally: for fw_loc_obj_n_comp_tup in fw_location_objs_n_components: fw_loc_obj_n_comp_tup[0].remove() LOG.info(_LI("All Firmware update operations completed successfully " "for node: %s."), node.uuid) ironic-5.1.0/ironic/drivers/modules/console_utils.py0000664000567000056710000002273312674513466024033 0ustar jenkinsjenkins00000000000000# coding=utf-8 # Copyright 2014 International Business Machines Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Ironic console utilities. """ import errno import os import psutil import signal import subprocess import time from ironic_lib import utils as ironic_utils from oslo_concurrency import processutils from oslo_config import cfg from oslo_log import log as logging from oslo_service import loopingcall from oslo_utils import netutils from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LW from ironic.common import utils opts = [ cfg.StrOpt('terminal', default='shellinaboxd', help=_('Path to serial console terminal program')), cfg.StrOpt('terminal_cert_dir', help=_('Directory containing the terminal SSL cert(PEM) for ' 'serial console access')), cfg.StrOpt('terminal_pid_dir', help=_('Directory for holding terminal pid files. ' 'If not specified, the temporary directory ' 'will be used.')), cfg.IntOpt('subprocess_checking_interval', default=1, help=_('Time interval (in seconds) for checking the status of ' 'console subprocess.')), cfg.IntOpt('subprocess_timeout', default=10, help=_('Time (in seconds) to wait for the console subprocess ' 'to start.')), ] CONF = cfg.CONF CONF.register_opts(opts, group='console') LOG = logging.getLogger(__name__) def _get_console_pid_dir(): """Return the directory for the pid file.""" return CONF.console.terminal_pid_dir or CONF.tempdir def _ensure_console_pid_dir_exists(): """Ensure that the console PID directory exists Checks that the directory for the console PID file exists and if not, creates it. :raises: ConsoleError if the directory doesn't exist and cannot be created """ dir = _get_console_pid_dir() if not os.path.exists(dir): try: os.makedirs(dir) except OSError as exc: msg = (_("Cannot create directory '%(path)s' for console PID file." 
" Reason: %(reason)s.") % {'path': dir, 'reason': exc}) LOG.error(msg) raise exception.ConsoleError(message=msg) def _get_console_pid_file(node_uuid): """Generate the pid file name to hold the terminal process id.""" pid_dir = _get_console_pid_dir() name = "%s.pid" % node_uuid path = os.path.join(pid_dir, name) return path def _get_console_pid(node_uuid): """Get the terminal process id from pid file.""" pid_path = _get_console_pid_file(node_uuid) try: with open(pid_path, 'r') as f: pid_str = f.readline() return int(pid_str) except (IOError, ValueError): raise exception.NoConsolePid(pid_path=pid_path) def _stop_console(node_uuid): """Close the serial console for a node Kills the console process and deletes the PID file. :param node_uuid: the UUID of the node :raises: NoConsolePid if no console PID was found :raises: ConsoleError if unable to stop the console process """ try: console_pid = _get_console_pid(node_uuid) os.kill(console_pid, signal.SIGTERM) except OSError as exc: if exc.errno != errno.ESRCH: msg = (_("Could not stop the console for node '%(node)s'. " "Reason: %(err)s.") % {'node': node_uuid, 'err': exc}) raise exception.ConsoleError(message=msg) else: LOG.warning(_LW("Console process for node %s is not running " "but pid file exists while trying to stop " "shellinabox console."), node_uuid) finally: ironic_utils.unlink_without_raise(_get_console_pid_file(node_uuid)) def make_persistent_password_file(path, password): """Writes a file containing a password until deleted.""" try: utils.delete_if_exists(path) with open(path, 'wb') as file: os.chmod(path, 0o600) file.write(password.encode()) return path except Exception as e: utils.delete_if_exists(path) raise exception.PasswordFileFailedToCreate(error=e) def get_shellinabox_console_url(port): """Get a url to access the console via shellinaboxd. :param port: the terminal port for the node. """ console_host = CONF.my_ip if netutils.is_valid_ipv6(console_host): console_host = '[%s]' % console_host scheme = 'https' if CONF.console.terminal_cert_dir else 'http' return '%(scheme)s://%(host)s:%(port)s' % {'scheme': scheme, 'host': console_host, 'port': port} def start_shellinabox_console(node_uuid, port, console_cmd): """Open the serial console for a node. :param node_uuid: the uuid for the node. :param port: the terminal port for the node. :param console_cmd: the shell command that gets the console. :raises: ConsoleError if the directory for the PID file cannot be created. :raises: ConsoleSubprocessFailed when invoking the subprocess failed. """ # make sure that the old console for this node is stopped # and the files are cleared try: _stop_console(node_uuid) except exception.NoConsolePid: pass except processutils.ProcessExecutionError as exc: LOG.warning(_LW("Failed to kill the old console process " "before starting a new shellinabox console " "for node %(node)s. Reason: %(err)s"), {'node': node_uuid, 'err': exc}) _ensure_console_pid_dir_exists() pid_file = _get_console_pid_file(node_uuid) # put together the command and arguments for invoking the console args = [] args.append(CONF.console.terminal) if CONF.console.terminal_cert_dir: args.append("-c") args.append(CONF.console.terminal_cert_dir) else: args.append("-t") args.append("-p") args.append(str(port)) args.append("--background=%s" % pid_file) args.append("-s") args.append(console_cmd) # run the command as a subprocess try: LOG.debug('Running subprocess: %s', ' '.join(args)) # use pipe here to catch the error in case shellinaboxd # failed to start. 
obj = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE) except (OSError, ValueError) as e: error = _("%(exec_error)s\n" "Command: %(command)s") % {'exec_error': str(e), 'command': ' '.join(args)} LOG.warning(error) raise exception.ConsoleSubprocessFailed(error=error) def _wait(node_uuid, popen_obj): locals['returncode'] = popen_obj.poll() # check if the console pid is created and the process is running. # if it is, then the shellinaboxd is invoked successfully as a daemon. # otherwise check the error. if locals['returncode'] is not None: if (locals['returncode'] == 0 and os.path.exists(pid_file) and psutil.pid_exists(_get_console_pid(node_uuid))): raise loopingcall.LoopingCallDone() else: (stdout, stderr) = popen_obj.communicate() locals['errstr'] = _( "Command: %(command)s.\n" "Exit code: %(return_code)s.\n" "Stdout: %(stdout)r\n" "Stderr: %(stderr)r") % { 'command': ' '.join(args), 'return_code': locals['returncode'], 'stdout': stdout, 'stderr': stderr} LOG.warning(locals['errstr']) raise loopingcall.LoopingCallDone() if (time.time() > expiration): locals['errstr'] = _("Timeout while waiting for console subprocess" "to start for node %s.") % node_uuid LOG.warning(locals['errstr']) raise loopingcall.LoopingCallDone() locals = {'returncode': None, 'errstr': ''} expiration = time.time() + CONF.console.subprocess_timeout timer = loopingcall.FixedIntervalLoopingCall(_wait, node_uuid, obj) timer.start(interval=CONF.console.subprocess_checking_interval).wait() if locals['errstr']: raise exception.ConsoleSubprocessFailed(error=locals['errstr']) def stop_shellinabox_console(node_uuid): """Close the serial console for a node. :param node_uuid: the UUID of the node :raises: ConsoleError if unable to stop the console process """ try: _stop_console(node_uuid) except exception.NoConsolePid: LOG.warning(_LW("No console pid found for node %s while trying to " "stop shellinabox console."), node_uuid) ironic-5.1.0/ironic/drivers/modules/ipxe_config.template0000664000567000056710000000255112674513466024622 0ustar jenkinsjenkins00000000000000#!ipxe dhcp goto deploy :deploy kernel {% if pxe_options.ipxe_timeout > 0 %}--timeout {{ pxe_options.ipxe_timeout }} {% endif %}{{ pxe_options.deployment_aki_path }} selinux=0 disk={{ pxe_options.disk }} iscsi_target_iqn={{ pxe_options.iscsi_target_iqn }} deployment_id={{ pxe_options.deployment_id }} deployment_key={{ pxe_options.deployment_key }} ironic_api_url={{ pxe_options.ironic_api_url }} troubleshoot=0 text {{ pxe_options.pxe_append_params|default("", true) }} boot_option={{ pxe_options.boot_option }} ip=${ip}:${next-server}:${gateway}:${netmask} BOOTIF=${mac} {% if pxe_options.root_device %}root_device={{ pxe_options.root_device }}{% endif %} ipa-api-url={{ pxe_options['ipa-api-url'] }} ipa-driver-name={{ pxe_options['ipa-driver-name'] }} boot_mode={{ pxe_options['boot_mode'] }} initrd=deploy_ramdisk coreos.configdrive=0 initrd {% if pxe_options.ipxe_timeout > 0 %}--timeout {{ pxe_options.ipxe_timeout }} {% endif %}{{ pxe_options.deployment_ari_path }} boot :boot_partition kernel {% if pxe_options.ipxe_timeout > 0 %}--timeout {{ pxe_options.ipxe_timeout }} {% endif %}{{ pxe_options.aki_path }} root={{ ROOT }} ro text {{ pxe_options.pxe_append_params|default("", true) }} initrd=ramdisk initrd {% if pxe_options.ipxe_timeout > 0 %}--timeout {{ pxe_options.ipxe_timeout }} {% endif %}{{ pxe_options.ari_path }} boot :boot_whole_disk sanboot --no-describe ironic-5.1.0/ironic/drivers/modules/amt/0000775000567000056710000000000012674513633021345 
5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/modules/amt/common.py0000664000567000056710000002160212674513466023214 0ustar jenkinsjenkins00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Common functionalities for AMT Driver """ import time from xml.etree import ElementTree from oslo_concurrency import processutils from oslo_config import cfg from oslo_log import log as logging from oslo_utils import importutils import six from ironic.common import boot_devices from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common import utils pywsman = importutils.try_import('pywsman') _SOAP_ENVELOPE = 'http://www.w3.org/2003/05/soap-envelope' LOG = logging.getLogger(__name__) REQUIRED_PROPERTIES = { 'amt_address': _('IP address or host name of the node. Required.'), 'amt_password': _('Password. Required.'), 'amt_username': _('Username to log into AMT system. Required.'), } OPTIONAL_PROPERTIES = { 'amt_protocol': _('Protocol used for AMT endpoint. one of http, https; ' 'default is "http". Optional.'), } COMMON_PROPERTIES = REQUIRED_PROPERTIES.copy() COMMON_PROPERTIES.update(OPTIONAL_PROPERTIES) opts = [ cfg.StrOpt('protocol', default='http', choices=['http', 'https'], help=_('Protocol used for AMT endpoint')), cfg.IntOpt('awake_interval', default=60, min=0, help=_('Time interval (in seconds) for successive awake call ' 'to AMT interface, this depends on the IdleTimeout ' 'setting on AMT interface. AMT Interface will go to ' 'sleep after 60 seconds of inactivity by default. ' 'IdleTimeout=0 means AMT will not go to sleep at all. ' 'Setting awake_interval=0 will disable awake call.')), ] CONF = cfg.CONF opt_group = cfg.OptGroup(name='amt', title='Options for the AMT power driver') CONF.register_group(opt_group) CONF.register_opts(opts, opt_group) # TODO(lintan): More boot devices are supported by AMT, but not useful # currently. Add them in the future. BOOT_DEVICES_MAPPING = { boot_devices.PXE: 'Intel(r) AMT: Force PXE Boot', boot_devices.DISK: 'Intel(r) AMT: Force Hard-drive Boot', boot_devices.CDROM: 'Intel(r) AMT: Force CD/DVD Boot', } DEFAULT_BOOT_DEVICE = boot_devices.DISK AMT_PROTOCOL_PORT_MAP = { 'http': 16992, 'https': 16993, } # ReturnValue constants RET_SUCCESS = '0' # A dict cache last awake call to AMT Interface AMT_AWAKE_CACHE = {} class Client(object): """AMT client. Create a pywsman client to connect to the target server """ def __init__(self, address, protocol, username, password): port = AMT_PROTOCOL_PORT_MAP[protocol] path = '/wsman' self.client = pywsman.Client(address, port, path, protocol, username, password) def wsman_get(self, resource_uri, options=None): """Get target server info :param options: client options :param resource_uri: a URI to an XML schema :returns: XmlDoc object :raises: AMTFailure if get unexpected response. :raises: AMTConnectFailure if unable to connect to the server. 
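Example (an illustrative sketch; the address and credentials are
assumptions, and resource_uris refers to
ironic.drivers.modules.amt.resource_uris)::

    client = Client('192.168.1.10', 'http', 'admin', 'password')
    doc = client.wsman_get(
        resource_uris.CIM_AssociatedPowerManagementService)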
""" if options is None: options = pywsman.ClientOptions() doc = self.client.get(options, resource_uri) item = 'Fault' fault = xml_find(doc, _SOAP_ENVELOPE, item) if fault is not None: LOG.exception(_LE('Call to AMT with URI %(uri)s failed: ' 'got Fault %(fault)s'), {'uri': resource_uri, 'fault': fault.text}) raise exception.AMTFailure(cmd='wsman_get') return doc def wsman_invoke(self, options, resource_uri, method, data=None): """Invoke method on target server :param options: client options :param resource_uri: a URI to an XML schema :param method: invoke method :param data: a XmlDoc as invoke input :returns: XmlDoc object :raises: AMTFailure if get unexpected response. :raises: AMTConnectFailure if unable to connect to the server. """ if data is None: doc = self.client.invoke(options, resource_uri, method) else: doc = self.client.invoke(options, resource_uri, method, data) item = "ReturnValue" return_value = xml_find(doc, resource_uri, item).text if return_value != RET_SUCCESS: LOG.exception(_LE("Call to AMT with URI %(uri)s and " "method %(method)s failed: return value " "was %(value)s"), {'uri': resource_uri, 'method': method, 'value': return_value}) raise exception.AMTFailure(cmd='wsman_invoke') return doc def parse_driver_info(node): """Parses and creates AMT driver info :param node: an Ironic node object. :returns: AMT driver info. :raises: MissingParameterValue if any required parameters are missing. :raises: InvalidParameterValue if any parameters have invalid values. """ info = node.driver_info or {} d_info = {} missing_info = [] for param in REQUIRED_PROPERTIES: value = info.get(param) if value: if not isinstance(value, six.binary_type): value = value.encode() d_info[param[4:]] = value else: missing_info.append(param) if missing_info: raise exception.MissingParameterValue(_( "AMT driver requires the following to be set in " "node's driver_info: %s.") % missing_info) d_info['uuid'] = node.uuid param = 'amt_protocol' protocol = info.get(param, CONF.amt.get(param[4:])) if protocol not in AMT_PROTOCOL_PORT_MAP: raise exception.InvalidParameterValue( _("Invalid protocol %s.") % protocol) if not isinstance(value, six.binary_type): protocol = protocol.encode() d_info[param[4:]] = protocol return d_info def get_wsman_client(node): """Return a AMT Client object :param node: an Ironic node object. :returns: a Client object :raises: MissingParameterValue if any required parameters are missing. :raises: InvalidParameterValue if any parameters have invalid values. """ driver_info = parse_driver_info(node) client = Client(address=driver_info['address'], protocol=driver_info['protocol'], username=driver_info['username'], password=driver_info['password']) return client def xml_find(doc, namespace, item): """Find the first element with namespace and item, in the XML doc :param doc: a doc object. :param namespace: the namespace of the element. :param item: the element name. :returns: the element object or None :raises: AMTConnectFailure if unable to connect to the server. """ if doc is None: raise exception.AMTConnectFailure() tree = ElementTree.fromstring(doc.root().string()) query = ('.//{%(namespace)s}%(item)s' % {'namespace': namespace, 'item': item}) return tree.find(query) def awake_amt_interface(node): """Wake up AMT interface. AMT interface goes to sleep after a period of time if the host is off. This method will ping AMT interface to wake it up. Because there is no guarantee that the AMT address in driver_info is correct, only ping the IP five times which is enough to wake it up. 
:param node: an Ironic node object. :raises: AMTConnectFailure if unable to connect to the server. """ awake_interval = CONF.amt.awake_interval if awake_interval == 0: return now = time.time() last_awake = AMT_AWAKE_CACHE.get(node.uuid, 0) if now - last_awake > awake_interval: cmd_args = ['ping', '-i', 0.2, '-c', 5, node.driver_info['amt_address']] try: utils.execute(*cmd_args) except processutils.ProcessExecutionError as err: LOG.error(_LE('Unable to awake AMT interface on node ' '%(node_id)s. Error: %(error)s'), {'node_id': node.uuid, 'error': err}) raise exception.AMTConnectFailure() else: LOG.debug(('Successfully awakened AMT interface on node ' '%(node_id)s.'), {'node_id': node.uuid}) AMT_AWAKE_CACHE[node.uuid] = now ironic-5.1.0/ironic/drivers/modules/amt/power.py0000664000567000056710000002253612674513466023067 0ustar jenkinsjenkins00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ AMT Power Driver """ import copy from oslo_config import cfg from oslo_log import log as logging from oslo_service import loopingcall from oslo_utils import excutils from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LI from ironic.common.i18n import _LW from ironic.common import states from ironic.conductor import task_manager from ironic.drivers import base from ironic.drivers.modules.amt import common as amt_common from ironic.drivers.modules.amt import resource_uris pywsman = importutils.try_import('pywsman') opts = [ cfg.IntOpt('max_attempts', default=3, help=_('Maximum number of times to attempt an AMT operation, ' 'before failing')), cfg.IntOpt('action_wait', default=10, help=_('Amount of time (in seconds) to wait, before retrying ' 'an AMT operation')) ] CONF = cfg.CONF CONF.register_opts(opts, group='amt') LOG = logging.getLogger(__name__) AMT_POWER_MAP = { states.POWER_ON: '2', states.POWER_OFF: '8', } def _generate_power_action_input(action): """Generate Xmldoc as set_power_state input. This generates a Xmldoc used as input for set_power_state. :param action: the power action. :returns: Xmldoc. 
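For a power-on request (action '2'), the generated document is shaped
roughly as follows (illustrative; namespaces abbreviated)::

    <RequestPowerStateChange_INPUT xmlns=".../2/CIM_PowerManagementService">
      <PowerState>2</PowerState>
      <ManagedElement>
        <Address>.../addressing/role/anonymous</Address>
        <ReferenceParameters>
          <ResourceURI>.../2/CIM_ComputerSystem</ResourceURI>
          <SelectorSet>
            <Selector Name="Name">ManagedSystem</Selector>
          </SelectorSet>
        </ReferenceParameters>
      </ManagedElement>
    </RequestPowerStateChange_INPUT>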
""" method_input = "RequestPowerStateChange_INPUT" address = 'http://schemas.xmlsoap.org/ws/2004/08/addressing' anonymous = ('http://schemas.xmlsoap.org/ws/2004/08/addressing/' 'role/anonymous') wsman = 'http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd' namespace = resource_uris.CIM_PowerManagementService doc = pywsman.XmlDoc(method_input) root = doc.root() root.set_ns(namespace) root.add(namespace, 'PowerState', action) child = root.add(namespace, 'ManagedElement', None) child.add(address, 'Address', anonymous) grand_child = child.add(address, 'ReferenceParameters', None) grand_child.add(wsman, 'ResourceURI', resource_uris.CIM_ComputerSystem) g_grand_child = grand_child.add(wsman, 'SelectorSet', None) g_g_grand_child = g_grand_child.add(wsman, 'Selector', 'ManagedSystem') g_g_grand_child.attr_add(wsman, 'Name', 'Name') return doc def _set_power_state(node, target_state): """Set power state of the AMT Client. :param node: a node object. :param target_state: desired power state. :raises: AMTFailure :raises: AMTConnectFailure """ amt_common.awake_amt_interface(node) client = amt_common.get_wsman_client(node) method = 'RequestPowerStateChange' options = pywsman.ClientOptions() options.add_selector('Name', 'Intel(r) AMT Power Management Service') doc = _generate_power_action_input(AMT_POWER_MAP[target_state]) try: client.wsman_invoke(options, resource_uris.CIM_PowerManagementService, method, doc) except (exception.AMTFailure, exception.AMTConnectFailure) as e: with excutils.save_and_reraise_exception(): LOG.exception(_LE("Failed to set power state %(state)s for " "node %(node_id)s with error: %(error)s."), {'state': target_state, 'node_id': node.uuid, 'error': e}) else: LOG.info(_LI("Power state set to %(state)s for node %(node_id)s"), {'state': target_state, 'node_id': node.uuid}) def _power_status(node): """Get the power status for a node. :param node: a node object. :returns: one of ironic.common.states POWER_OFF, POWER_ON or ERROR. :raises: AMTFailure. :raises: AMTConnectFailure. """ amt_common.awake_amt_interface(node) client = amt_common.get_wsman_client(node) namespace = resource_uris.CIM_AssociatedPowerManagementService try: doc = client.wsman_get(namespace) except (exception.AMTFailure, exception.AMTConnectFailure) as e: with excutils.save_and_reraise_exception(): LOG.exception(_LE("Failed to get power state for node %(node_id)s " "with error: %(error)s."), {'node_id': node.uuid, 'error': e}) item = "PowerState" power_state = amt_common.xml_find(doc, namespace, item).text for state in AMT_POWER_MAP: if power_state == AMT_POWER_MAP[state]: return state return states.ERROR def _set_and_wait(task, target_state): """Helper function for DynamicLoopingCall. This method changes the power state and polls AMT until the desired power state is reached. :param task: a TaskManager instance contains the target node. :param target_state: desired power state. :returns: one of ironic.common.states. :raises: PowerStateFailure if cannot set the node to target_state. :raises: AMTFailure. 
:raises: AMTConnectFailure :raises: InvalidParameterValue """ node = task.node driver = task.driver if target_state not in (states.POWER_ON, states.POWER_OFF): raise exception.InvalidParameterValue(_('Unsupported target_state: %s') % target_state) elif target_state == states.POWER_ON: boot_device = node.driver_internal_info.get('amt_boot_device') if boot_device and boot_device != amt_common.DEFAULT_BOOT_DEVICE: driver.management.ensure_next_boot_device(node, boot_device) def _wait(status): status['power'] = _power_status(node) if status['power'] == target_state: raise loopingcall.LoopingCallDone() if status['iter'] >= CONF.amt.max_attempts: status['power'] = states.ERROR LOG.warning(_LW("AMT failed to set power state %(state)s after " "%(tries)s retries on node %(node_id)s."), {'state': target_state, 'tries': status['iter'], 'node_id': node.uuid}) raise loopingcall.LoopingCallDone() try: _set_power_state(node, target_state) except Exception: # Log failures but keep trying LOG.warning(_LW("AMT set power state %(state)s for node %(node)s " "- Attempt %(attempt)s times of %(max_attempt)s " "failed."), {'state': target_state, 'node': node.uuid, 'attempt': status['iter'] + 1, 'max_attempt': CONF.amt.max_attempts}) status['iter'] += 1 status = {'power': None, 'iter': 0} timer = loopingcall.FixedIntervalLoopingCall(_wait, status) timer.start(interval=CONF.amt.action_wait).wait() if status['power'] != target_state: raise exception.PowerStateFailure(pstate=target_state) return status['power'] class AMTPower(base.PowerInterface): """AMT Power interface. This Power interface control the power of node by providing power on/off and reset functions. """ def get_properties(self): return copy.deepcopy(amt_common.COMMON_PROPERTIES) def validate(self, task): """Validate the driver_info in the node. Check if the driver_info contains correct required fields :param task: a TaskManager instance contains the target node. :raises: MissingParameterValue if any required parameters are missing. :raises: InvalidParameterValue if any parameters have invalid values. """ # FIXME(lintan): validate hangs if unable to reach AMT, so dont # connect to the node until bug 1314961 is resolved. amt_common.parse_driver_info(task.node) def get_power_state(self, task): """Get the power state from the node. :param task: a TaskManager instance contains the target node. :raises: AMTFailure. :raises: AMTConnectFailure. """ return _power_status(task.node) @task_manager.require_exclusive_lock def set_power_state(self, task, pstate): """Set the power state of the node. Turn the node power on or off. :param task: a TaskManager instance contains the target node. :param pstate: The desired power state of the node. :raises: PowerStateFailure if the power cannot set to pstate. :raises: AMTFailure. :raises: AMTConnectFailure. :raises: InvalidParameterValue """ _set_and_wait(task, pstate) @task_manager.require_exclusive_lock def reboot(self, task): """Cycle the power of the node :param task: a TaskManager instance contains the target node. :raises: PowerStateFailure if failed to reboot. :raises: AMTFailure. :raises: AMTConnectFailure. :raises: InvalidParameterValue """ _set_and_wait(task, states.POWER_OFF) _set_and_wait(task, states.POWER_ON) ironic-5.1.0/ironic/drivers/modules/amt/vendor.py0000664000567000056710000000313012674513466023215 0ustar jenkinsjenkins00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ AMT Vendor Methods """ from ironic.common import boot_devices from ironic.conductor import task_manager from ironic.drivers import base from ironic.drivers.modules import deploy_utils from ironic.drivers.modules import iscsi_deploy class AMTPXEVendorPassthru(iscsi_deploy.VendorPassthru): @base.passthru(['POST']) @task_manager.require_exclusive_lock def pass_deploy_info(self, task, **kwargs): if deploy_utils.get_boot_option(task.node) == "netboot": task.driver.management.ensure_next_boot_device(task.node, boot_devices.PXE) super(AMTPXEVendorPassthru, self).pass_deploy_info(task, **kwargs) @task_manager.require_exclusive_lock def continue_deploy(self, task, **kwargs): if deploy_utils.get_boot_option(task.node) == "netboot": task.driver.management.ensure_next_boot_device(task.node, boot_devices.PXE) super(AMTPXEVendorPassthru, self).continue_deploy(task, **kwargs) ironic-5.1.0/ironic/drivers/modules/amt/__init__.py0000664000567000056710000000000012674513466023450 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/modules/amt/resource_uris.py0000664000567000056710000000265512674513466024624 0ustar jenkinsjenkins00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ XML Schemas to define the requests sent to AMT """ CIM_AssociatedPowerManagementService = ('http://schemas.dmtf.org/wbem/wscim/' '1/cim-schema/2/' 'CIM_AssociatedPowerManagementService') CIM_PowerManagementService = ('http://schemas.dmtf.org/wbem/wscim/1/' 'cim-schema/2/CIM_PowerManagementService') CIM_ComputerSystem = ('http://schemas.dmtf.org/wbem/wscim/' '1/cim-schema/2/CIM_ComputerSystem') CIM_BootConfigSetting = ('http://schemas.dmtf.org/wbem/wscim/' '1/cim-schema/2/CIM_BootConfigSetting') CIM_BootSourceSetting = ('http://schemas.dmtf.org/wbem/wscim/' '1/cim-schema/2/CIM_BootSourceSetting') CIM_BootService = ('http://schemas.dmtf.org/wbem/wscim/' '1/cim-schema/2/CIM_BootService') ironic-5.1.0/ironic/drivers/modules/amt/management.py0000664000567000056710000002215512674513466024044 0ustar jenkinsjenkins00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
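# Usage sketch (illustrative only; 'task' is assumed to come from ironic's
# TaskManager for an AMT node, and boot_devices from ironic.common):
#
#     mgmt = AMTManagement()
#     mgmt.set_boot_device(task, boot_devices.PXE)
#
# The chosen device is only recorded in driver_internal_info at this point;
# it is pushed to the hardware on the next power-on via
# ensure_next_boot_device() below.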
""" AMT Management Driver """ import copy from oslo_log import log as logging from oslo_utils import excutils from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LI from ironic.conductor import task_manager from ironic.drivers import base from ironic.drivers.modules.amt import common as amt_common from ironic.drivers.modules.amt import resource_uris pywsman = importutils.try_import('pywsman') LOG = logging.getLogger(__name__) _ADDRESS = 'http://schemas.xmlsoap.org/ws/2004/08/addressing' _ANONYMOUS = 'http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous' _WSMAN = 'http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd' def _generate_change_boot_order_input(device): """Generate Xmldoc as change_boot_order input. This generates a Xmldoc used as input for change_boot_order. :param device: the boot device. :returns: Xmldoc. """ method_input = "ChangeBootOrder_INPUT" namespace = resource_uris.CIM_BootConfigSetting doc = pywsman.XmlDoc(method_input) root = doc.root() root.set_ns(namespace) child = root.add(namespace, 'Source', None) child.add(_ADDRESS, 'Address', _ANONYMOUS) grand_child = child.add(_ADDRESS, 'ReferenceParameters', None) grand_child.add(_WSMAN, 'ResourceURI', resource_uris.CIM_BootSourceSetting) g_grand_child = grand_child.add(_WSMAN, 'SelectorSet', None) g_g_grand_child = g_grand_child.add(_WSMAN, 'Selector', device) g_g_grand_child.attr_add(_WSMAN, 'Name', 'InstanceID') return doc def _set_boot_device_order(node, boot_device): """Set boot device order configuration of AMT Client. :param node: a node object :param boot_device: the boot device :raises: AMTFailure :raises: AMTConnectFailure """ amt_common.awake_amt_interface(node) client = amt_common.get_wsman_client(node) device = amt_common.BOOT_DEVICES_MAPPING[boot_device] doc = _generate_change_boot_order_input(device) method = 'ChangeBootOrder' options = pywsman.ClientOptions() options.add_selector('InstanceID', 'Intel(r) AMT: Boot Configuration 0') try: client.wsman_invoke(options, resource_uris.CIM_BootConfigSetting, method, doc) except (exception.AMTFailure, exception.AMTConnectFailure) as e: with excutils.save_and_reraise_exception(): LOG.exception(_LE("Failed to set boot device %(boot_device)s for " "node %(node_id)s with error: %(error)s."), {'boot_device': boot_device, 'node_id': node.uuid, 'error': e}) else: LOG.info(_LI("Successfully set boot device %(boot_device)s for " "node %(node_id)s"), {'boot_device': boot_device, 'node_id': node.uuid}) def _generate_enable_boot_config_input(): """Generate Xmldoc as enable_boot_config input. This generates a Xmldoc used as input for enable_boot_config. :returns: Xmldoc. """ method_input = "SetBootConfigRole_INPUT" namespace = resource_uris.CIM_BootService doc = pywsman.XmlDoc(method_input) root = doc.root() root.set_ns(namespace) child = root.add(namespace, 'BootConfigSetting', None) child.add(_ADDRESS, 'Address', _ANONYMOUS) grand_child = child.add(_ADDRESS, 'ReferenceParameters', None) grand_child.add(_WSMAN, 'ResourceURI', resource_uris.CIM_BootConfigSetting) g_grand_child = grand_child.add(_WSMAN, 'SelectorSet', None) g_g_grand_child = g_grand_child.add(_WSMAN, 'Selector', 'Intel(r) AMT: Boot Configuration 0') g_g_grand_child.attr_add(_WSMAN, 'Name', 'InstanceID') root.add(namespace, 'Role', '1') return doc def _enable_boot_config(node): """Enable boot configuration of AMT Client. 
:param node: a node object :raises: AMTFailure :raises: AMTConnectFailure """ amt_common.awake_amt_interface(node) client = amt_common.get_wsman_client(node) method = 'SetBootConfigRole' doc = _generate_enable_boot_config_input() options = pywsman.ClientOptions() options.add_selector('Name', 'Intel(r) AMT Boot Service') try: client.wsman_invoke(options, resource_uris.CIM_BootService, method, doc) except (exception.AMTFailure, exception.AMTConnectFailure) as e: with excutils.save_and_reraise_exception(): LOG.exception(_LE("Failed to enable boot config for node " "%(node_id)s with error: %(error)s."), {'node_id': node.uuid, 'error': e}) else: LOG.info(_LI("Successfully enabled boot config for node %(node_id)s."), {'node_id': node.uuid}) class AMTManagement(base.ManagementInterface): def get_properties(self): return copy.deepcopy(amt_common.COMMON_PROPERTIES) def validate(self, task): """Validate the driver_info in the node Check if the driver_info contains correct required fields :param task: a TaskManager instance contains the target node :raises: MissingParameterValue if any required parameters are missing. :raises: InvalidParameterValue if any parameters have invalid values. """ # FIXME(lintan): validate hangs if unable to reach AMT, so dont # connect to the node until bug 1314961 is resolved. amt_common.parse_driver_info(task.node) def get_supported_boot_devices(self, task): """Get a list of the supported boot devices. :param task: a task from TaskManager. :returns: A list with the supported boot devices. """ return list(amt_common.BOOT_DEVICES_MAPPING) @task_manager.require_exclusive_lock def set_boot_device(self, task, device, persistent=False): """Set the boot device for the task's node. Set the boot device to use on next boot of the node. :param task: a task from TaskManager. :param device: the boot device :param persistent: Boolean value. True if the boot device will persist to all future boots, False if not. Default: False. :raises: InvalidParameterValue if an invalid boot device is specified. """ node = task.node if device not in amt_common.BOOT_DEVICES_MAPPING: raise exception.InvalidParameterValue( _("set_boot_device called with invalid device " "%(device)s for node %(node_id)s." ) % {'device': device, 'node_id': node.uuid}) # AMT/vPro doesn't support set boot_device persistent, so we have to # save amt_boot_device/amt_boot_persistent in driver_internal_info. driver_internal_info = node.driver_internal_info driver_internal_info['amt_boot_device'] = device driver_internal_info['amt_boot_persistent'] = persistent node.driver_internal_info = driver_internal_info node.save() def get_boot_device(self, task): """Get the current boot device for the task's node. Returns the current boot device of the node. :param task: a task from TaskManager. :returns: a dictionary containing: :boot_device: the boot device :persistent: Whether the boot device will persist to all future boots or not, None if it is unknown. """ driver_internal_info = task.node.driver_internal_info device = driver_internal_info.get('amt_boot_device') persistent = driver_internal_info.get('amt_boot_persistent') if not device: device = amt_common.DEFAULT_BOOT_DEVICE persistent = True return {'boot_device': device, 'persistent': persistent} def ensure_next_boot_device(self, node, boot_device): """Set next boot device (one time only) of AMT Client. 
:param node: a node object :param boot_device: the boot device :raises: AMTFailure :raises: AMTConnectFailure """ driver_internal_info = node.driver_internal_info if not driver_internal_info.get('amt_boot_persistent'): driver_internal_info['amt_boot_device'] = ( amt_common.DEFAULT_BOOT_DEVICE) driver_internal_info['amt_boot_persistent'] = True node.driver_internal_info = driver_internal_info node.save() _set_boot_device_order(node, boot_device) _enable_boot_config(node) def get_sensors_data(self, task): raise NotImplementedError() ironic-5.1.0/ironic/drivers/modules/wol.py0000664000567000056710000001516512674513466021753 0ustar jenkinsjenkins00000000000000# Copyright 2015 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Ironic Wake-On-Lan power manager. """ import contextlib import socket import time from oslo_log import log from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LI from ironic.common import states from ironic.common import utils from ironic.conductor import task_manager from ironic.drivers import base LOG = log.getLogger(__name__) REQUIRED_PROPERTIES = {} OPTIONAL_PROPERTIES = { 'wol_host': _('Broadcast IP address; defaults to ' '255.255.255.255. Optional.'), 'wol_port': _("Destination port; defaults to 9. Optional."), } COMMON_PROPERTIES = REQUIRED_PROPERTIES.copy() COMMON_PROPERTIES.update(OPTIONAL_PROPERTIES) def _send_magic_packets(task, dest_host, dest_port): """Create and send magic packets. Creates and sends a magic packet for each MAC address registered in the Node. :param task: a TaskManager instance containing the node to act on. :param dest_host: The broadcast to this IP address. :param dest_port: The destination port. :raises: WolOperationError if an error occur when connecting to the host or sending the magic packets """ s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1) with contextlib.closing(s) as sock: for port in task.ports: address = port.address.replace(':', '') # TODO(lucasagomes): Implement sending the magic packets with # SecureON password feature. If your NIC is capable of, you can # set the password of your SecureON using the ethtool utility. data = 'FFFFFFFFFFFF' + (address * 16) packet = bytearray.fromhex(data) try: sock.sendto(packet, (dest_host, dest_port)) except socket.error as e: msg = (_("Failed to send Wake-On-Lan magic packets to " "node %(node)s port %(port)s. 
Error: %(error)s") % {'node': task.node.uuid, 'port': port.address, 'error': e}) LOG.exception(msg) raise exception.WolOperationError(msg) # let's not flood the network with broadcast packets time.sleep(0.5) def _parse_parameters(task): driver_info = task.node.driver_info host = driver_info.get('wol_host', '255.255.255.255') port = driver_info.get('wol_port', 9) port = utils.validate_network_port(port, 'wol_port') if len(task.ports) < 1: raise exception.MissingParameterValue(_( 'Wake-On-Lan needs at least one port resource to be ' 'registered in the node')) return {'host': host, 'port': port} class WakeOnLanPower(base.PowerInterface): """Wake-On-Lan Driver for Ironic This PowerManager class provides a mechanism for controlling power state via Wake-On-Lan. """ def get_properties(self): return COMMON_PROPERTIES def validate(self, task): """Validate driver. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue if parameters are invalid. :raises: MissingParameterValue if required parameters are missing. """ _parse_parameters(task) def get_power_state(self, task): """Not supported. Get the current power state of the task's node. This operation is not supported by the Wake-On-Lan driver. So value returned will be from the database and may not reflect the actual state of the system. :returns: POWER_OFF if power state is not set otherwise return the node's power_state value from the database. """ pstate = task.node.power_state return states.POWER_OFF if pstate is states.NOSTATE else pstate @task_manager.require_exclusive_lock def set_power_state(self, task, pstate): """Wakes the task's node on power on. Powering off is not supported. Wakes the task's node on. Wake-On-Lan does not support powering the task's node off so, just log it. :param task: a TaskManager instance containing the node to act on. :param pstate: The desired power state, one of ironic.common.states POWER_ON, POWER_OFF. :raises: InvalidParameterValue if parameters are invalid. :raises: MissingParameterValue if required parameters are missing. :raises: WolOperationError if an error occur when sending the magic packets """ node = task.node params = _parse_parameters(task) if pstate == states.POWER_ON: _send_magic_packets(task, params['host'], params['port']) elif pstate == states.POWER_OFF: LOG.info(_LI('Power off called for node %s. Wake-On-Lan does not ' 'support this operation. Manual intervention ' 'required to perform this action.'), node.uuid) else: raise exception.InvalidParameterValue(_( "set_power_state called for Node %(node)s with invalid " "power state %(pstate)s.") % {'node': node.uuid, 'pstate': pstate}) @task_manager.require_exclusive_lock def reboot(self, task): """Not supported. Cycles the power to the task's node. This operation is not fully supported by the Wake-On-Lan driver. So this method will just try to power the task's node on. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue if parameters are invalid. :raises: MissingParameterValue if required parameters are missing. :raises: WolOperationError if an error occur when sending the magic packets """ LOG.info(_LI('Reboot called for node %s. Wake-On-Lan does ' 'not fully support this operation. Trying to ' 'power on the node.'), task.node.uuid) self.set_power_state(task, states.POWER_ON) ironic-5.1.0/ironic/drivers/modules/pxe.py0000664000567000056710000005743212674513466021751 0ustar jenkinsjenkins00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ PXE Boot Interface """ import os import shutil from ironic_lib import utils as ironic_utils from oslo_config import cfg from oslo_log import log as logging from oslo_utils import fileutils from ironic.common import boot_devices from ironic.common import dhcp_factory from ironic.common import exception from ironic.common.glance_service import service_utils from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LW from ironic.common import image_service as service from ironic.common import paths from ironic.common import pxe_utils from ironic.common import states from ironic.drivers import base from ironic.drivers.modules import agent from ironic.drivers.modules import deploy_utils from ironic.drivers.modules import image_cache from ironic.drivers import utils as driver_utils pxe_opts = [ cfg.StrOpt('pxe_config_template', default=paths.basedir_def( 'drivers/modules/pxe_config.template'), help=_('On ironic-conductor node, template file for PXE ' 'configuration.')), cfg.StrOpt('uefi_pxe_config_template', default=paths.basedir_def( 'drivers/modules/elilo_efi_pxe_config.template'), help=_('On ironic-conductor node, template file for PXE ' 'configuration for UEFI boot loader.')), cfg.StrOpt('tftp_server', default='$my_ip', help=_("IP address of ironic-conductor node's TFTP server.")), cfg.StrOpt('tftp_root', default='/tftpboot', help=_("ironic-conductor node's TFTP root path. The " "ironic-conductor must have read/write access to this " "path.")), cfg.StrOpt('tftp_master_path', default='/tftpboot/master_images', help=_('On ironic-conductor node, directory where master TFTP ' 'images are stored on disk. ' 'Setting to <None> disables image caching.')), # NOTE(dekehn): Additional boot file options may be created in the event # other architectures require different boot files. cfg.StrOpt('pxe_bootfile_name', default='pxelinux.0', help=_('Bootfile DHCP parameter.')), cfg.StrOpt('uefi_pxe_bootfile_name', default='elilo.efi', help=_('Bootfile DHCP parameter for UEFI boot mode.')), cfg.BoolOpt('ipxe_enabled', default=False, help=_('Enable iPXE boot.')), cfg.StrOpt('ipxe_boot_script', default=paths.basedir_def( 'drivers/modules/boot.ipxe'), help=_('On ironic-conductor node, the path to the main iPXE ' 'script file.')), cfg.IntOpt('ipxe_timeout', default=0, help=_('Timeout value (in seconds) for downloading an image ' 'via iPXE. Defaults to 0 (no timeout)')), cfg.StrOpt('ip_version', default='4', choices=['4', '6'], help=_('The IP version that will be used for PXE booting. ' 'Defaults to 4. EXPERIMENTAL')), ] LOG = logging.getLogger(__name__) CONF = cfg.CONF CONF.register_opts(pxe_opts, group='pxe') CONF.import_opt('deploy_callback_timeout', 'ironic.conductor.manager', group='conductor') REQUIRED_PROPERTIES = { 'deploy_kernel': _("UUID (from Glance) of the deployment kernel. " "Required."), 'deploy_ramdisk': _("UUID (from Glance) of the ramdisk that is " "mounted at boot time. 
Required."), } COMMON_PROPERTIES = REQUIRED_PROPERTIES # TODO(rameshg87): This method is only for allowing time for deployers to # migrate to CONF.pxe. after the CONF.agent. have been # deprecated. Remove this in Mitaka release. def _get_pxe_conf_option(task, opt_name): """Returns the value of PXEBoot provided CONF option. This method returns the value of PXEBoot CONF option after checking the driver.deploy. If driver.deploy is AgentDeploy and the value of the CONF option is not it's default value, it returns the value of CONF.agent.agent_. Otherwise, it returns the value of CONF.pxe.. There are only 2 such parameters right now - pxe_config_template and pxe_append_params. Caller has to make sure that only these 2 options are passed. :param task: TaskManager instance. :param opt_name: The CONF opt whose value is desired. :returns: The value of the CONF option. :raises: AttributeError, if such a CONF option doesn't exist. """ if isinstance(task.driver.deploy, agent.AgentDeploy): agent_opt_name = 'agent_' + opt_name current_value = getattr(CONF.agent, agent_opt_name) opt_object = [x for x in agent.agent_opts if x.name == agent_opt_name][0] default_value = opt_object.default # Replace $pybasedir which can occur in pxe_config_template # default value. default_value = default_value.replace('$pybasedir', CONF.pybasedir) if current_value != default_value: LOG.warning( _LW("The CONF option [agent]agent_%(opt_name)s is " "deprecated and will be removed in Mitaka release of " "Ironic. Please use [pxe]%(opt_name)s instead."), {'opt_name': opt_name}) return current_value # Either task.driver.deploy is ISCSIDeploy() or the default value hasn't # been modified. So return the value of corresponding parameter in # [pxe] group. return getattr(CONF.pxe, opt_name) def _parse_driver_info(node): """Gets the driver specific Node deployment info. This method validates whether the 'driver_info' property of the supplied node contains the required information for this driver to deploy images to the node. :param node: a single Node. :returns: A dict with the driver_info values. :raises: MissingParameterValue """ info = node.driver_info d_info = {k: info.get(k) for k in ('deploy_kernel', 'deploy_ramdisk')} error_msg = _("Cannot validate PXE bootloader. Some parameters were" " missing in node's driver_info") deploy_utils.check_for_missing_params(d_info, error_msg) return d_info def _get_instance_image_info(node, ctx): """Generate the paths for TFTP files for instance related images. This method generates the paths for instance kernel and instance ramdisk. This method also updates the node, so caller should already have a non-shared lock on the node. :param node: a node object :param ctx: context :returns: a dictionary whose keys are the names of the images (kernel, ramdisk) and values are the absolute paths of them. If it's a whole disk image, it returns an empty dictionary. 
""" image_info = {} if node.driver_internal_info.get('is_whole_disk_image'): return image_info root_dir = pxe_utils.get_root_dir() i_info = node.instance_info labels = ('kernel', 'ramdisk') d_info = deploy_utils.get_image_instance_info(node) if not (i_info.get('kernel') and i_info.get('ramdisk')): glance_service = service.GlanceImageService(version=1, context=ctx) iproperties = glance_service.show(d_info['image_source'])['properties'] for label in labels: i_info[label] = str(iproperties[label + '_id']) node.instance_info = i_info node.save() for label in labels: image_info[label] = ( i_info[label], os.path.join(root_dir, node.uuid, label) ) return image_info def _get_deploy_image_info(node): """Generate the paths for TFTP files for deploy images. This method generates the paths for the deploy kernel and deploy ramdisk. :param node: a node object :returns: a dictionary whose keys are the names of the images ( deploy_kernel, deploy_ramdisk) and values are the absolute paths of them. :raises: MissingParameterValue, if deploy_kernel/deploy_ramdisk is missing in node's driver_info. """ d_info = _parse_driver_info(node) return pxe_utils.get_deploy_kr_info(node.uuid, d_info) def _build_pxe_config_options(task, pxe_info): """Build the PXE config options for a node This method builds the PXE boot options for a node, given all the required parameters. The options should then be passed to pxe_utils.create_pxe_config to create the actual config files. :param task: A TaskManager object :param pxe_info: a dict of values to set on the configuration file :returns: A dictionary of pxe options to be used in the pxe bootfile template. """ node = task.node is_whole_disk_image = node.driver_internal_info.get('is_whole_disk_image') # These are dummy values to satisfy elilo. # image and initrd fields in elilo config cannot be blank. kernel = 'no_kernel' ramdisk = 'no_ramdisk' if CONF.pxe.ipxe_enabled: deploy_kernel = '/'.join([CONF.deploy.http_url, node.uuid, 'deploy_kernel']) deploy_ramdisk = '/'.join([CONF.deploy.http_url, node.uuid, 'deploy_ramdisk']) if not is_whole_disk_image: kernel = '/'.join([CONF.deploy.http_url, node.uuid, 'kernel']) ramdisk = '/'.join([CONF.deploy.http_url, node.uuid, 'ramdisk']) else: deploy_kernel = pxe_info['deploy_kernel'][1] deploy_ramdisk = pxe_info['deploy_ramdisk'][1] if not is_whole_disk_image: # It is possible that we don't have kernel/ramdisk or even # image_source to determine if it's a whole disk image or not. # For example, when transitioning to 'available' state for first # time from 'manage' state. Retain dummy values if we don't have # kernel/ramdisk. if 'kernel' in pxe_info: kernel = pxe_info['kernel'][1] if 'ramdisk' in pxe_info: ramdisk = pxe_info['ramdisk'][1] pxe_options = { 'deployment_aki_path': deploy_kernel, 'deployment_ari_path': deploy_ramdisk, 'pxe_append_params': _get_pxe_conf_option(task, 'pxe_append_params'), 'tftp_server': CONF.pxe.tftp_server, 'aki_path': kernel, 'ari_path': ramdisk, 'ipxe_timeout': CONF.pxe.ipxe_timeout * 1000 } return pxe_options def validate_boot_option_for_uefi(node): """In uefi boot mode, validate if the boot option is compatible. This method raises exception if whole disk image being deployed in UEFI boot mode without 'boot_option' being set to 'local'. :param node: a single Node. 
:raises: InvalidParameterValue """ boot_mode = deploy_utils.get_boot_mode_for_deploy(node) boot_option = deploy_utils.get_boot_option(node) if (boot_mode == 'uefi' and node.driver_internal_info.get('is_whole_disk_image') and boot_option != 'local'): LOG.error(_LE("Whole disk image with netboot is not supported in UEFI " "boot mode.")) raise exception.InvalidParameterValue(_( "Conflict: Whole disk image being used for deploy, but " "cannot be used with node %(node_uuid)s configured to use " "UEFI boot with netboot option") % {'node_uuid': node.uuid}) def validate_boot_parameters_for_trusted_boot(node): """Check if boot parameters are valid for trusted boot.""" boot_mode = deploy_utils.get_boot_mode_for_deploy(node) boot_option = deploy_utils.get_boot_option(node) is_whole_disk_image = node.driver_internal_info.get('is_whole_disk_image') # 'is_whole_disk_image' is not supported by trusted boot, because there is # no Kernel/Ramdisk to measure at all. if (boot_mode != 'bios' or is_whole_disk_image or boot_option != 'netboot'): msg = (_("Trusted boot is only supported in BIOS boot mode with " "netboot and without whole_disk_image, but Node " "%(node_uuid)s was configured with boot_mode: %(boot_mode)s, " "boot_option: %(boot_option)s, is_whole_disk_image: " "%(is_whole_disk_image)s: at least one of them is wrong, and " "this can be caused by enabling secure boot.") % {'node_uuid': node.uuid, 'boot_mode': boot_mode, 'boot_option': boot_option, 'is_whole_disk_image': is_whole_disk_image}) LOG.error(msg) raise exception.InvalidParameterValue(msg) @image_cache.cleanup(priority=25) class TFTPImageCache(image_cache.ImageCache): def __init__(self): super(TFTPImageCache, self).__init__( CONF.pxe.tftp_master_path, # MiB -> B cache_size=CONF.pxe.image_cache_size * 1024 * 1024, # min -> sec cache_ttl=CONF.pxe.image_cache_ttl * 60) def _cache_ramdisk_kernel(ctx, node, pxe_info): """Fetch the necessary kernels and ramdisks for the instance.""" fileutils.ensure_tree( os.path.join(pxe_utils.get_root_dir(), node.uuid)) LOG.debug("Fetching necessary kernel and ramdisk for node %s", node.uuid) deploy_utils.fetch_images(ctx, TFTPImageCache(), list(pxe_info.values()), CONF.force_raw_images) def _clean_up_pxe_env(task, images_info): """Clean up the PXE environment of all the images in images_info. Cleans up the PXE environment for the mentioned images in images_info. :param task: a TaskManager object :param images_info: A dictionary of images whose keys are the image names to be cleaned up (kernel, ramdisk, etc) and values are a tuple of identifier and absolute path. """ for label in images_info: path = images_info[label][1] ironic_utils.unlink_without_raise(path) pxe_utils.clean_up_pxe_config(task) TFTPImageCache().clean_up() class PXEBoot(base.BootInterface): def get_properties(self): """Return the properties of the interface. :returns: dictionary of <property name>:<property description> entries. """ return COMMON_PROPERTIES def validate(self, task): """Validate the PXE-specific info for booting deploy/instance images. This method validates the PXE-specific info for booting the ramdisk and instance on the node. If invalid, raises an exception; otherwise returns None. :param task: a task from TaskManager. :returns: None :raises: InvalidParameterValue, if some parameters are invalid. :raises: MissingParameterValue, if some required parameters are missing. 
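As an illustrative summary of the checks performed below (a sketch; the branch follows is_whole_disk_image and is_glance_image):

    # whole disk image           -> no kernel/ramdisk properties required
    # Glance partition image     -> image must carry 'kernel_id'/'ramdisk_id'
    # non-Glance partition image -> instance_info must carry 'kernel'/'ramdisk'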
""" node = task.node if not driver_utils.get_node_mac_addresses(task): raise exception.MissingParameterValue( _("Node %s does not have any port associated with it.") % node.uuid) # Get the boot_mode capability value. boot_mode = deploy_utils.get_boot_mode_for_deploy(node) if CONF.pxe.ipxe_enabled: if (not CONF.deploy.http_url or not CONF.deploy.http_root): raise exception.MissingParameterValue(_( "iPXE boot is enabled but no HTTP URL or HTTP " "root was specified.")) if boot_mode == 'uefi': validate_boot_option_for_uefi(node) # Check the trusted_boot capabilities value. deploy_utils.validate_capabilities(node) if deploy_utils.is_trusted_boot_requested(node): # Check if 'boot_option' and boot mode is compatible with # trusted boot. validate_boot_parameters_for_trusted_boot(node) _parse_driver_info(node) d_info = deploy_utils.get_image_instance_info(node) if node.driver_internal_info.get('is_whole_disk_image'): props = [] elif service_utils.is_glance_image(d_info['image_source']): props = ['kernel_id', 'ramdisk_id'] else: props = ['kernel', 'ramdisk'] deploy_utils.validate_image_properties(task.context, d_info, props) def prepare_ramdisk(self, task, ramdisk_params): """Prepares the boot of Ironic ramdisk using PXE. This method prepares the boot of the deploy kernel/ramdisk after reading relevant information from the node's driver_info and instance_info. :param task: a task from TaskManager. :param ramdisk_params: the parameters to be passed to the ramdisk. pxe driver passes these parameters as kernel command-line arguments. :returns: None :raises: MissingParameterValue, if some information is missing in node's driver_info or instance_info. :raises: InvalidParameterValue, if some information provided is invalid. :raises: IronicException, if some power or set boot boot device operation failed on the node. """ node = task.node # TODO(deva): optimize this if rerun on existing files if CONF.pxe.ipxe_enabled: # Copy the iPXE boot script to HTTP root directory bootfile_path = os.path.join( CONF.deploy.http_root, os.path.basename(CONF.pxe.ipxe_boot_script)) shutil.copyfile(CONF.pxe.ipxe_boot_script, bootfile_path) dhcp_opts = pxe_utils.dhcp_options_for_instance(task) provider = dhcp_factory.DHCPFactory() provider.update_dhcp(task, dhcp_opts) pxe_info = _get_deploy_image_info(node) # NODE: Try to validate and fetch instance images only # if we are in DEPLOYING state. if node.provision_state == states.DEPLOYING: pxe_info.update(_get_instance_image_info(node, task.context)) pxe_options = _build_pxe_config_options(task, pxe_info) pxe_options.update(ramdisk_params) if deploy_utils.get_boot_mode_for_deploy(node) == 'uefi': pxe_config_template = CONF.pxe.uefi_pxe_config_template else: pxe_config_template = _get_pxe_conf_option(task, 'pxe_config_template') pxe_utils.create_pxe_config(task, pxe_options, pxe_config_template) deploy_utils.try_set_boot_device(task, boot_devices.PXE) # FIXME(lucasagomes): If it's local boot we should not cache # the image kernel and ramdisk (Or even require it). _cache_ramdisk_kernel(task.context, node, pxe_info) def clean_up_ramdisk(self, task): """Cleans up the boot of ironic ramdisk. This method cleans up the PXE environment that was setup for booting the deploy ramdisk. It unlinks the deploy kernel/ramdisk in the node's directory in tftproot and removes it's PXE config. :param task: a task from TaskManager. 
:returns: None """ node = task.node try: images_info = _get_deploy_image_info(node) except exception.MissingParameterValue as e: LOG.warning(_LW('Could not get deploy image info ' 'to clean up images for node %(node)s: %(err)s'), {'node': node.uuid, 'err': e}) else: _clean_up_pxe_env(task, images_info) def prepare_instance(self, task): """Prepares the boot of instance. This method prepares the boot of the instance after reading relevant information from the node's instance_info. In case of netboot, it updates the dhcp entries and switches the PXE config. In case of localboot, it cleans up the PXE config. :param task: a task from TaskManager. :returns: None """ node = task.node boot_option = deploy_utils.get_boot_option(node) if boot_option != "local": # Make sure that the instance kernel/ramdisk is cached. # This is for the takeover scenario for active nodes. instance_image_info = _get_instance_image_info( task.node, task.context) _cache_ramdisk_kernel(task.context, task.node, instance_image_info) # If it's going to PXE boot we need to update the DHCP server dhcp_opts = pxe_utils.dhcp_options_for_instance(task) provider = dhcp_factory.DHCPFactory() provider.update_dhcp(task, dhcp_opts) iwdi = task.node.driver_internal_info.get('is_whole_disk_image') try: root_uuid_or_disk_id = task.node.driver_internal_info[ 'root_uuid_or_disk_id' ] except KeyError: if not iwdi: LOG.warning( _LW("The UUID for the root partition can't be " "found, unable to switch the pxe config from " "deployment mode to service (boot) mode for " "node %(node)s"), {"node": task.node.uuid}) else: LOG.warning( _LW("The disk id for the whole disk image can't " "be found, unable to switch the pxe config " "from deployment mode to service (boot) mode " "for node %(node)s"), {"node": task.node.uuid}) else: pxe_config_path = pxe_utils.get_pxe_config_file_path( task.node.uuid) deploy_utils.switch_pxe_config( pxe_config_path, root_uuid_or_disk_id, deploy_utils.get_boot_mode_for_deploy(node), iwdi, deploy_utils.is_trusted_boot_requested(node)) # In case boot mode changes from bios to uefi, boot device # order may get lost in some platforms. Better to re-apply # boot device. deploy_utils.try_set_boot_device(task, boot_devices.PXE) else: # If it's going to boot from the local disk, we don't need # PXE config files. They still need to be generated as part # of the prepare() because the deployment does PXE boot the # deploy ramdisk pxe_utils.clean_up_pxe_config(task) deploy_utils.try_set_boot_device(task, boot_devices.DISK) def clean_up_instance(self, task): """Cleans up the boot of instance. This method cleans up the environment that was setup for booting the instance. It unlinks the instance kernel/ramdisk in node's directory in tftproot and removes the PXE config. :param task: a task from TaskManager. :returns: None """ node = task.node try: images_info = _get_instance_image_info(node, task.context) except exception.MissingParameterValue as e: LOG.warning(_LW('Could not get instance image info ' 'to clean up images for node %(node)s: %(err)s'), {'node': node.uuid, 'err': e}) else: _clean_up_pxe_env(task, images_info) ironic-5.1.0/ironic/drivers/modules/ucs/0000775000567000056710000000000012674513633021356 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/modules/ucs/power.py0000664000567000056710000002104712674513466023074 0ustar jenkinsjenkins00000000000000# Copyright 2015, Cisco Systems. # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Ironic Cisco UCSM interfaces. Provides basic power control of servers managed by Cisco UCSM using PyUcs Sdk. """ from oslo_config import cfg from oslo_log import log as logging from oslo_service import loopingcall from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common import states from ironic.conductor import task_manager from ironic.drivers import base from ironic.drivers.modules.ucs import helper as ucs_helper ucs_power = importutils.try_import('UcsSdk.utils.power') ucs_error = importutils.try_import('UcsSdk.utils.exception') opts = [ cfg.IntOpt('max_retry', default=6, help=_('Number of times a power operation needs to be ' 'retried')), cfg.IntOpt('action_interval', default=5, help=_('Amount of time in seconds to wait in between power ' 'operations')), ] CONF = cfg.CONF CONF.register_opts(opts, group='cisco_ucs') LOG = logging.getLogger(__name__) UCS_TO_IRONIC_POWER_STATE = { 'up': states.POWER_ON, 'down': states.POWER_OFF, } IRONIC_TO_UCS_POWER_STATE = { states.POWER_ON: 'up', states.POWER_OFF: 'down', states.REBOOT: 'hard-reset-immediate' } def _wait_for_state_change(target_state, ucs_power_handle): """Wait and check for the power state change.""" state = [None] retries = [0] def _wait(state, retries): state[0] = ucs_power_handle.get_power_state() if ((retries[0] != 0) and ( UCS_TO_IRONIC_POWER_STATE.get(state[0]) == target_state)): raise loopingcall.LoopingCallDone() if retries[0] > CONF.cisco_ucs.max_retry: state[0] = states.ERROR raise loopingcall.LoopingCallDone() retries[0] += 1 timer = loopingcall.FixedIntervalLoopingCall(_wait, state, retries) timer.start(interval=CONF.cisco_ucs.action_interval).wait() return UCS_TO_IRONIC_POWER_STATE.get(state[0], states.ERROR) class Power(base.PowerInterface): """Cisco Power Interface. This PowerInterface class provides a mechanism for controlling the power state of servers managed by Cisco UCS Manager. """ def get_properties(self): """Returns common properties of the driver.""" return ucs_helper.COMMON_PROPERTIES def validate(self, task): """Check that node 'driver_info' is valid. Check that node 'driver_info' contains the required fields. :param task: instance of `ironic.manager.task_manager.TaskManager`. :raises: MissingParameterValue if required CiscoDriver parameters are missing. """ ucs_helper.parse_driver_info(task.node) @ucs_helper.requires_ucs_client def get_power_state(self, task, helper=None): """Get the current power state. Poll the host for the current power state of the node. :param task: instance of `ironic.manager.task_manager.TaskManager`. :param helper: ucs helper instance :raises: MissingParameterValue if required CiscoDriver parameters are missing. :raises: UcsOperationError on error from UCS Client. :returns: power state. One of :class:`ironic.common.states`. 
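Illustration of the translation this method performs (a sketch; values follow the UCS_TO_IRONIC_POWER_STATE map defined above, with the ERROR fallback coming from the .get() default):

    'up'   -> states.POWER_ON
    'down' -> states.POWER_OFF
    other  -> states.ERROR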
""" try: power_handle = ucs_power.UcsPower(helper) power_status = power_handle.get_power_state() except ucs_error.UcsOperationError as ucs_exception: LOG.error(_LE("%(driver)s: get_power_state operation failed for " "node %(uuid)s with error: %(msg)s."), {'driver': task.node.driver, 'uuid': task.node.uuid, 'msg': ucs_exception}) operation = _('getting power status') raise exception.UcsOperationError(operation=operation, error=ucs_exception, node=task.node.uuid) return UCS_TO_IRONIC_POWER_STATE.get(power_status, states.ERROR) @task_manager.require_exclusive_lock @ucs_helper.requires_ucs_client def set_power_state(self, task, pstate, helper=None): """Turn the power on or off. Set the power state of a node. :param task: instance of `ironic.manager.task_manager.TaskManager`. :param pstate: Either POWER_ON or POWER_OFF from :class: `ironic.common.states`. :param helper: ucs helper instance :raises: InvalidParameterValue if an invalid power state was specified. :raises: MissingParameterValue if required CiscoDriver parameters are missing. :raises: UcsOperationError on error from UCS Client. :raises: PowerStateFailure if the desired power state couldn't be set. """ if pstate not in (states.POWER_ON, states.POWER_OFF): msg = _("set_power_state called with invalid power state " "'%s'") % pstate raise exception.InvalidParameterValue(msg) try: ucs_power_handle = ucs_power.UcsPower(helper) power_status = ucs_power_handle.get_power_state() if UCS_TO_IRONIC_POWER_STATE.get(power_status) != pstate: ucs_power_handle.set_power_state( IRONIC_TO_UCS_POWER_STATE.get(pstate)) else: return except ucs_error.UcsOperationError as ucs_exception: LOG.error(_LE("%(driver)s: set_power_state operation failed for " "node %(uuid)s with error: %(msg)s."), {'driver': task.node.driver, 'uuid': task.node.uuid, 'msg': ucs_exception}) operation = _("setting power status") raise exception.UcsOperationError(operation=operation, error=ucs_exception, node=task.node.uuid) state = _wait_for_state_change(pstate, ucs_power_handle) if state != pstate: timeout = CONF.cisco_ucs.action_interval * CONF.cisco_ucs.max_retry LOG.error(_LE("%(driver)s: driver failed to change node %(uuid)s " "power state to %(state)s within %(timeout)s " "seconds."), {'driver': task.node.driver, 'uuid': task.node.uuid, 'state': pstate, 'timeout': timeout}) raise exception.PowerStateFailure(pstate=pstate) @task_manager.require_exclusive_lock @ucs_helper.requires_ucs_client def reboot(self, task, helper=None): """Cycles the power to a node. :param task: a TaskManager instance. :param helper: ucs helper instance. :raises: UcsOperationError on error from UCS Client. :raises: PowerStateFailure if the final state of the node is not POWER_ON. 
""" try: ucs_power_handle = ucs_power.UcsPower(helper) ucs_power_handle.reboot() except ucs_error.UcsOperationError as ucs_exception: LOG.error(_LE("%(driver)s: driver failed to reset node %(uuid)s " "power state."), {'driver': task.node.driver, 'uuid': task.node.uuid}) operation = _("rebooting") raise exception.UcsOperationError(operation=operation, error=ucs_exception, node=task.node.uuid) state = _wait_for_state_change(states.POWER_ON, ucs_power_handle) if state != states.POWER_ON: timeout = CONF.cisco_ucs.action_interval * CONF.cisco_ucs.max_retry LOG.error(_LE("%(driver)s: driver failed to reboot node %(uuid)s " "within %(timeout)s seconds."), {'driver': task.node.driver, 'uuid': task.node.uuid, 'timeout': timeout}) raise exception.PowerStateFailure(pstate=states.POWER_ON) ironic-5.1.0/ironic/drivers/modules/ucs/__init__.py0000664000567000056710000000000012674513466023461 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/modules/ucs/helper.py0000664000567000056710000001051212674513466023212 0ustar jenkinsjenkins00000000000000# Copyright 2015, Cisco Systems. # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Ironic Cisco UCSM helper functions """ from oslo_log import log as logging from oslo_utils import importutils import six from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.drivers.modules import deploy_utils ucs_helper = importutils.try_import('UcsSdk.utils.helper') ucs_error = importutils.try_import('UcsSdk.utils.exception') LOG = logging.getLogger(__name__) REQUIRED_PROPERTIES = { 'ucs_address': _('IP or Hostname of the UCS Manager. Required.'), 'ucs_username': _('UCS Manager admin/server-profile username. Required.'), 'ucs_password': _('UCS Manager password. Required.'), 'ucs_service_profile': _('UCS Manager service-profile name. Required.') } COMMON_PROPERTIES = REQUIRED_PROPERTIES def requires_ucs_client(func): """Creates handle to connect to UCS Manager. This method is being used as a decorator method. It establishes connection with UCS Manager. And creates a session. Any method that has to perform operation on UCS Manager, requries this session, which can use this method as decorator method. Use this method as decorator method requires having helper keyword argument in the definition. :param func: function using this as a decorator. :returns: a wrapper function that performs the required tasks mentioned above before and after calling the actual function. """ @six.wraps(func) def wrapper(self, task, *args, **kwargs): if kwargs.get('helper') is None: kwargs['helper'] = CiscoUcsHelper(task) try: kwargs['helper'].connect_ucsm() return func(self, task, *args, **kwargs) finally: kwargs['helper'].logout() return wrapper def parse_driver_info(node): """Parses and creates Cisco driver info :param node: An Ironic node object. :returns: dictonary that contains node.driver_info parameter/values. :raises: MissingParameterValue if any required parameters are missing. 
""" info = {} for param in REQUIRED_PROPERTIES: info[param] = node.driver_info.get(param) error_msg = (_("%s driver requires these parameters to be set in the " "node's driver_info.") % node.driver) deploy_utils.check_for_missing_params(info, error_msg) return info class CiscoUcsHelper(object): """Cisco UCS helper. Performs session managemnt.""" def __init__(self, task): """Initialize with UCS Manager details. :param task: instance of `ironic.manager.task_manager.TaskManager`. """ info = parse_driver_info(task.node) self.address = info['ucs_address'] self.username = info['ucs_username'] self.password = info['ucs_password'] # service_profile is used by the utilities functions in UcsSdk.utils.*. self.service_profile = info['ucs_service_profile'] self.handle = None self.uuid = task.node.uuid def connect_ucsm(self): """Creates the UcsHandle :raises: UcsConnectionError, if ucs helper failes to establish session with UCS Manager. """ try: success, self.handle = ucs_helper.generate_ucsm_handle( self.address, self.username, self.password) except ucs_error.UcsConnectionError as ucs_exception: LOG.error(_LE("Cisco client: service unavailable for node " "%(uuid)s."), {'uuid': self.uuid}) raise exception.UcsConnectionError(error=ucs_exception, node=self.uuid) def logout(self): """Logouts the current active session.""" if self.handle: self.handle.Logout() ironic-5.1.0/ironic/drivers/modules/ucs/management.py0000664000567000056710000001306512674513466024055 0ustar jenkinsjenkins00000000000000# Copyright 2015, Cisco Systems. # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Ironic Cisco UCSM interfaces. Provides Management interface operations of servers managed by Cisco UCSM using PyUcs Sdk. """ from oslo_log import log as logging from oslo_utils import importutils from ironic.common import boot_devices from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.drivers import base from ironic.drivers.modules.ucs import helper as ucs_helper ucs_error = importutils.try_import('UcsSdk.utils.exception') ucs_mgmt = importutils.try_import('UcsSdk.utils.management') LOG = logging.getLogger(__name__) UCS_TO_IRONIC_BOOT_DEVICE = { 'storage': boot_devices.DISK, 'disk': boot_devices.DISK, 'pxe': boot_devices.PXE, 'read-only-vm': boot_devices.CDROM, 'cdrom': boot_devices.CDROM } class UcsManagement(base.ManagementInterface): def get_properties(self): return ucs_helper.COMMON_PROPERTIES def validate(self, task): """Check that 'driver_info' contains UCSM login credentials. Validates whether the 'driver_info' property of the supplied task's node contains the required credentials information. :param task: a task from TaskManager. :raises: MissingParameterValue if a required parameter is missing """ ucs_helper.parse_driver_info(task.node) def get_supported_boot_devices(self, task): """Get a list of the supported boot devices. :param task: a task from TaskManager. :returns: A list with the supported boot devices defined in :mod:`ironic.common.boot_devices`. 
""" return list(set(UCS_TO_IRONIC_BOOT_DEVICE.values())) @ucs_helper.requires_ucs_client def set_boot_device(self, task, device, persistent=False, helper=None): """Set the boot device for the task's node. Set the boot device to use on next reboot of the node. :param task: a task from TaskManager. :param device: the boot device, one of 'PXE, DISK or CDROM'. :param persistent: Boolean value. True if the boot device will persist to all future boots, False if not. Default: False. Ignored by this driver. :param helper: ucs helper instance. :raises: MissingParameterValue if required CiscoDriver parameters are missing. :raises: UcsOperationError on error from UCS client. setting the boot device. """ try: mgmt_handle = ucs_mgmt.BootDeviceHelper(helper) mgmt_handle.set_boot_device(device, persistent) except ucs_error.UcsOperationError as ucs_exception: LOG.error(_LE("%(driver)s: client failed to set boot device " "%(device)s for node %(uuid)s."), {'driver': task.node.driver, 'device': device, 'uuid': task.node.uuid}) operation = _('setting boot device') raise exception.UcsOperationError(operation=operation, error=ucs_exception, node=task.node.uuid) LOG.debug("Node %(uuid)s set to boot from %(device)s.", {'uuid': task.node.uuid, 'device': device}) @ucs_helper.requires_ucs_client def get_boot_device(self, task, helper=None): """Get the current boot device for the task's node. Provides the current boot device of the node. :param task: a task from TaskManager. :param helper: ucs helper instance. :returns: a dictionary containing: :boot_device: the boot device, one of :mod:`ironic.common.boot_devices` [PXE, DISK, CDROM] or None if it is unknown. :persistent: Whether the boot device will persist to all future boots or not, None if it is unknown. :raises: MissingParameterValue if a required UCS parameter is missing. :raises: UcsOperationError on error from UCS client, while setting the boot device. """ try: mgmt_handle = ucs_mgmt.BootDeviceHelper(helper) boot_device = mgmt_handle.get_boot_device() except ucs_error.UcsOperationError as ucs_exception: LOG.error(_LE("%(driver)s: client failed to get boot device for " "node %(uuid)s."), {'driver': task.node.driver, 'uuid': task.node.uuid}) operation = _('getting boot device') raise exception.UcsOperationError(operation=operation, error=ucs_exception, node=task.node.uuid) boot_device['boot_device'] = ( UCS_TO_IRONIC_BOOT_DEVICE[boot_device['boot_device']]) return boot_device def get_sensors_data(self, task): """Get sensors data. Not implemented by this driver. :param task: a TaskManager instance. """ raise NotImplementedError() ironic-5.1.0/ironic/drivers/modules/msftocs/0000775000567000056710000000000012674513633022242 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/modules/msftocs/common.py0000664000567000056710000000754312674513466024121 0ustar jenkinsjenkins00000000000000# Copyright 2015 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import copy import re import six from ironic.common import exception from ironic.common.i18n import _ from ironic.drivers.modules.msftocs import msftocsclient REQUIRED_PROPERTIES = { 'msftocs_base_url': _('Base url of the OCS chassis manager REST API, ' 'e.g.: http://10.0.0.1:8000. Required.'), 'msftocs_blade_id': _('Blade id, must be a number between 1 and the ' 'maximum number of blades available in the chassis. ' 'Required.'), 'msftocs_username': _('Username to access the chassis manager REST API. ' 'Required.'), 'msftocs_password': _('Password to access the chassis manager REST API. ' 'Required.'), } def get_client_info(driver_info): """Returns an instance of the REST API client and the blade id. :param driver_info: the node's driver_info dict. """ client = msftocsclient.MSFTOCSClientApi(driver_info['msftocs_base_url'], driver_info['msftocs_username'], driver_info['msftocs_password']) return client, driver_info['msftocs_blade_id'] def get_properties(): """Returns the driver's properties.""" return copy.deepcopy(REQUIRED_PROPERTIES) def _is_valid_url(url): """Checks whether a URL is valid. :param url: a url string. :returns: True if the url is valid or None, False otherwise. """ r = re.compile( r'^https?://' r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)*[A-Z]{2,6}\.?|' r'localhost|' r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' r'(?::\d+)?' r'(?:/?|[/?]\S+)$', re.IGNORECASE) return bool(isinstance(url, six.string_types) and r.search(url)) def _check_required_properties(driver_info): """Checks if all required properties are present. :param driver_info: the node's driver_info dict. :raises: MissingParameterValue if one or more required properties are missing. """ missing_properties = set(REQUIRED_PROPERTIES) - set(driver_info) if missing_properties: raise exception.MissingParameterValue( _('The following parameters were missing: %s') % ' '.join(missing_properties)) def parse_driver_info(node): """Checks for the required properties and values validity. :param node: the target node. :raises: MissingParameterValue if one or more required properties are missing. :raises: InvalidParameterValue if a parameter value is invalid. """ driver_info = node.driver_info _check_required_properties(driver_info) base_url = driver_info.get('msftocs_base_url') if not _is_valid_url(base_url): raise exception.InvalidParameterValue( _('"%s" is not a valid "msftocs_base_url"') % base_url) blade_id = driver_info.get('msftocs_blade_id') try: blade_id = int(blade_id) except ValueError: raise exception.InvalidParameterValue( _('"%s" is not a valid "msftocs_blade_id"') % blade_id) if blade_id < 1: raise exception.InvalidParameterValue( _('"msftocs_blade_id" must be greater than 0. The provided value ' 'is: %s') % blade_id) ironic-5.1.0/ironic/drivers/modules/msftocs/power.py0000664000567000056710000000761212674513466023762 0ustar jenkinsjenkins00000000000000# Copyright 2015 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" MSFT OCS Power Driver """ from oslo_log import log from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common import states from ironic.conductor import task_manager from ironic.drivers import base from ironic.drivers.modules.msftocs import common as msftocs_common from ironic.drivers.modules.msftocs import msftocsclient LOG = log.getLogger(__name__) POWER_STATES_MAP = { msftocsclient.POWER_STATUS_ON: states.POWER_ON, msftocsclient.POWER_STATUS_OFF: states.POWER_OFF, } class MSFTOCSPower(base.PowerInterface): def get_properties(self): """Returns the driver's properties.""" return msftocs_common.get_properties() def validate(self, task): """Validate the driver_info in the node. Check if the driver_info contains correct required fields. :param task: a TaskManager instance containing the target node. :raises: MissingParameterValue if any required parameters are missing. :raises: InvalidParameterValue if any parameters have invalid values. """ msftocs_common.parse_driver_info(task.node) def get_power_state(self, task): """Get the power state from the node. :param task: a TaskManager instance containing the target node. :raises: MSFTOCSClientApiException. """ client, blade_id = msftocs_common.get_client_info( task.node.driver_info) return POWER_STATES_MAP[client.get_blade_state(blade_id)] @task_manager.require_exclusive_lock def set_power_state(self, task, pstate): """Set the power state of the node. Turn the node power on or off. :param task: a TaskManager instance contains the target node. :param pstate: The desired power state of the node. :raises: PowerStateFailure if the power cannot set to pstate. :raises: InvalidParameterValue """ client, blade_id = msftocs_common.get_client_info( task.node.driver_info) try: if pstate == states.POWER_ON: client.set_blade_on(blade_id) elif pstate == states.POWER_OFF: client.set_blade_off(blade_id) else: raise exception.InvalidParameterValue( _('Unsupported target_state: %s') % pstate) except exception.MSFTOCSClientApiException as ex: LOG.exception(_LE("Changing the power state to %(pstate)s failed. " "Error: %(err_msg)s"), {"pstate": pstate, "err_msg": ex}) raise exception.PowerStateFailure(pstate=pstate) @task_manager.require_exclusive_lock def reboot(self, task): """Cycle the power of the node :param task: a TaskManager instance contains the target node. :raises: PowerStateFailure if failed to reboot. """ client, blade_id = msftocs_common.get_client_info( task.node.driver_info) try: client.set_blade_power_cycle(blade_id) except exception.MSFTOCSClientApiException as ex: LOG.exception(_LE("Reboot failed. Error: %(err_msg)s"), {"err_msg": ex}) raise exception.PowerStateFailure(pstate=states.REBOOT) ironic-5.1.0/ironic/drivers/modules/msftocs/__init__.py0000664000567000056710000000000012674513466024345 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/modules/msftocs/msftocsclient.py0000664000567000056710000001454412674513466025505 0ustar jenkinsjenkins00000000000000# Copyright 2015 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ MSFT OCS ChassisManager v2.0 REST API client https://github.com/MSOpenTech/ChassisManager """ import posixpath from xml.etree import ElementTree from oslo_log import log import requests from requests import auth from requests import exceptions as requests_exceptions from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE LOG = log.getLogger(__name__) WCSNS = 'http://schemas.datacontract.org/2004/07/Microsoft.GFS.WCS.Contracts' COMPLETION_CODE_SUCCESS = "Success" BOOT_TYPE_UNKNOWN = 0 BOOT_TYPE_NO_OVERRIDE = 1 BOOT_TYPE_FORCE_PXE = 2 BOOT_TYPE_FORCE_DEFAULT_HDD = 3 BOOT_TYPE_FORCE_INTO_BIOS_SETUP = 4 BOOT_TYPE_FORCE_FLOPPY_OR_REMOVABLE = 5 BOOT_TYPE_MAP = { 'Unknown': BOOT_TYPE_UNKNOWN, 'NoOverride': BOOT_TYPE_NO_OVERRIDE, 'ForcePxe': BOOT_TYPE_FORCE_PXE, 'ForceDefaultHdd': BOOT_TYPE_FORCE_DEFAULT_HDD, 'ForceIntoBiosSetup': BOOT_TYPE_FORCE_INTO_BIOS_SETUP, 'ForceFloppyOrRemovable': BOOT_TYPE_FORCE_FLOPPY_OR_REMOVABLE, } POWER_STATUS_ON = "ON" POWER_STATUS_OFF = "OFF" class MSFTOCSClientApi(object): def __init__(self, base_url, username, password): self._base_url = base_url self._username = username self._password = password def _exec_cmd(self, rel_url): """Executes a command by calling the chassis manager API.""" url = posixpath.join(self._base_url, rel_url) try: response = requests.get( url, auth=auth.HTTPBasicAuth(self._username, self._password)) response.raise_for_status() except requests_exceptions.RequestException as ex: msg = _("HTTP call failed: %s") % ex LOG.exception(msg) raise exception.MSFTOCSClientApiException(msg) xml_response = response.text LOG.debug("Call to %(url)s got response: %(xml_response)s", {"url": url, "xml_response": xml_response}) return xml_response def _check_completion_code(self, xml_response): try: et = ElementTree.fromstring(xml_response) except ElementTree.ParseError as ex: LOG.exception(_LE("XML parsing failed: %s"), ex) raise exception.MSFTOCSClientApiException( _("Invalid XML: %s") % xml_response) item = et.find("./n:completionCode", namespaces={'n': WCSNS}) if item is None or item.text != COMPLETION_CODE_SUCCESS: raise exception.MSFTOCSClientApiException( _("Operation failed: %s") % xml_response) return et def get_blade_state(self, blade_id): """Returns whether a blade's chipset is receiving power (soft-power). :param blade_id: the blade id :returns: one of: POWER_STATUS_ON, POWER_STATUS_OFF :raises: MSFTOCSClientApiException """ et = self._check_completion_code( self._exec_cmd("GetBladeState?bladeId=%d" % blade_id)) return et.find('./n:bladeState', namespaces={'n': WCSNS}).text def set_blade_on(self, blade_id): """Supplies power to a blade chipset (soft-power state). :param blade_id: the blade id :raises: MSFTOCSClientApiException """ self._check_completion_code( self._exec_cmd("SetBladeOn?bladeId=%d" % blade_id)) def set_blade_off(self, blade_id): """Shuts down a given blade (soft-power state). :param blade_id: the blade id :raises: MSFTOCSClientApiException """ self._check_completion_code( self._exec_cmd("SetBladeOff?bladeId=%d" % blade_id)) def set_blade_power_cycle(self, blade_id, off_time=0): """Performs a soft reboot of a given blade. 
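Illustrative request issued by this method (a sketch assuming blade_id=3 and the default off_time=0; the base URL is the one passed to the client constructor):

    GET <base_url>/SetBladeActivePowerCycle?bladeId=3&offTime=0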
:param blade_id: the blade id :param off_time: seconds to wait between shutdown and boot :raises: MSFTOCSClientApiException """ self._check_completion_code( self._exec_cmd("SetBladeActivePowerCycle?bladeId=%(blade_id)d&" "offTime=%(off_time)d" % {"blade_id": blade_id, "off_time": off_time})) def get_next_boot(self, blade_id): """Returns the next boot device configured for a given blade. :param blade_id: the blade id :returns: one of: BOOT_TYPE_UNKNOWN, BOOT_TYPE_NO_OVERRIDE, BOOT_TYPE_FORCE_PXE, BOOT_TYPE_FORCE_DEFAULT_HDD, BOOT_TYPE_FORCE_INTO_BIOS_SETUP, BOOT_TYPE_FORCE_FLOPPY_OR_REMOVABLE :raises: MSFTOCSClientApiException """ et = self._check_completion_code( self._exec_cmd("GetNextBoot?bladeId=%d" % blade_id)) return BOOT_TYPE_MAP[ et.find('./n:nextBoot', namespaces={'n': WCSNS}).text] def set_next_boot(self, blade_id, boot_type, persistent=True, uefi=True): """Sets the next boot device for a given blade. :param blade_id: the blade id :param boot_type: possible values: BOOT_TYPE_UNKNOWN, BOOT_TYPE_NO_OVERRIDE, BOOT_TYPE_FORCE_PXE, BOOT_TYPE_FORCE_DEFAULT_HDD, BOOT_TYPE_FORCE_INTO_BIOS_SETUP, BOOT_TYPE_FORCE_FLOPPY_OR_REMOVABLE :param persistent: whether this setting affects the next boot only or every subsequent boot :param uefi: True if UEFI, False otherwise :raises: MSFTOCSClientApiException """ self._check_completion_code( self._exec_cmd( "SetNextBoot?bladeId=%(blade_id)d&bootType=%(boot_type)d&" "uefi=%(uefi)s&persistent=%(persistent)s" % {"blade_id": blade_id, "boot_type": boot_type, "uefi": str(uefi).lower(), "persistent": str(persistent).lower()})) ironic-5.1.0/ironic/drivers/modules/msftocs/management.py0000664000567000056710000001132712674513466024740 0ustar jenkinsjenkins00000000000000# Copyright 2015 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from ironic.common import boot_devices from ironic.common import exception from ironic.common.i18n import _ from ironic.conductor import task_manager from ironic.drivers import base from ironic.drivers.modules.msftocs import common as msftocs_common from ironic.drivers.modules.msftocs import msftocsclient from ironic.drivers import utils as drivers_utils BOOT_TYPE_TO_DEVICE_MAP = { msftocsclient.BOOT_TYPE_FORCE_PXE: boot_devices.PXE, msftocsclient.BOOT_TYPE_FORCE_DEFAULT_HDD: boot_devices.DISK, msftocsclient.BOOT_TYPE_FORCE_INTO_BIOS_SETUP: boot_devices.BIOS, } DEVICE_TO_BOOT_TYPE_MAP = {v: k for k, v in BOOT_TYPE_TO_DEVICE_MAP.items()} DEFAULT_BOOT_DEVICE = boot_devices.DISK class MSFTOCSManagement(base.ManagementInterface): def get_properties(self): """Returns the driver's properties.""" return msftocs_common.get_properties() def validate(self, task): """Validate the driver_info in the node. Check if the driver_info contains correct required fields. :param task: a TaskManager instance containing the target node. :raises: MissingParameterValue if any required parameters are missing. :raises: InvalidParameterValue if any parameters have invalid values. 
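For instance (an illustrative sketch of the failure modes implemented by common.parse_driver_info):

    task.node.driver_info['msftocs_blade_id'] = '0'  # -> InvalidParameterValue
    del task.node.driver_info['msftocs_base_url']    # -> MissingParameterValue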
""" msftocs_common.parse_driver_info(task.node) def get_supported_boot_devices(self, task): """Get a list of the supported boot devices. :param task: a task from TaskManager. :returns: A list with the supported boot devices. """ return list(BOOT_TYPE_TO_DEVICE_MAP.values()) def _check_valid_device(self, device, node): """Checks if the desired boot device is valid for this driver. :param device: a boot device. :param node: the target node. :raises: InvalidParameterValue if the boot device is not valid. """ if device not in DEVICE_TO_BOOT_TYPE_MAP: raise exception.InvalidParameterValue( _("set_boot_device called with invalid device %(device)s for " "node %(node_id)s.") % {'device': device, 'node_id': node.uuid}) @task_manager.require_exclusive_lock def set_boot_device(self, task, device, persistent=False): """Set the boot device for the task's node. Set the boot device to use on next boot of the node. :param task: a task from TaskManager. :param device: the boot device. :param persistent: Boolean value. True if the boot device will persist to all future boots, False if not. Default: False. :raises: InvalidParameterValue if an invalid boot device is specified. """ self._check_valid_device(device, task.node) client, blade_id = msftocs_common.get_client_info( task.node.driver_info) boot_mode = drivers_utils.get_node_capability(task.node, 'boot_mode') uefi = (boot_mode == 'uefi') boot_type = DEVICE_TO_BOOT_TYPE_MAP[device] client.set_next_boot(blade_id, boot_type, persistent, uefi) def get_boot_device(self, task): """Get the current boot device for the task's node. Returns the current boot device of the node. :param task: a task from TaskManager. :returns: a dictionary containing: :boot_device: the boot device :persistent: Whether the boot device will persist to all future boots or not, None if it is unknown. """ client, blade_id = msftocs_common.get_client_info( task.node.driver_info) device = BOOT_TYPE_TO_DEVICE_MAP.get( client.get_next_boot(blade_id), DEFAULT_BOOT_DEVICE) # Note(alexpilotti): Although the ChasssisManager REST API allows to # specify the persistent boot status in SetNextBoot, currently it does # not provide a way to retrieve the value with GetNextBoot. # This is being addressed in the ChassisManager API. return {'boot_device': device, 'persistent': None} def get_sensors_data(self, task): raise NotImplementedError() ironic-5.1.0/ironic/drivers/modules/master_grub_cfg.txt0000664000567000056710000000020312674513466024455 0ustar jenkinsjenkins00000000000000set default=master set timeout=5 set hidden_timeout_quiet=false menuentry "master" { configfile /tftpboot/$net_default_ip.conf } ironic-5.1.0/ironic/drivers/modules/oneview/0000775000567000056710000000000012674513633022240 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/modules/oneview/common.py0000664000567000056710000002523612674513466024116 0ustar jenkinsjenkins00000000000000# # Copyright 2015 Hewlett Packard Development Company, LP # Copyright 2015 Universidade Federal de Campina Grande # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from oslo_log import log as logging from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LW from ironic.common import states from ironic.drivers import utils LOG = logging.getLogger(__name__) client = importutils.try_import('oneview_client.client') oneview_states = importutils.try_import('oneview_client.states') oneview_exceptions = importutils.try_import('oneview_client.exceptions') opts = [ cfg.StrOpt('manager_url', help=_('URL where OneView is available')), cfg.StrOpt('username', help=_('OneView username to be used')), cfg.StrOpt('password', secret=True, help=_('OneView password to be used')), cfg.BoolOpt('allow_insecure_connections', default=False, help=_('Option to allow insecure connection with OneView')), cfg.StrOpt('tls_cacert_file', default=None, help=_('Path to CA certificate')), cfg.IntOpt('max_polling_attempts', default=12, help=_('Max connection retries to check changes on OneView')), ] CONF = cfg.CONF CONF.register_opts(opts, group='oneview') REQUIRED_ON_DRIVER_INFO = { 'server_hardware_uri': _("Server Hardware URI. Required in driver_info."), } REQUIRED_ON_PROPERTIES = { 'server_hardware_type_uri': _( "Server Hardware Type URI. Required in properties/capabilities." ), } # TODO(gabriel-bezerra): Move 'server_profile_template_uri' to # REQUIRED_ON_PROPERTIES after Mitaka. See methods get_oneview_info, # verify_node_info from this file; and test_verify_node_info_missing_spt # and test_deprecated_spt_in_driver_info* from test_common tests. OPTIONAL_ON_PROPERTIES = { 'enclosure_group_uri': _( "Enclosure Group URI. Optional in properties/capabilities."), 'server_profile_template_uri': _( "Server Profile Template URI to clone from. " "Deprecated in driver_info. " "Required in properties/capabilities."), } COMMON_PROPERTIES = {} COMMON_PROPERTIES.update(REQUIRED_ON_DRIVER_INFO) COMMON_PROPERTIES.update(REQUIRED_ON_PROPERTIES) COMMON_PROPERTIES.update(OPTIONAL_ON_PROPERTIES) def get_oneview_client(): """Generates an instance of the OneView client. Generates an instance of the OneView client using the imported oneview_client library. :returns: an instance of the OneView client """ oneview_client = client.Client( manager_url=CONF.oneview.manager_url, username=CONF.oneview.username, password=CONF.oneview.password, allow_insecure_connections=CONF.oneview.allow_insecure_connections, tls_cacert_file=CONF.oneview.tls_cacert_file, max_polling_attempts=CONF.oneview.max_polling_attempts ) return oneview_client def verify_node_info(node): """Verifies if fields and namespaces of a node are valid. Verifies if the 'driver_info' field and the 'properties/capabilities' namespace exist and are not empty. 
:param: node: node object to be verified :raises: InvalidParameterValue if required node capabilities and/or driver_info are malformed or missing :raises: MissingParameterValue if required node capabilities and/or driver_info are missing """ capabilities_dict = utils.capabilities_to_dict( node.properties.get('capabilities', '') ) driver_info = node.driver_info _verify_node_info('properties/capabilities', capabilities_dict, REQUIRED_ON_PROPERTIES) # TODO(gabriel-bezerra): Remove this after Mitaka try: _verify_node_info('properties/capabilities', capabilities_dict, ['server_profile_template_uri']) except exception.MissingParameterValue: try: _verify_node_info('driver_info', driver_info, ['server_profile_template_uri']) LOG.warning( _LW("Using 'server_profile_template_uri' in driver_info is " "now deprecated and will be ignored in future releases. " "Node %s should have it in its properties/capabilities " "instead."), node.uuid ) except exception.MissingParameterValue: raise exception.MissingParameterValue( _("Missing 'server_profile_template_uri' parameter value in " "properties/capabilities") ) # end _verify_node_info('driver_info', driver_info, REQUIRED_ON_DRIVER_INFO) def get_oneview_info(node): """Gets OneView information from the node. :param: node: node object to get information from :returns: a dictionary containing: :server_hardware_uri: the uri of the server hardware in OneView :server_hardware_type_uri: the uri of the server hardware type in OneView :enclosure_group_uri: the uri of the enclosure group in OneView :server_profile_template_uri: the uri of the server profile template in OneView :raises InvalidParameterValue if node capabilities are malformed """ capabilities_dict = utils.capabilities_to_dict( node.properties.get('capabilities', '') ) driver_info = node.driver_info oneview_info = { 'server_hardware_uri': driver_info.get('server_hardware_uri'), 'server_hardware_type_uri': capabilities_dict.get('server_hardware_type_uri'), 'enclosure_group_uri': capabilities_dict.get('enclosure_group_uri'), 'server_profile_template_uri': capabilities_dict.get('server_profile_template_uri') or driver_info.get('server_profile_template_uri'), } return oneview_info def validate_oneview_resources_compatibility(task): """Validates if the node configuration is consistent with OneView. This method calls python-oneviewclient functions to validate if the node configuration is consistent with the OneView resources it represents, including server_hardware_uri, server_hardware_type_uri, server_profile_template_uri, enclosure_group_uri and node ports. Also verifies if a Server Profile is applied to the Server Hardware the node represents. If any validation fails, python-oneviewclient will raise an appropriate OneViewException. :param: task: a TaskManager instance containing the node to act on. 
""" node = task.node node_ports = task.ports try: oneview_client = get_oneview_client() oneview_info = get_oneview_info(node) oneview_client.validate_node_server_hardware( oneview_info, node.properties.get('memory_mb'), node.properties.get('cpus') ) oneview_client.validate_node_server_hardware_type(oneview_info) oneview_client.check_server_profile_is_applied(oneview_info) oneview_client.is_node_port_mac_compatible_with_server_profile( oneview_info, node_ports ) oneview_client.validate_node_enclosure_group(oneview_info) oneview_client.validate_node_server_profile_template(oneview_info) except oneview_exceptions.OneViewException as oneview_exc: msg = (_("Error validating node resources with OneView: %s") % oneview_exc) LOG.error(msg) raise exception.OneViewError(error=msg) def translate_oneview_power_state(power_state): """Translates OneView's power states strings to Ironic's format. :param: power_state: power state string to be translated :returns: the power state translated """ power_states_map = { oneview_states.ONEVIEW_POWER_ON: states.POWER_ON, oneview_states.ONEVIEW_POWERING_OFF: states.POWER_ON, oneview_states.ONEVIEW_POWER_OFF: states.POWER_OFF, oneview_states.ONEVIEW_POWERING_ON: states.POWER_OFF, oneview_states.ONEVIEW_RESETTING: states.REBOOT } return power_states_map.get(power_state, states.ERROR) def _verify_node_info(node_namespace, node_info_dict, info_required): """Verify if info_required is present in node_namespace of the node info. """ missing_keys = set(info_required) - set(node_info_dict) if missing_keys: raise exception.MissingParameterValue( _("Missing the keys for the following OneView data in node's " "%(namespace)s: %(missing_keys)s.") % {'namespace': node_namespace, 'missing_keys': ', '.join(missing_keys) } ) # False and 0 can still be considered as valid values missing_values_keys = [k for k in info_required if node_info_dict[k] in ('', None)] if missing_values_keys: missing_keys = ["%s:%s" % (node_namespace, k) for k in missing_values_keys] raise exception.MissingParameterValue( _("Missing parameter value for: '%s'") % "', '".join(missing_keys) ) def node_has_server_profile(func): """Checks if the node's Server Hardware as a Server Profile associated. """ def inner(*args, **kwargs): task = args[1] oneview_info = get_oneview_info(task.node) oneview_client = get_oneview_client() try: node_has_server_profile = ( oneview_client.get_server_profile_from_hardware(oneview_info) ) except oneview_exceptions.OneViewException as oneview_exc: LOG.error( _LE("Failed to get server profile from OneView appliance for" "node %(node)s. Error: %(message)s"), {"node": task.node.uuid, "message": oneview_exc} ) raise exception.OneViewError(error=oneview_exc) if not node_has_server_profile: raise exception.OperationNotPermitted( _("A Server Profile is not associated with node %s.") % task.node.uuid ) return func(*args, **kwargs) return inner ironic-5.1.0/ironic/drivers/modules/oneview/power.py0000664000567000056710000001213712674513466023756 0ustar jenkinsjenkins00000000000000# # Copyright 2015 Hewlett Packard Development Company, LP # Copyright 2015 Universidade Federal de Campina Grande # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common import states from ironic.conductor import task_manager from ironic.drivers import base from ironic.drivers.modules.oneview import common LOG = logging.getLogger(__name__) oneview_exceptions = importutils.try_import('oneview_client.exceptions') class OneViewPower(base.PowerInterface): def get_properties(self): return common.COMMON_PROPERTIES def validate(self, task): """Checks required info on 'driver_info' and validates node with OneView Validates whether the 'oneview_info' property of the supplied task's node contains the required info such as server_hardware_uri, server_hardware_type, server_profile_template_uri and enclosure_group_uri. Also, checks if the server profile of the node is applied, if NICs are valid for the server profile of the node, and if the server hardware attributes (memory, vcpu count) are consistent with OneView. :param task: a task from TaskManager. :raises: MissingParameterValue if a required parameter is missing. :raises: InvalidParameterValue if parameters set are inconsistent with resources in OneView """ common.verify_node_info(task.node) try: common.validate_oneview_resources_compatibility(task) except exception.OneViewError as oneview_exc: raise exception.InvalidParameterValue(oneview_exc) def get_power_state(self, task): """Gets the current power state. :param task: a TaskManager instance. :returns: one of :mod:`ironic.common.states` POWER_OFF, POWER_ON or ERROR. :raises: OneViewError if it fails to retrieve the power state of the OneView resource """ oneview_info = common.get_oneview_info(task.node) oneview_client = common.get_oneview_client() try: power_state = oneview_client.get_node_power_state(oneview_info) except oneview_exceptions.OneViewException as oneview_exc: LOG.error( _LE("Error getting power state for node %(node)s. Error: " "%(error)s"), {'node': task.node.uuid, 'error': oneview_exc} ) raise exception.OneViewError(error=oneview_exc) return common.translate_oneview_power_state(power_state) @task_manager.require_exclusive_lock def set_power_state(self, task, power_state): """Turn the current power state on or off. :param task: a TaskManager instance. :param power_state: The desired power state POWER_ON, POWER_OFF or REBOOT from :mod:`ironic.common.states`. :raises: InvalidParameterValue if an invalid power state was specified. :raises: PowerStateFailure if the power couldn't be set to power_state. :raises: OneViewError if OneView fails setting the power state.
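For example (a hypothetical usage sketch, not code from this module), a caller that already holds the exclusive task lock could request power-on with: task.driver.power.set_power_state(task, states.POWER_ON)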
""" oneview_info = common.get_oneview_info(task.node) oneview_client = common.get_oneview_client() LOG.debug('Setting power state of node %(node_uuid)s to ' '%(power_state)s', {'node_uuid': task.node.uuid, 'power_state': power_state}) try: if power_state == states.POWER_ON: oneview_client.power_on(oneview_info) elif power_state == states.POWER_OFF: oneview_client.power_off(oneview_info) elif power_state == states.REBOOT: oneview_client.power_off(oneview_info) oneview_client.power_on(oneview_info) else: raise exception.InvalidParameterValue( _("set_power_state called with invalid power state %s.") % power_state) except oneview_exceptions.OneViewException as exc: raise exception.OneViewError( _("Error setting power state: %s") % exc ) @task_manager.require_exclusive_lock def reboot(self, task): """Reboot the node :param task: a TaskManager instance. :param node: The Node. :raises: PowerStateFailure if the final state of the node is not POWER_ON. """ self.set_power_state(task, states.REBOOT) ironic-5.1.0/ironic/drivers/modules/oneview/vendor.py0000664000567000056710000001114512674513466024115 0ustar jenkinsjenkins00000000000000# # Copyright 2015 Hewlett Packard Development Company, LP # Copyright 2015 Universidade Federal de Campina Grande # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log import retrying from ironic.common.i18n import _ from ironic.common.i18n import _LI from ironic.common.i18n import _LW from ironic.common import states from ironic.conductor import utils as manager_utils from ironic.drivers.modules import agent from ironic.drivers.modules import deploy_utils LOG = log.getLogger(__name__) CONF = agent.CONF # NOTE (thiagop): We overwrite this interface because we cannot change the boot # device of OneView managed blades while they are still powered on. We moved # the call of node_set_boot_device from reboot_to_instance to # reboot_and_finish_deploy and changed the behavior to shutdown the node before # doing it. # TODO(thiagop): remove this interface once bug/1503855 is fixed class AgentVendorInterface(agent.AgentVendorInterface): def reboot_to_instance(self, task, **kwargs): task.process_event('resume') node = task.node error = self.check_deploy_success(node) if error is not None: # TODO(jimrollenhagen) power off if using neutron dhcp to # align with pxe driver? msg = (_('node %(node)s command status errored: %(error)s') % {'node': node.uuid, 'error': error}) LOG.error(msg) deploy_utils.set_failed_state(task, msg) return LOG.info(_LI('Image successfully written to node %s'), node.uuid) LOG.debug('Rebooting node %s to instance', node.uuid) self.reboot_and_finish_deploy(task) # NOTE(TheJulia): If we deployed a whole disk image, we # should expect a whole disk image and clean-up the tftp files # on-disk incase the node is disregarding the boot preference. # TODO(rameshg87): Not all in-tree drivers using reboot_to_instance # have a boot interface. So include a check for now. Remove this # check once all in-tree drivers have a boot interface. 
if task.driver.boot: task.driver.boot.clean_up_ramdisk(task) def reboot_and_finish_deploy(self, task): """Helper method to trigger reboot on the node and finish deploy. This method initiates a reboot on the node. On success, it marks the deploy as complete. On failure, it logs the error and marks the deploy as failed. :param task: a TaskManager object containing the node :raises: InstanceDeployFailure if the node reboot failed. """ wait = CONF.agent.post_deploy_get_power_state_retry_interval * 1000 attempts = CONF.agent.post_deploy_get_power_state_retries + 1 @retrying.retry( stop_max_attempt_number=attempts, retry_on_result=lambda state: state != states.POWER_OFF, wait_fixed=wait ) def _wait_until_powered_off(task): return task.driver.power.get_power_state(task) node = task.node try: try: self._client.power_off(node) _wait_until_powered_off(task) except Exception as e: LOG.warning( _LW('Failed to soft power off node %(node_uuid)s ' 'within %(timeout)d seconds. Error: %(error)s'), {'node_uuid': node.uuid, 'timeout': (wait * (attempts - 1)) / 1000, 'error': e}) manager_utils.node_power_action(task, states.POWER_OFF) manager_utils.node_set_boot_device(task, 'disk', persistent=True) manager_utils.node_power_action(task, states.POWER_ON) except Exception as e: msg = (_('Error rebooting node %(node)s after deploy. ' 'Error: %(error)s') % {'node': node.uuid, 'error': e}) self._log_and_raise_deployment_error(task, msg) task.process_event('done') LOG.info(_LI('Deployment to node %s done'), task.node.uuid) ironic-5.1.0/ironic/drivers/modules/oneview/__init__.py0000664000567000056710000000000012674513466024343 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/modules/oneview/management.py0000664000567000056710000001455612674513466024735 0ustar jenkinsjenkins00000000000000# # Copyright 2015 Hewlett Packard Development Company, LP # Copyright 2015 Universidade Federal de Campina Grande # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from oslo_utils import importutils from ironic.common import boot_devices from ironic.common import exception from ironic.common.i18n import _ from ironic.conductor import task_manager from ironic.drivers import base from ironic.drivers.modules.oneview import common LOG = logging.getLogger(__name__) BOOT_DEVICE_MAPPING_TO_OV = { boot_devices.DISK: 'HardDisk', boot_devices.PXE: 'PXE', boot_devices.CDROM: 'CD', } BOOT_DEVICE_OV_TO_GENERIC = { v: k for k, v in BOOT_DEVICE_MAPPING_TO_OV.items() } oneview_exceptions = importutils.try_import('oneview_client.exceptions') class OneViewManagement(base.ManagementInterface): def get_properties(self): return common.COMMON_PROPERTIES def validate(self, task): """Checks required info on 'driver_info' and validates node with OneView Validates whether the 'driver_info' property of the supplied task's node contains the required info such as server_hardware_uri, server_hardware_type, server_profile_template_uri and enclosure_group_uri.
Also, checks if the server profile of the node is applied, if NICs are valid for the server profile of the node, and if the server hardware attributes (memory, vcpu count) are consistent with OneView. :param task: a task from TaskManager. :raises: MissingParameterValue if a required parameter is missing. :raises: InvalidParameterValue if parameters set are inconsistent with resources in OneView """ common.verify_node_info(task.node) try: common.validate_oneview_resources_compatibility(task) except exception.OneViewError as oneview_exc: raise exception.InvalidParameterValue(oneview_exc) def get_supported_boot_devices(self, task): """Gets a list of the supported boot devices. :param task: a task from TaskManager. :returns: A list with the supported boot devices defined in :mod:`ironic.common.boot_devices`. """ return sorted(BOOT_DEVICE_MAPPING_TO_OV.keys()) @task_manager.require_exclusive_lock @common.node_has_server_profile def set_boot_device(self, task, device, persistent=False): """Sets the boot device for a node. Sets the boot device to use on next reboot of the node. :param task: a task from TaskManager. :param device: the boot device, one of the supported devices listed in :mod:`ironic.common.boot_devices`. :param persistent: Boolean value. True if the boot device will persist to all future boots, False if not. Default: False. :raises: InvalidParameterValue if an invalid boot device is specified. :raises: OperationNotPermitted if the server has no server profile or if the server is already powered on. :raises: OneViewError if the communication with OneView fails """ oneview_info = common.get_oneview_info(task.node) if device not in self.get_supported_boot_devices(task): raise exception.InvalidParameterValue( _("Invalid boot device %s specified.") % device) LOG.debug("Setting boot device to %(device)s for node %(node)s", {"device": device, "node": task.node.uuid}) try: oneview_client = common.get_oneview_client() device_to_oneview = BOOT_DEVICE_MAPPING_TO_OV.get(device) oneview_client.set_boot_device(oneview_info, device_to_oneview) except oneview_exceptions.OneViewException as oneview_exc: msg = (_( "Error setting boot device on OneView. Error: %s") % oneview_exc ) LOG.error(msg) raise exception.OneViewError(error=msg) @common.node_has_server_profile def get_boot_device(self, task): """Get the current boot device for the task's node. Provides the current boot device of the node. :param task: a task from TaskManager. :returns: a dictionary containing: :boot_device: the boot device, one of :mod:`ironic.common.boot_devices` [PXE, DISK, CDROM] :persistent: Whether the boot device will persist to all future boots or not, None if it is unknown. :raises: OperationNotPermitted if no Server Profile is associated with the node :raises: InvalidParameterValue if the boot device is unknown :raises: OneViewError if the communication with OneView fails """ oneview_info = common.get_oneview_info(task.node) try: oneview_client = common.get_oneview_client() boot_order = oneview_client.get_boot_order(oneview_info) except oneview_exceptions.OneViewException as oneview_exc: msg = (_( "Error getting boot device from OneView.
Error: %s") % oneview_exc ) LOG.error(msg) raise exception.OneViewError(msg) primary_device = boot_order[0] if primary_device not in BOOT_DEVICE_OV_TO_GENERIC: raise exception.InvalidParameterValue( _("Unsupported boot Device %(device)s for Node: %(node)s") % {"device": primary_device, "node": task.node.uuid} ) boot_device = { 'boot_device': BOOT_DEVICE_OV_TO_GENERIC.get(primary_device), 'persistent': True, } return boot_device def get_sensors_data(self, task): """Get sensors data. Not implemented by this driver. :param task: a TaskManager instance. """ raise NotImplementedError() ironic-5.1.0/ironic/drivers/modules/deploy_utils.py0000664000567000056710000013434312674513466023666 0ustar jenkinsjenkins00000000000000# Copyright (c) 2012 NTT DOCOMO, INC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import os import re import socket import time from ironic_lib import disk_utils from oslo_concurrency import processutils from oslo_config import cfg from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import excutils from oslo_utils import strutils import six from six.moves.urllib import parse from ironic.common import dhcp_factory from ironic.common import exception from ironic.common.glance_service import service_utils from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LW from ironic.common import image_service from ironic.common import keystone from ironic.common import states from ironic.common import utils from ironic.conductor import utils as manager_utils from ironic.drivers.modules import agent_client from ironic.drivers.modules import image_cache from ironic.drivers import utils as driver_utils from ironic import objects deploy_opts = [ cfg.StrOpt('http_url', help='ironic-conductor node\'s HTTP server URL. ' 'Example: http://192.1.2.3:8080', deprecated_group='pxe'), cfg.StrOpt('http_root', default='/httpboot', help='ironic-conductor node\'s HTTP root path.', deprecated_group='pxe'), # TODO(rameshg87): Remove the deprecated names for the below two options in # Mitaka release. cfg.IntOpt('erase_devices_priority', deprecated_name='agent_erase_devices_priority', deprecated_group='agent', help=_('Priority to run in-band erase devices via the Ironic ' 'Python Agent ramdisk. If unset, will use the priority ' 'set in the ramdisk (defaults to 10 for the ' 'GenericHardwareManager). If set to 0, will not run ' 'during cleaning.')), cfg.IntOpt('erase_devices_iterations', deprecated_name='agent_erase_devices_iterations', deprecated_group='agent', default=1, help=_('Number of iterations to be run for erasing devices.')), ] CONF = cfg.CONF CONF.register_opts(deploy_opts, group='deploy') # TODO(Faizan): Move this logic to common/utils.py and deprecate # rootwrap_config. # This is required to set the default value of ironic_lib option # only if rootwrap_config does not contain the default value. 
if CONF.rootwrap_config != '/etc/ironic/rootwrap.conf': root_helper = 'sudo ironic-rootwrap %s' % CONF.rootwrap_config CONF.set_default('root_helper', root_helper, 'ironic_lib') LOG = logging.getLogger(__name__) VALID_ROOT_DEVICE_HINTS = set(('size', 'model', 'wwn', 'serial', 'vendor', 'wwn_with_extension', 'wwn_vendor_extension', 'name')) SUPPORTED_CAPABILITIES = { 'boot_option': ('local', 'netboot'), 'boot_mode': ('bios', 'uefi'), 'secure_boot': ('true', 'false'), 'trusted_boot': ('true', 'false'), 'disk_label': ('msdos', 'gpt'), } DISK_LAYOUT_PARAMS = ('root_gb', 'swap_mb', 'ephemeral_gb') # All functions are called from deploy() directly or indirectly. # They are split for stub-out. def discovery(portal_address, portal_port): """Do iSCSI discovery on portal.""" utils.execute('iscsiadm', '-m', 'discovery', '-t', 'st', '-p', '%s:%s' % (portal_address, portal_port), run_as_root=True, check_exit_code=[0], attempts=5, delay_on_retry=True) def login_iscsi(portal_address, portal_port, target_iqn): """Log in to an iSCSI target.""" utils.execute('iscsiadm', '-m', 'node', '-p', '%s:%s' % (portal_address, portal_port), '-T', target_iqn, '--login', run_as_root=True, check_exit_code=[0], attempts=5, delay_on_retry=True) # Ensure the login completed verify_iscsi_connection(target_iqn) # Force the iSCSI initiator to re-read LUNs force_iscsi_lun_update(target_iqn) # Ensure the file system sees the block device check_file_system_for_iscsi_device(portal_address, portal_port, target_iqn) def check_file_system_for_iscsi_device(portal_address, portal_port, target_iqn): """Ensure the file system sees the iSCSI block device.""" check_dir = "/dev/disk/by-path/ip-%s:%s-iscsi-%s-lun-1" % (portal_address, portal_port, target_iqn) total_checks = CONF.disk_utils.iscsi_verify_attempts for attempt in range(total_checks): if os.path.exists(check_dir): break time.sleep(1) LOG.debug("iSCSI connection not seen by file system. Rechecking. " "Attempt %(attempt)d out of %(total)d", {"attempt": attempt + 1, "total": total_checks}) else: msg = _("iSCSI connection was not seen by the file system after " "attempting to verify %d times.") % total_checks LOG.error(msg) raise exception.InstanceDeployFailure(msg) def verify_iscsi_connection(target_iqn): """Verify the iSCSI connection.""" LOG.debug("Waiting for iSCSI target to become active.") for attempt in range(CONF.disk_utils.iscsi_verify_attempts): out, _err = utils.execute('iscsiadm', '-m', 'node', '-S', run_as_root=True, check_exit_code=[0]) if target_iqn in out: break time.sleep(1) LOG.debug("iSCSI connection not active. Rechecking.
Attempt " "%(attempt)d out of %(total)d", {"attempt": attempt + 1, "total": CONF.disk_utils.iscsi_verify_attempts}) else: msg = _("iSCSI connection did not become active after attempting to " "verify %d times.") % CONF.disk_utils.iscsi_verify_attempts LOG.error(msg) raise exception.InstanceDeployFailure(msg) def force_iscsi_lun_update(target_iqn): """force iSCSI initiator to re-read luns.""" LOG.debug("Re-reading iSCSI luns.") utils.execute('iscsiadm', '-m', 'node', '-T', target_iqn, '-R', run_as_root=True, check_exit_code=[0]) def logout_iscsi(portal_address, portal_port, target_iqn): """Logout from an iSCSI target.""" utils.execute('iscsiadm', '-m', 'node', '-p', '%s:%s' % (portal_address, portal_port), '-T', target_iqn, '--logout', run_as_root=True, check_exit_code=[0], attempts=5, delay_on_retry=True) def delete_iscsi(portal_address, portal_port, target_iqn): """Delete the iSCSI target.""" # Retry delete until it succeeds (exit code 0) or until there is # no longer a target to delete (exit code 21). utils.execute('iscsiadm', '-m', 'node', '-p', '%s:%s' % (portal_address, portal_port), '-T', target_iqn, '-o', 'delete', run_as_root=True, check_exit_code=[0, 21], attempts=5, delay_on_retry=True) def _replace_lines_in_file(path, regex_pattern, replacement): with open(path) as f: lines = f.readlines() compiled_pattern = re.compile(regex_pattern) with open(path, 'w') as f: for line in lines: line = compiled_pattern.sub(replacement, line) f.write(line) def _replace_root_uuid(path, root_uuid): root = 'UUID=%s' % root_uuid pattern = r'(\(\(|\{\{) ROOT (\)\)|\}\})' _replace_lines_in_file(path, pattern, root) def _replace_boot_line(path, boot_mode, is_whole_disk_image, trusted_boot=False): if is_whole_disk_image: boot_disk_type = 'boot_whole_disk' elif trusted_boot: boot_disk_type = 'trusted_boot' else: boot_disk_type = 'boot_partition' if boot_mode == 'uefi' and not CONF.pxe.ipxe_enabled: pattern = '^((set )?default)=.*$' boot_line = '\\1=%s' % boot_disk_type else: pxe_cmd = 'goto' if CONF.pxe.ipxe_enabled else 'default' pattern = '^%s .*$' % pxe_cmd boot_line = '%s %s' % (pxe_cmd, boot_disk_type) _replace_lines_in_file(path, pattern, boot_line) def _replace_disk_identifier(path, disk_identifier): pattern = r'(\(\(|\{\{) DISK_IDENTIFIER (\)\)|\}\})' _replace_lines_in_file(path, pattern, disk_identifier) def switch_pxe_config(path, root_uuid_or_disk_id, boot_mode, is_whole_disk_image, trusted_boot=False): """Switch a pxe config from deployment mode to service mode. :param path: path to the pxe config file in tftpboot. :param root_uuid_or_disk_id: root uuid in case of partition image or disk_id in case of whole disk image. :param boot_mode: if boot mode is uefi or bios. :param is_whole_disk_image: if the image is a whole disk image or not. :param trusted_boot: if boot with trusted_boot or not. The usage of is_whole_disk_image and trusted_boot are mutually exclusive. You can have one or neither, but not both. 
""" if not is_whole_disk_image: _replace_root_uuid(path, root_uuid_or_disk_id) else: _replace_disk_identifier(path, root_uuid_or_disk_id) _replace_boot_line(path, boot_mode, is_whole_disk_image, trusted_boot) def notify(address, port): """Notify a node that it becomes ready to reboot.""" s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: s.connect((address, port)) s.send('done') finally: s.close() def get_dev(address, port, iqn, lun): """Returns a device path for given parameters.""" dev = ("/dev/disk/by-path/ip-%s:%s-iscsi-%s-lun-%s" % (address, port, iqn, lun)) return dev def deploy_partition_image( address, port, iqn, lun, image_path, root_mb, swap_mb, ephemeral_mb, ephemeral_format, node_uuid, preserve_ephemeral=False, configdrive=None, boot_option="netboot", boot_mode="bios", disk_label=None): """All-in-one function to deploy a partition image to a node. :param address: The iSCSI IP address. :param port: The iSCSI port number. :param iqn: The iSCSI qualified name. :param lun: The iSCSI logical unit number. :param image_path: Path for the instance's disk image. :param root_mb: Size of the root partition in megabytes. :param swap_mb: Size of the swap partition in megabytes. :param ephemeral_mb: Size of the ephemeral partition in megabytes. If 0, no ephemeral partition will be created. :param ephemeral_format: The type of file system to format the ephemeral partition. :param node_uuid: node's uuid. Used for logging. :param preserve_ephemeral: If True, no filesystem is written to the ephemeral block device, preserving whatever content it had (if the partition table has not changed). :param configdrive: Optional. Base64 encoded Gzipped configdrive content or configdrive HTTP URL. :param boot_option: Can be "local" or "netboot". "netboot" by default. :param boot_mode: Can be "bios" or "uefi". "bios" by default. :param disk_label: The disk label to be used when creating the partition table. Valid values are: "msdos", "gpt" or None; If None Ironic will figure it out according to the boot_mode parameter. :raises: InstanceDeployFailure if image virtual size is bigger than root partition size. :returns: a dictionary containing the following keys: 'root uuid': UUID of root partition 'efi system partition uuid': UUID of the uefi system partition (if boot mode is uefi). NOTE: If key exists but value is None, it means partition doesn't exist. """ image_mb = disk_utils.get_image_mb(image_path) if image_mb > root_mb: msg = (_('Root partition is too small for requested image. Image ' 'virtual size: %(image_mb)d MB, Root size: %(root_mb)d MB') % {'image_mb': image_mb, 'root_mb': root_mb}) raise exception.InstanceDeployFailure(msg) with _iscsi_setup_and_handle_errors(address, port, iqn, lun) as dev: uuid_dict_returned = disk_utils.work_on_disk( dev, root_mb, swap_mb, ephemeral_mb, ephemeral_format, image_path, node_uuid, preserve_ephemeral=preserve_ephemeral, configdrive=configdrive, boot_option=boot_option, boot_mode=boot_mode, disk_label=disk_label) return uuid_dict_returned def deploy_disk_image(address, port, iqn, lun, image_path, node_uuid): """All-in-one function to deploy a whole disk image to a node. :param address: The iSCSI IP address. :param port: The iSCSI port number. :param iqn: The iSCSI qualified name. :param lun: The iSCSI logical unit number. :param image_path: Path for the instance's disk image. :param node_uuid: node's uuid. Used for logging. Currently not in use by this function but could be used in the future. 
:returns: a dictionary containing the key 'disk identifier' to identify the disk which was used for deployment. """ with _iscsi_setup_and_handle_errors(address, port, iqn, lun) as dev: disk_utils.populate_image(image_path, dev) disk_identifier = disk_utils.get_disk_identifier(dev) return {'disk identifier': disk_identifier} @contextlib.contextmanager def _iscsi_setup_and_handle_errors(address, port, iqn, lun): """Function that yields an iSCSI target device to work on. :param address: The iSCSI IP address. :param port: The iSCSI port number. :param iqn: The iSCSI qualified name. :param lun: The iSCSI logical unit number. """ dev = get_dev(address, port, iqn, lun) discovery(address, port) login_iscsi(address, port, iqn) if not disk_utils.is_block_device(dev): raise exception.InstanceDeployFailure(_("Parent device '%s' not found") % dev) try: yield dev except processutils.ProcessExecutionError as err: with excutils.save_and_reraise_exception(): LOG.error(_LE("Deploy to address %s failed."), address) LOG.error(_LE("Command: %s"), err.cmd) LOG.error(_LE("StdOut: %r"), err.stdout) LOG.error(_LE("StdErr: %r"), err.stderr) except exception.InstanceDeployFailure as e: with excutils.save_and_reraise_exception(): LOG.error(_LE("Deploy to address %s failed."), address) LOG.error(e) finally: logout_iscsi(address, port, iqn) delete_iscsi(address, port, iqn) def notify_ramdisk_to_proceed(address): """Notifies the ramdisk waiting for instructions from Ironic. The DIB ramdisk (from its init script) makes vendor passthrus and listens on port 10000 for Ironic to notify back the completion of the task. This method connects to port 10000 of the bare metal running the ramdisk and then sends some data to notify the ramdisk to proceed with its next task. :param address: The IP address of the node. """ # Ensure the node started netcat on the port after POSTing the request. time.sleep(3) notify(address, 10000) def check_for_missing_params(info_dict, error_msg, param_prefix=''): """Check for empty params in the provided dictionary. :param info_dict: The dictionary to inspect. :param error_msg: The error message to prefix before printing the information about missing parameters. :param param_prefix: Add this prefix to each parameter for error messages :raises: MissingParameterValue, if one or more parameters are empty in the provided dictionary. """ missing_info = [] for label, value in info_dict.items(): if not value: missing_info.append(param_prefix + label) if missing_info: exc_msg = _("%(error_msg)s. Missing are: %(missing_info)s") raise exception.MissingParameterValue( exc_msg % {'error_msg': error_msg, 'missing_info': missing_info}) def fetch_images(ctx, cache, images_info, force_raw=True): """Check for available disk space and fetch images using ImageCache. :param ctx: context :param cache: ImageCache instance to use for fetching :param images_info: list of tuples (image href, destination path) :param force_raw: boolean value, whether to convert the image to raw format :raises: InstanceDeployFailure if unable to find enough disk space """ try: image_cache.clean_up_caches(ctx, cache.master_dir, images_info) except exception.InsufficientDiskSpace as e: raise exception.InstanceDeployFailure(reason=e) # NOTE(dtantsur): This code can suffer from a race condition, # if disk space is used between the check and actual download.
# This is probably unavoidable, as we can't control other # (probably unrelated) processes for href, path in images_info: cache.fetch_image(href, path, ctx=ctx, force_raw=force_raw) def set_failed_state(task, msg): """Sets the deploy status as failed with relevant messages. This method sets the deployment as failed with the given message. It sets the node's provision_state to DEPLOYFAIL and updates last_error with the given error message. It also powers off the bare metal node. :param task: a TaskManager instance containing the node to act on. :param msg: the message to set in last_error of the node. """ node = task.node try: task.process_event('fail') except exception.InvalidState: msg2 = (_LE('Internal error. Node %(node)s in provision state ' '"%(state)s" could not transition to a failed state.') % {'node': node.uuid, 'state': node.provision_state}) LOG.exception(msg2) try: manager_utils.node_power_action(task, states.POWER_OFF) except Exception: msg2 = (_LE('Node %s failed to power off while handling deploy ' 'failure. This may be a serious condition. Node ' 'should be removed from Ironic or put in maintenance ' 'mode until the problem is resolved.') % node.uuid) LOG.exception(msg2) # NOTE(deva): node_power_action() erases node.last_error # so we need to set it here. node.last_error = msg node.save() def get_single_nic_with_vif_port_id(task): """Returns the MAC address of a port which has a VIF port id. :param task: a TaskManager instance containing the ports to act on. :returns: MAC address of the port connected to deployment network. None if it cannot find any port with vif id. """ for port in task.ports: if port.extra.get('vif_port_id'): return port.address def parse_instance_info_capabilities(node): """Parse the instance_info capabilities. One way of having these capabilities set is via Nova, where the capabilities are defined in the Flavor extra_spec and passed to Ironic by the Nova Ironic driver. NOTE: Although our API fully supports JSON fields, to maintain backward compatibility with Juno the Nova Ironic driver is sending it as a string. :param node: a single Node. :raises: InvalidParameterValue if the capabilities string is not a dictionary or is malformed. :returns: A dictionary with the capabilities if found, otherwise an empty dictionary. """ def parse_error(): error_msg = (_('Error parsing capabilities from Node %s instance_info ' 'field. A dictionary or a "jsonified" dictionary is ' 'expected.') % node.uuid) raise exception.InvalidParameterValue(error_msg) capabilities = node.instance_info.get('capabilities', {}) if isinstance(capabilities, six.string_types): try: capabilities = jsonutils.loads(capabilities) except (ValueError, TypeError): parse_error() if not isinstance(capabilities, dict): parse_error() return capabilities def agent_get_clean_steps(task, interface=None, override_priorities=None): """Get the list of cached clean steps from the agent. #TODO(JoshNang) move to BootInterface The clean steps cache is updated at the beginning of cleaning. :param task: a TaskManager object containing the node :param interface: The interface for which clean steps are to be returned. If this is not provided, it returns the clean steps for all interfaces. :param override_priorities: a dictionary with keys being step names and values being new priorities for them. If a step isn't in this dictionary, the step's original priority is used. :raises NodeCleaningFailure: if the clean steps are not yet cached, for example, when a node has just been enrolled and has not been cleaned yet.
:returns: A list of clean step dictionaries """ node = task.node try: all_steps = node.driver_internal_info['agent_cached_clean_steps'] except KeyError: raise exception.NodeCleaningFailure(_('Cleaning steps are not yet ' 'available for node %(node)s') % {'node': node.uuid}) if interface: steps = [step.copy() for step in all_steps.get(interface, [])] else: steps = [step.copy() for step_list in all_steps.values() for step in step_list] if not steps or not override_priorities: return steps for step in steps: new_priority = override_priorities.get(step.get('step')) if new_priority is not None: step['priority'] = new_priority return steps def agent_execute_clean_step(task, step): """Execute a clean step asynchronously on the agent. #TODO(JoshNang) move to BootInterface :param task: a TaskManager object containing the node :param step: a clean step dictionary to execute :raises: NodeCleaningFailure if the agent does not return a command status :returns: states.CLEANWAIT to signify the step will be completed async """ client = agent_client.AgentClient() ports = objects.Port.list_by_node_id( task.context, task.node.id) result = client.execute_clean_step(step, task.node, ports) if not result.get('command_status'): raise exception.NodeCleaningFailure(_( 'Agent on node %(node)s returned bad command result: ' '%(result)s') % {'node': task.node.uuid, 'result': result.get('command_error')}) return states.CLEANWAIT def agent_add_clean_params(task): """Add required config parameters to node's driver_internal_info. Adds the required conf options to the node's driver_internal_info. This is required to pass the information to IPA. :param task: a TaskManager instance. """ info = task.node.driver_internal_info passes = CONF.deploy.erase_devices_iterations info['agent_erase_devices_iterations'] = passes task.node.driver_internal_info = info task.node.save() def try_set_boot_device(task, device, persistent=True): """Tries to set the boot device on the node. This method tries to set the boot device on the node to the given boot device. Under uefi boot mode, setting of boot device may differ between different machines. IPMI does not work for setting boot devices in uefi mode for certain machines. This method ignores the expected IPMI failure for uefi boot mode and just logs a message. In error cases, it is expected the operator has to manually set the node to boot from the correct device. :param task: a TaskManager object containing the node :param device: the boot device :param persistent: Whether to set the boot device persistently :raises: Any exception from set_boot_device except IPMIFailure (setting of boot device using ipmi is expected to fail). """ try: manager_utils.node_set_boot_device(task, device, persistent=persistent) except exception.IPMIFailure: if get_boot_mode_for_deploy(task.node) == 'uefi': LOG.warning(_LW("ipmitool is unable to set boot device while " "the node %s is in UEFI boot mode. Please set " "the boot device manually."), task.node.uuid) else: raise def parse_root_device_hints(node): """Parse the root_device property of a node. Parse the root_device property of a node and make it a flat string to be passed via the PXE config. :param node: a single Node. :returns: A flat string with the format opt1=value1,opt2=value2, or None if the node contains no hints. :raises: InvalidParameterValue, if some information is invalid.
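For example (hypothetical hints), a node whose properties contain root_device = {'size': 100, 'model': 'Fast SSD'} is flattened to 'model=Fast%20SSD,size=100' (keys sorted alphabetically, string values stripped and percent-encoded).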
""" root_device = node.properties.get('root_device') if not root_device: return # Find invalid hints for logging invalid_hints = set(root_device) - VALID_ROOT_DEVICE_HINTS if invalid_hints: raise exception.InvalidParameterValue( _('The hints "%(invalid_hints)s" are invalid. ' 'Valid hints are: "%(valid_hints)s"') % {'invalid_hints': ', '.join(invalid_hints), 'valid_hints': ', '.join(VALID_ROOT_DEVICE_HINTS)}) if 'size' in root_device: try: int(root_device['size']) except ValueError: raise exception.InvalidParameterValue( _('Root device hint "size" is not an integer value.')) hints = [] for key, value in sorted(root_device.items()): # NOTE(lucasagomes): We can't have spaces in the PXE config # file, so we are going to url/percent encode the value here # and decode on the other end. if isinstance(value, six.string_types): value = value.strip() value = parse.quote(value) hints.append("%s=%s" % (key, value)) return ','.join(hints) def is_secure_boot_requested(node): """Returns True if secure_boot is requested for deploy. This method checks node property for secure_boot and returns True if it is requested. :param node: a single Node. :raises: InvalidParameterValue if the capabilities string is not a dictionary or is malformed. :returns: True if secure_boot is requested. """ capabilities = parse_instance_info_capabilities(node) sec_boot = capabilities.get('secure_boot', 'false').lower() return sec_boot == 'true' def is_trusted_boot_requested(node): """Returns True if trusted_boot is requested for deploy. This method checks instance property for trusted_boot and returns True if it is requested. :param node: a single Node. :raises: InvalidParameterValue if the capabilities string is not a dictionary or is malformed. :returns: True if trusted_boot is requested. """ capabilities = parse_instance_info_capabilities(node) trusted_boot = capabilities.get('trusted_boot', 'false').lower() return trusted_boot == 'true' def get_disk_label(node): """Return the disk label requested for deploy, if any. :param node: a single Node. :raises: InvalidParameterValue if the capabilities string is not a dictionary or is malformed. :returns: the disk label or None if no disk label was specified. """ capabilities = parse_instance_info_capabilities(node) return capabilities.get('disk_label') def get_boot_mode_for_deploy(node): """Returns the boot mode that would be used for deploy. This method returns boot mode to be used for deploy. It returns 'uefi' if 'secure_boot' is set to 'true' or returns 'bios' if 'trusted_boot' is set to 'true' in 'instance_info/capabilities' of node. Otherwise it returns value of 'boot_mode' in 'properties/capabilities' of node if set. If that is not set, it returns boot mode in 'instance_info/deploy_boot_mode' for the node. It would return None if boot mode is present neither in 'capabilities' of node 'properties' nor in node's 'instance_info' (which could also be None). :param node: an ironic node object. :returns: 'bios', 'uefi' or None """ if is_secure_boot_requested(node): LOG.debug('Deploy boot mode is uefi for %s.', node.uuid) return 'uefi' if is_trusted_boot_requested(node): # TODO(lintan) Trusted boot also supports uefi, but at the moment, # it should only boot with bios. 
LOG.debug('Deploy boot mode is bios for %s.', node.uuid) return 'bios' boot_mode = driver_utils.get_node_capability(node, 'boot_mode') if boot_mode is None: instance_info = node.instance_info boot_mode = instance_info.get('deploy_boot_mode') LOG.debug('Deploy boot mode is %(boot_mode)s for %(node)s.', {'boot_mode': boot_mode, 'node': node.uuid}) return boot_mode.lower() if boot_mode else boot_mode def validate_capabilities(node): """Validates that specified supported capabilities have valid values This method checks whether any of the supported capabilities is present in the node's capabilities. For each supported capability specified for a node, it validates that the capability has a valid value. The node can have a capability as part of the 'properties' or 'instance_info' or both. Note that the actual value of a capability does not need to be the same in the node's 'properties' and 'instance_info'. :param node: an ironic node object. :raises: InvalidParameterValue, if the capability is not set to a valid value. """ exp_str = _("The parameter '%(capability)s' from %(field)s has an " "invalid value: '%(value)s'. Acceptable values are: " "%(valid_values)s.") for capability_name, valid_values in SUPPORTED_CAPABILITIES.items(): # Validate capability_name in node's properties/capabilities value = driver_utils.get_node_capability(node, capability_name) if value and (value not in valid_values): field = "properties/capabilities" raise exception.InvalidParameterValue( exp_str % {'capability': capability_name, 'field': field, 'value': value, 'valid_values': ', '.join(valid_values)}) # Validate capability_name in node's instance_info/['capabilities'] capabilities = parse_instance_info_capabilities(node) value = capabilities.get(capability_name) if value and (value not in valid_values): field = "instance_info['capabilities']" raise exception.InvalidParameterValue( exp_str % {'capability': capability_name, 'field': field, 'value': value, 'valid_values': ', '.join(valid_values)}) def validate_image_properties(ctx, deploy_info, properties): """Validate the image. For Glance images it checks that the image exists in Glance and its properties or deployment info contain the properties passed. If it's not a Glance image, it checks that deployment info contains needed properties. :param ctx: security context :param deploy_info: the deploy_info to be validated :param properties: the list of image meta-properties to be validated. :raises: InvalidParameterValue if: * connection to glance failed; * authorization for accessing image failed; * HEAD request to image URL failed or returned response code != 200; * HEAD request response does not contain Content-Length header; * the protocol specified in image URL is not supported. :raises: MissingParameterValue if the image doesn't contain the mentioned properties.
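For example (a hypothetical call; the exact property names depend on the deploy interface in use), a caller validating a Glance image could invoke: validate_image_properties(task.context, deploy_info, ['kernel_id', 'ramdisk_id'])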
""" image_href = deploy_info['image_source'] try: img_service = image_service.get_image_service(image_href, context=ctx) image_props = img_service.show(image_href)['properties'] except (exception.GlanceConnectionFailed, exception.ImageNotAuthorized, exception.Invalid): raise exception.InvalidParameterValue(_( "Failed to connect to Glance to get the properties " "of the image %s") % image_href) except exception.ImageNotFound: raise exception.InvalidParameterValue(_( "Image %s can not be found.") % image_href) except exception.ImageRefValidationFailed as e: raise exception.InvalidParameterValue(e) missing_props = [] for prop in properties: if not (deploy_info.get(prop) or image_props.get(prop)): missing_props.append(prop) if missing_props: props = ', '.join(missing_props) raise exception.MissingParameterValue(_( "Image %(image)s is missing the following properties: " "%(properties)s") % {'image': image_href, 'properties': props}) def get_boot_option(node): """Gets the boot option. :param node: A single Node. :raises: InvalidParameterValue if the capabilities string is not a dict or is malformed. :returns: A string representing the boot option type. Defaults to 'netboot'. """ capabilities = parse_instance_info_capabilities(node) return capabilities.get('boot_option', 'netboot').lower() def prepare_cleaning_ports(task): """Prepare the Ironic ports of the node for cleaning. This method deletes the cleaning ports currently existing for all the ports of the node and then creates a new one for each one of them. It also adds 'vif_port_id' to port.extra of each Ironic port, after creating the cleaning ports. :param task: a TaskManager object containing the node :raises NodeCleaningFailure: if the previous cleaning ports cannot be removed or if new cleaning ports cannot be created """ provider = dhcp_factory.DHCPFactory() # If we have left over ports from a previous cleaning, remove them if getattr(provider.provider, 'delete_cleaning_ports', None): # Allow to raise if it fails, is caught and handled in conductor provider.provider.delete_cleaning_ports(task) # Create cleaning ports if necessary if getattr(provider.provider, 'create_cleaning_ports', None): # Allow to raise if it fails, is caught and handled in conductor ports = provider.provider.create_cleaning_ports(task) # Add vif_port_id for each of the ports because some boot # interfaces expects these to prepare for booting ramdisk. for port in task.ports: extra_dict = port.extra try: extra_dict['vif_port_id'] = ports[port.uuid] except KeyError: # This is an internal error in Ironic. All DHCP providers # implementing create_cleaning_ports are supposed to # return a VIF port ID for all Ironic ports. But # that doesn't seem to be true here. error = (_("When creating cleaning ports, DHCP provider " "didn't return VIF port ID for %s") % port.uuid) raise exception.NodeCleaningFailure( node=task.node.uuid, reason=error) else: port.extra = extra_dict port.save() def tear_down_cleaning_ports(task): """Deletes the cleaning ports created for each of the Ironic ports. This method deletes the cleaning port created before cleaning was started. :param task: a TaskManager object containing the node :raises NodeCleaningFailure: if the cleaning ports cannot be removed. 
""" # If we created cleaning ports, delete them provider = dhcp_factory.DHCPFactory() if getattr(provider.provider, 'delete_cleaning_ports', None): # Allow to raise if it fails, is caught and handled in conductor provider.provider.delete_cleaning_ports(task) for port in task.ports: if 'vif_port_id' in port.extra: extra_dict = port.extra extra_dict.pop('vif_port_id', None) port.extra = extra_dict port.save() def build_agent_options(node): """Build the options to be passed to the agent ramdisk. :param node: an ironic node object :returns: a dictionary containing the parameters to be passed to agent ramdisk. """ ironic_api = (CONF.conductor.api_url or keystone.get_service_url()).rstrip('/') agent_config_opts = { 'ipa-api-url': ironic_api, 'ipa-driver-name': node.driver, # NOTE: The below entry is a temporary workaround for bug/1433812 'coreos.configdrive': 0, } root_device = parse_root_device_hints(node) if root_device: agent_config_opts['root_device'] = root_device return agent_config_opts def prepare_inband_cleaning(task, manage_boot=True): """Prepares the node to boot into agent for in-band cleaning. This method does the following: 1. Prepares the cleaning ports for the bare metal node and updates the clean parameters in node's driver_internal_info. 2. If 'manage_boot' parameter is set to true, it also calls the 'prepare_ramdisk' method of boot interface to boot the agent ramdisk. 3. Reboots the bare metal node. :param task: a TaskManager object containing the node :param manage_boot: If this is set to True, this method calls the 'prepare_ramdisk' method of boot interface to boot the agent ramdisk. If False, it skips preparing the boot agent ramdisk using boot interface, and assumes that the environment is setup to automatically boot agent ramdisk every time bare metal node is rebooted. :returns: states.CLEANWAIT to signify an asynchronous prepare. :raises NodeCleaningFailure: if the previous cleaning ports cannot be removed or if new cleaning ports cannot be created """ prepare_cleaning_ports(task) # Append required config parameters to node's driver_internal_info # to pass to IPA. agent_add_clean_params(task) if manage_boot: ramdisk_opts = build_agent_options(task.node) # TODO(rameshg87): Below code is to make sure that bash ramdisk # invokes pass_deploy_info vendor passthru when it is booted # for cleaning. Remove the below code once we stop supporting # bash ramdisk in Ironic. Do a late import to avoid circular # import. from ironic.drivers.modules import iscsi_deploy ramdisk_opts.update( iscsi_deploy.build_deploy_ramdisk_options(task.node)) task.driver.boot.prepare_ramdisk(task, ramdisk_opts) manager_utils.node_power_action(task, states.REBOOT) # Tell the conductor we are waiting for the agent to boot. return states.CLEANWAIT def tear_down_inband_cleaning(task, manage_boot=True): """Tears down the environment setup for in-band cleaning. This method does the following: 1. Powers off the bare metal node. 2. If 'manage_boot' parameter is set to true, it also calls the 'clean_up_ramdisk' method of boot interface to clean up the environment that was set for booting agent ramdisk. 3. Deletes the cleaning ports which were setup as part of cleaning. :param task: a TaskManager object containing the node :param manage_boot: If this is set to True, this method calls the 'clean_up_ramdisk' method of boot interface to boot the agent ramdisk. If False, it skips this step. :raises NodeCleaningFailure: if the cleaning ports cannot be removed. 
""" manager_utils.node_power_action(task, states.POWER_OFF) if manage_boot: task.driver.boot.clean_up_ramdisk(task) tear_down_cleaning_ports(task) def get_image_instance_info(node): """Gets the image information from the node. Get image information for the given node instance from its 'instance_info' property. :param node: a single Node. :returns: A dict with required image properties retrieved from node's 'instance_info'. :raises: MissingParameterValue, if image_source is missing in node's instance_info. Also raises same exception if kernel/ramdisk is missing in instance_info for non-glance images. """ info = {} info['image_source'] = node.instance_info.get('image_source') is_whole_disk_image = node.driver_internal_info.get('is_whole_disk_image') if not is_whole_disk_image: if not service_utils.is_glance_image(info['image_source']): info['kernel'] = node.instance_info.get('kernel') info['ramdisk'] = node.instance_info.get('ramdisk') error_msg = (_("Cannot validate image information for node %s because one " "or more parameters are missing from its instance_info.") % node.uuid) check_for_missing_params(info, error_msg) return info def parse_instance_info(node): """Gets the instance specific Node deployment info. This method validates whether the 'instance_info' property of the supplied node contains the required information for this driver to deploy images to the node. :param node: a single Node. :returns: A dict with the instance_info values. :raises: MissingParameterValue, if any of the required parameters are missing. :raises: InvalidParameterValue, if any of the parameters have invalid value. """ info = node.instance_info i_info = {} i_info['image_source'] = info.get('image_source') iwdi = node.driver_internal_info.get('is_whole_disk_image') if not iwdi: if (i_info['image_source'] and not service_utils.is_glance_image( i_info['image_source'])): i_info['kernel'] = info.get('kernel') i_info['ramdisk'] = info.get('ramdisk') i_info['root_gb'] = info.get('root_gb') error_msg = _("Cannot validate driver deploy. Some parameters were missing" " in node's instance_info") check_for_missing_params(i_info, error_msg) # Internal use only i_info['deploy_key'] = info.get('deploy_key') i_info['swap_mb'] = int(info.get('swap_mb', 0)) i_info['ephemeral_gb'] = info.get('ephemeral_gb', 0) err_msg_invalid = _("Cannot validate parameter for driver deploy. " "Invalid parameter %(param)s. Reason: %(reason)s") for param in DISK_LAYOUT_PARAMS: try: int(i_info[param]) except ValueError: reason = _("%s is not an integer value.") % i_info[param] raise exception.InvalidParameterValue(err_msg_invalid % {'param': param, 'reason': reason}) i_info['root_mb'] = 1024 * int(info.get('root_gb')) if iwdi: if int(i_info['swap_mb']) > 0 or int(i_info['ephemeral_gb']) > 0: err_msg_invalid = _("Cannot deploy whole disk image with " "swap or ephemeral size set") raise exception.InvalidParameterValue(err_msg_invalid) i_info['ephemeral_format'] = info.get('ephemeral_format') i_info['configdrive'] = info.get('configdrive') if i_info['ephemeral_gb'] and not i_info['ephemeral_format']: i_info['ephemeral_format'] = CONF.pxe.default_ephemeral_format preserve_ephemeral = info.get('preserve_ephemeral', False) try: i_info['preserve_ephemeral'] = ( strutils.bool_from_string(preserve_ephemeral, strict=True)) except ValueError as e: raise exception.InvalidParameterValue( err_msg_invalid % {'param': 'preserve_ephemeral', 'reason': e}) # NOTE(Zhenguo): If rebuilding with preserve_ephemeral option, check # that the disk layout is unchanged. 
if i_info['preserve_ephemeral']: _check_disk_layout_unchanged(node, i_info) return i_info def _check_disk_layout_unchanged(node, i_info): """Check whether disk layout is unchanged. If the node has already been deployed to, this checks whether the disk layout for the node is the same as when it had been deployed to. :param node: the node of interest :param i_info: instance information (a dictionary) for the node, containing disk layout information :raises: InvalidParameterValue if the disk layout changed """ # If a node has been deployed to, this is the instance information # used for that deployment. driver_internal_info = node.driver_internal_info if 'instance' not in driver_internal_info: return error_msg = '' for param in DISK_LAYOUT_PARAMS: param_value = int(driver_internal_info['instance'][param]) if param_value != int(i_info[param]): error_msg += (_(' Deployed value of %(param)s was %(param_value)s ' 'but requested value is %(request_value)s.') % {'param': param, 'param_value': param_value, 'request_value': i_info[param]}) if error_msg: err_msg_invalid = _("The following parameters have different values " "from the previous deployment:%(error_msg)s") raise exception.InvalidParameterValue(err_msg_invalid % {'error_msg': error_msg}) ironic-5.1.0/ironic/drivers/modules/iboot.py0000664000567000056710000002356012674513466022264 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # # Copyright 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Ironic iBoot PDU power manager. """ import time from oslo_config import cfg from oslo_log import log as logging from oslo_service import loopingcall from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LW from ironic.common import states from ironic.common import utils from ironic.conductor import task_manager from ironic.drivers import base iboot = importutils.try_import('iboot') opts = [ cfg.IntOpt('max_retry', default=3, help=_('Maximum retries for iBoot operations')), cfg.IntOpt('retry_interval', default=1, help=_('Time (in seconds) between retry attempts for iBoot ' 'operations')), cfg.IntOpt('reboot_delay', default=5, min=0, help=_('Time (in seconds) to sleep when rebooting, between ' 'powering off and powering on again.')) ] CONF = cfg.CONF opt_group = cfg.OptGroup(name='iboot', title='Options for the iBoot power driver') CONF.register_group(opt_group) CONF.register_opts(opts, opt_group) LOG = logging.getLogger(__name__) REQUIRED_PROPERTIES = { 'iboot_address': _("IP address of the node. Required."), 'iboot_username': _("username. Required."), 'iboot_password': _("password. Required."), } OPTIONAL_PROPERTIES = { 'iboot_relay_id': _("iBoot PDU relay id; default is 1. Optional."), 'iboot_port': _("iBoot PDU port; default is 9100.
Optional."), } COMMON_PROPERTIES = REQUIRED_PROPERTIES.copy() COMMON_PROPERTIES.update(OPTIONAL_PROPERTIES) def _parse_driver_info(node): info = node.driver_info or {} missing_info = [key for key in REQUIRED_PROPERTIES if not info.get(key)] if missing_info: raise exception.MissingParameterValue( _("Missing the following iBoot credentials in node's" " driver_info: %s.") % missing_info) address = info.get('iboot_address', None) username = info.get('iboot_username', None) password = info.get('iboot_password', None) relay_id = info.get('iboot_relay_id', 1) try: relay_id = int(relay_id) except ValueError: raise exception.InvalidParameterValue( _("iBoot PDU relay id must be an integer.")) port = info.get('iboot_port', 9100) port = utils.validate_network_port(port, 'iboot_port') return { 'address': address, 'username': username, 'password': password, 'port': port, 'relay_id': relay_id, 'uuid': node.uuid, } def _get_connection(driver_info): # NOTE: python-iboot wants username and password as strings (not unicode) return iboot.iBootInterface(driver_info['address'], str(driver_info['username']), str(driver_info['password']), port=driver_info['port'], num_relays=driver_info['relay_id']) def _switch(driver_info, enabled): conn = _get_connection(driver_info) relay_id = driver_info['relay_id'] def _wait_for_switch(mutable): if mutable['retries'] > CONF.iboot.max_retry: LOG.warning(_LW( 'Reached maximum number of attempts (%(attempts)d) to set ' 'power state for node %(node)s to "%(op)s"'), {'attempts': mutable['retries'], 'node': driver_info['uuid'], 'op': states.POWER_ON if enabled else states.POWER_OFF}) raise loopingcall.LoopingCallDone() try: mutable['retries'] += 1 mutable['response'] = conn.switch(relay_id, enabled) if mutable['response']: raise loopingcall.LoopingCallDone() except (TypeError, IndexError): LOG.warning(_LW("Cannot call set power state for node '%(node)s' " "at relay '%(relay)s'. iBoot switch() failed."), {'node': driver_info['uuid'], 'relay': relay_id}) mutable = {'response': False, 'retries': 0} timer = loopingcall.FixedIntervalLoopingCall(_wait_for_switch, mutable) timer.start(interval=CONF.iboot.retry_interval).wait() return mutable['response'] def _sleep_switch(seconds): """Function broken out for testing purpose.""" time.sleep(seconds) def _check_power_state(driver_info, pstate): """Function to check power state is correct. Up to max retries.""" # always try once + number of retries for num in range(0, 1 + CONF.iboot.max_retry): state = _power_status(driver_info) if state == pstate: return if num < CONF.iboot.max_retry: time.sleep(CONF.iboot.retry_interval) raise exception.PowerStateFailure(pstate=pstate) def _power_status(driver_info): conn = _get_connection(driver_info) relay_id = driver_info['relay_id'] def _wait_for_power_status(mutable): if mutable['retries'] > CONF.iboot.max_retry: LOG.warning(_LW( 'Reached maximum number of attempts (%(attempts)d) to get ' 'power state for node %(node)s'), {'attempts': mutable['retries'], 'node': driver_info['uuid']}) raise loopingcall.LoopingCallDone() try: mutable['retries'] += 1 response = conn.get_relays() status = response[relay_id - 1] if status: mutable['state'] = states.POWER_ON else: mutable['state'] = states.POWER_OFF raise loopingcall.LoopingCallDone() except (TypeError, IndexError): LOG.warning(_LW("Cannot get power state for node '%(node)s' at " "relay '%(relay)s'. 
iBoot get_relays() failed."), {'node': driver_info['uuid'], 'relay': relay_id}) mutable = {'state': states.ERROR, 'retries': 0} timer = loopingcall.FixedIntervalLoopingCall(_wait_for_power_status, mutable) timer.start(interval=CONF.iboot.retry_interval).wait() return mutable['state'] class IBootPower(base.PowerInterface): """iBoot PDU Power Driver for Ironic This PowerManager class provides a mechanism for controlling power state via an iBoot capable device. Requires installation of python-iboot: https://github.com/darkip/python-iboot """ def get_properties(self): return COMMON_PROPERTIES def validate(self, task): """Validate driver_info for iboot driver. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue if iboot parameters are invalid. :raises: MissingParameterValue if required iboot parameters are missing. """ _parse_driver_info(task.node) def get_power_state(self, task): """Get the current power state of the task's node. :param task: a TaskManager instance containing the node to act on. :returns: one of ironic.common.states POWER_OFF, POWER_ON or ERROR. :raises: IBootOperationError on an error from iBoot. :raises: InvalidParameterValue if iboot parameters are invalid. :raises: MissingParameterValue if required iboot parameters are missing. """ driver_info = _parse_driver_info(task.node) return _power_status(driver_info) @task_manager.require_exclusive_lock def set_power_state(self, task, pstate): """Turn the power on or off. :param task: a TaskManager instance containing the node to act on. :param pstate: The desired power state, one of ironic.common.states POWER_ON, POWER_OFF. :raises: IBootOperationError on an error from iBoot. :raises: InvalidParameterValue if iboot parameters are invalid or if an invalid power state was specified. :raises: MissingParameterValue if required iboot parameters are missing. :raises: PowerStateFailure if the power couldn't be set to pstate. """ driver_info = _parse_driver_info(task.node) if pstate == states.POWER_ON: _switch(driver_info, True) elif pstate == states.POWER_OFF: _switch(driver_info, False) else: raise exception.InvalidParameterValue( _("set_power_state called with invalid " "power state %s.") % pstate) _check_power_state(driver_info, pstate) @task_manager.require_exclusive_lock def reboot(self, task): """Cycles the power to the task's node. :param task: a TaskManager instance containing the node to act on. :raises: IBootOperationError on an error from iBoot. :raises: InvalidParameterValue if iboot parameters are invalid. :raises: MissingParameterValue if required iboot parameters are missing. :raises: PowerStateFailure if the final state of the node is not POWER_ON. """ driver_info = _parse_driver_info(task.node) _switch(driver_info, False) _sleep_switch(CONF.iboot.reboot_delay) _switch(driver_info, True) _check_power_state(driver_info, states.POWER_ON) ironic-5.1.0/ironic/drivers/modules/agent_base_vendor.py0000664000567000056710000010371312674513470024607 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # # Copyright 2014 Rackspace, Inc. # Copyright 2015 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import collections
import time

from oslo_config import cfg
from oslo_log import log
from oslo_utils import excutils
from oslo_utils import strutils
from oslo_utils import timeutils
import retrying

from ironic.common import boot_devices
from ironic.common import exception
from ironic.common.i18n import _
from ironic.common.i18n import _LE
from ironic.common.i18n import _LI
from ironic.common.i18n import _LW
from ironic.common import states
from ironic.common import utils
from ironic.conductor import rpcapi
from ironic.conductor import task_manager
from ironic.conductor import utils as manager_utils
from ironic.drivers import base
from ironic.drivers.modules import agent_client
from ironic.drivers.modules import deploy_utils
from ironic import objects

agent_opts = [
    cfg.IntOpt('heartbeat_timeout',
               default=300,
               help=_('Maximum interval (in seconds) for agent heartbeats.')),
    cfg.IntOpt('post_deploy_get_power_state_retries',
               default=6,
               help=_('Number of times to retry getting power state to check '
                      'if bare metal node has been powered off after a soft '
                      'power off.')),
    cfg.IntOpt('post_deploy_get_power_state_retry_interval',
               default=5,
               help=_('Amount of time (in seconds) to wait between polling '
                      'power state after triggering soft poweroff.')),
]

CONF = cfg.CONF
CONF.register_opts(agent_opts, group='agent')

LOG = log.getLogger(__name__)

# This contains a nested dictionary containing the post clean step
# hooks registered for each clean step of every interface.
# Every key of POST_CLEAN_STEP_HOOKS is an interface and its value
# is a dictionary. For this inner dictionary, the key is the name of
# the clean-step method in the interface, and the value is the post
# clean-step hook -- the function that is to be called after successful
# completion of the clean step.
#
# For example:
# POST_CLEAN_STEP_HOOKS =
#     {
#      'raid': {'create_configuration': <post-create hook>,
#               'delete_configuration': <post-delete hook>}
#     }
#
# It means that method '<post-create hook>' is to be called after
# successfully completing the clean step 'create_configuration' of
# raid interface. '<post-delete hook>' is to be called after
# completing 'delete_configuration' of raid interface.
POST_CLEAN_STEP_HOOKS = {}

VENDOR_PROPERTIES = {
    'deploy_forces_oob_reboot': _(
        'Whether Ironic should force a reboot of the Node via the out-of-band '
        'channel after deployment is complete. Provides compatibility with '
        'older deploy ramdisks. Defaults to False. Optional.')
}


def _get_client():
    client = agent_client.AgentClient()
    return client


def post_clean_step_hook(interface, step):
    """Decorator method for adding a post clean step hook.

    This is a mechanism for adding a post clean step hook for a particular
    clean step. The hook will get executed after the clean step gets executed
    successfully. The hook is not invoked on failure of the clean step.

    Any method to be made as a hook may be decorated with
    @post_clean_step_hook mentioning the interface and step after which the
    hook should be executed. A TaskManager instance and the object for the
    last completed command (provided by agent) will be passed to the hook
    method. The return value of this method will be ignored.
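    For example, a hook might be registered like this (a minimal sketch;
    the interface/step names and the hook body are purely illustrative)::

        @post_clean_step_hook('raid', 'create_configuration')
        def _raid_create_hook(task, command):
            # Runs only after the 'create_configuration' clean step
            # succeeds; 'command' is the last completed agent command.
            LOG.info('RAID created for node %s', task.node.uuid)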
    Any exception raised by the hook method will be treated as a failure of
    the clean step and the node will be moved to CLEANFAIL state.

    :param interface: name of the interface
    :param step: The name of the step after which it should be executed.
    :returns: A method which registers the given method as a post clean
        step hook.
    """
    def decorator(func):
        POST_CLEAN_STEP_HOOKS.setdefault(interface, {})[step] = func
        return func

    return decorator


def _get_post_clean_step_hook(node):
    """Get post clean step hook for the currently executing clean step.

    This method reads node.clean_step and returns the post clean
    step hook for the currently executing clean step.

    :param node: a node object
    :returns: a method if there is a post clean step hook for this clean
        step; None otherwise
    """
    interface = node.clean_step.get('interface')
    step = node.clean_step.get('step')
    try:
        return POST_CLEAN_STEP_HOOKS[interface][step]
    except KeyError:
        pass


class BaseAgentVendor(base.VendorInterface):

    def __init__(self):
        self.supported_payload_versions = ['2']
        self._client = _get_client()

    def continue_deploy(self, task, **kwargs):
        """Continues the deployment of the baremetal node.

        This method continues the deployment of the baremetal node after
        the ramdisk has been booted.

        :param task: a TaskManager instance
        """
        pass

    def deploy_has_started(self, task):
        """Check if the deployment has started already.

        :returns: True if the deploy has started, False otherwise.
        """
        pass

    def deploy_is_done(self, task):
        """Check if the deployment is already completed.

        :returns: True if the deployment is completed. False otherwise.
        """
        pass

    def reboot_to_instance(self, task, **kwargs):
        """Method invoked after the deployment is completed.

        :param task: a TaskManager instance
        """
        pass

    def get_properties(self):
        """Return the properties of the interface.

        :returns: dictionary of <property name>:<property description>
            entries.
        """
        return VENDOR_PROPERTIES

    def validate(self, task, method, **kwargs):
        """Validate the driver-specific Node deployment info.

        No validation necessary.

        :param task: a TaskManager instance
        :param method: method to be validated
        """
        pass

    def driver_validate(self, method, **kwargs):
        """Validate the driver deployment info.

        :param method: method to be validated.
        """
        version = kwargs.get('version')

        if not version:
            raise exception.MissingParameterValue(_('Missing parameter '
                                                    'version'))
        if version not in self.supported_payload_versions:
            raise exception.InvalidParameterValue(_('Unknown lookup '
                                                    'payload version: %s')
                                                  % version)

    def notify_conductor_resume_clean(self, task):
        LOG.debug('Sending RPC to conductor to resume cleaning for node %s',
                  task.node.uuid)
        uuid = task.node.uuid
        rpc = rpcapi.ConductorAPI()
        topic = rpc.get_topic_for(task.node)
        # Need to release the lock to let the conductor take it
        task.release_resources()
        rpc.continue_node_clean(task.context, uuid, topic=topic)

    def _refresh_clean_steps(self, task):
        """Refresh the node's cached clean steps from the booted agent.

        Gets the node's clean steps from the booted agent and caches them.
        The steps are cached to make get_clean_steps() calls synchronous, and
        should be refreshed as soon as the agent boots to start cleaning or
        if cleaning is restarted because of a cleaning version mismatch.

        :param task: a TaskManager instance
        :raises: NodeCleaningFailure if the agent returns invalid results
        """
        node = task.node
        previous_steps = node.driver_internal_info.get(
            'agent_cached_clean_steps')
        LOG.debug('Refreshing agent clean step cache for node %(node)s.
' 'Previously cached steps: %(steps)s', {'node': node.uuid, 'steps': previous_steps}) agent_result = self._client.get_clean_steps(node, task.ports).get( 'command_result', {}) missing = set(['clean_steps', 'hardware_manager_version']).difference( agent_result) if missing: raise exception.NodeCleaningFailure(_( 'agent get_clean_steps for node %(node)s returned an invalid ' 'result. Keys: %(keys)s are missing from result: %(result)s.') % ({'node': node.uuid, 'keys': missing, 'result': agent_result})) # agent_result['clean_steps'] looks like # {'HardwareManager': [{step1},{steps2}...], ...} steps = collections.defaultdict(list) for step_list in agent_result['clean_steps'].values(): for step in step_list: missing = set(['interface', 'step', 'priority']).difference( step) if missing: raise exception.NodeCleaningFailure(_( 'agent get_clean_steps for node %(node)s returned an ' 'invalid clean step. Keys: %(keys)s are missing from ' 'step: %(step)s.') % ({'node': node.uuid, 'keys': missing, 'step': step})) steps[step['interface']].append(step) # Save hardware manager version, steps, and date info = node.driver_internal_info info['hardware_manager_version'] = agent_result[ 'hardware_manager_version'] info['agent_cached_clean_steps'] = dict(steps) info['agent_cached_clean_steps_refreshed'] = str(timeutils.utcnow()) node.driver_internal_info = info node.save() LOG.debug('Refreshed agent clean step cache for node %(node)s: ' '%(steps)s', {'node': node.uuid, 'steps': steps}) def continue_cleaning(self, task, **kwargs): """Start the next cleaning step if the previous one is complete. In order to avoid errors and make agent upgrades painless, the agent compares the version of all hardware managers at the start of the cleaning (the agent's get_clean_steps() call) and before executing each clean step. If the version has changed between steps, the agent is unable to tell if an ordering change will cause a cleaning issue so it returns CLEAN_VERSION_MISMATCH. For automated cleaning, we restart the entire cleaning cycle. For manual cleaning, we don't. """ node = task.node # For manual clean, the target provision state is MANAGEABLE, whereas # for automated cleaning, it is (the default) AVAILABLE. manual_clean = node.target_provision_state == states.MANAGEABLE command = self._get_completed_cleaning_command(task) LOG.debug('Cleaning command status for node %(node)s on step %(step)s:' ' %(command)s', {'node': node.uuid, 'step': node.clean_step, 'command': command}) if not command: # Command is not done yet return if command.get('command_status') == 'FAILED': msg = (_('Agent returned error for clean step %(step)s on node ' '%(node)s : %(err)s.') % {'node': node.uuid, 'err': command.get('command_error'), 'step': node.clean_step}) LOG.error(msg) return manager_utils.cleaning_error_handler(task, msg) elif command.get('command_status') == 'CLEAN_VERSION_MISMATCH': # Cache the new clean steps (and 'hardware_manager_version') try: self._refresh_clean_steps(task) except exception.NodeCleaningFailure as e: msg = (_('Could not continue cleaning on node ' '%(node)s: %(err)s.') % {'node': node.uuid, 'err': e}) LOG.exception(msg) return manager_utils.cleaning_error_handler(task, msg) if manual_clean: # Don't restart manual cleaning if agent reboots to a new # version. Both are operator actions, unlike automated # cleaning. Manual clean steps are not necessarily idempotent # like automated clean steps and can be even longer running. LOG.info(_LI('During manual cleaning, node %(node)s detected ' 'a clean version mismatch. 
Re-executing and ' 'continuing from current step %(step)s.'), {'node': node.uuid, 'step': node.clean_step}) driver_internal_info = node.driver_internal_info driver_internal_info['skip_current_clean_step'] = False node.driver_internal_info = driver_internal_info node.save() else: # Restart cleaning, agent must have rebooted to new version LOG.info(_LI('During automated cleaning, node %s detected a ' 'clean version mismatch. Resetting clean steps ' 'and rebooting the node.'), node.uuid) try: manager_utils.set_node_cleaning_steps(task) except exception.NodeCleaningFailure: msg = (_('Could not restart automated cleaning on node ' '%(node)s: %(err)s.') % {'node': node.uuid, 'err': command.get('command_error'), 'step': node.clean_step}) LOG.exception(msg) return manager_utils.cleaning_error_handler(task, msg) self.notify_conductor_resume_clean(task) elif command.get('command_status') == 'SUCCEEDED': clean_step_hook = _get_post_clean_step_hook(node) if clean_step_hook is not None: LOG.debug('For node %(node)s, executing post clean step ' 'hook %(method)s for clean step %(step)s' % {'method': clean_step_hook.__name__, 'node': node.uuid, 'step': node.clean_step}) try: clean_step_hook(task, command) except Exception as e: msg = (_('For node %(node)s, post clean step hook ' '%(method)s failed for clean step %(step)s.' 'Error: %(error)s') % {'method': clean_step_hook.__name__, 'node': node.uuid, 'error': e, 'step': node.clean_step}) LOG.exception(msg) return manager_utils.cleaning_error_handler(task, msg) LOG.info(_LI('Agent on node %s returned cleaning command success, ' 'moving to next clean step'), node.uuid) self.notify_conductor_resume_clean(task) else: msg = (_('Agent returned unknown status for clean step %(step)s ' 'on node %(node)s : %(err)s.') % {'node': node.uuid, 'err': command.get('command_status'), 'step': node.clean_step}) LOG.error(msg) return manager_utils.cleaning_error_handler(task, msg) @base.passthru(['POST']) @task_manager.require_exclusive_lock def heartbeat(self, task, **kwargs): """Method for agent to periodically check in. The agent should be sending its agent_url (so Ironic can talk back) as a kwarg. kwargs should have the following format:: { 'agent_url': 'http://AGENT_HOST:AGENT_PORT' } AGENT_PORT defaults to 9999. """ node = task.node driver_internal_info = node.driver_internal_info LOG.debug( 'Heartbeat from %(node)s, last heartbeat at %(heartbeat)s.', {'node': node.uuid, 'heartbeat': driver_internal_info.get('agent_last_heartbeat')}) driver_internal_info['agent_last_heartbeat'] = int(time.time()) try: driver_internal_info['agent_url'] = kwargs['agent_url'] except KeyError: raise exception.MissingParameterValue(_('For heartbeat operation, ' '"agent_url" must be ' 'specified.')) node.driver_internal_info = driver_internal_info node.save() # Async call backs don't set error state on their own # TODO(jimrollenhagen) improve error messages here msg = _('Failed checking if deploy is done.') try: if node.maintenance: # this shouldn't happen often, but skip the rest if it does. 
                LOG.debug('Heartbeat from node %(node)s in maintenance mode; '
                          'not taking any action.', {'node': node.uuid})
                return
            elif (node.provision_state == states.DEPLOYWAIT and
                  not self.deploy_has_started(task)):
                msg = _('Node failed to get image for deploy.')
                self.continue_deploy(task, **kwargs)
            elif (node.provision_state == states.DEPLOYWAIT and
                  self.deploy_is_done(task)):
                msg = _('Node failed to move to active state.')
                self.reboot_to_instance(task, **kwargs)
            elif (node.provision_state == states.DEPLOYWAIT and
                  self.deploy_has_started(task)):
                node.touch_provisioning()
            # TODO(lucasagomes): CLEANING here for backwards compat
            # with previous code, otherwise nodes in CLEANING when this
            # is deployed would fail. Should be removed once the Mitaka
            # release starts.
            elif node.provision_state in (states.CLEANWAIT, states.CLEANING):
                node.touch_provisioning()
                try:
                    if not node.clean_step:
                        LOG.debug('Node %s just booted to start cleaning.',
                                  node.uuid)
                        msg = _('Node failed to start the first cleaning '
                                'step.')
                        # First, cache the clean steps
                        self._refresh_clean_steps(task)
                        # Then set/verify node clean steps and start cleaning
                        manager_utils.set_node_cleaning_steps(task)
                        self.notify_conductor_resume_clean(task)
                    else:
                        msg = _('Node failed to check cleaning progress.')
                        self.continue_cleaning(task, **kwargs)
                except exception.NoFreeConductorWorker:
                    # waiting for the next heartbeat, node.last_error and
                    # logging message is filled already via conductor's hook
                    pass
        except Exception as e:
            err_info = {'node': node.uuid, 'msg': msg, 'e': e}
            last_error = _('Asynchronous exception for node %(node)s: '
                           '%(msg)s Exception: %(e)s') % err_info
            LOG.exception(last_error)
            if node.provision_state in (states.CLEANING, states.CLEANWAIT):
                manager_utils.cleaning_error_handler(task, last_error)
            elif node.provision_state in (states.DEPLOYING,
                                          states.DEPLOYWAIT):
                deploy_utils.set_failed_state(task, last_error)

    @base.driver_passthru(['POST'], async=False)
    def lookup(self, context, **kwargs):
        """Find a matching node for the agent.

        Method to be called the first time a ramdisk agent checks in. This
        can be because this is a node just entering decom or a node that
        rebooted for some reason. We will use the mac addresses listed in
        the kwargs to find the matching node, then return the node object
        to the agent. The agent can then use that UUID to call the node
        vendor passthru methods.

        Currently, we don't handle the instance where the agent doesn't have
        a matching node (i.e. a brand new, never been in Ironic node).

        kwargs should have the following format::

            {
                "version": "2",
                "inventory": {
                    "interfaces": [
                        {
                            "name": "eth0",
                            "mac_address": "00:11:22:33:44:55",
                            "switch_port_descr": "port24",
                            "switch_chassis_descr": "tor1"
                        }, ...
                    ], ...
                },
                "node_uuid": "ab229209-0139-4588-bbe5-64ccec81dd6e"
            }

        The interfaces list should include a list of the non-IPMI MAC
        addresses in the form aa:bb:cc:dd:ee:ff.

        node_uuid argument is optional. If it's provided (e.g. as a result
        of inspection run before lookup), this method will just return a
        node and options.

        This method will also return the timeout for heartbeats. The driver
        will expect the agent to heartbeat before that timeout, or it will
        be considered down. This will be in a root level key called
        'heartbeat_timeout'.

        :raises: NotFound if no matching node is found.
:raises: InvalidParameterValue with unknown payload version """ LOG.debug('Agent lookup using data %s', kwargs) uuid = kwargs.get('node_uuid') if uuid: node = objects.Node.get_by_uuid(context, uuid) else: inventory = kwargs.get('inventory') interfaces = self._get_interfaces(inventory) mac_addresses = self._get_mac_addresses(interfaces) node = self._find_node_by_macs(context, mac_addresses) LOG.info(_LI('Initial lookup for node %s succeeded, agent is running ' 'and waiting for commands'), node.uuid) return { 'heartbeat_timeout': CONF.agent.heartbeat_timeout, 'node': node.as_dict() } def _get_completed_cleaning_command(self, task): """Returns None or a completed cleaning command from the agent.""" commands = self._client.get_commands_status(task.node) if not commands: return last_command = commands[-1] if last_command['command_name'] != 'execute_clean_step': # catches race condition where execute_clean_step is still # processing so the command hasn't started yet LOG.debug('Expected agent last command to be "execute_clean_step" ' 'for node %(node)s, instead got "%(command)s". Waiting ' 'for next heartbeat.', {'node': task.node.uuid, 'command': last_command['command_name']}) return last_result = last_command.get('command_result') or {} last_step = last_result.get('clean_step') if last_command['command_status'] == 'RUNNING': LOG.debug('Clean step still running for node %(node)s: %(step)s', {'step': last_step, 'node': task.node.uuid}) return elif (last_command['command_status'] == 'SUCCEEDED' and last_step != task.node.clean_step): # A previous clean_step was running, the new command has not yet # started. LOG.debug('Clean step not yet started for node %(node)s: %(step)s', {'step': last_step, 'node': task.node.uuid}) return else: return last_command def _get_interfaces(self, inventory): interfaces = [] try: interfaces = inventory['interfaces'] except (KeyError, TypeError): raise exception.InvalidParameterValue(_( 'Malformed network interfaces lookup: %s') % inventory) return interfaces def _get_mac_addresses(self, interfaces): """Returns MACs for the network devices.""" mac_addresses = [] for interface in interfaces: try: mac_addresses.append(utils.validate_and_normalize_mac( interface.get('mac_address'))) except exception.InvalidMAC: LOG.warning(_LW('Malformed MAC: %s'), interface.get( 'mac_address')) return mac_addresses def _find_node_by_macs(self, context, mac_addresses): """Get nodes for a given list of MAC addresses. Given a list of MAC addresses, find the ports that match the MACs and return the node they are all connected to. :raises: NodeNotFound if the ports point to multiple nodes or no nodes. """ ports = self._find_ports_by_macs(context, mac_addresses) if not ports: raise exception.NodeNotFound(_( 'No ports matching the given MAC addresses %s exist in the ' 'database.') % mac_addresses) node_id = self._get_node_id(ports) try: node = objects.Node.get_by_id(context, node_id) except exception.NodeNotFound: with excutils.save_and_reraise_exception(): LOG.exception(_LE('Could not find matching node for the ' 'provided MACs %s.'), mac_addresses) return node def _find_ports_by_macs(self, context, mac_addresses): """Get ports for a given list of MAC addresses. 
        Given a list of MAC addresses, find the ports that match the MACs
        and return them as a list of Port objects, or an empty list if
        there are no matches.
        """
        ports = []
        for mac in mac_addresses:
            # Will do a search by mac if the mac isn't malformed
            try:
                port_ob = objects.Port.get_by_address(context, mac)
                ports.append(port_ob)
            except exception.PortNotFound:
                LOG.warning(_LW('MAC address %s not found in database'), mac)
        return ports

    def _get_node_id(self, ports):
        """Get a node ID for a list of ports.

        Given a list of ports, either return the node_id they all share or
        raise a NotFound if there are multiple node_ids, which indicates
        some ports are connected to one node and the remaining port(s) are
        connected to one or more other nodes.

        :raises: NodeNotFound if the MACs match multiple nodes. This
            could happen if you swapped a NIC from one server to another
            and don't notify Ironic about it or there is a MAC collision
            (since they're not guaranteed to be unique).
        """
        # See if all the ports point to the same node
        node_ids = set(port_ob.node_id for port_ob in ports)
        if len(node_ids) > 1:
            raise exception.NodeNotFound(_(
                'Ports matching mac addresses match multiple nodes. MACs: '
                '%(macs)s. Port ids: %(port_ids)s') %
                {'macs': [port_ob.address for port_ob in ports],
                 'port_ids': [port_ob.uuid for port_ob in ports]}
            )

        # Only have one node_id left, return it.
        return node_ids.pop()

    def _log_and_raise_deployment_error(self, task, msg):
        """Helper method to log the error and raise exception."""
        LOG.error(msg)
        deploy_utils.set_failed_state(task, msg)
        raise exception.InstanceDeployFailure(msg)

    def reboot_and_finish_deploy(self, task):
        """Helper method to trigger reboot on the node and finish deploy.

        This method initiates a reboot on the node. On success, it marks the
        deploy as complete. On failure, it logs the error and marks the
        deploy as failed.

        :param task: a TaskManager object containing the node
        :raises: InstanceDeployFailure, if node reboot failed.
        """
        wait = CONF.agent.post_deploy_get_power_state_retry_interval * 1000
        attempts = CONF.agent.post_deploy_get_power_state_retries + 1

        @retrying.retry(
            stop_max_attempt_number=attempts,
            retry_on_result=lambda state: state != states.POWER_OFF,
            wait_fixed=wait
        )
        def _wait_until_powered_off(task):
            return task.driver.power.get_power_state(task)

        node = task.node
        # Whether ironic should power off the node via out-of-band or
        # in-band methods
        oob_power_off = strutils.bool_from_string(
            node.driver_info.get('deploy_forces_oob_reboot', False))
        try:
            if not oob_power_off:
                try:
                    self._client.power_off(node)
                    _wait_until_powered_off(task)
                except Exception as e:
                    LOG.warning(
                        _LW('Failed to soft power off node %(node_uuid)s '
                            'in at least %(timeout)d seconds. '
                            'Error: %(error)s'),
                        {'node_uuid': node.uuid,
                         'timeout': (wait * (attempts - 1)) / 1000,
                         'error': e})
            else:
                # Flush the file system prior to hard rebooting the node
                result = self._client.sync(node)
                error = result.get('faultstring')
                if error:
                    if 'Unknown command' in error:
                        error = _('The version of the IPA ramdisk used in '
                                  'the deployment does not support the '
                                  'command "sync"')
                    LOG.warning(_LW(
                        'Failed to flush the file system prior to hard '
                        'rebooting the node %(node)s. Error: %(error)s'),
                        {'node': node.uuid, 'error': error})

            manager_utils.node_power_action(task, states.REBOOT)
        except Exception as e:
            msg = (_('Error rebooting node %(node)s after deploy.
' 'Error: %(error)s') % {'node': node.uuid, 'error': e}) self._log_and_raise_deployment_error(task, msg) task.process_event('done') LOG.info(_LI('Deployment to node %s done'), task.node.uuid) def prepare_instance_to_boot(self, task, root_uuid, efi_sys_uuid): """Prepares instance to boot. :param task: a TaskManager object containing the node :param root_uuid: the UUID for root partition :param efi_sys_uuid: the UUID for the efi partition :raises: InvalidState if fails to prepare instance """ node = task.node if deploy_utils.get_boot_option(node) == "local": # Install the boot loader self.configure_local_boot( task, root_uuid=root_uuid, efi_system_part_uuid=efi_sys_uuid) try: task.driver.boot.prepare_instance(task) except Exception as e: LOG.error(_LE('Deploy failed for instance %(instance)s. ' 'Error: %(error)s'), {'instance': node.instance_uuid, 'error': e}) msg = _('Failed to continue agent deployment.') self._log_and_raise_deployment_error(task, msg) def configure_local_boot(self, task, root_uuid=None, efi_system_part_uuid=None): """Helper method to configure local boot on the node. This method triggers bootloader installation on the node. On successful installation of bootloader, this method sets the node to boot from disk. :param task: a TaskManager object containing the node :param root_uuid: The UUID of the root partition. This is used for identifying the partition which contains the image deployed or None in case of whole disk images which we expect to already have a bootloader installed. :param efi_system_part_uuid: The UUID of the efi system partition. This is used only in uefi boot mode. :raises: InstanceDeployFailure if bootloader installation failed or on encountering error while setting the boot device on the node. """ node = task.node LOG.debug('Configuring local boot for node %s', node.uuid) if not node.driver_internal_info.get( 'is_whole_disk_image') and root_uuid: LOG.debug('Installing the bootloader for node %(node)s on ' 'partition %(part)s, EFI system partition %(efi)s', {'node': node.uuid, 'part': root_uuid, 'efi': efi_system_part_uuid}) result = self._client.install_bootloader( node, root_uuid=root_uuid, efi_system_part_uuid=efi_system_part_uuid) if result['command_status'] == 'FAILED': msg = (_("Failed to install a bootloader when " "deploying node %(node)s. Error: %(error)s") % {'node': node.uuid, 'error': result['command_error']}) self._log_and_raise_deployment_error(task, msg) try: deploy_utils.try_set_boot_device(task, boot_devices.DISK) except Exception as e: msg = (_("Failed to change the boot device to %(boot_dev)s " "when deploying node %(node)s. Error: %(error)s") % {'boot_dev': boot_devices.DISK, 'node': node.uuid, 'error': e}) self._log_and_raise_deployment_error(task, msg) LOG.info(_LI('Local boot successfully configured for node %s'), node.uuid) ironic-5.1.0/ironic/drivers/modules/seamicro.py0000664000567000056710000005757012674513466022762 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Ironic SeaMicro interfaces. 
Provides basic power control of servers in SeaMicro chassis via
python-seamicroclient.

Provides vendor passthru methods for SeaMicro specific functionality.
"""
import os
import re

from oslo_config import cfg
from oslo_log import log as logging
from oslo_service import loopingcall
from oslo_utils import importutils
import six
from six.moves.urllib import parse as urlparse

from ironic.common import boot_devices
from ironic.common import exception
from ironic.common.i18n import _
from ironic.common.i18n import _LE
from ironic.common.i18n import _LW
from ironic.common import states
from ironic.common import utils
from ironic.conductor import task_manager
from ironic.drivers import base
from ironic.drivers.modules import console_utils

seamicroclient = importutils.try_import('seamicroclient')
if seamicroclient:
    from seamicroclient import client as seamicro_client
    from seamicroclient import exceptions as seamicro_client_exception

opts = [
    cfg.IntOpt('max_retry',
               default=3,
               help=_('Maximum retries for SeaMicro operations')),
    cfg.IntOpt('action_timeout',
               default=10,
               help=_('Seconds to wait for power action to be completed'))
]

CONF = cfg.CONF
opt_group = cfg.OptGroup(name='seamicro',
                         title='Options for the seamicro power driver')
CONF.register_group(opt_group)
CONF.register_opts(opts, opt_group)

LOG = logging.getLogger(__name__)

_BOOT_DEVICES_MAP = {
    boot_devices.DISK: 'hd0',
    boot_devices.PXE: 'pxe',
}

REQUIRED_PROPERTIES = {
    'seamicro_api_endpoint': _("API endpoint. Required."),
    'seamicro_password': _("password. Required."),
    'seamicro_server_id': _("server ID. Required."),
    'seamicro_username': _("username. Required."),
}
OPTIONAL_PROPERTIES = {
    'seamicro_api_version': _("version of SeaMicro API client; default is 2. "
                              "Optional.")
}
COMMON_PROPERTIES = REQUIRED_PROPERTIES.copy()
COMMON_PROPERTIES.update(OPTIONAL_PROPERTIES)
CONSOLE_PROPERTIES = {
    'seamicro_terminal_port': _("node's UDP port to connect to. "
                                "Only required for console access.")
}

PORT_BASE = 2000


def _get_client(*args, **kwargs):
    """Creates the python-seamicroclient.

    :param kwargs: A dict of keyword arguments to be passed to the method,
                   which should contain: 'username', 'password',
                   'api_endpoint', 'api_version' parameters.
    :returns: SeaMicro API client.
    """
    cl_kwargs = {'username': kwargs['username'],
                 'password': kwargs['password'],
                 'auth_url': kwargs['api_endpoint']}
    try:
        return seamicro_client.Client(kwargs['api_version'], **cl_kwargs)
    except seamicro_client_exception.UnsupportedVersion as e:
        raise exception.InvalidParameterValue(_(
            "Invalid 'seamicro_api_version' parameter. Reason: %s.") % e)


def _parse_driver_info(node):
    """Parses and creates seamicro driver info.

    :param node: An Ironic node object.
    :returns: SeaMicro driver info.
    :raises: MissingParameterValue if any required parameters are missing.
    :raises: InvalidParameterValue if required parameters are invalid.
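    For example, a node's driver_info might look like this (all values
    are illustrative only)::

        {
            'seamicro_api_endpoint': 'http://192.168.1.1/v2.0',
            'seamicro_username': 'admin',
            'seamicro_password': 'secret',
            'seamicro_server_id': '0/0',
            'seamicro_api_version': '2',      # optional
            'seamicro_terminal_port': 2000,   # optional, console only
        }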
""" info = node.driver_info or {} missing_info = [key for key in REQUIRED_PROPERTIES if not info.get(key)] if missing_info: raise exception.MissingParameterValue(_( "SeaMicro driver requires the following parameters to be set in" " node's driver_info: %s.") % missing_info) api_endpoint = info.get('seamicro_api_endpoint') username = info.get('seamicro_username') password = info.get('seamicro_password') server_id = info.get('seamicro_server_id') api_version = info.get('seamicro_api_version', "2") port = info.get('seamicro_terminal_port') if port is not None: port = utils.validate_network_port(port, 'seamicro_terminal_port') r = re.compile(r"(^[0-9]+)/([0-9]+$)") if not r.match(server_id): raise exception.InvalidParameterValue(_( "Invalid 'seamicro_server_id' parameter in node's " "driver_info. Expected format of 'seamicro_server_id' " "is /")) url = urlparse.urlparse(api_endpoint) if (not (url.scheme == "http") or not url.netloc): raise exception.InvalidParameterValue(_( "Invalid 'seamicro_api_endpoint' parameter in node's " "driver_info.")) res = {'username': username, 'password': password, 'api_endpoint': api_endpoint, 'server_id': server_id, 'api_version': api_version, 'uuid': node.uuid, 'port': port} return res def _get_server(driver_info): """Get server from server_id.""" s_client = _get_client(**driver_info) return s_client.servers.get(driver_info['server_id']) def _get_volume(driver_info, volume_id): """Get volume from volume_id.""" s_client = _get_client(**driver_info) return s_client.volumes.get(volume_id) def _get_power_status(node): """Get current power state of this node :param node: Ironic node one of :class:`ironic.db.models.Node` :raises: InvalidParameterValue if a seamicro parameter is invalid. :raises: MissingParameterValue if required seamicro parameters are missing. :raises: ServiceUnavailable on an error from SeaMicro Client. :returns: Power state of the given node """ seamicro_info = _parse_driver_info(node) try: server = _get_server(seamicro_info) if not hasattr(server, 'active') or server.active is None: return states.ERROR if not server.active: return states.POWER_OFF elif server.active: return states.POWER_ON except seamicro_client_exception.NotFound: raise exception.NodeNotFound(node=node.uuid) except seamicro_client_exception.ClientException as ex: LOG.error(_LE("SeaMicro client exception %(msg)s for node %(uuid)s"), {'msg': ex.message, 'uuid': node.uuid}) raise exception.ServiceUnavailable(message=ex.message) def _power_on(node, timeout=None): """Power ON this node :param node: An Ironic node object. :param timeout: Time in seconds to wait till power on is complete. :raises: InvalidParameterValue if a seamicro parameter is invalid. :raises: MissingParameterValue if required seamicro parameters are missing. :returns: Power state of the given node. 
""" if timeout is None: timeout = CONF.seamicro.action_timeout state = [None] retries = [0] seamicro_info = _parse_driver_info(node) server = _get_server(seamicro_info) def _wait_for_power_on(state, retries): """Called at an interval until the node is powered on.""" state[0] = _get_power_status(node) if state[0] == states.POWER_ON: raise loopingcall.LoopingCallDone() if retries[0] > CONF.seamicro.max_retry: state[0] = states.ERROR raise loopingcall.LoopingCallDone() try: retries[0] += 1 server.power_on() except seamicro_client_exception.ClientException: LOG.warning(_LW("Power-on failed for node %s."), node.uuid) timer = loopingcall.FixedIntervalLoopingCall(_wait_for_power_on, state, retries) timer.start(interval=timeout).wait() return state[0] def _power_off(node, timeout=None): """Power OFF this node :param node: Ironic node one of :class:`ironic.db.models.Node` :param timeout: Time in seconds to wait till power off is compelete :raises: InvalidParameterValue if a seamicro parameter is invalid. :raises: MissingParameterValue if required seamicro parameters are missing. :returns: Power state of the given node """ if timeout is None: timeout = CONF.seamicro.action_timeout state = [None] retries = [0] seamicro_info = _parse_driver_info(node) server = _get_server(seamicro_info) def _wait_for_power_off(state, retries): """Called at an interval until the node is powered off.""" state[0] = _get_power_status(node) if state[0] == states.POWER_OFF: raise loopingcall.LoopingCallDone() if retries[0] > CONF.seamicro.max_retry: state[0] = states.ERROR raise loopingcall.LoopingCallDone() try: retries[0] += 1 server.power_off() except seamicro_client_exception.ClientException: LOG.warning(_LW("Power-off failed for node %s."), node.uuid) timer = loopingcall.FixedIntervalLoopingCall(_wait_for_power_off, state, retries) timer.start(interval=timeout).wait() return state[0] def _reboot(node, timeout=None): """Reboot this node. :param node: Ironic node one of :class:`ironic.db.models.Node` :param timeout: Time in seconds to wait till reboot is compelete :raises: InvalidParameterValue if a seamicro parameter is invalid. :raises: MissingParameterValue if required seamicro parameters are missing. 
:returns: Power state of the given node """ if timeout is None: timeout = CONF.seamicro.action_timeout state = [None] retries = [0] seamicro_info = _parse_driver_info(node) server = _get_server(seamicro_info) def _wait_for_reboot(state, retries): """Called at an interval until the node is rebooted successfully.""" state[0] = _get_power_status(node) if state[0] == states.POWER_ON: raise loopingcall.LoopingCallDone() if retries[0] > CONF.seamicro.max_retry: state[0] = states.ERROR raise loopingcall.LoopingCallDone() try: retries[0] += 1 server.reset() except seamicro_client_exception.ClientException: LOG.warning(_LW("Reboot failed for node %s."), node.uuid) timer = loopingcall.FixedIntervalLoopingCall(_wait_for_reboot, state, retries) server.reset() timer.start(interval=timeout).wait() return state[0] def _validate_volume(driver_info, volume_id): """Validates if volume is in Storage pools designated for ironic.""" volume = _get_volume(driver_info, volume_id) # Check if the ironic /ironic-/ naming scheme # is present in volume id try: pool_id = volume.id.split('/')[1].lower() except IndexError: pool_id = "" if "ironic-" in pool_id: return True else: raise exception.InvalidParameterValue(_( "Invalid volume id specified")) def _get_pools(driver_info, filters=None): """Get SeaMicro storage pools matching given filters.""" s_client = _get_client(**driver_info) return s_client.pools.list(filters=filters) def _create_volume(driver_info, volume_size): """Create volume in the SeaMicro storage pools designated for ironic.""" ironic_pools = _get_pools(driver_info, filters={'id': 'ironic-'}) if ironic_pools is None: raise exception.VendorPassthruException(_( "No storage pools found for ironic")) least_used_pool = sorted(ironic_pools, key=lambda x: x.freeSize)[0] return _get_client(**driver_info).volumes.create(volume_size, least_used_pool) def get_telnet_port(driver_info): """Get SeaMicro telnet port to listen.""" server_id = int(driver_info['server_id'].split("/")[0]) return PORT_BASE + (10 * server_id) class Power(base.PowerInterface): """SeaMicro Power Interface. This PowerInterface class provides a mechanism for controlling the power state of servers in a seamicro chassis. """ def get_properties(self): return COMMON_PROPERTIES def validate(self, task): """Check that node 'driver_info' is valid. Check that node 'driver_info' contains the required fields. :param task: a TaskManager instance containing the node to act on. :raises: MissingParameterValue if required seamicro parameters are missing. """ _parse_driver_info(task.node) def get_power_state(self, task): """Get the current power state of the task's node. Poll the host for the current power state of the node. :param task: a TaskManager instance containing the node to act on. :raises: ServiceUnavailable on an error from SeaMicro Client. :raises: InvalidParameterValue if a seamicro parameter is invalid. :raises: MissingParameterValue when a required parameter is missing :returns: power state. One of :class:`ironic.common.states`. """ return _get_power_status(task.node) @task_manager.require_exclusive_lock def set_power_state(self, task, pstate): """Turn the power on or off. Set the power state of a node. :param task: a TaskManager instance containing the node to act on. :param pstate: Either POWER_ON or POWER_OFF from :class: `ironic.common.states`. :raises: InvalidParameterValue if an invalid power state was specified or a seamicro parameter is invalid. 
:raises: MissingParameterValue when a required parameter is missing :raises: PowerStateFailure if the desired power state couldn't be set. """ if pstate == states.POWER_ON: state = _power_on(task.node) elif pstate == states.POWER_OFF: state = _power_off(task.node) else: raise exception.InvalidParameterValue(_( "set_power_state called with invalid power state.")) if state != pstate: raise exception.PowerStateFailure(pstate=pstate) @task_manager.require_exclusive_lock def reboot(self, task): """Cycles the power to the task's node. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue if a seamicro parameter is invalid. :raises: MissingParameterValue if required seamicro parameters are missing. :raises: PowerStateFailure if the final state of the node is not POWER_ON. """ state = _reboot(task.node) if state != states.POWER_ON: raise exception.PowerStateFailure(pstate=states.POWER_ON) class VendorPassthru(base.VendorInterface): """SeaMicro vendor-specific methods.""" def get_properties(self): return COMMON_PROPERTIES def validate(self, task, method, **kwargs): _parse_driver_info(task.node) @base.passthru(['POST']) def set_node_vlan_id(self, task, **kwargs): """Sets an untagged vlan id for NIC 0 of node. @kwargs vlan_id: id of untagged vlan for NIC 0 of node """ node = task.node vlan_id = kwargs.get('vlan_id') if not vlan_id: raise exception.MissingParameterValue(_("No vlan id provided")) seamicro_info = _parse_driver_info(node) try: server = _get_server(seamicro_info) # remove current vlan for server if len(server.nic['0']['untaggedVlan']) > 0: server.unset_untagged_vlan(server.nic['0']['untaggedVlan']) server = server.refresh(5) server.set_untagged_vlan(vlan_id) except seamicro_client_exception.ClientException as ex: LOG.error(_LE("SeaMicro client exception: %s"), ex.message) raise exception.VendorPassthruException(message=ex.message) properties = node.properties properties['seamicro_vlan_id'] = vlan_id node.properties = properties node.save() @base.passthru(['POST']) def attach_volume(self, task, **kwargs): """Attach a volume to a node. Attach volume from SeaMicro storage pools for ironic to node. If kwargs['volume_id'] not given, Create volume in SeaMicro storage pool and attach to node. @kwargs volume_id: id of pre-provisioned volume that is to be attached as root volume of node @kwargs volume_size: size of new volume to be created and attached as root volume of node """ node = task.node seamicro_info = _parse_driver_info(node) volume_id = kwargs.get('volume_id') if volume_id is None: volume_size = kwargs.get('volume_size') if volume_size is None: raise exception.MissingParameterValue( _("No volume size provided for creating volume")) volume_id = _create_volume(seamicro_info, volume_size) if _validate_volume(seamicro_info, volume_id): try: server = _get_server(seamicro_info) server.detach_volume() server = server.refresh(5) server.attach_volume(volume_id) except seamicro_client_exception.ClientException as ex: LOG.error(_LE("SeaMicro client exception: %s"), ex.message) raise exception.VendorPassthruException(message=ex.message) properties = node.properties properties['seamicro_volume_id'] = volume_id node.properties = properties node.save() class Management(base.ManagementInterface): def get_properties(self): return COMMON_PROPERTIES def validate(self, task): """Check that 'driver_info' contains SeaMicro credentials. Validates whether the 'driver_info' property of the supplied task's node contains the required credentials information. 
:param task: a task from TaskManager. :raises: MissingParameterValue when a required parameter is missing """ _parse_driver_info(task.node) def get_supported_boot_devices(self, task): """Get a list of the supported boot devices. :param task: a task from TaskManager. :returns: A list with the supported boot devices defined in :mod:`ironic.common.boot_devices`. """ return list(_BOOT_DEVICES_MAP.keys()) @task_manager.require_exclusive_lock def set_boot_device(self, task, device, persistent=False): """Set the boot device for the task's node. Set the boot device to use on next reboot of the node. :param task: a task from TaskManager. :param device: the boot device, one of :mod:`ironic.common.boot_devices`. :param persistent: Boolean value. True if the boot device will persist to all future boots, False if not. Default: False. Ignored by this driver. :raises: InvalidParameterValue if an invalid boot device is specified or if a seamicro parameter is invalid. :raises: IronicException on an error from seamicro-client. :raises: MissingParameterValue when a required parameter is missing """ if device not in self.get_supported_boot_devices(task): raise exception.InvalidParameterValue(_( "Invalid boot device %s specified.") % device) seamicro_info = _parse_driver_info(task.node) try: server = _get_server(seamicro_info) boot_device = _BOOT_DEVICES_MAP[device] server.set_boot_order(boot_device) except seamicro_client_exception.ClientException as ex: LOG.error(_LE("Seamicro set boot device failed for node " "%(node)s with the following error: %(error)s"), {'node': task.node.uuid, 'error': ex}) raise exception.IronicException(message=six.text_type(ex)) def get_boot_device(self, task): """Get the current boot device for the task's node. Returns the current boot device of the node. Be aware that not all drivers support this. :param task: a task from TaskManager. :returns: a dictionary containing: :boot_device: the boot device, one of :mod:`ironic.common.boot_devices` or None if it is unknown. :persistent: Whether the boot device will persist to all future boots or not, None if it is unknown. """ # TODO(lucasagomes): The python-seamicroclient library currently # doesn't expose a method to get the boot device, update it once # it's implemented. return {'boot_device': None, 'persistent': None} def get_sensors_data(self, task): """Get sensors data method. Not implemented by this driver. :param task: a TaskManager instance. """ raise NotImplementedError() class ShellinaboxConsole(base.ConsoleInterface): """A ConsoleInterface that uses telnet and shellinabox.""" def get_properties(self): d = COMMON_PROPERTIES.copy() d.update(CONSOLE_PROPERTIES) return d def validate(self, task): """Validate the Node console info. :param task: a task from TaskManager. :raises: MissingParameterValue if required seamicro parameters are missing :raises: InvalidParameterValue if required parameter are invalid. """ driver_info = _parse_driver_info(task.node) if not driver_info['port']: raise exception.MissingParameterValue(_( "Missing 'seamicro_terminal_port' parameter in node's " "driver_info")) def start_console(self, task): """Start a remote console for the node. :param task: a task from TaskManager :raises: MissingParameterValue if required seamicro parameters are missing :raises: ConsoleError if the directory for the PID file cannot be created :raises: ConsoleSubprocessFailed when invoking the subprocess failed :raises: InvalidParameterValue if required parameter are invalid. 
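        For example, for a (hypothetical) node whose 'seamicro_server_id'
        is '4/0', get_telnet_port() resolves to 2000 + 10 * 4 = 2040, so
        the console command launched here wraps roughly
        'telnet <chassis-ip> 2040' via shellinabox.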
""" driver_info = _parse_driver_info(task.node) telnet_port = get_telnet_port(driver_info) chassis_ip = urlparse.urlparse(driver_info['api_endpoint']).netloc seamicro_cmd = ("/:%(uid)s:%(gid)s:HOME:telnet %(chassis)s %(port)s" % {'uid': os.getuid(), 'gid': os.getgid(), 'chassis': chassis_ip, 'port': telnet_port}) console_utils.start_shellinabox_console(driver_info['uuid'], driver_info['port'], seamicro_cmd) def stop_console(self, task): """Stop the remote console session for the node. :param task: a task from TaskManager :raises: ConsoleError if unable to stop the console """ console_utils.stop_shellinabox_console(task.node.uuid) def get_console(self, task): """Get the type and connection information about the console. :raises: MissingParameterValue if required seamicro parameters are missing :raises: InvalidParameterValue if required parameter are invalid. """ driver_info = _parse_driver_info(task.node) url = console_utils.get_shellinabox_console_url(driver_info['port']) return {'type': 'shellinabox', 'url': url} ironic-5.1.0/ironic/drivers/modules/drac/0000775000567000056710000000000012674513633021475 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/modules/drac/common.py0000664000567000056710000001032112674513466023340 0ustar jenkinsjenkins00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Common functionalities shared between different DRAC modules. """ from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.common import utils drac_client = importutils.try_import('dracclient.client') drac_constants = importutils.try_import('dracclient.constants') REQUIRED_PROPERTIES = { 'drac_host': _('IP address or hostname of the DRAC card. Required.'), 'drac_username': _('username used for authentication. Required.'), 'drac_password': _('password used for authentication. Required.') } OPTIONAL_PROPERTIES = { 'drac_port': _('port used for WS-Man endpoint; default is 443. Optional.'), 'drac_path': _('path used for WS-Man endpoint; default is "/wsman". ' 'Optional.'), 'drac_protocol': _('protocol used for WS-Man endpoint; one of http, https;' ' default is "https". Optional.'), } COMMON_PROPERTIES = REQUIRED_PROPERTIES.copy() COMMON_PROPERTIES.update(OPTIONAL_PROPERTIES) def parse_driver_info(node): """Parse a node's driver_info values. Parses the driver_info of the node, reads default values and returns a dict containing the combination of both. :param node: an ironic node object. :returns: a dict containing information from driver_info and default values. :raises: InvalidParameterValue if some mandatory information is missing on the node or on invalid inputs. 
""" driver_info = node.driver_info parsed_driver_info = {} error_msgs = [] for param in REQUIRED_PROPERTIES: try: parsed_driver_info[param] = str(driver_info[param]) except KeyError: error_msgs.append(_("'%s' not supplied to DracDriver.") % param) except UnicodeEncodeError: error_msgs.append(_("'%s' contains non-ASCII symbol.") % param) parsed_driver_info['drac_port'] = driver_info.get('drac_port', 443) try: parsed_driver_info['drac_path'] = str(driver_info.get('drac_path', '/wsman')) except UnicodeEncodeError: error_msgs.append(_("'drac_path' contains non-ASCII symbol.")) try: parsed_driver_info['drac_protocol'] = str( driver_info.get('drac_protocol', 'https')) if parsed_driver_info['drac_protocol'] not in ['http', 'https']: error_msgs.append(_("'drac_protocol' must be either 'http' or " "'https'.")) except UnicodeEncodeError: error_msgs.append(_("'drac_protocol' contains non-ASCII symbol.")) if error_msgs: msg = (_('The following errors were encountered while parsing ' 'driver_info:\n%s') % '\n'.join(error_msgs)) raise exception.InvalidParameterValue(msg) port = parsed_driver_info['drac_port'] parsed_driver_info['drac_port'] = utils.validate_network_port( port, 'drac_port') return parsed_driver_info def get_drac_client(node): """Returns a DRACClient object from python-dracclient library. :param node: an ironic node object. :returns: a DRACClient object. :raises: InvalidParameterValue if mandatory information is missing on the node or on invalid input. """ driver_info = parse_driver_info(node) client = drac_client.DRACClient(driver_info['drac_host'], driver_info['drac_username'], driver_info['drac_password'], driver_info['drac_port'], driver_info['drac_path'], driver_info['drac_protocol']) return client ironic-5.1.0/ironic/drivers/modules/drac/power.py0000664000567000056710000001463412674513466023217 0ustar jenkinsjenkins00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ DRAC power interface """ from oslo_log import log as logging from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _LE from ironic.common import states from ironic.conductor import task_manager from ironic.drivers import base from ironic.drivers.modules.drac import common as drac_common from ironic.drivers.modules.drac import management as drac_management drac_constants = importutils.try_import('dracclient.constants') drac_exceptions = importutils.try_import('dracclient.exceptions') LOG = logging.getLogger(__name__) if drac_constants: POWER_STATES = { drac_constants.POWER_ON: states.POWER_ON, drac_constants.POWER_OFF: states.POWER_OFF, drac_constants.REBOOT: states.REBOOT } REVERSE_POWER_STATES = dict((v, k) for (k, v) in POWER_STATES.items()) def _get_power_state(node): """Returns the current power state of the node. :param node: an ironic node object. :returns: the power state, one of :mod:`ironic.common.states`. :raises: InvalidParameterValue if required DRAC credentials are missing. 
:raises: DracOperationError on an error from python-dracclient """ client = drac_common.get_drac_client(node) try: drac_power_state = client.get_power_state() except drac_exceptions.BaseClientException as exc: LOG.error(_LE('DRAC driver failed to get power state for node ' '%(node_uuid)s. Reason: %(error)s.'), {'node_uuid': node.uuid, 'error': exc}) raise exception.DracOperationError(error=exc) return POWER_STATES[drac_power_state] def _commit_boot_list_change(node): driver_internal_info = node.driver_internal_info boot_device = node.driver_internal_info.get('drac_boot_device') if boot_device is None: return drac_management.set_boot_device(node, boot_device['boot_device'], boot_device['persistent']) driver_internal_info['drac_boot_device'] = None node.driver_internal_info = driver_internal_info node.save() def _set_power_state(node, power_state): """Turns the server power on/off or do a reboot. :param node: an ironic node object. :param power_state: a power state from :mod:`ironic.common.states`. :raises: InvalidParameterValue if required DRAC credentials are missing. :raises: DracOperationError on an error from python-dracclient """ # NOTE(ifarkas): DRAC interface doesn't allow changing the boot device # multiple times in a row without a reboot. This is # because a change need to be committed via a # configuration job, and further configuration jobs # cannot be created until the previous one is processed # at the next boot. As a workaround, it is saved to # driver_internal_info during set_boot_device and committing # it here. _commit_boot_list_change(node) client = drac_common.get_drac_client(node) target_power_state = REVERSE_POWER_STATES[power_state] try: client.set_power_state(target_power_state) except drac_exceptions.BaseClientException as exc: LOG.error(_LE('DRAC driver failed to set power state for node ' '%(node_uuid)s to %(power_state)s. ' 'Reason: %(error)s.'), {'node_uuid': node.uuid, 'power_state': power_state, 'error': exc}) raise exception.DracOperationError(error=exc) class DracPower(base.PowerInterface): """Interface for power-related actions.""" def get_properties(self): """Return the properties of the interface.""" return drac_common.COMMON_PROPERTIES def validate(self, task): """Validate the driver-specific Node power info. This method validates whether the 'driver_info' property of the supplied node contains the required information for this driver to manage the power state of the node. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue if required driver_info attribute is missing or invalid on the node. """ return drac_common.parse_driver_info(task.node) def get_power_state(self, task): """Return the power state of the node. :param task: a TaskManager instance containing the node to act on. :returns: the power state, one of :mod:`ironic.common.states`. :raises: InvalidParameterValue if required DRAC credentials are missing. :raises: DracOperationError on an error from python-dracclient. """ return _get_power_state(task.node) @task_manager.require_exclusive_lock def set_power_state(self, task, power_state): """Set the power state of the node. :param task: a TaskManager instance containing the node to act on. :param power_state: a power state from :mod:`ironic.common.states`. :raises: InvalidParameterValue if required DRAC credentials are missing. :raises: DracOperationError on an error from python-dracclient. 
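Note: as described in _set_power_state(), a boot device change cached
        in the node's driver_internal_info under 'drac_boot_device' is
        committed just before the power action. A hedged sketch of a call,
        assuming a regular TaskManager task:

            task.driver.power.set_power_state(task, states.POWER_ON)
            # any pending 'drac_boot_device' entry is committed first,
            # then python-dracclient performs the power action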
""" _set_power_state(task.node, power_state) @task_manager.require_exclusive_lock def reboot(self, task): """Perform a reboot of the task's node. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue if required DRAC credentials are missing. :raises: DracOperationError on an error from python-dracclient. """ current_power_state = _get_power_state(task.node) if current_power_state == states.POWER_ON: target_power_state = states.REBOOT else: target_power_state = states.POWER_ON _set_power_state(task.node, target_power_state) ironic-5.1.0/ironic/drivers/modules/drac/bios.py0000664000567000056710000001544612674513466023021 0ustar jenkinsjenkins00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ DRAC BIOS configuration specific methods """ from oslo_log import log as logging from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _LE from ironic.drivers.modules.drac import common as drac_common from ironic.drivers.modules.drac import job as drac_job drac_exceptions = importutils.try_import('dracclient.exceptions') LOG = logging.getLogger(__name__) def get_config(node): """Get the BIOS configuration. :param node: an ironic node object. :raises: DracOperationError on an error from python-dracclient. :returns: a dictionary containing BIOS settings in the form of: {'EnumAttrib': {'name': 'EnumAttrib', 'current_value': 'Value', 'pending_value': 'New Value', # could also be None 'read_only': False, 'possible_values': ['Value', 'New Value', 'None']}, 'StringAttrib': {'name': 'StringAttrib', 'current_value': 'Information', 'pending_value': None, 'read_only': False, 'min_length': 0, 'max_length': 255, 'pcre_regex': '^[0-9A-Za-z]{0,255}$'}, 'IntegerAttrib': {'name': 'IntegerAttrib', 'current_value': 0, 'pending_value': None, 'read_only': True, 'lower_bound': 0, 'upper_bound': 65535} } The above values are only examples, of course. BIOS attributes exposed via this API will always be either an enumerated attribute, a string attribute, or an integer attribute. All attributes have the following parameters: :name: is the name of the BIOS attribute. :current_value: is the current value of the attribute. It will always be either an integer or a string. :pending_value: is the new value that we want the attribute to have. None means that there is no pending value. :read_only: indicates whether this attribute can be changed. Trying to change a read-only value will result in an error. The read-only flag can change depending on other attributes. A future version of this call may expose the dependencies that indicate when that may happen. Enumerable attributes also have the following parameters: :possible_values: is an array of values it is permissible to set the attribute to. String attributes also have the following parameters: :min_length: is the minimum length of the string. :max_length: is the maximum length of the string. :pcre_regex: is a PCRE compatible regular expression that the string must match. 
It may be None if the string is read only or if the string does not have to match any particular regular expression. Integer attributes also have the following parameters: :lower_bound: is the minimum value the attribute can have. :upper_bound: is the maximum value the attribute can have. """ client = drac_common.get_drac_client(node) try: return client.list_bios_settings() except drac_exceptions.BaseClientException as exc: LOG.error(_LE('DRAC driver failed to get the BIOS settings for node ' '%(node_uuid)s. Reason: %(error)s.'), {'node_uuid': node.uuid, 'error': exc}) raise exception.DracOperationError(error=exc) def set_config(task, **kwargs): """Sets the pending_value parameter for each of the values passed in. :param task: a TaskManager instance containing the node to act on. :param kwargs: a dictionary of {'AttributeName': 'NewValue'} :raises: DracOperationError on an error from python-dracclient. :returns: A dictionary containing the commit_required key with a boolean value indicating whether commit_bios_config() needs to be called to make the changes. """ node = task.node drac_job.validate_job_queue(node) client = drac_common.get_drac_client(node) if 'http_method' in kwargs: del kwargs['http_method'] try: return client.set_bios_settings(kwargs) except drac_exceptions.BaseClientException as exc: LOG.error(_LE('DRAC driver failed to set the BIOS settings for node ' '%(node_uuid)s. Reason: %(error)s.'), {'node_uuid': node.uuid, 'error': exc}) raise exception.DracOperationError(error=exc) def commit_config(task, reboot=False): """Commits pending changes added by set_config :param task: a TaskManager instance containing the node to act on. :param reboot: indicates whether a reboot job should be automatically created with the config job. :raises: DracOperationError on an error from python-dracclient. :returns: the job_id key with the id of the newly created config job. """ node = task.node drac_job.validate_job_queue(node) client = drac_common.get_drac_client(node) try: return client.commit_pending_bios_changes(reboot) except drac_exceptions.BaseClientException as exc: LOG.error(_LE('DRAC driver failed to commit the pending BIOS changes ' 'for node %(node_uuid)s. Reason: %(error)s.'), {'node_uuid': node.uuid, 'error': exc}) raise exception.DracOperationError(error=exc) def abandon_config(task): """Abandons uncommitted changes added by set_config :param task: a TaskManager instance containing the node to act on. :raises: DracOperationError on an error from python-dracclient. """ node = task.node client = drac_common.get_drac_client(node) try: client.abandon_pending_bios_changes() except drac_exceptions.BaseClientException as exc: LOG.error(_LE('DRAC driver failed to delete the pending BIOS ' 'settings for node %(node_uuid)s. Reason: %(error)s.'), {'node_uuid': node.uuid, 'error': exc}) raise exception.DracOperationError(error=exc) ironic-5.1.0/ironic/drivers/modules/drac/__init__.py0000664000567000056710000000000012674513466023600 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/modules/drac/vendor_passthru.py0000664000567000056710000001077712674513466025315 0ustar jenkinsjenkins00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ DRAC VendorPassthruBios Driver """ from ironic.conductor import task_manager from ironic.drivers import base from ironic.drivers.modules.drac import bios from ironic.drivers.modules.drac import common as drac_common class DracVendorPassthru(base.VendorInterface): """Interface for DRAC specific BIOS configuration methods.""" def get_properties(self): """Return the properties of the interface.""" return drac_common.COMMON_PROPERTIES def validate(self, task, **kwargs): """Validate the driver-specific info supplied. This method validates whether the 'driver_info' property of the supplied node contains the required information for this driver to manage the power state of the node. :param task: a TaskManager instance containing the node to act on. :param kwargs: not used. :raises: InvalidParameterValue if required driver_info attribute is missing or invalid on the node. """ return drac_common.parse_driver_info(task.node) @base.passthru(['GET'], async=False) def get_bios_config(self, task, **kwargs): """Get the BIOS configuration. This method is used to retrieve the BIOS settings from a node. :param task: a TaskManager instance containing the node to act on. :param kwargs: not used. :raises: DracOperationError on an error from python-dracclient. :returns: a dictionary containing BIOS settings. """ bios_attrs = {} for name, bios_attr in bios.get_config(task.node).items(): # NOTE(ifarkas): call from python-dracclient returns list of # namedtuples, converting it to dict here. bios_attrs[name] = bios_attr.__dict__ return bios_attrs @base.passthru(['POST'], async=False) @task_manager.require_exclusive_lock def set_bios_config(self, task, **kwargs): """Change BIOS settings. This method is used to change the BIOS settings on a node. :param task: a TaskManager instance containing the node to act on. :param kwargs: a dictionary of {'AttributeName': 'NewValue'} :raises: DracOperationError on an error from python-dracclient. :returns: A dictionary containing the commit_required key with a Boolean value indicating whether commit_bios_config() needs to be called to make the changes. """ return bios.set_config(task, **kwargs) @base.passthru(['POST'], async=False) @task_manager.require_exclusive_lock def commit_bios_config(self, task, reboot=False, **kwargs): """Commit a BIOS configuration job. This method is used to commit a BIOS configuration job. submitted through set_bios_config(). :param task: a TaskManager instance containing the node to act on. :param reboot: indicates whether a reboot job should be automatically created with the config job. :param kwargs: not used. :raises: DracOperationError on an error from python-dracclient. :returns: A dictionary containing the job_id key with the id of the newly created config job, and the reboot_required key indicating whether to node needs to be rebooted to start the config job. """ job_id = bios.commit_config(task, reboot=reboot) return {'job_id': job_id, 'reboot_required': not reboot} @base.passthru(['DELETE'], async=False) @task_manager.require_exclusive_lock def abandon_bios_config(self, task, **kwargs): """Abandon a BIOS configuration job. 
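(For context, a typical vendor passthru round-trip against the methods
        of this class, with <uuid> standing in for a real node UUID, is:

            GET    /v1/nodes/<uuid>/vendor_passthru?method=get_bios_config
            POST   /v1/nodes/<uuid>/vendor_passthru?method=set_bios_config
            POST   /v1/nodes/<uuid>/vendor_passthru?method=commit_bios_config
            DELETE /v1/nodes/<uuid>/vendor_passthru?method=abandon_bios_config

        the DELETE variant being routed to this method.)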
This method is used to abandon a BIOS configuration previously submitted through set_bios_config(). :param task: a TaskManager instance containing the node to act on. :param kwargs: not used. :raises: DracOperationError on an error from python-dracclient. """ bios.abandon_config(task) ironic-5.1.0/ironic/drivers/modules/drac/job.py0000664000567000056710000000353412674513466022632 0ustar jenkinsjenkins00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ DRAC Lifecycle job specific methods """ from oslo_log import log as logging from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.drivers.modules.drac import common as drac_common drac_exceptions = importutils.try_import('dracclient.exceptions') LOG = logging.getLogger(__name__) def validate_job_queue(node): """Validates the job queue on the node. It raises an exception if an unfinished configuration job exists. :param node: an ironic node object. :raises: DracOperationError on an error from python-dracclient. """ client = drac_common.get_drac_client(node) try: unfinished_jobs = client.list_jobs(only_unfinished=True) except drac_exceptions.BaseClientException as exc: LOG.error(_LE('DRAC driver failed to get the list of unfinished jobs ' 'for node %(node_uuid)s. Reason: %(error)s.'), {'node_uuid': node.uuid, 'error': exc}) raise exception.DracOperationError(error=exc) if unfinished_jobs: msg = _('Unfinished config jobs found: %(jobs)r. Make sure they are ' 'completed before retrying.') % {'jobs': unfinished_jobs} raise exception.DracOperationError(error=msg) ironic-5.1.0/ironic/drivers/modules/drac/management.py0000664000567000056710000002054512674513466024175 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # # Copyright 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
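# Illustrative sketch, not part of the shipped module: the other DRAC
# modules in this package call validate_job_queue() as a guard before
# queueing new configuration work, roughly as below (apply_bios_change is
# a hypothetical helper shown for demonstration only):
#
#     from ironic.drivers.modules.drac import job as drac_job
#
#     def apply_bios_change(task, settings):
#         # Raises DracOperationError if unfinished config jobs exist,
#         # so new jobs are only created against an idle job queue.
#         drac_job.validate_job_queue(task.node)
#         # ... proceed to set and commit the BIOS settings ...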
""" DRAC management interface """ from oslo_log import log as logging from oslo_utils import importutils from ironic.common import boot_devices from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.conductor import task_manager from ironic.drivers import base from ironic.drivers.modules.drac import common as drac_common from ironic.drivers.modules.drac import job as drac_job drac_exceptions = importutils.try_import('dracclient.exceptions') LOG = logging.getLogger(__name__) _BOOT_DEVICES_MAP = { boot_devices.DISK: 'HardDisk', boot_devices.PXE: 'NIC', boot_devices.CDROM: 'Optical', } # BootMode constants PERSISTENT_BOOT_MODE = 'IPL' NON_PERSISTENT_BOOT_MODE = 'OneTime' def _get_boot_device(node, drac_boot_devices=None): client = drac_common.get_drac_client(node) try: boot_modes = client.list_boot_modes() next_boot_modes = [mode.id for mode in boot_modes if mode.is_next] if NON_PERSISTENT_BOOT_MODE in next_boot_modes: next_boot_mode = NON_PERSISTENT_BOOT_MODE else: next_boot_mode = next_boot_modes[0] if drac_boot_devices is None: drac_boot_devices = client.list_boot_devices() drac_boot_device = drac_boot_devices[next_boot_mode][0] boot_device = next(key for (key, value) in _BOOT_DEVICES_MAP.items() if value in drac_boot_device.id) return {'boot_device': boot_device, 'persistent': next_boot_mode == PERSISTENT_BOOT_MODE} except (drac_exceptions.BaseClientException, IndexError) as exc: LOG.error(_LE('DRAC driver failed to get next boot mode for ' 'node %(node_uuid)s. Reason: %(error)s.'), {'node_uuid': node.uuid, 'error': exc}) raise exception.DracOperationError(error=exc) def set_boot_device(node, device, persistent=False): """Set the boot device for a node. Set the boot device to use on next boot of the node. :param node: an ironic node object. :param device: the boot device, one of :mod:`ironic.common.boot_devices`. :param persistent: Boolean value. True if the boot device will persist to all future boots, False if not. Default: False. :raises: DracOperationError on an error from python-dracclient. """ drac_job.validate_job_queue(node) client = drac_common.get_drac_client(node) try: drac_boot_devices = client.list_boot_devices() current_boot_device = _get_boot_device(node, drac_boot_devices) # If we are already booting from the right device, do nothing. if current_boot_device == {'boot_device': device, 'persistent': persistent}: LOG.debug('DRAC already set to boot from %s', device) return drac_boot_device = next(drac_device.id for drac_device in drac_boot_devices[PERSISTENT_BOOT_MODE] if _BOOT_DEVICES_MAP[device] in drac_device.id) if persistent: boot_list = PERSISTENT_BOOT_MODE else: boot_list = NON_PERSISTENT_BOOT_MODE client.change_boot_device_order(boot_list, drac_boot_device) client.commit_pending_bios_changes() except drac_exceptions.BaseClientException as exc: LOG.error(_LE('DRAC driver failed to change boot device order for ' 'node %(node_uuid)s. Reason: %(error)s.'), {'node_uuid': node.uuid, 'error': exc}) raise exception.DracOperationError(error=exc) class DracManagement(base.ManagementInterface): def get_properties(self): """Return the properties of the interface.""" return drac_common.COMMON_PROPERTIES def validate(self, task): """Validate the driver-specific info supplied. This method validates whether the 'driver_info' property of the supplied node contains the required information for this driver to manage the node. :param task: a TaskManager instance containing the node to act on. 
:raises: InvalidParameterValue if required driver_info attribute is missing or invalid on the node. """ return drac_common.parse_driver_info(task.node) def get_supported_boot_devices(self, task): """Get a list of the supported boot devices. :param task: a TaskManager instance containing the node to act on. :returns: A list with the supported boot devices defined in :mod:`ironic.common.boot_devices`. """ return list(_BOOT_DEVICES_MAP.keys()) def get_boot_device(self, task): """Get the current boot device for a node. Returns the current boot device of the node. :param task: a TaskManager instance containing the node to act on. :raises: DracOperationError on an error from python-dracclient. :returns: a dictionary containing: :boot_device: the boot device, one of :mod:`ironic.common.boot_devices` or None if it is unknown. :persistent: whether the boot device will persist to all future boots or not, None if it is unknown. """ node = task.node boot_device = node.driver_internal_info.get('drac_boot_device') if boot_device is not None: return boot_device return _get_boot_device(node) @task_manager.require_exclusive_lock def set_boot_device(self, task, device, persistent=False): """Set the boot device for a node. Set the boot device to use on next reboot of the node. :param task: a TaskManager instance containing the node to act on. :param device: the boot device, one of :mod:`ironic.common.boot_devices`. :param persistent: Boolean value. True if the boot device will persist to all future boots, False if not. Default: False. :raises: InvalidParameterValue if an invalid boot device is specified. """ node = task.node if device not in _BOOT_DEVICES_MAP: raise exception.InvalidParameterValue( _("set_boot_device called with invalid device '%(device)s' " "for node %(node_id)s.") % {'device': device, 'node_id': node.uuid}) # NOTE(ifarkas): DRAC interface doesn't allow changing the boot device # multiple times in a row without a reboot. This is # because a change need to be committed via a # configuration job, and further configuration jobs # cannot be created until the previous one is processed # at the next boot. As a workaround, saving it to # driver_internal_info and committing the change during # power state change. driver_internal_info = node.driver_internal_info driver_internal_info['drac_boot_device'] = {'boot_device': device, 'persistent': persistent} node.driver_internal_info = driver_internal_info node.save() def get_sensors_data(self, task): """Get sensors data. :param task: a TaskManager instance. :raises: FailedToGetSensorData when getting the sensor data fails. :raises: FailedToParseSensorData when parsing sensor data fails. :returns: returns a consistent format dict of sensor data grouped by sensor type, which can be processed by Ceilometer. """ raise NotImplementedError() ironic-5.1.0/ironic/drivers/modules/__init__.py0000664000567000056710000000000012674513466022667 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/modules/fake.py0000664000567000056710000001406212674513466022053 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Fake driver interfaces used in testing. This is also an example of some kinds of things which can be done within drivers. For instance, the MultipleVendorInterface class demonstrates how to load more than one interface and wrap them in some logic to route incoming vendor_passthru requests appropriately. This can be useful eg. when mixing functionality between a power interface and a deploy interface, when both rely on separate vendor_passthru methods. """ from ironic.common import boot_devices from ironic.common import exception from ironic.common.i18n import _ from ironic.common import states from ironic.drivers import base class FakePower(base.PowerInterface): """Example implementation of a simple power interface.""" def get_properties(self): return {} def validate(self, task): pass def get_power_state(self, task): return task.node.power_state def set_power_state(self, task, power_state): if power_state not in [states.POWER_ON, states.POWER_OFF]: raise exception.InvalidParameterValue( _("set_power_state called with an invalid power" "state: %s.") % power_state) task.node.power_state = power_state def reboot(self, task): pass class FakeBoot(base.BootInterface): """Example implementation of a simple boot interface.""" def get_properties(self): return {} def validate(self, task): pass def prepare_ramdisk(self, task): pass def clean_up_ramdisk(self, task): pass def prepare_instance(self, task): pass def clean_up_instance(self, task): pass class FakeDeploy(base.DeployInterface): """Class for a fake deployment driver. Example imlementation of a deploy interface that uses a separate power interface. """ def get_properties(self): return {} def validate(self, task): pass def deploy(self, task): return states.DEPLOYDONE def tear_down(self, task): return states.DELETED def prepare(self, task): pass def clean_up(self, task): pass def take_over(self, task): pass class FakeVendorA(base.VendorInterface): """Example implementation of a vendor passthru interface.""" def get_properties(self): return {'A1': 'A1 description. Required.', 'A2': 'A2 description. Optional.'} def validate(self, task, method, **kwargs): if method == 'first_method': bar = kwargs.get('bar') if not bar: raise exception.MissingParameterValue(_( "Parameter 'bar' not passed to method 'first_method'.")) @base.passthru(['POST'], description=_("Test if the value of bar is baz")) def first_method(self, task, http_method, bar): return True if bar == 'baz' else False class FakeVendorB(base.VendorInterface): """Example implementation of a secondary vendor passthru.""" def get_properties(self): return {'B1': 'B1 description. Required.', 'B2': 'B2 description. 
Required.'} def validate(self, task, method, **kwargs): if method in ('second_method', 'third_method_sync'): bar = kwargs.get('bar') if not bar: raise exception.MissingParameterValue(_( "Parameter 'bar' not passed to method '%s'.") % method) @base.passthru(['POST'], description=_("Test if the value of bar is kazoo")) def second_method(self, task, http_method, bar): return True if bar == 'kazoo' else False @base.passthru(['POST'], async=False, description=_("Test if the value of bar is meow")) def third_method_sync(self, task, http_method, bar): return True if bar == 'meow' else False class FakeConsole(base.ConsoleInterface): """Example implementation of a simple console interface.""" def get_properties(self): return {} def validate(self, task): pass def start_console(self, task): pass def stop_console(self, task): pass def get_console(self, task): return {} class FakeManagement(base.ManagementInterface): """Example implementation of a simple management interface.""" def get_properties(self): return {} def validate(self, task): pass def get_supported_boot_devices(self, task): return [boot_devices.PXE] def set_boot_device(self, task, device, persistent=False): if device not in self.get_supported_boot_devices(task): raise exception.InvalidParameterValue(_( "Invalid boot device %s specified.") % device) def get_boot_device(self, task): return {'boot_device': boot_devices.PXE, 'persistent': False} def get_sensors_data(self, task): return {} class FakeInspect(base.InspectInterface): """Example implementation of a simple inspect interface.""" def get_properties(self): return {} def validate(self, task): pass def inspect_hardware(self, task): return states.MANAGEABLE class FakeRAID(base.RAIDInterface): """Example implementation of simple RAIDInterface.""" def get_properties(self): return {} def create_configuration(self, task, create_root_volume=True, create_nonroot_volumes=True): pass def delete_configuration(self, task): pass ironic-5.1.0/ironic/drivers/modules/virtualbox.py0000664000567000056710000003536112674513466023351 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
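# Illustrative sketch, an assumption rather than shipped code: exercising
# the fake interfaces defined above from a unit test, with 'task' standing
# in for a test double whose node attribute behaves like an ironic node.
#
#     from ironic.common import states
#     from ironic.drivers.modules import fake
#
#     power = fake.FakePower()
#     power.set_power_state(task, states.POWER_ON)  # stores the state
#     assert power.get_power_state(task) == states.POWER_ON
#
#     vendor = fake.FakeVendorA()
#     assert vendor.first_method(task, 'POST', bar='baz') is True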
""" VirtualBox Driver Modules """ from oslo_config import cfg from oslo_log import log as logging from oslo_utils import importutils from ironic.common import boot_devices from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common import states from ironic.common import utils from ironic.conductor import task_manager from ironic.drivers import base pyremotevbox = importutils.try_import('pyremotevbox') if pyremotevbox: from pyremotevbox import exception as virtualbox_exc from pyremotevbox import vbox as virtualbox IRONIC_TO_VIRTUALBOX_DEVICE_MAPPING = { boot_devices.PXE: 'Network', boot_devices.DISK: 'HardDisk', boot_devices.CDROM: 'DVD', } VIRTUALBOX_TO_IRONIC_DEVICE_MAPPING = { v: k for k, v in IRONIC_TO_VIRTUALBOX_DEVICE_MAPPING.items()} VIRTUALBOX_TO_IRONIC_POWER_MAPPING = { 'PoweredOff': states.POWER_OFF, 'Running': states.POWER_ON, 'Error': states.ERROR } opts = [ cfg.PortOpt('port', default=18083, help=_('Port on which VirtualBox web service is listening.')), ] CONF = cfg.CONF CONF.register_opts(opts, group='virtualbox') LOG = logging.getLogger(__name__) REQUIRED_PROPERTIES = { 'virtualbox_vmname': _("Name of the VM in VirtualBox. Required."), 'virtualbox_host': _("IP address or hostname of the VirtualBox host. " "Required.") } OPTIONAL_PROPERTIES = { 'virtualbox_username': _("Username for the VirtualBox host. " "Default value is ''. Optional."), 'virtualbox_password': _("Password for 'virtualbox_username'. " "Default value is ''. Optional."), 'virtualbox_port': _("Port on which VirtualBox web service is listening. " "Optional."), } COMMON_PROPERTIES = REQUIRED_PROPERTIES.copy() COMMON_PROPERTIES.update(OPTIONAL_PROPERTIES) def _strip_virtualbox_from_param_name(param_name): if param_name.startswith('virtualbox_'): return param_name[11:] else: return param_name def _parse_driver_info(node): """Gets the driver specific node driver info. This method validates whether the 'driver_info' property of the supplied node contains the required information for this driver. :param node: an Ironic Node object. :returns: a dict containing information from driver_info (or where applicable, config values). :raises: MissingParameterValue, if some required parameter(s) are missing in the node's driver_info. :raises: InvalidParameterValue, if some parameter(s) have invalid value(s) in the node's driver_info. """ info = node.driver_info d_info = {} missing_params = [] for param in REQUIRED_PROPERTIES: try: d_info_param_name = _strip_virtualbox_from_param_name(param) d_info[d_info_param_name] = info[param] except KeyError: missing_params.append(param) if missing_params: msg = (_("The following parameters are missing in driver_info: %s") % ', '.join(missing_params)) raise exception.MissingParameterValue(msg) for param in OPTIONAL_PROPERTIES: if param in info: d_info_param_name = _strip_virtualbox_from_param_name(param) d_info[d_info_param_name] = info[param] port = d_info.get('port', CONF.virtualbox.port) d_info['port'] = utils.validate_network_port(port, 'virtualbox_port') return d_info def _run_virtualbox_method(node, ironic_method, vm_object_method, *call_args, **call_kwargs): """Runs a method of pyremotevbox.vbox.VirtualMachine This runs a method from pyremotevbox.vbox.VirtualMachine. The VirtualMachine method to be invoked and the argument(s) to be passed to it are to be provided. :param node: an Ironic Node object. :param ironic_method: the Ironic method which called '_run_virtualbox_method'. This is used for logging only. 
:param vm_object_method: The method on the VirtualMachine object to be called. :param call_args: The args to be passed to 'vm_object_method'. :param call_kwargs: The kwargs to be passed to the 'vm_object_method'. :returns: The value returned by 'vm_object_method' :raises: VirtualBoxOperationFailed, if execution of 'vm_object_method' failed. :raises: InvalidParameterValue, - if 'vm_object_method' is not a valid 'VirtualMachine' method. - if some parameter(s) have invalid value(s) in the node's driver_info. :raises: MissingParameterValue, if some required parameter(s) are missing in the node's driver_info. :raises: pyremotevbox.exception.VmInWrongPowerState, if operation cannot be performed when vm is in the current power state. """ driver_info = _parse_driver_info(node) try: host = virtualbox.VirtualBoxHost(**driver_info) vm_object = host.find_vm(driver_info['vmname']) except virtualbox_exc.PyRemoteVBoxException as exc: LOG.error(_LE("Failed while creating a VirtualMachine object for " "node %(node_id)s. Error: %(error)s."), {'node_id': node.uuid, 'error': exc}) raise exception.VirtualBoxOperationFailed(operation=vm_object_method, error=exc) try: func = getattr(vm_object, vm_object_method) except AttributeError: error_msg = _("Invalid VirtualMachine method '%s' passed " "to '_run_virtualbox_method'.") raise exception.InvalidParameterValue(error_msg % vm_object_method) try: return func(*call_args, **call_kwargs) except virtualbox_exc.PyRemoteVBoxException as exc: error_msg = _LE("'%(ironic_method)s' failed for node %(node_id)s with " "error: %(error)s.") LOG.error(error_msg, {'ironic_method': ironic_method, 'node_id': node.uuid, 'error': exc}) raise exception.VirtualBoxOperationFailed(operation=vm_object_method, error=exc) class VirtualBoxPower(base.PowerInterface): def get_properties(self): return COMMON_PROPERTIES def validate(self, task): """Check if node.driver_info contains the required credentials. :param task: a TaskManager instance. :raises: MissingParameterValue, if some required parameter(s) are missing in the node's driver_info. :raises: InvalidParameterValue, if some parameter(s) have invalid value(s) in the node's driver_info. """ _parse_driver_info(task.node) def get_power_state(self, task): """Gets the current power state. :param task: a TaskManager instance. :returns: one of :mod:`ironic.common.states` :raises: MissingParameterValue, if some required parameter(s) are missing in the node's driver_info. :raises: InvalidParameterValue, if some parameter(s) have invalid value(s) in the node's driver_info. :raises: VirtualBoxOperationFailed, if error encountered from VirtualBox operation. """ power_status = _run_virtualbox_method(task.node, 'get_power_state', 'get_power_status') try: return VIRTUALBOX_TO_IRONIC_POWER_MAPPING[power_status] except KeyError: msg = _LE("VirtualBox returned unknown state '%(state)s' for " "node %(node)s") LOG.error(msg, {'state': power_status, 'node': task.node.uuid}) return states.ERROR @task_manager.require_exclusive_lock def set_power_state(self, task, target_state): """Turn the current power state on or off. :param task: a TaskManager instance. :param target_state: The desired power state POWER_ON,POWER_OFF or REBOOT from :mod:`ironic.common.states`. :raises: MissingParameterValue, if some required parameter(s) are missing in the node's driver_info. :raises: InvalidParameterValue, if some parameter(s) have invalid value(s) in the node's driver_info OR if an invalid power state was specified. 
:raises: VirtualBoxOperationFailed, if error encountered from VirtualBox operation. """ if target_state == states.POWER_OFF: _run_virtualbox_method(task.node, 'set_power_state', 'stop') elif target_state == states.POWER_ON: _run_virtualbox_method(task.node, 'set_power_state', 'start') elif target_state == states.REBOOT: self.reboot(task) else: msg = _("'set_power_state' called with invalid power " "state '%s'") % target_state raise exception.InvalidParameterValue(msg) @task_manager.require_exclusive_lock def reboot(self, task): """Reboot the node. :param task: a TaskManager instance. :raises: MissingParameterValue, if some required parameter(s) are missing in the node's driver_info. :raises: InvalidParameterValue, if some parameter(s) have invalid value(s) in the node's driver_info. :raises: VirtualBoxOperationFailed, if error encountered from VirtualBox operation. """ _run_virtualbox_method(task.node, 'reboot', 'stop') _run_virtualbox_method(task.node, 'reboot', 'start') class VirtualBoxManagement(base.ManagementInterface): def get_properties(self): return COMMON_PROPERTIES def validate(self, task): """Check that 'driver_info' contains required credentials. Validates whether the 'driver_info' property of the supplied task's node contains the required credentials information. :param task: a task from TaskManager. :raises: MissingParameterValue, if some required parameter(s) are missing in the node's driver_info. :raises: InvalidParameterValue, if some parameter(s) have invalid value(s) in the node's driver_info. """ _parse_driver_info(task.node) def get_supported_boot_devices(self, task): """Get a list of the supported boot devices. :param task: a task from TaskManager. :returns: A list with the supported boot devices defined in :mod:`ironic.common.boot_devices`. """ return list(IRONIC_TO_VIRTUALBOX_DEVICE_MAPPING.keys()) def get_boot_device(self, task): """Get the current boot device for a node. :param task: a task from TaskManager. :returns: a dictionary containing: 'boot_device': one of the ironic.common.boot_devices or None 'persistent': True if boot device is persistent, False otherwise :raises: MissingParameterValue, if some required parameter(s) are missing in the node's driver_info. :raises: InvalidParameterValue, if some parameter(s) have invalid value(s) in the node's driver_info. :raises: VirtualBoxOperationFailed, if error encountered from VirtualBox operation. """ boot_dev = _run_virtualbox_method(task.node, 'get_boot_device', 'get_boot_device') persistent = True ironic_boot_dev = VIRTUALBOX_TO_IRONIC_DEVICE_MAPPING.get(boot_dev, None) if not ironic_boot_dev: persistent = None msg = _LE("VirtualBox returned unknown boot device '%(device)s' " "for node %(node)s") LOG.error(msg, {'device': boot_dev, 'node': task.node.uuid}) return {'boot_device': ironic_boot_dev, 'persistent': persistent} @task_manager.require_exclusive_lock def set_boot_device(self, task, device, persistent=False): """Set the boot device for a node. :param task: a task from TaskManager. :param device: ironic.common.boot_devices :param persistent: This argument is ignored as VirtualBox support only persistent boot devices. :raises: MissingParameterValue, if some required parameter(s) are missing in the node's driver_info. :raises: InvalidParameterValue, if some parameter(s) have invalid value(s) in the node's driver_info. :raises: VirtualBoxOperationFailed, if error encountered from VirtualBox operation. """ # NOTE(rameshg87): VirtualBox has only persistent boot devices. 
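# Illustrative mapping performed by the lookup below, using the
        # module-level table defined above:
        #     IRONIC_TO_VIRTUALBOX_DEVICE_MAPPING[boot_devices.PXE] -> 'Network'
        #     IRONIC_TO_VIRTUALBOX_DEVICE_MAPPING[boot_devices.DISK] -> 'HardDisk'
        # A device missing from the table raises InvalidParameterValue below.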
try: boot_dev = IRONIC_TO_VIRTUALBOX_DEVICE_MAPPING[device] except KeyError: raise exception.InvalidParameterValue( _("Invalid boot device %s specified.") % device) try: _run_virtualbox_method(task.node, 'set_boot_device', 'set_boot_device', boot_dev) except virtualbox_exc.VmInWrongPowerState as exc: # NOTE(rameshg87): We cannot change the boot device when the vm # is powered on. This is a VirtualBox limitation. We just log # the error silently and return because throwing error will cause # deploys to fail (pxe and agent deploy mechanisms change the boot # device after completing the deployment, when node is powered on). # Since this is driver that is meant only for developers, this # should be okay. Developers will need to set the boot device # manually after powering off the vm when deployment is complete. # This will be documented. LOG.error(_LE("'set_boot_device' failed for node %(node_id)s " "with error: %(error)s"), {'node_id': task.node.uuid, 'error': exc}) def get_sensors_data(self, task): """Get sensors data. :param task: a TaskManager instance. :raises: FailedToGetSensorData when getting the sensor data fails. :raises: FailedToParseSensorData when parsing sensor data fails. :returns: returns a consistent format dict of sensor data grouped by sensor type, which can be processed by Ceilometer. """ raise NotImplementedError() ironic-5.1.0/ironic/drivers/modules/ipmitool.py0000664000567000056710000013472112674513466023006 0ustar jenkinsjenkins00000000000000# coding=utf-8 # Copyright 2012 Hewlett-Packard Development Company, L.P. # Copyright (c) 2012 NTT DOCOMO, INC. # Copyright 2014 International Business Machines Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ IPMI power manager driver. Uses the 'ipmitool' command (http://ipmitool.sourceforge.net/) to remotely manage hardware. This includes setting the boot device, getting a serial-over-LAN console, and controlling the power state of the machine. NOTE THAT CERTAIN DISTROS MAY INSTALL openipmi BY DEFAULT, INSTEAD OF ipmitool, WHICH PROVIDES DIFFERENT COMMAND-LINE OPTIONS AND *IS NOT SUPPORTED* BY THIS DRIVER. 
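For illustration, the commands this driver composes take the form (the
host, credentials, retry values and temporary password file name below
are hypothetical):

    ipmitool -I lanplus -H 192.0.2.10 -L ADMINISTRATOR -U admin \
        -R 12 -N 5 -f /tmp/tmpXk3a2v power status

The password is always passed via a temporary file (-f), never on the
command line.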
""" import contextlib import os import re import subprocess import tempfile import time from ironic_lib import utils as ironic_utils from oslo_concurrency import processutils from oslo_config import cfg from oslo_log import log as logging from oslo_service import loopingcall from oslo_utils import excutils import six from ironic.common import boot_devices from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LI from ironic.common.i18n import _LW from ironic.common import states from ironic.common import utils from ironic.conductor import task_manager from ironic.drivers import base from ironic.drivers.modules import console_utils from ironic.drivers import utils as driver_utils CONF = cfg.CONF CONF.import_opt('retry_timeout', 'ironic.drivers.modules.ipminative', group='ipmi') CONF.import_opt('min_command_interval', 'ironic.drivers.modules.ipminative', group='ipmi') LOG = logging.getLogger(__name__) VALID_PRIV_LEVELS = ['ADMINISTRATOR', 'CALLBACK', 'OPERATOR', 'USER'] VALID_PROTO_VERSIONS = ('2.0', '1.5') REQUIRED_PROPERTIES = { 'ipmi_address': _("IP address or hostname of the node. Required.") } OPTIONAL_PROPERTIES = { 'ipmi_password': _("password. Optional."), 'ipmi_port': _("remote IPMI RMCP port. Optional."), 'ipmi_priv_level': _("privilege level; default is ADMINISTRATOR. One of " "%s. Optional.") % ', '.join(VALID_PRIV_LEVELS), 'ipmi_username': _("username; default is NULL user. Optional."), 'ipmi_bridging': _("bridging_type; default is \"no\". One of \"single\", " "\"dual\", \"no\". Optional."), 'ipmi_transit_channel': _("transit channel for bridged request. Required " "only if ipmi_bridging is set to \"dual\"."), 'ipmi_transit_address': _("transit address for bridged request. Required " "only if ipmi_bridging is set to \"dual\"."), 'ipmi_target_channel': _("destination channel for bridged request. " "Required only if ipmi_bridging is set to " "\"single\" or \"dual\"."), 'ipmi_target_address': _("destination address for bridged request. " "Required only if ipmi_bridging is set " "to \"single\" or \"dual\"."), 'ipmi_local_address': _("local IPMB address for bridged requests. " "Used only if ipmi_bridging is set " "to \"single\" or \"dual\". Optional."), 'ipmi_protocol_version': _('the version of the IPMI protocol; default ' 'is "2.0". One of "1.5", "2.0". Optional.'), 'ipmi_force_boot_device': _("Whether Ironic should specify the boot " "device to the BMC each time the server " "is turned on, eg. because the BMC is not " "capable of remembering the selected boot " "device across power cycles; default value " "is False. Optional.") } COMMON_PROPERTIES = REQUIRED_PROPERTIES.copy() COMMON_PROPERTIES.update(OPTIONAL_PROPERTIES) CONSOLE_PROPERTIES = { 'ipmi_terminal_port': _("node's UDP port to connect to. Only required for " "console access.") } BRIDGING_OPTIONS = [('local_address', '-m'), ('transit_channel', '-B'), ('transit_address', '-T'), ('target_channel', '-b'), ('target_address', '-t')] LAST_CMD_TIME = {} TIMING_SUPPORT = None SINGLE_BRIDGE_SUPPORT = None DUAL_BRIDGE_SUPPORT = None TMP_DIR_CHECKED = None ipmitool_command_options = { 'timing': ['ipmitool', '-N', '0', '-R', '0', '-h'], 'single_bridge': ['ipmitool', '-m', '0', '-b', '0', '-t', '0', '-h'], 'dual_bridge': ['ipmitool', '-m', '0', '-b', '0', '-t', '0', '-B', '0', '-T', '0', '-h']} # Note(TheJulia): This string is hardcoded in ipmitool's lanplus driver # and is substituted in return for the error code received from the IPMI # controller. 
As of 1.8.15, no internationalization support appears to # be in ipmitool which means the string should always be returned in this # form regardless of locale. IPMITOOL_RETRYABLE_FAILURES = ['insufficient resources for session'] def _check_option_support(options): """Checks if the specific ipmitool options are supported on host. This method updates the module-level variables indicating whether an option is supported so that it is accessible by any driver interface class in this module. It is intended to be called from the __init__ method of such classes only. :param options: list of ipmitool options to be checked :raises: OSError """ for opt in options: if _is_option_supported(opt) is None: try: cmd = ipmitool_command_options[opt] # NOTE(cinerama): use subprocess.check_call to # check options & suppress ipmitool output to # avoid alarming people with open(os.devnull, 'wb') as nullfile: subprocess.check_call(cmd, stdout=nullfile, stderr=nullfile) except subprocess.CalledProcessError: LOG.info(_LI("Option %(opt)s is not supported by ipmitool"), {'opt': opt}) _is_option_supported(opt, False) else: LOG.info(_LI("Option %(opt)s is supported by ipmitool"), {'opt': opt}) _is_option_supported(opt, True) def _is_option_supported(option, is_supported=None): """Indicates whether the particular ipmitool option is supported. :param option: specific ipmitool option :param is_supported: Optional Boolean. when specified, this value is assigned to the module-level variable indicating whether the option is supported. Used only if a value is not already assigned. :returns: True, indicates the option is supported :returns: False, indicates the option is not supported :returns: None, indicates that it is not aware whether the option is supported """ global SINGLE_BRIDGE_SUPPORT global DUAL_BRIDGE_SUPPORT global TIMING_SUPPORT if option == 'single_bridge': if (SINGLE_BRIDGE_SUPPORT is None) and (is_supported is not None): SINGLE_BRIDGE_SUPPORT = is_supported return SINGLE_BRIDGE_SUPPORT elif option == 'dual_bridge': if (DUAL_BRIDGE_SUPPORT is None) and (is_supported is not None): DUAL_BRIDGE_SUPPORT = is_supported return DUAL_BRIDGE_SUPPORT elif option == 'timing': if (TIMING_SUPPORT is None) and (is_supported is not None): TIMING_SUPPORT = is_supported return TIMING_SUPPORT def _console_pwfile_path(uuid): """Return the file path for storing the ipmi password for a console.""" file_name = "%(uuid)s.pw" % {'uuid': uuid} return os.path.join(CONF.tempdir, file_name) @contextlib.contextmanager def _make_password_file(password): """Makes a temporary file that contains the password. :param password: the password :returns: the absolute pathname of the temporary file :raises: PasswordFileFailedToCreate from creating or writing to the temporary file """ f = None try: f = tempfile.NamedTemporaryFile(mode='w', dir=CONF.tempdir) f.write(str(password)) f.flush() except (IOError, OSError) as exc: if f is not None: f.close() raise exception.PasswordFileFailedToCreate(error=exc) except Exception: with excutils.save_and_reraise_exception(): if f is not None: f.close() try: # NOTE(jlvillal): This yield can not be in the try/except block above # because an exception by the caller of this function would then get # changed to a PasswordFileFailedToCreate exception which would mislead # about the problem and its cause. yield f.name finally: if f is not None: f.close() def _parse_driver_info(node): """Gets the parameters required for ipmitool to access the node. :param node: the Node of interest. :returns: dictionary of parameters. 
:raises: InvalidParameterValue when an invalid value is specified :raises: MissingParameterValue when a required ipmi parameter is missing. """ info = node.driver_info or {} bridging_types = ['single', 'dual'] missing_info = [key for key in REQUIRED_PROPERTIES if not info.get(key)] if missing_info: raise exception.MissingParameterValue(_( "Missing the following IPMI credentials in node's" " driver_info: %s.") % missing_info) address = info.get('ipmi_address') username = info.get('ipmi_username') password = six.text_type(info.get('ipmi_password', '')) dest_port = info.get('ipmi_port') port = info.get('ipmi_terminal_port') priv_level = info.get('ipmi_priv_level', 'ADMINISTRATOR') bridging_type = info.get('ipmi_bridging', 'no') local_address = info.get('ipmi_local_address') transit_channel = info.get('ipmi_transit_channel') transit_address = info.get('ipmi_transit_address') target_channel = info.get('ipmi_target_channel') target_address = info.get('ipmi_target_address') protocol_version = str(info.get('ipmi_protocol_version', '2.0')) force_boot_device = info.get('ipmi_force_boot_device', False) if not username: LOG.warning(_LW('ipmi_username is not defined or empty for node %s: ' 'NULL user will be utilized.') % node.uuid) if not password: LOG.warning(_LW('ipmi_password is not defined or empty for node %s: ' 'NULL password will be utilized.') % node.uuid) if protocol_version not in VALID_PROTO_VERSIONS: valid_versions = ', '.join(VALID_PROTO_VERSIONS) raise exception.InvalidParameterValue(_( "Invalid IPMI protocol version value %(version)s, the valid " "value can be one of %(valid_versions)s") % {'version': protocol_version, 'valid_versions': valid_versions}) if port is not None: port = utils.validate_network_port(port, 'ipmi_terminal_port') if dest_port is not None: dest_port = utils.validate_network_port(dest_port, 'ipmi_port') # check if ipmi_bridging has proper value if bridging_type == 'no': # if bridging is not selected, then set all bridging params to None (local_address, transit_channel, transit_address, target_channel, target_address) = (None,) * 5 elif bridging_type in bridging_types: # check if the particular bridging option is supported on host if not _is_option_supported('%s_bridge' % bridging_type): raise exception.InvalidParameterValue(_( "Value for ipmi_bridging is provided as %s, but IPMI " "bridging is not supported by the IPMI utility installed " "on host. 
Ensure ipmitool version is > 1.8.11" ) % bridging_type) # ensure that all the required parameters are provided params_undefined = [param for param, value in [ ("ipmi_target_channel", target_channel), ('ipmi_target_address', target_address)] if value is None] if bridging_type == 'dual': params_undefined2 = [param for param, value in [ ("ipmi_transit_channel", transit_channel), ('ipmi_transit_address', transit_address) ] if value is None] params_undefined.extend(params_undefined2) else: # if single bridging was selected, set dual bridge params to None transit_channel = transit_address = None # If the required parameters were not provided, # raise an exception if params_undefined: raise exception.MissingParameterValue(_( "%(param)s not provided") % {'param': params_undefined}) else: raise exception.InvalidParameterValue(_( "Invalid value for ipmi_bridging: %(bridging_type)s," " the valid value can be one of: %(bridging_types)s" ) % {'bridging_type': bridging_type, 'bridging_types': bridging_types + ['no']}) if priv_level not in VALID_PRIV_LEVELS: valid_priv_lvls = ', '.join(VALID_PRIV_LEVELS) raise exception.InvalidParameterValue(_( "Invalid privilege level value:%(priv_level)s, the valid value" " can be one of %(valid_levels)s") % {'priv_level': priv_level, 'valid_levels': valid_priv_lvls}) return { 'address': address, 'dest_port': dest_port, 'username': username, 'password': password, 'port': port, 'uuid': node.uuid, 'priv_level': priv_level, 'local_address': local_address, 'transit_channel': transit_channel, 'transit_address': transit_address, 'target_channel': target_channel, 'target_address': target_address, 'protocol_version': protocol_version, 'force_boot_device': force_boot_device, } def _exec_ipmitool(driver_info, command): """Execute the ipmitool command. :param driver_info: the ipmitool parameters for accessing a node. :param command: the ipmitool command to be executed. :returns: (stdout, stderr) from executing the command. :raises: PasswordFileFailedToCreate from creating or writing to the temporary file. :raises: processutils.ProcessExecutionError from executing the command. """ ipmi_version = ('lanplus' if driver_info['protocol_version'] == '2.0' else 'lan') args = ['ipmitool', '-I', ipmi_version, '-H', driver_info['address'], '-L', driver_info['priv_level'] ] if driver_info['dest_port']: args.append('-p') args.append(driver_info['dest_port']) if driver_info['username']: args.append('-U') args.append(driver_info['username']) for name, option in BRIDGING_OPTIONS: if driver_info[name] is not None: args.append(option) args.append(driver_info[name]) # specify retry timing more precisely, if supported num_tries = max( (CONF.ipmi.retry_timeout // CONF.ipmi.min_command_interval), 1) if _is_option_supported('timing'): args.append('-R') args.append(str(num_tries)) args.append('-N') args.append(str(CONF.ipmi.min_command_interval)) end_time = (time.time() + CONF.ipmi.retry_timeout) while True: num_tries = num_tries - 1 # NOTE(deva): ensure that no communications are sent to a BMC more # often than once every min_command_interval seconds. time_till_next_poll = CONF.ipmi.min_command_interval - ( time.time() - LAST_CMD_TIME.get(driver_info['address'], 0)) if time_till_next_poll > 0: time.sleep(time_till_next_poll) # Resetting the list that will be utilized so the password arguments # from any previous execution are preserved. 
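# Worked example of the retry budget computed above (the config values
        # are hypothetical, not asserted defaults):
        #     retry_timeout=60, min_command_interval=5
        #         -> num_tries = max(60 // 5, 1) = 12
        # Each iteration decrements num_tries, and LAST_CMD_TIME enforces
        # at least min_command_interval seconds between commands to the
        # same BMC address.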
cmd_args = args[:] # 'ipmitool' command will prompt password if there is no '-f' # option, we set it to '\0' to write a password file to support # empty password with _make_password_file(driver_info['password'] or '\0') as pw_file: cmd_args.append('-f') cmd_args.append(pw_file) cmd_args.extend(command.split(" ")) try: out, err = utils.execute(*cmd_args) return out, err except processutils.ProcessExecutionError as e: with excutils.save_and_reraise_exception() as ctxt: err_list = [x for x in IPMITOOL_RETRYABLE_FAILURES if x in six.text_type(e)] if ((time.time() > end_time) or (num_tries == 0) or not err_list): LOG.error(_LE('IPMI Error while attempting "%(cmd)s"' 'for node %(node)s. Error: %(error)s'), { 'node': driver_info['uuid'], 'cmd': e.cmd, 'error': e }) else: ctxt.reraise = False LOG.warning(_LW('IPMI Error encountered, retrying ' '"%(cmd)s" for node %(node)s. ' 'Error: %(error)s'), { 'node': driver_info['uuid'], 'cmd': e.cmd, 'error': e }) finally: LAST_CMD_TIME[driver_info['address']] = time.time() def _sleep_time(iter): """Return the time-to-sleep for the n'th iteration of a retry loop. This implementation increases exponentially. :param iter: iteration number :returns: number of seconds to sleep """ if iter <= 1: return 1 return iter ** 2 def _set_and_wait(target_state, driver_info): """Helper function for DynamicLoopingCall. This method changes the power state and polls the BMCuntil the desired power state is reached, or CONF.ipmi.retry_timeout would be exceeded by the next iteration. This method assumes the caller knows the current power state and does not check it prior to changing the power state. Most BMCs should be fine, but if a driver is concerned, the state should be checked prior to calling this method. :param target_state: desired power state :param driver_info: the ipmitool parameters for accessing a node. :returns: one of ironic.common.states """ if target_state == states.POWER_ON: state_name = "on" elif target_state == states.POWER_OFF: state_name = "off" def _wait(mutable): try: # Only issue power change command once if mutable['iter'] < 0: _exec_ipmitool(driver_info, "power %s" % state_name) else: mutable['power'] = _power_status(driver_info) except (exception.PasswordFileFailedToCreate, processutils.ProcessExecutionError, exception.IPMIFailure): # Log failures but keep trying LOG.warning(_LW("IPMI power %(state)s failed for node %(node)s."), {'state': state_name, 'node': driver_info['uuid']}) finally: mutable['iter'] += 1 if mutable['power'] == target_state: raise loopingcall.LoopingCallDone() sleep_time = _sleep_time(mutable['iter']) if (sleep_time + mutable['total_time']) > CONF.ipmi.retry_timeout: # Stop if the next loop would exceed maximum retry_timeout LOG.error(_LE('IPMI power %(state)s timed out after ' '%(tries)s retries on node %(node_id)s.'), {'state': state_name, 'tries': mutable['iter'], 'node_id': driver_info['uuid']}) mutable['power'] = states.ERROR raise loopingcall.LoopingCallDone() else: mutable['total_time'] += sleep_time return sleep_time # Use mutable objects so the looped method can change them. # Start 'iter' from -1 so that the first two checks are one second apart. status = {'power': None, 'iter': -1, 'total_time': 0} timer = loopingcall.DynamicLoopingCall(_wait, status) timer.start().wait() return status['power'] def _power_on(driver_info): """Turn the power ON for this node. :param driver_info: the ipmitool parameters for accessing a node. :returns: one of ironic.common.states POWER_ON or ERROR. 
:raises: IPMIFailure on an error from ipmitool (from _power_status call). """ return _set_and_wait(states.POWER_ON, driver_info) def _power_off(driver_info): """Turn the power OFF for this node. :param driver_info: the ipmitool parameters for accessing a node. :returns: one of ironic.common.states POWER_OFF or ERROR. :raises: IPMIFailure on an error from ipmitool (from _power_status call). """ return _set_and_wait(states.POWER_OFF, driver_info) def _power_status(driver_info): """Get the power status for a node. :param driver_info: the ipmitool access parameters for a node. :returns: one of ironic.common.states POWER_OFF, POWER_ON or ERROR. :raises: IPMIFailure on an error from ipmitool. """ cmd = "power status" try: out_err = _exec_ipmitool(driver_info, cmd) except (exception.PasswordFileFailedToCreate, processutils.ProcessExecutionError) as e: LOG.warning(_LW("IPMI power status failed for node %(node_id)s with " "error: %(error)s."), {'node_id': driver_info['uuid'], 'error': e}) raise exception.IPMIFailure(cmd=cmd) if out_err[0] == "Chassis Power is on\n": return states.POWER_ON elif out_err[0] == "Chassis Power is off\n": return states.POWER_OFF else: return states.ERROR def _process_sensor(sensor_data): sensor_data_fields = sensor_data.split('\n') sensor_data_dict = {} for field in sensor_data_fields: if not field: continue kv_value = field.split(':') if len(kv_value) != 2: continue sensor_data_dict[kv_value[0].strip()] = kv_value[1].strip() return sensor_data_dict def _get_sensor_type(node, sensor_data_dict): # There are only three sensor type name IDs: 'Sensor Type (Analog)', # 'Sensor Type (Discrete)' and 'Sensor Type (Threshold)' for key in ('Sensor Type (Analog)', 'Sensor Type (Discrete)', 'Sensor Type (Threshold)'): try: return sensor_data_dict[key].split(' ', 1)[0] except KeyError: continue raise exception.FailedToParseSensorData( node=node.uuid, error=(_("parse ipmi sensor data failed, unknown sensor type" " data: %(sensors_data)s") % {'sensors_data': sensor_data_dict})) def _parse_ipmi_sensors_data(node, sensors_data): """Parse the IPMI sensors data and format it into a dict grouped by type. We run the 'ipmitool' command with the 'sdr -v' option, which returns sensor details in a human-readable format. We need to format them into JSON-string, dict-based data for the Ceilometer collector, which can be sent out as a payload via the notification bus and consumed by the Ceilometer collector. :param node: the ironic node the sensor data belongs to. :param sensors_data: the sensor data returned by ipmitool command. :returns: the sensor data with JSON format, grouped by sensor type. :raises: FailedToParseSensorData when error encountered during parsing. """ sensors_data_dict = {} if not sensors_data: return sensors_data_dict sensors_data_array = sensors_data.split('\n\n') for sensor_data in sensors_data_array: sensor_data_dict = _process_sensor(sensor_data) if not sensor_data_dict: continue sensor_type = _get_sensor_type(node, sensor_data_dict) # ignore the sensors which have no current 'Sensor Reading' data if 'Sensor Reading' in sensor_data_dict: sensors_data_dict.setdefault( sensor_type, {})[sensor_data_dict['Sensor ID']] = sensor_data_dict # got nothing, no valid sensor data if not sensors_data_dict: raise exception.FailedToParseSensorData( node=node.uuid, error=(_("parse ipmi sensor data failed, get nothing with input" " data: %(sensors_data)s") % {'sensors_data': sensors_data})) return sensors_data_dict @task_manager.require_exclusive_lock def send_raw(task, raw_bytes): """Send raw bytes to the BMC. Bytes should be a string of bytes.
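For example, the raw byte sequence this module itself sends to disable
the boot device timeout could be issued directly as::

    send_raw(task, '0x00 0x08 0x03 0x08')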
:param task: a TaskManager instance. :param raw_bytes: a string of raw bytes to send, e.g. '0x00 0x01' :returns: a tuple with stdout and stderr. :raises: IPMIFailure on an error from ipmitool. :raises: MissingParameterValue if a required parameter is missing. :raises: InvalidParameterValue when an invalid value is specified. """ node_uuid = task.node.uuid LOG.debug('Sending node %(node)s raw bytes %(bytes)s', {'bytes': raw_bytes, 'node': node_uuid}) driver_info = _parse_driver_info(task.node) cmd = 'raw %s' % raw_bytes try: out, err = _exec_ipmitool(driver_info, cmd) LOG.debug('send raw bytes returned stdout: %(stdout)s, stderr:' ' %(stderr)s', {'stdout': out, 'stderr': err}) except (exception.PasswordFileFailedToCreate, processutils.ProcessExecutionError) as e: LOG.exception(_LE('IPMI "raw bytes" failed for node %(node_id)s ' 'with error: %(error)s.'), {'node_id': node_uuid, 'error': e}) raise exception.IPMIFailure(cmd=cmd) return out, err def dump_sdr(task, file_path): """Dump SDR data to a file. :param task: a TaskManager instance. :param file_path: the path to SDR dump file. :raises: IPMIFailure on an error from ipmitool. :raises: MissingParameterValue if a required parameter is missing. :raises: InvalidParameterValue when an invalid value is specified. """ node_uuid = task.node.uuid LOG.debug('Dump SDR data for node %(node)s to file %(name)s', {'name': file_path, 'node': node_uuid}) driver_info = _parse_driver_info(task.node) cmd = 'sdr dump %s' % file_path try: out, err = _exec_ipmitool(driver_info, cmd) LOG.debug('dump SDR returned stdout: %(stdout)s, stderr:' ' %(stderr)s', {'stdout': out, 'stderr': err}) except (exception.PasswordFileFailedToCreate, processutils.ProcessExecutionError) as e: LOG.exception(_LE('IPMI "sdr dump" failed for node %(node_id)s ' 'with error: %(error)s.'), {'node_id': node_uuid, 'error': e}) raise exception.IPMIFailure(cmd=cmd) def _check_temp_dir(): """Check for Valid temp directory.""" global TMP_DIR_CHECKED # because a temporary file is used to pass the password to ipmitool, # we should check the directory if TMP_DIR_CHECKED is None: try: utils.check_dir() except (exception.PathNotFound, exception.DirectoryNotWritable, exception.InsufficientDiskSpace) as e: with excutils.save_and_reraise_exception(): TMP_DIR_CHECKED = False err_msg = (_("Ipmitool drivers need to be able to create " "temporary files to pass password to ipmitool. " "Encountered error: %s") % e) e.message = err_msg LOG.error(err_msg) else: TMP_DIR_CHECKED = True class IPMIPower(base.PowerInterface): def __init__(self): try: _check_option_support(['timing', 'single_bridge', 'dual_bridge']) except OSError: raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to locate usable ipmitool command in " "the system path when checking ipmitool version")) _check_temp_dir() def get_properties(self): return COMMON_PROPERTIES def validate(self, task): """Validate driver_info for ipmitool driver. Check that node['driver_info'] contains IPMI credentials. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue if required ipmi parameters are missing. :raises: MissingParameterValue if a required parameter is missing. """ _parse_driver_info(task.node) # NOTE(deva): don't actually touch the BMC in validate because it is # called too often, and BMCs are too fragile. # This is a temporary measure to mitigate problems while # 1314954 and 1314961 are resolved. def get_power_state(self, task): """Get the current power state of the task's node. 
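A sketch of the mapping performed by _power_status() above on the output
of 'ipmitool ... power status'::

    'Chassis Power is on\n'  -> states.POWER_ON
    'Chassis Power is off\n' -> states.POWER_OFF
    anything else            -> states.ERROR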
:param task: a TaskManager instance containing the node to act on. :returns: one of ironic.common.states POWER_OFF, POWER_ON or ERROR. :raises: InvalidParameterValue if required ipmi parameters are missing. :raises: MissingParameterValue if a required parameter is missing. :raises: IPMIFailure on an error from ipmitool (from _power_status call). """ driver_info = _parse_driver_info(task.node) return _power_status(driver_info) @task_manager.require_exclusive_lock def set_power_state(self, task, pstate): """Turn the power on or off. :param task: a TaskManager instance containing the node to act on. :param pstate: The desired power state, one of ironic.common.states POWER_ON, POWER_OFF. :raises: InvalidParameterValue if an invalid power state was specified. :raises: MissingParameterValue if required ipmi parameters are missing :raises: PowerStateFailure if the power couldn't be set to pstate. """ driver_info = _parse_driver_info(task.node) if pstate == states.POWER_ON: driver_utils.ensure_next_boot_device(task, driver_info) state = _power_on(driver_info) elif pstate == states.POWER_OFF: state = _power_off(driver_info) else: raise exception.InvalidParameterValue( _("set_power_state called " "with invalid power state %s.") % pstate) if state != pstate: raise exception.PowerStateFailure(pstate=pstate) @task_manager.require_exclusive_lock def reboot(self, task): """Cycles the power to the task's node. :param task: a TaskManager instance containing the node to act on. :raises: MissingParameterValue if required ipmi parameters are missing. :raises: InvalidParameterValue if an invalid power state was specified. :raises: PowerStateFailure if the final state of the node is not POWER_ON. """ driver_info = _parse_driver_info(task.node) _power_off(driver_info) driver_utils.ensure_next_boot_device(task, driver_info) state = _power_on(driver_info) if state != states.POWER_ON: raise exception.PowerStateFailure(pstate=states.POWER_ON) class IPMIManagement(base.ManagementInterface): def get_properties(self): return COMMON_PROPERTIES def __init__(self): try: _check_option_support(['timing', 'single_bridge', 'dual_bridge']) except OSError: raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to locate usable ipmitool command in " "the system path when checking ipmitool version")) _check_temp_dir() def validate(self, task): """Check that 'driver_info' contains IPMI credentials. Validates whether the 'driver_info' property of the supplied task's node contains the required credentials information. :param task: a task from TaskManager. :raises: InvalidParameterValue if required IPMI parameters are missing. :raises: MissingParameterValue if a required parameter is missing. """ _parse_driver_info(task.node) def get_supported_boot_devices(self, task): """Get a list of the supported boot devices. :param task: a task from TaskManager. :returns: A list with the supported boot devices defined in :mod:`ironic.common.boot_devices`. """ return [boot_devices.PXE, boot_devices.DISK, boot_devices.CDROM, boot_devices.BIOS, boot_devices.SAFE] @task_manager.require_exclusive_lock def set_boot_device(self, task, device, persistent=False): """Set the boot device for the task's node. Set the boot device to use on next reboot of the node. :param task: a task from TaskManager. :param device: the boot device, one of :mod:`ironic.common.boot_devices`. :param persistent: Boolean value. True if the boot device will persist to all future boots, False if not. Default: False. 
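An illustrative call (device value hypothetical), which results in
'chassis bootdev pxe options=persistent' being passed to ipmitool,
preceded by the raw timeout-disable bytes described below::

    task.driver.management.set_boot_device(task, boot_devices.PXE,
                                           persistent=True)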
:raises: InvalidParameterValue if an invalid boot device is specified :raises: MissingParameterValue if required ipmi parameters are missing. :raises: IPMIFailure on an error from ipmitool. """ if device not in self.get_supported_boot_devices(task): raise exception.InvalidParameterValue(_( "Invalid boot device %s specified.") % device) # NOTE(JayF): The IPMI spec indicates that unless you send these raw # bytes the boot device setting times out after 60s. Since it's # possible it could be >60s before a node is rebooted, we should # always send them. This mimics pyghmi's current behavior, and the # "option=timeout" setting on newer ipmitool binaries. timeout_disable = "0x00 0x08 0x03 0x08" send_raw(task, timeout_disable) if task.node.driver_info.get('ipmi_force_boot_device', False): driver_utils.force_persistent_boot(task, device, persistent) # Reset persistent to False, in case the BMC does not support # persistent booting or we do not have admin rights. persistent = False cmd = "chassis bootdev %s" % device if persistent: cmd = cmd + " options=persistent" driver_info = _parse_driver_info(task.node) try: out, err = _exec_ipmitool(driver_info, cmd) except (exception.PasswordFileFailedToCreate, processutils.ProcessExecutionError) as e: LOG.warning(_LW('IPMI set boot device failed for node %(node)s ' 'when executing "ipmitool %(cmd)s". ' 'Error: %(error)s'), {'node': driver_info['uuid'], 'cmd': cmd, 'error': e}) raise exception.IPMIFailure(cmd=cmd) def get_boot_device(self, task): """Get the current boot device for the task's node. Returns the current boot device of the node. :param task: a task from TaskManager. :raises: InvalidParameterValue if required IPMI parameters are missing. :raises: IPMIFailure on an error from ipmitool. :raises: MissingParameterValue if a required parameter is missing. :returns: a dictionary containing: :boot_device: the boot device, one of :mod:`ironic.common.boot_devices` or None if it is unknown. :persistent: Whether the boot device will persist to all future boots or not, None if it is unknown. """ driver_info = task.node.driver_info driver_internal_info = task.node.driver_internal_info if (driver_info.get('ipmi_force_boot_device', False) and driver_internal_info.get('persistent_boot_device') and driver_internal_info.get('is_next_boot_persistent', True)): return { 'boot_device': driver_internal_info['persistent_boot_device'], 'persistent': True } cmd = "chassis bootparam get 5" driver_info = _parse_driver_info(task.node) response = {'boot_device': None, 'persistent': None} try: out, err = _exec_ipmitool(driver_info, cmd) except (exception.PasswordFileFailedToCreate, processutils.ProcessExecutionError) as e: LOG.warning(_LW('IPMI get boot device failed for node %(node)s ' 'when executing "ipmitool %(cmd)s". ' 'Error: %(error)s'), {'node': driver_info['uuid'], 'cmd': cmd, 'error': e}) raise exception.IPMIFailure(cmd=cmd) re_obj = re.search('Boot Device Selector : (.+)?\n', out) if re_obj: boot_selector = re_obj.groups('')[0] if 'PXE' in boot_selector: response['boot_device'] = boot_devices.PXE elif 'Hard-Drive' in boot_selector: if 'Safe-Mode' in boot_selector: response['boot_device'] = boot_devices.SAFE else: response['boot_device'] = boot_devices.DISK elif 'BIOS' in boot_selector: response['boot_device'] = boot_devices.BIOS elif 'CD/DVD' in boot_selector: response['boot_device'] = boot_devices.CDROM response['persistent'] = 'Options apply to all future boots' in out return response def get_sensors_data(self, task): """Get sensors data. :param task: a TaskManager instance.
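The returned data is grouped by sensor type and keyed by 'Sensor ID'; an
abridged, hypothetical example::

    {'Temperature': {'CPU Temp (0x1)': {'Sensor Type (Analog)': '...',
                                        'Sensor Reading': '42 degrees C',
                                        ...}}}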
:raises: FailedToGetSensorData when getting the sensor data fails. :raises: FailedToParseSensorData when parsing sensor data fails. :raises: InvalidParameterValue if required ipmi parameters are missing :raises: MissingParameterValue if a required parameter is missing. :returns: returns a dict of sensor data grouped by sensor type. """ driver_info = _parse_driver_info(task.node) # with the '-v' option, we can get the entire sensor data, including # the extended sensor information cmd = "sdr -v" try: out, err = _exec_ipmitool(driver_info, cmd) except (exception.PasswordFileFailedToCreate, processutils.ProcessExecutionError) as e: raise exception.FailedToGetSensorData(node=task.node.uuid, error=e) return _parse_ipmi_sensors_data(task.node, out) class VendorPassthru(base.VendorInterface): def __init__(self): try: _check_option_support(['single_bridge', 'dual_bridge']) except OSError: raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to locate usable ipmitool command in " "the system path when checking ipmitool version")) _check_temp_dir() @base.passthru(['POST']) @task_manager.require_exclusive_lock def send_raw(self, task, http_method, raw_bytes): """Send raw bytes to the BMC. Bytes should be a string of bytes. :param task: a TaskManager instance. :param http_method: the HTTP method used on the request. :param raw_bytes: a string of raw bytes to send, e.g. '0x00 0x01' :raises: IPMIFailure on an error from ipmitool. :raises: MissingParameterValue if a required parameter is missing. :raises: InvalidParameterValue when an invalid value is specified. """ send_raw(task, raw_bytes) @base.passthru(['POST']) @task_manager.require_exclusive_lock def bmc_reset(self, task, http_method, warm=True): """Reset BMC with IPMI command 'bmc reset (warm|cold)'. :param task: a TaskManager instance. :param http_method: the HTTP method used on the request. :param warm: boolean parameter to decide on warm or cold reset. :raises: IPMIFailure on an error from ipmitool. :raises: MissingParameterValue if a required parameter is missing. :raises: InvalidParameterValue when an invalid value is specified. """ node_uuid = task.node.uuid if warm: warm_param = 'warm' else: warm_param = 'cold' LOG.debug('Doing %(warm)s BMC reset on node %(node)s', {'warm': warm_param, 'node': node_uuid}) driver_info = _parse_driver_info(task.node) cmd = 'bmc reset %s' % warm_param try: out, err = _exec_ipmitool(driver_info, cmd) LOG.debug('bmc reset returned stdout: %(stdout)s, stderr:' ' %(stderr)s', {'stdout': out, 'stderr': err}) except (exception.PasswordFileFailedToCreate, processutils.ProcessExecutionError) as e: LOG.exception(_LE('IPMI "bmc reset" failed for node %(node_id)s ' 'with error: %(error)s.'), {'node_id': node_uuid, 'error': e}) raise exception.IPMIFailure(cmd=cmd) def get_properties(self): return COMMON_PROPERTIES def validate(self, task, method, **kwargs): """Validate vendor-specific actions. If invalid, raises an exception; otherwise returns None. Valid methods: * send_raw * bmc_reset :param task: a task from TaskManager. :param method: method to be validated :param kwargs: info for action. :raises: InvalidParameterValue when an invalid parameter value is specified. :raises: MissingParameterValue if a required parameter is missing.
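For example (illustrative values, where ``vendor`` is an instance of
this class), validating a send_raw request would look like::

    vendor.validate(task, 'send_raw', raw_bytes='0x00 0x01')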
""" if method == 'send_raw': if not kwargs.get('raw_bytes'): raise exception.MissingParameterValue(_( 'Parameter raw_bytes (string of bytes) was not ' 'specified.')) _parse_driver_info(task.node) class IPMIShellinaboxConsole(base.ConsoleInterface): """A ConsoleInterface that uses ipmitool and shellinabox.""" def __init__(self): try: _check_option_support(['timing', 'single_bridge', 'dual_bridge']) except OSError: raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to locate usable ipmitool command in " "the system path when checking ipmitool version")) _check_temp_dir() def get_properties(self): d = COMMON_PROPERTIES.copy() d.update(CONSOLE_PROPERTIES) return d def validate(self, task): """Validate the Node console info. :param task: a task from TaskManager. :raises: InvalidParameterValue :raises: MissingParameterValue when a required parameter is missing """ driver_info = _parse_driver_info(task.node) if not driver_info['port']: raise exception.MissingParameterValue(_( "Missing 'ipmi_terminal_port' parameter in node's" " driver_info.")) if driver_info['protocol_version'] != '2.0': raise exception.InvalidParameterValue(_( "Serial over lan only works with IPMI protocol version 2.0. " "Check the 'ipmi_protocol_version' parameter in " "node's driver_info")) def start_console(self, task): """Start a remote console for the node. :param task: a task from TaskManager :raises: InvalidParameterValue if required ipmi parameters are missing :raises: PasswordFileFailedToCreate if unable to create a file containing the password :raises: ConsoleError if the directory for the PID file cannot be created :raises: ConsoleSubprocessFailed when invoking the subprocess failed """ driver_info = _parse_driver_info(task.node) path = _console_pwfile_path(driver_info['uuid']) pw_file = console_utils.make_persistent_password_file( path, driver_info['password'] or '\0') ipmi_cmd = ("/:%(uid)s:%(gid)s:HOME:ipmitool -H %(address)s" " -I lanplus -U %(user)s -f %(pwfile)s" % {'uid': os.getuid(), 'gid': os.getgid(), 'address': driver_info['address'], 'user': driver_info['username'], 'pwfile': pw_file}) for name, option in BRIDGING_OPTIONS: if driver_info[name] is not None: ipmi_cmd = " ".join([ipmi_cmd, option, driver_info[name]]) if CONF.debug: ipmi_cmd += " -v" ipmi_cmd += " sol activate" try: console_utils.start_shellinabox_console(driver_info['uuid'], driver_info['port'], ipmi_cmd) except (exception.ConsoleError, exception.ConsoleSubprocessFailed): with excutils.save_and_reraise_exception(): ironic_utils.unlink_without_raise(path) def stop_console(self, task): """Stop the remote console session for the node. :param task: a task from TaskManager :raises: ConsoleError if unable to stop the console """ try: console_utils.stop_shellinabox_console(task.node.uuid) finally: ironic_utils.unlink_without_raise( _console_pwfile_path(task.node.uuid)) def get_console(self, task): """Get the type and connection information about the console.""" driver_info = _parse_driver_info(task.node) url = console_utils.get_shellinabox_console_url(driver_info['port']) return {'type': 'shellinabox', 'url': url} ironic-5.1.0/ironic/drivers/modules/iscsi_deploy.py0000664000567000056710000010427312674513470023632 0ustar jenkinsjenkins00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from ironic_lib import disk_utils from ironic_lib import utils as ironic_utils from oslo_config import cfg from oslo_log import log as logging from oslo_utils import fileutils from six.moves.urllib import parse from ironic.common import dhcp_factory from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LI from ironic.common.i18n import _LW from ironic.common import keystone from ironic.common import states from ironic.common import utils from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers import base from ironic.drivers.modules import agent_base_vendor from ironic.drivers.modules import deploy_utils from ironic.drivers.modules import image_cache LOG = logging.getLogger(__name__) # NOTE(rameshg87): This file now registers some of the opts in the pxe group. # This is acceptable for now as a future refactoring into # separate boot and deploy interfaces is planned, and moving config # options twice is not recommended. Hence we would move the parameters # to the appropriate place in the final refactoring. pxe_opts = [ cfg.StrOpt('pxe_append_params', default='nofb nomodeset vga=normal', help=_('Additional append parameters for baremetal PXE boot.')), cfg.StrOpt('default_ephemeral_format', default='ext4', help=_('Default file system format for ephemeral partition, ' 'if one is created.')), cfg.StrOpt('images_path', default='/var/lib/ironic/images/', help=_('On the ironic-conductor node, directory where images ' 'are stored on disk.')), cfg.StrOpt('instance_master_path', default='/var/lib/ironic/master_images', help=_('On the ironic-conductor node, directory where master ' 'instance images are stored on disk. ' 'Setting to <None> disables image caching.')), cfg.IntOpt('image_cache_size', default=20480, help=_('Maximum size (in MiB) of cache for master images, ' 'including those in use.')), # 10080 here is 1 week - 60*24*7. It is entirely arbitrary in the absence # of a facility to disable the ttl entirely. cfg.IntOpt('image_cache_ttl', default=10080, help=_('Maximum TTL (in minutes) for old master images in ' 'cache.')), cfg.StrOpt('disk_devices', default='cciss/c0d0,sda,hda,vda', help=_('The disk devices to scan while doing the deploy.')), ] CONF = cfg.CONF CONF.register_opts(pxe_opts, group='pxe') DISK_LAYOUT_PARAMS = ('root_gb', 'swap_mb', 'ephemeral_gb') @image_cache.cleanup(priority=50) class InstanceImageCache(image_cache.ImageCache): def __init__(self): super(self.__class__, self).__init__( CONF.pxe.instance_master_path, # MiB -> B cache_size=CONF.pxe.image_cache_size * 1024 * 1024, # min -> sec cache_ttl=CONF.pxe.image_cache_ttl * 60) def _get_image_dir_path(node_uuid): """Generate the dir for an instance's disk.""" return os.path.join(CONF.pxe.images_path, node_uuid) def _get_image_file_path(node_uuid): """Generate the full path for an instance's disk.""" return os.path.join(_get_image_dir_path(node_uuid), 'disk') def _save_disk_layout(node, i_info): """Saves the disk layout. The disk layout used for deployment of the node is saved.
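Only the keys listed in DISK_LAYOUT_PARAMS are persisted, e.g.
(hypothetical values)::

    node.driver_internal_info['instance'] = {'root_gb': 10,
                                             'swap_mb': 0,
                                             'ephemeral_gb': 0}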
:param node: the node of interest :param i_info: instance information (a dictionary) for the node, containing disk layout information """ driver_internal_info = node.driver_internal_info driver_internal_info['instance'] = {} for param in DISK_LAYOUT_PARAMS: driver_internal_info['instance'][param] = i_info[param] node.driver_internal_info = driver_internal_info node.save() def check_image_size(task): """Check if the requested image is larger than the root partition size. :param task: a TaskManager instance containing the node to act on. :raises: InstanceDeployFailure if size of the image is greater than root partition. """ i_info = deploy_utils.parse_instance_info(task.node) image_path = _get_image_file_path(task.node.uuid) image_mb = disk_utils.get_image_mb(image_path) root_mb = 1024 * int(i_info['root_gb']) if image_mb > root_mb: msg = (_('Root partition is too small for requested image. Image ' 'virtual size: %(image_mb)d MB, Root size: %(root_mb)d MB') % {'image_mb': image_mb, 'root_mb': root_mb}) raise exception.InstanceDeployFailure(msg) def cache_instance_image(ctx, node): """Fetch the instance's image from Glance This method pulls the instance image and writes it to the appropriate place on the local disk. :param ctx: context :param node: an ironic node object :returns: a tuple containing the uuid of the image and the path in the filesystem where image is cached. """ i_info = deploy_utils.parse_instance_info(node) fileutils.ensure_tree(_get_image_dir_path(node.uuid)) image_path = _get_image_file_path(node.uuid) uuid = i_info['image_source'] LOG.debug("Fetching image %(ami)s for node %(uuid)s", {'ami': uuid, 'uuid': node.uuid}) deploy_utils.fetch_images(ctx, InstanceImageCache(), [(uuid, image_path)], CONF.force_raw_images) return (uuid, image_path) def destroy_images(node_uuid): """Delete instance's image file. :param node_uuid: the uuid of the ironic node. """ ironic_utils.unlink_without_raise(_get_image_file_path(node_uuid)) utils.rmtree_without_raise(_get_image_dir_path(node_uuid)) InstanceImageCache().clean_up() def get_deploy_info(node, **kwargs): """Returns the information required for doing iSCSI deploy in a dictionary. :param node: ironic node object :param kwargs: the keyword args passed from the conductor node. :raises: MissingParameterValue, if some required parameters were not passed. :raises: InvalidParameterValue, if any of the parameters have invalid value.
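A sketch of the returned dictionary for a whole disk image (values
hypothetical; partition images additionally carry root_mb, swap_mb,
ephemeral_mb, preserve_ephemeral, boot_option, boot_mode, the nullable
ephemeral_format/configdrive and, when set, disk_label)::

    {'address': '10.0.0.5', 'port': '3260', 'iqn': 'iqn.2008-10...',
     'lun': '1', 'image_path': '/var/lib/ironic/images/<uuid>/disk',
     'node_uuid': '<uuid>'}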
""" deploy_key = kwargs.get('key') i_info = deploy_utils.parse_instance_info(node) if i_info['deploy_key'] != deploy_key: raise exception.InvalidParameterValue(_("Deploy key does not match")) params = { 'address': kwargs.get('address'), 'port': kwargs.get('port', '3260'), 'iqn': kwargs.get('iqn'), 'lun': kwargs.get('lun', '1'), 'image_path': _get_image_file_path(node.uuid), 'node_uuid': node.uuid} is_whole_disk_image = node.driver_internal_info['is_whole_disk_image'] if not is_whole_disk_image: params.update({'root_mb': 1024 * int(i_info['root_gb']), 'swap_mb': int(i_info['swap_mb']), 'ephemeral_mb': 1024 * int(i_info['ephemeral_gb']), 'preserve_ephemeral': i_info['preserve_ephemeral'], 'boot_option': deploy_utils.get_boot_option(node), 'boot_mode': _get_boot_mode(node)}) # Append disk label if specified disk_label = deploy_utils.get_disk_label(node) if disk_label is not None: params['disk_label'] = disk_label missing = [key for key in params if params[key] is None] if missing: raise exception.MissingParameterValue( _("Parameters %s were not passed to ironic" " for deploy.") % missing) if is_whole_disk_image: return params # configdrive and ephemeral_format are nullable params['ephemeral_format'] = i_info.get('ephemeral_format') params['configdrive'] = i_info.get('configdrive') return params def continue_deploy(task, **kwargs): """Resume a deployment upon getting POST data from deploy ramdisk. This method raises no exceptions because it is intended to be invoked asynchronously as a callback from the deploy ramdisk. :param task: a TaskManager instance containing the node to act on. :param kwargs: the kwargs to be passed to deploy. :raises: InvalidState if the event is not allowed by the associated state machine. :returns: a dictionary containing the following keys: For partition image: 'root uuid': UUID of root partition 'efi system partition uuid': UUID of the uefi system partition (if boot mode is uefi). NOTE: If key exists but value is None, it means partition doesn't exist. For whole disk image: 'disk identifier': ID of the disk to which image was deployed. """ node = task.node params = get_deploy_info(node, **kwargs) ramdisk_error = kwargs.get('error') def _fail_deploy(task, msg): """Fail the deploy after logging and setting error states.""" LOG.error(msg) deploy_utils.set_failed_state(task, msg) destroy_images(task.node.uuid) raise exception.InstanceDeployFailure(msg) if ramdisk_error: msg = _('Error returned from deploy ramdisk: %s') % ramdisk_error _fail_deploy(task, msg) # NOTE(lucasagomes): Let's make sure we don't log the full content # of the config drive here because it can be up to 64MB in size, # so instead let's log "***" in case config drive is enabled. if LOG.isEnabledFor(logging.logging.DEBUG): log_params = { k: params[k] if k != 'configdrive' else '***' for k in params.keys() } LOG.debug('Continuing deployment for node %(node)s, params %(params)s', {'node': node.uuid, 'params': log_params}) uuid_dict_returned = {} try: if node.driver_internal_info['is_whole_disk_image']: uuid_dict_returned = deploy_utils.deploy_disk_image(**params) else: uuid_dict_returned = deploy_utils.deploy_partition_image(**params) except Exception as e: msg = (_('Deploy failed for instance %(instance)s. 
' 'Error: %(error)s') % {'instance': node.instance_uuid, 'error': e}) _fail_deploy(task, msg) root_uuid_or_disk_id = uuid_dict_returned.get( 'root uuid', uuid_dict_returned.get('disk identifier')) if not root_uuid_or_disk_id: msg = (_("Couldn't determine the UUID of the root " "partition or the disk identifier after deploying " "node %s") % node.uuid) _fail_deploy(task, msg) if params.get('preserve_ephemeral', False): # Save disk layout information, to check that they are unchanged # for any future rebuilds _save_disk_layout(node, deploy_utils.parse_instance_info(node)) destroy_images(node.uuid) return uuid_dict_returned def do_agent_iscsi_deploy(task, agent_client): """Method invoked when deployed with the agent ramdisk. This method is invoked by drivers for doing iSCSI deploy using agent ramdisk. This method assumes that the agent is booted up on the node and is heartbeating. :param task: a TaskManager object containing the node. :param agent_client: an instance of agent_client.AgentClient which will be used during iscsi deploy (for exposing the node's target disk via iSCSI, for installing the boot loader, etc.). :returns: a dictionary containing the following keys: For partition image: 'root uuid': UUID of root partition 'efi system partition uuid': UUID of the uefi system partition (if boot mode is uefi). NOTE: If key exists but value is None, it means partition doesn't exist. For whole disk image: 'disk identifier': ID of the disk to which image was deployed. :raises: InstanceDeployFailure, if it encounters some error during the deploy. """ node = task.node iscsi_options = build_deploy_ramdisk_options(node) iqn = iscsi_options['iscsi_target_iqn'] result = agent_client.start_iscsi_target(node, iqn) if result['command_status'] == 'FAILED': msg = (_("Failed to start the iSCSI target to deploy the " "node %(node)s. Error: %(error)s") % {'node': node.uuid, 'error': result['command_error']}) deploy_utils.set_failed_state(task, msg) raise exception.InstanceDeployFailure(reason=msg) address = parse.urlparse(node.driver_internal_info['agent_url']) address = address.hostname # TODO(lucasagomes): The 'error' and 'key' parameters in the # dictionary below are just being passed because it's needed for # the continue_deploy() method, we are fooling it # for now. The agent driver doesn't use/need those. So we need to # refactor these bits later. iscsi_params = {'error': result['command_error'], 'iqn': iqn, 'key': iscsi_options['deployment_key'], 'address': address} uuid_dict_returned = continue_deploy(task, **iscsi_params) root_uuid_or_disk_id = uuid_dict_returned.get( 'root uuid', uuid_dict_returned.get('disk identifier')) # TODO(lucasagomes): Move this bit saving the root_uuid to # continue_deploy() driver_internal_info = node.driver_internal_info driver_internal_info['root_uuid_or_disk_id'] = root_uuid_or_disk_id node.driver_internal_info = driver_internal_info node.save() return uuid_dict_returned def _get_boot_mode(node): """Gets the boot mode. :param node: A single Node. :returns: A string representing the boot mode type. Defaults to 'bios'. """ boot_mode = deploy_utils.get_boot_mode_for_deploy(node) if boot_mode: return boot_mode return "bios" def build_deploy_ramdisk_options(node): """Build the ramdisk config options for a node This method builds the ramdisk options for a node, given all the required parameters for doing iscsi deploy. :param node: a single Node. :returns: A dictionary of options to be passed to ramdisk for performing the deploy.
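A sketch of the dictionary built below (values hypothetical, keys as
constructed in this function)::

    {'deployment_id': '<node uuid>',
     'deployment_key': '<random 32-character key>',
     'iscsi_target_iqn': 'iqn.2008-10.org.openstack:<node uuid>',
     'ironic_api_url': 'http://<ironic-api>:6385',
     'disk': 'cciss/c0d0,sda,hda,vda',
     'boot_option': 'netboot',
     'boot_mode': 'bios',
     'coreos.configdrive': 0}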
""" # NOTE: we should strip '/' from the end because this is intended for # hardcoded ramdisk script ironic_api = (CONF.conductor.api_url or keystone.get_service_url()).rstrip('/') deploy_key = utils.random_alnum(32) i_info = node.instance_info i_info['deploy_key'] = deploy_key node.instance_info = i_info node.save() # XXX(jroll) DIB relies on boot_option=local to decide whether or not to # lay down a bootloader. Hack this for now; fix it for real in Liberty. # See also bug #1441556. boot_option = deploy_utils.get_boot_option(node) if node.driver_internal_info.get('is_whole_disk_image'): boot_option = 'netboot' deploy_options = { 'deployment_id': node['uuid'], 'deployment_key': deploy_key, 'iscsi_target_iqn': 'iqn.2008-10.org.openstack:%s' % node.uuid, 'ironic_api_url': ironic_api, 'disk': CONF.pxe.disk_devices, 'boot_option': boot_option, 'boot_mode': _get_boot_mode(node), # NOTE: The below entry is a temporary workaround for bug/1433812 'coreos.configdrive': 0, } root_device = deploy_utils.parse_root_device_hints(node) if root_device: deploy_options['root_device'] = root_device return deploy_options def validate(task): """Validates the pre-requisites for iSCSI deploy. Validates whether node in the task provided has some ports enrolled. This method validates whether conductor url is available either from CONF file or from keystone. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue if the URL of the Ironic API service is not configured in config file and is not accessible via Keystone catalog. :raises: MissingParameterValue if no ports are enrolled for the given node. """ try: # TODO(lucasagomes): Validate the format of the URL CONF.conductor.api_url or keystone.get_service_url() except (exception.KeystoneFailure, exception.CatalogNotFound, exception.KeystoneUnauthorized) as e: raise exception.InvalidParameterValue(_( "Couldn't get the URL of the Ironic API service from the " "configuration file or keystone catalog. Keystone error: %s") % e) # Validate the root device hints deploy_utils.parse_root_device_hints(task.node) deploy_utils.parse_instance_info(task.node) def validate_pass_bootloader_info_input(task, input_params): """Validates the input sent with bootloader install info passthru. This method validates the input sent with bootloader install info passthru. :param task: A TaskManager object. :param input_params: A dictionary of params sent as input to passthru. :raises: InvalidParameterValue, if deploy key passed doesn't match the one stored in instance_info. :raises: MissingParameterValue, if some input is missing. """ params = {'address': input_params.get('address'), 'key': input_params.get('key'), 'status': input_params.get('status')} msg = _("Some mandatory input missing in 'pass_bootloader_info' " "vendor passthru from ramdisk.") deploy_utils.check_for_missing_params(params, msg) deploy_key = task.node.instance_info['deploy_key'] if deploy_key != input_params.get('key'): raise exception.InvalidParameterValue( _("Deploy key %(key_sent)s does not match " "with %(expected_key)s") % {'key_sent': input_params.get('key'), 'expected_key': deploy_key}) def validate_bootloader_install_status(task, input_params): """Validate if bootloader was installed. This method first validates if deploy key sent in vendor passthru was correct one, and then validates whether bootloader installation was successful or not. :param task: A TaskManager object. :param input_params: A dictionary of params sent as input to passthru. 
:raises: InstanceDeployFailure, if bootloader installation was reported from ramdisk as failure. """ node = task.node if input_params['status'] != 'SUCCEEDED': msg = (_('Failed to install bootloader on node %(node)s. ' 'Error: %(error)s.') % {'node': node.uuid, 'error': input_params.get('error')}) LOG.error(msg) deploy_utils.set_failed_state(task, msg) raise exception.InstanceDeployFailure(msg) LOG.info(_LI('Bootloader successfully installed on node %s'), node.uuid) def finish_deploy(task, address): """Notifies the ramdisk to reboot the node and makes the instance active. This method notifies the ramdisk to proceed to reboot and then makes the instance active. :param task: a TaskManager object. :param address: The IP address of the bare metal node. :raises: InstanceDeployFailure, if notifying ramdisk failed. """ node = task.node try: deploy_utils.notify_ramdisk_to_proceed(address) except Exception as e: LOG.error(_LE('Deploy failed for instance %(instance)s. ' 'Error: %(error)s'), {'instance': node.instance_uuid, 'error': e}) msg = (_('Failed to notify ramdisk to reboot after bootloader ' 'installation. Error: %s') % e) deploy_utils.set_failed_state(task, msg) raise exception.InstanceDeployFailure(msg) # TODO(lucasagomes): When deploying a node with the DIB ramdisk # Ironic will not power control the node at the end of the deployment, # it's the DIB ramdisk that reboots the node. But, for the SSH driver # some changes like setting the boot device only gets applied when the # machine is powered off and on again. So the code below is enforcing # it. For Liberty we need to change the DIB ramdisk so that Ironic # always controls the power state of the node for all drivers. if deploy_utils.get_boot_option(node) == "local" and 'ssh' in node.driver: manager_utils.node_power_action(task, states.REBOOT) LOG.info(_LI('Deployment to node %s done'), node.uuid) task.process_event('done') class ISCSIDeploy(base.DeployInterface): """iSCSI Deploy Interface for deploy-related actions.""" def get_properties(self): return {} def validate(self, task): """Validate the deployment information for the task's node. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue. :raises: MissingParameterValue """ task.driver.boot.validate(task) node = task.node # Check the boot_mode and boot_option capabilities values. deploy_utils.validate_capabilities(node) # TODO(rameshg87): iscsi_ilo driver uses this method. Remove # and copy-paste its contents here once the iscsi_ilo deploy driver # is broken down into separate boot and deploy implementations. validate(task) @task_manager.require_exclusive_lock def deploy(self, task): """Start deployment of the task's node. Fetches instance image, creates a temporary keystone token file, updates the DHCP port options for next boot, and issues a reboot request to the power driver. This causes the node to boot into the deployment ramdisk and triggers the next phase of PXE-based deployment via VendorPassthru.pass_deploy_info(). :param task: a TaskManager instance containing the node to act on. :returns: deploy state DEPLOYWAIT. """ node = task.node cache_instance_image(task.context, node) check_image_size(task) manager_utils.node_power_action(task, states.REBOOT) return states.DEPLOYWAIT @task_manager.require_exclusive_lock def tear_down(self, task): """Tear down a previous deployment on the task's node. Power off the node. All actual clean-up is done in the clean_up() method which should be called separately.
:param task: a TaskManager instance containing the node to act on. :returns: deploy state DELETED. """ manager_utils.node_power_action(task, states.POWER_OFF) return states.DELETED def prepare(self, task): """Prepare the deployment environment for this task's node. Generates the TFTP configuration for PXE-booting both the deployment and user images, fetches the TFTP image from Glance and adds it to the local cache. :param task: a TaskManager instance containing the node to act on. """ node = task.node if node.provision_state == states.ACTIVE: task.driver.boot.prepare_instance(task) else: deploy_opts = build_deploy_ramdisk_options(node) # NOTE(lucasagomes): We are going to extend the normal PXE config # to also contain the agent options so it could be used for # both the DIB ramdisk and the IPA ramdisk agent_opts = deploy_utils.build_agent_options(node) deploy_opts.update(agent_opts) task.driver.boot.prepare_ramdisk(task, deploy_opts) def clean_up(self, task): """Clean up the deployment environment for the task's node. Unlinks TFTP and instance images and triggers image cache cleanup. Removes the TFTP configuration files for this node. As a precaution, this method also ensures the keystone auth token file was removed. :param task: a TaskManager instance containing the node to act on. """ destroy_images(task.node.uuid) task.driver.boot.clean_up_ramdisk(task) task.driver.boot.clean_up_instance(task) provider = dhcp_factory.DHCPFactory() provider.clean_dhcp(task) def take_over(self, task): pass def get_clean_steps(self, task): """Get the list of clean steps from the agent. :param task: a TaskManager object containing the node :raises NodeCleaningFailure: if the clean steps are not yet available (cached), for example, when a node has just been enrolled and has not been cleaned yet. :returns: A list of clean step dictionaries. If the bash ramdisk is used for this node, it returns an empty list. """ # TODO(rameshg87): Remove the below code once we stop supporting # bash ramdisk in Ironic. No need to log warning because we have # already logged it in pass_deploy_info. if 'agent_url' not in task.node.driver_internal_info: return [] steps = deploy_utils.agent_get_clean_steps( task, interface='deploy', override_priorities={ 'erase_devices': CONF.deploy.erase_devices_priority}) return steps def execute_clean_step(self, task, step): """Execute a clean step asynchronously on the agent. :param task: a TaskManager object containing the node :param step: a clean step dictionary to execute :raises: NodeCleaningFailure if the agent does not return a command status :returns: states.CLEANWAIT to signify the step will be completed asynchronously. """ return deploy_utils.agent_execute_clean_step(task, step) def prepare_cleaning(self, task): """Boot into the agent to prepare for cleaning. :param task: a TaskManager object containing the node :raises NodeCleaningFailure: if the previous cleaning ports cannot be removed or if new cleaning ports cannot be created :returns: states.CLEANWAIT to signify an asynchronous prepare. """ return deploy_utils.prepare_inband_cleaning( task, manage_boot=True) def tear_down_cleaning(self, task): """Clean up the PXE and DHCP files after cleaning.
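For reference, the clean step dictionaries consumed by
execute_clean_step() above have this general shape (field values
hypothetical)::

    {'step': 'erase_devices', 'priority': 10, 'interface': 'deploy'}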
:param task: a TaskManager object containing the node :raises NodeCleaningFailure: if the cleaning ports cannot be removed """ deploy_utils.tear_down_inband_cleaning( task, manage_boot=True) class VendorPassthru(agent_base_vendor.BaseAgentVendor): """Interface to mix IPMI and PXE vendor-specific interfaces.""" def validate(self, task, method, **kwargs): """Validates the inputs for a vendor passthru. If invalid, raises an exception; otherwise returns None. Valid methods: * pass_deploy_info * pass_bootloader_install_info :param task: a TaskManager instance containing the node to act on. :param method: method to be validated. :param kwargs: kwargs containing the method's parameters. :raises: InvalidParameterValue if any parameter is invalid. """ if method == 'pass_deploy_info': # TODO(rameshg87): Don't validate deploy info if the bash ramdisk # booted during cleaning. It will be handled in the # pass_deploy_info method. Remove the below code once we stop # supporting the bash ramdisk in Ironic. if task.node.provision_state != states.CLEANWAIT: deploy_utils.validate_capabilities(task.node) get_deploy_info(task.node, **kwargs) elif method == 'pass_bootloader_install_info': validate_pass_bootloader_info_input(task, kwargs) @base.passthru(['POST']) @task_manager.require_exclusive_lock def pass_bootloader_install_info(self, task, **kwargs): """Accepts the results of bootloader installation. This method acts as a vendor passthru and accepts the result of the bootloader installation. If bootloader installation was successful, then it notifies the bare metal to proceed to reboot and makes the instance active. If the bootloader installation failed, then it sets provisioning as failed and powers off the node. :param task: A TaskManager object. :param kwargs: The arguments sent with vendor passthru. The expected kwargs are:: 'key': The deploy key for authorization 'status': 'SUCCEEDED' or 'FAILED' 'error': The error message if status == 'FAILED' 'address': The IP address of the ramdisk """ LOG.warning(_LW("The node %s is using the bash deploy ramdisk for " "its deployment. This deploy ramdisk has been " "deprecated. Please use the ironic-python-agent " "(IPA) ramdisk instead."), task.node.uuid) task.process_event('resume') LOG.debug('Continuing the deployment on node %s', task.node.uuid) validate_bootloader_install_status(task, kwargs) finish_deploy(task, kwargs['address']) def _initiate_cleaning(self, task): """Initiates the steps required to start cleaning for the node. This method polls each interface of the driver for getting the clean steps and notifies Ironic conductor to resume cleaning. On error, it sets the node to CLEANFAIL state and populates node.last_error with the error message. :param task: a TaskManager instance containing the node to act on. """ LOG.warning( _LW("Bash deploy ramdisk doesn't support in-band cleaning. " "Please use the ironic-python-agent (IPA) ramdisk " "instead for node %s. "), task.node.uuid) try: manager_utils.set_node_cleaning_steps(task) self.notify_conductor_resume_clean(task) except Exception as e: last_error = ( _('Encountered exception for node %(node)s ' 'while initiating cleaning. Error: %(error)s') % {'node': task.node.uuid, 'error': e}) return manager_utils.cleaning_error_handler(task, last_error) @base.passthru(['POST']) @task_manager.require_exclusive_lock def pass_deploy_info(self, task, **kwargs): """Continues the deployment of the baremetal node over iSCSI.
This method continues the deployment of the baremetal node over iSCSI from where the deployment ramdisk has left off. :param task: a TaskManager instance containing the node to act on. :param kwargs: kwargs for performing iscsi deployment. :raises: InvalidState """ node = task.node LOG.warning(_LW("The node %s is using the bash deploy ramdisk for " "its deployment. This deploy ramdisk has been " "deprecated. Please use the ironic-python-agent " "(IPA) ramdisk instead."), node.uuid) # TODO(rameshg87): Remove the below code once we stop supporting # bash ramdisk in Ironic. if node.provision_state == states.CLEANWAIT: return self._initiate_cleaning(task) task.process_event('resume') LOG.debug('Continuing the deployment on node %s', node.uuid) is_whole_disk_image = node.driver_internal_info['is_whole_disk_image'] uuid_dict_returned = continue_deploy(task, **kwargs) root_uuid_or_disk_id = uuid_dict_returned.get( 'root uuid', uuid_dict_returned.get('disk identifier')) # save the node's root disk UUID so that another conductor could # rebuild the PXE config file. Due to a shortcoming in Nova objects, # we have to assign to node.driver_internal_info so the node knows it # has changed. driver_internal_info = node.driver_internal_info driver_internal_info['root_uuid_or_disk_id'] = root_uuid_or_disk_id node.driver_internal_info = driver_internal_info node.save() try: task.driver.boot.prepare_instance(task) if deploy_utils.get_boot_option(node) == "local": if not is_whole_disk_image: LOG.debug('Installing the bootloader on node %s', node.uuid) deploy_utils.notify_ramdisk_to_proceed(kwargs['address']) task.process_event('wait') return except Exception as e: LOG.error(_LE('Deploy failed for instance %(instance)s. ' 'Error: %(error)s'), {'instance': node.instance_uuid, 'error': e}) msg = _('Failed to continue iSCSI deployment.') deploy_utils.set_failed_state(task, msg) else: finish_deploy(task, kwargs.get('address')) @task_manager.require_exclusive_lock def continue_deploy(self, task, **kwargs): """Method invoked when deployed with the IPA ramdisk. This method is invoked during a heartbeat from an agent when the node is in wait-call-back state. This deploys the image on the node and then configures the node to boot according to the desired boot option (netboot or localboot). :param task: a TaskManager object containing the node. :param kwargs: the kwargs passed from the heartbeat method. :raises: InstanceDeployFailure, if it encounters some error during the deploy. """ task.process_event('resume') node = task.node LOG.debug('Continuing the deployment on node %s', node.uuid) uuid_dict_returned = do_agent_iscsi_deploy(task, self._client) root_uuid = uuid_dict_returned.get('root uuid') efi_sys_uuid = uuid_dict_returned.get('efi system partition uuid') self.prepare_instance_to_boot(task, root_uuid, efi_sys_uuid) self.reboot_and_finish_deploy(task) ironic-5.1.0/ironic/drivers/modules/irmc/0000775000567000056710000000000012674513633021516 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/modules/irmc/boot.py0000664000567000056710000006156112674513466023050 0ustar jenkinsjenkins00000000000000# Copyright 2015 FUJITSU LIMITED # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ iRMC Boot Driver """ import os import shutil import tempfile from ironic_lib import utils as ironic_utils from oslo_config import cfg from oslo_log import log as logging from oslo_utils import importutils from ironic.common import boot_devices from ironic.common import exception from ironic.common.glance_service import service_utils from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LI from ironic.common import images from ironic.common import states from ironic.conductor import utils as manager_utils from ironic.drivers import base from ironic.drivers.modules import deploy_utils from ironic.drivers.modules.irmc import common as irmc_common scci = importutils.try_import('scciclient.irmc.scci') CONF = cfg.CONF try: if CONF.debug: scci.DEBUG = True except Exception: pass opts = [ cfg.StrOpt('remote_image_share_root', default='/remote_image_share_root', help=_('Ironic conductor node\'s "NFS" or "CIFS" root path')), cfg.StrOpt('remote_image_server', help=_('IP of remote image server')), cfg.StrOpt('remote_image_share_type', default='CIFS', choices=['CIFS', 'NFS'], ignore_case=True, help=_('Share type of virtual media')), cfg.StrOpt('remote_image_share_name', default='share', help=_('share name of remote_image_server')), cfg.StrOpt('remote_image_user_name', help=_('User name of remote_image_server')), cfg.StrOpt('remote_image_user_password', secret=True, help=_('Password of remote_image_user_name')), cfg.StrOpt('remote_image_user_domain', default='', help=_('Domain name of remote_image_user_name')), ] CONF.register_opts(opts, group='irmc') LOG = logging.getLogger(__name__) REQUIRED_PROPERTIES = { 'irmc_deploy_iso': _("Deployment ISO image file name. " "Required."), } COMMON_PROPERTIES = REQUIRED_PROPERTIES def _parse_config_option(): """Parse config file options. This method checks config file options validity. :raises: InvalidParameterValue, if config option has invalid value. """ error_msgs = [] if not os.path.isdir(CONF.irmc.remote_image_share_root): error_msgs.append( _("Value '%s' for remote_image_share_root isn't a directory " "or doesn't exist.") % CONF.irmc.remote_image_share_root) if error_msgs: msg = (_("The following errors were encountered while parsing " "config file:%s") % error_msgs) raise exception.InvalidParameterValue(msg) def _parse_driver_info(node): """Gets the driver specific Node deployment info. This method validates whether the 'driver_info' property of the supplied node contains the required or optional information properly for this driver to deploy images to the node. :param node: a target node of the deployment :returns: the driver_info values of the node. :raises: MissingParameterValue, if any of the required parameters are missing. :raises: InvalidParameterValue, if any of the parameters have invalid value. """ d_info = node.driver_info deploy_info = {} deploy_info['irmc_deploy_iso'] = d_info.get('irmc_deploy_iso') error_msg = _("Error validating iRMC virtual media deploy. 
Some parameters" " were missing in node's driver_info") deploy_utils.check_for_missing_params(deploy_info, error_msg) if service_utils.is_image_href_ordinary_file_name( deploy_info['irmc_deploy_iso']): deploy_iso = os.path.join(CONF.irmc.remote_image_share_root, deploy_info['irmc_deploy_iso']) if not os.path.isfile(deploy_iso): msg = (_("Deploy ISO file, %(deploy_iso)s, " "not found for node: %(node)s.") % {'deploy_iso': deploy_iso, 'node': node.uuid}) raise exception.InvalidParameterValue(msg) return deploy_info def _parse_instance_info(node): """Gets the instance specific Node deployment info. This method validates whether the 'instance_info' property of the supplied node contains the required or optional information properly for this driver to deploy images to the node. :param node: a target node of the deployment :returns: the instance_info values of the node. :raises: InvalidParameterValue, if any of the parameters have invalid value. """ i_info = node.instance_info deploy_info = {} if i_info.get('irmc_boot_iso'): deploy_info['irmc_boot_iso'] = i_info['irmc_boot_iso'] if service_utils.is_image_href_ordinary_file_name( deploy_info['irmc_boot_iso']): boot_iso = os.path.join(CONF.irmc.remote_image_share_root, deploy_info['irmc_boot_iso']) if not os.path.isfile(boot_iso): msg = (_("Boot ISO file, %(boot_iso)s, " "not found for node: %(node)s.") % {'boot_iso': boot_iso, 'node': node.uuid}) raise exception.InvalidParameterValue(msg) return deploy_info def _parse_deploy_info(node): """Gets the instance and driver specific Node deployment info. This method validates whether the 'instance_info' and 'driver_info' property of the supplied node contains the required information for this driver to deploy images to the node. :param node: a target node of the deployment :returns: a dict with the instance_info and driver_info values. :raises: MissingParameterValue, if any of the required parameters are missing. :raises: InvalidParameterValue, if any of the parameters have invalid value. """ deploy_info = {} deploy_info.update(deploy_utils.get_image_instance_info(node)) deploy_info.update(_parse_driver_info(node)) deploy_info.update(_parse_instance_info(node)) return deploy_info def _setup_deploy_iso(task, ramdisk_options): """Attaches virtual media and sets it as boot device. This method attaches the given deploy ISO as virtual media, prepares the arguments for ramdisk in virtual media floppy. :param task: a TaskManager instance containing the node to act on. :param ramdisk_options: the options to be passed to the ramdisk in virtual media floppy. :raises: ImageRefValidationFailed if no image service can handle specified href. :raises: ImageCreationFailed, if it failed while creating the floppy image. :raises: IRMCOperationError, if some operation on iRMC failed. :raises: InvalidParameterValue if the validation of the PowerInterface or ManagementInterface fails. """ d_info = task.node.driver_info deploy_iso_href = d_info['irmc_deploy_iso'] if service_utils.is_image_href_ordinary_file_name(deploy_iso_href): deploy_iso_file = deploy_iso_href else: deploy_iso_file = _get_deploy_iso_name(task.node) deploy_iso_fullpathname = os.path.join( CONF.irmc.remote_image_share_root, deploy_iso_file) images.fetch(task.context, deploy_iso_href, deploy_iso_fullpathname) _setup_vmedia_for_boot(task, deploy_iso_file, ramdisk_options) manager_utils.node_set_boot_device(task, boot_devices.CDROM) def _get_deploy_iso_name(node): """Returns the deploy ISO file name for a given node. 
:param node: the node for which ISO file name is to be provided. """ return "deploy-%s.iso" % node.uuid def _get_boot_iso_name(node): """Returns the boot ISO file name for a given node. :param node: the node for which ISO file name is to be provided. """ return "boot-%s.iso" % node.uuid def _prepare_boot_iso(task, root_uuid): """Prepare a boot ISO to boot the node. :param task: a TaskManager instance containing the node to act on. :param root_uuid: the uuid of the root partition. :raises: MissingParameterValue, if any of the required parameters are missing. :raises: InvalidParameterValue, if any of the parameters have invalid value. :raises: ImageCreationFailed, if creating boot ISO for BIOS boot_mode failed. """ deploy_info = _parse_deploy_info(task.node) driver_internal_info = task.node.driver_internal_info # fetch boot iso if deploy_info.get('irmc_boot_iso'): boot_iso_href = deploy_info['irmc_boot_iso'] if service_utils.is_image_href_ordinary_file_name(boot_iso_href): driver_internal_info['irmc_boot_iso'] = boot_iso_href else: boot_iso_filename = _get_boot_iso_name(task.node) boot_iso_fullpathname = os.path.join( CONF.irmc.remote_image_share_root, boot_iso_filename) images.fetch(task.context, boot_iso_href, boot_iso_fullpathname) driver_internal_info['irmc_boot_iso'] = boot_iso_filename # create boot iso else: image_href = deploy_info['image_source'] image_props = ['kernel_id', 'ramdisk_id'] image_properties = images.get_image_properties( task.context, image_href, image_props) kernel_href = (task.node.instance_info.get('kernel') or image_properties['kernel_id']) ramdisk_href = (task.node.instance_info.get('ramdisk') or image_properties['ramdisk_id']) deploy_iso_filename = _get_deploy_iso_name(task.node) deploy_iso = ('file://' + os.path.join( CONF.irmc.remote_image_share_root, deploy_iso_filename)) boot_mode = deploy_utils.get_boot_mode_for_deploy(task.node) kernel_params = CONF.pxe.pxe_append_params boot_iso_filename = _get_boot_iso_name(task.node) boot_iso_fullpathname = os.path.join( CONF.irmc.remote_image_share_root, boot_iso_filename) images.create_boot_iso(task.context, boot_iso_fullpathname, kernel_href, ramdisk_href, deploy_iso, root_uuid, kernel_params, boot_mode) driver_internal_info['irmc_boot_iso'] = boot_iso_filename # save driver_internal_info['irmc_boot_iso'] task.node.driver_internal_info = driver_internal_info task.node.save() def _get_floppy_image_name(node): """Returns the floppy image name for a given node. :param node: the node for which image name is to be provided. """ return "image-%s.img" % node.uuid def _prepare_floppy_image(task, params): """Prepares the floppy image for passing the parameters. This method prepares a temporary vfat filesystem image, which contains the parameters to be passed to the ramdisk. Then it uploads the file to the NFS or CIFS server. :param task: a TaskManager instance containing the node to act on. :param params: a dictionary containing 'parameter name'->'value' mapping to be passed to the deploy ramdisk via the floppy image. :returns: floppy image filename :raises: ImageCreationFailed, if it failed while creating the floppy image. :raises: IRMCOperationError, if copying floppy image file failed.
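A minimal illustrative call (the MAC value is a placeholder; the real mapping is assembled by the caller, e.g. ``prepare_ramdisk``)::

    floppy_filename = _prepare_floppy_image(
        task, {'BOOTIF': 'aa:bb:cc:dd:ee:ff'})
    # -> e.g. "image-<node-uuid>.img", already copied onto the share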
""" floppy_filename = _get_floppy_image_name(task.node) floppy_fullpathname = os.path.join( CONF.irmc.remote_image_share_root, floppy_filename) with tempfile.NamedTemporaryFile() as vfat_image_tmpfile_obj: images.create_vfat_image(vfat_image_tmpfile_obj.name, parameters=params) try: shutil.copyfile(vfat_image_tmpfile_obj.name, floppy_fullpathname) except IOError as e: operation = _("Copying floppy image file") raise exception.IRMCOperationError( operation=operation, error=e) return floppy_filename def attach_boot_iso_if_needed(task): """Attaches boot ISO for a deployed node if it exists. This method checks the instance info of the bare metal node for a boot ISO. If the instance info has a value of key 'irmc_boot_iso', it indicates that 'boot_option' is 'netboot'. Threfore it attaches the boot ISO on the bare metal node and then sets the node to boot from virtual media cdrom. :param task: a TaskManager instance containing the node to act on. :raises: IRMCOperationError if attaching virtual media failed. :raises: InvalidParameterValue if the validation of the ManagementInterface fails. """ d_info = task.node.driver_internal_info node_state = task.node.provision_state if 'irmc_boot_iso' in d_info and node_state == states.ACTIVE: _setup_vmedia_for_boot(task, d_info['irmc_boot_iso']) manager_utils.node_set_boot_device(task, boot_devices.CDROM) def _setup_vmedia_for_boot(task, bootable_iso_filename, parameters=None): """Sets up the node to boot from the boot ISO image. This method attaches a boot_iso on the node and passes the required parameters to it via a virtual floppy image. :param task: a TaskManager instance containing the node to act on. :param bootable_iso_filename: a bootable ISO image to attach to. The iso file should be present in NFS/CIFS server. :param parameters: the parameters to pass in a virtual floppy image in a dictionary. This is optional. :raises: ImageCreationFailed, if it failed while creating a floppy image. :raises: IRMCOperationError, if attaching a virtual media failed. """ LOG.info(_LI("Setting up node %s to boot from virtual media"), task.node.uuid) _detach_virtual_cd(task.node) _detach_virtual_fd(task.node) if parameters: floppy_image_filename = _prepare_floppy_image(task, parameters) _attach_virtual_fd(task.node, floppy_image_filename) _attach_virtual_cd(task.node, bootable_iso_filename) def _cleanup_vmedia_boot(task): """Cleans a node after a virtual media boot. This method cleans up a node after a virtual media boot. It deletes floppy and cdrom images if they exist in NFS/CIFS server. It also ejects both the virtual media cdrom and the virtual media floppy. :param task: a TaskManager instance containing the node to act on. :raises: IRMCOperationError if ejecting virtual media failed. """ LOG.debug("Cleaning up node %s after virtual media boot", task.node.uuid) node = task.node _detach_virtual_cd(node) _detach_virtual_fd(node) _remove_share_file(_get_floppy_image_name(node)) _remove_share_file(_get_deploy_iso_name(node)) def _remove_share_file(share_filename): """Remove given file from the share file system. :param share_filename: a file name to be removed. """ share_fullpathname = os.path.join( CONF.irmc.remote_image_share_name, share_filename) ironic_utils.unlink_without_raise(share_fullpathname) def _attach_virtual_cd(node, bootable_iso_filename): """Attaches the given url as virtual media on the node. :param node: an ironic node object. :param bootable_iso_filename: a bootable ISO image to attach to. The iso file should be present in NFS/CIFS server. 
:raises: IRMCOperationError if attaching virtual media failed. """ try: irmc_client = irmc_common.get_irmc_client(node) cd_set_params = scci.get_virtual_cd_set_params_cmd( CONF.irmc.remote_image_server, CONF.irmc.remote_image_user_domain, scci.get_share_type(CONF.irmc.remote_image_share_type), CONF.irmc.remote_image_share_name, bootable_iso_filename, CONF.irmc.remote_image_user_name, CONF.irmc.remote_image_user_password) irmc_client(cd_set_params, async=False) irmc_client(scci.MOUNT_CD, async=False) except scci.SCCIClientError as irmc_exception: LOG.exception(_LE("Error while inserting virtual cdrom " "into node %(uuid)s. Error: %(error)s"), {'uuid': node.uuid, 'error': irmc_exception}) operation = _("Inserting virtual cdrom") raise exception.IRMCOperationError(operation=operation, error=irmc_exception) LOG.info(_LI("Attached virtual cdrom successfully" " for node %s"), node.uuid) def _detach_virtual_cd(node): """Detaches the virtual cdrom from the node. :param node: an ironic node object. :raises: IRMCOperationError if ejecting the virtual cdrom failed. """ try: irmc_client = irmc_common.get_irmc_client(node) irmc_client(scci.UNMOUNT_CD) except scci.SCCIClientError as irmc_exception: LOG.exception(_LE("Error while ejecting virtual cdrom " "from node %(uuid)s. Error: %(error)s"), {'uuid': node.uuid, 'error': irmc_exception}) operation = _("Ejecting virtual cdrom") raise exception.IRMCOperationError(operation=operation, error=irmc_exception) LOG.info(_LI("Detached virtual cdrom successfully" " for node %s"), node.uuid) def _attach_virtual_fd(node, floppy_image_filename): """Attaches a virtual floppy image to the node. :param node: an ironic node object. :param floppy_image_filename: a floppy image to attach. The image file should be present on the NFS/CIFS server. :raises: IRMCOperationError if inserting the virtual floppy failed. """ try: irmc_client = irmc_common.get_irmc_client(node) fd_set_params = scci.get_virtual_fd_set_params_cmd( CONF.irmc.remote_image_server, CONF.irmc.remote_image_user_domain, scci.get_share_type(CONF.irmc.remote_image_share_type), CONF.irmc.remote_image_share_name, floppy_image_filename, CONF.irmc.remote_image_user_name, CONF.irmc.remote_image_user_password) irmc_client(fd_set_params, async=False) irmc_client(scci.MOUNT_FD, async=False) except scci.SCCIClientError as irmc_exception: LOG.exception(_LE("Error while inserting virtual floppy " "into node %(uuid)s. Error: %(error)s"), {'uuid': node.uuid, 'error': irmc_exception}) operation = _("Inserting virtual floppy") raise exception.IRMCOperationError(operation=operation, error=irmc_exception) LOG.info(_LI("Attached virtual floppy successfully" " for node %s"), node.uuid) def _detach_virtual_fd(node): """Detaches the virtual media floppy from the node. :param node: an ironic node object. :raises: IRMCOperationError if ejecting the virtual media floppy failed. """ try: irmc_client = irmc_common.get_irmc_client(node) irmc_client(scci.UNMOUNT_FD) except scci.SCCIClientError as irmc_exception: LOG.exception(_LE("Error while ejecting virtual floppy " "from node %(uuid)s. Error: %(error)s"), {'uuid': node.uuid, 'error': irmc_exception}) operation = _("Ejecting virtual floppy") raise exception.IRMCOperationError(operation=operation, error=irmc_exception) LOG.info(_LI("Detached virtual floppy successfully" " for node %s"), node.uuid) def check_share_fs_mounted(): """Check if the shared file system (NFS or CIFS) is mounted. :raises: InvalidParameterValue, if config option has invalid value. :raises: IRMCSharedFileSystemNotMounted, if shared file system is not mounted.
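A minimal usage sketch (assumes the NFS/CIFS share is already mounted at ``CONF.irmc.remote_image_share_root``)::

    check_share_fs_mounted()
    # returns None if the share root is a mount point; otherwise
    # raises IRMCSharedFileSystemNotMounted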
""" _parse_config_option() if not os.path.ismount(CONF.irmc.remote_image_share_root): raise exception.IRMCSharedFileSystemNotMounted( share=CONF.irmc.remote_image_share_root) class IRMCVirtualMediaBoot(base.BootInterface): """iRMC Virtual Media boot-related actions.""" def __init__(self): """Constructor of IRMCVirtualMediaBoot. :raises: IRMCSharedFileSystemNotMounted, if shared file system is not mounted. :raises: InvalidParameterValue, if config option has invalid value. """ check_share_fs_mounted() super(IRMCVirtualMediaBoot, self).__init__() def get_properties(self): return COMMON_PROPERTIES def validate(self, task): """Validate the deployment information for the task's node. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue, if config option has invalid value. :raises: IRMCSharedFileSystemNotMounted, if shared file system is not mounted. :raises: InvalidParameterValue, if some information is invalid. :raises: MissingParameterValue if 'kernel_id' and 'ramdisk_id' are missing in the Glance image, or if 'kernel' and 'ramdisk' are missing in the Non Glance image. """ check_share_fs_mounted() d_info = _parse_deploy_info(task.node) if task.node.driver_internal_info.get('is_whole_disk_image'): props = [] elif service_utils.is_glance_image(d_info['image_source']): props = ['kernel_id', 'ramdisk_id'] else: props = ['kernel', 'ramdisk'] deploy_utils.validate_image_properties(task.context, d_info, props) def prepare_ramdisk(self, task, ramdisk_params): """Prepares the deploy ramdisk using virtual media. Prepares the options for the deployment ramdisk, sets the node to boot from virtual media cdrom. :param task: a TaskManager instance containing the node to act on. :param ramdisk_params: the options to be passed to the deploy ramdisk. :raises: ImageRefValidationFailed if no image service can handle specified href. :raises: ImageCreationFailed, if it failed while creating the floppy image. :raises: InvalidParameterValue if the validation of the PowerInterface or ManagementInterface fails. :raises: IRMCOperationError, if some operation on iRMC fails. """ deploy_nic_mac = deploy_utils.get_single_nic_with_vif_port_id(task) ramdisk_params['BOOTIF'] = deploy_nic_mac _setup_deploy_iso(task, ramdisk_params) def clean_up_ramdisk(self, task): """Cleans up the boot of ironic ramdisk. This method cleans up the environment that was setup for booting the deploy ramdisk. :param task: a task from TaskManager. :returns: None :raises: IRMCOperationError if iRMC operation failed. """ _cleanup_vmedia_boot(task) def prepare_instance(self, task): """Prepares the boot of instance. This method prepares the boot of the instance after reading relevant information from the node's database. :param task: a task from TaskManager. :returns: None """ _cleanup_vmedia_boot(task) node = task.node iwdi = node.driver_internal_info.get('is_whole_disk_image') if deploy_utils.get_boot_option(node) == "local" or iwdi: manager_utils.node_set_boot_device(task, boot_devices.DISK, persistent=True) else: driver_internal_info = node.driver_internal_info root_uuid_or_disk_id = driver_internal_info['root_uuid_or_disk_id'] self._configure_vmedia_boot(task, root_uuid_or_disk_id) def clean_up_instance(self, task): """Cleans up the boot of instance. This method cleans up the environment that was setup for booting the instance. :param task: a task from TaskManager. :returns: None :raises: IRMCOperationError if iRMC operation failed. 
""" _remove_share_file(_get_boot_iso_name(task.node)) driver_internal_info = task.node.driver_internal_info driver_internal_info.pop('irmc_boot_iso', None) driver_internal_info.pop('root_uuid_or_disk_id', None) task.node.driver_internal_info = driver_internal_info task.node.save() _cleanup_vmedia_boot(task) def _configure_vmedia_boot(self, task, root_uuid_or_disk_id): """Configure vmedia boot for the node.""" node = task.node _prepare_boot_iso(task, root_uuid_or_disk_id) _setup_vmedia_for_boot( task, node.driver_internal_info['irmc_boot_iso']) manager_utils.node_set_boot_device(task, boot_devices.CDROM, persistent=True) ironic-5.1.0/ironic/drivers/modules/irmc/common.py0000664000567000056710000002172212674513466023370 0ustar jenkinsjenkins00000000000000# Copyright 2015 FUJITSU LIMITED # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Common functionalities shared between different iRMC modules. """ import six from oslo_config import cfg from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ scci = importutils.try_import('scciclient.irmc.scci') opts = [ cfg.PortOpt('port', default=443, choices=[443, 80], help=_('Port to be used for iRMC operations')), cfg.StrOpt('auth_method', default='basic', choices=['basic', 'digest'], help=_('Authentication method to be used for iRMC ' 'operations')), cfg.IntOpt('client_timeout', default=60, help=_('Timeout (in seconds) for iRMC operations')), cfg.StrOpt('sensor_method', default='ipmitool', choices=['ipmitool', 'scci'], help=_('Sensor data retrieval method.')), cfg.StrOpt('snmp_version', default='v2c', choices=['v1', 'v2c', 'v3'], help=_('SNMP protocol version')), cfg.PortOpt('snmp_port', default=161, help=_('SNMP port')), cfg.StrOpt('snmp_community', default='public', help=_('SNMP community. Required for versions "v1" and "v2c"')), cfg.StrOpt('snmp_security', help=_('SNMP security name. Required for version "v3"')), ] CONF = cfg.CONF CONF.register_opts(opts, group='irmc') REQUIRED_PROPERTIES = { 'irmc_address': _("IP address or hostname of the iRMC. Required."), 'irmc_username': _("Username for the iRMC with administrator privileges. " "Required."), 'irmc_password': _("Password for irmc_username. Required."), } OPTIONAL_PROPERTIES = { 'irmc_port': _("Port to be used for iRMC operations; either 80 or 443. " "The default value is 443. Optional."), 'irmc_auth_method': _("Authentication method for iRMC operations; " "either 'basic' or 'digest'. The default value is " "'basic'. Optional."), 'irmc_client_timeout': _("Timeout (in seconds) for iRMC operations. " "The default value is 60. Optional."), 'irmc_sensor_method': _("Sensor data retrieval method; either " "'ipmitool' or 'scci'. The default value is " "'ipmitool'. Optional."), 'irmc_snmp_version': _("SNMP protocol version; either 'v1', 'v2c', or " "'v3'. The default value is 'v2c'. Optional."), 'irmc_snmp_port': _("SNMP port. The default is 161. Optional."), 'irmc_snmp_community': _("SNMP community required for versions 'v1' and " "'v2c'. The default value is 'public'. 
" "Optional."), 'irmc_snmp_security': _("SNMP security name required for version 'v3'. " "Optional."), } COMMON_PROPERTIES = REQUIRED_PROPERTIES.copy() COMMON_PROPERTIES.update(OPTIONAL_PROPERTIES) def parse_driver_info(node): """Gets the specific Node driver info. This method validates whether the 'driver_info' property of the supplied node contains the required information for this driver. :param node: An ironic node object. :returns: A dict containing information from driver_info and default values. :raises: InvalidParameterValue if invalid value is contained in the 'driver_info' property. :raises: MissingParameterValue if some mandatory key is missing in the 'driver_info' property. """ info = node.driver_info missing_info = [key for key in REQUIRED_PROPERTIES if not info.get(key)] if missing_info: raise exception.MissingParameterValue(_( "Missing the following iRMC parameters in node's" " driver_info: %s.") % missing_info) req = {key: value for key, value in info.items() if key in REQUIRED_PROPERTIES} # corresponding config names don't have 'irmc_' prefix opt = {param: info.get(param, CONF.irmc.get(param[len('irmc_'):])) for param in OPTIONAL_PROPERTIES} d_info = dict(req, **opt) error_msgs = [] if (d_info['irmc_auth_method'].lower() not in ('basic', 'digest')): error_msgs.append( _("Value '%s' is not supported for 'irmc_auth_method'.") % d_info['irmc_auth_method']) if d_info['irmc_port'] not in (80, 443): error_msgs.append( _("Value '%s' is not supported for 'irmc_port'.") % d_info['irmc_port']) if not isinstance(d_info['irmc_client_timeout'], int): error_msgs.append( _("Value '%s' is not an integer for 'irmc_client_timeout'") % d_info['irmc_client_timeout']) if d_info['irmc_sensor_method'].lower() not in ('ipmitool', 'scci'): error_msgs.append( _("Value '%s' is not supported for 'irmc_sensor_method'.") % d_info['irmc_sensor_method']) if d_info['irmc_snmp_version'].lower() not in ('v1', 'v2c', 'v3'): error_msgs.append( _("Value '%s' is not supported for 'irmc_snmp_version'.") % d_info['irmc_snmp_version']) if not isinstance(d_info['irmc_snmp_port'], int): error_msgs.append( _("Value '%s' is not an integer for 'irmc_snmp_port'") % d_info['irmc_snmp_port']) if (d_info['irmc_snmp_version'].lower() in ('v1', 'v2c') and d_info['irmc_snmp_community'] and not isinstance(d_info['irmc_snmp_community'], six.string_types)): error_msgs.append( _("Value '%s' is not a string for 'irmc_snmp_community'") % d_info['irmc_snmp_community']) if d_info['irmc_snmp_version'].lower() == 'v3': if d_info['irmc_snmp_security']: if not isinstance(d_info['irmc_snmp_security'], six.string_types): error_msgs.append( _("Value '%s' is not a string for " "'irmc_snmp_security'") % d_info['irmc_snmp_security']) else: error_msgs.append( _("'irmc_snmp_security' has to be set for SNMP version 3.")) if error_msgs: msg = (_("The following errors were encountered while parsing " "driver_info:\n%s") % "\n".join(error_msgs)) raise exception.InvalidParameterValue(msg) return d_info def get_irmc_client(node): """Gets an iRMC SCCI client. Given an ironic node object, this method gives back a iRMC SCCI client to do operations on the iRMC. :param node: An ironic node object. :returns: scci_cmd partial function which takes a SCCI command param. :raises: InvalidParameterValue on invalid inputs. 
:raises: MissingParameterValue if some mandatory information is missing on the node. """ driver_info = parse_driver_info(node) scci_client = scci.get_client( driver_info['irmc_address'], driver_info['irmc_username'], driver_info['irmc_password'], port=driver_info['irmc_port'], auth_method=driver_info['irmc_auth_method'], client_timeout=driver_info['irmc_client_timeout']) return scci_client def update_ipmi_properties(task): """Update ipmi properties in the node's driver_info. :param task: A task from TaskManager. """ node = task.node info = node.driver_info # updating ipmi credentials info['ipmi_address'] = info.get('irmc_address') info['ipmi_username'] = info.get('irmc_username') info['ipmi_password'] = info.get('irmc_password') # saving ipmi credentials to task object task.node.driver_info = info def get_irmc_report(node): """Gets iRMC SCCI report. Given an ironic node object, this method gives back an iRMC SCCI report. :param node: An ironic node object. :returns: An xml.etree.ElementTree object. :raises: InvalidParameterValue on invalid inputs. :raises: MissingParameterValue if some mandatory information is missing on the node. :raises: scci.SCCIInvalidInputError if required parameters are invalid. :raises: scci.SCCIClientError if SCCI failed. """ driver_info = parse_driver_info(node) return scci.get_report( driver_info['irmc_address'], driver_info['irmc_username'], driver_info['irmc_password'], port=driver_info['irmc_port'], auth_method=driver_info['irmc_auth_method'], client_timeout=driver_info['irmc_client_timeout']) ironic-5.1.0/ironic/drivers/modules/irmc/power.py0000664000567000056710000001262012674513466023231 0ustar jenkinsjenkins00000000000000# Copyright 2015 FUJITSU LIMITED # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ iRMC Power Driver using the Base Server Profile """ from oslo_config import cfg from oslo_log import log as logging from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common import states from ironic.conductor import task_manager from ironic.drivers import base from ironic.drivers.modules import ipmitool from ironic.drivers.modules.irmc import boot as irmc_boot from ironic.drivers.modules.irmc import common as irmc_common scci = importutils.try_import('scciclient.irmc.scci') CONF = cfg.CONF LOG = logging.getLogger(__name__) if scci: STATES_MAP = {states.POWER_OFF: scci.POWER_OFF, states.POWER_ON: scci.POWER_ON, states.REBOOT: scci.POWER_RESET} def _set_power_state(task, target_state): """Turns the server power on/off or does a reboot. :param task: a TaskManager instance containing the node to act on. :param target_state: target state of the node. :raises: InvalidParameterValue if an invalid power state was specified.
:raises: MissingParameterValue if some mandatory information is missing on the node. :raises: IRMCOperationError on an error from SCCI. """ node = task.node irmc_client = irmc_common.get_irmc_client(node) if target_state in (states.POWER_ON, states.REBOOT): irmc_boot.attach_boot_iso_if_needed(task) try: irmc_client(STATES_MAP[target_state]) except KeyError: msg = _("_set_power_state called with invalid power state " "'%s'") % target_state raise exception.InvalidParameterValue(msg) except scci.SCCIClientError as irmc_exception: LOG.error(_LE("iRMC set_power_state failed to set state to %(tstate)s " "for node %(node_id)s with error: %(error)s"), {'tstate': target_state, 'node_id': node.uuid, 'error': irmc_exception}) operation = _('iRMC set_power_state') raise exception.IRMCOperationError(operation=operation, error=irmc_exception) class IRMCPower(base.PowerInterface): """Interface for power-related actions.""" def get_properties(self): """Return the properties of the interface. :returns: dictionary of : entries. """ return irmc_common.COMMON_PROPERTIES def validate(self, task): """Validate the driver-specific Node power info. This method validates whether the 'driver_info' property of the supplied node contains the required information for this driver to manage the power state of the node. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue if required driver_info attribute is missing or invalid on the node. :raises: MissingParameterValue if a required parameter is missing. """ irmc_common.parse_driver_info(task.node) def get_power_state(self, task): """Return the power state of the task's node. :param task: a TaskManager instance containing the node to act on. :returns: a power state. One of :mod:`ironic.common.states`. :raises: InvalidParameterValue if required ipmi parameters are missing. :raises: MissingParameterValue if a required parameter is missing. :raises: IPMIFailure on an error from ipmitool (from _power_status call). """ irmc_common.update_ipmi_properties(task) ipmi_power = ipmitool.IPMIPower() return ipmi_power.get_power_state(task) @task_manager.require_exclusive_lock def set_power_state(self, task, power_state): """Set the power state of the task's node. :param task: a TaskManager instance containing the node to act on. :param power_state: Any power state from :mod:`ironic.common.states`. :raises: InvalidParameterValue if an invalid power state was specified. :raises: MissingParameterValue if some mandatory information is missing on the node. :raises: IRMCOperationError if it failed to set the power state. """ _set_power_state(task, power_state) @task_manager.require_exclusive_lock def reboot(self, task): """Perform a hard reboot of the task's node. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue if an invalid power state was specified. :raises: IRMCOperationError if it failed to set the power state.
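Illustrative behaviour (a sketch, not a test)::

    IRMCPower().reboot(task)
    # node powered on  -> issues scci.POWER_RESET via _set_power_state
    # node powered off -> issues scci.POWER_ON instead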
""" current_pstate = self.get_power_state(task) if current_pstate == states.POWER_ON: _set_power_state(task, states.REBOOT) elif current_pstate == states.POWER_OFF: _set_power_state(task, states.POWER_ON) ironic-5.1.0/ironic/drivers/modules/irmc/__init__.py0000664000567000056710000000000012674513466023621 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/modules/irmc/inspect.py0000664000567000056710000001463712674513466023554 0ustar jenkinsjenkins00000000000000# Copyright 2015 FUJITSU LIMITED # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ iRMC Inspect Interface """ from oslo_log import log as logging from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LI from ironic.common.i18n import _LW from ironic.common import states from ironic.drivers import base from ironic.drivers.modules.irmc import common as irmc_common from ironic.drivers.modules import snmp from ironic import objects scci = importutils.try_import('scciclient.irmc.scci') LOG = logging.getLogger(__name__) """ SC2.mib: sc2UnitNodeClass returns NIC type. sc2UnitNodeClass OBJECT-TYPE SYNTAX INTEGER { unknown(1), primary(2), secondary(3), management-blade(4), secondary-remote(5), secondary-remote-backup(6), baseboard-controller(7) } ACCESS read-only STATUS mandatory DESCRIPTION "Management node class: primary: local operating system interface secondary: local management controller LAN interface management-blade: management blade interface (in a blade server chassis) secondary-remote: remote management controller (in an RSB concentrator environment) secondary-remote-backup: backup remote management controller baseboard-controller: local baseboard management controller (BMC)" ::= { sc2ManagementNodes 8 } """ NODE_CLASS_OID_VALUE = { 'unknown': 1, 'primary': 2, 'secondary': 3, 'management-blade': 4, 'secondary-remote': 5, 'secondary-remote-backup': 6, 'baseboard-controller': 7 } NODE_CLASS_OID = '1.3.6.1.4.1.231.2.10.2.2.10.3.1.1.8.1' """ SC2.mib: sc2UnitNodeMacAddress returns NIC MAC address sc2UnitNodeMacAddress OBJECT-TYPE SYNTAX PhysAddress ACCESS read-only STATUS mandatory DESCRIPTION "Management node hardware (MAC) address" ::= { sc2ManagementNodes 9 } """ MAC_ADDRESS_OID = '1.3.6.1.4.1.231.2.10.2.2.10.3.1.1.9.1' def _get_mac_addresses(node): """Get mac addresses of the node. :param node: node object. :raises: SNMPFailure if SNMP operation failed. :returns: a list of mac addresses. """ d_info = irmc_common.parse_driver_info(node) snmp_client = snmp.SNMPClient(d_info['irmc_address'], d_info['irmc_snmp_port'], d_info['irmc_snmp_version'], d_info['irmc_snmp_community'], d_info['irmc_snmp_security']) node_classes = snmp_client.get_next(NODE_CLASS_OID) mac_addresses = snmp_client.get_next(MAC_ADDRESS_OID) return [a for c, a in zip(node_classes, mac_addresses) if c == NODE_CLASS_OID_VALUE['primary']] def _inspect_hardware(node): """Inspect the node and get hardware information. :param node: node object. 
:raises: HardwareInspectionFailure, if unable to get essential hardware properties. :returns: a tuple of a dictionary and a list; the dictionary contains the keys in IRMCInspect.ESSENTIAL_PROPERTIES with their inspected values, and the list contains MAC addresses. """ try: report = irmc_common.get_irmc_report(node) props = scci.get_essential_properties( report, IRMCInspect.ESSENTIAL_PROPERTIES) macs = _get_mac_addresses(node) except (scci.SCCIInvalidInputError, scci.SCCIClientError, exception.SNMPFailure) as e: error = (_("Inspection failed for node %(node_id)s " "with the following error: %(error)s") % {'node_id': node.uuid, 'error': e}) raise exception.HardwareInspectionFailure(error=error) return (props, macs) class IRMCInspect(base.InspectInterface): """Interface for out-of-band inspection.""" def get_properties(self): """Return the properties of the interface. :returns: dictionary of : entries. """ return irmc_common.COMMON_PROPERTIES def validate(self, task): """Validate the driver-specific inspection information. This method validates whether the 'driver_info' property of the supplied node contains the required information for this driver. :param task: a TaskManager instance containing the node to act on. :raises: InvalidParameterValue if required driver_info attribute is missing or invalid on the node. :raises: MissingParameterValue if a required parameter is missing. """ irmc_common.parse_driver_info(task.node) def inspect_hardware(self, task): """Inspect hardware. Inspect hardware to obtain the essential hardware properties and MAC addresses. :param task: a task from TaskManager. :raises: HardwareInspectionFailure, if hardware inspection failed. :returns: states.MANAGEABLE, if hardware inspection succeeded. """ node = task.node (props, macs) = _inspect_hardware(node) node.properties = dict(node.properties, **props) node.save() for mac in macs: try: new_port = objects.Port(task.context, address=mac, node_id=node.id) new_port.create() LOG.info(_LI("Port created for MAC address %(address)s " "for node %(node_uuid)s during inspection"), {'address': mac, 'node_uuid': node.uuid}) except exception.MACAlreadyExists: LOG.warning(_LW("Port already existed for MAC address " "%(address)s for node %(node_uuid)s " "during inspection"), {'address': mac, 'node_uuid': node.uuid}) LOG.info(_LI("Node %s inspected"), node.uuid) return states.MANAGEABLE ironic-5.1.0/ironic/drivers/modules/irmc/management.py0000664000567000056710000002171112674513466024212 0ustar jenkinsjenkins00000000000000# Copyright 2015 FUJITSU LIMITED # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
""" iRMC Management Driver """ from oslo_log import log as logging from oslo_utils import importutils from ironic.common import boot_devices from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.conductor import task_manager from ironic.drivers.modules import ipmitool from ironic.drivers.modules.irmc import common as irmc_common from ironic.drivers import utils as driver_utils scci = importutils.try_import('scciclient.irmc.scci') LOG = logging.getLogger(__name__) # Boot Option Parameters #5 Data2 defined in # Set/Get System Boot Options Command, IPMI spec v2.0. _BOOTPARAM5_DATA2 = {boot_devices.PXE: '0x04', boot_devices.DISK: '0x08', boot_devices.CDROM: '0x14', boot_devices.BIOS: '0x18', boot_devices.SAFE: '0x0c', } def _get_sensors_data(task): """Get sensors data method. It gets sensor data from the task's node via SCCI, and convert the data from XML to the dict format. :param task: A TaskManager instance. :raises: FailedToGetSensorData when getting the sensor data fails. :returns: Returns a consistent formatted dict of sensor data grouped by sensor type, which can be processed by Ceilometer. """ try: report = irmc_common.get_irmc_report(task.node) sensor = scci.get_sensor_data(report) except (exception.InvalidParameterValue, exception.MissingParameterValue, scci.SCCIInvalidInputError, scci.SCCIClientError) as e: LOG.error(_LE("SCCI get sensor data failed for node %(node_id)s " "with the following error: %(error)s"), {'node_id': task.node.uuid, 'error': e}) raise exception.FailedToGetSensorData( node=task.node.uuid, error=e) sensors_data = {} for sdr in sensor: sensor_type_name = sdr.find('./Data/Decoded/Sensor/TypeName') sensor_type_number = sdr.find('./Data/Decoded/Sensor/Type') entity_name = sdr.find('./Data/Decoded/Entity/Name') entity_id = sdr.find('./Data/Decoded/Entity/ID') if None in (sensor_type_name, sensor_type_number, entity_name, entity_id): continue sensor_type = ('%s (%s)' % (sensor_type_name.text, sensor_type_number.text)) sensor_id = ('%s (%s)' % (entity_name.text, entity_id.text)) reading_value = sdr.find( './Data/Decoded/Sensor/Thresholds/*/Normalized') reading_value_text = "None" if ( reading_value is None) else str(reading_value.text) reading_units = sdr.find('./Data/Decoded/Sensor/BaseUnitName') reading_units_text = "None" if ( reading_units is None) else str(reading_units.text) sensor_reading = '%s %s' % (reading_value_text, reading_units_text) sensors_data.setdefault(sensor_type, {})[sensor_id] = { 'Sensor Reading': sensor_reading, 'Sensor ID': sensor_id, 'Units': reading_units_text, } return sensors_data class IRMCManagement(ipmitool.IPMIManagement): def get_properties(self): """Return the properties of the interface. :returns: Dictionary of : entries. """ return irmc_common.COMMON_PROPERTIES def validate(self, task): """Validate the driver-specific management information. This method validates whether the 'driver_info' property of the supplied node contains the required information for this driver. :param task: A TaskManager instance containing the node to act on. :raises: InvalidParameterValue if required parameters are invalid. :raises: MissingParameterValue if a required parameter is missing. """ irmc_common.parse_driver_info(task.node) irmc_common.update_ipmi_properties(task) super(IRMCManagement, self).validate(task) @task_manager.require_exclusive_lock def set_boot_device(self, task, device, persistent=False): """Set the boot device for a node. 
Set the boot device to use on next reboot of the node. :param task: A task from TaskManager. :param device: The boot device, one of the supported devices listed in :mod:`ironic.common.boot_devices`. :param persistent: Boolean value. True if the boot device will persist to all future boots, False if not. Default: False. :raises: InvalidParameterValue if an invalid boot device is specified. :raises: MissingParameterValue if a required parameter is missing. :raises: IPMIFailure on an error from ipmitool. """ if driver_utils.get_node_capability(task.node, 'boot_mode') == 'uefi': if device not in self.get_supported_boot_devices(task): raise exception.InvalidParameterValue(_( "Invalid boot device %s specified.") % device) timeout_disable = "0x00 0x08 0x03 0x08" ipmitool.send_raw(task, timeout_disable) # note(naohirot): As of ipmitool version 1.8.13, # in case of chassis command, the efiboot option doesn't # get set with persistent at the same time. # $ ipmitool chassis bootdev pxe options=efiboot,persistent # In case of raw command, however, both can be set at the # same time. # $ ipmitool raw 0x00 0x08 0x05 0xe0 0x04 0x00 0x00 0x00 # data1^^ ^^data2 # ipmi cmd '0x08' : Set System Boot Options # data1 '0xe0' : persistent and uefi # data1 '0xa0' : next boot only and uefi # data1 = '0xe0' if persistent else '0xa0' bootparam5 = '0x00 0x08 0x05 %s %s 0x00 0x00 0x00' cmd08 = bootparam5 % (data1, _BOOTPARAM5_DATA2[device]) ipmitool.send_raw(task, cmd08) else: super(IRMCManagement, self).set_boot_device( task, device, persistent) def get_sensors_data(self, task): """Get sensors data method. It gets sensor data from the task's node via SCCI, and converts the data from XML to the dict format. :param task: A TaskManager instance. :raises: FailedToGetSensorData when getting the sensor data fails. :raises: FailedToParseSensorData when parsing sensor data fails. :raises: InvalidParameterValue if required parameters are invalid. :raises: MissingParameterValue if a required parameter is missing. :returns: Returns a consistent formatted dict of sensor data grouped by sensor type, which can be processed by Ceilometer. Example:: { 'Sensor Type 1': { 'Sensor ID 1': { 'Sensor Reading': 'Value1 Units1', 'Sensor ID': 'Sensor ID 1', 'Units': 'Units1' }, 'Sensor ID 2': { 'Sensor Reading': 'Value2 Units2', 'Sensor ID': 'Sensor ID 2', 'Units': 'Units2' } }, 'Sensor Type 2': { 'Sensor ID 3': { 'Sensor Reading': 'Value3 Units3', 'Sensor ID': 'Sensor ID 3', 'Units': 'Units3' }, 'Sensor ID 4': { 'Sensor Reading': 'Value4 Units4', 'Sensor ID': 'Sensor ID 4', 'Units': 'Units4' } } } """ # irmc_common.parse_driver_info() makes sure that # d_info['irmc_sensor_method'] is either 'scci' or 'ipmitool'. d_info = irmc_common.parse_driver_info(task.node) sensor_method = d_info['irmc_sensor_method'] if sensor_method == 'scci': return _get_sensors_data(task) elif sensor_method == 'ipmitool': return super(IRMCManagement, self).get_sensors_data(task) ironic-5.1.0/ironic/drivers/modules/cimc/0000775000567000056710000000000012674513633021477 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/modules/cimc/common.py0000664000567000056710000000522512674513466023351 0ustar jenkinsjenkins00000000000000# Copyright 2015, Cisco Systems. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from contextlib import contextmanager from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.drivers.modules import deploy_utils REQUIRED_PROPERTIES = { 'cimc_address': _('IP or Hostname of the CIMC. Required.'), 'cimc_username': _('CIMC Manager admin username. Required.'), 'cimc_password': _('CIMC Manager password. Required.'), } COMMON_PROPERTIES = REQUIRED_PROPERTIES imcsdk = importutils.try_import('ImcSdk') def parse_driver_info(node): """Parses and creates Cisco driver info. :param node: An Ironic node object. :returns: dictionary that contains node.driver_info parameter/values. :raises: MissingParameterValue if any required parameters are missing. """ info = {} for param in REQUIRED_PROPERTIES: info[param] = node.driver_info.get(param) error_msg = (_("%s driver requires these parameters to be set in the " "node's driver_info.") % node.driver) deploy_utils.check_for_missing_params(info, error_msg) return info def handle_login(task, handle, info): """Login to the CIMC handle. Run login on the CIMC handle, catching any ImcException and reraising it as an ironic CIMCException. :param handle: A CIMC handle. :param info: A list of driver info as produced by parse_driver_info. :raises: CIMCException if there error logging in. """ try: handle.login(info['cimc_address'], info['cimc_username'], info['cimc_password']) except imcsdk.ImcException as e: raise exception.CIMCException(node=task.node.uuid, error=e) @contextmanager def cimc_handle(task): """Context manager for creating a CIMC handle and logging into it. :param task: The current task object. :raises: CIMCException if login fails :yields: A CIMC Handle for the node in the task. """ info = parse_driver_info(task.node) handle = imcsdk.ImcHandle() handle_login(task, handle, info) try: yield handle finally: handle.logout() ironic-5.1.0/ironic/drivers/modules/cimc/power.py0000664000567000056710000001554412674513466023222 0ustar jenkinsjenkins00000000000000# Copyright 2015, Cisco Systems. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
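""" CIMC Power Driver using the ImcSdk library """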
from oslo_config import cfg from oslo_service import loopingcall from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.common import states from ironic.conductor import task_manager from ironic.drivers import base from ironic.drivers.modules.cimc import common imcsdk = importutils.try_import('ImcSdk') opts = [ cfg.IntOpt('max_retry', default=6, help=_('Number of times a power operation needs to be ' 'retried')), cfg.IntOpt('action_interval', default=10, help=_('Amount of time in seconds to wait in between power ' 'operations')), ] CONF = cfg.CONF CONF.register_opts(opts, group='cimc') if imcsdk: CIMC_TO_IRONIC_POWER_STATE = { imcsdk.ComputeRackUnit.CONST_OPER_POWER_ON: states.POWER_ON, imcsdk.ComputeRackUnit.CONST_OPER_POWER_OFF: states.POWER_OFF, } IRONIC_TO_CIMC_POWER_STATE = { states.POWER_ON: imcsdk.ComputeRackUnit.CONST_ADMIN_POWER_UP, states.POWER_OFF: imcsdk.ComputeRackUnit.CONST_ADMIN_POWER_DOWN, states.REBOOT: imcsdk.ComputeRackUnit.CONST_ADMIN_POWER_HARD_RESET_IMMEDIATE } def _wait_for_state_change(target_state, task): """Wait and check for the power state change :param target_state: The target state we are waiting for. :param task: a TaskManager instance containing the node to act on. :raises: CIMCException if there is an error communicating with CIMC """ store = {'state': None, 'retries': CONF.cimc.max_retry} def _wait(store): current_power_state = None with common.cimc_handle(task) as handle: try: rack_unit = handle.get_imc_managedobject( None, None, params={"Dn": "sys/rack-unit-1"} ) except imcsdk.ImcException as e: raise exception.CIMCException(node=task.node.uuid, error=e) else: current_power_state = rack_unit[0].get_attr("OperPower") store['state'] = CIMC_TO_IRONIC_POWER_STATE.get(current_power_state) if store['state'] == target_state: raise loopingcall.LoopingCallDone() store['retries'] -= 1 if store['retries'] <= 0: store['state'] = states.ERROR raise loopingcall.LoopingCallDone() timer = loopingcall.FixedIntervalLoopingCall(_wait, store) timer.start(interval=CONF.cimc.action_interval).wait() return store['state'] class Power(base.PowerInterface): def get_properties(self): """Return the properties of the interface. :returns: dictionary of : entries. """ return common.COMMON_PROPERTIES def validate(self, task): """Check if node.driver_info contains the required CIMC credentials. :param task: a TaskManager instance. :raises: InvalidParameterValue if required CIMC credentials are missing. """ common.parse_driver_info(task.node) def get_power_state(self, task): """Return the power state of the task's node. :param task: a TaskManager instance containing the node to act on. :raises: MissingParameterValue if a required parameter is missing. :returns: a power state. One of :mod:`ironic.common.states`. :raises: CIMCException if there is an error communicating with CIMC """ current_power_state = None with common.cimc_handle(task) as handle: try: rack_unit = handle.get_imc_managedobject( None, None, params={"Dn": "sys/rack-unit-1"} ) except imcsdk.ImcException as e: raise exception.CIMCException(node=task.node.uuid, error=e) else: current_power_state = rack_unit[0].get_attr("OperPower") return CIMC_TO_IRONIC_POWER_STATE.get(current_power_state, states.ERROR) @task_manager.require_exclusive_lock def set_power_state(self, task, pstate): """Set the power state of the task's node. :param task: a TaskManager instance containing the node to act on. :param pstate: Any power state from :mod:`ironic.common.states`. 
:raises: MissingParameterValue if a required parameter is missing. :raises: InvalidParameterValue if an invalid power state is passed. :raises: CIMCException if there is an error communicating with CIMC """ if pstate not in IRONIC_TO_CIMC_POWER_STATE: msg = _("set_power_state called for %(node)s with " "invalid state %(state)s") raise exception.InvalidParameterValue( msg % {"node": task.node.uuid, "state": pstate}) with common.cimc_handle(task) as handle: try: handle.set_imc_managedobject( None, class_id="ComputeRackUnit", params={ imcsdk.ComputeRackUnit.ADMIN_POWER: IRONIC_TO_CIMC_POWER_STATE[pstate], imcsdk.ComputeRackUnit.DN: "sys/rack-unit-1" }) except imcsdk.ImcException as e: raise exception.CIMCException(node=task.node.uuid, error=e) if pstate == states.REBOOT: pstate = states.POWER_ON state = _wait_for_state_change(pstate, task) if state != pstate: raise exception.PowerStateFailure(pstate=pstate) @task_manager.require_exclusive_lock def reboot(self, task): """Perform a hard reboot of the task's node. If the node is already powered on, then it reboots the node; if it's off, then the node will just be turned on. :param task: a TaskManager instance containing the node to act on. :raises: MissingParameterValue if a required parameter is missing. :raises: CIMCException if there is an error communicating with CIMC """ current_power_state = self.get_power_state(task) if current_power_state == states.POWER_ON: self.set_power_state(task, states.REBOOT) elif current_power_state == states.POWER_OFF: self.set_power_state(task, states.POWER_ON) ironic-5.1.0/ironic/drivers/modules/cimc/__init__.py0000664000567000056710000000000012674513466023602 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/modules/cimc/management.py0000664000567000056710000001372512674513466024175 0ustar jenkinsjenkins00000000000000# Copyright 2015, Cisco Systems. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_utils import importutils from ironic.common import boot_devices from ironic.common import exception from ironic.drivers import base from ironic.drivers.modules.cimc import common imcsdk = importutils.try_import('ImcSdk') CIMC_TO_IRONIC_BOOT_DEVICE = { 'storage-read-write': boot_devices.DISK, 'lan-read-only': boot_devices.PXE, 'vm-read-only': boot_devices.CDROM } IRONIC_TO_CIMC_BOOT_DEVICE = { boot_devices.DISK: ('lsbootStorage', 'storage-read-write', 'storage', 'read-write'), boot_devices.PXE: ('lsbootLan', 'lan-read-only', 'lan', 'read-only'), boot_devices.CDROM: ('lsbootVirtualMedia', 'vm-read-only', 'virtual-media', 'read-only') } class CIMCManagement(base.ManagementInterface): """CIMC Management Interface.""" def get_properties(self): """Return the properties of the interface. :returns: dictionary of : entries. """ return common.COMMON_PROPERTIES def validate(self, task): """Check if node.driver_info contains the required CIMC credentials. :param task: a TaskManager instance. :raises: InvalidParameterValue if required CIMC credentials are missing.
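Example of the expected driver_info keys (values are placeholders)::

    node.driver_info = {'cimc_address': '192.0.2.1',
                        'cimc_username': 'admin',
                        'cimc_password': 'password'}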
""" common.parse_driver_info(task.node) def get_supported_boot_devices(self, task): """Get a list of the supported boot devices. :param task: a task from TaskManager. :returns: A list with the supported boot devices defined in :mod:`ironic.common.boot_devices`. """ return list(CIMC_TO_IRONIC_BOOT_DEVICE.values()) def get_boot_device(self, task): """Get the current boot device for a node. Provides the current boot device of the node. Be aware that not all drivers support this. :param task: a task from TaskManager. :raises: MissingParameterValue if a required parameter is missing :raises: CIMCException if there is an error from CIMC :returns: a dictionary containing: :boot_device: the boot device, one of :mod:`ironic.common.boot_devices` or None if it is unknown. :persistent: Whether the boot device will persist to all future boots or not, None if it is unknown. """ with common.cimc_handle(task) as handle: method = imcsdk.ImcCore.ExternalMethod("ConfigResolveClass") method.Cookie = handle.cookie method.InDn = "sys/rack-unit-1" method.InHierarchical = "true" method.ClassId = "lsbootDef" try: resp = handle.xml_query(method, imcsdk.WriteXmlOption.DIRTY) except imcsdk.ImcException as e: raise exception.CIMCException(node=task.node.uuid, error=e) error = getattr(resp, 'error_code', None) if error: raise exception.CIMCException(node=task.node.uuid, error=error) bootDevs = resp.OutConfigs.child[0].child first_device = None for dev in bootDevs: try: if int(dev.Order) == 1: first_device = dev break except (ValueError, AttributeError): pass boot_device = (CIMC_TO_IRONIC_BOOT_DEVICE.get( first_device.Rn) if first_device else None) # Every boot device in CIMC is persistent right now persistent = True if boot_device else None return {'boot_device': boot_device, 'persistent': persistent} def set_boot_device(self, task, device, persistent=True): """Set the boot device for a node. Set the boot device to use on next reboot of the node. :param task: a task from TaskManager. :param device: the boot device, one of :mod:`ironic.common.boot_devices`. :param persistent: Every boot device in CIMC is persistent right now, so this value is ignored. :raises: InvalidParameterValue if an invalid boot device is specified. 
:raises: MissingParameterValue if a required parameter is missing. :raises: CIMCException if there is an error from CIMC """ with common.cimc_handle(task) as handle: dev = IRONIC_TO_CIMC_BOOT_DEVICE[device] method = imcsdk.ImcCore.ExternalMethod("ConfigConfMo") method.Cookie = handle.cookie method.Dn = "sys/rack-unit-1/boot-policy" method.InHierarchical = "true" config = imcsdk.Imc.ConfigConfig() bootMode = imcsdk.ImcCore.ManagedObject(dev[0]) bootMode.set_attr("access", dev[3]) bootMode.set_attr("type", dev[2]) bootMode.set_attr("Rn", dev[1]) bootMode.set_attr("order", "1") config.add_child(bootMode) method.InConfig = config try: resp = handle.xml_query(method, imcsdk.WriteXmlOption.DIRTY) except imcsdk.ImcException as e: raise exception.CIMCException(node=task.node.uuid, error=e) error = getattr(resp, 'error_code', None) if error: raise exception.CIMCException(node=task.node.uuid, error=error) def get_sensors_data(self, task): raise NotImplementedError() ironic-5.1.0/ironic/drivers/modules/elilo_efi_pxe_config.template0000664000567000056710000000214112674513466026453 0ustar jenkinsjenkins00000000000000default=deploy image={{pxe_options.deployment_aki_path}} label=deploy initrd={{pxe_options.deployment_ari_path}} append="selinux=0 disk={{ pxe_options.disk }} iscsi_target_iqn={{ pxe_options.iscsi_target_iqn }} deployment_id={{ pxe_options.deployment_id }} deployment_key={{ pxe_options.deployment_key }} ironic_api_url={{ pxe_options.ironic_api_url }} troubleshoot=0 text {{ pxe_options.pxe_append_params|default("", true) }} ip=%I:{{pxe_options.tftp_server}}:%G:%M:%H::on {% if pxe_options.root_device %}root_device={{ pxe_options.root_device }}{% endif %} ipa-api-url={{ pxe_options['ipa-api-url'] }} ipa-driver-name={{ pxe_options['ipa-driver-name'] }} boot_option={{ pxe_options.boot_option }} boot_mode={{ pxe_options['boot_mode'] }} coreos.configdrive=0" image={{pxe_options.aki_path}} label=boot_partition initrd={{pxe_options.ari_path}} append="root={{ ROOT }} ro text {{ pxe_options.pxe_append_params|default("", true) }} ip=%I:{{pxe_options.tftp_server}}:%G:%M:%H::on" image=chain.c32 label=boot_whole_disk append="mbr:{{ DISK_IDENTIFIER }}" ironic-5.1.0/ironic/drivers/modules/ipminative.py0000664000567000056710000006563212674513466023315 0ustar jenkinsjenkins00000000000000# coding=utf-8 # Copyright 2013 International Business Machines Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Ironic Native IPMI power manager.
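Talks to the BMC through the native pyghmi library rather than the ipmitool CLI. A minimal illustrative driver_info for a node (values are placeholders)::

    {'ipmi_address': '192.0.2.10',
     'ipmi_username': 'admin',
     'ipmi_password': 'password'}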
""" import os from ironic_lib import utils as ironic_utils from oslo_config import cfg from oslo_log import log as logging from oslo_utils import excutils from oslo_utils import importutils from ironic.common import boot_devices from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LW from ironic.common import states from ironic.common import utils from ironic.conductor import task_manager from ironic.drivers import base from ironic.drivers.modules import console_utils from ironic.drivers import utils as driver_utils pyghmi = importutils.try_import('pyghmi') if pyghmi: from pyghmi import exceptions as pyghmi_exception from pyghmi.ipmi import command as ipmi_command opts = [ cfg.IntOpt('retry_timeout', default=60, help=_('Maximum time in seconds to retry IPMI operations. ' 'There is a tradeoff when setting this value. Setting ' 'this too low may cause older BMCs to crash and require ' 'a hard reset. However, setting too high can cause the ' 'sync power state periodic task to hang when there are ' 'slow or unresponsive BMCs.')), cfg.IntOpt('min_command_interval', default=5, help=_('Minimum time, in seconds, between IPMI operations ' 'sent to a server. There is a risk with some hardware ' 'that setting this too low may cause the BMC to crash. ' 'Recommended setting is 5 seconds.')), ] CONF = cfg.CONF CONF.register_opts(opts, group='ipmi') LOG = logging.getLogger(__name__) REQUIRED_PROPERTIES = {'ipmi_address': _("IP of the node's BMC. Required."), 'ipmi_password': _("IPMI password. Required."), 'ipmi_username': _("IPMI username. Required.")} OPTIONAL_PROPERTIES = { 'ipmi_force_boot_device': _("Whether Ironic should specify the boot " "device to the BMC each time the server " "is turned on, eg. because the BMC is not " "capable of remembering the selected boot " "device across power cycles; default value " "is False. Optional.") } COMMON_PROPERTIES = REQUIRED_PROPERTIES.copy() COMMON_PROPERTIES.update(OPTIONAL_PROPERTIES) CONSOLE_PROPERTIES = { 'ipmi_terminal_port': _("node's UDP port to connect to. Only required for " "console access.") } _BOOT_DEVICES_MAP = { boot_devices.DISK: 'hd', boot_devices.PXE: 'network', boot_devices.CDROM: 'cdrom', boot_devices.BIOS: 'setup', } def _parse_driver_info(node): """Gets the bmc access info for the given node. :raises: MissingParameterValue when required ipmi credentials are missing. :raises: InvalidParameterValue when the IPMI terminal port is not an integer. """ info = node.driver_info or {} missing_info = [key for key in REQUIRED_PROPERTIES if not info.get(key)] if missing_info: raise exception.MissingParameterValue(_( "Missing the following IPMI credentials in node's" " driver_info: %s.") % missing_info) bmc_info = {} bmc_info['address'] = info.get('ipmi_address') bmc_info['username'] = info.get('ipmi_username') bmc_info['password'] = info.get('ipmi_password') bmc_info['force_boot_device'] = info.get('ipmi_force_boot_device', False) # get additional info bmc_info['uuid'] = node.uuid # terminal port must be an integer port = info.get('ipmi_terminal_port') if port is not None: port = utils.validate_network_port(port, 'ipmi_terminal_port') bmc_info['port'] = port return bmc_info def _console_pwfile_path(uuid): """Return the file path for storing the ipmi password.""" file_name = "%(uuid)s.pw" % {'uuid': uuid} return os.path.join(CONF.tempdir, file_name) def _power_on(driver_info): """Turn the power on for this node. :param driver_info: the bmc access info for a node. 
:returns: power state POWER_ON, one of :class:`ironic.common.states`. :raises: IPMIFailure when the native ipmi call fails. :raises: PowerStateFailure when invalid power state is returned from ipmi. """ msg = _("IPMI power on failed for node %(node_id)s with the " "following error: %(error)s") try: ipmicmd = ipmi_command.Command(bmc=driver_info['address'], userid=driver_info['username'], password=driver_info['password']) wait = CONF.ipmi.retry_timeout ret = ipmicmd.set_power('on', wait) except pyghmi_exception.IpmiException as e: error = msg % {'node_id': driver_info['uuid'], 'error': e} LOG.error(error) raise exception.IPMIFailure(error) state = ret.get('powerstate') if state == 'on': return states.POWER_ON else: error = _("bad response: %s") % ret LOG.error(msg, {'node_id': driver_info['uuid'], 'error': error}) raise exception.PowerStateFailure(pstate=states.POWER_ON) def _power_off(driver_info): """Turn the power off for this node. :param driver_info: the bmc access info for a node. :returns: power state POWER_OFF, one of :class:`ironic.common.states`. :raises: IPMIFailure when the native ipmi call fails. :raises: PowerStateFailure when invalid power state is returned from ipmi. """ msg = _("IPMI power off failed for node %(node_id)s with the " "following error: %(error)s") try: ipmicmd = ipmi_command.Command(bmc=driver_info['address'], userid=driver_info['username'], password=driver_info['password']) wait = CONF.ipmi.retry_timeout ret = ipmicmd.set_power('off', wait) except pyghmi_exception.IpmiException as e: error = msg % {'node_id': driver_info['uuid'], 'error': e} LOG.error(error) raise exception.IPMIFailure(error) state = ret.get('powerstate') if state == 'off': return states.POWER_OFF else: error = _("bad response: %s") % ret LOG.error(msg, {'node_id': driver_info['uuid'], 'error': error}) raise exception.PowerStateFailure(pstate=states.POWER_OFF) def _reboot(driver_info): """Reboot this node. If the power is off, turn it on. If the power is on, reset it. :param driver_info: the bmc access info for a node. :returns: power state POWER_ON, one of :class:`ironic.common.states`. :raises: IPMIFailure when the native ipmi call fails. :raises: PowerStateFailure when invalid power state is returned from ipmi. """ msg = _("IPMI power reboot failed for node %(node_id)s with the " "following error: %(error)s") try: ipmicmd = ipmi_command.Command(bmc=driver_info['address'], userid=driver_info['username'], password=driver_info['password']) wait = CONF.ipmi.retry_timeout ret = ipmicmd.set_power('boot', wait) except pyghmi_exception.IpmiException as e: error = msg % {'node_id': driver_info['uuid'], 'error': e} LOG.error(error) raise exception.IPMIFailure(error) state = ret.get('powerstate') if state == 'on': return states.POWER_ON else: error = _("bad response: %s") % ret LOG.error(msg, {'node_id': driver_info['uuid'], 'error': error}) raise exception.PowerStateFailure(pstate=states.REBOOT) def _power_status(driver_info): """Get the power status for this node. :param driver_info: the bmc access info for a node. :returns: power state POWER_ON, POWER_OFF or ERROR defined in :class:`ironic.common.states`. :raises: IPMIFailure when the native ipmi call fails. 
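For illustration, the pyghmi response is translated roughly as follows (a sketch of the mapping implemented below, not additional API surface)::

    # {'powerstate': 'on'}   -> states.POWER_ON
    # {'powerstate': 'off'}  -> states.POWER_OFF
    # anything else          -> states.ERROR (after logging a warning)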
""" try: ipmicmd = ipmi_command.Command(bmc=driver_info['address'], userid=driver_info['username'], password=driver_info['password']) ret = ipmicmd.get_power() except pyghmi_exception.IpmiException as e: msg = (_("IPMI get power state failed for node %(node_id)s " "with the following error: %(error)s") % {'node_id': driver_info['uuid'], 'error': e}) LOG.error(msg) raise exception.IPMIFailure(msg) state = ret.get('powerstate') if state == 'on': return states.POWER_ON elif state == 'off': return states.POWER_OFF else: # NOTE(linggao): Do not throw an exception here because it might # return other valid values. It is up to the caller to decide # what to do. LOG.warning(_LW("IPMI get power state for node %(node_id)s returns the" " following details: %(detail)s"), {'node_id': driver_info['uuid'], 'detail': ret}) return states.ERROR def _get_sensors_data(driver_info): """Get sensors data. :param driver_info: node's driver info :raises: FailedToGetSensorData when getting the sensor data fails. :returns: returns a dict of sensor data group by sensor type. """ try: ipmicmd = ipmi_command.Command(bmc=driver_info['address'], userid=driver_info['username'], password=driver_info['password']) ret = ipmicmd.get_sensor_data() except Exception as e: LOG.error(_LE("IPMI get sensor data failed for node %(node_id)s " "with the following error: %(error)s"), {'node_id': driver_info['uuid'], 'error': e}) raise exception.FailedToGetSensorData( node=driver_info['uuid'], error=e) if not ret: return {} sensors_data = {} for reading in ret: # ignore the sensor data which has no sensor reading value if not reading.value: continue sensors_data.setdefault( reading.type, {})[reading.name] = { 'Sensor Reading': '%s %s' % (reading.value, reading.units), 'Sensor ID': reading.name, 'States': str(reading.states), 'Units': reading.units, 'Health': str(reading.health)} return sensors_data def _parse_raw_bytes(raw_bytes): """Parse raw bytes string. :param raw_bytes: a string of hexadecimal raw bytes, e.g. '0x00 0x01'. :returns: a tuple containing the arguments for pyghmi call as integers, (IPMI net function, IPMI command, list of command's data). :raises: InvalidParameterValue when an invalid value is specified. """ try: bytes_list = [int(x, base=16) for x in raw_bytes.split()] return bytes_list[0], bytes_list[1], bytes_list[2:] except ValueError: raise exception.InvalidParameterValue(_( "Invalid raw bytes string: '%s'") % raw_bytes) except IndexError: raise exception.InvalidParameterValue(_( "Raw bytes string requires two bytes at least.")) def _send_raw(driver_info, raw_bytes): """Send raw bytes to the BMC.""" netfn, command, data = _parse_raw_bytes(raw_bytes) LOG.debug("Sending raw bytes %(bytes)s to node %(node_id)s", {'bytes': raw_bytes, 'node_id': driver_info['uuid']}) try: ipmicmd = ipmi_command.Command(bmc=driver_info['address'], userid=driver_info['username'], password=driver_info['password']) ipmicmd.xraw_command(netfn, command, data=data) except pyghmi_exception.IpmiException as e: msg = (_("IPMI send raw bytes '%(bytes)s' failed for node %(node_id)s" " with the following error: %(error)s") % {'bytes': raw_bytes, 'node_id': driver_info['uuid'], 'error': e}) LOG.error(msg) raise exception.IPMIFailure(msg) class NativeIPMIPower(base.PowerInterface): """The power driver using native python-ipmi library.""" def get_properties(self): return COMMON_PROPERTIES def validate(self, task): """Check that node['driver_info'] contains IPMI credentials. :param task: a TaskManager instance containing the node to act on. 
:raises: MissingParameterValue when required ipmi credentials are missing. """ _parse_driver_info(task.node) def get_power_state(self, task): """Get the current power state of the task's node. :param task: a TaskManager instance containing the node to act on. :returns: power state POWER_ON, POWER_OFF or ERROR defined in :class:`ironic.common.states`. :raises: MissingParameterValue when required ipmi credentials are missing. :raises: IPMIFailure when the native ipmi call fails. """ driver_info = _parse_driver_info(task.node) return _power_status(driver_info) @task_manager.require_exclusive_lock def set_power_state(self, task, pstate): """Turn the power on or off. :param task: a TaskManager instance containing the node to act on. :param pstate: a power state that will be set on the task's node. :raises: IPMIFailure when the native ipmi call fails. :raises: MissingParameterValue when required ipmi credentials are missing. :raises: InvalidParameterValue when an invalid power state is specified :raises: PowerStateFailure when invalid power state is returned from ipmi. """ driver_info = _parse_driver_info(task.node) if pstate == states.POWER_ON: driver_utils.ensure_next_boot_device(task, driver_info) _power_on(driver_info) elif pstate == states.POWER_OFF: _power_off(driver_info) else: raise exception.InvalidParameterValue( _("set_power_state called with an invalid power state: %s." ) % pstate) @task_manager.require_exclusive_lock def reboot(self, task): """Cycles the power to the task's node. :param task: a TaskManager instance containing the node to act on. :raises: IPMIFailure when the native ipmi call fails. :raises: MissingParameterValue when required ipmi credentials are missing. :raises: PowerStateFailure when invalid power state is returned from ipmi. """ driver_info = _parse_driver_info(task.node) driver_utils.ensure_next_boot_device(task, driver_info) _reboot(driver_info) class NativeIPMIManagement(base.ManagementInterface): def get_properties(self): return COMMON_PROPERTIES def validate(self, task): """Check that 'driver_info' contains IPMI credentials. Validates whether the 'driver_info' property of the supplied task's node contains the required credentials information. :param task: a task from TaskManager. :raises: MissingParameterValue when required ipmi credentials are missing. """ _parse_driver_info(task.node) def get_supported_boot_devices(self, task): """Get a list of the supported boot devices. :param task: a task from TaskManager. :returns: A list with the supported boot devices defined in :mod:`ironic.common.boot_devices`. """ return list(_BOOT_DEVICES_MAP.keys()) @task_manager.require_exclusive_lock def set_boot_device(self, task, device, persistent=False): """Set the boot device for the task's node. Set the boot device to use on next reboot of the node. :param task: a task from TaskManager. :param device: the boot device, one of :mod:`ironic.common.boot_devices`. :param persistent: Boolean value. True if the boot device will persist to all future boots, False if not. Default: False. :raises: InvalidParameterValue if an invalid boot device is specified or required ipmi credentials are missing. :raises: MissingParameterValue when required ipmi credentials are missing. :raises: IPMIFailure on an error from pyghmi. 
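For illustration only, requesting a one-time PXE boot might look like this (a sketch assuming a task acquired on a node that uses this driver)::

    task.driver.management.set_boot_device(task, boot_devices.PXE,
                                           persistent=False)

The device constant is translated through _BOOT_DEVICES_MAP, e.g. boot_devices.PXE becomes pyghmi's 'network'.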
""" if device not in self.get_supported_boot_devices(task): raise exception.InvalidParameterValue(_( "Invalid boot device %s specified.") % device) if task.node.driver_info.get('ipmi_force_boot_device', False): driver_utils.force_persistent_boot(task, device, persistent) # Reset persistent to False, in case of BMC does not support # persistent or we do not have admin rights. persistent = False driver_info = _parse_driver_info(task.node) try: ipmicmd = ipmi_command.Command(bmc=driver_info['address'], userid=driver_info['username'], password=driver_info['password']) bootdev = _BOOT_DEVICES_MAP[device] ipmicmd.set_bootdev(bootdev, persist=persistent) except pyghmi_exception.IpmiException as e: LOG.error(_LE("IPMI set boot device failed for node %(node_id)s " "with the following error: %(error)s"), {'node_id': driver_info['uuid'], 'error': e}) raise exception.IPMIFailure(cmd=e) def get_boot_device(self, task): """Get the current boot device for the task's node. Returns the current boot device of the node. :param task: a task from TaskManager. :raises: MissingParameterValue if required IPMI parameters are missing. :raises: IPMIFailure on an error from pyghmi. :returns: a dictionary containing: :boot_device: the boot device, one of :mod:`ironic.common.boot_devices` or None if it is unknown. :persistent: Whether the boot device will persist to all future boots or not, None if it is unknown. """ driver_info = task.node.driver_info driver_internal_info = task.node.driver_internal_info if (driver_info.get('ipmi_force_boot_device', False) and driver_internal_info.get('persistent_boot_device') and driver_internal_info.get('is_next_boot_persistent', True)): return { 'boot_device': driver_internal_info['persistent_boot_device'], 'persistent': True } driver_info = _parse_driver_info(task.node) response = {'boot_device': None} try: ipmicmd = ipmi_command.Command(bmc=driver_info['address'], userid=driver_info['username'], password=driver_info['password']) ret = ipmicmd.get_bootdev() # FIXME(lucasagomes): pyghmi doesn't seem to handle errors # consistently, for some errors it raises an exception # others it just returns a dictionary with the error. if 'error' in ret: raise pyghmi_exception.IpmiException(ret['error']) except pyghmi_exception.IpmiException as e: LOG.error(_LE("IPMI get boot device failed for node %(node_id)s " "with the following error: %(error)s"), {'node_id': driver_info['uuid'], 'error': e}) raise exception.IPMIFailure(cmd=e) response['persistent'] = ret.get('persistent') bootdev = ret.get('bootdev') if bootdev: response['boot_device'] = next((dev for dev, hdev in _BOOT_DEVICES_MAP.items() if hdev == bootdev), None) return response def get_sensors_data(self, task): """Get sensors data. :param task: a TaskManager instance. :raises: FailedToGetSensorData when getting the sensor data fails. :raises: MissingParameterValue if required ipmi parameters are missing :returns: returns a dict of sensor data group by sensor type. """ driver_info = _parse_driver_info(task.node) return _get_sensors_data(driver_info) class NativeIPMIShellinaboxConsole(base.ConsoleInterface): """A ConsoleInterface that uses pyghmi and shellinabox.""" def get_properties(self): d = COMMON_PROPERTIES.copy() d.update(CONSOLE_PROPERTIES) return d def validate(self, task): """Validate the Node console info. :param task: a TaskManager instance containing the node to act on. 
:raises: MissingParameterValue when required IPMI credentials or the IPMI terminal port are missing :raises: InvalidParameterValue when the IPMI terminal port is not an integer. """ driver_info = _parse_driver_info(task.node) if not driver_info['port']: raise exception.MissingParameterValue(_( "Missing 'ipmi_terminal_port' parameter in node's" " driver_info.")) def start_console(self, task): """Start a remote console for the node. :param task: a TaskManager instance containing the node to act on. :raises: MissingParameterValue when required ipmi credentials are missing. :raises: InvalidParameterValue when the IPMI terminal port is not an integer. :raises: ConsoleError if unable to start the console process. """ driver_info = _parse_driver_info(task.node) path = _console_pwfile_path(driver_info['uuid']) pw_file = console_utils.make_persistent_password_file( path, driver_info['password']) console_cmd = ("/:%(uid)s:%(gid)s:HOME:pyghmicons %(bmc)s" " %(user)s" " %(passwd_file)s" % {'uid': os.getuid(), 'gid': os.getgid(), 'bmc': driver_info['address'], 'user': driver_info['username'], 'passwd_file': pw_file}) try: console_utils.start_shellinabox_console(driver_info['uuid'], driver_info['port'], console_cmd) except exception.ConsoleError: with excutils.save_and_reraise_exception(): ironic_utils.unlink_without_raise(path) def stop_console(self, task): """Stop the remote console session for the node. :param task: a TaskManager instance containing the node to act on. :raises: ConsoleError if unable to stop the console process. """ try: console_utils.stop_shellinabox_console(task.node.uuid) finally: password_file = _console_pwfile_path(task.node.uuid) ironic_utils.unlink_without_raise(password_file) def get_console(self, task): """Get the type and connection information about the console. :param task: a TaskManager instance containing the node to act on. :raises: MissingParameterValue when required IPMI credentials or the IPMI terminal port are missing :raises: InvalidParameterValue when the IPMI terminal port is not an integer. """ driver_info = _parse_driver_info(task.node) url = console_utils.get_shellinabox_console_url(driver_info['port']) return {'type': 'shellinabox', 'url': url} class VendorPassthru(base.VendorInterface): def get_properties(self): return COMMON_PROPERTIES def validate(self, task, method, **kwargs): """Validate vendor-specific actions. :param task: a task from TaskManager. :param method: method to be validated :param kwargs: info for action. :raises: InvalidParameterValue when an invalid parameter value is specified. :raises: MissingParameterValue if a required parameter is missing. """ if method == 'send_raw': raw_bytes = kwargs.get('raw_bytes') if not raw_bytes: raise exception.MissingParameterValue(_( 'Parameter raw_bytes (string of bytes) was not ' 'specified.')) _parse_raw_bytes(raw_bytes) _parse_driver_info(task.node) @base.passthru(['POST']) @task_manager.require_exclusive_lock def send_raw(self, task, http_method, raw_bytes): """Send raw bytes to the BMC. Bytes should be a string of bytes. :param task: a TaskManager instance. :param http_method: the HTTP method used on the request. :param raw_bytes: a string of raw bytes to send, e.g. '0x00 0x01' :raises: IPMIFailure on an error from native IPMI call. :raises: MissingParameterValue if a required parameter is missing. :raises: InvalidParameterValue when an invalid value is specified. 
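For example (a sketch; '0x06 0x01' happens to be the standard IPMI Get Device ID request: network function 0x06, command 0x01, no data bytes)::

    _parse_raw_bytes('0x06 0x01')   # -> (6, 1, [])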
""" driver_info = _parse_driver_info(task.node) _send_raw(driver_info, raw_bytes) @base.passthru(['POST']) @task_manager.require_exclusive_lock def bmc_reset(self, task, http_method, warm=True): """Reset BMC via IPMI command. :param task: a TaskManager instance. :param http_method: the HTTP method used on the request. :param warm: boolean parameter to decide on warm or cold reset. :raises: IPMIFailure on an error from native IPMI call. :raises: MissingParameterValue if a required parameter is missing. :raises: InvalidParameterValue when an invalid value is specified """ driver_info = _parse_driver_info(task.node) # NOTE(yuriyz): pyghmi 0.8.0 does not have a method for BMC reset command = '0x03' if warm else '0x02' raw_command = '0x06 ' + command _send_raw(driver_info, raw_command) ironic-5.1.0/ironic/drivers/modules/image_cache.py0000664000567000056710000004412612674513466023356 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright 2013 Hewlett-Packard Development Company, L.P. # Copyright 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Utility for caching master images. """ import os import tempfile import time import uuid from oslo_concurrency import lockutils from oslo_config import cfg from oslo_log import log as logging from oslo_utils import fileutils import six from ironic.common import exception from ironic.common.glance_service import service_utils from ironic.common.i18n import _ from ironic.common.i18n import _LI from ironic.common.i18n import _LW from ironic.common import image_service from ironic.common import images from ironic.common import utils LOG = logging.getLogger(__name__) img_cache_opts = [ cfg.BoolOpt('parallel_image_downloads', default=False, help=_('Run image downloads and raw format conversions in ' 'parallel.')), ] CONF = cfg.CONF CONF.register_opts(img_cache_opts) # This would contain a sorted list of instances of ImageCache to be # considered for cleanup. This list will be kept sorted in non-increasing # order of priority. _cache_cleanup_list = [] class ImageCache(object): """Class handling access to cache for master images.""" def __init__(self, master_dir, cache_size, cache_ttl): """Constructor. :param master_dir: cache directory to work on Value of None disables image caching. :param cache_size: desired maximum cache size in bytes :param cache_ttl: cache entity TTL in seconds """ self.master_dir = master_dir self._cache_size = cache_size self._cache_ttl = cache_ttl if master_dir is not None: fileutils.ensure_tree(master_dir) def fetch_image(self, href, dest_path, ctx=None, force_raw=True): """Fetch image by given href to the destination path. Does nothing if destination path exists and is up to date with cache and href contents. Only creates a hard link (dest_path) to cached image if requested image is already in cache and up to date with href contents. Otherwise downloads an image, stores it in cache and creates a hard link (dest_path) to it. 
:param href: image UUID or href to fetch :param dest_path: destination file path :param ctx: context :param force_raw: boolean value, whether to convert the image to raw format """ img_download_lock_name = 'download-image' if self.master_dir is None: # NOTE(ghe): We don't share images between instances/hosts if not CONF.parallel_image_downloads: with lockutils.lock(img_download_lock_name, 'ironic-'): _fetch(ctx, href, dest_path, force_raw) else: _fetch(ctx, href, dest_path, force_raw) return # TODO(ghe): have hard links and counts the same behaviour in all fs # NOTE(vdrok): File name is converted to UUID if it's not UUID already, # so that two images with same file names do not collide if service_utils.is_glance_image(href): master_file_name = service_utils.parse_image_ref(href)[0] else: # NOTE(vdrok): Doing conversion of href in case it's unicode # string, UUID cannot be generated for unicode strings on python 2. href_encoded = href.encode('utf-8') if six.PY2 else href master_file_name = str(uuid.uuid5(uuid.NAMESPACE_URL, href_encoded)) master_path = os.path.join(self.master_dir, master_file_name) if CONF.parallel_image_downloads: img_download_lock_name = 'download-image:%s' % master_file_name # TODO(dtantsur): lock expiration time with lockutils.lock(img_download_lock_name, 'ironic-'): # NOTE(vdrok): After rebuild requested image can change, so we # should ensure that dest_path and master_path (if exists) are # pointing to the same file and their content is up to date cache_up_to_date = _delete_master_path_if_stale(master_path, href, ctx) dest_up_to_date = _delete_dest_path_if_stale(master_path, dest_path) if cache_up_to_date and dest_up_to_date: LOG.debug("Destination %(dest)s already exists " "for image %(href)s", {'href': href, 'dest': dest_path}) return if cache_up_to_date: # NOTE(dtantsur): ensure we're not in the middle of clean up with lockutils.lock('master_image', 'ironic-'): os.link(master_path, dest_path) LOG.debug("Master cache hit for image %(href)s", {'href': href}) return LOG.info(_LI("Master cache miss for image %(href)s, " "starting download"), {'href': href}) self._download_image( href, master_path, dest_path, ctx=ctx, force_raw=force_raw) # NOTE(dtantsur): we increased cache size - time to clean up self.clean_up() def _download_image(self, href, master_path, dest_path, ctx=None, force_raw=True): """Download image by href and store at a given path. This method should be called with uuid-specific lock taken. :param href: image UUID or href to fetch :param master_path: destination master path :param dest_path: destination file path :param ctx: context :param force_raw: boolean value, whether to convert the image to raw format """ # TODO(ghe): timeout and retry for downloads # TODO(ghe): logging when image cannot be created tmp_dir = tempfile.mkdtemp(dir=self.master_dir) tmp_path = os.path.join(tmp_dir, href.split('/')[-1]) try: _fetch(ctx, href, tmp_path, force_raw) # NOTE(dtantsur): no need for global lock here - master_path # will have link count >1 at any moment, so won't be cleaned up os.link(tmp_path, master_path) os.link(master_path, dest_path) finally: utils.rmtree_without_raise(tmp_dir) @lockutils.synchronized('master_image', 'ironic-') def clean_up(self, amount=None): """Clean up directory with images, keeping cache of the latest images. Files with link count >1 are never deleted. Protected by global lock, so that no one messes with master images after we get listing and before we actually delete files. 
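Cleanup runs in two stages, mirroring the body below (sketch)::

    survived, amount = self._clean_up_too_old(listing, amount)    # stage 1: TTL
    amount = self._clean_up_ensure_cache_size(survived, amount)   # stage 2: size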
:param amount: if present, amount of space to reclaim in bytes, cleaning will stop if this goal is reached, even if it is possible to clean up more files """ if self.master_dir is None: return LOG.debug("Starting clean up for master image cache %(dir)s" % {'dir': self.master_dir}) amount_copy = amount listing = _find_candidates_for_deletion(self.master_dir) survived, amount = self._clean_up_too_old(listing, amount) if amount is not None and amount <= 0: return amount = self._clean_up_ensure_cache_size(survived, amount) if amount is not None and amount > 0: LOG.warning( _LW("Cache clean up was unable to reclaim %(required)d " "MiB of disk space, still %(left)d MiB required"), {'required': amount_copy / 1024 / 1024, 'left': amount / 1024 / 1024}) def _clean_up_too_old(self, listing, amount): """Clean up stage 1: drop images that are older than TTL. This method removes all files older than TTL seconds unless 'amount' is non-None. If 'amount' is non-None, it starts removing files older than TTL seconds, oldest first, until the required 'amount' of space is reclaimed. :param listing: list of tuples (file name, last used time) :param amount: if not None, amount of space to reclaim in bytes, cleaning will stop if this goal is reached, even if it is possible to clean up more files :returns: tuple (list of files left after clean up, amount still to reclaim) """ threshold = time.time() - self._cache_ttl survived = [] for file_name, last_used, stat in listing: if last_used < threshold: try: os.unlink(file_name) except EnvironmentError as exc: LOG.warning(_LW("Unable to delete file %(name)s from " "master image cache: %(exc)s"), {'name': file_name, 'exc': exc}) else: if amount is not None: amount -= stat.st_size if amount <= 0: amount = 0 break else: survived.append((file_name, last_used, stat)) return survived, amount def _clean_up_ensure_cache_size(self, listing, amount): """Clean up stage 2: try to ensure cache size < threshold. Try to delete the oldest files until the condition is satisfied or no more files are eligible for deletion. :param listing: list of tuples (file name, last used time) :param amount: amount of space to reclaim, if possible. If amount is not None, it has higher priority than cache size in settings :returns: amount of space still required after clean up """ # NOTE(dtantsur): Sort listing to delete the oldest files first listing = sorted(listing, key=lambda entry: entry[1], reverse=True) total_listing = (os.path.join(self.master_dir, f) for f in os.listdir(self.master_dir)) total_size = sum(os.path.getsize(f) for f in total_listing) while listing and (total_size > self._cache_size or (amount is not None and amount > 0)): file_name, last_used, stat = listing.pop() try: os.unlink(file_name) except EnvironmentError as exc: LOG.warning(_LW("Unable to delete file %(name)s from " "master image cache: %(exc)s"), {'name': file_name, 'exc': exc}) else: total_size -= stat.st_size if amount is not None: amount -= stat.st_size if total_size > self._cache_size: LOG.info(_LI("After cleaning up cache dir %(dir)s " "cache size %(actual)d is still larger than " "threshold %(expected)d"), {'dir': self.master_dir, 'actual': total_size, 'expected': self._cache_size}) return max(amount, 0) if amount is not None else 0 def _find_candidates_for_deletion(master_dir): """Find files eligible for deletion i.e. with link count == 1.
:param master_dir: directory to operate on :returns: iterator yielding tuples (file name, last used time, stat) """ for filename in os.listdir(master_dir): filename = os.path.join(master_dir, filename) stat = os.stat(filename) if not os.path.isfile(filename) or stat.st_nlink > 1: continue # NOTE(dtantsur): Detect most recently accessed files, # seeing atime can be disabled by the mount option # Also include ctime as it changes when image is linked to last_used_time = max(stat.st_mtime, stat.st_atime, stat.st_ctime) yield filename, last_used_time, stat def _free_disk_space_for(path): """Get free disk space on a drive where path is located.""" stat = os.statvfs(path) return stat.f_frsize * stat.f_bavail def _fetch(context, image_href, path, force_raw=False): """Fetch image and convert to raw format if needed.""" path_tmp = "%s.part" % path images.fetch(context, image_href, path_tmp, force_raw=False) # Notes(yjiang5): If glance can provide the virtual size information, # then we can firstly clean cache and then invoke images.fetch(). if force_raw: required_space = images.converted_size(path_tmp) directory = os.path.dirname(path_tmp) _clean_up_caches(directory, required_space) images.image_to_raw(image_href, path, path_tmp) else: os.rename(path_tmp, path) def _clean_up_caches(directory, amount): """Explicitly cleanup caches based on their priority (if required). :param directory: the directory (of the cache) to be freed up. :param amount: amount of space to reclaim. :raises: InsufficientDiskSpace exception, if we cannot free up enough space after trying all the caches. """ free = _free_disk_space_for(directory) if amount < free: return # NOTE(dtantsur): filter caches, whose directory is on the same device st_dev = os.stat(directory).st_dev caches_to_clean = [x[1]() for x in _cache_cleanup_list] caches = (c for c in caches_to_clean if os.stat(c.master_dir).st_dev == st_dev) for cache_to_clean in caches: cache_to_clean.clean_up(amount=(amount - free)) free = _free_disk_space_for(directory) if amount < free: break else: raise exception.InsufficientDiskSpace(path=directory, required=amount / 1024 / 1024, actual=free / 1024 / 1024, ) def clean_up_caches(ctx, directory, images_info): """Explicitly cleanup caches based on their priority (if required). This cleans up the caches to free up the amount of space required for the images in images_info. The caches are cleaned up one after the other in the order of their priority. If we still cannot free up enough space after trying all the caches, this method throws exception. :param ctx: context :param directory: the directory (of the cache) to be freed up. :param images_info: a list of tuples of the form (image_uuid,path) for which space is to be created in cache. :raises: InsufficientDiskSpace exception, if we cannot free up enough space after trying all the caches. """ total_size = sum(images.download_size(ctx, uuid) for (uuid, path) in images_info) _clean_up_caches(directory, total_size) def cleanup(priority): """Decorator method for adding cleanup priority to a class.""" def _add_property_to_class_func(cls): _cache_cleanup_list.append((priority, cls)) _cache_cleanup_list.sort(reverse=True, key=lambda tuple_: tuple_[0]) return cls return _add_property_to_class_func def _delete_master_path_if_stale(master_path, href, ctx): """Delete image from cache if it is not up to date with href contents. 
:param master_path: path to an image in master cache :param href: image href :param ctx: context to use :returns: True if master_path is up to date with href contents, False if master_path was stale and was deleted or it didn't exist """ if service_utils.is_glance_image(href): # Glance image contents cannot be updated without changing image's UUID return os.path.exists(master_path) if os.path.exists(master_path): img_service = image_service.get_image_service(href, context=ctx) img_mtime = img_service.show(href).get('updated_at') if not img_mtime: # This means that href is not a glance image and doesn't have an # updated_at attribute LOG.warning(_LW("Image service couldn't determine last " "modification time of %(href)s, considering " "cached image up to date."), {'href': href}) return True master_mtime = utils.unix_file_modification_datetime(master_path) if img_mtime <= master_mtime: return True # Delete image from cache as it is outdated LOG.info(_LI('Image %(href)s was last modified at %(remote_time)s. ' 'Deleting the cached copy "%(cached_file)s" since it was ' 'last modified at %(local_time)s and may be outdated.'), {'href': href, 'remote_time': img_mtime, 'local_time': master_mtime, 'cached_file': master_path}) os.unlink(master_path) return False def _delete_dest_path_if_stale(master_path, dest_path): """Delete dest_path if it does not point to cached image. :param master_path: path to an image in master cache :param dest_path: hard link to an image :returns: True if dest_path points to master_path, False if dest_path was stale and was deleted or it didn't exist """ dest_path_exists = os.path.exists(dest_path) if not dest_path_exists: # Image not cached, re-download return False master_path_exists = os.path.exists(master_path) if (not master_path_exists or os.stat(master_path).st_ino != os.stat(dest_path).st_ino): # Image exists in cache, but dest_path out of date os.unlink(dest_path) return False return True ironic-5.1.0/ironic/drivers/irmc.py0000664000567000056710000000565312674513466020425 0ustar jenkinsjenkins00000000000000# Copyright 2015 FUJITSU LIMITED # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ iRMC Driver for managing FUJITSU PRIMERGY servers of the BX S4 or RX S8 generation and above. """ from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.drivers import base from ironic.drivers.modules import agent from ironic.drivers.modules import ipmitool from ironic.drivers.modules.irmc import boot from ironic.drivers.modules.irmc import inspect from ironic.drivers.modules.irmc import management from ironic.drivers.modules.irmc import power from ironic.drivers.modules import iscsi_deploy class IRMCVirtualMediaIscsiDriver(base.BaseDriver): """iRMC Driver using SCCI. This driver implements the `core` functionality using :class:ironic.drivers.modules.irmc.power.IRMCPower for power management, and :class:ironic.drivers.modules.iscsi_deploy.ISCSIDeploy for deploy.
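A sketch of instantiation (illustrative; as enforced in __init__ below, DriverLoadError is raised unless python-scciclient is importable)::

    driver = IRMCVirtualMediaIscsiDriver()
    # driver.power  -> power.IRMCPower()
    # driver.deploy -> iscsi_deploy.ISCSIDeploy()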
""" def __init__(self): if not importutils.try_import('scciclient.irmc.scci'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import python-scciclient library")) self.power = power.IRMCPower() self.boot = boot.IRMCVirtualMediaBoot() self.deploy = iscsi_deploy.ISCSIDeploy() self.console = ipmitool.IPMIShellinaboxConsole() self.management = management.IRMCManagement() self.vendor = iscsi_deploy.VendorPassthru() self.inspect = inspect.IRMCInspect() class IRMCVirtualMediaAgentDriver(base.BaseDriver): """iRMC Driver using SCCI. This driver implements the `core` functionality using :class:ironic.drivers.modules.irmc.power.IRMCPower for power management and :class:ironic.drivers.modules.irmc.deploy.IRMCVirtualMediaAgentDriver for deploy. """ def __init__(self): if not importutils.try_import('scciclient.irmc.scci'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import python-scciclient library")) self.power = power.IRMCPower() self.boot = boot.IRMCVirtualMediaBoot() self.deploy = agent.AgentDeploy() self.console = ipmitool.IPMIShellinaboxConsole() self.management = management.IRMCManagement() self.vendor = agent.AgentVendorInterface() self.inspect = inspect.IRMCInspect() ironic-5.1.0/ironic/drivers/__init__.py0000664000567000056710000000000012674513466021217 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/drivers/fake.py0000664000567000056710000002646512674513466020415 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Fake drivers used in testing. 
""" from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.drivers import base from ironic.drivers.modules import agent from ironic.drivers.modules.amt import management as amt_mgmt from ironic.drivers.modules.amt import power as amt_power from ironic.drivers.modules.cimc import management as cimc_mgmt from ironic.drivers.modules.cimc import power as cimc_power from ironic.drivers.modules.drac import management as drac_mgmt from ironic.drivers.modules.drac import power as drac_power from ironic.drivers.modules.drac import vendor_passthru as drac_vendor from ironic.drivers.modules import fake from ironic.drivers.modules import iboot from ironic.drivers.modules.ilo import inspect as ilo_inspect from ironic.drivers.modules.ilo import management as ilo_management from ironic.drivers.modules.ilo import power as ilo_power from ironic.drivers.modules import inspector from ironic.drivers.modules import ipminative from ironic.drivers.modules import ipmitool from ironic.drivers.modules.irmc import inspect as irmc_inspect from ironic.drivers.modules.irmc import management as irmc_management from ironic.drivers.modules.irmc import power as irmc_power from ironic.drivers.modules import iscsi_deploy from ironic.drivers.modules.msftocs import management as msftocs_management from ironic.drivers.modules.msftocs import power as msftocs_power from ironic.drivers.modules.oneview import common as oneview_common from ironic.drivers.modules.oneview import management as oneview_management from ironic.drivers.modules.oneview import power as oneview_power from ironic.drivers.modules import pxe from ironic.drivers.modules import seamicro from ironic.drivers.modules import snmp from ironic.drivers.modules import ssh from ironic.drivers.modules.ucs import management as ucs_mgmt from ironic.drivers.modules.ucs import power as ucs_power from ironic.drivers.modules import virtualbox from ironic.drivers.modules import wol from ironic.drivers import utils class FakeDriver(base.BaseDriver): """Example implementation of a Driver.""" def __init__(self): self.power = fake.FakePower() self.deploy = fake.FakeDeploy() self.boot = fake.FakeBoot() self.a = fake.FakeVendorA() self.b = fake.FakeVendorB() self.mapping = {'first_method': self.a, 'second_method': self.b, 'third_method_sync': self.b} self.vendor = utils.MixinVendorInterface(self.mapping) self.console = fake.FakeConsole() self.management = fake.FakeManagement() self.inspect = fake.FakeInspect() self.raid = fake.FakeRAID() class FakeIPMIToolDriver(base.BaseDriver): """Example implementation of a Driver.""" def __init__(self): self.power = ipmitool.IPMIPower() self.console = ipmitool.IPMIShellinaboxConsole() self.deploy = fake.FakeDeploy() self.vendor = ipmitool.VendorPassthru() self.management = ipmitool.IPMIManagement() class FakePXEDriver(base.BaseDriver): """Example implementation of a Driver.""" def __init__(self): self.power = fake.FakePower() self.boot = pxe.PXEBoot() self.deploy = iscsi_deploy.ISCSIDeploy() self.vendor = iscsi_deploy.VendorPassthru() class FakeSSHDriver(base.BaseDriver): """Example implementation of a Driver.""" def __init__(self): self.power = ssh.SSHPower() self.deploy = fake.FakeDeploy() self.management = ssh.SSHManagement() self.console = ssh.ShellinaboxConsole() class FakeIPMINativeDriver(base.BaseDriver): """Fake IPMINative driver.""" def __init__(self): if not importutils.try_import('pyghmi'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to 
import pyghmi IPMI library")) self.power = ipminative.NativeIPMIPower() self.console = ipminative.NativeIPMIShellinaboxConsole() self.deploy = fake.FakeDeploy() self.vendor = ipminative.VendorPassthru() self.management = ipminative.NativeIPMIManagement() class FakeSeaMicroDriver(base.BaseDriver): """Fake SeaMicro driver.""" def __init__(self): if not importutils.try_import('seamicroclient'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import seamicroclient library")) self.power = seamicro.Power() self.deploy = fake.FakeDeploy() self.management = seamicro.Management() self.vendor = seamicro.VendorPassthru() self.console = seamicro.ShellinaboxConsole() class FakeAgentDriver(base.BaseDriver): """Example implementation of an AgentDriver.""" def __init__(self): self.power = fake.FakePower() self.boot = pxe.PXEBoot() self.deploy = agent.AgentDeploy() self.vendor = agent.AgentVendorInterface() self.raid = agent.AgentRAID() class FakeIBootDriver(base.BaseDriver): """Fake iBoot driver.""" def __init__(self): if not importutils.try_import('iboot'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import iboot library")) self.power = iboot.IBootPower() self.deploy = fake.FakeDeploy() class FakeIloDriver(base.BaseDriver): """Fake iLO driver, used in testing.""" def __init__(self): if not importutils.try_import('proliantutils'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import proliantutils library")) self.power = ilo_power.IloPower() self.deploy = fake.FakeDeploy() self.management = ilo_management.IloManagement() self.inspect = ilo_inspect.IloInspect() class FakeDracDriver(base.BaseDriver): """Fake Drac driver.""" def __init__(self): if not importutils.try_import('dracclient'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_('Unable to import python-dracclient library')) self.power = drac_power.DracPower() self.deploy = fake.FakeDeploy() self.management = drac_mgmt.DracManagement() self.vendor = drac_vendor.DracVendorPassthru() class FakeSNMPDriver(base.BaseDriver): """Fake SNMP driver.""" def __init__(self): if not importutils.try_import('pysnmp'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import pysnmp library")) self.power = snmp.SNMPPower() self.deploy = fake.FakeDeploy() class FakeIRMCDriver(base.BaseDriver): """Fake iRMC driver.""" def __init__(self): if not importutils.try_import('scciclient'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import python-scciclient library")) self.power = irmc_power.IRMCPower() self.deploy = fake.FakeDeploy() self.management = irmc_management.IRMCManagement() self.inspect = irmc_inspect.IRMCInspect() class FakeVirtualBoxDriver(base.BaseDriver): """Fake VirtualBox driver.""" def __init__(self): if not importutils.try_import('pyremotevbox'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import pyremotevbox library")) self.power = virtualbox.VirtualBoxPower() self.deploy = fake.FakeDeploy() self.management = virtualbox.VirtualBoxManagement() class FakeIPMIToolInspectorDriver(base.BaseDriver): """Fake Inspector driver.""" def __init__(self): self.power = ipmitool.IPMIPower() self.console = ipmitool.IPMIShellinaboxConsole() self.deploy = fake.FakeDeploy() self.vendor = ipmitool.VendorPassthru() self.management = ipmitool.IPMIManagement() # NOTE(dtantsur): unlike other uses of Inspector, this one is # 
unconditional, as this driver is designed for testing inspector # integration. self.inspect = inspector.Inspector() class FakeAMTDriver(base.BaseDriver): """Fake AMT driver.""" def __init__(self): if not importutils.try_import('pywsman'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import pywsman library")) self.power = amt_power.AMTPower() self.deploy = fake.FakeDeploy() self.management = amt_mgmt.AMTManagement() class FakeMSFTOCSDriver(base.BaseDriver): """Fake MSFT OCS driver.""" def __init__(self): self.power = msftocs_power.MSFTOCSPower() self.deploy = fake.FakeDeploy() self.management = msftocs_management.MSFTOCSManagement() class FakeUcsDriver(base.BaseDriver): """Fake UCS driver.""" def __init__(self): if not importutils.try_import('UcsSdk'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import UcsSdk library")) self.power = ucs_power.Power() self.deploy = fake.FakeDeploy() self.management = ucs_mgmt.UcsManagement() class FakeCIMCDriver(base.BaseDriver): """Fake CIMC driver.""" def __init__(self): if not importutils.try_import('ImcSdk'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import ImcSdk library")) self.power = cimc_power.Power() self.deploy = fake.FakeDeploy() self.management = cimc_mgmt.CIMCManagement() class FakeWakeOnLanDriver(base.BaseDriver): """Fake Wake-On-Lan driver.""" def __init__(self): self.power = wol.WakeOnLanPower() self.deploy = fake.FakeDeploy() class FakeOneViewDriver(base.BaseDriver): """Fake OneView driver. For testing purposes. """ def __init__(self): if not importutils.try_import('oneview_client.client'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import python-oneviewclient library")) # Checks connectivity to OneView and version compatibility on driver # initialization oneview_client = oneview_common.get_oneview_client() oneview_client.verify_oneview_version() oneview_client.verify_credentials() self.power = oneview_power.OneViewPower() self.management = oneview_management.OneViewManagement() self.boot = fake.FakeBoot() self.deploy = fake.FakeDeploy() ironic-5.1.0/ironic/drivers/ilo.py0000664000567000056710000000577612674513466020274 0ustar jenkinsjenkins00000000000000# Copyright 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ iLO Driver for managing HP Proliant Gen8 and above servers. 
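A sketch of what loading one of these drivers implies (illustrative; each __init__ below raises DriverLoadError unless the proliantutils library is importable)::

    driver = IloVirtualMediaIscsiDriver()
    # driver.power  -> power.IloPower()
    # driver.deploy -> deploy.IloVirtualMediaIscsiDeploy()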
""" from oslo_utils import importutils from ironic.common import exception from ironic.common.i18n import _ from ironic.drivers import base from ironic.drivers.modules import agent from ironic.drivers.modules.ilo import boot from ironic.drivers.modules.ilo import console from ironic.drivers.modules.ilo import deploy from ironic.drivers.modules.ilo import inspect from ironic.drivers.modules.ilo import management from ironic.drivers.modules.ilo import power from ironic.drivers.modules.ilo import vendor class IloVirtualMediaIscsiDriver(base.BaseDriver): """IloDriver using IloClient interface. This driver implements the `core` functionality using :class:ironic.drivers.modules.ilo.power.IloPower for power management. and :class:ironic.drivers.modules.ilo.deploy.IloVirtualMediaIscsiDeploy for deploy. """ def __init__(self): if not importutils.try_import('proliantutils'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import proliantutils library")) self.power = power.IloPower() self.boot = boot.IloVirtualMediaBoot() self.deploy = deploy.IloVirtualMediaIscsiDeploy() self.console = console.IloConsoleInterface() self.management = management.IloManagement() self.vendor = vendor.VendorPassthru() self.inspect = inspect.IloInspect() class IloVirtualMediaAgentDriver(base.BaseDriver): """IloDriver using IloClient interface. This driver implements the `core` functionality using :class:ironic.drivers.modules.ilo.power.IloPower for power management and :class:ironic.drivers.modules.ilo.deploy.IloVirtualMediaAgentDriver for deploy. """ def __init__(self): if not importutils.try_import('proliantutils'): raise exception.DriverLoadError( driver=self.__class__.__name__, reason=_("Unable to import proliantutils library")) self.power = power.IloPower() self.boot = boot.IloVirtualMediaBoot() self.deploy = deploy.IloVirtualMediaAgentDeploy() self.console = console.IloConsoleInterface() self.management = management.IloManagement() self.vendor = vendor.IloVirtualMediaAgentVendorInterface() self.inspect = inspect.IloInspect() self.raid = agent.AgentRAID() ironic-5.1.0/ironic/drivers/utils.py0000664000567000056710000002041312674513466020632 0ustar jenkinsjenkins00000000000000# Copyright 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging import six from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LW from ironic.conductor import utils from ironic.drivers import base LOG = logging.getLogger(__name__) class MixinVendorInterface(base.VendorInterface): """Wrapper around multiple VendorInterfaces.""" def __init__(self, mapping, driver_passthru_mapping=None): """Wrapper around multiple VendorInterfaces. :param mapping: dict of {'method': interface} specifying how to combine multiple vendor interfaces into one vendor driver. :param driver_passthru_mapping: dict of {'method': interface} specifying how to map driver_vendor_passthru calls to interfaces. 
""" self.mapping = mapping self.driver_level_mapping = driver_passthru_mapping or {} self.vendor_routes = self._build_routes(self.mapping) self.driver_routes = self._build_routes(self.driver_level_mapping, driver_passthru=True) def _build_routes(self, map_dict, driver_passthru=False): """Build the mapping for the vendor calls. Build the mapping between the given methods and the corresponding method metadata. :param map_dict: dict of {'method': interface} specifying how to map multiple vendor calls to interfaces. :param driver_passthru: Boolean value. Whether build the mapping to the node vendor passthru or driver vendor passthru. """ d = {} for method_name in map_dict: iface = map_dict[method_name] if driver_passthru: driver_methods = iface.driver_routes else: driver_methods = iface.vendor_routes try: d.update({method_name: driver_methods[method_name]}) except KeyError: pass return d def _get_route(self, method): """Return the driver interface which contains the given method. :param method: The name of the vendor method. """ if not method: raise exception.MissingParameterValue( _("Method not specified when calling vendor extension.")) try: route = self.mapping[method] except KeyError: raise exception.InvalidParameterValue( _('No handler for method %s') % method) return route def get_properties(self): """Return the properties from all the VendorInterfaces. :returns: a dictionary of : entries. """ properties = {} interfaces = set(self.mapping.values()) for interface in interfaces: properties.update(interface.get_properties()) return properties def validate(self, task, method, **kwargs): """Call validate on the appropriate interface only. :raises: UnsupportedDriverExtension if 'method' can not be mapped to the supported interfaces. :raises: InvalidParameterValue if 'method' is invalid. :raises: MissingParameterValue if missing 'method' or parameters in kwargs. """ route = self._get_route(method) route.validate(task, method=method, **kwargs) def get_node_mac_addresses(task): """Get all MAC addresses for the ports belonging to this task's node. :param task: a TaskManager instance containing the node to act on. :returns: A list of MAC addresses in the format xx:xx:xx:xx:xx:xx. """ return [p.address for p in task.ports] def get_node_capability(node, capability): """Returns 'capability' value from node's 'capabilities' property. :param node: Node object. :param capability: Capability key. :return: Capability value. If capability is not present, then return "None" """ capabilities = node.properties.get('capabilities') if not capabilities: return for node_capability in capabilities.split(','): parts = node_capability.split(':') if len(parts) == 2 and parts[0] and parts[1]: if parts[0].strip() == capability: return parts[1].strip() else: LOG.warning(_LW("Ignoring malformed capability '%s'. " "Format should be 'key:val'."), node_capability) def add_node_capability(task, capability, value): """Add 'capability' to node's 'capabilities' property. If 'capability' is already present, then a duplicate entry will be added. :param task: Task object. :param capability: Capability key. :param value: Capability value. 
""" node = task.node properties = node.properties capabilities = properties.get('capabilities') new_cap = ':'.join([capability, value]) if capabilities: capabilities = ','.join([capabilities, new_cap]) else: capabilities = new_cap properties['capabilities'] = capabilities node.properties = properties node.save() def ensure_next_boot_device(task, driver_info): """Ensure boot from correct device if persistent is True If ipmi_force_boot_device is True and is_next_boot_persistent, set to boot from correct device, else unset is_next_boot_persistent field. :param task: Node object. :param driver_info: Node driver_info. """ if driver_info.get('force_boot_device', False): driver_internal_info = task.node.driver_internal_info if driver_internal_info.get('is_next_boot_persistent') is False: driver_internal_info.pop('is_next_boot_persistent', None) task.node.driver_internal_info = driver_internal_info task.node.save() else: boot_device = driver_internal_info.get('persistent_boot_device') if boot_device: utils.node_set_boot_device(task, boot_device) def force_persistent_boot(task, device, persistent): """Set persistent boot device to driver_internal_info If persistent is True set 'persistent_boot_device' field to the boot device and reset persistent to False, else set 'is_next_boot_persistent' to False. :param task: Task object. :param device: Boot device. :param persistent: Whether next boot is persistent or not. """ node = task.node driver_internal_info = node.driver_internal_info if persistent: driver_internal_info['persistent_boot_device'] = device else: driver_internal_info['is_next_boot_persistent'] = False node.driver_internal_info = driver_internal_info node.save() def capabilities_to_dict(capabilities): """Parse the capabilities string into a dictionary :param capabilities: the capabilities of the node as a formatted string. :raises: InvalidParameterValue if capabilities is not an string or has a malformed value """ capabilities_dict = {} if capabilities: if not isinstance(capabilities, six.string_types): raise exception.InvalidParameterValue( _("Value of 'capabilities' must be string. Got %s") % type(capabilities)) try: for capability in capabilities.split(','): key, value = capability.split(':') capabilities_dict[key] = value except ValueError: raise exception.InvalidParameterValue( _("Malformed capabilities value: %s") % capability ) return capabilities_dict ironic-5.1.0/ironic/cmd/0000775000567000056710000000000012674513633016201 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/cmd/api.py0000664000567000056710000000257412674513466017340 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""The Ironic Service API.""" import sys from oslo_config import cfg from ironic.common import service as ironic_service from ironic.objects import base CONF = cfg.CONF def main(): # Parse config file and command line options, then start logging ironic_service.prepare_service(sys.argv) # Enable object backporting via the conductor base.IronicObject.indirection_api = base.IronicObjectIndirectionAPI() # Build and start the WSGI app launcher = ironic_service.process_launcher() server = ironic_service.WSGIService('ironic_api', CONF.api.enable_ssl_api) launcher.launch_service(server, workers=server.workers) launcher.wait() if __name__ == '__main__': sys.exit(main()) ironic-5.1.0/ironic/cmd/conductor.py0000664000567000056710000000262112674513466020560 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ The Ironic Management Service """ import logging import sys from oslo_config import cfg from oslo_log import log from oslo_service import service from ironic.common import service as ironic_service CONF = cfg.CONF def main(): # Pase config file and command line options, then start logging ironic_service.prepare_service(sys.argv) mgr = ironic_service.RPCService(CONF.host, 'ironic.conductor.manager', 'ConductorManager') LOG = log.getLogger(__name__) LOG.debug("Configuration:") CONF.log_opt_values(LOG, logging.DEBUG) launcher = service.launch(CONF, mgr) launcher.wait() if __name__ == '__main__': sys.exit(main()) ironic-5.1.0/ironic/cmd/dbsync.py0000664000567000056710000000611412674513466020043 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Run storage database migration. """ import sys from oslo_config import cfg from ironic.common.i18n import _ from ironic.common import service from ironic.db import migration CONF = cfg.CONF class DBCommand(object): def upgrade(self): migration.upgrade(CONF.command.revision) def revision(self): migration.revision(CONF.command.message, CONF.command.autogenerate) def stamp(self): migration.stamp(CONF.command.revision) def version(self): print(migration.version()) def create_schema(self): migration.create_schema() def add_command_parsers(subparsers): command_object = DBCommand() parser = subparsers.add_parser( 'upgrade', help=_("Upgrade the database schema to the latest version. 
" "Optionally, use --revision to specify an alembic revision " "string to upgrade to.")) parser.set_defaults(func=command_object.upgrade) parser.add_argument('--revision', nargs='?') parser = subparsers.add_parser('stamp') parser.add_argument('--revision', nargs='?') parser.set_defaults(func=command_object.stamp) parser = subparsers.add_parser( 'revision', help=_("Create a new alembic revision. " "Use --message to set the message string.")) parser.add_argument('-m', '--message') parser.add_argument('--autogenerate', action='store_true') parser.set_defaults(func=command_object.revision) parser = subparsers.add_parser( 'version', help=_("Print the current version information and exit.")) parser.set_defaults(func=command_object.version) parser = subparsers.add_parser( 'create_schema', help=_("Create the database schema.")) parser.set_defaults(func=command_object.create_schema) command_opt = cfg.SubCommandOpt('command', title='Command', help=_('Available commands'), handler=add_command_parsers) CONF.register_cli_opt(command_opt) def main(): # this is hack to work with previous usage of ironic-dbsync # pls change it to ironic-dbsync upgrade valid_commands = set([ 'upgrade', 'downgrade', 'revision', 'version', 'stamp', 'create_schema', ]) if not set(sys.argv) & valid_commands: sys.argv.append('upgrade') service.prepare_service(sys.argv) CONF.command.func() ironic-5.1.0/ironic/cmd/__init__.py0000664000567000056710000000125312674513466020317 0ustar jenkinsjenkins00000000000000# Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import oslo_i18n as i18n i18n.install('ironic') ironic-5.1.0/ironic/common/0000775000567000056710000000000012674513633016726 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/common/exception.py0000664000567000056710000004137312674513466021312 0ustar jenkinsjenkins00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Ironic base exception handling. SHOULD include dedicated exception logging. 
""" from oslo_config import cfg from oslo_log import log as logging import six from six.moves import http_client from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LW LOG = logging.getLogger(__name__) exc_log_opts = [ cfg.BoolOpt('fatal_exception_format_errors', default=False, help=_('Used if there is a formatting error when generating ' 'an exception message (a programming error). If True, ' 'raise an exception; if False, use the unformatted ' 'message.')), ] CONF = cfg.CONF CONF.register_opts(exc_log_opts) class IronicException(Exception): """Base Ironic Exception To correctly use this class, inherit from it and define a '_msg_fmt' property. That message will get printf'd with the keyword arguments provided to the constructor. If you need to access the message from an exception you should use six.text_type(exc) """ _msg_fmt = _("An unknown exception occurred.") code = http_client.INTERNAL_SERVER_ERROR headers = {} safe = False def __init__(self, message=None, **kwargs): self.kwargs = kwargs if 'code' not in self.kwargs: try: self.kwargs['code'] = self.code except AttributeError: pass if not message: # Check if class is using deprecated 'message' attribute. if (hasattr(self, 'message') and self.message): LOG.warning(_LW("Exception class: %s Using the 'message' " "attribute in an exception has been " "deprecated. The exception class should be " "modified to use the '_msg_fmt' " "attribute."), self.__class__.__name__) self._msg_fmt = self.message try: message = self._msg_fmt % kwargs except Exception as e: # kwargs doesn't match a variable in self._msg_fmt # log the issue and the kwargs LOG.exception(_LE('Exception in string format operation')) for name, value in kwargs.items(): LOG.error("%s: %s" % (name, value)) if CONF.fatal_exception_format_errors: raise e else: # at least get the core self._msg_fmt out if something # happened message = self._msg_fmt super(IronicException, self).__init__(message) def __str__(self): """Encode to utf-8 then wsme api can consume it as well.""" if not six.PY3: return unicode(self.args[0]).encode('utf-8') return self.args[0] def __unicode__(self): """Return a unicode representation of the exception message.""" return unicode(self.args[0]) class NotAuthorized(IronicException): _msg_fmt = _("Not authorized.") code = http_client.FORBIDDEN class OperationNotPermitted(NotAuthorized): _msg_fmt = _("Operation not permitted.") class Invalid(IronicException): _msg_fmt = _("Unacceptable parameters.") code = http_client.BAD_REQUEST class Conflict(IronicException): _msg_fmt = _('Conflict.') code = http_client.CONFLICT class TemporaryFailure(IronicException): _msg_fmt = _("Resource temporarily unavailable, please retry.") code = http_client.SERVICE_UNAVAILABLE class NotAcceptable(IronicException): # TODO(deva): We need to set response headers in the API for this exception _msg_fmt = _("Request not acceptable.") code = http_client.NOT_ACCEPTABLE class InvalidState(Conflict): _msg_fmt = _("Invalid resource state.") class NodeAlreadyExists(Conflict): _msg_fmt = _("A node with UUID %(uuid)s already exists.") class MACAlreadyExists(Conflict): _msg_fmt = _("A port with MAC address %(mac)s already exists.") class ChassisAlreadyExists(Conflict): _msg_fmt = _("A chassis with UUID %(uuid)s already exists.") class PortAlreadyExists(Conflict): _msg_fmt = _("A port with UUID %(uuid)s already exists.") class PortgroupAlreadyExists(Conflict): _msg_fmt = _("A portgroup with UUID %(uuid)s already exists.") class 
PortgroupDuplicateName(Conflict): _msg_fmt = _("A portgroup with name %(name)s already exists.") class PortgroupMACAlreadyExists(Conflict): _msg_fmt = _("A portgroup with MAC address %(mac)s already exists.") class InstanceAssociated(Conflict): _msg_fmt = _("Instance %(instance_uuid)s is already associated with a " "node, it cannot be associated with this other node %(node)s") class DuplicateName(Conflict): _msg_fmt = _("A node with name %(name)s already exists.") class InvalidUUID(Invalid): _msg_fmt = _("Expected a uuid but received %(uuid)s.") class InvalidUuidOrName(Invalid): _msg_fmt = _("Expected a logical name or uuid but received %(name)s.") class InvalidName(Invalid): _msg_fmt = _("Expected a logical name but received %(name)s.") class InvalidIdentity(Invalid): _msg_fmt = _("Expected an uuid or int but received %(identity)s.") class InvalidMAC(Invalid): _msg_fmt = _("Expected a MAC address but received %(mac)s.") class InvalidStateRequested(Invalid): _msg_fmt = _('The requested action "%(action)s" can not be performed ' 'on node "%(node)s" while it is in state "%(state)s".') class PatchError(Invalid): _msg_fmt = _("Couldn't apply patch '%(patch)s'. Reason: %(reason)s") class InstanceDeployFailure(IronicException): _msg_fmt = _("Failed to deploy instance: %(reason)s") class ImageUnacceptable(IronicException): _msg_fmt = _("Image %(image_id)s is unacceptable: %(reason)s") class ImageConvertFailed(IronicException): _msg_fmt = _("Image %(image_id)s is unacceptable: %(reason)s") # Cannot be templated as the error syntax varies. # msg needs to be constructed when raised. class InvalidParameterValue(Invalid): _msg_fmt = _("%(err)s") class MissingParameterValue(InvalidParameterValue): _msg_fmt = _("%(err)s") class Duplicate(IronicException): _msg_fmt = _("Resource already exists.") class NotFound(IronicException): _msg_fmt = _("Resource could not be found.") code = http_client.NOT_FOUND class DHCPLoadError(IronicException): _msg_fmt = _("Failed to load DHCP provider %(dhcp_provider_name)s, " "reason: %(reason)s") class DriverNotFound(NotFound): _msg_fmt = _("Could not find the following driver(s): %(driver_name)s.") class ImageNotFound(NotFound): _msg_fmt = _("Image %(image_id)s could not be found.") class NoValidHost(NotFound): _msg_fmt = _("No valid host was found. 
Reason: %(reason)s") class InstanceNotFound(NotFound): _msg_fmt = _("Instance %(instance)s could not be found.") class NodeNotFound(NotFound): _msg_fmt = _("Node %(node)s could not be found.") class PortgroupNotFound(NotFound): _msg_fmt = _("Portgroup %(portgroup)s could not be found.") class PortgroupNotEmpty(Invalid): _msg_fmt = _("Cannot complete the requested action because portgroup " "%(portgroup)s contains ports.") class NodeAssociated(InvalidState): _msg_fmt = _("Node %(node)s is associated with instance %(instance)s.") class PortNotFound(NotFound): _msg_fmt = _("Port %(port)s could not be found.") class FailedToUpdateDHCPOptOnPort(IronicException): _msg_fmt = _("Update DHCP options on port: %(port_id)s failed.") class FailedToCleanDHCPOpts(IronicException): _msg_fmt = _("Clean up DHCP options on node: %(node)s failed.") class FailedToGetIPAddressOnPort(IronicException): _msg_fmt = _("Retrieve IP address on port: %(port_id)s failed.") class InvalidIPv4Address(IronicException): _msg_fmt = _("Invalid IPv4 address %(ip_address)s.") class FailedToUpdateMacOnPort(IronicException): _msg_fmt = _("Update MAC address on port: %(port_id)s failed.") class ChassisNotFound(NotFound): _msg_fmt = _("Chassis %(chassis)s could not be found.") class NoDriversLoaded(IronicException): _msg_fmt = _("Conductor %(conductor)s cannot be started " "because no drivers were loaded.") class ConductorNotFound(NotFound): _msg_fmt = _("Conductor %(conductor)s could not be found.") class ConductorAlreadyRegistered(IronicException): _msg_fmt = _("Conductor %(conductor)s already registered.") class PowerStateFailure(InvalidState): _msg_fmt = _("Failed to set node power state to %(pstate)s.") class ExclusiveLockRequired(NotAuthorized): _msg_fmt = _("An exclusive lock is required, " "but the current context has a shared lock.") class NodeMaintenanceFailure(Invalid): _msg_fmt = _("Failed to toggle maintenance-mode flag " "for node %(node)s: %(reason)s") class NodeConsoleNotEnabled(Invalid): _msg_fmt = _("Console access is not enabled on node %(node)s") class NodeInMaintenance(Invalid): _msg_fmt = _("The %(op)s operation can't be performed on node " "%(node)s because it's in maintenance mode.") class ChassisNotEmpty(Invalid): _msg_fmt = _("Cannot complete the requested action because chassis " "%(chassis)s contains nodes.") class IPMIFailure(IronicException): _msg_fmt = _("IPMI call failed: %(cmd)s.") class AMTConnectFailure(IronicException): _msg_fmt = _("Failed to connect to AMT service. 
This could be caused " "by the wrong amt_address or bad network environment.") class AMTFailure(IronicException): _msg_fmt = _("AMT call failed: %(cmd)s.") class MSFTOCSClientApiException(IronicException): _msg_fmt = _("MSFT OCS call failed.") class SSHConnectFailed(IronicException): _msg_fmt = _("Failed to establish SSH connection to host %(host)s.") class SSHCommandFailed(IronicException): _msg_fmt = _("Failed to execute command via SSH: %(cmd)s.") class UnsupportedDriverExtension(Invalid): _msg_fmt = _('Driver %(driver)s does not support %(extension)s ' '(disabled or not implemented).') class GlanceConnectionFailed(IronicException): _msg_fmt = _("Connection to glance host %(host)s:%(port)s failed: " "%(reason)s") class ImageNotAuthorized(NotAuthorized): _msg_fmt = _("Not authorized for image %(image_id)s.") class InvalidImageRef(Invalid): _msg_fmt = _("Invalid image href %(image_href)s.") class ImageRefValidationFailed(IronicException): _msg_fmt = _("Validation of image href %(image_href)s failed, " "reason: %(reason)s") class ImageDownloadFailed(IronicException): _msg_fmt = _("Failed to download image %(image_href)s, reason: %(reason)s") class KeystoneUnauthorized(IronicException): _msg_fmt = _("Not authorized in Keystone.") class KeystoneFailure(IronicException): pass class CatalogNotFound(IronicException): _msg_fmt = _("Service type %(service_type)s with endpoint type " "%(endpoint_type)s not found in keystone service catalog.") class ServiceUnavailable(IronicException): _msg_fmt = _("Connection failed") class Forbidden(IronicException): _msg_fmt = _("Requested OpenStack Images API is forbidden") class BadRequest(IronicException): pass class InvalidEndpoint(IronicException): _msg_fmt = _("The provided endpoint is invalid") class CommunicationError(IronicException): _msg_fmt = _("Unable to communicate with the server.") class HTTPForbidden(Forbidden): pass class Unauthorized(IronicException): pass class HTTPNotFound(NotFound): pass class ConfigNotFound(IronicException): _msg_fmt = _("Could not find config at %(path)s") class NodeLocked(Conflict): _msg_fmt = _("Node %(node)s is locked by host %(host)s, please retry " "after the current operation is completed.") class NodeNotLocked(Invalid): _msg_fmt = _("Node %(node)s found not to be locked on release") class NoFreeConductorWorker(TemporaryFailure): _msg_fmt = _('Requested action cannot be performed due to lack of free ' 'conductor workers.') code = http_client.SERVICE_UNAVAILABLE class VendorPassthruException(IronicException): pass class ConfigInvalid(IronicException): _msg_fmt = _("Invalid configuration file. %(error_msg)s") class DriverLoadError(IronicException): _msg_fmt = _("Driver %(driver)s could not be loaded. Reason: %(reason)s.") class ConsoleError(IronicException): pass class NoConsolePid(ConsoleError): _msg_fmt = _("Could not find pid in pid file %(pid_path)s") class ConsoleSubprocessFailed(ConsoleError): _msg_fmt = _("Console subprocess failed to start. %(error)s") class PasswordFileFailedToCreate(IronicException): _msg_fmt = _("Failed to create the password file. %(error)s") class IBootOperationError(IronicException): pass class IloOperationError(IronicException): _msg_fmt = _("%(operation)s failed, error: %(error)s") class IloOperationNotSupported(IronicException): _msg_fmt = _("%(operation)s not supported. error: %(error)s") class DracOperationError(IronicException): _msg_fmt = _('DRAC operation failed. 
Reason: %(error)s') class FailedToGetSensorData(IronicException): _msg_fmt = _("Failed to get sensor data for node %(node)s. " "Error: %(error)s") class FailedToParseSensorData(IronicException): _msg_fmt = _("Failed to parse sensor data for node %(node)s. " "Error: %(error)s") class InsufficientDiskSpace(IronicException): _msg_fmt = _("Disk volume where '%(path)s' is located doesn't have " "enough disk space. Required %(required)d MiB, " "only %(actual)d MiB available space present.") class ImageCreationFailed(IronicException): _msg_fmt = _('Creating %(image_type)s image failed: %(error)s') class SwiftOperationError(IronicException): _msg_fmt = _("Swift operation '%(operation)s' failed: %(error)s") class SwiftObjectNotFoundError(SwiftOperationError): _msg_fmt = _("Swift object %(object)s from container %(container)s " "not found. Operation '%(operation)s' failed.") class SNMPFailure(IronicException): _msg_fmt = _("SNMP operation '%(operation)s' failed: %(error)s") class FileSystemNotSupported(IronicException): _msg_fmt = _("Failed to create a file system. " "File system %(fs)s is not supported.") class IRMCOperationError(IronicException): _msg_fmt = _('iRMC %(operation)s failed. Reason: %(error)s') class IRMCSharedFileSystemNotMounted(IronicException): _msg_fmt = _("iRMC shared file system '%(share)s' is not mounted.") class VirtualBoxOperationFailed(IronicException): _msg_fmt = _("VirtualBox operation '%(operation)s' failed. " "Error: %(error)s") class HardwareInspectionFailure(IronicException): _msg_fmt = _("Failed to inspect hardware. Reason: %(error)s") class NodeCleaningFailure(IronicException): _msg_fmt = _("Failed to clean node %(node)s: %(reason)s") class PathNotFound(IronicException): _msg_fmt = _("Path %(dir)s does not exist.") class DirectoryNotWritable(IronicException): _msg_fmt = _("Directory %(dir)s is not writable.") class UcsOperationError(IronicException): _msg_fmt = _("Cisco UCS client: operation %(operation)s failed for node" " %(node)s. Reason: %(error)s") class UcsConnectionError(IronicException): _msg_fmt = _("Cisco UCS client: connection failed for node " "%(node)s. Reason: %(error)s") class WolOperationError(IronicException): pass class ImageUploadFailed(IronicException): _msg_fmt = _("Failed to upload %(image_name)s image to web server " "%(web_server)s, reason: %(reason)s") class CIMCException(IronicException): _msg_fmt = _("Cisco IMC exception occurred for node %(node)s: %(error)s") class OneViewError(IronicException): _msg_fmt = _("OneView exception occurred. Error: %(error)s") class NodeTagNotFound(IronicException): _msg_fmt = _("Node %(node_id)s doesn't have a tag '%(tag)s'") ironic-5.1.0/ironic/common/states.py0000664000567000056710000002523512674513466020616 0ustar jenkinsjenkins00000000000000# Copyright (c) 2012 NTT DOCOMO, INC. # Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Mapping of bare metal node states. Setting the node `power_state` is handled by the conductor's power synchronization thread. 
Based on the power state retrieved from the driver for the node, the state is set to POWER_ON or POWER_OFF, accordingly. Should this fail, the `power_state` value is left unchanged, and the node is placed into maintenance mode. The `power_state` can also be set manually via the API. A failure to change the state leaves the current state unchanged. The node is NOT placed into maintenance mode in this case. """ from oslo_log import log as logging from ironic.common import fsm LOG = logging.getLogger(__name__) ##################### # Provisioning states ##################### # TODO(deva): add add'l state mappings here VERBS = { 'active': 'deploy', 'deleted': 'delete', 'manage': 'manage', 'provide': 'provide', 'inspect': 'inspect', 'abort': 'abort', 'clean': 'clean', } """ Mapping of state-changing events that are PUT to the REST API This is a mapping of target states which are PUT to the API, eg, PUT /v1/node/states/provision {'target': 'active'} The dict format is: {target string used by the API: internal verb} This provides a reference set of supported actions, and in the future may be used to support renaming these actions. """ NOSTATE = None """ No state information. This state is used with power_state to represent a lack of knowledge of power state, and in target_*_state fields when there is no target. """ ENROLL = 'enroll' """ Node is enrolled. This state indicates that Ironic is aware of a node, but is not managing it. """ VERIFYING = 'verifying' """ Node power management credentials are being verified. """ MANAGEABLE = 'manageable' """ Node is in a manageable state. This state indicates that Ironic has verified, at least once, that it had sufficient information to manage the hardware. While in this state, the node is not available for provisioning (it must be in the AVAILABLE state for that). """ AVAILABLE = 'available' """ Node is available for use and scheduling. This state is replacing the NOSTATE state used prior to Kilo. """ ACTIVE = 'active' """ Node is successfully deployed and associated with an instance. """ DEPLOYWAIT = 'wait call-back' """ Node is waiting to be deployed. This will be the node `provision_state` while the node is waiting for the driver to finish deployment. """ DEPLOYING = 'deploying' """ Node is ready to receive a deploy request, or is currently being deployed. A node will have its `provision_state` set to DEPLOYING briefly before it receives its initial deploy request. It will also move to this state from DEPLOYWAIT after the callback is triggered and deployment is continued (disk partitioning and image copying). """ DEPLOYFAIL = 'deploy failed' """ Node deployment failed. """ DEPLOYDONE = 'deploy complete' """ Node was successfully deployed. This is mainly a target provision state used during deployment. A successfully deployed node should go to ACTIVE status. """ DELETING = 'deleting' """ Node is actively being torn down. """ DELETED = 'deleted' """ Node tear down was successful. In Juno, target_provision_state was set to this value during node tear down. In Kilo, this will be a transitory value of provision_state, and never represented in target_provision_state. """ CLEANING = 'cleaning' """ Node is being automatically cleaned to prepare it for provisioning. """ CLEANWAIT = 'clean wait' """ Node is waiting for a clean step to be finished. This will be the node's `provision_state` while the node is waiting for the driver to finish a cleaning step. """ CLEANFAIL = 'clean failed' """ Node failed cleaning. This requires operator intervention to resolve. 
""" ERROR = 'error' """ An error occurred during node processing. The `last_error` attribute of the node details should contain an error message. """ REBUILD = 'rebuild' """ Node is to be rebuilt. This is not used as a state, but rather as a "verb" when changing the node's provision_state via the REST API. """ INSPECTING = 'inspecting' """ Node is under inspection. This is the provision state used when inspection is started. A successfully inspected node shall transition to MANAGEABLE status. """ INSPECTFAIL = 'inspect failed' """ Node inspection failed. """ UPDATE_ALLOWED_STATES = (DEPLOYFAIL, INSPECTING, INSPECTFAIL, CLEANFAIL, ERROR, VERIFYING) """Transitional states in which we allow updating a node.""" DELETE_ALLOWED_STATES = (AVAILABLE, NOSTATE, MANAGEABLE, ENROLL) """States in which node deletion is allowed.""" STABLE_STATES = (ENROLL, MANAGEABLE, AVAILABLE, ACTIVE, ERROR) """States that will not transition unless receiving a request.""" ############## # Power states ############## POWER_ON = 'power on' """ Node is powered on. """ POWER_OFF = 'power off' """ Node is powered off. """ REBOOT = 'rebooting' """ Node is rebooting. """ ##################### # State machine model ##################### def on_exit(old_state, event): """Used to log when a state is exited.""" LOG.debug("Exiting old state '%s' in response to event '%s'", old_state, event) def on_enter(new_state, event): """Used to log when entering a state.""" LOG.debug("Entering new state '%s' in response to event '%s'", new_state, event) watchers = {} watchers['on_exit'] = on_exit watchers['on_enter'] = on_enter machine = fsm.FSM() # Add stable states for state in STABLE_STATES: machine.add_state(state, stable=True, **watchers) # Add verifying state machine.add_state(VERIFYING, target=MANAGEABLE, **watchers) # Add deploy* states # NOTE(deva): Juno shows a target_provision_state of DEPLOYDONE # this is changed in Kilo to ACTIVE machine.add_state(DEPLOYING, target=ACTIVE, **watchers) machine.add_state(DEPLOYWAIT, target=ACTIVE, **watchers) machine.add_state(DEPLOYFAIL, target=ACTIVE, **watchers) # Add clean* states machine.add_state(CLEANING, target=AVAILABLE, **watchers) machine.add_state(CLEANWAIT, target=AVAILABLE, **watchers) machine.add_state(CLEANFAIL, target=AVAILABLE, **watchers) # Add delete* states machine.add_state(DELETING, target=AVAILABLE, **watchers) # From AVAILABLE, a deployment may be started machine.add_transition(AVAILABLE, DEPLOYING, 'deploy') # Add inspect* states. 
machine.add_state(INSPECTING, target=MANAGEABLE, **watchers) machine.add_state(INSPECTFAIL, target=MANAGEABLE, **watchers) # A deployment may fail machine.add_transition(DEPLOYING, DEPLOYFAIL, 'fail') # A failed deployment may be retried # ironic/conductor/manager.py:do_node_deploy() machine.add_transition(DEPLOYFAIL, DEPLOYING, 'rebuild') # NOTE(deva): Juno allows a client to send "active" to initiate a rebuild machine.add_transition(DEPLOYFAIL, DEPLOYING, 'deploy') # A deployment may also wait on external callbacks machine.add_transition(DEPLOYING, DEPLOYWAIT, 'wait') machine.add_transition(DEPLOYWAIT, DEPLOYING, 'resume') # A deployment waiting on callback may time out machine.add_transition(DEPLOYWAIT, DEPLOYFAIL, 'fail') # A deployment may complete machine.add_transition(DEPLOYING, ACTIVE, 'done') # An active instance may be re-deployed # ironic/conductor/manager.py:do_node_deploy() machine.add_transition(ACTIVE, DEPLOYING, 'rebuild') # An active instance may be deleted # ironic/conductor/manager.py:do_node_tear_down() machine.add_transition(ACTIVE, DELETING, 'delete') # While a deployment is waiting, it may be deleted # ironic/conductor/manager.py:do_node_tear_down() machine.add_transition(DEPLOYWAIT, DELETING, 'delete') # A failed deployment may also be deleted # ironic/conductor/manager.py:do_node_tear_down() machine.add_transition(DEPLOYFAIL, DELETING, 'delete') # This state can also transition to error machine.add_transition(DELETING, ERROR, 'error') # When finished deleting, a node will begin cleaning machine.add_transition(DELETING, CLEANING, 'clean') # If cleaning succeeds, it becomes available for scheduling machine.add_transition(CLEANING, AVAILABLE, 'done') # If cleaning fails, wait for operator intervention machine.add_transition(CLEANING, CLEANFAIL, 'fail') machine.add_transition(CLEANWAIT, CLEANFAIL, 'fail') # While waiting for a clean step to be finished, cleaning may be aborted machine.add_transition(CLEANWAIT, CLEANFAIL, 'abort') # Cleaning may also wait on external callbacks machine.add_transition(CLEANING, CLEANWAIT, 'wait') machine.add_transition(CLEANWAIT, CLEANING, 'resume') # An operator may want to move a CLEANFAIL node to MANAGEABLE, to perform # other actions like cleaning machine.add_transition(CLEANFAIL, MANAGEABLE, 'manage') # From MANAGEABLE, a node may move to available after going through automated # cleaning machine.add_transition(MANAGEABLE, CLEANING, 'provide') # From MANAGEABLE, a node may be manually cleaned, going back to manageable # after cleaning is completed machine.add_transition(MANAGEABLE, CLEANING, 'clean') machine.add_transition(CLEANING, MANAGEABLE, 'manage') # From AVAILABLE, a node may be made unavailable by managing it machine.add_transition(AVAILABLE, MANAGEABLE, 'manage') # An errored instance can be rebuilt # ironic/conductor/manager.py:do_node_deploy() machine.add_transition(ERROR, DEPLOYING, 'rebuild') # or deleted # ironic/conductor/manager.py:do_node_tear_down() machine.add_transition(ERROR, DELETING, 'delete') # Added transitions for inspection. # Initiate inspection. machine.add_transition(MANAGEABLE, INSPECTING, 'inspect') # ironic/conductor/manager.py:inspect_hardware(). machine.add_transition(INSPECTING, MANAGEABLE, 'done') # Inspection may fail. machine.add_transition(INSPECTING, INSPECTFAIL, 'fail') # Move the node to manageable state for any other # action. machine.add_transition(INSPECTFAIL, MANAGEABLE, 'manage') # Reinitiate the inspect after inspectfail. 
machine.add_transition(INSPECTFAIL, INSPECTING, 'inspect') # Start power credentials verification machine.add_transition(ENROLL, VERIFYING, 'manage') # Verification can succeed machine.add_transition(VERIFYING, MANAGEABLE, 'done') # Verification can fail with setting last_error and rolling back to ENROLL machine.add_transition(VERIFYING, ENROLL, 'fail') ironic-5.1.0/ironic/common/isolinux_config.template0000664000567000056710000000014512674513466023666 0ustar jenkinsjenkins00000000000000default boot label boot kernel {{ kernel }} append initrd={{ ramdisk }} text {{ kernel_params }} -- ironic-5.1.0/ironic/common/driver_factory.py0000664000567000056710000001642412674513466022335 0ustar jenkinsjenkins00000000000000# Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections from oslo_concurrency import lockutils from oslo_config import cfg from oslo_log import log from stevedore import dispatch from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LI from ironic.drivers import base as driver_base LOG = log.getLogger(__name__) driver_opts = [ cfg.ListOpt('enabled_drivers', default=['pxe_ipmitool'], help=_('Specify the list of drivers to load during service ' 'initialization. Missing drivers, or drivers which ' 'fail to initialize, will prevent the conductor ' 'service from starting. The option default is a ' 'recommended set of production-oriented drivers. A ' 'complete list of drivers present on your system may ' 'be found by enumerating the "ironic.drivers" ' 'entrypoint. An example may be found in the ' 'developer documentation online.')), ] CONF = cfg.CONF CONF.register_opts(driver_opts) EM_SEMAPHORE = 'extension_manager' def build_driver_for_task(task, driver_name=None): """Builds a composable driver for a given task. Starts with a `BareDriver` object, and attaches implementations of the various driver interfaces to it. Currently these all come from the monolithic driver singleton, but later will come from separate driver factories and configurable via the database. :param task: The task containing the node to build a driver for. :param driver_name: The name of the monolithic driver to use as a base, if different than task.node.driver. :returns: A driver object for the task. :raises: DriverNotFound if node.driver could not be found in the "ironic.drivers" namespace. """ node = task.node driver = driver_base.BareDriver() _attach_interfaces_to_driver(driver, node, driver_name=driver_name) return driver def _attach_interfaces_to_driver(driver, node, driver_name=None): driver_singleton = get_driver(driver_name or node.driver) for iface in driver_singleton.all_interfaces: impl = getattr(driver_singleton, iface, None) setattr(driver, iface, impl) def get_driver(driver_name): """Simple method to get a ref to an instance of a driver. Driver loading is handled by the DriverFactory class. This method conveniently wraps that class and returns the actual driver object. 
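
    For example (illustrative; any name exposed via the "ironic.drivers"
    entrypoint works, such as the default)::

        driver = get_driver('pxe_ipmitool')
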
:param driver_name: the name of the driver class to load :returns: An instance of a class which implements ironic.drivers.base.BaseDriver :raises: DriverNotFound if the requested driver_name could not be found in the "ironic.drivers" namespace. """ try: factory = DriverFactory() return factory[driver_name].obj except KeyError: raise exception.DriverNotFound(driver_name=driver_name) def drivers(): """Get all drivers as a dict name -> driver object.""" factory = DriverFactory() # NOTE(jroll) I don't think this needs to be ordered, but # ConductorManager.init_host seems to depend on this behavior (or at # least the unit tests for it do), and it can't hurt much to keep it # that way. return collections.OrderedDict((name, factory[name].obj) for name in factory.names) class DriverFactory(object): """Discover, load and manage the drivers available.""" # NOTE(deva): loading the _extension_manager as a class member will break # stevedore when it loads a driver, because the driver will # import this file (and thus instantiate another factory). # Instead, we instantiate a NameDispatchExtensionManager only # once, the first time DriverFactory.__init__ is called. _extension_manager = None def __init__(self): if not DriverFactory._extension_manager: DriverFactory._init_extension_manager() def __getitem__(self, name): return self._extension_manager[name] # NOTE(deva): Use lockutils to avoid a potential race in eventlet # that might try to create two driver factories. @classmethod @lockutils.synchronized(EM_SEMAPHORE, 'ironic-') def _init_extension_manager(cls): # NOTE(deva): In case multiple greenthreads queue up on this lock # before _extension_manager is initialized, prevent # creation of multiple NameDispatchExtensionManagers. if cls._extension_manager: return # NOTE(deva): Drivers raise "DriverLoadError" if they are unable to be # loaded, eg. due to missing external dependencies. # We capture that exception, and, only if it is for an # enabled driver, raise it from here. If enabled driver # raises other exception type, it is wrapped in # "DriverLoadError", providing the name of the driver that # caused it, and raised. If the exception is for a # non-enabled driver, we suppress it. def _catch_driver_not_found(mgr, ep, exc): # NOTE(deva): stevedore loads plugins *before* evaluating # _check_func, so we need to check here, too. if ep.name in CONF.enabled_drivers: if not isinstance(exc, exception.DriverLoadError): raise exception.DriverLoadError(driver=ep.name, reason=exc) raise exc def _check_func(ext): return ext.name in CONF.enabled_drivers cls._extension_manager = ( dispatch.NameDispatchExtensionManager( 'ironic.drivers', _check_func, invoke_on_load=True, on_load_failure_callback=_catch_driver_not_found)) # NOTE(deva): if we were unable to load any configured driver, perhaps # because it is not present on the system, raise an error. if (sorted(CONF.enabled_drivers) != sorted(cls._extension_manager.names())): found = cls._extension_manager.names() names = [n for n in CONF.enabled_drivers if n not in found] # just in case more than one could not be found ... 
names = ', '.join(names) raise exception.DriverNotFound(driver_name=names) LOG.info(_LI("Loaded the following drivers: %s"), cls._extension_manager.names()) @property def names(self): """The list of driver names available.""" return self._extension_manager.names() ironic-5.1.0/ironic/common/keystone.py0000664000567000056710000001264312674513466021153 0ustar jenkinsjenkins00000000000000# coding=utf-8 # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from keystoneclient import exceptions as ksexception from oslo_concurrency import lockutils from oslo_config import cfg from six.moves.urllib import parse from ironic.common import exception from ironic.common.i18n import _ CONF = cfg.CONF keystone_opts = [ cfg.StrOpt('region_name', help=_('The region used for getting endpoints of OpenStack' ' services.')), ] CONF.register_opts(keystone_opts, group='keystone') CONF.import_group('keystone_authtoken', 'keystonemiddleware.auth_token') _KS_CLIENT = None def _is_apiv3(auth_url, auth_version): """Checks if V3 version of API is being used or not. This method inspects auth_url and auth_version, and checks whether V3 version of the API is being used or not. :param auth_url: a http or https url to be inspected (like 'http://127.0.0.1:9898/'). :param auth_version: a string containing the version (like 'v2', 'v3.0') :returns: True if V3 of the API is being used. """ return auth_version == 'v3.0' or '/v3' in parse.urlparse(auth_url).path def _get_ksclient(token=None): auth_url = CONF.keystone_authtoken.auth_uri if not auth_url: raise exception.KeystoneFailure(_('Keystone API endpoint is missing')) auth_version = CONF.keystone_authtoken.auth_version api_v3 = _is_apiv3(auth_url, auth_version) if api_v3: from keystoneclient.v3 import client else: from keystoneclient.v2_0 import client auth_url = get_keystone_url(auth_url, auth_version) try: if token: return client.Client(token=token, auth_url=auth_url) else: params = {'username': CONF.keystone_authtoken.admin_user, 'password': CONF.keystone_authtoken.admin_password, 'tenant_name': CONF.keystone_authtoken.admin_tenant_name, 'region_name': CONF.keystone.region_name, 'auth_url': auth_url} return _get_ksclient_from_conf(client, **params) except ksexception.Unauthorized: raise exception.KeystoneUnauthorized() except ksexception.AuthorizationFailure as err: raise exception.KeystoneFailure(_('Could not authorize in Keystone:' ' %s') % err) @lockutils.synchronized('keystone_client', 'ironic-') def _get_ksclient_from_conf(client, **params): global _KS_CLIENT # NOTE(yuriyz): use Keystone client default gap, to determine whether the # given token is about to expire if _KS_CLIENT is None or _KS_CLIENT.auth_ref.will_expire_soon(): _KS_CLIENT = client.Client(**params) return _KS_CLIENT def get_keystone_url(auth_url, auth_version): """Gives an http/https url to contact keystone. Given an auth_url and auth_version, this method generates the url in which keystone can be reached. :param auth_url: a http or https url to be inspected (like 'http://127.0.0.1:9898/'). 
:param auth_version: a string containing the version (like v2, v3.0, etc) :returns: a string containing the keystone url """ api_v3 = _is_apiv3(auth_url, auth_version) api_version = 'v3' if api_v3 else 'v2.0' # NOTE(lucasagomes): Get rid of the trailing '/' otherwise urljoin() # fails to override the version in the URL return parse.urljoin(auth_url.rstrip('/'), api_version) def get_service_url(service_type='baremetal', endpoint_type='internal'): """Wrapper for get service url from keystone service catalog. Given a service_type and an endpoint_type, this method queries keystone service catalog and provides the url for the desired endpoint. :param service_type: the keystone service for which url is required. :param endpoint_type: the type of endpoint for the service. :returns: an http/https url for the desired endpoint. """ ksclient = _get_ksclient() if not ksclient.has_service_catalog(): raise exception.KeystoneFailure(_('No Keystone service catalog ' 'loaded')) try: endpoint = ksclient.service_catalog.url_for( service_type=service_type, endpoint_type=endpoint_type, region_name=CONF.keystone.region_name) except ksexception.EndpointNotFound: raise exception.CatalogNotFound(service_type=service_type, endpoint_type=endpoint_type) return endpoint def get_admin_auth_token(): """Get an admin auth_token from the Keystone.""" ksclient = _get_ksclient() return ksclient.auth_token def token_expires_soon(token, duration=None): """Determines if token expiration is about to occur. :param duration: time interval in seconds :returns: boolean : true if expiration is within the given duration """ ksclient = _get_ksclient(token=token) return ksclient.auth_ref.will_expire_soon(stale_duration=duration) ironic-5.1.0/ironic/common/boot_devices.py0000664000567000056710000000240412674513466021751 0ustar jenkinsjenkins00000000000000# Copyright 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Mapping of boot devices used when requesting the system to boot from an alternate device. The options presented were based on the IPMItool chassis bootdev command. You can find the documentation at: http://linux.die.net/man/1/ipmitool NOTE: This module does not include all the options from ipmitool because they don't make sense in the limited context of Ironic right now. """ PXE = 'pxe' "Boot from PXE boot" DISK = 'disk' "Boot from default Hard-drive" CDROM = 'cdrom' "Boot from CD/DVD" BIOS = 'bios' "Boot into BIOS setup" SAFE = 'safe' "Boot from default Hard-drive, request Safe Mode" WANBOOT = 'wanboot' "Boot from Wide Area Network" ironic-5.1.0/ironic/common/grub_conf.template0000664000567000056710000000023612674513466022434 0ustar jenkinsjenkins00000000000000set default=0 set timeout=5 set hidden_timeout_quiet=false menuentry "boot_partition" { linuxefi {{ linux }} {{ kernel_params }} -- initrdefi {{ initrd }} } ironic-5.1.0/ironic/common/dhcp_factory.py0000664000567000056710000000771612674513466021764 0ustar jenkinsjenkins00000000000000# Copyright 2014 Rackspace, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_concurrency import lockutils from oslo_config import cfg import stevedore from ironic.common import exception from ironic.common.i18n import _ dhcp_provider_opts = [ cfg.StrOpt('dhcp_provider', default='neutron', help=_('DHCP provider to use. "neutron" uses Neutron, and ' '"none" uses a no-op provider.')), ] CONF = cfg.CONF CONF.register_opts(dhcp_provider_opts, group='dhcp') _dhcp_provider = None EM_SEMAPHORE = 'dhcp_provider' class DHCPFactory(object): # NOTE(lucasagomes): Instantiate a stevedore.driver.DriverManager # only once, the first time DHCPFactory.__init__ # is called. _dhcp_provider = None def __init__(self, **kwargs): if not DHCPFactory._dhcp_provider: DHCPFactory._set_dhcp_provider(**kwargs) # NOTE(lucasagomes): Use lockutils to avoid a potential race in eventlet # that might try to create two dhcp factories. @classmethod @lockutils.synchronized(EM_SEMAPHORE, 'ironic-') def _set_dhcp_provider(cls, **kwargs): """Initialize the dhcp provider :raises: DHCPLoadError if the dhcp_provider cannot be loaded. """ # NOTE(lucasagomes): In case multiple greenthreads queue up on # this lock before _dhcp_provider is initialized, # prevent creation of multiple DriverManager. if cls._dhcp_provider: return dhcp_provider_name = CONF.dhcp.dhcp_provider try: _extension_manager = stevedore.driver.DriverManager( 'ironic.dhcp', dhcp_provider_name, invoke_kwds=kwargs, invoke_on_load=True) except Exception as e: raise exception.DHCPLoadError( dhcp_provider_name=dhcp_provider_name, reason=e ) cls._dhcp_provider = _extension_manager.driver def update_dhcp(self, task, dhcp_opts, ports=None): """Send or update the DHCP BOOT options for this node. :param task: A TaskManager instance. :param dhcp_opts: this will be a list of dicts, e.g. :: [{'opt_name': 'bootfile-name', 'opt_value': 'pxelinux.0'}, {'opt_name': 'server-ip-address', 'opt_value': '123.123.123.456'}, {'opt_name': 'tftp-server', 'opt_value': '123.123.123.123'}] :param ports: A dict with keys 'ports' and 'portgroups' and dicts as values. Each dict has key/value pairs of the form :. e.g. :: {'ports': {'port.uuid': vif.id}, 'portgroups': {'portgroup.uuid': vif.id}} If the value is None, will get the list of ports/portgroups from the Ironic port/portgroup objects. """ self.provider.update_dhcp_opts(task, dhcp_opts, ports) def clean_dhcp(self, task): """Clean up the DHCP BOOT options for this node. :param task: A TaskManager instance. 
""" self.provider.clean_dhcp_opts(task) @property def provider(self): return self._dhcp_provider ironic-5.1.0/ironic/common/glance_service/0000775000567000056710000000000012674513633021677 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/common/glance_service/v2/0000775000567000056710000000000012674513633022226 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/common/glance_service/v2/__init__.py0000664000567000056710000000000012674513466024331 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/common/glance_service/v2/image_service.py0000664000567000056710000003751412674513466025420 0ustar jenkinsjenkins00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import time from oslo_config import cfg from oslo_utils import uuidutils import six from six.moves.urllib import parse as urlparse from swiftclient import utils as swift_utils from ironic.common import exception as exc from ironic.common.glance_service import base_image_service from ironic.common.glance_service import service from ironic.common.glance_service import service_utils from ironic.common.i18n import _ glance_opts = [ cfg.ListOpt('allowed_direct_url_schemes', default=[], help=_('A list of URL schemes that can be downloaded directly ' 'via the direct_url. Currently supported schemes: ' '[file].')), # To upload this key to Swift: # swift post -m Temp-Url-Key:secretkey # When using radosgw, temp url key could be uploaded via the above swift # command, or with: # radosgw-admin user modify --uid=user --temp-url-key=secretkey cfg.StrOpt('swift_temp_url_key', help=_('The secret token given to Swift to allow temporary URL ' 'downloads. Required for temporary URLs.'), secret=True), cfg.IntOpt('swift_temp_url_duration', default=1200, help=_('The length of time in seconds that the temporary URL ' 'will be valid for. Defaults to 20 minutes. If some ' 'deploys get a 401 response code when trying to ' 'download from the temporary URL, try raising this ' 'duration. This value must be greater than or equal to ' 'the value for ' 'swift_temp_url_expected_download_start_delay')), cfg.BoolOpt('swift_temp_url_cache_enabled', default=False, help=_('Whether to cache generated Swift temporary URLs. ' 'Setting it to true is only useful when an image ' 'caching proxy is used. Defaults to False.')), cfg.IntOpt('swift_temp_url_expected_download_start_delay', default=0, min=0, help=_('This is the delay (in seconds) from the time of the ' 'deploy request (when the Swift temporary URL is ' 'generated) to when the IPA ramdisk starts up and URL ' 'is used for the image download. This value is used to ' 'check if the Swift temporary URL duration is large ' 'enough to let the image download begin. Also if ' 'temporary URL caching is enabled this will determine ' 'if a cached entry will still be valid when the ' 'download starts. swift_temp_url_duration value must be ' 'greater than or equal to this option\'s value. 
' 'Defaults to 0.')), cfg.StrOpt( 'swift_endpoint_url', help=_('The "endpoint" (scheme, hostname, optional port) for ' 'the Swift URL of the form ' '"endpoint_url/api_version/[account/]container/object_id". ' 'Do not include trailing "/". ' 'For example, use "https://swift.example.com". If using RADOS ' 'Gateway, endpoint may also contain /swift path; if it does ' 'not, it will be appended. Required for temporary URLs.')), cfg.StrOpt( 'swift_api_version', default='v1', help=_('The Swift API version to create a temporary URL for. ' 'Defaults to "v1". Swift temporary URL format: ' '"endpoint_url/api_version/[account/]container/object_id"')), cfg.StrOpt( 'swift_account', help=_('The account that Glance uses to communicate with ' 'Swift. The format is "AUTH_uuid". "uuid" is the ' 'UUID for the account configured in the glance-api.conf. ' 'Required for temporary URLs when Glance backend is Swift. ' 'For example: "AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30". ' 'Swift temporary URL format: ' '"endpoint_url/api_version/[account/]container/object_id"')), cfg.StrOpt( 'swift_container', default='glance', help=_('The Swift container Glance is configured to store its ' 'images in. Defaults to "glance", which is the default ' 'in glance-api.conf. ' 'Swift temporary URL format: ' '"endpoint_url/api_version/[account/]container/object_id"')), cfg.IntOpt('swift_store_multiple_containers_seed', default=0, help=_('This should match a config by the same name in the ' 'Glance configuration file. When set to 0, a ' 'single-tenant store will only use one ' 'container to store all images. When set to an integer ' 'value between 1 and 32, a single-tenant store will use ' 'multiple containers to store images, and this value ' 'will determine how many containers are created.')), cfg.StrOpt('temp_url_endpoint_type', default='swift', choices=['swift', 'radosgw'], help=_('Type of endpoint to use for temporary URLs. If the ' 'Glance backend is Swift, use "swift"; if it is CEPH ' 'with RADOS gateway, use "radosgw".')), ] CONF = cfg.CONF CONF.register_opts(glance_opts, group='glance') TempUrlCacheElement = collections.namedtuple('TempUrlCacheElement', ['url', 'url_expires_at']) class GlanceImageService(base_image_service.BaseImageService, service.ImageService): # A dictionary containing cached temp URLs in namedtuples # in format: # { # : ( # url=, # url_expires_at= # ) # } _cache = {} def detail(self, **kwargs): return self._detail(method='list', **kwargs) def show(self, image_id): return self._show(image_id, method='get') def download(self, image_id, data=None): return self._download(image_id, method='data', data=data) def create(self, image_meta, data=None): image_id = self._create(image_meta, method='create', data=None)['id'] return self.update(image_id, None, data) def update(self, image_id, image_meta, data=None, purge_props=False): # NOTE(ghe): purge_props not working until bug 1206472 solved return self._update(image_id, image_meta, data, method='update', purge_props=False) def delete(self, image_id): return self._delete(image_id, method='delete') def _generate_temp_url(self, path, seconds, key, method, endpoint, image_id): """Get Swift temporary URL. Generates (or returns the cached one if caching is enabled) a temporary URL that gives unauthenticated access to the Swift object. :param path: The full path to the Swift object. Example: /v1/AUTH_account/c/o. :param seconds: The amount of time in seconds the temporary URL will be valid for. :param key: The secret temporary URL key set on the Swift cluster. 
:param method: A HTTP method, typically either GET or PUT, to allow for this temporary URL. :param endpoint: Endpoint URL of Swift service. :param image_id: UUID of a Glance image. :returns: temporary URL """ if CONF.glance.swift_temp_url_cache_enabled: self._remove_expired_items_from_cache() if image_id in self._cache: return self._cache[image_id].url path = swift_utils.generate_temp_url( path=path, seconds=seconds, key=key, method=method) temp_url = '{endpoint_url}{url_path}'.format( endpoint_url=endpoint, url_path=path) if CONF.glance.swift_temp_url_cache_enabled: query = urlparse.urlparse(temp_url).query exp_time_str = dict(urlparse.parse_qsl(query))['temp_url_expires'] self._cache[image_id] = TempUrlCacheElement( url=temp_url, url_expires_at=int(exp_time_str) ) return temp_url def swift_temp_url(self, image_info): """Generate a no-auth Swift temporary URL. This function will generate (or return the cached one if temp URL cache is enabled) the temporary Swift URL using the image id from Glance and the config options: 'swift_endpoint_url', 'swift_api_version', 'swift_account' and 'swift_container'. The temporary URL will be valid for 'swift_temp_url_duration' seconds. This allows Ironic to download a Glance image without passing around an auth_token. :param image_info: The return from a GET request to Glance for a certain image_id. Should be a dictionary, with keys like 'name' and 'checksum'. See http://docs.openstack.org/developer/glance/glanceapi.html for examples. :returns: A signed Swift URL from which an image can be downloaded, without authentication. :raises: InvalidParameterValue if Swift config options are not set correctly. :raises: MissingParameterValue if a required parameter is not set. :raises: ImageUnacceptable if the image info from Glance does not have a image ID. """ self._validate_temp_url_config() if ('id' not in image_info or not uuidutils.is_uuid_like(image_info['id'])): raise exc.ImageUnacceptable(_( 'The given image info does not have a valid image id: %s') % image_info) image_id = image_info['id'] url_fragments = { 'api_version': CONF.glance.swift_api_version, 'account': CONF.glance.swift_account, 'container': self._get_swift_container(image_id), 'object_id': image_id } endpoint_url = CONF.glance.swift_endpoint_url if CONF.glance.temp_url_endpoint_type == 'radosgw': chunks = urlparse.urlsplit(CONF.glance.swift_endpoint_url) if not chunks.path: endpoint_url = urlparse.urljoin( endpoint_url, 'swift') elif chunks.path != '/swift': raise exc.InvalidParameterValue( _('Swift endpoint URL should only contain scheme, ' 'hostname, optional port and optional /swift path ' 'without trailing slash; provided value is: %s') % endpoint_url) template = '/{api_version}/{container}/{object_id}' else: template = '/{api_version}/{account}/{container}/{object_id}' url_path = template.format(**url_fragments) return self._generate_temp_url( path=url_path, seconds=CONF.glance.swift_temp_url_duration, key=CONF.glance.swift_temp_url_key, method='GET', endpoint=endpoint_url, image_id=image_id ) def _validate_temp_url_config(self): """Validate the required settings for a temporary URL.""" if not CONF.glance.swift_temp_url_key: raise exc.MissingParameterValue(_( 'Swift temporary URLs require a shared secret to be created. ' 'You must provide "swift_temp_url_key" as a config option.')) if not CONF.glance.swift_endpoint_url: raise exc.MissingParameterValue(_( 'Swift temporary URLs require a Swift endpoint URL. 
' 'You must provide "swift_endpoint_url" as a config option.')) if (not CONF.glance.swift_account and CONF.glance.temp_url_endpoint_type == 'swift'): raise exc.MissingParameterValue(_( 'Swift temporary URLs require a Swift account string. ' 'You must provide "swift_account" as a config option.')) if (CONF.glance.swift_temp_url_duration < CONF.glance.swift_temp_url_expected_download_start_delay): raise exc.InvalidParameterValue(_( '"swift_temp_url_duration" must be greater than or equal to ' '"[glance]swift_temp_url_expected_download_start_delay" ' 'option, otherwise the Swift temporary URL may expire before ' 'the download starts.')) seed_num_chars = CONF.glance.swift_store_multiple_containers_seed if (seed_num_chars is None or seed_num_chars < 0 or seed_num_chars > 32): raise exc.InvalidParameterValue(_( "An integer value between 0 and 32 is required for" " swift_store_multiple_containers_seed.")) def _get_swift_container(self, image_id): """Get the Swift container the image is stored in. Code based on: https://github.com/openstack/glance_store/blob/3cd690b3 7dc9d935445aca0998e8aec34a3e3530/glance_store/ _drivers/swift/store.py#L725 Returns appropriate container name depending upon value of ``swift_store_multiple_containers_seed``. In single-container mode, which is a seed value of 0, simply returns ``swift_container``. In multiple-container mode, returns ``swift_container`` as the prefix plus a suffix determined by the multiple container seed examples: single-container mode: 'glance' multiple-container mode: 'glance_3a1' for image uuid 3A1xxxxxxx... :param image_id: UUID of image :returns: The name of the swift container the image is stored in """ seed_num_chars = CONF.glance.swift_store_multiple_containers_seed if seed_num_chars > 0: image_id = str(image_id).lower() num_dashes = image_id[:seed_num_chars].count('-') num_chars = seed_num_chars + num_dashes name_suffix = image_id[:num_chars] new_container_name = (CONF.glance.swift_container + '_' + name_suffix) return new_container_name else: return CONF.glance.swift_container def _get_location(self, image_id): """Get storage URL. Returns the direct url representing the backend storage location, or None if this attribute is not shown by Glance. """ image_meta = self.call('get', image_id) if not service_utils.is_image_available(self.context, image_meta): raise exc.ImageNotFound(image_id=image_id) return getattr(image_meta, 'direct_url', None) def _remove_expired_items_from_cache(self): """Remove expired items from temporary URL cache This function removes entries that will expire before the expected usage time. """ max_valid_time = ( int(time.time()) + CONF.glance.swift_temp_url_expected_download_start_delay) keys_to_remove = [ k for k, v in six.iteritems(self._cache) if (v.url_expires_at < max_valid_time)] for k in keys_to_remove: del self._cache[k] ironic-5.1.0/ironic/common/glance_service/base_image_service.py0000664000567000056710000002500012674513466026046 0ustar jenkinsjenkins00000000000000# Copyright 2010 OpenStack Foundation # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import sys import time from glanceclient import client from glanceclient import exc as glance_exc from oslo_config import cfg from oslo_log import log import sendfile import six import six.moves.urllib.parse as urlparse from ironic.common import exception from ironic.common.glance_service import service_utils from ironic.common.i18n import _LE LOG = log.getLogger(__name__) CONF = cfg.CONF def _translate_image_exception(image_id, exc_value): if isinstance(exc_value, (glance_exc.Forbidden, glance_exc.Unauthorized)): return exception.ImageNotAuthorized(image_id=image_id) if isinstance(exc_value, glance_exc.NotFound): return exception.ImageNotFound(image_id=image_id) if isinstance(exc_value, glance_exc.BadRequest): return exception.Invalid(exc_value) return exc_value def _translate_plain_exception(exc_value): if isinstance(exc_value, (glance_exc.Forbidden, glance_exc.Unauthorized)): return exception.NotAuthorized(exc_value) if isinstance(exc_value, glance_exc.NotFound): return exception.NotFound(exc_value) if isinstance(exc_value, glance_exc.BadRequest): return exception.Invalid(exc_value) return exc_value def check_image_service(func): """Creates a glance client if one doesn't exist and calls the function.""" @six.wraps(func) def wrapper(self, *args, **kwargs): """Wrapper around method calls. :param image_href: href that describes the location of an image """ if self.client: return func(self, *args, **kwargs) image_href = kwargs.get('image_href') (image_id, self.glance_host, self.glance_port, use_ssl) = service_utils.parse_image_ref(image_href) if use_ssl: scheme = 'https' else: scheme = 'http' params = {} params['insecure'] = CONF.glance.glance_api_insecure if (not params['insecure'] and CONF.glance.glance_cafile and use_ssl): params['cacert'] = CONF.glance.glance_cafile if CONF.glance.auth_strategy == 'keystone': params['token'] = self.context.auth_token endpoint = '%s://%s:%s' % (scheme, self.glance_host, self.glance_port) self.client = client.Client(self.version, endpoint, **params) return func(self, *args, **kwargs) return wrapper class BaseImageService(object): def __init__(self, client=None, version=1, context=None): self.client = client self.version = version self.context = context def call(self, method, *args, **kwargs): """Call a glance client method. If we get a connection error, retry the request according to CONF.glance.glance_num_retries. :param context: The request context, for access checks. :param version: The requested API version. :param method: The method requested to be called. 
:param args: A list of positional arguments for the method called. :param kwargs: A dict of keyword arguments for the method called. :raises: GlanceConnectionFailed """ retry_excs = (glance_exc.ServiceUnavailable, glance_exc.InvalidEndpoint, glance_exc.CommunicationError) image_excs = (glance_exc.Forbidden, glance_exc.Unauthorized, glance_exc.NotFound, glance_exc.BadRequest) num_attempts = 1 + CONF.glance.glance_num_retries for attempt in range(1, num_attempts + 1): try: return getattr(self.client.images, method)(*args, **kwargs) except retry_excs as e: host = self.glance_host port = self.glance_port error_msg = _LE("Error contacting glance server " "'%(host)s:%(port)s' for '%(method)s', attempt" " %(attempt)s of %(num_attempts)s failed.") LOG.exception(error_msg, {'host': host, 'port': port, 'num_attempts': num_attempts, 'attempt': attempt, 'method': method}) if attempt == num_attempts: raise exception.GlanceConnectionFailed(host=host, port=port, reason=str(e)) time.sleep(1) except image_excs: exc_type, exc_value, exc_trace = sys.exc_info() if method == 'list': new_exc = _translate_plain_exception( exc_value) else: new_exc = _translate_image_exception( args[0], exc_value) six.reraise(type(new_exc), new_exc, exc_trace) @check_image_service def _detail(self, method='list', **kwargs): """Calls out to Glance for a list of detailed image information. :returns: A list of dicts containing image metadata. """ LOG.debug("Getting a full list of images metadata from glance.") params = service_utils.extract_query_params(kwargs, self.version) images = self.call(method, **params) _images = [] for image in images: if service_utils.is_image_available(self.context, image): _images.append(service_utils.translate_from_glance(image)) return _images @check_image_service def _show(self, image_href, method='get'): """Returns a dict with image data for the given opaque image id. :param image_href: The opaque image identifier or image href. :returns: A dict containing image metadata. :raises: ImageNotFound """ LOG.debug("Getting image metadata from glance. Image: %s", image_href) (image_id, self.glance_host, self.glance_port, use_ssl) = service_utils.parse_image_ref(image_href) image = self.call(method, image_id) if not service_utils.is_image_available(self.context, image): raise exception.ImageNotFound(image_id=image_id) base_image_meta = service_utils.translate_from_glance(image) return base_image_meta @check_image_service def _download(self, image_id, data=None, method='data'): """Calls out to Glance for data and writes data. :param image_id: The opaque image identifier. :param data: (Optional) File object to write data to. """ (image_id, self.glance_host, self.glance_port, use_ssl) = service_utils.parse_image_ref(image_id) if (self.version == 2 and 'file' in CONF.glance.allowed_direct_url_schemes): location = self._get_location(image_id) url = urlparse.urlparse(location) if url.scheme == "file": with open(url.path, "r") as f: filesize = os.path.getsize(f.name) sendfile.sendfile(data.fileno(), f.fileno(), 0, filesize) return image_chunks = self.call(method, image_id) if data is None: return image_chunks else: for chunk in image_chunks: data.write(chunk) @check_image_service def _create(self, image_meta, data=None, method='create'): """Store the image data and return the new image object. :param image_meta: A dict containing image metadata :param data: (Optional) File object to create image from. 
:returns: dict -- Newly created image metadata """ sent_service_image_meta = service_utils.translate_to_glance(image_meta) # TODO(ghe): Allow copy-from or location headers Bug #1199532 if data: sent_service_image_meta['data'] = data recv_service_image_meta = self.call(method, **sent_service_image_meta) return service_utils.translate_from_glance(recv_service_image_meta) @check_image_service def _update(self, image_id, image_meta, data=None, method='update', purge_props=False): """Modify the given image with the new data. :param image_id: The opaque image identifier. :param data: (Optional) File object to update data from. :param purge_props: (Optional=False) Purge existing properties. :returns: dict -- Updated image metadata """ (image_id, self.glance_host, self.glance_port, use_ssl) = service_utils.parse_image_ref(image_id) if image_meta: image_meta = service_utils.translate_to_glance(image_meta) else: image_meta = {} if self.version == 1: image_meta['purge_props'] = purge_props if data: image_meta['data'] = data # NOTE(bcwaldon): id is not an editable field, but it is likely to be # passed in by calling code. Let's be nice and ignore it. image_meta.pop('id', None) image_meta = self.call(method, image_id, **image_meta) if self.version == 2 and data: self.call('upload', image_id, data) image_meta = self._show(image_id) return image_meta @check_image_service def _delete(self, image_id, method='delete'): """Delete the given image. :param image_id: The opaque image identifier. :raises: ImageNotFound if the image does not exist. :raises: NotAuthorized if the user is not an owner. :raises: ImageNotAuthorized if the user is not authorized. """ (image_id, glance_host, glance_port, use_ssl) = service_utils.parse_image_ref(image_id) self.call(method, image_id) ironic-5.1.0/ironic/common/glance_service/v1/0000775000567000056710000000000012674513633022225 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/common/glance_service/v1/__init__.py0000664000567000056710000000000012674513466024330 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/common/glance_service/v1/image_service.py0000664000567000056710000000303612674513466025407 0ustar jenkinsjenkins00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
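# NOTE: The v1 GlanceImageService defined below is a thin facade: every
# public method delegates to the corresponding BaseImageService._* helper,
# pinning the glanceclient method name ('list', 'get', 'data', ...).
# A minimal usage sketch (the context, UUID and file path are hypothetical,
# shown as comments so this module stays import-clean):
#
#     from ironic.common import context
#     from ironic.common.glance_service.v1 import image_service
#
#     ctx = context.RequestContext(auth_token='my-token')   # hypothetical
#     glance = image_service.GlanceImageService(version=1, context=ctx)
#     meta = glance.show('6a8fd6d7-1234-4c7e-a0f4-0fe7f2d00ab1')
#     with open('/tmp/my-image.qcow2', 'wb') as f:          # hypothetical
#         glance.download(meta['id'], data=f)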
from ironic.common.glance_service import base_image_service from ironic.common.glance_service import service class GlanceImageService(base_image_service.BaseImageService, service.ImageService): def detail(self, **kwargs): return self._detail(method='list', **kwargs) def show(self, image_id): return self._show(image_id, method='get') def download(self, image_id, data=None): return self._download(image_id, method='data', data=data) def create(self, image_meta, data=None): return self._create(image_meta, method='create', data=data) def update(self, image_id, image_meta, data=None, purge_props=False): return self._update(image_id, image_meta, data=data, method='update', purge_props=purge_props) def delete(self, image_id): return self._delete(image_id, method='delete') ironic-5.1.0/ironic/common/glance_service/service_utils.py0000664000567000056710000002157612674513466025150 0ustar jenkinsjenkins00000000000000# Copyright 2012 OpenStack Foundation # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import itertools import random from oslo_config import cfg from oslo_serialization import jsonutils from oslo_utils import timeutils from oslo_utils import uuidutils import six import six.moves.urllib.parse as urlparse from ironic.common import exception from ironic.common import image_service CONF = cfg.CONF _GLANCE_API_SERVER = None """ iterator that cycles (indefinitely) over glance API servers. """ def generate_glance_url(): """Generate the URL to glance.""" return "%s://%s:%d" % (CONF.glance.glance_protocol, CONF.glance.glance_host, CONF.glance.glance_port) def generate_image_url(image_ref): """Generate an image URL from an image_ref.""" return "%s/images/%s" % (generate_glance_url(), image_ref) def _extract_attributes(image): IMAGE_ATTRIBUTES = ['size', 'disk_format', 'owner', 'container_format', 'checksum', 'id', 'name', 'created_at', 'updated_at', 'deleted_at', 'deleted', 'status', 'min_disk', 'min_ram', 'is_public'] IMAGE_ATTRIBUTES_V2 = ['tags', 'visibility', 'protected', 'file', 'schema'] output = {} for attr in IMAGE_ATTRIBUTES: output[attr] = getattr(image, attr, None) output['properties'] = getattr(image, 'properties', {}) if hasattr(image, 'schema') and 'v2' in image['schema']: IMAGE_ATTRIBUTES = IMAGE_ATTRIBUTES + IMAGE_ATTRIBUTES_V2 for attr in IMAGE_ATTRIBUTES_V2: output[attr] = getattr(image, attr, None) output['schema'] = image['schema'] for image_property in set(image.keys()) - set(IMAGE_ATTRIBUTES): output['properties'][image_property] = image[image_property] return output def _convert_timestamps_to_datetimes(image_meta): """Convert timestamps to datetime objects Returns image metadata with timestamp fields converted to naive UTC datetime objects. 
""" for attr in ['created_at', 'updated_at', 'deleted_at']: if image_meta.get(attr): image_meta[attr] = timeutils.normalize_time( timeutils.parse_isotime(image_meta[attr])) return image_meta _CONVERT_PROPS = ('block_device_mapping', 'mappings') def _convert(metadata, method): metadata = copy.deepcopy(metadata) properties = metadata.get('properties') if properties: for attr in _CONVERT_PROPS: if attr in properties: prop = properties[attr] if method == 'from': if isinstance(prop, six.string_types): properties[attr] = jsonutils.loads(prop) if method == 'to': if not isinstance(prop, six.string_types): properties[attr] = jsonutils.dumps(prop) return metadata def _remove_read_only(image_meta): IMAGE_ATTRIBUTES = ['status', 'updated_at', 'created_at', 'deleted_at'] output = copy.deepcopy(image_meta) for attr in IMAGE_ATTRIBUTES: if attr in output: del output[attr] return output def _get_api_server_iterator(): """Return iterator over shuffled API servers. Shuffle a list of CONF.glance.glance_api_servers and return an iterator that will cycle through the list, looping around to the beginning if necessary. If CONF.glance.glance_api_servers isn't set, we fall back to using this as the server: CONF.glance.glance_host:CONF.glance.glance_port. :returns: iterator that cycles (indefinitely) over shuffled glance API servers. The iterator returns tuples of (host, port, use_ssl). """ api_servers = [] configured_servers = (CONF.glance.glance_api_servers or ['%s:%s' % (CONF.glance.glance_host, CONF.glance.glance_port)]) for api_server in configured_servers: if '//' not in api_server: api_server = '%s://%s' % (CONF.glance.glance_protocol, api_server) url = urlparse.urlparse(api_server) port = url.port or 80 host = url.netloc.split(':', 1)[0] use_ssl = (url.scheme == 'https') api_servers.append((host, port, use_ssl)) random.shuffle(api_servers) return itertools.cycle(api_servers) def _get_api_server(): """Return a Glance API server. :returns: for an API server, the tuple (host-or-IP, port, use_ssl), where use_ssl is True to use the 'https' scheme, and False to use 'http'. """ global _GLANCE_API_SERVER if not _GLANCE_API_SERVER: _GLANCE_API_SERVER = _get_api_server_iterator() return six.next(_GLANCE_API_SERVER) def parse_image_ref(image_href): """Parse an image href into composite parts. 
:param image_href: href of an image :returns: a tuple of the form (image_id, host, port, use_ssl) :raises: ValueError """ if '/' not in six.text_type(image_href): image_id = image_href (glance_host, glance_port, use_ssl) = _get_api_server() return (image_id, glance_host, glance_port, use_ssl) else: try: url = urlparse.urlparse(image_href) if url.scheme == 'glance': (glance_host, glance_port, use_ssl) = _get_api_server() image_id = image_href.split('/')[-1] else: glance_port = url.port or 80 glance_host = url.netloc.split(':', 1)[0] image_id = url.path.split('/')[-1] use_ssl = (url.scheme == 'https') return (image_id, glance_host, glance_port, use_ssl) except ValueError: raise exception.InvalidImageRef(image_href=image_href) def extract_query_params(params, version): _params = {} accepted_params = ('filters', 'marker', 'limit', 'sort_key', 'sort_dir') for param in accepted_params: if params.get(param): _params[param] = params.get(param) # ensure filters is a dict _params.setdefault('filters', {}) # NOTE(vish): don't filter out private images # NOTE(ghe): in v2, not passing any visibility doesn't filter private images if version == 1: _params['filters'].setdefault('is_public', 'none') return _params def translate_to_glance(image_meta): image_meta = _convert(image_meta, 'to') image_meta = _remove_read_only(image_meta) return image_meta def translate_from_glance(image): image_meta = _extract_attributes(image) image_meta = _convert_timestamps_to_datetimes(image_meta) image_meta = _convert(image_meta, 'from') return image_meta def is_image_available(context, image): """Check image availability. This check is needed in case Nova and Glance are deployed without authentication turned on. """ # The presence of an auth token implies this is an authenticated # request and we need not handle the noauth use-case. if hasattr(context, 'auth_token') and context.auth_token: return True if image.is_public or context.is_admin: return True properties = image.properties if context.project_id and ('owner_id' in properties): return str(properties['owner_id']) == str(context.project_id) if context.project_id and ('project_id' in properties): return str(properties['project_id']) == str(context.project_id) try: user_id = properties['user_id'] except KeyError: return False return str(user_id) == str(context.user_id) def is_glance_image(image_href): if not isinstance(image_href, six.string_types): return False return (image_href.startswith('glance://') or uuidutils.is_uuid_like(image_href)) def is_image_href_ordinary_file_name(image_href): """Check if image_href is an ordinary file name. This method judges whether image_href is an ordinary file name, i.e. a file supposed to be stored in a shared file system, as opposed to a glance image href or an image service href. :returns: True if image_href is an ordinary file name, False otherwise. """ return not (is_glance_image(image_href) or urlparse.urlparse(image_href).scheme.lower() in image_service.protocol_mapping) ironic-5.1.0/ironic/common/glance_service/__init__.py0000664000567000056710000000000012674513466024002 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/common/glance_service/service.py0000664000567000056710000000513412674513466023720 0ustar jenkinsjenkins00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import six @six.add_metaclass(abc.ABCMeta) class ImageService(object): """Provides storage and retrieval of disk image objects within Glance.""" @abc.abstractmethod def __init__(self): """Constructor.""" @abc.abstractmethod def detail(self): """Calls out to Glance for a list of detailed image information.""" @abc.abstractmethod def show(self, image_id): """Returns a dict with image data for the given opaque image id. :param image_id: The opaque image identifier. :returns: A dict containing image metadata. :raises: ImageNotFound """ @abc.abstractmethod def download(self, image_id, data=None): """Calls out to Glance for data and writes data. :param image_id: The opaque image identifier. :param data: (Optional) File object to write data to. """ @abc.abstractmethod def create(self, image_meta, data=None): """Store the image data and return the new image object. :param image_meta: A dict containing image metadata :param data: (Optional) File object to create image from. :returns: dict -- Newly created image metadata """ @abc.abstractmethod def update(self, image_id, image_meta, data=None, purge_props=False): """Modify the given image with the new data. :param image_id: The opaque image identifier. :param data: (Optional) File object to update data from. :param purge_props: (Optional=False) Purge existing properties. :returns: dict -- Updated image metadata """ @abc.abstractmethod def delete(self, image_id): """Delete the given image. :param image_id: The opaque image identifier. :raises: ImageNotFound if the image does not exist. :raises: NotAuthorized if the user is not an owner. :raises: ImageNotAuthorized if the user is not authorized. """ ironic-5.1.0/ironic/common/paths.py0000664000567000056710000000426012674513466020425 0ustar jenkinsjenkins00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # Copyright 2012 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
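# NOTE: The path helpers below come in two flavours. The *_def() functions
# return *uninterpolated* templates such as '$pybasedir/...', which
# oslo.config substitutes lazily when a depending option is read, while the
# *_rel() functions resolve against the current CONF values immediately.
# Illustrative sketch (the resolved path shown is an assumed example):
#
#     basedir_def('common/grub_conf.template')
#     # -> '$pybasedir/common/grub_conf.template'   (left to oslo.config)
#     basedir_rel('common/grub_conf.template')
#     # -> '/opt/stack/ironic/ironic/common/grub_conf.template'  # assumed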
import os from oslo_config import cfg from ironic.common.i18n import _ path_opts = [ cfg.StrOpt('pybasedir', default=os.path.abspath(os.path.join(os.path.dirname(__file__), '../')), help=_('Directory where the ironic python module is ' 'installed.')), cfg.StrOpt('bindir', default='$pybasedir/bin', help=_('Directory where ironic binaries are installed.')), cfg.StrOpt('state_path', default='$pybasedir', help=_("Top-level directory for maintaining ironic's state.")), ] CONF = cfg.CONF CONF.register_opts(path_opts) def basedir_def(*args): """Return an uninterpolated path relative to $pybasedir.""" return os.path.join('$pybasedir', *args) def bindir_def(*args): """Return an uninterpolated path relative to $bindir.""" return os.path.join('$bindir', *args) def state_path_def(*args): """Return an uninterpolated path relative to $state_path.""" return os.path.join('$state_path', *args) def basedir_rel(*args): """Return a path relative to $pybasedir.""" return os.path.join(CONF.pybasedir, *args) def bindir_rel(*args): """Return a path relative to $bindir.""" return os.path.join(CONF.bindir, *args) def state_path_rel(*args): """Return a path relative to $state_path.""" return os.path.join(CONF.state_path, *args) ironic-5.1.0/ironic/common/pxe_utils.py0000664000567000056710000003143412674513466021325 0ustar jenkinsjenkins00000000000000# # Copyright 2014 Rackspace, Inc # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from ironic_lib import utils as ironic_utils import jinja2 from oslo_config import cfg from oslo_log import log as logging from oslo_utils import fileutils from ironic.common import dhcp_factory from ironic.common import exception from ironic.common.i18n import _ from ironic.common import utils from ironic.drivers.modules import deploy_utils from ironic.drivers import utils as driver_utils CONF = cfg.CONF LOG = logging.getLogger(__name__) PXE_CFG_DIR_NAME = 'pxelinux.cfg' def get_root_dir(): """Returns the directory where the config files and images will live.""" if CONF.pxe.ipxe_enabled: return CONF.deploy.http_root else: return CONF.pxe.tftp_root def _ensure_config_dirs_exist(node_uuid): """Ensure that the node's and PXE configuration directories exist. :param node_uuid: the UUID of the node. """ root_dir = get_root_dir() fileutils.ensure_tree(os.path.join(root_dir, node_uuid)) fileutils.ensure_tree(os.path.join(root_dir, PXE_CFG_DIR_NAME)) def _build_pxe_config(pxe_options, template, root_tag, disk_ident_tag): """Build the PXE boot configuration file. This method builds the PXE boot configuration file by rendering the template with the given parameters. :param pxe_options: A dict of values to set on the configuration file. :param template: The PXE configuration template. :param root_tag: Root tag used in the PXE config file. :param disk_ident_tag: Disk identifier tag used in the PXE config file. :returns: A formatted string with the file content. 
""" tmpl_path, tmpl_file = os.path.split(template) env = jinja2.Environment(loader=jinja2.FileSystemLoader(tmpl_path)) template = env.get_template(tmpl_file) return template.render({'pxe_options': pxe_options, 'ROOT': root_tag, 'DISK_IDENTIFIER': disk_ident_tag, }) def _link_mac_pxe_configs(task): """Link each MAC address with the PXE configuration file. :param task: A TaskManager instance. """ def create_link(mac_path): ironic_utils.unlink_without_raise(mac_path) relative_source_path = os.path.relpath( pxe_config_file_path, os.path.dirname(mac_path)) utils.create_link_without_raise(relative_source_path, mac_path) pxe_config_file_path = get_pxe_config_file_path(task.node.uuid) for mac in driver_utils.get_node_mac_addresses(task): create_link(_get_pxe_mac_path(mac)) # TODO(lucasagomes): Backward compatibility with :hexraw, # to be removed in Mitaka. # see: https://bugs.launchpad.net/ironic/+bug/1441710 if CONF.pxe.ipxe_enabled: create_link(_get_pxe_mac_path(mac, delimiter='')) def _link_ip_address_pxe_configs(task, hex_form): """Link each IP address with the PXE configuration file. :param task: A TaskManager instance. :param hex_form: Boolean value indicating if the conf file name should be hexadecimal equivalent of supplied ipv4 address. :raises: FailedToGetIPAddressOnPort :raises: InvalidIPv4Address """ pxe_config_file_path = get_pxe_config_file_path(task.node.uuid) api = dhcp_factory.DHCPFactory().provider ip_addrs = api.get_ip_addresses(task) if not ip_addrs: raise exception.FailedToGetIPAddressOnPort(_( "Failed to get IP address for any port on node %s.") % task.node.uuid) for port_ip_address in ip_addrs: ip_address_path = _get_pxe_ip_address_path(port_ip_address, hex_form) ironic_utils.unlink_without_raise(ip_address_path) relative_source_path = os.path.relpath( pxe_config_file_path, os.path.dirname(ip_address_path)) utils.create_link_without_raise(relative_source_path, ip_address_path) def _get_pxe_mac_path(mac, delimiter=None): """Convert a MAC address into a PXE config file name. :param mac: A MAC address string in the format xx:xx:xx:xx:xx:xx. :param delimiter: The MAC address delimiter. Defaults to dash ('-'). :returns: the path to the config file. """ if delimiter is None: delimiter = '-' mac_file_name = mac.replace(':', delimiter).lower() if not CONF.pxe.ipxe_enabled: mac_file_name = '01-' + mac_file_name return os.path.join(get_root_dir(), PXE_CFG_DIR_NAME, mac_file_name) def _get_pxe_ip_address_path(ip_address, hex_form): """Convert an ipv4 address into a PXE config file name. :param ip_address: A valid IPv4 address string in the format 'n.n.n.n'. :param hex_form: Boolean value indicating if the conf file name should be hexadecimal equivalent of supplied ipv4 address. :returns: the path to the config file. """ # elilo bootloader needs hex based config file name. if hex_form: ip = ip_address.split('.') ip_address = '{0:02X}{1:02X}{2:02X}{3:02X}'.format(*map(int, ip)) # grub2 bootloader needs ip based config file name. return os.path.join( CONF.pxe.tftp_root, ip_address + ".conf" ) def get_deploy_kr_info(node_uuid, driver_info): """Get href and tftp path for deploy kernel and ramdisk. Note: driver_info should be validated outside of this method. """ root_dir = get_root_dir() image_info = {} for label in ('deploy_kernel', 'deploy_ramdisk'): image_info[label] = ( str(driver_info[label]), os.path.join(root_dir, node_uuid, label) ) return image_info def get_pxe_config_file_path(node_uuid): """Generate the path for the node's PXE configuration file. 
:param node_uuid: the UUID of the node. :returns: The path to the node's PXE configuration file. """ return os.path.join(get_root_dir(), node_uuid, 'config') def create_pxe_config(task, pxe_options, template=None): """Generate PXE configuration file and MAC address links for it. This method will generate the PXE configuration file for the task's node under a directory named with the UUID of that node. For each MAC address or DHCP IP address (port) of that node, a symlink for the configuration file will be created under the PXE configuration directory, so regardless of which port boots first they'll get the same PXE configuration. If elilo is the bootloader in use, then its configuration file will be created based on hex form of DHCP IP address. If grub2 bootloader is in use, then its configuration will be created based on DHCP IP address in the form nn.nn.nn.nn. :param task: A TaskManager instance. :param pxe_options: A dictionary with the PXE configuration parameters. :param template: The PXE configuration template. If no template is given the CONF.pxe.pxe_config_template will be used. """ LOG.debug("Building PXE config for node %s", task.node.uuid) if template is None: template = CONF.pxe.pxe_config_template _ensure_config_dirs_exist(task.node.uuid) pxe_config_file_path = get_pxe_config_file_path(task.node.uuid) is_uefi_boot_mode = (deploy_utils.get_boot_mode_for_deploy(task.node) == 'uefi') # grub bootloader panics with '{}' around any of its tags in its # config file. To overcome that 'ROOT' and 'DISK_IDENTIFIER' are enclosed # with '(' and ')' in uefi boot mode. # These changes do not have any impact on elilo bootloader. hex_form = True if is_uefi_boot_mode and utils.is_regex_string_in_file(template, '^menuentry'): hex_form = False pxe_config_root_tag = '(( ROOT ))' pxe_config_disk_ident = '(( DISK_IDENTIFIER ))' else: # TODO(stendulker): We should use '(' ')' as the delimiters for all our # config files so that we do not need special handling for each of the # bootloaders. Should be removed once the Mitaka release starts. pxe_config_root_tag = '{{ ROOT }}' pxe_config_disk_ident = '{{ DISK_IDENTIFIER }}' pxe_config = _build_pxe_config(pxe_options, template, pxe_config_root_tag, pxe_config_disk_ident) utils.write_to_file(pxe_config_file_path, pxe_config) if is_uefi_boot_mode and not CONF.pxe.ipxe_enabled: _link_ip_address_pxe_configs(task, hex_form) else: _link_mac_pxe_configs(task) def clean_up_pxe_config(task): """Clean up the TFTP environment for the task's node. :param task: A TaskManager instance. """ LOG.debug("Cleaning up PXE config for node %s", task.node.uuid) is_uefi_boot_mode = (deploy_utils.get_boot_mode_for_deploy(task.node) == 'uefi') if is_uefi_boot_mode and not CONF.pxe.ipxe_enabled: api = dhcp_factory.DHCPFactory().provider ip_addresses = api.get_ip_addresses(task) if not ip_addresses: return for port_ip_address in ip_addresses: try: # Get xx.xx.xx.xx based grub config file ip_address_path = _get_pxe_ip_address_path(port_ip_address, False) # Get 0AOAOAOA based elilo config file hex_ip_path = _get_pxe_ip_address_path(port_ip_address, True) except exception.InvalidIPv4Address: continue # Cleaning up config files created for grub2. ironic_utils.unlink_without_raise(ip_address_path) # Cleaning up config files created for elilo. ironic_utils.unlink_without_raise(hex_ip_path) else: for mac in driver_utils.get_node_mac_addresses(task): ironic_utils.unlink_without_raise(_get_pxe_mac_path(mac)) # TODO(lucasagomes): Backward compatibility with :hexraw, # to be removed in Mitaka. 
# see: https://bugs.launchpad.net/ironic/+bug/1441710 if CONF.pxe.ipxe_enabled: ironic_utils.unlink_without_raise(_get_pxe_mac_path(mac, delimiter='')) utils.rmtree_without_raise(os.path.join(get_root_dir(), task.node.uuid)) def dhcp_options_for_instance(task): """Retrieves the DHCP PXE boot options. :param task: A TaskManager instance. """ dhcp_opts = [] if deploy_utils.get_boot_mode_for_deploy(task.node) == 'uefi': boot_file = CONF.pxe.uefi_pxe_bootfile_name else: boot_file = CONF.pxe.pxe_bootfile_name if CONF.pxe.ipxe_enabled: script_name = os.path.basename(CONF.pxe.ipxe_boot_script) ipxe_script_url = '/'.join([CONF.deploy.http_url, script_name]) dhcp_provider_name = dhcp_factory.CONF.dhcp.dhcp_provider # if the request comes from dumb firmware send them the iPXE # boot image. if dhcp_provider_name == 'neutron': # Neutron uses dnsmasq as its default DHCP agent; add extra config # to neutron "dhcp-match=set:ipxe,175" and use the options below dhcp_opts.append({'opt_name': 'tag:!ipxe,bootfile-name', 'opt_value': boot_file}) dhcp_opts.append({'opt_name': 'tag:ipxe,bootfile-name', 'opt_value': ipxe_script_url}) else: # !175 == non-iPXE. # http://ipxe.org/howto/dhcpd#ipxe-specific_options dhcp_opts.append({'opt_name': '!175,bootfile-name', 'opt_value': boot_file}) dhcp_opts.append({'opt_name': 'bootfile-name', 'opt_value': ipxe_script_url}) else: dhcp_opts.append({'opt_name': 'bootfile-name', 'opt_value': boot_file}) dhcp_opts.append({'opt_name': 'server-ip-address', 'opt_value': CONF.pxe.tftp_server}) dhcp_opts.append({'opt_name': 'tftp-server', 'opt_value': CONF.pxe.tftp_server}) # Append the IP version for all the configuration options for opt in dhcp_opts: opt.update({'ip_version': int(CONF.pxe.ip_version)}) return dhcp_opts ironic-5.1.0/ironic/common/hash_ring.py0000664000567000056710000001764412674513466021256 0ustar jenkinsjenkins00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import bisect import hashlib import threading import time from oslo_config import cfg import six from ironic.common import exception from ironic.common.i18n import _ from ironic.db import api as dbapi hash_opts = [ cfg.IntOpt('hash_partition_exponent', default=5, help=_('Exponent to determine number of hash partitions to use ' 'when distributing load across conductors. Larger ' 'values will result in more even distribution of load ' 'and less load when rebalancing the ring, but more ' 'memory usage. Number of partitions per conductor is ' '(2^hash_partition_exponent). This determines the ' 'granularity of rebalancing: given 10 hosts, and an ' 'exponent of 2, there are 40 partitions in the ring. ' 'A few thousand partitions should make rebalancing ' 'smooth in most cases. The default is suitable for up ' 'to a few hundred conductors. Too many partitions have a ' 'CPU impact.')), cfg.IntOpt('hash_distribution_replicas', default=1, help=_('[Experimental Feature] ' 'Number of hosts to map onto each hash partition. 
' 'Setting this to more than one will cause additional ' 'conductor services to prepare deployment environments ' 'and potentially allow the Ironic cluster to recover ' 'more quickly if a conductor instance is terminated.')), cfg.IntOpt('hash_ring_reset_interval', default=180, help=_('Interval (in seconds) between hash ring resets.')), ] CONF = cfg.CONF CONF.register_opts(hash_opts) class HashRing(object): """A stable hash ring. We map item N to a host Y based on the closest lower hash: - hash(item) -> partition - hash(host) -> divider - closest lower divider is the host to use - we hash each host many times to spread load more finely as otherwise adding a host gets (on average) 50% of the load of just one other host assigned to it. """ def __init__(self, hosts, replicas=None): """Create a new hash ring across the specified hosts. :param hosts: an iterable of hosts which will be mapped. :param replicas: number of hosts to map to each hash partition, or len(hosts), whichever is less. Default: CONF.hash_distribution_replicas """ if replicas is None: replicas = CONF.hash_distribution_replicas try: self.hosts = set(hosts) self.replicas = replicas if replicas <= len(hosts) else len(hosts) except TypeError: raise exception.Invalid( _("Invalid hosts supplied when building HashRing.")) self._host_hashes = {} for host in hosts: key = str(host).encode('utf8') key_hash = hashlib.md5(key) for p in range(2 ** CONF.hash_partition_exponent): key_hash.update(key) hashed_key = self._hash2int(key_hash) self._host_hashes[hashed_key] = host # Gather the (possibly colliding) resulting hashes into a bisectable # list. self._partitions = sorted(self._host_hashes.keys()) def _hash2int(self, key_hash): """Convert the given hash's digest to a numerical value for the ring. :returns: An integer equivalent value of the digest. """ return int(key_hash.hexdigest(), 16) def _get_partition(self, data): try: if six.PY3 and data is not None: data = data.encode('utf-8') key_hash = hashlib.md5(data) hashed_key = self._hash2int(key_hash) position = bisect.bisect(self._partitions, hashed_key) return position if position < len(self._partitions) else 0 except TypeError: raise exception.Invalid( _("Invalid data supplied to HashRing.get_hosts.")) def get_hosts(self, data, ignore_hosts=None): """Get the list of hosts which the supplied data maps onto. :param data: A string identifier to be mapped across the ring. :param ignore_hosts: A list of hosts to skip when performing the hash. Useful to temporarily skip hosts that are down without performing a full rebalance. Default: None. :returns: a list of hosts. The length of this list depends on the number of replicas this `HashRing` was created with. It may be less than this if ignore_hosts is not None. """ hosts = [] if ignore_hosts is None: ignore_hosts = set() else: ignore_hosts = set(ignore_hosts) ignore_hosts.intersection_update(self.hosts) partition = self._get_partition(data) for replica in range(0, self.replicas): if len(hosts) + len(ignore_hosts) == len(self.hosts): # prevent infinite loop - cannot allocate more fallbacks. break # Linear probing: partition N, then N+1 etc. host = self._get_host(partition) while host in hosts or host in ignore_hosts: partition += 1 if partition >= len(self._partitions): partition = 0 host = self._get_host(partition) hosts.append(host) return hosts def _get_host(self, partition): """Find what host is serving a partition. :param partition: The index of the partition in the partition map. e.g. 0 is the first partition, 1 is the second. 
:return: The host object the ring was constructed with. """ return self._host_hashes[self._partitions[partition]] class HashRingManager(object): _hash_rings = None _lock = threading.Lock() def __init__(self): self.dbapi = dbapi.get_instance() self.updated_at = time.time() @property def ring(self): interval = CONF.hash_ring_reset_interval limit = time.time() - interval # Hot path, no lock if self.__class__._hash_rings is not None and self.updated_at >= limit: return self.__class__._hash_rings with self._lock: if self.__class__._hash_rings is None or self.updated_at < limit: rings = self._load_hash_rings() self.__class__._hash_rings = rings self.updated_at = time.time() return self.__class__._hash_rings def _load_hash_rings(self): rings = {} d2c = self.dbapi.get_active_driver_dict() for driver_name, hosts in d2c.items(): rings[driver_name] = HashRing(hosts) return rings @classmethod def reset(cls): with cls._lock: cls._hash_rings = None def __getitem__(self, driver_name): try: return self.ring[driver_name] except KeyError: raise exception.DriverNotFound( _("The driver '%s' is unknown.") % driver_name) ironic-5.1.0/ironic/common/images.py0000664000567000056710000005467112674513466020566 0ustar jenkinsjenkins00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # Copyright (c) 2010 Citrix Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Handling of VM disk images. """ import os import shutil from ironic_lib import disk_utils from ironic_lib import utils as ironic_utils import jinja2 from oslo_concurrency import processutils from oslo_config import cfg from oslo_log import log as logging from oslo_utils import fileutils from ironic.common import exception from ironic.common.glance_service import service_utils as glance_utils from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common import image_service as service from ironic.common import paths from ironic.common import utils LOG = logging.getLogger(__name__) image_opts = [ cfg.BoolOpt('force_raw_images', default=True, help=_('If True, convert backing images to "raw" disk image ' 'format.')), cfg.StrOpt('isolinux_bin', default='/usr/lib/syslinux/isolinux.bin', help=_('Path to isolinux binary file.')), cfg.StrOpt('isolinux_config_template', default=paths.basedir_def('common/isolinux_config.template'), help=_('Template file for isolinux configuration file.')), cfg.StrOpt('grub_config_template', default=paths.basedir_def('common/grub_conf.template'), help=_('Template file for grub configuration file.')), ] CONF = cfg.CONF CONF.register_opts(image_opts) def _create_root_fs(root_directory, files_info): """Creates a filesystem root in given directory. Given a mapping of absolute path of files to their relative paths within the filesystem, this method copies the files to their destination. :param root_directory: the filesystem root directory. 
:param files_info: A dict containing absolute path of file to be copied -> relative path within the vfat image. For example, { '/absolute/path/to/file' -> 'relative/path/within/root' ... } :raises: OSError, if creation of any directory failed. :raises: IOError, if copying any of the files failed. """ for src_file, path in files_info.items(): target_file = os.path.join(root_directory, path) dirname = os.path.dirname(target_file) if not os.path.exists(dirname): os.makedirs(dirname) shutil.copyfile(src_file, target_file) def _umount_without_raise(mount_dir): """Helper method to umount without raise.""" try: utils.umount(mount_dir) except processutils.ProcessExecutionError: pass def create_vfat_image(output_file, files_info=None, parameters=None, parameters_file='parameters.txt', fs_size_kib=100): """Creates the fat fs image on the desired file. This method copies the given files to a root directory (optional), writes the parameters specified to the parameters file within the root directory (optional), and then creates a vfat image of the root directory. :param output_file: The path to the file where the fat fs image needs to be created. :param files_info: A dict containing absolute path of file to be copied -> relative path within the vfat image. For example, { '/absolute/path/to/file' -> 'relative/path/within/root' ... } :param parameters: A dict containing key-value pairs of parameters. :param parameters_file: The filename for the parameters file. :param fs_size_kib: size of the vfat filesystem in KiB. :raises: ImageCreationFailed, if image creation failed while doing any of filesystem manipulation activities like creating dirs, mounting, creating filesystem, copying files, etc. """ try: ironic_utils.dd('/dev/zero', output_file, 'count=1', "bs=%dKiB" % fs_size_kib) except processutils.ProcessExecutionError as e: raise exception.ImageCreationFailed(image_type='vfat', error=e) with utils.tempdir() as tmpdir: try: # The label helps ramdisks to find the partition containing # the parameters (by using /dev/disk/by-label/ir-vfd-dev). # NOTE: FAT filesystem label can be up to 11 characters long. ironic_utils.mkfs('vfat', output_file, label="ir-vfd-dev") utils.mount(output_file, tmpdir, '-o', 'umask=0') except processutils.ProcessExecutionError as e: raise exception.ImageCreationFailed(image_type='vfat', error=e) try: if files_info: _create_root_fs(tmpdir, files_info) if parameters: parameters_file = os.path.join(tmpdir, parameters_file) params_list = ['%(key)s=%(val)s' % {'key': k, 'val': v} for k, v in parameters.items()] file_contents = '\n'.join(params_list) utils.write_to_file(parameters_file, file_contents) except Exception as e: LOG.exception(_LE("vfat image creation failed. Error: %s"), e) raise exception.ImageCreationFailed(image_type='vfat', error=e) finally: try: utils.umount(tmpdir) except processutils.ProcessExecutionError as e: raise exception.ImageCreationFailed(image_type='vfat', error=e) def _generate_cfg(kernel_params, template, options): """Generates an isolinux or grub configuration file. Given a list of strings containing kernel parameters, this method builds the kernel cmdline string and renders it into the given config template. :param kernel_params: a list of strings (each element being a string like 'K=V' or 'K' or combination of them like 'K1=V1 K2 K3=V3') to be added as the kernel cmdline. :param template: the path of the config template file. :param options: a dictionary of keywords which need to be replaced in the template file to generate a proper config file. 
:returns: a string containing the contents of the isolinux configuration file. """ if not kernel_params: kernel_params = [] kernel_params_str = ' '.join(kernel_params) tmpl_path, tmpl_file = os.path.split(template) env = jinja2.Environment(loader=jinja2.FileSystemLoader(tmpl_path)) template = env.get_template(tmpl_file) options.update({'kernel_params': kernel_params_str}) cfg = template.render(options) return cfg def create_isolinux_image_for_bios(output_file, kernel, ramdisk, kernel_params=None): """Creates an isolinux image on the specified file. Copies the provided kernel, ramdisk to a directory, generates the isolinux configuration file using the kernel parameters provided, and then generates a bootable ISO image. :param output_file: the path to the file where the iso image needs to be created. :param kernel: the kernel to use. :param ramdisk: the ramdisk to use. :param kernel_params: a list of strings(each element being a string like 'K=V' or 'K' or combination of them like 'K1=V1,K2,...') to be added as the kernel cmdline. :raises: ImageCreationFailed, if image creation failed while copying files or while running command to generate iso. """ ISOLINUX_BIN = 'isolinux/isolinux.bin' ISOLINUX_CFG = 'isolinux/isolinux.cfg' options = {'kernel': '/vmlinuz', 'ramdisk': '/initrd'} with utils.tempdir() as tmpdir: files_info = { kernel: 'vmlinuz', ramdisk: 'initrd', CONF.isolinux_bin: ISOLINUX_BIN, } try: _create_root_fs(tmpdir, files_info) except (OSError, IOError) as e: LOG.exception(_LE("Creating the filesystem root failed.")) raise exception.ImageCreationFailed(image_type='iso', error=e) cfg = _generate_cfg(kernel_params, CONF.isolinux_config_template, options) isolinux_cfg = os.path.join(tmpdir, ISOLINUX_CFG) utils.write_to_file(isolinux_cfg, cfg) try: utils.execute('mkisofs', '-r', '-V', "VMEDIA_BOOT_ISO", '-cache-inodes', '-J', '-l', '-no-emul-boot', '-boot-load-size', '4', '-boot-info-table', '-b', ISOLINUX_BIN, '-o', output_file, tmpdir) except processutils.ProcessExecutionError as e: LOG.exception(_LE("Creating ISO image failed.")) raise exception.ImageCreationFailed(image_type='iso', error=e) def create_isolinux_image_for_uefi(output_file, deploy_iso, kernel, ramdisk, kernel_params=None): """Creates an isolinux image on the specified file. Copies the provided kernel, ramdisk, efiboot.img to a directory, creates the path for grub config file, generates the isolinux configuration file using the kernel parameters provided, generates the grub configuration file using kernel parameters and then generates a bootable ISO image for uefi. :param output_file: the path to the file where the iso image needs to be created. :param deploy_iso: deploy iso used to initiate the deploy. :param kernel: the kernel to use. :param ramdisk: the ramdisk to use. :param kernel_params: a list of strings(each element being a string like 'K=V' or 'K' or combination of them like 'K1=V1,K2,...') to be added as the kernel cmdline. :raises: ImageCreationFailed, if image creation failed while copying files or while running command to generate iso. """ ISOLINUX_BIN = 'isolinux/isolinux.bin' ISOLINUX_CFG = 'isolinux/isolinux.cfg' isolinux_options = {'kernel': '/vmlinuz', 'ramdisk': '/initrd'} grub_options = {'linux': '/vmlinuz', 'initrd': '/initrd'} with utils.tempdir() as tmpdir: files_info = { kernel: 'vmlinuz', ramdisk: 'initrd', CONF.isolinux_bin: ISOLINUX_BIN, } # Open the deploy iso used to initiate deploy and copy the # efiboot.img i.e. boot loader to the current temporary # directory. 
with utils.tempdir() as mountdir: uefi_path_info, e_img_rel_path, grub_rel_path = ( _mount_deploy_iso(deploy_iso, mountdir)) # if either of these variables are not initialized then the # uefi efiboot.img cannot be created. files_info.update(uefi_path_info) try: _create_root_fs(tmpdir, files_info) except (OSError, IOError) as e: LOG.exception(_LE("Creating the filesystem root failed.")) raise exception.ImageCreationFailed(image_type='iso', error=e) finally: _umount_without_raise(mountdir) cfg = _generate_cfg(kernel_params, CONF.isolinux_config_template, isolinux_options) isolinux_cfg = os.path.join(tmpdir, ISOLINUX_CFG) utils.write_to_file(isolinux_cfg, cfg) # Generate and copy grub config file. grub_cfg = os.path.join(tmpdir, grub_rel_path) grub_conf = _generate_cfg(kernel_params, CONF.grub_config_template, grub_options) utils.write_to_file(grub_cfg, grub_conf) # Create the boot_iso. try: utils.execute('mkisofs', '-r', '-V', "VMEDIA_BOOT_ISO", '-cache-inodes', '-J', '-l', '-no-emul-boot', '-boot-load-size', '4', '-boot-info-table', '-b', ISOLINUX_BIN, '-eltorito-alt-boot', '-e', e_img_rel_path, '-no-emul-boot', '-o', output_file, tmpdir) except processutils.ProcessExecutionError as e: LOG.exception(_LE("Creating ISO image failed.")) raise exception.ImageCreationFailed(image_type='iso', error=e) def fetch(context, image_href, path, force_raw=False): # TODO(vish): Improve context handling and add owner and auth data # when it is added to glance. Right now there is no # auth checking in glance, so we assume that access was # checked before we got here. image_service = service.get_image_service(image_href, context=context) LOG.debug("Using %(image_service)s to download image %(image_href)s." % {'image_service': image_service.__class__, 'image_href': image_href}) with fileutils.remove_path_on_error(path): with open(path, "wb") as image_file: image_service.download(image_href, image_file) if force_raw: image_to_raw(image_href, path, "%s.part" % path) def image_to_raw(image_href, path, path_tmp): with fileutils.remove_path_on_error(path_tmp): data = disk_utils.qemu_img_info(path_tmp) fmt = data.file_format if fmt is None: raise exception.ImageUnacceptable( reason=_("'qemu-img info' parsing failed."), image_id=image_href) backing_file = data.backing_file if backing_file is not None: raise exception.ImageUnacceptable( image_id=image_href, reason=_("fmt=%(fmt)s backed by: %(backing_file)s") % {'fmt': fmt, 'backing_file': backing_file}) if fmt != "raw": staged = "%s.converted" % path LOG.debug("%(image)s was %(format)s, converting to raw" % {'image': image_href, 'format': fmt}) with fileutils.remove_path_on_error(staged): disk_utils.convert_image(path_tmp, staged, 'raw') os.unlink(path_tmp) data = disk_utils.qemu_img_info(staged) if data.file_format != "raw": raise exception.ImageConvertFailed( image_id=image_href, reason=_("Converted to raw, but format is " "now %s") % data.file_format) os.rename(staged, path) else: os.rename(path_tmp, path) def image_show(context, image_href, image_service=None): if image_service is None: image_service = service.get_image_service(image_href, context=context) return image_service.show(image_href) def download_size(context, image_href, image_service=None): return image_show(context, image_href, image_service)['size'] def converted_size(path): """Get size of converted raw image. The size of image converted to raw format can be growing up to the virtual size of the image. :param path: path to the image file. 
:returns: virtual size of the image or 0 if conversion not needed. """ data = disk_utils.qemu_img_info(path) return data.virtual_size def get_image_properties(context, image_href, properties="all"): """Returns the values of several properties of an image. :param context: context :param image_href: href of the image :param properties: the properties whose values are required. This argument is optional, default value is "all", so if not specified all properties will be returned. :returns: a dict of the values of the properties. A property not on the glance metadata will have a value of None. """ img_service = service.get_image_service(image_href, context=context) iproperties = img_service.show(image_href)['properties'] if properties == "all": return iproperties return {p: iproperties.get(p) for p in properties} def get_temp_url_for_glance_image(context, image_uuid): """Returns the temp URL for a glance image. :param context: context :param image_uuid: the UUID of the image in glance :returns: the temp URL for the glance image. """ # Glance API version 2 is required for getting direct_url of the image. glance_service = service.GlanceImageService(version=2, context=context) image_properties = glance_service.show(image_uuid) LOG.debug('Got image info: %(info)s for image %(image_uuid)s.', {'info': image_properties, 'image_uuid': image_uuid}) return glance_service.swift_temp_url(image_properties) def create_boot_iso(context, output_filename, kernel_href, ramdisk_href, deploy_iso_href, root_uuid=None, kernel_params=None, boot_mode=None): """Creates a bootable ISO image for a node. Given the hrefs for kernel, ramdisk, root partition's UUID and kernel cmdline arguments, this method fetches the kernel and ramdisk, and builds a bootable ISO image that can be used to boot up the baremetal node. :param context: context :param output_filename: the absolute path of the output ISO file :param kernel_href: URL or glance uuid of the kernel to use :param ramdisk_href: URL or glance uuid of the ramdisk to use :param deploy_iso_href: URL or glance uuid of the deploy iso used :param root_uuid: uuid of the root filesystem (optional) :param kernel_params: a string containing whitespace-separated kernel cmdline arguments of the form K=V or K (optional). :param boot_mode: the boot mode in which the deploy is to happen. :raises: ImageCreationFailed, if creating boot ISO failed. """ with utils.tempdir() as tmpdir: kernel_path = os.path.join(tmpdir, kernel_href.split('/')[-1]) ramdisk_path = os.path.join(tmpdir, ramdisk_href.split('/')[-1]) fetch(context, kernel_href, kernel_path) fetch(context, ramdisk_href, ramdisk_path) params = [] if root_uuid: params.append('root=UUID=%s' % root_uuid) if kernel_params: params.append(kernel_params) if boot_mode == 'uefi': deploy_iso = os.path.join(tmpdir, deploy_iso_href.split('/')[-1]) fetch(context, deploy_iso_href, deploy_iso) create_isolinux_image_for_uefi(output_filename, deploy_iso, kernel_path, ramdisk_path, params) else: create_isolinux_image_for_bios(output_filename, kernel_path, ramdisk_path, params) def is_whole_disk_image(ctx, instance_info): """Find out if the image is a partition image or a whole disk image. :param ctx: an admin context :param instance_info: a node's instance info dict :returns: True for whole disk images, False for partition images, and None if there is no image_source or an error occurs. 
""" image_source = instance_info.get('image_source') if not image_source: return is_whole_disk_image = False if glance_utils.is_glance_image(image_source): try: iproperties = get_image_properties(ctx, image_source) except Exception: return is_whole_disk_image = (not iproperties.get('kernel_id') and not iproperties.get('ramdisk_id')) else: # Non glance image ref if (not instance_info.get('kernel') and not instance_info.get('ramdisk')): is_whole_disk_image = True return is_whole_disk_image def _mount_deploy_iso(deploy_iso, mountdir): """This function opens up the deploy iso used for deploy. :param: deploy_iso: path to the deploy iso where its contents are fetched to. :raises: ImageCreationFailed if mount fails. :returns: a tuple consisting of - 1. a dictionary containing the values as required by create_isolinux_image, 2. efiboot.img relative path, and 3. grub.cfg relative path. """ e_img_rel_path = None e_img_path = None grub_rel_path = None grub_path = None try: utils.mount(deploy_iso, mountdir, '-o', 'loop') except processutils.ProcessExecutionError as e: LOG.exception(_LE("mounting the deploy iso failed.")) raise exception.ImageCreationFailed(image_type='iso', error=e) try: for (dir, subdir, files) in os.walk(mountdir): if 'efiboot.img' in files: e_img_path = os.path.join(dir, 'efiboot.img') e_img_rel_path = os.path.relpath(e_img_path, mountdir) if 'grub.cfg' in files: grub_path = os.path.join(dir, 'grub.cfg') grub_rel_path = os.path.relpath(grub_path, mountdir) except (OSError, IOError) as e: LOG.exception(_LE("examining the deploy iso failed.")) _umount_without_raise(mountdir) raise exception.ImageCreationFailed(image_type='iso', error=e) # check if the variables are assigned some values or not during # walk of the mountdir. if not (e_img_path and e_img_rel_path and grub_path and grub_rel_path): error = (_("Deploy iso didn't contain efiboot.img or grub.cfg")) _umount_without_raise(mountdir) raise exception.ImageCreationFailed(image_type='iso', error=error) uefi_path_info = {e_img_path: e_img_rel_path, grub_path: grub_rel_path} # Returning a tuple as it makes the code simpler and clean. # uefi_path_info: is needed by the caller for _create_root_fs to create # appropriate directory structures for uefi boot iso. # grub_rel_path: is needed to copy the new grub.cfg generated using # generate_cfg() to the same directory path structure where it was # present in deploy iso. This path varies for different OS vendors. # e_img_rel_path: is required by mkisofs to generate boot iso. return uefi_path_info, e_img_rel_path, grub_rel_path ironic-5.1.0/ironic/common/safe_utils.py0000664000567000056710000000401612674513466021443 0ustar jenkinsjenkins00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Utilities and helper functions that won't produce circular imports.""" import inspect def getcallargs(function, *args, **kwargs): """This is a simplified inspect.getcallargs (2.7+). It should be replaced when python >= 2.7 is standard. """ keyed_args = {} argnames, varargs, keywords, defaults = inspect.getargspec(function) keyed_args.update(kwargs) # NOTE(alaski) the implicit 'self' or 'cls' argument shows up in # argnames but not in args or kwargs. Uses 'in' rather than '==' because # some tests use 'self2'. if 'self' in argnames[0] or 'cls' == argnames[0]: # The function may not actually be a method or have __self__. # Typically seen when it's stubbed with mox. if inspect.ismethod(function) and hasattr(function, '__self__'): keyed_args[argnames[0]] = function.__self__ else: keyed_args[argnames[0]] = None remaining_argnames = filter(lambda x: x not in keyed_args, argnames) keyed_args.update(dict(zip(remaining_argnames, args))) if defaults: num_defaults = len(defaults) for argname, value in zip(argnames[-num_defaults:], defaults): if argname not in keyed_args: keyed_args[argname] = value return keyed_args ironic-5.1.0/ironic/common/context.py0000664000567000056710000000572712674513466021003 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_context import context class RequestContext(context.RequestContext): """Extends security contexts from the OpenStack common library.""" def __init__(self, auth_token=None, domain_id=None, domain_name=None, user=None, tenant=None, is_admin=False, is_public_api=False, read_only=False, show_deleted=False, request_id=None, roles=None, show_password=True): """Stores several additional request parameters: :param domain_id: The ID of the domain. :param domain_name: The name of the domain. :param is_public_api: Specifies whether the request should be processed without authentication. :param roles: List of user's roles if any. :param show_password: Specifies whether passwords should be masked before sending back to API call. """ super(RequestContext, self).__init__(auth_token=auth_token, user=user, tenant=tenant, is_admin=is_admin, read_only=read_only, show_deleted=show_deleted, request_id=request_id) self.is_public_api = is_public_api self.domain_id = domain_id self.domain_name = domain_name self.show_password = show_password # NOTE(dims): roles was added in context.RequestContext recently. # we should pass roles in __init__ above instead of setting the # value here once the minimum version of oslo.context is updated. 
self.roles = roles or [] def to_dict(self): return {'auth_token': self.auth_token, 'user': self.user, 'tenant': self.tenant, 'is_admin': self.is_admin, 'read_only': self.read_only, 'show_deleted': self.show_deleted, 'request_id': self.request_id, 'domain_id': self.domain_id, 'roles': self.roles, 'domain_name': self.domain_name, 'show_password': self.show_password, 'is_public_api': self.is_public_api} @classmethod def from_dict(cls, values): values.pop('user', None) values.pop('tenant', None) return cls(**values) ironic-5.1.0/ironic/common/config.py0000664000567000056710000000216512674513466020555 0ustar jenkinsjenkins00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # Copyright 2012 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common import rpc from ironic import version def parse_args(argv, default_config_files=None): rpc.set_defaults(control_exchange='ironic') cfg.CONF(argv[1:], project='ironic', version=version.version_info.release_string(), default_config_files=default_config_files) rpc.init(cfg.CONF) ironic-5.1.0/ironic/common/rpc.py0000664000567000056710000000746612674513466020105 0ustar jenkinsjenkins00000000000000# Copyright 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg import oslo_messaging as messaging from ironic.common import context as ironic_context from ironic.common import exception CONF = cfg.CONF TRANSPORT = None NOTIFIER = None ALLOWED_EXMODS = [ exception.__name__, ] EXTRA_EXMODS = [] # NOTE(lucasagomes): The ironic.openstack.common.rpc entries are for # backwards compat with IceHouse rpc_backend configuration values. 
TRANSPORT_ALIASES = { 'ironic.openstack.common.rpc.impl_kombu': 'rabbit', 'ironic.openstack.common.rpc.impl_qpid': 'qpid', 'ironic.openstack.common.rpc.impl_zmq': 'zmq', 'ironic.rpc.impl_kombu': 'rabbit', 'ironic.rpc.impl_qpid': 'qpid', 'ironic.rpc.impl_zmq': 'zmq', } def init(conf): global TRANSPORT, NOTIFIER exmods = get_allowed_exmods() TRANSPORT = messaging.get_transport(conf, allowed_remote_exmods=exmods, aliases=TRANSPORT_ALIASES) serializer = RequestContextSerializer(messaging.JsonPayloadSerializer()) NOTIFIER = messaging.Notifier(TRANSPORT, serializer=serializer) def cleanup(): global TRANSPORT, NOTIFIER assert TRANSPORT is not None assert NOTIFIER is not None TRANSPORT.cleanup() TRANSPORT = NOTIFIER = None def set_defaults(control_exchange): messaging.set_transport_defaults(control_exchange) def add_extra_exmods(*args): EXTRA_EXMODS.extend(args) def clear_extra_exmods(): del EXTRA_EXMODS[:] def get_allowed_exmods(): return ALLOWED_EXMODS + EXTRA_EXMODS class RequestContextSerializer(messaging.Serializer): def __init__(self, base): self._base = base def serialize_entity(self, context, entity): if not self._base: return entity return self._base.serialize_entity(context, entity) def deserialize_entity(self, context, entity): if not self._base: return entity return self._base.deserialize_entity(context, entity) def serialize_context(self, context): return context.to_dict() def deserialize_context(self, context): return ironic_context.RequestContext.from_dict(context) def get_transport_url(url_str=None): return messaging.TransportURL.parse(CONF, url_str, TRANSPORT_ALIASES) def get_client(target, version_cap=None, serializer=None): assert TRANSPORT is not None serializer = RequestContextSerializer(serializer) return messaging.RPCClient(TRANSPORT, target, version_cap=version_cap, serializer=serializer) def get_server(target, endpoints, serializer=None): assert TRANSPORT is not None serializer = RequestContextSerializer(serializer) return messaging.get_rpc_server(TRANSPORT, target, endpoints, executor='eventlet', serializer=serializer) def get_notifier(service=None, host=None, publisher_id=None): assert NOTIFIER is not None if not publisher_id: publisher_id = "%s.%s" % (service, host or CONF.host) return NOTIFIER.prepare(publisher_id=publisher_id) ironic-5.1.0/ironic/common/__init__.py0000664000567000056710000000000012674513466021031 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/common/policy.py0000664000567000056710000000435112674513466020606 0ustar jenkinsjenkins00000000000000# Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
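# A minimal sketch of how the RPC helpers above are typically wired together,
# assuming rpc.init(CONF) already ran during service startup; the topic and
# version numbers below are illustrative placeholders, not values defined in
# this module.
import oslo_messaging as example_messaging

from ironic.common import rpc as example_rpc


def _example_get_conductor_client():
    # get_client() asserts that TRANSPORT is set, hence the startup
    # requirement noted above.
    target = example_messaging.Target(topic='ironic.conductor_manager',
                                      version='1.0')
    # version_cap pins the highest RPC API version this client will emit.
    return example_rpc.get_client(target, version_cap='1.0')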
"""Policy Engine For Ironic.""" from oslo_concurrency import lockutils from oslo_config import cfg from oslo_policy import policy _ENFORCER = None CONF = cfg.CONF @lockutils.synchronized('policy_enforcer', 'ironic-') def init_enforcer(policy_file=None, rules=None, default_rule=None, use_conf=True): """Synchronously initializes the policy enforcer :param policy_file: Custom policy file to use, if none is specified, `CONF.policy_file` will be used. :param rules: Default dictionary / Rules to use. It will be considered just in the first instantiation. :param default_rule: Default rule to use, CONF.default_rule will be used if none is specified. :param use_conf: Whether to load rules from config file. """ global _ENFORCER if _ENFORCER: return _ENFORCER = policy.Enforcer(CONF, policy_file=policy_file, rules=rules, default_rule=default_rule, use_conf=use_conf) def get_enforcer(): """Provides access to the single instance of Policy enforcer.""" if not _ENFORCER: init_enforcer() return _ENFORCER def enforce(rule, target, creds, do_raise=False, exc=None, *args, **kwargs): """A shortcut for policy.Enforcer.enforce() Checks authorization of a rule against the target and credentials. """ enforcer = get_enforcer() return enforcer.enforce(rule, target, creds, do_raise=do_raise, exc=exc, *args, **kwargs) ironic-5.1.0/ironic/common/i18n.py0000664000567000056710000000213012674513466020057 0ustar jenkinsjenkins00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import oslo_i18n as i18n _translators = i18n.TranslatorFactory(domain='ironic') # The primary translation function using the well-known name "_" _ = _translators.primary # Translators for log levels. # # The abbreviated names are meant to reflect the usual use of a short # name like '_'. The "L" is for "log" and the other letter comes from # the level. _LI = _translators.log_info _LW = _translators.log_warning _LE = _translators.log_error _LC = _translators.log_critical ironic-5.1.0/ironic/common/raid.py0000664000567000056710000001217012674513466020224 0ustar jenkinsjenkins00000000000000# Copyright 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import jsonschema from jsonschema import exceptions as json_schema_exc from ironic.common import exception from ironic.common.i18n import _ from ironic.common import utils def _check_and_return_root_volumes(raid_config): """Returns root logical disks after validating RAID config. 
This method checks whether more than one logical disk has 'is_root_volume' set to True and raises an exception in that case. Otherwise, it returns the root logical disk mentioned in the RAID config, if one is present. :param raid_config: target RAID configuration or current RAID configuration. :returns: the dictionary for the root logical disk if it is present, otherwise None. :raises: InvalidParameterValue, if more than one root volume was specified in the RAID configuration. """ logical_disks = raid_config['logical_disks'] root_logical_disks = [x for x in logical_disks if x.get('is_root_volume')] if len(root_logical_disks) > 1: msg = _("Raid config cannot have more than one root volume. " "%d root volumes were specified") % len(root_logical_disks) raise exception.InvalidParameterValue(msg) if root_logical_disks: return root_logical_disks[0] def validate_configuration(raid_config, raid_config_schema): """Validates the RAID configuration passed using JSON schema. This method validates a RAID configuration against a RAID configuration schema. :param raid_config: A dictionary containing RAID configuration information :param raid_config_schema: A dictionary which is the schema to be used for validation. :raises: InvalidParameterValue, if validation of the RAID configuration fails. """ try: jsonschema.validate(raid_config, raid_config_schema) except json_schema_exc.ValidationError as e: # NOTE: Even though e.message is deprecated in general, the jsonschema # documentation still recommends using it. msg = _("RAID config validation error: %s") % e.message raise exception.InvalidParameterValue(msg) # Check if there are multiple root volumes specified. _check_and_return_root_volumes(raid_config) def get_logical_disk_properties(raid_config_schema): """Get logical disk properties from RAID configuration schema. This method reads the logical disk properties and their textual descriptions from the schema that is passed. :param raid_config_schema: A dictionary which is the schema to be used for getting properties that may be specified for the logical disk. :returns: A dictionary containing the logical disk properties as keys and a textual description for them as values. """ logical_disk_schema = raid_config_schema['properties']['logical_disks'] properties = logical_disk_schema['items']['properties'] return {prop: prop_dict['description'] for prop, prop_dict in properties.items()} def update_raid_info(node, raid_config): """Update the node's information based on the RAID config. This method updates the node's information to make use of the configured RAID for scheduling purposes (through properties['capabilities'] and properties['local_gb']) and deploying purposes (using properties['root_device']). :param node: a node object :param raid_config: The dictionary containing the current RAID configuration. :raises: InvalidParameterValue, if 'raid_config' has more than one root volume or if node.properties['capabilities'] is malformed. """ current = raid_config.copy() current['last_updated'] = str(datetime.datetime.utcnow()) node.raid_config = current # Current RAID configuration can have 0 or 1 root volumes. If there # are > 1 root volumes, then it's invalid. We check for this condition # while accepting the target RAID configuration, but this check is kept in # place in case some driver passes > 1 root volumes to this method.
root_logical_disk = _check_and_return_root_volumes(raid_config) if root_logical_disk: # Update local_gb and root_device_hint properties = node.properties properties['local_gb'] = root_logical_disk['size_gb'] try: properties['root_device'] = ( root_logical_disk['root_device_hint']) except KeyError: pass properties['capabilities'] = utils.get_updated_capabilities( properties.get('capabilities', ''), {'raid_level': root_logical_disk['raid_level']}) node.properties = properties node.save() ironic-5.1.0/ironic/common/image_service.py0000664000567000056710000003054012674513466022110 0ustar jenkinsjenkins00000000000000# Copyright 2010 OpenStack Foundation # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import datetime import os import shutil from oslo_config import cfg from oslo_utils import importutils import requests import sendfile import six from six.moves import http_client import six.moves.urllib.parse as urlparse from ironic.common import exception from ironic.common.i18n import _ from ironic.common import keystone from ironic.common import utils IMAGE_CHUNK_SIZE = 1024 * 1024 # 1mb CONF = cfg.CONF # Import this opt early so that it is available when registering # glance_opts below. CONF.import_opt('my_ip', 'ironic.netconf') glance_opts = [ cfg.StrOpt('glance_host', default='$my_ip', help=_('Default glance hostname or IP address.')), cfg.PortOpt('glance_port', default=9292, help=_('Default glance port.')), cfg.StrOpt('glance_protocol', default='http', choices=['http', 'https'], help=_('Default protocol to use when connecting to glance. ' 'Set to https for SSL.')), cfg.ListOpt('glance_api_servers', help=_('A list of the glance api servers available to ironic. ' 'Prefix with https:// for SSL-based glance API ' 'servers. Format is [hostname|IP]:port.')), cfg.BoolOpt('glance_api_insecure', default=False, help=_('Allow to perform insecure SSL (https) requests to ' 'glance.')), cfg.IntOpt('glance_num_retries', default=0, help=_('Number of retries when downloading an image from ' 'glance.')), cfg.StrOpt('auth_strategy', default='keystone', choices=['keystone', 'noauth'], help=_('Authentication strategy to use when connecting to ' 'glance.')), cfg.StrOpt('glance_cafile', help=_('Optional path to a CA certificate bundle to be used to ' 'validate the SSL certificate served by glance. 
It is ' 'used when glance_api_insecure is set to False.')), ] CONF.register_opts(glance_opts, group='glance') def import_versioned_module(version, submodule=None): module = 'ironic.common.glance_service.v%s' % version if submodule: module = '.'.join((module, submodule)) return importutils.try_import(module) def GlanceImageService(client=None, version=1, context=None): module = import_versioned_module(version, 'image_service') service_class = getattr(module, 'GlanceImageService') if (context is not None and CONF.glance.auth_strategy == 'keystone' and not context.auth_token): context.auth_token = keystone.get_admin_auth_token() return service_class(client, version, context) @six.add_metaclass(abc.ABCMeta) class BaseImageService(object): """Provides retrieval of disk images.""" @abc.abstractmethod def validate_href(self, image_href): """Validate image reference. :param image_href: Image reference. :raises: exception.ImageRefValidationFailed. :returns: Information needed to further operate with an image. """ @abc.abstractmethod def download(self, image_href, image_file): """Downloads image to specified location. :param image_href: Image reference. :param image_file: File object to write data to. :raises: exception.ImageRefValidationFailed. :raises: exception.ImageDownloadFailed. """ @abc.abstractmethod def show(self, image_href): """Get dictionary of image properties. :param image_href: Image reference. :raises: exception.ImageRefValidationFailed. :returns: dictionary of image properties. It has three of them: 'size', 'updated_at' and 'properties'. 'updated_at' attribute is a naive UTC datetime object. """ class HttpImageService(BaseImageService): """Provides retrieval of disk images using HTTP.""" def validate_href(self, image_href): """Validate HTTP image reference. :param image_href: Image reference. :raises: exception.ImageRefValidationFailed if HEAD request failed or returned response code not equal to 200. :returns: Response to HEAD request. """ try: response = requests.head(image_href) if response.status_code != http_client.OK: raise exception.ImageRefValidationFailed( image_href=image_href, reason=_("Got HTTP code %s instead of 200 in response to " "HEAD request.") % response.status_code) except requests.RequestException as e: raise exception.ImageRefValidationFailed(image_href=image_href, reason=e) return response def download(self, image_href, image_file): """Downloads image to specified location. :param image_href: Image reference. :param image_file: File object to write data to. :raises: exception.ImageRefValidationFailed if GET request returned response code not equal to 200. :raises: exception.ImageDownloadFailed if: * IOError happened during file write; * GET request failed. """ try: response = requests.get(image_href, stream=True) if response.status_code != http_client.OK: raise exception.ImageRefValidationFailed( image_href=image_href, reason=_("Got HTTP code %s instead of 200 in response to " "GET request.") % response.status_code) with response.raw as input_img: shutil.copyfileobj(input_img, image_file, IMAGE_CHUNK_SIZE) except (requests.RequestException, IOError) as e: raise exception.ImageDownloadFailed(image_href=image_href, reason=e) def show(self, image_href): """Get dictionary of image properties. :param image_href: Image reference. :raises: exception.ImageRefValidationFailed if: * HEAD request failed; * HEAD request returned response code not equal to 200; * Content-Length header not found in response to HEAD request. :returns: dictionary of image properties. 
It has three of them: 'size', 'updated_at' and 'properties'. 'updated_at' attribute is a naive UTC datetime object. """ response = self.validate_href(image_href) image_size = response.headers.get('Content-Length') if image_size is None: raise exception.ImageRefValidationFailed( image_href=image_href, reason=_("Cannot determine image size as there is no " "Content-Length header specified in response " "to HEAD request.")) # Parse last-modified header to return naive datetime object str_date = response.headers.get('Last-Modified') date = None if str_date: http_date_format_strings = [ '%a, %d %b %Y %H:%M:%S GMT', # RFC 822 '%A, %d-%b-%y %H:%M:%S GMT', # RFC 850 '%a %b %d %H:%M:%S %Y' # ANSI C ] for fmt in http_date_format_strings: try: date = datetime.datetime.strptime(str_date, fmt) break except ValueError: continue return { 'size': int(image_size), 'updated_at': date, 'properties': {} } class FileImageService(BaseImageService): """Provides retrieval of disk images available locally on the conductor.""" def validate_href(self, image_href): """Validate local image reference. :param image_href: Image reference. :raises: exception.ImageRefValidationFailed if source image file doesn't exist. :returns: Path to image file if it exists. """ image_path = urlparse.urlparse(image_href).path if not os.path.isfile(image_path): raise exception.ImageRefValidationFailed( image_href=image_href, reason=_("Specified image file not found.")) return image_path def download(self, image_href, image_file): """Downloads image to specified location. :param image_href: Image reference. :param image_file: File object to write data to. :raises: exception.ImageRefValidationFailed if source image file doesn't exist. :raises: exception.ImageDownloadFailed if exceptions were raised while writing to file or creating hard link. """ source_image_path = self.validate_href(image_href) dest_image_path = image_file.name local_device = os.stat(dest_image_path).st_dev try: # We should have read and write access to source file to create # hard link to it. if (local_device == os.stat(source_image_path).st_dev and os.access(source_image_path, os.R_OK | os.W_OK)): image_file.close() os.remove(dest_image_path) os.link(source_image_path, dest_image_path) else: filesize = os.path.getsize(source_image_path) with open(source_image_path, 'rb') as input_img: sendfile.sendfile(image_file.fileno(), input_img.fileno(), 0, filesize) except Exception as e: raise exception.ImageDownloadFailed(image_href=image_href, reason=e) def show(self, image_href): """Get dictionary of image properties. :param image_href: Image reference. :raises: exception.ImageRefValidationFailed if image file specified doesn't exist. :returns: dictionary of image properties. It has three of them: 'size', 'updated_at' and 'properties'. 'updated_at' attribute is a naive UTC datetime object. """ source_image_path = self.validate_href(image_href) return { 'size': os.path.getsize(source_image_path), 'updated_at': utils.unix_file_modification_datetime( source_image_path), 'properties': {} } protocol_mapping = { 'http': HttpImageService, 'https': HttpImageService, 'file': FileImageService, 'glance': GlanceImageService, } def get_image_service(image_href, client=None, version=1, context=None): """Get image service instance to download the image. :param image_href: String containing href to get image service for. :param client: Glance client to be used for download, used only if image_href is Glance href. 
:param version: Version of Glance API to use, used only if image_href is Glance href. :param context: request context, used only if image_href is Glance href. :raises: exception.ImageRefValidationFailed if no image service can handle specified href. :returns: Instance of an image service class that is able to download specified image. """ scheme = urlparse.urlparse(image_href).scheme.lower() try: cls = protocol_mapping[scheme or 'glance'] except KeyError: raise exception.ImageRefValidationFailed( image_href=image_href, reason=_('Image download protocol ' '%s is not supported.') % scheme ) if cls == GlanceImageService: return cls(client, version, context) return cls() ironic-5.1.0/ironic/common/swift.py0000664000567000056710000002154412674513466020446 0ustar jenkinsjenkins00000000000000# # Copyright 2014 OpenStack Foundation # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from six.moves import http_client from six.moves.urllib import parse from swiftclient import client as swift_client from swiftclient import exceptions as swift_exceptions from swiftclient import utils as swift_utils from ironic.common import exception from ironic.common.i18n import _ from ironic.common import keystone swift_opts = [ cfg.IntOpt('swift_max_retries', default=2, help=_('Maximum number of times to retry a Swift request, ' 'before failing.')) ] CONF = cfg.CONF CONF.register_opts(swift_opts, group='swift') CONF.import_opt('admin_user', 'keystonemiddleware.auth_token', group='keystone_authtoken') CONF.import_opt('admin_tenant_name', 'keystonemiddleware.auth_token', group='keystone_authtoken') CONF.import_opt('admin_password', 'keystonemiddleware.auth_token', group='keystone_authtoken') CONF.import_opt('auth_uri', 'keystonemiddleware.auth_token', group='keystone_authtoken') CONF.import_opt('auth_version', 'keystonemiddleware.auth_token', group='keystone_authtoken') CONF.import_opt('insecure', 'keystonemiddleware.auth_token', group='keystone_authtoken') CONF.import_opt('cafile', 'keystonemiddleware.auth_token', group='keystone_authtoken') CONF.import_opt('region_name', 'keystonemiddleware.auth_token', group='keystone_authtoken') class SwiftAPI(object): """API for communicating with Swift.""" def __init__(self, user=None, tenant_name=None, key=None, auth_url=None, auth_version=None, region_name=None): """Constructor for creating a SwiftAPI object. 
:param user: the name of the user for Swift account :param tenant_name: the name of the tenant for Swift account :param key: the 'password' or key to authenticate with :param auth_url: the url for authentication :param auth_version: the version of api to use for authentication :param region_name: the region used for getting endpoints of swift """ user = user or CONF.keystone_authtoken.admin_user tenant_name = tenant_name or CONF.keystone_authtoken.admin_tenant_name key = key or CONF.keystone_authtoken.admin_password auth_url = auth_url or CONF.keystone_authtoken.auth_uri auth_version = auth_version or CONF.keystone_authtoken.auth_version auth_url = keystone.get_keystone_url(auth_url, auth_version) params = {'retries': CONF.swift.swift_max_retries, 'insecure': CONF.keystone_authtoken.insecure, 'cacert': CONF.keystone_authtoken.cafile, 'user': user, 'tenant_name': tenant_name, 'key': key, 'authurl': auth_url, 'auth_version': auth_version} region_name = region_name or CONF.keystone_authtoken.region_name if region_name: params['os_options'] = {'region_name': region_name} self.connection = swift_client.Connection(**params) def create_object(self, container, object, filename, object_headers=None): """Uploads a given file to Swift. :param container: The name of the container for the object. :param object: The name of the object in Swift :param filename: The file to upload, as the object data :param object_headers: the headers for the object to pass to Swift :returns: The Swift UUID of the object :raises: SwiftOperationError, if any operation with Swift fails. """ try: self.connection.put_container(container) except swift_exceptions.ClientException as e: operation = _("put container") raise exception.SwiftOperationError(operation=operation, error=e) with open(filename, "r") as fileobj: try: obj_uuid = self.connection.put_object(container, object, fileobj, headers=object_headers) except swift_exceptions.ClientException as e: operation = _("put object") raise exception.SwiftOperationError(operation=operation, error=e) return obj_uuid def get_temp_url(self, container, object, timeout): """Returns the temp url for the given Swift object. :param container: The name of the container in which Swift object is placed. :param object: The name of the Swift object. :param timeout: The timeout in seconds after which the generated url should expire. :returns: The temp url for the object. :raises: SwiftOperationError, if any operation with Swift fails. """ try: account_info = self.connection.head_account() except swift_exceptions.ClientException as e: operation = _("head account") raise exception.SwiftOperationError(operation=operation, error=e) storage_url, token = self.connection.get_auth() parse_result = parse.urlparse(storage_url) swift_object_path = '/'.join((parse_result.path, container, object)) temp_url_key = account_info['x-account-meta-temp-url-key'] url_path = swift_utils.generate_temp_url(swift_object_path, timeout, temp_url_key, 'GET') return parse.urlunparse((parse_result.scheme, parse_result.netloc, url_path, None, None, None)) def delete_object(self, container, object): """Deletes the given Swift object. :param container: The name of the container in which Swift object is placed. :param object: The name of the object in Swift to be deleted. :raises: SwiftObjectNotFoundError, if object is not found in Swift. :raises: SwiftOperationError, if operation with Swift fails. 
""" try: self.connection.delete_object(container, object) except swift_exceptions.ClientException as e: operation = _("delete object") if e.http_status == http_client.NOT_FOUND: raise exception.SwiftObjectNotFoundError(object=object, container=container, operation=operation) raise exception.SwiftOperationError(operation=operation, error=e) def head_object(self, container, object): """Retrieves the information about the given Swift object. :param container: The name of the container in which Swift object is placed. :param object: The name of the object in Swift :returns: The information about the object as returned by Swift client's head_object call. :raises: SwiftOperationError, if operation with Swift fails. """ try: return self.connection.head_object(container, object) except swift_exceptions.ClientException as e: operation = _("head object") raise exception.SwiftOperationError(operation=operation, error=e) def update_object_meta(self, container, object, object_headers): """Update the metadata of a given Swift object. :param container: The name of the container in which Swift object is placed. :param object: The name of the object in Swift :param object_headers: the headers for the object to pass to Swift :raises: SwiftOperationError, if operation with Swift fails. """ try: self.connection.post_object(container, object, object_headers) except swift_exceptions.ClientException as e: operation = _("post object") raise exception.SwiftOperationError(operation=operation, error=e) ironic-5.1.0/ironic/common/service.py0000664000567000056710000001566312674513466020757 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright © 2012 eNovance # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import signal import socket from oslo_concurrency import processutils from oslo_config import cfg from oslo_context import context from oslo_log import log import oslo_messaging as messaging from oslo_service import service from oslo_service import wsgi from oslo_utils import importutils from ironic.api import app from ironic.common import config from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LI from ironic.common import rpc from ironic import objects from ironic.objects import base as objects_base service_opts = [ cfg.IntOpt('periodic_interval', default=60, help=_('Default interval (in seconds) for running driver ' 'periodic tasks.'), deprecated_for_removal=True), cfg.StrOpt('host', default=socket.getfqdn(), help=_('Name of this node. This can be an opaque identifier. ' 'It is not necessarily a hostname, FQDN, or IP address. 
' 'However, the node name must be valid within ' 'an AMQP key, and if using ZeroMQ, a valid ' 'hostname, FQDN, or IP address.')), ] CONF = cfg.CONF LOG = log.getLogger(__name__) CONF.register_opts(service_opts) class RPCService(service.Service): def __init__(self, host, manager_module, manager_class): super(RPCService, self).__init__() self.host = host manager_module = importutils.try_import(manager_module) manager_class = getattr(manager_module, manager_class) self.manager = manager_class(host, manager_module.MANAGER_TOPIC) self.topic = self.manager.topic self.rpcserver = None self.deregister = True def start(self): super(RPCService, self).start() admin_context = context.RequestContext('admin', 'admin', is_admin=True) target = messaging.Target(topic=self.topic, server=self.host) endpoints = [self.manager] serializer = objects_base.IronicObjectSerializer() self.rpcserver = rpc.get_server(target, endpoints, serializer) self.rpcserver.start() self.handle_signal() self.manager.init_host(admin_context) LOG.info(_LI('Created RPC server for service %(service)s on host ' '%(host)s.'), {'service': self.topic, 'host': self.host}) def stop(self): try: self.rpcserver.stop() self.rpcserver.wait() except Exception as e: LOG.exception(_LE('Service error occurred when stopping the ' 'RPC server. Error: %s'), e) try: self.manager.del_host(deregister=self.deregister) except Exception as e: LOG.exception(_LE('Service error occurred when cleaning up ' 'the RPC manager. Error: %s'), e) super(RPCService, self).stop(graceful=True) LOG.info(_LI('Stopped RPC server for service %(service)s on host ' '%(host)s.'), {'service': self.topic, 'host': self.host}) def _handle_signal(self, signo, frame): LOG.info(_LI('Got signal SIGUSR1. Not deregistering on next shutdown ' 'of service %(service)s on host %(host)s.'), {'service': self.topic, 'host': self.host}) self.deregister = False def handle_signal(self): """Add a signal handler for SIGUSR1. The handler ensures that the manager is not deregistered when it is shutdown. """ signal.signal(signal.SIGUSR1, self._handle_signal) def prepare_service(argv=[]): log.register_options(CONF) log.set_defaults(default_log_levels=['amqp=WARNING', 'amqplib=WARNING', 'qpid.messaging=INFO', 'oslo_messaging=INFO', 'sqlalchemy=WARNING', 'keystoneclient=INFO', 'stevedore=INFO', 'eventlet.wsgi.server=WARNING', 'iso8601=WARNING', 'paramiko=WARNING', 'requests=WARNING', 'neutronclient=WARNING', 'glanceclient=WARNING', 'urllib3.connectionpool=WARNING', ]) config.parse_args(argv) log.setup(CONF, 'ironic') objects.register_all() def process_launcher(): return service.ProcessLauncher(CONF) class WSGIService(service.ServiceBase): """Provides ability to launch ironic API from wsgi app.""" def __init__(self, name, use_ssl=False): """Initialize, but do not start the WSGI server. :param name: The name of the WSGI server given to the loader. :param use_ssl: Wraps the socket in an SSL context if True. :returns: None """ self.name = name self.app = app.VersionSelectorApplication() self.workers = (CONF.api.api_workers or processutils.get_worker_count()) if self.workers and self.workers < 1: raise exception.ConfigInvalid( _("api_workers value of %d is invalid, " "must be greater than 0.") % self.workers) self.server = wsgi.Server(CONF, name, self.app, host=CONF.api.host_ip, port=CONF.api.port, use_ssl=use_ssl, logger_name=name) def start(self): """Start serving this service using loaded configuration. :returns: None """ self.server.start() def stop(self): """Stop serving this API. 
:returns: None """ self.server.stop() def wait(self): """Wait for the service to stop serving this API. :returns: None """ self.server.wait() def reset(self): """Reset server greenpool size to default. :returns: None """ self.server.reset() ironic-5.1.0/ironic/common/fsm.py0000664000567000056710000001366112674513466020100 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from automaton import exceptions as automaton_exceptions from automaton import machines import six """State machine modelling. This is being used in the implementation of: http://specs.openstack.org/openstack/ironic-specs/specs/kilo/new-ironic-state-machine.html """ from ironic.common import exception as excp from ironic.common.i18n import _ def _translate_excp(func): """Decorator to translate automaton exceptions into ironic exceptions.""" @six.wraps(func) def wrapper(*args, **kwargs): try: return func(*args, **kwargs) except (automaton_exceptions.InvalidState, automaton_exceptions.NotInitialized, automaton_exceptions.FrozenMachine, automaton_exceptions.NotFound) as e: raise excp.InvalidState(six.text_type(e)) except automaton_exceptions.Duplicate as e: raise excp.Duplicate(six.text_type(e)) return wrapper class FSM(machines.FiniteMachine): """An ironic state-machine class with some ironic specific additions.""" def __init__(self): super(FSM, self).__init__() self._target_state = None # For now make these raise ironic state machine exceptions until # a later period where these should(?) be using the raised automaton # exceptions directly. add_transition = _translate_excp(machines.FiniteMachine.add_transition) @property def target_state(self): return self._target_state def is_stable(self, state): """Is the state stable? :param state: the state of interest :raises: InvalidState if the state is invalid :returns True if it is a stable state; False otherwise """ try: return self._states[state]['stable'] except KeyError: raise excp.InvalidState(_("State '%s' does not exist") % state) @_translate_excp def add_state(self, state, on_enter=None, on_exit=None, target=None, terminal=None, stable=False): """Adds a given state to the state machine. :param stable: Use this to specify that this state is a stable/passive state. A state must have been previously defined as 'stable' before it can be used as a 'target' :param target: The target state for 'state' to go to. Before a state can be used as a target it must have been previously added and specified as 'stable' Further arguments are interpreted as for parent method ``add_state``. 
""" self._validate_target_state(target) super(FSM, self).add_state(state, terminal=terminal, on_enter=on_enter, on_exit=on_exit) self._states[state].update({ 'stable': stable, 'target': target, }) def _post_process_event(self, event, result): # Clear '_target_state' if we've reached it if (self._target_state is not None and self._target_state == self._current.name): self._target_state = None # If new state has a different target, update the '_target_state' if self._states[self._current.name]['target'] is not None: self._target_state = self._states[self._current.name]['target'] def _validate_target_state(self, target): """Validate the target state. A target state must be a valid state that is 'stable'. :param target: The target state :raises: exception.InvalidState if it is an invalid target state """ if target is None: return if target not in self._states: raise excp.InvalidState( _("Target state '%s' does not exist") % target) if not self.is_stable(target): raise excp.InvalidState( _("Target state '%s' is not a 'stable' state") % target) @_translate_excp def initialize(self, start_state=None, target_state=None): """Initialize the FSM. :param start_state: the FSM is initialized to start from this state :param target_state: if specified, the FSM is initialized to this target state. Otherwise use the default target state """ super(FSM, self).initialize(start_state=start_state) current_state = self._current.name self._validate_target_state(target_state) self._target_state = (target_state or self._states[current_state]['target']) @_translate_excp def process_event(self, event, target_state=None): """process the event. :param event: the event to be processed :param target_state: if specified, the 'final' target state for the event. Otherwise, use the default target state """ super(FSM, self).process_event(event) if target_state: # NOTE(rloo): _post_process_event() was invoked at the end of # the above super().process_event() call. At this # point, the default target state is being used but # we want to use the specified state instead. self._validate_target_state(target_state) self._target_state = target_state ironic-5.1.0/ironic/common/utils.py0000664000567000056710000005462112674513466020454 0ustar jenkinsjenkins00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 Justin Santa Barbara # Copyright (c) 2012 NTT DOCOMO, INC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Utilities and helper functions.""" import contextlib import datetime import errno import hashlib import os import random import re import shutil import tempfile import netaddr from oslo_concurrency import processutils from oslo_config import cfg from oslo_log import log as logging from oslo_utils import timeutils import paramiko import pytz import six from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LW utils_opts = [ cfg.StrOpt('rootwrap_config', default="/etc/ironic/rootwrap.conf", help=_('Path to the rootwrap configuration file to use for ' 'running commands as root.')), cfg.StrOpt('tempdir', default=tempfile.gettempdir(), help=_('Temporary working directory, default is Python temp ' 'dir.')), ] CONF = cfg.CONF CONF.register_opts(utils_opts) LOG = logging.getLogger(__name__) def _get_root_helper(): # NOTE(jlvillal): This function has been moved to ironic-lib. And is # planned to be deleted here. If need to modify this function, please # also do the same modification in ironic-lib return 'sudo ironic-rootwrap %s' % CONF.rootwrap_config def execute(*cmd, **kwargs): """Convenience wrapper around oslo's execute() method. :param cmd: Passed to processutils.execute. :param use_standard_locale: True | False. Defaults to False. If set to True, execute command with standard locale added to environment variables. :returns: (stdout, stderr) from process execution :raises: UnknownArgumentError :raises: ProcessExecutionError """ use_standard_locale = kwargs.pop('use_standard_locale', False) if use_standard_locale: env = kwargs.pop('env_variables', os.environ.copy()) env['LC_ALL'] = 'C' kwargs['env_variables'] = env if kwargs.get('run_as_root') and 'root_helper' not in kwargs: kwargs['root_helper'] = _get_root_helper() result = processutils.execute(*cmd, **kwargs) LOG.debug('Execution completed, command line is "%s"', ' '.join(map(str, cmd))) LOG.debug('Command stdout is: "%s"' % result[0]) LOG.debug('Command stderr is: "%s"' % result[1]) return result def trycmd(*args, **kwargs): """Convenience wrapper around oslo's trycmd() method.""" if kwargs.get('run_as_root') and 'root_helper' not in kwargs: kwargs['root_helper'] = _get_root_helper() return processutils.trycmd(*args, **kwargs) def ssh_connect(connection): """Method to connect to a remote system using ssh protocol. :param connection: a dict of connection parameters. :returns: paramiko.SSHClient -- an active ssh connection. :raises: SSHConnectFailed """ try: ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) key_contents = connection.get('key_contents') if key_contents: data = six.moves.StringIO(key_contents) if "BEGIN RSA PRIVATE" in key_contents: pkey = paramiko.RSAKey.from_private_key(data) elif "BEGIN DSA PRIVATE" in key_contents: pkey = paramiko.DSSKey.from_private_key(data) else: # Can't include the key contents - secure material. 
raise ValueError(_("Invalid private key")) else: pkey = None ssh.connect(connection.get('host'), username=connection.get('username'), password=connection.get('password'), port=connection.get('port', 22), pkey=pkey, key_filename=connection.get('key_filename'), timeout=connection.get('timeout', 10)) # send TCP keepalive packets every 20 seconds ssh.get_transport().set_keepalive(20) except Exception as e: LOG.debug("SSH connect failed: %s" % e) raise exception.SSHConnectFailed(host=connection.get('host')) return ssh def generate_uid(topic, size=8): characters = '01234567890abcdefghijklmnopqrstuvwxyz' choices = [random.choice(characters) for _x in range(size)] return '%s-%s' % (topic, ''.join(choices)) def random_alnum(size=32): characters = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ' return ''.join(random.choice(characters) for _ in range(size)) def delete_if_exists(pathname): """delete a file, but ignore file not found error.""" try: os.unlink(pathname) except OSError as e: if e.errno == errno.ENOENT: return else: raise def is_valid_boolstr(val): """Check if the provided string is a valid bool string or not.""" boolstrs = ('true', 'false', 'yes', 'no', 'y', 'n', '1', '0') return str(val).lower() in boolstrs def is_valid_mac(address): """Verify the format of a MAC address. Check if a MAC address is valid and contains six octets. Accepts colon-separated format only. :param address: MAC address to be validated. :returns: True if valid. False if not. """ m = "[0-9a-f]{2}(:[0-9a-f]{2}){5}$" return (isinstance(address, six.string_types) and re.match(m, address.lower())) _is_valid_logical_name_re = re.compile(r'^[A-Z0-9-._~]+$', re.I) # old is_hostname_safe() regex, retained for backwards compat _is_hostname_safe_re = re.compile(r"""^ [a-z0-9]([a-z0-9\-]{0,61}[a-z0-9])? # host (\.[a-z0-9\-_]{0,62}[a-z0-9])* # domain \.? # trailing dot $""", re.X) def is_valid_logical_name(hostname): """Determine if a logical name is valid. The logical name may only consist of RFC3986 unreserved characters, to wit: ALPHA / DIGIT / "-" / "." / "_" / "~" """ if not isinstance(hostname, six.string_types) or len(hostname) > 255: return False return _is_valid_logical_name_re.match(hostname) is not None def is_hostname_safe(hostname): """Old check for valid logical node names. Retained for compatibility with REST API < 1.10. Nominally, checks that the supplied hostname conforms to: * http://en.wikipedia.org/wiki/Hostname * http://tools.ietf.org/html/rfc952 * http://tools.ietf.org/html/rfc1123 In practice, this check has several shortcomings and errors that are more thoroughly documented in bug #1468508. :param hostname: The hostname to be validated. :returns: True if valid. False if not. """ if not isinstance(hostname, six.string_types) or len(hostname) > 255: return False return _is_hostname_safe_re.match(hostname) is not None def is_valid_no_proxy(no_proxy): """Check no_proxy validity Check if no_proxy value that will be written to environment variable by ironic-python-agent is valid. :param no_proxy: the value that requires validity check. Expected to be a comma-separated list of host names, IP addresses and domain names (with optional :port). :returns: True if no_proxy is valid, False otherwise. """ if not isinstance(no_proxy, six.string_types): return False hostname_re = re.compile('(?!-)[A-Z\d-]{1,63}(? max_length: return False if not all(hostname_re.match(part) for part in hostname.split('.')): return False return True def validate_and_normalize_mac(address): """Validate a MAC address and return normalized form. 
Checks whether the supplied MAC address is formally correct and normalize it to all lower case. :param address: MAC address to be validated and normalized. :returns: Normalized and validated MAC address. :raises: InvalidMAC If the MAC address is not valid. """ if not is_valid_mac(address): raise exception.InvalidMAC(mac=address) return address.lower() def is_valid_ipv6_cidr(address): try: str(netaddr.IPNetwork(address, version=6).cidr) return True except Exception: return False def get_shortened_ipv6(address): addr = netaddr.IPAddress(address, version=6) return str(addr.ipv6()) def get_shortened_ipv6_cidr(address): net = netaddr.IPNetwork(address, version=6) return str(net.cidr) def is_valid_cidr(address): """Check if the provided ipv4 or ipv6 address is a valid CIDR address.""" try: # Validate the correct CIDR Address netaddr.IPNetwork(address) except netaddr.core.AddrFormatError: return False except UnboundLocalError: # NOTE(MotoKen): work around bug in netaddr 0.7.5 (see detail in # https://github.com/drkjam/netaddr/issues/2) return False # Prior validation partially verify /xx part # Verify it here ip_segment = address.split('/') if (len(ip_segment) <= 1 or ip_segment[1] == ''): return False return True def get_ip_version(network): """Returns the IP version of a network (IPv4 or IPv6). :raises: AddrFormatError if invalid network. """ if netaddr.IPNetwork(network).version == 6: return "IPv6" elif netaddr.IPNetwork(network).version == 4: return "IPv4" def convert_to_list_dict(lst, label): """Convert a value or list into a list of dicts.""" if not lst: return None if not isinstance(lst, list): lst = [lst] return [{label: x} for x in lst] def sanitize_hostname(hostname): """Return a hostname which conforms to RFC-952 and RFC-1123 specs.""" if isinstance(hostname, six.text_type): hostname = hostname.encode('latin-1', 'ignore') hostname = re.sub(b'[ _]', b'-', hostname) hostname = re.sub(b'[^\w.-]+', b'', hostname) hostname = hostname.lower() hostname = hostname.strip(b'.-') return hostname def read_cached_file(filename, cache_info, reload_func=None): """Read from a file if it has been modified. :param cache_info: dictionary to hold opaque cache. :param reload_func: optional function to be called with data when file is reloaded due to a modification. :returns: data from file """ mtime = os.path.getmtime(filename) if not cache_info or mtime != cache_info.get('mtime'): LOG.debug("Reloading cached file %s" % filename) with open(filename) as fap: cache_info['data'] = fap.read() cache_info['mtime'] = mtime if reload_func: reload_func(cache_info['data']) return cache_info['data'] def file_open(*args, **kwargs): """Open file see built-in file() documentation for more details Note: The reason this is kept in a separate module is to easily be able to provide a stub module that doesn't alter system state at all (for unit tests) """ return file(*args, **kwargs) def _get_hash_object(hash_algo_name): """Create a hash object based on given algorithm. :param hash_algo_name: name of the hashing algorithm. :raises: InvalidParameterValue, on unsupported or invalid input. :returns: a hash object based on the given named algorithm. 
""" algorithms = (hashlib.algorithms_guaranteed if six.PY3 else hashlib.algorithms) if hash_algo_name not in algorithms: msg = (_("Unsupported/Invalid hash name '%s' provided.") % hash_algo_name) LOG.error(msg) raise exception.InvalidParameterValue(msg) return getattr(hashlib, hash_algo_name)() def hash_file(file_like_object, hash_algo='md5'): """Generate a hash for the contents of a file. It returns a hash of the file object as a string of double length, containing only hexadecimal digits. It supports all the algorithms hashlib does. :param file_like_object: file like object whose hash to be calculated. :param hash_algo: name of the hashing strategy, default being 'md5'. :raises: InvalidParameterValue, on unsupported or invalid input. :returns: a condensed digest of the bytes of contents. """ checksum = _get_hash_object(hash_algo) for chunk in iter(lambda: file_like_object.read(32768), b''): checksum.update(chunk) return checksum.hexdigest() @contextlib.contextmanager def temporary_mutation(obj, **kwargs): """Temporarily change object attribute. Temporarily set the attr on a particular object to a given value then revert when finished. One use of this is to temporarily set the read_deleted flag on a context object: with temporary_mutation(context, read_deleted="yes"): do_something_that_needed_deleted_objects() """ def is_dict_like(thing): return hasattr(thing, 'has_key') def get(thing, attr, default): if is_dict_like(thing): return thing.get(attr, default) else: return getattr(thing, attr, default) def set_value(thing, attr, val): if is_dict_like(thing): thing[attr] = val else: setattr(thing, attr, val) def delete(thing, attr): if is_dict_like(thing): del thing[attr] else: delattr(thing, attr) NOT_PRESENT = object() old_values = {} for attr, new_value in kwargs.items(): old_values[attr] = get(obj, attr, NOT_PRESENT) set_value(obj, attr, new_value) try: yield finally: for attr, old_value in old_values.items(): if old_value is NOT_PRESENT: delete(obj, attr) else: set_value(obj, attr, old_value) @contextlib.contextmanager def tempdir(**kwargs): tempfile.tempdir = CONF.tempdir tmpdir = tempfile.mkdtemp(**kwargs) try: yield tmpdir finally: try: shutil.rmtree(tmpdir) except OSError as e: LOG.error(_LE('Could not remove tmpdir: %s'), e) def rmtree_without_raise(path): try: if os.path.isdir(path): shutil.rmtree(path) except OSError as e: LOG.warning(_LW("Failed to remove dir %(path)s, error: %(e)s"), {'path': path, 'e': e}) def write_to_file(path, contents): with open(path, 'w') as f: f.write(contents) def create_link_without_raise(source, link): try: os.symlink(source, link) except OSError as e: if e.errno == errno.EEXIST: return else: LOG.warning( _LW("Failed to create symlink from %(source)s to %(link)s" ", error: %(e)s"), {'source': source, 'link': link, 'e': e}) def safe_rstrip(value, chars=None): """Removes trailing characters from a string if that does not make it empty :param value: A string value that will be stripped. :param chars: Characters to remove. :return: Stripped value. """ if not isinstance(value, six.string_types): LOG.warning(_LW("Failed to remove trailing character. Returning " "original object. Supplied object is not a string: " "%s,"), value) return value return value.rstrip(chars) or value def mount(src, dest, *args): """Mounts a device/image file on specified location. :param src: the path to the source file for mounting :param dest: the path where it needs to be mounted. :param args: a tuple containing the arguments to be passed to mount command. 
:raises: processutils.ProcessExecutionError if it failed to run the process. """ args = ('mount', ) + args + (src, dest) execute(*args, run_as_root=True, check_exit_code=[0]) def umount(loc, *args): """Umounts a mounted location. :param loc: the path to be unmounted. :param args: a tuple containing the arguments to be passed to the umount command. :raises: processutils.ProcessExecutionError if it failed to run the process. """ args = ('umount', ) + args + (loc, ) execute(*args, run_as_root=True, check_exit_code=[0]) def check_dir(directory_to_check=None, required_space=1): """Check that a directory is usable. This function can be used by drivers to check that directories they need to write to are usable. This should be called from the driver's init function. This function checks that the directory exists and then calls _check_dir_writable and _check_dir_free_space. If directory_to_check is not provided the default is to use the temp directory. :param directory_to_check: the directory to check. :param required_space: amount of space to check for in MiB. :raises: PathNotFound if the directory cannot be found :raises: DirectoryNotWritable if the user is unable to write to the directory :raises: InsufficientDiskSpace, if free space is < required space """ # check if directory_to_check is passed in, if not set to tempdir if directory_to_check is None: directory_to_check = CONF.tempdir LOG.debug("checking directory: %s", directory_to_check) if not os.path.exists(directory_to_check): raise exception.PathNotFound(dir=directory_to_check) _check_dir_writable(directory_to_check) _check_dir_free_space(directory_to_check, required_space) def _check_dir_writable(chk_dir): """Check that the chk_dir is able to be written to. :param chk_dir: Directory to check :raises: DirectoryNotWritable if the user is unable to write to the directory """ is_writable = os.access(chk_dir, os.W_OK) if not is_writable: raise exception.DirectoryNotWritable(dir=chk_dir) def _check_dir_free_space(chk_dir, required_space=1): """Check that the directory has some free space. :param chk_dir: Directory to check :param required_space: amount of space to check for in MiB. :raises: InsufficientDiskSpace, if free space is < required space """ # check that we have some free space stat = os.statvfs(chk_dir) # get dir free space in MiB. free_space = float(stat.f_bsize * stat.f_bavail) / 1024 / 1024 # check for at least required_space MiB free if free_space < required_space: raise exception.InsufficientDiskSpace(path=chk_dir, required=required_space, actual=free_space) def get_updated_capabilities(current_capabilities, new_capabilities): """Returns an updated capability string. This method updates the original (or current) capabilities with the new capabilities. The original capabilities would typically be from a node's properties['capabilities']. From new_capabilities, any new capabilities are added, and existing capabilities may have their values updated. This updated capabilities string is returned. :param current_capabilities: Current capability string :param new_capabilities: the dictionary of capabilities to be updated. :returns: An updated capability string, with new_capabilities merged in. :raises: ValueError, if current_capabilities is malformed or if new_capabilities is not a dictionary """ if not isinstance(new_capabilities, dict): raise ValueError( _("Cannot update capabilities. The new capabilities should be in " "a dictionary. 
Provided value is %s") % new_capabilities) cap_dict = {} if current_capabilities: try: cap_dict = dict(x.split(':', 1) for x in current_capabilities.split(',')) except ValueError: # Capabilities can be filled by operator. ValueError can # occur in malformed capabilities like: # properties/capabilities='boot_mode:bios,boot_option'. raise ValueError( _("Invalid capabilities string '%s'.") % current_capabilities) cap_dict.update(new_capabilities) return ','.join('%(key)s:%(value)s' % {'key': key, 'value': value} for key, value in cap_dict.items()) def is_regex_string_in_file(path, string): with open(path, 'r') as inf: return any(re.search(string, line) for line in inf.readlines()) def unix_file_modification_datetime(file_name): return timeutils.normalize_time( # normalize time to be UTC without timezone datetime.datetime.fromtimestamp( # fromtimestamp will return local time by default, make it UTC os.path.getmtime(file_name), tz=pytz.utc ) ) def validate_network_port(port, port_name="Port"): """Validates the given port. :param port: TCP/UDP port. :param port_name: Name of the port. :returns: An integer port number. :raises: InvalidParameterValue, if the port is invalid. """ try: port = int(port) except ValueError: raise exception.InvalidParameterValue(_( '%(port_name)s "%(port)s" is not a valid integer.') % {'port_name': port_name, 'port': port}) if port < 1 or port > 65535: raise exception.InvalidParameterValue(_( '%(port_name)s "%(port)s" is out of range. Valid port ' 'numbers must be between 1 and 65535.') % {'port_name': port_name, 'port': port}) return port ironic-5.1.0/ironic/common/config_generator/0000775000567000056710000000000012674513633022241 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/common/config_generator/generator.py0000664000567000056710000003071612674513466024614 0ustar jenkinsjenkins00000000000000# Copyright 2012 SINA Corporation # Copyright 2014 Cisco Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
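#
# For orientation, the generated sample file has this shape (a hand-written
# illustration of the print_group_opts()/_print_opt() output below, not the
# result of a real run):
#
#     [DEFAULT]
#
#     #
#     # Options defined in ironic.api
#     #
#
#     # The TCP port on which ironic-api listens. (port value)
#     #port=6385
#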
# """Extracts OpenStack config option info from module(s).""" # NOTE(GheRivero): Copied from oslo_incubator before getting removed in # Change-Id: If15b77d31a8c615aad8fca30f6dd9928da2d08bb from __future__ import print_function import argparse import imp import os import re import socket import sys import tempfile import textwrap import mock from oslo_config import cfg import oslo_i18n from oslo_utils import importutils import six import stevedore.named oslo_i18n.install('ironic') OPT = "Opt" STROPT = "StrOpt" BOOLOPT = "BoolOpt" INTOPT = "IntOpt" FLOATOPT = "FloatOpt" LISTOPT = "ListOpt" DICTOPT = "DictOpt" MULTISTROPT = "MultiStrOpt" PORTOPT = "PortOpt" OPT_TYPES = { OPT: 'type of value is unknown', STROPT: 'string value', BOOLOPT: 'boolean value', INTOPT: 'integer value', FLOATOPT: 'floating point value', LISTOPT: 'list value', DICTOPT: 'dict value', MULTISTROPT: 'multi valued', PORTOPT: 'port value', } OPTION_REGEX = re.compile(r"(%s)" % "|".join(OPT_TYPES)) PY_EXT = ".py" BASEDIR = os.path.abspath(os.path.join(os.path.dirname(__file__), "../../../../")) WORDWRAP_WIDTH = 60 def raise_extension_exception(extmanager, ep, err): raise # Don't let the system hostname or FQDN affect config file values. Certain 3rd # party libraries use either 'gethostbyname' or 'getfqdn' to set the default # value. @mock.patch.object(socket, 'gethostname', lambda: 'localhost') @mock.patch.object(socket, 'getfqdn', lambda: 'localhost') @mock.patch.object(tempfile, 'gettempdir', lambda: '/tmp') def generate(argv): parser = argparse.ArgumentParser( description='generate sample configuration file', ) parser.add_argument('-m', dest='modules', action='append') parser.add_argument('-l', dest='libraries', action='append') parser.add_argument('srcfiles', nargs='*') parsed_args = parser.parse_args(argv) mods_by_pkg = dict() for filepath in parsed_args.srcfiles: pkg_name = filepath.split(os.sep)[1] mod_str = '.'.join(['.'.join(filepath.split(os.sep)[:-1]), os.path.basename(filepath).split('.')[0]]) mods_by_pkg.setdefault(pkg_name, list()).append(mod_str) # NOTE(lzyeval): place top level modules before packages pkg_names = sorted(pkg for pkg in mods_by_pkg if pkg.endswith(PY_EXT)) ext_names = sorted(pkg for pkg in mods_by_pkg if pkg not in pkg_names) pkg_names.extend(ext_names) # opts_by_group is a mapping of group name to an options list # The options list is a list of (module, options) tuples opts_by_group = {'DEFAULT': []} if parsed_args.modules: for module_name in parsed_args.modules: module = _import_module(module_name) if module: for group, opts in _list_opts(module): opts_by_group.setdefault(group, []).append((module_name, opts)) # Look for entry points defined in libraries (or applications) for # option discovery, and include their return values in the output. # # Each entry point should be a function returning an iterable # of pairs with the group name (or None for the default group) # and the list of Opt instances for that group. 
if parsed_args.libraries: loader = stevedore.named.NamedExtensionManager( 'oslo.config.opts', names=list(set(parsed_args.libraries)), invoke_on_load=False, on_load_failure_callback=raise_extension_exception ) for ext in loader: for group, opts in ext.plugin(): opt_list = opts_by_group.setdefault(group or 'DEFAULT', []) opt_list.append((ext.name, opts)) for pkg_name in pkg_names: mods = mods_by_pkg.get(pkg_name) mods.sort() for mod_str in mods: if mod_str.endswith('.__init__'): mod_str = mod_str[:mod_str.rfind(".")] mod_obj = _import_module(mod_str) if not mod_obj: raise RuntimeError("Unable to import module %s" % mod_str) for group, opts in _list_opts(mod_obj): opts_by_group.setdefault(group, []).append((mod_str, opts)) print_group_opts('DEFAULT', opts_by_group.pop('DEFAULT', [])) for group in sorted(opts_by_group.keys()): print_group_opts(group, opts_by_group[group]) def _import_module(mod_str): try: if mod_str.startswith('bin.'): imp.load_source(mod_str[4:], os.path.join('bin', mod_str[4:])) return sys.modules[mod_str[4:]] else: return importutils.import_module(mod_str) except Exception as e: sys.stderr.write("Error importing module %s: %s\n" % (mod_str, e)) return None def _is_in_group(opt, group): """Check if opt is in group.""" for value in group._opts.values(): if value['opt'] is opt: return True return False def _guess_groups(opt): # is it in the DEFAULT group? if _is_in_group(opt, cfg.CONF): return 'DEFAULT' # what other groups is it in? for value in cfg.CONF.values(): if isinstance(value, cfg.CONF.GroupAttr): if _is_in_group(opt, value._group): return value._group.name raise RuntimeError( "Unable to find group for option %s, " "maybe it's defined twice in the same group?" % opt.name ) def _list_opts(obj): def is_opt(o): return (isinstance(o, cfg.Opt) and not isinstance(o, cfg.SubCommandOpt)) opts = list() if 'list_opts' in dir(obj): group_opts = getattr(obj, 'list_opts')() # NOTE(GheRivero): Options without a defined group, # must be registered to the DEFAULT section fixed_list = [] for section, opts in group_opts: if not section: section = 'DEFAULT' fixed_list.append((section, opts)) return fixed_list for attr_str in dir(obj): attr_obj = getattr(obj, attr_str) if is_opt(attr_obj): opts.append(attr_obj) elif (isinstance(attr_obj, list) and all(map(lambda x: is_opt(x), attr_obj))): opts.extend(attr_obj) ret = {} for opt in opts: ret.setdefault(_guess_groups(opt), []).append(opt) return ret.items() def print_group_opts(group, opts_by_module): print("[%s]" % group) print('') for mod, opts in sorted(opts_by_module, key=lambda x: x[0]): print('#') print('# Options defined in %s' % mod) print('#') print('') for opt in opts: _print_opt(opt, group) print('') def _get_choice_text(choice): if choice is None: return '' elif choice == '': return "''" return six.text_type(choice) def _get_my_ip(): try: csock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) csock.connect(('8.8.8.8', 80)) (addr, port) = csock.getsockname() csock.close() return addr except socket.error: return None def _sanitize_default(name, value): """Set up a reasonably sensible default for pybasedir, my_ip and host.""" if value.startswith(sys.prefix): # NOTE(jd) Don't use os.path.join, because it is likely to think the # second part is an absolute pathname and therefore drop the first # part. 
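        # For example (paths invented for illustration): with sys.prefix
        # set to '/usr/local', a default of '/usr/local/lib/python/state'
        # is rewritten to '/usr/lib/python/state'; the branches below
        # likewise map BASEDIR-relative paths onto
        # '/usr/lib/python/site-packages' and the detected local IP onto
        # '10.0.0.1'.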
value = os.path.normpath("/usr/" + value[len(sys.prefix):]) elif value.startswith(BASEDIR): return value.replace(BASEDIR, '/usr/lib/python/site-packages') elif BASEDIR in value: return value.replace(BASEDIR, '') elif value == _get_my_ip(): return '10.0.0.1' elif value.strip() != value: return '"%s"' % value return value def _print_opt(opt, group): opt_name, opt_default, opt_help = opt.dest, opt.default, opt.help if not opt_help: sys.stderr.write('WARNING: "%s" is missing help string.\n' % opt_name) opt_help = "" result = OPTION_REGEX.search(str(type(opt))) if not result: raise ValueError( "Config option: {!r} Unknown option type: {}\n".format( opt_name, type(opt))) opt_type = result.group(0) opt_help = u'%s (%s)' % (opt_help, OPT_TYPES[opt_type]) print('#', "\n# ".join(textwrap.wrap(opt_help, WORDWRAP_WIDTH))) min_value = getattr(opt.type, 'min', None) max_value = getattr(opt.type, 'max', None) choices = getattr(opt.type, 'choices', None) # NOTE(lintan): choices are mutually exclusive with 'min/max', # see oslo.config for more details. if min_value is not None and max_value is not None: print('# Possible values: %(min_value)d-%(max_value)d' % {'min_value': min_value, 'max_value': max_value}) elif min_value is not None: print('# Minimum value: %d' % min_value) elif max_value is not None: print('# Maximum value: %d' % max_value) elif choices is not None: if choices == []: print('# No possible values.') else: choices_text = ', '.join([_get_choice_text(choice) for choice in choices]) print('# Possible values: %s' % choices_text) if opt.deprecated_opts: for deprecated_opt in opt.deprecated_opts: deprecated_name = (deprecated_opt.name if deprecated_opt.name else opt_name) deprecated_group = (deprecated_opt.group if deprecated_opt.group else group) print('# Deprecated group/name - [%s]/%s' % (deprecated_group, deprecated_name)) if opt.deprecated_for_removal: print('# This option is deprecated and planned for removal in a ' 'future release.') try: if opt_default is None: print('#%s=' % opt_name) else: _print_type(opt_type, opt_name, opt_default) print('') except Exception as e: sys.stderr.write('Error in option "%s": %s\n' % (opt_name, e)) sys.exit(1) def _print_type(opt_type, opt_name, opt_default): if opt_type == OPT: print('#%s=%s' % (opt_name, opt_default)) elif opt_type == STROPT: assert(isinstance(opt_default, six.string_types)) print('#%s=%s' % (opt_name, _sanitize_default(opt_name, opt_default))) elif opt_type == BOOLOPT: assert(isinstance(opt_default, bool)) print('#%s=%s' % (opt_name, str(opt_default).lower())) elif opt_type == INTOPT: assert(isinstance(opt_default, int) and not isinstance(opt_default, bool)) print('#%s=%s' % (opt_name, opt_default)) elif opt_type == PORTOPT: assert(isinstance(opt_default, int) and not isinstance(opt_default, bool)) print('#%s=%s' % (opt_name, opt_default)) elif opt_type == FLOATOPT: assert(isinstance(opt_default, float)) print('#%s=%s' % (opt_name, opt_default)) elif opt_type == LISTOPT: assert(isinstance(opt_default, list)) print('#%s=%s' % (opt_name, ','.join(opt_default))) elif opt_type == DICTOPT: assert(isinstance(opt_default, dict)) opt_default_strlist = [str(key) + ':' + str(value) for (key, value) in opt_default.items()] print('#%s=%s' % (opt_name, ','.join(opt_default_strlist))) elif opt_type == MULTISTROPT: assert(isinstance(opt_default, list)) if not opt_default: opt_default = [''] for default in opt_default: print('#%s=%s' % (opt_name, default)) else: raise ValueError("unknown oslo_config type %s" % opt_type) def main(): 
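    # A typical invocation, shown for illustration only (the file list and
    # library names vary by tree):
    #
    #     python ironic/common/config_generator/generator.py \
    #         -l oslo.messaging \
    #         ironic/api/__init__.py ironic/netconf.py \
    #         > ironic.conf.sample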
generate(sys.argv[1:]) if __name__ == '__main__': main() ironic-5.1.0/ironic/common/config_generator/__init__.py0000664000567000056710000000000012674513466024344 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/common/network.py0000664000567000056710000000272312674513466021001 0ustar jenkinsjenkins00000000000000# Copyright 2014 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. def get_node_vif_ids(task): """Get all VIF ids for a node. This function does not handle multi node operations. :param task: a TaskManager instance. :returns: A dict of Node's neutron ports where keys are 'ports' & 'portgroups' and the values are dict of UUIDs and their associated VIFs, e.g. :: {'ports': {'port.uuid': vif.id}, 'portgroups': {'portgroup.uuid': vif.id}} """ vifs = {} portgroup_vifs = {} port_vifs = {} for portgroup in task.portgroups: vif = portgroup.extra.get('vif_port_id') if vif: portgroup_vifs[portgroup.uuid] = vif vifs['portgroups'] = portgroup_vifs for port in task.ports: vif = port.extra.get('vif_port_id') if vif: port_vifs[port.uuid] = vif vifs['ports'] = port_vifs return vifs ironic-5.1.0/ironic/netconf.py0000664000567000056710000000225112674513466017450 0ustar jenkinsjenkins00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # Copyright 2012 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from oslo_utils import netutils from ironic.common.i18n import _ CONF = cfg.CONF netconf_opts = [ cfg.StrOpt('my_ip', default=netutils.get_my_ipv4(), help=_('IP address of this host. If unset, will determine the ' 'IP programmatically. If unable to do so, will use ' '"127.0.0.1".')), ] CONF.register_opts(netconf_opts) ironic-5.1.0/ironic/nova/0000775000567000056710000000000012674513633016401 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/nova/compute/0000775000567000056710000000000012674513633020055 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/nova/compute/manager.py0000664000567000056710000001032012674513466022041 0ustar jenkinsjenkins00000000000000# coding=utf-8 # # Copyright 2014 Red Hat, Inc. # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Short term workaround for friction in the Nova compute manager with Ironic. https://etherpad.openstack.org/p/ironic-nova-friction contains current design work. The goal here is to generalise the areas where n-c talking to a clustered hypervisor has issues, and long term fold them into the main ComputeManager. """ from nova.compute import manager import nova.context from oslo_concurrency import lockutils CCM_SEMAPHORE = 'clustered_compute_manager' class ClusteredComputeManager(manager.ComputeManager): def init_host(self): """Initialization for a clustered compute service.""" self.driver.init_host(host=self.host) # Not used currently. # context = nova.context.get_admin_context() # instances = instance_obj.InstanceList.get_by_host( # context, self.host, expected_attrs=['info_cache']) # defer_iptables_apply is moot for clusters - no local iptables # if CONF.defer_iptables_apply: # self.driver.filter_defer_apply_on() self.init_virt_events() # try: # evacuation is moot for a clustered hypervisor # # checking that instance was not already evacuated to other host # self._destroy_evacuated_instances(context) # Don't run _init_instance until we solve the partitioning problem # - with N n-cpu's all claiming the same hostname, running # _init_instance here would lead to race conditions where each runs # _init_instance concurrently. # for instance in instances: # self._init_instance(context, instance) # finally: # defer_iptables_apply is moot for clusters - no local iptables # if CONF.defer_iptables_apply: # self.driver.filter_defer_apply_off() def pre_start_hook(self): """Update our available resources After the service is initialized, but before we fully bring the service up by listening on RPC queues, make sure to update our available resources (and indirectly our available nodes). """ # This is an optimisation to immediately advertise resources but # the periodic task will update them eventually anyway, so ignore # errors as they may be transient (e.g. the scheduler isn't # available...). XXX(lifeless) this applies to all ComputeManagers # and once I feature freeze is over we should push that to nova # directly. try: self.update_available_resource(nova.context.get_admin_context()) except Exception: pass @lockutils.synchronized(CCM_SEMAPHORE, 'ironic-') def _update_resources(self): """Update our resources Updates the resources while protecting against a race on self._resource_tracker_dict. """ self.update_available_resource(nova.context.get_admin_context()) def terminate_instance(self, context, instance, bdms, reservations): """Terminate an instance on a node. We override this method and force a post-termination update to Nova's resources. This avoids having to wait for a Nova periodic task tick before nodes can be reused. 
""" super(ClusteredComputeManager, self).terminate_instance(context, instance, bdms, reservations) self._update_resources() ironic-5.1.0/ironic/nova/compute/__init__.py0000664000567000056710000000000012674513466022160 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/nova/__init__.py0000664000567000056710000000000012674513466020504 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/api/0000775000567000056710000000000012674513633016207 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/api/app.wsgi0000664000567000056710000000205012674513466017663 0ustar jenkinsjenkins00000000000000# -*- mode: python -*- # -*- encoding: utf-8 -*- # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Use this file for deploying the API service under Apache2 mod_wsgi. """ import logging import sys from oslo_config import cfg import oslo_i18n as i18n from oslo_log import log from ironic.api import app from ironic.common import service CONF = cfg.CONF i18n.install('ironic') service.prepare_service(sys.argv) LOG = log.getLogger(__name__) LOG.debug("Configuration:") CONF.log_opt_values(LOG, logging.DEBUG) application = app.VersionSelectorApplication() ironic-5.1.0/ironic/api/expose.py0000664000567000056710000000160112674513466020066 0ustar jenkinsjenkins00000000000000# # Copyright 2015 Rackspace, Inc # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import wsmeext.pecan as wsme_pecan def expose(*args, **kwargs): """Ensure that only JSON, and not XML, is supported.""" if 'rest_content_types' not in kwargs: kwargs['rest_content_types'] = ('json',) return wsme_pecan.wsexpose(*args, **kwargs) ironic-5.1.0/ironic/api/config.py0000664000567000056710000000321512674513466020033 0ustar jenkinsjenkins00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# Server Specific Configurations # See https://pecan.readthedocs.org/en/latest/configuration.html#server-configuration # noqa server = { 'port': '6385', 'host': '0.0.0.0' } # Pecan Application Configurations # See https://pecan.readthedocs.org/en/latest/configuration.html#application-configuration # noqa app = { 'root': 'ironic.api.controllers.root.RootController', 'modules': ['ironic.api'], 'static_root': '%(confdir)s/public', 'debug': False, 'enable_acl': True, 'acl_public_routes': [ '/', '/v1', # IPA ramdisk methods '/v1/drivers/[a-z0-9_]*/vendor_passthru/lookup', '/v1/nodes/[a-z0-9\-]+/vendor_passthru/heartbeat', # DIB ramdisk methods # NOTE(yuriyz): support URL without 'v1' for backward compatibility # with old DIB ramdisks. '(?:/v1)?/nodes/[a-z0-9\-]+/vendor_passthru/pass_(?:deploy|' 'bootloader_install)_info', ], } # WSME Configurations # See https://wsme.readthedocs.org/en/latest/integrate.html#configuration wsme = { 'debug': False, } ironic-5.1.0/ironic/api/acl.py0000664000567000056710000000244112674513466017325 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright © 2012 New Dream Network, LLC (DreamHost) # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Access Control Lists (ACLs) control access to the API server.""" from ironic.api.middleware import auth_token def install(app, conf, public_routes): """Install ACL check on application. :param app: A WSGI application. :param conf: Settings. Dict'ified and passed to keystonemiddleware :param public_routes: The list of routes which may be accessed without authentication. :return: The same WSGI application with ACL installed. """ return auth_token.AuthTokenMiddleware(app, conf=dict(conf), public_api_routes=public_routes) ironic-5.1.0/ironic/api/__init__.py0000664000567000056710000000526112674513466020330 0ustar jenkinsjenkins00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common.i18n import _ API_SERVICE_OPTS = [ cfg.StrOpt('host_ip', default='0.0.0.0', help=_('The IP address on which ironic-api listens.')), cfg.PortOpt('port', default=6385, help=_('The TCP port on which ironic-api listens.')), cfg.IntOpt('max_limit', default=1000, help=_('The maximum number of items returned in a single ' 'response from a collection resource.')), cfg.StrOpt('public_endpoint', default=None, help=_("Public URL to use when building the links to the API " "resources (for example, \"https://ironic.rocks:6385\")."
" If None the links will be built using the request's " "host URL. If the API is operating behind a proxy, you " "will want to change this to represent the proxy's URL. " "Defaults to None.")), cfg.IntOpt('api_workers', help=_('Number of workers for OpenStack Ironic API service. ' 'The default is equal to the number of CPUs available ' 'if that can be determined, else a default worker ' 'count of 1 is returned.')), cfg.BoolOpt('enable_ssl_api', default=False, help=_("Enable the integrated stand-alone API to service " "requests via HTTPS instead of HTTP. If there is a " "front-end service performing HTTPS offloading from " "the service, this option should be False; note, you " "will want to change public API endpoint to represent " "SSL termination URL with 'public_endpoint' option.")), ] CONF = cfg.CONF opt_group = cfg.OptGroup(name='api', title='Options for the ironic-api service') CONF.register_group(opt_group) CONF.register_opts(API_SERVICE_OPTS, opt_group) ironic-5.1.0/ironic/api/controllers/0000775000567000056710000000000012674513633020555 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/api/controllers/base.py0000664000567000056710000000713712674513466022055 0ustar jenkinsjenkins00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import functools from webob import exc import wsme from wsme import types as wtypes from ironic.common.i18n import _ class APIBase(wtypes.Base): created_at = wsme.wsattr(datetime.datetime, readonly=True) """The time in UTC at which the object is created""" updated_at = wsme.wsattr(datetime.datetime, readonly=True) """The time in UTC at which the object is updated""" def as_dict(self): """Render this object as a dict of its fields.""" return dict((k, getattr(self, k)) for k in self.fields if hasattr(self, k) and getattr(self, k) != wsme.Unset) def unset_fields_except(self, except_list=None): """Unset fields so they don't appear in the message body. :param except_list: A list of fields that won't be touched. """ if except_list is None: except_list = [] for k in self.as_dict(): if k not in except_list: setattr(self, k, wsme.Unset) @functools.total_ordering class Version(object): """API Version object.""" string = 'X-OpenStack-Ironic-API-Version' """HTTP Header string carrying the requested version""" min_string = 'X-OpenStack-Ironic-API-Minimum-Version' """HTTP response header""" max_string = 'X-OpenStack-Ironic-API-Maximum-Version' """HTTP response header""" def __init__(self, headers, default_version, latest_version): """Create an API Version object from the supplied headers. 
:param headers: webob headers :param default_version: version to use if not specified in headers :param latest_version: version to use if latest is requested :raises: webob.HTTPNotAcceptable """ (self.major, self.minor) = Version.parse_headers( headers, default_version, latest_version) def __repr__(self): return '%s.%s' % (self.major, self.minor) @staticmethod def parse_headers(headers, default_version, latest_version): """Determine the API version requested based on the headers supplied. :param headers: webob headers :param default_version: version to use if not specified in headers :param latest_version: version to use if latest is requested :returns: a tuple of (major, minor) version numbers :raises: webob.HTTPNotAcceptable """ version_str = headers.get(Version.string, default_version) if version_str.lower() == 'latest': parse_str = latest_version else: parse_str = version_str try: version = tuple(int(i) for i in parse_str.split('.')) except ValueError: version = () if len(version) != 2: raise exc.HTTPNotAcceptable(_( "Invalid value for %s header") % Version.string) return version def __gt__(a, b): return (a.major, a.minor) > (b.major, b.minor) def __eq__(a, b): return (a.major, a.minor) == (b.major, b.minor) ironic-5.1.0/ironic/api/controllers/root.py0000664000567000056710000000730312674513466022121 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright © 2012 New Dream Network, LLC (DreamHost) # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pecan from pecan import rest from wsme import types as wtypes from ironic.api.controllers import base from ironic.api.controllers import link from ironic.api.controllers import v1 from ironic.api.controllers.v1 import versions from ironic.api import expose ID_VERSION1 = 'v1' class Version(base.APIBase): """An API version representation. This class represents an API version, including the minimum and maximum minor versions that are supported within the major version. """ id = wtypes.text """The ID of the (major) version, also acts as the release number""" links = [link.Link] """A list of links that point to a specific version of the API""" status = wtypes.text """Status of the version. One of: * CURRENT - the latest version of the API, * SUPPORTED - a supported, but not the latest, version of the API, * DEPRECATED - a supported, but deprecated, version of the API. 
""" version = wtypes.text """The current, maximum supported (major.minor) version of API.""" min_version = wtypes.text """Minimum supported (major.minor) version of API.""" def __init__(self, id, min_version, version, status='CURRENT'): self.id = id self.links = [link.Link.make_link('self', pecan.request.public_url, self.id, '', bookmark=True)] self.status = status self.version = version self.min_version = min_version class Root(base.APIBase): name = wtypes.text """The name of the API""" description = wtypes.text """Some information about this API""" versions = [Version] """Links to all the versions available in this API""" default_version = Version """A link to the default version of the API""" @staticmethod def convert(): root = Root() root.name = "OpenStack Ironic API" root.description = ("Ironic is an OpenStack project which aims to " "provision baremetal machines.") root.default_version = Version(ID_VERSION1, versions.MIN_VERSION_STRING, versions.MAX_VERSION_STRING) root.versions = [root.default_version] return root class RootController(rest.RestController): _versions = [ID_VERSION1] """All supported API versions""" _default_version = ID_VERSION1 """The default API version""" v1 = v1.Controller() @expose.expose(Root) def get(self): # NOTE: The reason why convert() it's being called for every # request is because we need to get the host url from # the request object to make the links. return Root.convert() @pecan.expose() def _route(self, args): """Overrides the default routing behavior. It redirects the request to the default version of the ironic API if the version number is not specified in the url. """ if args[0] and args[0] not in self._versions: args = [self._default_version] + args return super(RootController, self)._route(args) ironic-5.1.0/ironic/api/controllers/v1/0000775000567000056710000000000012674513633021103 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/api/controllers/v1/port.py0000664000567000056710000004075312674513466022456 0ustar jenkinsjenkins00000000000000# Copyright 2013 UnitedStack Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime from oslo_utils import uuidutils import pecan from pecan import rest from six.moves import http_client import wsme from wsme import types as wtypes from ironic.api.controllers import base from ironic.api.controllers import link from ironic.api.controllers.v1 import collection from ironic.api.controllers.v1 import types from ironic.api.controllers.v1 import utils as api_utils from ironic.api import expose from ironic.common import exception from ironic.common.i18n import _ from ironic import objects _DEFAULT_RETURN_FIELDS = ('uuid', 'address') class Port(base.APIBase): """API representation of a port. This class enforces type checking and value constraints, and converts between the internal object model and the API representation of a port. 
""" _node_uuid = None def _get_node_uuid(self): return self._node_uuid def _set_node_uuid(self, value): if value and self._node_uuid != value: try: # FIXME(comstud): One should only allow UUID here, but # there seems to be a bug in that tests are passing an # ID. See bug #1301046 for more details. node = objects.Node.get(pecan.request.context, value) self._node_uuid = node.uuid # NOTE(lucasagomes): Create the node_id attribute on-the-fly # to satisfy the api -> rpc object # conversion. self.node_id = node.id except exception.NodeNotFound as e: # Change error code because 404 (NotFound) is inappropriate # response for a POST request to create a Port e.code = http_client.BAD_REQUEST # BadRequest raise e elif value == wtypes.Unset: self._node_uuid = wtypes.Unset uuid = types.uuid """Unique UUID for this port""" address = wsme.wsattr(types.macaddress, mandatory=True) """MAC Address for this port""" extra = {wtypes.text: types.jsontype} """This port's meta data""" node_uuid = wsme.wsproperty(types.uuid, _get_node_uuid, _set_node_uuid, mandatory=True) """The UUID of the node this port belongs to""" links = wsme.wsattr([link.Link], readonly=True) """A list containing a self link and associated port links""" def __init__(self, **kwargs): self.fields = [] fields = list(objects.Port.fields) # NOTE(lucasagomes): node_uuid is not part of objects.Port.fields # because it's an API-only attribute fields.append('node_uuid') for field in fields: # Add fields we expose. if hasattr(self, field): self.fields.append(field) setattr(self, field, kwargs.get(field, wtypes.Unset)) # NOTE(lucasagomes): node_id is an attribute created on-the-fly # by _set_node_uuid(), it needs to be present in the fields so # that as_dict() will contain node_id field when converting it # before saving it in the database. 
self.fields.append('node_id') setattr(self, 'node_uuid', kwargs.get('node_id', wtypes.Unset)) @staticmethod def _convert_with_links(port, url, fields=None): # NOTE(lucasagomes): Since we are able to return a specified set of # fields the "uuid" can be unset, so we need to save it in another # variable to use when building the links port_uuid = port.uuid if fields is not None: port.unset_fields_except(fields) # never expose the node_id attribute port.node_id = wtypes.Unset port.links = [link.Link.make_link('self', url, 'ports', port_uuid), link.Link.make_link('bookmark', url, 'ports', port_uuid, bookmark=True) ] return port @classmethod def convert_with_links(cls, rpc_port, fields=None): port = Port(**rpc_port.as_dict()) if fields is not None: api_utils.check_for_invalid_fields(fields, port.as_dict()) return cls._convert_with_links(port, pecan.request.public_url, fields=fields) @classmethod def sample(cls, expand=True): sample = cls(uuid='27e3153e-d5bf-4b7e-b517-fb518e17f34c', address='fe:54:00:77:07:d9', extra={'foo': 'bar'}, created_at=datetime.datetime.utcnow(), updated_at=datetime.datetime.utcnow()) # NOTE(lucasagomes): node_uuid getter() method look at the # _node_uuid variable sample._node_uuid = '7ae81bb3-dec3-4289-8d6c-da80bd8001ae' fields = None if expand else _DEFAULT_RETURN_FIELDS return cls._convert_with_links(sample, 'http://localhost:6385', fields=fields) class PortPatchType(types.JsonPatchType): _api_base = Port class PortCollection(collection.Collection): """API representation of a collection of ports.""" ports = [Port] """A list containing ports objects""" def __init__(self, **kwargs): self._type = 'ports' @staticmethod def convert_with_links(rpc_ports, limit, url=None, fields=None, **kwargs): collection = PortCollection() collection.ports = [Port.convert_with_links(p, fields=fields) for p in rpc_ports] collection.next = collection.get_next(limit, url=url, **kwargs) return collection @classmethod def sample(cls): sample = cls() sample.ports = [Port.sample(expand=False)] return sample class PortsController(rest.RestController): """REST controller for Ports.""" from_nodes = False """A flag to indicate if the requests to this controller are coming from the top-level resource Nodes.""" _custom_actions = { 'detail': ['GET'], } invalid_sort_key_list = ['extra'] def _get_ports_collection(self, node_ident, address, marker, limit, sort_key, sort_dir, resource_url=None, fields=None): if self.from_nodes and not node_ident: raise exception.MissingParameterValue( _("Node identifier not specified.")) limit = api_utils.validate_limit(limit) sort_dir = api_utils.validate_sort_dir(sort_dir) marker_obj = None if marker: marker_obj = objects.Port.get_by_uuid(pecan.request.context, marker) if sort_key in self.invalid_sort_key_list: raise exception.InvalidParameterValue( _("The sort_key value %(key)s is an invalid field for " "sorting") % {'key': sort_key}) if node_ident: # FIXME(comstud): Since all we need is the node ID, we can # make this more efficient by only querying # for that column. This will get cleaned up # as we move to the object interface. 
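            # Illustrative paging flow (URL invented for example's sake):
            # GET /v1/ports?limit=2 returns at most two ports plus a
            # 'next' link whose marker is the UUID of the last returned
            # port; clients repeat the request with that marker until no
            # 'next' link is present.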
node = api_utils.get_rpc_node(node_ident) ports = objects.Port.list_by_node_id(pecan.request.context, node.id, limit, marker_obj, sort_key=sort_key, sort_dir=sort_dir) elif address: ports = self._get_ports_by_address(address) else: ports = objects.Port.list(pecan.request.context, limit, marker_obj, sort_key=sort_key, sort_dir=sort_dir) return PortCollection.convert_with_links(ports, limit, url=resource_url, fields=fields, sort_key=sort_key, sort_dir=sort_dir) def _get_ports_by_address(self, address): """Retrieve a port by its address. :param address: MAC address of a port, to get the port which has this MAC address. :returns: a list with the port, or an empty list if no port is found. """ try: port = objects.Port.get_by_address(pecan.request.context, address) return [port] except exception.PortNotFound: return [] @expose.expose(PortCollection, types.uuid_or_name, types.uuid, types.macaddress, types.uuid, int, wtypes.text, wtypes.text, types.listtype) def get_all(self, node=None, node_uuid=None, address=None, marker=None, limit=None, sort_key='id', sort_dir='asc', fields=None): """Retrieve a list of ports. Note that the 'node_uuid' interface is deprecated in favour of the 'node' interface :param node: UUID or name of a node, to get only ports for that node. :param node_uuid: UUID of a node, to get only ports for that node. :param address: MAC address of a port, to get the port which has this MAC address. :param marker: pagination marker for large data sets. :param limit: maximum number of resources to return in a single result. :param sort_key: column to sort results by. Default: id. :param sort_dir: direction to sort. "asc" or "desc". Default: asc. :param fields: Optional, a list with a specified set of fields of the resource to be returned. """ api_utils.check_allow_specify_fields(fields) if fields is None: fields = _DEFAULT_RETURN_FIELDS if not node_uuid and node: # We're invoking this interface using positional notation, or # explicitly using 'node'. Try and determine which one. # Make sure only one interface, node or node_uuid is used if (not api_utils.allow_node_logical_names() and not uuidutils.is_uuid_like(node)): raise exception.NotAcceptable() return self._get_ports_collection(node_uuid or node, address, marker, limit, sort_key, sort_dir, fields=fields) @expose.expose(PortCollection, types.uuid_or_name, types.uuid, types.macaddress, types.uuid, int, wtypes.text, wtypes.text) def detail(self, node=None, node_uuid=None, address=None, marker=None, limit=None, sort_key='id', sort_dir='asc'): """Retrieve a list of ports with detail. Note that the 'node_uuid' interface is deprecated in favour of the 'node' interface :param node: UUID or name of a node, to get only ports for that node. :param node_uuid: UUID of a node, to get only ports for that node. :param address: MAC address of a port, to get the port which has this MAC address. :param marker: pagination marker for large data sets. :param limit: maximum number of resources to return in a single result. :param sort_key: column to sort results by. Default: id. :param sort_dir: direction to sort. "asc" or "desc". Default: asc. """ if not node_uuid and node: # We're invoking this interface using positional notation, or # explicitly using 'node'. Try and determine which one. 
# Make sure only one interface, node or node_uuid is used if (not api_utils.allow_node_logical_names() and not uuidutils.is_uuid_like(node)): raise exception.NotAcceptable() # NOTE(lucasagomes): /detail should only work against collections parent = pecan.request.path.split('/')[:-1][-1] if parent != "ports": raise exception.HTTPNotFound resource_url = '/'.join(['ports', 'detail']) return self._get_ports_collection(node_uuid or node, address, marker, limit, sort_key, sort_dir, resource_url) @expose.expose(Port, types.uuid, types.listtype) def get_one(self, port_uuid, fields=None): """Retrieve information about the given port. :param port_uuid: UUID of a port. :param fields: Optional, a list with a specified set of fields of the resource to be returned. """ if self.from_nodes: raise exception.OperationNotPermitted api_utils.check_allow_specify_fields(fields) rpc_port = objects.Port.get_by_uuid(pecan.request.context, port_uuid) return Port.convert_with_links(rpc_port, fields=fields) @expose.expose(Port, body=Port, status_code=http_client.CREATED) def post(self, port): """Create a new port. :param port: a port within the request body. """ if self.from_nodes: raise exception.OperationNotPermitted new_port = objects.Port(pecan.request.context, **port.as_dict()) new_port.create() # Set the HTTP Location Header pecan.response.location = link.build_url('ports', new_port.uuid) return Port.convert_with_links(new_port) @wsme.validate(types.uuid, [PortPatchType]) @expose.expose(Port, types.uuid, body=[PortPatchType]) def patch(self, port_uuid, patch): """Update an existing port. :param port_uuid: UUID of a port. :param patch: a json PATCH document to apply to this port. """ if self.from_nodes: raise exception.OperationNotPermitted rpc_port = objects.Port.get_by_uuid(pecan.request.context, port_uuid) try: port_dict = rpc_port.as_dict() # NOTE(lucasagomes): # 1) Remove node_id because it's an internal value and # not present in the API object # 2) Add node_uuid port_dict['node_uuid'] = port_dict.pop('node_id', None) port = Port(**api_utils.apply_jsonpatch(port_dict, patch)) except api_utils.JSONPATCH_EXCEPTIONS as e: raise exception.PatchError(patch=patch, reason=e) # Update only the fields that have changed for field in objects.Port.fields: try: patch_val = getattr(port, field) except AttributeError: # Ignore fields that aren't exposed in the API continue if patch_val == wtypes.Unset: patch_val = None if rpc_port[field] != patch_val: rpc_port[field] = patch_val rpc_node = objects.Node.get_by_id(pecan.request.context, rpc_port.node_id) topic = pecan.request.rpcapi.get_topic_for(rpc_node) new_port = pecan.request.rpcapi.update_port( pecan.request.context, rpc_port, topic) return Port.convert_with_links(new_port) @expose.expose(None, types.uuid, status_code=http_client.NO_CONTENT) def delete(self, port_uuid): """Delete a port. :param port_uuid: UUID of a port. """ if self.from_nodes: raise exception.OperationNotPermitted rpc_port = objects.Port.get_by_uuid(pecan.request.context, port_uuid) rpc_node = objects.Node.get_by_id(pecan.request.context, rpc_port.node_id) topic = pecan.request.rpcapi.get_topic_for(rpc_node) pecan.request.rpcapi.destroy_port(pecan.request.context, rpc_port, topic) ironic-5.1.0/ironic/api/controllers/v1/driver.py0000664000567000056710000002352112674513466022757 0ustar jenkinsjenkins00000000000000# Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pecan from pecan import rest from six.moves import http_client import wsme from wsme import types as wtypes from ironic.api.controllers import base from ironic.api.controllers import link from ironic.api.controllers.v1 import types from ironic.api.controllers.v1 import utils as api_utils from ironic.api import expose from ironic.common import exception # Property information for drivers: # key = driver name; # value = dictionary of properties of that driver: # key = property name. # value = description of the property. # NOTE(rloo). This is cached for the lifetime of the API service. If one or # more conductor services are restarted with new driver versions, the API # service should be restarted. _DRIVER_PROPERTIES = {} # Vendor information for drivers: # key = driver name; # value = dictionary of vendor methods of that driver: # key = method name. # value = dictionary with the metadata of that method. # NOTE(lucasagomes). This is cached for the lifetime of the API # service. If one or more conductor services are restarted with new driver # versions, the API service should be restarted. _VENDOR_METHODS = {} # RAID (logical disk) configuration information for drivers: # key = driver name; # value = dictionary of RAID configuration information of that driver: # key = property name. # value = description of the property # NOTE(rloo). This is cached for the lifetime of the API service. If one or # more conductor services are restarted with new driver versions, the API # service should be restarted. _RAID_PROPERTIES = {} class Driver(base.APIBase): """API representation of a driver.""" name = wtypes.text """The name of the driver""" hosts = [wtypes.text] """A list of active conductors that support this driver""" links = wsme.wsattr([link.Link], readonly=True) """A list containing self and bookmark links""" properties = wsme.wsattr([link.Link], readonly=True) """A list containing links to driver properties""" @staticmethod def convert_with_links(name, hosts): driver = Driver() driver.name = name driver.hosts = hosts driver.links = [ link.Link.make_link('self', pecan.request.public_url, 'drivers', name), link.Link.make_link('bookmark', pecan.request.public_url, 'drivers', name, bookmark=True) ] if api_utils.allow_links_node_states_and_driver_properties(): driver.properties = [ link.Link.make_link('self', pecan.request.public_url, 'drivers', name + "/properties"), link.Link.make_link('bookmark', pecan.request.public_url, 'drivers', name + "/properties", bookmark=True) ] return driver @classmethod def sample(cls): sample = cls(name="sample-driver", hosts=["fake-host"]) return sample class DriverList(base.APIBase): """API representation of a list of drivers.""" drivers = [Driver] """A list containing drivers objects""" @staticmethod def convert_with_links(drivers): collection = DriverList() collection.drivers = [ Driver.convert_with_links(dname, list(drivers[dname])) for dname in drivers] return collection @classmethod def sample(cls): sample = cls() sample.drivers = [Driver.sample()] return sample class DriverPassthruController(rest.RestController): """REST controller for driver passthru. 
This controller allows vendors to expose cross-node functionality in the Ironic API. Ironic will merely relay the message from here to the specified driver; no introspection will be made of the message body. """ _custom_actions = { 'methods': ['GET'] } @expose.expose(wtypes.text, wtypes.text) def methods(self, driver_name): """Retrieve information about vendor methods of the given driver. :param driver_name: name of the driver. :returns: dictionary with <method name>:<method metadata> entries. :raises: DriverNotFound if the driver name is invalid or the driver cannot be loaded. """ if driver_name not in _VENDOR_METHODS: topic = pecan.request.rpcapi.get_topic_for_driver(driver_name) ret = pecan.request.rpcapi.get_driver_vendor_passthru_methods( pecan.request.context, driver_name, topic=topic) _VENDOR_METHODS[driver_name] = ret return _VENDOR_METHODS[driver_name] @expose.expose(wtypes.text, wtypes.text, wtypes.text, body=wtypes.text) def _default(self, driver_name, method, data=None): """Call a driver API extension. :param driver_name: name of the driver to call. :param method: name of the method, to be passed to the vendor implementation. :param data: body of data to supply to the specified method. """ topic = pecan.request.rpcapi.get_topic_for_driver(driver_name) return api_utils.vendor_passthru(driver_name, method, topic, data=data, driver_passthru=True) class DriverRaidController(rest.RestController): _custom_actions = { 'logical_disk_properties': ['GET'] } @expose.expose(types.jsontype, wtypes.text) def logical_disk_properties(self, driver_name): """Returns the logical disk properties for the driver. :param driver_name: Name of the driver. :returns: A dictionary containing the properties that can be mentioned for logical disks and a textual description for them. :raises: UnsupportedDriverExtension if the driver doesn't support RAID configuration. :raises: NotAcceptable, if requested version of the API is less than 1.12. :raises: DriverNotFound, if driver is not loaded on any of the conductors. """ if not api_utils.allow_raid_config(): raise exception.NotAcceptable() if driver_name not in _RAID_PROPERTIES: topic = pecan.request.rpcapi.get_topic_for_driver(driver_name) try: info = pecan.request.rpcapi.get_raid_logical_disk_properties( pecan.request.context, driver_name, topic=topic) except exception.UnsupportedDriverExtension as e: # Change error code as 404 seems appropriate because RAID is a # standard interface and not all drivers might have it. e.code = http_client.NOT_FOUND raise _RAID_PROPERTIES[driver_name] = info return _RAID_PROPERTIES[driver_name] class DriversController(rest.RestController): """REST controller for Drivers.""" vendor_passthru = DriverPassthruController() raid = DriverRaidController() """Expose RAID as a sub-element of drivers""" _custom_actions = { 'properties': ['GET'], } @expose.expose(DriverList) def get_all(self): """Retrieve a list of drivers.""" # FIXME(deva): formatting of the auto-generated REST API docs # will break from a single-line doc string. 
# This is a result of a bug in sphinxcontrib-pecanwsme # https://github.com/dreamhost/sphinxcontrib-pecanwsme/issues/8 driver_list = pecan.request.dbapi.get_active_driver_dict() return DriverList.convert_with_links(driver_list) @expose.expose(Driver, wtypes.text) def get_one(self, driver_name): """Retrieve a single driver.""" # NOTE(russell_h): There is no way to make this more efficient than # retrieving a list of drivers using the current sqlalchemy schema, but # this path must be exposed for Pecan to route any paths we might # choose to expose below it. driver_dict = pecan.request.dbapi.get_active_driver_dict() for name, hosts in driver_dict.items(): if name == driver_name: return Driver.convert_with_links(name, list(hosts)) raise exception.DriverNotFound(driver_name=driver_name) @expose.expose(wtypes.text, wtypes.text) def properties(self, driver_name): """Retrieve property information of the given driver. :param driver_name: name of the driver. :returns: dictionary with <property name>:<property description> entries. :raises: DriverNotFound (HTTP 404) if the driver name is invalid or the driver cannot be loaded. """ if driver_name not in _DRIVER_PROPERTIES: topic = pecan.request.rpcapi.get_topic_for_driver(driver_name) properties = pecan.request.rpcapi.get_driver_properties( pecan.request.context, driver_name, topic=topic) _DRIVER_PROPERTIES[driver_name] = properties return _DRIVER_PROPERTIES[driver_name] ironic-5.1.0/ironic/api/controllers/v1/state.py0000664000567000056710000000207312674513466022603 0ustar jenkinsjenkins00000000000000# Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from wsme import types as wtypes from ironic.api.controllers import base from ironic.api.controllers import link class State(base.APIBase): current = wtypes.text """The current state""" target = wtypes.text """The user-modified desired state""" available = [wtypes.text] """A list of available states it is able to transition to""" links = [link.Link] """A list containing a self link and associated state links""" ironic-5.1.0/ironic/api/controllers/v1/types.py0000664000567000056710000001674012674513466022625 0ustar jenkinsjenkins00000000000000# coding: utf-8 # # Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
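# Illustrative behaviour of the user types defined below (inputs invented
# for example's sake): MacAddressType.validate('AA:BB:CC:DD:EE:0F')
# normalizes to 'aa:bb:cc:dd:ee:0f'; BooleanType.validate('t') returns
# True, while an unparseable value raises exception.Invalid, which the
# API surfaces as 400 (BadRequest).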
import inspect import json from oslo_utils import strutils from oslo_utils import uuidutils import six import wsme from wsme import types as wtypes from ironic.api.controllers.v1 import utils as v1_utils from ironic.common import exception from ironic.common.i18n import _ from ironic.common import utils class MacAddressType(wtypes.UserType): """A simple MAC address type.""" basetype = wtypes.text name = 'macaddress' @staticmethod def validate(value): return utils.validate_and_normalize_mac(value) @staticmethod def frombasetype(value): if value is None: return None return MacAddressType.validate(value) class UuidOrNameType(wtypes.UserType): """A simple UUID or logical name type.""" basetype = wtypes.text name = 'uuid_or_name' @staticmethod def validate(value): if not (uuidutils.is_uuid_like(value) or v1_utils.is_valid_logical_name(value)): raise exception.InvalidUuidOrName(name=value) return value @staticmethod def frombasetype(value): if value is None: return None return UuidOrNameType.validate(value) class NameType(wtypes.UserType): """A simple logical name type.""" basetype = wtypes.text name = 'name' @staticmethod def validate(value): if not v1_utils.is_valid_logical_name(value): raise exception.InvalidName(name=value) return value @staticmethod def frombasetype(value): if value is None: return None return NameType.validate(value) class UuidType(wtypes.UserType): """A simple UUID type.""" basetype = wtypes.text name = 'uuid' @staticmethod def validate(value): if not uuidutils.is_uuid_like(value): raise exception.InvalidUUID(uuid=value) return value @staticmethod def frombasetype(value): if value is None: return None return UuidType.validate(value) class BooleanType(wtypes.UserType): """A simple boolean type.""" basetype = wtypes.text name = 'boolean' @staticmethod def validate(value): try: return strutils.bool_from_string(value, strict=True) except ValueError as e: # raise Invalid to return 400 (BadRequest) in the API raise exception.Invalid(e) @staticmethod def frombasetype(value): if value is None: return None return BooleanType.validate(value) class JsonType(wtypes.UserType): """A simple JSON type.""" basetype = wtypes.text name = 'json' def __str__(self): # These are the json serializable native types return ' | '.join(map(str, (wtypes.text, six.integer_types, float, BooleanType, list, dict, None))) @staticmethod def validate(value): try: json.dumps(value) except TypeError: raise exception.Invalid(_('%s is not JSON serializable') % value) else: return value @staticmethod def frombasetype(value): return JsonType.validate(value) class ListType(wtypes.UserType): """A simple list type.""" basetype = wtypes.text name = 'list' @staticmethod def validate(value): """Validate and convert the input to a ListType. :param value: A comma separated string of values :returns: A list of unique values, whose order is not guaranteed. 
""" items = [v.strip().lower() for v in six.text_type(value).split(',')] # filter() to remove empty items # set() to remove duplicated items return list(set(filter(None, items))) @staticmethod def frombasetype(value): if value is None: return None return ListType.validate(value) macaddress = MacAddressType() uuid_or_name = UuidOrNameType() name = NameType() uuid = UuidType() boolean = BooleanType() listtype = ListType() # Can't call it 'json' because that's the name of the stdlib module jsontype = JsonType() class JsonPatchType(wtypes.Base): """A complex type that represents a single json-patch operation.""" path = wtypes.wsattr(wtypes.StringType(pattern='^(/[\w-]+)+$'), mandatory=True) op = wtypes.wsattr(wtypes.Enum(str, 'add', 'replace', 'remove'), mandatory=True) value = wsme.wsattr(jsontype, default=wtypes.Unset) # The class of the objects being patched. Override this in subclasses. # Should probably be a subclass of ironic.api.controllers.base.APIBase. _api_base = None # Attributes that are not required for construction, but which may not be # removed if set. Override in subclasses if needed. _extra_non_removable_attrs = set() # Set of non-removable attributes, calculated lazily. _non_removable_attrs = None @staticmethod def internal_attrs(): """Returns a list of internal attributes. Internal attributes can't be added, replaced or removed. This method may be overwritten by derived class. """ return ['/created_at', '/id', '/links', '/updated_at', '/uuid'] @classmethod def non_removable_attrs(cls): """Returns a set of names of attributes that may not be removed. Attributes whose 'mandatory' property is True are automatically added to this set. To add additional attributes to the set, override the field _extra_non_removable_attrs in subclasses, with a set of the form {'/foo', '/bar'}. """ if cls._non_removable_attrs is None: cls._non_removable_attrs = cls._extra_non_removable_attrs.copy() if cls._api_base: fields = inspect.getmembers(cls._api_base, lambda a: not inspect.isroutine(a)) for name, field in fields: if getattr(field, 'mandatory', False): cls._non_removable_attrs.add('/%s' % name) return cls._non_removable_attrs @staticmethod def validate(patch): _path = '/' + patch.path.split('/')[1] if _path in patch.internal_attrs(): msg = _("'%s' is an internal attribute and can not be updated") raise wsme.exc.ClientSideError(msg % patch.path) if patch.path in patch.non_removable_attrs() and patch.op == 'remove': msg = _("'%s' is a mandatory attribute and can not be removed") raise wsme.exc.ClientSideError(msg % patch.path) if patch.op != 'remove': if patch.value is wsme.Unset: msg = _("'add' and 'replace' operations need a value") raise wsme.exc.ClientSideError(msg) ret = {'path': patch.path, 'op': patch.op} if patch.value is not wsme.Unset: ret['value'] = patch.value return ret ironic-5.1.0/ironic/api/controllers/v1/versions.py0000664000567000056710000000574712674513466023346 0ustar jenkinsjenkins00000000000000# Copyright (c) 2015 Intel Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. # This is the version 1 API BASE_VERSION = 1
# Here goes a short log of changes in every version. # Refer to doc/source/webapi/v1.rst for a detailed explanation of what # each version contains. # # v1.0: corresponds to Juno API, not supported since Kilo # v1.1: API at the point in time when versioning support was added, # covers the following commits from Kilo cycle: # 827db7fe: Add Node.maintenance_reason # 68eed82b: Add API endpoint to set/unset the node maintenance mode # bc973889: Add sync and async support for passthru methods # e03f443b: Vendor endpoints to support different HTTP methods # e69e5309: Make vendor methods discoverable via the Ironic API # edf532db: Add logic to store the config drive passed by Nova # v1.2: Renamed NOSTATE ("None") to AVAILABLE ("available") # v1.3: Add node.driver_internal_info # v1.4: Add MANAGEABLE state # v1.5: Add logical node names # v1.6: Add INSPECT* states # v1.7: Add node.clean_step # v1.8: Add ability to return a subset of resource fields # v1.9: Add ability to filter nodes by provision state # v1.10: Logical node names support RFC 3986 unreserved characters # v1.11: Nodes appear in ENROLL state by default # v1.12: Add support for RAID # v1.13: Add 'abort' verb to CLEANWAIT # v1.14: Make the following endpoints discoverable via API: # 1. '/v1/nodes/<uuid>/states' # 2. '/v1/drivers/<driver_name>/properties' # v1.15: Add ability to do manual cleaning of nodes # v1.16: Add ability to filter nodes by driver.
MINOR_0_JUNO = 0 MINOR_1_INITIAL_VERSION = 1 MINOR_2_AVAILABLE_STATE = 2 MINOR_3_DRIVER_INTERNAL_INFO = 3 MINOR_4_MANAGEABLE_STATE = 4 MINOR_5_NODE_NAME = 5 MINOR_6_INSPECT_STATE = 6 MINOR_7_NODE_CLEAN = 7 MINOR_8_FETCHING_SUBSET_OF_FIELDS = 8 MINOR_9_PROVISION_STATE_FILTER = 9 MINOR_10_UNRESTRICTED_NODE_NAME = 10 MINOR_11_ENROLL_STATE = 11 MINOR_12_RAID_CONFIG = 12 MINOR_13_ABORT_VERB = 13 MINOR_14_LINKS_NODESTATES_DRIVERPROPERTIES = 14 MINOR_15_MANUAL_CLEAN = 15 MINOR_16_DRIVER_FILTER = 16
# When adding another version, update MINOR_MAX_VERSION and also update # doc/source/webapi/v1.rst with a detailed explanation of what has changed # in the new version. MINOR_MAX_VERSION = MINOR_16_DRIVER_FILTER
# String representations of the minimum and maximum versions MIN_VERSION_STRING = '{}.{}'.format(BASE_VERSION, MINOR_1_INITIAL_VERSION) MAX_VERSION_STRING = '{}.{}'.format(BASE_VERSION, MINOR_MAX_VERSION)
ironic-5.1.0/ironic/api/controllers/v1/node.py0000664000567000056710000015567712674513466022423 0ustar jenkinsjenkins00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
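# NOTE: An illustrative sketch (assumed usage, not from the original source)
# of how the constants in versions.py above are consumed: clients pin a
# microversion with the X-OpenStack-Ironic-API-Version header, which the API
# checks against the supported range, e.g.:
#
#     curl -H "X-OpenStack-Ironic-API-Version: 1.16" \
#          http://ironic.example.com:6385/v1/nodes
#
# With BASE_VERSION = 1 and MINOR_MAX_VERSION = 16, MIN_VERSION_STRING and
# MAX_VERSION_STRING evaluate to '1.1' and '1.16'; versions outside that
# range are rejected with 406 (Not Acceptable).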
import ast import datetime import jsonschema from oslo_config import cfg from oslo_log import log from oslo_utils import strutils from oslo_utils import uuidutils import pecan from pecan import rest from six.moves import http_client import wsme from wsme import types as wtypes from ironic.api.controllers import base from ironic.api.controllers import link from ironic.api.controllers.v1 import collection from ironic.api.controllers.v1 import port from ironic.api.controllers.v1 import types from ironic.api.controllers.v1 import utils as api_utils from ironic.api.controllers.v1 import versions from ironic.api import expose from ironic.common import exception from ironic.common.i18n import _ from ironic.common import states as ir_states from ironic.conductor import utils as conductor_utils from ironic import objects CONF = cfg.CONF CONF.import_opt('heartbeat_timeout', 'ironic.conductor.manager', group='conductor') LOG = log.getLogger(__name__) _CLEAN_STEPS_SCHEMA = { "$schema": "http://json-schema.org/schema#", "title": "Clean steps schema", "type": "array", # list of clean steps "items": { "type": "object", # args is optional "required": ["interface", "step"], "properties": { "interface": { "description": "driver interface", "enum": list(conductor_utils.CLEANING_INTERFACE_PRIORITY) # interface value must be one of the valid interfaces }, "step": { "description": "name of clean step", "type": "string", "minLength": 1 }, "args": { "description": "additional args", "type": "object", "properties": {} }, }, # interface, step and args are the only expected keys "additionalProperties": False } } # Vendor information for node's driver: # key = driver name; # value = dictionary of node vendor methods of that driver: # key = method name. # value = dictionary with the metadata of that method. # NOTE(lucasagomes). This is cached for the lifetime of the API # service. If one or more conductor services are restarted with new driver # versions, the API service should be restarted. _VENDOR_METHODS = {} _DEFAULT_RETURN_FIELDS = ('instance_uuid', 'maintenance', 'power_state', 'provision_state', 'uuid', 'name') # States where calling do_provisioning_action makes sense PROVISION_ACTION_STATES = (ir_states.VERBS['manage'], ir_states.VERBS['provide'], ir_states.VERBS['abort']) def hide_fields_in_newer_versions(obj): # if requested version is < 1.3, hide driver_internal_info if pecan.request.version.minor < versions.MINOR_3_DRIVER_INTERNAL_INFO: obj.driver_internal_info = wsme.Unset if not api_utils.allow_node_logical_names(): obj.name = wsme.Unset # if requested version is < 1.6, hide inspection_*_at fields if pecan.request.version.minor < versions.MINOR_6_INSPECT_STATE: obj.inspection_finished_at = wsme.Unset obj.inspection_started_at = wsme.Unset if pecan.request.version.minor < versions.MINOR_7_NODE_CLEAN: obj.clean_step = wsme.Unset if pecan.request.version.minor < versions.MINOR_12_RAID_CONFIG: obj.raid_config = wsme.Unset obj.target_raid_config = wsme.Unset def assert_juno_provision_state_name(obj): # if requested version is < 1.2, convert AVAILABLE to the old NOSTATE if (pecan.request.version.minor < versions.MINOR_2_AVAILABLE_STATE and obj.provision_state == ir_states.AVAILABLE): obj.provision_state = ir_states.NOSTATE class BootDeviceController(rest.RestController): _custom_actions = { 'supported': ['GET'], } def _get_boot_device(self, node_ident, supported=False): """Get the current boot device or a list of supported devices. :param node_ident: the UUID or logical name of a node. 
:param supported: Boolean value. If true, return a list of supported boot devices; if false, return the current boot device. Default: False. :returns: The current boot device or a list of the supported boot devices. """ rpc_node = api_utils.get_rpc_node(node_ident) topic = pecan.request.rpcapi.get_topic_for(rpc_node) if supported: return pecan.request.rpcapi.get_supported_boot_devices( pecan.request.context, rpc_node.uuid, topic) else: return pecan.request.rpcapi.get_boot_device(pecan.request.context, rpc_node.uuid, topic)
@expose.expose(None, types.uuid_or_name, wtypes.text, types.boolean, status_code=http_client.NO_CONTENT) def put(self, node_ident, boot_device, persistent=False): """Set the boot device for a node. Set the boot device to use on next reboot of the node. :param node_ident: the UUID or logical name of a node. :param boot_device: the boot device, one of :mod:`ironic.common.boot_devices`. :param persistent: Boolean value. True if the boot device will persist to all future boots, False if not. Default: False. """ rpc_node = api_utils.get_rpc_node(node_ident) topic = pecan.request.rpcapi.get_topic_for(rpc_node) pecan.request.rpcapi.set_boot_device(pecan.request.context, rpc_node.uuid, boot_device, persistent=persistent, topic=topic)
@expose.expose(wtypes.text, types.uuid_or_name) def get(self, node_ident): """Get the current boot device for a node. :param node_ident: the UUID or logical name of a node. :returns: a json object containing: :boot_device: the boot device, one of :mod:`ironic.common.boot_devices` or None if it is unknown. :persistent: Whether the boot device will persist to all future boots or not, None if it is unknown. """ return self._get_boot_device(node_ident)
@expose.expose(wtypes.text, types.uuid_or_name) def supported(self, node_ident): """Get a list of the supported boot devices. :param node_ident: the UUID or logical name of a node. :returns: A json object with the list of supported boot devices. """ boot_devices = self._get_boot_device(node_ident, supported=True) return {'supported_boot_devices': boot_devices}
class NodeManagementController(rest.RestController): boot_device = BootDeviceController() """Expose boot_device as a sub-element of management"""
class ConsoleInfo(base.APIBase): """API representation of the console information for a node.""" console_enabled = types.boolean """The console state: if the console is enabled or not.""" console_info = {wtypes.text: types.jsontype} """The console information. It typically includes the url to access the console and the type of the application that hosts the console.""" @classmethod def sample(cls): console = {'type': 'shellinabox', 'url': 'http://<hostname>:4201'} return cls(console_enabled=True, console_info=console)
class NodeConsoleController(rest.RestController): @expose.expose(ConsoleInfo, types.uuid_or_name) def get(self, node_ident): """Get connection information about the console. :param node_ident: UUID or logical name of a node. """ rpc_node = api_utils.get_rpc_node(node_ident) topic = pecan.request.rpcapi.get_topic_for(rpc_node) try: console = pecan.request.rpcapi.get_console_information( pecan.request.context, rpc_node.uuid, topic) console_state = True except exception.NodeConsoleNotEnabled: console = None console_state = False return ConsoleInfo(console_enabled=console_state, console_info=console)
@expose.expose(None, types.uuid_or_name, types.boolean, status_code=http_client.ACCEPTED) def put(self, node_ident, enabled): """Start and stop the node console.
:param node_ident: UUID or logical name of a node. :param enabled: Boolean value; whether to enable or disable the console. """ rpc_node = api_utils.get_rpc_node(node_ident) topic = pecan.request.rpcapi.get_topic_for(rpc_node) pecan.request.rpcapi.set_console_mode(pecan.request.context, rpc_node.uuid, enabled, topic) # Set the HTTP Location Header url_args = '/'.join([node_ident, 'states', 'console']) pecan.response.location = link.build_url('nodes', url_args) class NodeStates(base.APIBase): """API representation of the states of a node.""" console_enabled = types.boolean """Indicates whether the console access is enabled or disabled on the node.""" power_state = wtypes.text """Represent the current (not transition) power state of the node""" provision_state = wtypes.text """Represent the current (not transition) provision state of the node""" provision_updated_at = datetime.datetime """The UTC date and time of the last provision state change""" target_power_state = wtypes.text """The user modified desired power state of the node.""" target_provision_state = wtypes.text """The user modified desired provision state of the node.""" last_error = wtypes.text """Any error from the most recent (last) asynchronous transaction that started but failed to finish.""" raid_config = wsme.wsattr({wtypes.text: types.jsontype}, readonly=True) """Represents the RAID configuration that the node is configured with.""" target_raid_config = wsme.wsattr({wtypes.text: types.jsontype}, readonly=True) """The desired RAID configuration, to be used the next time the node is configured.""" @staticmethod def convert(rpc_node): attr_list = ['console_enabled', 'last_error', 'power_state', 'provision_state', 'target_power_state', 'target_provision_state', 'provision_updated_at'] if api_utils.allow_raid_config(): attr_list.extend(['raid_config', 'target_raid_config']) states = NodeStates() for attr in attr_list: setattr(states, attr, getattr(rpc_node, attr)) assert_juno_provision_state_name(states) return states @classmethod def sample(cls): sample = cls(target_power_state=ir_states.POWER_ON, target_provision_state=ir_states.ACTIVE, last_error=None, console_enabled=False, provision_updated_at=None, power_state=ir_states.POWER_ON, provision_state=None, raid_config=None, target_raid_config=None) return sample class NodeStatesController(rest.RestController): _custom_actions = { 'power': ['PUT'], 'provision': ['PUT'], 'raid': ['PUT'], } console = NodeConsoleController() """Expose console as a sub-element of states""" @expose.expose(NodeStates, types.uuid_or_name) def get(self, node_ident): """List the states of the node. :param node_ident: the UUID or logical_name of a node. """ # NOTE(lucasagomes): All these state values come from the # DB. Ironic counts with a periodic task that verify the current # power states of the nodes and update the DB accordingly. rpc_node = api_utils.get_rpc_node(node_ident) return NodeStates.convert(rpc_node) @expose.expose(None, types.uuid_or_name, body=types.jsontype) def raid(self, node_ident, target_raid_config): """Set the target raid config of the node. :param node_ident: the UUID or logical name of a node. :param target_raid_config: Desired target RAID configuration of the node. It may be an empty dictionary as well. :raises: UnsupportedDriverExtension, if the node's driver doesn't support RAID configuration. :raises: InvalidParameterValue, if validation of target raid config fails. :raises: NotAcceptable, if requested version of the API is less than 1.12. 
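For example, a minimal target RAID configuration might look like (illustrative only; the exact properties accepted depend on the driver):: { 'logical_disks': [{'size_gb': 100, 'raid_level': '1', 'is_root_volume': True}] }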
""" if not api_utils.allow_raid_config(): raise exception.NotAcceptable() rpc_node = api_utils.get_rpc_node(node_ident) topic = pecan.request.rpcapi.get_topic_for(rpc_node) try: pecan.request.rpcapi.set_target_raid_config( pecan.request.context, rpc_node.uuid, target_raid_config, topic=topic) except exception.UnsupportedDriverExtension as e: # Change error code as 404 seems appropriate because RAID is a # standard interface and all drivers might not have it. e.code = http_client.NOT_FOUND raise e @expose.expose(None, types.uuid_or_name, wtypes.text, status_code=http_client.ACCEPTED) def power(self, node_ident, target): """Set the power state of the node. :param node_ident: the UUID or logical name of a node. :param target: The desired power state of the node. :raises: ClientSideError (HTTP 409) if a power operation is already in progress. :raises: InvalidStateRequested (HTTP 400) if the requested target state is not valid or if the node is in CLEANING state. """ # TODO(lucasagomes): Test if it's able to transition to the # target state from the current one rpc_node = api_utils.get_rpc_node(node_ident) topic = pecan.request.rpcapi.get_topic_for(rpc_node) if target not in [ir_states.POWER_ON, ir_states.POWER_OFF, ir_states.REBOOT]: raise exception.InvalidStateRequested( action=target, node=node_ident, state=rpc_node.power_state) # Don't change power state for nodes being cleaned elif rpc_node.provision_state in (ir_states.CLEANWAIT, ir_states.CLEANING): raise exception.InvalidStateRequested( action=target, node=node_ident, state=rpc_node.provision_state) pecan.request.rpcapi.change_node_power_state(pecan.request.context, rpc_node.uuid, target, topic) # Set the HTTP Location Header url_args = '/'.join([node_ident, 'states']) pecan.response.location = link.build_url('nodes', url_args) @expose.expose(None, types.uuid_or_name, wtypes.text, wtypes.text, types.jsontype, status_code=http_client.ACCEPTED) def provision(self, node_ident, target, configdrive=None, clean_steps=None): """Asynchronous trigger the provisioning of the node. This will set the target provision state of the node, and a background task will begin which actually applies the state change. This call will return a 202 (Accepted) indicating the request was accepted and is in progress; the client should continue to GET the status of this node to observe the status of the requested action. :param node_ident: UUID or logical name of a node. :param target: The desired provision state of the node or verb. :param configdrive: Optional. A gzipped and base64 encoded configdrive. Only valid when setting provision state to "active". :param clean_steps: An ordered list of cleaning steps that will be performed on the node. A cleaning step is a dictionary with required keys 'interface' and 'step', and optional key 'args'. If specified, the value for 'args' is a keyword variable argument dictionary that is passed to the cleaning step method.:: { 'interface': , 'step': , 'args': {: , ..., : } } For example (this isn't a real example, this cleaning step doesn't exist):: { 'interface': 'deploy', 'step': 'upgrade_firmware', 'args': {'force': True} } This is required (and only valid) when target is "clean". :raises: NodeLocked (HTTP 409) if the node is currently locked. :raises: ClientSideError (HTTP 409) if the node is already being provisioned. :raises: InvalidParameterValue (HTTP 400), if validation of clean_steps or power driver interface fails. 
:raises: InvalidStateRequested (HTTP 400) if the requested transition is not possible from the current state. :raises: NodeInMaintenance (HTTP 400), if operation cannot be performed because the node is in maintenance mode. :raises: NoFreeConductorWorker (HTTP 503) if no workers are available. :raises: NotAcceptable (HTTP 406) if the API version specified does not allow the requested state transition. """ api_utils.check_allow_management_verbs(target) rpc_node = api_utils.get_rpc_node(node_ident) topic = pecan.request.rpcapi.get_topic_for(rpc_node) if (target in (ir_states.ACTIVE, ir_states.REBUILD) and rpc_node.maintenance): raise exception.NodeInMaintenance(op=_('provisioning'), node=rpc_node.uuid) m = ir_states.machine.copy() m.initialize(rpc_node.provision_state) if not m.is_actionable_event(ir_states.VERBS.get(target, target)): # Normally, we let the task manager recognize and deal with # NodeLocked exceptions. However, that isn't done until the RPC # calls below. # In order to maintain backward compatibility with our API HTTP # response codes, we have this check here to deal with cases where # a node is already being operated on (DEPLOYING or such) and we # want to continue returning 409. Without it, we'd return 400. if rpc_node.reservation: raise exception.NodeLocked(node=rpc_node.uuid, host=rpc_node.reservation) raise exception.InvalidStateRequested( action=target, node=rpc_node.uuid, state=rpc_node.provision_state) if configdrive and target != ir_states.ACTIVE: msg = (_('Adding a config drive is only supported when setting ' 'provision state to %s') % ir_states.ACTIVE) raise wsme.exc.ClientSideError( msg, status_code=http_client.BAD_REQUEST) if clean_steps and target != ir_states.VERBS['clean']: msg = (_('"clean_steps" is only valid when setting target ' 'provision state to %s') % ir_states.VERBS['clean']) raise wsme.exc.ClientSideError( msg, status_code=http_client.BAD_REQUEST) # Note that there is a race condition. The node state(s) could change # by the time the RPC call is made and the TaskManager gets a lock. if target == ir_states.ACTIVE: pecan.request.rpcapi.do_node_deploy(pecan.request.context, rpc_node.uuid, False, configdrive, topic) elif target == ir_states.REBUILD: pecan.request.rpcapi.do_node_deploy(pecan.request.context, rpc_node.uuid, True, None, topic) elif target == ir_states.DELETED: pecan.request.rpcapi.do_node_tear_down( pecan.request.context, rpc_node.uuid, topic) elif target == ir_states.VERBS['inspect']: pecan.request.rpcapi.inspect_hardware( pecan.request.context, rpc_node.uuid, topic=topic) elif target == ir_states.VERBS['clean']: if not clean_steps: msg = (_('"clean_steps" is required when setting target ' 'provision state to %s') % ir_states.VERBS['clean']) raise wsme.exc.ClientSideError( msg, status_code=http_client.BAD_REQUEST) _check_clean_steps(clean_steps) pecan.request.rpcapi.do_node_clean( pecan.request.context, rpc_node.uuid, clean_steps, topic) elif target in PROVISION_ACTION_STATES: pecan.request.rpcapi.do_provisioning_action( pecan.request.context, rpc_node.uuid, target, topic) else: msg = (_('The requested action "%(action)s" could not be ' 'understood.') % {'action': target}) raise exception.InvalidStateRequested(message=msg) # Set the HTTP Location Header url_args = '/'.join([node_ident, 'states']) pecan.response.location = link.build_url('nodes', url_args)
def _check_clean_steps(clean_steps): """Ensure all necessary keys are present and correct in clean steps.
Check that the user-specified clean steps are in the expected format and include the required information. :param clean_steps: a list of clean steps. For more details, see the clean_steps parameter of :func:`NodeStatesController.provision`. :raises: InvalidParameterValue if validation of clean steps fails. """ try: jsonschema.validate(clean_steps, _CLEAN_STEPS_SCHEMA) except jsonschema.ValidationError as exc: raise exception.InvalidParameterValue(_('Invalid clean_steps: %s') % exc) class Node(base.APIBase): """API representation of a bare metal node. This class enforces type checking and value constraints, and converts between the internal object model and the API representation of a node. """ _chassis_uuid = None def _get_chassis_uuid(self): return self._chassis_uuid def _set_chassis_uuid(self, value): if value and self._chassis_uuid != value: try: chassis = objects.Chassis.get(pecan.request.context, value) self._chassis_uuid = chassis.uuid # NOTE(lucasagomes): Create the chassis_id attribute on-the-fly # to satisfy the api -> rpc object # conversion. self.chassis_id = chassis.id except exception.ChassisNotFound as e: # Change error code because 404 (NotFound) is inappropriate # response for a POST request to create a Port e.code = http_client.BAD_REQUEST raise e elif value == wtypes.Unset: self._chassis_uuid = wtypes.Unset uuid = types.uuid """Unique UUID for this node""" instance_uuid = types.uuid """The UUID of the instance in nova-compute""" name = wsme.wsattr(wtypes.text) """The logical name for this node""" power_state = wsme.wsattr(wtypes.text, readonly=True) """Represent the current (not transition) power state of the node""" target_power_state = wsme.wsattr(wtypes.text, readonly=True) """The user modified desired power state of the node.""" last_error = wsme.wsattr(wtypes.text, readonly=True) """Any error from the most recent (last) asynchronous transaction that started but failed to finish.""" provision_state = wsme.wsattr(wtypes.text, readonly=True) """Represent the current (not transition) provision state of the node""" reservation = wsme.wsattr(wtypes.text, readonly=True) """The hostname of the conductor that holds an exclusive lock on the node.""" provision_updated_at = datetime.datetime """The UTC date and time of the last provision state change""" inspection_finished_at = datetime.datetime """The UTC date and time when the last hardware inspection finished successfully.""" inspection_started_at = datetime.datetime """The UTC date and time when the hardware inspection was started""" maintenance = types.boolean """Indicates whether the node is in maintenance mode.""" maintenance_reason = wsme.wsattr(wtypes.text, readonly=True) """Indicates reason for putting a node in maintenance mode.""" target_provision_state = wsme.wsattr(wtypes.text, readonly=True) """The user modified desired provision state of the node.""" console_enabled = types.boolean """Indicates whether the console access is enabled or disabled on the node.""" instance_info = {wtypes.text: types.jsontype} """This node's instance info.""" driver = wsme.wsattr(wtypes.text, mandatory=True) """The driver responsible for controlling the node""" driver_info = {wtypes.text: types.jsontype} """This node's driver configuration""" driver_internal_info = wsme.wsattr({wtypes.text: types.jsontype}, readonly=True) """This driver's internal configuration""" clean_step = wsme.wsattr({wtypes.text: types.jsontype}, readonly=True) """The current clean step""" raid_config = wsme.wsattr({wtypes.text: types.jsontype}, readonly=True) 
"""Represents the current RAID configuration of the node """ target_raid_config = wsme.wsattr({wtypes.text: types.jsontype}, readonly=True) """The user modified RAID configuration of the node """ extra = {wtypes.text: types.jsontype} """This node's meta data""" # NOTE: properties should use a class to enforce required properties # current list: arch, cpus, disk, ram, image properties = {wtypes.text: types.jsontype} """The physical characteristics of this node""" chassis_uuid = wsme.wsproperty(types.uuid, _get_chassis_uuid, _set_chassis_uuid) """The UUID of the chassis this node belongs""" links = wsme.wsattr([link.Link], readonly=True) """A list containing a self link and associated node links""" ports = wsme.wsattr([link.Link], readonly=True) """Links to the collection of ports on this node""" states = wsme.wsattr([link.Link], readonly=True) """Links to endpoint for retrieving and setting node states""" # NOTE(deva): "conductor_affinity" shouldn't be presented on the # API because it's an internal value. Don't add it here. def __init__(self, **kwargs): self.fields = [] fields = list(objects.Node.fields) # NOTE(lucasagomes): chassis_uuid is not part of objects.Node.fields # because it's an API-only attribute. fields.append('chassis_uuid') for k in fields: # Add fields we expose. if hasattr(self, k): self.fields.append(k) setattr(self, k, kwargs.get(k, wtypes.Unset)) # NOTE(lucasagomes): chassis_id is an attribute created on-the-fly # by _set_chassis_uuid(), it needs to be present in the fields so # that as_dict() will contain chassis_id field when converting it # before saving it in the database. self.fields.append('chassis_id') setattr(self, 'chassis_uuid', kwargs.get('chassis_id', wtypes.Unset)) @staticmethod def _convert_with_links(node, url, fields=None, show_password=True, show_states_links=True): # NOTE(lucasagomes): Since we are able to return a specified set of # fields the "uuid" can be unset, so we need to save it in another # variable to use when building the links node_uuid = node.uuid if fields is not None: node.unset_fields_except(fields) else: node.ports = [link.Link.make_link('self', url, 'nodes', node_uuid + "/ports"), link.Link.make_link('bookmark', url, 'nodes', node_uuid + "/ports", bookmark=True) ] if show_states_links: node.states = [link.Link.make_link('self', url, 'nodes', node_uuid + "/states"), link.Link.make_link('bookmark', url, 'nodes', node_uuid + "/states", bookmark=True)] if not show_password and node.driver_info != wtypes.Unset: node.driver_info = ast.literal_eval(strutils.mask_password( node.driver_info, "******")) # NOTE(lucasagomes): The numeric ID should not be exposed to # the user, it's internal only. 
node.chassis_id = wtypes.Unset node.links = [link.Link.make_link('self', url, 'nodes', node_uuid), link.Link.make_link('bookmark', url, 'nodes', node_uuid, bookmark=True) ] return node
@classmethod def convert_with_links(cls, rpc_node, fields=None): node = Node(**rpc_node.as_dict()) if fields is not None: api_utils.check_for_invalid_fields(fields, node.as_dict()) assert_juno_provision_state_name(node) hide_fields_in_newer_versions(node) show_password = pecan.request.context.show_password show_states_links = ( api_utils.allow_links_node_states_and_driver_properties()) return cls._convert_with_links(node, pecan.request.public_url, fields=fields, show_password=show_password, show_states_links=show_states_links)
@classmethod def sample(cls, expand=True): time = datetime.datetime(2000, 1, 1, 12, 0, 0) node_uuid = '1be26c0b-03f2-4d2e-ae87-c02d7f33c123' instance_uuid = 'dcf1fbc5-93fc-4596-9395-b80572f6267b' name = 'database16-dc02' sample = cls(uuid=node_uuid, instance_uuid=instance_uuid, name=name, power_state=ir_states.POWER_ON, target_power_state=ir_states.NOSTATE, last_error=None, provision_state=ir_states.ACTIVE, target_provision_state=ir_states.NOSTATE, reservation=None, driver='fake', driver_info={}, driver_internal_info={}, extra={}, properties={ 'memory_mb': '1024', 'local_gb': '10', 'cpus': '1'}, updated_at=time, created_at=time, provision_updated_at=time, instance_info={}, maintenance=False, maintenance_reason=None, inspection_finished_at=None, inspection_started_at=time, console_enabled=False, clean_step={}, raid_config=None, target_raid_config=None) # NOTE(matty_dubs): The chassis_uuid getter() is based on the # _chassis_uuid variable: sample._chassis_uuid = 'edcad704-b2da-41d5-96d9-afd580ecfa12' fields = None if expand else _DEFAULT_RETURN_FIELDS return cls._convert_with_links(sample, 'http://localhost:6385', fields=fields)
class NodePatchType(types.JsonPatchType): _api_base = Node _extra_non_removable_attrs = {'/chassis_uuid'} @staticmethod def internal_attrs(): defaults = types.JsonPatchType.internal_attrs() # TODO(lucasagomes): Include maintenance once the endpoint # v1/nodes/<uuid>/maintenance does more things than updating the DB. return defaults + ['/console_enabled', '/last_error', '/power_state', '/provision_state', '/reservation', '/target_power_state', '/target_provision_state', '/provision_updated_at', '/maintenance_reason', '/driver_internal_info', '/inspection_finished_at', '/inspection_started_at', '/clean_step', '/raid_config', '/target_raid_config']
class NodeCollection(collection.Collection): """API representation of a collection of nodes.""" nodes = [Node] """A list containing node objects""" def __init__(self, **kwargs): self._type = 'nodes' @staticmethod def convert_with_links(nodes, limit, url=None, fields=None, **kwargs): collection = NodeCollection() collection.nodes = [Node.convert_with_links(n, fields=fields) for n in nodes] collection.next = collection.get_next(limit, url=url, **kwargs) return collection @classmethod def sample(cls): sample = cls() node = Node.sample(expand=False) sample.nodes = [node] return sample
class NodeVendorPassthruController(rest.RestController): """REST controller for VendorPassthru. This controller allows vendors to expose a custom functionality in the Ironic API. Ironic will merely relay the message from here to the appropriate driver; no introspection will be made in the message body.
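Available methods are discovered via GET /v1/nodes/{node_ident}/vendor_passthru/methods and invoked by name, e.g. POST /v1/nodes/{node_ident}/vendor_passthru?method=some_method (the method name here is purely illustrative; the available names are driver-specific).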
""" _custom_actions = { 'methods': ['GET'] } @expose.expose(wtypes.text, types.uuid_or_name) def methods(self, node_ident): """Retrieve information about vendor methods of the given node. :param node_ident: UUID or logical name of a node. :returns: dictionary with : entries. :raises: NodeNotFound if the node is not found. """ # Raise an exception if node is not found rpc_node = api_utils.get_rpc_node(node_ident) if rpc_node.driver not in _VENDOR_METHODS: topic = pecan.request.rpcapi.get_topic_for(rpc_node) ret = pecan.request.rpcapi.get_node_vendor_passthru_methods( pecan.request.context, rpc_node.uuid, topic=topic) _VENDOR_METHODS[rpc_node.driver] = ret return _VENDOR_METHODS[rpc_node.driver] @expose.expose(wtypes.text, types.uuid_or_name, wtypes.text, body=wtypes.text) def _default(self, node_ident, method, data=None): """Call a vendor extension. :param node_ident: UUID or logical name of a node. :param method: name of the method in vendor driver. :param data: body of data to supply to the specified method. """ # Raise an exception if node is not found rpc_node = api_utils.get_rpc_node(node_ident) topic = pecan.request.rpcapi.get_topic_for(rpc_node) return api_utils.vendor_passthru(rpc_node.uuid, method, topic, data=data) class NodeMaintenanceController(rest.RestController): def _set_maintenance(self, node_ident, maintenance_mode, reason=None): rpc_node = api_utils.get_rpc_node(node_ident) rpc_node.maintenance = maintenance_mode rpc_node.maintenance_reason = reason try: topic = pecan.request.rpcapi.get_topic_for(rpc_node) except exception.NoValidHost as e: e.code = http_client.BAD_REQUEST raise e pecan.request.rpcapi.update_node(pecan.request.context, rpc_node, topic=topic) @expose.expose(None, types.uuid_or_name, wtypes.text, status_code=http_client.ACCEPTED) def put(self, node_ident, reason=None): """Put the node in maintenance mode. :param node_ident: the UUID or logical_name of a node. :param reason: Optional, the reason why it's in maintenance. """ self._set_maintenance(node_ident, True, reason=reason) @expose.expose(None, types.uuid_or_name, status_code=http_client.ACCEPTED) def delete(self, node_ident): """Remove the node from maintenance mode. :param node_ident: the UUID or logical name of a node. 
""" self._set_maintenance(node_ident, False) class NodesController(rest.RestController): """REST controller for Nodes.""" states = NodeStatesController() """Expose the state controller action as a sub-element of nodes""" vendor_passthru = NodeVendorPassthruController() """A resource used for vendors to expose a custom functionality in the API""" ports = port.PortsController() """Expose ports as a sub-element of nodes""" management = NodeManagementController() """Expose management as a sub-element of nodes""" maintenance = NodeMaintenanceController() """Expose maintenance as a sub-element of nodes""" # Set the flag to indicate that the requests to this resource are # coming from a top-level resource ports.from_nodes = True from_chassis = False """A flag to indicate if the requests to this controller are coming from the top-level resource Chassis""" _custom_actions = { 'detail': ['GET'], 'validate': ['GET'], } invalid_sort_key_list = ['properties', 'driver_info', 'extra', 'instance_info', 'driver_internal_info', 'clean_step', 'raid_config', 'target_raid_config'] def _get_nodes_collection(self, chassis_uuid, instance_uuid, associated, maintenance, provision_state, marker, limit, sort_key, sort_dir, driver=None, resource_url=None, fields=None): if self.from_chassis and not chassis_uuid: raise exception.MissingParameterValue( _("Chassis id not specified.")) limit = api_utils.validate_limit(limit) sort_dir = api_utils.validate_sort_dir(sort_dir) marker_obj = None if marker: marker_obj = objects.Node.get_by_uuid(pecan.request.context, marker) if sort_key in self.invalid_sort_key_list: raise exception.InvalidParameterValue( _("The sort_key value %(key)s is an invalid field for " "sorting") % {'key': sort_key}) if instance_uuid: nodes = self._get_nodes_by_instance(instance_uuid) else: filters = {} if chassis_uuid: filters['chassis_uuid'] = chassis_uuid if associated is not None: filters['associated'] = associated if maintenance is not None: filters['maintenance'] = maintenance if provision_state: filters['provision_state'] = provision_state if driver: filters['driver'] = driver nodes = objects.Node.list(pecan.request.context, limit, marker_obj, sort_key=sort_key, sort_dir=sort_dir, filters=filters) parameters = {'sort_key': sort_key, 'sort_dir': sort_dir} if associated: parameters['associated'] = associated if maintenance: parameters['maintenance'] = maintenance return NodeCollection.convert_with_links(nodes, limit, url=resource_url, fields=fields, **parameters) def _get_nodes_by_instance(self, instance_uuid): """Retrieve a node by its instance uuid. It returns a list with the node, or an empty list if no node is found. """ try: node = objects.Node.get_by_instance_uuid(pecan.request.context, instance_uuid) return [node] except exception.InstanceNotFound: return [] def _check_name_acceptable(self, name, error_msg): """Checks if a node 'name' is acceptable, it does not return a value. This function will raise an exception for unacceptable names. :param name: node name :param error_msg: error message in case of wsme.exc.ClientSideError :raises: exception.NotAcceptable :raises: wsme.exc.ClientSideError """ if not api_utils.allow_node_logical_names(): raise exception.NotAcceptable() if not api_utils.is_valid_node_name(name): raise wsme.exc.ClientSideError( error_msg, status_code=http_client.BAD_REQUEST) def _update_changed_fields(self, node, rpc_node): """Update rpc_node based on changed fields in a node. 
""" for field in objects.Node.fields: try: patch_val = getattr(node, field) except AttributeError: # Ignore fields that aren't exposed in the API continue if patch_val == wtypes.Unset: patch_val = None if rpc_node[field] != patch_val: rpc_node[field] = patch_val def _check_driver_changed_and_console_enabled(self, rpc_node, node_ident): """Checks if the driver and the console is enabled in a node. If it does, is necessary to prevent updating it because the new driver will not be able to stop a console started by the previous one. :param rpc_node: RPC Node object to be veryfied. :param node_ident: the UUID or logical name of a node. :raises: wsme.exc.ClientSideError """ delta = rpc_node.obj_what_changed() if 'driver' in delta and rpc_node.console_enabled: raise wsme.exc.ClientSideError( _("Node %s can not update the driver while the console is " "enabled. Please stop the console first.") % node_ident, status_code=http_client.CONFLICT) @expose.expose(NodeCollection, types.uuid, types.uuid, types.boolean, types.boolean, wtypes.text, types.uuid, int, wtypes.text, wtypes.text, wtypes.text, types.listtype) def get_all(self, chassis_uuid=None, instance_uuid=None, associated=None, maintenance=None, provision_state=None, marker=None, limit=None, sort_key='id', sort_dir='asc', driver=None, fields=None): """Retrieve a list of nodes. :param chassis_uuid: Optional UUID of a chassis, to get only nodes for that chassis. :param instance_uuid: Optional UUID of an instance, to find the node associated with that instance. :param associated: Optional boolean whether to return a list of associated or unassociated nodes. May be combined with other parameters. :param maintenance: Optional boolean value that indicates whether to get nodes in maintenance mode ("True"), or not in maintenance mode ("False"). :param provision_state: Optional string value to get only nodes in that provision state. :param marker: pagination marker for large data sets. :param limit: maximum number of resources to return in a single result. :param sort_key: column to sort results by. Default: id. :param sort_dir: direction to sort. "asc" or "desc". Default: asc. :param driver: Optional string value to get only nodes using that driver. :param fields: Optional, a list with a specified set of fields of the resource to be returned. """ api_utils.check_allow_specify_fields(fields) api_utils.check_for_invalid_state_and_allow_filter(provision_state) api_utils.check_allow_specify_driver(driver) if fields is None: fields = _DEFAULT_RETURN_FIELDS return self._get_nodes_collection(chassis_uuid, instance_uuid, associated, maintenance, provision_state, marker, limit, sort_key, sort_dir, driver, fields=fields) @expose.expose(NodeCollection, types.uuid, types.uuid, types.boolean, types.boolean, wtypes.text, types.uuid, int, wtypes.text, wtypes.text, wtypes.text) def detail(self, chassis_uuid=None, instance_uuid=None, associated=None, maintenance=None, provision_state=None, marker=None, limit=None, sort_key='id', sort_dir='asc', driver=None): """Retrieve a list of nodes with detail. :param chassis_uuid: Optional UUID of a chassis, to get only nodes for that chassis. :param instance_uuid: Optional UUID of an instance, to find the node associated with that instance. :param associated: Optional boolean whether to return a list of associated or unassociated nodes. May be combined with other parameters. :param maintenance: Optional boolean value that indicates whether to get nodes in maintenance mode ("True"), or not in maintenance mode ("False"). 
:param provision_state: Optional string value to get only nodes in that provision state. :param marker: pagination marker for large data sets. :param limit: maximum number of resources to return in a single result. :param sort_key: column to sort results by. Default: id. :param sort_dir: direction to sort. "asc" or "desc". Default: asc. :param driver: Optional string value to get only nodes using that driver. """ api_utils.check_for_invalid_state_and_allow_filter(provision_state) api_utils.check_allow_specify_driver(driver) # /detail should only work against collections parent = pecan.request.path.split('/')[:-1][-1] if parent != "nodes": raise exception.HTTPNotFound resource_url = '/'.join(['nodes', 'detail']) return self._get_nodes_collection(chassis_uuid, instance_uuid, associated, maintenance, provision_state, marker, limit, sort_key, sort_dir, driver, resource_url) @expose.expose(wtypes.text, types.uuid_or_name, types.uuid) def validate(self, node=None, node_uuid=None): """Validate the driver interfaces, using the node's UUID or name. Note that the 'node_uuid' interface is deprecated in favour of the 'node' interface :param node: UUID or name of a node. :param node_uuid: UUID of a node. """ if node is not None: # We're invoking this interface using positional notation, or # explicitly using 'node'. Try and determine which one. if (not api_utils.allow_node_logical_names() and not uuidutils.is_uuid_like(node)): raise exception.NotAcceptable() rpc_node = api_utils.get_rpc_node(node_uuid or node) topic = pecan.request.rpcapi.get_topic_for(rpc_node) return pecan.request.rpcapi.validate_driver_interfaces( pecan.request.context, rpc_node.uuid, topic) @expose.expose(Node, types.uuid_or_name, types.listtype) def get_one(self, node_ident, fields=None): """Retrieve information about the given node. :param node_ident: UUID or logical name of a node. :param fields: Optional, a list with a specified set of fields of the resource to be returned. """ if self.from_chassis: raise exception.OperationNotPermitted api_utils.check_allow_specify_fields(fields) rpc_node = api_utils.get_rpc_node(node_ident) return Node.convert_with_links(rpc_node, fields=fields) @expose.expose(Node, body=Node, status_code=http_client.CREATED) def post(self, node): """Create a new node. :param node: a node within the request body. """ if self.from_chassis: raise exception.OperationNotPermitted # NOTE(deva): get_topic_for checks if node.driver is in the hash ring # and raises NoValidHost if it is not. # We need to ensure that node has a UUID before it can # be mapped onto the hash ring. if not node.uuid: node.uuid = uuidutils.generate_uuid() try: pecan.request.rpcapi.get_topic_for(node) except exception.NoValidHost as e: # NOTE(deva): convert from 404 to 400 because client can see # list of available drivers and shouldn't request # one that doesn't exist. e.code = http_client.BAD_REQUEST raise e if node.name: error_msg = _("Cannot create node with invalid name " "%(name)s") % {'name': node.name} self._check_name_acceptable(node.name, error_msg) node.provision_state = api_utils.initial_node_provision_state() new_node = objects.Node(pecan.request.context, **node.as_dict()) new_node.create() # Set the HTTP Location Header pecan.response.location = link.build_url('nodes', new_node.uuid) return Node.convert_with_links(new_node) @wsme.validate(types.uuid, [NodePatchType]) @expose.expose(Node, types.uuid_or_name, body=[NodePatchType]) def patch(self, node_ident, patch): """Update an existing node. 
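For example, a patch document that renames a node could look like (the value shown is illustrative):: [{'path': '/name', 'op': 'replace', 'value': 'db-node-01'}]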
:param node_ident: UUID or logical name of a node. :param patch: a json PATCH document to apply to this node. """ if self.from_chassis: raise exception.OperationNotPermitted rpc_node = api_utils.get_rpc_node(node_ident) # TODO(lucasagomes): This code is here for backward compatibility # with old nova Ironic drivers that will attempt to remove the # instance even if it's already deleted in Ironic. This conditional # should be removed in the next cycle (Mitaka). remove_inst_uuid_patch = [{'op': 'remove', 'path': '/instance_uuid'}] if (rpc_node.provision_state in (ir_states.CLEANING, ir_states.CLEANWAIT) and patch == remove_inst_uuid_patch): # The instance_uuid is already removed as part of the node's # tear down, skip this update. return Node.convert_with_links(rpc_node) elif rpc_node.maintenance and patch == remove_inst_uuid_patch: LOG.debug('Removing instance uuid %(instance)s from node %(node)s', {'instance': rpc_node.instance_uuid, 'node': rpc_node.uuid}) # Check if node is transitioning state, although nodes in some states # can be updated. elif (rpc_node.target_provision_state and rpc_node.provision_state not in ir_states.UPDATE_ALLOWED_STATES): msg = _("Node %s can not be updated while a state transition " "is in progress.") raise wsme.exc.ClientSideError( msg % node_ident, status_code=http_client.CONFLICT) name = api_utils.get_patch_value(patch, '/name') if name: error_msg = _("Node %(node)s: Cannot change name to invalid " "name '%(name)s'") % {'node': node_ident, 'name': name} self._check_name_acceptable(name, error_msg) try: node_dict = rpc_node.as_dict() # NOTE(lucasagomes): # 1) Remove chassis_id because it's an internal value and # not present in the API object # 2) Add chassis_uuid node_dict['chassis_uuid'] = node_dict.pop('chassis_id', None) node = Node(**api_utils.apply_jsonpatch(node_dict, patch)) except api_utils.JSONPATCH_EXCEPTIONS as e: raise exception.PatchError(patch=patch, reason=e) self._update_changed_fields(node, rpc_node) # NOTE(deva): we calculate the rpc topic here in case node.driver # has changed, so that update is sent to the # new conductor, not the old one which may fail to # load the new driver. try: topic = pecan.request.rpcapi.get_topic_for(rpc_node) except exception.NoValidHost as e: # NOTE(deva): convert from 404 to 400 because client can see # list of available drivers and shouldn't request # one that doesn't exist. e.code = http_client.BAD_REQUEST raise e self._check_driver_changed_and_console_enabled(rpc_node, node_ident) new_node = pecan.request.rpcapi.update_node( pecan.request.context, rpc_node, topic) return Node.convert_with_links(new_node) @expose.expose(None, types.uuid_or_name, status_code=http_client.NO_CONTENT) def delete(self, node_ident): """Delete a node. :param node_ident: UUID or logical name of a node. """ if self.from_chassis: raise exception.OperationNotPermitted rpc_node = api_utils.get_rpc_node(node_ident) try: topic = pecan.request.rpcapi.get_topic_for(rpc_node) except exception.NoValidHost as e: e.code = http_client.BAD_REQUEST raise e pecan.request.rpcapi.destroy_node(pecan.request.context, rpc_node.uuid, topic) ironic-5.1.0/ironic/api/controllers/v1/__init__.py0000664000567000056710000001553312674513466023227 0ustar jenkinsjenkins00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Version 1 of the Ironic API NOTE: IN PROGRESS AND NOT FULLY IMPLEMENTED. Should maintain feature parity with Nova Baremetal Extension. Specification can be found at ironic/doc/api/v1.rst """ import pecan from pecan import rest from webob import exc from wsme import types as wtypes from ironic.api.controllers import base from ironic.api.controllers import link from ironic.api.controllers.v1 import chassis from ironic.api.controllers.v1 import driver from ironic.api.controllers.v1 import node from ironic.api.controllers.v1 import port from ironic.api.controllers.v1 import versions from ironic.api import expose from ironic.common.i18n import _ BASE_VERSION = versions.BASE_VERSION MIN_VER = base.Version( {base.Version.string: versions.MIN_VERSION_STRING}, versions.MIN_VERSION_STRING, versions.MAX_VERSION_STRING) MAX_VER = base.Version( {base.Version.string: versions.MAX_VERSION_STRING}, versions.MIN_VERSION_STRING, versions.MAX_VERSION_STRING) class MediaType(base.APIBase): """A media type representation.""" base = wtypes.text type = wtypes.text def __init__(self, base, type): self.base = base self.type = type class V1(base.APIBase): """The representation of the version 1 of the API.""" id = wtypes.text """The ID of the version, also acts as the release number""" media_types = [MediaType] """An array of supported media types for this version""" links = [link.Link] """Links that point to a specific URL for this version and documentation""" chassis = [link.Link] """Links to the chassis resource""" nodes = [link.Link] """Links to the nodes resource""" ports = [link.Link] """Links to the ports resource""" drivers = [link.Link] """Links to the drivers resource""" @staticmethod def convert(): v1 = V1() v1.id = "v1" v1.links = [link.Link.make_link('self', pecan.request.public_url, 'v1', '', bookmark=True), link.Link.make_link('describedby', 'http://docs.openstack.org', 'developer/ironic/dev', 'api-spec-v1.html', bookmark=True, type='text/html') ] v1.media_types = [MediaType('application/json', 'application/vnd.openstack.ironic.v1+json')] v1.chassis = [link.Link.make_link('self', pecan.request.public_url, 'chassis', ''), link.Link.make_link('bookmark', pecan.request.public_url, 'chassis', '', bookmark=True) ] v1.nodes = [link.Link.make_link('self', pecan.request.public_url, 'nodes', ''), link.Link.make_link('bookmark', pecan.request.public_url, 'nodes', '', bookmark=True) ] v1.ports = [link.Link.make_link('self', pecan.request.public_url, 'ports', ''), link.Link.make_link('bookmark', pecan.request.public_url, 'ports', '', bookmark=True) ] v1.drivers = [link.Link.make_link('self', pecan.request.public_url, 'drivers', ''), link.Link.make_link('bookmark', pecan.request.public_url, 'drivers', '', bookmark=True) ] return v1 class Controller(rest.RestController): """Version 1 API controller root.""" nodes = node.NodesController() ports = port.PortsController() chassis = chassis.ChassisController() drivers = driver.DriversController() @expose.expose(V1) def get(self): # NOTE: The reason why convert() it's being called for every # request is because we need to get the host url from # the request object to 
make the links. return V1.convert() def _check_version(self, version, headers=None): if headers is None: headers = {} # ensure that major version in the URL matches the header if version.major != BASE_VERSION: raise exc.HTTPNotAcceptable(_( "Mutually exclusive versions requested. Version %(ver)s " "requested but not supported by this service. The supported " "version range is: [%(min)s, %(max)s].") % {'ver': version, 'min': versions.MIN_VERSION_STRING, 'max': versions.MAX_VERSION_STRING}, headers=headers) # ensure the minor version is within the supported range if version < MIN_VER or version > MAX_VER: raise exc.HTTPNotAcceptable(_( "Version %(ver)s was requested but the minor version is not " "supported by this service. The supported version range is: " "[%(min)s, %(max)s].") % {'ver': version, 'min': versions.MIN_VERSION_STRING, 'max': versions.MAX_VERSION_STRING}, headers=headers) @pecan.expose() def _route(self, args): v = base.Version(pecan.request.headers, versions.MIN_VERSION_STRING, versions.MAX_VERSION_STRING) # Always set the min and max headers pecan.response.headers[base.Version.min_string] = ( versions.MIN_VERSION_STRING) pecan.response.headers[base.Version.max_string] = ( versions.MAX_VERSION_STRING) # assert that requested version is supported self._check_version(v, pecan.response.headers) pecan.response.headers[base.Version.string] = str(v) pecan.request.version = v return super(Controller, self)._route(args) __all__ = (Controller) ironic-5.1.0/ironic/api/controllers/v1/collection.py0000664000567000056710000000326612674513466023623 0ustar jenkinsjenkins00000000000000# Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pecan from wsme import types as wtypes from ironic.api.controllers import base from ironic.api.controllers import link class Collection(base.APIBase): next = wtypes.text """A link to retrieve the next subset of the collection""" @property def collection(self): return getattr(self, self._type) def has_next(self, limit): """Return whether collection has more items.""" return len(self.collection) and len(self.collection) == limit def get_next(self, limit, url=None, **kwargs): """Return a link to the next subset of the collection.""" if not self.has_next(limit): return wtypes.Unset resource_url = url or self._type q_args = ''.join(['%s=%s&' % (key, kwargs[key]) for key in kwargs]) next_args = '?%(args)slimit=%(limit)d&marker=%(marker)s' % { 'args': q_args, 'limit': limit, 'marker': self.collection[-1].uuid} return link.Link.make_link('next', pecan.request.public_url, resource_url, next_args).href ironic-5.1.0/ironic/api/controllers/v1/chassis.py0000664000567000056710000002662112674513470023120 0ustar jenkinsjenkins00000000000000# Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import pecan from pecan import rest from six.moves import http_client import wsme from wsme import types as wtypes from ironic.api.controllers import base from ironic.api.controllers import link from ironic.api.controllers.v1 import collection from ironic.api.controllers.v1 import node from ironic.api.controllers.v1 import types from ironic.api.controllers.v1 import utils as api_utils from ironic.api import expose from ironic.common import exception from ironic.common.i18n import _ from ironic import objects _DEFAULT_RETURN_FIELDS = ('uuid', 'description') class Chassis(base.APIBase): """API representation of a chassis. This class enforces type checking and value constraints, and converts between the internal object model and the API representation of a chassis. """ uuid = types.uuid """The UUID of the chassis""" description = wtypes.text """The description of the chassis""" extra = {wtypes.text: types.jsontype} """The metadata of the chassis""" links = wsme.wsattr([link.Link], readonly=True) """A list containing a self link and associated chassis links""" nodes = wsme.wsattr([link.Link], readonly=True) """Links to the collection of nodes contained in this chassis""" def __init__(self, **kwargs): self.fields = [] for field in objects.Chassis.fields: # Skip fields we do not expose. if not hasattr(self, field): continue self.fields.append(field) setattr(self, field, kwargs.get(field, wtypes.Unset)) @staticmethod def _convert_with_links(chassis, url, fields=None): # NOTE(lucasagomes): Since we are able to return a specified set of # fields the "uuid" can be unset, so we need to save it in another # variable to use when building the links chassis_uuid = chassis.uuid if fields is not None: chassis.unset_fields_except(fields) else: chassis.nodes = [link.Link.make_link('self', url, 'chassis', chassis_uuid + "/nodes"), link.Link.make_link('bookmark', url, 'chassis', chassis_uuid + "/nodes", bookmark=True) ] chassis.links = [link.Link.make_link('self', url, 'chassis', chassis_uuid), link.Link.make_link('bookmark', url, 'chassis', chassis_uuid, bookmark=True) ] return chassis @classmethod def convert_with_links(cls, rpc_chassis, fields=None): chassis = Chassis(**rpc_chassis.as_dict()) if fields is not None: api_utils.check_for_invalid_fields(fields, chassis.as_dict()) return cls._convert_with_links(chassis, pecan.request.public_url, fields) @classmethod def sample(cls, expand=True): time = datetime.datetime(2000, 1, 1, 12, 0, 0) sample = cls(uuid='eaaca217-e7d8-47b4-bb41-3f99f20eed89', extra={}, description='Sample chassis', created_at=time, updated_at=time) fields = None if expand else _DEFAULT_RETURN_FIELDS return cls._convert_with_links(sample, 'http://localhost:6385', fields=fields) class ChassisPatchType(types.JsonPatchType): _api_base = Chassis class ChassisCollection(collection.Collection): """API representation of a collection of chassis.""" chassis = [Chassis] """A list containing chassis objects""" def __init__(self, **kwargs): self._type = 'chassis' @staticmethod def convert_with_links(chassis, limit, url=None, fields=None, **kwargs): collection = ChassisCollection() 
collection.chassis = [Chassis.convert_with_links(ch, fields=fields) for ch in chassis] url = url or None collection.next = collection.get_next(limit, url=url, **kwargs) return collection @classmethod def sample(cls): # FIXME(jroll) hack for docs build, bug #1560508 if not hasattr(objects, 'Chassis'): objects.register_all() sample = cls() sample.chassis = [Chassis.sample(expand=False)] return sample class ChassisController(rest.RestController): """REST controller for Chassis.""" nodes = node.NodesController() """Expose nodes as a sub-element of chassis""" # Set the flag to indicate that the requests to this resource are # coming from a top-level resource nodes.from_chassis = True _custom_actions = { 'detail': ['GET'], } invalid_sort_key_list = ['extra'] def _get_chassis_collection(self, marker, limit, sort_key, sort_dir, resource_url=None, fields=None): limit = api_utils.validate_limit(limit) sort_dir = api_utils.validate_sort_dir(sort_dir) marker_obj = None if marker: marker_obj = objects.Chassis.get_by_uuid(pecan.request.context, marker) if sort_key in self.invalid_sort_key_list: raise exception.InvalidParameterValue( _("The sort_key value %(key)s is an invalid field for sorting") % {'key': sort_key}) chassis = objects.Chassis.list(pecan.request.context, limit, marker_obj, sort_key=sort_key, sort_dir=sort_dir) return ChassisCollection.convert_with_links(chassis, limit, url=resource_url, fields=fields, sort_key=sort_key, sort_dir=sort_dir) @expose.expose(ChassisCollection, types.uuid, int, wtypes.text, wtypes.text, types.listtype) def get_all(self, marker=None, limit=None, sort_key='id', sort_dir='asc', fields=None): """Retrieve a list of chassis. :param marker: pagination marker for large data sets. :param limit: maximum number of resources to return in a single result. :param sort_key: column to sort results by. Default: id. :param sort_dir: direction to sort. "asc" or "desc". Default: asc. :param fields: Optional, a list with a specified set of fields of the resource to be returned. """ api_utils.check_allow_specify_fields(fields) if fields is None: fields = _DEFAULT_RETURN_FIELDS return self._get_chassis_collection(marker, limit, sort_key, sort_dir, fields=fields) @expose.expose(ChassisCollection, types.uuid, int, wtypes.text, wtypes.text) def detail(self, marker=None, limit=None, sort_key='id', sort_dir='asc'): """Retrieve a list of chassis with detail. :param marker: pagination marker for large data sets. :param limit: maximum number of resources to return in a single result. :param sort_key: column to sort results by. Default: id. :param sort_dir: direction to sort. "asc" or "desc". Default: asc. """ # /detail should only work against collections parent = pecan.request.path.split('/')[:-1][-1] if parent != "chassis": raise exception.HTTPNotFound resource_url = '/'.join(['chassis', 'detail']) return self._get_chassis_collection(marker, limit, sort_key, sort_dir, resource_url) @expose.expose(Chassis, types.uuid, types.listtype) def get_one(self, chassis_uuid, fields=None): """Retrieve information about the given chassis. :param chassis_uuid: UUID of a chassis. :param fields: Optional, a list with a specified set of fields of the resource to be returned. """ api_utils.check_allow_specify_fields(fields) rpc_chassis = objects.Chassis.get_by_uuid(pecan.request.context, chassis_uuid) return Chassis.convert_with_links(rpc_chassis, fields=fields) @expose.expose(Chassis, body=Chassis, status_code=http_client.CREATED) def post(self, chassis): """Create a new chassis. 
:param chassis: a chassis within the request body. """ new_chassis = objects.Chassis(pecan.request.context, **chassis.as_dict()) new_chassis.create() # Set the HTTP Location Header pecan.response.location = link.build_url('chassis', new_chassis.uuid) return Chassis.convert_with_links(new_chassis) @wsme.validate(types.uuid, [ChassisPatchType]) @expose.expose(Chassis, types.uuid, body=[ChassisPatchType]) def patch(self, chassis_uuid, patch): """Update an existing chassis. :param chassis_uuid: UUID of a chassis. :param patch: a json PATCH document to apply to this chassis. """ rpc_chassis = objects.Chassis.get_by_uuid(pecan.request.context, chassis_uuid) try: chassis = Chassis( **api_utils.apply_jsonpatch(rpc_chassis.as_dict(), patch)) except api_utils.JSONPATCH_EXCEPTIONS as e: raise exception.PatchError(patch=patch, reason=e) # Update only the fields that have changed for field in objects.Chassis.fields: try: patch_val = getattr(chassis, field) except AttributeError: # Ignore fields that aren't exposed in the API continue if patch_val == wtypes.Unset: patch_val = None if rpc_chassis[field] != patch_val: rpc_chassis[field] = patch_val rpc_chassis.save() return Chassis.convert_with_links(rpc_chassis) @expose.expose(None, types.uuid, status_code=http_client.NO_CONTENT) def delete(self, chassis_uuid): """Delete a chassis. :param chassis_uuid: UUID of a chassis. """ rpc_chassis = objects.Chassis.get_by_uuid(pecan.request.context, chassis_uuid) rpc_chassis.destroy() ironic-5.1.0/ironic/api/controllers/v1/utils.py0000664000567000056710000002310212674513466022617 0ustar jenkinsjenkins00000000000000# Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import jsonpatch from oslo_config import cfg from oslo_utils import uuidutils import pecan import six from six.moves import http_client from webob.static import FileIter import wsme from ironic.api.controllers.v1 import versions from ironic.common import exception from ironic.common.i18n import _ from ironic.common import states from ironic.common import utils from ironic import objects CONF = cfg.CONF JSONPATCH_EXCEPTIONS = (jsonpatch.JsonPatchException, jsonpatch.JsonPointerException, KeyError) # Minimum API version to use for certain verbs MIN_VERB_VERSIONS = { # v1.4 added the MANAGEABLE state and two verbs to move nodes into # and out of that state. Reject requests to do this in older versions states.VERBS['manage']: versions.MINOR_4_MANAGEABLE_STATE, states.VERBS['provide']: versions.MINOR_4_MANAGEABLE_STATE, states.VERBS['inspect']: versions.MINOR_6_INSPECT_STATE, states.VERBS['abort']: versions.MINOR_13_ABORT_VERB, states.VERBS['clean']: versions.MINOR_15_MANUAL_CLEAN, } def validate_limit(limit): if limit is None: return CONF.api.max_limit if limit <= 0: raise wsme.exc.ClientSideError(_("Limit must be positive")) return min(CONF.api.max_limit, limit) def validate_sort_dir(sort_dir): if sort_dir not in ['asc', 'desc']: raise wsme.exc.ClientSideError(_("Invalid sort direction: %s. 
" "Acceptable values are " "'asc' or 'desc'") % sort_dir) return sort_dir def apply_jsonpatch(doc, patch): for p in patch: if p['op'] == 'add' and p['path'].count('/') == 1: if p['path'].lstrip('/') not in doc: msg = _('Adding a new attribute (%s) to the root of ' ' the resource is not allowed') raise wsme.exc.ClientSideError(msg % p['path']) return jsonpatch.apply_patch(doc, jsonpatch.JsonPatch(patch)) def get_patch_value(patch, path): for p in patch: if p['path'] == path and p['op'] != 'remove': return p['value'] def allow_node_logical_names(): # v1.5 added logical name aliases return pecan.request.version.minor >= versions.MINOR_5_NODE_NAME def get_rpc_node(node_ident): """Get the RPC node from the node uuid or logical name. :param node_ident: the UUID or logical name of a node. :returns: The RPC Node. :raises: InvalidUuidOrName if the name or uuid provided is not valid. :raises: NodeNotFound if the node is not found. """ # Check to see if the node_ident is a valid UUID. If it is, treat it # as a UUID. if uuidutils.is_uuid_like(node_ident): return objects.Node.get_by_uuid(pecan.request.context, node_ident) # We can refer to nodes by their name, if the client supports it if allow_node_logical_names(): if is_valid_logical_name(node_ident): return objects.Node.get_by_name(pecan.request.context, node_ident) raise exception.InvalidUuidOrName(name=node_ident) # Ensure we raise the same exception as we did for the Juno release raise exception.NodeNotFound(node=node_ident) def is_valid_node_name(name): """Determine if the provided name is a valid node name. Check to see that the provided node name is valid, and isn't a UUID. :param: name: the node name to check. :returns: True if the name is valid, False otherwise. """ return is_valid_logical_name(name) and not uuidutils.is_uuid_like(name) def is_valid_logical_name(name): """Determine if the provided name is a valid hostname.""" if pecan.request.version.minor < versions.MINOR_10_UNRESTRICTED_NODE_NAME: return utils.is_hostname_safe(name) else: return utils.is_valid_logical_name(name) def vendor_passthru(ident, method, topic, data=None, driver_passthru=False): """Call a vendor passthru API extension. Call the vendor passthru API extension and process the method response to set the right return code for methods that are asynchronous or synchronous; Attach the return value to the response object if it's being served statically. :param ident: The resource identification. For node's vendor passthru this is the node's UUID, for driver's vendor passthru this is the driver's name. :param method: The vendor method name. :param topic: The RPC topic. :param data: The data passed to the vendor method. Defaults to None. :param driver_passthru: Boolean value. Whether this is a node or driver vendor passthru. Defaults to False. :returns: A WSME response object to be returned by the API. 
""" if not method: raise wsme.exc.ClientSideError(_("Method not specified")) if data is None: data = {} http_method = pecan.request.method.upper() params = (pecan.request.context, ident, method, http_method, data, topic) if driver_passthru: response = pecan.request.rpcapi.driver_vendor_passthru(*params) else: response = pecan.request.rpcapi.vendor_passthru(*params) status_code = http_client.ACCEPTED if response['async'] else http_client.OK return_value = response['return'] response_params = {'status_code': status_code} # Attach the return value to the response object if response.get('attach'): if isinstance(return_value, six.text_type): # If unicode, convert to bytes return_value = return_value.encode('utf-8') file_ = wsme.types.File(content=return_value) pecan.response.app_iter = FileIter(file_.file) # Since we've attached the return value to the response # object the response body should now be empty. return_value = None response_params['return_type'] = None return wsme.api.Response(return_value, **response_params) def check_for_invalid_fields(fields, object_fields): """Check for requested non-existent fields. Check if the user requested non-existent fields. :param fields: A list of fields requested by the user :object_fields: A list of fields supported by the object. :raises: InvalidParameterValue if invalid fields were requested. """ invalid_fields = set(fields) - set(object_fields) if invalid_fields: raise exception.InvalidParameterValue( _('Field(s) "%s" are not valid') % ', '.join(invalid_fields)) def check_allow_specify_fields(fields): """Check if fetching a subset of the resource attributes is allowed. Version 1.8 of the API allows fetching a subset of the resource attributes, this method checks if the required version is being requested. """ if (fields is not None and pecan.request.version.minor < versions.MINOR_8_FETCHING_SUBSET_OF_FIELDS): raise exception.NotAcceptable() def check_allow_management_verbs(verb): min_version = MIN_VERB_VERSIONS.get(verb) if min_version is not None and pecan.request.version.minor < min_version: raise exception.NotAcceptable() def check_for_invalid_state_and_allow_filter(provision_state): """Check if filtering nodes by provision state is allowed. Version 1.9 of the API allows filter nodes by provision state. """ if provision_state is not None: if (pecan.request.version.minor < versions.MINOR_9_PROVISION_STATE_FILTER): raise exception.NotAcceptable() valid_states = states.machine.states if provision_state not in valid_states: raise exception.InvalidParameterValue( _('Provision state "%s" is not valid') % provision_state) def check_allow_specify_driver(driver): """Check if filtering nodes by driver is allowed. Version 1.16 of the API allows filter nodes by driver. """ if (driver is not None and pecan.request.version.minor < versions.MINOR_16_DRIVER_FILTER): raise exception.NotAcceptable(_( "Request not acceptable. The minimal required API version " "should be %(base)s.%(opr)s") % {'base': versions.BASE_VERSION, 'opr': versions.MINOR_16_DRIVER_FILTER}) def initial_node_provision_state(): """Return node state to use by default when creating new nodes. Previously the default state for new nodes was AVAILABLE. Starting with API 1.11 it is ENROLL. """ return (states.AVAILABLE if pecan.request.version.minor < versions.MINOR_11_ENROLL_STATE else states.ENROLL) def allow_raid_config(): """Check if RAID configuration is allowed for the node. Version 1.12 of the API allows RAID configuration for the node. 
""" return pecan.request.version.minor >= versions.MINOR_12_RAID_CONFIG def allow_links_node_states_and_driver_properties(): """Check if links are displayable. Version 1.14 of the API allows the display of links to node states and driver properties. """ return (pecan.request.version.minor >= versions.MINOR_14_LINKS_NODESTATES_DRIVERPROPERTIES) ironic-5.1.0/ironic/api/controllers/link.py0000664000567000056710000000375612674513466022103 0ustar jenkinsjenkins00000000000000# Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pecan from wsme import types as wtypes from ironic.api.controllers import base def build_url(resource, resource_args, bookmark=False, base_url=None): if base_url is None: base_url = pecan.request.public_url template = '%(url)s/%(res)s' if bookmark else '%(url)s/v1/%(res)s' # FIXME(lucasagomes): I'm getting a 404 when doing a GET on # a nested resource that the URL ends with a '/'. # https://groups.google.com/forum/#!topic/pecan-dev/QfSeviLg5qs template += '%(args)s' if resource_args.startswith('?') else '/%(args)s' return template % {'url': base_url, 'res': resource, 'args': resource_args} class Link(base.APIBase): """A link representation.""" href = wtypes.text """The url of a link.""" rel = wtypes.text """The name of a link.""" type = wtypes.text """Indicates the type of document/link.""" @staticmethod def make_link(rel_name, url, resource, resource_args, bookmark=False, type=wtypes.Unset): href = build_url(resource, resource_args, bookmark=bookmark, base_url=url) return Link(href=href, rel=rel_name, type=type) @classmethod def sample(cls): sample = cls(href="http://localhost:6385/chassis/" "eaaca217-e7d8-47b4-bb41-3f99f20eed89", rel="bookmark") return sample ironic-5.1.0/ironic/api/controllers/__init__.py0000664000567000056710000000000012674513466022660 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/api/app.py0000664000567000056710000000731412674513466017352 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # Copyright © 2012 New Dream Network, LLC (DreamHost) # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_config import cfg import oslo_middleware.cors as cors_middleware import pecan from ironic.api import acl from ironic.api import config from ironic.api.controllers.base import Version from ironic.api import hooks from ironic.api import middleware from ironic.common.i18n import _ api_opts = [ cfg.StrOpt( 'auth_strategy', default='keystone', choices=['noauth', 'keystone'], help=_('Authentication strategy used by ironic-api. "noauth" should ' 'not be used in a production environment because all ' 'authentication will be disabled.')), cfg.BoolOpt('debug_tracebacks_in_api', default=False, help=_('Return server tracebacks in the API response for any ' 'error responses. WARNING: this is insecure ' 'and should not be used in a production environment.')), cfg.BoolOpt('pecan_debug', default=False, help=_('Enable pecan debug mode. WARNING: this is insecure ' 'and should not be used in a production environment.')), ] CONF = cfg.CONF CONF.register_opts(api_opts) def get_pecan_config(): # Set up the pecan configuration filename = config.__file__.replace('.pyc', '.py') return pecan.configuration.conf_from_file(filename) def setup_app(pecan_config=None, extra_hooks=None): app_hooks = [hooks.ConfigHook(), hooks.DBHook(), hooks.ContextHook(pecan_config.app.acl_public_routes), hooks.RPCHook(), hooks.NoExceptionTracebackHook(), hooks.PublicUrlHook()] if extra_hooks: app_hooks.extend(extra_hooks) if not pecan_config: pecan_config = get_pecan_config() if pecan_config.app.enable_acl: app_hooks.append(hooks.TrustedCallHook()) pecan.configuration.set_config(dict(pecan_config), overwrite=True) app = pecan.make_app( pecan_config.app.root, static_root=pecan_config.app.static_root, debug=CONF.pecan_debug, force_canonical=getattr(pecan_config.app, 'force_canonical', True), hooks=app_hooks, wrap_app=middleware.ParsableErrorMiddleware, ) if pecan_config.app.enable_acl: app = acl.install(app, cfg.CONF, pecan_config.app.acl_public_routes) # Create a CORS wrapper, and attach ironic-specific defaults that must be # included in all CORS responses. app = cors_middleware.CORS(app, CONF) app.set_latent( allow_headers=[Version.max_string, Version.min_string, Version.string], allow_methods=['GET', 'PUT', 'POST', 'DELETE', 'PATCH'], expose_headers=[Version.max_string, Version.min_string, Version.string] ) return app class VersionSelectorApplication(object): def __init__(self): pc = get_pecan_config() pc.app.enable_acl = (CONF.auth_strategy == 'keystone') self.v1 = setup_app(pecan_config=pc) def __call__(self, environ, start_response): return self.v1(environ, start_response) ironic-5.1.0/ironic/api/middleware/0000775000567000056710000000000012674513633020324 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/api/middleware/parsable_error.py0000664000567000056710000000751412674513466023713 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright © 2012 New Dream Network, LLC (DreamHost) # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
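# Example: a minimal sketch of the WSGI delegation pattern used by
# VersionSelectorApplication in ironic/api/app.py above. The demo
# application and its response body are assumptions for illustration;
# the real class delegates to the pecan app built by setup_app().
#
#     def demo_v1_app(environ, start_response):
#         start_response('200 OK', [('Content-Type', 'application/json')])
#         return [b'{"id": "v1"}']
#
#     class DemoVersionSelector(object):
#         """Plain WSGI callable that routes everything to one version."""
#
#         def __init__(self):
#             self.v1 = demo_v1_app
#
#         def __call__(self, environ, start_response):
#             return self.v1(environ, start_response)
#
#     # Serving it, e.g. with the stdlib reference server:
#     # from wsgiref.simple_server import make_server
#     # make_server('127.0.0.1', 6385, DemoVersionSelector()).serve_forever()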
""" Middleware to replace the plain text message body of an error response with one formatted so the client can parse it. Based on pecan.middleware.errordocument """ import json from xml import etree as et from oslo_log import log import six import webob from ironic.common.i18n import _ from ironic.common.i18n import _LE LOG = log.getLogger(__name__) class ParsableErrorMiddleware(object): """Replace error body with something the client can parse.""" def __init__(self, app): self.app = app def __call__(self, environ, start_response): # Request for this state, modified by replace_start_response() # and used when an error is being reported. state = {} def replacement_start_response(status, headers, exc_info=None): """Overrides the default response to make errors parsable.""" try: status_code = int(status.split(' ')[0]) state['status_code'] = status_code except (ValueError, TypeError): # pragma: nocover raise Exception(_( 'ErrorDocumentMiddleware received an invalid ' 'status %s') % status) else: if (state['status_code'] // 100) not in (2, 3): # Remove some headers so we can replace them later # when we have the full error message and can # compute the length. headers = [(h, v) for (h, v) in headers if h not in ('Content-Length', 'Content-Type') ] # Save the headers in case we need to modify them. state['headers'] = headers return start_response(status, headers, exc_info) app_iter = self.app(environ, replacement_start_response) if (state['status_code'] // 100) not in (2, 3): req = webob.Request(environ) if (req.accept.best_match(['application/json', 'application/xml']) == 'application/xml'): try: # simple check xml is valid body = [et.ElementTree.tostring( et.ElementTree.fromstring('' + '\n'.join(app_iter) + ''))] except et.ElementTree.ParseError as err: LOG.error(_LE('Error parsing HTTP response: %s'), err) body = ['%s' % state['status_code'] + ''] state['headers'].append(('Content-Type', 'application/xml')) else: if six.PY3: app_iter = [i.decode('utf-8') for i in app_iter] body = [json.dumps({'error_message': '\n'.join(app_iter)})] if six.PY3: body = [item.encode('utf-8') for item in body] state['headers'].append(('Content-Type', 'application/json')) state['headers'].append(('Content-Length', str(len(body[0])))) else: body = app_iter return body ironic-5.1.0/ironic/api/middleware/__init__.py0000664000567000056710000000153112674513466022441 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from ironic.api.middleware import auth_token from ironic.api.middleware import parsable_error ParsableErrorMiddleware = parsable_error.ParsableErrorMiddleware AuthTokenMiddleware = auth_token.AuthTokenMiddleware __all__ = (ParsableErrorMiddleware, AuthTokenMiddleware) ironic-5.1.0/ironic/api/middleware/auth_token.py0000664000567000056710000000437312674513466023052 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import re from keystonemiddleware import auth_token from oslo_log import log from ironic.common import exception from ironic.common.i18n import _ from ironic.common import utils LOG = log.getLogger(__name__) class AuthTokenMiddleware(auth_token.AuthProtocol): """A wrapper on Keystone auth_token middleware. Does not perform verification of authentication tokens for public routes in the API. """ def __init__(self, app, conf, public_api_routes=[]): self._ironic_app = app # TODO(mrda): Remove .xml and ensure that doesn't result in a # 401 Authentication Required instead of 404 Not Found route_pattern_tpl = r'%s(\.json|\.xml)?$' try: self.public_api_routes = [re.compile(route_pattern_tpl % route_tpl) for route_tpl in public_api_routes] except re.error as e: msg = _('Cannot compile public API routes: %s') % e LOG.error(msg) raise exception.ConfigInvalid(error_msg=msg) super(AuthTokenMiddleware, self).__init__(app, conf) def __call__(self, env, start_response): path = utils.safe_rstrip(env.get('PATH_INFO'), '/') # Some other components need to know whether the API call is being # performed against the public API, so it is reasonable to save this # to the WSGI environment. env['is_public_api'] = any(map(lambda pattern: re.match(pattern, path), self.public_api_routes)) if env['is_public_api']: return self._ironic_app(env, start_response) return super(AuthTokenMiddleware, self).__call__(env, start_response) ironic-5.1.0/ironic/api/hooks.py0000664000567000056710000001421212674513470017703 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright © 2012 New Dream Network, LLC (DreamHost) # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from pecan import hooks from six.moves import http_client from webob import exc from ironic.common import context from ironic.common import policy from ironic.conductor import rpcapi from ironic.db import api as dbapi class ConfigHook(hooks.PecanHook): """Attach the config object to the request so controllers can get to it.""" def before(self, state): state.request.cfg = cfg.CONF class DBHook(hooks.PecanHook): """Attach the dbapi object to the request so controllers can get to it.""" def before(self, state): state.request.dbapi = dbapi.get_instance() class ContextHook(hooks.PecanHook): """Configures a request context and attaches it to the request. The following HTTP request headers are used: X-User-Id or X-User: Used for context.user_id. X-Tenant-Id or X-Tenant: Used for context.tenant. X-Auth-Token: Used for context.auth_token. X-Roles: Used for setting context.is_admin flag to either True or False.
The flag is set to True, if X-Roles contains either an administrator or admin substring. Otherwise it is set to False. """ def __init__(self, public_api_routes): self.public_api_routes = public_api_routes super(ContextHook, self).__init__() def before(self, state): headers = state.request.headers # Do not pass any token with context for noauth mode auth_token = (None if cfg.CONF.auth_strategy == 'noauth' else headers.get('X-Auth-Token')) creds = { 'user': headers.get('X-User') or headers.get('X-User-Id'), 'tenant': headers.get('X-Tenant') or headers.get('X-Tenant-Id'), 'domain_id': headers.get('X-User-Domain-Id'), 'domain_name': headers.get('X-User-Domain-Name'), 'auth_token': auth_token, 'roles': headers.get('X-Roles', '').split(','), } is_admin = policy.enforce('admin_api', creds, creds) is_public_api = state.request.environ.get('is_public_api', False) show_password = policy.enforce('show_password', creds, creds) state.request.context = context.RequestContext( is_admin=is_admin, is_public_api=is_public_api, show_password=show_password, **creds) def after(self, state): if state.request.context == {}: # An incorrect url path will not create RequestContext return # NOTE(lintan): RequestContext will generate a request_id if none is # passed in, so it always contains a request_id. request_id = state.request.context.request_id state.response.headers['Openstack-Request-Id'] = request_id class RPCHook(hooks.PecanHook): """Attach the rpcapi object to the request so controllers can get to it.""" def before(self, state): state.request.rpcapi = rpcapi.ConductorAPI() class TrustedCallHook(hooks.PecanHook): """Verify that the user has admin rights. Checks whether the API call is performed against a public resource or the user has admin privileges in the appropriate tenant, domain or other administrative unit. """ def before(self, state): ctx = state.request.context if ctx.is_public_api: return policy.enforce('admin_api', ctx.to_dict(), ctx.to_dict(), do_raise=True, exc=exc.HTTPForbidden) class NoExceptionTracebackHook(hooks.PecanHook): """Workaround rpc.common: deserialize_remote_exception. deserialize_remote_exception builds rpc exception traceback into error message which is then sent to the client. Such behavior is a security concern so this hook is aimed to cut-off traceback from the error message. """ # NOTE(max_lobur): 'after' hook used instead of 'on_error' because # 'on_error' never fired for wsme+pecan pair. wsme @wsexpose decorator # catches and handles all the errors, so 'on_error' dedicated for unhandled # exceptions never fired. def after(self, state): # Omit empty body. Some errors may not have body at this level yet. if not state.response.body: return # Do nothing if there is no error. # Status codes in the range 200 (OK) to 399 (400 = BAD_REQUEST) are not # an error. if (http_client.OK <= state.response.status_int < http_client.BAD_REQUEST): return json_body = state.response.json # Do not remove traceback when traceback config is set if cfg.CONF.debug_tracebacks_in_api: return faultstring = json_body.get('faultstring') traceback_marker = 'Traceback (most recent call last):' if faultstring and traceback_marker in faultstring: # Cut-off traceback. faultstring = faultstring.split(traceback_marker, 1)[0] # Remove trailing newlines and spaces if any. json_body['faultstring'] = faultstring.rstrip() # Replace the whole json. Cannot change original one because it's # generated on the fly.
state.response.json = json_body class PublicUrlHook(hooks.PecanHook): """Attach the right public_url to the request. Attach the right public_url to the request so resources can create links even when the API service is behind a proxy or SSL terminator. """ def before(self, state): state.request.public_url = (cfg.CONF.api.public_endpoint or state.request.host_url) ironic-5.1.0/ironic/__init__.py0000664000567000056710000000134212674513466017553 0ustar jenkinsjenkins00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os os.environ['EVENTLET_NO_GREENDNS'] = 'yes' import eventlet eventlet.monkey_patch(os=False) ironic-5.1.0/ironic/dhcp/0000775000567000056710000000000012674513633016354 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/dhcp/base.py0000664000567000056710000000706112674513466017650 0ustar jenkinsjenkins00000000000000# Copyright 2014 Rackspace, Inc. # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Abstract base class for dhcp providers. """ import abc import six @six.add_metaclass(abc.ABCMeta) class BaseDHCP(object): """Base class for DHCP provider APIs.""" @abc.abstractmethod def update_port_dhcp_opts(self, port_id, dhcp_options, token=None): """Update one or more DHCP options on the specified port. :param port_id: designate which port these attributes will be applied to. :param dhcp_options: this will be a list of dicts, e.g. :: [{'opt_name': 'bootfile-name', 'opt_value': 'pxelinux.0'}, {'opt_name': 'server-ip-address', 'opt_value': '123.123.123.456'}, {'opt_name': 'tftp-server', 'opt_value': '123.123.123.123'}] :param token: An optional authentication token. :raises: FailedToUpdateDHCPOptOnPort """ @abc.abstractmethod def update_port_address(self, port_id, address, token=None): """Update a port's MAC address. :param port_id: port id. :param address: new MAC address. :param token: An optional authentication token. :raises: FailedToUpdateMacOnPort """ @abc.abstractmethod def update_dhcp_opts(self, task, options, vifs=None): """Send or update the DHCP BOOT options for this node. :param task: A TaskManager instance. :param options: this will be a list of dicts, e.g. :: [{'opt_name': 'bootfile-name', 'opt_value': 'pxelinux.0'}, {'opt_name': 'server-ip-address', 'opt_value': '123.123.123.456'}, {'opt_name': 'tftp-server', 'opt_value': '123.123.123.123'}] :param vifs: A dict with keys 'ports' and 'portgroups' and dicts as values. Each dict has key/value pairs of the form <ironic UUID>:<VIF id>. e.g.
:: {'ports': {'port.uuid': vif.id}, 'portgroups': {'portgroup.uuid': vif.id}} If the value is None, will get the list of ports/portgroups from the Ironic port/portgroup objects. :raises: FailedToUpdateDHCPOptOnPort """ @abc.abstractmethod def get_ip_addresses(self, task): """Get IP addresses for all ports/portgroups in `task`. :param task: A TaskManager instance. :returns: List of IP addresses associated with task's ports and portgroups. """ def clean_dhcp_opts(self, task): """Clean up the DHCP BOOT options for all ports in `task`. :param task: A TaskManager instance. :raises: FailedToCleanDHCPOpts """ pass ironic-5.1.0/ironic/dhcp/neutron.py0000664000567000056710000004114412674513466020430 0ustar jenkinsjenkins00000000000000# # Copyright 2014 OpenStack Foundation # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import time from neutronclient.common import exceptions as neutron_client_exc from neutronclient.v2_0 import client as clientv20 from oslo_config import cfg from oslo_log import log as logging from oslo_utils import netutils from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LW from ironic.common import keystone from ironic.common import network from ironic.dhcp import base from ironic.drivers.modules import ssh from ironic.objects.port import Port neutron_opts = [ cfg.StrOpt('url', default='http://$my_ip:9696', help=_('URL for connecting to neutron.')), cfg.IntOpt('url_timeout', default=30, help=_('Timeout value for connecting to neutron in seconds.')), cfg.IntOpt('retries', default=3, help=_('Client retries in the case of a failed request.')), cfg.StrOpt('auth_strategy', default='keystone', choices=['keystone', 'noauth'], help=_('Default authentication strategy to use when connecting ' 'to neutron. 
' 'Running neutron in noauth mode (related to but not ' 'affected by this setting) is insecure and should only ' 'be used for testing.')), cfg.StrOpt('cleaning_network_uuid', help=_('UUID of the network to create Neutron ports on, when ' 'booting to a ramdisk for cleaning using Neutron DHCP.')) ] CONF = cfg.CONF CONF.import_opt('my_ip', 'ironic.netconf') CONF.register_opts(neutron_opts, group='neutron') LOG = logging.getLogger(__name__) def _build_client(token=None): """Utility function to create Neutron client.""" params = { 'timeout': CONF.neutron.url_timeout, 'retries': CONF.neutron.retries, 'insecure': CONF.keystone_authtoken.insecure, 'ca_cert': CONF.keystone_authtoken.certfile, } if CONF.neutron.auth_strategy == 'noauth': params['endpoint_url'] = CONF.neutron.url params['auth_strategy'] = 'noauth' else: params['endpoint_url'] = ( CONF.neutron.url or keystone.get_service_url(service_type='network')) params['username'] = CONF.keystone_authtoken.admin_user params['tenant_name'] = CONF.keystone_authtoken.admin_tenant_name params['password'] = CONF.keystone_authtoken.admin_password params['auth_url'] = (CONF.keystone_authtoken.auth_uri or '') if CONF.keystone.region_name: params['region_name'] = CONF.keystone.region_name params['token'] = token return clientv20.Client(**params) class NeutronDHCPApi(base.BaseDHCP): """API for communicating to neutron 2.x API.""" def update_port_dhcp_opts(self, port_id, dhcp_options, token=None): """Update a port's attributes. Update one or more DHCP options on the specified port. For the relevant API spec, see http://docs.openstack.org/api/openstack-network/2.0/content/extra-dhc-opt-ext-update.html :param port_id: designate which port these attributes will be applied to. :param dhcp_options: this will be a list of dicts, e.g. :: [{'opt_name': 'bootfile-name', 'opt_value': 'pxelinux.0'}, {'opt_name': 'server-ip-address', 'opt_value': '123.123.123.456'}, {'opt_name': 'tftp-server', 'opt_value': '123.123.123.123'}] :param token: optional auth token. :raises: FailedToUpdateDHCPOptOnPort """ port_req_body = {'port': {'extra_dhcp_opts': dhcp_options}} try: _build_client(token).update_port(port_id, port_req_body) except neutron_client_exc.NeutronClientException: LOG.exception(_LE("Failed to update Neutron port %s."), port_id) raise exception.FailedToUpdateDHCPOptOnPort(port_id=port_id) def update_port_address(self, port_id, address, token=None): """Update a port's mac address. :param port_id: Neutron port id. :param address: new MAC address. :param token: optional auth token. :raises: FailedToUpdateMacOnPort """ port_req_body = {'port': {'mac_address': address}} try: _build_client(token).update_port(port_id, port_req_body) except neutron_client_exc.NeutronClientException: LOG.exception(_LE("Failed to update MAC address on Neutron " "port %s."), port_id) raise exception.FailedToUpdateMacOnPort(port_id=port_id) def update_dhcp_opts(self, task, options, vifs=None): """Send or update the DHCP BOOT options for this node. :param task: A TaskManager instance. :param options: this will be a list of dicts, e.g. :: [{'opt_name': 'bootfile-name', 'opt_value': 'pxelinux.0'}, {'opt_name': 'server-ip-address', 'opt_value': '123.123.123.456'}, {'opt_name': 'tftp-server', 'opt_value': '123.123.123.123'}] :param vifs: a dict of Neutron port/portgroup dicts to update DHCP options on. The port/portgroup dict key should be Ironic port UUIDs, and the values should be Neutron port UUIDs, e.g. 
:: {'ports': {'port.uuid': vif.id}, 'portgroups': {'portgroup.uuid': vif.id}} If the value is None, will get the list of ports/portgroups from the Ironic port/portgroup objects. """ if vifs is None: vifs = network.get_node_vif_ids(task) if not (vifs['ports'] or vifs['portgroups']): raise exception.FailedToUpdateDHCPOptOnPort( _("No VIFs found for node %(node)s when attempting " "to update DHCP BOOT options.") % {'node': task.node.uuid}) failures = [] vif_list = [vif for pdict in vifs.values() for vif in pdict.values()] for vif in vif_list: try: self.update_port_dhcp_opts(vif, options, token=task.context.auth_token) except exception.FailedToUpdateDHCPOptOnPort: failures.append(vif) if failures: if len(failures) == len(vif_list): raise exception.FailedToUpdateDHCPOptOnPort(_( "Failed to set DHCP BOOT options for any port on node %s.") % task.node.uuid) else: LOG.warning(_LW("Some errors were encountered when updating " "the DHCP BOOT options for node %(node)s on " "the following Neutron ports: %(ports)s."), {'node': task.node.uuid, 'ports': failures}) # TODO(adam_g): Hack to workaround bug 1334447 until we have a # mechanism for synchronizing events with Neutron. We need to sleep # only if we are booting VMs, which is implied by SSHPower, to ensure # they do not boot before Neutron agents have setup sufficient DHCP # config for netboot. if isinstance(task.driver.power, ssh.SSHPower): LOG.debug("Waiting 15 seconds for Neutron.") time.sleep(15) def _get_fixed_ip_address(self, port_uuid, client): """Get a Neutron port's fixed ip address. :param port_uuid: Neutron port id. :param client: Neutron client instance. :returns: Neutron port ip address. :raises: FailedToGetIPAddressOnPort :raises: InvalidIPv4Address """ ip_address = None try: neutron_port = client.show_port(port_uuid).get('port') except neutron_client_exc.NeutronClientException: LOG.exception(_LE("Failed to get IP address on Neutron port %s."), port_uuid) raise exception.FailedToGetIPAddressOnPort(port_id=port_uuid) fixed_ips = neutron_port.get('fixed_ips') # NOTE(faizan): At present only the first fixed_ip assigned to this # neutron port will be used, since nova allocates only one fixed_ip # for the instance. if fixed_ips: ip_address = fixed_ips[0].get('ip_address', None) if ip_address: if netutils.is_valid_ipv4(ip_address): return ip_address else: LOG.error(_LE("Neutron returned invalid IPv4 address %s."), ip_address) raise exception.InvalidIPv4Address(ip_address=ip_address) else: LOG.error(_LE("No IP address assigned to Neutron port %s."), port_uuid) raise exception.FailedToGetIPAddressOnPort(port_id=port_uuid) def _get_port_ip_address(self, task, p_obj, client): """Get ip address of ironic port/portgroup assigned by Neutron. :param task: a TaskManager instance. :param p_obj: Ironic port or portgroup object. :param client: Neutron client instance. :returns: The Neutron VIF IP address associated with the node's port/portgroup. :raises: FailedToGetIPAddressOnPort :raises: InvalidIPv4Address """ vif = p_obj.extra.get('vif_port_id') if not vif: obj_name = 'portgroup' if isinstance(p_obj, Port): obj_name = 'port' LOG.warning(_LW("No VIFs found for node %(node)s when attempting " "to get IP address for %(obj_name)s: %(obj_id)s."), {'node': task.node.uuid, 'obj_name': obj_name, 'obj_id': p_obj.uuid}) raise exception.FailedToGetIPAddressOnPort(port_id=p_obj.uuid) vif_ip_address = self._get_fixed_ip_address(vif, client) return vif_ip_address def _get_ip_addresses(self, task, pobj_list, client): """Get IP addresses for all ports/portgroups.
:param task: a TaskManager instance. :param pobj_list: List of port or portgroup objects. :param client: Neutron client instance. :returns: List of IP addresses associated with task's ports/portgroups. """ failures = [] ip_addresses = [] for obj in pobj_list: try: vif_ip_address = self._get_port_ip_address(task, obj, client) ip_addresses.append(vif_ip_address) except (exception.FailedToGetIPAddressOnPort, exception.InvalidIPv4Address): failures.append(obj.uuid) if failures: obj_name = 'portgroups' if isinstance(pobj_list[0], Port): obj_name = 'ports' LOG.warning(_LW( "Some errors were encountered on node %(node)s " "while retrieving IP addresses on the following " "%(obj_name)s: %(failures)s."), {'node': task.node.uuid, 'obj_name': obj_name, 'failures': failures}) return ip_addresses def get_ip_addresses(self, task): """Get IP addresses for all ports/portgroups in `task`. :param task: a TaskManager instance. :returns: List of IP addresses associated with task's ports/portgroups. """ client = _build_client(task.context.auth_token) port_ip_addresses = self._get_ip_addresses(task, task.ports, client) portgroup_ip_addresses = self._get_ip_addresses( task, task.portgroups, client) return port_ip_addresses + portgroup_ip_addresses def create_cleaning_ports(self, task): """Create neutron ports for each port on task.node to boot the ramdisk. :param task: a TaskManager instance. :raises: InvalidParameterValue if the cleaning network is None :returns: a dictionary in the form {port.uuid: neutron_port['id']} """ if not CONF.neutron.cleaning_network_uuid: raise exception.InvalidParameterValue(_('Valid cleaning network ' 'UUID not provided')) neutron_client = _build_client(task.context.auth_token) body = { 'port': { 'network_id': CONF.neutron.cleaning_network_uuid, 'admin_state_up': True, } } ports = {} for ironic_port in task.ports: body['port']['mac_address'] = ironic_port.address try: port = neutron_client.create_port(body) except neutron_client_exc.ConnectionFailed as e: self._rollback_cleaning_ports(task) msg = (_('Could not create cleaning port on network %(net)s ' 'from %(node)s. %(exc)s') % {'net': CONF.neutron.cleaning_network_uuid, 'node': task.node.uuid, 'exc': e}) LOG.exception(msg) raise exception.NodeCleaningFailure(msg) if not port.get('port') or not port['port'].get('id'): self._rollback_cleaning_ports(task) msg = (_('Failed to create cleaning ports for node ' '%(node)s') % {'node': task.node.uuid}) LOG.error(msg) raise exception.NodeCleaningFailure(msg) # Match return value of get_node_vif_ids() ports[ironic_port.uuid] = port['port']['id'] return ports def delete_cleaning_ports(self, task): """Deletes the neutron port created for booting the ramdisk. :param task: a TaskManager instance. """ neutron_client = _build_client(task.context.auth_token) macs = [p.address for p in task.ports] params = { 'network_id': CONF.neutron.cleaning_network_uuid } try: ports = neutron_client.list_ports(**params) except neutron_client_exc.ConnectionFailed as e: msg = (_('Could not get cleaning network vif for %(node)s ' 'from Neutron, possible network issue. 
%(exc)s') % {'node': task.node.uuid, 'exc': e}) LOG.exception(msg) raise exception.NodeCleaningFailure(msg) # Iterate the list of Neutron port dicts, remove the ones we added for neutron_port in ports.get('ports', []): # Only delete ports using the node's mac addresses if neutron_port.get('mac_address') in macs: try: neutron_client.delete_port(neutron_port.get('id')) except neutron_client_exc.ConnectionFailed as e: msg = (_('Could not remove cleaning ports on network ' '%(net)s from %(node)s, possible network issue. ' '%(exc)s') % {'net': CONF.neutron.cleaning_network_uuid, 'node': task.node.uuid, 'exc': e}) LOG.exception(msg) raise exception.NodeCleaningFailure(msg) def _rollback_cleaning_ports(self, task): """Attempts to delete any ports created by cleaning. Purposefully will not raise any exceptions so error handling can continue. :param task: a TaskManager instance. """ try: self.delete_cleaning_ports(task) except Exception: # Log the error, but let the caller invoke the # manager.cleaning_error_handler(). LOG.exception(_LE('Failed to rollback cleaning port ' 'changes for node %s'), task.node.uuid) ironic-5.1.0/ironic/dhcp/__init__.py0000664000567000056710000000000012674513466020457 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/dhcp/none.py0000664000567000056710000000176712674513466017700 0ustar jenkinsjenkins00000000000000# Copyright 2014 Rackspace, Inc. # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from ironic.dhcp import base class NoneDHCPApi(base.BaseDHCP): """No-op DHCP API.""" def update_port_dhcp_opts(self, port_id, dhcp_options, token=None): pass def update_dhcp_opts(self, task, options, vifs=None): pass def update_port_address(self, port_id, address, token=None): pass def get_ip_addresses(self, task): return [] ironic-5.1.0/ironic/tests/0000775000567000056710000000000012674513633016600 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/base.py0000664000567000056710000001342012674513466020070 0ustar jenkinsjenkins00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Base classes for our unit tests. Allows overriding of config for use of fakes, and some black magic for inline callbacks.
""" import copy import os import sys import tempfile import eventlet eventlet.monkey_patch(os=False) import fixtures from oslo_config import cfg from oslo_config import fixture as config_fixture from oslo_context import context as ironic_context from oslo_log import log as logging import testtools from ironic.common import config as ironic_config from ironic.common import hash_ring from ironic.objects import base as objects_base from ironic.tests.unit import policy_fixture CONF = cfg.CONF CONF.import_opt('host', 'ironic.common.service') logging.register_options(CONF) logging.setup(CONF, 'ironic') class ReplaceModule(fixtures.Fixture): """Replace a module with a fake module.""" def __init__(self, name, new_value): self.name = name self.new_value = new_value def _restore(self, old_value): sys.modules[self.name] = old_value def setUp(self): super(ReplaceModule, self).setUp() old_value = sys.modules.get(self.name) sys.modules[self.name] = self.new_value self.addCleanup(self._restore, old_value) class TestingException(Exception): pass class TestCase(testtools.TestCase): """Test case base class for all unit tests.""" def setUp(self): """Run before each test method to initialize test environment.""" super(TestCase, self).setUp() self.context = ironic_context.get_admin_context() test_timeout = os.environ.get('OS_TEST_TIMEOUT', 0) try: test_timeout = int(test_timeout) except ValueError: # If timeout value is invalid do not set a timeout. test_timeout = 0 if test_timeout > 0: self.useFixture(fixtures.Timeout(test_timeout, gentle=True)) self.useFixture(fixtures.NestedTempfile()) self.useFixture(fixtures.TempHomeDir()) if (os.environ.get('OS_STDOUT_CAPTURE') == 'True' or os.environ.get('OS_STDOUT_CAPTURE') == '1'): stdout = self.useFixture(fixtures.StringStream('stdout')).stream self.useFixture(fixtures.MonkeyPatch('sys.stdout', stdout)) if (os.environ.get('OS_STDERR_CAPTURE') == 'True' or os.environ.get('OS_STDERR_CAPTURE') == '1'): stderr = self.useFixture(fixtures.StringStream('stderr')).stream self.useFixture(fixtures.MonkeyPatch('sys.stderr', stderr)) self.log_fixture = self.useFixture(fixtures.FakeLogger()) self._set_config() # NOTE(danms): Make sure to reset us back to non-remote objects # for each test to avoid interactions. 
Also, backup the object # registry objects_base.IronicObject.indirection_api = None self._base_test_obj_backup = copy.copy( objects_base.IronicObjectRegistry.obj_classes()) self.addCleanup(self._restore_obj_registry) self.addCleanup(self._clear_attrs) self.addCleanup(hash_ring.HashRingManager().reset) self.useFixture(fixtures.EnvironmentVariable('http_proxy')) self.policy = self.useFixture(policy_fixture.PolicyFixture()) def _set_config(self): self.cfg_fixture = self.useFixture(config_fixture.Config(CONF)) self.config(use_stderr=False, fatal_exception_format_errors=True, tempdir=tempfile.tempdir) self.set_defaults(host='fake-mini', verbose=True) self.set_defaults(connection="sqlite://", sqlite_synchronous=False, group='database') ironic_config.parse_args([], default_config_files=[]) def _restore_obj_registry(self): objects_base.IronicObjectRegistry._registry._obj_classes = ( self._base_test_obj_backup) def _clear_attrs(self): # Delete attributes that don't start with _ so they don't pin # memory around unnecessarily for the duration of the test # suite for key in [k for k in self.__dict__.keys() if k[0] != '_']: del self.__dict__[key] def config(self, **kw): """Override config options for a test.""" self.cfg_fixture.config(**kw) def set_defaults(self, **kw): """Set default values of config options.""" group = kw.pop('group', None) for o, v in kw.items(): self.cfg_fixture.set_default(o, v, group=group) def path_get(self, project_file=None): """Get the absolute path to a file. Used for testing the API. :param project_file: File whose path to return. Default: None. :returns: path to the specified file, or path to project root. """ root = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', ) ) if project_file: return os.path.join(root, project_file) else: return root ironic-5.1.0/ironic/tests/functional/0000775000567000056710000000000012674513633020742 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/functional/__init__.py0000664000567000056710000000000012674513466023045 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/__init__.py0000664000567000056710000000000012674513466020703 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/0000775000567000056710000000000012674513633017557 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/objects/0000775000567000056710000000000012674513633021210 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/objects/test_fields.py0000664000567000056710000000450612674513466024100 0ustar jenkinsjenkins00000000000000# Copyright 2015 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
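# Example: a minimal standalone sketch of the oslo.config fixture pattern
# that TestCase.config() and set_defaults() in ironic/tests/base.py above
# wrap; 'demo_opt' is a made-up option name used only for illustration.
#
#     from oslo_config import cfg
#     from oslo_config import fixture as config_fixture
#     import testtools
#
#     CONF = cfg.CONF
#     CONF.register_opt(cfg.StrOpt('demo_opt', default='original'))
#
#     class DemoConfigTest(testtools.TestCase):
#         def setUp(self):
#             super(DemoConfigTest, self).setUp()
#             # The fixture reverts any overrides on test cleanup.
#             self.cfg_fixture = self.useFixture(
#                 config_fixture.Config(CONF))
#
#         def test_override(self):
#             self.cfg_fixture.config(demo_opt='overridden')
#             self.assertEqual('overridden', CONF.demo_opt)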
ironic-5.1.0/ironic/tests/functional/0000775000567000056710000000000012674513633020742 5ustar  jenkinsjenkins00000000000000
ironic-5.1.0/ironic/tests/functional/__init__.py0000664000567000056710000000000012674513466023045 0ustar  jenkinsjenkins00000000000000
ironic-5.1.0/ironic/tests/__init__.py0000664000567000056710000000000012674513466020703 0ustar  jenkinsjenkins00000000000000
ironic-5.1.0/ironic/tests/unit/0000775000567000056710000000000012674513633017557 5ustar  jenkinsjenkins00000000000000
ironic-5.1.0/ironic/tests/unit/objects/0000775000567000056710000000000012674513633021210 5ustar  jenkinsjenkins00000000000000
ironic-5.1.0/ironic/tests/unit/objects/test_fields.py0000664000567000056710000000450612674513466024100 0ustar  jenkinsjenkins00000000000000# Copyright 2015 Red Hat, Inc.
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from ironic.common import exception
from ironic.objects import fields
from ironic.tests import base as test_base


class TestMacAddressField(test_base.TestCase):

    def setUp(self):
        super(TestMacAddressField, self).setUp()
        self.field = fields.MACAddressField()

    def test_coerce(self):
        values = {'aa:bb:cc:dd:ee:ff': 'aa:bb:cc:dd:ee:ff',
                  'AA:BB:CC:DD:EE:FF': 'aa:bb:cc:dd:ee:ff',
                  'AA:bb:cc:11:22:33': 'aa:bb:cc:11:22:33'}
        for k in values:
            self.assertEqual(values[k], self.field.coerce('obj', 'attr', k))

    def test_coerce_bad_values(self):
        for v in ('invalid-mac', 'aa-bb-cc-dd-ee-ff'):
            self.assertRaises(exception.InvalidMAC,
                              self.field.coerce, 'obj', 'attr', v)


class TestFlexibleDictField(test_base.TestCase):

    def setUp(self):
        super(TestFlexibleDictField, self).setUp()
        self.field = fields.FlexibleDictField()

    def test_coerce(self):
        d = {'foo_1': 'bar', 'foo_2': 2, 'foo_3': [], 'foo_4': {}}
        self.assertEqual(d, self.field.coerce('obj', 'attr', d))
        self.assertEqual({'foo': 'bar'},
                         self.field.coerce('obj', 'attr', '{"foo": "bar"}'))

    def test_coerce_bad_values(self):
        self.assertRaises(TypeError, self.field.coerce, 'obj', 'attr', 123)
        self.assertRaises(TypeError, self.field.coerce, 'obj', 'attr', True)

    def test_coerce_nullable_translation(self):
        # non-nullable
        self.assertRaises(ValueError, self.field.coerce, 'obj', 'attr', None)

        # nullable
        self.field = fields.FlexibleDictField(nullable=True)
        self.assertEqual({}, self.field.coerce('obj', 'attr', None))
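Read alongside the assertions above, a compact, hypothetical sketch of the coercion behaviour these field tests pin down:

# Hypothetical REPL-style sketch; values mirror the test expectations above.
field = fields.MACAddressField()
field.coerce('obj', 'attr', 'AA:BB:CC:DD:EE:FF')  # -> 'aa:bb:cc:dd:ee:ff'

flex = fields.FlexibleDictField(nullable=True)
flex.coerce('obj', 'attr', '{"foo": "bar"}')      # -> {'foo': 'bar'}
flex.coerce('obj', 'attr', None)                  # -> {} (nullable only)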
ironic-5.1.0/ironic/tests/unit/objects/test_portgroup.py0000664000567000056710000001405512674513466024673 0ustar  jenkinsjenkins00000000000000#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime

import mock
from testtools.matchers import HasLength

from ironic.common import exception
from ironic import objects
from ironic.tests.unit.db import base
from ironic.tests.unit.db import utils


class TestPortgroupObject(base.DbTestCase):

    def setUp(self):
        super(TestPortgroupObject, self).setUp()
        self.fake_portgroup = utils.get_test_portgroup()

    def test_get_by_id(self):
        portgroup_id = self.fake_portgroup['id']
        with mock.patch.object(self.dbapi, 'get_portgroup_by_id',
                               autospec=True) as mock_get_portgroup:
            mock_get_portgroup.return_value = self.fake_portgroup

            portgroup = objects.Portgroup.get(self.context, portgroup_id)

            mock_get_portgroup.assert_called_once_with(portgroup_id)
            self.assertEqual(self.context, portgroup._context)

    def test_get_by_uuid(self):
        uuid = self.fake_portgroup['uuid']
        with mock.patch.object(self.dbapi, 'get_portgroup_by_uuid',
                               autospec=True) as mock_get_portgroup:
            mock_get_portgroup.return_value = self.fake_portgroup

            portgroup = objects.Portgroup.get(self.context, uuid)

            mock_get_portgroup.assert_called_once_with(uuid)
            self.assertEqual(self.context, portgroup._context)

    def test_get_by_address(self):
        address = self.fake_portgroup['address']
        with mock.patch.object(self.dbapi, 'get_portgroup_by_address',
                               autospec=True) as mock_get_portgroup:
            mock_get_portgroup.return_value = self.fake_portgroup

            portgroup = objects.Portgroup.get(self.context, address)

            mock_get_portgroup.assert_called_once_with(address)
            self.assertEqual(self.context, portgroup._context)

    def test_get_by_name(self):
        name = self.fake_portgroup['name']
        with mock.patch.object(self.dbapi, 'get_portgroup_by_name',
                               autospec=True) as mock_get_portgroup:
            mock_get_portgroup.return_value = self.fake_portgroup

            portgroup = objects.Portgroup.get(self.context, name)

            mock_get_portgroup.assert_called_once_with(name)
            self.assertEqual(self.context, portgroup._context)

    def test_get_bad_id_and_uuid_and_address_and_name(self):
        self.assertRaises(exception.InvalidIdentity,
                          objects.Portgroup.get,
                          self.context, 'not:a_name_or_uuid')

    def test_save(self):
        uuid = self.fake_portgroup['uuid']
        address = "b2:54:00:cf:2d:40"
        test_time = datetime.datetime(2000, 1, 1, 0, 0)
        with mock.patch.object(self.dbapi, 'get_portgroup_by_uuid',
                               autospec=True) as mock_get_portgroup:
            mock_get_portgroup.return_value = self.fake_portgroup
            with mock.patch.object(self.dbapi, 'update_portgroup',
                                   autospec=True) as mock_update_portgroup:
                mock_update_portgroup.return_value = (
                    utils.get_test_portgroup(address=address,
                                             updated_at=test_time))
                p = objects.Portgroup.get_by_uuid(self.context, uuid)
                p.address = address
                p.save()

                mock_get_portgroup.assert_called_once_with(uuid)
                mock_update_portgroup.assert_called_once_with(
                    uuid, {'address': "b2:54:00:cf:2d:40"})
                self.assertEqual(self.context, p._context)
                res_updated_at = (p.updated_at).replace(tzinfo=None)
                self.assertEqual(test_time, res_updated_at)

    def test_refresh(self):
        uuid = self.fake_portgroup['uuid']
        returns = [self.fake_portgroup,
                   utils.get_test_portgroup(address="c3:54:00:cf:2d:40")]
        expected = [mock.call(uuid), mock.call(uuid)]
        with mock.patch.object(self.dbapi, 'get_portgroup_by_uuid',
                               side_effect=returns,
                               autospec=True) as mock_get_portgroup:
            p = objects.Portgroup.get_by_uuid(self.context, uuid)
            self.assertEqual("52:54:00:cf:2d:31", p.address)

            p.refresh()
            self.assertEqual("c3:54:00:cf:2d:40", p.address)

            self.assertEqual(expected, mock_get_portgroup.call_args_list)
            self.assertEqual(self.context, p._context)

    def test_list(self):
        with mock.patch.object(self.dbapi, 'get_portgroup_list',
                               autospec=True) as mock_get_list:
            mock_get_list.return_value = [self.fake_portgroup]
            portgroups = objects.Portgroup.list(self.context)
            self.assertThat(portgroups, HasLength(1))
            self.assertIsInstance(portgroups[0], objects.Portgroup)
            self.assertEqual(self.context, portgroups[0]._context)

    def test_list_by_node_id(self):
        with mock.patch.object(self.dbapi, 'get_portgroups_by_node_id',
                               autospec=True) as mock_get_list:
            mock_get_list.return_value = [self.fake_portgroup]
            node_id = self.fake_portgroup['node_id']
            portgroups = objects.Portgroup.list_by_node_id(self.context,
                                                           node_id)
            self.assertThat(portgroups, HasLength(1))
            self.assertIsInstance(portgroups[0], objects.Portgroup)
            self.assertEqual(self.context, portgroups[0]._context)
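Every test above follows the same stubbing idiom; a condensed, hypothetical sketch of the pattern, for readers new to autospec'd mocks:

# Condensed sketch of the dbapi-stubbing idiom used throughout this module.
# autospec=True makes the mock enforce the real method's signature, so any
# drift between the object layer and the dbapi fails loudly in these tests.
with mock.patch.object(self.dbapi, 'get_portgroup_by_uuid',
                       autospec=True) as mock_get:
    mock_get.return_value = self.fake_portgroup
    pg = objects.Portgroup.get_by_uuid(self.context, uuid)
    mock_get.assert_called_once_with(uuid)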
ironic-5.1.0/ironic/tests/unit/objects/test_chassis.py0000664000567000056710000001050312674513466024261 0ustar  jenkinsjenkins00000000000000# coding=utf-8
#
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime

import mock
from oslo_utils import uuidutils
from testtools.matchers import HasLength

from ironic.common import exception
from ironic import objects
from ironic.tests.unit.db import base
from ironic.tests.unit.db import utils


class TestChassisObject(base.DbTestCase):

    def setUp(self):
        super(TestChassisObject, self).setUp()
        self.fake_chassis = utils.get_test_chassis()

    def test_get_by_id(self):
        chassis_id = self.fake_chassis['id']
        with mock.patch.object(self.dbapi, 'get_chassis_by_id',
                               autospec=True) as mock_get_chassis:
            mock_get_chassis.return_value = self.fake_chassis

            chassis = objects.Chassis.get(self.context, chassis_id)

            mock_get_chassis.assert_called_once_with(chassis_id)
            self.assertEqual(self.context, chassis._context)

    def test_get_by_uuid(self):
        uuid = self.fake_chassis['uuid']
        with mock.patch.object(self.dbapi, 'get_chassis_by_uuid',
                               autospec=True) as mock_get_chassis:
            mock_get_chassis.return_value = self.fake_chassis

            chassis = objects.Chassis.get(self.context, uuid)

            mock_get_chassis.assert_called_once_with(uuid)
            self.assertEqual(self.context, chassis._context)

    def test_get_bad_id_and_uuid(self):
        self.assertRaises(exception.InvalidIdentity,
                          objects.Chassis.get, self.context, 'not-a-uuid')

    def test_save(self):
        uuid = self.fake_chassis['uuid']
        extra = {"test": 123}
        test_time = datetime.datetime(2000, 1, 1, 0, 0)
        with mock.patch.object(self.dbapi, 'get_chassis_by_uuid',
                               autospec=True) as mock_get_chassis:
            mock_get_chassis.return_value = self.fake_chassis
            with mock.patch.object(self.dbapi, 'update_chassis',
                                   autospec=True) as mock_update_chassis:
                mock_update_chassis.return_value = (
                    utils.get_test_chassis(extra=extra,
                                           updated_at=test_time))
                c = objects.Chassis.get_by_uuid(self.context, uuid)
                c.extra = extra
                c.save()

                mock_get_chassis.assert_called_once_with(uuid)
                mock_update_chassis.assert_called_once_with(
                    uuid, {'extra': {"test": 123}})
                self.assertEqual(self.context, c._context)
                res_updated_at = (c.updated_at).replace(tzinfo=None)
                self.assertEqual(test_time, res_updated_at)

    def test_refresh(self):
        uuid = self.fake_chassis['uuid']
        new_uuid = uuidutils.generate_uuid()
        returns = [dict(self.fake_chassis, uuid=uuid),
                   dict(self.fake_chassis, uuid=new_uuid)]
        expected = [mock.call(uuid), mock.call(uuid)]
        with mock.patch.object(self.dbapi, 'get_chassis_by_uuid',
                               side_effect=returns,
                               autospec=True) as mock_get_chassis:
            c = objects.Chassis.get_by_uuid(self.context, uuid)
            self.assertEqual(uuid, c.uuid)

            c.refresh()
            self.assertEqual(new_uuid, c.uuid)

            self.assertEqual(expected, mock_get_chassis.call_args_list)
            self.assertEqual(self.context, c._context)

    def test_list(self):
        with mock.patch.object(self.dbapi, 'get_chassis_list',
                               autospec=True) as mock_get_list:
            mock_get_list.return_value = [self.fake_chassis]
            chassis = objects.Chassis.list(self.context)
            self.assertThat(chassis, HasLength(1))
            self.assertIsInstance(chassis[0], objects.Chassis)
            self.assertEqual(self.context, chassis[0]._context)
ironic-5.1.0/ironic/tests/unit/objects/test_node.py0000664000567000056710000002002012674513466023544 0ustar  jenkinsjenkins00000000000000# coding=utf-8
#
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import mock
from testtools.matchers import HasLength

from ironic.common import exception
from ironic import objects
from ironic.tests.unit.db import base
from ironic.tests.unit.db import utils


class TestNodeObject(base.DbTestCase):

    def setUp(self):
        super(TestNodeObject, self).setUp()
        self.fake_node = utils.get_test_node()

    def test_get_by_id(self):
        node_id = self.fake_node['id']
        with mock.patch.object(self.dbapi, 'get_node_by_id',
                               autospec=True) as mock_get_node:
            mock_get_node.return_value = self.fake_node

            node = objects.Node.get(self.context, node_id)

            mock_get_node.assert_called_once_with(node_id)
            self.assertEqual(self.context, node._context)

    def test_get_by_uuid(self):
        uuid = self.fake_node['uuid']
        with mock.patch.object(self.dbapi, 'get_node_by_uuid',
                               autospec=True) as mock_get_node:
            mock_get_node.return_value = self.fake_node

            node = objects.Node.get(self.context, uuid)

            mock_get_node.assert_called_once_with(uuid)
            self.assertEqual(self.context, node._context)

    def test_get_bad_id_and_uuid(self):
        self.assertRaises(exception.InvalidIdentity,
                          objects.Node.get, self.context, 'not-a-uuid')

    def test_save(self):
        uuid = self.fake_node['uuid']
        with mock.patch.object(self.dbapi, 'get_node_by_uuid',
                               autospec=True) as mock_get_node:
            mock_get_node.return_value = self.fake_node
            with mock.patch.object(self.dbapi, 'update_node',
                                   autospec=True) as mock_update_node:
                n = objects.Node.get(self.context, uuid)
                self.assertEqual({"foo": "bar", "fake_password": "fakepass"},
                                 n.driver_internal_info)
                n.properties = {"fake": "property"}
                n.driver = "fake-driver"
                n.save()

                mock_get_node.assert_called_once_with(uuid)
                mock_update_node.assert_called_once_with(
                    uuid, {'properties': {"fake": "property"},
                           'driver': 'fake-driver',
                           'driver_internal_info': {}})
                self.assertEqual(self.context, n._context)
                self.assertEqual({}, n.driver_internal_info)

    def test_refresh(self):
        uuid = self.fake_node['uuid']
        returns = [dict(self.fake_node, properties={"fake": "first"}),
                   dict(self.fake_node, properties={"fake": "second"})]
        expected = [mock.call(uuid), mock.call(uuid)]
        with mock.patch.object(self.dbapi, 'get_node_by_uuid',
                               side_effect=returns,
                               autospec=True) as mock_get_node:
            n = objects.Node.get(self.context, uuid)
            self.assertEqual({"fake": "first"}, n.properties)

            n.refresh()
            self.assertEqual({"fake": "second"}, n.properties)

            self.assertEqual(expected, mock_get_node.call_args_list)
            self.assertEqual(self.context, n._context)

    def test_list(self):
        with mock.patch.object(self.dbapi, 'get_node_list',
                               autospec=True) as mock_get_list:
            mock_get_list.return_value = [self.fake_node]
            nodes = objects.Node.list(self.context)
            self.assertThat(nodes, HasLength(1))
            self.assertIsInstance(nodes[0], objects.Node)
            self.assertEqual(self.context, nodes[0]._context)

    def test_reserve(self):
        with mock.patch.object(self.dbapi, 'reserve_node',
                               autospec=True) as mock_reserve:
            mock_reserve.return_value = self.fake_node
            node_id = self.fake_node['id']
            fake_tag = 'fake-tag'
            node = objects.Node.reserve(self.context, fake_tag, node_id)
            self.assertIsInstance(node, objects.Node)
            mock_reserve.assert_called_once_with(fake_tag, node_id)
            self.assertEqual(self.context, node._context)

    def test_reserve_node_not_found(self):
        with mock.patch.object(self.dbapi, 'reserve_node',
                               autospec=True) as mock_reserve:
            node_id = 'non-existent'
            mock_reserve.side_effect = iter(
                [exception.NodeNotFound(node=node_id)])
            self.assertRaises(exception.NodeNotFound,
                              objects.Node.reserve, self.context,
                              'fake-tag', node_id)

    def test_release(self):
        with mock.patch.object(self.dbapi, 'release_node',
                               autospec=True) as mock_release:
            node_id = self.fake_node['id']
            fake_tag = 'fake-tag'
            objects.Node.release(self.context, fake_tag, node_id)
            mock_release.assert_called_once_with(fake_tag, node_id)

    def test_release_node_not_found(self):
        with mock.patch.object(self.dbapi, 'release_node',
                               autospec=True) as mock_release:
            node_id = 'non-existent'
            mock_release.side_effect = iter(
                [exception.NodeNotFound(node=node_id)])
            self.assertRaises(exception.NodeNotFound,
                              objects.Node.release, self.context,
                              'fake-tag', node_id)

    def test_touch_provisioning(self):
        with mock.patch.object(self.dbapi, 'get_node_by_uuid',
                               autospec=True) as mock_get_node:
            mock_get_node.return_value = self.fake_node
            with mock.patch.object(self.dbapi, 'touch_node_provisioning',
                                   autospec=True) as mock_touch:
                node = objects.Node.get(self.context, self.fake_node['uuid'])
                node.touch_provisioning()
                mock_touch.assert_called_once_with(node.id)

    def test_create_with_invalid_properties(self):
        node = objects.Node(self.context, **self.fake_node)
        node.properties = {"local_gb": "5G"}
        self.assertRaises(exception.InvalidParameterValue, node.create)

    def test_update_with_invalid_properties(self):
        uuid = self.fake_node['uuid']
        with mock.patch.object(self.dbapi, 'get_node_by_uuid',
                               autospec=True) as mock_get_node:
            mock_get_node.return_value = self.fake_node
            node = objects.Node.get(self.context, uuid)
            node.properties = {"local_gb": "5G", "memory_mb": "5",
                               'cpus': '-1', 'cpu_arch': 'x86_64'}
            self.assertRaisesRegexp(exception.InvalidParameterValue,
                                    ".*local_gb=5G, cpus=-1$", node.save)
            mock_get_node.assert_called_once_with(uuid)

    def test__validate_property_values_success(self):
        uuid = self.fake_node['uuid']
        with mock.patch.object(self.dbapi, 'get_node_by_uuid',
                               autospec=True) as mock_get_node:
            mock_get_node.return_value = self.fake_node
            node = objects.Node.get(self.context, uuid)
            values = self.fake_node
            expect = {
                'cpu_arch': 'x86_64',
                "cpus": '8',
                "local_gb": '10',
                "memory_mb": '4096',
            }
            node._validate_property_values(values['properties'])
            self.assertEqual(expect, values['properties'])
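The refresh() tests in this and the neighbouring modules all pin down the same contract; a brief, hypothetical sketch of it:

# Sketch of the refresh() contract verified above: refresh() performs a
# second DB read for the same uuid and replaces local field values with
# whatever that read returns.
node = objects.Node.get(self.context, uuid)   # first DB read
node.refresh()                                # second DB read, same uuid
# fields changed in the DB between the two reads are now visible locally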
ironic-5.1.0/ironic/tests/unit/objects/test_port.py0000664000567000056710000001112712674513466023613 0ustar  jenkinsjenkins00000000000000# coding=utf-8
#
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime

import mock
from testtools.matchers import HasLength

from ironic.common import exception
from ironic import objects
from ironic.tests.unit.db import base
from ironic.tests.unit.db import utils


class TestPortObject(base.DbTestCase):

    def setUp(self):
        super(TestPortObject, self).setUp()
        self.fake_port = utils.get_test_port()

    def test_get_by_id(self):
        port_id = self.fake_port['id']
        with mock.patch.object(self.dbapi, 'get_port_by_id',
                               autospec=True) as mock_get_port:
            mock_get_port.return_value = self.fake_port

            port = objects.Port.get(self.context, port_id)

            mock_get_port.assert_called_once_with(port_id)
            self.assertEqual(self.context, port._context)

    def test_get_by_uuid(self):
        uuid = self.fake_port['uuid']
        with mock.patch.object(self.dbapi, 'get_port_by_uuid',
                               autospec=True) as mock_get_port:
            mock_get_port.return_value = self.fake_port

            port = objects.Port.get(self.context, uuid)

            mock_get_port.assert_called_once_with(uuid)
            self.assertEqual(self.context, port._context)

    def test_get_by_address(self):
        address = self.fake_port['address']
        with mock.patch.object(self.dbapi, 'get_port_by_address',
                               autospec=True) as mock_get_port:
            mock_get_port.return_value = self.fake_port

            port = objects.Port.get(self.context, address)

            mock_get_port.assert_called_once_with(address)
            self.assertEqual(self.context, port._context)

    def test_get_bad_id_and_uuid_and_address(self):
        self.assertRaises(exception.InvalidIdentity,
                          objects.Port.get, self.context, 'not-a-uuid')

    def test_save(self):
        uuid = self.fake_port['uuid']
        address = "b2:54:00:cf:2d:40"
        test_time = datetime.datetime(2000, 1, 1, 0, 0)
        with mock.patch.object(self.dbapi, 'get_port_by_uuid',
                               autospec=True) as mock_get_port:
            mock_get_port.return_value = self.fake_port
            with mock.patch.object(self.dbapi, 'update_port',
                                   autospec=True) as mock_update_port:
                mock_update_port.return_value = (
                    utils.get_test_port(address=address,
                                        updated_at=test_time))
                p = objects.Port.get_by_uuid(self.context, uuid)
                p.address = address
                p.save()

                mock_get_port.assert_called_once_with(uuid)
                mock_update_port.assert_called_once_with(
                    uuid, {'address': "b2:54:00:cf:2d:40"})
                self.assertEqual(self.context, p._context)
                res_updated_at = (p.updated_at).replace(tzinfo=None)
                self.assertEqual(test_time, res_updated_at)

    def test_refresh(self):
        uuid = self.fake_port['uuid']
        returns = [self.fake_port,
                   utils.get_test_port(address="c3:54:00:cf:2d:40")]
        expected = [mock.call(uuid), mock.call(uuid)]
        with mock.patch.object(self.dbapi, 'get_port_by_uuid',
                               side_effect=returns,
                               autospec=True) as mock_get_port:
            p = objects.Port.get_by_uuid(self.context, uuid)
            self.assertEqual("52:54:00:cf:2d:31", p.address)

            p.refresh()
            self.assertEqual("c3:54:00:cf:2d:40", p.address)

            self.assertEqual(expected, mock_get_port.call_args_list)
            self.assertEqual(self.context, p._context)

    def test_list(self):
        with mock.patch.object(self.dbapi, 'get_port_list',
                               autospec=True) as mock_get_list:
            mock_get_list.return_value = [self.fake_port]
            ports = objects.Port.list(self.context)
            self.assertThat(ports, HasLength(1))
            self.assertIsInstance(ports[0], objects.Port)
            self.assertEqual(self.context, ports[0]._context)
ironic-5.1.0/ironic/tests/unit/objects/test_objects.py0000664000567000056710000004676012674513466024263 0ustar  jenkinsjenkins00000000000000# Copyright 2013 IBM Corp.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import contextlib
import datetime
import gettext

import iso8601
import mock
from oslo_context import context
from oslo_versionedobjects import base as object_base
from oslo_versionedobjects import exception as object_exception
from oslo_versionedobjects import fixture as object_fixture
import six

from ironic.objects import base
from ironic.objects import fields
from ironic.tests import base as test_base

gettext.install('ironic')


@base.IronicObjectRegistry.register
class MyObj(base.IronicObject, object_base.VersionedObjectDictCompat):
    VERSION = '1.5'

    fields = {'foo': fields.IntegerField(),
              'bar': fields.StringField(),
              'missing': fields.StringField(),
              }

    def obj_load_attr(self, attrname):
        setattr(self, attrname, 'loaded!')

    @object_base.remotable_classmethod
    def query(cls, context):
        obj = cls(context)
        obj.foo = 1
        obj.bar = 'bar'
        obj.obj_reset_changes()
        return obj

    @object_base.remotable
    def marco(self, context=None):
        return 'polo'

    @object_base.remotable
    def update_test(self, context=None):
        if context and context.tenant == 'alternate':
            self.bar = 'alternate-context'
        else:
            self.bar = 'updated'

    @object_base.remotable
    def save(self, context=None):
        self.obj_reset_changes()

    @object_base.remotable
    def refresh(self, context=None):
        self.foo = 321
        self.bar = 'refreshed'
        self.obj_reset_changes()

    @object_base.remotable
    def modify_save_modify(self, context=None):
        self.bar = 'meow'
        self.save()
        self.foo = 42


class MyObj2(object):
    @classmethod
    def obj_name(cls):
        return 'MyObj'

    @object_base.remotable_classmethod
    def get(cls, *args, **kwargs):
        pass


@base.IronicObjectRegistry.register_if(False)
class TestSubclassedObject(MyObj):
    fields = {'new_field': fields.StringField()}


class _LocalTest(test_base.TestCase):
    def setUp(self):
        super(_LocalTest, self).setUp()
        # Just in case
        base.IronicObject.indirection_api = None


@contextlib.contextmanager
def things_temporarily_local():
    # Temporarily go non-remote so the conductor handles
    # this request directly
    _api = base.IronicObject.indirection_api
    base.IronicObject.indirection_api = None
    yield
    base.IronicObject.indirection_api = _api


class _TestObject(object):
    def test_hydration_type_error(self):
        primitive = {'ironic_object.name': 'MyObj',
                     'ironic_object.namespace': 'ironic',
                     'ironic_object.version': '1.5',
                     'ironic_object.data': {'foo': 'a'}}
        self.assertRaises(ValueError, MyObj.obj_from_primitive, primitive)

    def test_hydration(self):
        primitive = {'ironic_object.name': 'MyObj',
                     'ironic_object.namespace': 'ironic',
                     'ironic_object.version': '1.5',
                     'ironic_object.data': {'foo': 1}}
        obj = MyObj.obj_from_primitive(primitive)
        self.assertEqual(1, obj.foo)

    def test_hydration_bad_ns(self):
        primitive = {'ironic_object.name': 'MyObj',
                     'ironic_object.namespace': 'foo',
                     'ironic_object.version': '1.5',
                     'ironic_object.data': {'foo': 1}}
        self.assertRaises(object_exception.UnsupportedObjectError,
                          MyObj.obj_from_primitive, primitive)

    def test_dehydration(self):
        expected = {'ironic_object.name': 'MyObj',
                    'ironic_object.namespace': 'ironic',
                    'ironic_object.version': '1.5',
                    'ironic_object.data': {'foo': 1}}
        obj = MyObj(self.context)
        obj.foo = 1
        obj.obj_reset_changes()
        self.assertEqual(expected, obj.obj_to_primitive())

    def test_get_updates(self):
        obj = MyObj(self.context)
        self.assertEqual({}, obj.obj_get_changes())
        obj.foo = 123
        self.assertEqual({'foo': 123}, obj.obj_get_changes())
        obj.bar = 'test'
        self.assertEqual({'foo': 123, 'bar': 'test'}, obj.obj_get_changes())
        obj.obj_reset_changes()
        self.assertEqual({}, obj.obj_get_changes())

    def test_object_property(self):
        obj = MyObj(self.context, foo=1)
        self.assertEqual(1, obj.foo)

    def test_object_property_type_error(self):
        obj = MyObj(self.context)

        def fail():
            obj.foo = 'a'
        self.assertRaises(ValueError, fail)

    def test_load(self):
        obj = MyObj(self.context)
        self.assertEqual('loaded!', obj.bar)

    def test_load_in_base(self):
        @base.IronicObjectRegistry.register_if(False)
        class Foo(base.IronicObject, object_base.VersionedObjectDictCompat):
            fields = {'foobar': fields.IntegerField()}
        obj = Foo(self.context)

        self.assertRaisesRegexp(
            NotImplementedError, "Cannot load 'foobar' in the base class",
            getattr, obj, 'foobar')

    def test_loaded_in_primitive(self):
        obj = MyObj(self.context)
        obj.foo = 1
        obj.obj_reset_changes()
        self.assertEqual('loaded!', obj.bar)
        expected = {'ironic_object.name': 'MyObj',
                    'ironic_object.namespace': 'ironic',
                    'ironic_object.version': '1.5',
                    'ironic_object.changes': ['bar'],
                    'ironic_object.data': {'foo': 1,
                                           'bar': 'loaded!'}}
        self.assertEqual(expected, obj.obj_to_primitive())

    def test_changes_in_primitive(self):
        obj = MyObj(self.context)
        obj.foo = 123
        self.assertEqual(set(['foo']), obj.obj_what_changed())
        primitive = obj.obj_to_primitive()
        self.assertTrue('ironic_object.changes' in primitive)
        obj2 = MyObj.obj_from_primitive(primitive)
        self.assertEqual(set(['foo']), obj2.obj_what_changed())
        obj2.obj_reset_changes()
        self.assertEqual(set(), obj2.obj_what_changed())

    def test_unknown_objtype(self):
        self.assertRaises(object_exception.UnsupportedObjectError,
                          base.IronicObject.obj_class_from_name, 'foo', '1.0')

    def test_with_alternate_context(self):
        ctxt1 = context.RequestContext('foo', 'foo')
        ctxt2 = context.RequestContext('bar', tenant='alternate')
        obj = MyObj.query(ctxt1)
        obj.update_test(ctxt2)
        self.assertEqual('alternate-context', obj.bar)

    def test_orphaned_object(self):
        obj = MyObj.query(self.context)
        obj._context = None
        self.assertRaises(object_exception.OrphanedObjectError,
                          obj.update_test)

    def test_changed_1(self):
        obj = MyObj.query(self.context)
        obj.foo = 123
        self.assertEqual(set(['foo']), obj.obj_what_changed())
        obj.update_test(self.context)
        self.assertEqual(set(['foo', 'bar']), obj.obj_what_changed())
        self.assertEqual(123, obj.foo)

    def test_changed_2(self):
        obj = MyObj.query(self.context)
        obj.foo = 123
        self.assertEqual(set(['foo']), obj.obj_what_changed())
        obj.save()
        self.assertEqual(set([]), obj.obj_what_changed())
        self.assertEqual(123, obj.foo)

    def test_changed_3(self):
        obj = MyObj.query(self.context)
        obj.foo = 123
        self.assertEqual(set(['foo']), obj.obj_what_changed())
        obj.refresh()
        self.assertEqual(set([]), obj.obj_what_changed())
        self.assertEqual(321, obj.foo)
        self.assertEqual('refreshed', obj.bar)

    def test_changed_4(self):
        obj = MyObj.query(self.context)
        obj.bar = 'something'
        self.assertEqual(set(['bar']), obj.obj_what_changed())
        obj.modify_save_modify(self.context)
        self.assertEqual(set(['foo']), obj.obj_what_changed())
        self.assertEqual(42, obj.foo)
        self.assertEqual('meow', obj.bar)

    def test_static_result(self):
        obj = MyObj.query(self.context)
        self.assertEqual('bar', obj.bar)
        result = obj.marco()
        self.assertEqual('polo', result)

    def test_updates(self):
        obj = MyObj.query(self.context)
        self.assertEqual(1, obj.foo)
        obj.update_test()
        self.assertEqual('updated', obj.bar)

    def test_base_attributes(self):
        dt = datetime.datetime(1955, 11, 5, 0, 0,
                               tzinfo=iso8601.iso8601.Utc())
        datatime = fields.DateTimeField()
        obj = MyObj(self.context)
        obj.created_at = dt
        obj.updated_at = dt
        expected = {'ironic_object.name': 'MyObj',
                    'ironic_object.namespace': 'ironic',
                    'ironic_object.version': '1.5',
                    'ironic_object.changes':
                        ['created_at', 'updated_at'],
                    'ironic_object.data':
                        {'created_at': datatime.stringify(dt),
                         'updated_at': datatime.stringify(dt),
                         }
                    }
        actual = obj.obj_to_primitive()
        # ironic_object.changes is built from a set and order is undefined
        self.assertEqual(sorted(expected['ironic_object.changes']),
                         sorted(actual['ironic_object.changes']))
        del expected['ironic_object.changes'], actual['ironic_object.changes']
        self.assertEqual(expected, actual)

    def test_contains(self):
        obj = MyObj(self.context)
        self.assertFalse('foo' in obj)
        obj.foo = 1
        self.assertTrue('foo' in obj)
        self.assertFalse('does_not_exist' in obj)

    def test_obj_attr_is_set(self):
        obj = MyObj(self.context, foo=1)
        self.assertTrue(obj.obj_attr_is_set('foo'))
        self.assertFalse(obj.obj_attr_is_set('bar'))
        self.assertRaises(AttributeError, obj.obj_attr_is_set, 'bang')

    def test_get(self):
        obj = MyObj(self.context, foo=1)
        # Foo has value, should not get the default
        self.assertEqual(obj.get('foo', 2), 1)
        # Foo has value, should return the value without error
        self.assertEqual(obj.get('foo'), 1)
        # Bar is not loaded, so we should get the default
        self.assertEqual(obj.get('bar', 'not-loaded'), 'not-loaded')
        # Bar without a default should lazy-load
        self.assertEqual(obj.get('bar'), 'loaded!')
        # Bar now has a default, but loaded value should be returned
        self.assertEqual(obj.get('bar', 'not-loaded'), 'loaded!')
        # Invalid attribute should raise AttributeError
        self.assertRaises(AttributeError, obj.get, 'nothing')
        # ...even with a default
        self.assertRaises(AttributeError, obj.get, 'nothing', 3)

    def test_object_inheritance(self):
        base_fields = list(base.IronicObject.fields)
        myobj_fields = ['foo', 'bar', 'missing'] + base_fields
        myobj3_fields = ['new_field']
        self.assertTrue(issubclass(TestSubclassedObject, MyObj))
        self.assertEqual(len(myobj_fields), len(MyObj.fields))
        self.assertEqual(set(myobj_fields), set(MyObj.fields.keys()))
        self.assertEqual(len(myobj_fields) + len(myobj3_fields),
                         len(TestSubclassedObject.fields))
        self.assertEqual(set(myobj_fields) | set(myobj3_fields),
                         set(TestSubclassedObject.fields.keys()))

    def test_get_changes(self):
        obj = MyObj(self.context)
        self.assertEqual({}, obj.obj_get_changes())
        obj.foo = 123
        self.assertEqual({'foo': 123}, obj.obj_get_changes())
        obj.bar = 'test'
        self.assertEqual({'foo': 123, 'bar': 'test'}, obj.obj_get_changes())
        obj.obj_reset_changes()
        self.assertEqual({}, obj.obj_get_changes())

    def test_obj_fields(self):
        @base.IronicObjectRegistry.register_if(False)
        class TestObj(base.IronicObject,
                      object_base.VersionedObjectDictCompat):
            fields = {'foo': fields.IntegerField()}
            obj_extra_fields = ['bar']

            @property
            def bar(self):
                return 'this is bar'

        obj = TestObj(self.context)
        self.assertEqual(set(['created_at', 'updated_at', 'foo', 'bar']),
                         set(obj.obj_fields))

    def test_refresh_object(self):
        @base.IronicObjectRegistry.register_if(False)
        class TestObj(base.IronicObject,
                      object_base.VersionedObjectDictCompat):
            fields = {'foo': fields.IntegerField(),
                      'bar': fields.StringField()}

        obj = TestObj(self.context)
        current_obj = TestObj(self.context)
        obj.foo = 10
        obj.bar = 'obj.bar'
        current_obj.foo = 2
        current_obj.bar = 'current.bar'
        obj.obj_refresh(current_obj)
        self.assertEqual(obj.foo, 2)
        self.assertEqual(obj.bar, 'current.bar')

    def test_obj_constructor(self):
        obj = MyObj(self.context, foo=123, bar='abc')
        self.assertEqual(123, obj.foo)
        self.assertEqual('abc', obj.bar)
        self.assertEqual(set(['foo', 'bar']), obj.obj_what_changed())

    def test_assign_value_without_DictCompat(self):
        class TestObj(base.IronicObject):
            fields = {'foo': fields.IntegerField(),
                      'bar': fields.StringField()}

        obj = TestObj(self.context)
        obj.foo = 10
        err_message = ''
        try:
            obj['bar'] = 'value'
        except TypeError as e:
            err_message = six.text_type(e)
        finally:
            self.assertIn("'TestObj' object does not support item assignment",
                          err_message)


class TestObject(_LocalTest, _TestObject):
    pass


# The hashes help developers check whether a change to an object needs a
# version bump. Each is an md5 hash of the object's fields and remotable
# methods. The fingerprint values should only be changed if there is a
# version bump.
expected_object_fingerprints = {
    'Node': '1.14-9ee8ab283b06398545880dfdedb49891',
    'MyObj': '1.5-4f5efe8f0fcaf182bbe1c7fe3ba858db',
    'Chassis': '1.3-d656e039fd8ae9f34efc232ab3980905',
    'Port': '1.5-a224755c3da5bc5cf1a14a11c0d00f3f',
    'Portgroup': '1.0-1ac4db8fa31edd9e1637248ada4c25a1',
    'Conductor': '1.1-5091f249719d4a465062a1b3dc7f860d'
}


class TestObjectVersions(test_base.TestCase):

    def test_object_version_check(self):
        classes = base.IronicObjectRegistry.obj_classes()
        checker = object_fixture.ObjectVersionChecker(obj_classes=classes)
        # Compute the difference between actual fingerprints and
        # expected fingerprints. expect = actual = {} if there is no change.
        expect, actual = checker.test_hashes(expected_object_fingerprints)
        self.assertEqual(expect, actual,
                         "Some objects fields or remotable methods have been "
                         "modified. Please make sure the version of those "
                         "objects have been bumped and then update "
                         "expected_object_fingerprints with the new hashes. ")


class TestObjectSerializer(test_base.TestCase):

    def test_object_serialization(self):
        ser = base.IronicObjectSerializer()
        obj = MyObj(self.context)
        primitive = ser.serialize_entity(self.context, obj)
        self.assertTrue('ironic_object.name' in primitive)
        obj2 = ser.deserialize_entity(self.context, primitive)
        self.assertIsInstance(obj2, MyObj)
        self.assertEqual(self.context, obj2._context)

    def test_object_serialization_iterables(self):
        ser = base.IronicObjectSerializer()
        obj = MyObj(self.context)
        for iterable in (list, tuple, set):
            thing = iterable([obj])
            primitive = ser.serialize_entity(self.context, thing)
            self.assertEqual(1, len(primitive))
            for item in primitive:
                self.assertFalse(isinstance(item, base.IronicObject))
            thing2 = ser.deserialize_entity(self.context, primitive)
            self.assertEqual(1, len(thing2))
            for item in thing2:
                self.assertIsInstance(item, MyObj)

    @mock.patch('ironic.objects.base.IronicObject.indirection_api')
    def _test_deserialize_entity_newer(self, obj_version, backported_to,
                                       mock_indirection_api,
                                       my_version='1.6'):
        ser = base.IronicObjectSerializer()
        mock_indirection_api.object_backport_versions.return_value \
            = 'backported'

        @base.IronicObjectRegistry.register
        class MyTestObj(MyObj):
            VERSION = my_version

        obj = MyTestObj(self.context)
        obj.VERSION = obj_version
        primitive = obj.obj_to_primitive()
        result = ser.deserialize_entity(self.context, primitive)
        if backported_to is None:
            self.assertFalse(
                mock_indirection_api.object_backport_versions.called)
        else:
            self.assertEqual('backported', result)
            versions = object_base.obj_tree_get_versions('MyTestObj')
            mock_indirection_api.object_backport_versions.assert_called_with(
                self.context, primitive, versions)

    def test_deserialize_entity_newer_version_backports(self):
        "Test object with unsupported (newer) version"
        self._test_deserialize_entity_newer('1.25', '1.6')

    def test_deserialize_entity_same_revision_does_not_backport(self):
        "Test object with supported revision"
        self._test_deserialize_entity_newer('1.6', None)

    def test_deserialize_entity_newer_revision_does_not_backport_zero(self):
        "Test object with supported revision"
        self._test_deserialize_entity_newer('1.6.0', None)

    def test_deserialize_entity_newer_revision_does_not_backport(self):
        "Test object with supported (newer) revision"
        self._test_deserialize_entity_newer('1.6.1', None)

    def test_deserialize_entity_newer_version_passes_revision(self):
        "Test object with unsupported (newer) version and revision"
        self._test_deserialize_entity_newer('1.7', '1.6.1', my_version='1.6.1')


class TestRegistry(test_base.TestCase):
    @mock.patch('ironic.objects.base.objects')
    def test_hook_chooses_newer_properly(self, mock_objects):
        reg = base.IronicObjectRegistry()
        reg.registration_hook(MyObj, 0)

        class MyNewerObj(object):
            VERSION = '1.123'

            @classmethod
            def obj_name(cls):
                return 'MyObj'

        self.assertEqual(MyObj, mock_objects.MyObj)
        reg.registration_hook(MyNewerObj, 0)
        self.assertEqual(MyNewerObj, mock_objects.MyObj)

    @mock.patch('ironic.objects.base.objects')
    def test_hook_keeps_newer_properly(self, mock_objects):
        reg = base.IronicObjectRegistry()
        reg.registration_hook(MyObj, 0)

        class MyOlderObj(object):
            VERSION = '1.1'

            @classmethod
            def obj_name(cls):
                return 'MyObj'

        self.assertEqual(MyObj, mock_objects.MyObj)
        reg.registration_hook(MyOlderObj, 0)
        self.assertEqual(MyObj, mock_objects.MyObj)
ironic-5.1.0/ironic/tests/unit/objects/test_conductor.py0000664000567000056710000001205112674513466024624 0ustar  jenkinsjenkins00000000000000# coding=utf-8
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime

import mock
from oslo_utils import timeutils

from ironic import objects
from ironic.objects import fields
from ironic.tests.unit.db import base
from ironic.tests.unit.db import utils


class TestConductorObject(base.DbTestCase):

    def setUp(self):
        super(TestConductorObject, self).setUp()
        self.fake_conductor = (
            utils.get_test_conductor(updated_at=timeutils.utcnow()))

    def test_load(self):
        host = self.fake_conductor['hostname']
        with mock.patch.object(self.dbapi, 'get_conductor',
                               autospec=True) as mock_get_cdr:
            mock_get_cdr.return_value = self.fake_conductor
            objects.Conductor.get_by_hostname(self.context, host)
            mock_get_cdr.assert_called_once_with(host)

    def test_save(self):
        host = self.fake_conductor['hostname']
        with mock.patch.object(self.dbapi, 'get_conductor',
                               autospec=True) as mock_get_cdr:
            mock_get_cdr.return_value = self.fake_conductor
            c = objects.Conductor.get_by_hostname(self.context, host)
            c.hostname = 'another-hostname'
            self.assertRaises(NotImplementedError,
                              c.save, self.context)
            mock_get_cdr.assert_called_once_with(host)

    def test_touch(self):
        host = self.fake_conductor['hostname']
        with mock.patch.object(self.dbapi, 'get_conductor',
                               autospec=True) as mock_get_cdr:
            with mock.patch.object(self.dbapi, 'touch_conductor',
                                   autospec=True) as mock_touch_cdr:
                mock_get_cdr.return_value = self.fake_conductor
                c = objects.Conductor.get_by_hostname(self.context, host)
                c.touch(self.context)
                mock_get_cdr.assert_called_once_with(host)
                mock_touch_cdr.assert_called_once_with(host)

    def test_refresh(self):
        host = self.fake_conductor['hostname']
        t0 = self.fake_conductor['updated_at']
        t1 = t0 + datetime.timedelta(seconds=10)
        returns = [dict(self.fake_conductor, updated_at=t0),
                   dict(self.fake_conductor, updated_at=t1)]
        expected = [mock.call(host), mock.call(host)]
        with mock.patch.object(self.dbapi, 'get_conductor',
                               side_effect=returns,
                               autospec=True) as mock_get_cdr:
            c = objects.Conductor.get_by_hostname(self.context, host)
            # ensure timestamps have tzinfo
            datetime_field = fields.DateTimeField()
            self.assertEqual(
                datetime_field.coerce(datetime_field, 'updated_at', t0),
                c.updated_at)

            c.refresh()
            self.assertEqual(
                datetime_field.coerce(datetime_field, 'updated_at', t1),
                c.updated_at)
            self.assertEqual(expected, mock_get_cdr.call_args_list)
            self.assertEqual(self.context, c._context)

    def _test_register(self, update_existing=False):
        host = self.fake_conductor['hostname']
        drivers = self.fake_conductor['drivers']
        with mock.patch.object(self.dbapi, 'register_conductor',
                               autospec=True) as mock_register_cdr:
            mock_register_cdr.return_value = self.fake_conductor
            c = objects.Conductor.register(self.context, host, drivers,
                                           update_existing=update_existing)

            self.assertIsInstance(c, objects.Conductor)
            mock_register_cdr.assert_called_once_with(
                {'drivers': drivers, 'hostname': host},
                update_existing=update_existing)

    def test_register(self):
        self._test_register()

    def test_register_update_existing_true(self):
        self._test_register(update_existing=True)

    def test_unregister(self):
        host = self.fake_conductor['hostname']
        with mock.patch.object(self.dbapi, 'get_conductor',
                               autospec=True) as mock_get_cdr:
            with mock.patch.object(self.dbapi, 'unregister_conductor',
                                   autospec=True) as mock_unregister_cdr:
                mock_get_cdr.return_value = self.fake_conductor
                c = objects.Conductor.get_by_hostname(self.context, host)
                c.unregister()
                mock_unregister_cdr.assert_called_once_with(host)
ironic-5.1.0/ironic/tests/unit/objects/__init__.py0000664000567000056710000000000012674513466023313 0ustar  jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/objects/utils.py0000664000567000056710000000736312674513466022733 0ustar  jenkinsjenkins00000000000000# Copyright 2014 Rackspace Hosting
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
"""Ironic object test utilities."""

from ironic import objects
from ironic.tests.unit.db import utils as db_utils


def get_test_node(ctxt, **kw):
    """Return a Node object with appropriate attributes.

    NOTE: The object leaves the attributes marked as changed, such
    that a create() could be used to commit it to the DB.
    """
    db_node = db_utils.get_test_node(**kw)
    # Let DB generate ID if it isn't specified explicitly
    if 'id' not in kw:
        del db_node['id']
    node = objects.Node(ctxt)
    for key in db_node:
        setattr(node, key, db_node[key])
    return node


def create_test_node(ctxt, **kw):
    """Create and return a test node object.

    Create a node in the DB and return a Node object with appropriate
    attributes.
    """
    node = get_test_node(ctxt, **kw)
    node.create()
    return node


def get_test_port(ctxt, **kw):
    """Return a Port object with appropriate attributes.

    NOTE: The object leaves the attributes marked as changed, such
    that a create() could be used to commit it to the DB.
    """
    db_port = db_utils.get_test_port(**kw)
    # Let DB generate ID if it isn't specified explicitly
    if 'id' not in kw:
        del db_port['id']
    port = objects.Port(ctxt)
    for key in db_port:
        setattr(port, key, db_port[key])
    return port


def create_test_port(ctxt, **kw):
    """Create and return a test port object.

    Create a port in the DB and return a Port object with appropriate
    attributes.
    """
    port = get_test_port(ctxt, **kw)
    port.create()
    return port


def get_test_chassis(ctxt, **kw):
    """Return a Chassis object with appropriate attributes.

    NOTE: The object leaves the attributes marked as changed, such
    that a create() could be used to commit it to the DB.
    """
    db_chassis = db_utils.get_test_chassis(**kw)
    # Let DB generate ID if it isn't specified explicitly
    if 'id' not in kw:
        del db_chassis['id']
    chassis = objects.Chassis(ctxt)
    for key in db_chassis:
        setattr(chassis, key, db_chassis[key])
    return chassis


def create_test_chassis(ctxt, **kw):
    """Create and return a test chassis object.

    Create a chassis in the DB and return a Chassis object with appropriate
    attributes.
    """
    chassis = get_test_chassis(ctxt, **kw)
    chassis.create()
    return chassis


def get_test_portgroup(ctxt, **kw):
    """Return a Portgroup object with appropriate attributes.

    NOTE: The object leaves the attributes marked as changed, such
    that a create() could be used to commit it to the DB.
    """
    db_portgroup = db_utils.get_test_portgroup(**kw)
    # Let DB generate ID if it isn't specified explicitly
    if 'id' not in kw:
        del db_portgroup['id']
    portgroup = objects.Portgroup(ctxt)
    for key in db_portgroup:
        setattr(portgroup, key, db_portgroup[key])
    return portgroup


def create_test_portgroup(ctxt, **kw):
    """Create and return a test portgroup object.

    Create a portgroup in the DB and return a Portgroup object with
    appropriate attributes.
    """
    portgroup = get_test_portgroup(ctxt, **kw)
    portgroup.create()
    return portgroup
ironic-5.1.0/ironic/tests/unit/drivers/0000775000567000056710000000000012674513633021235 5ustar  jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/drivers/test_utils.py0000664000567000056710000002312012674513466024010 0ustar  jenkinsjenkins00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import mock

from ironic.common import driver_factory
from ironic.common import exception
from ironic.conductor import task_manager
from ironic.conductor import utils as manager_utils
from ironic.drivers.modules import fake
from ironic.drivers import utils as driver_utils
from ironic.tests.unit.conductor import mgr_utils
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.objects import utils as obj_utils


class UtilsTestCase(db_base.DbTestCase):

    def setUp(self):
        super(UtilsTestCase, self).setUp()
        mgr_utils.mock_the_extension_manager()
        self.driver = driver_factory.get_driver("fake")
        self.node = obj_utils.create_test_node(self.context)

    def test_vendor_interface_get_properties(self):
        expected = {'A1': 'A1 description. Required.',
                    'A2': 'A2 description. Optional.',
                    'B1': 'B1 description. Required.',
                    'B2': 'B2 description. Required.'}
        props = self.driver.vendor.get_properties()
        self.assertEqual(expected, props)

    @mock.patch.object(fake.FakeVendorA, 'validate', autospec=True)
    def test_vendor_interface_validate_valid_methods(self,
                                                     mock_fakea_validate):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.driver.vendor.validate(task, method='first_method')
            mock_fakea_validate.assert_called_once_with(
                self.driver.vendor.mapping['first_method'],
                task, method='first_method')

    def test_vendor_interface_validate_bad_method(self):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              self.driver.vendor.validate,
                              task, method='fake_method')

    def test_get_node_mac_addresses(self):
        ports = []
        ports.append(
            obj_utils.create_test_port(
                self.context,
                address='aa:bb:cc:dd:ee:ff',
                uuid='bb43dc0b-03f2-4d2e-ae87-c02d7f33cc53',
                node_id=self.node.id)
        )
        ports.append(
            obj_utils.create_test_port(
                self.context,
                address='dd:ee:ff:aa:bb:cc',
                uuid='4fc26c0b-03f2-4d2e-ae87-c02d7f33c234',
                node_id=self.node.id)
        )
        with task_manager.acquire(self.context, self.node.uuid) as task:
            node_macs = driver_utils.get_node_mac_addresses(task)
        self.assertEqual(sorted([p.address for p in ports]),
                         sorted(node_macs))

    def test_get_node_capability(self):
        properties = {'capabilities': 'cap1:value1, cap2: value2'}
        self.node.properties = properties
        expected = 'value1'
        expected2 = 'value2'

        result = driver_utils.get_node_capability(self.node, 'cap1')
        result2 = driver_utils.get_node_capability(self.node, 'cap2')
        self.assertEqual(expected, result)
        self.assertEqual(expected2, result2)

    def test_get_node_capability_returns_none(self):
        properties = {'capabilities': 'cap1:value1,cap2:value2'}
        self.node.properties = properties

        result = driver_utils.get_node_capability(self.node, 'capX')
        self.assertIsNone(result)

    def test_add_node_capability(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.properties['capabilities'] = ''
            driver_utils.add_node_capability(task, 'boot_mode', 'bios')
            self.assertEqual('boot_mode:bios',
                             task.node.properties['capabilities'])

    def test_add_node_capability_append(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.properties['capabilities'] = 'a:b,c:d'
            driver_utils.add_node_capability(task, 'boot_mode', 'bios')
            self.assertEqual('a:b,c:d,boot_mode:bios',
                             task.node.properties['capabilities'])

    def test_add_node_capability_append_duplicate(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.properties['capabilities'] = 'a:b,c:d'
            driver_utils.add_node_capability(task, 'a', 'b')
            self.assertEqual('a:b,c:d,a:b',
                             task.node.properties['capabilities'])

    @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True)
    def test_ensure_next_boot_device(self, node_set_boot_device_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.driver_internal_info['persistent_boot_device'] = 'pxe'
            driver_utils.ensure_next_boot_device(
                task,
                {'force_boot_device': True}
            )
            node_set_boot_device_mock.assert_called_once_with(task, 'pxe')

    def test_ensure_next_boot_device_clears_is_next_boot_persistent(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.driver_internal_info['persistent_boot_device'] = 'pxe'
            task.node.driver_internal_info['is_next_boot_persistent'] = False
            driver_utils.ensure_next_boot_device(
                task,
                {'force_boot_device': True}
            )
            task.node.refresh()
            self.assertNotIn('is_next_boot_persistent',
                             task.node.driver_internal_info)

    def test_force_persistent_boot_true(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.driver_info['ipmi_force_boot_device'] = True
            ret = driver_utils.force_persistent_boot(task, 'pxe', True)
            self.assertIsNone(ret)
            task.node.refresh()
            self.assertIn('persistent_boot_device',
                          task.node.driver_internal_info)

    def test_force_persistent_boot_false(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ret = driver_utils.force_persistent_boot(task, 'pxe', False)
            self.assertIsNone(ret)
            task.node.refresh()
            self.assertEqual(
                False,
                task.node.driver_internal_info.get('is_next_boot_persistent')
            )

    def test_capabilities_to_dict(self):
        capabilities_more_than_one_item = 'a:b,c:d'
        capabilities_exactly_one_item = 'e:f'

        # Testing empty capabilities
        self.assertEqual(
            {},
            driver_utils.capabilities_to_dict('')
        )
        self.assertEqual(
            {'e': 'f'},
            driver_utils.capabilities_to_dict(capabilities_exactly_one_item)
        )
        self.assertEqual(
            {'a': 'b', 'c': 'd'},
            driver_utils.capabilities_to_dict(capabilities_more_than_one_item)
        )

    def test_capabilities_to_dict_with_only_key_or_value_fail(self):
        capabilities_only_key_or_value = 'xpto'
        exc = self.assertRaises(
            exception.InvalidParameterValue,
            driver_utils.capabilities_to_dict,
            capabilities_only_key_or_value
        )
        self.assertEqual('Malformed capabilities value: xpto', str(exc))

    def test_capabilities_to_dict_with_invalid_character_fail(self):
        for test_capabilities in ('xpto:a,', ',xpto:a'):
            exc = self.assertRaises(
                exception.InvalidParameterValue,
                driver_utils.capabilities_to_dict,
                test_capabilities
            )
            self.assertEqual('Malformed capabilities value: ', str(exc))

    def test_capabilities_to_dict_with_incorrect_format_fail(self):
        for test_capabilities in (':xpto,', 'xpto:,', ':,'):
            exc = self.assertRaises(
                exception.InvalidParameterValue,
                driver_utils.capabilities_to_dict,
                test_capabilities
            )
            self.assertEqual('Malformed capabilities value: ', str(exc))

    def test_capabilities_not_string(self):
        capabilities_already_dict = {'a': 'b'}
        capabilities_something_else = 42

        exc = self.assertRaises(
            exception.InvalidParameterValue,
            driver_utils.capabilities_to_dict,
            capabilities_already_dict
        )
        self.assertEqual("Value of 'capabilities' must be string. Got " +
                         str(dict), str(exc))

        exc = self.assertRaises(
            exception.InvalidParameterValue,
            driver_utils.capabilities_to_dict,
            capabilities_something_else
        )
        self.assertEqual("Value of 'capabilities' must be string. Got " +
                         str(int), str(exc))
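The capabilities helpers exercised above translate between the flat string stored in a node's properties and a dict; a quick hypothetical sketch of the convention:

# Sketch of the capabilities string <-> dict convention tested above;
# the capability names here are illustrative only.
driver_utils.capabilities_to_dict('boot_mode:uefi,raid_level:1')
# -> {'boot_mode': 'uefi', 'raid_level': '1'}
driver_utils.capabilities_to_dict('')      # -> {}
driver_utils.capabilities_to_dict('uefi')  # raises InvalidParameterValue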
import json from futurist import periodics import mock from ironic.common import exception from ironic.common import raid from ironic.drivers import base as driver_base from ironic.tests import base class FakeVendorInterface(driver_base.VendorInterface): def get_properties(self): pass @driver_base.passthru(['POST']) def noexception(self): return "Fake" @driver_base.driver_passthru(['POST']) def driver_noexception(self): return "Fake" @driver_base.passthru(['POST']) def ironicexception(self): raise exception.IronicException("Fake!") @driver_base.passthru(['POST']) def normalexception(self): raise Exception("Fake!") def validate(self, task, **kwargs): pass def driver_validate(self, **kwargs): pass class PassthruDecoratorTestCase(base.TestCase): def setUp(self): super(PassthruDecoratorTestCase, self).setUp() self.fvi = FakeVendorInterface() def test_passthru_noexception(self): result = self.fvi.noexception() self.assertEqual("Fake", result) @mock.patch.object(driver_base, 'LOG', autospec=True) def test_passthru_ironicexception(self, mock_log): self.assertRaises(exception.IronicException, self.fvi.ironicexception, mock.ANY) mock_log.exception.assert_called_with( mock.ANY, 'ironicexception') @mock.patch.object(driver_base, 'LOG', autospec=True) def test_passthru_nonironicexception(self, mock_log): self.assertRaises(exception.VendorPassthruException, self.fvi.normalexception, mock.ANY) mock_log.exception.assert_called_with( mock.ANY, 'normalexception') def test_passthru_check_func_references(self): inst1 = FakeVendorInterface() inst2 = FakeVendorInterface() self.assertNotEqual(inst1.vendor_routes['noexception']['func'], inst2.vendor_routes['noexception']['func']) self.assertNotEqual(inst1.driver_routes['driver_noexception']['func'], inst2.driver_routes['driver_noexception']['func']) class DriverPeriodicTaskTestCase(base.TestCase): def test(self): method_mock = mock.MagicMock(spec_set=[]) class TestClass(object): @driver_base.driver_periodic_task(spacing=42) def method(self, foo, bar=None): method_mock(foo, bar=bar) obj = TestClass() self.assertEqual(42, obj.method._periodic_spacing) self.assertTrue(periodics.is_periodic(obj.method)) obj.method(1, bar=2) method_mock.assert_called_once_with(1, bar=2) class CleanStepDecoratorTestCase(base.TestCase): def setUp(self): super(CleanStepDecoratorTestCase, self).setUp() method_mock = mock.MagicMock() del method_mock._is_clean_step del method_mock._clean_step_priority del method_mock._clean_step_abortable del method_mock._clean_step_argsinfo self.method = method_mock def test__validate_argsinfo(self): # None, empty dict driver_base._validate_argsinfo(None) driver_base._validate_argsinfo({}) # Only description specified driver_base._validate_argsinfo({'arg1': {'description': 'desc1'}}) # Multiple args driver_base._validate_argsinfo({'arg1': {'description': 'desc1', 'required': True}, 'arg2': {'description': 'desc2'}}) def test__validate_argsinfo_not_dict(self): self.assertRaisesRegexp(exception.InvalidParameterValue, 'argsinfo.+dictionary', driver_base._validate_argsinfo, 'not-a-dict') def test__validate_argsinfo_arg_not_dict(self): self.assertRaisesRegexp(exception.InvalidParameterValue, 'Argument.+dictionary', driver_base._validate_argsinfo, {'arg1': 'not-a-dict'}) def test__validate_argsinfo_arg_empty_dict(self): self.assertRaisesRegexp(exception.InvalidParameterValue, 'description', driver_base._validate_argsinfo, {'arg1': {}}) def test__validate_argsinfo_arg_missing_description(self): self.assertRaisesRegexp(exception.InvalidParameterValue, 
class CleanStepDecoratorTestCase(base.TestCase):

    def setUp(self):
        super(CleanStepDecoratorTestCase, self).setUp()
        method_mock = mock.MagicMock()
        del method_mock._is_clean_step
        del method_mock._clean_step_priority
        del method_mock._clean_step_abortable
        del method_mock._clean_step_argsinfo
        self.method = method_mock

    def test__validate_argsinfo(self):
        # None, empty dict
        driver_base._validate_argsinfo(None)
        driver_base._validate_argsinfo({})

        # Only description specified
        driver_base._validate_argsinfo({'arg1': {'description': 'desc1'}})

        # Multiple args
        driver_base._validate_argsinfo({'arg1': {'description': 'desc1',
                                                 'required': True},
                                        'arg2': {'description': 'desc2'}})

    def test__validate_argsinfo_not_dict(self):
        self.assertRaisesRegexp(exception.InvalidParameterValue,
                                'argsinfo.+dictionary',
                                driver_base._validate_argsinfo, 'not-a-dict')

    def test__validate_argsinfo_arg_not_dict(self):
        self.assertRaisesRegexp(exception.InvalidParameterValue,
                                'Argument.+dictionary',
                                driver_base._validate_argsinfo,
                                {'arg1': 'not-a-dict'})

    def test__validate_argsinfo_arg_empty_dict(self):
        self.assertRaisesRegexp(exception.InvalidParameterValue,
                                'description',
                                driver_base._validate_argsinfo,
                                {'arg1': {}})

    def test__validate_argsinfo_arg_missing_description(self):
        self.assertRaisesRegexp(exception.InvalidParameterValue,
                                'description',
                                driver_base._validate_argsinfo,
                                {'arg1': {'required': True}})

    def test__validate_argsinfo_arg_description_invalid(self):
        self.assertRaisesRegexp(exception.InvalidParameterValue,
                                'string',
                                driver_base._validate_argsinfo,
                                {'arg1': {'description': True}})

    def test__validate_argsinfo_arg_required_invalid(self):
        self.assertRaisesRegexp(exception.InvalidParameterValue,
                                'Boolean',
                                driver_base._validate_argsinfo,
                                {'arg1': {'description': 'desc1',
                                          'required': 'maybe'}})

    def test__validate_argsinfo_arg_unknown_key(self):
        self.assertRaisesRegexp(exception.InvalidParameterValue,
                                'invalid',
                                driver_base._validate_argsinfo,
                                {'arg1': {'description': 'desc1',
                                          'unknown': 'bad'}})

    def test_clean_step_priority_only(self):
        d = driver_base.clean_step(priority=10)
        d(self.method)
        self.assertTrue(self.method._is_clean_step)
        self.assertEqual(10, self.method._clean_step_priority)
        self.assertFalse(self.method._clean_step_abortable)
        self.assertIsNone(self.method._clean_step_argsinfo)

    def test_clean_step_all_args(self):
        argsinfo = {'arg1': {'description': 'desc1',
                             'required': True}}
        d = driver_base.clean_step(priority=0, abortable=True,
                                   argsinfo=argsinfo)
        d(self.method)
        self.assertTrue(self.method._is_clean_step)
        self.assertEqual(0, self.method._clean_step_priority)
        self.assertTrue(self.method._clean_step_abortable)
        self.assertEqual(argsinfo, self.method._clean_step_argsinfo)

    def test_clean_step_bad_priority(self):
        d = driver_base.clean_step(priority='hi')
        self.assertRaisesRegexp(exception.InvalidParameterValue, 'priority',
                                d, self.method)
        self.assertTrue(self.method._is_clean_step)
        self.assertFalse(hasattr(self.method, '_clean_step_priority'))
        self.assertFalse(hasattr(self.method, '_clean_step_abortable'))
        self.assertFalse(hasattr(self.method, '_clean_step_argsinfo'))

    def test_clean_step_bad_abortable(self):
        d = driver_base.clean_step(priority=0, abortable='blue')
        self.assertRaisesRegexp(exception.InvalidParameterValue, 'abortable',
                                d, self.method)
        self.assertTrue(self.method._is_clean_step)
        self.assertEqual(0, self.method._clean_step_priority)
        self.assertFalse(hasattr(self.method, '_clean_step_abortable'))
        self.assertFalse(hasattr(self.method, '_clean_step_argsinfo'))

    @mock.patch.object(driver_base, '_validate_argsinfo', spec_set=True,
                       autospec=True)
    def test_clean_step_bad_argsinfo(self, mock_valid):
        mock_valid.side_effect = exception.InvalidParameterValue('bad')
        d = driver_base.clean_step(priority=0, argsinfo=100)
        self.assertRaises(exception.InvalidParameterValue, d, self.method)
        self.assertTrue(self.method._is_clean_step)
        self.assertEqual(0, self.method._clean_step_priority)
        self.assertFalse(self.method._clean_step_abortable)
        self.assertFalse(hasattr(self.method, '_clean_step_argsinfo'))
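# NOTE: The sketch below is illustrative only and not part of the upstream
# test suite. Based on the attributes exercised by CleanStepDecoratorTestCase
# above, @clean_step does not wrap the decorated function; it validates its
# arguments and then annotates the function in place:
def _example_clean_step_metadata():
    @driver_base.clean_step(priority=10, abortable=True,
                            argsinfo={'arg1': {'description': 'desc1'}})
    def some_step(self, task, **kwargs):
        pass

    assert some_step._is_clean_step
    assert some_step._clean_step_priority == 10
    assert some_step._clean_step_abortable
    assert some_step._clean_step_argsinfo == {'arg1': {'description': 'desc1'}}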
class CleanStepTestCase(base.TestCase):
    def test_get_and_execute_clean_steps(self):
        # Create a fake Driver class, create some clean steps, make sure
        # they are listed correctly, and attempt to execute one of them

        method_mock = mock.MagicMock(spec_set=[])
        method_args_mock = mock.MagicMock(spec_set=[])
        task_mock = mock.MagicMock(spec_set=[])

        class TestClass(driver_base.BaseInterface):
            interface_type = 'test'

            @driver_base.clean_step(priority=0)
            def manual_method(self, task):
                pass

            @driver_base.clean_step(priority=10, abortable=True)
            def automated_method(self, task):
                method_mock(task)

            def not_clean_method(self, task):
                pass

        class TestClass2(driver_base.BaseInterface):
            interface_type = 'test2'

            @driver_base.clean_step(priority=0)
            def manual_method2(self, task):
                pass

            @driver_base.clean_step(priority=20, abortable=True)
            def automated_method2(self, task):
                method_mock(task)

            def not_clean_method2(self, task):
                pass

        class TestClass3(driver_base.BaseInterface):
            interface_type = 'test3'

            @driver_base.clean_step(priority=0, abortable=True, argsinfo={
                'arg1': {'description': 'desc1',
                         'required': True}})
            def manual_method3(self, task, **kwargs):
                method_args_mock(task, **kwargs)

            @driver_base.clean_step(priority=15, argsinfo={
                'arg10': {'description': 'desc10'}})
            def automated_method3(self, task, **kwargs):
                pass

            def not_clean_method3(self, task):
                pass

        obj = TestClass()
        obj2 = TestClass2()
        obj3 = TestClass3()

        self.assertEqual(2, len(obj.get_clean_steps(task_mock)))
        # Ensure the steps look correct
        self.assertEqual(10, obj.get_clean_steps(task_mock)[0]['priority'])
        self.assertTrue(obj.get_clean_steps(task_mock)[0]['abortable'])
        self.assertEqual('test', obj.get_clean_steps(
            task_mock)[0]['interface'])
        self.assertEqual('automated_method', obj.get_clean_steps(
            task_mock)[0]['step'])
        self.assertEqual(0, obj.get_clean_steps(task_mock)[1]['priority'])
        self.assertFalse(obj.get_clean_steps(task_mock)[1]['abortable'])
        self.assertEqual('test', obj.get_clean_steps(
            task_mock)[1]['interface'])
        self.assertEqual('manual_method', obj.get_clean_steps(
            task_mock)[1]['step'])

        # Ensure the second obj gets different clean steps
        self.assertEqual(2, len(obj2.get_clean_steps(task_mock)))
        # Ensure the steps look correct
        self.assertEqual(20, obj2.get_clean_steps(task_mock)[0]['priority'])
        self.assertTrue(obj2.get_clean_steps(task_mock)[0]['abortable'])
        self.assertEqual('test2', obj2.get_clean_steps(
            task_mock)[0]['interface'])
        self.assertEqual('automated_method2', obj2.get_clean_steps(
            task_mock)[0]['step'])
        self.assertEqual(0, obj2.get_clean_steps(task_mock)[1]['priority'])
        self.assertFalse(obj2.get_clean_steps(task_mock)[1]['abortable'])
        self.assertEqual('test2', obj2.get_clean_steps(
            task_mock)[1]['interface'])
        self.assertEqual('manual_method2', obj2.get_clean_steps(
            task_mock)[1]['step'])
        self.assertIsNone(obj2.get_clean_steps(task_mock)[0]['argsinfo'])

        # Ensure the third obj has different clean steps
        self.assertEqual(2, len(obj3.get_clean_steps(task_mock)))
        self.assertEqual(15, obj3.get_clean_steps(task_mock)[0]['priority'])
        self.assertFalse(obj3.get_clean_steps(task_mock)[0]['abortable'])
        self.assertEqual('test3', obj3.get_clean_steps(
            task_mock)[0]['interface'])
        self.assertEqual('automated_method3', obj3.get_clean_steps(
            task_mock)[0]['step'])
        self.assertEqual({'arg10': {'description': 'desc10'}},
                         obj3.get_clean_steps(task_mock)[0]['argsinfo'])
        self.assertEqual(0, obj3.get_clean_steps(task_mock)[1]['priority'])
        self.assertTrue(obj3.get_clean_steps(task_mock)[1]['abortable'])
        self.assertEqual(obj3.interface_type, obj3.get_clean_steps(
            task_mock)[1]['interface'])
        self.assertEqual('manual_method3', obj3.get_clean_steps(
            task_mock)[1]['step'])
        self.assertEqual({'arg1': {'description': 'desc1', 'required': True}},
                         obj3.get_clean_steps(task_mock)[1]['argsinfo'])

        # Ensure we can execute the function.
        obj.execute_clean_step(task_mock, obj.get_clean_steps(task_mock)[0])
        method_mock.assert_called_once_with(task_mock)

        args = {'arg1': 'val1'}
        clean_step = {'interface': 'test3', 'step': 'manual_method3',
                      'args': args}
        obj3.execute_clean_step(task_mock, clean_step)
        method_args_mock.assert_called_once_with(task_mock, **args)


class MyRAIDInterface(driver_base.RAIDInterface):

    def create_configuration(self, task):
        pass

    def delete_configuration(self, task):
        pass


class RAIDInterfaceTestCase(base.TestCase):

    @mock.patch.object(driver_base.RAIDInterface, 'validate_raid_config',
                       autospec=True)
    def test_validate(self, validate_raid_config_mock):
        raid_interface = MyRAIDInterface()
        node_mock = mock.MagicMock(target_raid_config='some_raid_config')
        task_mock = mock.MagicMock(node=node_mock)

        raid_interface.validate(task_mock)

        validate_raid_config_mock.assert_called_once_with(
            raid_interface, task_mock, 'some_raid_config')

    @mock.patch.object(driver_base.RAIDInterface, 'validate_raid_config',
                       autospec=True)
    def test_validate_no_target_raid_config(self, validate_raid_config_mock):
        raid_interface = MyRAIDInterface()
        node_mock = mock.MagicMock(target_raid_config={})
        task_mock = mock.MagicMock(node=node_mock)

        raid_interface.validate(task_mock)

        self.assertFalse(validate_raid_config_mock.called)

    @mock.patch.object(raid, 'validate_configuration', autospec=True)
    def test_validate_raid_config(self, common_validate_mock):
        with open(driver_base.RAID_CONFIG_SCHEMA, 'r') as raid_schema_fobj:
            raid_schema = json.load(raid_schema_fobj)
        raid_interface = MyRAIDInterface()

        raid_interface.validate_raid_config('task', 'some_raid_config')

        common_validate_mock.assert_called_once_with(
            'some_raid_config', raid_schema)

    @mock.patch.object(raid, 'get_logical_disk_properties', autospec=True)
    def test_get_logical_disk_properties(self, get_properties_mock):
        with open(driver_base.RAID_CONFIG_SCHEMA, 'r') as raid_schema_fobj:
            raid_schema = json.load(raid_schema_fobj)
        raid_interface = MyRAIDInterface()
        raid_interface.get_logical_disk_properties()
        get_properties_mock.assert_called_once_with(raid_schema)
ironic-5.1.0/ironic/tests/unit/drivers/pxe_grub_config.template0000664000567000056710000000174112674513466026141 0ustar  jenkinsjenkins00000000000000
set default=deploy
set timeout=5
set hidden_timeout_quiet=false

menuentry "deploy" {
    linuxefi /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/deploy_kernel selinux=0 troubleshoot=0 text disk=cciss/c0d0,sda,hda,vda iscsi_target_iqn=iqn-1be26c0b-03f2-4d2e-ae87-c02d7f33c123 deployment_id=1be26c0b-03f2-4d2e-ae87-c02d7f33c123 deployment_key=0123456789ABCDEFGHIJKLMNOPQRSTUV ironic_api_url=http://192.168.122.184:6385 test_param boot_server=192.0.2.1 root_device=vendor=fake,size=123 ipa-api-url=http://192.168.122.184:6385 ipa-driver-name=pxe_ssh boot_option=netboot boot_mode=uefi coreos.configdrive=0
    initrdefi /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/deploy_ramdisk
}

menuentry "boot_partition" {
    linuxefi /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/kernel root=(( ROOT )) ro text test_param boot_server=192.0.2.1
    initrdefi /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/ramdisk
}

menuentry "boot_whole_disk" {
    linuxefi chain.c32 mbr:(( DISK_IDENTIFIER ))
}
ironic-5.1.0/ironic/tests/unit/drivers/pxe_config.template0000664000567000056710000000207312674513466025121 0ustar  jenkinsjenkins00000000000000
default deploy

label deploy
kernel /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/deploy_kernel
append initrd=/tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/deploy_ramdisk selinux=0 disk=cciss/c0d0,sda,hda,vda iscsi_target_iqn=iqn-1be26c0b-03f2-4d2e-ae87-c02d7f33c123 deployment_id=1be26c0b-03f2-4d2e-ae87-c02d7f33c123 deployment_key=0123456789ABCDEFGHIJKLMNOPQRSTUV ironic_api_url=http://192.168.122.184:6385 troubleshoot=0 text test_param boot_option=netboot root_device=vendor=fake,size=123 ipa-api-url=http://192.168.122.184:6385 ipa-driver-name=pxe_ssh boot_mode=bios coreos.configdrive=0
ipappend 3

label boot_partition
kernel /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/kernel
append initrd=/tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/ramdisk root={{ ROOT }} ro text test_param

label boot_whole_disk
COM32 chain.c32
append mbr:{{ DISK_IDENTIFIER }}

label trusted_boot
kernel mboot
append tboot.gz --- /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/kernel root={{ ROOT }} ro text test_param intel_iommu=on --- /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/ramdisk
ironic-5.1.0/ironic/tests/unit/drivers/test_irmc.py0000664000567000056710000001104212674513466023602 0ustar  jenkinsjenkins00000000000000
# Copyright 2015 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Test class for iRMC Deploy Driver
"""

import mock
import testtools

from ironic.common import exception
from ironic.drivers import irmc
from ironic.drivers.modules import agent
from ironic.drivers.modules import iscsi_deploy


class IRMCVirtualMediaIscsiTestCase(testtools.TestCase):

    def setUp(self):
        irmc.boot.check_share_fs_mounted_patcher.start()
        self.addCleanup(irmc.boot.check_share_fs_mounted_patcher.stop)
        super(IRMCVirtualMediaIscsiTestCase, self).setUp()

    @mock.patch.object(irmc.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test___init___share_fs_mounted_ok(self, mock_try_import):
        mock_try_import.return_value = True

        driver = irmc.IRMCVirtualMediaIscsiDriver()

        self.assertIsInstance(driver.power, irmc.power.IRMCPower)
        self.assertIsInstance(driver.boot, irmc.boot.IRMCVirtualMediaBoot)
        self.assertIsInstance(driver.deploy, iscsi_deploy.ISCSIDeploy)
        self.assertIsInstance(driver.console,
                              irmc.ipmitool.IPMIShellinaboxConsole)
        self.assertIsInstance(driver.management,
                              irmc.management.IRMCManagement)
        self.assertIsInstance(driver.vendor, iscsi_deploy.VendorPassthru)
        self.assertIsInstance(driver.inspect, irmc.inspect.IRMCInspect)

    @mock.patch.object(irmc.importutils, 'try_import')
    def test___init___try_import_exception(self, mock_try_import):
        mock_try_import.return_value = False

        self.assertRaises(exception.DriverLoadError,
                          irmc.IRMCVirtualMediaIscsiDriver)

    @mock.patch.object(irmc.boot.IRMCVirtualMediaBoot, '__init__',
                       spec_set=True, autospec=True)
    def test___init___share_fs_not_mounted_exception(self, __init___mock):
        __init___mock.side_effect = iter(
            [exception.IRMCSharedFileSystemNotMounted(share='/share')])

        self.assertRaises(exception.IRMCSharedFileSystemNotMounted,
                          irmc.IRMCVirtualMediaIscsiDriver)
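# NOTE: The sketch below is illustrative only and not part of the upstream
# test suite. Both test classes in this module rely on the same guard: the
# driver's __init__ probes its optional vendor library through
# importutils.try_import() and refuses to load when the probe fails, so
# forcing the probe's result is enough to reproduce that path:
def _example_try_import_guard():
    with mock.patch.object(irmc.importutils, 'try_import',
                           return_value=False):
        try:
            irmc.IRMCVirtualMediaIscsiDriver()
        except exception.DriverLoadError:
            return True  # the expected failure mode without the library
    return False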
class IRMCVirtualMediaAgentTestCase(testtools.TestCase):

    def setUp(self):
        irmc.boot.check_share_fs_mounted_patcher.start()
        self.addCleanup(irmc.boot.check_share_fs_mounted_patcher.stop)
        super(IRMCVirtualMediaAgentTestCase, self).setUp()

    @mock.patch.object(irmc.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test___init___share_fs_mounted_ok(self, mock_try_import):
        mock_try_import.return_value = True

        driver = irmc.IRMCVirtualMediaAgentDriver()

        self.assertIsInstance(driver.power, irmc.power.IRMCPower)
        self.assertIsInstance(driver.boot, irmc.boot.IRMCVirtualMediaBoot)
        self.assertIsInstance(driver.deploy, agent.AgentDeploy)
        self.assertIsInstance(driver.console,
                              irmc.ipmitool.IPMIShellinaboxConsole)
        self.assertIsInstance(driver.management,
                              irmc.management.IRMCManagement)
        self.assertIsInstance(driver.vendor, irmc.agent.AgentVendorInterface)
        self.assertIsInstance(driver.inspect, irmc.inspect.IRMCInspect)

    @mock.patch.object(irmc.importutils, 'try_import')
    def test___init___try_import_exception(self, mock_try_import):
        mock_try_import.return_value = False

        self.assertRaises(exception.DriverLoadError,
                          irmc.IRMCVirtualMediaAgentDriver)

    @mock.patch.object(irmc.boot.IRMCVirtualMediaBoot, '__init__',
                       spec_set=True, autospec=True)
    def test___init___share_fs_not_mounted_exception(self, __init___mock):
        __init___mock.side_effect = iter([
            exception.IRMCSharedFileSystemNotMounted(share='/share')])

        self.assertRaises(exception.IRMCSharedFileSystemNotMounted,
                          irmc.IRMCVirtualMediaAgentDriver)
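# NOTE: The helper below is a hypothetical sketch, not part of the upstream
# test suite. The composition tests in this module (and in test_pxe.py,
# which follows) repeat one shape: construct a driver, then assertIsInstance
# on each of its interfaces. A table-driven variant of that check could look
# like this:
def _example_assert_composition(testcase, driver, expected):
    # 'expected' maps attribute name -> expected interface class, e.g.
    # {'power': irmc.power.IRMCPower, 'deploy': agent.AgentDeploy}
    for attr, interface_class in expected.items():
        testcase.assertIsInstance(getattr(driver, attr), interface_class)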
""" Test class for PXE Drivers """ import mock import testtools from ironic.common import exception from ironic.drivers.modules import agent from ironic.drivers.modules.amt import management as amt_management from ironic.drivers.modules.amt import power as amt_power from ironic.drivers.modules.amt import vendor as amt_vendor from ironic.drivers.modules.cimc import management as cimc_management from ironic.drivers.modules.cimc import power as cimc_power from ironic.drivers.modules import iboot from ironic.drivers.modules.ilo import console as ilo_console from ironic.drivers.modules.ilo import inspect as ilo_inspect from ironic.drivers.modules.ilo import management as ilo_management from ironic.drivers.modules.ilo import power as ilo_power from ironic.drivers.modules import ipminative from ironic.drivers.modules import ipmitool from ironic.drivers.modules.irmc import management as irmc_management from ironic.drivers.modules.irmc import power as irmc_power from ironic.drivers.modules import iscsi_deploy from ironic.drivers.modules.msftocs import management as msftocs_management from ironic.drivers.modules.msftocs import power as msftocs_power from ironic.drivers.modules import pxe as pxe_module from ironic.drivers.modules import seamicro from ironic.drivers.modules import snmp from ironic.drivers.modules import ssh from ironic.drivers.modules.ucs import management as ucs_management from ironic.drivers.modules.ucs import power as ucs_power from ironic.drivers.modules import virtualbox from ironic.drivers.modules import wol from ironic.drivers import pxe from ironic.drivers import utils class PXEDriversTestCase(testtools.TestCase): def test_pxe_ipmitool_driver(self): driver = pxe.PXEAndIPMIToolDriver() self.assertIsInstance(driver.power, ipmitool.IPMIPower) self.assertIsInstance(driver.console, ipmitool.IPMIShellinaboxConsole) self.assertIsInstance(driver.boot, pxe_module.PXEBoot) self.assertIsInstance(driver.deploy, iscsi_deploy.ISCSIDeploy) self.assertIsInstance(driver.management, ipmitool.IPMIManagement) self.assertIsNone(driver.inspect) # TODO(rameshg87): Need better way of asserting the routes. self.assertIsInstance(driver.vendor, utils.MixinVendorInterface) self.assertIsInstance(driver.raid, agent.AgentRAID) def test_pxe_ssh_driver(self): driver = pxe.PXEAndSSHDriver() self.assertIsInstance(driver.power, ssh.SSHPower) self.assertIsInstance(driver.boot, pxe_module.PXEBoot) self.assertIsInstance(driver.deploy, iscsi_deploy.ISCSIDeploy) self.assertIsInstance(driver.management, ssh.SSHManagement) self.assertIsInstance(driver.vendor, iscsi_deploy.VendorPassthru) self.assertIsNone(driver.inspect) self.assertIsInstance(driver.raid, agent.AgentRAID) @mock.patch.object(pxe.importutils, 'try_import', spec_set=True, autospec=True) def test_pxe_ipminative_driver(self, try_import_mock): try_import_mock.return_value = True driver = pxe.PXEAndIPMINativeDriver() self.assertIsInstance(driver.power, ipminative.NativeIPMIPower) self.assertIsInstance(driver.console, ipminative.NativeIPMIShellinaboxConsole) self.assertIsInstance(driver.boot, pxe_module.PXEBoot) self.assertIsInstance(driver.deploy, iscsi_deploy.ISCSIDeploy) self.assertIsInstance(driver.management, ipminative.NativeIPMIManagement) # TODO(rameshg87): Need better way of asserting the routes. 
        self.assertIsInstance(driver.vendor, utils.MixinVendorInterface)
        self.assertIsNone(driver.inspect)
        self.assertIsInstance(driver.raid, agent.AgentRAID)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_ipminative_driver_import_error(self, try_import_mock):
        try_import_mock.return_value = False

        self.assertRaises(exception.DriverLoadError,
                          pxe.PXEAndIPMINativeDriver)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_seamicro_driver(self, try_import_mock):
        try_import_mock.return_value = True

        driver = pxe.PXEAndSeaMicroDriver()

        self.assertIsInstance(driver.power, seamicro.Power)
        self.assertIsInstance(driver.boot, pxe_module.PXEBoot)
        self.assertIsInstance(driver.deploy, iscsi_deploy.ISCSIDeploy)
        self.assertIsInstance(driver.management, seamicro.Management)
        self.assertIsInstance(driver.seamicro_vendor, seamicro.VendorPassthru)
        self.assertIsInstance(driver.pxe_vendor, iscsi_deploy.VendorPassthru)
        self.assertIsInstance(driver.vendor, utils.MixinVendorInterface)
        self.assertIsInstance(driver.console, seamicro.ShellinaboxConsole)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_seamicro_driver_import_error(self, try_import_mock):
        try_import_mock.return_value = False

        self.assertRaises(exception.DriverLoadError,
                          pxe.PXEAndSeaMicroDriver)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_iboot_driver(self, try_import_mock):
        try_import_mock.return_value = True

        driver = pxe.PXEAndIBootDriver()

        self.assertIsInstance(driver.power, iboot.IBootPower)
        self.assertIsInstance(driver.boot, pxe_module.PXEBoot)
        self.assertIsInstance(driver.deploy, iscsi_deploy.ISCSIDeploy)
        self.assertIsInstance(driver.vendor, iscsi_deploy.VendorPassthru)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_iboot_driver_import_error(self, try_import_mock):
        try_import_mock.return_value = False

        self.assertRaises(exception.DriverLoadError,
                          pxe.PXEAndIBootDriver)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_ilo_driver(self, try_import_mock):
        try_import_mock.return_value = True

        driver = pxe.PXEAndIloDriver()

        self.assertIsInstance(driver.power, ilo_power.IloPower)
        self.assertIsInstance(driver.boot, pxe_module.PXEBoot)
        self.assertIsInstance(driver.deploy, iscsi_deploy.ISCSIDeploy)
        self.assertIsInstance(driver.vendor, iscsi_deploy.VendorPassthru)
        self.assertIsInstance(driver.console,
                              ilo_console.IloConsoleInterface)
        self.assertIsInstance(driver.management,
                              ilo_management.IloManagement)
        self.assertIsInstance(driver.inspect, ilo_inspect.IloInspect)
        self.assertIsInstance(driver.raid, agent.AgentRAID)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_ilo_driver_import_error(self, try_import_mock):
        try_import_mock.return_value = False

        self.assertRaises(exception.DriverLoadError,
                          pxe.PXEAndIloDriver)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_snmp_driver(self, try_import_mock):
        try_import_mock.return_value = True

        driver = pxe.PXEAndSNMPDriver()

        self.assertIsInstance(driver.power, snmp.SNMPPower)
        self.assertIsInstance(driver.boot, pxe_module.PXEBoot)
        self.assertIsInstance(driver.deploy, iscsi_deploy.ISCSIDeploy)
        self.assertIsInstance(driver.vendor, iscsi_deploy.VendorPassthru)
        self.assertIsNone(driver.management)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_snmp_driver_import_error(self, try_import_mock):
        try_import_mock.return_value = False

        self.assertRaises(exception.DriverLoadError,
                          pxe.PXEAndSNMPDriver)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_irmc_driver(self, try_import_mock):
        try_import_mock.return_value = True

        driver = pxe.PXEAndIRMCDriver()

        self.assertIsInstance(driver.power, irmc_power.IRMCPower)
        self.assertIsInstance(driver.console,
                              ipmitool.IPMIShellinaboxConsole)
        self.assertIsInstance(driver.boot, pxe_module.PXEBoot)
        self.assertIsInstance(driver.deploy, iscsi_deploy.ISCSIDeploy)
        self.assertIsInstance(driver.management,
                              irmc_management.IRMCManagement)
        self.assertIsInstance(driver.vendor, iscsi_deploy.VendorPassthru)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_irmc_driver_import_error(self, try_import_mock):
        try_import_mock.return_value = False

        self.assertRaises(exception.DriverLoadError,
                          pxe.PXEAndIRMCDriver)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_vbox_driver(self, try_import_mock):
        try_import_mock.return_value = True

        driver = pxe.PXEAndVirtualBoxDriver()

        self.assertIsInstance(driver.power, virtualbox.VirtualBoxPower)
        self.assertIsInstance(driver.boot, pxe_module.PXEBoot)
        self.assertIsInstance(driver.deploy, iscsi_deploy.ISCSIDeploy)
        self.assertIsInstance(driver.management,
                              virtualbox.VirtualBoxManagement)
        self.assertIsInstance(driver.vendor, iscsi_deploy.VendorPassthru)
        self.assertIsInstance(driver.raid, agent.AgentRAID)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_vbox_driver_import_error(self, try_import_mock):
        try_import_mock.return_value = False

        self.assertRaises(exception.DriverLoadError,
                          pxe.PXEAndVirtualBoxDriver)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_amt_driver(self, try_import_mock):
        try_import_mock.return_value = True

        driver = pxe.PXEAndAMTDriver()

        self.assertIsInstance(driver.power, amt_power.AMTPower)
        self.assertIsInstance(driver.boot, pxe_module.PXEBoot)
        self.assertIsInstance(driver.deploy, iscsi_deploy.ISCSIDeploy)
        self.assertIsInstance(driver.management, amt_management.AMTManagement)
        self.assertIsInstance(driver.vendor, amt_vendor.AMTPXEVendorPassthru)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_amt_driver_import_error(self, try_import_mock):
        try_import_mock.return_value = False

        self.assertRaises(exception.DriverLoadError,
                          pxe.PXEAndAMTDriver)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_msftocs_driver(self, try_import_mock):
        try_import_mock.return_value = True

        driver = pxe.PXEAndMSFTOCSDriver()

        self.assertIsInstance(driver.power, msftocs_power.MSFTOCSPower)
        self.assertIsInstance(driver.boot, pxe_module.PXEBoot)
        self.assertIsInstance(driver.deploy, iscsi_deploy.ISCSIDeploy)
        self.assertIsInstance(driver.management,
                              msftocs_management.MSFTOCSManagement)
        self.assertIsInstance(driver.vendor, iscsi_deploy.VendorPassthru)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_ucs_driver(self, try_import_mock):
        try_import_mock.return_value = True

        driver = pxe.PXEAndUcsDriver()

        self.assertIsInstance(driver.power, ucs_power.Power)
        self.assertIsInstance(driver.boot, pxe_module.PXEBoot)
        self.assertIsInstance(driver.deploy, iscsi_deploy.ISCSIDeploy)
        self.assertIsInstance(driver.management, ucs_management.UcsManagement)
        self.assertIsInstance(driver.vendor, iscsi_deploy.VendorPassthru)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_ucs_driver_import_error(self, try_import_mock):
        try_import_mock.return_value = False

        self.assertRaises(exception.DriverLoadError,
                          pxe.PXEAndUcsDriver)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_cimc_driver(self, try_import_mock):
        try_import_mock.return_value = True

        driver = pxe.PXEAndCIMCDriver()

        self.assertIsInstance(driver.power, cimc_power.Power)
        self.assertIsInstance(driver.boot, pxe_module.PXEBoot)
        self.assertIsInstance(driver.deploy, iscsi_deploy.ISCSIDeploy)
        self.assertIsInstance(driver.management,
                              cimc_management.CIMCManagement)
        self.assertIsInstance(driver.vendor, iscsi_deploy.VendorPassthru)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_cimc_driver_import_error(self, try_import_mock):
        try_import_mock.return_value = False

        self.assertRaises(exception.DriverLoadError,
                          pxe.PXEAndCIMCDriver)

    @mock.patch.object(pxe.importutils, 'try_import', spec_set=True,
                       autospec=True)
    def test_pxe_wakeonlan_driver(self, try_import_mock):
        try_import_mock.return_value = True

        driver = pxe.PXEAndWakeOnLanDriver()

        self.assertIsInstance(driver.power, wol.WakeOnLanPower)
        self.assertIsInstance(driver.boot, pxe_module.PXEBoot)
        self.assertIsInstance(driver.deploy, iscsi_deploy.ISCSIDeploy)
        self.assertIsInstance(driver.vendor, iscsi_deploy.VendorPassthru)
ironic-5.1.0/ironic/tests/unit/drivers/ipxe_config.template0000664000567000056710000000145112674513466025271 0ustar  jenkinsjenkins00000000000000
#!ipxe

dhcp

goto deploy

:deploy
kernel http://1.2.3.4:1234/deploy_kernel selinux=0 disk=cciss/c0d0,sda,hda,vda iscsi_target_iqn=iqn-1be26c0b-03f2-4d2e-ae87-c02d7f33c123 deployment_id=1be26c0b-03f2-4d2e-ae87-c02d7f33c123 deployment_key=0123456789ABCDEFGHIJKLMNOPQRSTUV ironic_api_url=http://192.168.122.184:6385 troubleshoot=0 text test_param boot_option=netboot ip=${ip}:${next-server}:${gateway}:${netmask} BOOTIF=${mac} root_device=vendor=fake,size=123 ipa-api-url=http://192.168.122.184:6385 ipa-driver-name=pxe_ssh boot_mode=bios initrd=deploy_ramdisk coreos.configdrive=0
initrd http://1.2.3.4:1234/deploy_ramdisk
boot

:boot_partition
kernel http://1.2.3.4:1234/kernel root={{ ROOT }} ro text test_param initrd=ramdisk
initrd http://1.2.3.4:1234/ramdisk
boot

:boot_whole_disk
sanboot --no-describe
ironic-5.1.0/ironic/tests/unit/drivers/modules/0000775000567000056710000000000012674513633022705 5ustar  jenkinsjenkins00000000000000
ironic-5.1.0/ironic/tests/unit/drivers/modules/test_agent_client.py0000664000567000056710000002207112674513470026753 0ustar  jenkinsjenkins00000000000000
# Copyright 2014 Rackspace, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json

import mock
import requests
import six
from six.moves import http_client

from ironic.common import exception
from ironic.drivers.modules import agent_client
from ironic.tests import base


class MockResponse(object):
    status_code = http_client.OK

    def __init__(self, text):
        assert isinstance(text, six.string_types)
        self.text = text

    def json(self):
        return json.loads(self.text)


class MockNode(object):
    def __init__(self):
        self.uuid = 'uuid'
        self.driver_info = {}
        self.driver_internal_info = {
            'agent_url': "http://127.0.0.1:9999",
            'clean_version': {'generic': '1'}
        }
        self.instance_info = {}

    def as_dict(self):
        return {
            'uuid': self.uuid,
            'driver_info': self.driver_info,
            'driver_internal_info': self.driver_internal_info,
            'instance_info': self.instance_info
        }


class TestAgentClient(base.TestCase):
    def setUp(self):
        super(TestAgentClient, self).setUp()
        self.client = agent_client.AgentClient()
        self.client.session = mock.MagicMock(autospec=requests.Session)
        self.node = MockNode()

    def test_content_type_header(self):
        client = agent_client.AgentClient()
        self.assertEqual('application/json',
                         client.session.headers['Content-Type'])

    def test__get_command_url(self):
        command_url = self.client._get_command_url(self.node)
        expected = self.node.driver_internal_info['agent_url'] + '/v1/commands'
        self.assertEqual(expected, command_url)

    def test__get_command_url_fail(self):
        del self.node.driver_internal_info['agent_url']
        self.assertRaises(exception.IronicException,
                          self.client._get_command_url,
                          self.node)

    def test__get_command_body(self):
        expected = json.dumps({'name': 'prepare_image', 'params': {}})
        self.assertEqual(expected,
                         self.client._get_command_body('prepare_image', {}))

    def test__command(self):
        response_data = {'status': 'ok'}
        response_text = json.dumps(response_data)
        self.client.session.post.return_value = MockResponse(response_text)
        method = 'standby.run_image'
        image_info = {'image_id': 'test_image'}
        params = {'image_info': image_info}

        url = self.client._get_command_url(self.node)
        body = self.client._get_command_body(method, params)

        response = self.client._command(self.node, method, params)
        self.assertEqual(response, response_data)
        self.client.session.post.assert_called_once_with(
            url,
            data=body,
            params={'wait': 'false'})

    def test__command_fail_json(self):
        response_text = 'this be not json matey!'
        self.client.session.post.return_value = MockResponse(response_text)
        method = 'standby.run_image'
        image_info = {'image_id': 'test_image'}
        params = {'image_info': image_info}

        url = self.client._get_command_url(self.node)
        body = self.client._get_command_body(method, params)

        self.assertRaises(exception.IronicException,
                          self.client._command,
                          self.node, method, params)
        self.client.session.post.assert_called_once_with(
            url,
            data=body,
            params={'wait': 'false'})

    def test__command_fail_post(self):
        error = 'Boom'
        self.client.session.post.side_effect = requests.RequestException(error)
        method = 'foo.bar'
        params = {}

        self.client._get_command_url(self.node)
        self.client._get_command_body(method, params)

        e = self.assertRaises(exception.IronicException,
                              self.client._command,
                              self.node, method, params)
        self.assertEqual('Error invoking agent command %(method)s for node '
                         '%(node)s. Error: %(error)s' %
                         {'method': method, 'node': self.node.uuid,
                          'error': error}, str(e))

    def test_get_commands_status(self):
        with mock.patch.object(self.client.session, 'get',
                               autospec=True) as mock_get:
            res = mock.MagicMock(spec_set=['json'])
            res.json.return_value = {'commands': []}
            mock_get.return_value = res
            self.assertEqual([], self.client.get_commands_status(self.node))

    @mock.patch('uuid.uuid4', mock.MagicMock(spec_set=[],
                                             return_value='uuid'))
    def test_prepare_image(self):
        self.client._command = mock.MagicMock(spec_set=[])
        image_info = {'image_id': 'image'}
        params = {'image_info': image_info}

        self.client.prepare_image(self.node,
                                  image_info,
                                  wait=False)
        self.client._command.assert_called_once_with(
            node=self.node, method='standby.prepare_image',
            params=params, wait=False)

    @mock.patch('uuid.uuid4', mock.MagicMock(spec_set=[],
                                             return_value='uuid'))
    def test_prepare_image_with_configdrive(self):
        self.client._command = mock.MagicMock(spec_set=[])
        configdrive_url = 'http://swift/configdrive'
        self.node.instance_info['configdrive'] = configdrive_url
        image_info = {'image_id': 'image'}
        params = {
            'image_info': image_info,
            'configdrive': configdrive_url,
        }

        self.client.prepare_image(self.node,
                                  image_info,
                                  wait=False)
        self.client._command.assert_called_once_with(
            node=self.node, method='standby.prepare_image',
            params=params, wait=False)

    @mock.patch('uuid.uuid4', mock.MagicMock(spec_set=[],
                                             return_value='uuid'))
    def test_start_iscsi_target(self):
        self.client._command = mock.MagicMock(spec_set=[])
        iqn = 'fake-iqn'
        params = {'iqn': iqn}

        self.client.start_iscsi_target(self.node, iqn)
        self.client._command.assert_called_once_with(
            node=self.node, method='iscsi.start_iscsi_target',
            params=params, wait=True)

    @mock.patch('uuid.uuid4', mock.MagicMock(spec_set=[],
                                             return_value='uuid'))
    def test_install_bootloader(self):
        self.client._command = mock.MagicMock(spec_set=[])
        root_uuid = 'fake-root-uuid'
        efi_system_part_uuid = 'fake-efi-system-part-uuid'
        params = {'root_uuid': root_uuid,
                  'efi_system_part_uuid': efi_system_part_uuid}

        self.client.install_bootloader(
            self.node, root_uuid, efi_system_part_uuid=efi_system_part_uuid)
        self.client._command.assert_called_once_with(
            node=self.node, method='image.install_bootloader',
            params=params, wait=True)

    def test_get_clean_steps(self):
        self.client._command = mock.MagicMock(spec_set=[])
        ports = []
        expected_params = {
            'node': self.node.as_dict(),
            'ports': []
        }

        self.client.get_clean_steps(self.node,
                                    ports)
        self.client._command.assert_called_once_with(
            node=self.node, method='clean.get_clean_steps',
            params=expected_params, wait=True)

    def test_execute_clean_step(self):
        self.client._command = mock.MagicMock(spec_set=[])
        ports = []
        step = {'priority': 10, 'step': 'erase_devices', 'interface': 'deploy'}
        expected_params = {
            'step': step,
            'node': self.node.as_dict(),
            'ports': [],
            'clean_version': self.node.driver_internal_info.get(
                'hardware_manager_version')
        }

        self.client.execute_clean_step(step,
                                       self.node,
                                       ports)
        self.client._command.assert_called_once_with(
            node=self.node, method='clean.execute_clean_step',
            params=expected_params)

    def test_power_off(self):
        self.client._command = mock.MagicMock(spec_set=[])

        self.client.power_off(self.node)
        self.client._command.assert_called_once_with(
            node=self.node, method='standby.power_off', params={})

    def test_sync(self):
        self.client._command = mock.MagicMock(spec_set=[])
        self.client.sync(self.node)
        self.client._command.assert_called_once_with(
            node=self.node, method='standby.sync', params={}, wait=True)
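# NOTE: The sketch below is illustrative only and not part of the upstream
# test suite. Pieced together from the assertions above, an agent command is
# a plain JSON document POSTed to <agent_url>/v1/commands, with waiting
# behaviour selected by the 'wait' query parameter:
def _example_command_request():
    node = MockNode()
    client = agent_client.AgentClient()
    url = client._get_command_url(node)
    body = client._get_command_body('standby.power_off', {})
    # url is 'http://127.0.0.1:9999/v1/commands'; body is the JSON document
    # {"name": "standby.power_off", "params": {}} (key order may vary).
    return url, body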
ironic-5.1.0/ironic/tests/unit/drivers/modules/test_iboot.py0000664000567000056710000004116012674513466025440 0ustar  jenkinsjenkins00000000000000
# -*- coding: utf-8 -*-
#
# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Test class for iBoot PDU driver module."""

import types

import mock

from ironic.common import driver_factory
from ironic.common import exception
from ironic.common import states
from ironic.conductor import task_manager
from ironic.drivers.modules import iboot
from ironic.tests.unit.conductor import mgr_utils
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils


INFO_DICT = db_utils.get_test_iboot_info()


class IBootPrivateMethodTestCase(db_base.DbTestCase):

    def setUp(self):
        super(IBootPrivateMethodTestCase, self).setUp()
        self.config(max_retry=0, group='iboot')
        self.config(retry_interval=0, group='iboot')

    def test__parse_driver_info_good(self):
        node = obj_utils.create_test_node(
            self.context,
            driver='fake_iboot',
            driver_info=INFO_DICT)
        info = iboot._parse_driver_info(node)
        self.assertIsNotNone(info.get('address'))
        self.assertIsNotNone(info.get('username'))
        self.assertIsNotNone(info.get('password'))
        self.assertIsNotNone(info.get('port'))
        self.assertIsNotNone(info.get('relay_id'))

    def test__parse_driver_info_good_with_explicit_port(self):
        info = dict(INFO_DICT)
        info['iboot_port'] = '1234'
        node = obj_utils.create_test_node(
            self.context,
            driver='fake_iboot',
            driver_info=info)
        info = iboot._parse_driver_info(node)
        self.assertEqual(1234, info.get('port'))

    def test__parse_driver_info_good_with_explicit_relay_id(self):
        info = dict(INFO_DICT)
        info['iboot_relay_id'] = '2'
        node = obj_utils.create_test_node(
            self.context,
            driver='fake_iboot',
            driver_info=info)
        info = iboot._parse_driver_info(node)
        self.assertEqual(2, info.get('relay_id'))

    def test__parse_driver_info_missing_address(self):
        info = dict(INFO_DICT)
        del info['iboot_address']
        node = obj_utils.create_test_node(
            self.context,
            driver='fake_iboot',
            driver_info=info)
        self.assertRaises(exception.MissingParameterValue,
                          iboot._parse_driver_info,
                          node)

    def test__parse_driver_info_missing_username(self):
        info = dict(INFO_DICT)
        del info['iboot_username']
        node = obj_utils.create_test_node(
            self.context,
            driver='fake_iboot',
            driver_info=info)
        self.assertRaises(exception.MissingParameterValue,
                          iboot._parse_driver_info,
                          node)

    def test__parse_driver_info_missing_password(self):
        info = dict(INFO_DICT)
        del info['iboot_password']
        node = obj_utils.create_test_node(
            self.context,
            driver='fake_iboot',
            driver_info=info)
        self.assertRaises(exception.MissingParameterValue,
                          iboot._parse_driver_info,
                          node)

    def test__parse_driver_info_bad_port(self):
        info = dict(INFO_DICT)
        info['iboot_port'] = 'not-integer'
        node = obj_utils.create_test_node(
            self.context,
            driver='fake_iboot',
            driver_info=info)
        self.assertRaises(exception.InvalidParameterValue,
                          iboot._parse_driver_info,
                          node)

    def test__parse_driver_info_bad_relay_id(self):
        info = dict(INFO_DICT)
        info['iboot_relay_id'] = 'not-integer'
        node = obj_utils.create_test_node(
            self.context,
            driver='fake_iboot',
            driver_info=info)
        self.assertRaises(exception.InvalidParameterValue,
                          iboot._parse_driver_info,
                          node)

    @mock.patch.object(iboot, '_get_connection', autospec=True)
    def test__power_status_on(self, mock_get_conn):
        mock_connection = mock.MagicMock(spec_set=['get_relays'])
        mock_connection.get_relays.return_value = [True]
        mock_get_conn.return_value = mock_connection
        node = obj_utils.create_test_node(
            self.context,
            driver='fake_iboot',
            driver_info=INFO_DICT)
        info = iboot._parse_driver_info(node)

        status = iboot._power_status(info)

        self.assertEqual(states.POWER_ON, status)
        mock_get_conn.assert_called_once_with(info)
        mock_connection.get_relays.assert_called_once_with()

    @mock.patch.object(iboot, '_get_connection', autospec=True)
    def test__power_status_off(self, mock_get_conn):
        mock_connection = mock.MagicMock(spec_set=['get_relays'])
        mock_connection.get_relays.return_value = [False]
        mock_get_conn.return_value = mock_connection
        node = obj_utils.create_test_node(
            self.context,
            driver='fake_iboot',
            driver_info=INFO_DICT)
        info = iboot._parse_driver_info(node)

        status = iboot._power_status(info)

        self.assertEqual(states.POWER_OFF, status)
        mock_get_conn.assert_called_once_with(info)
        mock_connection.get_relays.assert_called_once_with()

    @mock.patch.object(iboot, '_get_connection', autospec=True)
    def test__power_status_exception(self, mock_get_conn):
        mock_connection = mock.MagicMock(spec_set=['get_relays'])
        mock_connection.get_relays.return_value = None
        mock_get_conn.return_value = mock_connection
        node = obj_utils.create_test_node(
            self.context,
            driver='fake_iboot',
            driver_info=INFO_DICT)
        info = iboot._parse_driver_info(node)

        status = iboot._power_status(info)

        self.assertEqual(states.ERROR, status)
        mock_get_conn.assert_called_once_with(info)
        mock_connection.get_relays.assert_called_once_with()

    @mock.patch.object(iboot, '_get_connection', autospec=True)
    def test__power_status_exception_type_error(self, mock_get_conn):
        mock_connection = mock.MagicMock(spec_set=['get_relays'])
        side_effect = TypeError("Surprise!")
        mock_connection.get_relays.side_effect = side_effect
        mock_get_conn.return_value = mock_connection
        node = obj_utils.create_test_node(
            self.context,
            driver='fake_iboot',
            driver_info=INFO_DICT)
        info = iboot._parse_driver_info(node)

        status = iboot._power_status(info)

        self.assertEqual(states.ERROR, status)
        mock_get_conn.assert_called_once_with(info)
        mock_connection.get_relays.assert_called_once_with()

    @mock.patch.object(iboot, '_get_connection', autospec=True)
    def test__power_status_exception_index_error(self, mock_get_conn):
        mock_connection = mock.MagicMock(spec_set=['get_relays'])
        side_effect = IndexError("Gotcha!")
        mock_connection.get_relays.side_effect = side_effect
        mock_get_conn.return_value = mock_connection
        node = obj_utils.create_test_node(
            self.context,
            driver='fake_iboot',
            driver_info=INFO_DICT)
        info = iboot._parse_driver_info(node)

        status = iboot._power_status(info)

        self.assertEqual(states.ERROR, status)
        mock_get_conn.assert_called_once_with(info)
        mock_connection.get_relays.assert_called_once_with()

    @mock.patch.object(iboot, '_get_connection', autospec=True)
    def test__power_status_error(self, mock_get_conn):
        mock_connection = mock.MagicMock(spec_set=['get_relays'])
        mock_connection.get_relays.return_value = list()
        mock_get_conn.return_value = mock_connection
        node = obj_utils.create_test_node(
            self.context,
            driver='fake_iboot',
            driver_info=INFO_DICT)
        info = iboot._parse_driver_info(node)

        status = iboot._power_status(info)

        self.assertEqual(states.ERROR, status)
        mock_get_conn.assert_called_once_with(info)
        mock_connection.get_relays.assert_called_once_with()

    @mock.patch.object(iboot, '_get_connection', autospec=True)
    def test__power_status_retries(self, mock_get_conn):
        self.config(max_retry=1, group='iboot')
        mock_connection = mock.MagicMock(spec_set=['get_relays'])
        side_effect = TypeError("Surprise!")
        mock_connection.get_relays.side_effect = side_effect
        mock_get_conn.return_value = mock_connection
        node = obj_utils.create_test_node(
            self.context,
            driver='fake_iboot',
            driver_info=INFO_DICT)
        info = iboot._parse_driver_info(node)

        status = iboot._power_status(info)

        self.assertEqual(states.ERROR, status)
        mock_get_conn.assert_called_once_with(info)
        self.assertEqual(2, mock_connection.get_relays.call_count)
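# NOTE: The sketch below is illustrative only and not part of the upstream
# test suite. The retry arithmetic asserted above generalizes as: with
# max_retry=N in the [iboot] config section, a persistently failing call is
# attempted N + 1 times in total before the status is reported as ERROR.
def _example_expected_attempts(max_retry):
    return max_retry + 1  # the initial attempt plus N retries

# e.g. test__power_status_retries sets max_retry=1 and asserts exactly
# _example_expected_attempts(1) == 2 calls to get_relays().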
class IBootDriverTestCase(db_base.DbTestCase):

    def setUp(self):
        super(IBootDriverTestCase, self).setUp()
        self.config(max_retry=0, group='iboot')
        self.config(retry_interval=0, group='iboot')
        self.config(reboot_delay=0, group='iboot')
        mgr_utils.mock_the_extension_manager(driver='fake_iboot')
        self.driver = driver_factory.get_driver('fake_iboot')
        self.node = obj_utils.create_test_node(
            self.context,
            driver='fake_iboot',
            driver_info=INFO_DICT)
        self.info = iboot._parse_driver_info(self.node)

    def test_get_properties(self):
        expected = iboot.COMMON_PROPERTIES
        with task_manager.acquire(
                self.context, self.node.uuid, shared=True) as task:
            self.assertEqual(expected, task.driver.get_properties())

    @mock.patch.object(iboot, '_power_status', autospec=True)
    @mock.patch.object(iboot, '_switch', autospec=True)
    def test_set_power_state_good(self, mock_switch, mock_power_status):
        mock_power_status.return_value = states.POWER_ON

        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.power.set_power_state(task, states.POWER_ON)

        # ensure functions were called with the valid parameters
        mock_switch.assert_called_once_with(self.info, True)
        mock_power_status.assert_called_once_with(self.info)

    @mock.patch.object(iboot, '_power_status', autospec=True)
    @mock.patch.object(iboot, '_switch', autospec=True)
    def test_set_power_state_bad(self, mock_switch, mock_power_status):
        mock_power_status.return_value = states.POWER_OFF

        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.PowerStateFailure,
                              task.driver.power.set_power_state,
                              task, states.POWER_ON)

        # ensure functions were called with the valid parameters
        mock_switch.assert_called_once_with(self.info, True)
        mock_power_status.assert_called_once_with(self.info)

    @mock.patch.object(iboot, '_power_status', autospec=True)
    @mock.patch.object(iboot, '_switch', autospec=True)
    def test_set_power_state_retry(self, mock_switch, mock_power_status):
        self.config(max_retry=2, group='iboot')
        mock_power_status.return_value = states.POWER_OFF

        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.PowerStateFailure,
                              task.driver.power.set_power_state,
                              task, states.POWER_ON)

        # ensure functions were called with the valid parameters
        mock_switch.assert_called_once_with(self.info, True)
        # 1 + 2 retries
        self.assertEqual(3, mock_power_status.call_count)

    @mock.patch.object(iboot, '_power_status', autospec=True)
    @mock.patch.object(iboot, '_switch', autospec=True)
    def test_set_power_state_invalid_parameter(self, mock_switch,
                                               mock_power_status):
        mock_power_status.return_value = states.POWER_ON

        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              task.driver.power.set_power_state,
                              task, states.NOSTATE)

    @mock.patch.object(iboot, '_sleep_switch', spec_set=types.FunctionType)
    @mock.patch.object(iboot, '_power_status', autospec=True)
    @mock.patch.object(iboot, '_switch', spec_set=types.FunctionType)
    def test_reboot_good(self, mock_switch, mock_power_status,
                         mock_sleep_switch):
        self.config(reboot_delay=3, group='iboot')
        manager = mock.MagicMock(spec_set=['switch', 'sleep'])
        mock_power_status.return_value = states.POWER_ON

        manager.attach_mock(mock_switch, 'switch')
        manager.attach_mock(mock_sleep_switch, 'sleep')
        expected = [mock.call.switch(self.info, False),
                    mock.call.sleep(3),
                    mock.call.switch(self.info, True)]

        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.power.reboot(task)

        self.assertEqual(manager.mock_calls, expected)

    @mock.patch.object(iboot, '_sleep_switch', spec_set=types.FunctionType)
    @mock.patch.object(iboot, '_power_status', autospec=True)
    @mock.patch.object(iboot, '_switch', spec_set=types.FunctionType)
    def test_reboot_bad(self, mock_switch, mock_power_status,
                        mock_sleep_switch):
        self.config(reboot_delay=3, group='iboot')
        manager = mock.MagicMock(spec_set=['switch', 'sleep'])
        mock_power_status.return_value = states.POWER_OFF

        manager.attach_mock(mock_switch, 'switch')
        manager.attach_mock(mock_sleep_switch, 'sleep')
        expected = [mock.call.switch(self.info, False),
                    mock.call.sleep(3),
                    mock.call.switch(self.info, True)]

        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.PowerStateFailure,
                              task.driver.power.reboot, task)

        self.assertEqual(manager.mock_calls, expected)

    @mock.patch.object(iboot, '_power_status', autospec=True)
    @mock.patch.object(iboot, '_get_connection', autospec=True)
    def test__switch_retries(self, mock_get_conn, mock_power_status):
        self.config(max_retry=1, group='iboot')
        mock_power_status.return_value = states.POWER_ON

        mock_connection = mock.MagicMock(spec_set=['switch'])
        side_effect = TypeError("Surprise!")
        mock_connection.switch.side_effect = side_effect
        mock_get_conn.return_value = mock_connection

        iboot._switch(self.info, False)
        self.assertEqual(2, mock_connection.switch.call_count)

    @mock.patch.object(iboot, '_power_status', autospec=True)
    def test_get_power_state(self, mock_power_status):
        mock_power_status.return_value = states.POWER_ON

        with task_manager.acquire(self.context, self.node.uuid) as task:
            state = task.driver.power.get_power_state(task)
            self.assertEqual(state, states.POWER_ON)

        # ensure functions were called with the valid parameters
        mock_power_status.assert_called_once_with(self.info)

    @mock.patch.object(iboot, '_parse_driver_info', autospec=True)
    def test_validate_good(self, parse_drv_info_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.power.validate(task)
        self.assertEqual(1, parse_drv_info_mock.call_count)

    @mock.patch.object(iboot, '_parse_driver_info', autospec=True)
    def test_validate_fails(self, parse_drv_info_mock):
        side_effect = iter([exception.InvalidParameterValue("Bad input")])
        parse_drv_info_mock.side_effect = side_effect
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              task.driver.power.validate, task)
        self.assertEqual(1, parse_drv_info_mock.call_count)
ironic-5.1.0/ironic/tests/unit/drivers/modules/test_seamicro.py0000664000567000056710000007371112674513466026125 0ustar  jenkinsjenkins00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Test class for Ironic SeaMicro driver."""

import uuid

import mock
from oslo_utils import uuidutils
from seamicroclient import client as seamicro_client
from seamicroclient import exceptions as seamicro_client_exception
from six.moves import http_client

from ironic.common import boot_devices
from ironic.common import driver_factory
from ironic.common import exception
from ironic.common import states
from ironic.conductor import task_manager
from ironic.drivers.modules import console_utils
from ironic.drivers.modules import seamicro
from ironic.tests.unit.conductor import mgr_utils
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils


INFO_DICT = db_utils.get_test_seamicro_info()


class Fake_Server(object):
    def __init__(self, active=False, *args, **kwargs):
        self.active = active
        self.nic = {'0': {'untaggedVlan': ''}}

    def power_on(self):
        self.active = True

    def power_off(self, force=False):
        self.active = False

    def reset(self):
        self.active = True

    def set_untagged_vlan(self, vlan_id):
        return

    def attach_volume(self, volume_id):
        return

    def detach_volume(self):
        return

    def set_boot_order(self, boot_order):
        return

    def refresh(self, wait=0):
        return self


class Fake_Volume(object):
    def __init__(self, id=None, *args, **kwargs):
        if id is None:
            self.id = "%s/%s/%s" % ("0", "ironic-p6-6", str(uuid.uuid4()))
        else:
            self.id = id


class Fake_Pool(object):
    def __init__(self, freeSize=None, *args, **kwargs):
        self.freeSize = freeSize


class SeaMicroValidateParametersTestCase(db_base.DbTestCase):

    def test__parse_driver_info_good(self):
        # make sure we get back the expected things
        node = obj_utils.get_test_node(
            self.context,
            driver='fake_seamicro',
            driver_info=INFO_DICT)
        info = seamicro._parse_driver_info(node)
        self.assertIsNotNone(info.get('api_endpoint'))
        self.assertIsNotNone(info.get('username'))
        self.assertIsNotNone(info.get('password'))
        self.assertIsNotNone(info.get('server_id'))
        self.assertIsNotNone(info.get('uuid'))

    def test__parse_driver_info_missing_api_endpoint(self):
        # make sure error is raised when info is missing
        info = dict(INFO_DICT)
        del info['seamicro_api_endpoint']
        node = obj_utils.get_test_node(self.context, driver_info=info)
        self.assertRaises(exception.MissingParameterValue,
                          seamicro._parse_driver_info,
                          node)

    def test__parse_driver_info_missing_username(self):
        # make sure error is raised when info is missing
        info = dict(INFO_DICT)
        del info['seamicro_username']
        node = obj_utils.get_test_node(self.context, driver_info=info)
        self.assertRaises(exception.MissingParameterValue,
                          seamicro._parse_driver_info,
                          node)

    def test__parse_driver_info_missing_password(self):
        # make sure error is raised when info is missing
        info = dict(INFO_DICT)
        del info['seamicro_password']
        node = obj_utils.get_test_node(self.context, driver_info=info)
        self.assertRaises(exception.MissingParameterValue,
                          seamicro._parse_driver_info,
                          node)

    def test__parse_driver_info_missing_server_id(self):
        # make sure error is raised when info is missing
        info = dict(INFO_DICT)
        del info['seamicro_server_id']
        node = obj_utils.get_test_node(self.context, driver_info=info)
        self.assertRaises(exception.MissingParameterValue,
                          seamicro._parse_driver_info,
                          node)

    def test__parse_driver_info_empty_terminal_port(self):
        info = dict(INFO_DICT)
        info['seamicro_terminal_port'] = ''
        node = obj_utils.get_test_node(self.context, driver_info=info)
        self.assertRaises(exception.InvalidParameterValue,
                          seamicro._parse_driver_info,
                          node)


@mock.patch('eventlet.greenthread.sleep', lambda n: None)
class SeaMicroPrivateMethodsTestCase(db_base.DbTestCase):

    def setUp(self):
        super(SeaMicroPrivateMethodsTestCase, self).setUp()
        n = {
            'driver': 'fake_seamicro',
            'driver_info': INFO_DICT
        }
        self.node = obj_utils.create_test_node(self.context, **n)
        self.Server = Fake_Server
        self.Volume = Fake_Volume
        self.Pool = Fake_Pool
        self.config(action_timeout=0, group='seamicro')
        self.config(max_retry=2, group='seamicro')
        self.info = seamicro._parse_driver_info(self.node)

    @mock.patch.object(seamicro_client, "Client", autospec=True)
    def test__get_client(self, mock_client):
        args = {'username': self.info['username'],
                'password': self.info['password'],
                'auth_url': self.info['api_endpoint']}
        seamicro._get_client(**self.info)
        mock_client.assert_called_once_with(self.info['api_version'], **args)

    @mock.patch.object(seamicro_client, "Client", autospec=True)
    def test__get_client_fail(self, mock_client):
        args = {'username': self.info['username'],
                'password': self.info['password'],
                'auth_url': self.info['api_endpoint']}
        mock_client.side_effect = seamicro_client_exception.UnsupportedVersion
        self.assertRaises(exception.InvalidParameterValue,
                          seamicro._get_client,
                          **self.info)
        mock_client.assert_called_once_with(self.info['api_version'], **args)

    @mock.patch.object(seamicro, "_get_server", autospec=True)
    def test__get_power_status_on(self, mock_get_server):
        mock_get_server.return_value = self.Server(active=True)
        pstate = seamicro._get_power_status(self.node)
        self.assertEqual(states.POWER_ON, pstate)

    @mock.patch.object(seamicro, "_get_server", autospec=True)
    def test__get_power_status_off(self, mock_get_server):
        mock_get_server.return_value = self.Server(active=False)
        pstate = seamicro._get_power_status(self.node)
        self.assertEqual(states.POWER_OFF, pstate)

    @mock.patch.object(seamicro, "_get_server", autospec=True)
    def test__get_power_status_error(self, mock_get_server):
        mock_get_server.return_value = self.Server(active=None)
        pstate = seamicro._get_power_status(self.node)
        self.assertEqual(states.ERROR, pstate)

    @mock.patch.object(seamicro, "_get_server", autospec=True)
    def test__power_on_good(self, mock_get_server):
        mock_get_server.return_value = self.Server(active=False)
        pstate = seamicro._power_on(self.node)
        self.assertEqual(states.POWER_ON, pstate)

    @mock.patch.object(seamicro, "_get_server", autospec=True)
    def test__power_on_fail(self, mock_get_server):
        def fake_power_on():
            return

        server = self.Server(active=False)
        server.power_on = fake_power_on
        mock_get_server.return_value = server
        pstate = seamicro._power_on(self.node)
        self.assertEqual(states.ERROR, pstate)

    @mock.patch.object(seamicro, "_get_server", autospec=True)
    def test__power_off_good(self, mock_get_server):
        mock_get_server.return_value = self.Server(active=True)
        pstate = seamicro._power_off(self.node)
        self.assertEqual(states.POWER_OFF, pstate)

    @mock.patch.object(seamicro, "_get_server", autospec=True)
    def test__power_off_fail(self, mock_get_server):
        def fake_power_off():
            return

        server = self.Server(active=True)
        server.power_off = fake_power_off
        mock_get_server.return_value = server
        pstate = seamicro._power_off(self.node)
        self.assertEqual(states.ERROR, pstate)

    @mock.patch.object(seamicro, "_get_server", autospec=True)
    def test__reboot_good(self, mock_get_server):
        mock_get_server.return_value = self.Server(active=True)
        pstate = seamicro._reboot(self.node)
        self.assertEqual(states.POWER_ON, pstate)

    @mock.patch.object(seamicro, "_get_server", autospec=True)
    def test__reboot_fail(self, mock_get_server):
        def fake_reboot():
            return

        server = self.Server(active=False)
        server.reset = fake_reboot
        mock_get_server.return_value = server
        pstate = seamicro._reboot(self.node)
        self.assertEqual(states.ERROR, pstate)

    @mock.patch.object(seamicro, "_get_volume", autospec=True)
    def test__validate_fail(self, mock_get_volume):
        volume_id = "0/p6-6/vol1"
        volume = self.Volume()
        volume.id = volume_id
        mock_get_volume.return_value = volume
        self.assertRaises(exception.InvalidParameterValue,
                          seamicro._validate_volume, self.info, volume_id)

    @mock.patch.object(seamicro, "_get_volume", autospec=True)
    def test__validate_good(self, mock_get_volume):
        volume = self.Volume()
        mock_get_volume.return_value = volume
        valid = seamicro._validate_volume(self.info, volume.id)
        self.assertTrue(valid)

    @mock.patch.object(seamicro, "_get_pools", autospec=True)
    def test__create_volume_fail(self, mock_get_pools):
        mock_get_pools.return_value = None
        self.assertRaises(exception.IronicException,
                          seamicro._create_volume,
                          self.info, 2)

    @mock.patch.object(seamicro, "_get_pools", autospec=True)
    @mock.patch.object(seamicro, "_get_client", autospec=True)
    def test__create_volume_good(self, mock_get_client, mock_get_pools):
        pools = [self.Pool(1), self.Pool(6), self.Pool(5)]
        mock_seamicro_volumes = mock.MagicMock(spec_set=['create'])
        mock_get_client.return_value = mock.MagicMock(
            volumes=mock_seamicro_volumes, spec_set=['volumes'])
        mock_get_pools.return_value = pools
        seamicro._create_volume(self.info, 2)
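# NOTE: The sketch below is illustrative only and not part of the upstream
# test suite. Fake_Server above doubles for just enough of the seamicroclient
# server object: power state is a plain 'active' flag that the power methods
# flip, which lets the power paths run without real hardware:
def _example_fake_server_roundtrip():
    server = Fake_Server(active=False)
    server.power_on()
    assert server.active       # _get_power_status maps this to POWER_ON
    server.power_off()
    assert not server.active   # ...and this to POWER_OFF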
@mock.patch.object(seamicro, '_parse_driver_info', autospec=True) def test_power_interface_validate_good(self, parse_drv_info_mock): with task_manager.acquire(self.context, self.node['uuid'], shared=True) as task: task.driver.power.validate(task) self.assertEqual(1, parse_drv_info_mock.call_count) @mock.patch.object(seamicro, '_parse_driver_info', autospec=True) def test_power_interface_validate_fails(self, parse_drv_info_mock): side_effect = iter([exception.InvalidParameterValue("Bad input")]) parse_drv_info_mock.side_effect = side_effect with task_manager.acquire(self.context, self.node['uuid'], shared=True) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.power.validate, task) self.assertEqual(1, parse_drv_info_mock.call_count) @mock.patch.object(seamicro, '_reboot', autospec=True) def test_reboot(self, mock_reboot): mock_reboot.return_value = states.POWER_ON with task_manager.acquire(self.context, self.info['uuid'], shared=False) as task: task.driver.power.reboot(task) mock_reboot.assert_called_once_with(task.node) def test_set_power_state_bad_state(self): self.get_server_mock = self.get_server_patcher.start() self.get_server_mock.return_value = self.Server() with task_manager.acquire(self.context, self.info['uuid'], shared=False) as task: self.assertRaises(exception.IronicException, task.driver.power.set_power_state, task, "BAD_PSTATE") self.get_server_patcher.stop() @mock.patch.object(seamicro, '_power_on', autospec=True) def test_set_power_state_on_good(self, mock_power_on): mock_power_on.return_value = states.POWER_ON with task_manager.acquire(self.context, self.info['uuid'], shared=False) as task: task.driver.power.set_power_state(task, states.POWER_ON) mock_power_on.assert_called_once_with(task.node) @mock.patch.object(seamicro, '_power_on', autospec=True) def test_set_power_state_on_fail(self, mock_power_on): mock_power_on.return_value = states.POWER_OFF with task_manager.acquire(self.context, self.info['uuid'], shared=False) as task: self.assertRaises(exception.PowerStateFailure, task.driver.power.set_power_state, task, states.POWER_ON) mock_power_on.assert_called_once_with(task.node) @mock.patch.object(seamicro, '_power_off', autospec=True) def test_set_power_state_off_good(self, mock_power_off): mock_power_off.return_value = states.POWER_OFF with task_manager.acquire(self.context, self.info['uuid'], shared=False) as task: task.driver.power.set_power_state(task, states.POWER_OFF) mock_power_off.assert_called_once_with(task.node) @mock.patch.object(seamicro, '_power_off', autospec=True) def test_set_power_state_off_fail(self, mock_power_off): mock_power_off.return_value = states.POWER_ON with task_manager.acquire(self.context, self.info['uuid'], shared=False) as task: self.assertRaises(exception.PowerStateFailure, task.driver.power.set_power_state, task, states.POWER_OFF) mock_power_off.assert_called_once_with(task.node) @mock.patch.object(seamicro, '_parse_driver_info', autospec=True) def test_vendor_passthru_validate_good(self, mock_info): with task_manager.acquire(self.context, self.node['uuid'], shared=True) as task: for method in task.driver.vendor.vendor_routes: task.driver.vendor.validate(task, **{'method': method}) self.assertEqual(len(task.driver.vendor.vendor_routes), mock_info.call_count) @mock.patch.object(seamicro, '_parse_driver_info', autospec=True) def test_vendor_passthru_validate_parse_driver_info_fail(self, mock_info): mock_info.side_effect = iter([exception.InvalidParameterValue("bad")]) with task_manager.acquire(self.context, 
self.node['uuid'], shared=True) as task: method = list(task.driver.vendor.vendor_routes)[0] self.assertRaises(exception.InvalidParameterValue, task.driver.vendor.validate, task, **{'method': method}) mock_info.assert_called_once_with(task.node) @mock.patch.object(seamicro, '_get_server', autospec=True) def test_set_node_vlan_id_good(self, mock_get_server): vlan_id = "12" mock_get_server.return_value = self.Server(active="true") with task_manager.acquire(self.context, self.info['uuid'], shared=False) as task: kwargs = {'vlan_id': vlan_id} task.driver.vendor.set_node_vlan_id(task, **kwargs) mock_get_server.assert_called_once_with(self.info) def test_set_node_vlan_id_no_input(self): with task_manager.acquire(self.context, self.info['uuid'], shared=False) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.vendor.set_node_vlan_id, task, **{}) @mock.patch.object(seamicro, '_get_server', autospec=True) def test_set_node_vlan_id_fail(self, mock_get_server): def fake_set_untagged_vlan(self, **kwargs): raise seamicro_client_exception.ClientException( http_client.INTERNAL_SERVER_ERROR) vlan_id = "12" server = self.Server(active="true") server.set_untagged_vlan = fake_set_untagged_vlan mock_get_server.return_value = server with task_manager.acquire(self.context, self.info['uuid'], shared=False) as task: kwargs = {'vlan_id': vlan_id} self.assertRaises(exception.IronicException, task.driver.vendor.set_node_vlan_id, task, **kwargs) mock_get_server.assert_called_once_with(self.info) @mock.patch.object(seamicro, '_get_server', autospec=True) @mock.patch.object(seamicro, '_validate_volume', autospec=True) def test_attach_volume_with_volume_id_good(self, mock_validate_volume, mock_get_server): volume_id = '0/ironic-p6-1/vol1' mock_validate_volume.return_value = True mock_get_server.return_value = self.Server(active="true") with task_manager.acquire(self.context, self.info['uuid'], shared=False) as task: kwargs = {'volume_id': volume_id} task.driver.vendor.attach_volume(task, **kwargs) mock_get_server.assert_called_once_with(self.info) @mock.patch.object(seamicro, '_get_server', autospec=True) @mock.patch.object(seamicro, '_get_volume', autospec=True) def test_attach_volume_with_invalid_volume_id_fail(self, mock_get_volume, mock_get_server): volume_id = '0/p6-1/vol1' mock_get_volume.return_value = self.Volume(volume_id) mock_get_server.return_value = self.Server(active="true") with task_manager.acquire(self.context, self.info['uuid'], shared=False) as task: kwargs = {'volume_id': volume_id} self.assertRaises(exception.InvalidParameterValue, task.driver.vendor.attach_volume, task, **kwargs) @mock.patch.object(seamicro, '_get_server', autospec=True) @mock.patch.object(seamicro, '_validate_volume', autospec=True) def test_attach_volume_fail(self, mock_validate_volume, mock_get_server): def fake_attach_volume(self, **kwargs): raise seamicro_client_exception.ClientException( http_client.INTERNAL_SERVER_ERROR) volume_id = '0/p6-1/vol1' mock_validate_volume.return_value = True server = self.Server(active="true") server.attach_volume = fake_attach_volume mock_get_server.return_value = server with task_manager.acquire(self.context, self.info['uuid'], shared=False) as task: kwargs = {'volume_id': volume_id} self.assertRaises(exception.IronicException, task.driver.vendor.attach_volume, task, **kwargs) mock_get_server.assert_called_once_with(self.info) @mock.patch.object(seamicro, '_get_server', autospec=True) @mock.patch.object(seamicro, '_validate_volume', autospec=True) 
@mock.patch.object(seamicro, '_create_volume', autospec=True) def test_attach_volume_with_volume_size_good(self, mock_create_volume, mock_validate_volume, mock_get_server): volume_id = '0/ironic-p6-1/vol1' volume_size = 2 mock_create_volume.return_value = volume_id mock_validate_volume.return_value = True mock_get_server.return_value = self.Server(active="true") with task_manager.acquire(self.context, self.info['uuid'], shared=False) as task: kwargs = {'volume_size': volume_size} task.driver.vendor.attach_volume(task, **kwargs) mock_get_server.assert_called_once_with(self.info) mock_create_volume.assert_called_once_with(self.info, volume_size) def test_attach_volume_with_no_input_fail(self): with task_manager.acquire(self.context, self.info['uuid'], shared=False) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.vendor.attach_volume, task, **{}) @mock.patch.object(seamicro, '_get_server', autospec=True) def test_set_boot_device_good(self, mock_get_server): boot_device = "disk" mock_get_server.return_value = self.Server(active="true") with task_manager.acquire(self.context, self.info['uuid'], shared=False) as task: task.driver.management.set_boot_device(task, boot_device) mock_get_server.assert_called_once_with(self.info) @mock.patch.object(seamicro, '_get_server', autospec=True) def test_set_boot_device_invalid_device_fail(self, mock_get_server): boot_device = "invalid_device" mock_get_server.return_value = self.Server(active="true") with task_manager.acquire(self.context, self.info['uuid'], shared=False) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.management.set_boot_device, task, boot_device) @mock.patch.object(seamicro, '_get_server', autospec=True) def test_set_boot_device_fail(self, mock_get_server): def fake_set_boot_order(self, **kwargs): raise seamicro_client_exception.ClientException( http_client.INTERNAL_SERVER_ERROR) boot_device = "pxe" server = self.Server(active="true") server.set_boot_order = fake_set_boot_order mock_get_server.return_value = server with task_manager.acquire(self.context, self.info['uuid'], shared=False) as task: self.assertRaises(exception.IronicException, task.driver.management.set_boot_device, task, boot_device) mock_get_server.assert_called_once_with(self.info) def test_management_interface_get_supported_boot_devices(self): with task_manager.acquire(self.context, self.node.uuid) as task: expected = [boot_devices.PXE, boot_devices.DISK] self.assertEqual(sorted(expected), sorted(task.driver.management. 
get_supported_boot_devices(task))) def test_management_interface_get_boot_device(self): with task_manager.acquire(self.context, self.node.uuid) as task: expected = {'boot_device': None, 'persistent': None} self.assertEqual(expected, task.driver.management.get_boot_device(task)) def test_management_interface_validate_good(self): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.management.validate(task) def test_management_interface_validate_fail(self): # Missing SEAMICRO driver_info information node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake_seamicro') with task_manager.acquire(self.context, node.uuid) as task: self.assertRaises(exception.MissingParameterValue, task.driver.management.validate, task) class SeaMicroDriverTestCase(db_base.DbTestCase): def setUp(self): super(SeaMicroDriverTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver='fake_seamicro') self.driver = driver_factory.get_driver('fake_seamicro') self.node = obj_utils.create_test_node(self.context, driver='fake_seamicro', driver_info=INFO_DICT) self.get_server_patcher = mock.patch.object(seamicro, '_get_server', autospec=True) self.get_server_mock = None self.Server = Fake_Server self.Volume = Fake_Volume self.info = seamicro._parse_driver_info(self.node) @mock.patch.object(console_utils, 'start_shellinabox_console', autospec=True) def test_start_console(self, mock_exec): mock_exec.return_value = None with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.console.start_console(task) mock_exec.assert_called_once_with(self.info['uuid'], self.info['port'], mock.ANY) @mock.patch.object(console_utils, 'start_shellinabox_console', autospec=True) def test_start_console_fail(self, mock_exec): mock_exec.side_effect = iter( [exception.ConsoleSubprocessFailed(error='error')]) with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.ConsoleSubprocessFailed, self.driver.console.start_console, task) @mock.patch.object(console_utils, 'stop_shellinabox_console', autospec=True) def test_stop_console(self, mock_exec): mock_exec.return_value = None with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.console.stop_console(task) mock_exec.assert_called_once_with(self.info['uuid']) @mock.patch.object(console_utils, 'stop_shellinabox_console', autospec=True) def test_stop_console_fail(self, mock_stop): mock_stop.side_effect = iter([exception.ConsoleError()]) with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.ConsoleError, self.driver.console.stop_console, task) mock_stop.assert_called_once_with(self.node.uuid) @mock.patch.object(console_utils, 'start_shellinabox_console', autospec=True) def test_start_console_fail_nodir(self, mock_exec): mock_exec.side_effect = iter([exception.ConsoleError()]) with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.ConsoleError, self.driver.console.start_console, task) mock_exec.assert_called_once_with(self.node.uuid, mock.ANY, mock.ANY) @mock.patch.object(console_utils, 'get_shellinabox_console_url', autospec=True) def test_get_console(self, mock_exec): url = 'http://localhost:4201' mock_exec.return_value = url expected = {'type': 'shellinabox', 'url': url} with task_manager.acquire(self.context, self.node.uuid) as task: console_info = self.driver.console.get_console(task) self.assertEqual(expected, console_info) mock_exec.assert_called_once_with(self.info['port']) 
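# Editor's note: illustrative sketch only, not driver source. The console
# tests above fix the contract with console_utils: start_console passes the
# node UUID, the configured terminal port and a driver-specific console
# command (asserted only as mock.ANY); stop_console passes the node UUID;
# and get_console wraps get_shellinabox_console_url(port) in a
# {'type': 'shellinabox', 'url': ...} dict. Roughly:
#
#     def start_console_sketch(info):
#         console_utils.start_shellinabox_console(
#             info['uuid'], info['port'], console_cmd)  # console_cmd varies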
ironic-5.1.0/ironic/tests/unit/drivers/modules/test_ssh.py0000664000567000056710000016572412674513466025136 0ustar jenkinsjenkins00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Test class for Ironic SSH power driver.""" import tempfile import mock from oslo_concurrency import processutils from oslo_config import cfg from oslo_utils import uuidutils import paramiko from ironic.common import boot_devices from ironic.common import driver_factory from ironic.common import exception from ironic.common import states from ironic.common import utils from ironic.conductor import task_manager from ironic.drivers.modules import console_utils from ironic.drivers.modules import ssh from ironic.drivers import utils as driver_utils from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils CONF = cfg.CONF class SSHValidateParametersTestCase(db_base.DbTestCase): def test__parse_driver_info_good_password(self): # make sure we get back the expected things node = obj_utils.get_test_node( self.context, driver='fake_ssh', driver_info=db_utils.get_test_ssh_info('password')) info = ssh._parse_driver_info(node) self.assertIsNotNone(info.get('host')) self.assertIsNotNone(info.get('username')) self.assertIsNotNone(info.get('password')) self.assertIsNotNone(info.get('port')) self.assertIsNotNone(info.get('virt_type')) self.assertIsNotNone(info.get('cmd_set')) self.assertIsNotNone(info.get('uuid')) def test__parse_driver_info_good_key(self): # make sure we get back the expected things node = obj_utils.get_test_node( self.context, driver='fake_ssh', driver_info=db_utils.get_test_ssh_info('key')) info = ssh._parse_driver_info(node) self.assertIsNotNone(info.get('host')) self.assertIsNotNone(info.get('username')) self.assertIsNotNone(info.get('key_contents')) self.assertIsNotNone(info.get('port')) self.assertIsNotNone(info.get('virt_type')) self.assertIsNotNone(info.get('cmd_set')) self.assertIsNotNone(info.get('uuid')) def test__parse_driver_info_good_file(self): # make sure we get back the expected things d_info = db_utils.get_test_ssh_info('file') tempdir = tempfile.mkdtemp() key_path = tempdir + '/foo' open(key_path, 'wt').close() d_info['ssh_key_filename'] = key_path node = obj_utils.get_test_node( self.context, driver='fake_ssh', driver_info=d_info) info = ssh._parse_driver_info(node) self.assertIsNotNone(info.get('host')) self.assertIsNotNone(info.get('username')) self.assertIsNotNone(info.get('key_filename')) self.assertIsNotNone(info.get('port')) self.assertIsNotNone(info.get('virt_type')) self.assertIsNotNone(info.get('cmd_set')) self.assertIsNotNone(info.get('uuid')) def test__parse_driver_info_bad_file(self): # A filename that doesn't exist errors. 
info = db_utils.get_test_ssh_info('file') node = obj_utils.get_test_node( self.context, driver='fake_ssh', driver_info=info) self.assertRaises( exception.InvalidParameterValue, ssh._parse_driver_info, node) def test__parse_driver_info_too_many(self): info = db_utils.get_test_ssh_info('too_many') node = obj_utils.get_test_node( self.context, driver='fake_ssh', driver_info=info) self.assertRaises( exception.InvalidParameterValue, ssh._parse_driver_info, node) def test__parse_driver_info_missing_host(self): # make sure error is raised when info is missing info = db_utils.get_test_ssh_info() del info['ssh_address'] node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.MissingParameterValue, ssh._parse_driver_info, node) def test__parse_driver_info_missing_user(self): # make sure error is raised when info is missing info = db_utils.get_test_ssh_info() del info['ssh_username'] node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.MissingParameterValue, ssh._parse_driver_info, node) def test__parse_driver_info_invalid_creds(self): # make sure error is raised when info is missing info = db_utils.get_test_ssh_info('no-creds') node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.InvalidParameterValue, ssh._parse_driver_info, node) def test__parse_driver_info_missing_virt_type(self): # make sure error is raised when info is missing info = db_utils.get_test_ssh_info() del info['ssh_virt_type'] node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.MissingParameterValue, ssh._parse_driver_info, node) def test__parse_driver_info_ssh_port_wrong_type(self): # make sure error is raised when ssh_port is not integer info = db_utils.get_test_ssh_info() info['ssh_port'] = 'wrong_port_value' node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.InvalidParameterValue, ssh._parse_driver_info, node) def test__normalize_mac_string(self): mac_raw = "0A:1B-2C-3D:4F" mac_clean = ssh._normalize_mac(mac_raw) self.assertEqual("0a1b2c3d4f", mac_clean) def test__normalize_mac_unicode(self): mac_raw = u"0A:1B-2C-3D:4F" mac_clean = ssh._normalize_mac(mac_raw) self.assertEqual("0a1b2c3d4f", mac_clean) def test__parse_driver_info_with_custom_libvirt_uri(self): CONF.set_override('libvirt_uri', 'qemu:///foo', 'ssh') expected_base_cmd = "LC_ALL=C /usr/bin/virsh --connect qemu:///foo" node = obj_utils.get_test_node( self.context, driver='fake_ssh', driver_info=db_utils.get_test_ssh_info()) node['driver_info']['ssh_virt_type'] = 'virsh' info = ssh._parse_driver_info(node) self.assertEqual(expected_base_cmd, info['cmd_set']['base_cmd']) def test__get_boot_device_map_parallels(self): boot_map = ssh._get_boot_device_map('parallels') self.assertEqual('net0', boot_map[boot_devices.PXE]) def test__get_boot_device_map_vbox(self): boot_map = ssh._get_boot_device_map('vbox') self.assertEqual('net', boot_map[boot_devices.PXE]) def test__get_boot_device_map_xenserver(self): boot_map = ssh._get_boot_device_map('xenserver') self.assertEqual('n', boot_map[boot_devices.PXE]) def test__get_boot_device_map_exception(self): self.assertRaises(exception.InvalidParameterValue, ssh._get_boot_device_map, 'this_doesn_t_exist') class SSHPrivateMethodsTestCase(db_base.DbTestCase): def setUp(self): super(SSHPrivateMethodsTestCase, self).setUp() self.node = obj_utils.get_test_node( self.context, driver='fake_ssh', driver_info=db_utils.get_test_ssh_info()) self.sshclient = 
paramiko.SSHClient() @mock.patch.object(utils, 'ssh_connect', autospec=True) def test__get_connection_client(self, ssh_connect_mock): ssh_connect_mock.return_value = self.sshclient client = ssh._get_connection(self.node) self.assertEqual(self.sshclient, client) driver_info = ssh._parse_driver_info(self.node) ssh_connect_mock.assert_called_once_with(driver_info) @mock.patch.object(utils, 'ssh_connect', autospec=True) def test__get_connection_exception(self, ssh_connect_mock): ssh_connect_mock.side_effect = iter( [exception.SSHConnectFailed(host='fake')]) self.assertRaises(exception.SSHConnectFailed, ssh._get_connection, self.node) driver_info = ssh._parse_driver_info(self.node) ssh_connect_mock.assert_called_once_with(driver_info) @mock.patch.object(processutils, 'ssh_execute', autospec=True) def test__ssh_execute(self, exec_ssh_mock): ssh_cmd = "somecmd" expected = ['a', 'b', 'c'] exec_ssh_mock.return_value = ('\n'.join(expected), '') lst = ssh._ssh_execute(self.sshclient, ssh_cmd) exec_ssh_mock.assert_called_once_with(self.sshclient, ssh_cmd) self.assertEqual(expected, lst) @mock.patch.object(processutils, 'ssh_execute', autospec=True) def test__ssh_execute_exception(self, exec_ssh_mock): ssh_cmd = "somecmd" exec_ssh_mock.side_effect = processutils.ProcessExecutionError self.assertRaises(exception.SSHCommandFailed, ssh._ssh_execute, self.sshclient, ssh_cmd) exec_ssh_mock.assert_called_once_with(self.sshclient, ssh_cmd) @mock.patch.object(processutils, 'ssh_execute', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) def test__get_power_status_on_unquoted(self, get_hosts_name_mock, exec_ssh_mock): info = ssh._parse_driver_info(self.node) exec_ssh_mock.return_value = ( 'ExactNodeName', '') get_hosts_name_mock.return_value = "ExactNodeName" pstate = ssh._get_power_status(self.sshclient, info) ssh_cmd = "%s %s" % (info['cmd_set']['base_cmd'], info['cmd_set']['list_running']) self.assertEqual(states.POWER_ON, pstate) exec_ssh_mock.assert_called_once_with(self.sshclient, ssh_cmd) get_hosts_name_mock.assert_called_once_with(self.sshclient, info) @mock.patch.object(processutils, 'ssh_execute', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) def test__get_power_status_on(self, get_hosts_name_mock, exec_ssh_mock): info = ssh._parse_driver_info(self.node) exec_ssh_mock.return_value = ( '"NodeName" {b43c4982-110c-4c29-9325-d5f41b053513}', '') get_hosts_name_mock.return_value = "NodeName" pstate = ssh._get_power_status(self.sshclient, info) ssh_cmd = "%s %s" % (info['cmd_set']['base_cmd'], info['cmd_set']['list_running']) self.assertEqual(states.POWER_ON, pstate) exec_ssh_mock.assert_called_once_with(self.sshclient, ssh_cmd) get_hosts_name_mock.assert_called_once_with(self.sshclient, info) @mock.patch.object(processutils, 'ssh_execute', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) def test__get_power_status_off(self, get_hosts_name_mock, exec_ssh_mock): info = ssh._parse_driver_info(self.node) exec_ssh_mock.return_value = ( '"NodeName" {b43c4982-110c-4c29-9325-d5f41b053513}', '') get_hosts_name_mock.return_value = "NotNodeName" pstate = ssh._get_power_status(self.sshclient, info) ssh_cmd = "%s %s" % (info['cmd_set']['base_cmd'], info['cmd_set']['list_running']) self.assertEqual(states.POWER_OFF, pstate) exec_ssh_mock.assert_called_once_with(self.sshclient, ssh_cmd) get_hosts_name_mock.assert_called_once_with(self.sshclient, info) @mock.patch.object(processutils, 'ssh_execute', autospec=True) def 
test__get_power_status_exception(self, exec_ssh_mock): info = ssh._parse_driver_info(self.node) exec_ssh_mock.side_effect = processutils.ProcessExecutionError self.assertRaises(exception.SSHCommandFailed, ssh._get_power_status, self.sshclient, info) ssh_cmd = "%s %s" % (info['cmd_set']['base_cmd'], info['cmd_set']['list_all']) exec_ssh_mock.assert_called_once_with( self.sshclient, ssh_cmd) @mock.patch.object(processutils, 'ssh_execute', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) def test__get_power_status_correct_node(self, get_hosts_name_mock, exec_ssh_mock): # Bug: #1397834 test that get_power_status return status of # baremeta_1 (off) and not baremetal_11 (on) info = ssh._parse_driver_info(self.node) exec_ssh_mock.return_value = ('"baremetal_11"\n"seed"\n', '') get_hosts_name_mock.return_value = "baremetal_1" pstate = ssh._get_power_status(self.sshclient, info) self.assertEqual(states.POWER_OFF, pstate) ssh_cmd = "%s %s" % (info['cmd_set']['base_cmd'], info['cmd_set']['list_running']) exec_ssh_mock.assert_called_once_with(self.sshclient, ssh_cmd) @mock.patch.object(processutils, 'ssh_execute', autospec=True) def test__get_hosts_name_for_node_match(self, exec_ssh_mock): info = ssh._parse_driver_info(self.node) info['macs'] = ["11:11:11:11:11:11", "52:54:00:cf:2d:31"] ssh_cmd = "%s %s" % (info['cmd_set']['base_cmd'], info['cmd_set']['list_all']) cmd_to_exec = "%s %s" % (info['cmd_set']['base_cmd'], info['cmd_set']['get_node_macs']) cmd_to_exec = cmd_to_exec.replace('{_NodeName_}', 'NodeName') exec_ssh_mock.side_effect = iter([('NodeName', ''), ('52:54:00:cf:2d:31', '')]) expected = [mock.call(self.sshclient, ssh_cmd), mock.call(self.sshclient, cmd_to_exec)] found_name = ssh._get_hosts_name_for_node(self.sshclient, info) self.assertEqual('NodeName', found_name) self.assertEqual(expected, exec_ssh_mock.call_args_list) @mock.patch.object(processutils, 'ssh_execute', autospec=True) def test__get_hosts_name_for_node_no_match(self, exec_ssh_mock): self.config(group='ssh', get_vm_name_attempts=2) self.config(group='ssh', get_vm_name_retry_interval=0) info = ssh._parse_driver_info(self.node) info['macs'] = ["11:11:11:11:11:11", "22:22:22:22:22:22"] exec_ssh_mock.side_effect = iter([('NodeName', ''), ('52:54:00:cf:2d:31', '')] * 2) ssh_cmd = "%s %s" % (info['cmd_set']['base_cmd'], info['cmd_set']['list_all']) cmd_to_exec = "%s %s" % (info['cmd_set']['base_cmd'], info['cmd_set']['get_node_macs']) cmd_to_exec = cmd_to_exec.replace('{_NodeName_}', 'NodeName') expected = [mock.call(self.sshclient, ssh_cmd), mock.call(self.sshclient, cmd_to_exec)] * 2 self.assertRaises(exception.NodeNotFound, ssh._get_hosts_name_for_node, self.sshclient, info) self.assertEqual(expected, exec_ssh_mock.call_args_list) @mock.patch.object(processutils, 'ssh_execute', autospec=True) def test__get_hosts_name_for_node_match_after_retry(self, exec_ssh_mock): self.config(group='ssh', get_vm_name_attempts=2) self.config(group='ssh', get_vm_name_retry_interval=0) info = ssh._parse_driver_info(self.node) info['macs'] = ["11:11:11:11:11:11", "22:22:22:22:22:22"] exec_ssh_mock.side_effect = iter([('NodeName', ''), ('', ''), ('NodeName', ''), ('11:11:11:11:11:11', '')]) ssh_cmd = "%s %s" % (info['cmd_set']['base_cmd'], info['cmd_set']['list_all']) cmd_to_exec = "%s %s" % (info['cmd_set']['base_cmd'], info['cmd_set']['get_node_macs']) cmd_to_exec = cmd_to_exec.replace('{_NodeName_}', 'NodeName') expected = [mock.call(self.sshclient, ssh_cmd), mock.call(self.sshclient, cmd_to_exec)] * 2 found_name 
= ssh._get_hosts_name_for_node(self.sshclient, info) self.assertEqual('NodeName', found_name) self.assertEqual(expected, exec_ssh_mock.call_args_list) @mock.patch.object(processutils, 'ssh_execute', autospec=True) def test__get_hosts_name_for_node_exception(self, exec_ssh_mock): info = ssh._parse_driver_info(self.node) info['macs'] = ["11:11:11:11:11:11", "52:54:00:cf:2d:31"] ssh_cmd = "%s %s" % (info['cmd_set']['base_cmd'], info['cmd_set']['list_all']) cmd_to_exec = "%s %s" % (info['cmd_set']['base_cmd'], info['cmd_set']['get_node_macs']) cmd_to_exec = cmd_to_exec.replace('{_NodeName_}', 'NodeName') exec_ssh_mock.side_effect = iter( [('NodeName', ''), processutils.ProcessExecutionError]) expected = [mock.call(self.sshclient, ssh_cmd), mock.call(self.sshclient, cmd_to_exec)] self.assertRaises(exception.SSHCommandFailed, ssh._get_hosts_name_for_node, self.sshclient, info) self.assertEqual(expected, exec_ssh_mock.call_args_list) @mock.patch.object(processutils, 'ssh_execute', autospec=True) @mock.patch.object(ssh, '_get_power_status', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) def test__power_on_good(self, get_hosts_name_mock, get_power_status_mock, exec_ssh_mock): info = ssh._parse_driver_info(self.node) info['macs'] = ["11:11:11:11:11:11", "52:54:00:cf:2d:31"] get_power_status_mock.side_effect = iter([states.POWER_OFF, states.POWER_ON]) get_hosts_name_mock.return_value = "NodeName" expected = [mock.call(self.sshclient, info), mock.call(self.sshclient, info)] cmd_to_exec = "%s %s" % (info['cmd_set']['base_cmd'], info['cmd_set']['start_cmd']) cmd_to_exec = cmd_to_exec.replace('{_NodeName_}', 'NodeName') current_state = ssh._power_on(self.sshclient, info) self.assertEqual(states.POWER_ON, current_state) self.assertEqual(expected, get_power_status_mock.call_args_list) get_hosts_name_mock.assert_called_once_with(self.sshclient, info) exec_ssh_mock.assert_called_once_with(self.sshclient, cmd_to_exec) @mock.patch.object(processutils, 'ssh_execute', autospec=True) @mock.patch.object(ssh, '_get_power_status', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) def test__power_on_fail(self, get_hosts_name_mock, get_power_status_mock, exec_ssh_mock): info = ssh._parse_driver_info(self.node) info['macs'] = ["11:11:11:11:11:11", "52:54:00:cf:2d:31"] get_power_status_mock.side_effect = iter([states.POWER_OFF, states.POWER_OFF]) get_hosts_name_mock.return_value = "NodeName" expected = [mock.call(self.sshclient, info), mock.call(self.sshclient, info)] cmd_to_exec = "%s %s" % (info['cmd_set']['base_cmd'], info['cmd_set']['start_cmd']) cmd_to_exec = cmd_to_exec.replace('{_NodeName_}', 'NodeName') current_state = ssh._power_on(self.sshclient, info) self.assertEqual(states.ERROR, current_state) self.assertEqual(expected, get_power_status_mock.call_args_list) get_hosts_name_mock.assert_called_once_with(self.sshclient, info) exec_ssh_mock.assert_called_once_with(self.sshclient, cmd_to_exec) @mock.patch.object(processutils, 'ssh_execute', autospec=True) @mock.patch.object(ssh, '_get_power_status', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) def test__power_on_exception(self, get_hosts_name_mock, get_power_status_mock, exec_ssh_mock): info = ssh._parse_driver_info(self.node) info['macs'] = ["11:11:11:11:11:11", "52:54:00:cf:2d:31"] exec_ssh_mock.side_effect = processutils.ProcessExecutionError get_power_status_mock.side_effect = iter([states.POWER_OFF, states.POWER_ON]) get_hosts_name_mock.return_value = 
"NodeName" cmd_to_exec = "%s %s" % (info['cmd_set']['base_cmd'], info['cmd_set']['start_cmd']) cmd_to_exec = cmd_to_exec.replace('{_NodeName_}', 'NodeName') self.assertRaises(exception.SSHCommandFailed, ssh._power_on, self.sshclient, info) get_power_status_mock.assert_called_once_with(self.sshclient, info) get_hosts_name_mock.assert_called_once_with(self.sshclient, info) exec_ssh_mock.assert_called_once_with(self.sshclient, cmd_to_exec) @mock.patch.object(processutils, 'ssh_execute', autospec=True) @mock.patch.object(ssh, '_get_power_status', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) def test__power_off_good(self, get_hosts_name_mock, get_power_status_mock, exec_ssh_mock): info = ssh._parse_driver_info(self.node) info['macs'] = ["11:11:11:11:11:11", "52:54:00:cf:2d:31"] get_power_status_mock.side_effect = iter([states.POWER_ON, states.POWER_OFF]) get_hosts_name_mock.return_value = "NodeName" expected = [mock.call(self.sshclient, info), mock.call(self.sshclient, info)] cmd_to_exec = "%s %s" % (info['cmd_set']['base_cmd'], info['cmd_set']['stop_cmd']) cmd_to_exec = cmd_to_exec.replace('{_NodeName_}', 'NodeName') current_state = ssh._power_off(self.sshclient, info) self.assertEqual(states.POWER_OFF, current_state) self.assertEqual(expected, get_power_status_mock.call_args_list) get_hosts_name_mock.assert_called_once_with(self.sshclient, info) exec_ssh_mock.assert_called_once_with(self.sshclient, cmd_to_exec) @mock.patch.object(processutils, 'ssh_execute', autospec=True) @mock.patch.object(ssh, '_get_power_status', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) def test__power_off_fail(self, get_hosts_name_mock, get_power_status_mock, exec_ssh_mock): info = ssh._parse_driver_info(self.node) info['macs'] = ["11:11:11:11:11:11", "52:54:00:cf:2d:31"] get_power_status_mock.side_effect = iter([states.POWER_ON, states.POWER_ON]) get_hosts_name_mock.return_value = "NodeName" expected = [mock.call(self.sshclient, info), mock.call(self.sshclient, info)] cmd_to_exec = "%s %s" % (info['cmd_set']['base_cmd'], info['cmd_set']['stop_cmd']) cmd_to_exec = cmd_to_exec.replace('{_NodeName_}', 'NodeName') current_state = ssh._power_off(self.sshclient, info) self.assertEqual(states.ERROR, current_state) self.assertEqual(expected, get_power_status_mock.call_args_list) get_hosts_name_mock.assert_called_once_with(self.sshclient, info) exec_ssh_mock.assert_called_once_with(self.sshclient, cmd_to_exec) @mock.patch.object(processutils, 'ssh_execute', autospec=True) @mock.patch.object(ssh, '_get_power_status', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) def test__power_off_exception(self, get_hosts_name_mock, get_power_status_mock, exec_ssh_mock): info = ssh._parse_driver_info(self.node) info['macs'] = ["11:11:11:11:11:11", "52:54:00:cf:2d:31"] exec_ssh_mock.side_effect = processutils.ProcessExecutionError get_power_status_mock.side_effect = iter([states.POWER_ON, states.POWER_OFF]) get_hosts_name_mock.return_value = "NodeName" cmd_to_exec = "%s %s" % (info['cmd_set']['base_cmd'], info['cmd_set']['stop_cmd']) cmd_to_exec = cmd_to_exec.replace('{_NodeName_}', 'NodeName') self.assertRaises(exception.SSHCommandFailed, ssh._power_off, self.sshclient, info) get_power_status_mock.assert_called_once_with(self.sshclient, info) get_hosts_name_mock.assert_called_once_with(self.sshclient, info) exec_ssh_mock.assert_called_once_with(self.sshclient, cmd_to_exec) def test_exec_ssh_command_good(self): class Channel(object): 
def recv_exit_status(self): return 0 class Stream(object): def __init__(self, buffer=''): self.buffer = buffer self.channel = Channel() def read(self): return self.buffer def close(self): pass with mock.patch.object(self.sshclient, 'exec_command', autospec=True) as exec_command_mock: exec_command_mock.return_value = (Stream(), Stream('hello'), Stream()) stdout, stderr = processutils.ssh_execute(self.sshclient, "command") self.assertEqual('hello', stdout) exec_command_mock.assert_called_once_with("command") def test_exec_ssh_command_fail(self): class Channel(object): def recv_exit_status(self): return 127 class Stream(object): def __init__(self, buffer=''): self.buffer = buffer self.channel = Channel() def read(self): return self.buffer def close(self): pass with mock.patch.object(self.sshclient, 'exec_command', autospec=True) as exec_command_mock: exec_command_mock.return_value = (Stream(), Stream('hello'), Stream()) self.assertRaises(processutils.ProcessExecutionError, processutils.ssh_execute, self.sshclient, "command") exec_command_mock.assert_called_once_with("command") class SSHDriverTestCase(db_base.DbTestCase): def setUp(self): super(SSHDriverTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="fake_ssh") self.driver = driver_factory.get_driver("fake_ssh") self.node = obj_utils.create_test_node( self.context, driver='fake_ssh', driver_info=db_utils.get_test_ssh_info()) self.port = obj_utils.create_test_port(self.context, node_id=self.node.id) self.sshclient = paramiko.SSHClient() @mock.patch.object(utils, 'ssh_connect', autospec=True) def test__validate_info_ssh_connect_failed(self, ssh_connect_mock): ssh_connect_mock.side_effect = iter( [exception.SSHConnectFailed(host='fake')]) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.power.validate, task) driver_info = ssh._parse_driver_info(task.node) ssh_connect_mock.assert_called_once_with(driver_info) def test_get_properties(self): expected = ssh.COMMON_PROPERTIES expected2 = list(ssh.COMMON_PROPERTIES) + list(ssh.CONSOLE_PROPERTIES) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertEqual(expected, task.driver.power.get_properties()) self.assertEqual(expected, task.driver.management.get_properties()) self.assertEqual( sorted(expected2), sorted(task.driver.console.get_properties().keys())) self.assertEqual( sorted(expected2), sorted(task.driver.get_properties().keys())) def test_validate_fail_no_port(self): new_node = obj_utils.create_test_node( self.context, uuid='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee', driver='fake_ssh', driver_info=db_utils.get_test_ssh_info()) with task_manager.acquire(self.context, new_node.uuid, shared=True) as task: self.assertRaises(exception.MissingParameterValue, task.driver.power.validate, task) @mock.patch.object(driver_utils, 'get_node_mac_addresses', autospec=True) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_power_on', autospec=True) def test_reboot_good(self, power_on_mock, get_conn_mock, get_mac_addr_mock): info = ssh._parse_driver_info(self.node) info['macs'] = ["11:11:11:11:11:11", "52:54:00:cf:2d:31"] get_mac_addr_mock.return_value = info['macs'] get_conn_mock.return_value = self.sshclient power_on_mock.return_value = states.POWER_ON with mock.patch.object(ssh, '_parse_driver_info', autospec=True) as parse_drv_info_mock: parse_drv_info_mock.return_value = info with task_manager.acquire(self.context, info['uuid'], 
shared=False) as task: task.driver.power.reboot(task) parse_drv_info_mock.assert_called_once_with(task.node) get_mac_addr_mock.assert_called_once_with(mock.ANY) get_conn_mock.assert_called_once_with(task.node) power_on_mock.assert_called_once_with(self.sshclient, info) @mock.patch.object(driver_utils, 'get_node_mac_addresses', autospec=True) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_power_on', autospec=True) def test_reboot_fail(self, power_on_mock, get_conn_mock, get_mac_addr_mock): info = ssh._parse_driver_info(self.node) info['macs'] = ["11:11:11:11:11:11", "52:54:00:cf:2d:31"] get_mac_addr_mock.return_value = info['macs'] get_conn_mock.return_value = self.sshclient power_on_mock.return_value = states.POWER_OFF with mock.patch.object(ssh, '_parse_driver_info', autospec=True) as parse_drv_info_mock: parse_drv_info_mock.return_value = info with task_manager.acquire(self.context, info['uuid'], shared=False) as task: self.assertRaises(exception.PowerStateFailure, task.driver.power.reboot, task) parse_drv_info_mock.assert_called_once_with(task.node) get_mac_addr_mock.assert_called_once_with(mock.ANY) get_conn_mock.assert_called_once_with(task.node) power_on_mock.assert_called_once_with(self.sshclient, info) @mock.patch.object(driver_utils, 'get_node_mac_addresses', autospec=True) @mock.patch.object(ssh, '_get_connection', autospec=True) def test_set_power_state_bad_state(self, get_conn_mock, get_mac_addr_mock): info = ssh._parse_driver_info(self.node) info['macs'] = ["11:11:11:11:11:11", "52:54:00:cf:2d:31"] get_mac_addr_mock.return_value = info['macs'] get_conn_mock.return_value = self.sshclient with mock.patch.object(ssh, '_parse_driver_info', autospec=True) as parse_drv_info_mock: parse_drv_info_mock.return_value = info with task_manager.acquire(self.context, info['uuid'], shared=False) as task: self.assertRaises( exception.InvalidParameterValue, task.driver.power.set_power_state, task, "BAD_PSTATE") parse_drv_info_mock.assert_called_once_with(task.node) get_mac_addr_mock.assert_called_once_with(mock.ANY) get_conn_mock.assert_called_once_with(task.node) @mock.patch.object(driver_utils, 'get_node_mac_addresses', autospec=True) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_power_on', autospec=True) def test_set_power_state_on_good(self, power_on_mock, get_conn_mock, get_mac_addr_mock): info = ssh._parse_driver_info(self.node) info['macs'] = ["11:11:11:11:11:11", "52:54:00:cf:2d:31"] get_mac_addr_mock.return_value = info['macs'] get_conn_mock.return_value = self.sshclient power_on_mock.return_value = states.POWER_ON with mock.patch.object(ssh, '_parse_driver_info', autospec=True) as parse_drv_info_mock: parse_drv_info_mock.return_value = info with task_manager.acquire(self.context, info['uuid'], shared=False) as task: task.driver.power.set_power_state(task, states.POWER_ON) parse_drv_info_mock.assert_called_once_with(task.node) get_mac_addr_mock.assert_called_once_with(mock.ANY) get_conn_mock.assert_called_once_with(task.node) power_on_mock.assert_called_once_with(self.sshclient, info) @mock.patch.object(driver_utils, 'get_node_mac_addresses', autospec=True) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_power_on', autospec=True) def test_set_power_state_on_fail(self, power_on_mock, get_conn_mock, get_mac_addr_mock): info = ssh._parse_driver_info(self.node) info['macs'] = ["11:11:11:11:11:11", "52:54:00:cf:2d:31"] get_mac_addr_mock.return_value = info['macs'] 
get_conn_mock.return_value = self.sshclient power_on_mock.return_value = states.POWER_OFF with mock.patch.object(ssh, '_parse_driver_info', autospec=True) as parse_drv_info_mock: parse_drv_info_mock.return_value = info with task_manager.acquire(self.context, info['uuid'], shared=False) as task: self.assertRaises( exception.PowerStateFailure, task.driver.power.set_power_state, task, states.POWER_ON) parse_drv_info_mock.assert_called_once_with(task.node) get_mac_addr_mock.assert_called_once_with(mock.ANY) get_conn_mock.assert_called_once_with(task.node) power_on_mock.assert_called_once_with(self.sshclient, info) @mock.patch.object(driver_utils, 'get_node_mac_addresses', autospec=True) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_power_off', autospec=True) def test_set_power_state_off_good(self, power_off_mock, get_conn_mock, get_mac_addr_mock): info = ssh._parse_driver_info(self.node) info['macs'] = ["11:11:11:11:11:11", "52:54:00:cf:2d:31"] get_mac_addr_mock.return_value = info['macs'] get_conn_mock.return_value = self.sshclient power_off_mock.return_value = states.POWER_OFF with mock.patch.object(ssh, '_parse_driver_info', autospec=True) as parse_drv_info_mock: parse_drv_info_mock.return_value = info with task_manager.acquire(self.context, info['uuid'], shared=False) as task: task.driver.power.set_power_state(task, states.POWER_OFF) parse_drv_info_mock.assert_called_once_with(task.node) get_mac_addr_mock.assert_called_once_with(mock.ANY) get_conn_mock.assert_called_once_with(task.node) power_off_mock.assert_called_once_with(self.sshclient, info) @mock.patch.object(driver_utils, 'get_node_mac_addresses', autospec=True) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_power_off', autospec=True) def test_set_power_state_off_fail(self, power_off_mock, get_conn_mock, get_mac_addr_mock): info = ssh._parse_driver_info(self.node) info['macs'] = ["11:11:11:11:11:11", "52:54:00:cf:2d:31"] get_mac_addr_mock.return_value = info['macs'] get_conn_mock.return_value = self.sshclient power_off_mock.return_value = states.POWER_ON with mock.patch.object(ssh, '_parse_driver_info', autospec=True) as parse_drv_info_mock: parse_drv_info_mock.return_value = info with task_manager.acquire(self.context, info['uuid'], shared=False) as task: self.assertRaises( exception.PowerStateFailure, task.driver.power.set_power_state, task, states.POWER_OFF) parse_drv_info_mock.assert_called_once_with(task.node) get_mac_addr_mock.assert_called_once_with(mock.ANY) get_conn_mock.assert_called_once_with(task.node) power_off_mock.assert_called_once_with(self.sshclient, info) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) @mock.patch.object(ssh, '_ssh_execute', autospec=True) def test_management_interface_set_boot_device_vbox_ok(self, mock_exc, mock_h, mock_get_conn): fake_name = 'fake-name' mock_h.return_value = fake_name mock_get_conn.return_value = self.sshclient with task_manager.acquire(self.context, self.node.uuid) as task: task.node['driver_info']['ssh_virt_type'] = 'vbox' self.driver.management.set_boot_device(task, boot_devices.PXE) expected_cmd = ('LC_ALL=C /usr/bin/VBoxManage modifyvm %s ' '--boot1 net') % fake_name mock_exc.assert_called_once_with(mock.ANY, expected_cmd) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) @mock.patch.object(ssh, '_ssh_execute', autospec=True) def 
test_management_interface_set_boot_device_parallels_ok(self, mock_exc,
                                                           mock_h,
                                                           mock_get_conn):
        fake_name = 'fake-name'
        mock_h.return_value = fake_name
        mock_get_conn.return_value = self.sshclient
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node['driver_info']['ssh_virt_type'] = 'parallels'
            self.driver.management.set_boot_device(task, boot_devices.PXE)
            expected_cmd = ('LC_ALL=C /usr/bin/prlctl set %s '
                            '--device-bootorder "net0"') % fake_name
            mock_exc.assert_called_once_with(mock.ANY, expected_cmd)

    @mock.patch.object(ssh, '_get_connection', autospec=True)
    @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True)
    @mock.patch.object(ssh, '_ssh_execute', autospec=True)
    def test_management_interface_set_boot_device_virsh_ok(self, mock_exc,
                                                           mock_h,
                                                           mock_get_conn):
        fake_name = 'fake-name'
        mock_h.return_value = fake_name
        mock_get_conn.return_value = self.sshclient
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node['driver_info']['ssh_virt_type'] = 'virsh'
            self.driver.management.set_boot_device(task, boot_devices.PXE)
            # The sed expression below rewrites the domain XML's <boot .../>
            # element; the angle-bracketed parts were lost in extraction and
            # are reconstructed here from the upstream test source.
            expected_cmd = ('EDITOR="sed -i \'/<boot \\(dev\\|order\\)=*\\>'
                            '/d;/<\\/os>/i\\<boot dev=\\"network\\"/>\'" '
                            'LC_ALL=C /usr/bin/virsh --connect qemu:///system '
                            'edit %s') % fake_name
            mock_exc.assert_called_once_with(mock.ANY, expected_cmd)

    @mock.patch.object(ssh, '_get_connection', autospec=True)
    @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True)
    @mock.patch.object(ssh, '_ssh_execute', autospec=True)
    def test_management_interface_set_boot_device_xenserver_ok(self, mock_exc,
                                                               mock_h,
                                                               mock_get_conn):
        fake_name = 'fake-name'
        mock_h.return_value = fake_name
        mock_get_conn.return_value = self.sshclient
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node['driver_info']['ssh_virt_type'] = 'xenserver'
            self.driver.management.set_boot_device(task, boot_devices.PXE)
            expected_cmd = ("LC_ALL=C /opt/xensource/bin/xe vm-param-set "
                            "uuid=%s HVM-boot-params:order='n'") % fake_name
            mock_exc.assert_called_once_with(mock.ANY, expected_cmd)

    def test_set_boot_device_bad_device(self):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              self.driver.management.set_boot_device,
                              task, 'invalid-device')

    @mock.patch.object(ssh, '_get_connection', autospec=True)
    @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True)
    def test_set_boot_device_not_supported(self, mock_h, mock_get_conn):
        mock_h.return_value = 'NodeName'
        mock_get_conn.return_value = self.sshclient
        with task_manager.acquire(self.context, self.node.uuid) as task:
            # vmware does not support set_boot_device()
            task.node['driver_info']['ssh_virt_type'] = 'vmware'
            self.assertRaises(NotImplementedError,
                              self.driver.management.set_boot_device,
                              task, boot_devices.PXE)

    def test_management_interface_get_supported_boot_devices(self):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            expected = [boot_devices.PXE, boot_devices.DISK,
                        boot_devices.CDROM]
            self.assertEqual(sorted(expected),
                             sorted(task.driver.management.
get_supported_boot_devices(task))) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) @mock.patch.object(ssh, '_ssh_execute', autospec=True) def test_management_interface_get_boot_device_vbox(self, mock_exc, mock_h, mock_get_conn): fake_name = 'fake-name' mock_h.return_value = fake_name mock_exc.return_value = ('net', '') mock_get_conn.return_value = self.sshclient with task_manager.acquire(self.context, self.node.uuid) as task: task.node['driver_info']['ssh_virt_type'] = 'vbox' result = self.driver.management.get_boot_device(task) self.assertEqual(boot_devices.PXE, result['boot_device']) expected_cmd = ('LC_ALL=C /usr/bin/VBoxManage showvminfo ' '--machinereadable %s ' '| awk -F \'"\' \'/boot1/{print $2}\'') % fake_name mock_exc.assert_called_once_with(mock.ANY, expected_cmd) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) @mock.patch.object(ssh, '_ssh_execute', autospec=True) def test_management_interface_get_boot_device_parallels(self, mock_exc, mock_h, mock_get_conn): fake_name = 'fake-name' mock_h.return_value = fake_name mock_exc.return_value = ('net0', '') mock_get_conn.return_value = self.sshclient with task_manager.acquire(self.context, self.node.uuid) as task: task.node['driver_info']['ssh_virt_type'] = 'parallels' result = self.driver.management.get_boot_device(task) self.assertEqual(boot_devices.PXE, result['boot_device']) expected_cmd = ('LC_ALL=C /usr/bin/prlctl list -i %s ' '| awk \'/^Boot order:/ {print $3}\'') % fake_name mock_exc.assert_called_once_with(mock.ANY, expected_cmd) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) @mock.patch.object(ssh, '_ssh_execute', autospec=True) def test_management_interface_get_boot_device_virsh(self, mock_exc, mock_h, mock_get_conn): fake_name = 'fake-name' mock_h.return_value = fake_name mock_exc.return_value = ('network', '') mock_get_conn.return_value = self.sshclient with task_manager.acquire(self.context, self.node.uuid) as task: task.node['driver_info']['ssh_virt_type'] = 'virsh' result = self.driver.management.get_boot_device(task) self.assertEqual(boot_devices.PXE, result['boot_device']) expected_cmd = ('LC_ALL=C /usr/bin/virsh --connect ' 'qemu:///system dumpxml %s | awk \'/boot dev=/ ' '{ gsub( ".*dev=" Q, "" ); gsub( Q ".*", "" ); ' 'print; }\' Q="\'" RS="[<>]" | head -1') % fake_name mock_exc.assert_called_once_with(mock.ANY, expected_cmd) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) @mock.patch.object(ssh, '_ssh_execute', autospec=True) def test_management_interface_get_boot_device_xenserver(self, mock_exc, mock_h, mock_get_conn): fake_name = 'fake-name' mock_h.return_value = fake_name mock_exc.return_value = ('n', '') mock_get_conn.return_value = self.sshclient with task_manager.acquire(self.context, self.node.uuid) as task: task.node['driver_info']['ssh_virt_type'] = 'xenserver' result = self.driver.management.get_boot_device(task) self.assertEqual(boot_devices.PXE, result['boot_device']) expected_cmd = ('LC_ALL=C /opt/xensource/bin/xe vm-param-get ' 'uuid=%s --param-name=HVM-boot-params ' 'param-key=order | cut -b 1') % fake_name mock_exc.assert_called_once_with(mock.ANY, expected_cmd) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) def 
test_get_boot_device_not_supported(self, mock_h, mock_get_conn): mock_h.return_value = 'NodeName' mock_get_conn.return_value = self.sshclient with task_manager.acquire(self.context, self.node.uuid) as task: # vmware does not support get_boot_device() task.node['driver_info']['ssh_virt_type'] = 'vmware' expected = {'boot_device': None, 'persistent': None} self.assertEqual(expected, self.driver.management.get_boot_device(task)) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) @mock.patch.object(ssh, '_ssh_execute', autospec=True) def test_get_power_state_vmware(self, mock_exc, mock_h, mock_get_conn): # To see replacing {_NodeName_} in vmware's list_running nodename = 'fakevm' mock_h.return_value = nodename mock_get_conn.return_value = self.sshclient # list_running quotes names mock_exc.return_value = ('"%s"' % nodename, '') with task_manager.acquire(self.context, self.node.uuid) as task: task.node['driver_info']['ssh_virt_type'] = 'vmware' power_state = self.driver.power.get_power_state(task) self.assertEqual(states.POWER_ON, power_state) expected_cmd = ("LC_ALL=C /bin/vim-cmd vmsvc/power.getstate " "%(node)s | grep 'Powered on' >/dev/null && " "echo '\"%(node)s\"' || true") % {'node': nodename} mock_exc.assert_called_once_with(mock.ANY, expected_cmd) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) @mock.patch.object(ssh, '_ssh_execute', autospec=True) def test_get_power_state_xenserver(self, mock_exc, mock_h, mock_get_conn): # To see replacing {_NodeName_} in xenserver's list_running nodename = 'fakevm' mock_h.return_value = nodename mock_get_conn.return_value = self.sshclient mock_exc.return_value = (nodename, '') with task_manager.acquire(self.context, self.node.uuid) as task: task.node['driver_info']['ssh_virt_type'] = 'xenserver' power_state = self.driver.power.get_power_state(task) self.assertEqual(states.POWER_ON, power_state) expected_cmd = ("LC_ALL=C /opt/xensource/bin/xe " "vm-list power-state=running --minimal | tr ',' '\n'") mock_exc.assert_called_once_with(mock.ANY, expected_cmd) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) @mock.patch.object(ssh, '_ssh_execute', autospec=True) @mock.patch.object(ssh, '_get_power_status', autospec=True) def test_start_command_xenserver(self, mock_power, mock_exc, mock_h, mock_get_conn): mock_power.side_effect = [states.POWER_OFF, states.POWER_ON] nodename = 'fakevm' mock_h.return_value = nodename mock_get_conn.return_value = self.sshclient with task_manager.acquire(self.context, self.node.uuid) as task: task.node['driver_info']['ssh_virt_type'] = 'xenserver' self.driver.power.set_power_state(task, states.POWER_ON) expected_cmd = ("LC_ALL=C /opt/xensource/bin/xe " "vm-start uuid=fakevm && sleep 10s") mock_exc.assert_called_once_with(mock.ANY, expected_cmd) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) @mock.patch.object(ssh, '_ssh_execute', autospec=True) @mock.patch.object(ssh, '_get_power_status', autospec=True) def test_stop_command_xenserver(self, mock_power, mock_exc, mock_h, mock_get_conn): mock_power.side_effect = [states.POWER_ON, states.POWER_OFF] nodename = 'fakevm' mock_h.return_value = nodename mock_get_conn.return_value = self.sshclient with task_manager.acquire(self.context, self.node.uuid) as task: 
task.node['driver_info']['ssh_virt_type'] = 'xenserver' self.driver.power.set_power_state(task, states.POWER_OFF) expected_cmd = ("LC_ALL=C /opt/xensource/bin/xe " "vm-shutdown uuid=fakevm force=true") mock_exc.assert_called_once_with(mock.ANY, expected_cmd) def test_management_interface_validate_good(self): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.management.validate(task) def test_management_interface_validate_fail(self): # Missing SSH driver_info information node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake_ssh') with task_manager.acquire(self.context, node.uuid) as task: self.assertRaises(exception.MissingParameterValue, task.driver.management.validate, task) def test_console_validate(self): with task_manager.acquire( self.context, self.node.uuid, shared=True) as task: task.node.driver_info['ssh_virt_type'] = 'virsh' task.node.driver_info['ssh_terminal_port'] = 123 task.driver.console.validate(task) def test_console_validate_missing_port(self): with task_manager.acquire( self.context, self.node.uuid, shared=True) as task: task.node.driver_info['ssh_virt_type'] = 'virsh' task.node.driver_info.pop('ssh_terminal_port', None) self.assertRaises(exception.MissingParameterValue, task.driver.console.validate, task) def test_console_validate_not_virsh(self): with task_manager.acquire( self.context, self.node.uuid, shared=True) as task: task.node.driver_info = db_utils.get_test_ssh_info( virt_type='vbox') self.assertRaisesRegex(exception.InvalidParameterValue, 'not supported for non-virsh types', task.driver.console.validate, task) def test_console_validate_invalid_port(self): with task_manager.acquire( self.context, self.node.uuid, shared=True) as task: task.node.driver_info['ssh_terminal_port'] = '' self.assertRaisesRegex(exception.InvalidParameterValue, 'is not a valid integer', task.driver.console.validate, task) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) @mock.patch.object(console_utils, 'start_shellinabox_console', autospec=True) def test_start_console(self, mock_exec, get_hosts_name_mock, mock_get_conn): info = ssh._parse_driver_info(self.node) mock_exec.return_value = None get_hosts_name_mock.return_value = "NodeName" mock_get_conn.return_value = self.sshclient with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.console.start_console(task) mock_exec.assert_called_once_with(info['uuid'], info['terminal_port'], mock.ANY) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) @mock.patch.object(console_utils, 'start_shellinabox_console', autospec=True) def test_start_console_fail(self, mock_exec, get_hosts_name_mock, mock_get_conn): get_hosts_name_mock.return_value = "NodeName" mock_get_conn.return_value = self.sshclient mock_exec.side_effect = exception.ConsoleSubprocessFailed( error='error') with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.ConsoleSubprocessFailed, self.driver.console.start_console, task) mock_exec.assert_called_once_with(self.node.uuid, mock.ANY, mock.ANY) @mock.patch.object(ssh, '_get_connection', autospec=True) @mock.patch.object(ssh, '_get_hosts_name_for_node', autospec=True) @mock.patch.object(console_utils, 'start_shellinabox_console', autospec=True) def test_start_console_fail_nodir(self, mock_exec, get_hosts_name_mock, mock_get_conn): get_hosts_name_mock.return_value = 
"NodeName" mock_get_conn.return_value = self.sshclient mock_exec.side_effect = exception.ConsoleError() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.ConsoleError, self.driver.console.start_console, task) mock_exec.assert_called_once_with(self.node.uuid, mock.ANY, mock.ANY) @mock.patch.object(console_utils, 'stop_shellinabox_console', autospec=True) def test_stop_console(self, mock_exec): mock_exec.return_value = None with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.console.stop_console(task) mock_exec.assert_called_once_with(self.node.uuid) @mock.patch.object(console_utils, 'stop_shellinabox_console', autospec=True) def test_stop_console_fail(self, mock_stop): mock_stop.side_effect = exception.ConsoleError() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.ConsoleError, self.driver.console.stop_console, task) mock_stop.assert_called_once_with(self.node.uuid) @mock.patch.object(console_utils, 'get_shellinabox_console_url', autospec=True) def test_get_console(self, mock_exec): url = 'http://localhost:4201' mock_exec.return_value = url expected = {'type': 'shellinabox', 'url': url} with task_manager.acquire(self.context, self.node.uuid) as task: task.node.driver_info['ssh_terminal_port'] = 6900 console_info = self.driver.console.get_console(task) self.assertEqual(expected, console_info) mock_exec.assert_called_once_with(6900) ironic-5.1.0/ironic/tests/unit/drivers/modules/test_ipminative.py0000664000567000056710000006537212674513466026504 0ustar jenkinsjenkins00000000000000# coding=utf-8 # Copyright 2013 International Business Machines Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Test class for Native IPMI power driver module. 
""" import mock from oslo_utils import uuidutils from pyghmi import exceptions as pyghmi_exception from ironic.common import boot_devices from ironic.common import driver_factory from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules import console_utils from ironic.drivers.modules import ipminative from ironic.drivers import utils as driver_utils from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = db_utils.get_test_ipmi_info() class IPMINativePrivateMethodTestCase(db_base.DbTestCase): """Test cases for ipminative private methods.""" def setUp(self): super(IPMINativePrivateMethodTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context, driver='fake_ipminative', driver_info=INFO_DICT) self.info = ipminative._parse_driver_info(self.node) def test__parse_driver_info(self): # make sure we get back the expected things self.assertIsNotNone(self.info.get('address')) self.assertIsNotNone(self.info.get('username')) self.assertIsNotNone(self.info.get('password')) self.assertIsNotNone(self.info.get('uuid')) self.assertIsNotNone(self.info.get('force_boot_device')) # make sure error is raised when info, eg. username, is missing info = dict(INFO_DICT) del info['ipmi_username'] node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.MissingParameterValue, ipminative._parse_driver_info, node) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test__power_status_on(self, ipmi_mock): ipmicmd = ipmi_mock.return_value ipmicmd.get_power.return_value = {'powerstate': 'on'} state = ipminative._power_status(self.info) ipmicmd.get_power.assert_called_once_with() self.assertEqual(states.POWER_ON, state) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test__power_status_off(self, ipmi_mock): ipmicmd = ipmi_mock.return_value ipmicmd.get_power.return_value = {'powerstate': 'off'} state = ipminative._power_status(self.info) ipmicmd.get_power.assert_called_once_with() self.assertEqual(states.POWER_OFF, state) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test__power_status_error(self, ipmi_mock): ipmicmd = ipmi_mock.return_value ipmicmd.get_power.return_value = {'powerstate': 'Error'} state = ipminative._power_status(self.info) ipmicmd.get_power.assert_called_once_with() self.assertEqual(states.ERROR, state) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test__power_on(self, ipmi_mock): ipmicmd = ipmi_mock.return_value ipmicmd.set_power.return_value = {'powerstate': 'on'} self.config(retry_timeout=400, group='ipmi') state = ipminative._power_on(self.info) ipmicmd.set_power.assert_called_once_with('on', 400) self.assertEqual(states.POWER_ON, state) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test__power_off(self, ipmi_mock): ipmicmd = ipmi_mock.return_value ipmicmd.set_power.return_value = {'powerstate': 'off'} self.config(retry_timeout=500, group='ipmi') state = ipminative._power_off(self.info) ipmicmd.set_power.assert_called_once_with('off', 500) self.assertEqual(states.POWER_OFF, state) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test__reboot(self, ipmi_mock): ipmicmd = ipmi_mock.return_value ipmicmd.set_power.return_value = {'powerstate': 'on'} self.config(retry_timeout=600, group='ipmi') state = ipminative._reboot(self.info) 
ipmicmd.set_power.assert_called_once_with('boot', 600) self.assertEqual(states.POWER_ON, state) def _create_sensor_object(self, value, type_, name, states=None, units='fake_units', health=0): if states is None: states = [] return type('Reading', (object, ), { 'value': value, 'type': type_, 'name': name, 'states': states, 'units': units, 'health': health})() @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test__get_sensors_data(self, ipmi_mock): reading_1 = self._create_sensor_object('fake_value1', 'fake_type_A', 'fake_name1') reading_2 = self._create_sensor_object('fake_value2', 'fake_type_A', 'fake_name2') reading_3 = self._create_sensor_object('fake_value3', 'fake_type_B', 'fake_name3') readings = [reading_1, reading_2, reading_3] ipmicmd = ipmi_mock.return_value ipmicmd.get_sensor_data.return_value = readings expected = { 'fake_type_A': { 'fake_name1': { 'Health': '0', 'Sensor ID': 'fake_name1', 'Sensor Reading': 'fake_value1 fake_units', 'States': '[]', 'Units': 'fake_units' }, 'fake_name2': { 'Health': '0', 'Sensor ID': 'fake_name2', 'Sensor Reading': 'fake_value2 fake_units', 'States': '[]', 'Units': 'fake_units' } }, 'fake_type_B': { 'fake_name3': { 'Health': '0', 'Sensor ID': 'fake_name3', 'Sensor Reading': 'fake_value3 fake_units', 'States': '[]', 'Units': 'fake_units' } } } ret = ipminative._get_sensors_data(self.info) self.assertEqual(expected, ret) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test__get_sensors_data_missing_values(self, ipmi_mock): reading_1 = self._create_sensor_object('fake_value1', 'fake_type_A', 'fake_name1') reading_2 = self._create_sensor_object(None, 'fake_type_A', 'fake_name2') reading_3 = self._create_sensor_object(None, 'fake_type_B', 'fake_name3') readings = [reading_1, reading_2, reading_3] ipmicmd = ipmi_mock.return_value ipmicmd.get_sensor_data.return_value = readings expected = { 'fake_type_A': { 'fake_name1': { 'Health': '0', 'Sensor ID': 'fake_name1', 'Sensor Reading': 'fake_value1 fake_units', 'States': '[]', 'Units': 'fake_units' } } } ret = ipminative._get_sensors_data(self.info) self.assertEqual(expected, ret) def test__parse_raw_bytes_ok(self): bytes_string = '0x11 0x12 0x25 0xFF' netfn, cmd, data = ipminative._parse_raw_bytes(bytes_string) self.assertEqual(0x11, netfn) self.assertEqual(0x12, cmd) self.assertEqual([0x25, 0xFF], data) def test__parse_raw_bytes_invalid_value(self): bytes_string = '0x11 oops' self.assertRaises(exception.InvalidParameterValue, ipminative._parse_raw_bytes, bytes_string) def test__parse_raw_bytes_missing_byte(self): bytes_string = '0x11' self.assertRaises(exception.InvalidParameterValue, ipminative._parse_raw_bytes, bytes_string) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test__send_raw(self, ipmi_mock): ipmicmd = ipmi_mock.return_value ipminative._send_raw(self.info, '0x01 0x02 0x03 0x04') ipmicmd.xraw_command.assert_called_once_with(1, 2, data=[3, 4]) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test__send_raw_fail(self, ipmi_mock): ipmicmd = ipmi_mock.return_value ipmicmd.xraw_command.side_effect = pyghmi_exception.IpmiException() self.assertRaises(exception.IPMIFailure, ipminative._send_raw, self.info, '0x01 0x02') class IPMINativeDriverTestCase(db_base.DbTestCase): """Test cases for ipminative.NativeIPMIPower class functions.""" def setUp(self): super(IPMINativeDriverTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="fake_ipminative") self.driver = driver_factory.get_driver("fake_ipminative") self.node = 
obj_utils.create_test_node(self.context, driver='fake_ipminative', driver_info=INFO_DICT) self.info = ipminative._parse_driver_info(self.node) def test_get_properties(self): expected = ipminative.COMMON_PROPERTIES self.assertEqual(expected, self.driver.power.get_properties()) self.assertEqual(expected, self.driver.management.get_properties()) self.assertEqual(expected, self.driver.vendor.get_properties()) expected = list(ipminative.COMMON_PROPERTIES) expected += list(ipminative.CONSOLE_PROPERTIES) self.assertEqual(sorted(expected), sorted(self.driver.console.get_properties().keys())) self.assertEqual(sorted(expected), sorted(self.driver.get_properties().keys())) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test_get_power_state(self, ipmi_mock): # Getting the mocked command. cmd_mock = ipmi_mock.return_value # Getting the get power mock. get_power_mock = cmd_mock.get_power return_values = [{'powerstate': 'error'}, {'powerstate': 'on'}, {'powerstate': 'off'}] get_power_mock.side_effect = lambda: return_values.pop() with task_manager.acquire(self.context, self.node.uuid) as task: pstate = self.driver.power.get_power_state(task) self.assertEqual(states.POWER_OFF, pstate) pstate = self.driver.power.get_power_state(task) self.assertEqual(states.POWER_ON, pstate) pstate = self.driver.power.get_power_state(task) self.assertEqual(states.ERROR, pstate) self.assertEqual(3, get_power_mock.call_count, "pyghmi.ipmi.command.Command.get_power was not" " called 3 times.") @mock.patch.object(ipminative, '_power_on', autospec=True) def test_set_power_on_ok(self, power_on_mock): power_on_mock.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.power.set_power_state( task, states.POWER_ON) power_on_mock.assert_called_once_with(self.info) @mock.patch.object(driver_utils, 'ensure_next_boot_device', autospec=True) @mock.patch.object(ipminative, '_power_on', autospec=True) def test_set_power_on_with_next_boot(self, power_on_mock, mock_next_boot): power_on_mock.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.power.set_power_state( task, states.POWER_ON) mock_next_boot.assert_called_once_with(task, self.info) power_on_mock.assert_called_once_with(self.info) @mock.patch.object(ipminative, '_power_off', autospec=True) def test_set_power_off_ok(self, power_off_mock): power_off_mock.return_value = states.POWER_OFF with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.power.set_power_state( task, states.POWER_OFF) power_off_mock.assert_called_once_with(self.info) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test_set_power_on_fail(self, ipmi_mock): ipmicmd = ipmi_mock.return_value ipmicmd.set_power.return_value = {'powerstate': 'error'} self.config(retry_timeout=500, group='ipmi') with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.PowerStateFailure, self.driver.power.set_power_state, task, states.POWER_ON) ipmicmd.set_power.assert_called_once_with('on', 500) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test_set_boot_device_ok(self, ipmi_mock): ipmicmd = ipmi_mock.return_value ipmicmd.set_bootdev.return_value = None with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.management.set_boot_device(task, boot_devices.PXE) # PXE is converted to 'network' internally by ipminative ipmicmd.set_bootdev.assert_called_once_with('network', persist=False) 
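# --- Editor's illustrative sketch (not part of the original suite) ---
# The assertions around here pin down how ipminative translates ironic's
# generic boot device constants into the strings pyghmi understands:
# boot_devices.PXE is sent as 'network' (test_set_boot_device_ok above),
# and a reported 'hd' maps back to boot_devices.DISK
# (test_management_interface_get_boot_device_good below). A minimal
# standalone helper in that spirit could look like the following; the
# helper names and the completeness of the table are assumptions for
# illustration, not the module's actual implementation.

_EXAMPLE_BOOT_DEVICE_MAP = {
    boot_devices.PXE: 'network',
    boot_devices.DISK: 'hd',
}


def _example_to_pyghmi_bootdev(device):
    """Translate an ironic boot device constant to a pyghmi bootdev."""
    try:
        return _EXAMPLE_BOOT_DEVICE_MAP[device]
    except KeyError:
        # mirrors the InvalidParameterValue behaviour exercised by
        # test_set_boot_device_bad_device below
        raise exception.InvalidParameterValue(
            'unsupported boot device: %s' % device)


def _example_from_pyghmi_bootdev(bootdev):
    """Reverse translation; unknown values come back as None."""
    reverse = {v: k for k, v in _EXAMPLE_BOOT_DEVICE_MAP.items()}
    return reverse.get(bootdev)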
@mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test_force_set_boot_device_ok(self, ipmi_mock): ipmicmd = ipmi_mock.return_value ipmicmd.set_bootdev.return_value = None with task_manager.acquire(self.context, self.node.uuid) as task: task.node.driver_info['ipmi_force_boot_device'] = True self.driver.management.set_boot_device(task, boot_devices.PXE) task.node.refresh() self.assertEqual( False, task.node.driver_internal_info['is_next_boot_persistent'] ) # PXE is converted to 'network' internally by ipminative ipmicmd.set_bootdev.assert_called_once_with('network', persist=False) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test_set_boot_device_with_persistent(self, ipmi_mock): ipmicmd = ipmi_mock.return_value ipmicmd.set_bootdev.return_value = None with task_manager.acquire(self.context, self.node.uuid) as task: task.node.driver_info['ipmi_force_boot_device'] = True self.driver.management.set_boot_device(task, boot_devices.PXE, True) self.assertEqual( boot_devices.PXE, task.node.driver_internal_info['persistent_boot_device']) # PXE is converted to 'network' internally by ipminative ipmicmd.set_bootdev.assert_called_once_with('network', persist=False) def test_set_boot_device_bad_device(self): with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.InvalidParameterValue, self.driver.management.set_boot_device, task, 'fake-device') @mock.patch.object(driver_utils, 'ensure_next_boot_device', autospec=True) @mock.patch.object(ipminative, '_reboot', autospec=True) def test_reboot_ok(self, reboot_mock, mock_next_boot): reboot_mock.return_value = None with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.power.reboot(task) mock_next_boot.assert_called_once_with(task, self.info) reboot_mock.assert_called_once_with(self.info) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test_reboot_fail(self, ipmi_mock): ipmicmd = ipmi_mock.return_value ipmicmd.set_power.return_value = {'powerstate': 'error'} self.config(retry_timeout=500, group='ipmi') with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.PowerStateFailure, self.driver.power.reboot, task) ipmicmd.set_power.assert_called_once_with('boot', 500) def test_management_interface_get_supported_boot_devices(self): with task_manager.acquire(self.context, self.node.uuid) as task: expected = [boot_devices.PXE, boot_devices.DISK, boot_devices.CDROM, boot_devices.BIOS] self.assertEqual(sorted(expected), sorted(task.driver.management. 
get_supported_boot_devices(task))) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test_management_interface_get_boot_device_good(self, ipmi_mock): ipmicmd = ipmi_mock.return_value ipmicmd.get_bootdev.return_value = {'bootdev': 'hd'} with task_manager.acquire(self.context, self.node.uuid) as task: bootdev = self.driver.management.get_boot_device(task) self.assertEqual(boot_devices.DISK, bootdev['boot_device']) self.assertIsNone(bootdev['persistent']) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test_management_interface_get_boot_device_persistent(self, ipmi_mock): ipmicmd = ipmi_mock.return_value ipmicmd.get_bootdev.return_value = {'bootdev': 'hd', 'persistent': True} with task_manager.acquire(self.context, self.node.uuid) as task: bootdev = self.driver.management.get_boot_device(task) self.assertEqual(boot_devices.DISK, bootdev['boot_device']) self.assertTrue(bootdev['persistent']) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test_management_interface_get_boot_device_fail(self, ipmi_mock): ipmicmd = ipmi_mock.return_value ipmicmd.get_bootdev.side_effect = pyghmi_exception.IpmiException with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.IPMIFailure, self.driver.management.get_boot_device, task) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test_management_interface_get_boot_device_fail_dict(self, ipmi_mock): ipmicmd = ipmi_mock.return_value ipmicmd.get_bootdev.return_value = {'error': 'boooom'} with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.IPMIFailure, self.driver.management.get_boot_device, task) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test_management_interface_get_boot_device_unknown(self, ipmi_mock): ipmicmd = ipmi_mock.return_value ipmicmd.get_bootdev.return_value = {'bootdev': 'unknown'} with task_manager.acquire(self.context, self.node.uuid) as task: expected = {'boot_device': None, 'persistent': None} self.assertEqual(expected, self.driver.management.get_boot_device(task)) def test_get_force_boot_device_persistent(self): with task_manager.acquire(self.context, self.node.uuid) as task: task.node.driver_info['ipmi_force_boot_device'] = True task.node.driver_internal_info['persistent_boot_device'] = 'pxe' bootdev = self.driver.management.get_boot_device(task) self.assertEqual('pxe', bootdev['boot_device']) self.assertTrue(bootdev['persistent']) def test_management_interface_validate_good(self): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.management.validate(task) def test_management_interface_validate_fail(self): # Missing IPMI driver_info information node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake_ipminative') with task_manager.acquire(self.context, node.uuid) as task: self.assertRaises(exception.MissingParameterValue, task.driver.management.validate, task) @mock.patch('pyghmi.ipmi.command.Command', autospec=True) def test_get_sensors_data(self, ipmi_mock): ipmicmd = ipmi_mock.return_value ipmicmd.get_sensor_data.return_value = None with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.management.get_sensors_data(task) ipmicmd.get_sensor_data.assert_called_once_with() @mock.patch.object(console_utils, 'start_shellinabox_console', autospec=True) def test_start_console(self, mock_exec): mock_exec.return_value = None with task_manager.acquire(self.context, self.node.uuid) as task: 
self.driver.console.start_console(task) mock_exec.assert_called_once_with(self.info['uuid'], self.info['port'], mock.ANY) self.assertTrue(mock_exec.called) @mock.patch.object(console_utils, 'start_shellinabox_console', autospec=True) def test_start_console_fail(self, mock_exec): mock_exec.side_effect = iter( [exception.ConsoleSubprocessFailed(error='error')]) with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.ConsoleSubprocessFailed, self.driver.console.start_console, task) @mock.patch.object(console_utils, 'stop_shellinabox_console', autospec=True) def test_stop_console(self, mock_exec): mock_exec.return_value = None with task_manager.acquire(self.context, self.node['uuid']) as task: self.driver.console.stop_console(task) mock_exec.assert_called_once_with(self.info['uuid']) self.assertTrue(mock_exec.called) @mock.patch.object(console_utils, 'stop_shellinabox_console', autospec=True) def test_stop_console_fail(self, mock_stop): mock_stop.side_effect = iter([exception.ConsoleError()]) with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.ConsoleError, self.driver.console.stop_console, task) mock_stop.assert_called_once_with(self.node.uuid) @mock.patch.object(console_utils, 'get_shellinabox_console_url', autospec=True) def test_get_console(self, mock_exec): url = 'http://localhost:4201' mock_exec.return_value = url expected = {'type': 'shellinabox', 'url': url} with task_manager.acquire(self.context, self.node.uuid) as task: console_info = self.driver.console.get_console(task) self.assertEqual(expected, console_info) mock_exec.assert_called_once_with(self.info['port']) self.assertTrue(mock_exec.called) @mock.patch.object(ipminative, '_parse_driver_info', autospec=True) @mock.patch.object(ipminative, '_parse_raw_bytes', autospec=True) def test_vendor_passthru_validate__send_raw_bytes_good(self, mock_raw, mock_driver): with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.vendor.validate(task, method='send_raw', http_method='POST', raw_bytes='0x00 0x01') mock_raw.assert_called_once_with('0x00 0x01') mock_driver.assert_called_once_with(task.node) def test_vendor_passthru_validate__send_raw_bytes_fail(self): with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.MissingParameterValue, self.driver.vendor.validate, task, method='send_raw') def test_vendor_passthru_vendor_routes(self): expected = ['send_raw', 'bmc_reset'] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: vendor_routes = task.driver.vendor.vendor_routes self.assertIsInstance(vendor_routes, dict) self.assertEqual(sorted(expected), sorted(vendor_routes)) @mock.patch.object(ipminative, '_send_raw', autospec=True) def test_send_raw(self, send_raw_mock): bytes = '0x00 0x01' with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.vendor.send_raw(task, http_method='POST', raw_bytes=bytes) send_raw_mock.assert_called_once_with(self.info, bytes) @mock.patch.object(ipminative, '_send_raw', autospec=True) def _test_bmc_reset(self, warm, send_raw_mock): expected_bytes = '0x06 0x03' if warm else '0x06 0x02' with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.vendor.bmc_reset(task, http_method='POST', warm=warm) send_raw_mock.assert_called_once_with(self.info, expected_bytes) def test_bmc_reset_cold(self): self._test_bmc_reset(False) def test_bmc_reset_warm(self): self._test_bmc_reset(True) 
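# --- Editor's illustrative sketch (not part of the original suite) ---
# The raw-bytes tests above fix the contract of ipminative._parse_raw_bytes:
# the first hex token is the IPMI netfn, the second is the command, and any
# remaining tokens become the data payload, so '0x11 0x12 0x25 0xFF' parses
# to (0x11, 0x12, [0x25, 0xFF]); fewer than two tokens or a non-hex token
# is rejected. A minimal standalone parser in that spirit (the name and the
# use of ValueError instead of exception.InvalidParameterValue are
# assumptions for illustration):


def _example_parse_raw_bytes(raw_bytes):
    """Split a space-separated hex string into (netfn, command, data)."""
    try:
        values = [int(token, 16) for token in raw_bytes.split()]
    except ValueError:
        raise ValueError('every token must be a hex byte: %r' % raw_bytes)
    if len(values) < 2:
        raise ValueError('need at least a netfn byte and a command byte')
    return values[0], values[1], values[2:]

# For example, _example_parse_raw_bytes('0x06 0x02') yields (6, 2, []),
# the cold BMC reset request that _test_bmc_reset(False) above expects to
# be sent via _send_raw.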
ironic-5.1.0/ironic/tests/unit/drivers/modules/test_ipmitool.py0000664000567000056710000027025512674513466026171 0ustar jenkinsjenkins00000000000000# coding=utf-8 # Copyright 2012 Hewlett-Packard Development Company, L.P. # Copyright (c) 2012 NTT DOCOMO, INC. # Copyright 2014 International Business Machines Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """Test class for IPMITool driver module.""" import os import stat import subprocess import tempfile import time import types import mock from oslo_concurrency import processutils from oslo_config import cfg from oslo_utils import uuidutils import six from ironic.common import boot_devices from ironic.common import driver_factory from ironic.common import exception from ironic.common import states from ironic.common import utils from ironic.conductor import task_manager from ironic.drivers.modules import console_utils from ironic.drivers.modules import ipmitool as ipmi from ironic.drivers import utils as driver_utils from ironic.tests import base from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils CONF = cfg.CONF CONF.import_opt('min_command_interval', 'ironic.drivers.modules.ipminative', group='ipmi') INFO_DICT = db_utils.get_test_ipmi_info() # BRIDGE_INFO_DICT will have all the bridging parameters appended BRIDGE_INFO_DICT = INFO_DICT.copy() BRIDGE_INFO_DICT.update(db_utils.get_test_ipmi_bridging_parameters()) class IPMIToolCheckInitTestCase(base.TestCase): @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_power_init_calls(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = None ipmi.IPMIPower() mock_support.assert_called_with(mock.ANY) mock_check_dir.assert_called_once_with() @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_power_init_calls_raises_1(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = None mock_check_dir.side_effect = iter( [exception.PathNotFound(dir="foo_dir")]) self.assertRaises(exception.PathNotFound, ipmi.IPMIPower) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_power_init_calls_raises_2(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = None mock_check_dir.side_effect = iter( [exception.DirectoryNotWritable(dir="foo_dir")]) self.assertRaises(exception.DirectoryNotWritable, ipmi.IPMIPower) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_power_init_calls_raises_3(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = None mock_check_dir.side_effect = 
iter([exception.InsufficientDiskSpace( path="foo_dir", required=1, actual=0)]) self.assertRaises(exception.InsufficientDiskSpace, ipmi.IPMIPower) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_power_init_calls_already_checked(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = True ipmi.IPMIPower() mock_support.assert_called_with(mock.ANY) self.assertEqual(0, mock_check_dir.call_count) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_management_init_calls(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = None ipmi.IPMIManagement() mock_support.assert_called_with(mock.ANY) mock_check_dir.assert_called_once_with() @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_management_init_calls_already_checked(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = False ipmi.IPMIManagement() mock_support.assert_called_with(mock.ANY) self.assertEqual(0, mock_check_dir.call_count) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_vendor_passthru_init_calls(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = None ipmi.VendorPassthru() mock_support.assert_called_with(mock.ANY) mock_check_dir.assert_called_once_with() @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_vendor_passthru_init_calls_already_checked(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = True ipmi.VendorPassthru() mock_support.assert_called_with(mock.ANY) self.assertEqual(0, mock_check_dir.call_count) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_console_init_calls(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = None ipmi.IPMIShellinaboxConsole() mock_support.assert_called_with(mock.ANY) mock_check_dir.assert_called_once_with() @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_console_init_calls_already_checked(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = True ipmi.IPMIShellinaboxConsole() mock_support.assert_called_with(mock.ANY) self.assertEqual(0, mock_check_dir.call_count) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(subprocess, 'check_call', autospec=True) class IPMIToolCheckOptionSupportedTestCase(base.TestCase): def test_check_timing_pass(self, mock_chkcall, mock_support): mock_chkcall.return_value = (None, None) mock_support.return_value = None expected = [mock.call('timing'), mock.call('timing', True)] ipmi._check_option_support(['timing']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def test_check_timing_fail(self, mock_chkcall, mock_support): mock_chkcall.side_effect = iter( [subprocess.CalledProcessError(1, 'ipmitool')]) mock_support.return_value = None expected = [mock.call('timing'), mock.call('timing', False)] ipmi._check_option_support(['timing']) self.assertTrue(mock_chkcall.called) 
self.assertEqual(expected, mock_support.call_args_list) def test_check_timing_no_ipmitool(self, mock_chkcall, mock_support): mock_chkcall.side_effect = iter([OSError()]) mock_support.return_value = None expected = [mock.call('timing')] self.assertRaises(OSError, ipmi._check_option_support, ['timing']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def test_check_single_bridge_pass(self, mock_chkcall, mock_support): mock_chkcall.return_value = (None, None) mock_support.return_value = None expected = [mock.call('single_bridge'), mock.call('single_bridge', True)] ipmi._check_option_support(['single_bridge']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def test_check_single_bridge_fail(self, mock_chkcall, mock_support): mock_chkcall.side_effect = iter( [subprocess.CalledProcessError(1, 'ipmitool')]) mock_support.return_value = None expected = [mock.call('single_bridge'), mock.call('single_bridge', False)] ipmi._check_option_support(['single_bridge']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def test_check_single_bridge_no_ipmitool(self, mock_chkcall, mock_support): mock_chkcall.side_effect = iter([OSError()]) mock_support.return_value = None expected = [mock.call('single_bridge')] self.assertRaises(OSError, ipmi._check_option_support, ['single_bridge']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def test_check_dual_bridge_pass(self, mock_chkcall, mock_support): mock_chkcall.return_value = (None, None) mock_support.return_value = None expected = [mock.call('dual_bridge'), mock.call('dual_bridge', True)] ipmi._check_option_support(['dual_bridge']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def test_check_dual_bridge_fail(self, mock_chkcall, mock_support): mock_chkcall.side_effect = iter( [subprocess.CalledProcessError(1, 'ipmitool')]) mock_support.return_value = None expected = [mock.call('dual_bridge'), mock.call('dual_bridge', False)] ipmi._check_option_support(['dual_bridge']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def test_check_dual_bridge_no_ipmitool(self, mock_chkcall, mock_support): mock_chkcall.side_effect = iter([OSError()]) mock_support.return_value = None expected = [mock.call('dual_bridge')] self.assertRaises(OSError, ipmi._check_option_support, ['dual_bridge']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def test_check_all_options_pass(self, mock_chkcall, mock_support): mock_chkcall.return_value = (None, None) mock_support.return_value = None expected = [ mock.call('timing'), mock.call('timing', True), mock.call('single_bridge'), mock.call('single_bridge', True), mock.call('dual_bridge'), mock.call('dual_bridge', True)] ipmi._check_option_support(['timing', 'single_bridge', 'dual_bridge']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def test_check_all_options_fail(self, mock_chkcall, mock_support): options = ['timing', 'single_bridge', 'dual_bridge'] mock_chkcall.side_effect = iter( [subprocess.CalledProcessError(1, 'ipmitool')] * len(options)) mock_support.return_value = None expected = [ mock.call('timing'), mock.call('timing', False), mock.call('single_bridge'), mock.call('single_bridge', False), mock.call('dual_bridge'), mock.call('dual_bridge', False)] 
ipmi._check_option_support(options) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def test_check_all_options_no_ipmitool(self, mock_chkcall, mock_support): mock_chkcall.side_effect = iter([OSError()]) mock_support.return_value = None # the exception is raised on the first command for which ipmitool is not found expected = [mock.call('timing')] self.assertRaises(OSError, ipmi._check_option_support, ['timing', 'single_bridge', 'dual_bridge']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) @mock.patch.object(time, 'sleep', autospec=True) class IPMIToolPrivateMethodTestCase(db_base.DbTestCase): def setUp(self): super(IPMIToolPrivateMethodTestCase, self).setUp() self.node = obj_utils.get_test_node( self.context, driver='fake_ipmitool', driver_info=INFO_DICT) self.info = ipmi._parse_driver_info(self.node) def _test__make_password_file(self, mock_sleep, input_password, exception_to_raise=None): pw_file = None try: with ipmi._make_password_file(input_password) as pw_file: if exception_to_raise is not None: raise exception_to_raise self.assertTrue(os.path.isfile(pw_file)) self.assertEqual(0o600, os.stat(pw_file)[stat.ST_MODE] & 0o777) with open(pw_file, "r") as f: password = f.read() self.assertEqual(str(input_password), password) finally: if pw_file is not None: self.assertFalse(os.path.isfile(pw_file)) def test__make_password_file_str_password(self, mock_sleep): self._test__make_password_file(mock_sleep, self.info.get('password')) def test__make_password_file_with_numeric_password(self, mock_sleep): self._test__make_password_file(mock_sleep, 12345) def test__make_password_file_caller_exception(self, mock_sleep): # Test caller raising exception result = self.assertRaises( ValueError, self._test__make_password_file, mock_sleep, 12345, ValueError('we should fail')) self.assertEqual('we should fail', six.text_type(result)) @mock.patch.object(tempfile, 'NamedTemporaryFile', new=mock.MagicMock(side_effect=OSError('Test Error'))) def test__make_password_file_tempfile_known_exception(self, mock_sleep): # Test OSError exception in _make_password_file for # tempfile.NamedTemporaryFile self.assertRaises( exception.PasswordFileFailedToCreate, self._test__make_password_file, mock_sleep, 12345) @mock.patch.object( tempfile, 'NamedTemporaryFile', new=mock.MagicMock(side_effect=OverflowError('Test Error'))) def test__make_password_file_tempfile_unknown_exception(self, mock_sleep): # Test exception in _make_password_file for tempfile.NamedTemporaryFile result = self.assertRaises( OverflowError, self._test__make_password_file, mock_sleep, 12345) self.assertEqual('Test Error', six.text_type(result)) def test__make_password_file_write_exception(self, mock_sleep): # Test exception in _make_password_file for write() mock_namedtemp = mock.mock_open(mock.MagicMock(name='JLV')) with mock.patch('tempfile.NamedTemporaryFile', mock_namedtemp): mock_filehandle = mock_namedtemp.return_value mock_write = mock_filehandle.write mock_write.side_effect = OSError('Test 2 Error') self.assertRaises( exception.PasswordFileFailedToCreate, self._test__make_password_file, mock_sleep, 12345) def test__parse_driver_info(self, mock_sleep): # make sure we get back the expected things _OPTIONS = ['address', 'username', 'password', 'uuid'] for option in _OPTIONS: self.assertIsNotNone(self.info.get(option)) info = dict(INFO_DICT) # test the default value for 'priv_level' node = obj_utils.get_test_node(self.context, driver_info=info) ret =
ipmi._parse_driver_info(node) self.assertEqual('ADMINISTRATOR', ret['priv_level']) # ipmi_username / ipmi_password are not mandatory del info['ipmi_username'] node = obj_utils.get_test_node(self.context, driver_info=info) ipmi._parse_driver_info(node) del info['ipmi_password'] node = obj_utils.get_test_node(self.context, driver_info=info) ipmi._parse_driver_info(node) # make sure error is raised when ipmi_address is missing info = dict(INFO_DICT) del info['ipmi_address'] node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.MissingParameterValue, ipmi._parse_driver_info, node) # test the invalid priv_level value info = dict(INFO_DICT) info['ipmi_priv_level'] = 'ABCD' node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.InvalidParameterValue, ipmi._parse_driver_info, node) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) def test__parse_driver_info_with_invalid_bridging_type( self, mock_support, mock_sleep): info = BRIDGE_INFO_DICT.copy() # make sure error is raised when ipmi_bridging has unexpected value info['ipmi_bridging'] = 'junk' node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.InvalidParameterValue, ipmi._parse_driver_info, node) self.assertFalse(mock_support.called) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) def test__parse_driver_info_with_no_bridging( self, mock_support, mock_sleep): _OPTIONS = ['address', 'username', 'password', 'uuid'] _BRIDGING_OPTIONS = ['local_address', 'transit_channel', 'transit_address', 'target_channel', 'target_address'] info = BRIDGE_INFO_DICT.copy() info['ipmi_bridging'] = 'no' node = obj_utils.get_test_node(self.context, driver='fake_ipmitool', driver_info=info) ret = ipmi._parse_driver_info(node) # ensure that _is_option_supported was not called self.assertFalse(mock_support.called) # check if we got all the required options for option in _OPTIONS: self.assertIsNotNone(ret[option]) # test the default value for 'priv_level' self.assertEqual('ADMINISTRATOR', ret['priv_level']) # check if bridging parameters were set to None for option in _BRIDGING_OPTIONS: self.assertIsNone(ret[option]) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) def test__parse_driver_info_with_dual_bridging_pass( self, mock_support, mock_sleep): _OPTIONS = ['address', 'username', 'password', 'uuid', 'local_address', 'transit_channel', 'transit_address', 'target_channel', 'target_address'] node = obj_utils.get_test_node(self.context, driver='fake_ipmitool', driver_info=BRIDGE_INFO_DICT) expected = [mock.call('dual_bridge')] # test double bridging and make sure we get back expected result mock_support.return_value = True ret = ipmi._parse_driver_info(node) self.assertEqual(expected, mock_support.call_args_list) for option in _OPTIONS: self.assertIsNotNone(ret[option]) # test the default value for 'priv_level' self.assertEqual('ADMINISTRATOR', ret['priv_level']) info = BRIDGE_INFO_DICT.copy() # ipmi_local_address / ipmi_username / ipmi_password are not mandatory for optional_arg in ['ipmi_local_address', 'ipmi_username', 'ipmi_password']: del info[optional_arg] node = obj_utils.get_test_node(self.context, driver_info=info) ipmi._parse_driver_info(node) self.assertEqual(mock.call('dual_bridge'), mock_support.call_args) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) def test__parse_driver_info_with_dual_bridging_not_supported( self, mock_support, mock_sleep): node = 
obj_utils.get_test_node(self.context, driver='fake_ipmitool', driver_info=BRIDGE_INFO_DICT) # if dual bridge is not supported then check if error is raised mock_support.return_value = False self.assertRaises(exception.InvalidParameterValue, ipmi._parse_driver_info, node) mock_support.assert_called_once_with('dual_bridge') @mock.patch.object(ipmi, '_is_option_supported', autospec=True) def test__parse_driver_info_with_dual_bridging_missing_parameters( self, mock_support, mock_sleep): info = BRIDGE_INFO_DICT.copy() mock_support.return_value = True # make sure error is raised when dual bridging is selected and the # required parameters for dual bridging are not provided for param in ['ipmi_transit_channel', 'ipmi_target_address', 'ipmi_transit_address', 'ipmi_target_channel']: del info[param] node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.MissingParameterValue, ipmi._parse_driver_info, node) self.assertEqual(mock.call('dual_bridge'), mock_support.call_args) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) def test__parse_driver_info_with_single_bridging_pass( self, mock_support, mock_sleep): _OPTIONS = ['address', 'username', 'password', 'uuid', 'local_address', 'target_channel', 'target_address'] info = BRIDGE_INFO_DICT.copy() info['ipmi_bridging'] = 'single' node = obj_utils.get_test_node(self.context, driver='fake_ipmitool', driver_info=info) expected = [mock.call('single_bridge')] # test single bridging and make sure we get back expected things mock_support.return_value = True ret = ipmi._parse_driver_info(node) self.assertEqual(expected, mock_support.call_args_list) for option in _OPTIONS: self.assertIsNotNone(ret[option]) # test the default value for 'priv_level' self.assertEqual('ADMINISTRATOR', ret['priv_level']) # check if dual bridge params are set to None self.assertIsNone(ret['transit_channel']) self.assertIsNone(ret['transit_address']) # ipmi_local_address / ipmi_username / ipmi_password are not mandatory for optional_arg in ['ipmi_local_address', 'ipmi_username', 'ipmi_password']: del info[optional_arg] node = obj_utils.get_test_node(self.context, driver_info=info) ipmi._parse_driver_info(node) self.assertEqual(mock.call('single_bridge'), mock_support.call_args) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) def test__parse_driver_info_with_single_bridging_not_supported( self, mock_support, mock_sleep): info = BRIDGE_INFO_DICT.copy() info['ipmi_bridging'] = 'single' node = obj_utils.get_test_node(self.context, driver='fake_ipmitool', driver_info=info) # if single bridge is not supported then check if error is raised mock_support.return_value = False self.assertRaises(exception.InvalidParameterValue, ipmi._parse_driver_info, node) mock_support.assert_called_once_with('single_bridge') @mock.patch.object(ipmi, '_is_option_supported', autospec=True) def test__parse_driver_info_with_single_bridging_missing_parameters( self, mock_support, mock_sleep): info = dict(BRIDGE_INFO_DICT) info['ipmi_bridging'] = 'single' mock_support.return_value = True # make sure error is raised when single bridging is selected and the # required parameters for single bridging are not provided for param in ['ipmi_target_channel', 'ipmi_target_address']: del info[param] node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.MissingParameterValue, ipmi._parse_driver_info, node) self.assertEqual(mock.call('single_bridge'), mock_support.call_args) def test__parse_driver_info_numeric_password( self, 
mock_sleep): # ipmi_password must not be converted to int / float # even if it includes just numbers. info = dict(INFO_DICT) info['ipmi_password'] = 12345678 node = obj_utils.get_test_node(self.context, driver_info=info) ret = ipmi._parse_driver_info(node) self.assertEqual(six.u('12345678'), ret['password']) self.assertIsInstance(ret['password'], six.text_type) def test__parse_driver_info_ipmi_prot_version_1_5(self, mock_sleep): info = dict(INFO_DICT) info['ipmi_protocol_version'] = '1.5' node = obj_utils.get_test_node(self.context, driver_info=info) ret = ipmi._parse_driver_info(node) self.assertEqual('1.5', ret['protocol_version']) def test__parse_driver_info_invalid_ipmi_prot_version(self, mock_sleep): info = dict(INFO_DICT) info['ipmi_protocol_version'] = '9000' node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.InvalidParameterValue, ipmi._parse_driver_info, node) def test__parse_driver_info_invalid_ipmi_port(self, mock_sleep): info = dict(INFO_DICT) info['ipmi_port'] = '700000' node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.InvalidParameterValue, ipmi._parse_driver_info, node) def test__parse_driver_info_ipmi_port_valid(self, mock_sleep): info = dict(INFO_DICT) info['ipmi_port'] = '623' node = obj_utils.get_test_node(self.context, driver_info=info) ret = ipmi._parse_driver_info(node) self.assertEqual(623, ret['dest_port']) @mock.patch.object(ipmi.LOG, 'warning', spec_set=True, autospec=True) def test__parse_driver_info_undefined_credentials( self, mock_log, mock_sleep): info = dict(INFO_DICT) del info['ipmi_username'] del info['ipmi_password'] node = obj_utils.get_test_node(self.context, driver_info=info) ipmi._parse_driver_info(node) calls = [ mock.call(u'ipmi_username is not defined or empty for node ' u'1be26c0b-03f2-4d2e-ae87-c02d7f33c123: NULL user will ' u'be utilized.'), mock.call(u'ipmi_password is not defined or empty for node ' u'1be26c0b-03f2-4d2e-ae87-c02d7f33c123: NULL password ' u'will be utilized.'), ] mock_log.assert_has_calls(calls) @mock.patch.object(ipmi.LOG, 'warning', spec_set=True, autospec=True) def test__parse_driver_info_have_credentials( self, mock_log, mock_sleep): """Ensure no warnings generated if have credentials""" info = dict(INFO_DICT) node = obj_utils.get_test_node(self.context, driver_info=info) ipmi._parse_driver_info(node) self.assertFalse(mock_log.called) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_first_call_to_address(self, mock_exec, mock_pwf, mock_support, mock_sleep): ipmi.LAST_CMD_TIME = {} pw_file_handle = tempfile.NamedTemporaryFile() pw_file = pw_file_handle.name file_handle = open(pw_file, "w") args = [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-f', file_handle, 'A', 'B', 'C', ] mock_support.return_value = False mock_pwf.return_value = file_handle mock_exec.return_value = (None, None) ipmi._exec_ipmitool(self.info, 'A B C') mock_support.assert_called_once_with('timing') mock_pwf.assert_called_once_with(self.info['password']) mock_exec.assert_called_once_with(*args) self.assertFalse(mock_sleep.called) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def 
test__exec_ipmitool_second_call_to_address_sleep( self, mock_exec, mock_pwf, mock_support, mock_sleep): ipmi.LAST_CMD_TIME = {} pw_file_handle1 = tempfile.NamedTemporaryFile() pw_file1 = pw_file_handle1.name file_handle1 = open(pw_file1, "w") pw_file_handle2 = tempfile.NamedTemporaryFile() pw_file2 = pw_file_handle2.name file_handle2 = open(pw_file2, "w") args = [[ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-f', file_handle1, 'A', 'B', 'C', ], [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-f', file_handle2, 'D', 'E', 'F', ]] expected = [mock.call('timing'), mock.call('timing')] mock_support.return_value = False mock_pwf.side_effect = iter([file_handle1, file_handle2]) mock_exec.side_effect = iter([(None, None), (None, None)]) ipmi._exec_ipmitool(self.info, 'A B C') mock_exec.assert_called_with(*args[0]) ipmi._exec_ipmitool(self.info, 'D E F') self.assertTrue(mock_sleep.called) self.assertEqual(expected, mock_support.call_args_list) mock_exec.assert_called_with(*args[1]) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_second_call_to_address_no_sleep( self, mock_exec, mock_pwf, mock_support, mock_sleep): ipmi.LAST_CMD_TIME = {} pw_file_handle1 = tempfile.NamedTemporaryFile() pw_file1 = pw_file_handle1.name file_handle1 = open(pw_file1, "w") pw_file_handle2 = tempfile.NamedTemporaryFile() pw_file2 = pw_file_handle2.name file_handle2 = open(pw_file2, "w") args = [[ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-f', file_handle1, 'A', 'B', 'C', ], [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-f', file_handle2, 'D', 'E', 'F', ]] expected = [mock.call('timing'), mock.call('timing')] mock_support.return_value = False mock_pwf.side_effect = iter([file_handle1, file_handle2]) mock_exec.side_effect = iter([(None, None), (None, None)]) ipmi._exec_ipmitool(self.info, 'A B C') mock_exec.assert_called_with(*args[0]) # act like enough time has passed ipmi.LAST_CMD_TIME[self.info['address']] = ( time.time() - CONF.ipmi.min_command_interval) ipmi._exec_ipmitool(self.info, 'D E F') self.assertFalse(mock_sleep.called) self.assertEqual(expected, mock_support.call_args_list) mock_exec.assert_called_with(*args[1]) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_two_calls_to_diff_address( self, mock_exec, mock_pwf, mock_support, mock_sleep): ipmi.LAST_CMD_TIME = {} pw_file_handle1 = tempfile.NamedTemporaryFile() pw_file1 = pw_file_handle1.name file_handle1 = open(pw_file1, "w") pw_file_handle2 = tempfile.NamedTemporaryFile() pw_file2 = pw_file_handle2.name file_handle2 = open(pw_file2, "w") args = [[ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-f', file_handle1, 'A', 'B', 'C', ], [ 'ipmitool', '-I', 'lanplus', '-H', '127.127.127.127', '-L', self.info['priv_level'], '-U', self.info['username'], '-f', file_handle2, 'D', 'E', 'F', ]] expected = [mock.call('timing'), mock.call('timing')] mock_support.return_value = False mock_pwf.side_effect = 
iter([file_handle1, file_handle2]) mock_exec.side_effect = iter([(None, None), (None, None)]) ipmi._exec_ipmitool(self.info, 'A B C') mock_exec.assert_called_with(*args[0]) self.info['address'] = '127.127.127.127' ipmi._exec_ipmitool(self.info, 'D E F') self.assertFalse(mock_sleep.called) self.assertEqual(expected, mock_support.call_args_list) mock_exec.assert_called_with(*args[1]) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_without_timing( self, mock_exec, mock_pwf, mock_support, mock_sleep): pw_file_handle = tempfile.NamedTemporaryFile() pw_file = pw_file_handle.name file_handle = open(pw_file, "w") args = [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-f', file_handle, 'A', 'B', 'C', ] mock_support.return_value = False mock_pwf.return_value = file_handle mock_exec.return_value = (None, None) ipmi._exec_ipmitool(self.info, 'A B C') mock_support.assert_called_once_with('timing') mock_pwf.assert_called_once_with(self.info['password']) mock_exec.assert_called_once_with(*args) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_with_timing( self, mock_exec, mock_pwf, mock_support, mock_sleep): pw_file_handle = tempfile.NamedTemporaryFile() pw_file = pw_file_handle.name file_handle = open(pw_file, "w") args = [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-R', '12', '-N', '5', '-f', file_handle, 'A', 'B', 'C', ] mock_support.return_value = True mock_pwf.return_value = file_handle mock_exec.return_value = (None, None) ipmi._exec_ipmitool(self.info, 'A B C') mock_support.assert_called_once_with('timing') mock_pwf.assert_called_once_with(self.info['password']) mock_exec.assert_called_once_with(*args) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_without_username( self, mock_exec, mock_pwf, mock_support, mock_sleep): # An undefined username is treated the same as an empty username and # will cause no user (-U) to be specified. self.info['username'] = None pw_file_handle = tempfile.NamedTemporaryFile() pw_file = pw_file_handle.name file_handle = open(pw_file, "w") args = [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-f', file_handle, 'A', 'B', 'C', ] mock_support.return_value = False mock_pwf.return_value = file_handle mock_exec.return_value = (None, None) ipmi._exec_ipmitool(self.info, 'A B C') mock_support.assert_called_once_with('timing') self.assertTrue(mock_pwf.called) mock_exec.assert_called_once_with(*args) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_with_empty_username( self, mock_exec, mock_pwf, mock_support, mock_sleep): # An empty username is treated the same as an undefined username and # will cause no user (-U) to be specified. 
self.info['username'] = "" pw_file_handle = tempfile.NamedTemporaryFile() pw_file = pw_file_handle.name file_handle = open(pw_file, "w") args = [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-f', file_handle, 'A', 'B', 'C', ] mock_support.return_value = False mock_pwf.return_value = file_handle mock_exec.return_value = (None, None) ipmi._exec_ipmitool(self.info, 'A B C') mock_support.assert_called_once_with('timing') self.assertTrue(mock_pwf.called) mock_exec.assert_called_once_with(*args) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_without_password( self, mock_exec, mock_pwf, mock_support, mock_sleep): # An undefined password is treated the same as an empty password and # will cause a NULL (\0) password to be used""" self.info['password'] = None pw_file_handle = tempfile.NamedTemporaryFile() pw_file = pw_file_handle.name file_handle = open(pw_file, "w") args = [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-f', file_handle, 'A', 'B', 'C', ] mock_support.return_value = False mock_pwf.return_value = file_handle mock_exec.return_value = (None, None) ipmi._exec_ipmitool(self.info, 'A B C') mock_support.assert_called_once_with('timing') self.assertTrue(mock_pwf.called) mock_exec.assert_called_once_with(*args) mock_pwf.assert_called_once_with('\0') @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_with_empty_password( self, mock_exec, mock_pwf, mock_support, mock_sleep): # An empty password is treated the same as an undefined password and # will cause a NULL (\0) password to be used""" self.info['password'] = "" pw_file_handle = tempfile.NamedTemporaryFile() pw_file = pw_file_handle.name file_handle = open(pw_file, "w") args = [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-f', file_handle, 'A', 'B', 'C', ] mock_support.return_value = False mock_pwf.return_value = file_handle mock_exec.return_value = (None, None) ipmi._exec_ipmitool(self.info, 'A B C') mock_support.assert_called_once_with('timing') self.assertTrue(mock_pwf.called) mock_exec.assert_called_once_with(*args) mock_pwf.assert_called_once_with('\0') @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_with_dual_bridging(self, mock_exec, mock_pwf, mock_support, mock_sleep): node = obj_utils.get_test_node(self.context, driver='fake_ipmitool', driver_info=BRIDGE_INFO_DICT) # when support for dual bridge command is called returns True mock_support.return_value = True info = ipmi._parse_driver_info(node) pw_file_handle = tempfile.NamedTemporaryFile() pw_file = pw_file_handle.name file_handle = open(pw_file, "w") args = [ 'ipmitool', '-I', 'lanplus', '-H', info['address'], '-L', info['priv_level'], '-U', info['username'], '-m', info['local_address'], '-B', info['transit_channel'], '-T', info['transit_address'], '-b', info['target_channel'], '-t', info['target_address'], '-f', file_handle, 'A', 'B', 'C', ] expected = [mock.call('dual_bridge'), mock.call('timing')] # When support for timing command is 
called returns False mock_support.return_value = False mock_pwf.return_value = file_handle mock_exec.return_value = (None, None) ipmi._exec_ipmitool(info, 'A B C') self.assertEqual(expected, mock_support.call_args_list) self.assertTrue(mock_pwf.called) mock_exec.assert_called_once_with(*args) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_with_single_bridging(self, mock_exec, mock_pwf, mock_support, mock_sleep): single_bridge_info = dict(BRIDGE_INFO_DICT) single_bridge_info['ipmi_bridging'] = 'single' node = obj_utils.get_test_node(self.context, driver='fake_ipmitool', driver_info=single_bridge_info) # when support for single bridge command is called returns True mock_support.return_value = True info = ipmi._parse_driver_info(node) info['transit_channel'] = info['transit_address'] = None pw_file_handle = tempfile.NamedTemporaryFile() pw_file = pw_file_handle.name file_handle = open(pw_file, "w") args = [ 'ipmitool', '-I', 'lanplus', '-H', info['address'], '-L', info['priv_level'], '-U', info['username'], '-m', info['local_address'], '-b', info['target_channel'], '-t', info['target_address'], '-f', file_handle, 'A', 'B', 'C', ] expected = [mock.call('single_bridge'), mock.call('timing')] # When support for timing command is called returns False mock_support.return_value = False mock_pwf.return_value = file_handle mock_exec.return_value = (None, None) ipmi._exec_ipmitool(info, 'A B C') self.assertEqual(expected, mock_support.call_args_list) self.assertTrue(mock_pwf.called) mock_exec.assert_called_once_with(*args) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_exception( self, mock_exec, mock_pwf, mock_support, mock_sleep): pw_file_handle = tempfile.NamedTemporaryFile() pw_file = pw_file_handle.name file_handle = open(pw_file, "w") args = [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-f', file_handle, 'A', 'B', 'C', ] mock_support.return_value = False mock_pwf.return_value = file_handle mock_exec.side_effect = iter([processutils.ProcessExecutionError("x")]) self.assertRaises(processutils.ProcessExecutionError, ipmi._exec_ipmitool, self.info, 'A B C') mock_support.assert_called_once_with('timing') mock_pwf.assert_called_once_with(self.info['password']) mock_exec.assert_called_once_with(*args) self.assertEqual(1, mock_exec.call_count) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_exception_retry( self, mock_exec, mock_support, mock_sleep): ipmi.LAST_CMD_TIME = {} mock_support.return_value = False mock_exec.side_effect = iter([ processutils.ProcessExecutionError( stderr="insufficient resources for session" ), (None, None) ]) # Directly set the configuration values such that # the logic will cause _exec_ipmitool to retry twice. 
        self.config(min_command_interval=1, group='ipmi')
        self.config(retry_timeout=2, group='ipmi')
        ipmi._exec_ipmitool(self.info, 'A B C')
        mock_support.assert_called_once_with('timing')
        self.assertEqual(2, mock_exec.call_count)

    @mock.patch.object(ipmi, '_is_option_supported', autospec=True)
    @mock.patch.object(utils, 'execute', autospec=True)
    def test__exec_ipmitool_exception_retries_exceeded(
            self, mock_exec, mock_support, mock_sleep):
        ipmi.LAST_CMD_TIME = {}
        mock_support.return_value = False
        mock_exec.side_effect = iter([processutils.ProcessExecutionError(
            stderr="insufficient resources for session"
        )])
        # Directly set the configuration values such that
        # the logic will cause _exec_ipmitool to time out.
        self.config(min_command_interval=1, group='ipmi')
        self.config(retry_timeout=1, group='ipmi')
        self.assertRaises(processutils.ProcessExecutionError,
                          ipmi._exec_ipmitool,
                          self.info, 'A B C')
        mock_support.assert_called_once_with('timing')
        self.assertEqual(1, mock_exec.call_count)

    @mock.patch.object(ipmi, '_is_option_supported', autospec=True)
    @mock.patch.object(utils, 'execute', autospec=True)
    def test__exec_ipmitool_exception_non_retryable_failure(
            self, mock_exec, mock_support, mock_sleep):
        ipmi.LAST_CMD_TIME = {}
        mock_support.return_value = False
        # Return a retryable error, then an error that cannot be retried,
        # resulting in a single retry attempt by _exec_ipmitool.
        mock_exec.side_effect = iter([
            processutils.ProcessExecutionError(
                stderr="insufficient resources for session"
            ),
            processutils.ProcessExecutionError(
                stderr="Unknown"
            ),
        ])
        # Directly set the configuration values such that
        # the logic will allow _exec_ipmitool up to 3 attempts.
        self.config(min_command_interval=1, group='ipmi')
        self.config(retry_timeout=3, group='ipmi')
        self.assertRaises(processutils.ProcessExecutionError,
                          ipmi._exec_ipmitool,
                          self.info, 'A B C')
        mock_support.assert_called_once_with('timing')
        self.assertEqual(2, mock_exec.call_count)
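# The three retry tests above size their attempt budget from the two [ipmi]
# config options they set. A minimal sketch of the arithmetic those
# assertions imply (an inference for illustration, not the driver's verbatim
# logic):
def _retry_attempts(retry_timeout, min_command_interval):
    # retry_timeout=2, min_command_interval=1 -> 2 attempts (one retry)
    # retry_timeout=1, min_command_interval=1 -> 1 attempt (no retry)
    return max(retry_timeout // min_command_interval, 1)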
    @mock.patch.object(ipmi, '_is_option_supported', autospec=True)
    @mock.patch.object(ipmi, '_make_password_file', autospec=True)
    @mock.patch.object(utils, 'execute', autospec=True)
    def test__exec_ipmitool_IPMI_version_1_5(
            self, mock_exec, mock_pwf, mock_support, mock_sleep):
        self.info['protocol_version'] = '1.5'
        # Assert it uses "-I lan" (1.5) instead of "-I lanplus" (2.0)
        args = [
            'ipmitool',
            '-I', 'lan',
            '-H', self.info['address'],
            '-L', self.info['priv_level'],
            '-U', self.info['username'],
            '-f', mock.ANY,
            'A', 'B', 'C',
        ]
        mock_support.return_value = False
        mock_exec.return_value = (None, None)
        ipmi._exec_ipmitool(self.info, 'A B C')
        mock_support.assert_called_once_with('timing')
        self.assertTrue(mock_pwf.called)
        mock_exec.assert_called_once_with(*args)

    @mock.patch.object(ipmi, '_is_option_supported', autospec=True)
    @mock.patch.object(ipmi, '_make_password_file', autospec=True)
    @mock.patch.object(utils, 'execute', autospec=True)
    def test__exec_ipmitool_with_port(
            self, mock_exec, mock_pwf, mock_support, mock_sleep):
        self.info['dest_port'] = '1623'
        ipmi.LAST_CMD_TIME = {}
        pw_file_handle = tempfile.NamedTemporaryFile()
        pw_file = pw_file_handle.name
        file_handle = open(pw_file, "w")
        args = [
            'ipmitool',
            '-I', 'lanplus',
            '-H', self.info['address'],
            '-L', self.info['priv_level'],
            '-p', '1623',
            '-U', self.info['username'],
            '-f', file_handle,
            'A', 'B', 'C',
        ]
        mock_support.return_value = False
        mock_pwf.return_value = file_handle
        mock_exec.return_value = (None, None)
        ipmi._exec_ipmitool(self.info, 'A B C')
        mock_support.assert_called_once_with('timing')
        mock_pwf.assert_called_once_with(self.info['password'])
        mock_exec.assert_called_once_with(*args)
        self.assertFalse(mock_sleep.called)

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test__power_status_on(self, mock_exec, mock_sleep):
        mock_exec.return_value = ["Chassis Power is on\n", None]
        state = ipmi._power_status(self.info)
        mock_exec.assert_called_once_with(self.info, "power status")
        self.assertEqual(states.POWER_ON, state)

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test__power_status_off(self, mock_exec, mock_sleep):
        mock_exec.return_value = ["Chassis Power is off\n", None]
        state = ipmi._power_status(self.info)
        mock_exec.assert_called_once_with(self.info, "power status")
        self.assertEqual(states.POWER_OFF, state)

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test__power_status_error(self, mock_exec, mock_sleep):
        mock_exec.return_value = ["Chassis Power is badstate\n", None]
        state = ipmi._power_status(self.info)
        mock_exec.assert_called_once_with(self.info, "power status")
        self.assertEqual(states.ERROR, state)

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test__power_status_exception(self, mock_exec, mock_sleep):
        mock_exec.side_effect = iter(
            [processutils.ProcessExecutionError("error")])
        self.assertRaises(exception.IPMIFailure,
                          ipmi._power_status,
                          self.info)
        mock_exec.assert_called_once_with(self.info, "power status")
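# The four _power_status tests above pin how `ipmitool power status` output
# maps onto ironic power states. A minimal sketch of that parsing (an
# illustration matching the assertions above, not the driver's verbatim
# code):
def _parse_chassis_power(output):
    # e.g. "Chassis Power is on\n" -> states.POWER_ON
    if output.startswith('Chassis Power is on'):
        return states.POWER_ON
    if output.startswith('Chassis Power is off'):
        return states.POWER_OFF
    # anything unrecognized is reported as ERROR
    return states.ERROR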
    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    @mock.patch('eventlet.greenthread.sleep', autospec=True)
    def test__power_on_max_retries(self, sleep_mock, mock_exec, mock_sleep):
        self.config(retry_timeout=2, group='ipmi')

        def side_effect(driver_info, command):
            resp_dict = {"power status": ["Chassis Power is off\n", None],
                         "power on": [None, None]}
            return resp_dict.get(command, ["Bad\n", None])

        mock_exec.side_effect = side_effect
        expected = [mock.call(self.info, "power on"),
                    mock.call(self.info, "power status"),
                    mock.call(self.info, "power status")]
        state = ipmi._power_on(self.info)
        self.assertEqual(mock_exec.call_args_list, expected)
        self.assertEqual(states.ERROR, state)


class IPMIToolDriverTestCase(db_base.DbTestCase):

    def setUp(self):
        super(IPMIToolDriverTestCase, self).setUp()
        mgr_utils.mock_the_extension_manager(driver="fake_ipmitool")
        self.driver = driver_factory.get_driver("fake_ipmitool")
        self.node = obj_utils.create_test_node(self.context,
                                               driver='fake_ipmitool',
                                               driver_info=INFO_DICT)
        self.info = ipmi._parse_driver_info(self.node)

    @mock.patch.object(ipmi, "_parse_driver_info", autospec=True)
    def test_power_validate(self, mock_parse):
        node = obj_utils.get_test_node(self.context,
                                       driver='fake_ipmitool',
                                       driver_info=INFO_DICT)
        mock_parse.return_value = {}
        with task_manager.acquire(self.context, node.uuid) as task:
            task.driver.power.validate(task)
            mock_parse.assert_called_once_with(mock.ANY)

    def test_get_properties(self):
        expected = ipmi.COMMON_PROPERTIES
        self.assertEqual(expected, self.driver.power.get_properties())
        expected = list(ipmi.COMMON_PROPERTIES) + list(ipmi.CONSOLE_PROPERTIES)
        self.assertEqual(sorted(expected),
                         sorted(self.driver.console.get_properties().keys()))
        self.assertEqual(sorted(expected),
                         sorted(self.driver.get_properties().keys()))

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_get_power_state(self, mock_exec):
        returns = iter([["Chassis Power is off\n", None],
                        ["Chassis Power is on\n", None],
                        ["\n", None]])
        expected = [mock.call(self.info, "power status"),
                    mock.call(self.info, "power status"),
                    mock.call(self.info, "power status")]
        mock_exec.side_effect = returns
        with task_manager.acquire(self.context, self.node.uuid) as task:
            pstate = self.driver.power.get_power_state(task)
            self.assertEqual(states.POWER_OFF, pstate)
            pstate = self.driver.power.get_power_state(task)
            self.assertEqual(states.POWER_ON, pstate)
            pstate = self.driver.power.get_power_state(task)
            self.assertEqual(states.ERROR, pstate)
        self.assertEqual(mock_exec.call_args_list, expected)

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_get_power_state_exception(self, mock_exec):
        mock_exec.side_effect = iter(
            [processutils.ProcessExecutionError("error")])
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.IPMIFailure,
                              self.driver.power.get_power_state,
                              task)
        mock_exec.assert_called_once_with(self.info, "power status")

    @mock.patch.object(ipmi, '_power_on', autospec=True)
    @mock.patch.object(ipmi, '_power_off', autospec=True)
    def test_set_power_on_ok(self, mock_off, mock_on):
        self.config(retry_timeout=0, group='ipmi')
        mock_on.return_value = states.POWER_ON
        with task_manager.acquire(self.context, self.node['uuid']) as task:
            self.driver.power.set_power_state(task, states.POWER_ON)
        mock_on.assert_called_once_with(self.info)
        self.assertFalse(mock_off.called)

    @mock.patch.object(driver_utils, 'ensure_next_boot_device', autospec=True)
    @mock.patch.object(ipmi, '_power_on', autospec=True)
    @mock.patch.object(ipmi, '_power_off', autospec=True)
    def test_set_power_on_with_next_boot(self, mock_off, mock_on,
                                         mock_next_boot):
        self.config(retry_timeout=0, group='ipmi')
        mock_on.return_value = states.POWER_ON
        with task_manager.acquire(self.context, self.node['uuid']) as task:
            self.driver.power.set_power_state(task, states.POWER_ON)
            mock_next_boot.assert_called_once_with(task, self.info)
        mock_on.assert_called_once_with(self.info)
        self.assertFalse(mock_off.called)

    @mock.patch.object(ipmi, '_power_on', autospec=True)
    @mock.patch.object(ipmi, '_power_off', autospec=True)
    def test_set_power_off_ok(self, mock_off, mock_on):
        self.config(retry_timeout=0, group='ipmi')
        mock_off.return_value = states.POWER_OFF
        with task_manager.acquire(self.context, self.node['uuid']) as task:
            self.driver.power.set_power_state(task, states.POWER_OFF)
        mock_off.assert_called_once_with(self.info)
        self.assertFalse(mock_on.called)

    @mock.patch.object(ipmi, '_power_on', autospec=True)
    @mock.patch.object(ipmi, '_power_off', autospec=True)
    def test_set_power_on_fail(self, mock_off, mock_on):
        self.config(retry_timeout=0, group='ipmi')
        mock_on.return_value = states.ERROR
        with task_manager.acquire(self.context, self.node['uuid']) as task:
            self.assertRaises(exception.PowerStateFailure,
                              self.driver.power.set_power_state,
                              task,
                              states.POWER_ON)
        mock_on.assert_called_once_with(self.info)
        self.assertFalse(mock_off.called)

    def test_set_power_invalid_state(self):
        with task_manager.acquire(self.context, self.node['uuid']) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              self.driver.power.set_power_state,
                              task,
                              "fake state")

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_send_raw_bytes_ok(self, mock_exec):
        mock_exec.return_value = [None, None]
        with task_manager.acquire(self.context, self.node['uuid']) as task:
            self.driver.vendor.send_raw(task, http_method='POST',
                                        raw_bytes='0x00 0x01')
        mock_exec.assert_called_once_with(self.info, 'raw 0x00 0x01')

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_send_raw_bytes_fail(self, mock_exec):
        mock_exec.side_effect = iter(
            [exception.PasswordFileFailedToCreate('error')])
        with task_manager.acquire(self.context, self.node['uuid']) as task:
            self.assertRaises(exception.IPMIFailure,
                              self.driver.vendor.send_raw,
                              task,
                              http_method='POST',
                              raw_bytes='0x00 0x01')

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test__bmc_reset_ok(self, mock_exec):
        mock_exec.return_value = [None, None]
        with task_manager.acquire(self.context, self.node['uuid']) as task:
            self.driver.vendor.bmc_reset(task, 'POST')
        mock_exec.assert_called_once_with(self.info, 'bmc reset warm')

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test__bmc_reset_cold(self, mock_exec):
        mock_exec.return_value = [None, None]
        with task_manager.acquire(self.context, self.node['uuid']) as task:
            self.driver.vendor.bmc_reset(task, 'POST', warm=False)
        mock_exec.assert_called_once_with(self.info, 'bmc reset cold')

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test__bmc_reset_fail(self, mock_exec):
        mock_exec.side_effect = iter([processutils.ProcessExecutionError()])
        with task_manager.acquire(self.context, self.node['uuid']) as task:
            self.assertRaises(exception.IPMIFailure,
                              self.driver.vendor.bmc_reset,
                              task, 'POST')

    @mock.patch.object(driver_utils, 'ensure_next_boot_device', autospec=True)
    @mock.patch.object(ipmi, '_power_off', spec_set=types.FunctionType)
    @mock.patch.object(ipmi, '_power_on', spec_set=types.FunctionType)
    def test_reboot_ok(self, mock_on, mock_off, mock_next_boot):
        manager = mock.MagicMock()
        # NOTE(rloo): if autospec is True, then manager.mock_calls is empty
        mock_on.return_value = states.POWER_ON
        manager.attach_mock(mock_off, 'power_off')
        manager.attach_mock(mock_on, 'power_on')
        expected = [mock.call.power_off(self.info),
                    mock.call.power_on(self.info)]
        with task_manager.acquire(self.context, self.node['uuid']) as task:
            self.driver.power.reboot(task)
            mock_next_boot.assert_called_once_with(task, self.info)
        self.assertEqual(manager.mock_calls, expected)

    @mock.patch.object(ipmi, '_power_off', spec_set=types.FunctionType)
    @mock.patch.object(ipmi, '_power_on', spec_set=types.FunctionType)
    def test_reboot_fail(self, mock_on, mock_off):
        manager = mock.MagicMock()
        # NOTE(rloo): if autospec is True, then manager.mock_calls is empty
        mock_on.return_value = states.ERROR
        manager.attach_mock(mock_off, 'power_off')
        manager.attach_mock(mock_on, 'power_on')
        expected = [mock.call.power_off(self.info),
                    mock.call.power_on(self.info)]
        with task_manager.acquire(self.context, self.node['uuid']) as task:
            self.assertRaises(exception.PowerStateFailure,
                              self.driver.power.reboot,
                              task)
        self.assertEqual(manager.mock_calls, expected)

    @mock.patch.object(ipmi, '_parse_driver_info', autospec=True)
    def test_vendor_passthru_validate__parse_driver_info_fail(self,
                                                              info_mock):
        info_mock.side_effect = iter([exception.InvalidParameterValue("bad")])
        with task_manager.acquire(self.context, self.node['uuid']) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              self.driver.vendor.validate,
                              task, method='send_raw', raw_bytes='0x00 0x01')
            info_mock.assert_called_once_with(task.node)

    def test_vendor_passthru_validate__send_raw_bytes_good(self):
        with task_manager.acquire(self.context, self.node['uuid']) as task:
            self.driver.vendor.validate(task,
                                        method='send_raw',
                                        http_method='POST',
                                        raw_bytes='0x00 0x01')

    def test_vendor_passthru_validate__send_raw_bytes_fail(self):
        with task_manager.acquire(self.context, self.node['uuid']) as task:
            self.assertRaises(exception.MissingParameterValue,
                              self.driver.vendor.validate,
                              task, method='send_raw')
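# The send_raw tests above expect the vendor passthru to fold the raw_bytes
# argument into a single ipmitool command string. A one-line sketch of that
# composition (an illustration consistent with the assertions above, not the
# driver's verbatim code):
def _compose_raw_command(raw_bytes):
    # '0x00 0x01' -> 'raw 0x00 0x01'
    return 'raw %s' % raw_bytes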
    @mock.patch.object(ipmi.VendorPassthru, 'send_raw', autospec=True)
    def test_vendor_passthru_call_send_raw_bytes(self, raw_bytes_mock):
        with task_manager.acquire(self.context, self.node['uuid'],
                                  shared=False) as task:
            self.driver.vendor.send_raw(task, http_method='POST',
                                        raw_bytes='0x00 0x01')
            raw_bytes_mock.assert_called_once_with(
                self.driver.vendor, task, http_method='POST',
                raw_bytes='0x00 0x01')

    def test_vendor_passthru_validate__bmc_reset_good(self):
        with task_manager.acquire(self.context, self.node['uuid']) as task:
            self.driver.vendor.validate(task, method='bmc_reset')

    def test_vendor_passthru_validate__bmc_reset_warm_good(self):
        with task_manager.acquire(self.context, self.node['uuid']) as task:
            self.driver.vendor.validate(task, method='bmc_reset', warm=True)

    def test_vendor_passthru_validate__bmc_reset_cold_good(self):
        with task_manager.acquire(self.context, self.node['uuid']) as task:
            self.driver.vendor.validate(task, method='bmc_reset', warm=False)

    @mock.patch.object(ipmi.VendorPassthru, 'bmc_reset', autospec=True)
    def test_vendor_passthru_call_bmc_reset_warm(self, bmc_mock):
        with task_manager.acquire(self.context, self.node['uuid'],
                                  shared=False) as task:
            self.driver.vendor.bmc_reset(task, 'POST', warm=True)
            bmc_mock.assert_called_once_with(
                self.driver.vendor, task, 'POST', warm=True)

    @mock.patch.object(ipmi.VendorPassthru, 'bmc_reset', autospec=True)
    def test_vendor_passthru_call_bmc_reset_cold(self, bmc_mock):
        with task_manager.acquire(self.context, self.node['uuid'],
                                  shared=False) as task:
            self.driver.vendor.bmc_reset(task, 'POST', warm=False)
            bmc_mock.assert_called_once_with(
                self.driver.vendor, task, 'POST', warm=False)

    def test_vendor_passthru_vendor_routes(self):
        expected = ['send_raw', 'bmc_reset']
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            vendor_routes = task.driver.vendor.vendor_routes
            self.assertIsInstance(vendor_routes, dict)
            self.assertEqual(sorted(expected), sorted(vendor_routes))

    def test_vendor_passthru_driver_routes(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            driver_routes = task.driver.vendor.driver_routes
            self.assertIsInstance(driver_routes, dict)
            self.assertEqual({}, driver_routes)

    def test_console_validate(self):
        with task_manager.acquire(
                self.context, self.node.uuid, shared=True) as task:
            task.node.driver_info['ipmi_terminal_port'] = 123
            task.driver.console.validate(task)

    def test_console_validate_missing_port(self):
        with task_manager.acquire(
                self.context, self.node.uuid, shared=True) as task:
            task.node.driver_info.pop('ipmi_terminal_port', None)
            self.assertRaises(exception.MissingParameterValue,
                              task.driver.console.validate, task)

    def test_console_validate_invalid_port(self):
        with task_manager.acquire(
                self.context, self.node.uuid, shared=True) as task:
            task.node.driver_info['ipmi_terminal_port'] = ''
            self.assertRaises(exception.InvalidParameterValue,
                              task.driver.console.validate, task)

    def test_console_validate_wrong_ipmi_protocol_version(self):
        with task_manager.acquire(
                self.context, self.node.uuid, shared=True) as task:
            task.node.driver_info['ipmi_terminal_port'] = 123
            task.node.driver_info['ipmi_protocol_version'] = '1.5'
            self.assertRaises(exception.InvalidParameterValue,
                              task.driver.console.validate, task)

    @mock.patch.object(console_utils, 'start_shellinabox_console',
                       autospec=True)
    def test_start_console(self, mock_exec):
        mock_exec.return_value = None
        with task_manager.acquire(self.context, self.node['uuid']) as task:
            self.driver.console.start_console(task)
        mock_exec.assert_called_once_with(self.info['uuid'],
                                          self.info['port'],
                                          mock.ANY)
self.assertTrue(mock_exec.called) @mock.patch.object(console_utils, 'start_shellinabox_console', autospec=True) def test_start_console_fail(self, mock_exec): mock_exec.side_effect = iter( [exception.ConsoleSubprocessFailed(error='error')]) with task_manager.acquire(self.context, self.node['uuid']) as task: self.assertRaises(exception.ConsoleSubprocessFailed, self.driver.console.start_console, task) @mock.patch.object(console_utils, 'start_shellinabox_console', autospec=True) def test_start_console_fail_nodir(self, mock_exec): mock_exec.side_effect = iter([exception.ConsoleError()]) with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.ConsoleError, self.driver.console.start_console, task) mock_exec.assert_called_once_with(self.node.uuid, mock.ANY, mock.ANY) @mock.patch.object(console_utils, 'make_persistent_password_file', autospec=True) @mock.patch.object(console_utils, 'start_shellinabox_console', autospec=True) def test_start_console_empty_password(self, mock_exec, mock_pass): driver_info = self.node.driver_info del driver_info['ipmi_password'] self.node.driver_info = driver_info self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.console.start_console(task) mock_pass.assert_called_once_with(mock.ANY, '\0') mock_exec.assert_called_once_with(self.info['uuid'], self.info['port'], mock.ANY) @mock.patch.object(console_utils, 'stop_shellinabox_console', autospec=True) def test_stop_console(self, mock_exec): mock_exec.return_value = None with task_manager.acquire(self.context, self.node['uuid']) as task: self.driver.console.stop_console(task) mock_exec.assert_called_once_with(self.info['uuid']) self.assertTrue(mock_exec.called) @mock.patch.object(console_utils, 'stop_shellinabox_console', autospec=True) def test_stop_console_fail(self, mock_stop): mock_stop.side_effect = iter([exception.ConsoleError()]) with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.ConsoleError, self.driver.console.stop_console, task) mock_stop.assert_called_once_with(self.node.uuid) @mock.patch.object(console_utils, 'get_shellinabox_console_url', autospec=True) def test_get_console(self, mock_exec): url = 'http://localhost:4201' mock_exec.return_value = url expected = {'type': 'shellinabox', 'url': url} with task_manager.acquire(self.context, self.node['uuid']) as task: console_info = self.driver.console.get_console(task) self.assertEqual(expected, console_info) mock_exec.assert_called_once_with(self.info['port']) self.assertTrue(mock_exec.called) @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True) def test_management_interface_set_boot_device_ok(self, mock_exec): mock_exec.return_value = [None, None] with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.management.set_boot_device(task, boot_devices.PXE) mock_calls = [mock.call(self.info, "raw 0x00 0x08 0x03 0x08"), mock.call(self.info, "chassis bootdev pxe")] mock_exec.assert_has_calls(mock_calls) @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True) def test_management_interface_force_set_boot_device_ok(self, mock_exec): mock_exec.return_value = [None, None] with task_manager.acquire(self.context, self.node.uuid) as task: task.node.driver_info['ipmi_force_boot_device'] = True self.info['force_boot_device'] = True self.driver.management.set_boot_device(task, boot_devices.PXE) task.node.refresh() self.assertEqual( False, task.node.driver_internal_info['is_next_boot_persistent'] ) mock_calls = [mock.call(self.info, 
"raw 0x00 0x08 0x03 0x08"), mock.call(self.info, "chassis bootdev pxe")] mock_exec.assert_has_calls(mock_calls) @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True) def test_management_interface_set_boot_device_persistent(self, mock_exec): mock_exec.return_value = [None, None] with task_manager.acquire(self.context, self.node.uuid) as task: task.node.driver_info['ipmi_force_boot_device'] = True self.info['force_boot_device'] = True self.driver.management.set_boot_device(task, boot_devices.PXE, True) self.assertEqual( boot_devices.PXE, task.node.driver_internal_info['persistent_boot_device']) mock_calls = [mock.call(self.info, "raw 0x00 0x08 0x03 0x08"), mock.call(self.info, "chassis bootdev pxe")] mock_exec.assert_has_calls(mock_calls) def test_management_interface_set_boot_device_bad_device(self): with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.InvalidParameterValue, self.driver.management.set_boot_device, task, 'fake-device') @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True) def test_management_interface_set_boot_device_exec_failed(self, mock_exec): mock_exec.side_effect = iter([processutils.ProcessExecutionError()]) with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.IPMIFailure, self.driver.management.set_boot_device, task, boot_devices.PXE) @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True) def test_management_interface_set_boot_device_unknown_exception(self, mock_exec): class FakeException(Exception): pass mock_exec.side_effect = iter([FakeException('boom')]) with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(FakeException, self.driver.management.set_boot_device, task, boot_devices.PXE) def test_management_interface_get_supported_boot_devices(self): with task_manager.acquire(self.context, self.node.uuid) as task: expected = [boot_devices.PXE, boot_devices.DISK, boot_devices.CDROM, boot_devices.BIOS, boot_devices.SAFE] self.assertEqual(sorted(expected), sorted(task.driver.management. 
get_supported_boot_devices(task))) @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True) def test_management_interface_get_boot_device(self, mock_exec): # output, expected boot device bootdevs = [('Boot Device Selector : ' 'Force Boot from default Hard-Drive\n', boot_devices.DISK), ('Boot Device Selector : ' 'Force Boot from default Hard-Drive, request Safe-Mode\n', boot_devices.SAFE), ('Boot Device Selector : ' 'Force Boot into BIOS Setup\n', boot_devices.BIOS), ('Boot Device Selector : ' 'Force PXE\n', boot_devices.PXE), ('Boot Device Selector : ' 'Force Boot from CD/DVD\n', boot_devices.CDROM)] with task_manager.acquire(self.context, self.node.uuid) as task: for out, expected_device in bootdevs: mock_exec.return_value = (out, '') expected_response = {'boot_device': expected_device, 'persistent': False} self.assertEqual(expected_response, task.driver.management.get_boot_device(task)) mock_exec.assert_called_with(mock.ANY, "chassis bootparam get 5") @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True) def test_management_interface_get_boot_device_unknown_dev(self, mock_exec): with task_manager.acquire(self.context, self.node.uuid) as task: mock_exec.return_value = ('Boot Device Selector : Fake\n', '') response = task.driver.management.get_boot_device(task) self.assertIsNone(response['boot_device']) mock_exec.assert_called_with(mock.ANY, "chassis bootparam get 5") @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True) def test_management_interface_get_boot_device_fail(self, mock_exec): with task_manager.acquire(self.context, self.node.uuid) as task: mock_exec.side_effect = iter( [processutils.ProcessExecutionError()]) self.assertRaises(exception.IPMIFailure, task.driver.management.get_boot_device, task) mock_exec.assert_called_with(mock.ANY, "chassis bootparam get 5") @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True) def test_management_interface_get_boot_device_persistent(self, mock_exec): outputs = [('Options apply to only next boot\n' 'Boot Device Selector : Force PXE\n', False), ('Options apply to all future boots\n' 'Boot Device Selector : Force PXE\n', True)] with task_manager.acquire(self.context, self.node.uuid) as task: for out, expected_persistent in outputs: mock_exec.return_value = (out, '') expected_response = {'boot_device': boot_devices.PXE, 'persistent': expected_persistent} self.assertEqual(expected_response, task.driver.management.get_boot_device(task)) mock_exec.assert_called_with(mock.ANY, "chassis bootparam get 5") def test_get_force_boot_device_persistent(self): with task_manager.acquire(self.context, self.node.uuid) as task: task.node.driver_info['ipmi_force_boot_device'] = True task.node.driver_internal_info['persistent_boot_device'] = 'pxe' bootdev = self.driver.management.get_boot_device(task) self.assertEqual('pxe', bootdev['boot_device']) self.assertTrue(bootdev['persistent']) def test_management_interface_validate_good(self): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.management.validate(task) def test_management_interface_validate_fail(self): # Missing IPMI driver_info information node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake_ipmitool') with task_manager.acquire(self.context, node.uuid) as task: self.assertRaises(exception.MissingParameterValue, task.driver.management.validate, task) def test__parse_ipmi_sensor_data_ok(self): fake_sensors_data = """ Sensor ID : Temp (0x1) Entity ID : 3.1 (Processor) Sensor Type (Analog) : Temperature Sensor Reading : -58 (+/- 
1) degrees C Status : ok Nominal Reading : 50.000 Normal Minimum : 11.000 Normal Maximum : 69.000 Upper critical : 90.000 Upper non-critical : 85.000 Positive Hysteresis : 1.000 Negative Hysteresis : 1.000 Sensor ID : Temp (0x2) Entity ID : 3.2 (Processor) Sensor Type (Analog) : Temperature Sensor Reading : 50 (+/- 1) degrees C Status : ok Nominal Reading : 50.000 Normal Minimum : 11.000 Normal Maximum : 69.000 Upper critical : 90.000 Upper non-critical : 85.000 Positive Hysteresis : 1.000 Negative Hysteresis : 1.000 Sensor ID : FAN MOD 1A RPM (0x30) Entity ID : 7.1 (System Board) Sensor Type (Analog) : Fan Sensor Reading : 8400 (+/- 75) RPM Status : ok Nominal Reading : 5325.000 Normal Minimum : 10425.000 Normal Maximum : 14775.000 Lower critical : 4275.000 Positive Hysteresis : 375.000 Negative Hysteresis : 375.000 Sensor ID : FAN MOD 1B RPM (0x31) Entity ID : 7.1 (System Board) Sensor Type (Analog) : Fan Sensor Reading : 8550 (+/- 75) RPM Status : ok Nominal Reading : 7800.000 Normal Minimum : 10425.000 Normal Maximum : 14775.000 Lower critical : 4275.000 Positive Hysteresis : 375.000 Negative Hysteresis : 375.000 """ expected_return = { 'Fan': { 'FAN MOD 1A RPM (0x30)': { 'Status': 'ok', 'Sensor Reading': '8400 (+/- 75) RPM', 'Entity ID': '7.1 (System Board)', 'Normal Minimum': '10425.000', 'Positive Hysteresis': '375.000', 'Normal Maximum': '14775.000', 'Sensor Type (Analog)': 'Fan', 'Lower critical': '4275.000', 'Negative Hysteresis': '375.000', 'Sensor ID': 'FAN MOD 1A RPM (0x30)', 'Nominal Reading': '5325.000' }, 'FAN MOD 1B RPM (0x31)': { 'Status': 'ok', 'Sensor Reading': '8550 (+/- 75) RPM', 'Entity ID': '7.1 (System Board)', 'Normal Minimum': '10425.000', 'Positive Hysteresis': '375.000', 'Normal Maximum': '14775.000', 'Sensor Type (Analog)': 'Fan', 'Lower critical': '4275.000', 'Negative Hysteresis': '375.000', 'Sensor ID': 'FAN MOD 1B RPM (0x31)', 'Nominal Reading': '7800.000' } }, 'Temperature': { 'Temp (0x1)': { 'Status': 'ok', 'Sensor Reading': '-58 (+/- 1) degrees C', 'Entity ID': '3.1 (Processor)', 'Normal Minimum': '11.000', 'Positive Hysteresis': '1.000', 'Upper non-critical': '85.000', 'Normal Maximum': '69.000', 'Sensor Type (Analog)': 'Temperature', 'Negative Hysteresis': '1.000', 'Upper critical': '90.000', 'Sensor ID': 'Temp (0x1)', 'Nominal Reading': '50.000' }, 'Temp (0x2)': { 'Status': 'ok', 'Sensor Reading': '50 (+/- 1) degrees C', 'Entity ID': '3.2 (Processor)', 'Normal Minimum': '11.000', 'Positive Hysteresis': '1.000', 'Upper non-critical': '85.000', 'Normal Maximum': '69.000', 'Sensor Type (Analog)': 'Temperature', 'Negative Hysteresis': '1.000', 'Upper critical': '90.000', 'Sensor ID': 'Temp (0x2)', 'Nominal Reading': '50.000' } } } ret = ipmi._parse_ipmi_sensors_data(self.node, fake_sensors_data) self.assertEqual(expected_return, ret) def test__parse_ipmi_sensor_data_missing_sensor_reading(self): fake_sensors_data = """ Sensor ID : Temp (0x1) Entity ID : 3.1 (Processor) Sensor Type (Analog) : Temperature Status : ok Nominal Reading : 50.000 Normal Minimum : 11.000 Normal Maximum : 69.000 Upper critical : 90.000 Upper non-critical : 85.000 Positive Hysteresis : 1.000 Negative Hysteresis : 1.000 Sensor ID : Temp (0x2) Entity ID : 3.2 (Processor) Sensor Type (Analog) : Temperature Sensor Reading : 50 (+/- 1) degrees C Status : ok Nominal Reading : 50.000 Normal Minimum : 11.000 Normal Maximum : 69.000 Upper critical : 90.000 Upper non-critical : 85.000 Positive Hysteresis : 1.000 Negative Hysteresis : 1.000 Sensor ID : FAN MOD 1A RPM (0x30) Entity ID : 7.1 
(System Board) Sensor Type (Analog) : Fan Sensor Reading : 8400 (+/- 75) RPM Status : ok Nominal Reading : 5325.000 Normal Minimum : 10425.000 Normal Maximum : 14775.000 Lower critical : 4275.000 Positive Hysteresis : 375.000 Negative Hysteresis : 375.000 """ expected_return = { 'Fan': { 'FAN MOD 1A RPM (0x30)': { 'Status': 'ok', 'Sensor Reading': '8400 (+/- 75) RPM', 'Entity ID': '7.1 (System Board)', 'Normal Minimum': '10425.000', 'Positive Hysteresis': '375.000', 'Normal Maximum': '14775.000', 'Sensor Type (Analog)': 'Fan', 'Lower critical': '4275.000', 'Negative Hysteresis': '375.000', 'Sensor ID': 'FAN MOD 1A RPM (0x30)', 'Nominal Reading': '5325.000' } }, 'Temperature': { 'Temp (0x2)': { 'Status': 'ok', 'Sensor Reading': '50 (+/- 1) degrees C', 'Entity ID': '3.2 (Processor)', 'Normal Minimum': '11.000', 'Positive Hysteresis': '1.000', 'Upper non-critical': '85.000', 'Normal Maximum': '69.000', 'Sensor Type (Analog)': 'Temperature', 'Negative Hysteresis': '1.000', 'Upper critical': '90.000', 'Sensor ID': 'Temp (0x2)', 'Nominal Reading': '50.000' } } } ret = ipmi._parse_ipmi_sensors_data(self.node, fake_sensors_data) self.assertEqual(expected_return, ret) def test__parse_ipmi_sensor_data_failed(self): fake_sensors_data = "abcdef" self.assertRaises(exception.FailedToParseSensorData, ipmi._parse_ipmi_sensors_data, self.node, fake_sensors_data) fake_sensors_data = "abc:def:ghi" self.assertRaises(exception.FailedToParseSensorData, ipmi._parse_ipmi_sensors_data, self.node, fake_sensors_data) @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True) def test_dump_sdr_ok(self, mock_exec): mock_exec.return_value = (None, None) with task_manager.acquire(self.context, self.node.uuid) as task: ipmi.dump_sdr(task, 'foo_file') mock_exec.assert_called_once_with(self.info, 'sdr dump foo_file') @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True) def test_dump_sdr_fail(self, mock_exec): with task_manager.acquire(self.context, self.node.uuid) as task: mock_exec.side_effect = processutils.ProcessExecutionError() self.assertRaises(exception.IPMIFailure, ipmi.dump_sdr, task, 'foo_file') mock_exec.assert_called_once_with(self.info, 'sdr dump foo_file') @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True) def test_send_raw_bytes_returns(self, mock_exec): fake_ret = ('foo', 'bar') mock_exec.return_value = fake_ret with task_manager.acquire(self.context, self.node.uuid) as task: ret = ipmi.send_raw(task, 'fake raw') self.assertEqual(fake_ret, ret) ironic-5.1.0/ironic/tests/unit/drivers/modules/ilo/0000775000567000056710000000000012674513633023470 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/drivers/modules/ilo/test_management.py0000664000567000056710000006707712674513466027242 0ustar jenkinsjenkins00000000000000# Copyright 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Test class for Management Interface used by iLO modules.""" import mock from oslo_config import cfg from oslo_utils import importutils from ironic.common import boot_devices from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.ilo import common as ilo_common from ironic.drivers.modules.ilo import management as ilo_management from ironic.drivers.modules import ipmitool from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils ilo_error = importutils.try_import('proliantutils.exception') INFO_DICT = db_utils.get_test_ilo_info() CONF = cfg.CONF class IloManagementTestCase(db_base.DbTestCase): def setUp(self): super(IloManagementTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="fake_ilo") self.node = obj_utils.create_test_node( self.context, driver='fake_ilo', driver_info=INFO_DICT) def test_get_properties(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: expected = ilo_management.MANAGEMENT_PROPERTIES self.assertEqual(expected, task.driver.management.get_properties()) @mock.patch.object(ilo_common, 'parse_driver_info', spec_set=True, autospec=True) def test_validate(self, driver_info_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.validate(task) driver_info_mock.assert_called_once_with(task.node) def test_get_supported_boot_devices(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: expected = [boot_devices.PXE, boot_devices.DISK, boot_devices.CDROM] self.assertEqual( sorted(expected), sorted(task.driver.management. 
get_supported_boot_devices(task)))

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_get_boot_device_next_boot(self, get_ilo_object_mock):
        ilo_object_mock = get_ilo_object_mock.return_value
        ilo_object_mock.get_one_time_boot.return_value = 'CDROM'
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            expected_device = boot_devices.CDROM
            expected_response = {'boot_device': expected_device,
                                 'persistent': False}
            self.assertEqual(expected_response,
                             task.driver.management.get_boot_device(task))
            ilo_object_mock.get_one_time_boot.assert_called_once_with()

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_get_boot_device_persistent(self, get_ilo_object_mock):
        ilo_mock = get_ilo_object_mock.return_value
        ilo_mock.get_one_time_boot.return_value = 'Normal'
        ilo_mock.get_persistent_boot_device.return_value = 'NETWORK'
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            expected_device = boot_devices.PXE
            expected_response = {'boot_device': expected_device,
                                 'persistent': True}
            self.assertEqual(expected_response,
                             task.driver.management.get_boot_device(task))
            ilo_mock.get_one_time_boot.assert_called_once_with()
            ilo_mock.get_persistent_boot_device.assert_called_once_with()

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_get_boot_device_fail(self, get_ilo_object_mock):
        ilo_mock_object = get_ilo_object_mock.return_value
        exc = ilo_error.IloError('error')
        ilo_mock_object.get_one_time_boot.side_effect = exc
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.IloOperationError,
                              task.driver.management.get_boot_device,
                              task)
        ilo_mock_object.get_one_time_boot.assert_called_once_with()

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_get_boot_device_persistent_fail(self, get_ilo_object_mock):
        ilo_mock_object = get_ilo_object_mock.return_value
        ilo_mock_object.get_one_time_boot.return_value = 'Normal'
        exc = ilo_error.IloError('error')
        ilo_mock_object.get_persistent_boot_device.side_effect = exc
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.IloOperationError,
                              task.driver.management.get_boot_device,
                              task)
        ilo_mock_object.get_one_time_boot.assert_called_once_with()
        ilo_mock_object.get_persistent_boot_device.assert_called_once_with()

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_set_boot_device_ok(self, get_ilo_object_mock):
        ilo_object_mock = get_ilo_object_mock.return_value
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.management.set_boot_device(task, boot_devices.CDROM,
                                                   False)
            get_ilo_object_mock.assert_called_once_with(task.node)
            ilo_object_mock.set_one_time_boot.assert_called_once_with('CDROM')

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_set_boot_device_persistent_true(self, get_ilo_object_mock):
        ilo_mock = get_ilo_object_mock.return_value
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.management.set_boot_device(task, boot_devices.PXE,
                                                   True)
            get_ilo_object_mock.assert_called_once_with(task.node)
            ilo_mock.update_persistent_boot.assert_called_once_with(
                ['NETWORK'])
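# The set_boot_device tests above and below pin the translation from ironic's
# generic device names to iLO's (boot_devices.PXE -> 'NETWORK',
# boot_devices.CDROM -> 'CDROM'). A minimal sketch of such a table (assumed
# shape; the DISK entry is an assumption, as only PXE and CDROM appear in
# these tests):
ILO_BOOT_DEVICE_MAP = {
    boot_devices.PXE: 'NETWORK',
    boot_devices.DISK: 'HDD',  # assumption, for illustration only
    boot_devices.CDROM: 'CDROM',
}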
    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_set_boot_device_fail(self, get_ilo_object_mock):
        ilo_mock_object = get_ilo_object_mock.return_value
        exc = ilo_error.IloError('error')
        ilo_mock_object.set_one_time_boot.side_effect = exc
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.IloOperationError,
                              task.driver.management.set_boot_device,
                              task, boot_devices.PXE)
        ilo_mock_object.set_one_time_boot.assert_called_once_with('NETWORK')

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_set_boot_device_persistent_fail(self, get_ilo_object_mock):
        ilo_mock_object = get_ilo_object_mock.return_value
        exc = ilo_error.IloError('error')
        ilo_mock_object.update_persistent_boot.side_effect = exc
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.IloOperationError,
                              task.driver.management.set_boot_device,
                              task, boot_devices.PXE, True)
        ilo_mock_object.update_persistent_boot.assert_called_once_with(
            ['NETWORK'])

    def test_set_boot_device_invalid_device(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              task.driver.management.set_boot_device,
                              task, 'fake-device')

    @mock.patch.object(ilo_common, 'update_ipmi_properties', spec_set=True,
                       autospec=True)
    @mock.patch.object(ipmitool.IPMIManagement, 'get_sensors_data',
                       spec_set=True, autospec=True)
    def test_get_sensor_data(self, get_sensors_data_mock, update_ipmi_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.management.get_sensors_data(task)
            update_ipmi_mock.assert_called_once_with(task)
            get_sensors_data_mock.assert_called_once_with(mock.ANY, task)

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test__execute_ilo_clean_step_ok(self, get_ilo_object_mock):
        ilo_mock = get_ilo_object_mock.return_value
        clean_step_mock = getattr(ilo_mock, 'fake-step')
        ilo_management._execute_ilo_clean_step(
            self.node, 'fake-step', 'args', kwarg='kwarg')
        clean_step_mock.assert_called_once_with('args', kwarg='kwarg')

    @mock.patch.object(ilo_management, 'LOG', spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test__execute_ilo_clean_step_not_supported(self, get_ilo_object_mock,
                                                   log_mock):
        ilo_mock = get_ilo_object_mock.return_value
        exc = ilo_error.IloCommandNotSupportedError("error")
        clean_step_mock = getattr(ilo_mock, 'fake-step')
        clean_step_mock.side_effect = exc
        ilo_management._execute_ilo_clean_step(
            self.node, 'fake-step', 'args', kwarg='kwarg')
        clean_step_mock.assert_called_once_with('args', kwarg='kwarg')
        self.assertTrue(log_mock.warning.called)

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test__execute_ilo_clean_step_fail(self, get_ilo_object_mock):
        ilo_mock = get_ilo_object_mock.return_value
        exc = ilo_error.IloError("error")
        clean_step_mock = getattr(ilo_mock, 'fake-step')
        clean_step_mock.side_effect = exc
        self.assertRaises(exception.NodeCleaningFailure,
                          ilo_management._execute_ilo_clean_step,
                          self.node, 'fake-step', 'args', kwarg='kwarg')
        clean_step_mock.assert_called_once_with('args', kwarg='kwarg')

    @mock.patch.object(ilo_management, '_execute_ilo_clean_step',
                       spec_set=True, autospec=True)
    def test_reset_ilo(self, clean_step_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.management.reset_ilo(task)
        clean_step_mock.assert_called_once_with(task.node, 'reset_ilo')

    @mock.patch.object(ilo_management, '_execute_ilo_clean_step',
spec_set=True, autospec=True) def test_reset_ilo_credential_ok(self, clean_step_mock): info = self.node.driver_info info['ilo_change_password'] = "fake-password" self.node.driver_info = info self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.reset_ilo_credential(task) clean_step_mock.assert_called_once_with( task.node, 'reset_ilo_credential', 'fake-password') self.assertIsNone( task.node.driver_info.get('ilo_change_password')) self.assertEqual(task.node.driver_info['ilo_password'], 'fake-password') @mock.patch.object(ilo_management, 'LOG', spec_set=True, autospec=True) @mock.patch.object(ilo_management, '_execute_ilo_clean_step', spec_set=True, autospec=True) def test_reset_ilo_credential_no_password(self, clean_step_mock, log_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.reset_ilo_credential(task) self.assertFalse(clean_step_mock.called) self.assertTrue(log_mock.info.called) @mock.patch.object(ilo_management, '_execute_ilo_clean_step', spec_set=True, autospec=True) def test_reset_bios_to_default(self, clean_step_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.reset_bios_to_default(task) clean_step_mock.assert_called_once_with(task.node, 'reset_bios_to_default') @mock.patch.object(ilo_management, '_execute_ilo_clean_step', spec_set=True, autospec=True) def test_reset_secure_boot_keys_to_default(self, clean_step_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.reset_secure_boot_keys_to_default(task) clean_step_mock.assert_called_once_with(task.node, 'reset_secure_boot_keys') @mock.patch.object(ilo_management, '_execute_ilo_clean_step', spec_set=True, autospec=True) def test_clear_secure_boot_keys(self, clean_step_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.clear_secure_boot_keys(task) clean_step_mock.assert_called_once_with(task.node, 'clear_secure_boot_keys') @mock.patch.object(ilo_management, '_execute_ilo_clean_step', spec_set=True, autospec=True) def test_activate_license(self, clean_step_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: activate_license_args = { 'ilo_license_key': 'XXXXX-YYYYY-ZZZZZ-XYZZZ-XXYYZ'} task.driver.management.activate_license(task, **activate_license_args) clean_step_mock.assert_called_once_with( task.node, 'activate_license', 'XXXXX-YYYYY-ZZZZZ-XYZZZ-XXYYZ') @mock.patch.object(ilo_management, 'LOG', spec_set=True, autospec=True) @mock.patch.object(ilo_management, '_execute_ilo_clean_step', spec_set=True, autospec=True) def test_activate_license_no_or_invalid_format_license_key( self, clean_step_mock, log_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: for license_key_value in (None, [], {}): activate_license_args = {'ilo_license_key': license_key_value} self.assertRaises(exception.InvalidParameterValue, task.driver.management.activate_license, task, **activate_license_args) self.assertFalse(clean_step_mock.called) @mock.patch.object(ilo_management, 'LOG') @mock.patch.object(ilo_management, '_execute_ilo_clean_step', spec_set=True, autospec=True) @mock.patch.object(ilo_management.firmware_processor, 'FirmwareProcessor', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'remove_single_or_list_of_files', spec_set=True, autospec=True) def 
test_update_firmware_calls_clean_step_foreach_url( self, remove_file_mock, FirmwareProcessor_mock, clean_step_mock, LOG_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: # | GIVEN | firmware_images = [ { 'url': 'file:///any_path', 'checksum': 'xxxx', 'component': 'ilo' }, { 'url': 'http://any_url', 'checksum': 'xxxx', 'component': 'cpld' }, { 'url': 'https://any_url', 'checksum': 'xxxx', 'component': 'power_pic' }, { 'url': 'swift://container/object', 'checksum': 'xxxx', 'component': 'bios' }, { 'url': 'file:///any_path', 'checksum': 'xxxx', 'component': 'chassis' } ] FirmwareProcessor_mock.return_value.process_fw_on.side_effect = [ ilo_management.firmware_processor.FirmwareImageLocation( 'fw_location_for_filepath', 'filepath'), ilo_management.firmware_processor.FirmwareImageLocation( 'fw_location_for_httppath', 'httppath'), ilo_management.firmware_processor.FirmwareImageLocation( 'fw_location_for_httpspath', 'httpspath'), ilo_management.firmware_processor.FirmwareImageLocation( 'fw_location_for_swiftpath', 'swiftpath'), ilo_management.firmware_processor.FirmwareImageLocation( 'fw_location_for_another_filepath', 'filepath2') ] firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': firmware_images} # | WHEN | task.driver.management.update_firmware(task, **firmware_update_args) # | THEN | calls = [mock.call(task.node, 'update_firmware', 'fw_location_for_filepath', 'ilo'), mock.call(task.node, 'update_firmware', 'fw_location_for_httppath', 'cpld'), mock.call(task.node, 'update_firmware', 'fw_location_for_httpspath', 'power_pic'), mock.call(task.node, 'update_firmware', 'fw_location_for_swiftpath', 'bios'), mock.call(task.node, 'update_firmware', 'fw_location_for_another_filepath', 'chassis'), ] clean_step_mock.assert_has_calls(calls) self.assertTrue(clean_step_mock.call_count == 5) def test_update_firmware_throws_if_invalid_update_mode_provided(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: # | GIVEN | firmware_update_args = {'firmware_update_mode': 'invalid_mode', 'firmware_images': None} # | WHEN & THEN | self.assertRaises(exception.InvalidParameterValue, task.driver.management.update_firmware, task, **firmware_update_args) def test_update_firmware_throws_error_for_no_firmware_url(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: # | GIVEN | firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': []} # | WHEN & THEN | self.assertRaises(exception.InvalidParameterValue, task.driver.management.update_firmware, task, **firmware_update_args) def test_update_firmware_throws_error_for_invalid_component_type(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: # | GIVEN | firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': [ { 'url': 'any_valid_url', 'checksum': 'xxxx', 'component': 'xyz' } ]} # | WHEN & THEN | self.assertRaises(exception.NodeCleaningFailure, task.driver.management.update_firmware, task, **firmware_update_args) @mock.patch.object(ilo_management, 'LOG') @mock.patch.object(ilo_management.firmware_processor.FirmwareProcessor, 'process_fw_on', spec_set=True, autospec=True) def test_update_firmware_throws_error_for_checksum_validation_error( self, process_fw_on_mock, LOG_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: # | GIVEN | firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': [ { 'url': 'any_valid_url', 'checksum': 
'invalid_checksum', 'component': 'bios' } ]} process_fw_on_mock.side_effect = exception.ImageRefValidationFailed # | WHEN & THEN | self.assertRaises(exception.NodeCleaningFailure, task.driver.management.update_firmware, task, **firmware_update_args) @mock.patch.object(ilo_management, '_execute_ilo_clean_step', spec_set=True, autospec=True) @mock.patch.object(ilo_management.firmware_processor, 'FirmwareProcessor', spec_set=True, autospec=True) def test_update_firmware_doesnt_update_any_if_processing_on_any_url_fails( self, FirmwareProcessor_mock, clean_step_mock): """update_firmware throws error for failure in processing any url update_firmware doesn't invoke firmware update of proliantutils for any url if processing on any firmware url fails. """ with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: # | GIVEN | firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': [ { 'url': 'any_valid_url', 'checksum': 'xxxx', 'component': 'ilo' }, { 'url': 'any_invalid_url', 'checksum': 'xxxx', 'component': 'bios' }] } FirmwareProcessor_mock.return_value.process_fw_on.side_effect = [ ilo_management.firmware_processor.FirmwareImageLocation( 'extracted_firmware_url_of_any_valid_url', 'filename'), exception.IronicException ] # | WHEN & THEN | self.assertRaises(exception.NodeCleaningFailure, task.driver.management.update_firmware, task, **firmware_update_args) self.assertFalse(clean_step_mock.called) @mock.patch.object(ilo_management, 'LOG') @mock.patch.object(ilo_management, '_execute_ilo_clean_step', spec_set=True, autospec=True) @mock.patch.object(ilo_management.firmware_processor, 'FirmwareProcessor', spec_set=True, autospec=True) @mock.patch.object(ilo_management.firmware_processor.FirmwareImageLocation, 'remove', spec_set=True, autospec=True) def test_update_firmware_cleans_all_files_if_exc_thrown( self, remove_mock, FirmwareProcessor_mock, clean_step_mock, LOG_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: # | GIVEN | firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': [ { 'url': 'any_valid_url', 'checksum': 'xxxx', 'component': 'ilo' }, { 'url': 'any_invalid_url', 'checksum': 'xxxx', 'component': 'bios' }] } fw_loc_obj_1 = (ilo_management.firmware_processor. FirmwareImageLocation('extracted_firmware_url_1', 'filename_1')) fw_loc_obj_2 = (ilo_management.firmware_processor. FirmwareImageLocation('extracted_firmware_url_2', 'filename_2')) FirmwareProcessor_mock.return_value.process_fw_on.side_effect = [ fw_loc_obj_1, fw_loc_obj_2 ] clean_step_mock.side_effect = exception.NodeCleaningFailure( node=self.node.uuid, reason='ilo_exc') # | WHEN & THEN | self.assertRaises(exception.NodeCleaningFailure, task.driver.management.update_firmware, task, **firmware_update_args) clean_step_mock.assert_called_once_with( task.node, 'update_firmware', 'extracted_firmware_url_1', 'ilo') self.assertTrue(LOG_mock.error.called) remove_mock.assert_has_calls([mock.call(fw_loc_obj_1), mock.call(fw_loc_obj_2)]) ironic-5.1.0/ironic/tests/unit/drivers/modules/ilo/test_boot.py0000664000567000056710000007617312674513466026066 0ustar jenkinsjenkins00000000000000# Copyright 2015 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Test class for boot methods used by iLO modules.""" import tempfile from ironic_lib import utils as ironic_utils import mock from oslo_config import cfg import six from ironic.common import boot_devices from ironic.common import exception from ironic.common.glance_service import service_utils from ironic.common import image_service from ironic.common import images from ironic.common import swift from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules import deploy_utils from ironic.drivers.modules.ilo import boot as ilo_boot from ironic.drivers.modules.ilo import common as ilo_common from ironic.drivers import utils as driver_utils from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils if six.PY3: import io file = io.BytesIO INFO_DICT = db_utils.get_test_ilo_info() CONF = cfg.CONF class IloBootCommonMethodsTestCase(db_base.DbTestCase): def setUp(self): super(IloBootCommonMethodsTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="iscsi_ilo") self.node = obj_utils.create_test_node( self.context, driver='iscsi_ilo', driver_info=INFO_DICT) def test_parse_driver_info(self): self.node.driver_info['ilo_deploy_iso'] = 'deploy-iso' expected_driver_info = {'ilo_deploy_iso': 'deploy-iso'} actual_driver_info = ilo_boot.parse_driver_info(self.node) self.assertEqual(expected_driver_info, actual_driver_info) def test_parse_driver_info_exc(self): self.assertRaises(exception.MissingParameterValue, ilo_boot.parse_driver_info, self.node) class IloBootPrivateMethodsTestCase(db_base.DbTestCase): def setUp(self): super(IloBootPrivateMethodsTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="iscsi_ilo") self.node = obj_utils.create_test_node( self.context, driver='iscsi_ilo', driver_info=INFO_DICT) def test__get_boot_iso_object_name(self): boot_iso_actual = ilo_boot._get_boot_iso_object_name(self.node) boot_iso_expected = "boot-%s" % self.node.uuid self.assertEqual(boot_iso_expected, boot_iso_actual) @mock.patch.object(image_service.HttpImageService, 'validate_href', spec_set=True, autospec=True) def test__get_boot_iso_http_url(self, service_mock): url = 'http://abc.org/image/qcow2' i_info = self.node.instance_info i_info['ilo_boot_iso'] = url self.node.instance_info = i_info self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: boot_iso_actual = ilo_boot._get_boot_iso(task, 'root-uuid') service_mock.assert_called_once_with(mock.ANY, url) self.assertEqual(url, boot_iso_actual) @mock.patch.object(image_service.HttpImageService, 'validate_href', spec_set=True, autospec=True) def test__get_boot_iso_unsupported_url(self, validate_href_mock): validate_href_mock.side_effect = iter( [exception.ImageRefValidationFailed( image_href='file://img.qcow2', reason='fail')]) url = 'file://img.qcow2' i_info = self.node.instance_info i_info['ilo_boot_iso'] = url self.node.instance_info = i_info self.node.save() with 
task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.ImageRefValidationFailed, ilo_boot._get_boot_iso, task, 'root-uuid') @mock.patch.object(images, 'get_image_properties', spec_set=True, autospec=True) @mock.patch.object(ilo_boot, '_parse_deploy_info', spec_set=True, autospec=True) def test__get_boot_iso_glance_image(self, deploy_info_mock, image_props_mock): deploy_info_mock.return_value = {'image_source': 'image-uuid', 'ilo_deploy_iso': 'deploy_iso_uuid'} image_props_mock.return_value = {'boot_iso': u'glance://uui\u0111', 'kernel_id': None, 'ramdisk_id': None} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: driver_internal_info = task.node.driver_internal_info driver_internal_info['boot_iso_created_in_web_server'] = False task.node.driver_internal_info = driver_internal_info task.node.save() boot_iso_actual = ilo_boot._get_boot_iso(task, 'root-uuid') deploy_info_mock.assert_called_once_with(task.node) image_props_mock.assert_called_once_with( task.context, 'image-uuid', ['boot_iso', 'kernel_id', 'ramdisk_id']) boot_iso_expected = u'glance://uui\u0111' self.assertEqual(boot_iso_expected, boot_iso_actual) @mock.patch.object(deploy_utils, 'get_boot_mode_for_deploy', spec_set=True, autospec=True) @mock.patch.object(ilo_boot.LOG, 'error', spec_set=True, autospec=True) @mock.patch.object(images, 'get_image_properties', spec_set=True, autospec=True) @mock.patch.object(ilo_boot, '_parse_deploy_info', spec_set=True, autospec=True) def test__get_boot_iso_uefi_no_glance_image(self, deploy_info_mock, image_props_mock, log_mock, boot_mode_mock): deploy_info_mock.return_value = {'image_source': 'image-uuid', 'ilo_deploy_iso': 'deploy_iso_uuid'} image_props_mock.return_value = {'boot_iso': None, 'kernel_id': None, 'ramdisk_id': None} properties = {'capabilities': 'boot_mode:uefi'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties = properties boot_iso_result = ilo_boot._get_boot_iso(task, 'root-uuid') deploy_info_mock.assert_called_once_with(task.node) image_props_mock.assert_called_once_with( task.context, 'image-uuid', ['boot_iso', 'kernel_id', 'ramdisk_id']) self.assertTrue(log_mock.called) self.assertFalse(boot_mode_mock.called) self.assertIsNone(boot_iso_result) @mock.patch.object(tempfile, 'NamedTemporaryFile', spec_set=True, autospec=True) @mock.patch.object(images, 'create_boot_iso', spec_set=True, autospec=True) @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True) @mock.patch.object(ilo_boot, '_get_boot_iso_object_name', spec_set=True, autospec=True) @mock.patch.object(driver_utils, 'get_node_capability', spec_set=True, autospec=True) @mock.patch.object(images, 'get_image_properties', spec_set=True, autospec=True) @mock.patch.object(ilo_boot, '_parse_deploy_info', spec_set=True, autospec=True) def test__get_boot_iso_create(self, deploy_info_mock, image_props_mock, capability_mock, boot_object_name_mock, swift_api_mock, create_boot_iso_mock, tempfile_mock): CONF.ilo.swift_ilo_container = 'ilo-cont' CONF.pxe.pxe_append_params = 'kernel-params' swift_obj_mock = swift_api_mock.return_value fileobj_mock = mock.MagicMock(spec=file) fileobj_mock.name = 'tmpfile' mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = fileobj_mock tempfile_mock.return_value = mock_file_handle deploy_info_mock.return_value = {'image_source': 'image-uuid', 'ilo_deploy_iso': 'deploy_iso_uuid'} image_props_mock.return_value = {'boot_iso': None, 
'kernel_id': 'kernel_uuid', 'ramdisk_id': 'ramdisk_uuid'} boot_object_name_mock.return_value = 'abcdef' create_boot_iso_mock.return_value = '/path/to/boot-iso' capability_mock.return_value = 'uefi' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: boot_iso_actual = ilo_boot._get_boot_iso(task, 'root-uuid') deploy_info_mock.assert_called_once_with(task.node) image_props_mock.assert_called_once_with( task.context, 'image-uuid', ['boot_iso', 'kernel_id', 'ramdisk_id']) boot_object_name_mock.assert_called_once_with(task.node) create_boot_iso_mock.assert_called_once_with(task.context, 'tmpfile', 'kernel_uuid', 'ramdisk_uuid', 'deploy_iso_uuid', 'root-uuid', 'kernel-params', 'uefi') swift_obj_mock.create_object.assert_called_once_with('ilo-cont', 'abcdef', 'tmpfile') boot_iso_expected = 'swift:abcdef' self.assertEqual(boot_iso_expected, boot_iso_actual) @mock.patch.object(ilo_common, 'copy_image_to_web_server', spec_set=True, autospec=True) @mock.patch.object(tempfile, 'NamedTemporaryFile', spec_set=True, autospec=True) @mock.patch.object(images, 'create_boot_iso', spec_set=True, autospec=True) @mock.patch.object(ilo_boot, '_get_boot_iso_object_name', spec_set=True, autospec=True) @mock.patch.object(driver_utils, 'get_node_capability', spec_set=True, autospec=True) @mock.patch.object(images, 'get_image_properties', spec_set=True, autospec=True) @mock.patch.object(ilo_boot, '_parse_deploy_info', spec_set=True, autospec=True) def test__get_boot_iso_recreate_boot_iso_use_webserver( self, deploy_info_mock, image_props_mock, capability_mock, boot_object_name_mock, create_boot_iso_mock, tempfile_mock, copy_file_mock): CONF.ilo.swift_ilo_container = 'ilo-cont' CONF.ilo.use_web_server_for_images = True CONF.deploy.http_url = "http://10.10.1.30/httpboot" CONF.deploy.http_root = "/httpboot" CONF.pxe.pxe_append_params = 'kernel-params' fileobj_mock = mock.MagicMock(spec=file) fileobj_mock.name = 'tmpfile' mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = fileobj_mock tempfile_mock.return_value = mock_file_handle ramdisk_href = "http://10.10.1.30/httpboot/ramdisk" kernel_href = "http://10.10.1.30/httpboot/kernel" deploy_info_mock.return_value = {'image_source': 'image-uuid', 'ilo_deploy_iso': 'deploy_iso_uuid'} image_props_mock.return_value = {'boot_iso': None, 'kernel_id': kernel_href, 'ramdisk_id': ramdisk_href} boot_object_name_mock.return_value = 'new_boot_iso' create_boot_iso_mock.return_value = '/path/to/boot-iso' capability_mock.return_value = 'uefi' copy_file_mock.return_value = "http://10.10.1.30/httpboot/new_boot_iso" with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: driver_internal_info = task.node.driver_internal_info driver_internal_info['boot_iso_created_in_web_server'] = True instance_info = task.node.instance_info old_boot_iso = 'http://10.10.1.30/httpboot/old_boot_iso' instance_info['ilo_boot_iso'] = old_boot_iso boot_iso_actual = ilo_boot._get_boot_iso(task, 'root-uuid') deploy_info_mock.assert_called_once_with(task.node) image_props_mock.assert_called_once_with( task.context, 'image-uuid', ['boot_iso', 'kernel_id', 'ramdisk_id']) boot_object_name_mock.assert_called_once_with(task.node) create_boot_iso_mock.assert_called_once_with(task.context, 'tmpfile', kernel_href, ramdisk_href, 'deploy_iso_uuid', 'root-uuid', 'kernel-params', 'uefi') boot_iso_expected = 'http://10.10.1.30/httpboot/new_boot_iso' self.assertEqual(boot_iso_expected, boot_iso_actual) 
copy_file_mock.assert_called_once_with(fileobj_mock.name, 'new_boot_iso') @mock.patch.object(ilo_common, 'copy_image_to_web_server', spec_set=True, autospec=True) @mock.patch.object(tempfile, 'NamedTemporaryFile', spec_set=True, autospec=True) @mock.patch.object(images, 'create_boot_iso', spec_set=True, autospec=True) @mock.patch.object(ilo_boot, '_get_boot_iso_object_name', spec_set=True, autospec=True) @mock.patch.object(driver_utils, 'get_node_capability', spec_set=True, autospec=True) @mock.patch.object(images, 'get_image_properties', spec_set=True, autospec=True) @mock.patch.object(ilo_boot, '_parse_deploy_info', spec_set=True, autospec=True) def test__get_boot_iso_create_use_webserver_true_ramdisk_webserver( self, deploy_info_mock, image_props_mock, capability_mock, boot_object_name_mock, create_boot_iso_mock, tempfile_mock, copy_file_mock): CONF.ilo.swift_ilo_container = 'ilo-cont' CONF.ilo.use_web_server_for_images = True CONF.deploy.http_url = "http://10.10.1.30/httpboot" CONF.deploy.http_root = "/httpboot" CONF.pxe.pxe_append_params = 'kernel-params' fileobj_mock = mock.MagicMock(spec=file) fileobj_mock.name = 'tmpfile' mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = fileobj_mock tempfile_mock.return_value = mock_file_handle ramdisk_href = "http://10.10.1.30/httpboot/ramdisk" kernel_href = "http://10.10.1.30/httpboot/kernel" deploy_info_mock.return_value = {'image_source': 'image-uuid', 'ilo_deploy_iso': 'deploy_iso_uuid'} image_props_mock.return_value = {'boot_iso': None, 'kernel_id': kernel_href, 'ramdisk_id': ramdisk_href} boot_object_name_mock.return_value = 'abcdef' create_boot_iso_mock.return_value = '/path/to/boot-iso' capability_mock.return_value = 'uefi' copy_file_mock.return_value = "http://10.10.1.30/httpboot/abcdef" with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: boot_iso_actual = ilo_boot._get_boot_iso(task, 'root-uuid') deploy_info_mock.assert_called_once_with(task.node) image_props_mock.assert_called_once_with( task.context, 'image-uuid', ['boot_iso', 'kernel_id', 'ramdisk_id']) boot_object_name_mock.assert_called_once_with(task.node) create_boot_iso_mock.assert_called_once_with(task.context, 'tmpfile', kernel_href, ramdisk_href, 'deploy_iso_uuid', 'root-uuid', 'kernel-params', 'uefi') boot_iso_expected = 'http://10.10.1.30/httpboot/abcdef' self.assertEqual(boot_iso_expected, boot_iso_actual) copy_file_mock.assert_called_once_with(fileobj_mock.name, 'abcdef') @mock.patch.object(ilo_boot, '_get_boot_iso_object_name', spec_set=True, autospec=True) @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True) def test__clean_up_boot_iso_for_instance(self, swift_mock, boot_object_name_mock): swift_obj_mock = swift_mock.return_value CONF.ilo.swift_ilo_container = 'ilo-cont' boot_object_name_mock.return_value = 'boot-object' i_info = self.node.instance_info i_info['ilo_boot_iso'] = 'swift:bootiso' self.node.instance_info = i_info self.node.save() ilo_boot._clean_up_boot_iso_for_instance(self.node) swift_obj_mock.delete_object.assert_called_once_with('ilo-cont', 'boot-object') @mock.patch.object(ilo_boot.LOG, 'exception', spec_set=True, autospec=True) @mock.patch.object(ilo_boot, '_get_boot_iso_object_name', spec_set=True, autospec=True) @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True) def test__clean_up_boot_iso_for_instance_exc(self, swift_mock, boot_object_name_mock, log_mock): swift_obj_mock = swift_mock.return_value exc = exception.SwiftObjectNotFoundError('error') 
swift_obj_mock.delete_object.side_effect = exc CONF.ilo.swift_ilo_container = 'ilo-cont' boot_object_name_mock.return_value = 'boot-object' i_info = self.node.instance_info i_info['ilo_boot_iso'] = 'swift:bootiso' self.node.instance_info = i_info self.node.save() ilo_boot._clean_up_boot_iso_for_instance(self.node) swift_obj_mock.delete_object.assert_called_once_with('ilo-cont', 'boot-object') self.assertTrue(log_mock.called) @mock.patch.object(ironic_utils, 'unlink_without_raise', spec_set=True, autospec=True) def test__clean_up_boot_iso_for_instance_on_webserver(self, unlink_mock): CONF.ilo.use_web_server_for_images = True CONF.deploy.http_root = "/webserver" i_info = self.node.instance_info i_info['ilo_boot_iso'] = 'http://x.y.z.a/webserver/boot-object' self.node.instance_info = i_info self.node.save() boot_iso_path = "/webserver/boot-object" ilo_boot._clean_up_boot_iso_for_instance(self.node) unlink_mock.assert_called_once_with(boot_iso_path) @mock.patch.object(ilo_boot, '_get_boot_iso_object_name', spec_set=True, autospec=True) def test__clean_up_boot_iso_for_instance_no_boot_iso( self, boot_object_name_mock): ilo_boot._clean_up_boot_iso_for_instance(self.node) self.assertFalse(boot_object_name_mock.called) @mock.patch.object(ilo_boot, 'parse_driver_info', spec_set=True, autospec=True) @mock.patch.object(deploy_utils, 'get_image_instance_info', spec_set=True, autospec=True) def test__parse_deploy_info(self, instance_info_mock, driver_info_mock): instance_info_mock.return_value = {'a': 'b'} driver_info_mock.return_value = {'c': 'd'} expected_info = {'a': 'b', 'c': 'd'} actual_info = ilo_boot._parse_deploy_info(self.node) self.assertEqual(expected_info, actual_info) class IloVirtualMediaBootTestCase(db_base.DbTestCase): def setUp(self): super(IloVirtualMediaBootTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="iscsi_ilo") self.node = obj_utils.create_test_node( self.context, driver='iscsi_ilo', driver_info=INFO_DICT) @mock.patch.object(deploy_utils, 'validate_image_properties', spec_set=True, autospec=True) @mock.patch.object(ilo_boot, '_parse_deploy_info', spec_set=True, autospec=True) def _test_validate(self, deploy_info_mock, validate_prop_mock, props_expected): d_info = {'image_source': 'uuid'} deploy_info_mock.return_value = d_info with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.boot.validate(task) deploy_info_mock.assert_called_once_with(task.node) validate_prop_mock.assert_called_once_with( task.context, d_info, props_expected) @mock.patch.object(service_utils, 'is_glance_image', spec_set=True, autospec=True) def test_validate_glance_partition_image(self, is_glance_image_mock): is_glance_image_mock.return_value = True self._test_validate(props_expected=['kernel_id', 'ramdisk_id']) def test_validate_whole_disk_image(self): self.node.driver_internal_info = {'is_whole_disk_image': True} self.node.save() self._test_validate(props_expected=[]) @mock.patch.object(service_utils, 'is_glance_image', spec_set=True, autospec=True) def test_validate_non_glance_partition_image(self, is_glance_image_mock): is_glance_image_mock.return_value = False self._test_validate(props_expected=['kernel', 'ramdisk']) @mock.patch.object(ilo_common, 'eject_vmedia_devices', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'setup_vmedia', spec_set=True, autospec=True) @mock.patch.object(deploy_utils, 'get_single_nic_with_vif_port_id', spec_set=True, autospec=True) def _test_prepare_ramdisk(self, get_nic_mock, setup_vmedia_mock, 
eject_mock, ilo_boot_iso, image_source, ramdisk_params={'a': 'b'}): instance_info = self.node.instance_info instance_info['ilo_boot_iso'] = ilo_boot_iso instance_info['image_source'] = image_source self.node.instance_info = instance_info self.node.save() get_nic_mock.return_value = '12:34:56:78:90:ab' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.driver_info['ilo_deploy_iso'] = 'deploy-iso' task.driver.boot.prepare_ramdisk(task, ramdisk_params) eject_mock.assert_called_once_with(task) expected_ramdisk_opts = {'a': 'b', 'BOOTIF': '12:34:56:78:90:ab'} get_nic_mock.assert_called_once_with(task) setup_vmedia_mock.assert_called_once_with(task, 'deploy-iso', expected_ramdisk_opts) def test_prepare_ramdisk_glance_image(self): self._test_prepare_ramdisk( ilo_boot_iso='swift:abcdef', image_source='6b2f0c0c-79e8-4db6-842e-43c9764204af') self.node.refresh() self.assertNotIn('ilo_boot_iso', self.node.instance_info) def test_prepare_ramdisk_not_a_glance_image(self): self._test_prepare_ramdisk( ilo_boot_iso='http://mybootiso', image_source='http://myimage') self.node.refresh() self.assertEqual('http://mybootiso', self.node.instance_info['ilo_boot_iso']) @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'setup_vmedia_for_boot', spec_set=True, autospec=True) @mock.patch.object(ilo_boot, '_get_boot_iso', spec_set=True, autospec=True) def test__configure_vmedia_boot_with_boot_iso( self, get_boot_iso_mock, setup_vmedia_mock, set_boot_device_mock): root_uuid = {'root uuid': 'root_uuid'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: get_boot_iso_mock.return_value = 'boot.iso' task.driver.boot._configure_vmedia_boot( task, root_uuid) get_boot_iso_mock.assert_called_once_with( task, root_uuid) setup_vmedia_mock.assert_called_once_with( task, 'boot.iso') set_boot_device_mock.assert_called_once_with( task, boot_devices.CDROM, persistent=True) self.assertEqual('boot.iso', task.node.instance_info['ilo_boot_iso']) @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'setup_vmedia_for_boot', spec_set=True, autospec=True) @mock.patch.object(ilo_boot, '_get_boot_iso', spec_set=True, autospec=True) def test__configure_vmedia_boot_without_boot_iso( self, get_boot_iso_mock, setup_vmedia_mock, set_boot_device_mock): root_uuid = {'root uuid': 'root_uuid'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: get_boot_iso_mock.return_value = None task.driver.boot._configure_vmedia_boot( task, root_uuid) get_boot_iso_mock.assert_called_once_with( task, root_uuid) self.assertFalse(setup_vmedia_mock.called) self.assertFalse(set_boot_device_mock.called) @mock.patch.object(ilo_common, 'cleanup_vmedia_boot', spec_set=True, autospec=True) @mock.patch.object(ilo_boot, '_clean_up_boot_iso_for_instance', spec_set=True, autospec=True) def test_clean_up_instance(self, cleanup_iso_mock, cleanup_vmedia_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: driver_internal_info = task.node.driver_internal_info driver_internal_info['boot_iso_created_in_web_server'] = False driver_internal_info['root_uuid_or_disk_id'] = ( "12312642-09d3-467f-8e09-12385826a123") task.node.driver_internal_info = driver_internal_info task.node.save() task.driver.boot.clean_up_instance(task) cleanup_iso_mock.assert_called_once_with(task.node) cleanup_vmedia_mock.assert_called_once_with(task) 
            driver_internal_info = task.node.driver_internal_info
            boot_iso_created = driver_internal_info.get(
                'boot_iso_created_in_web_server')
            root_uuid = driver_internal_info.get('root_uuid_or_disk_id')
            self.assertIsNone(boot_iso_created)
            self.assertIsNone(root_uuid)

    @mock.patch.object(ilo_common, 'cleanup_vmedia_boot', spec_set=True,
                       autospec=True)
    def test_clean_up_ramdisk(self, cleanup_vmedia_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.boot.clean_up_ramdisk(task)
            cleanup_vmedia_mock.assert_called_once_with(task)

    @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_common, 'cleanup_vmedia_boot', spec_set=True,
                       autospec=True)
    def _test_prepare_instance_whole_disk_image(
            self, cleanup_vmedia_boot_mock, set_boot_device_mock):
        self.node.driver_internal_info = {'is_whole_disk_image': True}
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.boot.prepare_instance(task)
            cleanup_vmedia_boot_mock.assert_called_once_with(task)
            set_boot_device_mock.assert_called_once_with(task,
                                                         boot_devices.DISK,
                                                         persistent=True)

    def test_prepare_instance_whole_disk_image_local(self):
        self.node.instance_info = {'capabilities': '{"boot_option": "local"}'}
        self.node.save()
        self._test_prepare_instance_whole_disk_image()

    def test_prepare_instance_whole_disk_image(self):
        self._test_prepare_instance_whole_disk_image()

    @mock.patch.object(ilo_boot.IloVirtualMediaBoot, '_configure_vmedia_boot',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'cleanup_vmedia_boot', spec_set=True,
                       autospec=True)
    def test_prepare_instance_partition_image(
            self, cleanup_vmedia_boot_mock, configure_vmedia_mock):
        self.node.driver_internal_info = {'root_uuid_or_disk_id': (
            "12312642-09d3-467f-8e09-12385826a123")}
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.boot.prepare_instance(task)
            cleanup_vmedia_boot_mock.assert_called_once_with(task)
            configure_vmedia_mock.assert_called_once_with(
                mock.ANY, task, "12312642-09d3-467f-8e09-12385826a123")

ironic-5.1.0/ironic/tests/unit/drivers/modules/ilo/test_firmware_processor.py

# Copyright 2016 Hewlett Packard Enterprise Development Company LP
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
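# ---------------------------------------------------------------------------
# Editor's sketch (illustrative only, not part of the original module): the
# FirmwareProcessor tests that follow verify dispatch on the URL scheme of a
# firmware location -- file://, http(s):// and swift:// each route to a
# dedicated download helper, and unknown schemes are rejected up front. A
# minimal, self-contained rendering of that pattern (the function name and
# the registry argument here are hypothetical) looks like:

import six.moves.urllib.parse as urlparse


def _sketch_download_fw_to(url, target_file, downloaders):
    """Route ``url`` to the downloader registered for its scheme.

    ``downloaders`` maps a scheme ('file', 'http', 'https', 'swift') to a
    callable accepting (url, target_file).
    """
    scheme = urlparse.urlparse(url).scheme.lower()
    try:
        return downloaders[scheme](url, target_file)
    except KeyError:
        # Mirrors the constructor validation exercised by these tests.
        raise ValueError('unsupported firmware URL scheme: %r' % scheme)
# ---------------------------------------------------------------------------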
"""Test class for Firmware Processor used by iLO management interface.""" import mock from oslo_utils import importutils import six from six.moves import builtins as __builtin__ import six.moves.urllib.parse as urlparse if six.PY3: import io file = io.BytesIO from ironic.common import exception from ironic.drivers.modules.ilo import common as ilo_common from ironic.drivers.modules.ilo import firmware_processor as ilo_fw_processor from ironic.tests import base ilo_error = importutils.try_import('proliantutils.exception') class FirmwareProcessorTestCase(base.TestCase): def setUp(self): super(FirmwareProcessorTestCase, self).setUp() self.any_url = 'http://netloc/path' self.fw_processor_fake = mock.MagicMock( parsed_url='set it as required') def test_verify_firmware_update_args_throws_for_invalid_update_mode(self): # | GIVEN | update_firmware_mock = mock.MagicMock() firmware_update_args = {'firmware_update_mode': 'invalid_mode', 'firmware_images': None} # Note(deray): Need to set __name__ attribute explicitly to keep # ``six.wraps`` happy. Passing this to the `name` argument at the time # creation of Mock doesn't help. update_firmware_mock.__name__ = 'update_firmware_mock' wrapped_func = (ilo_fw_processor. verify_firmware_update_args(update_firmware_mock)) node_fake = mock.MagicMock(uuid='fake_node_uuid') task_fake = mock.MagicMock(node=node_fake) # | WHEN & THEN | self.assertRaises(exception.InvalidParameterValue, wrapped_func, mock.ANY, task_fake, **firmware_update_args) def test_verify_firmware_update_args_throws_for_no_firmware_url(self): # | GIVEN | update_firmware_mock = mock.MagicMock() firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': []} update_firmware_mock.__name__ = 'update_firmware_mock' wrapped_func = (ilo_fw_processor. 
verify_firmware_update_args(update_firmware_mock)) # | WHEN & THEN | self.assertRaises(exception.InvalidParameterValue, wrapped_func, mock.ANY, mock.ANY, **firmware_update_args) def test_get_and_validate_firmware_image_info(self): # | GIVEN | firmware_image_info = { 'url': self.any_url, 'checksum': 'b64c8f7799cfbb553d384d34dc43fafe336cc889', 'component': 'BIOS' } # | WHEN | url, checksum, component = ( ilo_fw_processor.get_and_validate_firmware_image_info( firmware_image_info)) # | THEN | self.assertEqual(self.any_url, url) self.assertEqual('b64c8f7799cfbb553d384d34dc43fafe336cc889', checksum) self.assertEqual('bios', component) def test_get_and_validate_firmware_image_info_fails_for_missing_parameter( self): # | GIVEN | invalid_firmware_image_info = { 'url': self.any_url, 'component': 'bios' } # | WHEN | & | THEN | self.assertRaisesRegexp( exception.MissingParameterValue, 'checksum', ilo_fw_processor.get_and_validate_firmware_image_info, invalid_firmware_image_info) def test_get_and_validate_firmware_image_info_fails_for_empty_parameter( self): # | GIVEN | invalid_firmware_image_info = { 'url': self.any_url, 'checksum': 'valid_checksum', 'component': '' } # | WHEN | & | THEN | self.assertRaisesRegexp( exception.MissingParameterValue, 'component', ilo_fw_processor.get_and_validate_firmware_image_info, invalid_firmware_image_info) def test_get_and_validate_firmware_image_info_fails_for_invalid_component( self): # | GIVEN | invalid_firmware_image_info = { 'url': self.any_url, 'checksum': 'valid_checksum', 'component': 'INVALID' } # | WHEN | & | THEN | self.assertRaises( exception.InvalidParameterValue, ilo_fw_processor.get_and_validate_firmware_image_info, invalid_firmware_image_info) def test_fw_processor_ctor_sets_parsed_url_attrib_of_fw_processor(self): # | WHEN | fw_processor = ilo_fw_processor.FirmwareProcessor(self.any_url) # | THEN | self.assertEqual(self.any_url, fw_processor.parsed_url.geturl()) @mock.patch.object( ilo_fw_processor, '_download_file_based_fw_to', autospec=True) def test__download_file_based_fw_to_gets_invoked_for_file_based_firmware( self, _download_file_based_fw_to_mock): # | GIVEN | some_file_url = 'file:///some_location/some_firmware_file' # | WHEN | fw_processor = ilo_fw_processor.FirmwareProcessor(some_file_url) fw_processor._download_fw_to('some_target_file') # | THEN | _download_file_based_fw_to_mock.assert_called_once_with( fw_processor, 'some_target_file') @mock.patch.object( ilo_fw_processor, '_download_http_based_fw_to', autospec=True) def test__download_http_based_fw_to_gets_invoked_for_http_based_firmware( self, _download_http_based_fw_to_mock): # | GIVEN | for some_http_url in ('http://netloc/path_to_firmware_file', 'https://netloc/path_to_firmware_file'): # | WHEN | fw_processor = ilo_fw_processor.FirmwareProcessor(some_http_url) fw_processor._download_fw_to('some_target_file') # | THEN | _download_http_based_fw_to_mock.assert_called_once_with( fw_processor, 'some_target_file') _download_http_based_fw_to_mock.reset_mock() @mock.patch.object( ilo_fw_processor, '_download_swift_based_fw_to', autospec=True) def test__download_swift_based_fw_to_gets_invoked_for_swift_based_firmware( self, _download_swift_based_fw_to_mock): # | GIVEN | some_swift_url = 'swift://containername/objectname' # | WHEN | fw_processor = ilo_fw_processor.FirmwareProcessor(some_swift_url) fw_processor._download_fw_to('some_target_file') # | THEN | _download_swift_based_fw_to_mock.assert_called_once_with( fw_processor, 'some_target_file') def 
test_fw_processor_ctor_throws_exception_with_invalid_firmware_url( self): # | GIVEN | any_invalid_firmware_url = 'any_invalid_url' # | WHEN | & | THEN | self.assertRaises(exception.InvalidParameterValue, ilo_fw_processor.FirmwareProcessor, any_invalid_firmware_url) @mock.patch.object(ilo_fw_processor, 'tempfile', autospec=True) @mock.patch.object(ilo_fw_processor, 'os', autospec=True) @mock.patch.object(ilo_fw_processor, 'shutil', autospec=True) @mock.patch.object(ilo_common, 'verify_image_checksum', spec_set=True, autospec=True) @mock.patch.object( ilo_fw_processor, '_extract_fw_from_file', autospec=True) def test_process_fw_on_calls__download_fw_to( self, _extract_fw_from_file_mock, verify_checksum_mock, shutil_mock, os_mock, tempfile_mock): # | GIVEN | fw_processor = ilo_fw_processor.FirmwareProcessor(self.any_url) # Now mock the __download_fw_to method of fw_processor instance _download_fw_to_mock = mock.MagicMock() fw_processor._download_fw_to = _download_fw_to_mock expected_return_location = (ilo_fw_processor.FirmwareImageLocation( 'some_location/file', 'file')) _extract_fw_from_file_mock.return_value = (expected_return_location, True) node_mock = mock.ANY checksum_fake = mock.ANY # | WHEN | actual_return_location = fw_processor.process_fw_on(node_mock, checksum_fake) # | THEN | _download_fw_to_mock.assert_called_once_with( os_mock.path.join.return_value) self.assertEqual(expected_return_location.fw_image_location, actual_return_location.fw_image_location) self.assertEqual(expected_return_location.fw_image_filename, actual_return_location.fw_image_filename) @mock.patch.object(ilo_fw_processor, 'tempfile', autospec=True) @mock.patch.object(ilo_fw_processor, 'os', autospec=True) @mock.patch.object(ilo_fw_processor, 'shutil', autospec=True) @mock.patch.object(ilo_common, 'verify_image_checksum', spec_set=True, autospec=True) @mock.patch.object( ilo_fw_processor, '_extract_fw_from_file', autospec=True) def test_process_fw_on_verifies_checksum_of_downloaded_fw_file( self, _extract_fw_from_file_mock, verify_checksum_mock, shutil_mock, os_mock, tempfile_mock): # | GIVEN | fw_processor = ilo_fw_processor.FirmwareProcessor(self.any_url) # Now mock the __download_fw_to method of fw_processor instance _download_fw_to_mock = mock.MagicMock() fw_processor._download_fw_to = _download_fw_to_mock expected_return_location = (ilo_fw_processor.FirmwareImageLocation( 'some_location/file', 'file')) _extract_fw_from_file_mock.return_value = (expected_return_location, True) node_mock = mock.ANY checksum_fake = mock.ANY # | WHEN | actual_return_location = fw_processor.process_fw_on(node_mock, checksum_fake) # | THEN | _download_fw_to_mock.assert_called_once_with( os_mock.path.join.return_value) verify_checksum_mock.assert_called_once_with( os_mock.path.join.return_value, checksum_fake) self.assertEqual(expected_return_location.fw_image_location, actual_return_location.fw_image_location) self.assertEqual(expected_return_location.fw_image_filename, actual_return_location.fw_image_filename) @mock.patch.object(ilo_fw_processor, 'tempfile', autospec=True) @mock.patch.object(ilo_fw_processor, 'os', autospec=True) @mock.patch.object(ilo_fw_processor, 'shutil', autospec=True) @mock.patch.object(ilo_common, 'verify_image_checksum', spec_set=True, autospec=True) def test_process_fw_on_throws_error_if_checksum_validation_fails( self, verify_checksum_mock, shutil_mock, os_mock, tempfile_mock): # | GIVEN | fw_processor = ilo_fw_processor.FirmwareProcessor(self.any_url) # Now mock the __download_fw_to method of 
fw_processor instance _download_fw_to_mock = mock.MagicMock() fw_processor._download_fw_to = _download_fw_to_mock verify_checksum_mock.side_effect = exception.ImageRefValidationFailed( image_href='some image', reason='checksum verification failed') node_mock = mock.ANY checksum_fake = mock.ANY # | WHEN | & | THEN | self.assertRaises(exception.ImageRefValidationFailed, fw_processor.process_fw_on, node_mock, checksum_fake) shutil_mock.rmtree.assert_called_once_with( tempfile_mock.mkdtemp(), ignore_errors=True) @mock.patch.object(ilo_fw_processor, 'tempfile', autospec=True) @mock.patch.object(ilo_fw_processor, 'os', autospec=True) @mock.patch.object(ilo_fw_processor, 'shutil', autospec=True) @mock.patch.object(ilo_common, 'verify_image_checksum', spec_set=True, autospec=True) @mock.patch.object( ilo_fw_processor, '_extract_fw_from_file', autospec=True) def test_process_fw_on_calls__extract_fw_from_file( self, _extract_fw_from_file_mock, verify_checksum_mock, shutil_mock, os_mock, tempfile_mock): # | GIVEN | fw_processor = ilo_fw_processor.FirmwareProcessor(self.any_url) # Now mock the __download_fw_to method of fw_processor instance _download_fw_to_mock = mock.MagicMock() fw_processor._download_fw_to = _download_fw_to_mock expected_return_location = (ilo_fw_processor.FirmwareImageLocation( 'some_location/file', 'file')) _extract_fw_from_file_mock.return_value = (expected_return_location, True) node_mock = mock.ANY checksum_fake = mock.ANY # | WHEN | actual_return_location = fw_processor.process_fw_on(node_mock, checksum_fake) # | THEN | _extract_fw_from_file_mock.assert_called_once_with( node_mock, os_mock.path.join.return_value) self.assertEqual(expected_return_location.fw_image_location, actual_return_location.fw_image_location) self.assertEqual(expected_return_location.fw_image_filename, actual_return_location.fw_image_filename) shutil_mock.rmtree.assert_called_once_with( tempfile_mock.mkdtemp(), ignore_errors=True) @mock.patch.object(__builtin__, 'open', autospec=True) @mock.patch.object( ilo_fw_processor.image_service, 'FileImageService', autospec=True) def test__download_file_based_fw_to_copies_file_to_target( self, file_image_service_mock, open_mock): # | GIVEN | fd_mock = mock.MagicMock(spec=file) open_mock.return_value = fd_mock fd_mock.__enter__.return_value = fd_mock any_file_based_firmware_file = 'file:///tmp/any_file_path' firmware_file_path = '/tmp/any_file_path' self.fw_processor_fake.parsed_url = urlparse.urlparse( any_file_based_firmware_file) # | WHEN | ilo_fw_processor._download_file_based_fw_to(self.fw_processor_fake, 'target_file') # | THEN | file_image_service_mock.return_value.download.assert_called_once_with( firmware_file_path, fd_mock) @mock.patch.object(__builtin__, 'open', autospec=True) @mock.patch.object(ilo_fw_processor, 'image_service', autospec=True) def test__download_http_based_fw_to_downloads_the_fw_file( self, image_service_mock, open_mock): # | GIVEN | fd_mock = mock.MagicMock(spec=file) open_mock.return_value = fd_mock fd_mock.__enter__.return_value = fd_mock any_http_based_firmware_file = 'http://netloc/path_to_firmware_file' any_target_file = 'any_target_file' self.fw_processor_fake.parsed_url = urlparse.urlparse( any_http_based_firmware_file) # | WHEN | ilo_fw_processor._download_http_based_fw_to(self.fw_processor_fake, any_target_file) # | THEN | image_service_mock.HttpImageService().download.assert_called_once_with( any_http_based_firmware_file, fd_mock) @mock.patch.object(ilo_fw_processor, 'urlparse', autospec=True) @mock.patch.object( 
ilo_fw_processor, '_download_http_based_fw_to', autospec=True) @mock.patch.object(ilo_fw_processor, 'swift', autospec=True) def test__download_swift_based_fw_to_creates_temp_url( self, swift_mock, _download_http_based_fw_to_mock, urlparse_mock): # | GIVEN | any_swift_based_firmware_file = 'swift://containername/objectname' any_target_file = 'any_target_file' self.fw_processor_fake.parsed_url = urlparse.urlparse( any_swift_based_firmware_file) # | WHEN | ilo_fw_processor._download_swift_based_fw_to(self.fw_processor_fake, any_target_file) # | THEN | swift_mock.SwiftAPI().get_temp_url.assert_called_once_with( 'containername', 'objectname', mock.ANY) @mock.patch.object(urlparse, 'urlparse', autospec=True) @mock.patch.object( ilo_fw_processor, '_download_http_based_fw_to', autospec=True) @mock.patch.object(ilo_fw_processor, 'swift', autospec=True) def test__download_swift_based_fw_to_calls__download_http_based_fw_to( self, swift_mock, _download_http_based_fw_to_mock, urlparse_mock): """_download_swift_based_fw_to invokes _download_http_based_fw_to _download_swift_based_fw_to makes a call to _download_http_based_fw_to in turn with temp url set as the url attribute of fw_processor instance """ # | GIVEN | any_swift_based_firmware_file = 'swift://containername/objectname' any_target_file = 'any_target_file' self.fw_processor_fake.parsed_url = urlparse.urlparse( any_swift_based_firmware_file) urlparse_mock.reset_mock() # | WHEN | ilo_fw_processor._download_swift_based_fw_to(self.fw_processor_fake, any_target_file) # | THEN | _download_http_based_fw_to_mock.assert_called_once_with( self.fw_processor_fake, any_target_file) urlparse_mock.assert_called_once_with( swift_mock.SwiftAPI().get_temp_url.return_value) self.assertEqual( urlparse_mock.return_value, self.fw_processor_fake.parsed_url) @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True) @mock.patch.object(ilo_fw_processor, 'proliantutils_utils', autospec=True) def test__extract_fw_from_file_calls_process_firmware_image( self, utils_mock, ilo_common_mock): # | GIVEN | node_mock = mock.MagicMock(uuid='fake_node_uuid') any_target_file = 'any_target_file' ilo_object_mock = ilo_common_mock.get_ilo_object.return_value utils_mock.process_firmware_image.return_value = ('some_location', True, True) # | WHEN | ilo_fw_processor._extract_fw_from_file(node_mock, any_target_file) # | THEN | utils_mock.process_firmware_image.assert_called_once_with( any_target_file, ilo_object_mock) @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True) @mock.patch.object(ilo_fw_processor, 'proliantutils_utils', autospec=True) def test__extract_fw_from_file_doesnt_upload_firmware( self, utils_mock, ilo_common_mock): # | GIVEN | node_mock = mock.MagicMock(uuid='fake_node_uuid') any_target_file = 'any_target_file' utils_mock.process_firmware_image.return_value = ( 'some_location/some_fw_file', False, True) # | WHEN | ilo_fw_processor._extract_fw_from_file(node_mock, any_target_file) # | THEN | ilo_common_mock.copy_image_to_web_server.assert_not_called() @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True) @mock.patch.object(ilo_fw_processor, 'proliantutils_utils', autospec=True) @mock.patch.object(ilo_fw_processor, '_remove_file_based_me', autospec=True) def test__extract_fw_from_file_sets_loc_obj_remove_to_file_if_no_upload( self, _remove_mock, utils_mock, ilo_common_mock): # | GIVEN | node_mock = mock.MagicMock(uuid='fake_node_uuid') any_target_file = 'any_target_file' utils_mock.process_firmware_image.return_value = ( 
'some_location/some_fw_file', False, True) # | WHEN | location_obj, is_different_file = ( ilo_fw_processor._extract_fw_from_file(node_mock, any_target_file)) location_obj.remove() # | THEN | _remove_mock.assert_called_once_with(location_obj) @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True) @mock.patch.object(ilo_fw_processor, 'proliantutils_utils', autospec=True) def test__extract_fw_from_file_uploads_firmware_to_webserver( self, utils_mock, ilo_common_mock): # | GIVEN | node_mock = mock.MagicMock(uuid='fake_node_uuid') any_target_file = 'any_target_file' utils_mock.process_firmware_image.return_value = ( 'some_location/some_fw_file', True, True) self.config(use_web_server_for_images=True, group='ilo') # | WHEN | ilo_fw_processor._extract_fw_from_file(node_mock, any_target_file) # | THEN | ilo_common_mock.copy_image_to_web_server.assert_called_once_with( 'some_location/some_fw_file', 'some_fw_file') @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True) @mock.patch.object(ilo_fw_processor, 'proliantutils_utils', autospec=True) @mock.patch.object(ilo_fw_processor, '_remove_webserver_based_me', autospec=True) def test__extract_fw_from_file_sets_loc_obj_remove_to_webserver( self, _remove_mock, utils_mock, ilo_common_mock): # | GIVEN | node_mock = mock.MagicMock(uuid='fake_node_uuid') any_target_file = 'any_target_file' utils_mock.process_firmware_image.return_value = ( 'some_location/some_fw_file', True, True) self.config(use_web_server_for_images=True, group='ilo') # | WHEN | location_obj, is_different_file = ( ilo_fw_processor._extract_fw_from_file(node_mock, any_target_file)) location_obj.remove() # | THEN | _remove_mock.assert_called_once_with(location_obj) @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True) @mock.patch.object(ilo_fw_processor, 'proliantutils_utils', autospec=True) def test__extract_fw_from_file_uploads_firmware_to_swift( self, utils_mock, ilo_common_mock): # | GIVEN | node_mock = mock.MagicMock(uuid='fake_node_uuid') any_target_file = 'any_target_file' utils_mock.process_firmware_image.return_value = ( 'some_location/some_fw_file', True, True) self.config(use_web_server_for_images=False, group='ilo') # | WHEN | ilo_fw_processor._extract_fw_from_file(node_mock, any_target_file) # | THEN | ilo_common_mock.copy_image_to_swift.assert_called_once_with( 'some_location/some_fw_file', 'some_fw_file') @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True) @mock.patch.object(ilo_fw_processor, 'proliantutils_utils', autospec=True) @mock.patch.object(ilo_fw_processor, '_remove_swift_based_me', autospec=True) def test__extract_fw_from_file_sets_loc_obj_remove_to_swift( self, _remove_mock, utils_mock, ilo_common_mock): # | GIVEN | node_mock = mock.MagicMock(uuid='fake_node_uuid') any_target_file = 'any_target_file' utils_mock.process_firmware_image.return_value = ( 'some_location/some_fw_file', True, True) self.config(use_web_server_for_images=False, group='ilo') # | WHEN | location_obj, is_different_file = ( ilo_fw_processor._extract_fw_from_file(node_mock, any_target_file)) location_obj.remove() # | THEN | _remove_mock.assert_called_once_with(location_obj) def test_fw_img_loc_sets_these_attributes(self): # | GIVEN | any_loc = 'some_location/some_fw_file' any_s_filename = 'some_fw_file' # | WHEN | location_obj = ilo_fw_processor.FirmwareImageLocation( any_loc, any_s_filename) # | THEN | self.assertEqual(any_loc, location_obj.fw_image_location) self.assertEqual(any_s_filename, location_obj.fw_image_filename) 
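    # Editor's sketch (not part of the original suite): the remove() tests
    # around this point rely on FirmwareImageLocation binding a cleanup
    # strategy to the object when the firmware image is extracted, so the
    # caller can invoke remove() without knowing whether the image lives on
    # local disk, a webserver or swift. A hypothetical standalone equivalent
    # of that strategy pattern:
    class _SketchImageLocation(object):
        def __init__(self, location, filename, remover):
            self.fw_image_location = location
            self.fw_image_filename = filename
            self._remover = remover  # cleanup strategy chosen by the creator

        def remove(self):
            # Delegate to whichever remover (file/swift/webserver based)
            # was bound at creation time.
            self._remover(self)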
    @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True)
    def test__remove_file_based_me(self, ilo_common_mock):
        # | GIVEN |
        fw_img_location_obj_fake = mock.MagicMock()
        # | WHEN |
        ilo_fw_processor._remove_file_based_me(fw_img_location_obj_fake)
        # | THEN |
        (ilo_common_mock.remove_single_or_list_of_files.
         assert_called_with(fw_img_location_obj_fake.fw_image_location))

    @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True)
    def test__remove_swift_based_me(self, ilo_common_mock):
        # | GIVEN |
        fw_img_location_obj_fake = mock.MagicMock()
        # | WHEN |
        ilo_fw_processor._remove_swift_based_me(fw_img_location_obj_fake)
        # | THEN |
        (ilo_common_mock.remove_image_from_swift.assert_called_with(
            fw_img_location_obj_fake.fw_image_filename, "firmware update"))

    @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True)
    def test__remove_webserver_based_me(self, ilo_common_mock):
        # | GIVEN |
        fw_img_location_obj_fake = mock.MagicMock()
        # | WHEN |
        ilo_fw_processor._remove_webserver_based_me(fw_img_location_obj_fake)
        # | THEN |
        (ilo_common_mock.remove_image_from_web_server.assert_called_with(
            fw_img_location_obj_fake.fw_image_filename))

ironic-5.1.0/ironic/tests/unit/drivers/modules/ilo/test_deploy.py

# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
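# ---------------------------------------------------------------------------
# Editor's sketch (illustrative only, not the driver's actual code): the
# deploy tests below repeatedly exercise one idempotent pattern -- read the
# current secure boot state, flip it off only when it is on, and report
# whether a change was made so that tear-down can restore the setting later.
# Reduced to its essentials (both callables are hypothetical stand-ins):

def _sketch_disable_secure_boot(get_state, set_state):
    """Return True iff secure boot was enabled and had to be disabled."""
    if not get_state():
        return False  # nothing changed, nothing to restore on tear-down
    set_state(False)
    return True

# Usage: _sketch_disable_secure_boot(lambda: True, lambda v: None) -> True,
# matching the True/False contract asserted by the tests that follow.
# ---------------------------------------------------------------------------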
"""Test class for deploy methods used by iLO modules.""" import mock from oslo_config import cfg import six from ironic.common import boot_devices from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules import agent from ironic.drivers.modules import deploy_utils from ironic.drivers.modules.ilo import common as ilo_common from ironic.drivers.modules.ilo import deploy as ilo_deploy from ironic.drivers.modules import iscsi_deploy from ironic.drivers import utils as driver_utils from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils if six.PY3: import io file = io.BytesIO INFO_DICT = db_utils.get_test_ilo_info() CONF = cfg.CONF class IloDeployPrivateMethodsTestCase(db_base.DbTestCase): def setUp(self): super(IloDeployPrivateMethodsTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="iscsi_ilo") self.node = obj_utils.create_test_node( self.context, driver='iscsi_ilo', driver_info=INFO_DICT) @mock.patch.object(ilo_common, 'set_secure_boot_mode', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_secure_boot_mode', spec_set=True, autospec=True) def test__disable_secure_boot_false(self, func_get_secure_boot_mode, func_set_secure_boot_mode): func_get_secure_boot_mode.return_value = False with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: returned_state = ilo_deploy._disable_secure_boot(task) func_get_secure_boot_mode.assert_called_once_with(task) self.assertFalse(func_set_secure_boot_mode.called) self.assertFalse(returned_state) @mock.patch.object(ilo_common, 'set_secure_boot_mode', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_secure_boot_mode', spec_set=True, autospec=True) def test__disable_secure_boot_true(self, func_get_secure_boot_mode, func_set_secure_boot_mode): func_get_secure_boot_mode.return_value = True with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: returned_state = ilo_deploy._disable_secure_boot(task) func_get_secure_boot_mode.assert_called_once_with(task) func_set_secure_boot_mode.assert_called_once_with(task, False) self.assertTrue(returned_state) @mock.patch.object(ilo_deploy.LOG, 'debug', spec_set=True, autospec=True) @mock.patch.object(ilo_deploy, 'exception', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_secure_boot_mode', spec_set=True, autospec=True) def test__disable_secure_boot_exception(self, func_get_secure_boot_mode, exception_mock, mock_log): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: exception_mock.IloOperationNotSupported = Exception func_get_secure_boot_mode.side_effect = Exception returned_state = ilo_deploy._disable_secure_boot(task) func_get_secure_boot_mode.assert_called_once_with(task) self.assertTrue(mock_log.called) self.assertFalse(returned_state) @mock.patch.object(ilo_common, 'update_boot_mode', spec_set=True, autospec=True) @mock.patch.object(ilo_deploy, '_disable_secure_boot', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_power_action', spec_set=True, autospec=True) def test__prepare_node_for_deploy(self, func_node_power_action, func_disable_secure_boot, func_update_boot_mode): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: 
func_disable_secure_boot.return_value = False ilo_deploy._prepare_node_for_deploy(task) func_node_power_action.assert_called_once_with(task, states.POWER_OFF) func_disable_secure_boot.assert_called_once_with(task) func_update_boot_mode.assert_called_once_with(task) bootmode = driver_utils.get_node_capability(task.node, "boot_mode") self.assertIsNone(bootmode) @mock.patch.object(ilo_common, 'update_boot_mode', spec_set=True, autospec=True) @mock.patch.object(ilo_deploy, '_disable_secure_boot', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_power_action', spec_set=True, autospec=True) def test__prepare_node_for_deploy_sec_boot_on(self, func_node_power_action, func_disable_secure_boot, func_update_boot_mode): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: func_disable_secure_boot.return_value = True ilo_deploy._prepare_node_for_deploy(task) func_node_power_action.assert_called_once_with(task, states.POWER_OFF) func_disable_secure_boot.assert_called_once_with(task) self.assertFalse(func_update_boot_mode.called) ret_boot_mode = task.node.instance_info['deploy_boot_mode'] self.assertEqual('uefi', ret_boot_mode) bootmode = driver_utils.get_node_capability(task.node, "boot_mode") self.assertIsNone(bootmode) @mock.patch.object(ilo_common, 'update_boot_mode', spec_set=True, autospec=True) @mock.patch.object(ilo_deploy, '_disable_secure_boot', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_power_action', spec_set=True, autospec=True) def test__prepare_node_for_deploy_inst_info(self, func_node_power_action, func_disable_secure_boot, func_update_boot_mode): instance_info = {'capabilities': '{"secure_boot": "true"}'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: func_disable_secure_boot.return_value = False task.node.instance_info = instance_info ilo_deploy._prepare_node_for_deploy(task) func_node_power_action.assert_called_once_with(task, states.POWER_OFF) func_disable_secure_boot.assert_called_once_with(task) func_update_boot_mode.assert_called_once_with(task) bootmode = driver_utils.get_node_capability(task.node, "boot_mode") self.assertIsNone(bootmode) deploy_boot_mode = task.node.instance_info.get('deploy_boot_mode') self.assertIsNone(deploy_boot_mode) @mock.patch.object(ilo_common, 'update_boot_mode', spec_set=True, autospec=True) @mock.patch.object(ilo_deploy, '_disable_secure_boot', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_power_action', spec_set=True, autospec=True) def test__prepare_node_for_deploy_sec_boot_on_inst_info( self, func_node_power_action, func_disable_secure_boot, func_update_boot_mode): instance_info = {'capabilities': '{"secure_boot": "true"}'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: func_disable_secure_boot.return_value = True task.node.instance_info = instance_info ilo_deploy._prepare_node_for_deploy(task) func_node_power_action.assert_called_once_with(task, states.POWER_OFF) func_disable_secure_boot.assert_called_once_with(task) self.assertFalse(func_update_boot_mode.called) bootmode = driver_utils.get_node_capability(task.node, "boot_mode") self.assertIsNone(bootmode) deploy_boot_mode = task.node.instance_info.get('deploy_boot_mode') self.assertIsNone(deploy_boot_mode) class IloVirtualMediaIscsiDeployTestCase(db_base.DbTestCase): def setUp(self): super(IloVirtualMediaIscsiDeployTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="iscsi_ilo") self.node = obj_utils.create_test_node( 
self.context, driver='iscsi_ilo', driver_info=INFO_DICT) @mock.patch.object(ilo_common, 'update_secure_boot_mode', spec_set=True, autospec=True) @mock.patch.object(iscsi_deploy.ISCSIDeploy, 'tear_down', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_power_action', spec_set=True, autospec=True) def test_tear_down(self, node_power_action_mock, iscsi_tear_down_mock, update_secure_boot_mode_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: iscsi_tear_down_mock.return_value = states.DELETED returned_state = task.driver.deploy.tear_down(task) node_power_action_mock.assert_called_once_with(task, states.POWER_OFF) update_secure_boot_mode_mock.assert_called_once_with(task, False) iscsi_tear_down_mock.assert_called_once_with(mock.ANY, task) self.assertEqual(states.DELETED, returned_state) @mock.patch.object(ilo_deploy.LOG, 'warning', spec_set=True, autospec=True) @mock.patch.object(ilo_deploy, 'exception', spec_set=True, autospec=True) @mock.patch.object(iscsi_deploy.ISCSIDeploy, 'tear_down', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_secure_boot_mode', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_power_action', spec_set=True, autospec=True) def test_tear_down_handle_exception(self, node_power_action_mock, update_secure_boot_mode_mock, iscsi_tear_down_mock, exception_mock, mock_log): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: iscsi_tear_down_mock.return_value = states.DELETED exception_mock.IloOperationNotSupported = Exception update_secure_boot_mode_mock.side_effect = Exception returned_state = task.driver.deploy.tear_down(task) node_power_action_mock.assert_called_once_with(task, states.POWER_OFF) update_secure_boot_mode_mock.assert_called_once_with(task, False) iscsi_tear_down_mock.assert_called_once_with(mock.ANY, task) self.assertTrue(mock_log.called) self.assertEqual(states.DELETED, returned_state) @mock.patch.object(iscsi_deploy.ISCSIDeploy, 'deploy', spec_set=True, autospec=True) def test_deploy(self, iscsi_deploy_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.deploy.deploy(task) iscsi_deploy_mock.assert_called_once_with(mock.ANY, task) @mock.patch.object(iscsi_deploy.ISCSIDeploy, 'prepare', spec_set=True, autospec=True) @mock.patch.object(ilo_deploy, '_prepare_node_for_deploy', spec_set=True, autospec=True) def test_prepare(self, func_prepare_node_for_deploy, iscsi_deploy_prepare_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.deploy.prepare(task) func_prepare_node_for_deploy.assert_called_once_with(task) iscsi_deploy_prepare_mock.assert_called_once_with(mock.ANY, task) @mock.patch.object(iscsi_deploy.ISCSIDeploy, 'prepare', spec_set=True, autospec=True) @mock.patch.object(ilo_deploy, '_prepare_node_for_deploy', spec_set=True, autospec=True) def test_prepare_active_node(self, func_prepare_node_for_deploy, iscsi_deploy_prepare_mock): self.node.provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.deploy.prepare(task) self.assertFalse(func_prepare_node_for_deploy.called) iscsi_deploy_prepare_mock.assert_called_once_with(mock.ANY, task) @mock.patch.object(iscsi_deploy.ISCSIDeploy, 'prepare_cleaning', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_power_action', spec_set=True, autospec=True) def test_prepare_cleaning(self, 
node_power_action_mock, iscsi_prep_clean_mock): iscsi_prep_clean_mock.return_value = states.CLEANWAIT with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ret = task.driver.deploy.prepare_cleaning(task) self.assertEqual(states.CLEANWAIT, ret) node_power_action_mock.assert_called_once_with(task, states.POWER_OFF) iscsi_prep_clean_mock.assert_called_once_with(mock.ANY, task) class IloVirtualMediaAgentDeployTestCase(db_base.DbTestCase): def setUp(self): super(IloVirtualMediaAgentDeployTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="agent_ilo") self.node = obj_utils.create_test_node( self.context, driver='agent_ilo', driver_info=INFO_DICT) @mock.patch.object(agent.AgentDeploy, 'tear_down', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_secure_boot_mode', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_power_action', spec_set=True, autospec=True) def test_tear_down(self, node_power_action_mock, update_secure_boot_mode_mock, agent_teardown_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: agent_teardown_mock.return_value = states.DELETED returned_state = task.driver.deploy.tear_down(task) node_power_action_mock.assert_called_once_with(task, states.POWER_OFF) update_secure_boot_mode_mock.assert_called_once_with(task, False) self.assertEqual(states.DELETED, returned_state) @mock.patch.object(agent.AgentDeploy, 'tear_down', spec_set=True, autospec=True) @mock.patch.object(ilo_deploy.LOG, 'warning', spec_set=True, autospec=True) @mock.patch.object(ilo_deploy, 'exception', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_secure_boot_mode', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_power_action', spec_set=True, autospec=True) def test_tear_down_handle_exception(self, node_power_action_mock, update_secure_boot_mode_mock, exception_mock, mock_log, agent_teardown_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: agent_teardown_mock.return_value = states.DELETED exception_mock.IloOperationNotSupported = Exception update_secure_boot_mode_mock.side_effect = Exception returned_state = task.driver.deploy.tear_down(task) node_power_action_mock.assert_called_once_with(task, states.POWER_OFF) update_secure_boot_mode_mock.assert_called_once_with(task, False) agent_teardown_mock.assert_called_once_with(mock.ANY, task) self.assertTrue(mock_log.called) self.assertEqual(states.DELETED, returned_state) @mock.patch.object(ilo_deploy, '_prepare_node_for_deploy', spec_set=True, autospec=True) @mock.patch.object(agent.AgentDeploy, 'prepare', spec_set=True, autospec=True) def test_prepare(self, agent_prepare_mock, func_prepare_node_for_deploy): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.deploy.prepare(task) func_prepare_node_for_deploy.assert_called_once_with(task) agent_prepare_mock.assert_called_once_with(mock.ANY, task) @mock.patch.object(agent.AgentDeploy, 'prepare', spec_set=True, autospec=True) @mock.patch.object(ilo_deploy, '_prepare_node_for_deploy', spec_set=True, autospec=True) def test_prepare_active_node(self, func_prepare_node_for_deploy, agent_prepare_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.provision_state = states.ACTIVE task.driver.deploy.prepare(task) self.assertFalse(func_prepare_node_for_deploy.called) agent_prepare_mock.assert_called_once_with(mock.ANY, task) @mock.patch.object(deploy_utils, 
'agent_get_clean_steps', spec_set=True, autospec=True) def test_get_clean_steps_with_conf_option(self, get_clean_step_mock): self.config(clean_priority_erase_devices=20, group='ilo') get_clean_step_mock.return_value = [{ 'step': 'erase_devices', 'priority': 10, 'interface': 'deploy', 'reboot_requested': False }] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.deploy.get_clean_steps(task) get_clean_step_mock.assert_called_once_with( task, interface='deploy', override_priorities={'erase_devices': 20}) @mock.patch.object(deploy_utils, 'agent_get_clean_steps', spec_set=True, autospec=True) def test_get_clean_steps_erase_devices_disable(self, get_clean_step_mock): self.config(clean_priority_erase_devices=0, group='ilo') get_clean_step_mock.return_value = [{ 'step': 'erase_devices', 'priority': 10, 'interface': 'deploy', 'reboot_requested': False }] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.deploy.get_clean_steps(task) get_clean_step_mock.assert_called_once_with( task, interface='deploy', override_priorities={'erase_devices': 0}) @mock.patch.object(deploy_utils, 'agent_get_clean_steps', spec_set=True, autospec=True) def test_get_clean_steps_without_conf_option(self, get_clean_step_mock): get_clean_step_mock.return_value = [{ 'step': 'erase_devices', 'priority': 10, 'interface': 'deploy', 'reboot_requested': False }] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.deploy.get_clean_steps(task) get_clean_step_mock.assert_called_once_with( task, interface='deploy', override_priorities={'erase_devices': None}) @mock.patch.object(agent.AgentDeploy, 'prepare_cleaning', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_power_action', spec_set=True, autospec=True) def test_prepare_cleaning(self, node_power_action_mock, agent_prep_clean_mock): agent_prep_clean_mock.return_value = states.CLEANWAIT with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ret = task.driver.deploy.prepare_cleaning(task) self.assertEqual(states.CLEANWAIT, ret) node_power_action_mock.assert_called_once_with(task, states.POWER_OFF) agent_prep_clean_mock.assert_called_once_with(mock.ANY, task) class IloPXEDeployTestCase(db_base.DbTestCase): def setUp(self): super(IloPXEDeployTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="pxe_ilo") self.node = obj_utils.create_test_node( self.context, driver='pxe_ilo', driver_info=INFO_DICT) @mock.patch.object(iscsi_deploy.ISCSIDeploy, 'validate', spec_set=True, autospec=True) def test_validate(self, pxe_validate_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.deploy.validate(task) pxe_validate_mock.assert_called_once_with(mock.ANY, task) @mock.patch.object(iscsi_deploy.ISCSIDeploy, 'prepare', spec_set=True, autospec=True) @mock.patch.object(ilo_deploy, '_prepare_node_for_deploy', spec_set=True, autospec=True) def test_prepare(self, prepare_node_mock, pxe_prepare_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties['capabilities'] = 'boot_mode:uefi' task.driver.deploy.prepare(task) prepare_node_mock.assert_called_once_with(task) pxe_prepare_mock.assert_called_once_with(mock.ANY, task) @mock.patch.object(iscsi_deploy.ISCSIDeploy, 'prepare', spec_set=True, autospec=True) @mock.patch.object(ilo_deploy, '_prepare_node_for_deploy', spec_set=True, autospec=True) def test_prepare_active_node(self, 
prepare_node_mock, pxe_prepare_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.provision_state = states.ACTIVE task.node.properties['capabilities'] = 'boot_mode:uefi' task.driver.deploy.prepare(task) self.assertFalse(prepare_node_mock.called) pxe_prepare_mock.assert_called_once_with(mock.ANY, task) @mock.patch.object(iscsi_deploy.ISCSIDeploy, 'prepare', spec_set=True, autospec=True) @mock.patch.object(ilo_deploy, '_prepare_node_for_deploy', spec_set=True, autospec=True) def test_prepare_uefi_whole_disk_image_fail(self, prepare_node_for_deploy_mock, pxe_prepare_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties['capabilities'] = 'boot_mode:uefi' task.node.driver_internal_info['is_whole_disk_image'] = True self.assertRaises(exception.InvalidParameterValue, task.driver.deploy.prepare, task) prepare_node_for_deploy_mock.assert_called_once_with(task) self.assertFalse(pxe_prepare_mock.called) @mock.patch.object(iscsi_deploy.ISCSIDeploy, 'deploy', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True, autospec=True) def test_deploy_boot_mode_exists(self, set_persistent_mock, pxe_deploy_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.deploy.deploy(task) set_persistent_mock.assert_called_with(task, boot_devices.PXE) pxe_deploy_mock.assert_called_once_with(mock.ANY, task) @mock.patch.object(iscsi_deploy.ISCSIDeploy, 'tear_down', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_secure_boot_mode', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', spec_set=True, autospec=True) def test_tear_down(self, node_power_action_mock, update_secure_boot_mode_mock, pxe_tear_down_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: pxe_tear_down_mock.return_value = states.DELETED returned_state = task.driver.deploy.tear_down(task) node_power_action_mock.assert_called_once_with(task, states.POWER_OFF) update_secure_boot_mode_mock.assert_called_once_with(task, False) pxe_tear_down_mock.assert_called_once_with(mock.ANY, task) self.assertEqual(states.DELETED, returned_state) @mock.patch.object(ilo_deploy.LOG, 'warning', spec_set=True, autospec=True) @mock.patch.object(iscsi_deploy.ISCSIDeploy, 'tear_down', spec_set=True, autospec=True) @mock.patch.object(ilo_deploy, 'exception', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_secure_boot_mode', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_power_action', spec_set=True, autospec=True) def test_tear_down_handle_exception(self, node_power_action_mock, update_secure_boot_mode_mock, exception_mock, pxe_tear_down_mock, mock_log): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: pxe_tear_down_mock.return_value = states.DELETED exception_mock.IloOperationNotSupported = Exception update_secure_boot_mode_mock.side_effect = Exception returned_state = task.driver.deploy.tear_down(task) update_secure_boot_mode_mock.assert_called_once_with(task, False) pxe_tear_down_mock.assert_called_once_with(mock.ANY, task) node_power_action_mock.assert_called_once_with(task, states.POWER_OFF) self.assertTrue(mock_log.called) self.assertEqual(states.DELETED, returned_state) @mock.patch.object(iscsi_deploy.ISCSIDeploy, 'prepare_cleaning', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_power_action', spec_set=True, autospec=True) 
def test_prepare_cleaning(self, node_power_action_mock,
                              iscsi_prep_clean_mock):
        iscsi_prep_clean_mock.return_value = states.CLEANWAIT
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ret = task.driver.deploy.prepare_cleaning(task)
            self.assertEqual(states.CLEANWAIT, ret)
            node_power_action_mock.assert_called_once_with(
                task, states.POWER_OFF)
            iscsi_prep_clean_mock.assert_called_once_with(mock.ANY, task)

ironic-5.1.0/ironic/tests/unit/drivers/modules/ilo/test_console.py

# Copyright 2015 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Test class for console methods used by iLO modules."""

import mock
from oslo_config import cfg
import six

from ironic.common import exception
from ironic.conductor import task_manager
from ironic.drivers.modules.ilo import common as ilo_common
from ironic.drivers.modules import ipmitool
from ironic.tests.unit.conductor import mgr_utils
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils

if six.PY3:
    import io
    file = io.BytesIO

INFO_DICT = db_utils.get_test_ilo_info()
CONF = cfg.CONF


class IloConsoleInterfaceTestCase(db_base.DbTestCase):

    def setUp(self):
        super(IloConsoleInterfaceTestCase, self).setUp()
        mgr_utils.mock_the_extension_manager(driver="iscsi_ilo")
        self.node = obj_utils.create_test_node(
            self.context, driver='iscsi_ilo', driver_info=INFO_DICT)

    @mock.patch.object(ipmitool.IPMIShellinaboxConsole, 'validate',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_ipmi_properties', spec_set=True,
                       autospec=True)
    def test_validate(self, update_ipmi_mock, ipmi_validate_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.driver_info['console_port'] = 60
            task.driver.console.validate(task)
            update_ipmi_mock.assert_called_once_with(task)
            ipmi_validate_mock.assert_called_once_with(mock.ANY, task)

    @mock.patch.object(ipmitool.IPMIShellinaboxConsole, 'validate',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_ipmi_properties', spec_set=True,
                       autospec=True)
    def test_validate_exc(self, update_ipmi_mock, ipmi_validate_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.MissingParameterValue,
                              task.driver.console.validate, task)
            self.assertEqual(0, update_ipmi_mock.call_count)
            self.assertEqual(0, ipmi_validate_mock.call_count)

ironic-5.1.0/ironic/tests/unit/drivers/modules/ilo/test_common.py

# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Test class for common methods used by iLO modules.""" import hashlib import os import shutil import tempfile from ironic_lib import utils as ironic_utils import mock from oslo_config import cfg from oslo_utils import importutils import six import six.moves.builtins as __builtin__ from ironic.common import boot_devices from ironic.common import exception from ironic.common import images from ironic.common import swift from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules import deploy_utils from ironic.drivers.modules.ilo import common as ilo_common from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils ilo_client = importutils.try_import('proliantutils.ilo.client') ilo_error = importutils.try_import('proliantutils.exception') if six.PY3: import io file = io.BytesIO CONF = cfg.CONF class IloValidateParametersTestCase(db_base.DbTestCase): def setUp(self): super(IloValidateParametersTestCase, self).setUp() self.node = obj_utils.create_test_node( self.context, driver='fake_ilo', driver_info=db_utils.get_test_ilo_info()) def test_parse_driver_info(self): info = ilo_common.parse_driver_info(self.node) self.assertIsNotNone(info.get('ilo_address')) self.assertIsNotNone(info.get('ilo_username')) self.assertIsNotNone(info.get('ilo_password')) self.assertIsNotNone(info.get('client_timeout')) self.assertIsNotNone(info.get('client_port')) def test_parse_driver_info_missing_address(self): del self.node.driver_info['ilo_address'] self.assertRaises(exception.MissingParameterValue, ilo_common.parse_driver_info, self.node) def test_parse_driver_info_missing_username(self): del self.node.driver_info['ilo_username'] self.assertRaises(exception.MissingParameterValue, ilo_common.parse_driver_info, self.node) def test_parse_driver_info_missing_password(self): del self.node.driver_info['ilo_password'] self.assertRaises(exception.MissingParameterValue, ilo_common.parse_driver_info, self.node) def test_parse_driver_info_invalid_timeout(self): self.node.driver_info['client_timeout'] = 'qwe' self.assertRaises(exception.InvalidParameterValue, ilo_common.parse_driver_info, self.node) def test_parse_driver_info_invalid_port(self): self.node.driver_info['client_port'] = 'qwe' self.assertRaises(exception.InvalidParameterValue, ilo_common.parse_driver_info, self.node) self.node.driver_info['client_port'] = '65536' self.assertRaises(exception.InvalidParameterValue, ilo_common.parse_driver_info, self.node) self.node.driver_info['console_port'] = 'invalid' self.assertRaises(exception.InvalidParameterValue, ilo_common.parse_driver_info, self.node) self.node.driver_info['console_port'] = '-1' self.assertRaises(exception.InvalidParameterValue, ilo_common.parse_driver_info, self.node) def test_parse_driver_info_missing_multiple_params(self): del self.node.driver_info['ilo_password'] del self.node.driver_info['ilo_address'] try: ilo_common.parse_driver_info(self.node) self.fail("parse_driver_info did not throw 
exception.") except exception.MissingParameterValue as e: self.assertIn('ilo_password', str(e)) self.assertIn('ilo_address', str(e)) def test_parse_driver_info_invalid_multiple_params(self): self.node.driver_info['client_timeout'] = 'qwe' try: ilo_common.parse_driver_info(self.node) self.fail("parse_driver_info did not throw exception.") except exception.InvalidParameterValue as e: self.assertIn('client_timeout', str(e)) class IloCommonMethodsTestCase(db_base.DbTestCase): def setUp(self): super(IloCommonMethodsTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="fake_ilo") self.info = db_utils.get_test_ilo_info() self.node = obj_utils.create_test_node( self.context, driver='fake_ilo', driver_info=self.info) @mock.patch.object(ilo_client, 'IloClient', spec_set=True, autospec=True) def test_get_ilo_object(self, ilo_client_mock): self.info['client_timeout'] = 60 self.info['client_port'] = 443 ilo_client_mock.return_value = 'ilo_object' returned_ilo_object = ilo_common.get_ilo_object(self.node) ilo_client_mock.assert_called_with( self.info['ilo_address'], self.info['ilo_username'], self.info['ilo_password'], self.info['client_timeout'], self.info['client_port']) self.assertEqual('ilo_object', returned_ilo_object) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_get_ilo_license(self, get_ilo_object_mock): ilo_advanced_license = {'LICENSE_TYPE': 'iLO 3 Advanced'} ilo_standard_license = {'LICENSE_TYPE': 'iLO 3'} ilo_mock_object = get_ilo_object_mock.return_value ilo_mock_object.get_all_licenses.return_value = ilo_advanced_license license = ilo_common.get_ilo_license(self.node) self.assertEqual(ilo_common.ADVANCED_LICENSE, license) ilo_mock_object.get_all_licenses.return_value = ilo_standard_license license = ilo_common.get_ilo_license(self.node) self.assertEqual(ilo_common.STANDARD_LICENSE, license) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_get_ilo_license_fail(self, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value exc = ilo_error.IloError('error') ilo_mock_object.get_all_licenses.side_effect = exc self.assertRaises(exception.IloOperationError, ilo_common.get_ilo_license, self.node) def test_update_ipmi_properties(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ipmi_info = { "ipmi_address": "1.2.3.4", "ipmi_username": "admin", "ipmi_password": "fake", "ipmi_terminal_port": 60 } self.info['console_port'] = 60 task.node.driver_info = self.info ilo_common.update_ipmi_properties(task) actual_info = task.node.driver_info expected_info = dict(self.info, **ipmi_info) self.assertEqual(expected_info, actual_info) def test__get_floppy_image_name(self): image_name_expected = 'image-' + self.node.uuid image_name_actual = ilo_common._get_floppy_image_name(self.node) self.assertEqual(image_name_expected, image_name_actual) @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True) @mock.patch.object(images, 'create_vfat_image', spec_set=True, autospec=True) @mock.patch.object(tempfile, 'NamedTemporaryFile', spec_set=True, autospec=True) def test__prepare_floppy_image(self, tempfile_mock, fatimage_mock, swift_api_mock): mock_image_file_handle = mock.MagicMock(spec=file) mock_image_file_obj = mock.MagicMock(spec=file) mock_image_file_obj.name = 'image-tmp-file' mock_image_file_handle.__enter__.return_value = mock_image_file_obj tempfile_mock.return_value = mock_image_file_handle swift_obj_mock = swift_api_mock.return_value 
self.config(swift_ilo_container='ilo_cont', group='ilo') self.config(swift_object_expiry_timeout=1, group='ilo') deploy_args = {'arg1': 'val1', 'arg2': 'val2'} swift_obj_mock.get_temp_url.return_value = 'temp-url' timeout = CONF.ilo.swift_object_expiry_timeout object_headers = {'X-Delete-After': timeout} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: temp_url = ilo_common._prepare_floppy_image(task, deploy_args) node_uuid = task.node.uuid object_name = 'image-' + node_uuid fatimage_mock.assert_called_once_with('image-tmp-file', parameters=deploy_args) swift_obj_mock.create_object.assert_called_once_with( 'ilo_cont', object_name, 'image-tmp-file', object_headers=object_headers) swift_obj_mock.get_temp_url.assert_called_once_with( 'ilo_cont', object_name, timeout) self.assertEqual('temp-url', temp_url) @mock.patch.object(ilo_common, 'copy_image_to_web_server', spec_set=True, autospec=True) @mock.patch.object(images, 'create_vfat_image', spec_set=True, autospec=True) @mock.patch.object(tempfile, 'NamedTemporaryFile', spec_set=True, autospec=True) def test__prepare_floppy_image_use_webserver(self, tempfile_mock, fatimage_mock, copy_mock): mock_image_file_handle = mock.MagicMock(spec=file) mock_image_file_obj = mock.MagicMock(spec=file) mock_image_file_obj.name = 'image-tmp-file' mock_image_file_handle.__enter__.return_value = mock_image_file_obj tempfile_mock.return_value = mock_image_file_handle self.config(use_web_server_for_images=True, group='ilo') deploy_args = {'arg1': 'val1', 'arg2': 'val2'} CONF.deploy.http_url = "http://abc.com/httpboot" CONF.deploy.http_root = "/httpboot" with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: node_uuid = task.node.uuid object_name = 'image-' + node_uuid http_url = CONF.deploy.http_url + '/' + object_name copy_mock.return_value = "http://abc.com/httpboot/" + object_name temp_url = ilo_common._prepare_floppy_image(task, deploy_args) fatimage_mock.assert_called_once_with('image-tmp-file', parameters=deploy_args) copy_mock.assert_called_once_with('image-tmp-file', object_name) self.assertEqual(http_url, temp_url) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_attach_vmedia(self, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value insert_media_mock = ilo_mock_object.insert_virtual_media set_status_mock = ilo_mock_object.set_vm_status ilo_common.attach_vmedia(self.node, 'FLOPPY', 'url') insert_media_mock.assert_called_once_with('url', device='FLOPPY') set_status_mock.assert_called_once_with( device='FLOPPY', boot_option='CONNECT', write_protect='YES') @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_attach_vmedia_fails(self, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value set_status_mock = ilo_mock_object.set_vm_status exc = ilo_error.IloError('error') set_status_mock.side_effect = exc self.assertRaises(exception.IloOperationError, ilo_common.attach_vmedia, self.node, 'FLOPPY', 'url') @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_set_boot_mode(self, get_ilo_object_mock): ilo_object_mock = get_ilo_object_mock.return_value get_pending_boot_mode_mock = ilo_object_mock.get_pending_boot_mode set_pending_boot_mode_mock = ilo_object_mock.set_pending_boot_mode get_pending_boot_mode_mock.return_value = 'LEGACY' ilo_common.set_boot_mode(self.node, 'uefi') get_ilo_object_mock.assert_called_once_with(self.node) 
get_pending_boot_mode_mock.assert_called_once_with() set_pending_boot_mode_mock.assert_called_once_with('UEFI') @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_set_boot_mode_without_set_pending_boot_mode(self, get_ilo_object_mock): ilo_object_mock = get_ilo_object_mock.return_value get_pending_boot_mode_mock = ilo_object_mock.get_pending_boot_mode get_pending_boot_mode_mock.return_value = 'LEGACY' ilo_common.set_boot_mode(self.node, 'bios') get_ilo_object_mock.assert_called_once_with(self.node) get_pending_boot_mode_mock.assert_called_once_with() self.assertFalse(ilo_object_mock.set_pending_boot_mode.called) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_set_boot_mode_with_IloOperationError(self, get_ilo_object_mock): ilo_object_mock = get_ilo_object_mock.return_value get_pending_boot_mode_mock = ilo_object_mock.get_pending_boot_mode get_pending_boot_mode_mock.return_value = 'UEFI' set_pending_boot_mode_mock = ilo_object_mock.set_pending_boot_mode exc = ilo_error.IloError('error') set_pending_boot_mode_mock.side_effect = exc self.assertRaises(exception.IloOperationError, ilo_common.set_boot_mode, self.node, 'bios') get_ilo_object_mock.assert_called_once_with(self.node) get_pending_boot_mode_mock.assert_called_once_with() @mock.patch.object(ilo_common, 'set_boot_mode', spec_set=True, autospec=True) def test_update_boot_mode_instance_info_exists(self, set_boot_mode_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.instance_info['deploy_boot_mode'] = 'bios' ilo_common.update_boot_mode(task) set_boot_mode_mock.assert_called_once_with(task.node, 'bios') @mock.patch.object(ilo_common, 'set_boot_mode', spec_set=True, autospec=True) def test_update_boot_mode_capabilities_exist(self, set_boot_mode_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties['capabilities'] = 'boot_mode:bios' ilo_common.update_boot_mode(task) set_boot_mode_mock.assert_called_once_with(task.node, 'bios') @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_update_boot_mode(self, get_ilo_object_mock): ilo_mock_obj = get_ilo_object_mock.return_value ilo_mock_obj.get_pending_boot_mode.return_value = 'LEGACY' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ilo_common.update_boot_mode(task) get_ilo_object_mock.assert_called_once_with(task.node) ilo_mock_obj.get_pending_boot_mode.assert_called_once_with() self.assertEqual('bios', task.node.instance_info['deploy_boot_mode']) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_update_boot_mode_unknown(self, get_ilo_object_mock): ilo_mock_obj = get_ilo_object_mock.return_value ilo_mock_obj.get_pending_boot_mode.return_value = 'UNKNOWN' set_pending_boot_mode_mock = ilo_mock_obj.set_pending_boot_mode with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ilo_common.update_boot_mode(task) get_ilo_object_mock.assert_called_once_with(task.node) ilo_mock_obj.get_pending_boot_mode.assert_called_once_with() set_pending_boot_mode_mock.assert_called_once_with('UEFI') self.assertEqual('uefi', task.node.instance_info['deploy_boot_mode']) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_update_boot_mode_unknown_except(self, get_ilo_object_mock): ilo_mock_obj = get_ilo_object_mock.return_value ilo_mock_obj.get_pending_boot_mode.return_value = 'UNKNOWN' 
set_pending_boot_mode_mock = ilo_mock_obj.set_pending_boot_mode exc = ilo_error.IloError('error') set_pending_boot_mode_mock.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.IloOperationError, ilo_common.update_boot_mode, task) get_ilo_object_mock.assert_called_once_with(task.node) ilo_mock_obj.get_pending_boot_mode.assert_called_once_with() @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_update_boot_mode_legacy(self, get_ilo_object_mock): ilo_mock_obj = get_ilo_object_mock.return_value exc = ilo_error.IloCommandNotSupportedError('error') ilo_mock_obj.get_pending_boot_mode.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ilo_common.update_boot_mode(task) get_ilo_object_mock.assert_called_once_with(task.node) ilo_mock_obj.get_pending_boot_mode.assert_called_once_with() self.assertEqual('bios', task.node.instance_info['deploy_boot_mode']) @mock.patch.object(ilo_common, 'set_boot_mode', spec_set=True, autospec=True) def test_update_boot_mode_prop_boot_mode_exist(self, set_boot_mode_mock): properties = {'capabilities': 'boot_mode:uefi'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties = properties ilo_common.update_boot_mode(task) set_boot_mode_mock.assert_called_once_with(task.node, 'uefi') @mock.patch.object(images, 'get_temp_url_for_glance_image', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'attach_vmedia', spec_set=True, autospec=True) @mock.patch.object(ilo_common, '_prepare_floppy_image', spec_set=True, autospec=True) def test_setup_vmedia_for_boot_with_parameters( self, prepare_image_mock, attach_vmedia_mock, temp_url_mock): parameters = {'a': 'b'} boot_iso = '733d1c44-a2ea-414b-aca7-69decf20d810' prepare_image_mock.return_value = 'floppy_url' temp_url_mock.return_value = 'image_url' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ilo_common.setup_vmedia_for_boot(task, boot_iso, parameters) prepare_image_mock.assert_called_once_with(task, parameters) attach_vmedia_mock.assert_any_call(task.node, 'FLOPPY', 'floppy_url') temp_url_mock.assert_called_once_with( task.context, '733d1c44-a2ea-414b-aca7-69decf20d810') attach_vmedia_mock.assert_any_call(task.node, 'CDROM', 'image_url') @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'attach_vmedia', spec_set=True, autospec=True) def test_setup_vmedia_for_boot_with_swift(self, attach_vmedia_mock, swift_api_mock): swift_obj_mock = swift_api_mock.return_value boot_iso = 'swift:object-name' swift_obj_mock.get_temp_url.return_value = 'image_url' CONF.keystone_authtoken.auth_uri = 'http://authurl' CONF.ilo.swift_ilo_container = 'ilo_cont' CONF.ilo.swift_object_expiry_timeout = 1 with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ilo_common.setup_vmedia_for_boot(task, boot_iso) swift_obj_mock.get_temp_url.assert_called_once_with( 'ilo_cont', 'object-name', 1) attach_vmedia_mock.assert_called_once_with( task.node, 'CDROM', 'image_url') @mock.patch.object(ilo_common, 'attach_vmedia', spec_set=True, autospec=True) def test_setup_vmedia_for_boot_with_url(self, attach_vmedia_mock): boot_iso = 'http://abc.com/img.iso' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ilo_common.setup_vmedia_for_boot(task, boot_iso) attach_vmedia_mock.assert_called_once_with(task.node, 'CDROM', boot_iso) 
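    # Editorial note, not part of the original module: the three
    # setup_vmedia_for_boot tests above (with_parameters, with_swift,
    # with_url) pin down a three-way dispatch on the boot_iso reference:
    # a 'swift:<object>' reference yields a Swift temp URL, a plain
    # HTTP(S) URL is attached as-is, and anything else is treated as a
    # Glance image UUID. Ignoring the FLOPPY/parameters path, a minimal
    # standalone sketch of that dispatch -- with hypothetical helper
    # callables, not the real ilo_common internals -- might look like:
    #
    #     def resolve_boot_iso_url(boot_iso, glance_temp_url,
    #                              swift_temp_url):
    #         """Return a URL that iLO can attach as virtual media."""
    #         if boot_iso.startswith('swift:'):
    #             # 'swift:object-name' -> temp URL in the ilo container
    #             return swift_temp_url(boot_iso[len('swift:'):])
    #         if boot_iso.startswith(('http://', 'https://')):
    #             return boot_iso  # already a URL; attach directly
    #         return glance_temp_url(boot_iso)  # assume a Glance UUID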
@mock.patch.object(ilo_common, 'eject_vmedia_devices', spec_set=True, autospec=True) @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True) @mock.patch.object(ilo_common, '_get_floppy_image_name', spec_set=True, autospec=True) def test_cleanup_vmedia_boot(self, get_name_mock, swift_api_mock, eject_mock): swift_obj_mock = swift_api_mock.return_value CONF.ilo.swift_ilo_container = 'ilo_cont' get_name_mock.return_value = 'image-node-uuid' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ilo_common.cleanup_vmedia_boot(task) swift_obj_mock.delete_object.assert_called_once_with( 'ilo_cont', 'image-node-uuid') eject_mock.assert_called_once_with(task) @mock.patch.object(ilo_common.LOG, 'exception', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'eject_vmedia_devices', spec_set=True, autospec=True) @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True) @mock.patch.object(ilo_common, '_get_floppy_image_name', spec_set=True, autospec=True) def test_cleanup_vmedia_boot_exc(self, get_name_mock, swift_api_mock, eject_mock, log_mock): exc = exception.SwiftOperationError('error') swift_obj_mock = swift_api_mock.return_value swift_obj_mock.delete_object.side_effect = exc CONF.ilo.swift_ilo_container = 'ilo_cont' get_name_mock.return_value = 'image-node-uuid' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ilo_common.cleanup_vmedia_boot(task) swift_obj_mock.delete_object.assert_called_once_with( 'ilo_cont', 'image-node-uuid') self.assertTrue(log_mock.called) eject_mock.assert_called_once_with(task) @mock.patch.object(ilo_common.LOG, 'warning', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'eject_vmedia_devices', spec_set=True, autospec=True) @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True) @mock.patch.object(ilo_common, '_get_floppy_image_name', spec_set=True, autospec=True) def test_cleanup_vmedia_boot_exc_resource_not_found(self, get_name_mock, swift_api_mock, eject_mock, log_mock): exc = exception.SwiftObjectNotFoundError('error') swift_obj_mock = swift_api_mock.return_value swift_obj_mock.delete_object.side_effect = exc CONF.ilo.swift_ilo_container = 'ilo_cont' get_name_mock.return_value = 'image-node-uuid' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ilo_common.cleanup_vmedia_boot(task) swift_obj_mock.delete_object.assert_called_once_with( 'ilo_cont', 'image-node-uuid') self.assertTrue(log_mock.called) eject_mock.assert_called_once_with(task) @mock.patch.object(ilo_common, 'eject_vmedia_devices', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'destroy_floppy_image_from_web_server', spec_set=True, autospec=True) def test_cleanup_vmedia_boot_for_webserver(self, destroy_image_mock, eject_mock): CONF.ilo.use_web_server_for_images = True with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ilo_common.cleanup_vmedia_boot(task) destroy_image_mock.assert_called_once_with(task.node) eject_mock.assert_called_once_with(task) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_eject_vmedia_devices(self, get_ilo_object_mock): ilo_object_mock = mock.MagicMock(spec=['eject_virtual_media']) get_ilo_object_mock.return_value = ilo_object_mock with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ilo_common.eject_vmedia_devices(task) ilo_object_mock.eject_virtual_media.assert_has_calls( [mock.call('FLOPPY'), mock.call('CDROM')]) 
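    # Editorial note, not part of the original module: the
    # test_eject_vmedia_devices case above, together with the _raises
    # variant just below, fixes the helper's contract: both device types
    # are ejected in order, and an IloError from the hardware surfaces as
    # IloOperationError, so the first failing device aborts the loop
    # (hence the assert that only 'FLOPPY' was attempted). Assuming only
    # that contract, the helper is essentially:
    #
    #     def eject_vmedia_devices(task):
    #         ilo_object = ilo_common.get_ilo_object(task.node)
    #         for device in ('FLOPPY', 'CDROM'):
    #             try:
    #                 ilo_object.eject_virtual_media(device)
    #             except ilo_error.IloError as exc:
    #                 raise exception.IloOperationError(
    #                     operation='eject_vmedia_devices', error=exc)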
@mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_eject_vmedia_devices_raises( self, get_ilo_object_mock): ilo_object_mock = mock.MagicMock(spec=['eject_virtual_media']) get_ilo_object_mock.return_value = ilo_object_mock exc = ilo_error.IloError('error') ilo_object_mock.eject_virtual_media.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.IloOperationError, ilo_common.eject_vmedia_devices, task) ilo_object_mock.eject_virtual_media.assert_called_once_with( 'FLOPPY') @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_get_secure_boot_mode(self, get_ilo_object_mock): ilo_object_mock = get_ilo_object_mock.return_value ilo_object_mock.get_current_boot_mode.return_value = 'UEFI' ilo_object_mock.get_secure_boot_mode.return_value = True with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ret = ilo_common.get_secure_boot_mode(task) ilo_object_mock.get_current_boot_mode.assert_called_once_with() ilo_object_mock.get_secure_boot_mode.assert_called_once_with() self.assertTrue(ret) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_get_secure_boot_mode_bios(self, get_ilo_object_mock): ilo_object_mock = get_ilo_object_mock.return_value ilo_object_mock.get_current_boot_mode.return_value = 'BIOS' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ret = ilo_common.get_secure_boot_mode(task) ilo_object_mock.get_current_boot_mode.assert_called_once_with() self.assertFalse(ilo_object_mock.get_secure_boot_mode.called) self.assertFalse(ret) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_get_secure_boot_mode_fail(self, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value exc = ilo_error.IloError('error') ilo_mock_object.get_current_boot_mode.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.IloOperationError, ilo_common.get_secure_boot_mode, task) ilo_mock_object.get_current_boot_mode.assert_called_once_with() @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_get_secure_boot_mode_not_supported(self, ilo_object_mock): ilo_mock_object = ilo_object_mock.return_value exc = ilo_error.IloCommandNotSupportedError('error') ilo_mock_object.get_current_boot_mode.return_value = 'UEFI' ilo_mock_object.get_secure_boot_mode.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.IloOperationNotSupported, ilo_common.get_secure_boot_mode, task) ilo_mock_object.get_current_boot_mode.assert_called_once_with() ilo_mock_object.get_secure_boot_mode.assert_called_once_with() @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_set_secure_boot_mode(self, get_ilo_object_mock): ilo_object_mock = get_ilo_object_mock.return_value with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ilo_common.set_secure_boot_mode(task, True) ilo_object_mock.set_secure_boot_mode.assert_called_once_with(True) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_set_secure_boot_mode_fail(self, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value exc = ilo_error.IloError('error') ilo_mock_object.set_secure_boot_mode.side_effect = exc with 
task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.IloOperationError, ilo_common.set_secure_boot_mode, task, False) ilo_mock_object.set_secure_boot_mode.assert_called_once_with(False) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_set_secure_boot_mode_not_supported(self, ilo_object_mock): ilo_mock_object = ilo_object_mock.return_value exc = ilo_error.IloCommandNotSupportedError('error') ilo_mock_object.set_secure_boot_mode.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.IloOperationNotSupported, ilo_common.set_secure_boot_mode, task, False) ilo_mock_object.set_secure_boot_mode.assert_called_once_with(False) @mock.patch.object(os, 'chmod', spec_set=True, autospec=True) @mock.patch.object(shutil, 'copyfile', spec_set=True, autospec=True) def test_copy_image_to_web_server(self, copy_mock, chmod_mock): CONF.deploy.http_url = "http://x.y.z.a/webserver/" CONF.deploy.http_root = "/webserver" expected_url = "http://x.y.z.a/webserver/image-UUID" source = 'tmp_image_file' destination = "image-UUID" image_path = "/webserver/image-UUID" actual_url = ilo_common.copy_image_to_web_server(source, destination) self.assertEqual(expected_url, actual_url) copy_mock.assert_called_once_with(source, image_path) chmod_mock.assert_called_once_with(image_path, 0o644) @mock.patch.object(os, 'chmod', spec_set=True, autospec=True) @mock.patch.object(shutil, 'copyfile', spec_set=True, autospec=True) def test_copy_image_to_web_server_fails(self, copy_mock, chmod_mock): CONF.deploy.http_url = "http://x.y.z.a/webserver/" CONF.deploy.http_root = "/webserver" source = 'tmp_image_file' destination = "image-UUID" image_path = "/webserver/image-UUID" exc = exception.ImageUploadFailed('reason') copy_mock.side_effect = exc self.assertRaises(exception.ImageUploadFailed, ilo_common.copy_image_to_web_server, source, destination) copy_mock.assert_called_once_with(source, image_path) self.assertFalse(chmod_mock.called) @mock.patch.object(ilo_common, 'ironic_utils', autospec=True) def test_remove_image_from_web_server(self, utils_mock): # | GIVEN | CONF.deploy.http_url = "http://x.y.z.a/webserver/" CONF.deploy.http_root = "/webserver" object_name = 'tmp_image_file' # | WHEN | ilo_common.remove_image_from_web_server(object_name) # | THEN | (utils_mock.unlink_without_raise. 
assert_called_once_with("/webserver/tmp_image_file")) @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'LOG') def test_copy_image_to_swift(self, LOG_mock, swift_api_mock): # | GIVEN | self.config(swift_ilo_container='ilo_container', group='ilo') self.config(swift_object_expiry_timeout=1, group='ilo') container = CONF.ilo.swift_ilo_container timeout = CONF.ilo.swift_object_expiry_timeout swift_obj_mock = swift_api_mock.return_value destination_object_name = 'destination_object_name' source_file_path = 'tmp_image_file' object_headers = {'X-Delete-After': timeout} # | WHEN | ilo_common.copy_image_to_swift(source_file_path, destination_object_name) # | THEN | swift_obj_mock.create_object.assert_called_once_with( container, destination_object_name, source_file_path, object_headers=object_headers) swift_obj_mock.get_temp_url.assert_called_once_with( container, destination_object_name, timeout) @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True) def test_copy_image_to_swift_throws_error_if_swift_operation_fails( self, swift_api_mock): # | GIVEN | self.config(swift_ilo_container='ilo_container', group='ilo') self.config(swift_object_expiry_timeout=1, group='ilo') swift_obj_mock = swift_api_mock.return_value destination_object_name = 'destination_object_name' source_file_path = 'tmp_image_file' swift_obj_mock.create_object.side_effect = ( exception.SwiftOperationError(operation='create_object', error='failed')) # | WHEN | & | THEN | self.assertRaises(exception.SwiftOperationError, ilo_common.copy_image_to_swift, source_file_path, destination_object_name) @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True) def test_remove_image_from_swift(self, swift_api_mock): # | GIVEN | self.config(swift_ilo_container='ilo_container', group='ilo') container = CONF.ilo.swift_ilo_container swift_obj_mock = swift_api_mock.return_value object_name = 'object_name' # | WHEN | ilo_common.remove_image_from_swift(object_name) # | THEN | swift_obj_mock.delete_object.assert_called_once_with( container, object_name) @mock.patch.object(ilo_common, 'LOG') @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True) def test_remove_image_from_swift_suppresses_notfound_exc( self, swift_api_mock, LOG_mock): # | GIVEN | self.config(swift_ilo_container='ilo_container', group='ilo') container = CONF.ilo.swift_ilo_container swift_obj_mock = swift_api_mock.return_value object_name = 'object_name' raised_exc = exception.SwiftObjectNotFoundError( operation='delete_object', object=object_name, container=container) swift_obj_mock.delete_object.side_effect = raised_exc # | WHEN | ilo_common.remove_image_from_swift(object_name) # | THEN | LOG_mock.warning.assert_called_once_with( mock.ANY, {'associated_with_msg': "", 'err': raised_exc}) @mock.patch.object(ilo_common, 'LOG') @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True) def test_remove_image_from_swift_suppresses_operror_exc( self, swift_api_mock, LOG_mock): # | GIVEN | self.config(swift_ilo_container='ilo_container', group='ilo') container = CONF.ilo.swift_ilo_container swift_obj_mock = swift_api_mock.return_value object_name = 'object_name' raised_exc = exception.SwiftOperationError(operation='delete_object', error='failed') swift_obj_mock.delete_object.side_effect = raised_exc # | WHEN | ilo_common.remove_image_from_swift(object_name, 'alice_in_wonderland') # | THEN | LOG_mock.exception.assert_called_once_with( mock.ANY, {'object_name': object_name, 'container': container, 
'associated_with_msg': ("associated with " "alice_in_wonderland"), 'err': raised_exc}) @mock.patch.object(ironic_utils, 'unlink_without_raise', spec_set=True, autospec=True) @mock.patch.object(ilo_common, '_get_floppy_image_name', spec_set=True, autospec=True) def test_destroy_floppy_image_from_web_server(self, get_floppy_name_mock, utils_mock): get_floppy_name_mock.return_value = 'image-uuid' CONF.deploy.http_root = "/webserver/" with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ilo_common.destroy_floppy_image_from_web_server(task.node) get_floppy_name_mock.assert_called_once_with(task.node) utils_mock.assert_called_once_with('/webserver/image-uuid') @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'setup_vmedia_for_boot', spec_set=True, autospec=True) def test_setup_vmedia(self, func_setup_vmedia_for_boot, func_set_boot_device): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: parameters = {'a': 'b'} iso = '733d1c44-a2ea-414b-aca7-69decf20d810' ilo_common.setup_vmedia(task, iso, parameters) func_setup_vmedia_for_boot.assert_called_once_with(task, iso, parameters) func_set_boot_device.assert_called_once_with(task, boot_devices.CDROM) @mock.patch.object(deploy_utils, 'is_secure_boot_requested', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'set_secure_boot_mode', spec_set=True, autospec=True) def test_update_secure_boot_mode_passed_true(self, func_set_secure_boot_mode, func_is_secure_boot_req): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: func_is_secure_boot_req.return_value = True ilo_common.update_secure_boot_mode(task, True) func_set_secure_boot_mode.assert_called_once_with(task, True) @mock.patch.object(deploy_utils, 'is_secure_boot_requested', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'set_secure_boot_mode', spec_set=True, autospec=True) def test_update_secure_boot_mode_passed_false(self, func_set_secure_boot_mode, func_is_secure_boot_req): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: func_is_secure_boot_req.return_value = False ilo_common.update_secure_boot_mode(task, False) self.assertFalse(func_set_secure_boot_mode.called) @mock.patch.object(ironic_utils, 'unlink_without_raise', spec_set=True, autospec=True) def test_remove_single_or_list_of_files_with_file_list(self, unlink_mock): # | GIVEN | file_list = ['/any_path1/any_file1', '/any_path2/any_file2', '/any_path3/any_file3'] # | WHEN | ilo_common.remove_single_or_list_of_files(file_list) # | THEN | calls = [mock.call('/any_path1/any_file1'), mock.call('/any_path2/any_file2'), mock.call('/any_path3/any_file3')] unlink_mock.assert_has_calls(calls) @mock.patch.object(ironic_utils, 'unlink_without_raise', spec_set=True, autospec=True) def test_remove_single_or_list_of_files_with_file_str(self, unlink_mock): # | GIVEN | file_path = '/any_path1/any_file' # | WHEN | ilo_common.remove_single_or_list_of_files(file_path) # | THEN | unlink_mock.assert_called_once_with('/any_path1/any_file') @mock.patch.object(__builtin__, 'open', autospec=True) def test_verify_image_checksum(self, open_mock): # | GIVEN | data = b'Yankee Doodle went to town riding on a pony;' file_like_object = six.BytesIO(data) open_mock().__enter__.return_value = file_like_object actual_hash = hashlib.md5(data).hexdigest() # | WHEN | ilo_common.verify_image_checksum(file_like_object, actual_hash) # | THEN | # no any exception thrown def 
test_verify_image_checksum_throws_for_nonexistent_file(self):
        # | GIVEN |
        invalid_file_path = '/some/invalid/file/path'
        # | WHEN | & | THEN |
        self.assertRaises(exception.ImageRefValidationFailed,
                          ilo_common.verify_image_checksum,
                          invalid_file_path, 'hash_xxx')

    @mock.patch.object(__builtin__, 'open', autospec=True)
    def test_verify_image_checksum_throws_for_failed_validation(
            self, open_mock):
        # | GIVEN |
        data = b'Yankee Doodle went to town riding on a pony;'
        file_like_object = six.BytesIO(data)
        open_mock().__enter__.return_value = file_like_object
        invalid_hash = 'invalid_hash_value'
        # | WHEN | & | THEN |
        self.assertRaises(exception.ImageRefValidationFailed,
                          ilo_common.verify_image_checksum,
                          file_like_object, invalid_hash)

ironic-5.1.0/ironic/tests/unit/drivers/modules/ilo/test_inspect.py

# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Test class for the inspection interface used by iLO modules."""

import mock
from oslo_config import cfg
import six

from ironic.common import exception
from ironic.common import states
from ironic.common import utils
from ironic.conductor import task_manager
from ironic.conductor import utils as conductor_utils
from ironic.db import api as dbapi
from ironic.drivers.modules.ilo import common as ilo_common
from ironic.drivers.modules.ilo import inspect as ilo_inspect
from ironic.drivers.modules.ilo import power as ilo_power
from ironic.tests.unit.conductor import mgr_utils
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils

INFO_DICT = db_utils.get_test_ilo_info()
CONF = cfg.CONF


class IloInspectTestCase(db_base.DbTestCase):

    def setUp(self):
        super(IloInspectTestCase, self).setUp()
        mgr_utils.mock_the_extension_manager(driver="fake_ilo")
        self.node = obj_utils.create_test_node(
            self.context, driver='fake_ilo', driver_info=INFO_DICT)

    def test_get_properties(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            properties = ilo_common.REQUIRED_PROPERTIES.copy()
            self.assertEqual(properties,
                             task.driver.inspect.get_properties())

    @mock.patch.object(ilo_common, 'parse_driver_info', spec_set=True,
                       autospec=True)
    def test_validate(self, driver_info_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.inspect.validate(task)
            driver_info_mock.assert_called_once_with(task.node)

    @mock.patch.object(ilo_inspect, '_get_capabilities', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_inspect, '_create_ports_if_not_exist',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_inspect, '_get_essential_properties',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_power.IloPower, 'get_power_state', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_inspect_essential_ok(self, get_ilo_object_mock, power_mock,
get_essential_mock, create_port_mock, get_capabilities_mock): ilo_object_mock = get_ilo_object_mock.return_value properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 'cpu_arch': 'x86_64'} macs = {'Port 1': 'aa:aa:aa:aa:aa:aa', 'Port 2': 'bb:bb:bb:bb:bb:bb'} capabilities = '' result = {'properties': properties, 'macs': macs} get_essential_mock.return_value = result get_capabilities_mock.return_value = capabilities power_mock.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.inspect.inspect_hardware(task) self.assertEqual(properties, task.node.properties) power_mock.assert_called_once_with(mock.ANY, task) get_essential_mock.assert_called_once_with(task.node, ilo_object_mock) get_capabilities_mock.assert_called_once_with(task.node, ilo_object_mock) create_port_mock.assert_called_once_with(task.node, macs) @mock.patch.object(ilo_inspect, '_get_capabilities', spec_set=True, autospec=True) @mock.patch.object(ilo_inspect, '_create_ports_if_not_exist', spec_set=True, autospec=True) @mock.patch.object(ilo_inspect, '_get_essential_properties', spec_set=True, autospec=True) @mock.patch.object(conductor_utils, 'node_power_action', spec_set=True, autospec=True) @mock.patch.object(ilo_power.IloPower, 'get_power_state', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_inspect_essential_ok_power_off(self, get_ilo_object_mock, power_mock, set_power_mock, get_essential_mock, create_port_mock, get_capabilities_mock): ilo_object_mock = get_ilo_object_mock.return_value properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 'cpu_arch': 'x86_64'} macs = {'Port 1': 'aa:aa:aa:aa:aa:aa', 'Port 2': 'bb:bb:bb:bb:bb:bb'} capabilities = '' result = {'properties': properties, 'macs': macs} get_essential_mock.return_value = result get_capabilities_mock.return_value = capabilities power_mock.return_value = states.POWER_OFF with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.inspect.inspect_hardware(task) self.assertEqual(properties, task.node.properties) power_mock.assert_called_once_with(mock.ANY, task) set_power_mock.assert_any_call(task, states.POWER_ON) get_essential_mock.assert_called_once_with(task.node, ilo_object_mock) get_capabilities_mock.assert_called_once_with(task.node, ilo_object_mock) create_port_mock.assert_called_once_with(task.node, macs) @mock.patch.object(ilo_inspect, '_get_capabilities', spec_set=True, autospec=True) @mock.patch.object(ilo_inspect, '_create_ports_if_not_exist', spec_set=True, autospec=True) @mock.patch.object(ilo_inspect, '_get_essential_properties', spec_set=True, autospec=True) @mock.patch.object(ilo_power.IloPower, 'get_power_state', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_inspect_essential_capabilities_ok(self, get_ilo_object_mock, power_mock, get_essential_mock, create_port_mock, get_capabilities_mock): ilo_object_mock = get_ilo_object_mock.return_value properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 'cpu_arch': 'x86_64'} macs = {'Port 1': 'aa:aa:aa:aa:aa:aa', 'Port 2': 'bb:bb:bb:bb:bb:bb'} capability_str = 'BootMode:uefi' capabilities = {'BootMode': 'uefi'} result = {'properties': properties, 'macs': macs} get_essential_mock.return_value = result get_capabilities_mock.return_value = capabilities power_mock.return_value = states.POWER_ON with task_manager.acquire(self.context, 
self.node.uuid, shared=False) as task: task.driver.inspect.inspect_hardware(task) expected_properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 'cpu_arch': 'x86_64', 'capabilities': capability_str} self.assertEqual(expected_properties, task.node.properties) power_mock.assert_called_once_with(mock.ANY, task) get_essential_mock.assert_called_once_with(task.node, ilo_object_mock) get_capabilities_mock.assert_called_once_with(task.node, ilo_object_mock) create_port_mock.assert_called_once_with(task.node, macs) @mock.patch.object(ilo_inspect, '_get_capabilities', spec_set=True, autospec=True) @mock.patch.object(ilo_inspect, '_create_ports_if_not_exist', spec_set=True, autospec=True) @mock.patch.object(ilo_inspect, '_get_essential_properties', spec_set=True, autospec=True) @mock.patch.object(ilo_power.IloPower, 'get_power_state', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_inspect_essential_capabilities_exist_ok(self, get_ilo_object_mock, power_mock, get_essential_mock, create_port_mock, get_capabilities_mock): ilo_object_mock = get_ilo_object_mock.return_value properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 'cpu_arch': 'x86_64', 'somekey': 'somevalue'} macs = {'Port 1': 'aa:aa:aa:aa:aa:aa', 'Port 2': 'bb:bb:bb:bb:bb:bb'} result = {'properties': properties, 'macs': macs} capabilities = {'BootMode': 'uefi'} get_essential_mock.return_value = result get_capabilities_mock.return_value = capabilities power_mock.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties = {'capabilities': 'foo:bar'} expected_capabilities = ('BootMode:uefi,' 'foo:bar') set1 = set(expected_capabilities.split(',')) task.driver.inspect.inspect_hardware(task) end_capabilities = task.node.properties['capabilities'] set2 = set(end_capabilities.split(',')) self.assertEqual(set1, set2) expected_properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 'cpu_arch': 'x86_64', 'capabilities': end_capabilities} power_mock.assert_called_once_with(mock.ANY, task) self.assertEqual(task.node.properties, expected_properties) get_essential_mock.assert_called_once_with(task.node, ilo_object_mock) get_capabilities_mock.assert_called_once_with(task.node, ilo_object_mock) create_port_mock.assert_called_once_with(task.node, macs) class TestInspectPrivateMethods(db_base.DbTestCase): def setUp(self): super(TestInspectPrivateMethods, self).setUp() mgr_utils.mock_the_extension_manager(driver="fake_ilo") self.node = obj_utils.create_test_node( self.context, driver='fake_ilo', driver_info=INFO_DICT) @mock.patch.object(ilo_inspect.LOG, 'info', spec_set=True, autospec=True) @mock.patch.object(dbapi, 'get_instance', spec_set=True, autospec=True) def test__create_ports_if_not_exist(self, instance_mock, log_mock): db_obj = instance_mock.return_value macs = {'Port 1': 'aa:aa:aa:aa:aa:aa', 'Port 2': 'bb:bb:bb:bb:bb:bb'} node_id = self.node.id port_dict1 = {'address': 'aa:aa:aa:aa:aa:aa', 'node_id': node_id} port_dict2 = {'address': 'bb:bb:bb:bb:bb:bb', 'node_id': node_id} ilo_inspect._create_ports_if_not_exist(self.node, macs) instance_mock.assert_called_once_with() self.assertTrue(log_mock.called) db_obj.create_port.assert_any_call(port_dict1) db_obj.create_port.assert_any_call(port_dict2) @mock.patch.object(ilo_inspect.LOG, 'warning', spec_set=True, autospec=True) @mock.patch.object(dbapi, 'get_instance', spec_set=True, autospec=True) def 
test__create_ports_if_not_exist_mac_exception(self, instance_mock, log_mock): dbapi_mock = instance_mock.return_value dbapi_mock.create_port.side_effect = exception.MACAlreadyExists('f') macs = {'Port 1': 'aa:aa:aa:aa:aa:aa', 'Port 2': 'bb:bb:bb:bb:bb:bb'} ilo_inspect._create_ports_if_not_exist(self.node, macs) instance_mock.assert_called_once_with() self.assertTrue(log_mock.called) def test__get_essential_properties_ok(self): ilo_mock = mock.MagicMock(spec=['get_essential_properties']) properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 'cpu_arch': 'x86_64'} macs = {'Port 1': 'aa:aa:aa:aa:aa:aa', 'Port 2': 'bb:bb:bb:bb:bb:bb'} result = {'properties': properties, 'macs': macs} ilo_mock.get_essential_properties.return_value = result actual_result = ilo_inspect._get_essential_properties(self.node, ilo_mock) self.assertEqual(result, actual_result) def test__get_essential_properties_fail(self): ilo_mock = mock.MagicMock( spec=['get_additional_capabilities', 'get_essential_properties']) # Missing key: cpu_arch properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1'} macs = {'Port 1': 'aa:aa:aa:aa:aa:aa', 'Port 2': 'bb:bb:bb:bb:bb:bb'} result = {'properties': properties, 'macs': macs} ilo_mock.get_essential_properties.return_value = result result = self.assertRaises(exception.HardwareInspectionFailure, ilo_inspect._get_essential_properties, self.node, ilo_mock) self.assertEqual( six.text_type(result), ("Failed to inspect hardware. Reason: Server didn't return the " "key(s): cpu_arch")) def test__get_essential_properties_fail_invalid_format(self): ilo_mock = mock.MagicMock( spec=['get_additional_capabilities', 'get_essential_properties']) # Not a dict properties = ['memory_mb', '512', 'local_gb', '10', 'cpus', '1'] macs = ['aa:aa:aa:aa:aa:aa', 'bb:bb:bb:bb:bb:bb'] capabilities = '' result = {'properties': properties, 'macs': macs} ilo_mock.get_essential_properties.return_value = result ilo_mock.get_additional_capabilities.return_value = capabilities self.assertRaises(exception.HardwareInspectionFailure, ilo_inspect._get_essential_properties, self.node, ilo_mock) def test__get_essential_properties_fail_mac_invalid_format(self): ilo_mock = mock.MagicMock(spec=['get_essential_properties']) properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 'cpu_arch': 'x86_64'} # Not a dict macs = 'aa:aa:aa:aa:aa:aa' result = {'properties': properties, 'macs': macs} ilo_mock.get_essential_properties.return_value = result self.assertRaises(exception.HardwareInspectionFailure, ilo_inspect._get_essential_properties, self.node, ilo_mock) def test__get_essential_properties_hardware_port_empty(self): ilo_mock = mock.MagicMock( spec=['get_additional_capabilities', 'get_essential_properties']) properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 'cpu_arch': 'x86_64'} # Not a dictionary macs = None result = {'properties': properties, 'macs': macs} capabilities = '' ilo_mock.get_essential_properties.return_value = result ilo_mock.get_additional_capabilities.return_value = capabilities self.assertRaises(exception.HardwareInspectionFailure, ilo_inspect._get_essential_properties, self.node, ilo_mock) def test__get_essential_properties_hardware_port_not_dict(self): ilo_mock = mock.MagicMock(spec=['get_essential_properties']) properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 'cpu_arch': 'x86_64'} # Not a dict macs = 'aa:bb:cc:dd:ee:ff' result = {'properties': properties, 'macs': macs} ilo_mock.get_essential_properties.return_value = result result = self.assertRaises( 
exception.HardwareInspectionFailure,
            ilo_inspect._get_essential_properties,
            self.node, ilo_mock)

    @mock.patch.object(utils, 'get_updated_capabilities', spec_set=True,
                       autospec=True)
    def test__get_capabilities_ok(self, capability_mock):
        ilo_mock = mock.MagicMock(spec=['get_server_capabilities'])
        capabilities = {'ilo_firmware_version': 'xyz'}
        ilo_mock.get_server_capabilities.return_value = capabilities
        cap = ilo_inspect._get_capabilities(self.node, ilo_mock)
        self.assertEqual(cap, capabilities)

    def test__validate_ok(self):
        properties = {'memory_mb': '512', 'local_gb': '10',
                      'cpus': '2', 'cpu_arch': 'x86_arch'}
        macs = {'Port 1': 'aa:aa:aa:aa:aa:aa'}
        data = {'properties': properties, 'macs': macs}
        valid_keys = ilo_inspect.IloInspect.ESSENTIAL_PROPERTIES
        ilo_inspect._validate(self.node, data)
        self.assertEqual(sorted(set(properties)), sorted(valid_keys))

    def test__validate_essential_keys_fail_missing_key(self):
        properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1'}
        macs = {'Port 1': 'aa:aa:aa:aa:aa:aa'}
        data = {'properties': properties, 'macs': macs}
        self.assertRaises(exception.HardwareInspectionFailure,
                          ilo_inspect._validate, self.node, data)

ironic-5.1.0/ironic/tests/unit/drivers/modules/ilo/test_power.py

# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Test class for IloPower module.""" import mock from oslo_config import cfg from oslo_utils import importutils from ironic.common import boot_devices from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules.ilo import common as ilo_common from ironic.drivers.modules.ilo import power as ilo_power from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils ilo_error = importutils.try_import('proliantutils.exception') INFO_DICT = db_utils.get_test_ilo_info() CONF = cfg.CONF @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) class IloPowerInternalMethodsTestCase(db_base.DbTestCase): def setUp(self): super(IloPowerInternalMethodsTestCase, self).setUp() driver_info = INFO_DICT mgr_utils.mock_the_extension_manager(driver="fake_ilo") self.node = db_utils.create_test_node( driver='fake_ilo', driver_info=driver_info, instance_uuid='instance_uuid_123') CONF.set_override('power_retry', 2, 'ilo') CONF.set_override('power_wait', 0, 'ilo') def test__get_power_state(self, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value ilo_mock_object.get_host_power_status.return_value = 'ON' self.assertEqual( states.POWER_ON, ilo_power._get_power_state(self.node)) ilo_mock_object.get_host_power_status.return_value = 'OFF' self.assertEqual( states.POWER_OFF, ilo_power._get_power_state(self.node)) ilo_mock_object.get_host_power_status.return_value = 'ERROR' self.assertEqual(states.ERROR, ilo_power._get_power_state(self.node)) def test__get_power_state_fail(self, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value exc = ilo_error.IloError('error') ilo_mock_object.get_host_power_status.side_effect = exc self.assertRaises(exception.IloOperationError, ilo_power._get_power_state, self.node) ilo_mock_object.get_host_power_status.assert_called_once_with() def test__set_power_state_invalid_state(self, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.InvalidParameterValue, ilo_power._set_power_state, task, states.ERROR) def test__set_power_state_reboot_fail(self, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value exc = ilo_error.IloError('error') ilo_mock_object.reset_server.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.IloOperationError, ilo_power._set_power_state, task, states.REBOOT) ilo_mock_object.reset_server.assert_called_once_with() def test__set_power_state_reboot_ok(self, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value ilo_mock_object.get_host_power_status.side_effect = ['ON', 'OFF', 'ON'] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: ilo_power._set_power_state(task, states.REBOOT) ilo_mock_object.reset_server.assert_called_once_with() def test__set_power_state_off_fail(self, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value ilo_mock_object.get_host_power_status.return_value = 'ON' with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.PowerStateFailure, ilo_power._set_power_state, task, states.POWER_OFF) ilo_mock_object.get_host_power_status.assert_called_with() 
ilo_mock_object.hold_pwr_btn.assert_called_once_with() def test__set_power_state_on_ok(self, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value ilo_mock_object.get_host_power_status.side_effect = ['OFF', 'ON'] target_state = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: ilo_power._set_power_state(task, target_state) ilo_mock_object.get_host_power_status.assert_called_with() ilo_mock_object.set_host_power.assert_called_once_with('ON') @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'setup_vmedia_for_boot', spec_set=True, autospec=True) def test__attach_boot_iso_if_needed( self, setup_vmedia_mock, set_boot_device_mock, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.provision_state = states.ACTIVE task.node.instance_info['ilo_boot_iso'] = 'boot-iso' ilo_power._attach_boot_iso_if_needed(task) setup_vmedia_mock.assert_called_once_with(task, 'boot-iso') set_boot_device_mock.assert_called_once_with(task, boot_devices.CDROM) @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'setup_vmedia_for_boot', spec_set=True, autospec=True) def test__attach_boot_iso_if_needed_on_rebuild( self, setup_vmedia_mock, set_boot_device_mock, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.provision_state = states.DEPLOYING task.node.instance_info['ilo_boot_iso'] = 'boot-iso' ilo_power._attach_boot_iso_if_needed(task) self.assertFalse(setup_vmedia_mock.called) self.assertFalse(set_boot_device_mock.called) class IloPowerTestCase(db_base.DbTestCase): def setUp(self): super(IloPowerTestCase, self).setUp() driver_info = INFO_DICT mgr_utils.mock_the_extension_manager(driver="fake_ilo") self.node = obj_utils.create_test_node(self.context, driver='fake_ilo', driver_info=driver_info) def test_get_properties(self): expected = ilo_common.COMMON_PROPERTIES with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertEqual(expected, task.driver.power.get_properties()) @mock.patch.object(ilo_common, 'parse_driver_info', spec_set=True, autospec=True) def test_validate(self, mock_drvinfo): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.power.validate(task) mock_drvinfo.assert_called_once_with(task.node) @mock.patch.object(ilo_common, 'parse_driver_info', spec_set=True, autospec=True) def test_validate_fail(self, mock_drvinfo): side_effect = iter([exception.InvalidParameterValue("Invalid Input")]) mock_drvinfo.side_effect = side_effect with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.power.validate, task) @mock.patch.object(ilo_power, '_get_power_state', spec_set=True, autospec=True) def test_get_power_state(self, mock_get_power): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: mock_get_power.return_value = states.POWER_ON self.assertEqual(states.POWER_ON, task.driver.power.get_power_state(task)) mock_get_power.assert_called_once_with(task.node) @mock.patch.object(ilo_power, '_set_power_state', spec_set=True, autospec=True) def test_set_power_state(self, mock_set_power): mock_set_power.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: 
task.driver.power.set_power_state(task, states.POWER_ON) mock_set_power.assert_called_once_with(task, states.POWER_ON) @mock.patch.object(ilo_power, '_set_power_state', spec_set=True, autospec=True) @mock.patch.object(ilo_power, '_get_power_state', spec_set=True, autospec=True) def test_reboot(self, mock_get_power, mock_set_power): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: mock_get_power.return_value = states.POWER_ON mock_set_power.return_value = states.POWER_ON task.driver.power.reboot(task) mock_get_power.assert_called_once_with(task.node) mock_set_power.assert_called_once_with(task, states.REBOOT) ironic-5.1.0/ironic/tests/unit/drivers/modules/ilo/__init__.py0000664000567000056710000000000012674513466025573 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/drivers/modules/ilo/test_vendor.py0000664000567000056710000002424512674513466026411 0ustar jenkinsjenkins00000000000000# Copyright 2015 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Test class for vendor methods used by iLO modules.""" import mock from oslo_config import cfg from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules import agent from ironic.drivers.modules import deploy_utils from ironic.drivers.modules.ilo import common as ilo_common from ironic.drivers.modules.ilo import vendor as ilo_vendor from ironic.drivers.modules import iscsi_deploy from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = db_utils.get_test_ilo_info() CONF = cfg.CONF class VendorPassthruTestCase(db_base.DbTestCase): def setUp(self): super(VendorPassthruTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="iscsi_ilo") self.node = obj_utils.create_test_node(self.context, driver='iscsi_ilo', driver_info=INFO_DICT) @mock.patch.object(manager_utils, 'node_power_action', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'setup_vmedia', spec_set=True, autospec=True) def test_boot_into_iso(self, setup_vmedia_mock, power_action_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.vendor.boot_into_iso(task, boot_iso_href='foo') setup_vmedia_mock.assert_called_once_with(task, 'foo', ramdisk_options=None) power_action_mock.assert_called_once_with(task, states.REBOOT) @mock.patch.object(ilo_vendor.VendorPassthru, '_validate_boot_into_iso', spec_set=True, autospec=True) def test_validate_boot_into_iso(self, validate_boot_into_iso_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: vendor = ilo_vendor.VendorPassthru() vendor.validate(task, method='boot_into_iso', foo='bar') validate_boot_into_iso_mock.assert_called_once_with( vendor, task, {'foo': 'bar'}) def 
test__validate_boot_into_iso_invalid_state(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.provision_state = states.AVAILABLE self.assertRaises( exception.InvalidStateRequested, task.driver.vendor._validate_boot_into_iso, task, {}) def test__validate_boot_into_iso_missing_boot_iso_href(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.provision_state = states.MANAGEABLE self.assertRaises( exception.MissingParameterValue, task.driver.vendor._validate_boot_into_iso, task, {}) @mock.patch.object(deploy_utils, 'validate_image_properties', spec_set=True, autospec=True) def test__validate_boot_into_iso_manage(self, validate_image_prop_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: info = {'boot_iso_href': 'foo'} task.node.provision_state = states.MANAGEABLE task.driver.vendor._validate_boot_into_iso( task, info) validate_image_prop_mock.assert_called_once_with( task.context, {'image_source': 'foo'}, []) @mock.patch.object(deploy_utils, 'validate_image_properties', spec_set=True, autospec=True) def test__validate_boot_into_iso_maintenance( self, validate_image_prop_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: info = {'boot_iso_href': 'foo'} task.node.maintenance = True task.driver.vendor._validate_boot_into_iso( task, info) validate_image_prop_mock.assert_called_once_with( task.context, {'image_source': 'foo'}, []) @mock.patch.object(iscsi_deploy.VendorPassthru, 'pass_deploy_info', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_secure_boot_mode', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_boot_mode', spec_set=True, autospec=True) def test_pass_deploy_info(self, func_update_boot_mode, func_update_secure_boot_mode, vendorpassthru_mock): kwargs = {'address': '123456'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.provision_state = states.DEPLOYWAIT task.node.target_provision_state = states.ACTIVE task.driver.vendor.pass_deploy_info(task, **kwargs) func_update_boot_mode.assert_called_once_with(task) func_update_secure_boot_mode.assert_called_once_with(task, True) vendorpassthru_mock.assert_called_once_with( mock.ANY, task, **kwargs) @mock.patch.object(iscsi_deploy.VendorPassthru, 'continue_deploy', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_secure_boot_mode', autospec=True) @mock.patch.object(ilo_common, 'update_boot_mode', autospec=True) def test_continue_deploy(self, func_update_boot_mode, func_update_secure_boot_mode, pxe_vendorpassthru_mock): kwargs = {'address': '123456'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.provision_state = states.DEPLOYWAIT task.node.target_provision_state = states.ACTIVE task.driver.vendor.continue_deploy(task, **kwargs) func_update_boot_mode.assert_called_once_with(task) func_update_secure_boot_mode.assert_called_once_with(task, True) pxe_vendorpassthru_mock.assert_called_once_with( mock.ANY, task, **kwargs) class IloVirtualMediaAgentVendorInterfaceTestCase(db_base.DbTestCase): def setUp(self): super(IloVirtualMediaAgentVendorInterfaceTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="agent_ilo") self.node = obj_utils.create_test_node( self.context, driver='agent_ilo', driver_info=INFO_DICT) @mock.patch.object(agent.AgentVendorInterface, 'reboot_to_instance', spec_set=True, autospec=True) 
@mock.patch.object(agent.AgentVendorInterface, 'check_deploy_success', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_boot_mode', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_secure_boot_mode', spec_set=True, autospec=True) def test_reboot_to_instance(self, func_update_secure_boot_mode, func_update_boot_mode, check_deploy_success_mock, agent_reboot_to_instance_mock): kwargs = {'address': '123456'} check_deploy_success_mock.return_value = None with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.vendor.reboot_to_instance(task, **kwargs) check_deploy_success_mock.assert_called_once_with( mock.ANY, task.node) func_update_boot_mode.assert_called_once_with(task) func_update_secure_boot_mode.assert_called_once_with(task, True) agent_reboot_to_instance_mock.assert_called_once_with( mock.ANY, task, **kwargs) @mock.patch.object(agent.AgentVendorInterface, 'reboot_to_instance', spec_set=True, autospec=True) @mock.patch.object(agent.AgentVendorInterface, 'check_deploy_success', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_boot_mode', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_secure_boot_mode', spec_set=True, autospec=True) def test_reboot_to_instance_deploy_fail(self, func_update_secure_boot_mode, func_update_boot_mode, check_deploy_success_mock, agent_reboot_to_instance_mock): kwargs = {'address': '123456'} check_deploy_success_mock.return_value = "Error" with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.vendor.reboot_to_instance(task, **kwargs) check_deploy_success_mock.assert_called_once_with( mock.ANY, task.node) self.assertFalse(func_update_boot_mode.called) self.assertFalse(func_update_secure_boot_mode.called) agent_reboot_to_instance_mock.assert_called_once_with( mock.ANY, task, **kwargs) ironic-5.1.0/ironic/tests/unit/drivers/modules/test_image_cache.py0000664000567000056710000010070212674513466026527 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Tests for ImageCache class and helper functions.""" import datetime import os import tempfile import time import uuid import mock from oslo_utils import uuidutils import six from ironic.common import exception from ironic.common import image_service from ironic.common import images from ironic.common import utils from ironic.drivers.modules import image_cache from ironic.tests import base def touch(filename): open(filename, 'w').close() class TestImageCacheFetch(base.TestCase): def setUp(self): super(TestImageCacheFetch, self).setUp() self.master_dir = tempfile.mkdtemp() self.cache = image_cache.ImageCache(self.master_dir, None, None) self.dest_dir = tempfile.mkdtemp() self.dest_path = os.path.join(self.dest_dir, 'dest') self.uuid = uuidutils.generate_uuid() self.master_path = os.path.join(self.master_dir, self.uuid) @mock.patch.object(image_cache, '_fetch', autospec=True) @mock.patch.object(image_cache.ImageCache, 'clean_up', autospec=True) @mock.patch.object(image_cache.ImageCache, '_download_image', autospec=True) def test_fetch_image_no_master_dir(self, mock_download, mock_clean_up, mock_fetch): self.cache.master_dir = None self.cache.fetch_image(self.uuid, self.dest_path) self.assertFalse(mock_download.called) mock_fetch.assert_called_once_with( None, self.uuid, self.dest_path, True) self.assertFalse(mock_clean_up.called) @mock.patch.object(image_cache.ImageCache, 'clean_up', autospec=True) @mock.patch.object(image_cache.ImageCache, '_download_image', autospec=True) @mock.patch.object(os, 'link', autospec=True) @mock.patch.object(image_cache, '_delete_dest_path_if_stale', return_value=True, autospec=True) @mock.patch.object(image_cache, '_delete_master_path_if_stale', return_value=True, autospec=True) def test_fetch_image_dest_and_master_uptodate( self, mock_cache_upd, mock_dest_upd, mock_link, mock_download, mock_clean_up): self.cache.fetch_image(self.uuid, self.dest_path) mock_cache_upd.assert_called_once_with(self.master_path, self.uuid, None) mock_dest_upd.assert_called_once_with(self.master_path, self.dest_path) self.assertFalse(mock_link.called) self.assertFalse(mock_download.called) self.assertFalse(mock_clean_up.called) @mock.patch.object(image_cache.ImageCache, 'clean_up', autospec=True) @mock.patch.object(image_cache.ImageCache, '_download_image', autospec=True) @mock.patch.object(os, 'link', autospec=True) @mock.patch.object(image_cache, '_delete_dest_path_if_stale', return_value=False, autospec=True) @mock.patch.object(image_cache, '_delete_master_path_if_stale', return_value=True, autospec=True) def test_fetch_image_dest_out_of_date( self, mock_cache_upd, mock_dest_upd, mock_link, mock_download, mock_clean_up): self.cache.fetch_image(self.uuid, self.dest_path) mock_cache_upd.assert_called_once_with(self.master_path, self.uuid, None) mock_dest_upd.assert_called_once_with(self.master_path, self.dest_path) mock_link.assert_called_once_with(self.master_path, self.dest_path) self.assertFalse(mock_download.called) self.assertFalse(mock_clean_up.called) @mock.patch.object(image_cache.ImageCache, 'clean_up', autospec=True) @mock.patch.object(image_cache.ImageCache, '_download_image', autospec=True) @mock.patch.object(os, 'link', autospec=True) @mock.patch.object(image_cache, '_delete_dest_path_if_stale', return_value=True, autospec=True) @mock.patch.object(image_cache, '_delete_master_path_if_stale', return_value=False, autospec=True) def test_fetch_image_master_out_of_date( self, mock_cache_upd, mock_dest_upd, mock_link, mock_download, mock_clean_up): 
self.cache.fetch_image(self.uuid, self.dest_path) mock_cache_upd.assert_called_once_with(self.master_path, self.uuid, None) mock_dest_upd.assert_called_once_with(self.master_path, self.dest_path) self.assertFalse(mock_link.called) mock_download.assert_called_once_with( self.cache, self.uuid, self.master_path, self.dest_path, ctx=None, force_raw=True) mock_clean_up.assert_called_once_with(self.cache) @mock.patch.object(image_cache.ImageCache, 'clean_up', autospec=True) @mock.patch.object(image_cache.ImageCache, '_download_image', autospec=True) @mock.patch.object(os, 'link', autospec=True) @mock.patch.object(image_cache, '_delete_dest_path_if_stale', return_value=True, autospec=True) @mock.patch.object(image_cache, '_delete_master_path_if_stale', return_value=False, autospec=True) def test_fetch_image_both_master_and_dest_out_of_date( self, mock_cache_upd, mock_dest_upd, mock_link, mock_download, mock_clean_up): self.cache.fetch_image(self.uuid, self.dest_path) mock_cache_upd.assert_called_once_with(self.master_path, self.uuid, None) mock_dest_upd.assert_called_once_with(self.master_path, self.dest_path) self.assertFalse(mock_link.called) mock_download.assert_called_once_with( self.cache, self.uuid, self.master_path, self.dest_path, ctx=None, force_raw=True) mock_clean_up.assert_called_once_with(self.cache) @mock.patch.object(image_cache.ImageCache, 'clean_up', autospec=True) @mock.patch.object(image_cache.ImageCache, '_download_image', autospec=True) def test_fetch_image_not_uuid(self, mock_download, mock_clean_up): href = u'http://abc.com/ubuntu.qcow2' href_encoded = href.encode('utf-8') if six.PY2 else href href_converted = str(uuid.uuid5(uuid.NAMESPACE_URL, href_encoded)) master_path = os.path.join(self.master_dir, href_converted) self.cache.fetch_image(href, self.dest_path) mock_download.assert_called_once_with( self.cache, href, master_path, self.dest_path, ctx=None, force_raw=True) self.assertTrue(mock_clean_up.called) @mock.patch.object(image_cache, '_fetch', autospec=True) def test__download_image(self, mock_fetch): def _fake_fetch(ctx, uuid, tmp_path, *args): self.assertEqual(self.uuid, uuid) self.assertNotEqual(self.dest_path, tmp_path) self.assertNotEqual(os.path.dirname(tmp_path), self.master_dir) with open(tmp_path, 'w') as fp: fp.write("TEST") mock_fetch.side_effect = _fake_fetch self.cache._download_image(self.uuid, self.master_path, self.dest_path) self.assertTrue(os.path.isfile(self.dest_path)) self.assertTrue(os.path.isfile(self.master_path)) self.assertEqual(os.stat(self.dest_path).st_ino, os.stat(self.master_path).st_ino) with open(self.dest_path) as fp: self.assertEqual("TEST", fp.read()) @mock.patch.object(os, 'unlink', autospec=True) class TestUpdateImages(base.TestCase): def setUp(self): super(TestUpdateImages, self).setUp() self.master_dir = tempfile.mkdtemp() self.dest_dir = tempfile.mkdtemp() self.dest_path = os.path.join(self.dest_dir, 'dest') self.uuid = uuidutils.generate_uuid() self.master_path = os.path.join(self.master_dir, self.uuid) @mock.patch.object(os.path, 'exists', return_value=False, autospec=True) @mock.patch.object(image_service, 'get_image_service', autospec=True) def test__delete_master_path_if_stale_glance_img_not_cached( self, mock_gis, mock_path_exists, mock_unlink): res = image_cache._delete_master_path_if_stale(self.master_path, self.uuid, None) self.assertFalse(mock_gis.called) self.assertFalse(mock_unlink.called) mock_path_exists.assert_called_once_with(self.master_path) self.assertFalse(res) @mock.patch.object(os.path, 'exists', 
return_value=True, autospec=True) @mock.patch.object(image_service, 'get_image_service', autospec=True) def test__delete_master_path_if_stale_glance_img( self, mock_gis, mock_path_exists, mock_unlink): res = image_cache._delete_master_path_if_stale(self.master_path, self.uuid, None) self.assertFalse(mock_gis.called) self.assertFalse(mock_unlink.called) mock_path_exists.assert_called_once_with(self.master_path) self.assertTrue(res) @mock.patch.object(image_service, 'get_image_service', autospec=True) def test__delete_master_path_if_stale_no_master(self, mock_gis, mock_unlink): res = image_cache._delete_master_path_if_stale(self.master_path, 'http://11', None) self.assertFalse(mock_gis.called) self.assertFalse(mock_unlink.called) self.assertFalse(res) @mock.patch.object(image_service, 'get_image_service', autospec=True) def test__delete_master_path_if_stale_no_updated_at(self, mock_gis, mock_unlink): touch(self.master_path) href = 'http://awesomefreeimages.al/img111' mock_gis.return_value.show.return_value = {} res = image_cache._delete_master_path_if_stale(self.master_path, href, None) mock_gis.assert_called_once_with(href, context=None) self.assertFalse(mock_unlink.called) self.assertTrue(res) @mock.patch.object(image_service, 'get_image_service', autospec=True) def test__delete_master_path_if_stale_master_up_to_date(self, mock_gis, mock_unlink): touch(self.master_path) href = 'http://awesomefreeimages.al/img999' mock_gis.return_value.show.return_value = { 'updated_at': datetime.datetime(1999, 11, 15, 8, 12, 31) } res = image_cache._delete_master_path_if_stale(self.master_path, href, None) mock_gis.assert_called_once_with(href, context=None) self.assertFalse(mock_unlink.called) self.assertTrue(res) @mock.patch.object(image_service, 'get_image_service', autospec=True) def test__delete_master_path_if_stale_master_same_time(self, mock_gis, mock_unlink): # When times identical should not delete cached file touch(self.master_path) mtime = utils.unix_file_modification_datetime(self.master_path) href = 'http://awesomefreeimages.al/img999' mock_gis.return_value.show.return_value = { 'updated_at': mtime } res = image_cache._delete_master_path_if_stale(self.master_path, href, None) mock_gis.assert_called_once_with(href, context=None) self.assertFalse(mock_unlink.called) self.assertTrue(res) @mock.patch.object(image_service, 'get_image_service', autospec=True) def test__delete_master_path_if_stale_out_of_date(self, mock_gis, mock_unlink): touch(self.master_path) href = 'http://awesomefreeimages.al/img999' mock_gis.return_value.show.return_value = { 'updated_at': datetime.datetime((datetime.datetime.utcnow().year + 1), 11, 15, 8, 12, 31) } res = image_cache._delete_master_path_if_stale(self.master_path, href, None) mock_gis.assert_called_once_with(href, context=None) mock_unlink.assert_called_once_with(self.master_path) self.assertFalse(res) def test__delete_dest_path_if_stale_no_dest(self, mock_unlink): res = image_cache._delete_dest_path_if_stale(self.master_path, self.dest_path) self.assertFalse(mock_unlink.called) self.assertFalse(res) def test__delete_dest_path_if_stale_no_master(self, mock_unlink): touch(self.dest_path) res = image_cache._delete_dest_path_if_stale(self.master_path, self.dest_path) mock_unlink.assert_called_once_with(self.dest_path) self.assertFalse(res) def test__delete_dest_path_if_stale_out_of_date(self, mock_unlink): touch(self.master_path) touch(self.dest_path) res = image_cache._delete_dest_path_if_stale(self.master_path, self.dest_path) 
mock_unlink.assert_called_once_with(self.dest_path) self.assertFalse(res) def test__delete_dest_path_if_stale_up_to_date(self, mock_unlink): touch(self.master_path) os.link(self.master_path, self.dest_path) res = image_cache._delete_dest_path_if_stale(self.master_path, self.dest_path) self.assertFalse(mock_unlink.called) self.assertTrue(res) class TestImageCacheCleanUp(base.TestCase): def setUp(self): super(TestImageCacheCleanUp, self).setUp() self.master_dir = tempfile.mkdtemp() self.cache = image_cache.ImageCache(self.master_dir, cache_size=10, cache_ttl=600) @mock.patch.object(image_cache.ImageCache, '_clean_up_ensure_cache_size', autospec=True) def test_clean_up_old_deleted(self, mock_clean_size): mock_clean_size.return_value = None files = [os.path.join(self.master_dir, str(i)) for i in range(2)] for filename in files: touch(filename) # NOTE(dtantsur): Can't alter ctime, have to set mtime to the future new_current_time = time.time() + 900 os.utime(files[0], (new_current_time - 100, new_current_time - 100)) with mock.patch.object(time, 'time', lambda: new_current_time): self.cache.clean_up() mock_clean_size.assert_called_once_with(self.cache, mock.ANY, None) survived = mock_clean_size.call_args[0][1] self.assertEqual(1, len(survived)) self.assertEqual(files[0], survived[0][0]) # NOTE(dtantsur): do not compare milliseconds self.assertEqual(int(new_current_time - 100), int(survived[0][1])) self.assertEqual(int(new_current_time - 100), int(survived[0][2].st_mtime)) @mock.patch.object(image_cache.ImageCache, '_clean_up_ensure_cache_size', autospec=True) def test_clean_up_old_with_amount(self, mock_clean_size): files = [os.path.join(self.master_dir, str(i)) for i in range(2)] for filename in files: open(filename, 'wb').write(b'X') new_current_time = time.time() + 900 with mock.patch.object(time, 'time', lambda: new_current_time): self.cache.clean_up(amount=1) self.assertFalse(mock_clean_size.called) # Exactly one file is expected to be deleted self.assertTrue(any(os.path.exists(f) for f in files)) self.assertFalse(all(os.path.exists(f) for f in files)) @mock.patch.object(image_cache.ImageCache, '_clean_up_ensure_cache_size', autospec=True) def test_clean_up_files_with_links_untouched(self, mock_clean_size): mock_clean_size.return_value = None files = [os.path.join(self.master_dir, str(i)) for i in range(2)] for filename in files: touch(filename) os.link(filename, filename + 'copy') new_current_time = time.time() + 900 with mock.patch.object(time, 'time', lambda: new_current_time): self.cache.clean_up() for filename in files: self.assertTrue(os.path.exists(filename)) mock_clean_size.assert_called_once_with(mock.ANY, [], None) @mock.patch.object(image_cache.ImageCache, '_clean_up_too_old', autospec=True) def test_clean_up_ensure_cache_size(self, mock_clean_ttl): mock_clean_ttl.side_effect = lambda *xx: xx[1:] # NOTE(dtantsur): Cache size in test is 10 bytes, we create 6 files # with 3 bytes each and expect 3 to be deleted files = [os.path.join(self.master_dir, str(i)) for i in range(6)] for filename in files: with open(filename, 'w') as fp: fp.write('123') # NOTE(dtantsur): Make 3 files 'newer' to check that # old ones are deleted first new_current_time = time.time() + 100 for filename in files[:3]: os.utime(filename, (new_current_time, new_current_time)) with mock.patch.object(time, 'time', lambda: new_current_time): self.cache.clean_up() for filename in files[:3]: self.assertTrue(os.path.exists(filename)) for filename in files[3:]: self.assertFalse(os.path.exists(filename)) 
mock_clean_ttl.assert_called_once_with(mock.ANY, mock.ANY, None) @mock.patch.object(image_cache.ImageCache, '_clean_up_too_old', autospec=True) def test_clean_up_ensure_cache_size_with_amount(self, mock_clean_ttl): mock_clean_ttl.side_effect = lambda *xx: xx[1:] # NOTE(dtantsur): Cache size in test is 10 bytes, we create 6 files # with 3 bytes each and set amount to be 15, 5 files are to be deleted files = [os.path.join(self.master_dir, str(i)) for i in range(6)] for filename in files: with open(filename, 'w') as fp: fp.write('123') # NOTE(dtantsur): Make 1 file 'newer' to check that # old ones are deleted first new_current_time = time.time() + 100 os.utime(files[0], (new_current_time, new_current_time)) with mock.patch.object(time, 'time', lambda: new_current_time): self.cache.clean_up(amount=15) self.assertTrue(os.path.exists(files[0])) for filename in files[5:]: self.assertFalse(os.path.exists(filename)) mock_clean_ttl.assert_called_once_with(mock.ANY, mock.ANY, 15) @mock.patch.object(image_cache.LOG, 'info', autospec=True) @mock.patch.object(image_cache.ImageCache, '_clean_up_too_old', autospec=True) def test_clean_up_cache_still_large(self, mock_clean_ttl, mock_log): mock_clean_ttl.side_effect = lambda *xx: xx[1:] # NOTE(dtantsur): Cache size in test is 10 bytes, we create 2 files # than cannot be deleted and expected this to be logged files = [os.path.join(self.master_dir, str(i)) for i in range(2)] for filename in files: with open(filename, 'w') as fp: fp.write('123') os.link(filename, filename + 'copy') self.cache.clean_up() for filename in files: self.assertTrue(os.path.exists(filename)) self.assertTrue(mock_log.called) mock_clean_ttl.assert_called_once_with(mock.ANY, mock.ANY, None) @mock.patch.object(utils, 'rmtree_without_raise', autospec=True) @mock.patch.object(image_cache, '_fetch', autospec=True) def test_temp_images_not_cleaned(self, mock_fetch, mock_rmtree): def _fake_fetch(ctx, uuid, tmp_path, *args): with open(tmp_path, 'w') as fp: fp.write("TEST" * 10) # assume cleanup from another thread at this moment self.cache.clean_up() self.assertTrue(os.path.exists(tmp_path)) mock_fetch.side_effect = _fake_fetch master_path = os.path.join(self.master_dir, 'uuid') dest_path = os.path.join(tempfile.mkdtemp(), 'dest') self.cache._download_image('uuid', master_path, dest_path) self.assertTrue(mock_rmtree.called) @mock.patch.object(utils, 'rmtree_without_raise', autospec=True) @mock.patch.object(image_cache, '_fetch', autospec=True) def test_temp_dir_exception(self, mock_fetch, mock_rmtree): mock_fetch.side_effect = exception.IronicException self.assertRaises(exception.IronicException, self.cache._download_image, 'uuid', 'fake', 'fake') self.assertTrue(mock_rmtree.called) @mock.patch.object(image_cache.LOG, 'warning', autospec=True) @mock.patch.object(image_cache.ImageCache, '_clean_up_too_old', autospec=True) @mock.patch.object(image_cache.ImageCache, '_clean_up_ensure_cache_size', autospec=True) def test_clean_up_amount_not_satisfied(self, mock_clean_size, mock_clean_ttl, mock_log): mock_clean_ttl.side_effect = lambda *xx: xx[1:] mock_clean_size.side_effect = lambda self, listing, amount: amount self.cache.clean_up(amount=15) self.assertTrue(mock_log.called) def test_cleanup_ordering(self): class ParentCache(image_cache.ImageCache): def __init__(self): super(ParentCache, self).__init__('a', 1, 1, None) @image_cache.cleanup(priority=10000) class Cache1(ParentCache): pass @image_cache.cleanup(priority=20000) class Cache2(ParentCache): pass @image_cache.cleanup(priority=10000) class 
Cache3(ParentCache): pass self.assertEqual(image_cache._cache_cleanup_list[0][1], Cache2) # The order of caches with same prioirty is not deterministic. item_possibilities = [Cache1, Cache3] second_item_actual = image_cache._cache_cleanup_list[1][1] self.assertIn(second_item_actual, item_possibilities) item_possibilities.remove(second_item_actual) third_item_actual = image_cache._cache_cleanup_list[2][1] self.assertEqual(item_possibilities[0], third_item_actual) @mock.patch.object(image_cache, '_cache_cleanup_list', autospec=True) @mock.patch.object(os, 'statvfs', autospec=True) @mock.patch.object(image_service, 'get_image_service', autospec=True) class CleanupImageCacheTestCase(base.TestCase): def setUp(self): super(CleanupImageCacheTestCase, self).setUp() self.mock_first_cache = mock.MagicMock(spec_set=[]) self.mock_second_cache = mock.MagicMock(spec_set=[]) self.cache_cleanup_list = [(50, self.mock_first_cache), (20, self.mock_second_cache)] self.mock_first_cache.return_value.master_dir = 'first_cache_dir' self.mock_second_cache.return_value.master_dir = 'second_cache_dir' def test_no_clean_up(self, mock_image_service, mock_statvfs, cache_cleanup_list_mock): # Enough space found - no clean up mock_show = mock_image_service.return_value.show mock_show.return_value = dict(size=42) mock_statvfs.return_value = mock.MagicMock( spec_set=['f_frsize', 'f_bavail'], f_frsize=1, f_bavail=1024) cache_cleanup_list_mock.__iter__.return_value = self.cache_cleanup_list image_cache.clean_up_caches(None, 'master_dir', [('uuid', 'path')]) mock_show.assert_called_once_with('uuid') mock_statvfs.assert_called_once_with('master_dir') self.assertFalse(self.mock_first_cache.return_value.clean_up.called) self.assertFalse(self.mock_second_cache.return_value.clean_up.called) mock_statvfs.assert_called_once_with('master_dir') @mock.patch.object(os, 'stat', autospec=True) def test_one_clean_up(self, mock_stat, mock_image_service, mock_statvfs, cache_cleanup_list_mock): # Not enough space, first cache clean up is enough mock_stat.return_value.st_dev = 1 mock_show = mock_image_service.return_value.show mock_show.return_value = dict(size=42) mock_statvfs.side_effect = [ mock.MagicMock(f_frsize=1, f_bavail=1, spec_set=['f_frsize', 'f_bavail']), mock.MagicMock(f_frsize=1, f_bavail=1024, spec_set=['f_frsize', 'f_bavail']) ] cache_cleanup_list_mock.__iter__.return_value = self.cache_cleanup_list image_cache.clean_up_caches(None, 'master_dir', [('uuid', 'path')]) mock_show.assert_called_once_with('uuid') mock_statvfs.assert_called_with('master_dir') self.assertEqual(2, mock_statvfs.call_count) self.mock_first_cache.return_value.clean_up.assert_called_once_with( amount=(42 - 1)) self.assertFalse(self.mock_second_cache.return_value.clean_up.called) # Since we are using generator expression in clean_up_caches, stat on # second cache wouldn't be called if we got enough free space on # cleaning up the first cache. 
mock_stat_calls_expected = [mock.call('master_dir'), mock.call('first_cache_dir')] mock_statvfs_calls_expected = [mock.call('master_dir'), mock.call('master_dir')] self.assertEqual(mock_stat_calls_expected, mock_stat.mock_calls) self.assertEqual(mock_statvfs_calls_expected, mock_statvfs.mock_calls) @mock.patch.object(os, 'stat', autospec=True) def test_clean_up_another_fs(self, mock_stat, mock_image_service, mock_statvfs, cache_cleanup_list_mock): # Not enough space, need to cleanup second cache mock_stat.side_effect = [mock.MagicMock(st_dev=1, spec_set=['st_dev']), mock.MagicMock(st_dev=2, spec_set=['st_dev']), mock.MagicMock(st_dev=1, spec_set=['st_dev'])] mock_show = mock_image_service.return_value.show mock_show.return_value = dict(size=42) mock_statvfs.side_effect = [ mock.MagicMock(f_frsize=1, f_bavail=1, spec_set=['f_frsize', 'f_bavail']), mock.MagicMock(f_frsize=1, f_bavail=1024, spec_set=['f_frsize', 'f_bavail']) ] cache_cleanup_list_mock.__iter__.return_value = self.cache_cleanup_list image_cache.clean_up_caches(None, 'master_dir', [('uuid', 'path')]) mock_show.assert_called_once_with('uuid') mock_statvfs.assert_called_with('master_dir') self.assertEqual(2, mock_statvfs.call_count) self.mock_second_cache.return_value.clean_up.assert_called_once_with( amount=(42 - 1)) self.assertFalse(self.mock_first_cache.return_value.clean_up.called) # Since first cache exists on a different partition, it wouldn't be # considered for cleanup. mock_stat_calls_expected = [mock.call('master_dir'), mock.call('first_cache_dir'), mock.call('second_cache_dir')] mock_statvfs_calls_expected = [mock.call('master_dir'), mock.call('master_dir')] self.assertEqual(mock_stat_calls_expected, mock_stat.mock_calls) self.assertEqual(mock_statvfs_calls_expected, mock_statvfs.mock_calls) @mock.patch.object(os, 'stat', autospec=True) def test_both_clean_up(self, mock_stat, mock_image_service, mock_statvfs, cache_cleanup_list_mock): # Not enough space, clean up of both caches required mock_stat.return_value.st_dev = 1 mock_show = mock_image_service.return_value.show mock_show.return_value = dict(size=42) mock_statvfs.side_effect = [ mock.MagicMock(f_frsize=1, f_bavail=1, spec_set=['f_frsize', 'f_bavail']), mock.MagicMock(f_frsize=1, f_bavail=2, spec_set=['f_frsize', 'f_bavail']), mock.MagicMock(f_frsize=1, f_bavail=1024, spec_set=['f_frsize', 'f_bavail']) ] cache_cleanup_list_mock.__iter__.return_value = self.cache_cleanup_list image_cache.clean_up_caches(None, 'master_dir', [('uuid', 'path')]) mock_show.assert_called_once_with('uuid') mock_statvfs.assert_called_with('master_dir') self.assertEqual(3, mock_statvfs.call_count) self.mock_first_cache.return_value.clean_up.assert_called_once_with( amount=(42 - 1)) self.mock_second_cache.return_value.clean_up.assert_called_once_with( amount=(42 - 2)) mock_stat_calls_expected = [mock.call('master_dir'), mock.call('first_cache_dir'), mock.call('second_cache_dir')] mock_statvfs_calls_expected = [mock.call('master_dir'), mock.call('master_dir'), mock.call('master_dir')] self.assertEqual(mock_stat_calls_expected, mock_stat.mock_calls) self.assertEqual(mock_statvfs_calls_expected, mock_statvfs.mock_calls) @mock.patch.object(os, 'stat', autospec=True) def test_clean_up_fail(self, mock_stat, mock_image_service, mock_statvfs, cache_cleanup_list_mock): # Not enough space even after cleaning both caches - failure mock_stat.return_value.st_dev = 1 mock_show = mock_image_service.return_value.show mock_show.return_value = dict(size=42) mock_statvfs.return_value = mock.MagicMock( 
f_frsize=1, f_bavail=1, spec_set=['f_frsize', 'f_bavail']) cache_cleanup_list_mock.__iter__.return_value = self.cache_cleanup_list self.assertRaises(exception.InsufficientDiskSpace, image_cache.clean_up_caches, None, 'master_dir', [('uuid', 'path')]) mock_show.assert_called_once_with('uuid') mock_statvfs.assert_called_with('master_dir') self.assertEqual(3, mock_statvfs.call_count) self.mock_first_cache.return_value.clean_up.assert_called_once_with( amount=(42 - 1)) self.mock_second_cache.return_value.clean_up.assert_called_once_with( amount=(42 - 1)) mock_stat_calls_expected = [mock.call('master_dir'), mock.call('first_cache_dir'), mock.call('second_cache_dir')] mock_statvfs_calls_expected = [mock.call('master_dir'), mock.call('master_dir'), mock.call('master_dir')] self.assertEqual(mock_stat_calls_expected, mock_stat.mock_calls) self.assertEqual(mock_statvfs_calls_expected, mock_statvfs.mock_calls) class TestFetchCleanup(base.TestCase): @mock.patch.object(images, 'converted_size', autospec=True) @mock.patch.object(images, 'fetch', autospec=True) @mock.patch.object(images, 'image_to_raw', autospec=True) @mock.patch.object(image_cache, '_clean_up_caches', autospec=True) def test__fetch(self, mock_clean, mock_raw, mock_fetch, mock_size): mock_size.return_value = 100 image_cache._fetch('fake', 'fake-uuid', '/foo/bar', force_raw=True) mock_fetch.assert_called_once_with('fake', 'fake-uuid', '/foo/bar.part', force_raw=False) mock_clean.assert_called_once_with('/foo', 100) mock_raw.assert_called_once_with('fake-uuid', '/foo/bar', '/foo/bar.part') ironic-5.1.0/ironic/tests/unit/drivers/modules/test_pxe.py0000664000567000056710000012071212674513470025114 0ustar jenkinsjenkins00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Test class for PXE driver.""" import os import shutil import tempfile from ironic_lib import utils as ironic_utils import mock from oslo_config import cfg from oslo_serialization import jsonutils as json from oslo_utils import fileutils from ironic.common import boot_devices from ironic.common import dhcp_factory from ironic.common import exception from ironic.common.glance_service import base_image_service from ironic.common import pxe_utils from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules import agent_base_vendor from ironic.drivers.modules import deploy_utils from ironic.drivers.modules import pxe from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils CONF = cfg.CONF INST_INFO_DICT = db_utils.get_test_pxe_instance_info() DRV_INFO_DICT = db_utils.get_test_pxe_driver_info() DRV_INTERNAL_INFO_DICT = db_utils.get_test_pxe_driver_internal_info() class PXEPrivateMethodsTestCase(db_base.DbTestCase): def setUp(self): super(PXEPrivateMethodsTestCase, self).setUp() n = { 'driver': 'fake_pxe', 'instance_info': INST_INFO_DICT, 'driver_info': DRV_INFO_DICT, 'driver_internal_info': DRV_INTERNAL_INFO_DICT, } mgr_utils.mock_the_extension_manager(driver="fake_pxe") self.node = obj_utils.create_test_node(self.context, **n) def _test_get_pxe_conf_option(self, driver, expected_value): mgr_utils.mock_the_extension_manager(driver=driver) self.node.driver = driver self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: returned_value = pxe._get_pxe_conf_option( task, 'pxe_config_template') self.assertEqual(expected_value, returned_value) def test_get_pxe_conf_option_iscsi_deploy(self): self.config(group='pxe', pxe_config_template='my-pxe-config-template') self._test_get_pxe_conf_option('fake_pxe', 'my-pxe-config-template') def test_get_pxe_conf_option_agent_deploy_default(self): self.config(group='pxe', pxe_config_template='my-pxe-config-template') self._test_get_pxe_conf_option('fake_agent', 'my-pxe-config-template') def test_get_pxe_conf_option_agent_deploy_not_default(self): self.config(group='agent', agent_pxe_config_template='my-agent-config-template') self.config(group='pxe', pxe_config_template='my-pxe-config-template') self._test_get_pxe_conf_option('fake_agent', 'my-agent-config-template') def test__parse_driver_info_missing_deploy_kernel(self): del self.node.driver_info['deploy_kernel'] self.assertRaises(exception.MissingParameterValue, pxe._parse_driver_info, self.node) def test__parse_driver_info_missing_deploy_ramdisk(self): del self.node.driver_info['deploy_ramdisk'] self.assertRaises(exception.MissingParameterValue, pxe._parse_driver_info, self.node) def test__parse_driver_info(self): expected_info = {'deploy_ramdisk': 'glance://deploy_ramdisk_uuid', 'deploy_kernel': 'glance://deploy_kernel_uuid'} image_info = pxe._parse_driver_info(self.node) self.assertEqual(expected_info, image_info) def test__get_deploy_image_info(self): expected_info = {'deploy_ramdisk': (DRV_INFO_DICT['deploy_ramdisk'], os.path.join(CONF.pxe.tftp_root, self.node.uuid, 'deploy_ramdisk')), 'deploy_kernel': (DRV_INFO_DICT['deploy_kernel'], os.path.join(CONF.pxe.tftp_root, self.node.uuid, 'deploy_kernel'))} image_info = pxe._get_deploy_image_info(self.node) self.assertEqual(expected_info, image_info) def test__get_deploy_image_info_missing_deploy_kernel(self): del 
self.node.driver_info['deploy_kernel'] self.assertRaises(exception.MissingParameterValue, pxe._get_deploy_image_info, self.node) def test__get_deploy_image_info_deploy_ramdisk(self): del self.node.driver_info['deploy_ramdisk'] self.assertRaises(exception.MissingParameterValue, pxe._get_deploy_image_info, self.node) @mock.patch.object(base_image_service.BaseImageService, '_show', autospec=True) def _test__get_instance_image_info(self, show_mock): properties = {'properties': {u'kernel_id': u'instance_kernel_uuid', u'ramdisk_id': u'instance_ramdisk_uuid'}} expected_info = {'ramdisk': ('instance_ramdisk_uuid', os.path.join(CONF.pxe.tftp_root, self.node.uuid, 'ramdisk')), 'kernel': ('instance_kernel_uuid', os.path.join(CONF.pxe.tftp_root, self.node.uuid, 'kernel'))} show_mock.return_value = properties self.context.auth_token = 'fake' image_info = pxe._get_instance_image_info(self.node, self.context) show_mock.assert_called_once_with(mock.ANY, 'glance://image_uuid', method='get') self.assertEqual(expected_info, image_info) # test with saved info show_mock.reset_mock() image_info = pxe._get_instance_image_info(self.node, self.context) self.assertEqual(expected_info, image_info) self.assertFalse(show_mock.called) self.assertEqual('instance_kernel_uuid', self.node.instance_info.get('kernel')) self.assertEqual('instance_ramdisk_uuid', self.node.instance_info.get('ramdisk')) def test__get_instance_image_info(self): # Tests when 'is_whole_disk_image' exists in driver_internal_info self._test__get_instance_image_info() def test__get_instance_image_info_without_is_whole_disk_image(self): # Tests when 'is_whole_disk_image' doesn't exists in # driver_internal_info del self.node.driver_internal_info['is_whole_disk_image'] self.node.save() self._test__get_instance_image_info() @mock.patch.object(base_image_service.BaseImageService, '_show', autospec=True) def test__get_instance_image_info_whole_disk_image(self, show_mock): properties = {'properties': None} show_mock.return_value = properties self.node.driver_internal_info['is_whole_disk_image'] = True image_info = pxe._get_instance_image_info(self.node, self.context) self.assertEqual({}, image_info) @mock.patch.object(pxe_utils, '_build_pxe_config', autospec=True) def _test_build_pxe_config_options(self, build_pxe_mock, whle_dsk_img=False, ipxe_enabled=False, ipxe_timeout=0): self.config(pxe_append_params='test_param', group='pxe') # NOTE: right '/' should be removed from url string self.config(api_url='http://192.168.122.184:6385', group='conductor') self.config(disk_devices='sda', group='pxe') self.config(ipxe_timeout=ipxe_timeout, group='pxe') driver_internal_info = self.node.driver_internal_info driver_internal_info['is_whole_disk_image'] = whle_dsk_img self.node.driver_internal_info = driver_internal_info self.node.save() tftp_server = CONF.pxe.tftp_server if ipxe_enabled: http_url = 'http://192.1.2.3:1234' self.config(ipxe_enabled=True, group='pxe') self.config(http_url=http_url, group='deploy') deploy_kernel = os.path.join(http_url, self.node.uuid, 'deploy_kernel') deploy_ramdisk = os.path.join(http_url, self.node.uuid, 'deploy_ramdisk') kernel = os.path.join(http_url, self.node.uuid, 'kernel') ramdisk = os.path.join(http_url, self.node.uuid, 'ramdisk') root_dir = CONF.deploy.http_root else: deploy_kernel = os.path.join(CONF.pxe.tftp_root, self.node.uuid, 'deploy_kernel') deploy_ramdisk = os.path.join(CONF.pxe.tftp_root, self.node.uuid, 'deploy_ramdisk') kernel = os.path.join(CONF.pxe.tftp_root, self.node.uuid, 'kernel') ramdisk = 
os.path.join(CONF.pxe.tftp_root, self.node.uuid, 'ramdisk') root_dir = CONF.pxe.tftp_root if whle_dsk_img: ramdisk = 'no_ramdisk' kernel = 'no_kernel' ipxe_timeout_in_ms = ipxe_timeout * 1000 expected_options = { 'ari_path': ramdisk, 'deployment_ari_path': deploy_ramdisk, 'pxe_append_params': 'test_param', 'aki_path': kernel, 'deployment_aki_path': deploy_kernel, 'tftp_server': tftp_server, 'ipxe_timeout': ipxe_timeout_in_ms, } image_info = {'deploy_kernel': ('deploy_kernel', os.path.join(root_dir, self.node.uuid, 'deploy_kernel')), 'deploy_ramdisk': ('deploy_ramdisk', os.path.join(root_dir, self.node.uuid, 'deploy_ramdisk')), 'kernel': ('kernel_id', os.path.join(root_dir, self.node.uuid, 'kernel')), 'ramdisk': ('ramdisk_id', os.path.join(root_dir, self.node.uuid, 'ramdisk'))} with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: options = pxe._build_pxe_config_options(task, image_info) self.assertEqual(expected_options, options) def test__build_pxe_config_options(self): self._test_build_pxe_config_options(whle_dsk_img=True, ipxe_enabled=False) def test__build_pxe_config_options_ipxe(self): self._test_build_pxe_config_options(whle_dsk_img=True, ipxe_enabled=True) def test__build_pxe_config_options_without_is_whole_disk_image(self): del self.node.driver_internal_info['is_whole_disk_image'] self.node.save() self._test_build_pxe_config_options(whle_dsk_img=False, ipxe_enabled=False) def test__build_pxe_config_options_ipxe_and_ipxe_timeout(self): self._test_build_pxe_config_options(whle_dsk_img=True, ipxe_enabled=True, ipxe_timeout=120) @mock.patch.object(pxe_utils, '_build_pxe_config', autospec=True) def test__build_pxe_config_options_whole_disk_image(self, build_pxe_mock, ipxe_enabled=False): self.config(pxe_append_params='test_param', group='pxe') # NOTE: right '/' should be removed from url string self.config(api_url='http://192.168.122.184:6385', group='conductor') self.config(disk_devices='sda', group='pxe') tftp_server = CONF.pxe.tftp_server if ipxe_enabled: http_url = 'http://192.1.2.3:1234' self.config(ipxe_enabled=True, group='pxe') self.config(http_url=http_url, group='deploy') deploy_kernel = os.path.join(http_url, self.node.uuid, 'deploy_kernel') deploy_ramdisk = os.path.join(http_url, self.node.uuid, 'deploy_ramdisk') root_dir = CONF.deploy.http_root else: deploy_kernel = os.path.join(CONF.pxe.tftp_root, self.node.uuid, 'deploy_kernel') deploy_ramdisk = os.path.join(CONF.pxe.tftp_root, self.node.uuid, 'deploy_ramdisk') root_dir = CONF.pxe.tftp_root expected_options = { 'deployment_ari_path': deploy_ramdisk, 'pxe_append_params': 'test_param', 'deployment_aki_path': deploy_kernel, 'tftp_server': tftp_server, 'aki_path': 'no_kernel', 'ari_path': 'no_ramdisk', 'ipxe_timeout': 0, } image_info = {'deploy_kernel': ('deploy_kernel', os.path.join(root_dir, self.node.uuid, 'deploy_kernel')), 'deploy_ramdisk': ('deploy_ramdisk', os.path.join(root_dir, self.node.uuid, 'deploy_ramdisk')), } driver_internal_info = self.node.driver_internal_info driver_internal_info['is_whole_disk_image'] = True self.node.driver_internal_info = driver_internal_info self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: options = pxe._build_pxe_config_options(task, image_info) self.assertEqual(expected_options, options) def test__build_pxe_config_options_no_kernel_no_ramdisk(self): del self.node.driver_internal_info['is_whole_disk_image'] self.node.save() self.config(group='pxe', tftp_server='my-tftp-server') self.config(group='pxe', 
pxe_append_params='my-pxe-append-params') image_info = { 'deploy_kernel': ('deploy_kernel', 'path-to-deploy_kernel'), 'deploy_ramdisk': ('deploy_ramdisk', 'path-to-deploy_ramdisk')} with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: options = pxe._build_pxe_config_options(task, image_info) expected_options = { 'deployment_aki_path': 'path-to-deploy_kernel', 'deployment_ari_path': 'path-to-deploy_ramdisk', 'pxe_append_params': 'my-pxe-append-params', 'tftp_server': 'my-tftp-server', 'aki_path': 'no_kernel', 'ari_path': 'no_ramdisk', 'ipxe_timeout': 0} self.assertEqual(expected_options, options) @mock.patch.object(deploy_utils, 'fetch_images', autospec=True) def test__cache_tftp_images_master_path(self, mock_fetch_image): temp_dir = tempfile.mkdtemp() self.config(tftp_root=temp_dir, group='pxe') self.config(tftp_master_path=os.path.join(temp_dir, 'tftp_master_path'), group='pxe') image_path = os.path.join(temp_dir, self.node.uuid, 'deploy_kernel') image_info = {'deploy_kernel': ('deploy_kernel', image_path)} fileutils.ensure_tree(CONF.pxe.tftp_master_path) pxe._cache_ramdisk_kernel(None, self.node, image_info) mock_fetch_image.assert_called_once_with(None, mock.ANY, [('deploy_kernel', image_path)], True) @mock.patch.object(pxe, 'TFTPImageCache', lambda: None) @mock.patch.object(fileutils, 'ensure_tree', autospec=True) @mock.patch.object(deploy_utils, 'fetch_images', autospec=True) def test__cache_ramdisk_kernel(self, mock_fetch_image, mock_ensure_tree): self.config(ipxe_enabled=False, group='pxe') fake_pxe_info = {'foo': 'bar'} expected_path = os.path.join(CONF.pxe.tftp_root, self.node.uuid) pxe._cache_ramdisk_kernel(self.context, self.node, fake_pxe_info) mock_ensure_tree.assert_called_with(expected_path) mock_fetch_image.assert_called_once_with( self.context, mock.ANY, list(fake_pxe_info.values()), True) @mock.patch.object(pxe, 'TFTPImageCache', lambda: None) @mock.patch.object(fileutils, 'ensure_tree', autospec=True) @mock.patch.object(deploy_utils, 'fetch_images', autospec=True) def test__cache_ramdisk_kernel_ipxe(self, mock_fetch_image, mock_ensure_tree): self.config(ipxe_enabled=True, group='pxe') fake_pxe_info = {'foo': 'bar'} expected_path = os.path.join(CONF.deploy.http_root, self.node.uuid) pxe._cache_ramdisk_kernel(self.context, self.node, fake_pxe_info) mock_ensure_tree.assert_called_with(expected_path) mock_fetch_image.assert_called_once_with(self.context, mock.ANY, list(fake_pxe_info.values()), True) @mock.patch.object(pxe.LOG, 'error', autospec=True) def test_validate_boot_option_for_uefi_exc(self, mock_log): properties = {'capabilities': 'boot_mode:uefi'} instance_info = {"boot_option": "netboot"} self.node.properties = properties self.node.instance_info['capabilities'] = instance_info self.node.driver_internal_info['is_whole_disk_image'] = True self.assertRaises(exception.InvalidParameterValue, pxe.validate_boot_option_for_uefi, self.node) self.assertTrue(mock_log.called) @mock.patch.object(pxe.LOG, 'error', autospec=True) def test_validate_boot_option_for_uefi_noexc_one(self, mock_log): properties = {'capabilities': 'boot_mode:uefi'} instance_info = {"boot_option": "local"} self.node.properties = properties self.node.instance_info['capabilities'] = instance_info self.node.driver_internal_info['is_whole_disk_image'] = True pxe.validate_boot_option_for_uefi(self.node) self.assertFalse(mock_log.called) @mock.patch.object(pxe.LOG, 'error', autospec=True) def test_validate_boot_option_for_uefi_noexc_two(self, mock_log): properties = {'capabilities': 
                      'boot_mode:bios'}
        instance_info = {"boot_option": "local"}
        self.node.properties = properties
        self.node.instance_info['capabilities'] = instance_info
        self.node.driver_internal_info['is_whole_disk_image'] = True
        pxe.validate_boot_option_for_uefi(self.node)
        self.assertFalse(mock_log.called)

    @mock.patch.object(pxe.LOG, 'error', autospec=True)
    def test_validate_boot_option_for_uefi_noexc_three(self, mock_log):
        properties = {'capabilities': 'boot_mode:uefi'}
        instance_info = {"boot_option": "local"}
        self.node.properties = properties
        self.node.instance_info['capabilities'] = instance_info
        self.node.driver_internal_info['is_whole_disk_image'] = False
        pxe.validate_boot_option_for_uefi(self.node)
        self.assertFalse(mock_log.called)

    @mock.patch.object(pxe.LOG, 'error', autospec=True)
    def test_validate_boot_parameters_for_trusted_boot_one(self, mock_log):
        properties = {'capabilities': 'boot_mode:uefi'}
        instance_info = {"boot_option": "netboot"}
        self.node.properties = properties
        self.node.instance_info['capabilities'] = instance_info
        self.node.driver_internal_info['is_whole_disk_image'] = False
        self.assertRaises(exception.InvalidParameterValue,
                          pxe.validate_boot_parameters_for_trusted_boot,
                          self.node)
        self.assertTrue(mock_log.called)

    @mock.patch.object(pxe.LOG, 'error', autospec=True)
    def test_validate_boot_parameters_for_trusted_boot_two(self, mock_log):
        properties = {'capabilities': 'boot_mode:bios'}
        instance_info = {"boot_option": "local"}
        self.node.properties = properties
        self.node.instance_info['capabilities'] = instance_info
        self.node.driver_internal_info['is_whole_disk_image'] = False
        self.assertRaises(exception.InvalidParameterValue,
                          pxe.validate_boot_parameters_for_trusted_boot,
                          self.node)
        self.assertTrue(mock_log.called)

    @mock.patch.object(pxe.LOG, 'error', autospec=True)
    def test_validate_boot_parameters_for_trusted_boot_three(self, mock_log):
        properties = {'capabilities': 'boot_mode:bios'}
        instance_info = {"boot_option": "netboot"}
        self.node.properties = properties
        self.node.instance_info['capabilities'] = instance_info
        self.node.driver_internal_info['is_whole_disk_image'] = True
        self.assertRaises(exception.InvalidParameterValue,
                          pxe.validate_boot_parameters_for_trusted_boot,
                          self.node)
        self.assertTrue(mock_log.called)

    @mock.patch.object(pxe.LOG, 'error', autospec=True)
    def test_validate_boot_parameters_for_trusted_boot_pass(self, mock_log):
        properties = {'capabilities': 'boot_mode:bios'}
        instance_info = {"boot_option": "netboot"}
        self.node.properties = properties
        self.node.instance_info['capabilities'] = instance_info
        self.node.driver_internal_info['is_whole_disk_image'] = False
        pxe.validate_boot_parameters_for_trusted_boot(self.node)
        self.assertFalse(mock_log.called)


@mock.patch.object(ironic_utils, 'unlink_without_raise', autospec=True)
@mock.patch.object(pxe_utils, 'clean_up_pxe_config', autospec=True)
@mock.patch.object(pxe, 'TFTPImageCache', autospec=True)
class CleanUpPxeEnvTestCase(db_base.DbTestCase):

    def setUp(self):
        super(CleanUpPxeEnvTestCase, self).setUp()
        mgr_utils.mock_the_extension_manager(driver="fake_pxe")
        instance_info = INST_INFO_DICT
        instance_info['deploy_key'] = 'fake-56789'
        self.node = obj_utils.create_test_node(
            self.context, driver='fake_pxe',
            instance_info=instance_info,
            driver_info=DRV_INFO_DICT,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )

    def test__clean_up_pxe_env(self, mock_cache, mock_pxe_clean,
                               mock_unlink):
        image_info = {'label': ['', 'deploy_kernel']}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            pxe._clean_up_pxe_env(task,
                                  image_info)
            mock_pxe_clean.assert_called_once_with(task)
            mock_unlink.assert_any_call('deploy_kernel')
        mock_cache.return_value.clean_up.assert_called_once_with()


class PXEBootTestCase(db_base.DbTestCase):

    def setUp(self):
        super(PXEBootTestCase, self).setUp()
        self.context.auth_token = 'fake'
        self.temp_dir = tempfile.mkdtemp()
        self.config(tftp_root=self.temp_dir, group='pxe')
        self.temp_dir = tempfile.mkdtemp()
        self.config(images_path=self.temp_dir, group='pxe')
        mgr_utils.mock_the_extension_manager(driver="fake_pxe")
        instance_info = INST_INFO_DICT
        instance_info['deploy_key'] = 'fake-56789'
        self.node = obj_utils.create_test_node(
            self.context,
            driver='fake_pxe',
            instance_info=instance_info,
            driver_info=DRV_INFO_DICT,
            driver_internal_info=DRV_INTERNAL_INFO_DICT)
        self.port = obj_utils.create_test_port(self.context,
                                               node_id=self.node.id)
        self.config(group='conductor', api_url='http://127.0.0.1:1234/')

    def test_get_properties(self):
        expected = pxe.COMMON_PROPERTIES
        expected.update(agent_base_vendor.VENDOR_PROPERTIES)
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertEqual(expected, task.driver.get_properties())

    @mock.patch.object(base_image_service.BaseImageService, '_show',
                       autospec=True)
    def test_validate_good(self, mock_glance):
        mock_glance.return_value = {'properties': {'kernel_id': 'fake-kernel',
                                                   'ramdisk_id': 'fake-initr'}}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.boot.validate(task)

    @mock.patch.object(base_image_service.BaseImageService, '_show',
                       autospec=True)
    def test_validate_good_whole_disk_image(self, mock_glance):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.driver_internal_info['is_whole_disk_image'] = True
            task.driver.boot.validate(task)

    def test_validate_fail_missing_deploy_kernel(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            del task.node.driver_info['deploy_kernel']
            self.assertRaises(exception.MissingParameterValue,
                              task.driver.boot.validate, task)

    def test_validate_fail_missing_deploy_ramdisk(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            del task.node.driver_info['deploy_ramdisk']
            self.assertRaises(exception.MissingParameterValue,
                              task.driver.boot.validate, task)

    def test_validate_fail_missing_image_source(self):
        info = dict(INST_INFO_DICT)
        del info['image_source']
        self.node.instance_info = json.dumps(info)
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node['instance_info'] = json.dumps(info)
            self.assertRaises(exception.MissingParameterValue,
                              task.driver.boot.validate, task)

    def test_validate_fail_invalid_config_uefi_whole_disk_image(self):
        properties = {'capabilities': 'boot_mode:uefi,boot_option:netboot'}
        instance_info = {"boot_option": "netboot"}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.properties = properties
            task.node.instance_info['capabilities'] = instance_info
            task.node.driver_internal_info['is_whole_disk_image'] = True
            self.assertRaises(exception.InvalidParameterValue,
                              task.driver.boot.validate, task)

    def test_validate_fail_no_port(self):
        new_node = obj_utils.create_test_node(
            self.context,
            uuid='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee',
            driver='fake_pxe', instance_info=INST_INFO_DICT,
            driver_info=DRV_INFO_DICT)
        with task_manager.acquire(self.context, new_node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.MissingParameterValue,
                              task.driver.boot.validate, task)

    def test_validate_fail_trusted_boot_with_secure_boot(self):
        instance_info = {"boot_option": "netboot",
                         "secure_boot": "true",
                         "trusted_boot": "true"}
        properties = {'capabilities': 'trusted_boot:true'}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.instance_info['capabilities'] = instance_info
            task.node.properties = properties
            task.node.driver_internal_info['is_whole_disk_image'] = False
            self.assertRaises(exception.InvalidParameterValue,
                              task.driver.boot.validate, task)

    def test_validate_fail_invalid_trusted_boot_value(self):
        properties = {'capabilities': 'trusted_boot:value'}
        instance_info = {"trusted_boot": "value"}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.properties = properties
            task.node.instance_info['capabilities'] = instance_info
            self.assertRaises(exception.InvalidParameterValue,
                              task.driver.boot.validate, task)

    @mock.patch.object(base_image_service.BaseImageService, '_show',
                       autospec=True)
    def test_validate_fail_no_image_kernel_ramdisk_props(self, mock_glance):
        mock_glance.return_value = {'properties': {}}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.MissingParameterValue,
                              task.driver.boot.validate,
                              task)

    @mock.patch.object(base_image_service.BaseImageService, '_show',
                       autospec=True)
    def test_validate_fail_glance_image_doesnt_exists(self, mock_glance):
        mock_glance.side_effect = iter([exception.ImageNotFound('not found')])
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              task.driver.boot.validate, task)

    @mock.patch.object(base_image_service.BaseImageService, '_show',
                       autospec=True)
    def test_validate_fail_glance_conn_problem(self, mock_glance):
        exceptions = (exception.GlanceConnectionFailed('connection fail'),
                      exception.ImageNotAuthorized('not authorized'),
                      exception.Invalid('invalid'))
        mock_glance.side_effect = iter(exceptions)
        for exc in exceptions:
            with task_manager.acquire(self.context, self.node.uuid,
                                      shared=True) as task:
                self.assertRaises(exception.InvalidParameterValue,
                                  task.driver.boot.validate, task)

    @mock.patch.object(dhcp_factory, 'DHCPFactory')
    @mock.patch.object(pxe, '_get_instance_image_info', autospec=True)
    @mock.patch.object(pxe, '_get_deploy_image_info', autospec=True)
    @mock.patch.object(pxe, '_cache_ramdisk_kernel', autospec=True)
    @mock.patch.object(pxe, '_build_pxe_config_options', autospec=True)
    @mock.patch.object(pxe_utils, 'create_pxe_config', autospec=True)
    def _test_prepare_ramdisk(self, mock_pxe_config,
                              mock_build_pxe, mock_cache_r_k,
                              mock_deploy_img_info,
                              mock_instance_img_info,
                              dhcp_factory_mock, uefi=False,
                              cleaning=False):
        mock_build_pxe.return_value = {}
        mock_deploy_img_info.return_value = {'deploy_kernel': 'a'}
        mock_instance_img_info.return_value = {'kernel': 'b'}
        mock_pxe_config.return_value = None
        mock_cache_r_k.return_value = None
        provider_mock = mock.MagicMock()
        dhcp_factory_mock.return_value = provider_mock
        with task_manager.acquire(self.context, self.node.uuid) as task:
            dhcp_opts = pxe_utils.dhcp_options_for_instance(task)
            task.driver.boot.prepare_ramdisk(task, {'foo': 'bar'})
            mock_deploy_img_info.assert_called_once_with(task.node)
            provider_mock.update_dhcp.assert_called_once_with(task, dhcp_opts)
            if cleaning is False:
                mock_cache_r_k.assert_called_once_with(
                    self.context, task.node,
                    {'deploy_kernel': 'a', 'kernel': 'b'})
                mock_instance_img_info.assert_called_once_with(task.node,
                                                               self.context)
            else:
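                # During cleaning only the deploy kernel/ramdisk pair is
                # cached; no instance image info should be mixed in.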
                mock_cache_r_k.assert_called_once_with(
                    self.context, task.node, {'deploy_kernel': 'a'})
            if uefi:
                mock_pxe_config.assert_called_once_with(
                    task, {'foo': 'bar'},
                    CONF.pxe.uefi_pxe_config_template)
            else:
                mock_pxe_config.assert_called_once_with(
                    task, {'foo': 'bar'},
                    CONF.pxe.pxe_config_template)

    def test_prepare_ramdisk(self):
        self.node.provision_state = states.DEPLOYING
        self.node.save()
        self._test_prepare_ramdisk()

    def test_prepare_ramdisk_uefi(self):
        self.node.provision_state = states.DEPLOYING
        self.node.save()
        properties = self.node.properties
        properties['capabilities'] = 'boot_mode:uefi'
        self.node.properties = properties
        self.node.save()
        self._test_prepare_ramdisk(uefi=True)

    @mock.patch.object(shutil, 'copyfile', autospec=True)
    def test_prepare_ramdisk_ipxe(self, copyfile_mock):
        self.node.provision_state = states.DEPLOYING
        self.node.save()
        self.config(group='pxe', ipxe_enabled=True)
        self.config(group='deploy', http_url='http://myserver')
        self._test_prepare_ramdisk()
        copyfile_mock.assert_called_once_with(
            CONF.pxe.ipxe_boot_script,
            os.path.join(
                CONF.deploy.http_root,
                os.path.basename(CONF.pxe.ipxe_boot_script)))

    def test_prepare_ramdisk_cleaning(self):
        self.node.provision_state = states.CLEANING
        self.node.save()
        self._test_prepare_ramdisk(cleaning=True)

    @mock.patch.object(pxe, '_clean_up_pxe_env', autospec=True)
    @mock.patch.object(pxe, '_get_deploy_image_info', autospec=True)
    def test_clean_up_ramdisk(self, get_deploy_image_info_mock,
                              clean_up_pxe_env_mock):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            image_info = {'deploy_kernel': ['', '/path/to/deploy_kernel'],
                          'deploy_ramdisk': ['', '/path/to/deploy_ramdisk']}
            get_deploy_image_info_mock.return_value = image_info
            task.driver.boot.clean_up_ramdisk(task)
            clean_up_pxe_env_mock.assert_called_once_with(task, image_info)
            get_deploy_image_info_mock.assert_called_once_with(task.node)

    @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True)
    @mock.patch.object(deploy_utils, 'switch_pxe_config', autospec=True)
    @mock.patch.object(dhcp_factory, 'DHCPFactory', autospec=True)
    @mock.patch.object(pxe, '_cache_ramdisk_kernel', autospec=True)
    @mock.patch.object(pxe, '_get_instance_image_info', autospec=True)
    def test_prepare_instance_netboot(
            self, get_image_info_mock, cache_mock,
            dhcp_factory_mock, switch_pxe_config_mock,
            set_boot_device_mock):
        provider_mock = mock.MagicMock()
        dhcp_factory_mock.return_value = provider_mock
        image_info = {'kernel': ('', '/path/to/kernel'),
                      'ramdisk': ('', '/path/to/ramdisk')}
        get_image_info_mock.return_value = image_info
        with task_manager.acquire(self.context, self.node.uuid) as task:
            dhcp_opts = pxe_utils.dhcp_options_for_instance(task)
            pxe_config_path = pxe_utils.get_pxe_config_file_path(
                task.node.uuid)
            task.node.properties['capabilities'] = 'boot_mode:bios'
            task.node.driver_internal_info['root_uuid_or_disk_id'] = (
                "30212642-09d3-467f-8e09-21685826ab50")
            task.node.driver_internal_info['is_whole_disk_image'] = False

            task.driver.boot.prepare_instance(task)

            get_image_info_mock.assert_called_once_with(
                task.node, task.context)
            cache_mock.assert_called_once_with(
                task.context, task.node, image_info)
            provider_mock.update_dhcp.assert_called_once_with(task, dhcp_opts)
            switch_pxe_config_mock.assert_called_once_with(
                pxe_config_path, "30212642-09d3-467f-8e09-21685826ab50",
                'bios', False, False)
            set_boot_device_mock.assert_called_once_with(task,
                                                         boot_devices.PXE)

    @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True)
    @mock.patch.object(deploy_utils, 'switch_pxe_config',
                       autospec=True)
    @mock.patch.object(dhcp_factory, 'DHCPFactory')
    @mock.patch.object(pxe, '_cache_ramdisk_kernel', autospec=True)
    @mock.patch.object(pxe, '_get_instance_image_info', autospec=True)
    def test_prepare_instance_netboot_missing_root_uuid(
            self, get_image_info_mock, cache_mock,
            dhcp_factory_mock, switch_pxe_config_mock,
            set_boot_device_mock):
        provider_mock = mock.MagicMock()
        dhcp_factory_mock.return_value = provider_mock
        image_info = {'kernel': ('', '/path/to/kernel'),
                      'ramdisk': ('', '/path/to/ramdisk')}
        get_image_info_mock.return_value = image_info
        with task_manager.acquire(self.context, self.node.uuid) as task:
            dhcp_opts = pxe_utils.dhcp_options_for_instance(task)
            task.node.properties['capabilities'] = 'boot_mode:bios'
            task.node.driver_internal_info['is_whole_disk_image'] = False

            task.driver.boot.prepare_instance(task)

            get_image_info_mock.assert_called_once_with(
                task.node, task.context)
            cache_mock.assert_called_once_with(
                task.context, task.node, image_info)
            provider_mock.update_dhcp.assert_called_once_with(task, dhcp_opts)
            self.assertFalse(switch_pxe_config_mock.called)
            self.assertFalse(set_boot_device_mock.called)

    @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True)
    @mock.patch.object(pxe_utils, 'clean_up_pxe_config', autospec=True)
    def test_prepare_instance_localboot(self, clean_up_pxe_config_mock,
                                        set_boot_device_mock):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node.instance_info['capabilities'] = {'boot_option': 'local'}
            task.driver.boot.prepare_instance(task)
            clean_up_pxe_config_mock.assert_called_once_with(task)
            set_boot_device_mock.assert_called_once_with(task,
                                                         boot_devices.DISK)

    @mock.patch.object(pxe, '_clean_up_pxe_env', autospec=True)
    @mock.patch.object(pxe, '_get_instance_image_info', autospec=True)
    def test_clean_up_instance(self, get_image_info_mock,
                               clean_up_pxe_env_mock):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            image_info = {'kernel': ['', '/path/to/kernel'],
                          'ramdisk': ['', '/path/to/ramdisk']}
            get_image_info_mock.return_value = image_info
            task.driver.boot.clean_up_instance(task)
            clean_up_pxe_env_mock.assert_called_once_with(task, image_info)
            get_image_info_mock.assert_called_once_with(
                task.node, task.context)

ironic-5.1.0/ironic/tests/unit/drivers/modules/test_console_utils.py

# coding=utf-8

# Copyright 2014 International Business Machines Corporation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
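# The tests below drive the shellinabox console helpers entirely through
# mocks. As the path assertions later in this file suggest, the PID file
# for a node's console is expected at <pid_dir>/<node_uuid>.pid, where
# <pid_dir> falls back to CONF.tempdir when console.terminal_pid_dir is
# unset.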
"""Test class for console_utils driver module.""" import errno import os import psutil import random import signal import string import subprocess import tempfile from ironic_lib import utils as ironic_utils import mock from oslo_config import cfg from oslo_utils import netutils from ironic.common import exception from ironic.drivers.modules import console_utils from ironic.drivers.modules import ipmitool as ipmi from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils CONF = cfg.CONF INFO_DICT = db_utils.get_test_ipmi_info() class ConsoleUtilsTestCase(db_base.DbTestCase): def setUp(self): super(ConsoleUtilsTestCase, self).setUp() self.node = obj_utils.get_test_node( self.context, driver='fake_ipmitool', driver_info=INFO_DICT) self.info = ipmi._parse_driver_info(self.node) def test__get_console_pid_dir(self): pid_dir = '/tmp/pid_dir' self.config(terminal_pid_dir=pid_dir, group='console') dir = console_utils._get_console_pid_dir() self.assertEqual(pid_dir, dir) def test__get_console_pid_dir_tempdir(self): self.config(tempdir='/tmp/fake_dir') dir = console_utils._get_console_pid_dir() self.assertEqual(CONF.tempdir, dir) @mock.patch.object(os, 'makedirs', autospec=True) @mock.patch.object(os.path, 'exists', autospec=True) def test__ensure_console_pid_dir_exists(self, mock_path_exists, mock_makedirs): mock_path_exists.return_value = True mock_makedirs.side_effect = OSError pid_dir = console_utils._get_console_pid_dir() console_utils._ensure_console_pid_dir_exists() mock_path_exists.assert_called_once_with(pid_dir) self.assertFalse(mock_makedirs.called) @mock.patch.object(os, 'makedirs', autospec=True) @mock.patch.object(os.path, 'exists', autospec=True) def test__ensure_console_pid_dir_exists_fail(self, mock_path_exists, mock_makedirs): mock_path_exists.return_value = False mock_makedirs.side_effect = OSError pid_dir = console_utils._get_console_pid_dir() self.assertRaises(exception.ConsoleError, console_utils._ensure_console_pid_dir_exists) mock_path_exists.assert_called_once_with(pid_dir) mock_makedirs.assert_called_once_with(pid_dir) @mock.patch.object(console_utils, '_get_console_pid_dir', autospec=True) def test__get_console_pid_file(self, mock_dir): mock_dir.return_value = tempfile.gettempdir() expected_path = '%(tempdir)s/%(uuid)s.pid' % { 'tempdir': mock_dir.return_value, 'uuid': self.info.get('uuid')} path = console_utils._get_console_pid_file(self.info['uuid']) self.assertEqual(expected_path, path) mock_dir.assert_called_once_with() @mock.patch.object(console_utils, '_get_console_pid_file', autospec=True) def test__get_console_pid(self, mock_exec): tmp_file_handle = tempfile.NamedTemporaryFile() tmp_file = tmp_file_handle.name self.addCleanup(ironic_utils.unlink_without_raise, tmp_file) with open(tmp_file, "w") as f: f.write("12345\n") mock_exec.return_value = tmp_file pid = console_utils._get_console_pid(self.info['uuid']) mock_exec.assert_called_once_with(self.info['uuid']) self.assertEqual(pid, 12345) @mock.patch.object(console_utils, '_get_console_pid_file', autospec=True) def test__get_console_pid_not_a_num(self, mock_exec): tmp_file_handle = tempfile.NamedTemporaryFile() tmp_file = tmp_file_handle.name self.addCleanup(ironic_utils.unlink_without_raise, tmp_file) with open(tmp_file, "w") as f: f.write("Hello World\n") mock_exec.return_value = tmp_file self.assertRaises(exception.NoConsolePid, console_utils._get_console_pid, self.info['uuid']) 
        mock_exec.assert_called_once_with(self.info['uuid'])

    def test__get_console_pid_file_not_found(self):
        self.assertRaises(exception.NoConsolePid,
                          console_utils._get_console_pid,
                          self.info['uuid'])

    @mock.patch.object(ironic_utils, 'unlink_without_raise', autospec=True)
    @mock.patch.object(os, 'kill', autospec=True)
    @mock.patch.object(console_utils, '_get_console_pid', autospec=True)
    def test__stop_console(self, mock_pid, mock_kill, mock_unlink):
        pid_file = console_utils._get_console_pid_file(self.info['uuid'])
        mock_pid.return_value = 12345

        console_utils._stop_console(self.info['uuid'])

        mock_pid.assert_called_once_with(self.info['uuid'])
        mock_kill.assert_called_once_with(mock_pid.return_value,
                                          signal.SIGTERM)
        mock_unlink.assert_called_once_with(pid_file)

    @mock.patch.object(ironic_utils, 'unlink_without_raise', autospec=True)
    @mock.patch.object(os, 'kill', autospec=True)
    @mock.patch.object(console_utils, '_get_console_pid', autospec=True)
    def test__stop_console_nopid(self, mock_pid, mock_kill, mock_unlink):
        pid_file = console_utils._get_console_pid_file(self.info['uuid'])
        mock_pid.side_effect = iter(
            [exception.NoConsolePid(pid_path="/tmp/blah")])

        self.assertRaises(exception.NoConsolePid,
                          console_utils._stop_console,
                          self.info['uuid'])

        mock_pid.assert_called_once_with(self.info['uuid'])
        self.assertFalse(mock_kill.called)
        mock_unlink.assert_called_once_with(pid_file)

    @mock.patch.object(ironic_utils, 'unlink_without_raise', autospec=True)
    @mock.patch.object(os, 'kill', autospec=True)
    @mock.patch.object(console_utils, '_get_console_pid', autospec=True)
    def test__stop_console_shellinabox_not_running(self, mock_pid,
                                                   mock_kill, mock_unlink):
        pid_file = console_utils._get_console_pid_file(self.info['uuid'])
        mock_pid.return_value = 12345
        mock_kill.side_effect = OSError(errno.ESRCH, 'message')

        console_utils._stop_console(self.info['uuid'])

        mock_pid.assert_called_once_with(self.info['uuid'])
        mock_kill.assert_called_once_with(mock_pid.return_value,
                                          signal.SIGTERM)
        mock_unlink.assert_called_once_with(pid_file)

    @mock.patch.object(ironic_utils, 'unlink_without_raise', autospec=True)
    @mock.patch.object(os, 'kill', autospec=True)
    @mock.patch.object(console_utils, '_get_console_pid', autospec=True)
    def test__stop_console_exception(self, mock_pid, mock_kill, mock_unlink):
        pid_file = console_utils._get_console_pid_file(self.info['uuid'])
        mock_pid.return_value = 12345
        mock_kill.side_effect = OSError(2, 'message')

        self.assertRaises(exception.ConsoleError,
                          console_utils._stop_console,
                          self.info['uuid'])

        mock_pid.assert_called_once_with(self.info['uuid'])
        mock_kill.assert_called_once_with(mock_pid.return_value,
                                          signal.SIGTERM)
        mock_unlink.assert_called_once_with(pid_file)

    def _get_shellinabox_console(self, scheme):
        generated_url = (
            console_utils.get_shellinabox_console_url(self.info['port']))
        console_host = CONF.my_ip
        if netutils.is_valid_ipv6(console_host):
            console_host = '[%s]' % console_host
        http_url = "%s://%s:%s" % (scheme, console_host, self.info['port'])
        self.assertEqual(http_url, generated_url)

    def test_get_shellinabox_console_url(self):
        self._get_shellinabox_console('http')

    def test_get_shellinabox_console_https_url(self):
        # specify terminal_cert_dir in /etc/ironic/ironic.conf
        self.config(terminal_cert_dir='/tmp', group='console')
        # use https
        self._get_shellinabox_console('https')

    def test_make_persistent_password_file(self):
        filepath = '%(tempdir)s/%(node_uuid)s' % {
            'tempdir': tempfile.gettempdir(),
            'node_uuid': self.info['uuid']}
        password = ''.join([random.choice(string.ascii_letters)
                            for n in range(16)])
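        # A freshly generated 16-character password keeps this test
        # independent of any fixed fixture value; the helper is expected
        # to write it back to disk verbatim.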
        console_utils.make_persistent_password_file(filepath, password)

        # make sure file exists
        self.assertTrue(os.path.exists(filepath))
        # make sure the content is correct
        with open(filepath) as file:
            content = file.read()
        self.assertEqual(password, content)
        # delete the file
        os.unlink(filepath)

    @mock.patch.object(os, 'chmod', autospec=True)
    def test_make_persistent_password_file_fail(self, mock_chmod):
        mock_chmod.side_effect = IOError()
        filepath = '%(tempdir)s/%(node_uuid)s' % {
            'tempdir': tempfile.gettempdir(),
            'node_uuid': self.info['uuid']}
        self.assertRaises(exception.PasswordFileFailedToCreate,
                          console_utils.make_persistent_password_file,
                          filepath, 'password')

    @mock.patch.object(subprocess, 'Popen', autospec=True)
    @mock.patch.object(console_utils, '_get_console_pid', autospec=True)
    @mock.patch.object(psutil, 'pid_exists', autospec=True)
    @mock.patch.object(console_utils, '_ensure_console_pid_dir_exists',
                       autospec=True)
    @mock.patch.object(console_utils, '_stop_console', autospec=True)
    def test_start_shellinabox_console(self, mock_stop, mock_dir_exists,
                                       mock_pid_exists, mock_pid,
                                       mock_popen):
        mock_popen.return_value.poll.return_value = 0
        mock_pid.return_value = 12345
        mock_pid_exists.return_value = True

        # touch the pid file
        pid_file = console_utils._get_console_pid_file(self.info['uuid'])
        open(pid_file, 'a').close()
        self.addCleanup(os.remove, pid_file)
        self.assertTrue(os.path.exists(pid_file))

        console_utils.start_shellinabox_console(self.info['uuid'],
                                                self.info['port'],
                                                'ls&')

        mock_stop.assert_called_once_with(self.info['uuid'])
        mock_dir_exists.assert_called_once_with()
        mock_pid.assert_called_once_with(self.info['uuid'])
        mock_pid_exists.assert_called_once_with(12345)
        mock_popen.assert_called_once_with(mock.ANY,
                                           stdout=subprocess.PIPE,
                                           stderr=subprocess.PIPE)
        mock_popen.return_value.poll.assert_called_once_with()

    @mock.patch.object(subprocess, 'Popen', autospec=True)
    @mock.patch.object(console_utils, '_get_console_pid', autospec=True)
    @mock.patch.object(psutil, 'pid_exists', autospec=True)
    @mock.patch.object(console_utils, '_ensure_console_pid_dir_exists',
                       autospec=True)
    @mock.patch.object(console_utils, '_stop_console', autospec=True)
    def test_start_shellinabox_console_nopid(self, mock_stop,
                                             mock_dir_exists,
                                             mock_pid_exists, mock_pid,
                                             mock_popen):
        # no existing PID file before starting
        mock_stop.side_effect = iter([exception.NoConsolePid('/tmp/blah')])
        mock_popen.return_value.poll.return_value = 0
        mock_pid.return_value = 12345
        mock_pid_exists.return_value = True

        # touch the pid file
        pid_file = console_utils._get_console_pid_file(self.info['uuid'])
        open(pid_file, 'a').close()
        self.addCleanup(os.remove, pid_file)
        self.assertTrue(os.path.exists(pid_file))

        console_utils.start_shellinabox_console(self.info['uuid'],
                                                self.info['port'],
                                                'ls&')

        mock_stop.assert_called_once_with(self.info['uuid'])
        mock_dir_exists.assert_called_once_with()
        mock_pid.assert_called_once_with(self.info['uuid'])
        mock_pid_exists.assert_called_once_with(12345)
        mock_popen.assert_called_once_with(mock.ANY,
                                           stdout=subprocess.PIPE,
                                           stderr=subprocess.PIPE)
        mock_popen.return_value.poll.assert_called_once_with()

    @mock.patch.object(subprocess, 'Popen', autospec=True)
    @mock.patch.object(console_utils, '_ensure_console_pid_dir_exists',
                       autospec=True)
    @mock.patch.object(console_utils, '_stop_console', autospec=True)
    def test_start_shellinabox_console_fail(self, mock_stop,
                                            mock_dir_exists, mock_popen):
        mock_popen.return_value.poll.return_value = 1
        mock_popen.return_value.communicate.return_value = ('output', 'error')
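        # poll() returning non-zero simulates the shellinabox process dying
        # right after launch, which should surface as
        # ConsoleSubprocessFailed.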
        self.assertRaises(exception.ConsoleSubprocessFailed,
                          console_utils.start_shellinabox_console,
                          self.info['uuid'],
                          self.info['port'],
                          'ls&')

        mock_stop.assert_called_once_with(self.info['uuid'])
        mock_dir_exists.assert_called_once_with()
        mock_popen.assert_called_once_with(mock.ANY,
                                           stdout=subprocess.PIPE,
                                           stderr=subprocess.PIPE)
        mock_popen.return_value.poll.assert_called_once_with()

    @mock.patch.object(subprocess, 'Popen', autospec=True)
    @mock.patch.object(console_utils, '_get_console_pid', autospec=True)
    @mock.patch.object(psutil, 'pid_exists', autospec=True)
    @mock.patch.object(console_utils, '_ensure_console_pid_dir_exists',
                       autospec=True)
    @mock.patch.object(console_utils, '_stop_console', autospec=True)
    def test_start_shellinabox_console_fail_no_pid(self, mock_stop,
                                                   mock_dir_exists,
                                                   mock_pid_exists,
                                                   mock_pid, mock_popen):
        mock_popen.return_value.poll.return_value = 0
        mock_pid.return_value = 12345
        mock_pid_exists.return_value = False
        mock_popen.return_value.communicate.return_value = ('output', 'error')

        # touch the pid file
        pid_file = console_utils._get_console_pid_file(self.info['uuid'])
        open(pid_file, 'a').close()
        self.addCleanup(os.remove, pid_file)
        self.assertTrue(os.path.exists(pid_file))

        self.assertRaises(exception.ConsoleSubprocessFailed,
                          console_utils.start_shellinabox_console,
                          self.info['uuid'],
                          self.info['port'],
                          'ls&')

        mock_stop.assert_called_once_with(self.info['uuid'])
        mock_dir_exists.assert_called_once_with()
        mock_pid.assert_called_once_with(self.info['uuid'])
        mock_pid_exists.assert_called_once_with(12345)
        mock_popen.assert_called_once_with(mock.ANY,
                                           stdout=subprocess.PIPE,
                                           stderr=subprocess.PIPE)
        mock_popen.return_value.poll.assert_called_once_with()

    @mock.patch.object(subprocess, 'Popen', autospec=True)
    @mock.patch.object(console_utils, '_ensure_console_pid_dir_exists',
                       autospec=True)
    @mock.patch.object(console_utils, '_stop_console', autospec=True)
    def test_start_shellinabox_console_fail_nopiddir(self, mock_stop,
                                                     mock_dir_exists,
                                                     mock_popen):
        mock_dir_exists.side_effect = iter(
            [exception.ConsoleError(message='fail')])
        mock_popen.return_value.poll.return_value = 0

        self.assertRaises(exception.ConsoleError,
                          console_utils.start_shellinabox_console,
                          self.info['uuid'],
                          self.info['port'],
                          'ls&')

        mock_stop.assert_called_once_with(self.info['uuid'])
        mock_dir_exists.assert_called_once_with()
        self.assertFalse(mock_popen.called)

    @mock.patch.object(console_utils, '_stop_console', autospec=True)
    def test_stop_shellinabox_console(self, mock_stop):
        console_utils.stop_shellinabox_console(self.info['uuid'])
        mock_stop.assert_called_once_with(self.info['uuid'])

    @mock.patch.object(console_utils, '_stop_console', autospec=True)
    def test_stop_shellinabox_console_fail_nopid(self, mock_stop):
        mock_stop.side_effect = iter([exception.NoConsolePid('/tmp/blah')])
        console_utils.stop_shellinabox_console(self.info['uuid'])
        mock_stop.assert_called_once_with(self.info['uuid'])

ironic-5.1.0/ironic/tests/unit/drivers/modules/test_snmp.py

# Copyright 2013,2014 Cray Inc
#
# Authors: David Hewson
#          Stig Telfer
#          Mark Goddard
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Test class for SNMP power driver module."""

import time

import mock
from oslo_config import cfg
from pysnmp.entity.rfc3413.oneliner import cmdgen
from pysnmp import error as snmp_error

from ironic.common import exception
from ironic.common import states
from ironic.conductor import task_manager
from ironic.drivers.modules import snmp as snmp
from ironic.tests import base
from ironic.tests.unit.conductor import mgr_utils
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils

CONF = cfg.CONF

INFO_DICT = db_utils.get_test_snmp_info()


@mock.patch.object(cmdgen, 'CommandGenerator', autospec=True)
class SNMPClientTestCase(base.TestCase):

    def setUp(self):
        super(SNMPClientTestCase, self).setUp()
        self.address = '1.2.3.4'
        self.port = '6700'
        self.oid = 'oid'
        self.value = 'value'

    def test___init__(self, mock_cmdgen):
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V1)
        mock_cmdgen.assert_called_once_with()
        self.assertEqual(self.address, client.address)
        self.assertEqual(self.port, client.port)
        self.assertEqual(snmp.SNMP_V1, client.version)
        self.assertIsNone(client.community)
        self.assertFalse('security' in client.__dict__)
        self.assertEqual(mock_cmdgen.return_value, client.cmd_gen)

    @mock.patch.object(cmdgen, 'CommunityData', autospec=True)
    def test__get_auth_v1(self, mock_community, mock_cmdgen):
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V1)
        client._get_auth()
        mock_cmdgen.assert_called_once_with()
        mock_community.assert_called_once_with(client.community, mpModel=0)

    @mock.patch.object(cmdgen, 'UsmUserData', autospec=True)
    def test__get_auth_v3(self, mock_user, mock_cmdgen):
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
        client._get_auth()
        mock_cmdgen.assert_called_once_with()
        mock_user.assert_called_once_with(client.security)

    @mock.patch.object(cmdgen, 'UdpTransportTarget', autospec=True)
    def test__get_transport(self, mock_transport, mock_cmdgen):
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
        client._get_transport()
        mock_cmdgen.assert_called_once_with()
        mock_transport.assert_called_once_with((client.address, client.port))

    @mock.patch.object(cmdgen, 'UdpTransportTarget', autospec=True)
    def test__get_transport_err(self, mock_transport, mock_cmdgen):
        mock_transport.side_effect = snmp_error.PySnmpError
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
        self.assertRaises(snmp_error.PySnmpError, client._get_transport)
        mock_cmdgen.assert_called_once_with()
        mock_transport.assert_called_once_with((client.address, client.port))

    @mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
    def test_get(self, mock_auth, mock_transport, mock_cmdgen):
        var_bind = (self.oid, self.value)
        mock_cmdgenerator = mock_cmdgen.return_value
        mock_cmdgenerator.getCmd.return_value = ("", None, 0, [var_bind])
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
        val = client.get(self.oid)
        self.assertEqual(var_bind[1], val)
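        # Besides unwrapping the value half of the (oid, value) var-bind,
        # get() must have gone through the command generator exactly once.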
        mock_cmdgenerator.getCmd.assert_called_once_with(mock.ANY, mock.ANY,
                                                         self.oid)

    @mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
    def test_get_next(self, mock_auth, mock_transport, mock_cmdgen):
        var_bind = (self.oid, self.value)
        mock_cmdgenerator = mock_cmdgen.return_value
        mock_cmdgenerator.nextCmd.return_value = (
            "", None, 0, [[var_bind, var_bind]])
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
        val = client.get_next(self.oid)
        self.assertEqual([self.value, self.value], val)
        mock_cmdgenerator.nextCmd.assert_called_once_with(mock.ANY, mock.ANY,
                                                          self.oid)

    @mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
    def test_get_err_transport(self, mock_auth, mock_transport, mock_cmdgen):
        mock_transport.side_effect = snmp_error.PySnmpError
        var_bind = (self.oid, self.value)
        mock_cmdgenerator = mock_cmdgen.return_value
        mock_cmdgenerator.getCmd.return_value = ("engine error", None, 0,
                                                 [var_bind])
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
        self.assertRaises(exception.SNMPFailure, client.get, self.oid)
        self.assertFalse(mock_cmdgenerator.getCmd.called)

    @mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
    def test_get_next_err_transport(self, mock_auth, mock_transport,
                                    mock_cmdgen):
        mock_transport.side_effect = snmp_error.PySnmpError
        var_bind = (self.oid, self.value)
        mock_cmdgenerator = mock_cmdgen.return_value
        mock_cmdgenerator.nextCmd.return_value = ("engine error", None, 0,
                                                  [[var_bind, var_bind]])
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
        self.assertRaises(exception.SNMPFailure, client.get_next, self.oid)
        self.assertFalse(mock_cmdgenerator.nextCmd.called)

    @mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
    def test_get_err_engine(self, mock_auth, mock_transport, mock_cmdgen):
        var_bind = (self.oid, self.value)
        mock_cmdgenerator = mock_cmdgen.return_value
        mock_cmdgenerator.getCmd.return_value = ("engine error", None, 0,
                                                 [var_bind])
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
        self.assertRaises(exception.SNMPFailure, client.get, self.oid)
        mock_cmdgenerator.getCmd.assert_called_once_with(mock.ANY, mock.ANY,
                                                         self.oid)

    @mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
    def test_get_next_err_engine(self, mock_auth, mock_transport,
                                 mock_cmdgen):
        var_bind = (self.oid, self.value)
        mock_cmdgenerator = mock_cmdgen.return_value
        mock_cmdgenerator.nextCmd.return_value = ("engine error", None, 0,
                                                  [[var_bind, var_bind]])
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
        self.assertRaises(exception.SNMPFailure, client.get_next, self.oid)
        mock_cmdgenerator.nextCmd.assert_called_once_with(mock.ANY, mock.ANY,
                                                          self.oid)

    @mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
    def test_set(self, mock_auth, mock_transport, mock_cmdgen):
        var_bind = (self.oid, self.value)
        mock_cmdgenerator = mock_cmdgen.return_value
        mock_cmdgenerator.setCmd.return_value = ("", None, 0, [var_bind])
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
        client.set(self.oid, self.value)
        mock_cmdgenerator.setCmd.assert_called_once_with(mock.ANY, mock.ANY,
                                                         var_bind)
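    # The remaining error-path tests share one pattern: a PySnmpError from
    # transport setup must prevent the SNMP command from ever being issued,
    # while an engine-level error indication surfaces as SNMPFailure after
    # exactly one command attempt.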
    @mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
    def test_set_err_transport(self, mock_auth, mock_transport, mock_cmdgen):
        mock_transport.side_effect = snmp_error.PySnmpError
        var_bind = (self.oid, self.value)
        mock_cmdgenerator = mock_cmdgen.return_value
        mock_cmdgenerator.setCmd.return_value = ("engine error", None, 0,
                                                 [var_bind])
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
        self.assertRaises(exception.SNMPFailure, client.set, self.oid,
                          self.value)
        self.assertFalse(mock_cmdgenerator.setCmd.called)

    @mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
    def test_set_err_engine(self, mock_auth, mock_transport, mock_cmdgen):
        var_bind = (self.oid, self.value)
        mock_cmdgenerator = mock_cmdgen.return_value
        mock_cmdgenerator.setCmd.return_value = ("engine error", None, 0,
                                                 [var_bind])
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
        self.assertRaises(exception.SNMPFailure, client.set, self.oid,
                          self.value)
        mock_cmdgenerator.setCmd.assert_called_once_with(mock.ANY, mock.ANY,
                                                         var_bind)


class SNMPValidateParametersTestCase(db_base.DbTestCase):

    def _get_test_node(self, driver_info):
        return obj_utils.get_test_node(
            self.context,
            driver_info=driver_info)

    def test__parse_driver_info_default(self):
        # Make sure we get back the expected things.
        node = self._get_test_node(INFO_DICT)
        info = snmp._parse_driver_info(node)
        self.assertEqual(INFO_DICT['snmp_driver'], info.get('driver'))
        self.assertEqual(INFO_DICT['snmp_address'], info.get('address'))
        self.assertEqual(INFO_DICT['snmp_port'], str(info.get('port')))
        self.assertEqual(INFO_DICT['snmp_outlet'], info.get('outlet'))
        self.assertEqual(INFO_DICT['snmp_version'], info.get('version'))
        self.assertEqual(INFO_DICT.get('snmp_community'),
                         info.get('community'))
        self.assertEqual(INFO_DICT.get('snmp_security'),
                         info.get('security'))

    def test__parse_driver_info_apc(self):
        # Make sure the APC driver type is parsed.
        info = db_utils.get_test_snmp_info(snmp_driver='apc')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('apc', info.get('driver'))

    def test__parse_driver_info_apc_masterswitch(self):
        # Make sure the APC driver type is parsed.
        info = db_utils.get_test_snmp_info(snmp_driver='apc_masterswitch')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('apc_masterswitch', info.get('driver'))

    def test__parse_driver_info_apc_masterswitchplus(self):
        # Make sure the APC driver type is parsed.
        info = db_utils.get_test_snmp_info(snmp_driver='apc_masterswitchplus')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('apc_masterswitchplus', info.get('driver'))

    def test__parse_driver_info_apc_rackpdu(self):
        # Make sure the APC driver type is parsed.
        info = db_utils.get_test_snmp_info(snmp_driver='apc_rackpdu')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('apc_rackpdu', info.get('driver'))

    def test__parse_driver_info_aten(self):
        # Make sure the Aten driver type is parsed.
        info = db_utils.get_test_snmp_info(snmp_driver='aten')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('aten', info.get('driver'))

    def test__parse_driver_info_cyberpower(self):
        # Make sure the CyberPower driver type is parsed.
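        # (Like the other driver-type cases, this only swaps snmp_driver in
        # the fixture and checks the value round-trips through the parser.)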
        info = db_utils.get_test_snmp_info(snmp_driver='cyberpower')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('cyberpower', info.get('driver'))

    def test__parse_driver_info_eatonpower(self):
        # Make sure the Eaton Power driver type is parsed.
        info = db_utils.get_test_snmp_info(snmp_driver='eatonpower')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('eatonpower', info.get('driver'))

    def test__parse_driver_info_teltronix(self):
        # Make sure the Teltronix driver type is parsed.
        info = db_utils.get_test_snmp_info(snmp_driver='teltronix')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('teltronix', info.get('driver'))

    def test__parse_driver_info_snmp_v1(self):
        # Make sure SNMPv1 is parsed with a community string.
        info = db_utils.get_test_snmp_info(snmp_version='1',
                                           snmp_community='public')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('1', info.get('version'))
        self.assertEqual('public', info.get('community'))

    def test__parse_driver_info_snmp_v2c(self):
        # Make sure SNMPv2c is parsed with a community string.
        info = db_utils.get_test_snmp_info(snmp_version='2c',
                                           snmp_community='private')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('2c', info.get('version'))
        self.assertEqual('private', info.get('community'))

    def test__parse_driver_info_snmp_v3(self):
        # Make sure SNMPv3 is parsed with a security string.
        info = db_utils.get_test_snmp_info(snmp_version='3',
                                           snmp_security='pass')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('3', info.get('version'))
        self.assertEqual('pass', info.get('security'))

    def test__parse_driver_info_snmp_port_default(self):
        # Make sure default SNMP UDP port numbers are correct
        info = dict(INFO_DICT)
        del info['snmp_port']
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual(161, info.get('port'))

    def test__parse_driver_info_snmp_port(self):
        # Make sure non-default SNMP UDP port numbers can be configured
        info = db_utils.get_test_snmp_info(snmp_port='10161')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual(10161, info.get('port'))

    def test__parse_driver_info_missing_driver(self):
        # Make sure exception is raised when the driver type is missing.
        info = dict(INFO_DICT)
        del info['snmp_driver']
        node = self._get_test_node(info)
        self.assertRaises(exception.MissingParameterValue,
                          snmp._parse_driver_info, node)

    def test__parse_driver_info_invalid_driver(self):
        # Make sure exception is raised when the driver type is invalid.
        info = db_utils.get_test_snmp_info(snmp_driver='invalidpower')
        node = self._get_test_node(info)
        self.assertRaises(exception.InvalidParameterValue,
                          snmp._parse_driver_info, node)

    def test__parse_driver_info_missing_address(self):
        # Make sure exception is raised when the address is missing.
        info = dict(INFO_DICT)
        del info['snmp_address']
        node = self._get_test_node(info)
        self.assertRaises(exception.MissingParameterValue,
                          snmp._parse_driver_info, node)

    def test__parse_driver_info_missing_outlet(self):
        # Make sure exception is raised when the outlet is missing.
        info = dict(INFO_DICT)
        del info['snmp_outlet']
        node = self._get_test_node(info)
        self.assertRaises(exception.MissingParameterValue,
                          snmp._parse_driver_info, node)

    def test__parse_driver_info_default_version(self):
        # Make sure version defaults to 1 when it is missing.
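        # (The copy below drops snmp_version entirely, so the value the
        # parser reports can only come from its own default.)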
        info = dict(INFO_DICT)
        del info['snmp_version']
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('1', info.get('version'))
        self.assertEqual(INFO_DICT['snmp_community'], info.get('community'))

    def test__parse_driver_info_invalid_version(self):
        # Make sure exception is raised when version is invalid.
        info = db_utils.get_test_snmp_info(snmp_version='42',
                                           snmp_community='public',
                                           snmp_security='pass')
        node = self._get_test_node(info)
        self.assertRaises(exception.InvalidParameterValue,
                          snmp._parse_driver_info, node)

    def test__parse_driver_info_default_version_and_missing_community(self):
        # Make sure exception is raised when version and community
        # are missing.
        info = dict(INFO_DICT)
        del info['snmp_version']
        del info['snmp_community']
        node = self._get_test_node(info)
        self.assertRaises(exception.MissingParameterValue,
                          snmp._parse_driver_info, node)

    def test__parse_driver_info_missing_community_snmp_v1(self):
        # Make sure exception is raised when community is missing
        # with SNMPv1.
        info = dict(INFO_DICT)
        del info['snmp_community']
        node = self._get_test_node(info)
        self.assertRaises(exception.MissingParameterValue,
                          snmp._parse_driver_info, node)

    def test__parse_driver_info_missing_community_snmp_v2c(self):
        # Make sure exception is raised when community is missing
        # with SNMPv2c.
        info = db_utils.get_test_snmp_info(snmp_version='2c')
        del info['snmp_community']
        node = self._get_test_node(info)
        self.assertRaises(exception.MissingParameterValue,
                          snmp._parse_driver_info, node)

    def test__parse_driver_info_missing_security(self):
        # Make sure exception is raised when security is missing with SNMPv3.
        info = db_utils.get_test_snmp_info(snmp_version='3')
        del info['snmp_security']
        node = self._get_test_node(info)
        self.assertRaises(exception.MissingParameterValue,
                          snmp._parse_driver_info, node)


@mock.patch.object(snmp, '_get_client', autospec=True)
class SNMPDeviceDriverTestCase(db_base.DbTestCase):
    """Tests for the SNMP device-specific driver classes.

    The SNMP client object is mocked to allow various error cases to be
    tested.
""" def setUp(self): super(SNMPDeviceDriverTestCase, self).setUp() self.node = obj_utils.get_test_node( self.context, driver='fake_snmp', driver_info=INFO_DICT) def _update_driver_info(self, **kwargs): self.node["driver_info"].update(**kwargs) def _set_snmp_driver(self, snmp_driver): self._update_driver_info(snmp_driver=snmp_driver) def _get_snmp_failure(self): return exception.SNMPFailure(operation='test-operation', error='test-error') def test_power_state_on(self, mock_get_client): # Ensure the power on state is queried correctly mock_client = mock_get_client.return_value driver = snmp._get_driver(self.node) mock_client.get.return_value = driver.value_power_on pstate = driver.power_state() mock_client.get.assert_called_once_with(driver._snmp_oid()) self.assertEqual(states.POWER_ON, pstate) def test_power_state_off(self, mock_get_client): # Ensure the power off state is queried correctly mock_client = mock_get_client.return_value driver = snmp._get_driver(self.node) mock_client.get.return_value = driver.value_power_off pstate = driver.power_state() mock_client.get.assert_called_once_with(driver._snmp_oid()) self.assertEqual(states.POWER_OFF, pstate) def test_power_state_error(self, mock_get_client): # Ensure an unexpected power state returns an error mock_client = mock_get_client.return_value driver = snmp._get_driver(self.node) mock_client.get.return_value = 42 pstate = driver.power_state() mock_client.get.assert_called_once_with(driver._snmp_oid()) self.assertEqual(states.ERROR, pstate) def test_power_state_snmp_failure(self, mock_get_client): # Ensure SNMP failure exceptions raised during a query are propagated mock_client = mock_get_client.return_value driver = snmp._get_driver(self.node) mock_client.get.side_effect = self._get_snmp_failure() self.assertRaises(exception.SNMPFailure, driver.power_state) mock_client.get.assert_called_once_with(driver._snmp_oid()) def test_power_on(self, mock_get_client): # Ensure the device is powered on correctly mock_client = mock_get_client.return_value driver = snmp._get_driver(self.node) mock_client.get.return_value = driver.value_power_on pstate = driver.power_on() mock_client.set.assert_called_once_with(driver._snmp_oid(), driver.value_power_on) mock_client.get.assert_called_once_with(driver._snmp_oid()) self.assertEqual(states.POWER_ON, pstate) def test_power_off(self, mock_get_client): # Ensure the device is powered off correctly mock_client = mock_get_client.return_value driver = snmp._get_driver(self.node) mock_client.get.return_value = driver.value_power_off pstate = driver.power_off() mock_client.set.assert_called_once_with(driver._snmp_oid(), driver.value_power_off) mock_client.get.assert_called_once_with(driver._snmp_oid()) self.assertEqual(states.POWER_OFF, pstate) @mock.patch("eventlet.greenthread.sleep", autospec=True) def test_power_on_delay(self, mock_sleep, mock_get_client): # Ensure driver waits for the state to change following a power on mock_client = mock_get_client.return_value driver = snmp._get_driver(self.node) mock_client.get.side_effect = [driver.value_power_off, driver.value_power_on] pstate = driver.power_on() mock_client.set.assert_called_once_with(driver._snmp_oid(), driver.value_power_on) calls = [mock.call(driver._snmp_oid())] * 2 mock_client.get.assert_has_calls(calls) self.assertEqual(states.POWER_ON, pstate) @mock.patch("eventlet.greenthread.sleep", autospec=True) def test_power_off_delay(self, mock_sleep, mock_get_client): # Ensure driver waits for the state to change following a power off mock_client = 
mock_get_client.return_value driver = snmp._get_driver(self.node) mock_client.get.side_effect = [driver.value_power_on, driver.value_power_off] pstate = driver.power_off() mock_client.set.assert_called_once_with(driver._snmp_oid(), driver.value_power_off) calls = [mock.call(driver._snmp_oid())] * 2 mock_client.get.assert_has_calls(calls) self.assertEqual(states.POWER_OFF, pstate) @mock.patch("eventlet.greenthread.sleep", autospec=True) def test_power_on_invalid_state(self, mock_sleep, mock_get_client): # Ensure driver retries when querying unexpected states following a # power on mock_client = mock_get_client.return_value driver = snmp._get_driver(self.node) mock_client.get.return_value = 42 pstate = driver.power_on() mock_client.set.assert_called_once_with(driver._snmp_oid(), driver.value_power_on) attempts = CONF.snmp.power_timeout // driver.retry_interval calls = [mock.call(driver._snmp_oid())] * attempts mock_client.get.assert_has_calls(calls) self.assertEqual(states.ERROR, pstate) @mock.patch("eventlet.greenthread.sleep", autospec=True) def test_power_off_invalid_state(self, mock_sleep, mock_get_client): # Ensure driver retries when querying unexpected states following a # power off mock_client = mock_get_client.return_value driver = snmp._get_driver(self.node) mock_client.get.return_value = 42 pstate = driver.power_off() mock_client.set.assert_called_once_with(driver._snmp_oid(), driver.value_power_off) attempts = CONF.snmp.power_timeout // driver.retry_interval calls = [mock.call(driver._snmp_oid())] * attempts mock_client.get.assert_has_calls(calls) self.assertEqual(states.ERROR, pstate) def test_power_on_snmp_set_failure(self, mock_get_client): # Ensure SNMP failure exceptions raised during a power on set operation # are propagated mock_client = mock_get_client.return_value driver = snmp._get_driver(self.node) mock_client.set.side_effect = self._get_snmp_failure() self.assertRaises(exception.SNMPFailure, driver.power_on) mock_client.set.assert_called_once_with(driver._snmp_oid(), driver.value_power_on) def test_power_off_snmp_set_failure(self, mock_get_client): # Ensure SNMP failure exceptions raised during a power off set # operation are propagated mock_client = mock_get_client.return_value driver = snmp._get_driver(self.node) mock_client.set.side_effect = self._get_snmp_failure() self.assertRaises(exception.SNMPFailure, driver.power_off) mock_client.set.assert_called_once_with(driver._snmp_oid(), driver.value_power_off) def test_power_on_snmp_get_failure(self, mock_get_client): # Ensure SNMP failure exceptions raised during a power on get operation # are propagated mock_client = mock_get_client.return_value driver = snmp._get_driver(self.node) mock_client.get.side_effect = self._get_snmp_failure() self.assertRaises(exception.SNMPFailure, driver.power_on) mock_client.set.assert_called_once_with(driver._snmp_oid(), driver.value_power_on) mock_client.get.assert_called_once_with(driver._snmp_oid()) def test_power_off_snmp_get_failure(self, mock_get_client): # Ensure SNMP failure exceptions raised during a power off get # operation are propagated mock_client = mock_get_client.return_value driver = snmp._get_driver(self.node) mock_client.get.side_effect = self._get_snmp_failure() self.assertRaises(exception.SNMPFailure, driver.power_off) mock_client.set.assert_called_once_with(driver._snmp_oid(), driver.value_power_off) mock_client.get.assert_called_once_with(driver._snmp_oid()) @mock.patch("eventlet.greenthread.sleep", autospec=True) def test_power_on_timeout(self, mock_sleep, 
mock_get_client): # Ensure that a power on consistency poll timeout causes an error mock_client = mock_get_client.return_value driver = snmp._get_driver(self.node) mock_client.get.return_value = driver.value_power_off pstate = driver.power_on() mock_client.set.assert_called_once_with(driver._snmp_oid(), driver.value_power_on) attempts = CONF.snmp.power_timeout // driver.retry_interval calls = [mock.call(driver._snmp_oid())] * attempts mock_client.get.assert_has_calls(calls) self.assertEqual(states.ERROR, pstate) @mock.patch("eventlet.greenthread.sleep", autospec=True) def test_power_off_timeout(self, mock_sleep, mock_get_client): # Ensure that a power off consistency poll timeout causes an error mock_client = mock_get_client.return_value CONF.snmp.power_timeout = 5 driver = snmp._get_driver(self.node) mock_client.get.return_value = driver.value_power_on pstate = driver.power_off() mock_client.set.assert_called_once_with(driver._snmp_oid(), driver.value_power_off) attempts = CONF.snmp.power_timeout // driver.retry_interval calls = [mock.call(driver._snmp_oid())] * attempts mock_client.get.assert_has_calls(calls) self.assertEqual(states.ERROR, pstate) def test_power_reset(self, mock_get_client): # Ensure the device is reset correctly mock_client = mock_get_client.return_value driver = snmp._get_driver(self.node) mock_client.get.side_effect = [driver.value_power_off, driver.value_power_on] pstate = driver.power_reset() calls = [mock.call(driver._snmp_oid(), driver.value_power_off), mock.call(driver._snmp_oid(), driver.value_power_on)] mock_client.set.assert_has_calls(calls) calls = [mock.call(driver._snmp_oid())] * 2 mock_client.get.assert_has_calls(calls) self.assertEqual(states.POWER_ON, pstate) @mock.patch("eventlet.greenthread.sleep", autospec=True) def test_power_reset_off_delay(self, mock_sleep, mock_get_client): # Ensure driver waits for the power off state change following a power # reset mock_client = mock_get_client.return_value driver = snmp._get_driver(self.node) mock_client.get.side_effect = [driver.value_power_on, driver.value_power_off, driver.value_power_on] pstate = driver.power_reset() calls = [mock.call(driver._snmp_oid(), driver.value_power_off), mock.call(driver._snmp_oid(), driver.value_power_on)] mock_client.set.assert_has_calls(calls) calls = [mock.call(driver._snmp_oid())] * 3 mock_client.get.assert_has_calls(calls) self.assertEqual(states.POWER_ON, pstate) @mock.patch("eventlet.greenthread.sleep", autospec=True) def test_power_reset_on_delay(self, mock_sleep, mock_get_client): # Ensure driver waits for the power on state change following a power # reset mock_client = mock_get_client.return_value driver = snmp._get_driver(self.node) mock_client.get.side_effect = [driver.value_power_off, driver.value_power_off, driver.value_power_on] pstate = driver.power_reset() calls = [mock.call(driver._snmp_oid(), driver.value_power_off), mock.call(driver._snmp_oid(), driver.value_power_on)] mock_client.set.assert_has_calls(calls) calls = [mock.call(driver._snmp_oid())] * 3 mock_client.get.assert_has_calls(calls) self.assertEqual(states.POWER_ON, pstate) @mock.patch("eventlet.greenthread.sleep", autospec=True) def test_power_reset_off_delay_on_delay(self, mock_sleep, mock_get_client): # Ensure driver waits for both state changes following a power reset mock_client = mock_get_client.return_value driver = snmp._get_driver(self.node) mock_client.get.side_effect = [driver.value_power_on, driver.value_power_off, driver.value_power_off, driver.value_power_on] pstate = 
        calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
                 mock.call(driver._snmp_oid(), driver.value_power_on)]
        mock_client.set.assert_has_calls(calls)
        calls = [mock.call(driver._snmp_oid())] * 4
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.POWER_ON, pstate)

    @mock.patch("eventlet.greenthread.sleep", autospec=True)
    def test_power_reset_off_invalid_state(self, mock_sleep,
                                           mock_get_client):
        # Ensure driver retries when querying unexpected states following a
        # power off during a reset
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = 42
        pstate = driver.power_reset()
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_off)
        attempts = CONF.snmp.power_timeout // driver.retry_interval
        calls = [mock.call(driver._snmp_oid())] * attempts
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.ERROR, pstate)

    @mock.patch("eventlet.greenthread.sleep", autospec=True)
    def test_power_reset_on_invalid_state(self, mock_sleep, mock_get_client):
        # Ensure driver retries when querying unexpected states following a
        # power on during a reset
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        attempts = CONF.snmp.power_timeout // driver.retry_interval
        mock_client.get.side_effect = ([driver.value_power_off] +
                                       [42] * attempts)
        pstate = driver.power_reset()
        calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
                 mock.call(driver._snmp_oid(), driver.value_power_on)]
        mock_client.set.assert_has_calls(calls)
        calls = [mock.call(driver._snmp_oid())] * (1 + attempts)
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.ERROR, pstate)

    @mock.patch("eventlet.greenthread.sleep", autospec=True)
    def test_power_reset_off_timeout(self, mock_sleep, mock_get_client):
        # Ensure that a power off consistency poll timeout during a reset
        # causes an error
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.value_power_on
        pstate = driver.power_reset()
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_off)
        attempts = CONF.snmp.power_timeout // driver.retry_interval
        calls = [mock.call(driver._snmp_oid())] * attempts
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.ERROR, pstate)

    @mock.patch("eventlet.greenthread.sleep", autospec=True)
    def test_power_reset_on_timeout(self, mock_sleep, mock_get_client):
        # Ensure that a power on consistency poll timeout during a reset
        # causes an error
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        attempts = CONF.snmp.power_timeout // driver.retry_interval
        mock_client.get.side_effect = ([driver.value_power_off] *
                                       (1 + attempts))
        pstate = driver.power_reset()
        calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
                 mock.call(driver._snmp_oid(), driver.value_power_on)]
        mock_client.set.assert_has_calls(calls)
        calls = [mock.call(driver._snmp_oid())] * (1 + attempts)
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.ERROR, pstate)

    def test_power_reset_off_snmp_set_failure(self, mock_get_client):
        # Ensure SNMP failure exceptions raised during a reset power off set
        # operation are propagated
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.set.side_effect = self._get_snmp_failure()
        self.assertRaises(exception.SNMPFailure, driver.power_reset)
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_off)
        self.assertFalse(mock_client.get.called)

    def test_power_reset_off_snmp_get_failure(self, mock_get_client):
        # Ensure SNMP failure exceptions raised during a reset power off get
        # operation are propagated
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.side_effect = self._get_snmp_failure()
        self.assertRaises(exception.SNMPFailure, driver.power_reset)
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_off)
        mock_client.get.assert_called_once_with(driver._snmp_oid())

    def test_power_reset_on_snmp_set_failure(self, mock_get_client):
        # Ensure SNMP failure exceptions raised during a reset power on set
        # operation are propagated
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.set.side_effect = [None, self._get_snmp_failure()]
        mock_client.get.return_value = driver.value_power_off
        self.assertRaises(exception.SNMPFailure, driver.power_reset)
        calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
                 mock.call(driver._snmp_oid(), driver.value_power_on)]
        mock_client.set.assert_has_calls(calls)
        mock_client.get.assert_called_once_with(driver._snmp_oid())

    @mock.patch.object(time, 'sleep', autospec=True)
    def test_power_reset_delay_option(self, mock_sleep, mock_get_client):
        # Test for the 'reboot_delay' config option
        self.config(reboot_delay=5, group='snmp')
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.side_effect = [driver.value_power_off,
                                       driver.value_power_on]
        pstate = driver.power_reset()
        calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
                 mock.call(driver._snmp_oid(), driver.value_power_on)]
        mock_client.set.assert_has_calls(calls)
        calls = [mock.call(driver._snmp_oid())] * 2
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.POWER_ON, pstate)
        mock_sleep.assert_called_once_with(5)

    def test_power_reset_on_snmp_get_failure(self, mock_get_client):
        # Ensure SNMP failure exceptions raised during a reset power on get
        # operation are propagated
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.side_effect = [driver.value_power_off,
                                       self._get_snmp_failure()]
        self.assertRaises(exception.SNMPFailure, driver.power_reset)
        calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
                 mock.call(driver._snmp_oid(), driver.value_power_on)]
        mock_client.set.assert_has_calls(calls)
        calls = [mock.call(driver._snmp_oid()),
                 mock.call(driver._snmp_oid())]
        mock_client.get.assert_has_calls(calls)
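
    # The helpers below are shared by the device-specific tests that follow:
    # every simple PDU driver (apc, apc_masterswitch, aten, cyberpower,
    # teltronix, ...) exposes the same power_state/power_on/power_off/
    # power_reset interface, so one parameterized helper per operation keeps
    # each per-vendor test to a single call.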

    def _test_simple_device_power_state_on(self, snmp_driver,
                                           mock_get_client):
        # Ensure a simple device driver queries power on correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver(snmp_driver)
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.value_power_on
        pstate = driver.power_state()
        mock_client.get.assert_called_once_with(driver._snmp_oid())
        self.assertEqual(states.POWER_ON, pstate)

    def _test_simple_device_power_state_off(self, snmp_driver,
                                            mock_get_client):
        # Ensure a simple device driver queries power off correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver(snmp_driver)
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.value_power_off
        pstate = driver.power_state()
        mock_client.get.assert_called_once_with(driver._snmp_oid())
        self.assertEqual(states.POWER_OFF, pstate)

    def _test_simple_device_power_on(self, snmp_driver, mock_get_client):
        # Ensure a simple device driver powers on correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver(snmp_driver)
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.value_power_on
        pstate = driver.power_on()
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_on)
        mock_client.get.assert_called_once_with(driver._snmp_oid())
        self.assertEqual(states.POWER_ON, pstate)

    def _test_simple_device_power_off(self, snmp_driver, mock_get_client):
        # Ensure a simple device driver powers off correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver(snmp_driver)
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.value_power_off
        pstate = driver.power_off()
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_off)
        mock_client.get.assert_called_once_with(driver._snmp_oid())
        self.assertEqual(states.POWER_OFF, pstate)

    def _test_simple_device_power_reset(self, snmp_driver, mock_get_client):
        # Ensure a simple device driver resets correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver(snmp_driver)
        driver = snmp._get_driver(self.node)
        mock_client.get.side_effect = [driver.value_power_off,
                                       driver.value_power_on]
        pstate = driver.power_reset()
        calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
                 mock.call(driver._snmp_oid(), driver.value_power_on)]
        mock_client.set.assert_has_calls(calls)
        calls = [mock.call(driver._snmp_oid())] * 2
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.POWER_ON, pstate)
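
    # Note on the OID tuples asserted in the *_snmp_objects tests below:
    # 1.3.6.1.4.1 is the IANA private-enterprise arc, so each driver's table
    # hangs off its vendor's enterprise number (318 for APC and 534 for
    # Eaton; 3808, 21317 and 23620 are taken here to be CyberPower, ATEN and
    # Teltronix respectively). The trailing element is the outlet number
    # passed in snmp_outlet.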

    def test_apc_snmp_objects(self, mock_get_client):
        # Ensure the correct SNMP object OIDs and values are used by the APC
        # driver
        self._update_driver_info(snmp_driver="apc", snmp_outlet="3")
        driver = snmp._get_driver(self.node)
        oid = (1, 3, 6, 1, 4, 1, 318, 1, 1, 4, 4, 2, 1, 3, 3)
        self.assertEqual(oid, driver._snmp_oid())
        self.assertEqual(1, driver.value_power_on)
        self.assertEqual(2, driver.value_power_off)

    def test_apc_power_state_on(self, mock_get_client):
        self._test_simple_device_power_state_on('apc', mock_get_client)

    def test_apc_power_state_off(self, mock_get_client):
        self._test_simple_device_power_state_off('apc', mock_get_client)

    def test_apc_power_on(self, mock_get_client):
        self._test_simple_device_power_on('apc', mock_get_client)

    def test_apc_power_off(self, mock_get_client):
        self._test_simple_device_power_off('apc', mock_get_client)

    def test_apc_power_reset(self, mock_get_client):
        self._test_simple_device_power_reset('apc', mock_get_client)

    def test_apc_masterswitch_snmp_objects(self, mock_get_client):
        # Ensure the correct SNMP object OIDs and values are used by the APC
        # masterswitch driver
        self._update_driver_info(snmp_driver="apc_masterswitch",
                                 snmp_outlet="6")
        driver = snmp._get_driver(self.node)
        oid = (1, 3, 6, 1, 4, 1, 318, 1, 1, 4, 4, 2, 1, 3, 6)
        self.assertEqual(oid, driver._snmp_oid())
        self.assertEqual(1, driver.value_power_on)
        self.assertEqual(2, driver.value_power_off)

    def test_apc_masterswitch_power_state_on(self, mock_get_client):
        self._test_simple_device_power_state_on('apc_masterswitch',
                                                mock_get_client)

    def test_apc_masterswitch_power_state_off(self, mock_get_client):
        self._test_simple_device_power_state_off('apc_masterswitch',
                                                 mock_get_client)

    def test_apc_masterswitch_power_on(self, mock_get_client):
        self._test_simple_device_power_on('apc_masterswitch',
                                          mock_get_client)

    def test_apc_masterswitch_power_off(self, mock_get_client):
        self._test_simple_device_power_off('apc_masterswitch',
                                           mock_get_client)

    def test_apc_masterswitch_power_reset(self, mock_get_client):
        self._test_simple_device_power_reset('apc_masterswitch',
                                             mock_get_client)

    def test_apc_masterswitchplus_snmp_objects(self, mock_get_client):
        # Ensure the correct SNMP object OIDs and values are used by the APC
        # masterswitchplus driver
        self._update_driver_info(snmp_driver="apc_masterswitchplus",
                                 snmp_outlet="6")
        driver = snmp._get_driver(self.node)
        oid = (1, 3, 6, 1, 4, 1, 318, 1, 1, 6, 5, 1, 1, 5, 6)
        self.assertEqual(oid, driver._snmp_oid())
        self.assertEqual(1, driver.value_power_on)
        self.assertEqual(3, driver.value_power_off)

    def test_apc_masterswitchplus_power_state_on(self, mock_get_client):
        self._test_simple_device_power_state_on('apc_masterswitchplus',
                                                mock_get_client)

    def test_apc_masterswitchplus_power_state_off(self, mock_get_client):
        self._test_simple_device_power_state_off('apc_masterswitchplus',
                                                 mock_get_client)

    def test_apc_masterswitchplus_power_on(self, mock_get_client):
        self._test_simple_device_power_on('apc_masterswitchplus',
                                          mock_get_client)

    def test_apc_masterswitchplus_power_off(self, mock_get_client):
        self._test_simple_device_power_off('apc_masterswitchplus',
                                           mock_get_client)

    def test_apc_masterswitchplus_power_reset(self, mock_get_client):
        self._test_simple_device_power_reset('apc_masterswitchplus',
                                             mock_get_client)

    def test_apc_rackpdu_snmp_objects(self, mock_get_client):
        # Ensure the correct SNMP object OIDs and values are used by the APC
        # rackpdu driver
        self._update_driver_info(snmp_driver="apc_rackpdu", snmp_outlet="6")
        driver = snmp._get_driver(self.node)
        oid = (1, 3, 6, 1, 4, 1, 318, 1, 1, 12, 3, 3, 1, 1, 4, 6)
        self.assertEqual(oid, driver._snmp_oid())
        self.assertEqual(1, driver.value_power_on)
        self.assertEqual(2, driver.value_power_off)

    def test_apc_rackpdu_power_state_on(self, mock_get_client):
        self._test_simple_device_power_state_on('apc_rackpdu',
                                                mock_get_client)

    def test_apc_rackpdu_power_state_off(self, mock_get_client):
        self._test_simple_device_power_state_off('apc_rackpdu',
                                                 mock_get_client)

    def test_apc_rackpdu_power_on(self, mock_get_client):
        self._test_simple_device_power_on('apc_rackpdu', mock_get_client)

    def test_apc_rackpdu_power_off(self, mock_get_client):
        self._test_simple_device_power_off('apc_rackpdu', mock_get_client)

    def test_apc_rackpdu_power_reset(self, mock_get_client):
        self._test_simple_device_power_reset('apc_rackpdu', mock_get_client)

    def test_aten_snmp_objects(self, mock_get_client):
        # Ensure the correct SNMP object OIDs and values are used by the
        # Aten driver
        self._update_driver_info(snmp_driver="aten", snmp_outlet="3")
        driver = snmp._get_driver(self.node)
        oid = (1, 3, 6, 1, 4, 1, 21317, 1, 3, 2, 2, 2, 2, 3, 0)
        self.assertEqual(oid, driver._snmp_oid())
        self.assertEqual(2, driver.value_power_on)
        self.assertEqual(1, driver.value_power_off)

    def test_aten_power_state_on(self, mock_get_client):
        self._test_simple_device_power_state_on('aten', mock_get_client)

    def test_aten_power_state_off(self, mock_get_client):
        self._test_simple_device_power_state_off('aten', mock_get_client)

    def test_aten_power_on(self, mock_get_client):
        self._test_simple_device_power_on('aten', mock_get_client)

    def test_aten_power_off(self, mock_get_client):
        self._test_simple_device_power_off('aten', mock_get_client)

    def test_aten_power_reset(self, mock_get_client):
        self._test_simple_device_power_reset('aten', mock_get_client)

    def test_cyberpower_snmp_objects(self, mock_get_client):
        # Ensure the correct SNMP object OIDs and values are used by the
        # CyberPower driver
        self._update_driver_info(snmp_driver="cyberpower", snmp_outlet="3")
        driver = snmp._get_driver(self.node)
        oid = (1, 3, 6, 1, 4, 1, 3808, 1, 1, 3, 3, 3, 1, 1, 4, 3)
        self.assertEqual(oid, driver._snmp_oid())
        self.assertEqual(1, driver.value_power_on)
        self.assertEqual(2, driver.value_power_off)

    def test_cyberpower_power_state_on(self, mock_get_client):
        self._test_simple_device_power_state_on('cyberpower',
                                                mock_get_client)

    def test_cyberpower_power_state_off(self, mock_get_client):
        self._test_simple_device_power_state_off('cyberpower',
                                                 mock_get_client)

    def test_cyberpower_power_on(self, mock_get_client):
        self._test_simple_device_power_on('cyberpower', mock_get_client)

    def test_cyberpower_power_off(self, mock_get_client):
        self._test_simple_device_power_off('cyberpower', mock_get_client)

    def test_cyberpower_power_reset(self, mock_get_client):
        self._test_simple_device_power_reset('cyberpower', mock_get_client)

    def test_teltronix_snmp_objects(self, mock_get_client):
        # Ensure the correct SNMP object OIDs and values are used by the
        # Teltronix driver
        self._update_driver_info(snmp_driver="teltronix", snmp_outlet="3")
        driver = snmp._get_driver(self.node)
        oid = (1, 3, 6, 1, 4, 1, 23620, 1, 2, 2, 1, 4, 3)
        self.assertEqual(oid, driver._snmp_oid())
        self.assertEqual(2, driver.value_power_on)
        self.assertEqual(1, driver.value_power_off)

    def test_teltronix_power_state_on(self, mock_get_client):
        self._test_simple_device_power_state_on('teltronix', mock_get_client)

    def test_teltronix_power_state_off(self, mock_get_client):
        self._test_simple_device_power_state_off('teltronix',
                                                 mock_get_client)

    def test_teltronix_power_on(self, mock_get_client):
        self._test_simple_device_power_on('teltronix', mock_get_client)

    def test_teltronix_power_off(self, mock_get_client):
        self._test_simple_device_power_off('teltronix', mock_get_client)

    def test_teltronix_power_reset(self, mock_get_client):
        self._test_simple_device_power_reset('teltronix', mock_get_client)

    def test_eaton_power_snmp_objects(self, mock_get_client):
        # Ensure the correct SNMP object OIDs and values are used by the
        # Eaton Power driver
        self._update_driver_info(snmp_driver="eatonpower", snmp_outlet="3")
        driver = snmp._get_driver(self.node)
        status_oid = (1, 3, 6, 1, 4, 1, 534, 6, 6, 7, 6, 6, 1, 2, 3)
        poweron_oid = (1, 3, 6, 1, 4, 1, 534, 6, 6, 7, 6, 6, 1, 3, 3)
        poweroff_oid = (1, 3, 6, 1, 4, 1, 534, 6, 6, 7, 6, 6, 1, 4, 3)
        self.assertEqual(status_oid, driver._snmp_oid(driver.oid_status))
        self.assertEqual(poweron_oid, driver._snmp_oid(driver.oid_poweron))
        self.assertEqual(poweroff_oid, driver._snmp_oid(driver.oid_poweroff))
        self.assertEqual(0, driver.status_off)
        self.assertEqual(1, driver.status_on)
        self.assertEqual(2, driver.status_pending_off)
        self.assertEqual(3, driver.status_pending_on)

    def test_eaton_power_power_state_on(self, mock_get_client):
        # Ensure the Eaton Power driver queries on correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver("eatonpower")
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.status_on
        pstate = driver.power_state()
        mock_client.get.assert_called_once_with(
            driver._snmp_oid(driver.oid_status))
        self.assertEqual(states.POWER_ON, pstate)

    def test_eaton_power_power_state_off(self, mock_get_client):
        # Ensure the Eaton Power driver queries off correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver("eatonpower")
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.status_off
        pstate = driver.power_state()
        mock_client.get.assert_called_once_with(
            driver._snmp_oid(driver.oid_status))
        self.assertEqual(states.POWER_OFF, pstate)
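
    # Note: the Eaton driver distinguishes transitional states. A pending
    # power-off ('2') is still reported as POWER_ON and a pending power-on
    # ('3') as POWER_OFF, i.e. the reported state only flips once the
    # transition completes, as the two tests below verify.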

    def test_eaton_power_power_state_pending_off(self, mock_get_client):
        # Ensure the Eaton Power driver queries pending off correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver("eatonpower")
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.status_pending_off
        pstate = driver.power_state()
        mock_client.get.assert_called_once_with(
            driver._snmp_oid(driver.oid_status))
        self.assertEqual(states.POWER_ON, pstate)

    def test_eaton_power_power_state_pending_on(self, mock_get_client):
        # Ensure the Eaton Power driver queries pending on correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver("eatonpower")
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.status_pending_on
        pstate = driver.power_state()
        mock_client.get.assert_called_once_with(
            driver._snmp_oid(driver.oid_status))
        self.assertEqual(states.POWER_OFF, pstate)

    def test_eaton_power_power_on(self, mock_get_client):
        # Ensure the Eaton Power driver powers on correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver("eatonpower")
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.status_on
        pstate = driver.power_on()
        mock_client.set.assert_called_once_with(
            driver._snmp_oid(driver.oid_poweron), driver.value_power_on)
        mock_client.get.assert_called_once_with(
            driver._snmp_oid(driver.oid_status))
        self.assertEqual(states.POWER_ON, pstate)

    def test_eaton_power_power_off(self, mock_get_client):
        # Ensure the Eaton Power driver powers off correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver("eatonpower")
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.status_off
        pstate = driver.power_off()
        mock_client.set.assert_called_once_with(
            driver._snmp_oid(driver.oid_poweroff), driver.value_power_off)
        mock_client.get.assert_called_once_with(
            driver._snmp_oid(driver.oid_status))
        self.assertEqual(states.POWER_OFF, pstate)

    def test_eaton_power_power_reset(self, mock_get_client):
        # Ensure the Eaton Power driver resets correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver("eatonpower")
        driver = snmp._get_driver(self.node)
        mock_client.get.side_effect = [driver.status_off, driver.status_on]
        pstate = driver.power_reset()
        calls = [mock.call(driver._snmp_oid(driver.oid_poweroff),
                           driver.value_power_off),
                 mock.call(driver._snmp_oid(driver.oid_poweron),
                           driver.value_power_on)]
        mock_client.set.assert_has_calls(calls)
        calls = [mock.call(driver._snmp_oid(driver.oid_status))] * 2
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.POWER_ON, pstate)


@mock.patch.object(snmp, '_get_driver', autospec=True)
class SNMPDriverTestCase(db_base.DbTestCase):
    """SNMP power driver interface tests.

    In this test case, the SNMP power driver interface is exercised. The
    device-specific SNMP driver is mocked to allow various error cases to
    be tested.
    """
""" def setUp(self): super(SNMPDriverTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver='fake_snmp') self.node = obj_utils.create_test_node(self.context, driver='fake_snmp', driver_info=INFO_DICT) def _get_snmp_failure(self): return exception.SNMPFailure(operation='test-operation', error='test-error') def test_get_properties(self, mock_get_driver): expected = snmp.COMMON_PROPERTIES with task_manager.acquire(self.context, self.node.uuid) as task: self.assertEqual(expected, task.driver.get_properties()) def test_get_power_state_on(self, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_state.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid) as task: pstate = task.driver.power.get_power_state(task) mock_driver.power_state.assert_called_once_with() self.assertEqual(states.POWER_ON, pstate) def test_get_power_state_off(self, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_state.return_value = states.POWER_OFF with task_manager.acquire(self.context, self.node.uuid) as task: pstate = task.driver.power.get_power_state(task) mock_driver.power_state.assert_called_once_with() self.assertEqual(states.POWER_OFF, pstate) def test_get_power_state_error(self, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_state.return_value = states.ERROR with task_manager.acquire(self.context, self.node.uuid) as task: pstate = task.driver.power.get_power_state(task) mock_driver.power_state.assert_called_once_with() self.assertEqual(states.ERROR, pstate) def test_get_power_state_snmp_failure(self, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_state.side_effect = self._get_snmp_failure() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.SNMPFailure, task.driver.power.get_power_state, task) mock_driver.power_state.assert_called_once_with() def test_set_power_state_on(self, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_on.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.power.set_power_state(task, states.POWER_ON) mock_driver.power_on.assert_called_once_with() def test_set_power_state_off(self, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_off.return_value = states.POWER_OFF with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.power.set_power_state(task, states.POWER_OFF) mock_driver.power_off.assert_called_once_with() def test_set_power_state_error(self, mock_get_driver): with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.power.set_power_state, task, states.ERROR) def test_set_power_state_on_snmp_failure(self, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_on.side_effect = self._get_snmp_failure() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.SNMPFailure, task.driver.power.set_power_state, task, states.POWER_ON) mock_driver.power_on.assert_called_once_with() def test_set_power_state_off_snmp_failure(self, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_off.side_effect = self._get_snmp_failure() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.SNMPFailure, task.driver.power.set_power_state, task, states.POWER_OFF) 

    def test_set_power_state_on_timeout(self, mock_get_driver):
        mock_driver = mock_get_driver.return_value
        mock_driver.power_on.return_value = states.ERROR
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.PowerStateFailure,
                              task.driver.power.set_power_state,
                              task, states.POWER_ON)
        mock_driver.power_on.assert_called_once_with()

    def test_set_power_state_off_timeout(self, mock_get_driver):
        mock_driver = mock_get_driver.return_value
        mock_driver.power_off.return_value = states.ERROR
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.PowerStateFailure,
                              task.driver.power.set_power_state,
                              task, states.POWER_OFF)
        mock_driver.power_off.assert_called_once_with()

    def test_reboot(self, mock_get_driver):
        mock_driver = mock_get_driver.return_value
        mock_driver.power_reset.return_value = states.POWER_ON
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.power.reboot(task)
        mock_driver.power_reset.assert_called_once_with()

    def test_reboot_snmp_failure(self, mock_get_driver):
        mock_driver = mock_get_driver.return_value
        mock_driver.power_reset.side_effect = self._get_snmp_failure()
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.SNMPFailure,
                              task.driver.power.reboot, task)
        mock_driver.power_reset.assert_called_once_with()

    def test_reboot_timeout(self, mock_get_driver):
        mock_driver = mock_get_driver.return_value
        mock_driver.power_reset.return_value = states.ERROR
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.PowerStateFailure,
                              task.driver.power.reboot, task)
        mock_driver.power_reset.assert_called_once_with()
ironic-5.1.0/ironic/tests/unit/drivers/modules/amt/0000775000567000056710000000000012674513633023466 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/drivers/modules/amt/test_management.py0000664000567000056710000002633412674513466027227 0ustar jenkinsjenkins00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
""" Test class for AMT ManagementInterface """ import mock from oslo_config import cfg from ironic.common import boot_devices from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.amt import common as amt_common from ironic.drivers.modules.amt import management as amt_mgmt from ironic.drivers.modules.amt import resource_uris from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.drivers.modules.amt import utils as test_utils from ironic.tests.unit.drivers import third_party_driver_mock_specs \ as mock_specs from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = db_utils.get_test_amt_info() CONF = cfg.CONF @mock.patch.object(amt_common, 'pywsman', spec_set=mock_specs.PYWSMAN_SPEC) class AMTManagementInteralMethodsTestCase(db_base.DbTestCase): def setUp(self): super(AMTManagementInteralMethodsTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver='fake_amt') self.node = obj_utils.create_test_node(self.context, driver='fake_amt', driver_info=INFO_DICT) @mock.patch.object(amt_common, 'awake_amt_interface', spec_set=True, autospec=True) def test__set_boot_device_order(self, mock_aw, mock_client_pywsman): namespace = resource_uris.CIM_BootConfigSetting device = boot_devices.PXE result_xml = test_utils.build_soap_xml([{'ReturnValue': '0'}], namespace) mock_xml = test_utils.mock_wsman_root(result_xml) mock_pywsman = mock_client_pywsman.Client.return_value mock_pywsman.invoke.return_value = mock_xml amt_mgmt._set_boot_device_order(self.node, device) mock_pywsman.invoke.assert_called_once_with( mock.ANY, namespace, 'ChangeBootOrder', mock.ANY) self.assertTrue(mock_aw.called) @mock.patch.object(amt_common, 'awake_amt_interface', spec_set=True, autospec=True) def test__set_boot_device_order_fail(self, mock_aw, mock_client_pywsman): namespace = resource_uris.CIM_BootConfigSetting device = boot_devices.PXE result_xml = test_utils.build_soap_xml([{'ReturnValue': '2'}], namespace) mock_xml = test_utils.mock_wsman_root(result_xml) mock_pywsman = mock_client_pywsman.Client.return_value mock_pywsman.invoke.return_value = mock_xml self.assertRaises(exception.AMTFailure, amt_mgmt._set_boot_device_order, self.node, device) mock_pywsman.invoke.assert_called_once_with( mock.ANY, namespace, 'ChangeBootOrder', mock.ANY) mock_pywsman = mock_client_pywsman.Client.return_value mock_pywsman.invoke.return_value = None self.assertRaises(exception.AMTConnectFailure, amt_mgmt._set_boot_device_order, self.node, device) self.assertTrue(mock_aw.called) @mock.patch.object(amt_common, 'awake_amt_interface', spec_set=True, autospec=True) def test__enable_boot_config(self, mock_aw, mock_client_pywsman): namespace = resource_uris.CIM_BootService result_xml = test_utils.build_soap_xml([{'ReturnValue': '0'}], namespace) mock_xml = test_utils.mock_wsman_root(result_xml) mock_pywsman = mock_client_pywsman.Client.return_value mock_pywsman.invoke.return_value = mock_xml amt_mgmt._enable_boot_config(self.node) mock_pywsman.invoke.assert_called_once_with( mock.ANY, namespace, 'SetBootConfigRole', mock.ANY) self.assertTrue(mock_aw.called) @mock.patch.object(amt_common, 'awake_amt_interface', spec_set=True, autospec=True) def test__enable_boot_config_fail(self, mock_aw, mock_client_pywsman): namespace = resource_uris.CIM_BootService result_xml = test_utils.build_soap_xml([{'ReturnValue': '2'}], namespace) mock_xml = 

    @mock.patch.object(amt_common, 'awake_amt_interface', spec_set=True,
                       autospec=True)
    def test__enable_boot_config_fail(self, mock_aw, mock_client_pywsman):
        namespace = resource_uris.CIM_BootService
        result_xml = test_utils.build_soap_xml([{'ReturnValue': '2'}],
                                               namespace)
        mock_xml = test_utils.mock_wsman_root(result_xml)
        mock_pywsman = mock_client_pywsman.Client.return_value
        mock_pywsman.invoke.return_value = mock_xml

        self.assertRaises(exception.AMTFailure,
                          amt_mgmt._enable_boot_config, self.node)
        mock_pywsman.invoke.assert_called_once_with(
            mock.ANY, namespace, 'SetBootConfigRole', mock.ANY)

        mock_pywsman = mock_client_pywsman.Client.return_value
        mock_pywsman.invoke.return_value = None

        self.assertRaises(exception.AMTConnectFailure,
                          amt_mgmt._enable_boot_config, self.node)
        self.assertTrue(mock_aw.called)


class AMTManagementTestCase(db_base.DbTestCase):

    def setUp(self):
        super(AMTManagementTestCase, self).setUp()
        mgr_utils.mock_the_extension_manager(driver='fake_amt')
        self.info = INFO_DICT
        self.node = obj_utils.create_test_node(self.context,
                                               driver='fake_amt',
                                               driver_info=self.info)

    def test_get_properties(self):
        expected = amt_common.COMMON_PROPERTIES
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertEqual(expected, task.driver.get_properties())

    @mock.patch.object(amt_common, 'parse_driver_info', spec_set=True,
                       autospec=True)
    def test_validate(self, mock_drvinfo):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.management.validate(task)
            mock_drvinfo.assert_called_once_with(task.node)

    @mock.patch.object(amt_common, 'parse_driver_info', spec_set=True,
                       autospec=True)
    def test_validate_fail(self, mock_drvinfo):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            mock_drvinfo.side_effect = iter(
                [exception.InvalidParameterValue('x')])
            self.assertRaises(exception.InvalidParameterValue,
                              task.driver.management.validate,
                              task)

    def test_get_supported_boot_devices(self):
        expected = [boot_devices.PXE, boot_devices.DISK, boot_devices.CDROM]
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertEqual(
                sorted(expected),
                sorted(task.driver.management.
                       get_supported_boot_devices(task)))

    def test_set_boot_device_one_time(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.management.set_boot_device(task, 'pxe')
            self.assertEqual(
                'pxe', task.node.driver_internal_info["amt_boot_device"])
            self.assertFalse(
                task.node.driver_internal_info["amt_boot_persistent"])

    def test_set_boot_device_persistent(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.management.set_boot_device(task, 'pxe',
                                                   persistent=True)
            self.assertEqual(
                'pxe', task.node.driver_internal_info["amt_boot_device"])
            self.assertTrue(
                task.node.driver_internal_info["amt_boot_persistent"])

    def test_set_boot_device_fail(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              task.driver.management.set_boot_device,
                              task, 'fake-device')

    @mock.patch.object(amt_mgmt, '_enable_boot_config', spec_set=True,
                       autospec=True)
    @mock.patch.object(amt_mgmt, '_set_boot_device_order', spec_set=True,
                       autospec=True)
    def test_ensure_next_boot_device_one_time(self, mock_sbdo, mock_ebc):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            device = boot_devices.PXE
            task.node.driver_internal_info['amt_boot_device'] = 'pxe'
            task.driver.management.ensure_next_boot_device(task.node, device)
            self.assertEqual(
                'disk', task.node.driver_internal_info["amt_boot_device"])
            self.assertTrue(
                task.node.driver_internal_info["amt_boot_persistent"])
            mock_sbdo.assert_called_once_with(task.node, device)
            mock_ebc.assert_called_once_with(task.node)

    @mock.patch.object(amt_mgmt, '_enable_boot_config', spec_set=True,
                       autospec=True)
    @mock.patch.object(amt_mgmt, '_set_boot_device_order', spec_set=True,
                       autospec=True)
    def test_ensure_next_boot_device_persistent(self, mock_sbdo, mock_ebc):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            device = boot_devices.PXE
            task.node.driver_internal_info['amt_boot_device'] = 'pxe'
            task.node.driver_internal_info['amt_boot_persistent'] = True
            task.driver.management.ensure_next_boot_device(task.node, device)
            self.assertEqual(
                'pxe', task.node.driver_internal_info["amt_boot_device"])
            self.assertTrue(
                task.node.driver_internal_info["amt_boot_persistent"])
            mock_sbdo.assert_called_once_with(task.node, device)
            mock_ebc.assert_called_once_with(task.node)

    def test_get_boot_device(self):
        expected = {'boot_device': boot_devices.DISK, 'persistent': True}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertEqual(expected,
                             task.driver.management.get_boot_device(task))

    def test_get_sensor_data(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(NotImplementedError,
                              task.driver.management.get_sensors_data,
                              task)
ironic-5.1.0/ironic/tests/unit/drivers/modules/amt/test_common.py0000664000567000056710000002177712674513466026401 0ustar jenkinsjenkins00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
""" Test class for AMT Common """ import mock from oslo_concurrency import processutils from oslo_config import cfg import time from ironic.common import exception from ironic.common import utils from ironic.drivers.modules.amt import common as amt_common from ironic.drivers.modules.amt import resource_uris from ironic.tests import base from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.drivers.modules.amt import utils as test_utils from ironic.tests.unit.drivers import third_party_driver_mock_specs \ as mock_specs from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = db_utils.get_test_amt_info() CONF = cfg.CONF class AMTCommonMethodsTestCase(db_base.DbTestCase): def setUp(self): super(AMTCommonMethodsTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context, driver='fake_amt', driver_info=INFO_DICT) def test_parse_driver_info(self): info = amt_common.parse_driver_info(self.node) self.assertIsNotNone(info.get('address')) self.assertIsNotNone(info.get('username')) self.assertIsNotNone(info.get('password')) self.assertIsNotNone(info.get('protocol')) self.assertIsNotNone(info.get('uuid')) def test_parse_driver_info_missing_address(self): del self.node.driver_info['amt_address'] self.assertRaises(exception.MissingParameterValue, amt_common.parse_driver_info, self.node) def test_parse_driver_info_missing_username(self): del self.node.driver_info['amt_username'] self.assertRaises(exception.MissingParameterValue, amt_common.parse_driver_info, self.node) def test_parse_driver_info_missing_password(self): del self.node.driver_info['amt_password'] self.assertRaises(exception.MissingParameterValue, amt_common.parse_driver_info, self.node) def test_parse_driver_info_missing_protocol(self): del self.node.driver_info['amt_protocol'] info = amt_common.parse_driver_info(self.node) self.assertEqual('http', info.get('protocol')) def test_parse_driver_info_wrong_protocol(self): self.node.driver_info['amt_protocol'] = 'fake-protocol' self.assertRaises(exception.InvalidParameterValue, amt_common.parse_driver_info, self.node) @mock.patch.object(amt_common, 'Client', spec_set=True, autospec=True) def test_get_wsman_client(self, mock_client): info = amt_common.parse_driver_info(self.node) amt_common.get_wsman_client(self.node) options = {'address': info['address'], 'protocol': info['protocol'], 'username': info['username'], 'password': info['password']} mock_client.assert_called_once_with(**options) def test_xml_find(self): namespace = 'http://fake' value = 'fake_value' test_xml = test_utils.build_soap_xml([{'test_element': value}], namespace) mock_doc = test_utils.mock_wsman_root(test_xml) result = amt_common.xml_find(mock_doc, namespace, 'test_element') self.assertEqual(value, result.text) def test_xml_find_fail(self): mock_doc = None self.assertRaises(exception.AMTConnectFailure, amt_common.xml_find, mock_doc, 'namespace', 'test_element') @mock.patch.object(amt_common, 'pywsman', spec_set=mock_specs.PYWSMAN_SPEC) class AMTCommonClientTestCase(base.TestCase): def setUp(self): super(AMTCommonClientTestCase, self).setUp() self.info = {key[4:]: INFO_DICT[key] for key in INFO_DICT.keys()} def test_wsman_get(self, mock_client_pywsman): namespace = resource_uris.CIM_AssociatedPowerManagementService result_xml = test_utils.build_soap_xml([{'PowerState': '2'}], namespace) mock_doc = test_utils.mock_wsman_root(result_xml) mock_pywsman = mock_client_pywsman.Client.return_value mock_pywsman.get.return_value = mock_doc 

    def test_wsman_get(self, mock_client_pywsman):
        namespace = resource_uris.CIM_AssociatedPowerManagementService
        result_xml = test_utils.build_soap_xml([{'PowerState': '2'}],
                                               namespace)
        mock_doc = test_utils.mock_wsman_root(result_xml)
        mock_pywsman = mock_client_pywsman.Client.return_value
        mock_pywsman.get.return_value = mock_doc

        client = amt_common.Client(**self.info)
        client.wsman_get(namespace)
        mock_pywsman.get.assert_called_once_with(mock.ANY, namespace)

    def test_wsman_get_fail(self, mock_client_pywsman):
        namespace = amt_common._SOAP_ENVELOPE
        result_xml = test_utils.build_soap_xml([{'Fault': 'fault'}],
                                               namespace)
        mock_doc = test_utils.mock_wsman_root(result_xml)
        mock_pywsman = mock_client_pywsman.Client.return_value
        mock_pywsman.get.return_value = mock_doc

        client = amt_common.Client(**self.info)
        self.assertRaises(exception.AMTFailure, client.wsman_get, namespace)
        mock_pywsman.get.assert_called_once_with(mock.ANY, namespace)

    def test_wsman_invoke(self, mock_client_pywsman):
        namespace = resource_uris.CIM_BootSourceSetting
        result_xml = test_utils.build_soap_xml([{'ReturnValue': '0'}],
                                               namespace)
        mock_doc = test_utils.mock_wsman_root(result_xml)
        mock_pywsman = mock_client_pywsman.Client.return_value
        mock_pywsman.invoke.return_value = mock_doc

        method = 'ChangeBootOrder'
        options = mock.Mock(spec_set=[])
        client = amt_common.Client(**self.info)

        doc = None
        client.wsman_invoke(options, namespace, method, doc)
        mock_pywsman.invoke.assert_called_once_with(options, namespace,
                                                    method)
        doc = 'fake-input'
        client.wsman_invoke(options, namespace, method, doc)
        mock_pywsman.invoke.assert_called_with(options, namespace, method,
                                               doc)

    def test_wsman_invoke_fail(self, mock_client_pywsman):
        namespace = resource_uris.CIM_BootSourceSetting
        result_xml = test_utils.build_soap_xml([{'ReturnValue': '2'}],
                                               namespace)
        mock_doc = test_utils.mock_wsman_root(result_xml)
        mock_pywsman = mock_client_pywsman.Client.return_value
        mock_pywsman.invoke.return_value = mock_doc

        method = 'fake-method'
        options = mock.Mock(spec_set=[])

        client = amt_common.Client(**self.info)
        self.assertRaises(exception.AMTFailure,
                          client.wsman_invoke,
                          options, namespace, method)
        mock_pywsman.invoke.assert_called_once_with(options, namespace,
                                                    method)


class AwakeAMTInterfaceTestCase(db_base.DbTestCase):

    def setUp(self):
        super(AwakeAMTInterfaceTestCase, self).setUp()
        amt_common.AMT_AWAKE_CACHE = {}
        self.info = INFO_DICT
        self.node = obj_utils.create_test_node(self.context,
                                               driver='fake_amt',
                                               driver_info=self.info)

    @mock.patch.object(utils, 'execute', spec_set=True, autospec=True)
    def test_awake_amt_interface(self, mock_ex):
        amt_common.awake_amt_interface(self.node)
        expected_args = ['ping', '-i', 0.2, '-c', 5, '1.2.3.4']
        mock_ex.assert_called_once_with(*expected_args)

    @mock.patch.object(utils, 'execute', spec_set=True, autospec=True)
    def test_awake_amt_interface_fail(self, mock_ex):
        mock_ex.side_effect = processutils.ProcessExecutionError('x')
        self.assertRaises(exception.AMTConnectFailure,
                          amt_common.awake_amt_interface, self.node)

    @mock.patch.object(utils, 'execute', spec_set=True, autospec=True)
    def test_awake_amt_interface_in_cache_time(self, mock_ex):
        amt_common.AMT_AWAKE_CACHE[self.node.uuid] = time.time()
        amt_common.awake_amt_interface(self.node)
        self.assertFalse(mock_ex.called)

    @mock.patch.object(utils, 'execute', spec_set=True, autospec=True)
    def test_awake_amt_interface_disable(self, mock_ex):
        CONF.set_override('awake_interval', 0, 'amt')
        amt_common.awake_amt_interface(self.node)
        self.assertFalse(mock_ex.called)

    def test_out_range_protocol(self):
        self.assertRaises(ValueError,
                          cfg.CONF.set_override,
                          'protocol', 'fake', 'amt',
                          enforce_type=True)
ironic-5.1.0/ironic/tests/unit/drivers/modules/amt/test_power.py0000664000567000056710000003364612674513466026243 0ustar jenkinsjenkins00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Test class for AMT PowerInterface
"""

import mock
from oslo_config import cfg

from ironic.common import boot_devices
from ironic.common import exception
from ironic.common import states
from ironic.conductor import task_manager
from ironic.drivers.modules.amt import common as amt_common
from ironic.drivers.modules.amt import management as amt_mgmt
from ironic.drivers.modules.amt import power as amt_power
from ironic.drivers.modules.amt import resource_uris
from ironic.tests.unit.conductor import mgr_utils
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.drivers.modules.amt import utils as test_utils
from ironic.tests.unit.objects import utils as obj_utils

INFO_DICT = db_utils.get_test_amt_info()
CONF = cfg.CONF


class AMTPowerInternalMethodsTestCase(db_base.DbTestCase):

    def setUp(self):
        super(AMTPowerInternalMethodsTestCase, self).setUp()
        mgr_utils.mock_the_extension_manager(driver='fake_amt')
        self.info = INFO_DICT
        self.node = obj_utils.create_test_node(self.context,
                                               driver='fake_amt',
                                               driver_info=self.info)
        CONF.set_override('max_attempts', 2, 'amt')
        CONF.set_override('action_wait', 0, 'amt')

    @mock.patch.object(amt_common, 'awake_amt_interface', spec_set=True,
                       autospec=True)
    @mock.patch.object(amt_common, 'get_wsman_client', spec_set=True,
                       autospec=True)
    def test__set_power_state(self, mock_client_pywsman, mock_aw):
        namespace = resource_uris.CIM_PowerManagementService
        mock_client = mock_client_pywsman.return_value
        amt_power._set_power_state(self.node, states.POWER_ON)
        mock_client.wsman_invoke.assert_called_once_with(
            mock.ANY, namespace, 'RequestPowerStateChange', mock.ANY)
        self.assertTrue(mock_aw.called)

    @mock.patch.object(amt_common, 'awake_amt_interface', spec_set=True,
                       autospec=True)
    @mock.patch.object(amt_common, 'get_wsman_client', spec_set=True,
                       autospec=True)
    def test__set_power_state_fail(self, mock_client_pywsman, mock_aw):
        mock_client = mock_client_pywsman.return_value
        mock_client.wsman_invoke.side_effect = exception.AMTFailure('x')
        self.assertRaises(exception.AMTFailure,
                          amt_power._set_power_state,
                          self.node, states.POWER_ON)
        self.assertTrue(mock_aw.called)
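
    # The PowerState codes below follow the DMTF CIM power-state model:
    # '2' means "On" and '8' "Off - Soft", while values the driver does not
    # recognize (such as '4') are reported as states.ERROR.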

    @mock.patch.object(amt_common, 'awake_amt_interface', spec_set=True,
                       autospec=True)
    @mock.patch.object(amt_common, 'get_wsman_client', spec_set=True,
                       autospec=True)
    def test__power_status(self, mock_gwc, mock_aw):
        namespace = resource_uris.CIM_AssociatedPowerManagementService
        result_xml = test_utils.build_soap_xml([{'PowerState': '2'}],
                                               namespace)
        mock_doc = test_utils.mock_wsman_root(result_xml)
        mock_client = mock_gwc.return_value
        mock_client.wsman_get.return_value = mock_doc
        self.assertEqual(
            states.POWER_ON, amt_power._power_status(self.node))

        result_xml = test_utils.build_soap_xml([{'PowerState': '8'}],
                                               namespace)
        mock_doc = test_utils.mock_wsman_root(result_xml)
        mock_client = mock_gwc.return_value
        mock_client.wsman_get.return_value = mock_doc
        self.assertEqual(
            states.POWER_OFF, amt_power._power_status(self.node))

        result_xml = test_utils.build_soap_xml([{'PowerState': '4'}],
                                               namespace)
        mock_doc = test_utils.mock_wsman_root(result_xml)
        mock_client = mock_gwc.return_value
        mock_client.wsman_get.return_value = mock_doc
        self.assertEqual(
            states.ERROR, amt_power._power_status(self.node))
        self.assertTrue(mock_aw.called)

    @mock.patch.object(amt_common, 'awake_amt_interface', spec_set=True,
                       autospec=True)
    @mock.patch.object(amt_common, 'get_wsman_client', spec_set=True,
                       autospec=True)
    def test__power_status_fail(self, mock_gwc, mock_aw):
        mock_client = mock_gwc.return_value
        mock_client.wsman_get.side_effect = exception.AMTFailure('x')
        self.assertRaises(exception.AMTFailure,
                          amt_power._power_status,
                          self.node)
        self.assertTrue(mock_aw.called)

    @mock.patch.object(amt_mgmt.AMTManagement, 'ensure_next_boot_device',
                       spec_set=True, autospec=True)
    @mock.patch.object(amt_power, '_power_status', spec_set=True,
                       autospec=True)
    @mock.patch.object(amt_power, '_set_power_state', spec_set=True,
                       autospec=True)
    def test__set_and_wait_power_on_with_boot_device(self, mock_sps,
                                                     mock_ps, mock_enbd):
        target_state = states.POWER_ON
        boot_device = boot_devices.PXE
        mock_ps.side_effect = iter([states.POWER_OFF, states.POWER_ON])
        mock_enbd.return_value = None
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.driver_internal_info['amt_boot_device'] = boot_device
            result = amt_power._set_and_wait(task, target_state)
            self.assertEqual(states.POWER_ON, result)
            mock_enbd.assert_called_with(task.driver.management,
                                         task.node, boot_devices.PXE)
            mock_sps.assert_called_once_with(task.node, states.POWER_ON)
            mock_ps.assert_called_with(task.node)

    @mock.patch.object(amt_power, '_power_status', spec_set=True,
                       autospec=True)
    @mock.patch.object(amt_power, '_set_power_state', spec_set=True,
                       autospec=True)
    def test__set_and_wait_power_on_without_boot_device(self, mock_sps,
                                                        mock_ps):
        target_state = states.POWER_ON
        mock_ps.side_effect = iter([states.POWER_OFF, states.POWER_ON])
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertEqual(states.POWER_ON,
                             amt_power._set_and_wait(task, target_state))
            mock_sps.assert_called_once_with(task.node, states.POWER_ON)
            mock_ps.assert_called_with(task.node)

        boot_device = boot_devices.DISK
        self.node.driver_internal_info['amt_boot_device'] = boot_device
        mock_ps.side_effect = iter([states.POWER_OFF, states.POWER_ON])
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertEqual(states.POWER_ON,
                             amt_power._set_and_wait(task, target_state))
            mock_sps.assert_called_with(task.node, states.POWER_ON)
            mock_ps.assert_called_with(task.node)

    def test__set_and_wait_wrong_target_state(self):
        target_state = 'fake-state'
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              amt_power._set_and_wait,
                              task, target_state)

    @mock.patch.object(amt_power, '_power_status', spec_set=True,
                       autospec=True)
    @mock.patch.object(amt_power, '_set_power_state', spec_set=True,
                       autospec=True)
    def test__set_and_wait_exceed_iterations(self, mock_sps, mock_ps):
        target_state = states.POWER_ON
        mock_ps.side_effect = iter([states.POWER_OFF, states.POWER_OFF,
                                    states.POWER_OFF])
        mock_sps.return_value = exception.AMTFailure('x')
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.PowerStateFailure,
                              amt_power._set_and_wait,
                              task, target_state)
            mock_sps.assert_called_with(task.node, states.POWER_ON)
            mock_ps.assert_called_with(task.node)
            self.assertEqual(3, mock_ps.call_count)

    @mock.patch.object(amt_power, '_power_status', spec_set=True,
                       autospec=True)
    def test__set_and_wait_already_target_state(self, mock_ps):
        target_state = states.POWER_ON
        mock_ps.side_effect = iter([states.POWER_ON])
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertEqual(states.POWER_ON,
                             amt_power._set_and_wait(task, target_state))
            mock_ps.assert_called_with(task.node)

    @mock.patch.object(amt_power, '_power_status', spec_set=True,
                       autospec=True)
    @mock.patch.object(amt_power, '_set_power_state', spec_set=True,
                       autospec=True)
    def test__set_and_wait_power_off(self, mock_sps, mock_ps):
        target_state = states.POWER_OFF
        mock_ps.side_effect = iter([states.POWER_ON, states.POWER_OFF])
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertEqual(states.POWER_OFF,
                             amt_power._set_and_wait(task, target_state))
            mock_sps.assert_called_once_with(task.node, states.POWER_OFF)
            mock_ps.assert_called_with(task.node)


class AMTPowerTestCase(db_base.DbTestCase):

    def setUp(self):
        super(AMTPowerTestCase, self).setUp()
        mgr_utils.mock_the_extension_manager(driver='fake_amt')
        self.info = INFO_DICT
        self.node = obj_utils.create_test_node(self.context,
                                               driver='fake_amt',
                                               driver_info=self.info)

    def test_get_properties(self):
        expected = amt_common.COMMON_PROPERTIES
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertEqual(expected, task.driver.get_properties())

    @mock.patch.object(amt_common, 'parse_driver_info', spec_set=True,
                       autospec=True)
    def test_validate(self, mock_drvinfo):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.power.validate(task)
            mock_drvinfo.assert_called_once_with(task.node)

    @mock.patch.object(amt_common, 'parse_driver_info', spec_set=True,
                       autospec=True)
    def test_validate_fail(self, mock_drvinfo):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            mock_drvinfo.side_effect = iter(
                [exception.InvalidParameterValue('x')])
            self.assertRaises(exception.InvalidParameterValue,
                              task.driver.power.validate,
                              task)

    @mock.patch.object(amt_power, '_power_status', spec_set=True,
                       autospec=True)
    def test_get_power_state(self, mock_ps):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            mock_ps.return_value = states.POWER_ON
            self.assertEqual(states.POWER_ON,
                             task.driver.power.get_power_state(task))
            mock_ps.assert_called_once_with(task.node)

    @mock.patch.object(amt_power, '_set_and_wait', spec_set=True,
                       autospec=True)
    def test_set_power_state(self, mock_saw):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            pstate = states.POWER_ON
            mock_saw.return_value = states.POWER_ON
            task.driver.power.set_power_state(task, pstate)
            mock_saw.assert_called_once_with(task, pstate)

    @mock.patch.object(amt_power, '_set_and_wait', spec_set=True,
                       autospec=True)
    def test_set_power_state_fail(self, mock_saw):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            pstate = states.POWER_ON
            mock_saw.side_effect = iter([exception.PowerStateFailure('x')])
            self.assertRaises(exception.PowerStateFailure,
                              task.driver.power.set_power_state,
                              task, pstate)
            mock_saw.assert_called_once_with(task, pstate)

    @mock.patch.object(amt_power, '_set_and_wait', spec_set=True,
                       autospec=True)
    def test_reboot(self, mock_saw):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.power.reboot(task)
            calls = [mock.call(task, states.POWER_OFF),
                     mock.call(task, states.POWER_ON)]
            mock_saw.assert_has_calls(calls)
ironic-5.1.0/ironic/tests/unit/drivers/modules/amt/__init__.py0000664000567000056710000000000012674513466025571 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/drivers/modules/amt/test_vendor.py0000664000567000056710000001471612674513466026405 0ustar jenkinsjenkins00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Test class for AMT Vendor methods."""

import mock

from ironic.common import boot_devices
from ironic.common import states
from ironic.conductor import task_manager
from ironic.drivers.modules.amt import management as amt_mgmt
from ironic.drivers.modules import iscsi_deploy
from ironic.tests.unit.conductor import mgr_utils
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils

INFO_DICT = db_utils.get_test_amt_info()


class AMTPXEVendorPassthruTestCase(db_base.DbTestCase):

    def setUp(self):
        super(AMTPXEVendorPassthruTestCase, self).setUp()
        mgr_utils.mock_the_extension_manager(driver="pxe_amt")
        self.node = obj_utils.create_test_node(
            self.context, driver='pxe_amt', driver_info=INFO_DICT)

    def test_vendor_routes(self):
        expected = ['heartbeat', 'pass_deploy_info',
                    'pass_bootloader_install_info']
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            vendor_routes = task.driver.vendor.vendor_routes
            self.assertIsInstance(vendor_routes, dict)
            self.assertEqual(sorted(expected), sorted(list(vendor_routes)))

    def test_driver_routes(self):
        expected = ['lookup']
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            driver_routes = task.driver.vendor.driver_routes
            self.assertIsInstance(driver_routes, dict)
            self.assertEqual(sorted(expected), sorted(list(driver_routes)))

    @mock.patch.object(amt_mgmt.AMTManagement, 'ensure_next_boot_device',
                       spec_set=True, autospec=True)
    @mock.patch.object(iscsi_deploy.VendorPassthru, 'pass_deploy_info',
                       spec_set=True, autospec=True)
    def test_vendorpassthru_pass_deploy_info_netboot(
            self, mock_pxe_vendorpassthru, mock_ensure):
        kwargs = {'address': '123456'}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.provision_state = states.DEPLOYWAIT
            task.node.target_provision_state = states.ACTIVE
            task.node.instance_info['capabilities'] = {
                "boot_option": "netboot"
            }
            task.driver.vendor.pass_deploy_info(task, **kwargs)
            mock_ensure.assert_called_with(
                task.driver.management, task.node, boot_devices.PXE)
            mock_pxe_vendorpassthru.assert_called_once_with(
                task.driver.vendor, task, **kwargs)

    @mock.patch.object(amt_mgmt.AMTManagement, 'ensure_next_boot_device',
                       spec_set=True, autospec=True)
    @mock.patch.object(iscsi_deploy.VendorPassthru, 'pass_deploy_info',
                       spec_set=True, autospec=True)
    def test_vendorpassthru_pass_deploy_info_localboot(
            self, mock_pxe_vendorpassthru, mock_ensure):
        kwargs = {'address': '123456'}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.provision_state = states.DEPLOYWAIT
            task.node.target_provision_state = states.ACTIVE
task.node.instance_info['capabilities'] = {"boot_option": "local"} task.driver.vendor.pass_deploy_info(task, **kwargs) self.assertFalse(mock_ensure.called) mock_pxe_vendorpassthru.assert_called_once_with( task.driver.vendor, task, **kwargs) @mock.patch.object(amt_mgmt.AMTManagement, 'ensure_next_boot_device', spec_set=True, autospec=True) @mock.patch.object(iscsi_deploy.VendorPassthru, 'continue_deploy', spec_set=True, autospec=True) def test_vendorpassthru_continue_deploy_netboot(self, mock_pxe_vendorpassthru, mock_ensure): kwargs = {'address': '123456'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.provision_state = states.DEPLOYWAIT task.node.target_provision_state = states.ACTIVE task.node.instance_info['capabilities'] = { "boot_option": "netboot" } task.driver.vendor.continue_deploy(task, **kwargs) mock_ensure.assert_called_with( task.driver.management, task.node, boot_devices.PXE) mock_pxe_vendorpassthru.assert_called_once_with( task.driver.vendor, task, **kwargs) @mock.patch.object(amt_mgmt.AMTManagement, 'ensure_next_boot_device', spec_set=True, autospec=True) @mock.patch.object(iscsi_deploy.VendorPassthru, 'continue_deploy', spec_set=True, autospec=True) def test_vendorpassthru_continue_deploy_localboot(self, mock_pxe_vendorpassthru, mock_ensure): kwargs = {'address': '123456'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.provision_state = states.DEPLOYWAIT task.node.target_provision_state = states.ACTIVE task.node.instance_info['capabilities'] = {"boot_option": "local"} task.driver.vendor.continue_deploy(task, **kwargs) self.assertFalse(mock_ensure.called) mock_pxe_vendorpassthru.assert_called_once_with( task.driver.vendor, task, **kwargs) ironic-5.1.0/ironic/tests/unit/drivers/modules/amt/utils.py0000664000567000056710000000461012674513466025205 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # # Copyright 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from xml.etree import ElementTree import mock def build_soap_xml(items, namespace=None): """Build a SOAP XML. :param items: a list of dictionaries where key is the element name and the value is the element text. :param namespace: the namespace for the elements, None for no namespace. Defaults to None :returns: an XML string.
""" def _create_element(name, value=None): xml_string = name if namespace: xml_string = "{%(namespace)s}%(item)s" % {'namespace': namespace, 'item': xml_string} element = ElementTree.Element(xml_string) element.text = value return element soap_namespace = "http://www.w3.org/2003/05/soap-envelope" envelope_element = ElementTree.Element("{%s}Envelope" % soap_namespace) body_element = ElementTree.Element("{%s}Body" % soap_namespace) for item in items: for i in item: insertion_point = _create_element(i) if isinstance(item[i], dict): for j, value in item[i].items(): insertion_point.append(_create_element(j, value)) else: insertion_point.text = item[i] body_element.append(insertion_point) envelope_element.append(body_element) return ElementTree.tostring(envelope_element) def mock_wsman_root(return_value): """Helper function to mock the root() from wsman client.""" mock_xml_root = mock.Mock(spec_set=['string']) mock_xml_root.string.return_value = return_value mock_xml = mock.Mock(spec_set=['context', 'root']) mock_xml.context.return_value = None mock_xml.root.return_value = mock_xml_root return mock_xml ironic-5.1.0/ironic/tests/unit/drivers/modules/test_wol.py0000664000567000056710000002157512674513466025135 0ustar jenkinsjenkins00000000000000# Copyright 2015 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Test class for Wake-On-Lan driver module.""" import socket import time import mock from oslo_utils import uuidutils from ironic.common import driver_factory from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules import wol from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils @mock.patch.object(time, 'sleep', lambda *_: None) class WakeOnLanPrivateMethodTestCase(db_base.DbTestCase): def setUp(self): super(WakeOnLanPrivateMethodTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver='fake_wol') self.driver = driver_factory.get_driver('fake_wol') self.node = obj_utils.create_test_node(self.context, driver='fake_wol') self.port = obj_utils.create_test_port(self.context, node_id=self.node.id) def test__parse_parameters(self): with task_manager.acquire( self.context, self.node.uuid, shared=True) as task: params = wol._parse_parameters(task) self.assertEqual('255.255.255.255', params['host']) self.assertEqual(9, params['port']) def test__parse_parameters_non_default_params(self): with task_manager.acquire( self.context, self.node.uuid, shared=True) as task: task.node.driver_info = {'wol_host': '1.2.3.4', 'wol_port': 7} params = wol._parse_parameters(task) self.assertEqual('1.2.3.4', params['host']) self.assertEqual(7, params['port']) def test__parse_parameters_no_ports_fail(self): node = obj_utils.create_test_node( self.context, uuid=uuidutils.generate_uuid(), driver='fake_wol') with task_manager.acquire( self.context, node.uuid, shared=True) as task: self.assertRaises(exception.InvalidParameterValue, wol._parse_parameters, task) @mock.patch.object(socket, 'socket', autospec=True, spec_set=True) def test_send_magic_packets(self, mock_socket): fake_socket = mock.Mock(spec=socket, spec_set=True) mock_socket.return_value = fake_socket() obj_utils.create_test_port(self.context, uuid=uuidutils.generate_uuid(), address='aa:bb:cc:dd:ee:ff', node_id=self.node.id) with task_manager.acquire( self.context, self.node.uuid, shared=True) as task: wol._send_magic_packets(task, '255.255.255.255', 9) expected_calls = [ mock.call(), mock.call().setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1), mock.call().sendto(mock.ANY, ('255.255.255.255', 9)), mock.call().sendto(mock.ANY, ('255.255.255.255', 9)), mock.call().close()] fake_socket.assert_has_calls(expected_calls) self.assertEqual(1, mock_socket.call_count) @mock.patch.object(socket, 'socket', autospec=True, spec_set=True) def test_send_magic_packets_network_sendto_error(self, mock_socket): fake_socket = mock.Mock(spec=socket, spec_set=True) fake_socket.return_value.sendto.side_effect = socket.error('boom') mock_socket.return_value = fake_socket() with task_manager.acquire( self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.WolOperationError, wol._send_magic_packets, task, '255.255.255.255', 9) self.assertEqual(1, mock_socket.call_count) # assert sendt0() was invoked fake_socket.return_value.sendto.assert_called_once_with( mock.ANY, ('255.255.255.255', 9)) @mock.patch.object(socket, 'socket', autospec=True, spec_set=True) def test_magic_packet_format(self, mock_socket): fake_socket = mock.Mock(spec=socket, spec_set=True) mock_socket.return_value = fake_socket() with task_manager.acquire( self.context, self.node.uuid, shared=True) as task: wol._send_magic_packets(task, '255.255.255.255', 9) expct_packet = 
(b'\xff\xff\xff\xff\xff\xffRT\x00\xcf-1RT\x00' b'\xcf-1RT\x00\xcf-1RT\x00\xcf-1RT\x00\xcf-1RT' b'\x00\xcf-1RT\x00\xcf-1RT\x00\xcf-1RT\x00' b'\xcf-1RT\x00\xcf-1RT\x00\xcf-1RT\x00\xcf-1RT' b'\x00\xcf-1RT\x00\xcf-1RT\x00\xcf-1RT\x00\xcf-1') mock_socket.return_value.sendto.assert_called_once_with( expct_packet, ('255.255.255.255', 9)) @mock.patch.object(time, 'sleep', lambda *_: None) class WakeOnLanDriverTestCase(db_base.DbTestCase): def setUp(self): super(WakeOnLanDriverTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver='fake_wol') self.driver = driver_factory.get_driver('fake_wol') self.node = obj_utils.create_test_node(self.context, driver='fake_wol') self.port = obj_utils.create_test_port(self.context, node_id=self.node.id) def test_get_properties(self): expected = wol.COMMON_PROPERTIES with task_manager.acquire( self.context, self.node.uuid, shared=True) as task: self.assertEqual(expected, task.driver.get_properties()) def test_get_power_state(self): with task_manager.acquire( self.context, self.node.uuid, shared=True) as task: task.node.power_state = states.POWER_ON pstate = task.driver.power.get_power_state(task) self.assertEqual(states.POWER_ON, pstate) def test_get_power_state_nostate(self): with task_manager.acquire( self.context, self.node.uuid, shared=True) as task: task.node.power_state = states.NOSTATE pstate = task.driver.power.get_power_state(task) self.assertEqual(states.POWER_OFF, pstate) @mock.patch.object(wol, '_send_magic_packets', autospec=True, spec_set=True) def test_set_power_state_power_on(self, mock_magic): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.power.set_power_state(task, states.POWER_ON) mock_magic.assert_called_once_with(task, '255.255.255.255', 9) @mock.patch.object(wol.LOG, 'info', autospec=True, spec_set=True) @mock.patch.object(wol, '_send_magic_packets', autospec=True, spec_set=True) def test_set_power_state_power_off(self, mock_magic, mock_log): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.power.set_power_state(task, states.POWER_OFF) mock_log.assert_called_once_with(mock.ANY, self.node.uuid) # assert magic packets weren't sent self.assertFalse(mock_magic.called) @mock.patch.object(wol, '_send_magic_packets', autospec=True, spec_set=True) def test_set_power_state_power_fail(self, mock_magic): with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.power.set_power_state, task, 'wrong-state') # assert magic packets weren't sent self.assertFalse(mock_magic.called) @mock.patch.object(wol.LOG, 'info', autospec=True, spec_set=True) @mock.patch.object(wol.WakeOnLanPower, 'set_power_state', autospec=True, spec_set=True) def test_reboot(self, mock_power, mock_log): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.power.reboot(task) mock_log.assert_called_once_with(mock.ANY, self.node.uuid) mock_power.assert_called_once_with(task.driver.power, task, states.POWER_ON) ironic-5.1.0/ironic/tests/unit/drivers/modules/test_virtualbox.py0000664000567000056710000004454412674513466026534 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Test class for VirtualBox Driver Modules.""" import mock from oslo_config import cfg from pyremotevbox import exception as pyremotevbox_exc from pyremotevbox import vbox as pyremotevbox_vbox from ironic.common import boot_devices from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules import virtualbox from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = { 'virtualbox_vmname': 'baremetal1', 'virtualbox_host': '10.0.2.2', 'virtualbox_username': 'username', 'virtualbox_password': 'password', 'virtualbox_port': 12345, } CONF = cfg.CONF class VirtualBoxMethodsTestCase(db_base.DbTestCase): def setUp(self): super(VirtualBoxMethodsTestCase, self).setUp() driver_info = INFO_DICT.copy() mgr_utils.mock_the_extension_manager(driver="fake_vbox") self.node = obj_utils.create_test_node(self.context, driver='fake_vbox', driver_info=driver_info) def test__parse_driver_info(self): info = virtualbox._parse_driver_info(self.node) self.assertEqual('baremetal1', info['vmname']) self.assertEqual('10.0.2.2', info['host']) self.assertEqual('username', info['username']) self.assertEqual('password', info['password']) self.assertEqual(12345, info['port']) def test__parse_driver_info_missing_vmname(self): del self.node.driver_info['virtualbox_vmname'] self.assertRaises(exception.MissingParameterValue, virtualbox._parse_driver_info, self.node) def test__parse_driver_info_missing_host(self): del self.node.driver_info['virtualbox_host'] self.assertRaises(exception.MissingParameterValue, virtualbox._parse_driver_info, self.node) def test__parse_driver_info_invalid_port(self): self.node.driver_info['virtualbox_port'] = 'invalid-port' self.assertRaises(exception.InvalidParameterValue, virtualbox._parse_driver_info, self.node) def test__parse_driver_info_missing_port(self): del self.node.driver_info['virtualbox_port'] info = virtualbox._parse_driver_info(self.node) self.assertEqual(18083, info['port']) @mock.patch.object(pyremotevbox_vbox, 'VirtualBoxHost', autospec=True) def test__run_virtualbox_method(self, host_mock): host_object_mock = mock.MagicMock(spec_set=['find_vm']) func_mock = mock.MagicMock(spec_set=[]) vm_object_mock = mock.MagicMock(spec_set=['foo'], foo=func_mock) host_mock.return_value = host_object_mock host_object_mock.find_vm.return_value = vm_object_mock func_mock.return_value = 'return-value' return_value = virtualbox._run_virtualbox_method( self.node, 'some-ironic-method', 'foo', 'args', kwarg='kwarg') host_mock.assert_called_once_with(vmname='baremetal1', host='10.0.2.2', username='username', password='password', port=12345) host_object_mock.find_vm.assert_called_once_with('baremetal1') func_mock.assert_called_once_with('args', kwarg='kwarg') self.assertEqual('return-value', return_value) @mock.patch.object(pyremotevbox_vbox, 'VirtualBoxHost', autospec=True) def test__run_virtualbox_method_get_host_fails(self, host_mock): host_mock.side_effect = pyremotevbox_exc.PyRemoteVBoxException 
self.assertRaises(exception.VirtualBoxOperationFailed, virtualbox._run_virtualbox_method, self.node, 'some-ironic-method', 'foo', 'args', kwarg='kwarg') @mock.patch.object(pyremotevbox_vbox, 'VirtualBoxHost', autospec=True) def test__run_virtualbox_method_find_vm_fails(self, host_mock): host_object_mock = mock.MagicMock(spec_set=['find_vm']) host_mock.return_value = host_object_mock exc = pyremotevbox_exc.PyRemoteVBoxException host_object_mock.find_vm.side_effect = exc self.assertRaises(exception.VirtualBoxOperationFailed, virtualbox._run_virtualbox_method, self.node, 'some-ironic-method', 'foo', 'args', kwarg='kwarg') host_mock.assert_called_once_with(vmname='baremetal1', host='10.0.2.2', username='username', password='password', port=12345) host_object_mock.find_vm.assert_called_once_with('baremetal1') @mock.patch.object(pyremotevbox_vbox, 'VirtualBoxHost', autospec=True) def test__run_virtualbox_method_func_fails(self, host_mock): host_object_mock = mock.MagicMock(spec_set=['find_vm']) host_mock.return_value = host_object_mock func_mock = mock.MagicMock() vm_object_mock = mock.MagicMock(spec_set=['foo'], foo=func_mock) host_object_mock.find_vm.return_value = vm_object_mock func_mock.side_effect = pyremotevbox_exc.PyRemoteVBoxException self.assertRaises(exception.VirtualBoxOperationFailed, virtualbox._run_virtualbox_method, self.node, 'some-ironic-method', 'foo', 'args', kwarg='kwarg') host_mock.assert_called_once_with(vmname='baremetal1', host='10.0.2.2', username='username', password='password', port=12345) host_object_mock.find_vm.assert_called_once_with('baremetal1') func_mock.assert_called_once_with('args', kwarg='kwarg') @mock.patch.object(pyremotevbox_vbox, 'VirtualBoxHost', autospec=True) def test__run_virtualbox_method_invalid_method(self, host_mock): host_object_mock = mock.MagicMock(spec_set=['find_vm']) host_mock.return_value = host_object_mock vm_object_mock = mock.MagicMock(spec_set=[]) host_object_mock.find_vm.return_value = vm_object_mock del vm_object_mock.foo self.assertRaises(exception.InvalidParameterValue, virtualbox._run_virtualbox_method, self.node, 'some-ironic-method', 'foo', 'args', kwarg='kwarg') host_mock.assert_called_once_with(vmname='baremetal1', host='10.0.2.2', username='username', password='password', port=12345) host_object_mock.find_vm.assert_called_once_with('baremetal1') @mock.patch.object(pyremotevbox_vbox, 'VirtualBoxHost', autospec=True) def test__run_virtualbox_method_vm_wrong_power_state(self, host_mock): host_object_mock = mock.MagicMock(spec_set=['find_vm']) host_mock.return_value = host_object_mock func_mock = mock.MagicMock(spec_set=[]) vm_object_mock = mock.MagicMock(spec_set=['foo'], foo=func_mock) host_object_mock.find_vm.return_value = vm_object_mock func_mock.side_effect = pyremotevbox_exc.VmInWrongPowerState # _run_virtualbox_method() doesn't catch VmInWrongPowerState and # lets caller handle it. 
self.assertRaises(pyremotevbox_exc.VmInWrongPowerState, virtualbox._run_virtualbox_method, self.node, 'some-ironic-method', 'foo', 'args', kwarg='kwarg') host_mock.assert_called_once_with(vmname='baremetal1', host='10.0.2.2', username='username', password='password', port=12345) host_object_mock.find_vm.assert_called_once_with('baremetal1') func_mock.assert_called_once_with('args', kwarg='kwarg') class VirtualBoxPowerTestCase(db_base.DbTestCase): def setUp(self): super(VirtualBoxPowerTestCase, self).setUp() driver_info = INFO_DICT.copy() mgr_utils.mock_the_extension_manager(driver="fake_vbox") self.node = obj_utils.create_test_node(self.context, driver='fake_vbox', driver_info=driver_info) def test_get_properties(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: properties = task.driver.power.get_properties() self.assertIn('virtualbox_vmname', properties) self.assertIn('virtualbox_host', properties) @mock.patch.object(virtualbox, '_parse_driver_info', autospec=True) def test_validate(self, parse_info_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.validate(task) parse_info_mock.assert_called_once_with(task.node) @mock.patch.object(virtualbox, '_run_virtualbox_method', autospec=True) def test_get_power_state(self, run_method_mock): run_method_mock.return_value = 'PoweredOff' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: power_state = task.driver.power.get_power_state(task) run_method_mock.assert_called_once_with(task.node, 'get_power_state', 'get_power_status') self.assertEqual(states.POWER_OFF, power_state) @mock.patch.object(virtualbox, '_run_virtualbox_method', autospec=True) def test_get_power_state_invalid_state(self, run_method_mock): run_method_mock.return_value = 'invalid-state' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: power_state = task.driver.power.get_power_state(task) run_method_mock.assert_called_once_with(task.node, 'get_power_state', 'get_power_status') self.assertEqual(states.ERROR, power_state) @mock.patch.object(virtualbox, '_run_virtualbox_method', autospec=True) def test_set_power_state_off(self, run_method_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.set_power_state(task, states.POWER_OFF) run_method_mock.assert_called_once_with(task.node, 'set_power_state', 'stop') @mock.patch.object(virtualbox, '_run_virtualbox_method', autospec=True) def test_set_power_state_on(self, run_method_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.set_power_state(task, states.POWER_ON) run_method_mock.assert_called_once_with(task.node, 'set_power_state', 'start') @mock.patch.object(virtualbox, '_run_virtualbox_method', autospec=True) def test_set_power_state_reboot(self, run_method_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.set_power_state(task, states.REBOOT) run_method_mock.assert_any_call(task.node, 'reboot', 'stop') run_method_mock.assert_any_call(task.node, 'reboot', 'start') def test_set_power_state_invalid_state(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.power.set_power_state, task, 'invalid-state') @mock.patch.object(virtualbox, '_run_virtualbox_method', autospec=True) def test_reboot(self, run_method_mock): with 
task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.reboot(task) run_method_mock.assert_any_call(task.node, 'reboot', 'stop') run_method_mock.assert_any_call(task.node, 'reboot', 'start') class VirtualBoxManagementTestCase(db_base.DbTestCase): def setUp(self): super(VirtualBoxManagementTestCase, self).setUp() driver_info = INFO_DICT.copy() mgr_utils.mock_the_extension_manager(driver="fake_vbox") self.node = obj_utils.create_test_node(self.context, driver='fake_vbox', driver_info=driver_info) def test_get_properties(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: properties = task.driver.management.get_properties() self.assertIn('virtualbox_vmname', properties) self.assertIn('virtualbox_host', properties) @mock.patch.object(virtualbox, '_parse_driver_info', autospec=True) def test_validate(self, parse_info_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.validate(task) parse_info_mock.assert_called_once_with(task.node) def test_get_supported_boot_devices(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: devices = task.driver.management.get_supported_boot_devices(task) self.assertIn(boot_devices.PXE, devices) self.assertIn(boot_devices.DISK, devices) self.assertIn(boot_devices.CDROM, devices) @mock.patch.object(virtualbox, '_run_virtualbox_method', autospec=True) def test_get_boot_device_ok(self, run_method_mock): run_method_mock.return_value = 'Network' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ret_val = task.driver.management.get_boot_device(task) run_method_mock.assert_called_once_with(task.node, 'get_boot_device', 'get_boot_device') self.assertEqual(boot_devices.PXE, ret_val['boot_device']) self.assertTrue(ret_val['persistent']) @mock.patch.object(virtualbox, '_run_virtualbox_method', autospec=True) def test_get_boot_device_invalid(self, run_method_mock): run_method_mock.return_value = 'invalid-boot-device' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ret_val = task.driver.management.get_boot_device(task) self.assertIsNone(ret_val['boot_device']) self.assertIsNone(ret_val['persistent']) @mock.patch.object(virtualbox, '_run_virtualbox_method', autospec=True) def test_set_boot_device_ok(self, run_method_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.set_boot_device(task, boot_devices.PXE) run_method_mock.assert_called_once_with(task.node, 'set_boot_device', 'set_boot_device', 'Network') @mock.patch.object(virtualbox, 'LOG', autospec=True) @mock.patch.object(virtualbox, '_run_virtualbox_method', autospec=True) def test_set_boot_device_wrong_power_state(self, run_method_mock, log_mock): run_method_mock.side_effect = pyremotevbox_exc.VmInWrongPowerState with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.set_boot_device(task, boot_devices.PXE) log_mock.error.assert_called_once_with(mock.ANY, mock.ANY) @mock.patch.object(virtualbox, '_run_virtualbox_method', autospec=True) def test_set_boot_device_invalid(self, run_method_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.management.set_boot_device, task, 'invalid-boot-device') def test_get_sensors_data(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: 
self.assertRaises(NotImplementedError, task.driver.management.get_sensors_data, task) ironic-5.1.0/ironic/tests/unit/drivers/modules/ucs/0000775000567000056710000000000012674513633023477 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/drivers/modules/ucs/test_management.py0000664000567000056710000001407012674513466027232 0ustar jenkinsjenkins00000000000000# Copyright 2015, Cisco Systems. # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Test class for UCS ManagementInterface """ import mock from oslo_config import cfg from oslo_utils import importutils from ironic.common import boot_devices from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.ucs import helper as ucs_helper from ironic.drivers.modules.ucs import management as ucs_mgmt from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils ucs_error = importutils.try_import('UcsSdk.utils.exception') INFO_DICT = db_utils.get_test_ucs_info() CONF = cfg.CONF class UcsManagementTestCase(db_base.DbTestCase): def setUp(self): super(UcsManagementTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver='fake_ucs') self.node = obj_utils.create_test_node(self.context, driver='fake_ucs', driver_info=INFO_DICT) self.interface = ucs_mgmt.UcsManagement() self.task = mock.Mock() self.task.node = self.node def test_get_properties(self): expected = ucs_helper.COMMON_PROPERTIES self.assertEqual(expected, self.interface.get_properties()) def test_get_supported_boot_devices(self): with task_manager.acquire(self.context, self.node.uuid) as task: expected = [boot_devices.PXE, boot_devices.DISK, boot_devices.CDROM] self.assertEqual( sorted(expected), sorted(self.interface.get_supported_boot_devices(task))) @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', spec_set=True, autospec=True) @mock.patch( 'ironic.drivers.modules.ucs.management.ucs_mgmt.BootDeviceHelper', spec_set=True, autospec=True) def test_get_boot_device(self, mock_ucs_mgmt, mock_helper): mock_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) mock_mgmt = mock_ucs_mgmt.return_value mock_mgmt.get_boot_device.return_value = { 'boot_device': 'disk', 'persistent': False } with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: expected_device = boot_devices.DISK expected_response = {'boot_device': expected_device, 'persistent': False} self.assertEqual(expected_response, self.interface.get_boot_device(task)) mock_mgmt.get_boot_device.assert_called_once_with() @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', spec_set=True, autospec=True) @mock.patch( 'ironic.drivers.modules.ucs.management.ucs_mgmt.BootDeviceHelper', spec_set=True, autospec=True) def test_get_boot_device_fail(self, mock_ucs_mgmt, mock_helper): mock_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) mock_mgmt = mock_ucs_mgmt.return_value side_effect = 
ucs_error.UcsOperationError( operation='getting boot device', error='failed', node=self.node.uuid ) mock_mgmt.get_boot_device.side_effect = side_effect with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.UcsOperationError, self.interface.get_boot_device, task) mock_mgmt.get_boot_device.assert_called_once_with() @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', spec_set=True, autospec=True) @mock.patch( 'ironic.drivers.modules.ucs.management.ucs_mgmt.BootDeviceHelper', spec_set=True, autospec=True) def test_set_boot_device(self, mock_mgmt, mock_helper): mc_mgmt = mock_mgmt.return_value mock_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.interface.set_boot_device(task, boot_devices.CDROM) mc_mgmt.set_boot_device.assert_called_once_with('cdrom', False) @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', spec_set=True, autospec=True) @mock.patch( 'ironic.drivers.modules.ucs.management.ucs_mgmt.BootDeviceHelper', spec_set=True, autospec=True) def test_set_boot_device_fail(self, mock_mgmt, mock_helper): mc_mgmt = mock_mgmt.return_value mock_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) side_effect = exception.UcsOperationError( operation='setting boot device', error='failed', node=self.node.uuid) mc_mgmt.set_boot_device.side_effect = side_effect with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.IronicException, self.interface.set_boot_device, task, boot_devices.PXE) mc_mgmt.set_boot_device.assert_called_once_with( boot_devices.PXE, False) def test_get_sensors_data(self): self.assertRaises(NotImplementedError, self.interface.get_sensors_data, self.task) ironic-5.1.0/ironic/tests/unit/drivers/modules/ucs/test_helper.py0000664000567000056710000001606412674513466026402 0ustar jenkinsjenkins00000000000000# Copyright 2015, Cisco Systems. # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
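# Illustrative aside (not part of the ironic tarball): the
# requires_ucs_client tests below assert one key property -- the UCSM
# session is always closed, whether the wrapped call succeeds or raises.
# A simplified sketch with a hypothetical stand-in session class:
import functools

class FakeUcsSession(object):
    """Stand-in for the real UCSM helper; illustrative only."""
    def connect_ucsm(self):
        pass  # a real helper would open a UCSM handle here
    def logout(self):
        pass  # a real helper would close the UCSM handle here

def requires_ucs_client(func):
    """Run func with a live UCSM session and always log out afterwards."""
    @functools.wraps(func)
    def wrapper(self, task, *args, **kwargs):
        helper = FakeUcsSession()
        helper.connect_ucsm()
        kwargs['helper'] = helper
        try:
            return func(self, task, *args, **kwargs)
        finally:
            helper.logout()  # runs on success and on exception alike
    return wrapper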
"""Test class for common methods used by UCS modules.""" import mock from oslo_config import cfg from oslo_utils import importutils from ironic.common import exception from ironic.conductor import task_manager from ironic.db import api as dbapi from ironic.drivers.modules.ucs import helper as ucs_helper from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils ucs_error = importutils.try_import('UcsSdk.utils.exception') INFO_DICT = db_utils.get_test_ucs_info() CONF = cfg.CONF class UcsValidateParametersTestCase(db_base.DbTestCase): def setUp(self): super(UcsValidateParametersTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="fake_ucs") self.node = obj_utils.create_test_node(self.context, driver='fake_ucs', driver_info=INFO_DICT) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.helper = ucs_helper.CiscoUcsHelper(task) def test_parse_driver_info(self): info = ucs_helper.parse_driver_info(self.node) self.assertIsNotNone(info.get('ucs_address')) self.assertIsNotNone(info.get('ucs_username')) self.assertIsNotNone(info.get('ucs_password')) self.assertIsNotNone(info.get('ucs_service_profile')) def test_parse_driver_info_missing_address(self): del self.node.driver_info['ucs_address'] self.assertRaises(exception.MissingParameterValue, ucs_helper.parse_driver_info, self.node) def test_parse_driver_info_missing_username(self): del self.node.driver_info['ucs_username'] self.assertRaises(exception.MissingParameterValue, ucs_helper.parse_driver_info, self.node) def test_parse_driver_info_missing_password(self): del self.node.driver_info['ucs_password'] self.assertRaises(exception.MissingParameterValue, ucs_helper.parse_driver_info, self.node) def test_parse_driver_info_missing_service_profile(self): del self.node.driver_info['ucs_service_profile'] self.assertRaises(exception.MissingParameterValue, ucs_helper.parse_driver_info, self.node) @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', spec_set=True, autospec=True) def test_connect_ucsm(self, mock_helper): mock_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.helper.connect_ucsm() mock_helper.generate_ucsm_handle.assert_called_once_with( task.node.driver_info['ucs_address'], task.node.driver_info['ucs_username'], task.node.driver_info['ucs_password'] ) @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', spec_set=True, autospec=True) def test_connect_ucsm_fail(self, mock_helper): side_effect = ucs_error.UcsConnectionError( message='connecting to ucsm', error='failed') mock_helper.generate_ucsm_handle.side_effect = side_effect with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.UcsConnectionError, self.helper.connect_ucsm ) mock_helper.generate_ucsm_handle.assert_called_once_with( task.node.driver_info['ucs_address'], task.node.driver_info['ucs_username'], task.node.driver_info['ucs_password'] ) @mock.patch('ironic.drivers.modules.ucs.helper', autospec=True) def test_logout(self, mock_helper): self.helper.logout() class UcsCommonMethodsTestcase(db_base.DbTestCase): def setUp(self): super(UcsCommonMethodsTestcase, self).setUp() self.dbapi = dbapi.get_instance() mgr_utils.mock_the_extension_manager(driver="fake_ucs") self.node = obj_utils.create_test_node(self.context, driver='fake_ucs', 
driver_info=INFO_DICT.copy()) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.helper = ucs_helper.CiscoUcsHelper(task) @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', autospec=True) @mock.patch('ironic.drivers.modules.ucs.helper.CiscoUcsHelper', autospec=True) def test_requires_ucs_client_ok_logout(self, mc_helper, mock_ucs_helper): mock_helper = mc_helper.return_value mock_helper.logout.return_value = None mock_working_function = mock.Mock() mock_working_function.__name__ = "Working" mock_working_function.return_value = "Success" mock_ucs_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: wont_error = ucs_helper.requires_ucs_client( mock_working_function) wont_error(wont_error, task) mock_helper.logout.assert_called_once_with() @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', autospec=True) @mock.patch('ironic.drivers.modules.ucs.helper.CiscoUcsHelper', autospec=True) def test_requires_ucs_client_fail_logout(self, mc_helper, mock_ucs_helper): mock_helper = mc_helper.return_value mock_helper.logout.return_value = None mock_broken_function = mock.Mock() mock_broken_function.__name__ = "Broken" mock_broken_function.side_effect = exception.IronicException() mock_ucs_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: will_error = ucs_helper.requires_ucs_client(mock_broken_function) self.assertRaises(exception.IronicException, will_error, will_error, task) mock_helper.logout.assert_called_once_with() ironic-5.1.0/ironic/tests/unit/drivers/modules/ucs/test_power.py0000664000567000056710000003556312674513466026254 0ustar jenkinsjenkins00000000000000# Copyright 2015, Cisco Systems. # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
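# Illustrative aside (not part of the ironic tarball): the
# _wait_for_state_change tests below exercise a bounded polling loop,
# roughly like this sketch. Parameter names here are hypothetical; the
# real driver reads retry count and interval from the [cisco_ucs]
# config section:
import time

IRONIC_STATE_FOR = {'up': 'power on', 'down': 'power off'}

def wait_for_state_change(target_state, power, max_retry=2, interval=0):
    """Poll power.get_power_state() until target_state or retries run out."""
    for _ in range(max_retry):
        state = IRONIC_STATE_FOR.get(power.get_power_state(), 'error')
        if state == target_state:
            return state
        time.sleep(interval)
    return 'error'  # the caller maps this to a PowerStateFailure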
"""Test class for UcsPower module.""" import mock from oslo_config import cfg from oslo_utils import importutils from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules.ucs import helper as ucs_helper from ironic.drivers.modules.ucs import power as ucs_power from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils ucs_error = importutils.try_import('UcsSdk.utils.exception') INFO_DICT = db_utils.get_test_ucs_info() CONF = cfg.CONF class UcsPowerTestCase(db_base.DbTestCase): def setUp(self): super(UcsPowerTestCase, self).setUp() driver_info = INFO_DICT mgr_utils.mock_the_extension_manager(driver="fake_ucs") self.node = obj_utils.create_test_node(self.context, driver='fake_ucs', driver_info=driver_info) CONF.set_override('max_retry', 2, 'cisco_ucs') CONF.set_override('action_interval', 0, 'cisco_ucs') self.interface = ucs_power.Power() def test_get_properties(self): expected = ucs_helper.COMMON_PROPERTIES expected.update(ucs_helper.COMMON_PROPERTIES) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertEqual(expected, task.driver.get_properties()) @mock.patch.object(ucs_helper, 'parse_driver_info', spec_set=True, autospec=True) def test_validate(self, mock_parse_driver_info): mock_parse_driver_info.return_value = {} with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.interface.validate(task) mock_parse_driver_info.assert_called_once_with(task.node) @mock.patch.object(ucs_helper, 'parse_driver_info', spec_set=True, autospec=True) def test_validate_fail(self, mock_parse_driver_info): side_effect = iter([exception.InvalidParameterValue('Invalid Input')]) mock_parse_driver_info.side_effect = side_effect with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.InvalidParameterValue, self.interface.validate, task) mock_parse_driver_info.assert_called_once_with(task.node) @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.ucs.power.ucs_power.UcsPower', spec_set=True, autospec=True) def test_get_power_state_up(self, mock_power_helper, mock_helper): mock_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) mock_power = mock_power_helper.return_value with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: mock_power.get_power_state.return_value = 'up' self.assertEqual(states.POWER_ON, self.interface.get_power_state(task)) mock_power.get_power_state.assert_called_once_with() mock_power.get_power_state.reset_mock() @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.ucs.power.ucs_power.UcsPower', spec_set=True, autospec=True) def test_get_power_state_down(self, mock_power_helper, mock_helper): mock_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) mock_power = mock_power_helper.return_value with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: mock_power.get_power_state.return_value = 'down' self.assertEqual(states.POWER_OFF, self.interface.get_power_state(task)) mock_power.get_power_state.assert_called_once_with() mock_power.get_power_state.reset_mock() @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', spec_set=True, autospec=True) 
@mock.patch('ironic.drivers.modules.ucs.power.ucs_power.UcsPower', spec_set=True, autospec=True) def test_get_power_state_error(self, mock_power_helper, mock_helper): mock_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) mock_power = mock_power_helper.return_value with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: mock_power.get_power_state.return_value = states.ERROR self.assertEqual(states.ERROR, self.interface.get_power_state(task)) mock_power.get_power_state.assert_called_once_with() @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.ucs.power.ucs_power.UcsPower', spec_set=True, autospec=True) def test_get_power_state_fail(self, mock_ucs_power, mock_helper): mock_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) power = mock_ucs_power.return_value power.get_power_state.side_effect = ( ucs_error.UcsOperationError(operation='getting power state', error='failed')) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.UcsOperationError, self.interface.get_power_state, task) power.get_power_state.assert_called_with() @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.ucs.power._wait_for_state_change', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.ucs.power.ucs_power.UcsPower', spec_set=True, autospec=True) def test_set_power_state(self, mock_power_helper, mock__wait, mock_helper): target_state = states.POWER_ON mock_power = mock_power_helper.return_value mock_power.get_power_state.side_effect = ['down', 'up'] mock_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) mock__wait.return_value = target_state with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertIsNone(self.interface.set_power_state(task, target_state)) mock_power.set_power_state.assert_called_once_with('up') mock_power.get_power_state.assert_called_once_with() mock__wait.assert_called_once_with(target_state, mock_power) @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.ucs.power.ucs_power.UcsPower', spec_set=True, autospec=True) def test_set_power_state_fail(self, mock_power_helper, mock_helper): mock_power = mock_power_helper.return_value mock_power.set_power_state.side_effect = ( ucs_error.UcsOperationError(operation='setting power state', error='failed')) mock_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.UcsOperationError, self.interface.set_power_state, task, states.POWER_OFF) mock_power.set_power_state.assert_called_once_with('down') @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', spec_set=True, autospec=True) def test_set_power_state_invalid_state(self, mock_helper): mock_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.InvalidParameterValue, self.interface.set_power_state, task, states.ERROR) @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.ucs.power.ucs_power.UcsPower', spec_set=True, autospec=True) def test__wait_for_state_change_already_target_state( self, mock_ucs_power, mock_helper): mock_power = 
mock_ucs_power.return_value target_state = states.POWER_ON mock_power.get_power_state.return_value = 'up' mock_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) self.assertEqual(states.POWER_ON, ucs_power._wait_for_state_change( target_state, mock_power)) mock_power.get_power_state.assert_called_with() @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.ucs.power.ucs_power.UcsPower', spec_set=True, autospec=True) def test__wait_for_state_change_exceed_iterations( self, mock_power_helper, mock_helper): mock_power = mock_power_helper.return_value target_state = states.POWER_ON mock_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) mock_power.get_power_state.side_effect = ( ['down', 'down', 'down', 'down']) self.assertEqual(states.ERROR, ucs_power._wait_for_state_change( target_state, mock_power) ) mock_power.get_power_state.assert_called_with() self.assertEqual(4, mock_power.get_power_state.call_count) @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.ucs.power._wait_for_state_change', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.ucs.power.ucs_power.UcsPower', spec_set=True, autospec=True) def test_set_and_wait_for_state_change_fail( self, mock_power_helper, mock__wait, mock_helper): target_state = states.POWER_ON mock_power = mock_power_helper.return_value mock_power.get_power_state.return_value = 'down' mock_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) mock__wait.return_value = states.POWER_OFF with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.PowerStateFailure, self.interface.set_power_state, task, target_state) mock_power.set_power_state.assert_called_once_with('up') mock_power.get_power_state.assert_called_once_with() mock__wait.assert_called_once_with(target_state, mock_power) @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.ucs.power._wait_for_state_change', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.ucs.power.ucs_power.UcsPower', spec_set=True, autospec=True) def test_reboot(self, mock_power_helper, mock__wait, mock_helper): mock_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) mock_power = mock_power_helper.return_value mock__wait.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertIsNone(self.interface.reboot(task)) mock_power.reboot.assert_called_once_with() @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.ucs.power.ucs_power.UcsPower', spec_set=True, autospec=True) def test_reboot_fail(self, mock_power_helper, mock_ucs_helper): mock_ucs_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) mock_power = mock_power_helper.return_value mock_power.reboot.side_effect = ( ucs_error.UcsOperationError(operation='rebooting', error='failed')) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.UcsOperationError, self.interface.reboot, task ) mock_power.reboot.assert_called_once_with() @mock.patch('ironic.drivers.modules.ucs.helper.ucs_helper', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.ucs.power._wait_for_state_change', spec_set=True, autospec=True) 
@mock.patch('ironic.drivers.modules.ucs.power.ucs_power.UcsPower', spec_set=True, autospec=True) def test_reboot__wait_state_change_fail(self, mock_power_helper, mock__wait, mock_ucs_helper): mock_ucs_helper.generate_ucsm_handle.return_value = (True, mock.Mock()) mock_power = mock_power_helper.return_value mock__wait.return_value = states.ERROR with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.PowerStateFailure, self.interface.reboot, task) mock_power.reboot.assert_called_once_with() ironic-5.1.0/ironic/tests/unit/drivers/modules/ucs/__init__.py0000664000567000056710000000000012674513466025602 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/drivers/modules/msftocs/0000775000567000056710000000000012674513633024363 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/drivers/modules/msftocs/test_management.py0000664000567000056710000001353312674513466030121 0ustar jenkinsjenkins00000000000000# Copyright 2015 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Test class for MSFT OCS ManagementInterface """ import mock from ironic.common import boot_devices from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.msftocs import common as msftocs_common from ironic.drivers.modules.msftocs import msftocsclient from ironic.drivers import utils as drivers_utils from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = db_utils.get_test_msftocs_info() class MSFTOCSManagementTestCase(db_base.DbTestCase): def setUp(self): super(MSFTOCSManagementTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver='fake_msftocs') self.info = INFO_DICT self.node = obj_utils.create_test_node(self.context, driver='fake_msftocs', driver_info=self.info) def test_get_properties(self): expected = msftocs_common.REQUIRED_PROPERTIES with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertEqual(expected, task.driver.get_properties()) @mock.patch.object(msftocs_common, 'parse_driver_info', autospec=True) def test_validate(self, mock_drvinfo): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.power.validate(task) mock_drvinfo.assert_called_once_with(task.node) @mock.patch.object(msftocs_common, 'parse_driver_info', autospec=True) def test_validate_fail(self, mock_drvinfo): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: mock_drvinfo.side_effect = iter( [exception.InvalidParameterValue('x')]) self.assertRaises(exception.InvalidParameterValue, task.driver.power.validate, task) def test_get_supported_boot_devices(self): expected = [boot_devices.PXE, boot_devices.DISK, boot_devices.BIOS] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertEqual( 
sorted(expected), sorted(task.driver.management. get_supported_boot_devices(task))) @mock.patch.object(msftocs_common, 'get_client_info', autospec=True) def _test_set_boot_device_one_time(self, persistent, uefi, mock_gci): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: mock_c = mock.MagicMock(spec=msftocsclient.MSFTOCSClientApi) blade_id = task.node.driver_info['msftocs_blade_id'] mock_gci.return_value = (mock_c, blade_id) if uefi: drivers_utils.add_node_capability(task, 'boot_mode', 'uefi') task.driver.management.set_boot_device( task, boot_devices.PXE, persistent) mock_gci.assert_called_once_with(task.node.driver_info) mock_c.set_next_boot.assert_called_once_with( blade_id, msftocsclient.BOOT_TYPE_FORCE_PXE, persistent, uefi) def test_set_boot_device_one_time(self): self._test_set_boot_device_one_time(False, False) def test_set_boot_device_persistent(self): self._test_set_boot_device_one_time(True, False) def test_set_boot_device_uefi(self): self._test_set_boot_device_one_time(True, True) def test_set_boot_device_fail(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.management.set_boot_device, task, 'fake-device') @mock.patch.object(msftocs_common, 'get_client_info', autospec=True) def test_get_boot_device(self, mock_gci): expected = {'boot_device': boot_devices.DISK, 'persistent': None} with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: mock_c = mock.MagicMock(spec=msftocsclient.MSFTOCSClientApi) blade_id = task.node.driver_info['msftocs_blade_id'] mock_gci.return_value = (mock_c, blade_id) force_hdd = msftocsclient.BOOT_TYPE_FORCE_DEFAULT_HDD mock_c.get_next_boot.return_value = force_hdd self.assertEqual(expected, task.driver.management.get_boot_device(task)) mock_gci.assert_called_once_with(task.node.driver_info) mock_c.get_next_boot.assert_called_once_with(blade_id) def test_get_sensor_data(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(NotImplementedError, task.driver.management.get_sensors_data, task) ironic-5.1.0/ironic/tests/unit/drivers/modules/msftocs/test_msftocsclient.py0000664000567000056710000001661412674513466030665 0ustar jenkinsjenkins00000000000000# Copyright 2015 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
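# Illustrative aside (not part of the ironic tarball): the client tests
# below cover two building blocks -- an authenticated HTTP GET wrapper
# and an XML completion-code check. A simplified, standalone sketch
# (basic auth here; the real client's auth scheme may differ):
from xml.etree import ElementTree

import requests

def exec_cmd(base_url, rel_url, username, password, timeout=15):
    """GET base_url/rel_url and return the body text, raising on errors."""
    url = base_url + "/" + rel_url
    try:
        response = requests.get(url, auth=(username, password),
                                timeout=timeout)
        response.raise_for_status()
    except requests.exceptions.RequestException as ex:
        raise RuntimeError("%s failed: %s" % (url, ex))
    return response.text

def check_completion_code(xml_text, ns):
    """Parse an XML response; fail unless completionCode is 'Success'."""
    root = ElementTree.fromstring(xml_text)
    code = root.find('{%s}completionCode' % ns)
    if code is None or code.text != 'Success':
        raise RuntimeError('chassis manager operation failed')
    return root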
""" Test class for MSFT OCS REST API client """ import mock import requests from requests import exceptions as requests_exceptions from ironic.common import exception from ironic.drivers.modules.msftocs import msftocsclient from ironic.tests import base FAKE_BOOT_RESPONSE = ( '' 'Success' '1' 'Success' '1' 'ForcePxe' '') % msftocsclient.WCSNS FAKE_BLADE_RESPONSE = ( '' 'Success' '1' '' '1' '') % msftocsclient.WCSNS FAKE_POWER_STATE_RESPONSE = ( '' 'Success' '1' 'Blade Power is On, firmware decompressed' '' '1' '0' 'ON' '') % msftocsclient.WCSNS FAKE_BLADE_STATE_RESPONSE = ( '' 'Success' '1' '' '1' 'ON' '') % msftocsclient.WCSNS class MSFTOCSClientApiTestCase(base.TestCase): def setUp(self): super(MSFTOCSClientApiTestCase, self).setUp() self._fake_base_url = "http://fakehost:8000" self._fake_username = "admin" self._fake_password = 'fake' self._fake_blade_id = 1 self._client = msftocsclient.MSFTOCSClientApi( self._fake_base_url, self._fake_username, self._fake_password) @mock.patch.object(requests, 'get', autospec=True) def test__exec_cmd(self, mock_get): fake_response_text = 'fake_response_text' fake_rel_url = 'fake_rel_url' mock_get.return_value.text = 'fake_response_text' self.assertEqual(fake_response_text, self._client._exec_cmd(fake_rel_url)) mock_get.assert_called_once_with( self._fake_base_url + "/" + fake_rel_url, auth=mock.ANY) @mock.patch.object(requests, 'get', autospec=True) def test__exec_cmd_http_get_fail(self, mock_get): fake_rel_url = 'fake_rel_url' mock_get.side_effect = iter([requests_exceptions.ConnectionError('x')]) self.assertRaises(exception.MSFTOCSClientApiException, self._client._exec_cmd, fake_rel_url) mock_get.assert_called_once_with( self._fake_base_url + "/" + fake_rel_url, auth=mock.ANY) def test__check_completion_code(self): et = self._client._check_completion_code(FAKE_BOOT_RESPONSE) self.assertEqual('{%s}BootResponse' % msftocsclient.WCSNS, et.tag) def test__check_completion_code_fail(self): self.assertRaises(exception.MSFTOCSClientApiException, self._client._check_completion_code, '' % msftocsclient.WCSNS) def test__check_completion_with_bad_completion_code_fail(self): self.assertRaises(exception.MSFTOCSClientApiException, self._client._check_completion_code, '' 'Fail' '' % msftocsclient.WCSNS) def test__check_completion_code_xml_parsing_fail(self): self.assertRaises(exception.MSFTOCSClientApiException, self._client._check_completion_code, 'bad_xml') @mock.patch.object( msftocsclient.MSFTOCSClientApi, '_exec_cmd', autospec=True) def test_get_blade_state(self, mock_exec_cmd): mock_exec_cmd.return_value = FAKE_BLADE_STATE_RESPONSE self.assertEqual( msftocsclient.POWER_STATUS_ON, self._client.get_blade_state(self._fake_blade_id)) mock_exec_cmd.assert_called_once_with( self._client, "GetBladeState?bladeId=%d" % self._fake_blade_id) @mock.patch.object( msftocsclient.MSFTOCSClientApi, '_exec_cmd', autospec=True) def test_set_blade_on(self, mock_exec_cmd): mock_exec_cmd.return_value = FAKE_BLADE_RESPONSE self._client.set_blade_on(self._fake_blade_id) mock_exec_cmd.assert_called_once_with( self._client, "SetBladeOn?bladeId=%d" % self._fake_blade_id) @mock.patch.object( msftocsclient.MSFTOCSClientApi, '_exec_cmd', autospec=True) def test_set_blade_off(self, mock_exec_cmd): mock_exec_cmd.return_value = FAKE_BLADE_RESPONSE self._client.set_blade_off(self._fake_blade_id) mock_exec_cmd.assert_called_once_with( self._client, "SetBladeOff?bladeId=%d" % self._fake_blade_id) @mock.patch.object( msftocsclient.MSFTOCSClientApi, '_exec_cmd', autospec=True) def 
test_set_blade_power_cycle(self, mock_exec_cmd): mock_exec_cmd.return_value = FAKE_BLADE_RESPONSE self._client.set_blade_power_cycle(self._fake_blade_id) mock_exec_cmd.assert_called_once_with( self._client, "SetBladeActivePowerCycle?bladeId=%d&offTime=0" % self._fake_blade_id) @mock.patch.object( msftocsclient.MSFTOCSClientApi, '_exec_cmd', autospec=True) def test_get_next_boot(self, mock_exec_cmd): mock_exec_cmd.return_value = FAKE_BOOT_RESPONSE self.assertEqual( msftocsclient.BOOT_TYPE_FORCE_PXE, self._client.get_next_boot(self._fake_blade_id)) mock_exec_cmd.assert_called_once_with( self._client, "GetNextBoot?bladeId=%d" % self._fake_blade_id) @mock.patch.object( msftocsclient.MSFTOCSClientApi, '_exec_cmd', autospec=True) def test_set_next_boot(self, mock_exec_cmd): mock_exec_cmd.return_value = FAKE_BOOT_RESPONSE self._client.set_next_boot(self._fake_blade_id, msftocsclient.BOOT_TYPE_FORCE_PXE) mock_exec_cmd.assert_called_once_with( self._client, "SetNextBoot?bladeId=%(blade_id)d&bootType=%(boot_type)d&" "uefi=%(uefi)s&persistent=%(persistent)s" % {"blade_id": self._fake_blade_id, "boot_type": msftocsclient.BOOT_TYPE_FORCE_PXE, "uefi": "true", "persistent": "true"}) ironic-5.1.0/ironic/tests/unit/drivers/modules/msftocs/test_common.py0000664000567000056710000001237112674513466027274 0ustar jenkinsjenkins00000000000000# Copyright 2015 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
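# msftocs_common.parse_driver_info() validates the node's driver_info
# before the driver talks to the chassis manager.  An illustrative
# mapping that would pass the checks exercised below (the field names
# come from these tests; the concrete values are assumptions):
#
#     driver_info = {
#         'msftocs_base_url': 'http://10.0.0.1:8000',
#         'msftocs_username': 'admin',
#         'msftocs_password': 'secret',
#         'msftocs_blade_id': 1,  # must be an integer >= 1
#     }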
""" Test class for MSFT OCS common functions """ import mock from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.msftocs import common as msftocs_common from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = db_utils.get_test_msftocs_info() class MSFTOCSCommonTestCase(db_base.DbTestCase): def setUp(self): super(MSFTOCSCommonTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver='fake_msftocs') self.info = INFO_DICT self.node = obj_utils.create_test_node(self.context, driver='fake_msftocs', driver_info=self.info) def test_get_client_info(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: driver_info = task.node.driver_info (client, blade_id) = msftocs_common.get_client_info(driver_info) self.assertEqual(driver_info['msftocs_base_url'], client._base_url) self.assertEqual(driver_info['msftocs_username'], client._username) self.assertEqual(driver_info['msftocs_password'], client._password) self.assertEqual(driver_info['msftocs_blade_id'], blade_id) @mock.patch.object(msftocs_common, '_is_valid_url', autospec=True) def test_parse_driver_info(self, mock_is_valid_url): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: msftocs_common.parse_driver_info(task.node) mock_is_valid_url.assert_called_once_with( task.node.driver_info['msftocs_base_url']) def test_parse_driver_info_fail_missing_param(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: del task.node.driver_info['msftocs_base_url'] self.assertRaises(exception.MissingParameterValue, msftocs_common.parse_driver_info, task.node) def test_parse_driver_info_fail_bad_url(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.driver_info['msftocs_base_url'] = "bad-url" self.assertRaises(exception.InvalidParameterValue, msftocs_common.parse_driver_info, task.node) def test_parse_driver_info_fail_bad_blade_id_type(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.driver_info['msftocs_blade_id'] = "bad-blade-id" self.assertRaises(exception.InvalidParameterValue, msftocs_common.parse_driver_info, task.node) def test_parse_driver_info_fail_bad_blade_id_value(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.driver_info['msftocs_blade_id'] = 0 self.assertRaises(exception.InvalidParameterValue, msftocs_common.parse_driver_info, task.node) def test__is_valid_url(self): self.assertIs(True, msftocs_common._is_valid_url("http://fake.com")) self.assertIs( True, msftocs_common._is_valid_url("http://www.fake.com")) self.assertIs(True, msftocs_common._is_valid_url("http://FAKE.com")) self.assertIs(True, msftocs_common._is_valid_url("http://fake")) self.assertIs( True, msftocs_common._is_valid_url("http://fake.com/blah")) self.assertIs(True, msftocs_common._is_valid_url("http://localhost")) self.assertIs(True, msftocs_common._is_valid_url("https://fake.com")) self.assertIs(True, msftocs_common._is_valid_url("http://10.0.0.1")) self.assertIs(False, msftocs_common._is_valid_url("bad-url")) self.assertIs(False, msftocs_common._is_valid_url("http://.bad-url")) self.assertIs(False, msftocs_common._is_valid_url("http://bad-url$")) self.assertIs(False, msftocs_common._is_valid_url("http://$bad-url")) self.assertIs(False, 
msftocs_common._is_valid_url("http://bad$url")) self.assertIs(False, msftocs_common._is_valid_url(None)) self.assertIs(False, msftocs_common._is_valid_url(0)) ironic-5.1.0/ironic/tests/unit/drivers/modules/msftocs/test_power.py0000664000567000056710000001727412674513466027147 0ustar jenkinsjenkins00000000000000# Copyright 2015 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Test class for MSFT OCS PowerInterface """ import mock from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules.msftocs import common as msftocs_common from ironic.drivers.modules.msftocs import msftocsclient from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = db_utils.get_test_msftocs_info() class MSFTOCSPowerTestCase(db_base.DbTestCase): def setUp(self): super(MSFTOCSPowerTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver='fake_msftocs') self.info = INFO_DICT self.node = obj_utils.create_test_node(self.context, driver='fake_msftocs', driver_info=self.info) def test_get_properties(self): expected = msftocs_common.REQUIRED_PROPERTIES with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertEqual(expected, task.driver.get_properties()) @mock.patch.object(msftocs_common, 'parse_driver_info', autospec=True) def test_validate(self, mock_drvinfo): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.power.validate(task) mock_drvinfo.assert_called_once_with(task.node) @mock.patch.object(msftocs_common, 'parse_driver_info', autospec=True) def test_validate_fail(self, mock_drvinfo): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: mock_drvinfo.side_effect = iter( [exception.InvalidParameterValue('x')]) self.assertRaises(exception.InvalidParameterValue, task.driver.power.validate, task) @mock.patch.object(msftocs_common, 'get_client_info', autospec=True) def test_get_power_state(self, mock_gci): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: mock_c = mock.MagicMock(spec=msftocsclient.MSFTOCSClientApi) blade_id = task.node.driver_info['msftocs_blade_id'] mock_gci.return_value = (mock_c, blade_id) mock_c.get_blade_state.return_value = msftocsclient.POWER_STATUS_ON self.assertEqual(states.POWER_ON, task.driver.power.get_power_state(task)) mock_gci.assert_called_once_with(task.node.driver_info) mock_c.get_blade_state.assert_called_once_with(blade_id) @mock.patch.object(msftocs_common, 'get_client_info', autospec=True) def test_set_power_state_on(self, mock_gci): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: mock_c = mock.MagicMock(spec=msftocsclient.MSFTOCSClientApi) blade_id = task.node.driver_info['msftocs_blade_id'] mock_gci.return_value = (mock_c, blade_id) 
task.driver.power.set_power_state(task, states.POWER_ON) mock_gci.assert_called_once_with(task.node.driver_info) mock_c.set_blade_on.assert_called_once_with(blade_id) @mock.patch.object(msftocs_common, 'get_client_info', autospec=True) def test_set_power_state_off(self, mock_gci): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: mock_c = mock.MagicMock(spec=msftocsclient.MSFTOCSClientApi) blade_id = task.node.driver_info['msftocs_blade_id'] mock_gci.return_value = (mock_c, blade_id) task.driver.power.set_power_state(task, states.POWER_OFF) mock_gci.assert_called_once_with(task.node.driver_info) mock_c.set_blade_off.assert_called_once_with(blade_id) @mock.patch.object(msftocs_common, 'get_client_info', autospec=True) def test_set_power_state_blade_on_fail(self, mock_gci): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: mock_c = mock.MagicMock(spec=msftocsclient.MSFTOCSClientApi) blade_id = task.node.driver_info['msftocs_blade_id'] mock_gci.return_value = (mock_c, blade_id) ex = exception.MSFTOCSClientApiException('x') mock_c.set_blade_on.side_effect = ex pstate = states.POWER_ON self.assertRaises(exception.PowerStateFailure, task.driver.power.set_power_state, task, pstate) mock_gci.assert_called_once_with(task.node.driver_info) mock_c.set_blade_on.assert_called_once_with(blade_id) @mock.patch.object(msftocs_common, 'get_client_info', autospec=True) def test_set_power_state_invalid_parameter_fail(self, mock_gci): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: mock_c = mock.MagicMock(spec=msftocsclient.MSFTOCSClientApi) blade_id = task.node.driver_info['msftocs_blade_id'] mock_gci.return_value = (mock_c, blade_id) pstate = states.ERROR self.assertRaises(exception.InvalidParameterValue, task.driver.power.set_power_state, task, pstate) mock_gci.assert_called_once_with(task.node.driver_info) @mock.patch.object(msftocs_common, 'get_client_info', autospec=True) def test_reboot(self, mock_gci): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: mock_c = mock.MagicMock(spec=msftocsclient.MSFTOCSClientApi) blade_id = task.node.driver_info['msftocs_blade_id'] mock_gci.return_value = (mock_c, blade_id) task.driver.power.reboot(task) mock_gci.assert_called_once_with(task.node.driver_info) mock_c.set_blade_power_cycle.assert_called_once_with(blade_id) @mock.patch.object(msftocs_common, 'get_client_info', autospec=True) def test_reboot_fail(self, mock_gci): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: mock_c = mock.MagicMock(spec=msftocsclient.MSFTOCSClientApi) blade_id = task.node.driver_info['msftocs_blade_id'] mock_gci.return_value = (mock_c, blade_id) ex = exception.MSFTOCSClientApiException('x') mock_c.set_blade_power_cycle.side_effect = ex self.assertRaises(exception.PowerStateFailure, task.driver.power.reboot, task) mock_gci.assert_called_once_with(task.node.driver_info) mock_c.set_blade_power_cycle.assert_called_once_with(blade_id) ironic-5.1.0/ironic/tests/unit/drivers/modules/msftocs/__init__.py0000664000567000056710000000000012674513466026466 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/drivers/modules/oneview/0000775000567000056710000000000012674513633024361 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/drivers/modules/oneview/test_management.py0000664000567000056710000002034412674513466030115 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright 2015 Hewlett Packard Development 
Company, LP # Copyright 2015 Universidade Federal de Campina Grande # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils import importutils from oslo_utils import uuidutils from ironic.common import boot_devices from ironic.common import driver_factory from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.oneview import common from ironic.drivers.modules.oneview import management from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils oneview_exceptions = importutils.try_import('oneview_client.exceptions') @mock.patch.object(common, 'get_oneview_client', spec_set=True, autospec=True) class OneViewManagementDriverTestCase(db_base.DbTestCase): def setUp(self): super(OneViewManagementDriverTestCase, self).setUp() self.config(manager_url='https://1.2.3.4', group='oneview') self.config(username='user', group='oneview') self.config(password='password', group='oneview') mgr_utils.mock_the_extension_manager(driver="fake_oneview") self.driver = driver_factory.get_driver("fake_oneview") self.node = obj_utils.create_test_node( self.context, driver='fake_oneview', properties=db_utils.get_test_oneview_properties(), driver_info=db_utils.get_test_oneview_driver_info(), ) self.info = common.get_oneview_info(self.node) @mock.patch.object(common, 'validate_oneview_resources_compatibility', spec_set=True, autospec=True) def test_validate(self, mock_validate, mock_get_ov_client): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.management.validate(task) self.assertTrue(mock_validate.called) def test_validate_fail(self, mock_get_ov_client): node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), id=999, driver='fake_oneview') with task_manager.acquire(self.context, node.uuid) as task: self.assertRaises(exception.MissingParameterValue, task.driver.management.validate, task) @mock.patch.object(common, 'validate_oneview_resources_compatibility', spec_set=True, autospec=True) def test_validate_fail_exception(self, mock_validate, mock_get_ov_client): mock_validate.side_effect = exception.OneViewError('message') with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.management.validate, task) def test_get_properties(self, mock_get_ov_client): expected = common.COMMON_PROPERTIES self.assertItemsEqual(expected, self.driver.management.get_properties()) def test_set_boot_device(self, mock_get_ov_client): oneview_client = mock_get_ov_client() with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.management.set_boot_device(task, boot_devices.PXE) oneview_client.set_boot_device.assert_called_once_with( self.info, management.BOOT_DEVICE_MAPPING_TO_OV.get(boot_devices.PXE) ) def test_set_boot_device_invalid_device(self, mock_get_ov_client): oneview_client = 
mock_get_ov_client() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.InvalidParameterValue, self.driver.management.set_boot_device, task, 'fake-device') self.assertFalse(oneview_client.set_boot_device.called) def test_set_boot_device_fail_to_get_server_profile(self, mock_get_ov_client): oneview_client = mock_get_ov_client() oneview_client.get_server_profile_from_hardware.side_effect = \ oneview_exceptions.OneViewException() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.OneViewError, self.driver.management.set_boot_device, task, 'disk') self.assertFalse(oneview_client.set_boot_device.called) def test_set_boot_device_without_server_profile(self, mock_get_ov_client): oneview_client = mock_get_ov_client() oneview_client.get_server_profile_from_hardware.return_value = False with task_manager.acquire(self.context, self.node.uuid) as task: expected_msg = ( 'A Server Profile is not associated with node %s.' % self.node.uuid ) self.assertRaisesRegexp( exception.OperationNotPermitted, expected_msg, self.driver.management.set_boot_device, task, 'disk' ) def test_get_supported_boot_devices(self, mock_get_ov_client): with task_manager.acquire(self.context, self.node.uuid) as task: expected = [boot_devices.PXE, boot_devices.DISK, boot_devices.CDROM] self.assertItemsEqual( expected, task.driver.management.get_supported_boot_devices(task), ) def test_get_boot_device(self, mock_get_ov_client): device_mapping = management.BOOT_DEVICE_MAPPING_TO_OV oneview_client = mock_get_ov_client() with task_manager.acquire(self.context, self.node.uuid) as task: # For each known device on OneView, Ironic should return its # counterpart value for device_ironic, device_ov in device_mapping.items(): oneview_client.get_boot_order.return_value = [device_ov] expected_response = { 'boot_device': device_ironic, 'persistent': True } response = self.driver.management.get_boot_device(task) self.assertEqual(expected_response, response) oneview_client.get_boot_order.assert_called_with(self.info) def test_get_boot_device_fail(self, mock_get_ov_client): oneview_client = mock_get_ov_client() oneview_client.get_boot_order.side_effect = \ oneview_exceptions.OneViewException() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.OneViewError, self.driver.management.get_boot_device, task) oneview_client.get_boot_order.assert_called_with(self.info) def test_get_boot_device_unknown_device(self, mock_get_ov_client): oneview_client = mock_get_ov_client() oneview_client.get_boot_order.return_value = ["spam", "bacon"] with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises( exception.InvalidParameterValue, task.driver.management.get_boot_device, task ) def test_get_sensors_data_not_implemented(self, mock_get_ov_client): with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises( NotImplementedError, task.driver.management.get_sensors_data, task ) ironic-5.1.0/ironic/tests/unit/drivers/modules/oneview/test_common.py0000664000567000056710000002743712674513466027303 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright 2015 Hewlett Packard Development Company, LP # Copyright 2015 Universidade Federal de Campina Grande # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils import importutils from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules.oneview import common from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils oneview_states = importutils.try_import('oneview_client.states') class OneViewCommonTestCase(db_base.DbTestCase): def setUp(self): super(OneViewCommonTestCase, self).setUp() self.node = obj_utils.create_test_node( self.context, driver='fake_oneview', properties=db_utils.get_test_oneview_properties(), driver_info=db_utils.get_test_oneview_driver_info(), ) self.config(manager_url='https://1.2.3.4', group='oneview') self.config(username='user', group='oneview') self.config(password='password', group='oneview') mgr_utils.mock_the_extension_manager(driver="fake_oneview") def test_verify_node_info(self): common.verify_node_info(self.node) def test_verify_node_info_missing_node_properties(self): self.node.properties = { "cpu_arch": "x86_64", "cpus": "8", "local_gb": "10", "memory_mb": "4096", "capabilities": ("enclosure_group_uri:fake_eg_uri," "server_profile_template_uri:fake_spt_uri") } with self.assertRaisesRegexp(exception.MissingParameterValue, "server_hardware_type_uri"): common.verify_node_info(self.node) def test_verify_node_info_missing_node_driver_info(self): self.node.driver_info = {} with self.assertRaisesRegexp(exception.MissingParameterValue, "server_hardware_uri"): common.verify_node_info(self.node) def test_verify_node_info_missing_spt(self): properties = db_utils.get_test_oneview_properties() properties["capabilities"] = ("server_hardware_type_uri:fake_sht_uri," "enclosure_group_uri:fake_eg_uri") self.node.properties = properties with self.assertRaisesRegexp(exception.MissingParameterValue, "server_profile_template_uri"): common.verify_node_info(self.node) def test_verify_node_info_missing_sh(self): driver_info = db_utils.get_test_oneview_driver_info() del driver_info["server_hardware_uri"] properties = db_utils.get_test_oneview_properties() properties["capabilities"] = ( "server_hardware_type_uri:fake_sht_uri," "enclosure_group_uri:fake_eg_uri," "server_profile_template_uri:fake_spt_uri" ) self.node.properties = properties self.node.driver_info = driver_info with self.assertRaisesRegexp(exception.MissingParameterValue, "server_hardware_uri"): common.verify_node_info(self.node) def test_verify_node_info_missing_sht(self): driver_info = db_utils.get_test_oneview_driver_info() properties = db_utils.get_test_oneview_properties() properties["capabilities"] = ( "enclosure_group_uri:fake_eg_uri," "server_profile_template_uri:fake_spt_uri" ) self.node.properties = properties self.node.driver_info = driver_info with self.assertRaisesRegexp(exception.MissingParameterValue, "server_hardware_type_uri"): common.verify_node_info(self.node) def test_get_oneview_info(self): complete_node = self.node expected_node_info = { 'server_hardware_uri': 'fake_sh_uri', 'server_hardware_type_uri': 
'fake_sht_uri', 'enclosure_group_uri': 'fake_eg_uri', 'server_profile_template_uri': 'fake_spt_uri', } self.assertEqual( expected_node_info, common.get_oneview_info(complete_node) ) def test_get_oneview_info_missing_spt(self): driver_info = db_utils.get_test_oneview_driver_info() properties = db_utils.get_test_oneview_properties() properties["capabilities"] = ("server_hardware_type_uri:fake_sht_uri," "enclosure_group_uri:fake_eg_uri") self.node.driver_info = driver_info self.node.properties = properties incomplete_node = self.node expected_node_info = { 'server_hardware_uri': 'fake_sh_uri', 'server_hardware_type_uri': 'fake_sht_uri', 'enclosure_group_uri': 'fake_eg_uri', 'server_profile_template_uri': None, } self.assertEqual( expected_node_info, common.get_oneview_info(incomplete_node) ) def test_get_oneview_info_missing_sh(self): driver_info = db_utils.get_test_oneview_driver_info() del driver_info["server_hardware_uri"] properties = db_utils.get_test_oneview_properties() properties["capabilities"] = ( "server_hardware_type_uri:fake_sht_uri," "enclosure_group_uri:fake_eg_uri," "server_profile_template_uri:fake_spt_uri" ) self.node.driver_info = driver_info self.node.properties = properties incomplete_node = self.node expected_node_info = { 'server_hardware_uri': None, 'server_hardware_type_uri': 'fake_sht_uri', 'enclosure_group_uri': 'fake_eg_uri', 'server_profile_template_uri': 'fake_spt_uri', } self.assertEqual( expected_node_info, common.get_oneview_info(incomplete_node) ) # TODO(gabriel-bezerra): Remove this after Mitaka @mock.patch.object(common, 'LOG', autospec=True) def test_deprecated_spt_in_driver_info(self, log_mock): # the current model has server_profile_template_uri in # properties/capabilities instead of driver_info driver_info = db_utils.get_test_oneview_driver_info() driver_info['server_profile_template_uri'] = 'fake_spt_uri' properties = db_utils.get_test_oneview_properties() properties["capabilities"] = ("server_hardware_type_uri:fake_sht_uri," "enclosure_group_uri:fake_eg_uri") self.node.driver_info = driver_info self.node.properties = properties deprecated_node = self.node expected_node_info = { 'server_hardware_uri': 'fake_sh_uri', 'server_hardware_type_uri': 'fake_sht_uri', 'enclosure_group_uri': 'fake_eg_uri', 'server_profile_template_uri': 'fake_spt_uri', } self.assertEqual( expected_node_info, common.get_oneview_info(deprecated_node) ) # must be valid common.verify_node_info(deprecated_node) log_mock.warning.assert_called_once_with( "Using 'server_profile_template_uri' in driver_info is " "now deprecated and will be ignored in future releases. 
" "Node %s should have it in its properties/capabilities " "instead.", self.node.uuid ) # TODO(gabriel-bezerra): Remove this after Mitaka def test_deprecated_spt_in_driver_info_and_in_capabilites(self): # information in capabilities precedes driver_info driver_info = db_utils.get_test_oneview_driver_info() driver_info['server_profile_template_uri'] = 'unused_fake_spt_uri' self.node.driver_info = driver_info deprecated_node = self.node expected_node_info = { 'server_hardware_uri': 'fake_sh_uri', 'server_hardware_type_uri': 'fake_sht_uri', 'enclosure_group_uri': 'fake_eg_uri', 'server_profile_template_uri': 'fake_spt_uri', } self.assertEqual( expected_node_info, common.get_oneview_info(deprecated_node) ) # must be valid common.verify_node_info(deprecated_node) def test__verify_node_info(self): common._verify_node_info("properties", {"a": True, "b": False, "c": 0, "d": "something", "e": "somethingelse"}, ["a", "b", "c", "e"]) def test__verify_node_info_fails(self): self.assertRaises( exception.MissingParameterValue, common._verify_node_info, "properties", {"a": 1, "b": 2, "c": 3}, ["x"] ) def test__verify_node_info_missing_values_empty_string(self): with self.assertRaisesRegexp(exception.MissingParameterValue, "'properties:a', 'properties:b'"): common._verify_node_info("properties", {"a": '', "b": None, "c": "something"}, ["a", "b", "c"]) def _test_translate_oneview_states(self, power_state_to_translate, expected_translated_power_state): translated_power_state = common.translate_oneview_power_state( power_state_to_translate) self.assertEqual(translated_power_state, expected_translated_power_state) def test_all_scenarios_for_translate_oneview_states(self): self._test_translate_oneview_states( oneview_states.ONEVIEW_POWERING_OFF, states.POWER_ON) self._test_translate_oneview_states( oneview_states.ONEVIEW_POWER_OFF, states.POWER_OFF) self._test_translate_oneview_states( oneview_states.ONEVIEW_POWERING_ON, states.POWER_OFF) self._test_translate_oneview_states( oneview_states.ONEVIEW_RESETTING, states.REBOOT) self._test_translate_oneview_states("anything", states.ERROR) @mock.patch.object(common, 'get_oneview_client', spec_set=True, autospec=True) def test_validate_oneview_resources_compatibility(self, mock_get_ov_client): oneview_client = mock_get_ov_client() with task_manager.acquire(self.context, self.node.uuid) as task: common.validate_oneview_resources_compatibility(task) self.assertTrue( oneview_client.validate_node_server_hardware.called) self.assertTrue( oneview_client.validate_node_server_hardware_type.called) self.assertTrue( oneview_client.check_server_profile_is_applied.called) self.assertTrue( oneview_client.is_node_port_mac_compatible_with_server_profile. called) self.assertTrue( oneview_client.validate_node_enclosure_group.called) self.assertTrue( oneview_client.validate_node_server_profile_template.called) ironic-5.1.0/ironic/tests/unit/drivers/modules/oneview/test_power.py0000664000567000056710000002010112674513466027124 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright 2015 Hewlett Packard Development Company, LP # Copyright 2015 Universidade Federal de Campina Grande # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils import importutils from oslo_utils import uuidutils from ironic.common import driver_factory from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules.oneview import common from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils oneview_exceptions = importutils.try_import('oneview_client.exceptions') POWER_ON = 'On' POWER_OFF = 'Off' ERROR = 'error' @mock.patch.object(common, 'get_oneview_client', spec_set=True, autospec=True) class OneViewPowerDriverTestCase(db_base.DbTestCase): def setUp(self): super(OneViewPowerDriverTestCase, self).setUp() self.config(manager_url='https://1.2.3.4', group='oneview') self.config(username='user', group='oneview') self.config(password='password', group='oneview') mgr_utils.mock_the_extension_manager(driver='fake_oneview') self.driver = driver_factory.get_driver('fake_oneview') self.node = obj_utils.create_test_node( self.context, driver='fake_oneview', properties=db_utils.get_test_oneview_properties(), driver_info=db_utils.get_test_oneview_driver_info(), ) self.info = common.get_oneview_info(self.node) @mock.patch.object(common, 'validate_oneview_resources_compatibility', spec_set=True, autospec=True) def test_power_interface_validate(self, mock_validate, mock_get_ov_client): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.power.validate(task) self.assertTrue(mock_validate.called) def test_power_interface_validate_fail(self, mock_get_ov_client): node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), id=999, driver='fake_oneview') with task_manager.acquire(self.context, node.uuid) as task: self.assertRaises(exception.MissingParameterValue, task.driver.power.validate, task) @mock.patch.object(common, 'validate_oneview_resources_compatibility', spec_set=True, autospec=True) def test_power_interface_validate_fail_exception(self, mock_validate, mock_get_ov_client): mock_validate.side_effect = exception.OneViewError('message') with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.power.validate, task) def test_power_interface_get_properties(self, mock_get_ov_client): expected = common.COMMON_PROPERTIES self.assertItemsEqual(expected, self.driver.power.get_properties()) def test_get_power_state(self, mock_get_ov_client): oneview_client = mock_get_ov_client() oneview_client.get_node_power_state.return_value = POWER_ON with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.power.get_power_state(task) oneview_client.get_node_power_state.assert_called_once_with(self.info) def test_get_power_state_fail(self, mock_get_ov_client): oneview_client = mock_get_ov_client() oneview_client.get_node_power_state.side_effect = \ oneview_exceptions.OneViewException() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises( exception.OneViewError, 
self.driver.power.get_power_state, task ) def test_set_power_on(self, mock_get_ov_client): oneview_client = mock_get_ov_client() oneview_client.power_on.return_value = POWER_ON with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.power.set_power_state(task, states.POWER_ON) oneview_client.power_on.assert_called_once_with(self.info) def test_set_power_off(self, mock_get_ov_client): oneview_client = mock_get_ov_client() oneview_client.power_off.return_value = POWER_OFF with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.power.set_power_state(task, states.POWER_OFF) oneview_client.power_off.assert_called_once_with(self.info) def test_set_power_on_fail(self, mock_get_ov_client): oneview_client = mock_get_ov_client() oneview_client.power_on.side_effect = \ oneview_exceptions.OneViewException() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.OneViewError, self.driver.power.set_power_state, task, states.POWER_ON) oneview_client.power_on.assert_called_once_with(self.info) def test_set_power_off_fail(self, mock_get_ov_client): oneview_client = mock_get_ov_client() oneview_client.power_off.side_effect = \ oneview_exceptions.OneViewException() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.OneViewError, self.driver.power.set_power_state, task, states.POWER_OFF) oneview_client.power_off.assert_called_once_with(self.info) def test_set_power_invalid_state(self, mock_get_ov_client): with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.InvalidParameterValue, self.driver.power.set_power_state, task, 'fake state') def test_set_power_reboot(self, mock_get_ov_client): oneview_client = mock_get_ov_client() oneview_client.power_off.return_value = POWER_OFF oneview_client.power_on.return_value = POWER_ON with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.power.set_power_state(task, states.REBOOT) oneview_client.power_off.assert_called_once_with(self.info) oneview_client.power_on.assert_called_once_with(self.info) def test_reboot(self, mock_get_ov_client): oneview_client = mock_get_ov_client() oneview_client.power_off.return_value = POWER_OFF oneview_client.power_on.return_value = POWER_ON with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.power.reboot(task) oneview_client.power_off.assert_called_once_with(self.info) oneview_client.power_on.assert_called_once_with(self.info) def test_reboot_fail(self, mock_get_ov_client): oneview_client = mock_get_ov_client() oneview_client.power_off.side_effect = \ oneview_exceptions.OneViewException() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.OneViewError, self.driver.power.reboot, task) oneview_client.power_off.assert_called_once_with(self.info) self.assertFalse(oneview_client.power_on.called) ironic-5.1.0/ironic/tests/unit/drivers/modules/oneview/__init__.py0000664000567000056710000000000012674513466026464 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/drivers/modules/oneview/test_vendor.py0000664000567000056710000003023712674513466027300 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # # Copyright 2015 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import time import types import mock from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules import agent_client from ironic.drivers.modules.oneview import power from ironic.drivers.modules.oneview import vendor from ironic.drivers.modules import pxe from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils GET_POWER_STATE_RETRIES = 5 class TestBaseAgentVendor(db_base.DbTestCase): def setUp(self): super(TestBaseAgentVendor, self).setUp() self.config( post_deploy_get_power_state_retries=GET_POWER_STATE_RETRIES, group='agent') mgr_utils.mock_the_extension_manager(driver="agent_pxe_oneview") self.passthru = vendor.AgentVendorInterface() self.node = obj_utils.create_test_node( self.context, driver='agent_pxe_oneview', properties=db_utils.get_test_oneview_properties(), driver_info=db_utils.get_test_oneview_driver_info(), ) @mock.patch.object(time, 'sleep', lambda seconds: None) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(power.OneViewPower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) @mock.patch('ironic.conductor.utils.node_set_boot_device', autospec=True) def test_reboot_and_finish_deploy(self, set_bootdev_mock, power_off_mock, get_power_state_mock, node_power_action_mock): self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: get_power_state_mock.side_effect = [states.POWER_ON, states.POWER_OFF] self.passthru.reboot_and_finish_deploy(task) power_off_mock.assert_called_once_with(task.node) self.assertEqual(2, get_power_state_mock.call_count) set_bootdev_mock.assert_called_once_with(task, 'disk', persistent=True) node_power_action_mock.assert_called_once_with( task, states.POWER_ON) self.assertEqual(states.ACTIVE, task.node.provision_state) self.assertEqual(states.NOSTATE, task.node.target_provision_state) @mock.patch.object(time, 'sleep', lambda seconds: None) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(power.OneViewPower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) def test_reboot_and_finish_deploy_soft_poweroff_doesnt_complete( self, power_off_mock, get_power_state_mock, node_power_action_mock): self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: get_power_state_mock.return_value = states.POWER_ON self.passthru.reboot_and_finish_deploy(task) power_off_mock.assert_called_once_with(task.node) self.assertEqual(GET_POWER_STATE_RETRIES + 1, get_power_state_mock.call_count) 
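# The soft power-off never took effect within the retry budget, so the
# deploy is expected to fall back to a hard power off followed by a
# power on.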
node_power_action_mock.assert_has_calls([ mock.call(task, states.POWER_OFF), mock.call(task, states.POWER_ON) ]) self.assertEqual(states.ACTIVE, task.node.provision_state) self.assertEqual(states.NOSTATE, task.node.target_provision_state) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) def test_reboot_and_finish_deploy_soft_poweroff_fails( self, power_off_mock, node_power_action_mock): power_off_mock.side_effect = iter([RuntimeError("boom")]) self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.passthru.reboot_and_finish_deploy(task) power_off_mock.assert_called_once_with(task.node) node_power_action_mock.assert_has_calls([ mock.call(task, states.POWER_OFF), mock.call(task, states.POWER_ON) ]) self.assertEqual(states.ACTIVE, task.node.provision_state) self.assertEqual(states.NOSTATE, task.node.target_provision_state) @mock.patch.object(time, 'sleep', lambda seconds: None) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(power.OneViewPower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) def test_reboot_and_finish_deploy_get_power_state_fails( self, power_off_mock, get_power_state_mock, node_power_action_mock): self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: get_power_state_mock.side_effect = iter([RuntimeError("boom")]) self.passthru.reboot_and_finish_deploy(task) power_off_mock.assert_called_once_with(task.node) self.assertEqual(GET_POWER_STATE_RETRIES + 1, get_power_state_mock.call_count) node_power_action_mock.assert_has_calls([ mock.call(task, states.POWER_OFF), mock.call(task, states.POWER_ON) ]) self.assertEqual(states.ACTIVE, task.node.provision_state) self.assertEqual(states.NOSTATE, task.node.target_provision_state) @mock.patch.object(time, 'sleep', lambda seconds: None) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(power.OneViewPower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) def test_reboot_and_finish_deploy_power_action_fails( self, power_off_mock, get_power_state_mock, node_power_action_mock): self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: get_power_state_mock.return_value = states.POWER_ON node_power_action_mock.side_effect = iter([RuntimeError("boom")]) self.assertRaises(exception.InstanceDeployFailure, self.passthru.reboot_and_finish_deploy, task) power_off_mock.assert_called_once_with(task.node) self.assertEqual(GET_POWER_STATE_RETRIES + 1, get_power_state_mock.call_count) node_power_action_mock.assert_has_calls([ mock.call(task, states.POWER_OFF), mock.call(task, states.POWER_OFF)]) self.assertEqual(states.DEPLOYFAIL, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(power.OneViewPower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 
'power_off', spec=types.FunctionType) @mock.patch('ironic.drivers.modules.agent.AgentVendorInterface' '.check_deploy_success', autospec=True) @mock.patch.object(pxe.PXEBoot, 'clean_up_ramdisk', autospec=True) def test_reboot_to_instance(self, clean_pxe_mock, check_deploy_mock, power_off_mock, get_power_state_mock, node_power_action_mock): check_deploy_mock.return_value = None self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: get_power_state_mock.return_value = states.POWER_OFF task.node.driver_internal_info['is_whole_disk_image'] = True self.passthru.reboot_to_instance(task) clean_pxe_mock.assert_called_once_with(task.driver.boot, task) check_deploy_mock.assert_called_once_with(mock.ANY, task.node) power_off_mock.assert_called_once_with(task.node) get_power_state_mock.assert_called_once_with(task) node_power_action_mock.assert_called_once_with( task, states.POWER_ON) self.assertEqual(states.ACTIVE, task.node.provision_state) self.assertEqual(states.NOSTATE, task.node.target_provision_state) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(power.OneViewPower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) @mock.patch('ironic.drivers.modules.agent.AgentVendorInterface' '.check_deploy_success', autospec=True) @mock.patch.object(pxe.PXEBoot, 'clean_up_ramdisk', autospec=True) def test_reboot_to_instance_boot_none(self, clean_pxe_mock, check_deploy_mock, power_off_mock, get_power_state_mock, node_power_action_mock): check_deploy_mock.return_value = None self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: get_power_state_mock.return_value = states.POWER_OFF task.node.driver_internal_info['is_whole_disk_image'] = True task.driver.boot = None self.passthru.reboot_to_instance(task) self.assertFalse(clean_pxe_mock.called) check_deploy_mock.assert_called_once_with(mock.ANY, task.node) power_off_mock.assert_called_once_with(task.node) get_power_state_mock.assert_called_once_with(task) node_power_action_mock.assert_called_once_with( task, states.POWER_ON) self.assertEqual(states.ACTIVE, task.node.provision_state) self.assertEqual(states.NOSTATE, task.node.target_provision_state) ironic-5.1.0/ironic/tests/unit/drivers/modules/test_inspector.py0000664000567000056710000002353012674513466026333 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
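# The inspect interface is optional: Inspector.create_if_enabled()
# returns an Inspector instance when [inspector]/enabled is true and
# None when it is not, letting a driver be composed either way.  A
# hedged sketch of the pattern these tests exercise (the oslo.config
# override call is shown for illustration; the tests themselves use
# self.config()):
#
#     CONF.set_override('enabled', True, group='inspector')
#     inspect_iface = inspector.Inspector.create_if_enabled('driver')
#     assert inspect_iface is not None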
import eventlet import ironic_inspector_client as client import mock from ironic.common import driver_factory from ironic.common import exception from ironic.common import keystone from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules import inspector from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils class DisabledTestCase(db_base.DbTestCase): def setUp(self): super(DisabledTestCase, self).setUp() def _do_mock(self): # NOTE(dtantsur): fake driver always has inspection, using another one mgr_utils.mock_the_extension_manager("pxe_ssh") self.driver = driver_factory.get_driver("pxe_ssh") def test_disabled(self): self.config(enabled=False, group='inspector') self._do_mock() self.assertIsNone(self.driver.inspect) # NOTE(dtantsur): it's expected that fake_inspector fails to load # in this case self.assertRaises(exception.DriverLoadError, mgr_utils.mock_the_extension_manager, "fake_inspector") def test_enabled(self): self.config(enabled=True, group='inspector') self._do_mock() self.assertIsNotNone(self.driver.inspect) @mock.patch.object(inspector, 'client', None) def test_init_inspector_not_imported(self): self.assertRaises(exception.DriverLoadError, inspector.Inspector) def test_init_ok(self): self.config(enabled=True, group='inspector') inspector.Inspector() class BaseTestCase(db_base.DbTestCase): def setUp(self): super(BaseTestCase, self).setUp() self.config(enabled=True, group='inspector') mgr_utils.mock_the_extension_manager("fake_inspector") self.driver = driver_factory.get_driver("fake_inspector") self.node = obj_utils.get_test_node(self.context) self.task = mock.MagicMock(spec=task_manager.TaskManager) self.task.context = mock.MagicMock(spec_set=['auth_token']) self.task.shared = False self.task.node = self.node self.task.driver = self.driver self.api_version = (1, 0) class CommonFunctionsTestCase(BaseTestCase): def test_validate_ok(self): self.driver.inspect.validate(self.task) def test_get_properties(self): res = self.driver.inspect.get_properties() self.assertEqual({}, res) def test_create_if_enabled(self): res = inspector.Inspector.create_if_enabled('driver') self.assertIsInstance(res, inspector.Inspector) @mock.patch.object(inspector.LOG, 'info', autospec=True) def test_create_if_enabled_disabled(self, warn_mock): self.config(enabled=False, group='inspector') res = inspector.Inspector.create_if_enabled('driver') self.assertIsNone(res) self.assertTrue(warn_mock.called) @mock.patch.object(eventlet, 'spawn_n', lambda f, *a, **kw: f(*a, **kw)) @mock.patch.object(client, 'introspect') class InspectHardwareTestCase(BaseTestCase): def test_ok(self, mock_introspect): self.assertEqual(states.INSPECTING, self.driver.inspect.inspect_hardware(self.task)) mock_introspect.assert_called_once_with( self.node.uuid, api_version=self.api_version, auth_token=self.task.context.auth_token) def test_url(self, mock_introspect): self.config(service_url='meow', group='inspector') self.assertEqual(states.INSPECTING, self.driver.inspect.inspect_hardware(self.task)) mock_introspect.assert_called_once_with( self.node.uuid, api_version=self.api_version, auth_token=self.task.context.auth_token, base_url='meow') @mock.patch.object(task_manager, 'acquire', autospec=True) def test_error(self, mock_acquire, mock_introspect): mock_introspect.side_effect = RuntimeError('boom') self.driver.inspect.inspect_hardware(self.task) mock_introspect.assert_called_once_with( 
self.node.uuid, api_version=self.api_version, auth_token=self.task.context.auth_token) task = mock_acquire.return_value.__enter__.return_value self.assertIn('boom', task.node.last_error) task.process_event.assert_called_once_with('fail') @mock.patch.object(keystone, 'get_admin_auth_token', lambda: 'the token') @mock.patch.object(client, 'get_status') class CheckStatusTestCase(BaseTestCase): def setUp(self): super(CheckStatusTestCase, self).setUp() self.node.provision_state = states.INSPECTING def test_not_inspecting(self, mock_get): self.node.provision_state = states.MANAGEABLE inspector._check_status(self.task) self.assertFalse(mock_get.called) def test_not_inspector(self, mock_get): self.task.driver.inspect = object() inspector._check_status(self.task) self.assertFalse(mock_get.called) def test_not_finished(self, mock_get): mock_get.return_value = {} inspector._check_status(self.task) mock_get.assert_called_once_with(self.node.uuid, api_version=self.api_version, auth_token='the token') self.assertFalse(self.task.process_event.called) def test_exception_ignored(self, mock_get): mock_get.side_effect = RuntimeError('boom') inspector._check_status(self.task) mock_get.assert_called_once_with(self.node.uuid, api_version=self.api_version, auth_token='the token') self.assertFalse(self.task.process_event.called) def test_status_ok(self, mock_get): mock_get.return_value = {'finished': True} inspector._check_status(self.task) mock_get.assert_called_once_with(self.node.uuid, api_version=self.api_version, auth_token='the token') self.task.process_event.assert_called_once_with('done') def test_status_error(self, mock_get): mock_get.return_value = {'error': 'boom'} inspector._check_status(self.task) mock_get.assert_called_once_with(self.node.uuid, api_version=self.api_version, auth_token='the token') self.task.process_event.assert_called_once_with('fail') self.assertIn('boom', self.node.last_error) def test_service_url(self, mock_get): self.config(service_url='meow', group='inspector') mock_get.return_value = {'finished': True} inspector._check_status(self.task) mock_get.assert_called_once_with(self.node.uuid, api_version=self.api_version, auth_token='the token', base_url='meow') self.task.process_event.assert_called_once_with('done') def test_is_standalone(self, mock_get): self.config(auth_strategy='noauth') mock_get.return_value = {'finished': True} inspector._check_status(self.task) mock_get.assert_called_once_with( self.node.uuid, api_version=self.api_version, auth_token=self.task.context.auth_token) self.task.process_event.assert_called_once_with('done') def test_not_standalone(self, mock_get): self.config(auth_strategy='keystone') mock_get.return_value = {'finished': True} inspector._check_status(self.task) mock_get.assert_called_once_with(self.node.uuid, api_version=self.api_version, auth_token='the token') self.task.process_event.assert_called_once_with('done') @mock.patch.object(eventlet.greenthread, 'spawn_n', lambda f, *a, **kw: f(*a, **kw)) @mock.patch.object(task_manager, 'acquire', autospec=True) @mock.patch.object(inspector, '_check_status', autospec=True) class PeriodicTaskTestCase(BaseTestCase): def test_ok(self, mock_check, mock_acquire): mgr = mock.MagicMock(spec=['iter_nodes']) mgr.iter_nodes.return_value = [('1', 'd1'), ('2', 'd2')] tasks = [mock.sentinel.task1, mock.sentinel.task2] mock_acquire.side_effect = ( mock.MagicMock(__enter__=mock.MagicMock(return_value=task)) for task in tasks ) inspector.Inspector()._periodic_check_result( mgr, mock.sentinel.context) 
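# Both nodes yielded by iter_nodes() must have been acquired and run
# through _check_status exactly once each.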
mock_check.assert_any_call(tasks[0]) mock_check.assert_any_call(tasks[1]) self.assertEqual(2, mock_acquire.call_count) def test_node_locked(self, mock_check, mock_acquire): iter_nodes_ret = [('1', 'd1'), ('2', 'd2')] mock_acquire.side_effect = iter([exception.NodeLocked("boom")] * len(iter_nodes_ret)) mgr = mock.MagicMock(spec=['iter_nodes']) mgr.iter_nodes.return_value = iter_nodes_ret inspector.Inspector()._periodic_check_result( mgr, mock.sentinel.context) self.assertFalse(mock_check.called) self.assertEqual(2, mock_acquire.call_count) ironic-5.1.0/ironic/tests/unit/drivers/modules/drac/0000775000567000056710000000000012674513633023616 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/drivers/modules/drac/test_management.py0000664000567000056710000003012312674513466027346 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # # Copyright 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Test class for DRAC management interface """ import mock import ironic.common.boot_devices from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.drac import common as drac_common from ironic.drivers.modules.drac import job as drac_job from ironic.drivers.modules.drac import management as drac_mgmt from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = db_utils.get_test_drac_info() @mock.patch.object(drac_common, 'get_drac_client', spec_set=True, autospec=True) class DracManagementInternalMethodsTestCase(db_base.DbTestCase): def setUp(self): super(DracManagementInternalMethodsTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver='fake_drac') self.node = obj_utils.create_test_node(self.context, driver='fake_drac', driver_info=INFO_DICT) self.boot_mode_ipl = {'id': 'IPL', 'name': 'BootSeq', 'is_current': True, 'is_next': True} self.boot_mode_one_time = {'id': 'OneTime', 'name': 'OneTimeBootMode', 'is_current': False, 'is_next': False} self.boot_device_pxe = { 'id': 'BIOS.Setup.1-1#BootSeq#NIC.Embedded.1-1-1', 'boot_mode': 'IPL', 'current_assigned_sequence': 0, 'pending_assigned_sequence': 0, 'bios_boot_string': 'Embedded NIC 1 Port 1 Partition 1'} self.boot_device_disk = { 'id': 'BIOS.Setup.1-1#BootSeq#HardDisk.List.1-1', 'boot_mode': 'IPL', 'current_assigned_sequence': 1, 'pending_assigned_sequence': 1, 'bios_boot_string': 'Hard drive C: BootSeq'} def test__get_boot_device(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_boot_modes.return_value = [ mock.Mock(**self.boot_mode_ipl), mock.Mock(**self.boot_mode_one_time)] mock_client.list_boot_devices.return_value = { 'IPL': [mock.Mock(**self.boot_device_pxe), mock.Mock(**self.boot_device_disk)]} boot_device = drac_mgmt._get_boot_device(self.node) expected_boot_device = {'boot_device': 'pxe', 'persistent': True} 
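# 'IPL' (BootSeq) is both the current and the next boot mode here, so
# the pending device is a persistent selection: persistent=True.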
        self.assertEqual(expected_boot_device, boot_device)
        mock_client.list_boot_modes.assert_called_once_with()
        mock_client.list_boot_devices.assert_called_once_with()

    def test__get_boot_device_not_persistent(self, mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        self.boot_mode_one_time['is_next'] = True
        mock_client.list_boot_modes.return_value = [
            mock.Mock(**self.boot_mode_ipl),
            mock.Mock(**self.boot_mode_one_time)]
        mock_client.list_boot_devices.return_value = {
            'OneTime': [mock.Mock(**self.boot_device_pxe),
                        mock.Mock(**self.boot_device_disk)]}

        boot_device = drac_mgmt._get_boot_device(self.node)

        expected_boot_device = {'boot_device': 'pxe', 'persistent': False}
        self.assertEqual(expected_boot_device, boot_device)
        mock_client.list_boot_modes.assert_called_once_with()
        mock_client.list_boot_devices.assert_called_once_with()

    def test__get_boot_device_with_empty_boot_mode_list(
            self, mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        mock_client.list_boot_modes.return_value = []

        self.assertRaises(exception.DracOperationError,
                          drac_mgmt._get_boot_device, self.node)

    @mock.patch.object(drac_mgmt, '_get_boot_device', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    def test_set_boot_device(self, mock_validate_job_queue,
                             mock__get_boot_device, mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        mock_client.list_boot_modes.return_value = [
            mock.Mock(**self.boot_mode_ipl),
            mock.Mock(**self.boot_mode_one_time)]
        mock_client.list_boot_devices.return_value = {
            'IPL': [mock.Mock(**self.boot_device_pxe),
                    mock.Mock(**self.boot_device_disk)]}
        boot_device = {'boot_device': ironic.common.boot_devices.DISK,
                       'persistent': True}
        mock__get_boot_device.return_value = boot_device

        boot_device = drac_mgmt.set_boot_device(
            self.node, ironic.common.boot_devices.PXE, persistent=False)

        mock_validate_job_queue.assert_called_once_with(self.node)
        mock_client.change_boot_device_order.assert_called_once_with(
            'OneTime', 'BIOS.Setup.1-1#BootSeq#NIC.Embedded.1-1-1')
        mock_client.commit_pending_bios_changes.assert_called_once_with()

    @mock.patch.object(drac_mgmt, '_get_boot_device', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    def test_set_boot_device_called_with_no_change(
            self, mock_validate_job_queue, mock__get_boot_device,
            mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        mock_client.list_boot_modes.return_value = [
            mock.Mock(**self.boot_mode_ipl),
            mock.Mock(**self.boot_mode_one_time)]
        mock_client.list_boot_devices.return_value = {
            'IPL': [mock.Mock(**self.boot_device_pxe),
                    mock.Mock(**self.boot_device_disk)]}
        boot_device = {'boot_device': ironic.common.boot_devices.PXE,
                       'persistent': True}
        mock__get_boot_device.return_value = boot_device

        boot_device = drac_mgmt.set_boot_device(
            self.node, ironic.common.boot_devices.PXE, persistent=True)

        mock_validate_job_queue.assert_called_once_with(self.node)
        self.assertEqual(0, mock_client.change_boot_device_order.call_count)
        self.assertEqual(0,
                         mock_client.commit_pending_bios_changes.call_count)

    @mock.patch.object(drac_mgmt, '_get_boot_device', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    def test_set_boot_device_with_invalid_job_queue(
            self, mock_validate_job_queue, mock__get_boot_device,
            mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        mock_validate_job_queue.side_effect = exception.DracOperationError(
            'boom')

        self.assertRaises(exception.DracOperationError,
                          drac_mgmt.set_boot_device, self.node,
                          ironic.common.boot_devices.PXE, persistent=True)

        self.assertEqual(0, mock_client.change_boot_device_order.call_count)
        self.assertEqual(0,
                         mock_client.commit_pending_bios_changes.call_count)


@mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                   autospec=True)
class DracManagementTestCase(db_base.DbTestCase):

    def setUp(self):
        super(DracManagementTestCase, self).setUp()
        mgr_utils.mock_the_extension_manager(driver='fake_drac')
        self.node = obj_utils.create_test_node(self.context,
                                               driver='fake_drac',
                                               driver_info=INFO_DICT)

    def test_get_properties(self, mock_get_drac_client):
        expected = drac_common.COMMON_PROPERTIES
        driver = drac_mgmt.DracManagement()
        self.assertEqual(expected, driver.get_properties())

    def test_get_supported_boot_devices(self, mock_get_drac_client):
        expected_boot_devices = [ironic.common.boot_devices.PXE,
                                 ironic.common.boot_devices.DISK,
                                 ironic.common.boot_devices.CDROM]

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            boot_devices = (
                task.driver.management.get_supported_boot_devices(task))

        self.assertEqual(sorted(expected_boot_devices),
                         sorted(boot_devices))

    @mock.patch.object(drac_mgmt, '_get_boot_device', spec_set=True,
                       autospec=True)
    def test_get_boot_device(self, mock__get_boot_device,
                             mock_get_drac_client):
        expected_boot_device = {
            'boot_device': ironic.common.boot_devices.DISK,
            'persistent': True}
        mock__get_boot_device.return_value = expected_boot_device

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            boot_device = task.driver.management.get_boot_device(task)

            self.assertEqual(expected_boot_device, boot_device)
            mock__get_boot_device.assert_called_once_with(task.node)

    @mock.patch.object(drac_mgmt, '_get_boot_device', spec_set=True,
                       autospec=True)
    def test_get_boot_device_from_driver_internal_info(
            self, mock__get_boot_device, mock_get_drac_client):
        expected_boot_device = {
            'boot_device': ironic.common.boot_devices.DISK,
            'persistent': True}

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.driver_internal_info['drac_boot_device'] = (
                expected_boot_device)
            boot_device = task.driver.management.get_boot_device(task)

            self.assertEqual(expected_boot_device, boot_device)
            self.assertEqual(0, mock__get_boot_device.call_count)

    def test_set_boot_device(self, mock_get_drac_client):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.management.set_boot_device(
                task, ironic.common.boot_devices.DISK, persistent=True)

            expected_boot_device = {
                'boot_device': ironic.common.boot_devices.DISK,
                'persistent': True}

        self.node.refresh()
        self.assertEqual(
            self.node.driver_internal_info['drac_boot_device'],
            expected_boot_device)

    def test_set_boot_device_fail(self, mock_get_drac_client):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              task.driver.management.set_boot_device,
                              task, 'foo')

    def test_get_sensors_data(self, mock_get_drac_client):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(NotImplementedError,
                              task.driver.management.get_sensors_data,
                              task)
ironic-5.1.0/ironic/tests/unit/drivers/modules/drac/test_job.py0000664000567000056710000000503312674513466026006 0ustar jenkinsjenkins00000000000000#
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Test class for DRAC job specific methods
"""

from dracclient import exceptions as drac_exceptions
import mock

from ironic.common import exception
from ironic.drivers.modules.drac import common as drac_common
from ironic.drivers.modules.drac import job as drac_job
from ironic.tests.unit.conductor import mgr_utils
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils

INFO_DICT = db_utils.get_test_drac_info()


@mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                   autospec=True)
class DracJobTestCase(db_base.DbTestCase):

    def setUp(self):
        super(DracJobTestCase, self).setUp()
        mgr_utils.mock_the_extension_manager(driver='fake_drac')
        self.node = obj_utils.create_test_node(self.context,
                                               driver='fake_drac',
                                               driver_info=INFO_DICT)

    def test_validate_job_queue(self, mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        mock_client.list_jobs.return_value = []

        drac_job.validate_job_queue(self.node)

        mock_client.list_jobs.assert_called_once_with(only_unfinished=True)

    def test_validate_job_queue_fail(self, mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        exc = drac_exceptions.BaseClientException('boom')
        mock_client.list_jobs.side_effect = exc

        self.assertRaises(exception.DracOperationError,
                          drac_job.validate_job_queue, self.node)

    def test_validate_job_queue_invalid(self, mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        mock_client.list_jobs.return_value = [42]

        self.assertRaises(exception.DracOperationError,
                          drac_job.validate_job_queue, self.node)
ironic-5.1.0/ironic/tests/unit/drivers/modules/drac/test_bios.py0000664000567000056710000001514212674513466026172 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*-
#
# Copyright 2015 Dell, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
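
# A minimal sketch (not from the original file) of the vendor passthru flow
# the tests below exercise, assuming a node enrolled with the fake_drac
# driver; the method names mirror the calls asserted in this module:
#
#     with task_manager.acquire(context, node.uuid, shared=False) as task:
#         config = task.driver.vendor.get_bios_config(task)
#         task.driver.vendor.set_bios_config(task,
#                                            ProcVirtualization='Enabled')
#         task.driver.vendor.commit_bios_config(task, reboot=True)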
""" Test class for DRAC BIOS configuration specific methods """ from dracclient import exceptions as drac_exceptions import mock from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.drac import common as drac_common from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = db_utils.get_test_drac_info() class DracBIOSConfigurationTestCase(db_base.DbTestCase): def setUp(self): super(DracBIOSConfigurationTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver='fake_drac') self.node = obj_utils.create_test_node(self.context, driver='fake_drac', driver_info=INFO_DICT) patch_get_drac_client = mock.patch.object( drac_common, 'get_drac_client', spec_set=True, autospec=True) mock_get_drac_client = patch_get_drac_client.start() self.mock_client = mock.Mock() mock_get_drac_client.return_value = self.mock_client self.addCleanup(patch_get_drac_client.stop) proc_virt_attr = { 'name': 'ProcVirtualization', 'current_value': 'Enabled', 'pending_value': None, 'read_only': False, 'possible_values': ['Enabled', 'Disabled']} self.bios_attrs = { 'ProcVirtualization': mock.Mock(**proc_virt_attr) } def test_get_config(self): self.mock_client.list_bios_settings.return_value = self.bios_attrs with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: bios_config = task.driver.vendor.get_bios_config(task) self.mock_client.list_bios_settings.assert_called_once_with() self.assertIn('ProcVirtualization', bios_config) def test_get_config_fail(self): exc = drac_exceptions.BaseClientException('boom') self.mock_client.list_bios_settings.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.DracOperationError, task.driver.vendor.get_bios_config, task) self.mock_client.list_bios_settings.assert_called_once_with() def test_set_config(self): self.mock_client.list_jobs.return_value = [] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.vendor.set_bios_config(task, ProcVirtualization='Enabled') self.mock_client.list_jobs.assert_called_once_with( only_unfinished=True) self.mock_client.set_bios_settings.assert_called_once_with( {'ProcVirtualization': 'Enabled'}) def test_set_config_fail(self): self.mock_client.list_jobs.return_value = [] exc = drac_exceptions.BaseClientException('boom') self.mock_client.set_bios_settings.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.DracOperationError, task.driver.vendor.set_bios_config, task, ProcVirtualization='Enabled') self.mock_client.set_bios_settings.assert_called_once_with( {'ProcVirtualization': 'Enabled'}) def test_commit_config(self): self.mock_client.list_jobs.return_value = [] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.vendor.commit_bios_config(task) self.mock_client.list_jobs.assert_called_once_with( only_unfinished=True) self.mock_client.commit_pending_bios_changes.assert_called_once_with( False) def test_commit_config_with_reboot(self): self.mock_client.list_jobs.return_value = [] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.vendor.commit_bios_config(task, reboot=True) self.mock_client.list_jobs.assert_called_once_with( only_unfinished=True) 
self.mock_client.commit_pending_bios_changes.assert_called_once_with( True) def test_commit_config_fail(self): self.mock_client.list_jobs.return_value = [] exc = drac_exceptions.BaseClientException('boom') self.mock_client.commit_pending_bios_changes.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.DracOperationError, task.driver.vendor.commit_bios_config, task) self.mock_client.list_jobs.assert_called_once_with( only_unfinished=True) self.mock_client.commit_pending_bios_changes.assert_called_once_with( False) def test_abandon_config(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.vendor.abandon_bios_config(task) self.mock_client.abandon_pending_bios_changes.assert_called_once_with() def test_abandon_config_fail(self): exc = drac_exceptions.BaseClientException('boom') self.mock_client.abandon_pending_bios_changes.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.DracOperationError, task.driver.vendor.abandon_bios_config, task) self.mock_client.abandon_pending_bios_changes.assert_called_once_with() ironic-5.1.0/ironic/tests/unit/drivers/modules/drac/bios_wsman_mock.py0000664000567000056710000002712412674513466027354 0ustar jenkinsjenkins00000000000000# # Copyright 2015 Dell, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
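
# A short orientation sketch, assuming the structure visible below: each
# entry in Enumerations maps a DCIM resource URI to a raw WS-Man 'XML'
# payload and the 'Dict' of attribute values a parser is expected to
# produce from it, e.g.:
#
#     expected = Enumerations[resource_uris.DCIM_BIOSEnumeration]['Dict']
#     expected['MemTest']['current_value']  # -> 'Disabled'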
""" Test class for DRAC BIOS interface """ from ironic.drivers.modules.drac import resource_uris Enumerations = { resource_uris.DCIM_BIOSEnumeration: { 'XML': """ http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous http://schemas.xmlsoap.org/ws/2004/09/enumeration/EnumerateResponse uuid:1f5cd907-0e6f-1e6f-8002-4f266e3acab8 uuid:219ca357-0e6f-1e6f-a828-f0e4fb722ab8 MemTest Disabled 310 BIOS.Setup.1-1 Memory Settings MemSettings BIOS.Setup.1-1:MemTest false Enabled Disabled C States ProcCStates Disabled 1706 BIOS.Setup.1-1 System Profile Settings SysProfileSettings BIOS.Setup.1-1:ProcCStates true Enabled Disabled """, 'Dict': { 'MemTest': { 'name': 'MemTest', 'current_value': 'Disabled', 'pending_value': None, 'read_only': False, 'possible_values': ['Disabled', 'Enabled']}, 'ProcCStates': { 'name': 'ProcCStates', 'current_value': 'Disabled', 'pending_value': None, 'read_only': True, 'possible_values': ['Disabled', 'Enabled']}}}, resource_uris.DCIM_BIOSString: { 'XML': """ http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous http://schemas.xmlsoap.org/ws/2004/09/enumeration/EnumerateResponse uuid:1f877bcb-0e6f-1e6f-8004-4f266e3acab8 uuid:21bea321-0e6f-1e6f-a82b-f0e4fb722ab8 SystemModelName PowerEdge R630 201 BIOS.Setup.1-1 System Information SysInformation BIOS.Setup.1-1:SystemModelName true 40 0 SystemModelName2 PowerEdge R630 201 BIOS.Setup.1-1 System Information SysInformation BIOS.Setup.1-1:SystemModelName2 true 40 0 Asset Tag AssetTag 1903 BIOS.Setup.1-1 Miscellaneous Settings MiscSettings BIOS.Setup.1-1:AssetTag false 63 0 ^[ -~]{0,63}$ """, 'Dict': { 'SystemModelName': { 'name': 'SystemModelName', 'current_value': 'PowerEdge R630', 'pending_value': None, 'read_only': True, 'min_length': 0, 'max_length': 40, 'pcre_regex': None}, 'SystemModelName2': { 'name': 'SystemModelName2', 'current_value': 'PowerEdge R630', 'pending_value': None, 'read_only': True, 'min_length': 0, 'max_length': 40, 'pcre_regex': None}, 'AssetTag': { 'name': 'AssetTag', 'current_value': None, 'pending_value': None, 'read_only': False, 'min_length': 0, 'max_length': 63, 'pcre_regex': '^[ -~]{0,63}$'}}}, resource_uris.DCIM_BIOSInteger: { 'XML': """ http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous http://schemas.xmlsoap.org/ws/2004/09/enumeration/EnumerateResponse uuid:1fa60792-0e6f-1e6f-8005-4f266e3acab8 uuid:21ccf01d-0e6f-1e6f-a82d-f0e4fb722ab8 Proc1NumCores 8 439 BIOS.Setup.1-1 Processor Settings ProcSettings BIOS.Setup.1-1:Proc1NumCores true 0 65535 AcPwrRcvryUserDelay 60 1825 BIOS.Setup.1-1 System Security SysSecurity BIOS.Setup.1-1:AcPwrRcvryUserDelay false 60 240 """, 'Dict': { 'Proc1NumCores': { 'name': 'Proc1NumCores', 'current_value': 8, 'pending_value': None, 'read_only': True, 'lower_bound': 0, 'upper_bound': 65535}, 'AcPwrRcvryUserDelay': { 'name': 'AcPwrRcvryUserDelay', 'current_value': 60, 'pending_value': None, 'read_only': False, 'lower_bound': 60, 'upper_bound': 240}}}} Invoke_Commit = """ http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_BIOSService/SetAttributesResponse uuid:42baa476-0ee9-1ee9-8020-4f266e3acab8 uuid:fadae2f8-0eea-1eea-9626-76a8f1d9bed4 The command was successful. 
BIOS001 Yes 0 Set PendingValue """ ironic-5.1.0/ironic/tests/unit/drivers/modules/drac/test_common.py0000664000567000056710000001232712674513466026530 0ustar jenkinsjenkins00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Test class for common methods used by DRAC modules. """ import dracclient.client import mock from ironic.common import exception from ironic.drivers.modules.drac import common as drac_common from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = db_utils.get_test_drac_info() class DracCommonMethodsTestCase(db_base.DbTestCase): def test_parse_driver_info(self): node = obj_utils.create_test_node(self.context, driver='fake_drac', driver_info=INFO_DICT) info = drac_common.parse_driver_info(node) self.assertIsNotNone(info.get('drac_host')) self.assertIsNotNone(info.get('drac_port')) self.assertIsNotNone(info.get('drac_path')) self.assertIsNotNone(info.get('drac_protocol')) self.assertIsNotNone(info.get('drac_username')) self.assertIsNotNone(info.get('drac_password')) def test_parse_driver_info_missing_host(self): node = obj_utils.create_test_node(self.context, driver='fake_drac', driver_info=INFO_DICT) del node.driver_info['drac_host'] self.assertRaises(exception.InvalidParameterValue, drac_common.parse_driver_info, node) def test_parse_driver_info_missing_port(self): node = obj_utils.create_test_node(self.context, driver='fake_drac', driver_info=INFO_DICT) del node.driver_info['drac_port'] info = drac_common.parse_driver_info(node) self.assertEqual(443, info.get('drac_port')) def test_parse_driver_info_invalid_port(self): node = obj_utils.create_test_node(self.context, driver='fake_drac', driver_info=INFO_DICT) node.driver_info['drac_port'] = 'foo' self.assertRaises(exception.InvalidParameterValue, drac_common.parse_driver_info, node) def test_parse_driver_info_missing_path(self): node = obj_utils.create_test_node(self.context, driver='fake_drac', driver_info=INFO_DICT) del node.driver_info['drac_path'] info = drac_common.parse_driver_info(node) self.assertEqual('/wsman', info.get('drac_path')) def test_parse_driver_info_missing_protocol(self): node = obj_utils.create_test_node(self.context, driver='fake_drac', driver_info=INFO_DICT) del node.driver_info['drac_protocol'] info = drac_common.parse_driver_info(node) self.assertEqual('https', info.get('drac_protocol')) def test_parse_driver_info_invalid_protocol(self): node = obj_utils.create_test_node(self.context, driver='fake_drac', driver_info=INFO_DICT) node.driver_info['drac_protocol'] = 'foo' self.assertRaises(exception.InvalidParameterValue, drac_common.parse_driver_info, node) def test_parse_driver_info_missing_username(self): node = obj_utils.create_test_node(self.context, driver='fake_drac', driver_info=INFO_DICT) del node.driver_info['drac_username'] self.assertRaises(exception.InvalidParameterValue, drac_common.parse_driver_info, node) def test_parse_driver_info_missing_password(self): node = 
obj_utils.create_test_node(self.context, driver='fake_drac', driver_info=INFO_DICT) del node.driver_info['drac_password'] self.assertRaises(exception.InvalidParameterValue, drac_common.parse_driver_info, node) @mock.patch.object(dracclient.client, 'DRACClient', autospec=True) def test_get_drac_client(self, mock_dracclient): expected_call = mock.call('1.2.3.4', 'admin', 'fake', 443, '/wsman', 'https') node = obj_utils.create_test_node(self.context, driver='fake_drac', driver_info=INFO_DICT) drac_common.get_drac_client(node) self.assertEqual(mock_dracclient.mock_calls, [expected_call]) ironic-5.1.0/ironic/tests/unit/drivers/modules/drac/test_power.py0000664000567000056710000001174712674513466026401 0ustar jenkinsjenkins00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Test class for DRAC power interface """ from dracclient import constants as drac_constants from dracclient import exceptions as drac_exceptions import mock from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules.drac import common as drac_common from ironic.drivers.modules.drac import power as drac_power from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = db_utils.get_test_drac_info() @mock.patch.object(drac_common, 'get_drac_client', spec_set=True, autospec=True) class DracPowerTestCase(base.DbTestCase): def setUp(self): super(DracPowerTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver='fake_drac') self.node = obj_utils.create_test_node(self.context, driver='fake_drac', driver_info=INFO_DICT) def test_get_properties(self, mock_get_drac_client): expected = drac_common.COMMON_PROPERTIES driver = drac_power.DracPower() self.assertEqual(expected, driver.get_properties()) def test_get_power_state(self, mock_get_drac_client): mock_client = mock_get_drac_client.return_value mock_client.get_power_state.return_value = drac_constants.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: power_state = task.driver.power.get_power_state(task) self.assertEqual(states.POWER_ON, power_state) mock_client.get_power_state.assert_called_once_with() def test_get_power_state_fail(self, mock_get_drac_client): mock_client = mock_get_drac_client.return_value exc = drac_exceptions.BaseClientException('boom') mock_client.get_power_state.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.DracOperationError, task.driver.power.get_power_state, task) mock_client.get_power_state.assert_called_once_with() def test_set_power_state(self, mock_get_drac_client): mock_client = mock_get_drac_client.return_value with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.set_power_state(task, states.POWER_OFF) drac_power_state = 
drac_power.REVERSE_POWER_STATES[states.POWER_OFF] mock_client.set_power_state.assert_called_once_with(drac_power_state) def test_set_power_state_fail(self, mock_get_drac_client): mock_client = mock_get_drac_client.return_value exc = drac_exceptions.BaseClientException('boom') mock_client.set_power_state.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.DracOperationError, task.driver.power.set_power_state, task, states.POWER_OFF) drac_power_state = drac_power.REVERSE_POWER_STATES[states.POWER_OFF] mock_client.set_power_state.assert_called_once_with(drac_power_state) def test_reboot_while_powered_on(self, mock_get_drac_client): mock_client = mock_get_drac_client.return_value mock_client.get_power_state.return_value = drac_constants.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.reboot(task) drac_power_state = drac_power.REVERSE_POWER_STATES[states.REBOOT] mock_client.set_power_state.assert_called_once_with(drac_power_state) def test_reboot_while_powered_off(self, mock_get_drac_client): mock_client = mock_get_drac_client.return_value mock_client.get_power_state.return_value = drac_constants.POWER_OFF with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.reboot(task) drac_power_state = drac_power.REVERSE_POWER_STATES[states.POWER_ON] mock_client.set_power_state.assert_called_once_with(drac_power_state) ironic-5.1.0/ironic/tests/unit/drivers/modules/drac/__init__.py0000664000567000056710000000000012674513466025721 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/drivers/modules/__init__.py0000664000567000056710000000000012674513466025010 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/drivers/modules/test_deploy_utils.py0000664000567000056710000024400212674513466027040 0ustar jenkinsjenkins00000000000000# Copyright (c) 2012 NTT DOCOMO, INC. # Copyright 2011 OpenStack Foundation # Copyright 2011 Ilya Alekseyev # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
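
# A minimal sketch of the deploy helper these tests cover, assuming the
# positional signature asserted throughout this module (iSCSI endpoint,
# image path, partition sizes in MB, then the node UUID); the return value
# shape mirrors the expected_uuid_dict checks below:
#
#     uuids = utils.deploy_partition_image(
#         '127.0.0.1', 3306, 'iqn.xyz', 1, '/tmp/xyz/image',
#         root_mb=128, swap_mb=64, ephemeral_mb=0, ephemeral_format=None,
#         node_uuid=node_uuid, boot_option='local', boot_mode='uefi')
#     # -> {'root uuid': ..., 'efi system partition uuid': ...}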
import os
import tempfile
import time
import types

from ironic_lib import disk_utils
from ironic_lib import utils as ironic_utils
import mock
from oslo_config import cfg
from oslo_utils import uuidutils
import testtools
from testtools import matchers

from ironic.common import boot_devices
from ironic.common import exception
from ironic.common import image_service
from ironic.common import keystone
from ironic.common import states
from ironic.common import utils as common_utils
from ironic.conductor import task_manager
from ironic.conductor import utils as manager_utils
from ironic.drivers.modules import agent_client
from ironic.drivers.modules import deploy_utils as utils
from ironic.drivers.modules import image_cache
from ironic.drivers.modules import iscsi_deploy
from ironic.drivers.modules import pxe
from ironic.tests import base as tests_base
from ironic.tests.unit.conductor import mgr_utils
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils

INST_INFO_DICT = db_utils.get_test_pxe_instance_info()
DRV_INFO_DICT = db_utils.get_test_pxe_driver_info()
DRV_INTERNAL_INFO_DICT = db_utils.get_test_pxe_driver_internal_info()

_PXECONF_DEPLOY = b"""
default deploy

label deploy
kernel deploy_kernel
append initrd=deploy_ramdisk
ipappend 3

label boot_partition
kernel kernel
append initrd=ramdisk root={{ ROOT }}

label boot_whole_disk
COM32 chain.c32
append mbr:{{ DISK_IDENTIFIER }}

label trusted_boot
kernel mboot
append tboot.gz --- kernel root={{ ROOT }} --- ramdisk
"""

_PXECONF_BOOT_PARTITION = """
default boot_partition

label deploy
kernel deploy_kernel
append initrd=deploy_ramdisk
ipappend 3

label boot_partition
kernel kernel
append initrd=ramdisk root=UUID=12345678-1234-1234-1234-1234567890abcdef

label boot_whole_disk
COM32 chain.c32
append mbr:{{ DISK_IDENTIFIER }}

label trusted_boot
kernel mboot
append tboot.gz --- kernel root=UUID=12345678-1234-1234-1234-1234567890abcdef \
--- ramdisk
"""

_PXECONF_BOOT_WHOLE_DISK = """
default boot_whole_disk

label deploy
kernel deploy_kernel
append initrd=deploy_ramdisk
ipappend 3

label boot_partition
kernel kernel
append initrd=ramdisk root={{ ROOT }}

label boot_whole_disk
COM32 chain.c32
append mbr:0x12345678

label trusted_boot
kernel mboot
append tboot.gz --- kernel root={{ ROOT }} --- ramdisk
"""

_PXECONF_TRUSTED_BOOT = """
default trusted_boot

label deploy
kernel deploy_kernel
append initrd=deploy_ramdisk
ipappend 3

label boot_partition
kernel kernel
append initrd=ramdisk root=UUID=12345678-1234-1234-1234-1234567890abcdef

label boot_whole_disk
COM32 chain.c32
append mbr:{{ DISK_IDENTIFIER }}

label trusted_boot
kernel mboot
append tboot.gz --- kernel root=UUID=12345678-1234-1234-1234-1234567890abcdef \
--- ramdisk
"""

_IPXECONF_DEPLOY = b"""
#!ipxe

dhcp

goto deploy

:deploy
kernel deploy_kernel
initrd deploy_ramdisk
boot

:boot_partition
kernel kernel
append initrd=ramdisk root={{ ROOT }}
boot

:boot_whole_disk
kernel chain.c32
append mbr:{{ DISK_IDENTIFIER }}
boot
"""

_IPXECONF_BOOT_PARTITION = """
#!ipxe

dhcp

goto boot_partition

:deploy
kernel deploy_kernel
initrd deploy_ramdisk
boot

:boot_partition
kernel kernel
append initrd=ramdisk root=UUID=12345678-1234-1234-1234-1234567890abcdef
boot

:boot_whole_disk
kernel chain.c32
append mbr:{{ DISK_IDENTIFIER }}
boot
"""

_IPXECONF_BOOT_WHOLE_DISK = """
#!ipxe

dhcp

goto boot_whole_disk

:deploy
kernel deploy_kernel
initrd deploy_ramdisk
boot

:boot_partition
kernel kernel
append initrd=ramdisk root={{ ROOT }}
boot

:boot_whole_disk
kernel chain.c32
append mbr:0x12345678
boot
"""

_UEFI_PXECONF_DEPLOY = b"""
default=deploy

image=deploy_kernel
        label=deploy
        initrd=deploy_ramdisk
        append="ro text"

image=kernel
        label=boot_partition
        initrd=ramdisk
        append="root={{ ROOT }}"

image=chain.c32
        label=boot_whole_disk
        append="mbr:{{ DISK_IDENTIFIER }}"
"""

_UEFI_PXECONF_BOOT_PARTITION = """
default=boot_partition

image=deploy_kernel
        label=deploy
        initrd=deploy_ramdisk
        append="ro text"

image=kernel
        label=boot_partition
        initrd=ramdisk
        append="root=UUID=12345678-1234-1234-1234-1234567890abcdef"

image=chain.c32
        label=boot_whole_disk
        append="mbr:{{ DISK_IDENTIFIER }}"
"""

_UEFI_PXECONF_BOOT_WHOLE_DISK = """
default=boot_whole_disk

image=deploy_kernel
        label=deploy
        initrd=deploy_ramdisk
        append="ro text"

image=kernel
        label=boot_partition
        initrd=ramdisk
        append="root={{ ROOT }}"

image=chain.c32
        label=boot_whole_disk
        append="mbr:0x12345678"
"""

_UEFI_PXECONF_DEPLOY_GRUB = b"""
set default=deploy
set timeout=5
set hidden_timeout_quiet=false

menuentry "deploy" {
linuxefi deploy_kernel "ro text"
initrdefi deploy_ramdisk
}

menuentry "boot_partition" {
linuxefi kernel "root=(( ROOT ))"
initrdefi ramdisk
}

menuentry "boot_whole_disk" {
linuxefi chain.c32 mbr:(( DISK_IDENTIFIER ))
}
"""

_UEFI_PXECONF_BOOT_PARTITION_GRUB = """
set default=boot_partition
set timeout=5
set hidden_timeout_quiet=false

menuentry "deploy" {
linuxefi deploy_kernel "ro text"
initrdefi deploy_ramdisk
}

menuentry "boot_partition" {
linuxefi kernel "root=UUID=12345678-1234-1234-1234-1234567890abcdef"
initrdefi ramdisk
}

menuentry "boot_whole_disk" {
linuxefi chain.c32 mbr:(( DISK_IDENTIFIER ))
}
"""

_UEFI_PXECONF_BOOT_WHOLE_DISK_GRUB = """
set default=boot_whole_disk
set timeout=5
set hidden_timeout_quiet=false

menuentry "deploy" {
linuxefi deploy_kernel "ro text"
initrdefi deploy_ramdisk
}

menuentry "boot_partition" {
linuxefi kernel "root=(( ROOT ))"
initrdefi ramdisk
}

menuentry "boot_whole_disk" {
linuxefi chain.c32 mbr:0x12345678
}
"""


@mock.patch.object(time, 'sleep', lambda seconds: None)
class PhysicalWorkTestCase(tests_base.TestCase):

    def _mock_calls(self, name_list, module):
        patch_list = [mock.patch.object(module, name,
                                        spec_set=types.FunctionType)
                      for name in name_list]
        mock_list = [patcher.start() for patcher in patch_list]
        for patcher in patch_list:
            self.addCleanup(patcher.stop)

        parent_mock = mock.MagicMock(spec=[])
        for mocker, name in zip(mock_list, name_list):
            parent_mock.attach_mock(mocker, name)
        return parent_mock

    def _test_deploy_partition_image(self, boot_option=None, boot_mode=None,
                                     disk_label=None):
        """Check loosely all functions are called with right args."""
        address = '127.0.0.1'
        port = 3306
        iqn = 'iqn.xyz'
        lun = 1
        image_path = '/tmp/xyz/image'
        root_mb = 128
        swap_mb = 64
        ephemeral_mb = 0
        ephemeral_format = None
        configdrive_mb = 0
        node_uuid = "12345678-1234-1234-1234-1234567890abcxyz"

        dev = '/dev/fake'
        swap_part = '/dev/fake-part1'
        root_part = '/dev/fake-part2'
        root_uuid = '12345678-1234-1234-12345678-12345678abcdef'

        utils_name_list = ['get_dev', 'discovery', 'login_iscsi',
                           'logout_iscsi', 'delete_iscsi', 'notify']
        disk_utils_name_list = ['is_block_device', 'get_image_mb',
                                'make_partitions', 'populate_image', 'mkfs',
                                'block_uuid', 'destroy_disk_metadata']

        utils_mock = self._mock_calls(utils_name_list, utils)
        utils_mock.get_dev.return_value = dev

        disk_utils_mock = self._mock_calls(disk_utils_name_list, disk_utils)
        disk_utils_mock.get_image_mb.return_value = 1
        disk_utils_mock.is_block_device.return_value = True
disk_utils_mock.block_uuid.return_value = root_uuid disk_utils_mock.make_partitions.return_value = {'root': root_part, 'swap': swap_part} make_partitions_expected_args = [dev, root_mb, swap_mb, ephemeral_mb, configdrive_mb, node_uuid] make_partitions_expected_kwargs = {'commit': True, 'disk_label': disk_label} deploy_kwargs = {} if boot_option: make_partitions_expected_kwargs['boot_option'] = boot_option deploy_kwargs['boot_option'] = boot_option else: make_partitions_expected_kwargs['boot_option'] = 'netboot' if boot_mode: make_partitions_expected_kwargs['boot_mode'] = boot_mode deploy_kwargs['boot_mode'] = boot_mode else: make_partitions_expected_kwargs['boot_mode'] = 'bios' if disk_label: deploy_kwargs['disk_label'] = disk_label # If no boot_option, then it should default to netboot. utils_calls_expected = [mock.call.get_dev(address, port, iqn, lun), mock.call.discovery(address, port), mock.call.login_iscsi(address, port, iqn), mock.call.logout_iscsi(address, port, iqn), mock.call.delete_iscsi(address, port, iqn)] disk_utils_calls_expected = [mock.call.get_image_mb(image_path), mock.call.is_block_device(dev), mock.call.destroy_disk_metadata( dev, node_uuid), mock.call.make_partitions( *make_partitions_expected_args, **make_partitions_expected_kwargs), mock.call.is_block_device(root_part), mock.call.is_block_device(swap_part), mock.call.populate_image( image_path, root_part), mock.call.mkfs( dev=swap_part, fs='swap', label='swap1'), mock.call.block_uuid(root_part)] uuids_dict_returned = utils.deploy_partition_image( address, port, iqn, lun, image_path, root_mb, swap_mb, ephemeral_mb, ephemeral_format, node_uuid, **deploy_kwargs) self.assertEqual(utils_calls_expected, utils_mock.mock_calls) self.assertEqual(disk_utils_calls_expected, disk_utils_mock.mock_calls) expected_uuid_dict = { 'root uuid': root_uuid, 'efi system partition uuid': None} self.assertEqual(expected_uuid_dict, uuids_dict_returned) def test_deploy_partition_image_without_boot_option(self): self._test_deploy_partition_image() def test_deploy_partition_image_netboot(self): self._test_deploy_partition_image(boot_option="netboot") def test_deploy_partition_image_localboot(self): self._test_deploy_partition_image(boot_option="local") def test_deploy_partition_image_wo_boot_option_and_wo_boot_mode(self): self._test_deploy_partition_image() def test_deploy_partition_image_netboot_bios(self): self._test_deploy_partition_image(boot_option="netboot", boot_mode="bios") def test_deploy_partition_image_localboot_bios(self): self._test_deploy_partition_image(boot_option="local", boot_mode="bios") def test_deploy_partition_image_netboot_uefi(self): self._test_deploy_partition_image(boot_option="netboot", boot_mode="uefi") def test_deploy_partition_image_disk_label(self): self._test_deploy_partition_image(disk_label='gpt') @mock.patch.object(disk_utils, 'get_image_mb', return_value=129, autospec=True) def test_deploy_partition_image_image_exceeds_root_partition(self, gim_mock): address = '127.0.0.1' port = 3306 iqn = 'iqn.xyz' lun = 1 image_path = '/tmp/xyz/image' root_mb = 128 swap_mb = 64 ephemeral_mb = 0 ephemeral_format = None node_uuid = "12345678-1234-1234-1234-1234567890abcxyz" self.assertRaises(exception.InstanceDeployFailure, utils.deploy_partition_image, address, port, iqn, lun, image_path, root_mb, swap_mb, ephemeral_mb, ephemeral_format, node_uuid) gim_mock.assert_called_once_with(image_path) # We mock utils.block_uuid separately here because we can't predict # the order in which it will be called. 
@mock.patch.object(disk_utils, 'block_uuid', autospec=True) def test_deploy_partition_image_localboot_uefi(self, block_uuid_mock): """Check loosely all functions are called with right args.""" address = '127.0.0.1' port = 3306 iqn = 'iqn.xyz' lun = 1 image_path = '/tmp/xyz/image' root_mb = 128 swap_mb = 64 ephemeral_mb = 0 ephemeral_format = None configdrive_mb = 0 node_uuid = "12345678-1234-1234-1234-1234567890abcxyz" dev = '/dev/fake' swap_part = '/dev/fake-part2' root_part = '/dev/fake-part3' efi_system_part = '/dev/fake-part1' root_uuid = '12345678-1234-1234-12345678-12345678abcdef' efi_system_part_uuid = '9036-482' utils_name_list = ['get_dev', 'discovery', 'login_iscsi', 'logout_iscsi', 'delete_iscsi', 'notify'] disk_utils_name_list = ['get_image_mb', 'make_partitions', 'is_block_device', 'populate_image', 'mkfs', 'destroy_disk_metadata'] utils_mock = self._mock_calls(utils_name_list, utils) utils_mock.get_dev.return_value = dev disk_utils_mock = self._mock_calls(disk_utils_name_list, disk_utils) disk_utils_mock.get_image_mb.return_value = 1 disk_utils_mock.is_block_device.return_value = True def block_uuid_side_effect(device): if device == root_part: return root_uuid if device == efi_system_part: return efi_system_part_uuid block_uuid_mock.side_effect = block_uuid_side_effect disk_utils_mock.make_partitions.return_value = { 'root': root_part, 'swap': swap_part, 'efi system partition': efi_system_part} # If no boot_option, then it should default to netboot. utils_calls_expected = [mock.call.get_dev(address, port, iqn, lun), mock.call.discovery(address, port), mock.call.login_iscsi(address, port, iqn), mock.call.logout_iscsi(address, port, iqn), mock.call.delete_iscsi(address, port, iqn)] disk_utils_calls_expected = [mock.call.get_image_mb(image_path), mock.call.is_block_device(dev), mock.call.destroy_disk_metadata( dev, node_uuid), mock.call.make_partitions( dev, root_mb, swap_mb, ephemeral_mb, configdrive_mb, node_uuid, commit=True, boot_option="local", boot_mode="uefi", disk_label=None), mock.call.is_block_device(root_part), mock.call.is_block_device(swap_part), mock.call.is_block_device( efi_system_part), mock.call.mkfs( dev=efi_system_part, fs='vfat', label='efi-part'), mock.call.populate_image( image_path, root_part), mock.call.mkfs( dev=swap_part, fs='swap', label='swap1')] uuid_dict_returned = utils.deploy_partition_image( address, port, iqn, lun, image_path, root_mb, swap_mb, ephemeral_mb, ephemeral_format, node_uuid, boot_option="local", boot_mode="uefi") self.assertEqual(utils_calls_expected, utils_mock.mock_calls) self.assertEqual(disk_utils_calls_expected, disk_utils_mock.mock_calls) block_uuid_mock.assert_any_call('/dev/fake-part1') block_uuid_mock.assert_any_call('/dev/fake-part3') expected_uuid_dict = { 'root uuid': root_uuid, 'efi system partition uuid': efi_system_part_uuid} self.assertEqual(expected_uuid_dict, uuid_dict_returned) def test_deploy_partition_image_without_swap(self): """Check loosely all functions are called with right args.""" address = '127.0.0.1' port = 3306 iqn = 'iqn.xyz' lun = 1 image_path = '/tmp/xyz/image' root_mb = 128 swap_mb = 0 ephemeral_mb = 0 ephemeral_format = None configdrive_mb = 0 node_uuid = "12345678-1234-1234-1234-1234567890abcxyz" dev = '/dev/fake' root_part = '/dev/fake-part1' root_uuid = '12345678-1234-1234-12345678-12345678abcdef' utils_name_list = ['get_dev', 'discovery', 'login_iscsi', 'notify', 'logout_iscsi', 'delete_iscsi'] disk_utils_name_list = ['make_partitions', 'get_image_mb', 'is_block_device', 'populate_image', 
'block_uuid', 'destroy_disk_metadata'] utils_mock = self._mock_calls(utils_name_list, utils) utils_mock.get_dev.return_value = dev disk_utils_mock = self._mock_calls(disk_utils_name_list, disk_utils) disk_utils_mock.get_image_mb.return_value = 1 disk_utils_mock.is_block_device.return_value = True disk_utils_mock.block_uuid.return_value = root_uuid disk_utils_mock.make_partitions.return_value = {'root': root_part} utils_calls_expected = [mock.call.get_dev(address, port, iqn, lun), mock.call.discovery(address, port), mock.call.login_iscsi(address, port, iqn), mock.call.logout_iscsi(address, port, iqn), mock.call.delete_iscsi(address, port, iqn)] disk_utils_calls_expected = [mock.call.get_image_mb(image_path), mock.call.is_block_device(dev), mock.call.destroy_disk_metadata( dev, node_uuid), mock.call.make_partitions( dev, root_mb, swap_mb, ephemeral_mb, configdrive_mb, node_uuid, commit=True, boot_option="netboot", boot_mode="bios", disk_label=None), mock.call.is_block_device(root_part), mock.call.populate_image( image_path, root_part), mock.call.block_uuid(root_part)] uuid_dict_returned = utils.deploy_partition_image(address, port, iqn, lun, image_path, root_mb, swap_mb, ephemeral_mb, ephemeral_format, node_uuid) self.assertEqual(utils_calls_expected, utils_mock.mock_calls) self.assertEqual(disk_utils_calls_expected, disk_utils_mock.mock_calls) self.assertEqual(root_uuid, uuid_dict_returned['root uuid']) def test_deploy_partition_image_with_ephemeral(self): """Check loosely all functions are called with right args.""" address = '127.0.0.1' port = 3306 iqn = 'iqn.xyz' lun = 1 image_path = '/tmp/xyz/image' root_mb = 128 swap_mb = 64 ephemeral_mb = 256 configdrive_mb = 0 ephemeral_format = 'exttest' node_uuid = "12345678-1234-1234-1234-1234567890abcxyz" dev = '/dev/fake' ephemeral_part = '/dev/fake-part1' swap_part = '/dev/fake-part2' root_part = '/dev/fake-part3' root_uuid = '12345678-1234-1234-12345678-12345678abcdef' utils_name_list = ['get_dev', 'discovery', 'login_iscsi', 'logout_iscsi', 'delete_iscsi', 'notify'] disk_utils_name_list = ['get_image_mb', 'make_partitions', 'is_block_device', 'populate_image', 'mkfs', 'block_uuid', 'destroy_disk_metadata'] utils_mock = self._mock_calls(utils_name_list, utils) utils_mock.get_dev.return_value = dev disk_utils_mock = self._mock_calls(disk_utils_name_list, disk_utils) disk_utils_mock.get_image_mb.return_value = 1 disk_utils_mock.is_block_device.return_value = True disk_utils_mock.block_uuid.return_value = root_uuid disk_utils_mock.make_partitions.return_value = { 'swap': swap_part, 'ephemeral': ephemeral_part, 'root': root_part} utils_calls_expected = [mock.call.get_dev(address, port, iqn, lun), mock.call.discovery(address, port), mock.call.login_iscsi(address, port, iqn), mock.call.logout_iscsi(address, port, iqn), mock.call.delete_iscsi(address, port, iqn)] disk_utils_calls_expected = [mock.call.get_image_mb(image_path), mock.call.is_block_device(dev), mock.call.destroy_disk_metadata( dev, node_uuid), mock.call.make_partitions( dev, root_mb, swap_mb, ephemeral_mb, configdrive_mb, node_uuid, commit=True, boot_option="netboot", boot_mode="bios", disk_label=None), mock.call.is_block_device(root_part), mock.call.is_block_device(swap_part), mock.call.is_block_device(ephemeral_part), mock.call.populate_image( image_path, root_part), mock.call.mkfs( dev=swap_part, fs='swap', label='swap1'), mock.call.mkfs( dev=ephemeral_part, fs=ephemeral_format, label='ephemeral0'), mock.call.block_uuid(root_part)] uuid_dict_returned = 
utils.deploy_partition_image(address, port, iqn, lun, image_path, root_mb, swap_mb, ephemeral_mb, ephemeral_format, node_uuid) self.assertEqual(utils_calls_expected, utils_mock.mock_calls) self.assertEqual(disk_utils_calls_expected, disk_utils_mock.mock_calls) self.assertEqual(root_uuid, uuid_dict_returned['root uuid']) def test_deploy_partition_image_preserve_ephemeral(self): """Check if all functions are called with right args.""" address = '127.0.0.1' port = 3306 iqn = 'iqn.xyz' lun = 1 image_path = '/tmp/xyz/image' root_mb = 128 swap_mb = 64 ephemeral_mb = 256 ephemeral_format = 'exttest' configdrive_mb = 0 node_uuid = "12345678-1234-1234-1234-1234567890abcxyz" dev = '/dev/fake' ephemeral_part = '/dev/fake-part1' swap_part = '/dev/fake-part2' root_part = '/dev/fake-part3' root_uuid = '12345678-1234-1234-12345678-12345678abcdef' utils_name_list = ['get_dev', 'discovery', 'login_iscsi', 'delete_iscsi', 'logout_iscsi', 'notify'] disk_utils_name_list = ['make_partitions', 'get_image_mb', 'is_block_device', 'populate_image', 'mkfs', 'block_uuid', 'get_dev_block_size'] utils_mock = self._mock_calls(utils_name_list, utils) utils_mock.get_dev.return_value = dev disk_utils_mock = self._mock_calls(disk_utils_name_list, disk_utils) disk_utils_mock.get_image_mb.return_value = 1 disk_utils_mock.is_block_device.return_value = True disk_utils_mock.block_uuid.return_value = root_uuid disk_utils_mock.make_partitions.return_value = { 'swap': swap_part, 'ephemeral': ephemeral_part, 'root': root_part} disk_utils_mock.block_uuid.return_value = root_uuid utils_calls_expected = [mock.call.get_dev(address, port, iqn, lun), mock.call.discovery(address, port), mock.call.login_iscsi(address, port, iqn), mock.call.logout_iscsi(address, port, iqn), mock.call.delete_iscsi(address, port, iqn)] disk_utils_calls_expected = [mock.call.get_image_mb(image_path), mock.call.is_block_device(dev), mock.call.make_partitions( dev, root_mb, swap_mb, ephemeral_mb, configdrive_mb, node_uuid, commit=False, boot_option="netboot", boot_mode="bios", disk_label=None), mock.call.is_block_device(root_part), mock.call.is_block_device(swap_part), mock.call.is_block_device(ephemeral_part), mock.call.populate_image( image_path, root_part), mock.call.mkfs( dev=swap_part, fs='swap', label='swap1'), mock.call.block_uuid(root_part)] uuid_dict_returned = utils.deploy_partition_image( address, port, iqn, lun, image_path, root_mb, swap_mb, ephemeral_mb, ephemeral_format, node_uuid, preserve_ephemeral=True, boot_option="netboot") self.assertEqual(utils_calls_expected, utils_mock.mock_calls) self.assertEqual(disk_utils_calls_expected, disk_utils_mock.mock_calls) self.assertFalse(disk_utils_mock.get_dev_block_size.called) self.assertEqual(root_uuid, uuid_dict_returned['root uuid']) @mock.patch.object(ironic_utils, 'unlink_without_raise', autospec=True) def test_deploy_partition_image_with_configdrive(self, mock_unlink): """Check loosely all functions are called with right args.""" address = '127.0.0.1' port = 3306 iqn = 'iqn.xyz' lun = 1 image_path = '/tmp/xyz/image' root_mb = 128 swap_mb = 0 ephemeral_mb = 0 configdrive_mb = 10 ephemeral_format = None node_uuid = "12345678-1234-1234-1234-1234567890abcxyz" configdrive_url = 'http://1.2.3.4/cd' dev = '/dev/fake' configdrive_part = '/dev/fake-part1' root_part = '/dev/fake-part2' root_uuid = '12345678-1234-1234-12345678-12345678abcdef' utils_name_list = ['get_dev', 'discovery', 'login_iscsi', 'logout_iscsi', 'delete_iscsi', 'notify'] disk_utils_name_list = ['is_block_device', 'populate_image', 
'get_image_mb', 'destroy_disk_metadata', 'dd', 'block_uuid', 'make_partitions', '_get_configdrive'] utils_mock = self._mock_calls(utils_name_list, utils) utils_mock.get_dev.return_value = dev disk_utils_mock = self._mock_calls(disk_utils_name_list, disk_utils) disk_utils_mock.get_image_mb.return_value = 1 disk_utils_mock.is_block_device.return_value = True disk_utils_mock.block_uuid.return_value = root_uuid disk_utils_mock.make_partitions.return_value = { 'root': root_part, 'configdrive': configdrive_part} disk_utils_mock._get_configdrive.return_value = (10, 'configdrive-path') utils_calls_expected = [mock.call.get_dev(address, port, iqn, lun), mock.call.discovery(address, port), mock.call.login_iscsi(address, port, iqn), mock.call.logout_iscsi(address, port, iqn), mock.call.delete_iscsi(address, port, iqn)] disk_utils_calls_expected = [mock.call.get_image_mb(image_path), mock.call.is_block_device(dev), mock.call.destroy_disk_metadata( dev, node_uuid), mock.call._get_configdrive( configdrive_url, node_uuid, tempdir=None), mock.call.make_partitions( dev, root_mb, swap_mb, ephemeral_mb, configdrive_mb, node_uuid, commit=True, boot_option="netboot", boot_mode="bios", disk_label=None), mock.call.is_block_device(root_part), mock.call.is_block_device( configdrive_part), mock.call.dd(mock.ANY, configdrive_part), mock.call.populate_image( image_path, root_part), mock.call.block_uuid(root_part)] uuid_dict_returned = utils.deploy_partition_image( address, port, iqn, lun, image_path, root_mb, swap_mb, ephemeral_mb, ephemeral_format, node_uuid, configdrive=configdrive_url) self.assertEqual(utils_calls_expected, utils_mock.mock_calls) self.assertEqual(disk_utils_calls_expected, disk_utils_mock.mock_calls) self.assertEqual(root_uuid, uuid_dict_returned['root uuid']) mock_unlink.assert_called_once_with('configdrive-path') @mock.patch.object(disk_utils, 'get_disk_identifier', autospec=True) def test_deploy_whole_disk_image(self, mock_gdi): """Check loosely all functions are called with right args.""" address = '127.0.0.1' port = 3306 iqn = 'iqn.xyz' lun = 1 image_path = '/tmp/xyz/image' node_uuid = "12345678-1234-1234-1234-1234567890abcxyz" dev = '/dev/fake' utils_name_list = ['get_dev', 'discovery', 'login_iscsi', 'logout_iscsi', 'delete_iscsi', 'notify'] disk_utils_name_list = ['is_block_device', 'populate_image'] utils_mock = self._mock_calls(utils_name_list, utils) utils_mock.get_dev.return_value = dev disk_utils_mock = self._mock_calls(disk_utils_name_list, disk_utils) disk_utils_mock.is_block_device.return_value = True mock_gdi.return_value = '0x12345678' utils_calls_expected = [mock.call.get_dev(address, port, iqn, lun), mock.call.discovery(address, port), mock.call.login_iscsi(address, port, iqn), mock.call.logout_iscsi(address, port, iqn), mock.call.delete_iscsi(address, port, iqn)] disk_utils_calls_expected = [mock.call.is_block_device(dev), mock.call.populate_image(image_path, dev)] uuid_dict_returned = utils.deploy_disk_image(address, port, iqn, lun, image_path, node_uuid) self.assertEqual(utils_calls_expected, utils_mock.mock_calls) self.assertEqual(disk_utils_calls_expected, disk_utils_mock.mock_calls) self.assertEqual('0x12345678', uuid_dict_returned['disk identifier']) @mock.patch.object(common_utils, 'execute', autospec=True) def test_verify_iscsi_connection_raises(self, mock_exec): iqn = 'iqn.xyz' mock_exec.return_value = ['iqn.abc', ''] self.assertRaises(exception.InstanceDeployFailure, utils.verify_iscsi_connection, iqn) self.assertEqual(3, mock_exec.call_count) 
@mock.patch.object(os.path, 'exists', autospec=True) def test_check_file_system_for_iscsi_device_raises(self, mock_os): iqn = 'iqn.xyz' ip = "127.0.0.1" port = "22" mock_os.return_value = False self.assertRaises(exception.InstanceDeployFailure, utils.check_file_system_for_iscsi_device, ip, port, iqn) self.assertEqual(3, mock_os.call_count) @mock.patch.object(os.path, 'exists', autospec=True) def test_check_file_system_for_iscsi_device(self, mock_os): iqn = 'iqn.xyz' ip = "127.0.0.1" port = "22" check_dir = "/dev/disk/by-path/ip-%s:%s-iscsi-%s-lun-1" % (ip, port, iqn) mock_os.return_value = True utils.check_file_system_for_iscsi_device(ip, port, iqn) mock_os.assert_called_once_with(check_dir) @mock.patch.object(common_utils, 'execute', autospec=True) def test_verify_iscsi_connection(self, mock_exec): iqn = 'iqn.xyz' mock_exec.return_value = ['iqn.xyz', ''] utils.verify_iscsi_connection(iqn) mock_exec.assert_called_once_with( 'iscsiadm', '-m', 'node', '-S', run_as_root=True, check_exit_code=[0]) @mock.patch.object(common_utils, 'execute', autospec=True) def test_force_iscsi_lun_update(self, mock_exec): iqn = 'iqn.xyz' utils.force_iscsi_lun_update(iqn) mock_exec.assert_called_once_with( 'iscsiadm', '-m', 'node', '-T', iqn, '-R', run_as_root=True, check_exit_code=[0]) @mock.patch.object(common_utils, 'execute', autospec=True) @mock.patch.object(utils, 'verify_iscsi_connection', autospec=True) @mock.patch.object(utils, 'force_iscsi_lun_update', autospec=True) @mock.patch.object(utils, 'check_file_system_for_iscsi_device', autospec=True) def test_login_iscsi_calls_verify_and_update(self, mock_check_dev, mock_update, mock_verify, mock_exec): address = '127.0.0.1' port = 3306 iqn = 'iqn.xyz' mock_exec.return_value = ['iqn.xyz', ''] utils.login_iscsi(address, port, iqn) mock_exec.assert_called_once_with( 'iscsiadm', '-m', 'node', '-p', '%s:%s' % (address, port), '-T', iqn, '--login', run_as_root=True, check_exit_code=[0], attempts=5, delay_on_retry=True) mock_verify.assert_called_once_with(iqn) mock_update.assert_called_once_with(iqn) mock_check_dev.assert_called_once_with(address, port, iqn) @mock.patch.object(disk_utils, 'is_block_device', lambda d: True) def test_always_logout_and_delete_iscsi(self): """Check if logout_iscsi() and delete_iscsi() are called. Make sure that logout_iscsi() and delete_iscsi() are called once login_iscsi() is invoked. 
""" address = '127.0.0.1' port = 3306 iqn = 'iqn.xyz' lun = 1 image_path = '/tmp/xyz/image' root_mb = 128 swap_mb = 64 ephemeral_mb = 256 ephemeral_format = 'exttest' node_uuid = "12345678-1234-1234-1234-1234567890abcxyz" dev = '/dev/fake' class TestException(Exception): pass utils_name_list = ['get_dev', 'discovery', 'login_iscsi', 'logout_iscsi', 'delete_iscsi'] disk_utils_name_list = ['get_image_mb', 'work_on_disk'] utils_mock = self._mock_calls(utils_name_list, utils) utils_mock.get_dev.return_value = dev disk_utils_mock = self._mock_calls(disk_utils_name_list, disk_utils) disk_utils_mock.get_image_mb.return_value = 1 disk_utils_mock.work_on_disk.side_effect = TestException utils_calls_expected = [mock.call.get_dev(address, port, iqn, lun), mock.call.discovery(address, port), mock.call.login_iscsi(address, port, iqn), mock.call.logout_iscsi(address, port, iqn), mock.call.delete_iscsi(address, port, iqn)] disk_utils_calls_expected = [mock.call.get_image_mb(image_path), mock.call.work_on_disk( dev, root_mb, swap_mb, ephemeral_mb, ephemeral_format, image_path, node_uuid, configdrive=None, preserve_ephemeral=False, boot_option="netboot", boot_mode="bios", disk_label=None)] self.assertRaises(TestException, utils.deploy_partition_image, address, port, iqn, lun, image_path, root_mb, swap_mb, ephemeral_mb, ephemeral_format, node_uuid) self.assertEqual(utils_calls_expected, utils_mock.mock_calls) self.assertEqual(disk_utils_calls_expected, disk_utils_mock.mock_calls) class SwitchPxeConfigTestCase(tests_base.TestCase): def _create_config(self, ipxe=False, boot_mode=None, boot_loader='elilo'): (fd, fname) = tempfile.mkstemp() if boot_mode == 'uefi' and not ipxe: if boot_loader == 'grub': pxe_cfg = _UEFI_PXECONF_DEPLOY_GRUB else: pxe_cfg = _UEFI_PXECONF_DEPLOY else: pxe_cfg = _IPXECONF_DEPLOY if ipxe else _PXECONF_DEPLOY os.write(fd, pxe_cfg) os.close(fd) self.addCleanup(os.unlink, fname) return fname def test_switch_pxe_config_partition_image(self): boot_mode = 'bios' fname = self._create_config() utils.switch_pxe_config(fname, '12345678-1234-1234-1234-1234567890abcdef', boot_mode, False) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_PXECONF_BOOT_PARTITION, pxeconf) def test_switch_pxe_config_whole_disk_image(self): boot_mode = 'bios' fname = self._create_config() utils.switch_pxe_config(fname, '0x12345678', boot_mode, True) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_PXECONF_BOOT_WHOLE_DISK, pxeconf) def test_switch_pxe_config_trusted_boot(self): boot_mode = 'bios' fname = self._create_config() utils.switch_pxe_config(fname, '12345678-1234-1234-1234-1234567890abcdef', boot_mode, False, True) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_PXECONF_TRUSTED_BOOT, pxeconf) def test_switch_ipxe_config_partition_image(self): boot_mode = 'bios' cfg.CONF.set_override('ipxe_enabled', True, 'pxe') fname = self._create_config(ipxe=True) utils.switch_pxe_config(fname, '12345678-1234-1234-1234-1234567890abcdef', boot_mode, False) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_IPXECONF_BOOT_PARTITION, pxeconf) def test_switch_ipxe_config_whole_disk_image(self): boot_mode = 'bios' cfg.CONF.set_override('ipxe_enabled', True, 'pxe') fname = self._create_config(ipxe=True) utils.switch_pxe_config(fname, '0x12345678', boot_mode, True) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_IPXECONF_BOOT_WHOLE_DISK, pxeconf) def test_switch_uefi_elilo_pxe_config_partition_image(self): boot_mode = 'uefi' fname = 
self._create_config(boot_mode=boot_mode) utils.switch_pxe_config(fname, '12345678-1234-1234-1234-1234567890abcdef', boot_mode, False) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_UEFI_PXECONF_BOOT_PARTITION, pxeconf) def test_switch_uefi_elilo_config_whole_disk_image(self): boot_mode = 'uefi' fname = self._create_config(boot_mode=boot_mode) utils.switch_pxe_config(fname, '0x12345678', boot_mode, True) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_UEFI_PXECONF_BOOT_WHOLE_DISK, pxeconf) def test_switch_uefi_grub_pxe_config_partition_image(self): boot_mode = 'uefi' fname = self._create_config(boot_mode=boot_mode, boot_loader='grub') utils.switch_pxe_config(fname, '12345678-1234-1234-1234-1234567890abcdef', boot_mode, False) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_UEFI_PXECONF_BOOT_PARTITION_GRUB, pxeconf) def test_switch_uefi_grub_config_whole_disk_image(self): boot_mode = 'uefi' fname = self._create_config(boot_mode=boot_mode, boot_loader='grub') utils.switch_pxe_config(fname, '0x12345678', boot_mode, True) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_UEFI_PXECONF_BOOT_WHOLE_DISK_GRUB, pxeconf) def test_switch_uefi_ipxe_config_partition_image(self): boot_mode = 'uefi' cfg.CONF.set_override('ipxe_enabled', True, 'pxe') fname = self._create_config(boot_mode=boot_mode, ipxe=True) utils.switch_pxe_config(fname, '12345678-1234-1234-1234-1234567890abcdef', boot_mode, False) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_IPXECONF_BOOT_PARTITION, pxeconf) def test_switch_uefi_ipxe_config_whole_disk_image(self): boot_mode = 'uefi' cfg.CONF.set_override('ipxe_enabled', True, 'pxe') fname = self._create_config(boot_mode=boot_mode, ipxe=True) utils.switch_pxe_config(fname, '0x12345678', boot_mode, True) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_IPXECONF_BOOT_WHOLE_DISK, pxeconf) @mock.patch('time.sleep', lambda sec: None) class OtherFunctionTestCase(db_base.DbTestCase): def setUp(self): super(OtherFunctionTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="fake_pxe") self.node = obj_utils.create_test_node(self.context, driver='fake_pxe') def test_get_dev(self): expected = '/dev/disk/by-path/ip-1.2.3.4:5678-iscsi-iqn.fake-lun-9' actual = utils.get_dev('1.2.3.4', 5678, 'iqn.fake', 9) self.assertEqual(expected, actual) def test_parse_root_device_hints(self): self.node.properties['root_device'] = { 'wwn': 123456, 'model': 'foo-model', 'size': 123, 'serial': 'foo-serial', 'vendor': 'foo-vendor', 'name': '/dev/sda', 'wwn_with_extension': 123456111, 'wwn_vendor_extension': 111, } expected = ('model=foo-model,name=/dev/sda,serial=foo-serial,size=123,' 'vendor=foo-vendor,wwn=123456,wwn_vendor_extension=111,' 'wwn_with_extension=123456111') result = utils.parse_root_device_hints(self.node) self.assertEqual(expected, result) def test_parse_root_device_hints_string_space(self): self.node.properties['root_device'] = {'model': 'fake model'} expected = 'model=fake%20model' result = utils.parse_root_device_hints(self.node) self.assertEqual(expected, result) def test_parse_root_device_hints_no_hints(self): self.node.properties = {} result = utils.parse_root_device_hints(self.node) self.assertIsNone(result) def test_parse_root_device_hints_invalid_hints(self): self.node.properties['root_device'] = {'vehicle': 'Owlship'} self.assertRaises(exception.InvalidParameterValue, utils.parse_root_device_hints, self.node) def test_parse_root_device_hints_invalid_size(self): 
        self.node.properties['root_device'] = {'size': 'not-int'}
        self.assertRaises(exception.InvalidParameterValue,
                          utils.parse_root_device_hints, self.node)

    @mock.patch.object(utils, 'LOG', autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action', autospec=True)
    @mock.patch.object(task_manager.TaskManager, 'process_event',
                       autospec=True)
    def _test_set_failed_state(self, mock_event, mock_power, mock_log,
                               event_value=None, power_value=None,
                               log_calls=None):
        err_msg = 'some failure'
        mock_event.side_effect = event_value
        mock_power.side_effect = power_value
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            utils.set_failed_state(task, err_msg)
            mock_event.assert_called_once_with(task, 'fail')
            mock_power.assert_called_once_with(task, states.POWER_OFF)
            self.assertEqual(err_msg, task.node.last_error)
            if log_calls:
                mock_log.exception.assert_has_calls(log_calls)
            else:
                self.assertFalse(mock_log.called)

    def test_set_failed_state(self):
        exc_state = exception.InvalidState('invalid state')
        exc_param = exception.InvalidParameterValue('invalid parameter')
        mock_call = mock.call(mock.ANY)
        self._test_set_failed_state()
        calls = [mock_call]
        self._test_set_failed_state(
            event_value=iter([exc_state] * len(calls)), log_calls=calls)
        calls = [mock_call]
        self._test_set_failed_state(
            power_value=iter([exc_param] * len(calls)), log_calls=calls)
        calls = [mock_call, mock_call]
        self._test_set_failed_state(
            event_value=iter([exc_state] * len(calls)),
            power_value=iter([exc_param] * len(calls)), log_calls=calls)

    def test_get_boot_option(self):
        self.node.instance_info = {'capabilities': '{"boot_option": "local"}'}
        result = utils.get_boot_option(self.node)
        self.assertEqual("local", result)

    def test_get_boot_option_default_value(self):
        self.node.instance_info = {}
        result = utils.get_boot_option(self.node)
        self.assertEqual("netboot", result)

    @mock.patch.object(image_cache, 'clean_up_caches', autospec=True)
    def test_fetch_images(self, mock_clean_up_caches):
        mock_cache = mock.MagicMock(
            spec_set=['fetch_image', 'master_dir'], master_dir='master_dir')
        utils.fetch_images(None, mock_cache, [('uuid', 'path')])
        mock_clean_up_caches.assert_called_once_with(None, 'master_dir',
                                                     [('uuid', 'path')])
        mock_cache.fetch_image.assert_called_once_with('uuid', 'path',
                                                       ctx=None,
                                                       force_raw=True)

    @mock.patch.object(image_cache, 'clean_up_caches', autospec=True)
    def test_fetch_images_fail(self, mock_clean_up_caches):
        exc = exception.InsufficientDiskSpace(path='a', required=2, actual=1)
        mock_cache = mock.MagicMock(
            spec_set=['master_dir'], master_dir='master_dir')
        mock_clean_up_caches.side_effect = iter([exc])
        self.assertRaises(exception.InstanceDeployFailure,
                          utils.fetch_images,
                          None,
                          mock_cache,
                          [('uuid', 'path')])
        mock_clean_up_caches.assert_called_once_with(None, 'master_dir',
                                                     [('uuid', 'path')])


class VirtualMediaDeployUtilsTestCase(db_base.DbTestCase):

    def setUp(self):
        super(VirtualMediaDeployUtilsTestCase, self).setUp()
        mgr_utils.mock_the_extension_manager(driver="iscsi_ilo")
        info_dict = db_utils.get_test_ilo_info()
        self.node = obj_utils.create_test_node(
            self.context, driver='iscsi_ilo', driver_info=info_dict)

    def test_get_single_nic_with_vif_port_id(self):
        obj_utils.create_test_port(
            self.context, node_id=self.node.id, address='aa:bb:cc:dd:ee:ff',
            uuid=uuidutils.generate_uuid(),
            extra={'vif_port_id': 'test-vif-A'}, driver='iscsi_ilo')
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            address = utils.get_single_nic_with_vif_port_id(task)
            self.assertEqual('aa:bb:cc:dd:ee:ff', address)
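
# Editor's note: the parse_root_device_hints() tests above pin down a small
# serialization contract: hints are sorted by key, rendered as
# comma-separated key=value pairs, and percent-encoded, which is why
# 'fake model' round-trips as 'model=fake%20model'. The sketch below
# illustrates that contract only; it is not ironic's implementation, and
# the helper name is made up for this note.
from urllib import parse as _hint_parse  # six.moves.urllib.parse on py2


def _serialize_root_device_hints_sketch(hints):
    """Render {'model': 'fake model'} as 'model=fake%20model'."""
    return ','.join('%s=%s' % (key, _hint_parse.quote(str(hints[key])))
                    for key in sorted(hints))


assert _serialize_root_device_hints_sketch(
    {'model': 'fake model'}) == 'model=fake%20model'
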
class ParseInstanceInfoCapabilitiesTestCase(tests_base.TestCase):

    def setUp(self):
        super(ParseInstanceInfoCapabilitiesTestCase, self).setUp()
        self.node = obj_utils.get_test_node(self.context, driver='fake')

    def test_parse_instance_info_capabilities_string(self):
        self.node.instance_info = {'capabilities': '{"cat": "meow"}'}
        expected_result = {"cat": "meow"}
        result = utils.parse_instance_info_capabilities(self.node)
        self.assertEqual(expected_result, result)

    def test_parse_instance_info_capabilities(self):
        self.node.instance_info = {'capabilities': {"dog": "wuff"}}
        expected_result = {"dog": "wuff"}
        result = utils.parse_instance_info_capabilities(self.node)
        self.assertEqual(expected_result, result)

    def test_parse_instance_info_invalid_type(self):
        self.node.instance_info = {'capabilities': 'not-a-dict'}
        self.assertRaises(exception.InvalidParameterValue,
                          utils.parse_instance_info_capabilities, self.node)

    def test_is_secure_boot_requested_true(self):
        self.node.instance_info = {'capabilities': {"secure_boot": "tRue"}}
        self.assertTrue(utils.is_secure_boot_requested(self.node))

    def test_is_secure_boot_requested_false(self):
        self.node.instance_info = {'capabilities': {"secure_boot": "false"}}
        self.assertFalse(utils.is_secure_boot_requested(self.node))

    def test_is_secure_boot_requested_invalid(self):
        self.node.instance_info = {'capabilities': {"secure_boot": "invalid"}}
        self.assertFalse(utils.is_secure_boot_requested(self.node))

    def test_is_trusted_boot_requested_true(self):
        self.node.instance_info = {'capabilities': {"trusted_boot": "true"}}
        self.assertTrue(utils.is_trusted_boot_requested(self.node))

    def test_is_trusted_boot_requested_false(self):
        self.node.instance_info = {'capabilities': {"trusted_boot": "false"}}
        self.assertFalse(utils.is_trusted_boot_requested(self.node))

    def test_is_trusted_boot_requested_invalid(self):
        self.node.instance_info = {'capabilities':
                                   {"trusted_boot": "invalid"}}
        self.assertFalse(utils.is_trusted_boot_requested(self.node))

    def test_get_boot_mode_for_deploy_using_capabilities(self):
        properties = {'capabilities': 'boot_mode:uefi,cap2:value2'}
        self.node.properties = properties
        result = utils.get_boot_mode_for_deploy(self.node)
        self.assertEqual('uefi', result)

    def test_get_boot_mode_for_deploy_using_instance_info_cap(self):
        instance_info = {'capabilities': {'secure_boot': 'True'}}
        self.node.instance_info = instance_info
        result = utils.get_boot_mode_for_deploy(self.node)
        self.assertEqual('uefi', result)

        instance_info = {'capabilities': {'trusted_boot': 'True'}}
        self.node.instance_info = instance_info
        result = utils.get_boot_mode_for_deploy(self.node)
        self.assertEqual('bios', result)

        # both capabilities requested together: secure_boot takes
        # precedence, so the boot mode is uefi
        instance_info = {'capabilities': {'trusted_boot': 'True',
                                          'secure_boot': 'True'}}
        self.node.instance_info = instance_info
        result = utils.get_boot_mode_for_deploy(self.node)
        self.assertEqual('uefi', result)

    def test_get_boot_mode_for_deploy_using_instance_info(self):
        instance_info = {'deploy_boot_mode': 'bios'}
        self.node.instance_info = instance_info
        result = utils.get_boot_mode_for_deploy(self.node)
        self.assertEqual('bios', result)

    def test_validate_boot_mode_capability(self):
        prop = {'capabilities': 'boot_mode:uefi,cap2:value2'}
        self.node.properties = prop
        result = utils.validate_capabilities(self.node)
        self.assertIsNone(result)

    def test_validate_boot_mode_capability_with_exc(self):
        prop = {'capabilities': 'boot_mode:UEFI,cap2:value2'}
        self.node.properties = prop
        self.assertRaises(exception.InvalidParameterValue,
                          utils.validate_capabilities, self.node)

    def
test_validate_boot_mode_capability_instance_info(self): inst_info = {'capabilities': {"boot_mode": "uefi", "cap2": "value2"}} self.node.instance_info = inst_info result = utils.validate_capabilities(self.node) self.assertIsNone(result) def test_validate_boot_mode_capability_instance_info_with_exc(self): inst_info = {'capabilities': {"boot_mode": "UEFI", "cap2": "value2"}} self.node.instance_info = inst_info self.assertRaises(exception.InvalidParameterValue, utils.validate_capabilities, self.node) def test_validate_trusted_boot_capability(self): properties = {'capabilities': 'trusted_boot:value'} self.node.properties = properties self.assertRaises(exception.InvalidParameterValue, utils.validate_capabilities, self.node) def test_all_supported_capabilities(self): self.assertEqual(('local', 'netboot'), utils.SUPPORTED_CAPABILITIES['boot_option']) self.assertEqual(('bios', 'uefi'), utils.SUPPORTED_CAPABILITIES['boot_mode']) self.assertEqual(('true', 'false'), utils.SUPPORTED_CAPABILITIES['secure_boot']) self.assertEqual(('true', 'false'), utils.SUPPORTED_CAPABILITIES['trusted_boot']) def test_get_disk_label(self): inst_info = {'capabilities': {'disk_label': 'gpt', 'foo': 'bar'}} self.node.instance_info = inst_info result = utils.get_disk_label(self.node) self.assertEqual('gpt', result) class TrySetBootDeviceTestCase(db_base.DbTestCase): def setUp(self): super(TrySetBootDeviceTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="fake") self.node = obj_utils.create_test_node(self.context, driver="fake") @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True) def test_try_set_boot_device_okay(self, node_set_boot_device_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: utils.try_set_boot_device(task, boot_devices.DISK, persistent=True) node_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK, persistent=True) @mock.patch.object(utils, 'LOG', autospec=True) @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True) def test_try_set_boot_device_ipmifailure_uefi( self, node_set_boot_device_mock, log_mock): self.node.properties = {'capabilities': 'boot_mode:uefi'} self.node.save() node_set_boot_device_mock.side_effect = iter( [exception.IPMIFailure(cmd='a')]) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: utils.try_set_boot_device(task, boot_devices.DISK, persistent=True) node_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK, persistent=True) log_mock.warning.assert_called_once_with(mock.ANY) @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True) def test_try_set_boot_device_ipmifailure_bios( self, node_set_boot_device_mock): node_set_boot_device_mock.side_effect = iter( [exception.IPMIFailure(cmd='a')]) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.IPMIFailure, utils.try_set_boot_device, task, boot_devices.DISK, persistent=True) node_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK, persistent=True) @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True) def test_try_set_boot_device_some_other_exception( self, node_set_boot_device_mock): exc = exception.IloOperationError(operation="qwe", error="error") node_set_boot_device_mock.side_effect = iter([exc]) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.IloOperationError, utils.try_set_boot_device, task, boot_devices.DISK, 
persistent=True) node_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK, persistent=True) class AgentMethodsTestCase(db_base.DbTestCase): def setUp(self): super(AgentMethodsTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver='fake_agent') self.clean_steps = { 'deploy': [ {'interface': 'deploy', 'step': 'erase_devices', 'priority': 20}, {'interface': 'deploy', 'step': 'update_firmware', 'priority': 30} ], 'raid': [ {'interface': 'raid', 'step': 'create_configuration', 'priority': 10} ] } n = {'driver': 'fake_agent', 'driver_internal_info': { 'agent_cached_clean_steps': self.clean_steps}} self.node = obj_utils.create_test_node(self.context, **n) self.ports = [obj_utils.create_test_port(self.context, node_id=self.node.id)] def test_agent_get_clean_steps(self): with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: response = utils.agent_get_clean_steps(task) # Since steps are returned in dicts, they have non-deterministic # ordering self.assertThat(response, matchers.HasLength(3)) self.assertIn(self.clean_steps['deploy'][0], response) self.assertIn(self.clean_steps['deploy'][1], response) self.assertIn(self.clean_steps['raid'][0], response) def test_get_clean_steps_custom_interface(self): with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: response = utils.agent_get_clean_steps(task, interface='raid') self.assertThat(response, matchers.HasLength(1)) self.assertEqual(self.clean_steps['raid'], response) def test_get_clean_steps_override_priorities(self): with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: new_priorities = {'create_configuration': 42} response = utils.agent_get_clean_steps( task, interface='raid', override_priorities=new_priorities) self.assertEqual(42, response[0]['priority']) def test_get_clean_steps_override_priorities_none(self): with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: # this is simulating the default value of a configuration option new_priorities = {'create_configuration': None} response = utils.agent_get_clean_steps( task, interface='raid', override_priorities=new_priorities) self.assertEqual(10, response[0]['priority']) def test_get_clean_steps_missing_steps(self): info = self.node.driver_internal_info del info['agent_cached_clean_steps'] self.node.driver_internal_info = info self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.NodeCleaningFailure, utils.agent_get_clean_steps, task) @mock.patch('ironic.objects.Port.list_by_node_id', spec_set=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'execute_clean_step', autospec=True) def test_execute_clean_step(self, client_mock, list_ports_mock): client_mock.return_value = { 'command_status': 'SUCCEEDED'} list_ports_mock.return_value = self.ports with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: response = utils.agent_execute_clean_step( task, self.clean_steps['deploy'][0]) self.assertEqual(states.CLEANWAIT, response) @mock.patch('ironic.objects.Port.list_by_node_id', spec_set=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'execute_clean_step', autospec=True) def test_execute_clean_step_running(self, client_mock, list_ports_mock): client_mock.return_value = { 'command_status': 'RUNNING'} list_ports_mock.return_value = self.ports with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: response = 
utils.agent_execute_clean_step( task, self.clean_steps['deploy'][0]) self.assertEqual(states.CLEANWAIT, response) @mock.patch('ironic.objects.Port.list_by_node_id', spec_set=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'execute_clean_step', autospec=True) def test_execute_clean_step_version_mismatch( self, client_mock, list_ports_mock): client_mock.return_value = { 'command_status': 'RUNNING'} list_ports_mock.return_value = self.ports with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: response = utils.agent_execute_clean_step( task, self.clean_steps['deploy'][0]) self.assertEqual(states.CLEANWAIT, response) def test_agent_add_clean_params(self): cfg.CONF.deploy.erase_devices_iterations = 2 with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: utils.agent_add_clean_params(task) self.assertEqual(task.node.driver_internal_info.get( 'agent_erase_devices_iterations'), 2) @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.delete_cleaning_ports', autospec=True) @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.create_cleaning_ports', autospec=True) def _test_prepare_inband_cleaning_ports( self, create_mock, delete_mock, return_vif_port_id=True): if return_vif_port_id: create_mock.return_value = {self.ports[0].uuid: 'vif-port-id'} else: create_mock.return_value = {} with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: utils.prepare_cleaning_ports(task) create_mock.assert_called_once_with(mock.ANY, task) delete_mock.assert_called_once_with(mock.ANY, task) self.ports[0].refresh() self.assertEqual('vif-port-id', self.ports[0].extra['vif_port_id']) def test_prepare_inband_cleaning_ports(self): self._test_prepare_inband_cleaning_ports() def test_prepare_inband_cleaning_ports_no_vif_port_id(self): self.assertRaises( exception.NodeCleaningFailure, self._test_prepare_inband_cleaning_ports, return_vif_port_id=False) @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.delete_cleaning_ports', autospec=True) def test_tear_down_inband_cleaning_ports(self, neutron_mock): extra_dict = self.ports[0].extra extra_dict['vif_port_id'] = 'vif-port-id' self.ports[0].extra = extra_dict self.ports[0].save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: utils.tear_down_cleaning_ports(task) neutron_mock.assert_called_once_with(mock.ANY, task) self.ports[0].refresh() self.assertNotIn('vif_port_id', self.ports[0].extra) @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk', autospec=True) @mock.patch('ironic.conductor.utils.node_power_action', autospec=True) @mock.patch.object(iscsi_deploy, 'build_deploy_ramdisk_options', autospec=True) @mock.patch.object(utils, 'build_agent_options', autospec=True) @mock.patch.object(utils, 'prepare_cleaning_ports', autospec=True) def _test_prepare_inband_cleaning( self, prepare_cleaning_ports_mock, iscsi_build_options_mock, build_options_mock, power_mock, prepare_ramdisk_mock, manage_boot=True): build_options_mock.return_value = {'a': 'b'} iscsi_build_options_mock.return_value = {'c': 'd'} with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertEqual( states.CLEANWAIT, utils.prepare_inband_cleaning(task, manage_boot=manage_boot)) prepare_cleaning_ports_mock.assert_called_once_with(task) power_mock.assert_called_once_with(task, states.REBOOT) self.assertEqual(task.node.driver_internal_info.get( 'agent_erase_devices_iterations'), 1) if manage_boot: prepare_ramdisk_mock.assert_called_once_with( mock.ANY, mock.ANY, {'a': 'b', 'c': 'd'}) 
build_options_mock.assert_called_once_with(task.node) else: self.assertFalse(prepare_ramdisk_mock.called) self.assertFalse(build_options_mock.called) def test_prepare_inband_cleaning(self): self._test_prepare_inband_cleaning() def test_prepare_inband_cleaning_manage_boot_false(self): self._test_prepare_inband_cleaning(manage_boot=False) @mock.patch.object(pxe.PXEBoot, 'clean_up_ramdisk', autospec=True) @mock.patch.object(utils, 'tear_down_cleaning_ports', autospec=True) @mock.patch('ironic.conductor.utils.node_power_action', autospec=True) def _test_tear_down_inband_cleaning( self, power_mock, tear_down_ports_mock, clean_up_ramdisk_mock, manage_boot=True): with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: utils.tear_down_inband_cleaning(task, manage_boot=manage_boot) power_mock.assert_called_once_with(task, states.POWER_OFF) tear_down_ports_mock.assert_called_once_with(task) if manage_boot: clean_up_ramdisk_mock.assert_called_once_with( task.driver.boot, task) else: self.assertFalse(clean_up_ramdisk_mock.called) def test_tear_down_inband_cleaning(self): self._test_tear_down_inband_cleaning(manage_boot=True) def test_tear_down_inband_cleaning_manage_boot_false(self): self._test_tear_down_inband_cleaning(manage_boot=False) def test_build_agent_options_conf(self): self.config(api_url='api-url', group='conductor') options = utils.build_agent_options(self.node) self.assertEqual('api-url', options['ipa-api-url']) self.assertEqual('fake_agent', options['ipa-driver-name']) self.assertEqual(0, options['coreos.configdrive']) @mock.patch.object(keystone, 'get_service_url', autospec=True) def test_build_agent_options_keystone(self, get_url_mock): self.config(api_url=None, group='conductor') get_url_mock.return_value = 'api-url' options = utils.build_agent_options(self.node) self.assertEqual('api-url', options['ipa-api-url']) self.assertEqual('fake_agent', options['ipa-driver-name']) self.assertEqual(0, options['coreos.configdrive']) def test_build_agent_options_root_device_hints(self): self.config(api_url='api-url', group='conductor') self.node.properties['root_device'] = {'model': 'fake_model'} options = utils.build_agent_options(self.node) self.assertEqual('api-url', options['ipa-api-url']) self.assertEqual('fake_agent', options['ipa-driver-name']) self.assertEqual('model=fake_model', options['root_device']) @mock.patch.object(disk_utils, 'is_block_device', autospec=True) @mock.patch.object(utils, 'login_iscsi', lambda *_: None) @mock.patch.object(utils, 'discovery', lambda *_: None) @mock.patch.object(utils, 'logout_iscsi', lambda *_: None) @mock.patch.object(utils, 'delete_iscsi', lambda *_: None) @mock.patch.object(utils, 'get_dev', lambda *_: '/dev/fake') class ISCSISetupAndHandleErrorsTestCase(tests_base.TestCase): def test_no_parent_device(self, mock_ibd): address = '127.0.0.1' port = 3306 iqn = 'iqn.xyz' lun = 1 mock_ibd.return_value = False expected_dev = '/dev/fake' with testtools.ExpectedException(exception.InstanceDeployFailure): with utils._iscsi_setup_and_handle_errors( address, port, iqn, lun) as dev: self.assertEqual(expected_dev, dev) mock_ibd.assert_called_once_with(expected_dev) def test_parent_device_yield(self, mock_ibd): address = '127.0.0.1' port = 3306 iqn = 'iqn.xyz' lun = 1 expected_dev = '/dev/fake' mock_ibd.return_value = True with utils._iscsi_setup_and_handle_errors( address, port, iqn, lun) as dev: self.assertEqual(expected_dev, dev) mock_ibd.assert_called_once_with(expected_dev) class ValidateImagePropertiesTestCase(db_base.DbTestCase): 
@mock.patch.object(image_service, 'get_image_service', autospec=True) def test_validate_image_properties_glance_image(self, image_service_mock): node = obj_utils.create_test_node( self.context, driver='fake_pxe', instance_info=INST_INFO_DICT, driver_info=DRV_INFO_DICT, driver_internal_info=DRV_INTERNAL_INFO_DICT, ) inst_info = utils.get_image_instance_info(node) image_service_mock.return_value.show.return_value = { 'properties': {'kernel_id': '1111', 'ramdisk_id': '2222'}, } utils.validate_image_properties(self.context, inst_info, ['kernel_id', 'ramdisk_id']) image_service_mock.assert_called_once_with( node.instance_info['image_source'], context=self.context ) @mock.patch.object(image_service, 'get_image_service', autospec=True) def test_validate_image_properties_glance_image_missing_prop( self, image_service_mock): node = obj_utils.create_test_node( self.context, driver='fake_pxe', instance_info=INST_INFO_DICT, driver_info=DRV_INFO_DICT, driver_internal_info=DRV_INTERNAL_INFO_DICT, ) inst_info = utils.get_image_instance_info(node) image_service_mock.return_value.show.return_value = { 'properties': {'kernel_id': '1111'}, } self.assertRaises(exception.MissingParameterValue, utils.validate_image_properties, self.context, inst_info, ['kernel_id', 'ramdisk_id']) image_service_mock.assert_called_once_with( node.instance_info['image_source'], context=self.context ) @mock.patch.object(image_service, 'get_image_service', autospec=True) def test_validate_image_properties_glance_image_not_authorized( self, image_service_mock): inst_info = {'image_source': 'uuid'} show_mock = image_service_mock.return_value.show show_mock.side_effect = exception.ImageNotAuthorized(image_id='uuid') self.assertRaises(exception.InvalidParameterValue, utils.validate_image_properties, self.context, inst_info, []) @mock.patch.object(image_service, 'get_image_service', autospec=True) def test_validate_image_properties_glance_image_not_found( self, image_service_mock): inst_info = {'image_source': 'uuid'} show_mock = image_service_mock.return_value.show show_mock.side_effect = exception.ImageNotFound(image_id='uuid') self.assertRaises(exception.InvalidParameterValue, utils.validate_image_properties, self.context, inst_info, []) def test_validate_image_properties_invalid_image_href(self): inst_info = {'image_source': 'emule://uuid'} self.assertRaises(exception.InvalidParameterValue, utils.validate_image_properties, self.context, inst_info, []) @mock.patch.object(image_service.HttpImageService, 'show', autospec=True) def test_validate_image_properties_nonglance_image( self, image_service_show_mock): instance_info = { 'image_source': 'http://ubuntu', 'kernel': 'kernel_uuid', 'ramdisk': 'file://initrd', 'root_gb': 100, } image_service_show_mock.return_value = {'size': 1, 'properties': {}} node = obj_utils.create_test_node( self.context, driver='fake_pxe', instance_info=instance_info, driver_info=DRV_INFO_DICT, driver_internal_info=DRV_INTERNAL_INFO_DICT, ) inst_info = utils.get_image_instance_info(node) utils.validate_image_properties(self.context, inst_info, ['kernel', 'ramdisk']) image_service_show_mock.assert_called_once_with( mock.ANY, instance_info['image_source']) @mock.patch.object(image_service.HttpImageService, 'show', autospec=True) def test_validate_image_properties_nonglance_image_validation_fail( self, img_service_show_mock): instance_info = { 'image_source': 'http://ubuntu', 'kernel': 'kernel_uuid', 'ramdisk': 'file://initrd', 'root_gb': 100, } img_service_show_mock.side_effect = iter( 
            [exception.ImageRefValidationFailed(
                image_href='http://ubuntu', reason='HTTPError')])
        node = obj_utils.create_test_node(
            self.context, driver='fake_pxe',
            instance_info=instance_info,
            driver_info=DRV_INFO_DICT,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )
        inst_info = utils.get_image_instance_info(node)
        self.assertRaises(exception.InvalidParameterValue,
                          utils.validate_image_properties,
                          self.context, inst_info, ['kernel', 'ramdisk'])


class ValidateParametersTestCase(db_base.DbTestCase):

    def _test__get_img_instance_info(
            self, instance_info=INST_INFO_DICT,
            driver_info=DRV_INFO_DICT,
            driver_internal_info=DRV_INTERNAL_INFO_DICT):
        # make sure we get back the expected things
        node = obj_utils.create_test_node(
            self.context, driver='fake_pxe',
            instance_info=instance_info,
            driver_info=driver_info,
            # honour the caller's driver_internal_info so that per-test
            # overrides (e.g. is_whole_disk_image) take effect
            driver_internal_info=driver_internal_info,
        )
        info = utils.get_image_instance_info(node)
        self.assertIsNotNone(info.get('image_source'))
        return info

    def test__get_img_instance_info_good(self):
        self._test__get_img_instance_info()

    def test__get_img_instance_info_good_non_glance_image(self):
        instance_info = INST_INFO_DICT.copy()
        instance_info['image_source'] = 'http://image'
        instance_info['kernel'] = 'http://kernel'
        instance_info['ramdisk'] = 'http://ramdisk'
        info = self._test__get_img_instance_info(instance_info=instance_info)
        self.assertIsNotNone(info.get('ramdisk'))
        self.assertIsNotNone(info.get('kernel'))

    def test__get_img_instance_info_non_glance_image_missing_kernel(self):
        instance_info = INST_INFO_DICT.copy()
        instance_info['image_source'] = 'http://image'
        instance_info['ramdisk'] = 'http://ramdisk'
        self.assertRaises(
            exception.MissingParameterValue,
            self._test__get_img_instance_info,
            instance_info=instance_info)

    def test__get_img_instance_info_non_glance_image_missing_ramdisk(self):
        instance_info = INST_INFO_DICT.copy()
        instance_info['image_source'] = 'http://image'
        instance_info['kernel'] = 'http://kernel'
        self.assertRaises(
            exception.MissingParameterValue,
            self._test__get_img_instance_info,
            instance_info=instance_info)

    def test__get_img_instance_info_missing_image_source(self):
        instance_info = INST_INFO_DICT.copy()
        del instance_info['image_source']
        self.assertRaises(
            exception.MissingParameterValue,
            self._test__get_img_instance_info,
            instance_info=instance_info)

    def test__get_img_instance_info_whole_disk_image(self):
        driver_internal_info = DRV_INTERNAL_INFO_DICT.copy()
        driver_internal_info['is_whole_disk_image'] = True
        self._test__get_img_instance_info(
            driver_internal_info=driver_internal_info)

ironic-5.1.0/ironic/tests/unit/drivers/modules/test_agent.py

# Copyright 2014 Rackspace, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
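
# Editor's note: every test in this module leans on stacked
# mock.patch.object(..., autospec=True) decorators, and two of their rules
# explain the shape of the asserts below. Decorators apply bottom-up, so
# the patch nearest the function supplies the first mock argument; and
# autospec patches the method on the class, so the instance arrives as an
# explicit first call argument, which is why asserts read like
# pxe_boot_validate_mock.assert_called_once_with(task.driver.boot, task).
# The Calculator class in this standalone sketch is hypothetical and only
# illustrates the pattern.
import mock as _mock_sketch  # unittest.mock behaves the same on Python 3


class _CalculatorSketch(object):
    def add(self, a, b):
        return a + b

    def mul(self, a, b):
        return a * b


@_mock_sketch.patch.object(_CalculatorSketch, 'mul', autospec=True)
@_mock_sketch.patch.object(_CalculatorSketch, 'add', autospec=True)
def _autospec_demo(add_mock, mul_mock):  # 'add' is innermost: first arg
    add_mock.return_value = 3
    calc = _CalculatorSketch()
    calc.add(1, 2)
    add_mock.assert_called_once_with(calc, 1, 2)  # self is passed through
    assert not mul_mock.called


_autospec_demo()
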
import types import mock from oslo_config import cfg from ironic.common import dhcp_factory from ironic.common import exception from ironic.common import image_service from ironic.common import images from ironic.common import raid from ironic.common import states from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules import agent from ironic.drivers.modules import agent_base_vendor from ironic.drivers.modules import agent_client from ironic.drivers.modules import deploy_utils from ironic.drivers.modules import fake from ironic.drivers.modules import pxe from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as object_utils INSTANCE_INFO = db_utils.get_test_agent_instance_info() DRIVER_INFO = db_utils.get_test_agent_driver_info() DRIVER_INTERNAL_INFO = db_utils.get_test_agent_driver_internal_info() CONF = cfg.CONF class TestAgentMethods(db_base.DbTestCase): def setUp(self): super(TestAgentMethods, self).setUp() self.node = object_utils.create_test_node(self.context, driver='fake_agent') dhcp_factory.DHCPFactory._dhcp_provider = None @mock.patch.object(image_service, 'GlanceImageService', autospec=True) def test_build_instance_info_for_deploy_glance_image(self, glance_mock): i_info = self.node.instance_info i_info['image_source'] = '733d1c44-a2ea-414b-aca7-69decf20d810' driver_internal_info = self.node.driver_internal_info driver_internal_info['is_whole_disk_image'] = True self.node.driver_internal_info = driver_internal_info self.node.instance_info = i_info self.node.save() image_info = {'checksum': 'aa', 'disk_format': 'qcow2', 'container_format': 'bare'} glance_mock.return_value.show = mock.MagicMock(spec_set=[], return_value=image_info) mgr_utils.mock_the_extension_manager(driver='fake_agent') with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: agent.build_instance_info_for_deploy(task) glance_mock.assert_called_once_with(version=2, context=task.context) glance_mock.return_value.show.assert_called_once_with( self.node.instance_info['image_source']) glance_mock.return_value.swift_temp_url.assert_called_once_with( image_info) @mock.patch.object(deploy_utils, 'parse_instance_info', autospec=True) @mock.patch.object(image_service, 'GlanceImageService', autospec=True) def test_build_instance_info_for_deploy_glance_partition_image( self, glance_mock, parse_instance_info_mock): i_info = self.node.instance_info i_info['image_source'] = '733d1c44-a2ea-414b-aca7-69decf20d810' i_info['kernel'] = '13ce5a56-1de3-4916-b8b2-be778645d003' i_info['ramdisk'] = 'a5a370a8-1b39-433f-be63-2c7d708e4b4e' i_info['root_gb'] = 5 i_info['swap_mb'] = 4 i_info['ephemeral_gb'] = 0 i_info['ephemeral_format'] = None i_info['configdrive'] = 'configdrive' driver_internal_info = self.node.driver_internal_info driver_internal_info['is_whole_disk_image'] = False self.node.driver_internal_info = driver_internal_info self.node.instance_info = i_info self.node.save() image_info = {'checksum': 'aa', 'disk_format': 'qcow2', 'container_format': 'bare', 'properties': {'kernel_id': 'kernel', 'ramdisk_id': 'ramdisk'}} glance_mock.return_value.show = mock.MagicMock(spec_set=[], return_value=image_info) glance_obj_mock = glance_mock.return_value glance_obj_mock.swift_temp_url.return_value = 'temp-url' parse_instance_info_mock.return_value = {'swap_mb': 4} image_source = 
'733d1c44-a2ea-414b-aca7-69decf20d810' expected_i_info = {'root_gb': 5, 'swap_mb': 4, 'ephemeral_gb': 0, 'ephemeral_format': None, 'configdrive': 'configdrive', 'image_source': image_source, 'image_url': 'temp-url', 'kernel': 'kernel', 'ramdisk': 'ramdisk', 'image_type': 'partition', 'image_checksum': 'aa', 'fake_password': 'fakepass', 'image_container_format': 'bare', 'image_disk_format': 'qcow2', 'foo': 'bar'} mgr_utils.mock_the_extension_manager(driver='fake_agent') with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: info = agent.build_instance_info_for_deploy(task) glance_mock.assert_called_once_with(version=2, context=task.context) glance_mock.return_value.show.assert_called_once_with( self.node.instance_info['image_source']) glance_mock.return_value.swift_temp_url.assert_called_once_with( image_info) image_type = task.node.instance_info.get('image_type') self.assertEqual('partition', image_type) self.assertEqual('kernel', info.get('kernel')) self.assertEqual('ramdisk', info.get('ramdisk')) self.assertEqual(expected_i_info, info) parse_instance_info_mock.assert_called_once_with(task.node) @mock.patch.object(image_service.HttpImageService, 'validate_href', autospec=True) def test_build_instance_info_for_deploy_nonglance_image( self, validate_href_mock): i_info = self.node.instance_info driver_internal_info = self.node.driver_internal_info i_info['image_source'] = 'http://image-ref' i_info['image_checksum'] = 'aa' i_info['root_gb'] = 10 i_info['image_checksum'] = 'aa' driver_internal_info['is_whole_disk_image'] = True self.node.instance_info = i_info self.node.driver_internal_info = driver_internal_info self.node.save() mgr_utils.mock_the_extension_manager(driver='fake_agent') with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: info = agent.build_instance_info_for_deploy(task) self.assertEqual(self.node.instance_info['image_source'], info['image_url']) validate_href_mock.assert_called_once_with( mock.ANY, 'http://image-ref') @mock.patch.object(deploy_utils, 'parse_instance_info', autospec=True) @mock.patch.object(image_service.HttpImageService, 'validate_href', autospec=True) def test_build_instance_info_for_deploy_nonglance_partition_image( self, validate_href_mock, parse_instance_info_mock): i_info = self.node.instance_info driver_internal_info = self.node.driver_internal_info i_info['image_source'] = 'http://image-ref' i_info['kernel'] = 'http://kernel-ref' i_info['ramdisk'] = 'http://ramdisk-ref' i_info['image_checksum'] = 'aa' i_info['root_gb'] = 10 driver_internal_info['is_whole_disk_image'] = False self.node.instance_info = i_info self.node.driver_internal_info = driver_internal_info self.node.save() mgr_utils.mock_the_extension_manager(driver='fake_agent') validate_href_mock.side_effect = ['http://image-ref', 'http://kernel-ref', 'http://ramdisk-ref'] parse_instance_info_mock.return_value = {'swap_mb': 5} expected_i_info = {'image_source': 'http://image-ref', 'image_url': 'http://image-ref', 'image_type': 'partition', 'kernel': 'http://kernel-ref', 'ramdisk': 'http://ramdisk-ref', 'image_checksum': 'aa', 'root_gb': 10, 'swap_mb': 5, 'fake_password': 'fakepass', 'foo': 'bar'} with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: info = agent.build_instance_info_for_deploy(task) self.assertEqual(self.node.instance_info['image_source'], info['image_url']) validate_href_mock.assert_called_once_with( mock.ANY, 'http://image-ref') self.assertEqual('partition', info.get('image_type')) 
self.assertEqual(expected_i_info, info) parse_instance_info_mock.assert_called_once_with(task.node) @mock.patch.object(image_service.HttpImageService, 'validate_href', autospec=True) def test_build_instance_info_for_deploy_nonsupported_image( self, validate_href_mock): validate_href_mock.side_effect = iter( [exception.ImageRefValidationFailed( image_href='file://img.qcow2', reason='fail')]) i_info = self.node.instance_info i_info['image_source'] = 'file://img.qcow2' i_info['image_checksum'] = 'aa' self.node.instance_info = i_info self.node.save() mgr_utils.mock_the_extension_manager(driver='fake_agent') with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.ImageRefValidationFailed, agent.build_instance_info_for_deploy, task) @mock.patch.object(images, 'image_show', autospec=True) def test_check_image_size(self, show_mock): show_mock.return_value = { 'size': 10 * 1024 * 1024, 'disk_format': 'qcow2', } mgr_utils.mock_the_extension_manager(driver='fake_agent') with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties['memory_mb'] = 10 agent.check_image_size(task, 'fake-image') show_mock.assert_called_once_with(self.context, 'fake-image') @mock.patch.object(images, 'image_show', autospec=True) def test_check_image_size_without_memory_mb(self, show_mock): mgr_utils.mock_the_extension_manager(driver='fake_agent') with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties.pop('memory_mb', None) agent.check_image_size(task, 'fake-image') self.assertFalse(show_mock.called) @mock.patch.object(images, 'image_show', autospec=True) def test_check_image_size_fail(self, show_mock): show_mock.return_value = { 'size': 11 * 1024 * 1024, 'disk_format': 'qcow2', } mgr_utils.mock_the_extension_manager(driver='fake_agent') with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties['memory_mb'] = 10 self.assertRaises(exception.InvalidParameterValue, agent.check_image_size, task, 'fake-image') show_mock.assert_called_once_with(self.context, 'fake-image') @mock.patch.object(images, 'image_show', autospec=True) def test_check_image_size_fail_by_agent_consumed_memory(self, show_mock): self.config(memory_consumed_by_agent=2, group='agent') show_mock.return_value = { 'size': 9 * 1024 * 1024, 'disk_format': 'qcow2', } mgr_utils.mock_the_extension_manager(driver='fake_agent') with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties['memory_mb'] = 10 self.assertRaises(exception.InvalidParameterValue, agent.check_image_size, task, 'fake-image') show_mock.assert_called_once_with(self.context, 'fake-image') @mock.patch.object(images, 'image_show', autospec=True) def test_check_image_size_raw_stream_enabled(self, show_mock): CONF.set_override('stream_raw_images', True, 'agent') # Image is bigger than memory but it's raw and will be streamed # so the test should pass show_mock.return_value = { 'size': 15 * 1024 * 1024, 'disk_format': 'raw', } mgr_utils.mock_the_extension_manager(driver='fake_agent') with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties['memory_mb'] = 10 agent.check_image_size(task, 'fake-image') show_mock.assert_called_once_with(self.context, 'fake-image') @mock.patch.object(images, 'image_show', autospec=True) def test_check_image_size_raw_stream_disabled(self, show_mock): CONF.set_override('stream_raw_images', False, 'agent') 
show_mock.return_value = { 'size': 15 * 1024 * 1024, 'disk_format': 'raw', } mgr_utils.mock_the_extension_manager(driver='fake_agent') with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties['memory_mb'] = 10 # Image is raw but stream is disabled, so test should fail since # the image is bigger than the RAM size self.assertRaises(exception.InvalidParameterValue, agent.check_image_size, task, 'fake-image') show_mock.assert_called_once_with(self.context, 'fake-image') class TestAgentDeploy(db_base.DbTestCase): def setUp(self): super(TestAgentDeploy, self).setUp() mgr_utils.mock_the_extension_manager(driver='fake_agent') self.driver = agent.AgentDeploy() n = { 'driver': 'fake_agent', 'instance_info': INSTANCE_INFO, 'driver_info': DRIVER_INFO, 'driver_internal_info': DRIVER_INTERNAL_INFO, } self.node = object_utils.create_test_node(self.context, **n) self.ports = [ object_utils.create_test_port(self.context, node_id=self.node.id)] dhcp_factory.DHCPFactory._dhcp_provider = None def test_get_properties(self): expected = agent.COMMON_PROPERTIES self.assertEqual(expected, self.driver.get_properties()) @mock.patch.object(deploy_utils, 'validate_capabilities', spec_set=True, autospec=True) @mock.patch.object(images, 'image_show', autospec=True) @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True) def test_validate(self, pxe_boot_validate_mock, show_mock, validate_capability_mock): with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: self.driver.validate(task) pxe_boot_validate_mock.assert_called_once_with( task.driver.boot, task) show_mock.assert_called_once_with(self.context, 'fake-image') validate_capability_mock.assert_called_once_with(task.node) @mock.patch.object(deploy_utils, 'validate_capabilities', spec_set=True, autospec=True) @mock.patch.object(images, 'image_show', autospec=True) @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True) def test_validate_driver_info_manage_agent_boot_false( self, pxe_boot_validate_mock, show_mock, validate_capability_mock): self.config(manage_agent_boot=False, group='agent') self.node.driver_info = {} self.node.save() with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: self.driver.validate(task) self.assertFalse(pxe_boot_validate_mock.called) show_mock.assert_called_once_with(self.context, 'fake-image') validate_capability_mock.assert_called_once_with(task.node) @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True) def test_validate_instance_info_missing_params( self, pxe_boot_validate_mock): self.node.instance_info = {} self.node.save() with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: e = self.assertRaises(exception.MissingParameterValue, self.driver.validate, task) pxe_boot_validate_mock.assert_called_once_with( task.driver.boot, task) self.assertIn('instance_info.image_source', str(e)) @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True) def test_validate_nonglance_image_no_checksum( self, pxe_boot_validate_mock): i_info = self.node.instance_info i_info['image_source'] = 'http://image-ref' del i_info['image_checksum'] self.node.instance_info = i_info self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.MissingParameterValue, self.driver.validate, task) pxe_boot_validate_mock.assert_called_once_with( task.driver.boot, task) @mock.patch.object(images, 'image_show', autospec=True) @mock.patch.object(pxe.PXEBoot, 'validate', 
autospec=True) def test_validate_invalid_root_device_hints( self, pxe_boot_validate_mock, show_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.properties['root_device'] = {'size': 'not-int'} self.assertRaises(exception.InvalidParameterValue, task.driver.deploy.validate, task) pxe_boot_validate_mock.assert_called_once_with( task.driver.boot, task) show_mock.assert_called_once_with(self.context, 'fake-image') @mock.patch.object(images, 'image_show', autospec=True) @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True) def test_validate_invalid_proxies(self, pxe_boot_validate_mock, show_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.driver_info.update({ 'image_https_proxy': 'git://spam.ni', 'image_http_proxy': 'http://spam.ni', 'image_no_proxy': '1' * 500}) self.assertRaisesRegexp(exception.InvalidParameterValue, 'image_https_proxy.*image_no_proxy', task.driver.deploy.validate, task) pxe_boot_validate_mock.assert_called_once_with( task.driver.boot, task) show_mock.assert_called_once_with(self.context, 'fake-image') @mock.patch('ironic.conductor.utils.node_power_action', autospec=True) def test_deploy(self, power_mock): with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: driver_return = self.driver.deploy(task) self.assertEqual(driver_return, states.DEPLOYWAIT) power_mock.assert_called_once_with(task, states.REBOOT) @mock.patch('ironic.conductor.utils.node_power_action', autospec=True) def test_tear_down(self, power_mock): with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: driver_return = self.driver.tear_down(task) power_mock.assert_called_once_with(task, states.POWER_OFF) self.assertEqual(driver_return, states.DELETED) @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk') @mock.patch.object(deploy_utils, 'build_agent_options') @mock.patch.object(agent, 'build_instance_info_for_deploy') def test_prepare(self, build_instance_info_mock, build_options_mock, pxe_prepare_ramdisk_mock): with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: task.node.provision_state = states.DEPLOYING build_instance_info_mock.return_value = {'foo': 'bar'} build_options_mock.return_value = {'a': 'b'} self.driver.prepare(task) build_instance_info_mock.assert_called_once_with(task) build_options_mock.assert_called_once_with(task.node) pxe_prepare_ramdisk_mock.assert_called_once_with( task, {'a': 'b'}) self.node.refresh() self.assertEqual('bar', self.node.instance_info['foo']) @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk') @mock.patch.object(deploy_utils, 'build_agent_options') @mock.patch.object(agent, 'build_instance_info_for_deploy') def test_prepare_manage_agent_boot_false( self, build_instance_info_mock, build_options_mock, pxe_prepare_ramdisk_mock): self.config(group='agent', manage_agent_boot=False) with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: task.node.provision_state = states.DEPLOYING build_instance_info_mock.return_value = {'foo': 'bar'} self.driver.prepare(task) build_instance_info_mock.assert_called_once_with(task) self.assertFalse(build_options_mock.called) self.assertFalse(pxe_prepare_ramdisk_mock.called) self.node.refresh() self.assertEqual('bar', self.node.instance_info['foo']) @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk') @mock.patch.object(deploy_utils, 'build_agent_options') @mock.patch.object(agent, 'build_instance_info_for_deploy') def test_prepare_active( self, 
build_instance_info_mock, build_options_mock, pxe_prepare_ramdisk_mock): with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: task.node.provision_state = states.ACTIVE self.driver.prepare(task) self.assertFalse(build_instance_info_mock.called) self.assertFalse(build_options_mock.called) self.assertFalse(pxe_prepare_ramdisk_mock.called) @mock.patch('ironic.common.dhcp_factory.DHCPFactory._set_dhcp_provider') @mock.patch('ironic.common.dhcp_factory.DHCPFactory.clean_dhcp') @mock.patch.object(pxe.PXEBoot, 'clean_up_ramdisk') def test_clean_up(self, pxe_clean_up_ramdisk_mock, clean_dhcp_mock, set_dhcp_provider_mock): with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: self.driver.clean_up(task) pxe_clean_up_ramdisk_mock.assert_called_once_with(task) set_dhcp_provider_mock.assert_called_once_with() clean_dhcp_mock.assert_called_once_with(task) @mock.patch('ironic.common.dhcp_factory.DHCPFactory._set_dhcp_provider') @mock.patch('ironic.common.dhcp_factory.DHCPFactory.clean_dhcp') @mock.patch.object(pxe.PXEBoot, 'clean_up_ramdisk') def test_clean_up_manage_agent_boot_false(self, pxe_clean_up_ramdisk_mock, clean_dhcp_mock, set_dhcp_provider_mock): with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: self.config(group='agent', manage_agent_boot=False) self.driver.clean_up(task) self.assertFalse(pxe_clean_up_ramdisk_mock.called) set_dhcp_provider_mock.assert_called_once_with() clean_dhcp_mock.assert_called_once_with(task) @mock.patch('ironic.drivers.modules.deploy_utils.agent_get_clean_steps', autospec=True) def test_get_clean_steps(self, mock_get_clean_steps): # Test getting clean steps mock_steps = [{'priority': 10, 'interface': 'deploy', 'step': 'erase_devices'}] mock_get_clean_steps.return_value = mock_steps with task_manager.acquire(self.context, self.node.uuid) as task: steps = self.driver.get_clean_steps(task) mock_get_clean_steps.assert_called_once_with( task, interface='deploy', override_priorities={'erase_devices': None}) self.assertEqual(mock_steps, steps) @mock.patch('ironic.drivers.modules.deploy_utils.agent_get_clean_steps', autospec=True) def test_get_clean_steps_config_priority(self, mock_get_clean_steps): # Test that we can override the priority of get clean steps # Use 0 because it is an edge case (false-y) and used in devstack self.config(erase_devices_priority=0, group='deploy') mock_steps = [{'priority': 10, 'interface': 'deploy', 'step': 'erase_devices'}] mock_get_clean_steps.return_value = mock_steps with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.get_clean_steps(task) mock_get_clean_steps.assert_called_once_with( task, interface='deploy', override_priorities={'erase_devices': 0}) @mock.patch.object(deploy_utils, 'prepare_inband_cleaning', autospec=True) def test_prepare_cleaning(self, prepare_inband_cleaning_mock): prepare_inband_cleaning_mock.return_value = states.CLEANWAIT with task_manager.acquire(self.context, self.node.uuid) as task: self.assertEqual( states.CLEANWAIT, self.driver.prepare_cleaning(task)) prepare_inband_cleaning_mock.assert_called_once_with( task, manage_boot=True) @mock.patch.object(deploy_utils, 'prepare_inband_cleaning', autospec=True) def test_prepare_cleaning_manage_agent_boot_false( self, prepare_inband_cleaning_mock): prepare_inband_cleaning_mock.return_value = states.CLEANWAIT self.config(group='agent', manage_agent_boot=False) with task_manager.acquire(self.context, self.node.uuid) as task: self.assertEqual( states.CLEANWAIT, 
self.driver.prepare_cleaning(task)) prepare_inband_cleaning_mock.assert_called_once_with( task, manage_boot=False) @mock.patch.object(deploy_utils, 'tear_down_inband_cleaning', autospec=True) def test_tear_down_cleaning(self, tear_down_cleaning_mock): with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.tear_down_cleaning(task) tear_down_cleaning_mock.assert_called_once_with( task, manage_boot=True) @mock.patch.object(deploy_utils, 'tear_down_inband_cleaning', autospec=True) def test_tear_down_cleaning_manage_agent_boot_false( self, tear_down_cleaning_mock): self.config(group='agent', manage_agent_boot=False) with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.tear_down_cleaning(task) tear_down_cleaning_mock.assert_called_once_with( task, manage_boot=False) class TestAgentVendor(db_base.DbTestCase): def setUp(self): super(TestAgentVendor, self).setUp() mgr_utils.mock_the_extension_manager(driver="fake_agent") self.passthru = agent.AgentVendorInterface() n = { 'driver': 'fake_agent', 'instance_info': INSTANCE_INFO, 'driver_info': DRIVER_INFO, 'driver_internal_info': DRIVER_INTERNAL_INFO, } self.node = object_utils.create_test_node(self.context, **n) def _test_continue_deploy(self, additional_driver_info=None, additional_expected_image_info=None): self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE driver_info = self.node.driver_info driver_info.update(additional_driver_info or {}) self.node.driver_info = driver_info self.node.save() test_temp_url = 'http://image' expected_image_info = { 'urls': [test_temp_url], 'id': 'fake-image', 'checksum': 'checksum', 'disk_format': 'qcow2', 'container_format': 'bare', 'stream_raw_images': CONF.agent.stream_raw_images, } expected_image_info.update(additional_expected_image_info or {}) client_mock = mock.MagicMock(spec_set=['prepare_image']) self.passthru._client = client_mock with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.passthru.continue_deploy(task) client_mock.prepare_image.assert_called_with(task.node, expected_image_info) self.assertEqual(states.DEPLOYWAIT, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) def test_continue_deploy(self): self._test_continue_deploy() def test_continue_deploy_with_proxies(self): self._test_continue_deploy( additional_driver_info={'image_https_proxy': 'https://spam.ni', 'image_http_proxy': 'spam.ni', 'image_no_proxy': '.eggs.com'}, additional_expected_image_info={ 'proxies': {'https': 'https://spam.ni', 'http': 'spam.ni'}, 'no_proxy': '.eggs.com'} ) def test_continue_deploy_with_no_proxy_without_proxies(self): self._test_continue_deploy( additional_driver_info={'image_no_proxy': '.eggs.com'} ) def test_continue_deploy_image_source_is_url(self): instance_info = self.node.instance_info instance_info['image_source'] = 'http://example.com/woof.img' self.node.instance_info = instance_info self._test_continue_deploy( additional_expected_image_info={ 'id': 'woof.img' } ) def test_continue_deploy_partition_image(self): self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE i_info = self.node.instance_info i_info['kernel'] = 'kernel' i_info['ramdisk'] = 'ramdisk' i_info['root_gb'] = 10 i_info['swap_mb'] = 10 i_info['ephemeral_mb'] = 0 i_info['ephemeral_format'] = 'abc' i_info['configdrive'] = 'configdrive' i_info['preserve_ephemeral'] = False i_info['image_type'] = 'partition' i_info['root_mb'] = 10240 
i_info['deploy_boot_mode'] = 'bios' i_info['capabilities'] = {"boot_option": "local", "disk_label": "msdos"} self.node.instance_info = i_info driver_internal_info = self.node.driver_internal_info driver_internal_info['is_whole_disk_image'] = False self.node.driver_internal_info = driver_internal_info self.node.save() test_temp_url = 'http://image' expected_image_info = { 'urls': [test_temp_url], 'id': 'fake-image', 'node_uuid': self.node.uuid, 'checksum': 'checksum', 'disk_format': 'qcow2', 'container_format': 'bare', 'stream_raw_images': True, 'kernel': 'kernel', 'ramdisk': 'ramdisk', 'root_gb': 10, 'swap_mb': 10, 'ephemeral_mb': 0, 'ephemeral_format': 'abc', 'configdrive': 'configdrive', 'preserve_ephemeral': False, 'image_type': 'partition', 'root_mb': 10240, 'boot_option': 'local', 'deploy_boot_mode': 'bios', 'disk_label': 'msdos' } client_mock = mock.MagicMock(spec_set=['prepare_image']) self.passthru._client = client_mock with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.passthru.continue_deploy(task) client_mock.prepare_image.assert_called_with(task.node, expected_image_info) self.assertEqual(states.DEPLOYWAIT, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) @mock.patch.object(agent.AgentVendorInterface, '_get_uuid_from_result', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True) @mock.patch('ironic.drivers.modules.agent.AgentVendorInterface' '.check_deploy_success', autospec=True) @mock.patch.object(pxe.PXEBoot, 'clean_up_ramdisk', autospec=True) def test_reboot_to_instance(self, clean_pxe_mock, check_deploy_mock, prepare_mock, power_off_mock, get_power_state_mock, node_power_action_mock, uuid_mock): check_deploy_mock.return_value = None uuid_mock.return_value = 'root_uuid' self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: get_power_state_mock.return_value = states.POWER_OFF task.node.driver_internal_info['is_whole_disk_image'] = True self.passthru.reboot_to_instance(task) clean_pxe_mock.assert_called_once_with(task.driver.boot, task) check_deploy_mock.assert_called_once_with(mock.ANY, task.node) power_off_mock.assert_called_once_with(task.node) get_power_state_mock.assert_called_once_with(task) node_power_action_mock.assert_called_once_with( task, states.REBOOT) self.assertFalse(prepare_mock.called) self.assertEqual(states.ACTIVE, task.node.provision_state) self.assertEqual(states.NOSTATE, task.node.target_provision_state) driver_int_info = task.node.driver_internal_info self.assertIsNone(driver_int_info.get('root_uuid_or_disk_id')) self.assertFalse(uuid_mock.called) @mock.patch.object(deploy_utils, 'get_boot_mode_for_deploy', autospec=True) @mock.patch.object(agent.AgentVendorInterface, '_get_uuid_from_result', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True) @mock.patch('ironic.drivers.modules.agent.AgentVendorInterface' '.check_deploy_success', 
autospec=True) @mock.patch.object(pxe.PXEBoot, 'clean_up_ramdisk', autospec=True) def test_reboot_to_instance_partition_image(self, clean_pxe_mock, check_deploy_mock, prepare_mock, power_off_mock, get_power_state_mock, node_power_action_mock, uuid_mock, boot_mode_mock): check_deploy_mock.return_value = None uuid_mock.return_value = 'root_uuid' self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() boot_mode_mock.return_value = 'bios' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: get_power_state_mock.return_value = states.POWER_OFF task.node.driver_internal_info['is_whole_disk_image'] = False self.passthru.reboot_to_instance(task) self.assertFalse(clean_pxe_mock.called) check_deploy_mock.assert_called_once_with(mock.ANY, task.node) power_off_mock.assert_called_once_with(task.node) get_power_state_mock.assert_called_once_with(task) node_power_action_mock.assert_called_once_with( task, states.REBOOT) prepare_mock.assert_called_once_with(task.driver.boot, task) self.assertEqual(states.ACTIVE, task.node.provision_state) self.assertEqual(states.NOSTATE, task.node.target_provision_state) driver_int_info = task.node.driver_internal_info self.assertEqual(driver_int_info.get('root_uuid_or_disk_id'), 'root_uuid') uuid_mock.assert_called_once_with(self.passthru, task, 'root_uuid') boot_mode_mock.assert_called_once_with(task.node) @mock.patch.object(agent.AgentVendorInterface, '_get_uuid_from_result', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True) @mock.patch('ironic.drivers.modules.agent.AgentVendorInterface' '.check_deploy_success', autospec=True) @mock.patch.object(pxe.PXEBoot, 'clean_up_ramdisk', autospec=True) def test_reboot_to_instance_boot_none(self, clean_pxe_mock, check_deploy_mock, prepare_mock, power_off_mock, get_power_state_mock, node_power_action_mock, uuid_mock): check_deploy_mock.return_value = None self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: get_power_state_mock.return_value = states.POWER_OFF task.node.driver_internal_info['is_whole_disk_image'] = True task.driver.boot = None self.passthru.reboot_to_instance(task) self.assertFalse(clean_pxe_mock.called) self.assertFalse(prepare_mock.called) power_off_mock.assert_called_once_with(task.node) check_deploy_mock.assert_called_once_with(mock.ANY, task.node) driver_int_info = task.node.driver_internal_info self.assertIsNone(driver_int_info.get('root_uuid_or_disk_id')) get_power_state_mock.assert_called_once_with(task) node_power_action_mock.assert_called_once_with( task, states.REBOOT) self.assertEqual(states.ACTIVE, task.node.provision_state) self.assertEqual(states.NOSTATE, task.node.target_provision_state) self.assertFalse(uuid_mock.called) @mock.patch.object(agent.AgentVendorInterface, '_get_uuid_from_result', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True) 
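# Note: stacked mock.patch decorators are applied bottom-up, so the decorator
# closest to the def (clean_up_ramdisk in these tests) supplies the *first*
# mock argument (clean_pxe_mock), and the outermost decorator supplies the
# last. A minimal sketch of the convention (illustrative names only, not part
# of this suite):
#
#     @mock.patch.object(Outer, 'method', autospec=True)  # -> outer_mock (last)
#     @mock.patch.object(Inner, 'method', autospec=True)  # -> inner_mock (first)
#     def test_example(self, inner_mock, outer_mock):
#         ...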
@mock.patch('ironic.drivers.modules.agent.AgentVendorInterface' '.check_deploy_success', autospec=True) @mock.patch.object(pxe.PXEBoot, 'clean_up_ramdisk', autospec=True) def test_reboot_to_instance_boot_error(self, clean_pxe_mock, check_deploy_mock, prepare_mock, power_off_mock, get_power_state_mock, node_power_action_mock, uuid_mock): check_deploy_mock.return_value = "Error" uuid_mock.return_value = None self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: get_power_state_mock.return_value = states.POWER_OFF task.node.driver_internal_info['is_whole_disk_image'] = True task.driver.boot = None self.passthru.reboot_to_instance(task) self.assertFalse(clean_pxe_mock.called) self.assertFalse(prepare_mock.called) self.assertFalse(power_off_mock.called) check_deploy_mock.assert_called_once_with(mock.ANY, task.node) self.assertEqual(states.DEPLOYFAIL, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'configure_local_boot', autospec=True) @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True) @mock.patch.object(agent.AgentVendorInterface, '_get_uuid_from_result', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True) @mock.patch('ironic.drivers.modules.agent.AgentVendorInterface' '.check_deploy_success', autospec=True) @mock.patch.object(pxe.PXEBoot, 'clean_up_ramdisk', autospec=True) def test_reboot_to_instance_localboot(self, clean_pxe_mock, check_deploy_mock, prepare_mock, power_off_mock, get_power_state_mock, node_power_action_mock, uuid_mock, bootdev_mock, configure_mock): check_deploy_mock.return_value = None uuid_mock.side_effect = ['root_uuid', 'efi_uuid'] self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: get_power_state_mock.return_value = states.POWER_OFF task.node.driver_internal_info['is_whole_disk_image'] = False boot_option = {'capabilities': '{"boot_option": "local"}'} task.node.instance_info = boot_option self.passthru.reboot_to_instance(task) self.assertFalse(clean_pxe_mock.called) check_deploy_mock.assert_called_once_with(mock.ANY, task.node) self.assertFalse(bootdev_mock.called) power_off_mock.assert_called_once_with(task.node) get_power_state_mock.assert_called_once_with(task) node_power_action_mock.assert_called_once_with( task, states.REBOOT) self.assertEqual(states.ACTIVE, task.node.provision_state) self.assertEqual(states.NOSTATE, task.node.target_provision_state) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_deploy_has_started(self, mock_get_cmd): with task_manager.acquire(self.context, self.node.uuid) as task: mock_get_cmd.return_value = [] self.assertFalse(self.passthru.deploy_has_started(task)) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_deploy_has_started_is_done(self, mock_get_cmd): with task_manager.acquire(self.context, self.node.uuid) as task: mock_get_cmd.return_value = [{'command_name': 'prepare_image', 'command_status': 
'SUCCESS'}] self.assertTrue(self.passthru.deploy_has_started(task)) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_deploy_has_started_did_start(self, mock_get_cmd): with task_manager.acquire(self.context, self.node.uuid) as task: mock_get_cmd.return_value = [{'command_name': 'prepare_image', 'command_status': 'RUNNING'}] self.assertTrue(self.passthru.deploy_has_started(task)) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_deploy_has_started_multiple_commands(self, mock_get_cmd): with task_manager.acquire(self.context, self.node.uuid) as task: mock_get_cmd.return_value = [{'command_name': 'cache_image', 'command_status': 'SUCCESS'}, {'command_name': 'prepare_image', 'command_status': 'RUNNING'}] self.assertTrue(self.passthru.deploy_has_started(task)) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_deploy_has_started_other_commands(self, mock_get_cmd): with task_manager.acquire(self.context, self.node.uuid) as task: mock_get_cmd.return_value = [{'command_name': 'cache_image', 'command_status': 'SUCCESS'}] self.assertFalse(self.passthru.deploy_has_started(task)) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_deploy_is_done(self, mock_get_cmd): with task_manager.acquire(self.context, self.node.uuid) as task: mock_get_cmd.return_value = [{'command_name': 'prepare_image', 'command_status': 'SUCCESS'}] self.assertTrue(self.passthru.deploy_is_done(task)) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_deploy_is_done_empty_response(self, mock_get_cmd): with task_manager.acquire(self.context, self.node.uuid) as task: mock_get_cmd.return_value = [] self.assertFalse(self.passthru.deploy_is_done(task)) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_deploy_is_done_race(self, mock_get_cmd): with task_manager.acquire(self.context, self.node.uuid) as task: mock_get_cmd.return_value = [{'command_name': 'some_other_command', 'command_status': 'SUCCESS'}] self.assertFalse(self.passthru.deploy_is_done(task)) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_deploy_is_done_still_running(self, mock_get_cmd): with task_manager.acquire(self.context, self.node.uuid) as task: mock_get_cmd.return_value = [{'command_name': 'prepare_image', 'command_status': 'RUNNING'}] self.assertFalse(self.passthru.deploy_is_done(task)) class AgentRAIDTestCase(db_base.DbTestCase): def setUp(self): super(AgentRAIDTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="fake_agent") self.passthru = agent.AgentVendorInterface() self.target_raid_config = { "logical_disks": [ {'size_gb': 200, 'raid_level': 0, 'is_root_volume': True}, {'size_gb': 200, 'raid_level': 5} ]} self.clean_step = {'step': 'create_configuration', 'interface': 'raid'} n = { 'driver': 'fake_agent', 'instance_info': INSTANCE_INFO, 'driver_info': DRIVER_INFO, 'driver_internal_info': DRIVER_INTERNAL_INFO, 'target_raid_config': self.target_raid_config, 'clean_step': self.clean_step, } self.node = object_utils.create_test_node(self.context, **n) @mock.patch.object(deploy_utils, 'agent_get_clean_steps', autospec=True) def test_get_clean_steps(self, get_steps_mock): get_steps_mock.return_value = [ {'step': 'create_configuration', 'interface': 'raid', 'priority': 1}, {'step': 'delete_configuration', 'interface': 'raid', 'priority': 2}] with 
task_manager.acquire(self.context, self.node.uuid) as task: ret = task.driver.raid.get_clean_steps(task) self.assertEqual(0, ret[0]['priority']) self.assertEqual(0, ret[1]['priority']) @mock.patch.object(deploy_utils, 'agent_execute_clean_step', autospec=True) def test_create_configuration(self, execute_mock): with task_manager.acquire(self.context, self.node.uuid) as task: execute_mock.return_value = states.CLEANWAIT return_value = task.driver.raid.create_configuration(task) self.assertEqual(states.CLEANWAIT, return_value) self.assertEqual( self.target_raid_config, task.node.driver_internal_info['target_raid_config']) execute_mock.assert_called_once_with(task, self.clean_step) @mock.patch.object(deploy_utils, 'agent_execute_clean_step', autospec=True) def test_create_configuration_skip_root(self, execute_mock): with task_manager.acquire(self.context, self.node.uuid) as task: execute_mock.return_value = states.CLEANWAIT return_value = task.driver.raid.create_configuration( task, create_root_volume=False) self.assertEqual(states.CLEANWAIT, return_value) execute_mock.assert_called_once_with(task, self.clean_step) exp_target_raid_config = { "logical_disks": [ {'size_gb': 200, 'raid_level': 5} ]} self.assertEqual( exp_target_raid_config, task.node.driver_internal_info['target_raid_config']) @mock.patch.object(deploy_utils, 'agent_execute_clean_step', autospec=True) def test_create_configuration_skip_nonroot(self, execute_mock): with task_manager.acquire(self.context, self.node.uuid) as task: execute_mock.return_value = states.CLEANWAIT return_value = task.driver.raid.create_configuration( task, create_nonroot_volumes=False) self.assertEqual(states.CLEANWAIT, return_value) execute_mock.assert_called_once_with(task, self.clean_step) exp_target_raid_config = { "logical_disks": [ {'size_gb': 200, 'raid_level': 0, 'is_root_volume': True}, ]} self.assertEqual( exp_target_raid_config, task.node.driver_internal_info['target_raid_config']) @mock.patch.object(deploy_utils, 'agent_execute_clean_step', autospec=True) def test_create_configuration_no_target_raid_config_after_skipping( self, execute_mock): with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises( exception.MissingParameterValue, task.driver.raid.create_configuration, task, create_root_volume=False, create_nonroot_volumes=False) self.assertFalse(execute_mock.called) @mock.patch.object(deploy_utils, 'agent_execute_clean_step', autospec=True) def test_create_configuration_empty_target_raid_config( self, execute_mock): execute_mock.return_value = states.CLEANING self.node.target_raid_config = {} self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.MissingParameterValue, task.driver.raid.create_configuration, task) self.assertFalse(execute_mock.called) @mock.patch.object(raid, 'update_raid_info', autospec=True) def test__create_configuration_final( self, update_raid_info_mock): command = {'command_result': {'clean_result': 'foo'}} with task_manager.acquire(self.context, self.node.uuid) as task: raid_mgmt = agent.AgentRAID raid_mgmt._create_configuration_final(task, command) update_raid_info_mock.assert_called_once_with(task.node, 'foo') @mock.patch.object(raid, 'update_raid_info', autospec=True) def test__create_configuration_final_registered( self, update_raid_info_mock): self.node.clean_step = {'interface': 'raid', 'step': 'create_configuration'} command = {'command_result': {'clean_result': 'foo'}} create_hook = 
agent_base_vendor._get_post_clean_step_hook(self.node) with task_manager.acquire(self.context, self.node.uuid) as task: create_hook(task, command) update_raid_info_mock.assert_called_once_with(task.node, 'foo') @mock.patch.object(raid, 'update_raid_info', autospec=True) def test__create_configuration_final_bad_command_result( self, update_raid_info_mock): command = {} with task_manager.acquire(self.context, self.node.uuid) as task: raid_mgmt = agent.AgentRAID self.assertRaises(exception.IronicException, raid_mgmt._create_configuration_final, task, command) self.assertFalse(update_raid_info_mock.called) @mock.patch.object(deploy_utils, 'agent_execute_clean_step', autospec=True) def test_delete_configuration(self, execute_mock): execute_mock.return_value = states.CLEANING with task_manager.acquire(self.context, self.node.uuid) as task: return_value = task.driver.raid.delete_configuration(task) execute_mock.assert_called_once_with(task, self.clean_step) self.assertEqual(states.CLEANING, return_value) def test__delete_configuration_final(self): command = {'command_result': {'clean_result': 'foo'}} with task_manager.acquire(self.context, self.node.uuid) as task: task.node.raid_config = {'foo': 'bar'} raid_mgmt = agent.AgentRAID raid_mgmt._delete_configuration_final(task, command) self.node.refresh() self.assertEqual({}, self.node.raid_config) def test__delete_configuration_final_registered( self): self.node.clean_step = {'interface': 'raid', 'step': 'delete_configuration'} self.node.raid_config = {'foo': 'bar'} command = {'command_result': {'clean_result': 'foo'}} delete_hook = agent_base_vendor._get_post_clean_step_hook(self.node) with task_manager.acquire(self.context, self.node.uuid) as task: delete_hook(task, command) self.node.refresh() self.assertEqual({}, self.node.raid_config) ironic-5.1.0/ironic/tests/unit/drivers/modules/irmc/0000775000567000056710000000000012674513633023637 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/drivers/modules/irmc/test_management.py0000664000567000056710000003212512674513466027373 0ustar jenkinsjenkins00000000000000# Copyright 2015 FUJITSU LIMITED # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
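# The management tests below drive the iRMC interfaces through the "fake_irmc"
# driver loaded by mgr_utils.mock_the_extension_manager, so no real BMC is
# ever contacted. A rough sketch of the pattern, assuming the helper
# signatures used elsewhere in this suite:
#
#     mgr_utils.mock_the_extension_manager(driver="fake_irmc")
#     node = obj_utils.create_test_node(context, driver='fake_irmc',
#                                       driver_info=INFO_DICT)
#     with task_manager.acquire(context, node.uuid, shared=True) as task:
#         task.driver.management.validate(task)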
""" Test class for iRMC Management Driver """ import os import xml.etree.ElementTree as ET import mock from ironic.common import boot_devices from ironic.common import driver_factory from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules import ipmitool from ironic.drivers.modules.irmc import common as irmc_common from ironic.drivers.modules.irmc import management as irmc_management from ironic.drivers import utils as driver_utils from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.drivers import third_party_driver_mock_specs \ as mock_specs from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = db_utils.get_test_irmc_info() class IRMCManagementTestCase(db_base.DbTestCase): def setUp(self): super(IRMCManagementTestCase, self).setUp() driver_info = INFO_DICT mgr_utils.mock_the_extension_manager(driver="fake_irmc") self.driver = driver_factory.get_driver("fake_irmc") self.node = obj_utils.create_test_node(self.context, driver='fake_irmc', driver_info=driver_info) self.info = irmc_common.parse_driver_info(self.node) def test_get_properties(self): expected = irmc_common.COMMON_PROPERTIES expected.update(ipmitool.COMMON_PROPERTIES) expected.update(ipmitool.CONSOLE_PROPERTIES) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertEqual(expected, task.driver.get_properties()) @mock.patch.object(irmc_common, 'parse_driver_info', spec_set=True, autospec=True) def test_validate(self, mock_drvinfo): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.management.validate(task) mock_drvinfo.assert_called_once_with(task.node) @mock.patch.object(irmc_common, 'parse_driver_info', spec_set=True, autospec=True) def test_validate_fail(self, mock_drvinfo): side_effect = iter([exception.InvalidParameterValue("Invalid Input")]) mock_drvinfo.side_effect = side_effect with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.management.validate, task) def test_management_interface_get_supported_boot_devices(self): with task_manager.acquire(self.context, self.node.uuid) as task: expected = [boot_devices.PXE, boot_devices.DISK, boot_devices.CDROM, boot_devices.BIOS, boot_devices.SAFE] self.assertEqual(sorted(expected), sorted(task.driver.management. 
get_supported_boot_devices(task))) @mock.patch.object(ipmitool.IPMIManagement, 'set_boot_device', spec_set=True, autospec=True) def test_management_interface_set_boot_device_no_mode_ok( self, set_boot_device_mock): """no boot mode specified.""" with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.set_boot_device(task, boot_devices.PXE) set_boot_device_mock.assert_called_once_with( task.driver.management, task, boot_devices.PXE, False) @mock.patch.object(ipmitool.IPMIManagement, 'set_boot_device', spec_set=True, autospec=True) def test_management_interface_set_boot_device_bios_ok( self, set_boot_device_mock): """bios mode specified.""" with task_manager.acquire(self.context, self.node.uuid) as task: driver_utils.add_node_capability(task, 'boot_mode', 'bios') task.driver.management.set_boot_device(task, boot_devices.PXE) set_boot_device_mock.assert_called_once_with( task.driver.management, task, boot_devices.PXE, False) @mock.patch.object(irmc_management.ipmitool, "send_raw", spec_set=True, autospec=True) def _test_management_interface_set_boot_device_uefi_ok(self, params, expected_raw_code, send_raw_mock): send_raw_mock.return_value = [None, None] with task_manager.acquire(self.context, self.node.uuid) as task: task.node.properties['capabilities'] = '' driver_utils.add_node_capability(task, 'boot_mode', 'uefi') self.driver.management.set_boot_device(task, **params) send_raw_mock.assert_has_calls([ mock.call(task, "0x00 0x08 0x03 0x08"), mock.call(task, expected_raw_code)]) def test_management_interface_set_boot_device_uefi_ok_pxe(self): params = {'device': boot_devices.PXE, 'persistent': False} self._test_management_interface_set_boot_device_uefi_ok( params, "0x00 0x08 0x05 0xa0 0x04 0x00 0x00 0x00") params['persistent'] = True self._test_management_interface_set_boot_device_uefi_ok( params, "0x00 0x08 0x05 0xe0 0x04 0x00 0x00 0x00") def test_management_interface_set_boot_device_uefi_ok_disk(self): params = {'device': boot_devices.DISK, 'persistent': False} self._test_management_interface_set_boot_device_uefi_ok( params, "0x00 0x08 0x05 0xa0 0x08 0x00 0x00 0x00") params['persistent'] = True self._test_management_interface_set_boot_device_uefi_ok( params, "0x00 0x08 0x05 0xe0 0x08 0x00 0x00 0x00") def test_management_interface_set_boot_device_uefi_ok_cdrom(self): params = {'device': boot_devices.CDROM, 'persistent': False} self._test_management_interface_set_boot_device_uefi_ok( params, "0x00 0x08 0x05 0xa0 0x14 0x00 0x00 0x00") params['persistent'] = True self._test_management_interface_set_boot_device_uefi_ok( params, "0x00 0x08 0x05 0xe0 0x14 0x00 0x00 0x00") def test_management_interface_set_boot_device_uefi_ok_bios(self): params = {'device': boot_devices.BIOS, 'persistent': False} self._test_management_interface_set_boot_device_uefi_ok( params, "0x00 0x08 0x05 0xa0 0x18 0x00 0x00 0x00") params['persistent'] = True self._test_management_interface_set_boot_device_uefi_ok( params, "0x00 0x08 0x05 0xe0 0x18 0x00 0x00 0x00") def test_management_interface_set_boot_device_uefi_ok_safe(self): params = {'device': boot_devices.SAFE, 'persistent': False} self._test_management_interface_set_boot_device_uefi_ok( params, "0x00 0x08 0x05 0xa0 0x0c 0x00 0x00 0x00") params['persistent'] = True self._test_management_interface_set_boot_device_uefi_ok( params, "0x00 0x08 0x05 0xe0 0x0c 0x00 0x00 0x00") @mock.patch.object(irmc_management.ipmitool, "send_raw", spec_set=True, autospec=True) def test_management_interface_set_boot_device_uefi_ng(self, 
send_raw_mock): """uefi mode, next boot only, unknown device.""" send_raw_mock.return_value = [None, None] with task_manager.acquire(self.context, self.node.uuid) as task: driver_utils.add_node_capability(task, 'boot_mode', 'uefi') self.assertRaises(exception.InvalidParameterValue, self.driver.management.set_boot_device, task, "unknown") @mock.patch.object(irmc_management, 'scci', spec_set=mock_specs.SCCICLIENT_IRMC_SCCI_SPEC) @mock.patch.object(irmc_common, 'get_irmc_report', spec_set=True, autospec=True) def test_management_interface_get_sensors_data_scci_ok( self, mock_get_irmc_report, mock_scci): """'irmc_sensor_method' = 'scci' specified and OK data.""" with open(os.path.join(os.path.dirname(__file__), 'fake_sensors_data_ok.xml'), "r") as report: fake_txt = report.read() fake_xml = ET.fromstring(fake_txt) mock_get_irmc_report.return_value = fake_xml mock_scci.get_sensor_data.return_value = fake_xml.find( "./System/SensorDataRecords") with task_manager.acquire(self.context, self.node.uuid) as task: task.node.driver_info['irmc_sensor_method'] = 'scci' sensor_dict = self.driver.management.get_sensors_data(task) expected = { 'Fan (4)': { 'FAN1 SYS (29)': { 'Units': 'RPM', 'Sensor ID': 'FAN1 SYS (29)', 'Sensor Reading': '600 RPM' }, 'FAN2 SYS (29)': { 'Units': 'None', 'Sensor ID': 'FAN2 SYS (29)', 'Sensor Reading': 'None None' } }, 'Temperature (1)': { 'Systemboard 1 (7)': { 'Units': 'degree C', 'Sensor ID': 'Systemboard 1 (7)', 'Sensor Reading': '80 degree C' }, 'Ambient (55)': { 'Units': 'degree C', 'Sensor ID': 'Ambient (55)', 'Sensor Reading': '42 degree C' } } } self.assertEqual(expected, sensor_dict) @mock.patch.object(irmc_management, 'scci', spec_set=mock_specs.SCCICLIENT_IRMC_SCCI_SPEC) @mock.patch.object(irmc_common, 'get_irmc_report', spec_set=True, autospec=True) def test_management_interface_get_sensors_data_scci_ng( self, mock_get_irmc_report, mock_scci): """'irmc_sensor_method' = 'scci' specified and NG data.""" with open(os.path.join(os.path.dirname(__file__), 'fake_sensors_data_ng.xml'), "r") as report: fake_txt = report.read() fake_xml = ET.fromstring(fake_txt) mock_get_irmc_report.return_value = fake_xml mock_scci.get_sensor_data.return_value = fake_xml.find( "./System/SensorDataRecords") with task_manager.acquire(self.context, self.node.uuid) as task: task.node.driver_info['irmc_sensor_method'] = 'scci' sensor_dict = self.driver.management.get_sensors_data(task) self.assertEqual(len(sensor_dict), 0) @mock.patch.object(ipmitool.IPMIManagement, 'get_sensors_data', spec_set=True, autospec=True) def test_management_interface_get_sensors_data_ipmitool_ok( self, get_sensors_data_mock): """'irmc_sensor_method' = 'ipmitool' specified.""" with task_manager.acquire(self.context, self.node.uuid) as task: task.node.driver_info['irmc_sensor_method'] = 'ipmitool' task.driver.management.get_sensors_data(task) get_sensors_data_mock.assert_called_once_with( task.driver.management, task) @mock.patch.object(irmc_common, 'get_irmc_report', spec_set=True, autospec=True) def test_management_interface_get_sensors_data_exception( self, get_irmc_report_mock): """'FailedToGetSensorData Exception.""" get_irmc_report_mock.side_effect = exception.InvalidParameterValue( "Fake Error") irmc_management.scci.SCCIInvalidInputError = Exception irmc_management.scci.SCCIClientError = Exception with task_manager.acquire(self.context, self.node.uuid) as task: task.node.driver_info['irmc_sensor_method'] = 'scci' e = self.assertRaises(exception.FailedToGetSensorData, self.driver.management.get_sensors_data, 
task) self.assertEqual("Failed to get sensor data for node 1be26c0b-" + "03f2-4d2e-ae87-c02d7f33c123. Error: Fake Error", str(e)) ironic-5.1.0/ironic/tests/unit/drivers/modules/irmc/test_boot.py0000664000567000056710000014077312674513466026233 0ustar jenkinsjenkins00000000000000# Copyright 2015 FUJITSU LIMITED # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Test class for iRMC Boot Driver """ import os import shutil import tempfile from ironic_lib import utils as ironic_utils import mock from oslo_config import cfg import six from ironic.common import boot_devices from ironic.common import exception from ironic.common.glance_service import service_utils from ironic.common.i18n import _ from ironic.common import images from ironic.common import states from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules import deploy_utils from ironic.drivers.modules.irmc import boot as irmc_boot from ironic.drivers.modules.irmc import common as irmc_common from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils if six.PY3: import io file = io.BytesIO INFO_DICT = db_utils.get_test_irmc_info() CONF = cfg.CONF class IRMCDeployPrivateMethodsTestCase(db_base.DbTestCase): def setUp(self): irmc_boot.check_share_fs_mounted_patcher.start() self.addCleanup(irmc_boot.check_share_fs_mounted_patcher.stop) super(IRMCDeployPrivateMethodsTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver='iscsi_irmc') self.node = obj_utils.create_test_node( self.context, driver='iscsi_irmc', driver_info=INFO_DICT) CONF.irmc.remote_image_share_root = '/remote_image_share_root' CONF.irmc.remote_image_server = '10.20.30.40' CONF.irmc.remote_image_share_type = 'NFS' CONF.irmc.remote_image_share_name = 'share' CONF.irmc.remote_image_user_name = 'admin' CONF.irmc.remote_image_user_password = 'admin0' CONF.irmc.remote_image_user_domain = 'local' @mock.patch.object(os.path, 'isdir', spec_set=True, autospec=True) def test__parse_config_option(self, isdir_mock): isdir_mock.return_value = True result = irmc_boot._parse_config_option() isdir_mock.assert_called_once_with('/remote_image_share_root') self.assertIsNone(result) @mock.patch.object(os.path, 'isdir', spec_set=True, autospec=True) def test__parse_config_option_non_existed_root(self, isdir_mock): CONF.irmc.remote_image_share_root = '/non_existed_root' isdir_mock.return_value = False self.assertRaises(exception.InvalidParameterValue, irmc_boot._parse_config_option) isdir_mock.assert_called_once_with('/non_existed_root') @mock.patch.object(os.path, 'isfile', spec_set=True, autospec=True) def test__parse_driver_info_in_share(self, isfile_mock): """With required 'irmc_deploy_iso' in share.""" isfile_mock.return_value = True self.node.driver_info['irmc_deploy_iso'] = 'deploy.iso' driver_info_expected = {'irmc_deploy_iso': 'deploy.iso'} driver_info_actual = 
irmc_boot._parse_driver_info(self.node) isfile_mock.assert_called_once_with( '/remote_image_share_root/deploy.iso') self.assertEqual(driver_info_expected, driver_info_actual) @mock.patch.object(service_utils, 'is_image_href_ordinary_file_name', spec_set=True, autospec=True) def test__parse_driver_info_not_in_share( self, is_image_href_ordinary_file_name_mock): """With required 'irmc_deploy_iso' not in share.""" self.node.driver_info[ 'irmc_deploy_iso'] = 'bc784057-a140-4130-add3-ef890457e6b3' driver_info_expected = {'irmc_deploy_iso': 'bc784057-a140-4130-add3-ef890457e6b3'} is_image_href_ordinary_file_name_mock.return_value = False driver_info_actual = irmc_boot._parse_driver_info(self.node) self.assertEqual(driver_info_expected, driver_info_actual) @mock.patch.object(os.path, 'isfile', spec_set=True, autospec=True) def test__parse_driver_info_with_deploy_iso_invalid(self, isfile_mock): """With required 'irmc_deploy_iso' non existed.""" isfile_mock.return_value = False with task_manager.acquire(self.context, self.node.uuid) as task: task.node.driver_info['irmc_deploy_iso'] = 'deploy.iso' error_msg = (_("Deploy ISO file, %(deploy_iso)s, " "not found for node: %(node)s.") % {'deploy_iso': '/remote_image_share_root/deploy.iso', 'node': task.node.uuid}) e = self.assertRaises(exception.InvalidParameterValue, irmc_boot._parse_driver_info, task.node) self.assertEqual(error_msg, str(e)) def test__parse_driver_info_with_deploy_iso_missing(self): """With required 'irmc_deploy_iso' empty.""" self.node.driver_info['irmc_deploy_iso'] = None error_msg = ("Error validating iRMC virtual media deploy. Some" " parameters were missing in node's driver_info." " Missing are: ['irmc_deploy_iso']") e = self.assertRaises(exception.MissingParameterValue, irmc_boot._parse_driver_info, self.node) self.assertEqual(error_msg, str(e)) def test__parse_instance_info_with_boot_iso_file_name_ok(self): """With optional 'irmc_boot_iso' file name.""" CONF.irmc.remote_image_share_root = '/etc' self.node.instance_info['irmc_boot_iso'] = 'hosts' instance_info_expected = {'irmc_boot_iso': 'hosts'} instance_info_actual = irmc_boot._parse_instance_info(self.node) self.assertEqual(instance_info_expected, instance_info_actual) def test__parse_instance_info_without_boot_iso_ok(self): """With optional no 'irmc_boot_iso' file name.""" CONF.irmc.remote_image_share_root = '/etc' self.node.instance_info['irmc_boot_iso'] = None instance_info_expected = {} instance_info_actual = irmc_boot._parse_instance_info(self.node) self.assertEqual(instance_info_expected, instance_info_actual) def test__parse_instance_info_with_boot_iso_uuid_ok(self): """With optional 'irmc_boot_iso' glance uuid.""" self.node.instance_info[ 'irmc_boot_iso'] = 'bc784057-a140-4130-add3-ef890457e6b3' instance_info_expected = {'irmc_boot_iso': 'bc784057-a140-4130-add3-ef890457e6b3'} instance_info_actual = irmc_boot._parse_instance_info(self.node) self.assertEqual(instance_info_expected, instance_info_actual) def test__parse_instance_info_with_boot_iso_glance_ok(self): """With optional 'irmc_boot_iso' glance url.""" self.node.instance_info['irmc_boot_iso'] = ( 'glance://bc784057-a140-4130-add3-ef890457e6b3') instance_info_expected = { 'irmc_boot_iso': 'glance://bc784057-a140-4130-add3-ef890457e6b3', } instance_info_actual = irmc_boot._parse_instance_info(self.node) self.assertEqual(instance_info_expected, instance_info_actual) def test__parse_instance_info_with_boot_iso_http_ok(self): """With optional 'irmc_boot_iso' http url.""" self.node.driver_info[ 'irmc_deploy_iso'] = 
'http://irmc_boot_iso' driver_info_expected = {'irmc_deploy_iso': 'http://irmc_boot_iso'} driver_info_actual = irmc_boot._parse_driver_info(self.node) self.assertEqual(driver_info_expected, driver_info_actual) def test__parse_instance_info_with_boot_iso_https_ok(self): """With optional 'irmc_boot_iso' https url.""" self.node.instance_info[ 'irmc_boot_iso'] = 'https://irmc_boot_iso' instance_info_expected = {'irmc_boot_iso': 'https://irmc_boot_iso'} instance_info_actual = irmc_boot._parse_instance_info(self.node) self.assertEqual(instance_info_expected, instance_info_actual) def test__parse_instance_info_with_boot_iso_file_url_ok(self): """With optional 'irmc_boot_iso' file url.""" self.node.instance_info[ 'irmc_boot_iso'] = 'file://irmc_boot_iso' instance_info_expected = {'irmc_boot_iso': 'file://irmc_boot_iso'} instance_info_actual = irmc_boot._parse_instance_info(self.node) self.assertEqual(instance_info_expected, instance_info_actual) @mock.patch.object(os.path, 'isfile', spec_set=True, autospec=True) def test__parse_instance_info_with_boot_iso_invalid(self, isfile_mock): CONF.irmc.remote_image_share_root = '/etc' isfile_mock.return_value = False with task_manager.acquire(self.context, self.node.uuid) as task: task.node.instance_info['irmc_boot_iso'] = 'hosts~non~existed' error_msg = (_("Boot ISO file, %(boot_iso)s, " "not found for node: %(node)s.") % {'boot_iso': '/etc/hosts~non~existed', 'node': task.node.uuid}) e = self.assertRaises(exception.InvalidParameterValue, irmc_boot._parse_instance_info, task.node) self.assertEqual(error_msg, str(e)) @mock.patch.object(deploy_utils, 'get_image_instance_info', spec_set=True, autospec=True) @mock.patch('os.path.isfile', autospec=True) def test_parse_deploy_info_ok(self, mock_isfile, get_image_instance_info_mock): CONF.irmc.remote_image_share_root = '/etc' get_image_instance_info_mock.return_value = {'a': 'b'} driver_info_expected = {'a': 'b', 'irmc_deploy_iso': 'hosts', 'irmc_boot_iso': 'fstab'} with task_manager.acquire(self.context, self.node.uuid) as task: task.node.driver_info['irmc_deploy_iso'] = 'hosts' task.node.instance_info['irmc_boot_iso'] = 'fstab' driver_info_actual = irmc_boot._parse_deploy_info(task.node) self.assertEqual(driver_info_expected, driver_info_actual) boot_iso_path = os.path.join( CONF.irmc.remote_image_share_root, task.node.instance_info['irmc_boot_iso'] ) mock_isfile.assert_any_call(boot_iso_path) @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_setup_vmedia_for_boot', spec_set=True, autospec=True) @mock.patch.object(images, 'fetch', spec_set=True, autospec=True) def test__setup_deploy_iso_with_file(self, fetch_mock, setup_vmedia_mock, set_boot_device_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.driver_info['irmc_deploy_iso'] = 'deploy_iso_filename' ramdisk_opts = {'a': 'b'} irmc_boot._setup_deploy_iso(task, ramdisk_opts) self.assertFalse(fetch_mock.called) setup_vmedia_mock.assert_called_once_with( task, 'deploy_iso_filename', ramdisk_opts) set_boot_device_mock.assert_called_once_with(task, boot_devices.CDROM) @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_setup_vmedia_for_boot', spec_set=True, autospec=True) @mock.patch.object(images, 'fetch', spec_set=True, autospec=True) def test_setup_deploy_iso_with_image_service( self, fetch_mock, setup_vmedia_mock, set_boot_device_mock): CONF.irmc.remote_image_share_root = '/' with 
task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.driver_info['irmc_deploy_iso'] = 'glance://deploy_iso' ramdisk_opts = {'a': 'b'} irmc_boot._setup_deploy_iso(task, ramdisk_opts) fetch_mock.assert_called_once_with( task.context, 'glance://deploy_iso', "/deploy-%s.iso" % self.node.uuid) setup_vmedia_mock.assert_called_once_with( task, "deploy-%s.iso" % self.node.uuid, ramdisk_opts) set_boot_device_mock.assert_called_once_with( task, boot_devices.CDROM) def test__get_deploy_iso_name(self): actual = irmc_boot._get_deploy_iso_name(self.node) expected = "deploy-%s.iso" % self.node.uuid self.assertEqual(expected, actual) def test__get_boot_iso_name(self): actual = irmc_boot._get_boot_iso_name(self.node) expected = "boot-%s.iso" % self.node.uuid self.assertEqual(expected, actual) @mock.patch.object(images, 'create_boot_iso', spec_set=True, autospec=True) @mock.patch.object(deploy_utils, 'get_boot_mode_for_deploy', spec_set=True, autospec=True) @mock.patch.object(images, 'get_image_properties', spec_set=True, autospec=True) @mock.patch.object(images, 'fetch', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_parse_deploy_info', spec_set=True, autospec=True) def test__prepare_boot_iso_file(self, deploy_info_mock, fetch_mock, image_props_mock, boot_mode_mock, create_boot_iso_mock): deploy_info_mock.return_value = {'irmc_boot_iso': 'irmc_boot.iso'} with task_manager.acquire(self.context, self.node.uuid) as task: irmc_boot._prepare_boot_iso(task, 'root-uuid') deploy_info_mock.assert_called_once_with(task.node) self.assertFalse(fetch_mock.called) self.assertFalse(image_props_mock.called) self.assertFalse(boot_mode_mock.called) self.assertFalse(create_boot_iso_mock.called) task.node.refresh() self.assertEqual('irmc_boot.iso', task.node.driver_internal_info['irmc_boot_iso']) @mock.patch.object(images, 'create_boot_iso', spec_set=True, autospec=True) @mock.patch.object(deploy_utils, 'get_boot_mode_for_deploy', spec_set=True, autospec=True) @mock.patch.object(images, 'get_image_properties', spec_set=True, autospec=True) @mock.patch.object(images, 'fetch', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_parse_deploy_info', spec_set=True, autospec=True) @mock.patch.object(service_utils, 'is_image_href_ordinary_file_name', spec_set=True, autospec=True) def test__prepare_boot_iso_fetch_ok(self, is_image_href_ordinary_file_name_mock, deploy_info_mock, fetch_mock, image_props_mock, boot_mode_mock, create_boot_iso_mock): CONF.irmc.remote_image_share_root = '/' image = '733d1c44-a2ea-414b-aca7-69decf20d810' is_image_href_ordinary_file_name_mock.return_value = False deploy_info_mock.return_value = {'irmc_boot_iso': image} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.instance_info['irmc_boot_iso'] = image irmc_boot._prepare_boot_iso(task, 'root-uuid') deploy_info_mock.assert_called_once_with(task.node) fetch_mock.assert_called_once_with( task.context, image, "/boot-%s.iso" % self.node.uuid) self.assertFalse(image_props_mock.called) self.assertFalse(boot_mode_mock.called) self.assertFalse(create_boot_iso_mock.called) task.node.refresh() self.assertEqual("boot-%s.iso" % self.node.uuid, task.node.driver_internal_info['irmc_boot_iso']) @mock.patch.object(images, 'create_boot_iso', spec_set=True, autospec=True) @mock.patch.object(deploy_utils, 'get_boot_mode_for_deploy', spec_set=True, autospec=True) @mock.patch.object(images, 'get_image_properties', spec_set=True, autospec=True) @mock.patch.object(images, 'fetch', 
spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_parse_deploy_info', spec_set=True, autospec=True) def test__prepare_boot_iso_create_ok(self, deploy_info_mock, fetch_mock, image_props_mock, boot_mode_mock, create_boot_iso_mock): CONF.pxe.pxe_append_params = 'kernel-params' deploy_info_mock.return_value = {'image_source': 'image-uuid'} image_props_mock.return_value = {'kernel_id': 'kernel_uuid', 'ramdisk_id': 'ramdisk_uuid'} CONF.irmc.remote_image_share_name = '/remote_image_share_root' boot_mode_mock.return_value = 'uefi' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: irmc_boot._prepare_boot_iso(task, 'root-uuid') self.assertFalse(fetch_mock.called) deploy_info_mock.assert_called_once_with(task.node) image_props_mock.assert_called_once_with( task.context, 'image-uuid', ['kernel_id', 'ramdisk_id']) create_boot_iso_mock.assert_called_once_with( task.context, '/remote_image_share_root/' + "boot-%s.iso" % self.node.uuid, 'kernel_uuid', 'ramdisk_uuid', 'file:///remote_image_share_root/' + "deploy-%s.iso" % self.node.uuid, 'root-uuid', 'kernel-params', 'uefi') task.node.refresh() self.assertEqual("boot-%s.iso" % self.node.uuid, task.node.driver_internal_info['irmc_boot_iso']) def test__get_floppy_image_name(self): actual = irmc_boot._get_floppy_image_name(self.node) expected = "image-%s.img" % self.node.uuid self.assertEqual(expected, actual) @mock.patch.object(shutil, 'copyfile', spec_set=True, autospec=True) @mock.patch.object(images, 'create_vfat_image', spec_set=True, autospec=True) @mock.patch.object(tempfile, 'NamedTemporaryFile', spec_set=True, autospec=True) def test__prepare_floppy_image(self, tempfile_mock, create_vfat_image_mock, copyfile_mock): mock_image_file_handle = mock.MagicMock(spec=file) mock_image_file_obj = mock.MagicMock() mock_image_file_obj.name = 'image-tmp-file' mock_image_file_handle.__enter__.return_value = mock_image_file_obj tempfile_mock.side_effect = iter([mock_image_file_handle]) deploy_args = {'arg1': 'val1', 'arg2': 'val2'} CONF.irmc.remote_image_share_name = '/remote_image_share_root' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: irmc_boot._prepare_floppy_image(task, deploy_args) create_vfat_image_mock.assert_called_once_with( 'image-tmp-file', parameters=deploy_args) copyfile_mock.assert_called_once_with( 'image-tmp-file', '/remote_image_share_root/' + "image-%s.img" % self.node.uuid) @mock.patch.object(shutil, 'copyfile', spec_set=True, autospec=True) @mock.patch.object(images, 'create_vfat_image', spec_set=True, autospec=True) @mock.patch.object(tempfile, 'NamedTemporaryFile', spec_set=True, autospec=True) def test__prepare_floppy_image_exception(self, tempfile_mock, create_vfat_image_mock, copyfile_mock): mock_image_file_handle = mock.MagicMock(spec=file) mock_image_file_obj = mock.MagicMock() mock_image_file_obj.name = 'image-tmp-file' mock_image_file_handle.__enter__.return_value = mock_image_file_obj tempfile_mock.side_effect = iter([mock_image_file_handle]) deploy_args = {'arg1': 'val1', 'arg2': 'val2'} CONF.irmc.remote_image_share_name = '/remote_image_share_root' copyfile_mock.side_effect = iter([IOError("fake error")]) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.IRMCOperationError, irmc_boot._prepare_floppy_image, task, deploy_args) create_vfat_image_mock.assert_called_once_with( 'image-tmp-file', parameters=deploy_args) copyfile_mock.assert_called_once_with( 'image-tmp-file', '/remote_image_share_root/' + 
"image-%s.img" % self.node.uuid) @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_setup_vmedia_for_boot', spec_set=True, autospec=True) def test_attach_boot_iso_if_needed( self, setup_vmedia_mock, set_boot_device_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.provision_state = states.ACTIVE task.node.driver_internal_info['irmc_boot_iso'] = 'boot-iso' irmc_boot.attach_boot_iso_if_needed(task) setup_vmedia_mock.assert_called_once_with(task, 'boot-iso') set_boot_device_mock.assert_called_once_with( task, boot_devices.CDROM) @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_setup_vmedia_for_boot', spec_set=True, autospec=True) def test_attach_boot_iso_if_needed_on_rebuild( self, setup_vmedia_mock, set_boot_device_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.provision_state = states.DEPLOYING task.node.driver_internal_info['irmc_boot_iso'] = 'boot-iso' irmc_boot.attach_boot_iso_if_needed(task) self.assertFalse(setup_vmedia_mock.called) self.assertFalse(set_boot_device_mock.called) @mock.patch.object(irmc_boot, '_attach_virtual_cd', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_attach_virtual_fd', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_prepare_floppy_image', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_detach_virtual_fd', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_detach_virtual_cd', spec_set=True, autospec=True) def test__setup_vmedia_for_boot_with_parameters(self, _detach_virtual_cd_mock, _detach_virtual_fd_mock, _prepare_floppy_image_mock, _attach_virtual_fd_mock, _attach_virtual_cd_mock): parameters = {'a': 'b'} iso_filename = 'deploy_iso_or_boot_iso' _prepare_floppy_image_mock.return_value = 'floppy_file_name' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: irmc_boot._setup_vmedia_for_boot(task, iso_filename, parameters) _detach_virtual_cd_mock.assert_called_once_with(task.node) _detach_virtual_fd_mock.assert_called_once_with(task.node) _prepare_floppy_image_mock.assert_called_once_with(task, parameters) _attach_virtual_fd_mock.assert_called_once_with(task.node, 'floppy_file_name') _attach_virtual_cd_mock.assert_called_once_with(task.node, iso_filename) @mock.patch.object(irmc_boot, '_attach_virtual_cd', autospec=True) @mock.patch.object(irmc_boot, '_detach_virtual_fd', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_detach_virtual_cd', spec_set=True, autospec=True) def test__setup_vmedia_for_boot_without_parameters( self, _detach_virtual_cd_mock, _detach_virtual_fd_mock, _attach_virtual_cd_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: irmc_boot._setup_vmedia_for_boot(task, 'bootable_iso_filename') _detach_virtual_cd_mock.assert_called_once_with(task.node) _detach_virtual_fd_mock.assert_called_once_with(task.node) _attach_virtual_cd_mock.assert_called_once_with( task.node, 'bootable_iso_filename') @mock.patch.object(irmc_boot, '_get_deploy_iso_name', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_get_floppy_image_name', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_remove_share_file', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_detach_virtual_fd', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_detach_virtual_cd', spec_set=True, 
autospec=True) def test__cleanup_vmedia_boot_ok(self, _detach_virtual_cd_mock, _detach_virtual_fd_mock, _remove_share_file_mock, _get_floppy_image_name_mock, _get_deploy_iso_name_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: irmc_boot._cleanup_vmedia_boot(task) _detach_virtual_cd_mock.assert_called_once_with(task.node) _detach_virtual_fd_mock.assert_called_once_with(task.node) _get_floppy_image_name_mock.assert_called_once_with(task.node) _get_deploy_iso_name_mock.assert_called_once_with(task.node) self.assertEqual(2, _remove_share_file_mock.call_count) _remove_share_file_mock.assert_has_calls( [mock.call(_get_floppy_image_name_mock(task.node)), mock.call(_get_deploy_iso_name_mock(task.node))]) @mock.patch.object(ironic_utils, 'unlink_without_raise', spec_set=True, autospec=True) def test__remove_share_file(self, unlink_without_raise_mock): CONF.irmc.remote_image_share_name = '/' irmc_boot._remove_share_file("boot.iso") unlink_without_raise_mock.assert_called_once_with('/boot.iso') @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True, autospec=True) def test__attach_virtual_cd_ok(self, get_irmc_client_mock): irmc_client = get_irmc_client_mock.return_value irmc_boot.scci.get_virtual_cd_set_params_cmd = ( mock.MagicMock()) cd_set_params = (irmc_boot.scci .get_virtual_cd_set_params_cmd.return_value) CONF.irmc.remote_image_server = '10.20.30.40' CONF.irmc.remote_image_user_domain = 'local' CONF.irmc.remote_image_share_type = 'NFS' CONF.irmc.remote_image_share_name = 'share' CONF.irmc.remote_image_user_name = 'admin' CONF.irmc.remote_image_user_password = 'admin0' irmc_boot.scci.get_share_type.return_value = 0 with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: irmc_boot._attach_virtual_cd(task.node, 'iso_filename') get_irmc_client_mock.assert_called_once_with(task.node) (irmc_boot.scci.get_virtual_cd_set_params_cmd .assert_called_once_with)('10.20.30.40', 'local', 0, 'share', 'iso_filename', 'admin', 'admin0') irmc_client.assert_has_calls( [mock.call(cd_set_params, async=False), mock.call(irmc_boot.scci.MOUNT_CD, async=False)]) @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True, autospec=True) def test__attach_virtual_cd_fail(self, get_irmc_client_mock): irmc_client = get_irmc_client_mock.return_value irmc_client.side_effect = Exception("fake error") irmc_boot.scci.SCCIClientError = Exception with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: e = self.assertRaises(exception.IRMCOperationError, irmc_boot._attach_virtual_cd, task.node, 'iso_filename') get_irmc_client_mock.assert_called_once_with(task.node) self.assertEqual("iRMC Inserting virtual cdrom failed. 
" + "Reason: fake error", str(e)) @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True, autospec=True) def test__detach_virtual_cd_ok(self, get_irmc_client_mock): irmc_client = get_irmc_client_mock.return_value with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: irmc_boot._detach_virtual_cd(task.node) irmc_client.assert_called_once_with(irmc_boot.scci.UNMOUNT_CD) @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True, autospec=True) def test__detach_virtual_cd_fail(self, get_irmc_client_mock): irmc_client = get_irmc_client_mock.return_value irmc_client.side_effect = Exception("fake error") irmc_boot.scci.SCCIClientError = Exception with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: e = self.assertRaises(exception.IRMCOperationError, irmc_boot._detach_virtual_cd, task.node) self.assertEqual("iRMC Ejecting virtual cdrom failed. " + "Reason: fake error", str(e)) @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True, autospec=True) def test__attach_virtual_fd_ok(self, get_irmc_client_mock): irmc_client = get_irmc_client_mock.return_value irmc_boot.scci.get_virtual_fd_set_params_cmd = ( mock.MagicMock(sepc_set=[])) fd_set_params = (irmc_boot.scci .get_virtual_fd_set_params_cmd.return_value) CONF.irmc.remote_image_server = '10.20.30.40' CONF.irmc.remote_image_user_domain = 'local' CONF.irmc.remote_image_share_type = 'NFS' CONF.irmc.remote_image_share_name = 'share' CONF.irmc.remote_image_user_name = 'admin' CONF.irmc.remote_image_user_password = 'admin0' irmc_boot.scci.get_share_type.return_value = 0 with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: irmc_boot._attach_virtual_fd(task.node, 'floppy_image_filename') get_irmc_client_mock.assert_called_once_with(task.node) (irmc_boot.scci.get_virtual_fd_set_params_cmd .assert_called_once_with)('10.20.30.40', 'local', 0, 'share', 'floppy_image_filename', 'admin', 'admin0') irmc_client.assert_has_calls( [mock.call(fd_set_params, async=False), mock.call(irmc_boot.scci.MOUNT_FD, async=False)]) @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True, autospec=True) def test__attach_virtual_fd_fail(self, get_irmc_client_mock): irmc_client = get_irmc_client_mock.return_value irmc_client.side_effect = Exception("fake error") irmc_boot.scci.SCCIClientError = Exception with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: e = self.assertRaises(exception.IRMCOperationError, irmc_boot._attach_virtual_fd, task.node, 'iso_filename') get_irmc_client_mock.assert_called_once_with(task.node) self.assertEqual("iRMC Inserting virtual floppy failed. 
" + "Reason: fake error", str(e)) @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True, autospec=True) def test__detach_virtual_fd_ok(self, get_irmc_client_mock): irmc_client = get_irmc_client_mock.return_value with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: irmc_boot._detach_virtual_fd(task.node) irmc_client.assert_called_once_with(irmc_boot.scci.UNMOUNT_FD) @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True, autospec=True) def test__detach_virtual_fd_fail(self, get_irmc_client_mock): irmc_client = get_irmc_client_mock.return_value irmc_client.side_effect = Exception("fake error") irmc_boot.scci.SCCIClientError = Exception with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: e = self.assertRaises(exception.IRMCOperationError, irmc_boot._detach_virtual_fd, task.node) self.assertEqual("iRMC Ejecting virtual floppy failed. " "Reason: fake error", str(e)) @mock.patch.object(irmc_boot, '_parse_config_option', spec_set=True, autospec=True) def test_check_share_fs_mounted_ok(self, parse_conf_mock): # Note(naohirot): mock.patch.stop() and mock.patch.start() don't work. # therefor monkey patching is used to # irmc_boot.check_share_fs_mounted. # irmc_boot.check_share_fs_mounted is mocked in # third_party_driver_mocks.py. # irmc_boot.check_share_fs_mounted_orig is the real function. CONF.irmc.remote_image_share_root = '/' CONF.irmc.remote_image_share_type = 'nfs' result = irmc_boot.check_share_fs_mounted_orig() parse_conf_mock.assert_called_once_with() self.assertIsNone(result) @mock.patch.object(irmc_boot, '_parse_config_option', spec_set=True, autospec=True) def test_check_share_fs_mounted_exception(self, parse_conf_mock): # Note(naohirot): mock.patch.stop() and mock.patch.start() don't work. # therefor monkey patching is used to # irmc_boot.check_share_fs_mounted. # irmc_boot.check_share_fs_mounted is mocked in # third_party_driver_mocks.py. # irmc_boot.check_share_fs_mounted_orig is the real function. 
CONF.irmc.remote_image_share_root = '/etc' CONF.irmc.remote_image_share_type = 'cifs' self.assertRaises(exception.IRMCSharedFileSystemNotMounted, irmc_boot.check_share_fs_mounted_orig) parse_conf_mock.assert_called_once_with() class IRMCVirtualMediaBootTestCase(db_base.DbTestCase): def setUp(self): irmc_boot.check_share_fs_mounted_patcher.start() self.addCleanup(irmc_boot.check_share_fs_mounted_patcher.stop) super(IRMCVirtualMediaBootTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="iscsi_irmc") self.node = obj_utils.create_test_node( self.context, driver='iscsi_irmc', driver_info=INFO_DICT) @mock.patch.object(deploy_utils, 'validate_image_properties', spec_set=True, autospec=True) @mock.patch.object(service_utils, 'is_glance_image', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_parse_deploy_info', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, 'check_share_fs_mounted', spec_set=True, autospec=True) def test_validate_whole_disk_image(self, check_share_fs_mounted_mock, deploy_info_mock, is_glance_image_mock, validate_prop_mock): d_info = {'image_source': '733d1c44-a2ea-414b-aca7-69decf20d810'} deploy_info_mock.return_value = d_info with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.driver_internal_info = {'is_whole_disk_image': True} task.driver.boot.validate(task) check_share_fs_mounted_mock.assert_called_once_with() deploy_info_mock.assert_called_once_with(task.node) self.assertFalse(is_glance_image_mock.called) validate_prop_mock.assert_called_once_with(task.context, d_info, []) @mock.patch.object(deploy_utils, 'validate_image_properties', spec_set=True, autospec=True) @mock.patch.object(service_utils, 'is_glance_image', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_parse_deploy_info', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, 'check_share_fs_mounted', spec_set=True, autospec=True) def test_validate_glance_image(self, check_share_fs_mounted_mock, deploy_info_mock, is_glance_image_mock, validate_prop_mock): d_info = {'image_source': '733d1c44-a2ea-414b-aca7-69decf20d810'} deploy_info_mock.return_value = d_info is_glance_image_mock.return_value = True with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.boot.validate(task) check_share_fs_mounted_mock.assert_called_once_with() deploy_info_mock.assert_called_once_with(task.node) validate_prop_mock.assert_called_once_with( task.context, d_info, ['kernel_id', 'ramdisk_id']) @mock.patch.object(deploy_utils, 'validate_image_properties', spec_set=True, autospec=True) @mock.patch.object(service_utils, 'is_glance_image', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_parse_deploy_info', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, 'check_share_fs_mounted', spec_set=True, autospec=True) def test_validate_non_glance_image(self, check_share_fs_mounted_mock, deploy_info_mock, is_glance_image_mock, validate_prop_mock): d_info = {'image_source': '733d1c44-a2ea-414b-aca7-69decf20d810'} deploy_info_mock.return_value = d_info is_glance_image_mock.return_value = False with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.boot.validate(task) check_share_fs_mounted_mock.assert_called_once_with() deploy_info_mock.assert_called_once_with(task.node) validate_prop_mock.assert_called_once_with( task.context, d_info, ['kernel', 'ramdisk']) @mock.patch.object(irmc_boot, '_setup_deploy_iso', spec_set=True, autospec=True) 
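# The three validate() tests above differ only in which image properties must
# be checked: whole-disk images require none, Glance images require
# 'kernel_id'/'ramdisk_id', and non-Glance images require 'kernel'/'ramdisk'.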
@mock.patch.object(deploy_utils, 'get_single_nic_with_vif_port_id', spec_set=True, autospec=True) def test_prepare_ramdisk(self, get_single_nic_with_vif_port_id_mock, _setup_deploy_iso_mock): instance_info = self.node.instance_info instance_info['irmc_boot_iso'] = 'glance://abcdef' instance_info['image_source'] = '6b2f0c0c-79e8-4db6-842e-43c9764204af' self.node.instance_info = instance_info self.node.save() ramdisk_params = {'a': 'b'} get_single_nic_with_vif_port_id_mock.return_value = '12:34:56:78:90:ab' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.boot.prepare_ramdisk(task, ramdisk_params) expected_ramdisk_opts = {'a': 'b', 'BOOTIF': '12:34:56:78:90:ab'} get_single_nic_with_vif_port_id_mock.assert_called_once_with( task) _setup_deploy_iso_mock.assert_called_once_with( task, expected_ramdisk_opts) self.assertEqual('glance://abcdef', self.node.instance_info['irmc_boot_iso']) @mock.patch.object(irmc_boot, '_cleanup_vmedia_boot', spec_set=True, autospec=True) def test_clean_up_ramdisk(self, _cleanup_vmedia_boot_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.boot.clean_up_ramdisk(task) _cleanup_vmedia_boot_mock.assert_called_once_with(task) @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_cleanup_vmedia_boot', spec_set=True, autospec=True) def _test_prepare_instance_whole_disk_image( self, _cleanup_vmedia_boot_mock, set_boot_device_mock): self.node.driver_internal_info = {'is_whole_disk_image': True} self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.boot.prepare_instance(task) _cleanup_vmedia_boot_mock.assert_called_once_with(task) set_boot_device_mock.assert_called_once_with(task, boot_devices.DISK, persistent=True) def test_prepare_instance_whole_disk_image_local(self): self.node.instance_info = {'capabilities': '{"boot_option": "local"}'} self.node.save() self._test_prepare_instance_whole_disk_image() def test_prepare_instance_whole_disk_image(self): self._test_prepare_instance_whole_disk_image() @mock.patch.object(irmc_boot.IRMCVirtualMediaBoot, '_configure_vmedia_boot', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_cleanup_vmedia_boot', spec_set=True, autospec=True) def test_prepare_instance_partition_image( self, _cleanup_vmedia_boot_mock, _configure_vmedia_mock): self.node.driver_internal_info = {'root_uuid_or_disk_id': "some_uuid"} self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.boot.prepare_instance(task) _cleanup_vmedia_boot_mock.assert_called_once_with(task) _configure_vmedia_mock.assert_called_once_with(mock.ANY, task, "some_uuid") @mock.patch.object(irmc_boot, '_cleanup_vmedia_boot', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_remove_share_file', spec_set=True, autospec=True) def test_clean_up_instance(self, _remove_share_file_mock, _cleanup_vmedia_boot_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.instance_info['irmc_boot_iso'] = 'glance://deploy_iso' task.node.driver_internal_info['irmc_boot_iso'] = 'irmc_boot.iso' task.node.driver_internal_info = {'root_uuid_or_disk_id': ( "12312642-09d3-467f-8e09-12385826a123")} task.driver.boot.clean_up_instance(task) _remove_share_file_mock.assert_called_once_with( irmc_boot._get_boot_iso_name(task.node)) self.assertNotIn('irmc_boot_iso', task.node.driver_internal_info) 
self.assertNotIn('root_uuid_or_disk_id', task.node.driver_internal_info) _cleanup_vmedia_boot_mock.assert_called_once_with(task) @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_setup_vmedia_for_boot', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, '_prepare_boot_iso', spec_set=True, autospec=True) def test__configure_vmedia_boot(self, _prepare_boot_iso_mock, _setup_vmedia_for_boot_mock, node_set_boot_device): root_uuid_or_disk_id = {'root uuid': 'root_uuid'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.driver_internal_info['irmc_boot_iso'] = 'boot.iso' task.driver.boot._configure_vmedia_boot( task, root_uuid_or_disk_id) _prepare_boot_iso_mock.assert_called_once_with( task, root_uuid_or_disk_id) _setup_vmedia_for_boot_mock.assert_called_once_with( task, 'boot.iso') node_set_boot_device.assert_called_once_with( task, boot_devices.CDROM, persistent=True) def test_remote_image_share_type_values(self): cfg.CONF.set_override('remote_image_share_type', 'cifs', 'irmc', enforce_type=True) cfg.CONF.set_override('remote_image_share_type', 'nfs', 'irmc', enforce_type=True) self.assertRaises(ValueError, cfg.CONF.set_override, 'remote_image_share_type', 'fake', 'irmc', enforce_type=True) ironic-5.1.0/ironic/tests/unit/drivers/modules/irmc/fake_sensors_data_ok.xml0000664000567000056710000001022112674513466030525 0ustar jenkinsjenkins00000000000000 [fake_sensors_data_ok.xml: sensor-data XML fixture; the tag markup was lost in extraction. Recoverable content: four sensor records -- "Ambient" (Temperature, degree C), "Systemboard 1" (Temperature, degree C), "FAN1 SYS" (Fan, RPM) and "FAN2 SYS" (Fan) -- each with numeric sensor IDs and threshold values.] ironic-5.1.0/ironic/tests/unit/drivers/modules/irmc/test_common.py0000664000567000056710000002217412674513466026552 0ustar jenkinsjenkins00000000000000# Copyright 2015 FUJITSU LIMITED # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Test class for common methods used by iRMC modules.
""" import mock from oslo_config import cfg from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.irmc import common as irmc_common from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.drivers import third_party_driver_mock_specs \ as mock_specs from ironic.tests.unit.objects import utils as obj_utils class IRMCValidateParametersTestCase(db_base.DbTestCase): def setUp(self): super(IRMCValidateParametersTestCase, self).setUp() self.node = obj_utils.create_test_node( self.context, driver='fake_irmc', driver_info=db_utils.get_test_irmc_info()) def test_parse_driver_info(self): info = irmc_common.parse_driver_info(self.node) self.assertIsNotNone(info.get('irmc_address')) self.assertIsNotNone(info.get('irmc_username')) self.assertIsNotNone(info.get('irmc_password')) self.assertIsNotNone(info.get('irmc_client_timeout')) self.assertIsNotNone(info.get('irmc_port')) self.assertIsNotNone(info.get('irmc_auth_method')) self.assertIsNotNone(info.get('irmc_sensor_method')) self.assertIsNotNone(info.get('irmc_snmp_version')) self.assertIsNotNone(info.get('irmc_snmp_port')) self.assertIsNotNone(info.get('irmc_snmp_community')) self.assertFalse(info.get('irmc_snmp_security')) def test_parse_driver_option_default(self): self.node.driver_info = { "irmc_address": "1.2.3.4", "irmc_username": "admin0", "irmc_password": "fake0", } info = irmc_common.parse_driver_info(self.node) self.assertEqual('basic', info.get('irmc_auth_method')) self.assertEqual(443, info.get('irmc_port')) self.assertEqual(60, info.get('irmc_client_timeout')) self.assertEqual('ipmitool', info.get('irmc_sensor_method')) def test_parse_driver_info_missing_address(self): del self.node.driver_info['irmc_address'] self.assertRaises(exception.MissingParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_missing_username(self): del self.node.driver_info['irmc_username'] self.assertRaises(exception.MissingParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_missing_password(self): del self.node.driver_info['irmc_password'] self.assertRaises(exception.MissingParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_invalid_timeout(self): self.node.driver_info['irmc_client_timeout'] = 'qwe' self.assertRaises(exception.InvalidParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_invalid_port(self): self.node.driver_info['irmc_port'] = 'qwe' self.assertRaises(exception.InvalidParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_invalid_auth_method(self): self.node.driver_info['irmc_auth_method'] = 'qwe' self.assertRaises(exception.InvalidParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_invalid_sensor_method(self): self.node.driver_info['irmc_sensor_method'] = 'qwe' self.assertRaises(exception.InvalidParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_missing_multiple_params(self): del self.node.driver_info['irmc_password'] del self.node.driver_info['irmc_address'] try: irmc_common.parse_driver_info(self.node) self.fail("parse_driver_info did not throw exception.") except exception.MissingParameterValue as e: self.assertIn('irmc_password', str(e)) self.assertIn('irmc_address', str(e)) def test_parse_driver_info_invalid_snmp_version(self): 
self.node.driver_info['irmc_snmp_version'] = 'v3x' self.assertRaises(exception.InvalidParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_invalid_snmp_port(self): self.node.driver_info['irmc_snmp_port'] = '161' self.assertRaises(exception.InvalidParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_invalid_snmp_community(self): self.node.driver_info['irmc_snmp_version'] = 'v2c' self.node.driver_info['irmc_snmp_community'] = 100 self.assertRaises(exception.InvalidParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_invalid_snmp_security(self): self.node.driver_info['irmc_snmp_version'] = 'v3' self.node.driver_info['irmc_snmp_security'] = 100 self.assertRaises(exception.InvalidParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_empty_snmp_security(self): self.node.driver_info['irmc_snmp_version'] = 'v3' self.node.driver_info['irmc_snmp_security'] = '' self.assertRaises(exception.InvalidParameterValue, irmc_common.parse_driver_info, self.node) class IRMCCommonMethodsTestCase(db_base.DbTestCase): def setUp(self): super(IRMCCommonMethodsTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="fake_irmc") self.info = db_utils.get_test_irmc_info() self.node = obj_utils.create_test_node( self.context, driver='fake_irmc', driver_info=self.info) @mock.patch.object(irmc_common, 'scci', spec_set=mock_specs.SCCICLIENT_IRMC_SCCI_SPEC) def test_get_irmc_client(self, mock_scci): self.info['irmc_port'] = 80 self.info['irmc_auth_method'] = 'digest' self.info['irmc_client_timeout'] = 60 mock_scci.get_client.return_value = 'get_client' returned_mock_scci_get_client = irmc_common.get_irmc_client(self.node) mock_scci.get_client.assert_called_with( self.info['irmc_address'], self.info['irmc_username'], self.info['irmc_password'], port=self.info['irmc_port'], auth_method=self.info['irmc_auth_method'], client_timeout=self.info['irmc_client_timeout']) self.assertEqual('get_client', returned_mock_scci_get_client) def test_update_ipmi_properties(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ipmi_info = { "ipmi_address": "1.2.3.4", "ipmi_username": "admin0", "ipmi_password": "fake0", } task.node.driver_info = self.info irmc_common.update_ipmi_properties(task) actual_info = task.node.driver_info expected_info = dict(self.info, **ipmi_info) self.assertEqual(expected_info, actual_info) @mock.patch.object(irmc_common, 'scci', spec_set=mock_specs.SCCICLIENT_IRMC_SCCI_SPEC) def test_get_irmc_report(self, mock_scci): self.info['irmc_port'] = 80 self.info['irmc_auth_method'] = 'digest' self.info['irmc_client_timeout'] = 60 mock_scci.get_report.return_value = 'get_report' returned_mock_scci_get_report = irmc_common.get_irmc_report(self.node) mock_scci.get_report.assert_called_with( self.info['irmc_address'], self.info['irmc_username'], self.info['irmc_password'], port=self.info['irmc_port'], auth_method=self.info['irmc_auth_method'], client_timeout=self.info['irmc_client_timeout']) self.assertEqual('get_report', returned_mock_scci_get_report) def test_out_range_port(self): self.assertRaises(ValueError, cfg.CONF.set_override, 'port', 60, 'irmc', enforce_type=True) def test_out_range_auth_method(self): self.assertRaises(ValueError, cfg.CONF.set_override, 'auth_method', 'fake', 'irmc', enforce_type=True) def test_out_range_sensor_method(self): self.assertRaises(ValueError, cfg.CONF.set_override, 'sensor_method', 'fake', 'irmc', enforce_type=True) 
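# ---------------------------------------------------------------------
# The tests above pin down the wiring between a node's driver_info and
# scciclient: get_irmc_client()/get_irmc_report() forward address,
# username, password, port, auth_method and client_timeout to
# scci.get_client()/scci.get_report(). A minimal sketch of how a caller
# typically consumes that wiring, assuming scci is scciclient.irmc.scci
# as mocked above (illustrative sketch, not code from this tree):
#
#   client = irmc_common.get_irmc_client(node)
#   client(scci.POWER_ON)   # raises scci.SCCIClientError on failure
# ---------------------------------------------------------------------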
ironic-5.1.0/ironic/tests/unit/drivers/modules/irmc/test_inspect.py0000664000567000056710000002632212674513466026726 0ustar jenkinsjenkins00000000000000# Copyright 2015 FUJITSU LIMITED # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Test class for iRMC Inspection Driver """ import mock from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules.irmc import common as irmc_common from ironic.drivers.modules.irmc import inspect as irmc_inspect from ironic import objects from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.drivers import third_party_driver_mock_specs \ as mock_specs from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = db_utils.get_test_irmc_info() class IRMCInspectInternalMethodsTestCase(db_base.DbTestCase): def setUp(self): super(IRMCInspectInternalMethodsTestCase, self).setUp() driver_info = INFO_DICT mgr_utils.mock_the_extension_manager(driver='fake_irmc') self.node = obj_utils.create_test_node(self.context, driver='fake_irmc', driver_info=driver_info) @mock.patch('ironic.drivers.modules.irmc.inspect.snmp.SNMPClient', spec_set=True, autospec=True) def test__get_mac_addresses(self, snmpclient_mock): snmpclient_mock.return_value = mock.Mock( **{'get_next.side_effect': [[2, 2, 7], ['aa:aa:aa:aa:aa:aa', 'bb:bb:bb:bb:bb:bb', 'cc:cc:cc:cc:cc:cc']]}) inspected_macs = ['aa:aa:aa:aa:aa:aa', 'bb:bb:bb:bb:bb:bb'] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: result = irmc_inspect._get_mac_addresses(task.node) self.assertEqual(inspected_macs, result) @mock.patch.object(irmc_inspect, '_get_mac_addresses', spec_set=True, autospec=True) @mock.patch.object(irmc_inspect, 'scci', spec_set=mock_specs.SCCICLIENT_IRMC_SCCI_SPEC) @mock.patch.object(irmc_common, 'get_irmc_report', spec_set=True, autospec=True) def test__inspect_hardware( self, get_irmc_report_mock, scci_mock, _get_mac_addresses_mock): inspected_props = { 'memory_mb': '1024', 'local_gb': 10, 'cpus': 2, 'cpu_arch': 'x86_64'} inspected_macs = ['aa:aa:aa:aa:aa:aa', 'bb:bb:bb:bb:bb:bb'] report = 'fake_report' get_irmc_report_mock.return_value = report scci_mock.get_essential_properties.return_value = inspected_props _get_mac_addresses_mock.return_value = inspected_macs with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: result = irmc_inspect._inspect_hardware(task.node) get_irmc_report_mock.assert_called_once_with(task.node) scci_mock.get_essential_properties.assert_called_once_with( report, irmc_inspect.IRMCInspect.ESSENTIAL_PROPERTIES) self.assertEqual((inspected_props, inspected_macs), result) @mock.patch.object(irmc_inspect, '_get_mac_addresses', spec_set=True, autospec=True) @mock.patch.object(irmc_inspect, 'scci', spec_set=mock_specs.SCCICLIENT_IRMC_SCCI_SPEC) @mock.patch.object(irmc_common, 'get_irmc_report', spec_set=True, autospec=True) def 
test__inspect_hardware_exception( self, get_irmc_report_mock, scci_mock, _get_mac_addresses_mock): report = 'fake_report' get_irmc_report_mock.return_value = report side_effect = exception.SNMPFailure("fake exception") scci_mock.get_essential_properties.side_effect = side_effect irmc_inspect.scci.SCCIInvalidInputError = Exception irmc_inspect.scci.SCCIClientError = Exception with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.HardwareInspectionFailure, irmc_inspect._inspect_hardware, task.node) get_irmc_report_mock.assert_called_once_with(task.node) self.assertFalse(_get_mac_addresses_mock.called) class IRMCInspectTestCase(db_base.DbTestCase): def setUp(self): super(IRMCInspectTestCase, self).setUp() driver_info = INFO_DICT mgr_utils.mock_the_extension_manager(driver="fake_irmc") self.node = obj_utils.create_test_node(self.context, driver='fake_irmc', driver_info=driver_info) def test_get_properties(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: properties = task.driver.get_properties() for prop in irmc_common.COMMON_PROPERTIES: self.assertIn(prop, properties) @mock.patch.object(irmc_common, 'parse_driver_info', spec_set=True, autospec=True) def test_validate(self, parse_driver_info_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.power.validate(task) parse_driver_info_mock.assert_called_once_with(task.node) @mock.patch.object(irmc_common, 'parse_driver_info', spec_set=True, autospec=True) def test_validate_fail(self, parse_driver_info_mock): side_effect = exception.InvalidParameterValue("Invalid Input") parse_driver_info_mock.side_effect = side_effect with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.power.validate, task) @mock.patch.object(irmc_inspect.LOG, 'info', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.irmc.inspect.objects.Port', spec_set=True, autospec=True) @mock.patch.object(irmc_inspect, '_inspect_hardware', spec_set=True, autospec=True) def test_inspect_hardware(self, _inspect_hardware_mock, port_mock, info_mock): inspected_props = { 'memory_mb': '1024', 'local_gb': 10, 'cpus': 2, 'cpu_arch': 'x86_64'} inspected_macs = ['aa:aa:aa:aa:aa:aa', 'bb:bb:bb:bb:bb:bb'] _inspect_hardware_mock.return_value = (inspected_props, inspected_macs) new_port_mock1 = mock.MagicMock(spec=objects.Port) new_port_mock2 = mock.MagicMock(spec=objects.Port) port_mock.side_effect = [new_port_mock1, new_port_mock2] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: result = task.driver.inspect.inspect_hardware(task) node_id = task.node.id _inspect_hardware_mock.assert_called_once_with(task.node) # note (naohirot): # as of mock 1.2, assert_has_calls has a bug which returns # "AssertionError: Calls not found." if mock_calls has class # method call such as below: # AssertionError: Calls not found. 
# Expected: [call.list_by_node_id( # <RequestContext object>, # 1)] # Actual: [call.list_by_node_id( # <RequestContext object>, # 1)] # # workaround, remove class method call from mock_calls list del port_mock.mock_calls[0] port_mock.assert_has_calls([ # workaround, comment out class method call from expected list # mock.call.list_by_node_id(task.context, node_id), mock.call(task.context, address=inspected_macs[0], node_id=node_id), mock.call(task.context, address=inspected_macs[1], node_id=node_id) ]) new_port_mock1.create.assert_called_once_with() new_port_mock2.create.assert_called_once_with() self.assertTrue(info_mock.called) task.node.refresh() self.assertEqual(inspected_props, task.node.properties) self.assertEqual(states.MANAGEABLE, result) @mock.patch('ironic.objects.Port', spec_set=True, autospec=True) @mock.patch.object(irmc_inspect, '_inspect_hardware', spec_set=True, autospec=True) def test_inspect_hardware_inspect_exception( self, _inspect_hardware_mock, port_mock): side_effect = exception.HardwareInspectionFailure("fake exception") _inspect_hardware_mock.side_effect = side_effect with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.HardwareInspectionFailure, task.driver.inspect.inspect_hardware, task) self.assertFalse(port_mock.called) @mock.patch.object(irmc_inspect.LOG, 'warn', spec_set=True, autospec=True) @mock.patch('ironic.objects.Port', spec_set=True, autospec=True) @mock.patch.object(irmc_inspect, '_inspect_hardware', spec_set=True, autospec=True) def test_inspect_hardware_mac_already_exist( self, _inspect_hardware_mock, port_mock, warn_mock): inspected_props = { 'memory_mb': '1024', 'local_gb': 10, 'cpus': 2, 'cpu_arch': 'x86_64'} inspected_macs = ['aa:aa:aa:aa:aa:aa', 'bb:bb:bb:bb:bb:bb'] _inspect_hardware_mock.return_value = (inspected_props, inspected_macs) side_effect = exception.MACAlreadyExists("fake exception") new_port_mock = port_mock.return_value new_port_mock.create.side_effect = side_effect with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: result = task.driver.inspect.inspect_hardware(task) _inspect_hardware_mock.assert_called_once_with(task.node) self.assertEqual(2, port_mock.call_count) task.node.refresh() self.assertEqual(inspected_props, task.node.properties) self.assertEqual(states.MANAGEABLE, result) ironic-5.1.0/ironic/tests/unit/drivers/modules/irmc/test_power.py0000664000567000056710000002060112674513466026407 0ustar jenkinsjenkins00000000000000# Copyright 2015 FUJITSU LIMITED # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
""" Test class for iRMC Power Driver """ import mock from oslo_config import cfg from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules.irmc import boot as irmc_boot from ironic.drivers.modules.irmc import common as irmc_common from ironic.drivers.modules.irmc import power as irmc_power from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = db_utils.get_test_irmc_info() CONF = cfg.CONF @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True, autospec=True) class IRMCPowerInternalMethodsTestCase(db_base.DbTestCase): def setUp(self): super(IRMCPowerInternalMethodsTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver='fake_irmc') driver_info = INFO_DICT self.node = db_utils.create_test_node( driver='fake_irmc', driver_info=driver_info, instance_uuid='instance_uuid_123') @mock.patch.object(irmc_boot, 'attach_boot_iso_if_needed') def test__set_power_state_power_on_ok( self, attach_boot_iso_if_needed_mock, get_irmc_client_mock): irmc_client = get_irmc_client_mock.return_value target_state = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: irmc_power._set_power_state(task, target_state) attach_boot_iso_if_needed_mock.assert_called_once_with(task) irmc_client.assert_called_once_with(irmc_power.scci.POWER_ON) def test__set_power_state_power_off_ok(self, get_irmc_client_mock): irmc_client = get_irmc_client_mock.return_value target_state = states.POWER_OFF with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: irmc_power._set_power_state(task, target_state) irmc_client.assert_called_once_with(irmc_power.scci.POWER_OFF) @mock.patch.object(irmc_boot, 'attach_boot_iso_if_needed') def test__set_power_state_power_reboot_ok( self, attach_boot_iso_if_needed_mock, get_irmc_client_mock): irmc_client = get_irmc_client_mock.return_value target_state = states.REBOOT with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: irmc_power._set_power_state(task, target_state) attach_boot_iso_if_needed_mock.assert_called_once_with(task) irmc_client.assert_called_once_with(irmc_power.scci.POWER_RESET) def test__set_power_state_invalid_target_state(self, get_irmc_client_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.InvalidParameterValue, irmc_power._set_power_state, task, states.ERROR) def test__set_power_state_scci_exception(self, get_irmc_client_mock): irmc_client = get_irmc_client_mock.return_value irmc_client.side_effect = Exception() irmc_power.scci.SCCIClientError = Exception with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.IRMCOperationError, irmc_power._set_power_state, task, states.POWER_ON) class IRMCPowerTestCase(db_base.DbTestCase): def setUp(self): super(IRMCPowerTestCase, self).setUp() driver_info = INFO_DICT mgr_utils.mock_the_extension_manager(driver="fake_irmc") self.node = obj_utils.create_test_node(self.context, driver='fake_irmc', driver_info=driver_info) def test_get_properties(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: properties = task.driver.get_properties() for prop in irmc_common.COMMON_PROPERTIES: self.assertIn(prop, properties) @mock.patch.object(irmc_common, 
'parse_driver_info', spec_set=True, autospec=True) def test_validate(self, mock_drvinfo): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.power.validate(task) mock_drvinfo.assert_called_once_with(task.node) @mock.patch.object(irmc_common, 'parse_driver_info', spec_set=True, autospec=True) def test_validate_fail(self, mock_drvinfo): side_effect = iter([exception.InvalidParameterValue("Invalid Input")]) mock_drvinfo.side_effect = side_effect with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.power.validate, task) @mock.patch('ironic.drivers.modules.irmc.power.ipmitool.IPMIPower', spec_set=True, autospec=True) def test_get_power_state(self, mock_IPMIPower): ipmi_power = mock_IPMIPower.return_value ipmi_power.get_power_state.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertEqual(states.POWER_ON, task.driver.power.get_power_state(task)) ipmi_power.get_power_state.assert_called_once_with(task) @mock.patch.object(irmc_power, '_set_power_state', spec_set=True, autospec=True) def test_set_power_state(self, mock_set_power): mock_set_power.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.set_power_state(task, states.POWER_ON) mock_set_power.assert_called_once_with(task, states.POWER_ON) @mock.patch.object(irmc_power, '_set_power_state', spec_set=True, autospec=True) @mock.patch.object(irmc_power.IRMCPower, 'get_power_state', spec_set=True, autospec=True) def test_reboot_reboot(self, mock_get_power, mock_set_power): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: mock_get_power.return_value = states.POWER_ON task.driver.power.reboot(task) mock_get_power.assert_called_once_with( task.driver.power, task) mock_set_power.assert_called_once_with(task, states.REBOOT) @mock.patch.object(irmc_power, '_set_power_state', spec_set=True, autospec=True) @mock.patch.object(irmc_power.IRMCPower, 'get_power_state', spec_set=True, autospec=True) def test_reboot_power_on(self, mock_get_power, mock_set_power): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: mock_get_power.return_value = states.POWER_OFF task.driver.power.reboot(task) mock_get_power.assert_called_once_with( task.driver.power, task) mock_set_power.assert_called_once_with(task, states.POWER_ON) ironic-5.1.0/ironic/tests/unit/drivers/modules/irmc/__init__.py0000664000567000056710000000000012674513466025742 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/drivers/modules/irmc/fake_sensors_data_ng.xml0000664000567000056710000001025712674513466030531 0ustar jenkinsjenkins00000000000000 [fake_sensors_data_ng.xml: the deliberately malformed counterpart of fake_sensors_data_ok.xml; its tag markup was likewise lost in extraction. Recoverable content: the same four sensor records with required fields omitted from each (e.g. the "Ambient" record lacks its sensor type and the first fan record lacks its name), apparently used to exercise the failure path of sensor-data parsing.] ironic-5.1.0/ironic/tests/unit/drivers/modules/cimc/0000775000567000056710000000000012674513633023620 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/drivers/modules/cimc/test_management.py0000664000567000056710000001305312674513466027353 0ustar jenkinsjenkins00000000000000# Copyright 2015, Cisco Systems. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import mock from oslo_utils import importutils from six.moves import http_client from ironic.common import boot_devices from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.cimc import common from ironic.tests.unit.drivers.modules.cimc import test_common imcsdk = importutils.try_import('ImcSdk') @mock.patch.object(common, 'cimc_handle', autospec=True) class CIMCManagementTestCase(test_common.CIMCBaseTestCase): def test_get_properties(self, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertEqual(common.COMMON_PROPERTIES, task.driver.management.get_properties()) @mock.patch.object(common, "parse_driver_info", autospec=True) def test_validate(self, mock_driver_info, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.validate(task) mock_driver_info.assert_called_once_with(task.node) def test_get_supported_boot_devices(self, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: expected = [boot_devices.PXE, boot_devices.DISK, boot_devices.CDROM] result = task.driver.management.get_supported_boot_devices(task) self.assertEqual(sorted(expected), sorted(result)) def test_get_boot_device(self, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: with mock_handle(task) as handle: handle.xml_query.return_value.error_code = None mock_dev = mock.MagicMock() mock_dev.Order = 1 mock_dev.Rn = 'storage-read-write' handle.xml_query().OutConfigs.child[0].child = [mock_dev] device = task.driver.management.get_boot_device(task) self.assertEqual( {'boot_device': boot_devices.DISK, 'persistent': True}, device) def test_get_boot_device_fail(self, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: with mock_handle(task) as handle: handle.xml_query.return_value.error_code = ( str(http_client.NOT_FOUND)) self.assertRaises(exception.CIMCException, task.driver.management.get_boot_device, task) def test_set_boot_device(self, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: with mock_handle(task) as handle: handle.xml_query.return_value.error_code = None task.driver.management.set_boot_device(task, boot_devices.DISK) method = imcsdk.ImcCore.ExternalMethod("ConfigConfMo") method.Cookie = handle.cookie method.Dn = "sys/rack-unit-1/boot-policy" method.InHierarchical = "true" config = imcsdk.Imc.ConfigConfig() bootMode = imcsdk.ImcCore.ManagedObject('lsbootStorage') bootMode.set_attr("access", 'read-write') bootMode.set_attr("type", 'storage') bootMode.set_attr("Rn", 'storage-read-write') bootMode.set_attr("order", "1") config.add_child(bootMode) method.InConfig = config handle.xml_query.assert_called_once_with( method, imcsdk.WriteXmlOption.DIRTY) def test_set_boot_device_fail(self, mock_handle): with
task_manager.acquire(self.context, self.node.uuid, shared=False) as task: with mock_handle(task) as handle: method = imcsdk.ImcCore.ExternalMethod("ConfigConfMo") handle.xml_query.return_value.error_code = ( str(http_client.NOT_FOUND)) self.assertRaises(exception.CIMCException, task.driver.management.set_boot_device, task, boot_devices.DISK) handle.xml_query.assert_called_once_with( method, imcsdk.WriteXmlOption.DIRTY) def test_get_sensors_data(self, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(NotImplementedError, task.driver.management.get_sensors_data, task) ironic-5.1.0/ironic/tests/unit/drivers/modules/cimc/test_common.py0000664000567000056710000001162512674513466026532 0ustar jenkinsjenkins00000000000000# Copyright 2015, Cisco Systems. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import mock from oslo_config import cfg from oslo_utils import importutils from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.cimc import common as cimc_common from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils imcsdk = importutils.try_import('ImcSdk') CONF = cfg.CONF class CIMCBaseTestCase(db_base.DbTestCase): def setUp(self): super(CIMCBaseTestCase, self).setUp() mgr_utils.mock_the_extension_manager(driver="fake_cimc") self.node = obj_utils.create_test_node( self.context, driver='fake_cimc', driver_info=db_utils.get_test_cimc_info(), instance_uuid="fake_uuid") CONF.set_override('max_retry', 2, 'cimc') CONF.set_override('action_interval', 0, 'cimc') class ParseDriverInfoTestCase(CIMCBaseTestCase): def test_parse_driver_info(self): info = cimc_common.parse_driver_info(self.node) self.assertIsNotNone(info.get('cimc_address')) self.assertIsNotNone(info.get('cimc_username')) self.assertIsNotNone(info.get('cimc_password')) def test_parse_driver_info_missing_address(self): del self.node.driver_info['cimc_address'] self.assertRaises(exception.MissingParameterValue, cimc_common.parse_driver_info, self.node) def test_parse_driver_info_missing_username(self): del self.node.driver_info['cimc_username'] self.assertRaises(exception.MissingParameterValue, cimc_common.parse_driver_info, self.node) def test_parse_driver_info_missing_password(self): del self.node.driver_info['cimc_password'] self.assertRaises(exception.MissingParameterValue, cimc_common.parse_driver_info, self.node) @mock.patch.object(cimc_common, 'cimc_handle', autospec=True) class CIMCHandleLogin(CIMCBaseTestCase): def test_cimc_handle_login(self, mock_handle): info = cimc_common.parse_driver_info(self.node) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: with mock_handle(task) as handle: cimc_common.handle_login(task, handle, info) handle.login.assert_called_once_with( self.node.driver_info['cimc_address'], self.node.driver_info['cimc_username'], 
self.node.driver_info['cimc_password']) def test_cimc_handle_login_exception(self, mock_handle): info = cimc_common.parse_driver_info(self.node) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: with mock_handle(task) as handle: handle.login.side_effect = imcsdk.ImcException('Boom') self.assertRaises(exception.CIMCException, cimc_common.handle_login, task, handle, info) handle.login.assert_called_once_with( self.node.driver_info['cimc_address'], self.node.driver_info['cimc_username'], self.node.driver_info['cimc_password']) class CIMCHandleTestCase(CIMCBaseTestCase): @mock.patch.object(imcsdk, 'ImcHandle', autospec=True) @mock.patch.object(cimc_common, 'handle_login', autospec=True) def test_cimc_handle(self, mock_login, mock_handle): mo_hand = mock.MagicMock() mo_hand.username = self.node.driver_info.get('cimc_username') mo_hand.password = self.node.driver_info.get('cimc_password') mo_hand.name = self.node.driver_info.get('cimc_address') mock_handle.return_value = mo_hand info = cimc_common.parse_driver_info(self.node) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: with cimc_common.cimc_handle(task) as handle: self.assertEqual(handle, mock_handle.return_value) mock_login.assert_called_once_with(task, mock_handle.return_value, info) mock_handle.return_value.logout.assert_called_once_with() ironic-5.1.0/ironic/tests/unit/drivers/modules/cimc/test_power.py0000664000567000056710000003266212674513466026402 0ustar jenkinsjenkins00000000000000# Copyright 2015, Cisco Systems. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
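# Note: common.cimc_handle is patched with autospec for every test case in
# this module; each test enters the mocked handle via
# "with mock_handle(task) as handle" and asserts the ImcSdk calls that
# were recorded on it.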
import mock from oslo_config import cfg from oslo_utils import importutils from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules.cimc import common from ironic.drivers.modules.cimc import power from ironic.tests.unit.drivers.modules.cimc import test_common imcsdk = importutils.try_import('ImcSdk') CONF = cfg.CONF @mock.patch.object(common, 'cimc_handle', autospec=True) class WaitForStateChangeTestCase(test_common.CIMCBaseTestCase): def setUp(self): super(WaitForStateChangeTestCase, self).setUp() CONF.set_override('max_retry', 2, 'cimc') CONF.set_override('action_interval', 0, 'cimc') def test__wait_for_state_change(self, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: with mock_handle(task) as handle: mock_rack_unit = mock.MagicMock() mock_rack_unit.get_attr.return_value = ( imcsdk.ComputeRackUnit.CONST_OPER_POWER_ON) handle.get_imc_managedobject.return_value = [mock_rack_unit] state = power._wait_for_state_change(states.POWER_ON, task) handle.get_imc_managedobject.assert_called_once_with( None, None, params={"Dn": "sys/rack-unit-1"}) self.assertEqual(state, states.POWER_ON) def test__wait_for_state_change_fail(self, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: with mock_handle(task) as handle: mock_rack_unit = mock.MagicMock() mock_rack_unit.get_attr.return_value = ( imcsdk.ComputeRackUnit.CONST_OPER_POWER_OFF) handle.get_imc_managedobject.return_value = [mock_rack_unit] state = power._wait_for_state_change(states.POWER_ON, task) calls = [ mock.call(None, None, params={"Dn": "sys/rack-unit-1"}), mock.call(None, None, params={"Dn": "sys/rack-unit-1"}) ] handle.get_imc_managedobject.assert_has_calls(calls) self.assertEqual(state, states.ERROR) def test__wait_for_state_change_imc_exception(self, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: with mock_handle(task) as handle: handle.get_imc_managedobject.side_effect = ( imcsdk.ImcException('Boom')) self.assertRaises( exception.CIMCException, power._wait_for_state_change, states.POWER_ON, task) handle.get_imc_managedobject.assert_called_once_with( None, None, params={"Dn": "sys/rack-unit-1"}) @mock.patch.object(common, 'cimc_handle', autospec=True) class PowerTestCase(test_common.CIMCBaseTestCase): def test_get_properties(self, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertEqual(common.COMMON_PROPERTIES, task.driver.power.get_properties()) @mock.patch.object(common, "parse_driver_info", autospec=True) def test_validate(self, mock_driver_info, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.validate(task) mock_driver_info.assert_called_once_with(task.node) def test_get_power_state(self, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: with mock_handle(task) as handle: mock_rack_unit = mock.MagicMock() mock_rack_unit.get_attr.return_value = ( imcsdk.ComputeRackUnit.CONST_OPER_POWER_ON) handle.get_imc_managedobject.return_value = [mock_rack_unit] state = task.driver.power.get_power_state(task) handle.get_imc_managedobject.assert_called_once_with( None, None, params={"Dn": "sys/rack-unit-1"}) self.assertEqual(states.POWER_ON, state) def test_get_power_state_fail(self, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: with 
mock_handle(task) as handle: mock_rack_unit = mock.MagicMock() mock_rack_unit.get_attr.return_value = ( imcsdk.ComputeRackUnit.CONST_OPER_POWER_ON) handle.get_imc_managedobject.side_effect = ( imcsdk.ImcException("boom")) self.assertRaises(exception.CIMCException, task.driver.power.get_power_state, task) handle.get_imc_managedobject.assert_called_once_with( None, None, params={"Dn": "sys/rack-unit-1"}) def test_set_power_state_invalid_state(self, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.power.set_power_state, task, states.ERROR) def test_set_power_state_reboot_ok(self, mock_handle): hri = imcsdk.ComputeRackUnit.CONST_ADMIN_POWER_HARD_RESET_IMMEDIATE with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: with mock_handle(task) as handle: mock_rack_unit = mock.MagicMock() mock_rack_unit.get_attr.side_effect = [ imcsdk.ComputeRackUnit.CONST_OPER_POWER_OFF, imcsdk.ComputeRackUnit.CONST_OPER_POWER_ON ] handle.get_imc_managedobject.return_value = [mock_rack_unit] task.driver.power.set_power_state(task, states.REBOOT) handle.set_imc_managedobject.assert_called_once_with( None, class_id="ComputeRackUnit", params={ imcsdk.ComputeRackUnit.ADMIN_POWER: hri, imcsdk.ComputeRackUnit.DN: "sys/rack-unit-1" }) handle.get_imc_managedobject.assert_called_with( None, None, params={"Dn": "sys/rack-unit-1"}) def test_set_power_state_reboot_fail(self, mock_handle): hri = imcsdk.ComputeRackUnit.CONST_ADMIN_POWER_HARD_RESET_IMMEDIATE with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: with mock_handle(task) as handle: handle.get_imc_managedobject.side_effect = ( imcsdk.ImcException("boom")) self.assertRaises(exception.CIMCException, task.driver.power.set_power_state, task, states.REBOOT) handle.set_imc_managedobject.assert_called_once_with( None, class_id="ComputeRackUnit", params={ imcsdk.ComputeRackUnit.ADMIN_POWER: hri, imcsdk.ComputeRackUnit.DN: "sys/rack-unit-1" }) handle.get_imc_managedobject.assert_called_with( None, None, params={"Dn": "sys/rack-unit-1"}) def test_set_power_state_on_ok(self, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: with mock_handle(task) as handle: mock_rack_unit = mock.MagicMock() mock_rack_unit.get_attr.side_effect = [ imcsdk.ComputeRackUnit.CONST_OPER_POWER_OFF, imcsdk.ComputeRackUnit.CONST_OPER_POWER_ON ] handle.get_imc_managedobject.return_value = [mock_rack_unit] task.driver.power.set_power_state(task, states.POWER_ON) handle.set_imc_managedobject.assert_called_once_with( None, class_id="ComputeRackUnit", params={ imcsdk.ComputeRackUnit.ADMIN_POWER: imcsdk.ComputeRackUnit.CONST_ADMIN_POWER_UP, imcsdk.ComputeRackUnit.DN: "sys/rack-unit-1" }) handle.get_imc_managedobject.assert_called_with( None, None, params={"Dn": "sys/rack-unit-1"}) def test_set_power_state_on_fail(self, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: with mock_handle(task) as handle: handle.get_imc_managedobject.side_effect = ( imcsdk.ImcException("boom")) self.assertRaises(exception.CIMCException, task.driver.power.set_power_state, task, states.POWER_ON) handle.set_imc_managedobject.assert_called_once_with( None, class_id="ComputeRackUnit", params={ imcsdk.ComputeRackUnit.ADMIN_POWER: imcsdk.ComputeRackUnit.CONST_ADMIN_POWER_UP, imcsdk.ComputeRackUnit.DN: "sys/rack-unit-1" }) handle.get_imc_managedobject.assert_called_with( None, None, params={"Dn": 
"sys/rack-unit-1"}) def test_set_power_state_off_ok(self, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: with mock_handle(task) as handle: mock_rack_unit = mock.MagicMock() mock_rack_unit.get_attr.side_effect = [ imcsdk.ComputeRackUnit.CONST_OPER_POWER_ON, imcsdk.ComputeRackUnit.CONST_OPER_POWER_OFF ] handle.get_imc_managedobject.return_value = [mock_rack_unit] task.driver.power.set_power_state(task, states.POWER_OFF) handle.set_imc_managedobject.assert_called_once_with( None, class_id="ComputeRackUnit", params={ imcsdk.ComputeRackUnit.ADMIN_POWER: imcsdk.ComputeRackUnit.CONST_ADMIN_POWER_DOWN, imcsdk.ComputeRackUnit.DN: "sys/rack-unit-1" }) handle.get_imc_managedobject.assert_called_with( None, None, params={"Dn": "sys/rack-unit-1"}) def test_set_power_state_off_fail(self, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: with mock_handle(task) as handle: handle.get_imc_managedobject.side_effect = ( imcsdk.ImcException("boom")) self.assertRaises(exception.CIMCException, task.driver.power.set_power_state, task, states.POWER_OFF) handle.set_imc_managedobject.assert_called_once_with( None, class_id="ComputeRackUnit", params={ imcsdk.ComputeRackUnit.ADMIN_POWER: imcsdk.ComputeRackUnit.CONST_ADMIN_POWER_DOWN, imcsdk.ComputeRackUnit.DN: "sys/rack-unit-1" }) handle.get_imc_managedobject.assert_called_with( None, None, params={"Dn": "sys/rack-unit-1"}) @mock.patch.object(power.Power, "set_power_state", autospec=True) @mock.patch.object(power.Power, "get_power_state", autospec=True) def test_reboot_on(self, mock_get_state, mock_set_state, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: mock_get_state.return_value = states.POWER_ON task.driver.power.reboot(task) mock_set_state.assert_called_with(mock.ANY, task, states.REBOOT) @mock.patch.object(power.Power, "set_power_state", autospec=True) @mock.patch.object(power.Power, "get_power_state", autospec=True) def test_reboot_off(self, mock_get_state, mock_set_state, mock_handle): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: mock_get_state.return_value = states.POWER_OFF task.driver.power.reboot(task) mock_set_state.assert_called_with(mock.ANY, task, states.POWER_ON) ironic-5.1.0/ironic/tests/unit/drivers/modules/cimc/__init__.py0000664000567000056710000000000012674513466025723 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/drivers/modules/test_agent_base_vendor.py0000664000567000056710000017444512674513470030001 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # # Copyright 2015 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import time import types import mock from ironic.common import boot_devices from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules import agent_base_vendor from ironic.drivers.modules import agent_client from ironic.drivers.modules import deploy_utils from ironic.drivers.modules import fake from ironic.drivers.modules import pxe from ironic import objects from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as object_utils INSTANCE_INFO = db_utils.get_test_agent_instance_info() DRIVER_INFO = db_utils.get_test_agent_driver_info() DRIVER_INTERNAL_INFO = db_utils.get_test_agent_driver_internal_info() class TestBaseAgentVendor(db_base.DbTestCase): def setUp(self): super(TestBaseAgentVendor, self).setUp() mgr_utils.mock_the_extension_manager(driver="fake_agent") self.passthru = agent_base_vendor.BaseAgentVendor() n = { 'driver': 'fake_agent', 'instance_info': INSTANCE_INFO, 'driver_info': DRIVER_INFO, 'driver_internal_info': DRIVER_INTERNAL_INFO, } self.node = object_utils.create_test_node(self.context, **n) def test_validate(self): with task_manager.acquire(self.context, self.node.uuid) as task: method = 'heartbeat' self.passthru.validate(task, method) def test_driver_validate(self): kwargs = {'version': '2'} method = 'lookup' self.passthru.driver_validate(method, **kwargs) def test_driver_validate_invalid_parameter(self): method = 'lookup' kwargs = {'version': '1'} self.assertRaises(exception.InvalidParameterValue, self.passthru.driver_validate, method, **kwargs) def test_driver_validate_missing_parameter(self): method = 'lookup' kwargs = {} self.assertRaises(exception.MissingParameterValue, self.passthru.driver_validate, method, **kwargs) def test_lookup_version_not_found(self): kwargs = { 'version': '999', } with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.InvalidParameterValue, self.passthru.lookup, task.context, **kwargs) @mock.patch('ironic.drivers.modules.agent_base_vendor.BaseAgentVendor' '._find_node_by_macs', autospec=True) def test_lookup_v2(self, find_mock): kwargs = { 'version': '2', 'inventory': { 'interfaces': [ { 'mac_address': 'aa:bb:cc:dd:ee:ff', 'name': 'eth0' }, { 'mac_address': 'ff:ee:dd:cc:bb:aa', 'name': 'eth1' } ] } } find_mock.return_value = self.node with task_manager.acquire(self.context, self.node.uuid) as task: node = self.passthru.lookup(task.context, **kwargs) self.assertEqual(self.node.as_dict(), node['node']) def test_lookup_v2_missing_inventory(self): with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.InvalidParameterValue, self.passthru.lookup, task.context) def test_lookup_v2_empty_inventory(self): with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.InvalidParameterValue, self.passthru.lookup, task.context, inventory={}) def test_lookup_v2_empty_interfaces(self): with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.NodeNotFound, self.passthru.lookup, task.context, version='2', inventory={'interfaces': []}) @mock.patch.object(objects.Node, 'get_by_uuid') def test_lookup_v2_with_node_uuid(self, mock_get_node): kwargs = { 'version': '2', 'node_uuid': 'fake uuid', 'inventory': { 'interfaces': [ { 'mac_address':
'aa:bb:cc:dd:ee:ff', 'name': 'eth0' }, { 'mac_address': 'ff:ee:dd:cc:bb:aa', 'name': 'eth1' } ] } } mock_get_node.return_value = self.node with task_manager.acquire(self.context, self.node.uuid) as task: node = self.passthru.lookup(task.context, **kwargs) self.assertEqual(self.node.as_dict(), node['node']) mock_get_node.assert_called_once_with(mock.ANY, 'fake uuid') @mock.patch.object(objects.port.Port, 'get_by_address', spec_set=types.FunctionType) def test_find_ports_by_macs(self, mock_get_port): fake_port = object_utils.get_test_port(self.context) mock_get_port.return_value = fake_port macs = ['aa:bb:cc:dd:ee:ff'] with task_manager.acquire( self.context, self.node['uuid'], shared=True) as task: ports = self.passthru._find_ports_by_macs(task, macs) self.assertEqual(1, len(ports)) self.assertEqual(fake_port.uuid, ports[0].uuid) self.assertEqual(fake_port.node_id, ports[0].node_id) @mock.patch.object(objects.port.Port, 'get_by_address', spec_set=types.FunctionType) def test_find_ports_by_macs_bad_params(self, mock_get_port): mock_get_port.side_effect = exception.PortNotFound(port="123") macs = ['aa:bb:cc:dd:ee:ff'] with task_manager.acquire( self.context, self.node['uuid'], shared=True) as task: empty_ids = self.passthru._find_ports_by_macs(task, macs) self.assertEqual([], empty_ids) @mock.patch('ironic.objects.node.Node.get_by_id', spec_set=types.FunctionType) @mock.patch('ironic.drivers.modules.agent_base_vendor.BaseAgentVendor' '._get_node_id', autospec=True) @mock.patch('ironic.drivers.modules.agent_base_vendor.BaseAgentVendor' '._find_ports_by_macs', autospec=True) def test_find_node_by_macs(self, ports_mock, node_id_mock, node_mock): ports_mock.return_value = object_utils.get_test_port(self.context) node_id_mock.return_value = '1' node_mock.return_value = self.node macs = ['aa:bb:cc:dd:ee:ff'] with task_manager.acquire( self.context, self.node['uuid'], shared=True) as task: node = self.passthru._find_node_by_macs(task, macs) self.assertEqual(self.node, node) @mock.patch('ironic.drivers.modules.agent_base_vendor.BaseAgentVendor' '._find_ports_by_macs', autospec=True) def test_find_node_by_macs_no_ports(self, ports_mock): ports_mock.return_value = [] macs = ['aa:bb:cc:dd:ee:ff'] with task_manager.acquire( self.context, self.node['uuid'], shared=True) as task: self.assertRaises(exception.NodeNotFound, self.passthru._find_node_by_macs, task, macs) @mock.patch('ironic.objects.node.Node.get_by_uuid', spec_set=types.FunctionType) @mock.patch('ironic.drivers.modules.agent_base_vendor.BaseAgentVendor' '._get_node_id', autospec=True) @mock.patch('ironic.drivers.modules.agent_base_vendor.BaseAgentVendor' '._find_ports_by_macs', autospec=True) def test_find_node_by_macs_nodenotfound(self, ports_mock, node_id_mock, node_mock): port = object_utils.get_test_port(self.context) ports_mock.return_value = [port] node_id_mock.return_value = self.node['uuid'] node_mock.side_effect = [self.node, exception.NodeNotFound(node=self.node)] macs = ['aa:bb:cc:dd:ee:ff'] with task_manager.acquire( self.context, self.node['uuid'], shared=True) as task: self.assertRaises(exception.NodeNotFound, self.passthru._find_node_by_macs, task, macs) def test_get_node_id(self): fake_port1 = object_utils.get_test_port(self.context, node_id=123, address="aa:bb:cc:dd:ee:fe") fake_port2 = object_utils.get_test_port(self.context, node_id=123, id=42, address="aa:bb:cc:dd:ee:fb", uuid='1be26c0b-03f2-4d2e-ae87-' 'c02d7f33c782') node_id = self.passthru._get_node_id([fake_port1, fake_port2]) self.assertEqual(fake_port2.node_id, node_id) def
test_get_node_id_exception(self): fake_port1 = object_utils.get_test_port(self.context, node_id=123, address="aa:bb:cc:dd:ee:fc") fake_port2 = object_utils.get_test_port(self.context, node_id=321, id=42, address="aa:bb:cc:dd:ee:fd", uuid='1be26c0b-03f2-4d2e-ae87-' 'c02d7f33c782') self.assertRaises(exception.NodeNotFound, self.passthru._get_node_id, [fake_port1, fake_port2]) def test_get_interfaces(self): fake_inventory = { 'interfaces': [ { 'mac_address': 'aa:bb:cc:dd:ee:ff', 'name': 'eth0' } ] } interfaces = self.passthru._get_interfaces(fake_inventory) self.assertEqual(fake_inventory['interfaces'], interfaces) def test_get_interfaces_bad(self): self.assertRaises(exception.InvalidParameterValue, self.passthru._get_interfaces, inventory={}) def test_heartbeat(self): kwargs = { 'agent_url': 'http://127.0.0.1:9999/bar' } with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: self.passthru.heartbeat(task, **kwargs) def test_heartbeat_bad(self): kwargs = {} with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: self.assertRaises(exception.MissingParameterValue, self.passthru.heartbeat, task, **kwargs) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'deploy_has_started', autospec=True) @mock.patch.object(deploy_utils, 'set_failed_state', autospec=True) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'deploy_is_done', autospec=True) @mock.patch.object(agent_base_vendor.LOG, 'exception', autospec=True) def test_heartbeat_deploy_done_fails(self, log_mock, done_mock, failed_mock, deploy_started_mock): deploy_started_mock.return_value = True kwargs = { 'agent_url': 'http://127.0.0.1:9999/bar' } done_mock.side_effect = iter([Exception('LlamaException')]) with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: task.node.provision_state = states.DEPLOYWAIT task.node.target_provision_state = states.ACTIVE self.passthru.heartbeat(task, **kwargs) failed_mock.assert_called_once_with(task, mock.ANY) log_mock.assert_called_once_with( 'Asynchronous exception for node ' '1be26c0b-03f2-4d2e-ae87-c02d7f33c123: Failed checking if deploy ' 'is done. Exception: LlamaException') @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'deploy_has_started', autospec=True) @mock.patch.object(deploy_utils, 'set_failed_state', autospec=True) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'deploy_is_done', autospec=True) @mock.patch.object(agent_base_vendor.LOG, 'exception', autospec=True) def test_heartbeat_deploy_done_raises_with_event(self, log_mock, done_mock, failed_mock, deploy_started_mock): deploy_started_mock.return_value = True kwargs = { 'agent_url': 'http://127.0.0.1:9999/bar' } with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: def driver_failure(*args, **kwargs): # simulate driver failure that both advances the FSM # and raises an exception task.node.provision_state = states.DEPLOYFAIL raise Exception('LlamaException') task.node.provision_state = states.DEPLOYWAIT task.node.target_provision_state = states.ACTIVE done_mock.side_effect = driver_failure self.passthru.heartbeat(task, **kwargs) # Since task.node.provision_state was set to DEPLOYFAIL # within driver_failure, heartbeat should not call # deploy_utils.set_failed_state anymore self.assertFalse(failed_mock.called) log_mock.assert_called_once_with( 'Asynchronous exception for node ' '1be26c0b-03f2-4d2e-ae87-c02d7f33c123: Failed checking if deploy ' 'is done. 
Exception: LlamaException') @mock.patch.object(objects.node.Node, 'touch_provisioning', autospec=True) @mock.patch.object(agent_base_vendor.BaseAgentVendor, '_refresh_clean_steps', autospec=True) @mock.patch.object(manager_utils, 'set_node_cleaning_steps', autospec=True) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'notify_conductor_resume_clean', autospec=True) def test_heartbeat_resume_clean(self, mock_notify, mock_set_steps, mock_refresh, mock_touch): kwargs = { 'agent_url': 'http://127.0.0.1:9999/bar' } self.node.clean_step = {} for state in (states.CLEANWAIT, states.CLEANING): self.node.provision_state = state self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.passthru.heartbeat(task, **kwargs) mock_touch.assert_called_once_with(mock.ANY) mock_refresh.assert_called_once_with(mock.ANY, task) mock_notify.assert_called_once_with(mock.ANY, task) mock_set_steps.assert_called_once_with(task) # Reset mocks for the next interaction mock_touch.reset_mock() mock_refresh.reset_mock() mock_notify.reset_mock() mock_set_steps.reset_mock() @mock.patch.object(manager_utils, 'cleaning_error_handler') @mock.patch.object(objects.node.Node, 'touch_provisioning', autospec=True) @mock.patch.object(agent_base_vendor.BaseAgentVendor, '_refresh_clean_steps', autospec=True) @mock.patch.object(manager_utils, 'set_node_cleaning_steps', autospec=True) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'notify_conductor_resume_clean', autospec=True) def test_heartbeat_resume_clean_fails(self, mock_notify, mock_set_steps, mock_refresh, mock_touch, mock_handler): mocks = [mock_refresh, mock_set_steps, mock_notify] kwargs = { 'agent_url': 'http://127.0.0.1:9999/bar' } self.node.clean_step = {} self.node.save() for state in (states.CLEANWAIT, states.CLEANING): self.node.provision_state = state self.node.save() for i in range(len(mocks)): before_failed_mocks = mocks[:i] failed_mock = mocks[i] after_failed_mocks = mocks[i + 1:] failed_mock.side_effect = Exception() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.passthru.heartbeat(task, **kwargs) mock_touch.assert_called_once_with(mock.ANY) mock_handler.assert_called_once_with(task, mock.ANY) for called in before_failed_mocks + [failed_mock]: self.assertTrue(called.called) for not_called in after_failed_mocks: self.assertFalse(not_called.called) # Reset mocks for the next interaction for m in mocks + [mock_touch, mock_handler]: m.reset_mock() failed_mock.side_effect = None @mock.patch.object(objects.node.Node, 'touch_provisioning', autospec=True) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'continue_cleaning', autospec=True) def test_heartbeat_continue_cleaning(self, mock_continue, mock_touch): kwargs = { 'agent_url': 'http://127.0.0.1:9999/bar' } self.node.clean_step = { 'priority': 10, 'interface': 'deploy', 'step': 'foo', 'reboot_requested': False } for state in (states.CLEANWAIT, states.CLEANING): self.node.provision_state = state self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.passthru.heartbeat(task, **kwargs) mock_touch.assert_called_once_with(mock.ANY) mock_continue.assert_called_once_with(mock.ANY, task, **kwargs) # Reset mocks for the next interaction mock_touch.reset_mock() mock_continue.reset_mock() @mock.patch.object(manager_utils, 'cleaning_error_handler') @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'continue_cleaning', autospec=True) def test_heartbeat_continue_cleaning_fails(self, 
mock_continue, mock_handler): kwargs = { 'agent_url': 'http://127.0.0.1:9999/bar' } self.node.clean_step = { 'priority': 10, 'interface': 'deploy', 'step': 'foo', 'reboot_requested': False } mock_continue.side_effect = Exception() for state in (states.CLEANWAIT, states.CLEANING): self.node.provision_state = state self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.passthru.heartbeat(task, **kwargs) mock_continue.assert_called_once_with(mock.ANY, task, **kwargs) mock_handler.assert_called_once_with(task, mock.ANY) mock_handler.reset_mock() mock_continue.reset_mock() @mock.patch.object(manager_utils, 'cleaning_error_handler') @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'continue_cleaning', autospec=True) def test_heartbeat_continue_cleaning_no_worker(self, mock_continue, mock_handler): kwargs = { 'agent_url': 'http://127.0.0.1:9999/bar' } self.node.clean_step = { 'priority': 10, 'interface': 'deploy', 'step': 'foo', 'reboot_requested': False } mock_continue.side_effect = exception.NoFreeConductorWorker() for state in (states.CLEANWAIT, states.CLEANING): self.node.provision_state = state self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.passthru.heartbeat(task, **kwargs) mock_continue.assert_called_once_with(mock.ANY, task, **kwargs) self.assertFalse(mock_handler.called) mock_handler.reset_mock() mock_continue.reset_mock() @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'continue_deploy', autospec=True) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'reboot_to_instance', autospec=True) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'notify_conductor_resume_clean', autospec=True) def test_heartbeat_noops_maintenance_mode(self, ncrc_mock, rti_mock, cd_mock): """Ensures that heartbeat() no-ops for a maintenance node.""" kwargs = { 'agent_url': 'http://127.0.0.1:9999/bar' } self.node.maintenance = True for state in (states.AVAILABLE, states.DEPLOYWAIT, states.DEPLOYING, states.CLEANING): self.node.provision_state = state self.node.save() with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: self.passthru.heartbeat(task, **kwargs) self.assertEqual(0, ncrc_mock.call_count) self.assertEqual(0, rti_mock.call_count) self.assertEqual(0, cd_mock.call_count) @mock.patch.object(objects.node.Node, 'touch_provisioning', autospec=True) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'deploy_has_started', autospec=True) def test_heartbeat_touch_provisioning(self, mock_deploy_started, mock_touch): mock_deploy_started.return_value = True kwargs = { 'agent_url': 'http://127.0.0.1:9999/bar' } self.node.provision_state = states.DEPLOYWAIT self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.passthru.heartbeat(task, **kwargs) mock_touch.assert_called_once_with(mock.ANY) def test_vendor_passthru_vendor_routes(self): expected = ['heartbeat'] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: vendor_routes = task.driver.vendor.vendor_routes self.assertIsInstance(vendor_routes, dict) self.assertEqual(expected, list(vendor_routes)) def test_vendor_passthru_driver_routes(self): expected = ['lookup'] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: driver_routes = task.driver.vendor.driver_routes self.assertIsInstance(driver_routes, dict) self.assertEqual(expected, list(driver_routes)) @mock.patch.object(time, 'sleep', lambda seconds: None) 
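# NB: patching time.sleep with a plain lambda, as below, makes the
# power-state retry loops in the following tests run instantly; the
# trade-off is that a lambda records no calls, so nothing can be asserted
# about sleep itself. If such assertions were wanted, a sketch of the
# alternative (an assumption, not used by this suite) would be:
#     @mock.patch.object(time, 'sleep', autospec=True)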
@mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) def test_reboot_and_finish_deploy(self, power_off_mock, get_power_state_mock, node_power_action_mock): self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: get_power_state_mock.side_effect = [states.POWER_ON, states.POWER_OFF] self.passthru.reboot_and_finish_deploy(task) power_off_mock.assert_called_once_with(task.node) self.assertEqual(2, get_power_state_mock.call_count) node_power_action_mock.assert_called_once_with( task, states.REBOOT) self.assertEqual(states.ACTIVE, task.node.provision_state) self.assertEqual(states.NOSTATE, task.node.target_provision_state) @mock.patch.object(time, 'sleep', lambda seconds: None) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) def test_reboot_and_finish_deploy_soft_poweroff_doesnt_complete( self, power_off_mock, get_power_state_mock, node_power_action_mock): self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: get_power_state_mock.return_value = states.POWER_ON self.passthru.reboot_and_finish_deploy(task) power_off_mock.assert_called_once_with(task.node) self.assertEqual(7, get_power_state_mock.call_count) node_power_action_mock.assert_called_once_with( task, states.REBOOT) self.assertEqual(states.ACTIVE, task.node.provision_state) self.assertEqual(states.NOSTATE, task.node.target_provision_state) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) def test_reboot_and_finish_deploy_soft_poweroff_fails( self, power_off_mock, node_power_action_mock): power_off_mock.side_effect = iter([RuntimeError("boom")]) self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.passthru.reboot_and_finish_deploy(task) power_off_mock.assert_called_once_with(task.node) node_power_action_mock.assert_called_once_with( task, states.REBOOT) self.assertEqual(states.ACTIVE, task.node.provision_state) self.assertEqual(states.NOSTATE, task.node.target_provision_state) @mock.patch.object(time, 'sleep', lambda seconds: None) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) def test_reboot_and_finish_deploy_get_power_state_fails( self, power_off_mock, get_power_state_mock, node_power_action_mock): self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: get_power_state_mock.side_effect = iter([RuntimeError("boom")]) self.passthru.reboot_and_finish_deploy(task) power_off_mock.assert_called_once_with(task.node) self.assertEqual(7, get_power_state_mock.call_count) 
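# NB: the expected count of 7 presumably reflects
# CONF.agent.post_deploy_get_power_state_retries retries (6 by default)
# plus the initial check. A minimal sketch of such a polling loop,
# assuming those option names, would be:
#     for _ in range(CONF.agent.post_deploy_get_power_state_retries + 1):
#         if task.driver.power.get_power_state(task) == states.POWER_OFF:
#             break
#         time.sleep(CONF.agent.post_deploy_get_power_state_retry_interval)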
node_power_action_mock.assert_called_once_with( task, states.REBOOT) self.assertEqual(states.ACTIVE, task.node.provision_state) self.assertEqual(states.NOSTATE, task.node.target_provision_state) @mock.patch.object(time, 'sleep', lambda seconds: None) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) def test_reboot_and_finish_deploy_power_action_fails( self, power_off_mock, get_power_state_mock, node_power_action_mock): self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: get_power_state_mock.return_value = states.POWER_ON node_power_action_mock.side_effect = iter([RuntimeError("boom")]) self.assertRaises(exception.InstanceDeployFailure, self.passthru.reboot_and_finish_deploy, task) power_off_mock.assert_called_once_with(task.node) self.assertEqual(7, get_power_state_mock.call_count) node_power_action_mock.assert_has_calls([ mock.call(task, states.REBOOT), mock.call(task, states.POWER_OFF)]) self.assertEqual(states.DEPLOYFAIL, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(agent_client.AgentClient, 'sync', spec=types.FunctionType) def test_reboot_and_finish_deploy_power_action_oob_power_off( self, sync_mock, node_power_action_mock): # Enable force power off driver_info = self.node.driver_info driver_info['deploy_forces_oob_reboot'] = True self.node.driver_info = driver_info self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.passthru.reboot_and_finish_deploy(task) sync_mock.assert_called_once_with(task.node) node_power_action_mock.assert_called_once_with( task, states.REBOOT) self.assertEqual(states.ACTIVE, task.node.provision_state) self.assertEqual(states.NOSTATE, task.node.target_provision_state) @mock.patch.object(agent_base_vendor.LOG, 'warning', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(agent_client.AgentClient, 'sync', spec=types.FunctionType) def test_reboot_and_finish_deploy_power_action_oob_power_off_failed( self, sync_mock, node_power_action_mock, log_mock): # Enable force power off driver_info = self.node.driver_info driver_info['deploy_forces_oob_reboot'] = True self.node.driver_info = driver_info self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: sync_mock.return_value = {'faultstring': 'Unknown command: blah'} self.passthru.reboot_and_finish_deploy(task) sync_mock.assert_called_once_with(task.node) node_power_action_mock.assert_called_once_with( task, states.REBOOT) self.assertEqual(states.ACTIVE, task.node.provision_state) self.assertEqual(states.NOSTATE, task.node.target_provision_state) log_error = ('The version of the IPA ramdisk used in the ' 'deployment do not support the command "sync"') log_mock.assert_called_once_with( 'Failed to flush the file system prior to hard rebooting the ' 'node %(node)s. 
Error: %(error)s', {'node': task.node.uuid, 'error': log_error}) @mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True) def test_configure_local_boot(self, try_set_boot_device_mock, install_bootloader_mock): install_bootloader_mock.return_value = { 'command_status': 'SUCCESS', 'command_error': None} with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: task.node.driver_internal_info['is_whole_disk_image'] = False self.passthru.configure_local_boot(task, root_uuid='some-root-uuid') try_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK) install_bootloader_mock.assert_called_once_with( mock.ANY, task.node, root_uuid='some-root-uuid', efi_system_part_uuid=None) @mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True) def test_configure_local_boot_uefi(self, try_set_boot_device_mock, install_bootloader_mock): install_bootloader_mock.return_value = { 'command_status': 'SUCCESS', 'command_error': None} with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: task.node.driver_internal_info['is_whole_disk_image'] = False self.passthru.configure_local_boot( task, root_uuid='some-root-uuid', efi_system_part_uuid='efi-system-part-uuid') try_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK) install_bootloader_mock.assert_called_once_with( mock.ANY, task.node, root_uuid='some-root-uuid', efi_system_part_uuid='efi-system-part-uuid') @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True) @mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) def test_configure_local_boot_whole_disk_image( self, install_bootloader_mock, try_set_boot_device_mock): with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.passthru.configure_local_boot(task) self.assertFalse(install_bootloader_mock.called) try_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK) @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True) @mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) def test_configure_local_boot_no_root_uuid( self, install_bootloader_mock, try_set_boot_device_mock): with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: task.node.driver_internal_info['is_whole_disk_image'] = False self.passthru.configure_local_boot(task) self.assertFalse(install_bootloader_mock.called) try_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK) @mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) def test_configure_local_boot_boot_loader_install_fail( self, install_bootloader_mock): install_bootloader_mock.return_value = { 'command_status': 'FAILED', 'command_error': 'boom'} self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: task.node.driver_internal_info['is_whole_disk_image'] = False self.assertRaises(exception.InstanceDeployFailure, self.passthru.configure_local_boot, task, root_uuid='some-root-uuid') install_bootloader_mock.assert_called_once_with( mock.ANY, task.node, root_uuid='some-root-uuid', efi_system_part_uuid=None) self.assertEqual(states.DEPLOYFAIL, task.node.provision_state) 
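# DEPLOYFAIL paired with a target of ACTIVE is the failure signature used
# throughout this file: the node failed on its way to ACTIVE, so the
# target is left in place (successful deploys reset it to NOSTATE).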
self.assertEqual(states.ACTIVE, task.node.target_provision_state) @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True) @mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) def test_configure_local_boot_set_boot_device_fail( self, install_bootloader_mock, try_set_boot_device_mock): install_bootloader_mock.return_value = { 'command_status': 'SUCCESS', 'command_error': None} try_set_boot_device_mock.side_effect = iter([RuntimeError('error')]) self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: task.node.driver_internal_info['is_whole_disk_image'] = False self.assertRaises(exception.InstanceDeployFailure, self.passthru.configure_local_boot, task, root_uuid='some-root-uuid') install_bootloader_mock.assert_called_once_with( mock.ANY, task.node, root_uuid='some-root-uuid', efi_system_part_uuid=None) try_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK) self.assertEqual(states.DEPLOYFAIL, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) @mock.patch.object(deploy_utils, 'set_failed_state', autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True) @mock.patch.object(deploy_utils, 'get_boot_option', autospec=True) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'configure_local_boot', autospec=True) def test_prepare_instance_to_boot_netboot(self, configure_mock, boot_option_mock, prepare_instance_mock, failed_state_mock): boot_option_mock.return_value = 'netboot' prepare_instance_mock.return_value = None self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() root_uuid = 'root_uuid' efi_system_part_uuid = 'efi_sys_uuid' with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.passthru.prepare_instance_to_boot(task, root_uuid, efi_system_part_uuid) self.assertFalse(configure_mock.called) boot_option_mock.assert_called_once_with(task.node) prepare_instance_mock.assert_called_once_with(task.driver.boot, task) self.assertFalse(failed_state_mock.called) @mock.patch.object(deploy_utils, 'set_failed_state', autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True) @mock.patch.object(deploy_utils, 'get_boot_option', autospec=True) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'configure_local_boot', autospec=True) def test_prepare_instance_to_boot_localboot(self, configure_mock, boot_option_mock, prepare_instance_mock, failed_state_mock): boot_option_mock.return_value = 'local' prepare_instance_mock.return_value = None self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() root_uuid = 'root_uuid' efi_system_part_uuid = 'efi_sys_uuid' with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.passthru.prepare_instance_to_boot(task, root_uuid, efi_system_part_uuid) configure_mock.assert_called_once_with(self.passthru, task, root_uuid, efi_system_part_uuid) boot_option_mock.assert_called_once_with(task.node) prepare_instance_mock.assert_called_once_with(task.driver.boot, task) self.assertFalse(failed_state_mock.called) @mock.patch.object(deploy_utils, 'set_failed_state', autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True) @mock.patch.object(deploy_utils, 'get_boot_option', autospec=True) 
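# NB: autospec=True makes each mock enforce the signature of the real
# callable it replaces, so these tests fail loudly if the production API
# drifts. A self-contained illustration with a hypothetical function f
# (not part of ironic):
#     def f(a, b):
#         return a + b
#     with mock.patch('__main__.f', autospec=True) as m:
#         m(1, 2)     # accepted
#         m(1, 2, 3)  # raises TypeError: too many positional arguments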
@mock.patch.object(agent_base_vendor.BaseAgentVendor, 'configure_local_boot', autospec=True) def test_prepare_instance_to_boot_configure_fails(self, configure_mock, boot_option_mock, prepare_mock, failed_state_mock): boot_option_mock.return_value = 'local' self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() root_uuid = 'root_uuid' efi_system_part_uuid = 'efi_sys_uuid' reason = 'reason' configure_mock.side_effect = ( exception.InstanceDeployFailure(reason=reason)) prepare_mock.side_effect = ( exception.InstanceDeployFailure(reason=reason)) with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.assertRaises(exception.InstanceDeployFailure, self.passthru.prepare_instance_to_boot, task, root_uuid, efi_system_part_uuid) configure_mock.assert_called_once_with(self.passthru, task, root_uuid, efi_system_part_uuid) boot_option_mock.assert_called_once_with(task.node) self.assertFalse(prepare_mock.called) self.assertFalse(failed_state_mock.called) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'notify_conductor_resume_clean', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_continue_cleaning(self, status_mock, notify_mock): # Test a successful execute clean step on the agent self.node.clean_step = { 'priority': 10, 'interface': 'deploy', 'step': 'erase_devices', 'reboot_requested': False } self.node.save() status_mock.return_value = [{ 'command_status': 'SUCCEEDED', 'command_name': 'execute_clean_step', 'command_result': { 'clean_step': self.node.clean_step } }] with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.passthru.continue_cleaning(task) notify_mock.assert_called_once_with(mock.ANY, task) @mock.patch.object(agent_base_vendor, '_get_post_clean_step_hook', autospec=True) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'notify_conductor_resume_clean', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_continue_cleaning_with_hook( self, status_mock, notify_mock, get_hook_mock): self.node.clean_step = { 'priority': 10, 'interface': 'raid', 'step': 'create_configuration', } self.node.save() command_status = { 'command_status': 'SUCCEEDED', 'command_name': 'execute_clean_step', 'command_result': {'clean_step': self.node.clean_step}} status_mock.return_value = [command_status] hook_mock = mock.MagicMock(spec=types.FunctionType, __name__='foo') get_hook_mock.return_value = hook_mock with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.passthru.continue_cleaning(task) get_hook_mock.assert_called_once_with(task.node) hook_mock.assert_called_once_with(task, command_status) notify_mock.assert_called_once_with(mock.ANY, task) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'notify_conductor_resume_clean', autospec=True) @mock.patch.object(agent_base_vendor, '_get_post_clean_step_hook', autospec=True) @mock.patch.object(manager_utils, 'cleaning_error_handler', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_continue_cleaning_with_hook_fails( self, status_mock, error_handler_mock, get_hook_mock, notify_mock): self.node.clean_step = { 'priority': 10, 'interface': 'raid', 'step': 'create_configuration', } self.node.save() command_status = { 'command_status': 'SUCCEEDED', 'command_name': 'execute_clean_step', 'command_result': {'clean_step': self.node.clean_step}} 
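# The payload above mirrors the shape stubbed for
# AgentClient.get_commands_status throughout this class: a list of dicts
# carrying 'command_name', 'command_status' (SUCCEEDED, RUNNING, FAILED,
# or CLEAN_VERSION_MISMATCH below) and a 'command_result'.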
status_mock.return_value = [command_status] hook_mock = mock.MagicMock(spec=types.FunctionType, __name__='foo') hook_mock.side_effect = RuntimeError('error') get_hook_mock.return_value = hook_mock with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.passthru.continue_cleaning(task) get_hook_mock.assert_called_once_with(task.node) hook_mock.assert_called_once_with(task, command_status) error_handler_mock.assert_called_once_with(task, mock.ANY) self.assertFalse(notify_mock.called) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'notify_conductor_resume_clean', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_continue_cleaning_old_command(self, status_mock, notify_mock): # Test when a second execute_clean_step happens to the agent, but # the new step hasn't started yet. self.node.clean_step = { 'priority': 10, 'interface': 'deploy', 'step': 'erase_devices', 'reboot_requested': False } self.node.save() status_mock.return_value = [{ 'command_status': 'SUCCEEDED', 'command_name': 'execute_clean_step', 'command_result': { 'priority': 20, 'interface': 'deploy', 'step': 'update_firmware', 'reboot_requested': False } }] with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.passthru.continue_cleaning(task) self.assertFalse(notify_mock.called) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'notify_conductor_resume_clean', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_continue_cleaning_running(self, status_mock, notify_mock): # Test that no action is taken while a clean step is executing status_mock.return_value = [{ 'command_status': 'RUNNING', 'command_name': 'execute_clean_step', 'command_result': None }] with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.passthru.continue_cleaning(task) self.assertFalse(notify_mock.called) @mock.patch.object(manager_utils, 'cleaning_error_handler', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_continue_cleaning_fail(self, status_mock, error_mock): # Test that a failure puts the node in CLEANFAIL status_mock.return_value = [{ 'command_status': 'FAILED', 'command_name': 'execute_clean_step', 'command_result': {} }] with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.passthru.continue_cleaning(task) error_mock.assert_called_once_with(task, mock.ANY) @mock.patch.object(manager_utils, 'set_node_cleaning_steps', autospec=True) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'notify_conductor_resume_clean', autospec=True) @mock.patch.object(agent_base_vendor.BaseAgentVendor, '_refresh_clean_steps', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def _test_continue_cleaning_clean_version_mismatch( self, status_mock, refresh_steps_mock, notify_mock, steps_mock, manual=False): status_mock.return_value = [{ 'command_status': 'CLEAN_VERSION_MISMATCH', 'command_name': 'execute_clean_step', }] tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE self.node.provision_state = states.CLEANWAIT self.node.target_provision_state = tgt_prov_state self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.passthru.continue_cleaning(task) notify_mock.assert_called_once_with(mock.ANY, task) refresh_steps_mock.assert_called_once_with(mock.ANY, task) if manual: self.assertFalse( task.node.driver_internal_info['skip_current_clean_step']) self.assertFalse(steps_mock.called) else: steps_mock.assert_called_once_with(task) self.assertNotIn('skip_current_clean_step', task.node.driver_internal_info) def test_continue_cleaning_automated_clean_version_mismatch(self): self._test_continue_cleaning_clean_version_mismatch() def test_continue_cleaning_manual_clean_version_mismatch(self): self._test_continue_cleaning_clean_version_mismatch(manual=True)
@mock.patch.object(manager_utils, 'cleaning_error_handler', autospec=True) @mock.patch.object(manager_utils, 'set_node_cleaning_steps', autospec=True) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'notify_conductor_resume_clean', autospec=True) @mock.patch.object(agent_base_vendor.BaseAgentVendor, '_refresh_clean_steps', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_continue_cleaning_clean_version_mismatch_fail( self, status_mock, refresh_steps_mock, notify_mock, steps_mock, error_mock, manual=False): status_mock.return_value = [{ 'command_status': 'CLEAN_VERSION_MISMATCH', 'command_name': 'execute_clean_step', 'command_result': {'hardware_manager_version': {'Generic': '1'}} }] refresh_steps_mock.side_effect = exception.NodeCleaningFailure("boo") tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE self.node.provision_state = states.CLEANWAIT self.node.target_provision_state = tgt_prov_state self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.passthru.continue_cleaning(task) status_mock.assert_called_once_with(mock.ANY, task.node) refresh_steps_mock.assert_called_once_with(mock.ANY, task) error_mock.assert_called_once_with(task, mock.ANY) self.assertFalse(notify_mock.called) self.assertFalse(steps_mock.called) @mock.patch.object(manager_utils, 'cleaning_error_handler', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_continue_cleaning_unknown(self, status_mock, error_mock): # Test that unknown commands are treated as failures status_mock.return_value = [{ 'command_status': 'UNKNOWN', 'command_name': 'execute_clean_step', 'command_result': {} }] with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.passthru.continue_cleaning(task) error_mock.assert_called_once_with(task, mock.ANY) def _test_clean_step_hook(self, hook_dict_mock): """Helper method for unit tests related to clean step hooks. This is a helper method for other unit tests related to clean step hooks. It accepts a mock 'hook_dict_mock', which is a MagicMock, and sets it up to function as a mock dictionary. After that, it defines a dummy hook_method for two clean steps, raid.create_configuration and raid.delete_configuration. 
:param hook_dict_mock: An instance of mock.MagicMock() which is the mocked value of agent_base_vendor.POST_CLEAN_STEP_HOOKS :returns: a tuple, where the first item is the hook method created by this method and second item is the backend dictionary for the mocked hook_dict_mock """ hook_dict = {} def get(key, default): return hook_dict.get(key, default) def getitem(self, key): return hook_dict[key] def setdefault(key, default): if key not in hook_dict: hook_dict[key] = default return hook_dict[key] hook_dict_mock.get = get hook_dict_mock.__getitem__ = getitem hook_dict_mock.setdefault = setdefault some_function_mock = mock.MagicMock() @agent_base_vendor.post_clean_step_hook( interface='raid', step='delete_configuration') @agent_base_vendor.post_clean_step_hook( interface='raid', step='create_configuration') def hook_method(): some_function_mock('some-arguments') return hook_method, hook_dict @mock.patch.object(agent_base_vendor, 'POST_CLEAN_STEP_HOOKS', spec_set=dict) def test_post_clean_step_hook(self, hook_dict_mock): # This unit test makes sure that hook methods are registered # properly and entries are made in # agent_base_vendor.POST_CLEAN_STEP_HOOKS hook_method, hook_dict = self._test_clean_step_hook(hook_dict_mock) self.assertEqual(hook_method, hook_dict['raid']['create_configuration']) self.assertEqual(hook_method, hook_dict['raid']['delete_configuration']) @mock.patch.object(agent_base_vendor, 'POST_CLEAN_STEP_HOOKS', spec_set=dict) def test__get_post_clean_step_hook(self, hook_dict_mock): # Check if agent_base_vendor._get_post_clean_step_hook can get # clean step for which hook is registered. hook_method, hook_dict = self._test_clean_step_hook(hook_dict_mock) self.node.clean_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() hook_returned = agent_base_vendor._get_post_clean_step_hook(self.node) self.assertEqual(hook_method, hook_returned) @mock.patch.object(agent_base_vendor, 'POST_CLEAN_STEP_HOOKS', spec_set=dict) def test__get_post_clean_step_hook_no_hook_registered( self, hook_dict_mock): # Make sure agent_base_vendor._get_post_clean_step_hook returns # None when no clean step hook is registered for the clean step. 
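# For orientation: the hook registry exercised by these tests behaves
# like a two-level dict keyed by interface, then step. A minimal sketch
# of such a registering decorator (an illustrative assumption, not
# agent_base_vendor's actual code) would be:
#     POST_CLEAN_STEP_HOOKS = {}
#     def post_clean_step_hook(interface, step):
#         def decorator(func):
#             POST_CLEAN_STEP_HOOKS.setdefault(interface, {})[step] = func
#             return func
#         return decorator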
hook_method, hook_dict = self._test_clean_step_hook(hook_dict_mock) self.node.clean_step = {'step': 'some-clean-step', 'interface': 'some-other-interface'} self.node.save() hook_returned = agent_base_vendor._get_post_clean_step_hook(self.node) self.assertIsNone(hook_returned) class TestRefreshCleanSteps(TestBaseAgentVendor): def setUp(self): super(TestRefreshCleanSteps, self).setUp() self.node.driver_internal_info['agent_url'] = 'http://127.0.0.1:9999' self.ports = [object_utils.create_test_port(self.context, node_id=self.node.id)] self.clean_steps = { 'hardware_manager_version': '1', 'clean_steps': { 'GenericHardwareManager': [ {'interface': 'deploy', 'step': 'erase_devices', 'priority': 20}, ], 'SpecificHardwareManager': [ {'interface': 'deploy', 'step': 'update_firmware', 'priority': 30}, {'interface': 'raid', 'step': 'create_configuration', 'priority': 10}, ] } } @mock.patch.object(agent_client.AgentClient, 'get_clean_steps', autospec=True) def test__refresh_clean_steps(self, client_mock): client_mock.return_value = { 'command_result': self.clean_steps} with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.passthru._refresh_clean_steps(task) client_mock.assert_called_once_with(mock.ANY, task.node, task.ports) self.assertEqual('1', task.node.driver_internal_info[ 'hardware_manager_version']) self.assertTrue('agent_cached_clean_steps_refreshed' in task.node.driver_internal_info) steps = task.node.driver_internal_info['agent_cached_clean_steps'] # Since steps are returned in dicts, they have non-deterministic # ordering self.assertEqual(2, len(steps)) self.assertIn(self.clean_steps['clean_steps'][ 'GenericHardwareManager'][0], steps['deploy']) self.assertIn(self.clean_steps['clean_steps'][ 'SpecificHardwareManager'][0], steps['deploy']) self.assertEqual([self.clean_steps['clean_steps'][ 'SpecificHardwareManager'][1]], steps['raid']) @mock.patch.object(agent_client.AgentClient, 'get_clean_steps', autospec=True) def test__refresh_clean_steps_missing_steps(self, client_mock): del self.clean_steps['clean_steps'] client_mock.return_value = { 'command_result': self.clean_steps} with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertRaisesRegex(exception.NodeCleaningFailure, 'invalid result', self.passthru._refresh_clean_steps, task) client_mock.assert_called_once_with(mock.ANY, task.node, task.ports) @mock.patch.object(agent_client.AgentClient, 'get_clean_steps', autospec=True) def test__refresh_clean_steps_missing_interface(self, client_mock): step = self.clean_steps['clean_steps']['SpecificHardwareManager'][1] del step['interface'] client_mock.return_value = { 'command_result': self.clean_steps} with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertRaisesRegex(exception.NodeCleaningFailure, 'invalid clean step', self.passthru._refresh_clean_steps, task) client_mock.assert_called_once_with(mock.ANY, task.node, task.ports) def test_get_properties(self): expected = agent_base_vendor.VENDOR_PROPERTIES self.assertEqual(expected, self.passthru.get_properties()) ironic-5.1.0/ironic/tests/unit/drivers/modules/test_iscsi_deploy.py0000664000567000056710000022307712674513466027023 0ustar jenkinsjenkins00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Test class for iSCSI deploy mechanism.""" import os import tempfile from ironic_lib import disk_utils from ironic_lib import utils as ironic_utils import mock from oslo_config import cfg from oslo_utils import fileutils from oslo_utils import uuidutils from ironic.common import dhcp_factory from ironic.common import driver_factory from ironic.common import exception from ironic.common import keystone from ironic.common import pxe_utils from ironic.common import states from ironic.common import utils from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules import agent_base_vendor from ironic.drivers.modules import agent_client from ironic.drivers.modules import deploy_utils from ironic.drivers.modules import fake from ironic.drivers.modules import iscsi_deploy from ironic.drivers.modules import pxe from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils CONF = cfg.CONF INST_INFO_DICT = db_utils.get_test_pxe_instance_info() DRV_INFO_DICT = db_utils.get_test_pxe_driver_info() DRV_INTERNAL_INFO_DICT = db_utils.get_test_pxe_driver_internal_info() class IscsiDeployValidateParametersTestCase(db_base.DbTestCase): def test_parse_instance_info_good(self): # make sure we get back the expected things node = obj_utils.create_test_node( self.context, driver='fake_pxe', instance_info=INST_INFO_DICT, driver_internal_info=DRV_INTERNAL_INFO_DICT ) info = deploy_utils.parse_instance_info(node) self.assertIsNotNone(info.get('image_source')) self.assertIsNotNone(info.get('root_gb')) self.assertEqual(0, info.get('ephemeral_gb')) self.assertIsNone(info.get('configdrive')) def test_parse_instance_info_missing_instance_source(self): # make sure error is raised when info is missing info = dict(INST_INFO_DICT) del info['image_source'] node = obj_utils.create_test_node( self.context, instance_info=info, driver_internal_info=DRV_INTERNAL_INFO_DICT, ) self.assertRaises(exception.MissingParameterValue, deploy_utils.parse_instance_info, node) def test_parse_instance_info_missing_root_gb(self): # make sure error is raised when info is missing info = dict(INST_INFO_DICT) del info['root_gb'] node = obj_utils.create_test_node( self.context, instance_info=info, driver_internal_info=DRV_INTERNAL_INFO_DICT, ) self.assertRaises(exception.MissingParameterValue, deploy_utils.parse_instance_info, node) def test_parse_instance_info_invalid_root_gb(self): info = dict(INST_INFO_DICT) info['root_gb'] = 'foobar' node = obj_utils.create_test_node( self.context, instance_info=info, driver_internal_info=DRV_INTERNAL_INFO_DICT, ) self.assertRaises(exception.InvalidParameterValue, deploy_utils.parse_instance_info, node) def test_parse_instance_info_valid_ephemeral_gb(self): ephemeral_gb = 10 ephemeral_fmt = 'test-fmt' info = dict(INST_INFO_DICT) info['ephemeral_gb'] = ephemeral_gb info['ephemeral_format'] = ephemeral_fmt node = obj_utils.create_test_node( self.context, instance_info=info, 
driver_internal_info=DRV_INTERNAL_INFO_DICT, ) data = deploy_utils.parse_instance_info(node) self.assertEqual(ephemeral_gb, data.get('ephemeral_gb')) self.assertEqual(ephemeral_fmt, data.get('ephemeral_format')) def test_parse_instance_info_unicode_swap_mb(self): swap_mb = u'10' swap_mb_int = 10 info = dict(INST_INFO_DICT) info['swap_mb'] = swap_mb node = obj_utils.create_test_node( self.context, instance_info=info, driver_internal_info=DRV_INTERNAL_INFO_DICT, ) data = deploy_utils.parse_instance_info(node) self.assertEqual(swap_mb_int, data.get('swap_mb')) def test_parse_instance_info_invalid_ephemeral_gb(self): info = dict(INST_INFO_DICT) info['ephemeral_gb'] = 'foobar' info['ephemeral_format'] = 'exttest' node = obj_utils.create_test_node( self.context, instance_info=info, driver_internal_info=DRV_INTERNAL_INFO_DICT, ) self.assertRaises(exception.InvalidParameterValue, deploy_utils.parse_instance_info, node) def test_parse_instance_info_valid_ephemeral_missing_format(self): ephemeral_gb = 10 ephemeral_fmt = 'test-fmt' info = dict(INST_INFO_DICT) info['ephemeral_gb'] = ephemeral_gb info['ephemeral_format'] = None self.config(default_ephemeral_format=ephemeral_fmt, group='pxe') node = obj_utils.create_test_node( self.context, instance_info=info, driver_internal_info=DRV_INTERNAL_INFO_DICT, ) instance_info = deploy_utils.parse_instance_info(node) self.assertEqual(ephemeral_fmt, instance_info['ephemeral_format']) def test_parse_instance_info_valid_preserve_ephemeral_true(self): info = dict(INST_INFO_DICT) for opt in ['true', 'TRUE', 'True', 't', 'on', 'yes', 'y', '1']: info['preserve_ephemeral'] = opt node = obj_utils.create_test_node( self.context, uuid=uuidutils.generate_uuid(), instance_info=info, driver_internal_info=DRV_INTERNAL_INFO_DICT, ) data = deploy_utils.parse_instance_info(node) self.assertTrue(data.get('preserve_ephemeral')) def test_parse_instance_info_valid_preserve_ephemeral_false(self): info = dict(INST_INFO_DICT) for opt in ['false', 'FALSE', 'False', 'f', 'off', 'no', 'n', '0']: info['preserve_ephemeral'] = opt node = obj_utils.create_test_node( self.context, uuid=uuidutils.generate_uuid(), instance_info=info, driver_internal_info=DRV_INTERNAL_INFO_DICT, ) data = deploy_utils.parse_instance_info(node) self.assertFalse(data.get('preserve_ephemeral')) def test_parse_instance_info_invalid_preserve_ephemeral(self): info = dict(INST_INFO_DICT) info['preserve_ephemeral'] = 'foobar' node = obj_utils.create_test_node( self.context, instance_info=info, driver_internal_info=DRV_INTERNAL_INFO_DICT, ) self.assertRaises(exception.InvalidParameterValue, deploy_utils.parse_instance_info, node) def test_parse_instance_info_invalid_ephemeral_disk(self): info = dict(INST_INFO_DICT) info['ephemeral_gb'] = 10 info['swap_mb'] = 0 info['root_gb'] = 20 info['preserve_ephemeral'] = True drv_internal_dict = {'instance': {'ephemeral_gb': 9, 'swap_mb': 0, 'root_gb': 20}} drv_internal_dict.update(DRV_INTERNAL_INFO_DICT) node = obj_utils.create_test_node( self.context, instance_info=info, driver_internal_info=drv_internal_dict, ) self.assertRaises(exception.InvalidParameterValue, deploy_utils.parse_instance_info, node) def test__check_disk_layout_unchanged_fails(self): info = dict(INST_INFO_DICT) info['ephemeral_gb'] = 10 info['swap_mb'] = 0 info['root_gb'] = 20 info['preserve_ephemeral'] = True drv_internal_dict = {'instance': {'ephemeral_gb': 20, 'swap_mb': 0, 'root_gb': 20}} drv_internal_dict.update(DRV_INTERNAL_INFO_DICT) node = obj_utils.create_test_node( self.context, instance_info=info, 
driver_internal_info=drv_internal_dict, ) self.assertRaises(exception.InvalidParameterValue, deploy_utils._check_disk_layout_unchanged, node, info) def test__check_disk_layout_unchanged(self): info = dict(INST_INFO_DICT) info['ephemeral_gb'] = 10 info['swap_mb'] = 0 info['root_gb'] = 20 info['preserve_ephemeral'] = True drv_internal_dict = {'instance': {'ephemeral_gb': 10, 'swap_mb': 0, 'root_gb': 20}} drv_internal_dict.update(DRV_INTERNAL_INFO_DICT) node = obj_utils.create_test_node( self.context, instance_info=info, driver_internal_info=drv_internal_dict, ) self.assertIsNone(deploy_utils._check_disk_layout_unchanged(node, info)) def test__save_disk_layout(self): info = dict(INST_INFO_DICT) info['ephemeral_gb'] = 10 info['swap_mb'] = 0 info['root_gb'] = 10 info['preserve_ephemeral'] = False node = obj_utils.create_test_node( self.context, instance_info=info, driver_internal_info=DRV_INTERNAL_INFO_DICT, ) iscsi_deploy._save_disk_layout(node, info) node.refresh() for param in ('ephemeral_gb', 'swap_mb', 'root_gb'): self.assertEqual( info[param], node.driver_internal_info['instance'][param] ) def test_parse_instance_info_configdrive(self): info = dict(INST_INFO_DICT) info['configdrive'] = 'http://1.2.3.4/cd' node = obj_utils.create_test_node( self.context, instance_info=info, driver_internal_info=DRV_INTERNAL_INFO_DICT, ) instance_info = deploy_utils.parse_instance_info(node) self.assertEqual('http://1.2.3.4/cd', instance_info['configdrive']) def test_parse_instance_info_nonglance_image(self): info = INST_INFO_DICT.copy() info['image_source'] = 'file:///image.qcow2' info['kernel'] = 'file:///image.vmlinuz' info['ramdisk'] = 'file:///image.initrd' node = obj_utils.create_test_node( self.context, instance_info=info, driver_internal_info=DRV_INTERNAL_INFO_DICT, ) deploy_utils.parse_instance_info(node) def test_parse_instance_info_nonglance_image_no_kernel(self): info = INST_INFO_DICT.copy() info['image_source'] = 'file:///image.qcow2' info['ramdisk'] = 'file:///image.initrd' node = obj_utils.create_test_node( self.context, instance_info=info, driver_internal_info=DRV_INTERNAL_INFO_DICT, ) self.assertRaises(exception.MissingParameterValue, deploy_utils.parse_instance_info, node) def test_parse_instance_info_whole_disk_image(self): driver_internal_info = dict(DRV_INTERNAL_INFO_DICT) driver_internal_info['is_whole_disk_image'] = True node = obj_utils.create_test_node( self.context, instance_info=INST_INFO_DICT, driver_internal_info=driver_internal_info, ) instance_info = deploy_utils.parse_instance_info(node) self.assertIsNotNone(instance_info.get('image_source')) self.assertIsNotNone(instance_info.get('root_gb')) self.assertEqual(0, instance_info.get('swap_mb')) self.assertEqual(0, instance_info.get('ephemeral_gb')) self.assertIsNone(instance_info.get('configdrive')) def test_parse_instance_info_whole_disk_image_missing_root(self): info = dict(INST_INFO_DICT) del info['root_gb'] node = obj_utils.create_test_node(self.context, instance_info=info) self.assertRaises(exception.InvalidParameterValue, deploy_utils.parse_instance_info, node) class IscsiDeployPrivateMethodsTestCase(db_base.DbTestCase): def setUp(self): super(IscsiDeployPrivateMethodsTestCase, self).setUp() n = { 'driver': 'fake_pxe', 'instance_info': INST_INFO_DICT, 'driver_info': DRV_INFO_DICT, 'driver_internal_info': DRV_INTERNAL_INFO_DICT, } mgr_utils.mock_the_extension_manager(driver="fake_pxe") self.node = obj_utils.create_test_node(self.context, **n) def test__get_image_dir_path(self): 
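# Instance images are cached per node: the two helpers below resolve to
# <CONF.pxe.images_path>/<node uuid> and <CONF.pxe.images_path>/<node
# uuid>/disk respectively, the same layout the cache_instance_image test
# later in this file relies on.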
self.assertEqual(os.path.join(CONF.pxe.images_path, self.node.uuid), iscsi_deploy._get_image_dir_path(self.node.uuid)) def test__get_image_file_path(self): self.assertEqual(os.path.join(CONF.pxe.images_path, self.node.uuid, 'disk'), iscsi_deploy._get_image_file_path(self.node.uuid)) class IscsiDeployMethodsTestCase(db_base.DbTestCase): def setUp(self): super(IscsiDeployMethodsTestCase, self).setUp() instance_info = dict(INST_INFO_DICT) instance_info['deploy_key'] = 'fake-56789' n = { 'driver': 'fake_pxe', 'instance_info': instance_info, 'driver_info': DRV_INFO_DICT, 'driver_internal_info': DRV_INTERNAL_INFO_DICT, } mgr_utils.mock_the_extension_manager(driver="fake_pxe") self.node = obj_utils.create_test_node(self.context, **n) @mock.patch.object(disk_utils, 'get_image_mb', autospec=True) def test_check_image_size(self, get_image_mb_mock): get_image_mb_mock.return_value = 1000 with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.instance_info['root_gb'] = 1 iscsi_deploy.check_image_size(task) get_image_mb_mock.assert_called_once_with( iscsi_deploy._get_image_file_path(task.node.uuid)) @mock.patch.object(disk_utils, 'get_image_mb', autospec=True) def test_check_image_size_fails(self, get_image_mb_mock): get_image_mb_mock.return_value = 1025 with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.instance_info['root_gb'] = 1 self.assertRaises(exception.InstanceDeployFailure, iscsi_deploy.check_image_size, task) get_image_mb_mock.assert_called_once_with( iscsi_deploy._get_image_file_path(task.node.uuid)) @mock.patch.object(deploy_utils, 'fetch_images', autospec=True) def test_cache_instance_images_master_path(self, mock_fetch_image): temp_dir = tempfile.mkdtemp() self.config(images_path=temp_dir, group='pxe') self.config(instance_master_path=os.path.join(temp_dir, 'instance_master_path'), group='pxe') fileutils.ensure_tree(CONF.pxe.instance_master_path) (uuid, image_path) = iscsi_deploy.cache_instance_image(None, self.node) mock_fetch_image.assert_called_once_with(None, mock.ANY, [(uuid, image_path)], True) self.assertEqual('glance://image_uuid', uuid) self.assertEqual(os.path.join(temp_dir, self.node.uuid, 'disk'), image_path) @mock.patch.object(ironic_utils, 'unlink_without_raise', autospec=True) @mock.patch.object(utils, 'rmtree_without_raise', autospec=True) @mock.patch.object(iscsi_deploy, 'InstanceImageCache', autospec=True) def test_destroy_images(self, mock_cache, mock_rmtree, mock_unlink): self.config(images_path='/path', group='pxe') iscsi_deploy.destroy_images('uuid') mock_cache.return_value.clean_up.assert_called_once_with() mock_unlink.assert_called_once_with('/path/uuid/disk') mock_rmtree.assert_called_once_with('/path/uuid') def _test_build_deploy_ramdisk_options(self, mock_alnum, api_url, expected_root_device=None, expected_boot_option='netboot', expected_boot_mode='bios'): fake_key = '0123456789ABCDEFGHIJKLMNOPQRSTUV' fake_disk = 'fake-disk' self.config(disk_devices=fake_disk, group='pxe') mock_alnum.return_value = fake_key expected_iqn = 'iqn.2008-10.org.openstack:%s' % self.node.uuid expected_opts = { 'iscsi_target_iqn': expected_iqn, 'deployment_id': self.node.uuid, 'deployment_key': fake_key, 'disk': fake_disk, 'ironic_api_url': api_url, 'boot_option': expected_boot_option, 'boot_mode': expected_boot_mode, 'coreos.configdrive': 0, } if expected_root_device: expected_opts['root_device'] = expected_root_device opts = iscsi_deploy.build_deploy_ramdisk_options(self.node) self.assertEqual(expected_opts, opts) 
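# NB: 32 is the length of the generated deployment key; the key is a
# random alphanumeric string that presumably serves as a shared secret,
# echoed back by the ramdisk (the 'key' kwarg in the continue_deploy
# tests) before the conductor acts on its callback.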
mock_alnum.assert_called_once_with(32) # assert deploy_key was injected in the node self.assertIn('deploy_key', self.node.instance_info) @mock.patch.object(keystone, 'get_service_url', autospec=True) @mock.patch.object(utils, 'random_alnum', autospec=True) def test_build_deploy_ramdisk_options(self, mock_alnum, mock_get_url): fake_api_url = 'http://127.0.0.1:6385' self.config(api_url=fake_api_url, group='conductor') self._test_build_deploy_ramdisk_options(mock_alnum, fake_api_url) # As we are getting the Ironic api url from the config file # assert keystone wasn't called self.assertFalse(mock_get_url.called) @mock.patch.object(keystone, 'get_service_url', autospec=True) @mock.patch.object(utils, 'random_alnum', autospec=True) def test_build_deploy_ramdisk_options_keystone(self, mock_alnum, mock_get_url): fake_api_url = 'http://127.0.0.1:6385' mock_get_url.return_value = fake_api_url self._test_build_deploy_ramdisk_options(mock_alnum, fake_api_url) # As the Ironic api url is not specified in the config file # assert we are getting it from keystone mock_get_url.assert_called_once_with() @mock.patch.object(keystone, 'get_service_url', autospec=True) @mock.patch.object(utils, 'random_alnum', autospec=True) def test_build_deploy_ramdisk_options_root_device(self, mock_alnum, mock_get_url): self.node.properties['root_device'] = {'wwn': 123456} expected = 'wwn=123456' fake_api_url = 'http://127.0.0.1:6385' self.config(api_url=fake_api_url, group='conductor') self._test_build_deploy_ramdisk_options(mock_alnum, fake_api_url, expected_root_device=expected) @mock.patch.object(keystone, 'get_service_url', autospec=True) @mock.patch.object(utils, 'random_alnum', autospec=True) def test_build_deploy_ramdisk_options_boot_option(self, mock_alnum, mock_get_url): self.node.instance_info = {'capabilities': '{"boot_option": "local"}'} expected = 'local' fake_api_url = 'http://127.0.0.1:6385' self.config(api_url=fake_api_url, group='conductor') self._test_build_deploy_ramdisk_options(mock_alnum, fake_api_url, expected_boot_option=expected) @mock.patch.object(keystone, 'get_service_url', autospec=True) @mock.patch.object(utils, 'random_alnum', autospec=True) def test_build_deploy_ramdisk_options_whole_disk_image(self, mock_alnum, mock_get_url): """Tests a hack to boot_option for whole disk images. This hack is in place to fix bug #1441556. 
""" self.node.instance_info = {'capabilities': '{"boot_option": "local"}'} dii = self.node.driver_internal_info dii['is_whole_disk_image'] = True self.node.driver_internal_info = dii self.node.save() expected = 'netboot' fake_api_url = 'http://127.0.0.1:6385' self.config(api_url=fake_api_url, group='conductor') self._test_build_deploy_ramdisk_options(mock_alnum, fake_api_url, expected_boot_option=expected) @mock.patch.object(iscsi_deploy, '_save_disk_layout', autospec=True) @mock.patch.object(iscsi_deploy, 'InstanceImageCache', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(deploy_utils, 'deploy_partition_image', autospec=True) def test_continue_deploy_fail(self, deploy_mock, power_mock, mock_image_cache, mock_disk_layout): kwargs = {'address': '123456', 'iqn': 'aaa-bbb', 'key': 'fake-56789'} deploy_mock.side_effect = iter([ exception.InstanceDeployFailure("test deploy error")]) self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: params = iscsi_deploy.get_deploy_info(task.node, **kwargs) self.assertRaises(exception.InstanceDeployFailure, iscsi_deploy.continue_deploy, task, **kwargs) self.assertEqual(states.DEPLOYFAIL, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) self.assertIsNotNone(task.node.last_error) deploy_mock.assert_called_once_with(**params) power_mock.assert_called_once_with(task, states.POWER_OFF) mock_image_cache.assert_called_once_with() mock_image_cache.return_value.clean_up.assert_called_once_with() self.assertFalse(mock_disk_layout.called) @mock.patch.object(iscsi_deploy, '_save_disk_layout', autospec=True) @mock.patch.object(iscsi_deploy, 'InstanceImageCache', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(deploy_utils, 'deploy_partition_image', autospec=True) def test_continue_deploy_ramdisk_fails(self, deploy_mock, power_mock, mock_image_cache, mock_disk_layout): kwargs = {'address': '123456', 'iqn': 'aaa-bbb', 'key': 'fake-56789', 'error': 'test ramdisk error'} self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.InstanceDeployFailure, iscsi_deploy.continue_deploy, task, **kwargs) self.assertIsNotNone(task.node.last_error) self.assertEqual(states.DEPLOYFAIL, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) power_mock.assert_called_once_with(task, states.POWER_OFF) mock_image_cache.assert_called_once_with() mock_image_cache.return_value.clean_up.assert_called_once_with() self.assertFalse(deploy_mock.called) self.assertFalse(mock_disk_layout.called) @mock.patch.object(iscsi_deploy, '_save_disk_layout', autospec=True) @mock.patch.object(iscsi_deploy, 'InstanceImageCache', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(deploy_utils, 'deploy_partition_image', autospec=True) def test_continue_deploy_fail_no_root_uuid_or_disk_id( self, deploy_mock, power_mock, mock_image_cache, mock_disk_layout): kwargs = {'address': '123456', 'iqn': 'aaa-bbb', 'key': 'fake-56789'} deploy_mock.return_value = {} self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() with 
task_manager.acquire(self.context, self.node.uuid, shared=False) as task: params = iscsi_deploy.get_deploy_info(task.node, **kwargs) self.assertRaises(exception.InstanceDeployFailure, iscsi_deploy.continue_deploy, task, **kwargs) self.assertEqual(states.DEPLOYFAIL, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) self.assertIsNotNone(task.node.last_error) deploy_mock.assert_called_once_with(**params) power_mock.assert_called_once_with(task, states.POWER_OFF) mock_image_cache.assert_called_once_with() mock_image_cache.return_value.clean_up.assert_called_once_with() self.assertFalse(mock_disk_layout.called) @mock.patch.object(iscsi_deploy, '_save_disk_layout', autospec=True) @mock.patch.object(iscsi_deploy, 'InstanceImageCache', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(deploy_utils, 'deploy_partition_image', autospec=True) def test_continue_deploy_fail_empty_root_uuid( self, deploy_mock, power_mock, mock_image_cache, mock_disk_layout): kwargs = {'address': '123456', 'iqn': 'aaa-bbb', 'key': 'fake-56789'} deploy_mock.return_value = {'root uuid': ''} self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: params = iscsi_deploy.get_deploy_info(task.node, **kwargs) self.assertRaises(exception.InstanceDeployFailure, iscsi_deploy.continue_deploy, task, **kwargs) self.assertEqual(states.DEPLOYFAIL, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) self.assertIsNotNone(task.node.last_error) deploy_mock.assert_called_once_with(**params) power_mock.assert_called_once_with(task, states.POWER_OFF) mock_image_cache.assert_called_once_with() mock_image_cache.return_value.clean_up.assert_called_once_with() self.assertFalse(mock_disk_layout.called) @mock.patch.object(iscsi_deploy, '_save_disk_layout', autospec=True) @mock.patch.object(iscsi_deploy, 'LOG', autospec=True) @mock.patch.object(iscsi_deploy, 'get_deploy_info', autospec=True) @mock.patch.object(iscsi_deploy, 'InstanceImageCache', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(deploy_utils, 'deploy_partition_image', autospec=True) def test_continue_deploy(self, deploy_mock, power_mock, mock_image_cache, mock_deploy_info, mock_log, mock_disk_layout): kwargs = {'address': '123456', 'iqn': 'aaa-bbb', 'key': 'fake-56789'} self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() mock_deploy_info.return_value = { 'address': '123456', 'boot_option': 'netboot', 'configdrive': "I've got the power", 'ephemeral_format': None, 'ephemeral_mb': 0, 'image_path': (u'/var/lib/ironic/images/1be26c0b-03f2-4d2e-ae87-' u'c02d7f33c123/disk'), 'iqn': 'aaa-bbb', 'lun': '1', 'node_uuid': u'1be26c0b-03f2-4d2e-ae87-c02d7f33c123', 'port': '3260', 'preserve_ephemeral': True, 'root_mb': 102400, 'swap_mb': 0, } log_params = mock_deploy_info.return_value.copy() # Make sure we don't log the full content of the configdrive log_params['configdrive'] = '***' expected_dict = { 'node': self.node.uuid, 'params': log_params, } uuid_dict_returned = {'root uuid': '12345678-87654321'} deploy_mock.return_value = uuid_dict_returned with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: mock_log.isEnabledFor.return_value = True retval = iscsi_deploy.continue_deploy(task, **kwargs) 
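# Note the assertion below checks the scrubbed copy: 'configdrive' was
# replaced with '***' in log_params above, so the test verifies that the
# potentially large and sensitive configdrive body never reaches the
# debug log.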
mock_log.debug.assert_called_once_with( mock.ANY, expected_dict) self.assertEqual(states.DEPLOYWAIT, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) self.assertIsNone(task.node.last_error) mock_image_cache.assert_called_once_with() mock_image_cache.return_value.clean_up.assert_called_once_with() self.assertEqual(uuid_dict_returned, retval) mock_disk_layout.assert_called_once_with(task.node, mock.ANY) @mock.patch.object(iscsi_deploy, 'LOG', autospec=True) @mock.patch.object(iscsi_deploy, 'get_deploy_info', autospec=True) @mock.patch.object(iscsi_deploy, 'InstanceImageCache', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(deploy_utils, 'deploy_disk_image', autospec=True) def test_continue_deploy_whole_disk_image( self, deploy_mock, power_mock, mock_image_cache, mock_deploy_info, mock_log): kwargs = {'address': '123456', 'iqn': 'aaa-bbb', 'key': 'fake-56789'} self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() mock_deploy_info.return_value = { 'address': '123456', 'image_path': (u'/var/lib/ironic/images/1be26c0b-03f2-4d2e-ae87-' u'c02d7f33c123/disk'), 'iqn': 'aaa-bbb', 'lun': '1', 'node_uuid': u'1be26c0b-03f2-4d2e-ae87-c02d7f33c123', 'port': '3260', } log_params = mock_deploy_info.return_value.copy() expected_dict = { 'node': self.node.uuid, 'params': log_params, } uuid_dict_returned = {'disk identifier': '87654321'} deploy_mock.return_value = uuid_dict_returned with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.driver_internal_info['is_whole_disk_image'] = True mock_log.isEnabledFor.return_value = True retval = iscsi_deploy.continue_deploy(task, **kwargs) mock_log.debug.assert_called_once_with( mock.ANY, expected_dict) self.assertEqual(states.DEPLOYWAIT, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) self.assertIsNone(task.node.last_error) mock_image_cache.assert_called_once_with() mock_image_cache.return_value.clean_up.assert_called_once_with() self.assertEqual(uuid_dict_returned, retval) def _test_get_deploy_info(self, extra_instance_info=None): if extra_instance_info is None: extra_instance_info = {} instance_info = self.node.instance_info instance_info['deploy_key'] = 'key' instance_info.update(extra_instance_info) self.node.instance_info = instance_info kwargs = {'address': '1.1.1.1', 'iqn': 'target-iqn', 'key': 'key'} ret_val = iscsi_deploy.get_deploy_info(self.node, **kwargs) self.assertEqual('1.1.1.1', ret_val['address']) self.assertEqual('target-iqn', ret_val['iqn']) return ret_val def test_get_deploy_info_boot_option_default(self): ret_val = self._test_get_deploy_info() self.assertEqual('netboot', ret_val['boot_option']) def test_get_deploy_info_netboot_specified(self): capabilities = {'capabilities': {'boot_option': 'netboot'}} ret_val = self._test_get_deploy_info(extra_instance_info=capabilities) self.assertEqual('netboot', ret_val['boot_option']) def test_get_deploy_info_localboot(self): capabilities = {'capabilities': {'boot_option': 'local'}} ret_val = self._test_get_deploy_info(extra_instance_info=capabilities) self.assertEqual('local', ret_val['boot_option']) def test_get_deploy_info_disk_label(self): capabilities = {'capabilities': {'disk_label': 'msdos'}} ret_val = self._test_get_deploy_info(extra_instance_info=capabilities) self.assertEqual('msdos', ret_val['disk_label']) def test_get_deploy_info_not_specified(self): ret_val = 
self._test_get_deploy_info() self.assertNotIn('disk_label', ret_val) @mock.patch.object(iscsi_deploy, 'continue_deploy', autospec=True) @mock.patch.object(iscsi_deploy, 'build_deploy_ramdisk_options', autospec=True) def test_do_agent_iscsi_deploy_okay(self, build_options_mock, continue_deploy_mock): build_options_mock.return_value = {'deployment_key': 'abcdef', 'iscsi_target_iqn': 'iqn-qweqwe'} agent_client_mock = mock.MagicMock(spec_set=agent_client.AgentClient) agent_client_mock.start_iscsi_target.return_value = { 'command_status': 'SUCCESS', 'command_error': None} driver_internal_info = {'agent_url': 'http://1.2.3.4:1234'} self.node.driver_internal_info = driver_internal_info self.node.save() uuid_dict_returned = {'root uuid': 'some-root-uuid'} continue_deploy_mock.return_value = uuid_dict_returned with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ret_val = iscsi_deploy.do_agent_iscsi_deploy( task, agent_client_mock) build_options_mock.assert_called_once_with(task.node) agent_client_mock.start_iscsi_target.assert_called_once_with( task.node, 'iqn-qweqwe') continue_deploy_mock.assert_called_once_with( task, error=None, iqn='iqn-qweqwe', key='abcdef', address='1.2.3.4') self.assertEqual( 'some-root-uuid', task.node.driver_internal_info['root_uuid_or_disk_id']) self.assertEqual(ret_val, uuid_dict_returned) @mock.patch.object(iscsi_deploy, 'build_deploy_ramdisk_options', autospec=True) def test_do_agent_iscsi_deploy_start_iscsi_failure(self, build_options_mock): build_options_mock.return_value = {'deployment_key': 'abcdef', 'iscsi_target_iqn': 'iqn-qweqwe'} agent_client_mock = mock.MagicMock(spec_set=agent_client.AgentClient) agent_client_mock.start_iscsi_target.return_value = { 'command_status': 'FAILED', 'command_error': 'booom'} self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.InstanceDeployFailure, iscsi_deploy.do_agent_iscsi_deploy, task, agent_client_mock) build_options_mock.assert_called_once_with(task.node) agent_client_mock.start_iscsi_target.assert_called_once_with( task.node, 'iqn-qweqwe') self.node.refresh() self.assertEqual(states.DEPLOYFAIL, self.node.provision_state) self.assertEqual(states.ACTIVE, self.node.target_provision_state) self.assertIsNotNone(self.node.last_error) def test_validate_pass_bootloader_info_input(self): params = {'key': 'some-random-key', 'address': '1.2.3.4', 'error': '', 'status': 'SUCCEEDED'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.instance_info['deploy_key'] = 'some-random-key' # Assert that the method doesn't raise iscsi_deploy.validate_pass_bootloader_info_input(task, params) def test_validate_pass_bootloader_info_missing_status(self): params = {'key': 'some-random-key', 'address': '1.2.3.4'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.MissingParameterValue, iscsi_deploy.validate_pass_bootloader_info_input, task, params) def test_validate_pass_bootloader_info_missing_key(self): params = {'status': 'SUCCEEDED', 'address': '1.2.3.4'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.MissingParameterValue, iscsi_deploy.validate_pass_bootloader_info_input, task, params) def test_validate_pass_bootloader_info_missing_address(self): params = {'status': 'SUCCEEDED', 'key': 'some-random-key'} 
with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.MissingParameterValue, iscsi_deploy.validate_pass_bootloader_info_input, task, params) def test_validate_pass_bootloader_info_input_invalid_key(self): params = {'key': 'some-other-key', 'address': '1.2.3.4', 'status': 'SUCCEEDED'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.instance_info['deploy_key'] = 'some-random-key' self.assertRaises(exception.InvalidParameterValue, iscsi_deploy.validate_pass_bootloader_info_input, task, params) def test_validate_bootloader_install_status(self): kwargs = {'key': 'abcdef', 'status': 'SUCCEEDED', 'error': ''} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.instance_info['deploy_key'] = 'abcdef' # Nothing much to assert except that it shouldn't raise. iscsi_deploy.validate_bootloader_install_status(task, kwargs) @mock.patch.object(deploy_utils, 'set_failed_state', autospec=True) def test_validate_bootloader_install_status_install_failed( self, set_fail_state_mock): kwargs = {'key': 'abcdef', 'status': 'FAILED', 'error': 'some-error'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.provision_state = states.DEPLOYING task.node.target_provision_state = states.ACTIVE task.node.instance_info['deploy_key'] = 'abcdef' self.assertRaises(exception.InstanceDeployFailure, iscsi_deploy.validate_bootloader_install_status, task, kwargs) set_fail_state_mock.assert_called_once_with(task, mock.ANY) @mock.patch.object(deploy_utils, 'notify_ramdisk_to_proceed', autospec=True) def test_finish_deploy(self, notify_mock): self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: iscsi_deploy.finish_deploy(task, '1.2.3.4') notify_mock.assert_called_once_with('1.2.3.4') self.assertEqual(states.ACTIVE, task.node.provision_state) self.assertEqual(states.NOSTATE, task.node.target_provision_state) @mock.patch.object(deploy_utils, 'set_failed_state', autospec=True) @mock.patch.object(deploy_utils, 'notify_ramdisk_to_proceed', autospec=True) def test_finish_deploy_notify_fails(self, notify_mock, set_fail_state_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: notify_mock.side_effect = RuntimeError() self.assertRaises(exception.InstanceDeployFailure, iscsi_deploy.finish_deploy, task, '1.2.3.4') set_fail_state_mock.assert_called_once_with(task, mock.ANY) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(deploy_utils, 'notify_ramdisk_to_proceed', autospec=True) def test_finish_deploy_ssh_with_local_boot(self, notify_mock, node_power_mock): instance_info = dict(INST_INFO_DICT) instance_info['capabilities'] = {'boot_option': 'local'} n = { 'uuid': uuidutils.generate_uuid(), 'driver': 'fake_ssh', 'instance_info': instance_info, 'provision_state': states.DEPLOYING, 'target_provision_state': states.ACTIVE, } mgr_utils.mock_the_extension_manager(driver="fake_ssh") node = obj_utils.create_test_node(self.context, **n) with task_manager.acquire(self.context, node.uuid, shared=False) as task: iscsi_deploy.finish_deploy(task, '1.2.3.4') notify_mock.assert_called_once_with('1.2.3.4') self.assertEqual(states.ACTIVE, task.node.provision_state) self.assertEqual(states.NOSTATE, task.node.target_provision_state) node_power_mock.assert_called_once_with(task, states.REBOOT) 
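
    # NOTE(editor): a minimal, self-contained sketch of the shared-secret
    # pattern that the validate_pass_bootloader_info_input tests above
    # exercise: the ramdisk must echo back the 'deploy_key' stored in the
    # node's instance_info, and required parameters must be present. This
    # is an illustration only, not ironic's implementation; the helper
    # name and the use of plain KeyError / ValueError (instead of
    # exception.MissingParameterValue / exception.InvalidParameterValue)
    # are hypothetical stand-ins.
    def _example_check_bootloader_params(self, params, instance_info):
        for field in ('key', 'address', 'status'):
            if field not in params:
                raise KeyError('missing parameter: %s' % field)
        if params['key'] != instance_info.get('deploy_key'):
            raise ValueError('deploy key does not match')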
    @mock.patch.object(keystone, 'get_service_url', autospec=True)
    def test_validate_good_api_url_from_config_file(self, mock_ks):
        # not present in the keystone catalog
        mock_ks.side_effect = exception.KeystoneFailure
        self.config(group='conductor', api_url='http://foo')
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            iscsi_deploy.validate(task)
        self.assertFalse(mock_ks.called)

    @mock.patch.object(keystone, 'get_service_url', autospec=True)
    def test_validate_good_api_url_from_keystone(self, mock_ks):
        # present in the keystone catalog
        mock_ks.return_value = 'http://127.0.0.1:1234'
        # not present in the config file
        self.config(group='conductor', api_url=None)
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            iscsi_deploy.validate(task)
        mock_ks.assert_called_once_with()

    @mock.patch.object(keystone, 'get_service_url', autospec=True)
    def test_validate_fail_no_api_url(self, mock_ks):
        # not present in the keystone catalog
        mock_ks.side_effect = exception.KeystoneFailure
        # not present in the config file
        self.config(group='conductor', api_url=None)
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              iscsi_deploy.validate, task)
        mock_ks.assert_called_once_with()

    def test_validate_invalid_root_device_hints(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.properties['root_device'] = {'size': 'not-int'}
            self.assertRaises(exception.InvalidParameterValue,
                              iscsi_deploy.validate, task)


class ISCSIDeployTestCase(db_base.DbTestCase):

    def setUp(self):
        super(ISCSIDeployTestCase, self).setUp()
        mgr_utils.mock_the_extension_manager(driver="fake_pxe")
        self.driver = driver_factory.get_driver("fake_pxe")
        self.driver.vendor = iscsi_deploy.VendorPassthru()
        self.node = obj_utils.create_test_node(
            self.context, driver='fake_pxe',
            instance_info=INST_INFO_DICT,
            driver_info=DRV_INFO_DICT,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )
        self.node.driver_internal_info['agent_url'] = 'http://1.2.3.4:1234'
        self.task = mock.MagicMock(spec=task_manager.TaskManager)
        self.task.shared = False
        self.task.node = self.node
        self.task.driver = self.driver
        self.task.context = self.context
        dhcp_factory.DHCPFactory._dhcp_provider = None

    def test_get_properties(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertEqual({}, task.driver.deploy.get_properties())

    @mock.patch.object(iscsi_deploy, 'validate', autospec=True)
    @mock.patch.object(deploy_utils, 'validate_capabilities', autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True)
    def test_validate(self, pxe_validate_mock,
                      validate_capabilities_mock, validate_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.deploy.validate(task)
            pxe_validate_mock.assert_called_once_with(task.driver.boot, task)
            validate_capabilities_mock.assert_called_once_with(task.node)
            validate_mock.assert_called_once_with(task)

    @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True)
    def test_prepare_node_active(self, prepare_instance_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.provision_state = states.ACTIVE
            task.driver.deploy.prepare(task)
            prepare_instance_mock.assert_called_once_with(
                task.driver.boot, task)

    @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True)
    @mock.patch.object(iscsi_deploy, 'build_deploy_ramdisk_options',
                       autospec=True)
@mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk', autospec=True) def test_prepare_node_deploying(self, mock_prepare_ramdisk, mock_iscsi_options, mock_agent_options): mock_iscsi_options.return_value = {'a': 'b'} mock_agent_options.return_value = {'c': 'd'} with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.provision_state = states.DEPLOYWAIT task.driver.deploy.prepare(task) mock_iscsi_options.assert_called_once_with(task.node) mock_agent_options.assert_called_once_with(task.node) mock_prepare_ramdisk.assert_called_once_with( task.driver.boot, task, {'a': 'b', 'c': 'd'}) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(iscsi_deploy, 'check_image_size', autospec=True) @mock.patch.object(iscsi_deploy, 'cache_instance_image', autospec=True) def test_deploy(self, mock_cache_instance_image, mock_check_image_size, mock_node_power_action): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: state = task.driver.deploy.deploy(task) self.assertEqual(state, states.DEPLOYWAIT) mock_cache_instance_image.assert_called_once_with( self.context, task.node) mock_check_image_size.assert_called_once_with(task) mock_node_power_action.assert_called_once_with(task, states.REBOOT) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def test_tear_down(self, node_power_action_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: state = task.driver.deploy.tear_down(task) self.assertEqual(state, states.DELETED) node_power_action_mock.assert_called_once_with(task, states.POWER_OFF) @mock.patch('ironic.common.dhcp_factory.DHCPFactory._set_dhcp_provider') @mock.patch('ironic.common.dhcp_factory.DHCPFactory.clean_dhcp') @mock.patch.object(pxe.PXEBoot, 'clean_up_instance', autospec=True) @mock.patch.object(pxe.PXEBoot, 'clean_up_ramdisk', autospec=True) @mock.patch.object(iscsi_deploy, 'destroy_images', autospec=True) def test_clean_up(self, destroy_images_mock, clean_up_ramdisk_mock, clean_up_instance_mock, clean_dhcp_mock, set_dhcp_provider_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.deploy.clean_up(task) destroy_images_mock.assert_called_once_with(task.node.uuid) clean_up_ramdisk_mock.assert_called_once_with( task.driver.boot, task) clean_up_instance_mock.assert_called_once_with( task.driver.boot, task) set_dhcp_provider_mock.assert_called_once_with() clean_dhcp_mock.assert_called_once_with(task) @mock.patch.object(deploy_utils, 'prepare_inband_cleaning', autospec=True) def test_prepare_cleaning(self, prepare_inband_cleaning_mock): prepare_inband_cleaning_mock.return_value = states.CLEANWAIT with task_manager.acquire(self.context, self.node.uuid) as task: self.assertEqual( states.CLEANWAIT, task.driver.deploy.prepare_cleaning(task)) prepare_inband_cleaning_mock.assert_called_once_with( task, manage_boot=True) @mock.patch.object(deploy_utils, 'tear_down_inband_cleaning', autospec=True) def test_tear_down_cleaning(self, tear_down_cleaning_mock): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.deploy.tear_down_cleaning(task) tear_down_cleaning_mock.assert_called_once_with( task, manage_boot=True) @mock.patch('ironic.drivers.modules.deploy_utils.agent_get_clean_steps', autospec=True) def test_get_clean_steps(self, mock_get_clean_steps): # Test getting clean steps self.config(group='deploy', erase_devices_priority=10) mock_steps = [{'priority': 10, 'interface': 'deploy', 'step': 
'erase_devices'}] self.node.driver_internal_info = {'agent_url': 'foo'} self.node.save() mock_get_clean_steps.return_value = mock_steps with task_manager.acquire(self.context, self.node.uuid) as task: steps = task.driver.deploy.get_clean_steps(task) mock_get_clean_steps.assert_called_once_with( task, interface='deploy', override_priorities={ 'erase_devices': 10}) self.assertEqual(mock_steps, steps) @mock.patch('ironic.drivers.modules.deploy_utils.agent_get_clean_steps', autospec=True) def test_get_clean_steps_no_agent_url(self, mock_get_clean_steps): # Test getting clean steps self.node.driver_internal_info = {} self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: steps = task.driver.deploy.get_clean_steps(task) self.assertEqual([], steps) self.assertFalse(mock_get_clean_steps.called) @mock.patch.object(deploy_utils, 'agent_execute_clean_step', autospec=True) def test_execute_clean_step(self, agent_execute_clean_step_mock): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.deploy.execute_clean_step( task, {'some-step': 'step-info'}) agent_execute_clean_step_mock.assert_called_once_with( task, {'some-step': 'step-info'}) class TestVendorPassthru(db_base.DbTestCase): def setUp(self): super(TestVendorPassthru, self).setUp() mgr_utils.mock_the_extension_manager() self.driver = driver_factory.get_driver("fake") self.driver.vendor = iscsi_deploy.VendorPassthru() self.node = obj_utils.create_test_node( self.context, driver='fake', instance_info=INST_INFO_DICT, driver_info=DRV_INFO_DICT, driver_internal_info=DRV_INTERNAL_INFO_DICT, ) self.node.driver_internal_info['agent_url'] = 'http://1.2.3.4:1234' self.task = mock.MagicMock(spec=task_manager.TaskManager) self.task.shared = False self.task.node = self.node self.task.driver = self.driver self.task.context = self.context def test_validate_good(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.instance_info['deploy_key'] = 'fake-56789' task.driver.vendor.validate(task, method='pass_deploy_info', address='123456', iqn='aaa-bbb', key='fake-56789') def test_validate_pass_deploy_info_during_cleaning(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.provision_state = states.CLEANWAIT # Assert that it doesn't raise. 
self.assertIsNone( task.driver.vendor.validate(task, method='pass_deploy_info', address='123456', iqn='aaa-bbb', key='fake-56789')) def test_validate_fail(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.vendor.validate, task, method='pass_deploy_info', key='fake-56789') def test_validate_key_notmatch(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.vendor.validate, task, method='pass_deploy_info', address='123456', iqn='aaa-bbb', key='fake-12345') @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'notify_conductor_resume_clean', autospec=True) @mock.patch.object(manager_utils, 'set_node_cleaning_steps', autospec=True) @mock.patch.object(iscsi_deploy, 'LOG', spec=['warning']) def test__initiate_cleaning(self, log_mock, set_node_cleaning_steps_mock, notify_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.vendor._initiate_cleaning(task) log_mock.warning.assert_called_once_with(mock.ANY, mock.ANY) set_node_cleaning_steps_mock.assert_called_once_with(task) notify_mock.assert_called_once_with(self.driver.vendor, task) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'notify_conductor_resume_clean', autospec=True) @mock.patch.object(manager_utils, 'cleaning_error_handler', autospec=True) @mock.patch.object(manager_utils, 'set_node_cleaning_steps', autospec=True) @mock.patch.object(iscsi_deploy, 'LOG', spec=['warning']) def test__initiate_cleaning_exception( self, log_mock, set_node_cleaning_steps_mock, cleaning_error_handler_mock, notify_mock): set_node_cleaning_steps_mock.side_effect = RuntimeError() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.vendor._initiate_cleaning(task) log_mock.warning.assert_called_once_with(mock.ANY, mock.ANY) set_node_cleaning_steps_mock.assert_called_once_with(task) cleaning_error_handler_mock.assert_called_once_with(task, mock.ANY) self.assertFalse(notify_mock.called) @mock.patch.object(fake.FakeBoot, 'prepare_instance', autospec=True) @mock.patch.object(deploy_utils, 'notify_ramdisk_to_proceed', autospec=True) @mock.patch.object(iscsi_deploy, 'InstanceImageCache', autospec=True) @mock.patch.object(deploy_utils, 'deploy_partition_image', autospec=True) def _test_pass_deploy_info_deploy(self, is_localboot, mock_deploy, mock_image_cache, notify_mock, fakeboot_prepare_instance_mock): # set local boot i_info = self.node.instance_info if is_localboot: i_info['capabilities'] = '{"boot_option": "local"}' i_info['deploy_key'] = 'fake-56789' self.node.instance_info = i_info self.node.power_state = states.POWER_ON self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() root_uuid = "12345678-1234-1234-1234-1234567890abcxyz" mock_deploy.return_value = {'root uuid': root_uuid} with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.vendor.pass_deploy_info( task, address='123456', iqn='aaa-bbb', key='fake-56789') self.node.refresh() self.assertEqual(states.POWER_ON, self.node.power_state) self.assertIn('root_uuid_or_disk_id', self.node.driver_internal_info) self.assertIsNone(self.node.last_error) mock_image_cache.assert_called_once_with() mock_image_cache.return_value.clean_up.assert_called_once_with() notify_mock.assert_called_once_with('123456') 
fakeboot_prepare_instance_mock.assert_called_once_with(mock.ANY, task) @mock.patch.object(fake.FakeBoot, 'prepare_instance', autospec=True) @mock.patch.object(deploy_utils, 'notify_ramdisk_to_proceed', autospec=True) @mock.patch.object(iscsi_deploy, 'InstanceImageCache', autospec=True) @mock.patch.object(deploy_utils, 'deploy_disk_image', autospec=True) def _test_pass_deploy_info_whole_disk_image(self, is_localboot, mock_deploy, mock_image_cache, notify_mock, fakeboot_prep_inst_mock): i_info = self.node.instance_info # set local boot if is_localboot: i_info['capabilities'] = '{"boot_option": "local"}' i_info['deploy_key'] = 'fake-56789' self.node.instance_info = i_info self.node.power_state = states.POWER_ON self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() disk_id = '0x12345678' mock_deploy.return_value = {'disk identifier': disk_id} with task_manager.acquire(self.context, self.node.uuid) as task: task.node.driver_internal_info['is_whole_disk_image'] = True task.driver.vendor.pass_deploy_info(task, address='123456', iqn='aaa-bbb', key='fake-56789') self.node.refresh() self.assertEqual(states.POWER_ON, self.node.power_state) self.assertIsNone(self.node.last_error) mock_image_cache.assert_called_once_with() mock_image_cache.return_value.clean_up.assert_called_once_with() notify_mock.assert_called_once_with('123456') fakeboot_prep_inst_mock.assert_called_once_with(mock.ANY, task) def test_pass_deploy_info_deploy(self): self._test_pass_deploy_info_deploy(False) self.assertEqual(states.ACTIVE, self.node.provision_state) self.assertEqual(states.NOSTATE, self.node.target_provision_state) def test_pass_deploy_info_localboot(self): self._test_pass_deploy_info_deploy(True) self.assertEqual(states.DEPLOYWAIT, self.node.provision_state) self.assertEqual(states.ACTIVE, self.node.target_provision_state) def test_pass_deploy_info_whole_disk_image(self): self._test_pass_deploy_info_whole_disk_image(False) self.assertEqual(states.ACTIVE, self.node.provision_state) self.assertEqual(states.NOSTATE, self.node.target_provision_state) def test_pass_deploy_info_whole_disk_image_localboot(self): self._test_pass_deploy_info_whole_disk_image(True) self.assertEqual(states.ACTIVE, self.node.provision_state) self.assertEqual(states.NOSTATE, self.node.target_provision_state) def test_pass_deploy_info_invalid(self): self.node.power_state = states.POWER_ON self.node.provision_state = states.AVAILABLE self.node.target_provision_state = states.NOSTATE self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.InvalidState, task.driver.vendor.pass_deploy_info, task, address='123456', iqn='aaa-bbb', key='fake-56789', error='test ramdisk error') self.node.refresh() self.assertEqual(states.AVAILABLE, self.node.provision_state) self.assertEqual(states.NOSTATE, self.node.target_provision_state) self.assertEqual(states.POWER_ON, self.node.power_state) @mock.patch.object(iscsi_deploy.VendorPassthru, 'pass_deploy_info') def test_pass_deploy_info_lock_elevated(self, mock_deploy_info): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.vendor.pass_deploy_info( task, address='123456', iqn='aaa-bbb', key='fake-56789') # lock elevated w/o exception self.assertEqual(1, mock_deploy_info.call_count, "pass_deploy_info was not called once.") @mock.patch.object(iscsi_deploy.VendorPassthru, '_initiate_cleaning', autospec=True) def test_pass_deploy_info_cleaning(self, initiate_cleaning_mock): with 
task_manager.acquire(self.context, self.node.uuid) as task: task.node.provision_state = states.CLEANWAIT task.driver.vendor.pass_deploy_info( task, address='123456', iqn='aaa-bbb', key='fake-56789') initiate_cleaning_mock.assert_called_once_with( task.driver.vendor, task) # Asserting if we are still on CLEANWAIT state confirms that # we return from pass_deploy_info method after initiating # cleaning. self.assertEqual(states.CLEANWAIT, task.node.provision_state) def test_vendor_routes(self): expected = ['heartbeat', 'pass_deploy_info', 'pass_bootloader_install_info'] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: vendor_routes = task.driver.vendor.vendor_routes self.assertIsInstance(vendor_routes, dict) self.assertEqual(sorted(expected), sorted(list(vendor_routes))) def test_driver_routes(self): expected = ['lookup'] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: driver_routes = task.driver.vendor.driver_routes self.assertIsInstance(driver_routes, dict) self.assertEqual(sorted(expected), sorted(list(driver_routes))) @mock.patch.object(iscsi_deploy, 'validate_bootloader_install_status', autospec=True) @mock.patch.object(iscsi_deploy, 'finish_deploy', autospec=True) def test_pass_bootloader_install_info(self, finish_deploy_mock, validate_input_mock): kwargs = {'method': 'pass_deploy_info', 'address': '123456'} self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.vendor.pass_bootloader_install_info(task, **kwargs) finish_deploy_mock.assert_called_once_with(task, '123456') validate_input_mock.assert_called_once_with(task, kwargs) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'reboot_and_finish_deploy', autospec=True) @mock.patch.object(iscsi_deploy, 'do_agent_iscsi_deploy', autospec=True) def test_continue_deploy_netboot(self, do_agent_iscsi_deploy_mock, reboot_and_finish_deploy_mock): uuid_dict_returned = {'root uuid': 'some-root-uuid'} do_agent_iscsi_deploy_mock.return_value = uuid_dict_returned self.driver.vendor.continue_deploy(self.task) do_agent_iscsi_deploy_mock.assert_called_once_with( self.task, self.driver.vendor._client) reboot_and_finish_deploy_mock.assert_called_once_with( mock.ANY, self.task) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'reboot_and_finish_deploy', autospec=True) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'configure_local_boot', autospec=True) @mock.patch.object(iscsi_deploy, 'do_agent_iscsi_deploy', autospec=True) def test_continue_deploy_localboot(self, do_agent_iscsi_deploy_mock, configure_local_boot_mock, reboot_and_finish_deploy_mock): self.node.instance_info = { 'capabilities': {'boot_option': 'local'}} self.node.save() uuid_dict_returned = {'root uuid': 'some-root-uuid'} do_agent_iscsi_deploy_mock.return_value = uuid_dict_returned self.driver.vendor.continue_deploy(self.task) do_agent_iscsi_deploy_mock.assert_called_once_with( self.task, self.driver.vendor._client) configure_local_boot_mock.assert_called_once_with( self.task.driver.vendor, self.task, root_uuid='some-root-uuid', efi_system_part_uuid=None) reboot_and_finish_deploy_mock.assert_called_once_with( self.task.driver.vendor, self.task) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'reboot_and_finish_deploy', autospec=True) @mock.patch.object(agent_base_vendor.BaseAgentVendor, 'configure_local_boot', autospec=True) @mock.patch.object(iscsi_deploy, 
'do_agent_iscsi_deploy', autospec=True) def test_continue_deploy_localboot_uefi(self, do_agent_iscsi_deploy_mock, configure_local_boot_mock, reboot_and_finish_deploy_mock): self.node.instance_info = { 'capabilities': {'boot_option': 'local'}} self.node.save() uuid_dict_returned = {'root uuid': 'some-root-uuid', 'efi system partition uuid': 'efi-part-uuid'} do_agent_iscsi_deploy_mock.return_value = uuid_dict_returned self.driver.vendor.continue_deploy(self.task) do_agent_iscsi_deploy_mock.assert_called_once_with( self.task, self.driver.vendor._client) configure_local_boot_mock.assert_called_once_with( self.task.driver.vendor, self.task, root_uuid='some-root-uuid', efi_system_part_uuid='efi-part-uuid') reboot_and_finish_deploy_mock.assert_called_once_with( self.task.driver.vendor, self.task) # Cleanup of iscsi_deploy with pxe boot interface class CleanUpFullFlowTestCase(db_base.DbTestCase): def setUp(self): super(CleanUpFullFlowTestCase, self).setUp() self.config(image_cache_size=0, group='pxe') # Configure node mgr_utils.mock_the_extension_manager(driver="fake_pxe") instance_info = INST_INFO_DICT instance_info['deploy_key'] = 'fake-56789' self.node = obj_utils.create_test_node( self.context, driver='fake_pxe', instance_info=instance_info, driver_info=DRV_INFO_DICT, driver_internal_info=DRV_INTERNAL_INFO_DICT, ) self.port = obj_utils.create_test_port(self.context, node_id=self.node.id) # Configure temporary directories pxe_temp_dir = tempfile.mkdtemp() self.config(tftp_root=pxe_temp_dir, group='pxe') tftp_master_dir = os.path.join(CONF.pxe.tftp_root, 'tftp_master') self.config(tftp_master_path=tftp_master_dir, group='pxe') os.makedirs(tftp_master_dir) instance_temp_dir = tempfile.mkdtemp() self.config(images_path=instance_temp_dir, group='pxe') instance_master_dir = os.path.join(CONF.pxe.images_path, 'instance_master') self.config(instance_master_path=instance_master_dir, group='pxe') os.makedirs(instance_master_dir) self.pxe_config_dir = os.path.join(CONF.pxe.tftp_root, 'pxelinux.cfg') os.makedirs(self.pxe_config_dir) # Populate some file names self.master_kernel_path = os.path.join(CONF.pxe.tftp_master_path, 'kernel') self.master_instance_path = os.path.join(CONF.pxe.instance_master_path, 'image_uuid') self.node_tftp_dir = os.path.join(CONF.pxe.tftp_root, self.node.uuid) os.makedirs(self.node_tftp_dir) self.kernel_path = os.path.join(self.node_tftp_dir, 'kernel') self.node_image_dir = iscsi_deploy._get_image_dir_path(self.node.uuid) os.makedirs(self.node_image_dir) self.image_path = iscsi_deploy._get_image_file_path(self.node.uuid) self.config_path = pxe_utils.get_pxe_config_file_path(self.node.uuid) self.mac_path = pxe_utils._get_pxe_mac_path(self.port.address) # Create files self.files = [self.config_path, self.master_kernel_path, self.master_instance_path] for fname in self.files: # NOTE(dtantsur): files with 0 size won't be cleaned up with open(fname, 'w') as fp: fp.write('test') os.link(self.config_path, self.mac_path) os.link(self.master_kernel_path, self.kernel_path) os.link(self.master_instance_path, self.image_path) dhcp_factory.DHCPFactory._dhcp_provider = None @mock.patch('ironic.common.dhcp_factory.DHCPFactory._set_dhcp_provider') @mock.patch('ironic.common.dhcp_factory.DHCPFactory.clean_dhcp') @mock.patch.object(pxe, '_get_instance_image_info', autospec=True) @mock.patch.object(pxe, '_get_deploy_image_info', autospec=True) def test_clean_up_with_master(self, mock_get_deploy_image_info, mock_get_instance_image_info, clean_dhcp_mock, set_dhcp_provider_mock): image_info = 
        {'kernel': ('kernel_uuid', self.kernel_path)}
        mock_get_instance_image_info.return_value = image_info
        mock_get_deploy_image_info.return_value = {}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.deploy.clean_up(task)
            mock_get_instance_image_info.assert_called_with(task.node,
                                                            task.context)
            mock_get_deploy_image_info.assert_called_with(task.node)
            set_dhcp_provider_mock.assert_called_once_with()
            clean_dhcp_mock.assert_called_once_with(task)
        for path in ([self.kernel_path, self.image_path, self.config_path]
                     + self.files):
            self.assertFalse(os.path.exists(path),
                             '%s is not expected to exist' % path)


ironic-5.1.0/ironic/tests/unit/drivers/third_party_driver_mocks.py

# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""This module detects whether third-party libraries, utilized by third-party
drivers, are present on the system. If they are not, it mocks them and tinkers
with sys.modules so that the drivers can be loaded by unit tests, and the unit
tests can continue to test the functionality of those drivers without the
respective external libraries' actually being present.

Any external library required by a third-party driver should be mocked here.
Current list of mocked libraries:

- seamicroclient
- ipminative
- proliantutils
- pysnmp
- scciclient
- oneview_client
- pywsman
- python-dracclient
"""

import sys

import mock
from oslo_utils import importutils
import six

from ironic.drivers.modules import ipmitool
from ironic.tests.unit.drivers import third_party_driver_mock_specs \
    as mock_specs


# attempt to load the external 'seamicroclient' library, which is
# required by the optional drivers.modules.seamicro module
seamicroclient = importutils.try_import("seamicroclient")
if not seamicroclient:
    smc = mock.MagicMock(spec_set=mock_specs.SEAMICRO_SPEC)
    smc.client = mock.MagicMock(spec_set=mock_specs.SEAMICRO_CLIENT_MOD_SPEC)
    smc.exceptions = mock.MagicMock(spec_set=mock_specs.SEAMICRO_EXC_SPEC)
    smc.exceptions.ClientException = Exception
    smc.exceptions.UnsupportedVersion = Exception
    sys.modules['seamicroclient'] = smc
    sys.modules['seamicroclient.client'] = smc.client
    sys.modules['seamicroclient.exceptions'] = smc.exceptions

# if anything has loaded the seamicro driver yet, reload it now that
# the external library has been mocked
if 'ironic.drivers.modules.seamicro' in sys.modules:
    six.moves.reload_module(sys.modules['ironic.drivers.modules.seamicro'])

# IPMITool driver checks the system for presence of 'ipmitool' binary during
# __init__. We bypass that check in order to run the unit tests, which do not
# depend on 'ipmitool' being on the system.
ipmitool.TIMING_SUPPORT = False
ipmitool.DUAL_BRIDGE_SUPPORT = False
ipmitool.SINGLE_BRIDGE_SUPPORT = False

pyghmi = importutils.try_import("pyghmi")
if not pyghmi:
    p = mock.MagicMock(spec_set=mock_specs.PYGHMI_SPEC)
    p.exceptions = mock.MagicMock(spec_set=mock_specs.PYGHMI_EXC_SPEC)
    p.exceptions.IpmiException = Exception
    p.ipmi = mock.MagicMock(spec_set=mock_specs.PYGHMI_IPMI_SPEC)
    p.ipmi.command = mock.MagicMock(spec_set=mock_specs.PYGHMI_IPMICMD_SPEC)
    p.ipmi.command.Command = mock.MagicMock(spec_set=[])
    sys.modules['pyghmi'] = p
    sys.modules['pyghmi.exceptions'] = p.exceptions
    sys.modules['pyghmi.ipmi'] = p.ipmi
    sys.modules['pyghmi.ipmi.command'] = p.ipmi.command
    # FIXME(deva): the next line is a hack, because several unit tests
    #              actually depend on this particular string being present
    #              in pyghmi.ipmi.command.boot_devices
    p.ipmi.command.boot_devices = {'pxe': 4}

if 'ironic.drivers.modules.ipminative' in sys.modules:
    six.moves.reload_module(sys.modules['ironic.drivers.modules.ipminative'])

proliantutils = importutils.try_import('proliantutils')
if not proliantutils:
    proliantutils = mock.MagicMock(spec_set=mock_specs.PROLIANTUTILS_SPEC)
    sys.modules['proliantutils'] = proliantutils
    sys.modules['proliantutils.ilo'] = proliantutils.ilo
    sys.modules['proliantutils.ilo.client'] = proliantutils.ilo.client
    sys.modules['proliantutils.exception'] = proliantutils.exception
    sys.modules['proliantutils.utils'] = proliantutils.utils
    proliantutils.utils.process_firmware_image = mock.MagicMock()
    proliantutils.exception.IloError = type('IloError', (Exception,), {})
    command_exception = type('IloCommandNotSupportedError', (Exception,), {})
    proliantutils.exception.IloCommandNotSupportedError = command_exception
    proliantutils.exception.InvalidInputError = type(
        'InvalidInputError', (Exception,), {})
    proliantutils.exception.ImageExtractionFailed = type(
        'ImageExtractionFailed', (Exception,), {})
    if 'ironic.drivers.ilo' in sys.modules:
        six.moves.reload_module(sys.modules['ironic.drivers.ilo'])

oneview_client = importutils.try_import('oneview_client')
if not oneview_client:
    oneview_client = mock.MagicMock(spec_set=mock_specs.ONEVIEWCLIENT_SPEC)
    sys.modules['oneview_client'] = oneview_client
    sys.modules['oneview_client.client'] = oneview_client.client
    sys.modules['oneview_client.client.Client'] = mock.MagicMock(
        spec_set=mock_specs.ONEVIEWCLIENT_CLIENT_CLS_SPEC
    )
    states = mock.MagicMock(
        spec_set=mock_specs.ONEVIEWCLIENT_STATES_SPEC,
        ONEVIEW_POWER_OFF='Off',
        ONEVIEW_POWERING_OFF='PoweringOff',
        ONEVIEW_POWER_ON='On',
        ONEVIEW_POWERING_ON='PoweringOn',
        ONEVIEW_RESETTING='Resetting',
        ONEVIEW_ERROR='error')
    sys.modules['oneview_client.states'] = states
    sys.modules['oneview_client.exceptions'] = oneview_client.exceptions
    oneview_client.exceptions.OneViewException = type('OneViewException',
                                                      (Exception,), {})

if 'ironic.drivers.modules.oneview' in sys.modules:
    six.moves.reload_module(sys.modules['ironic.drivers.modules.oneview'])

# attempt to load the external 'pywsman' library, which is required by
# the optional drivers.modules.amt module
pywsman = importutils.try_import('pywsman')
if not pywsman:
    pywsman = mock.MagicMock(spec_set=mock_specs.PYWSMAN_SPEC)
    sys.modules['pywsman'] = pywsman

# Now that the external library has been mocked, if anything had already
# loaded any of the drivers, reload them.
if 'ironic.drivers.modules.amt' in sys.modules: six.moves.reload_module(sys.modules['ironic.drivers.modules.amt']) # attempt to load the external 'python-dracclient' library, which is required # by the optional drivers.modules.drac module dracclient = importutils.try_import('dracclient') if not dracclient: dracclient = mock.MagicMock(spec_set=mock_specs.DRACCLIENT_SPEC) dracclient.client = mock.MagicMock( spec_set=mock_specs.DRACCLIENT_CLIENT_MOD_SPEC) dracclient.constants = mock.MagicMock( spec_set=mock_specs.DRACCLIENT_CONSTANTS_MOD_SPEC, POWER_OFF=mock.sentinel.POWER_OFF, POWER_ON=mock.sentinel.POWER_ON, REBOOT=mock.sentinel.REBOOT) sys.modules['dracclient'] = dracclient sys.modules['dracclient.client'] = dracclient.client sys.modules['dracclient.constants'] = dracclient.constants sys.modules['dracclient.exceptions'] = dracclient.exceptions dracclient.exceptions.BaseClientException = type('BaseClientException', (Exception,), {}) # Now that the external library has been mocked, if anything had already # loaded any of the drivers, reload them. if 'ironic.drivers.modules.drac' in sys.modules: six.moves.reload_module(sys.modules['ironic.drivers.modules.drac']) # attempt to load the external 'iboot' library, which is required by # the optional drivers.modules.iboot module iboot = importutils.try_import("iboot") if not iboot: ib = mock.MagicMock(spec_set=mock_specs.IBOOT_SPEC) ib.iBootInterface = mock.MagicMock(spec_set=[]) sys.modules['iboot'] = ib # if anything has loaded the iboot driver yet, reload it now that the # external library has been mocked if 'ironic.drivers.modules.iboot' in sys.modules: six.moves.reload_module(sys.modules['ironic.drivers.modules.iboot']) # attempt to load the external 'pysnmp' library, which is required by # the optional drivers.modules.snmp module pysnmp = importutils.try_import("pysnmp") if not pysnmp: pysnmp = mock.MagicMock(spec_set=mock_specs.PYWSNMP_SPEC) sys.modules["pysnmp"] = pysnmp sys.modules["pysnmp.entity"] = pysnmp.entity sys.modules["pysnmp.entity.rfc3413"] = pysnmp.entity.rfc3413 sys.modules["pysnmp.entity.rfc3413.oneliner"] = ( pysnmp.entity.rfc3413.oneliner) sys.modules["pysnmp.entity.rfc3413.oneliner.cmdgen"] = ( pysnmp.entity.rfc3413.oneliner.cmdgen) sys.modules["pysnmp.error"] = pysnmp.error pysnmp.error.PySnmpError = Exception sys.modules["pysnmp.proto"] = pysnmp.proto sys.modules["pysnmp.proto.rfc1902"] = pysnmp.proto.rfc1902 # Patch the RFC1902 integer class with a python int pysnmp.proto.rfc1902.Integer = int # if anything has loaded the snmp driver yet, reload it now that the # external library has been mocked if 'ironic.drivers.modules.snmp' in sys.modules: six.moves.reload_module(sys.modules['ironic.drivers.modules.snmp']) # attempt to load the external 'scciclient' library, which is required by # the optional drivers.modules.irmc module scciclient = importutils.try_import('scciclient') if not scciclient: mock_scciclient = mock.MagicMock(spec_set=mock_specs.SCCICLIENT_SPEC) sys.modules['scciclient'] = mock_scciclient sys.modules['scciclient.irmc'] = mock_scciclient.irmc sys.modules['scciclient.irmc.scci'] = mock.MagicMock( spec_set=mock_specs.SCCICLIENT_IRMC_SCCI_SPEC, POWER_OFF=mock.sentinel.POWER_OFF, POWER_ON=mock.sentinel.POWER_ON, POWER_RESET=mock.sentinel.POWER_RESET, MOUNT_CD=mock.sentinel.MOUNT_CD, UNMOUNT_CD=mock.sentinel.UNMOUNT_CD, MOUNT_FD=mock.sentinel.MOUNT_FD, UNMOUNT_FD=mock.sentinel.UNMOUNT_FD) # if anything has loaded the iRMC driver yet, reload it now that the # external library has been mocked if 
'ironic.drivers.modules.irmc' in sys.modules: six.moves.reload_module(sys.modules['ironic.drivers.modules.irmc']) # install mock object to prevent 'iscsi_irmc' and 'agent_irmc' from # checking whether NFS/CIFS share file system is mounted or not. irmc_boot = importutils.import_module( 'ironic.drivers.modules.irmc.boot') irmc_boot.check_share_fs_mounted_orig = irmc_boot.check_share_fs_mounted irmc_boot.check_share_fs_mounted_patcher = mock.patch( 'ironic.drivers.modules.irmc.boot.check_share_fs_mounted') irmc_boot.check_share_fs_mounted_patcher.return_value = None pyremotevbox = importutils.try_import('pyremotevbox') if not pyremotevbox: pyremotevbox = mock.MagicMock(spec_set=mock_specs.PYREMOTEVBOX_SPEC) pyremotevbox.exception = mock.MagicMock( spec_set=mock_specs.PYREMOTEVBOX_EXC_SPEC) pyremotevbox.exception.PyRemoteVBoxException = Exception pyremotevbox.exception.VmInWrongPowerState = Exception pyremotevbox.vbox = mock.MagicMock( spec_set=mock_specs.PYREMOTEVBOX_VBOX_SPEC) sys.modules['pyremotevbox'] = pyremotevbox if 'ironic.drivers.modules.virtualbox' in sys.modules: six.moves.reload_module( sys.modules['ironic.drivers.modules.virtualbox']) ironic_inspector_client = importutils.try_import('ironic_inspector_client') if not ironic_inspector_client: ironic_inspector_client = mock.MagicMock( spec_set=mock_specs.IRONIC_INSPECTOR_CLIENT_SPEC) sys.modules['ironic_inspector_client'] = ironic_inspector_client if 'ironic.drivers.modules.inspector' in sys.modules: six.moves.reload_module( sys.modules['ironic.drivers.modules.inspector']) class MockKwargsException(Exception): def __init__(self, *args, **kwargs): super(MockKwargsException, self).__init__(*args) self.kwargs = kwargs ucssdk = importutils.try_import('UcsSdk') if not ucssdk: ucssdk = mock.MagicMock() sys.modules['UcsSdk'] = ucssdk sys.modules['UcsSdk.utils'] = ucssdk.utils sys.modules['UcsSdk.utils.power'] = ucssdk.utils.power sys.modules['UcsSdk.utils.management'] = ucssdk.utils.management sys.modules['UcsSdk.utils.exception'] = ucssdk.utils.exception ucssdk.utils.exception.UcsOperationError = ( type('UcsOperationError', (MockKwargsException,), {})) ucssdk.utils.exception.UcsConnectionError = ( type('UcsConnectionError', (MockKwargsException,), {})) if 'ironic.drivers.modules.ucs' in sys.modules: six.moves.reload_module( sys.modules['ironic.drivers.modules.ucs']) imcsdk = importutils.try_import('ImcSdk') if not imcsdk: imcsdk = mock.MagicMock() imcsdk.ImcException = Exception sys.modules['ImcSdk'] = imcsdk if 'ironic.drivers.modules.cimc' in sys.modules: six.moves.reload_module( sys.modules['ironic.drivers.modules.cimc']) ironic-5.1.0/ironic/tests/unit/drivers/agent_pxe_config.template0000664000567000056710000000107612674513466026301 0ustar jenkinsjenkins00000000000000default deploy label deploy kernel /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/deploy_kernel append initrd=/tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/deploy_ramdisk text test_param ipa-api-url=http://192.168.122.184:6385 ipa-driver-name=agent_ipmitool root_device=vendor=fake,size=123 coreos.configdrive=0 label boot_partition kernel /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/kernel append initrd=/tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/ramdisk root={{ ROOT }} ro text test_param label boot_whole_disk COM32 chain.c32 append mbr:{{ DISK_IDENTIFIER }} ironic-5.1.0/ironic/tests/unit/drivers/third_party_driver_mock_specs.py0000664000567000056710000000544612674513466027736 0ustar jenkinsjenkins00000000000000# Copyright 2015 Intel Corporation # All Rights 
Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """This module provides mock 'specs' for third party modules that can be used when needing to mock those third party modules""" # python-dracclient DRACCLIENT_SPEC = ( 'client', 'constants', 'exceptions' ) DRACCLIENT_CLIENT_MOD_SPEC = ( 'DRACClient', ) DRACCLIENT_CONSTANTS_MOD_SPEC = ( 'POWER_OFF', 'POWER_ON', 'REBOOT' ) # iboot IBOOT_SPEC = ( 'iBootInterface', ) # ironic_inspector IRONIC_INSPECTOR_CLIENT_SPEC = ( 'introspect', 'get_status', ) # proliantutils PROLIANTUTILS_SPEC = ( 'exception', 'ilo', 'utils', ) # pyghmi PYGHMI_SPEC = ( 'exceptions', 'ipmi', ) PYGHMI_EXC_SPEC = ( 'IpmiException', ) PYGHMI_IPMI_SPEC = ( 'command', ) PYGHMI_IPMICMD_SPEC = ( 'boot_devices', 'Command', ) # pyremotevbox PYREMOTEVBOX_SPEC = ( 'exception', 'vbox', ) PYREMOTEVBOX_EXC_SPEC = ( 'PyRemoteVBoxException', 'VmInWrongPowerState', ) PYREMOTEVBOX_VBOX_SPEC = ( 'VirtualBoxHost', ) # pywsman PYWSMAN_SPEC = ( 'Client', 'ClientOptions', 'EndPointReference', 'FLAG_ENUMERATION_OPTIMIZATION', 'Filter', 'XmlDoc', 'wsman_transport_set_verify_host', 'wsman_transport_set_verify_peer', ) # pywsnmp PYWSNMP_SPEC = ( 'entity', 'error', 'proto', ) # scciclient SCCICLIENT_SPEC = ( 'irmc', ) SCCICLIENT_IRMC_SCCI_SPEC = ( 'POWER_OFF', 'POWER_ON', 'POWER_RESET', 'MOUNT_CD', 'UNMOUNT_CD', 'MOUNT_FD', 'UNMOUNT_FD', 'SCCIClientError', 'SCCIInvalidInputError', 'get_share_type', 'get_client', 'get_report', 'get_sensor_data', 'get_virtual_cd_set_params_cmd', 'get_virtual_fd_set_params_cmd', 'get_essential_properties', ) ONEVIEWCLIENT_SPEC = ( 'client', 'states', 'exceptions', ) ONEVIEWCLIENT_CLIENT_CLS_SPEC = ( ) ONEVIEWCLIENT_STATES_SPEC = ( 'ONEVIEW_POWER_OFF', 'ONEVIEW_POWERING_OFF', 'ONEVIEW_POWER_ON', 'ONEVIEW_POWERING_ON', 'ONEVIEW_RESETTING', 'ONEVIEW_ERROR', ) # seamicro SEAMICRO_SPEC = ( 'client', 'exceptions', ) # seamicro.client module SEAMICRO_CLIENT_MOD_SPEC = ( 'Client', ) SEAMICRO_EXC_SPEC = ( 'ClientException', 'UnsupportedVersion', ) ironic-5.1.0/ironic/tests/unit/drivers/__init__.py0000664000567000056710000000175112674513466023356 0ustar jenkinsjenkins00000000000000# Copyright 2014 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # NOTE(deva): since __init__ is loaded before the files in the same directory, # and some third-party driver tests may need to have their # external libraries mocked, we load the file which does that # mocking here -- in the __init__. 
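
# NOTE(editor): the mechanism referred to above is, in essence, the
# stub-before-import idiom sketched below. This sketch is illustrative
# only: 'fakelib' is a hypothetical module name, not one of the
# libraries ironic actually mocks, and the helper name is invented.
def _example_stub_before_import():
    import sys

    import mock

    # Install a stub in sys.modules before anything imports the library.
    fakelib = mock.MagicMock(spec_set=['connect'])
    sys.modules.setdefault('fakelib', fakelib)
    # Any subsequent 'import fakelib' now resolves to the stub, so driver
    # modules depending on it can be imported even though the real
    # library is absent.
    import fakelib as imported
    assert imported is fakelib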
from ironic.tests.unit.drivers import third_party_driver_mocks  # noqa


ironic-5.1.0/ironic/tests/unit/drivers/ipxe_config_timeout.template

#!ipxe

dhcp

goto deploy

:deploy
kernel --timeout 120 http://1.2.3.4:1234/deploy_kernel selinux=0 disk=cciss/c0d0,sda,hda,vda iscsi_target_iqn=iqn-1be26c0b-03f2-4d2e-ae87-c02d7f33c123 deployment_id=1be26c0b-03f2-4d2e-ae87-c02d7f33c123 deployment_key=0123456789ABCDEFGHIJKLMNOPQRSTUV ironic_api_url=http://192.168.122.184:6385 troubleshoot=0 text test_param boot_option=netboot ip=${ip}:${next-server}:${gateway}:${netmask} BOOTIF=${mac} root_device=vendor=fake,size=123 ipa-api-url=http://192.168.122.184:6385 ipa-driver-name=pxe_ssh boot_mode=bios initrd=deploy_ramdisk coreos.configdrive=0
initrd --timeout 120 http://1.2.3.4:1234/deploy_ramdisk
boot

:boot_partition
kernel --timeout 120 http://1.2.3.4:1234/kernel root={{ ROOT }} ro text test_param initrd=ramdisk
initrd --timeout 120 http://1.2.3.4:1234/ramdisk
boot

:boot_whole_disk
sanboot --no-describe

ironic-5.1.0/ironic/tests/unit/drivers/test_agent.py

# Copyright 2015 Rackspace, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
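
# NOTE(editor): the PXE/iPXE templates in this directory (such as the one
# above) leave '{{ ROOT }}' and '{{ DISK_IDENTIFIER }}' placeholders to be
# filled in at deploy time. A minimal, hypothetical sketch of that
# substitution step follows; it illustrates the idea only, it is not
# ironic's real pxe_utils code, and the 'UUID=' value format is assumed.
def _example_fill_pxe_placeholders(template_text, root_uuid, disk_id):
    # Swap the root-partition placeholder for the discovered root UUID.
    text = template_text.replace('{{ ROOT }}', 'UUID=%s' % root_uuid)
    # Swap the whole-disk placeholder for the reported disk identifier.
    return text.replace('{{ DISK_IDENTIFIER }}', disk_id)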
""" Test class for Agent Deploy Driver """ import mock import testtools from ironic.common import exception from ironic.drivers import agent from ironic.drivers.modules import agent as agent_module from ironic.drivers.modules.amt import management as amt_management from ironic.drivers.modules.amt import power as amt_power from ironic.drivers.modules import iboot from ironic.drivers.modules import pxe from ironic.drivers.modules import wol class AgentAndAMTDriverTestCase(testtools.TestCase): @mock.patch.object(agent.importutils, 'try_import', spec_set=True, autospec=True) def test___init__(self, mock_try_import): mock_try_import.return_value = True driver = agent.AgentAndAMTDriver() self.assertIsInstance(driver.power, amt_power.AMTPower) self.assertIsInstance(driver.boot, pxe.PXEBoot) self.assertIsInstance(driver.deploy, agent_module.AgentDeploy) self.assertIsInstance(driver.management, amt_management.AMTManagement) self.assertIsInstance(driver.vendor, agent_module.AgentVendorInterface) @mock.patch.object(agent.importutils, 'try_import') def test___init___try_import_exception(self, mock_try_import): mock_try_import.return_value = False self.assertRaises(exception.DriverLoadError, agent.AgentAndAMTDriver) class AgentAndWakeOnLanDriverTestCase(testtools.TestCase): def test___init__(self): driver = agent.AgentAndWakeOnLanDriver() self.assertIsInstance(driver.power, wol.WakeOnLanPower) self.assertIsInstance(driver.boot, pxe.PXEBoot) self.assertIsInstance(driver.deploy, agent_module.AgentDeploy) self.assertIsInstance(driver.vendor, agent_module.AgentVendorInterface) class AgentAndIBootDriverTestCase(testtools.TestCase): def test___init__(self): driver = agent.AgentAndIBootDriver() self.assertIsInstance(driver.power, iboot.IBootPower) self.assertIsInstance(driver.boot, pxe.PXEBoot) self.assertIsInstance(driver.deploy, agent_module.AgentDeploy) self.assertIsInstance(driver.vendor, agent_module.AgentVendorInterface) ironic-5.1.0/ironic/tests/unit/drivers/elilo_efi_pxe_config.template0000664000567000056710000000170312674513466027127 0ustar jenkinsjenkins00000000000000default=deploy image=/tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/deploy_kernel label=deploy initrd=/tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/deploy_ramdisk append="selinux=0 disk=cciss/c0d0,sda,hda,vda iscsi_target_iqn=iqn-1be26c0b-03f2-4d2e-ae87-c02d7f33c123 deployment_id=1be26c0b-03f2-4d2e-ae87-c02d7f33c123 deployment_key=0123456789ABCDEFGHIJKLMNOPQRSTUV ironic_api_url=http://192.168.122.184:6385 troubleshoot=0 text test_param ip=%I::%G:%M:%H::on root_device=vendor=fake,size=123 ipa-api-url=http://192.168.122.184:6385 ipa-driver-name=pxe_ssh boot_option=netboot boot_mode=uefi coreos.configdrive=0" image=/tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/kernel label=boot_partition initrd=/tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/ramdisk append="root={{ ROOT }} ro text test_param ip=%I::%G:%M:%H::on" image=chain.c32 label=boot_whole_disk append="mbr:{{ DISK_IDENTIFIER }}" ironic-5.1.0/ironic/tests/unit/drivers/ipxe_uefi_config.template0000664000567000056710000000145112674513466026301 0ustar jenkinsjenkins00000000000000#!ipxe dhcp goto deploy :deploy kernel http://1.2.3.4:1234/deploy_kernel selinux=0 disk=cciss/c0d0,sda,hda,vda iscsi_target_iqn=iqn-1be26c0b-03f2-4d2e-ae87-c02d7f33c123 deployment_id=1be26c0b-03f2-4d2e-ae87-c02d7f33c123 deployment_key=0123456789ABCDEFGHIJKLMNOPQRSTUV ironic_api_url=http://192.168.122.184:6385 troubleshoot=0 text test_param boot_option=netboot ip=${ip}:${next-server}:${gateway}:${netmask} 
ironic-5.1.0/ironic/tests/unit/drivers/ipxe_uefi_config.template

#!ipxe

dhcp

goto deploy

:deploy
kernel http://1.2.3.4:1234/deploy_kernel selinux=0 disk=cciss/c0d0,sda,hda,vda iscsi_target_iqn=iqn-1be26c0b-03f2-4d2e-ae87-c02d7f33c123 deployment_id=1be26c0b-03f2-4d2e-ae87-c02d7f33c123 deployment_key=0123456789ABCDEFGHIJKLMNOPQRSTUV ironic_api_url=http://192.168.122.184:6385 troubleshoot=0 text test_param boot_option=netboot ip=${ip}:${next-server}:${gateway}:${netmask} BOOTIF=${mac} root_device=vendor=fake,size=123 ipa-api-url=http://192.168.122.184:6385 ipa-driver-name=pxe_ssh boot_mode=uefi initrd=deploy_ramdisk coreos.configdrive=0
initrd http://1.2.3.4:1234/deploy_ramdisk
boot

:boot_partition
kernel http://1.2.3.4:1234/kernel root={{ ROOT }} ro text test_param initrd=ramdisk
initrd http://1.2.3.4:1234/ramdisk
boot

:boot_whole_disk
sanboot --no-describe

ironic-5.1.0/ironic/tests/unit/drivers/test_fake.py

# coding=utf-8

# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Test class for Fake driver."""

import mock

from ironic.common import boot_devices
from ironic.common import driver_factory
from ironic.common import exception
from ironic.common import states
from ironic.conductor import task_manager
from ironic.drivers import base as driver_base
from ironic.tests.unit.conductor import mgr_utils
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.objects import utils as obj_utils


class FakeDriverTestCase(db_base.DbTestCase):

    def setUp(self):
        super(FakeDriverTestCase, self).setUp()
        mgr_utils.mock_the_extension_manager()
        self.driver = driver_factory.get_driver("fake")
        self.node = obj_utils.get_test_node(self.context)
        self.task = mock.MagicMock(spec=task_manager.TaskManager)
        self.task.shared = False
        self.task.node = self.node
        self.task.driver = self.driver

    def test_driver_interfaces(self):
        # fake driver implements only 5 out of 6 interfaces
        self.assertIsInstance(self.driver.power, driver_base.PowerInterface)
        self.assertIsInstance(self.driver.deploy, driver_base.DeployInterface)
        self.assertIsInstance(self.driver.vendor, driver_base.VendorInterface)
        self.assertIsInstance(self.driver.console,
                              driver_base.ConsoleInterface)
        self.assertIsNone(self.driver.rescue)

    def test_get_properties(self):
        expected = ['A1', 'A2', 'B1', 'B2']
        properties = self.driver.get_properties()
        self.assertEqual(sorted(expected), sorted(properties.keys()))

    def test_power_interface(self):
        self.assertEqual({}, self.driver.power.get_properties())
        self.driver.power.validate(self.task)
        self.driver.power.get_power_state(self.task)
        self.assertRaises(exception.InvalidParameterValue,
                          self.driver.power.set_power_state,
                          self.task, states.NOSTATE)
        self.driver.power.set_power_state(self.task, states.POWER_ON)
        self.driver.power.reboot(self.task)

    def test_deploy_interface(self):
        self.assertEqual({}, self.driver.deploy.get_properties())
        self.driver.deploy.validate(None)
        self.driver.deploy.prepare(None)
        self.driver.deploy.deploy(None)
        self.driver.deploy.take_over(None)
        self.driver.deploy.clean_up(None)
        self.driver.deploy.tear_down(None)

    def test_console_interface(self):
        self.assertEqual({}, self.driver.console.get_properties())
        self.driver.console.validate(self.task)
        self.driver.console.start_console(self.task)
        self.driver.console.stop_console(self.task)
        self.driver.console.get_console(self.task)

    def test_management_interface_get_properties(self):
        self.assertEqual({}, self.driver.management.get_properties())

    def test_management_interface_validate(self):
        self.driver.management.validate(self.task)

    def test_management_interface_set_boot_device_good(self):
        self.driver.management.set_boot_device(self.task, boot_devices.PXE)

    def test_management_interface_set_boot_device_fail(self):
        self.assertRaises(exception.InvalidParameterValue,
                          self.driver.management.set_boot_device,
                          self.task, 'not-supported')

    def test_management_interface_get_supported_boot_devices(self):
        expected = [boot_devices.PXE]
        self.assertEqual(
            expected,
            self.driver.management.get_supported_boot_devices(self.task))

    def test_management_interface_get_boot_device(self):
        expected = {'boot_device': boot_devices.PXE, 'persistent': False}
        self.assertEqual(expected,
                         self.driver.management.get_boot_device(self.task))

    def test_inspect_interface(self):
        self.assertEqual({}, self.driver.inspect.get_properties())
        self.driver.inspect.validate(self.task)
        self.driver.inspect.inspect_hardware(self.task)

ironic-5.1.0/ironic/tests/unit/cmd/

ironic-5.1.0/ironic/tests/unit/cmd/test_dbsync.py

# -*- encoding: utf-8 -*-
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from ironic.db import migration
from ironic.tests.unit.db import base


class DbSyncTestCase(base.DbTestCase):

    def test_upgrade_and_version(self):
        migration.upgrade('head')
        v = migration.version()
        self.assertTrue(v)

ironic-5.1.0/ironic/tests/unit/cmd/__init__.py

ironic-5.1.0/ironic/tests/unit/common/

ironic-5.1.0/ironic/tests/unit/common/test_utils.py

# Copyright 2011 Justin Santa Barbara
# Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
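# NOTE: illustrative sketch, not part of the original test module. It assumes
# only the ironic.common.utils.execute() keywords exercised by the tests
# below (attempts, process_input, use_standard_locale, run_as_root), which
# are passed through to oslo_concurrency.processutils.execute(); the
# (stdout, stderr) return shape is processutils' documented behaviour:
#
#     from ironic.common import utils
#
#     # Retry up to 3 times, feeding b'foo' on stdin with a C locale.
#     stdout, stderr = utils.execute('grep', 'foo',
#                                    process_input=b'foo',
#                                    use_standard_locale=True,
#                                    attempts=3)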
import datetime import errno import hashlib import os import os.path import shutil import tempfile import mock import netaddr from oslo_concurrency import processutils from oslo_config import cfg import six import six.moves.builtins as __builtin__ from ironic.common import exception from ironic.common import utils from ironic.tests import base CONF = cfg.CONF class BareMetalUtilsTestCase(base.TestCase): def test_random_alnum(self): s = utils.random_alnum(10) self.assertEqual(10, len(s)) s = utils.random_alnum(100) self.assertEqual(100, len(s)) def test_create_link(self): with mock.patch.object(os, "symlink", autospec=True) as symlink_mock: symlink_mock.return_value = None utils.create_link_without_raise("/fake/source", "/fake/link") symlink_mock.assert_called_once_with("/fake/source", "/fake/link") def test_create_link_EEXIST(self): with mock.patch.object(os, "symlink", autospec=True) as symlink_mock: symlink_mock.side_effect = OSError(errno.EEXIST) utils.create_link_without_raise("/fake/source", "/fake/link") symlink_mock.assert_called_once_with("/fake/source", "/fake/link") class ExecuteTestCase(base.TestCase): def test_retry_on_failure(self): fd, tmpfilename = tempfile.mkstemp() _, tmpfilename2 = tempfile.mkstemp() try: fp = os.fdopen(fd, 'w+') fp.write('''#!/bin/sh # If stdin fails to get passed during one of the runs, make a note. if ! grep -q foo then echo 'failure' > "$1" fi # If stdin has failed to get passed during this or a previous run, exit early. if grep failure "$1" then exit 1 fi runs="$(cat $1)" if [ -z "$runs" ] then runs=0 fi runs=$(($runs + 1)) echo $runs > "$1" exit 1 ''') fp.close() os.chmod(tmpfilename, 0o755) try: self.assertRaises(processutils.ProcessExecutionError, utils.execute, tmpfilename, tmpfilename2, attempts=10, process_input=b'foo', delay_on_retry=False) except OSError as e: if e.errno == errno.EACCES: self.skipTest("Permissions error detected. " "Are you running with a noexec /tmp?") else: raise fp = open(tmpfilename2, 'r') runs = fp.read() fp.close() self.assertNotEqual(runs.strip(), 'failure', 'stdin did not ' 'always get passed ' 'correctly') runs = int(runs.strip()) self.assertEqual(10, runs, 'Ran %d times instead of 10.' % (runs,)) finally: os.unlink(tmpfilename) os.unlink(tmpfilename2) def test_unknown_kwargs_raises_error(self): self.assertRaises(processutils.UnknownArgumentError, utils.execute, '/usr/bin/env', 'true', this_is_not_a_valid_kwarg=True) def test_check_exit_code_boolean(self): utils.execute('/usr/bin/env', 'false', check_exit_code=False) self.assertRaises(processutils.ProcessExecutionError, utils.execute, '/usr/bin/env', 'false', check_exit_code=True) def test_no_retry_on_success(self): fd, tmpfilename = tempfile.mkstemp() _, tmpfilename2 = tempfile.mkstemp() try: fp = os.fdopen(fd, 'w+') fp.write('''#!/bin/sh # If we've already run, bail out. grep -q foo "$1" && exit 1 # Mark that we've run before. echo foo > "$1" # Check that stdin gets passed correctly. grep foo ''') fp.close() os.chmod(tmpfilename, 0o755) try: utils.execute(tmpfilename, tmpfilename2, process_input=b'foo', attempts=2) except OSError as e: if e.errno == errno.EACCES: self.skipTest("Permissions error detected. 
" "Are you running with a noexec /tmp?") else: raise finally: os.unlink(tmpfilename) os.unlink(tmpfilename2) @mock.patch.object(processutils, 'execute', autospec=True) @mock.patch.object(os.environ, 'copy', return_value={}, autospec=True) def test_execute_use_standard_locale_no_env_variables(self, env_mock, execute_mock): utils.execute('foo', use_standard_locale=True) execute_mock.assert_called_once_with('foo', env_variables={'LC_ALL': 'C'}) @mock.patch.object(processutils, 'execute', autospec=True) def test_execute_use_standard_locale_with_env_variables(self, execute_mock): utils.execute('foo', use_standard_locale=True, env_variables={'foo': 'bar'}) execute_mock.assert_called_once_with('foo', env_variables={'LC_ALL': 'C', 'foo': 'bar'}) @mock.patch.object(processutils, 'execute', autospec=True) def test_execute_not_use_standard_locale(self, execute_mock): utils.execute('foo', use_standard_locale=False, env_variables={'foo': 'bar'}) execute_mock.assert_called_once_with('foo', env_variables={'foo': 'bar'}) def test_execute_get_root_helper(self): with mock.patch.object( processutils, 'execute', autospec=True) as execute_mock: helper = utils._get_root_helper() utils.execute('foo', run_as_root=True) execute_mock.assert_called_once_with('foo', run_as_root=True, root_helper=helper) def test_execute_without_root_helper(self): with mock.patch.object( processutils, 'execute', autospec=True) as execute_mock: utils.execute('foo', run_as_root=False) execute_mock.assert_called_once_with('foo', run_as_root=False) class GenericUtilsTestCase(base.TestCase): def test_hostname_unicode_sanitization(self): hostname = u"\u7684.test.example.com" self.assertEqual(b"test.example.com", utils.sanitize_hostname(hostname)) def test_hostname_sanitize_periods(self): hostname = "....test.example.com..." 
self.assertEqual(b"test.example.com", utils.sanitize_hostname(hostname)) def test_hostname_sanitize_dashes(self): hostname = "----test.example.com---" self.assertEqual(b"test.example.com", utils.sanitize_hostname(hostname)) def test_hostname_sanitize_characters(self): hostname = "(#@&$!(@*--#&91)(__=+--test-host.example!!.com-0+" self.assertEqual(b"91----test-host.example.com-0", utils.sanitize_hostname(hostname)) def test_hostname_translate(self): hostname = "<}\x1fh\x10e\x08l\x02l\x05o\x12!{>" self.assertEqual(b"hello", utils.sanitize_hostname(hostname)) def test_read_cached_file(self): with mock.patch.object( os.path, "getmtime", autospec=True) as getmtime_mock: getmtime_mock.return_value = 1 cache_data = {"data": 1123, "mtime": 1} data = utils.read_cached_file("/this/is/a/fake", cache_data) self.assertEqual(cache_data["data"], data) getmtime_mock.assert_called_once_with(mock.ANY) def test_read_modified_cached_file(self): with mock.patch.object( os.path, "getmtime", autospec=True) as getmtime_mock: with mock.patch.object( __builtin__, 'open', autospec=True) as open_mock: getmtime_mock.return_value = 2 fake_contents = "lorem ipsum" fake_file = mock.Mock() fake_file.read.return_value = fake_contents fake_context_manager = mock.MagicMock() fake_context_manager.__enter__.return_value = fake_file fake_context_manager.__exit__.return_value = None open_mock.return_value = fake_context_manager cache_data = {"data": 1123, "mtime": 1} self.reload_called = False def test_reload(reloaded_data): self.assertEqual(fake_contents, reloaded_data) self.reload_called = True data = utils.read_cached_file("/this/is/a/fake", cache_data, reload_func=test_reload) self.assertEqual(fake_contents, data) self.assertTrue(self.reload_called) getmtime_mock.assert_called_once_with(mock.ANY) open_mock.assert_called_once_with(mock.ANY) fake_file.read.assert_called_once_with() fake_context_manager.__exit__.assert_called_once_with(mock.ANY, mock.ANY, mock.ANY) fake_context_manager.__enter__.assert_called_once_with() @mock.patch.object(utils, 'hashlib', autospec=True) def test__get_hash_object(self, hashlib_mock): algorithms_available = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512') hashlib_mock.algorithms_guaranteed = algorithms_available hashlib_mock.algorithms = algorithms_available # | WHEN | utils._get_hash_object('md5') utils._get_hash_object('sha1') utils._get_hash_object('sha224') utils._get_hash_object('sha256') utils._get_hash_object('sha384') utils._get_hash_object('sha512') # | THEN | calls = [mock.call.md5(), mock.call.sha1(), mock.call.sha224(), mock.call.sha256(), mock.call.sha384(), mock.call.sha512()] hashlib_mock.assert_has_calls(calls) def test__get_hash_object_throws_for_invalid_or_unsupported_hash_name( self): # | WHEN | & | THEN | self.assertRaises(exception.InvalidParameterValue, utils._get_hash_object, 'hickory-dickory-dock') def test_hash_file_for_md5(self): # | GIVEN | data = b'Mary had a little lamb, its fleece as white as snow' file_like_object = six.BytesIO(data) expected = hashlib.md5(data).hexdigest() # | WHEN | actual = utils.hash_file(file_like_object) # using default, 'md5' # | THEN | self.assertEqual(expected, actual) def test_hash_file_for_sha1(self): # | GIVEN | data = b'Mary had a little lamb, its fleece as white as snow' file_like_object = six.BytesIO(data) expected = hashlib.sha1(data).hexdigest() # | WHEN | actual = utils.hash_file(file_like_object, 'sha1') # | THEN | self.assertEqual(expected, actual) def test_hash_file_for_sha512(self): # | GIVEN | data = b'Mary had a 
little lamb, its fleece as white as snow' file_like_object = six.BytesIO(data) expected = hashlib.sha512(data).hexdigest() # | WHEN | actual = utils.hash_file(file_like_object, 'sha512') # | THEN | self.assertEqual(expected, actual) def test_hash_file_throws_for_invalid_or_unsupported_hash(self): # | GIVEN | data = b'Mary had a little lamb, its fleece as white as snow' file_like_object = six.BytesIO(data) # | WHEN | & | THEN | self.assertRaises(exception.InvalidParameterValue, utils.hash_file, file_like_object, 'hickory-dickory-dock') def test_is_valid_boolstr(self): self.assertTrue(utils.is_valid_boolstr('true')) self.assertTrue(utils.is_valid_boolstr('false')) self.assertTrue(utils.is_valid_boolstr('yes')) self.assertTrue(utils.is_valid_boolstr('no')) self.assertTrue(utils.is_valid_boolstr('y')) self.assertTrue(utils.is_valid_boolstr('n')) self.assertTrue(utils.is_valid_boolstr('1')) self.assertTrue(utils.is_valid_boolstr('0')) self.assertFalse(utils.is_valid_boolstr('maybe')) self.assertFalse(utils.is_valid_boolstr('only on tuesdays')) def test_is_valid_ipv6_cidr(self): self.assertTrue(utils.is_valid_ipv6_cidr("2600::/64")) self.assertTrue(utils.is_valid_ipv6_cidr( "abcd:ef01:2345:6789:abcd:ef01:192.168.254.254/48")) self.assertTrue(utils.is_valid_ipv6_cidr( "0000:0000:0000:0000:0000:0000:0000:0001/32")) self.assertTrue(utils.is_valid_ipv6_cidr( "0000:0000:0000:0000:0000:0000:0000:0001")) self.assertFalse(utils.is_valid_ipv6_cidr("foo")) self.assertFalse(utils.is_valid_ipv6_cidr("127.0.0.1")) def test_get_shortened_ipv6(self): self.assertEqual("abcd:ef01:2345:6789:abcd:ef01:c0a8:fefe", utils.get_shortened_ipv6( "abcd:ef01:2345:6789:abcd:ef01:192.168.254.254")) self.assertEqual("::1", utils.get_shortened_ipv6( "0000:0000:0000:0000:0000:0000:0000:0001")) self.assertEqual("caca::caca:0:babe:201:102", utils.get_shortened_ipv6( "caca:0000:0000:caca:0000:babe:0201:0102")) self.assertRaises(netaddr.AddrFormatError, utils.get_shortened_ipv6, "127.0.0.1") self.assertRaises(netaddr.AddrFormatError, utils.get_shortened_ipv6, "failure") def test_get_shortened_ipv6_cidr(self): self.assertEqual("2600::/64", utils.get_shortened_ipv6_cidr( "2600:0000:0000:0000:0000:0000:0000:0000/64")) self.assertEqual("2600::/64", utils.get_shortened_ipv6_cidr( "2600::1/64")) self.assertRaises(netaddr.AddrFormatError, utils.get_shortened_ipv6_cidr, "127.0.0.1") self.assertRaises(netaddr.AddrFormatError, utils.get_shortened_ipv6_cidr, "failure") def test_is_valid_mac(self): self.assertTrue(utils.is_valid_mac("52:54:00:cf:2d:31")) self.assertTrue(utils.is_valid_mac(u"52:54:00:cf:2d:31")) self.assertFalse(utils.is_valid_mac("127.0.0.1")) self.assertFalse(utils.is_valid_mac("not:a:mac:address")) self.assertFalse(utils.is_valid_mac("52-54-00-cf-2d-31")) self.assertFalse(utils.is_valid_mac("aa bb cc dd ee ff")) self.assertTrue(utils.is_valid_mac("AA:BB:CC:DD:EE:FF")) self.assertFalse(utils.is_valid_mac("AA BB CC DD EE FF")) self.assertFalse(utils.is_valid_mac("AA-BB-CC-DD-EE-FF")) def test_is_hostname_safe(self): self.assertTrue(utils.is_hostname_safe('spam')) self.assertFalse(utils.is_hostname_safe('spAm')) self.assertFalse(utils.is_hostname_safe('SPAM')) self.assertFalse(utils.is_hostname_safe('-spam')) self.assertFalse(utils.is_hostname_safe('spam-')) self.assertTrue(utils.is_hostname_safe('spam-eggs')) self.assertFalse(utils.is_hostname_safe('spam_eggs')) self.assertFalse(utils.is_hostname_safe('spam eggs')) self.assertTrue(utils.is_hostname_safe('spam.eggs')) self.assertTrue(utils.is_hostname_safe('9spam')) 
        self.assertTrue(utils.is_hostname_safe('spam7'))
        self.assertTrue(utils.is_hostname_safe('br34kf4st'))
        self.assertFalse(utils.is_hostname_safe('$pam'))
        self.assertFalse(utils.is_hostname_safe('egg$'))
        self.assertFalse(utils.is_hostname_safe('spam#eggs'))
        self.assertFalse(utils.is_hostname_safe(' eggs'))
        self.assertFalse(utils.is_hostname_safe('spam '))
        self.assertTrue(utils.is_hostname_safe('s'))
        self.assertTrue(utils.is_hostname_safe('s' * 63))
        self.assertFalse(utils.is_hostname_safe('s' * 64))
        self.assertFalse(utils.is_hostname_safe(''))
        self.assertFalse(utils.is_hostname_safe(None))
        # Need to ensure a binary response for success or fail
        self.assertIsNotNone(utils.is_hostname_safe('spam'))
        self.assertIsNotNone(utils.is_hostname_safe('-spam'))
        self.assertTrue(utils.is_hostname_safe('www.rackspace.com'))
        self.assertTrue(utils.is_hostname_safe('www.rackspace.com.'))
        self.assertTrue(utils.is_hostname_safe('http._sctp.www.example.com'))
        self.assertTrue(utils.is_hostname_safe('mail.pets_r_us.net'))
        self.assertTrue(utils.is_hostname_safe('mail-server-15.my_host.org'))
        self.assertFalse(utils.is_hostname_safe('www.nothere.com_'))
        self.assertFalse(utils.is_hostname_safe('www.nothere_.com'))
        self.assertFalse(utils.is_hostname_safe('www..nothere.com'))
        long_str = 'a' * 63 + '.' + 'b' * 63 + '.' + 'c' * 63 + '.' + 'd' * 63
        self.assertTrue(utils.is_hostname_safe(long_str))
        self.assertFalse(utils.is_hostname_safe(long_str + '.'))
        self.assertFalse(utils.is_hostname_safe('a' * 255))

    def test_is_valid_logical_name(self):
        valid = (
            'spam', 'spAm', 'SPAM', 'spam-eggs', 'spam.eggs', 'spam_eggs',
            'spam~eggs', '9spam', 'spam7', '~spam', '.spam', '.~-_', '~',
            'br34kf4st', 's', 's' * 63, 's' * 255)
        invalid = (
            ' ', 'spam eggs', '$pam', 'egg$', 'spam#eggs',
            ' eggs', 'spam ', '', None, 'spam%20')

        for hostname in valid:
            result = utils.is_valid_logical_name(hostname)
            # Need to ensure a binary response for success. assertTrue
            # is too generous, and would pass this test if, for
            # instance, a regex Match object were returned.
            self.assertIs(result, True,
                          "%s is unexpectedly invalid" % hostname)

        for hostname in invalid:
            result = utils.is_valid_logical_name(hostname)
            # Need to ensure a binary response for
            # success. assertFalse is too generous and would pass this
            # test if None were returned.
            self.assertIs(result, False,
                          "%s is unexpectedly valid" % hostname)

    def test_validate_and_normalize_mac(self):
        mac = 'AA:BB:CC:DD:EE:FF'
        with mock.patch.object(utils, 'is_valid_mac',
                               autospec=True) as m_mock:
            m_mock.return_value = True
            self.assertEqual(mac.lower(),
                             utils.validate_and_normalize_mac(mac))

    def test_validate_and_normalize_mac_invalid_format(self):
        with mock.patch.object(utils, 'is_valid_mac',
                               autospec=True) as m_mock:
            m_mock.return_value = False
            self.assertRaises(exception.InvalidMAC,
                              utils.validate_and_normalize_mac, 'invalid-mac')

    def test_safe_rstrip(self):
        value = '/test/'
        rstripped_value = '/test'
        not_rstripped = '/'

        self.assertEqual(rstripped_value, utils.safe_rstrip(value, '/'))
        self.assertEqual(not_rstripped, utils.safe_rstrip(not_rstripped, '/'))

    def test_safe_rstrip_not_raises_exceptions(self):
        # Supplying an integer should normally raise an exception because it
        # does not have the rstrip() method.
        value = 10

        # In the case of raising an exception safe_rstrip() should return the
        # original value.
        self.assertEqual(value, utils.safe_rstrip(value))

    @mock.patch.object(os.path, 'getmtime', return_value=1439465889.4964755,
                       autospec=True)
    def test_unix_file_modification_datetime(self, mtime_mock):
        expected = datetime.datetime(2015, 8, 13, 11, 38, 9, 496475)
        self.assertEqual(expected,
                         utils.unix_file_modification_datetime('foo'))
        mtime_mock.assert_called_once_with('foo')

    def test_is_valid_no_proxy(self):
        # Valid values for 'no_proxy'
        valid_no_proxy = [
            ('a' * 63 + '.' + '0' * 63 + '.c.' + 'd' * 61 + '.' + 'e' * 61),
            ('A' * 63 + '.' + '0' * 63 + '.C.' + 'D' * 61 + '.' + 'E' * 61),
            ('.' + 'a' * 62 + '.' + '0' * 62 + '.c.' + 'd' * 61 + '.' +
             'e' * 61),
            ',,example.com:3128,',
            '192.168.1.1',  # IP should be valid
        ]
        # Test each value individually, so that if one fails it is easier
        # to determine which one it was.
        for no_proxy in valid_no_proxy:
            self.assertTrue(
                utils.is_valid_no_proxy(no_proxy),
                msg="'no_proxy' value should be valid: {}".format(no_proxy))
        # Test valid when joined together
        self.assertTrue(utils.is_valid_no_proxy(','.join(valid_no_proxy)))
        # Test valid when joined together with whitespace
        self.assertTrue(utils.is_valid_no_proxy(' , '.join(valid_no_proxy)))
        # empty string should also be valid
        self.assertTrue(utils.is_valid_no_proxy(''))
        # Invalid values for 'no_proxy'
        invalid_no_proxy = [
            ('A' * 64 + '.' + '0' * 63 + '.C.' + 'D' * 61 + '.' +
             'E' * 61),  # too long (> 253)
            ('a' * 100),
            'a..com',
            ('.' + 'a' * 63 + '.' + '0' * 62 + '.c.' + 'd' * 61 + '.' +
             'e' * 61),  # too long (> 251 after deleting .)
            ('*.' + 'a' * 60 + '.' + '0' * 60 + '.c.' + 'd' * 61 + '.' +
             'e' * 61),  # starts with *.
            'c.-a.com',
            'c.a-.com',
        ]
        for no_proxy in invalid_no_proxy:
            self.assertFalse(
                utils.is_valid_no_proxy(no_proxy),
                msg="'no_proxy' value should be invalid: {}".format(no_proxy))


class TempFilesTestCase(base.TestCase):

    def test_tempdir(self):
        dirname = None
        with utils.tempdir() as tempdir:
            self.assertTrue(os.path.isdir(tempdir))
            dirname = tempdir
        self.assertFalse(os.path.exists(dirname))

    @mock.patch.object(shutil, 'rmtree', autospec=True)
    @mock.patch.object(tempfile, 'mkdtemp', autospec=True)
    def test_tempdir_mocked(self, mkdtemp_mock, rmtree_mock):
        self.config(tempdir='abc')
        mkdtemp_mock.return_value = 'temp-dir'
        kwargs = {'dir': 'b'}

        with utils.tempdir(**kwargs) as tempdir:
            self.assertEqual('temp-dir', tempdir)
            tempdir_created = tempdir

        mkdtemp_mock.assert_called_once_with(**kwargs)
        rmtree_mock.assert_called_once_with(tempdir_created)

    @mock.patch.object(utils, 'LOG', autospec=True)
    @mock.patch.object(shutil, 'rmtree', autospec=True)
    @mock.patch.object(tempfile, 'mkdtemp', autospec=True)
    def test_tempdir_mocked_error_on_rmtree(self, mkdtemp_mock, rmtree_mock,
                                            log_mock):
        self.config(tempdir='abc')
        mkdtemp_mock.return_value = 'temp-dir'
        rmtree_mock.side_effect = OSError

        with utils.tempdir() as tempdir:
            self.assertEqual('temp-dir', tempdir)
            tempdir_created = tempdir

        rmtree_mock.assert_called_once_with(tempdir_created)
        self.assertTrue(log_mock.error.called)

    @mock.patch.object(os.path, 'exists', autospec=True)
    @mock.patch.object(utils, '_check_dir_writable', autospec=True)
    @mock.patch.object(utils, '_check_dir_free_space', autospec=True)
    def test_check_dir_with_pass_in(self, mock_free_space, mock_dir_writable,
                                    mock_exists):
        mock_exists.return_value = True
        # test passing in a directory and size
        utils.check_dir(directory_to_check='/fake/path', required_space=5)
        mock_exists.assert_called_once_with('/fake/path')
        mock_dir_writable.assert_called_once_with('/fake/path')
        mock_free_space.assert_called_once_with('/fake/path', 5)
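    # NOTE: illustrative sketch, not part of the original suite. Assuming the
    # check_dir() signature exercised above, a caller verifying scratch space
    # before staging an image might do:
    #
    #     from ironic.common import exception, utils
    #
    #     try:
    #         utils.check_dir(directory_to_check='/var/lib/ironic',
    #                         required_space=5)  # MiB, judging from the
    #                                            # statvfs figures below
    #     except (exception.PathNotFound,
    #             exception.DirectoryNotWritable,
    #             exception.InsufficientDiskSpace):
    #         pass  # fail fast instead of erroring mid-deploy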
@mock.patch.object(os.path, 'exists', autospec=True) @mock.patch.object(utils, '_check_dir_writable', autospec=True) @mock.patch.object(utils, '_check_dir_free_space', autospec=True) def test_check_dir_no_dir(self, mock_free_space, mock_dir_writable, mock_exists): mock_exists.return_value = False self.config(tempdir='/fake/path') self.assertRaises(exception.PathNotFound, utils.check_dir) mock_exists.assert_called_once_with(CONF.tempdir) self.assertFalse(mock_free_space.called) self.assertFalse(mock_dir_writable.called) @mock.patch.object(os.path, 'exists', autospec=True) @mock.patch.object(utils, '_check_dir_writable', autospec=True) @mock.patch.object(utils, '_check_dir_free_space', autospec=True) def test_check_dir_ok(self, mock_free_space, mock_dir_writable, mock_exists): mock_exists.return_value = True self.config(tempdir='/fake/path') utils.check_dir() mock_exists.assert_called_once_with(CONF.tempdir) mock_dir_writable.assert_called_once_with(CONF.tempdir) mock_free_space.assert_called_once_with(CONF.tempdir, 1) @mock.patch.object(os, 'access', autospec=True) def test__check_dir_writable_ok(self, mock_access): mock_access.return_value = True self.assertIsNone(utils._check_dir_writable("/fake/path")) mock_access.assert_called_once_with("/fake/path", os.W_OK) @mock.patch.object(os, 'access', autospec=True) def test__check_dir_writable_not_writable(self, mock_access): mock_access.return_value = False self.assertRaises(exception.DirectoryNotWritable, utils._check_dir_writable, "/fake/path") mock_access.assert_called_once_with("/fake/path", os.W_OK) @mock.patch.object(os, 'statvfs', autospec=True) def test__check_dir_free_space_ok(self, mock_stat): statvfs_mock_return = mock.MagicMock() statvfs_mock_return.f_bsize = 5 statvfs_mock_return.f_frsize = 0 statvfs_mock_return.f_blocks = 0 statvfs_mock_return.f_bfree = 0 statvfs_mock_return.f_bavail = 1024 * 1024 statvfs_mock_return.f_files = 0 statvfs_mock_return.f_ffree = 0 statvfs_mock_return.f_favail = 0 statvfs_mock_return.f_flag = 0 statvfs_mock_return.f_namemax = 0 mock_stat.return_value = statvfs_mock_return utils._check_dir_free_space("/fake/path") mock_stat.assert_called_once_with("/fake/path") @mock.patch.object(os, 'statvfs', autospec=True) def test_check_dir_free_space_raises(self, mock_stat): statvfs_mock_return = mock.MagicMock() statvfs_mock_return.f_bsize = 1 statvfs_mock_return.f_frsize = 0 statvfs_mock_return.f_blocks = 0 statvfs_mock_return.f_bfree = 0 statvfs_mock_return.f_bavail = 1024 statvfs_mock_return.f_files = 0 statvfs_mock_return.f_ffree = 0 statvfs_mock_return.f_favail = 0 statvfs_mock_return.f_flag = 0 statvfs_mock_return.f_namemax = 0 mock_stat.return_value = statvfs_mock_return self.assertRaises(exception.InsufficientDiskSpace, utils._check_dir_free_space, "/fake/path") mock_stat.assert_called_once_with("/fake/path") class GetUpdatedCapabilitiesTestCase(base.TestCase): def test_get_updated_capabilities(self): capabilities = {'ilo_firmware_version': 'xyz'} cap_string = 'ilo_firmware_version:xyz' cap_returned = utils.get_updated_capabilities(None, capabilities) self.assertEqual(cap_string, cap_returned) self.assertIsInstance(cap_returned, str) def test_get_updated_capabilities_multiple_keys(self): capabilities = {'ilo_firmware_version': 'xyz', 'foo': 'bar', 'somekey': 'value'} cap_string = 'ilo_firmware_version:xyz,foo:bar,somekey:value' cap_returned = utils.get_updated_capabilities(None, capabilities) set1 = set(cap_string.split(',')) set2 = set(cap_returned.split(',')) self.assertEqual(set1, set2) 
self.assertIsInstance(cap_returned, str) def test_get_updated_capabilities_invalid_capabilities(self): capabilities = 'ilo_firmware_version' self.assertRaises(ValueError, utils.get_updated_capabilities, capabilities, {}) def test_get_updated_capabilities_capabilities_not_dict(self): capabilities = ['ilo_firmware_version:xyz', 'foo:bar'] self.assertRaises(ValueError, utils.get_updated_capabilities, None, capabilities) def test_get_updated_capabilities_add_to_existing_capabilities(self): new_capabilities = {'BootMode': 'uefi'} expected_capabilities = 'BootMode:uefi,foo:bar' cap_returned = utils.get_updated_capabilities('foo:bar', new_capabilities) set1 = set(expected_capabilities.split(',')) set2 = set(cap_returned.split(',')) self.assertEqual(set1, set2) self.assertIsInstance(cap_returned, str) def test_get_updated_capabilities_replace_to_existing_capabilities(self): new_capabilities = {'BootMode': 'bios'} expected_capabilities = 'BootMode:bios' cap_returned = utils.get_updated_capabilities('BootMode:uefi', new_capabilities) set1 = set(expected_capabilities.split(',')) set2 = set(cap_returned.split(',')) self.assertEqual(set1, set2) self.assertIsInstance(cap_returned, str) def test_validate_network_port(self): port = utils.validate_network_port('1', 'message') self.assertEqual(1, port) port = utils.validate_network_port('65535') self.assertEqual(65535, port) def test_validate_network_port_fail(self): self.assertRaisesRegexp(exception.InvalidParameterValue, 'Port "65536" is out of range.', utils.validate_network_port, '65536') self.assertRaisesRegexp(exception.InvalidParameterValue, 'fake_port "-1" is out of range.', utils.validate_network_port, '-1', 'fake_port') self.assertRaisesRegexp(exception.InvalidParameterValue, 'Port "invalid" is not a valid integer.', utils.validate_network_port, 'invalid') ironic-5.1.0/ironic/tests/unit/common/test_swift.py0000664000567000056710000002142712674513466023626 0ustar jenkinsjenkins00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
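# NOTE: illustrative sketch, not part of the original test module. Assuming
# only the SwiftAPI surface exercised by the tests below:
#
#     from ironic.common import swift
#
#     api = swift.SwiftAPI()
#     obj_id = api.create_object('container', 'object', '/path/to/file')
#     url = api.get_temp_url('container', 'object', 10)  # timeout passed
#                                                        # to generate_temp_url
#     api.delete_object('container', 'object')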
import mock from oslo_config import cfg import six from six.moves import builtins as __builtin__ from six.moves import http_client from swiftclient import client as swift_client from swiftclient import exceptions as swift_exception from swiftclient import utils as swift_utils from ironic.common import exception from ironic.common import swift from ironic.tests import base CONF = cfg.CONF if six.PY3: import io file = io.BytesIO @mock.patch.object(swift_client, 'Connection', autospec=True) class SwiftTestCase(base.TestCase): def setUp(self): super(SwiftTestCase, self).setUp() self.swift_exception = swift_exception.ClientException('', '') self.config(admin_user='admin', group='keystone_authtoken') self.config(admin_tenant_name='tenant', group='keystone_authtoken') self.config(admin_password='password', group='keystone_authtoken') self.config(auth_uri='http://authurl', group='keystone_authtoken') self.config(auth_version='2', group='keystone_authtoken') self.config(swift_max_retries=2, group='swift') self.config(insecure=0, group='keystone_authtoken') self.config(cafile='/path/to/ca/file', group='keystone_authtoken') self.expected_params = {'retries': 2, 'insecure': 0, 'user': 'admin', 'tenant_name': 'tenant', 'key': 'password', 'authurl': 'http://authurl/v2.0', 'cacert': '/path/to/ca/file', 'auth_version': '2'} def test___init__(self, connection_mock): swift.SwiftAPI() connection_mock.assert_called_once_with(**self.expected_params) def test__init__with_region_from_config(self, connection_mock): self.config(region_name='region1', group='keystone_authtoken') swift.SwiftAPI() params = self.expected_params.copy() params['os_options'] = {'region_name': 'region1'} connection_mock.assert_called_once_with(**params) def test__init__with_region_from_constructor(self, connection_mock): swift.SwiftAPI(region_name='region1') params = self.expected_params.copy() params['os_options'] = {'region_name': 'region1'} connection_mock.assert_called_once_with(**params) @mock.patch.object(__builtin__, 'open', autospec=True) def test_create_object(self, open_mock, connection_mock): swiftapi = swift.SwiftAPI() connection_obj_mock = connection_mock.return_value mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = 'file-object' open_mock.return_value = mock_file_handle connection_obj_mock.put_object.return_value = 'object-uuid' object_uuid = swiftapi.create_object('container', 'object', 'some-file-location') connection_obj_mock.put_container.assert_called_once_with('container') connection_obj_mock.put_object.assert_called_once_with( 'container', 'object', 'file-object', headers=None) self.assertEqual('object-uuid', object_uuid) @mock.patch.object(__builtin__, 'open', autospec=True) def test_create_object_create_container_fails(self, open_mock, connection_mock): swiftapi = swift.SwiftAPI() connection_obj_mock = connection_mock.return_value connection_obj_mock.put_container.side_effect = self.swift_exception self.assertRaises(exception.SwiftOperationError, swiftapi.create_object, 'container', 'object', 'some-file-location') connection_obj_mock.put_container.assert_called_once_with('container') self.assertFalse(connection_obj_mock.put_object.called) @mock.patch.object(__builtin__, 'open', autospec=True) def test_create_object_put_object_fails(self, open_mock, connection_mock): swiftapi = swift.SwiftAPI() mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = 'file-object' open_mock.return_value = mock_file_handle connection_obj_mock = 
connection_mock.return_value connection_obj_mock.head_account.side_effect = None connection_obj_mock.put_object.side_effect = self.swift_exception self.assertRaises(exception.SwiftOperationError, swiftapi.create_object, 'container', 'object', 'some-file-location') connection_obj_mock.put_container.assert_called_once_with('container') connection_obj_mock.put_object.assert_called_once_with( 'container', 'object', 'file-object', headers=None) @mock.patch.object(swift_utils, 'generate_temp_url', autospec=True) def test_get_temp_url(self, gen_temp_url_mock, connection_mock): swiftapi = swift.SwiftAPI() connection_obj_mock = connection_mock.return_value auth = ['http://host/v1/AUTH_tenant_id', 'token'] connection_obj_mock.get_auth.return_value = auth head_ret_val = {'x-account-meta-temp-url-key': 'secretkey'} connection_obj_mock.head_account.return_value = head_ret_val gen_temp_url_mock.return_value = 'temp-url-path' temp_url_returned = swiftapi.get_temp_url('container', 'object', 10) connection_obj_mock.get_auth.assert_called_once_with() connection_obj_mock.head_account.assert_called_once_with() object_path_expected = '/v1/AUTH_tenant_id/container/object' gen_temp_url_mock.assert_called_once_with(object_path_expected, 10, 'secretkey', 'GET') self.assertEqual('http://host/temp-url-path', temp_url_returned) def test_delete_object(self, connection_mock): swiftapi = swift.SwiftAPI() connection_obj_mock = connection_mock.return_value swiftapi.delete_object('container', 'object') connection_obj_mock.delete_object.assert_called_once_with('container', 'object') def test_delete_object_exc_resource_not_found(self, connection_mock): swiftapi = swift.SwiftAPI() exc = swift_exception.ClientException( "Resource not found", http_status=http_client.NOT_FOUND) connection_obj_mock = connection_mock.return_value connection_obj_mock.delete_object.side_effect = exc self.assertRaises(exception.SwiftObjectNotFoundError, swiftapi.delete_object, 'container', 'object') connection_obj_mock.delete_object.assert_called_once_with('container', 'object') def test_delete_object_exc(self, connection_mock): swiftapi = swift.SwiftAPI() exc = swift_exception.ClientException("Operation error") connection_obj_mock = connection_mock.return_value connection_obj_mock.delete_object.side_effect = exc self.assertRaises(exception.SwiftOperationError, swiftapi.delete_object, 'container', 'object') connection_obj_mock.delete_object.assert_called_once_with('container', 'object') def test_head_object(self, connection_mock): swiftapi = swift.SwiftAPI() connection_obj_mock = connection_mock.return_value expected_head_result = {'a': 'b'} connection_obj_mock.head_object.return_value = expected_head_result actual_head_result = swiftapi.head_object('container', 'object') connection_obj_mock.head_object.assert_called_once_with('container', 'object') self.assertEqual(expected_head_result, actual_head_result) def test_update_object_meta(self, connection_mock): swiftapi = swift.SwiftAPI() connection_obj_mock = connection_mock.return_value headers = {'a': 'b'} swiftapi.update_object_meta('container', 'object', headers) connection_obj_mock.post_object.assert_called_once_with( 'container', 'object', headers) ironic-5.1.0/ironic/tests/unit/common/test_pxe_utils.py0000664000567000056710000007100112674513466024477 0ustar jenkinsjenkins00000000000000# # Copyright 2014 Rackspace, Inc # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import mock from oslo_config import cfg import six from ironic.common import pxe_utils from ironic.conductor import task_manager from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as object_utils CONF = cfg.CONF class TestPXEUtils(db_base.DbTestCase): def setUp(self): super(TestPXEUtils, self).setUp() mgr_utils.mock_the_extension_manager(driver="fake") common_pxe_options = { 'deployment_aki_path': u'/tftpboot/1be26c0b-03f2-4d2e-ae87-' u'c02d7f33c123/deploy_kernel', 'aki_path': u'/tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/' u'kernel', 'ari_path': u'/tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/' u'ramdisk', 'pxe_append_params': 'test_param', 'deployment_ari_path': u'/tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7' u'f33c123/deploy_ramdisk', 'root_device': 'vendor=fake,size=123', 'ipa-api-url': 'http://192.168.122.184:6385', 'ipxe_timeout': 0, } self.pxe_options = { 'deployment_key': '0123456789ABCDEFGHIJKLMNOPQRSTUV', 'iscsi_target_iqn': u'iqn-1be26c0b-03f2-4d2e-ae87-c02d7f33' u'c123', 'deployment_id': u'1be26c0b-03f2-4d2e-ae87-c02d7f33c123', 'ironic_api_url': 'http://192.168.122.184:6385', 'disk': 'cciss/c0d0,sda,hda,vda', 'boot_option': 'netboot', 'ipa-driver-name': 'pxe_ssh', } self.pxe_options.update(common_pxe_options) self.pxe_options_bios = { 'boot_mode': 'bios', } self.pxe_options_bios.update(self.pxe_options) self.pxe_options_uefi = { 'boot_mode': 'uefi', } self.pxe_options_uefi.update(self.pxe_options) self.agent_pxe_options = { 'ipa-driver-name': 'agent_ipmitool', } self.agent_pxe_options.update(common_pxe_options) self.ipxe_options = self.pxe_options.copy() self.ipxe_options.update({ 'deployment_aki_path': 'http://1.2.3.4:1234/deploy_kernel', 'deployment_ari_path': 'http://1.2.3.4:1234/deploy_ramdisk', 'aki_path': 'http://1.2.3.4:1234/kernel', 'ari_path': 'http://1.2.3.4:1234/ramdisk', }) self.ipxe_options_bios = { 'boot_mode': 'bios', } self.ipxe_options_bios.update(self.ipxe_options) self.ipxe_options_timeout = self.ipxe_options_bios.copy() self.ipxe_options_timeout.update({ 'ipxe_timeout': 120 }) self.ipxe_options_uefi = { 'boot_mode': 'uefi', } self.ipxe_options_uefi.update(self.ipxe_options) self.node = object_utils.create_test_node(self.context) def test__build_pxe_config(self): rendered_template = pxe_utils._build_pxe_config( self.pxe_options_bios, CONF.pxe.pxe_config_template, '{{ ROOT }}', '{{ DISK_IDENTIFIER }}') expected_template = open( 'ironic/tests/unit/drivers/pxe_config.template').read().rstrip() self.assertEqual(six.text_type(expected_template), rendered_template) def test__build_pxe_config_with_agent(self): rendered_template = pxe_utils._build_pxe_config( self.agent_pxe_options, CONF.agent.agent_pxe_config_template, '{{ ROOT }}', '{{ DISK_IDENTIFIER }}') template_file = 'ironic/tests/unit/drivers/agent_pxe_config.template' expected_template = open(template_file).read().rstrip() self.assertEqual(six.text_type(expected_template), rendered_template) def test__build_ipxe_bios_config(self): # NOTE(lucasagomes): iPXE is just an extension of the PXE driver, # it doesn't 
have it's own configuration option for template. # More info: # http://docs.openstack.org/developer/ironic/deploy/install-guide.html self.config( pxe_config_template='ironic/drivers/modules/ipxe_config.template', group='pxe' ) self.config(http_url='http://1.2.3.4:1234', group='deploy') rendered_template = pxe_utils._build_pxe_config( self.ipxe_options_bios, CONF.pxe.pxe_config_template, '{{ ROOT }}', '{{ DISK_IDENTIFIER }}') expected_template = open( 'ironic/tests/unit/drivers/ipxe_config.template').read().rstrip() self.assertEqual(six.text_type(expected_template), rendered_template) def test__build_ipxe_timeout_config(self): # NOTE(lucasagomes): iPXE is just an extension of the PXE driver, # it doesn't have it's own configuration option for template. # More info: # http://docs.openstack.org/developer/ironic/deploy/install-guide.html self.config( pxe_config_template='ironic/drivers/modules/ipxe_config.template', group='pxe' ) self.config(http_url='http://1.2.3.4:1234', group='deploy') rendered_template = pxe_utils._build_pxe_config( self.ipxe_options_timeout, CONF.pxe.pxe_config_template, '{{ ROOT }}', '{{ DISK_IDENTIFIER }}') tpl_file = 'ironic/tests/unit/drivers/ipxe_config_timeout.template' expected_template = open(tpl_file).read().rstrip() self.assertEqual(six.text_type(expected_template), rendered_template) def test__build_ipxe_uefi_config(self): # NOTE(lucasagomes): iPXE is just an extension of the PXE driver, # it doesn't have it's own configuration option for template. # More info: # http://docs.openstack.org/developer/ironic/deploy/install-guide.html self.config( pxe_config_template='ironic/drivers/modules/ipxe_config.template', group='pxe' ) self.config(http_url='http://1.2.3.4:1234', group='deploy') rendered_template = pxe_utils._build_pxe_config( self.ipxe_options_uefi, CONF.pxe.pxe_config_template, '{{ ROOT }}', '{{ DISK_IDENTIFIER }}') expected_template = open( 'ironic/tests/unit/drivers/' 'ipxe_uefi_config.template').read().rstrip() self.assertEqual(six.text_type(expected_template), rendered_template) def test__build_elilo_config(self): pxe_opts = self.pxe_options pxe_opts['boot_mode'] = 'uefi' rendered_template = pxe_utils._build_pxe_config( pxe_opts, CONF.pxe.uefi_pxe_config_template, '{{ ROOT }}', '{{ DISK_IDENTIFIER }}') expected_template = open( 'ironic/tests/unit/drivers/elilo_efi_pxe_config.template' ).read().rstrip() self.assertEqual(six.text_type(expected_template), rendered_template) def test__build_grub_config(self): pxe_opts = self.pxe_options pxe_opts['boot_mode'] = 'uefi' pxe_opts['tftp_server'] = '192.0.2.1' grub_tmplte = "ironic/drivers/modules/pxe_grub_config.template" rendered_template = pxe_utils._build_pxe_config( pxe_opts, grub_tmplte, '(( ROOT ))', '(( DISK_IDENTIFIER ))') template_file = 'ironic/tests/unit/drivers/pxe_grub_config.template' expected_template = open(template_file).read().rstrip() self.assertEqual(six.text_type(expected_template), rendered_template) @mock.patch('ironic.common.utils.create_link_without_raise', autospec=True) @mock.patch('ironic_lib.utils.unlink_without_raise', autospec=True) @mock.patch('ironic.drivers.utils.get_node_mac_addresses', autospec=True) def test__write_mac_pxe_configs(self, get_macs_mock, unlink_mock, create_link_mock): macs = [ '00:11:22:33:44:55:66', '00:11:22:33:44:55:67' ] get_macs_mock.return_value = macs create_link_calls = [ mock.call(u'../1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config', '/tftpboot/pxelinux.cfg/01-00-11-22-33-44-55-66'), mock.call(u'../1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config', 
'/tftpboot/pxelinux.cfg/01-00-11-22-33-44-55-67') ] unlink_calls = [ mock.call('/tftpboot/pxelinux.cfg/01-00-11-22-33-44-55-66'), mock.call('/tftpboot/pxelinux.cfg/01-00-11-22-33-44-55-67'), ] with task_manager.acquire(self.context, self.node.uuid) as task: pxe_utils._link_mac_pxe_configs(task) unlink_mock.assert_has_calls(unlink_calls) create_link_mock.assert_has_calls(create_link_calls) @mock.patch('ironic.common.utils.create_link_without_raise', autospec=True) @mock.patch('ironic_lib.utils.unlink_without_raise', autospec=True) @mock.patch('ironic.drivers.utils.get_node_mac_addresses', autospec=True) def test__write_mac_ipxe_configs(self, get_macs_mock, unlink_mock, create_link_mock): self.config(ipxe_enabled=True, group='pxe') macs = [ '00:11:22:33:44:55:66', '00:11:22:33:44:55:67' ] get_macs_mock.return_value = macs create_link_calls = [ mock.call(u'../1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config', '/httpboot/pxelinux.cfg/00-11-22-33-44-55-66'), mock.call(u'../1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config', '/httpboot/pxelinux.cfg/00112233445566'), mock.call(u'../1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config', '/httpboot/pxelinux.cfg/00-11-22-33-44-55-67'), mock.call(u'../1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config', '/httpboot/pxelinux.cfg/00112233445567'), ] unlink_calls = [ mock.call('/httpboot/pxelinux.cfg/00-11-22-33-44-55-66'), mock.call('/httpboot/pxelinux.cfg/00112233445566'), mock.call('/httpboot/pxelinux.cfg/00-11-22-33-44-55-67'), mock.call('/httpboot/pxelinux.cfg/00112233445567'), ] with task_manager.acquire(self.context, self.node.uuid) as task: pxe_utils._link_mac_pxe_configs(task) unlink_mock.assert_has_calls(unlink_calls) create_link_mock.assert_has_calls(create_link_calls) @mock.patch('ironic.common.utils.create_link_without_raise', autospec=True) @mock.patch('ironic_lib.utils.unlink_without_raise', autospec=True) @mock.patch('ironic.common.dhcp_factory.DHCPFactory.provider', autospec=True) def test__link_ip_address_pxe_configs(self, provider_mock, unlink_mock, create_link_mock): ip_address = '10.10.0.1' address = "aa:aa:aa:aa:aa:aa" object_utils.create_test_port(self.context, node_id=self.node.id, address=address) provider_mock.get_ip_addresses.return_value = [ip_address] create_link_calls = [ mock.call(u'1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config', u'/tftpboot/10.10.0.1.conf'), ] with task_manager.acquire(self.context, self.node.uuid) as task: pxe_utils._link_ip_address_pxe_configs(task, False) unlink_mock.assert_called_once_with('/tftpboot/10.10.0.1.conf') create_link_mock.assert_has_calls(create_link_calls) @mock.patch('ironic.common.utils.write_to_file', autospec=True) @mock.patch.object(pxe_utils, '_build_pxe_config', autospec=True) @mock.patch('oslo_utils.fileutils.ensure_tree', autospec=True) def test_create_pxe_config(self, ensure_tree_mock, build_mock, write_mock): build_mock.return_value = self.pxe_options_bios with task_manager.acquire(self.context, self.node.uuid) as task: pxe_utils.create_pxe_config(task, self.pxe_options_bios, CONF.pxe.pxe_config_template) build_mock.assert_called_with(self.pxe_options_bios, CONF.pxe.pxe_config_template, '{{ ROOT }}', '{{ DISK_IDENTIFIER }}') ensure_calls = [ mock.call(os.path.join(CONF.pxe.tftp_root, self.node.uuid)), mock.call(os.path.join(CONF.pxe.tftp_root, 'pxelinux.cfg')) ] ensure_tree_mock.assert_has_calls(ensure_calls) pxe_cfg_file_path = pxe_utils.get_pxe_config_file_path(self.node.uuid) write_mock.assert_called_with(pxe_cfg_file_path, self.pxe_options_bios) 
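    # NOTE: illustrative sketch, not part of the original suite. The test
    # above shows the call pattern a boot interface would follow, assuming
    # the pxe_utils helpers keep the signatures exercised here:
    #
    #     pxe_utils.create_pxe_config(task, pxe_options,
    #                                 CONF.pxe.pxe_config_template)
    #     config_path = pxe_utils.get_pxe_config_file_path(task.node.uuid)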
@mock.patch('ironic.common.pxe_utils._link_ip_address_pxe_configs', autospec=True) @mock.patch('ironic.common.utils.write_to_file', autospec=True) @mock.patch('ironic.common.pxe_utils._build_pxe_config', autospec=True) @mock.patch('oslo_utils.fileutils.ensure_tree', autospec=True) def test_create_pxe_config_uefi_elilo(self, ensure_tree_mock, build_mock, write_mock, link_ip_configs_mock): build_mock.return_value = self.pxe_options_uefi with task_manager.acquire(self.context, self.node.uuid) as task: task.node.properties['capabilities'] = 'boot_mode:uefi' pxe_utils.create_pxe_config(task, self.pxe_options_uefi, CONF.pxe.uefi_pxe_config_template) ensure_calls = [ mock.call(os.path.join(CONF.pxe.tftp_root, self.node.uuid)), mock.call(os.path.join(CONF.pxe.tftp_root, 'pxelinux.cfg')) ] ensure_tree_mock.assert_has_calls(ensure_calls) build_mock.assert_called_with(self.pxe_options_uefi, CONF.pxe.uefi_pxe_config_template, '{{ ROOT }}', '{{ DISK_IDENTIFIER }}') link_ip_configs_mock.assert_called_once_with(task, True) pxe_cfg_file_path = pxe_utils.get_pxe_config_file_path(self.node.uuid) write_mock.assert_called_with(pxe_cfg_file_path, self.pxe_options_uefi) @mock.patch('ironic.common.pxe_utils._link_ip_address_pxe_configs', autospec=True) @mock.patch('ironic.common.utils.write_to_file', autospec=True) @mock.patch('ironic.common.pxe_utils._build_pxe_config', autospec=True) @mock.patch('oslo_utils.fileutils.ensure_tree', autospec=True) def test_create_pxe_config_uefi_grub(self, ensure_tree_mock, build_mock, write_mock, link_ip_configs_mock): build_mock.return_value = self.pxe_options_uefi grub_tmplte = "ironic/drivers/modules/pxe_grub_config.template" with task_manager.acquire(self.context, self.node.uuid) as task: task.node.properties['capabilities'] = 'boot_mode:uefi' pxe_utils.create_pxe_config(task, self.pxe_options_uefi, grub_tmplte) ensure_calls = [ mock.call(os.path.join(CONF.pxe.tftp_root, self.node.uuid)), mock.call(os.path.join(CONF.pxe.tftp_root, 'pxelinux.cfg')) ] ensure_tree_mock.assert_has_calls(ensure_calls) build_mock.assert_called_with(self.pxe_options_uefi, grub_tmplte, '(( ROOT ))', '(( DISK_IDENTIFIER ))') link_ip_configs_mock.assert_called_once_with(task, False) pxe_cfg_file_path = pxe_utils.get_pxe_config_file_path(self.node.uuid) write_mock.assert_called_with(pxe_cfg_file_path, self.pxe_options_uefi) @mock.patch('ironic.common.pxe_utils._link_mac_pxe_configs', autospec=True) @mock.patch('ironic.common.utils.write_to_file', autospec=True) @mock.patch('ironic.common.pxe_utils._build_pxe_config', autospec=True) @mock.patch('oslo_utils.fileutils.ensure_tree', autospec=True) def test_create_pxe_config_uefi_ipxe(self, ensure_tree_mock, build_mock, write_mock, link_mac_pxe_mock): self.config(ipxe_enabled=True, group='pxe') build_mock.return_value = self.ipxe_options_uefi ipxe_template = "ironic/drivers/modules/ipxe_config.template" with task_manager.acquire(self.context, self.node.uuid) as task: task.node.properties['capabilities'] = 'boot_mode:uefi' pxe_utils.create_pxe_config(task, self.ipxe_options_uefi, ipxe_template) ensure_calls = [ mock.call(os.path.join(CONF.deploy.http_root, self.node.uuid)), mock.call(os.path.join(CONF.deploy.http_root, 'pxelinux.cfg')) ] ensure_tree_mock.assert_has_calls(ensure_calls) build_mock.assert_called_with(self.ipxe_options_uefi, ipxe_template, '{{ ROOT }}', '{{ DISK_IDENTIFIER }}') link_mac_pxe_mock.assert_called_once_with(task) pxe_cfg_file_path = pxe_utils.get_pxe_config_file_path(self.node.uuid) write_mock.assert_called_with(pxe_cfg_file_path, 
self.ipxe_options_uefi) @mock.patch('ironic.common.utils.rmtree_without_raise', autospec=True) @mock.patch('ironic_lib.utils.unlink_without_raise', autospec=True) def test_clean_up_pxe_config(self, unlink_mock, rmtree_mock): address = "aa:aa:aa:aa:aa:aa" object_utils.create_test_port(self.context, node_id=self.node.id, address=address) with task_manager.acquire(self.context, self.node.uuid) as task: pxe_utils.clean_up_pxe_config(task) unlink_mock.assert_called_once_with("/tftpboot/pxelinux.cfg/01-%s" % address.replace(':', '-')) rmtree_mock.assert_called_once_with( os.path.join(CONF.pxe.tftp_root, self.node.uuid)) def test__get_pxe_mac_path(self): mac = '00:11:22:33:44:55:66' self.assertEqual('/tftpboot/pxelinux.cfg/01-00-11-22-33-44-55-66', pxe_utils._get_pxe_mac_path(mac)) def test__get_pxe_mac_path_ipxe(self): self.config(ipxe_enabled=True, group='pxe') self.config(http_root='/httpboot', group='deploy') mac = '00:11:22:33:AA:BB:CC' self.assertEqual('/httpboot/pxelinux.cfg/00-11-22-33-aa-bb-cc', pxe_utils._get_pxe_mac_path(mac)) def test__get_pxe_ip_address_path(self): ipaddress = '10.10.0.1' self.assertEqual('/tftpboot/10.10.0.1.conf', pxe_utils._get_pxe_ip_address_path(ipaddress, False)) def test_get_root_dir(self): expected_dir = '/tftproot' self.config(ipxe_enabled=False, group='pxe') self.config(tftp_root=expected_dir, group='pxe') self.assertEqual(expected_dir, pxe_utils.get_root_dir()) def test_get_root_dir_ipxe(self): expected_dir = '/httpboot' self.config(ipxe_enabled=True, group='pxe') self.config(http_root=expected_dir, group='deploy') self.assertEqual(expected_dir, pxe_utils.get_root_dir()) def test_get_pxe_config_file_path(self): self.assertEqual(os.path.join(CONF.pxe.tftp_root, self.node.uuid, 'config'), pxe_utils.get_pxe_config_file_path(self.node.uuid)) def _dhcp_options_for_instance(self, ip_version=4): self.config(ip_version=ip_version, group='pxe') self.config(tftp_server='192.0.2.1', group='pxe') self.config(pxe_bootfile_name='fake-bootfile', group='pxe') expected_info = [{'opt_name': 'bootfile-name', 'opt_value': 'fake-bootfile', 'ip_version': ip_version}, {'opt_name': 'server-ip-address', 'opt_value': '192.0.2.1', 'ip_version': ip_version}, {'opt_name': 'tftp-server', 'opt_value': '192.0.2.1', 'ip_version': ip_version}, ] with task_manager.acquire(self.context, self.node.uuid) as task: self.assertEqual(expected_info, pxe_utils.dhcp_options_for_instance(task)) def test_dhcp_options_for_instance(self): self._dhcp_options_for_instance(ip_version=4) def test_dhcp_options_for_instance_ipv6(self): self._dhcp_options_for_instance(ip_version=6) def _test_get_deploy_kr_info(self, expected_dir): node_uuid = 'fake-node' driver_info = { 'deploy_kernel': 'glance://deploy-kernel', 'deploy_ramdisk': 'glance://deploy-ramdisk', } expected = { 'deploy_kernel': ('glance://deploy-kernel', expected_dir + '/fake-node/deploy_kernel'), 'deploy_ramdisk': ('glance://deploy-ramdisk', expected_dir + '/fake-node/deploy_ramdisk'), } kr_info = pxe_utils.get_deploy_kr_info(node_uuid, driver_info) self.assertEqual(expected, kr_info) def test_get_deploy_kr_info(self): expected_dir = '/tftp' self.config(tftp_root=expected_dir, group='pxe') self._test_get_deploy_kr_info(expected_dir) def test_get_deploy_kr_info_ipxe(self): expected_dir = '/http' self.config(ipxe_enabled=True, group='pxe') self.config(http_root=expected_dir, group='deploy') self._test_get_deploy_kr_info(expected_dir) def test_get_deploy_kr_info_bad_driver_info(self): self.config(tftp_root='/tftp', group='pxe') node_uuid = 'fake-node' 
driver_info = {} self.assertRaises(KeyError, pxe_utils.get_deploy_kr_info, node_uuid, driver_info) def _dhcp_options_for_instance_ipxe(self, task, boot_file): self.config(tftp_server='192.0.2.1', group='pxe') self.config(ipxe_enabled=True, group='pxe') self.config(http_url='http://192.0.3.2:1234', group='deploy') self.config(ipxe_boot_script='/test/boot.ipxe', group='pxe') self.config(dhcp_provider='isc', group='dhcp') expected_boot_script_url = 'http://192.0.3.2:1234/boot.ipxe' expected_info = [{'opt_name': '!175,bootfile-name', 'opt_value': boot_file, 'ip_version': 4}, {'opt_name': 'server-ip-address', 'opt_value': '192.0.2.1', 'ip_version': 4}, {'opt_name': 'tftp-server', 'opt_value': '192.0.2.1', 'ip_version': 4}, {'opt_name': 'bootfile-name', 'opt_value': expected_boot_script_url, 'ip_version': 4}] self.assertItemsEqual(expected_info, pxe_utils.dhcp_options_for_instance(task)) self.config(dhcp_provider='neutron', group='dhcp') expected_boot_script_url = 'http://192.0.3.2:1234/boot.ipxe' expected_info = [{'opt_name': 'tag:!ipxe,bootfile-name', 'opt_value': boot_file, 'ip_version': 4}, {'opt_name': 'server-ip-address', 'opt_value': '192.0.2.1', 'ip_version': 4}, {'opt_name': 'tftp-server', 'opt_value': '192.0.2.1', 'ip_version': 4}, {'opt_name': 'tag:ipxe,bootfile-name', 'opt_value': expected_boot_script_url, 'ip_version': 4}] self.assertItemsEqual(expected_info, pxe_utils.dhcp_options_for_instance(task)) def test_dhcp_options_for_instance_ipxe_bios(self): boot_file = 'fake-bootfile-bios' self.config(pxe_bootfile_name=boot_file, group='pxe') with task_manager.acquire(self.context, self.node.uuid) as task: self._dhcp_options_for_instance_ipxe(task, boot_file) def test_dhcp_options_for_instance_ipxe_uefi(self): boot_file = 'fake-bootfile-uefi' self.config(uefi_pxe_bootfile_name=boot_file, group='pxe') with task_manager.acquire(self.context, self.node.uuid) as task: task.node.properties['capabilities'] = 'boot_mode:uefi' self._dhcp_options_for_instance_ipxe(task, boot_file) @mock.patch('ironic.common.utils.rmtree_without_raise', autospec=True) @mock.patch('ironic_lib.utils.unlink_without_raise', autospec=True) @mock.patch('ironic.common.dhcp_factory.DHCPFactory.provider') def test_clean_up_pxe_config_uefi(self, provider_mock, unlink_mock, rmtree_mock): ip_address = '10.10.0.1' address = "aa:aa:aa:aa:aa:aa" properties = {'capabilities': 'boot_mode:uefi'} object_utils.create_test_port(self.context, node_id=self.node.id, address=address) provider_mock.get_ip_addresses.return_value = [ip_address] with task_manager.acquire(self.context, self.node.uuid) as task: task.node.properties = properties pxe_utils.clean_up_pxe_config(task) unlink_calls = [ mock.call('/tftpboot/10.10.0.1.conf'), mock.call('/tftpboot/0A0A0001.conf') ] unlink_mock.assert_has_calls(unlink_calls) rmtree_mock.assert_called_once_with( os.path.join(CONF.pxe.tftp_root, self.node.uuid)) @mock.patch('ironic.common.utils.rmtree_without_raise') @mock.patch('ironic_lib.utils.unlink_without_raise', autospec=True) @mock.patch('ironic.common.dhcp_factory.DHCPFactory.provider') def test_clean_up_pxe_config_uefi_instance_info(self, provider_mock, unlink_mock, rmtree_mock): ip_address = '10.10.0.1' address = "aa:aa:aa:aa:aa:aa" object_utils.create_test_port(self.context, node_id=self.node.id, address=address) provider_mock.get_ip_addresses.return_value = [ip_address] with task_manager.acquire(self.context, self.node.uuid) as task: task.node.instance_info['deploy_boot_mode'] = 'uefi' pxe_utils.clean_up_pxe_config(task) unlink_calls = [ 
                mock.call('/tftpboot/10.10.0.1.conf'),
                mock.call('/tftpboot/0A0A0001.conf')
            ]
            unlink_mock.assert_has_calls(unlink_calls)
            rmtree_mock.assert_called_once_with(
                os.path.join(CONF.pxe.tftp_root, self.node.uuid))

    @mock.patch('ironic.common.utils.rmtree_without_raise', autospec=True)
    @mock.patch('ironic_lib.utils.unlink_without_raise', autospec=True)
    def test_clean_up_ipxe_config_uefi(self, unlink_mock, rmtree_mock):
        self.config(ipxe_enabled=True, group='pxe')
        address = "aa:aa:aa:aa:aa:aa"
        properties = {'capabilities': 'boot_mode:uefi'}
        object_utils.create_test_port(self.context, node_id=self.node.id,
                                      address=address)

        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node.properties = properties
            pxe_utils.clean_up_pxe_config(task)

            unlink_calls = [
                mock.call('/httpboot/pxelinux.cfg/aa-aa-aa-aa-aa-aa'),
                mock.call('/httpboot/pxelinux.cfg/aaaaaaaaaaaa')
            ]
            unlink_mock.assert_has_calls(unlink_calls)
            rmtree_mock.assert_called_once_with(
                os.path.join(CONF.deploy.http_root, self.node.uuid))
ironic-5.1.0/ironic/tests/unit/common/test_fsm.py0000664000567000056710000000724412674513466023260 0ustar jenkinsjenkins00000000000000
# -*- coding: utf-8 -*-

# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from ironic.common import exception as excp
from ironic.common import fsm
from ironic.tests import base


class FSMTest(base.TestCase):

    def setUp(self):
        super(FSMTest, self).setUp()
        m = fsm.FSM()
        m.add_state('working', stable=True)
        m.add_state('daydream')
        m.add_state('wakeup', target='working')
        m.add_state('play', stable=True)
        m.add_transition('wakeup', 'working', 'walk')
        self.fsm = m

    def test_is_stable(self):
        self.assertTrue(self.fsm.is_stable('working'))

    def test_is_stable_not(self):
        self.assertFalse(self.fsm.is_stable('daydream'))

    def test_is_stable_invalid_state(self):
        self.assertRaises(excp.InvalidState,
                          self.fsm.is_stable, 'foo')

    def test_target_state_stable(self):
        # Test to verify that adding a new state with a 'target' state
        # pointing to a 'stable' state does not raise an exception.
        self.fsm.add_state('foo', target='working')
        self.fsm.default_start_state = 'working'
        self.fsm.initialize()

    def test__validate_target_state(self):
        # valid
        self.fsm._validate_target_state('working')

        # target doesn't exist
        self.assertRaisesRegexp(excp.InvalidState, "does not exist",
                                self.fsm._validate_target_state,
                                'new state')

        # target isn't a stable state
        self.assertRaisesRegexp(excp.InvalidState, "stable",
                                self.fsm._validate_target_state,
                                'daydream')

    def test_initialize(self):
        # no start state
        self.assertRaises(excp.InvalidState, self.fsm.initialize)

        # no target state
        self.fsm.initialize('working')
        self.assertEqual('working', self.fsm.current_state)
        self.assertIsNone(self.fsm.target_state)

        # default target state
        self.fsm.initialize('wakeup')
        self.assertEqual('wakeup', self.fsm.current_state)
        self.assertEqual('working', self.fsm.target_state)

        # specify (it overrides default) target state
        self.fsm.initialize('wakeup', 'play')
        self.assertEqual('wakeup', self.fsm.current_state)
        self.assertEqual('play', self.fsm.target_state)

        # specify an invalid target state
        self.assertRaises(excp.InvalidState, self.fsm.initialize,
                          'wakeup', 'daydream')

    def test_process_event(self):
        # default target state
        self.fsm.initialize('wakeup')
        self.fsm.process_event('walk')
        self.assertEqual('working', self.fsm.current_state)
        self.assertIsNone(self.fsm.target_state)

        # specify (it overrides default) target state
        self.fsm.initialize('wakeup')
        self.fsm.process_event('walk', 'play')
        self.assertEqual('working', self.fsm.current_state)
        self.assertEqual('play', self.fsm.target_state)

        # specify an invalid target state
        self.fsm.initialize('wakeup')
        self.assertRaises(excp.InvalidState, self.fsm.process_event,
                          'walk', 'daydream')
ironic-5.1.0/ironic/tests/unit/common/test_states.py0000664000567000056710000000271512674513466023774 0ustar jenkinsjenkins00000000000000
# -*- coding: utf-8 -*-

# Copyright (C) 2015 Intel Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import six

from ironic.common import states
from ironic.tests import base


class StatesTest(base.TestCase):

    def test_state_values_length(self):
        """test_state_values_length

        State values can be a maximum of 15 characters because they are
        stored in the database and the size of the database entry is 15
        characters. This is specified in db/sqlalchemy/models.py
        """
        for key, value in states.__dict__.items():
            # Assumption: A state variable name is all UPPERCASE and
            # contents are a string.
            if key.upper() == key and isinstance(value, six.string_types):
                self.assertTrue(
                    (len(value) <= 15),
                    "Value for state: {} is greater than 15 "
                    "characters".format(key))
ironic-5.1.0/ironic/tests/unit/common/test_image_service.py0000664000567000056710000004051412674513466025272 0ustar jenkinsjenkins00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
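# NOTE(editor): illustrative aside, not part of ironic. The
# ServiceGetterTestCase in this file exercises
# image_service.get_image_service(), which picks a backend from the image
# reference's URL scheme (http/https, file, glance:// or a bare Glance
# UUID) and rejects unknown schemes. A minimal, self-contained sketch of
# that dispatch pattern -- the helper name and arguments are hypothetical
# stand-ins, not ironic's actual code -- looks like:

from six.moves.urllib import parse as urlparse


def _pick_image_backend(image_href, scheme_map, default=None):
    """Return the handler registered for image_href's URL scheme.

    :param scheme_map: dict mapping scheme strings ('http', 'file', ...)
        to handler classes.
    :param default: handler used for schemeless references, e.g. bare
        Glance image UUIDs.
    """
    scheme = urlparse.urlparse(image_href).scheme.lower()
    if not scheme and default is not None:
        return default
    try:
        return scheme_map[scheme]
    except KeyError:
        # Mirrors the tests' expectation that an unknown scheme such as
        # 'usenet://' is rejected outright.
        raise ValueError('unknown image reference: %s' % image_href)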
import datetime import os import shutil import mock from oslo_config import cfg import requests import sendfile import six import six.moves.builtins as __builtin__ from six.moves import http_client from ironic.common import exception from ironic.common.glance_service.v1 import image_service as glance_v1_service from ironic.common import image_service from ironic.common import keystone from ironic.tests import base if six.PY3: import io file = io.BytesIO class HttpImageServiceTestCase(base.TestCase): def setUp(self): super(HttpImageServiceTestCase, self).setUp() self.service = image_service.HttpImageService() self.href = 'http://127.0.0.1:12345/fedora.qcow2' @mock.patch.object(requests, 'head', autospec=True) def test_validate_href(self, head_mock): response = head_mock.return_value response.status_code = http_client.OK self.service.validate_href(self.href) head_mock.assert_called_once_with(self.href) response.status_code = http_client.NO_CONTENT self.assertRaises(exception.ImageRefValidationFailed, self.service.validate_href, self.href) response.status_code = http_client.BAD_REQUEST self.assertRaises(exception.ImageRefValidationFailed, self.service.validate_href, self.href) @mock.patch.object(requests, 'head', autospec=True) def test_validate_href_error_code(self, head_mock): head_mock.return_value.status_code = http_client.BAD_REQUEST self.assertRaises(exception.ImageRefValidationFailed, self.service.validate_href, self.href) head_mock.assert_called_once_with(self.href) @mock.patch.object(requests, 'head', autospec=True) def test_validate_href_error(self, head_mock): head_mock.side_effect = iter([requests.ConnectionError()]) self.assertRaises(exception.ImageRefValidationFailed, self.service.validate_href, self.href) head_mock.assert_called_once_with(self.href) @mock.patch.object(requests, 'head', autospec=True) def _test_show(self, head_mock, mtime, mtime_date): head_mock.return_value.status_code = http_client.OK head_mock.return_value.headers = { 'Content-Length': 100, 'Last-Modified': mtime } result = self.service.show(self.href) head_mock.assert_called_once_with(self.href) self.assertEqual({'size': 100, 'updated_at': mtime_date, 'properties': {}}, result) def test_show_rfc_822(self): self._test_show(mtime='Tue, 15 Nov 2014 08:12:31 GMT', mtime_date=datetime.datetime(2014, 11, 15, 8, 12, 31)) def test_show_rfc_850(self): self._test_show(mtime='Tuesday, 15-Nov-14 08:12:31 GMT', mtime_date=datetime.datetime(2014, 11, 15, 8, 12, 31)) def test_show_ansi_c(self): self._test_show(mtime='Tue Nov 15 08:12:31 2014', mtime_date=datetime.datetime(2014, 11, 15, 8, 12, 31)) @mock.patch.object(requests, 'head', autospec=True) def test_show_no_content_length(self, head_mock): head_mock.return_value.status_code = http_client.OK head_mock.return_value.headers = {} self.assertRaises(exception.ImageRefValidationFailed, self.service.show, self.href) head_mock.assert_called_with(self.href) @mock.patch.object(shutil, 'copyfileobj', autospec=True) @mock.patch.object(requests, 'get', autospec=True) def test_download_success(self, req_get_mock, shutil_mock): response_mock = req_get_mock.return_value response_mock.status_code = http_client.OK response_mock.raw = mock.MagicMock(spec=file) file_mock = mock.Mock(spec=file) self.service.download(self.href, file_mock) shutil_mock.assert_called_once_with( response_mock.raw.__enter__(), file_mock, image_service.IMAGE_CHUNK_SIZE ) req_get_mock.assert_called_once_with(self.href, stream=True) @mock.patch.object(requests, 'get', autospec=True) def 
test_download_fail_connerror(self, req_get_mock): req_get_mock.side_effect = iter([requests.ConnectionError()]) file_mock = mock.Mock(spec=file) self.assertRaises(exception.ImageDownloadFailed, self.service.download, self.href, file_mock) @mock.patch.object(shutil, 'copyfileobj', autospec=True) @mock.patch.object(requests, 'get', autospec=True) def test_download_fail_ioerror(self, req_get_mock, shutil_mock): response_mock = req_get_mock.return_value response_mock.status_code = http_client.OK response_mock.raw = mock.MagicMock(spec=file) file_mock = mock.Mock(spec=file) shutil_mock.side_effect = IOError self.assertRaises(exception.ImageDownloadFailed, self.service.download, self.href, file_mock) req_get_mock.assert_called_once_with(self.href, stream=True) class FileImageServiceTestCase(base.TestCase): def setUp(self): super(FileImageServiceTestCase, self).setUp() self.service = image_service.FileImageService() self.href = 'file:///home/user/image.qcow2' self.href_path = '/home/user/image.qcow2' @mock.patch.object(os.path, 'isfile', return_value=True, autospec=True) def test_validate_href(self, path_exists_mock): self.service.validate_href(self.href) path_exists_mock.assert_called_once_with(self.href_path) @mock.patch.object(os.path, 'isfile', return_value=False, autospec=True) def test_validate_href_path_not_found_or_not_file(self, path_exists_mock): self.assertRaises(exception.ImageRefValidationFailed, self.service.validate_href, self.href) path_exists_mock.assert_called_once_with(self.href_path) @mock.patch.object(os.path, 'getmtime', return_value=1431087909.1641912, autospec=True) @mock.patch.object(os.path, 'getsize', return_value=42, autospec=True) @mock.patch.object(image_service.FileImageService, 'validate_href', autospec=True) def test_show(self, _validate_mock, getsize_mock, getmtime_mock): _validate_mock.return_value = self.href_path result = self.service.show(self.href) getsize_mock.assert_called_once_with(self.href_path) getmtime_mock.assert_called_once_with(self.href_path) _validate_mock.assert_called_once_with(mock.ANY, self.href) self.assertEqual({'size': 42, 'updated_at': datetime.datetime(2015, 5, 8, 12, 25, 9, 164191), 'properties': {}}, result) @mock.patch.object(os, 'link', autospec=True) @mock.patch.object(os, 'remove', autospec=True) @mock.patch.object(os, 'access', return_value=True, autospec=True) @mock.patch.object(os, 'stat', autospec=True) @mock.patch.object(image_service.FileImageService, 'validate_href', autospec=True) def test_download_hard_link(self, _validate_mock, stat_mock, access_mock, remove_mock, link_mock): _validate_mock.return_value = self.href_path stat_mock.return_value.st_dev = 'dev1' file_mock = mock.Mock(spec=file) file_mock.name = 'file' self.service.download(self.href, file_mock) _validate_mock.assert_called_once_with(mock.ANY, self.href) self.assertEqual(2, stat_mock.call_count) access_mock.assert_called_once_with(self.href_path, os.R_OK | os.W_OK) remove_mock.assert_called_once_with('file') link_mock.assert_called_once_with(self.href_path, 'file') @mock.patch.object(sendfile, 'sendfile', autospec=True) @mock.patch.object(os.path, 'getsize', return_value=42, autospec=True) @mock.patch.object(__builtin__, 'open', autospec=True) @mock.patch.object(os, 'access', return_value=False, autospec=True) @mock.patch.object(os, 'stat', autospec=True) @mock.patch.object(image_service.FileImageService, 'validate_href', autospec=True) def test_download_copy(self, _validate_mock, stat_mock, access_mock, open_mock, size_mock, copy_mock): 
_validate_mock.return_value = self.href_path stat_mock.return_value.st_dev = 'dev1' file_mock = mock.MagicMock(spec=file) file_mock.name = 'file' input_mock = mock.MagicMock(spec=file) open_mock.return_value = input_mock self.service.download(self.href, file_mock) _validate_mock.assert_called_once_with(mock.ANY, self.href) self.assertEqual(2, stat_mock.call_count) access_mock.assert_called_once_with(self.href_path, os.R_OK | os.W_OK) copy_mock.assert_called_once_with(file_mock.fileno(), input_mock.__enter__().fileno(), 0, 42) size_mock.assert_called_once_with(self.href_path) @mock.patch.object(os, 'remove', side_effect=OSError, autospec=True) @mock.patch.object(os, 'access', return_value=True, autospec=True) @mock.patch.object(os, 'stat', autospec=True) @mock.patch.object(image_service.FileImageService, 'validate_href', autospec=True) def test_download_hard_link_fail(self, _validate_mock, stat_mock, access_mock, remove_mock): _validate_mock.return_value = self.href_path stat_mock.return_value.st_dev = 'dev1' file_mock = mock.MagicMock(spec=file) file_mock.name = 'file' self.assertRaises(exception.ImageDownloadFailed, self.service.download, self.href, file_mock) _validate_mock.assert_called_once_with(mock.ANY, self.href) self.assertEqual(2, stat_mock.call_count) access_mock.assert_called_once_with(self.href_path, os.R_OK | os.W_OK) @mock.patch.object(sendfile, 'sendfile', side_effect=OSError, autospec=True) @mock.patch.object(os.path, 'getsize', return_value=42, autospec=True) @mock.patch.object(__builtin__, 'open', autospec=True) @mock.patch.object(os, 'access', return_value=False, autospec=True) @mock.patch.object(os, 'stat', autospec=True) @mock.patch.object(image_service.FileImageService, 'validate_href', autospec=True) def test_download_copy_fail(self, _validate_mock, stat_mock, access_mock, open_mock, size_mock, copy_mock): _validate_mock.return_value = self.href_path stat_mock.return_value.st_dev = 'dev1' file_mock = mock.MagicMock(spec=file) file_mock.name = 'file' input_mock = mock.MagicMock(spec=file) open_mock.return_value = input_mock self.assertRaises(exception.ImageDownloadFailed, self.service.download, self.href, file_mock) _validate_mock.assert_called_once_with(mock.ANY, self.href) self.assertEqual(2, stat_mock.call_count) access_mock.assert_called_once_with(self.href_path, os.R_OK | os.W_OK) size_mock.assert_called_once_with(self.href_path) class ServiceGetterTestCase(base.TestCase): @mock.patch.object(keystone, 'get_admin_auth_token', autospec=True) @mock.patch.object(glance_v1_service.GlanceImageService, '__init__', return_value=None, autospec=True) def test_get_glance_image_service(self, glance_service_mock, token_mock): image_href = 'image-uuid' self.context.auth_token = 'fake' image_service.get_image_service(image_href, context=self.context) glance_service_mock.assert_called_once_with(mock.ANY, None, 1, self.context) self.assertFalse(token_mock.called) @mock.patch.object(keystone, 'get_admin_auth_token', autospec=True) @mock.patch.object(glance_v1_service.GlanceImageService, '__init__', return_value=None, autospec=True) def test_get_glance_image_service_url(self, glance_service_mock, token_mock): image_href = 'glance://image-uuid' self.context.auth_token = 'fake' image_service.get_image_service(image_href, context=self.context) glance_service_mock.assert_called_once_with(mock.ANY, None, 1, self.context) self.assertFalse(token_mock.called) @mock.patch.object(keystone, 'get_admin_auth_token', autospec=True) @mock.patch.object(glance_v1_service.GlanceImageService, 
'__init__', return_value=None, autospec=True) def test_get_glance_image_service_no_token(self, glance_service_mock, token_mock): image_href = 'image-uuid' self.context.auth_token = None token_mock.return_value = 'admin-token' image_service.get_image_service(image_href, context=self.context) glance_service_mock.assert_called_once_with(mock.ANY, None, 1, self.context) token_mock.assert_called_once_with() self.assertEqual('admin-token', self.context.auth_token) @mock.patch.object(keystone, 'get_admin_auth_token', autospec=True) @mock.patch.object(glance_v1_service.GlanceImageService, '__init__', return_value=None, autospec=True) def test_get_glance_image_service_token_not_needed(self, glance_service_mock, token_mock): image_href = 'image-uuid' self.context.auth_token = None self.config(auth_strategy='noauth', group='glance') image_service.get_image_service(image_href, context=self.context) glance_service_mock.assert_called_once_with(mock.ANY, None, 1, self.context) self.assertFalse(token_mock.called) self.assertIsNone(self.context.auth_token) @mock.patch.object(image_service.HttpImageService, '__init__', return_value=None, autospec=True) def test_get_http_image_service(self, http_service_mock): image_href = 'http://127.0.0.1/image.qcow2' image_service.get_image_service(image_href) http_service_mock.assert_called_once_with() @mock.patch.object(image_service.HttpImageService, '__init__', return_value=None, autospec=True) def test_get_https_image_service(self, http_service_mock): image_href = 'https://127.0.0.1/image.qcow2' image_service.get_image_service(image_href) http_service_mock.assert_called_once_with() @mock.patch.object(image_service.FileImageService, '__init__', return_value=None, autospec=True) def test_get_file_image_service(self, local_service_mock): image_href = 'file:///home/user/image.qcow2' image_service.get_image_service(image_href) local_service_mock.assert_called_once_with() def test_get_image_service_unknown_protocol(self): image_href = 'usenet://alt.binaries.dvd/image.qcow2' self.assertRaises(exception.ImageRefValidationFailed, image_service.get_image_service, image_href) def test_out_range_auth_strategy(self): self.assertRaises(ValueError, cfg.CONF.set_override, 'auth_strategy', 'fake', 'glance', enforce_type=True) def test_out_range_glance_protocol(self): self.assertRaises(ValueError, cfg.CONF.set_override, 'glance_protocol', 'fake', 'glance', enforce_type=True) ironic-5.1.0/ironic/tests/unit/common/test_network.py0000664000567000056710000001043012674513466024153 0ustar jenkinsjenkins00000000000000# Copyright 2014 Rackspace Inc. # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
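# NOTE(editor): illustrative aside, not part of ironic. TestNetwork below
# pins down the shape returned by network.get_node_vif_ids(): a dict of
# the form {'ports': {uuid: vif}, 'portgroups': {uuid: vif}} built from
# each object's extra['vif_port_id']. A standalone sketch of that
# aggregation (the .uuid/.extra attribute names are assumptions for
# illustration) could be:

def _collect_vif_ids(ports, portgroups):
    """Map port/portgroup UUIDs to their attached VIF IDs, if any."""
    result = {'ports': {}, 'portgroups': {}}
    for kind, items in (('ports', ports), ('portgroups', portgroups)):
        for item in items:
            vif = item.extra.get('vif_port_id')
            if vif:
                # Objects without a VIF attached are simply left out,
                # which is why a node with no ports yields empty dicts.
                result[kind][item.uuid] = vif
    return result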
from oslo_utils import uuidutils from ironic.common import network from ironic.conductor import task_manager from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as object_utils class TestNetwork(db_base.DbTestCase): def setUp(self): super(TestNetwork, self).setUp() mgr_utils.mock_the_extension_manager(driver='fake') self.node = object_utils.create_test_node(self.context) def test_get_node_vif_ids_no_ports_no_portgroups(self): expected = {'portgroups': {}, 'ports': {}} with task_manager.acquire(self.context, self.node.uuid) as task: result = network.get_node_vif_ids(task) self.assertEqual(expected, result) def test_get_node_vif_ids_one_port(self): port1 = db_utils.create_test_port(node_id=self.node.id, address='aa:bb:cc:dd:ee:ff', uuid=uuidutils.generate_uuid(), extra={'vif_port_id': 'test-vif-A'}, driver='fake') expected = {'portgroups': {}, 'ports': {port1.uuid: 'test-vif-A'}} with task_manager.acquire(self.context, self.node.uuid) as task: result = network.get_node_vif_ids(task) self.assertEqual(expected, result) def test_get_node_vif_ids_one_portgroup(self): pg1 = db_utils.create_test_portgroup( node_id=self.node.id, extra={'vif_port_id': 'test-vif-A'}) expected = {'portgroups': {pg1.uuid: 'test-vif-A'}, 'ports': {}} with task_manager.acquire(self.context, self.node.uuid) as task: result = network.get_node_vif_ids(task) self.assertEqual(expected, result) def test_get_node_vif_ids_two_ports(self): port1 = db_utils.create_test_port(node_id=self.node.id, address='aa:bb:cc:dd:ee:ff', uuid=uuidutils.generate_uuid(), extra={'vif_port_id': 'test-vif-A'}, driver='fake') port2 = db_utils.create_test_port(node_id=self.node.id, address='dd:ee:ff:aa:bb:cc', uuid=uuidutils.generate_uuid(), extra={'vif_port_id': 'test-vif-B'}, driver='fake') expected = {'portgroups': {}, 'ports': {port1.uuid: 'test-vif-A', port2.uuid: 'test-vif-B'}} with task_manager.acquire(self.context, self.node.uuid) as task: result = network.get_node_vif_ids(task) self.assertEqual(expected, result) def test_get_node_vif_ids_two_portgroups(self): pg1 = db_utils.create_test_portgroup( node_id=self.node.id, extra={'vif_port_id': 'test-vif-A'}) pg2 = db_utils.create_test_portgroup( uuid=uuidutils.generate_uuid(), address='dd:ee:ff:aa:bb:cc', node_id=self.node.id, name='barname', extra={'vif_port_id': 'test-vif-B'}) expected = {'portgroups': {pg1.uuid: 'test-vif-A', pg2.uuid: 'test-vif-B'}, 'ports': {}} with task_manager.acquire(self.context, self.node.uuid) as task: result = network.get_node_vif_ids(task) self.assertEqual(expected, result) ironic-5.1.0/ironic/tests/unit/common/test_raid.py0000664000567000056710000002554412674513466023415 0ustar jenkinsjenkins00000000000000# Copyright 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
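# NOTE(editor): illustrative aside, not part of ironic. The RAID tests
# below feed both valid and malformed configurations to
# raid.validate_configuration(), which checks them against a JSON schema
# (drivers_base.RAID_CONFIG_SCHEMA) and surfaces violations as
# InvalidParameterValue. The core pattern, with ValueError standing in for
# ironic's exception type, is:

import jsonschema


def _validate_raid_config(raid_config, raid_config_schema):
    """Validate a target RAID configuration against a JSON schema."""
    try:
        jsonschema.validate(raid_config, raid_config_schema)
    except jsonschema.ValidationError as e:
        # Re-raise as the caller-facing type so API consumers see one
        # consistent error for malformed input.
        raise ValueError('Invalid RAID configuration: %s' % e)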
import json from ironic.common import exception from ironic.common import raid from ironic.drivers import base as drivers_base from ironic.tests import base from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils from ironic.tests.unit import raid_constants class ValidateRaidConfigurationTestCase(base.TestCase): def setUp(self): with open(drivers_base.RAID_CONFIG_SCHEMA, 'r') as raid_schema_fobj: self.schema = json.load(raid_schema_fobj) super(ValidateRaidConfigurationTestCase, self).setUp() def test_validate_configuration_okay(self): raid_config = json.loads(raid_constants.RAID_CONFIG_OKAY) raid.validate_configuration( raid_config, raid_config_schema=self.schema) def test_validate_configuration_no_logical_disk(self): self.assertRaises(exception.InvalidParameterValue, raid.validate_configuration, {}, raid_config_schema=self.schema) def test_validate_configuration_zero_logical_disks(self): raid_config = json.loads(raid_constants.RAID_CONFIG_NO_LOGICAL_DISKS) self.assertRaises(exception.InvalidParameterValue, raid.validate_configuration, raid_config, raid_config_schema=self.schema) def test_validate_configuration_no_raid_level(self): raid_config = json.loads(raid_constants.RAID_CONFIG_NO_RAID_LEVEL) self.assertRaises(exception.InvalidParameterValue, raid.validate_configuration, raid_config, raid_config_schema=self.schema) def test_validate_configuration_invalid_raid_level(self): raid_config = json.loads(raid_constants.RAID_CONFIG_INVALID_RAID_LEVEL) self.assertRaises(exception.InvalidParameterValue, raid.validate_configuration, raid_config, raid_config_schema=self.schema) def test_validate_configuration_no_size_gb(self): raid_config = json.loads(raid_constants.RAID_CONFIG_NO_SIZE_GB) self.assertRaises(exception.InvalidParameterValue, raid.validate_configuration, raid_config, raid_config_schema=self.schema) def test_validate_configuration_max_size_gb(self): raid_config = json.loads(raid_constants.RAID_CONFIG_MAX_SIZE_GB) raid.validate_configuration(raid_config, raid_config_schema=self.schema) def test_validate_configuration_invalid_size_gb(self): raid_config = json.loads(raid_constants.RAID_CONFIG_INVALID_SIZE_GB) self.assertRaises(exception.InvalidParameterValue, raid.validate_configuration, raid_config, raid_config_schema=self.schema) def test_validate_configuration_invalid_is_root_volume(self): raid_config_str = raid_constants.RAID_CONFIG_INVALID_IS_ROOT_VOL raid_config = json.loads(raid_config_str) self.assertRaises(exception.InvalidParameterValue, raid.validate_configuration, raid_config, raid_config_schema=self.schema) def test_validate_configuration_invalid_multiple_is_root_volume(self): raid_config_str = raid_constants.RAID_CONFIG_MULTIPLE_IS_ROOT_VOL raid_config = json.loads(raid_config_str) self.assertRaises(exception.InvalidParameterValue, raid.validate_configuration, raid_config, raid_config_schema=self.schema) def test_validate_configuration_invalid_share_physical_disks(self): raid_config_str = raid_constants.RAID_CONFIG_INVALID_SHARE_PHY_DISKS raid_config = json.loads(raid_config_str) self.assertRaises(exception.InvalidParameterValue, raid.validate_configuration, raid_config, raid_config_schema=self.schema) def test_validate_configuration_invalid_disk_type(self): raid_config = json.loads(raid_constants.RAID_CONFIG_INVALID_DISK_TYPE) self.assertRaises(exception.InvalidParameterValue, raid.validate_configuration, raid_config, raid_config_schema=self.schema) def test_validate_configuration_invalid_int_type(self): raid_config = 
json.loads(raid_constants.RAID_CONFIG_INVALID_INT_TYPE) self.assertRaises(exception.InvalidParameterValue, raid.validate_configuration, raid_config, raid_config_schema=self.schema) def test_validate_configuration_invalid_number_of_phy_disks(self): raid_config_str = raid_constants.RAID_CONFIG_INVALID_NUM_PHY_DISKS raid_config = json.loads(raid_config_str) self.assertRaises(exception.InvalidParameterValue, raid.validate_configuration, raid_config, raid_config_schema=self.schema) def test_validate_configuration_invalid_physical_disks(self): raid_config = json.loads(raid_constants.RAID_CONFIG_INVALID_PHY_DISKS) self.assertRaises(exception.InvalidParameterValue, raid.validate_configuration, raid_config, raid_config_schema=self.schema) def test_validate_configuration_additional_property(self): raid_config = json.loads(raid_constants.RAID_CONFIG_ADDITIONAL_PROP) self.assertRaises(exception.InvalidParameterValue, raid.validate_configuration, raid_config, raid_config_schema=self.schema) def test_validate_configuration_custom_schema(self): raid_config = json.loads(raid_constants.CUSTOM_SCHEMA_RAID_CONFIG) schema = json.loads(raid_constants.CUSTOM_RAID_SCHEMA) raid.validate_configuration(raid_config, raid_config_schema=schema) class RaidPublicMethodsTestCase(db_base.DbTestCase): def test_get_logical_disk_properties(self): with open(drivers_base.RAID_CONFIG_SCHEMA, 'r') as raid_schema_fobj: schema = json.load(raid_schema_fobj) logical_disk_properties = raid.get_logical_disk_properties(schema) self.assertIn('raid_level', logical_disk_properties) self.assertIn('size_gb', logical_disk_properties) self.assertIn('volume_name', logical_disk_properties) self.assertIn('is_root_volume', logical_disk_properties) self.assertIn('share_physical_disks', logical_disk_properties) self.assertIn('disk_type', logical_disk_properties) self.assertIn('interface_type', logical_disk_properties) self.assertIn('number_of_physical_disks', logical_disk_properties) self.assertIn('controller', logical_disk_properties) self.assertIn('physical_disks', logical_disk_properties) def test_get_logical_disk_properties_custom_schema(self): raid_schema = json.loads(raid_constants.CUSTOM_RAID_SCHEMA) logical_disk_properties = raid.get_logical_disk_properties( raid_config_schema=raid_schema) self.assertIn('raid_level', logical_disk_properties) self.assertIn('size_gb', logical_disk_properties) self.assertIn('foo', logical_disk_properties) def _test_update_raid_info(self, current_config, capabilities=None): node = obj_utils.create_test_node(self.context, driver='fake') if capabilities: properties = node.properties properties['capabilities'] = capabilities del properties['local_gb'] node.properties = properties target_raid_config = json.loads(raid_constants.RAID_CONFIG_OKAY) node.target_raid_config = target_raid_config node.save() raid.update_raid_info(node, current_config) properties = node.properties current = node.raid_config target = node.target_raid_config self.assertIsNotNone(current['last_updated']) self.assertIsInstance(current['logical_disks'][0], dict) if current_config['logical_disks'][0].get('is_root_volume'): self.assertEqual({'wwn': '600508B100'}, properties['root_device']) self.assertEqual(100, properties['local_gb']) self.assertIn('raid_level:1', properties['capabilities']) if capabilities: self.assertIn(capabilities, properties['capabilities']) else: self.assertNotIn('local_gb', properties) self.assertNotIn('root_device', properties) if capabilities: self.assertNotIn('raid_level:1', properties['capabilities']) # Verify 
node.target_raid_config is preserved. self.assertEqual(target_raid_config, target) def test_update_raid_info_okay(self): current_config = json.loads(raid_constants.CURRENT_RAID_CONFIG) self._test_update_raid_info(current_config, capabilities='boot_mode:bios') def test_update_raid_info_okay_no_root_volumes(self): current_config = json.loads(raid_constants.CURRENT_RAID_CONFIG) del current_config['logical_disks'][0]['is_root_volume'] del current_config['logical_disks'][0]['root_device_hint'] self._test_update_raid_info(current_config, capabilities='boot_mode:bios') def test_update_raid_info_okay_current_capabilities_empty(self): current_config = json.loads(raid_constants.CURRENT_RAID_CONFIG) self._test_update_raid_info(current_config, capabilities=None) def test_update_raid_info_multiple_root_volumes(self): current_config = json.loads(raid_constants.RAID_CONFIG_MULTIPLE_ROOT) self.assertRaises(exception.InvalidParameterValue, self._test_update_raid_info, current_config) ironic-5.1.0/ironic/tests/unit/common/test_rpc.py0000664000567000056710000000606612674513466023260 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from ironic.common import context as ironic_context from ironic.common import rpc from ironic.tests import base class TestRequestContextSerializer(base.TestCase): def setUp(self): super(TestRequestContextSerializer, self).setUp() self.mock_serializer = mock.MagicMock() self.serializer = rpc.RequestContextSerializer(self.mock_serializer) self.context = ironic_context.RequestContext() self.entity = {'foo': 'bar'} def test_serialize_entity(self): self.serializer.serialize_entity(self.context, self.entity) self.mock_serializer.serialize_entity.assert_called_with( self.context, self.entity) def test_serialize_entity_empty_base(self): # NOTE(viktors): Return False for check `if self.serializer._base:` bool_args = {'__bool__': lambda *args: False, '__nonzero__': lambda *args: False} self.mock_serializer.configure_mock(**bool_args) entity = self.serializer.serialize_entity(self.context, self.entity) self.assertFalse(self.mock_serializer.serialize_entity.called) # If self.serializer._base is empty, return entity directly self.assertEqual(self.entity, entity) def test_deserialize_entity(self): self.serializer.deserialize_entity(self.context, self.entity) self.mock_serializer.deserialize_entity.assert_called_with( self.context, self.entity) def test_deserialize_entity_empty_base(self): # NOTE(viktors): Return False for check `if self.serializer._base:` bool_args = {'__bool__': lambda *args: False, '__nonzero__': lambda *args: False} self.mock_serializer.configure_mock(**bool_args) entity = self.serializer.deserialize_entity(self.context, self.entity) self.assertFalse(self.mock_serializer.serialize_entity.called) self.assertEqual(self.entity, entity) def test_serialize_context(self): serialize_values = self.serializer.serialize_context(self.context) self.assertEqual(self.context.to_dict(), serialize_values) def test_deserialize_context(self): self.context.user = 
'fake-user'
        self.context.tenant = 'fake-tenant'
        serialize_values = self.context.to_dict()
        new_context = self.serializer.deserialize_context(serialize_values)
        # Ironic RequestContext from_dict will pop 'user' and 'tenant' and
        # initialize to None.
        self.assertIsNone(new_context.user)
        self.assertIsNone(new_context.tenant)
ironic-5.1.0/ironic/tests/unit/common/test_policy.py0000664000567000056710000000505612674513466023761 0ustar jenkinsjenkins00000000000000
# -*- encoding: utf-8 -*-
#
# Copyright 2013 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from ironic.common import policy
from ironic.tests import base


class PolicyTestCase(base.TestCase):
    """Tests whether the configuration of the policy engine is correct."""

    def test_admin_api(self):
        creds = ({'roles': [u'admin']},
                 {'roles': ['administrator']},
                 {'roles': ['admin', 'administrator']})

        for c in creds:
            self.assertTrue(policy.enforce('admin_api', c, c))

    def test_public_api(self):
        creds = {'is_public_api': 'True'}
        self.assertTrue(policy.enforce('public_api', creds, creds))

    def test_trusted_call(self):
        creds = ({'roles': ['admin']},
                 {'is_public_api': 'True'},
                 {'roles': ['admin'], 'is_public_api': 'True'},
                 {'roles': ['Member'], 'is_public_api': 'True'})

        for c in creds:
            self.assertTrue(policy.enforce('trusted_call', c, c))

    def test_show_password(self):
        creds = {'roles': [u'admin'], 'tenant': 'admin'}
        self.assertTrue(policy.enforce('show_password', creds, creds))


class PolicyTestCaseNegative(base.TestCase):
    """Tests whether the configuration of the policy engine is correct."""

    def test_admin_api(self):
        creds = {'roles': ['Member']}
        self.assertFalse(policy.enforce('admin_api', creds, creds))

    def test_public_api(self):
        creds = ({'is_public_api': 'False'}, {})

        for c in creds:
            self.assertFalse(policy.enforce('public_api', c, c))

    def test_trusted_call(self):
        creds = ({'roles': ['Member']},
                 {'is_public_api': 'False'},
                 {'roles': ['Member'], 'is_public_api': 'False'})

        for c in creds:
            self.assertFalse(policy.enforce('trusted_call', c, c))

    def test_show_password(self):
        creds = {'roles': [u'admin'], 'tenant': 'demo'}
        self.assertFalse(policy.enforce('show_password', creds, creds))
ironic-5.1.0/ironic/tests/unit/common/test_service.py0000664000567000056710000000570412674513466024132 0ustar jenkinsjenkins00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
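# NOTE(editor): illustrative aside, not part of ironic. TestWSGIService
# below fixes the worker-count contract for the API service: a positive
# api_workers value is used as-is, zero or unset falls back to a CPU-based
# default from oslo.concurrency, and a negative value is a configuration
# error. Stated as standalone logic (with ValueError standing in for
# ironic's ConfigInvalid), that contract looks like:

from oslo_concurrency import processutils


def _resolve_workers(api_workers):
    """Return the effective number of WSGI workers for a setting value."""
    if not api_workers:
        # None or 0: fall back to the CPU-derived default.
        return processutils.get_worker_count()
    if api_workers < 0:
        raise ValueError('api_workers must not be negative')
    return api_workers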
import mock from oslo_concurrency import processutils from oslo_config import cfg from ironic.common import exception from ironic.common import service from ironic.tests import base CONF = cfg.CONF class TestWSGIService(base.TestCase): @mock.patch.object(service.wsgi, 'Server') def test_workers_set_default(self, wsgi_server): service_name = "ironic_api" test_service = service.WSGIService(service_name) self.assertEqual(processutils.get_worker_count(), test_service.workers) wsgi_server.assert_called_once_with(CONF, service_name, test_service.app, host='0.0.0.0', port=6385, use_ssl=False, logger_name=service_name) @mock.patch.object(service.wsgi, 'Server') def test_workers_set_correct_setting(self, wsgi_server): self.config(api_workers=8, group='api') test_service = service.WSGIService("ironic_api") self.assertEqual(8, test_service.workers) @mock.patch.object(service.wsgi, 'Server') def test_workers_set_zero_setting(self, wsgi_server): self.config(api_workers=0, group='api') test_service = service.WSGIService("ironic_api") self.assertEqual(processutils.get_worker_count(), test_service.workers) @mock.patch.object(service.wsgi, 'Server') def test_workers_set_negative_setting(self, wsgi_server): self.config(api_workers=-2, group='api') self.assertRaises(exception.ConfigInvalid, service.WSGIService, 'ironic_api') self.assertFalse(wsgi_server.called) @mock.patch.object(service.wsgi, 'Server') def test_wsgi_service_with_ssl_enabled(self, wsgi_server): self.config(enable_ssl_api=True, group='api') service_name = 'ironic_api' srv = service.WSGIService('ironic_api', CONF.api.enable_ssl_api) wsgi_server.assert_called_once_with(CONF, service_name, srv.app, host='0.0.0.0', port=6385, use_ssl=True, logger_name=service_name) ironic-5.1.0/ironic/tests/unit/common/__init__.py0000664000567000056710000000000012674513466023152 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/common/test_exception.py0000664000567000056710000000333312674513466024464 0ustar jenkinsjenkins00000000000000# Copyright (c) 2015 IBM, Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import six from ironic.common import exception from ironic.tests import base class DeprecatedException(exception.IronicException): message = 'Using message is deprecated %(foo)s' class TestIronicException(base.TestCase): def test____init__(self): expected = b'\xc3\xa9\xe0\xaf\xb2\xe0\xbe\x84' if six.PY3: expected = expected.decode('utf-8') message = chr(233) + chr(0x0bf2) + chr(3972) else: message = unichr(233) + unichr(0x0bf2) + unichr(3972) exc = exception.IronicException(message) self.assertEqual(expected, exc.__str__()) @mock.patch.object(exception.LOG, 'warning', autospec=True) def test_message_deprecated(self, mock_logw): exc = DeprecatedException(foo='spam') mock_logw.assert_called_once_with( "Exception class: %s Using the 'message' " "attribute in an exception has been deprecated. 
The exception " "class should be modified to use the '_msg_fmt' attribute.", 'DeprecatedException') self.assertEqual('Using message is deprecated spam', str(exc)) ironic-5.1.0/ironic/tests/unit/common/test_keystone.py0000664000567000056710000001771412674513466024337 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from keystoneclient import exceptions as ksexception import mock from ironic.common import exception from ironic.common import keystone from ironic.tests import base class FakeCatalog(object): def url_for(self, **kwargs): return 'fake-url' class FakeAccessInfo(object): def will_expire_soon(self): pass class FakeClient(object): def __init__(self, **kwargs): self.service_catalog = FakeCatalog() self.auth_ref = FakeAccessInfo() def has_service_catalog(self): return True class KeystoneTestCase(base.TestCase): def setUp(self): super(KeystoneTestCase, self).setUp() self.config(group='keystone_authtoken', auth_uri='http://127.0.0.1:9898/', admin_user='fake', admin_password='fake', admin_tenant_name='fake') self.config(group='keystone', region_name='fake') keystone._KS_CLIENT = None def test_failure_authorization(self): self.assertRaises(exception.KeystoneFailure, keystone.get_service_url) @mock.patch.object(FakeCatalog, 'url_for', autospec=True) @mock.patch('keystoneclient.v2_0.client.Client', autospec=True) def test_get_url(self, mock_ks, mock_uf): fake_url = 'http://127.0.0.1:6385' mock_uf.return_value = fake_url mock_ks.return_value = FakeClient() res = keystone.get_service_url() self.assertEqual(fake_url, res) @mock.patch.object(FakeCatalog, 'url_for', autospec=True) @mock.patch('keystoneclient.v2_0.client.Client', autospec=True) def test_url_not_found(self, mock_ks, mock_uf): mock_uf.side_effect = ksexception.EndpointNotFound mock_ks.return_value = FakeClient() self.assertRaises(exception.CatalogNotFound, keystone.get_service_url) @mock.patch.object(FakeClient, 'has_service_catalog', autospec=True) @mock.patch('keystoneclient.v2_0.client.Client', autospec=True) def test_no_catalog(self, mock_ks, mock_hsc): mock_hsc.return_value = False mock_ks.return_value = FakeClient() self.assertRaises(exception.KeystoneFailure, keystone.get_service_url) @mock.patch('keystoneclient.v2_0.client.Client', autospec=True) def test_unauthorized(self, mock_ks): mock_ks.side_effect = ksexception.Unauthorized self.assertRaises(exception.KeystoneUnauthorized, keystone.get_service_url) def test_get_service_url_fail_missing_auth_uri(self): self.config(group='keystone_authtoken', auth_uri=None) self.assertRaises(exception.KeystoneFailure, keystone.get_service_url) @mock.patch('keystoneclient.v2_0.client.Client', autospec=True) def test_get_service_url_versionless_v2(self, mock_ks): mock_ks.return_value = FakeClient() self.config(group='keystone_authtoken', auth_uri='http://127.0.0.1') expected_url = 'http://127.0.0.1/v2.0' keystone.get_service_url() mock_ks.assert_called_once_with(username='fake', password='fake', tenant_name='fake', region_name='fake', 
auth_url=expected_url) @mock.patch('keystoneclient.v3.client.Client', autospec=True) def test_get_service_url_versionless_v3(self, mock_ks): mock_ks.return_value = FakeClient() self.config(group='keystone_authtoken', auth_version='v3.0', auth_uri='http://127.0.0.1') expected_url = 'http://127.0.0.1/v3' keystone.get_service_url() mock_ks.assert_called_once_with(username='fake', password='fake', tenant_name='fake', region_name='fake', auth_url=expected_url) @mock.patch('keystoneclient.v2_0.client.Client', autospec=True) def test_get_service_url_version_override(self, mock_ks): mock_ks.return_value = FakeClient() self.config(group='keystone_authtoken', auth_uri='http://127.0.0.1/v2.0/') expected_url = 'http://127.0.0.1/v2.0' keystone.get_service_url() mock_ks.assert_called_once_with(username='fake', password='fake', tenant_name='fake', region_name='fake', auth_url=expected_url) @mock.patch('keystoneclient.v2_0.client.Client', autospec=True) def test_get_admin_auth_token(self, mock_ks): fake_client = FakeClient() fake_client.auth_token = '123456' mock_ks.return_value = fake_client self.assertEqual('123456', keystone.get_admin_auth_token()) @mock.patch('keystoneclient.v2_0.client.Client', autospec=True) def test_get_region_name_v2(self, mock_ks): mock_ks.return_value = FakeClient() self.config(group='keystone', region_name='fake_region') expected_url = 'http://127.0.0.1:9898/v2.0' expected_region = 'fake_region' keystone.get_service_url() mock_ks.assert_called_once_with(username='fake', password='fake', tenant_name='fake', region_name=expected_region, auth_url=expected_url) @mock.patch('keystoneclient.v3.client.Client', autospec=True) def test_get_region_name_v3(self, mock_ks): mock_ks.return_value = FakeClient() self.config(group='keystone', region_name='fake_region') self.config(group='keystone_authtoken', auth_version='v3.0') expected_url = 'http://127.0.0.1:9898/v3' expected_region = 'fake_region' keystone.get_service_url() mock_ks.assert_called_once_with(username='fake', password='fake', tenant_name='fake', region_name=expected_region, auth_url=expected_url) @mock.patch('keystoneclient.v2_0.client.Client', autospec=True) def test_cache_client_init(self, mock_ks): fake_client = FakeClient() mock_ks.return_value = fake_client self.assertEqual(fake_client, keystone._get_ksclient()) self.assertEqual(fake_client, keystone._KS_CLIENT) self.assertEqual(1, mock_ks.call_count) @mock.patch.object(FakeAccessInfo, 'will_expire_soon', autospec=True) @mock.patch('keystoneclient.v2_0.client.Client', autospec=True) def test_cache_client_cached(self, mock_ks, mock_expire): mock_expire.return_value = False fake_client = FakeClient() keystone._KS_CLIENT = fake_client self.assertEqual(fake_client, keystone._get_ksclient()) self.assertEqual(fake_client, keystone._KS_CLIENT) self.assertFalse(mock_ks.called) @mock.patch.object(FakeAccessInfo, 'will_expire_soon', autospec=True) @mock.patch('keystoneclient.v2_0.client.Client', autospec=True) def test_cache_client_expired(self, mock_ks, mock_expire): mock_expire.return_value = True fake_client = FakeClient() keystone._KS_CLIENT = fake_client new_client = FakeClient() mock_ks.return_value = new_client self.assertEqual(new_client, keystone._get_ksclient()) self.assertEqual(new_client, keystone._KS_CLIENT) self.assertEqual(1, mock_ks.call_count) ironic-5.1.0/ironic/tests/unit/common/test_glance_service.py0000664000567000056710000013442712674513466025450 0ustar jenkinsjenkins00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import time from glanceclient import client as glance_client from glanceclient import exc as glance_exc import mock from oslo_config import cfg from oslo_context import context from oslo_serialization import jsonutils from oslo_utils import uuidutils from six.moves.urllib import parse as urlparse import testtools from ironic.common import exception from ironic.common.glance_service import base_image_service from ironic.common.glance_service import service_utils from ironic.common.glance_service.v2 import image_service as glance_v2 from ironic.common import image_service as service from ironic.tests import base from ironic.tests.unit import stubs CONF = cfg.CONF class NullWriter(object): """Used to test ImageService.get which takes a writer object.""" def write(self, *arg, **kwargs): pass class TestGlanceSerializer(testtools.TestCase): def test_serialize(self): metadata = {'name': 'image1', 'is_public': True, 'foo': 'bar', 'properties': { 'prop1': 'propvalue1', 'mappings': [ {'virtual': 'aaa', 'device': 'bbb'}, {'virtual': 'xxx', 'device': 'yyy'}], 'block_device_mapping': [ {'virtual_device': 'fake', 'device_name': '/dev/fake'}, {'virtual_device': 'ephemeral0', 'device_name': '/dev/fake0'}]}} converted_expected = { 'name': 'image1', 'is_public': True, 'foo': 'bar', 'properties': {'prop1': 'propvalue1'} } converted = service_utils._convert(metadata, 'to') self.assertEqual(metadata, service_utils._convert(converted, 'from')) # Fields that rely on dict ordering can't be compared as text mappings = jsonutils.loads(converted['properties'] .pop('mappings')) self.assertEqual([{"device": "bbb", "virtual": "aaa"}, {"device": "yyy", "virtual": "xxx"}], mappings) bd_mapping = jsonutils.loads(converted['properties'] .pop('block_device_mapping')) self.assertEqual([{"virtual_device": "fake", "device_name": "/dev/fake"}, {"virtual_device": "ephemeral0", "device_name": "/dev/fake0"}], bd_mapping) # Compare the remaining self.assertEqual(converted_expected, converted) class TestGlanceImageService(base.TestCase): NOW_GLANCE_OLD_FORMAT = "2010-10-11T10:30:22" NOW_GLANCE_FORMAT = "2010-10-11T10:30:22.000000" NOW_DATETIME = datetime.datetime(2010, 10, 11, 10, 30, 22) def setUp(self): super(TestGlanceImageService, self).setUp() client = stubs.StubGlanceClient() self.context = context.RequestContext(auth_token=True) self.context.user_id = 'fake' self.context.project_id = 'fake' self.service = service.GlanceImageService(client, 1, self.context) self.config(glance_host='localhost', group='glance') try: self.config(auth_strategy='keystone', group='glance') except Exception: opts = [ cfg.StrOpt('auth_strategy', default='keystone'), ] CONF.register_opts(opts) return @staticmethod def _make_fixture(**kwargs): fixture = {'name': None, 'properties': {}, 'status': None, 'is_public': None} fixture.update(kwargs) return fixture @property def endpoint(self): # For glanceclient versions >= 0.13, the endpoint is located # under http_client (blueprint common-client-library-2) # 
I5addc38eb2e2dd0be91b566fda7c0d81787ffa75 # Test both options to keep backward compatibility if getattr(self.service.client, 'endpoint', None): endpoint = self.service.client.endpoint else: endpoint = self.service.client.http_client.endpoint return endpoint def _make_datetime_fixture(self): return self._make_fixture(created_at=self.NOW_GLANCE_FORMAT, updated_at=self.NOW_GLANCE_FORMAT, deleted_at=self.NOW_GLANCE_FORMAT) def test_create_with_instance_id(self): # Ensure instance_id is persisted as an image-property. fixture = {'name': 'test image', 'is_public': False, 'properties': {'instance_id': '42', 'user_id': 'fake'}} image_id = self.service.create(fixture)['id'] image_meta = self.service.show(image_id) expected = { 'id': image_id, 'name': 'test image', 'is_public': False, 'size': None, 'min_disk': None, 'min_ram': None, 'disk_format': None, 'container_format': None, 'checksum': None, 'created_at': self.NOW_DATETIME, 'updated_at': self.NOW_DATETIME, 'deleted_at': None, 'deleted': None, 'status': None, 'properties': {'instance_id': '42', 'user_id': 'fake'}, 'owner': None, } self.assertDictEqual(expected, image_meta) image_metas = self.service.detail() self.assertDictEqual(expected, image_metas[0]) def test_create_without_instance_id(self): """Test creating an image without an instance ID. Ensure we can create an image without having to specify an instance_id. Public images are an example of an image not tied to an instance. """ fixture = {'name': 'test image', 'is_public': False} image_id = self.service.create(fixture)['id'] expected = { 'id': image_id, 'name': 'test image', 'is_public': False, 'size': None, 'min_disk': None, 'min_ram': None, 'disk_format': None, 'container_format': None, 'checksum': None, 'created_at': self.NOW_DATETIME, 'updated_at': self.NOW_DATETIME, 'deleted_at': None, 'deleted': None, 'status': None, 'properties': {}, 'owner': None, } actual = self.service.show(image_id) self.assertDictEqual(expected, actual) def test_create(self): fixture = self._make_fixture(name='test image') num_images = len(self.service.detail()) image_id = self.service.create(fixture)['id'] self.assertIsNotNone(image_id) self.assertEqual( num_images + 1, len(self.service.detail())) def test_create_and_show_non_existing_image(self): fixture = self._make_fixture(name='test image') image_id = self.service.create(fixture)['id'] self.assertIsNotNone(image_id) self.assertRaises(exception.ImageNotFound, self.service.show, 'bad image id') def test_detail_private_image(self): fixture = self._make_fixture(name='test image') fixture['is_public'] = False properties = {'owner_id': 'proj1'} fixture['properties'] = properties self.service.create(fixture)['id'] proj = self.context.project_id self.context.project_id = 'proj1' image_metas = self.service.detail() self.context.project_id = proj self.assertEqual(1, len(image_metas)) self.assertEqual('test image', image_metas[0]['name']) self.assertFalse(image_metas[0]['is_public']) def test_detail_marker(self): fixtures = [] ids = [] for i in range(10): fixture = self._make_fixture(name='TestImage %d' % (i)) fixtures.append(fixture) ids.append(self.service.create(fixture)['id']) image_metas = self.service.detail(marker=ids[1]) self.assertEqual(8, len(image_metas)) i = 2 for meta in image_metas: expected = { 'id': ids[i], 'status': None, 'is_public': None, 'name': 'TestImage %d' % (i), 'properties': {}, 'size': None, 'min_disk': None, 'min_ram': None, 'disk_format': None, 'container_format': None, 'checksum': None, 'created_at': self.NOW_DATETIME, 'updated_at': 
self.NOW_DATETIME, 'deleted_at': None, 'deleted': None, 'owner': None, } self.assertDictEqual(expected, meta) i = i + 1 def test_detail_limit(self): fixtures = [] ids = [] for i in range(10): fixture = self._make_fixture(name='TestImage %d' % (i)) fixtures.append(fixture) ids.append(self.service.create(fixture)['id']) image_metas = self.service.detail(limit=5) self.assertEqual(5, len(image_metas)) def test_detail_default_limit(self): fixtures = [] ids = [] for i in range(10): fixture = self._make_fixture(name='TestImage %d' % (i)) fixtures.append(fixture) ids.append(self.service.create(fixture)['id']) image_metas = self.service.detail() for i, meta in enumerate(image_metas): self.assertEqual(meta['name'], 'TestImage %d' % (i)) def test_detail_marker_and_limit(self): fixtures = [] ids = [] for i in range(10): fixture = self._make_fixture(name='TestImage %d' % (i)) fixtures.append(fixture) ids.append(self.service.create(fixture)['id']) image_metas = self.service.detail(marker=ids[3], limit=5) self.assertEqual(5, len(image_metas)) i = 4 for meta in image_metas: expected = { 'id': ids[i], 'status': None, 'is_public': None, 'name': 'TestImage %d' % (i), 'properties': {}, 'size': None, 'min_disk': None, 'min_ram': None, 'disk_format': None, 'container_format': None, 'checksum': None, 'created_at': self.NOW_DATETIME, 'updated_at': self.NOW_DATETIME, 'deleted_at': None, 'deleted': None, 'owner': None, } self.assertDictEqual(expected, meta) i = i + 1 def test_detail_invalid_marker(self): fixtures = [] ids = [] for i in range(10): fixture = self._make_fixture(name='TestImage %d' % (i)) fixtures.append(fixture) ids.append(self.service.create(fixture)['id']) self.assertRaises(exception.Invalid, self.service.detail, marker='invalidmarker') def test_update(self): fixture = self._make_fixture(name='test image') image = self.service.create(fixture) image_id = image['id'] fixture['name'] = 'new image name' self.service.update(image_id, fixture) new_image_data = self.service.show(image_id) self.assertEqual('new image name', new_image_data['name']) def test_delete(self): fixture1 = self._make_fixture(name='test image 1') fixture2 = self._make_fixture(name='test image 2') fixtures = [fixture1, fixture2] num_images = len(self.service.detail()) self.assertEqual(0, num_images) ids = [] for fixture in fixtures: new_id = self.service.create(fixture)['id'] ids.append(new_id) num_images = len(self.service.detail()) self.assertEqual(2, num_images) self.service.delete(ids[0]) # When you delete an image from glance, it sets the status to DELETED # and doesn't actually remove the image. # Check the image is still there. num_images = len(self.service.detail()) self.assertEqual(2, num_images) # Check the image is marked as deleted. 
num_images = len([x for x in self.service.detail() if not x['deleted']]) self.assertEqual(1, num_images) def test_show_passes_through_to_client(self): fixture = self._make_fixture(name='image1', is_public=True) image_id = self.service.create(fixture)['id'] image_meta = self.service.show(image_id) expected = { 'id': image_id, 'name': 'image1', 'is_public': True, 'size': None, 'min_disk': None, 'min_ram': None, 'disk_format': None, 'container_format': None, 'checksum': None, 'created_at': self.NOW_DATETIME, 'updated_at': self.NOW_DATETIME, 'deleted_at': None, 'deleted': None, 'status': None, 'properties': {}, 'owner': None, } self.assertEqual(expected, image_meta) def test_show_raises_when_no_authtoken_in_the_context(self): fixture = self._make_fixture(name='image1', is_public=False, properties={'one': 'two'}) image_id = self.service.create(fixture)['id'] self.context.auth_token = False self.assertRaises(exception.ImageNotFound, self.service.show, image_id) def test_detail_passes_through_to_client(self): fixture = self._make_fixture(name='image10', is_public=True) image_id = self.service.create(fixture)['id'] image_metas = self.service.detail() expected = [ { 'id': image_id, 'name': 'image10', 'is_public': True, 'size': None, 'min_disk': None, 'min_ram': None, 'disk_format': None, 'container_format': None, 'checksum': None, 'created_at': self.NOW_DATETIME, 'updated_at': self.NOW_DATETIME, 'deleted_at': None, 'deleted': None, 'status': None, 'properties': {}, 'owner': None, }, ] self.assertEqual(expected, image_metas) def test_show_makes_datetimes(self): fixture = self._make_datetime_fixture() image_id = self.service.create(fixture)['id'] image_meta = self.service.show(image_id) self.assertEqual(self.NOW_DATETIME, image_meta['created_at']) self.assertEqual(self.NOW_DATETIME, image_meta['updated_at']) def test_detail_makes_datetimes(self): fixture = self._make_datetime_fixture() self.service.create(fixture) image_meta = self.service.detail()[0] self.assertEqual(self.NOW_DATETIME, image_meta['created_at']) self.assertEqual(self.NOW_DATETIME, image_meta['updated_at']) @mock.patch.object(time, 'sleep', autospec=True) def test_download_with_retries(self, mock_sleep): tries = [0] class MyGlanceStubClient(stubs.StubGlanceClient): """A client that fails the first time, then succeeds.""" def get(self, image_id): if tries[0] == 0: tries[0] = 1 raise glance_exc.ServiceUnavailable('') else: return {} stub_client = MyGlanceStubClient() stub_context = context.RequestContext(auth_token=True) stub_context.user_id = 'fake' stub_context.project_id = 'fake' stub_service = service.GlanceImageService(stub_client, 1, stub_context) image_id = 1 # doesn't matter writer = NullWriter() # When retries are disabled, we should get an exception self.config(glance_num_retries=0, group='glance') self.assertRaises(exception.GlanceConnectionFailed, stub_service.download, image_id, writer) # Now lets enable retries. No exception should happen now. 
tries = [0] self.config(glance_num_retries=1, group='glance') stub_service.download(image_id, writer) self.assertTrue(mock_sleep.called) @mock.patch('sendfile.sendfile', autospec=True) @mock.patch('os.path.getsize', autospec=True) @mock.patch('%s.open' % __name__, new=mock.mock_open(), create=True) def test_download_file_url(self, mock_getsize, mock_sendfile): # NOTE: only in v2 API class MyGlanceStubClient(stubs.StubGlanceClient): """A client that returns a file url.""" s_tmpfname = '/whatever/source' def get(self, image_id): return type('GlanceTestDirectUrlMeta', (object,), {'direct_url': 'file://%s' % self.s_tmpfname}) stub_context = context.RequestContext(auth_token=True) stub_context.user_id = 'fake' stub_context.project_id = 'fake' stub_client = MyGlanceStubClient() stub_service = service.GlanceImageService(stub_client, context=stub_context, version=2) image_id = 1 # doesn't matter self.config(allowed_direct_url_schemes=['file'], group='glance') # patching open in base_image_service module namespace # to make call-spec assertions with mock.patch('ironic.common.glance_service.base_image_service.open', new=mock.mock_open(), create=True) as mock_ironic_open: with open('/whatever/target', 'w') as mock_target_fd: stub_service.download(image_id, mock_target_fd) # assert the image data was neither read nor written # but rather sendfiled mock_ironic_open.assert_called_once_with(MyGlanceStubClient.s_tmpfname, 'r') mock_source_fd = mock_ironic_open() self.assertFalse(mock_source_fd.read.called) self.assertFalse(mock_target_fd.write.called) mock_sendfile.assert_called_once_with( mock_target_fd.fileno(), mock_source_fd.fileno(), 0, mock_getsize(MyGlanceStubClient.s_tmpfname)) def test_client_forbidden_converts_to_imagenotauthed(self): class MyGlanceStubClient(stubs.StubGlanceClient): """A client that raises a Forbidden exception.""" def get(self, image_id): raise glance_exc.Forbidden(image_id) stub_client = MyGlanceStubClient() stub_context = context.RequestContext(auth_token=True) stub_context.user_id = 'fake' stub_context.project_id = 'fake' stub_service = service.GlanceImageService(stub_client, 1, stub_context) image_id = 1 # doesn't matter writer = NullWriter() self.assertRaises(exception.ImageNotAuthorized, stub_service.download, image_id, writer) def test_client_httpforbidden_converts_to_imagenotauthed(self): class MyGlanceStubClient(stubs.StubGlanceClient): """A client that raises an HTTPForbidden exception.""" def get(self, image_id): raise glance_exc.HTTPForbidden(image_id) stub_client = MyGlanceStubClient() stub_context = context.RequestContext(auth_token=True) stub_context.user_id = 'fake' stub_context.project_id = 'fake' stub_service = service.GlanceImageService(stub_client, 1, stub_context) image_id = 1 # doesn't matter writer = NullWriter() self.assertRaises(exception.ImageNotAuthorized, stub_service.download, image_id, writer) def test_client_notfound_converts_to_imagenotfound(self): class MyGlanceStubClient(stubs.StubGlanceClient): """A client that raises a NotFound exception.""" def get(self, image_id): raise glance_exc.NotFound(image_id) stub_client = MyGlanceStubClient() stub_context = context.RequestContext(auth_token=True) stub_context.user_id = 'fake' stub_context.project_id = 'fake' stub_service = service.GlanceImageService(stub_client, 1, stub_context) image_id = 1 # doesn't matter writer = NullWriter() self.assertRaises(exception.ImageNotFound, stub_service.download, image_id, writer) def test_client_httpnotfound_converts_to_imagenotfound(self): class
MyGlanceStubClient(stubs.StubGlanceClient): """A client that raises a HTTPNotFound exception.""" def get(self, image_id): raise glance_exc.HTTPNotFound(image_id) stub_client = MyGlanceStubClient() stub_context = context.RequestContext(auth_token=True) stub_context.user_id = 'fake' stub_context.project_id = 'fake' stub_service = service.GlanceImageService(stub_client, 1, stub_context) image_id = 1 # doesn't matter writer = NullWriter() self.assertRaises(exception.ImageNotFound, stub_service.download, image_id, writer) def test_check_image_service_client_set(self): def func(self): return True self.service.client = True wrapped_func = base_image_service.check_image_service(func) self.assertTrue(wrapped_func(self.service)) @mock.patch.object(glance_client, 'Client', autospec=True) def test_check_image_service__no_client_set_http(self, mock_gclient): def func(service, *args, **kwargs): return (self.endpoint, args, kwargs) endpoint = 'http://123.123.123.123:9292' mock_gclient.return_value.endpoint = endpoint self.service.client = None params = {'image_href': '%s/image_uuid' % endpoint} self.config(auth_strategy='keystone', group='glance') wrapped_func = base_image_service.check_image_service(func) self.assertEqual((endpoint, (), params), wrapped_func(self.service, **params)) mock_gclient.assert_called_once_with( 1, endpoint, **{'insecure': CONF.glance.glance_api_insecure, 'token': self.context.auth_token}) @mock.patch.object(glance_client, 'Client', autospec=True) def test_get_image_service__no_client_set_https_insecure(self, mock_gclient): def func(service, *args, **kwargs): return (self.endpoint, args, kwargs) endpoint = 'https://123.123.123.123:9292' mock_gclient.return_value.endpoint = endpoint self.service.client = None params = {'image_href': '%s/image_uuid' % endpoint} self.config(auth_strategy='keystone', group='glance') self.config(glance_api_insecure=True, group='glance') wrapped_func = base_image_service.check_image_service(func) self.assertEqual((endpoint, (), params), wrapped_func(self.service, **params)) mock_gclient.assert_called_once_with( 1, endpoint, **{'insecure': CONF.glance.glance_api_insecure, 'token': self.context.auth_token}) @mock.patch.object(glance_client, 'Client', autospec=True) def test_get_image_service__no_client_set_https_secure(self, mock_gclient): def func(service, *args, **kwargs): return (self.endpoint, args, kwargs) endpoint = 'https://123.123.123.123:9292' mock_gclient.return_value.endpoint = endpoint self.service.client = None params = {'image_href': '%s/image_uuid' % endpoint} self.config(auth_strategy='keystone', group='glance') self.config(glance_api_insecure=False, group='glance') self.config(glance_cafile='/path/to/certfile', group='glance') wrapped_func = base_image_service.check_image_service(func) self.assertEqual((endpoint, (), params), wrapped_func(self.service, **params)) mock_gclient.assert_called_once_with( 1, endpoint, **{'cacert': CONF.glance.glance_cafile, 'insecure': CONF.glance.glance_api_insecure, 'token': self.context.auth_token}) def _create_failing_glance_client(info): class MyGlanceStubClient(stubs.StubGlanceClient): """A client that fails the first time, then succeeds.""" def get(self, image_id): info['num_calls'] += 1 if info['num_calls'] == 1: raise glance_exc.ServiceUnavailable('') return {} return MyGlanceStubClient() class TestGlanceSwiftTempURL(base.TestCase): def setUp(self): super(TestGlanceSwiftTempURL, self).setUp() client = stubs.StubGlanceClient() self.context = context.RequestContext() self.context.auth_token = 'fake' 
self.service = service.GlanceImageService(client, 2, self.context) self.config(swift_temp_url_key='correcthorsebatterystaple', group='glance') self.config(swift_endpoint_url='https://swift.example.com', group='glance') self.config(swift_account='AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30', group='glance') self.config(swift_api_version='v1', group='glance') self.config(swift_container='glance', group='glance') self.config(swift_temp_url_duration=1200, group='glance') self.config(swift_store_multiple_containers_seed=0, group='glance') self.config() self.fake_image = { 'id': '757274c4-2856-4bd2-bb20-9a4a231e187b' } @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_swift_temp_url(self, tempurl_mock): path = ('/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30' '/glance' '/757274c4-2856-4bd2-bb20-9a4a231e187b') tempurl_mock.return_value = ( path + '?temp_url_sig=hmacsig&temp_url_expires=1400001200') self.service._validate_temp_url_config = mock.Mock() temp_url = self.service.swift_temp_url(image_info=self.fake_image) self.assertEqual(CONF.glance.swift_endpoint_url + tempurl_mock.return_value, temp_url) tempurl_mock.assert_called_with( path=path, seconds=CONF.glance.swift_temp_url_duration, key=CONF.glance.swift_temp_url_key, method='GET') @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_swift_temp_url_invalid_image_info(self, tempurl_mock): self.service._validate_temp_url_config = mock.Mock() image_info = {} self.assertRaises(exception.ImageUnacceptable, self.service.swift_temp_url, image_info) image_info = {'id': 'not an id'} self.assertRaises(exception.ImageUnacceptable, self.service.swift_temp_url, image_info) self.assertFalse(tempurl_mock.called) @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_swift_temp_url_radosgw(self, tempurl_mock): self.config(temp_url_endpoint_type='radosgw', group='glance') path = ('/v1' '/glance' '/757274c4-2856-4bd2-bb20-9a4a231e187b') tempurl_mock.return_value = ( path + '?temp_url_sig=hmacsig&temp_url_expires=1400001200') self.service._validate_temp_url_config = mock.Mock() temp_url = self.service.swift_temp_url(image_info=self.fake_image) self.assertEqual( (urlparse.urljoin(CONF.glance.swift_endpoint_url, 'swift') + tempurl_mock.return_value), temp_url) tempurl_mock.assert_called_with( path=path, seconds=CONF.glance.swift_temp_url_duration, key=CONF.glance.swift_temp_url_key, method='GET') @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_swift_temp_url_radosgw_endpoint_with_swift(self, tempurl_mock): self.config(swift_endpoint_url='https://swift.radosgw.com/swift', group='glance') self.config(temp_url_endpoint_type='radosgw', group='glance') path = ('/v1' '/glance' '/757274c4-2856-4bd2-bb20-9a4a231e187b') tempurl_mock.return_value = ( path + '?temp_url_sig=hmacsig&temp_url_expires=1400001200') self.service._validate_temp_url_config = mock.Mock() temp_url = self.service.swift_temp_url(image_info=self.fake_image) self.assertEqual( CONF.glance.swift_endpoint_url + tempurl_mock.return_value, temp_url) tempurl_mock.assert_called_with( path=path, seconds=CONF.glance.swift_temp_url_duration, key=CONF.glance.swift_temp_url_key, method='GET') @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_swift_temp_url_radosgw_endpoint_invalid(self, tempurl_mock): self.config(swift_endpoint_url='https://swift.radosgw.com/eggs/', group='glance') self.config(temp_url_endpoint_type='radosgw', group='glance') self.service._validate_temp_url_config = mock.Mock() 
self.assertRaises(exception.InvalidParameterValue, self.service.swift_temp_url, self.fake_image) self.assertFalse(tempurl_mock.called) @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_swift_temp_url_multiple_containers(self, tempurl_mock): self.config(swift_store_multiple_containers_seed=8, group='glance') path = ('/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30' '/glance_757274c4' '/757274c4-2856-4bd2-bb20-9a4a231e187b') tempurl_mock.return_value = ( path + '?temp_url_sig=hmacsig&temp_url_expires=1400001200') self.service._validate_temp_url_config = mock.Mock() temp_url = self.service.swift_temp_url(image_info=self.fake_image) self.assertEqual(CONF.glance.swift_endpoint_url + tempurl_mock.return_value, temp_url) tempurl_mock.assert_called_with( path=path, seconds=CONF.glance.swift_temp_url_duration, key=CONF.glance.swift_temp_url_key, method='GET') def test_swift_temp_url_url_bad_no_info(self): self.assertRaises(exception.ImageUnacceptable, self.service.swift_temp_url, image_info={}) def test__validate_temp_url_config(self): self.service._validate_temp_url_config() def test__validate_temp_url_key_exception(self): self.config(swift_temp_url_key=None, group='glance') self.assertRaises(exception.MissingParameterValue, self.service._validate_temp_url_config) def test__validate_temp_url_endpoint_config_exception(self): self.config(swift_endpoint_url=None, group='glance') self.assertRaises(exception.MissingParameterValue, self.service._validate_temp_url_config) def test__validate_temp_url_account_exception(self): self.config(swift_account=None, group='glance') self.assertRaises(exception.MissingParameterValue, self.service._validate_temp_url_config) def test__validate_temp_url_no_account_exception_radosgw(self): self.config(swift_account=None, group='glance') self.config(temp_url_endpoint_type='radosgw', group='glance') self.service._validate_temp_url_config() def test__validate_temp_url_endpoint_less_than_download_delay(self): self.config(swift_temp_url_expected_download_start_delay=1000, group='glance') self.config(swift_temp_url_duration=15, group='glance') self.assertRaises(exception.InvalidParameterValue, self.service._validate_temp_url_config) def test__validate_temp_url_multiple_containers(self): self.config(swift_store_multiple_containers_seed=-1, group='glance') self.assertRaises(exception.InvalidParameterValue, self.service._validate_temp_url_config) self.config(swift_store_multiple_containers_seed=None, group='glance') self.assertRaises(exception.InvalidParameterValue, self.service._validate_temp_url_config) self.config(swift_store_multiple_containers_seed=33, group='glance') self.assertRaises(exception.InvalidParameterValue, self.service._validate_temp_url_config) class TestSwiftTempUrlCache(base.TestCase): def setUp(self): super(TestSwiftTempUrlCache, self).setUp() client = stubs.StubGlanceClient() self.context = context.RequestContext() self.context.auth_token = 'fake' self.config(swift_temp_url_expected_download_start_delay=100, group='glance') self.config(swift_temp_url_key='correcthorsebatterystaple', group='glance') self.config(swift_endpoint_url='https://swift.example.com', group='glance') self.config(swift_account='AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30', group='glance') self.config(swift_api_version='v1', group='glance') self.config(swift_container='glance', group='glance') self.config(swift_temp_url_duration=1200, group='glance') self.config(swift_temp_url_cache_enabled=True, group='glance') self.config(swift_store_multiple_containers_seed=0, 
group='glance') self.glance_service = service.GlanceImageService(client, version=2, context=self.context) @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_add_items_to_cache(self, tempurl_mock): fake_image = { 'id': uuidutils.generate_uuid() } path = ('/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30' '/glance' '/%s' % fake_image['id']) exp_time = int(time.time()) + 1200 tempurl_mock.return_value = ( path + '?temp_url_sig=hmacsig&temp_url_expires=%s' % exp_time) cleanup_mock = mock.Mock() self.glance_service._remove_expired_items_from_cache = cleanup_mock self.glance_service._validate_temp_url_config = mock.Mock() temp_url = self.glance_service.swift_temp_url( image_info=fake_image) self.assertEqual(CONF.glance.swift_endpoint_url + tempurl_mock.return_value, temp_url) cleanup_mock.assert_called_once_with() tempurl_mock.assert_called_with( path=path, seconds=CONF.glance.swift_temp_url_duration, key=CONF.glance.swift_temp_url_key, method='GET') self.assertEqual((temp_url, exp_time), self.glance_service._cache[fake_image['id']]) @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_return_cached_tempurl(self, tempurl_mock): fake_image = { 'id': uuidutils.generate_uuid() } exp_time = int(time.time()) + 1200 temp_url = CONF.glance.swift_endpoint_url + ( '/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30' '/glance' '/%(uuid)s' '?temp_url_sig=hmacsig&temp_url_expires=%(exp_time)s' % {'uuid': fake_image['id'], 'exp_time': exp_time} ) self.glance_service._cache[fake_image['id']] = ( glance_v2.TempUrlCacheElement(url=temp_url, url_expires_at=exp_time) ) cleanup_mock = mock.Mock() self.glance_service._remove_expired_items_from_cache = cleanup_mock self.glance_service._validate_temp_url_config = mock.Mock() self.assertEqual( temp_url, self.glance_service.swift_temp_url(image_info=fake_image) ) cleanup_mock.assert_called_once_with() self.assertFalse(tempurl_mock.called) @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_do_not_return_expired_tempurls(self, tempurl_mock): fake_image = { 'id': uuidutils.generate_uuid() } old_exp_time = int(time.time()) + 99 path = ( '/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30' '/glance' '/%s' % fake_image['id'] ) query = '?temp_url_sig=hmacsig&temp_url_expires=%s' self.glance_service._cache[fake_image['id']] = ( glance_v2.TempUrlCacheElement( url=(CONF.glance.swift_endpoint_url + path + query % old_exp_time), url_expires_at=old_exp_time) ) new_exp_time = int(time.time()) + 1200 tempurl_mock.return_value = ( path + query % new_exp_time) self.glance_service._validate_temp_url_config = mock.Mock() fresh_temp_url = self.glance_service.swift_temp_url( image_info=fake_image) self.assertEqual(CONF.glance.swift_endpoint_url + tempurl_mock.return_value, fresh_temp_url) tempurl_mock.assert_called_with( path=path, seconds=CONF.glance.swift_temp_url_duration, key=CONF.glance.swift_temp_url_key, method='GET') self.assertEqual( (fresh_temp_url, new_exp_time), self.glance_service._cache[fake_image['id']]) def test_remove_expired_items_from_cache(self): expired_items = { uuidutils.generate_uuid(): glance_v2.TempUrlCacheElement( 'fake-url-1', int(time.time()) - 10 ), uuidutils.generate_uuid(): glance_v2.TempUrlCacheElement( 'fake-url-2', int(time.time()) + 90 # Agent won't be able to start in time ) } valid_items = { uuidutils.generate_uuid(): glance_v2.TempUrlCacheElement( 'fake-url-3', int(time.time()) + 1000 ), uuidutils.generate_uuid(): glance_v2.TempUrlCacheElement( 'fake-url-4', int(time.time()) + 2000 ) } 
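# NOTE: the expiry behaviour pinned down by this test case can be summarised
# by a minimal, self-contained sketch. This is not the real GlanceImageService
# code: the namedtuple, the helper name and the download-start-delay margin
# are modelled on the fixtures in this test case
# (swift_temp_url_expected_download_start_delay=100).
import collections
import time

TempUrlCacheElementSketch = collections.namedtuple(
    'TempUrlCacheElementSketch', ['url', 'url_expires_at'])

def remove_expired_items_sketch(cache, download_start_delay=100):
    # An entry is dropped not only once it has already expired, but also when
    # it would expire before a deploy agent could plausibly start the
    # download, hence the extra delay margin.
    deadline = int(time.time()) + download_start_delay
    for image_id in list(cache):
        if cache[image_id].url_expires_at <= deadline:
            del cache[image_id]

cache = {
    'img-1': TempUrlCacheElementSketch('fake-url-1', int(time.time()) - 10),
    'img-2': TempUrlCacheElementSketch('fake-url-2', int(time.time()) + 2000),
}
remove_expired_items_sketch(cache)
assert list(cache) == ['img-2']  # only the entry with ample lifetime survives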
self.glance_service._cache.update(expired_items) self.glance_service._cache.update(valid_items) self.glance_service._remove_expired_items_from_cache() for uuid in valid_items: self.assertEqual(valid_items[uuid], self.glance_service._cache[uuid]) for uuid in expired_items: self.assertNotIn(uuid, self.glance_service._cache) @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def _test__generate_temp_url(self, fake_image, tempurl_mock): path = ('/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30' '/glance' '/%s' % fake_image['id']) tempurl_mock.return_value = ( path + '?temp_url_sig=hmacsig&temp_url_expires=1400001200') self.glance_service._validate_temp_url_config = mock.Mock() temp_url = self.glance_service._generate_temp_url( path, seconds=CONF.glance.swift_temp_url_duration, key=CONF.glance.swift_temp_url_key, method='GET', endpoint=CONF.glance.swift_endpoint_url, image_id=fake_image['id'] ) self.assertEqual(CONF.glance.swift_endpoint_url + tempurl_mock.return_value, temp_url) tempurl_mock.assert_called_with( path=path, seconds=CONF.glance.swift_temp_url_duration, key=CONF.glance.swift_temp_url_key, method='GET') def test_swift_temp_url_cache_enabled(self): fake_image = { 'id': uuidutils.generate_uuid() } rm_expired = mock.Mock() self.glance_service._remove_expired_items_from_cache = rm_expired self._test__generate_temp_url(fake_image) rm_expired.assert_called_once_with() self.assertIn(fake_image['id'], self.glance_service._cache) def test_swift_temp_url_cache_disabled(self): self.config(swift_temp_url_cache_enabled=False, group='glance') fake_image = { 'id': uuidutils.generate_uuid() } rm_expired = mock.Mock() self.glance_service._remove_expired_items_from_cache = rm_expired self._test__generate_temp_url(fake_image) self.assertFalse(rm_expired.called) self.assertNotIn(fake_image['id'], self.glance_service._cache) class TestGlanceUrl(base.TestCase): def test_generate_glance_http_url(self): self.config(glance_host="127.0.0.1", group='glance') generated_url = service_utils.generate_glance_url() http_url = "http://%s:%d" % (CONF.glance.glance_host, CONF.glance.glance_port) self.assertEqual(http_url, generated_url) def test_generate_glance_https_url(self): self.config(glance_protocol="https", group='glance') self.config(glance_host="127.0.0.1", group='glance') generated_url = service_utils.generate_glance_url() https_url = "https://%s:%d" % (CONF.glance.glance_host, CONF.glance.glance_port) self.assertEqual(https_url, generated_url) class TestServiceUtils(base.TestCase): def test_parse_image_ref_no_ssl(self): image_href = u'http://127.0.0.1:9292/image_path/'\ u'image_\u00F9\u00FA\u00EE\u0111' parsed_href = service_utils.parse_image_ref(image_href) self.assertEqual((u'image_\u00F9\u00FA\u00EE\u0111', '127.0.0.1', 9292, False), parsed_href) def test_parse_image_ref_ssl(self): image_href = 'https://127.0.0.1:9292/image_path/'\ u'image_\u00F9\u00FA\u00EE\u0111' parsed_href = service_utils.parse_image_ref(image_href) self.assertEqual((u'image_\u00F9\u00FA\u00EE\u0111', '127.0.0.1', 9292, True), parsed_href) def test_generate_image_url(self): image_href = u'image_\u00F9\u00FA\u00EE\u0111' self.config(glance_host='123.123.123.123', group='glance') self.config(glance_port=1234, group='glance') self.config(glance_protocol='https', group='glance') generated_url = service_utils.generate_image_url(image_href) self.assertEqual('https://123.123.123.123:1234/images/' u'image_\u00F9\u00FA\u00EE\u0111', generated_url) def test_is_glance_image(self): image_href = u'uui\u0111' 
self.assertFalse(service_utils.is_glance_image(image_href)) image_href = u'733d1c44-a2ea-414b-aca7-69decf20d810' self.assertTrue(service_utils.is_glance_image(image_href)) image_href = u'glance://uui\u0111' self.assertTrue(service_utils.is_glance_image(image_href)) image_href = 'http://aaa/bbb' self.assertFalse(service_utils.is_glance_image(image_href)) image_href = None self.assertFalse(service_utils.is_glance_image(image_href)) def test_is_image_href_ordinary_file_name_true(self): image = u"\u0111eploy.iso" result = service_utils.is_image_href_ordinary_file_name(image) self.assertTrue(result) def test_is_image_href_ordinary_file_name_false(self): for image in ('733d1c44-a2ea-414b-aca7-69decf20d810', u'glance://\u0111eploy_iso', u'http://\u0111eploy_iso', u'https://\u0111eploy_iso', u'file://\u0111eploy_iso',): result = service_utils.is_image_href_ordinary_file_name(image) self.assertFalse(result) class TestGlanceAPIServers(base.TestCase): def setUp(self): super(TestGlanceAPIServers, self).setUp() service_utils._GLANCE_API_SERVER = None def test__get_api_servers_default(self): host, port, use_ssl = service_utils._get_api_server() self.assertEqual(CONF.glance.glance_host, host) self.assertEqual(CONF.glance.glance_port, port) self.assertEqual(CONF.glance.glance_protocol == 'https', use_ssl) def test__get_api_servers_one(self): CONF.set_override('glance_api_servers', ['https://10.0.0.1:9293'], 'glance') s1 = service_utils._get_api_server() s2 = service_utils._get_api_server() self.assertEqual(('10.0.0.1', 9293, True), s1) # Only one server, should always get the same one self.assertEqual(s1, s2) def test__get_api_servers_two(self): CONF.set_override('glance_api_servers', ['http://10.0.0.1:9293', 'http://10.0.0.2:9294'], 'glance') s1 = service_utils._get_api_server() s2 = service_utils._get_api_server() s3 = service_utils._get_api_server() self.assertNotEqual(s1, s2) # 2 servers, so cycles to the first again self.assertEqual(s1, s3) ironic-5.1.0/ironic/tests/unit/common/test_hash_ring.py # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
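# NOTE: for orientation, a toy illustration of the md5-partitioned consistent
# hash ring exercised by the tests below. This is not ironic.common.hash_ring's
# actual API: the TinyRing name and its 'exponent' knob are illustrative
# stand-ins for the hash_partition_exponent option.
import bisect
import hashlib

class TinyRing(object):
    def __init__(self, hosts, exponent=5):
        # Each host claims 2**exponent points on the ring; many small claims
        # per host are what keep rebalancing cheap when hosts join or leave.
        self._point2host = {}
        for host in hosts:
            for i in range(2 ** exponent):
                digest = hashlib.md5(('%s-%d' % (host, i)).encode()).hexdigest()
                self._point2host[int(digest, 16)] = host
        self._points = sorted(self._point2host)

    def get_host(self, item):
        # Walk clockwise from the item's hash to the next host point.
        digest = hashlib.md5(item.encode()).hexdigest()
        index = bisect.bisect(self._points, int(digest, 16)) % len(self._points)
        return self._point2host[self._points[index]]

assert TinyRing(['foo', 'bar', 'baz']).get_host('fake') in ('foo', 'bar', 'baz')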
import hashlib import time import mock from oslo_config import cfg from testtools import matchers from ironic.common import exception from ironic.common import hash_ring from ironic.tests import base from ironic.tests.unit.db import base as db_base CONF = cfg.CONF class HashRingTestCase(base.TestCase): # NOTE(deva): the mapping used in these tests is as follows: # if hosts = [foo, bar]: # fake -> foo, bar # if hosts = [foo, bar, baz]: # fake -> foo, bar, baz # fake-again -> bar, baz, foo @mock.patch.object(hashlib, 'md5', autospec=True) def test__hash2int_returns_int(self, mock_md5): CONF.set_override('hash_partition_exponent', 0) r1 = 32 * 'a' r2 = 32 * 'b' mock_md5.return_value.hexdigest.side_effect = [r1, r2] hosts = ['foo', 'bar'] replicas = 1 ring = hash_ring.HashRing(hosts, replicas=replicas) self.assertIn(int(r1, 16), ring._host_hashes) self.assertIn(int(r2, 16), ring._host_hashes) def test_create_ring(self): hosts = ['foo', 'bar'] replicas = 2 ring = hash_ring.HashRing(hosts, replicas=replicas) self.assertEqual(set(hosts), ring.hosts) self.assertEqual(replicas, ring.replicas) def test_create_with_different_partition_counts(self): hosts = ['foo', 'bar'] CONF.set_override('hash_partition_exponent', 2) ring = hash_ring.HashRing(hosts) self.assertEqual(2 ** 2 * 2, len(ring._partitions)) CONF.set_override('hash_partition_exponent', 8) ring = hash_ring.HashRing(hosts) self.assertEqual(2 ** 8 * 2, len(ring._partitions)) CONF.set_override('hash_partition_exponent', 16) ring = hash_ring.HashRing(hosts) self.assertEqual(2 ** 16 * 2, len(ring._partitions)) def test_distribution_one_replica(self): hosts = ['foo', 'bar', 'baz'] ring = hash_ring.HashRing(hosts, replicas=1) fake_1_hosts = ring.get_hosts('fake') fake_2_hosts = ring.get_hosts('fake-again') # We should have one host for each thing self.assertThat(fake_1_hosts, matchers.HasLength(1)) self.assertThat(fake_2_hosts, matchers.HasLength(1)) # And they must not be the same answers even on this simple data. self.assertNotEqual(fake_1_hosts, fake_2_hosts) def test_distribution_two_replicas(self): hosts = ['foo', 'bar', 'baz'] ring = hash_ring.HashRing(hosts, replicas=2) fake_1_hosts = ring.get_hosts('fake') fake_2_hosts = ring.get_hosts('fake-again') # We should have two hosts for each thing self.assertThat(fake_1_hosts, matchers.HasLength(2)) self.assertThat(fake_2_hosts, matchers.HasLength(2)) # And they must not be the same answers even on this simple data # because if they were we'd be making the active replica a hot spot. self.assertNotEqual(fake_1_hosts, fake_2_hosts) def test_distribution_three_replicas(self): hosts = ['foo', 'bar', 'baz'] ring = hash_ring.HashRing(hosts, replicas=3) fake_1_hosts = ring.get_hosts('fake') fake_2_hosts = ring.get_hosts('fake-again') # We should have three hosts for each thing self.assertThat(fake_1_hosts, matchers.HasLength(3)) self.assertThat(fake_2_hosts, matchers.HasLength(3)) # And they must not be the same answers even on this simple data # because if they were we'd be making the active replica a hot spot.
self.assertNotEqual(fake_1_hosts, fake_2_hosts) self.assertNotEqual(fake_1_hosts[0], fake_2_hosts[0]) def test_ignore_hosts(self): hosts = ['foo', 'bar', 'baz'] ring = hash_ring.HashRing(hosts, replicas=1) equals_bar_or_baz = matchers.MatchesAny( matchers.Equals(['bar']), matchers.Equals(['baz'])) self.assertThat( ring.get_hosts('fake', ignore_hosts=['foo']), equals_bar_or_baz) self.assertThat( ring.get_hosts('fake', ignore_hosts=['foo', 'bar']), equals_bar_or_baz) self.assertEqual([], ring.get_hosts('fake', ignore_hosts=hosts)) def test_ignore_hosts_with_replicas(self): hosts = ['foo', 'bar', 'baz'] ring = hash_ring.HashRing(hosts, replicas=2) self.assertEqual( set(['bar', 'baz']), set(ring.get_hosts('fake', ignore_hosts=['foo']))) self.assertEqual( set(['baz']), set(ring.get_hosts('fake', ignore_hosts=['foo', 'bar']))) self.assertEqual( set(['baz', 'foo']), set(ring.get_hosts('fake-again', ignore_hosts=['bar']))) self.assertEqual( set(['foo']), set(ring.get_hosts('fake-again', ignore_hosts=['bar', 'baz']))) self.assertEqual([], ring.get_hosts('fake', ignore_hosts=hosts)) def _compare_rings(self, nodes, conductors, ring, new_conductors, new_ring): delta = {} mapping = dict((node, ring.get_hosts(node)[0]) for node in nodes) new_mapping = dict( (node, new_ring.get_hosts(node)[0]) for node in nodes) for key, old in mapping.items(): new = new_mapping.get(key, None) if new != old: delta[key] = (old, new) return delta def test_rebalance_stability_join(self): num_conductors = 10 num_nodes = 10000 # Adding 1 conductor to a set of N should move 1/(N+1) of all nodes # Eg, for a cluster of 10 conductors, adding one should move 1/11, or 9% # We allow for 1/N to allow for rounding in tests. redistribution_factor = 1.0 / num_conductors nodes = [str(x) for x in range(num_nodes)] conductors = [str(x) for x in range(num_conductors)] new_conductors = conductors + ['new'] delta = self._compare_rings( nodes, conductors, hash_ring.HashRing(conductors), new_conductors, hash_ring.HashRing(new_conductors)) self.assertLess(len(delta), num_nodes * redistribution_factor) def test_rebalance_stability_leave(self): num_conductors = 10 num_nodes = 10000 # Removing 1 conductor from a set of N should move 1/N of all nodes # Eg, for a cluster of 10 conductors, removing one should move 1/10, or 10% # We allow for 1/(N-1) to allow for rounding in tests.
redistribution_factor = 1.0 / (num_conductors - 1) nodes = [str(x) for x in range(num_nodes)] conductors = [str(x) for x in range(num_conductors)] new_conductors = conductors[:] new_conductors.pop() delta = self._compare_rings( nodes, conductors, hash_ring.HashRing(conductors), new_conductors, hash_ring.HashRing(new_conductors)) self.assertLess(len(delta), num_nodes * redistribution_factor) def test_more_replicas_than_hosts(self): hosts = ['foo', 'bar'] ring = hash_ring.HashRing(hosts, replicas=10) self.assertEqual(set(hosts), set(ring.get_hosts('fake'))) def test_ignore_non_existent_host(self): hosts = ['foo', 'bar'] ring = hash_ring.HashRing(hosts, replicas=1) self.assertEqual(['foo'], ring.get_hosts('fake', ignore_hosts=['baz'])) def test_create_ring_invalid_data(self): hosts = None self.assertRaises(exception.Invalid, hash_ring.HashRing, hosts) def test_get_hosts_invalid_data(self): hosts = ['foo', 'bar'] ring = hash_ring.HashRing(hosts) self.assertRaises(exception.Invalid, ring.get_hosts, None) class HashRingManagerTestCase(db_base.DbTestCase): def setUp(self): super(HashRingManagerTestCase, self).setUp() self.ring_manager = hash_ring.HashRingManager() def register_conductors(self): self.dbapi.register_conductor({ 'hostname': 'host1', 'drivers': ['driver1', 'driver2'], }) self.dbapi.register_conductor({ 'hostname': 'host2', 'drivers': ['driver1'], }) def test_hash_ring_manager_get_ring_success(self): self.register_conductors() ring = self.ring_manager['driver1'] self.assertEqual(sorted(['host1', 'host2']), sorted(ring.hosts)) def test_hash_ring_manager_driver_not_found(self): self.register_conductors() self.assertRaises(exception.DriverNotFound, self.ring_manager.__getitem__, 'driver3') def test_hash_ring_manager_no_refresh(self): # If a new conductor is registered after the ring manager is # initialized, it won't be seen. Long term this is probably # undesirable, but today is the intended behavior. self.assertRaises(exception.DriverNotFound, self.ring_manager.__getitem__, 'driver1') self.register_conductors() self.assertRaises(exception.DriverNotFound, self.ring_manager.__getitem__, 'driver1') def test_hash_ring_manager_refresh(self): CONF.set_override('hash_ring_reset_interval', 30) # Initialize the ring manager to make _hash_rings not None; the hash # ring will then refresh only once the reset interval has elapsed. self.assertRaises(exception.DriverNotFound, self.ring_manager.__getitem__, 'driver1') self.register_conductors() self.ring_manager.updated_at = time.time() - 30 self.ring_manager.__getitem__('driver1') ironic-5.1.0/ironic/tests/unit/common/test_images.py # coding=utf-8 # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
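# NOTE: the force-raw conversion flow exercised by the image_to_raw tests
# below reduces to roughly the following sketch. It paraphrases
# ironic.common.images.image_to_raw (the backing-file check, logging and the
# ironic exception types are omitted), and the qemu_img_info/convert_image
# parameters are stand-ins for the ironic_lib.disk_utils helpers.
import os

def image_to_raw_sketch(path, path_tmp, qemu_img_info, convert_image):
    if qemu_img_info(path_tmp).file_format == 'raw':
        # Already raw: just move the downloaded file into place.
        os.rename(path_tmp, path)
        return
    staged = path + '.converted'
    convert_image(path_tmp, staged, 'raw')
    # Re-inspect and refuse to publish an image that still is not raw.
    if qemu_img_info(staged).file_format != 'raw':
        raise RuntimeError('conversion to raw failed')
    os.unlink(path_tmp)
    os.rename(staged, path)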
import os import shutil from ironic_lib import disk_utils from ironic_lib import utils as ironic_utils import mock from oslo_concurrency import processutils from oslo_config import cfg import six import six.moves.builtins as __builtin__ from ironic.common import exception from ironic.common.glance_service import service_utils as glance_utils from ironic.common import image_service from ironic.common import images from ironic.common import utils from ironic.tests import base if six.PY3: import io file = io.BytesIO CONF = cfg.CONF class IronicImagesTestCase(base.TestCase): class FakeImgInfo(object): pass @mock.patch.object(image_service, 'get_image_service', autospec=True) @mock.patch.object(__builtin__, 'open', autospec=True) def test_fetch_image_service(self, open_mock, image_service_mock): mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = 'file' open_mock.return_value = mock_file_handle images.fetch('context', 'image_href', 'path') open_mock.assert_called_once_with('path', 'wb') image_service_mock.assert_called_once_with('image_href', context='context') image_service_mock.return_value.download.assert_called_once_with( 'image_href', 'file') @mock.patch.object(image_service, 'get_image_service', autospec=True) @mock.patch.object(images, 'image_to_raw', autospec=True) @mock.patch.object(__builtin__, 'open', autospec=True) def test_fetch_image_service_force_raw(self, open_mock, image_to_raw_mock, image_service_mock): mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = 'file' open_mock.return_value = mock_file_handle images.fetch('context', 'image_href', 'path', force_raw=True) open_mock.assert_called_once_with('path', 'wb') image_service_mock.return_value.download.assert_called_once_with( 'image_href', 'file') image_to_raw_mock.assert_called_once_with( 'image_href', 'path', 'path.part') @mock.patch.object(disk_utils, 'qemu_img_info', autospec=True) def test_image_to_raw_no_file_format(self, qemu_img_info_mock): info = self.FakeImgInfo() info.file_format = None qemu_img_info_mock.return_value = info e = self.assertRaises(exception.ImageUnacceptable, images.image_to_raw, 'image_href', 'path', 'path_tmp') qemu_img_info_mock.assert_called_once_with('path_tmp') self.assertIn("'qemu-img info' parsing failed.", str(e)) @mock.patch.object(disk_utils, 'qemu_img_info', autospec=True) def test_image_to_raw_backing_file_present(self, qemu_img_info_mock): info = self.FakeImgInfo() info.file_format = 'raw' info.backing_file = 'backing_file' qemu_img_info_mock.return_value = info e = self.assertRaises(exception.ImageUnacceptable, images.image_to_raw, 'image_href', 'path', 'path_tmp') qemu_img_info_mock.assert_called_once_with('path_tmp') self.assertIn("fmt=raw backed by: backing_file", str(e)) @mock.patch.object(os, 'rename', autospec=True) @mock.patch.object(os, 'unlink', autospec=True) @mock.patch.object(disk_utils, 'convert_image', autospec=True) @mock.patch.object(disk_utils, 'qemu_img_info', autospec=True) def test_image_to_raw(self, qemu_img_info_mock, convert_image_mock, unlink_mock, rename_mock): CONF.set_override('force_raw_images', True) info = self.FakeImgInfo() info.file_format = 'fmt' info.backing_file = None qemu_img_info_mock.return_value = info def convert_side_effect(source, dest, out_format): info.file_format = 'raw' convert_image_mock.side_effect = convert_side_effect images.image_to_raw('image_href', 'path', 'path_tmp') qemu_img_info_mock.assert_has_calls([mock.call('path_tmp'), mock.call('path.converted')]) 
convert_image_mock.assert_called_once_with('path_tmp', 'path.converted', 'raw') unlink_mock.assert_called_once_with('path_tmp') rename_mock.assert_called_once_with('path.converted', 'path') @mock.patch.object(os, 'unlink', autospec=True) @mock.patch.object(disk_utils, 'convert_image', autospec=True) @mock.patch.object(disk_utils, 'qemu_img_info', autospec=True) def test_image_to_raw_not_raw_after_conversion(self, qemu_img_info_mock, convert_image_mock, unlink_mock): CONF.set_override('force_raw_images', True) info = self.FakeImgInfo() info.file_format = 'fmt' info.backing_file = None qemu_img_info_mock.return_value = info self.assertRaises(exception.ImageConvertFailed, images.image_to_raw, 'image_href', 'path', 'path_tmp') qemu_img_info_mock.assert_has_calls([mock.call('path_tmp'), mock.call('path.converted')]) convert_image_mock.assert_called_once_with('path_tmp', 'path.converted', 'raw') unlink_mock.assert_called_once_with('path_tmp') @mock.patch.object(os, 'rename', autospec=True) @mock.patch.object(disk_utils, 'qemu_img_info', autospec=True) def test_image_to_raw_already_raw_format(self, qemu_img_info_mock, rename_mock): info = self.FakeImgInfo() info.file_format = 'raw' info.backing_file = None qemu_img_info_mock.return_value = info images.image_to_raw('image_href', 'path', 'path_tmp') qemu_img_info_mock.assert_called_once_with('path_tmp') rename_mock.assert_called_once_with('path_tmp', 'path') @mock.patch.object(image_service, 'get_image_service', autospec=True) def test_image_show_no_image_service(self, image_service_mock): images.image_show('context', 'image_href') image_service_mock.assert_called_once_with('image_href', context='context') image_service_mock.return_value.show.assert_called_once_with( 'image_href') def test_image_show_image_service(self): image_service_mock = mock.MagicMock() images.image_show('context', 'image_href', image_service_mock) image_service_mock.show.assert_called_once_with('image_href') @mock.patch.object(images, 'image_show', autospec=True) def test_download_size(self, show_mock): show_mock.return_value = {'size': 123456} size = images.download_size('context', 'image_href', 'image_service') self.assertEqual(123456, size) show_mock.assert_called_once_with('context', 'image_href', 'image_service') @mock.patch.object(disk_utils, 'qemu_img_info', autospec=True) def test_converted_size(self, qemu_img_info_mock): info = self.FakeImgInfo() info.virtual_size = 1 qemu_img_info_mock.return_value = info size = images.converted_size('path') qemu_img_info_mock.assert_called_once_with('path') self.assertEqual(1, size) @mock.patch.object(images, 'get_image_properties', autospec=True) @mock.patch.object(glance_utils, 'is_glance_image', autospec=True) def test_is_whole_disk_image_no_img_src(self, mock_igi, mock_gip): instance_info = {'image_source': ''} iwdi = images.is_whole_disk_image('context', instance_info) self.assertIsNone(iwdi) self.assertFalse(mock_igi.called) self.assertFalse(mock_gip.called) @mock.patch.object(images, 'get_image_properties', autospec=True) @mock.patch.object(glance_utils, 'is_glance_image', autospec=True) def test_is_whole_disk_image_partition_image(self, mock_igi, mock_gip): mock_igi.return_value = True mock_gip.return_value = {'kernel_id': 'kernel', 'ramdisk_id': 'ramdisk'} instance_info = {'image_source': 'glance://partition_image'} image_source = instance_info['image_source'] is_whole_disk_image = images.is_whole_disk_image('context', instance_info) self.assertFalse(is_whole_disk_image) mock_igi.assert_called_once_with(image_source) 
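# NOTE: a condensed sketch of the decision matrix the is_whole_disk_image
# tests pin down. It paraphrases ironic.common.images.is_whole_disk_image;
# the two helper parameters are hypothetical stand-ins for the glance
# utilities the real code calls.
def is_whole_disk_image_sketch(instance_info, is_glance_image,
                               get_image_properties):
    image_source = instance_info.get('image_source')
    if not image_source:
        return None  # nothing to inspect
    if is_glance_image(image_source):
        props = get_image_properties(image_source)
        # A glance image advertising kernel/ramdisk ids is a partition image.
        return not (props.get('kernel_id') and props.get('ramdisk_id'))
    # Non-glance sources are partition images iff the instance supplies its
    # own kernel and ramdisk.
    return not (instance_info.get('kernel') and instance_info.get('ramdisk'))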
mock_gip.assert_called_once_with('context', image_source) @mock.patch.object(images, 'get_image_properties', autospec=True) @mock.patch.object(glance_utils, 'is_glance_image', autospec=True) def test_is_whole_disk_image_whole_disk_image(self, mock_igi, mock_gip): mock_igi.return_value = True mock_gip.return_value = {} instance_info = {'image_source': 'glance://whole_disk_image'} image_source = instance_info['image_source'] is_whole_disk_image = images.is_whole_disk_image('context', instance_info) self.assertTrue(is_whole_disk_image) mock_igi.assert_called_once_with(image_source) mock_gip.assert_called_once_with('context', image_source) @mock.patch.object(images, 'get_image_properties', autospec=True) @mock.patch.object(glance_utils, 'is_glance_image', autospec=True) def test_is_whole_disk_image_partition_non_glance(self, mock_igi, mock_gip): mock_igi.return_value = False instance_info = {'image_source': 'partition_image', 'kernel': 'kernel', 'ramdisk': 'ramdisk'} is_whole_disk_image = images.is_whole_disk_image('context', instance_info) self.assertFalse(is_whole_disk_image) self.assertFalse(mock_gip.called) mock_igi.assert_called_once_with(instance_info['image_source']) @mock.patch.object(images, 'get_image_properties', autospec=True) @mock.patch.object(glance_utils, 'is_glance_image', autospec=True) def test_is_whole_disk_image_whole_disk_non_glance(self, mock_igi, mock_gip): mock_igi.return_value = False instance_info = {'image_source': 'whole_disk_image'} is_whole_disk_image = images.is_whole_disk_image('context', instance_info) self.assertTrue(is_whole_disk_image) self.assertFalse(mock_gip.called) mock_igi.assert_called_once_with(instance_info['image_source']) class FsImageTestCase(base.TestCase): @mock.patch.object(shutil, 'copyfile', autospec=True) @mock.patch.object(os, 'makedirs', autospec=True) @mock.patch.object(os.path, 'dirname', autospec=True) @mock.patch.object(os.path, 'exists', autospec=True) def test__create_root_fs(self, path_exists_mock, dirname_mock, mkdir_mock, cp_mock): path_exists_mock_func = lambda path: path == 'root_dir' files_info = { 'a1': 'b1', 'a2': 'b2', 'a3': 'sub_dir/b3'} path_exists_mock.side_effect = path_exists_mock_func dirname_mock.side_effect = iter( ['root_dir', 'root_dir', 'root_dir/sub_dir', 'root_dir/sub_dir']) images._create_root_fs('root_dir', files_info) cp_mock.assert_any_call('a1', 'root_dir/b1') cp_mock.assert_any_call('a2', 'root_dir/b2') cp_mock.assert_any_call('a3', 'root_dir/sub_dir/b3') path_exists_mock.assert_any_call('root_dir/sub_dir') dirname_mock.assert_any_call('root_dir/b1') dirname_mock.assert_any_call('root_dir/b2') dirname_mock.assert_any_call('root_dir/sub_dir/b3') mkdir_mock.assert_called_once_with('root_dir/sub_dir') @mock.patch.object(images, '_create_root_fs', autospec=True) @mock.patch.object(utils, 'tempdir', autospec=True) @mock.patch.object(utils, 'write_to_file', autospec=True) @mock.patch.object(ironic_utils, 'dd', autospec=True) @mock.patch.object(utils, 'umount', autospec=True) @mock.patch.object(utils, 'mount', autospec=True) @mock.patch.object(ironic_utils, 'mkfs', autospec=True) def test_create_vfat_image( self, mkfs_mock, mount_mock, umount_mock, dd_mock, write_mock, tempdir_mock, create_root_fs_mock): mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = 'tempdir' tempdir_mock.return_value = mock_file_handle parameters = {'p1': 'v1'} files_info = {'a': 'b'} images.create_vfat_image('tgt_file', parameters=parameters, files_info=files_info, parameters_file='qwe', 
fs_size_kib=1000) dd_mock.assert_called_once_with('/dev/zero', 'tgt_file', 'count=1', 'bs=1000KiB') mkfs_mock.assert_called_once_with('vfat', 'tgt_file', label="ir-vfd-dev") mount_mock.assert_called_once_with('tgt_file', 'tempdir', '-o', 'umask=0') parameters_file_path = os.path.join('tempdir', 'qwe') write_mock.assert_called_once_with(parameters_file_path, 'p1=v1') create_root_fs_mock.assert_called_once_with('tempdir', files_info) umount_mock.assert_called_once_with('tempdir') @mock.patch.object(images, '_create_root_fs', autospec=True) @mock.patch.object(utils, 'tempdir', autospec=True) @mock.patch.object(ironic_utils, 'dd', autospec=True) @mock.patch.object(utils, 'umount', autospec=True) @mock.patch.object(utils, 'mount', autospec=True) @mock.patch.object(ironic_utils, 'mkfs', autospec=True) def test_create_vfat_image_always_umount( self, mkfs_mock, mount_mock, umount_mock, dd_mock, tempdir_mock, create_root_fs_mock): mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = 'tempdir' tempdir_mock.return_value = mock_file_handle files_info = {'a': 'b'} create_root_fs_mock.side_effect = OSError() self.assertRaises(exception.ImageCreationFailed, images.create_vfat_image, 'tgt_file', files_info=files_info) umount_mock.assert_called_once_with('tempdir') @mock.patch.object(ironic_utils, 'dd', autospec=True) def test_create_vfat_image_dd_fails(self, dd_mock): dd_mock.side_effect = processutils.ProcessExecutionError self.assertRaises(exception.ImageCreationFailed, images.create_vfat_image, 'tgt_file') @mock.patch.object(utils, 'tempdir', autospec=True) @mock.patch.object(ironic_utils, 'dd', autospec=True) @mock.patch.object(ironic_utils, 'mkfs', autospec=True) def test_create_vfat_image_mkfs_fails(self, mkfs_mock, dd_mock, tempdir_mock): mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = 'tempdir' tempdir_mock.return_value = mock_file_handle mkfs_mock.side_effect = processutils.ProcessExecutionError self.assertRaises(exception.ImageCreationFailed, images.create_vfat_image, 'tgt_file') @mock.patch.object(images, '_create_root_fs', autospec=True) @mock.patch.object(utils, 'tempdir', autospec=True) @mock.patch.object(ironic_utils, 'dd', autospec=True) @mock.patch.object(utils, 'umount', autospec=True) @mock.patch.object(utils, 'mount', autospec=True) @mock.patch.object(ironic_utils, 'mkfs', autospec=True) def test_create_vfat_image_umount_fails( self, mkfs_mock, mount_mock, umount_mock, dd_mock, tempdir_mock, create_root_fs_mock): mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = 'tempdir' tempdir_mock.return_value = mock_file_handle umount_mock.side_effect = processutils.ProcessExecutionError self.assertRaises(exception.ImageCreationFailed, images.create_vfat_image, 'tgt_file') @mock.patch.object(utils, 'umount', autospec=True) def test__umount_without_raise(self, umount_mock): umount_mock.side_effect = processutils.ProcessExecutionError images._umount_without_raise('mountdir') umount_mock.assert_called_once_with('mountdir') def test__generate_isolinux_cfg(self): kernel_params = ['key1=value1', 'key2'] options = {'kernel': '/vmlinuz', 'ramdisk': '/initrd'} expected_cfg = ("default boot\n" "\n" "label boot\n" "kernel /vmlinuz\n" "append initrd=/initrd text key1=value1 key2 --") cfg = images._generate_cfg(kernel_params, CONF.isolinux_config_template, options) self.assertEqual(expected_cfg, cfg) def test__generate_grub_cfg(self): kernel_params = ['key1=value1', 'key2'] options = {'linux': 
'/vmlinuz', 'initrd': '/initrd'} expected_cfg = ("set default=0\n" "set timeout=5\n" "set hidden_timeout_quiet=false\n" "\n" "menuentry \"boot_partition\" {\n" "linuxefi /vmlinuz key1=value1 key2 --\n" "initrdefi /initrd\n" "}") cfg = images._generate_cfg(kernel_params, CONF.grub_config_template, options) self.assertEqual(expected_cfg, cfg) @mock.patch.object(os.path, 'relpath', autospec=True) @mock.patch.object(os, 'walk', autospec=True) @mock.patch.object(utils, 'mount', autospec=True) def test__mount_deploy_iso(self, mount_mock, walk_mock, relpath_mock): walk_mock.return_value = [('/tmpdir1/EFI/ubuntu', [], ['grub.cfg']), ('/tmpdir1/isolinux', [], ['efiboot.img', 'isolinux.bin', 'isolinux.cfg'])] relpath_mock.side_effect = iter( ['EFI/ubuntu/grub.cfg', 'isolinux/efiboot.img']) images._mount_deploy_iso('path/to/deployiso', 'tmpdir1') mount_mock.assert_called_once_with('path/to/deployiso', 'tmpdir1', '-o', 'loop') walk_mock.assert_called_once_with('tmpdir1') @mock.patch.object(images, '_umount_without_raise', autospec=True) @mock.patch.object(os.path, 'relpath', autospec=True) @mock.patch.object(os, 'walk', autospec=True) @mock.patch.object(utils, 'mount', autospec=True) def test__mount_deploy_iso_fail_no_efibootimg(self, mount_mock, walk_mock, relpath_mock, umount_mock): walk_mock.return_value = [('/tmpdir1/EFI/ubuntu', [], ['grub.cfg']), ('/tmpdir1/isolinux', [], ['isolinux.bin', 'isolinux.cfg'])] relpath_mock.side_effect = iter(['EFI/ubuntu/grub.cfg']) self.assertRaises(exception.ImageCreationFailed, images._mount_deploy_iso, 'path/to/deployiso', 'tmpdir1') mount_mock.assert_called_once_with('path/to/deployiso', 'tmpdir1', '-o', 'loop') walk_mock.assert_called_once_with('tmpdir1') umount_mock.assert_called_once_with('tmpdir1') @mock.patch.object(images, '_umount_without_raise', autospec=True) @mock.patch.object(os.path, 'relpath', autospec=True) @mock.patch.object(os, 'walk', autospec=True) @mock.patch.object(utils, 'mount', autospec=True) def test__mount_deploy_iso_fails_no_grub_cfg(self, mount_mock, walk_mock, relpath_mock, umount_mock): walk_mock.return_value = [('/tmpdir1/EFI/ubuntu', '', []), ('/tmpdir1/isolinux', '', ['efiboot.img', 'isolinux.bin', 'isolinux.cfg'])] relpath_mock.side_effect = iter(['isolinux/efiboot.img']) self.assertRaises(exception.ImageCreationFailed, images._mount_deploy_iso, 'path/to/deployiso', 'tmpdir1') mount_mock.assert_called_once_with('path/to/deployiso', 'tmpdir1', '-o', 'loop') walk_mock.assert_called_once_with('tmpdir1') umount_mock.assert_called_once_with('tmpdir1') @mock.patch.object(utils, 'mount', autospec=True) def test__mount_deploy_iso_fail_with_ExecutionError(self, mount_mock): mount_mock.side_effect = processutils.ProcessExecutionError self.assertRaises(exception.ImageCreationFailed, images._mount_deploy_iso, 'path/to/deployiso', 'tmpdir1') @mock.patch.object(images, '_umount_without_raise', autospec=True) @mock.patch.object(images, '_create_root_fs', autospec=True) @mock.patch.object(utils, 'write_to_file', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) @mock.patch.object(images, '_mount_deploy_iso', autospec=True) @mock.patch.object(utils, 'tempdir', autospec=True) @mock.patch.object(images, '_generate_cfg', autospec=True) def test_create_isolinux_image_for_uefi( self, gen_cfg_mock, tempdir_mock, mount_mock, execute_mock, write_to_file_mock, create_root_fs_mock, umount_mock): files_info = { 'path/to/kernel': 'vmlinuz', 'path/to/ramdisk': 'initrd', CONF.isolinux_bin: 'isolinux/isolinux.bin', 'path/to/grub': 
'relpath/to/grub.cfg', 'sourceabspath/to/efiboot.img': 'path/to/efiboot.img' } cfg = "cfg" cfg_file = 'tmpdir/isolinux/isolinux.cfg' grubcfg = "grubcfg" grub_file = 'tmpdir/relpath/to/grub.cfg' gen_cfg_mock.side_effect = iter([cfg, grubcfg]) params = ['a=b', 'c'] isolinux_options = {'kernel': '/vmlinuz', 'ramdisk': '/initrd'} grub_options = {'linux': '/vmlinuz', 'initrd': '/initrd'} uefi_path_info = { 'sourceabspath/to/efiboot.img': 'path/to/efiboot.img', 'path/to/grub': 'relpath/to/grub.cfg'} grub_rel_path = 'relpath/to/grub.cfg' e_img_rel_path = 'path/to/efiboot.img' mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = 'tmpdir' mock_file_handle1 = mock.MagicMock(spec=file) mock_file_handle1.__enter__.return_value = 'mountdir' tempdir_mock.side_effect = iter( [mock_file_handle, mock_file_handle1]) mount_mock.return_value = (uefi_path_info, e_img_rel_path, grub_rel_path) images.create_isolinux_image_for_uefi('tgt_file', 'path/to/deploy_iso', 'path/to/kernel', 'path/to/ramdisk', kernel_params=params) mount_mock.assert_called_once_with('path/to/deploy_iso', 'mountdir') create_root_fs_mock.assert_called_once_with('tmpdir', files_info) gen_cfg_mock.assert_any_call(params, CONF.isolinux_config_template, isolinux_options) write_to_file_mock.assert_any_call(cfg_file, cfg) gen_cfg_mock.assert_any_call(params, CONF.grub_config_template, grub_options) write_to_file_mock.assert_any_call(grub_file, grubcfg) execute_mock.assert_called_once_with( 'mkisofs', '-r', '-V', "VMEDIA_BOOT_ISO", '-cache-inodes', '-J', '-l', '-no-emul-boot', '-boot-load-size', '4', '-boot-info-table', '-b', 'isolinux/isolinux.bin', '-eltorito-alt-boot', '-e', 'path/to/efiboot.img', '-no-emul-boot', '-o', 'tgt_file', 'tmpdir') umount_mock.assert_called_once_with('mountdir') @mock.patch.object(images, '_create_root_fs', autospec=True) @mock.patch.object(utils, 'write_to_file', autospec=True) @mock.patch.object(utils, 'tempdir', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) @mock.patch.object(images, '_generate_cfg', autospec=True) def test_create_isolinux_image_for_bios( self, gen_cfg_mock, execute_mock, tempdir_mock, write_to_file_mock, create_root_fs_mock): mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = 'tmpdir' tempdir_mock.return_value = mock_file_handle cfg = "cfg" cfg_file = 'tmpdir/isolinux/isolinux.cfg' gen_cfg_mock.return_value = cfg params = ['a=b', 'c'] isolinux_options = {'kernel': '/vmlinuz', 'ramdisk': '/initrd'} images.create_isolinux_image_for_bios('tgt_file', 'path/to/kernel', 'path/to/ramdisk', kernel_params=params) files_info = { 'path/to/kernel': 'vmlinuz', 'path/to/ramdisk': 'initrd', CONF.isolinux_bin: 'isolinux/isolinux.bin' } create_root_fs_mock.assert_called_once_with('tmpdir', files_info) gen_cfg_mock.assert_called_once_with(params, CONF.isolinux_config_template, isolinux_options) write_to_file_mock.assert_called_once_with(cfg_file, cfg) execute_mock.assert_called_once_with( 'mkisofs', '-r', '-V', "VMEDIA_BOOT_ISO", '-cache-inodes', '-J', '-l', '-no-emul-boot', '-boot-load-size', '4', '-boot-info-table', '-b', 'isolinux/isolinux.bin', '-o', 'tgt_file', 'tmpdir') @mock.patch.object(images, '_umount_without_raise', autospec=True) @mock.patch.object(images, '_create_root_fs', autospec=True) @mock.patch.object(utils, 'tempdir', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) @mock.patch.object(os, 'walk', autospec=True) def test_create_isolinux_image_uefi_rootfs_fails(self, walk_mock, utils_mock, 
tempdir_mock, create_root_fs_mock, umount_mock): mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = 'tmpdir' mock_file_handle1 = mock.MagicMock(spec=file) mock_file_handle1.__enter__.return_value = 'mountdir' tempdir_mock.side_effect = iter( [mock_file_handle, mock_file_handle1]) create_root_fs_mock.side_effect = IOError self.assertRaises(exception.ImageCreationFailed, images.create_isolinux_image_for_uefi, 'tgt_file', 'path/to/deployiso', 'path/to/kernel', 'path/to/ramdisk') umount_mock.assert_called_once_with('mountdir') @mock.patch.object(images, '_create_root_fs', autospec=True) @mock.patch.object(utils, 'tempdir', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) @mock.patch.object(os, 'walk', autospec=True) def test_create_isolinux_image_bios_rootfs_fails(self, walk_mock, utils_mock, tempdir_mock, create_root_fs_mock): create_root_fs_mock.side_effect = IOError self.assertRaises(exception.ImageCreationFailed, images.create_isolinux_image_for_bios, 'tgt_file', 'path/to/kernel', 'path/to/ramdisk') @mock.patch.object(images, '_umount_without_raise', autospec=True) @mock.patch.object(images, '_create_root_fs', autospec=True) @mock.patch.object(utils, 'write_to_file', autospec=True) @mock.patch.object(utils, 'tempdir', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) @mock.patch.object(images, '_mount_deploy_iso', autospec=True) @mock.patch.object(images, '_generate_cfg', autospec=True) def test_create_isolinux_image_mkisofs_fails(self, gen_cfg_mock, mount_mock, utils_mock, tempdir_mock, write_to_file_mock, create_root_fs_mock, umount_mock): mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = 'tmpdir' mock_file_handle1 = mock.MagicMock(spec=file) mock_file_handle1.__enter__.return_value = 'mountdir' tempdir_mock.side_effect = iter( [mock_file_handle, mock_file_handle1]) mount_mock.return_value = ({'a': 'a'}, 'b', 'c') utils_mock.side_effect = processutils.ProcessExecutionError self.assertRaises(exception.ImageCreationFailed, images.create_isolinux_image_for_uefi, 'tgt_file', 'path/to/deployiso', 'path/to/kernel', 'path/to/ramdisk') umount_mock.assert_called_once_with('mountdir') @mock.patch.object(images, '_create_root_fs', autospec=True) @mock.patch.object(utils, 'write_to_file', autospec=True) @mock.patch.object(utils, 'tempdir', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) @mock.patch.object(images, '_generate_cfg', autospec=True) def test_create_isolinux_image_bios_mkisofs_fails(self, gen_cfg_mock, utils_mock, tempdir_mock, write_to_file_mock, create_root_fs_mock): mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = 'tmpdir' tempdir_mock.return_value = mock_file_handle utils_mock.side_effect = processutils.ProcessExecutionError self.assertRaises(exception.ImageCreationFailed, images.create_isolinux_image_for_bios, 'tgt_file', 'path/to/kernel', 'path/to/ramdisk') @mock.patch.object(images, 'create_isolinux_image_for_uefi', autospec=True) @mock.patch.object(images, 'fetch', autospec=True) @mock.patch.object(utils, 'tempdir', autospec=True) def test_create_boot_iso_for_uefi( self, tempdir_mock, fetch_images_mock, create_isolinux_mock): mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = 'tmpdir' tempdir_mock.return_value = mock_file_handle images.create_boot_iso('ctx', 'output_file', 'kernel-uuid', 'ramdisk-uuid', 'deploy_iso-uuid', 'root-uuid', 'kernel-params', 'uefi') 
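# NOTE: the kernel-parameter assembly and UEFI/BIOS dispatch verified by the
# create_boot_iso tests here, reduced to a sketch. The fetch/tempdir plumbing
# of ironic.common.images.create_boot_iso is elided, and the two callables
# are hypothetical stand-ins for the create_isolinux_image_for_* helpers.
def create_boot_iso_sketch(boot_mode, root_uuid, kernel_params,
                           make_uefi_iso, make_bios_iso):
    params = []
    if root_uuid:
        params.append('root=UUID=%s' % root_uuid)
    if kernel_params:
        params.append(kernel_params)
    if boot_mode == 'uefi':
        # UEFI additionally needs the deploy ISO for grub.cfg/efiboot.img.
        return make_uefi_iso(params)
    # Anything else, including no boot mode at all, takes the BIOS path.
    return make_bios_iso(params)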
fetch_images_mock.assert_any_call( 'ctx', 'kernel-uuid', 'tmpdir/kernel-uuid') fetch_images_mock.assert_any_call( 'ctx', 'ramdisk-uuid', 'tmpdir/ramdisk-uuid') fetch_images_mock.assert_any_call( 'ctx', 'deploy_iso-uuid', 'tmpdir/deploy_iso-uuid') params = ['root=UUID=root-uuid', 'kernel-params'] create_isolinux_mock.assert_called_once_with( 'output_file', 'tmpdir/deploy_iso-uuid', 'tmpdir/kernel-uuid', 'tmpdir/ramdisk-uuid', params) @mock.patch.object(images, 'create_isolinux_image_for_uefi', autospec=True) @mock.patch.object(images, 'fetch', autospec=True) @mock.patch.object(utils, 'tempdir', autospec=True) def test_create_boot_iso_for_uefi_for_hrefs( self, tempdir_mock, fetch_images_mock, create_isolinux_mock): mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = 'tmpdir' tempdir_mock.return_value = mock_file_handle images.create_boot_iso('ctx', 'output_file', 'http://kernel-href', 'http://ramdisk-href', 'http://deploy_iso-href', 'root-uuid', 'kernel-params', 'uefi') expected_calls = [mock.call('ctx', 'http://kernel-href', 'tmpdir/kernel-href'), mock.call('ctx', 'http://ramdisk-href', 'tmpdir/ramdisk-href'), mock.call('ctx', 'http://deploy_iso-href', 'tmpdir/deploy_iso-href')] fetch_images_mock.assert_has_calls(expected_calls) params = ['root=UUID=root-uuid', 'kernel-params'] create_isolinux_mock.assert_called_once_with( 'output_file', 'tmpdir/deploy_iso-href', 'tmpdir/kernel-href', 'tmpdir/ramdisk-href', params) @mock.patch.object(images, 'create_isolinux_image_for_bios', autospec=True) @mock.patch.object(images, 'fetch', autospec=True) @mock.patch.object(utils, 'tempdir', autospec=True) def test_create_boot_iso_for_bios( self, tempdir_mock, fetch_images_mock, create_isolinux_mock): mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = 'tmpdir' tempdir_mock.return_value = mock_file_handle images.create_boot_iso('ctx', 'output_file', 'kernel-uuid', 'ramdisk-uuid', 'deploy_iso-uuid', 'root-uuid', 'kernel-params', 'bios') fetch_images_mock.assert_any_call( 'ctx', 'kernel-uuid', 'tmpdir/kernel-uuid') fetch_images_mock.assert_any_call( 'ctx', 'ramdisk-uuid', 'tmpdir/ramdisk-uuid') # Note (NobodyCam): the original assert checked that fetch_images was # never called with the deploy ISO arguments; that assert did not # work, so instead verify that the mock saw only two calls, which # validates the asserts above.
self.assertEqual(2, fetch_images_mock.call_count) params = ['root=UUID=root-uuid', 'kernel-params'] create_isolinux_mock.assert_called_once_with('output_file', 'tmpdir/kernel-uuid', 'tmpdir/ramdisk-uuid', params) @mock.patch.object(images, 'create_isolinux_image_for_bios', autospec=True) @mock.patch.object(images, 'fetch', autospec=True) @mock.patch.object(utils, 'tempdir', autospec=True) def test_create_boot_iso_for_bios_with_no_boot_mode(self, tempdir_mock, fetch_images_mock, create_isolinux_mock): mock_file_handle = mock.MagicMock(spec=file) mock_file_handle.__enter__.return_value = 'tmpdir' tempdir_mock.return_value = mock_file_handle images.create_boot_iso('ctx', 'output_file', 'kernel-uuid', 'ramdisk-uuid', 'deploy_iso-uuid', 'root-uuid', 'kernel-params', None) fetch_images_mock.assert_any_call( 'ctx', 'kernel-uuid', 'tmpdir/kernel-uuid') fetch_images_mock.assert_any_call( 'ctx', 'ramdisk-uuid', 'tmpdir/ramdisk-uuid') params = ['root=UUID=root-uuid', 'kernel-params'] create_isolinux_mock.assert_called_once_with('output_file', 'tmpdir/kernel-uuid', 'tmpdir/ramdisk-uuid', params) @mock.patch.object(image_service, 'get_image_service', autospec=True) def test_get_glance_image_properties_no_such_prop(self, image_service_mock): prop_dict = {'properties': {'p1': 'v1', 'p2': 'v2'}} image_service_obj_mock = image_service_mock.return_value image_service_obj_mock.show.return_value = prop_dict ret_val = images.get_image_properties('con', 'uuid', ['p1', 'p2', 'p3']) image_service_mock.assert_called_once_with('uuid', context='con') image_service_obj_mock.show.assert_called_once_with('uuid') self.assertEqual({'p1': 'v1', 'p2': 'v2', 'p3': None}, ret_val) @mock.patch.object(image_service, 'get_image_service', autospec=True) def test_get_glance_image_properties_default_all( self, image_service_mock): prop_dict = {'properties': {'p1': 'v1', 'p2': 'v2'}} image_service_obj_mock = image_service_mock.return_value image_service_obj_mock.show.return_value = prop_dict ret_val = images.get_image_properties('con', 'uuid') image_service_mock.assert_called_once_with('uuid', context='con') image_service_obj_mock.show.assert_called_once_with('uuid') self.assertEqual({'p1': 'v1', 'p2': 'v2'}, ret_val) @mock.patch.object(image_service, 'get_image_service', autospec=True) def test_get_glance_image_properties_with_prop_subset( self, image_service_mock): prop_dict = {'properties': {'p1': 'v1', 'p2': 'v2', 'p3': 'v3'}} image_service_obj_mock = image_service_mock.return_value image_service_obj_mock.show.return_value = prop_dict ret_val = images.get_image_properties('con', 'uuid', ['p1', 'p3']) image_service_mock.assert_called_once_with('uuid', context='con') image_service_obj_mock.show.assert_called_once_with('uuid') self.assertEqual({'p1': 'v1', 'p3': 'v3'}, ret_val) @mock.patch.object(image_service, 'GlanceImageService', autospec=True) def test_get_temp_url_for_glance_image(self, image_service_mock): direct_url = 'swift+http://host/v1/AUTH_xx/con/obj' image_info = {'id': 'qwe', 'properties': {'direct_url': direct_url}} glance_service_mock = image_service_mock.return_value glance_service_mock.swift_temp_url.return_value = 'temp-url' glance_service_mock.show.return_value = image_info temp_url = images.get_temp_url_for_glance_image('context', 'glance_uuid') glance_service_mock.show.assert_called_once_with('glance_uuid') self.assertEqual('temp-url', temp_url) ironic-5.1.0/ironic/tests/unit/common/test_driver_factory.py0000664000567000056710000000616612674513466025517 0ustar jenkinsjenkins00000000000000# coding=utf-8 # 
Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from stevedore import dispatch from ironic.common import driver_factory from ironic.common import exception from ironic.drivers import base as drivers_base from ironic.tests import base class FakeEp(object): name = 'fake' class DriverLoadTestCase(base.TestCase): def setUp(self): super(DriverLoadTestCase, self).setUp() driver_factory.DriverFactory._extension_manager = None def _fake_init_name_err(self, *args, **kwargs): kwargs['on_load_failure_callback'](None, FakeEp, NameError('aaa')) def _fake_init_driver_err(self, *args, **kwargs): kwargs['on_load_failure_callback'](None, FakeEp, exception.DriverLoadError( driver='aaa', reason='bbb')) def test_driver_load_error_if_driver_enabled(self): self.config(enabled_drivers=['fake']) with mock.patch.object(dispatch.NameDispatchExtensionManager, '__init__', self._fake_init_driver_err): self.assertRaises( exception.DriverLoadError, driver_factory.DriverFactory._init_extension_manager) def test_wrap_in_driver_load_error_if_driver_enabled(self): self.config(enabled_drivers=['fake']) with mock.patch.object(dispatch.NameDispatchExtensionManager, '__init__', self._fake_init_name_err): self.assertRaises( exception.DriverLoadError, driver_factory.DriverFactory._init_extension_manager) @mock.patch.object(dispatch.NameDispatchExtensionManager, 'names', autospec=True) def test_no_driver_load_error_if_driver_disabled(self, mock_em): self.config(enabled_drivers=[]) with mock.patch.object(dispatch.NameDispatchExtensionManager, '__init__', self._fake_init_driver_err): driver_factory.DriverFactory._init_extension_manager() self.assertEqual(2, mock_em.call_count) class GetDriverTestCase(base.TestCase): def setUp(self): super(GetDriverTestCase, self).setUp() driver_factory.DriverFactory._extension_manager = None self.config(enabled_drivers=['fake']) def test_get_driver_known(self): driver = driver_factory.get_driver('fake') self.assertIsInstance(driver, drivers_base.BaseDriver) def test_get_driver_unknown(self): self.assertRaises(exception.DriverNotFound, driver_factory.get_driver, 'unknown_driver') ironic-5.1.0/ironic/tests/unit/policy_fixture.py0000664000567000056710000000277012674513466023210 0ustar jenkinsjenkins00000000000000# Copyright 2012 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
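# A minimal usage sketch (illustrative only, not part of the ironic
# source tree): tests typically pull the PolicyFixture defined below in
# via useFixture() so that each test runs against its own temporary
# policy.json. The test-case name here is hypothetical.
#
#     from ironic.tests.unit import policy_fixture
#
#     class ExamplePolicyTest(base.TestCase):  # hypothetical
#         def setUp(self):
#             super(ExamplePolicyTest, self).setUp()
#             self.policy = self.useFixture(
#                 policy_fixture.PolicyFixture())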
import os import fixtures from oslo_config import cfg from oslo_policy import opts as policy_opts from ironic.common import policy as ironic_policy from ironic.tests.unit import fake_policy CONF = cfg.CONF class PolicyFixture(fixtures.Fixture): def __init__(self, compat=None): self.compat = compat def setUp(self): super(PolicyFixture, self).setUp() self.policy_dir = self.useFixture(fixtures.TempDir()) self.policy_file_name = os.path.join(self.policy_dir.path, 'policy.json') with open(self.policy_file_name, 'w') as policy_file: policy_file.write(fake_policy.get_policy_data(self.compat)) policy_opts.set_defaults(CONF) CONF.set_override('policy_file', self.policy_file_name, 'oslo_policy') ironic_policy._ENFORCER = None self.addCleanup(ironic_policy.get_enforcer().clear) ironic-5.1.0/ironic/tests/unit/api/0000775000567000056710000000000012674513633020330 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/api/base.py0000664000567000056710000002412012674513466021617 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Base classes for API tests.""" # NOTE: Ported from ceilometer/tests/api.py (subsequently moved to # ceilometer/tests/api/__init__.py). This should be oslo'ified: # https://bugs.launchpad.net/ironic/+bug/1255115. import mock from oslo_config import cfg import pecan import pecan.testing from six.moves.urllib import parse as urlparse from ironic.tests.unit.db import base PATH_PREFIX = '/v1' cfg.CONF.import_group('keystone_authtoken', 'keystonemiddleware.auth_token') class BaseApiTest(base.DbTestCase): """Pecan controller functional testing class. Used for functional tests of Pecan controllers where you need to test your literal application and its integration with the framework. """ SOURCE_DATA = {'test_source': {'somekey': '666'}} def setUp(self): super(BaseApiTest, self).setUp() cfg.CONF.set_override("auth_version", "v2.0", group='keystone_authtoken') cfg.CONF.set_override("admin_user", "admin", group='keystone_authtoken') self.app = self._make_app() def reset_pecan(): pecan.set_config({}, overwrite=True) self.addCleanup(reset_pecan) p = mock.patch('ironic.api.controllers.v1.Controller._check_version') self._check_version = p.start() self.addCleanup(p.stop) def _make_app(self, enable_acl=False): # Determine where we are so we can set up paths in the config root_dir = self.path_get() self.config = { 'app': { 'root': 'ironic.api.controllers.root.RootController', 'modules': ['ironic.api'], 'static_root': '%s/public' % root_dir, 'template_path': '%s/api/templates' % root_dir, 'enable_acl': enable_acl, 'acl_public_routes': ['/', '/v1'], }, } return pecan.testing.load_test_app(self.config) def _request_json(self, path, params, expect_errors=False, headers=None, method="post", extra_environ=None, status=None, path_prefix=PATH_PREFIX): """Sends simulated HTTP request to Pecan test app. 
:param path: url path of target service :param params: content for wsgi.input of request :param expect_errors: Boolean value; whether an error is expected based on request :param headers: a dictionary of headers to send along with the request :param method: Request method type. Appropriate method function call should be used rather than passing attribute in. :param extra_environ: a dictionary of environ variables to send along with the request :param status: expected status code of response :param path_prefix: prefix of the url path """ full_path = path_prefix + path print('%s: %s %s' % (method.upper(), full_path, params)) response = getattr(self.app, "%s_json" % method)( str(full_path), params=params, headers=headers, status=status, extra_environ=extra_environ, expect_errors=expect_errors ) print('GOT:%s' % response) return response def put_json(self, path, params, expect_errors=False, headers=None, extra_environ=None, status=None): """Sends simulated HTTP PUT request to Pecan test app. :param path: url path of target service :param params: content for wsgi.input of request :param expect_errors: Boolean value; whether an error is expected based on request :param headers: a dictionary of headers to send along with the request :param extra_environ: a dictionary of environ variables to send along with the request :param status: expected status code of response """ return self._request_json(path=path, params=params, expect_errors=expect_errors, headers=headers, extra_environ=extra_environ, status=status, method="put") def post_json(self, path, params, expect_errors=False, headers=None, extra_environ=None, status=None): """Sends simulated HTTP POST request to Pecan test app. :param path: url path of target service :param params: content for wsgi.input of request :param expect_errors: Boolean value; whether an error is expected based on request :param headers: a dictionary of headers to send along with the request :param extra_environ: a dictionary of environ variables to send along with the request :param status: expected status code of response """ return self._request_json(path=path, params=params, expect_errors=expect_errors, headers=headers, extra_environ=extra_environ, status=status, method="post") def patch_json(self, path, params, expect_errors=False, headers=None, extra_environ=None, status=None): """Sends simulated HTTP PATCH request to Pecan test app. :param path: url path of target service :param params: content for wsgi.input of request :param expect_errors: Boolean value; whether an error is expected based on request :param headers: a dictionary of headers to send along with the request :param extra_environ: a dictionary of environ variables to send along with the request :param status: expected status code of response """ return self._request_json(path=path, params=params, expect_errors=expect_errors, headers=headers, extra_environ=extra_environ, status=status, method="patch") def delete(self, path, expect_errors=False, headers=None, extra_environ=None, status=None, path_prefix=PATH_PREFIX): """Sends simulated HTTP DELETE request to Pecan test app. 
:param path: url path of target service :param expect_errors: Boolean value; whether an error is expected based on request :param headers: a dictionary of headers to send along with the request :param extra_environ: a dictionary of environ variables to send along with the request :param status: expected status code of response :param path_prefix: prefix of the url path """ full_path = path_prefix + path print('DELETE: %s' % (full_path)) response = self.app.delete(str(full_path), headers=headers, status=status, extra_environ=extra_environ, expect_errors=expect_errors) print('GOT:%s' % response) return response def get_json(self, path, expect_errors=False, headers=None, extra_environ=None, q=[], path_prefix=PATH_PREFIX, **params): """Sends simulated HTTP GET request to Pecan test app. :param path: url path of target service :param expect_errors: Boolean value; whether an error is expected based on request :param headers: a dictionary of headers to send along with the request :param extra_environ: a dictionary of environ variables to send along with the request :param q: list of queries consisting of: field, value, op, and type keys :param path_prefix: prefix of the url path :param params: content for wsgi.input of request """ full_path = path_prefix + path query_params = {'q.field': [], 'q.value': [], 'q.op': [], } for query in q: for name in ['field', 'op', 'value']: query_params['q.%s' % name].append(query.get(name, '')) all_params = {} all_params.update(params) if q: all_params.update(query_params) print('GET: %s %r' % (full_path, all_params)) response = self.app.get(full_path, params=all_params, headers=headers, extra_environ=extra_environ, expect_errors=expect_errors) if not expect_errors: response = response.json print('GOT:%s' % response) return response def validate_link(self, link, bookmark=False): """Checks if the given link can get correct data.""" # removes the scheme and net location parts of the link url_parts = list(urlparse.urlparse(link)) url_parts[0] = url_parts[1] = '' # bookmark link should not have the version in the URL if bookmark and url_parts[2].startswith(PATH_PREFIX): return False full_path = urlparse.urlunparse(url_parts) try: self.get_json(full_path, path_prefix='') return True except Exception: return False ironic-5.1.0/ironic/tests/unit/api/test_base.py0000664000567000056710000001046012674513466022660 0ustar jenkinsjenkins00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
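# A minimal usage sketch (illustrative only, not part of the ironic
# source) of the get_json() helper defined above: each dict in ``q`` is
# expanded into Pecan-style q.field/q.op/q.value query parameters. The
# resource path and filter values shown here are hypothetical.
#
#     data = self.get_json(
#         '/nodes', q=[{'field': 'driver', 'op': 'eq', 'value': 'fake'}])
#     # roughly: GET /v1/nodes?q.field=driver&q.op=eq&q.value=fake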
import mock from six.moves import http_client from webob import exc from ironic.api.controllers import base as cbase from ironic.tests.unit.api import base class TestBase(base.BaseApiTest): def test_api_setup(self): pass def test_bad_uri(self): response = self.get_json('/bad/path', expect_errors=True, headers={"Accept": "application/json"}) self.assertEqual(http_client.NOT_FOUND, response.status_int) self.assertEqual("application/json", response.content_type) self.assertTrue(response.json['error_message']) class TestVersion(base.BaseApiTest): @mock.patch('ironic.api.controllers.base.Version.parse_headers') def test_init(self, mock_parse): a = mock.Mock() b = mock.Mock() mock_parse.return_value = (a, b) v = cbase.Version('test', 'foo', 'bar') mock_parse.assert_called_with('test', 'foo', 'bar') self.assertEqual(a, v.major) self.assertEqual(b, v.minor) @mock.patch('ironic.api.controllers.base.Version.parse_headers') def test_repr(self, mock_parse): mock_parse.return_value = (123, 456) v = cbase.Version('test', mock.ANY, mock.ANY) result = "%s" % v self.assertEqual('123.456', result) @mock.patch('ironic.api.controllers.base.Version.parse_headers') def test_repr_with_strings(self, mock_parse): mock_parse.return_value = ('abc', 'def') v = cbase.Version('test', mock.ANY, mock.ANY) result = "%s" % v self.assertEqual('abc.def', result) def test_parse_headers_ok(self): version = cbase.Version.parse_headers( {cbase.Version.string: '123.456'}, mock.ANY, mock.ANY) self.assertEqual((123, 456), version) def test_parse_headers_latest(self): for s in ['latest', 'LATEST']: version = cbase.Version.parse_headers( {cbase.Version.string: s}, mock.ANY, '1.9') self.assertEqual((1, 9), version) def test_parse_headers_bad_length(self): self.assertRaises( exc.HTTPNotAcceptable, cbase.Version.parse_headers, {cbase.Version.string: '1'}, mock.ANY, mock.ANY) self.assertRaises( exc.HTTPNotAcceptable, cbase.Version.parse_headers, {cbase.Version.string: '1.2.3'}, mock.ANY, mock.ANY) def test_parse_no_header(self): # this asserts that the minimum version string of "1.1" is applied version = cbase.Version.parse_headers({}, '1.1', '1.5') self.assertEqual((1, 1), version) def test_equals(self): ver_1 = cbase.Version( {cbase.Version.string: '123.456'}, mock.ANY, mock.ANY) ver_2 = cbase.Version( {cbase.Version.string: '123.456'}, mock.ANY, mock.ANY) self.assertTrue(hasattr(ver_1, '__eq__')) self.assertEqual(ver_1, ver_2) def test_greaterthan(self): ver_1 = cbase.Version( {cbase.Version.string: '123.457'}, mock.ANY, mock.ANY) ver_2 = cbase.Version( {cbase.Version.string: '123.456'}, mock.ANY, mock.ANY) self.assertTrue(hasattr(ver_1, '__gt__')) self.assertGreater(ver_1, ver_2) def test_lessthan(self): # __lt__ is created by @functools.total_ordering, make sure it exists # and works ver_1 = cbase.Version( {cbase.Version.string: '123.456'}, mock.ANY, mock.ANY) ver_2 = cbase.Version( {cbase.Version.string: '123.457'}, mock.ANY, mock.ANY) self.assertTrue(hasattr(ver_1, '__lt__')) self.assertLess(ver_1, ver_2) ironic-5.1.0/ironic/tests/unit/api/test_root.py0000664000567000056710000000422012674513466022726 0ustar jenkinsjenkins00000000000000# Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from ironic.api.controllers.v1 import versions from ironic.tests.unit.api import base class TestRoot(base.BaseApiTest): def test_get_root(self): response = self.get_json('/', path_prefix='') # Check fields are not empty [self.assertNotIn(f, ['', []]) for f in response] self.assertEqual('OpenStack Ironic API', response['name']) self.assertTrue(response['description']) self.assertEqual([response['default_version']], response['versions']) version1 = response['default_version'] self.assertEqual('v1', version1['id']) self.assertEqual('CURRENT', version1['status']) self.assertEqual(versions.MIN_VERSION_STRING, version1['min_version']) self.assertEqual(versions.MAX_VERSION_STRING, version1['version']) class TestV1Root(base.BaseApiTest): def test_get_v1_root(self): data = self.get_json('/') self.assertEqual('v1', data['id']) # Check fields are not empty for f in data.keys(): self.assertNotIn(f, ['', []]) # Check if all known resources are present and there are no extra ones. not_resources = ('id', 'links', 'media_types') actual_resources = tuple(set(data.keys()) - set(not_resources)) expected_resources = ('chassis', 'drivers', 'nodes', 'ports') self.assertEqual(sorted(expected_resources), sorted(actual_resources)) self.assertIn({'type': 'application/vnd.openstack.ironic.v1+json', 'base': 'application/json'}, data['media_types']) ironic-5.1.0/ironic/tests/unit/api/test_middleware.py0000664000567000056710000000746612674513466024077 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests to assert that various incorporated middleware works as expected. """ from oslo_config import cfg import oslo_middleware.cors as cors_middleware from six.moves import http_client from ironic.tests.unit.api import base class TestCORSMiddleware(base.BaseApiTest): '''Provide a basic smoke test to ensure CORS middleware is active. The tests below provide minimal confirmation that the CORS middleware is active, and may be configured. For comprehensive tests, please consult the test suite in oslo_middleware. ''' def setUp(self): # Make sure the CORS options are registered cfg.CONF.register_opts(cors_middleware.CORS_OPTS, 'cors') # Load up our valid domain values before the application is created. cfg.CONF.set_override("allowed_origin", "http://valid.example.com", group='cors') # Create the application. super(TestCORSMiddleware, self).setUp() @staticmethod def _response_string(status_code): """Helper function to return string in form of 'CODE DESCRIPTION'. 
For example: '200 OK' """ return '{} {}'.format(status_code, http_client.responses[status_code]) def test_valid_cors_options_request(self): response = self.app \ .options('/', headers={ 'Origin': 'http://valid.example.com', 'Access-Control-Request-Method': 'GET' }) # Assert response status. self.assertEqual( self._response_string(http_client.OK), response.status) self.assertIn('Access-Control-Allow-Origin', response.headers) self.assertEqual('http://valid.example.com', response.headers['Access-Control-Allow-Origin']) def test_invalid_cors_options_request(self): response = self.app \ .options('/', headers={ 'Origin': 'http://invalid.example.com', 'Access-Control-Request-Method': 'GET' }) # Assert response status. self.assertEqual( self._response_string(http_client.OK), response.status) self.assertNotIn('Access-Control-Allow-Origin', response.headers) def test_valid_cors_get_request(self): response = self.app \ .get('/', headers={ 'Origin': 'http://valid.example.com' }) # Assert response status. self.assertEqual( self._response_string(http_client.OK), response.status) self.assertIn('Access-Control-Allow-Origin', response.headers) self.assertEqual('http://valid.example.com', response.headers['Access-Control-Allow-Origin']) def test_invalid_cors_get_request(self): response = self.app \ .get('/', headers={ 'Origin': 'http://invalid.example.com' }) # Assert response status. self.assertEqual( self._response_string(http_client.OK), response.status) self.assertNotIn('Access-Control-Allow-Origin', response.headers) ironic-5.1.0/ironic/tests/unit/api/v1/0000775000567000056710000000000012674513633020656 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/api/v1/test_utils.py0000664000567000056710000003522312674513466023440 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
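# The version-gating tests below pin pecan.request.version.minor to
# either side of the minor version at which each API feature appeared.
# A minimal sketch of the guard pattern they exercise (paraphrased from
# the assertions below, not copied from
# ironic.api.controllers.v1.utils):
#
#     def check_allow_specify_fields(fields):
#         if fields is not None and pecan.request.version.minor < 8:
#             raise exception.NotAcceptable()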
import mock from oslo_config import cfg from oslo_utils import uuidutils import pecan from six.moves import http_client from webob.static import FileIter import wsme from ironic.api.controllers.v1 import utils from ironic.common import exception from ironic import objects from ironic.tests import base from ironic.tests.unit.api import utils as test_api_utils CONF = cfg.CONF class TestApiUtils(base.TestCase): def test_validate_limit(self): limit = utils.validate_limit(10) self.assertEqual(10, limit) # max limit limit = utils.validate_limit(999999999) self.assertEqual(CONF.api.max_limit, limit) # negative self.assertRaises(wsme.exc.ClientSideError, utils.validate_limit, -1) # zero self.assertRaises(wsme.exc.ClientSideError, utils.validate_limit, 0) def test_validate_sort_dir(self): sort_dir = utils.validate_sort_dir('asc') self.assertEqual('asc', sort_dir) # invalid sort_dir parameter self.assertRaises(wsme.exc.ClientSideError, utils.validate_sort_dir, 'fake-sort') def test_get_patch_value_no_path(self): patch = [{'path': '/name', 'op': 'update', 'value': 'node-0'}] path = '/invalid' value = utils.get_patch_value(patch, path) self.assertIsNone(value) def test_get_patch_value_remove(self): patch = [{'path': '/name', 'op': 'remove'}] path = '/name' value = utils.get_patch_value(patch, path) self.assertIsNone(value) def test_get_patch_value_success(self): patch = [{'path': '/name', 'op': 'replace', 'value': 'node-x'}] path = '/name' value = utils.get_patch_value(patch, path) self.assertEqual('node-x', value) def test_check_for_invalid_fields(self): requested = ['field_1', 'field_3'] supported = ['field_1', 'field_2', 'field_3'] utils.check_for_invalid_fields(requested, supported) def test_check_for_invalid_fields_fail(self): requested = ['field_1', 'field_4'] supported = ['field_1', 'field_2', 'field_3'] self.assertRaises(exception.InvalidParameterValue, utils.check_for_invalid_fields, requested, supported) @mock.patch.object(pecan, 'request', spec_set=['version']) def test_check_allow_specify_fields(self, mock_request): mock_request.version.minor = 8 self.assertIsNone(utils.check_allow_specify_fields(['foo'])) @mock.patch.object(pecan, 'request', spec_set=['version']) def test_check_allow_specify_fields_fail(self, mock_request): mock_request.version.minor = 7 self.assertRaises(exception.NotAcceptable, utils.check_allow_specify_fields, ['foo']) @mock.patch.object(pecan, 'request', spec_set=['version']) def test_check_allow_specify_driver(self, mock_request): mock_request.version.minor = 16 self.assertIsNone(utils.check_allow_specify_driver(['fake'])) @mock.patch.object(pecan, 'request', spec_set=['version']) def test_check_allow_specify_driver_fail(self, mock_request): mock_request.version.minor = 15 self.assertRaises(exception.NotAcceptable, utils.check_allow_specify_driver, ['fake']) @mock.patch.object(pecan, 'request', spec_set=['version']) def test_check_allow_manage_verbs(self, mock_request): mock_request.version.minor = 4 utils.check_allow_management_verbs('manage') @mock.patch.object(pecan, 'request', spec_set=['version']) def test_check_allow_manage_verbs_fail(self, mock_request): mock_request.version.minor = 3 self.assertRaises(exception.NotAcceptable, utils.check_allow_management_verbs, 'manage') @mock.patch.object(pecan, 'request', spec_set=['version']) def test_check_allow_provide_verbs(self, mock_request): mock_request.version.minor = 4 utils.check_allow_management_verbs('provide') @mock.patch.object(pecan, 'request', spec_set=['version']) def test_check_allow_provide_verbs_fail(self,
mock_request): mock_request.version.minor = 3 self.assertRaises(exception.NotAcceptable, utils.check_allow_management_verbs, 'provide') @mock.patch.object(pecan, 'request', spec_set=['version']) def test_check_allow_inspect_verbs(self, mock_request): mock_request.version.minor = 6 utils.check_allow_management_verbs('inspect') @mock.patch.object(pecan, 'request', spec_set=['version']) def test_check_allow_inspect_verbs_fail(self, mock_request): mock_request.version.minor = 5 self.assertRaises(exception.NotAcceptable, utils.check_allow_management_verbs, 'inspect') @mock.patch.object(pecan, 'request', spec_set=['version']) def test_check_allow_abort_verbs(self, mock_request): mock_request.version.minor = 13 utils.check_allow_management_verbs('abort') @mock.patch.object(pecan, 'request', spec_set=['version']) def test_check_allow_abort_verbs_fail(self, mock_request): mock_request.version.minor = 12 self.assertRaises(exception.NotAcceptable, utils.check_allow_management_verbs, 'abort') @mock.patch.object(pecan, 'request', spec_set=['version']) def test_check_allow_clean_verbs(self, mock_request): mock_request.version.minor = 15 utils.check_allow_management_verbs('clean') @mock.patch.object(pecan, 'request', spec_set=['version']) def test_check_allow_clean_verbs_fail(self, mock_request): mock_request.version.minor = 14 self.assertRaises(exception.NotAcceptable, utils.check_allow_management_verbs, 'clean') @mock.patch.object(pecan, 'request', spec_set=['version']) def test_check_allow_unknown_verbs(self, mock_request): utils.check_allow_management_verbs('rebuild') @mock.patch.object(pecan, 'request', spec_set=['version']) def test_allow_links_node_states_and_driver_properties(self, mock_request): mock_request.version.minor = 14 self.assertTrue(utils.allow_links_node_states_and_driver_properties()) mock_request.version.minor = 10 self.assertFalse(utils.allow_links_node_states_and_driver_properties()) class TestNodeIdent(base.TestCase): def setUp(self): super(TestNodeIdent, self).setUp() self.valid_name = 'my-host' self.valid_uuid = uuidutils.generate_uuid() self.invalid_name = 'Mr Plow' self.node = test_api_utils.post_get_test_node() @mock.patch.object(pecan, 'request') def test_allow_node_logical_names_pre_name(self, mock_pecan_req): mock_pecan_req.version.minor = 1 self.assertFalse(utils.allow_node_logical_names()) @mock.patch.object(pecan, 'request') def test_allow_node_logical_names_post_name(self, mock_pecan_req): mock_pecan_req.version.minor = 5 self.assertTrue(utils.allow_node_logical_names()) @mock.patch("pecan.request") def test_is_valid_node_name(self, mock_pecan_req): mock_pecan_req.version.minor = 10 self.assertTrue(utils.is_valid_node_name(self.valid_name)) self.assertFalse(utils.is_valid_node_name(self.invalid_name)) self.assertFalse(utils.is_valid_node_name(self.valid_uuid)) @mock.patch.object(pecan, 'request') @mock.patch.object(utils, 'allow_node_logical_names') @mock.patch.object(objects.Node, 'get_by_uuid') @mock.patch.object(objects.Node, 'get_by_name') def test_get_rpc_node_expect_uuid(self, mock_gbn, mock_gbu, mock_anln, mock_pr): mock_anln.return_value = True self.node['uuid'] = self.valid_uuid mock_gbu.return_value = self.node self.assertEqual(self.node, utils.get_rpc_node(self.valid_uuid)) self.assertEqual(1, mock_gbu.call_count) self.assertEqual(0, mock_gbn.call_count) @mock.patch.object(pecan, 'request') @mock.patch.object(utils, 'allow_node_logical_names') @mock.patch.object(objects.Node, 'get_by_uuid') @mock.patch.object(objects.Node, 'get_by_name') def 
test_get_rpc_node_expect_name(self, mock_gbn, mock_gbu, mock_anln, mock_pr): mock_pr.version.minor = 10 mock_anln.return_value = True self.node['name'] = self.valid_name mock_gbn.return_value = self.node self.assertEqual(self.node, utils.get_rpc_node(self.valid_name)) self.assertEqual(0, mock_gbu.call_count) self.assertEqual(1, mock_gbn.call_count) @mock.patch.object(pecan, 'request') @mock.patch.object(utils, 'allow_node_logical_names') @mock.patch.object(objects.Node, 'get_by_uuid') @mock.patch.object(objects.Node, 'get_by_name') def test_get_rpc_node_invalid_name(self, mock_gbn, mock_gbu, mock_anln, mock_pr): mock_pr.version.minor = 10 mock_anln.return_value = True self.assertRaises(exception.InvalidUuidOrName, utils.get_rpc_node, self.invalid_name) @mock.patch.object(pecan, 'request') @mock.patch.object(utils, 'allow_node_logical_names') @mock.patch.object(objects.Node, 'get_by_uuid') @mock.patch.object(objects.Node, 'get_by_name') def test_get_rpc_node_by_uuid_no_logical_name(self, mock_gbn, mock_gbu, mock_anln, mock_pr): # allow_node_logical_name() should have no effect mock_anln.return_value = False self.node['uuid'] = self.valid_uuid mock_gbu.return_value = self.node self.assertEqual(self.node, utils.get_rpc_node(self.valid_uuid)) self.assertEqual(1, mock_gbu.call_count) self.assertEqual(0, mock_gbn.call_count) @mock.patch.object(pecan, 'request') @mock.patch.object(utils, 'allow_node_logical_names') @mock.patch.object(objects.Node, 'get_by_uuid') @mock.patch.object(objects.Node, 'get_by_name') def test_get_rpc_node_by_name_no_logical_name(self, mock_gbn, mock_gbu, mock_anln, mock_pr): mock_anln.return_value = False self.node['name'] = self.valid_name mock_gbn.return_value = self.node self.assertRaises(exception.NodeNotFound, utils.get_rpc_node, self.valid_name) class TestVendorPassthru(base.TestCase): def test_method_not_specified(self): self.assertRaises(wsme.exc.ClientSideError, utils.vendor_passthru, 'fake-ident', None, 'fake-topic', data='fake-data') @mock.patch.object(pecan, 'request', spec_set=['method', 'context', 'rpcapi']) def _vendor_passthru(self, mock_request, async=True, driver_passthru=False): return_value = {'return': 'SpongeBob', 'async': async, 'attach': False} mock_request.method = 'post' mock_request.context = 'fake-context' passthru_mock = None if driver_passthru: passthru_mock = mock_request.rpcapi.driver_vendor_passthru else: passthru_mock = mock_request.rpcapi.vendor_passthru passthru_mock.return_value = return_value response = utils.vendor_passthru('fake-ident', 'squarepants', 'fake-topic', data='fake-data', driver_passthru=driver_passthru) passthru_mock.assert_called_once_with( 'fake-context', 'fake-ident', 'squarepants', 'POST', 'fake-data', 'fake-topic') self.assertIsInstance(response, wsme.api.Response) self.assertEqual('SpongeBob', response.obj) self.assertEqual(response.return_type, wsme.types.Unset) sc = http_client.ACCEPTED if async else http_client.OK self.assertEqual(sc, response.status_code) def test_vendor_passthru_async(self): self._vendor_passthru() def test_vendor_passthru_sync(self): self._vendor_passthru(async=False) def test_driver_vendor_passthru_async(self): self._vendor_passthru(driver_passthru=True) def test_driver_vendor_passthru_sync(self): self._vendor_passthru(async=False, driver_passthru=True) @mock.patch.object(pecan, 'response', spec_set=['app_iter']) @mock.patch.object(pecan, 'request', spec_set=['method', 'context', 'rpcapi']) def _test_vendor_passthru_attach(self, return_value, expct_return_value, mock_request, 
mock_response): return_ = {'return': return_value, 'async': False, 'attach': True} mock_request.method = 'get' mock_request.context = 'fake-context' mock_request.rpcapi.driver_vendor_passthru.return_value = return_ response = utils.vendor_passthru('fake-ident', 'bar', 'fake-topic', data='fake-data', driver_passthru=True) mock_request.rpcapi.driver_vendor_passthru.assert_called_once_with( 'fake-context', 'fake-ident', 'bar', 'GET', 'fake-data', 'fake-topic') # Assert file was attached to the response object self.assertIsInstance(mock_response.app_iter, FileIter) self.assertEqual(expct_return_value, mock_response.app_iter.file.read()) # Assert response message is none self.assertIsInstance(response, wsme.api.Response) self.assertIsNone(response.obj) self.assertIsNone(response.return_type) self.assertEqual(http_client.OK, response.status_code) def test_vendor_passthru_attach(self): self._test_vendor_passthru_attach('foo', b'foo') def test_vendor_passthru_attach_unicode_to_byte(self): self._test_vendor_passthru_attach(u'não', b'n\xc3\xa3o') def test_vendor_passthru_attach_byte_to_byte(self): self._test_vendor_passthru_attach(b'\x00\x01', b'\x00\x01') ironic-5.1.0/ironic/tests/unit/api/v1/test_drivers.py0000664000567000056710000003667412674513466023771 0ustar jenkinsjenkins00000000000000# Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
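# For orientation, the GET /v1/drivers payload asserted in the tests
# below would look roughly like this for the two conductors registered
# in register_fake_conductors() (a sketch, not a verbatim API sample):
#
#     {"drivers": [
#         {"name": "fake-driver1", "hosts": ["fake-host1"],
#          "links": [...]},
#         {"name": "fake-driver2", "hosts": ["fake-host1", "fake-host2"],
#          "links": [...]}]}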
import json import mock from oslo_config import cfg from six.moves import http_client from testtools.matchers import HasLength from ironic.api.controllers import base as api_base from ironic.api.controllers.v1 import driver from ironic.common import exception from ironic.conductor import rpcapi from ironic.tests.unit.api import base class TestListDrivers(base.BaseApiTest): d1 = 'fake-driver1' d2 = 'fake-driver2' h1 = 'fake-host1' h2 = 'fake-host2' def register_fake_conductors(self): self.dbapi.register_conductor({ 'hostname': self.h1, 'drivers': [self.d1, self.d2], }) self.dbapi.register_conductor({ 'hostname': self.h2, 'drivers': [self.d2], }) def test_drivers(self): self.register_fake_conductors() expected = sorted([ {'name': self.d1, 'hosts': [self.h1]}, {'name': self.d2, 'hosts': [self.h1, self.h2]}, ], key=lambda d: d['name']) data = self.get_json('/drivers') self.assertThat(data['drivers'], HasLength(2)) drivers = sorted(data['drivers'], key=lambda d: d['name']) for i in range(len(expected)): d = drivers[i] self.assertEqual(expected[i]['name'], d['name']) self.assertEqual(sorted(expected[i]['hosts']), sorted(d['hosts'])) self.validate_link(d['links'][0]['href']) self.validate_link(d['links'][1]['href']) def test_drivers_no_active_conductor(self): data = self.get_json('/drivers') self.assertThat(data['drivers'], HasLength(0)) self.assertEqual([], data['drivers']) @mock.patch.object(rpcapi.ConductorAPI, 'get_driver_properties') def test_drivers_get_one_ok(self, mock_driver_properties): # get_driver_properties mock is required by validate_link() self.register_fake_conductors() data = self.get_json('/drivers/%s' % self.d1, headers={api_base.Version.string: '1.14'}) self.assertEqual(self.d1, data['name']) self.assertEqual([self.h1], data['hosts']) self.assertIn('properties', data.keys()) self.validate_link(data['links'][0]['href']) self.validate_link(data['links'][1]['href']) self.validate_link(data['properties'][0]['href']) self.validate_link(data['properties'][1]['href']) def test_driver_properties_hidden_in_lower_version(self): self.register_fake_conductors() data = self.get_json('/drivers/%s' % self.d1, headers={api_base.Version.string: '1.8'}) self.assertNotIn('properties', data.keys()) def test_drivers_get_one_not_found(self): response = self.get_json('/drivers/%s' % self.d1, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) def _test_links(self, public_url=None): cfg.CONF.set_override('public_endpoint', public_url, 'api') self.register_fake_conductors() data = self.get_json('/drivers/%s' % self.d1) self.assertIn('links', data.keys()) self.assertEqual(2, len(data['links'])) self.assertIn(self.d1, data['links'][0]['href']) for l in data['links']: bookmark = l['rel'] == 'bookmark' self.assertTrue(self.validate_link(l['href'], bookmark=bookmark)) if public_url is not None: expected = [{'href': '%s/v1/drivers/%s' % (public_url, self.d1), 'rel': 'self'}, {'href': '%s/drivers/%s' % (public_url, self.d1), 'rel': 'bookmark'}] for i in expected: self.assertIn(i, data['links']) def test_links(self): self._test_links() def test_links_public_url(self): self._test_links(public_url='http://foo') @mock.patch.object(rpcapi.ConductorAPI, 'driver_vendor_passthru') def test_driver_vendor_passthru_sync(self, mocked_driver_vendor_passthru): self.register_fake_conductors() mocked_driver_vendor_passthru.return_value = { 'return': {'return_key': 'return_value'}, 'async': False, 'attach': False} response = self.post_json( '/drivers/%s/vendor_passthru/do_test' % self.d1, 
{'test_key': 'test_value'}) self.assertEqual(http_client.OK, response.status_int) self.assertEqual(mocked_driver_vendor_passthru.return_value['return'], response.json) @mock.patch.object(rpcapi.ConductorAPI, 'driver_vendor_passthru') def test_driver_vendor_passthru_async(self, mocked_driver_vendor_passthru): self.register_fake_conductors() mocked_driver_vendor_passthru.return_value = {'return': None, 'async': True, 'attach': False} response = self.post_json( '/drivers/%s/vendor_passthru/do_test' % self.d1, {'test_key': 'test_value'}) self.assertEqual(http_client.ACCEPTED, response.status_int) self.assertIsNone(mocked_driver_vendor_passthru.return_value['return']) @mock.patch.object(rpcapi.ConductorAPI, 'driver_vendor_passthru') def test_driver_vendor_passthru_put(self, mocked_driver_vendor_passthru): self.register_fake_conductors() return_value = {'return': None, 'async': True, 'attach': False} mocked_driver_vendor_passthru.return_value = return_value response = self.put_json( '/drivers/%s/vendor_passthru/do_test' % self.d1, {'test_key': 'test_value'}) self.assertEqual(http_client.ACCEPTED, response.status_int) self.assertEqual(return_value['return'], response.json) @mock.patch.object(rpcapi.ConductorAPI, 'driver_vendor_passthru') def test_driver_vendor_passthru_get(self, mocked_driver_vendor_passthru): self.register_fake_conductors() return_value = {'return': 'foo', 'async': False, 'attach': False} mocked_driver_vendor_passthru.return_value = return_value response = self.get_json( '/drivers/%s/vendor_passthru/do_test' % self.d1) self.assertEqual(return_value['return'], response) @mock.patch.object(rpcapi.ConductorAPI, 'driver_vendor_passthru') def test_driver_vendor_passthru_delete(self, mock_driver_vendor_passthru): self.register_fake_conductors() return_value = {'return': None, 'async': True, 'attach': False} mock_driver_vendor_passthru.return_value = return_value response = self.delete( '/drivers/%s/vendor_passthru/do_test' % self.d1) self.assertEqual(http_client.ACCEPTED, response.status_int) self.assertEqual(return_value['return'], response.json) def test_driver_vendor_passthru_driver_not_found(self): # tests when given driver is not found # e.g. 
get_topic_for_driver fails to find the driver response = self.post_json( '/drivers/%s/vendor_passthru/do_test' % self.d1, {'test_key': 'test_value'}, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) def test_driver_vendor_passthru_method_not_found(self): response = self.post_json( '/drivers/%s/vendor_passthru' % self.d1, {'test_key': 'test_value'}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) error = json.loads(response.json['error_message']) self.assertEqual('Missing argument: "method"', error['faultstring']) @mock.patch.object(rpcapi.ConductorAPI, 'get_driver_vendor_passthru_methods') def test_driver_vendor_passthru_methods(self, get_methods_mock): self.register_fake_conductors() return_value = {'foo': 'bar'} get_methods_mock.return_value = return_value path = '/drivers/%s/vendor_passthru/methods' % self.d1 data = self.get_json(path) self.assertEqual(return_value, data) get_methods_mock.assert_called_once_with(mock.ANY, self.d1, topic=mock.ANY) # Now let's test the cache: Reset the mock get_methods_mock.reset_mock() # Call it again data = self.get_json(path) self.assertEqual(return_value, data) # Assert RPC method wasn't called this time self.assertFalse(get_methods_mock.called) @mock.patch.object(rpcapi.ConductorAPI, 'get_raid_logical_disk_properties') def test_raid_logical_disk_properties(self, disk_prop_mock): driver._RAID_PROPERTIES = {} self.register_fake_conductors() properties = {'foo': 'description of foo'} disk_prop_mock.return_value = properties path = '/drivers/%s/raid/logical_disk_properties' % self.d1 data = self.get_json(path, headers={api_base.Version.string: "1.12"}) self.assertEqual(properties, data) disk_prop_mock.assert_called_once_with(mock.ANY, self.d1, topic=mock.ANY) @mock.patch.object(rpcapi.ConductorAPI, 'get_raid_logical_disk_properties') def test_raid_logical_disk_properties_older_version(self, disk_prop_mock): driver._RAID_PROPERTIES = {} self.register_fake_conductors() properties = {'foo': 'description of foo'} disk_prop_mock.return_value = properties path = '/drivers/%s/raid/logical_disk_properties' % self.d1 ret = self.get_json(path, headers={api_base.Version.string: "1.4"}, expect_errors=True) self.assertEqual(http_client.NOT_ACCEPTABLE, ret.status_code) @mock.patch.object(rpcapi.ConductorAPI, 'get_raid_logical_disk_properties') def test_raid_logical_disk_properties_cached(self, disk_prop_mock): # only one RPC-conductor call will be made and the info cached # for subsequent requests driver._RAID_PROPERTIES = {} self.register_fake_conductors() properties = {'foo': 'description of foo'} disk_prop_mock.return_value = properties path = '/drivers/%s/raid/logical_disk_properties' % self.d1 for i in range(3): data = self.get_json(path, headers={api_base.Version.string: "1.12"}) self.assertEqual(properties, data) disk_prop_mock.assert_called_once_with(mock.ANY, self.d1, topic=mock.ANY) self.assertEqual(properties, driver._RAID_PROPERTIES[self.d1]) @mock.patch.object(rpcapi.ConductorAPI, 'get_raid_logical_disk_properties') def test_raid_logical_disk_properties_iface_not_supported( self, disk_prop_mock): driver._RAID_PROPERTIES = {} self.register_fake_conductors() disk_prop_mock.side_effect = iter( [exception.UnsupportedDriverExtension( extension='raid', driver='fake')]) path = '/drivers/%s/raid/logical_disk_properties' % self.d1 ret = self.get_json(path, headers={api_base.Version.string: "1.12"}, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, ret.status_code) 
self.assertTrue(ret.json['error_message']) disk_prop_mock.assert_called_once_with(mock.ANY, self.d1, topic=mock.ANY) @mock.patch.object(rpcapi.ConductorAPI, 'get_driver_properties') @mock.patch.object(rpcapi.ConductorAPI, 'get_topic_for_driver') class TestDriverProperties(base.BaseApiTest): def test_driver_properties_fake(self, mock_topic, mock_properties): # Can get driver properties for fake driver. driver._DRIVER_PROPERTIES = {} driver_name = 'fake' mock_topic.return_value = 'fake_topic' mock_properties.return_value = {'prop1': 'Property 1. Required.'} data = self.get_json('/drivers/%s/properties' % driver_name) self.assertEqual(mock_properties.return_value, data) mock_topic.assert_called_once_with(driver_name) mock_properties.assert_called_once_with(mock.ANY, driver_name, topic=mock_topic.return_value) self.assertEqual(mock_properties.return_value, driver._DRIVER_PROPERTIES[driver_name]) def test_driver_properties_cached(self, mock_topic, mock_properties): # only one RPC-conductor call will be made and the info cached # for subsequent requests driver._DRIVER_PROPERTIES = {} driver_name = 'fake' mock_topic.return_value = 'fake_topic' mock_properties.return_value = {'prop1': 'Property 1. Required.'} data = self.get_json('/drivers/%s/properties' % driver_name) data = self.get_json('/drivers/%s/properties' % driver_name) data = self.get_json('/drivers/%s/properties' % driver_name) self.assertEqual(mock_properties.return_value, data) mock_topic.assert_called_once_with(driver_name) mock_properties.assert_called_once_with(mock.ANY, driver_name, topic=mock_topic.return_value) self.assertEqual(mock_properties.return_value, driver._DRIVER_PROPERTIES[driver_name]) def test_driver_properties_invalid_driver_name(self, mock_topic, mock_properties): # Cannot get driver properties for an invalid driver; no RPC topic # exists for it. driver._DRIVER_PROPERTIES = {} driver_name = 'bad_driver' mock_topic.side_effect = exception.DriverNotFound( driver_name=driver_name) mock_properties.return_value = {'prop1': 'Property 1. Required.'} ret = self.get_json('/drivers/%s/properties' % driver_name, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, ret.status_int) mock_topic.assert_called_once_with(driver_name) self.assertFalse(mock_properties.called) def test_driver_properties_cannot_load(self, mock_topic, mock_properties): # Cannot get driver properties for the driver. Although an RPC topic # exists for it, the conductor wasn't able to load it. driver._DRIVER_PROPERTIES = {} driver_name = 'driver' mock_topic.return_value = 'driver_topic' mock_properties.side_effect = exception.DriverNotFound( driver_name=driver_name) ret = self.get_json('/drivers/%s/properties' % driver_name, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, ret.status_int) mock_topic.assert_called_once_with(driver_name) mock_properties.assert_called_once_with(mock.ANY, driver_name, topic=mock_topic.return_value) ironic-5.1.0/ironic/tests/unit/api/v1/test_types.py0000664000567000056710000002523612674513466023447 0ustar jenkinsjenkins00000000000000# coding: utf-8 # # Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import six
from six.moves import http_client
import webtest
import wsme
from wsme import types as wtypes

from ironic.api.controllers.v1 import types
from ironic.common import exception
from ironic.common import utils
from ironic.tests import base


class TestMacAddressType(base.TestCase):

    def test_valid_mac_addr(self):
        test_mac = 'aa:bb:cc:11:22:33'
        with mock.patch.object(utils, 'validate_and_normalize_mac') as m_mock:
            types.MacAddressType.validate(test_mac)
            m_mock.assert_called_once_with(test_mac)

    def test_invalid_mac_addr(self):
        self.assertRaises(exception.InvalidMAC,
                          types.MacAddressType.validate, 'invalid-mac')


class TestUuidType(base.TestCase):

    def test_valid_uuid(self):
        test_uuid = '1a1a1a1a-2b2b-3c3c-4d4d-5e5e5e5e5e5e'
        self.assertEqual(test_uuid, types.UuidType.validate(test_uuid))

    def test_invalid_uuid(self):
        self.assertRaises(exception.InvalidUUID,
                          types.UuidType.validate, 'invalid-uuid')


class TestNameType(base.TestCase):

    @mock.patch("pecan.request")
    def test_valid_name(self, mock_pecan_req):
        mock_pecan_req.version.minor = 10
        test_name = 'hal-9000'
        self.assertEqual(test_name, types.NameType.validate(test_name))

    @mock.patch("pecan.request")
    def test_invalid_name(self, mock_pecan_req):
        mock_pecan_req.version.minor = 10
        self.assertRaises(exception.InvalidName,
                          types.NameType.validate, '-this is not valid-')


class TestUuidOrNameType(base.TestCase):

    @mock.patch("pecan.request")
    def test_valid_uuid(self, mock_pecan_req):
        mock_pecan_req.version.minor = 10
        test_uuid = '1a1a1a1a-2b2b-3c3c-4d4d-5e5e5e5e5e5e'
        self.assertTrue(types.UuidOrNameType.validate(test_uuid))

    @mock.patch("pecan.request")
    def test_valid_name(self, mock_pecan_req):
        mock_pecan_req.version.minor = 10
        test_name = 'dc16-database5'
        self.assertTrue(types.UuidOrNameType.validate(test_name))

    @mock.patch("pecan.request")
    def test_invalid_uuid_or_name(self, mock_pecan_req):
        mock_pecan_req.version.minor = 10
        self.assertRaises(exception.InvalidUuidOrName,
                          types.UuidOrNameType.validate, 'inval#uuid%or*name')


class MyBaseType(object):
    """Helper class, patched by objects of type MyPatchType"""

    mandatory = wsme.wsattr(wtypes.text, mandatory=True)


class MyPatchType(types.JsonPatchType):
    """Helper class for TestJsonPatchType tests."""

    _api_base = MyBaseType
    _extra_non_removable_attrs = {'/non_removable'}

    @staticmethod
    def internal_attrs():
        return ['/internal']


class MyRoot(wsme.WSRoot):
    """Helper class for TestJsonPatchType tests."""

    @wsme.expose([wsme.types.text], body=[MyPatchType])
    @wsme.validate([MyPatchType])
    def test(self, patch):
        return patch


class TestJsonPatchType(base.TestCase):

    def setUp(self):
        super(TestJsonPatchType, self).setUp()
        self.app = webtest.TestApp(MyRoot(['restjson']).wsgiapp())

    def _patch_json(self, params, expect_errors=False):
        return self.app.patch_json('/test', params=params,
                                   headers={'Accept': 'application/json'},
                                   expect_errors=expect_errors)

    def test_valid_patches(self):
        valid_patches = [{'path': '/extra/foo', 'op': 'remove'},
                         {'path': '/extra/foo', 'op': 'add', 'value': 'bar'},
                         {'path': '/str', 'op': 'replace', 'value': 'bar'},
                         {'path': '/bool', 'op': 'add', 'value': True},
                         {'path': '/int', 'op': 'add', 'value': 1},
                         {'path': '/float', 'op': 'add', 'value': 0.123},
                         {'path': '/list', 'op': 'add', 'value': [1, 2]},
                         {'path': '/none', 'op': 'add', 'value': None},
                         {'path': '/empty_dict', 'op': 'add', 'value': {}},
                         {'path': '/empty_list', 'op': 'add', 'value': []},
                         {'path': '/dict', 'op': 'add',
                          'value': {'cat': 'meow'}}]
        ret = self._patch_json(valid_patches, False)
        self.assertEqual(http_client.OK, ret.status_int)
        self.assertItemsEqual(valid_patches, ret.json)

    def test_cannot_update_internal_attr(self):
        patch = [{'path': '/internal', 'op': 'replace', 'value': 'foo'}]
        ret = self._patch_json(patch, True)
        self.assertEqual(http_client.BAD_REQUEST, ret.status_int)
        self.assertTrue(ret.json['faultstring'])

    def test_cannot_update_internal_dict_attr(self):
        patch = [{'path': '/internal/test', 'op': 'replace', 'value': 'foo'}]
        ret = self._patch_json(patch, True)
        self.assertEqual(http_client.BAD_REQUEST, ret.status_int)
        self.assertTrue(ret.json['faultstring'])

    def test_mandatory_attr(self):
        patch = [{'op': 'replace', 'path': '/mandatory', 'value': 'foo'}]
        ret = self._patch_json(patch, False)
        self.assertEqual(http_client.OK, ret.status_int)
        self.assertEqual(patch, ret.json)

    def test_cannot_remove_mandatory_attr(self):
        patch = [{'op': 'remove', 'path': '/mandatory'}]
        ret = self._patch_json(patch, True)
        self.assertEqual(http_client.BAD_REQUEST, ret.status_int)
        self.assertTrue(ret.json['faultstring'])

    def test_cannot_remove_extra_non_removable_attr(self):
        patch = [{'op': 'remove', 'path': '/non_removable'}]
        ret = self._patch_json(patch, True)
        self.assertEqual(http_client.BAD_REQUEST, ret.status_int)
        self.assertTrue(ret.json['faultstring'])

    def test_missing_required_fields_path(self):
        missing_path = [{'op': 'remove'}]
        ret = self._patch_json(missing_path, True)
        self.assertEqual(http_client.BAD_REQUEST, ret.status_int)
        self.assertTrue(ret.json['faultstring'])

    def test_missing_required_fields_op(self):
        missing_op = [{'path': '/foo'}]
        ret = self._patch_json(missing_op, True)
        self.assertEqual(http_client.BAD_REQUEST, ret.status_int)
        self.assertTrue(ret.json['faultstring'])

    def test_invalid_op(self):
        patch = [{'path': '/foo', 'op': 'invalid'}]
        ret = self._patch_json(patch, True)
        self.assertEqual(http_client.BAD_REQUEST, ret.status_int)
        self.assertTrue(ret.json['faultstring'])

    def test_invalid_path(self):
        patch = [{'path': 'invalid-path', 'op': 'remove'}]
        ret = self._patch_json(patch, True)
        self.assertEqual(http_client.BAD_REQUEST, ret.status_int)
        self.assertTrue(ret.json['faultstring'])

    def test_cannot_add_with_no_value(self):
        patch = [{'path': '/extra/foo', 'op': 'add'}]
        ret = self._patch_json(patch, True)
        self.assertEqual(http_client.BAD_REQUEST, ret.status_int)
        self.assertTrue(ret.json['faultstring'])

    def test_cannot_replace_with_no_value(self):
        patch = [{'path': '/foo', 'op': 'replace'}]
        ret = self._patch_json(patch, True)
        self.assertEqual(http_client.BAD_REQUEST, ret.status_int)
        self.assertTrue(ret.json['faultstring'])


class TestBooleanType(base.TestCase):

    def test_valid_true_values(self):
        v = types.BooleanType()
        self.assertTrue(v.validate("true"))
        self.assertTrue(v.validate("TRUE"))
        self.assertTrue(v.validate("True"))
        self.assertTrue(v.validate("t"))
        self.assertTrue(v.validate("1"))
        self.assertTrue(v.validate("y"))
        self.assertTrue(v.validate("yes"))
        self.assertTrue(v.validate("on"))

    def test_valid_false_values(self):
        v = types.BooleanType()
        self.assertFalse(v.validate("false"))
        self.assertFalse(v.validate("FALSE"))
        self.assertFalse(v.validate("False"))
        self.assertFalse(v.validate("f"))
        self.assertFalse(v.validate("0"))
        self.assertFalse(v.validate("n"))
        self.assertFalse(v.validate("no"))
        self.assertFalse(v.validate("off"))

    def test_invalid_value(self):
        v = types.BooleanType()
        self.assertRaises(exception.Invalid, v.validate, "invalid-value")
        self.assertRaises(exception.Invalid, v.validate, "01")


class TestJsonType(base.TestCase):

    def test_valid_values(self):
        vt = types.jsontype
        value = vt.validate("hello")
        self.assertEqual("hello", value)
        value = vt.validate(10)
        self.assertEqual(10, value)
        value = vt.validate(0.123)
        self.assertEqual(0.123, value)
        value = vt.validate(True)
        self.assertTrue(value)
        value = vt.validate([1, 2, 3])
        self.assertEqual([1, 2, 3], value)
        value = vt.validate({'foo': 'bar'})
        self.assertEqual({'foo': 'bar'}, value)
        value = vt.validate(None)
        self.assertIsNone(value)

    def test_invalid_values(self):
        vt = types.jsontype
        self.assertRaises(exception.Invalid, vt.validate, object())

    def test_apimultitype_tostring(self):
        vts = str(types.jsontype)
        self.assertIn(str(wtypes.text), vts)
        self.assertIn(str(int), vts)
        if six.PY2:
            self.assertIn(str(long), vts)
        self.assertIn(str(float), vts)
        self.assertIn(str(types.BooleanType), vts)
        self.assertIn(str(list), vts)
        self.assertIn(str(dict), vts)
        self.assertIn(str(None), vts)


class TestListType(base.TestCase):

    def test_list_type(self):
        v = types.ListType()
        self.assertItemsEqual(['foo', 'bar'], v.validate('foo,bar'))
        self.assertItemsEqual(['cat', 'meow'], v.validate("cat , meow"))
        self.assertItemsEqual(['spongebob', 'squarepants'],
                              v.validate("SpongeBob,SquarePants"))
        self.assertItemsEqual(['foo', 'bar'], v.validate("foo, ,,bar"))
        self.assertItemsEqual(['foo', 'bar'], v.validate("foo,foo,foo,bar"))
        self.assertIsInstance(v.validate('foo,bar'), list)
ironic-5.1.0/ironic/tests/unit/api/v1/test_root.py0000664000567000056710000000471112674513466023261 0ustar jenkinsjenkins00000000000000# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
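
# NOTE: the tests below exercise the v1 controller's microversion
# negotiation; per the assertions in TestCheckVersions, a requested
# version outside [MIN_VER, MAX_VER] is expected to be rejected with
# HTTP 406 (Not Acceptable).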
import mock
from webob import exc as webob_exc

from ironic.api.controllers import v1 as v1_api
from ironic.tests import base as test_base
from ironic.tests.unit.api import base as api_base


class TestV1Routing(api_base.BaseApiTest):

    def setUp(self):
        super(TestV1Routing, self).setUp()

    def test_route_checks_version(self):
        self.get_json('/')
        self._check_version.assert_called_once_with(mock.ANY, mock.ANY)


class TestCheckVersions(test_base.TestCase):

    def setUp(self):
        super(TestCheckVersions, self).setUp()

        class ver(object):
            major = None
            minor = None

        self.version = ver()

    def test_check_version_invalid_major_version(self):
        self.version.major = v1_api.BASE_VERSION + 1
        self.version.minor = v1_api.MIN_VER.minor
        self.assertRaises(
            webob_exc.HTTPNotAcceptable,
            v1_api.Controller()._check_version,
            self.version)

    def test_check_version_too_low(self):
        self.version.major = v1_api.BASE_VERSION
        self.version.minor = v1_api.MIN_VER.minor - 1
        self.assertRaises(
            webob_exc.HTTPNotAcceptable,
            v1_api.Controller()._check_version,
            self.version)

    def test_check_version_too_high(self):
        self.version.major = v1_api.BASE_VERSION
        self.version.minor = v1_api.MAX_VER.minor + 1
        e = self.assertRaises(
            webob_exc.HTTPNotAcceptable,
            v1_api.Controller()._check_version,
            self.version,
            {'fake-headers': v1_api.MAX_VER.minor})
        self.assertEqual(v1_api.MAX_VER.minor, e.headers['fake-headers'])

    def test_check_version_ok(self):
        self.version.major = v1_api.BASE_VERSION
        self.version.minor = v1_api.MIN_VER.minor
        v1_api.Controller()._check_version(self.version)
ironic-5.1.0/ironic/tests/unit/api/v1/test_ports.py0000664000567000056710000011034612674513466023447 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Tests for the API /ports/ methods.
""" import datetime import mock from oslo_config import cfg from oslo_utils import timeutils from oslo_utils import uuidutils import six from six.moves import http_client from six.moves.urllib import parse as urlparse from testtools.matchers import HasLength from wsme import types as wtypes from ironic.api.controllers import base as api_base from ironic.api.controllers import v1 as api_v1 from ironic.api.controllers.v1 import port as api_port from ironic.api.controllers.v1 import utils as api_utils from ironic.common import exception from ironic.conductor import rpcapi from ironic.tests import base from ironic.tests.unit.api import base as test_api_base from ironic.tests.unit.api import utils as apiutils from ironic.tests.unit.db import utils as dbutils from ironic.tests.unit.objects import utils as obj_utils # NOTE(lucasagomes): When creating a port via API (POST) # we have to use node_uuid def post_get_test_port(**kw): port = apiutils.port_post_data(**kw) node = dbutils.get_test_node() port['node_uuid'] = kw.get('node_uuid', node['uuid']) return port class TestPortObject(base.TestCase): def test_port_init(self): port_dict = apiutils.port_post_data(node_id=None) del port_dict['extra'] port = api_port.Port(**port_dict) self.assertEqual(wtypes.Unset, port.extra) class TestListPorts(test_api_base.BaseApiTest): def setUp(self): super(TestListPorts, self).setUp() self.node = obj_utils.create_test_node(self.context) def test_empty(self): data = self.get_json('/ports') self.assertEqual([], data['ports']) def test_one(self): port = obj_utils.create_test_port(self.context, node_id=self.node.id) data = self.get_json('/ports') self.assertEqual(port.uuid, data['ports'][0]["uuid"]) self.assertNotIn('extra', data['ports'][0]) self.assertNotIn('node_uuid', data['ports'][0]) # never expose the node_id self.assertNotIn('node_id', data['ports'][0]) def test_get_one(self): port = obj_utils.create_test_port(self.context, node_id=self.node.id) data = self.get_json('/ports/%s' % port.uuid) self.assertEqual(port.uuid, data['uuid']) self.assertIn('extra', data) self.assertIn('node_uuid', data) # never expose the node_id self.assertNotIn('node_id', data) def test_get_one_custom_fields(self): port = obj_utils.create_test_port(self.context, node_id=self.node.id) fields = 'address,extra' data = self.get_json( '/ports/%s?fields=%s' % (port.uuid, fields), headers={api_base.Version.string: str(api_v1.MAX_VER)}) # We always append "links" self.assertItemsEqual(['address', 'extra', 'links'], data) def test_get_collection_custom_fields(self): fields = 'uuid,extra' for i in range(3): obj_utils.create_test_port(self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), address='52:54:00:cf:2d:3%s' % i) data = self.get_json( '/ports?fields=%s' % fields, headers={api_base.Version.string: str(api_v1.MAX_VER)}) self.assertEqual(3, len(data['ports'])) for port in data['ports']: # We always append "links" self.assertItemsEqual(['uuid', 'extra', 'links'], port) def test_get_custom_fields_invalid_fields(self): port = obj_utils.create_test_port(self.context, node_id=self.node.id) fields = 'uuid,spongebob' response = self.get_json( '/ports/%s?fields=%s' % (port.uuid, fields), headers={api_base.Version.string: str(api_v1.MAX_VER)}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertIn('spongebob', response.json['error_message']) def test_get_custom_fields_invalid_api_version(self): port = 
obj_utils.create_test_port(self.context, node_id=self.node.id) fields = 'uuid,extra' response = self.get_json( '/ports/%s?fields=%s' % (port.uuid, fields), headers={api_base.Version.string: str(api_v1.MIN_VER)}, expect_errors=True) self.assertEqual(http_client.NOT_ACCEPTABLE, response.status_int) def test_detail(self): port = obj_utils.create_test_port(self.context, node_id=self.node.id) data = self.get_json('/ports/detail') self.assertEqual(port.uuid, data['ports'][0]["uuid"]) self.assertIn('extra', data['ports'][0]) self.assertIn('node_uuid', data['ports'][0]) # never expose the node_id self.assertNotIn('node_id', data['ports'][0]) def test_detail_against_single(self): port = obj_utils.create_test_port(self.context, node_id=self.node.id) response = self.get_json('/ports/%s/detail' % port.uuid, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) def test_many(self): ports = [] for id_ in range(5): port = obj_utils.create_test_port( self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), address='52:54:00:cf:2d:3%s' % id_) ports.append(port.uuid) data = self.get_json('/ports') self.assertEqual(len(ports), len(data['ports'])) uuids = [n['uuid'] for n in data['ports']] six.assertCountEqual(self, ports, uuids) def _test_links(self, public_url=None): cfg.CONF.set_override('public_endpoint', public_url, 'api') uuid = uuidutils.generate_uuid() obj_utils.create_test_port(self.context, uuid=uuid, node_id=self.node.id) data = self.get_json('/ports/%s' % uuid) self.assertIn('links', data.keys()) self.assertEqual(2, len(data['links'])) self.assertIn(uuid, data['links'][0]['href']) for l in data['links']: bookmark = l['rel'] == 'bookmark' self.assertTrue(self.validate_link(l['href'], bookmark=bookmark)) if public_url is not None: expected = [{'href': '%s/v1/ports/%s' % (public_url, uuid), 'rel': 'self'}, {'href': '%s/ports/%s' % (public_url, uuid), 'rel': 'bookmark'}] for i in expected: self.assertIn(i, data['links']) def test_links(self): self._test_links() def test_links_public_url(self): self._test_links(public_url='http://foo') def test_collection_links(self): ports = [] for id_ in range(5): port = obj_utils.create_test_port( self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), address='52:54:00:cf:2d:3%s' % id_) ports.append(port.uuid) data = self.get_json('/ports/?limit=3') self.assertEqual(3, len(data['ports'])) next_marker = data['ports'][-1]['uuid'] self.assertIn(next_marker, data['next']) def test_collection_links_default_limit(self): cfg.CONF.set_override('max_limit', 3, 'api') ports = [] for id_ in range(5): port = obj_utils.create_test_port( self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), address='52:54:00:cf:2d:3%s' % id_) ports.append(port.uuid) data = self.get_json('/ports') self.assertEqual(3, len(data['ports'])) next_marker = data['ports'][-1]['uuid'] self.assertIn(next_marker, data['next']) def test_port_by_address(self): address_template = "aa:bb:cc:dd:ee:f%d" for id_ in range(3): obj_utils.create_test_port(self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), address=address_template % id_) target_address = address_template % 1 data = self.get_json('/ports?address=%s' % target_address) self.assertThat(data['ports'], HasLength(1)) self.assertEqual(target_address, data['ports'][0]['address']) def test_port_by_address_non_existent_address(self): # non-existent address data = self.get_json('/ports?address=%s' % 'aa:bb:cc:dd:ee:ff') self.assertThat(data['ports'], HasLength(0)) def 
test_port_by_address_invalid_address_format(self): obj_utils.create_test_port(self.context, node_id=self.node.id) invalid_address = 'invalid-mac-format' response = self.get_json('/ports?address=%s' % invalid_address, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertIn(invalid_address, response.json['error_message']) def test_sort_key(self): ports = [] for id_ in range(3): port = obj_utils.create_test_port( self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), address='52:54:00:cf:2d:3%s' % id_) ports.append(port.uuid) data = self.get_json('/ports?sort_key=uuid') uuids = [n['uuid'] for n in data['ports']] self.assertEqual(sorted(ports), uuids) def test_sort_key_invalid(self): invalid_keys_list = ['foo', 'extra'] for invalid_key in invalid_keys_list: response = self.get_json('/ports?sort_key=%s' % invalid_key, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertIn(invalid_key, response.json['error_message']) @mock.patch.object(api_utils, 'get_rpc_node') def test_get_all_by_node_name_ok(self, mock_get_rpc_node): # GET /v1/ports specifying node_name - success mock_get_rpc_node.return_value = self.node for i in range(5): if i < 3: node_id = self.node.id else: node_id = 100000 + i obj_utils.create_test_port(self.context, node_id=node_id, uuid=uuidutils.generate_uuid(), address='52:54:00:cf:2d:3%s' % i) data = self.get_json("/ports?node=%s" % 'test-node', headers={api_base.Version.string: '1.5'}) self.assertEqual(3, len(data['ports'])) @mock.patch.object(api_utils, 'get_rpc_node') def test_get_all_by_node_uuid_and_name(self, mock_get_rpc_node): # GET /v1/ports specifying node and uuid - should only use node_uuid mock_get_rpc_node.return_value = self.node obj_utils.create_test_port(self.context, node_id=self.node.id) self.get_json('/ports/detail?node_uuid=%s&node=%s' % (self.node.uuid, 'node-name')) mock_get_rpc_node.assert_called_once_with(self.node.uuid) @mock.patch.object(api_utils, 'get_rpc_node') def test_get_all_by_node_name_not_supported(self, mock_get_rpc_node): # GET /v1/ports specifying node_name - name not supported mock_get_rpc_node.side_effect = ( exception.InvalidUuidOrName(name=self.node.uuid)) for i in range(3): obj_utils.create_test_port(self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), address='52:54:00:cf:2d:3%s' % i) data = self.get_json("/ports?node=%s" % 'test-node', expect_errors=True) self.assertEqual(0, mock_get_rpc_node.call_count) self.assertEqual(http_client.NOT_ACCEPTABLE, data.status_int) @mock.patch.object(api_utils, 'get_rpc_node') def test_detail_by_node_name_ok(self, mock_get_rpc_node): # GET /v1/ports/detail specifying node_name - success mock_get_rpc_node.return_value = self.node port = obj_utils.create_test_port(self.context, node_id=self.node.id) data = self.get_json('/ports/detail?node=%s' % 'test-node', headers={api_base.Version.string: '1.5'}) self.assertEqual(port.uuid, data['ports'][0]['uuid']) self.assertEqual(self.node.uuid, data['ports'][0]['node_uuid']) @mock.patch.object(api_utils, 'get_rpc_node') def test_detail_by_node_name_not_supported(self, mock_get_rpc_node): # GET /v1/ports/detail specifying node_name - name not supported mock_get_rpc_node.side_effect = ( exception.InvalidUuidOrName(name=self.node.uuid)) obj_utils.create_test_port(self.context, node_id=self.node.id) data = 
self.get_json('/ports/detail?node=%s' % 'test-node', expect_errors=True) self.assertEqual(0, mock_get_rpc_node.call_count) self.assertEqual(http_client.NOT_ACCEPTABLE, data.status_int) @mock.patch.object(api_port.PortsController, '_get_ports_collection') def test_detail_with_incorrect_api_usage(self, mock_gpc): # GET /v1/ports/detail specifying node and node_uuid. In this case # we expect the node_uuid interface to be used. self.get_json('/ports/detail?node=%s&node_uuid=%s' % ('test-node', self.node.uuid)) mock_gpc.assert_called_once_with(self.node.uuid, mock.ANY, mock.ANY, mock.ANY, mock.ANY, mock.ANY, mock.ANY) @mock.patch.object(rpcapi.ConductorAPI, 'update_port') class TestPatch(test_api_base.BaseApiTest): def setUp(self): super(TestPatch, self).setUp() self.node = obj_utils.create_test_node(self.context) self.port = obj_utils.create_test_port(self.context, node_id=self.node.id) p = mock.patch.object(rpcapi.ConductorAPI, 'get_topic_for') self.mock_gtf = p.start() self.mock_gtf.return_value = 'test-topic' self.addCleanup(p.stop) def test_update_byid(self, mock_upd): extra = {'foo': 'bar'} mock_upd.return_value = self.port mock_upd.return_value.extra = extra response = self.patch_json('/ports/%s' % self.port.uuid, [{'path': '/extra/foo', 'value': 'bar', 'op': 'add'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(extra, response.json['extra']) kargs = mock_upd.call_args[0][1] self.assertEqual(extra, kargs.extra) def test_update_byaddress_not_allowed(self, mock_upd): extra = {'foo': 'bar'} mock_upd.return_value = self.port mock_upd.return_value.extra = extra response = self.patch_json('/ports/%s' % self.port.address, [{'path': '/extra/foo', 'value': 'bar', 'op': 'add'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertIn(self.port.address, response.json['error_message']) self.assertFalse(mock_upd.called) def test_update_not_found(self, mock_upd): uuid = uuidutils.generate_uuid() response = self.patch_json('/ports/%s' % uuid, [{'path': '/extra/foo', 'value': 'bar', 'op': 'add'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.NOT_FOUND, response.status_int) self.assertTrue(response.json['error_message']) self.assertFalse(mock_upd.called) def test_replace_singular(self, mock_upd): address = 'aa:bb:cc:dd:ee:ff' mock_upd.return_value = self.port mock_upd.return_value.address = address response = self.patch_json('/ports/%s' % self.port.uuid, [{'path': '/address', 'value': address, 'op': 'replace'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(address, response.json['address']) self.assertTrue(mock_upd.called) kargs = mock_upd.call_args[0][1] self.assertEqual(address, kargs.address) def test_replace_address_already_exist(self, mock_upd): address = 'aa:aa:aa:aa:aa:aa' mock_upd.side_effect = exception.MACAlreadyExists(mac=address) response = self.patch_json('/ports/%s' % self.port.uuid, [{'path': '/address', 'value': address, 'op': 'replace'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.CONFLICT, response.status_code) self.assertTrue(response.json['error_message']) self.assertTrue(mock_upd.called) kargs = mock_upd.call_args[0][1] self.assertEqual(address, kargs.address) def 
test_replace_node_uuid(self, mock_upd): mock_upd.return_value = self.port response = self.patch_json('/ports/%s' % self.port.uuid, [{'path': '/node_uuid', 'value': self.node.uuid, 'op': 'replace'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) def test_add_node_uuid(self, mock_upd): mock_upd.return_value = self.port response = self.patch_json('/ports/%s' % self.port.uuid, [{'path': '/node_uuid', 'value': self.node.uuid, 'op': 'add'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) def test_add_node_id(self, mock_upd): response = self.patch_json('/ports/%s' % self.port.uuid, [{'path': '/node_id', 'value': '1', 'op': 'add'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertFalse(mock_upd.called) def test_replace_node_id(self, mock_upd): response = self.patch_json('/ports/%s' % self.port.uuid, [{'path': '/node_id', 'value': '1', 'op': 'replace'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertFalse(mock_upd.called) def test_remove_node_id(self, mock_upd): response = self.patch_json('/ports/%s' % self.port.uuid, [{'path': '/node_id', 'op': 'remove'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertFalse(mock_upd.called) def test_replace_non_existent_node_uuid(self, mock_upd): node_uuid = '12506333-a81c-4d59-9987-889ed5f8687b' response = self.patch_json('/ports/%s' % self.port.uuid, [{'path': '/node_uuid', 'value': node_uuid, 'op': 'replace'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertIn(node_uuid, response.json['error_message']) self.assertFalse(mock_upd.called) def test_replace_multi(self, mock_upd): extra = {"foo1": "bar1", "foo2": "bar2", "foo3": "bar3"} self.port.extra = extra self.port.save() # mutate extra so we replace all of them extra = dict((k, extra[k] + 'x') for k in extra.keys()) patch = [] for k in extra.keys(): patch.append({'path': '/extra/%s' % k, 'value': extra[k], 'op': 'replace'}) mock_upd.return_value = self.port mock_upd.return_value.extra = extra response = self.patch_json('/ports/%s' % self.port.uuid, patch) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(extra, response.json['extra']) kargs = mock_upd.call_args[0][1] self.assertEqual(extra, kargs.extra) def test_remove_multi(self, mock_upd): extra = {"foo1": "bar1", "foo2": "bar2", "foo3": "bar3"} self.port.extra = extra self.port.save() # Removing one item from the collection extra.pop('foo1') mock_upd.return_value = self.port mock_upd.return_value.extra = extra response = self.patch_json('/ports/%s' % self.port.uuid, [{'path': '/extra/foo1', 'op': 'remove'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(extra, response.json['extra']) kargs = mock_upd.call_args[0][1] self.assertEqual(extra, kargs.extra) # Removing the collection extra = {} mock_upd.return_value.extra = extra response = self.patch_json('/ports/%s' % self.port.uuid, [{'path': '/extra', 'op': 'remove'}]) 
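        # Removing the whole 'extra' collection is expected to succeed and
        # come back as an empty dict; the asserts below also verify that no
        # other port field was changed by the operation.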
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        self.assertEqual({}, response.json['extra'])
        kargs = mock_upd.call_args[0][1]
        self.assertEqual(extra, kargs.extra)

        # Assert nothing else was changed
        self.assertEqual(self.port.uuid, response.json['uuid'])
        self.assertEqual(self.port.address, response.json['address'])

    def test_remove_non_existent_property_fail(self, mock_upd):
        response = self.patch_json('/ports/%s' % self.port.uuid,
                                   [{'path': '/extra/non-existent',
                                     'op': 'remove'}],
                                   expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertTrue(response.json['error_message'])
        self.assertFalse(mock_upd.called)

    def test_remove_mandatory_field(self, mock_upd):
        response = self.patch_json('/ports/%s' % self.port.uuid,
                                   [{'path': '/address',
                                     'op': 'remove'}],
                                   expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertTrue(response.json['error_message'])
        self.assertIn('mandatory attribute', response.json['error_message'])
        self.assertFalse(mock_upd.called)

    def test_add_root(self, mock_upd):
        address = 'aa:bb:cc:dd:ee:ff'
        mock_upd.return_value = self.port
        mock_upd.return_value.address = address
        response = self.patch_json('/ports/%s' % self.port.uuid,
                                   [{'path': '/address',
                                     'value': address,
                                     'op': 'add'}])
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        self.assertEqual(address, response.json['address'])
        self.assertTrue(mock_upd.called)
        kargs = mock_upd.call_args[0][1]
        self.assertEqual(address, kargs.address)

    def test_add_root_non_existent(self, mock_upd):
        response = self.patch_json('/ports/%s' % self.port.uuid,
                                   [{'path': '/foo',
                                     'value': 'bar',
                                     'op': 'add'}],
                                   expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertTrue(response.json['error_message'])
        self.assertFalse(mock_upd.called)

    def test_add_multi(self, mock_upd):
        extra = {"foo1": "bar1", "foo2": "bar2", "foo3": "bar3"}
        patch = []
        for k in extra.keys():
            patch.append({'path': '/extra/%s' % k,
                          'value': extra[k],
                          'op': 'add'})
        mock_upd.return_value = self.port
        mock_upd.return_value.extra = extra
        response = self.patch_json('/ports/%s' % self.port.uuid,
                                   patch)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        self.assertEqual(extra, response.json['extra'])
        kargs = mock_upd.call_args[0][1]
        self.assertEqual(extra, kargs.extra)

    def test_remove_uuid(self, mock_upd):
        response = self.patch_json('/ports/%s' % self.port.uuid,
                                   [{'path': '/uuid',
                                     'op': 'remove'}],
                                   expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])
        self.assertFalse(mock_upd.called)

    def test_update_address_invalid_format(self, mock_upd):
        response = self.patch_json('/ports/%s' % self.port.uuid,
                                   [{'path': '/address',
                                     'value': 'invalid-format',
                                     'op': 'replace'}],
                                   expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertTrue(response.json['error_message'])
        self.assertFalse(mock_upd.called)

    def test_update_port_address_normalized(self, mock_upd):
        address = 'AA:BB:CC:DD:EE:FF'
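        # MAC addresses are expected to be normalized to lower case on
        # their way to the conductor; the mocked update_port mirrors that
        # by returning the lower-cased address.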
        mock_upd.return_value = self.port
        mock_upd.return_value.address = address.lower()
        response = self.patch_json('/ports/%s' % self.port.uuid,
                                   [{'path': '/address',
                                     'value': address,
                                     'op': 'replace'}])
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        self.assertEqual(address.lower(), response.json['address'])
        kargs = mock_upd.call_args[0][1]
        self.assertEqual(address.lower(), kargs.address)


class TestPost(test_api_base.BaseApiTest):

    def setUp(self):
        super(TestPost, self).setUp()
        self.node = obj_utils.create_test_node(self.context)

    @mock.patch.object(timeutils, 'utcnow')
    def test_create_port(self, mock_utcnow):
        pdict = post_get_test_port()
        test_time = datetime.datetime(2000, 1, 1, 0, 0)
        mock_utcnow.return_value = test_time
        response = self.post_json('/ports', pdict)
        self.assertEqual(http_client.CREATED, response.status_int)
        result = self.get_json('/ports/%s' % pdict['uuid'])
        self.assertEqual(pdict['uuid'], result['uuid'])
        self.assertFalse(result['updated_at'])
        return_created_at = timeutils.parse_isotime(
            result['created_at']).replace(tzinfo=None)
        self.assertEqual(test_time, return_created_at)
        # Check location header
        self.assertIsNotNone(response.location)
        expected_location = '/v1/ports/%s' % pdict['uuid']
        self.assertEqual(urlparse.urlparse(response.location).path,
                         expected_location)

    def test_create_port_doesnt_contain_id(self):
        with mock.patch.object(self.dbapi, 'create_port',
                               wraps=self.dbapi.create_port) as cp_mock:
            pdict = post_get_test_port(extra={'foo': 123})
            self.post_json('/ports', pdict)
            result = self.get_json('/ports/%s' % pdict['uuid'])
            self.assertEqual(pdict['extra'], result['extra'])
            cp_mock.assert_called_once_with(mock.ANY)
            # Check that 'id' is not in first arg of positional args
            self.assertNotIn('id', cp_mock.call_args[0][0])

    def test_create_port_generate_uuid(self):
        pdict = post_get_test_port()
        del pdict['uuid']
        response = self.post_json('/ports', pdict)
        result = self.get_json('/ports/%s' % response.json['uuid'])
        self.assertEqual(pdict['address'], result['address'])
        self.assertTrue(uuidutils.is_uuid_like(result['uuid']))

    def test_create_port_valid_extra(self):
        pdict = post_get_test_port(extra={'str': 'foo', 'int': 123,
                                          'float': 0.1, 'bool': True,
                                          'list': [1, 2], 'none': None,
                                          'dict': {'cat': 'meow'}})
        self.post_json('/ports', pdict)
        result = self.get_json('/ports/%s' % pdict['uuid'])
        self.assertEqual(pdict['extra'], result['extra'])

    def test_create_port_no_mandatory_field_address(self):
        pdict = post_get_test_port()
        del pdict['address']
        response = self.post_json('/ports', pdict, expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])

    def test_create_port_no_mandatory_field_node_uuid(self):
        pdict = post_get_test_port()
        del pdict['node_uuid']
        response = self.post_json('/ports', pdict, expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])

    def test_create_port_invalid_addr_format(self):
        pdict = post_get_test_port(address='invalid-format')
        response = self.post_json('/ports', pdict, expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])

    def test_create_port_address_normalized(self):
        address = 'AA:BB:CC:DD:EE:FF'
        pdict = post_get_test_port(address=address)
        self.post_json('/ports', pdict)
        result = self.get_json('/ports/%s' % pdict['uuid'])
        self.assertEqual(address.lower(), result['address'])

    def test_create_port_with_hyphens_delimiter(self):
        pdict = post_get_test_port()
        colonsMAC = pdict['address']
        hyphensMAC = colonsMAC.replace(':', '-')
        pdict['address'] = hyphensMAC
        response = self.post_json('/ports', pdict, expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])

    def test_create_port_invalid_node_uuid_format(self):
        pdict = post_get_test_port(node_uuid='invalid-format')
        response = self.post_json('/ports', pdict, expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertTrue(response.json['error_message'])

    def test_node_uuid_to_node_id_mapping(self):
        pdict = post_get_test_port(node_uuid=self.node['uuid'])
        self.post_json('/ports', pdict)
        # GET doesn't return the node_id; it's an internal value
        port = self.dbapi.get_port_by_uuid(pdict['uuid'])
        self.assertEqual(self.node['id'], port.node_id)

    def test_create_port_node_uuid_not_found(self):
        pdict = post_get_test_port(
            node_uuid='1a1a1a1a-2b2b-3c3c-4d4d-5e5e5e5e5e5e')
        response = self.post_json('/ports', pdict, expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertTrue(response.json['error_message'])

    def test_create_port_address_already_exist(self):
        address = 'AA:AA:AA:11:22:33'
        pdict = post_get_test_port(address=address)
        self.post_json('/ports', pdict)
        pdict['uuid'] = uuidutils.generate_uuid()
        response = self.post_json('/ports', pdict, expect_errors=True)
        self.assertEqual(http_client.CONFLICT, response.status_int)
        self.assertEqual('application/json', response.content_type)
        error_msg = response.json['error_message']
        self.assertTrue(error_msg)
        self.assertIn(address, error_msg.upper())


@mock.patch.object(rpcapi.ConductorAPI, 'destroy_port')
class TestDelete(test_api_base.BaseApiTest):

    def setUp(self):
        super(TestDelete, self).setUp()
        self.node = obj_utils.create_test_node(self.context)
        self.port = obj_utils.create_test_port(self.context,
                                               node_id=self.node.id)

        gtf = mock.patch.object(rpcapi.ConductorAPI, 'get_topic_for')
        self.mock_gtf = gtf.start()
        self.mock_gtf.return_value = 'test-topic'
        self.addCleanup(gtf.stop)

    def test_delete_port_byaddress(self, mock_dpt):
        response = self.delete('/ports/%s' % self.port.address,
                               expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertIn(self.port.address, response.json['error_message'])

    def test_delete_port_byid(self, mock_dpt):
        self.delete('/ports/%s' % self.port.uuid, expect_errors=True)
        self.assertTrue(mock_dpt.called)

    def test_delete_port_node_locked(self, mock_dpt):
        self.node.reserve(self.context, 'fake', self.node.uuid)
        mock_dpt.side_effect = exception.NodeLocked(node='fake-node',
                                                    host='fake-host')
        ret = self.delete('/ports/%s' % self.port.uuid, expect_errors=True)
        self.assertEqual(http_client.CONFLICT, ret.status_code)
        self.assertTrue(ret.json['error_message'])
        self.assertTrue(mock_dpt.called)
ironic-5.1.0/ironic/tests/unit/api/v1/test_chassis.py0000664000567000056710000005245712674513466023735 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests for the API /chassis/ methods. """ import datetime import mock from oslo_config import cfg from oslo_utils import timeutils from oslo_utils import uuidutils import six from six.moves import http_client from six.moves.urllib import parse as urlparse from wsme import types as wtypes from ironic.api.controllers import base as api_base from ironic.api.controllers import v1 as api_v1 from ironic.api.controllers.v1 import chassis as api_chassis from ironic.tests import base from ironic.tests.unit.api import base as test_api_base from ironic.tests.unit.api import utils as apiutils from ironic.tests.unit.objects import utils as obj_utils class TestChassisObject(base.TestCase): def test_chassis_init(self): chassis_dict = apiutils.chassis_post_data() del chassis_dict['description'] chassis = api_chassis.Chassis(**chassis_dict) self.assertEqual(wtypes.Unset, chassis.description) class TestListChassis(test_api_base.BaseApiTest): def test_empty(self): data = self.get_json('/chassis') self.assertEqual([], data['chassis']) def test_one(self): chassis = obj_utils.create_test_chassis(self.context) data = self.get_json('/chassis') self.assertEqual(chassis.uuid, data['chassis'][0]["uuid"]) self.assertNotIn('extra', data['chassis'][0]) self.assertNotIn('nodes', data['chassis'][0]) def test_get_one(self): chassis = obj_utils.create_test_chassis(self.context) data = self.get_json('/chassis/%s' % chassis['uuid']) self.assertEqual(chassis.uuid, data['uuid']) self.assertIn('extra', data) self.assertIn('nodes', data) def test_get_one_custom_fields(self): chassis = obj_utils.create_test_chassis(self.context) fields = 'extra,description' data = self.get_json( '/chassis/%s?fields=%s' % (chassis.uuid, fields), headers={api_base.Version.string: str(api_v1.MAX_VER)}) # We always append "links" self.assertItemsEqual(['description', 'extra', 'links'], data) def test_get_collection_custom_fields(self): fields = 'uuid,extra' for i in range(3): obj_utils.create_test_chassis( self.context, uuid=uuidutils.generate_uuid()) data = self.get_json( '/chassis?fields=%s' % fields, headers={api_base.Version.string: str(api_v1.MAX_VER)}) self.assertEqual(3, len(data['chassis'])) for ch in data['chassis']: # We always append "links" self.assertItemsEqual(['uuid', 'extra', 'links'], ch) def test_get_custom_fields_invalid_fields(self): chassis = obj_utils.create_test_chassis(self.context) fields = 'uuid,spongebob' response = self.get_json( '/chassis/%s?fields=%s' % (chassis.uuid, fields), headers={api_base.Version.string: str(api_v1.MAX_VER)}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertIn('spongebob', response.json['error_message']) def test_get_custom_fields_invalid_api_version(self): chassis = obj_utils.create_test_chassis(self.context) fields = 'uuid,extra' response = self.get_json( '/chassis/%s?fields=%s' % (chassis.uuid, fields), headers={api_base.Version.string: str(api_v1.MIN_VER)}, expect_errors=True) 
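        # The 'fields' query parameter is only honoured by newer API
        # microversions, so a request made at MIN_VER should be rejected
        # as 406 (Not Acceptable) below.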
        self.assertEqual(http_client.NOT_ACCEPTABLE, response.status_int)

    def test_detail(self):
        chassis = obj_utils.create_test_chassis(self.context)
        data = self.get_json('/chassis/detail')
        self.assertEqual(chassis.uuid, data['chassis'][0]["uuid"])
        self.assertIn('extra', data['chassis'][0])
        self.assertIn('nodes', data['chassis'][0])

    def test_detail_against_single(self):
        chassis = obj_utils.create_test_chassis(self.context)
        response = self.get_json('/chassis/%s/detail' % chassis['uuid'],
                                 expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)

    def test_many(self):
        ch_list = []
        for id_ in range(5):
            chassis = obj_utils.create_test_chassis(
                self.context, uuid=uuidutils.generate_uuid())
            ch_list.append(chassis.uuid)
        data = self.get_json('/chassis')
        self.assertEqual(len(ch_list), len(data['chassis']))
        uuids = [n['uuid'] for n in data['chassis']]
        six.assertCountEqual(self, ch_list, uuids)

    def _test_links(self, public_url=None):
        cfg.CONF.set_override('public_endpoint', public_url, 'api')
        uuid = uuidutils.generate_uuid()
        obj_utils.create_test_chassis(self.context, uuid=uuid)
        data = self.get_json('/chassis/%s' % uuid)
        self.assertIn('links', data.keys())
        self.assertEqual(2, len(data['links']))
        self.assertIn(uuid, data['links'][0]['href'])
        for l in data['links']:
            bookmark = l['rel'] == 'bookmark'
            self.assertTrue(self.validate_link(l['href'], bookmark=bookmark))

        if public_url is not None:
            expected = [{'href': '%s/v1/chassis/%s' % (public_url, uuid),
                         'rel': 'self'},
                        {'href': '%s/chassis/%s' % (public_url, uuid),
                         'rel': 'bookmark'}]
            for i in expected:
                self.assertIn(i, data['links'])

    def test_links(self):
        self._test_links()

    def test_links_public_url(self):
        self._test_links(public_url='http://foo')

    def test_collection_links(self):
        for id in range(5):
            obj_utils.create_test_chassis(self.context,
                                          uuid=uuidutils.generate_uuid())
        data = self.get_json('/chassis/?limit=3')
        self.assertEqual(3, len(data['chassis']))

        next_marker = data['chassis'][-1]['uuid']
        self.assertIn(next_marker, data['next'])

    def test_collection_links_default_limit(self):
        cfg.CONF.set_override('max_limit', 3, 'api')
        for id_ in range(5):
            obj_utils.create_test_chassis(self.context,
                                          uuid=uuidutils.generate_uuid())
        data = self.get_json('/chassis')
        self.assertEqual(3, len(data['chassis']))

        next_marker = data['chassis'][-1]['uuid']
        self.assertIn(next_marker, data['next'])

    def test_sort_key(self):
        ch_list = []
        for id_ in range(3):
            chassis = obj_utils.create_test_chassis(
                self.context, uuid=uuidutils.generate_uuid())
            ch_list.append(chassis.uuid)
        data = self.get_json('/chassis?sort_key=uuid')
        uuids = [n['uuid'] for n in data['chassis']]
        self.assertEqual(sorted(ch_list), uuids)

    def test_sort_key_invalid(self):
        invalid_keys_list = ['foo', 'extra']
        for invalid_key in invalid_keys_list:
            response = self.get_json('/chassis?sort_key=%s' % invalid_key,
                                     expect_errors=True)
            self.assertEqual(http_client.BAD_REQUEST, response.status_int)
            self.assertEqual('application/json', response.content_type)
            self.assertIn(invalid_key, response.json['error_message'])

    def test_nodes_subresource_link(self):
        chassis = obj_utils.create_test_chassis(self.context)
        data = self.get_json('/chassis/%s' % chassis.uuid)
        self.assertIn('nodes', data.keys())

    def test_nodes_subresource(self):
        chassis = obj_utils.create_test_chassis(self.context)

        for id_ in range(2):
            obj_utils.create_test_node(self.context, chassis_id=chassis.id,
                                       uuid=uuidutils.generate_uuid())

        data = self.get_json('/chassis/%s/nodes' % chassis.uuid)
        self.assertEqual(2, len(data['nodes']))
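        # Both nodes fit in one page when no limit is given, so the
        # collection is not expected to carry a 'next' link.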
        self.assertNotIn('next', data.keys())

        # Test collection pagination
        data = self.get_json('/chassis/%s/nodes?limit=1' % chassis.uuid)
        self.assertEqual(1, len(data['nodes']))
        self.assertIn('next', data.keys())

    def test_nodes_subresource_no_uuid(self):
        response = self.get_json('/chassis/nodes', expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)

    def test_nodes_subresource_chassis_not_found(self):
        non_existent_uuid = 'eeeeeeee-cccc-aaaa-bbbb-cccccccccccc'
        response = self.get_json('/chassis/%s/nodes' % non_existent_uuid,
                                 expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)


class TestPatch(test_api_base.BaseApiTest):

    def setUp(self):
        super(TestPatch, self).setUp()
        obj_utils.create_test_chassis(self.context)

    def test_update_not_found(self):
        uuid = uuidutils.generate_uuid()
        response = self.patch_json('/chassis/%s' % uuid,
                                   [{'path': '/extra/a', 'value': 'b',
                                     'op': 'add'}],
                                   expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])

    @mock.patch.object(timeutils, 'utcnow')
    def test_replace_singular(self, mock_utcnow):
        chassis = obj_utils.get_test_chassis(self.context)
        description = 'chassis-new-description'
        test_time = datetime.datetime(2000, 1, 1, 0, 0)

        mock_utcnow.return_value = test_time
        response = self.patch_json('/chassis/%s' % chassis.uuid,
                                   [{'path': '/description',
                                     'value': description, 'op': 'replace'}])
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        result = self.get_json('/chassis/%s' % chassis.uuid)
        self.assertEqual(description, result['description'])
        return_updated_at = timeutils.parse_isotime(
            result['updated_at']).replace(tzinfo=None)
        self.assertEqual(test_time, return_updated_at)

    def test_replace_multi(self):
        extra = {"foo1": "bar1", "foo2": "bar2", "foo3": "bar3"}
        chassis = obj_utils.create_test_chassis(self.context, extra=extra,
                                                uuid=uuidutils.generate_uuid())
        new_value = 'new value'
        response = self.patch_json('/chassis/%s' % chassis.uuid,
                                   [{'path': '/extra/foo2',
                                     'value': new_value, 'op': 'replace'}])
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        result = self.get_json('/chassis/%s' % chassis.uuid)

        extra["foo2"] = new_value
        self.assertEqual(extra, result['extra'])

    def test_remove_singular(self):
        chassis = obj_utils.create_test_chassis(self.context, extra={'a': 'b'},
                                                uuid=uuidutils.generate_uuid())
        response = self.patch_json('/chassis/%s' % chassis.uuid,
                                   [{'path': '/description', 'op': 'remove'}])
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        result = self.get_json('/chassis/%s' % chassis.uuid)
        self.assertIsNone(result['description'])

        # Assert nothing else was changed
        self.assertEqual(chassis.uuid, result['uuid'])
        self.assertEqual(chassis.extra, result['extra'])

    def test_remove_multi(self):
        extra = {"foo1": "bar1", "foo2": "bar2", "foo3": "bar3"}
        chassis = obj_utils.create_test_chassis(self.context, extra=extra,
                                                description="foobar",
                                                uuid=uuidutils.generate_uuid())

        # Removing one item from the collection
        response = self.patch_json('/chassis/%s' % chassis.uuid,
                                   [{'path': '/extra/foo2', 'op': 'remove'}])
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        result = self.get_json('/chassis/%s' % chassis.uuid)
        extra.pop("foo2")
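        # Only the removed key should be gone; the remaining 'extra'
        # entries are expected back unchanged.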
        self.assertEqual(extra, result['extra'])

        # Removing the collection
        response = self.patch_json('/chassis/%s' % chassis.uuid,
                                   [{'path': '/extra', 'op': 'remove'}])
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        result = self.get_json('/chassis/%s' % chassis.uuid)
        self.assertEqual({}, result['extra'])

        # Assert nothing else was changed
        self.assertEqual(chassis.uuid, result['uuid'])
        self.assertEqual(chassis.description, result['description'])

    def test_remove_non_existent_property_fail(self):
        chassis = obj_utils.get_test_chassis(self.context)
        response = self.patch_json(
            '/chassis/%s' % chassis.uuid,
            [{'path': '/extra/non-existent', 'op': 'remove'}],
            expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertTrue(response.json['error_message'])

    def test_add_root(self):
        chassis = obj_utils.get_test_chassis(self.context)
        response = self.patch_json('/chassis/%s' % chassis.uuid,
                                   [{'path': '/description', 'value': 'test',
                                     'op': 'add'}])
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_int)

    def test_add_root_non_existent(self):
        chassis = obj_utils.get_test_chassis(self.context)
        response = self.patch_json('/chassis/%s' % chassis.uuid,
                                   [{'path': '/foo', 'value': 'bar',
                                     'op': 'add'}],
                                   expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertTrue(response.json['error_message'])

    def test_add_multi(self):
        chassis = obj_utils.get_test_chassis(self.context)
        response = self.patch_json('/chassis/%s' % chassis.uuid,
                                   [{'path': '/extra/foo1', 'value': 'bar1',
                                     'op': 'add'},
                                    {'path': '/extra/foo2', 'value': 'bar2',
                                     'op': 'add'}])
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        result = self.get_json('/chassis/%s' % chassis.uuid)
        expected = {"foo1": "bar1", "foo2": "bar2"}
        self.assertEqual(expected, result['extra'])

    def test_patch_nodes_subresource(self):
        chassis = obj_utils.get_test_chassis(self.context)
        response = self.patch_json('/chassis/%s/nodes' % chassis.uuid,
                                   [{'path': '/extra/foo', 'value': 'bar',
                                     'op': 'add'}], expect_errors=True)
        self.assertEqual(http_client.FORBIDDEN, response.status_int)

    def test_remove_uuid(self):
        chassis = obj_utils.get_test_chassis(self.context)
        response = self.patch_json('/chassis/%s' % chassis.uuid,
                                   [{'path': '/uuid', 'op': 'remove'}],
                                   expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])


class TestPost(test_api_base.BaseApiTest):

    @mock.patch.object(timeutils, 'utcnow')
    def test_create_chassis(self, mock_utcnow):
        cdict = apiutils.chassis_post_data()
        test_time = datetime.datetime(2000, 1, 1, 0, 0)
        mock_utcnow.return_value = test_time
        response = self.post_json('/chassis', cdict)
        self.assertEqual(http_client.CREATED, response.status_int)
        result = self.get_json('/chassis/%s' % cdict['uuid'])
        self.assertEqual(cdict['uuid'], result['uuid'])
        self.assertFalse(result['updated_at'])
        return_created_at = timeutils.parse_isotime(
            result['created_at']).replace(tzinfo=None)
        self.assertEqual(test_time, return_created_at)
        # Check location header
        self.assertIsNotNone(response.location)
        expected_location = '/v1/chassis/%s' % cdict['uuid']
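        # The Location header on a successful create should point at the
        # new chassis resource under /v1.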
        self.assertEqual(urlparse.urlparse(response.location).path,
                         expected_location)

    def test_create_chassis_doesnt_contain_id(self):
        with mock.patch.object(self.dbapi, 'create_chassis',
                               wraps=self.dbapi.create_chassis) as cc_mock:
            cdict = apiutils.chassis_post_data(extra={'foo': 123})
            self.post_json('/chassis', cdict)
            result = self.get_json('/chassis/%s' % cdict['uuid'])
            self.assertEqual(cdict['extra'], result['extra'])
            cc_mock.assert_called_once_with(mock.ANY)
            # Check that 'id' is not in first arg of positional args
            self.assertNotIn('id', cc_mock.call_args[0][0])

    def test_create_chassis_generate_uuid(self):
        cdict = apiutils.chassis_post_data()
        del cdict['uuid']
        self.post_json('/chassis', cdict)
        result = self.get_json('/chassis')
        self.assertEqual(cdict['description'],
                         result['chassis'][0]['description'])
        self.assertTrue(uuidutils.is_uuid_like(result['chassis'][0]['uuid']))

    def test_post_nodes_subresource(self):
        chassis = obj_utils.create_test_chassis(self.context)
        ndict = apiutils.node_post_data()
        ndict['chassis_uuid'] = chassis.uuid
        response = self.post_json('/chassis/nodes', ndict,
                                  expect_errors=True)
        self.assertEqual(http_client.FORBIDDEN, response.status_int)

    def test_create_chassis_valid_extra(self):
        cdict = apiutils.chassis_post_data(extra={'str': 'foo', 'int': 123,
                                                  'float': 0.1, 'bool': True,
                                                  'list': [1, 2],
                                                  'none': None,
                                                  'dict': {'cat': 'meow'}})
        self.post_json('/chassis', cdict)
        result = self.get_json('/chassis/%s' % cdict['uuid'])
        self.assertEqual(cdict['extra'], result['extra'])

    def test_create_chassis_unicode_description(self):
        descr = u'\u0430\u043c\u043e'
        cdict = apiutils.chassis_post_data(description=descr)
        self.post_json('/chassis', cdict)
        result = self.get_json('/chassis/%s' % cdict['uuid'])
        self.assertEqual(descr, result['description'])


class TestDelete(test_api_base.BaseApiTest):

    def test_delete_chassis(self):
        chassis = obj_utils.create_test_chassis(self.context)
        self.delete('/chassis/%s' % chassis.uuid)
        response = self.get_json('/chassis/%s' % chassis.uuid,
                                 expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])

    def test_delete_chassis_with_node(self):
        chassis = obj_utils.create_test_chassis(self.context)
        obj_utils.create_test_node(self.context, chassis_id=chassis.id)
        response = self.delete('/chassis/%s' % chassis.uuid,
                               expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])
        self.assertIn(chassis.uuid, response.json['error_message'])

    def test_delete_chassis_not_found(self):
        uuid = uuidutils.generate_uuid()
        response = self.delete('/chassis/%s' % uuid, expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])

    def test_delete_nodes_subresource(self):
        chassis = obj_utils.create_test_chassis(self.context)
        response = self.delete('/chassis/%s/nodes' % chassis.uuid,
                               expect_errors=True)
        self.assertEqual(http_client.FORBIDDEN, response.status_int)
ironic-5.1.0/ironic/tests/unit/api/v1/test_nodes.py0000664000567000056710000034711112674513466023406 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Tests for the API /nodes/ methods.
"""

import datetime
import json

import mock
from oslo_config import cfg
from oslo_utils import timeutils
from oslo_utils import uuidutils
import six
from six.moves import http_client
from six.moves.urllib import parse as urlparse
from testtools.matchers import HasLength
from wsme import types as wtypes

from ironic.api.controllers import base as api_base
from ironic.api.controllers import v1 as api_v1
from ironic.api.controllers.v1 import node as api_node
from ironic.api.controllers.v1 import utils as api_utils
from ironic.common import boot_devices
from ironic.common import exception
from ironic.common import states
from ironic.conductor import rpcapi
from ironic import objects
from ironic.tests import base
from ironic.tests.unit.api import base as test_api_base
from ironic.tests.unit.api import utils as test_api_utils
from ironic.tests.unit.objects import utils as obj_utils


class TestNodeObject(base.TestCase):

    def test_node_init(self):
        node_dict = test_api_utils.node_post_data()
        del node_dict['instance_uuid']
        node = api_node.Node(**node_dict)
        self.assertEqual(wtypes.Unset, node.instance_uuid)


class TestListNodes(test_api_base.BaseApiTest):

    def setUp(self):
        super(TestListNodes, self).setUp()
        self.chassis = obj_utils.create_test_chassis(self.context)
        p = mock.patch.object(rpcapi.ConductorAPI, 'get_topic_for')
        self.mock_gtf = p.start()
        self.mock_gtf.return_value = 'test-topic'
        self.addCleanup(p.stop)

    def _create_association_test_nodes(self):
        # create some unassociated nodes
        unassociated_nodes = []
        for id in range(3):
            node = obj_utils.create_test_node(self.context,
                                              uuid=uuidutils.generate_uuid())
            unassociated_nodes.append(node.uuid)

        # create some associated nodes
        associated_nodes = []
        for id in range(4):
            node = obj_utils.create_test_node(
                self.context, uuid=uuidutils.generate_uuid(),
                instance_uuid=uuidutils.generate_uuid())
            associated_nodes.append(node.uuid)
        return {'associated': associated_nodes,
                'unassociated': unassociated_nodes}

    def test_empty(self):
        data = self.get_json('/nodes')
        self.assertEqual([], data['nodes'])

    def test_one(self):
        node = obj_utils.create_test_node(self.context,
                                          chassis_id=self.chassis.id)
        data = self.get_json(
            '/nodes',
            headers={api_base.Version.string: str(api_v1.MAX_VER)})
        self.assertIn('instance_uuid', data['nodes'][0])
        self.assertIn('maintenance', data['nodes'][0])
        self.assertIn('power_state', data['nodes'][0])
        self.assertIn('provision_state', data['nodes'][0])
        self.assertIn('uuid', data['nodes'][0])
        self.assertEqual(node.uuid, data['nodes'][0]["uuid"])
        self.assertNotIn('driver', data['nodes'][0])
        self.assertNotIn('driver_info', data['nodes'][0])
        self.assertNotIn('driver_internal_info', data['nodes'][0])
        self.assertNotIn('extra', data['nodes'][0])
        self.assertNotIn('properties', data['nodes'][0])
        self.assertNotIn('chassis_uuid', data['nodes'][0])
        self.assertNotIn('reservation', data['nodes'][0])
        self.assertNotIn('console_enabled', data['nodes'][0])
        self.assertNotIn('target_power_state', data['nodes'][0])
        self.assertNotIn('target_provision_state', data['nodes'][0])
        self.assertNotIn('provision_updated_at', data['nodes'][0])
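        # Detail-only (and newer-microversion) fields are likewise expected
        # to stay hidden in the default, non-detail node listing: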
        self.assertNotIn('maintenance_reason', data['nodes'][0])
        self.assertNotIn('clean_step', data['nodes'][0])
        self.assertNotIn('raid_config', data['nodes'][0])
        self.assertNotIn('target_raid_config', data['nodes'][0])
        # never expose the chassis_id
        self.assertNotIn('chassis_id', data['nodes'][0])

    def test_get_one(self):
        node = obj_utils.create_test_node(self.context,
                                          chassis_id=self.chassis.id)
        data = self.get_json(
            '/nodes/%s' % node.uuid,
            headers={api_base.Version.string: str(api_v1.MAX_VER)})
        self.assertEqual(node.uuid, data['uuid'])
        self.assertIn('driver', data)
        self.assertIn('driver_info', data)
        self.assertEqual('******', data['driver_info']['fake_password'])
        self.assertEqual('bar', data['driver_info']['foo'])
        self.assertIn('driver_internal_info', data)
        self.assertIn('extra', data)
        self.assertIn('properties', data)
        self.assertIn('chassis_uuid', data)
        self.assertIn('reservation', data)
        self.assertIn('maintenance_reason', data)
        self.assertIn('name', data)
        self.assertIn('inspection_finished_at', data)
        self.assertIn('inspection_started_at', data)
        self.assertIn('clean_step', data)
        self.assertIn('states', data)
        # never expose the chassis_id
        self.assertNotIn('chassis_id', data)

    def test_node_states_field_hidden_in_lower_version(self):
        node = obj_utils.create_test_node(self.context,
                                          chassis_id=self.chassis.id)
        data = self.get_json(
            '/nodes/%s' % node.uuid,
            headers={api_base.Version.string: '1.8'})
        self.assertNotIn('states', data)

    def test_get_one_custom_fields(self):
        node = obj_utils.create_test_node(self.context,
                                          chassis_id=self.chassis.id)
        fields = 'extra,instance_info'
        data = self.get_json(
            '/nodes/%s?fields=%s' % (node.uuid, fields),
            headers={api_base.Version.string: str(api_v1.MAX_VER)})
        # We always append "links"
        self.assertItemsEqual(['extra', 'instance_info', 'links'], data)

    def test_get_collection_custom_fields(self):
        fields = 'uuid,instance_info'
        for i in range(3):
            obj_utils.create_test_node(
                self.context,
                uuid=uuidutils.generate_uuid(),
                instance_uuid=uuidutils.generate_uuid())

        data = self.get_json(
            '/nodes?fields=%s' % fields,
            headers={api_base.Version.string: str(api_v1.MAX_VER)})

        self.assertEqual(3, len(data['nodes']))
        for node in data['nodes']:
            # We always append "links"
            self.assertItemsEqual(['uuid', 'instance_info', 'links'], node)

    def test_get_custom_fields_invalid_fields(self):
        node = obj_utils.create_test_node(self.context,
                                          chassis_id=self.chassis.id)
        fields = 'uuid,spongebob'
        response = self.get_json(
            '/nodes/%s?fields=%s' % (node.uuid, fields),
            headers={api_base.Version.string: str(api_v1.MAX_VER)},
            expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertIn('spongebob', response.json['error_message'])

    def test_get_custom_fields_invalid_api_version(self):
        node = obj_utils.create_test_node(self.context,
                                          chassis_id=self.chassis.id)
        fields = 'uuid,extra'
        response = self.get_json(
            '/nodes/%s?fields=%s' % (node.uuid, fields),
            headers={api_base.Version.string: str(api_v1.MIN_VER)},
            expect_errors=True)
        self.assertEqual(http_client.NOT_ACCEPTABLE, response.status_int)

    def test_get_one_custom_fields_show_password(self):
        node = obj_utils.create_test_node(self.context,
                                          chassis_id=self.chassis.id,
                                          driver_info={'fake_password':
                                                       'bar'})
        fields = 'driver_info'
        data = self.get_json(
            '/nodes/%s?fields=%s' % (node.uuid, fields),
            headers={api_base.Version.string: str(api_v1.MAX_VER)})
        # We always append "links"
        self.assertItemsEqual(['driver_info', 'links'], data)
        self.assertEqual('******', data['driver_info']['fake_password'])

    def test_detail(self):
        node = obj_utils.create_test_node(self.context,
                                          chassis_id=self.chassis.id)
        data = self.get_json(
            '/nodes/detail',
            headers={api_base.Version.string: str(api_v1.MAX_VER)})
        self.assertEqual(node.uuid, data['nodes'][0]["uuid"])
        self.assertIn('name', data['nodes'][0])
        self.assertIn('driver', data['nodes'][0])
        self.assertIn('driver_info', data['nodes'][0])
        self.assertIn('extra', data['nodes'][0])
        self.assertIn('properties', data['nodes'][0])
        self.assertIn('chassis_uuid', data['nodes'][0])
        self.assertIn('reservation', data['nodes'][0])
        self.assertIn('maintenance', data['nodes'][0])
        self.assertIn('console_enabled', data['nodes'][0])
        self.assertIn('target_power_state', data['nodes'][0])
        self.assertIn('target_provision_state', data['nodes'][0])
        self.assertIn('provision_updated_at', data['nodes'][0])
        self.assertIn('inspection_finished_at', data['nodes'][0])
        self.assertIn('inspection_started_at', data['nodes'][0])
        self.assertIn('raid_config', data['nodes'][0])
        self.assertIn('target_raid_config', data['nodes'][0])
        # never expose the chassis_id
        self.assertNotIn('chassis_id', data['nodes'][0])

    def test_detail_against_single(self):
        node = obj_utils.create_test_node(self.context)
        response = self.get_json('/nodes/%s/detail' % node.uuid,
                                 expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)

    def test_mask_available_state(self):
        node = obj_utils.create_test_node(self.context,
                                          provision_state=states.AVAILABLE)

        data = self.get_json(
            '/nodes/%s' % node.uuid,
            headers={api_base.Version.string: str(api_v1.MIN_VER)})
        self.assertEqual(states.NOSTATE, data['provision_state'])

        data = self.get_json('/nodes/%s' % node.uuid,
                             headers={api_base.Version.string: "1.2"})
        self.assertEqual(states.AVAILABLE, data['provision_state'])

    def test_hide_fields_in_newer_versions_driver_internal(self):
        node = obj_utils.create_test_node(self.context,
                                          driver_internal_info={"foo": "bar"})
        data = self.get_json(
            '/nodes/%s' % node.uuid,
            headers={api_base.Version.string: str(api_v1.MIN_VER)})
        self.assertNotIn('driver_internal_info', data)

        data = self.get_json('/nodes/%s' % node.uuid,
                             headers={api_base.Version.string: "1.3"})
        self.assertEqual({"foo": "bar"}, data['driver_internal_info'])

    def test_hide_fields_in_newer_versions_name(self):
        node = obj_utils.create_test_node(self.context,
                                          name="fish")
        data = self.get_json('/nodes/%s' % node.uuid,
                             headers={api_base.Version.string: "1.4"})
        self.assertNotIn('name', data)

        data = self.get_json('/nodes/%s' % node.uuid,
                             headers={api_base.Version.string: "1.5"})
        self.assertEqual('fish', data['name'])

    def test_hide_fields_in_newer_versions_inspection(self):
        some_time = datetime.datetime(2015, 3, 18, 19, 20)
        node = obj_utils.create_test_node(self.context,
                                          inspection_started_at=some_time)
        data = self.get_json(
            '/nodes/%s' % node.uuid,
            headers={api_base.Version.string: str(api_v1.MIN_VER)})
        self.assertNotIn('inspection_finished_at', data)
        self.assertNotIn('inspection_started_at', data)

        data = self.get_json('/nodes/%s' % node.uuid,
                             headers={api_base.Version.string: "1.6"})
        started = timeutils.parse_isotime(
            data['inspection_started_at']).replace(tzinfo=None)
        self.assertEqual(some_time, started)
        self.assertIsNone(data['inspection_finished_at'])

    def test_hide_fields_in_newer_versions_clean_step(self):
        node = obj_utils.create_test_node(self.context,
                                          clean_step={"foo": "bar"})
        data = self.get_json(
            '/nodes/%s' % node.uuid,
            headers={api_base.Version.string: str(api_v1.MIN_VER)})
        self.assertNotIn('clean_step', data)

        data = self.get_json('/nodes/%s' % node.uuid,
                             headers={api_base.Version.string: "1.7"})
        self.assertEqual({"foo": "bar"}, data['clean_step'])

    def test_many(self):
        nodes = []
        for id in range(5):
            node = obj_utils.create_test_node(self.context,
                                              uuid=uuidutils.generate_uuid())
            nodes.append(node.uuid)
        data = self.get_json('/nodes')
        self.assertEqual(len(nodes), len(data['nodes']))
        uuids = [n['uuid'] for n in data['nodes']]
        self.assertEqual(sorted(nodes), sorted(uuids))

    def test_many_have_names(self):
        nodes = []
        node_names = []
        for id in range(5):
            name = 'node-%s' % id
            node = obj_utils.create_test_node(self.context,
                                              uuid=uuidutils.generate_uuid(),
                                              name=name)
            nodes.append(node.uuid)
            node_names.append(name)
        data = self.get_json('/nodes',
                             headers={api_base.Version.string: "1.5"})
        names = [n['name'] for n in data['nodes']]
        self.assertEqual(len(nodes), len(data['nodes']))
        self.assertEqual(sorted(node_names), sorted(names))

    def _test_links(self, public_url=None):
        cfg.CONF.set_override('public_endpoint', public_url, 'api')
        uuid = uuidutils.generate_uuid()
        obj_utils.create_test_node(self.context, uuid=uuid)
        data = self.get_json('/nodes/%s' % uuid)
        self.assertIn('links', data.keys())
        self.assertEqual(2, len(data['links']))
        self.assertIn(uuid, data['links'][0]['href'])
        for l in data['links']:
            bookmark = l['rel'] == 'bookmark'
            self.assertTrue(self.validate_link(l['href'], bookmark=bookmark))

        if public_url is not None:
            expected = [{'href': '%s/v1/nodes/%s' % (public_url, uuid),
                         'rel': 'self'},
                        {'href': '%s/nodes/%s' % (public_url, uuid),
                         'rel': 'bookmark'}]
            for i in expected:
                self.assertIn(i, data['links'])

    def test_links(self):
        self._test_links()

    def test_links_public_url(self):
        self._test_links(public_url='http://foo')

    def test_collection_links(self):
        nodes = []
        for id in range(5):
            node = obj_utils.create_test_node(self.context,
                                              uuid=uuidutils.generate_uuid())
            nodes.append(node.uuid)
        data = self.get_json('/nodes/?limit=3')
        self.assertEqual(3, len(data['nodes']))

        next_marker = data['nodes'][-1]['uuid']
        self.assertIn(next_marker, data['next'])

    def test_collection_links_default_limit(self):
        cfg.CONF.set_override('max_limit', 3, 'api')
        nodes = []
        for id in range(5):
            node = obj_utils.create_test_node(self.context,
                                              uuid=uuidutils.generate_uuid())
            nodes.append(node.uuid)
        data = self.get_json('/nodes')
        self.assertEqual(3, len(data['nodes']))

        next_marker = data['nodes'][-1]['uuid']
        self.assertIn(next_marker, data['next'])

    def test_sort_key(self):
        nodes = []
        for id in range(3):
            node = obj_utils.create_test_node(self.context,
                                              uuid=uuidutils.generate_uuid())
            nodes.append(node.uuid)
        data = self.get_json('/nodes?sort_key=uuid')
        uuids = [n['uuid'] for n in data['nodes']]
        self.assertEqual(sorted(nodes), uuids)

    def test_sort_key_invalid(self):
        invalid_keys_list = ['foo', 'properties', 'driver_info', 'extra',
                             'instance_info', 'driver_internal_info',
                             'clean_step']
        for invalid_key in invalid_keys_list:
            response = self.get_json('/nodes?sort_key=%s' % invalid_key,
                                     expect_errors=True)
            self.assertEqual(http_client.BAD_REQUEST, response.status_int)
            self.assertEqual('application/json', response.content_type)
            self.assertIn(invalid_key, response.json['error_message'])

    def test_ports_subresource_link(self):
        node = obj_utils.create_test_node(self.context)
        data = self.get_json('/nodes/%s' % node.uuid)
        self.assertIn('ports', data.keys())

    def test_ports_subresource(self):
        node = obj_utils.create_test_node(self.context)

        for id_ in range(2):
            obj_utils.create_test_port(self.context, node_id=node.id,
                                       uuid=uuidutils.generate_uuid(),
address='52:54:00:cf:2d:3%s' % id_) data = self.get_json('/nodes/%s/ports' % node.uuid) self.assertEqual(2, len(data['ports'])) self.assertNotIn('next', data.keys()) # Test collection pagination data = self.get_json('/nodes/%s/ports?limit=1' % node.uuid) self.assertEqual(1, len(data['ports'])) self.assertIn('next', data.keys()) def test_ports_subresource_noid(self): node = obj_utils.create_test_node(self.context) obj_utils.create_test_port(self.context, node_id=node.id) # No node id specified response = self.get_json('/nodes/ports', expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) def test_ports_subresource_node_not_found(self): non_existent_uuid = 'eeeeeeee-cccc-aaaa-bbbb-cccccccccccc' response = self.get_json('/nodes/%s/ports' % non_existent_uuid, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) @mock.patch.object(timeutils, 'utcnow') def _test_node_states(self, mock_utcnow, api_version=None): fake_state = 'fake-state' fake_error = 'fake-error' fake_config = '{"foo": "bar"}' test_time = datetime.datetime(2000, 1, 1, 0, 0) mock_utcnow.return_value = test_time node = obj_utils.create_test_node(self.context, power_state=fake_state, target_power_state=fake_state, provision_state=fake_state, target_provision_state=fake_state, provision_updated_at=test_time, raid_config=fake_config, target_raid_config=fake_config, last_error=fake_error) headers = {} if api_version: headers = {api_base.Version.string: api_version} data = self.get_json('/nodes/%s/states' % node.uuid, headers=headers) self.assertEqual(fake_state, data['power_state']) self.assertEqual(fake_state, data['target_power_state']) self.assertEqual(fake_state, data['provision_state']) self.assertEqual(fake_state, data['target_provision_state']) prov_up_at = timeutils.parse_isotime( data['provision_updated_at']).replace(tzinfo=None) self.assertEqual(test_time, prov_up_at) self.assertEqual(fake_error, data['last_error']) self.assertFalse(data['console_enabled']) return data def test_node_states(self): self._test_node_states() def test_node_states_raid(self): data = self._test_node_states(api_version="1.12") self.assertEqual({'foo': 'bar'}, data['raid_config']) self.assertEqual({'foo': 'bar'}, data['target_raid_config']) @mock.patch.object(timeutils, 'utcnow') def test_node_states_by_name(self, mock_utcnow): fake_state = 'fake-state' fake_error = 'fake-error' test_time = datetime.datetime(1971, 3, 9, 0, 0) mock_utcnow.return_value = test_time node = obj_utils.create_test_node(self.context, name='eggs', power_state=fake_state, target_power_state=fake_state, provision_state=fake_state, target_provision_state=fake_state, provision_updated_at=test_time, last_error=fake_error) data = self.get_json('/nodes/%s/states' % node.name, headers={api_base.Version.string: "1.5"}) self.assertEqual(fake_state, data['power_state']) self.assertEqual(fake_state, data['target_power_state']) self.assertEqual(fake_state, data['provision_state']) self.assertEqual(fake_state, data['target_provision_state']) prov_up_at = timeutils.parse_isotime( data['provision_updated_at']).replace(tzinfo=None) self.assertEqual(test_time, prov_up_at) self.assertEqual(fake_error, data['last_error']) self.assertFalse(data['console_enabled']) def test_node_by_instance_uuid(self): node = obj_utils.create_test_node( self.context, uuid=uuidutils.generate_uuid(), instance_uuid=uuidutils.generate_uuid()) instance_uuid = node.instance_uuid data = self.get_json('/nodes?instance_uuid=%s' % instance_uuid, 
headers={api_base.Version.string: "1.5"}) self.assertThat(data['nodes'], HasLength(1)) self.assertEqual(node['instance_uuid'], data['nodes'][0]["instance_uuid"]) def test_node_by_instance_uuid_wrong_uuid(self): obj_utils.create_test_node( self.context, uuid=uuidutils.generate_uuid(), instance_uuid=uuidutils.generate_uuid()) wrong_uuid = uuidutils.generate_uuid() data = self.get_json('/nodes?instance_uuid=%s' % wrong_uuid) self.assertThat(data['nodes'], HasLength(0)) def test_node_by_instance_uuid_invalid_uuid(self): response = self.get_json('/nodes?instance_uuid=fake', expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) def test_associated_nodes_insensitive(self): associated_nodes = (self ._create_association_test_nodes() .get('associated')) data = self.get_json('/nodes?associated=true') data1 = self.get_json('/nodes?associated=True') uuids = [n['uuid'] for n in data['nodes']] uuids1 = [n['uuid'] for n in data1['nodes']] self.assertEqual(sorted(associated_nodes), sorted(uuids1)) self.assertEqual(sorted(associated_nodes), sorted(uuids)) def test_associated_nodes_error(self): self._create_association_test_nodes() response = self.get_json('/nodes?associated=blah', expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) def test_unassociated_nodes_insensitive(self): unassociated_nodes = (self ._create_association_test_nodes() .get('unassociated')) data = self.get_json('/nodes?associated=false') data1 = self.get_json('/nodes?associated=FALSE') uuids = [n['uuid'] for n in data['nodes']] uuids1 = [n['uuid'] for n in data1['nodes']] self.assertEqual(sorted(unassociated_nodes), sorted(uuids1)) self.assertEqual(sorted(unassociated_nodes), sorted(uuids)) def test_unassociated_nodes_with_limit(self): unassociated_nodes = (self ._create_association_test_nodes() .get('unassociated')) data = self.get_json('/nodes?associated=False&limit=2') self.assertThat(data['nodes'], HasLength(2)) self.assertTrue(data['nodes'][0]['uuid'] in unassociated_nodes) def test_next_link_with_association(self): self._create_association_test_nodes() data = self.get_json('/nodes/?limit=3&associated=True') self.assertThat(data['nodes'], HasLength(3)) self.assertIn('associated=True', data['next']) def test_detail_with_association_filter(self): associated_nodes = (self ._create_association_test_nodes() .get('associated')) data = self.get_json('/nodes/detail?associated=true') self.assertIn('driver', data['nodes'][0]) self.assertEqual(len(associated_nodes), len(data['nodes'])) def test_next_link_with_association_with_detail(self): self._create_association_test_nodes() data = self.get_json('/nodes/detail?limit=3&associated=true') self.assertThat(data['nodes'], HasLength(3)) self.assertIn('driver', data['nodes'][0]) self.assertIn('associated=True', data['next']) def test_detail_with_instance_uuid(self): node = obj_utils.create_test_node( self.context, uuid=uuidutils.generate_uuid(), instance_uuid=uuidutils.generate_uuid(), chassis_id=self.chassis.id) instance_uuid = node.instance_uuid data = self.get_json('/nodes/detail?instance_uuid=%s' % instance_uuid) self.assertEqual(node['instance_uuid'], data['nodes'][0]["instance_uuid"]) self.assertIn('driver', data['nodes'][0]) self.assertIn('driver_info', data['nodes'][0]) self.assertIn('extra', data['nodes'][0]) self.assertIn('properties', data['nodes'][0]) 
self.assertIn('chassis_uuid', data['nodes'][0]) # never expose the chassis_id self.assertNotIn('chassis_id', data['nodes'][0]) def test_maintenance_nodes(self): nodes = [] for id in range(5): node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), maintenance=id % 2) nodes.append(node) data = self.get_json('/nodes?maintenance=true') uuids = [n['uuid'] for n in data['nodes']] test_uuids_1 = [n.uuid for n in nodes if n.maintenance] self.assertEqual(sorted(test_uuids_1), sorted(uuids)) data = self.get_json('/nodes?maintenance=false') uuids = [n['uuid'] for n in data['nodes']] test_uuids_0 = [n.uuid for n in nodes if not n.maintenance] self.assertEqual(sorted(test_uuids_0), sorted(uuids)) def test_maintenance_nodes_error(self): response = self.get_json('/nodes?associated=true&maintenance=blah', expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) def test_maintenance_nodes_associated(self): self._create_association_test_nodes() node = obj_utils.create_test_node( self.context, instance_uuid=uuidutils.generate_uuid(), maintenance=True) data = self.get_json('/nodes?associated=true&maintenance=false') uuids = [n['uuid'] for n in data['nodes']] self.assertNotIn(node.uuid, uuids) data = self.get_json('/nodes?associated=true&maintenance=true') uuids = [n['uuid'] for n in data['nodes']] self.assertIn(node.uuid, uuids) data = self.get_json('/nodes?associated=true&maintenance=TruE') uuids = [n['uuid'] for n in data['nodes']] self.assertIn(node.uuid, uuids) def test_get_nodes_by_provision_state(self): node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), provision_state=states.AVAILABLE) node1 = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), provision_state=states.DEPLOYING) data = self.get_json('/nodes?provision_state=available', headers={api_base.Version.string: "1.9"}) uuids = [n['uuid'] for n in data['nodes']] self.assertIn(node.uuid, uuids) self.assertNotIn(node1.uuid, uuids) data = self.get_json('/nodes?provision_state=deploying', headers={api_base.Version.string: "1.9"}) uuids = [n['uuid'] for n in data['nodes']] self.assertIn(node1.uuid, uuids) self.assertNotIn(node.uuid, uuids) def test_get_nodes_by_invalid_provision_state(self): response = self.get_json('/nodes?provision_state=test', headers={api_base.Version.string: "1.9"}, expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) def test_get_nodes_by_provision_state_not_allowed(self): response = self.get_json('/nodes?provision_state=test', headers={api_base.Version.string: "1.8"}, expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.NOT_ACCEPTABLE, response.status_code) self.assertTrue(response.json['error_message']) def test_get_nodes_by_driver(self): node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='pxe_ssh') node1 = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake') data = self.get_json('/nodes?driver=pxe_ssh', headers={api_base.Version.string: "1.16"}) uuids = [n['uuid'] for n in data['nodes']] self.assertIn(node.uuid, uuids) self.assertNotIn(node1.uuid, uuids) data = self.get_json('/nodes?driver=fake', headers={api_base.Version.string: "1.16"}) uuids = [n['uuid'] 
for n in data['nodes']] self.assertIn(node1.uuid, uuids) self.assertNotIn(node.uuid, uuids) def test_get_nodes_by_invalid_driver(self): data = self.get_json('/nodes?driver=test', headers={api_base.Version.string: "1.16"}) self.assertEqual(0, len(data['nodes'])) def test_get_nodes_by_driver_invalid_api_version(self): response = self.get_json( '/nodes?driver=fake', headers={api_base.Version.string: str(api_v1.MIN_VER)}, expect_errors=True) self.assertEqual(http_client.NOT_ACCEPTABLE, response.status_code) self.assertTrue(response.json['error_message']) def test_get_console_information(self): node = obj_utils.create_test_node(self.context) expected_console_info = {'test': 'test-data'} expected_data = {'console_enabled': True, 'console_info': expected_console_info} with mock.patch.object(rpcapi.ConductorAPI, 'get_console_information') as mock_gci: mock_gci.return_value = expected_console_info data = self.get_json('/nodes/%s/states/console' % node.uuid) self.assertEqual(expected_data, data) mock_gci.assert_called_once_with(mock.ANY, node.uuid, 'test-topic') @mock.patch.object(rpcapi.ConductorAPI, 'get_console_information') def test_get_console_information_by_name(self, mock_gci): node = obj_utils.create_test_node(self.context, name='spam') expected_console_info = {'test': 'test-data'} expected_data = {'console_enabled': True, 'console_info': expected_console_info} mock_gci.return_value = expected_console_info data = self.get_json('/nodes/%s/states/console' % node.name, headers={api_base.Version.string: "1.5"}) self.assertEqual(expected_data, data) mock_gci.assert_called_once_with(mock.ANY, node.uuid, 'test-topic') def test_get_console_information_console_disabled(self): node = obj_utils.create_test_node(self.context) expected_data = {'console_enabled': False, 'console_info': None} with mock.patch.object(rpcapi.ConductorAPI, 'get_console_information') as mock_gci: mock_gci.side_effect = ( exception.NodeConsoleNotEnabled(node=node.uuid)) data = self.get_json('/nodes/%s/states/console' % node.uuid) self.assertEqual(expected_data, data) mock_gci.assert_called_once_with(mock.ANY, node.uuid, 'test-topic') def test_get_console_information_not_supported(self): node = obj_utils.create_test_node(self.context) with mock.patch.object(rpcapi.ConductorAPI, 'get_console_information') as mock_gci: mock_gci.side_effect = exception.UnsupportedDriverExtension( extension='console', driver='test-driver') ret = self.get_json('/nodes/%s/states/console' % node.uuid, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) mock_gci.assert_called_once_with(mock.ANY, node.uuid, 'test-topic') @mock.patch.object(rpcapi.ConductorAPI, 'get_boot_device') def test_get_boot_device(self, mock_gbd): node = obj_utils.create_test_node(self.context) expected_data = {'boot_device': boot_devices.PXE, 'persistent': True} mock_gbd.return_value = expected_data data = self.get_json('/nodes/%s/management/boot_device' % node.uuid) self.assertEqual(expected_data, data) mock_gbd.assert_called_once_with(mock.ANY, node.uuid, 'test-topic') @mock.patch.object(rpcapi.ConductorAPI, 'get_boot_device') def test_get_boot_device_by_name(self, mock_gbd): node = obj_utils.create_test_node(self.context, name='spam') expected_data = {'boot_device': boot_devices.PXE, 'persistent': True} mock_gbd.return_value = expected_data data = self.get_json('/nodes/%s/management/boot_device' % node.name, headers={api_base.Version.string: "1.5"}) self.assertEqual(expected_data, data) mock_gbd.assert_called_once_with(mock.ANY, node.uuid, 
'test-topic') @mock.patch.object(rpcapi.ConductorAPI, 'get_boot_device') def test_get_boot_device_iface_not_supported(self, mock_gbd): node = obj_utils.create_test_node(self.context) mock_gbd.side_effect = exception.UnsupportedDriverExtension( extension='management', driver='test-driver') ret = self.get_json('/nodes/%s/management/boot_device' % node.uuid, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) self.assertTrue(ret.json['error_message']) mock_gbd.assert_called_once_with(mock.ANY, node.uuid, 'test-topic') @mock.patch.object(rpcapi.ConductorAPI, 'get_supported_boot_devices') def test_get_supported_boot_devices(self, mock_gsbd): mock_gsbd.return_value = [boot_devices.PXE] node = obj_utils.create_test_node(self.context) data = self.get_json('/nodes/%s/management/boot_device/supported' % node.uuid) expected_data = {'supported_boot_devices': [boot_devices.PXE]} self.assertEqual(expected_data, data) mock_gsbd.assert_called_once_with(mock.ANY, node.uuid, 'test-topic') @mock.patch.object(rpcapi.ConductorAPI, 'get_supported_boot_devices') def test_get_supported_boot_devices_by_name(self, mock_gsbd): mock_gsbd.return_value = [boot_devices.PXE] node = obj_utils.create_test_node(self.context, name='spam') data = self.get_json( '/nodes/%s/management/boot_device/supported' % node.name, headers={api_base.Version.string: "1.5"}) expected_data = {'supported_boot_devices': [boot_devices.PXE]} self.assertEqual(expected_data, data) mock_gsbd.assert_called_once_with(mock.ANY, node.uuid, 'test-topic') @mock.patch.object(rpcapi.ConductorAPI, 'get_supported_boot_devices') def test_get_supported_boot_devices_iface_not_supported(self, mock_gsbd): node = obj_utils.create_test_node(self.context) mock_gsbd.side_effect = exception.UnsupportedDriverExtension( extension='management', driver='test-driver') ret = self.get_json('/nodes/%s/management/boot_device/supported' % node.uuid, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) self.assertTrue(ret.json['error_message']) mock_gsbd.assert_called_once_with(mock.ANY, node.uuid, 'test-topic') @mock.patch.object(rpcapi.ConductorAPI, 'validate_driver_interfaces') def test_validate_by_uuid_using_deprecated_interface(self, mock_vdi): # Note(mrda): The 'node_uuid' interface is deprecated in favour # of the 'node' interface node = obj_utils.create_test_node(self.context) self.get_json('/nodes/validate?node_uuid=%s' % node.uuid) mock_vdi.assert_called_once_with(mock.ANY, node.uuid, 'test-topic') @mock.patch.object(rpcapi.ConductorAPI, 'validate_driver_interfaces') def test_validate_by_uuid(self, mock_vdi): node = obj_utils.create_test_node(self.context) self.get_json('/nodes/validate?node=%s' % node.uuid, headers={api_base.Version.string: "1.5"}) mock_vdi.assert_called_once_with(mock.ANY, node.uuid, 'test-topic') @mock.patch.object(rpcapi.ConductorAPI, 'validate_driver_interfaces') def test_validate_by_name_unsupported(self, mock_vdi): node = obj_utils.create_test_node(self.context, name='spam') ret = self.get_json('/nodes/validate?node=%s' % node.name, expect_errors=True) self.assertEqual(http_client.NOT_ACCEPTABLE, ret.status_code) self.assertFalse(mock_vdi.called) @mock.patch.object(rpcapi.ConductorAPI, 'validate_driver_interfaces') def test_validate_by_name(self, mock_vdi): node = obj_utils.create_test_node(self.context, name='spam') self.get_json('/nodes/validate?node=%s' % node.name, headers={api_base.Version.string: "1.5"}) # note that this should be node.uuid here as we get that from the # rpc_node lookup 
        # and pass that downwards
        mock_vdi.assert_called_once_with(mock.ANY, node.uuid, 'test-topic')


class TestPatch(test_api_base.BaseApiTest):

    def setUp(self):
        super(TestPatch, self).setUp()
        self.chassis = obj_utils.create_test_chassis(self.context)
        self.node = obj_utils.create_test_node(self.context, name='node-57',
                                               chassis_id=self.chassis.id)
        self.node_no_name = obj_utils.create_test_node(
            self.context,
            uuid='deadbeef-0000-1111-2222-333333333333',
            chassis_id=self.chassis.id)
        p = mock.patch.object(rpcapi.ConductorAPI, 'get_topic_for')
        self.mock_gtf = p.start()
        self.mock_gtf.return_value = 'test-topic'
        self.addCleanup(p.stop)
        p = mock.patch.object(rpcapi.ConductorAPI, 'update_node')
        self.mock_update_node = p.start()
        self.addCleanup(p.stop)
        p = mock.patch.object(rpcapi.ConductorAPI, 'change_node_power_state')
        self.mock_cnps = p.start()
        self.addCleanup(p.stop)

    def test_update_ok(self):
        self.mock_update_node.return_value = self.node
        (self
         .mock_update_node
         .return_value
         .updated_at) = "2013-12-03T06:20:41.184720+00:00"
        response = self.patch_json('/nodes/%s' % self.node.uuid,
                                   [{'path': '/instance_uuid',
                                     'value':
                                     'aaaaaaaa-1111-bbbb-2222-cccccccccccc',
                                     'op': 'replace'}])
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        self.assertEqual(self.mock_update_node.return_value.updated_at,
                         timeutils.parse_isotime(response.json['updated_at']))
        self.mock_update_node.assert_called_once_with(
            mock.ANY, mock.ANY, 'test-topic')

    def test_update_by_name_unsupported(self):
        self.mock_update_node.return_value = self.node
        (self
         .mock_update_node
         .return_value
         .updated_at) = "2013-12-03T06:20:41.184720+00:00"
        response = self.patch_json(
            '/nodes/%s' % self.node.name,
            [{'path': '/instance_uuid',
              'value': 'aaaaaaaa-1111-bbbb-2222-cccccccccccc',
              'op': 'replace'}],
            expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_code)
        self.assertFalse(self.mock_update_node.called)

    def test_update_ok_by_name(self):
        self.mock_update_node.return_value = self.node
        (self
         .mock_update_node
         .return_value
         .updated_at) = "2013-12-03T06:20:41.184720+00:00"
        response = self.patch_json(
            '/nodes/%s' % self.node.name,
            [{'path': '/instance_uuid',
              'value': 'aaaaaaaa-1111-bbbb-2222-cccccccccccc',
              'op': 'replace'}],
            headers={api_base.Version.string: "1.5"})
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        self.assertEqual(self.mock_update_node.return_value.updated_at,
                         timeutils.parse_isotime(response.json['updated_at']))
        self.mock_update_node.assert_called_once_with(
            mock.ANY, mock.ANY, 'test-topic')

    def test_update_state(self):
        response = self.patch_json('/nodes/%s' % self.node.uuid,
                                   [{'power_state': 'new state'}],
                                   expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertTrue(response.json['error_message'])

    def test_update_fails_bad_driver_info(self):
        fake_err = 'Fake Error Message'
        self.mock_update_node.side_effect = (
            exception.InvalidParameterValue(fake_err))
        response = self.patch_json('/nodes/%s' % self.node.uuid,
                                   [{'path': '/driver_info/this',
                                     'value': 'foo',
                                     'op': 'add'},
                                    {'path': '/driver_info/that',
                                     'value': 'bar',
                                     'op': 'add'}],
                                   expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.mock_update_node.assert_called_once_with(
            mock.ANY, mock.ANY, 'test-topic')

    def test_update_fails_bad_driver(self):
self.mock_gtf.side_effect = exception.NoValidHost('Fake Error') response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/driver', 'value': 'bad-driver', 'op': 'replace'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) def test_add_ok(self): self.mock_update_node.return_value = self.node response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/extra/foo', 'value': 'bar', 'op': 'add'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.mock_update_node.assert_called_once_with( mock.ANY, mock.ANY, 'test-topic') def test_add_root(self): self.mock_update_node.return_value = self.node response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/instance_uuid', 'value': 'aaaaaaaa-1111-bbbb-2222-cccccccccccc', 'op': 'add'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.mock_update_node.assert_called_once_with( mock.ANY, mock.ANY, 'test-topic') def test_add_root_non_existent(self): response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/foo', 'value': 'bar', 'op': 'add'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) def test_remove_ok(self): self.mock_update_node.return_value = self.node response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/extra', 'op': 'remove'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.mock_update_node.assert_called_once_with( mock.ANY, mock.ANY, 'test-topic') def test_remove_non_existent_property_fail(self): response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/extra/non-existent', 'op': 'remove'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) def test_update_allowed_in_power_transition(self): node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), target_power_state=states.POWER_OFF) self.mock_update_node.return_value = node response = self.patch_json('/nodes/%s' % node.uuid, [{'path': '/extra/foo', 'value': 'bar', 'op': 'add'}]) self.assertEqual(http_client.OK, response.status_code) def test_update_allowed_in_maintenance(self): node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), target_power_state=states.POWER_OFF, maintenance=True) self.mock_update_node.return_value = node response = self.patch_json('/nodes/%s' % node.uuid, [{'path': '/instance_uuid', 'op': 'remove'}]) self.assertEqual(http_client.OK, response.status_code) def test_add_state_in_deployfail(self): node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), provision_state=states.DEPLOYFAIL, target_provision_state=states.ACTIVE) self.mock_update_node.return_value = node response = self.patch_json('/nodes/%s' % node.uuid, [{'path': '/extra/foo', 'value': 'bar', 'op': 'add'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.mock_update_node.assert_called_once_with( mock.ANY, mock.ANY, 'test-topic') def test_patch_ports_subresource(self): response = self.patch_json('/nodes/%s/ports' % 
self.node.uuid, [{'path': '/extra/foo', 'value': 'bar', 'op': 'add'}], expect_errors=True) self.assertEqual(http_client.FORBIDDEN, response.status_int) def test_remove_uuid(self): response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/uuid', 'op': 'remove'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) def test_remove_instance_uuid_clean_backward_compat(self): for state in (states.CLEANING, states.CLEANWAIT): node = obj_utils.create_test_node( self.context, uuid=uuidutils.generate_uuid(), provision_state=state, target_provision_state=states.AVAILABLE) self.mock_update_node.return_value = node response = self.patch_json('/nodes/%s' % node.uuid, [{'op': 'remove', 'path': '/instance_uuid'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) # NOTE(lucasagomes): instance_uuid is already removed as part of # node's tear down, assert update has not been called. This test # should be removed in the next cycle (Mitaka). self.assertFalse(self.mock_update_node.called) def test_add_state_in_cleaning(self): node = obj_utils.create_test_node( self.context, uuid=uuidutils.generate_uuid(), provision_state=states.CLEANING, target_provision_state=states.AVAILABLE) self.mock_update_node.return_value = node response = self.patch_json('/nodes/%s' % node.uuid, [{'path': '/extra/foo', 'value': 'bar', 'op': 'add'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.CONFLICT, response.status_code) self.assertTrue(response.json['error_message']) def test_remove_mandatory_field(self): response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/driver', 'op': 'remove'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) def test_replace_chassis_uuid(self): self.mock_update_node.return_value = self.node response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/chassis_uuid', 'value': self.chassis.uuid, 'op': 'replace'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) def test_add_chassis_uuid(self): self.mock_update_node.return_value = self.node response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/chassis_uuid', 'value': self.chassis.uuid, 'op': 'add'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) def test_add_chassis_id(self): response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/chassis_id', 'value': '1', 'op': 'add'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) def test_replace_chassis_id(self): response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/chassis_id', 'value': '1', 'op': 'replace'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) def test_remove_chassis_id(self): response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/chassis_id', 'op': 'remove'}], expect_errors=True) 
self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) def test_replace_non_existent_chassis_uuid(self): response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/chassis_uuid', 'value': 'eeeeeeee-dddd-cccc-bbbb-aaaaaaaaaaaa', 'op': 'replace'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) def test_remove_internal_field(self): response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/last_error', 'op': 'remove'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) def test_replace_internal_field(self): response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/power_state', 'op': 'replace', 'value': 'fake-state'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) def test_replace_maintenance(self): self.mock_update_node.return_value = self.node response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/maintenance', 'op': 'replace', 'value': 'true'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.mock_update_node.assert_called_once_with( mock.ANY, mock.ANY, 'test-topic') def test_replace_maintenance_by_name(self): self.mock_update_node.return_value = self.node response = self.patch_json( '/nodes/%s' % self.node.name, [{'path': '/maintenance', 'op': 'replace', 'value': 'true'}], headers={api_base.Version.string: "1.5"}) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.mock_update_node.assert_called_once_with( mock.ANY, mock.ANY, 'test-topic') def test_replace_consoled_enabled(self): response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/console_enabled', 'op': 'replace', 'value': True}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) def test_replace_provision_updated_at(self): test_time = '2000-01-01 00:00:00' response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/provision_updated_at', 'op': 'replace', 'value': test_time}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) def test_patch_add_name_ok(self): self.mock_update_node.return_value = self.node_no_name test_name = 'guido-van-rossum' response = self.patch_json('/nodes/%s' % self.node_no_name.uuid, [{'path': '/name', 'op': 'add', 'value': test_name}], headers={api_base.Version.string: "1.5"}) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) def test_patch_add_name_invalid(self): self.mock_update_node.return_value = self.node_no_name test_name = 'i am invalid' response = self.patch_json('/nodes/%s' % self.node_no_name.uuid, [{'path': '/name', 'op': 'add', 'value': test_name}], headers={api_base.Version.string: "1.10"}, expect_errors=True) 
self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) def test_patch_name_replace_ok(self): self.mock_update_node.return_value = self.node test_name = 'guido-van-rossum' response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/name', 'op': 'replace', 'value': test_name}], headers={api_base.Version.string: "1.5"}) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) def test_patch_add_replace_invalid(self): self.mock_update_node.return_value = self.node_no_name test_name = 'Guido Van Error' response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/name', 'op': 'replace', 'value': test_name}], headers={api_base.Version.string: "1.5"}, expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) def test_patch_duplicate_name(self): node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid()) test_name = "this-is-my-node" self.mock_update_node.side_effect = exception.DuplicateName(test_name) response = self.patch_json('/nodes/%s' % node.uuid, [{'path': '/name', 'op': 'replace', 'value': test_name}], headers={api_base.Version.string: "1.5"}, expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.CONFLICT, response.status_code) self.assertTrue(response.json['error_message']) @mock.patch.object(api_node.NodesController, '_check_name_acceptable') def test_patch_name_remove_ok(self, cna_mock): self.mock_update_node.return_value = self.node response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/name', 'op': 'remove'}], headers={api_base.Version.string: "1.5"}) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertFalse(cna_mock.called) @mock.patch.object(api_utils, 'get_rpc_node') def test_patch_update_drive_console_enabled(self, mock_rpc_node): self.node.console_enabled = True mock_rpc_node.return_value = self.node response = self.patch_json('/nodes/%s' % self.node.uuid, [{'path': '/driver', 'value': 'foo', 'op': 'add'}], expect_errors=True) mock_rpc_node.assert_called_once_with(self.node.uuid) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.CONFLICT, response.status_code) self.assertTrue(response.json['error_message']) def test_update_in_UPDATE_ALLOWED_STATES(self): for state in states.UPDATE_ALLOWED_STATES: node = obj_utils.create_test_node( self.context, uuid=uuidutils.generate_uuid(), provision_state=state, target_provision_state=states.AVAILABLE) self.mock_update_node.return_value = node response = self.patch_json('/nodes/%s' % node.uuid, [{'path': '/extra/foo', 'value': 'bar', 'op': 'add'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) class TestPost(test_api_base.BaseApiTest): def setUp(self): super(TestPost, self).setUp() self.chassis = obj_utils.create_test_chassis(self.context) p = mock.patch.object(rpcapi.ConductorAPI, 'get_topic_for') self.mock_gtf = p.start() self.mock_gtf.return_value = 'test-topic' self.addCleanup(p.stop) @mock.patch.object(timeutils, 'utcnow') def test_create_node(self, mock_utcnow): ndict = test_api_utils.post_get_test_node() test_time = datetime.datetime(2000, 1, 1, 0, 0) 
mock_utcnow.return_value = test_time response = self.post_json('/nodes', ndict) self.assertEqual(http_client.CREATED, response.status_int) result = self.get_json('/nodes/%s' % ndict['uuid']) self.assertEqual(ndict['uuid'], result['uuid']) self.assertFalse(result['updated_at']) return_created_at = timeutils.parse_isotime( result['created_at']).replace(tzinfo=None) self.assertEqual(test_time, return_created_at) # Check location header self.assertIsNotNone(response.location) expected_location = '/v1/nodes/%s' % ndict['uuid'] self.assertEqual(urlparse.urlparse(response.location).path, expected_location) def test_create_node_default_state_none(self): ndict = test_api_utils.post_get_test_node() response = self.post_json('/nodes', ndict, headers={api_base.Version.string: "1.10"}) self.assertEqual(http_client.CREATED, response.status_int) # default state remains NONE/AVAILABLE result = self.get_json('/nodes/%s' % ndict['uuid']) self.assertEqual(states.NOSTATE, result['provision_state']) result = self.get_json('/nodes/%s' % ndict['uuid'], headers={api_base.Version.string: "1.10"}) self.assertEqual(ndict['uuid'], result['uuid']) self.assertEqual(states.AVAILABLE, result['provision_state']) def test_create_node_default_state_enroll(self): ndict = test_api_utils.post_get_test_node() response = self.post_json('/nodes', ndict, headers={api_base.Version.string: "1.11"}) self.assertEqual(http_client.CREATED, response.status_int) # default state is ENROLL result = self.get_json('/nodes/%s' % ndict['uuid']) self.assertEqual(ndict['uuid'], result['uuid']) self.assertEqual(states.ENROLL, result['provision_state']) def test_create_node_doesnt_contain_id(self): # FIXME(comstud): I'd like to make this test not use the # dbapi, however, no matter what I do when trying to mock # Node.create(), the API fails to convert the objects.Node # into the API Node object correctly (it leaves all fields # as Unset). 
with mock.patch.object(self.dbapi, 'create_node', wraps=self.dbapi.create_node) as cn_mock: ndict = test_api_utils.post_get_test_node(extra={'foo': 123}) self.post_json('/nodes', ndict) result = self.get_json('/nodes/%s' % ndict['uuid']) self.assertEqual(ndict['extra'], result['extra']) cn_mock.assert_called_once_with(mock.ANY) # Check that 'id' is not in first arg of positional args self.assertNotIn('id', cn_mock.call_args[0][0]) def _test_jsontype_attributes(self, attr_name): kwargs = {attr_name: {'str': 'foo', 'int': 123, 'float': 0.1, 'bool': True, 'list': [1, 2], 'none': None, 'dict': {'cat': 'meow'}}} ndict = test_api_utils.post_get_test_node(**kwargs) self.post_json('/nodes', ndict) result = self.get_json('/nodes/%s' % ndict['uuid']) self.assertEqual(ndict[attr_name], result[attr_name]) def test_create_node_valid_extra(self): self._test_jsontype_attributes('extra') def test_create_node_valid_properties(self): self._test_jsontype_attributes('properties') def test_create_node_valid_driver_info(self): self._test_jsontype_attributes('driver_info') def test_create_node_valid_instance_info(self): self._test_jsontype_attributes('instance_info') def _test_vendor_passthru_ok(self, mock_vendor, return_value=None, is_async=True): expected_status = http_client.ACCEPTED if is_async else http_client.OK expected_return_value = json.dumps(return_value) if six.PY3: expected_return_value = expected_return_value.encode('utf-8') node = obj_utils.create_test_node(self.context) info = {'foo': 'bar'} mock_vendor.return_value = {'return': return_value, 'async': is_async, 'attach': False} response = self.post_json('/nodes/%s/vendor_passthru/test' % node.uuid, info) mock_vendor.assert_called_once_with( mock.ANY, node.uuid, 'test', 'POST', info, 'test-topic') self.assertEqual(expected_return_value, response.body) self.assertEqual(expected_status, response.status_code) def _test_vendor_passthru_ok_by_name(self, mock_vendor, return_value=None, is_async=True): expected_status = http_client.ACCEPTED if is_async else http_client.OK expected_return_value = json.dumps(return_value) if six.PY3: expected_return_value = expected_return_value.encode('utf-8') node = obj_utils.create_test_node(self.context, name='node-109') info = {'foo': 'bar'} mock_vendor.return_value = {'return': return_value, 'async': is_async, 'attach': False} response = self.post_json('/nodes/%s/vendor_passthru/test' % node.name, info, headers={api_base.Version.string: "1.5"}) mock_vendor.assert_called_once_with( mock.ANY, node.uuid, 'test', 'POST', info, 'test-topic') self.assertEqual(expected_return_value, response.body) self.assertEqual(expected_status, response.status_code) @mock.patch.object(rpcapi.ConductorAPI, 'vendor_passthru') def test_vendor_passthru_async(self, mock_vendor): self._test_vendor_passthru_ok(mock_vendor) @mock.patch.object(rpcapi.ConductorAPI, 'vendor_passthru') def test_vendor_passthru_sync(self, mock_vendor): return_value = {'cat': 'meow'} self._test_vendor_passthru_ok(mock_vendor, return_value=return_value, is_async=False) @mock.patch.object(rpcapi.ConductorAPI, 'vendor_passthru') def test_vendor_passthru_put(self, mocked_vendor_passthru): node = obj_utils.create_test_node(self.context) return_value = {'return': None, 'async': True, 'attach': False} mocked_vendor_passthru.return_value = return_value response = self.put_json( '/nodes/%s/vendor_passthru/do_test' % node.uuid, {'test_key': 'test_value'}) self.assertEqual(http_client.ACCEPTED, response.status_int) self.assertEqual(return_value['return'], response.json) 
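
    # NOTE: the following method is an illustrative sketch, not part of the
    # original suite. It shows the other half of the PUT convention exercised
    # by test_vendor_passthru_put above: when the conductor reports
    # 'async': False, the API is expected to answer 200 OK with the 'return'
    # payload serialized as the body, under the same convention that
    # _test_vendor_passthru_ok asserts for POST. The method name and the
    # 'do_test' passthru method are placeholders.
    @mock.patch.object(rpcapi.ConductorAPI, 'vendor_passthru')
    def test_vendor_passthru_put_sync_sketch(self, mocked_vendor_passthru):
        node = obj_utils.create_test_node(self.context)
        # 'async': False should select 200 OK rather than 202 ACCEPTED
        return_value = {'return': {'cat': 'meow'}, 'async': False,
                        'attach': False}
        mocked_vendor_passthru.return_value = return_value
        response = self.put_json(
            '/nodes/%s/vendor_passthru/do_test' % node.uuid,
            {'test_key': 'test_value'})
        self.assertEqual(http_client.OK, response.status_int)
        self.assertEqual(return_value['return'], response.json)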
@mock.patch.object(rpcapi.ConductorAPI, 'vendor_passthru') def test_vendor_passthru_by_name(self, mock_vendor): self._test_vendor_passthru_ok_by_name(mock_vendor) @mock.patch.object(rpcapi.ConductorAPI, 'vendor_passthru') def test_vendor_passthru_get(self, mocked_vendor_passthru): node = obj_utils.create_test_node(self.context) return_value = {'return': 'foo', 'async': False, 'attach': False} mocked_vendor_passthru.return_value = return_value response = self.get_json( '/nodes/%s/vendor_passthru/do_test' % node.uuid) self.assertEqual(return_value['return'], response) @mock.patch.object(rpcapi.ConductorAPI, 'vendor_passthru') def test_vendor_passthru_delete(self, mock_vendor_passthru): node = obj_utils.create_test_node(self.context) return_value = {'return': None, 'async': True, 'attach': False} mock_vendor_passthru.return_value = return_value response = self.delete( '/nodes/%s/vendor_passthru/do_test' % node.uuid) self.assertEqual(http_client.ACCEPTED, response.status_int) self.assertEqual(return_value['return'], response.json) def test_vendor_passthru_no_such_method(self): node = obj_utils.create_test_node(self.context) uuid = node.uuid info = {'foo': 'bar'} with mock.patch.object( rpcapi.ConductorAPI, 'vendor_passthru') as mock_vendor: mock_vendor.side_effect = exception.UnsupportedDriverExtension( **{'driver': node.driver, 'node': uuid, 'extension': 'test'}) response = self.post_json('/nodes/%s/vendor_passthru/test' % uuid, info, expect_errors=True) mock_vendor.assert_called_once_with( mock.ANY, uuid, 'test', 'POST', info, 'test-topic') self.assertEqual(http_client.BAD_REQUEST, response.status_code) def test_vendor_passthru_without_method(self): node = obj_utils.create_test_node(self.context) response = self.post_json('/nodes/%s/vendor_passthru' % node.uuid, {'foo': 'bar'}, expect_errors=True) self.assertEqual('application/json', response.content_type, ) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) def test_post_ports_subresource(self): node = obj_utils.create_test_node(self.context) pdict = test_api_utils.port_post_data(node_id=None) pdict['node_uuid'] = node.uuid response = self.post_json('/nodes/ports', pdict, expect_errors=True) self.assertEqual(http_client.FORBIDDEN, response.status_int) def test_create_node_no_mandatory_field_driver(self): ndict = test_api_utils.post_get_test_node() del ndict['driver'] response = self.post_json('/nodes', ndict, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) def test_create_node_invalid_driver(self): ndict = test_api_utils.post_get_test_node() self.mock_gtf.side_effect = exception.NoValidHost('Fake Error') response = self.post_json('/nodes', ndict, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) def test_create_node_no_chassis_uuid(self): ndict = test_api_utils.post_get_test_node() del ndict['chassis_uuid'] response = self.post_json('/nodes', ndict) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.CREATED, response.status_int) # Check location header self.assertIsNotNone(response.location) expected_location = '/v1/nodes/%s' % ndict['uuid'] self.assertEqual(urlparse.urlparse(response.location).path, expected_location) def test_create_node_with_chassis_uuid(self): 
ndict = test_api_utils.post_get_test_node( chassis_uuid=self.chassis.uuid) response = self.post_json('/nodes', ndict) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.CREATED, response.status_int) result = self.get_json('/nodes/%s' % ndict['uuid']) self.assertEqual(ndict['chassis_uuid'], result['chassis_uuid']) # Check location header self.assertIsNotNone(response.location) expected_location = '/v1/nodes/%s' % ndict['uuid'] self.assertEqual(urlparse.urlparse(response.location).path, expected_location) def test_create_node_chassis_uuid_not_found(self): ndict = test_api_utils.post_get_test_node( chassis_uuid='1a1a1a1a-2b2b-3c3c-4d4d-5e5e5e5e5e5e') response = self.post_json('/nodes', ndict, expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertTrue(response.json['error_message']) def test_create_node_with_internal_field(self): ndict = test_api_utils.post_get_test_node() ndict['reservation'] = 'fake' response = self.post_json('/nodes', ndict, expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertTrue(response.json['error_message']) @mock.patch.object(rpcapi.ConductorAPI, 'get_node_vendor_passthru_methods') def test_vendor_passthru_methods(self, get_methods_mock): return_value = {'foo': 'bar'} get_methods_mock.return_value = return_value node = obj_utils.create_test_node(self.context) path = '/nodes/%s/vendor_passthru/methods' % node.uuid data = self.get_json(path) self.assertEqual(return_value, data) get_methods_mock.assert_called_once_with(mock.ANY, node.uuid, topic=mock.ANY) # Now let's test the cache: Reset the mock get_methods_mock.reset_mock() # Call it again data = self.get_json(path) self.assertEqual(return_value, data) # Assert RPC method wasn't called this time self.assertFalse(get_methods_mock.called) class TestDelete(test_api_base.BaseApiTest): def setUp(self): super(TestDelete, self).setUp() p = mock.patch.object(rpcapi.ConductorAPI, 'get_topic_for') self.mock_gtf = p.start() self.mock_gtf.return_value = 'test-topic' self.addCleanup(p.stop) @mock.patch.object(rpcapi.ConductorAPI, 'destroy_node') def test_delete_node(self, mock_dn): node = obj_utils.create_test_node(self.context) self.delete('/nodes/%s' % node.uuid) mock_dn.assert_called_once_with(mock.ANY, node.uuid, 'test-topic') @mock.patch.object(rpcapi.ConductorAPI, 'destroy_node') def test_delete_node_by_name_unsupported(self, mock_dn): node = obj_utils.create_test_node(self.context, name='foo') self.delete('/nodes/%s' % node.name, expect_errors=True) self.assertFalse(mock_dn.called) @mock.patch.object(rpcapi.ConductorAPI, 'destroy_node') def test_delete_node_by_name(self, mock_dn): node = obj_utils.create_test_node(self.context, name='foo') self.delete('/nodes/%s' % node.name, headers={api_base.Version.string: "1.5"}) mock_dn.assert_called_once_with(mock.ANY, node.uuid, 'test-topic') @mock.patch.object(objects.Node, 'get_by_uuid') def test_delete_node_not_found(self, mock_gbu): node = obj_utils.get_test_node(self.context) mock_gbu.side_effect = exception.NodeNotFound(node=node.uuid) response = self.delete('/nodes/%s' % node.uuid, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) mock_gbu.assert_called_once_with(mock.ANY, node.uuid) 
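
    # NOTE: illustrative sketch, not part of the original suite. Like the
    # NodeAssociated case in test_delete_associated below, a NodeLocked
    # error raised by destroy_node carries a 409 code and is assumed here
    # to surface as HTTP 409 Conflict. The method name and 'fake-host'
    # are placeholders.
    @mock.patch.object(rpcapi.ConductorAPI, 'destroy_node')
    def test_delete_node_locked_sketch(self, mock_dn):
        node = obj_utils.create_test_node(self.context)
        mock_dn.side_effect = exception.NodeLocked(node=node.uuid,
                                                   host='fake-host')
        response = self.delete('/nodes/%s' % node.uuid, expect_errors=True)
        self.assertEqual(http_client.CONFLICT, response.status_int)
        mock_dn.assert_called_once_with(mock.ANY, node.uuid, 'test-topic')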
@mock.patch.object(objects.Node, 'get_by_name') def test_delete_node_not_found_by_name_unsupported(self, mock_gbn): node = obj_utils.get_test_node(self.context, name='foo') mock_gbn.side_effect = exception.NodeNotFound(node=node.name) response = self.delete('/nodes/%s' % node.name, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) self.assertFalse(mock_gbn.called) @mock.patch.object(objects.Node, 'get_by_name') def test_delete_node_not_found_by_name(self, mock_gbn): node = obj_utils.get_test_node(self.context, name='foo') mock_gbn.side_effect = exception.NodeNotFound(node=node.name) response = self.delete('/nodes/%s' % node.name, headers={api_base.Version.string: "1.5"}, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) mock_gbn.assert_called_once_with(mock.ANY, node.name) def test_delete_ports_subresource(self): node = obj_utils.create_test_node(self.context) response = self.delete('/nodes/%s/ports' % node.uuid, expect_errors=True) self.assertEqual(http_client.FORBIDDEN, response.status_int) @mock.patch.object(rpcapi.ConductorAPI, 'destroy_node') def test_delete_associated(self, mock_dn): node = obj_utils.create_test_node( self.context, instance_uuid='aaaaaaaa-1111-bbbb-2222-cccccccccccc') mock_dn.side_effect = exception.NodeAssociated( node=node.uuid, instance=node.instance_uuid) response = self.delete('/nodes/%s' % node.uuid, expect_errors=True) self.assertEqual(http_client.CONFLICT, response.status_int) mock_dn.assert_called_once_with(mock.ANY, node.uuid, 'test-topic') @mock.patch.object(objects.Node, 'get_by_uuid') @mock.patch.object(rpcapi.ConductorAPI, 'update_node') def test_delete_node_maintenance_mode(self, mock_update, mock_get): node = obj_utils.create_test_node(self.context, maintenance=True, maintenance_reason='blah') mock_get.return_value = node response = self.delete('/nodes/%s/maintenance' % node.uuid) self.assertEqual(http_client.ACCEPTED, response.status_int) self.assertEqual(b'', response.body) self.assertFalse(node.maintenance) self.assertIsNone(node.maintenance_reason) mock_get.assert_called_once_with(mock.ANY, node.uuid) mock_update.assert_called_once_with(mock.ANY, mock.ANY, topic='test-topic') @mock.patch.object(objects.Node, 'get_by_name') @mock.patch.object(rpcapi.ConductorAPI, 'update_node') def test_delete_node_maintenance_mode_by_name(self, mock_update, mock_get): node = obj_utils.create_test_node(self.context, maintenance=True, maintenance_reason='blah', name='foo') mock_get.return_value = node response = self.delete('/nodes/%s/maintenance' % node.name, headers={api_base.Version.string: "1.5"}) self.assertEqual(http_client.ACCEPTED, response.status_int) self.assertEqual(b'', response.body) self.assertFalse(node.maintenance) self.assertIsNone(node.maintenance_reason) mock_get.assert_called_once_with(mock.ANY, node.name) mock_update.assert_called_once_with(mock.ANY, mock.ANY, topic='test-topic') class TestPut(test_api_base.BaseApiTest): def setUp(self): super(TestPut, self).setUp() self.node = obj_utils.create_test_node( self.context, provision_state=states.AVAILABLE, name='node-39') p = mock.patch.object(rpcapi.ConductorAPI, 'get_topic_for') self.mock_gtf = p.start() self.mock_gtf.return_value = 'test-topic' self.addCleanup(p.stop) p = mock.patch.object(rpcapi.ConductorAPI, 'change_node_power_state') self.mock_cnps = p.start() self.addCleanup(p.stop) p = mock.patch.object(rpcapi.ConductorAPI, 
'do_node_deploy') self.mock_dnd = p.start() self.addCleanup(p.stop) p = mock.patch.object(rpcapi.ConductorAPI, 'do_node_tear_down') self.mock_dntd = p.start() self.addCleanup(p.stop) p = mock.patch.object(rpcapi.ConductorAPI, 'inspect_hardware') self.mock_dnih = p.start() self.addCleanup(p.stop) def test_power_state(self): response = self.put_json('/nodes/%s/states/power' % self.node.uuid, {'target': states.POWER_ON}) self.assertEqual(http_client.ACCEPTED, response.status_code) self.assertEqual(b'', response.body) self.mock_cnps.assert_called_once_with(mock.ANY, self.node.uuid, states.POWER_ON, 'test-topic') # Check location header self.assertIsNotNone(response.location) expected_location = '/v1/nodes/%s/states' % self.node.uuid self.assertEqual(urlparse.urlparse(response.location).path, expected_location) def test_power_state_by_name_unsupported(self): response = self.put_json('/nodes/%s/states/power' % self.node.name, {'target': states.POWER_ON}, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_code) def test_power_state_by_name(self): response = self.put_json('/nodes/%s/states/power' % self.node.name, {'target': states.POWER_ON}, headers={api_base.Version.string: "1.5"}) self.assertEqual(http_client.ACCEPTED, response.status_code) self.assertEqual(b'', response.body) self.mock_cnps.assert_called_once_with(mock.ANY, self.node.uuid, states.POWER_ON, 'test-topic') # Check location header self.assertIsNotNone(response.location) expected_location = '/v1/nodes/%s/states' % self.node.name self.assertEqual(urlparse.urlparse(response.location).path, expected_location) def test_power_invalid_state_request(self): ret = self.put_json('/nodes/%s/states/power' % self.node.uuid, {'target': 'not-supported'}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) def test_power_change_when_being_cleaned(self): for state in (states.CLEANING, states.CLEANWAIT): self.node.provision_state = state self.node.save() ret = self.put_json('/nodes/%s/states/power' % self.node.uuid, {'target': states.POWER_OFF}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) def test_provision_invalid_state_request(self): ret = self.put_json('/nodes/%s/states/provision' % self.node.uuid, {'target': 'not-supported'}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) def test_provision_with_deploy(self): ret = self.put_json('/nodes/%s/states/provision' % self.node.uuid, {'target': states.ACTIVE}) self.assertEqual(http_client.ACCEPTED, ret.status_code) self.assertEqual(b'', ret.body) self.mock_dnd.assert_called_once_with( mock.ANY, self.node.uuid, False, None, 'test-topic') # Check location header self.assertIsNotNone(ret.location) expected_location = '/v1/nodes/%s/states' % self.node.uuid self.assertEqual(urlparse.urlparse(ret.location).path, expected_location) def test_provision_by_name_unsupported(self): ret = self.put_json('/nodes/%s/states/provision' % self.node.name, {'target': states.ACTIVE}, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, ret.status_code) def test_provision_by_name(self): ret = self.put_json('/nodes/%s/states/provision' % self.node.name, {'target': states.ACTIVE}, headers={api_base.Version.string: "1.5"}) self.assertEqual(http_client.ACCEPTED, ret.status_code) self.assertEqual(b'', ret.body) self.mock_dnd.assert_called_once_with( mock.ANY, self.node.uuid, False, None, 'test-topic') def test_provision_with_deploy_configdrive(self): ret = self.put_json('/nodes/%s/states/provision' % 
self.node.uuid, {'target': states.ACTIVE, 'configdrive': 'foo'}) self.assertEqual(http_client.ACCEPTED, ret.status_code) self.assertEqual(b'', ret.body) self.mock_dnd.assert_called_once_with( mock.ANY, self.node.uuid, False, 'foo', 'test-topic') # Check location header self.assertIsNotNone(ret.location) expected_location = '/v1/nodes/%s/states' % self.node.uuid self.assertEqual(urlparse.urlparse(ret.location).path, expected_location) def test_provision_with_configdrive_not_active(self): ret = self.put_json('/nodes/%s/states/provision' % self.node.uuid, {'target': states.DELETED, 'configdrive': 'foo'}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) def test_provision_with_tear_down(self): node = self.node node.provision_state = states.ACTIVE node.target_provision_state = states.NOSTATE node.save() ret = self.put_json('/nodes/%s/states/provision' % node.uuid, {'target': states.DELETED}) self.assertEqual(http_client.ACCEPTED, ret.status_code) self.assertEqual(b'', ret.body) self.mock_dntd.assert_called_once_with( mock.ANY, node.uuid, 'test-topic') # Check location header self.assertIsNotNone(ret.location) expected_location = '/v1/nodes/%s/states' % node.uuid self.assertEqual(urlparse.urlparse(ret.location).path, expected_location) def test_provision_already_in_progress(self): node = self.node node.provision_state = states.DEPLOYING node.target_provision_state = states.ACTIVE node.reservation = 'fake-host' node.save() ret = self.put_json('/nodes/%s/states/provision' % node.uuid, {'target': states.ACTIVE}, expect_errors=True) self.assertEqual(http_client.CONFLICT, ret.status_code) # Conflict self.assertFalse(self.mock_dnd.called) def test_provision_locked_with_correct_state(self): node = self.node node.provision_state = states.AVAILABLE node.target_provision_state = states.NOSTATE node.reservation = 'fake-host' node.save() self.mock_dnd.side_effect = iter([exception.NodeLocked(node='', host='')]) ret = self.put_json('/nodes/%s/states/provision' % node.uuid, {'target': states.ACTIVE}, expect_errors=True) self.assertEqual(http_client.CONFLICT, ret.status_code) # Conflict self.assertTrue(self.mock_dnd.called) def test_provision_with_tear_down_in_progress_deploywait(self): node = self.node node.provision_state = states.DEPLOYWAIT node.target_provision_state = states.ACTIVE node.save() ret = self.put_json('/nodes/%s/states/provision' % node.uuid, {'target': states.DELETED}) self.assertEqual(http_client.ACCEPTED, ret.status_code) self.assertEqual(b'', ret.body) self.mock_dntd.assert_called_once_with( mock.ANY, node.uuid, 'test-topic') # Check location header self.assertIsNotNone(ret.location) expected_location = '/v1/nodes/%s/states' % node.uuid self.assertEqual(urlparse.urlparse(ret.location).path, expected_location) # NOTE(deva): this test asserts API functionality which is not part of # the new-ironic-state-machine in Kilo. It is retained for backwards # compatibility with Juno. # TODO(deva): add a deprecation-warning to the REST result # and check for it here.
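# For illustration only (hypothetical flow, matching the test that follows): a node # left in DEPLOYFAIL with target_provision_state ACTIVE may be sent # {'target': states.ACTIVE} once more, which re-invokes do_node_deploy() on the # conductor with the same arguments as a first-time deploy.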
def test_provision_with_deploy_after_deployfail(self): node = self.node node.provision_state = states.DEPLOYFAIL node.target_provision_state = states.ACTIVE node.save() ret = self.put_json('/nodes/%s/states/provision' % node.uuid, {'target': states.ACTIVE}) self.assertEqual(http_client.ACCEPTED, ret.status_code) self.assertEqual(b'', ret.body) self.mock_dnd.assert_called_once_with( mock.ANY, node.uuid, False, None, 'test-topic') # Check location header self.assertIsNotNone(ret.location) expected_location = '/v1/nodes/%s/states' % node.uuid self.assertEqual(expected_location, urlparse.urlparse(ret.location).path) def test_provision_already_in_state(self): self.node.provision_state = states.ACTIVE self.node.save() ret = self.put_json('/nodes/%s/states/provision' % self.node.uuid, {'target': states.ACTIVE}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) @mock.patch.object(rpcapi.ConductorAPI, 'do_provisioning_action') def test_provide_from_manage(self, mock_dpa): self.node.provision_state = states.MANAGEABLE self.node.save() ret = self.put_json('/nodes/%s/states/provision' % self.node.uuid, {'target': states.VERBS['provide']}, headers={api_base.Version.string: "1.4"}) self.assertEqual(http_client.ACCEPTED, ret.status_code) self.assertEqual(b'', ret.body) mock_dpa.assert_called_once_with(mock.ANY, self.node.uuid, states.VERBS['provide'], 'test-topic') def test_inspect_already_in_progress(self): node = self.node node.provision_state = states.INSPECTING node.target_provision_state = states.MANAGEABLE node.reservation = 'fake-host' node.save() ret = self.put_json('/nodes/%s/states/provision' % node.uuid, {'target': states.MANAGEABLE}, expect_errors=True) self.assertEqual(http_client.CONFLICT, ret.status_code) # Conflict @mock.patch.object(rpcapi.ConductorAPI, 'do_provisioning_action') def test_manage_from_available(self, mock_dpa): self.node.provision_state = states.AVAILABLE self.node.save() ret = self.put_json('/nodes/%s/states/provision' % self.node.uuid, {'target': states.VERBS['manage']}, headers={api_base.Version.string: "1.4"}) self.assertEqual(http_client.ACCEPTED, ret.status_code) self.assertEqual(b'', ret.body) mock_dpa.assert_called_once_with(mock.ANY, self.node.uuid, states.VERBS['manage'], 'test-topic') @mock.patch.object(rpcapi.ConductorAPI, 'do_provisioning_action') def test_bad_requests_in_managed_state(self, mock_dpa): self.node.provision_state = states.MANAGEABLE self.node.save() for state in [states.ACTIVE, states.REBUILD, states.DELETED]: ret = self.put_json('/nodes/%s/states/provision' % self.node.uuid, {'target': state}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) self.assertEqual(0, mock_dpa.call_count) @mock.patch.object(rpcapi.ConductorAPI, 'do_provisioning_action') def test_abort_cleanwait(self, mock_dpa): self.node.provision_state = states.CLEANWAIT self.node.save() ret = self.put_json('/nodes/%s/states/provision' % self.node.uuid, {'target': states.VERBS['abort']}, headers={api_base.Version.string: "1.13"}) self.assertEqual(http_client.ACCEPTED, ret.status_code) self.assertEqual(b'', ret.body) mock_dpa.assert_called_once_with(mock.ANY, self.node.uuid, states.VERBS['abort'], 'test-topic') def test_abort_invalid_state(self): # "abort" is only valid for nodes in CLEANWAIT self.node.provision_state = states.CLEANING self.node.save() ret = self.put_json('/nodes/%s/states/provision' % self.node.uuid, {'target': states.VERBS['abort']}, headers={api_base.Version.string: "1.13"}, expect_errors=True)
self.assertEqual(http_client.BAD_REQUEST, ret.status_code) def test_provision_with_cleansteps_not_clean(self): self.node.provision_state = states.MANAGEABLE self.node.save() ret = self.put_json('/nodes/%s/states/provision' % self.node.uuid, {'target': states.VERBS['provide'], 'clean_steps': 'foo'}, headers={api_base.Version.string: "1.4"}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) def test_clean_no_cleansteps(self): self.node.provision_state = states.MANAGEABLE self.node.save() ret = self.put_json('/nodes/%s/states/provision' % self.node.uuid, {'target': states.VERBS['clean']}, headers={api_base.Version.string: "1.15"}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) @mock.patch.object(rpcapi.ConductorAPI, 'do_node_clean') @mock.patch.object(api_node, '_check_clean_steps') def test_clean_check_steps_fail(self, mock_check, mock_rpcapi): self.node.provision_state = states.MANAGEABLE self.node.save() mock_check.side_effect = exception.InvalidParameterValue('bad') clean_steps = [{"step": "upgrade_firmware", "interface": "deploy"}] ret = self.put_json('/nodes/%s/states/provision' % self.node.uuid, {'target': states.VERBS['clean'], 'clean_steps': clean_steps}, headers={api_base.Version.string: "1.15"}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) mock_check.assert_called_once_with(clean_steps) self.assertFalse(mock_rpcapi.called) @mock.patch.object(rpcapi.ConductorAPI, 'do_node_clean') @mock.patch.object(api_node, '_check_clean_steps') def test_clean(self, mock_check, mock_rpcapi): self.node.provision_state = states.MANAGEABLE self.node.save() clean_steps = [{"step": "upgrade_firmware", "interface": "deploy"}] ret = self.put_json('/nodes/%s/states/provision' % self.node.uuid, {'target': states.VERBS['clean'], 'clean_steps': clean_steps}, headers={api_base.Version.string: "1.15"}) self.assertEqual(http_client.ACCEPTED, ret.status_code) self.assertEqual(b'', ret.body) mock_check.assert_called_once_with(clean_steps) mock_rpcapi.assert_called_once_with(mock.ANY, self.node.uuid, clean_steps, 'test-topic') def test_set_console_mode_enabled(self): with mock.patch.object(rpcapi.ConductorAPI, 'set_console_mode') as mock_scm: ret = self.put_json('/nodes/%s/states/console' % self.node.uuid, {'enabled': "true"}) self.assertEqual(http_client.ACCEPTED, ret.status_code) self.assertEqual(b'', ret.body) mock_scm.assert_called_once_with(mock.ANY, self.node.uuid, True, 'test-topic') # Check location header self.assertIsNotNone(ret.location) expected_location = '/v1/nodes/%s/states/console' % self.node.uuid self.assertEqual(urlparse.urlparse(ret.location).path, expected_location) @mock.patch.object(rpcapi.ConductorAPI, 'set_console_mode') def test_set_console_by_name_unsupported(self, mock_scm): ret = self.put_json('/nodes/%s/states/console' % self.node.name, {'enabled': "true"}, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, ret.status_code) @mock.patch.object(rpcapi.ConductorAPI, 'set_console_mode') def test_set_console_by_name(self, mock_scm): ret = self.put_json('/nodes/%s/states/console' % self.node.name, {'enabled': "true"}, headers={api_base.Version.string: "1.5"}) self.assertEqual(http_client.ACCEPTED, ret.status_code) self.assertEqual(b'', ret.body) mock_scm.assert_called_once_with(mock.ANY, self.node.uuid, True, 'test-topic') def test_set_console_mode_disabled(self): with mock.patch.object(rpcapi.ConductorAPI, 'set_console_mode') as mock_scm: ret = self.put_json('/nodes/%s/states/console' % 
self.node.uuid, {'enabled': "false"}) self.assertEqual(http_client.ACCEPTED, ret.status_code) self.assertEqual(b'', ret.body) mock_scm.assert_called_once_with(mock.ANY, self.node.uuid, False, 'test-topic') # Check location header self.assertIsNotNone(ret.location) expected_location = '/v1/nodes/%s/states/console' % self.node.uuid self.assertEqual(urlparse.urlparse(ret.location).path, expected_location) def test_set_console_mode_bad_request(self): with mock.patch.object(rpcapi.ConductorAPI, 'set_console_mode') as mock_scm: ret = self.put_json('/nodes/%s/states/console' % self.node.uuid, {'enabled': "invalid-value"}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) # assert set_console_mode wasn't called self.assertFalse(mock_scm.called) def test_set_console_mode_bad_request_missing_parameter(self): with mock.patch.object(rpcapi.ConductorAPI, 'set_console_mode') as mock_scm: ret = self.put_json('/nodes/%s/states/console' % self.node.uuid, {}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) # assert set_console_mode wasn't called self.assertFalse(mock_scm.called) def test_set_console_mode_console_not_supported(self): with mock.patch.object(rpcapi.ConductorAPI, 'set_console_mode') as mock_scm: mock_scm.side_effect = exception.UnsupportedDriverExtension( extension='console', driver='test-driver') ret = self.put_json('/nodes/%s/states/console' % self.node.uuid, {'enabled': "true"}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) mock_scm.assert_called_once_with(mock.ANY, self.node.uuid, True, 'test-topic') def test_provision_node_in_maintenance_fail(self): self.node.maintenance = True self.node.save() ret = self.put_json('/nodes/%s/states/provision' % self.node.uuid, {'target': states.ACTIVE}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) self.assertTrue(ret.json['error_message']) @mock.patch.object(rpcapi.ConductorAPI, 'set_target_raid_config', autospec=True) def test_put_raid(self, set_raid_config_mock): raid_config = {'logical_disks': [{'size_gb': 100, 'raid_level': 1}]} ret = self.put_json( '/nodes/%s/states/raid' % self.node.uuid, raid_config, headers={api_base.Version.string: "1.12"}) self.assertEqual(http_client.NO_CONTENT, ret.status_code) self.assertEqual(b'', ret.body) set_raid_config_mock.assert_called_once_with( mock.ANY, mock.ANY, self.node.uuid, raid_config, topic=mock.ANY) @mock.patch.object(rpcapi.ConductorAPI, 'set_target_raid_config', autospec=True) def test_put_raid_older_version(self, set_raid_config_mock): raid_config = {'logical_disks': [{'size_gb': 100, 'raid_level': 1}]} ret = self.put_json( '/nodes/%s/states/raid' % self.node.uuid, raid_config, headers={api_base.Version.string: "1.5"}, expect_errors=True) self.assertEqual(http_client.NOT_ACCEPTABLE, ret.status_code) self.assertFalse(set_raid_config_mock.called) @mock.patch.object(rpcapi.ConductorAPI, 'set_target_raid_config', autospec=True) def test_put_raid_iface_not_supported(self, set_raid_config_mock): raid_config = {'logical_disks': [{'size_gb': 100, 'raid_level': 1}]} set_raid_config_mock.side_effect = iter([ exception.UnsupportedDriverExtension(extension='raid', driver='fake')]) ret = self.put_json( '/nodes/%s/states/raid' % self.node.uuid, raid_config, headers={api_base.Version.string: "1.12"}, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, ret.status_code) self.assertTrue(ret.json['error_message']) set_raid_config_mock.assert_called_once_with( mock.ANY, mock.ANY, self.node.uuid,
raid_config, topic=mock.ANY) @mock.patch.object(rpcapi.ConductorAPI, 'set_target_raid_config', autospec=True) def test_put_raid_invalid_parameter_value(self, set_raid_config_mock): raid_config = {'logical_disks': [{'size_gb': 100, 'raid_level': 1}]} set_raid_config_mock.side_effect = iter([ exception.InvalidParameterValue('foo')]) ret = self.put_json( '/nodes/%s/states/raid' % self.node.uuid, raid_config, headers={api_base.Version.string: "1.12"}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) self.assertTrue(ret.json['error_message']) set_raid_config_mock.assert_called_once_with( mock.ANY, mock.ANY, self.node.uuid, raid_config, topic=mock.ANY) @mock.patch.object(rpcapi.ConductorAPI, 'set_boot_device') def test_set_boot_device(self, mock_sbd): device = boot_devices.PXE ret = self.put_json('/nodes/%s/management/boot_device' % self.node.uuid, {'boot_device': device}) self.assertEqual(http_client.NO_CONTENT, ret.status_code) self.assertEqual(b'', ret.body) mock_sbd.assert_called_once_with(mock.ANY, self.node.uuid, device, persistent=False, topic='test-topic') @mock.patch.object(rpcapi.ConductorAPI, 'set_boot_device') def test_set_boot_device_by_name(self, mock_sbd): device = boot_devices.PXE ret = self.put_json('/nodes/%s/management/boot_device' % self.node.name, {'boot_device': device}, headers={api_base.Version.string: "1.5"}) self.assertEqual(http_client.NO_CONTENT, ret.status_code) self.assertEqual(b'', ret.body) mock_sbd.assert_called_once_with(mock.ANY, self.node.uuid, device, persistent=False, topic='test-topic') @mock.patch.object(rpcapi.ConductorAPI, 'set_boot_device') def test_set_boot_device_not_supported(self, mock_sbd): mock_sbd.side_effect = exception.UnsupportedDriverExtension( extension='management', driver='test-driver') device = boot_devices.PXE ret = self.put_json('/nodes/%s/management/boot_device' % self.node.uuid, {'boot_device': device}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) self.assertTrue(ret.json['error_message']) mock_sbd.assert_called_once_with(mock.ANY, self.node.uuid, device, persistent=False, topic='test-topic') @mock.patch.object(rpcapi.ConductorAPI, 'set_boot_device') def test_set_boot_device_persistent(self, mock_sbd): device = boot_devices.PXE ret = self.put_json('/nodes/%s/management/boot_device?persistent=True' % self.node.uuid, {'boot_device': device}) self.assertEqual(http_client.NO_CONTENT, ret.status_code) self.assertEqual(b'', ret.body) mock_sbd.assert_called_once_with(mock.ANY, self.node.uuid, device, persistent=True, topic='test-topic') @mock.patch.object(rpcapi.ConductorAPI, 'set_boot_device') def test_set_boot_device_persistent_invalid_value(self, mock_sbd): device = boot_devices.PXE ret = self.put_json('/nodes/%s/management/boot_device?persistent=blah' % self.node.uuid, {'boot_device': device}, expect_errors=True) self.assertEqual('application/json', ret.content_type) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) def _test_set_node_maintenance_mode(self, mock_update, mock_get, reason, node_ident, is_by_name=False): request_body = {} if reason: request_body['reason'] = reason self.node.maintenance = False mock_get.return_value = self.node if is_by_name: headers = {api_base.Version.string: "1.5"} else: headers = {} ret = self.put_json('/nodes/%s/maintenance' % node_ident, request_body, headers=headers) self.assertEqual(http_client.ACCEPTED, ret.status_code) self.assertEqual(b'', ret.body) self.assertTrue(self.node.maintenance) self.assertEqual(reason, 
self.node.maintenance_reason) mock_get.assert_called_once_with(mock.ANY, node_ident) mock_update.assert_called_once_with(mock.ANY, mock.ANY, topic='test-topic') @mock.patch.object(objects.Node, 'get_by_uuid') @mock.patch.object(rpcapi.ConductorAPI, 'update_node') def test_set_node_maintenance_mode(self, mock_update, mock_get): self._test_set_node_maintenance_mode(mock_update, mock_get, 'fake_reason', self.node.uuid) @mock.patch.object(objects.Node, 'get_by_uuid') @mock.patch.object(rpcapi.ConductorAPI, 'update_node') def test_set_node_maintenance_mode_no_reason(self, mock_update, mock_get): self._test_set_node_maintenance_mode(mock_update, mock_get, None, self.node.uuid) @mock.patch.object(objects.Node, 'get_by_name') @mock.patch.object(rpcapi.ConductorAPI, 'update_node') def test_set_node_maintenance_mode_by_name(self, mock_update, mock_get): self._test_set_node_maintenance_mode(mock_update, mock_get, 'fake_reason', self.node.name, is_by_name=True) @mock.patch.object(objects.Node, 'get_by_name') @mock.patch.object(rpcapi.ConductorAPI, 'update_node') def test_set_node_maintenance_mode_no_reason_by_name(self, mock_update, mock_get): self._test_set_node_maintenance_mode(mock_update, mock_get, None, self.node.name, is_by_name=True) class TestCheckCleanSteps(base.TestCase): def test__check_clean_steps_not_list(self): clean_steps = {"step": "upgrade_firmware", "interface": "deploy"} self.assertRaisesRegexp(exception.InvalidParameterValue, "not of type 'array'", api_node._check_clean_steps, clean_steps) def test__check_clean_steps_step_not_dict(self): clean_steps = ['clean step'] self.assertRaisesRegexp(exception.InvalidParameterValue, "not of type 'object'", api_node._check_clean_steps, clean_steps) def test__check_clean_steps_step_key_invalid(self): clean_steps = [{"step": "upgrade_firmware", "interface": "deploy", "unknown": "upgrade_firmware"}] self.assertRaisesRegexp(exception.InvalidParameterValue, 'unexpected', api_node._check_clean_steps, clean_steps) def test__check_clean_steps_step_missing_interface(self): clean_steps = [{"step": "upgrade_firmware"}] self.assertRaisesRegexp(exception.InvalidParameterValue, 'interface', api_node._check_clean_steps, clean_steps) def test__check_clean_steps_step_missing_step_key(self): clean_steps = [{"interface": "deploy"}] self.assertRaisesRegexp(exception.InvalidParameterValue, 'step', api_node._check_clean_steps, clean_steps) def test__check_clean_steps_step_missing_step_value(self): clean_steps = [{"step": None, "interface": "deploy"}] self.assertRaisesRegexp(exception.InvalidParameterValue, "not of type 'string'", api_node._check_clean_steps, clean_steps) def test__check_clean_steps_step_min_length_step_value(self): clean_steps = [{"step": "", "interface": "deploy"}] self.assertRaisesRegexp(exception.InvalidParameterValue, 'is too short', api_node._check_clean_steps, clean_steps) def test__check_clean_steps_step_interface_value_invalid(self): clean_steps = [{"step": "upgrade_firmware", "interface": "not"}] self.assertRaisesRegexp(exception.InvalidParameterValue, 'is not one of', api_node._check_clean_steps, clean_steps) def test__check_clean_steps_step_args_value_invalid(self): clean_steps = [{"step": "upgrade_firmware", "interface": "deploy", "args": "invalid args"}] self.assertRaisesRegexp(exception.InvalidParameterValue, 'args', api_node._check_clean_steps, clean_steps) def test__check_clean_steps_valid(self): clean_steps = [{"step": "upgrade_firmware", "interface": "deploy"}] api_node._check_clean_steps(clean_steps) step1 = {"step": 
"upgrade_firmware", "interface": "deploy", "args": {"arg1": "value1", "arg2": "value2"}} api_node._check_clean_steps([step1]) step2 = {"step": "configure raid", "interface": "raid"} api_node._check_clean_steps([step1, step2]) ironic-5.1.0/ironic/tests/unit/api/v1/test_versions.py0000664000567000056710000000470612674513466024152 0ustar jenkinsjenkins00000000000000# Copyright (c) 2015 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests for the versions constants and methods. """ import re from ironic.api.controllers.v1 import versions from ironic.tests import base class TestVersionConstants(base.TestCase): def setUp(self): super(TestVersionConstants, self).setUp() # Get all of our named constants. They all begin with r'MINOR_[0-9]' self.minor_consts = [x for x in dir(versions) if re.search(r'^MINOR_[0-9]', x)] # Sort key needs to be an integer def minor_key(x): return int(x.split('_', 2)[1]) self.minor_consts.sort(key=minor_key) def test_max_ver_str(self): # Test to make sure MAX_VERSION_STRING corresponds with the largest # MINOR_ constant max_ver = '1.{}'.format(getattr(versions, self.minor_consts[-1])) self.assertEqual(max_ver, versions.MAX_VERSION_STRING) def test_min_ver_str(self): # Try to make sure someone doesn't change the MIN_VERSION_STRING by # accident and make sure it exists self.assertEqual('1.1', versions.MIN_VERSION_STRING) def test_name_value_match(self): # Test to make sure variable name matches the value. For example # MINOR_99_FOO should equal 99 for var_name in self.minor_consts: version = int(var_name.split('_', 2)[1]) self.assertEqual( version, getattr(versions, var_name), 'Constant "{}" does not equal {}'.format(var_name, version)) def test_duplicates(self): # Test to make sure no duplicates values seen_values = set() for var_name in self.minor_consts: value = getattr(versions, var_name) self.assertNotIn( value, seen_values, 'The value {} has been used more than once'.format(value)) seen_values.add(value) ironic-5.1.0/ironic/tests/unit/api/v1/__init__.py0000664000567000056710000000000012674513466022761 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/api/test_acl.py0000664000567000056710000001000112674513466022474 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests for ACL. Checks whether certain kinds of requests are blocked or allowed to be processed. 
""" import mock from oslo_config import cfg from six.moves import http_client from ironic.tests.unit.api import base from ironic.tests.unit.api import utils from ironic.tests.unit.db import utils as db_utils cfg.CONF.import_opt('cache', 'keystonemiddleware.auth_token', group='keystone_authtoken') class TestACL(base.BaseApiTest): def setUp(self): super(TestACL, self).setUp() self.environ = {'fake.cache': utils.FakeMemcache()} self.fake_db_node = db_utils.get_test_node(chassis_id=None) self.node_path = '/nodes/%s' % self.fake_db_node['uuid'] def get_json(self, path, expect_errors=False, headers=None, q=[], **param): return super(TestACL, self).get_json(path, expect_errors=expect_errors, headers=headers, q=q, extra_environ=self.environ, **param) def _make_app(self): cfg.CONF.set_override('cache', 'fake.cache', group='keystone_authtoken') return super(TestACL, self)._make_app(enable_acl=True) def test_non_authenticated(self): response = self.get_json(self.node_path, expect_errors=True) self.assertEqual(http_client.UNAUTHORIZED, response.status_int) def test_authenticated(self): with mock.patch.object(self.dbapi, 'get_node_by_uuid', autospec=True) as mock_get_node: mock_get_node.return_value = self.fake_db_node response = self.get_json( self.node_path, headers={'X-Auth-Token': utils.ADMIN_TOKEN}) self.assertEqual(self.fake_db_node['uuid'], response['uuid']) mock_get_node.assert_called_once_with(self.fake_db_node['uuid']) def test_non_admin(self): response = self.get_json(self.node_path, headers={'X-Auth-Token': utils.MEMBER_TOKEN}, expect_errors=True) self.assertEqual(http_client.FORBIDDEN, response.status_int) def test_non_admin_with_admin_header(self): response = self.get_json(self.node_path, headers={'X-Auth-Token': utils.MEMBER_TOKEN, 'X-Roles': 'admin'}, expect_errors=True) self.assertEqual(http_client.FORBIDDEN, response.status_int) def test_public_api(self): # expect_errors should be set to True: If expect_errors is set to False # the response gets converted to JSON and we cannot read the response # code so easy. for route in ('/', '/v1'): response = self.get_json(route, path_prefix='', expect_errors=True) self.assertEqual(http_client.OK, response.status_int) def test_public_api_with_path_extensions(self): routes = {'/v1/': http_client.OK, '/v1.json': http_client.OK, '/v1.xml': http_client.NOT_FOUND} for url in routes: response = self.get_json(url, path_prefix='', expect_errors=True) self.assertEqual(routes[url], response.status_int) ironic-5.1.0/ironic/tests/unit/api/__init__.py0000664000567000056710000000000012674513466022433 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/api/test_hooks.py0000664000567000056710000003326112674513470023070 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Tests for the Pecan API hooks.""" import json import mock from oslo_config import cfg import oslo_messaging as messaging import six from six.moves import http_client from webob import exc as webob_exc from ironic.api.controllers import root from ironic.api import hooks from ironic.common import context from ironic.tests.unit.api import base from ironic.tests.unit import policy_fixture class FakeRequest(object): def __init__(self, headers, context, environ): self.headers = headers self.context = context self.environ = environ or {} self.version = (1, 0) self.host_url = 'http://127.0.0.1:6385' class FakeRequestState(object): def __init__(self, headers=None, context=None, environ=None): self.request = FakeRequest(headers, context, environ) self.response = FakeRequest(headers, context, environ) def set_context(self): headers = self.request.headers creds = { 'user': headers.get('X-User') or headers.get('X-User-Id'), 'tenant': headers.get('X-Tenant') or headers.get('X-Tenant-Id'), 'domain_id': headers.get('X-User-Domain-Id'), 'domain_name': headers.get('X-User-Domain-Name'), 'auth_token': headers.get('X-Auth-Token'), 'roles': headers.get('X-Roles', '').split(','), } is_admin = ('admin' in creds['roles'] or 'administrator' in creds['roles']) is_public_api = self.request.environ.get('is_public_api', False) show_password = ('admin' in creds['tenant']) self.request.context = context.RequestContext( is_admin=is_admin, is_public_api=is_public_api, show_password=show_password, **creds) def fake_headers(admin=False): headers = { 'X-Auth-Token': '8d9f235ca7464dd7ba46f81515797ea0', 'X-Domain-Id': 'None', 'X-Domain-Name': 'None', 'X-Project-Domain-Id': 'default', 'X-Project-Domain-Name': 'Default', 'X-Project-Id': 'b4efa69d4ffa4973863f2eefc094f7f8', 'X-Project-Name': 'admin', 'X-Role': '_member_,admin', 'X-Roles': '_member_,admin', 'X-Tenant': 'foo', 'X-Tenant-Id': 'b4efa69d4ffa4973863f2eefc094f7f8', 'X-Tenant-Name': 'foo', 'X-User': 'foo', 'X-User-Domain-Id': 'default', 'X-User-Domain-Name': 'Default', 'X-User-Id': '604ab2a197c442c2a84aba66708a9e1e', 'X-User-Name': 'foo', 'X-OpenStack-Ironic-API-Version': '1.0' } if admin: headers.update({ 'X-Project-Name': 'admin', 'X-Role': '_member_,admin', 'X-Roles': '_member_,admin', 'X-Tenant': 'admin', 'X-Tenant-Name': 'admin', }) else: headers.update({ 'X-Project-Name': 'foo', 'X-Role': '_member_', 'X-Roles': '_member_', }) return headers class TestNoExceptionTracebackHook(base.BaseApiTest): TRACE = [u'Traceback (most recent call last):', u' File "/opt/stack/ironic/ironic/openstack/common/rpc/amqp.py",' ' line 434, in _process_data\\n **args)', u' File "/opt/stack/ironic/ironic/openstack/common/rpc/' 'dispatcher.py", line 172, in dispatch\\n result =' ' getattr(proxyobj, method)(ctxt, **kwargs)'] MSG_WITHOUT_TRACE = "Test exception message." 
MSG_WITH_TRACE = MSG_WITHOUT_TRACE + "\n" + "\n".join(TRACE) def setUp(self): super(TestNoExceptionTracebackHook, self).setUp() p = mock.patch.object(root.Root, 'convert') self.root_convert_mock = p.start() self.addCleanup(p.stop) def test_hook_exception_success(self): self.root_convert_mock.side_effect = Exception(self.MSG_WITH_TRACE) response = self.get_json('/', path_prefix='', expect_errors=True) actual_msg = json.loads(response.json['error_message'])['faultstring'] self.assertEqual(self.MSG_WITHOUT_TRACE, actual_msg) def test_hook_remote_error_success(self): test_exc_type = 'TestException' self.root_convert_mock.side_effect = messaging.rpc.RemoteError( test_exc_type, self.MSG_WITHOUT_TRACE, self.TRACE) response = self.get_json('/', path_prefix='', expect_errors=True) # NOTE(max_lobur): For RemoteError the client message will still have # some garbage because in RemoteError the traceback is serialized as a # list instead of '\n'.join(trace). But since RemoteError is quite a # rare thing (happens due to wrong deserialization settings etc.) # we don't care about this garbage. expected_msg = ("Remote error: %s %s" % (test_exc_type, self.MSG_WITHOUT_TRACE) + ("\n[u'" if six.PY2 else "\n['")) actual_msg = json.loads(response.json['error_message'])['faultstring'] self.assertEqual(expected_msg, actual_msg) def _test_hook_without_traceback(self): msg = "Error message without traceback \n but \n multiline" self.root_convert_mock.side_effect = Exception(msg) response = self.get_json('/', path_prefix='', expect_errors=True) actual_msg = json.loads(response.json['error_message'])['faultstring'] self.assertEqual(msg, actual_msg) def test_hook_without_traceback(self): self._test_hook_without_traceback() def test_hook_without_traceback_debug(self): cfg.CONF.set_override('debug', True) self._test_hook_without_traceback() def test_hook_without_traceback_debug_tracebacks(self): cfg.CONF.set_override('debug_tracebacks_in_api', True) self._test_hook_without_traceback() def _test_hook_on_serverfault(self): self.root_convert_mock.side_effect = Exception(self.MSG_WITH_TRACE) response = self.get_json('/', path_prefix='', expect_errors=True) actual_msg = json.loads( response.json['error_message'])['faultstring'] return actual_msg def test_hook_on_serverfault(self): msg = self._test_hook_on_serverfault() self.assertEqual(self.MSG_WITHOUT_TRACE, msg) def test_hook_on_serverfault_debug(self): cfg.CONF.set_override('debug', True) msg = self._test_hook_on_serverfault() self.assertEqual(self.MSG_WITHOUT_TRACE, msg) def test_hook_on_serverfault_debug_tracebacks(self): cfg.CONF.set_override('debug_tracebacks_in_api', True) msg = self._test_hook_on_serverfault() self.assertEqual(self.MSG_WITH_TRACE, msg) def _test_hook_on_clientfault(self): client_error = Exception(self.MSG_WITH_TRACE) client_error.code = http_client.BAD_REQUEST self.root_convert_mock.side_effect = client_error response = self.get_json('/', path_prefix='', expect_errors=True) actual_msg = json.loads( response.json['error_message'])['faultstring'] return actual_msg def test_hook_on_clientfault(self): msg = self._test_hook_on_clientfault() self.assertEqual(self.MSG_WITHOUT_TRACE, msg) def test_hook_on_clientfault_debug(self): cfg.CONF.set_override('debug', True) msg = self._test_hook_on_clientfault() self.assertEqual(self.MSG_WITHOUT_TRACE, msg) def test_hook_on_clientfault_debug_tracebacks(self): cfg.CONF.set_override('debug_tracebacks_in_api', True) msg = self._test_hook_on_clientfault() self.assertEqual(self.MSG_WITH_TRACE, msg) class
TestContextHook(base.BaseApiTest): @mock.patch.object(context, 'RequestContext') def test_context_hook_not_admin(self, mock_ctx): headers = fake_headers(admin=False) reqstate = FakeRequestState(headers=headers) context_hook = hooks.ContextHook(None) context_hook.before(reqstate) mock_ctx.assert_called_with( auth_token=headers['X-Auth-Token'], user=headers['X-User'], tenant=headers['X-Tenant'], domain_id=headers['X-User-Domain-Id'], domain_name=headers['X-User-Domain-Name'], is_public_api=False, show_password=False, is_admin=False, roles=headers['X-Roles'].split(',')) @mock.patch.object(context, 'RequestContext') def test_context_hook_admin(self, mock_ctx): headers = fake_headers(admin=True) reqstate = FakeRequestState(headers=headers) context_hook = hooks.ContextHook(None) context_hook.before(reqstate) mock_ctx.assert_called_with( auth_token=headers['X-Auth-Token'], user=headers['X-User'], tenant=headers['X-Tenant'], domain_id=headers['X-User-Domain-Id'], domain_name=headers['X-User-Domain-Name'], is_public_api=False, show_password=True, is_admin=True, roles=headers['X-Roles'].split(',')) @mock.patch.object(context, 'RequestContext') def test_context_hook_public_api(self, mock_ctx): headers = fake_headers(admin=True) env = {'is_public_api': True} reqstate = FakeRequestState(headers=headers, environ=env) context_hook = hooks.ContextHook(None) context_hook.before(reqstate) mock_ctx.assert_called_with( auth_token=headers['X-Auth-Token'], user=headers['X-User'], tenant=headers['X-Tenant'], domain_id=headers['X-User-Domain-Id'], domain_name=headers['X-User-Domain-Name'], is_public_api=True, show_password=True, is_admin=True, roles=headers['X-Roles'].split(',')) @mock.patch.object(context, 'RequestContext') def test_context_hook_noauth_token_removed(self, mock_ctx): cfg.CONF.set_override('auth_strategy', 'noauth') headers = fake_headers(admin=False) reqstate = FakeRequestState(headers=headers) context_hook = hooks.ContextHook(None) context_hook.before(reqstate) mock_ctx.assert_called_with( auth_token=None, user=headers['X-User'], tenant=headers['X-Tenant'], domain_id=headers['X-User-Domain-Id'], domain_name=headers['X-User-Domain-Name'], is_public_api=False, show_password=False, is_admin=False, roles=headers['X-Roles'].split(',')) @mock.patch.object(context, 'RequestContext') def test_context_hook_after_add_request_id(self, mock_ctx): headers = fake_headers(admin=True) reqstate = FakeRequestState(headers=headers) reqstate.set_context() reqstate.request.context.request_id = 'fake-id' context_hook = hooks.ContextHook(None) context_hook.after(reqstate) self.assertIn('Openstack-Request-Id', reqstate.response.headers) self.assertEqual( 'fake-id', reqstate.response.headers['Openstack-Request-Id']) def test_context_hook_after_miss_context(self): response = self.get_json('/bad/path', expect_errors=True) self.assertNotIn('Openstack-Request-Id', response.headers) class TestTrustedCallHook(base.BaseApiTest): def test_trusted_call_hook_not_admin(self): headers = fake_headers(admin=False) reqstate = FakeRequestState(headers=headers) reqstate.set_context() trusted_call_hook = hooks.TrustedCallHook() self.assertRaises(webob_exc.HTTPForbidden, trusted_call_hook.before, reqstate) def test_trusted_call_hook_admin(self): headers = fake_headers(admin=True) reqstate = FakeRequestState(headers=headers) reqstate.set_context() trusted_call_hook = hooks.TrustedCallHook() trusted_call_hook.before(reqstate) def test_trusted_call_hook_public_api(self): headers = fake_headers(admin=False) env = {'is_public_api': True} 
reqstate = FakeRequestState(headers=headers, environ=env) reqstate.set_context() trusted_call_hook = hooks.TrustedCallHook() trusted_call_hook.before(reqstate) class TestTrustedCallHookCompatJuno(TestTrustedCallHook): def setUp(self): super(TestTrustedCallHookCompatJuno, self).setUp() self.policy = self.useFixture( policy_fixture.PolicyFixture(compat='juno')) def test_trusted_call_hook_public_api(self): self.skipTest('no public_api trusted call policy in juno') class TestPublicUrlHook(base.BaseApiTest): def test_before_host_url(self): headers = fake_headers() reqstate = FakeRequestState(headers=headers) public_url_hook = hooks.PublicUrlHook() public_url_hook.before(reqstate) self.assertEqual(reqstate.request.host_url, reqstate.request.public_url) def test_before_public_endpoint(self): cfg.CONF.set_override('public_endpoint', 'http://foo', 'api') headers = fake_headers() reqstate = FakeRequestState(headers=headers) public_url_hook = hooks.PublicUrlHook() public_url_hook.before(reqstate) self.assertEqual('http://foo', reqstate.request.public_url) ironic-5.1.0/ironic/tests/unit/api/utils.py0000664000567000056710000001023212674513466022044 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Utils for testing the API service. """ import datetime import hashlib import json from ironic.api.controllers.v1 import chassis as chassis_controller from ironic.api.controllers.v1 import node as node_controller from ironic.api.controllers.v1 import port as port_controller from ironic.tests.unit.db import utils ADMIN_TOKEN = '4562138218392831' MEMBER_TOKEN = '4562138218392832' ADMIN_TOKEN_HASH = hashlib.sha256(ADMIN_TOKEN.encode()).hexdigest() MEMBER_TOKEN_HASH = hashlib.sha256(MEMBER_TOKEN.encode()).hexdigest() ADMIN_BODY = { 'access': { 'token': {'id': ADMIN_TOKEN, 'expires': '2100-09-11T00:00:00'}, 'user': {'id': 'user_id1', 'name': 'user_name1', 'tenantId': '123i2910', 'tenantName': 'mytenant', 'roles': [{'name': 'admin'}]}, } } MEMBER_BODY = { 'access': { 'token': {'id': MEMBER_TOKEN, 'expires': '2100-09-11T00:00:00'}, 'user': {'id': 'user_id2', 'name': 'user-good', 'tenantId': 'project-good', 'tenantName': 'goodies', 'roles': [{'name': 'Member'}]}, } } class FakeMemcache(object): """Fake cache that is used for keystone tokens lookup.""" # NOTE(lucasagomes): with keystonemiddleware >= 2.0.0 the token cache # keys are sha256 hashes of the token key.
This was introduced in # https://review.openstack.org/#/c/186971 _cache = { 'tokens/%s' % ADMIN_TOKEN: ADMIN_BODY, 'tokens/%s' % ADMIN_TOKEN_HASH: ADMIN_BODY, 'tokens/%s' % MEMBER_TOKEN: MEMBER_BODY, 'tokens/%s' % MEMBER_TOKEN_HASH: MEMBER_BODY, } def __init__(self): self.set_key = None self.set_value = None self.token_expiration = None def get(self, key): dt = datetime.datetime.utcnow() + datetime.timedelta(minutes=5) return json.dumps((self._cache.get(key), dt.isoformat())) def set(self, key, value, time=0, min_compress_len=0): self.set_value = value self.set_key = key def remove_internal(values, internal): # NOTE(yuriyz): internal attributes should not be posted, except uuid int_attr = [attr.lstrip('/') for attr in internal if attr != '/uuid'] return {k: v for (k, v) in values.items() if k not in int_attr} def node_post_data(**kw): node = utils.get_test_node(**kw) # These values are not part of the API object node.pop('conductor_affinity') node.pop('chassis_id') node.pop('target_raid_config') node.pop('raid_config') node.pop('tags') internal = node_controller.NodePatchType.internal_attrs() return remove_internal(node, internal) def port_post_data(**kw): port = utils.get_test_port(**kw) # node_id is not part of the API object port.pop('node_id') # TODO(vsaienko): remove when API part is added port.pop('local_link_connection') port.pop('pxe_enabled') # portgroup_id is not part of the API object port.pop('portgroup_id') internal = port_controller.PortPatchType.internal_attrs() return remove_internal(port, internal) def chassis_post_data(**kw): chassis = utils.get_test_chassis(**kw) internal = chassis_controller.ChassisPatchType.internal_attrs() return remove_internal(chassis, internal) def post_get_test_node(**kw): # NOTE(lucasagomes): When creating a node via API (POST) # we have to use chassis_uuid node = node_post_data(**kw) chassis = utils.get_test_chassis() node['chassis_uuid'] = kw.get('chassis_uuid', chassis['uuid']) return node ironic-5.1.0/ironic/tests/unit/fake_policy.py0000664000567000056710000000230012674513466022415 0ustar jenkinsjenkins00000000000000# Copyright (c) 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. policy_data = """ { "admin_api": "role:admin or role:administrator", "public_api": "is_public_api:True", "trusted_call": "rule:admin_api or rule:public_api", "default": "rule:trusted_call", "show_password": "tenant:admin" } """ policy_data_compat_juno = """ { "admin": "role:admin or role:administrator", "admin_api": "is_admin:True", "default": "rule:admin_api" } """ def get_policy_data(compat): if not compat: return policy_data elif compat == 'juno': return policy_data_compat_juno else: raise Exception('Policy data for %s not available' % compat) ironic-5.1.0/ironic/tests/unit/__init__.py0000664000567000056710000000234212674513466021675 0ustar jenkinsjenkins00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ :mod:`ironic.tests.unit` -- ironic unit tests ===================================================== .. automodule:: ironic.tests.unit :platform: Unix """ # TODO(deva): move eventlet imports to ironic.__init__ once we move to PBR import eventlet from ironic import objects eventlet.monkey_patch(os=False) # NOTE(comstud): Make sure we have all of the objects loaded. We do this # at module import time, because we may be using mock decorators in our # tests that run at import time. objects.register_all() ironic-5.1.0/ironic/tests/unit/dhcp/0000775000567000056710000000000012674513633020475 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/dhcp/test_neutron.py0000664000567000056710000006543512674513466023621 0ustar jenkinsjenkins00000000000000# # Copyright 2014 OpenStack Foundation # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from neutronclient.common import exceptions as neutron_client_exc from neutronclient.v2_0 import client from oslo_config import cfg from oslo_utils import uuidutils from ironic.common import dhcp_factory from ironic.common import exception from ironic.common import pxe_utils from ironic.conductor import task_manager from ironic.dhcp import neutron from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as object_utils class TestNeutron(db_base.DbTestCase): def setUp(self): super(TestNeutron, self).setUp() mgr_utils.mock_the_extension_manager(driver='fake') self.config( cleaning_network_uuid='00000000-0000-0000-0000-000000000000', group='neutron') self.config(enabled_drivers=['fake']) self.config(dhcp_provider='neutron', group='dhcp') self.config(url='test-url', url_timeout=30, retries=2, group='neutron') self.config(insecure=False, certfile='test-file', admin_user='test-admin-user', admin_tenant_name='test-admin-tenant', admin_password='test-admin-password', auth_uri='test-auth-uri', group='keystone_authtoken') self.node = object_utils.create_test_node(self.context) self.ports = [ object_utils.create_test_port( self.context, node_id=self.node.id, id=2, uuid='1be26c0b-03f2-4d2e-ae87-c02d7f33c782', address='52:54:00:cf:2d:32')] # Very simple neutron port representation self.neutron_port = {'id': '132f871f-eaec-4fed-9475-0d54465e0f00', 'mac_address': '52:54:00:cf:2d:32'} dhcp_factory.DHCPFactory._dhcp_provider = None @mock.patch.object(client.Client, "__init__") def test__build_client_with_token(self, mock_client_init): token = 'test-token-123' expected = {'timeout': 30, 'retries': 2, 'insecure': False, 'ca_cert': 'test-file', 'token': token, 'endpoint_url': 'test-url', 'username': 'test-admin-user', 'tenant_name': 'test-admin-tenant', 'password': 'test-admin-password', 'auth_url': 'test-auth-uri'} mock_client_init.return_value = None neutron._build_client(token=token) mock_client_init.assert_called_once_with(**expected) @mock.patch.object(client.Client, "__init__") def test__build_client_without_token(self, mock_client_init): expected = {'timeout': 30, 'retries': 2, 'insecure': False, 'ca_cert': 'test-file', 'token': None, 'endpoint_url': 'test-url', 'username': 'test-admin-user', 'tenant_name': 'test-admin-tenant', 'password': 'test-admin-password', 'auth_url': 'test-auth-uri'} mock_client_init.return_value = None neutron._build_client(token=None) mock_client_init.assert_called_once_with(**expected) @mock.patch.object(client.Client, "__init__") def test__build_client_with_region(self, mock_client_init): expected = {'timeout': 30, 'retries': 2, 'insecure': False, 'ca_cert': 'test-file', 'token': None, 'endpoint_url': 'test-url', 'username': 'test-admin-user', 'tenant_name': 'test-admin-tenant', 'password': 'test-admin-password', 'auth_url': 'test-auth-uri', 'region_name': 'test-region'} self.config(region_name='test-region', group='keystone') mock_client_init.return_value = None neutron._build_client(token=None) mock_client_init.assert_called_once_with(**expected) @mock.patch.object(client.Client, "__init__") def test__build_client_noauth(self, mock_client_init): self.config(auth_strategy='noauth', group='neutron') expected = {'ca_cert': 'test-file', 'insecure': False, 'endpoint_url': 'test-url', 'timeout': 30, 'retries': 2, 'auth_strategy': 'noauth'} mock_client_init.return_value = None neutron._build_client(token=None) mock_client_init.assert_called_once_with(**expected) 
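# NOTE: illustrative sketch only (values are hypothetical). The DHCP options # passed to update_port_dhcp_opts() are a list of dicts such as # [{'opt_name': 'bootfile-name', 'opt_value': 'pxelinux.0'}]; the Neutron # provider is expected to wrap them as {'port': {'extra_dhcp_opts': opts}} # before calling neutronclient's update_port(), as the tests below assert.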
    @mock.patch.object(client.Client, 'update_port')
    @mock.patch.object(client.Client, "__init__")
    def test_update_port_dhcp_opts(self, mock_client_init, mock_update_port):
        opts = [{'opt_name': 'bootfile-name',
                 'opt_value': 'pxelinux.0'},
                {'opt_name': 'tftp-server',
                 'opt_value': '1.1.1.1'},
                {'opt_name': 'server-ip-address',
                 'opt_value': '1.1.1.1'}]
        port_id = 'fake-port-id'
        expected = {'port': {'extra_dhcp_opts': opts}}
        mock_client_init.return_value = None

        api = dhcp_factory.DHCPFactory()
        api.provider.update_port_dhcp_opts(port_id, opts)
        mock_update_port.assert_called_once_with(port_id, expected)

    @mock.patch.object(client.Client, 'update_port')
    @mock.patch.object(client.Client, "__init__")
    def test_update_port_dhcp_opts_with_exception(self, mock_client_init,
                                                  mock_update_port):
        opts = [{}]
        port_id = 'fake-port-id'
        mock_client_init.return_value = None
        mock_update_port.side_effect = (
            neutron_client_exc.NeutronClientException())

        api = dhcp_factory.DHCPFactory()
        self.assertRaises(
            exception.FailedToUpdateDHCPOptOnPort,
            api.provider.update_port_dhcp_opts,
            port_id, opts)

    @mock.patch.object(client.Client, 'update_port')
    @mock.patch.object(client.Client, '__init__')
    def test_update_port_address(self, mock_client_init, mock_update_port):
        address = 'fe:54:00:77:07:d9'
        port_id = 'fake-port-id'
        expected = {'port': {'mac_address': address}}
        mock_client_init.return_value = None

        api = dhcp_factory.DHCPFactory()
        api.provider.update_port_address(port_id, address)
        mock_update_port.assert_called_once_with(port_id, expected)

    @mock.patch.object(client.Client, 'update_port')
    @mock.patch.object(client.Client, '__init__')
    def test_update_port_address_with_exception(self, mock_client_init,
                                                mock_update_port):
        address = 'fe:54:00:77:07:d9'
        port_id = 'fake-port-id'
        mock_client_init.return_value = None

        api = dhcp_factory.DHCPFactory()
        mock_update_port.side_effect = (
            neutron_client_exc.NeutronClientException())
        self.assertRaises(exception.FailedToUpdateMacOnPort,
                          api.provider.update_port_address,
                          port_id, address)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_dhcp_opts')
    @mock.patch('ironic.common.network.get_node_vif_ids')
    def test_update_dhcp(self, mock_gnvi, mock_updo):
        mock_gnvi.return_value = {'ports': {'port-uuid': 'vif-uuid'},
                                  'portgroups': {}}
        with task_manager.acquire(self.context,
                                  self.node.uuid) as task:
            opts = pxe_utils.dhcp_options_for_instance(task)
            api = dhcp_factory.DHCPFactory()
            api.update_dhcp(task, opts)
            mock_updo.assert_called_once_with('vif-uuid', opts,
                                              token=self.context.auth_token)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_dhcp_opts')
    @mock.patch('ironic.common.network.get_node_vif_ids')
    def test_update_dhcp_no_vif_data(self, mock_gnvi, mock_updo):
        mock_gnvi.return_value = {'portgroups': {}, 'ports': {}}
        with task_manager.acquire(self.context, self.node.uuid) as task:
            api = dhcp_factory.DHCPFactory()
            self.assertRaises(exception.FailedToUpdateDHCPOptOnPort,
                              api.update_dhcp, task, self.node)
        self.assertFalse(mock_updo.called)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_dhcp_opts')
    @mock.patch('ironic.common.network.get_node_vif_ids')
    def test_update_dhcp_some_failures(self, mock_gnvi, mock_updo):
        # confirm update is called twice, one fails, but no exception raised
        mock_gnvi.return_value = {'ports': {'p1': 'v1', 'p2': 'v2'},
                                  'portgroups': {}}
        exc = exception.FailedToUpdateDHCPOptOnPort('fake exception')
        mock_updo.side_effect = [None, exc]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            api = dhcp_factory.DHCPFactory()
            api.update_dhcp(task, self.node)
            mock_gnvi.assert_called_once_with(task)
        self.assertEqual(2, mock_updo.call_count)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_dhcp_opts')
    @mock.patch('ironic.common.network.get_node_vif_ids')
    def test_update_dhcp_fails(self, mock_gnvi, mock_updo):
        # confirm update is called twice, both fail, and exception is raised
        mock_gnvi.return_value = {'ports': {'p1': 'v1', 'p2': 'v2'},
                                  'portgroups': {}}
        exc = exception.FailedToUpdateDHCPOptOnPort('fake exception')
        mock_updo.side_effect = [exc, exc]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            api = dhcp_factory.DHCPFactory()
            self.assertRaises(exception.FailedToUpdateDHCPOptOnPort,
                              api.update_dhcp, task, self.node)
            mock_gnvi.assert_called_once_with(task)
        self.assertEqual(2, mock_updo.call_count)

    def test__get_fixed_ip_address(self):
        port_id = 'fake-port-id'
        expected = "192.168.1.3"
        api = dhcp_factory.DHCPFactory().provider
        port_data = {
            "id": port_id,
            "network_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6",
            "admin_state_up": True,
            "status": "ACTIVE",
            "mac_address": "fa:16:3e:4c:2c:30",
            "fixed_ips": [
                {
                    "ip_address": "192.168.1.3",
                    "subnet_id": "f8a6e8f8-c2ec-497c-9f23-da9616de54ef"
                }
            ],
            "device_id": 'bece68a3-2f8b-4e66-9092-244493d6aba7',
        }
        fake_client = mock.Mock()
        fake_client.show_port.return_value = {'port': port_data}
        result = api._get_fixed_ip_address(port_id, fake_client)
        self.assertEqual(expected, result)
        fake_client.show_port.assert_called_once_with(port_id)

    def test__get_fixed_ip_address_invalid_ip(self):
        port_id = 'fake-port-id'
        api = dhcp_factory.DHCPFactory().provider
        port_data = {
            "id": port_id,
            "network_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6",
            "admin_state_up": True,
            "status": "ACTIVE",
            "mac_address": "fa:16:3e:4c:2c:30",
            "fixed_ips": [
                {
                    "ip_address": "invalid.ip",
                    "subnet_id": "f8a6e8f8-c2ec-497c-9f23-da9616de54ef"
                }
            ],
            "device_id": 'bece68a3-2f8b-4e66-9092-244493d6aba7',
        }
        fake_client = mock.Mock()
        fake_client.show_port.return_value = {'port': port_data}
        self.assertRaises(exception.InvalidIPv4Address,
                          api._get_fixed_ip_address,
                          port_id, fake_client)
        fake_client.show_port.assert_called_once_with(port_id)

    def test__get_fixed_ip_address_with_exception(self):
        port_id = 'fake-port-id'
        api = dhcp_factory.DHCPFactory().provider

        fake_client = mock.Mock()
        fake_client.show_port.side_effect = (
            neutron_client_exc.NeutronClientException())
        self.assertRaises(exception.FailedToGetIPAddressOnPort,
                          api._get_fixed_ip_address, port_id, fake_client)
        fake_client.show_port.assert_called_once_with(port_id)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi._get_fixed_ip_address')
    def test__get_port_ip_address(self, mock_gfia):
        expected = "192.168.1.3"
        port = object_utils.create_test_port(self.context,
                                             node_id=self.node.id,
                                             address='aa:bb:cc:dd:ee:ff',
                                             uuid=uuidutils.generate_uuid(),
                                             extra={'vif_port_id':
                                                    'test-vif-A'},
                                             driver='fake')
        mock_gfia.return_value = expected
        with task_manager.acquire(self.context,
                                  self.node.uuid) as task:
            api = dhcp_factory.DHCPFactory().provider
            result = api._get_port_ip_address(task, port,
                                              mock.sentinel.client)
        self.assertEqual(expected, result)
        mock_gfia.assert_called_once_with('test-vif-A', mock.sentinel.client)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi._get_fixed_ip_address')
    def test__get_port_ip_address_for_portgroup(self, mock_gfia):
        expected = "192.168.1.3"
        pg = object_utils.create_test_portgroup(self.context,
                                                node_id=self.node.id,
                                                address='aa:bb:cc:dd:ee:ff',
                                                uuid=uuidutils.generate_uuid(),
                                                extra={'vif_port_id':
                                                       'test-vif-A'},
                                                driver='fake')
        mock_gfia.return_value = expected
        with task_manager.acquire(self.context,
                                  self.node.uuid) as task:
            api = dhcp_factory.DHCPFactory().provider
            result = api._get_port_ip_address(task, pg,
                                              mock.sentinel.client)
        self.assertEqual(expected, result)
        mock_gfia.assert_called_once_with('test-vif-A', mock.sentinel.client)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi._get_fixed_ip_address')
    def test__get_port_ip_address_with_exception(self, mock_gfia):
        expected = "192.168.1.3"
        port = object_utils.create_test_port(self.context,
                                             node_id=self.node.id,
                                             address='aa:bb:cc:dd:ee:ff',
                                             uuid=uuidutils.generate_uuid(),
                                             driver='fake')
        mock_gfia.return_value = expected
        with task_manager.acquire(self.context,
                                  self.node.uuid) as task:
            api = dhcp_factory.DHCPFactory().provider
            self.assertRaises(exception.FailedToGetIPAddressOnPort,
                              api._get_port_ip_address, task, port,
                              mock.sentinel.client)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi._get_fixed_ip_address')
    def test__get_port_ip_address_for_portgroup_with_exception(
            self, mock_gfia):
        expected = "192.168.1.3"
        pg = object_utils.create_test_portgroup(self.context,
                                                node_id=self.node.id,
                                                address='aa:bb:cc:dd:ee:ff',
                                                uuid=uuidutils.generate_uuid(),
                                                driver='fake')
        mock_gfia.return_value = expected
        with task_manager.acquire(self.context,
                                  self.node.uuid) as task:
            api = dhcp_factory.DHCPFactory().provider
            self.assertRaises(exception.FailedToGetIPAddressOnPort,
                              api._get_port_ip_address, task, pg,
                              mock.sentinel.client)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi._get_fixed_ip_address')
    def test__get_ip_addresses_ports(self, mock_gfia):
        ip_address = '10.10.0.1'
        expected = [ip_address]
        port = object_utils.create_test_port(self.context,
                                             node_id=self.node.id,
                                             address='aa:bb:cc:dd:ee:ff',
                                             uuid=uuidutils.generate_uuid(),
                                             extra={'vif_port_id':
                                                    'test-vif-A'},
                                             driver='fake')
        mock_gfia.return_value = ip_address
        with task_manager.acquire(self.context, self.node.uuid) as task:
            api = dhcp_factory.DHCPFactory().provider
            result = api._get_ip_addresses(task, [port],
                                           mock.sentinel.client)
        self.assertEqual(expected, result)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi._get_fixed_ip_address')
    def test__get_ip_addresses_portgroup(self, mock_gfia):
        ip_address = '10.10.0.1'
        expected = [ip_address]
        pg = object_utils.create_test_portgroup(self.context,
                                                node_id=self.node.id,
                                                address='aa:bb:cc:dd:ee:ff',
                                                uuid=uuidutils.generate_uuid(),
                                                extra={'vif_port_id':
                                                       'test-vif-A'},
                                                driver='fake')
        mock_gfia.return_value = ip_address
        with task_manager.acquire(self.context, self.node.uuid) as task:
            api = dhcp_factory.DHCPFactory().provider
            result = api._get_ip_addresses(task, [pg], mock.sentinel.client)
        self.assertEqual(expected, result)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi._get_port_ip_address')
    def test_get_ip_addresses(self, get_ip_mock):
        ip_address = '10.10.0.1'
        expected = [ip_address]
        get_ip_mock.return_value = ip_address

        with task_manager.acquire(self.context, self.node.uuid) as task:
            api = dhcp_factory.DHCPFactory().provider
            result = api.get_ip_addresses(task)
            get_ip_mock.assert_called_once_with(task, task.ports[0], mock.ANY)
        self.assertEqual(expected, result)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi._get_port_ip_address')
    def test_get_ip_addresses_for_port_and_portgroup(self, get_ip_mock):
        object_utils.create_test_portgroup(self.context,
                                           node_id=self.node.id,
                                           address='aa:bb:cc:dd:ee:ff',
                                           uuid=uuidutils.generate_uuid(),
                                           extra={'vif_port_id':
                                                  'test-vif-A'},
                                           driver='fake')

        with task_manager.acquire(self.context, self.node.uuid) as task:
            api = dhcp_factory.DHCPFactory().provider
            api.get_ip_addresses(task)
            get_ip_mock.assert_has_calls(
                [mock.call(task, task.ports[0], mock.ANY),
                 mock.call(task, task.portgroups[0], mock.ANY)])

    @mock.patch.object(client.Client, 'create_port')
    def test_create_cleaning_ports(self, create_mock):
        # Ensure we can create cleaning ports for in band cleaning
        create_mock.return_value = {'port': self.neutron_port}
        expected = {self.ports[0].uuid: self.neutron_port['id']}
        api = dhcp_factory.DHCPFactory().provider

        with task_manager.acquire(self.context, self.node.uuid) as task:
            ports = api.create_cleaning_ports(task)
            self.assertEqual(expected, ports)
            create_mock.assert_called_once_with({'port': {
                'network_id': '00000000-0000-0000-0000-000000000000',
                'admin_state_up': True,
                'mac_address': self.ports[0].address}})

    @mock.patch.object(neutron.NeutronDHCPApi, '_rollback_cleaning_ports')
    @mock.patch.object(client.Client, 'create_port')
    def test_create_cleaning_ports_fail(self, create_mock, rollback_mock):
        # Check that if creating a port fails, the ports are cleaned up
        create_mock.side_effect = neutron_client_exc.ConnectionFailed
        api = dhcp_factory.DHCPFactory().provider

        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.NodeCleaningFailure,
                              api.create_cleaning_ports,
                              task)
            create_mock.assert_called_once_with({'port': {
                'network_id': '00000000-0000-0000-0000-000000000000',
                'admin_state_up': True,
                'mac_address': self.ports[0].address}})
            rollback_mock.assert_called_once_with(task)

    @mock.patch.object(neutron.NeutronDHCPApi, '_rollback_cleaning_ports')
    @mock.patch.object(client.Client, 'create_port')
    def test_create_cleaning_ports_fail_delayed(self, create_mock,
                                                rollback_mock):
        """Check ports are cleaned up on failure to create them

        This test checks that the port clean-up occurs when the port
        create call was successful, but the port in fact was not created.

        """
        # NOTE(pas-ha) this is trying to emulate the complex port object
        # with both methods and dictionary access with methods on elements
        mockport = mock.MagicMock()
        create_mock.return_value = mockport
        # fail only on second 'or' branch to fool lazy eval
        # and actually execute both expressions to assert on both mocks
        mockport.get.return_value = True
        mockitem = mock.Mock()
        mockport.__getitem__.return_value = mockitem
        mockitem.get.return_value = None
        api = dhcp_factory.DHCPFactory().provider

        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.NodeCleaningFailure,
                              api.create_cleaning_ports,
                              task)
            create_mock.assert_called_once_with({'port': {
                'network_id': '00000000-0000-0000-0000-000000000000',
                'admin_state_up': True,
                'mac_address': self.ports[0].address}})
            rollback_mock.assert_called_once_with(task)
            mockport.get.assert_called_once_with('port')
            mockitem.get.assert_called_once_with('id')
            mockport.__getitem__.assert_called_once_with('port')

    @mock.patch.object(client.Client, 'create_port')
    def test_create_cleaning_ports_bad_config(self, create_mock):
        # Check an error is raised if the cleaning network is not set
        self.config(cleaning_network_uuid=None, group='neutron')
        api = dhcp_factory.DHCPFactory().provider

        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              api.create_cleaning_ports, task)

    @mock.patch.object(client.Client, 'delete_port')
    @mock.patch.object(client.Client, 'list_ports')
    def test_delete_cleaning_ports(self, list_mock, delete_mock):
        # Ensure that we can delete cleaning ports, and that ports with
        # different macs don't get deleted
        other_port = {'id': '132f871f-eaec-4fed-9475-0d54465e0f01',
                      'mac_address': 'aa:bb:cc:dd:ee:ff'}
        list_mock.return_value = {'ports': [self.neutron_port, other_port]}
        api = dhcp_factory.DHCPFactory().provider

        with task_manager.acquire(self.context, self.node.uuid) as task:
            api.delete_cleaning_ports(task)
            list_mock.assert_called_once_with(
                network_id='00000000-0000-0000-0000-000000000000')
            delete_mock.assert_called_once_with(self.neutron_port['id'])

    @mock.patch.object(client.Client, 'list_ports')
    def test_delete_cleaning_ports_list_fail(self, list_mock):
        # Check that if listing ports fails, the node goes to cleanfail
        list_mock.side_effect = neutron_client_exc.ConnectionFailed
        api = dhcp_factory.DHCPFactory().provider

        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.NodeCleaningFailure,
                              api.delete_cleaning_ports,
                              task)
            list_mock.assert_called_once_with(
                network_id='00000000-0000-0000-0000-000000000000')

    @mock.patch.object(client.Client, 'delete_port')
    @mock.patch.object(client.Client, 'list_ports')
    def test_delete_cleaning_ports_delete_fail(self, list_mock, delete_mock):
        # Check that if deleting ports fails, the node goes to cleanfail
        list_mock.return_value = {'ports': [self.neutron_port]}
        delete_mock.side_effect = neutron_client_exc.ConnectionFailed
        api = dhcp_factory.DHCPFactory().provider

        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.NodeCleaningFailure,
                              api.delete_cleaning_ports,
                              task)
            list_mock.assert_called_once_with(
                network_id='00000000-0000-0000-0000-000000000000')
            delete_mock.assert_called_once_with(self.neutron_port['id'])

    def test_out_range_auth_strategy(self):
        self.assertRaises(ValueError, cfg.CONF.set_override,
                          'auth_strategy', 'fake', 'neutron',
                          enforce_type=True)
ironic-5.1.0/ironic/tests/unit/dhcp/__init__.py0000664000567000056710000000000012674513466022600 0ustar jenkinsjenkins00000000000000
ironic-5.1.0/ironic/tests/unit/dhcp/test_factory.py0000664000567000056710000000763512674513466023574 0ustar jenkinsjenkins00000000000000
# Copyright 2014 Rackspace, Inc.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import inspect

import mock
import stevedore

from ironic.common import dhcp_factory
from ironic.common import exception
from ironic.dhcp import base as base_class
from ironic.dhcp import neutron
from ironic.dhcp import none
from ironic.tests import base


class TestDHCPFactory(base.TestCase):

    def setUp(self):
        super(TestDHCPFactory, self).setUp()

        self.config(enabled_drivers=['fake'])
        self.config(url='test-url',
                    url_timeout=30,
                    group='neutron')
        dhcp_factory.DHCPFactory._dhcp_provider = None
        self.addCleanup(setattr, dhcp_factory.DHCPFactory,
                        '_dhcp_provider', None)

    def test_default_dhcp(self):
        # dhcp provider should default to neutron
        api = dhcp_factory.DHCPFactory()
        self.assertIsInstance(api.provider, neutron.NeutronDHCPApi)

    def test_set_none_dhcp(self):
        self.config(dhcp_provider='none',
                    group='dhcp')
        api = dhcp_factory.DHCPFactory()
        self.assertIsInstance(api.provider, none.NoneDHCPApi)

    def test_set_neutron_dhcp(self):
        self.config(dhcp_provider='neutron',
                    group='dhcp')
        api = dhcp_factory.DHCPFactory()
        self.assertIsInstance(api.provider, neutron.NeutronDHCPApi)

    def test_only_one_dhcp(self):
        self.config(dhcp_provider='none',
                    group='dhcp')
        dhcp_factory.DHCPFactory()

        with mock.patch.object(dhcp_factory.DHCPFactory,
                               '_set_dhcp_provider') as mock_set_dhcp:
            # There is already a dhcp_provider, so this shouldn't call
            # _set_dhcp_provider again.
            dhcp_factory.DHCPFactory()
            self.assertEqual(0, mock_set_dhcp.call_count)

    def test_set_bad_dhcp(self):
        self.config(dhcp_provider='bad_dhcp',
                    group='dhcp')
        self.assertRaises(exception.DHCPLoadError, dhcp_factory.DHCPFactory)

    @mock.patch.object(stevedore.driver, 'DriverManager', autospec=True)
    def test_dhcp_some_error(self, mock_drv_mgr):
        mock_drv_mgr.side_effect = Exception('No module mymod found.')
        self.assertRaises(exception.DHCPLoadError, dhcp_factory.DHCPFactory)


class CompareBasetoModules(base.TestCase):

    def test_drivers_match_dhcp_base(self):
        def _get_public_apis(inst):
            methods = {}
            for (name, value) in inspect.getmembers(inst, inspect.ismethod):
                if name.startswith("_"):
                    continue
                methods[name] = value
            return methods

        def _compare_classes(baseclass, driverclass):
            basemethods = _get_public_apis(baseclass)
            implmethods = _get_public_apis(driverclass)

            for name in basemethods:
                baseargs = inspect.getargspec(basemethods[name])
                implargs = inspect.getargspec(implmethods[name])
                self.assertEqual(
                    baseargs,
                    implargs,
                    "%s args of %s don't match base %s" % (
                        name, driverclass, baseclass)
                )

        _compare_classes(base_class.BaseDHCP, none.NoneDHCPApi)
        _compare_classes(base_class.BaseDHCP, neutron.NeutronDHCPApi)
ironic-5.1.0/ironic/tests/unit/raid_constants.py0000664000567000056710000001316212674513466023153 0ustar jenkinsjenkins00000000000000
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
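The RAID configuration constants that follow are exercised by test_raid.py. As a minimal sketch (not part of ironic's source), assuming the jsonschema library is available, a config string can be checked against a schema string such as CUSTOM_RAID_SCHEMA below; that schema uses the draft-4 boolean form of "exclusiveMinimum", so an explicit Draft4Validator is the safe choice:

    import json

    import jsonschema

    def validate_raid_config(config_json, schema_json):
        # Raises jsonschema.ValidationError for invalid configs: for
        # example, RAID_CONFIG_ADDITIONAL_PROP is rejected because the
        # schema sets additionalProperties to false, while
        # CUSTOM_SCHEMA_RAID_CONFIG passes against CUSTOM_RAID_SCHEMA.
        schema = json.loads(schema_json)
        jsonschema.Draft4Validator(schema).validate(json.loads(config_json))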
# Different RAID configurations for unit tests in test_raid.py RAID_CONFIG_OKAY = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "volume_name": "my-volume", "is_root_volume": true, "share_physical_disks": false, "disk_type": "ssd", "interface_type": "sas", "number_of_physical_disks": 2, "controller": "Smart Array P822 in Slot 2", "physical_disks": [ "5I:1:1", "5I:1:2" ] } ] } ''' RAID_CONFIG_NO_LOGICAL_DISKS = ''' { "logical_disks": [] } ''' RAID_CONFIG_NO_RAID_LEVEL = ''' { "logical_disks": [ { "size_gb": 100 } ] } ''' RAID_CONFIG_INVALID_RAID_LEVEL = ''' { "logical_disks": [ { "size_gb": 100, "raid_level": "foo" } ] } ''' RAID_CONFIG_NO_SIZE_GB = ''' { "logical_disks": [ { "raid_level": "1" } ] } ''' RAID_CONFIG_INVALID_SIZE_GB = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": "abcd" } ] } ''' RAID_CONFIG_MAX_SIZE_GB = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": "MAX" } ] } ''' RAID_CONFIG_INVALID_IS_ROOT_VOL = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "is_root_volume": "True" } ] } ''' RAID_CONFIG_MULTIPLE_IS_ROOT_VOL = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "is_root_volume": true }, { "raid_level": "1", "size_gb": 100, "is_root_volume": true } ] } ''' RAID_CONFIG_INVALID_SHARE_PHY_DISKS = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "share_physical_disks": "True" } ] } ''' RAID_CONFIG_INVALID_DISK_TYPE = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "disk_type": "foo" } ] } ''' RAID_CONFIG_INVALID_INT_TYPE = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "interface_type": "foo" } ] } ''' RAID_CONFIG_INVALID_NUM_PHY_DISKS = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "number_of_physical_disks": "a" } ] } ''' RAID_CONFIG_INVALID_PHY_DISKS = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "controller": "Smart Array P822 in Slot 2", "physical_disks": "5I:1:1" } ] } ''' RAID_CONFIG_ADDITIONAL_PROP = ''' { "logical_disks": [ { "raid_levelllllll": "1", "size_gb": 100 } ] } ''' CUSTOM_SCHEMA_RAID_CONFIG = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "foo": "bar" } ] } ''' CUSTOM_RAID_SCHEMA = ''' { "type": "object", "properties": { "logical_disks": { "type": "array", "items": { "type": "object", "properties": { "raid_level": { "type": "string", "enum": [ "0", "1", "2", "5", "6", "1+0" ], "description": "RAID level for the logical disk." }, "size_gb": { "type": "integer", "minimum": 0, "exclusiveMinimum": true, "description": "Size (Integer) for the logical disk." 
}, "foo": { "type": "string", "description": "property foo" } }, "required": ["raid_level", "size_gb"], "additionalProperties": false }, "minItems": 1 } }, "required": ["logical_disks"], "additionalProperties": false } ''' CURRENT_RAID_CONFIG = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "controller": "Smart Array P822 in Slot 2", "is_root_volume": true, "physical_disks": [ "5I:1:1", "5I:1:2" ], "root_device_hint": { "wwn": "600508B100" } } ] } ''' RAID_CONFIG_MULTIPLE_ROOT = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "controller": "Smart Array P822 in Slot 2", "is_root_volume": true, "physical_disks": [ "5I:1:1", "5I:1:2" ], "root_device_hint": { "wwn": "600508B100" } }, { "raid_level": "1", "size_gb": 100, "controller": "Smart Array P822 in Slot 2", "is_root_volume": true, "physical_disks": [ "5I:1:1", "5I:1:2" ], "root_device_hint": { "wwn": "600508B100" } } ] } ''' ironic-5.1.0/ironic/tests/unit/stubs.py0000664000567000056710000000752212674513466021303 0ustar jenkinsjenkins00000000000000# Copyright (c) 2011 Citrix Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from glanceclient import exc as glance_exc NOW_GLANCE_FORMAT = "2010-10-11T10:30:22" class StubGlanceClient(object): def __init__(self, images=None): self._images = [] _images = images or [] map(lambda image: self.create(**image), _images) # NOTE(bcwaldon): HACK to get client.images.* to work self.images = lambda: None for fn in ('list', 'get', 'data', 'create', 'update', 'delete'): setattr(self.images, fn, getattr(self, fn)) # TODO(bcwaldon): implement filters def list(self, filters=None, marker=None, limit=30): if marker is None: index = 0 else: for index, image in enumerate(self._images): if image.id == str(marker): index += 1 break else: raise glance_exc.BadRequest('Marker not found') return self._images[index:index + limit] def get(self, image_id): for image in self._images: if image.id == str(image_id): return image raise glance_exc.NotFound(image_id) def data(self, image_id): self.get(image_id) return [] def create(self, **metadata): metadata['created_at'] = NOW_GLANCE_FORMAT metadata['updated_at'] = NOW_GLANCE_FORMAT self._images.append(FakeImage(metadata)) try: image_id = str(metadata['id']) except KeyError: # auto-generate an id if one wasn't provided image_id = str(len(self._images)) self._images[-1].id = image_id return self._images[-1] def update(self, image_id, **metadata): for i, image in enumerate(self._images): if image.id == str(image_id): for k, v in metadata.items(): setattr(self._images[i], k, v) return self._images[i] raise glance_exc.NotFound(image_id) def delete(self, image_id): for i, image in enumerate(self._images): if image.id == image_id: # When you delete an image from glance, it sets the status to # DELETED. If you try to delete a DELETED image, it raises # HTTPForbidden. 
image_data = self._images[i] if image_data.deleted: raise glance_exc.Forbidden() image_data.deleted = True return raise glance_exc.NotFound(image_id) class FakeImage(object): def __init__(self, metadata): IMAGE_ATTRIBUTES = ['size', 'disk_format', 'owner', 'container_format', 'checksum', 'id', 'name', 'created_at', 'updated_at', 'deleted', 'status', 'min_disk', 'min_ram', 'is_public'] raw = dict.fromkeys(IMAGE_ATTRIBUTES) raw.update(metadata) self.__dict__['raw'] = raw def __getattr__(self, key): try: return self.__dict__['raw'][key] except KeyError: raise AttributeError(key) def __setattr__(self, key, value): try: self.__dict__['raw'][key] = value except KeyError: raise AttributeError(key) ironic-5.1.0/ironic/tests/unit/conductor/0000775000567000056710000000000012674513633021557 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/conductor/test_utils.py0000664000567000056710000007546512674513466024355 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils import uuidutils from ironic.common import driver_factory from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.conductor import utils as conductor_utils from ironic import objects from ironic.tests import base as tests_base from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base from ironic.tests.unit.db import utils from ironic.tests.unit.objects import utils as obj_utils class NodeSetBootDeviceTestCase(base.DbTestCase): def test_node_set_boot_device_non_existent_device(self): mgr_utils.mock_the_extension_manager(driver="fake_ipmitool") self.driver = driver_factory.get_driver("fake_ipmitool") ipmi_info = utils.get_test_ipmi_info() node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake_ipmitool', driver_info=ipmi_info) task = task_manager.TaskManager(self.context, node.uuid) self.assertRaises(exception.InvalidParameterValue, conductor_utils.node_set_boot_device, task, device='fake') def test_node_set_boot_device_valid(self): mgr_utils.mock_the_extension_manager(driver="fake_ipmitool") self.driver = driver_factory.get_driver("fake_ipmitool") ipmi_info = utils.get_test_ipmi_info() node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake_ipmitool', driver_info=ipmi_info) task = task_manager.TaskManager(self.context, node.uuid) with mock.patch.object(self.driver.management, 'set_boot_device') as mock_sbd: conductor_utils.node_set_boot_device(task, device='pxe') mock_sbd.assert_called_once_with(task, device='pxe', persistent=False) class NodePowerActionTestCase(base.DbTestCase): def setUp(self): super(NodePowerActionTestCase, self).setUp() mgr_utils.mock_the_extension_manager() self.driver = driver_factory.get_driver("fake") def test_node_power_action_power_on(self): """Test node_power_action to turn node power on.""" node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake', 
power_state=states.POWER_OFF) task = task_manager.TaskManager(self.context, node.uuid) with mock.patch.object(self.driver.power, 'get_power_state') as get_power_mock: get_power_mock.return_value = states.POWER_OFF conductor_utils.node_power_action(task, states.POWER_ON) node.refresh() get_power_mock.assert_called_once_with(mock.ANY) self.assertEqual(states.POWER_ON, node['power_state']) self.assertIsNone(node['target_power_state']) self.assertIsNone(node['last_error']) def test_node_power_action_power_off(self): """Test node_power_action to turn node power off.""" node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake', power_state=states.POWER_ON) task = task_manager.TaskManager(self.context, node.uuid) with mock.patch.object(self.driver.power, 'get_power_state') as get_power_mock: get_power_mock.return_value = states.POWER_ON conductor_utils.node_power_action(task, states.POWER_OFF) node.refresh() get_power_mock.assert_called_once_with(mock.ANY) self.assertEqual(states.POWER_OFF, node['power_state']) self.assertIsNone(node['target_power_state']) self.assertIsNone(node['last_error']) def test_node_power_action_power_reboot(self): """Test for reboot a node.""" node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake', power_state=states.POWER_ON) task = task_manager.TaskManager(self.context, node.uuid) with mock.patch.object(self.driver.power, 'reboot') as reboot_mock: conductor_utils.node_power_action(task, states.REBOOT) node.refresh() reboot_mock.assert_called_once_with(mock.ANY) self.assertEqual(states.POWER_ON, node['power_state']) self.assertIsNone(node['target_power_state']) self.assertIsNone(node['last_error']) def test_node_power_action_invalid_state(self): """Test for exception when changing to an invalid power state.""" node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake', power_state=states.POWER_ON) task = task_manager.TaskManager(self.context, node.uuid) with mock.patch.object(self.driver.power, 'get_power_state') as get_power_mock: get_power_mock.return_value = states.POWER_ON self.assertRaises(exception.InvalidParameterValue, conductor_utils.node_power_action, task, "INVALID_POWER_STATE") node.refresh() get_power_mock.assert_called_once_with(mock.ANY) self.assertEqual(states.POWER_ON, node['power_state']) self.assertIsNone(node['target_power_state']) self.assertIsNotNone(node['last_error']) # last_error is cleared when a new transaction happens conductor_utils.node_power_action(task, states.POWER_OFF) node.refresh() self.assertEqual(states.POWER_OFF, node['power_state']) self.assertIsNone(node['target_power_state']) self.assertIsNone(node['last_error']) def test_node_power_action_already_being_processed(self): """Test node power action after aborted power action. The target_power_state is expected to be None so it isn't checked in the code. This is what happens if it is not None. (Eg, if a conductor had died during a previous power-off attempt and left the target_power_state set to states.POWER_OFF, and the user is attempting to power-off again.) 
""" node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake', power_state=states.POWER_ON, target_power_state=states.POWER_OFF) task = task_manager.TaskManager(self.context, node.uuid) conductor_utils.node_power_action(task, states.POWER_OFF) node.refresh() self.assertEqual(states.POWER_OFF, node['power_state']) self.assertEqual(states.NOSTATE, node['target_power_state']) self.assertIsNone(node['last_error']) def test_node_power_action_in_same_state(self): """Test setting node state to its present state. Test that we don't try to set the power state if the requested state is the same as the current state. """ node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake', last_error='anything but None', power_state=states.POWER_ON) task = task_manager.TaskManager(self.context, node.uuid) with mock.patch.object(self.driver.power, 'get_power_state') as get_power_mock: get_power_mock.return_value = states.POWER_ON with mock.patch.object(self.driver.power, 'set_power_state') as set_power_mock: conductor_utils.node_power_action(task, states.POWER_ON) node.refresh() get_power_mock.assert_called_once_with(mock.ANY) self.assertFalse(set_power_mock.called, "set_power_state unexpectedly called") self.assertEqual(states.POWER_ON, node['power_state']) self.assertIsNone(node['target_power_state']) self.assertIsNone(node['last_error']) def test_node_power_action_in_same_state_db_not_in_sync(self): """Test setting node state to its present state if DB is out of sync. Under rare conditions (see bug #1403106) database might contain stale information, make sure we fix it. """ node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake', last_error='anything but None', power_state=states.POWER_ON) task = task_manager.TaskManager(self.context, node.uuid) with mock.patch.object(self.driver.power, 'get_power_state') as get_power_mock: get_power_mock.return_value = states.POWER_OFF with mock.patch.object(self.driver.power, 'set_power_state') as set_power_mock: conductor_utils.node_power_action(task, states.POWER_OFF) node.refresh() get_power_mock.assert_called_once_with(mock.ANY) self.assertFalse(set_power_mock.called, "set_power_state unexpectedly called") self.assertEqual(states.POWER_OFF, node['power_state']) self.assertIsNone(node['target_power_state']) self.assertIsNone(node['last_error']) def test_node_power_action_failed_getting_state(self): """Test for exception when we can't get the current power state.""" node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake', power_state=states.POWER_ON) task = task_manager.TaskManager(self.context, node.uuid) with mock.patch.object(self.driver.power, 'get_power_state') as get_power_state_mock: get_power_state_mock.side_effect = ( exception.InvalidParameterValue('failed getting power state')) self.assertRaises(exception.InvalidParameterValue, conductor_utils.node_power_action, task, states.POWER_ON) node.refresh() get_power_state_mock.assert_called_once_with(mock.ANY) self.assertEqual(states.POWER_ON, node['power_state']) self.assertIsNone(node['target_power_state']) self.assertIsNotNone(node['last_error']) def test_node_power_action_set_power_failure(self): """Test if an exception is thrown when the set_power call fails.""" node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake', power_state=states.POWER_OFF) task = task_manager.TaskManager(self.context, node.uuid) with 
mock.patch.object(self.driver.power, 'get_power_state') as get_power_mock: with mock.patch.object(self.driver.power, 'set_power_state') as set_power_mock: get_power_mock.return_value = states.POWER_OFF set_power_mock.side_effect = exception.IronicException() self.assertRaises( exception.IronicException, conductor_utils.node_power_action, task, states.POWER_ON) node.refresh() get_power_mock.assert_called_once_with(mock.ANY) set_power_mock.assert_called_once_with(mock.ANY, states.POWER_ON) self.assertEqual(states.POWER_OFF, node['power_state']) self.assertIsNone(node['target_power_state']) self.assertIsNotNone(node['last_error']) class CleanupAfterTimeoutTestCase(tests_base.TestCase): def setUp(self): super(CleanupAfterTimeoutTestCase, self).setUp() self.task = mock.Mock(spec=task_manager.TaskManager) self.task.context = mock.sentinel.context self.task.driver = mock.Mock(spec_set=['deploy']) self.task.shared = False self.task.node = mock.Mock(spec_set=objects.Node) self.node = self.task.node def test_cleanup_after_timeout(self): conductor_utils.cleanup_after_timeout(self.task) self.node.save.assert_called_once_with() self.task.driver.deploy.clean_up.assert_called_once_with(self.task) self.assertIn('Timeout reached', self.node.last_error) def test_cleanup_after_timeout_shared_lock(self): self.task.shared = True self.assertRaises(exception.ExclusiveLockRequired, conductor_utils.cleanup_after_timeout, self.task) def test_cleanup_after_timeout_cleanup_ironic_exception(self): clean_up_mock = self.task.driver.deploy.clean_up clean_up_mock.side_effect = exception.IronicException('moocow') conductor_utils.cleanup_after_timeout(self.task) self.task.driver.deploy.clean_up.assert_called_once_with(self.task) self.assertEqual([mock.call()] * 2, self.node.save.call_args_list) self.assertIn('moocow', self.node.last_error) def test_cleanup_after_timeout_cleanup_random_exception(self): clean_up_mock = self.task.driver.deploy.clean_up clean_up_mock.side_effect = Exception('moocow') conductor_utils.cleanup_after_timeout(self.task) self.task.driver.deploy.clean_up.assert_called_once_with(self.task) self.assertEqual([mock.call()] * 2, self.node.save.call_args_list) self.assertIn('Deploy timed out', self.node.last_error) class NodeCleaningStepsTestCase(base.DbTestCase): def setUp(self): super(NodeCleaningStepsTestCase, self).setUp() mgr_utils.mock_the_extension_manager() self.power_update = { 'step': 'update_firmware', 'priority': 10, 'interface': 'power'} self.deploy_update = { 'step': 'update_firmware', 'priority': 10, 'interface': 'deploy'} self.deploy_erase = { 'step': 'erase_disks', 'priority': 20, 'interface': 'deploy'} # Automated cleaning should be executed in this order self.clean_steps = [self.deploy_erase, self.power_update, self.deploy_update] # Manual clean step self.deploy_raid = { 'step': 'build_raid', 'priority': 0, 'interface': 'deploy', 'argsinfo': {'arg1': {'description': 'desc1', 'required': True}, 'arg2': {'description': 'desc2'}}} @mock.patch('ironic.drivers.modules.fake.FakeDeploy.get_clean_steps') @mock.patch('ironic.drivers.modules.fake.FakePower.get_clean_steps') def test__get_cleaning_steps(self, mock_power_steps, mock_deploy_steps): # Test getting cleaning steps, with one driver returning None, two # conflicting priorities, and asserting they are ordered properly. 
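# The expected result is self.clean_steps: erase_disks (priority 20) comes # first, followed by the two priority-10 update_firmware steps.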
node = obj_utils.create_test_node( self.context, driver='fake', provision_state=states.CLEANING, target_provision_state=states.AVAILABLE) mock_power_steps.return_value = [self.power_update] mock_deploy_steps.return_value = [self.deploy_erase, self.deploy_update] with task_manager.acquire( self.context, node.uuid, shared=False) as task: steps = conductor_utils._get_cleaning_steps(task, enabled=False) self.assertEqual(self.clean_steps, steps) @mock.patch('ironic.drivers.modules.fake.FakeDeploy.get_clean_steps') @mock.patch('ironic.drivers.modules.fake.FakePower.get_clean_steps') def test__get_cleaning_steps_unsorted(self, mock_power_steps, mock_deploy_steps): node = obj_utils.create_test_node( self.context, driver='fake', provision_state=states.CLEANING, target_provision_state=states.MANAGEABLE) mock_deploy_steps.return_value = [self.deploy_raid, self.deploy_update, self.deploy_erase] with task_manager.acquire( self.context, node.uuid, shared=False) as task: steps = conductor_utils._get_cleaning_steps(task, enabled=False, sort=False) self.assertEqual(mock_deploy_steps.return_value, steps) @mock.patch('ironic.drivers.modules.fake.FakeDeploy.get_clean_steps') @mock.patch('ironic.drivers.modules.fake.FakePower.get_clean_steps') def test__get_cleaning_steps_only_enabled(self, mock_power_steps, mock_deploy_steps): # Test getting only cleaning steps, with one driver returning None, two # conflicting priorities, and asserting they are ordered properly. # Should discard zero-priority (manual) clean step node = obj_utils.create_test_node( self.context, driver='fake', provision_state=states.CLEANING, target_provision_state=states.AVAILABLE) mock_power_steps.return_value = [self.power_update] mock_deploy_steps.return_value = [self.deploy_erase, self.deploy_update, self.deploy_raid] with task_manager.acquire( self.context, node.uuid, shared=True) as task: steps = conductor_utils._get_cleaning_steps(task, enabled=True) self.assertEqual(self.clean_steps, steps) @mock.patch.object(conductor_utils, '_validate_user_clean_steps') @mock.patch.object(conductor_utils, '_get_cleaning_steps') def test_set_node_cleaning_steps_automated(self, mock_steps, mock_validate_user_steps): mock_steps.return_value = self.clean_steps node = obj_utils.create_test_node( self.context, driver='fake', provision_state=states.CLEANING, target_provision_state=states.AVAILABLE, last_error=None, clean_step=None) with task_manager.acquire( self.context, node.uuid, shared=False) as task: conductor_utils.set_node_cleaning_steps(task) node.refresh() self.assertEqual(self.clean_steps, node.driver_internal_info['clean_steps']) self.assertEqual({}, node.clean_step) mock_steps.assert_called_once_with(task, enabled=True) self.assertFalse(mock_validate_user_steps.called) @mock.patch.object(conductor_utils, '_validate_user_clean_steps') @mock.patch.object(conductor_utils, '_get_cleaning_steps') def test_set_node_cleaning_steps_manual(self, mock_steps, mock_validate_user_steps): clean_steps = [self.deploy_raid] mock_steps.return_value = self.clean_steps node = obj_utils.create_test_node( self.context, driver='fake', provision_state=states.CLEANING, target_provision_state=states.MANAGEABLE, last_error=None, clean_step=None, driver_internal_info={'clean_steps': clean_steps}) with task_manager.acquire( self.context, node.uuid, shared=False) as task: conductor_utils.set_node_cleaning_steps(task) node.refresh() self.assertEqual(clean_steps, node.driver_internal_info['clean_steps']) self.assertEqual({}, node.clean_step) 
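# Manual cleaning (target_provision_state MANAGEABLE) reuses the # user-supplied steps already stored in driver_internal_info, so # _get_cleaning_steps is not consulted and the steps are validated instead.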
self.assertFalse(mock_steps.called) mock_validate_user_steps.assert_called_once_with(task, clean_steps) @mock.patch.object(conductor_utils, '_get_cleaning_steps') def test__validate_user_clean_steps(self, mock_steps): node = obj_utils.create_test_node(self.context) mock_steps.return_value = self.clean_steps user_steps = [{'step': 'update_firmware', 'interface': 'power'}, {'step': 'erase_disks', 'interface': 'deploy'}] with task_manager.acquire(self.context, node.uuid) as task: conductor_utils._validate_user_clean_steps(task, user_steps) mock_steps.assert_called_once_with(task, enabled=False, sort=False) @mock.patch.object(conductor_utils, '_get_cleaning_steps') def test__validate_user_clean_steps_no_steps(self, mock_steps): node = obj_utils.create_test_node(self.context) mock_steps.return_value = self.clean_steps with task_manager.acquire(self.context, node.uuid) as task: conductor_utils._validate_user_clean_steps(task, []) mock_steps.assert_called_once_with(task, enabled=False, sort=False) @mock.patch.object(conductor_utils, '_get_cleaning_steps') def test__validate_user_clean_steps_get_steps_exception(self, mock_steps): node = obj_utils.create_test_node(self.context) mock_steps.side_effect = exception.NodeCleaningFailure('bad') with task_manager.acquire(self.context, node.uuid) as task: self.assertRaises(exception.NodeCleaningFailure, conductor_utils._validate_user_clean_steps, task, []) mock_steps.assert_called_once_with(task, enabled=False, sort=False) @mock.patch.object(conductor_utils, '_get_cleaning_steps') def test__validate_user_clean_steps_not_supported(self, mock_steps): node = obj_utils.create_test_node(self.context) mock_steps.return_value = [self.power_update, self.deploy_raid] user_steps = [{'step': 'update_firmware', 'interface': 'power'}, {'step': 'bad_step', 'interface': 'deploy'}] with task_manager.acquire(self.context, node.uuid) as task: self.assertRaisesRegexp(exception.InvalidParameterValue, "does not support.*bad_step", conductor_utils._validate_user_clean_steps, task, user_steps) mock_steps.assert_called_once_with(task, enabled=False, sort=False) @mock.patch.object(conductor_utils, '_get_cleaning_steps') def test__validate_user_clean_steps_invalid_arg(self, mock_steps): node = obj_utils.create_test_node(self.context) mock_steps.return_value = self.clean_steps user_steps = [{'step': 'update_firmware', 'interface': 'power', 'args': {'arg1': 'val1', 'arg2': 'val2'}}, {'step': 'erase_disks', 'interface': 'deploy'}] with task_manager.acquire(self.context, node.uuid) as task: self.assertRaisesRegexp(exception.InvalidParameterValue, "update_firmware.*invalid.*arg1", conductor_utils._validate_user_clean_steps, task, user_steps) mock_steps.assert_called_once_with(task, enabled=False, sort=False) @mock.patch.object(conductor_utils, '_get_cleaning_steps') def test__validate_user_clean_steps_missing_required_arg(self, mock_steps): node = obj_utils.create_test_node(self.context) mock_steps.return_value = [self.power_update, self.deploy_raid] user_steps = [{'step': 'update_firmware', 'interface': 'power'}, {'step': 'build_raid', 'interface': 'deploy'}] with task_manager.acquire(self.context, node.uuid) as task: self.assertRaisesRegexp(exception.InvalidParameterValue, "build_raid.*missing.*arg1", conductor_utils._validate_user_clean_steps, task, user_steps) mock_steps.assert_called_once_with(task, enabled=False, sort=False) class ErrorHandlersTestCase(tests_base.TestCase): def setUp(self): super(ErrorHandlersTestCase, self).setUp() self.task = 
mock.Mock(spec=task_manager.TaskManager) self.task.driver = mock.Mock(spec_set=['deploy']) self.task.node = mock.Mock(spec_set=objects.Node) self.node = self.task.node @mock.patch.object(conductor_utils, 'LOG') def test_provision_error_handler_no_worker(self, log_mock): exc = exception.NoFreeConductorWorker() conductor_utils.provisioning_error_handler(exc, self.node, 'state-one', 'state-two') self.node.save.assert_called_once_with() self.assertEqual('state-one', self.node.provision_state) self.assertEqual('state-two', self.node.target_provision_state) self.assertIn('No free conductor workers', self.node.last_error) self.assertTrue(log_mock.warning.called) @mock.patch.object(conductor_utils, 'LOG') def test_provision_error_handler_other_error(self, log_mock): exc = Exception('foo') conductor_utils.provisioning_error_handler(exc, self.node, 'state-one', 'state-two') self.assertFalse(self.node.save.called) self.assertFalse(log_mock.warning.called) def test_cleaning_error_handler(self): self.node.provision_state = states.CLEANING target = 'baz' self.node.target_provision_state = target self.node.driver_internal_info = {} msg = 'error bar' conductor_utils.cleaning_error_handler(self.task, msg) self.node.save.assert_called_once_with() self.assertEqual({}, self.node.clean_step) self.assertFalse('clean_step_index' in self.node.driver_internal_info) self.assertEqual(msg, self.node.last_error) self.assertTrue(self.node.maintenance) self.assertEqual(msg, self.node.maintenance_reason) driver = self.task.driver.deploy driver.tear_down_cleaning.assert_called_once_with(self.task) self.task.process_event.assert_called_once_with('fail', target_state=None) def test_cleaning_error_handler_manual(self): target = states.MANAGEABLE self.node.target_provision_state = target conductor_utils.cleaning_error_handler(self.task, 'foo') self.task.process_event.assert_called_once_with('fail', target_state=target) def test_cleaning_error_handler_no_teardown(self): target = states.MANAGEABLE self.node.target_provision_state = target conductor_utils.cleaning_error_handler(self.task, 'foo', tear_down_cleaning=False) self.assertFalse(self.task.driver.deploy.tear_down_cleaning.called) self.task.process_event.assert_called_once_with('fail', target_state=target) def test_cleaning_error_handler_no_fail(self): conductor_utils.cleaning_error_handler(self.task, 'foo', set_fail_state=False) driver = self.task.driver.deploy driver.tear_down_cleaning.assert_called_once_with(self.task) self.assertFalse(self.task.process_event.called) @mock.patch.object(conductor_utils, 'LOG') def test_cleaning_error_handler_tear_down_error(self, log_mock): driver = self.task.driver.deploy driver.tear_down_cleaning.side_effect = Exception('bar') conductor_utils.cleaning_error_handler(self.task, 'foo') self.assertTrue(log_mock.exception.called) @mock.patch.object(conductor_utils, 'LOG') def test_spawn_cleaning_error_handler_no_worker(self, log_mock): exc = exception.NoFreeConductorWorker() conductor_utils.spawn_cleaning_error_handler(exc, self.node) self.node.save.assert_called_once_with() self.assertIn('No free conductor workers', self.node.last_error) self.assertTrue(log_mock.warning.called) @mock.patch.object(conductor_utils, 'LOG') def test_spawn_cleaning_error_handler_other_error(self, log_mock): exc = Exception('foo') conductor_utils.spawn_cleaning_error_handler(exc, self.node) self.assertFalse(self.node.save.called) self.assertFalse(log_mock.warning.called) @mock.patch.object(conductor_utils, 'LOG') def 
test_power_state_error_handler_no_worker(self, log_mock): exc = exception.NoFreeConductorWorker() conductor_utils.power_state_error_handler(exc, self.node, 'newstate') self.node.save.assert_called_once_with() self.assertEqual('newstate', self.node.power_state) self.assertEqual(states.NOSTATE, self.node.target_power_state) self.assertIn('No free conductor workers', self.node.last_error) self.assertTrue(log_mock.warning.called) @mock.patch.object(conductor_utils, 'LOG') def test_power_state_error_handler_other_error(self, log_mock): exc = Exception('foo') conductor_utils.power_state_error_handler(exc, self.node, 'foo') self.assertFalse(self.node.save.called) self.assertFalse(log_mock.warning.called) ironic-5.1.0/ironic/tests/unit/conductor/test_base_manager.py0000664000567000056710000002206412674513466025604 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Test class for Ironic BaseConductorManager.""" import eventlet import futurist from futurist import periodics import mock from oslo_config import cfg from oslo_db import exception as db_exception from ironic.common import driver_factory from ironic.common import exception from ironic.conductor import base_manager from ironic.conductor import manager from ironic.drivers import base as drivers_base from ironic import objects from ironic.tests import base as tests_base from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as tests_db_base from ironic.tests.unit.objects import utils as obj_utils CONF = cfg.CONF @mgr_utils.mock_record_keepalive class StartStopTestCase(mgr_utils.ServiceSetUpMixin, tests_db_base.DbTestCase): def test_start_registers_conductor(self): self.assertRaises(exception.ConductorNotFound, objects.Conductor.get_by_hostname, self.context, self.hostname) self._start_service() res = objects.Conductor.get_by_hostname(self.context, self.hostname) self.assertEqual(self.hostname, res['hostname']) def test_start_clears_conductor_locks(self): node = obj_utils.create_test_node(self.context, reservation=self.hostname) node.save() self._start_service() node.refresh() self.assertIsNone(node.reservation) def test_stop_unregisters_conductor(self): self._start_service() res = objects.Conductor.get_by_hostname(self.context, self.hostname) self.assertEqual(self.hostname, res['hostname']) self.service.del_host() self.assertRaises(exception.ConductorNotFound, objects.Conductor.get_by_hostname, self.context, self.hostname) def test_stop_doesnt_unregister_conductor(self): self._start_service() res = objects.Conductor.get_by_hostname(self.context, self.hostname) self.assertEqual(self.hostname, res['hostname']) self.service.del_host(deregister=False) res = objects.Conductor.get_by_hostname(self.context, self.hostname) self.assertEqual(self.hostname, res['hostname']) @mock.patch.object(manager.ConductorManager, 'init_host') def test_stop_uninitialized_conductor(self, mock_init): self._start_service() self.service.del_host() @mock.patch.object(driver_factory.DriverFactory, '__getitem__', lambda 
*args: mock.MagicMock()) def test_start_registers_driver_names(self): init_names = ['fake1', 'fake2'] restart_names = ['fake3', 'fake4'] df = driver_factory.DriverFactory() with mock.patch.object(df._extension_manager, 'names') as mock_names: # verify driver names are registered self.config(enabled_drivers=init_names) mock_names.return_value = init_names self._start_service() res = objects.Conductor.get_by_hostname(self.context, self.hostname) self.assertEqual(init_names, res['drivers']) self._stop_service() # verify that restart registers new driver names self.config(enabled_drivers=restart_names) mock_names.return_value = restart_names self._start_service() res = objects.Conductor.get_by_hostname(self.context, self.hostname) self.assertEqual(restart_names, res['drivers']) @mock.patch.object(driver_factory.DriverFactory, '__getitem__') def test_start_registers_driver_specific_tasks(self, get_mock): init_names = ['fake1'] self.config(enabled_drivers=init_names) class TestInterface(object): @periodics.periodic(spacing=100500) def iface(self): pass class Driver(object): core_interfaces = [] standard_interfaces = ['iface'] all_interfaces = core_interfaces + standard_interfaces iface = TestInterface() @periodics.periodic(spacing=42) def task(self, context): pass @drivers_base.driver_periodic_task() def deprecated_task(self, context): pass obj = Driver() get_mock.return_value = mock.Mock(obj=obj) with mock.patch.object( driver_factory.DriverFactory()._extension_manager, 'names') as mock_names: mock_names.return_value = init_names self._start_service(start_periodic_tasks=True) tasks = {c[0] for c in self.service._periodic_task_callables} for t in (obj.task, obj.iface.iface, obj.deprecated_task): self.assertTrue(periodics.is_periodic(t)) self.assertIn(t, tasks) @mock.patch.object(driver_factory.DriverFactory, '__init__') def test_start_fails_on_missing_driver(self, mock_df): mock_df.side_effect = exception.DriverNotFound('test') with mock.patch.object(self.dbapi, 'register_conductor') as mock_reg: self.assertRaises(exception.DriverNotFound, self.service.init_host) self.assertTrue(mock_df.called) self.assertFalse(mock_reg.called) @mock.patch.object(base_manager, 'LOG') @mock.patch.object(driver_factory, 'DriverFactory') def test_start_fails_on_no_driver(self, df_mock, log_mock): driver_factory_mock = mock.MagicMock(names=[]) df_mock.return_value = driver_factory_mock self.assertRaises(exception.NoDriversLoaded, self.service.init_host) self.assertTrue(log_mock.error.called) def test_prevent_double_start(self): self._start_service() self.assertRaisesRegexp(RuntimeError, 'already running', self.service.init_host) @mock.patch.object(base_manager, 'LOG') def test_warning_on_low_workers_pool(self, log_mock): CONF.set_override('workers_pool_size', 3, 'conductor') self._start_service() self.assertTrue(log_mock.warning.called) @mock.patch.object(eventlet.greenpool.GreenPool, 'waitall') def test_del_host_waits_on_workerpool(self, wait_mock): self._start_service() self.service.del_host() self.assertTrue(wait_mock.called) class KeepAliveTestCase(mgr_utils.ServiceSetUpMixin, tests_db_base.DbTestCase): def test__conductor_service_record_keepalive(self): self._start_service() # avoid wasting time at the event.wait() CONF.set_override('heartbeat_interval', 0, 'conductor') with mock.patch.object(self.dbapi, 'touch_conductor') as mock_touch: with mock.patch.object(self.service._keepalive_evt, 'is_set') as mock_is_set: mock_is_set.side_effect = [False, True] self.service._conductor_service_record_keepalive() 
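# is_set() returned False exactly once before flipping to True, so a single # touch_conductor heartbeat is expected.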
mock_touch.assert_called_once_with(self.hostname) def test__conductor_service_record_keepalive_failed_db_conn(self): self._start_service() # avoid wasting time at the event.wait() CONF.set_override('heartbeat_interval', 0, 'conductor') with mock.patch.object(self.dbapi, 'touch_conductor') as mock_touch: mock_touch.side_effect = [None, db_exception.DBConnectionError(), None] with mock.patch.object(self.service._keepalive_evt, 'is_set') as mock_is_set: mock_is_set.side_effect = [False, False, False, True] self.service._conductor_service_record_keepalive() self.assertEqual(3, mock_touch.call_count) class ManagerSpawnWorkerTestCase(tests_base.TestCase): def setUp(self): super(ManagerSpawnWorkerTestCase, self).setUp() self.service = manager.ConductorManager('hostname', 'test-topic') self.executor = mock.Mock(spec=futurist.GreenThreadPoolExecutor) self.service._executor = self.executor def test__spawn_worker(self): self.service._spawn_worker('fake', 1, 2, foo='bar', cat='meow') self.executor.submit.assert_called_once_with( 'fake', 1, 2, foo='bar', cat='meow') def test__spawn_worker_none_free(self): self.executor.submit.side_effect = futurist.RejectedSubmission() self.assertRaises(exception.NoFreeConductorWorker, self.service._spawn_worker, 'fake') ironic-5.1.0/ironic/tests/unit/conductor/test_manager.py0000664000567000056710000067532312674513470024621 0ustar jenkinsjenkins00000000000000# coding=utf-8 # Copyright 2013 Hewlett-Packard Development Company, L.P. # Copyright 2013 International Business Machines Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Test class for Ironic ManagerService.""" import datetime import eventlet import mock from oslo_config import cfg import oslo_messaging as messaging from oslo_utils import uuidutils from oslo_versionedobjects import base as ovo_base from oslo_versionedobjects import fields from ironic.common import boot_devices from ironic.common import driver_factory from ironic.common import exception from ironic.common import images from ironic.common import states from ironic.common import swift from ironic.conductor import manager from ironic.conductor import task_manager from ironic.conductor import utils as conductor_utils from ironic.db import api as dbapi from ironic.drivers import base as drivers_base from ironic.drivers.modules import fake from ironic import objects from ironic.objects import base as obj_base from ironic.tests import base as tests_base from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as tests_db_base from ironic.tests.unit.db import utils from ironic.tests.unit.objects import utils as obj_utils CONF = cfg.CONF @mgr_utils.mock_record_keepalive class ChangeNodePowerStateTestCase(mgr_utils.ServiceSetUpMixin, tests_db_base.DbTestCase): def test_change_node_power_state_power_on(self): # Test change_node_power_state including integration with # conductor.utils.node_power_action and lower. 
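# A successful power-on sets power_state to POWER_ON, clears # target_power_state and last_error, and releases the reservation via the # background task's link callback.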
node = obj_utils.create_test_node(self.context, driver='fake', power_state=states.POWER_OFF) self._start_service() with mock.patch.object(self.driver.power, 'get_power_state') as get_power_mock: get_power_mock.return_value = states.POWER_OFF self.service.change_node_power_state(self.context, node.uuid, states.POWER_ON) self._stop_service() get_power_mock.assert_called_once_with(mock.ANY) node.refresh() self.assertEqual(states.POWER_ON, node.power_state) self.assertIsNone(node.target_power_state) self.assertIsNone(node.last_error) # Verify the reservation has been cleared by # background task's link callback. self.assertIsNone(node.reservation) @mock.patch.object(conductor_utils, 'node_power_action') def test_change_node_power_state_node_already_locked(self, pwr_act_mock): # Test change_node_power_state with mocked # conductor.utils.node_power_action. fake_reservation = 'fake-reserv' pwr_state = states.POWER_ON node = obj_utils.create_test_node(self.context, driver='fake', power_state=pwr_state, reservation=fake_reservation) self._start_service() exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.change_node_power_state, self.context, node.uuid, states.POWER_ON) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NodeLocked, exc.exc_info[0]) # In this test worker should not be spawned, but waiting to make sure # the below perform_mock assertion is valid. self._stop_service() self.assertFalse(pwr_act_mock.called, 'node_power_action has been ' 'unexpectedly called.') # Verify existing reservation wasn't broken. node.refresh() self.assertEqual(fake_reservation, node.reservation) def test_change_node_power_state_worker_pool_full(self): # Test change_node_power_state including integration with # conductor.utils.node_power_action and lower. initial_state = states.POWER_OFF node = obj_utils.create_test_node(self.context, driver='fake', power_state=initial_state) self._start_service() with mock.patch.object(self.service, '_spawn_worker') as spawn_mock: spawn_mock.side_effect = exception.NoFreeConductorWorker() exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.change_node_power_state, self.context, node.uuid, states.POWER_ON) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NoFreeConductorWorker, exc.exc_info[0]) spawn_mock.assert_called_once_with(mock.ANY, mock.ANY, mock.ANY) node.refresh() self.assertEqual(initial_state, node.power_state) self.assertIsNone(node.target_power_state) self.assertIsNotNone(node.last_error) # Verify the picked reservation has been cleared due to full pool. self.assertIsNone(node.reservation) def test_change_node_power_state_exception_in_background_task( self): # Test change_node_power_state including integration with # conductor.utils.node_power_action and lower. 
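# Here set_power_state raises PowerStateFailure in the background task; the # node must keep its original power state, record the failure in # last_error, and still have its reservation cleared.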
initial_state = states.POWER_OFF node = obj_utils.create_test_node(self.context, driver='fake', power_state=initial_state) self._start_service() with mock.patch.object(self.driver.power, 'get_power_state') as get_power_mock: get_power_mock.return_value = states.POWER_OFF with mock.patch.object(self.driver.power, 'set_power_state') as set_power_mock: new_state = states.POWER_ON set_power_mock.side_effect = exception.PowerStateFailure( pstate=new_state ) self.service.change_node_power_state(self.context, node.uuid, new_state) self._stop_service() get_power_mock.assert_called_once_with(mock.ANY) set_power_mock.assert_called_once_with(mock.ANY, new_state) node.refresh() self.assertEqual(initial_state, node.power_state) self.assertIsNone(node.target_power_state) self.assertIsNotNone(node.last_error) # Verify the reservation has been cleared by background task's # link callback despite exception in background task. self.assertIsNone(node.reservation) def test_change_node_power_state_validate_fail(self): # Test change_node_power_state where task.driver.power.validate # fails and raises an exception initial_state = states.POWER_ON node = obj_utils.create_test_node(self.context, driver='fake', power_state=initial_state) self._start_service() with mock.patch.object(self.driver.power, 'validate') as validate_mock: validate_mock.side_effect = exception.InvalidParameterValue( 'wrong power driver info') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.change_node_power_state, self.context, node.uuid, states.POWER_ON) self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0]) node.refresh() validate_mock.assert_called_once_with(mock.ANY) self.assertEqual(states.POWER_ON, node.power_state) self.assertIsNone(node.target_power_state) self.assertIsNone(node.last_error) @mgr_utils.mock_record_keepalive class UpdateNodeTestCase(mgr_utils.ServiceSetUpMixin, tests_db_base.DbTestCase): def test_update_node(self): node = obj_utils.create_test_node(self.context, driver='fake', extra={'test': 'one'}) # check that ManagerService.update_node actually updates the node node.extra = {'test': 'two'} res = self.service.update_node(self.context, node) self.assertEqual({'test': 'two'}, res['extra']) def test_update_node_clears_maintenance_reason(self): node = obj_utils.create_test_node(self.context, driver='fake', maintenance=True, maintenance_reason='reason') # check that ManagerService.update_node actually updates the node node.maintenance = False res = self.service.update_node(self.context, node) self.assertFalse(res['maintenance']) self.assertIsNone(res['maintenance_reason']) def test_update_node_already_locked(self): node = obj_utils.create_test_node(self.context, driver='fake', extra={'test': 'one'}) # check that it fails if something else has locked it already with task_manager.acquire(self.context, node['id'], shared=False): node.extra = {'test': 'two'} exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.update_node, self.context, node) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NodeLocked, exc.exc_info[0]) # verify change did not happen res = objects.Node.get_by_uuid(self.context, node['uuid']) self.assertEqual({'test': 'one'}, res['extra']) @mock.patch('ironic.drivers.modules.fake.FakePower.get_power_state') def _test_associate_node(self, power_state, mock_get_power_state): mock_get_power_state.return_value = power_state node = obj_utils.create_test_node(self.context, driver='fake', instance_uuid=None, 
@mgr_utils.mock_record_keepalive
class UpdateNodeTestCase(mgr_utils.ServiceSetUpMixin,
                         tests_db_base.DbTestCase):
    def test_update_node(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          extra={'test': 'one'})

        # check that ManagerService.update_node actually updates the node
        node.extra = {'test': 'two'}
        res = self.service.update_node(self.context, node)
        self.assertEqual({'test': 'two'}, res['extra'])

    def test_update_node_clears_maintenance_reason(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          maintenance=True,
                                          maintenance_reason='reason')

        # check that ManagerService.update_node actually updates the node
        node.maintenance = False
        res = self.service.update_node(self.context, node)
        self.assertFalse(res['maintenance'])
        self.assertIsNone(res['maintenance_reason'])

    def test_update_node_already_locked(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          extra={'test': 'one'})

        # check that it fails if something else has locked it already
        with task_manager.acquire(self.context, node['id'], shared=False):
            node.extra = {'test': 'two'}
            exc = self.assertRaises(messaging.rpc.ExpectedException,
                                    self.service.update_node,
                                    self.context, node)
            # Compare true exception hidden by @messaging.expected_exceptions
            self.assertEqual(exception.NodeLocked, exc.exc_info[0])

        # verify change did not happen
        res = objects.Node.get_by_uuid(self.context, node['uuid'])
        self.assertEqual({'test': 'one'}, res['extra'])

    @mock.patch('ironic.drivers.modules.fake.FakePower.get_power_state')
    def _test_associate_node(self, power_state, mock_get_power_state):
        mock_get_power_state.return_value = power_state
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          instance_uuid=None,
                                          power_state=states.NOSTATE)
        node.instance_uuid = 'fake-uuid'
        self.service.update_node(self.context, node)

        # Check if the change was applied
        node.instance_uuid = 'meow'
        node.refresh()
        self.assertEqual('fake-uuid', node.instance_uuid)

    def test_associate_node_powered_off(self):
        self._test_associate_node(states.POWER_OFF)

    def test_associate_node_powered_on(self):
        self._test_associate_node(states.POWER_ON)

    def test_update_node_invalid_driver(self):
        existing_driver = 'fake'
        wrong_driver = 'wrong-driver'
        node = obj_utils.create_test_node(self.context,
                                          driver=existing_driver,
                                          extra={'test': 'one'},
                                          instance_uuid=None,
                                          task_state=states.POWER_ON)
        # check that it fails because driver not found
        node.driver = wrong_driver
        node.driver_info = {}
        self.assertRaises(exception.DriverNotFound,
                          self.service.update_node, self.context, node)

        # verify change did not happen
        node.refresh()
        self.assertEqual(existing_driver, node.driver)


@mgr_utils.mock_record_keepalive
class VendorPassthruTestCase(mgr_utils.ServiceSetUpMixin,
                             tests_db_base.DbTestCase):

    @mock.patch.object(task_manager.TaskManager, 'spawn_after')
    def test_vendor_passthru_async(self, mock_spawn):
        node = obj_utils.create_test_node(self.context, driver='fake')
        info = {'bar': 'baz'}
        self._start_service()

        response = self.service.vendor_passthru(self.context, node.uuid,
                                                'first_method', 'POST',
                                                info)
        # Waiting to make sure the below assertions are valid.
        self._stop_service()

        # Assert spawn_after was called
        self.assertTrue(mock_spawn.called)
        self.assertIsNone(response['return'])
        self.assertTrue(response['async'])

        node.refresh()
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)

    @mock.patch.object(task_manager.TaskManager, 'spawn_after')
    def test_vendor_passthru_sync(self, mock_spawn):
        node = obj_utils.create_test_node(self.context, driver='fake')
        info = {'bar': 'meow'}
        self._start_service()

        response = self.service.vendor_passthru(self.context, node.uuid,
                                                'third_method_sync',
                                                'POST', info)
        # Waiting to make sure the below assertions are valid.
        self._stop_service()

        # Assert no workers were used
        self.assertFalse(mock_spawn.called)
        self.assertTrue(response['return'])
        self.assertFalse(response['async'])

        node.refresh()
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)

    def test_vendor_passthru_http_method_not_supported(self):
        node = obj_utils.create_test_node(self.context, driver='fake')
        self._start_service()

        # GET not supported by first_method
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.vendor_passthru,
                                self.context, node.uuid,
                                'first_method', 'GET', {})
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0])

        node.refresh()
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)

    def test_vendor_passthru_node_already_locked(self):
        fake_reservation = 'test_reserv'
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          reservation=fake_reservation)
        info = {'bar': 'baz'}
        self._start_service()

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.vendor_passthru,
                                self.context, node.uuid,
                                'first_method', 'POST', info)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeLocked, exc.exc_info[0])

        node.refresh()
        self.assertIsNone(node.last_error)
        # Verify the existing reservation is not broken.
        self.assertEqual(fake_reservation, node.reservation)

    def test_vendor_passthru_unsupported_method(self):
        node = obj_utils.create_test_node(self.context, driver='fake')
        info = {'bar': 'baz'}
        self._start_service()

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.vendor_passthru,
                                self.context, node.uuid,
                                'unsupported_method', 'POST', info)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0])

        node.refresh()
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)

    def test_vendor_passthru_missing_method_parameters(self):
        node = obj_utils.create_test_node(self.context, driver='fake')
        info = {'invalid_param': 'whatever'}
        self._start_service()

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.vendor_passthru,
                                self.context, node.uuid,
                                'first_method', 'POST', info)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.MissingParameterValue, exc.exc_info[0])

        node.refresh()
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)

    def test_vendor_passthru_vendor_interface_not_supported(self):
        node = obj_utils.create_test_node(self.context, driver='fake')
        info = {'bar': 'baz'}
        self.driver.vendor = None
        self._start_service()

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.vendor_passthru,
                                self.context, node.uuid,
                                'whatever_method', 'POST', info)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.UnsupportedDriverExtension,
                         exc.exc_info[0])

        node.refresh()
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)

    def test_vendor_passthru_worker_pool_full(self):
        node = obj_utils.create_test_node(self.context, driver='fake')
        info = {'bar': 'baz'}
        self._start_service()

        with mock.patch.object(self.service, '_spawn_worker') as spawn_mock:
            spawn_mock.side_effect = exception.NoFreeConductorWorker()

            exc = self.assertRaises(messaging.rpc.ExpectedException,
                                    self.service.vendor_passthru,
                                    self.context, node.uuid,
                                    'first_method', 'POST', info)
            # Compare true exception hidden by @messaging.expected_exceptions
            self.assertEqual(exception.NoFreeConductorWorker,
                             exc.exc_info[0])

            # Waiting to make sure the below assertions are valid.
            self._stop_service()

            node.refresh()
            self.assertIsNone(node.last_error)
            # Verify reservation has been cleared.
            self.assertIsNone(node.reservation)

    def test_get_node_vendor_passthru_methods(self):
        node = obj_utils.create_test_node(self.context, driver='fake')
        fake_routes = {'test_method': {'async': True,
                                       'description': 'foo',
                                       'http_methods': ['POST'],
                                       'func': None}}
        self.driver.vendor.vendor_routes = fake_routes
        self._start_service()

        data = self.service.get_node_vendor_passthru_methods(self.context,
                                                             node.uuid)
        # The function reference should not be returned
        del fake_routes['test_method']['func']
        self.assertEqual(fake_routes, data)

    def test_get_node_vendor_passthru_methods_not_supported(self):
        node = obj_utils.create_test_node(self.context, driver='fake')
        self.driver.vendor = None
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.get_node_vendor_passthru_methods,
                                self.context, node.uuid)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.UnsupportedDriverExtension,
                         exc.exc_info[0])

    @mock.patch.object(manager.ConductorManager, '_spawn_worker')
    def test_driver_vendor_passthru_sync(self, mock_spawn):
        expected = {'foo': 'bar'}
        self.driver.vendor = mock.Mock(spec=drivers_base.VendorInterface)
        test_method = mock.MagicMock(return_value=expected)
        self.driver.vendor.driver_routes = {
            'test_method': {'func': test_method,
                            'async': False,
                            'attach': False,
                            'http_methods': ['POST']}}
        self.service.init_host()
        # init_host() called _spawn_worker because of the heartbeat
        mock_spawn.reset_mock()

        vendor_args = {'test': 'arg'}
        response = self.service.driver_vendor_passthru(
            self.context, 'fake', 'test_method', 'POST', vendor_args)

        # Assert that the vendor interface has no custom
        # driver_vendor_passthru()
        self.assertFalse(hasattr(self.driver.vendor,
                                 'driver_vendor_passthru'))
        self.assertEqual(expected, response['return'])
        self.assertFalse(response['async'])
        test_method.assert_called_once_with(self.context, **vendor_args)
        # No worker was spawned
        self.assertFalse(mock_spawn.called)

    @mock.patch.object(manager.ConductorManager, '_spawn_worker')
    def test_driver_vendor_passthru_async(self, mock_spawn):
        self.driver.vendor = mock.Mock(spec=drivers_base.VendorInterface)
        test_method = mock.MagicMock()
        self.driver.vendor.driver_routes = {
            'test_sync_method': {'func': test_method,
                                 'async': True,
                                 'attach': False,
                                 'http_methods': ['POST']}}
        self.service.init_host()
        # init_host() called _spawn_worker because of the heartbeat
        mock_spawn.reset_mock()

        vendor_args = {'test': 'arg'}
        response = self.service.driver_vendor_passthru(
            self.context, 'fake', 'test_sync_method', 'POST', vendor_args)

        # Assert that the vendor interface has no custom
        # driver_vendor_passthru()
        self.assertFalse(hasattr(self.driver.vendor,
                                 'driver_vendor_passthru'))
        self.assertIsNone(response['return'])
        self.assertTrue(response['async'])
        mock_spawn.assert_called_once_with(test_method, self.context,
                                           **vendor_args)

    def test_driver_vendor_passthru_http_method_not_supported(self):
        self.driver.vendor = mock.Mock(spec=drivers_base.VendorInterface)
        self.driver.vendor.driver_routes = {
            'test_method': {'func': mock.MagicMock(),
                            'async': True,
                            'http_methods': ['POST']}}
        self.service.init_host()

        # GET not supported by test_method
        exc = self.assertRaises(messaging.ExpectedException,
                                self.service.driver_vendor_passthru,
                                self.context, 'fake', 'test_method',
                                'GET', {})
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0])
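    # NOTE: driver_routes/vendor_routes map a passthru method name to a dict
    # of metadata ('func', 'async', 'http_methods', and optionally
    # 'description' and 'attach'); the dispatch tests here exercise lookups
    # against that mapping.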
    def test_driver_vendor_passthru_vendor_interface_not_supported(self):
        # Test for when no vendor interface is set at all
        self.driver.vendor = None
        self.service.init_host()

        exc = self.assertRaises(messaging.ExpectedException,
                                self.service.driver_vendor_passthru,
                                self.context, 'fake', 'test_method',
                                'POST', {})
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.UnsupportedDriverExtension,
                         exc.exc_info[0])

    def test_driver_vendor_passthru_method_not_supported(self):
        # Test for when the vendor interface is set, but hasn't passed a
        # driver_passthru_mapping to MixinVendorInterface
        self.service.init_host()
        exc = self.assertRaises(messaging.ExpectedException,
                                self.service.driver_vendor_passthru,
                                self.context, 'fake', 'test_method',
                                'POST', {})
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0])

    def test_driver_vendor_passthru_driver_not_found(self):
        self.service.init_host()
        self.assertRaises(messaging.ExpectedException,
                          self.service.driver_vendor_passthru,
                          self.context, 'does_not_exist', 'test_method',
                          'POST', {})

    def test_get_driver_vendor_passthru_methods(self):
        self.driver.vendor = mock.Mock(spec=drivers_base.VendorInterface)
        fake_routes = {'test_method': {'async': True,
                                       'description': 'foo',
                                       'http_methods': ['POST'],
                                       'func': None}}
        self.driver.vendor.driver_routes = fake_routes
        self.service.init_host()

        data = self.service.get_driver_vendor_passthru_methods(self.context,
                                                               'fake')
        # The function reference should not be returned
        del fake_routes['test_method']['func']
        self.assertEqual(fake_routes, data)

    def test_get_driver_vendor_passthru_methods_not_supported(self):
        self.service.init_host()
        self.driver.vendor = None
        exc = self.assertRaises(
            messaging.rpc.ExpectedException,
            self.service.get_driver_vendor_passthru_methods,
            self.context, 'fake')
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.UnsupportedDriverExtension,
                         exc.exc_info[0])

    @mock.patch.object(drivers_base.VendorInterface, 'driver_validate')
    def test_driver_vendor_passthru_validation_failed(self, validate_mock):
        validate_mock.side_effect = exception.MissingParameterValue('error')
        test_method = mock.Mock()
        self.driver.vendor.driver_routes = {
            'test_method': {'func': test_method,
                            'async': False,
                            'http_methods': ['POST']}}
        self.service.init_host()
        exc = self.assertRaises(messaging.ExpectedException,
                                self.service.driver_vendor_passthru,
                                self.context, 'fake', 'test_method',
                                'POST', {})
        self.assertEqual(exception.MissingParameterValue, exc.exc_info[0])
        self.assertFalse(test_method.called)

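# NOTE: the class-level @mock.patch.object(images, 'is_whole_disk_image')
# decorator below injects the same mock (mock_iwdi) into every test method
# of the class, which is why each test signature accepts it.
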
@mgr_utils.mock_record_keepalive
@mock.patch.object(images, 'is_whole_disk_image')
class ServiceDoNodeDeployTestCase(mgr_utils.ServiceSetUpMixin,
                                  tests_db_base.DbTestCase):
    def test_do_node_deploy_invalid_state(self, mock_iwdi):
        mock_iwdi.return_value = False
        self._start_service()
        # test that node deploy fails if the node is already provisioned
        node = obj_utils.create_test_node(
            self.context, driver='fake', provision_state=states.ACTIVE,
            target_provision_state=states.NOSTATE)
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_deploy,
                                self.context, node['uuid'])
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidStateRequested, exc.exc_info[0])
        # This is a sync operation; last_error should be None.
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)
        mock_iwdi.assert_called_once_with(self.context, node.instance_info)
        self.assertNotIn('is_whole_disk_image', node.driver_internal_info)

    def test_do_node_deploy_maintenance(self, mock_iwdi):
        mock_iwdi.return_value = False
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          maintenance=True)
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_deploy,
                                self.context, node['uuid'])
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeInMaintenance, exc.exc_info[0])
        # This is a sync operation; last_error should be None.
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)
        self.assertFalse(mock_iwdi.called)

    def _test_do_node_deploy_validate_fail(self, mock_validate, mock_iwdi):
        mock_iwdi.return_value = False
        # InvalidParameterValue should be re-raised as InstanceDeployFailure
        mock_validate.side_effect = exception.InvalidParameterValue('error')
        node = obj_utils.create_test_node(self.context, driver='fake')
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_deploy,
                                self.context, node.uuid)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InstanceDeployFailure, exc.exc_info[0])
        # Check the message of InstanceDeployFailure. In a
        # messaging.rpc.ExpectedException sys.exc_info() is stored in exc_info
        # in the exception object. So InstanceDeployFailure will be in
        # exc_info[1]
        self.assertIn(r'node 1be26c0b-03f2-4d2e-ae87-c02d7f33c123',
                      str(exc.exc_info[1]))
        # This is a sync operation; last_error should be None.
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)
        mock_iwdi.assert_called_once_with(self.context, node.instance_info)
        self.assertNotIn('is_whole_disk_image', node.driver_internal_info)

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.validate')
    def test_do_node_deploy_validate_fail(self, mock_validate, mock_iwdi):
        self._test_do_node_deploy_validate_fail(mock_validate, mock_iwdi)

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def test_do_node_deploy_power_validate_fail(self, mock_validate,
                                                mock_iwdi):
        self._test_do_node_deploy_validate_fail(mock_validate, mock_iwdi)

    @mock.patch('ironic.conductor.task_manager.TaskManager.process_event')
    def test_deploy_with_nostate_converts_to_available(self, mock_pe,
                                                       mock_iwdi):
        # expressly create a node using the Juno-era NOSTATE state
        # and assert that it does not result in an error, and that the state
        # is converted to the new AVAILABLE state.
        # Mock the process_event call, because the transitions from
        # AVAILABLE are tested thoroughly elsewhere
        # NOTE(deva): This test can be deleted after Kilo is released
        mock_iwdi.return_value = False
        self._start_service()
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          provision_state=states.NOSTATE)
        self.assertEqual(states.NOSTATE, node.provision_state)
        self.service.do_node_deploy(self.context, node.uuid)
        self.assertTrue(mock_pe.called)
        node.refresh()
        self.assertEqual(states.AVAILABLE, node.provision_state)
        mock_iwdi.assert_called_once_with(self.context, node.instance_info)
        self.assertFalse(node.driver_internal_info['is_whole_disk_image'])

    def test_do_node_deploy_partial_ok(self, mock_iwdi):
        mock_iwdi.return_value = False
        self._start_service()
        thread = self.service._spawn_worker(lambda: None)
        with mock.patch.object(self.service, '_spawn_worker') as mock_spawn:
            mock_spawn.return_value = thread

            node = obj_utils.create_test_node(
                self.context, driver='fake',
                provision_state=states.AVAILABLE)

            self.service.do_node_deploy(self.context, node.uuid)
            self._stop_service()
            node.refresh()
            self.assertEqual(states.DEPLOYING, node.provision_state)
            self.assertEqual(states.ACTIVE, node.target_provision_state)
            # This is a sync operation; last_error should be None.
            self.assertIsNone(node.last_error)
            # Verify reservation has been cleared.
            self.assertIsNone(node.reservation)
            mock_spawn.assert_called_once_with(mock.ANY, mock.ANY,
                                               mock.ANY, None)
            mock_iwdi.assert_called_once_with(self.context,
                                              node.instance_info)
            self.assertFalse(node.driver_internal_info['is_whole_disk_image'])

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.deploy')
    def test_do_node_deploy_rebuild_active_state(self, mock_deploy,
                                                 mock_iwdi):
        # This tests manager.do_node_deploy(), the 'else' path of
        # 'if new_state == states.DEPLOYDONE'. The node's states
        # aren't changed in this case.
        mock_iwdi.return_value = True
        self._start_service()
        mock_deploy.return_value = states.DEPLOYING
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.ACTIVE,
            target_provision_state=states.NOSTATE,
            instance_info={'image_source': uuidutils.generate_uuid(),
                           'kernel': 'aaaa', 'ramdisk': 'bbbb'},
            driver_internal_info={'is_whole_disk_image': False})

        self.service.do_node_deploy(self.context, node.uuid, rebuild=True)
        self._stop_service()
        node.refresh()
        self.assertEqual(states.DEPLOYING, node.provision_state)
        self.assertEqual(states.ACTIVE, node.target_provision_state)
        # last_error should be None.
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)
        mock_deploy.assert_called_once_with(mock.ANY)
        # Verify instance_info values have been cleared.
        self.assertNotIn('kernel', node.instance_info)
        self.assertNotIn('ramdisk', node.instance_info)
        mock_iwdi.assert_called_once_with(self.context, node.instance_info)
        # Verify is_whole_disk_image reflects correct value on rebuild.
        self.assertTrue(node.driver_internal_info['is_whole_disk_image'])

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.deploy')
    def test_do_node_deploy_rebuild_active_state_waiting(self, mock_deploy,
                                                         mock_iwdi):
        mock_iwdi.return_value = False
        self._start_service()
        mock_deploy.return_value = states.DEPLOYWAIT
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.ACTIVE,
            target_provision_state=states.NOSTATE,
            instance_info={'image_source': uuidutils.generate_uuid()})

        self.service.do_node_deploy(self.context, node.uuid, rebuild=True)
        self._stop_service()
        node.refresh()
        self.assertEqual(states.DEPLOYWAIT, node.provision_state)
        self.assertEqual(states.ACTIVE, node.target_provision_state)
        # last_error should be None.
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)
        mock_deploy.assert_called_once_with(mock.ANY)
        mock_iwdi.assert_called_once_with(self.context, node.instance_info)
        self.assertFalse(node.driver_internal_info['is_whole_disk_image'])

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.deploy')
    def test_do_node_deploy_rebuild_active_state_done(self, mock_deploy,
                                                      mock_iwdi):
        mock_iwdi.return_value = False
        self._start_service()
        mock_deploy.return_value = states.DEPLOYDONE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.ACTIVE,
            target_provision_state=states.NOSTATE)

        self.service.do_node_deploy(self.context, node.uuid, rebuild=True)
        self._stop_service()
        node.refresh()
        self.assertEqual(states.ACTIVE, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        # last_error should be None.
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)
        mock_deploy.assert_called_once_with(mock.ANY)
        mock_iwdi.assert_called_once_with(self.context, node.instance_info)
        self.assertFalse(node.driver_internal_info['is_whole_disk_image'])

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.deploy')
    def test_do_node_deploy_rebuild_deployfail_state(self, mock_deploy,
                                                     mock_iwdi):
        mock_iwdi.return_value = False
        self._start_service()
        mock_deploy.return_value = states.DEPLOYDONE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.DEPLOYFAIL,
            target_provision_state=states.NOSTATE)

        self.service.do_node_deploy(self.context, node.uuid, rebuild=True)
        self._stop_service()
        node.refresh()
        self.assertEqual(states.ACTIVE, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        # last_error should be None.
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)
        mock_deploy.assert_called_once_with(mock.ANY)
        mock_iwdi.assert_called_once_with(self.context, node.instance_info)
        self.assertFalse(node.driver_internal_info['is_whole_disk_image'])

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.deploy')
    def test_do_node_deploy_rebuild_error_state(self, mock_deploy,
                                                mock_iwdi):
        mock_iwdi.return_value = False
        self._start_service()
        mock_deploy.return_value = states.DEPLOYDONE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.ERROR,
            target_provision_state=states.NOSTATE)

        self.service.do_node_deploy(self.context, node.uuid, rebuild=True)
        self._stop_service()
        node.refresh()
        self.assertEqual(states.ACTIVE, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        # last_error should be None.
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)
        mock_deploy.assert_called_once_with(mock.ANY)
        mock_iwdi.assert_called_once_with(self.context, node.instance_info)
        self.assertFalse(node.driver_internal_info['is_whole_disk_image'])

    def test_do_node_deploy_rebuild_from_available_state(self, mock_iwdi):
        mock_iwdi.return_value = False
        self._start_service()
        # test node will not rebuild if state is AVAILABLE
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          provision_state=states.AVAILABLE)
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_deploy,
                                self.context, node['uuid'], rebuild=True)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidStateRequested, exc.exc_info[0])
        # last_error should be None.
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)
        mock_iwdi.assert_called_once_with(self.context, node.instance_info)
        self.assertNotIn('is_whole_disk_image', node.driver_internal_info)

    def test_do_node_deploy_worker_pool_full(self, mock_iwdi):
        mock_iwdi.return_value = False
        prv_state = states.AVAILABLE
        tgt_prv_state = states.NOSTATE
        node = obj_utils.create_test_node(
            self.context, provision_state=prv_state,
            target_provision_state=tgt_prv_state,
            last_error=None, driver='fake')
        self._start_service()

        with mock.patch.object(self.service, '_spawn_worker') as mock_spawn:
            mock_spawn.side_effect = exception.NoFreeConductorWorker()

            exc = self.assertRaises(messaging.rpc.ExpectedException,
                                    self.service.do_node_deploy,
                                    self.context, node.uuid)
            # Compare true exception hidden by @messaging.expected_exceptions
            self.assertEqual(exception.NoFreeConductorWorker,
                             exc.exc_info[0])
            self._stop_service()
            node.refresh()
            # Make sure things were rolled back
            self.assertEqual(prv_state, node.provision_state)
            self.assertEqual(tgt_prv_state, node.target_provision_state)
            self.assertIsNotNone(node.last_error)
            # Verify reservation has been cleared.
            self.assertIsNone(node.reservation)
            mock_iwdi.assert_called_once_with(self.context,
                                              node.instance_info)
            self.assertFalse(node.driver_internal_info['is_whole_disk_image'])

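# NOTE: unlike the RPC-level tests above, several tests below drive
# manager.do_node_deploy() directly with a TaskManager instance, bypassing
# the service method, so failures surface as plain exceptions rather than
# messaging.rpc.ExpectedException.
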
@mgr_utils.mock_record_keepalive
class DoNodeDeployTearDownTestCase(mgr_utils.ServiceSetUpMixin,
                                   tests_db_base.DbTestCase):
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.deploy')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare')
    def test__do_node_deploy_driver_raises_prepare_error(self, mock_prepare,
                                                         mock_deploy):
        self._start_service()
        # test when driver.deploy.prepare raises an exception
        mock_prepare.side_effect = exception.InstanceDeployFailure('test')
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.DEPLOYING,
            target_provision_state=states.ACTIVE)
        task = task_manager.TaskManager(self.context, node.uuid)

        self.assertRaises(exception.InstanceDeployFailure,
                          manager.do_node_deploy, task,
                          self.service.conductor.id)
        node.refresh()
        self.assertEqual(states.DEPLOYFAIL, node.provision_state)
        # NOTE(deva): failing a deploy does not clear the target state
        #             any longer. Instead, it is cleared when the instance
        #             is deleted.
        self.assertEqual(states.ACTIVE, node.target_provision_state)
        self.assertIsNotNone(node.last_error)
        self.assertTrue(mock_prepare.called)
        self.assertFalse(mock_deploy.called)

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.deploy')
    def test__do_node_deploy_driver_raises_error(self, mock_deploy):
        self._start_service()
        # test when driver.deploy.deploy raises an exception
        mock_deploy.side_effect = exception.InstanceDeployFailure('test')
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.DEPLOYING,
            target_provision_state=states.ACTIVE)
        task = task_manager.TaskManager(self.context, node.uuid)

        self.assertRaises(exception.InstanceDeployFailure,
                          manager.do_node_deploy, task,
                          self.service.conductor.id)
        node.refresh()
        self.assertEqual(states.DEPLOYFAIL, node.provision_state)
        # NOTE(deva): failing a deploy does not clear the target state
        #             any longer. Instead, it is cleared when the instance
        #             is deleted.
        self.assertEqual(states.ACTIVE, node.target_provision_state)
        self.assertIsNotNone(node.last_error)
        mock_deploy.assert_called_once_with(mock.ANY)

    @mock.patch.object(manager, '_store_configdrive')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.deploy')
    def test__do_node_deploy_ok(self, mock_deploy, mock_store):
        self._start_service()
        # test when driver.deploy.deploy returns DEPLOYDONE
        mock_deploy.return_value = states.DEPLOYDONE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.DEPLOYING,
            target_provision_state=states.ACTIVE)
        task = task_manager.TaskManager(self.context, node.uuid)

        manager.do_node_deploy(task, self.service.conductor.id)
        node.refresh()
        self.assertEqual(states.ACTIVE, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        self.assertIsNone(node.last_error)
        mock_deploy.assert_called_once_with(mock.ANY)
        # assert _store_configdrive wasn't invoked
        self.assertFalse(mock_store.called)

    @mock.patch.object(manager, '_store_configdrive')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.deploy')
    def test__do_node_deploy_ok_configdrive(self, mock_deploy, mock_store):
        self._start_service()
        # test when driver.deploy.deploy returns DEPLOYDONE
        mock_deploy.return_value = states.DEPLOYDONE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.DEPLOYING,
            target_provision_state=states.ACTIVE)
        task = task_manager.TaskManager(self.context, node.uuid)
        configdrive = 'foo'

        manager.do_node_deploy(task, self.service.conductor.id,
                               configdrive=configdrive)
        node.refresh()
        self.assertEqual(states.ACTIVE, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        self.assertIsNone(node.last_error)
        mock_deploy.assert_called_once_with(mock.ANY)
        mock_store.assert_called_once_with(task.node, configdrive)

    @mock.patch.object(swift, 'SwiftAPI')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.deploy')
    def test__do_node_deploy_configdrive_swift_error(self, mock_deploy,
                                                     mock_swift):
        CONF.set_override('configdrive_use_swift', True, group='conductor')
        self._start_service()
        # test when driver.deploy.deploy returns DEPLOYDONE
        mock_deploy.return_value = states.DEPLOYDONE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.DEPLOYING,
            target_provision_state=states.ACTIVE)
        task = task_manager.TaskManager(self.context, node.uuid)

        mock_swift.side_effect = exception.SwiftOperationError('error')
        self.assertRaises(exception.SwiftOperationError,
                          manager.do_node_deploy, task,
                          self.service.conductor.id,
                          configdrive=b'fake config drive')
        node.refresh()
        self.assertEqual(states.DEPLOYFAIL, node.provision_state)
        self.assertEqual(states.ACTIVE, node.target_provision_state)
        self.assertIsNotNone(node.last_error)
        self.assertFalse(mock_deploy.called)

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.deploy')
    def test__do_node_deploy_ok_2(self, mock_deploy):
        # NOTE(rloo): a different way of testing for the same thing as in
        #             test__do_node_deploy_ok()
        self._start_service()
        # test when driver.deploy.deploy returns DEPLOYDONE
        mock_deploy.return_value = states.DEPLOYDONE
        node = obj_utils.create_test_node(self.context, driver='fake')
        task = task_manager.TaskManager(self.context, node.uuid)
        task.process_event('deploy')

        manager.do_node_deploy(task, self.service.conductor.id)
        node.refresh()
        self.assertEqual(states.ACTIVE, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        self.assertIsNone(node.last_error)
        mock_deploy.assert_called_once_with(mock.ANY)

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.clean_up')
    def test__check_deploy_timeouts(self, mock_cleanup):
        self._start_service()
        CONF.set_override('deploy_callback_timeout', 1, group='conductor')
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.DEPLOYWAIT,
            target_provision_state=states.ACTIVE,
            provision_updated_at=datetime.datetime(2000, 1, 1, 0, 0))

        self.service._check_deploy_timeouts(self.context)
        self._stop_service()
        node.refresh()
        self.assertEqual(states.DEPLOYFAIL, node.provision_state)
        self.assertEqual(states.ACTIVE, node.target_provision_state)
        self.assertIsNotNone(node.last_error)
        mock_cleanup.assert_called_once_with(mock.ANY)

    def _check_cleanwait_timeouts(self, manual=False):
        self._start_service()
        CONF.set_override('clean_callback_timeout', 1, group='conductor')
        tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANWAIT,
            target_provision_state=tgt_prov_state,
            provision_updated_at=datetime.datetime(2000, 1, 1, 0, 0))

        self.service._check_cleanwait_timeouts(self.context)
        self._stop_service()
        node.refresh()
        self.assertEqual(states.CLEANFAIL, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        self.assertIsNotNone(node.last_error)

    def test__check_cleanwait_timeouts_automated_clean(self):
        self._check_cleanwait_timeouts()

    def test__check_cleanwait_timeouts_manual_clean(self):
        self._check_cleanwait_timeouts(manual=True)
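    # NOTE: the timeout checks above work by overriding the relevant
    # *_callback_timeout option to 1 second and backdating
    # provision_updated_at to the year 2000, so the periodic check treats
    # the node as having timed out.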
    def test_do_node_tear_down_invalid_state(self):
        self._start_service()
        # test node.provision_state is incorrect for tear_down
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          provision_state=states.AVAILABLE)
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_tear_down,
                                self.context, node['uuid'])
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidStateRequested, exc.exc_info[0])

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def test_do_node_tear_down_validate_fail(self, mock_validate):
        # InvalidParameterValue should be re-raised as InstanceDeployFailure
        mock_validate.side_effect = exception.InvalidParameterValue('error')
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.ACTIVE,
            target_provision_state=states.NOSTATE)
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_tear_down,
                                self.context, node.uuid)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InstanceDeployFailure, exc.exc_info[0])

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.tear_down')
    def test_do_node_tear_down_driver_raises_error(self, mock_tear_down):
        # test when driver.deploy.tear_down raises exception
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.DELETING,
            target_provision_state=states.AVAILABLE,
            instance_info={'foo': 'bar'},
            driver_internal_info={'is_whole_disk_image': False})

        task = task_manager.TaskManager(self.context, node.uuid)
        self._start_service()
        mock_tear_down.side_effect = exception.InstanceDeployFailure('test')
        self.assertRaises(exception.InstanceDeployFailure,
                          self.service._do_node_tear_down, task)
        node.refresh()
        self.assertEqual(states.ERROR, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        self.assertIsNotNone(node.last_error)
        # Assert instance_info was erased
        self.assertEqual({}, node.instance_info)
        mock_tear_down.assert_called_once_with(mock.ANY)

    @mock.patch('ironic.conductor.manager.ConductorManager._do_node_clean')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.tear_down')
    def test__do_node_tear_down_ok(self, mock_tear_down, mock_clean):
        # test when driver.deploy.tear_down succeeds
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.DELETING,
            target_provision_state=states.AVAILABLE,
            instance_uuid=uuidutils.generate_uuid(),
            instance_info={'foo': 'bar'},
            driver_internal_info={'is_whole_disk_image': False,
                                  'instance': {'ephemeral_gb': 10}})

        task = task_manager.TaskManager(self.context, node.uuid)
        self._start_service()
        self.service._do_node_tear_down(task)
        node.refresh()
        # Node will be moved to AVAILABLE after cleaning, not tested here
        self.assertEqual(states.CLEANING, node.provision_state)
        self.assertEqual(states.AVAILABLE, node.target_provision_state)
        self.assertIsNone(node.last_error)
        self.assertIsNone(node.instance_uuid)
        self.assertEqual({}, node.instance_info)
        self.assertNotIn('instance', node.driver_internal_info)
        mock_tear_down.assert_called_once_with(mock.ANY)
        mock_clean.assert_called_once_with(mock.ANY)

    @mock.patch('ironic.conductor.manager.ConductorManager._do_node_clean')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.tear_down')
    def _test_do_node_tear_down_from_state(self, init_state, mock_tear_down,
                                           mock_clean):
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            uuid=uuidutils.generate_uuid(),
            provision_state=init_state,
            target_provision_state=states.AVAILABLE,
            driver_internal_info={'is_whole_disk_image': False})

        self._start_service()
        self.service.do_node_tear_down(self.context, node.uuid)
        self._stop_service()
        node.refresh()
        # Node will be moved to AVAILABLE after cleaning, not tested here
        self.assertEqual(states.CLEANING, node.provision_state)
        self.assertEqual(states.AVAILABLE, node.target_provision_state)
        self.assertIsNone(node.last_error)
        self.assertEqual({}, node.instance_info)
        mock_tear_down.assert_called_once_with(mock.ANY)
        mock_clean.assert_called_once_with(mock.ANY)

    def test__do_node_tear_down_from_valid_states(self):
        valid_states = [states.ACTIVE, states.DEPLOYWAIT,
                        states.DEPLOYFAIL, states.ERROR]
        for state in valid_states:
            self._test_do_node_tear_down_from_state(state)

    # NOTE(deva): partial tear-down was broken. A node left in a state of
    #             DELETING could not have tear_down called on it a second
    #             time. Thus, I have removed the unit test, which faultily
    #             asserted only that a node could be left in a state of
    #             incomplete deletion -- not that such a node's deletion
    #             could later be completed.
    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker')
    def test_do_node_tear_down_worker_pool_full(self, mock_spawn):
        prv_state = states.ACTIVE
        tgt_prv_state = states.NOSTATE
        fake_instance_info = {'foo': 'bar'}
        driver_internal_info = {'is_whole_disk_image': False}
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=prv_state,
            target_provision_state=tgt_prv_state,
            instance_info=fake_instance_info,
            driver_internal_info=driver_internal_info,
            last_error=None)
        self._start_service()

        mock_spawn.side_effect = exception.NoFreeConductorWorker()

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_tear_down,
                                self.context, node.uuid)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NoFreeConductorWorker, exc.exc_info[0])
        self._stop_service()
        node.refresh()
        # Assert instance_info/driver_internal_info was not touched
        self.assertEqual(fake_instance_info, node.instance_info)
        self.assertEqual(driver_internal_info, node.driver_internal_info)
        # Make sure things were rolled back
        self.assertEqual(prv_state, node.provision_state)
        self.assertEqual(tgt_prv_state, node.target_provision_state)
        self.assertIsNotNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker')
    def test_do_provisioning_action_worker_pool_full(self, mock_spawn):
        prv_state = states.MANAGEABLE
        tgt_prv_state = states.NOSTATE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=prv_state,
            target_provision_state=tgt_prv_state,
            last_error=None)
        self._start_service()

        mock_spawn.side_effect = exception.NoFreeConductorWorker()

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_provisioning_action,
                                self.context, node.uuid, 'provide')
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NoFreeConductorWorker, exc.exc_info[0])
        self._stop_service()
        node.refresh()
        # Make sure things were rolled back
        self.assertEqual(prv_state, node.provision_state)
        self.assertEqual(tgt_prv_state, node.target_provision_state)
        self.assertIsNotNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker')
    def test_do_provision_action_provide(self, mock_spawn):
        # test when a node is cleaned going from manageable to available
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.MANAGEABLE,
            target_provision_state=states.AVAILABLE)

        self._start_service()
        self.service.do_provisioning_action(self.context, node.uuid,
                                            'provide')
        node.refresh()
        # Node will be moved to AVAILABLE after cleaning, not tested here
        self.assertEqual(states.CLEANING, node.provision_state)
        self.assertEqual(states.AVAILABLE, node.target_provision_state)
        self.assertIsNone(node.last_error)
        mock_spawn.assert_called_with(self.service._do_node_clean, mock.ANY)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker')
    def test_do_provision_action_manage(self, mock_spawn):
        # test when a node is verified going from enroll to manageable
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.ENROLL,
            target_provision_state=states.MANAGEABLE)

        self._start_service()
        self.service.do_provisioning_action(self.context, node.uuid,
                                            'manage')
        node.refresh()
        # Node will be moved to MANAGEABLE after verification, not tested here
        self.assertEqual(states.VERIFYING, node.provision_state)
        self.assertEqual(states.MANAGEABLE, node.target_provision_state)
        self.assertIsNone(node.last_error)
        mock_spawn.assert_called_with(self.service._do_node_verify, mock.ANY)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker')
    def _do_provision_action_abort(self, mock_spawn, manual=False):
        tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANWAIT,
            target_provision_state=tgt_prov_state)

        self._start_service()
        self.service.do_provisioning_action(self.context, node.uuid, 'abort')
        node.refresh()
        # Node will be moved to tgt_prov_state after cleaning, not tested here
        self.assertEqual(states.CLEANFAIL, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        self.assertIsNone(node.last_error)
        mock_spawn.assert_called_with(self.service._do_node_clean_abort,
                                      mock.ANY)

    def test_do_provision_action_abort_automated_clean(self):
        self._do_provision_action_abort()

    def test_do_provision_action_abort_manual_clean(self):
        self._do_provision_action_abort(manual=True)

    def test_do_provision_action_abort_clean_step_not_abortable(self):
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANWAIT,
            target_provision_state=states.AVAILABLE,
            clean_step={'step': 'foo', 'abortable': False})

        self._start_service()
        self.service.do_provisioning_action(self.context, node.uuid, 'abort')
        node.refresh()
        # Assert the current clean step was marked to be aborted later
        self.assertIn('abort_after', node.clean_step)
        self.assertTrue(node.clean_step['abort_after'])
        # Make sure things stay as they were before
        self.assertEqual(states.CLEANWAIT, node.provision_state)
        self.assertEqual(states.AVAILABLE, node.target_provision_state)

    @mock.patch.object(fake.FakeDeploy, 'tear_down_cleaning', autospec=True)
    def _test__do_node_clean_abort(self, step_name, tear_mock):
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANFAIL,
            target_provision_state=states.AVAILABLE,
            clean_step={'step': 'foo', 'abortable': True})

        with task_manager.acquire(self.context, node.uuid) as task:
            self.service._do_node_clean_abort(task, step_name=step_name)
            self.assertIsNotNone(task.node.last_error)
            tear_mock.assert_called_once_with(task.driver.deploy, task)
            if step_name:
                self.assertIn(step_name, task.node.last_error)
            # assert node's clean_step was cleaned up
            self.assertEqual({}, task.node.clean_step)

    def test__do_node_clean_abort(self):
        self._test__do_node_clean_abort(None)

    def test__do_node_clean_abort_with_step_name(self):
        self._test__do_node_clean_abort('foo')

    @mock.patch.object(fake.FakeDeploy, 'tear_down_cleaning', autospec=True)
    def test__do_node_clean_abort_tear_down_fail(self, tear_mock):
        tear_mock.side_effect = Exception('Surprise')

        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANFAIL,
            target_provision_state=states.AVAILABLE,
            clean_step={'step': 'foo', 'abortable': True})

        with task_manager.acquire(self.context, node.uuid) as task:
            self.service._do_node_clean_abort(task)
            tear_mock.assert_called_once_with(task.driver.deploy, task)
            self.assertIsNotNone(task.node.last_error)
            self.assertIsNotNone(task.node.maintenance_reason)
            self.assertTrue(task.node.maintenance)


@mgr_utils.mock_record_keepalive
class DoNodeCleanTestCase(mgr_utils.ServiceSetUpMixin,
                          tests_db_base.DbTestCase):
    def setUp(self):
        super(DoNodeCleanTestCase, self).setUp()
        self.config(automated_clean=True, group='conductor')
        self.power_update = {
            'step': 'update_firmware', 'priority': 10, 'interface': 'power'}
        self.deploy_update = {
            'step': 'update_firmware', 'priority': 10, 'interface': 'deploy'}
        self.deploy_erase = {
            'step': 'erase_disks', 'priority': 20, 'interface': 'deploy'}
        # Automated cleaning should be executed in this order
        self.clean_steps = [self.deploy_erase, self.power_update,
                            self.deploy_update]
        self.next_clean_step_index = 1
        # Manual clean step
        self.deploy_raid = {
            'step': 'build_raid', 'priority': 0, 'interface': 'deploy'}

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def test_do_node_clean_maintenance(self, mock_validate):
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.MANAGEABLE,
            target_provision_state=states.NOSTATE,
            maintenance=True, maintenance_reason='reason')
        self._start_service()
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_clean,
                                self.context, node.uuid, [])
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeInMaintenance, exc.exc_info[0])
        self.assertFalse(mock_validate.called)

    @mock.patch('ironic.conductor.task_manager.TaskManager.process_event')
    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def test_do_node_clean_validate_fail(self, mock_validate, mock_process):
        # power validate fails
        mock_validate.side_effect = exception.InvalidParameterValue('error')
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.MANAGEABLE,
            target_provision_state=states.NOSTATE)
        self._start_service()
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_clean,
                                self.context, node.uuid, [])
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0])
        mock_validate.assert_called_once_with(mock.ANY)
        self.assertFalse(mock_process.called)
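    # NOTE: as the surrounding tests exercise, do_node_clean() validates the
    # power interface up front and only accepts nodes in a suitable state
    # (MANAGEABLE works; ENROLL is rejected with InvalidStateRequested).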
    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def test_do_node_clean_invalid_state(self, mock_validate):
        # test node.provision_state is incorrect for clean
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.ENROLL,
            target_provision_state=states.NOSTATE)
        self._start_service()
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_clean,
                                self.context, node.uuid, [])
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidStateRequested, exc.exc_info[0])
        mock_validate.assert_called_once_with(mock.ANY)
        node.refresh()
        self.assertFalse('clean_steps' in node.driver_internal_info)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker')
    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def test_do_node_clean_ok(self, mock_validate, mock_spawn):
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.MANAGEABLE,
            target_provision_state=states.NOSTATE,
            last_error='old error')
        self._start_service()
        clean_steps = [self.deploy_raid]
        self.service.do_node_clean(self.context, node.uuid, clean_steps)
        mock_validate.assert_called_once_with(mock.ANY)
        mock_spawn.assert_called_with(self.service._do_node_clean,
                                      mock.ANY, clean_steps)
        node.refresh()
        # Node will be moved to CLEANING
        self.assertEqual(states.CLEANING, node.provision_state)
        self.assertEqual(states.MANAGEABLE, node.target_provision_state)
        self.assertIsNone(node.driver_internal_info.get('clean_steps'))
        self.assertIsNone(node.last_error)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker')
    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def test_do_node_clean_worker_pool_full(self, mock_validate, mock_spawn):
        prv_state = states.MANAGEABLE
        tgt_prv_state = states.NOSTATE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=prv_state,
            target_provision_state=tgt_prv_state)
        self._start_service()
        clean_steps = [self.deploy_raid]
        mock_spawn.side_effect = exception.NoFreeConductorWorker()

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_clean,
                                self.context, node.uuid, clean_steps)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NoFreeConductorWorker, exc.exc_info[0])
        self._stop_service()
        mock_validate.assert_called_once_with(mock.ANY)
        mock_spawn.assert_called_with(self.service._do_node_clean,
                                      mock.ANY, clean_steps)
        node.refresh()
        # Make sure states were rolled back
        self.assertEqual(prv_state, node.provision_state)
        self.assertEqual(tgt_prv_state, node.target_provision_state)
        self.assertIsNotNone(node.last_error)
        self.assertIsNone(node.reservation)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker')
    def test_continue_node_clean_worker_pool_full(self, mock_spawn):
        # Test the appropriate exception is raised if the worker pool is full
        prv_state = states.CLEANWAIT
        tgt_prv_state = states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=prv_state,
            target_provision_state=tgt_prv_state,
            last_error=None)
        self._start_service()

        mock_spawn.side_effect = exception.NoFreeConductorWorker()

        self.assertRaises(exception.NoFreeConductorWorker,
                          self.service.continue_node_clean,
                          self.context, node.uuid)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker')
    def test_continue_node_clean_wrong_state(self, mock_spawn):
        # Test the appropriate exception is raised if node isn't already
        # in CLEANWAIT state
        prv_state = states.DELETING
        tgt_prv_state = states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=prv_state,
            target_provision_state=tgt_prv_state,
            last_error=None)
        self._start_service()

        self.assertRaises(exception.InvalidStateRequested,
                          self.service.continue_node_clean,
                          self.context, node.uuid)
        self._stop_service()
        node.refresh()
        # Make sure things were rolled back
        self.assertEqual(prv_state, node.provision_state)
        self.assertEqual(tgt_prv_state, node.target_provision_state)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker')
    def _continue_node_clean(self, return_state, mock_spawn, manual=False):
        # test a node can continue cleaning via RPC
        prv_state = return_state
        tgt_prv_state = states.MANAGEABLE if manual else states.AVAILABLE
        driver_info = {'clean_steps': self.clean_steps}
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=prv_state,
            target_provision_state=tgt_prv_state,
            last_error=None,
            driver_internal_info=driver_info,
            clean_step=self.clean_steps[0])
        self._start_service()
        self.service.continue_node_clean(self.context, node.uuid)
        self._stop_service()
        node.refresh()
        self.assertEqual(states.CLEANING, node.provision_state)
        self.assertEqual(tgt_prv_state, node.target_provision_state)
        mock_spawn.assert_called_with(self.service._do_next_clean_step,
                                      mock.ANY, self.next_clean_step_index)

    def test_continue_node_clean_automated(self):
        self._continue_node_clean(states.CLEANWAIT)

    def test_continue_node_clean_manual(self):
        self._continue_node_clean(states.CLEANWAIT, manual=True)

    def test_continue_node_clean_backward_compat(self):
        self._continue_node_clean(states.CLEANING)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker')
    def _continue_node_clean_skip_step(self, mock_spawn, skip=True):
        # test that skipping current step mechanism works
        driver_info = {'clean_steps': self.clean_steps}
        if not skip:
            driver_info['skip_current_clean_step'] = skip
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANWAIT,
            target_provision_state=states.MANAGEABLE,
            driver_internal_info=driver_info,
            clean_step=self.clean_steps[0])
        self._start_service()
        self.service.continue_node_clean(self.context, node.uuid)
        self._stop_service()
        node.refresh()
        if skip:
            expected_step_index = 1
        else:
            self.assertFalse(
                'skip_current_clean_step' in node.driver_internal_info)
            expected_step_index = 0
        mock_spawn.assert_called_with(self.service._do_next_clean_step,
                                      mock.ANY, expected_step_index)

    def test_continue_node_clean_skip_step(self):
        self._continue_node_clean_skip_step()

    def test_continue_node_clean_no_skip_step(self):
        self._continue_node_clean_skip_step(skip=False)

    def _continue_node_clean_abort(self, manual=False):
        last_clean_step = self.clean_steps[0]
        last_clean_step['abortable'] = False
        last_clean_step['abort_after'] = True
        driver_info = {'clean_steps': self.clean_steps}
        tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANWAIT,
            target_provision_state=tgt_prov_state, last_error=None,
            driver_internal_info=driver_info,
            clean_step=self.clean_steps[0])

        self._start_service()
        self.service.continue_node_clean(self.context, node.uuid)
        self._stop_service()
        node.refresh()
        self.assertEqual(states.CLEANFAIL, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        self.assertIsNotNone(node.last_error)
        # assert the clean step name is in the last error message
        self.assertIn(self.clean_steps[0]['step'], node.last_error)

    def test_continue_node_clean_automated_abort(self):
        self._continue_node_clean_abort()

    def test_continue_node_clean_manual_abort(self):
        self._continue_node_clean_abort(manual=True)
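    # NOTE: in the abort scenarios here, a clean step marked
    # 'abortable': False with 'abort_after': True is allowed to finish, and
    # the abort is honoured at the step boundary: the node moves to CLEANFAIL
    # (or straight to the target state when it was the last step).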
    def _continue_node_clean_abort_last_clean_step(self, manual=False):
        last_clean_step = self.clean_steps[0]
        last_clean_step['abortable'] = False
        last_clean_step['abort_after'] = True
        driver_info = {'clean_steps': [self.clean_steps[0]],
                       'clean_step_index': 0}
        tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANWAIT,
            target_provision_state=tgt_prov_state, last_error=None,
            driver_internal_info=driver_info,
            clean_step=self.clean_steps[0])

        self._start_service()
        self.service.continue_node_clean(self.context, node.uuid)
        self._stop_service()
        node.refresh()
        self.assertEqual(tgt_prov_state, node.provision_state)
        self.assertIsNone(node.target_provision_state)
        self.assertIsNone(node.last_error)

    def test_continue_node_clean_automated_abort_last_clean_step(self):
        self._continue_node_clean_abort_last_clean_step()

    def test_continue_node_clean_manual_abort_last_clean_step(self):
        self._continue_node_clean_abort_last_clean_step(manual=True)

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def __do_node_clean_validate_fail(self, mock_validate, clean_steps=None):
        # InvalidParameterValue should cause the node to go to CLEANFAIL
        mock_validate.side_effect = exception.InvalidParameterValue('error')
        tgt_prov_state = states.MANAGEABLE if clean_steps \
            else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state)
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            self.service._do_node_clean(task, clean_steps=clean_steps)
        node.refresh()
        self.assertEqual(states.CLEANFAIL, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        mock_validate.assert_called_once_with(mock.ANY)

    def test__do_node_clean_automated_validate_fail(self):
        self.__do_node_clean_validate_fail()

    def test__do_node_clean_manual_validate_fail(self):
        self.__do_node_clean_validate_fail(clean_steps=[])

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def test__do_node_clean_automated_disabled(self, mock_validate):
        self.config(automated_clean=False, group='conductor')
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANING,
            target_provision_state=states.AVAILABLE,
            last_error=None)
        self._start_service()
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            self.service._do_node_clean(task)
        self._stop_service()
        node.refresh()

        # Assert that the node was moved to available without cleaning
        self.assertFalse(mock_validate.called)
        self.assertEqual(states.AVAILABLE, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        self.assertEqual({}, node.clean_step)
        self.assertIsNone(node.driver_internal_info.get('clean_steps'))
        self.assertIsNone(node.driver_internal_info.get('clean_step_index'))

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare_cleaning')
    def __do_node_clean_prepare_clean_fail(self, mock_prep, clean_steps=None):
        # Exception from task.driver.deploy.prepare_cleaning should cause node
        # to go to CLEANFAIL
        mock_prep.side_effect = exception.InvalidParameterValue('error')
        tgt_prov_state = states.MANAGEABLE if clean_steps \
            else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state)
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            self.service._do_node_clean(task, clean_steps=clean_steps)
        node.refresh()
        self.assertEqual(states.CLEANFAIL, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        mock_prep.assert_called_once_with(mock.ANY)

    def test__do_node_clean_automated_prepare_clean_fail(self):
        self.__do_node_clean_prepare_clean_fail()

    def test__do_node_clean_manual_prepare_clean_fail(self):
        self.__do_node_clean_prepare_clean_fail(
            clean_steps=[self.deploy_raid])

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare_cleaning')
    def __do_node_clean_prepare_clean_wait(self, mock_prep, clean_steps=None):
        mock_prep.return_value = states.CLEANWAIT
        tgt_prov_state = states.MANAGEABLE if clean_steps \
            else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state)
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            self.service._do_node_clean(task, clean_steps=clean_steps)
        node.refresh()
        self.assertEqual(states.CLEANWAIT, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        mock_prep.assert_called_once_with(mock.ANY)

    def test__do_node_clean_automated_prepare_clean_wait(self):
        self.__do_node_clean_prepare_clean_wait()

    def test__do_node_clean_manual_prepare_clean_wait(self):
        self.__do_node_clean_prepare_clean_wait(
            clean_steps=[self.deploy_raid])

    @mock.patch.object(conductor_utils, 'set_node_cleaning_steps')
    def __do_node_clean_steps_fail(self, mock_steps, clean_steps=None,
                                   invalid_exc=True):
        if invalid_exc:
            mock_steps.side_effect = exception.InvalidParameterValue(
                'invalid')
        else:
            mock_steps.side_effect = exception.NodeCleaningFailure('failure')
        tgt_prov_state = states.MANAGEABLE if clean_steps \
            else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            uuid=uuidutils.generate_uuid(),
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state)
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            self.service._do_node_clean(task, clean_steps=clean_steps)
        node.refresh()
        self.assertEqual(states.CLEANFAIL, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        mock_steps.assert_called_once_with(mock.ANY)

    def test__do_node_clean_automated_steps_fail(self):
        for invalid in (True, False):
            self.__do_node_clean_steps_fail(invalid_exc=invalid)

    def test__do_node_clean_manual_steps_fail(self):
        for invalid in (True, False):
            self.__do_node_clean_steps_fail(clean_steps=[self.deploy_raid],
                                            invalid_exc=invalid)

    @mock.patch.object(conductor_utils, 'set_node_cleaning_steps')
    @mock.patch('ironic.conductor.manager.ConductorManager.'
                '_do_next_clean_step')
    @mock.patch.object(conductor_utils, 'set_node_cleaning_steps')
    @mock.patch('ironic.conductor.manager.ConductorManager.'
                '_do_next_clean_step')
    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def __do_node_clean(self, mock_validate, mock_next_step, mock_steps,
                        clean_steps=None):
        if clean_steps:
            tgt_prov_state = states.MANAGEABLE
            driver_info = {}
        else:
            tgt_prov_state = states.AVAILABLE
            driver_info = {'clean_steps': self.clean_steps}
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state,
            last_error=None,
            power_state=states.POWER_OFF,
            driver_internal_info=driver_info)

        self._start_service()
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            self.service._do_node_clean(task, clean_steps=clean_steps)
        self._stop_service()
        node.refresh()

        mock_validate.assert_called_once_with(task)
        mock_next_step.assert_called_once_with(mock.ANY, 0)
        mock_steps.assert_called_once_with(task)
        if clean_steps:
            self.assertEqual(clean_steps,
                             node.driver_internal_info['clean_steps'])

        # Check that state didn't change
        self.assertEqual(states.CLEANING, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)

    def test__do_node_clean_automated(self):
        self.__do_node_clean()

    def test__do_node_clean_manual(self):
        self.__do_node_clean(clean_steps=[self.deploy_raid])

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_clean_step')
    def _do_next_clean_step_first_step_async(self, return_state, mock_execute,
                                             clean_steps=None):
        # Execute the first async clean step on a node
        driver_internal_info = {'clean_step_index': None}
        if clean_steps:
            tgt_prov_state = states.MANAGEABLE
            driver_internal_info['clean_steps'] = clean_steps
        else:
            tgt_prov_state = states.AVAILABLE
            driver_internal_info['clean_steps'] = self.clean_steps
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state,
            last_error=None,
            driver_internal_info=driver_internal_info,
            clean_step={})
        mock_execute.return_value = return_state
        expected_first_step = node.driver_internal_info['clean_steps'][0]

        self._start_service()
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            self.service._do_next_clean_step(task, 0)
        self._stop_service()
        node.refresh()

        self.assertEqual(states.CLEANWAIT, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        self.assertEqual(expected_first_step, node.clean_step)
        self.assertEqual(0, node.driver_internal_info['clean_step_index'])
        mock_execute.assert_called_once_with(mock.ANY, expected_first_step)

    def test_do_next_clean_step_automated_first_step_async(self):
        self._do_next_clean_step_first_step_async(states.CLEANWAIT)

    def test_do_next_clean_step_first_step_async_backward_compat(self):
        self._do_next_clean_step_first_step_async(states.CLEANING)

    def test_do_next_clean_step_manual_first_step_async(self):
        self._do_next_clean_step_first_step_async(
            states.CLEANWAIT, clean_steps=[self.deploy_raid])
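
    # Resuming cleaning: the node has already recorded clean_step_index 0,
    # so continuing should execute the second step and advance the index.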
    @mock.patch('ironic.drivers.modules.fake.FakePower.execute_clean_step')
    def _do_next_clean_step_continue_from_last_cleaning(self, return_state,
                                                        mock_execute,
                                                        manual=False):
        # Resume an in-progress cleaning after the first async step
        tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state,
            last_error=None,
            driver_internal_info={'clean_steps': self.clean_steps,
                                  'clean_step_index': 0},
            clean_step=self.clean_steps[0])
        mock_execute.return_value = return_state

        self._start_service()
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            self.service._do_next_clean_step(task, self.next_clean_step_index)
        self._stop_service()
        node.refresh()

        self.assertEqual(states.CLEANWAIT, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        self.assertEqual(self.clean_steps[1], node.clean_step)
        self.assertEqual(1, node.driver_internal_info['clean_step_index'])
        mock_execute.assert_called_once_with(mock.ANY, self.clean_steps[1])

    def test_do_next_clean_step_continue_from_last_cleaning(self):
        self._do_next_clean_step_continue_from_last_cleaning(states.CLEANWAIT)

    def test_do_next_clean_step_continue_from_last_cleaning_backward_com(self):
        self._do_next_clean_step_continue_from_last_cleaning(states.CLEANING)

    def test_do_next_clean_step_manual_continue_from_last_cleaning(self):
        self._do_next_clean_step_continue_from_last_cleaning(states.CLEANWAIT,
                                                             manual=True)

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_clean_step')
    def _do_next_clean_step_last_step_noop(self, mock_execute, manual=False):
        # Resume where last_step is the last cleaning step, should be noop
        tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE
        info = {'clean_steps': self.clean_steps,
                'clean_step_index': len(self.clean_steps) - 1}
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state,
            last_error=None,
            driver_internal_info=info,
            clean_step=self.clean_steps[-1])

        self._start_service()
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            self.service._do_next_clean_step(task, None)
        self._stop_service()
        node.refresh()

        # Cleaning should be complete without calling additional steps
        self.assertEqual(tgt_prov_state, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        self.assertEqual({}, node.clean_step)
        self.assertFalse('clean_step_index' in node.driver_internal_info)
        self.assertIsNone(node.driver_internal_info['clean_steps'])
        self.assertFalse(mock_execute.called)

    def test__do_next_clean_step_automated_last_step_noop(self):
        self._do_next_clean_step_last_step_noop()

    def test__do_next_clean_step_manual_last_step_noop(self):
        self._do_next_clean_step_last_step_noop(manual=True)

    @mock.patch('ironic.drivers.modules.fake.FakePower.execute_clean_step')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_clean_step')
    def _do_next_clean_step_all(self, mock_deploy_execute,
                                mock_power_execute, manual=False):
        # Run all steps from start to finish (all synchronous)
        tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state,
            last_error=None,
            driver_internal_info={'clean_steps': self.clean_steps,
                                  'clean_step_index': None},
            clean_step={})
        mock_deploy_execute.return_value = None
        mock_power_execute.return_value = None

        self._start_service()
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            self.service._do_next_clean_step(task, 0)
        self._stop_service()
        node.refresh()

        # Cleaning should be complete
        self.assertEqual(tgt_prov_state, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        self.assertEqual({}, node.clean_step)
        self.assertFalse('clean_step_index' in node.driver_internal_info)
        self.assertIsNone(node.driver_internal_info['clean_steps'])
        mock_power_execute.assert_called_once_with(mock.ANY,
                                                   self.clean_steps[1])
        mock_deploy_execute.assert_has_calls(
            [mock.call(mock.ANY, self.clean_steps[0]),
             mock.call(mock.ANY, self.clean_steps[2])])

    def test_do_next_clean_step_automated_all(self):
        self._do_next_clean_step_all()

    def test_do_next_clean_step_manual_all(self):
        self._do_next_clean_step_all(manual=True)
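
    # Failure paths for step execution: an exception from either
    # execute_clean_step or tear_down_cleaning must move the node to
    # CLEANFAIL, clear the step bookkeeping and set maintenance.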
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_clean_step')
    @mock.patch.object(fake.FakeDeploy, 'tear_down_cleaning', autospec=True)
    def _do_next_clean_step_execute_fail(self, tear_mock, mock_execute,
                                         manual=False):
        # When a clean step fails, go to CLEANFAIL
        tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state,
            last_error=None,
            driver_internal_info={'clean_steps': self.clean_steps,
                                  'clean_step_index': None},
            clean_step={})
        mock_execute.side_effect = Exception()

        self._start_service()
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            self.service._do_next_clean_step(task, 0)
            tear_mock.assert_called_once_with(task.driver.deploy, task)
        self._stop_service()
        node.refresh()

        # Make sure we go to CLEANFAIL, clear clean_steps
        self.assertEqual(states.CLEANFAIL, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        self.assertEqual({}, node.clean_step)
        self.assertFalse('clean_step_index' in node.driver_internal_info)
        self.assertIsNotNone(node.last_error)
        self.assertTrue(node.maintenance)
        mock_execute.assert_called_once_with(mock.ANY, self.clean_steps[0])

    def test__do_next_clean_step_automated_execute_fail(self):
        self._do_next_clean_step_execute_fail()

    def test__do_next_clean_step_manual_execute_fail(self):
        self._do_next_clean_step_execute_fail(manual=True)

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_clean_step')
    @mock.patch.object(fake.FakeDeploy, 'tear_down_cleaning', autospec=True)
    def _do_next_clean_step_fail_in_tear_down_cleaning(self, tear_mock,
                                                       mock_execute,
                                                       manual=False):
        tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state,
            last_error=None,
            driver_internal_info={'clean_steps': self.clean_steps,
                                  'clean_step_index': None},
            clean_step={})
        mock_execute.return_value = None
        tear_mock.side_effect = Exception()

        self._start_service()
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            self.service._do_next_clean_step(task, 0)
        self._stop_service()
        node.refresh()

        # Make sure we go to CLEANFAIL, clear clean_steps
        self.assertEqual(states.CLEANFAIL, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        self.assertEqual({}, node.clean_step)
        self.assertFalse('clean_step_index' in node.driver_internal_info)
        self.assertIsNotNone(node.last_error)
        self.assertEqual(1, tear_mock.call_count)
        self.assertTrue(node.maintenance)
        mock_execute.assert_called_once_with(mock.ANY, self.clean_steps[0])

    def test__do_next_clean_step_automated_fail_in_tear_down_cleaning(self):
        self._do_next_clean_step_fail_in_tear_down_cleaning()

    def test__do_next_clean_step_manual_fail_in_tear_down_cleaning(self):
        self._do_next_clean_step_fail_in_tear_down_cleaning(manual=True)

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_clean_step')
    def _do_next_clean_step_no_steps(self, mock_execute, manual=False):
        for info in ({'clean_steps': None, 'clean_step_index': None},
                     {'clean_steps': None}):
            # Resume where there are no steps, should be a noop
            tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE
            node = obj_utils.create_test_node(
                self.context, driver='fake',
                uuid=uuidutils.generate_uuid(),
                provision_state=states.CLEANING,
                target_provision_state=tgt_prov_state,
                last_error=None,
                driver_internal_info=info,
                clean_step={})

            self._start_service()
            with task_manager.acquire(
                    self.context, node.uuid, shared=False) as task:
                self.service._do_next_clean_step(task, None)
            self._stop_service()
            node.refresh()

            # Cleaning should be complete without calling additional steps
            self.assertEqual(tgt_prov_state, node.provision_state)
            self.assertEqual(states.NOSTATE, node.target_provision_state)
            self.assertEqual({}, node.clean_step)
            self.assertFalse('clean_step_index' in node.driver_internal_info)
            self.assertFalse(mock_execute.called)
            mock_execute.reset_mock()

    def test__do_next_clean_step_automated_no_steps(self):
        self._do_next_clean_step_no_steps()

    def test__do_next_clean_step_manual_no_steps(self):
        self._do_next_clean_step_no_steps(manual=True)
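
    # A clean step may only return None (done) or CLEANWAIT/CLEANING
    # (asynchronous); anything else is treated as a cleaning failure and
    # no further steps are executed.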
    @mock.patch('ironic.drivers.modules.fake.FakePower.execute_clean_step')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_clean_step')
    def _do_next_clean_step_bad_step_return_value(
            self, deploy_exec_mock, power_exec_mock, manual=False):
        # When a clean step fails, go to CLEANFAIL
        tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state,
            last_error=None,
            driver_internal_info={'clean_steps': self.clean_steps,
                                  'clean_step_index': None},
            clean_step={})
        deploy_exec_mock.return_value = "foo"

        self._start_service()
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            self.service._do_next_clean_step(task, 0)
        self._stop_service()
        node.refresh()

        # Make sure we go to CLEANFAIL, clear clean_steps
        self.assertEqual(states.CLEANFAIL, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        self.assertEqual({}, node.clean_step)
        self.assertFalse('clean_step_index' in node.driver_internal_info)
        self.assertIsNotNone(node.last_error)
        self.assertTrue(node.maintenance)
        deploy_exec_mock.assert_called_once_with(mock.ANY,
                                                 self.clean_steps[0])
        # Make sure we don't execute any other step and return
        self.assertFalse(power_exec_mock.called)

    def test__do_next_clean_step_automated_bad_step_return_value(self):
        self._do_next_clean_step_bad_step_return_value()

    def test__do_next_clean_step_manual_bad_step_return_value(self):
        self._do_next_clean_step_bad_step_return_value(manual=True)

    def __get_node_next_clean_steps(self, skip=True):
        driver_internal_info = {'clean_steps': self.clean_steps,
                                'clean_step_index': 0}
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANWAIT,
            target_provision_state=states.AVAILABLE,
            driver_internal_info=driver_internal_info,
            last_error=None,
            clean_step=self.clean_steps[0])

        with task_manager.acquire(self.context, node.uuid) as task:
            step_index = self.service._get_node_next_clean_steps(
                task, skip_current_step=skip)
            expected_index = 1 if skip else 0
            self.assertEqual(expected_index, step_index)

    def test__get_node_next_clean_steps(self):
        self.__get_node_next_clean_steps()

    def test__get_node_next_clean_steps_no_skip(self):
        self.__get_node_next_clean_steps(skip=False)

    def test__get_node_next_clean_steps_unset_clean_step(self):
        driver_internal_info = {'clean_steps': self.clean_steps,
                                'clean_step_index': None}
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANWAIT,
            target_provision_state=states.AVAILABLE,
            driver_internal_info=driver_internal_info,
            last_error=None,
            clean_step=None)

        with task_manager.acquire(self.context, node.uuid) as task:
            step_index = self.service._get_node_next_clean_steps(task)
            self.assertEqual(0, step_index)

    def __get_node_next_clean_steps_backwards_compat(self, skip=True):
        driver_internal_info = {'clean_steps': self.clean_steps}
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANWAIT,
            target_provision_state=states.AVAILABLE,
            driver_internal_info=driver_internal_info,
            last_error=None,
            clean_step=self.clean_steps[0])

        with task_manager.acquire(self.context, node.uuid) as task:
            step_index = self.service._get_node_next_clean_steps(
                task, skip_current_step=skip)
            expected_index = 1 if skip else 0
            self.assertEqual(expected_index, step_index)

    def test__get_node_next_clean_steps_backwards_compat(self):
        self.__get_node_next_clean_steps_backwards_compat()

    def test__get_node_next_clean_steps_no_skip_backwards_compat(self):
        self.__get_node_next_clean_steps_backwards_compat(skip=False)

    def test__get_node_next_clean_steps_bad_clean_step(self):
        # NOTE(rloo) for backwards compatibility
        driver_internal_info = {'clean_steps': self.clean_steps}
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.CLEANWAIT,
            target_provision_state=states.AVAILABLE,
            driver_internal_info=driver_internal_info,
            last_error=None,
            clean_step={'interface': 'deploy',
                        'step': 'not_a_clean_step',
                        'priority': 100})

        with task_manager.acquire(self.context, node.uuid) as task:
            self.assertRaises(exception.NodeCleaningFailure,
                              self.service._get_node_next_clean_steps, task)
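

# DoNodeVerifyTestCase covers _do_node_verify: a successful power-state
# check moves a node from VERIFYING to MANAGEABLE, while a validation or
# power-state failure sends it back to ENROLL with last_error set.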
@mgr_utils.mock_record_keepalive
class DoNodeVerifyTestCase(mgr_utils.ServiceSetUpMixin,
                           tests_db_base.DbTestCase):
    @mock.patch('ironic.drivers.modules.fake.FakePower.get_power_state')
    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def test__do_node_verify(self, mock_validate, mock_get_power_state):
        mock_get_power_state.return_value = states.POWER_OFF
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.VERIFYING,
            target_provision_state=states.MANAGEABLE,
            last_error=None,
            power_state=states.NOSTATE)

        self._start_service()
        with task_manager.acquire(
                self.context, node['id'], shared=False) as task:
            self.service._do_node_verify(task)
        self._stop_service()
        node.refresh()

        mock_validate.assert_called_once_with(task)
        mock_get_power_state.assert_called_once_with(task)

        self.assertEqual(states.MANAGEABLE, node.provision_state)
        self.assertIsNone(node.target_provision_state)
        self.assertIsNone(node.last_error)
        self.assertEqual(states.POWER_OFF, node.power_state)

    @mock.patch('ironic.drivers.modules.fake.FakePower.get_power_state')
    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def test__do_node_verify_validation_fails(self, mock_validate,
                                              mock_get_power_state):
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.VERIFYING,
            target_provision_state=states.MANAGEABLE,
            last_error=None,
            power_state=states.NOSTATE)
        mock_validate.side_effect = iter([RuntimeError("boom")])

        self._start_service()
        with task_manager.acquire(
                self.context, node['id'], shared=False) as task:
            self.service._do_node_verify(task)
        self._stop_service()
        node.refresh()

        mock_validate.assert_called_once_with(task)

        self.assertEqual(states.ENROLL, node.provision_state)
        self.assertIsNone(node.target_provision_state)
        self.assertTrue(node.last_error)
        self.assertFalse(mock_get_power_state.called)

    @mock.patch('ironic.drivers.modules.fake.FakePower.get_power_state')
    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def test__do_node_verify_get_state_fails(self, mock_validate,
                                             mock_get_power_state):
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            provision_state=states.VERIFYING,
            target_provision_state=states.MANAGEABLE,
            last_error=None,
            power_state=states.NOSTATE)
        mock_get_power_state.side_effect = iter([RuntimeError("boom")])

        self._start_service()
        with task_manager.acquire(
                self.context, node['id'], shared=False) as task:
            self.service._do_node_verify(task)
        self._stop_service()
        node.refresh()

        mock_get_power_state.assert_called_once_with(task)

        self.assertEqual(states.ENROLL, node.provision_state)
        self.assertIsNone(node.target_provision_state)
        self.assertTrue(node.last_error)


@mgr_utils.mock_record_keepalive
class MiscTestCase(mgr_utils.ServiceSetUpMixin, mgr_utils.CommonMixIn,
                   tests_db_base.DbTestCase):
    def test__mapped_to_this_conductor(self):
        self._start_service()
        n = utils.get_test_node()
        self.assertTrue(self.service._mapped_to_this_conductor(n['uuid'],
                                                               'fake'))
        self.assertFalse(self.service._mapped_to_this_conductor(
            n['uuid'], 'otherdriver'))

    @mock.patch.object(images, 'is_whole_disk_image')
    def test_validate_driver_interfaces(self, mock_iwdi):
        mock_iwdi.return_value = False
        target_raid_config = {'logical_disks': [{'size_gb': 1,
                                                 'raid_level': '1'}]}
        node = obj_utils.create_test_node(
            self.context, driver='fake',
            target_raid_config=target_raid_config)
        ret = self.service.validate_driver_interfaces(self.context,
                                                      node.uuid)
        expected = {'console': {'result': True},
                    'power': {'result': True},
                    'inspect': {'result': True},
                    'management': {'result': True},
                    'boot': {'result': True},
                    'raid': {'result': True},
                    'deploy': {'result': True}}
        self.assertEqual(expected, ret)
        mock_iwdi.assert_called_once_with(self.context, node.instance_info)

    @mock.patch.object(images, 'is_whole_disk_image')
    def test_validate_driver_interfaces_validation_fail(self, mock_iwdi):
        mock_iwdi.return_value = False
        node = obj_utils.create_test_node(self.context, driver='fake')
        with mock.patch(
                'ironic.drivers.modules.fake.FakeDeploy.validate') as deploy:
            reason = 'fake reason'
            deploy.side_effect = exception.InvalidParameterValue(reason)
            ret = self.service.validate_driver_interfaces(self.context,
                                                          node.uuid)
            self.assertFalse(ret['deploy']['result'])
            self.assertEqual(reason, ret['deploy']['reason'])
            mock_iwdi.assert_called_once_with(self.context,
                                              node.instance_info)

    @mock.patch.object(manager.ConductorManager, '_fail_if_in_state',
                       autospec=True)
    @mock.patch.object(manager.ConductorManager, '_mapped_to_this_conductor')
    @mock.patch.object(dbapi.IMPL, 'get_nodeinfo_list')
    def test_iter_nodes(self, mock_nodeinfo_list, mock_mapped,
                        mock_fail_if_state):
        self._start_service()
        self.columns = ['uuid', 'driver', 'id']
        nodes = [self._create_node(id=i, driver='fake') for i in range(2)]
        mock_nodeinfo_list.return_value = self._get_nodeinfo_list_response(
            nodes)
        mock_mapped.side_effect = [True, False]

        result = list(self.service.iter_nodes(fields=['id'],
                                              filters=mock.sentinel.filters))
        self.assertEqual([(nodes[0].uuid, 'fake', 0)], result)
        mock_nodeinfo_list.assert_called_once_with(
            columns=self.columns, filters=mock.sentinel.filters)
        mock_fail_if_state.assert_called_once_with(
            mock.ANY, mock.ANY,
            {'provision_state': 'deploying', 'reserved': False},
            'deploying', 'provision_updated_at', last_error=mock.ANY)
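

# ConsoleTestCase covers set_console_mode and get_console_information,
# including the unsupported-interface, validation-failure and
# start/stop-failure paths.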
@mgr_utils.mock_record_keepalive
class ConsoleTestCase(mgr_utils.ServiceSetUpMixin, tests_db_base.DbTestCase):
    def test_set_console_mode_worker_pool_full(self):
        node = obj_utils.create_test_node(self.context, driver='fake')
        self._start_service()
        with mock.patch.object(self.service, '_spawn_worker') as spawn_mock:
            spawn_mock.side_effect = exception.NoFreeConductorWorker()

            exc = self.assertRaises(messaging.rpc.ExpectedException,
                                    self.service.set_console_mode,
                                    self.context, node.uuid, True)
            # Compare true exception hidden by @messaging.expected_exceptions
            self.assertEqual(exception.NoFreeConductorWorker, exc.exc_info[0])
            self._stop_service()
            spawn_mock.assert_called_once_with(mock.ANY, mock.ANY, mock.ANY)

    def test_set_console_mode_enabled(self):
        node = obj_utils.create_test_node(self.context, driver='fake')
        self._start_service()
        self.service.set_console_mode(self.context, node.uuid, True)
        self._stop_service()
        node.refresh()
        self.assertTrue(node.console_enabled)

    def test_set_console_mode_disabled(self):
        node = obj_utils.create_test_node(self.context, driver='fake')
        self._start_service()
        self.service.set_console_mode(self.context, node.uuid, False)
        self._stop_service()
        node.refresh()
        self.assertFalse(node.console_enabled)

    def test_set_console_mode_not_supported(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          last_error=None)
        self._start_service()
        # null the console interface
        self.driver.console = None
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.set_console_mode,
                                self.context, node.uuid, True)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.UnsupportedDriverExtension,
                         exc.exc_info[0])
        self._stop_service()
        node.refresh()

    def test_set_console_mode_validation_fail(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          last_error=None)
        self._start_service()
        with mock.patch.object(self.driver.console, 'validate') as mock_val:
            mock_val.side_effect = exception.InvalidParameterValue('error')
            exc = self.assertRaises(messaging.rpc.ExpectedException,
                                    self.service.set_console_mode,
                                    self.context, node.uuid, True)
            # Compare true exception hidden by @messaging.expected_exceptions
            self.assertEqual(exception.InvalidParameterValue,
                             exc.exc_info[0])

    def test_set_console_mode_start_fail(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          last_error=None,
                                          console_enabled=False)
        self._start_service()
        with mock.patch.object(self.driver.console,
                               'start_console') as mock_sc:
            mock_sc.side_effect = exception.IronicException('test-error')
            self.service.set_console_mode(self.context, node.uuid, True)
            self._stop_service()
            mock_sc.assert_called_once_with(mock.ANY)
            node.refresh()
            self.assertIsNotNone(node.last_error)

    def test_set_console_mode_stop_fail(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          last_error=None,
                                          console_enabled=True)
        self._start_service()
        with mock.patch.object(self.driver.console,
                               'stop_console') as mock_sc:
            mock_sc.side_effect = exception.IronicException('test-error')
            self.service.set_console_mode(self.context, node.uuid, False)
            self._stop_service()
            mock_sc.assert_called_once_with(mock.ANY)
            node.refresh()
            self.assertIsNotNone(node.last_error)

    def test_enable_console_already_enabled(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          console_enabled=True)
        self._start_service()
        with mock.patch.object(self.driver.console,
                               'start_console') as mock_sc:
            self.service.set_console_mode(self.context, node.uuid, True)
            self._stop_service()
            self.assertFalse(mock_sc.called)

    def test_disable_console_already_disabled(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          console_enabled=False)
        self._start_service()
        with mock.patch.object(self.driver.console,
                               'stop_console') as mock_sc:
            self.service.set_console_mode(self.context, node.uuid, False)
            self._stop_service()
            self.assertFalse(mock_sc.called)

    def test_get_console(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          console_enabled=True)
        console_info = {'test': 'test info'}
        with mock.patch.object(self.driver.console,
                               'get_console') as mock_gc:
            mock_gc.return_value = console_info
            data = self.service.get_console_information(self.context,
                                                        node.uuid)
            self.assertEqual(console_info, data)

    def test_get_console_not_supported(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          console_enabled=True)
        # null the console interface
        self.driver.console = None
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.get_console_information,
                                self.context, node.uuid)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.UnsupportedDriverExtension,
                         exc.exc_info[0])

    def test_get_console_disabled(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          console_enabled=False)
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.get_console_information,
                                self.context, node.uuid)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeConsoleNotEnabled, exc.exc_info[0])

    def test_get_console_validate_fail(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          console_enabled=True)
        with mock.patch.object(self.driver.console, 'validate') as mock_gc:
            mock_gc.side_effect = exception.InvalidParameterValue('error')
            exc = self.assertRaises(messaging.rpc.ExpectedException,
                                    self.service.get_console_information,
                                    self.context, node.uuid)
            # Compare true exception hidden by @messaging.expected_exceptions
            self.assertEqual(exception.InvalidParameterValue,
                             exc.exc_info[0])
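

# DestroyNodeTestCase: node deletion is only allowed in
# DELETE_ALLOWED_STATES (or when the node is in maintenance) and must
# respect existing reservations and instance associations.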
@mgr_utils.mock_record_keepalive
class DestroyNodeTestCase(mgr_utils.ServiceSetUpMixin,
                          tests_db_base.DbTestCase):

    def test_destroy_node(self):
        self._start_service()
        for state in states.DELETE_ALLOWED_STATES:
            node = obj_utils.create_test_node(self.context,
                                              provision_state=state)
            self.service.destroy_node(self.context, node.uuid)
            self.assertRaises(exception.NodeNotFound,
                              self.dbapi.get_node_by_uuid,
                              node.uuid)

    def test_destroy_node_reserved(self):
        self._start_service()
        fake_reservation = 'fake-reserv'
        node = obj_utils.create_test_node(self.context,
                                          reservation=fake_reservation)

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.destroy_node,
                                self.context, node.uuid)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeLocked, exc.exc_info[0])
        # Verify existing reservation wasn't broken.
        node.refresh()
        self.assertEqual(fake_reservation, node.reservation)

    def test_destroy_node_associated(self):
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          instance_uuid='fake-uuid')

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.destroy_node,
                                self.context, node.uuid)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeAssociated, exc.exc_info[0])
        # Verify reservation was released.
        node.refresh()
        self.assertIsNone(node.reservation)

    def test_destroy_node_invalid_provision_state(self):
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          provision_state=states.ACTIVE)

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.destroy_node,
                                self.context, node.uuid)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidState, exc.exc_info[0])
        # Verify reservation was released.
        node.refresh()
        self.assertIsNone(node.reservation)

    def test_destroy_node_allowed_in_maintenance(self):
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          instance_uuid='fake-uuid',
                                          provision_state=states.ACTIVE,
                                          maintenance=True)
        self.service.destroy_node(self.context, node.uuid)
        self.assertRaises(exception.NodeNotFound,
                          self.dbapi.get_node_by_uuid, node.uuid)

    def test_destroy_node_power_off(self):
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          power_state=states.POWER_OFF)
        self.service.destroy_node(self.context, node.uuid)

    def test_destroy_node_console_enabled(self):
        self._start_service()
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          console_enabled=True)
        with mock.patch.object(self.driver.console,
                               'stop_console') as mock_sc:
            self.service.destroy_node(self.context, node.uuid)
            mock_sc.assert_called_once_with(mock.ANY)
            self.assertRaises(exception.NodeNotFound,
                              self.dbapi.get_node_by_uuid, node.uuid)


@mgr_utils.mock_record_keepalive
class UpdatePortTestCase(mgr_utils.ServiceSetUpMixin,
                         tests_db_base.DbTestCase):
    def test_update_port(self):
        node = obj_utils.create_test_node(self.context, driver='fake')

        port = obj_utils.create_test_port(self.context,
                                          node_id=node.id,
                                          extra={'foo': 'bar'})
        new_extra = {'foo': 'baz'}
        port.extra = new_extra
        res = self.service.update_port(self.context, port)
        self.assertEqual(new_extra, res.extra)

    def test_update_port_node_locked(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          reservation='fake-reserv')

        port = obj_utils.create_test_port(self.context, node_id=node.id)
        port.extra = {'foo': 'baz'}
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_port,
                                self.context, port)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeLocked, exc.exc_info[0])

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_address')
    def test_update_port_address(self, mac_update_mock):
        node = obj_utils.create_test_node(self.context, driver='fake')

        port = obj_utils.create_test_port(self.context,
                                          node_id=node.id,
                                          extra={'vif_port_id': 'fake-id'})
        new_address = '11:22:33:44:55:bb'
        port.address = new_address
        res = self.service.update_port(self.context, port)
        self.assertEqual(new_address, res.address)
        mac_update_mock.assert_called_once_with(
            'fake-id', new_address, token=self.context.auth_token)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_address')
    def test_update_port_address_fail(self, mac_update_mock):
        node = obj_utils.create_test_node(self.context, driver='fake')

        port = obj_utils.create_test_port(self.context,
                                          node_id=node.id,
                                          extra={'vif_port_id': 'fake-id'})
        old_address = port.address
        port.address = '11:22:33:44:55:bb'
        mac_update_mock.side_effect = (
            exception.FailedToUpdateMacOnPort(port_id=port.uuid))
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_port,
                                self.context, port)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.FailedToUpdateMacOnPort, exc.exc_info[0])
        port.refresh()
        self.assertEqual(old_address, port.address)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_address')
    def test_update_port_address_no_vif_id(self, mac_update_mock):
        node = obj_utils.create_test_node(self.context, driver='fake')
        port = obj_utils.create_test_port(self.context, node_id=node.id)

        new_address = '11:22:33:44:55:bb'
        port.address = new_address
        res = self.service.update_port(self.context, port)
        self.assertEqual(new_address, res.address)
        self.assertFalse(mac_update_mock.called)

    def test_update_port_node_deleting_state(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          provision_state=states.DELETING)
        port = obj_utils.create_test_port(self.context,
                                          node_id=node.id,
                                          extra={'foo': 'bar'})
        old_pxe = port.pxe_enabled
        port.pxe_enabled = True
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_port,
                                self.context, port)
        self.assertEqual(exception.InvalidState, exc.exc_info[0])
        port.refresh()
        self.assertEqual(old_pxe, port.pxe_enabled)

    def test_update_port_node_manageable_state(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          provision_state=states.MANAGEABLE)
        port = obj_utils.create_test_port(self.context,
                                          node_id=node.id,
                                          extra={'foo': 'bar'})

        port.pxe_enabled = True
        self.service.update_port(self.context, port)
        port.refresh()
        self.assertEqual(True, port.pxe_enabled)

    def test_update_port_node_active_state_and_maintenance(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          provision_state=states.ACTIVE,
                                          maintenance=True)
        port = obj_utils.create_test_port(self.context,
                                          node_id=node.id,
                                          extra={'foo': 'bar'})

        port.pxe_enabled = True
        self.service.update_port(self.context, port)
        port.refresh()
        self.assertEqual(True, port.pxe_enabled)

    def test__filter_out_unsupported_types_all(self):
        self._start_service()
        CONF.set_override('send_sensor_data_types', ['All'],
                          group='conductor')
        fake_sensors_data = {"t1": {'f1': 'v1'}, "t2": {'f1': 'v1'}}
        actual_result = (
            self.service._filter_out_unsupported_types(fake_sensors_data))
        expected_result = {"t1": {'f1': 'v1'}, "t2": {'f1': 'v1'}}
        self.assertEqual(expected_result, actual_result)

    def test__filter_out_unsupported_types_part(self):
        self._start_service()
        CONF.set_override('send_sensor_data_types', ['t1'],
                          group='conductor')
        fake_sensors_data = {"t1": {'f1': 'v1'}, "t2": {'f1': 'v1'}}
        actual_result = (
            self.service._filter_out_unsupported_types(fake_sensors_data))
        expected_result = {"t1": {'f1': 'v1'}}
        self.assertEqual(expected_result, actual_result)

    def test__filter_out_unsupported_types_non(self):
        self._start_service()
        CONF.set_override('send_sensor_data_types', ['t3'],
                          group='conductor')
        fake_sensors_data = {"t1": {'f1': 'v1'}, "t2": {'f1': 'v1'}}
        actual_result = (
            self.service._filter_out_unsupported_types(fake_sensors_data))
        expected_result = {}
        self.assertEqual(expected_result, actual_result)
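
    # The _send_sensor_data tests verify that sensor collection only runs
    # when send_sensor_data is enabled, and that nodes without a
    # management interface are skipped.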
    @mock.patch.object(manager.ConductorManager, '_mapped_to_this_conductor')
    @mock.patch.object(dbapi.IMPL, 'get_nodeinfo_list')
    @mock.patch.object(task_manager, 'acquire')
    def test___send_sensor_data(self, acquire_mock, get_nodeinfo_list_mock,
                                _mapped_to_this_conductor_mock):
        node = obj_utils.create_test_node(self.context, driver='fake')
        self._start_service()
        CONF.set_override('send_sensor_data', True, group='conductor')
        acquire_mock.return_value.__enter__.return_value.driver = self.driver
        with mock.patch.object(self.driver.management,
                               'get_sensors_data') as get_sensors_data_mock:
            with mock.patch.object(self.driver.management,
                                   'validate') as validate_mock:
                get_sensors_data_mock.return_value = 'fake-sensor-data'
                _mapped_to_this_conductor_mock.return_value = True
                get_nodeinfo_list_mock.return_value = [
                    (node.uuid, node.driver, node.instance_uuid)]
                self.service._send_sensor_data(self.context)
                self.assertTrue(get_nodeinfo_list_mock.called)
                self.assertTrue(_mapped_to_this_conductor_mock.called)
                self.assertTrue(acquire_mock.called)
                self.assertTrue(get_sensors_data_mock.called)
                self.assertTrue(validate_mock.called)

    @mock.patch.object(manager.ConductorManager, '_fail_if_in_state',
                       autospec=True)
    @mock.patch.object(manager.ConductorManager, '_mapped_to_this_conductor')
    @mock.patch.object(dbapi.IMPL, 'get_nodeinfo_list')
    @mock.patch.object(task_manager, 'acquire')
    def test___send_sensor_data_disabled(self, acquire_mock,
                                         get_nodeinfo_list_mock,
                                         _mapped_to_this_conductor_mock,
                                         mock_fail_if_state):
        node = obj_utils.create_test_node(self.context, driver='fake')
        self._start_service()
        acquire_mock.return_value.__enter__.return_value.driver = self.driver
        with mock.patch.object(self.driver.management,
                               'get_sensors_data') as get_sensors_data_mock:
            with mock.patch.object(self.driver.management,
                                   'validate') as validate_mock:
                get_sensors_data_mock.return_value = 'fake-sensor-data'
                _mapped_to_this_conductor_mock.return_value = True
                get_nodeinfo_list_mock.return_value = [
                    (node.uuid, node.driver, node.instance_uuid)]
                self.service._send_sensor_data(self.context)
                self.assertFalse(get_nodeinfo_list_mock.called)
                self.assertFalse(_mapped_to_this_conductor_mock.called)
                self.assertFalse(acquire_mock.called)
                self.assertFalse(get_sensors_data_mock.called)
                self.assertFalse(validate_mock.called)
        mock_fail_if_state.assert_called_once_with(
            mock.ANY, mock.ANY,
            {'provision_state': 'deploying', 'reserved': False},
            'deploying', 'provision_updated_at', last_error=mock.ANY)

    @mock.patch.object(manager.ConductorManager, 'iter_nodes', autospec=True)
    @mock.patch.object(task_manager, 'acquire', autospec=True)
    def test___send_sensor_data_no_management(self, acquire_mock,
                                              iter_nodes_mock):
        CONF.set_override('send_sensor_data', True, group='conductor')
        iter_nodes_mock.return_value = [('fake_uuid1', 'fake', 'fake_uuid2')]
        self.driver.management = None
        acquire_mock.return_value.__enter__.return_value.driver = self.driver

        with mock.patch.object(fake.FakeManagement, 'get_sensors_data',
                               autospec=True) as get_sensors_data_mock:
            with mock.patch.object(fake.FakeManagement, 'validate',
                                   autospec=True) as validate_mock:
                self.service._send_sensor_data(self.context)

        self.assertTrue(iter_nodes_mock.called)
        self.assertTrue(acquire_mock.called)
        self.assertFalse(get_sensors_data_mock.called)
        self.assertFalse(validate_mock.called)

    def test_set_boot_device(self):
        node = obj_utils.create_test_node(self.context, driver='fake')
        with mock.patch.object(self.driver.management,
                               'validate') as mock_val:
            with mock.patch.object(self.driver.management,
                                   'set_boot_device') as mock_sbd:
                self.service.set_boot_device(self.context, node.uuid,
                                             boot_devices.PXE)
                mock_val.assert_called_once_with(mock.ANY)
                mock_sbd.assert_called_once_with(mock.ANY, boot_devices.PXE,
                                                 persistent=False)

    def test_set_boot_device_node_locked(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          reservation='fake-reserv')
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.set_boot_device,
                                self.context, node.uuid, boot_devices.DISK)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeLocked, exc.exc_info[0])

    def test_set_boot_device_not_supported(self):
        node = obj_utils.create_test_node(self.context, driver='fake')
        # null the management interface
        self.driver.management = None
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.set_boot_device,
                                self.context, node.uuid, boot_devices.DISK)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.UnsupportedDriverExtension,
                         exc.exc_info[0])

    def test_set_boot_device_validate_fail(self):
        node = obj_utils.create_test_node(self.context, driver='fake')
        with mock.patch.object(self.driver.management,
                               'validate') as mock_val:
            mock_val.side_effect = exception.InvalidParameterValue('error')
            exc = self.assertRaises(messaging.rpc.ExpectedException,
                                    self.service.set_boot_device,
                                    self.context, node.uuid,
                                    boot_devices.DISK)
            # Compare true exception hidden by @messaging.expected_exceptions
            self.assertEqual(exception.InvalidParameterValue,
                             exc.exc_info[0])

    def test_get_boot_device(self):
        node = obj_utils.create_test_node(self.context, driver='fake')
        bootdev = self.service.get_boot_device(self.context, node.uuid)
        expected = {'boot_device': boot_devices.PXE, 'persistent': False}
        self.assertEqual(expected, bootdev)

    def test_get_boot_device_node_locked(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          reservation='fake-reserv')
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.get_boot_device,
                                self.context, node.uuid)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeLocked, exc.exc_info[0])

    def test_get_boot_device_not_supported(self):
        node = obj_utils.create_test_node(self.context, driver='fake')
        # null the management interface
        self.driver.management = None
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.get_boot_device,
                                self.context, node.uuid)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.UnsupportedDriverExtension,
                         exc.exc_info[0])

    def test_get_boot_device_validate_fail(self):
        node = obj_utils.create_test_node(self.context, driver='fake')
        with mock.patch.object(self.driver.management,
                               'validate') as mock_val:
            mock_val.side_effect = exception.InvalidParameterValue('error')
            exc = self.assertRaises(messaging.rpc.ExpectedException,
                                    self.service.get_boot_device,
                                    self.context, node.uuid)
            # Compare true exception hidden by @messaging.expected_exceptions
            self.assertEqual(exception.InvalidParameterValue,
                             exc.exc_info[0])

    def test_get_supported_boot_devices(self):
        node = obj_utils.create_test_node(self.context, driver='fake')
        bootdevs = self.service.get_supported_boot_devices(self.context,
                                                           node.uuid)
        self.assertEqual([boot_devices.PXE], bootdevs)

    def test_get_supported_boot_devices_iface_not_supported(self):
        node = obj_utils.create_test_node(self.context, driver='fake')
        # null the management interface
        self.driver.management = None
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.get_supported_boot_devices,
                                self.context, node.uuid)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.UnsupportedDriverExtension,
                         exc.exc_info[0])
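

# UpdatePortgroupTestCase mirrors the port-update tests for portgroups,
# including MAC address propagation to Neutron via update_port_address.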
@mgr_utils.mock_record_keepalive
class UpdatePortgroupTestCase(mgr_utils.ServiceSetUpMixin,
                              tests_db_base.DbTestCase):
    def test_update_portgroup(self):
        node = obj_utils.create_test_node(self.context, driver='fake')
        portgroup = obj_utils.create_test_portgroup(self.context,
                                                    node_id=node.id,
                                                    extra={'foo': 'bar'})
        new_extra = {'foo': 'baz'}
        portgroup.extra = new_extra
        self.service.update_portgroup(self.context, portgroup)
        portgroup.refresh()
        self.assertEqual(new_extra, portgroup.extra)

    def test_update_portgroup_node_locked(self):
        node = obj_utils.create_test_node(self.context, driver='fake',
                                          reservation='fake-reserv')
        portgroup = obj_utils.create_test_portgroup(self.context,
                                                    node_id=node.id)
        old_extra = portgroup.extra
        portgroup.extra = {'foo': 'baz'}
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_portgroup,
                                self.context, portgroup)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeLocked, exc.exc_info[0])
        portgroup.refresh()
        self.assertEqual(old_extra, portgroup.extra)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_address')
    def test_update_portgroup_address(self, mac_update_mock):
        node = obj_utils.create_test_node(self.context, driver='fake')
        pg = obj_utils.create_test_portgroup(
            self.context, node_id=node.id,
            extra={'vif_portgroup_id': 'fake-id'})
        new_address = '11:22:33:44:55:bb'
        pg.address = new_address
        self.service.update_portgroup(self.context, pg)
        pg.refresh()
        self.assertEqual(new_address, pg.address)
        mac_update_mock.assert_called_once_with(
            'fake-id', new_address, token=self.context.auth_token)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_address')
    def test_update_portgroup_address_fail(self, mac_update_mock):
        node = obj_utils.create_test_node(self.context, driver='fake')
        pg = obj_utils.create_test_portgroup(
            self.context, node_id=node.id,
            extra={'vif_portgroup_id': 'fake-id'})
        old_address = pg.address
        pg.address = '11:22:33:44:55:bb'
        mac_update_mock.side_effect = (
            exception.FailedToUpdateMacOnPort(port_id=pg.uuid))
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_portgroup,
                                self.context, pg)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.FailedToUpdateMacOnPort, exc.exc_info[0])
        pg.refresh()
        self.assertEqual(old_address, pg.address)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_address')
    def test_update_portgroup_address_no_vif_id(self, mac_update_mock):
        node = obj_utils.create_test_node(self.context, driver='fake')
        pg = obj_utils.create_test_portgroup(self.context, node_id=node.id)

        new_address = '11:22:33:44:55:bb'
        pg.address = new_address
        self.service.update_portgroup(self.context, pg)
        pg.refresh()
        self.assertEqual(new_address, pg.address)
        self.assertFalse(mac_update_mock.called)
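

# RaidTestCases covers the RAID RPC surface: fetching logical-disk
# properties for a driver and setting or clearing a node's
# target_raid_config.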
@mgr_utils.mock_record_keepalive
class RaidTestCases(mgr_utils.ServiceSetUpMixin, tests_db_base.DbTestCase):

    def setUp(self):
        super(RaidTestCases, self).setUp()
        self.node = obj_utils.create_test_node(
            self.context, driver='fake', provision_state=states.MANAGEABLE)

    def test_get_raid_logical_disk_properties(self):
        self._start_service()
        properties = self.service.get_raid_logical_disk_properties(
            self.context, 'fake')
        self.assertIn('raid_level', properties)
        self.assertIn('size_gb', properties)

    def test_get_raid_logical_disk_properties_iface_not_supported(self):
        self.driver.raid = None
        self._start_service()
        exc = self.assertRaises(
            messaging.rpc.ExpectedException,
            self.service.get_raid_logical_disk_properties,
            self.context, 'fake')
        self.assertEqual(exception.UnsupportedDriverExtension,
                         exc.exc_info[0])

    def test_set_target_raid_config(self):
        raid_config = {'logical_disks': [{'size_gb': 100, 'raid_level': '1'}]}
        self.service.set_target_raid_config(
            self.context, self.node.uuid, raid_config)
        self.node.refresh()
        self.assertEqual(raid_config, self.node.target_raid_config)

    def test_set_target_raid_config_empty(self):
        self.node.target_raid_config = {'foo': 'bar'}
        self.node.save()
        raid_config = {}
        self.service.set_target_raid_config(
            self.context, self.node.uuid, raid_config)
        self.node.refresh()
        self.assertEqual({}, self.node.target_raid_config)

    def test_set_target_raid_config_iface_not_supported(self):
        raid_config = {'logical_disks': [{'size_gb': 100, 'raid_level': '1'}]}
        self.driver.raid = None
        exc = self.assertRaises(
            messaging.rpc.ExpectedException,
            self.service.set_target_raid_config,
            self.context, self.node.uuid, raid_config)
        self.node.refresh()
        self.assertEqual({}, self.node.target_raid_config)
        self.assertEqual(exception.UnsupportedDriverExtension,
                         exc.exc_info[0])

    def test_set_target_raid_config_invalid_parameter_value(self):
        # Missing raid_level in the below raid config.
        raid_config = {'logical_disks': [{'size_gb': 100}]}
        self.node.target_raid_config = {'foo': 'bar'}
        self.node.save()
        exc = self.assertRaises(
            messaging.rpc.ExpectedException,
            self.service.set_target_raid_config,
            self.context, self.node.uuid, raid_config)
        self.node.refresh()
        self.assertEqual({'foo': 'bar'}, self.node.target_raid_config)
        self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0])
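

# ManagerDoSyncPowerStateTestCase unit-tests do_sync_power_state with a
# fully mocked task and driver, covering retry counting, forced power
# state correction and the lock-upgrade paths.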
@mock.patch.object(conductor_utils, 'node_power_action')
class ManagerDoSyncPowerStateTestCase(tests_db_base.DbTestCase):
    def setUp(self):
        super(ManagerDoSyncPowerStateTestCase, self).setUp()
        self.service = manager.ConductorManager('hostname', 'test-topic')
        self.driver = mock.Mock(spec_set=drivers_base.BaseDriver)
        self.power = self.driver.power
        self.node = obj_utils.create_test_node(
            self.context, driver='fake', maintenance=False,
            provision_state=states.AVAILABLE)
        self.task = mock.Mock(spec_set=['context', 'driver', 'node',
                                        'upgrade_lock', 'shared'])
        self.task.context = self.context
        self.task.driver = self.driver
        self.task.node = self.node
        self.task.shared = False
        self.config(force_power_state_during_sync=False, group='conductor')

    def _do_sync_power_state(self, old_power_state, new_power_states,
                             fail_validate=False):
        self.node.power_state = old_power_state
        if not isinstance(new_power_states, (list, tuple)):
            new_power_states = [new_power_states]
        if fail_validate:
            exc = exception.InvalidParameterValue('error')
            self.power.validate.side_effect = exc
        for new_power_state in new_power_states:
            self.node.power_state = old_power_state
            if isinstance(new_power_state, Exception):
                self.power.get_power_state.side_effect = new_power_state
            else:
                self.power.get_power_state.return_value = new_power_state
            count = manager.do_sync_power_state(
                self.task,
                self.service.power_state_sync_count[self.node.uuid])
            self.service.power_state_sync_count[self.node.uuid] = count

    def test_state_unchanged(self, node_power_action):
        self._do_sync_power_state('fake-power', 'fake-power')

        self.assertFalse(self.power.validate.called)
        self.power.get_power_state.assert_called_once_with(self.task)
        self.assertEqual('fake-power', self.node.power_state)
        self.assertFalse(node_power_action.called)
        self.assertFalse(self.task.upgrade_lock.called)

    def test_state_not_set(self, node_power_action):
        self._do_sync_power_state(None, states.POWER_ON)

        self.power.validate.assert_called_once_with(self.task)
        self.power.get_power_state.assert_called_once_with(self.task)
        self.assertFalse(node_power_action.called)
        self.assertEqual(states.POWER_ON, self.node.power_state)
        self.task.upgrade_lock.assert_called_once_with()

    def test_validate_fail(self, node_power_action):
        self._do_sync_power_state(None, states.POWER_ON,
                                  fail_validate=True)

        self.power.validate.assert_called_once_with(self.task)
        self.assertFalse(self.power.get_power_state.called)
        self.assertFalse(node_power_action.called)
        self.assertIsNone(self.node.power_state)

    def test_get_power_state_fail(self, node_power_action):
        self._do_sync_power_state('fake',
                                  exception.IronicException('foo'))

        self.assertFalse(self.power.validate.called)
        self.power.get_power_state.assert_called_once_with(self.task)
        self.assertFalse(node_power_action.called)
        self.assertEqual('fake', self.node.power_state)
        self.assertEqual(
            1, self.service.power_state_sync_count[self.node.uuid])

    def test_get_power_state_error(self, node_power_action):
        self._do_sync_power_state('fake', states.ERROR)
        self.assertFalse(self.power.validate.called)
        self.power.get_power_state.assert_called_once_with(self.task)
        self.assertFalse(node_power_action.called)
        self.assertEqual('fake', self.node.power_state)
        self.assertEqual(
            1, self.service.power_state_sync_count[self.node.uuid])

    def test_state_changed_no_sync(self, node_power_action):
        self._do_sync_power_state(states.POWER_ON, states.POWER_OFF)

        self.assertFalse(self.power.validate.called)
        self.power.get_power_state.assert_called_once_with(self.task)
        self.assertFalse(node_power_action.called)
        self.assertEqual(states.POWER_OFF, self.node.power_state)
        self.task.upgrade_lock.assert_called_once_with()

    def test_state_changed_sync(self, node_power_action):
        self.config(force_power_state_during_sync=True, group='conductor')
        self.config(power_state_sync_max_retries=1, group='conductor')

        self._do_sync_power_state(states.POWER_ON, states.POWER_OFF)

        self.assertFalse(self.power.validate.called)
        self.power.get_power_state.assert_called_once_with(self.task)
        node_power_action.assert_called_once_with(self.task, states.POWER_ON)
        self.assertEqual(states.POWER_ON, self.node.power_state)
        self.task.upgrade_lock.assert_called_once_with()

    def test_state_changed_sync_failed(self, node_power_action):
        self.config(force_power_state_during_sync=True, group='conductor')

        node_power_action.side_effect = exception.IronicException('test')
        self._do_sync_power_state(states.POWER_ON, states.POWER_OFF)

        # Just testing that this test doesn't raise.
        self.assertFalse(self.power.validate.called)
        self.power.get_power_state.assert_called_once_with(self.task)
        node_power_action.assert_called_once_with(self.task, states.POWER_ON)
        self.assertEqual(states.POWER_ON, self.node.power_state)
        self.assertEqual(
            1, self.service.power_state_sync_count[self.node.uuid])

    def test_max_retries_exceeded(self, node_power_action):
        self.config(force_power_state_during_sync=True, group='conductor')
        self.config(power_state_sync_max_retries=1, group='conductor')

        self._do_sync_power_state(states.POWER_ON, [states.POWER_OFF,
                                                    states.POWER_OFF])

        self.assertFalse(self.power.validate.called)
        power_exp_calls = [mock.call(self.task)] * 2
        self.assertEqual(power_exp_calls,
                         self.power.get_power_state.call_args_list)
        node_power_action.assert_called_once_with(self.task, states.POWER_ON)
        self.assertEqual(states.POWER_OFF, self.node.power_state)
        self.assertEqual(
            2, self.service.power_state_sync_count[self.node.uuid])
        self.assertTrue(self.node.maintenance)
        self.assertIsNotNone(self.node.maintenance_reason)

    def test_max_retries_exceeded2(self, node_power_action):
        self.config(force_power_state_during_sync=True, group='conductor')
        self.config(power_state_sync_max_retries=2, group='conductor')

        self._do_sync_power_state(states.POWER_ON, [states.POWER_OFF,
                                                    states.POWER_OFF,
                                                    states.POWER_OFF])

        self.assertFalse(self.power.validate.called)
        power_exp_calls = [mock.call(self.task)] * 3
        self.assertEqual(power_exp_calls,
                         self.power.get_power_state.call_args_list)
        npa_exp_calls = [mock.call(self.task, states.POWER_ON)] * 2
        self.assertEqual(npa_exp_calls, node_power_action.call_args_list)
        self.assertEqual(states.POWER_OFF, self.node.power_state)
        self.assertEqual(
            3, self.service.power_state_sync_count[self.node.uuid])
        self.assertTrue(self.node.maintenance)

    def test_retry_then_success(self, node_power_action):
        self.config(force_power_state_during_sync=True, group='conductor')
        self.config(power_state_sync_max_retries=2, group='conductor')

        self._do_sync_power_state(states.POWER_ON, [states.POWER_OFF,
                                                    states.POWER_OFF,
                                                    states.POWER_ON])

        self.assertFalse(self.power.validate.called)
        power_exp_calls = [mock.call(self.task)] * 3
        self.assertEqual(power_exp_calls,
                         self.power.get_power_state.call_args_list)
        npa_exp_calls = [mock.call(self.task, states.POWER_ON)] * 2
        self.assertEqual(npa_exp_calls, node_power_action.call_args_list)
        self.assertEqual(states.POWER_ON, self.node.power_state)
        self.assertEqual(
            0, self.service.power_state_sync_count[self.node.uuid])

    def test_power_state_sync_max_retries_gps_exception(self,
                                                        node_power_action):
        self.config(power_state_sync_max_retries=2, group='conductor')
        self.service.power_state_sync_count[self.node.uuid] = 2

        node_power_action.side_effect = exception.IronicException('test')
        self._do_sync_power_state(
            'fake', exception.IronicException('SpongeBob'))

        self.assertFalse(self.power.validate.called)
        self.power.get_power_state.assert_called_once_with(self.task)
        self.assertIsNone(self.node.power_state)
        self.assertTrue(self.node.maintenance)
        self.assertFalse(node_power_action.called)
        # make sure the actual error is in the last_error attribute
        self.assertIn('SpongeBob', self.node.last_error)

    def test_maintenance_on_upgrade_lock(self, node_power_action):
        self.node.maintenance = True

        self._do_sync_power_state(states.POWER_ON, states.POWER_OFF)

        self.assertFalse(self.power.validate.called)
        self.power.get_power_state.assert_called_once_with(self.task)
        self.assertEqual(states.POWER_ON, self.node.power_state)
        self.assertFalse(node_power_action.called)
        self.task.upgrade_lock.assert_called_once_with()

    def test_wrong_provision_state_on_upgrade_lock(self, node_power_action):
        self.node.provision_state = states.DEPLOYWAIT

        self._do_sync_power_state(states.POWER_ON, states.POWER_OFF)

        self.assertFalse(self.power.validate.called)
        self.power.get_power_state.assert_called_once_with(self.task)
        self.assertEqual(states.POWER_ON, self.node.power_state)
        self.assertFalse(node_power_action.called)
        self.task.upgrade_lock.assert_called_once_with()

    def test_correct_power_state_on_upgrade_lock(self, node_power_action):
        def _fake_upgrade():
            self.node.power_state = states.POWER_OFF

        self.task.upgrade_lock.side_effect = _fake_upgrade

        self._do_sync_power_state(states.POWER_ON, states.POWER_OFF)

        self.assertFalse(self.power.validate.called)
        self.power.get_power_state.assert_called_once_with(self.task)
        self.assertFalse(node_power_action.called)
        self.task.upgrade_lock.assert_called_once_with()
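

# ManagerSyncPowerStatesTestCase covers the periodic _sync_power_states
# task: nodes that are unmapped, locked, in DEPLOYWAIT or ENROLL, in a
# power transition, in maintenance, or deleted are skipped.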
task = self._create_task( node_attrs=dict(provision_state=states.ENROLL, target_provision_state=states.NOSTATE, uuid=self.node.uuid)) acquire_mock.side_effect = self._get_acquire_side_effect(task) self.service._sync_power_states(self.context) get_nodeinfo_mock.assert_called_once_with( columns=self.columns, filters=self.filters) mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver) acquire_mock.assert_called_once_with(self.context, self.node.uuid, purpose=mock.ANY, shared=True) self.assertFalse(sync_mock.called) def test_node_in_power_transition_on_acquire(self, get_nodeinfo_mock, mapped_mock, acquire_mock, sync_mock): get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response() mapped_mock.return_value = True task = self._create_task( node_attrs=dict(target_power_state=states.POWER_ON, uuid=self.node.uuid)) acquire_mock.side_effect = self._get_acquire_side_effect(task) self.service._sync_power_states(self.context) get_nodeinfo_mock.assert_called_once_with( columns=self.columns, filters=self.filters) mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver) acquire_mock.assert_called_once_with(self.context, self.node.uuid, purpose=mock.ANY, shared=True) self.assertFalse(sync_mock.called) def test_node_in_maintenance_on_acquire(self, get_nodeinfo_mock, mapped_mock, acquire_mock, sync_mock): get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response() mapped_mock.return_value = True task = self._create_task( node_attrs=dict(maintenance=True, uuid=self.node.uuid)) acquire_mock.side_effect = self._get_acquire_side_effect(task) self.service._sync_power_states(self.context) get_nodeinfo_mock.assert_called_once_with( columns=self.columns, filters=self.filters) mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver) acquire_mock.assert_called_once_with(self.context, self.node.uuid, purpose=mock.ANY, shared=True) self.assertFalse(sync_mock.called) def test_node_disappears_on_acquire(self, get_nodeinfo_mock, mapped_mock, acquire_mock, sync_mock): get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response() mapped_mock.return_value = True acquire_mock.side_effect = exception.NodeNotFound(node=self.node.uuid, host='fake') self.service._sync_power_states(self.context) get_nodeinfo_mock.assert_called_once_with( columns=self.columns, filters=self.filters) mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver) acquire_mock.assert_called_once_with(self.context, self.node.uuid, purpose=mock.ANY, shared=True) self.assertFalse(sync_mock.called) def test_single_node(self, get_nodeinfo_mock, mapped_mock, acquire_mock, sync_mock): get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response() mapped_mock.return_value = True task = self._create_task(node_attrs=dict(uuid=self.node.uuid)) acquire_mock.side_effect = self._get_acquire_side_effect(task) self.service._sync_power_states(self.context) get_nodeinfo_mock.assert_called_once_with( columns=self.columns, filters=self.filters) mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver) acquire_mock.assert_called_once_with(self.context, self.node.uuid, purpose=mock.ANY, shared=True) sync_mock.assert_called_once_with(task, mock.ANY) def test__sync_power_state_multiple_nodes(self, get_nodeinfo_mock, mapped_mock, acquire_mock, sync_mock): # Create 8 nodes: # 1st node: Should acquire and try to sync # 2nd node: Not mapped to this conductor # 3rd node: In DEPLOYWAIT provision_state # 4th node: In maintenance mode # 5th node: Is in power transition # 6th node: Disappears after 
getting nodeinfo list # 7th node: Should acquire and try to sync # 8th node: do_sync_power_state raises NodeLocked nodes = [] node_attrs = {} mapped_map = {} for i in range(1, 8): attrs = {'id': i, 'uuid': uuidutils.generate_uuid()} if i == 3: attrs['provision_state'] = states.DEPLOYWAIT attrs['target_provision_state'] = states.ACTIVE elif i == 4: attrs['maintenance'] = True elif i == 5: attrs['target_power_state'] = states.POWER_ON n = self._create_node(**attrs) nodes.append(n) node_attrs[n.uuid] = attrs mapped_map[n.uuid] = False if i == 2 else True tasks = [self._create_task(node_attrs=node_attrs[x.uuid]) for x in nodes if x.id != 2] # not found during acquire (4 = index of Node6 after removing Node2) tasks[4] = exception.NodeNotFound(node=6) sync_results = [0] * 7 + [exception.NodeLocked(node=8, host='')] get_nodeinfo_mock.return_value = ( self._get_nodeinfo_list_response(nodes)) mapped_mock.side_effect = lambda x, y: mapped_map[x] acquire_mock.side_effect = self._get_acquire_side_effect(tasks) sync_mock.side_effect = sync_results with mock.patch.object(eventlet, 'sleep') as sleep_mock: self.service._sync_power_states(self.context) # Ensure we've yielded on every iteration, except for node # not mapped to this conductor self.assertEqual(len(nodes) - 1, sleep_mock.call_count) get_nodeinfo_mock.assert_called_once_with( columns=self.columns, filters=self.filters) mapped_calls = [mock.call(x.uuid, x.driver) for x in nodes] self.assertEqual(mapped_calls, mapped_mock.call_args_list) acquire_calls = [mock.call(self.context, x.uuid, purpose=mock.ANY, shared=True) for x in nodes if x.id != 2] self.assertEqual(acquire_calls, acquire_mock.call_args_list) # Nodes 1 and 7 (5 = index of Node7 after removing Node2) sync_calls = [mock.call(tasks[0], mock.ANY), mock.call(tasks[5], mock.ANY)] self.assertEqual(sync_calls, sync_mock.call_args_list) @mock.patch.object(task_manager, 'acquire') @mock.patch.object(manager.ConductorManager, '_mapped_to_this_conductor') @mock.patch.object(dbapi.IMPL, 'get_nodeinfo_list') class ManagerCheckDeployTimeoutsTestCase(mgr_utils.CommonMixIn, tests_db_base.DbTestCase): def setUp(self): super(ManagerCheckDeployTimeoutsTestCase, self).setUp() self.config(deploy_callback_timeout=300, group='conductor') self.service = manager.ConductorManager('hostname', 'test-topic') self.service.dbapi = self.dbapi self.node = self._create_node(provision_state=states.DEPLOYWAIT, target_provision_state=states.ACTIVE) self.task = self._create_task(node=self.node) self.node2 = self._create_node(provision_state=states.DEPLOYWAIT, target_provision_state=states.ACTIVE) self.task2 = self._create_task(node=self.node2) self.filters = {'reserved': False, 'maintenance': False, 'provisioned_before': 300, 'provision_state': states.DEPLOYWAIT} self.columns = ['uuid', 'driver'] def _assert_get_nodeinfo_args(self, get_nodeinfo_mock): get_nodeinfo_mock.assert_called_once_with( columns=self.columns, filters=self.filters, sort_key='provision_updated_at', sort_dir='asc') def test_disabled(self, get_nodeinfo_mock, mapped_mock, acquire_mock): self.config(deploy_callback_timeout=0, group='conductor') self.service._check_deploy_timeouts(self.context) self.assertFalse(get_nodeinfo_mock.called) self.assertFalse(mapped_mock.called) self.assertFalse(acquire_mock.called) def test_not_mapped(self, get_nodeinfo_mock, mapped_mock, acquire_mock): get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response() mapped_mock.return_value = False self.service._check_deploy_timeouts(self.context) 
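# ---------------------------------------------------------------------------
# Editorial aside: a minimal sketch of the periodic timeout-check pattern
# this test class exercises. Illustrative only, not the code in
# ironic.conductor.manager; `dbapi`, `acquire`, `spawn_worker`, `cleanup_cb`
# and `err_handler` stand in for the real collaborators, and 'deploy wait'
# is the value of states.DEPLOYWAIT.
def _sketch_check_deploy_timeouts(context, dbapi, acquire, timeout,
                                  spawn_worker, cleanup_cb, err_handler):
    if not timeout:
        return  # deploy_callback_timeout=0 disables the check (test_disabled)
    filters = {'reserved': False, 'maintenance': False,
               'provisioned_before': timeout,
               'provision_state': 'deploy wait'}
    node_list = dbapi.get_nodeinfo_list(columns=['uuid', 'driver'],
                                        filters=filters,
                                        sort_key='provision_updated_at',
                                        sort_dir='asc')
    for (uuid, driver) in node_list:
        with acquire(context, uuid) as task:
            # Re-check under the lock: the node may have moved on or been
            # put into maintenance since the DB query ran.
            if (task.node.provision_state != 'deploy wait'
                    or task.node.maintenance):
                continue
            task.process_event('fail', callback=spawn_worker,
                               call_args=(cleanup_cb, task),
                               err_handler=err_handler, target_state=None)
    # (The conductor-mapping check, NodeLocked/NodeNotFound swallowing and
    # the periodic_max_workers limit are omitted here for brevity.)
# ---------------------------------------------------------------------------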
self._assert_get_nodeinfo_args(get_nodeinfo_mock) mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver) self.assertFalse(acquire_mock.called) def test_timeout(self, get_nodeinfo_mock, mapped_mock, acquire_mock): get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response() mapped_mock.return_value = True acquire_mock.side_effect = self._get_acquire_side_effect(self.task) self.service._check_deploy_timeouts(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver) acquire_mock.assert_called_once_with(self.context, self.node.uuid, purpose=mock.ANY) self.task.process_event.assert_called_with( 'fail', callback=self.service._spawn_worker, call_args=(conductor_utils.cleanup_after_timeout, self.task), err_handler=conductor_utils.provisioning_error_handler, target_state=None) def test_acquire_node_disappears(self, get_nodeinfo_mock, mapped_mock, acquire_mock): get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response() mapped_mock.return_value = True acquire_mock.side_effect = exception.NodeNotFound(node='fake') # Exception eaten self.service._check_deploy_timeouts(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) mapped_mock.assert_called_once_with( self.node.uuid, self.node.driver) acquire_mock.assert_called_once_with(self.context, self.node.uuid, purpose=mock.ANY) self.assertFalse(self.task.spawn_after.called) def test_acquire_node_locked(self, get_nodeinfo_mock, mapped_mock, acquire_mock): get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response() mapped_mock.return_value = True acquire_mock.side_effect = exception.NodeLocked(node='fake', host='fake') # Exception eaten self.service._check_deploy_timeouts(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) mapped_mock.assert_called_once_with( self.node.uuid, self.node.driver) acquire_mock.assert_called_once_with(self.context, self.node.uuid, purpose=mock.ANY) self.assertFalse(self.task.spawn_after.called) def test_no_deploywait_after_lock(self, get_nodeinfo_mock, mapped_mock, acquire_mock): task = self._create_task( node_attrs=dict(provision_state=states.AVAILABLE, uuid=self.node.uuid)) get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response() mapped_mock.return_value = True acquire_mock.side_effect = self._get_acquire_side_effect(task) self.service._check_deploy_timeouts(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) mapped_mock.assert_called_once_with( self.node.uuid, self.node.driver) acquire_mock.assert_called_once_with(self.context, self.node.uuid, purpose=mock.ANY) self.assertFalse(task.spawn_after.called) def test_maintenance_after_lock(self, get_nodeinfo_mock, mapped_mock, acquire_mock): task = self._create_task( node_attrs=dict(provision_state=states.DEPLOYWAIT, target_provision_state=states.ACTIVE, maintenance=True, uuid=self.node.uuid)) get_nodeinfo_mock.return_value = ( self._get_nodeinfo_list_response([task.node, self.node2])) mapped_mock.return_value = True acquire_mock.side_effect = ( self._get_acquire_side_effect([task, self.task2])) self.service._check_deploy_timeouts(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) self.assertEqual([mock.call(self.node.uuid, task.node.driver), mock.call(self.node2.uuid, self.node2.driver)], mapped_mock.call_args_list) self.assertEqual([mock.call(self.context, self.node.uuid, purpose=mock.ANY), mock.call(self.context, self.node2.uuid, purpose=mock.ANY)], acquire_mock.call_args_list) # First node skipped 
self.assertFalse(task.spawn_after.called) # Second node spawned self.task2.process_event.assert_called_with( 'fail', callback=self.service._spawn_worker, call_args=(conductor_utils.cleanup_after_timeout, self.task2), err_handler=conductor_utils.provisioning_error_handler, target_state=None) def test_exiting_no_worker_avail(self, get_nodeinfo_mock, mapped_mock, acquire_mock): get_nodeinfo_mock.return_value = ( self._get_nodeinfo_list_response([self.node, self.node2])) mapped_mock.return_value = True acquire_mock.side_effect = self._get_acquire_side_effect( [(self.task, exception.NoFreeConductorWorker()), self.task2]) # Exception should be nuked self.service._check_deploy_timeouts(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) # mapped should be only called for the first node as we should # have exited the loop early due to NoFreeConductorWorker mapped_mock.assert_called_once_with( self.node.uuid, self.node.driver) acquire_mock.assert_called_once_with(self.context, self.node.uuid, purpose=mock.ANY) self.task.process_event.assert_called_with( 'fail', callback=self.service._spawn_worker, call_args=(conductor_utils.cleanup_after_timeout, self.task), err_handler=conductor_utils.provisioning_error_handler, target_state=None) def test_exiting_with_other_exception(self, get_nodeinfo_mock, mapped_mock, acquire_mock): get_nodeinfo_mock.return_value = ( self._get_nodeinfo_list_response([self.node, self.node2])) mapped_mock.return_value = True acquire_mock.side_effect = self._get_acquire_side_effect( [(self.task, exception.IronicException('foo')), self.task2]) # Should re-raise self.assertRaises(exception.IronicException, self.service._check_deploy_timeouts, self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) # mapped should be only called for the first node as we should # have exited the loop early due to unknown exception mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver) acquire_mock.assert_called_once_with(self.context, self.node.uuid, purpose=mock.ANY) self.task.process_event.assert_called_with( 'fail', callback=self.service._spawn_worker, call_args=(conductor_utils.cleanup_after_timeout, self.task), err_handler=conductor_utils.provisioning_error_handler, target_state=None) def test_worker_limit(self, get_nodeinfo_mock, mapped_mock, acquire_mock): self.config(periodic_max_workers=2, group='conductor') # Use the same nodes/tasks to make life easier in the tests # here get_nodeinfo_mock.return_value = ( self._get_nodeinfo_list_response([self.node] * 3)) mapped_mock.return_value = True acquire_mock.side_effect = ( self._get_acquire_side_effect([self.task] * 3)) self.service._check_deploy_timeouts(self.context) # Should only have ran 2. 
self.assertEqual([mock.call(self.node.uuid, self.node.driver)] * 2, mapped_mock.call_args_list) self.assertEqual([mock.call(self.context, self.node.uuid, purpose=mock.ANY)] * 2, acquire_mock.call_args_list) process_event_call = mock.call( 'fail', callback=self.service._spawn_worker, call_args=(conductor_utils.cleanup_after_timeout, self.task), err_handler=conductor_utils.provisioning_error_handler, target_state=None) self.assertEqual([process_event_call] * 2, self.task.process_event.call_args_list) @mock.patch.object(dbapi.IMPL, 'update_port') @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_address') def test_update_port_duplicate_mac(self, get_nodeinfo_mock, mapped_mock, acquire_mock, mac_update_mock, mock_up): node = utils.create_test_node(driver='fake') port = obj_utils.create_test_port(self.context, node_id=node.id) mock_up.side_effect = exception.MACAlreadyExists(mac=port.address) exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.update_port, self.context, port) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.MACAlreadyExists, exc.exc_info[0]) # ensure Neutron wasn't updated self.assertFalse(mac_update_mock.called) @mgr_utils.mock_record_keepalive class ManagerTestProperties(tests_db_base.DbTestCase): def setUp(self): super(ManagerTestProperties, self).setUp() self.service = manager.ConductorManager('test-host', 'test-topic') def _check_driver_properties(self, driver, expected): mgr_utils.mock_the_extension_manager(driver=driver) self.driver = driver_factory.get_driver(driver) self.service.init_host() properties = self.service.get_driver_properties(self.context, driver) self.assertEqual(sorted(expected), sorted(properties.keys())) def test_driver_properties_fake(self): expected = ['A1', 'A2', 'B1', 'B2'] self._check_driver_properties("fake", expected) def test_driver_properties_fake_ipmitool(self): expected = ['ipmi_address', 'ipmi_terminal_port', 'ipmi_password', 'ipmi_port', 'ipmi_priv_level', 'ipmi_username', 'ipmi_bridging', 'ipmi_transit_channel', 'ipmi_transit_address', 'ipmi_target_channel', 'ipmi_target_address', 'ipmi_local_address', 'ipmi_protocol_version', 'ipmi_force_boot_device' ] self._check_driver_properties("fake_ipmitool", expected) def test_driver_properties_fake_ipminative(self): expected = ['ipmi_address', 'ipmi_password', 'ipmi_username', 'ipmi_terminal_port', 'ipmi_force_boot_device'] self._check_driver_properties("fake_ipminative", expected) def test_driver_properties_fake_ssh(self): expected = ['ssh_address', 'ssh_username', 'ssh_virt_type', 'ssh_key_contents', 'ssh_key_filename', 'ssh_password', 'ssh_port', 'ssh_terminal_port'] self._check_driver_properties("fake_ssh", expected) def test_driver_properties_fake_pxe(self): expected = ['deploy_kernel', 'deploy_ramdisk', 'deploy_forces_oob_reboot'] self._check_driver_properties("fake_pxe", expected) def test_driver_properties_fake_seamicro(self): expected = ['seamicro_api_endpoint', 'seamicro_password', 'seamicro_server_id', 'seamicro_username', 'seamicro_api_version', 'seamicro_terminal_port'] self._check_driver_properties("fake_seamicro", expected) def test_driver_properties_fake_snmp(self): expected = ['snmp_driver', 'snmp_address', 'snmp_port', 'snmp_version', 'snmp_community', 'snmp_security', 'snmp_outlet'] self._check_driver_properties("fake_snmp", expected) def test_driver_properties_pxe_ipmitool(self): expected = ['ipmi_address', 'ipmi_terminal_port', 'ipmi_password', 'ipmi_port', 'ipmi_priv_level', 'ipmi_username', 
'ipmi_bridging', 'ipmi_transit_channel', 'ipmi_transit_address', 'ipmi_target_channel', 'ipmi_target_address', 'ipmi_local_address', 'deploy_kernel', 'deploy_ramdisk', 'ipmi_protocol_version', 'ipmi_force_boot_device', 'deploy_forces_oob_reboot'] self._check_driver_properties("pxe_ipmitool", expected) def test_driver_properties_pxe_ipminative(self): expected = ['ipmi_address', 'ipmi_password', 'ipmi_username', 'deploy_kernel', 'deploy_ramdisk', 'ipmi_terminal_port', 'ipmi_force_boot_device', 'deploy_forces_oob_reboot'] self._check_driver_properties("pxe_ipminative", expected) def test_driver_properties_pxe_ssh(self): expected = ['deploy_kernel', 'deploy_ramdisk', 'ssh_address', 'ssh_username', 'ssh_virt_type', 'ssh_key_contents', 'ssh_key_filename', 'ssh_password', 'ssh_port', 'ssh_terminal_port', 'deploy_forces_oob_reboot'] self._check_driver_properties("pxe_ssh", expected) def test_driver_properties_pxe_seamicro(self): expected = ['deploy_kernel', 'deploy_ramdisk', 'seamicro_api_endpoint', 'seamicro_password', 'seamicro_server_id', 'seamicro_username', 'seamicro_api_version', 'seamicro_terminal_port', 'deploy_forces_oob_reboot'] self._check_driver_properties("pxe_seamicro", expected) def test_driver_properties_pxe_snmp(self): expected = ['deploy_kernel', 'deploy_ramdisk', 'snmp_driver', 'snmp_address', 'snmp_port', 'snmp_version', 'snmp_community', 'snmp_security', 'snmp_outlet', 'deploy_forces_oob_reboot'] self._check_driver_properties("pxe_snmp", expected) def test_driver_properties_fake_ilo(self): expected = ['ilo_address', 'ilo_username', 'ilo_password', 'client_port', 'client_timeout', 'ilo_change_password'] self._check_driver_properties("fake_ilo", expected) def test_driver_properties_ilo_iscsi(self): expected = ['ilo_address', 'ilo_username', 'ilo_password', 'client_port', 'client_timeout', 'ilo_deploy_iso', 'console_port', 'ilo_change_password', 'deploy_forces_oob_reboot'] self._check_driver_properties("iscsi_ilo", expected) def test_driver_properties_agent_ilo(self): expected = ['ilo_address', 'ilo_username', 'ilo_password', 'client_port', 'client_timeout', 'ilo_deploy_iso', 'console_port', 'ilo_change_password', 'deploy_forces_oob_reboot'] self._check_driver_properties("agent_ilo", expected) def test_driver_properties_fail(self): mgr_utils.mock_the_extension_manager() self.driver = driver_factory.get_driver("fake") self.service.init_host() exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.get_driver_properties, self.context, "bad-driver") # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.DriverNotFound, exc.exc_info[0]) @mock.patch.object(task_manager, 'acquire') @mock.patch.object(manager.ConductorManager, '_mapped_to_this_conductor') @mock.patch.object(dbapi.IMPL, 'get_nodeinfo_list') class ManagerSyncLocalStateTestCase(mgr_utils.CommonMixIn, tests_db_base.DbTestCase): def setUp(self): super(ManagerSyncLocalStateTestCase, self).setUp() self.service = manager.ConductorManager('hostname', 'test-topic') self.service.conductor = mock.Mock() self.service.dbapi = self.dbapi self.service.ring_manager = mock.Mock() self.node = self._create_node(provision_state=states.ACTIVE, target_provision_state=states.NOSTATE) self.task = self._create_task(node=self.node) self.filters = {'reserved': False, 'maintenance': False, 'provision_state': states.ACTIVE} self.columns = ['uuid', 'driver', 'id', 'conductor_affinity'] def _assert_get_nodeinfo_args(self, get_nodeinfo_mock): get_nodeinfo_mock.assert_called_once_with( 
columns=self.columns, filters=self.filters) def test_not_mapped(self, get_nodeinfo_mock, mapped_mock, acquire_mock): get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response() mapped_mock.return_value = False self.service._sync_local_state(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver) self.assertFalse(acquire_mock.called) def test_already_mapped(self, get_nodeinfo_mock, mapped_mock, acquire_mock): # Node is already mapped to the conductor running the periodic task self.node.conductor_affinity = 123 self.service.conductor.id = 123 get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response() mapped_mock.return_value = True self.service._sync_local_state(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver) self.assertFalse(acquire_mock.called) def test_good(self, get_nodeinfo_mock, mapped_mock, acquire_mock): get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response() mapped_mock.return_value = True acquire_mock.side_effect = self._get_acquire_side_effect(self.task) self.service._sync_local_state(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver) acquire_mock.assert_called_once_with(self.context, self.node.uuid, purpose=mock.ANY) # assert spawn_after has been called self.task.spawn_after.assert_called_once_with( self.service._spawn_worker, self.service._do_takeover, self.task) def test_no_free_worker(self, get_nodeinfo_mock, mapped_mock, acquire_mock): mapped_mock.return_value = True acquire_mock.side_effect = ( self._get_acquire_side_effect([self.task] * 3)) self.task.spawn_after.side_effect = [ None, exception.NoFreeConductorWorker('error') ] # 3 nodes to be checked get_nodeinfo_mock.return_value = ( self._get_nodeinfo_list_response([self.node] * 3)) self.service._sync_local_state(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) # assert _mapped_to_this_conductor() gets called 2 times only # instead of 3. When NoFreeConductorWorker is raised the loop # should be broken expected = [mock.call(self.node.uuid, self.node.driver)] * 2 self.assertEqual(expected, mapped_mock.call_args_list) # assert acquire() gets called 2 times only instead of 3. 
When NoFreeConductorWorker is raised the loop should be broken
        expected = [mock.call(self.context, self.node.uuid,
                              purpose=mock.ANY)] * 2
        self.assertEqual(expected, acquire_mock.call_args_list)
        # assert spawn_after has been called twice
        expected = [mock.call(self.service._spawn_worker,
                              self.service._do_takeover, self.task)] * 2
        self.assertEqual(expected, self.task.spawn_after.call_args_list)

    def test_node_locked(self, get_nodeinfo_mock, mapped_mock, acquire_mock,):
        mapped_mock.return_value = True
        acquire_mock.side_effect = self._get_acquire_side_effect(
            [self.task, exception.NodeLocked('error'), self.task])
        self.task.spawn_after.side_effect = [None, None]
        # 3 nodes to be checked
        get_nodeinfo_mock.return_value = (
            self._get_nodeinfo_list_response([self.node] * 3))
        self.service._sync_local_state(self.context)
        self._assert_get_nodeinfo_args(get_nodeinfo_mock)
        # assert _mapped_to_this_conductor() gets called 3 times
        expected = [mock.call(self.node.uuid, self.node.driver)] * 3
        self.assertEqual(expected, mapped_mock.call_args_list)
        # assert acquire() gets called 3 times
        expected = [mock.call(self.context, self.node.uuid,
                              purpose=mock.ANY)] * 3
        self.assertEqual(expected, acquire_mock.call_args_list)
        # assert spawn_after has been called only 2 times
        expected = [mock.call(self.service._spawn_worker,
                              self.service._do_takeover, self.task)] * 2
        self.assertEqual(expected, self.task.spawn_after.call_args_list)

    def test_worker_limit(self, get_nodeinfo_mock, mapped_mock, acquire_mock):
        # Limit to only 1 worker
        self.config(periodic_max_workers=1, group='conductor')
        mapped_mock.return_value = True
        acquire_mock.side_effect = (
            self._get_acquire_side_effect([self.task] * 3))
        self.task.spawn_after.side_effect = [None] * 3
        # 3 nodes to be checked
        get_nodeinfo_mock.return_value = (
            self._get_nodeinfo_list_response([self.node] * 3))
        self.service._sync_local_state(self.context)
        self._assert_get_nodeinfo_args(get_nodeinfo_mock)
        # assert _mapped_to_this_conductor() gets called only once
        # because of the worker limit
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver)
        # assert acquire() gets called only once because of the worker limit
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY)
        # assert spawn_after has been called
        self.task.spawn_after.assert_called_once_with(
            self.service._spawn_worker, self.service._do_takeover, self.task)


@mock.patch.object(swift, 'SwiftAPI')
class StoreConfigDriveTestCase(tests_base.TestCase):

    def setUp(self):
        super(StoreConfigDriveTestCase, self).setUp()
        self.node = obj_utils.get_test_node(self.context, driver='fake',
                                            instance_info=None)

    def test_store_configdrive(self, mock_swift):
        manager._store_configdrive(self.node, 'foo')
        expected_instance_info = {'configdrive': 'foo'}
        self.assertEqual(expected_instance_info, self.node.instance_info)
        self.assertFalse(mock_swift.called)

    def test_store_configdrive_swift(self, mock_swift):
        container_name = 'foo_container'
        timeout = 123
        expected_obj_name = 'configdrive-%s' % self.node.uuid
        expected_obj_header = {'X-Delete-After': timeout}
        expected_instance_info = {'configdrive': 'http://1.2.3.4'}

        # set configs and mocks
        CONF.set_override('configdrive_use_swift', True, group='conductor')
        CONF.set_override('configdrive_swift_container', container_name,
                          group='conductor')
        CONF.set_override('deploy_callback_timeout', timeout,
                          group='conductor')
        mock_swift.return_value.get_temp_url.return_value = 'http://1.2.3.4'
        manager._store_configdrive(self.node, b'foo')
        mock_swift.assert_called_once_with()
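# ---------------------------------------------------------------------------
# Editorial aside: what the StoreConfigDrive tests pin down, sketched as
# plain Python. Illustrative only -- not ironic.conductor.manager's
# _store_configdrive; `swift_api` stands in for an ironic.common.swift
# SwiftAPI instance.
def _sketch_store_configdrive(node, configdrive, use_swift, container,
                              delete_after, swift_api):
    if use_swift:
        obj_name = 'configdrive-%s' % node.uuid
        swift_api.create_object(container, obj_name, configdrive,
                                object_headers={'X-Delete-After':
                                                delete_after})
        # The node then carries a temp URL instead of the blob itself.
        configdrive = swift_api.get_temp_url(container, obj_name,
                                             delete_after)
    node.instance_info = {'configdrive': configdrive}
# ---------------------------------------------------------------------------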
mock_swift.return_value.create_object.assert_called_once_with( container_name, expected_obj_name, mock.ANY, object_headers=expected_obj_header) mock_swift.return_value.get_temp_url.assert_called_once_with( container_name, expected_obj_name, timeout) self.assertEqual(expected_instance_info, self.node.instance_info) @mgr_utils.mock_record_keepalive class NodeInspectHardware(mgr_utils.ServiceSetUpMixin, tests_db_base.DbTestCase): @mock.patch('ironic.drivers.modules.fake.FakeInspect.inspect_hardware') def test_inspect_hardware_ok(self, mock_inspect): self._start_service() node = obj_utils.create_test_node(self.context, driver='fake', provision_state=states.INSPECTING) task = task_manager.TaskManager(self.context, node.uuid) mock_inspect.return_value = states.MANAGEABLE manager._do_inspect_hardware(task) node.refresh() self.assertEqual(states.MANAGEABLE, node.provision_state) self.assertEqual(states.NOSTATE, node.target_provision_state) self.assertIsNone(node.last_error) mock_inspect.assert_called_once_with(mock.ANY) @mock.patch('ironic.drivers.modules.fake.FakeInspect.inspect_hardware') def test_inspect_hardware_return_inspecting(self, mock_inspect): self._start_service() node = obj_utils.create_test_node(self.context, driver='fake', provision_state=states.INSPECTING) task = task_manager.TaskManager(self.context, node.uuid) mock_inspect.return_value = states.INSPECTING manager._do_inspect_hardware(task) node.refresh() self.assertEqual(states.INSPECTING, node.provision_state) self.assertEqual(states.NOSTATE, node.target_provision_state) self.assertIsNone(node.last_error) mock_inspect.assert_called_once_with(mock.ANY) @mock.patch.object(manager, 'LOG') @mock.patch('ironic.drivers.modules.fake.FakeInspect.inspect_hardware') def test_inspect_hardware_return_other_state(self, mock_inspect, log_mock): self._start_service() node = obj_utils.create_test_node(self.context, driver='fake', provision_state=states.INSPECTING) task = task_manager.TaskManager(self.context, node.uuid) mock_inspect.return_value = None self.assertRaises(exception.HardwareInspectionFailure, manager._do_inspect_hardware, task) node.refresh() self.assertEqual(states.INSPECTFAIL, node.provision_state) self.assertEqual(states.MANAGEABLE, node.target_provision_state) self.assertIsNotNone(node.last_error) mock_inspect.assert_called_once_with(mock.ANY) self.assertTrue(log_mock.error.called) def test__check_inspect_timeouts(self): self._start_service() CONF.set_override('inspect_timeout', 1, group='conductor') node = obj_utils.create_test_node( self.context, driver='fake', provision_state=states.INSPECTING, target_provision_state=states.MANAGEABLE, provision_updated_at=datetime.datetime(2000, 1, 1, 0, 0), inspection_started_at=datetime.datetime(2000, 1, 1, 0, 0)) self.service._check_inspect_timeouts(self.context) self._stop_service() node.refresh() self.assertEqual(states.INSPECTFAIL, node.provision_state) self.assertEqual(states.MANAGEABLE, node.target_provision_state) self.assertIsNotNone(node.last_error) @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker') def test_inspect_hardware_worker_pool_full(self, mock_spawn): prv_state = states.MANAGEABLE tgt_prv_state = states.NOSTATE node = obj_utils.create_test_node(self.context, provision_state=prv_state, target_provision_state=tgt_prv_state, last_error=None, driver='fake') self._start_service() mock_spawn.side_effect = exception.NoFreeConductorWorker() exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.inspect_hardware, self.context, node.uuid) # 
Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NoFreeConductorWorker, exc.exc_info[0]) self._stop_service() node.refresh() # Make sure things were rolled back self.assertEqual(prv_state, node.provision_state) self.assertEqual(tgt_prv_state, node.target_provision_state) self.assertIsNotNone(node.last_error) # Verify reservation has been cleared. self.assertIsNone(node.reservation) def _test_inspect_hardware_validate_fail(self, mock_validate): mock_validate.side_effect = exception.InvalidParameterValue('error') node = obj_utils.create_test_node(self.context, driver='fake') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.inspect_hardware, self.context, node.uuid) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.HardwareInspectionFailure, exc.exc_info[0]) # This is a sync operation last_error should be None. self.assertIsNone(node.last_error) # Verify reservation has been cleared. self.assertIsNone(node.reservation) @mock.patch('ironic.drivers.modules.fake.FakeInspect.validate') def test_inspect_hardware_validate_fail(self, mock_validate): self._test_inspect_hardware_validate_fail(mock_validate) @mock.patch('ironic.drivers.modules.fake.FakePower.validate') def test_inspect_hardware_power_validate_fail(self, mock_validate): self._test_inspect_hardware_validate_fail(mock_validate) @mock.patch('ironic.drivers.modules.fake.FakeInspect.inspect_hardware') def test_inspect_hardware_raises_error(self, mock_inspect): self._start_service() mock_inspect.side_effect = exception.HardwareInspectionFailure('test') state = states.MANAGEABLE node = obj_utils.create_test_node(self.context, driver='fake', provision_state=states.INSPECTING, target_provision_state=state) task = task_manager.TaskManager(self.context, node.uuid) self.assertRaises(exception.HardwareInspectionFailure, manager._do_inspect_hardware, task) node.refresh() self.assertEqual(states.INSPECTFAIL, node.provision_state) self.assertEqual(states.MANAGEABLE, node.target_provision_state) self.assertIsNotNone(node.last_error) self.assertTrue(mock_inspect.called) @mock.patch.object(task_manager, 'acquire') @mock.patch.object(manager.ConductorManager, '_mapped_to_this_conductor') @mock.patch.object(dbapi.IMPL, 'get_nodeinfo_list') class ManagerCheckInspectTimeoutsTestCase(mgr_utils.CommonMixIn, tests_db_base.DbTestCase): def setUp(self): super(ManagerCheckInspectTimeoutsTestCase, self).setUp() self.config(inspect_timeout=300, group='conductor') self.service = manager.ConductorManager('hostname', 'test-topic') self.service.dbapi = self.dbapi self.node = self._create_node(provision_state=states.INSPECTING, target_provision_state=states.MANAGEABLE) self.task = self._create_task(node=self.node) self.node2 = self._create_node( provision_state=states.INSPECTING, target_provision_state=states.MANAGEABLE) self.task2 = self._create_task(node=self.node2) self.filters = {'reserved': False, 'inspection_started_before': 300, 'provision_state': states.INSPECTING} self.columns = ['uuid', 'driver'] def _assert_get_nodeinfo_args(self, get_nodeinfo_mock): get_nodeinfo_mock.assert_called_once_with( sort_dir='asc', columns=self.columns, filters=self.filters, sort_key='inspection_started_at') def test__check_inspect_timeouts_disabled(self, get_nodeinfo_mock, mapped_mock, acquire_mock): self.config(inspect_timeout=0, group='conductor') self.service._check_inspect_timeouts(self.context) self.assertFalse(get_nodeinfo_mock.called) self.assertFalse(mapped_mock.called) 
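# Editorial aside: this class mirrors ManagerCheckDeployTimeoutsTestCase for
# inspection -- the same periodic pattern, but filtering on INSPECTING nodes
# whose inspection_started_at is older than [conductor]inspect_timeout,
# sorting on inspection_started_at, and failing stragglers with a bare
# process_event('fail', target_state=None).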
self.assertFalse(acquire_mock.called) def test__check_inspect_timeouts_not_mapped(self, get_nodeinfo_mock, mapped_mock, acquire_mock): get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response() mapped_mock.return_value = False self.service._check_inspect_timeouts(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver) self.assertFalse(acquire_mock.called) def test__check_inspect_timeout(self, get_nodeinfo_mock, mapped_mock, acquire_mock): get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response() mapped_mock.return_value = True acquire_mock.side_effect = self._get_acquire_side_effect(self.task) self.service._check_inspect_timeouts(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver) acquire_mock.assert_called_once_with(self.context, self.node.uuid, purpose=mock.ANY) self.task.process_event.assert_called_with('fail', target_state=None) def test__check_inspect_timeouts_acquire_node_disappears(self, get_nodeinfo_mock, mapped_mock, acquire_mock): get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response() mapped_mock.return_value = True acquire_mock.side_effect = exception.NodeNotFound(node='fake') # Exception eaten self.service._check_inspect_timeouts(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver) acquire_mock.assert_called_once_with(self.context, self.node.uuid, purpose=mock.ANY) self.assertFalse(self.task.process_event.called) def test__check_inspect_timeouts_acquire_node_locked(self, get_nodeinfo_mock, mapped_mock, acquire_mock): get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response() mapped_mock.return_value = True acquire_mock.side_effect = exception.NodeLocked(node='fake', host='fake') # Exception eaten self.service._check_inspect_timeouts(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver) acquire_mock.assert_called_once_with(self.context, self.node.uuid, purpose=mock.ANY) self.assertFalse(self.task.process_event.called) def test__check_inspect_timeouts_no_acquire_after_lock(self, get_nodeinfo_mock, mapped_mock, acquire_mock): task = self._create_task( node_attrs=dict(provision_state=states.AVAILABLE, uuid=self.node.uuid)) get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response() mapped_mock.return_value = True acquire_mock.side_effect = self._get_acquire_side_effect(task) self.service._check_inspect_timeouts(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) mapped_mock.assert_called_once_with( self.node.uuid, self.node.driver) acquire_mock.assert_called_once_with(self.context, self.node.uuid, purpose=mock.ANY) self.assertFalse(task.process_event.called) def test__check_inspect_timeouts_to_maintenance_after_lock( self, get_nodeinfo_mock, mapped_mock, acquire_mock): task = self._create_task( node_attrs=dict(provision_state=states.INSPECTING, target_provision_state=states.MANAGEABLE, maintenance=True, uuid=self.node.uuid)) get_nodeinfo_mock.return_value = ( self._get_nodeinfo_list_response([task.node, self.node2])) mapped_mock.return_value = True acquire_mock.side_effect = ( self._get_acquire_side_effect([task, self.task2])) self.service._check_inspect_timeouts(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) self.assertEqual([mock.call(self.node.uuid, task.node.driver), 
mock.call(self.node2.uuid, self.node2.driver)], mapped_mock.call_args_list) self.assertEqual([mock.call(self.context, self.node.uuid, purpose=mock.ANY), mock.call(self.context, self.node2.uuid, purpose=mock.ANY)], acquire_mock.call_args_list) # First node skipped self.assertFalse(task.process_event.called) # Second node spawned self.task2.process_event.assert_called_with('fail', target_state=None) def test__check_inspect_timeouts_exiting_no_worker_avail( self, get_nodeinfo_mock, mapped_mock, acquire_mock): get_nodeinfo_mock.return_value = ( self._get_nodeinfo_list_response([self.node, self.node2])) mapped_mock.return_value = True acquire_mock.side_effect = self._get_acquire_side_effect( [(self.task, exception.NoFreeConductorWorker()), self.task2]) # Exception should be nuked self.service._check_inspect_timeouts(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) # mapped should be only called for the first node as we should # have exited the loop early due to NoFreeConductorWorker mapped_mock.assert_called_once_with( self.node.uuid, self.node.driver) acquire_mock.assert_called_once_with(self.context, self.node.uuid, purpose=mock.ANY) self.task.process_event.assert_called_with('fail', target_state=None) def test__check_inspect_timeouts_exit_with_other_exception( self, get_nodeinfo_mock, mapped_mock, acquire_mock): get_nodeinfo_mock.return_value = ( self._get_nodeinfo_list_response([self.node, self.node2])) mapped_mock.return_value = True acquire_mock.side_effect = self._get_acquire_side_effect( [(self.task, exception.IronicException('foo')), self.task2]) # Should re-raise self.assertRaises(exception.IronicException, self.service._check_inspect_timeouts, self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) # mapped should be only called for the first node as we should # have exited the loop early due to unknown exception mapped_mock.assert_called_once_with( self.node.uuid, self.node.driver) acquire_mock.assert_called_once_with(self.context, self.node.uuid, purpose=mock.ANY) self.task.process_event.assert_called_with('fail', target_state=None) def test__check_inspect_timeouts_worker_limit(self, get_nodeinfo_mock, mapped_mock, acquire_mock): self.config(periodic_max_workers=2, group='conductor') # Use the same nodes/tasks to make life easier in the tests # here get_nodeinfo_mock.return_value = ( self._get_nodeinfo_list_response([self.node] * 3)) mapped_mock.return_value = True acquire_mock.side_effect = ( self._get_acquire_side_effect([self.task] * 3)) self.service._check_inspect_timeouts(self.context) # Should only have ran 2. 
self.assertEqual([mock.call(self.node.uuid, self.node.driver)] * 2, mapped_mock.call_args_list) self.assertEqual([mock.call(self.context, self.node.uuid, purpose=mock.ANY)] * 2, acquire_mock.call_args_list) process_event_call = mock.call('fail', target_state=None) self.assertEqual([process_event_call] * 2, self.task.process_event.call_args_list) @mgr_utils.mock_record_keepalive class DestroyPortTestCase(mgr_utils.ServiceSetUpMixin, tests_db_base.DbTestCase): def test_destroy_port(self): node = obj_utils.create_test_node(self.context, driver='fake') port = obj_utils.create_test_port(self.context, node_id=node.id) self.service.destroy_port(self.context, port) self.assertRaises(exception.PortNotFound, port.refresh) def test_destroy_port_node_locked(self): node = obj_utils.create_test_node(self.context, driver='fake', reservation='fake-reserv') port = obj_utils.create_test_port(self.context, node_id=node.id) exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.destroy_port, self.context, port) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NodeLocked, exc.exc_info[0]) @mgr_utils.mock_record_keepalive class DestroyPortgroupTestCase(mgr_utils.ServiceSetUpMixin, tests_db_base.DbTestCase): def test_destroy_portgroup(self): node = obj_utils.create_test_node(self.context, driver='fake') portgroup = obj_utils.create_test_portgroup(self.context, node_id=node.id) self.service.destroy_portgroup(self.context, portgroup) self.assertRaises(exception.PortgroupNotFound, portgroup.refresh) def test_destroy_portgroup_node_locked(self): node = obj_utils.create_test_node(self.context, driver='fake', reservation='fake-reserv') portgroup = obj_utils.create_test_portgroup(self.context, node_id=node.id) exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.destroy_portgroup, self.context, portgroup) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NodeLocked, exc.exc_info[0]) @mgr_utils.mock_record_keepalive @mock.patch.object(manager.ConductorManager, '_fail_if_in_state') @mock.patch.object(manager.ConductorManager, '_mapped_to_this_conductor') @mock.patch.object(dbapi.IMPL, 'get_offline_conductors') class ManagerCheckDeployingStatusTestCase(mgr_utils.ServiceSetUpMixin, tests_db_base.DbTestCase): def setUp(self): super(ManagerCheckDeployingStatusTestCase, self).setUp() self._start_service() self.node = obj_utils.create_test_node( self.context, id=1, uuid=uuidutils.generate_uuid(), driver='fake', provision_state=states.DEPLOYING, target_provision_state=states.DEPLOYDONE, reservation='fake-conductor') # create a second node in a different state to test the # filtering nodes in DEPLOYING state obj_utils.create_test_node( self.context, id=10, uuid=uuidutils.generate_uuid(), driver='fake', provision_state=states.AVAILABLE, target_provision_state=states.NOSTATE) self.expected_filter = { 'provision_state': 'deploying', 'reserved': False, 'maintenance': False} def test__check_deploying_status(self, mock_off_cond, mock_mapped, mock_fail_if): mock_off_cond.return_value = ['fake-conductor'] self.service._check_deploying_status(self.context) self.node.refresh() mock_off_cond.assert_called_once_with() mock_mapped.assert_called_once_with(self.node.uuid, 'fake') mock_fail_if.assert_called_once_with( mock.ANY, {'id': self.node.id}, states.DEPLOYING, 'provision_updated_at', callback_method=conductor_utils.cleanup_after_timeout, err_handler=conductor_utils.provisioning_error_handler) # assert node was released 
self.assertIsNone(self.node.reservation) def test__check_deploying_status_alive(self, mock_off_cond, mock_mapped, mock_fail_if): mock_off_cond.return_value = [] self.service._check_deploying_status(self.context) self.node.refresh() mock_off_cond.assert_called_once_with() self.assertFalse(mock_mapped.called) self.assertFalse(mock_fail_if.called) # assert node still locked self.assertIsNotNone(self.node.reservation) @mock.patch.object(objects.Node, 'release') def test__check_deploying_status_release_exceptions_skipping( self, mock_release, mock_off_cond, mock_mapped, mock_fail_if): mock_off_cond.return_value = ['fake-conductor'] # Add another node so we can check both exceptions node2 = obj_utils.create_test_node( self.context, id=2, uuid=uuidutils.generate_uuid(), driver='fake', provision_state=states.DEPLOYING, target_provision_state=states.DEPLOYDONE, reservation='fake-conductor') mock_mapped.return_value = True mock_release.side_effect = iter([exception.NodeNotFound('not found'), exception.NodeLocked('locked')]) self.service._check_deploying_status(self.context) self.node.refresh() mock_off_cond.assert_called_once_with() expected_calls = [mock.call(self.node.uuid, 'fake'), mock.call(node2.uuid, 'fake')] mock_mapped.assert_has_calls(expected_calls) # Assert we skipped and didn't try to call _fail_if_in_state self.assertFalse(mock_fail_if.called) @mock.patch.object(objects.Node, 'release') def test__check_deploying_status_release_node_not_locked( self, mock_release, mock_off_cond, mock_mapped, mock_fail_if): mock_off_cond.return_value = ['fake-conductor'] mock_mapped.return_value = True mock_release.side_effect = iter([ exception.NodeNotLocked('not locked')]) self.service._check_deploying_status(self.context) self.node.refresh() mock_off_cond.assert_called_once_with() mock_mapped.assert_called_once_with(self.node.uuid, 'fake') mock_fail_if.assert_called_once_with( mock.ANY, {'id': self.node.id}, states.DEPLOYING, 'provision_updated_at', callback_method=conductor_utils.cleanup_after_timeout, err_handler=conductor_utils.provisioning_error_handler) class TestIndirectionApiConductor(tests_db_base.DbTestCase): def setUp(self): super(TestIndirectionApiConductor, self).setUp() self.conductor = manager.ConductorManager('test-host', 'test-topic') def _test_object_action(self, is_classmethod, raise_exception, return_object=False): @obj_base.IronicObjectRegistry.register class TestObject(obj_base.IronicObject): context = self.context def foo(self, context, raise_exception=False, return_object=False): if raise_exception: raise Exception('test') elif return_object: return obj else: return 'test' @classmethod def bar(cls, context, raise_exception=False, return_object=False): if raise_exception: raise Exception('test') elif return_object: return obj else: return 'test' obj = TestObject(self.context) if is_classmethod: versions = ovo_base.obj_tree_get_versions(TestObject.obj_name()) result = self.conductor.object_class_action_versions( self.context, TestObject.obj_name(), 'bar', versions, tuple(), {'raise_exception': raise_exception, 'return_object': return_object}) else: updates, result = self.conductor.object_action( self.context, obj, 'foo', tuple(), {'raise_exception': raise_exception, 'return_object': return_object}) if return_object: self.assertEqual(obj, result) else: self.assertEqual('test', result) def test_object_action(self): self._test_object_action(False, False) def test_object_action_on_raise(self): self.assertRaises(messaging.ExpectedException, self._test_object_action, False, True) def 
test_object_action_on_object(self): self._test_object_action(False, False, True) def test_object_class_action(self): self._test_object_action(True, False) def test_object_class_action_on_raise(self): self.assertRaises(messaging.ExpectedException, self._test_object_action, True, True) def test_object_class_action_on_object(self): self._test_object_action(True, False, False) def test_object_action_copies_object(self): @obj_base.IronicObjectRegistry.register class TestObject(obj_base.IronicObject): fields = {'dict': fields.DictOfStringsField()} def touch_dict(self, context): self.dict['foo'] = 'bar' self.obj_reset_changes() obj = TestObject(self.context) obj.dict = {} obj.obj_reset_changes() updates, result = self.conductor.object_action( self.context, obj, 'touch_dict', tuple(), {}) # NOTE(danms): If conductor did not properly copy the object, then # the new and reference copies of the nested dict object will be # the same, and thus 'dict' will not be reported as changed self.assertIn('dict', updates) self.assertEqual({'foo': 'bar'}, updates['dict']) def test_object_backport_versions(self): fake_backported_obj = 'fake-backported-obj' obj_name = 'fake-obj' test_obj = mock.Mock() test_obj.obj_name.return_value = obj_name test_obj.obj_to_primitive.return_value = fake_backported_obj fake_version_manifest = {obj_name: '1.0'} result = self.conductor.object_backport_versions( self.context, test_obj, fake_version_manifest) self.assertEqual(result, fake_backported_obj) test_obj.obj_to_primitive.assert_called_once_with( target_version='1.0', version_manifest=fake_version_manifest) @mgr_utils.mock_record_keepalive class DoNodeTakeOverTestCase(mgr_utils.ServiceSetUpMixin, tests_db_base.DbTestCase): @mock.patch('ironic.drivers.modules.fake.FakeConsole.start_console') @mock.patch('ironic.drivers.modules.fake.FakeDeploy.take_over') @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare') def test__do_takeover(self, mock_prepare, mock_take_over, mock_start_console): self._start_service() node = obj_utils.create_test_node(self.context, driver='fake') task = task_manager.TaskManager(self.context, node.uuid) self.service._do_takeover(task) node.refresh() self.assertIsNone(node.last_error) self.assertFalse(node.console_enabled) mock_prepare.assert_called_once_with(mock.ANY) mock_take_over.assert_called_once_with(mock.ANY) self.assertFalse(mock_start_console.called) @mock.patch('ironic.drivers.modules.fake.FakeConsole.start_console') @mock.patch('ironic.drivers.modules.fake.FakeDeploy.take_over') @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare') def test__do_takeover_with_console_enabled(self, mock_prepare, mock_take_over, mock_start_console): self._start_service() node = obj_utils.create_test_node(self.context, driver='fake', console_enabled=True) task = task_manager.TaskManager(self.context, node.uuid) self.service._do_takeover(task) node.refresh() self.assertIsNone(node.last_error) self.assertTrue(node.console_enabled) mock_prepare.assert_called_once_with(mock.ANY) mock_take_over.assert_called_once_with(mock.ANY) mock_start_console.assert_called_once_with(mock.ANY) @mock.patch('ironic.drivers.modules.fake.FakeConsole.start_console') @mock.patch('ironic.drivers.modules.fake.FakeDeploy.take_over') @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare') def test__do_takeover_with_console_exception(self, mock_prepare, mock_take_over, mock_start_console): self._start_service() mock_start_console.side_effect = Exception() node = obj_utils.create_test_node(self.context, driver='fake', 
                                          console_enabled=True)
        task = task_manager.TaskManager(self.context, node.uuid)
        self.service._do_takeover(task)
        node.refresh()
        self.assertIsNotNone(node.last_error)
        self.assertFalse(node.console_enabled)
        mock_prepare.assert_called_once_with(mock.ANY)
        mock_take_over.assert_called_once_with(mock.ANY)
        mock_start_console.assert_called_once_with(mock.ANY)

ironic-5.1.0/ironic/tests/unit/conductor/test__mgr_utils.py
# coding=utf-8

# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Tests for Ironic Manager test utils."""

from ironic.tests import base
from ironic.tests.unit.conductor import mgr_utils


class UtilsTestCase(base.TestCase):

    def test_fails_to_load_extension(self):
        self.assertRaises(AttributeError,
                          mgr_utils.mock_the_extension_manager,
                          'fake',
                          'bad.namespace')
        self.assertRaises(AttributeError,
                          mgr_utils.mock_the_extension_manager,
                          'no-such-driver',
                          'ironic.drivers')

    def test_get_mockable_ext_mgr(self):
        (mgr, ext) = mgr_utils.mock_the_extension_manager('fake',
                                                          'ironic.drivers')
        # confirm that stevedore did not scan the actual entrypoints
        self.assertNotEqual(mgr._extension_manager.namespace,
                            'ironic.drivers')
        # confirm mgr has only one extension
        self.assertEqual(1, len(mgr._extension_manager.extensions))
        # confirm that we got a reference to the extension in this manager
        self.assertEqual(ext, mgr._extension_manager.extensions[0])
        # confirm that it is the "fake" driver we asked for
        self.assertEqual("fake = ironic.drivers.fake:FakeDriver",
                         "%s" % ext.entry_point)
        # Confirm driver is loaded
        self.assertIn('fake', mgr.names)

ironic-5.1.0/ironic/tests/unit/conductor/test_rpcapi.py
# coding=utf-8

# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Unit Tests for :py:class:`ironic.conductor.rpcapi.ConductorAPI`.
""" import copy import mock from oslo_config import cfg import oslo_messaging as messaging from oslo_messaging import _utils as messaging_utils from ironic.common import boot_devices from ironic.common import exception from ironic.common import states from ironic.conductor import manager as conductor_manager from ironic.conductor import rpcapi as conductor_rpcapi from ironic import objects from ironic.tests import base as tests_base from ironic.tests.unit.db import base from ironic.tests.unit.db import utils as dbutils CONF = cfg.CONF class ConductorRPCAPITestCase(tests_base.TestCase): def test_versions_in_sync(self): self.assertEqual( conductor_manager.ConductorManager.RPC_API_VERSION, conductor_rpcapi.ConductorAPI.RPC_API_VERSION) class RPCAPITestCase(base.DbTestCase): def setUp(self): super(RPCAPITestCase, self).setUp() self.fake_node = dbutils.get_test_node(driver='fake-driver') self.fake_node_obj = objects.Node._from_db_object( objects.Node(self.context), self.fake_node) self.fake_portgroup = dbutils.get_test_portgroup() def test_serialized_instance_has_uuid(self): self.assertTrue('uuid' in self.fake_node) def test_get_topic_for_known_driver(self): CONF.set_override('host', 'fake-host') self.dbapi.register_conductor({'hostname': 'fake-host', 'drivers': ['fake-driver']}) rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic') expected_topic = 'fake-topic.fake-host' self.assertEqual(expected_topic, rpcapi.get_topic_for(self.fake_node_obj)) def test_get_topic_for_unknown_driver(self): CONF.set_override('host', 'fake-host') self.dbapi.register_conductor({'hostname': 'fake-host', 'drivers': ['other-driver']}) rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic') self.assertRaises(exception.NoValidHost, rpcapi.get_topic_for, self.fake_node_obj) def test_get_topic_doesnt_cache(self): CONF.set_override('host', 'fake-host') rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic') self.assertRaises(exception.NoValidHost, rpcapi.get_topic_for, self.fake_node_obj) self.dbapi.register_conductor({'hostname': 'fake-host', 'drivers': ['fake-driver']}) rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic') expected_topic = 'fake-topic.fake-host' self.assertEqual(expected_topic, rpcapi.get_topic_for(self.fake_node_obj)) def test_get_topic_for_driver_known_driver(self): CONF.set_override('host', 'fake-host') self.dbapi.register_conductor({ 'hostname': 'fake-host', 'drivers': ['fake-driver'], }) rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic') self.assertEqual('fake-topic.fake-host', rpcapi.get_topic_for_driver('fake-driver')) def test_get_topic_for_driver_unknown_driver(self): CONF.set_override('host', 'fake-host') self.dbapi.register_conductor({ 'hostname': 'fake-host', 'drivers': ['other-driver'], }) rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic') self.assertRaises(exception.DriverNotFound, rpcapi.get_topic_for_driver, 'fake-driver') def test_get_topic_for_driver_doesnt_cache(self): CONF.set_override('host', 'fake-host') rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic') self.assertRaises(exception.DriverNotFound, rpcapi.get_topic_for_driver, 'fake-driver') self.dbapi.register_conductor({ 'hostname': 'fake-host', 'drivers': ['fake-driver'], }) rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic') self.assertEqual('fake-topic.fake-host', rpcapi.get_topic_for_driver('fake-driver')) def _test_rpcapi(self, method, rpc_method, **kwargs): rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic') expected_retval = 'hello world' if rpc_method == 'call' else 
None expected_topic = 'fake-topic' if 'host' in kwargs: expected_topic += ".%s" % kwargs['host'] target = { "topic": expected_topic, "version": kwargs.pop('version', rpcapi.RPC_API_VERSION) } expected_msg = copy.deepcopy(kwargs) self.fake_args = None self.fake_kwargs = None def _fake_can_send_version_method(version): return messaging_utils.version_is_compatible( rpcapi.RPC_API_VERSION, version) def _fake_prepare_method(*args, **kwargs): for kwd in kwargs: self.assertEqual(kwargs[kwd], target[kwd]) return rpcapi.client def _fake_rpc_method(*args, **kwargs): self.fake_args = args self.fake_kwargs = kwargs if expected_retval: return expected_retval with mock.patch.object(rpcapi.client, "can_send_version") as mock_can_send_version: mock_can_send_version.side_effect = _fake_can_send_version_method with mock.patch.object(rpcapi.client, "prepare") as mock_prepared: mock_prepared.side_effect = _fake_prepare_method with mock.patch.object(rpcapi.client, rpc_method) as mock_method: mock_method.side_effect = _fake_rpc_method retval = getattr(rpcapi, method)(self.context, **kwargs) self.assertEqual(retval, expected_retval) expected_args = [self.context, method, expected_msg] for arg, expected_arg in zip(self.fake_args, expected_args): self.assertEqual(arg, expected_arg) def test_update_node(self): self._test_rpcapi('update_node', 'call', version='1.1', node_obj=self.fake_node) def test_change_node_power_state(self): self._test_rpcapi('change_node_power_state', 'call', version='1.6', node_id=self.fake_node['uuid'], new_state=states.POWER_ON) def test_vendor_passthru(self): self._test_rpcapi('vendor_passthru', 'call', version='1.20', node_id=self.fake_node['uuid'], driver_method='test-driver-method', http_method='test-http-method', info={"test_info": "test_value"}) def test_driver_vendor_passthru(self): self._test_rpcapi('driver_vendor_passthru', 'call', version='1.20', driver_name='test-driver-name', driver_method='test-driver-method', http_method='test-http-method', info={'test_key': 'test_value'}) def test_do_node_deploy(self): self._test_rpcapi('do_node_deploy', 'call', version='1.22', node_id=self.fake_node['uuid'], rebuild=False, configdrive=None) def test_do_node_tear_down(self): self._test_rpcapi('do_node_tear_down', 'call', version='1.6', node_id=self.fake_node['uuid']) def test_validate_driver_interfaces(self): self._test_rpcapi('validate_driver_interfaces', 'call', version='1.5', node_id=self.fake_node['uuid']) def test_destroy_node(self): self._test_rpcapi('destroy_node', 'call', version='1.9', node_id=self.fake_node['uuid']) def test_get_console_information(self): self._test_rpcapi('get_console_information', 'call', version='1.11', node_id=self.fake_node['uuid']) def test_set_console_mode(self): self._test_rpcapi('set_console_mode', 'call', version='1.11', node_id=self.fake_node['uuid'], enabled=True) def test_update_port(self): fake_port = dbutils.get_test_port() self._test_rpcapi('update_port', 'call', version='1.13', port_obj=fake_port) def test_get_driver_properties(self): self._test_rpcapi('get_driver_properties', 'call', version='1.16', driver_name='fake-driver') def test_set_boot_device(self): self._test_rpcapi('set_boot_device', 'call', version='1.17', node_id=self.fake_node['uuid'], device=boot_devices.DISK, persistent=False) def test_get_boot_device(self): self._test_rpcapi('get_boot_device', 'call', version='1.17', node_id=self.fake_node['uuid']) def test_get_supported_boot_devices(self): self._test_rpcapi('get_supported_boot_devices', 'call', version='1.17', 
node_id=self.fake_node['uuid']) def test_get_node_vendor_passthru_methods(self): self._test_rpcapi('get_node_vendor_passthru_methods', 'call', version='1.21', node_id=self.fake_node['uuid']) def test_get_driver_vendor_passthru_methods(self): self._test_rpcapi('get_driver_vendor_passthru_methods', 'call', version='1.21', driver_name='fake-driver') def test_inspect_hardware(self): self._test_rpcapi('inspect_hardware', 'call', version='1.24', node_id=self.fake_node['uuid']) def test_continue_node_clean(self): self._test_rpcapi('continue_node_clean', 'cast', version='1.27', node_id=self.fake_node['uuid']) def test_get_raid_logical_disk_properties(self): self._test_rpcapi('get_raid_logical_disk_properties', 'call', version='1.30', driver_name='fake-driver') def test_set_target_raid_config(self): self._test_rpcapi('set_target_raid_config', 'call', version='1.30', node_id=self.fake_node['uuid'], target_raid_config='config') def test_do_node_clean(self): clean_steps = [{'step': 'upgrade_firmware', 'interface': 'deploy'}, {'step': 'upgrade_bmc', 'interface': 'management'}] self._test_rpcapi('do_node_clean', 'call', version='1.32', node_id=self.fake_node['uuid'], clean_steps=clean_steps) def test_object_action(self): self._test_rpcapi('object_action', 'call', version='1.31', objinst='fake-object', objmethod='foo', args=tuple(), kwargs=dict()) def test_object_class_action_versions(self): self._test_rpcapi('object_class_action_versions', 'call', version='1.31', objname='fake-object', objmethod='foo', object_versions={'fake-object': '1.0'}, args=tuple(), kwargs=dict()) def test_object_backport_versions(self): self._test_rpcapi('object_backport_versions', 'call', version='1.31', objinst='fake-object', object_versions={'fake-object': '1.0'}) @mock.patch.object(messaging.RPCClient, 'can_send_version', autospec=True) def test_object_action_invalid_version(self, mock_send): rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic') mock_send.return_value = False self.assertRaises(NotImplementedError, rpcapi.object_action, self.context, objinst='fake-object', objmethod='foo', args=tuple(), kwargs=dict()) @mock.patch.object(messaging.RPCClient, 'can_send_version', autospec=True) def test_object_class_action_versions_invalid_version(self, mock_send): rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic') mock_send.return_value = False self.assertRaises(NotImplementedError, rpcapi.object_class_action_versions, self.context, objname='fake-object', objmethod='foo', object_versions={'fake-object': '1.0'}, args=tuple(), kwargs=dict()) @mock.patch.object(messaging.RPCClient, 'can_send_version', autospec=True) def test_object_backport_versions_invalid_version(self, mock_send): rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic') mock_send.return_value = False self.assertRaises(NotImplementedError, rpcapi.object_backport_versions, self.context, objinst='fake-object', object_versions={'fake-object': '1.0'}) def test_update_portgroup(self): self._test_rpcapi('update_portgroup', 'call', version='1.33', portgroup_obj=self.fake_portgroup) def test_destroy_portgroup(self): self._test_rpcapi('destroy_portgroup', 'call', version='1.33', portgroup=self.fake_portgroup) ironic-5.1.0/ironic/tests/unit/conductor/test_task_manager.py0000664000567000056710000010551012674513466025632 0ustar jenkinsjenkins00000000000000# coding=utf-8 # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for :class:`ironic.conductor.task_manager`.""" import futurist import mock from oslo_context import context as oslo_context from oslo_utils import uuidutils from ironic.common import context from ironic.common import driver_factory from ironic.common import exception from ironic.common import fsm from ironic.common import states from ironic.conductor import task_manager from ironic import objects from ironic.tests import base as tests_base from ironic.tests.unit.db import base as tests_db_base from ironic.tests.unit.objects import utils as obj_utils @mock.patch.object(objects.Node, 'get') @mock.patch.object(objects.Node, 'release') @mock.patch.object(objects.Node, 'reserve') @mock.patch.object(driver_factory, 'build_driver_for_task') @mock.patch.object(objects.Port, 'list_by_node_id') @mock.patch.object(objects.Portgroup, 'list_by_node_id') class TaskManagerTestCase(tests_db_base.DbTestCase): def setUp(self): super(TaskManagerTestCase, self).setUp() self.host = 'test-host' self.config(host=self.host) self.config(node_locked_retry_attempts=1, group='conductor') self.config(node_locked_retry_interval=0, group='conductor') self.node = obj_utils.create_test_node(self.context) self.future_mock = mock.Mock(spec=['cancel', 'add_done_callback']) def test_excl_lock(self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): reserve_mock.return_value = self.node with task_manager.TaskManager(self.context, 'fake-node-id') as task: self.assertEqual(self.context, task.context) self.assertEqual(self.node, task.node) self.assertEqual(get_ports_mock.return_value, task.ports) self.assertEqual(get_portgroups_mock.return_value, task.portgroups) self.assertEqual(build_driver_mock.return_value, task.driver) self.assertFalse(task.shared) build_driver_mock.assert_called_once_with(task, driver_name=None) reserve_mock.assert_called_once_with(self.context, self.host, 'fake-node-id') get_ports_mock.assert_called_once_with(self.context, self.node.id) get_portgroups_mock.assert_called_once_with(self.context, self.node.id) release_mock.assert_called_once_with(self.context, self.host, self.node.id) self.assertFalse(node_get_mock.called) def test_excl_lock_with_driver( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): reserve_mock.return_value = self.node with task_manager.TaskManager(self.context, 'fake-node-id', driver_name='fake-driver') as task: self.assertEqual(self.context, task.context) self.assertEqual(self.node, task.node) self.assertEqual(get_ports_mock.return_value, task.ports) self.assertEqual(get_portgroups_mock.return_value, task.portgroups) self.assertEqual(build_driver_mock.return_value, task.driver) self.assertFalse(task.shared) build_driver_mock.assert_called_once_with( task, driver_name='fake-driver') reserve_mock.assert_called_once_with(self.context, self.host, 'fake-node-id') get_ports_mock.assert_called_once_with(self.context, self.node.id) 
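# The exclusive-lock path is also expected to fetch portgroups and release
# the reservation exactly once, without ever falling back to Node.get():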
get_portgroups_mock.assert_called_once_with(self.context, self.node.id) release_mock.assert_called_once_with(self.context, self.host, self.node.id) self.assertFalse(node_get_mock.called) def test_excl_nested_acquire( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): node2 = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake') reserve_mock.return_value = self.node get_ports_mock.return_value = mock.sentinel.ports1 get_portgroups_mock.return_value = mock.sentinel.portgroups1 build_driver_mock.return_value = mock.sentinel.driver1 with task_manager.TaskManager(self.context, 'node-id1') as task: reserve_mock.return_value = node2 get_ports_mock.return_value = mock.sentinel.ports2 get_portgroups_mock.return_value = mock.sentinel.portgroups2 build_driver_mock.return_value = mock.sentinel.driver2 with task_manager.TaskManager(self.context, 'node-id2') as task2: self.assertEqual(self.context, task.context) self.assertEqual(self.node, task.node) self.assertEqual(mock.sentinel.ports1, task.ports) self.assertEqual(mock.sentinel.portgroups1, task.portgroups) self.assertEqual(mock.sentinel.driver1, task.driver) self.assertFalse(task.shared) self.assertEqual(self.context, task2.context) self.assertEqual(node2, task2.node) self.assertEqual(mock.sentinel.ports2, task2.ports) self.assertEqual(mock.sentinel.portgroups2, task2.portgroups) self.assertEqual(mock.sentinel.driver2, task2.driver) self.assertFalse(task2.shared) self.assertEqual([mock.call(task, driver_name=None), mock.call(task2, driver_name=None)], build_driver_mock.call_args_list) self.assertEqual([mock.call(self.context, self.host, 'node-id1'), mock.call(self.context, self.host, 'node-id2')], reserve_mock.call_args_list) self.assertEqual([mock.call(self.context, self.node.id), mock.call(self.context, node2.id)], get_ports_mock.call_args_list) # release should be in reverse order self.assertEqual([mock.call(self.context, self.host, node2.id), mock.call(self.context, self.host, self.node.id)], release_mock.call_args_list) self.assertFalse(node_get_mock.called) def test_excl_lock_exception_then_lock( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): retry_attempts = 3 self.config(node_locked_retry_attempts=retry_attempts, group='conductor') # Fail on the first lock attempt, succeed on the second. 
reserve_mock.side_effect = [exception.NodeLocked(node='foo', host='foo'), self.node] with task_manager.TaskManager(self.context, 'fake-node-id') as task: self.assertFalse(task.shared) expected_calls = [mock.call(self.context, self.host, 'fake-node-id')] * 2 reserve_mock.assert_has_calls(expected_calls) self.assertEqual(2, reserve_mock.call_count) def test_excl_lock_reserve_exception( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): retry_attempts = 3 self.config(node_locked_retry_attempts=retry_attempts, group='conductor') reserve_mock.side_effect = exception.NodeLocked(node='foo', host='foo') self.assertRaises(exception.NodeLocked, task_manager.TaskManager, self.context, 'fake-node-id') reserve_mock.assert_called_with(self.context, self.host, 'fake-node-id') self.assertEqual(retry_attempts, reserve_mock.call_count) self.assertFalse(get_ports_mock.called) self.assertFalse(get_portgroups_mock.called) self.assertFalse(build_driver_mock.called) self.assertFalse(release_mock.called) self.assertFalse(node_get_mock.called) def test_excl_lock_get_ports_exception( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): reserve_mock.return_value = self.node get_ports_mock.side_effect = exception.IronicException('foo') self.assertRaises(exception.IronicException, task_manager.TaskManager, self.context, 'fake-node-id') reserve_mock.assert_called_once_with(self.context, self.host, 'fake-node-id') get_ports_mock.assert_called_once_with(self.context, self.node.id) self.assertFalse(build_driver_mock.called) release_mock.assert_called_once_with(self.context, self.host, self.node.id) self.assertFalse(node_get_mock.called) def test_excl_lock_get_portgroups_exception( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): reserve_mock.return_value = self.node get_portgroups_mock.side_effect = exception.IronicException('foo') self.assertRaises(exception.IronicException, task_manager.TaskManager, self.context, 'fake-node-id') reserve_mock.assert_called_once_with(self.context, self.host, 'fake-node-id') get_portgroups_mock.assert_called_once_with(self.context, self.node.id) self.assertFalse(build_driver_mock.called) release_mock.assert_called_once_with(self.context, self.host, self.node.id) self.assertFalse(node_get_mock.called) def test_excl_lock_build_driver_exception( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): reserve_mock.return_value = self.node build_driver_mock.side_effect = ( exception.DriverNotFound(driver_name='foo')) self.assertRaises(exception.DriverNotFound, task_manager.TaskManager, self.context, 'fake-node-id') reserve_mock.assert_called_once_with(self.context, self.host, 'fake-node-id') get_ports_mock.assert_called_once_with(self.context, self.node.id) get_portgroups_mock.assert_called_once_with(self.context, self.node.id) build_driver_mock.assert_called_once_with(mock.ANY, driver_name=None) release_mock.assert_called_once_with(self.context, self.host, self.node.id) self.assertFalse(node_get_mock.called) def test_shared_lock( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): node_get_mock.return_value = self.node with task_manager.TaskManager(self.context, 'fake-node-id', shared=True) as task: self.assertEqual(self.context, task.context) self.assertEqual(self.node, task.node) self.assertEqual(get_ports_mock.return_value, 
task.ports) self.assertEqual(get_portgroups_mock.return_value, task.portgroups) self.assertEqual(build_driver_mock.return_value, task.driver) self.assertTrue(task.shared) build_driver_mock.assert_called_once_with(task, driver_name=None) self.assertFalse(reserve_mock.called) self.assertFalse(release_mock.called) node_get_mock.assert_called_once_with(self.context, 'fake-node-id') get_ports_mock.assert_called_once_with(self.context, self.node.id) get_portgroups_mock.assert_called_once_with(self.context, self.node.id) def test_shared_lock_with_driver( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): node_get_mock.return_value = self.node with task_manager.TaskManager(self.context, 'fake-node-id', shared=True, driver_name='fake-driver') as task: self.assertEqual(self.context, task.context) self.assertEqual(self.node, task.node) self.assertEqual(get_ports_mock.return_value, task.ports) self.assertEqual(get_portgroups_mock.return_value, task.portgroups) self.assertEqual(build_driver_mock.return_value, task.driver) self.assertTrue(task.shared) build_driver_mock.assert_called_once_with( task, driver_name='fake-driver') self.assertFalse(reserve_mock.called) self.assertFalse(release_mock.called) node_get_mock.assert_called_once_with(self.context, 'fake-node-id') get_ports_mock.assert_called_once_with(self.context, self.node.id) get_portgroups_mock.assert_called_once_with(self.context, self.node.id) def test_shared_lock_node_get_exception( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): node_get_mock.side_effect = exception.NodeNotFound(node='foo') self.assertRaises(exception.NodeNotFound, task_manager.TaskManager, self.context, 'fake-node-id', shared=True) self.assertFalse(reserve_mock.called) self.assertFalse(release_mock.called) node_get_mock.assert_called_once_with(self.context, 'fake-node-id') self.assertFalse(get_ports_mock.called) self.assertFalse(get_portgroups_mock.called) self.assertFalse(build_driver_mock.called) def test_shared_lock_get_ports_exception( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): node_get_mock.return_value = self.node get_ports_mock.side_effect = exception.IronicException('foo') self.assertRaises(exception.IronicException, task_manager.TaskManager, self.context, 'fake-node-id', shared=True) self.assertFalse(reserve_mock.called) self.assertFalse(release_mock.called) node_get_mock.assert_called_once_with(self.context, 'fake-node-id') get_ports_mock.assert_called_once_with(self.context, self.node.id) self.assertFalse(build_driver_mock.called) def test_shared_lock_get_portgroups_exception( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): node_get_mock.return_value = self.node get_portgroups_mock.side_effect = exception.IronicException('foo') self.assertRaises(exception.IronicException, task_manager.TaskManager, self.context, 'fake-node-id', shared=True) self.assertFalse(reserve_mock.called) self.assertFalse(release_mock.called) node_get_mock.assert_called_once_with(self.context, 'fake-node-id') get_portgroups_mock.assert_called_once_with(self.context, self.node.id) self.assertFalse(build_driver_mock.called) def test_shared_lock_build_driver_exception( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): node_get_mock.return_value = self.node build_driver_mock.side_effect = ( 
exception.DriverNotFound(driver_name='foo')) self.assertRaises(exception.DriverNotFound, task_manager.TaskManager, self.context, 'fake-node-id', shared=True) self.assertFalse(reserve_mock.called) self.assertFalse(release_mock.called) node_get_mock.assert_called_once_with(self.context, 'fake-node-id') get_ports_mock.assert_called_once_with(self.context, self.node.id) get_portgroups_mock.assert_called_once_with(self.context, self.node.id) build_driver_mock.assert_called_once_with(mock.ANY, driver_name=None) def test_upgrade_lock( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): node_get_mock.return_value = self.node reserve_mock.return_value = self.node with task_manager.TaskManager(self.context, 'fake-node-id', shared=True) as task: self.assertEqual(self.context, task.context) self.assertEqual(self.node, task.node) self.assertEqual(get_ports_mock.return_value, task.ports) self.assertEqual(get_portgroups_mock.return_value, task.portgroups) self.assertEqual(build_driver_mock.return_value, task.driver) self.assertTrue(task.shared) self.assertFalse(reserve_mock.called) task.upgrade_lock() self.assertFalse(task.shared) # second upgrade does nothing task.upgrade_lock() self.assertFalse(task.shared) build_driver_mock.assert_called_once_with(mock.ANY, driver_name=None) # make sure reserve() was called only once reserve_mock.assert_called_once_with(self.context, self.host, 'fake-node-id') release_mock.assert_called_once_with(self.context, self.host, self.node.id) node_get_mock.assert_called_once_with(self.context, 'fake-node-id') get_ports_mock.assert_called_once_with(self.context, self.node.id) get_portgroups_mock.assert_called_once_with(self.context, self.node.id) def test_spawn_after( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): spawn_mock = mock.Mock(return_value=self.future_mock) task_release_mock = mock.Mock() reserve_mock.return_value = self.node with task_manager.TaskManager(self.context, 'node-id') as task: task.spawn_after(spawn_mock, 1, 2, foo='bar', cat='meow') task.release_resources = task_release_mock spawn_mock.assert_called_once_with(1, 2, foo='bar', cat='meow') self.future_mock.add_done_callback.assert_called_once_with( task._thread_release_resources) self.assertFalse(self.future_mock.cancel.called) # Since we mocked link(), we're testing that __exit__ didn't # release resources pending the finishing of the background # thread self.assertFalse(task_release_mock.called) def test_spawn_after_exception_while_yielded( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): spawn_mock = mock.Mock() task_release_mock = mock.Mock() reserve_mock.return_value = self.node def _test_it(): with task_manager.TaskManager(self.context, 'node-id') as task: task.spawn_after(spawn_mock, 1, 2, foo='bar', cat='meow') task.release_resources = task_release_mock raise exception.IronicException('foo') self.assertRaises(exception.IronicException, _test_it) self.assertFalse(spawn_mock.called) task_release_mock.assert_called_once_with() def test_spawn_after_spawn_fails( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): spawn_mock = mock.Mock(side_effect=exception.IronicException('foo')) task_release_mock = mock.Mock() reserve_mock.return_value = self.node def _test_it(): with task_manager.TaskManager(self.context, 'node-id') as task: task.spawn_after(spawn_mock, 1, 2, foo='bar', 
cat='meow') task.release_resources = task_release_mock self.assertRaises(exception.IronicException, _test_it) spawn_mock.assert_called_once_with(1, 2, foo='bar', cat='meow') task_release_mock.assert_called_once_with() def test_spawn_after_link_fails( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): self.future_mock.add_done_callback.side_effect = ( exception.IronicException('foo')) spawn_mock = mock.Mock(return_value=self.future_mock) task_release_mock = mock.Mock() thr_release_mock = mock.Mock(spec_set=[]) reserve_mock.return_value = self.node def _test_it(): with task_manager.TaskManager(self.context, 'node-id') as task: task.spawn_after(spawn_mock, 1, 2, foo='bar', cat='meow') task._thread_release_resources = thr_release_mock task.release_resources = task_release_mock self.assertRaises(exception.IronicException, _test_it) spawn_mock.assert_called_once_with(1, 2, foo='bar', cat='meow') self.future_mock.add_done_callback.assert_called_once_with( thr_release_mock) self.future_mock.cancel.assert_called_once_with() task_release_mock.assert_called_once_with() def test_spawn_after_on_error_hook( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): expected_exception = exception.IronicException('foo') spawn_mock = mock.Mock(side_effect=expected_exception) task_release_mock = mock.Mock() on_error_handler = mock.Mock() reserve_mock.return_value = self.node def _test_it(): with task_manager.TaskManager(self.context, 'node-id') as task: task.set_spawn_error_hook(on_error_handler, 'fake-argument') task.spawn_after(spawn_mock, 1, 2, foo='bar', cat='meow') task.release_resources = task_release_mock self.assertRaises(exception.IronicException, _test_it) spawn_mock.assert_called_once_with(1, 2, foo='bar', cat='meow') task_release_mock.assert_called_once_with() on_error_handler.assert_called_once_with(expected_exception, 'fake-argument') def test_spawn_after_on_error_hook_exception( self, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): expected_exception = exception.IronicException('foo') spawn_mock = mock.Mock(side_effect=expected_exception) task_release_mock = mock.Mock() # Raise an exception within the on_error handler on_error_handler = mock.Mock(side_effect=Exception('unexpected')) on_error_handler.__name__ = 'foo_method' reserve_mock.return_value = self.node def _test_it(): with task_manager.TaskManager(self.context, 'node-id') as task: task.set_spawn_error_hook(on_error_handler, 'fake-argument') task.spawn_after(spawn_mock, 1, 2, foo='bar', cat='meow') task.release_resources = task_release_mock # Make sure the original exception is the one raised self.assertRaises(exception.IronicException, _test_it) spawn_mock.assert_called_once_with(1, 2, foo='bar', cat='meow') task_release_mock.assert_called_once_with() on_error_handler.assert_called_once_with(expected_exception, 'fake-argument') @mock.patch.object(states.machine, 'copy') def test_init_prepares_fsm( self, copy_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): m = mock.Mock(spec=fsm.FSM) reserve_mock.return_value = self.node copy_mock.return_value = m t = task_manager.TaskManager('fake', 'fake') copy_mock.assert_called_once_with() self.assertIs(m, t.fsm) m.initialize.assert_called_once_with( start_state=self.node.provision_state, target_state=self.node.target_provision_state) class 
TaskManagerStateModelTestCases(tests_base.TestCase): def setUp(self): super(TaskManagerStateModelTestCases, self).setUp() self.fsm = mock.Mock(spec=fsm.FSM) self.node = mock.Mock(spec=objects.Node) self.task = mock.Mock(spec=task_manager.TaskManager) self.task.fsm = self.fsm self.task.node = self.node def test_release_clears_resources(self): t = self.task t.release_resources = task_manager.TaskManager.release_resources t.driver = mock.Mock() t.ports = mock.Mock() t.portgroups = mock.Mock() t.shared = True t._purpose = 'purpose' t._debug_timer = mock.Mock() t.release_resources(t) self.assertIsNone(t.node) self.assertIsNone(t.driver) self.assertIsNone(t.ports) self.assertIsNone(t.portgroups) self.assertIsNone(t.fsm) def test_process_event_fsm_raises(self): self.task.process_event = task_manager.TaskManager.process_event self.fsm.process_event.side_effect = exception.InvalidState('test') self.assertRaises( exception.InvalidState, self.task.process_event, self.task, 'fake') self.assertEqual(0, self.task.spawn_after.call_count) self.assertFalse(self.task.node.save.called) def test_process_event_sets_callback(self): cb = mock.Mock() arg = mock.Mock() kwarg = mock.Mock() self.task.process_event = task_manager.TaskManager.process_event self.task.process_event( self.task, 'fake', callback=cb, call_args=[arg], call_kwargs={'mock': kwarg}) self.fsm.process_event.assert_called_once_with('fake', target_state=None) self.task.spawn_after.assert_called_with(cb, arg, mock=kwarg) self.assertEqual(1, self.task.node.save.call_count) self.assertIsNone(self.node.last_error) def test_process_event_sets_callback_and_error_handler(self): arg = mock.Mock() cb = mock.Mock() er = mock.Mock() kwarg = mock.Mock() provision_state = 'provision_state' target_provision_state = 'target' self.node.provision_state = provision_state self.node.target_provision_state = target_provision_state self.task.process_event = task_manager.TaskManager.process_event self.task.process_event( self.task, 'fake', callback=cb, call_args=[arg], call_kwargs={'mock': kwarg}, err_handler=er) self.task.set_spawn_error_hook.assert_called_once_with( er, self.node, provision_state, target_provision_state) self.fsm.process_event.assert_called_once_with('fake', target_state=None) self.task.spawn_after.assert_called_with(cb, arg, mock=kwarg) self.assertEqual(1, self.task.node.save.call_count) self.assertIsNone(self.node.last_error) self.assertNotEqual(provision_state, self.node.provision_state) self.assertNotEqual(target_provision_state, self.node.target_provision_state) def test_process_event_sets_target_state(self): event = 'fake' tgt_state = 'target' provision_state = 'provision_state' target_provision_state = 'target_provision_state' self.node.provision_state = provision_state self.node.target_provision_state = target_provision_state self.task.process_event = task_manager.TaskManager.process_event self.task.process_event(self.task, event, target_state=tgt_state) self.fsm.process_event.assert_called_once_with(event, target_state=tgt_state) self.assertEqual(1, self.task.node.save.call_count) self.assertNotEqual(provision_state, self.node.provision_state) self.assertNotEqual(target_provision_state, self.node.target_provision_state) def test_process_event_callback_stable_state(self): callback = mock.Mock() for state in states.STABLE_STATES: self.node.provision_state = state self.node.target_provision_state = 'target' self.task.process_event = task_manager.TaskManager.process_event self.task.process_event(self.task, 'fake', callback=callback) # assert the 
target state is set when callback is passed self.assertNotEqual(states.NOSTATE, self.task.node.target_provision_state) def test_process_event_no_callback_stable_state(self): for state in states.STABLE_STATES: self.node.provision_state = state self.node.target_provision_state = 'target' self.task.process_event = task_manager.TaskManager.process_event self.task.process_event(self.task, 'fake') # assert the target state was cleared when moving to a # stable state self.assertEqual(states.NOSTATE, self.task.node.target_provision_state) @task_manager.require_exclusive_lock def _req_excl_lock_method(*args, **kwargs): return (args, kwargs) class ExclusiveLockDecoratorTestCase(tests_base.TestCase): def setUp(self): super(ExclusiveLockDecoratorTestCase, self).setUp() self.task = mock.Mock(spec=task_manager.TaskManager) self.task.context = self.context self.args_task_first = (self.task, 1, 2) self.args_task_second = (1, self.task, 2) self.kwargs = dict(cat='meow', dog='wuff') def test_with_excl_lock_task_first_arg(self): self.task.shared = False (args, kwargs) = _req_excl_lock_method(*self.args_task_first, **self.kwargs) self.assertEqual(self.args_task_first, args) self.assertEqual(self.kwargs, kwargs) def test_with_excl_lock_task_second_arg(self): self.task.shared = False (args, kwargs) = _req_excl_lock_method(*self.args_task_second, **self.kwargs) self.assertEqual(self.args_task_second, args) self.assertEqual(self.kwargs, kwargs) def test_with_shared_lock_task_first_arg(self): self.task.shared = True self.assertRaises(exception.ExclusiveLockRequired, _req_excl_lock_method, *self.args_task_first, **self.kwargs) def test_with_shared_lock_task_second_arg(self): self.task.shared = True self.assertRaises(exception.ExclusiveLockRequired, _req_excl_lock_method, *self.args_task_second, **self.kwargs) class ThreadExceptionTestCase(tests_base.TestCase): def setUp(self): super(ThreadExceptionTestCase, self).setUp() self.node = mock.Mock(spec=objects.Node) self.node.last_error = None self.task = mock.Mock(spec=task_manager.TaskManager) self.task.node = self.node self.task._write_exception = task_manager.TaskManager._write_exception self.future_mock = mock.Mock(spec_set=['exception']) def async_method_foo(): pass self.task._spawn_args = (async_method_foo,) def test_set_node_last_error(self): self.future_mock.exception.return_value = Exception('fiasco') self.task._write_exception(self.task, self.future_mock) self.node.save.assert_called_once_with() self.assertIn('fiasco', self.node.last_error) self.assertIn('async_method_foo', self.node.last_error) def test_set_node_last_error_exists(self): self.future_mock.exception.return_value = Exception('fiasco') self.node.last_error = 'oops' self.task._write_exception(self.task, self.future_mock) self.assertFalse(self.node.save.called) self.assertFalse(self.future_mock.exception.called) self.assertEqual('oops', self.node.last_error) def test_set_node_last_error_no_error(self): self.future_mock.exception.return_value = None self.task._write_exception(self.task, self.future_mock) self.assertFalse(self.node.save.called) self.future_mock.exception.assert_called_once_with() self.assertIsNone(self.node.last_error) @mock.patch.object(task_manager.LOG, 'exception', spec_set=True, autospec=True) def test_set_node_last_error_cancelled(self, log_mock): self.future_mock.exception.side_effect = futurist.CancelledError() self.task._write_exception(self.task, self.future_mock) self.assertFalse(self.node.save.called) self.future_mock.exception.assert_called_once_with() 
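# A cancelled future should be logged, but must leave the node's
# last_error untouched: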
self.assertIsNone(self.node.last_error) self.assertTrue(log_mock.called) @mock.patch.object(oslo_context, 'get_current') class TaskManagerContextTestCase(tests_base.TestCase): def setUp(self): super(TaskManagerContextTestCase, self).setUp() self.context = mock.Mock(spec=context.RequestContext) def test_thread_without_context(self, context_get_mock): context_get_mock.return_value = False task_manager.ensure_thread_contain_context(self.context) self.assertTrue(self.context.update_store.called) def test_thread_with_context(self, context_get_mock): context_get_mock.return_value = True task_manager.ensure_thread_contain_context(self.context) self.assertFalse(self.context.update_store.called) ironic-5.1.0/ironic/tests/unit/conductor/__init__.py0000664000567000056710000000000012674513466023662 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/conductor/mgr_utils.py0000664000567000056710000001645112674513466024145 0ustar jenkinsjenkins00000000000000# coding=utf-8 # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Test utils for Ironic Managers.""" from futurist import periodics import mock from oslo_utils import strutils from oslo_utils import uuidutils import pkg_resources from stevedore import dispatch from ironic.common import driver_factory from ironic.common import exception from ironic.common import states from ironic.conductor import manager from ironic import objects def mock_the_extension_manager(driver="fake", namespace="ironic.drivers"): """Get a fake stevedore NameDispatchExtensionManager instance. :param driver: The name of the driver entry point to load. :param namespace: A string representing the namespace over which to search for entrypoints. :returns mock_ext_mgr: A DriverFactory instance that has been faked. :returns mock_ext: A real plugin loaded by mock_ext_mgr in the specified namespace. """ entry_point = None for ep in list(pkg_resources.iter_entry_points(namespace)): s = "%s" % ep if driver == s[:s.index(' =')]: entry_point = ep break # NOTE(lucasagomes): Initialize the _extension_manager before # instantiating a DriverFactory class, to avoid a real # NameDispatchExtensionManager being created with the real namespace.
driver_factory.DriverFactory._extension_manager = ( dispatch.NameDispatchExtensionManager('ironic.no-such-namespace', lambda x: True)) mock_ext_mgr = driver_factory.DriverFactory() mock_ext = mock_ext_mgr._extension_manager._load_one_plugin( entry_point, True, [], {}, False) mock_ext_mgr._extension_manager.extensions = [mock_ext] mock_ext_mgr._extension_manager.by_name = dict((e.name, e) for e in [mock_ext]) return (mock_ext_mgr, mock_ext) class CommonMixIn(object): @staticmethod def _create_node(**kwargs): attrs = {'id': 1, 'uuid': uuidutils.generate_uuid(), 'power_state': states.POWER_OFF, 'target_power_state': None, 'maintenance': False, 'reservation': None} attrs.update(kwargs) node = mock.Mock(spec_set=objects.Node) for attr in attrs: setattr(node, attr, attrs[attr]) return node def _create_task(self, node=None, node_attrs=None): if node_attrs is None: node_attrs = {} if node is None: node = self._create_node(**node_attrs) task = mock.Mock(spec_set=['node', 'release_resources', 'spawn_after', 'process_event']) task.node = node return task def _get_nodeinfo_list_response(self, nodes=None): if nodes is None: nodes = [self.node] elif not isinstance(nodes, (list, tuple)): nodes = [nodes] return [tuple(getattr(n, c) for c in self.columns) for n in nodes] def _get_acquire_side_effect(self, task_infos): """Helper method to generate a task_manager.acquire() side effect. This accepts a list of information about task mocks to return. task_infos can be a single entity or a list. Each task_info can be a single entity, the task to return, or it can be a tuple of (task, exception_to_raise_on_exit). 'task' can be an exception to raise on __enter__. Examples: _get_acquire_side_effect(self, task): Yield task _get_acquire_side_effect(self, [task, enter_exception(), (task2, exit_exception())]) Yield task on first call to acquire() raise enter_exception() in __enter__ on 2nd call to acquire() Yield task2 on 3rd call to acquire(), but raise exit_exception() on __exit__() """ tasks = [] exit_exceptions = [] if not isinstance(task_infos, list): task_infos = [task_infos] for task_info in task_infos: if isinstance(task_info, tuple): task, exc = task_info else: task = task_info exc = None tasks.append(task) exit_exceptions.append(exc) class FakeAcquire(object): def __init__(fa_self, context, node_id, *args, **kwargs): # We actually verify these arguments via # acquire_mock.call_args_list(). However, this stores the # node_id so we can assert we're returning the correct node # in __enter__(). fa_self.node_id = node_id def __enter__(fa_self): task = tasks.pop(0) if isinstance(task, Exception): raise task # NOTE(comstud): Not ideal to throw this into # a helper, however it's the cleanest way # to verify we're dealing with the correct task/node. 
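# node_id may be either an integer ID or a UUID string, so the check
# below compares against whichever node attribute matches its type.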
if strutils.is_int_like(fa_self.node_id): self.assertEqual(fa_self.node_id, task.node.id) else: self.assertEqual(fa_self.node_id, task.node.uuid) return task def __exit__(fa_self, exc_typ, exc_val, exc_tb): exc = exit_exceptions.pop(0) if exc_typ is None and exc is not None: raise exc return FakeAcquire class ServiceSetUpMixin(object): def setUp(self): super(ServiceSetUpMixin, self).setUp() self.hostname = 'test-host' self.config(enabled_drivers=['fake']) self.config(node_locked_retry_attempts=1, group='conductor') self.config(node_locked_retry_interval=0, group='conductor') self.service = manager.ConductorManager(self.hostname, 'test-topic') mock_the_extension_manager() self.driver = driver_factory.get_driver("fake") def _stop_service(self): try: objects.Conductor.get_by_hostname(self.context, self.hostname) except exception.ConductorNotFound: return self.service.del_host() def _start_service(self, start_periodic_tasks=False): if start_periodic_tasks: self.service.init_host() else: with mock.patch.object(periodics, 'PeriodicWorker', autospec=True): self.service.init_host() self.addCleanup(self._stop_service) def mock_record_keepalive(func_or_class): return mock.patch.object( manager.ConductorManager, '_conductor_service_record_keepalive', lambda _: None)(func_or_class) ironic-5.1.0/ironic/tests/unit/db/0000775000567000056710000000000012674513633020144 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/tests/unit/db/base.py0000664000567000056710000000643212674513466021441 0ustar jenkinsjenkins00000000000000# Copyright (c) 2012 NTT DOCOMO, INC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
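# A minimal, standalone sketch of the snapshot-and-replay pattern that the
# Database fixture below uses: the migrated schema is serialized once with
# iterdump() and replayed into a fresh in-memory SQLite database before each
# test, which is much cheaper than re-running migrations every time. All
# names here are illustrative, not taken from ironic itself.
import sqlite3

_seed = sqlite3.connect(':memory:')
_seed.execute('CREATE TABLE nodes (uuid TEXT PRIMARY KEY)')
_seed.execute("INSERT INTO nodes VALUES ('fake-uuid')")
_seed.commit()
# Serialize the prepared schema and data to a SQL script once...
_SNAPSHOT = ''.join(_seed.iterdump())


def fresh_db():
    # ...and replay that script into a pristine connection per test.
    conn = sqlite3.connect(':memory:')
    conn.executescript(_SNAPSHOT)
    return conn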
"""Ironic DB test base class.""" import os import shutil import fixtures from oslo_config import cfg from oslo_db.sqlalchemy import enginefacade from ironic.common import paths from ironic.db import api as dbapi from ironic.db.sqlalchemy import migration from ironic.db.sqlalchemy import models from ironic.tests import base CONF = cfg.CONF _DB_CACHE = None class Database(fixtures.Fixture): def __init__(self, engine, db_migrate, sql_connection, sqlite_db, sqlite_clean_db): self.sql_connection = sql_connection self.sqlite_db = sqlite_db self.sqlite_clean_db = sqlite_clean_db self.engine = engine self.engine.dispose() conn = self.engine.connect() if sql_connection == "sqlite://": self.setup_sqlite(db_migrate) elif sql_connection.startswith('sqlite:///'): testdb = paths.state_path_rel(sqlite_db) if os.path.exists(testdb): return self.setup_sqlite(db_migrate) else: db_migrate.upgrade('head') self.post_migrations() if sql_connection == "sqlite://": conn = self.engine.connect() self._DB = "".join(line for line in conn.connection.iterdump()) self.engine.dispose() else: cleandb = paths.state_path_rel(sqlite_clean_db) shutil.copyfile(testdb, cleandb) def setup_sqlite(self, db_migrate): if db_migrate.version(): return models.Base.metadata.create_all(self.engine) db_migrate.stamp('head') def setUp(self): super(Database, self).setUp() if self.sql_connection == "sqlite://": conn = self.engine.connect() conn.connection.executescript(self._DB) self.addCleanup(self.engine.dispose) else: shutil.copyfile(paths.state_path_rel(self.sqlite_clean_db), paths.state_path_rel(self.sqlite_db)) self.addCleanup(os.unlink, self.sqlite_db) def post_migrations(self): """Any addition steps that are needed outside of the migrations.""" class DbTestCase(base.TestCase): def setUp(self): super(DbTestCase, self).setUp() self.dbapi = dbapi.get_instance() global _DB_CACHE if not _DB_CACHE: engine = enginefacade.get_legacy_facade().get_engine() _DB_CACHE = Database(engine, migration, sql_connection=CONF.database.connection, sqlite_db=CONF.database.sqlite_db, sqlite_clean_db='clean.sqlite') self.useFixture(_DB_CACHE) ironic-5.1.0/ironic/tests/unit/db/test_ports.py0000664000567000056710000001250212674513466022730 0ustar jenkinsjenkins00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for manipulating Ports via the DB API""" from oslo_utils import uuidutils import six from ironic.common import exception from ironic.tests.unit.db import base from ironic.tests.unit.db import utils as db_utils class DbPortTestCase(base.DbTestCase): def setUp(self): # This method creates a port for every test and # replaces a test for creating a port. 
super(DbPortTestCase, self).setUp() self.node = db_utils.create_test_node() self.portgroup = db_utils.create_test_portgroup(node_id=self.node.id) self.port = db_utils.create_test_port(node_id=self.node.id, portgroup_id=self.portgroup.id) def test_get_port_by_id(self): res = self.dbapi.get_port_by_id(self.port.id) self.assertEqual(self.port.address, res.address) def test_get_port_by_uuid(self): res = self.dbapi.get_port_by_uuid(self.port.uuid) self.assertEqual(self.port.id, res.id) def test_get_port_by_address(self): res = self.dbapi.get_port_by_address(self.port.address) self.assertEqual(self.port.id, res.id) def test_get_port_list(self): uuids = [] for i in range(1, 6): port = db_utils.create_test_port(uuid=uuidutils.generate_uuid(), address='52:54:00:cf:2d:4%s' % i) uuids.append(six.text_type(port.uuid)) # Also add the uuid for the port created in setUp() uuids.append(six.text_type(self.port.uuid)) res = self.dbapi.get_port_list() res_uuids = [r.uuid for r in res] six.assertCountEqual(self, uuids, res_uuids) def test_get_port_list_sorted(self): uuids = [] for i in range(1, 6): port = db_utils.create_test_port(uuid=uuidutils.generate_uuid(), address='52:54:00:cf:2d:4%s' % i) uuids.append(six.text_type(port.uuid)) # Also add the uuid for the port created in setUp() uuids.append(six.text_type(self.port.uuid)) res = self.dbapi.get_port_list(sort_key='uuid') res_uuids = [r.uuid for r in res] self.assertEqual(sorted(uuids), res_uuids) self.assertRaises(exception.InvalidParameterValue, self.dbapi.get_port_list, sort_key='foo') def test_get_ports_by_node_id(self): res = self.dbapi.get_ports_by_node_id(self.node.id) self.assertEqual(self.port.address, res[0].address) def test_get_ports_by_node_id_that_does_not_exist(self): self.assertEqual([], self.dbapi.get_ports_by_node_id(99)) def test_get_ports_by_portgroup_id(self): res = self.dbapi.get_ports_by_portgroup_id(self.portgroup.id) self.assertEqual(self.port.address, res[0].address) def test_get_ports_by_portgroup_id_that_does_not_exist(self): self.assertEqual([], self.dbapi.get_ports_by_portgroup_id(99)) def test_destroy_port(self): self.dbapi.destroy_port(self.port.id) self.assertRaises(exception.PortNotFound, self.dbapi.destroy_port, self.port.id) def test_update_port(self): old_address = self.port.address new_address = 'ff.ee.dd.cc.bb.aa' self.assertNotEqual(old_address, new_address) res = self.dbapi.update_port(self.port.id, {'address': new_address}) self.assertEqual(new_address, res.address) def test_update_port_uuid(self): self.assertRaises(exception.InvalidParameterValue, self.dbapi.update_port, self.port.id, {'uuid': ''}) def test_update_port_duplicated_address(self): address1 = self.port.address address2 = 'aa-bb-cc-11-22-33' port2 = db_utils.create_test_port(uuid=uuidutils.generate_uuid(), node_id=self.node.id, address=address2) self.assertRaises(exception.MACAlreadyExists, self.dbapi.update_port, port2.id, {'address': address1}) def test_create_port_duplicated_address(self): self.assertRaises(exception.MACAlreadyExists, db_utils.create_test_port, uuid=uuidutils.generate_uuid(), node_id=self.node.id, address=self.port.address) def test_create_port_duplicated_uuid(self): self.assertRaises(exception.PortAlreadyExists, db_utils.create_test_port, uuid=self.port.uuid, node_id=self.node.id, address='aa-bb-cc-33-11-22') ironic-5.1.0/ironic/tests/unit/db/test_node_tags.py0000664000567000056710000001067312674513466023533 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this 
file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for manipulating NodeTags via the DB API""" from ironic.common import exception from ironic.tests.unit.db import base from ironic.tests.unit.db import utils as db_utils class DbNodeTagTestCase(base.DbTestCase): def setUp(self): super(DbNodeTagTestCase, self).setUp() self.node = db_utils.create_test_node() def test_set_node_tags(self): tags = self.dbapi.set_node_tags(self.node.id, ['tag1', 'tag2']) self.assertEqual(self.node.id, tags[0].node_id) self.assertItemsEqual(['tag1', 'tag2'], [tag.tag for tag in tags]) tags = self.dbapi.set_node_tags(self.node.id, []) self.assertEqual([], tags) def test_set_node_tags_duplicate(self): tags = self.dbapi.set_node_tags(self.node.id, ['tag1', 'tag2', 'tag2']) self.assertEqual(self.node.id, tags[0].node_id) self.assertItemsEqual(['tag1', 'tag2'], [tag.tag for tag in tags]) def test_set_node_tags_node_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.set_node_tags, '1234', ['tag1', 'tag2']) def test_get_node_tags_by_node_id(self): self.dbapi.set_node_tags(self.node.id, ['tag1', 'tag2']) tags = self.dbapi.get_node_tags_by_node_id(self.node.id) self.assertEqual(self.node.id, tags[0].node_id) self.assertItemsEqual(['tag1', 'tag2'], [tag.tag for tag in tags]) def test_get_node_tags_empty(self): tags = self.dbapi.get_node_tags_by_node_id(self.node.id) self.assertEqual([], tags) def test_get_node_tags_node_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.get_node_tags_by_node_id, '123') def test_unset_node_tags(self): self.dbapi.set_node_tags(self.node.id, ['tag1', 'tag2']) self.dbapi.unset_node_tags(self.node.id) tags = self.dbapi.get_node_tags_by_node_id(self.node.id) self.assertEqual([], tags) def test_unset_empty_node_tags(self): self.dbapi.unset_node_tags(self.node.id) tags = self.dbapi.get_node_tags_by_node_id(self.node.id) self.assertEqual([], tags) def test_unset_node_tags_node_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.unset_node_tags, '123') def test_add_node_tag(self): tag = self.dbapi.add_node_tag(self.node.id, 'tag1') self.assertEqual(self.node.id, tag.node_id) self.assertEqual('tag1', tag.tag) def test_add_node_tag_duplicate(self): tag = self.dbapi.add_node_tag(self.node.id, 'tag1') tag = self.dbapi.add_node_tag(self.node.id, 'tag1') self.assertEqual(self.node.id, tag.node_id) self.assertEqual('tag1', tag.tag) def test_add_node_tag_node_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.add_node_tag, '123', 'tag1') def test_delete_node_tag(self): self.dbapi.set_node_tags(self.node.id, ['tag1', 'tag2']) self.dbapi.delete_node_tag(self.node.id, 'tag1') tags = self.dbapi.get_node_tags_by_node_id(self.node.id) self.assertEqual(1, len(tags)) self.assertEqual('tag2', tags[0].tag) def test_delete_node_tag_not_found(self): self.assertRaises(exception.NodeTagNotFound, self.dbapi.delete_node_tag, self.node.id, 'tag1') def test_delete_node_tag_node_not_found(self): self.assertRaises(exception.NodeNotFound, self.dbapi.delete_node_tag, '123', 'tag1') def test_node_tag_exists(self): self.dbapi.set_node_tags(self.node.id, 
['tag1', 'tag2']) ret = self.dbapi.node_tag_exists(self.node.id, 'tag1') self.assertTrue(ret) def test_node_tag_not_exists(self): ret = self.dbapi.node_tag_exists(self.node.id, 'tag1') self.assertFalse(ret) ironic-5.1.0/ironic/tests/unit/db/test_chassis.py0000664000567000056710000000640712674513466023225 0ustar jenkinsjenkins00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for manipulating Chassis via the DB API""" from oslo_utils import uuidutils import six from ironic.common import exception from ironic.tests.unit.db import base from ironic.tests.unit.db import utils class DbChassisTestCase(base.DbTestCase): def setUp(self): super(DbChassisTestCase, self).setUp() self.chassis = utils.create_test_chassis() def test_get_chassis_list(self): uuids = [self.chassis.uuid] for i in range(1, 6): ch = utils.create_test_chassis(uuid=uuidutils.generate_uuid()) uuids.append(six.text_type(ch.uuid)) res = self.dbapi.get_chassis_list() res_uuids = [r.uuid for r in res] six.assertCountEqual(self, uuids, res_uuids) def test_get_chassis_by_id(self): chassis = self.dbapi.get_chassis_by_id(self.chassis.id) self.assertEqual(self.chassis.uuid, chassis.uuid) def test_get_chassis_by_uuid(self): chassis = self.dbapi.get_chassis_by_uuid(self.chassis.uuid) self.assertEqual(self.chassis.id, chassis.id) def test_get_chassis_that_does_not_exist(self): self.assertRaises(exception.ChassisNotFound, self.dbapi.get_chassis_by_id, 666) def test_update_chassis(self): res = self.dbapi.update_chassis(self.chassis.id, {'description': 'hello'}) self.assertEqual('hello', res.description) def test_update_chassis_that_does_not_exist(self): self.assertRaises(exception.ChassisNotFound, self.dbapi.update_chassis, 666, {'description': ''}) def test_update_chassis_uuid(self): self.assertRaises(exception.InvalidParameterValue, self.dbapi.update_chassis, self.chassis.id, {'uuid': 'hello'}) def test_destroy_chassis(self): self.dbapi.destroy_chassis(self.chassis.id) self.assertRaises(exception.ChassisNotFound, self.dbapi.get_chassis_by_id, self.chassis.id) def test_destroy_chassis_that_does_not_exist(self): self.assertRaises(exception.ChassisNotFound, self.dbapi.destroy_chassis, 666) def test_destroy_chassis_with_nodes(self): utils.create_test_node(chassis_id=self.chassis.id) self.assertRaises(exception.ChassisNotEmpty, self.dbapi.destroy_chassis, self.chassis.id) def test_create_chassis_already_exists(self): self.assertRaises(exception.ChassisAlreadyExists, utils.create_test_chassis, uuid=self.chassis.uuid) ironic-5.1.0/ironic/tests/unit/db/test_conductor.py0000664000567000056710000002065612674513466023572 0ustar jenkinsjenkins00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for manipulating Conductors via the DB API""" import datetime import mock from oslo_utils import timeutils from ironic.common import exception from ironic.tests.unit.db import base from ironic.tests.unit.db import utils class DbConductorTestCase(base.DbTestCase): def test_register_conductor_existing_fails(self): c = utils.get_test_conductor() self.dbapi.register_conductor(c) self.assertRaises( exception.ConductorAlreadyRegistered, self.dbapi.register_conductor, c) def test_register_conductor_override(self): c = utils.get_test_conductor() self.dbapi.register_conductor(c) self.dbapi.register_conductor(c, update_existing=True) def _create_test_cdr(self, **kwargs): c = utils.get_test_conductor(**kwargs) return self.dbapi.register_conductor(c) def test_get_conductor(self): c1 = self._create_test_cdr() c2 = self.dbapi.get_conductor(c1.hostname) self.assertEqual(c1.id, c2.id) def test_get_conductor_not_found(self): self._create_test_cdr() self.assertRaises( exception.ConductorNotFound, self.dbapi.get_conductor, 'bad-hostname') def test_unregister_conductor(self): c = self._create_test_cdr() self.dbapi.unregister_conductor(c.hostname) self.assertRaises( exception.ConductorNotFound, self.dbapi.unregister_conductor, c.hostname) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_touch_conductor(self, mock_utcnow): test_time = datetime.datetime(2000, 1, 1, 0, 0) mock_utcnow.return_value = test_time c = self._create_test_cdr() self.assertEqual(test_time, timeutils.normalize_time(c.updated_at)) test_time = datetime.datetime(2000, 1, 1, 0, 1) mock_utcnow.return_value = test_time self.dbapi.touch_conductor(c.hostname) c = self.dbapi.get_conductor(c.hostname) self.assertEqual(test_time, timeutils.normalize_time(c.updated_at)) def test_touch_conductor_not_found(self): # A conductor's heartbeat will not create a new record, # it will only update existing ones self._create_test_cdr() self.assertRaises( exception.ConductorNotFound, self.dbapi.touch_conductor, 'bad-hostname') def test_touch_offline_conductor(self): # Ensure that a conductor's periodic heartbeat task can make the # conductor visible again, even if it was spuriously marked offline c = self._create_test_cdr() self.dbapi.unregister_conductor(c.hostname) self.assertRaises( exception.ConductorNotFound, self.dbapi.get_conductor, c.hostname) self.dbapi.touch_conductor(c.hostname) self.dbapi.get_conductor(c.hostname) def test_clear_node_reservations_for_conductor(self): node1 = self.dbapi.create_node({'reservation': 'hostname1'}) node2 = self.dbapi.create_node({'reservation': 'hostname2'}) node3 = self.dbapi.create_node({'reservation': None}) self.dbapi.clear_node_reservations_for_conductor('hostname1') node1 = self.dbapi.get_node_by_id(node1.id) node2 = self.dbapi.get_node_by_id(node2.id) node3 = self.dbapi.get_node_by_id(node3.id) self.assertIsNone(node1.reservation) self.assertEqual('hostname2', node2.reservation) self.assertIsNone(node3.reservation) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_active_driver_dict_one_host_no_driver(self, mock_utcnow): h = 'fake-host' expected = {} mock_utcnow.return_value = 
datetime.datetime.utcnow() self._create_test_cdr(hostname=h, drivers=[]) result = self.dbapi.get_active_driver_dict() self.assertEqual(expected, result) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_active_driver_dict_one_host_one_driver(self, mock_utcnow): h = 'fake-host' d = 'fake-driver' expected = {d: set([h])} mock_utcnow.return_value = datetime.datetime.utcnow() self._create_test_cdr(hostname=h, drivers=[d]) result = self.dbapi.get_active_driver_dict() self.assertEqual(expected, result) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_active_driver_dict_one_host_many_drivers(self, mock_utcnow): h = 'fake-host' d1 = 'driver-one' d2 = 'driver-two' expected = {d1: set([h]), d2: set([h])} mock_utcnow.return_value = datetime.datetime.utcnow() self._create_test_cdr(hostname=h, drivers=[d1, d2]) result = self.dbapi.get_active_driver_dict() self.assertEqual(expected, result) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_active_driver_dict_many_hosts_one_driver(self, mock_utcnow): h1 = 'host-one' h2 = 'host-two' d = 'fake-driver' expected = {d: set([h1, h2])} mock_utcnow.return_value = datetime.datetime.utcnow() self._create_test_cdr(id=1, hostname=h1, drivers=[d]) self._create_test_cdr(id=2, hostname=h2, drivers=[d]) result = self.dbapi.get_active_driver_dict() self.assertEqual(expected, result) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_active_driver_dict_many_hosts_and_drivers(self, mock_utcnow): h1 = 'host-one' h2 = 'host-two' h3 = 'host-three' d1 = 'driver-one' d2 = 'driver-two' expected = {d1: set([h1, h2]), d2: set([h2, h3])} mock_utcnow.return_value = datetime.datetime.utcnow() self._create_test_cdr(id=1, hostname=h1, drivers=[d1]) self._create_test_cdr(id=2, hostname=h2, drivers=[d1, d2]) self._create_test_cdr(id=3, hostname=h3, drivers=[d2]) result = self.dbapi.get_active_driver_dict() self.assertEqual(expected, result) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_active_driver_dict_with_old_conductor(self, mock_utcnow): past = datetime.datetime(2000, 1, 1, 0, 0) present = past + datetime.timedelta(minutes=2) d = 'common-driver' h1 = 'old-host' d1 = 'old-driver' mock_utcnow.return_value = past self._create_test_cdr(id=1, hostname=h1, drivers=[d, d1]) h2 = 'new-host' d2 = 'new-driver' mock_utcnow.return_value = present self._create_test_cdr(id=2, hostname=h2, drivers=[d, d2]) # verify that old-host does not show up in current list one_minute = 60 expected = {d: set([h2]), d2: set([h2])} result = self.dbapi.get_active_driver_dict(interval=one_minute) self.assertEqual(expected, result) # change the interval, and verify that old-host appears two_minute = one_minute * 2 expected = {d: set([h1, h2]), d1: set([h1]), d2: set([h2])} result = self.dbapi.get_active_driver_dict(interval=two_minute) self.assertEqual(expected, result) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_offline_conductors(self, mock_utcnow): self.config(heartbeat_timeout=60, group='conductor') time_ = datetime.datetime(2000, 1, 1, 0, 0) mock_utcnow.return_value = time_ c = self._create_test_cdr() # Only 30 seconds passed since last heartbeat, it's still # considered alive mock_utcnow.return_value = time_ + datetime.timedelta(seconds=30) self.assertEqual([], self.dbapi.get_offline_conductors()) # 61 seconds passed since last heartbeat, it's dead mock_utcnow.return_value = time_ + datetime.timedelta(seconds=61) self.assertEqual([c.hostname], self.dbapi.get_offline_conductors()) 
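# A minimal, self-contained sketch of the clock-freezing pattern the
# conductor tests above rely on: timeutils.utcnow() is mocked so the
# heartbeat age can be stepped explicitly. The is_alive() helper is
# hypothetical, standing in for the liveness check in the DB API.
import datetime

import mock
from oslo_utils import timeutils


def is_alive(last_heartbeat, timeout=60):
    # A conductor counts as alive while its last heartbeat is at most
    # `timeout` seconds old.
    return timeutils.utcnow() - last_heartbeat <= datetime.timedelta(
        seconds=timeout)


with mock.patch.object(timeutils, 'utcnow', autospec=True) as mock_now:
    start = datetime.datetime(2000, 1, 1, 0, 0)
    mock_now.return_value = start + datetime.timedelta(seconds=30)
    assert is_alive(start)        # 30s since the heartbeat: still alive
    mock_now.return_value = start + datetime.timedelta(seconds=61)
    assert not is_alive(start)    # 61s: past the 60s timeout, offline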
ironic-5.1.0/ironic/tests/unit/db/test_portgroups.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Tests for manipulating portgroups via the DB API"""

from oslo_utils import uuidutils
import six

from ironic.common import exception
from ironic.tests.unit.db import base
from ironic.tests.unit.db import utils as db_utils


class DbportgroupTestCase(base.DbTestCase):

    def setUp(self):
        # This method creates a portgroup for every test and
        # replaces a test for creating a portgroup.
        super(DbportgroupTestCase, self).setUp()
        self.node = db_utils.create_test_node()
        self.portgroup = db_utils.create_test_portgroup(node_id=self.node.id)

    def _create_test_portgroup_range(self, count):
        """Create the specified number of test portgroup entries in DB

        It uses create_test_portgroup method. And returns List of Portgroup
        DB objects.

        :param count: Specifies the number of portgroups to be created
        :returns: List of Portgroup DB objects

        """
        uuids = []
        for i in range(1, count):
            portgroup = db_utils.create_test_portgroup(
                uuid=uuidutils.generate_uuid(),
                name='portgroup' + str(i),
                address='52:54:00:cf:2d:4%s' % i)
            uuids.append(six.text_type(portgroup.uuid))
        return uuids

    def test_get_portgroup_by_id(self):
        res = self.dbapi.get_portgroup_by_id(self.portgroup.id)
        self.assertEqual(self.portgroup.address, res.address)

    def test_get_portgroup_by_id_that_does_not_exist(self):
        self.assertRaises(exception.PortgroupNotFound,
                          self.dbapi.get_portgroup_by_id, 99)

    def test_get_portgroup_by_uuid(self):
        res = self.dbapi.get_portgroup_by_uuid(self.portgroup.uuid)
        self.assertEqual(self.portgroup.id, res.id)

    def test_get_portgroup_by_uuid_that_does_not_exist(self):
        self.assertRaises(exception.PortgroupNotFound,
                          self.dbapi.get_portgroup_by_uuid,
                          'EEEEEEEE-EEEE-EEEE-EEEE-EEEEEEEEEEEE')

    def test_get_portgroup_by_address(self):
        res = self.dbapi.get_portgroup_by_address(self.portgroup.address)
        self.assertEqual(self.portgroup.id, res.id)

    def test_get_portgroup_by_address_that_does_not_exist(self):
        self.assertRaises(exception.PortgroupNotFound,
                          self.dbapi.get_portgroup_by_address,
                          '31:31:31:31:31:31')

    def test_get_portgroup_by_name(self):
        res = self.dbapi.get_portgroup_by_name(self.portgroup.name)
        self.assertEqual(self.portgroup.id, res.id)

    def test_get_portgroup_by_name_that_does_not_exist(self):
        self.assertRaises(exception.PortgroupNotFound,
                          self.dbapi.get_portgroup_by_name, 'testfail')

    def test_get_portgroup_list(self):
        uuids = self._create_test_portgroup_range(6)
        # Also add the uuid for the portgroup created in setUp()
        uuids.append(six.text_type(self.portgroup.uuid))
        res = self.dbapi.get_portgroup_list()
        res_uuids = [r.uuid for r in res]
        six.assertCountEqual(self, uuids, res_uuids)

    def test_get_portgroup_list_sorted(self):
        uuids = self._create_test_portgroup_range(6)
        # Also add the uuid for the portgroup created in setUp()
        uuids.append(six.text_type(self.portgroup.uuid))
        res = self.dbapi.get_portgroup_list(sort_key='uuid')
        res_uuids = [r.uuid for r in res]
        self.assertEqual(sorted(uuids), res_uuids)

        self.assertRaises(exception.InvalidParameterValue,
                          self.dbapi.get_portgroup_list, sort_key='foo')

    def test_get_portgroups_by_node_id(self):
        res = self.dbapi.get_portgroups_by_node_id(self.node.id)
        self.assertEqual(self.portgroup.address, res[0].address)

    def test_get_portgroups_by_node_id_that_does_not_exist(self):
        self.assertEqual([], self.dbapi.get_portgroups_by_node_id(99))

    def test_destroy_portgroup(self):
        self.dbapi.destroy_portgroup(self.portgroup.id)
        self.assertRaises(exception.PortgroupNotFound,
                          self.dbapi.get_portgroup_by_id, self.portgroup.id)

    def test_destroy_portgroup_that_does_not_exist(self):
        self.assertRaises(exception.PortgroupNotFound,
                          self.dbapi.destroy_portgroup, 99)

    def test_destroy_portgroup_uuid(self):
        self.dbapi.destroy_portgroup(self.portgroup.uuid)

    def test_destroy_portgroup_not_empty(self):
        self.port = db_utils.create_test_port(node_id=self.node.id,
                                              portgroup_id=self.portgroup.id)
        self.assertRaises(exception.PortgroupNotEmpty,
                          self.dbapi.destroy_portgroup, self.portgroup.id)

    def test_update_portgroup(self):
        old_address = self.portgroup.address
        new_address = 'ff:ee:dd:cc:bb:aa'
        self.assertNotEqual(old_address, new_address)
        old_name = self.portgroup.name
        new_name = 'newname'
        self.assertNotEqual(old_name, new_name)
        res = self.dbapi.update_portgroup(self.portgroup.id,
                                          {'address': new_address,
                                           'name': new_name})
        self.assertEqual(new_address, res.address)
        self.assertEqual(new_name, res.name)

    def test_update_portgroup_uuid(self):
        self.assertRaises(exception.InvalidParameterValue,
                          self.dbapi.update_portgroup, self.portgroup.id,
                          {'uuid': ''})

    def test_update_portgroup_not_found(self):
        id_2 = 99
        self.assertNotEqual(self.portgroup.id, id_2)
        address2 = 'aa:bb:cc:11:22:33'
        self.assertRaises(exception.PortgroupNotFound,
                          self.dbapi.update_portgroup, id_2,
                          {'address': address2})

    def test_update_portgroup_duplicated_address(self):
        address1 = self.portgroup.address
        address2 = 'aa:bb:cc:11:22:33'
        portgroup2 = db_utils.create_test_portgroup(
            uuid=uuidutils.generate_uuid(),
            node_id=self.node.id,
            name=str(uuidutils.generate_uuid()),
            address=address2)
        self.assertRaises(exception.PortgroupMACAlreadyExists,
                          self.dbapi.update_portgroup, portgroup2.id,
                          {'address': address1})

    def test_update_portgroup_duplicated_name(self):
        name1 = self.portgroup.name
        portgroup2 = db_utils.create_test_portgroup(
            uuid=uuidutils.generate_uuid(), node_id=self.node.id,
            name='name2', address='aa:bb:cc:11:22:55')
        self.assertRaises(exception.PortgroupDuplicateName,
                          self.dbapi.update_portgroup, portgroup2.id,
                          {'name': name1})

    def test_create_portgroup_duplicated_name(self):
        self.assertRaises(exception.PortgroupDuplicateName,
                          db_utils.create_test_portgroup,
                          uuid=uuidutils.generate_uuid(),
                          node_id=self.node.id,
                          name=self.portgroup.name,
                          address='aa:bb:cc:11:22:55')

    def test_create_portgroup_duplicated_address(self):
        self.assertRaises(exception.PortgroupMACAlreadyExists,
                          db_utils.create_test_portgroup,
                          uuid=uuidutils.generate_uuid(),
                          node_id=self.node.id,
                          name=str(uuidutils.generate_uuid()),
                          address=self.portgroup.address)

    def test_create_portgroup_duplicated_uuid(self):
        self.assertRaises(exception.PortgroupAlreadyExists,
                          db_utils.create_test_portgroup,
                          uuid=self.portgroup.uuid,
                          node_id=self.node.id,
                          name=str(uuidutils.generate_uuid()),
                          address='aa:bb:cc:33:11:22')
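
# Editor's note: a minimal, hypothetical usage sketch, not part of the
# original tree. It shows how the db_utils helpers above are combined: the
# uuid/name/address overrides matter because, as the duplicate-* tests
# assert, each of those columns must be unique across portgroups.


def _example_make_second_portgroup(node_id):
    """Hypothetical helper; never called by the tests."""
    return db_utils.create_test_portgroup(
        uuid=uuidutils.generate_uuid(),   # unique uuid
        node_id=node_id,
        name='example-portgroup-2',       # unique name
        address='aa:bb:cc:dd:ee:01')      # unique MAC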
ironic-5.1.0/ironic/tests/unit/db/test_nodes.py

# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Tests for manipulating Nodes via the DB API"""

import datetime

import mock
from oslo_utils import timeutils
from oslo_utils import uuidutils
import six

from ironic.common import exception
from ironic.common import states
from ironic.db.sqlalchemy import api
from ironic.tests.unit.db import base
from ironic.tests.unit.db import utils


class DbNodeTestCase(base.DbTestCase):

    def test_create_node(self):
        utils.create_test_node()

    @mock.patch.object(api.LOG, 'warning', autospec=True)
    def test_create_node_with_tags(self, mock_log):
        utils.create_test_node(tags=['tag1', 'tag2'])
        self.assertTrue(mock_log.called)

    def test_create_node_already_exists(self):
        utils.create_test_node()
        self.assertRaises(exception.NodeAlreadyExists,
                          utils.create_test_node)

    def test_create_node_instance_already_associated(self):
        instance = uuidutils.generate_uuid()
        utils.create_test_node(uuid=uuidutils.generate_uuid(),
                               instance_uuid=instance)
        self.assertRaises(exception.InstanceAssociated,
                          utils.create_test_node,
                          uuid=uuidutils.generate_uuid(),
                          instance_uuid=instance)

    def test_create_node_name_duplicate(self):
        node = utils.create_test_node(name='spam')
        self.assertRaises(exception.DuplicateName,
                          utils.create_test_node,
                          name=node.name)

    def test_get_node_by_id(self):
        node = utils.create_test_node()
        res = self.dbapi.get_node_by_id(node.id)
        self.assertEqual(node.id, res.id)
        self.assertEqual(node.uuid, res.uuid)

    def test_get_node_by_uuid(self):
        node = utils.create_test_node()
        res = self.dbapi.get_node_by_uuid(node.uuid)
        self.assertEqual(node.id, res.id)
        self.assertEqual(node.uuid, res.uuid)

    def test_get_node_by_name(self):
        node = utils.create_test_node()
        res = self.dbapi.get_node_by_name(node.name)
        self.assertEqual(node.id, res.id)
        self.assertEqual(node.uuid, res.uuid)
        self.assertEqual(node.name, res.name)

    def test_get_node_that_does_not_exist(self):
        self.assertRaises(exception.NodeNotFound,
                          self.dbapi.get_node_by_id, 99)
        self.assertRaises(exception.NodeNotFound,
                          self.dbapi.get_node_by_uuid,
                          '12345678-9999-0000-aaaa-123456789012')
        self.assertRaises(exception.NodeNotFound,
                          self.dbapi.get_node_by_name,
                          'spam-eggs-bacon-spam')

    def test_get_nodeinfo_list_defaults(self):
        node_id_list = []
        for i in range(1, 6):
            node = utils.create_test_node(uuid=uuidutils.generate_uuid())
            node_id_list.append(node.id)
        res = [i[0] for i in self.dbapi.get_nodeinfo_list()]
        self.assertEqual(sorted(res), sorted(node_id_list))

    def test_get_nodeinfo_list_with_cols(self):
        uuids = {}
        extras = {}
        for i in range(1, 6):
            uuid = uuidutils.generate_uuid()
            extra = {'foo': i}
            node = utils.create_test_node(extra=extra, uuid=uuid)
            uuids[node.id] = uuid
            extras[node.id] = extra
        res = self.dbapi.get_nodeinfo_list(columns=['id', 'extra', 'uuid'])
        self.assertEqual(extras, dict((r[0], r[1]) for r in res))
        self.assertEqual(uuids, dict((r[0], r[2]) for r in res))

    def test_get_nodeinfo_list_with_filters(self):
        node1 = utils.create_test_node(
            driver='driver-one',
            instance_uuid=uuidutils.generate_uuid(),
            reservation='fake-host',
            uuid=uuidutils.generate_uuid())
        node2 = utils.create_test_node(
            driver='driver-two',
            uuid=uuidutils.generate_uuid(),
            maintenance=True)
        node3 = utils.create_test_node(
            driver='driver-one',
            uuid=uuidutils.generate_uuid(),
            reservation='another-fake-host')

        res = self.dbapi.get_nodeinfo_list(filters={'driver': 'driver-one'})
        self.assertEqual(sorted([node1.id, node3.id]),
                         sorted([r[0] for r in res]))

        res = self.dbapi.get_nodeinfo_list(filters={'driver': 'bad-driver'})
        self.assertEqual([], [r[0] for r in res])

        res = self.dbapi.get_nodeinfo_list(filters={'associated': True})
        self.assertEqual([node1.id], [r[0] for r in res])

        res = self.dbapi.get_nodeinfo_list(filters={'associated': False})
        self.assertEqual(sorted([node2.id, node3.id]),
                         sorted([r[0] for r in res]))

        res = self.dbapi.get_nodeinfo_list(filters={'reserved': True})
        self.assertEqual(sorted([node1.id, node3.id]),
                         sorted([r[0] for r in res]))

        res = self.dbapi.get_nodeinfo_list(filters={'reserved': False})
        self.assertEqual([node2.id], [r[0] for r in res])

        res = self.dbapi.get_node_list(filters={'maintenance': True})
        self.assertEqual([node2.id], [r.id for r in res])

        res = self.dbapi.get_node_list(filters={'maintenance': False})
        self.assertEqual(sorted([node1.id, node3.id]),
                         sorted([r.id for r in res]))

        res = self.dbapi.get_node_list(
            filters={'reserved_by_any_of': ['fake-host',
                                            'another-fake-host']})
        self.assertEqual(sorted([node1.id, node3.id]),
                         sorted([r.id for r in res]))

    @mock.patch.object(timeutils, 'utcnow', autospec=True)
    def test_get_nodeinfo_list_provision(self, mock_utcnow):
        past = datetime.datetime(2000, 1, 1, 0, 0)
        next = past + datetime.timedelta(minutes=8)
        present = past + datetime.timedelta(minutes=10)
        mock_utcnow.return_value = past

        # node with provision_updated timeout
        node1 = utils.create_test_node(uuid=uuidutils.generate_uuid(),
                                       provision_updated_at=past)
        # node with None in provision_updated_at
        node2 = utils.create_test_node(uuid=uuidutils.generate_uuid(),
                                       provision_state=states.DEPLOYWAIT)
        # node without timeout
        utils.create_test_node(uuid=uuidutils.generate_uuid(),
                               provision_updated_at=next)

        mock_utcnow.return_value = present
        res = self.dbapi.get_nodeinfo_list(
            filters={'provisioned_before': 300})
        self.assertEqual([node1.id], [r[0] for r in res])

        res = self.dbapi.get_nodeinfo_list(filters={'provision_state':
                                                    states.DEPLOYWAIT})
        self.assertEqual([node2.id], [r[0] for r in res])

    @mock.patch.object(timeutils, 'utcnow', autospec=True)
    def test_get_nodeinfo_list_inspection(self, mock_utcnow):
        past = datetime.datetime(2000, 1, 1, 0, 0)
        next = past + datetime.timedelta(minutes=8)
        present = past + datetime.timedelta(minutes=10)
        mock_utcnow.return_value = past

        # node with provision_updated timeout
        node1 = utils.create_test_node(uuid=uuidutils.generate_uuid(),
                                       inspection_started_at=past)
        # node with None in provision_updated_at
        node2 = utils.create_test_node(uuid=uuidutils.generate_uuid(),
                                       provision_state=states.INSPECTING)
        # node without timeout
        utils.create_test_node(uuid=uuidutils.generate_uuid(),
                               inspection_started_at=next)

        mock_utcnow.return_value = present
        res = self.dbapi.get_nodeinfo_list(
            filters={'inspection_started_before': 300})
        self.assertEqual([node1.id], [r[0] for r in res])

        res = self.dbapi.get_nodeinfo_list(filters={'provision_state':
                                                    states.INSPECTING})
        self.assertEqual([node2.id], [r[0] for r in res])
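    # Editor's note: the helper below is an illustrative sketch added by the
    # editor, not part of the original suite. get_nodeinfo_list() returns
    # tuples of the requested columns (index 0 is 'id' by default, as the
    # tests above rely on), while get_node_list() returns full node rows;
    # both accept the same style of `filters` dict.
    def _example_find_stuck_deployments(self):
        """Hypothetical helper; never called by the tests."""
        rows = self.dbapi.get_nodeinfo_list(
            filters={'provision_state': states.DEPLOYWAIT,
                     'provisioned_before': 300})
        return [r[0] for r in rows]
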
    def test_get_node_list(self):
        uuids = []
        for i in range(1, 6):
            node = utils.create_test_node(uuid=uuidutils.generate_uuid())
            uuids.append(six.text_type(node['uuid']))
        res = self.dbapi.get_node_list()
        res_uuids = [r.uuid for r in res]
        six.assertCountEqual(self, uuids, res_uuids)

    def test_get_node_list_with_filters(self):
        ch1 = utils.create_test_chassis(uuid=uuidutils.generate_uuid())
        ch2 = utils.create_test_chassis(uuid=uuidutils.generate_uuid())

        node1 = utils.create_test_node(
            driver='driver-one',
            instance_uuid=uuidutils.generate_uuid(),
            reservation='fake-host',
            uuid=uuidutils.generate_uuid(),
            chassis_id=ch1['id'])
        node2 = utils.create_test_node(
            driver='driver-two',
            uuid=uuidutils.generate_uuid(),
            chassis_id=ch2['id'],
            maintenance=True)

        res = self.dbapi.get_node_list(filters={'chassis_uuid': ch1['uuid']})
        self.assertEqual([node1.id], [r.id for r in res])

        res = self.dbapi.get_node_list(filters={'chassis_uuid': ch2['uuid']})
        self.assertEqual([node2.id], [r.id for r in res])

        res = self.dbapi.get_node_list(filters={'driver': 'driver-one'})
        self.assertEqual([node1.id], [r.id for r in res])

        res = self.dbapi.get_node_list(filters={'driver': 'bad-driver'})
        self.assertEqual([], [r.id for r in res])

        res = self.dbapi.get_node_list(filters={'associated': True})
        self.assertEqual([node1.id], [r.id for r in res])

        res = self.dbapi.get_node_list(filters={'associated': False})
        self.assertEqual([node2.id], [r.id for r in res])

        res = self.dbapi.get_node_list(filters={'reserved': True})
        self.assertEqual([node1.id], [r.id for r in res])

        res = self.dbapi.get_node_list(filters={'reserved': False})
        self.assertEqual([node2.id], [r.id for r in res])

        res = self.dbapi.get_node_list(filters={'maintenance': True})
        self.assertEqual([node2.id], [r.id for r in res])

        res = self.dbapi.get_node_list(filters={'maintenance': False})
        self.assertEqual([node1.id], [r.id for r in res])

    def test_get_node_list_chassis_not_found(self):
        self.assertRaises(exception.ChassisNotFound,
                          self.dbapi.get_node_list,
                          {'chassis_uuid': uuidutils.generate_uuid()})

    def test_get_node_by_instance(self):
        node = utils.create_test_node(
            instance_uuid='12345678-9999-0000-aaaa-123456789012')
        res = self.dbapi.get_node_by_instance(node.instance_uuid)
        self.assertEqual(node.uuid, res.uuid)

    def test_get_node_by_instance_wrong_uuid(self):
        utils.create_test_node(
            instance_uuid='12345678-9999-0000-aaaa-123456789012')
        self.assertRaises(exception.InstanceNotFound,
                          self.dbapi.get_node_by_instance,
                          '12345678-9999-0000-bbbb-123456789012')

    def test_get_node_by_instance_invalid_uuid(self):
        self.assertRaises(exception.InvalidUUID,
                          self.dbapi.get_node_by_instance,
                          'fake_uuid')

    def test_destroy_node(self):
        node = utils.create_test_node()
        self.dbapi.destroy_node(node.id)
        self.assertRaises(exception.NodeNotFound,
                          self.dbapi.get_node_by_id, node.id)

    def test_destroy_node_by_uuid(self):
        node = utils.create_test_node()
        self.dbapi.destroy_node(node.uuid)
        self.assertRaises(exception.NodeNotFound,
                          self.dbapi.get_node_by_uuid, node.uuid)

    def test_destroy_node_that_does_not_exist(self):
        self.assertRaises(exception.NodeNotFound,
                          self.dbapi.destroy_node,
                          '12345678-9999-0000-aaaa-123456789012')

    def test_ports_get_destroyed_after_destroying_a_node(self):
        node = utils.create_test_node()
        port = utils.create_test_port(node_id=node.id)

        self.dbapi.destroy_node(node.id)

        self.assertRaises(exception.PortNotFound,
                          self.dbapi.get_port_by_id, port.id)

    def test_ports_get_destroyed_after_destroying_a_node_by_uuid(self):
        node = utils.create_test_node()
        port = utils.create_test_port(node_id=node.id)

        self.dbapi.destroy_node(node.uuid)

        self.assertRaises(exception.PortNotFound,
                          self.dbapi.get_port_by_id, port.id)

    def test_tags_get_destroyed_after_destroying_a_node(self):
        node = utils.create_test_node()
        tag = utils.create_test_node_tag(node_id=node.id)

        self.assertTrue(self.dbapi.node_tag_exists(node.id, tag.tag))

        self.dbapi.destroy_node(node.id)

        self.assertFalse(self.dbapi.node_tag_exists(node.id, tag.tag))

    def test_tags_get_destroyed_after_destroying_a_node_by_uuid(self):
        node = utils.create_test_node()
        tag = utils.create_test_node_tag(node_id=node.id)

        self.assertTrue(self.dbapi.node_tag_exists(node.id, tag.tag))

        self.dbapi.destroy_node(node.uuid)

        self.assertFalse(self.dbapi.node_tag_exists(node.id, tag.tag))

    def test_update_node(self):
        node = utils.create_test_node()

        old_extra = node.extra
        new_extra = {'foo': 'bar'}
        self.assertNotEqual(old_extra, new_extra)

        res = self.dbapi.update_node(node.id, {'extra': new_extra})
        self.assertEqual(new_extra, res.extra)

    def test_update_node_not_found(self):
        node_uuid = uuidutils.generate_uuid()
        new_extra = {'foo': 'bar'}
        self.assertRaises(exception.NodeNotFound, self.dbapi.update_node,
                          node_uuid, {'extra': new_extra})

    def test_update_node_uuid(self):
        node = utils.create_test_node()
        self.assertRaises(exception.InvalidParameterValue,
                          self.dbapi.update_node, node.id,
                          {'uuid': ''})

    def test_update_node_associate_and_disassociate(self):
        node = utils.create_test_node()
        new_i_uuid = uuidutils.generate_uuid()
        res = self.dbapi.update_node(node.id, {'instance_uuid': new_i_uuid})
        self.assertEqual(new_i_uuid, res.instance_uuid)
        res = self.dbapi.update_node(node.id, {'instance_uuid': None})
        self.assertIsNone(res.instance_uuid)

    def test_update_node_already_associated(self):
        node = utils.create_test_node()
        new_i_uuid_one = uuidutils.generate_uuid()
        self.dbapi.update_node(node.id, {'instance_uuid': new_i_uuid_one})
        new_i_uuid_two = uuidutils.generate_uuid()
        self.assertRaises(exception.NodeAssociated,
                          self.dbapi.update_node,
                          node.id,
                          {'instance_uuid': new_i_uuid_two})

    def test_update_node_instance_already_associated(self):
        node1 = utils.create_test_node(uuid=uuidutils.generate_uuid())
        new_i_uuid = uuidutils.generate_uuid()
        self.dbapi.update_node(node1.id, {'instance_uuid': new_i_uuid})
        node2 = utils.create_test_node(uuid=uuidutils.generate_uuid())
        self.assertRaises(exception.InstanceAssociated,
                          self.dbapi.update_node,
                          node2.id,
                          {'instance_uuid': new_i_uuid})

    @mock.patch.object(timeutils, 'utcnow', autospec=True)
    def test_update_node_provision(self, mock_utcnow):
        mocked_time = datetime.datetime(2000, 1, 1, 0, 0)
        mock_utcnow.return_value = mocked_time
        node = utils.create_test_node()
        res = self.dbapi.update_node(node.id, {'provision_state': 'fake'})
        self.assertEqual(mocked_time,
                         timeutils.normalize_time(
                             res['provision_updated_at']))

    def test_update_node_name_duplicate(self):
        node1 = utils.create_test_node(uuid=uuidutils.generate_uuid(),
                                       name='spam')
        node2 = utils.create_test_node(uuid=uuidutils.generate_uuid())
        self.assertRaises(exception.DuplicateName,
                          self.dbapi.update_node,
                          node2.id,
                          {'name': node1.name})

    def test_update_node_no_provision(self):
        node = utils.create_test_node()
        res = self.dbapi.update_node(node.id, {'extra': {'foo': 'bar'}})
        self.assertIsNone(res['provision_updated_at'])
        self.assertIsNone(res['inspection_started_at'])

    @mock.patch.object(timeutils, 'utcnow', autospec=True)
    def test_update_node_inspection_started_at(self, mock_utcnow):
        mocked_time = datetime.datetime(2000, 1, 1, 0, 0)
        mock_utcnow.return_value = mocked_time
        node = utils.create_test_node(uuid=uuidutils.generate_uuid(),
                                      inspection_started_at=mocked_time)
        res = self.dbapi.update_node(node.id, {'provision_state': 'fake'})
        result = res['inspection_started_at']
        self.assertEqual(mocked_time,
                         timeutils.normalize_time(result))
        self.assertIsNone(res['inspection_finished_at'])

    @mock.patch.object(timeutils, 'utcnow', autospec=True)
    def test_update_node_inspection_finished_at(self, mock_utcnow):
        mocked_time = datetime.datetime(2000, 1, 1, 0, 0)
        mock_utcnow.return_value = mocked_time
        node = utils.create_test_node(uuid=uuidutils.generate_uuid(),
                                      inspection_finished_at=mocked_time)
        res = self.dbapi.update_node(node.id, {'provision_state': 'fake'})
        result = res['inspection_finished_at']
        self.assertEqual(mocked_time,
                         timeutils.normalize_time(result))
        self.assertIsNone(res['inspection_started_at'])

    def test_reserve_node(self):
        node = utils.create_test_node()
        uuid = node.uuid

        r1 = 'fake-reservation'

        # reserve the node
        self.dbapi.reserve_node(r1, uuid)

        # check reservation
        res = self.dbapi.get_node_by_uuid(uuid)
        self.assertEqual(r1, res.reservation)

    def test_release_reservation(self):
        node = utils.create_test_node()
        uuid = node.uuid

        r1 = 'fake-reservation'
        self.dbapi.reserve_node(r1, uuid)

        # release reservation
        self.dbapi.release_node(r1, uuid)
        res = self.dbapi.get_node_by_uuid(uuid)
        self.assertIsNone(res.reservation)

    def test_reservation_of_reserved_node_fails(self):
        node = utils.create_test_node()
        uuid = node.uuid

        r1 = 'fake-reservation'
        r2 = 'another-reservation'

        # reserve the node
        self.dbapi.reserve_node(r1, uuid)

        # another host fails to reserve or release
        self.assertRaises(exception.NodeLocked,
                          self.dbapi.reserve_node,
                          r2, uuid)
        self.assertRaises(exception.NodeLocked,
                          self.dbapi.release_node,
                          r2, uuid)

    def test_reservation_after_release(self):
        node = utils.create_test_node()
        uuid = node.uuid

        r1 = 'fake-reservation'
        r2 = 'another-reservation'

        self.dbapi.reserve_node(r1, uuid)
        self.dbapi.release_node(r1, uuid)

        # another host succeeds
        self.dbapi.reserve_node(r2, uuid)
        res = self.dbapi.get_node_by_uuid(uuid)
        self.assertEqual(r2, res.reservation)

    def test_reservation_in_exception_message(self):
        node = utils.create_test_node()
        uuid = node.uuid

        r = 'fake-reservation'
        self.dbapi.reserve_node(r, uuid)
        try:
            self.dbapi.reserve_node('another', uuid)
        except exception.NodeLocked as e:
            self.assertIn(r, str(e))

    def test_reservation_non_existent_node(self):
        node = utils.create_test_node()
        self.dbapi.destroy_node(node.id)

        self.assertRaises(exception.NodeNotFound,
                          self.dbapi.reserve_node, 'fake', node.id)
        self.assertRaises(exception.NodeNotFound,
                          self.dbapi.reserve_node, 'fake', node.uuid)

    def test_release_non_existent_node(self):
        node = utils.create_test_node()
        self.dbapi.destroy_node(node.id)

        self.assertRaises(exception.NodeNotFound,
                          self.dbapi.release_node, 'fake', node.id)
        self.assertRaises(exception.NodeNotFound,
                          self.dbapi.release_node, 'fake', node.uuid)

    def test_release_non_locked_node(self):
        node = utils.create_test_node()

        self.assertIsNone(node.reservation)
        self.assertRaises(exception.NodeNotLocked,
                          self.dbapi.release_node, 'fake', node.id)
        self.assertRaises(exception.NodeNotLocked,
                          self.dbapi.release_node, 'fake', node.uuid)
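
    # Editor's note: an illustrative sketch, not part of the original suite.
    # The reservation tests above imply the usual conductor locking pattern:
    # reserve the node, do exclusive work, then release it, with NodeLocked
    # signalling that another host already holds the node.
    def _example_with_node_lock(self, hostname, node_uuid, do_work):
        """Hypothetical helper; never called by the tests."""
        try:
            self.dbapi.reserve_node(hostname, node_uuid)
        except exception.NodeLocked:
            return False  # somebody else holds the node
        try:
            do_work()
        finally:
            self.dbapi.release_node(hostname, node_uuid)
        return True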
    @mock.patch.object(timeutils, 'utcnow', autospec=True)
    def test_touch_node_provisioning(self, mock_utcnow):
        test_time = datetime.datetime(2000, 1, 1, 0, 0)
        mock_utcnow.return_value = test_time
        node = utils.create_test_node()
        # assert provision_updated_at is None
        self.assertIsNone(node.provision_updated_at)

        self.dbapi.touch_node_provisioning(node.uuid)
        node = self.dbapi.get_node_by_uuid(node.uuid)
        # assert provision_updated_at has been updated
        self.assertEqual(test_time,
                         timeutils.normalize_time(node.provision_updated_at))

    def test_touch_node_provisioning_not_found(self):
        self.assertRaises(
            exception.NodeNotFound,
            self.dbapi.touch_node_provisioning, uuidutils.generate_uuid())

ironic-5.1.0/ironic/tests/unit/db/__init__.py

# Copyright (c) 2012 NTT DOCOMO, INC.
# All Rights Reserved.
#
# flake8: noqa
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from ironic.tests.unit.db import *

ironic-5.1.0/ironic/tests/unit/db/sqlalchemy/

ironic-5.1.0/ironic/tests/unit/db/sqlalchemy/test_types.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Tests for custom SQLAlchemy types via Ironic DB."""

from oslo_db import exception as db_exc
from oslo_utils import uuidutils

import ironic.db.sqlalchemy.api as sa_api
from ironic.db.sqlalchemy import models
from ironic.tests.unit.db import base


class SqlAlchemyCustomTypesTestCase(base.DbTestCase):

    # NOTE(max_lobur): Since it's not straightforward to check this in
    # isolation these tests use existing db models.

    def test_JSONEncodedDict_default_value(self):
        # Create chassis w/o extra specified.
        ch1_id = uuidutils.generate_uuid()
        self.dbapi.create_chassis({'uuid': ch1_id})
        # Get chassis manually to test SA types in isolation from UOM.
        ch1 = sa_api.model_query(models.Chassis).filter_by(uuid=ch1_id).one()
        self.assertEqual({}, ch1.extra)

        # Create chassis with extra specified.
        ch2_id = uuidutils.generate_uuid()
        extra = {'foo1': 'test', 'foo2': 'other extra'}
        self.dbapi.create_chassis({'uuid': ch2_id, 'extra': extra})
        # Get chassis manually to test SA types in isolation from UOM.
        ch2 = sa_api.model_query(models.Chassis).filter_by(uuid=ch2_id).one()
        self.assertEqual(extra, ch2.extra)

    def test_JSONEncodedDict_type_check(self):
        self.assertRaises(db_exc.DBError,
                          self.dbapi.create_chassis,
                          {'extra': ['this is not a dict']})

    def test_JSONEncodedList_default_value(self):
        # Create conductor w/o extra specified.
        cdr1_id = 321321
        self.dbapi.register_conductor({'hostname': 'test_host1',
                                       'drivers': None,
                                       'id': cdr1_id})
        # Get conductor manually to test SA types in isolation from UOM.
        cdr1 = (sa_api
                .model_query(models.Conductor)
                .filter_by(id=cdr1_id)
                .one())
        self.assertEqual([], cdr1.drivers)

        # Create conductor with drivers specified.
        cdr2_id = 623623
        drivers = ['foo1', 'other driver']
        self.dbapi.register_conductor({'hostname': 'test_host2',
                                       'drivers': drivers,
                                       'id': cdr2_id})
        # Get conductor manually to test SA types in isolation from UOM.
        cdr2 = (sa_api
                .model_query(models.Conductor)
                .filter_by(id=cdr2_id)
                .one())
        self.assertEqual(drivers, cdr2.drivers)

    def test_JSONEncodedList_type_check(self):
        self.assertRaises(db_exc.DBError,
                          self.dbapi.register_conductor,
                          {'hostname': 'test_host3',
                           'drivers': {'this is not a list': 'test'}})
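
# Editor's note: a short, hypothetical sketch (not in the original tree) of
# what the tests above verify: the custom SQLAlchemy columns default to
# {} / [] when NULL is stored, and reject values of the wrong container
# type with a DBError raised at the oslo.db layer. The register_conductor()
# call without an explicit 'id' is an assumption for illustration.


def _example_jsonencoded_roundtrip(dbapi):
    """Hypothetical helper; never called by the tests."""
    chassis = dbapi.create_chassis({'uuid': uuidutils.generate_uuid()})
    assert chassis.extra == {}          # JSONEncodedDict default
    conductor = dbapi.register_conductor({'hostname': 'example-host',
                                          'drivers': None})
    assert conductor.drivers == []      # JSONEncodedList default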
ironic-5.1.0/ironic/tests/unit/db/sqlalchemy/test_migrations.py

# Copyright 2010-2011 OpenStack Foundation
# Copyright 2012-2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Tests for database migrations. There are "opportunistic" tests for both
mysql and postgresql in here, which allows testing against these databases
in a properly configured unit test environment.

For the opportunistic testing you need to set up a db named
'openstack_citest' with user 'openstack_citest' and password
'openstack_citest' on localhost. The test will then use that db and u/p
combo to run the tests.

For postgres on Ubuntu this can be done with the following commands:

::

 sudo -u postgres psql
 postgres=# create user openstack_citest with createdb login password
      'openstack_citest';
 postgres=# create database openstack_citest with owner openstack_citest;

"""

import contextlib

from alembic import script
import mock
from oslo_db import exception as db_exc
from oslo_db.sqlalchemy import enginefacade
from oslo_db.sqlalchemy import test_base
from oslo_db.sqlalchemy import test_migrations
from oslo_db.sqlalchemy import utils as db_utils
from oslo_log import log as logging
from oslo_utils import uuidutils
import sqlalchemy
import sqlalchemy.exc

from ironic.common.i18n import _LE
from ironic.db.sqlalchemy import migration
from ironic.db.sqlalchemy import models
from ironic.tests import base

LOG = logging.getLogger(__name__)


def _get_connect_string(backend, user, passwd, database):
    """Get database connection

    Try to get a connection with a very specific set of values, if we get
    these then we'll run the tests, otherwise they are skipped
    """
    if backend == "postgres":
        backend = "postgresql+psycopg2"
    elif backend == "mysql":
        backend = "mysql+mysqldb"
    else:
        raise Exception("Unrecognized backend: '%s'" % backend)

    return ("%(backend)s://%(user)s:%(passwd)s@localhost/%(database)s"
            % {'backend': backend, 'user': user, 'passwd': passwd,
               'database': database})


def _is_backend_avail(backend, user, passwd, database):
    try:
        connect_uri = _get_connect_string(backend, user, passwd, database)
        engine = sqlalchemy.create_engine(connect_uri)
        connection = engine.connect()
    except Exception:
        # intentionally catch all to handle exceptions even if we don't
        # have any backend code loaded.
        return False
    else:
        connection.close()
        engine.dispose()
        return True


@contextlib.contextmanager
def patch_with_engine(engine):
    with mock.patch.object(enginefacade.get_legacy_facade(),
                           'get_engine') as patch_engine:
        patch_engine.return_value = engine
        yield


class WalkVersionsMixin(object):
    def _walk_versions(self, engine=None, alembic_cfg=None):
        # Determine latest version script from the repo, then
        # upgrade from 1 through to the latest, with no data
        # in the databases. This just checks that the schema itself
        # upgrades successfully.

        # Place the database under version control
        with patch_with_engine(engine):
            script_directory = script.ScriptDirectory.from_config(alembic_cfg)
            self.assertIsNone(self.migration_api.version(alembic_cfg))
            versions = [ver for ver in script_directory.walk_revisions()]
            for version in reversed(versions):
                self._migrate_up(engine, alembic_cfg,
                                 version.revision, with_data=True)

    def _migrate_up(self, engine, config, version, with_data=False):
        """migrate up to a new version of the db.

        We allow for data insertion and post checks at every
        migration version with special _pre_upgrade_### and
        _check_### functions in the main test.
        """
        # NOTE(sdague): try block is here because it's impossible to debug
        # where a failed data migration happens otherwise
        try:
            if with_data:
                data = None
                pre_upgrade = getattr(
                    self, "_pre_upgrade_%s" % version, None)
                if pre_upgrade:
                    data = pre_upgrade(engine)

            self.migration_api.upgrade(version, config=config)
            self.assertEqual(version, self.migration_api.version(config))
            if with_data:
                check = getattr(self, "_check_%s" % version, None)
                if check:
                    check(engine, data)
        except Exception:
            LOG.error(_LE("Failed to migrate to version %(version)s on "
                          "engine %(engine)s"),
                      {'version': version, 'engine': engine})
            raise


class TestWalkVersions(base.TestCase, WalkVersionsMixin):
    def setUp(self):
        super(TestWalkVersions, self).setUp()
        self.migration_api = mock.MagicMock()
        self.engine = mock.MagicMock()
        self.config = mock.MagicMock()
        self.versions = [mock.Mock(revision='2b2'),
                         mock.Mock(revision='1a1')]

    def test_migrate_up(self):
        self.migration_api.version.return_value = 'dsa123'

        self._migrate_up(self.engine, self.config, 'dsa123')

        self.migration_api.upgrade.assert_called_with('dsa123',
                                                      config=self.config)
        self.migration_api.version.assert_called_with(self.config)

    def test_migrate_up_with_data(self):
        test_value = {"a": 1, "b": 2}
        self.migration_api.version.return_value = '141'
        self._pre_upgrade_141 = mock.MagicMock()
        self._pre_upgrade_141.return_value = test_value
        self._check_141 = mock.MagicMock()

        self._migrate_up(self.engine, self.config, '141', True)

        self._pre_upgrade_141.assert_called_with(self.engine)
        self._check_141.assert_called_with(self.engine, test_value)

    @mock.patch.object(script, 'ScriptDirectory')
    @mock.patch.object(WalkVersionsMixin, '_migrate_up')
    def test_walk_versions_all_default(self, _migrate_up, script_directory):
        fc = script_directory.from_config()
        fc.walk_revisions.return_value = self.versions
        self.migration_api.version.return_value = None

        self._walk_versions(self.engine, self.config)

        self.migration_api.version.assert_called_with(self.config)

        upgraded = [mock.call(self.engine, self.config, v.revision,
                              with_data=True)
                    for v in reversed(self.versions)]
        self.assertEqual(self._migrate_up.call_args_list, upgraded)
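
    # Editor's note: an illustrative sketch, not part of the original suite,
    # of the per-revision hook convention used by _migrate_up() above: for
    # an alembic revision '1234abcd' (a made-up id for illustration),
    # optional methods named _pre_upgrade_1234abcd and _check_1234abcd seed
    # data before the upgrade and verify it afterwards.
    def _pre_upgrade_1234abcd(self, engine):
        """Hypothetical hook; never triggered by the mocked revisions."""
        return {'seeded': True}

    def _check_1234abcd(self, engine, data):
        """Hypothetical hook; never triggered by the mocked revisions."""
        self.assertEqual({'seeded': True}, data)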
    @mock.patch.object(script, 'ScriptDirectory')
    @mock.patch.object(WalkVersionsMixin, '_migrate_up')
    def test_walk_versions_all_false(self, _migrate_up, script_directory):
        fc = script_directory.from_config()
        fc.walk_revisions.return_value = self.versions
        self.migration_api.version.return_value = None

        self._walk_versions(self.engine, self.config)

        upgraded = [mock.call(self.engine, self.config, v.revision,
                              with_data=True)
                    for v in reversed(self.versions)]
        self.assertEqual(upgraded, self._migrate_up.call_args_list)


class MigrationCheckersMixin(object):

    def setUp(self):
        super(MigrationCheckersMixin, self).setUp()
        self.config = migration._alembic_config()
        self.migration_api = migration

    def test_walk_versions(self):
        self._walk_versions(self.engine, self.config)

    def test_connect_fail(self):
        """Test that we can trigger a database connection failure

        Test that we can fail gracefully to ensure we don't break people
        without specific database backend
        """
        if _is_backend_avail(self.FIXTURE.DRIVER, "openstack_cifail",
                             self.FIXTURE.USERNAME, self.FIXTURE.DBNAME):
            self.fail("Shouldn't have connected")

    def _check_21b331f883ef(self, engine, data):
        nodes = db_utils.get_table(engine, 'nodes')
        col_names = [column.name for column in nodes.c]
        self.assertIn('provision_updated_at', col_names)
        self.assertIsInstance(nodes.c.provision_updated_at.type,
                              sqlalchemy.types.DateTime)

    def _check_3cb628139ea4(self, engine, data):
        nodes = db_utils.get_table(engine, 'nodes')
        col_names = [column.name for column in nodes.c]

        self.assertIn('console_enabled', col_names)
        # in some backends bool type is integer
        self.assertTrue(isinstance(nodes.c.console_enabled.type,
                                   sqlalchemy.types.Boolean) or
                        isinstance(nodes.c.console_enabled.type,
                                   sqlalchemy.types.Integer))

    def _check_31baaf680d2b(self, engine, data):
        nodes = db_utils.get_table(engine, 'nodes')
        col_names = [column.name for column in nodes.c]
        self.assertIn('instance_info', col_names)
        self.assertIsInstance(nodes.c.instance_info.type,
                              sqlalchemy.types.TEXT)

    def _check_3bea56f25597(self, engine, data):
        nodes = db_utils.get_table(engine, 'nodes')
        instance_uuid = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'
        data = {'driver': 'fake',
                'uuid': uuidutils.generate_uuid(),
                'instance_uuid': instance_uuid}
        nodes.insert().values(data).execute()
        data['uuid'] = uuidutils.generate_uuid()
        self.assertRaises(db_exc.DBDuplicateEntry,
                          nodes.insert().execute, data)

    def _check_242cc6a923b3(self, engine, data):
        nodes = db_utils.get_table(engine, 'nodes')
        col_names = [column.name for column in nodes.c]
        self.assertIn('maintenance_reason', col_names)
        self.assertIsInstance(nodes.c.maintenance_reason.type,
                              sqlalchemy.types.String)

    def _pre_upgrade_5674c57409b9(self, engine):
        # add some nodes in various states so we can assert that "None"
        # was replaced by "available", and nothing else changed.
        nodes = db_utils.get_table(engine, 'nodes')
        data = [{'uuid': uuidutils.generate_uuid(),
                 'provision_state': 'fake state'},
                {'uuid': uuidutils.generate_uuid(),
                 'provision_state': 'active'},
                {'uuid': uuidutils.generate_uuid(),
                 'provision_state': 'deleting'},
                {'uuid': uuidutils.generate_uuid(),
                 'provision_state': None}]
        nodes.insert().values(data).execute()
        return data

    def _check_5674c57409b9(self, engine, data):
        nodes = db_utils.get_table(engine, 'nodes')
        result = engine.execute(nodes.select())

        def _get_state(uuid):
            for row in data:
                if row['uuid'] == uuid:
                    return row['provision_state']

        for row in result:
            old = _get_state(row['uuid'])
            new = row['provision_state']
            if old is None:
                self.assertEqual('available', new)
            else:
                self.assertEqual(old, new)

    def _check_bb59b63f55a(self, engine, data):
        nodes = db_utils.get_table(engine, 'nodes')
        col_names = [column.name for column in nodes.c]
        self.assertIn('driver_internal_info', col_names)
        self.assertIsInstance(nodes.c.driver_internal_info.type,
                              sqlalchemy.types.TEXT)

    def _check_4f399b21ae71(self, engine, data):
        nodes = db_utils.get_table(engine, 'nodes')
        col_names = [column.name for column in nodes.c]
        self.assertIn('clean_step', col_names)
        self.assertIsInstance(nodes.c.clean_step.type,
                              sqlalchemy.types.String)

    def _check_789acc877671(self, engine, data):
        nodes = db_utils.get_table(engine, 'nodes')
        col_names = [column.name for column in nodes.c]
        self.assertIn('raid_config', col_names)
        self.assertIn('target_raid_config', col_names)
        self.assertIsInstance(nodes.c.raid_config.type,
                              sqlalchemy.types.String)
        self.assertIsInstance(nodes.c.target_raid_config.type,
                              sqlalchemy.types.String)

    def _check_2fb93ffd2af1(self, engine, data):
        nodes = db_utils.get_table(engine, 'nodes')
        bigstring = 'a' * 255
        uuid = uuidutils.generate_uuid()
        data = {'uuid': uuid, 'name': bigstring}
        nodes.insert().execute(data)
        node = nodes.select(nodes.c.uuid == uuid).execute().first()
        self.assertEqual(bigstring, node['name'])

    def _check_516faf1bb9b1(self, engine, data):
        nodes = db_utils.get_table(engine, 'nodes')
        bigstring = 'a' * 255
        uuid = uuidutils.generate_uuid()
        data = {'uuid': uuid, 'driver': bigstring}
        nodes.insert().execute(data)
        node = nodes.select(nodes.c.uuid == uuid).execute().first()
        self.assertEqual(bigstring, node['driver'])

    def _check_48d6c242bb9b(self, engine, data):
        node_tags = db_utils.get_table(engine, 'node_tags')
        col_names = [column.name for column in node_tags.c]
        self.assertIn('tag', col_names)
        self.assertIsInstance(node_tags.c.tag.type,
                              sqlalchemy.types.String)
        nodes = db_utils.get_table(engine, 'nodes')
        data = {'id': '123', 'name': 'node1'}
        nodes.insert().execute(data)
        data = {'node_id': '123', 'tag': 'tag1'}
        node_tags.insert().execute(data)
        tag = node_tags.select(node_tags.c.node_id == '123').execute().first()
        self.assertEqual('tag1', tag['tag'])

    def _check_5ea1b0d310e(self, engine, data):
        portgroup = db_utils.get_table(engine, 'portgroups')
        col_names = [column.name for column in portgroup.c]
        expected_names = ['created_at', 'updated_at', 'id', 'uuid', 'name',
                          'node_id', 'address', 'extra']
        self.assertEqual(sorted(expected_names), sorted(col_names))

        self.assertIsInstance(portgroup.c.created_at.type,
                              sqlalchemy.types.DateTime)
        self.assertIsInstance(portgroup.c.updated_at.type,
                              sqlalchemy.types.DateTime)
        self.assertIsInstance(portgroup.c.id.type,
                              sqlalchemy.types.Integer)
        self.assertIsInstance(portgroup.c.uuid.type,
                              sqlalchemy.types.String)
        self.assertIsInstance(portgroup.c.name.type,
                              sqlalchemy.types.String)
        self.assertIsInstance(portgroup.c.node_id.type,
                              sqlalchemy.types.Integer)
        self.assertIsInstance(portgroup.c.address.type,
                              sqlalchemy.types.String)
        self.assertIsInstance(portgroup.c.extra.type,
                              sqlalchemy.types.TEXT)

        ports = db_utils.get_table(engine, 'ports')
        col_names = [column.name for column in ports.c]
        self.assertIn('pxe_enabled', col_names)
        self.assertIn('portgroup_id', col_names)
        self.assertIn('local_link_connection', col_names)
        self.assertIsInstance(ports.c.portgroup_id.type,
                              sqlalchemy.types.Integer)
        # in some backends bool type is integer
        self.assertTrue(isinstance(ports.c.pxe_enabled.type,
                                   sqlalchemy.types.Boolean) or
                        isinstance(ports.c.pxe_enabled.type,
                                   sqlalchemy.types.Integer))

    def _pre_upgrade_f6fdb920c182(self, engine):
        # add some ports.
        ports = db_utils.get_table(engine, 'ports')
        data = [{'uuid': uuidutils.generate_uuid(), 'pxe_enabled': None},
                {'uuid': uuidutils.generate_uuid(), 'pxe_enabled': None}]
        ports.insert().values(data).execute()
        return data

    def _check_f6fdb920c182(self, engine, data):
        ports = db_utils.get_table(engine, 'ports')
        result = engine.execute(ports.select())

        def _was_inserted(uuid):
            for row in data:
                if row['uuid'] == uuid:
                    return True

        for row in result:
            if _was_inserted(row['uuid']):
                self.assertTrue(row['pxe_enabled'])

    def test_upgrade_and_version(self):
        with patch_with_engine(self.engine):
            self.migration_api.upgrade('head')
            self.assertIsNotNone(self.migration_api.version())

    def test_create_schema_and_version(self):
        with patch_with_engine(self.engine):
            self.migration_api.create_schema()
            self.assertIsNotNone(self.migration_api.version())

    def test_upgrade_and_create_schema(self):
        with patch_with_engine(self.engine):
            self.migration_api.upgrade('31baaf680d2b')
            self.assertRaises(db_exc.DbMigrationError,
                              self.migration_api.create_schema)

    def test_upgrade_twice(self):
        with patch_with_engine(self.engine):
            self.migration_api.upgrade('31baaf680d2b')
            v1 = self.migration_api.version()
            self.migration_api.upgrade('head')
            v2 = self.migration_api.version()
            self.assertNotEqual(v1, v2)


class TestMigrationsMySQL(MigrationCheckersMixin,
                          WalkVersionsMixin,
                          test_base.MySQLOpportunisticTestCase):
    pass


class TestMigrationsPostgreSQL(MigrationCheckersMixin,
                               WalkVersionsMixin,
                               test_base.PostgreSQLOpportunisticTestCase):
    pass


class ModelsMigrationSyncMixin(object):

    def get_metadata(self):
        return models.Base.metadata

    def get_engine(self):
        return self.engine

    def db_sync(self, engine):
        with patch_with_engine(engine):
            migration.upgrade('head')


class ModelsMigrationsSyncMysql(ModelsMigrationSyncMixin,
                                test_migrations.ModelsMigrationsSync,
                                test_base.MySQLOpportunisticTestCase):
    pass


class ModelsMigrationsSyncPostgres(ModelsMigrationSyncMixin,
                                   test_migrations.ModelsMigrationsSync,
                                   test_base.PostgreSQLOpportunisticTestCase):
    pass

ironic-5.1.0/ironic/tests/unit/db/sqlalchemy/__init__.py
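
# Editor's note: illustrative only, not part of the original tree. This is
# how the opportunistic tests above decide whether a real backend is
# reachable: build a URL with _get_connect_string() and probe it with
# _is_backend_avail(); when the probe fails, the backend-specific test
# classes are simply skipped by the oslo.db fixtures.
#
#     if _is_backend_avail('postgres', 'openstack_citest',
#                          'openstack_citest', 'openstack_citest'):
#         ...the PostgreSQL opportunistic tests will run...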
ironic-5.1.0/ironic/tests/unit/db/utils.py

# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Ironic test utilities."""

from oslo_utils import timeutils

from ironic.common import states
from ironic.db import api as db_api


def get_test_ipmi_info():
    return {
        "ipmi_address": "1.2.3.4",
        "ipmi_username": "admin",
        "ipmi_password": "fake"
    }


def get_test_ipmi_bridging_parameters():
    return {
        "ipmi_bridging": "dual",
        "ipmi_local_address": "0x20",
        "ipmi_transit_channel": "0",
        "ipmi_transit_address": "0x82",
        "ipmi_target_channel": "7",
        "ipmi_target_address": "0x72"
    }


def get_test_ssh_info(auth_type='password', virt_type='virsh'):
    result = {
        "ssh_address": "1.2.3.4",
        "ssh_username": "admin",
        "ssh_port": 22,
        "ssh_virt_type": virt_type,
    }
    if 'password' == auth_type:
        result['ssh_password'] = 'fake'
    elif 'file' == auth_type:
        result['ssh_key_filename'] = '/not/real/file'
    elif 'key' == auth_type:
        result['ssh_key_contents'] = '--BEGIN PRIVATE ...blah'
    elif 'too_many' == auth_type:
        result['ssh_password'] = 'fake'
        result['ssh_key_filename'] = '/not/real/file'
    else:
        # No auth details (is invalid)
        pass
    return result


def get_test_pxe_driver_info():
    return {
        "deploy_kernel": "glance://deploy_kernel_uuid",
        "deploy_ramdisk": "glance://deploy_ramdisk_uuid",
    }


def get_test_pxe_driver_internal_info():
    return {
        "is_whole_disk_image": False,
    }


def get_test_pxe_instance_info():
    return {
        "image_source": "glance://image_uuid",
        "root_gb": 100,
    }


def get_test_seamicro_info():
    return {
        "seamicro_api_endpoint": "http://1.2.3.4",
        "seamicro_username": "admin",
        "seamicro_password": "fake",
        "seamicro_server_id": "0/0",
    }


def get_test_ilo_info():
    return {
        "ilo_address": "1.2.3.4",
        "ilo_username": "admin",
        "ilo_password": "fake",
    }


def get_test_drac_info():
    return {
        "drac_host": "1.2.3.4",
        "drac_port": "443",
        "drac_path": "/wsman",
        "drac_protocol": "https",
        "drac_username": "admin",
        "drac_password": "fake",
    }


def get_test_irmc_info():
    return {
        "irmc_address": "1.2.3.4",
        "irmc_username": "admin0",
        "irmc_password": "fake0",
        "irmc_port": 80,
        "irmc_auth_method": "digest",
    }


def get_test_amt_info():
    return {
        "amt_address": "1.2.3.4",
        "amt_protocol": "http",
        "amt_username": "admin",
        "amt_password": "fake",
    }


def get_test_msftocs_info():
    return {
        "msftocs_base_url": "http://fakehost:8000",
        "msftocs_username": "admin",
        "msftocs_password": "fake",
        "msftocs_blade_id": 1,
    }


def get_test_agent_instance_info():
    return {
        'image_source': 'fake-image',
        'image_url': 'http://image',
        'image_checksum': 'checksum',
        'image_disk_format': 'qcow2',
        'image_container_format': 'bare',
    }


def get_test_agent_driver_info():
    return {
        'deploy_kernel': 'glance://deploy_kernel_uuid',
        'deploy_ramdisk': 'glance://deploy_ramdisk_uuid',
    }


def get_test_agent_driver_internal_info():
    return {
        'agent_url': 'http://127.0.0.1/foo',
        'is_whole_disk_image': True,
    }


def get_test_iboot_info():
    return {
        "iboot_address": "1.2.3.4",
        "iboot_username": "admin",
        "iboot_password": "fake",
    }


def get_test_snmp_info(**kw):
    result = {
        "snmp_driver": kw.get("snmp_driver", "teltronix"),
        "snmp_address": kw.get("snmp_address", "1.2.3.4"),
        "snmp_port": kw.get("snmp_port", "161"),
        "snmp_outlet": kw.get("snmp_outlet", "1"),
        "snmp_version": kw.get("snmp_version", "1")
    }
    if result["snmp_version"] in ("1", "2c"):
        result["snmp_community"] = kw.get("snmp_community", "public")
    elif result["snmp_version"] == "3":
        result["snmp_security"] = kw.get("snmp_security", "public")
    return result


def get_test_node(**kw):
    properties = {
        "cpu_arch": "x86_64",
        "cpus": "8",
        "local_gb": "10",
        "memory_mb": "4096",
    }
    fake_info = {"foo": "bar", "fake_password": "fakepass"}
    return {
        'id': kw.get('id', 123),
        'name': kw.get('name', None),
        'uuid': kw.get('uuid', '1be26c0b-03f2-4d2e-ae87-c02d7f33c123'),
        'chassis_id': kw.get('chassis_id', None),
        'conductor_affinity': kw.get('conductor_affinity', None),
        'power_state': kw.get('power_state', states.NOSTATE),
        'target_power_state': kw.get('target_power_state', states.NOSTATE),
        'provision_state': kw.get('provision_state', states.NOSTATE),
        'target_provision_state': kw.get('target_provision_state',
                                         states.NOSTATE),
        'provision_updated_at': kw.get('provision_updated_at'),
        'last_error': kw.get('last_error'),
        'instance_uuid': kw.get('instance_uuid'),
        'instance_info': kw.get('instance_info', fake_info),
        'driver': kw.get('driver', 'fake'),
        'driver_info': kw.get('driver_info', fake_info),
        'driver_internal_info': kw.get('driver_internal_info', fake_info),
        'clean_step': kw.get('clean_step'),
        'properties': kw.get('properties', properties),
        'reservation': kw.get('reservation'),
        'maintenance': kw.get('maintenance', False),
        'maintenance_reason': kw.get('maintenance_reason'),
        'console_enabled': kw.get('console_enabled', False),
        'extra': kw.get('extra', {}),
        'updated_at': kw.get('updated_at'),
        'created_at': kw.get('created_at'),
        'inspection_finished_at': kw.get('inspection_finished_at'),
        'inspection_started_at': kw.get('inspection_started_at'),
        'raid_config': kw.get('raid_config'),
        'target_raid_config': kw.get('target_raid_config'),
        'tags': kw.get('tags', []),
    }


def create_test_node(**kw):
    """Create test node entry in DB and return Node DB object.

    Function to be used to create test Node objects in the database.

    :param kw: kwargs with overriding values for node's attributes.
    :returns: Test Node DB object.

    """
    node = get_test_node(**kw)
    # Let DB generate ID if it isn't specified explicitly
    if 'id' not in kw:
        del node['id']
    dbapi = db_api.get_instance()
    return dbapi.create_node(node)


def get_test_port(**kw):
    return {
        'id': kw.get('id', 987),
        'uuid': kw.get('uuid', '1be26c0b-03f2-4d2e-ae87-c02d7f33c781'),
        'node_id': kw.get('node_id', 123),
        'address': kw.get('address', '52:54:00:cf:2d:31'),
        'extra': kw.get('extra', {}),
        'created_at': kw.get('created_at'),
        'updated_at': kw.get('updated_at'),
        'local_link_connection': kw.get('local_link_connection',
                                        {'switch_id': '0a:1b:2c:3d:4e:5f',
                                         'port_id': 'Ethernet3/1',
                                         'switch_info': 'switch1'}),
        'portgroup_id': kw.get('portgroup_id'),
        'pxe_enabled': kw.get('pxe_enabled', True),
    }


def create_test_port(**kw):
    """Create test port entry in DB and return Port DB object.

    Function to be used to create test Port objects in the database.

    :param kw: kwargs with overriding values for port's attributes.
    :returns: Test Port DB object.

    """
    port = get_test_port(**kw)
    # Let DB generate ID if it isn't specified explicitly
    if 'id' not in kw:
        del port['id']
    dbapi = db_api.get_instance()
    return dbapi.create_port(port)
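
# Editor's note: an illustrative sketch, not part of the original module,
# showing the get_test_*/create_test_* split used throughout this file:
# get_test_*() only builds a plain dict of defaults merged with overrides,
# while create_test_*() additionally persists it via the DB API.


def _example_node_with_port():
    """Hypothetical helper; never called by the tests."""
    node = create_test_node(driver='fake', name='example-node')
    # node_id must reference the node created above; the MAC override keeps
    # the port unique if the default address is already taken.
    return create_test_port(node_id=node.id, address='52:54:00:00:00:01')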
""" chassis = get_test_chassis(**kw) # Let DB generate ID if it isn't specified explicitly if 'id' not in kw: del chassis['id'] dbapi = db_api.get_instance() return dbapi.create_chassis(chassis) def get_test_conductor(**kw): return { 'id': kw.get('id', 6), 'hostname': kw.get('hostname', 'test-conductor-node'), 'drivers': kw.get('drivers', ['fake-driver', 'null-driver']), 'created_at': kw.get('created_at', timeutils.utcnow()), 'updated_at': kw.get('updated_at', timeutils.utcnow()), } def get_test_ucs_info(): return { "ucs_username": "admin", "ucs_password": "password", "ucs_service_profile": "org-root/ls-devstack", "ucs_address": "ucs-b", } def get_test_cimc_info(): return { "cimc_username": "admin", "cimc_password": "password", "cimc_address": "1.2.3.4", } def get_test_oneview_properties(): return { "cpu_arch": "x86_64", "cpus": "8", "local_gb": "10", "memory_mb": "4096", "capabilities": ("server_hardware_type_uri:fake_sht_uri," "enclosure_group_uri:fake_eg_uri," "server_profile_template_uri:fake_spt_uri"), } def get_test_oneview_driver_info(): return { 'server_hardware_uri': 'fake_sh_uri', } def get_test_portgroup(**kw): return { 'id': kw.get('id', 654), 'uuid': kw.get('uuid', '6eb02b44-18a3-4659-8c0b-8d2802581ae4'), 'name': kw.get('name', 'fooname'), 'node_id': kw.get('node_id', 123), 'address': kw.get('address', '52:54:00:cf:2d:31'), 'extra': kw.get('extra', {}), 'created_at': kw.get('created_at'), 'updated_at': kw.get('updated_at'), } def create_test_portgroup(**kw): """Create test portgroup entry in DB and return Portgroup DB object. Function to be used to create test Portgroup objects in the database. :param kw: kwargs with overriding values for port's attributes. :returns: Test Portgroup DB object. """ portgroup = get_test_portgroup(**kw) # Let DB generate ID if it isn't specified explicitly if 'id' not in kw: del portgroup['id'] dbapi = db_api.get_instance() return dbapi.create_portgroup(portgroup) def get_test_node_tag(**kw): return { "tag": kw.get("tag", "tag1"), "node_id": kw.get("node_id", "123"), 'created_at': kw.get('created_at'), 'updated_at': kw.get('updated_at'), } def create_test_node_tag(**kw): """Create test node tag entry in DB and return NodeTag DB object. Function to be used to create test NodeTag objects in the database. :param kw: kwargs with overriding values for tag's attributes. :returns: Test NodeTag DB object. """ tag = get_test_node_tag(**kw) dbapi = db_api.get_instance() return dbapi.add_node_tag(tag['node_id'], tag['tag']) ironic-5.1.0/ironic/conductor/0000775000567000056710000000000012674513633017436 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/conductor/base_manager.py0000664000567000056710000004054112674513466022424 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Base conductor manager functionality.""" import inspect import threading import futurist from futurist import periodics from futurist import rejection from oslo_config import cfg from oslo_context import context as ironic_context from oslo_db import exception as db_exception from oslo_log import log from oslo_utils import excutils from ironic.common import driver_factory from ironic.common import exception from ironic.common import hash_ring as hash from ironic.common.i18n import _ from ironic.common.i18n import _LC from ironic.common.i18n import _LE from ironic.common.i18n import _LI from ironic.common.i18n import _LW from ironic.common import rpc from ironic.common import states from ironic.conductor import task_manager from ironic.db import api as dbapi from ironic import objects conductor_opts = [ cfg.IntOpt('workers_pool_size', default=100, min=3, help=_('The size of the workers greenthread pool. ' 'Note that 2 threads will be reserved by the conductor ' 'itself for handling heart beats and periodic tasks.')), cfg.IntOpt('heartbeat_interval', default=10, help=_('Seconds between conductor heart beats.')), ] CONF = cfg.CONF CONF.register_opts(conductor_opts, 'conductor') LOG = log.getLogger(__name__) class BaseConductorManager(object): def __init__(self, host, topic): super(BaseConductorManager, self).__init__() if not host: host = CONF.host self.host = host self.topic = topic self.notifier = rpc.get_notifier() self._started = False def init_host(self, admin_context=None): """Initialize the conductor host. :param admin_context: the admin context to pass to periodic tasks. :raises: RuntimeError when conductor is already running. :raises: NoDriversLoaded when no drivers are enabled on the conductor. :raises: DriverNotFound if a driver is enabled that does not exist. :raises: DriverLoadError if an enabled driver cannot be loaded. """ if self._started: raise RuntimeError(_('Attempt to start an already running ' 'conductor manager')) self.dbapi = dbapi.get_instance() self._keepalive_evt = threading.Event() """Event for the keepalive thread.""" # TODO(dtantsur): make the threshold configurable? rejection_func = rejection.reject_when_reached( CONF.conductor.workers_pool_size) self._executor = futurist.GreenThreadPoolExecutor( max_workers=CONF.conductor.workers_pool_size, check_and_reject=rejection_func) """Executor for performing tasks async.""" self.ring_manager = hash.HashRingManager() """Consistent hash ring which maps drivers to conductors.""" # NOTE(deva): this call may raise DriverLoadError or DriverNotFound drivers = driver_factory.drivers() if not drivers: msg = _LE("Conductor %s cannot be started because no drivers " "were loaded. This could be because no drivers were " "specified in 'enabled_drivers' config option.") LOG.error(msg, self.host) raise exception.NoDriversLoaded(conductor=self.host) # NOTE(jroll) this is passed to the dbapi, which requires a list, not # a generator (which keys() returns in py3) driver_names = list(drivers) # Collect driver-specific periodic tasks. # Conductor periodic tasks accept context argument, driver periodic # tasks accept this manager and context. We have to ensure that the # same driver interface class is not traversed twice, otherwise # we'll have several instances of the same task. 
LOG.debug('Collecting periodic tasks') self._periodic_task_callables = [] periodic_task_classes = set() self._collect_periodic_tasks(self, (admin_context,)) for driver_obj in drivers.values(): self._collect_periodic_tasks(driver_obj, (self, admin_context)) for iface_name in driver_obj.all_interfaces: iface = getattr(driver_obj, iface_name, None) if iface and iface.__class__ not in periodic_task_classes: self._collect_periodic_tasks(iface, (self, admin_context)) periodic_task_classes.add(iface.__class__) if (len(self._periodic_task_callables) > CONF.conductor.workers_pool_size): LOG.warning(_LW('This conductor has %(tasks)d periodic tasks ' 'enabled, but only %(workers)d task workers ' 'allowed by [conductor]workers_pool_size option'), {'tasks': len(self._periodic_task_callables), 'workers': CONF.conductor.workers_pool_size}) self._periodic_tasks = periodics.PeriodicWorker( self._periodic_task_callables, executor_factory=periodics.ExistingExecutor(self._executor)) # clear all locks held by this conductor before registering self.dbapi.clear_node_reservations_for_conductor(self.host) try: # Register this conductor with the cluster self.conductor = objects.Conductor.register( admin_context, self.host, driver_names) except exception.ConductorAlreadyRegistered: # This conductor was already registered and did not shut down # properly, so log a warning and update the record. LOG.warning( _LW("A conductor with hostname %(hostname)s " "was previously registered. Updating registration"), {'hostname': self.host}) self.conductor = objects.Conductor.register( admin_context, self.host, driver_names, update_existing=True) # Start periodic tasks self._periodic_tasks_worker = self._executor.submit( self._periodic_tasks.start, allow_empty=True) self._periodic_tasks_worker.add_done_callback( self._on_periodic_tasks_stop) # NOTE(lucasagomes): If the conductor server dies abruptly # mid deployment (OOM Killer, power outage, etc...) we # can not resume the deployment even if the conductor # comes back online. Cleaning the reservation of the nodes # (dbapi.clear_node_reservations_for_conductor) is not enough to # unstick it, so let's gracefully fail the deployment so the node # can go through the steps (deleting & cleaning) to make itself # available again. filters = {'reserved': False, 'provision_state': states.DEPLOYING} last_error = (_("The deployment can't be resumed by conductor " "%s. Moving to fail state.") % self.host) self._fail_if_in_state(ironic_context.get_admin_context(), filters, states.DEPLOYING, 'provision_updated_at', last_error=last_error) # Spawn a dedicated greenthread for the keepalive try: self._spawn_worker(self._conductor_service_record_keepalive) LOG.info(_LI('Successfully started conductor with hostname ' '%(hostname)s.'), {'hostname': self.host}) except exception.NoFreeConductorWorker: with excutils.save_and_reraise_exception(): LOG.critical(_LC('Failed to start keepalive')) self.del_host() self._started = True def del_host(self, deregister=True): # Conductor deregistration fails if called on non-initialized # conductor (e.g. when rpc server is unreachable). if not hasattr(self, 'conductor'): return self._keepalive_evt.set() if deregister: try: # Inform the cluster that this conductor is shutting down. # Note that rebalancing will not occur immediately, but when # the periodic sync takes place.
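            # (Illustrative note: self.conductor below is the registration
            # record created in init_host(); unregister() removes that
            # registration so the hash ring stops mapping nodes to this
            # host, and the ConductorNotFound handler tolerates a record
            # that is already gone.)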
self.conductor.unregister() LOG.info(_LI('Successfully stopped conductor with hostname ' '%(hostname)s.'), {'hostname': self.host}) except exception.ConductorNotFound: pass else: LOG.info(_LI('Not deregistering conductor with hostname ' '%(hostname)s.'), {'hostname': self.host}) # Waiting here to give workers the chance to finish. This has the # benefit of releasing locks workers placed on nodes, as well as # having work complete normally. self._periodic_tasks.stop() self._periodic_tasks.wait() self._executor.shutdown(wait=True) self._started = False def _collect_periodic_tasks(self, obj, args): """Collect periodic tasks from a given object. Populates self._periodic_task_callables with tuples (callable, args, kwargs). :param obj: object containing periodic tasks as methods :param args: tuple with arguments to pass to every task """ for name, member in inspect.getmembers(obj): if periodics.is_periodic(member): LOG.debug('Found periodic task %(owner)s.%(member)s', {'owner': obj.__class__.__name__, 'member': name}) self._periodic_task_callables.append((member, args, {})) def _on_periodic_tasks_stop(self, fut): try: fut.result() except Exception as exc: LOG.critical(_LC('Periodic tasks worker has failed: %s'), exc) else: LOG.info(_LI('Successfully shut down periodic tasks')) def iter_nodes(self, fields=None, **kwargs): """Iterate over nodes mapped to this conductor. Requests the node set from the database and filters out nodes that are not mapped to this conductor. Yields tuples (node_uuid, driver, ...) where ... is derived from fields argument, e.g.: fields=None means yielding ('uuid', 'driver'), fields=['foo'] means yielding ('uuid', 'driver', 'foo'). :param fields: list of fields to fetch in addition to uuid and driver :param kwargs: additional arguments to pass to dbapi when looking for nodes :return: generator yielding tuples of requested fields """ columns = ['uuid', 'driver'] + list(fields or ()) node_list = self.dbapi.get_nodeinfo_list(columns=columns, **kwargs) for result in node_list: if self._mapped_to_this_conductor(*result[:2]): yield result def _spawn_worker(self, func, *args, **kwargs): """Create a greenthread to run func(*args, **kwargs). Spawns a greenthread if there are free slots in pool, otherwise raises exception. Execution control returns immediately to the caller. :returns: Future object. :raises: NoFreeConductorWorker if worker pool is currently full. """ try: return self._executor.submit(func, *args, **kwargs) except futurist.RejectedSubmission: raise exception.NoFreeConductorWorker() def _conductor_service_record_keepalive(self): while not self._keepalive_evt.is_set(): try: self.conductor.touch() except db_exception.DBConnectionError: LOG.warning(_LW('Conductor could not connect to database ' 'while heartbeating.')) self._keepalive_evt.wait(CONF.conductor.heartbeat_interval) def _mapped_to_this_conductor(self, node_uuid, driver): """Check that node is mapped to this conductor. Note that because mappings are eventually consistent, it is possible for two conductors to simultaneously believe that a node is mapped to them. Any operation that depends on exclusive control of a node should take out a lock. """ try: ring = self.ring_manager[driver] except exception.DriverNotFound: return False return self.host in ring.get_hosts(node_uuid) def _fail_if_in_state(self, context, filters, provision_state, sort_key, callback_method=None, err_handler=None, last_error=None, keep_target_state=False): """Fail nodes that are in specified state. Retrieves nodes that satisfy the criteria in 'filters'.
If any of these nodes is in 'provision_state', it has failed in whatever provisioning activity it was currently doing. That failure is processed here. :param context: request context :param filters: criteria (as a dictionary) to get the desired list of nodes that satisfy the filter constraints. For example, if filters['provisioned_before'] = 60, this would process nodes whose provision_updated_at field value was 60 or more seconds before 'now'. :param provision_state: provision_state that the node is in, for the provisioning activity to have failed. :param sort_key: the nodes are sorted based on this key. :param callback_method: the callback method to be invoked in a spawned thread, for a failed node. This method must take a :class:`TaskManager` as the first (and only required) parameter. :param err_handler: for a failed node, the error handler to invoke if an error occurs trying to spawn a thread to do the callback_method. :param last_error: the error message to be updated in node.last_error :param keep_target_state: if True, a failed node will keep the same target provision state it had before the failure. Otherwise, the node's target provision state will be determined by the fsm. """ node_iter = self.iter_nodes(filters=filters, sort_key=sort_key, sort_dir='asc') workers_count = 0 for node_uuid, driver in node_iter: try: with task_manager.acquire(context, node_uuid, purpose='node state check') as task: if (task.node.maintenance or task.node.provision_state != provision_state): continue target_state = (None if not keep_target_state else task.node.target_provision_state) # timeout has been reached - process the event 'fail' if callback_method: task.process_event('fail', callback=self._spawn_worker, call_args=(callback_method, task), err_handler=err_handler, target_state=target_state) else: task.node.last_error = last_error task.process_event('fail', target_state=target_state) except exception.NoFreeConductorWorker: break except (exception.NodeLocked, exception.NodeNotFound): continue workers_count += 1 if workers_count >= CONF.conductor.periodic_max_workers: break ironic-5.1.0/ironic/conductor/rpcapi.py0000664000567000056710000010117712674513466021275 0ustar jenkinsjenkins00000000000000# coding=utf-8 # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Client side of the conductor RPC API. """ import random import oslo_messaging as messaging from ironic.common import exception from ironic.common import hash_ring from ironic.common.i18n import _ from ironic.common import rpc from ironic.conductor import manager from ironic.objects import base as objects_base class ConductorAPI(object): """Client side of the conductor RPC API. API version history: | 1.0 - Initial version. | Included get_node_power_status | 1.1 - Added update_node and start_power_state_change. | 1.2 - Added vendor_passthru. | 1.3 - Rename start_power_state_change to change_node_power_state. | 1.4 - Added do_node_deploy and do_node_tear_down.
| 1.5 - Added validate_driver_interfaces. | 1.6 - change_node_power_state, do_node_deploy and do_node_tear_down | accept node id instead of node object. | 1.7 - Added topic parameter to RPC methods. | 1.8 - Added change_node_maintenance_mode. | 1.9 - Added destroy_node. | 1.10 - Remove get_node_power_state | 1.11 - Added get_console_information, set_console_mode. | 1.12 - validate_vendor_action, do_vendor_action replaced by single | vendor_passthru method. | 1.13 - Added update_port. | 1.14 - Added driver_vendor_passthru. | 1.15 - Added rebuild parameter to do_node_deploy. | 1.16 - Added get_driver_properties. | 1.17 - Added set_boot_device, get_boot_device and | get_supported_boot_devices. | 1.18 - Remove change_node_maintenance_mode. | 1.19 - Change return value of vendor_passthru and | driver_vendor_passthru | 1.20 - Added http_method parameter to vendor_passthru and | driver_vendor_passthru | 1.21 - Added get_node_vendor_passthru_methods and | get_driver_vendor_passthru_methods | 1.22 - Added configdrive parameter to do_node_deploy. | 1.23 - Added do_provisioning_action | 1.24 - Added inspect_hardware method | 1.25 - Added destroy_port | 1.26 - Added continue_node_clean | 1.27 - Convert continue_node_clean to cast | 1.28 - Change exceptions raised by destroy_node | 1.29 - Change return value of vendor_passthru and | driver_vendor_passthru to a dictionary | 1.30 - Added set_target_raid_config and | get_raid_logical_disk_properties | 1.31 - Added Versioned Objects indirection API methods: | object_class_action_versions, object_action and | object_backport_versions | 1.32 - Add do_node_clean | 1.33 - Added update and destroy portgroup. """ # NOTE(rloo): This must be in sync with manager.ConductorManager's. RPC_API_VERSION = '1.33' def __init__(self, topic=None): super(ConductorAPI, self).__init__() self.topic = topic if self.topic is None: self.topic = manager.MANAGER_TOPIC target = messaging.Target(topic=self.topic, version='1.0') serializer = objects_base.IronicObjectSerializer() self.client = rpc.get_client(target, version_cap=self.RPC_API_VERSION, serializer=serializer) # NOTE(deva): this is going to be buggy self.ring_manager = hash_ring.HashRingManager() def get_topic_for(self, node): """Get the RPC topic for the conductor service the node is mapped to. :param node: a node object. :returns: an RPC topic string. :raises: NoValidHost """ self.ring_manager.reset() try: ring = self.ring_manager[node.driver] dest = ring.get_hosts(node.uuid) return self.topic + "." + dest[0] except exception.DriverNotFound: reason = (_('No conductor service registered which supports ' 'driver %s.') % node.driver) raise exception.NoValidHost(reason=reason) def get_topic_for_driver(self, driver_name): """Get RPC topic name for a conductor supporting the given driver. The topic is used to route messages to the conductor supporting the specified driver. A conductor is selected at random from the set of qualified conductors. :param driver_name: the name of the driver to route to. :returns: an RPC topic string. :raises: DriverNotFound """ self.ring_manager.reset() hash_ring = self.ring_manager[driver_name] host = random.choice(list(hash_ring.hosts)) return self.topic + "." + host def update_node(self, context, node_obj, topic=None): """Synchronously, have a conductor update the node's information. Update the node's information in the database and return a node object. The conductor will lock the node while it validates the supplied information. If driver_info is passed, it will be validated by the core drivers. 
If instance_uuid is passed, it will be set or unset only if the node is properly configured. Note that power_state should not be passed via this method. Use change_node_power_state for initiating driver actions. :param context: request context. :param node_obj: a changed (but not saved) node object. :param topic: RPC topic. Defaults to self.topic. :returns: updated node object, including all fields. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.1') return cctxt.call(context, 'update_node', node_obj=node_obj) def change_node_power_state(self, context, node_id, new_state, topic=None): """Change a node's power state. Synchronously, acquire lock and start the conductor background task to change power state of a node. :param context: request context. :param node_id: node id or uuid. :param new_state: one of ironic.common.states power state values :param topic: RPC topic. Defaults to self.topic. :raises: NoFreeConductorWorker when there is no free worker to start async task. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.6') return cctxt.call(context, 'change_node_power_state', node_id=node_id, new_state=new_state) def vendor_passthru(self, context, node_id, driver_method, http_method, info, topic=None): """Receive requests for vendor-specific actions. Synchronously validate driver specific info or get driver status, and if successful invokes the vendor method. If the method mode is async the conductor will start background worker to perform vendor action. :param context: request context. :param node_id: node id or uuid. :param driver_method: name of method for driver. :param http_method: the HTTP method used for the request. :param info: info for node driver. :param topic: RPC topic. Defaults to self.topic. :raises: InvalidParameterValue if supplied info is not valid. :raises: MissingParameterValue if a required parameter is missing :raises: UnsupportedDriverExtension if current driver does not have vendor interface. :raises: NoFreeConductorWorker when there is no free worker to start async task. :raises: NodeLocked if node is locked by another conductor. :returns: A dictionary containing: :return: The response of the invoked vendor method :async: Boolean value. Whether the method was invoked asynchronously (True) or synchronously (False). When invoked asynchronously the response will be always None. :attach: Boolean value. Whether to attach the response of the invoked vendor method to the HTTP response object (True) or return it in the response body (False). """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.20') return cctxt.call(context, 'vendor_passthru', node_id=node_id, driver_method=driver_method, http_method=http_method, info=info) def driver_vendor_passthru(self, context, driver_name, driver_method, http_method, info, topic=None): """Pass vendor-specific calls which don't specify a node to a driver. Handles driver-level vendor passthru calls. These calls don't require a node UUID and are executed on a random conductor with the specified driver. If the method mode is async the conductor will start background worker to perform vendor action. :param context: request context. :param driver_name: name of the driver on which to call the method. :param driver_method: name of the vendor method, for use by the driver. :param http_method: the HTTP method used for the request. :param info: data to pass through to the driver. :param topic: RPC topic. Defaults to self.topic. :raises: InvalidParameterValue for parameter errors. 
:raises: MissingParameterValue if a required parameter is missing :raises: UnsupportedDriverExtension if the driver doesn't have a vendor interface, or if the vendor interface does not support the specified driver_method. :raises: DriverNotFound if the supplied driver is not loaded. :raises: NoFreeConductorWorker when there is no free worker to start async task. :returns: A dictionary containing: :return: The response of the invoked vendor method :async: Boolean value. Whether the method was invoked asynchronously (True) or synchronously (False). When invoked asynchronously the response will be always None. :attach: Boolean value. Whether to attach the response of the invoked vendor method to the HTTP response object (True) or return it in the response body (False). """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.20') return cctxt.call(context, 'driver_vendor_passthru', driver_name=driver_name, driver_method=driver_method, http_method=http_method, info=info) def get_node_vendor_passthru_methods(self, context, node_id, topic=None): """Retrieve information about vendor methods of the given node. :param context: an admin context. :param node_id: the id or uuid of a node. :param topic: RPC topic. Defaults to self.topic. :returns: dictionary of <method name>:<method metadata> entries. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.21') return cctxt.call(context, 'get_node_vendor_passthru_methods', node_id=node_id) def get_driver_vendor_passthru_methods(self, context, driver_name, topic=None): """Retrieve information about vendor methods of the given driver. :param context: an admin context. :param driver_name: name of the driver. :param topic: RPC topic. Defaults to self.topic. :returns: dictionary of <method name>:<method metadata> entries. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.21') return cctxt.call(context, 'get_driver_vendor_passthru_methods', driver_name=driver_name) def do_node_deploy(self, context, node_id, rebuild, configdrive, topic=None): """Signal to conductor service to perform a deployment. :param context: request context. :param node_id: node id or uuid. :param rebuild: True if this is a rebuild request. :param configdrive: A gzipped and base64 encoded configdrive. :param topic: RPC topic. Defaults to self.topic. :raises: InstanceDeployFailure :raises: InvalidParameterValue if validation fails :raises: MissingParameterValue if a required parameter is missing :raises: NoFreeConductorWorker when there is no free worker to start async task. The node must already be configured and in the appropriate undeployed state before this method is called. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.22') return cctxt.call(context, 'do_node_deploy', node_id=node_id, rebuild=rebuild, configdrive=configdrive) def do_node_tear_down(self, context, node_id, topic=None): """Signal to conductor service to tear down a deployment. :param context: request context. :param node_id: node id or uuid. :param topic: RPC topic. Defaults to self.topic. :raises: InstanceDeployFailure :raises: InvalidParameterValue if validation fails :raises: MissingParameterValue if a required parameter is missing :raises: NoFreeConductorWorker when there is no free worker to start async task. The node must already be configured and in the appropriate deployed state before this method is called.
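        Example (an illustrative sketch; assumes a node object and a
        request context are already available)::

            api = ConductorAPI()
            api.do_node_tear_down(context, node.uuid,
                                  topic=api.get_topic_for(node))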
""" cctxt = self.client.prepare(topic=topic or self.topic, version='1.6') return cctxt.call(context, 'do_node_tear_down', node_id=node_id) def do_provisioning_action(self, context, node_id, action, topic=None): """Signal to conductor service to perform the given action on a node. :param context: request context. :param node_id: node id or uuid. :param action: an action. One of ironic.common.states.VERBS :param topic: RPC topic. Defaults to self.topic. :raises: InvalidParameterValue :raises: NoFreeConductorWorker when there is no free worker to start async task. :raises: InvalidStateRequested if the requested action can not be performed. This encapsulates some provisioning actions in a single call. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.23') return cctxt.call(context, 'do_provisioning_action', node_id=node_id, action=action) def continue_node_clean(self, context, node_id, topic=None): """Signal to conductor service to start the next cleaning action. NOTE(JoshNang) this is an RPC cast, there will be no response or exception raised by the conductor for this RPC. :param context: request context. :param node_id: node id or uuid. :param topic: RPC topic. Defaults to self.topic. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.27') return cctxt.cast(context, 'continue_node_clean', node_id=node_id) def validate_driver_interfaces(self, context, node_id, topic=None): """Validate the `core` and `standardized` interfaces for drivers. :param context: request context. :param node_id: node id or uuid. :param topic: RPC topic. Defaults to self.topic. :returns: a dictionary containing the results of each interface validation. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.5') return cctxt.call(context, 'validate_driver_interfaces', node_id=node_id) def destroy_node(self, context, node_id, topic=None): """Delete a node. :param context: request context. :param node_id: node id or uuid. :raises: NodeLocked if node is locked by another conductor. :raises: NodeAssociated if the node contains an instance associated with it. :raises: InvalidState if the node is in the wrong provision state to perform deletion. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.9') return cctxt.call(context, 'destroy_node', node_id=node_id) def get_console_information(self, context, node_id, topic=None): """Get connection information about the console. :param context: request context. :param node_id: node id or uuid. :param topic: RPC topic. Defaults to self.topic. :raises: UnsupportedDriverExtension if the node's driver doesn't support console. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if a required parameter is missing """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.11') return cctxt.call(context, 'get_console_information', node_id=node_id) def set_console_mode(self, context, node_id, enabled, topic=None): """Enable/Disable the console. :param context: request context. :param node_id: node id or uuid. :param topic: RPC topic. Defaults to self.topic. :param enabled: Boolean value; whether the console is enabled or disabled. :raises: UnsupportedDriverExtension if the node's driver doesn't support console. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if a required parameter is missing :raises: NoFreeConductorWorker when there is no free worker to start async task. 
""" cctxt = self.client.prepare(topic=topic or self.topic, version='1.11') return cctxt.call(context, 'set_console_mode', node_id=node_id, enabled=enabled) def update_port(self, context, port_obj, topic=None): """Synchronously, have a conductor update the port's information. Update the port's information in the database and return a port object. The conductor will lock related node and trigger specific driver actions if they are needed. :param context: request context. :param port_obj: a changed (but not saved) port object. :param topic: RPC topic. Defaults to self.topic. :returns: updated port object, including all fields. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.13') return cctxt.call(context, 'update_port', port_obj=port_obj) def update_portgroup(self, context, portgroup_obj, topic=None): """Synchronously, have a conductor update the portgroup's information. Update the portgroup's information in the database and return a portgroup object. The conductor will lock related node and trigger specific driver actions if they are needed. :param context: request context. :param portgroup_obj: a changed (but not saved) portgroup object. :param topic: RPC topic. Defaults to self.topic. :returns: updated portgroup object, including all fields. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.33') return cctxt.call(context, 'update_portgroup', portgroup_obj=portgroup_obj) def destroy_portgroup(self, context, portgroup, topic=None): """Delete a portgroup. :param context: request context. :param portgroup: portgroup object :param topic: RPC topic. Defaults to self.topic. :raises: NodeLocked if node is locked by another conductor. :raises: NodeNotFound if the node associated with the portgroup does not exist. :raises: PortgroupNotEmpty if portgroup is not empty """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.33') return cctxt.call(context, 'destroy_portgroup', portgroup=portgroup) def get_driver_properties(self, context, driver_name, topic=None): """Get the properties of the driver. :param context: request context. :param driver_name: name of the driver. :param topic: RPC topic. Defaults to self.topic. :returns: a dictionary with : entries. :raises: DriverNotFound. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.16') return cctxt.call(context, 'get_driver_properties', driver_name=driver_name) def set_boot_device(self, context, node_id, device, persistent=False, topic=None): """Set the boot device for a node. Set the boot device to use on next reboot of the node. Be aware that not all drivers support this. :param context: request context. :param node_id: node id or uuid. :param device: the boot device, one of :mod:`ironic.common.boot_devices`. :param persistent: Whether to set next-boot, or make the change permanent. Default: False. :raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support management. :raises: InvalidParameterValue when the wrong driver info is specified or an invalid boot device is specified. :raises: MissingParameterValue if missing supplied info. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.17') return cctxt.call(context, 'set_boot_device', node_id=node_id, device=device, persistent=persistent) def get_boot_device(self, context, node_id, topic=None): """Get the current boot device. Returns the current boot device of a node. :param context: request context. :param node_id: node id or uuid. 
:raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support management. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if missing supplied info. :returns: a dictionary containing: :boot_device: the boot device, one of :mod:`ironic.common.boot_devices` or None if it is unknown. :persistent: Whether the boot device will persist to all future boots or not, None if it is unknown. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.17') return cctxt.call(context, 'get_boot_device', node_id=node_id) def get_supported_boot_devices(self, context, node_id, topic=None): """Get the list of supported devices. Returns the list of supported boot devices of a node. :param context: request context. :param node_id: node id or uuid. :raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support management. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if missing supplied info. :returns: A list with the supported boot devices defined in :mod:`ironic.common.boot_devices`. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.17') return cctxt.call(context, 'get_supported_boot_devices', node_id=node_id) def inspect_hardware(self, context, node_id, topic=None): """Signals the conductor service to perform hardware introspection. :param context: request context. :param node_id: node id or uuid. :param topic: RPC topic. Defaults to self.topic. :raises: NodeLocked if node is locked by another conductor. :raises: HardwareInspectionFailure :raises: NoFreeConductorWorker when there is no free worker to start async task. :raises: UnsupportedDriverExtension if the node's driver doesn't support inspection. :raises: InvalidStateRequested if 'inspect' is not a valid action to do in the current state. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.24') return cctxt.call(context, 'inspect_hardware', node_id=node_id) def destroy_port(self, context, port, topic=None): """Delete a port. :param context: request context. :param port: port object :param topic: RPC topic. Defaults to self.topic. :raises: NodeLocked if node is locked by another conductor. :raises: NodeNotFound if the node associated with the port does not exist. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.25') return cctxt.call(context, 'destroy_port', port=port) def set_target_raid_config(self, context, node_id, target_raid_config, topic=None): """Stores the target RAID configuration on the node. Stores the target RAID configuration on node.target_raid_config :param context: request context. :param node_id: node id or uuid. :param target_raid_config: Dictionary containing the target RAID configuration. It may be an empty dictionary as well. :param topic: RPC topic. Defaults to self.topic. :raises: UnsupportedDriverExtension if the node's driver doesn't support RAID configuration. :raises: InvalidParameterValue, if validation of target raid config fails. :raises: MissingParameterValue, if some required parameters are missing. :raises: NodeLocked if node is locked by another conductor. 
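        Example (an illustrative sketch; the logical disk layout shown is
        an assumption for demonstration, not a schema reference)::

            target = {'logical_disks': [{'size_gb': 100,
                                         'raid_level': '1',
                                         'is_root_volume': True}]}
            api.set_target_raid_config(context, node.uuid, target,
                                       topic=api.get_topic_for(node))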
""" cctxt = self.client.prepare(topic=topic or self.topic, version='1.30') return cctxt.call(context, 'set_target_raid_config', node_id=node_id, target_raid_config=target_raid_config) def get_raid_logical_disk_properties(self, context, driver_name, topic=None): """Get the logical disk properties for RAID configuration. Gets the information about logical disk properties which can be specified in the input RAID configuration. :param context: request context. :param driver_name: name of the driver :param topic: RPC topic. Defaults to self.topic. :raises: UnsupportedDriverExtension if the driver doesn't support RAID configuration. :returns: A dictionary containing the properties that can be mentioned for logical disks and a textual description for them. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.30') return cctxt.call(context, 'get_raid_logical_disk_properties', driver_name=driver_name) def do_node_clean(self, context, node_id, clean_steps, topic=None): """Signal to conductor service to perform manual cleaning on a node. :param context: request context. :param node_id: node ID or UUID. :param clean_steps: a list of clean step dictionaries. :param topic: RPC topic. Defaults to self.topic. :raises: InvalidParameterValue if validation of power driver interface failed. :raises: InvalidStateRequested if cleaning can not be performed. :raises: NodeInMaintenance if node is in maintenance mode. :raises: NodeLocked if node is locked by another conductor. :raises: NoFreeConductorWorker when there is no free worker to start async task. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.32') return cctxt.call(context, 'do_node_clean', node_id=node_id, clean_steps=clean_steps) def object_class_action_versions(self, context, objname, objmethod, object_versions, args, kwargs): """Perform an action on a VersionedObject class. We want any conductor to handle this, so it is intentional that there is no topic argument for this method. :param context: The context within which to perform the action :param objname: The registry name of the object :param objmethod: The name of the action method to call :param object_versions: A dict of {objname: version} mappings :param args: The positional arguments to the action method :param kwargs: The keyword arguments to the action method :raises: NotImplementedError when an operator makes an error during upgrade :returns: The result of the action method, which may (or may not) be an instance of the implementing VersionedObject class. """ if not self.client.can_send_version('1.31'): raise NotImplementedError(_('Incompatible conductor version - ' 'please upgrade ironic-conductor ' 'first')) cctxt = self.client.prepare(topic=self.topic, version='1.31') return cctxt.call(context, 'object_class_action_versions', objname=objname, objmethod=objmethod, object_versions=object_versions, args=args, kwargs=kwargs) def object_action(self, context, objinst, objmethod, args, kwargs): """Perform an action on a VersionedObject instance. We want any conductor to handle this, so it is intentional that there is no topic argument for this method. 
:param context: The context within which to perform the action :param objinst: The object instance on which to perform the action :param objmethod: The name of the action method to call :param args: The positional arguments to the action method :param kwargs: The keyword arguments to the action method :raises: NotImplementedError when an operator makes an error during upgrade :returns: A tuple with the updates made to the object and the result of the action method """ if not self.client.can_send_version('1.31'): raise NotImplementedError(_('Incompatible conductor version - ' 'please upgrade ironic-conductor ' 'first')) cctxt = self.client.prepare(topic=self.topic, version='1.31') return cctxt.call(context, 'object_action', objinst=objinst, objmethod=objmethod, args=args, kwargs=kwargs) def object_backport_versions(self, context, objinst, object_versions): """Perform a backport of an object instance. The default behavior of the base VersionedObjectSerializer, upon receiving an object with a version newer than what is in the local registry, is to call this method to request a backport of the object. We want any conductor to handle this, so it is intentional that there is no topic argument for this method. :param context: The context within which to perform the backport :param objinst: An instance of a VersionedObject to be backported :param object_versions: A dict of {objname: version} mappings :raises: NotImplementedError when an operator makes an error during upgrade :returns: The downgraded instance of objinst """ if not self.client.can_send_version('1.31'): raise NotImplementedError(_('Incompatible conductor version - ' 'please upgrade ironic-conductor ' 'first')) cctxt = self.client.prepare(topic=self.topic, version='1.31') return cctxt.call(context, 'object_backport_versions', objinst=objinst, object_versions=object_versions) ironic-5.1.0/ironic/conductor/manager.py0000664000567000056710000035036612674513466021443 0ustar jenkinsjenkins00000000000000# coding=utf-8 # Copyright 2013 Hewlett-Packard Development Company, L.P. # Copyright 2013 International Business Machines Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Conduct all activity related to bare-metal deployments. A single instance of :py:class:`ironic.conductor.manager.ConductorManager` is created within the *ironic-conductor* process, and is responsible for performing all actions on bare metal resources (Chassis, Nodes, and Ports). Commands are received via RPCs. The conductor service also performs periodic tasks, eg. to monitor the status of active deployments. Drivers are loaded via entrypoints by the :py:class:`ironic.common.driver_factory` class. Each driver is instantiated only once, when the ConductorManager service starts. In this way, a single ConductorManager may use multiple drivers, and manage heterogeneous hardware. When multiple :py:class:`ConductorManager` are run on different hosts, they are all active and cooperatively manage all nodes in the deployment. 
Nodes are locked by each conductor when performing actions which change the state of that node; these locks are represented by the :py:class:`ironic.conductor.task_manager.TaskManager` class. A :py:class:`ironic.common.hash_ring.HashRing` is used to distribute nodes across the set of active conductors which support each node's driver. Rebalancing this ring can trigger various actions by each conductor, such as building or tearing down the TFTP environment for a node, notifying Neutron of a change, etc. """ import collections import datetime import tempfile import eventlet from futurist import periodics from oslo_config import cfg from oslo_log import log import oslo_messaging as messaging from oslo_utils import excutils from oslo_utils import uuidutils from ironic.common import dhcp_factory from ironic.common import driver_factory from ironic.common import exception from ironic.common.glance_service import service_utils as glance_utils from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LI from ironic.common.i18n import _LW from ironic.common import images from ironic.common import states from ironic.common import swift from ironic.conductor import base_manager from ironic.conductor import task_manager from ironic.conductor import utils from ironic import objects from ironic.objects import base as objects_base MANAGER_TOPIC = 'ironic.conductor_manager' LOG = log.getLogger(__name__) conductor_opts = [ cfg.StrOpt('api_url', help=_('URL of Ironic API service. If not set ironic can ' 'get the current value from the keystone service ' 'catalog.')), cfg.IntOpt('heartbeat_timeout', default=60, help=_('Maximum time (in seconds) since the last check-in ' 'of a conductor. A conductor is considered inactive ' 'when this time has been exceeded.')), cfg.IntOpt('sync_power_state_interval', default=60, help=_('Interval between syncing the node power state to the ' 'database, in seconds.')), cfg.IntOpt('check_provision_state_interval', default=60, help=_('Interval between checks of provision timeouts, ' 'in seconds.')), cfg.IntOpt('deploy_callback_timeout', default=1800, help=_('Timeout (seconds) to wait for a callback from ' 'a deploy ramdisk. Set to 0 to disable timeout.')), cfg.BoolOpt('force_power_state_during_sync', default=True, help=_('During sync_power_state, should the hardware power ' 'state be set to the state recorded in the database ' '(True) or should the database be updated based on ' 'the hardware state (False).')), cfg.IntOpt('power_state_sync_max_retries', default=3, help=_('During sync_power_state failures, limit the ' 'number of times Ironic should try syncing the ' 'hardware node power state with the node power state ' 'in DB')), cfg.IntOpt('periodic_max_workers', default=8, help=_('Maximum number of worker threads that can be started ' 'simultaneously by a periodic task. 
Should be less ' 'than RPC thread pool size.')), cfg.IntOpt('node_locked_retry_attempts', default=3, help=_('Number of attempts to grab a node lock.')), cfg.IntOpt('node_locked_retry_interval', default=1, help=_('Seconds to sleep between node lock attempts.')), cfg.BoolOpt('send_sensor_data', default=False, help=_('Enable sending sensor data message via the ' 'notification bus')), cfg.IntOpt('send_sensor_data_interval', default=600, help=_('Seconds between conductor sending sensor data message' ' to ceilometer via the notification bus.')), cfg.ListOpt('send_sensor_data_types', default=['ALL'], help=_('List of comma separated meter types which need to be' ' sent to Ceilometer. The default value, "ALL", is a ' 'special value meaning send all the sensor data.')), cfg.IntOpt('sync_local_state_interval', default=180, help=_('When conductors join or leave the cluster, existing ' 'conductors may need to update any persistent ' 'local state as nodes are moved around the cluster. ' 'This option controls how often, in seconds, each ' 'conductor will check for nodes that it should ' '"take over". Set it to a negative value to disable ' 'the check entirely.')), cfg.BoolOpt('configdrive_use_swift', default=False, help=_('Whether to upload the config drive to Swift.')), cfg.StrOpt('configdrive_swift_container', default='ironic_configdrive_container', help=_('Name of the Swift container to store config drive ' 'data. Used when configdrive_use_swift is True.')), cfg.IntOpt('inspect_timeout', default=1800, help=_('Timeout (seconds) for waiting for node inspection. ' '0 - unlimited.')), # TODO(rloo): Remove support for deprecated name 'clean_nodes' in Newton # cycle. cfg.BoolOpt('automated_clean', default=True, deprecated_name='clean_nodes', help=_('Enables or disables automated cleaning. Automated ' 'cleaning is a configurable set of steps, ' 'such as erasing disk drives, that are performed on ' 'the node to ensure it is in a baseline state and ' 'ready to be deployed to. This is ' 'done after instance deletion as well as during ' 'the transition from a "manageable" to "available" ' 'state. When enabled, the particular steps ' 'performed to clean a node depend on which driver ' 'that node is managed by; see the individual ' 'driver\'s documentation for details. ' 'NOTE: The introduction of the cleaning operation ' 'causes instance deletion to take significantly ' 'longer. In an environment where all tenants are ' 'trusted (eg, because there is only one tenant), ' 'this option could be safely disabled.')), cfg.IntOpt('clean_callback_timeout', default=1800, help=_('Timeout (seconds) to wait for a callback from the ' 'ramdisk doing the cleaning. If the timeout is reached ' 'the node will be put in the "clean failed" provision ' 'state. Set to 0 to disable timeout.')), ] CONF = cfg.CONF CONF.register_opts(conductor_opts, 'conductor') SYNC_EXCLUDED_STATES = (states.DEPLOYWAIT, states.CLEANWAIT, states.ENROLL) class ConductorManager(base_manager.BaseConductorManager): """Ironic Conductor manager main class.""" # NOTE(rloo): This must be in sync with rpcapi.ConductorAPI's. RPC_API_VERSION = '1.33' target = messaging.Target(version=RPC_API_VERSION) def __init__(self, host, topic): super(ConductorManager, self).__init__(host, topic) self.power_state_sync_count = collections.defaultdict(int) @messaging.expected_exceptions(exception.InvalidParameterValue, exception.MissingParameterValue, exception.NodeLocked) def update_node(self, context, node_obj): """Update a node with the supplied data. 
This method is the main "hub" for PUT and PATCH requests in the API. It ensures that the requested change is safe to perform, validates the parameters with the node's driver, if necessary. :param context: an admin context :param node_obj: a changed (but not saved) node object. """ node_id = node_obj.uuid LOG.debug("RPC update_node called for node %s." % node_id) # NOTE(jroll) clear maintenance_reason if node.update sets # maintenance to False for backwards compatibility, for tools # not using the maintenance endpoint. delta = node_obj.obj_what_changed() if 'maintenance' in delta and not node_obj.maintenance: node_obj.maintenance_reason = None driver_name = node_obj.driver if 'driver' in delta else None with task_manager.acquire(context, node_id, shared=False, driver_name=driver_name, purpose='node update'): node_obj.save() return node_obj @messaging.expected_exceptions(exception.InvalidParameterValue, exception.MissingParameterValue, exception.NoFreeConductorWorker, exception.NodeLocked) def change_node_power_state(self, context, node_id, new_state): """RPC method to encapsulate changes to a node's state. Perform actions such as power on, power off. The validation is performed synchronously, and if successful, the power action is updated in the background (asynchronously). Once the power action is finished and successful, it updates the power_state for the node with the new power state. :param context: an admin context. :param node_id: the id or uuid of a node. :param new_state: the desired power state of the node. :raises: NoFreeConductorWorker when there is no free worker to start async task. """ LOG.debug("RPC change_node_power_state called for node %(node)s. " "The desired new state is %(state)s." % {'node': node_id, 'state': new_state}) with task_manager.acquire(context, node_id, shared=False, purpose='changing node power state') as task: task.driver.power.validate(task) # Set the target_power_state and clear any last_error, since we're # starting a new operation. This will expose to other processes # and clients that work is in progress. if new_state == states.REBOOT: task.node.target_power_state = states.POWER_ON else: task.node.target_power_state = new_state task.node.last_error = None task.node.save() task.set_spawn_error_hook(utils.power_state_error_handler, task.node, task.node.power_state) task.spawn_after(self._spawn_worker, utils.node_power_action, task, new_state) @messaging.expected_exceptions(exception.NoFreeConductorWorker, exception.NodeLocked, exception.InvalidParameterValue, exception.UnsupportedDriverExtension, exception.MissingParameterValue) def vendor_passthru(self, context, node_id, driver_method, http_method, info): """RPC method to encapsulate vendor action. Synchronously validate driver specific info or get driver status, and if successful invokes the vendor method. If the method mode is 'async' the conductor will start background worker to perform vendor action. :param context: an admin context. :param node_id: the id or uuid of a node. :param driver_method: the name of the vendor method. :param http_method: the HTTP method used for the request. :param info: vendor method args. :raises: InvalidParameterValue if supplied info is not valid. :raises: MissingParameterValue if missing supplied info :raises: UnsupportedDriverExtension if current driver does not have vendor interface or method is unsupported. :raises: NoFreeConductorWorker when there is no free worker to start async task. :raises: NodeLocked if node is locked by another conductor. 
:returns: A dictionary containing: :return: The response of the invoked vendor method :async: Boolean value. Whether the method was invoked asynchronously (True) or synchronously (False). When invoked asynchronously the response will be always None. :attach: Boolean value. Whether to attach the response of the invoked vendor method to the HTTP response object (True) or return it in the response body (False). """ LOG.debug("RPC vendor_passthru called for node %s." % node_id) # NOTE(max_lobur): Even though not all vendor_passthru calls may # require an exclusive lock, we need to do so to guarantee that the # state doesn't unexpectedly change between doing a vendor.validate # and vendor.vendor_passthru. with task_manager.acquire(context, node_id, shared=False, purpose='calling vendor passthru') as task: if not getattr(task.driver, 'vendor', None): raise exception.UnsupportedDriverExtension( driver=task.node.driver, extension='vendor interface') vendor_iface = task.driver.vendor try: vendor_opts = vendor_iface.vendor_routes[driver_method] vendor_func = vendor_opts['func'] except KeyError: raise exception.InvalidParameterValue( _('No handler for method %s') % driver_method) http_method = http_method.upper() if http_method not in vendor_opts['http_methods']: raise exception.InvalidParameterValue( _('The method %(method)s does not support HTTP %(http)s') % {'method': driver_method, 'http': http_method}) vendor_iface.validate(task, method=driver_method, http_method=http_method, **info) # Inform the vendor method which HTTP method it was invoked with info['http_method'] = http_method # Invoke the vendor method accordingly with the mode is_async = vendor_opts['async'] ret = None if is_async: task.spawn_after(self._spawn_worker, vendor_func, task, **info) else: ret = vendor_func(task, **info) return {'return': ret, 'async': is_async, 'attach': vendor_opts['attach']} @messaging.expected_exceptions(exception.NoFreeConductorWorker, exception.InvalidParameterValue, exception.MissingParameterValue, exception.UnsupportedDriverExtension, exception.DriverNotFound) def driver_vendor_passthru(self, context, driver_name, driver_method, http_method, info): """Handle top-level vendor actions. RPC method which handles driver-level vendor passthru calls. These calls don't require a node UUID and are executed on a random conductor with the specified driver. If the method mode is async the conductor will start background worker to perform vendor action. :param context: an admin context. :param driver_name: name of the driver on which to call the method. :param driver_method: name of the vendor method, for use by the driver. :param http_method: the HTTP method used for the request. :param info: user-supplied data to pass through to the driver. :raises: MissingParameterValue if missing supplied info :raises: InvalidParameterValue if supplied info is not valid. :raises: UnsupportedDriverExtension if current driver does not have vendor interface, if the vendor interface does not implement driver-level vendor passthru or if the passthru method is unsupported. :raises: DriverNotFound if the supplied driver is not loaded. :raises: NoFreeConductorWorker when there is no free worker to start async task. :returns: A dictionary containing: :return: The response of the invoked vendor method :async: Boolean value. Whether the method was invoked asynchronously (True) or synchronously (False). When invoked asynchronously the response will be always None. :attach: Boolean value. 
Whether to attach the response of the invoked vendor method to the HTTP response object (True) or return it in the response body (False). """ # Any locking in a top-level vendor action will need to be done by the # implementation, as there is little we could reasonably lock on here. LOG.debug("RPC driver_vendor_passthru for driver %s." % driver_name) driver = driver_factory.get_driver(driver_name) if not getattr(driver, 'vendor', None): raise exception.UnsupportedDriverExtension( driver=driver_name, extension='vendor interface') try: vendor_opts = driver.vendor.driver_routes[driver_method] vendor_func = vendor_opts['func'] except KeyError: raise exception.InvalidParameterValue( _('No handler for method %s') % driver_method) http_method = http_method.upper() if http_method not in vendor_opts['http_methods']: raise exception.InvalidParameterValue( _('The method %(method)s does not support HTTP %(http)s') % {'method': driver_method, 'http': http_method}) # Inform the vendor method which HTTP method it was invoked with info['http_method'] = http_method # Invoke the vendor method accordingly with the mode is_async = vendor_opts['async'] ret = None driver.vendor.driver_validate(method=driver_method, **info) if is_async: self._spawn_worker(vendor_func, context, **info) else: ret = vendor_func(context, **info) return {'return': ret, 'async': is_async, 'attach': vendor_opts['attach']} @messaging.expected_exceptions(exception.UnsupportedDriverExtension) def get_node_vendor_passthru_methods(self, context, node_id): """Retrieve information about vendor methods of the given node. :param context: an admin context. :param node_id: the id or uuid of a node. :returns: dictionary of <method name>:<method metadata> entries. """ LOG.debug("RPC get_node_vendor_passthru_methods called for node %s" % node_id) lock_purpose = 'listing vendor passthru methods' with task_manager.acquire(context, node_id, shared=True, purpose=lock_purpose) as task: if not getattr(task.driver, 'vendor', None): raise exception.UnsupportedDriverExtension( driver=task.node.driver, extension='vendor interface') return get_vendor_passthru_metadata( task.driver.vendor.vendor_routes) @messaging.expected_exceptions(exception.UnsupportedDriverExtension, exception.DriverNotFound) def get_driver_vendor_passthru_methods(self, context, driver_name): """Retrieve information about vendor methods of the given driver. :param context: an admin context. :param driver_name: name of the driver. :returns: dictionary of <method name>:<method metadata> entries. """ # Any locking in a top-level vendor action will need to be done by the # implementation, as there is little we could reasonably lock on here. LOG.debug("RPC get_driver_vendor_passthru_methods for driver %s" % driver_name) driver = driver_factory.get_driver(driver_name) if not getattr(driver, 'vendor', None): raise exception.UnsupportedDriverExtension( driver=driver_name, extension='vendor interface') return get_vendor_passthru_metadata(driver.vendor.driver_routes) @messaging.expected_exceptions(exception.NoFreeConductorWorker, exception.NodeLocked, exception.NodeInMaintenance, exception.InstanceDeployFailure, exception.InvalidStateRequested) def do_node_deploy(self, context, node_id, rebuild=False, configdrive=None): """RPC method to initiate deployment to a node. Initiate the deployment of a node. Validations are done synchronously and the actual deploy work is performed in background (asynchronously). :param context: an admin context. :param node_id: the id or uuid of a node. :param rebuild: True if this is a rebuild request.
A rebuild will recreate the instance on the same node, overwriting all disk. The ephemeral partition, if it exists, can optionally be preserved. :param configdrive: Optional. A gzipped and base64 encoded configdrive. :raises: InstanceDeployFailure :raises: NodeInMaintenance if the node is in maintenance mode. :raises: NoFreeConductorWorker when there is no free worker to start async task. :raises: InvalidStateRequested when the requested state is not a valid target from the current state. """ LOG.debug("RPC do_node_deploy called for node %s." % node_id) # NOTE(comstud): If the _sync_power_states() periodic task happens # to have locked this node, we'll fail to acquire the lock. The # client should perhaps retry in this case unless we decide we # want to add retries or extra synchronization here. with task_manager.acquire(context, node_id, shared=False, purpose='node deployment') as task: node = task.node if node.maintenance: raise exception.NodeInMaintenance(op=_('provisioning'), node=node.uuid) if rebuild: event = 'rebuild' # Note(gilliard) Clear these to force the driver to # check whether they have been changed in glance # NOTE(vdrok): If image_source is not from Glance we should # not clear kernel and ramdisk as they're input manually if glance_utils.is_glance_image( node.instance_info.get('image_source')): instance_info = node.instance_info instance_info.pop('kernel', None) instance_info.pop('ramdisk', None) node.instance_info = instance_info else: event = 'deploy' driver_internal_info = node.driver_internal_info # Infer the image type to make sure the deploy driver # validates only the necessary variables for different # image types. # NOTE(sirushtim): The iwdi variable can be None. It's up to # the deploy driver to validate this. iwdi = images.is_whole_disk_image(context, node.instance_info) driver_internal_info['is_whole_disk_image'] = iwdi node.driver_internal_info = driver_internal_info node.save() try: task.driver.power.validate(task) task.driver.deploy.validate(task) except (exception.InvalidParameterValue, exception.MissingParameterValue) as e: raise exception.InstanceDeployFailure( _("RPC do_node_deploy failed to validate deploy or " "power info for node %(node_uuid)s. Error: %(msg)s") % {'node_uuid': node.uuid, 'msg': e}) LOG.debug("do_node_deploy Calling event: %(event)s for node: " "%(node)s", {'event': event, 'node': node.uuid}) try: task.process_event( event, callback=self._spawn_worker, call_args=(do_node_deploy, task, self.conductor.id, configdrive), err_handler=utils.provisioning_error_handler) except exception.InvalidState: raise exception.InvalidStateRequested( action=event, node=task.node.uuid, state=task.node.provision_state) @messaging.expected_exceptions(exception.NoFreeConductorWorker, exception.NodeLocked, exception.InstanceDeployFailure, exception.InvalidStateRequested) def do_node_tear_down(self, context, node_id): """RPC method to tear down an existing node deployment. Validate driver specific information synchronously, and then spawn a background worker to tear down the node asynchronously. :param context: an admin context. :param node_id: the id or uuid of a node. :raises: InstanceDeployFailure :raises: NoFreeConductorWorker when there is no free worker to start async task :raises: InvalidStateRequested when the requested state is not a valid target from the current state. """ LOG.debug("RPC do_node_tear_down called for node %s." 
% node_id) with task_manager.acquire(context, node_id, shared=False, purpose='node tear down') as task: try: # NOTE(ghe): Valid power driver values are needed to perform # a tear-down. Deploy info is useful to purge the cache but not # required for this method. task.driver.power.validate(task) except (exception.InvalidParameterValue, exception.MissingParameterValue) as e: raise exception.InstanceDeployFailure(_( "Failed to validate power driver interface. " "Can not delete instance. Error: %(msg)s") % {'msg': e}) try: task.process_event( 'delete', callback=self._spawn_worker, call_args=(self._do_node_tear_down, task), err_handler=utils.provisioning_error_handler) except exception.InvalidState: raise exception.InvalidStateRequested( action='delete', node=task.node.uuid, state=task.node.provision_state) def _do_node_tear_down(self, task): """Internal RPC method to tear down an existing node deployment.""" node = task.node try: task.driver.deploy.clean_up(task) task.driver.deploy.tear_down(task) except Exception as e: with excutils.save_and_reraise_exception(): LOG.exception(_LE('Error in tear_down of node %(node)s: ' '%(err)s'), {'node': node.uuid, 'err': e}) node.last_error = _("Failed to tear down. Error: %s") % e task.process_event('error') else: # NOTE(deva): When tear_down finishes, the deletion is done, # cleaning will start next LOG.info(_LI('Successfully unprovisioned node %(node)s with ' 'instance %(instance)s.'), {'node': node.uuid, 'instance': node.instance_uuid}) finally: # NOTE(deva): there is no need to unset conductor_affinity # because it is a reference to the most recent conductor which # deployed a node, and does not limit any future actions. # But we do need to clear the instance-related fields. node.instance_info = {} node.instance_uuid = None driver_internal_info = node.driver_internal_info driver_internal_info.pop('instance', None) node.driver_internal_info = driver_internal_info node.save() # Begin cleaning try: task.process_event('clean') except exception.InvalidState: raise exception.InvalidStateRequested( action='clean', node=node.uuid, state=node.provision_state) self._do_node_clean(task) def _get_node_next_clean_steps(self, task, skip_current_step=True): """Get the task's node's next clean steps. This determines what the next (remaining) clean steps are, and returns the index into the clean steps list that corresponds to the next clean step. The remaining clean steps are determined as follows: * If no clean steps have been started yet, all the clean steps must be executed * If skip_current_step is False, the remaining clean steps start with the current clean step. Otherwise, the remaining clean steps start with the clean step after the current one. All the clean steps for an automated or manual cleaning are in node.driver_internal_info['clean_steps']. node.clean_step is the current clean step that was just executed (or None, {} if no steps have been executed yet). node.driver_internal_info['clean_step_index'] is the index into the clean steps list (or None, doesn't exist if no steps have been executed yet) and corresponds to node.clean_step. :param task: A TaskManager object :param skip_current_step: True to skip the current clean step; False to include it. :raises: NodeCleaningFailure if an internal error occurred when getting the next clean steps :returns: index of the next clean step; None if there are no clean steps to execute. """ node = task.node if not node.clean_step: # first time through, all steps need to be done. 
Return the index of the first step in the list. return 0 ind = None if 'clean_step_index' in node.driver_internal_info: ind = node.driver_internal_info['clean_step_index'] else: # TODO(rloo). driver_internal_info['clean_step_index'] was # added in Mitaka. We need to maintain backwards compatibility # so this uses the original code to get the index of the current # step. This will be deleted in the Newton cycle. try: next_steps = node.driver_internal_info['clean_steps'] ind = next_steps.index(node.clean_step) except (KeyError, ValueError): msg = (_('Node %(node)s got an invalid last step for ' '%(state)s: %(step)s.') % {'node': node.uuid, 'step': node.clean_step, 'state': node.provision_state}) LOG.exception(msg) utils.cleaning_error_handler(task, msg) raise exception.NodeCleaningFailure(node=node.uuid, reason=msg) if ind is None: return None if skip_current_step: ind += 1 if ind >= len(node.driver_internal_info['clean_steps']): # no steps left to do ind = None return ind @messaging.expected_exceptions(exception.InvalidParameterValue, exception.InvalidStateRequested, exception.NodeInMaintenance, exception.NodeLocked, exception.NoFreeConductorWorker) def do_node_clean(self, context, node_id, clean_steps): """RPC method to initiate manual cleaning. :param context: an admin context. :param node_id: the ID or UUID of a node. :param clean_steps: an ordered list of clean steps that will be performed on the node. A clean step is a dictionary with required keys 'interface' and 'step', and optional key 'args'. If specified, the 'args' arguments are passed to the clean step method:: { 'interface': <driver_interface>, 'step': <name_of_clean_step>, 'args': {<arg1>: <value1>, ..., <argn>: <valuen>} } For example (this isn't a real example, this clean step doesn't exist):: { 'interface': 'deploy', 'step': 'upgrade_firmware', 'args': {'force': True} } :raises: InvalidParameterValue if power validation fails. :raises: InvalidStateRequested if the node is not in manageable state. :raises: NodeLocked if node is locked by another conductor. :raises: NoFreeConductorWorker when there is no free worker to start async task. """ with task_manager.acquire(context, node_id, shared=False, purpose='node manual cleaning') as task: node = task.node if node.maintenance: raise exception.NodeInMaintenance(op=_('cleaning'), node=node.uuid) # NOTE(rloo): _do_node_clean() will also make a similar call # to validate the power, but we are doing it again here so that # the user gets immediate feedback of any issues. This behaviour # (of validating) is consistent with other methods like # self.do_node_deploy(). try: task.driver.power.validate(task) except exception.InvalidParameterValue as e: msg = (_('RPC do_node_clean failed to validate power info.' ' Cannot clean node %(node)s. Error: %(msg)s') % {'node': node.uuid, 'msg': e}) raise exception.InvalidParameterValue(msg) try: task.process_event( 'clean', callback=self._spawn_worker, call_args=(self._do_node_clean, task, clean_steps), err_handler=utils.provisioning_error_handler, target_state=states.MANAGEABLE) except exception.InvalidState: raise exception.InvalidStateRequested( action='manual clean', node=node.uuid, state=node.provision_state) def continue_node_clean(self, context, node_id): """RPC method to continue cleaning a node. This is useful for cleaning tasks that are async. When they complete, they call back via RPC, a new worker and lock are set up, and cleaning continues. This can also be used to resume cleaning on take_over. :param context: an admin context. :param node_id: the id or uuid of a node.
:raises: InvalidStateRequested if the node is not in CLEANWAIT state :raises: NoFreeConductorWorker when there is no free worker to start async task :raises: NodeLocked if node is locked by another conductor. :raises: NodeNotFound if the node no longer appears in the database :raises: NodeCleaningFailure if an internal error occurred when getting the next clean steps """ LOG.debug("RPC continue_node_clean called for node %s.", node_id) with task_manager.acquire(context, node_id, shared=False, purpose='continue node cleaning') as task: node = task.node if node.target_provision_state == states.MANAGEABLE: target_state = states.MANAGEABLE else: target_state = None # TODO(lucasagomes): CLEANING here for backwards compat # with previous code, otherwise nodes in CLEANING when this # is deployed would fail. Should be removed once the Mitaka # release starts. if node.provision_state not in (states.CLEANWAIT, states.CLEANING): raise exception.InvalidStateRequested(_( 'Cannot continue cleaning on %(node)s, node is in ' '%(state)s state, should be %(clean_state)s') % {'node': node.uuid, 'state': node.provision_state, 'clean_state': states.CLEANWAIT}) info = node.driver_internal_info try: skip_current_step = info.pop('skip_current_clean_step') except KeyError: skip_current_step = True else: node.driver_internal_info = info node.save() next_step_index = self._get_node_next_clean_steps( task, skip_current_step=skip_current_step) # If this isn't the final clean step in the cleaning operation # and it is flagged to abort after the clean step that just # finished, we abort the cleaning operation. if node.clean_step.get('abort_after'): step_name = node.clean_step['step'] if next_step_index is not None: LOG.debug('The cleaning operation for node %(node)s was ' 'marked to be aborted after step "%(step)s" ' 'completed. Aborting now that it has completed.', {'node': task.node.uuid, 'step': step_name}) task.process_event( 'abort', callback=self._spawn_worker, call_args=(self._do_node_clean_abort, task, step_name), err_handler=utils.provisioning_error_handler, target_state=target_state) return LOG.debug('The cleaning operation for node %(node)s was ' 'marked to be aborted after step "%(step)s" ' 'completed. However, since there are no more ' 'clean steps after this, the abort is not going ' 'to be done.', {'node': node.uuid, 'step': step_name}) # TODO(lucasagomes): This conditional is here for backwards # compat with previous code. Should be removed once the Mitaka # release starts. if node.provision_state == states.CLEANWAIT: task.process_event('resume', target_state=target_state) task.set_spawn_error_hook(utils.spawn_cleaning_error_handler, task.node) task.spawn_after( self._spawn_worker, self._do_next_clean_step, task, next_step_index) def _do_node_clean(self, task, clean_steps=None): """Internal RPC method to perform cleaning of a node. :param task: a TaskManager instance with an exclusive lock on its node :param clean_steps: For a manual clean, the list of clean steps to perform. Is None for automated cleaning (default). For more information, see the clean_steps parameter of :func:`ConductorManager.do_node_clean`. """ node = task.node manual_clean = clean_steps is not None clean_type = 'manual' if manual_clean else 'automated' LOG.debug('Starting %(type)s cleaning for node %(node)s', {'type': clean_type, 'node': node.uuid}) if not manual_clean and not CONF.conductor.automated_clean: # Skip cleaning, move to AVAILABLE.
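# (Illustrative note, added for clarity rather than taken from the
# original comments: the automated_clean knob checked above lives in
# the [conductor] section of ironic.conf, so an operator who wants to
# skip automated cleaning entirely would set, for example:
#
#     [conductor]
#     automated_clean = False
#
# with that setting, manual cleaning still works, but this branch
# moves the node straight to AVAILABLE without running any steps.)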
node.clean_step = None node.save() task.process_event('done') LOG.info(_LI('Automated cleaning is disabled, node %s has been ' 'successfully moved to AVAILABLE state.'), node.uuid) return try: # NOTE(ghe): Valid power driver values are needed to perform # a cleaning. task.driver.power.validate(task) except (exception.InvalidParameterValue, exception.MissingParameterValue) as e: msg = (_('Failed to validate power driver interface. ' 'Can not clean node %(node)s. Error: %(msg)s') % {'node': node.uuid, 'msg': e}) return utils.cleaning_error_handler(task, msg) if manual_clean: info = node.driver_internal_info info['clean_steps'] = clean_steps node.driver_internal_info = info node.save() # Allow the deploy driver to set up the ramdisk again (necessary for # IPA cleaning) try: prepare_result = task.driver.deploy.prepare_cleaning(task) except Exception as e: msg = (_('Failed to prepare node %(node)s for cleaning: %(e)s') % {'node': node.uuid, 'e': e}) LOG.exception(msg) return utils.cleaning_error_handler(task, msg) # TODO(lucasagomes): Should be removed once the Mitaka release starts if prepare_result == states.CLEANING: LOG.warning(_LW('Returning CLEANING for asynchronous prepare ' 'cleaning has been deprecated. Please use ' 'CLEANWAIT instead.')) prepare_result = states.CLEANWAIT if prepare_result == states.CLEANWAIT: # Prepare is asynchronous, the deploy driver will need to # set node.driver_internal_info['clean_steps'] and # node.clean_step and then make an RPC call to # continue_node_cleaning to start cleaning. # For manual cleaning, the target provision state is MANAGEABLE, # whereas for automated cleaning, it is AVAILABLE (the default). target_state = states.MANAGEABLE if manual_clean else None task.process_event('wait', target_state=target_state) return try: utils.set_node_cleaning_steps(task) except (exception.InvalidParameterValue, exception.NodeCleaningFailure) as e: msg = (_('Cannot clean node %(node)s. Error: %(msg)s') % {'node': node.uuid, 'msg': e}) return utils.cleaning_error_handler(task, msg) steps = node.driver_internal_info.get('clean_steps', []) step_index = 0 if steps else None self._do_next_clean_step(task, step_index) def _do_next_clean_step(self, task, step_index): """Do cleaning, starting from the specified clean step. :param task: a TaskManager instance with an exclusive lock :param step_index: The first clean step in the list to execute. This is the index (from 0) into the list of clean steps in the node's driver_internal_info['clean_steps']. Is None if there are no steps to execute. """ node = task.node # For manual cleaning, the target provision state is MANAGEABLE, # whereas for automated cleaning, it is AVAILABLE. 
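# (Illustrative sketch of the data this method walks, assuming the
# list built by utils.set_node_cleaning_steps() in _do_node_clean;
# the step name and priority below are hypothetical examples, not
# values guaranteed by any driver:
#
#     node.driver_internal_info['clean_steps'] = [
#         {'interface': 'deploy', 'step': 'erase_devices',
#          'priority': 10, 'args': {}},
#     ]
#     node.driver_internal_info['clean_step_index'] = 0
# )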
manual_clean = node.target_provision_state == states.MANAGEABLE driver_internal_info = node.driver_internal_info if step_index is None: steps = [] else: steps = driver_internal_info['clean_steps'][step_index:] LOG.info(_LI('Executing %(state)s on node %(node)s, remaining steps: ' '%(steps)s'), {'node': node.uuid, 'steps': steps, 'state': node.provision_state}) # Execute each step until we hit an async step or run out of steps for ind, step in enumerate(steps): # Save which step we're about to start so we can restart # if necessary node.clean_step = step driver_internal_info['clean_step_index'] = step_index + ind node.driver_internal_info = driver_internal_info node.save() interface = getattr(task.driver, step.get('interface')) LOG.info(_LI('Executing %(step)s on node %(node)s'), {'step': step, 'node': node.uuid}) try: result = interface.execute_clean_step(task, step) except Exception as e: msg = (_('Node %(node)s failed step %(step)s: ' '%(exc)s') % {'node': node.uuid, 'exc': e, 'step': node.clean_step}) LOG.exception(msg) utils.cleaning_error_handler(task, msg) return # TODO(lucasagomes): Should be removed once the Mitaka # release starts if result == states.CLEANING: LOG.warning(_LW('Returning CLEANING for asynchronous clean ' 'steps has been deprecated. Please use ' 'CLEANWAIT instead.')) result = states.CLEANWAIT # Check if the step is done or not. The step should return # states.CLEANWAIT if the step is still being executed, or # None if the step is done. if result == states.CLEANWAIT: # Kill this worker, the async step will make an RPC call to # continue_node_clean to continue cleaning LOG.info(_LI('Clean step %(step)s on node %(node)s being ' 'executed asynchronously, waiting for driver.') % {'node': node.uuid, 'step': step}) target_state = states.MANAGEABLE if manual_clean else None task.process_event('wait', target_state=target_state) return elif result is not None: msg = (_('While executing step %(step)s on node ' '%(node)s, step returned invalid value: %(val)s') % {'step': step, 'node': node.uuid, 'val': result}) LOG.error(msg) return utils.cleaning_error_handler(task, msg) LOG.info(_LI('Node %(node)s finished clean step %(step)s'), {'node': node.uuid, 'step': step}) # Clear clean_step node.clean_step = None driver_internal_info['clean_steps'] = None driver_internal_info.pop('clean_step_index', None) node.driver_internal_info = driver_internal_info node.save() try: task.driver.deploy.tear_down_cleaning(task) except Exception as e: msg = (_('Failed to tear down from cleaning for node %s') % node.uuid) LOG.exception(msg) return utils.cleaning_error_handler(task, msg, tear_down_cleaning=False) LOG.info(_LI('Node %s cleaning complete'), node.uuid) event = 'manage' if manual_clean else 'done' # NOTE(rloo): No need to specify target prov. state; we're done task.process_event(event) def _do_node_verify(self, task): """Internal method to perform power credentials verification.""" node = task.node LOG.debug('Starting power credentials verification for node %s', node.uuid) error = None try: task.driver.power.validate(task) except Exception as e: error = (_('Failed to validate power driver interface for node ' '%(node)s. Error: %(msg)s') % {'node': node.uuid, 'msg': e}) else: try: power_state = task.driver.power.get_power_state(task) except Exception as e: error = (_('Failed to get power state for node ' '%(node)s. 
Error: %(msg)s') % {'node': node.uuid, 'msg': e}) if error is None: node.power_state = power_state task.process_event('done') else: LOG.error(error) node.last_error = error task.process_event('fail') node.target_provision_state = None node.save() def _do_node_clean_abort(self, task, step_name=None): """Internal method to abort an ongoing operation. :param task: a TaskManager instance with an exclusive lock :param step_name: The name of the clean step. """ node = task.node try: task.driver.deploy.tear_down_cleaning(task) except Exception as e: LOG.exception(_LE('Failed to tear down cleaning for node %(node)s ' 'after aborting the operation. Error: %(err)s'), {'node': node.uuid, 'err': e}) error_msg = _('Failed to tear down cleaning after aborting ' 'the operation') utils.cleaning_error_handler(task, error_msg, tear_down_cleaning=False, set_fail_state=False) return info_message = _('Clean operation aborted for node %s') % node.uuid last_error = _('By request, the clean operation was aborted') if step_name: msg = _(' after the completion of step "%s"') % step_name last_error += msg info_message += msg node.last_error = last_error node.clean_step = None node.save() LOG.info(info_message) @messaging.expected_exceptions(exception.NoFreeConductorWorker, exception.NodeLocked, exception.InvalidParameterValue, exception.MissingParameterValue, exception.InvalidStateRequested) def do_provisioning_action(self, context, node_id, action): """RPC method to initiate certain provisioning state transitions. Initiate a provisioning state change through the state machine, rather than through an RPC call to do_node_deploy / do_node_tear_down :param context: an admin context. :param node_id: the id or uuid of a node. :param action: an action. One of ironic.common.states.VERBS :raises: InvalidParameterValue :raises: InvalidStateRequested :raises: NoFreeConductorWorker """ with task_manager.acquire(context, node_id, shared=False, purpose='provision action %s' % action) as task: node = task.node if (action == states.VERBS['provide'] and node.provision_state == states.MANAGEABLE): task.process_event( 'provide', callback=self._spawn_worker, call_args=(self._do_node_clean, task), err_handler=utils.provisioning_error_handler) return if (action == states.VERBS['manage'] and node.provision_state == states.ENROLL): task.process_event( 'manage', callback=self._spawn_worker, call_args=(self._do_node_verify, task), err_handler=utils.provisioning_error_handler) return if (action == states.VERBS['abort'] and node.provision_state == states.CLEANWAIT): # Check if the clean step is abortable; if so abort it. # Otherwise, indicate in that clean step, that cleaning # should be aborted after that step is done. if (node.clean_step and not node.clean_step.get('abortable')): LOG.info(_LI('The current clean step "%(clean_step)s" for ' 'node %(node)s is not abortable. 
Adding a ' 'flag to abort the cleaning after the clean ' 'step is completed.'), {'clean_step': node.clean_step['step'], 'node': node.uuid}) clean_step = node.clean_step if not clean_step.get('abort_after'): clean_step['abort_after'] = True node.clean_step = clean_step node.save() return LOG.debug('Aborting the cleaning operation during clean step ' '"%(step)s" for node %(node)s in provision state ' '"%(prov)s".', {'node': node.uuid, 'prov': node.provision_state, 'step': node.clean_step.get('step')}) target_state = None if node.target_provision_state == states.MANAGEABLE: target_state = states.MANAGEABLE task.process_event( 'abort', callback=self._spawn_worker, call_args=(self._do_node_clean_abort, task), err_handler=utils.provisioning_error_handler, target_state=target_state) return try: task.process_event(action) except exception.InvalidState: raise exception.InvalidStateRequested( action=action, node=node.uuid, state=node.provision_state) @periodics.periodic(spacing=CONF.conductor.sync_power_state_interval) def _sync_power_states(self, context): """Periodic task to sync power states for the nodes. Attempt to grab a lock and sync only if the following conditions are met: 1) Node is mapped to this conductor. 2) Node is not in maintenance mode. 3) Node is not in DEPLOYWAIT/CLEANWAIT provision state. 4) Node doesn't have a reservation. NOTE: Grabbing a lock here can cause other methods to fail to grab it. We want to avoid trying to grab a lock while a node is in the DEPLOYWAIT/CLEANWAIT state so we don't unnecessarily cause a deploy/cleaning callback to fail. There's not much we can do here to avoid failing a brand new deploy to a node that we've locked here, though. """ # FIXME(comstud): Since our initial state checks are outside # of the lock (to try to avoid the lock), some checks are # repeated after grabbing the lock so we can unlock quickly. # The node mapping is not re-checked because it doesn't much # matter if things happened to re-balance. # # This is inefficient and racy. We end up calling the DB API's # get_node() twice (once here, and once in acquire()). Ideally we # add a way to pass constraints to task_manager.acquire() # (through to its DB API call) so that we can eliminate our call # and first set of checks below. filters = {'reserved': False, 'maintenance': False} node_iter = self.iter_nodes(fields=['id'], filters=filters) for (node_uuid, driver, node_id) in node_iter: try: # NOTE(dtantsur): start with a shared lock, upgrade if needed with task_manager.acquire(context, node_uuid, purpose='power state sync', shared=True) as task: # NOTE(deva): we should not acquire a lock on a node in # DEPLOYWAIT/CLEANWAIT, as this could cause # an error within a deploy ramdisk POSTing back # at the same time. # NOTE(dtantsur): it's also pointless (and dangerous) to # sync power state when a power action is in progress if (task.node.provision_state in SYNC_EXCLUDED_STATES or task.node.maintenance or task.node.target_power_state): continue count = do_sync_power_state( task, self.power_state_sync_count[node_uuid]) if count: self.power_state_sync_count[node_uuid] = count else: # don't bloat the dict with non-failing nodes del self.power_state_sync_count[node_uuid] except exception.NodeNotFound: LOG.info(_LI("During sync_power_state, node %(node)s was not " "found and presumed deleted by another process."), {'node': node_uuid}) except exception.NodeLocked: LOG.info(_LI("During sync_power_state, node %(node)s was " "already locked by another process.
Skip."), {'node': node_uuid}) finally: # Yield on every iteration eventlet.sleep(0) @periodics.periodic(spacing=CONF.conductor.check_provision_state_interval) def _check_deploy_timeouts(self, context): """Periodically checks whether a deploy RPC call has timed out. If a deploy call has timed out, the deploy failed and we clean up. :param context: request context. """ callback_timeout = CONF.conductor.deploy_callback_timeout if not callback_timeout: return filters = {'reserved': False, 'provision_state': states.DEPLOYWAIT, 'maintenance': False, 'provisioned_before': callback_timeout} sort_key = 'provision_updated_at' callback_method = utils.cleanup_after_timeout err_handler = utils.provisioning_error_handler self._fail_if_in_state(context, filters, states.DEPLOYWAIT, sort_key, callback_method, err_handler) @periodics.periodic(spacing=CONF.conductor.check_provision_state_interval) def _check_deploying_status(self, context): """Periodically checks the status of nodes in DEPLOYING state. Periodically checks the nodes in DEPLOYING and the state of the conductor deploying them. If we find out that a conductor that was provisioning the node has died we then break release the node and gracefully mark the deployment as failed. :param context: request context. """ offline_conductors = self.dbapi.get_offline_conductors() if not offline_conductors: return node_iter = self.iter_nodes( fields=['id', 'reservation'], filters={'provision_state': states.DEPLOYING, 'maintenance': False, 'reserved_by_any_of': offline_conductors}) if not node_iter: return for node_uuid, driver, node_id, conductor_hostname in node_iter: # NOTE(lucasagomes): Although very rare, this may lead to a # race condition. By the time we release the lock the conductor # that was previously managing the node could be back online. try: objects.Node.release(context, conductor_hostname, node_id) except exception.NodeNotFound: LOG.warning(_LW("During checking for deploying state, node " "%s was not found and presumed deleted by " "another process. Skipping."), node_uuid) continue except exception.NodeLocked: LOG.warning(_LW("During checking for deploying state, when " "releasing the lock of the node %s, it was " "locked by another process. Skipping."), node_uuid) continue except exception.NodeNotLocked: LOG.warning(_LW("During checking for deploying state, when " "releasing the lock of the node %s, it was " "already unlocked."), node_uuid) self._fail_if_in_state( context, {'id': node_id}, states.DEPLOYING, 'provision_updated_at', callback_method=utils.cleanup_after_timeout, err_handler=utils.provisioning_error_handler) def _do_takeover(self, task): """Take over this node. Prepares a node for takeover by this conductor, performs the takeover, and changes the conductor associated with the node. The node with the new conductor affiliation is saved to the DB. :param task: a TaskManager instance """ LOG.debug(('Conductor %(cdr)s taking over node %(node)s'), {'cdr': self.host, 'node': task.node.uuid}) task.driver.deploy.prepare(task) task.driver.deploy.take_over(task) # NOTE(zhenguo): If console enabled, take over the console session # as well. if task.node.console_enabled: try: task.driver.console.start_console(task) except Exception as err: msg = (_('Failed to start console while taking over the ' 'node %(node)s: %(err)s.') % {'node': task.node.uuid, 'err': err}) LOG.error(msg) # If taking over console failed, set node's console_enabled # back to False and set node's last error. 
task.node.last_error = msg task.node.console_enabled = False # NOTE(lucasagomes): Set the ID of the new conductor managing # this node task.node.conductor_affinity = self.conductor.id task.node.save() @periodics.periodic(spacing=CONF.conductor.check_provision_state_interval) def _check_cleanwait_timeouts(self, context): """Periodically checks for nodes being cleaned. If a node doing cleaning is unresponsive (detected when it stops heart beating), the operation should be aborted. :param context: request context. """ callback_timeout = CONF.conductor.clean_callback_timeout if not callback_timeout: return filters = {'reserved': False, 'provision_state': states.CLEANWAIT, 'maintenance': False, 'provisioned_before': callback_timeout} last_error = _("Timeout reached while cleaning the node. Please " "check if the ramdisk responsible for the cleaning is " "running on the node.") self._fail_if_in_state(context, filters, states.CLEANWAIT, 'provision_updated_at', last_error=last_error, keep_target_state=True) @periodics.periodic(spacing=CONF.conductor.sync_local_state_interval) def _sync_local_state(self, context): """Perform any actions necessary to sync local state. This is called periodically to refresh the conductor's copy of the consistent hash ring. If any mappings have changed, this method then determines which, if any, nodes need to be "taken over". The ensuing actions could include preparing a PXE environment, updating the DHCP server, and so on. """ filters = {'reserved': False, 'maintenance': False, 'provision_state': states.ACTIVE} node_iter = self.iter_nodes(fields=['id', 'conductor_affinity'], filters=filters) workers_count = 0 for node_uuid, driver, node_id, conductor_affinity in node_iter: if conductor_affinity == self.conductor.id: continue # Node is mapped here, but not updated by this conductor last try: with task_manager.acquire(context, node_uuid, purpose='node take over') as task: # NOTE(deva): now that we have the lock, check again to # avoid racing with deletes and other state changes node = task.node if (node.maintenance or node.conductor_affinity == self.conductor.id or node.provision_state != states.ACTIVE): continue task.spawn_after(self._spawn_worker, self._do_takeover, task) except exception.NoFreeConductorWorker: break except (exception.NodeLocked, exception.NodeNotFound): continue workers_count += 1 if workers_count == CONF.conductor.periodic_max_workers: break @messaging.expected_exceptions(exception.NodeLocked) def validate_driver_interfaces(self, context, node_id): """Validate the `core` and `standardized` interfaces for drivers. :param context: request context. :param node_id: node id or uuid. :returns: a dictionary containing the results of each interface validation. """ LOG.debug('RPC validate_driver_interfaces called for node %s.', node_id) ret_dict = {} lock_purpose = 'driver interface validation' with task_manager.acquire(context, node_id, shared=True, purpose=lock_purpose) as task: # NOTE(sirushtim): the is_whole_disk_image variable is needed by # deploy drivers for doing their validate(). Since the deploy # isn't being done yet and the driver information could change in # the meantime, we don't know if the is_whole_disk_image value will # change or not. It isn't saved to the DB, but only used with this # node instance for the current validations. 
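# (Illustrative example of the dictionary this method returns,
# assuming a driver whose deploy validation fails and which lacks a
# console interface; the 'reason' wording is hypothetical and
# driver-specific:
#
#     {'power': {'result': True},
#      'deploy': {'result': False, 'reason': 'Missing parameter X'},
#      'console': {'result': None, 'reason': 'not supported'}}
# )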
iwdi = images.is_whole_disk_image(context, task.node.instance_info) task.node.driver_internal_info['is_whole_disk_image'] = iwdi for iface_name in task.driver.non_vendor_interfaces: iface = getattr(task.driver, iface_name, None) result = reason = None if iface: try: iface.validate(task) result = True except (exception.InvalidParameterValue, exception.UnsupportedDriverExtension, exception.MissingParameterValue) as e: result = False reason = str(e) else: reason = _('not supported') ret_dict[iface_name] = {} ret_dict[iface_name]['result'] = result if reason is not None: ret_dict[iface_name]['reason'] = reason return ret_dict @messaging.expected_exceptions(exception.NodeLocked, exception.NodeAssociated, exception.InvalidState) def destroy_node(self, context, node_id): """Delete a node. :param context: request context. :param node_id: node id or uuid. :raises: NodeLocked if node is locked by another conductor. :raises: NodeAssociated if the node contains an instance associated with it. :raises: InvalidState if the node is in the wrong provision state to perform deletion. """ # NOTE(dtantsur): we allow deleting a node in maintenance mode even if # we would disallow it otherwise. That's done for recovering hopelessly # broken nodes (e.g. with broken BMC). with task_manager.acquire(context, node_id, purpose='node deletion') as task: node = task.node if not node.maintenance and node.instance_uuid is not None: raise exception.NodeAssociated(node=node.uuid, instance=node.instance_uuid) # NOTE(lucasagomes): For the *FAIL states, users should # move the node to a safe state prior to deletion. This is because we # should try to avoid deleting a node in a dirty/whacky state, # e.g.: A node in DEPLOYFAIL, if deleted without passing through # tear down/cleaning, may leave data from the previous tenant # in the disk. So nodes in *FAIL states should first be moved to: # CLEANFAIL -> MANAGEABLE # INSPECTIONFAIL -> MANAGEABLE # DEPLOYFAIL -> DELETING if (not node.maintenance and node.provision_state not in states.DELETE_ALLOWED_STATES): msg = (_('Can not delete node "%(node)s" while it is in ' 'provision state "%(state)s". Valid provision states ' 'to perform deletion are: "%(valid_states)s"') % {'node': node.uuid, 'state': node.provision_state, 'valid_states': states.DELETE_ALLOWED_STATES}) raise exception.InvalidState(msg) if node.console_enabled: try: task.driver.console.stop_console(task) except Exception as err: LOG.error(_LE('Failed to stop console while deleting ' 'the node %(node)s: %(err)s.'), {'node': node.uuid, 'err': err}) node.destroy() LOG.info(_LI('Successfully deleted node %(node)s.'), {'node': node.uuid}) @messaging.expected_exceptions(exception.NodeLocked, exception.NodeNotFound) def destroy_port(self, context, port): """Delete a port. :param context: request context. :param port: port object :raises: NodeLocked if node is locked by another conductor. :raises: NodeNotFound if the node associated with the port does not exist. """ LOG.debug('RPC destroy_port called for port %(port)s', {'port': port.uuid}) with task_manager.acquire(context, port.node_id, purpose='port deletion') as task: port.destroy() LOG.info(_LI('Successfully deleted port %(port)s. ' 'The node associated with the port was ' '%(node)s'), {'port': port.uuid, 'node': task.node.uuid}) @messaging.expected_exceptions(exception.NodeLocked, exception.NodeNotFound, exception.PortgroupNotEmpty) def destroy_portgroup(self, context, portgroup): """Delete a portgroup. :param context: request context.
:param portgroup: portgroup object :raises: NodeLocked if node is locked by another conductor. :raises: NodeNotFound if the node associated with the portgroup does not exist. :raises: PortgroupNotEmpty if portgroup is not empty """ LOG.debug('RPC destroy_portgroup called for portgroup %(portgroup)s', {'portgroup': portgroup.uuid}) with task_manager.acquire(context, portgroup.node_id, purpose='portgroup deletion') as task: portgroup.destroy() LOG.info(_LI('Successfully deleted portgroup %(portgroup)s. ' 'The node associated with the portgroup was ' '%(node)s'), {'portgroup': portgroup.uuid, 'node': task.node.uuid}) @messaging.expected_exceptions(exception.NodeLocked, exception.UnsupportedDriverExtension, exception.NodeConsoleNotEnabled, exception.InvalidParameterValue, exception.MissingParameterValue) def get_console_information(self, context, node_id): """Get connection information about the console. :param context: request context. :param node_id: node id or uuid. :raises: UnsupportedDriverExtension if the node's driver doesn't support console. :raises: NodeConsoleNotEnabled if the console is not enabled. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if missing supplied info. """ LOG.debug('RPC get_console_information called for node %s' % node_id) lock_purpose = 'getting console information' with task_manager.acquire(context, node_id, shared=True, purpose=lock_purpose) as task: node = task.node if not getattr(task.driver, 'console', None): raise exception.UnsupportedDriverExtension(driver=node.driver, extension='console') if not node.console_enabled: raise exception.NodeConsoleNotEnabled(node=node.uuid) task.driver.console.validate(task) return task.driver.console.get_console(task) @messaging.expected_exceptions(exception.NoFreeConductorWorker, exception.NodeLocked, exception.UnsupportedDriverExtension, exception.InvalidParameterValue, exception.MissingParameterValue) def set_console_mode(self, context, node_id, enabled): """Enable/Disable the console. Validate driver specific information synchronously, and then spawn a background worker to set console mode asynchronously. :param context: request context. :param node_id: node id or uuid. :param enabled: Boolean value; whether the console is enabled or disabled. :raises: UnsupportedDriverExtension if the node's driver doesn't support console. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if missing supplied info. 
:raises: NoFreeConductorWorker when there is no free worker to start async task """ LOG.debug('RPC set_console_mode called for node %(node)s with ' 'enabled %(enabled)s' % {'node': node_id, 'enabled': enabled}) with task_manager.acquire(context, node_id, shared=False, purpose='setting console mode') as task: node = task.node if not getattr(task.driver, 'console', None): raise exception.UnsupportedDriverExtension(driver=node.driver, extension='console') task.driver.console.validate(task) if enabled == node.console_enabled: op = _('enabled') if enabled else _('disabled') LOG.info(_LI("No console action was triggered because the " "console is already %s"), op) task.release_resources() else: node.last_error = None node.save() task.spawn_after(self._spawn_worker, self._set_console_mode, task, enabled) def _set_console_mode(self, task, enabled): """Internal method to set console mode on a node.""" node = task.node try: if enabled: task.driver.console.start_console(task) # TODO(deva): We should be updating conductor_affinity here # but there is no support for console sessions in # take_over() right now. else: task.driver.console.stop_console(task) except Exception as e: op = _('enabling') if enabled else _('disabling') msg = (_('Error %(op)s the console on node %(node)s. ' 'Reason: %(error)s') % {'op': op, 'node': node.uuid, 'error': e}) node.last_error = msg LOG.error(msg) else: node.console_enabled = enabled node.last_error = None finally: node.save() @messaging.expected_exceptions(exception.NodeLocked, exception.FailedToUpdateMacOnPort, exception.MACAlreadyExists, exception.InvalidState) def update_port(self, context, port_obj): """Update a port. :param context: request context. :param port_obj: a changed (but not saved) port object. :raises: DHCPLoadError if the dhcp_provider cannot be loaded. :raises: FailedToUpdateMacOnPort if MAC address changed and update failed. :raises: MACAlreadyExists if the update is setting a MAC which is registered on another port already. :raises: InvalidState if port connectivity attributes are updated while node not in a MANAGEABLE or ENROLL or INSPECTING state or not in MAINTENANCE mode. """ port_uuid = port_obj.uuid LOG.debug("RPC update_port called for port %s.", port_uuid) with task_manager.acquire(context, port_obj.node_id, purpose='port update') as task: node = task.node # If port update is modifying the portgroup membership of the port # or modifying the local_link_connection or pxe_enabled flags then # node should be in MANAGEABLE/INSPECTING/ENROLL provisioning state # or in maintenance mode. # Otherwise InvalidState exception is raised. connectivity_attr = {'portgroup_uuid', 'pxe_enabled', 'local_link_connection'} allowed_update_states = [states.ENROLL, states.INSPECTING, states.MANAGEABLE] if (set(port_obj.obj_what_changed()) & connectivity_attr and not (node.provision_state in allowed_update_states or node.maintenance)): action = _("Port %(port)s can not have any connectivity " "attributes (%(connect)s) updated unless " "node %(node)s is in a %(allowed)s state " "or in maintenance mode.") raise exception.InvalidState( action % {'port': port_uuid, 'node': node.uuid, 'connect': ', '.join(connectivity_attr), 'allowed': ', '.join(allowed_update_states)}) if 'address' in port_obj.obj_what_changed(): vif = port_obj.extra.get('vif_port_id') if vif: api = dhcp_factory.DHCPFactory() api.provider.update_port_address(vif, port_obj.address, token=context.auth_token) # Log warning if there is no vif_port_id and an instance # is associated with the node. 
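# (Illustrative, assuming standard oslo.versionedobjects behaviour:
# for a port whose MAC was edited through the API,
# port_obj.obj_what_changed() returns a set containing 'address',
# which is what routes the update into the branch above and, absent
# a VIF, into the warning below.)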
elif node.instance_uuid: LOG.warning(_LW( "No VIF found for instance %(instance)s " "port %(port)s when attempting to update port MAC " "address."), {'port': port_uuid, 'instance': node.instance_uuid}) port_obj.save() return port_obj @messaging.expected_exceptions(exception.NodeLocked, exception.FailedToUpdateMacOnPort, exception.PortgroupMACAlreadyExists) def update_portgroup(self, context, portgroup_obj): """Update a portgroup. :param context: request context. :param portgroup_obj: a changed (but not saved) portgroup object. :raises: DHCPLoadError if the dhcp_provider cannot be loaded. :raises: FailedToUpdateMacOnPort if MAC address changed and update failed. :raises: PortgroupMACAlreadyExists if the update is setting a MAC which is registered on another portgroup already. """ portgroup_uuid = portgroup_obj.uuid LOG.debug("RPC update_portgroup called for portgroup %s.", portgroup_uuid) lock_purpose = 'update portgroup' with task_manager.acquire(context, portgroup_obj.node_id, purpose=lock_purpose) as task: node = task.node if 'address' in portgroup_obj.obj_what_changed(): vif = portgroup_obj.extra.get('vif_portgroup_id') if vif: api = dhcp_factory.DHCPFactory() api.provider.update_port_address( vif, portgroup_obj.address, token=context.auth_token) # Log warning if there is no vif_portgroup_id and an instance # is associated with the node. elif node.instance_uuid: LOG.warning(_LW( "No VIF was found for instance %(instance)s " "on node %(node)s, when attempting to update " "portgroup %(portgroup)s MAC address."), {'portgroup': portgroup_uuid, 'instance': node.instance_uuid, 'node': node.uuid}) portgroup_obj.save() return portgroup_obj @messaging.expected_exceptions(exception.DriverNotFound) def get_driver_properties(self, context, driver_name): """Get the properties of the driver. :param context: request context. :param driver_name: name of the driver. :returns: a dictionary with <property name>:<property description> entries. :raises: DriverNotFound if the driver is not loaded. """ LOG.debug("RPC get_driver_properties called for driver %s.", driver_name) driver = driver_factory.get_driver(driver_name) return driver.get_properties() @periodics.periodic(spacing=CONF.conductor.send_sensor_data_interval) def _send_sensor_data(self, context): """Periodically sends sensor data to Ceilometer.""" # do nothing if send_sensor_data option is False if not CONF.conductor.send_sensor_data: return filters = {'associated': True} node_iter = self.iter_nodes(fields=['instance_uuid'], filters=filters) for (node_uuid, driver, instance_uuid) in node_iter: # populate the message which will be sent to ceilometer message = {'message_id': uuidutils.generate_uuid(), 'instance_uuid': instance_uuid, 'node_uuid': node_uuid, 'timestamp': datetime.datetime.utcnow(), 'event_type': 'hardware.ipmi.metrics.update'} try: lock_purpose = 'getting sensors data' with task_manager.acquire(context, node_uuid, shared=True, purpose=lock_purpose) as task: if not getattr(task.driver, 'management', None): continue task.driver.management.validate(task) sensors_data = task.driver.management.get_sensors_data( task) except NotImplementedError: LOG.warning(_LW( 'get_sensors_data is not implemented for driver' ' %(driver)s, node_uuid is %(node)s'), {'node': node_uuid, 'driver': driver}) except exception.FailedToParseSensorData as fps: LOG.warning(_LW( "During get_sensors_data, could not parse " "sensor data for node %(node)s.
Error: %(err)s."), {'node': node_uuid, 'err': str(fps)}) except exception.FailedToGetSensorData as fgs: LOG.warning(_LW( "During get_sensors_data, could not get " "sensor data for node %(node)s. Error: %(err)s."), {'node': node_uuid, 'err': str(fgs)}) except exception.NodeNotFound: LOG.warning(_LW( "During send_sensor_data, node %(node)s was not " "found and presumed deleted by another process."), {'node': node_uuid}) except Exception as e: LOG.warning(_LW( "Failed to get sensor data for node %(node)s. " "Error: %(error)s"), {'node': node_uuid, 'error': str(e)}) else: message['payload'] = ( self._filter_out_unsupported_types(sensors_data)) if message['payload']: self.notifier.info(context, "hardware.ipmi.metrics", message) finally: # Yield on every iteration eventlet.sleep(0) def _filter_out_unsupported_types(self, sensors_data): """Filters out sensor data types that aren't specified in the config. Removes sensor data types that aren't specified in CONF.conductor.send_sensor_data_types. :param sensors_data: dict containing sensor types and the associated data :returns: dict with unsupported sensor types removed """ allowed = set(x.lower() for x in CONF.conductor.send_sensor_data_types) if 'all' in allowed: return sensors_data return dict((sensor_type, sensor_value) for (sensor_type, sensor_value) in sensors_data.items() if sensor_type.lower() in allowed) @messaging.expected_exceptions(exception.NodeLocked, exception.UnsupportedDriverExtension, exception.InvalidParameterValue, exception.MissingParameterValue) def set_boot_device(self, context, node_id, device, persistent=False): """Set the boot device for a node. Set the boot device to use on next reboot of the node. :param context: request context. :param node_id: node id or uuid. :param device: the boot device, one of :mod:`ironic.common.boot_devices`. :param persistent: Whether to set next-boot, or make the change permanent. Default: False. :raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support management. :raises: InvalidParameterValue when the wrong driver info is specified or an invalid boot device is specified. :raises: MissingParameterValue if missing supplied info. """ LOG.debug('RPC set_boot_device called for node %(node)s with ' 'device %(device)s', {'node': node_id, 'device': device}) with task_manager.acquire(context, node_id, purpose='setting boot device') as task: node = task.node if not getattr(task.driver, 'management', None): raise exception.UnsupportedDriverExtension( driver=node.driver, extension='management') task.driver.management.validate(task) task.driver.management.set_boot_device(task, device, persistent=persistent) @messaging.expected_exceptions(exception.NodeLocked, exception.UnsupportedDriverExtension, exception.InvalidParameterValue, exception.MissingParameterValue) def get_boot_device(self, context, node_id): """Get the current boot device. Returns the current boot device of a node. :param context: request context. :param node_id: node id or uuid. :raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support management. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if missing supplied info. :returns: a dictionary containing: :boot_device: the boot device, one of :mod:`ironic.common.boot_devices` or None if it is unknown. 
:persistent: Whether the boot device will persist to all future boots or not, None if it is unknown. """ LOG.debug('RPC get_boot_device called for node %s', node_id) with task_manager.acquire(context, node_id, purpose='getting boot device') as task: if not getattr(task.driver, 'management', None): raise exception.UnsupportedDriverExtension( driver=task.node.driver, extension='management') task.driver.management.validate(task) return task.driver.management.get_boot_device(task) @messaging.expected_exceptions(exception.NodeLocked, exception.UnsupportedDriverExtension, exception.InvalidParameterValue, exception.MissingParameterValue) def get_supported_boot_devices(self, context, node_id): """Get the list of supported devices. Returns the list of supported boot devices of a node. :param context: request context. :param node_id: node id or uuid. :raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support management. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if missing supplied info. :returns: A list with the supported boot devices defined in :mod:`ironic.common.boot_devices`. """ LOG.debug('RPC get_supported_boot_devices called for node %s', node_id) lock_purpose = 'getting supported boot devices' with task_manager.acquire(context, node_id, shared=True, purpose=lock_purpose) as task: if not getattr(task.driver, 'management', None): raise exception.UnsupportedDriverExtension( driver=task.node.driver, extension='management') return task.driver.management.get_supported_boot_devices(task) @messaging.expected_exceptions(exception.NoFreeConductorWorker, exception.NodeLocked, exception.HardwareInspectionFailure, exception.InvalidStateRequested, exception.UnsupportedDriverExtension) def inspect_hardware(self, context, node_id): """Inspect hardware to obtain hardware properties. Initiate the inspection of a node. Validations are done synchronously and the actual inspection work is performed in background (asynchronously). :param context: request context. :param node_id: node id or uuid. :raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support inspect. :raises: NoFreeConductorWorker when there is no free worker to start async task :raises: HardwareInspectionFailure when unable to get essential scheduling properties from hardware. :raises: InvalidStateRequested if 'inspect' is not a valid action to do in the current state. """ LOG.debug('RPC inspect_hardware called for node %s', node_id) with task_manager.acquire(context, node_id, shared=False, purpose='hardware inspection') as task: if not getattr(task.driver, 'inspect', None): raise exception.UnsupportedDriverExtension( driver=task.node.driver, extension='inspect') try: task.driver.power.validate(task) task.driver.inspect.validate(task) except (exception.InvalidParameterValue, exception.MissingParameterValue) as e: error = (_("RPC inspect_hardware failed to validate " "inspection or power info. 
Error: %(msg)s") % {'msg': e}) raise exception.HardwareInspectionFailure(error=error) try: task.process_event( 'inspect', callback=self._spawn_worker, call_args=(_do_inspect_hardware, task), err_handler=utils.provisioning_error_handler) except exception.InvalidState: raise exception.InvalidStateRequested( action='inspect', node=task.node.uuid, state=task.node.provision_state) @periodics.periodic(spacing=CONF.conductor.check_provision_state_interval) def _check_inspect_timeouts(self, context): """Periodically checks inspect_timeout and fails upon reaching it. :param: context: request context """ callback_timeout = CONF.conductor.inspect_timeout if not callback_timeout: return filters = {'reserved': False, 'provision_state': states.INSPECTING, 'inspection_started_before': callback_timeout} sort_key = 'inspection_started_at' last_error = _("timeout reached while inspecting the node") self._fail_if_in_state(context, filters, states.INSPECTING, sort_key, last_error=last_error) @messaging.expected_exceptions(exception.NodeLocked, exception.UnsupportedDriverExtension, exception.InvalidParameterValue, exception.MissingParameterValue) def set_target_raid_config(self, context, node_id, target_raid_config): """Stores the target RAID configuration on the node. Stores the target RAID configuration on node.target_raid_config :param context: request context. :param node_id: node id or uuid. :param target_raid_config: Dictionary containing the target RAID configuration. It may be an empty dictionary as well. :raises: UnsupportedDriverExtension, if the node's driver doesn't support RAID configuration. :raises: InvalidParameterValue, if validation of target raid config fails. :raises: MissingParameterValue, if some required parameters are missing. :raises: NodeLocked if node is locked by another conductor. """ LOG.debug('RPC set_target_raid_config called for node %(node)s with ' 'RAID configuration %(target_raid_config)s', {'node': node_id, 'target_raid_config': target_raid_config}) with task_manager.acquire( context, node_id, purpose='setting target RAID config') as task: node = task.node if not getattr(task.driver, 'raid', None): raise exception.UnsupportedDriverExtension( driver=task.driver, extension='raid') # Operator may try to unset node.target_raid_config. So, try to # validate only if it is not empty. if target_raid_config: task.driver.raid.validate_raid_config(task, target_raid_config) node.target_raid_config = target_raid_config node.save() @messaging.expected_exceptions(exception.UnsupportedDriverExtension) def get_raid_logical_disk_properties(self, context, driver_name): """Get the logical disk properties for RAID configuration. Gets the information about logical disk properties which can be specified in the input RAID configuration. :param context: request context. :param driver_name: name of the driver :raises: UnsupportedDriverExtension, if the driver doesn't support RAID configuration. :returns: A dictionary containing the properties and a textual description for them. """ LOG.debug("RPC get_raid_logical_disk_properties " "called for driver %s" % driver_name) driver = driver_factory.get_driver(driver_name) if not getattr(driver, 'raid', None): raise exception.UnsupportedDriverExtension( driver=driver_name, extension='raid') return driver.raid.get_logical_disk_properties() def _object_dispatch(self, target, method, context, args, kwargs): """Dispatch a call to an object method. 
This ensures that object methods get called and any exception that is raised gets wrapped in an ExpectedException for forwarding back to the caller (without spamming the conductor logs). """ try: # NOTE(danms): Keep the getattr inside the try block since # a missing method is really a client problem return getattr(target, method)(context, *args, **kwargs) except Exception: # NOTE(danms): This is oslo.messaging fu. ExpectedException() # grabs sys.exc_info here and forwards it along. This allows the # caller to see the exception information, but causes us *not* to # log it as such in this service. This is something that is quite # critical so that things that conductor does on behalf of another # node are not logged as exceptions in conductor logs. Otherwise, # you'd have the same thing logged in both places, even though an # exception here *always* means that the caller screwed up, so # there's no reason to log it here. raise messaging.ExpectedException() def object_class_action_versions(self, context, objname, objmethod, object_versions, args, kwargs): """Perform an action on a VersionedObject class. :param context: The context within which to perform the action :param objname: The registry name of the object :param objmethod: The name of the action method to call :param object_versions: A dict of {objname: version} mappings :param args: The positional arguments to the action method :param kwargs: The keyword arguments to the action method :returns: The result of the action method, which may (or may not) be an instance of the implementing VersionedObject class. """ objclass = objects_base.IronicObject.obj_class_from_name( objname, object_versions[objname]) result = self._object_dispatch(objclass, objmethod, context, args, kwargs) # NOTE(danms): The RPC layer will convert to primitives for us, # but in this case, we need to honor the version the client is # asking for, so we do it before returning here. if isinstance(result, objects_base.IronicObject): result = result.obj_to_primitive( target_version=object_versions[objname], version_manifest=object_versions) return result def object_action(self, context, objinst, objmethod, args, kwargs): """Perform an action on a VersionedObject instance. :param context: The context within which to perform the action :param objinst: The object instance on which to perform the action :param objmethod: The name of the action method to call :param args: The positional arguments to the action method :param kwargs: The keyword arguments to the action method :returns: A tuple with the updates made to the object and the result of the action method """ oldobj = objinst.obj_clone() result = self._object_dispatch(objinst, objmethod, context, args, kwargs) updates = dict() # NOTE(danms): Diff the object with the one passed to us and # generate a list of changes to forward back for name, field in objinst.fields.items(): if not objinst.obj_attr_is_set(name): # Avoid demand-loading anything continue if (not oldobj.obj_attr_is_set(name) or getattr(oldobj, name) != getattr(objinst, name)): updates[name] = field.to_primitive(objinst, name, getattr(objinst, name)) # This is safe since a field named this would conflict with the # method anyway updates['obj_what_changed'] = objinst.obj_what_changed() return updates, result def object_backport_versions(self, context, objinst, object_versions): """Perform a backport of an object instance. 
The default behavior of the base VersionedObjectSerializer, upon receiving an object with a version newer than what is in the local registry, is to call this method to request a backport of the object. :param context: The context within which to perform the backport :param objinst: An instance of a VersionedObject to be backported :param object_versions: A dict of {objname: version} mappings :returns: The downgraded instance of objinst """ target = object_versions[objinst.obj_name()] LOG.debug('Backporting %(obj)s to %(ver)s with versions %(manifest)s', {'obj': objinst.obj_name(), 'ver': target, 'manifest': ','.join( ['%s=%s' % (name, ver) for name, ver in object_versions.items()])}) return objinst.obj_to_primitive(target_version=target, version_manifest=object_versions) def get_vendor_passthru_metadata(route_dict): d = {} for method, metadata in route_dict.items(): # 'func' is the vendor method reference, ignore it d[method] = {k: metadata[k] for k in metadata if k != 'func'} return d def _get_configdrive_obj_name(node): """Generate the object name for the config drive.""" return 'configdrive-%s' % node.uuid def _store_configdrive(node, configdrive): """Handle the storage of the config drive. If configured, the config drive data are uploaded to Swift. The Node's instance_info is updated to include either the temporary Swift URL from the upload, or if no upload, the actual config drive data. :param node: an Ironic node object. :param configdrive: A gzipped and base64 encoded configdrive. :raises: SwiftOperationError if an error occurs when uploading the config drive to Swift. """ if CONF.conductor.configdrive_use_swift: # NOTE(lucasagomes): No reason to use a different timeout than # the one used for deploying the node timeout = CONF.conductor.deploy_callback_timeout container = CONF.conductor.configdrive_swift_container object_name = _get_configdrive_obj_name(node) object_headers = {'X-Delete-After': timeout} with tempfile.NamedTemporaryFile(dir=CONF.tempdir) as fileobj: fileobj.write(configdrive) fileobj.flush() swift_api = swift.SwiftAPI() swift_api.create_object(container, object_name, fileobj.name, object_headers=object_headers) configdrive = swift_api.get_temp_url(container, object_name, timeout) i_info = node.instance_info i_info['configdrive'] = configdrive node.instance_info = i_info def do_node_deploy(task, conductor_id, configdrive=None): """Prepare the environment and deploy a node.""" node = task.node def handle_failure(e, task, logmsg, errmsg): # NOTE(deva): there is no need to clear conductor_affinity task.process_event('fail') args = {'node': task.node.uuid, 'err': e} LOG.error(logmsg, args) node.last_error = errmsg % e try: try: if configdrive: _store_configdrive(node, configdrive) except exception.SwiftOperationError as e: with excutils.save_and_reraise_exception(): handle_failure( e, task, _LE('Error while uploading the configdrive for ' '%(node)s to Swift'), _('Failed to upload the configdrive to Swift. ' 'Error: %s')) try: task.driver.deploy.prepare(task) except Exception as e: with excutils.save_and_reraise_exception(): handle_failure( e, task, _LE('Error while preparing to deploy to node %(node)s: ' '%(err)s'), _("Failed to prepare to deploy. Error: %s")) try: new_state = task.driver.deploy.deploy(task) except Exception as e: with excutils.save_and_reraise_exception(): handle_failure( e, task, _LE('Error in deploy of node %(node)s: %(err)s'), _("Failed to deploy.
Error: %s")) # Update conductor_affinity to reference this conductor's ID # since there may be local persistent state node.conductor_affinity = conductor_id # NOTE(deva): Some drivers may return states.DEPLOYWAIT # eg. if they are waiting for a callback if new_state == states.DEPLOYDONE: task.process_event('done') LOG.info(_LI('Successfully deployed node %(node)s with ' 'instance %(instance)s.'), {'node': node.uuid, 'instance': node.instance_uuid}) elif new_state == states.DEPLOYWAIT: task.process_event('wait') else: LOG.error(_LE('Unexpected state %(state)s returned while ' 'deploying node %(node)s.'), {'state': new_state, 'node': node.uuid}) finally: node.save() @task_manager.require_exclusive_lock def handle_sync_power_state_max_retries_exceeded(task, actual_power_state, exception=None): """Handles power state sync exceeding the max retries. When synchronizing the power state between a node and the DB has exceeded the maximum number of retries, change the DB power state to be the actual node power state and place the node in maintenance. :param task: a TaskManager instance with an exclusive lock :param actual_power_state: the actual power state of the node; a power state from ironic.common.states :param exception: the exception object that caused the sync power state to fail, if present. """ node = task.node msg = (_("During sync_power_state, max retries exceeded " "for node %(node)s, node state %(actual)s " "does not match expected state '%(state)s'. " "Updating DB state to '%(actual)s' " "Switching node to maintenance mode.") % {'node': node.uuid, 'actual': actual_power_state, 'state': node.power_state}) if exception is not None: msg += _(" Error: %s") % exception node.power_state = actual_power_state node.last_error = msg node.maintenance = True node.maintenance_reason = msg node.save() LOG.error(msg) def do_sync_power_state(task, count): """Sync the power state for this node, incrementing the counter on failure. When the limit of power_state_sync_max_retries is reached, the node is put into maintenance mode and the error recorded. :param task: a TaskManager instance :param count: number of times this node has previously failed a sync :raises: NodeLocked if unable to upgrade task lock to an exclusive one :returns: Count of failed attempts. On success, the counter is set to 0. On failure, the count is incremented by one """ node = task.node power_state = None count += 1 max_retries = CONF.conductor.power_state_sync_max_retries # If power driver info can not be validated, and node has no prior state, # do not attempt to sync the node's power state. if node.power_state is None: try: task.driver.power.validate(task) except (exception.InvalidParameterValue, exception.MissingParameterValue): return 0 try: # The driver may raise an exception, or may return ERROR. # Handle both the same way. power_state = task.driver.power.get_power_state(task) if power_state == states.ERROR: raise exception.PowerStateFailure( _("Power driver returned ERROR state " "while trying to sync power state.")) except Exception as e: # Stop if any exception is raised when getting the power state if count > max_retries: task.upgrade_lock() handle_sync_power_state_max_retries_exceeded(task, power_state, exception=e) else: LOG.warning(_LW("During sync_power_state, could not get power " "state for node %(node)s, attempt %(attempt)s of " "%(retries)s. 
Error: %(err)s."), {'node': node.uuid, 'attempt': count, 'retries': max_retries, 'err': e}) return count if node.power_state and node.power_state == power_state: # No action is needed return 0 # We will modify a node, so upgrade our lock and use reloaded node. # This call may raise NodeLocked that will be caught on upper level. task.upgrade_lock() node = task.node # Repeat all checks with exclusive lock to avoid races if node.power_state and node.power_state == power_state: # Node power state was updated to the correct value return 0 elif node.provision_state in SYNC_EXCLUDED_STATES or node.maintenance: # Something was done to a node while a shared lock was held return 0 elif node.power_state is None: # If node has no prior state AND we successfully got a state, # simply record that. LOG.info(_LI("During sync_power_state, node %(node)s has no " "previous known state. Recording current state " "'%(state)s'."), {'node': node.uuid, 'state': power_state}) node.power_state = power_state node.save() return 0 if count > max_retries: handle_sync_power_state_max_retries_exceeded(task, power_state) return count if CONF.conductor.force_power_state_during_sync: LOG.warning(_LW("During sync_power_state, node %(node)s state " "'%(actual)s' does not match expected state. " "Changing hardware state to '%(state)s'."), {'node': node.uuid, 'actual': power_state, 'state': node.power_state}) try: # node_power_action will update the node record # so don't do that again here. utils.node_power_action(task, node.power_state) except Exception as e: LOG.error(_LE( "Failed to change power state of node %(node)s " "to '%(state)s', attempt %(attempt)s of %(retries)s."), {'node': node.uuid, 'state': node.power_state, 'attempt': count, 'retries': max_retries}) else: LOG.warning(_LW("During sync_power_state, node %(node)s state " "does not match expected state '%(state)s'. " "Updating recorded state to '%(actual)s'."), {'node': node.uuid, 'actual': power_state, 'state': node.power_state}) node.power_state = power_state node.save() return count def _do_inspect_hardware(task): """Initiates inspection. :param: task: a TaskManager instance with an exclusive lock on its node. :raises: HardwareInspectionFailure if driver doesn't return the state as states.MANAGEABLE or states.INSPECTING. """ node = task.node def handle_failure(e): node.last_error = e task.process_event('fail') LOG.error(_LE("Failed to inspect node %(node)s: %(err)s"), {'node': node.uuid, 'err': e}) try: new_state = task.driver.inspect.inspect_hardware(task) except Exception as e: with excutils.save_and_reraise_exception(): error = str(e) handle_failure(error) if new_state == states.MANAGEABLE: task.process_event('done') LOG.info(_LI('Successfully inspected node %(node)s') % {'node': node.uuid}) elif new_state != states.INSPECTING: error = (_("During inspection, driver returned unexpected " "state %(state)s") % {'state': new_state}) handle_failure(error) raise exception.HardwareInspectionFailure(error=error) ironic-5.1.0/ironic/conductor/__init__.py0000664000567000056710000000000012674513466021541 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/conductor/task_manager.py0000664000567000056710000004524512674513466022462 0ustar jenkinsjenkins00000000000000# coding=utf-8 # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ A context manager to perform a series of tasks on a set of resources. :class:`TaskManager` is a context manager, created on-demand to allow synchronized access to a node and its resources. The :class:`TaskManager` will, by default, acquire an exclusive lock on a node for the duration that the TaskManager instance exists. You may create a TaskManager instance without locking by passing "shared=True" when creating it, but certain operations on the resources held by such an instance of TaskManager will not be possible. Requiring this exclusive lock guards against parallel operations interfering with each other. A shared lock is useful when performing non-interfering operations, such as validating the driver interfaces. An exclusive lock is stored in the database to coordinate between :class:`ironic.conductor.manager` instances, which are typically deployed on different hosts. :class:`TaskManager` methods, as well as driver methods, may be decorated to determine whether their invocation requires an exclusive lock. The TaskManager instance exposes certain node resources and properties as attributes that you may access: task.context The context passed to TaskManager() task.shared False if Node is locked, True if it is not locked. (The 'shared' kwarg of TaskManager()) task.node The Node object task.ports Ports belonging to the Node task.driver The Driver for the Node, or the Driver based on the 'driver_name' kwarg of TaskManager(). Example usage: :: with task_manager.acquire(context, node_id, purpose='power on') as task: task.driver.power.power_on(task.node) If you need to execute task-requiring code in a background thread, the TaskManager instance provides an interface to handle this for you, making sure to release resources when the thread finishes (successfully or if an exception occurs). Common use of this is within the Manager like so: :: with task_manager.acquire(context, node_id, purpose='some work') as task: task.spawn_after(self._spawn_worker, utils.node_power_action, task, new_state) All exceptions that occur in the current GreenThread as part of the spawn handling are re-raised. You can specify a hook to execute custom code when such exceptions occur. For example, the hook is a more elegant solution than wrapping the "with task_manager.acquire()" with a try..except block. (Note that this hook does not handle exceptions raised in the background thread.): :: def on_error(e): if isinstance(e, Exception): ... 
with task_manager.acquire(context, node_id, purpose='some work') as task: task.set_spawn_error_hook(on_error) task.spawn_after(self._spawn_worker, utils.node_power_action, task, new_state) """ import futurist from oslo_config import cfg from oslo_context import context as oslo_context from oslo_log import log as logging from oslo_utils import excutils from oslo_utils import timeutils import retrying import six from ironic.common import driver_factory from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LW from ironic.common import states from ironic import objects LOG = logging.getLogger(__name__) CONF = cfg.CONF def require_exclusive_lock(f): """Decorator to require an exclusive lock. Decorated functions must take a :class:`TaskManager` as the first parameter. Decorated class methods should take a :class:`TaskManager` as the first parameter after "self". """ @six.wraps(f) def wrapper(*args, **kwargs): # NOTE(dtantsur): this code could be written more simply, but then unit # testing decorated functions is pretty hard, as we usually pass a Mock # object instead of TaskManager there. if len(args) > 1: task = args[1] if isinstance(args[1], TaskManager) else args[0] else: task = args[0] if task.shared: raise exception.ExclusiveLockRequired() # NOTE(lintan): This is a workaround to set the context of async tasks, # which should contain an exclusive lock. ensure_thread_contain_context(task.context) return f(*args, **kwargs) return wrapper def acquire(context, node_id, shared=False, driver_name=None, purpose='unspecified action'): """Shortcut for acquiring a lock on a Node. :param context: Request context. :param node_id: ID or UUID of node to lock. :param shared: Boolean indicating whether to take a shared or exclusive lock. Default: False. :param driver_name: Name of Driver. Default: None. :param purpose: human-readable purpose to put to debug logs. :returns: An instance of :class:`TaskManager`. """ # NOTE(lintan): This is a workaround to set the context of periodic tasks. ensure_thread_contain_context(context) return TaskManager(context, node_id, shared=shared, driver_name=driver_name, purpose=purpose) def ensure_thread_contain_context(context): """Ensure the current thread has a request context. For async/periodic tasks, the context of the local thread is missing. Set it from the given request context; this makes it possible to log the request_id in log messages. :param context: Request context """ if oslo_context.get_current(): return context.update_store() class TaskManager(object): """Context manager for tasks. This class wraps the locking, driver loading, and acquisition of related resources (e.g., Node and Ports) when beginning a unit of work. """ def __init__(self, context, node_id, shared=False, driver_name=None, purpose='unspecified action'): """Create a new TaskManager. Acquire a lock on a node. The lock can be either shared or exclusive. Shared locks may be used for read-only or non-disruptive actions only, and must be considerate to what other threads may be doing on the same node at the same time. :param context: request context :param node_id: ID or UUID of node to lock. :param shared: Boolean indicating whether to take a shared or exclusive lock. Default: False. :param driver_name: The name of the driver to load, if different from the Node's current driver. :param purpose: human-readable purpose to put to debug logs. 
:raises: DriverNotFound :raises: NodeNotFound :raises: NodeLocked """ self._spawn_method = None self._on_error_method = None self.context = context self.node = None self.node_id = node_id self.shared = shared self.fsm = states.machine.copy() self._purpose = purpose self._debug_timer = timeutils.StopWatch() try: LOG.debug("Attempting to get %(type)s lock on node %(node)s (for " "%(purpose)s)", {'type': 'shared' if shared else 'exclusive', 'node': node_id, 'purpose': purpose}) if not self.shared: self._lock() else: self._debug_timer.restart() self.node = objects.Node.get(context, node_id) self.ports = objects.Port.list_by_node_id(context, self.node.id) self.portgroups = objects.Portgroup.list_by_node_id(context, self.node.id) self.driver = driver_factory.build_driver_for_task( self, driver_name=driver_name) # NOTE(deva): this handles the Juno-era NOSTATE state # and should be deleted after Kilo is released if self.node.provision_state is states.NOSTATE: self.node.provision_state = states.AVAILABLE self.node.save() self.fsm.initialize(start_state=self.node.provision_state, target_state=self.node.target_provision_state) except Exception: with excutils.save_and_reraise_exception(): self.release_resources() def _lock(self): self._debug_timer.restart() # NodeLocked exceptions can be annoying. Let's try to alleviate # some of that pain by retrying our lock attempts. The retrying # module expects a wait_fixed value in milliseconds. @retrying.retry( retry_on_exception=lambda e: isinstance(e, exception.NodeLocked), stop_max_attempt_number=CONF.conductor.node_locked_retry_attempts, wait_fixed=CONF.conductor.node_locked_retry_interval * 1000) def reserve_node(): self.node = objects.Node.reserve(self.context, CONF.host, self.node_id) LOG.debug("Node %(node)s successfully reserved for %(purpose)s " "(took %(time).2f seconds)", {'node': self.node.uuid, 'purpose': self._purpose, 'time': self._debug_timer.elapsed()}) self._debug_timer.restart() reserve_node() def upgrade_lock(self): """Upgrade a shared lock to an exclusive lock. Also reloads the node object from the database. Does nothing if the lock is already exclusive. """ if self.shared: LOG.debug('Upgrading shared lock on node %(uuid)s for %(purpose)s ' 'to an exclusive one (shared lock was held %(time).2f ' 'seconds)', {'uuid': self.node.uuid, 'purpose': self._purpose, 'time': self._debug_timer.elapsed()}) self._lock() self.shared = False def spawn_after(self, _spawn_method, *args, **kwargs): """Call this to spawn a thread to complete the task. The specified method will be called when the TaskManager instance exits. :param _spawn_method: a method that returns a GreenThread object :param args: args passed to the method. :param kwargs: additional kwargs passed to the method. """ self._spawn_method = _spawn_method self._spawn_args = args self._spawn_kwargs = kwargs def set_spawn_error_hook(self, _on_error_method, *args, **kwargs): """Create a hook to handle exceptions when spawning a task. Create a hook that gets called upon an exception being raised from spawning a background thread to do a task. :param _on_error_method: a callable object, its first parameter should accept the Exception object that was raised. :param args: additional args passed to the callable object. :param kwargs: additional kwargs passed to the callable object. """ self._on_error_method = _on_error_method self._on_error_args = args self._on_error_kwargs = kwargs def release_resources(self): """Unlock a node and release resources. If an exclusive lock is held, unlock the node. 
Reset attributes to make it clear that this instance of TaskManager should no longer be accessed. """ if not self.shared: try: if self.node: objects.Node.release(self.context, CONF.host, self.node.id) except exception.NodeNotFound: # squelch the exception if the node was deleted # within the task's context. pass if self.node: LOG.debug("Successfully released %(type)s lock for %(purpose)s " "on node %(node)s (lock was held %(time).2f sec)", {'type': 'shared' if self.shared else 'exclusive', 'purpose': self._purpose, 'node': self.node.uuid, 'time': self._debug_timer.elapsed()}) self.node = None self.driver = None self.ports = None self.portgroups = None self.fsm = None def _write_exception(self, future): """Set node last_error if exception raised in thread.""" node = self.node # do not rewrite existing error if node and node.last_error is None: method = self._spawn_args[0].__name__ try: exc = future.exception() except futurist.CancelledError: LOG.exception(_LE("Execution of %(method)s for node %(node)s " "was canceled."), {'method': method, 'node': node.uuid}) else: if exc is not None: msg = _("Async execution of %(method)s failed with error: " "%(error)s") % {'method': method, 'error': six.text_type(exc)} node.last_error = msg try: node.save() except exception.NodeNotFound: pass def _thread_release_resources(self, fut): """Thread callback to release resources.""" try: self._write_exception(fut) finally: self.release_resources() def process_event(self, event, callback=None, call_args=None, call_kwargs=None, err_handler=None, target_state=None): """Process the given event for the task's current state. :param event: the name of the event to process :param callback: optional callback to invoke upon event transition :param call_args: optional \*args to pass to the callback method :param call_kwargs: optional \**kwargs to pass to the callback method :param err_handler: optional error handler to invoke if the callback fails, eg. because there are no workers available (err_handler should accept arguments node, prev_prov_state, and prev_target_state) :param target_state: if specified, the target provision state for the node. Otherwise, use the target state from the fsm :raises: InvalidState if the event is not allowed by the associated state machine """ # Advance the state model for the given event. Note that this doesn't # alter the node in any way. This may raise InvalidState, if this event # is not allowed in the current state. 
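# For example (an illustrative pair of transitions, not the full map,
# which lives in ironic.common.states): processing 'done' on a node in
# DEPLOYING moves the model to ACTIVE, a stable state, so
# target_provision_state is cleared below; processing 'fail' during
# CLEANING moves it to CLEANFAIL, which is not stable, so the FSM's
# target (AVAILABLE for automated cleaning, MANAGEABLE for manual) is
# kept.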
self.fsm.process_event(event, target_state=target_state) # stash current states in the error handler if callback is set, # in case we fail to get a worker from the pool if err_handler and callback: self.set_spawn_error_hook(err_handler, self.node, self.node.provision_state, self.node.target_provision_state) self.node.provision_state = self.fsm.current_state # NOTE(lucasagomes): If there's no extra processing # (callback) and we've moved to a stable state, make sure the # target_provision_state is cleared if not callback and self.fsm.is_stable(self.node.provision_state): self.node.target_provision_state = states.NOSTATE else: self.node.target_provision_state = self.fsm.target_state # set up the async worker if callback: # clear the error if we're going to start work in a callback self.node.last_error = None if call_args is None: call_args = () if call_kwargs is None: call_kwargs = {} self.spawn_after(callback, *call_args, **call_kwargs) # publish the state transition by saving the Node self.node.save() def __enter__(self): return self def __exit__(self, exc_type, exc_val, exc_tb): if exc_type is None and self._spawn_method is not None: # Spawn a worker to complete the task # The linked callback below will be called whenever: # - background task finished with no errors. # - background task has crashed with exception. # - callback was added after the background task has # finished or crashed. While eventlet currently doesn't # schedule the new thread until the current thread blocks # for some reason, this is true. # All of the above are asserted in tests such that we'll # catch if eventlet ever changes this behavior. fut = None try: fut = self._spawn_method(*self._spawn_args, **self._spawn_kwargs) # NOTE(comstud): Trying to use a lambda here causes # the callback to not occur for some reason. This # also makes it easier to test. fut.add_done_callback(self._thread_release_resources) # Don't unlock! The unlock will occur when the # thread finishes. return except Exception as e: with excutils.save_and_reraise_exception(): try: # Execute the on_error hook if set if self._on_error_method: self._on_error_method(e, *self._on_error_args, **self._on_error_kwargs) except Exception: LOG.warning(_LW("Task's on_error hook failed to " "call %(method)s on node %(node)s"), {'method': self._on_error_method.__name__, 'node': self.node.uuid}) if fut is not None: # This means the add_done_callback() failed for some # reason. Nuke the thread. fut.cancel() self.release_resources() self.release_resources() ironic-5.1.0/ironic/conductor/utils.py0000664000567000056710000004051412674513466021160 0ustar jenkinsjenkins00000000000000# coding=utf-8 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
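"""Utility functions used by the conductor (power actions, cleaning helpers).

A minimal usage sketch (assuming a valid request context and node UUID; the
acquire/spawn pattern is described in the task_manager docstring above)::

    from ironic.common import states
    from ironic.conductor import task_manager
    from ironic.conductor import utils

    with task_manager.acquire(context, node_uuid,
                              purpose='power on') as task:
        utils.node_power_action(task, states.POWER_ON)
"""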
from oslo_log import log from oslo_utils import excutils from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LE from ironic.common.i18n import _LI from ironic.common.i18n import _LW from ironic.common import states from ironic.conductor import task_manager LOG = log.getLogger(__name__) CLEANING_INTERFACE_PRIORITY = { # When two clean steps have the same priority, their order is determined # by which interface is implementing the clean step. The clean step of the # interface with the highest value here, will be executed first in that # case. 'power': 4, 'management': 3, 'deploy': 2, 'raid': 1, } @task_manager.require_exclusive_lock def node_set_boot_device(task, device, persistent=False): """Set the boot device for a node. :param task: a TaskManager instance. :param device: Boot device. Values are vendor-specific. :param persistent: Whether to set next-boot, or make the change permanent. Default: False. :raises: InvalidParameterValue if the validation of the ManagementInterface fails. """ if getattr(task.driver, 'management', None): task.driver.management.validate(task) task.driver.management.set_boot_device(task, device=device, persistent=persistent) @task_manager.require_exclusive_lock def node_power_action(task, new_state): """Change power state or reset for a node. Perform the requested power action if the transition is required. :param task: a TaskManager instance containing the node to act on. :param new_state: Any power state from ironic.common.states. If the state is 'REBOOT' then a reboot will be attempted, otherwise the node power state is directly set to 'state'. :raises: InvalidParameterValue when the wrong state is specified or the wrong driver info is specified. :raises: other exceptions by the node's power driver if something wrong occurred during the power action. """ node = task.node target_state = states.POWER_ON if new_state == states.REBOOT else new_state if new_state != states.REBOOT: try: curr_state = task.driver.power.get_power_state(task) except Exception as e: with excutils.save_and_reraise_exception(): node['last_error'] = _( "Failed to change power state to '%(target)s'. " "Error: %(error)s") % {'target': new_state, 'error': e} node['target_power_state'] = states.NOSTATE node.save() if curr_state == new_state: # Neither the ironic service nor the hardware has erred. The # node is, for some reason, already in the requested state, # though we don't know why. eg, perhaps the user previously # requested the node POWER_ON, the network delayed those IPMI # packets, and they are trying again -- but the node finally # responds to the first request, and so the second request # gets to this check and stops. # This isn't an error, so we'll clear last_error field # (from previous operation), log a warning, and return. node['last_error'] = None # NOTE(dtantsur): under rare conditions we can get out of sync here node['power_state'] = new_state node['target_power_state'] = states.NOSTATE node.save() LOG.warning(_LW("Not going to change node power state because " "current state = requested state = '%(state)s'."), {'state': curr_state}) return if curr_state == states.ERROR: # be optimistic and continue action LOG.warning(_LW("Driver returns ERROR power state for node %s."), node.uuid) # Set the target_power_state and clear any last_error, if we're # starting a new operation. This will expose to other processes # and clients that work is in progress. 
if node['target_power_state'] != target_state: node['target_power_state'] = target_state node['last_error'] = None node.save() # take power action try: if new_state != states.REBOOT: task.driver.power.set_power_state(task, new_state) else: task.driver.power.reboot(task) except Exception as e: with excutils.save_and_reraise_exception(): node['last_error'] = _( "Failed to change power state to '%(target)s'. " "Error: %(error)s") % {'target': target_state, 'error': e} else: # success! node['power_state'] = target_state LOG.info(_LI('Successfully set node %(node)s power state to ' '%(state)s.'), {'node': node.uuid, 'state': target_state}) finally: node['target_power_state'] = states.NOSTATE node.save() @task_manager.require_exclusive_lock def cleanup_after_timeout(task): """Cleanup deploy task after timeout. :param task: a TaskManager instance. """ node = task.node msg = (_('Timeout reached while waiting for callback for node %s') % node.uuid) node.last_error = msg LOG.error(msg) node.save() error_msg = _('Cleanup failed for node %(node)s after deploy timeout: ' ' %(error)s') try: task.driver.deploy.clean_up(task) except Exception as e: msg = error_msg % {'node': node.uuid, 'error': e} LOG.error(msg) if isinstance(e, exception.IronicException): node.last_error = msg else: node.last_error = _('Deploy timed out, but an unhandled ' 'exception was encountered while aborting. ' 'More info may be found in the log file.') node.save() def provisioning_error_handler(e, node, provision_state, target_provision_state): """Set the node's provisioning states if error occurs. This hook gets called upon an exception being raised when spawning the worker to do some provisioning to a node like deployment, tear down, or cleaning. :param e: the exception object that was raised. :param node: an Ironic node object. :param provision_state: the provision state to be set on the node. :param target_provision_state: the target provision state to be set on the node. """ if isinstance(e, exception.NoFreeConductorWorker): # NOTE(deva): there is no need to clear conductor_affinity # because it isn't updated on a failed deploy node.provision_state = provision_state node.target_provision_state = target_provision_state node.last_error = (_("No free conductor workers available")) node.save() LOG.warning(_LW("No free conductor workers available to perform " "an action on node %(node)s, setting node's " "provision_state back to %(prov_state)s and " "target_provision_state to %(tgt_prov_state)s."), {'node': node.uuid, 'prov_state': provision_state, 'tgt_prov_state': target_provision_state}) def cleaning_error_handler(task, msg, tear_down_cleaning=True, set_fail_state=True): """Put a failed node in CLEANFAIL and maintenance.""" node = task.node if node.provision_state in (states.CLEANING, states.CLEANWAIT): # Clear clean step, msg should already include current step node.clean_step = {} info = node.driver_internal_info info.pop('clean_step_index', None) node.driver_internal_info = info # For manual cleaning, the target provision state is MANAGEABLE, whereas # for automated cleaning, it is AVAILABLE. 
manual_clean = node.target_provision_state == states.MANAGEABLE node.last_error = msg node.maintenance = True node.maintenance_reason = msg node.save() if tear_down_cleaning: try: task.driver.deploy.tear_down_cleaning(task) except Exception as e: msg = (_LE('Failed to tear down cleaning on node %(uuid)s, ' 'reason: %(err)s') % {'err': e, 'uuid': node.uuid}) LOG.exception(msg) if set_fail_state: target_state = states.MANAGEABLE if manual_clean else None task.process_event('fail', target_state=target_state) def spawn_cleaning_error_handler(e, node): """Handle spawning error for node cleaning.""" if isinstance(e, exception.NoFreeConductorWorker): node.last_error = (_("No free conductor workers available")) node.save() LOG.warning(_LW("No free conductor workers available to perform " "cleaning on node %(node)s"), {'node': node.uuid}) def power_state_error_handler(e, node, power_state): """Set the node's power states if an error occurs. This hook gets called upon an exception being raised when spawning the worker thread to change the power state of a node. :param e: the exception object that was raised. :param node: an Ironic node object. :param power_state: the power state to set on the node. """ if isinstance(e, exception.NoFreeConductorWorker): node.power_state = power_state node.target_power_state = states.NOSTATE node.last_error = (_("No free conductor workers available")) node.save() LOG.warning(_LW("No free conductor workers available to perform " "an action on node %(node)s, setting node's " "power state back to %(power_state)s."), {'node': node.uuid, 'power_state': power_state}) def _step_key(step): """Sort by priority, then interface priority in the event of a tie. :param step: cleaning step dict to get priority for. """ return (step.get('priority'), CLEANING_INTERFACE_PRIORITY[step.get('interface')]) def _get_cleaning_steps(task, enabled=False, sort=True): """Get cleaning steps for task.node. :param task: A TaskManager object :param enabled: If True, returns only enabled (priority > 0) steps. If False, returns all clean steps. :param sort: If True, the steps are sorted from highest priority to lowest priority. For steps having the same priority, they are sorted from highest interface priority to lowest. :raises: NodeCleaningFailure if there was a problem getting the clean steps. :returns: A list of clean step dictionaries """ # Iterate interfaces and get clean steps from each steps = list() for interface in CLEANING_INTERFACE_PRIORITY: interface = getattr(task.driver, interface) if interface: interface_steps = [x for x in interface.get_clean_steps(task) if not enabled or x['priority'] > 0] steps.extend(interface_steps) if sort: # Sort the steps from higher priority to lower priority steps = sorted(steps, key=_step_key, reverse=True) return steps def set_node_cleaning_steps(task): """Set up the node with clean step information for cleaning. For automated cleaning, get the clean steps from the driver. For manual cleaning, the user's clean steps are known but need to be validated against the driver's clean steps. :raises: InvalidParameterValue if there is a problem with the user's clean steps. :raises: NodeCleaningFailure if there was a problem getting the clean steps. """ node = task.node driver_internal_info = node.driver_internal_info # For manual cleaning, the target provision state is MANAGEABLE, whereas # for automated cleaning, it is AVAILABLE. 
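# (For context, how a node gets here: manual cleaning is requested on a
# MANAGEABLE node via the 'clean' provision verb, while automated cleaning
# runs during tear-down on the way back to AVAILABLE; that is why the
# target provision state distinguishes the two cases.)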
manual_clean = node.target_provision_state == states.MANAGEABLE if not manual_clean: # Get the prioritized steps for automated cleaning driver_internal_info['clean_steps'] = _get_cleaning_steps(task, enabled=True) else: # For manual cleaning, the list of cleaning steps was specified by the # user and already saved in node.driver_internal_info['clean_steps']. # Now that we know what the driver's available clean steps are, we can # do further checks to validate the user's clean steps. steps = node.driver_internal_info['clean_steps'] _validate_user_clean_steps(task, steps) node.clean_step = {} driver_internal_info['clean_step_index'] = None node.driver_internal_info = driver_internal_info node.save() def _validate_user_clean_steps(task, user_steps): """Validate the user-specified clean steps. :param task: A TaskManager object :param user_steps: a list of clean steps. A clean step is a dictionary with required keys 'interface' and 'step', and optional key 'args':: { 'interface': <driver_interface>, 'step': <name_of_clean_step>, 'args': {<arg1>: <value1>, ..., <argn>: <valuen>} } For example:: { 'interface': 'deploy', 'step': 'upgrade_firmware', 'args': {'force': True} } :raises: InvalidParameterValue if validation of clean steps fails. :raises: NodeCleaningFailure if there was a problem getting the clean steps from the driver. """ def step_id(step): return '.'.join([step['step'], step['interface']]) errors = [] # The clean steps from the driver. A clean step dictionary is of the form: # { 'interface': <driver_interface>, # 'step': <name_of_clean_step>, # 'priority': <integer> # 'abortable': Optional. <Boolean>. # 'argsinfo': Optional. A dictionary of {<arg_name>:<arg_info>} # entries. <arg_info> is a dictionary with # { 'description': <description>, # 'required': <Boolean> } # } driver_steps = {} for s in _get_cleaning_steps(task, enabled=False, sort=False): driver_steps[step_id(s)] = s for user_step in user_steps: # Check that this user-specified clean step is supported by the driver try: driver_step = driver_steps[step_id(user_step)] except KeyError: error = (_('node does not support this clean step: %(step)s') % {'step': user_step}) errors.append(error) continue # Check that the user-specified arguments are valid argsinfo = driver_step.get('argsinfo') or {} user_args = user_step.get('args') or {} invalid = set(user_args) - set(argsinfo) if invalid: error = _('clean step %(step)s has these invalid arguments: ' '%(invalid)s') % {'step': user_step, 'invalid': ', '.join(invalid)} errors.append(error) # Check that all required arguments were specified by the user missing = [] for (arg_name, arg_info) in argsinfo.items(): if arg_info.get('required', False) and arg_name not in user_args: msg = arg_name if arg_info.get('description'): msg += ' (%(desc)s)' % {'desc': arg_info['description']} missing.append(msg) if missing: error = _('clean step %(step)s is missing these required keyword ' 'arguments: %(miss)s') % {'step': user_step, 'miss': ', '.join(missing)} errors.append(error) if errors: raise exception.InvalidParameterValue('; '.join(errors)) ironic-5.1.0/ironic/db/0000775000567000056710000000000012674513633016023 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/db/api.py0000664000567000056710000004741712674513466017157 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Base classes for storage engines """ import abc from oslo_config import cfg from oslo_db import api as db_api import six _BACKEND_MAPPING = {'sqlalchemy': 'ironic.db.sqlalchemy.api'} IMPL = db_api.DBAPI.from_config(cfg.CONF, backend_mapping=_BACKEND_MAPPING, lazy=True) def get_instance(): """Return a DB API instance.""" return IMPL @six.add_metaclass(abc.ABCMeta) class Connection(object): """Base class for storage system connections.""" @abc.abstractmethod def __init__(self): """Constructor.""" @abc.abstractmethod def get_nodeinfo_list(self, columns=None, filters=None, limit=None, marker=None, sort_key=None, sort_dir=None): """Get specific columns for matching nodes. Return a list of the specified columns for all nodes that match the specified filters. :param columns: List of column names to return. Defaults to 'id' column when columns == None. :param filters: Filters to apply. Defaults to None. :associated: True | False :reserved: True | False :reserved_by_any_of: [conductor1, conductor2] :maintenance: True | False :chassis_uuid: uuid of chassis :driver: driver's name :provision_state: provision state of node :provisioned_before: nodes with provision_updated_at field before this interval in seconds :param limit: Maximum number of nodes to return. :param marker: the last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted. :param sort_dir: direction in which results should be sorted. (asc, desc) :returns: A list of tuples of the specified columns. """ @abc.abstractmethod def get_node_list(self, filters=None, limit=None, marker=None, sort_key=None, sort_dir=None): """Return a list of nodes. :param filters: Filters to apply. Defaults to None. :associated: True | False :reserved: True | False :maintenance: True | False :chassis_uuid: uuid of chassis :driver: driver's name :provision_state: provision state of node :provisioned_before: nodes with provision_updated_at field before this interval in seconds :param limit: Maximum number of nodes to return. :param marker: the last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted. :param sort_dir: direction in which results should be sorted. (asc, desc) """ @abc.abstractmethod def reserve_node(self, tag, node_id): """Reserve a node. To prevent other ManagerServices from manipulating the given Node while a Task is performed, mark it reserved by this host. :param tag: A string uniquely identifying the reservation holder. :param node_id: A node id or uuid. :returns: A Node object. :raises: NodeNotFound if the node is not found. :raises: NodeLocked if the node is already reserved. """ @abc.abstractmethod def release_node(self, tag, node_id): """Release the reservation on a node. :param tag: A string uniquely identifying the reservation holder. :param node_id: A node id or uuid. :raises: NodeNotFound if the node is not found. :raises: NodeLocked if the node is reserved by another host. :raises: NodeNotLocked if the node was found to not have a reservation at all. 
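A sketch of the usual reserve/release pairing (the tag value here is
illustrative; in practice the conductor passes its own hostname)::

    dbapi = get_instance()
    dbapi.reserve_node('conductor-1.example.com', node_id)
    try:
        # ... act on the node while holding the reservation ...
        pass
    finally:
        dbapi.release_node('conductor-1.example.com', node_id)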
""" @abc.abstractmethod def create_node(self, values): """Create a new node. :param values: A dict containing several items used to identify and track the node, and several dicts which are passed into the Drivers when managing this node. For example: :: { 'uuid': uuidutils.generate_uuid(), 'instance_uuid': None, 'power_state': states.POWER_OFF, 'provision_state': states.AVAILABLE, 'driver': 'pxe_ipmitool', 'driver_info': { ... }, 'properties': { ... }, 'extra': { ... }, } :returns: A node. """ @abc.abstractmethod def get_node_by_id(self, node_id): """Return a node. :param node_id: The id of a node. :returns: A node. """ @abc.abstractmethod def get_node_by_uuid(self, node_uuid): """Return a node. :param node_uuid: The uuid of a node. :returns: A node. """ @abc.abstractmethod def get_node_by_name(self, node_name): """Return a node. :param node_name: The logical name of a node. :returns: A node. """ @abc.abstractmethod def get_node_by_instance(self, instance): """Return a node. :param instance: The instance uuid to search for. :returns: A node. :raises: InstanceNotFound if the instance is not found. :raises: InvalidUUID if the instance uuid is invalid. """ @abc.abstractmethod def destroy_node(self, node_id): """Destroy a node and all associated interfaces. :param node_id: The id or uuid of a node. """ @abc.abstractmethod def update_node(self, node_id, values): """Update properties of a node. :param node_id: The id or uuid of a node. :param values: Dict of values to update. May be a partial list, eg. when setting the properties for a driver. For example: :: { 'driver_info': { 'my-field-1': val1, 'my-field-2': val2, } } :returns: A node. :raises: NodeAssociated :raises: NodeNotFound """ @abc.abstractmethod def get_port_by_id(self, port_id): """Return a network port representation. :param port_id: The id of a port. :returns: A port. """ @abc.abstractmethod def get_port_by_uuid(self, port_uuid): """Return a network port representation. :param port_uuid: The uuid of a port. :returns: A port. """ @abc.abstractmethod def get_port_by_address(self, address): """Return a network port representation. :param address: The MAC address of a port. :returns: A port. """ @abc.abstractmethod def get_port_list(self, limit=None, marker=None, sort_key=None, sort_dir=None): """Return a list of ports. :param limit: Maximum number of ports to return. :param marker: the last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted. :param sort_dir: direction in which results should be sorted. (asc, desc) """ @abc.abstractmethod def get_ports_by_node_id(self, node_id, limit=None, marker=None, sort_key=None, sort_dir=None): """List all the ports for a given node. :param node_id: The integer node ID. :param limit: Maximum number of ports to return. :param marker: the last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted :param sort_dir: direction in which results should be sorted (asc, desc) :returns: A list of ports. """ @abc.abstractmethod def get_ports_by_portgroup_id(self, portgroup_id, limit=None, marker=None, sort_key=None, sort_dir=None): """List all the ports for a given portgroup. :param portgroup_id: The integer portgroup ID. :param limit: Maximum number of ports to return. :param marker: The last item of the previous page; we return the next result set. 
:param sort_key: Attribute by which results should be sorted :param sort_dir: Direction in which results should be sorted (asc, desc) :returns: A list of ports. """ @abc.abstractmethod def create_port(self, values): """Create a new port. :param values: Dict of values. """ @abc.abstractmethod def update_port(self, port_id, values): """Update properties of a port. :param port_id: The id or MAC of a port. :param values: Dict of values to update. :returns: A port. """ @abc.abstractmethod def destroy_port(self, port_id): """Destroy a port. :param port_id: The id or MAC of a port. """ @abc.abstractmethod def get_portgroup_by_id(self, portgroup_id): """Return a network portgroup representation. :param portgroup_id: The id of a portgroup. :returns: A portgroup. :raises: PortgroupNotFound """ @abc.abstractmethod def get_portgroup_by_uuid(self, portgroup_uuid): """Return a network portgroup representation. :param portgroup_uuid: The uuid of a portgroup. :returns: A portgroup. :raises: PortgroupNotFound """ @abc.abstractmethod def get_portgroup_by_address(self, address): """Return a network portgroup representation. :param address: The MAC address of a portgroup. :returns: A portgroup. :raises: PortgroupNotFound """ @abc.abstractmethod def get_portgroup_by_name(self, name): """Return a network portgroup representation. :param name: The logical name of a portgroup. :returns: A portgroup. :raises: PortgroupNotFound """ @abc.abstractmethod def get_portgroup_list(self, limit=None, marker=None, sort_key=None, sort_dir=None): """Return a list of portgroups. :param limit: Maximum number of portgroups to return. :param marker: The last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted. :param sort_dir: Direction in which results should be sorted. (asc, desc) :returns: A list of portgroups. """ @abc.abstractmethod def get_portgroups_by_node_id(self, node_id, limit=None, marker=None, sort_key=None, sort_dir=None): """List all the portgroups for a given node. :param node_id: The integer node ID. :param limit: Maximum number of portgroups to return. :param marker: The last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted :param sort_dir: Direction in which results should be sorted (asc, desc) :returns: A list of portgroups. """ @abc.abstractmethod def create_portgroup(self, values): """Create a new portgroup. :param values: Dict of values with the following keys: 'id' 'uuid' 'name' 'node_id' 'address' 'extra' 'created_at' 'updated_at' :returns: A portgroup :raises: PortgroupDuplicateName :raises: PortgroupMACAlreadyExists :raises: PortgroupAlreadyExists """ @abc.abstractmethod def update_portgroup(self, portgroup_id, values): """Update properties of a portgroup. :param portgroup_id: The UUID or MAC of a portgroup. :param values: Dict of values to update. May contain the following keys: 'uuid' 'name' 'node_id' 'address' 'extra' 'created_at' 'updated_at' :returns: A portgroup. :raises: InvalidParameterValue :raises: PortgroupNotFound :raises: PortgroupDuplicateName :raises: PortgroupMACAlreadyExists """ @abc.abstractmethod def destroy_portgroup(self, portgroup_id): """Destroy a portgroup. :param portgroup_id: The UUID or MAC of a portgroup. :raises: PortgroupNotEmpty :raises: PortgroupNotFound """ @abc.abstractmethod def create_chassis(self, values): """Create a new chassis. :param values: Dict of values. 
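For example (a sketch; 'description' and 'extra' are the optional
free-form chassis fields)::

    {
     'uuid': uuidutils.generate_uuid(),
     'description': 'rack 4, row B',
     'extra': {},
    }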
""" @abc.abstractmethod def get_chassis_by_id(self, chassis_id): """Return a chassis representation. :param chassis_id: The id of a chassis. :returns: A chassis. """ @abc.abstractmethod def get_chassis_by_uuid(self, chassis_uuid): """Return a chassis representation. :param chassis_uuid: The uuid of a chassis. :returns: A chassis. """ @abc.abstractmethod def get_chassis_list(self, limit=None, marker=None, sort_key=None, sort_dir=None): """Return a list of chassis. :param limit: Maximum number of chassis to return. :param marker: the last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted. :param sort_dir: direction in which results should be sorted. (asc, desc) """ @abc.abstractmethod def update_chassis(self, chassis_id, values): """Update properties of an chassis. :param chassis_id: The id or the uuid of a chassis. :param values: Dict of values to update. :returns: A chassis. """ @abc.abstractmethod def destroy_chassis(self, chassis_id): """Destroy a chassis. :param chassis_id: The id or the uuid of a chassis. """ @abc.abstractmethod def register_conductor(self, values, update_existing=False): """Register an active conductor with the cluster. :param values: A dict of values which must contain the following: :: { 'hostname': the unique hostname which identifies this Conductor service. 'drivers': a list of supported drivers. } :param update_existing: When false, registration will raise an exception when a conflicting online record is found. When true, will overwrite the existing record. Default: False. :returns: A conductor. :raises: ConductorAlreadyRegistered """ @abc.abstractmethod def get_conductor(self, hostname): """Retrieve a conductor's service record from the database. :param hostname: The hostname of the conductor service. :returns: A conductor. :raises: ConductorNotFound """ @abc.abstractmethod def unregister_conductor(self, hostname): """Remove this conductor from the service registry immediately. :param hostname: The hostname of this conductor service. :raises: ConductorNotFound """ @abc.abstractmethod def touch_conductor(self, hostname): """Mark a conductor as active by updating its 'updated_at' property. :param hostname: The hostname of this conductor service. :raises: ConductorNotFound """ @abc.abstractmethod def get_active_driver_dict(self, interval): """Retrieve drivers for the registered and active conductors. :param interval: Seconds since last check-in of a conductor. :returns: A dict which maps driver names to the set of hosts which support them. For example: :: {driverA: set([host1, host2]), driverB: set([host2, host3])} """ @abc.abstractmethod def get_offline_conductors(self): """Get a list conductor hostnames that are offline (dead). :returns: A list of conductor hostnames. """ @abc.abstractmethod def touch_node_provisioning(self, node_id): """Mark the node's provisioning as running. Mark the node's provisioning as running by updating its 'provision_updated_at' property. :param node_id: The id of a node. :raises: NodeNotFound """ @abc.abstractmethod def set_node_tags(self, node_id, tags): """Replace all of the node tags with specified list of tags. This ignores duplicate tags in the specified list. :param node_id: The id of a node. :param tags: List of tags. :returns: A list of NodeTag objects. :raises: NodeNotFound if the node is not found. """ @abc.abstractmethod def unset_node_tags(self, node_id): """Remove all tags of the node. :param node_id: The id of a node. 
:raises: NodeNotFound if the node is not found. """ @abc.abstractmethod def get_node_tags_by_node_id(self, node_id): """Get node tags based on its id. :param node_id: The id of a node. :returns: A list of NodeTag objects. :raises: NodeNotFound if the node is not found. """ @abc.abstractmethod def add_node_tag(self, node_id, tag): """Add tag to the node. If the node_id and tag pair already exists, this should still succeed. :param node_id: The id of a node. :param tag: A tag string. :returns: the NodeTag object. :raises: NodeNotFound if the node is not found. """ @abc.abstractmethod def delete_node_tag(self, node_id, tag): """Delete specified tag from the node. :param node_id: The id of a node. :param tag: A tag string. :raises: NodeNotFound if the node is not found. :raises: NodeTagNotFound if the tag is not found. """ @abc.abstractmethod def node_tag_exists(self, node_id, tag): """Check if the specified tag exist on the node. :param node_id: The id of a node. :param tag: A tag string. :returns: True if the tag exists otherwise False. """ ironic-5.1.0/ironic/db/__init__.py0000664000567000056710000000000012674513466020126 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/db/migration.py0000664000567000056710000000303012674513466020366 0ustar jenkinsjenkins00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Database setup and migration commands.""" from oslo_config import cfg from stevedore import driver _IMPL = None def get_backend(): global _IMPL if not _IMPL: cfg.CONF.import_opt('backend', 'oslo_db.options', group='database') _IMPL = driver.DriverManager("ironic.database.migration_backend", cfg.CONF.database.backend).driver return _IMPL def upgrade(version=None): """Migrate the database to `version` or the most recent version.""" return get_backend().upgrade(version) def version(): return get_backend().version() def stamp(version): return get_backend().stamp(version) def revision(message, autogenerate): return get_backend().revision(message, autogenerate) def create_schema(): return get_backend().create_schema() ironic-5.1.0/ironic/db/sqlalchemy/0000775000567000056710000000000012674513633020165 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/db/sqlalchemy/alembic.ini0000664000567000056710000000171712674513466022274 0ustar jenkinsjenkins00000000000000# A generic, single database configuration. 
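# NOTE: ironic does not read database credentials from this file; the
# migration environment takes the connection from ironic's own
# configuration ([database] connection in ironic.conf), which is why
# sqlalchemy.url below is left commented out.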
[alembic] # path to migration scripts script_location = %(here)s/alembic # template used to generate migration files # file_template = %%(rev)s_%%(slug)s # max length of characters to apply to the # "slug" field #truncate_slug_length = 40 # set to 'true' to run the environment during # the 'revision' command, regardless of autogenerate # revision_environment = false #sqlalchemy.url = driver://user:pass@localhost/dbname # Logging configuration [loggers] keys = root,sqlalchemy,alembic [handlers] keys = console [formatters] keys = generic [logger_root] level = WARN handlers = console qualname = [logger_sqlalchemy] level = WARN handlers = qualname = sqlalchemy.engine [logger_alembic] level = INFO handlers = qualname = alembic [handler_console] class = StreamHandler args = (sys.stderr,) level = NOTSET formatter = generic [formatter_generic] format = %(levelname)-5.5s [%(name)s] %(message)s datefmt = %H:%M:%S ironic-5.1.0/ironic/db/sqlalchemy/api.py0000664000567000056710000007600412674513466021323 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """SQLAlchemy storage backend.""" import collections import datetime import threading from oslo_config import cfg from oslo_db import exception as db_exc from oslo_db.sqlalchemy import enginefacade from oslo_db.sqlalchemy import utils as db_utils from oslo_log import log from oslo_utils import strutils from oslo_utils import timeutils from oslo_utils import uuidutils from sqlalchemy.orm.exc import NoResultFound from sqlalchemy import sql from ironic.common import exception from ironic.common.i18n import _ from ironic.common.i18n import _LW from ironic.common import states from ironic.common import utils from ironic.db import api from ironic.db.sqlalchemy import models CONF = cfg.CONF CONF.import_opt('heartbeat_timeout', 'ironic.conductor.manager', group='conductor') LOG = log.getLogger(__name__) _CONTEXT = threading.local() def get_backend(): """The backend is this module itself.""" return Connection() def _session_for_read(): return enginefacade.reader.using(_CONTEXT) def _session_for_write(): return enginefacade.writer.using(_CONTEXT) def model_query(model, *args, **kwargs): """Query helper for simpler session usage. :param session: if present, the session to use """ with _session_for_read() as session: query = session.query(model, *args) return query def add_identity_filter(query, value): """Adds an identity filter to a query. Filters results by ID, if supplied value is a valid integer. Otherwise attempts to filter results by UUID. :param query: Initial query to add filter to. :param value: Value for filtering results by. :return: Modified query. """ if strutils.is_int_like(value): return query.filter_by(id=value) elif uuidutils.is_uuid_like(value): return query.filter_by(uuid=value) else: raise exception.InvalidIdentity(identity=value) def add_port_filter(query, value): """Adds a port-specific filter to a query. 
Filters results by address, if supplied value is a valid MAC address. Otherwise attempts to filter results by identity. :param query: Initial query to add filter to. :param value: Value for filtering results by. :return: Modified query. """ if utils.is_valid_mac(value): return query.filter_by(address=value) else: return add_identity_filter(query, value) def add_port_filter_by_node(query, value): if strutils.is_int_like(value): return query.filter_by(node_id=value) else: query = query.join(models.Node, models.Port.node_id == models.Node.id) return query.filter(models.Node.uuid == value) def add_portgroup_filter(query, value): """Adds a portgroup-specific filter to a query. Filters results by address, if supplied value is a valid MAC address. Otherwise attempts to filter results by identity. :param query: Initial query to add filter to. :param value: Value for filtering results by. :return: Modified query. """ if utils.is_valid_mac(value): return query.filter_by(address=value) else: return add_identity_filter(query, value) def add_portgroup_filter_by_node(query, value): if strutils.is_int_like(value): return query.filter_by(node_id=value) else: query = query.join(models.Node, models.Portgroup.node_id == models.Node.id) return query.filter(models.Node.uuid == value) def add_port_filter_by_portgroup(query, value): if strutils.is_int_like(value): return query.filter_by(portgroup_id=value) else: query = query.join(models.Portgroup, models.Port.portgroup_id == models.Portgroup.id) return query.filter(models.Portgroup.uuid == value) def add_node_filter_by_chassis(query, value): if strutils.is_int_like(value): return query.filter_by(chassis_id=value) else: query = query.join(models.Chassis, models.Node.chassis_id == models.Chassis.id) return query.filter(models.Chassis.uuid == value) def _paginate_query(model, limit=None, marker=None, sort_key=None, sort_dir=None, query=None): if not query: query = model_query(model) sort_keys = ['id'] if sort_key and sort_key not in sort_keys: sort_keys.insert(0, sort_key) try: query = db_utils.paginate_query(query, model, limit, sort_keys, marker=marker, sort_dir=sort_dir) except db_exc.InvalidSortKey: raise exception.InvalidParameterValue( _('The sort_key value "%(key)s" is an invalid field for sorting') % {'key': sort_key}) return query.all() class Connection(api.Connection): """SqlAlchemy connection.""" def __init__(self): pass def _add_nodes_filters(self, query, filters): if filters is None: filters = [] if 'chassis_uuid' in filters: # get_chassis_by_uuid() to raise an exception if the chassis # is not found chassis_obj = self.get_chassis_by_uuid(filters['chassis_uuid']) query = query.filter_by(chassis_id=chassis_obj.id) if 'associated' in filters: if filters['associated']: query = query.filter(models.Node.instance_uuid != sql.null()) else: query = query.filter(models.Node.instance_uuid == sql.null()) if 'reserved' in filters: if filters['reserved']: query = query.filter(models.Node.reservation != sql.null()) else: query = query.filter(models.Node.reservation == sql.null()) if 'reserved_by_any_of' in filters: query = query.filter(models.Node.reservation.in_( filters['reserved_by_any_of'])) if 'maintenance' in filters: query = query.filter_by(maintenance=filters['maintenance']) if 'driver' in filters: query = query.filter_by(driver=filters['driver']) if 'provision_state' in filters: query = query.filter_by(provision_state=filters['provision_state']) if 'provisioned_before' in filters: limit = (timeutils.utcnow() - 
datetime.timedelta(seconds=filters['provisioned_before'])) query = query.filter(models.Node.provision_updated_at < limit) if 'inspection_started_before' in filters: limit = ((timeutils.utcnow()) - (datetime.timedelta( seconds=filters['inspection_started_before']))) query = query.filter(models.Node.inspection_started_at < limit) return query def get_nodeinfo_list(self, columns=None, filters=None, limit=None, marker=None, sort_key=None, sort_dir=None): # list-ify columns default values because it is bad form # to include a mutable list in function definitions. if columns is None: columns = [models.Node.id] else: columns = [getattr(models.Node, c) for c in columns] query = model_query(*columns, base_model=models.Node) query = self._add_nodes_filters(query, filters) return _paginate_query(models.Node, limit, marker, sort_key, sort_dir, query) def get_node_list(self, filters=None, limit=None, marker=None, sort_key=None, sort_dir=None): query = model_query(models.Node) query = self._add_nodes_filters(query, filters) return _paginate_query(models.Node, limit, marker, sort_key, sort_dir, query) def reserve_node(self, tag, node_id): with _session_for_write(): query = model_query(models.Node) query = add_identity_filter(query, node_id) # be optimistic and assume we usually create a reservation count = query.filter_by(reservation=None).update( {'reservation': tag}, synchronize_session=False) try: node = query.one() if count != 1: # Nothing updated and node exists. Must already be # locked. raise exception.NodeLocked(node=node.uuid, host=node['reservation']) return node except NoResultFound: raise exception.NodeNotFound(node_id) def release_node(self, tag, node_id): with _session_for_write(): query = model_query(models.Node) query = add_identity_filter(query, node_id) # be optimistic and assume we usually release a reservation count = query.filter_by(reservation=tag).update( {'reservation': None}, synchronize_session=False) try: if count != 1: node = query.one() if node['reservation'] is None: raise exception.NodeNotLocked(node=node.uuid) else: raise exception.NodeLocked(node=node.uuid, host=node['reservation']) except NoResultFound: raise exception.NodeNotFound(node_id) def create_node(self, values): # ensure defaults are present for new nodes if 'uuid' not in values: values['uuid'] = uuidutils.generate_uuid() if 'power_state' not in values: values['power_state'] = states.NOSTATE if 'provision_state' not in values: values['provision_state'] = states.ENROLL # TODO(zhenguo): Support creating node with tags if 'tags' in values: LOG.warning( _LW('Ignore the specified tags %(tags)s when creating node: ' '%(node)s.'), {'tags': values['tags'], 'node': values['uuid']}) del values['tags'] node = models.Node() node.update(values) with _session_for_write() as session: try: session.add(node) session.flush() except db_exc.DBDuplicateEntry as exc: if 'name' in exc.columns: raise exception.DuplicateName(name=values['name']) elif 'instance_uuid' in exc.columns: raise exception.InstanceAssociated( instance_uuid=values['instance_uuid'], node=values['uuid']) raise exception.NodeAlreadyExists(uuid=values['uuid']) return node def get_node_by_id(self, node_id): query = model_query(models.Node).filter_by(id=node_id) try: return query.one() except NoResultFound: raise exception.NodeNotFound(node=node_id) def get_node_by_uuid(self, node_uuid): query = model_query(models.Node).filter_by(uuid=node_uuid) try: return query.one() except NoResultFound: raise exception.NodeNotFound(node=node_uuid) def get_node_by_name(self, 
node_name): query = model_query(models.Node).filter_by(name=node_name) try: return query.one() except NoResultFound: raise exception.NodeNotFound(node=node_name) def get_node_by_instance(self, instance): if not uuidutils.is_uuid_like(instance): raise exception.InvalidUUID(uuid=instance) query = (model_query(models.Node) .filter_by(instance_uuid=instance)) try: result = query.one() except NoResultFound: raise exception.InstanceNotFound(instance=instance) return result def destroy_node(self, node_id): with _session_for_write(): query = model_query(models.Node) query = add_identity_filter(query, node_id) try: node_ref = query.one() except NoResultFound: raise exception.NodeNotFound(node=node_id) # Get node ID, if an UUID was supplied. The ID is # required for deleting all ports, attached to the node. if uuidutils.is_uuid_like(node_id): node_id = node_ref['id'] port_query = model_query(models.Port) port_query = add_port_filter_by_node(port_query, node_id) port_query.delete() portgroup_query = model_query(models.Portgroup) portgroup_query = add_portgroup_filter_by_node(portgroup_query, node_id) portgroup_query.delete() # Delete all tags attached to the node tag_query = model_query(models.NodeTag).filter_by(node_id=node_id) tag_query.delete() query.delete() def update_node(self, node_id, values): # NOTE(dtantsur): this can lead to very strange errors if 'uuid' in values: msg = _("Cannot overwrite UUID for an existing Node.") raise exception.InvalidParameterValue(err=msg) try: return self._do_update_node(node_id, values) except db_exc.DBDuplicateEntry as e: if 'name' in e.columns: raise exception.DuplicateName(name=values['name']) elif 'uuid' in e.columns: raise exception.NodeAlreadyExists(uuid=values['uuid']) elif 'instance_uuid' in e.columns: raise exception.InstanceAssociated( instance_uuid=values['instance_uuid'], node=node_id) else: raise e def _do_update_node(self, node_id, values): with _session_for_write(): query = model_query(models.Node) query = add_identity_filter(query, node_id) try: ref = query.with_lockmode('update').one() except NoResultFound: raise exception.NodeNotFound(node=node_id) # Prevent instance_uuid overwriting if values.get("instance_uuid") and ref.instance_uuid: raise exception.NodeAssociated( node=ref.uuid, instance=ref.instance_uuid) if 'provision_state' in values: values['provision_updated_at'] = timeutils.utcnow() if values['provision_state'] == states.INSPECTING: values['inspection_started_at'] = timeutils.utcnow() values['inspection_finished_at'] = None elif (ref.provision_state == states.INSPECTING and values['provision_state'] == states.MANAGEABLE): values['inspection_finished_at'] = timeutils.utcnow() values['inspection_started_at'] = None elif (ref.provision_state == states.INSPECTING and values['provision_state'] == states.INSPECTFAIL): values['inspection_started_at'] = None ref.update(values) return ref def get_port_by_id(self, port_id): query = model_query(models.Port).filter_by(id=port_id) try: return query.one() except NoResultFound: raise exception.PortNotFound(port=port_id) def get_port_by_uuid(self, port_uuid): query = model_query(models.Port).filter_by(uuid=port_uuid) try: return query.one() except NoResultFound: raise exception.PortNotFound(port=port_uuid) def get_port_by_address(self, address): query = model_query(models.Port).filter_by(address=address) try: return query.one() except NoResultFound: raise exception.PortNotFound(port=address) def get_port_list(self, limit=None, marker=None, sort_key=None, sort_dir=None): return 
_paginate_query(models.Port, limit, marker, sort_key, sort_dir) def get_ports_by_node_id(self, node_id, limit=None, marker=None, sort_key=None, sort_dir=None): query = model_query(models.Port) query = query.filter_by(node_id=node_id) return _paginate_query(models.Port, limit, marker, sort_key, sort_dir, query) def get_ports_by_portgroup_id(self, portgroup_id, limit=None, marker=None, sort_key=None, sort_dir=None): query = model_query(models.Port) query = query.filter_by(portgroup_id=portgroup_id) return _paginate_query(models.Port, limit, marker, sort_key, sort_dir, query) def create_port(self, values): if not values.get('uuid'): values['uuid'] = uuidutils.generate_uuid() port = models.Port() port.update(values) with _session_for_write() as session: try: session.add(port) session.flush() except db_exc.DBDuplicateEntry as exc: if 'address' in exc.columns: raise exception.MACAlreadyExists(mac=values['address']) raise exception.PortAlreadyExists(uuid=values['uuid']) return port def update_port(self, port_id, values): # NOTE(dtantsur): this can lead to very strange errors if 'uuid' in values: msg = _("Cannot overwrite UUID for an existing Port.") raise exception.InvalidParameterValue(err=msg) try: with _session_for_write() as session: query = model_query(models.Port) query = add_port_filter(query, port_id) ref = query.one() ref.update(values) session.flush() except NoResultFound: raise exception.PortNotFound(port=port_id) except db_exc.DBDuplicateEntry: raise exception.MACAlreadyExists(mac=values['address']) return ref def destroy_port(self, port_id): with _session_for_write(): query = model_query(models.Port) query = add_port_filter(query, port_id) count = query.delete() if count == 0: raise exception.PortNotFound(port=port_id) def get_portgroup_by_id(self, portgroup_id): query = model_query(models.Portgroup).filter_by(id=portgroup_id) try: return query.one() except NoResultFound: raise exception.PortgroupNotFound(portgroup=portgroup_id) def get_portgroup_by_uuid(self, portgroup_uuid): query = model_query(models.Portgroup).filter_by(uuid=portgroup_uuid) try: return query.one() except NoResultFound: raise exception.PortgroupNotFound(portgroup=portgroup_uuid) def get_portgroup_by_address(self, address): query = model_query(models.Portgroup).filter_by(address=address) try: return query.one() except NoResultFound: raise exception.PortgroupNotFound(portgroup=address) def get_portgroup_by_name(self, name): query = model_query(models.Portgroup).filter_by(name=name) try: return query.one() except NoResultFound: raise exception.PortgroupNotFound(portgroup=name) def get_portgroup_list(self, limit=None, marker=None, sort_key=None, sort_dir=None): return _paginate_query(models.Portgroup, limit, marker, sort_key, sort_dir) def get_portgroups_by_node_id(self, node_id, limit=None, marker=None, sort_key=None, sort_dir=None): query = model_query(models.Portgroup) query = query.filter_by(node_id=node_id) return _paginate_query(models.Portgroup, limit, marker, sort_key, sort_dir, query) def create_portgroup(self, values): if not values.get('uuid'): values['uuid'] = uuidutils.generate_uuid() portgroup = models.Portgroup() portgroup.update(values) with _session_for_write() as session: try: session.add(portgroup) session.flush() except db_exc.DBDuplicateEntry as exc: if 'name' in exc.columns: raise exception.PortgroupDuplicateName(name=values['name']) elif 'address' in exc.columns: raise exception.PortgroupMACAlreadyExists( mac=values['address']) raise exception.PortgroupAlreadyExists(uuid=values['uuid']) return 
portgroup def update_portgroup(self, portgroup_id, values): if 'uuid' in values: msg = _("Cannot overwrite UUID for an existing portgroup.") raise exception.InvalidParameterValue(err=msg) with _session_for_write() as session: try: query = model_query(models.Portgroup) query = add_portgroup_filter(query, portgroup_id) ref = query.one() ref.update(values) session.flush() except NoResultFound: raise exception.PortgroupNotFound(portgroup=portgroup_id) except db_exc.DBDuplicateEntry as exc: if 'name' in exc.columns: raise exception.PortgroupDuplicateName(name=values['name']) elif 'address' in exc.columns: raise exception.PortgroupMACAlreadyExists( mac=values['address']) else: raise exc return ref def destroy_portgroup(self, portgroup_id): def portgroup_not_empty(session): """Checks whether the portgroup does not have ports.""" query = model_query(models.Port) query = add_port_filter_by_portgroup(query, portgroup_id) return query.count() != 0 with _session_for_write() as session: if portgroup_not_empty(session): raise exception.PortgroupNotEmpty(portgroup=portgroup_id) query = model_query(models.Portgroup, session=session) query = add_identity_filter(query, portgroup_id) count = query.delete() if count == 0: raise exception.PortgroupNotFound(portgroup=portgroup_id) def get_chassis_by_id(self, chassis_id): query = model_query(models.Chassis).filter_by(id=chassis_id) try: return query.one() except NoResultFound: raise exception.ChassisNotFound(chassis=chassis_id) def get_chassis_by_uuid(self, chassis_uuid): query = model_query(models.Chassis).filter_by(uuid=chassis_uuid) try: return query.one() except NoResultFound: raise exception.ChassisNotFound(chassis=chassis_uuid) def get_chassis_list(self, limit=None, marker=None, sort_key=None, sort_dir=None): return _paginate_query(models.Chassis, limit, marker, sort_key, sort_dir) def create_chassis(self, values): if not values.get('uuid'): values['uuid'] = uuidutils.generate_uuid() chassis = models.Chassis() chassis.update(values) with _session_for_write() as session: try: session.add(chassis) session.flush() except db_exc.DBDuplicateEntry: raise exception.ChassisAlreadyExists(uuid=values['uuid']) return chassis def update_chassis(self, chassis_id, values): # NOTE(dtantsur): this can lead to very strange errors if 'uuid' in values: msg = _("Cannot overwrite UUID for an existing Chassis.") raise exception.InvalidParameterValue(err=msg) with _session_for_write(): query = model_query(models.Chassis) query = add_identity_filter(query, chassis_id) count = query.update(values) if count != 1: raise exception.ChassisNotFound(chassis=chassis_id) ref = query.one() return ref def destroy_chassis(self, chassis_id): def chassis_not_empty(): """Checks whether the chassis does not have nodes.""" query = model_query(models.Node) query = add_node_filter_by_chassis(query, chassis_id) return query.count() != 0 with _session_for_write(): if chassis_not_empty(): raise exception.ChassisNotEmpty(chassis=chassis_id) query = model_query(models.Chassis) query = add_identity_filter(query, chassis_id) count = query.delete() if count != 1: raise exception.ChassisNotFound(chassis=chassis_id) def register_conductor(self, values, update_existing=False): with _session_for_write() as session: query = (model_query(models.Conductor) .filter_by(hostname=values['hostname'])) try: ref = query.one() if ref.online is True and not update_existing: raise exception.ConductorAlreadyRegistered( conductor=values['hostname']) except NoResultFound: ref = models.Conductor() session.add(ref) 
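# (Reaching this point means no conductor row exists yet for this
# hostname; the freshly created row is populated below and then
# forced online with a current updated_at timestamp.)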
ref.update(values) # always set online and updated_at fields when registering # a conductor, especially when updating an existing one ref.update({'updated_at': timeutils.utcnow(), 'online': True}) return ref def get_conductor(self, hostname): try: return (model_query(models.Conductor) .filter_by(hostname=hostname, online=True) .one()) except NoResultFound: raise exception.ConductorNotFound(conductor=hostname) def unregister_conductor(self, hostname): with _session_for_write(): query = (model_query(models.Conductor) .filter_by(hostname=hostname, online=True)) count = query.update({'online': False}) if count == 0: raise exception.ConductorNotFound(conductor=hostname) def touch_conductor(self, hostname): with _session_for_write(): query = (model_query(models.Conductor) .filter_by(hostname=hostname)) # since we're not changing any other field, manually set updated_at # and since we're heartbeating, make sure that online=True count = query.update({'updated_at': timeutils.utcnow(), 'online': True}) if count == 0: raise exception.ConductorNotFound(conductor=hostname) def clear_node_reservations_for_conductor(self, hostname): nodes = [] with _session_for_write(): query = (model_query(models.Node) .filter_by(reservation=hostname)) nodes = [node['uuid'] for node in query] query.update({'reservation': None}) if nodes: nodes = ', '.join(nodes) LOG.warning( _LW('Cleared reservations held by %(hostname)s: ' '%(nodes)s'), {'hostname': hostname, 'nodes': nodes}) def get_active_driver_dict(self, interval=None): if interval is None: interval = CONF.conductor.heartbeat_timeout limit = timeutils.utcnow() - datetime.timedelta(seconds=interval) result = (model_query(models.Conductor) .filter_by(online=True) .filter(models.Conductor.updated_at >= limit) .all()) # build mapping of drivers to the set of hosts which support them d2c = collections.defaultdict(set) for row in result: for driver in row['drivers']: d2c[driver].add(row['hostname']) return d2c def get_offline_conductors(self): interval = CONF.conductor.heartbeat_timeout limit = timeutils.utcnow() - datetime.timedelta(seconds=interval) result = (model_query(models.Conductor).filter_by() .filter(models.Conductor.updated_at < limit) .all()) return [row['hostname'] for row in result] def touch_node_provisioning(self, node_id): with _session_for_write(): query = model_query(models.Node) query = add_identity_filter(query, node_id) count = query.update({'provision_updated_at': timeutils.utcnow()}) if count == 0: raise exception.NodeNotFound(node_id) def _check_node_exists(self, node_id): if not model_query(models.Node).filter_by(id=node_id).scalar(): raise exception.NodeNotFound(node=node_id) def set_node_tags(self, node_id, tags): # remove duplicate tags tags = set(tags) with _session_for_write() as session: self.unset_node_tags(node_id) node_tags = [] for tag in tags: node_tag = models.NodeTag(tag=tag, node_id=node_id) session.add(node_tag) node_tags.append(node_tag) return node_tags def unset_node_tags(self, node_id): self._check_node_exists(node_id) with _session_for_write(): model_query(models.NodeTag).filter_by(node_id=node_id).delete() def get_node_tags_by_node_id(self, node_id): self._check_node_exists(node_id) result = (model_query(models.NodeTag) .filter_by(node_id=node_id) .all()) return result def add_node_tag(self, node_id, tag): node_tag = models.NodeTag(tag=tag, node_id=node_id) self._check_node_exists(node_id) try: with _session_for_write() as session: session.add(node_tag) session.flush() except db_exc.DBDuplicateEntry: # NOTE(zhenguo): ignore 
tags duplicates pass return node_tag def delete_node_tag(self, node_id, tag): self._check_node_exists(node_id) with _session_for_write(): result = model_query(models.NodeTag).filter_by( node_id=node_id, tag=tag).delete() if not result: raise exception.NodeTagNotFound(node_id=node_id, tag=tag) def node_tag_exists(self, node_id, tag): q = model_query(models.NodeTag).filter_by(node_id=node_id, tag=tag) return model_query(q.exists()).scalar() ironic-5.1.0/ironic/db/sqlalchemy/alembic/0000775000567000056710000000000012674513633021561 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/db/sqlalchemy/alembic/env.py0000664000567000056710000000370012674513466022727 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from logging import config as log_config from alembic import context from oslo_db.sqlalchemy import enginefacade try: # NOTE(whaom): This is to register the DB2 alembic code which # is an optional runtime dependency. from ibm_db_alembic.ibm_db import IbmDbImpl # noqa except ImportError: pass from ironic.db.sqlalchemy import models # this is the Alembic Config object, which provides # access to the values within the .ini file in use. config = context.config # Interpret the config file for Python logging. # This line sets up loggers basically. log_config.fileConfig(config.config_file_name) # add your model's MetaData object here # for 'autogenerate' support # from myapp import mymodel target_metadata = models.Base.metadata # other values from the config, defined by the needs of env.py, # can be acquired: # my_important_option = config.get_main_option("my_important_option") # ... etc. def run_migrations_online(): """Run migrations in 'online' mode. In this scenario we need to create an Engine and associate a connection with the context. 
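Note that the Engine is obtained from oslo.db's enginefacade rather than
from the (commented-out) sqlalchemy.url entry in alembic.ini, so the
connection settings come from the service's registered [database]
configuration options instead of the alembic config file.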
""" engine = enginefacade.get_legacy_facade().get_engine() with engine.connect() as connection: context.configure(connection=connection, target_metadata=target_metadata) with context.begin_transaction(): context.run_migrations() run_migrations_online() ironic-5.1.0/ironic/db/sqlalchemy/alembic/README0000664000567000056710000000100712674513466022443 0ustar jenkinsjenkins00000000000000Please see https://alembic.readthedocs.org/en/latest/index.html for general documentation To create alembic migrations use: $ ironic-dbsync revision --message --autogenerate Stamp db with most recent migration version, without actually running migrations $ ironic-dbsync stamp --revision head Upgrade can be performed by: $ ironic-dbsync - for backward compatibility $ ironic-dbsync upgrade # ironic-dbsync upgrade --revision head Downgrading db: $ ironic-dbsync downgrade $ ironic-dbsync downgrade --revision base ironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/0000775000567000056710000000000012674513633023431 5ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/789acc877671_add_raid_config.py0000664000567000056710000000207412674513466030650 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add node.raid_config and node.target_raid_config Revision ID: 789acc877671 Revises: 2fb93ffd2af1 Create Date: 2015-06-26 01:21:46.062311 """ # revision identifiers, used by Alembic. revision = '789acc877671' down_revision = '2fb93ffd2af1' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column('nodes', sa.Column('raid_config', sa.Text(), nullable=True)) op.add_column('nodes', sa.Column('target_raid_config', sa.Text(), nullable=True)) ironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/2581ebaf0cb2_initial_migration.py0000664000567000056710000001035312674513466031457 0ustar jenkinsjenkins00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """initial migration Revision ID: 2581ebaf0cb2 Revises: None Create Date: 2014-01-17 12:14:07.754448 """ # revision identifiers, used by Alembic. revision = '2581ebaf0cb2' down_revision = None from alembic import op import sqlalchemy as sa def upgrade(): # commands auto generated by Alembic - please adjust! 
op.create_table( 'conductors', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), nullable=False), sa.Column('hostname', sa.String(length=255), nullable=False), sa.Column('drivers', sa.Text(), nullable=True), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('hostname', name='uniq_conductors0hostname'), mysql_ENGINE='InnoDB', mysql_DEFAULT_CHARSET='UTF8' ) op.create_table( 'chassis', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), nullable=False), sa.Column('uuid', sa.String(length=36), nullable=True), sa.Column('extra', sa.Text(), nullable=True), sa.Column('description', sa.String(length=255), nullable=True), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('uuid', name='uniq_chassis0uuid'), mysql_ENGINE='InnoDB', mysql_DEFAULT_CHARSET='UTF8' ) op.create_table( 'nodes', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), nullable=False), sa.Column('uuid', sa.String(length=36), nullable=True), sa.Column('instance_uuid', sa.String(length=36), nullable=True), sa.Column('chassis_id', sa.Integer(), nullable=True), sa.Column('power_state', sa.String(length=15), nullable=True), sa.Column('target_power_state', sa.String(length=15), nullable=True), sa.Column('provision_state', sa.String(length=15), nullable=True), sa.Column('target_provision_state', sa.String(length=15), nullable=True), sa.Column('last_error', sa.Text(), nullable=True), sa.Column('properties', sa.Text(), nullable=True), sa.Column('driver', sa.String(length=15), nullable=True), sa.Column('driver_info', sa.Text(), nullable=True), sa.Column('reservation', sa.String(length=255), nullable=True), sa.Column('maintenance', sa.Boolean(), nullable=True), sa.Column('extra', sa.Text(), nullable=True), sa.ForeignKeyConstraint(['chassis_id'], ['chassis.id'], ), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('uuid', name='uniq_nodes0uuid'), mysql_ENGINE='InnoDB', mysql_DEFAULT_CHARSET='UTF8' ) op.create_index('node_instance_uuid', 'nodes', ['instance_uuid'], unique=False) op.create_table( 'ports', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), nullable=False), sa.Column('uuid', sa.String(length=36), nullable=True), sa.Column('address', sa.String(length=18), nullable=True), sa.Column('node_id', sa.Integer(), nullable=True), sa.Column('extra', sa.Text(), nullable=True), sa.ForeignKeyConstraint(['node_id'], ['nodes.id'], ), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('address', name='uniq_ports0address'), sa.UniqueConstraint('uuid', name='uniq_ports0uuid'), mysql_ENGINE='InnoDB', mysql_DEFAULT_CHARSET='UTF8' ) # end Alembic commands ironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/3ae36a5f5131_add_logical_name.py0000664000567000056710000000177412674513466031051 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """add_logical_name Revision ID: 3ae36a5f5131 Revises: bb59b63f55a Create Date: 2014-12-10 14:27:26.323540 """ # revision identifiers, used by Alembic. revision = '3ae36a5f5131' down_revision = 'bb59b63f55a' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column('nodes', sa.Column('name', sa.String(length=63), nullable=True)) op.create_unique_constraint('uniq_nodes0name', 'nodes', ['name']) ironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/242cc6a923b3_add_node_maintenance_reason.py0000664000567000056710000000177612674513466033277 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add Node.maintenance_reason Revision ID: 242cc6a923b3 Revises: 487deb87cc9d Create Date: 2014-10-15 23:00:43.164061 """ # revision identifiers, used by Alembic. revision = '242cc6a923b3' down_revision = '487deb87cc9d' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column('nodes', sa.Column('maintenance_reason', sa.Text(), nullable=True)) ././@LongLink0000000000000000000000000000015300000000000011214 Lustar 00000000000000ironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/5ea1b0d310e_added_port_group_table_and_altered_ports.pyironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/5ea1b0d310e_added_port_group_table_and_altered_po0000664000567000056710000000466512674513466034675 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Added portgroups table and altered ports Revision ID: 5ea1b0d310e Revises: 48d6c242bb9b Create Date: 2015-06-30 14:14:26.972368 """ # revision identifiers, used by Alembic. 
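# Once this file is in place it is applied like any other revision,
# e.g. per the alembic README in this tree: $ ironic-dbsync upgrade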
revision = '5ea1b0d310e' down_revision = '48d6c242bb9b' from alembic import op import sqlalchemy as sa def upgrade(): op.create_table('portgroups', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), nullable=False), sa.Column('uuid', sa.String(length=36), nullable=True), sa.Column('name', sa.String(length=255), nullable=True), sa.Column('node_id', sa.Integer(), nullable=True), sa.Column('address', sa.String(length=18), nullable=True), sa.Column('extra', sa.Text(), nullable=True), sa.ForeignKeyConstraint(['node_id'], ['nodes.id'], ), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('uuid', name='uniq_portgroups0uuid'), sa.UniqueConstraint('address', name='uniq_portgroups0address'), sa.UniqueConstraint('name', name='uniq_portgroups0name'), mysql_ENGINE='InnoDB', mysql_DEFAULT_CHARSET='UTF8') op.add_column(u'ports', sa.Column('local_link_connection', sa.Text(), nullable=True)) op.add_column(u'ports', sa.Column('portgroup_id', sa.Integer(), nullable=True)) op.add_column(u'ports', sa.Column('pxe_enabled', sa.Boolean(), default=True)) op.create_foreign_key('fk_portgroups_ports', 'ports', 'portgroups', ['portgroup_id'], ['id']) ironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/487deb87cc9d_add_conductor_affinity_and_online.py0000664000567000056710000000227012674513466034676 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add conductor_affinity and online Revision ID: 487deb87cc9d Revises: 3bea56f25597 Create Date: 2014-09-26 16:16:30.988900 """ # revision identifiers, used by Alembic. revision = '487deb87cc9d' down_revision = '3bea56f25597' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column( 'conductors', sa.Column('online', sa.Boolean(), default=True)) op.add_column( 'nodes', sa.Column('conductor_affinity', sa.Integer(), sa.ForeignKey('conductors.id', name='nodes_conductor_affinity_fk'), nullable=True)) ironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/2fb93ffd2af1_increase_node_name_length.py0000664000567000056710000000210112674513466033177 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """increase-node-name-length Revision ID: 2fb93ffd2af1 Revises: 4f399b21ae71 Create Date: 2015-03-18 17:08:11.470791 """ # revision identifiers, used by Alembic. 
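# The alter_column() call below is roughly equivalent to the following
# MySQL DDL (illustrative only; other backends differ):
#     ALTER TABLE nodes MODIFY name VARCHAR(255) NULL;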
revision = '2fb93ffd2af1' down_revision = '4f399b21ae71' from alembic import op import sqlalchemy as sa from sqlalchemy.dialects import mysql def upgrade(): op.alter_column('nodes', 'name', existing_type=mysql.VARCHAR(length=63), type_=sa.String(length=255), existing_nullable=True) ironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/48d6c242bb9b_add_node_tags.py0000664000567000056710000000264012674513466030463 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add node tags Revision ID: 48d6c242bb9b Revises: 516faf1bb9b1 Create Date: 2015-10-08 10:07:33.779516 """ # revision identifiers, used by Alembic. revision = '48d6c242bb9b' down_revision = '516faf1bb9b1' from alembic import op import sqlalchemy as sa def upgrade(): op.create_table( 'node_tags', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('node_id', sa.Integer(), nullable=False, autoincrement=False), sa.Column('tag', sa.String(length=255), nullable=False), sa.ForeignKeyConstraint(['node_id'], ['nodes.id'], ), sa.PrimaryKeyConstraint('node_id', 'tag'), mysql_ENGINE='InnoDB', mysql_DEFAULT_CHARSET='UTF8' ) op.create_index('node_tags_idx', 'node_tags', ['tag'], unique=False) ironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/21b331f883ef_add_provision_updated_at.py0000664000567000056710000000171112674513466032660 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add provision_updated_at Revision ID: 21b331f883ef Revises: 2581ebaf0cb2 Create Date: 2014-02-19 13:45:30.150632 """ # revision identifiers, used by Alembic. revision = '21b331f883ef' down_revision = '2581ebaf0cb2' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column('nodes', sa.Column('provision_updated_at', sa.DateTime(), nullable=True)) ironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/f6fdb920c182_set_pxe_enabled_true.py0000664000567000056710000000223112674513466032065 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Set Port.pxe_enabled to True if NULL Revision ID: f6fdb920c182 Revises: 5ea1b0d310e Create Date: 2016-02-12 16:53:21.008580 """ # revision identifiers, used by Alembic. revision = 'f6fdb920c182' down_revision = '5ea1b0d310e' from alembic import op from sqlalchemy import Boolean, String from sqlalchemy.sql import table, column, null port = table('ports', column('uuid', String(36)), column('pxe_enabled', Boolean())) def upgrade(): op.execute( port.update().where( port.c.pxe_enabled == null()).values( {'pxe_enabled': True})) ironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/1e1d5ace7dc6_add_inspection_started_at_and_.py0000664000567000056710000000230612674513466034222 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add inspection_started_at and inspection_finished_at Revision ID: 1e1d5ace7dc6 Revises: 3ae36a5f5131 Create Date: 2015-02-26 10:46:46.861927 """ # revision identifiers, used by Alembic. revision = '1e1d5ace7dc6' down_revision = '3ae36a5f5131' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column('nodes', sa.Column('inspection_started_at', sa.DateTime(), nullable=True)) op.add_column('nodes', sa.Column('inspection_finished_at', sa.DateTime(), nullable=True)) ironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/4f399b21ae71_add_node_clean_step.py0000664000567000056710000000166612674513466031573 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add node.clean_step Revision ID: 4f399b21ae71 Revises: 1e1d5ace7dc6 Create Date: 2015-02-18 01:21:46.062311 """ # revision identifiers, used by Alembic. revision = '4f399b21ae71' down_revision = '1e1d5ace7dc6' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column('nodes', sa.Column('clean_step', sa.Text(), nullable=True)) ././@LongLink0000000000000000000000000000015200000000000011213 Lustar 00000000000000ironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/3bea56f25597_add_unique_constraint_to_instance_uuid.pyironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/3bea56f25597_add_unique_constraint_to_instance_uu0000664000567000056710000000206212674513466034670 0ustar jenkinsjenkins00000000000000# Copyright 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add unique constraint to instance_uuid Revision ID: 3bea56f25597 Revises: 31baaf680d2b Create Date: 2014-06-05 11:45:07.046670 """ # revision identifiers, used by Alembic. revision = '3bea56f25597' down_revision = '31baaf680d2b' from alembic import op def upgrade(): op.create_unique_constraint("uniq_nodes0instance_uuid", "nodes", ["instance_uuid"]) op.drop_index('node_instance_uuid', 'nodes') ironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/516faf1bb9b1_resizing_column_nodes_driver.py0000664000567000056710000000175712674513466033744 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Resizing column nodes.driver Revision ID: 516faf1bb9b1 Revises: 789acc877671 Create Date: 2015-08-05 13:27:31.808919 """ # revision identifiers, used by Alembic. revision = '516faf1bb9b1' down_revision = '789acc877671' from alembic import op import sqlalchemy as sa def upgrade(): op.alter_column('nodes', 'driver', existing_type=sa.String(length=15), type_=sa.String(length=255)) ironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/3cb628139ea4_nodes_add_console_enabled.py0000664000567000056710000000164112674513466032743 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Nodes add console enabled Revision ID: 3cb628139ea4 Revises: 21b331f883ef Create Date: 2014-02-26 11:24:11.318023 """ # revision identifiers, used by Alembic. revision = '3cb628139ea4' down_revision = '21b331f883ef' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column('nodes', sa.Column('console_enabled', sa.Boolean)) ironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/bb59b63f55a_add_node_driver_internal_info.py0000664000567000056710000000200012674513466033633 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add_node_driver_internal_info Revision ID: bb59b63f55a Revises: 5674c57409b9 Create Date: 2015-01-28 14:28:22.212790 """ # revision identifiers, used by Alembic. revision = 'bb59b63f55a' down_revision = '5674c57409b9' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column('nodes', sa.Column('driver_internal_info', sa.Text(), nullable=True)) ironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/31baaf680d2b_add_node_instance_info.py0000664000567000056710000000211012674513466032400 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add Node instance info Revision ID: 31baaf680d2b Revises: 3cb628139ea4 Create Date: 2014-03-05 21:09:32.372463 """ # revision identifiers, used by Alembic. revision = '31baaf680d2b' down_revision = '3cb628139ea4' from alembic import op import sqlalchemy as sa def upgrade(): # commands auto generated by Alembic - please adjust op.add_column('nodes', sa.Column('instance_info', sa.Text(), nullable=True)) # end Alembic commands ironic-5.1.0/ironic/db/sqlalchemy/alembic/versions/5674c57409b9_replace_nostate_with_available.py0000664000567000056710000000276312674513466033716 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """replace NOSTATE with AVAILABLE Revision ID: 5674c57409b9 Revises: 242cc6a923b3 Create Date: 2015-01-14 16:55:44.718196 """ # revision identifiers, used by Alembic. revision = '5674c57409b9' down_revision = '242cc6a923b3' from alembic import op from sqlalchemy import String from sqlalchemy.sql import table, column, null node = table('nodes', column('uuid', String(36)), column('provision_state', String(15))) # NOTE(deva): We must represent the states as static strings in this migration # file, rather than import ironic.common.states, because that file may change # in the future. This migration script must still be able to be run with # future versions of the code and still produce the same results. 
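# In SQL terms, the update statement issued below is approximately
# (illustrative only):
#     UPDATE nodes SET provision_state = 'available'
#     WHERE provision_state IS NULL;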
AVAILABLE = 'available' def upgrade(): op.execute( node.update().where( node.c.provision_state == null()).values( {'provision_state': op.inline_literal(AVAILABLE)})) ironic-5.1.0/ironic/db/sqlalchemy/alembic/script.py.mako0000664000567000056710000000063412674513466024374 0ustar jenkinsjenkins00000000000000"""${message} Revision ID: ${up_revision} Revises: ${down_revision} Create Date: ${create_date} """ # revision identifiers, used by Alembic. revision = ${repr(up_revision)} down_revision = ${repr(down_revision)} from alembic import op import sqlalchemy as sa ${imports if imports else ""} def upgrade(): ${upgrades if upgrades else "pass"} def downgrade(): ${downgrades if downgrades else "pass"} ironic-5.1.0/ironic/db/sqlalchemy/__init__.py0000664000567000056710000000000012674513466022270 0ustar jenkinsjenkins00000000000000ironic-5.1.0/ironic/db/sqlalchemy/migration.py0000664000567000056710000000717212674513466022543 0ustar jenkinsjenkins00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import alembic from alembic import config as alembic_config import alembic.migration as alembic_migration from oslo_db import exception as db_exc from oslo_db.sqlalchemy import enginefacade from ironic.db.sqlalchemy import models def _alembic_config(): path = os.path.join(os.path.dirname(__file__), 'alembic.ini') config = alembic_config.Config(path) return config def version(config=None, engine=None): """Current database version. :returns: Database version :rtype: string """ if engine is None: engine = enginefacade.get_legacy_facade().get_engine() with engine.connect() as conn: context = alembic_migration.MigrationContext.configure(conn) return context.get_current_revision() def upgrade(revision, config=None): """Used for upgrading database. :param version: Desired database version :type version: string """ revision = revision or 'head' config = config or _alembic_config() alembic.command.upgrade(config, revision or 'head') def create_schema(config=None, engine=None): """Create database schema from models description. Can be used for initial installation instead of upgrade('head'). """ if engine is None: engine = enginefacade.get_legacy_facade().get_engine() # NOTE(viktors): If we will use metadata.create_all() for non empty db # schema, it will only add the new tables, but leave # existing as is. So we should avoid of this situation. if version(engine=engine) is not None: raise db_exc.DbMigrationError("DB schema is already under version" " control. Use upgrade() instead") models.Base.metadata.create_all(engine) stamp('head', config=config) def downgrade(revision, config=None): """Used for downgrading database. 
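For example (illustrative): passing 'base' reverts every applied
revision, while passing a specific revision id such as '2581ebaf0cb2'
rolls the schema back to that point.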
:param version: Desired database version :type version: string """ revision = revision or 'base' config = config or _alembic_config() return alembic.command.downgrade(config, revision) def stamp(revision, config=None): """Stamps database with provided revision. Don't run any migrations. :param revision: Should match one from repository or head - to stamp database with most recent revision :type revision: string """ config = config or _alembic_config() return alembic.command.stamp(config, revision=revision) def revision(message=None, autogenerate=False, config=None): """Creates template for migration. :param message: Text that will be used for migration title :type message: string :param autogenerate: If True - generates diff based on current database state :type autogenerate: bool """ config = config or _alembic_config() return alembic.command.revision(config, message=message, autogenerate=autogenerate) ironic-5.1.0/ironic/db/sqlalchemy/models.py0000664000567000056710000001606312674513466022034 0ustar jenkinsjenkins00000000000000# -*- encoding: utf-8 -*- # # Copyright 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ SQLAlchemy models for baremetal data. """ from oslo_config import cfg from oslo_db import options as db_options from oslo_db.sqlalchemy import models from oslo_db.sqlalchemy import types as db_types import six.moves.urllib.parse as urlparse from sqlalchemy import Boolean, Column, DateTime, Index from sqlalchemy import ForeignKey, Integer from sqlalchemy import schema, String, Text from sqlalchemy.ext.declarative import declarative_base from ironic.common.i18n import _ from ironic.common import paths sql_opts = [ cfg.StrOpt('mysql_engine', default='InnoDB', help=_('MySQL engine to use.')) ] _DEFAULT_SQL_CONNECTION = 'sqlite:///' + paths.state_path_def('ironic.sqlite') cfg.CONF.register_opts(sql_opts, 'database') db_options.set_defaults(cfg.CONF, _DEFAULT_SQL_CONNECTION, 'ironic.sqlite') def table_args(): engine_name = urlparse.urlparse(cfg.CONF.database.connection).scheme if engine_name == 'mysql': return {'mysql_engine': cfg.CONF.database.mysql_engine, 'mysql_charset': "utf8"} return None class IronicBase(models.TimestampMixin, models.ModelBase): metadata = None def as_dict(self): d = {} for c in self.__table__.columns: d[c.name] = self[c.name] return d Base = declarative_base(cls=IronicBase) class Chassis(Base): """Represents a hardware chassis.""" __tablename__ = 'chassis' __table_args__ = ( schema.UniqueConstraint('uuid', name='uniq_chassis0uuid'), table_args() ) id = Column(Integer, primary_key=True) uuid = Column(String(36)) extra = Column(db_types.JsonEncodedDict) description = Column(String(255), nullable=True) class Conductor(Base): """Represents a conductor service entry.""" __tablename__ = 'conductors' __table_args__ = ( schema.UniqueConstraint('hostname', name='uniq_conductors0hostname'), table_args() ) id = Column(Integer, primary_key=True) hostname = Column(String(255), nullable=False) drivers = Column(db_types.JsonEncodedList) 
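# NOTE: 'drivers' holds the JSON-encoded list of driver names this
# conductor supports; get_active_driver_dict() in api.py inverts it
# into a driver -> set-of-hostnames mapping.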
online = Column(Boolean, default=True) class Node(Base): """Represents a bare metal node.""" __tablename__ = 'nodes' __table_args__ = ( schema.UniqueConstraint('uuid', name='uniq_nodes0uuid'), schema.UniqueConstraint('instance_uuid', name='uniq_nodes0instance_uuid'), schema.UniqueConstraint('name', name='uniq_nodes0name'), table_args()) id = Column(Integer, primary_key=True) uuid = Column(String(36)) # NOTE(deva): we store instance_uuid directly on the node so that we can # filter on it more efficiently, even though it is # user-settable, and would otherwise be in node.properties. instance_uuid = Column(String(36), nullable=True) name = Column(String(255), nullable=True) chassis_id = Column(Integer, ForeignKey('chassis.id'), nullable=True) power_state = Column(String(15), nullable=True) target_power_state = Column(String(15), nullable=True) provision_state = Column(String(15), nullable=True) target_provision_state = Column(String(15), nullable=True) provision_updated_at = Column(DateTime, nullable=True) last_error = Column(Text, nullable=True) instance_info = Column(db_types.JsonEncodedDict) properties = Column(db_types.JsonEncodedDict) driver = Column(String(255)) driver_info = Column(db_types.JsonEncodedDict) driver_internal_info = Column(db_types.JsonEncodedDict) clean_step = Column(db_types.JsonEncodedDict) raid_config = Column(db_types.JsonEncodedDict) target_raid_config = Column(db_types.JsonEncodedDict) # NOTE(deva): this is the host name of the conductor which has # acquired a TaskManager lock on the node. # We should use an INT FK (conductors.id) in the future. reservation = Column(String(255), nullable=True) # NOTE(deva): this is the id of the last conductor which prepared local # state for the node (eg, a PXE config file). # When affinity and the hash ring's mapping do not match, # this indicates that a conductor should rebuild local state. 
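# (Compare 'reservation' above, which tracks the short-lived
# TaskManager lock, whereas conductor_affinity tracks longer-lived
# local state such as PXE configuration files.)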
conductor_affinity = Column(Integer, ForeignKey('conductors.id', name='nodes_conductor_affinity_fk'), nullable=True) maintenance = Column(Boolean, default=False) maintenance_reason = Column(Text, nullable=True) console_enabled = Column(Boolean, default=False) inspection_finished_at = Column(DateTime, nullable=True) inspection_started_at = Column(DateTime, nullable=True) extra = Column(db_types.JsonEncodedDict) class Port(Base): """Represents a network port of a bare metal node.""" __tablename__ = 'ports' __table_args__ = ( schema.UniqueConstraint('address', name='uniq_ports0address'), schema.UniqueConstraint('uuid', name='uniq_ports0uuid'), table_args()) id = Column(Integer, primary_key=True) uuid = Column(String(36)) address = Column(String(18)) node_id = Column(Integer, ForeignKey('nodes.id'), nullable=True) extra = Column(db_types.JsonEncodedDict) local_link_connection = Column(db_types.JsonEncodedDict) portgroup_id = Column(Integer, ForeignKey('portgroups.id'), nullable=True) pxe_enabled = Column(Boolean, default=True) class Portgroup(Base): """Represents a group of network ports of a bare metal node.""" __tablename__ = 'portgroups' __table_args__ = ( schema.UniqueConstraint('uuid', name='uniq_portgroups0uuid'), schema.UniqueConstraint('address', name='uniq_portgroups0address'), schema.UniqueConstraint('name', name='uniq_portgroups0name'), table_args()) id = Column(Integer, primary_key=True) uuid = Column(String(36)) name = Column(String(255), nullable=True) node_id = Column(Integer, ForeignKey('nodes.id'), nullable=True) address = Column(String(18)) extra = Column(db_types.JsonEncodedDict) class NodeTag(Base): """Represents a tag of a bare metal node.""" __tablename__ = 'node_tags' __table_args__ = ( Index('node_tags_idx', 'tag'), table_args()) node_id = Column(Integer, ForeignKey('nodes.id'), primary_key=True, nullable=False) tag = Column(String(255), primary_key=True, nullable=False) ironic-5.1.0/ironic/version.py0000664000567000056710000000130112674513466017474 0ustar jenkinsjenkins00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
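# Usage sketch (assuming pbr's standard VersionInfo API; illustrative):
#     from ironic import version
#     version.version_info.version_string()   # e.g. '5.1.0'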
import pbr.version version_info = pbr.version.VersionInfo('ironic') ironic-5.1.0/RELEASE-NOTES0000777000567000056710000000000012674513466024442 2doc/source/releasenotes/index.rstustar jenkinsjenkins00000000000000ironic-5.1.0/devstack/0000775000567000056710000000000012674513633015757 5ustar jenkinsjenkins00000000000000ironic-5.1.0/devstack/lib/0000775000567000056710000000000012674513633016525 5ustar jenkinsjenkins00000000000000ironic-5.1.0/devstack/lib/ironic0000664000567000056710000011607512674513470017744 0ustar jenkinsjenkins00000000000000#!/bin/bash # # lib/ironic # Functions to control the configuration and operation of the **Ironic** service # Dependencies: # # - ``functions`` file # - ``DEST``, ``DATA_DIR``, ``STACK_USER`` must be defined # - ``SERVICE_{TENANT_NAME|PASSWORD}`` must be defined # - ``SERVICE_HOST`` # - ``KEYSTONE_TOKEN_FORMAT`` must be defined # ``stack.sh`` calls the entry points in this order: # # - install_ironic # - install_ironicclient # - init_ironic # - start_ironic # - stop_ironic # - cleanup_ironic # Save trace and pipefail settings _XTRACE_IRONIC=$(set +o | grep xtrace) _PIPEFAIL_IRONIC=$(set +o | grep pipefail) set +o xtrace set +o pipefail # Defaults # -------- # Set up default directories GITDIR["python-ironicclient"]=$DEST/python-ironicclient GITDIR["ironic-lib"]=$DEST/ironic-lib IRONIC_DIR=$DEST/ironic IRONIC_DEVSTACK_DIR=$IRONIC_DIR/devstack IRONIC_DEVSTACK_FILES_DIR=$IRONIC_DEVSTACK_DIR/files IRONIC_PYTHON_AGENT_DIR=$DEST/ironic-python-agent IRONIC_DATA_DIR=$DATA_DIR/ironic IRONIC_STATE_PATH=/var/lib/ironic IRONIC_AUTH_CACHE_DIR=${IRONIC_AUTH_CACHE_DIR:-/var/cache/ironic} IRONIC_CONF_DIR=${IRONIC_CONF_DIR:-/etc/ironic} IRONIC_CONF_FILE=$IRONIC_CONF_DIR/ironic.conf IRONIC_ROOTWRAP_CONF=$IRONIC_CONF_DIR/rootwrap.conf IRONIC_POLICY_JSON=$IRONIC_CONF_DIR/policy.json # Deploy callback timeout can be changed from its default (1800), if required. IRONIC_CALLBACK_TIMEOUT=${IRONIC_CALLBACK_TIMEOUT:-} # Deploy to hardware platform IRONIC_HW_NODE_CPU=${IRONIC_HW_NODE_CPU:-1} IRONIC_HW_NODE_RAM=${IRONIC_HW_NODE_RAM:-512} IRONIC_HW_NODE_DISK=${IRONIC_HW_NODE_DISK:-10} IRONIC_HW_EPHEMERAL_DISK=${IRONIC_HW_EPHEMERAL_DISK:-0} IRONIC_HW_ARCH=${IRONIC_HW_ARCH:-x86_64} # The file is composed of multiple lines, each line includes four fields # separated by white space: IPMI address, MAC address, IPMI username # and IPMI password.
# # 192.168.110.107 00:1e:67:57:50:4c root otc123 IRONIC_IPMIINFO_FILE=${IRONIC_IPMIINFO_FILE:-$IRONIC_DATA_DIR/hardware_info} # Set up defaults for functional / integration testing IRONIC_NODE_UUID=${IRONIC_NODE_UUID:-`uuidgen`} IRONIC_SCRIPTS_DIR=${IRONIC_SCRIPTS_DIR:-$IRONIC_DEVSTACK_DIR/tools/ironic/scripts} IRONIC_TEMPLATES_DIR=${IRONIC_TEMPLATES_DIR:-$IRONIC_DEVSTACK_DIR/tools/ironic/templates} IRONIC_BAREMETAL_BASIC_OPS=$(trueorfalse False IRONIC_BAREMETAL_BASIC_OPS) IRONIC_ENABLED_DRIVERS=${IRONIC_ENABLED_DRIVERS:-fake,pxe_ssh,pxe_ipmitool} IRONIC_SSH_USERNAME=${IRONIC_SSH_USERNAME:-`whoami`} IRONIC_SSH_TIMEOUT=${IRONIC_SSH_TIMEOUT:-15} IRONIC_SSH_KEY_DIR=${IRONIC_SSH_KEY_DIR:-$IRONIC_DATA_DIR/ssh_keys} IRONIC_SSH_KEY_FILENAME=${IRONIC_SSH_KEY_FILENAME:-ironic_key} IRONIC_KEY_FILE=${IRONIC_KEY_FILE:-$IRONIC_SSH_KEY_DIR/$IRONIC_SSH_KEY_FILENAME} IRONIC_SSH_VIRT_TYPE=${IRONIC_SSH_VIRT_TYPE:-virsh} IRONIC_TFTPBOOT_DIR=${IRONIC_TFTPBOOT_DIR:-$IRONIC_DATA_DIR/tftpboot} IRONIC_TFTPSERVER_IP=${IRONIC_TFTPSERVER_IP:-$HOST_IP} IRONIC_VM_SSH_PORT=${IRONIC_VM_SSH_PORT:-22} IRONIC_VM_SSH_ADDRESS=${IRONIC_VM_SSH_ADDRESS:-$HOST_IP} IRONIC_VM_COUNT=${IRONIC_VM_COUNT:-1} IRONIC_VM_SPECS_CPU=${IRONIC_VM_SPECS_CPU:-1} IRONIC_VM_SPECS_RAM=${IRONIC_VM_SPECS_RAM:-1024} IRONIC_VM_SPECS_DISK=${IRONIC_VM_SPECS_DISK:-10} IRONIC_VM_EPHEMERAL_DISK=${IRONIC_VM_EPHEMERAL_DISK:-0} IRONIC_VM_EMULATOR=${IRONIC_VM_EMULATOR:-/usr/bin/qemu-system-x86_64} IRONIC_VM_NETWORK_BRIDGE=${IRONIC_VM_NETWORK_BRIDGE:-brbm} IRONIC_VM_NETWORK_RANGE=${IRONIC_VM_NETWORK_RANGE:-192.0.2.0/24} IRONIC_VM_MACS_CSV_FILE=${IRONIC_VM_MACS_CSV_FILE:-$IRONIC_DATA_DIR/ironic_macs.csv} IRONIC_AUTHORIZED_KEYS_FILE=${IRONIC_AUTHORIZED_KEYS_FILE:-$HOME/.ssh/authorized_keys} # By default, baremetal VMs will log console output to a file. IRONIC_VM_LOG_CONSOLE=${IRONIC_VM_LOG_CONSOLE:-True} IRONIC_VM_LOG_DIR=${IRONIC_VM_LOG_DIR:-$IRONIC_DATA_DIR/logs/} IRONIC_VM_LOG_ROTATE=$(trueorfalse True IRONIC_VM_LOG_ROTATE) # Use DIB to create deploy ramdisk and kernel. IRONIC_BUILD_DEPLOY_RAMDISK=$(trueorfalse True IRONIC_BUILD_DEPLOY_RAMDISK) # Ironic IPA ramdisk type, supported types are: coreos, tinyipa. IRONIC_RAMDISK_TYPE=${IRONIC_RAMDISK_TYPE:-coreos} # If not using DIB, these files are used as the deploy ramdisk/kernel. # (The value must be an absolute path) IRONIC_DEPLOY_RAMDISK=${IRONIC_DEPLOY_RAMDISK:-} IRONIC_DEPLOY_KERNEL=${IRONIC_DEPLOY_KERNEL:-} IRONIC_DEPLOY_ELEMENT=${IRONIC_DEPLOY_ELEMENT:-deploy-ironic} IRONIC_AGENT_KERNEL_URL=${IRONIC_AGENT_KERNEL_URL:-http://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe.vmlinuz} IRONIC_AGENT_RAMDISK_URL=${IRONIC_AGENT_RAMDISK_URL:-http://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe_image-oem.cpio.gz} # Which deploy driver to use - valid choices right now # are ``pxe_ssh``, ``pxe_ipmitool``, ``agent_ssh`` and ``agent_ipmitool``. IRONIC_DEPLOY_DRIVER=${IRONIC_DEPLOY_DRIVER:-pxe_ssh} # TODO(agordeev): replace 'ubuntu' with host distro name getting IRONIC_DEPLOY_FLAVOR=${IRONIC_DEPLOY_FLAVOR:-ubuntu $IRONIC_DEPLOY_ELEMENT} # Support entry points installation of console scripts IRONIC_BIN_DIR=$(get_python_exec_prefix) # Ironic connection info. Note the port must be specified.
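# For illustration only (example address assumed): with
# SERVICE_HOST=192.168.0.10 and the default IRONIC_SERVICE_PORT below,
# IRONIC_HOSTPORT resolves to 192.168.0.10:6385.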
IRONIC_SERVICE_PROTOCOL=${IRONIC_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL} IRONIC_SERVICE_PORT=${IRONIC_SERVICE_PORT:-6385} IRONIC_HOSTPORT=${IRONIC_HOSTPORT:-$SERVICE_HOST:$IRONIC_SERVICE_PORT} # Enable iPXE IRONIC_IPXE_ENABLED=$(trueorfalse False IRONIC_IPXE_ENABLED) IRONIC_HTTP_DIR=${IRONIC_HTTP_DIR:-$IRONIC_DATA_DIR/httpboot} IRONIC_HTTP_SERVER=${IRONIC_HTTP_SERVER:-$HOST_IP} IRONIC_HTTP_PORT=${IRONIC_HTTP_PORT:-8088} # Whether DevStack will be set up for bare metal or VMs IRONIC_IS_HARDWARE=$(trueorfalse False IRONIC_IS_HARDWARE) # The first port in the range to bind the Virtual BMCs. The number of # ports that will be used depends on the $IRONIC_VM_COUNT variable, e.g. if # $IRONIC_VM_COUNT=3 the ports 6230, 6231 and 6232 will be used for the # Virtual BMCs, one for each VM. IRONIC_VBMC_PORT_RANGE_START=${IRONIC_VBMC_PORT_RANGE_START:-6230} IRONIC_VBMC_CONFIG_FILE=${IRONIC_VBMC_CONFIG_FILE:-$HOME/.vbmc/virtualbmc.conf} IRONIC_VBMC_LOGFILE=${IRONIC_VBMC_LOGFILE:-$IRONIC_VM_LOG_DIR/virtualbmc.log} # NOTE(lucasagomes): This flag is used to differentiate the nodes that # use IPA as their deploy ramdisk from nodes that use the agent_* drivers # (which also use IPA but depend on Swift Temp URLs to work). At present, # all drivers that use the iSCSI approach for their deployment support # using either IPA or bash ramdisks for the deployment. In the future we # want to remove the support for the bash ramdisk in favor of IPA; once # we get there this flag can be removed, and all conditionals that use # it should just run by default. IRONIC_DEPLOY_DRIVER_ISCSI_WITH_IPA=$(trueorfalse False IRONIC_DEPLOY_DRIVER_ISCSI_WITH_IPA) # The path to the libvirt hooks directory, used if IRONIC_VM_LOG_ROTATE is True IRONIC_LIBVIRT_HOOKS_PATH=${IRONIC_LIBVIRT_HOOKS_PATH:-/etc/libvirt/hooks/} # The authentication strategy used by ironic-api. Valid values are: # keystone and noauth. IRONIC_AUTH_STRATEGY=${IRONIC_AUTH_STRATEGY:-keystone} # get_pxe_boot_file() - Get the PXE/iPXE boot file path function get_pxe_boot_file { local relpath=syslinux/pxelinux.0 if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then relpath=ipxe/undionly.kpxe fi local pxe_boot_file if is_ubuntu; then pxe_boot_file=/usr/lib/$relpath elif is_fedora || is_suse; then pxe_boot_file=/usr/share/$relpath fi echo $pxe_boot_file } # PXE boot image IRONIC_PXE_BOOT_IMAGE=${IRONIC_PXE_BOOT_IMAGE:-$(get_pxe_boot_file)} # Functions # --------- # Test if any Ironic services are enabled # is_ironic_enabled function is_ironic_enabled { [[ ,${ENABLED_SERVICES} =~ ,"ir-" ]] && return 0 return 1 } function is_deployed_by_agent { [[ -z "${IRONIC_DEPLOY_DRIVER%%agent*}" ]] && return 0 return 1 } function is_deployed_by_ipmitool { [[ -z "${IRONIC_DEPLOY_DRIVER##*_ipmitool}" ]] && return 0 return 1 } function is_deployed_with_ipa_ramdisk { is_deployed_by_agent || [[ "$IRONIC_DEPLOY_DRIVER_ISCSI_WITH_IPA" == "True" ]] && return 0 return 1 } # install_ironic() - Install the things! function install_ironic { # make sure all needed services are enabled local req_services="key" if [[ "$VIRT_DRIVER" == "ironic" ]]; then req_services+=" nova glance neutron" fi for srv in $req_services; do if ! is_service_enabled "$srv"; then die $LINENO "$srv should be enabled for Ironic."
fi done if use_library_from_git "ironic-lib"; then git_clone_by_name "ironic-lib" setup_dev_lib "ironic-lib" fi setup_develop $IRONIC_DIR if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then install_apache_wsgi fi if is_deployed_by_ipmitool && [[ "$IRONIC_IS_HARDWARE" == "False" ]]; then pip_install "virtualbmc" if [[ ! -d $(dirname $IRONIC_VBMC_CONFIG_FILE) ]]; then mkdir -p $(dirname $IRONIC_VBMC_CONFIG_FILE) fi iniset $IRONIC_VBMC_CONFIG_FILE log debug True iniset $IRONIC_VBMC_CONFIG_FILE log logfile $IRONIC_VBMC_LOGFILE fi } # install_ironicclient() - Collect sources and prepare function install_ironicclient { if use_library_from_git "python-ironicclient"; then git_clone_by_name "python-ironicclient" setup_dev_lib "python-ironicclient" sudo install -D -m 0644 -o $STACK_USER {${GITDIR["python-ironicclient"]}/tools/,/etc/bash_completion.d/}ironic.bash_completion else # nothing actually "requires" ironicclient, so force install from PyPI pip_install_gr python-ironicclient fi } # _cleanup_ironic_apache_wsgi() - Remove wsgi files, disable and remove apache vhost file function _cleanup_ironic_apache_wsgi { sudo rm -rf $IRONIC_HTTP_DIR disable_apache_site ironic sudo rm -f $(apache_site_config_for ironic) restart_apache_server } # _config_ironic_apache_wsgi() - Set WSGI config files of Ironic function _config_ironic_apache_wsgi { local ironic_apache_conf ironic_apache_conf=$(apache_site_config_for ironic) sudo cp $IRONIC_DEVSTACK_FILES_DIR/apache-ironic.template $ironic_apache_conf sudo sed -e " s|%PUBLICPORT%|$IRONIC_HTTP_PORT|g; s|%HTTPROOT%|$IRONIC_HTTP_DIR|g; " -i $ironic_apache_conf enable_apache_site ironic } # cleanup_ironic() - Remove residual data files, anything left over from previous # runs that would need to be cleaned up. function cleanup_ironic { sudo rm -rf $IRONIC_AUTH_CACHE_DIR $IRONIC_CONF_DIR sudo rm -rf $IRONIC_VM_LOG_DIR/* } # configure_ironic_dirs() - Create all directories required by Ironic and # associated services. function configure_ironic_dirs { sudo install -d -o $STACK_USER $IRONIC_CONF_DIR $STACK_USER $IRONIC_DATA_DIR \ $IRONIC_STATE_PATH $IRONIC_TFTPBOOT_DIR $IRONIC_TFTPBOOT_DIR/pxelinux.cfg sudo chown -R $STACK_USER:$LIBVIRT_GROUP $IRONIC_TFTPBOOT_DIR if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then sudo install -d -o $STACK_USER -g $LIBVIRT_GROUP $IRONIC_HTTP_DIR fi if [ ! -f $IRONIC_PXE_BOOT_IMAGE ]; then die $LINENO "PXE boot file $IRONIC_PXE_BOOT_IMAGE not found." fi # Copy PXE binary if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then cp $IRONIC_PXE_BOOT_IMAGE $IRONIC_TFTPBOOT_DIR else # Syslinux >= 5.00 pxelinux.0 binary is not "stand-alone" anymore, # it depends on some c32 modules to work correctly. # More info: http://www.syslinux.org/wiki/index.php/Library_modules cp -aR $(dirname $IRONIC_PXE_BOOT_IMAGE)/*.{c32,0} $IRONIC_TFTPBOOT_DIR fi } # configure_ironic() - Set config files, create data dirs, etc function configure_ironic { configure_ironic_dirs # Copy over ironic configuration file and configure common parameters. cp $IRONIC_DIR/etc/ironic/ironic.conf.sample $IRONIC_CONF_FILE iniset $IRONIC_CONF_FILE DEFAULT debug True inicomment $IRONIC_CONF_FILE DEFAULT log_file iniset $IRONIC_CONF_FILE database connection `database_connection_url ironic` iniset $IRONIC_CONF_FILE DEFAULT state_path $IRONIC_STATE_PATH iniset $IRONIC_CONF_FILE DEFAULT use_syslog $SYSLOG # Configure Ironic conductor, if it was enabled. if is_service_enabled ir-cond; then configure_ironic_conductor fi # Configure Ironic API, if it was enabled.
if is_service_enabled ir-api; then configure_ironic_api fi # Format logging if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then setup_colorized_logging $IRONIC_CONF_FILE DEFAULT tenant user fi if [[ "$IRONIC_IPXE_ENABLED" == "True" ]]; then _config_ironic_apache_wsgi fi } # configure_ironic_api() - Is used by configure_ironic(). Performs # API specific configuration. function configure_ironic_api { iniset $IRONIC_CONF_FILE DEFAULT auth_strategy $IRONIC_AUTH_STRATEGY iniset $IRONIC_CONF_FILE oslo_policy policy_file $IRONIC_POLICY_JSON # TODO(Yuki Nishiwaki): This is a temporary work-around until Ironic is fixed (bug #1422632). # This code needs to be changed to use the configure_auth_token_middleware function # after Ironic conforms to the new auth plugin. iniset $IRONIC_CONF_FILE keystone_authtoken identity_uri $KEYSTONE_AUTH_URI iniset $IRONIC_CONF_FILE keystone_authtoken auth_uri $KEYSTONE_SERVICE_URI/v2.0 iniset $IRONIC_CONF_FILE keystone_authtoken admin_user ironic iniset $IRONIC_CONF_FILE keystone_authtoken admin_password $SERVICE_PASSWORD iniset $IRONIC_CONF_FILE keystone_authtoken admin_tenant_name $SERVICE_PROJECT_NAME iniset $IRONIC_CONF_FILE keystone_authtoken cafile $SSL_BUNDLE_FILE iniset $IRONIC_CONF_FILE keystone_authtoken signing_dir $IRONIC_AUTH_CACHE_DIR/api iniset_rpc_backend ironic $IRONIC_CONF_FILE iniset $IRONIC_CONF_FILE api port $IRONIC_SERVICE_PORT cp -p $IRONIC_DIR/etc/ironic/policy.json $IRONIC_POLICY_JSON } # configure_ironic_conductor() - Is used by configure_ironic(). # Sets conductor specific settings. function configure_ironic_conductor { cp $IRONIC_DIR/etc/ironic/rootwrap.conf $IRONIC_ROOTWRAP_CONF cp -r $IRONIC_DIR/etc/ironic/rootwrap.d $IRONIC_CONF_DIR local ironic_rootwrap ironic_rootwrap=$(get_rootwrap_location ironic) local rootwrap_isudoer_cmd="$ironic_rootwrap $IRONIC_CONF_DIR/rootwrap.conf *" # Set up the rootwrap sudoers for ironic local tempfile tempfile=`mktemp` echo "$STACK_USER ALL=(root) NOPASSWD: $rootwrap_isudoer_cmd" >$tempfile chmod 0440 $tempfile sudo chown root:root $tempfile sudo mv $tempfile /etc/sudoers.d/ironic-rootwrap iniset $IRONIC_CONF_FILE DEFAULT rootwrap_config $IRONIC_ROOTWRAP_CONF iniset $IRONIC_CONF_FILE DEFAULT enabled_drivers $IRONIC_ENABLED_DRIVERS iniset $IRONIC_CONF_FILE conductor api_url $IRONIC_SERVICE_PROTOCOL://$HOST_IP:$IRONIC_SERVICE_PORT if [[ -n "$IRONIC_CALLBACK_TIMEOUT" ]]; then iniset $IRONIC_CONF_FILE conductor deploy_callback_timeout $IRONIC_CALLBACK_TIMEOUT fi iniset $IRONIC_CONF_FILE pxe tftp_server $IRONIC_TFTPSERVER_IP iniset $IRONIC_CONF_FILE pxe tftp_root $IRONIC_TFTPBOOT_DIR iniset $IRONIC_CONF_FILE pxe tftp_master_path $IRONIC_TFTPBOOT_DIR/master_images local pxe_params="nofb nomodeset vga=normal console=ttyS0" if is_deployed_with_ipa_ramdisk; then pxe_params+=" systemd.journald.forward_to_console=yes ipa-debug=1" fi # When booting with less than 1GB, we need to switch from default tmpfs # to ramfs for ramdisks to decompress successfully. if ([[ "$IRONIC_IS_HARDWARE" == "True" ]] && [[ "$IRONIC_HW_NODE_RAM" -lt 1024 ]]) || ([[ "$IRONIC_IS_HARDWARE" == "False" ]] && [[ "$IRONIC_VM_SPECS_RAM" -lt 1024 ]]); then pxe_params+=" rootfstype=ramfs" fi if [[ -n "$pxe_params" ]]; then iniset $IRONIC_CONF_FILE pxe pxe_append_params "$pxe_params" fi # Set these options for scenarios in which the agent fetches the image # directly from glance, and don't set them where the image is pushed # over iSCSI.
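# For illustration (hedged, placeholder values): with an agent_* driver the
# block below ends up writing ironic.conf entries along the lines of:
#   [glance] swift_endpoint_url = http://<HOST_IP>:8080
#   [glance] swift_account = AUTH_<service tenant id>
#   [glance] swift_container = glance
#   [glance] swift_temp_url_duration = 3600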
if is_deployed_by_agent; then if [[ "$SWIFT_ENABLE_TEMPURLS" == "True" ]] ; then iniset $IRONIC_CONF_FILE glance swift_temp_url_key $SWIFT_TEMPURL_KEY else die $LINENO "SWIFT_ENABLE_TEMPURLS must be True to use agent_* driver in Ironic." fi iniset $IRONIC_CONF_FILE glance swift_endpoint_url http://${HOST_IP}:${SWIFT_DEFAULT_BIND_PORT:-8080} iniset $IRONIC_CONF_FILE glance swift_api_version v1 local tenant_id tenant_id=$(get_or_create_project $SERVICE_PROJECT_NAME default) iniset $IRONIC_CONF_FILE glance swift_account AUTH_${tenant_id} iniset $IRONIC_CONF_FILE glance swift_container glance iniset $IRONIC_CONF_FILE glance swift_temp_url_duration 3600 iniset $IRONIC_CONF_FILE agent heartbeat_timeout 30 fi # FIXME: this really needs to be tested in the gate. For now, any # test using the agent ramdisk should skip the erase_devices clean # step because it is too slow to run in the gate. iniset $IRONIC_CONF_FILE deploy erase_devices_priority 0 if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then local pxebin pxebin=`basename $IRONIC_PXE_BOOT_IMAGE` iniset $IRONIC_CONF_FILE pxe ipxe_enabled True iniset $IRONIC_CONF_FILE pxe pxe_config_template '\$pybasedir/drivers/modules/ipxe_config.template' iniset $IRONIC_CONF_FILE pxe pxe_bootfile_name $pxebin iniset $IRONIC_CONF_FILE deploy http_root $IRONIC_HTTP_DIR iniset $IRONIC_CONF_FILE deploy http_url "http://$IRONIC_HTTP_SERVER:$IRONIC_HTTP_PORT" fi } # create_ironic_cache_dir() - Part of the init_ironic() process function create_ironic_cache_dir { # Create cache dir sudo mkdir -p $IRONIC_AUTH_CACHE_DIR/api sudo chown $STACK_USER $IRONIC_AUTH_CACHE_DIR/api rm -f $IRONIC_AUTH_CACHE_DIR/api/* sudo mkdir -p $IRONIC_AUTH_CACHE_DIR/registry sudo chown $STACK_USER $IRONIC_AUTH_CACHE_DIR/registry rm -f $IRONIC_AUTH_CACHE_DIR/registry/* } # create_ironic_accounts() - Set up common required ironic accounts # Tenant User Roles # ------------------------------------------------------------------ # service ironic admin # if enabled function create_ironic_accounts { # Ironic if [[ "$ENABLED_SERVICES" =~ "ir-api" ]]; then # Get the ironic user if it exists # NOTE(Shrews): This user MUST have admin level privileges! create_service_user "ironic" "admin" get_or_create_service "ironic" "baremetal" "Ironic baremetal provisioning service" get_or_create_endpoint "baremetal" \ "$REGION_NAME" \ "$IRONIC_SERVICE_PROTOCOL://$IRONIC_HOSTPORT" \ "$IRONIC_SERVICE_PROTOCOL://$IRONIC_HOSTPORT" \ "$IRONIC_SERVICE_PROTOCOL://$IRONIC_HOSTPORT" fi } # init_ironic() - Initialize databases, etc. function init_ironic { if is_service_enabled neutron; then # Save private network as cleaning network local cleaning_network_uuid cleaning_network_uuid=$(neutron net-list | grep private | get_field 1) die_if_not_set $LINENO cleaning_network_uuid "Failed to get ironic cleaning network id" iniset $IRONIC_CONF_FILE neutron cleaning_network_uuid ${cleaning_network_uuid} fi # (Re)create ironic database recreate_database ironic # Migrate ironic database $IRONIC_BIN_DIR/ironic-dbsync --config-file=$IRONIC_CONF_FILE create_ironic_cache_dir } # _ironic_bm_vm_names() - Generates list of names for baremetal VMs. function _ironic_bm_vm_names { local idx local num_vms num_vms=$(($IRONIC_VM_COUNT - 1)) for idx in $(seq 0 $num_vms); do echo "baremetal${IRONIC_VM_NETWORK_BRIDGE}_${idx}" done } # start_ironic() - Start running processes, including screen function start_ironic { # Start Ironic API server, if enabled. if is_service_enabled ir-api; then start_ironic_api fi # Start Ironic conductor, if enabled.
if is_service_enabled ir-cond; then start_ironic_conductor fi # Start Apache if iPXE is enabled if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then restart_apache_server fi } # start_ironic_api() - Used by start_ironic(). # Starts Ironic API server. function start_ironic_api { run_process ir-api "$IRONIC_BIN_DIR/ironic-api --config-file=$IRONIC_CONF_FILE" echo "Waiting for ir-api ($IRONIC_HOSTPORT) to start..." if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget --no-proxy -q -O- $IRONIC_SERVICE_PROTOCOL://$IRONIC_HOSTPORT; do sleep 1; done"; then die $LINENO "ir-api did not start" fi } # start_ironic_conductor() - Used by start_ironic(). # Starts Ironic conductor. function start_ironic_conductor { run_process ir-cond "$IRONIC_BIN_DIR/ironic-conductor --config-file=$IRONIC_CONF_FILE" # TODO(romcheg): Find a way to check whether the conductor has started. } # stop_ironic() - Stop running processes function stop_ironic { stop_process ir-api stop_process ir-cond # Cleanup the WSGI files if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then _cleanup_ironic_apache_wsgi fi # Remove the hook to disable log rotate sudo rm -rf $IRONIC_LIBVIRT_HOOKS_PATH/qemu } function create_ovs_taps { local ironic_net_id ironic_net_id=$(neutron net-list | grep private | get_field 1) die_if_not_set $LINENO ironic_net_id "Failed to get ironic network id" # Workaround: No netns exists on the host until a Neutron port is created. We # need to create one in Neutron to know what netns to tap into prior to the # first node booting. local port_id port_id=$(neutron port-create private | grep " id " | get_field 2) die_if_not_set $LINENO port_id "Failed to create neutron port" # intentional sleep to make sure the tag has been set on the port sleep 10 local tapdev tapdev=$(sudo ip netns exec qdhcp-${ironic_net_id} ip link list | grep " tap" | cut -d':' -f2 | cut -d'@' -f1 | cut -b2-) die_if_not_set $LINENO tapdev "Failed to get tap device id" local tag_id tag_id=$(sudo ovs-vsctl show |grep ${tapdev} -A1 -m1 | grep tag | cut -d':' -f2 | cut -b2-) die_if_not_set $LINENO tag_id "Failed to get tag id" # make sure the veth pair does not exist; otherwise delete its links sudo ip link show ovs-tap1 && sudo ip link delete ovs-tap1 sudo ip link show brbm-tap1 && sudo ip link delete brbm-tap1 # create veth pair for future interconnection between br-int and brbm sudo ip link add brbm-tap1 type veth peer name ovs-tap1 sudo ip link set dev brbm-tap1 up sudo ip link set dev ovs-tap1 up sudo ovs-vsctl -- --if-exists del-port ovs-tap1 -- add-port br-int ovs-tap1 tag=$tag_id sudo ovs-vsctl -- --if-exists del-port brbm-tap1 -- add-port $IRONIC_VM_NETWORK_BRIDGE brbm-tap1 # Remove the port that was needed only for the workaround. neutron port-delete $port_id # Finally, share the fixed tenant network across all tenants. This allows the host # to serve TFTP to a single network namespace via the tap device created above.
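# (Hedged aside: with the newer OpenStackClient, the equivalent of the call
# below would be "openstack network set --share <net-id>"; this script uses
# the neutron CLI throughout.)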
neutron net-update $ironic_net_id --shared true } function setup_qemu_log_hook { local libvirt_service_name # Make sure the libvirt hooks directory exists sudo mkdir -p $IRONIC_LIBVIRT_HOOKS_PATH # Copy the qemu hook to the right directory sudo cp $IRONIC_DEVSTACK_FILES_DIR/hooks/qemu $IRONIC_LIBVIRT_HOOKS_PATH/qemu sudo chmod -v +x $IRONIC_LIBVIRT_HOOKS_PATH/qemu sudo sed -e " s|%LOG_DIR%|$IRONIC_VM_LOG_DIR|g; " -i $IRONIC_LIBVIRT_HOOKS_PATH/qemu # Restart the libvirt daemon libvirt_service_name="libvirt-bin" if is_fedora; then libvirt_service_name="libvirtd" fi restart_service $libvirt_service_name } function create_bridge_and_vms { # Call libvirt setup scripts in a new shell to ensure any new group membership is applied sudo su $STACK_USER -c "$IRONIC_SCRIPTS_DIR/setup-network.sh" if [[ "$IRONIC_VM_LOG_CONSOLE" == "True" ]] ; then local log_arg="$IRONIC_VM_LOG_DIR" if [[ "$IRONIC_VM_LOG_ROTATE" == "True" ]] ; then setup_qemu_log_hook fi else local log_arg="" fi local vbmc_port=$IRONIC_VBMC_PORT_RANGE_START local vm_name for vm_name in $(_ironic_bm_vm_names); do sudo su $STACK_USER -c "$IRONIC_SCRIPTS_DIR/create-node.sh $vm_name \ $IRONIC_VM_SPECS_CPU $IRONIC_VM_SPECS_RAM $IRONIC_VM_SPECS_DISK \ amd64 $IRONIC_VM_NETWORK_BRIDGE $IRONIC_VM_EMULATOR \ $vbmc_port $log_arg" >> $IRONIC_VM_MACS_CSV_FILE vbmc_port=$((vbmc_port+1)) done create_ovs_taps } function wait_for_nova_resources { # After nodes have been enrolled, we need to wait for both ironic and # nova's periodic tasks to populate the resource tracker with available # nodes and resources. Wait up to 2 minutes for a given resource before # timing out. local resource=$1 local expected_count=$2 local i echo_summary "Waiting 2 minutes for Nova resource tracker to pick up $resource >= $expected_count" for i in $(seq 1 120); do if [ $(nova hypervisor-stats | grep " $resource " | get_field 2) -ge $expected_count ]; then return 0 fi sleep 1 done die $LINENO "Timed out waiting for Nova hypervisor-stats $resource >= $expected_count" } function _clean_ncpu_failure { SCREEN_NAME=${SCREEN_NAME:-stack} SERVICE_DIR=${SERVICE_DIR:-${DEST}/status} n_cpu_failure="$SERVICE_DIR/$SCREEN_NAME/n-cpu.failure" if [ -f ${n_cpu_failure} ]; then mv ${n_cpu_failure} "${n_cpu_failure}.before-restart-by-ironic" fi } function enroll_nodes { local chassis_id chassis_id=$(ironic chassis-create -d "ironic test chassis" | grep " uuid " | get_field 2) die_if_not_set $LINENO chassis_id "Failed to create chassis" if [[ "$IRONIC_IS_HARDWARE" == "False" ]]; then local ironic_node_cpu=$IRONIC_VM_SPECS_CPU local ironic_node_ram=$IRONIC_VM_SPECS_RAM local ironic_node_disk=$IRONIC_VM_SPECS_DISK local ironic_ephemeral_disk=$IRONIC_VM_EPHEMERAL_DISK local ironic_node_arch=x86_64 local ironic_hwinfo_file=$IRONIC_VM_MACS_CSV_FILE if is_deployed_by_ipmitool; then local node_options="\ -i ipmi_address=127.0.0.1 \ -i ipmi_username=admin \ -i ipmi_password=password" else local node_options="\ -i ssh_virt_type=$IRONIC_SSH_VIRT_TYPE \ -i ssh_address=$IRONIC_VM_SSH_ADDRESS \ -i ssh_port=$IRONIC_VM_SSH_PORT \ -i ssh_username=$IRONIC_SSH_USERNAME \ -i ssh_key_filename=$IRONIC_KEY_FILE" fi node_options="\ $node_options \ -i deploy_kernel=$IRONIC_DEPLOY_KERNEL_ID \ -i deploy_ramdisk=$IRONIC_DEPLOY_RAMDISK_ID" else local ironic_node_cpu=$IRONIC_HW_NODE_CPU local ironic_node_ram=$IRONIC_HW_NODE_RAM local ironic_node_disk=$IRONIC_HW_NODE_DISK local ironic_ephemeral_disk=$IRONIC_HW_EPHEMERAL_DISK local ironic_node_arch=$IRONIC_HW_ARCH if [[ -z "${IRONIC_DEPLOY_DRIVER##*_ipmitool}" ]]; then local
ironic_hwinfo_file=$IRONIC_IPMIINFO_FILE fi fi local total_nodes=0 local total_cpus=0 while read hardware_info; do if [[ "$IRONIC_IS_HARDWARE" == "False" ]]; then local mac_address mac_address=$(echo $hardware_info | awk '{print $1}') if is_deployed_by_ipmitool; then local vbmc_port vbmc_port=$(echo $hardware_info | awk '{print $2}') node_options+=" -i ipmi_port=$vbmc_port" fi elif is_deployed_by_ipmitool; then local ipmi_address ipmi_address=$(echo $hardware_info |awk '{print $1}') local mac_address mac_address=$(echo $hardware_info |awk '{print $2}') local ironic_ipmi_username ironic_ipmi_username=$(echo $hardware_info |awk '{print $3}') local ironic_ipmi_passwd ironic_ipmi_passwd=$(echo $hardware_info |awk '{print $4}') # Currently we require all hardware platforms to have the same CPU/RAM/DISK info; # in the future, this can be enhanced to support different types, and then # we create the bare metal flavor with the minimum values local node_options="-i ipmi_address=$ipmi_address -i ipmi_password=$ironic_ipmi_passwd\ -i ipmi_username=$ironic_ipmi_username" node_options+=" -i deploy_kernel=$IRONIC_DEPLOY_KERNEL_ID" node_options+=" -i deploy_ramdisk=$IRONIC_DEPLOY_RAMDISK_ID" fi # The first node created will be used for testing in the ironic w/o glance # scenario, so we need to know its UUID. local standalone_node_uuid="" if [ $total_nodes -eq 0 ]; then standalone_node_uuid="--uuid $IRONIC_NODE_UUID" fi local node_id node_id=$(ironic node-create $standalone_node_uuid\ --chassis_uuid $chassis_id \ --driver $IRONIC_DEPLOY_DRIVER \ --name node-$total_nodes \ -p cpus=$ironic_node_cpu\ -p memory_mb=$ironic_node_ram\ -p local_gb=$ironic_node_disk\ -p cpu_arch=$ironic_node_arch \ $node_options \ | grep " uuid " | get_field 2) ironic port-create --address $mac_address --node $node_id total_nodes=$((total_nodes+1)) total_cpus=$((total_cpus+$ironic_node_cpu)) done < $ironic_hwinfo_file local adjusted_disk adjusted_disk=$(($ironic_node_disk - $ironic_ephemeral_disk)) nova flavor-create --ephemeral $ironic_ephemeral_disk baremetal auto $ironic_node_ram $adjusted_disk $ironic_node_cpu nova flavor-key baremetal set "cpu_arch"="$ironic_node_arch" if [ "$VIRT_DRIVER" == "ironic" ]; then # NOTE(dtantsur): sometimes nova compute fails to start with ironic due # to keystone restarting and not being able to authenticate us.
# Restart it just to be sure (and avoid gate problems like bug 1537076) stop_nova_compute || /bin/true # NOTE(pas-ha) if nova compute failed before the restart, the .failure file # that was created will fail the service_check at the end of the deployment _clean_ncpu_failure start_nova_compute wait_for_nova_resources "count" $total_nodes wait_for_nova_resources "vcpus" $total_cpus fi } function configure_iptables { # enable tftp NATing to allow connections to HOST_IP's tftp server sudo modprobe nf_conntrack_tftp sudo modprobe nf_nat_tftp # explicitly allow DHCP - packets are occasionally being dropped here sudo iptables -I INPUT -p udp --dport 67:68 --sport 67:68 -j ACCEPT || true # nodes boot from TFTP and call back to the API server listening on $HOST_IP sudo iptables -I INPUT -d $HOST_IP -p udp --dport 69 -j ACCEPT || true sudo iptables -I INPUT -d $HOST_IP -p tcp --dport $IRONIC_SERVICE_PORT -j ACCEPT || true if is_deployed_by_agent; then # agent ramdisk gets instance image from swift sudo iptables -I INPUT -d $HOST_IP -p tcp --dport ${SWIFT_DEFAULT_BIND_PORT:-8080} -j ACCEPT || true fi if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then sudo iptables -I INPUT -d $HOST_IP -p tcp --dport $IRONIC_HTTP_PORT -j ACCEPT || true fi } function configure_tftpd { # stop tftpd and set up serving via xinetd stop_service tftpd-hpa || true [ -f /etc/init/tftpd-hpa.conf ] && echo "manual" | sudo tee /etc/init/tftpd-hpa.override sudo cp $IRONIC_TEMPLATES_DIR/tftpd-xinetd.template /etc/xinetd.d/tftp sudo sed -e "s|%TFTPBOOT_DIR%|$IRONIC_TFTPBOOT_DIR|g" -i /etc/xinetd.d/tftp # set up tftp file mapping to satisfy requests at the root (booting) and # /tftpboot/ sub-dir (as per deploy-ironic elements) echo "r ^([^/]) $IRONIC_TFTPBOOT_DIR/\1" >$IRONIC_TFTPBOOT_DIR/map-file echo "r ^(/tftpboot/) $IRONIC_TFTPBOOT_DIR/\2" >>$IRONIC_TFTPBOOT_DIR/map-file chmod -R 0755 $IRONIC_TFTPBOOT_DIR restart_service xinetd } function configure_ironic_ssh_keypair { if [[ ! -d $HOME/.ssh ]]; then mkdir -p $HOME/.ssh chmod 700 $HOME/.ssh fi if [[ ! -e $IRONIC_KEY_FILE ]]; then if [[ ! -d $(dirname $IRONIC_KEY_FILE) ]]; then mkdir -p $(dirname $IRONIC_KEY_FILE) fi echo -e 'n\n' | ssh-keygen -q -t rsa -P '' -f $IRONIC_KEY_FILE fi cat $IRONIC_KEY_FILE.pub | tee -a $IRONIC_AUTHORIZED_KEYS_FILE } function ironic_ssh_check { local key_file=$1 local floating_ip=$2 local port=$3 local default_instance_user=$4 local active_timeout=$5 if ! timeout $active_timeout sh -c "while ! ssh -p $port -o StrictHostKeyChecking=no -i $key_file ${default_instance_user}@$floating_ip echo success; do sleep 1; done"; then die $LINENO "server didn't become ssh-able!"
fi } function configure_ironic_auxiliary { configure_ironic_ssh_keypair ironic_ssh_check $IRONIC_KEY_FILE $IRONIC_VM_SSH_ADDRESS $IRONIC_VM_SSH_PORT $IRONIC_SSH_USERNAME $IRONIC_SSH_TIMEOUT } function build_ipa_coreos_ramdisk { echo "Building ironic-python-agent deploy ramdisk" local kernel_path=$1 local ramdisk_path=$2 # on fedora services do not start by default restart_service docker git_clone $IRONIC_PYTHON_AGENT_REPO $IRONIC_PYTHON_AGENT_DIR $IRONIC_PYTHON_AGENT_BRANCH cd $IRONIC_PYTHON_AGENT_DIR imagebuild/coreos/build_coreos_image.sh cp imagebuild/coreos/UPLOAD/coreos_production_pxe_image-oem.cpio.gz $ramdisk_path cp imagebuild/coreos/UPLOAD/coreos_production_pxe.vmlinuz $kernel_path sudo rm -rf UPLOAD cd - } function build_tinyipa_ramdisk { echo "Building ironic-python-agent deploy ramdisk" local kernel_path=$1 local ramdisk_path=$2 git_clone $IRONIC_PYTHON_AGENT_REPO $IRONIC_PYTHON_AGENT_DIR $IRONIC_PYTHON_AGENT_BRANCH cd $IRONIC_PYTHON_AGENT_DIR/imagebuild/tinyipa export BUILD_AND_INSTALL_TINYIPA=true make cp tinyipa.gz $ramdisk_path cp tinyipa.vmlinuz $kernel_path make clean cd - } # install_diskimage_builder() - Collect source and prepare or install from pip function install_diskimage_builder { if use_library_from_git "diskimage-builder"; then git_clone_by_name "diskimage-builder" setup_dev_lib "diskimage-builder" else pip_install_gr "diskimage-builder" fi } # build deploy kernel+ramdisk, then upload them to glance # this function sets ``IRONIC_DEPLOY_KERNEL_ID``, ``IRONIC_DEPLOY_RAMDISK_ID`` function upload_baremetal_ironic_deploy { declare -g IRONIC_DEPLOY_KERNEL_ID IRONIC_DEPLOY_RAMDISK_ID echo_summary "Creating and uploading baremetal images for ironic" # install diskimage-builder if [[ $(type -P ramdisk-image-create) == "" ]]; then install_diskimage_builder fi if [ -z "$IRONIC_DEPLOY_KERNEL" -o -z "$IRONIC_DEPLOY_RAMDISK" ]; then local IRONIC_DEPLOY_KERNEL_PATH=$TOP_DIR/files/ir-deploy-$IRONIC_DEPLOY_DRIVER.kernel local IRONIC_DEPLOY_RAMDISK_PATH=$TOP_DIR/files/ir-deploy-$IRONIC_DEPLOY_DRIVER.initramfs else local IRONIC_DEPLOY_KERNEL_PATH=$IRONIC_DEPLOY_KERNEL local IRONIC_DEPLOY_RAMDISK_PATH=$IRONIC_DEPLOY_RAMDISK fi if [ ! -e "$IRONIC_DEPLOY_RAMDISK_PATH" -o ! -e "$IRONIC_DEPLOY_KERNEL_PATH" ]; then # files don't exist, need to build them if [ "$IRONIC_BUILD_DEPLOY_RAMDISK" = "True" ]; then # we can build them only if we're not offline if [ "$OFFLINE" != "True" ]; then if is_deployed_with_ipa_ramdisk; then if [ "$IRONIC_RAMDISK_TYPE" == "coreos" ]; then build_ipa_coreos_ramdisk $IRONIC_DEPLOY_KERNEL_PATH $IRONIC_DEPLOY_RAMDISK_PATH elif [ "$IRONIC_RAMDISK_TYPE" == "tinyipa" ]; then build_tinyipa_ramdisk $IRONIC_DEPLOY_KERNEL_PATH $IRONIC_DEPLOY_RAMDISK_PATH else die $LINENO "Unrecognised IRONIC_RAMDISK_TYPE: $IRONIC_RAMDISK_TYPE. 
Expected 'coreos' or 'tinyipa'" fi else ramdisk-image-create $IRONIC_DEPLOY_FLAVOR \ -o $TOP_DIR/files/ir-deploy-$IRONIC_DEPLOY_DRIVER fi else die $LINENO "Deploy kernel+ramdisk files don't exist and cannot be built in OFFLINE mode" fi else if is_deployed_with_ipa_ramdisk; then # download the agent image tarball wget "$IRONIC_AGENT_KERNEL_URL" -O $IRONIC_DEPLOY_KERNEL_PATH wget "$IRONIC_AGENT_RAMDISK_URL" -O $IRONIC_DEPLOY_RAMDISK_PATH else die $LINENO "Deploy kernel+ramdisk files don't exist and building them was explicitly disabled by IRONIC_BUILD_DEPLOY_RAMDISK" fi fi fi # load them into glance IRONIC_DEPLOY_KERNEL_ID=$(openstack \ image create \ $(basename $IRONIC_DEPLOY_KERNEL_PATH) \ --public --disk-format=aki \ --container-format=aki \ < $IRONIC_DEPLOY_KERNEL_PATH | grep ' id ' | get_field 2) die_if_not_set $LINENO IRONIC_DEPLOY_KERNEL_ID "Failed to load kernel image into glance" IRONIC_DEPLOY_RAMDISK_ID=$(openstack \ image create \ $(basename $IRONIC_DEPLOY_RAMDISK_PATH) \ --public --disk-format=ari \ --container-format=ari \ < $IRONIC_DEPLOY_RAMDISK_PATH | grep ' id ' | get_field 2) die_if_not_set $LINENO IRONIC_DEPLOY_RAMDISK_ID "Failed to load ramdisk image into glance" } function prepare_baremetal_basic_ops { if [[ "$IRONIC_BAREMETAL_BASIC_OPS" != "True" ]]; then return 0 fi if [[ "$IRONIC_IS_HARDWARE" == "False" ]]; then configure_ironic_auxiliary fi upload_baremetal_ironic_deploy if [[ "$IRONIC_IS_HARDWARE" == "False" ]]; then create_bridge_and_vms fi enroll_nodes configure_tftpd configure_iptables } function cleanup_baremetal_basic_ops { if [[ "$IRONIC_BAREMETAL_BASIC_OPS" != "True" ]]; then return 0 fi rm -f $IRONIC_VM_MACS_CSV_FILE if [ -f $IRONIC_KEY_FILE ]; then local key key=$(cat $IRONIC_KEY_FILE.pub) # remove public key from authorized_keys grep -v "$key" $IRONIC_AUTHORIZED_KEYS_FILE > temp && mv temp $IRONIC_AUTHORIZED_KEYS_FILE chmod 0600 $IRONIC_AUTHORIZED_KEYS_FILE fi sudo rm -rf $IRONIC_DATA_DIR $IRONIC_STATE_PATH local vm_name for vm_name in $(_ironic_bm_vm_names); do sudo su $STACK_USER -c "$IRONIC_SCRIPTS_DIR/cleanup-node.sh $vm_name $IRONIC_VM_NETWORK_BRIDGE" done sudo rm -rf /etc/xinetd.d/tftp /etc/init/tftpd-hpa.override restart_service xinetd sudo iptables -D INPUT -d $HOST_IP -p udp --dport 69 -j ACCEPT || true sudo iptables -D INPUT -d $HOST_IP -p tcp --dport $IRONIC_SERVICE_PORT -j ACCEPT || true if is_deployed_by_agent; then # agent ramdisk gets instance image from swift sudo iptables -D INPUT -d $HOST_IP -p tcp --dport ${SWIFT_DEFAULT_BIND_PORT:-8080} -j ACCEPT || true fi sudo rmmod nf_conntrack_tftp || true sudo rmmod nf_nat_tftp || true } # Restore xtrace + pipefail $_XTRACE_IRONIC $_PIPEFAIL_IRONIC # Tell emacs to use shell-script-mode ## Local variables: ## mode: shell-script ## End: ironic-5.1.0/devstack/tools/0000775000567000056710000000000012674513633017117 5ustar jenkinsjenkins00000000000000ironic-5.1.0/devstack/tools/ironic/0000775000567000056710000000000012674513633020402 5ustar jenkinsjenkins00000000000000ironic-5.1.0/devstack/tools/ironic/scripts/0000775000567000056710000000000012674513633022071 5ustar jenkinsjenkins00000000000000ironic-5.1.0/devstack/tools/ironic/scripts/create-node.sh0000775000567000056710000000502412674513466024623 0ustar jenkinsjenkins00000000000000#!/usr/bin/env bash # **create-nodes** # Creates baremetal poseur nodes for ironic testing purposes set -ex # Keep track of the DevStack directory TOP_DIR=$(cd $(dirname "$0")/..
&& pwd) NAME=$1 CPU=$2 MEM=$(( 1024 * $3 )) # Extra G to allow fuzz for partition table: flavor size and registered size # need to be different from the actual size. DISK=$(( $4 + 1)) case $5 in i386) ARCH='i686' ;; amd64) ARCH='x86_64' ;; *) echo "Unsupported arch $5!" ; exit 1 ;; esac BRIDGE=$6 EMULATOR=$7 VBMC_PORT=$8 LOGDIR=$9 LIBVIRT_NIC_DRIVER=${LIBVIRT_NIC_DRIVER:-"virtio"} LIBVIRT_STORAGE_POOL=${LIBVIRT_STORAGE_POOL:-"default"} LIBVIRT_CONNECT_URI=${LIBVIRT_CONNECT_URI:-"qemu:///system"} export VIRSH_DEFAULT_CONNECT_URI=$LIBVIRT_CONNECT_URI if ! virsh pool-list --all | grep -q $LIBVIRT_STORAGE_POOL; then virsh pool-define-as --name $LIBVIRT_STORAGE_POOL dir --target /var/lib/libvirt/images >&2 virsh pool-autostart $LIBVIRT_STORAGE_POOL >&2 virsh pool-start $LIBVIRT_STORAGE_POOL >&2 fi pool_state=$(virsh pool-info $LIBVIRT_STORAGE_POOL | grep State | awk '{ print $2 }') if [ "$pool_state" != "running" ] ; then [ ! -d /var/lib/libvirt/images ] && sudo mkdir /var/lib/libvirt/images virsh pool-start $LIBVIRT_STORAGE_POOL >&2 fi if [ -n "$LOGDIR" ] ; then mkdir -p "$LOGDIR" fi PREALLOC= if [ -f /etc/debian_version ]; then PREALLOC="--prealloc-metadata" fi if [ -n "$LOGDIR" ] ; then VM_LOGGING="--console-log $LOGDIR/${NAME}_console.log" else VM_LOGGING="" fi VOL_NAME="${NAME}.qcow2" if ! virsh list --all | grep -q $NAME; then virsh vol-list --pool $LIBVIRT_STORAGE_POOL | grep -q $VOL_NAME && virsh vol-delete $VOL_NAME --pool $LIBVIRT_STORAGE_POOL >&2 virsh vol-create-as $LIBVIRT_STORAGE_POOL ${VOL_NAME} ${DISK}G --format qcow2 $PREALLOC >&2 volume_path=$(virsh vol-path --pool $LIBVIRT_STORAGE_POOL $VOL_NAME) # Pre-touch the VM to set +C, as it can only be set on empty files. sudo touch "$volume_path" sudo chattr +C "$volume_path" || true $TOP_DIR/scripts/configure-vm.py \ --bootdev network --name $NAME --image "$volume_path" \ --arch $ARCH --cpus $CPU --memory $MEM --libvirt-nic-driver $LIBVIRT_NIC_DRIVER \ --emulator $EMULATOR --network $BRIDGE $VM_LOGGING >&2 # Create a Virtual BMC for the node if IPMI is used if [[ $(type -P vbmc) != "" ]]; then vbmc add $NAME --port $VBMC_PORT vbmc start $NAME fi fi # echo mac VM_MAC=$(virsh dumpxml $NAME | grep "mac address" | head -1 | cut -d\' -f2) echo $VM_MAC $VBMC_PORT ironic-5.1.0/devstack/tools/ironic/scripts/setup-network.sh0000775000567000056710000000164612674513466025262 0ustar jenkinsjenkins00000000000000#!/usr/bin/env bash # **setup-network** # Sets up an openvswitch libvirt network suitable for # running baremetal poseur nodes for ironic testing purposes set -exu LIBVIRT_CONNECT_URI=${LIBVIRT_CONNECT_URI:-"qemu:///system"} # Keep track of the DevStack directory TOP_DIR=$(cd $(dirname "$0")/.. && pwd) BRIDGE_SUFFIX=${1:-''} BRIDGE_NAME=brbm$BRIDGE_SUFFIX export VIRSH_DEFAULT_CONNECT_URI="$LIBVIRT_CONNECT_URI" # Only add bridge if missing. Bring it UP. (sudo ovs-vsctl list-br | grep ${BRIDGE_NAME}$) || sudo ovs-vsctl add-br ${BRIDGE_NAME} sudo ip link set dev ${BRIDGE_NAME} up # Remove bridge before replacing it.
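# (Hedged note: "virsh net-destroy" stops a running libvirt network and
# "virsh net-undefine" drops its persistent definition, so the
# define/autostart/start sequence below can be re-run cleanly.)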
(virsh net-list | grep "${BRIDGE_NAME} ") && virsh net-destroy ${BRIDGE_NAME} (virsh net-list --inactive | grep "${BRIDGE_NAME} ") && virsh net-undefine ${BRIDGE_NAME} virsh net-define <(sed s/brbm/$BRIDGE_NAME/ $TOP_DIR/templates/brbm.xml) virsh net-autostart ${BRIDGE_NAME} virsh net-start ${BRIDGE_NAME} ironic-5.1.0/devstack/tools/ironic/scripts/configure-vm.py0000775000567000056710000000746512674513466025067 0ustar jenkinsjenkins00000000000000#!/usr/bin/env python # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import argparse import os.path import libvirt templatedir = os.path.join(os.path.dirname(os.path.dirname(__file__)), 'templates') CONSOLE_LOG = """ """ CONSOLE_PTY = """ """ def main(): parser = argparse.ArgumentParser( description="Configure a kvm virtual machine for the seed image.") parser.add_argument('--name', default='seed', help='the name to give the machine in libvirt.') parser.add_argument('--image', help='Use a custom image file (must be qcow2).') parser.add_argument('--engine', default='qemu', help='The virtualization engine to use') parser.add_argument('--arch', default='i686', help='The architecture to use') parser.add_argument('--memory', default='2097152', help="Maximum memory for the VM in KB.") parser.add_argument('--cpus', default='1', help="CPU count for the VM.") parser.add_argument('--bootdev', default='hd', help="What boot device to use (hd/network).") parser.add_argument('--network', default="brbm", help='The libvirt network name to use') parser.add_argument('--libvirt-nic-driver', default='virtio', help='The libvirt network driver to use') parser.add_argument('--console-log', help='File to log console') parser.add_argument('--emulator', default=None, help='Path to emulator bin for vm template') args = parser.parse_args() with file(templatedir + '/vm.xml', 'rb') as f: source_template = f.read() params = { 'name': args.name, 'imagefile': args.image, 'engine': args.engine, 'arch': args.arch, 'memory': args.memory, 'cpus': args.cpus, 'bootdev': args.bootdev, 'network': args.network, 'nicdriver': args.libvirt_nic_driver, 'emulator': args.emulator, } if args.emulator: params['emulator'] = args.emulator else: if os.path.exists("/usr/bin/kvm"): # Debian params['emulator'] = "/usr/bin/kvm" elif os.path.exists("/usr/bin/qemu-kvm"): # Redhat params['emulator'] = "/usr/bin/qemu-kvm" if args.console_log: params['console'] = CONSOLE_LOG % {'console_log': args.console_log} else: params['console'] = CONSOLE_PTY libvirt_template = source_template % params conn = libvirt.open("qemu:///system") a = conn.defineXML(libvirt_template) print ("Created machine %s with UUID %s" % (args.name, a.UUIDString())) if __name__ == '__main__': main() ironic-5.1.0/devstack/tools/ironic/scripts/cleanup-node.sh0000775000567000056710000000177612674513466025021 0ustar jenkinsjenkins00000000000000#!/usr/bin/env bash # **cleanup-nodes** # Cleans up baremetal poseur nodes and volumes created during ironic setup # Assumes calling user has proper libvirt group membership and access. 
set -exu LIBVIRT_STORAGE_POOL=${LIBVIRT_STORAGE_POOL:-"default"} LIBVIRT_CONNECT_URI=${LIBVIRT_CONNECT_URI:-"qemu:///system"} NAME=$1 NETWORK_BRIDGE=$2 export VIRSH_DEFAULT_CONNECT_URI=$LIBVIRT_CONNECT_URI VOL_NAME="$NAME.qcow2" virsh list | grep -q $NAME && virsh destroy $NAME virsh list --inactive | grep -q $NAME && virsh undefine $NAME # Delete the Virtual BMC if [[ $(type -P vbmc) != "" ]]; then vbmc list | grep -a $NAME && vbmc delete $NAME fi if virsh pool-list | grep -q $LIBVIRT_STORAGE_POOL ; then virsh vol-list $LIBVIRT_STORAGE_POOL | grep -q $VOL_NAME && virsh vol-delete $VOL_NAME --pool $LIBVIRT_STORAGE_POOL fi sudo brctl delif br-$NAME ovs-$NAME || true sudo ovs-vsctl del-port $NETWORK_BRIDGE ovs-$NAME || true sudo ip link set dev br-$NAME down || true sudo brctl delbr br-$NAME || true ironic-5.1.0/devstack/tools/ironic/templates/0000775000567000056710000000000012674513633022400 5ustar jenkinsjenkins00000000000000ironic-5.1.0/devstack/tools/ironic/templates/brbm.xml0000664000567000056710000000020012674513466024040 0ustar jenkinsjenkins00000000000000 brbm ironic-5.1.0/devstack/tools/ironic/templates/tftpd-xinetd.template0000664000567000056710000000065012674513466026554 0ustar jenkinsjenkins00000000000000service tftp { protocol = udp port = 69 socket_type = dgram wait = yes user = root server = /usr/sbin/in.tftpd server_args = -v -v -v -v -v --map-file %TFTPBOOT_DIR%/map-file %TFTPBOOT_DIR% disable = no # This is a workaround for Fedora, where TFTP will listen only on # IPv6 endpoint, if IPv4 flag is not used. flags = IPv4 } ironic-5.1.0/devstack/tools/ironic/templates/vm.xml0000664000567000056710000000304612674513466023553 0ustar jenkinsjenkins00000000000000 %(name)s %(memory)s %(cpus)s hvm destroy restart restart %(emulator)s