pax_global_header00006660000000000000000000000064131766203270014521gustar00rootroot0000000000000052 comment=622b43d040f78b780ba1c6fa207436a934dabd84 os-faults-0.1.17/000077500000000000000000000000001317662032700135245ustar00rootroot00000000000000os-faults-0.1.17/.coveragerc000066400000000000000000000001371317662032700156460ustar00rootroot00000000000000[run] branch = True source = os_faults omit = os_faults/tests/* [report] ignore_errors = True os-faults-0.1.17/.gitignore000066400000000000000000000010451317662032700155140ustar00rootroot00000000000000*.py[cod] # C extensions *.so # Packages *.egg* *.egg-info dist build eggs parts bin var sdist develop-eggs .installed.cfg lib lib64 # Installer logs pip-log.txt # Unit test / coverage reports htmlcov/ cover/ .coverage* !.coveragerc .tox nosetests.xml .testrepository .venv # Translations *.mo # Mr Developer .mr.developer.cfg .project .pydevproject # Complexity output/*.html output/*/index.html # Sphinx doc/build # pbr generates these AUTHORS ChangeLog # Editors *~ .*.swp .*sw? # Files created by releasenotes build releasenotes/buildos-faults-0.1.17/.gitreview000066400000000000000000000001161317662032700155300ustar00rootroot00000000000000[gerrit] host=review.openstack.org port=29418 project=openstack/os-faults.git os-faults-0.1.17/.mailmap000066400000000000000000000002371317662032700151470ustar00rootroot00000000000000Alexey Zaytsev Anton Studenov Ilya Shakhat Yaroslav Lobankov os-faults-0.1.17/.zuul.yaml000066400000000000000000000013401317662032700154630ustar00rootroot00000000000000- project: name: openstack/os-faults check: jobs: - os-faults-integration-py27 - os-faults-integration-py35 gate: jobs: - os-faults-integration-py27 - os-faults-integration-py35 - job: name: os-faults-integration-py27 parent: openstack-tox description: | Run integration tests under Python 2.7 To run tests manually use ``tox -e integration-py27`` command. 
vars: tox_envlist: integration-py27 - job: name: os-faults-integration-py35 parent: openstack-tox description: | Run integration tests under Python 3.5 To run tests manually use ``tox -e integration-py35`` command. vars: tox_envlist: integration-py35 os-faults-0.1.17/CONTRIBUTING.rst000066400000000000000000000012151317662032700161640ustar00rootroot00000000000000If you would like to contribute to the development of OpenStack, you must follow the steps in this page: http://docs.openstack.org/infra/manual/developers.html If you already have a good understanding of how the system works and your OpenStack accounts are set up, you can skip to the development workflow section of this documentation to learn how changes to OpenStack should be submitted for review via the Gerrit tool: http://docs.openstack.org/infra/manual/developers.html#development-workflow Pull requests submitted through GitHub will be ignored. Bugs should be filed on Launchpad, not GitHub: https://bugs.launchpad.net/os-faults os-faults-0.1.17/HACKING.rst000066400000000000000000000002541317662032700153230ustar00rootroot00000000000000OS-Faults Style Commandments ============================ - Step 1: Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/ - Step 2: Read on os-faults-0.1.17/LICENSE000066400000000000000000000236361317662032700145430ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. 
For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. 
For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. 
If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. 
You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. os-faults-0.1.17/MANIFEST.in000066400000000000000000000001361317662032700152620ustar00rootroot00000000000000include AUTHORS include ChangeLog exclude .gitignore exclude .gitreview global-exclude *.pyc os-faults-0.1.17/README.rst000066400000000000000000000127211317662032700152160ustar00rootroot00000000000000========= OS-Faults ========= **OpenStack fault-injection library** The library does destructive actions inside an OpenStack cloud. It provides an abstraction layer over different types of cloud deployments. The actions are implemented as drivers (e.g. DevStack driver, Fuel driver, Libvirt driver, IPMI driver). 
* Free software: Apache license * Documentation: http://os-faults.readthedocs.io * Source: https://github.com/openstack/os-faults * Bugs: http://bugs.launchpad.net/os-faults Installation ------------ Regular installation:: pip install os-faults The library contains an optional libvirt driver; if you plan to use it, use the following command to install os-faults with the extra dependencies:: pip install os-faults[libvirt] Configuration ------------- The cloud deployment configuration schema is an extension to the cloud config used by the `os-client-config `_ library: .. code-block:: python cloud_config = { 'cloud_management': { 'driver': 'devstack', 'args': { 'address': 'devstack.local', 'username': 'root', } }, 'power_managements': [ { 'driver': 'libvirt', 'args': { 'connection_uri': 'qemu+unix:///system', } }, { 'driver': 'ipmi', 'args': { 'mac_to_bmc': { 'aa:bb:cc:dd:ee:01': { 'address': '55.55.55.55', 'username': 'foo', 'password': 'bar', } } } } ] } Establish a connection to the cloud and verify it: .. code-block:: python destructor = os_faults.connect(cloud_config) destructor.verify() The library can also read its configuration from a file, in any of the following three formats: os-faults.{json,yaml,yml}. The configuration file can be specified in the `OS_FAULTS_CONFIG` environment variable or read from one of the default locations: * current directory * ~/.config/os-faults * /etc/openstack Make some destructive actions: .. code-block:: python destructor.get_service(name='keystone').restart() The library operates with two types of objects: * `service` - software that runs in the cloud, e.g. `nova-api` * `nodes` - nodes that host the cloud, e.g. a hardware server with a hostname Simplified API -------------- The Simplified API is used to inject faults in a human-friendly form.
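Commands take forms like ``restart keystone service`` or ``kill nova-api service on one node``. As a rough, self-contained illustration of the service-oriented command shape, a toy matcher might look like the sketch below (this regex and the ``parse_service_command`` helper are hypothetical illustrations only, NOT the parser os-faults actually uses):

```python
import re

# Toy illustration only -- NOT the real os-faults parser.
# Service-oriented commands have the shape:
#   <action> <service> service [on (random|one|single|<fqdn>) node[s]]
SERVICE_CMD = re.compile(
    r'(?P<action>\w+)\s+(?P<service>\S+)\s+service'
    r'(?:\s+on\s+(?P<node>\S+)\s+nodes?)?\s*$',
    re.IGNORECASE,
)

def parse_service_command(command):
    """Return the parsed pieces of a service-oriented command, or None."""
    match = SERVICE_CMD.match(command.strip())
    return match.groupdict() if match else None

print(parse_service_command('restart keystone service'))
print(parse_service_command('kill nova-api service on one node'))
```

The optional ``on … node[s]`` clause is captured separately, mirroring how the commands below select all nodes, one random node, or a node by FQDN.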
**Service-oriented** command performs specified `action` against `service` on all, on one random node or on the node specified by FQDN:: service [on (random|one|single| node[s])] Examples: * `Restart Keystone service` - restarts Keystone service on all nodes. * `kill nova-api service on one node` - restarts Nova API on one randomly-picked node. **Node-oriented** command performs specified `action` on node specified by FQDN or set of service's nodes:: [random|one|single|] node[s] [with service] Examples: * `Reboot one node with mysql` - reboots one random node with MySQL. * `Reset node-2.domain.tld node` - reset node `node-2.domain.tld`. **Network-oriented** command is a subset of node-oriented and performs network management operation on selected nodes:: network on [random|one|single|] node[s] [with service] Examples: * `Disconnect management network on nodes with rabbitmq service` - shuts down management network interface on all nodes where rabbitmq runs. * `Connect storage network on node-1.domain.tld node` - enables storage network interface on node-1.domain.tld. Extended API ------------ 1. Service actions ~~~~~~~~~~~~~~~~~~ Get a service and restart it: .. code-block:: python destructor = os_faults.connect(cloud_config) service = destructor.get_service(name='glance-api') service.restart() Available actions: * `start` - start Service * `terminate` - terminate Service gracefully * `restart` - restart Service * `kill` - terminate Service abruptly * `unplug` - unplug Service out of network * `plug` - plug Service into network 2. Node actions ~~~~~~~~~~~~~~~ Get all nodes in the cloud and reboot them: .. 
code-block:: python nodes = destructor.get_nodes() nodes.reboot() Available actions: * `reboot` - reboot all nodes gracefully * `poweroff` - power off all nodes abruptly * `reset` - reset (cold restart) all nodes * `oom` - fill all node's RAM * `disconnect` - disable network with the specified name on all nodes * `connect` - enable network with the specified name on all nodes 3. Operate with nodes ~~~~~~~~~~~~~~~~~~~~~ Get all nodes where a service runs, pick one of them and reset: .. code-block:: python nodes = service.get_nodes() one = nodes.pick() one.reset() Get nodes where l3-agent runs and disable the management network on them: .. code-block:: python fqdns = neutron.l3_agent_list_hosting_router(router_id) nodes = destructor.get_nodes(fqdns=fqdns) nodes.disconnect(network_name='management') 4. Operate with services ~~~~~~~~~~~~~~~~~~~~~~~~ Restart a service on a single node: .. code-block:: python service = destructor.get_service(name='keystone') nodes = service.get_nodes().pick() service.restart(nodes) os-faults-0.1.17/babel.cfg000066400000000000000000000000201317662032700152420ustar00rootroot00000000000000[python: **.py] os-faults-0.1.17/doc/000077500000000000000000000000001317662032700142715ustar00rootroot00000000000000os-faults-0.1.17/doc/ext/000077500000000000000000000000001317662032700150715ustar00rootroot00000000000000os-faults-0.1.17/doc/ext/__init__.py000066400000000000000000000000001317662032700171700ustar00rootroot00000000000000os-faults-0.1.17/doc/ext/driver_doc.py000066400000000000000000000035611317662032700175700ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from docutils.parsers import rst from os_faults import registry from sphinx.util import docstrings from ext import utils class DriverDocDirective(rst.Directive): required_arguments = 1 def run(self): drv_name = self.arguments[0] driver = registry.get_driver(drv_name) types = ', '.join(class_.__name__ for class_ in driver.__bases__) doc = '\n'.join(docstrings.prepare_docstring(driver.__doc__)) subcat = utils.subcategory('{} [{}]'.format(drv_name, types)) subcat.extend(utils.parse_text(doc)) return [subcat] class CloudDriverDocDirective(rst.Directive): required_arguments = 1 def run(self): drv_name = self.arguments[0] driver = registry.get_driver(drv_name) types = ', '.join(class_.__name__ for class_ in driver.__bases__) services = sorted(driver.list_supported_services()) doc = '\n'.join(docstrings.prepare_docstring(driver.__doc__)) subcat = utils.subcategory('{} [{}]'.format(drv_name, types)) subcat.extend(utils.parse_text(doc)) subcat.extend(utils.parse_text('**Default services:**')) subcat.extend(utils.rstlist(services)) return [subcat] def setup(app): app.add_directive('driver_doc', DriverDocDirective) app.add_directive('cloud_driver_doc', CloudDriverDocDirective) os-faults-0.1.17/doc/ext/utils.py000066400000000000000000000026441317662032700166110ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from docutils import frontend from docutils import nodes from docutils.parsers import rst from docutils import utils def parse_text(text): parser = rst.Parser() settings = frontend.OptionParser( components=(rst.Parser,)).get_default_values() document = utils.new_document(text, settings) parser.parse(text, document) return document.children paragraph = lambda text: parse_text(text)[0] note = lambda msg: nodes.note("", paragraph(msg)) hint = lambda msg: nodes.hint("", *parse_text(msg)) warning = lambda msg: nodes.warning("", paragraph(msg)) category = lambda title: parse_text("%s\n%s" % (title, "-" * len(title)))[0] subcategory = lambda title: parse_text("%s\n%s" % (title, "~" * len(title)))[0] section = lambda title: parse_text("%s\n%s" % (title, "\"" * len(title)))[0] def rstlist(items): return parse_text("".join("* %s\n" % item for item in items)) os-faults-0.1.17/doc/source/000077500000000000000000000000001317662032700155715ustar00rootroot00000000000000os-faults-0.1.17/doc/source/api.rst000066400000000000000000000004471317662032700171010ustar00rootroot00000000000000============= API Reference ============= .. automodule:: os_faults :members: .. autoclass:: os_faults.api.cloud_management.CloudManagement :members: .. autoclass:: os_faults.api.service.Service :members: .. autoclass:: os_faults.api.node_collection.NodeCollection :members: os-faults-0.1.17/doc/source/cli.rst000066400000000000000000000007761317662032700171040ustar00rootroot00000000000000============= CLI reference ============= os-inject-fault --------------- .. 
program-output:: os-inject-fault --help os-faults --------- .. program-output:: os-faults --help os-faults verify ---------------- .. program-output:: os-faults verify --help os-faults discover ------------------ .. program-output:: os-faults discover --help os-faults nodes --------------- .. program-output:: os-faults nodes --help os-faults drivers ----------------- .. program-output:: os-faults drivers --help os-faults-0.1.17/doc/source/conf.py000077500000000000000000000063541317662032700171030ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os import sys import os_faults sys.path.insert(0, os.path.abspath('..')) sys.path.insert(0, os.path.abspath('../..')) # -- General configuration ---------------------------------------------------- on_zuul = "ZUUL_PROJECT" in os.environ # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = [ 'sphinx.ext.autodoc', 'sphinx.ext.autosectionlabel', #'sphinx.ext.intersphinx', 'sphinxcontrib.programoutput', 'ext.driver_doc', 'openstackdocstheme', ] version = os_faults.get_version() # The full version, including alpha/beta/rc tags. release = os_faults.get_release() # autodoc generation is a bit aggressive and a nuisance when doing heavy # text edit cycles. # execute "export SPHINX_DEBUG=1" in your terminal to disable # The suffix of source filenames. 
source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. project = u'os-faults' copyright = u'2016, OpenStack Foundation' # If true, '()' will be appended to :func: etc. cross-reference text. add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = True # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. modindex_common_prefix = ['os_faults.'] # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. # html_theme_path = ["."] # html_theme = '_theme' # html_static_path = ['static'] html_theme = 'openstackdocs' if not on_zuul: html_theme = "sphinx_rtd_theme" # Output file base name for HTML help builder. htmlhelp_basename = '%sdoc' % project # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' html_last_updated_fmt = '%Y-%m-%d %H:%M' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass # [howto/manual]). latex_documents = [ ('index', '%s.tex' % project, u'%s Documentation' % project, u'OpenStack Foundation', 'manual'), ] # Example configuration for intersphinx: refer to the Python standard library. #intersphinx_mapping = {'http://docs.python.org/': None} # -- Options for openstackdocstheme ------------------------------------------- repository_name = 'openstack/os-faults' bug_project = 'os-faults' bug_tag = '' os-faults-0.1.17/doc/source/contributing.rst000066400000000000000000000001131317662032700210250ustar00rootroot00000000000000============ Contributing ============ .. 
include:: ../../CONTRIBUTING.rst os-faults-0.1.17/doc/source/drivers.rst000066400000000000000000000010771317662032700200060ustar00rootroot00000000000000======= Drivers ======= Cloud management ---------------- .. cloud_driver_doc:: devstack .. cloud_driver_doc:: devstack_systemd .. cloud_driver_doc:: fuel .. cloud_driver_doc:: tcpcloud Power management ---------------- .. driver_doc:: libvirt .. driver_doc:: ipmi Node discover ------------- .. driver_doc:: node_list Service drivers --------------- .. driver_doc:: process .. driver_doc:: linux_service .. driver_doc:: screen .. driver_doc:: systemd_service .. driver_doc:: salt_service .. driver_doc:: pcs_service .. driver_doc:: pcs_or_linux_service os-faults-0.1.17/doc/source/history.rst000066400000000000000000000000351317662032700200220ustar00rootroot00000000000000.. include:: ../../ChangeLog os-faults-0.1.17/doc/source/index.rst000066400000000000000000000011541317662032700174330ustar00rootroot00000000000000=================== OS-Faults |release| =================== **OpenStack fault-injection library** The library does destructive actions inside an OpenStack cloud. It provides an abstraction layer over different types of cloud deployments. The actions are implemented as drivers (e.g. DevStack driver, Fuel driver, Libvirt driver, IPMI driver). Contents ======== .. toctree:: :maxdepth: 3 quickstart/index cli drivers api contributing Release Notes ============= .. toctree:: :maxdepth: 1 history Indices and Tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` os-faults-0.1.17/doc/source/quickstart/000077500000000000000000000000001317662032700177635ustar00rootroot00000000000000os-faults-0.1.17/doc/source/quickstart/api.rst000066400000000000000000000065051317662032700212740ustar00rootroot00000000000000=== API === The library operates with 2 types of objects: * `service` - is a software that runs in the cloud, e.g. `nova-api` * `nodes` - nodes that host the cloud, e.g. 
a hardware server with a hostname Simplified API -------------- The Simplified API is used to inject faults in a human-friendly form. .. code-block:: python import os_faults destructor = os_faults.connect(config_filename='os-faults.yaml') os_faults.human_api(destructor, 'restart keystone service') **Service-oriented** command performs the specified `action` against `service` on all nodes, on one random node, or on the node specified by FQDN:: <action> <service> service [on (random|one|single|<fqdn>) node[s]] Examples: * `Restart Keystone service` - restarts Keystone service on all nodes. * `kill nova-api service on one node` - kills Nova API on one randomly-picked node. **Node-oriented** command performs the specified `action` on the node specified by FQDN or on a set of service's nodes:: <action> [random|one|single|<fqdn>] node[s] [with <service> service] Examples: * `Reboot one node with mysql` - reboots one random node with MySQL. * `Reset node-2.domain.tld node` - resets node `node-2.domain.tld`. **Network-oriented** command is a subset of node-oriented and performs a network management operation on selected nodes:: <action> <network> network on [random|one|single|<fqdn>] node[s] [with <service> service] Examples: * `Disconnect management network on nodes with rabbitmq service` - shuts down the management network interface on all nodes where rabbitmq runs. * `Connect storage network on node-1.domain.tld node` - enables the storage network interface on node-1.domain.tld. Extended API ------------ 1. Service actions ~~~~~~~~~~~~~~~~~~ Get a service and restart it: .. code-block:: python destructor = os_faults.connect(cloud_config) service = destructor.get_service(name='glance-api') service.restart() Available actions: * `start` - start Service * `terminate` - terminate Service gracefully * `restart` - restart Service * `kill` - terminate Service abruptly * `unplug` - unplug Service out of network * `plug` - plug Service into network 2. Node actions ~~~~~~~~~~~~~~~ Get all nodes in the cloud and reboot them: ..
code-block:: python nodes = destructor.get_nodes() nodes.reboot() Available actions: * `reboot` - reboot all nodes gracefully * `poweroff` - power off all nodes abruptly * `reset` - reset (cold restart) all nodes * `oom` - fill all node's RAM * `disconnect` - disable network with the specified name on all nodes * `connect` - enable network with the specified name on all nodes 3. Operate with nodes ~~~~~~~~~~~~~~~~~~~~~ Get all nodes where a service runs, pick one of them and reset: .. code-block:: python nodes = service.get_nodes() one = nodes.pick() one.reset() Get nodes where l3-agent runs and disable the management network on them: .. code-block:: python fqdns = neutron.l3_agent_list_hosting_router(router_id) nodes = destructor.get_nodes(fqdns=fqdns) nodes.disconnect(network_name='management') 4. Operate with services ~~~~~~~~~~~~~~~~~~~~~~~~ Restart a service on a single node: .. code-block:: python service = destructor.get_service(name='keystone') nodes = service.get_nodes().pick() service.restart(nodes) os-faults-0.1.17/doc/source/quickstart/basics.rst000066400000000000000000000025771317662032700217740ustar00rootroot00000000000000====== Basics ====== Configuration file ------------------ The cloud deployment configuration schema is an extension to the cloud config used by the `os-client-config `_ library: .. code-block:: yaml cloud_management: driver: devstack args: address: 192.168.1.240 username: ubuntu iface: enp0s3 power_managements: - driver: libvirt args: connection_uri: qemu+ssh://ubuntu@10.0.1.50/system By default, the library reads configuration from a file and the file can be in the following three formats: ``os-faults.{json,yaml,yml}``. 
The configuration file will be searched for in the default locations: * current directory * ~/.config/os-faults * /etc/openstack Also, the configuration file can be specified in the ``OS_FAULTS_CONFIG`` environment variable:: $ export OS_FAULTS_CONFIG=/home/alex/my-os-faults-config.yaml Running ------- Establish a connection to the cloud and verify it: .. code-block:: python import os_faults destructor = os_faults.connect(config_filename='os-faults.yaml') destructor.verify() or via CLI:: $ os-faults verify -c os-faults.yaml Make some destructive actions: .. code-block:: python destructor.get_service(name='keystone').restart() or via CLI:: $ os-inject-fault -c os-faults.yaml restart keystone service os-faults-0.1.17/doc/source/quickstart/config_spec.rst000066400000000000000000000102371317662032700227770ustar00rootroot00000000000000=========================== Configuration specification =========================== The configuration file contains the following parameters: * cloud_management * power_managements * node_discover * services Each parameter specifies a driver or a list of drivers. Example configuration: .. code-block:: yaml cloud_management: driver: devstack args: address: 192.168.1.240 username: ubuntu iface: enp0s3 power_managements: - driver: libvirt args: connection_uri: qemu+ssh://ubuntu@10.0.1.50/system - driver: ipmi args: fqdn_to_bmc: node-1.domain.tld: address: 120.10.30.65 username: alex password: super-secret node_discover: driver: node_list args: - fqdn: node-1.domain.tld ip: 192.168.1.240 mac: 1e:24:c3:75:dd:2c services: glance-api: driver: screen args: grep: glance-api window_name: g-api hosts: - 192.168.1.240 cloud_management ---------------- This parameter specifies the cloud management driver and its arguments. ``cloud_management`` is responsible for configuring the connection to nodes and contains arguments such as the SSH username/password/key/proxy. ..
code-block:: yaml cloud_management: driver: devstack # name of the driver args: # arguments for the driver address: 192.168.1.240 username: ubuntu iface: enp0s3 Such drivers can also support discovery of cloud nodes. For example, the ``fuel`` and ``tcpcloud`` drivers can discover information about nodes through the master/config node of the cloud. List of supported drivers for cloud_management: :ref:`Cloud management` power_managements ----------------- This parameter specifies a list of power management drivers. Such drivers allow controlling the power state of cloud nodes. .. code-block:: yaml power_managements: - driver: libvirt # name of the driver args: # arguments for the driver connection_uri: qemu+ssh://ubuntu@10.0.1.50/system - driver: ipmi # name of the driver args: # arguments for the driver fqdn_to_bmc: node-1.domain.tld: address: 120.10.30.65 username: alex password: super-secret List of supported drivers for power_managements: :ref:`Power management` node_discover ------------- This parameter specifies the node discover driver. ``node_discover`` is responsible for fetching the list of hosts for the cloud. If ``node_discover`` is specified in the configuration, then ``cloud_management`` will only control connection options to the nodes. .. code-block:: yaml node_discover: driver: node_list args: - fqdn: node-1.domain.tld ip: 192.168.1.240 mac: 1e:24:c3:75:dd:2c List of supported drivers for node_discover: :ref:`Node discover` services -------- This parameter specifies a list of services and their types. It allows updating or adding services that are embedded in the ``cloud_management`` driver. ..
code-block:: yaml services: glance-api: # name of the service driver: screen # name of the service driver args: # arguments for the driver grep: glance-api window_name: g-api hosts: # list of hosts where this service is running - 192.168.1.240 mysql: # name of the service driver: process # name of the service driver args: # arguments for the driver grep: mysqld port: - tcp - 3307 restart_cmd: sudo service mysql restart start_cmd: sudo service mysql start terminate_cmd: sudo service mysql stop A service driver accepts an optional ``hosts`` parameter which controls discovery of the hosts where the service is running. If ``hosts`` is specified, service discovery is disabled for this service and the listed hosts are used; otherwise, the service is searched for across all nodes. List of supported drivers for services: :ref:`Service drivers` os-faults-0.1.17/doc/source/quickstart/index.rst000066400000000000000000000002461317662032700216260ustar00rootroot00000000000000========== Quickstart ========== This section describes how to start using os-faults. .. toctree:: :maxdepth: 2 installation basics api config_spec os-faults-0.1.17/doc/source/quickstart/installation.rst000066400000000000000000000005761317662032700232260ustar00rootroot00000000000000============ Installation ============ At the command line:: $ pip install os-faults Or, if you have virtualenvwrapper installed:: $ mkvirtualenv os-faults $ pip install os-faults The library contains an optional libvirt driver; if you plan to use it, use the following command to install os-faults with extra dependencies:: pip install os-faults[libvirt] os-faults-0.1.17/examples/000077500000000000000000000000001317662032700153425ustar00rootroot00000000000000os-faults-0.1.17/examples/due.py000066400000000000000000000025101317662032700164700ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os_faults def main(): # cloud config schema is an extension to os-client-config cloud_config = { 'cloud_management': { 'driver': 'devstack', 'args': { 'address': 'devstack.local', 'username': 'developer', } } } logging.info('# Create connection to the cloud') destructor = os_faults.connect(cloud_config) logging.info('# Verify connection to the cloud') destructor.verify() logging.info('# Restart Keystone service on all nodes') service = destructor.get_service(name='keystone') service.restart() if __name__ == '__main__': logging.basicConfig(format='%(asctime)s %(levelname)s %(message)s', level=logging.INFO) main() os-faults-0.1.17/examples/fuel_distribute_key.sh000077500000000000000000000021271317662032700217440ustar00rootroot00000000000000#!/bin/bash -x KEY_FILE_NAME="${HOME}/.ssh/os_faults" HOST=${1:-fuel.local} USERNAME=${2:-root} echo "distributing keys to Fuel: ${USERNAME}@${HOST}" if [ ! 
-f ${KEY_FILE_NAME} ]; then echo "generating new key in ${KEY_FILE_NAME}" ssh-keygen -b 4096 -f ${KEY_FILE_NAME} -q -t rsa -P "" fi echo "copying the key to master node ${USERNAME}@${HOST}" ssh-copy-id -i ${KEY_FILE_NAME} -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no ${USERNAME}@${HOST} echo "get list of nodes in the cluster" for NODE in `ssh -i ${KEY_FILE_NAME} -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no ${USERNAME}@${HOST} fuel2 node list -c ip -f value`; do echo "copying the key to node ${NODE}" # ssh-copy-id does not copy the key over the hop when the destination is already reachable via its own key cat ${KEY_FILE_NAME}.pub | ssh -i ${KEY_FILE_NAME} ${USERNAME}@${HOST} ssh ${NODE} 'tee -a .ssh/authorized_keys' ssh -i ${KEY_FILE_NAME} -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o ProxyCommand="ssh -i ${KEY_FILE_NAME} -W %h:%p ${USERNAME}@${HOST}" root@${NODE} hostname done os-faults-0.1.17/examples/power_off_on_ipmi_node.py000066400000000000000000000037631317662032700224320ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import logging import os_faults def main(): # cloud config schema is an extension to os-client-config cloud_config = { 'cloud_management': { 'driver': 'fuel', 'args': { 'address': 'fuel.local', 'username': 'root', } }, 'power_managements': [ { 'driver': 'ipmi', 'args': { 'mac_to_bmc': { '00:00:00:00:00:00': { 'address': '55.55.55.55', 'username': 'foo', 'password': 'bar', } } } } ] } logging.info('Create connection to the cluster') destructor = os_faults.connect(cloud_config) logging.info('Verify connection to the cluster') destructor.verify() logging.info('Get all cluster nodes') nodes = destructor.get_nodes() logging.info('All cluster nodes: %s', nodes) computes = nodes.filter(role='compute') one = computes.pick() logging.info('Pick one of compute nodes: %s', one) logging.info('Power off compute node') one.poweroff() logging.info('Power on compute node') one.poweron() logging.info('Done!') if __name__ == '__main__': logging.basicConfig(format='%(asctime)s %(levelname)s %(message)s', level=logging.DEBUG) main() os-faults-0.1.17/examples/power_off_on_vm_node.py000066400000000000000000000032211317662032700221030ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import logging import os_faults def main(): # cloud config schema is an extension to os-client-config cloud_config = { 'cloud_management': { 'driver': 'fuel', 'args': { 'address': 'fuel.local', 'username': 'root', } }, 'power_managements': [ { 'driver': 'libvirt', 'args': { 'connection_uri': "qemu+ssh://ubuntu@host.local/system" } } ] } logging.info('Create connection to the cluster') destructor = os_faults.connect(cloud_config) logging.info('Verify connection to the cluster') destructor.verify() logging.info('Get all cluster nodes') nodes = destructor.get_nodes() logging.info('All cluster nodes: %s', nodes) logging.info('Pick and power off/on one of cluster nodes') one = nodes.pick() one.poweroff() one.poweron() if __name__ == '__main__': logging.basicConfig(format='%(asctime)s %(levelname)s %(message)s', level=logging.DEBUG) main() os-faults-0.1.17/examples/uno.py000066400000000000000000000055071317662032700165240ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import logging import os_faults def main(): # cloud config schema is an extension to os-client-config cloud_config = { 'cloud_management': { 'driver': 'fuel', 'args': { 'address': 'fuel.local', 'username': 'root', 'private_key_file': '~/.ssh/os_faults', } }, 'power_managements': [ { 'driver': 'libvirt', 'args': { 'connection_uri': 'qemu+ssh://ubuntu@host.local/system' } } ] } logging.info('# Create connection to the cloud') destructor = os_faults.connect(cloud_config) logging.info('# Verify connection to the cloud') destructor.verify() # os_faults library operate with 2 types of objects: # service - is software that runs in the cloud, e.g. keystone, mysql, # rabbitmq, nova-api, glance-api # nodes - nodes that host the cloud, e.g. hardware server with hostname logging.info('# Get nodes where Keystone service runs') service = destructor.get_service(name='keystone') nodes = service.get_nodes() logging.info('Nodes: %s', nodes) logging.info('# Restart Keystone service on all nodes') service.restart() logging.info('# Pick and reset one of Keystone service nodes') one = nodes.pick() one.reset() logging.info('# Get all nodes in the cloud') nodes = destructor.get_nodes() logging.info('All cloud nodes: %s', nodes) logging.info('# Reset all these nodes') nodes.reset() logging.info('# Get node by FQDN: node-2.domain.tld') nodes = destructor.get_nodes(fqdns=['node-2.domain.tld']) logging.info('Node node-2.domain.tld: %s', nodes) logging.info('# Disable public network on node-2.domain.tld') nodes.disconnect(network_name='public') logging.info('# Enable public network on node-2.domain.tld') nodes.connect(network_name='public') logging.info('# Kill Glance API service on a single node') service = destructor.get_service(name='glance-api') nodes = service.get_nodes().pick() service.kill(nodes) if __name__ == '__main__': logging.basicConfig(format='%(asctime)s %(levelname)s %(message)s', level=logging.INFO) main() 
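The example above leans on ``NodeCollection`` semantics: ``get_nodes()`` returns a collection, ``pick()`` takes a random subset, and nodes can be narrowed by FQDN or role. A toy model of those semantics (illustrative only, not the real os-faults implementation) looks like:

```python
import random

# Toy stand-in for the NodeCollection behaviour used in the example
# above (filter by role, pick a random node); class and field names
# here are illustrative, not the real os-faults classes.
class ToyNodeCollection:
    def __init__(self, hosts):
        self.hosts = list(hosts)

    def filter(self, role):
        """Keep only the nodes with the given role."""
        return ToyNodeCollection(h for h in self.hosts if h['role'] == role)

    def pick(self, count=1):
        """Return a new collection with `count` randomly chosen nodes."""
        return ToyNodeCollection(random.sample(self.hosts, count))

nodes = ToyNodeCollection([
    {'fqdn': 'node-1.domain.tld', 'role': 'compute'},
    {'fqdn': 'node-2.domain.tld', 'role': 'controller'},
])
one = nodes.filter('compute').pick()
```

Operations like ``reset()`` or ``disconnect()`` then apply to every host in the resulting collection.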
os-faults-0.1.17/os_faults/000077500000000000000000000000001317662032700155235ustar00rootroot00000000000000os-faults-0.1.17/os_faults/__init__.py000066400000000000000000000167751317662032700176540ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import os import appdirs import jsonschema import logging import pbr.version import yaml from os_faults.ansible import executor from os_faults.api import error from os_faults.api import human from os_faults import registry LOG = logging.getLogger(__name__) # Set default logging handler to avoid "No handler found" warnings. 
LOG.addHandler(logging.NullHandler()) def get_version(): return pbr.version.VersionInfo('os_faults').version_string() def get_release(): return pbr.version.VersionInfo('os_faults').release_string() APPDIRS = appdirs.AppDirs(appname='openstack', appauthor='OpenStack') UNIX_SITE_CONFIG_HOME = '/etc/openstack' CONFIG_SEARCH_PATH = [ os.getcwd(), APPDIRS.user_config_dir, UNIX_SITE_CONFIG_HOME, ] CONFIG_FILES = [ os.path.join(d, 'os-faults' + s) for d in CONFIG_SEARCH_PATH for s in ['.json', '.yaml', '.yml'] ] CONFIG_SCHEMA = { 'type': 'object', '$schema': 'http://json-schema.org/draft-04/schema#', 'properties': { 'node_discover': { 'type': 'object', 'properties': { 'driver': {'type': 'string'}, 'args': {}, }, 'required': ['driver', 'args'], 'additionalProperties': False, }, 'services': { 'type': 'object', 'patternProperties': { '.*': { 'type': 'object', 'properties': { 'driver': {'type': 'string'}, 'args': {'type': 'object'}, 'hosts': { 'type': 'array', 'minItems': 1, 'items': {'type': 'string'}, }, }, 'required': ['driver', 'args'], 'additionalProperties': False, } }, 'additionalProperties': False, }, 'cloud_management': { 'type': 'object', 'properties': { 'driver': {'type': 'string'}, 'args': {'type': 'object'}, }, 'required': ['driver'], 'additionalProperties': False, }, 'power_management': { 'type': 'object', 'properties': { 'driver': {'type': 'string'}, 'args': {'type': 'object'}, }, 'required': ['driver', 'args'], 'additionalProperties': False, }, 'power_managements': { 'type': 'array', 'items': { 'type': 'object', 'properties': { 'driver': {'type': 'string'}, 'args': {'type': 'object'}, }, 'required': ['driver', 'args'], 'additionalProperties': False, }, 'minItems': 1, }, }, 'required': ['cloud_management'], } def get_default_config_file(): if 'OS_FAULTS_CONFIG' in os.environ: return os.environ['OS_FAULTS_CONFIG'] for config_file in CONFIG_FILES: if os.path.exists(config_file): return config_file msg = 'Config file is not found on any of paths: 
{}'.format(CONFIG_FILES) raise error.OSFError(msg) def _init_driver(params): driver_cls = registry.get_driver(params['driver']) args = params.get('args') or {} # driver may have no arguments if args: jsonschema.validate(args, driver_cls.CONFIG_SCHEMA) return driver_cls(args) def connect(cloud_config=None, config_filename=None): """Connect to the cloud :param cloud_config: dict with cloud and power management params :param config_filename: name of the file where to read config from :returns: CloudManagement object """ if cloud_config is None: config_filename = config_filename or get_default_config_file() with open(config_filename) as fd: cloud_config = yaml.safe_load(fd.read()) jsonschema.validate(cloud_config, CONFIG_SCHEMA) cloud_management_conf = cloud_config['cloud_management'] cloud_management = _init_driver(cloud_management_conf) services = cloud_config.get('services') if services: cloud_management.update_services(services) cloud_management.validate_services() node_discover_conf = cloud_config.get('node_discover') if node_discover_conf: node_discover = _init_driver(node_discover_conf) cloud_management.set_node_discover(node_discover) power_managements_conf = cloud_config.get('power_managements') if power_managements_conf: for pm_conf in power_managements_conf: pm = _init_driver(pm_conf) cloud_management.add_power_management(pm) power_management_conf = cloud_config.get('power_management') if power_management_conf: if power_managements_conf: raise error.OSFError('Please use only power_managements') else: LOG.warning('power_management is deprecated, use ' 'power_managements instead.') power_management = _init_driver(power_management_conf) cloud_management.add_power_management(power_management) return cloud_management def discover(cloud_config): """Connect to the cloud and discover nodes and services :param cloud_config: dict with cloud and power management params :returns: config dict with discovered nodes/services """ cloud_config = copy.deepcopy(cloud_config) 
cloud_management = connect(cloud_config) # discover nodes hosts = [] for host in cloud_management.get_nodes().hosts: hosts.append({'ip': host.ip, 'mac': host.mac, 'fqdn': host.fqdn}) LOG.info('Found node: %s' % str(host)) cloud_config['node_discover'] = {'driver': 'node_list', 'args': hosts} # discover services cloud_config['services'] = {} for service_name in cloud_management.list_supported_services(): service = cloud_management.get_service(service_name) ips = service.get_nodes().get_ips() cloud_config['services'][service_name] = { 'driver': service.NAME, 'args': service.config } if ips: cloud_config['services'][service_name]['hosts'] = ips LOG.info('Found service "%s" on hosts: %s' % ( service_name, str(ips))) else: LOG.warning('Service "%s" is not found' % service_name) return cloud_config def human_api(distractor, command): """Execute high-level text command with specified destructor :param destructor: library instance as returned by :connect: function :param command: text command """ human.execute(distractor, command) def register_ansible_modules(paths): """Registers ansible modules by provided paths Allows to use custom ansible modules in NodeCollection.run_task method :param path: list of paths to folders with ansible modules """ executor.add_module_paths(paths) os-faults-0.1.17/os_faults/ansible/000077500000000000000000000000001317662032700171405ustar00rootroot00000000000000os-faults-0.1.17/os_faults/ansible/__init__.py000066400000000000000000000000001317662032700212370ustar00rootroot00000000000000os-faults-0.1.17/os_faults/ansible/executor.py000066400000000000000000000250771317662032700213630ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import collections import copy import logging import os from ansible.executor import task_queue_manager from ansible.parsing import dataloader from ansible.playbook import play from ansible.plugins import callback as callback_pkg try: from ansible.inventory.manager import InventoryManager as Inventory from ansible.vars.manager import VariableManager PRE_24_ANSIBLE = False except ImportError: # pre-2.4 Ansible from ansible.inventory import Inventory from ansible.vars import VariableManager PRE_24_ANSIBLE = True from os_faults.api import error LOG = logging.getLogger(__name__) STATUS_OK = 'OK' STATUS_FAILED = 'FAILED' STATUS_UNREACHABLE = 'UNREACHABLE' STATUS_SKIPPED = 'SKIPPED' DEFAULT_ERROR_STATUSES = {STATUS_FAILED, STATUS_UNREACHABLE} SSH_COMMON_ARGS = ('-o UserKnownHostsFile=/dev/null ' '-o StrictHostKeyChecking=no ' '-o ConnectTimeout=60') STDOUT_LIMIT = 4096 # Symbols count class AnsibleExecutionException(Exception): pass class AnsibleExecutionUnreachable(AnsibleExecutionException): pass AnsibleExecutionRecord = collections.namedtuple( 'AnsibleExecutionRecord', ['host', 'status', 'task', 'payload']) class MyCallback(callback_pkg.CallbackBase): CALLBACK_VERSION = 2.0 CALLBACK_TYPE = 'stdout' CALLBACK_NAME = 'myown' def __init__(self, storage, display=None): super(MyCallback, self).__init__(display) self.storage = storage def _store(self, result, status): record = AnsibleExecutionRecord( host=result._host.get_name(), status=status, task=result._task.get_name(), payload=result._result) self.storage.append(record) def v2_runner_on_failed(self, result, 
ignore_errors=False): super(MyCallback, self).v2_runner_on_failed(result) self._store(result, STATUS_FAILED) def v2_runner_on_ok(self, result): super(MyCallback, self).v2_runner_on_ok(result) self._store(result, STATUS_OK) def v2_runner_on_skipped(self, result): super(MyCallback, self).v2_runner_on_skipped(result) self._store(result, STATUS_SKIPPED) def v2_runner_on_unreachable(self, result): super(MyCallback, self).v2_runner_on_unreachable(result) self._store(result, STATUS_UNREACHABLE) def resolve_relative_path(file_name): path = os.path.normpath(os.path.join( os.path.dirname(__import__('os_faults').__file__), '../', file_name)) if os.path.exists(path): return path MODULE_PATHS = { resolve_relative_path('os_faults/ansible/modules'), } def get_module_paths(): global MODULE_PATHS return MODULE_PATHS def add_module_paths(paths): global MODULE_PATHS for path in paths: if not os.path.exists(path): raise error.OSFError('{!r} does not exist'.format(path)) # find all subfolders dirs = [x[0] for x in os.walk(path)] MODULE_PATHS.update(dirs) def make_module_path_option(): if PRE_24_ANSIBLE: # it was a string of colon-separated directories module_path = os.pathsep.join(get_module_paths()) else: # now it is a list of strings (MUST have > 1 element) module_path = list(get_module_paths()) if len(module_path) == 1: module_path += [module_path[0]] return module_path Options = collections.namedtuple( 'Options', ['connection', 'module_path', 'forks', 'remote_user', 'private_key_file', 'ssh_common_args', 'ssh_extra_args', 'sftp_extra_args', 'scp_extra_args', 'become', 'become_method', 'become_user', 'verbosity', 'check', 'diff']) class AnsibleRunner(object): def __init__(self, remote_user='root', password=None, forks=100, jump_host=None, jump_user=None, private_key_file=None, become=None, become_password=None, serial=None): super(AnsibleRunner, self).__init__() ssh_common_args = SSH_COMMON_ARGS if jump_host: ssh_common_args += self._build_proxy_arg( jump_host=jump_host, 
jump_user=jump_user or remote_user, private_key_file=private_key_file) self.passwords = dict(conn_pass=password, become_pass=become_password) self.options = Options( connection='smart', module_path=make_module_path_option(), forks=forks, remote_user=remote_user, private_key_file=private_key_file, ssh_common_args=ssh_common_args, ssh_extra_args=None, sftp_extra_args=None, scp_extra_args=None, become=become, become_method='sudo', become_user='root', verbosity=100, check=False, diff=None) self.serial = serial or 10 @staticmethod def _build_proxy_arg(jump_user, jump_host, private_key_file=None): key = '-i ' + private_key_file if private_key_file else '' return (' -o ProxyCommand="ssh %(key)s -W %%h:%%p %(ssh_args)s ' '%(user)s@%(host)s"' % dict(key=key, user=jump_user, host=jump_host, ssh_args=SSH_COMMON_ARGS)) def _run_play(self, play_source, host_vars): host_list = play_source['hosts'] loader = dataloader.DataLoader() # FIXME(jpena): we need to behave differently if we are using # Ansible >= 2.4.0.0. 
Remove when only versions > 2.4 are supported if PRE_24_ANSIBLE: variable_manager = VariableManager() inventory_inst = Inventory(loader=loader, variable_manager=variable_manager, host_list=host_list) variable_manager.set_inventory(inventory_inst) else: inventory_inst = Inventory(loader=loader, sources=','.join(host_list) + ',') variable_manager = VariableManager(loader=loader, inventory=inventory_inst) for host, variables in host_vars.items(): host_inst = inventory_inst.get_host(host) for var_name, value in variables.items(): if value is not None: variable_manager.set_host_variable( host_inst, var_name, value) storage = [] callback = MyCallback(storage) tqm = task_queue_manager.TaskQueueManager( inventory=inventory_inst, variable_manager=variable_manager, loader=loader, options=self.options, passwords=self.passwords, stdout_callback=callback, ) # create play play_inst = play.Play().load(play_source, variable_manager=variable_manager, loader=loader) # actually run it try: tqm.run(play_inst) finally: tqm.cleanup() return storage def run_playbook(self, playbook, host_vars): result = [] for play_source in playbook: play_source['gather_facts'] = 'no' result += self._run_play(play_source, host_vars) return result def execute(self, hosts, task, raise_on_statuses=DEFAULT_ERROR_STATUSES): """Executes the task on every host from the list Raises exception if any of the commands fails with one of specified statuses. 
:param hosts: list of host addresses :param task: Ansible task :param raise_on_statuses: raise exception if any of commands return any of these statuses :return: execution result, type AnsibleExecutionRecord """ LOG.debug('Executing task: %s on hosts: %s with serial: %s', task, hosts, self.serial) host_vars = {h.ip: self._build_host_vars(h) for h in hosts} task_play = {'hosts': [h.ip for h in hosts], 'tasks': [task], 'serial': self.serial} result = self.run_playbook([task_play], host_vars) log_result = copy.deepcopy(result) LOG.debug('Execution completed with %s result(s):' % len(log_result)) for lr in log_result: if 'stdout' in lr.payload: if len(lr.payload['stdout']) > STDOUT_LIMIT: lr.payload['stdout'] = ( lr.payload['stdout'][:STDOUT_LIMIT] + '... ') if 'stdout_lines' in lr.payload: del lr.payload['stdout_lines'] LOG.debug(lr) if raise_on_statuses: errors = [] only_unreachable = True for r in result: if r.status in raise_on_statuses: if r.status != STATUS_UNREACHABLE: only_unreachable = False errors.append(r) if errors: msg = 'Execution failed: %s' % ', '.join(( '(host: %s, status: %s)' % (r.host, r.status)) for r in errors) ek = (AnsibleExecutionUnreachable if only_unreachable else AnsibleExecutionException) raise ek(msg) return result def _build_host_vars(self, host): if not host.auth: return {} ssh_common_args = None if 'jump' in host.auth: ssh_common_args = SSH_COMMON_ARGS ssh_common_args += self._build_proxy_arg( jump_host=host.auth['jump']['host'], jump_user=host.auth['jump'].get( 'username', self.options.remote_user), private_key_file=host.auth['jump'].get( 'private_key_file', self.options.private_key_file)) return { 'ansible_user': host.auth.get('username'), 'ansible_ssh_pass': host.auth.get('password'), 'ansible_become': host.auth.get('become') or host.auth.get('sudo'), 'ansible_become_password': host.auth.get('become_password'), 'ansible_ssh_private_key_file': host.auth.get('private_key_file'), 'ansible_ssh_common_args': ssh_common_args, } 
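The jump-host support above hinges on the SSH ``ProxyCommand`` string built by ``_build_proxy_arg``. The construction is copied out below as a standalone function so it can be exercised in isolation (only the function name differs from the source):

```python
# Copy of AnsibleRunner._build_proxy_arg from above, extracted for
# illustration: it produces the extra SSH argument that tunnels the
# connection through a jump host.
SSH_COMMON_ARGS = ('-o UserKnownHostsFile=/dev/null '
                   '-o StrictHostKeyChecking=no '
                   '-o ConnectTimeout=60')

def build_proxy_arg(jump_user, jump_host, private_key_file=None):
    """Build the SSH ProxyCommand argument for a jump host."""
    key = '-i ' + private_key_file if private_key_file else ''
    return (' -o ProxyCommand="ssh %(key)s -W %%h:%%p %(ssh_args)s '
            '%(user)s@%(host)s"' % dict(key=key, user=jump_user,
                                        host=jump_host,
                                        ssh_args=SSH_COMMON_ARGS))
```

The result is appended to ``ansible_ssh_common_args``, so every connection to a target node is forwarded through ``jump_user@jump_host`` via ``ssh -W``.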
os-faults-0.1.17/os_faults/ansible/modules/000077500000000000000000000000001317662032700206105ustar00rootroot00000000000000os-faults-0.1.17/os_faults/ansible/modules/__init__.py000066400000000000000000000000001317662032700227070ustar00rootroot00000000000000os-faults-0.1.17/os_faults/ansible/modules/freeze.py000066400000000000000000000026561317662032700224530ustar00rootroot00000000000000#!/usr/bin/python # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from ansible.module_utils.basic import * # noqa def main(): module = AnsibleModule( argument_spec=dict( grep=dict(required=True, type='str'), sec=dict(required=True, type='int') )) grep = module.params['grep'] sec = module.params['sec'] cmd = ('bash -c "tf=$(mktemp /tmp/script.XXXXXX);' 'echo -n \'#!\' > $tf; ' 'echo -en \'/bin/bash\\npids=`ps ax | ' 'grep -v grep | ' 'grep %s | awk {{\\047print $1\\047}}`; ' 'echo $pids | xargs kill -19; sleep %s; ' 'echo $pids | xargs kill -18; rm \' >> $tf; ' 'echo -n $tf >> $tf; ' 'chmod 770 $tf; nohup $tf &"') % (grep, sec) rc, stdout, stderr = module.run_command(cmd, check_rc=True) module.exit_json(cmd=cmd, rc=rc, stderr=stderr, stdout=stdout) if __name__ == '__main__': main() os-faults-0.1.17/os_faults/ansible/modules/fuel_network_mgmt.py000066400000000000000000000025621317662032700247170ustar00rootroot00000000000000#!/usr/bin/python # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from ansible.module_utils.basic import * # noqa NETWORK_NAME_TO_INTERFACE = { 'management': 'br-mgmt', 'public': 'br-ex', 'private': 'br-prv', 'storage': 'br-storage', } def main(): module = AnsibleModule( argument_spec=dict( operation=dict(choices=['up', 'down']), network_name=dict(default='management', choices=list(NETWORK_NAME_TO_INTERFACE.keys())), )) operation = module.params['operation'] network_name = module.params['network_name'] interface = NETWORK_NAME_TO_INTERFACE[network_name] cmd = 'ip link set %s %s' % (interface, operation) rc, stdout, stderr = module.run_command(cmd, check_rc=True) module.exit_json(cmd=cmd, rc=rc, stderr=stderr, stdout=stdout) if __name__ == '__main__': main() os-faults-0.1.17/os_faults/ansible/modules/iptables.py000066400000000000000000000035621317662032700227730ustar00rootroot00000000000000#!/usr/bin/python # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from ansible.module_utils.basic import * # noqa def main(): module = AnsibleModule( argument_spec=dict( service=dict(required=True, type='str'), action=dict(required=True, choices=['block', 'unblock']), port=dict(required=True, type='int'), protocol=dict(required=True, choices=['tcp', 'udp']), )) service = module.params['service'] action = module.params['action'] port = module.params['port'] protocol = module.params['protocol'] comment = '{}_temporary_DROP'.format(service) if action == 'block': cmd = ('bash -c "iptables -I INPUT 1 -p {protocol} --dport {port} ' '-j DROP -m comment --comment "{comment}""'.format( comment=comment, port=port, protocol=protocol)) else: cmd = ('bash -c "rule=`iptables -L INPUT -n --line-numbers | ' 'grep "{comment}" | cut -d \' \' -f1`; for arg in $rule;' ' do iptables -D INPUT -p {protocol} --dport {port} ' '-j DROP -m comment --comment "{comment}"; done"'.format( comment=comment, port=port, protocol=protocol)) rc, stdout, stderr = module.run_command(cmd, check_rc=True) module.exit_json(cmd=cmd, rc=rc, stderr=stderr, stdout=stdout) if __name__ == '__main__': main() os-faults-0.1.17/os_faults/ansible/modules/kill.py000066400000000000000000000022031317662032700221120ustar00rootroot00000000000000#!/usr/bin/python # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from ansible.module_utils.basic import *  # noqa


def main():
    module = AnsibleModule(
        argument_spec=dict(
            grep=dict(required=True, type='str'),
            sig=dict(required=True, type='int')
        ))

    grep = module.params['grep']
    sig = module.params['sig']

    cmd = ('bash -c "ps ax | grep -v grep | grep \'%s\' | awk {\'print $1\'} '
           '| xargs kill -%s"') % (grep, sig)

    rc, stdout, stderr = module.run_command(cmd, check_rc=True)
    module.exit_json(cmd=cmd, rc=rc, stderr=stderr, stdout=stdout)


if __name__ == '__main__':
    main()
os-faults-0.1.17/os_faults/ansible/modules/stress.py000066400000000000000000000024601317662032700225070ustar00rootroot00000000000000#!/usr/bin/python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
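The kill module above builds one shell pipeline: list processes, drop the grep process itself, match the pattern, extract PIDs with awk, and pipe them to `kill`. A dependency-free sketch of that string construction (`build_kill_cmd` is a hypothetical helper, not part of the module):

```python
def build_kill_cmd(grep, sig):
    # Mirrors the pipeline assembled in the kill module's main():
    # ps ax | grep -v grep | grep '<pattern>' | awk {'print $1'} | xargs kill -<sig>
    return ('bash -c "ps ax | grep -v grep | grep \'%s\' | awk {\'print $1\'} '
            '| xargs kill -%s"') % (grep, sig)


print(build_kill_cmd('nova-api', 9))
```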
from ansible.module_utils.basic import *  # noqa

STRESSORS_MAP = {
    'cpu': '--cpu 0',
    'disk': '--hdd 0',
    'memory': '--brk 0',
    'kernel': '--kill 0',
    'all': '--all 0',
}


def main():
    module = AnsibleModule(
        argument_spec=dict(
            target=dict(required=True, type='str'),
            duration=dict(required=True, type='int')
        ))

    target = module.params['target']
    stressor = STRESSORS_MAP.get(target) or STRESSORS_MAP['all']
    duration = module.params['duration']

    cmd = 'bash -c "stress-ng %s --timeout %ss"' % (stressor, duration)

    rc, stdout, stderr = module.run_command(cmd, check_rc=True)
    module.exit_json(cmd=cmd, rc=rc, stderr=stderr, stdout=stdout)


if __name__ == '__main__':
    main()
os-faults-0.1.17/os_faults/api/000077500000000000000000000000001317662032700162745ustar00rootroot00000000000000os-faults-0.1.17/os_faults/api/__init__.py000066400000000000000000000000001317662032700203730ustar00rootroot00000000000000os-faults-0.1.17/os_faults/api/base_driver.py000066400000000000000000000014141317662032700211330ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
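The stress module above maps a symbolic target to a stress-ng stressor flag and falls back to the `all` stressor for unknown targets. A minimal sketch of just that mapping logic (`build_stress_cmd` is our illustrative name):

```python
# Same mapping as the stress module; the helper is a hypothetical
# extraction of the command construction done inside main().
STRESSORS_MAP = {
    'cpu': '--cpu 0',
    'disk': '--hdd 0',
    'memory': '--brk 0',
    'kernel': '--kill 0',
    'all': '--all 0',
}


def build_stress_cmd(target, duration):
    # Unknown targets deliberately fall back to the 'all' stressor,
    # matching the module's STRESSORS_MAP.get(...) or ... idiom.
    stressor = STRESSORS_MAP.get(target) or STRESSORS_MAP['all']
    return 'bash -c "stress-ng %s --timeout %ss"' % (stressor, duration)


print(build_stress_cmd('cpu', 30))
```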
class BaseDriver(object):
    NAME = 'base'
    DESCRIPTION = 'base driver'

    @classmethod
    def get_driver_name(cls):
        return cls.NAME

    @classmethod
    def get_driver_description(cls):
        return cls.DESCRIPTION
os-faults-0.1.17/os_faults/api/cloud_management.py000066400000000000000000000077731317662032700221640ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import abc
import copy
import logging

import jsonschema
import six

from os_faults.api import base_driver
from os_faults.api import error
from os_faults.api import node_collection
from os_faults.api import power_management
from os_faults import registry

LOG = logging.getLogger(__name__)


@six.add_metaclass(abc.ABCMeta)
class CloudManagement(base_driver.BaseDriver):
    SERVICES = {}
    SUPPORTED_NETWORKS = []
    NODE_CLS = node_collection.NodeCollection

    def __init__(self):
        self.power_manager = power_management.PowerManager()
        self.node_discover = None
        self.services = copy.deepcopy(self.SERVICES)

    def add_power_management(self, driver):
        self.power_manager.add_driver(driver)

    def set_node_discover(self, node_discover):
        self.node_discover = node_discover

    def update_services(self, services):
        self.services.update(services)

    def validate_services(self):
        for service_name, service_conf in self.services.items():
            service_cls = registry.get_driver(service_conf["driver"])
            jsonschema.validate(service_conf['args'],
                                service_cls.CONFIG_SCHEMA)

    @abc.abstractmethod
    def verify(self):
        """Verify connection to the cloud."""

    def get_nodes(self, fqdns=None):
        """Get nodes in the cloud

        This function returns NodesCollection representing all nodes in the
        cloud or only those that have the specified FQDNs.

        :param fqdns: list of FQDNs or None to retrieve all nodes
        :return: NodesCollection
        """
        if self.node_discover is None:
            raise error.OSFError(
                'node_discover is not specified and "{}" '
                'driver does not support discovering'.format(self.NAME))
        hosts = self.node_discover.discover_hosts()
        nodes = self.NODE_CLS(cloud_management=self, hosts=hosts)

        if fqdns:
            LOG.debug('Trying to find nodes with FQDNs: %s', fqdns)
            nodes = nodes.filter(lambda node: node.fqdn in fqdns)
            LOG.debug('The following nodes were found: %s', nodes.hosts)

        return nodes

    def get_service(self, name):
        """Get service with specified name

        :param name: name of the service
        :return: Service
        """
        if name not in self.services:
            raise error.ServiceError(
                '{} driver does not support {!r} service'.format(
                    self.NAME.title(), name))

        config = self.services[name]
        klazz = registry.get_driver(config["driver"])
        return klazz(node_cls=self.NODE_CLS, cloud_management=self,
                     service_name=name, config=config["args"],
                     hosts=config.get('hosts'))

    @abc.abstractmethod
    def execute_on_cloud(self, hosts, task, raise_on_error=True):
        """Execute task on specified hosts within the cloud.

        :param hosts: list of host FQDNs
        :param task: Ansible task
        :param raise_on_error: throw exception in case of error
        :return: Ansible execution result (list of records)
        """

    @classmethod
    def list_supported_services(cls):
        """Lists all services supported by this driver

        :return: [String] list of service names
        """
        return cls.SERVICES.keys()

    @classmethod
    def list_supported_networks(cls):
        """Lists all networks supported by nodes returned by this driver

        :return: [String] list of network names
        """
        return cls.SUPPORTED_NETWORKS
os-faults-0.1.17/os_faults/api/error.py000066400000000000000000000023011317662032700177730ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
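Because every error class in this module derives from `OSFError`, callers can catch the whole family with a single except clause. A self-contained sketch of the same pattern (the mini-hierarchy and `get_service_or_report` helper are illustrative, not part of the library):

```python
# Trimmed-down mirror of the os-faults exception tree.
class OSFException(Exception):
    """Root of the exception tree."""


class OSFError(OSFException):
    """Base error class."""


class ServiceError(OSFError):
    """Raised when a driver does not support a service."""


def get_service_or_report(name, known=('keystone',)):
    # One except clause covers ServiceError and any other OSFError subclass.
    try:
        if name not in known:
            raise ServiceError('unsupported service: %s' % name)
        return 'ok'
    except OSFError as exc:
        return str(exc)
```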
class OSFException(Exception):
    """Base Exception class"""


class OSFError(OSFException):
    """Base Error class"""


class PowerManagementError(OSFError):
    """Base Error class for Power Management API"""


class CloudManagementError(OSFError):
    """Base Error class for Cloud Management API"""


class ServiceError(OSFError):
    """Base Error class for Service API"""


class NodeCollectionError(OSFError):
    """Base Error class for NodeCollection API"""


class OSFDriverNotFound(OSFError):
    """Driver Not Found by os-faults registry"""


class OSFDriverWithSuchNameExists(OSFError):
    """Driver with such name already exists in os-faults registry"""
os-faults-0.1.17/os_faults/api/human.py000066400000000000000000000116211317662032700177570ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
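The Human API in this module parses free-form fault commands with named regex groups. A trimmed-down, self-contained version of that grammar (the real module builds the action alternatives dynamically from the public API methods; this fixed two-action pattern is ours):

```python
import re

# Hypothetical, simplified version of the Human API service-command pattern.
PATTERN = re.compile(
    r'(?P<action>restart|kill)'
    r'\s+(?P<service>\S+)\s+service'
    r'(\s+on(\s+(?P<node>\S+))?\s+nodes?)?')

match = PATTERN.search('restart keystone service on one node')
print(match.group('action'), match.group('service'), match.group('node'))
```

Named groups let the dispatcher pull out `action`, `service`, and `node` by name instead of by position, which is how `execute()` below routes a parsed command to the right method.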
import inspect
import logging
import re

from os_faults.api import error
from os_faults.api import node_collection as node_collection_pkg
from os_faults.api import service as service_pkg
from os_faults.api import utils

LOG = logging.getLogger(__name__)

"""
Human API understands commands like these (examples):
 * restart <service> service [on (random|one|single|<fqdn> node[s])]
 * terminate <service> service [on (random|one|single|<fqdn> node[s])]
 * start <service> service [on (random|one|single|<fqdn> node[s])]
 * kill <service> service [on (random|one|single|<fqdn> node[s])]
 * plug <service> service [on (random|one|single|<fqdn> node[s])]
 * unplug <service> service [on (random|one|single|<fqdn> node[s])]
 * freeze <service> service [on (random|one|single|<fqdn> node[s])]
   [for <T> seconds]
 * unfreeze <service> service [on (random|one|single|<fqdn> node[s])]
 * reboot [random|one|single|<fqdn>] node[s] [with <service> service]
 * reset [random|one|single|<fqdn>] node[s] [with <service> service]
 * stress [cpu|memory|disk|kernel for <T> seconds] on
   [random|one|single|<fqdn>] node[s] [with <service> service]
 * disconnect <network> network on [random|one|single|<fqdn>] node[s]
   [with <service> service]
 * connect <network> network on [random|one|single|<fqdn>] node[s]
   [with <service> service]
"""


def list_actions(klazz):
    return set(m[0].replace('_', ' ') for m in inspect.getmembers(
        klazz, predicate=utils.is_public))

RANDOMNESS = {'one', 'random', 'some', 'single'}
ANYTHING = {'all'}
NODE_ALIASES_PATTERN = '|'.join(RANDOMNESS | ANYTHING)
SERVICE_ACTIONS = list_actions(service_pkg.Service)
SERVICE_ACTIONS_PATTERN = '|'.join(SERVICE_ACTIONS)
NODE_ACTIONS = list_actions(node_collection_pkg.NodeCollection)
NODE_ACTIONS_PATTERN = '|'.join(NODE_ACTIONS)

PATTERNS = [
    re.compile('(?P<action>%s)'
               '\s+(?P<service>\S+)\s+service'
               '(\s+on(\s+(?P<node>\S+))?\s+nodes?)?'
               '(\s+for\s+(?P<duration>\d+)\s+seconds)?'
               % SERVICE_ACTIONS_PATTERN),
    re.compile('(?P<action>%s)'
               '(\s+(?P<network>\w+)\s+network\s+on)?'
               '(\s+(?P<target>\w+)'
               '(\s+for\s+(?P<duration>\d+)\s+seconds)(\s+on)?)?'
               '(\s+(?P<node>%s|\S+))?'
               '\s+nodes?'
               '(\s+with\s+(?P<service>\S+)\s+service)?'
% (NODE_ACTIONS_PATTERN, NODE_ALIASES_PATTERN)), ] def execute(destructor, command): command = command.lower() rec = None for pattern in PATTERNS: rec = re.search(pattern, command) if rec: break if not rec: raise error.OSFException('Could not parse command: %s' % command) groups = rec.groupdict() action = groups.get('action').replace(' ', '_') service_name = groups.get('service') node_name = groups.get('node') network_name = groups.get('network') target = groups.get('target') duration = groups.get('duration') if service_name: service = destructor.get_service(name=service_name) if action in SERVICE_ACTIONS: kwargs = {} if node_name in RANDOMNESS: kwargs['nodes'] = service.get_nodes().pick() elif node_name and node_name not in ANYTHING: kwargs['nodes'] = destructor.get_nodes(fqdns=[node_name]) if duration: kwargs['sec'] = int(duration) fn = getattr(service, action) fn(**kwargs) else: # node actions nodes = service.get_nodes() if node_name in RANDOMNESS: nodes = nodes.pick() kwargs = {} if network_name: kwargs['network_name'] = network_name if target: kwargs['target'] = target kwargs['duration'] = int(duration) fn = getattr(nodes, action) fn(**kwargs) else: # nodes operation if node_name and node_name not in ANYTHING: nodes = destructor.get_nodes(fqdns=[node_name]) else: nodes = destructor.get_nodes() kwargs = {} if network_name: kwargs['network_name'] = network_name if target: kwargs['target'] = target kwargs['duration'] = int(duration) fn = getattr(nodes, action) fn(**kwargs) LOG.info('Command `%s` is executed successfully', command) os-faults-0.1.17/os_faults/api/node_collection.py000066400000000000000000000152711317662032700220140ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import logging import random import warnings from os_faults.api import error from os_faults.api.utils import public from os_faults import utils LOG = logging.getLogger(__name__) class Host(utils.ComparableMixin, utils.ReprMixin): ATTRS = ('ip', 'mac', 'fqdn', 'libvirt_name') def __init__(self, ip, mac=None, fqdn=None, libvirt_name=None, auth=None): self.ip = ip self.mac = mac self.fqdn = fqdn self.libvirt_name = libvirt_name self.auth = auth class NodeCollection(utils.ReprMixin): ATTRS = ('hosts', ) def __init__(self, cloud_management=None, hosts=None): self.cloud_management = cloud_management self._hosts = set(hosts) @property def hosts(self): return sorted(self._hosts) def __len__(self): return len(self._hosts) def _check_nodes_types(self, other): if type(self) is not type(other): raise TypeError( 'Unsupported operand types: {} and {}'.format( type(self), type(other))) if self.cloud_management is not other.cloud_management: raise error.NodeCollectionError( 'NodeCollections have different cloud_managements: ' '{} and {}'.format(self.cloud_management, other.cloud_management)) def __add__(self, other): return self.__or__(other) def __sub__(self, other): self._check_nodes_types(other) return self._make_instance(self._hosts - other._hosts) def __and__(self, other): self._check_nodes_types(other) return self._make_instance(self._hosts & other._hosts) def __or__(self, other): self._check_nodes_types(other) return self._make_instance(self._hosts | other._hosts) def __xor__(self, other): self._check_nodes_types(other) return self._make_instance(self._hosts ^ 
other._hosts) def __contains__(self, host): return host in self._hosts def __iter__(self): for host in self.hosts: yield host def _make_instance(self, hosts): return self.__class__(cloud_management=self.cloud_management, hosts=hosts) def get_ips(self): return [host.ip for host in self.hosts] def get_macs(self): return [host.mac for host in self.hosts] def get_fqdns(self): return [host.fqdn for host in self.hosts] def iterate_hosts(self): warnings.warn('iterate_hosts is deprecated, use __iter__ instead', DeprecationWarning, stacklevel=2) return self.__iter__() def pick(self, count=1): """Pick one Node out of collection :return: NodeCollection consisting just one node """ if count > len(self._hosts): msg = 'Cannot pick {} from {} node(s)'.format( count, len(self._hosts)) raise error.NodeCollectionError(msg) return self._make_instance(random.sample(self._hosts, count)) def filter(self, criteria_fn): hosts = list(filter(criteria_fn, self._hosts)) if hosts: return self._make_instance(hosts) else: raise error.NodeCollectionError( 'No nodes found according to criterion') def run_task(self, task, raise_on_error=True): """Run ansible task on node colection :param task: ansible task as dict :param raise_on_error: throw exception in case of error :return: AnsibleExecutionRecord with results of task """ LOG.info('Run task: %s on nodes: %s', task, self) return self.cloud_management.execute_on_cloud( self.hosts, task, raise_on_error=raise_on_error) @public def reboot(self): """Reboot all nodes gracefully """ LOG.info('Reboot nodes: %s', self) task = {'command': 'reboot now'} self.cloud_management.execute_on_cloud(self.hosts, task) @public def oom(self): """Fill all node's RAM """ raise NotImplementedError @public def poweroff(self): """Power off all nodes abruptly """ LOG.info('Power off nodes: %s', self) self.cloud_management.power_manager.poweroff(self.hosts) @public def poweron(self): """Power on all nodes abruptly """ LOG.info('Power on nodes: %s', self) 
self.cloud_management.power_manager.poweron(self.hosts) @public def reset(self): """Reset (cold restart) all nodes """ LOG.info('Reset nodes: %s', self) self.cloud_management.power_manager.reset(self.hosts) @public def shutdown(self): """Shutdown all nodes gracefully """ LOG.info('Shutdown nodes: %s', self) self.cloud_management.power_manager.shutdown(self.hosts) def snapshot(self, snapshot_name, suspend=True): """Create snapshot for all nodes """ LOG.info('Create snapshot "%s" for nodes: %s', snapshot_name, self) self.cloud_management.power_manager.snapshot( self.hosts, snapshot_name, suspend) def revert(self, snapshot_name, resume=True): """Revert snapshot for all nodes """ LOG.info('Revert snapshot "%s" for nodes: %s', snapshot_name, self) self.cloud_management.power_manager.revert( self.hosts, snapshot_name, resume) @public def disconnect(self, network_name): """Disconnect nodes from network :param network_name: name of network """ raise NotImplementedError @public def connect(self, network_name): """Connect nodes to network :param network_name: name of network """ raise NotImplementedError @public def stress(self, target, duration=None): """Stress node OS and hardware """ duration = duration or 10 # defaults to 10 seconds LOG.info('Stress %s for %ss on nodes %s', target, duration, self) task = {'stress': { 'target': target, 'duration': duration, }} self.cloud_management.execute_on_cloud(self.hosts, task) os-faults-0.1.17/os_faults/api/node_discover.py000066400000000000000000000015311317662032700214710ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. 
# See the License for the specific language governing permissions and
# limitations under the License.

import abc

import six

from os_faults.api import base_driver


@six.add_metaclass(abc.ABCMeta)
class NodeDiscover(base_driver.BaseDriver):
    """Node discover base driver."""

    @abc.abstractmethod
    def discover_hosts(self):
        """Discover hosts

        :returns: list of Host instances
        """
os-faults-0.1.17/os_faults/api/power_management.py000066400000000000000000000061761317662032700222060ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
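`PowerManager` in this module pairs each host with the first registered driver whose `supports()` returns True, and fails if no driver claims the host. A self-contained sketch of that dispatch loop, using the for/else-break idiom (the `StubDriver` class and `map_hosts_to_drivers` name are illustrative, not library API):

```python
class StubDriver(object):
    """Stand-in for a PowerDriver; only the supports() contract matters."""

    def __init__(self, prefix):
        self.prefix = prefix

    def supports(self, host):
        return host.startswith(self.prefix)


def map_hosts_to_drivers(drivers, hosts):
    # First driver whose supports() returns True wins, as in PowerManager.
    pairs = []
    for host in hosts:
        for driver in drivers:
            if driver.supports(host):
                pairs.append((driver, host))
                break
        else:
            # No break happened: no driver claimed this host.
            raise RuntimeError('No supported driver found for host %s' % host)
    return pairs
```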
import abc import six from os_faults.api import base_driver from os_faults.api import error from os_faults import utils @six.add_metaclass(abc.ABCMeta) class PowerDriver(base_driver.BaseDriver): @abc.abstractmethod def supports(host): """Returns True if host is supported by the power driver""" @abc.abstractmethod def poweroff(self, host): """Power off host abruptly""" @abc.abstractmethod def poweron(self, host): """Power on host""" @abc.abstractmethod def reset(self, host): """Reset host""" @abc.abstractmethod def shutdown(self, host): """Graceful shutdown host""" def snapshot(self, host, snapshot_name, suspend=True): raise NotImplementedError def revert(self, host, snapshot_name, resume=True): raise NotImplementedError class PowerManager(object): def __init__(self): self.power_drivers = [] def add_driver(self, driver): self.power_drivers.append(driver) def _map_hosts_to_driver(self, hosts): driver_host_pairs = [] for host in hosts: for power_driver in self.power_drivers: if power_driver.supports(host): driver_host_pairs.append((power_driver, host)) break else: raise error.PowerManagementError( "No supported driver found for host {}".format(host)) return driver_host_pairs def _run_command(self, cmd, hosts, **kwargs): driver_host_pairs = self._map_hosts_to_driver(hosts) tw = utils.ThreadsWrapper() for driver, host in driver_host_pairs: kwargs['host'] = host fn = getattr(driver, cmd) tw.start_thread(fn, **kwargs) tw.join_threads() if tw.errors: raise error.PowerManagementError( 'There are some errors when working the driver. 
' 'Please, check logs for more details.') def poweroff(self, hosts): self._run_command('poweroff', hosts) def poweron(self, hosts): self._run_command('poweron', hosts) def reset(self, hosts): self._run_command('reset', hosts) def shutdown(self, hosts): self._run_command('shutdown', hosts) def snapshot(self, hosts, snapshot_name, suspend=True): self._run_command('snapshot', hosts, snapshot_name=snapshot_name, suspend=suspend) def revert(self, hosts, snapshot_name, resume=True): self._run_command('revert', hosts, snapshot_name=snapshot_name, resume=resume) os-faults-0.1.17/os_faults/api/service.py000066400000000000000000000065641317662032700203210ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
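`Service.get_nodes()` in the module below restricts discovered hosts to a configured IP list before falling back to full discovery. The filtering step itself is a plain comprehension; a minimal sketch (the `Host` stub and `restrict_to_hosts` name are ours):

```python
class Host(object):
    """Stand-in carrying only the ip attribute the filter needs."""

    def __init__(self, ip):
        self.ip = ip


def restrict_to_hosts(discovered, allowed_ips):
    # None means "no restriction was configured for this service";
    # an empty list means "match no hosts".
    if allowed_ips is None:
        return list(discovered)
    return [host for host in discovered if host.ip in allowed_ips]
```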
import abc import six from os_faults.api import base_driver from os_faults.api.utils import public @six.add_metaclass(abc.ABCMeta) class Service(base_driver.BaseDriver): def __init__(self, service_name, config, node_cls, cloud_management, hosts=None): self.service_name = service_name self.config = config self.node_cls = node_cls self.cloud_management = cloud_management self.hosts = hosts @abc.abstractmethod def discover_nodes(self): """Discover nodes where this Service is running :returns: NodesCollection """ def get_nodes(self): """Get nodes where this Service is running :returns: NodesCollection """ if self.hosts is not None: nodes = self.cloud_management.get_nodes() hosts = [h for h in nodes.hosts if h.ip in self.hosts] return self.node_cls(cloud_management=self.cloud_management, hosts=hosts) return self.discover_nodes() @public def restart(self, nodes=None): """Restart Service on all nodes or on particular subset :param nodes: NodesCollection """ raise NotImplementedError @public def terminate(self, nodes=None): """Terminate Service gracefully on all nodes or on particular subset :param nodes: NodesCollection """ raise NotImplementedError @public def start(self, nodes=None): """Start Service on all nodes or on particular subset :param nodes: NodesCollection """ raise NotImplementedError @public def kill(self, nodes=None): """Terminate Service abruptly on all nodes or on particular subset :param nodes: NodesCollection """ raise NotImplementedError @public def unplug(self, nodes=None): """Unplug Service out of network on all nodes or on particular subset :param nodes: NodesCollection """ raise NotImplementedError @public def plug(self, nodes=None): """Plug Service into network on all nodes or on particular subset :param nodes: NodesCollection """ raise NotImplementedError @public def freeze(self, nodes=None, sec=None): """Pause service execution Send SIGSTOP to Service into network on all nodes or on particular subset. 
        If sec is defined, the Service will be stopped for a while.

        :param nodes: NodesCollection
        :param sec: int
        """
        raise NotImplementedError

    @public
    def unfreeze(self, nodes=None):
        """Resume service execution

        Send SIGCONT to Service into network on all nodes or on particular
        subset.

        :param nodes: NodesCollection
        """
        raise NotImplementedError
os-faults-0.1.17/os_faults/api/utils.py000066400000000000000000000013601317662032700200060ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import inspect


def public(func):
    func.__public__ = True
    return func


def is_public(obj):
    return ((inspect.isfunction(obj) or inspect.ismethod(obj)) and
            hasattr(obj, '__public__'))
os-faults-0.1.17/os_faults/cmd/000077500000000000000000000000001317662032700162665ustar00rootroot00000000000000os-faults-0.1.17/os_faults/cmd/__init__.py000066400000000000000000000000001317662032700203650ustar00rootroot00000000000000os-faults-0.1.17/os_faults/cmd/cmd.py000066400000000000000000000125261317662032700174110ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the # License for the specific language governing permissions and limitations # under the License. import argparse import inspect import logging import sys import textwrap import os_faults from os_faults.api import cloud_management from os_faults.api import node_collection as node_collection_pkg from os_faults.api import service as service_pkg from os_faults import registry def describe_actions(klazz): methods = (m for m in inspect.getmembers( klazz, predicate=lambda o: ((inspect.isfunction(o) or inspect.ismethod(o)) and hasattr(o, '__public__')))) return ['%s - %s' % (m[0], m[1].__doc__.split('\n')[0]) for m in sorted(methods)] SERVICE_ACTIONS = describe_actions(service_pkg.Service) NODE_ACTIONS = describe_actions(node_collection_pkg.NodeCollection) USAGE = 'os-inject-fault [-h] [-c CONFIG] [-d] [-v] [command]' HELP_TEMPLATE = """ Built-in drivers: %(drivers)s *Service-oriented* commands perform specified action against service on all, on one random node or on the node specified by FQDN: service [on (random|one|single| node[s])] where: action is one of: %(service_action)s service is one of supported by driver: %(services)s Examples: * "Restart Keystone service" - restarts Keystone service on all nodes. * "kill nova-api service on one node" - restarts Nova API on one randomly-picked node. *Node-oriented* commands perform specified action on node specified by FQDN or set of service's nodes: [random|one|single|] node[s] [with service] where: action is one of: %(node_action)s service is one of supported by driver: %(services)s Examples: * "Reboot one node with mysql" - reboots one random node with MySQL. * "Reset node-2.domain.tld node" - reset node node-2.domain.tld. 
*Network-oriented* commands are subset of node-oriented and perform network management operation on selected nodes: [connect|disconnect] network on [random|one|single|] node[s] [with service] where: network is one of supported by driver: %(networks)s service is one of supported by driver: %(services)s Examples: * "Disconnect management network on nodes with rabbitmq service" - shuts down management network interface on all nodes where rabbitmq runs. * "Connect storage network on node-1.domain.tld node" - enables storage network interface on node-1.domain.tld. For more details please refer to docs: http://os-faults.readthedocs.io/ """ def _list_items(group, items): s = '\n'.join( textwrap.wrap(', '.join(sorted(items)), subsequent_indent=' ' * (len(group) + 8), break_on_hyphens=False)) return ' %s: %s' % (group, s) def _make_epilog(): drivers = registry.get_drivers() services_strings = [] networks_strings = [] driver_descriptions = [] for driver_name, driver in drivers.items(): driver_descriptions.append( ' %s - %s' % (driver_name, driver.get_driver_description())) if issubclass(driver, cloud_management.CloudManagement): services_strings.append( _list_items(driver_name, driver.list_supported_services())) networks_strings.append( _list_items(driver_name, driver.list_supported_networks())) return HELP_TEMPLATE % dict( drivers='\n'.join(driver_descriptions), service_action='\n '.join(SERVICE_ACTIONS), services='\n'.join(services_strings), node_action='\n '.join(sorted(NODE_ACTIONS)), networks='\n'.join(networks_strings), ) def main(): parser = argparse.ArgumentParser( prog='os-inject-fault', usage=USAGE, formatter_class=argparse.RawDescriptionHelpFormatter, epilog=_make_epilog()) parser.add_argument('-c', '--config', dest='config', help='path to os-faults cloud connection config') parser.add_argument('-d', '--debug', dest='debug', action='store_true') parser.add_argument('-v', '--verify', action='store_true', help='verify connection to the cloud') 
parser.add_argument('command', nargs='*', help='fault injection command, e.g. "restart keystone ' 'service"') args = parser.parse_args() debug = args.debug logging.basicConfig(format='%(asctime)s %(levelname)s %(message)s', level=logging.DEBUG if debug else logging.INFO) config = args.config command = args.command if not command and not args.verify: parser.print_help() sys.exit(0) destructor = os_faults.connect(config_filename=config) if args.verify: destructor.verify() if command: command = ' '.join(command) os_faults.human_api(destructor, command) if __name__ == '__main__': main() os-faults-0.1.17/os_faults/cmd/main.py000066400000000000000000000055561317662032700175770ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import logging

import click
import yaml

import os_faults
from os_faults import registry

READABLE_FILE = click.Path(dir_okay=False, readable=True, exists=True,
                           resolve_path=True)
WRITABLE_FILE = click.Path(dir_okay=False, writable=True, resolve_path=True)

config_option = click.option('-c', '--config', type=READABLE_FILE,
                             help='path to os-faults cloud connection config')


def print_version(ctx, param, value):
    if not value or ctx.resilient_parsing:
        return
    click.echo('Version: %s' % os_faults.get_release())
    ctx.exit()


@click.group()
@click.option('--debug', '-d', is_flag=True, help='Enable debug logs')
@click.option('--version', is_flag=True, callback=print_version,
              expose_value=False, is_eager=True, help='Show version and exit.')
def main(debug):
    logging.basicConfig(format='%(asctime)s %(levelname)s %(message)s',
                        level=logging.DEBUG if debug else logging.INFO)


@main.command()
@config_option
def verify(config):
    """Verify connection to the cloud"""
    config = config or os_faults.get_default_config_file()
    destructor = os_faults.connect(config_filename=config)
    destructor.verify()


@main.command()
@click.argument('output', type=WRITABLE_FILE)
@config_option
def discover(config, output):
    """Discover services/nodes and save them to output config file"""
    config = config or os_faults.get_default_config_file()
    with open(config) as f:
        cloud_config = yaml.safe_load(f.read())
    discovered_config = os_faults.discover(cloud_config)
    with open(output, 'w') as f:
        f.write(yaml.safe_dump(discovered_config, default_flow_style=False))
    click.echo('Saved {}'.format(output))


@main.command()
@config_option
def nodes(config):
    """List cloud nodes"""
    config = config or os_faults.get_default_config_file()
    destructor = os_faults.connect(config_filename=config)
    hosts = [{'ip': host.ip, 'mac': host.mac, 'fqdn': host.fqdn}
             for host in destructor.get_nodes().hosts]
    click.echo(yaml.safe_dump(hosts, default_flow_style=False), nl=False)


@main.command()
def drivers():
    """List os-faults drivers"""
    drivers = sorted(registry.get_drivers().keys())
    click.echo(yaml.safe_dump(drivers, default_flow_style=False), nl=False)


if __name__ == '__main__':
    main()

os-faults-0.1.17/os_faults/drivers/__init__.py
os-faults-0.1.17/os_faults/drivers/cloud/__init__.py
os-faults-0.1.17/os_faults/drivers/cloud/devstack.py

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging

from os_faults.ansible import executor
from os_faults.api import cloud_management
from os_faults.api import node_collection
from os_faults.api import node_discover
from os_faults.drivers.services import process

LOG = logging.getLogger(__name__)


class DevStackNode(node_collection.NodeCollection):

    def connect(self, network_name):
        raise NotImplementedError

    def disconnect(self, network_name):
        raise NotImplementedError


class ServiceInScreen(process.ServiceAsProcess):
    """Service in Screen

    This driver controls a service that is started in a window of the
    `screen` tool.

    **Example configuration:**

    .. code-block:: yaml

        services:
          app:
            driver: screen
            args:
              window_name: app
              grep: my_app
              port: ['tcp', 4242]

    parameters:

    - **window_name** - name of a service
    - **grep** - regexp for grep to find process PID
    - **port** - tuple with two values - protocol, port number (optional)
    """

    NAME = 'screen'
    DESCRIPTION = 'Service in screen'
    CONFIG_SCHEMA = {
        'type': 'object',
        'properties': {
            'window_name': {'type': 'string'},
            'grep': {'type': 'string'},
            'port': process.PORT_SCHEMA,
        },
        'required': ['grep', 'window_name'],
        'additionalProperties': False,
    }

    def __init__(self, *args, **kwargs):
        super(ServiceInScreen, self).__init__(*args, **kwargs)
        self.window_name = self.config['window_name']

        # sends ctrl+c, arrow up key, enter key
        self.restart_cmd = (
            "screen -S stack -p {window_name} -X "
            "stuff $'\\003'$'\\033[A'$(printf \\\\r)").format(
                window_name=self.window_name)
        # sends ctrl+c
        self.terminate_cmd = (
            "screen -S stack -p {window_name} -X "
            "stuff $'\\003'").format(
                window_name=self.window_name)
        # sends arrow up key, enter key
        self.start_cmd = (
            "screen -S stack -p {window_name} -X "
            "stuff $'\\033[A'$(printf \\\\r)").format(
                window_name=self.window_name)


class DevStackManagement(cloud_management.CloudManagement,
                         node_discover.NodeDiscover):
    """Devstack driver.

    This driver requires devstack installed in screen mode (USE_SCREEN=True).
    Supports discovering of node MAC addresses.

    **Example configuration:**

    .. code-block:: yaml

        cloud_management:
          driver: devstack
          args:
            address: 192.168.1.10
            username: ubuntu
            password: ubuntu_pass
            private_key_file: ~/.ssh/id_rsa_devstack
            slaves:
            - 192.168.1.11
            - 192.168.1.12
            iface: eth1

    parameters:

    - **address** - ip address of any devstack node
    - **username** - username for all nodes
    - **password** - password for all nodes (optional)
    - **private_key_file** - path to key file (optional)
    - **slaves** - list of ips for additional nodes (optional)
    - **iface** - network interface name to retrieve mac address (optional)
    - **serial** - how many hosts Ansible should manage at a single time.
      (optional) default: 10
    """

    NAME = 'devstack'
    DESCRIPTION = 'DevStack management driver'
    NODE_CLS = DevStackNode
    SERVICES = {
        'keystone': {
            'driver': 'process',
            'args': {
                'grep': 'keystone-',
                'restart_cmd': 'sudo service apache2 restart',
                'terminate_cmd': 'sudo service apache2 stop',
                'start_cmd': 'sudo service apache2 start',
            }
        },
        'mysql': {
            'driver': 'process',
            'args': {
                'grep': 'mysqld',
                'restart_cmd': 'sudo service mysql restart',
                'terminate_cmd': 'sudo service mysql stop',
                'start_cmd': 'sudo service mysql start',
                'port': ['tcp', 3307],
            }
        },
        'rabbitmq': {
            'driver': 'process',
            'args': {
                'grep': 'rabbitmq-server',
                'restart_cmd': 'sudo service rabbitmq-server restart',
                'terminate_cmd': 'sudo service rabbitmq-server stop',
                'start_cmd': 'sudo service rabbitmq-server start',
            }
        },
        'nova-api': {
            'driver': 'screen',
            'args': {'grep': 'nova-api', 'window_name': 'n-api'},
        },
        'glance-api': {
            'driver': 'screen',
            'args': {'grep': 'glance-api', 'window_name': 'g-api'},
        },
        'nova-compute': {
            'driver': 'screen',
            'args': {'grep': 'nova-compute', 'window_name': 'n-cpu'},
        },
        'nova-scheduler': {
            'driver': 'screen',
            'args': {'grep': 'nova-scheduler', 'window_name': 'n-sch'},
        },
        'ironic-api': {
            'driver': 'screen',
            'args': {'grep': 'ironic-api', 'window_name': 'ir-api'},
        },
        'ironic-conductor': {
            'driver': 'screen',
            'args': {
                'grep': 'ironic-conductor',
                'window_name':
                    'ir-cond',
            }
        },
    }

    SUPPORTED_NETWORKS = ['all-in-one']
    CONFIG_SCHEMA = {
        'type': 'object',
        '$schema': 'http://json-schema.org/draft-04/schema#',
        'properties': {
            'address': {'type': 'string'},
            'username': {'type': 'string'},
            'password': {'type': 'string'},
            'private_key_file': {'type': 'string'},
            'slaves': {
                'type': 'array',
                'items': {'type': 'string'},
            },
            'iface': {'type': 'string'},
            'serial': {'type': 'integer', 'minimum': 1},
        },
        'required': ['address', 'username'],
        'additionalProperties': False,
    }

    def __init__(self, cloud_management_params):
        super(DevStackManagement, self).__init__()
        self.node_discover = self  # supports discovering

        self.address = cloud_management_params['address']
        self.username = cloud_management_params['username']
        self.private_key_file = cloud_management_params.get('private_key_file')
        self.slaves = cloud_management_params.get('slaves', [])
        self.iface = cloud_management_params.get('iface', 'eth0')
        self.serial = cloud_management_params.get('serial')

        self.cloud_executor = executor.AnsibleRunner(
            remote_user=self.username,
            private_key_file=self.private_key_file,
            password=cloud_management_params.get('password'),
            become=False,
            serial=self.serial)

        self.hosts = [node_collection.Host(ip=self.address)]
        if self.slaves:
            self.hosts.extend([node_collection.Host(ip=h)
                               for h in self.slaves])
        self.nodes = None

    def verify(self):
        """Verify connection to the cloud."""
        nodes = self.get_nodes()
        if nodes:
            LOG.debug('DevStack nodes: %s', nodes)
            LOG.info('Connected to cloud successfully')

    def execute_on_cloud(self, hosts, task, raise_on_error=True):
        """Execute task on specified hosts within the cloud.

        :param hosts: List of host FQDNs
        :param task: Ansible task
        :param raise_on_error: throw exception in case of error
        :return: Ansible execution result (list of records)
        """
        if raise_on_error:
            return self.cloud_executor.execute(hosts, task)
        else:
            return self.cloud_executor.execute(hosts, task, [])

    def discover_hosts(self):
        if self.nodes is None:
            get_mac_cmd = 'cat /sys/class/net/{}/address'.format(self.iface)
            task = {'command': get_mac_cmd}
            results = self.execute_on_cloud(self.hosts, task)

            # TODO(astudenov): support fqdn
            self.nodes = [node_collection.Host(ip=r.host,
                                               mac=r.payload['stdout'],
                                               fqdn='')
                          for r in results]
        return self.nodes

os-faults-0.1.17/os_faults/drivers/cloud/devstack_systemd.py

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging

from os_faults.drivers.cloud import devstack

LOG = logging.getLogger(__name__)


class DevStackSystemdManagement(devstack.DevStackManagement):
    """Driver for modern DevStack based on Systemd.

    This driver requires DevStack installed with Systemd (USE_SCREEN=False).
    Supports discovering of node MAC addresses.

    **Example configuration:**

    .. code-block:: yaml

        cloud_management:
          driver: devstack_systemd
          args:
            address: 192.168.1.10
            username: ubuntu
            password: ubuntu_pass
            private_key_file: ~/.ssh/id_rsa_devstack_systemd
            slaves:
            - 192.168.1.11
            - 192.168.1.12
            iface: eth1

    parameters:

    - **address** - ip address of any devstack node
    - **username** - username for all nodes
    - **password** - password for all nodes (optional)
    - **private_key_file** - path to key file (optional)
    - **slaves** - list of ips for additional nodes (optional)
    - **iface** - network interface name to retrieve mac address (optional)
    - **serial** - how many hosts Ansible should manage at a single time.
      (optional) default: 10
    """

    NAME = 'devstack_systemd'
    DESCRIPTION = 'DevStack management driver using Systemd'
    # NODE_CLS = DevStackNode
    SERVICES = {
        'keystone': {
            'driver': 'systemd_service',
            'args': {'grep': 'keystone-uwsgi',
                     'systemd_service': 'devstack@keystone'},
        },
        'mysql': {
            'driver': 'systemd_service',
            'args': {'grep': 'mysqld',
                     'systemd_service': 'mariadb',
                     'port': ['tcp', 3307]},
        },
        'rabbitmq': {
            'driver': 'systemd_service',
            'args': {'grep': 'rabbitmq_server',
                     'systemd_service': 'rabbitmq-server'},
        },
        'nova-api': {
            'driver': 'systemd_service',
            'args': {'grep': 'nova-api',
                     'systemd_service': 'devstack@n-api'},
        },
        'glance-api': {
            'driver': 'systemd_service',
            'args': {'grep': 'glance-api',
                     'systemd_service': 'devstack@g-api'},
        },
        'nova-compute': {
            'driver': 'systemd_service',
            'args': {'grep': 'nova-compute',
                     'systemd_service': 'devstack@n-cpu'},
        },
        'nova-scheduler': {
            'driver': 'systemd_service',
            'args': {'grep': 'nova-scheduler',
                     'systemd_service': 'devstack@n-sch'},
        },
    }

    SUPPORTED_NETWORKS = ['all-in-one']
    CONFIG_SCHEMA = {
        'type': 'object',
        '$schema': 'http://json-schema.org/draft-04/schema#',
        'properties': {
            'address': {'type': 'string'},
            'username': {'type': 'string'},
            'password': {'type': 'string'},
            'private_key_file': {'type': 'string'},
            'slaves': {
                'type': 'array',
                'items': {'type': 'string'},
            },
            'iface': {'type':
                      'string'},
            'serial': {'type': 'integer', 'minimum': 1},
        },
        'required': ['address', 'username'],
        'additionalProperties': False,
    }

os-faults-0.1.17/os_faults/drivers/cloud/fuel.py

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json
import logging

from os_faults.ansible import executor
from os_faults.api import cloud_management
from os_faults.api import node_collection
from os_faults.api import node_discover

LOG = logging.getLogger(__name__)


class FuelNodeCollection(node_collection.NodeCollection):

    def connect(self, network_name):
        LOG.info("Connect network '%s' on nodes: %s", network_name, self)
        task = {'fuel_network_mgmt': {
            'network_name': network_name,
            'operation': 'up',
        }}
        self.cloud_management.execute_on_cloud(self.hosts, task)

    def disconnect(self, network_name):
        LOG.info("Disconnect network '%s' on nodes: %s", network_name, self)
        task = {'fuel_network_mgmt': {
            'network_name': network_name,
            'operation': 'down',
        }}
        self.cloud_management.execute_on_cloud(self.hosts, task)


class FuelManagement(cloud_management.CloudManagement,
                     node_discover.NodeDiscover):
    """Fuel driver.

    Manages clouds deployed by Fuel. Supports discovering of slave nodes.

    **Example configuration:**

    .. code-block:: yaml

        cloud_management:
          driver: fuel
          args:
            address: 192.168.1.10
            username: root
            private_key_file: ~/.ssh/id_rsa_fuel
            slave_direct_ssh: True

    parameters:

    - **address** - ip address of fuel master node
    - **username** - username for fuel master and slave nodes
    - **private_key_file** - path to key file (optional)
    - **slave_direct_ssh** - if *False* then fuel master is used as ssh
      proxy (optional)
    - **serial** - how many hosts Ansible should manage at a single time.
      (optional) default: 10
    """

    NAME = 'fuel'
    DESCRIPTION = 'Fuel 9.x cloud management driver'
    NODE_CLS = FuelNodeCollection
    SERVICES = {
        'keystone': {
            'driver': 'linux_service',
            'args': {'grep': 'keystone', 'linux_service': 'apache2'},
        },
        'horizon': {
            'driver': 'linux_service',
            'args': {'grep': 'apache2', 'linux_service': 'apache2'},
        },
        'memcached': {
            'driver': 'linux_service',
            'args': {'grep': 'memcached', 'linux_service': 'memcached'},
        },
        'mysql': {
            'driver': 'pcs_service',
            'args': {'grep': 'mysqld', 'pcs_service': 'p_mysqld',
                     'port': ['tcp', 3307]},
        },
        'rabbitmq': {
            'driver': 'pcs_service',
            'args': {'grep': 'rabbit tcp_listeners',
                     'pcs_service': 'p_rabbitmq-server'},
        },
        'glance-api': {
            'driver': 'linux_service',
            'args': {'grep': 'glance-api', 'linux_service': 'glance-api'},
        },
        'glance-glare': {
            'driver': 'linux_service',
            'args': {'grep': 'glance-glare', 'linux_service': 'glance-glare'},
        },
        'glance-registry': {
            'driver': 'linux_service',
            'args': {'grep': 'glance-registry',
                     'linux_service': 'glance-registry'},
        },
        'nova-api': {
            'driver': 'linux_service',
            'args': {'grep': 'nova-api', 'linux_service': 'nova-api'},
        },
        'nova-compute': {
            'driver': 'linux_service',
            'args': {'grep': 'nova-compute', 'linux_service': 'nova-compute'},
        },
        'nova-scheduler': {
            'driver': 'linux_service',
            'args': {'grep': 'nova-scheduler',
                     'linux_service': 'nova-scheduler'},
        },
        'nova-cert': {
            'driver': 'linux_service',
            'args': {'grep': 'nova-cert', 'linux_service': 'nova-cert'},
        },
        'nova-conductor': {
            'driver': 'linux_service',
            'args': {'grep': 'nova-conductor',
                     'linux_service': 'nova-conductor'},
        },
        'nova-consoleauth': {
            'driver': 'linux_service',
            'args': {'grep': 'nova-consoleauth',
                     'linux_service': 'nova-consoleauth'},
        },
        'nova-novncproxy': {
            'driver': 'linux_service',
            'args': {'grep': 'nova-novncproxy',
                     'linux_service': 'nova-novncproxy'},
        },
        'neutron-server': {
            'driver': 'linux_service',
            'args': {'grep': 'neutron-server',
                     'linux_service': 'neutron-server'},
        },
        'neutron-dhcp-agent': {
            'driver': 'pcs_service',
            'args': {'grep': 'neutron-dhcp-agent',
                     'pcs_service': 'neutron-dhcp-agent'},
        },
        'neutron-metadata-agent': {
            'driver': 'pcs_or_linux_service',
            'args': {'grep': 'neutron-metadata-agent',
                     'pcs_service': 'neutron-metadata-agent',
                     'linux_service': 'neutron-metadata-agent'},
        },
        'neutron-openvswitch-agent': {
            'driver': 'pcs_or_linux_service',
            'args': {'grep': 'neutron-openvswitch-agent',
                     'pcs_service': 'neutron-openvswitch-agent',
                     'linux_service': 'neutron-openvswitch-agent'},
        },
        'neutron-l3-agent': {
            'driver': 'pcs_or_linux_service',
            'args': {'grep': 'neutron-l3-agent',
                     'pcs_service': 'neutron-l3-agent',
                     'linux_service': 'neutron-l3-agent'},
        },
        'heat-api': {
            'driver': 'linux_service',
            'args': {'grep': 'heat-api', 'linux_service': 'heat-api'},
        },
        'heat-engine': {
            'driver': 'pcs_service',
            'args': {'grep': 'heat-engine', 'pcs_service': 'p_heat-engine'},
        },
        'cinder-api': {
            'driver': 'linux_service',
            'args': {'grep': 'cinder-api', 'linux_service': 'cinder-api'},
        },
        'cinder-scheduler': {
            'driver': 'linux_service',
            'args': {'grep': 'cinder-scheduler',
                     'linux_service': 'cinder-scheduler'},
        },
        'cinder-volume': {
            'driver': 'linux_service',
            'args': {'grep': 'cinder-volume',
                     'linux_service': 'cinder-volume'},
        },
        'cinder-backup': {
            'driver': 'linux_service',
            'args': {'grep': 'cinder-backup',
                     'linux_service': 'cinder-backup'},
        },
        'ironic-api': {
            'driver': 'linux_service',
            'args': {'grep': 'ironic-api', 'linux_service': 'ironic-api'},
        },
        'ironic-conductor': {
            'driver': 'linux_service',
            'args': {'grep': 'ironic-conductor',
                     'linux_service': 'ironic-conductor'},
        },
        'swift-account': {
            'driver': 'linux_service',
            'args': {'grep': 'swift-account',
                     'linux_service': 'swift-account'},
        },
        'swift-account-auditor': {
            'driver': 'linux_service',
            'args': {'grep': 'swift-account-auditor',
                     'linux_service': 'swift-account-auditor'},
        },
        'swift-account-reaper': {
            'driver': 'linux_service',
            'args': {'grep': 'swift-account-reaper',
                     'linux_service': 'swift-account-reaper'},
        },
        'swift-account-replicator': {
            'driver': 'linux_service',
            'args': {'grep': 'swift-account-replicator',
                     'linux_service': 'swift-account-replicator'},
        },
        'swift-container': {
            'driver': 'linux_service',
            'args': {'grep': 'swift-container',
                     'linux_service': 'swift-container'},
        },
        'swift-container-auditor': {
            'driver': 'linux_service',
            'args': {'grep': 'swift-container-auditor',
                     'linux_service': 'swift-container-auditor'},
        },
        'swift-container-replicator': {
            'driver': 'linux_service',
            'args': {'grep': 'swift-container-replicator',
                     'linux_service': 'swift-container-replicator'},
        },
        'swift-container-sync': {
            'driver': 'linux_service',
            'args': {'grep': 'swift-container-sync',
                     'linux_service': 'swift-container-sync'},
        },
        'swift-container-updater': {
            'driver': 'linux_service',
            'args': {'grep': 'swift-container-updater',
                     'linux_service': 'swift-container-updater'},
        },
        'swift-object': {
            'driver': 'linux_service',
            'args': {'grep': 'swift-object',
                     'linux_service': 'swift-object'},
        },
        'swift-object-auditor': {
            'driver': 'linux_service',
            'args': {'grep': 'swift-object-auditor',
                     'linux_service': 'swift-object-auditor'},
        },
        'swift-object-replicator': {
            'driver': 'linux_service',
            'args': {'grep': 'swift-object-replicator',
                     'linux_service': 'swift-object-replicator'},
        },
        'swift-object-updater': {
            'driver': 'linux_service',
            'args': {'grep': 'swift-object-updater',
                     'linux_service': 'swift-object-updater'},
        },
        'swift-proxy': {
            'driver':
                'linux_service',
            'args': {'grep': 'swift-proxy', 'linux_service': 'swift-proxy'},
        },
    }

    SUPPORTED_NETWORKS = ['management', 'private', 'public', 'storage']
    CONFIG_SCHEMA = {
        'type': 'object',
        '$schema': 'http://json-schema.org/draft-04/schema#',
        'properties': {
            'address': {'type': 'string'},
            'username': {'type': 'string'},
            'private_key_file': {'type': 'string'},
            'slave_direct_ssh': {'type': 'boolean'},
            'serial': {'type': 'integer', 'minimum': 1},
        },
        'required': ['address', 'username'],
        'additionalProperties': False,
    }

    def __init__(self, cloud_management_params):
        super(FuelManagement, self).__init__()
        self.node_discover = self  # supports discovering

        self.master_node_address = cloud_management_params['address']
        self._master_host = node_collection.Host(ip=self.master_node_address)
        self.username = cloud_management_params['username']
        self.private_key_file = cloud_management_params.get('private_key_file')
        self.slave_direct_ssh = cloud_management_params.get(
            'slave_direct_ssh', False)
        self.serial = cloud_management_params.get('serial')

        self.master_node_executor = executor.AnsibleRunner(
            remote_user=self.username,
            private_key_file=self.private_key_file)

        jump_host = self.master_node_address
        if self.slave_direct_ssh:
            jump_host = None

        self.cloud_executor = executor.AnsibleRunner(
            remote_user=self.username,
            private_key_file=self.private_key_file,
            jump_host=jump_host,
            serial=self.serial)

        self.cached_cloud_hosts = list()

    def verify(self):
        """Verify connection to the cloud."""
        nodes = self.get_nodes()
        LOG.debug('Cloud nodes: %s', nodes)
        task = {'command': 'hostname'}
        task_result = self.execute_on_cloud(nodes.hosts, task)
        LOG.debug('Hostnames of cloud nodes: %s',
                  [r.payload['stdout'] for r in task_result])
        LOG.info('Connected to cloud successfully!')

    def discover_hosts(self):
        if not self.cached_cloud_hosts:
            task = {'command': 'fuel node --json'}
            result = self._execute_on_master_node(task)
            for r in json.loads(result[0].payload['stdout']):
                host = node_collection.Host(ip=r['ip'],
                                            mac=r['mac'],
                                            fqdn=r['fqdn'])
                self.cached_cloud_hosts.append(host)
        return self.cached_cloud_hosts

    def _execute_on_master_node(self, task):
        """Execute task on Fuel master node.

        :param task: Ansible task
        :return: Ansible execution result (list of records)
        """
        return self.master_node_executor.execute([self._master_host], task)

    def execute_on_cloud(self, hosts, task, raise_on_error=True):
        """Execute task on specified hosts within the cloud.

        :param hosts: List of host FQDNs
        :param task: Ansible task
        :param raise_on_error: throw exception in case of error
        :return: Ansible execution result (list of records)
        """
        if raise_on_error:
            return self.cloud_executor.execute(hosts, task)
        else:
            return self.cloud_executor.execute(hosts, task, [])

os-faults-0.1.17/os_faults/drivers/cloud/tcpcloud.py

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging

import yaml

from os_faults.ansible import executor
from os_faults.api import cloud_management
from os_faults.api import node_collection
from os_faults.api import node_discover
from os_faults import error

LOG = logging.getLogger(__name__)


class TCPCloudNodeCollection(node_collection.NodeCollection):

    def connect(self, network_name):
        raise NotImplementedError

    def disconnect(self, network_name):
        raise NotImplementedError


class TCPCloudManagement(cloud_management.CloudManagement,
                         node_discover.NodeDiscover):
    """TCPCloud driver.

    Supports discovering of slave nodes.

    **Example configuration:**

    .. code-block:: yaml

        cloud_management:
          driver: tcpcloud
          args:
            address: 192.168.1.10
            username: root
            password: root_pass
            private_key_file: ~/.ssh/id_rsa_tcpcloud
            slave_username: ubuntu
            slave_password: ubuntu_pass
            master_sudo: False
            slave_sudo: True
            slave_name_regexp: ^(?!cfg|mon)
            slave_direct_ssh: True
            get_ips_cmd: pillar.get _param:single_address

    parameters:

    - **address** - ip address of salt config node
    - **username** - username for salt config node
    - **password** - password for salt config node (optional)
    - **private_key_file** - path to key file (optional)
    - **slave_username** - username for salt minions (optional) *username*
      will be used if *slave_username* not specified
    - **slave_password** - password for salt minions (optional) *password*
      will be used if *slave_password* not specified
    - **master_sudo** - Use sudo on salt config node (optional)
    - **slave_sudo** - Use sudo on salt minion nodes (optional)
    - **slave_name_regexp** - regexp for minion FQDNs (optional)
    - **slave_direct_ssh** - if *False* then salt master is used as ssh
      proxy (optional)
    - **get_ips_cmd** - salt command to get IPs of minions (optional)
    - **serial** - how many hosts Ansible should manage at a single time.
      (optional) default: 10
    """

    NAME = 'tcpcloud'
    DESCRIPTION = 'TCPCloud management driver'
    NODE_CLS = TCPCloudNodeCollection
    SERVICES = {
        'keystone': {
            'driver': 'salt_service',
            'args': {'grep': 'keystone-all', 'salt_service': 'keystone'},
        },
        'horizon': {
            'driver': 'salt_service',
            'args': {'grep': 'apache2', 'salt_service': 'apache2'},
        },
        'memcached': {
            'driver': 'salt_service',
            'args': {'grep': 'memcached', 'salt_service': 'memcached'},
        },
        'mysql': {
            'driver': 'salt_service',
            'args': {'grep': 'mysqld', 'salt_service': 'mysql',
                     'port': ['tcp', 3307]},
        },
        'rabbitmq': {
            'driver': 'salt_service',
            'args': {'grep': 'beam\.smp .*rabbitmq_server',
                     'salt_service': 'rabbitmq-server'},
        },
        'glance-api': {
            'driver': 'salt_service',
            'args': {'grep': 'glance-api', 'salt_service': 'glance-api'},
        },
        'glance-glare': {
            'driver': 'salt_service',
            'args': {'grep': 'glance-glare', 'salt_service': 'glance-glare'},
        },
        'glance-registry': {
            'driver': 'salt_service',
            'args': {'grep': 'glance-registry',
                     'salt_service': 'glance-registry'},
        },
        'nova-api': {
            'driver': 'salt_service',
            'args': {'grep': 'nova-api', 'salt_service': 'nova-api'},
        },
        'nova-compute': {
            'driver': 'salt_service',
            'args': {'grep': 'nova-compute', 'salt_service': 'nova-compute'},
        },
        'nova-scheduler': {
            'driver': 'salt_service',
            'args': {'grep': 'nova-scheduler',
                     'salt_service': 'nova-scheduler'},
        },
        'nova-cert': {
            'driver': 'salt_service',
            'args': {'grep': 'nova-cert', 'salt_service': 'nova-cert'},
        },
        'nova-conductor': {
            'driver': 'salt_service',
            'args': {'grep': 'nova-conductor',
                     'salt_service': 'nova-conductor'},
        },
        'nova-consoleauth': {
            'driver': 'salt_service',
            'args': {'grep': 'nova-consoleauth',
                     'salt_service': 'nova-consoleauth'},
        },
        'nova-novncproxy': {
            'driver': 'salt_service',
            'args': {'grep': 'nova-novncproxy',
                     'salt_service': 'nova-novncproxy'},
        },
        'neutron-server': {
            'driver': 'salt_service',
            'args': {'grep': 'neutron-server',
                     'salt_service': 'neutron-server'},
        },
        'neutron-dhcp-agent': {
            'driver': 'salt_service',
            'args': {'grep': 'neutron-dhcp-agent',
                     'salt_service': 'neutron-dhcp-agent'},
        },
        'neutron-metadata-agent': {
            'driver': 'salt_service',
            'args': {'grep': 'neutron-metadata-agent',
                     'salt_service': 'neutron-metadata-agent'},
        },
        'neutron-openvswitch-agent': {
            'driver': 'salt_service',
            'args': {'grep': 'neutron-openvswitch-agent',
                     'salt_service': 'neutron-openvswitch-agent'},
        },
        'neutron-l3-agent': {
            'driver': 'salt_service',
            'args': {'grep': 'neutron-l3-agent',
                     'salt_service': 'neutron-l3-agent'},
        },
        'heat-api': {
            'driver': 'salt_service',
            'args': {
                # space at the end filters heat-api-* services
                'grep': 'heat-api ',
                'salt_service': 'heat-api',
            }
        },
        'heat-engine': {
            'driver': 'salt_service',
            'args': {'grep': 'heat-engine', 'salt_service': 'heat-engine'},
        },
        'cinder-api': {
            'driver': 'salt_service',
            'args': {'grep': 'cinder-api', 'salt_service': 'cinder-api'},
        },
        'cinder-scheduler': {
            'driver': 'salt_service',
            'args': {'grep': 'cinder-scheduler',
                     'salt_service': 'cinder-scheduler'},
        },
        'cinder-volume': {
            'driver': 'salt_service',
            'args': {'grep': 'cinder-volume', 'salt_service': 'cinder-volume'},
        },
        'cinder-backup': {
            'driver': 'salt_service',
            'args': {'grep': 'cinder-backup', 'salt_service': 'cinder-backup'},
        },
        'elasticsearch': {
            'driver': 'salt_service',
            'args': {'grep': 'java .*elasticsearch',
                     'salt_service': 'elasticsearch'},
        },
        'grafana-server': {
            'driver': 'salt_service',
            'args': {'grep': 'grafana-server',
                     'salt_service': 'grafana-server'},
        },
        'influxdb': {
            'driver': 'salt_service',
            'args': {'grep': 'influxd', 'salt_service': 'influxdb'},
        },
        'kibana': {
            'driver': 'salt_service',
            'args': {'grep': 'kibana', 'salt_service': 'kibana'},
        },
        'nagios3': {
            'driver': 'salt_service',
            'args': {'grep': 'nagios3', 'salt_service': 'nagios3'},
        },
    }

    SUPPORTED_NETWORKS = []
    CONFIG_SCHEMA = {
        'type': 'object',
        '$schema': 'http://json-schema.org/draft-04/schema#',
        'properties': {
            'address': {'type':
                        'string'},
            'username': {'type': 'string'},
            'password': {'type': 'string'},
            'private_key_file': {'type': 'string'},
            'slave_username': {'type': 'string'},
            'slave_password': {'type': 'string'},
            'master_sudo': {'type': 'boolean'},
            'slave_sudo': {'type': 'boolean'},
            'slave_name_regexp': {'type': 'string'},
            'slave_direct_ssh': {'type': 'boolean'},
            'get_ips_cmd': {'type': 'string'},
            'serial': {'type': 'integer', 'minimum': 1},
        },
        'required': ['address', 'username'],
        'additionalProperties': False,
    }

    def __init__(self, cloud_management_params):
        super(TCPCloudManagement, self).__init__()
        self.node_discover = self  # supports discovering

        self.master_node_address = cloud_management_params['address']
        self._master_host = node_collection.Host(ip=self.master_node_address)
        self.username = cloud_management_params['username']
        self.slave_username = cloud_management_params.get(
            'slave_username', self.username)
        self.private_key_file = cloud_management_params.get('private_key_file')
        self.slave_direct_ssh = cloud_management_params.get(
            'slave_direct_ssh', False)
        use_jump = not self.slave_direct_ssh
        self.get_ips_cmd = cloud_management_params.get(
            'get_ips_cmd', 'pillar.get _param:single_address')
        self.serial = cloud_management_params.get('serial')

        password = cloud_management_params.get('password')
        self.master_node_executor = executor.AnsibleRunner(
            remote_user=self.username,
            password=password,
            private_key_file=self.private_key_file,
            become=cloud_management_params.get('master_sudo'))

        self.cloud_executor = executor.AnsibleRunner(
            remote_user=self.slave_username,
            password=cloud_management_params.get('slave_password', password),
            private_key_file=self.private_key_file,
            jump_host=self.master_node_address if use_jump else None,
            jump_user=self.username if use_jump else None,
            become=cloud_management_params.get('slave_sudo'),
            serial=self.serial)

        # get all nodes except salt master (that has cfg* hostname) by default
        self.slave_name_regexp = cloud_management_params.get(
            'slave_name_regexp',
            '^(?!cfg|mon)')

        self.cached_cloud_hosts = list()

    def verify(self):
        """Verify connection to the cloud."""
        nodes = self.get_nodes()
        LOG.debug('Cloud nodes: %s', nodes)
        task = {'command': 'hostname'}
        task_result = self.execute_on_cloud(nodes.hosts, task)
        LOG.debug('Hostnames of cloud nodes: %s',
                  [r.payload['stdout'] for r in task_result])
        LOG.info('Connected to cloud successfully!')

    def _run_salt(self, command):
        cmd = "salt -E '{}' {} --out=yaml".format(
            self.slave_name_regexp, command)
        result = self._execute_on_master_node({'command': cmd})
        return yaml.load(result[0].payload['stdout'])

    def discover_hosts(self):
        if not self.cached_cloud_hosts:
            interfaces = self._run_salt("network.interfaces")
            ips = self._run_salt(self.get_ips_cmd)

            for fqdn, ip in ips.items():
                node_ifaces = interfaces[fqdn]
                mac = None
                for iface_name, net_data in node_ifaces.items():
                    iface_ips = [data['address']
                                 for data in net_data.get('inet', [])]
                    if ip in iface_ips:
                        mac = net_data['hwaddr']
                        break
                else:
                    raise error.CloudManagementError(
                        "Can't find ip {} on node {} with node_ifaces:\n{}"
                        "".format(ip, fqdn, yaml.dump(node_ifaces)))
                host = node_collection.Host(ip=ip, mac=mac, fqdn=fqdn)
                self.cached_cloud_hosts.append(host)
            self.cached_cloud_hosts = sorted(self.cached_cloud_hosts)

        return self.cached_cloud_hosts

    def _execute_on_master_node(self, task):
        """Execute task on salt master node.

        :param task: Ansible task
        :return: Ansible execution result (list of records)
        """
        return self.master_node_executor.execute([self._master_host], task)

    def execute_on_cloud(self, hosts, task, raise_on_error=True):
        """Execute task on specified hosts within the cloud.

        :param hosts: List of host FQDNs
        :param task: Ansible task
        :param raise_on_error: throw exception in case of error
        :return: Ansible execution result (list of records)
        """
        if raise_on_error:
            return self.cloud_executor.execute(hosts, task)
        else:
            return self.cloud_executor.execute(hosts, task, [])

os-faults-0.1.17/os_faults/drivers/cloud/universal.py

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging

from os_faults.ansible import executor
from os_faults.api import cloud_management
from os_faults.api import error
from os_faults.api import node_collection
from os_faults.api import node_discover
from os_faults.drivers import shared_schemas

LOG = logging.getLogger(__name__)


class UniversalCloudManagement(cloud_management.CloudManagement,
                               node_discover.NodeDiscover):
    """Universal cloud management driver

    This driver is suitable for the most abstract (and thus universal) case.
    The driver does not have any built-in services, so all services need to
    be listed explicitly in the config file.

    By default the Universal driver works with only one node. To specify
    more nodes use the `node_list` node discovery driver. Authentication
    parameters can be shared or overridden by corresponding parameters
    from node discovery.

    **Example single node configuration:**
code-block:: yaml cloud_management: driver: universal args: address: 192.168.1.10 auth: username: ubuntu private_key_file: devstack_key become: true become_password: my_secret_password iface: eth1 serial: 10 **Example multi-node configuration:** Note that in this configuration a node discovery driver is required. .. code-block:: yaml cloud_management: driver: universal node_discovery: driver: node_list args: - ip: 192.168.5.149 auth: username: developer private_key_file: cloud_key become: true become_password: my_secret_password parameters: - **address** - address of the node (optional, but if not set a node discovery driver is mandatory) - **auth** - SSH related parameters (optional): - **username** - SSH username (optional) - **password** - SSH password (optional) - **private_key_file** - SSH key file (optional) - **become** - True if privilege escalation is used (optional) - **become_password** - privilege escalation password (optional) - **jump** - SSH proxy parameters (optional): - **host** - SSH proxy host - **username** - SSH proxy user - **private_key_file** - SSH proxy key file (optional) - **iface** - network interface name to retrieve mac address (optional) - **serial** - how many hosts Ansible should manage at a single time (optional) default: 10 """ NAME = 'universal' DESCRIPTION = 'Universal cloud management driver' CONFIG_SCHEMA = { 'type': 'object', '$schema': 'http://json-schema.org/draft-04/schema#', 'properties': { 'address': {'type': 'string'}, 'auth': shared_schemas.AUTH_SCHEMA, 'iface': {'type': 'string'}, 'serial': {'type': 'integer', 'minimum': 1}, }, 'additionalProperties': False, } def __init__(self, cloud_management_params): super(UniversalCloudManagement, self).__init__() self.node_discover = self # by default can discover itself self.address = cloud_management_params.get('address') self.iface = cloud_management_params.get('iface') serial = cloud_management_params.get('serial') auth = cloud_management_params.get('auth') or {} jump = 
auth.get('jump') or {} self.cloud_executor = executor.AnsibleRunner( remote_user=auth.get('username'), password=auth.get('password'), private_key_file=auth.get('private_key_file'), become=auth.get('become'), become_password=auth.get('become_password'), jump_host=jump.get('host'), jump_user=jump.get('user'), serial=serial, ) self.cached_hosts = None # cache for node discovery def verify(self): """Verify connection to the cloud.""" nodes = self.get_nodes() if not nodes: raise error.OSFError('Cloud has no nodes') task = {'command': 'hostname'} task_result = self.execute_on_cloud(nodes.hosts, task) LOG.debug('Host names of cloud nodes: %s', ', '.join(r.payload['stdout'] for r in task_result)) LOG.info('Connected to cloud successfully!') def execute_on_cloud(self, hosts, task, raise_on_error=True): """Execute task on specified hosts within the cloud. :param hosts: List of host FQDNs :param task: Ansible task :param raise_on_error: throw exception in case of error :return: Ansible execution result (list of records) """ if raise_on_error: return self.cloud_executor.execute(hosts, task) else: return self.cloud_executor.execute(hosts, task, []) def discover_hosts(self): # this function is called when no node-discovery driver is specified; # discover the default host set in config for this driver if not self.address: raise error.OSFError('Cloud has no nodes. 
Specify address in ' 'cloud management driver or add node ' 'discovery driver') if not self.cached_hosts: LOG.info('Discovering host name and MAC address for %s', self.address) host = node_collection.Host(ip=self.address) mac = None if self.iface: cmd = 'cat /sys/class/net/{}/address'.format(self.iface) res = self.execute_on_cloud([host], {'command': cmd}) mac = res[0].payload['stdout'] res = self.execute_on_cloud([host], {'command': 'hostname'}) hostname = res[0].payload['stdout'] # update my hosts self.cached_hosts = [node_collection.Host( ip=self.address, mac=mac, fqdn=hostname)] return self.cached_hosts os-faults-0.1.17/os_faults/drivers/nodes/000077500000000000000000000000001317662032700203115ustar00rootroot00000000000000os-faults-0.1.17/os_faults/drivers/nodes/__init__.py000066400000000000000000000000001317662032700224100ustar00rootroot00000000000000os-faults-0.1.17/os_faults/drivers/nodes/node_list.py000066400000000000000000000074751317662032700226600ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from os_faults.api import node_collection
from os_faults.api import node_discover
from os_faults import utils

AUTH_SCHEMA = {
    'type': 'object',
    'properties': {
        'username': {'type': 'string'},
        'password': {'type': 'string'},
        'sudo': {'type': 'boolean'},
        'private_key_file': {'type': 'string'},
        'become': {'type': 'boolean'},
        'become_password': {'type': 'string'},
        'jump': {
            'type': 'object',
            'properties': {
                'host': {'type': 'string'},
                'username': {'type': 'string'},
                'private_key_file': {'type': 'string'},
            },
            'required': ['host'],
            'additionalProperties': False,
        },
    },
    'additionalProperties': False,
}


class NodeListDiscover(node_discover.NodeDiscover):
    """Node list.

    Allows specifying a list of nodes in the configuration.

    **Example configuration:**

    .. code-block:: yaml

        node_discover:
          driver: node_list
          args:
            - ip: 10.0.0.51
              mac: aa:bb:cc:dd:ee:01
              fqdn: node1.local
              libvirt_name: node1
            - ip: 192.168.1.50
              mac: aa:bb:cc:dd:ee:02
              fqdn: node2.local
              auth:
                username: user1
                password: secret1
                sudo: False
                jump:
                  host: 10.0.0.52
                  username: ubuntu
                  private_key_file: /path/to/file
            - ip: 10.0.0.53
              mac: aa:bb:cc:dd:ee:03
              fqdn: node3.local
              auth:
                become: true
                become_password: my_secret_password

    node parameters:

    - **ip** - ip/host of the node
    - **mac** - MAC address of the node (optional).
      MAC address is used for the libvirt driver.
    - **fqdn** - FQDN of the node (optional).
      FQDN is used for filtering only.
    - **auth** - SSH related parameters (optional):
        - **username** - SSH username (optional)
        - **password** - SSH password (optional)
        - **private_key_file** - SSH key file (optional)
        - **become** - True if privilege escalation is used (optional)
        - **become_password** - privilege escalation password (optional)
        - **jump** - SSH proxy parameters (optional):
            - **host** - SSH proxy host
            - **username** - SSH proxy user
            - **private_key_file** - SSH proxy key file (optional)
    """

    NAME = 'node_list'
    DESCRIPTION = 'Reads hosts from configuration file'
    CONFIG_SCHEMA = {
        '$schema': 'http://json-schema.org/draft-04/schema#',
        'type': 'array',
        'items': {
            'type': 'object',
            'properties': {
                'ip': {'type': 'string'},
                'mac': {'type': 'string', 'pattern': utils.MACADDR_REGEXP},
                'fqdn': {'type': 'string'},
                'libvirt_name': {'type': 'string'},
                'auth': AUTH_SCHEMA,
            },
            'required': ['ip'],
            'additionalProperties': False,
        },
        'minItems': 1,
    }

    def __init__(self, conf):
        self.hosts = [node_collection.Host(**host) for host in conf]

    def discover_hosts(self):
        """Discover hosts

        :returns: list of Host instances
        """
        return self.hosts

--- os_faults/drivers/power/__init__.py (empty) ---

--- os_faults/drivers/power/ipmi.py ---

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging

from pyghmi import exceptions as pyghmi_exception
from pyghmi.ipmi import command as ipmi_command

from os_faults.api import error
from os_faults.api import power_management
from os_faults import utils

LOG = logging.getLogger(__name__)

BMC_SCHEMA = {
    'type': 'object',
    'properties': {
        'address': {'type': 'string'},
        'username': {'type': 'string'},
        'password': {'type': 'string'},
    },
    'required': ['address', 'username', 'password']
}


class IPMIDriver(power_management.PowerDriver):
    """IPMI driver.

    **Example configuration:**

    .. code-block:: yaml

        power_managements:
        - driver: ipmi
          args:
            mac_to_bmc:
              aa:bb:cc:dd:ee:01:
                address: 170.0.10.50
                username: admin1
                password: Admin_123
              aa:bb:cc:dd:ee:02:
                address: 170.0.10.51
                username: admin2
                password: Admin_123
            fqdn_to_bmc:
              node3.local:
                address: 170.0.10.52
                username: admin1
                password: Admin_123

    parameters:

    - **mac_to_bmc** - mapping of node MAC addresses to the corresponding
      BMC configurations with the following fields:
        - **address** - ip address of IPMI server
        - **username** - IPMI user
        - **password** - IPMI password
    - **fqdn_to_bmc** - mapping of node FQDNs to BMC configurations,
      with the same fields as above
    """

    NAME = 'ipmi'
    DESCRIPTION = 'IPMI power management driver'
    CONFIG_SCHEMA = {
        'type': 'object',
        '$schema': 'http://json-schema.org/draft-04/schema#',
        'properties': {
            'mac_to_bmc': {
                'type': 'object',
                'patternProperties': {
                    utils.MACADDR_REGEXP: BMC_SCHEMA
                }
            },
            'fqdn_to_bmc': {
                'type': 'object',
                'patternProperties': {
                    utils.FQDN_REGEXP: BMC_SCHEMA
                }
            },
        },
        'anyOf': [
            {'required': ['mac_to_bmc']},
            {'required': ['fqdn_to_bmc']},
        ],
        'additionalProperties': False,
    }

    def __init__(self, params):
        self.mac_to_bmc = params.get('mac_to_bmc', {})
        self.fqdn_to_bmc = params.get('fqdn_to_bmc', {})
        # TODO(astudenov): make macs lowercased

    def _find_bmc_by_host(self, host):
        if host.mac in self.mac_to_bmc:
            return self.mac_to_bmc[host.mac]
        if host.fqdn in self.fqdn_to_bmc:
            return self.fqdn_to_bmc[host.fqdn]

        raise error.PowerManagementError(
            'BMC for {!r} not found!'.format(host))

    def _run_set_power_cmd(self, host, cmd, expected_state=None):
        bmc = self._find_bmc_by_host(host)
        try:
            ipmicmd = ipmi_command.Command(bmc=bmc['address'],
                                           userid=bmc['username'],
                                           password=bmc['password'])
            ret = ipmicmd.set_power(cmd, wait=True)
        except pyghmi_exception.IpmiException:
            msg = 'IPMI cmd {!r} failed on bmc {!r}, {!r}'.format(
                cmd, bmc['address'], host)
            LOG.error(msg, exc_info=True)
            raise

        LOG.debug('IPMI response: {}'.format(ret))
        if ret.get('powerstate') != expected_state or 'error' in ret:
            msg = ('Failed to change power state to {!r} on bmc {!r}, '
                   '{!r}'.format(expected_state, bmc['address'], host))
            raise error.PowerManagementError(msg)

    def supports(self, host):
        try:
            self._find_bmc_by_host(host)
        except error.PowerManagementError:
            return False
        return True

    def poweroff(self, host):
        LOG.debug('Power off Node: %s', host)
        self._run_set_power_cmd(host, cmd='off', expected_state='off')
        LOG.info('Node powered off: %s', host)

    def poweron(self, host):
        LOG.debug('Power on Node: %s', host)
        self._run_set_power_cmd(host, cmd='on', expected_state='on')
        LOG.info('Node powered on: %s', host)

    def reset(self, host):
        LOG.debug('Reset Node: %s', host)
        # boot -- If system is off, then 'on', else 'reset'
        self._run_set_power_cmd(host, cmd='boot')
        # NOTE(astudenov): This command does not wait for node to boot
        LOG.info('Node reset: %s', host)

    def shutdown(self, host):
        LOG.debug('Shutdown Node: %s', host)
        self._run_set_power_cmd(host, cmd='shutdown', expected_state='off')
        LOG.info('Node is off: %s', host)

--- os_faults/drivers/power/libvirt.py ---

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging

from oslo_utils import importutils

from os_faults.api import error
from os_faults.api import power_management

LOG = logging.getLogger(__name__)


class LibvirtDriver(power_management.PowerDriver):
    """Libvirt driver.

    **Example configuration:**

    .. code-block:: yaml

        power_managements:
        - driver: libvirt
          args:
            connection_uri: qemu+unix:///system

    parameters:

    - **connection_uri** - libvirt uri
    """

    NAME = 'libvirt'
    DESCRIPTION = 'Libvirt power management driver'
    CONFIG_SCHEMA = {
        'type': 'object',
        '$schema': 'http://json-schema.org/draft-04/schema#',
        'properties': {
            'connection_uri': {'type': 'string'},
        },
        'required': ['connection_uri'],
        'additionalProperties': False,
    }

    def __init__(self, params):
        self.connection_uri = params['connection_uri']
        self._cached_conn = None

    @property
    def conn(self):
        return self._get_connection()

    def _get_connection(self):
        if self._cached_conn is None:
            libvirt_module = importutils.try_import('libvirt')
            if not libvirt_module:
                raise error.OSFError('libvirt-python is required '
                                     'to use LibvirtDriver')
            self._cached_conn = libvirt_module.open(self.connection_uri)

        return self._cached_conn

    def _find_domain_by_host(self, host):
        for domain in self.conn.listAllDomains():
            if host.libvirt_name and host.libvirt_name == domain.name():
                return domain
            if host.mac and host.mac in domain.XMLDesc():
                return domain

        raise error.PowerManagementError(
            'Domain not found for host %s.' % host)

    def supports(self, host):
        try:
            self._find_domain_by_host(host)
        except error.PowerManagementError:
            return False
        return True

    def poweroff(self, host):
        domain = self._find_domain_by_host(host)
        LOG.debug('Power off domain with name: %s', host.mac)
        domain.destroy()
        LOG.info('Domain powered off: %s', host.mac)

    def poweron(self, host):
        domain = self._find_domain_by_host(host)
        LOG.debug('Power on domain with name: %s', domain.name())
        domain.create()
        LOG.info('Domain powered on: %s', domain.name())

    def reset(self, host):
        domain = self._find_domain_by_host(host)
        LOG.debug('Reset domain with name: %s', domain.name())
        domain.reset()
        LOG.info('Domain reset: %s', domain.name())

    def shutdown(self, host):
        domain = self._find_domain_by_host(host)
        LOG.debug('Shutdown domain with name: %s', domain.name())
        domain.shutdown()
        LOG.info('Domain is off: %s', domain.name())

    def snapshot(self, host, snapshot_name, suspend):
        domain = self._find_domain_by_host(host)
        LOG.debug('Create snapshot "%s" for domain with name: %s',
                  snapshot_name, domain.name())
        if suspend:
            domain.suspend()
        # NOTE: the snapshot XML below was stripped by extraction in the
        # original text; reconstructed as the standard libvirt snapshot
        # description naming the snapshot
        domain.snapshotCreateXML(
            '<domainsnapshot><name>{}</name></domainsnapshot>'.format(
                snapshot_name))
        if suspend:
            domain.resume()
        LOG.debug('Created snapshot "%s" for domain with name: %s',
                  snapshot_name, domain.name())

    def revert(self, host, snapshot_name, resume):
        domain = self._find_domain_by_host(host)
        LOG.debug('Revert snapshot "%s" for domain with name: %s',
                  snapshot_name, domain.name())
        snapshot = domain.snapshotLookupByName(snapshot_name)
        if domain.isActive():
            domain.destroy()
        domain.revertToSnapshot(snapshot)
        if resume:
            domain.resume()
        LOG.debug('Reverted snapshot "%s" for domain with name: %s',
                  snapshot_name, domain.name())
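The `_find_domain_by_host` lookup above matches a libvirt domain either by an explicit `libvirt_name` or by scanning the domain XML for the host's MAC address. A minimal standalone sketch of that matching order, using stand-in objects instead of real libvirt domains (all names here are illustrative, not part of os-faults):

```python
from collections import namedtuple

# Stand-in for os_faults.api.node_collection.Host (illustrative only).
Host = namedtuple('Host', ['ip', 'mac', 'libvirt_name'])


class FakeDomain(object):
    """Mimics the two libvirt domain methods the driver relies on."""

    def __init__(self, name, xml):
        self._name = name
        self._xml = xml

    def name(self):
        return self._name

    def XMLDesc(self):
        return self._xml


def find_domain(host, domains):
    """Same matching order as LibvirtDriver._find_domain_by_host."""
    for domain in domains:
        if host.libvirt_name and host.libvirt_name == domain.name():
            return domain
        if host.mac and host.mac in domain.XMLDesc():
            return domain
    raise LookupError('Domain not found for host %s' % (host,))


domains = [
    FakeDomain('node1', "<mac address='aa:bb:cc:dd:ee:01'/>"),
    FakeDomain('node2', "<mac address='aa:bb:cc:dd:ee:02'/>"),
]

by_name = find_domain(Host('10.0.0.2', None, 'node2'), domains)
by_mac = find_domain(Host('10.0.0.1', 'aa:bb:cc:dd:ee:01', None), domains)
```

Note that the name check runs first on each domain, so an explicit `libvirt_name` wins over a MAC match for the same domain.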
--- os_faults/drivers/services/__init__.py (empty) ---

--- os_faults/drivers/services/linux.py ---

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from os_faults.drivers.services import process


class LinuxService(process.ServiceAsProcess):
    """Linux service

    Service that is defined in init.d and can be controlled by the
    ``service`` CLI tool.

    **Example configuration:**

    .. code-block:: yaml

        services:
          app:
            driver: linux_service
            args:
              linux_service: app
              grep: my_app
              port: ['tcp', 4242]

    parameters:

    - **linux_service** - name of a service
    - **grep** - regexp for grep to find process PID
    - **port** - tuple with two values - protocol, port number (optional)
    """

    NAME = 'linux_service'
    DESCRIPTION = 'Service in init.d'
    CONFIG_SCHEMA = {
        'type': 'object',
        'properties': {
            'linux_service': {'type': 'string'},
            'grep': {'type': 'string'},
            'port': process.PORT_SCHEMA,
        },
        'required': ['grep', 'linux_service'],
        'additionalProperties': False,
    }

    def __init__(self, *args, **kwargs):
        super(LinuxService, self).__init__(*args, **kwargs)
        self.linux_service = self.config['linux_service']

        self.restart_cmd = 'service {} restart'.format(self.linux_service)
        self.terminate_cmd = 'service {} stop'.format(self.linux_service)
        self.start_cmd = 'service {} start'.format(self.linux_service)

--- os_faults/drivers/services/pcs.py ---

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging

from os_faults.drivers.services import process

LOG = logging.getLogger(__name__)


class PcsService(process.ServiceAsProcess):
    """Service as a resource in Pacemaker

    Service that can be controlled by the ``pcs resource`` CLI tool.

    **Example configuration:**

    .. code-block:: yaml

        services:
          app:
            driver: pcs_service
            args:
              pcs_service: app
              grep: my_app
              port: ['tcp', 4242]

    parameters:

    - **pcs_service** - name of a service
    - **grep** - regexp for grep to find process PID
    - **port** - tuple with two values - protocol, port number (optional)
    """

    NAME = 'pcs_service'
    DESCRIPTION = 'Service in pacemaker'
    CONFIG_SCHEMA = {
        'type': 'object',
        'properties': {
            'pcs_service': {'type': 'string'},
            'grep': {'type': 'string'},
            'port': process.PORT_SCHEMA,
        },
        'required': ['grep', 'pcs_service'],
        'additionalProperties': False,
    }

    def __init__(self, *args, **kwargs):
        super(PcsService, self).__init__(*args, **kwargs)
        self.pcs_service = self.config['pcs_service']

        self.restart_cmd = 'pcs resource restart {} $(hostname)'.format(
            self.pcs_service)
        self.terminate_cmd = 'pcs resource ban {} $(hostname)'.format(
            self.pcs_service)
        self.start_cmd = 'pcs resource clear {} $(hostname)'.format(
            self.pcs_service)


class PcsOrLinuxService(process.ServiceAsProcess):
    """Service as a resource in Pacemaker or Linux service

    Service that can be controlled by the ``pcs resource`` CLI tool or the
    linux ``service`` tool. This is a hybrid driver that tries to find the
    service in Pacemaker and uses linux ``service`` if it is not found
    there.

    **Example configuration:**

    .. code-block:: yaml

        services:
          app:
            driver: pcs_or_linux_service
            args:
              pcs_service: p_app
              linux_service: app
              grep: my_app
              port: ['tcp', 4242]

    parameters:

    - **pcs_service** - name of a service in Pacemaker
    - **linux_service** - name of a service in init.d
    - **grep** - regexp for grep to find process PID
    - **port** - tuple with two values - protocol, port number (optional)
    """

    NAME = 'pcs_or_linux_service'
    DESCRIPTION = 'Service in pacemaker or init.d'
    CONFIG_SCHEMA = {
        'type': 'object',
        'properties': {
            'pcs_service': {'type': 'string'},
            'linux_service': {'type': 'string'},
            'grep': {'type': 'string'},
            'port': process.PORT_SCHEMA,
        },
        'required': ['grep', 'pcs_service', 'linux_service'],
        'additionalProperties': False,
    }

    def __init__(self, *args, **kwargs):
        super(PcsOrLinuxService, self).__init__(*args, **kwargs)
        self.pcs_service = self.config.get('pcs_service')
        self.linux_service = self.config.get('linux_service')

        self.restart_cmd = (
            'if pcs resource show {pcs_service}; '
            'then pcs resource restart {pcs_service} $(hostname); '
            'else service {linux_service} restart; fi').format(
                linux_service=self.linux_service,
                pcs_service=self.pcs_service)
        self.terminate_cmd = (
            'if pcs resource show {pcs_service}; '
            'then pcs resource ban {pcs_service} $(hostname); '
            'else service {linux_service} stop; fi').format(
                linux_service=self.linux_service,
                pcs_service=self.pcs_service)
        self.start_cmd = (
            'if pcs resource show {pcs_service}; '
            'then pcs resource clear {pcs_service} $(hostname); '
            'else service {linux_service} start; fi').format(
                linux_service=self.linux_service,
                pcs_service=self.pcs_service)

--- os_faults/drivers/services/process.py ---

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import signal

from os_faults.ansible import executor
from os_faults.api import error
from os_faults.api import service
from os_faults import utils

LOG = logging.getLogger(__name__)

PORT_SCHEMA = {
    'type': 'array',
    'items': [
        {'enum': ['tcp', 'udp']},
        {'type': 'integer', 'minimum': 0, 'maximum': 65535},
    ],
    'minItems': 2,
    'maxItems': 2,
}


class ServiceAsProcess(service.Service):
    """Service as process

    "process" is a basic service driver that uses ``ps`` and ``kill`` in
    actions like kill / freeze / unfreeze. Commands for start / restart /
    terminate should be specified in configuration, otherwise the commands
    will fail at runtime.

    **Example configuration:**

    .. code-block:: yaml

        services:
          app:
            driver: process
            args:
              grep: my_app
              restart_cmd: /bin/my_app --restart
              terminate_cmd: /bin/stop_my_app
              start_cmd: /bin/my_app
              port: ['tcp', 4242]

    parameters:

    - **grep** - regexp for grep to find process PID
    - **restart_cmd** - command to restart service (optional)
    - **terminate_cmd** - command to terminate service (optional)
    - **start_cmd** - command to start service (optional)
    - **port** - tuple with two values - protocol, port number (optional)
    """

    NAME = 'process'
    DESCRIPTION = 'Service as process'
    CONFIG_SCHEMA = {
        'type': 'object',
        'properties': {
            'grep': {'type': 'string'},
            'start_cmd': {'type': 'string'},
            'terminate_cmd': {'type': 'string'},
            'restart_cmd': {'type': 'string'},
            'port': PORT_SCHEMA,
        },
        'required': ['grep'],
        'additionalProperties': False,
    }

    def __init__(self, *args, **kwargs):
        super(ServiceAsProcess, self).__init__(*args, **kwargs)
        self.grep = self.config['grep']
        self.start_cmd = self.config.get('start_cmd')
        self.terminate_cmd = self.config.get('terminate_cmd')
        self.restart_cmd = self.config.get('restart_cmd')
        self.port = self.config.get('port')

    def _run_task(self, nodes, task, message):
        nodes = nodes if nodes is not None else self.get_nodes()
        if len(nodes) == 0:
            raise error.ServiceError(
                'Service %s is not found on any nodes' % self.service_name)

        LOG.info('%s service %s on nodes: %s',
                 message, self.service_name, nodes.get_ips())

        return self.cloud_management.execute_on_cloud(nodes.hosts, task)

    def discover_nodes(self):
        nodes = self.cloud_management.get_nodes()
        cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(self.grep)
        results = self.cloud_management.execute_on_cloud(
            nodes.hosts, {'command': cmd}, False)
        success_ips = [r.host for r in results
                       if r.status == executor.STATUS_OK]
        hosts = [h for h in nodes.hosts if h.ip in success_ips]
        LOG.debug('Service %s is discovered on nodes %s',
                  self.service_name, hosts)
        return self.node_cls(cloud_management=self.cloud_management,
                             hosts=hosts)

    @utils.require_variables('restart_cmd')
    def restart(self, nodes=None):
        self._run_task(nodes, {'shell': self.restart_cmd}, 'Restart')

    @utils.require_variables('terminate_cmd')
    def terminate(self, nodes=None):
        self._run_task(nodes, {'shell': self.terminate_cmd}, 'Terminate')

    @utils.require_variables('start_cmd')
    def start(self, nodes=None):
        self._run_task(nodes, {'shell': self.start_cmd}, 'Start')

    def kill(self, nodes=None):
        task = {'kill': {'grep': self.grep, 'sig': signal.SIGKILL}}
        self._run_task(nodes, task, 'Kill')

    def freeze(self, nodes=None, sec=None):
        if sec:
            task = {'freeze': {'grep': self.grep, 'sec': sec}}
        else:
            task = {'kill': {'grep': self.grep, 'sig': signal.SIGSTOP}}
        message = "Freeze %s" % (('for %s sec ' % sec) if sec else '')
        self._run_task(nodes, task, message)

    def unfreeze(self, nodes=None):
        task = {'kill': {'grep': self.grep, 'sig': signal.SIGCONT}}
        self._run_task(nodes, task, 'Unfreeze')

    @utils.require_variables('port')
    def plug(self, nodes=None):
        nodes = nodes if nodes is not None else self.get_nodes()
        message = "Open port %d for" % self.port[1]
        task = {
            'iptables': {
                'protocol': self.port[0],
                'port': self.port[1],
                'action': 'unblock',
                'service': self.service_name
            }
        }
        self._run_task(nodes, task, message)

    @utils.require_variables('port')
    def unplug(self, nodes=None):
        nodes = nodes if nodes is not None else self.get_nodes()
        message = "Close port %d for" % self.port[1]
        task = {
            'iptables': {
                'protocol': self.port[0],
                'port': self.port[1],
                'action': 'block',
                'service': self.service_name
            }
        }
        self._run_task(nodes, task, message)

--- os_faults/drivers/services/salt.py ---

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from os_faults.drivers.services import process

SALT_CALL = 'salt-call --local --retcode-passthrough '
SALT_RESTART = SALT_CALL + 'service.restart {service}'
SALT_TERMINATE = SALT_CALL + 'service.stop {service}'
SALT_START = SALT_CALL + 'service.start {service}'


class SaltService(process.ServiceAsProcess):
    """Salt service

    Service that can be controlled by ``salt service.*`` commands.

    **Example configuration:**

    .. code-block:: yaml

        services:
          app:
            driver: salt_service
            args:
              salt_service: app
              grep: my_app
              port: ['tcp', 4242]

    parameters:

    - **salt_service** - name of a service
    - **grep** - regexp for grep to find process PID
    - **port** - tuple with two values - protocol, port number (optional)
    """

    NAME = 'salt_service'
    DESCRIPTION = 'Service in salt'
    CONFIG_SCHEMA = {
        'type': 'object',
        'properties': {
            'salt_service': {'type': 'string'},
            'grep': {'type': 'string'},
            'port': process.PORT_SCHEMA,
        },
        'required': ['grep', 'salt_service'],
        'additionalProperties': False,
    }

    def __init__(self, *args, **kwargs):
        super(SaltService, self).__init__(*args, **kwargs)
        self.salt_service = self.config['salt_service']

        self.restart_cmd = SALT_RESTART.format(service=self.salt_service)
        self.terminate_cmd = SALT_TERMINATE.format(service=self.salt_service)
        self.start_cmd = SALT_START.format(service=self.salt_service)

--- os_faults/drivers/services/system.py ---

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from os_faults.drivers.services import process
from os_faults.drivers import shared_schemas


class SystemService(process.ServiceAsProcess):
    """System service

    This is a universal driver for any system services supported by
    Ansible (e.g. systemd, upstart). Please refer to Ansible documentation
    http://docs.ansible.com/ansible/latest/service_module.html for the
    whole list.

    **Example configuration:**

    .. code-block:: yaml

        services:
          app:
            driver: system_service
            args:
              service_name: app
              grep: my_app
              port: ['tcp', 4242]

    parameters:

    - **service_name** - name of a service
    - **grep** - regexp for grep to find process PID
    - **port** - tuple with two values - protocol, port number (optional)
    """

    NAME = 'system_service'
    DESCRIPTION = 'System Service (systemd, upstart, SysV, etc.)'
    CONFIG_SCHEMA = {
        'type': 'object',
        'properties': {
            'service_name': {'type': 'string'},
            'grep': {'type': 'string'},
            'port': shared_schemas.PORT_SCHEMA,
        },
        'required': ['grep', 'service_name'],
        'additionalProperties': False,
    }

    def __init__(self, *args, **kwargs):
        super(SystemService, self).__init__(*args, **kwargs)
        self.service_name = self.config['service_name']

    def start(self, nodes=None):
        task = {
            'service': {
                'name': self.service_name,
                'state': 'started'
            },
        }
        self._run_task(nodes, task, 'Start')

    def terminate(self, nodes=None):
        task = {
            'service': {
                'name': self.service_name,
                'state': 'stopped',
                'pattern': self.grep,
            },
        }
        self._run_task(nodes, task, 'Terminate')

    def restart(self, nodes=None):
        task = {
            'service': {
                'name': self.service_name,
                'state': 'restarted'
            },
        }
        self._run_task(nodes, task, 'Restart')
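`SystemService` above delegates start / stop / restart to the Ansible `service` module by building small task dictionaries. A minimal sketch of that task shape, with a hypothetical helper and service name (`system_service_task` and `'app'` are illustrative, not part of os-faults):

```python
def system_service_task(service_name, state, pattern=None):
    """Build an Ansible 'service' task like the ones SystemService sends.

    `pattern` mirrors the driver's use of its `grep` regexp when stopping
    a service.
    """
    task = {'service': {'name': service_name, 'state': state}}
    if pattern:
        task['service']['pattern'] = pattern
    return task


start_task = system_service_task('app', 'started')
stop_task = system_service_task('app', 'stopped', pattern='my_app')
```

The resulting dictionaries match what `_run_task` forwards to `execute_on_cloud`, which in turn hands them to the Ansible runner.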
--- os_faults/drivers/services/systemd.py ---

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from os_faults.drivers.services import process


class SystemdService(process.ServiceAsProcess):
    """Systemd service.

    Service that is defined as a Systemd unit and can be controlled by the
    ``systemctl`` CLI tool.

    **Example configuration:**

    .. code-block:: yaml

        services:
          app:
            driver: systemd_service
            args:
              systemd_service: app
              grep: my_app
              port: ['tcp', 4242]

    parameters:

    - **systemd_service** - name of a service in systemd
    - **grep** - regexp for grep to find process PID
    - **port** - tuple with two values - protocol, port number (optional)
    """

    NAME = 'systemd_service'
    DESCRIPTION = 'Service in Systemd'
    CONFIG_SCHEMA = {
        'type': 'object',
        'properties': {
            'systemd_service': {'type': 'string'},
            'grep': {'type': 'string'},
            'port': process.PORT_SCHEMA,
            'start_cmd': {'type': 'string'},
            'terminate_cmd': {'type': 'string'},
            'restart_cmd': {'type': 'string'},
        },
        'required': ['grep', 'systemd_service'],
        'additionalProperties': False,
    }

    def __init__(self, *args, **kwargs):
        super(SystemdService, self).__init__(*args, **kwargs)
        self.systemd_service = self.config['systemd_service']

        self.restart_cmd = 'sudo systemctl restart {}'.format(
            self.systemd_service)
        self.terminate_cmd = 'sudo systemctl stop {}'.format(
            self.systemd_service)
        self.start_cmd = 'sudo systemctl start {}'.format(
            self.systemd_service)
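Unlike `SystemService`, `SystemdService` does not use the Ansible `service` module; it derives three shell commands from the unit name at init time. A sketch of that templating (`'app'` is a stand-in unit name, not one shipped with os-faults):

```python
# The three shell commands SystemdService builds from one unit name.
systemd_service = 'app'  # illustrative unit name

restart_cmd = 'sudo systemctl restart {}'.format(systemd_service)
terminate_cmd = 'sudo systemctl stop {}'.format(systemd_service)
start_cmd = 'sudo systemctl start {}'.format(systemd_service)
```

These strings are then executed through the inherited `restart` / `terminate` / `start` methods of `ServiceAsProcess` as `shell` tasks.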
os-faults-0.1.17/os_faults/drivers/shared_schemas.py000066400000000000000000000026511317662032700225300ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. PORT_SCHEMA = { 'type': 'array', 'items': [ {'enum': ['tcp', 'udp']}, {'type': 'integer', 'minimum': 0, 'maximum': 65535}, ], 'minItems': 2, 'maxItems': 2, } AUTH_SCHEMA = { 'type': 'object', 'properties': { 'username': {'type': 'string'}, 'password': {'type': 'string'}, 'sudo': {'type': 'boolean'}, # deprecated, use `become` 'private_key_file': {'type': 'string'}, 'become': {'type': 'boolean'}, 'become_password': {'type': 'string'}, 'jump': { 'type': 'object', 'properties': { 'host': {'type': 'string'}, 'username': {'type': 'string'}, 'private_key_file': {'type': 'string'}, }, 'required': ['host'], 'additionalProperties': False, }, }, 'additionalProperties': False, } os-faults-0.1.17/os_faults/registry.py000066400000000000000000000057411317662032700177540ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. 
# See the License for the specific language governing permissions and # limitations under the License. import inspect import os import sys from oslo_utils import importutils import os_faults from os_faults.api import base_driver from os_faults.api import error DRIVERS = {} def _import_modules_from_package(): folder = os.path.dirname(os_faults.__file__) library_root = os.path.normpath(os.path.join(folder, os.pardir)) for root, dirs, files in os.walk(folder): for filename in files: if (filename.startswith('__') or filename.startswith('test') or not filename.endswith('.py')): continue relative_path = os.path.relpath(os.path.join(root, filename), library_root) name = os.path.splitext(relative_path)[0] # remove extension module_name = '.'.join(name.split(os.sep)) # convert / to . if module_name not in sys.modules: module = importutils.import_module(module_name) sys.modules[module_name] = module else: module = sys.modules[module_name] yield module def _list_drivers(): modules = _import_modules_from_package() for module in modules: class_info_list = inspect.getmembers(module, inspect.isclass) for class_info in class_info_list: klazz = class_info[1] if not issubclass(klazz, base_driver.BaseDriver): continue if 'NAME' not in vars(klazz): # driver must have a name continue if klazz.NAME == 'base': # skip base class continue yield klazz def get_drivers(): global DRIVERS if not DRIVERS: DRIVERS = {} for k in _list_drivers(): driver_name = k.get_driver_name() if driver_name in DRIVERS: orig_k = DRIVERS[driver_name] orig_path = orig_k.__module__ + '.' + orig_k.__name__ dup_path = k.__module__ + '.' + k.__name__ raise error.OSFDriverWithSuchNameExists( 'Driver "%s" already defined in %s. 
' 'Found a duplicate in %s ' % ( driver_name, orig_path, dup_path)) DRIVERS[driver_name] = k return DRIVERS def get_driver(name): all_drivers = get_drivers() if name not in all_drivers: raise error.OSFDriverNotFound('Driver %s is not found' % name) return all_drivers[name] os-faults-0.1.17/os_faults/tests/000077500000000000000000000000001317662032700166655ustar00rootroot00000000000000os-faults-0.1.17/os_faults/tests/__init__.py000066400000000000000000000000001317662032700207640ustar00rootroot00000000000000os-faults-0.1.17/os_faults/tests/integration/000077500000000000000000000000001317662032700212105ustar00rootroot00000000000000os-faults-0.1.17/os_faults/tests/integration/__init__.py000066400000000000000000000000001317662032700233070ustar00rootroot00000000000000os-faults-0.1.17/os_faults/tests/integration/os-faults.yaml000066400000000000000000000003251317662032700240110ustar00rootroot00000000000000cloud_management: driver: universal node_discover: driver: node_list args: - ip: localhost services: memcached: args: grep: memcached service_name: memcached driver: system_service os-faults-0.1.17/os_faults/tests/integration/test_cmd_utils.py000066400000000000000000000026451317662032700246130ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import logging import os import re import shlex from oslo_concurrency import processutils from os_faults.tests.unit import test CONFIG_FILE = os.path.join(os.path.dirname(__file__), 'os-faults.yaml') LOG = logging.getLogger(__name__) class TestOSInjectFault(test.TestCase): def test_connect(self): cmd = 'os-inject-fault -c %s -v' % CONFIG_FILE command_stdout, command_stderr = processutils.execute( *shlex.split(cmd)) success = re.search('Connected to cloud successfully', command_stderr) self.assertTrue(success) class TestOSFaults(test.TestCase): def test_connect(self): cmd = 'os-faults verify -c %s' % CONFIG_FILE command_stdout, command_stderr = processutils.execute( *shlex.split(cmd)) success = re.search('Connected to cloud successfully', command_stderr) self.assertTrue(success) os-faults-0.1.17/os_faults/tests/unit/000077500000000000000000000000001317662032700176445ustar00rootroot00000000000000os-faults-0.1.17/os_faults/tests/unit/__init__.py000066400000000000000000000000001317662032700217430ustar00rootroot00000000000000os-faults-0.1.17/os_faults/tests/unit/ansible/000077500000000000000000000000001317662032700212615ustar00rootroot00000000000000os-faults-0.1.17/os_faults/tests/unit/ansible/__init__.py000066400000000000000000000000001317662032700233600ustar00rootroot00000000000000os-faults-0.1.17/os_faults/tests/unit/ansible/modules/000077500000000000000000000000001317662032700227315ustar00rootroot00000000000000os-faults-0.1.17/os_faults/tests/unit/ansible/modules/__init__.py000066400000000000000000000000001317662032700250300ustar00rootroot00000000000000os-faults-0.1.17/os_faults/tests/unit/ansible/modules/test_freeze.py000066400000000000000000000032511317662032700256230ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import mock from os_faults.ansible.modules import freeze from os_faults.tests.unit import test class FreezeTestCase(test.TestCase): @mock.patch("os_faults.ansible.modules.freeze.AnsibleModule") def test_main(self, mock_ansible_module): ansible_module_inst = mock_ansible_module.return_value ansible_module_inst.run_command.return_value = [ 'myrc', 'mystdout', 'mystderr'] ansible_module_inst.params = { 'grep': 'foo', 'sec': 15, } freeze.main() cmd = ('bash -c "tf=$(mktemp /tmp/script.XXXXXX);' 'echo -n \'#!\' > $tf; ' 'echo -en \'/bin/bash\\npids=`ps ax | ' 'grep -v grep | ' 'grep foo | awk {{\\047print $1\\047}}`; ' 'echo $pids | xargs kill -19; sleep 15; ' 'echo $pids | xargs kill -18; rm \' >> $tf; ' 'echo -n $tf >> $tf; ' 'chmod 770 $tf; nohup $tf &"') ansible_module_inst.exit_json.assert_called_once_with( cmd=cmd, rc='myrc', stdout='mystdout', stderr='mystderr', ) os-faults-0.1.17/os_faults/tests/unit/ansible/modules/test_fuel_network_mgmt.py000066400000000000000000000036641317662032700301030ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import ddt import mock from os_faults.ansible.modules import fuel_network_mgmt from os_faults.tests.unit import test @ddt.ddt class FuelNetworkManagementTestCase(test.TestCase): def setUp(self): super(FuelNetworkManagementTestCase, self).setUp() @ddt.data(['management', 'up', 'ip link set br-mgmt up'], ['management', 'down', 'ip link set br-mgmt down'], ['public', 'up', 'ip link set br-ex up'], ['public', 'down', 'ip link set br-ex down'], ['private', 'up', 'ip link set br-prv up'], ['private', 'down', 'ip link set br-prv down'], ['storage', 'up', 'ip link set br-storage up'], ['storage', 'down', 'ip link set br-storage down']) @ddt.unpack @mock.patch("os_faults.ansible.modules.fuel_network_mgmt.AnsibleModule") def test_main(self, network_name, operation, cmd, mock_ansible_module): ansible_module_inst = mock_ansible_module.return_value ansible_module_inst.run_command.return_value = [ 'myrc', 'mystdout', 'mystderr'] ansible_module_inst.params = { 'network_name': network_name, 'operation': operation, } fuel_network_mgmt.main() ansible_module_inst.exit_json.assert_called_once_with( cmd=cmd, rc='myrc', stdout='mystdout', stderr='mystderr', ) os-faults-0.1.17/os_faults/tests/unit/ansible/modules/test_iptables.py000066400000000000000000000046061317662032700261530ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import mock from os_faults.ansible.modules import iptables from os_faults.tests.unit import test class IptablesTestCase(test.TestCase): @mock.patch("os_faults.ansible.modules.iptables.AnsibleModule") def test_main_unblock(self, mock_ansible_module): ansible_module_inst = mock_ansible_module.return_value ansible_module_inst.run_command.return_value = [ 'myrc', 'mystdout', 'mystderr'] ansible_module_inst.params = { 'service': 'foo', 'action': 'unblock', 'port': 5555, 'protocol': 'tcp', } iptables.main() cmd = ( 'bash -c "rule=`iptables -L INPUT -n --line-numbers | ' 'grep "foo_temporary_DROP" | cut -d \' \' -f1`; for arg in $rule;' ' do iptables -D INPUT -p tcp --dport 5555 ' '-j DROP -m comment --comment "foo_temporary_DROP"; done"') ansible_module_inst.exit_json.assert_called_once_with( cmd=cmd, rc='myrc', stdout='mystdout', stderr='mystderr', ) @mock.patch("os_faults.ansible.modules.iptables.AnsibleModule") def test_main_block(self, mock_ansible_module): ansible_module_inst = mock_ansible_module.return_value ansible_module_inst.run_command.return_value = [ 'myrc', 'mystdout', 'mystderr'] ansible_module_inst.params = { 'service': 'foo', 'action': 'block', 'port': 5555, 'protocol': 'tcp', } iptables.main() cmd = ( 'bash -c "iptables -I INPUT 1 -p tcp --dport 5555 ' '-j DROP -m comment --comment "foo_temporary_DROP""') ansible_module_inst.exit_json.assert_called_once_with( cmd=cmd, rc='myrc', stdout='mystdout', stderr='mystderr', ) os-faults-0.1.17/os_faults/tests/unit/ansible/modules/test_kill.py000066400000000000000000000030561317662032700253010ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import ddt import mock from os_faults.ansible.modules import kill from os_faults.tests.unit import test @ddt.ddt class KillTestCase(test.TestCase): @ddt.data(['foo', 9, 'bash -c "ps ax | grep -v grep | grep \'foo\' ' '| awk {\'print $1\'} | xargs kill -9"'], ['bar', 3, 'bash -c "ps ax | grep -v grep | grep \'bar\' ' '| awk {\'print $1\'} | xargs kill -3"']) @ddt.unpack @mock.patch("os_faults.ansible.modules.kill.AnsibleModule") def test_main(self, grep, sig, cmd, mock_ansible_module): ansible_module_inst = mock_ansible_module.return_value ansible_module_inst.run_command.return_value = [ 'myrc', 'mystdout', 'mystderr'] ansible_module_inst.params = { 'grep': grep, 'sig': sig, } kill.main() ansible_module_inst.exit_json.assert_called_once_with( cmd=cmd, rc='myrc', stdout='mystdout', stderr='mystderr', ) os-faults-0.1.17/os_faults/tests/unit/ansible/test_executor.py000066400000000000000000000456731317662032700245470ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import ddt import mock from os_faults.ansible import executor from os_faults.api import node_collection from os_faults.tests.unit import test class MyCallbackTestCase(test.TestCase): def test__store(self,): ex = executor.MyCallback(mock.Mock()) my_host = 'my_host' my_task = 'my_task' my_result = 'my_result' r = mock.Mock() r._host.get_name.return_value = my_host r._task.get_name.return_value = my_task r._result = my_result stat = 'OK' ex._store(r, stat) ex.storage.append.assert_called_once_with( executor.AnsibleExecutionRecord(host=my_host, status=stat, task=my_task, payload=my_result)) @mock.patch('ansible.plugins.callback.CallbackBase.v2_runner_on_failed') @mock.patch('os_faults.ansible.executor.MyCallback._store') def test_v2_runner_on_failed_super(self, mock_store, mock_callback): ex = executor.MyCallback(mock.Mock()) result = mock.Mock() ex.v2_runner_on_failed(result) mock_callback.assert_called_once_with(result) @mock.patch('os_faults.ansible.executor.MyCallback._store') def test_v2_runner_on_failed(self, mock_store): result = mock.Mock() ex = executor.MyCallback(mock.Mock()) ex.v2_runner_on_failed(result) mock_store.assert_called_once_with(result, executor.STATUS_FAILED) @mock.patch('ansible.plugins.callback.CallbackBase.v2_runner_on_ok') @mock.patch('os_faults.ansible.executor.MyCallback._store') def test_v2_runner_on_ok_super(self, mock_store, mock_callback): ex = executor.MyCallback(mock.Mock()) result = mock.Mock() ex.v2_runner_on_ok(result) mock_callback.assert_called_once_with(result) @mock.patch('os_faults.ansible.executor.MyCallback._store') def test_v2_runner_on_ok(self, mock_store): result = mock.Mock() ex = executor.MyCallback(mock.Mock()) ex.v2_runner_on_ok(result) mock_store.assert_called_once_with(result, executor.STATUS_OK) @mock.patch('ansible.plugins.callback.CallbackBase.v2_runner_on_skipped') @mock.patch('os_faults.ansible.executor.MyCallback._store') def test_v2_runner_on_skipped_super(self, mock_store, mock_callback): ex = 
executor.MyCallback(mock.Mock()) result = mock.Mock() ex.v2_runner_on_skipped(result) mock_callback.assert_called_once_with(result) @mock.patch('os_faults.ansible.executor.MyCallback._store') def test_v2_runner_on_skipped(self, mock_store): result = mock.Mock() ex = executor.MyCallback(mock.Mock()) ex.v2_runner_on_skipped(result) mock_store.assert_called_once_with(result, executor.STATUS_SKIPPED) @mock.patch( 'ansible.plugins.callback.CallbackBase.v2_runner_on_unreachable') @mock.patch('os_faults.ansible.executor.MyCallback._store') def test_v2_runner_on_unreachable_super(self, mock_store, mock_callback): ex = executor.MyCallback(mock.Mock()) result = mock.Mock() ex.v2_runner_on_unreachable(result) mock_callback.assert_called_once_with(result) @mock.patch('os_faults.ansible.executor.MyCallback._store') def test_v2_runner_on_unreachable(self, mock_store): result = mock.Mock() ex = executor.MyCallback(mock.Mock()) ex.v2_runner_on_unreachable(result) mock_store.assert_called_once_with(result, executor.STATUS_UNREACHABLE) @ddt.ddt class AnsibleRunnerTestCase(test.TestCase): @mock.patch('os_faults.ansible.executor.os.path.exists') def test_resolve_relative_path_doesnt_exist(self, mock_exist): mock_exist.return_value = False r = executor.resolve_relative_path('') self.assertIsNone(r) @mock.patch('os_faults.ansible.executor.os.path.exists') def test_resolve_relative_path_exists(self, mock_exist): mock_exist.return_value = True r = executor.resolve_relative_path('') self.assertIsNotNone(r) @mock.patch.object(executor, 'Options') @ddt.data(( {}, dict(become=None, become_method='sudo', become_user='root', check=False, connection='smart', diff=None, forks=100, private_key_file=None, remote_user='root', scp_extra_args=None, sftp_extra_args=None, ssh_common_args=executor.SSH_COMMON_ARGS, ssh_extra_args=None, verbosity=100), dict(conn_pass=None, become_pass=None), ), ( dict(remote_user='root', password='foobar'), dict(become=None, become_method='sudo', become_user='root', 
check=False, connection='smart', diff=None, forks=100, private_key_file=None, remote_user='root', scp_extra_args=None, sftp_extra_args=None, ssh_common_args=executor.SSH_COMMON_ARGS, ssh_extra_args=None, verbosity=100), dict(conn_pass='foobar', become_pass=None), ), ( dict(remote_user='root', password='foobar', become_password='secret'), dict(become=None, become_method='sudo', become_user='root', check=False, connection='smart', diff=None, forks=100, private_key_file=None, remote_user='root', scp_extra_args=None, sftp_extra_args=None, ssh_common_args=executor.SSH_COMMON_ARGS, ssh_extra_args=None, verbosity=100), dict(conn_pass='foobar', become_pass='secret'), ), ( dict(remote_user='root', jump_host='jhost.com', private_key_file='/path/my.key'), dict(become=None, become_method='sudo', become_user='root', check=False, connection='smart', diff=None, forks=100, private_key_file='/path/my.key', remote_user='root', scp_extra_args=None, sftp_extra_args=None, ssh_common_args=('-o UserKnownHostsFile=/dev/null ' '-o StrictHostKeyChecking=no ' '-o ConnectTimeout=60 ' '-o ProxyCommand=' '"ssh -i /path/my.key ' '-W %h:%p ' '-o UserKnownHostsFile=/dev/null ' '-o StrictHostKeyChecking=no ' '-o ConnectTimeout=60 ' 'root@jhost.com"'), ssh_extra_args=None, verbosity=100), dict(conn_pass=None, become_pass=None), ), ( dict(remote_user='root', jump_host='jhost.com', jump_user='juser', private_key_file='/path/my.key'), dict(become=None, become_method='sudo', become_user='root', check=False, connection='smart', diff=None, forks=100, private_key_file='/path/my.key', remote_user='root', scp_extra_args=None, sftp_extra_args=None, ssh_common_args=('-o UserKnownHostsFile=/dev/null ' '-o StrictHostKeyChecking=no ' '-o ConnectTimeout=60 ' '-o ProxyCommand=' '"ssh -i /path/my.key ' '-W %h:%p ' '-o UserKnownHostsFile=/dev/null ' '-o StrictHostKeyChecking=no ' '-o ConnectTimeout=60 ' 'juser@jhost.com"'), ssh_extra_args=None, verbosity=100), dict(conn_pass=None, become_pass=None), )) @ddt.unpack 
def test___init__options(self, config, options_args, passwords, mock_options): runner = executor.AnsibleRunner(**config) module_path = executor.make_module_path_option() mock_options.assert_called_once_with(module_path=module_path, **options_args) self.assertEqual(passwords, runner.passwords) @mock.patch.object(executor.task_queue_manager, 'TaskQueueManager') @mock.patch('ansible.playbook.play.Play.load') @mock.patch('os_faults.ansible.executor.Inventory') @mock.patch('os_faults.ansible.executor.VariableManager') @mock.patch('ansible.parsing.dataloader.DataLoader') def test__run_play(self, mock_dataloader, mock_vmanager, mock_inventory, mock_play_load, mock_taskqm): mock_play_load.return_value = 'my_load' variable_manager = mock_vmanager.return_value host_inst = mock_inventory.return_value.get_host.return_value host_vars = { '0.0.0.0': { 'ansible_user': 'foo', 'ansible_ssh_pass': 'bar', 'ansible_become': True, 'ansible_ssh_private_key_file': None, 'ansible_ssh_common_args': '-o Option=yes', } } ex = executor.AnsibleRunner() ex._run_play({'hosts': ['0.0.0.0']}, host_vars) mock_taskqm.assert_called_once() self.assertEqual(mock_taskqm.mock_calls[1], mock.call().run('my_load')) self.assertEqual(mock_taskqm.mock_calls[2], mock.call().cleanup()) variable_manager.set_host_variable.assert_has_calls(( mock.call(host_inst, 'ansible_user', 'foo'), mock.call(host_inst, 'ansible_ssh_pass', 'bar'), mock.call(host_inst, 'ansible_become', True), mock.call(host_inst, 'ansible_ssh_common_args', '-o Option=yes'), ), any_order=True) @mock.patch.object(executor.task_queue_manager, 'TaskQueueManager') @mock.patch('ansible.playbook.play.Play.load') @mock.patch('os_faults.ansible.executor.Inventory') @mock.patch('os_faults.ansible.executor.VariableManager') @mock.patch('ansible.parsing.dataloader.DataLoader') def test__run_play_no_host_vars( self, mock_dataloader, mock_vmanager, mock_inventory, mock_play_load, mock_taskqm): mock_play_load.return_value = 'my_load' variable_manager = 
mock_vmanager.return_value host_vars = {} ex = executor.AnsibleRunner() ex._run_play({'hosts': ['0.0.0.0']}, host_vars) mock_taskqm.assert_called_once() self.assertEqual(mock_taskqm.mock_calls[1], mock.call().run('my_load')) self.assertEqual(mock_taskqm.mock_calls[2], mock.call().cleanup()) self.assertEqual(0, variable_manager.set_host_variable.call_count) @mock.patch('os_faults.ansible.executor.AnsibleRunner._run_play') def test_run_playbook(self, mock_run_play): ex = executor.AnsibleRunner() my_playbook = [{'gather_facts': 'yes'}, {'gather_facts': 'no'}] ex.run_playbook(my_playbook, {}) self.assertEqual(my_playbook, [{'gather_facts': 'no'}, {'gather_facts': 'no'}]) self.assertEqual(mock_run_play.call_count, 2) @mock.patch('os_faults.ansible.executor.AnsibleRunner.run_playbook') def test_execute(self, mock_run_playbook): my_hosts = [node_collection.Host('0.0.0.0'), node_collection.Host('255.255.255.255')] my_tasks = 'my_task' ex = executor.AnsibleRunner() ex.execute(my_hosts, my_tasks) mock_run_playbook.assert_called_once_with( [{'tasks': ['my_task'], 'hosts': ['0.0.0.0', '255.255.255.255'], 'serial': 10}], {'0.0.0.0': {}, '255.255.255.255': {}}) @mock.patch('os_faults.ansible.executor.AnsibleRunner.run_playbook') def test_execute_with_host_vars(self, mock_run_playbook): my_hosts = [ node_collection.Host('0.0.0.0', auth={'username': 'foo', 'password': 'bar', 'sudo': True}), node_collection.Host('255.255.255.255', auth={'jump': {'host': '192.168.1.100', 'username': 'foo'}})] my_tasks = 'my_task' ex = executor.AnsibleRunner() ex.execute(my_hosts, my_tasks) mock_run_playbook.assert_called_once_with( [{'tasks': ['my_task'], 'hosts': ['0.0.0.0', '255.255.255.255'], 'serial': 10}], { '0.0.0.0': { 'ansible_user': 'foo', 'ansible_ssh_pass': 'bar', 'ansible_become': True, 'ansible_become_password': None, 'ansible_ssh_private_key_file': None, 'ansible_ssh_common_args': None, }, '255.255.255.255': { 'ansible_user': None, 'ansible_ssh_pass': None, 'ansible_become': None, 
'ansible_become_password': None, 'ansible_ssh_private_key_file': None, 'ansible_ssh_common_args': '-o UserKnownHostsFile=/dev/null ' '-o StrictHostKeyChecking=no ' '-o ConnectTimeout=60 ' '-o ProxyCommand="' 'ssh -W %h:%p ' '-o UserKnownHostsFile=/dev/null ' '-o StrictHostKeyChecking=no ' '-o ConnectTimeout=60 ' 'foo@192.168.1.100"'}}) @mock.patch('os_faults.ansible.executor.AnsibleRunner.run_playbook') def test_execute_with_serial(self, mock_run_playbook): my_hosts = [node_collection.Host('0.0.0.0'), node_collection.Host('255.255.255.255')] my_tasks = 'my_task' ex = executor.AnsibleRunner(serial=50) ex.execute(my_hosts, my_tasks) mock_run_playbook.assert_called_once_with( [{'tasks': ['my_task'], 'hosts': ['0.0.0.0', '255.255.255.255'], 'serial': 50}], {'0.0.0.0': {}, '255.255.255.255': {}}) @mock.patch('os_faults.ansible.executor.AnsibleRunner.run_playbook') def test_execute_status_unreachable(self, mock_run_playbook): my_hosts = [node_collection.Host('0.0.0.0'), node_collection.Host('255.255.255.255')] my_tasks = 'my_task' my_statuses = {executor.STATUS_FAILED, executor.STATUS_SKIPPED, executor.STATUS_UNREACHABLE} r0 = executor.AnsibleExecutionRecord( host='0.0.0.0', status=executor.STATUS_OK, task={}, payload={}) r1 = executor.AnsibleExecutionRecord( host='255.255.255.255', status=executor.STATUS_UNREACHABLE, task={}, payload={}) mock_run_playbook.return_value = [r0, r1] ex = executor.AnsibleRunner() err = self.assertRaises(executor.AnsibleExecutionException, ex.execute, my_hosts, my_tasks, my_statuses) self.assertEqual(type(err), executor.AnsibleExecutionUnreachable) @mock.patch('os_faults.ansible.executor.AnsibleRunner.run_playbook') def test_execute_status_failed(self, mock_run_playbook): my_hosts = [node_collection.Host('0.0.0.0'), node_collection.Host('255.255.255.255')] my_tasks = 'my_task' my_statuses = {executor.STATUS_OK, executor.STATUS_FAILED, executor.STATUS_SKIPPED, executor.STATUS_UNREACHABLE} r0 = executor.AnsibleExecutionRecord( host='0.0.0.0', 
status=executor.STATUS_OK, task={}, payload={}) r1 = executor.AnsibleExecutionRecord( host='255.255.255.255', status=executor.STATUS_UNREACHABLE, task={}, payload={}) mock_run_playbook.return_value = [r0, r1] ex = executor.AnsibleRunner() err = self.assertRaises(executor.AnsibleExecutionException, ex.execute, my_hosts, my_tasks, my_statuses) self.assertEqual(type(err), executor.AnsibleExecutionException) @mock.patch('copy.deepcopy') @mock.patch('os_faults.ansible.executor.AnsibleRunner.run_playbook') def test_execute_stdout_is_more_than_stdout_limit( self, mock_run_playbook, mock_deepcopy): result = mock.Mock() result.payload = {'stdout': 'a' * (executor.STDOUT_LIMIT + 1), 'stdout_lines': 'a' * (executor.STDOUT_LIMIT + 1)} mock_run_playbook.return_value = [result] mock_deepcopy.return_value = [result] log_result = mock_deepcopy.return_value[0] my_hosts = [node_collection.Host('0.0.0.0'), node_collection.Host('255.255.255.255')] my_tasks = 'my_task' ex = executor.AnsibleRunner() ex.execute(my_hosts, my_tasks) self.assertEqual('a' * executor.STDOUT_LIMIT + '... 
', log_result.payload['stdout']) @mock.patch('os_faults.ansible.executor.LOG.debug') @mock.patch('os_faults.ansible.executor.AnsibleRunner.run_playbook') def test_execute_payload_without_stdout(self, mock_run_playbook, mock_debug): task = {'task': 'foo'} host = '0.0.0.0' result = executor.AnsibleExecutionRecord( host=host, status=executor.STATUS_OK, task=task, payload={'foo': 'bar'}) mock_run_playbook.return_value = [result] hosts = [node_collection.Host('0.0.0.0')] ex = executor.AnsibleRunner() ex.execute(hosts, task) mock_debug.assert_has_calls(( mock.call('Executing task: %s on hosts: %s with serial: %s', task, hosts, 10), mock.call('Execution completed with 1 result(s):'), mock.call(result), )) @mock.patch('os_faults.executor.get_module_paths') @mock.patch('os_faults.executor.PRE_24_ANSIBLE', False) def test_make_module_path_option_ansible_24(self, mock_mp): mock_mp.return_value = ['/path/one', 'path/two'] self.assertEqual(['/path/one', 'path/two'], executor.make_module_path_option()) @mock.patch('os_faults.executor.get_module_paths') @mock.patch('os_faults.executor.PRE_24_ANSIBLE', False) def test_make_module_path_option_ansible_24_one_item(self, mock_mp): mock_mp.return_value = ['/path/one'] self.assertEqual(['/path/one', '/path/one'], executor.make_module_path_option()) @mock.patch('os_faults.executor.get_module_paths') @mock.patch('os_faults.executor.PRE_24_ANSIBLE', True) def test_make_module_path_option_ansible_pre24(self, mock_mp): mock_mp.return_value = ['/path/one', 'path/two'] self.assertEqual('/path/one:path/two', executor.make_module_path_option()) os-faults-0.1.17/os_faults/tests/unit/api/000077500000000000000000000000001317662032700204155ustar00rootroot00000000000000os-faults-0.1.17/os_faults/tests/unit/api/__init__.py000066400000000000000000000000001317662032700225140ustar00rootroot00000000000000os-faults-0.1.17/os_faults/tests/unit/api/test_human_api.py000066400000000000000000000222441317662032700237730ustar00rootroot00000000000000# Licensed 
under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import ddt
import mock

from os_faults.api import error
from os_faults.api import human
from os_faults.api import node_collection
from os_faults.api import service as service_api
from os_faults.tests.unit import test


@ddt.ddt
class TestHumanAPI(test.TestCase):

    def setUp(self):
        super(TestHumanAPI, self).setUp()
        self.destructor = mock.MagicMock()
        self.service = mock.MagicMock(service_api.Service)

        self.destructor.get_service = mock.MagicMock(
            return_value=self.service)

    @ddt.data(('restart', 'keystone'), ('kill', 'nova-api'))
    @ddt.unpack
    def test_service_action(self, action, service_name):
        command = '%s %s service' % (action, service_name)
        human.execute(self.destructor, command)

        self.destructor.get_service.assert_called_once_with(
            name=service_name)
        getattr(self.service, action).assert_called_once_with()

    @ddt.data(('restart', 'keystone', 'random'), ('kill', 'nova-api', 'one'))
    @ddt.unpack
    def test_service_action_on_random_node(self, action, service_name, node):
        nodes = mock.MagicMock(node_collection.NodeCollection)
        self.service.get_nodes = mock.MagicMock(return_value=nodes)

        one_node = mock.MagicMock(node_collection.NodeCollection)
        nodes.pick = mock.MagicMock(return_value=one_node)

        command = '%s %s service on %s node' % (action, service_name, node)
        human.execute(self.destructor, command)

        self.destructor.get_service.assert_called_once_with(
            name=service_name)
        getattr(self.service, action).assert_called_once_with(nodes=one_node)
        nodes.pick.assert_called_once()

    @ddt.data(('freeze', 'keystone', 5))
    @ddt.unpack
    def test_service_action_with_duration(self, action, service_name, t):
        command = '%s %s service for %d seconds' % (action, service_name, t)
        human.execute(self.destructor, command)

        self.destructor.get_service.assert_called_once_with(
            name=service_name)
        getattr(self.service, action).assert_called_once_with(sec=t)

    @ddt.data(('restart', 'keystone', 'node'), ('kill', 'nova-api', 'node'))
    @ddt.unpack
    def test_service_action_on_fqdn_node(self, action, service_name, node):
        nodes = mock.MagicMock(node_collection.NodeCollection)
        self.destructor.get_nodes.return_value = nodes

        command = '%s %s service on %s node' % (action, service_name, node)
        human.execute(self.destructor, command)

        self.destructor.get_service.assert_called_once_with(
            name=service_name)
        self.destructor.get_nodes.assert_called_once_with(fqdns=[node])
        getattr(self.service, action).assert_called_once_with(nodes=nodes)

    @ddt.data(('reboot', 'keystone'), ('reset', 'nova-api'))
    @ddt.unpack
    def test_node_action_on_all_nodes(self, action, service_name):
        nodes = mock.MagicMock(node_collection.NodeCollection)
        self.service.get_nodes = mock.MagicMock(return_value=nodes)

        command = '%s node with %s service' % (action, service_name)
        human.execute(self.destructor, command)

        self.destructor.get_service.assert_called_once_with(
            name=service_name)
        getattr(nodes, action).assert_called_once_with()

    @ddt.data(('reboot', 'keystone'), ('reset', 'nova-api'))
    @ddt.unpack
    def test_node_action_on_random_node(self, action, service_name):
        nodes = mock.MagicMock(node_collection.NodeCollection)
        nodes2 = mock.MagicMock(node_collection.NodeCollection)
        self.service.get_nodes = mock.MagicMock(return_value=nodes)
        nodes.pick = mock.MagicMock(return_value=nodes2)

        command = '%s one node with %s service' % (action, service_name)
        human.execute(self.destructor, command)

        self.destructor.get_service.assert_called_once_with(
            name=service_name)
        nodes.pick.assert_called_once()
        getattr(nodes2, action).assert_called_once_with()

    @ddt.data('reboot', 'poweroff', 'poweron')
    def test_node_action_by_fqdn(self, action):
        destructor = mock.MagicMock()
        nodes = mock.MagicMock(node_collection.NodeCollection)
        destructor.get_nodes = mock.MagicMock(return_value=nodes)

        command = '%s node-2.local node' % action.capitalize()
        human.execute(destructor, command)

        destructor.get_nodes.assert_called_once_with(fqdns=['node-2.local'])
        getattr(nodes, action).assert_called_once()

    @ddt.data('cpu', 'memory', 'disk', 'kernel')
    def test_stress_by_fqdn(self, target):
        action = 'stress'
        duration = 20
        destructor = mock.MagicMock()
        nodes = mock.MagicMock(node_collection.NodeCollection)
        destructor.get_nodes = mock.MagicMock(return_value=nodes)

        command = 'stress %s for %d seconds on node-2.local node' % (
            target, duration)
        human.execute(destructor, command)

        destructor.get_nodes.assert_called_once_with(fqdns=['node-2.local'])
        getattr(nodes, action).assert_called_once_with(
            target=target, duration=duration)

    @ddt.data('cpu', 'memory', 'disk', 'kernel')
    def test_stress_target(self, target):
        action = 'stress'
        duration = 20
        destructor = mock.MagicMock()
        nodes = mock.MagicMock(node_collection.NodeCollection)
        destructor.get_nodes = mock.MagicMock(return_value=nodes)

        command = 'stress %s for %d seconds on nodes' % (target, duration)
        human.execute(destructor, command)

        destructor.get_nodes.assert_called_once_with()
        getattr(nodes, action).assert_called_once_with(
            target=target, duration=duration)

    @ddt.data(('CPU', 'cpu', 10, 'keystone'),
              ('disk', 'disk', 20, 'nova-api'))
    @ddt.unpack
    def test_stress_by_service_on_fqdn_node(self, user_target, cmd_target,
                                            duration, service_name):
        action = 'stress'
        nodes = mock.MagicMock(node_collection.NodeCollection)
        self.service.get_nodes.return_value = nodes

        command = 'stress %s for %d seconds on all nodes with %s service' % (
            user_target, duration, service_name)
        human.execute(self.destructor, command)

        getattr(nodes, action).assert_called_once_with(
            target=cmd_target, duration=duration)

    @ddt.data(('Disconnect', 'disconnect'), ('Connect', 'connect'))
    @ddt.unpack
    def test_network_on_nodes_by_fqdn(self, user_action, action):
        destructor = mock.MagicMock()
        nodes = mock.MagicMock(node_collection.NodeCollection)
        destructor.get_nodes = mock.MagicMock(return_value=nodes)

        command = '%s storage network on node-2.local node' % user_action
        human.execute(destructor, command)

        destructor.get_nodes.assert_called_once_with(fqdns=['node-2.local'])
        getattr(nodes, action).assert_called_once_with(network_name='storage')

    @ddt.data(('disconnect', 'storage', 'mysql'),
              ('connect', 'management', 'rabbitmq'))
    @ddt.unpack
    def test_network_on_nodes_by_service(
            self, action, network_name, service_name):
        nodes = mock.MagicMock(node_collection.NodeCollection)
        self.service.get_nodes = mock.MagicMock(return_value=nodes)

        command = '%s %s network on node with %s service' % (
            action, network_name, service_name)
        human.execute(self.destructor, command)

        self.destructor.get_service.assert_called_once_with(
            name=service_name)
        self.service.get_nodes.assert_called_once()
        getattr(nodes, action).assert_called_once_with(
            network_name=network_name)

    @ddt.data(('disconnect', 'storage', 'one', 'mysql'),
              ('connect', 'management', 'random', 'rabbitmq'))
    @ddt.unpack
    def test_network_on_nodes_by_service_picked_node(
            self, action, network_name, node, service_name):
        nodes = mock.MagicMock(node_collection.NodeCollection)
        nodes2 = mock.MagicMock(node_collection.NodeCollection)
        self.service.get_nodes = mock.MagicMock(return_value=nodes)
        nodes.pick = mock.MagicMock(return_value=nodes2)

        command = '%s %s network on %s node with %s service' % (
            action, network_name, node, service_name)
        human.execute(self.destructor, command)

        self.destructor.get_service.assert_called_once_with(
            name=service_name)
        self.service.get_nodes.assert_called_once()
        nodes.pick.assert_called_once()
        getattr(nodes2, action).assert_called_once_with(
            network_name=network_name)

    def test_malformed_query(self):
        destructor = mock.MagicMock()

        command = 'inject some fault'
        self.assertRaises(error.OSFException,
                          human.execute, destructor, command)
os-faults-0.1.17/os_faults/tests/unit/api/test_node_collection.py
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import copy

import mock
import six

from os_faults.api import cloud_management
from os_faults.api import error
from os_faults.api import node_collection
from os_faults.api import power_management
from os_faults.tests.unit import test


class MyNodeCollection(node_collection.NodeCollection):
    pass


class NodeCollectionTestCase(test.TestCase):

    def setUp(self):
        super(NodeCollectionTestCase, self).setUp()
        self.mock_cloud_management = mock.Mock(
            spec=cloud_management.CloudManagement)
        self.mock_power_manager = mock.Mock(
            spec=power_management.PowerManager)
        self.mock_cloud_management.power_manager = self.mock_power_manager
        self.hosts = [
            node_collection.Host(ip='10.0.0.2', mac='09:7b:74:90:63:c1',
                                 fqdn='node1.com'),
            node_collection.Host(ip='10.0.0.3', mac='09:7b:74:90:63:c2',
                                 fqdn='node2.com'),
            node_collection.Host(ip='10.0.0.4', mac='09:7b:74:90:63:c3',
                                 fqdn='node3.com'),
            node_collection.Host(ip='10.0.0.5', mac='09:7b:74:90:63:c4',
                                 fqdn='node4.com'),
        ]
        self.node_collection = node_collection.NodeCollection(
            cloud_management=self.mock_cloud_management,
            hosts=copy.deepcopy(self.hosts))

        self.hosts2 = [
            node_collection.Host(ip='10.0.0.7', mac='09:7b:74:90:63:c7',
                                 fqdn='node6.com'),
            node_collection.Host(ip='10.0.0.3', mac='09:7b:74:90:63:c2',
                                 fqdn='node2.com'),
            node_collection.Host(ip='10.0.0.6', mac='09:7b:74:90:63:c6',
                                 fqdn='node5.com'),
            node_collection.Host(ip='10.0.0.2', mac='09:7b:74:90:63:c1',
                                 fqdn='node1.com'),
        ]
        self.node_collection2 = node_collection.NodeCollection(
            cloud_management=self.mock_cloud_management,
            hosts=copy.deepcopy(self.hosts2))

    def test_check_types_wrong_type(self):
        collection = MyNodeCollection(None, [])
        self.assertRaises(TypeError,
                          self.node_collection._check_nodes_types, collection)
        self.assertRaises(TypeError,
                          collection._check_nodes_types, self.node_collection)

    def test_check_types_wrong_cloud_management(self):
        collection = node_collection.NodeCollection(None, [])
        self.assertRaises(error.NodeCollectionError,
                          self.node_collection._check_nodes_types, collection)
        self.assertRaises(error.NodeCollectionError,
                          collection._check_nodes_types, self.node_collection)

    def test_repr(self):
        self.assertIsInstance(repr(self.node_collection), six.string_types)

    def test_len(self):
        self.assertEqual(4, len(self.node_collection))

    def test_add(self):
        collection = self.node_collection + self.node_collection2
        self.assertEqual(['10.0.0.2', '10.0.0.3', '10.0.0.4', '10.0.0.5',
                          '10.0.0.6', '10.0.0.7'], collection.get_ips())

    def test_sub(self):
        collection = self.node_collection - self.node_collection2
        self.assertEqual(['10.0.0.4', '10.0.0.5'], collection.get_ips())

    def test_and(self):
        collection = self.node_collection & self.node_collection2
        self.assertEqual(['10.0.0.2', '10.0.0.3'], collection.get_ips())

    def test_or(self):
        collection = self.node_collection | self.node_collection2
        self.assertEqual(['10.0.0.2', '10.0.0.3', '10.0.0.4', '10.0.0.5',
                          '10.0.0.6', '10.0.0.7'], collection.get_ips())

    def test_xor(self):
        collection = self.node_collection ^ self.node_collection2
        self.assertEqual(['10.0.0.4', '10.0.0.5', '10.0.0.6', '10.0.0.7'],
                         collection.get_ips())

    def test_in(self):
        self.assertIn(self.hosts[0], self.node_collection)

    def test_not_in(self):
        self.assertNotIn(self.hosts2[2], self.node_collection)

    def test_iter(self):
        self.assertEqual(self.hosts, list(self.node_collection))

    def test_iterate_hosts(self):
        self.assertEqual(self.hosts,
                         list(self.node_collection.iterate_hosts()))

    def test_get_ips(self):
        self.assertEqual(['10.0.0.2', '10.0.0.3', '10.0.0.4', '10.0.0.5'],
                         self.node_collection.get_ips())

    def test_get_macs(self):
        self.assertEqual(['09:7b:74:90:63:c1', '09:7b:74:90:63:c2',
                          '09:7b:74:90:63:c3', '09:7b:74:90:63:c4'],
                         self.node_collection.get_macs())

    def test_get_fqdns(self):
        self.assertEqual(['node1.com', 'node2.com', 'node3.com', 'node4.com'],
                         self.node_collection.get_fqdns())

    def test_pick(self):
        one = self.node_collection.pick()
        self.assertEqual(1, len(one))
        self.assertIn(next(iter(one.hosts)), self.hosts)

    def test_filter(self):
        one = self.node_collection.filter(lambda host: host.ip == '10.0.0.2')
        self.assertEqual(1, len(one))
        self.assertEqual(self.hosts[0], one.hosts[0])

    def test_filter_error(self):
        self.assertRaises(error.NodeCollectionError,
                          self.node_collection.filter,
                          lambda host: host.ip == 'foo')

    def test_run_task(self):
        result = self.node_collection.run_task({'foo': 'bar'},
                                               raise_on_error=False)
        mock_execute_on_cloud = self.mock_cloud_management.execute_on_cloud
        expected_result = mock_execute_on_cloud.return_value
        self.assertIs(result, expected_result)
        mock_execute_on_cloud.assert_called_once_with(
            self.hosts, {'foo': 'bar'}, raise_on_error=False)

    def test_pick_count(self):
        two = self.node_collection.pick(count=2)
        self.assertEqual(2, len(two))
        for host in two:
            self.assertIn(host, self.node_collection)

    def test_pick_exception(self):
        self.assertRaises(
            error.NodeCollectionError, self.node_collection.pick, count=10)

    def test_poweroff(self):
        self.node_collection.poweroff()
        self.mock_power_manager.poweroff.assert_called_once_with(self.hosts)

    def test_poweron(self):
        self.node_collection.poweron()
        self.mock_power_manager.poweron.assert_called_once_with(self.hosts)

    def test_reset(self):
        self.node_collection.reset()
        self.mock_power_manager.reset.assert_called_once_with(self.hosts)

    def test_shutdown(self):
        self.node_collection.shutdown()
        self.mock_power_manager.shutdown.assert_called_once_with(self.hosts)

    def test_reboot(self):
        self.node_collection.reboot()
        self.mock_cloud_management.execute_on_cloud.assert_called_once_with(
            self.hosts, {'command': 'reboot now'})

    def test_snapshot(self):
        self.node_collection.snapshot('foo')
        self.mock_power_manager.snapshot.assert_called_once_with(
            self.hosts, 'foo', True)

    def test_revert(self):
        self.node_collection.revert('foo')
        self.mock_power_manager.revert.assert_called_once_with(
            self.hosts, 'foo', True)
os-faults-0.1.17/os_faults/tests/unit/api/test_power_management.py
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock

from os_faults.api import error
from os_faults.api import node_collection
from os_faults.api import power_management
from os_faults.tests.unit import test


class PowerManagerTestCase(test.TestCase):

    def setUp(self):
        super(PowerManagerTestCase, self).setUp()
        self.dummy_driver1 = mock.Mock(spec=power_management.PowerDriver)
        self.dummy_driver1.supports.side_effect = \
            lambda host: 'c1' in host.mac
        self.dummy_driver2 = mock.Mock(spec=power_management.PowerDriver)
        self.dummy_driver2.supports.side_effect = \
            lambda host: 'c2' in host.mac
        self.dummy_drivers = [self.dummy_driver1, self.dummy_driver2]

        self.hosts = [
            node_collection.Host(ip='10.0.0.2', mac='09:7b:74:90:63:c1',
                                 fqdn='node1.com'),
            node_collection.Host(ip='10.0.0.3', mac='09:7b:74:90:63:c2',
                                 fqdn='node2.com'),
        ]

        self.pm = power_management.PowerManager()
        self.pm.add_driver(self.dummy_driver1)
        self.pm.add_driver(self.dummy_driver2)

    # NOTE: the original tests called `mock.called_once_with(...)`, which is
    # not an assertion and silently passes; fixed to `assert_called_once_with`.
    def test_poweroff(self):
        self.pm.poweroff(self.hosts)
        self.dummy_driver1.poweroff.assert_called_once_with(
            host=self.hosts[0])
        self.dummy_driver2.poweroff.assert_called_once_with(
            host=self.hosts[1])

    def test_poweron(self):
        self.pm.poweron(self.hosts)
        self.dummy_driver1.poweron.assert_called_once_with(
            host=self.hosts[0])
        self.dummy_driver2.poweron.assert_called_once_with(
            host=self.hosts[1])

    def test_reset(self):
        self.pm.reset(self.hosts)
        self.dummy_driver1.reset.assert_called_once_with(host=self.hosts[0])
        self.dummy_driver2.reset.assert_called_once_with(host=self.hosts[1])

    def test_shutdown(self):
        self.pm.shutdown(self.hosts)
        self.dummy_driver1.shutdown.assert_called_once_with(
            host=self.hosts[0])
        self.dummy_driver2.shutdown.assert_called_once_with(
            host=self.hosts[1])

    def test_snapshot(self):
        self.pm.snapshot(self.hosts, 'snap1', suspend=False)
        self.dummy_driver1.snapshot.assert_called_once_with(
            host=self.hosts[0], snapshot_name='snap1', suspend=False)
        self.dummy_driver2.snapshot.assert_called_once_with(
            host=self.hosts[1], snapshot_name='snap1', suspend=False)

    def test_revert(self):
        self.pm.revert(self.hosts, 'snap1', resume=False)
        self.dummy_driver1.revert.assert_called_once_with(
            host=self.hosts[0], snapshot_name='snap1', resume=False)
        self.dummy_driver2.revert.assert_called_once_with(
            host=self.hosts[1], snapshot_name='snap1', resume=False)

    def test_run_error(self):
        self.dummy_driver2.reset.side_effect = Exception()
        exc = self.assertRaises(error.PowerManagementError,
                                self.pm.reset, self.hosts)
        self.assertEqual("There are some errors when working the driver. "
                         "Please, check logs for more details.", str(exc))

    def test_run_no_supported_driver(self):
        self.dummy_driver2.supports.side_effect = None
        self.dummy_driver2.supports.return_value = False
        exc = self.assertRaises(error.PowerManagementError,
                                self.pm.reset, self.hosts)
        self.assertEqual("No supported driver found for host "
                         "Host(ip='10.0.0.3', mac='09:7b:74:90:63:c2', "
                         "fqdn='node2.com', libvirt_name=None)", str(exc))

    def test_run_no_drivers(self):
        self.pm = power_management.PowerManager()
        exc = self.assertRaises(error.PowerManagementError,
                                self.pm.reset, self.hosts)
        self.assertEqual("No supported driver found for host "
                         "Host(ip='10.0.0.2', mac='09:7b:74:90:63:c1', "
                         "fqdn='node1.com', libvirt_name=None)", str(exc))
os-faults-0.1.17/os_faults/tests/unit/cmd/
os-faults-0.1.17/os_faults/tests/unit/cmd/__init__.py
os-faults-0.1.17/os_faults/tests/unit/cmd/test_cmd.py
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock

from os_faults.cmd import cmd
from os_faults.tests.unit import test


class CmdTestCase(test.TestCase):

    def test_main_no_args(self):
        self.assertRaises(SystemExit, cmd.main)

    @mock.patch('os_faults.human_api')
    @mock.patch('os_faults.connect')
    def test_main_verify(self, mock_connect, mock_human_api):
        with mock.patch('sys.argv', ['', '-c', 'my_file', '-v']):
            cmd.main()

        mock_connect.assert_called_once_with(config_filename='my_file')
        mock_connect.return_value.verify.assert_called_once_with()
        mock_human_api.assert_not_called()

    @mock.patch('os_faults.human_api')
    @mock.patch('os_faults.connect')
    def test_main_command(self, mock_connect, mock_human_api):
        with mock.patch('sys.argv', ['', '-c', 'my_file', 'my_command']):
            cmd.main()

        mock_connect.assert_called_once_with(config_filename='my_file')
        mock_connect.return_value.verify.assert_not_called()
        mock_human_api.assert_called_once_with(
            mock_connect.return_value, 'my_command')

    @mock.patch('os_faults.human_api')
    @mock.patch('os_faults.connect')
    def test_main_verify_and_command(self, mock_connect, mock_human_api):
        with mock.patch('sys.argv', ['', '-c', 'my_file', '-v',
                                     'my_command']):
            cmd.main()

        mock_connect.assert_called_once_with(config_filename='my_file')
        mock_connect.return_value.verify.assert_called_once_with()
        mock_human_api.assert_called_once_with(
            mock_connect.return_value, 'my_command')
os-faults-0.1.17/os_faults/tests/unit/cmd/test_main.py
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os

from click import testing
import mock

from os_faults.api import cloud_management
from os_faults.api import node_collection
from os_faults.cmd import main
from os_faults.tests.unit import test


class MainTestCase(test.TestCase):

    def setUp(self):
        super(MainTestCase, self).setUp()
        self.runner = testing.CliRunner()

    def test_version(self):
        result = self.runner.invoke(main.main, ['--version'])
        self.assertEqual(0, result.exit_code)
        self.assertIn('Version', result.output)

    @mock.patch('os_faults.connect')
    def test_verify(self, mock_connect):
        with self.runner.isolated_filesystem():
            with open('my.yaml', 'w') as f:
                f.write('foo')

            result = self.runner.invoke(main.main, ['verify'],
                                        env={'OS_FAULTS_CONFIG': 'my.yaml'})

        self.assertEqual(0, result.exit_code)
        mock_connect.assert_called_once_with(config_filename='my.yaml')
        destructor = mock_connect.return_value
        destructor.verify.assert_called_once_with()

    @mock.patch('os_faults.connect')
    def test_verify_with_config(self, mock_connect):
        with self.runner.isolated_filesystem():
            with open('my.yaml', 'w') as f:
                f.write('foo')
            myconf = os.path.abspath(f.name)

            result = self.runner.invoke(main.main, ['verify', '-c', myconf])

        self.assertEqual(0, result.exit_code)
        mock_connect.assert_called_once_with(config_filename=myconf)
        destructor = mock_connect.return_value
        destructor.verify.assert_called_once_with()

    @mock.patch('os_faults.discover')
    def test_discover(self, mock_discover):
        mock_discover.return_value = {'foo': 'bar'}
        with self.runner.isolated_filesystem():
            with open('my.yaml', 'w') as f:
                f.write('foo')
            myconf = os.path.abspath(f.name)

            result = self.runner.invoke(
                main.main, ['discover', '-c', myconf, 'my-new.yaml'])

            self.assertEqual(0, result.exit_code)
            mock_discover.assert_called_once_with('foo')
            with open('my-new.yaml') as f:
                self.assertEqual('foo: bar\n', f.read())

    @mock.patch('os_faults.connect')
    def test_nodes(self, mock_connect):
        cloud_management_mock = mock.create_autospec(
            cloud_management.CloudManagement)
        mock_connect.return_value = cloud_management_mock
        cloud_management_mock.get_nodes.return_value.hosts = [
            node_collection.Host(
                ip='10.0.0.2', mac='09:7b:74:90:63:c1', fqdn='node1.local'),
            node_collection.Host(
                ip='10.0.0.3', mac='09:7b:74:90:63:c2', fqdn='node2.local')]

        with self.runner.isolated_filesystem():
            with open('my.yaml', 'w') as f:
                f.write('foo')
            myconf = os.path.abspath(f.name)

            result = self.runner.invoke(main.main, ['nodes', '-c', myconf])

        self.assertEqual(0, result.exit_code)
        self.assertEqual(
            '- fqdn: node1.local\n'
            '  ip: 10.0.0.2\n'
            '  mac: 09:7b:74:90:63:c1\n'
            '- fqdn: node2.local\n'
            '  ip: 10.0.0.3\n'
            '  mac: 09:7b:74:90:63:c2\n',
            result.output)

    @mock.patch('os_faults.registry.get_drivers')
    def test_drivers(self, mock_get_drivers):
        mock_get_drivers.return_value = {'foo': 1, 'bar': 2}
        with self.runner.isolated_filesystem():
            result = self.runner.invoke(main.main, ['drivers'])

        self.assertEqual(0, result.exit_code)
        self.assertEqual(
            '- bar\n'
            '- foo\n',
            result.output)
os-faults-0.1.17/os_faults/tests/unit/drivers/
os-faults-0.1.17/os_faults/tests/unit/drivers/__init__.py
os-faults-0.1.17/os_faults/tests/unit/drivers/cloud/
os-faults-0.1.17/os_faults/tests/unit/drivers/cloud/__init__.py
os-faults-0.1.17/os_faults/tests/unit/drivers/cloud/test_devstack.py
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy

import ddt
import mock

from os_faults.api import node_collection
from os_faults.drivers.cloud import devstack
from os_faults.tests.unit import fakes
from os_faults.tests.unit import test


class DevStackNodeTestCase(test.TestCase):

    def setUp(self):
        super(DevStackNodeTestCase, self).setUp()
        self.mock_cloud_management = mock.Mock(
            spec=devstack.DevStackManagement)
        self.host = node_collection.Host(
            ip='10.0.0.2', mac='09:7b:74:90:63:c1', fqdn='')

        self.node_collection = devstack.DevStackNode(
            cloud_management=self.mock_cloud_management,
            hosts=[copy.deepcopy(self.host)])

    def test_connect(self):
        pass

    def test_disconnect(self):
        pass


@ddt.ddt
class DevStackManagementTestCase(test.TestCase):

    def setUp(self):
        super(DevStackManagementTestCase, self).setUp()
        self.conf = {'address': '10.0.0.2', 'username': 'root'}
        self.host = node_collection.Host('10.0.0.2')
        self.discoverd_host = node_collection.Host(ip='10.0.0.2',
                                                   mac='09:7b:74:90:63:c1',
                                                   fqdn='')

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    def test_verify(self, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [fakes.FakeAnsibleResult(payload={'stdout': '09:7b:74:90:63:c1'},
                                     host='10.0.0.2')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2')],
        ]

        devstack_management = devstack.DevStackManagement(self.conf)
        devstack_management.verify()

        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.host],
                      {'command': 'cat /sys/class/net/eth0/address'}),
        ])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    def test_verify_slaves(self, mock_ansible_runner):
        self.conf['slaves'] = ['10.0.0.3', '10.0.0.4']
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [fakes.FakeAnsibleResult(payload={'stdout': 'mac1'},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': 'mac2'},
                                     host='10.0.0.3'),
             fakes.FakeAnsibleResult(payload={'stdout': 'mac3'},
                                     host='10.0.0.4')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.4')],
        ]

        devstack_management = devstack.DevStackManagement(self.conf)
        devstack_management.verify()

        hosts = [
            node_collection.Host('10.0.0.2'),
            node_collection.Host('10.0.0.3'),
            node_collection.Host('10.0.0.4')
        ]
        ansible_runner_inst.execute.assert_has_calls([
            mock.call(hosts, {'command': 'cat /sys/class/net/eth0/address'}),
        ])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    def test_execute_on_cloud(self, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [fakes.FakeAnsibleResult(payload={'stdout': '/root'})],
        ]

        devstack_management = devstack.DevStackManagement(self.conf)
        result = devstack_management.execute_on_cloud(
            ['10.0.0.2'], {'command': 'pwd'})

        ansible_runner_inst.execute.assert_called_once_with(
            ['10.0.0.2'], {'command': 'pwd'})
        self.assertEqual(
            [fakes.FakeAnsibleResult(payload={'stdout': '/root'})], result)

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    def test_get_nodes(self, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [fakes.FakeAnsibleResult(payload={'stdout': '09:7b:74:90:63:c1'},
                                     host='10.0.0.2')],
        ]

        devstack_management = devstack.DevStackManagement(self.conf)
        nodes = devstack_management.get_nodes()

        ansible_runner_inst.execute.assert_called_once_with(
            [self.host], {'command': 'cat /sys/class/net/eth0/address'})

        self.assertIsInstance(nodes, devstack.DevStackNode)
        self.assertEqual(
            [node_collection.Host(ip='10.0.0.2', mac='09:7b:74:90:63:c1',
                                  fqdn='')],
            nodes.hosts)

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    def test_get_nodes_with_slaves(self, mock_ansible_runner):
        self.conf['slaves'] = ['10.0.0.3', '10.0.0.4']
        self.conf['iface'] = 'eth1'
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [fakes.FakeAnsibleResult(payload={'stdout': '09:7b:74:90:63:c1'},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': '09:7b:74:90:63:c2'},
                                     host='10.0.0.3'),
             fakes.FakeAnsibleResult(payload={'stdout': '09:7b:74:90:63:c3'},
                                     host='10.0.0.4')],
        ]

        hosts = [
            node_collection.Host('10.0.0.2'),
            node_collection.Host('10.0.0.3'),
            node_collection.Host('10.0.0.4')
        ]
        devstack_management = devstack.DevStackManagement(self.conf)
        nodes = devstack_management.get_nodes()

        ansible_runner_inst.execute.assert_called_once_with(
            hosts, {'command': 'cat /sys/class/net/eth1/address'})

        self.assertIsInstance(nodes, devstack.DevStackNode)
        self.assertEqual(
            [node_collection.Host(ip='10.0.0.2', mac='09:7b:74:90:63:c1',
                                  fqdn=''),
             node_collection.Host(ip='10.0.0.3', mac='09:7b:74:90:63:c2',
                                  fqdn=''),
             node_collection.Host(ip='10.0.0.4', mac='09:7b:74:90:63:c3',
                                  fqdn='')],
            nodes.hosts)

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data(*devstack.DevStackManagement.SERVICES.keys())
    def test_get_service_nodes(self, service_name, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [fakes.FakeAnsibleResult(payload={'stdout': '09:7b:74:90:63:c1'},
                                     host='10.0.0.2')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2')]
        ]

        devstack_management = devstack.DevStackManagement(self.conf)
        service = devstack_management.get_service(service_name)
        nodes = service.get_nodes()

        cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call(
                [self.host],
                {'command': 'cat /sys/class/net/eth0/address'}),
            mock.call([self.discoverd_host], {'command': cmd}, [])
        ])

        self.assertEqual(
            [node_collection.Host(ip='10.0.0.2', mac='09:7b:74:90:63:c1',
                                  fqdn='')],
            nodes.hosts)

    def test_validate_services(self):
        devstack_management = devstack.DevStackManagement(self.conf)
        devstack_management.validate_services()


@ddt.ddt
class DevStackServiceTestCase(test.TestCase):

    def setUp(self):
        super(DevStackServiceTestCase, self).setUp()
        self.conf = {'address': '10.0.0.2', 'username': 'root'}
        self.host = node_collection.Host('10.0.0.2')
        self.discoverd_host = node_collection.Host(ip='10.0.0.2',
                                                   mac='09:7b:74:90:63:c1',
                                                   fqdn='')

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data(*devstack.DevStackManagement.SERVICES.keys())
    def test_restart(self, service_name, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [fakes.FakeAnsibleResult(payload={'stdout': '09:7b:74:90:63:c1'},
                                     host='10.0.0.2')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2')]
        ]

        devstack_management = devstack.DevStackManagement(self.conf)
        service = devstack_management.get_service(service_name)
        service.restart()

        cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call(
                [self.host],
                {'command': 'cat /sys/class/net/eth0/address'}),
            mock.call([self.discoverd_host], {'command': cmd}, []),
            mock.call([self.discoverd_host], {'shell': service.restart_cmd})
        ])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data(*devstack.DevStackManagement.SERVICES.keys())
    def test_terminate(self, service_name, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [fakes.FakeAnsibleResult(payload={'stdout': '09:7b:74:90:63:c1'},
                                     host='10.0.0.2')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2')]
        ]

        devstack_management = devstack.DevStackManagement(self.conf)
        service = devstack_management.get_service(service_name)
        service.terminate()

        cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call(
                [self.host],
                {'command': 'cat /sys/class/net/eth0/address'}),
            mock.call([self.discoverd_host], {'command': cmd}, []),
            mock.call([self.discoverd_host], {'shell': service.terminate_cmd})
        ])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data(*devstack.DevStackManagement.SERVICES.keys())
    def test_start(self, service_name, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [fakes.FakeAnsibleResult(payload={'stdout': '09:7b:74:90:63:c1'},
                                     host='10.0.0.2')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2')]
        ]

        devstack_management = devstack.DevStackManagement(self.conf)
        service = devstack_management.get_service(service_name)
        service.start()

        cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call(
                [self.host],
                {'command': 'cat /sys/class/net/eth0/address'}),
            mock.call([self.discoverd_host], {'command': cmd}, []),
            mock.call([self.discoverd_host], {'shell': service.start_cmd})
        ])
os-faults-0.1.17/os_faults/tests/unit/drivers/cloud/test_devstack_systemd.py
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import ddt import mock from os_faults.api import node_collection from os_faults.drivers.cloud import devstack_systemd from os_faults.tests.unit.drivers.cloud import test_devstack from os_faults.tests.unit import fakes from os_faults.tests.unit import test @ddt.ddt class DevStackSystemdManagementTestCase( test_devstack.DevStackManagementTestCase): def setUp(self): super(DevStackSystemdManagementTestCase, self).setUp() @ddt.ddt class DevStackSystemdServiceTestCase(test.TestCase): def setUp(self): super(DevStackSystemdServiceTestCase, self).setUp() self.conf = {'address': '10.0.0.2', 'username': 'root'} self.host = node_collection.Host('10.0.0.2') self.discoverd_host = node_collection.Host(ip='10.0.0.2', mac='09:7b:74:90:63:c1', fqdn='') @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True) @ddt.data(*devstack_systemd.DevStackSystemdManagement.SERVICES.keys()) def test_restart(self, service_name, mock_ansible_runner): ansible_runner_inst = mock_ansible_runner.return_value ansible_runner_inst.execute.side_effect = [ [fakes.FakeAnsibleResult(payload={'stdout': '09:7b:74:90:63:c1'}, host='10.0.0.2')], [fakes.FakeAnsibleResult(payload={'stdout': ''}, host='10.0.0.2')], [fakes.FakeAnsibleResult(payload={'stdout': ''}, host='10.0.0.2')] ] devstack_management = devstack_systemd.DevStackSystemdManagement( self.conf) service = devstack_management.get_service(service_name) service.restart() cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format( service.grep) ansible_runner_inst.execute.assert_has_calls([ mock.call( [self.host], {'command': 'cat /sys/class/net/eth0/address'}), mock.call([self.discoverd_host], {'command': cmd}, []), mock.call([self.discoverd_host], {'shell': service.restart_cmd}) ]) @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True) @ddt.data(*devstack_systemd.DevStackSystemdManagement.SERVICES.keys()) def test_terminate(self, service_name, mock_ansible_runner): ansible_runner_inst = mock_ansible_runner.return_value 
        ansible_runner_inst.execute.side_effect = [
            [fakes.FakeAnsibleResult(payload={'stdout': '09:7b:74:90:63:c1'},
                                     host='10.0.0.2')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2')]
        ]

        devstack_management = devstack_systemd.DevStackSystemdManagement(
            self.conf)

        service = devstack_management.get_service(service_name)
        service.terminate()

        cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call(
                [self.host], {'command': 'cat /sys/class/net/eth0/address'}),
            mock.call([self.discoverd_host], {'command': cmd}, []),
            mock.call([self.discoverd_host], {'shell': service.terminate_cmd})
        ])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data(*devstack_systemd.DevStackSystemdManagement.SERVICES.keys())
    def test_start(self, service_name, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [fakes.FakeAnsibleResult(payload={'stdout': '09:7b:74:90:63:c1'},
                                     host='10.0.0.2')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2')]
        ]

        devstack_management = devstack_systemd.DevStackSystemdManagement(
            self.conf)

        service = devstack_management.get_service(service_name)
        service.start()

        cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call(
                [self.host], {'command': 'cat /sys/class/net/eth0/address'}),
            mock.call([self.discoverd_host], {'command': cmd}, []),
            mock.call([self.discoverd_host], {'shell': service.start_cmd})
        ])
os-faults-0.1.17/os_faults/tests/unit/drivers/cloud/test_fuel_management.py000066400000000000000000000171761317662032700272020ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import ddt
import mock

from os_faults.ansible import executor
from os_faults.api import error
from os_faults.api import node_collection
from os_faults.drivers.cloud import fuel
from os_faults.tests.unit import fakes
from os_faults.tests.unit import test


@ddt.ddt
class FuelManagementTestCase(test.TestCase):

    def setUp(self):
        super(FuelManagementTestCase, self).setUp()
        self.conf = {
            'address': 'fuel.local',
            'username': 'root',
        }
        self.fake_ansible_result = fakes.FakeAnsibleResult(
            payload={
                'stdout': '[{"ip": "10.0.0.2", "mac": "02", "fqdn": "node-2"},'
                          ' {"ip": "10.0.0.3", "mac": "03", "fqdn": "node-3"}]'
            })

        self.master_host = node_collection.Host('fuel.local')
        self.hosts = [
            node_collection.Host(ip='10.0.0.2', mac='02', fqdn='node-2'),
            node_collection.Host(ip='10.0.0.3', mac='03', fqdn='node-3'),
        ]

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data((
        dict(address='fuel.local', username='root'),
        (mock.call(private_key_file=None, remote_user='root'),
         mock.call(private_key_file=None, remote_user='root',
                   jump_host='fuel.local', serial=None))
    ), (
        dict(address='fuel.local', username='root',
             slave_direct_ssh=True, serial=42),
        (mock.call(private_key_file=None, remote_user='root'),
         mock.call(private_key_file=None, remote_user='root',
                   jump_host=None, serial=42))
    ))
    @ddt.unpack
    def test_init(self, config, expected_runner_calls, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value

        fuel_managment = fuel.FuelManagement(config)

        mock_ansible_runner.assert_has_calls(expected_runner_calls)
        self.assertIs(fuel_managment.master_node_executor,
                      ansible_runner_inst)
        self.assertIs(fuel_managment.cloud_executor, ansible_runner_inst)

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    def test_verify(self, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [fakes.FakeAnsibleResult(payload={'stdout': ''}),
             fakes.FakeAnsibleResult(payload={'stdout': ''})],
        ]

        fuel_managment = fuel.FuelManagement(self.conf)
        fuel_managment.verify()

        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': 'fuel node --json'}),
            mock.call(self.hosts, {'command': 'hostname'}),
        ])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    def test_get_nodes(self, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [[self.fake_ansible_result]]

        fuel_managment = fuel.FuelManagement(self.conf)
        nodes = fuel_managment.get_nodes()

        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': 'fuel node --json'}),
        ])

        self.assertEqual(nodes.hosts, self.hosts)

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    def test_get_nodes_from_discover_driver(self, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        hosts = [
            node_collection.Host(ip='10.0.2.2', mac='09:7b:74:90:63:c2',
                                 fqdn='mynode1.local'),
            node_collection.Host(ip='10.0.2.3', mac='09:7b:74:90:63:c3',
                                 fqdn='mynode2.local'),
        ]
        node_discover_driver = mock.Mock()
        node_discover_driver.discover_hosts.return_value = hosts

        fuel_managment = fuel.FuelManagement(self.conf)
        fuel_managment.set_node_discover(node_discover_driver)
        nodes = fuel_managment.get_nodes()

        self.assertFalse(ansible_runner_inst.execute.called)
        self.assertEqual(hosts, nodes.hosts)

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    def test_execute_on_cloud(self, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [fakes.FakeAnsibleResult(payload={'stdout': ''}),
             fakes.FakeAnsibleResult(payload={'stdout': ''})]
        ]

        fuel_managment = fuel.FuelManagement(self.conf)
        nodes = fuel_managment.get_nodes()
        result = fuel_managment.execute_on_cloud(
            nodes.hosts, {'command': 'mycmd'}, raise_on_error=False)

        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': 'fuel node --json'}),
            mock.call(self.hosts, {'command': 'mycmd'}, []),
        ])

        self.assertEqual(result,
                         [fakes.FakeAnsibleResult(payload={'stdout': ''}),
                          fakes.FakeAnsibleResult(payload={'stdout': ''})])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    def test_get_nodes_fqdns(self, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [[self.fake_ansible_result]]

        fuel_managment = fuel.FuelManagement(self.conf)
        nodes = fuel_managment.get_nodes(fqdns=['node-3'])

        hosts = [
            node_collection.Host(ip='10.0.0.3', mac='03', fqdn='node-3'),
        ]
        self.assertEqual(nodes.hosts, hosts)

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data(*fuel.FuelManagement.SERVICES.keys())
    def test_get_service_nodes(self, service_name, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     status=executor.STATUS_FAILED,
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')]
        ]

        fuel_managment = fuel.FuelManagement(self.conf)
        service = fuel_managment.get_service(service_name)
        nodes = service.get_nodes()

        cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': 'fuel node --json'}),
            mock.call(self.hosts, {'command': cmd}, []),
        ])

        self.assertEqual(nodes.hosts, [self.hosts[1]])

    def test_get_unknown_service(self):
        fuel_managment = fuel.FuelManagement(self.conf)
        self.assertRaises(error.ServiceError,
                          fuel_managment.get_service, 'unknown')

    def test_validate_services(self):
        fuel_managment = fuel.FuelManagement(self.conf)
        fuel_managment.validate_services()
os-faults-0.1.17/os_faults/tests/unit/drivers/cloud/test_fuel_node_collection.py000066400000000000000000000042441317662032700302170ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy

import mock

from os_faults.api import node_collection
from os_faults.drivers.cloud import fuel
from os_faults.tests.unit import test


class FuelNodeCollectionTestCase(test.TestCase):

    def setUp(self):
        super(FuelNodeCollectionTestCase, self).setUp()
        self.mock_cloud_management = mock.Mock(spec=fuel.FuelManagement)
        self.hosts = [
            node_collection.Host(ip='10.0.0.2', mac='09:7b:74:90:63:c1',
                                 fqdn='node1.com'),
            node_collection.Host(ip='10.0.0.3', mac='09:7b:74:90:63:c2',
                                 fqdn='node2.com'),
            node_collection.Host(ip='10.0.0.4', mac='09:7b:74:90:63:c3',
                                 fqdn='node3.com'),
            node_collection.Host(ip='10.0.0.5', mac='09:7b:74:90:63:c4',
                                 fqdn='node4.com'),
        ]

        self.node_collection = fuel.FuelNodeCollection(
            cloud_management=self.mock_cloud_management,
            hosts=copy.deepcopy(self.hosts))

    def test_connect(self):
        self.node_collection.connect(network_name='storage')
        self.mock_cloud_management.execute_on_cloud.assert_called_once_with(
            self.hosts,
            {'fuel_network_mgmt': {'operation': 'up',
                                   'network_name': 'storage'}})

    def test_disconnect(self):
        self.node_collection.disconnect(network_name='storage')
        self.mock_cloud_management.execute_on_cloud.assert_called_once_with(
            self.hosts,
            {'fuel_network_mgmt': {'operation': 'down',
                                   'network_name': 'storage'}})
os-faults-0.1.17/os_faults/tests/unit/drivers/cloud/test_fuel_service.py000066400000000000000000000361001317662032700265140ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import ddt
import mock

from os_faults.ansible import executor
from os_faults.api import error
from os_faults.api import node_collection
from os_faults.drivers.cloud import fuel
from os_faults.tests.unit import fakes
from os_faults.tests.unit import test


@ddt.ddt
class FuelServiceTestCase(test.TestCase):

    def setUp(self):
        super(FuelServiceTestCase, self).setUp()
        self.conf = {'address': 'fuel.local', 'username': 'root'}
        self.fake_ansible_result = fakes.FakeAnsibleResult(
            payload={
                'stdout': '[{"ip": "10.0.0.2", "mac": "02", "fqdn": "node-2"},'
                          ' {"ip": "10.0.0.3", "mac": "03", "fqdn": "node-3"}]'
            })

        self.master_host = node_collection.Host('fuel.local')
        self.hosts = [
            node_collection.Host(ip='10.0.0.2', mac='02', fqdn='node-2'),
            node_collection.Host(ip='10.0.0.3', mac='03', fqdn='node-3'),
        ]

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data(*fuel.FuelManagement.SERVICES.keys())
    def test_kill(self, service_name, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')]
        ]

        fuel_managment = fuel.FuelManagement(self.conf)
        service = fuel_managment.get_service(service_name)
        service.kill()

        get_nodes_cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': 'fuel node --json'}),
            mock.call(self.hosts, {'command': get_nodes_cmd}, []),
            mock.call(self.hosts, {'kill': {'grep': service.grep, 'sig': 9}}),
        ])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data(*fuel.FuelManagement.SERVICES.keys())
    def test_freeze(self, service_name, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')]
        ]

        fuel_managment = fuel.FuelManagement(self.conf)
        service = fuel_managment.get_service(service_name)
        service.freeze()

        get_nodes_cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': 'fuel node --json'}),
            mock.call(self.hosts, {'command': get_nodes_cmd}, []),
            mock.call(self.hosts, {'kill': {'grep': service.grep, 'sig': 19}}),
        ])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data(*fuel.FuelManagement.SERVICES.keys())
    def test_freeze_sec(self, service_name, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')]
        ]

        fuel_managment = fuel.FuelManagement(self.conf)
        service = fuel_managment.get_service(service_name)
        delay_sec = 10
        service.freeze(nodes=None, sec=delay_sec)

        get_nodes_cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': 'fuel node --json'}),
            mock.call(self.hosts, {'command': get_nodes_cmd}, []),
            mock.call(self.hosts, {'freeze': {'grep': service.grep,
                                              'sec': delay_sec}}),
        ])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data(*fuel.FuelManagement.SERVICES.keys())
    def test_unfreeze(self, service_name, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')]
        ]

        fuel_managment = fuel.FuelManagement(self.conf)
        service = fuel_managment.get_service(service_name)
        service.unfreeze()

        get_nodes_cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': 'fuel node --json'}),
            mock.call(self.hosts, {'command': get_nodes_cmd}, []),
            mock.call(self.hosts, {'kill': {'grep': service.grep, 'sig': 18}}),
        ])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data('mysql')
    def test_unplug(self, service_name, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')]
        ]

        fuel_managment = fuel.FuelManagement(self.conf)
        service = fuel_managment.get_service(service_name)
        service.unplug()

        get_nodes_cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': 'fuel node --json'}),
            mock.call(self.hosts, {'command': get_nodes_cmd}, []),
            mock.call(self.hosts,
                      {'iptables': {'protocol': service.port[0],
                                    'port': service.port[1],
                                    'action': 'block',
                                    'service': service.service_name}}),
        ])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data('mysql')
    def test_plug(self, service_name, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')]
        ]

        fuel_managment = fuel.FuelManagement(self.conf)
        service = fuel_managment.get_service(service_name)
        service.plug()

        get_nodes_cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': 'fuel node --json'}),
            mock.call(self.hosts, {'command': get_nodes_cmd}, []),
            mock.call(self.hosts,
                      {'iptables': {'protocol': service.port[0],
                                    'port': service.port[1],
                                    'action': 'unblock',
                                    'service': service.service_name}}),
        ])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data(*fuel.FuelManagement.SERVICES.keys())
    def test_restart(self, service_name, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')]
        ]

        fuel_managment = fuel.FuelManagement(self.conf)
        service = fuel_managment.get_service(service_name)
        service.restart()

        get_nodes_cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': 'fuel node --json'}),
            mock.call(self.hosts, {'command': get_nodes_cmd}, []),
            mock.call(self.hosts, {'shell': service.restart_cmd}),
        ])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data(*fuel.FuelManagement.SERVICES.keys())
    def test_terminate(self, service_name, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')]
        ]

        fuel_managment = fuel.FuelManagement(self.conf)
        service = fuel_managment.get_service(service_name)
        service.terminate()

        get_nodes_cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': 'fuel node --json'}),
            mock.call(self.hosts, {'command': get_nodes_cmd}, []),
            mock.call(self.hosts, {'shell': service.terminate_cmd}),
        ])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data(*fuel.FuelManagement.SERVICES.keys())
    def test_start(self, service_name, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')]
        ]

        fuel_managment = fuel.FuelManagement(self.conf)
        service = fuel_managment.get_service(service_name)
        service.start()

        get_nodes_cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': 'fuel node --json'}),
            mock.call(self.hosts, {'command': get_nodes_cmd}, []),
            mock.call(self.hosts, {'shell': service.start_cmd}),
        ])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    def test_run_node_collection_empty(self, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2',
                                     status=executor.STATUS_FAILED),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3',
                                     status=executor.STATUS_FAILED)],
        ]

        fuel_managment = fuel.FuelManagement(self.conf)
        service = fuel_managment.get_service('keystone')
        exception = self.assertRaises(error.ServiceError, service.restart)
        self.assertEqual('Service keystone is not found on any nodes',
                         str(exception))

        get_nodes_cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': 'fuel node --json'}),
            mock.call(self.hosts, {'command': get_nodes_cmd}, []),
        ])
os-faults-0.1.17/os_faults/tests/unit/drivers/cloud/test_tcpcloud.py000066400000000000000000000433021317662032700256600ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import ddt
import mock

from os_faults.ansible import executor
from os_faults.api import node_collection
from os_faults.drivers.cloud import tcpcloud
from os_faults.tests.unit import fakes
from os_faults.tests.unit import test


@ddt.ddt
class TCPCloudManagementTestCase(test.TestCase):

    def setUp(self):
        super(TCPCloudManagementTestCase, self).setUp()
        self.fake_ansible_result = fakes.FakeAnsibleResult(
            payload={
                'stdout': 'cmp01.mk20.local:\n'
                          '  eth1:\n'
                          '    hwaddr: 09:7b:74:90:63:c2\n'
                          '    inet:\n'
                          '    - address: 10.0.0.2\n'
                          '  eth2:\n'
                          '    hwaddr: 00:00:00:00:00:02\n'
                          '    inet:\n'
                          '    - address: 192.168.1.2\n'
                          'cmp02.mk20.local:\n'
                          '  eth1:\n'
                          '    hwaddr: 00:00:00:00:00:03\n'
                          '    inet:\n'
                          '    - address: 192.168.1.3\n'
                          '  eth2:\n'
                          '    hwaddr: 09:7b:74:90:63:c3\n'
                          '    inet:\n'
                          '    - address: 10.0.0.3\n'
            })
        self.fake_node_ip_result = fakes.FakeAnsibleResult(
            payload={
                'stdout': 'cmp01.mk20.local:\n'
                          '  10.0.0.2\n'
                          'cmp02.mk20.local:\n'
                          '  10.0.0.3\n'
            })

        self.tcp_conf = {
            'address': 'tcp.local',
            'username': 'root',
        }
        self.get_nodes_cmd = (
            "salt -E '^(?!cfg|mon)' network.interfaces --out=yaml")
        self.get_ips_cmd = ("salt -E '^(?!cfg|mon)' "
                            "pillar.get _param:single_address --out=yaml")
        self.master_host = node_collection.Host('tcp.local')
        self.hosts = [
            node_collection.Host(ip='10.0.0.2', mac='09:7b:74:90:63:c2',
                                 fqdn='cmp01.mk20.local'),
            node_collection.Host(ip='10.0.0.3', mac='09:7b:74:90:63:c3',
                                 fqdn='cmp02.mk20.local'),
        ]

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data((
        dict(address='tcp.local', username='root'),
        (mock.call(become=None, private_key_file=None, remote_user='root',
                   password=None),
         mock.call(become=None, jump_host='tcp.local', jump_user='root',
                   private_key_file=None, remote_user='root',
                   password=None, serial=None))
    ), (
        dict(address='tcp.local', username='ubuntu', slave_username='root',
             master_sudo=True, private_key_file='/path/id_rsa'),
        (mock.call(become=True, private_key_file='/path/id_rsa',
                   remote_user='ubuntu', password=None),
         mock.call(become=None, jump_host='tcp.local', jump_user='ubuntu',
                   private_key_file='/path/id_rsa', remote_user='root',
                   password=None, serial=None))
    ), (
        dict(address='tcp.local', username='ubuntu', slave_username='root',
             slave_sudo=True, private_key_file='/path/id_rsa'),
        (mock.call(become=None, private_key_file='/path/id_rsa',
                   remote_user='ubuntu', password=None),
         mock.call(become=True, jump_host='tcp.local', jump_user='ubuntu',
                   private_key_file='/path/id_rsa', remote_user='root',
                   password=None, serial=None))
    ), (
        dict(address='tcp.local', username='ubuntu', slave_username='root',
             slave_sudo=True, private_key_file='/path/id_rsa',
             slave_direct_ssh=True),
        (mock.call(become=None, private_key_file='/path/id_rsa',
                   remote_user='ubuntu', password=None),
         mock.call(become=True, jump_host=None, jump_user=None,
                   private_key_file='/path/id_rsa', remote_user='root',
                   password=None, serial=None))
    ), (
        dict(address='tcp.local', username='root', password='root_pass'),
        (mock.call(become=None, private_key_file=None, remote_user='root',
                   password='root_pass'),
         mock.call(become=None, jump_host='tcp.local', jump_user='root',
                   private_key_file=None, remote_user='root',
                   password='root_pass', serial=None))
    ), (
        dict(address='tcp.local', username='root',
             slave_password='slave_pass', serial=42),
        (mock.call(become=None, private_key_file=None, remote_user='root',
                   password=None),
         mock.call(become=None, jump_host='tcp.local', jump_user='root',
                   private_key_file=None, remote_user='root',
                   password='slave_pass', serial=42))
    ))
    @ddt.unpack
    def test_init(self, config, expected_runner_calls, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value

        tcp_managment = tcpcloud.TCPCloudManagement(config)

        mock_ansible_runner.assert_has_calls(expected_runner_calls)
        self.assertIs(tcp_managment.master_node_executor, ansible_runner_inst)
        self.assertIs(tcp_managment.cloud_executor, ansible_runner_inst)

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    def test_verify(self, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [self.fake_node_ip_result],
            [fakes.FakeAnsibleResult(payload={'stdout': ''}),
             fakes.FakeAnsibleResult(payload={'stdout': ''})],
        ]

        self.tcp_conf['slave_name_regexp'] = '(ctl*|cmp*)'
        tcp_managment = tcpcloud.TCPCloudManagement(self.tcp_conf)
        tcp_managment.verify()

        get_nodes_cmd = "salt -E '(ctl*|cmp*)' network.interfaces --out=yaml"
        get_ips_cmd = ("salt -E '(ctl*|cmp*)' "
                       "pillar.get _param:single_address --out=yaml")
        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': get_nodes_cmd}),
            mock.call([self.master_host], {'command': get_ips_cmd}),
            mock.call(self.hosts, {'command': 'hostname'}),
        ])

    def test_validate_services(self):
        tcp_managment = tcpcloud.TCPCloudManagement(self.tcp_conf)
        tcp_managment.validate_services()

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    def test_get_nodes(self, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [self.fake_node_ip_result],
        ]

        tcp_managment = tcpcloud.TCPCloudManagement(self.tcp_conf)
        nodes = tcp_managment.get_nodes()

        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': self.get_nodes_cmd}),
            mock.call([self.master_host], {'command': self.get_ips_cmd}),
        ])

        self.assertEqual(nodes.hosts, self.hosts)

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    def test_get_nodes_from_discover_driver(self, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        hosts = [
            node_collection.Host(ip='10.0.2.2', mac='09:7b:74:90:63:c2',
                                 fqdn='mynode1.local'),
            node_collection.Host(ip='10.0.2.3', mac='09:7b:74:90:63:c3',
                                 fqdn='mynode2.local'),
        ]
        node_discover_driver = mock.Mock()
        node_discover_driver.discover_hosts.return_value = hosts

        tcp_managment = tcpcloud.TCPCloudManagement(self.tcp_conf)
        tcp_managment.set_node_discover(node_discover_driver)
        nodes = tcp_managment.get_nodes()

        self.assertFalse(ansible_runner_inst.execute.called)
        self.assertEqual(hosts, nodes.hosts)

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    def test_execute_on_cloud(self, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [self.fake_node_ip_result],
            [fakes.FakeAnsibleResult(payload={'stdout': ''}),
             fakes.FakeAnsibleResult(payload={'stdout': ''})]
        ]

        tcp_managment = tcpcloud.TCPCloudManagement(self.tcp_conf)
        nodes = tcp_managment.get_nodes()
        result = tcp_managment.execute_on_cloud(
            nodes.hosts, {'command': 'mycmd'}, raise_on_error=False)

        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': self.get_nodes_cmd}),
            mock.call([self.master_host], {'command': self.get_ips_cmd}),
            mock.call(self.hosts, {'command': 'mycmd'}, []),
        ])

        self.assertEqual(result,
                         [fakes.FakeAnsibleResult(payload={'stdout': ''}),
                          fakes.FakeAnsibleResult(payload={'stdout': ''})])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    def test_get_nodes_fqdns(self, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [self.fake_node_ip_result],
        ]

        tcp_managment = tcpcloud.TCPCloudManagement(self.tcp_conf)
        nodes = tcp_managment.get_nodes(fqdns=['cmp02.mk20.local'])

        self.assertEqual(nodes.hosts, [self.hosts[1]])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data(*tcpcloud.TCPCloudManagement.SERVICES.keys())
    def test_get_service_nodes(self, service_name, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [self.fake_node_ip_result],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     status=executor.STATUS_FAILED,
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')]
        ]

        tcp_managment = tcpcloud.TCPCloudManagement(self.tcp_conf)
        service = tcp_managment.get_service(service_name)
        nodes = service.get_nodes()

        cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': self.get_nodes_cmd}),
            mock.call([self.master_host], {'command': self.get_ips_cmd}),
            mock.call(self.hosts, {'command': cmd}, []),
        ])

        self.assertEqual(nodes.hosts, [self.hosts[1]])


@ddt.ddt
class TcpServiceTestCase(test.TestCase):

    def setUp(self):
        super(TcpServiceTestCase, self).setUp()
        self.fake_ansible_result = fakes.FakeAnsibleResult(
            payload={
                'stdout': 'cmp01.mk20.local:\n'
                          '  eth0:\n'
                          '    hwaddr: 09:7b:74:90:63:c2\n'
                          '    inet:\n'
                          '    - address: 10.0.0.2\n'
                          'cmp02.mk20.local:\n'
                          '  eth0:\n'
                          '    hwaddr: 09:7b:74:90:63:c3\n'
                          '    inet:\n'
                          '    - address: 10.0.0.3\n'
            })
        self.fake_node_ip_result = fakes.FakeAnsibleResult(
            payload={
                'stdout': 'cmp01.mk20.local:\n'
                          '  10.0.0.2\n'
                          'cmp02.mk20.local:\n'
                          '  10.0.0.3\n'
            })

        self.tcp_conf = {
            'address': 'tcp.local',
            'username': 'root',
        }
        self.get_nodes_cmd = (
            "salt -E '^(?!cfg|mon)' network.interfaces --out=yaml")
        self.get_ips_cmd = ("salt -E '^(?!cfg|mon)' "
                            "pillar.get _param:single_address --out=yaml")
        self.master_host = node_collection.Host('tcp.local')
        self.hosts = [
            node_collection.Host(ip='10.0.0.2', mac='09:7b:74:90:63:c2',
                                 fqdn='cmp01.mk20.local'),
            node_collection.Host(ip='10.0.0.3', mac='09:7b:74:90:63:c3',
                                 fqdn='cmp02.mk20.local'),
        ]

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data(*tcpcloud.TCPCloudManagement.SERVICES.keys())
    def test_restart(self, service_name, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [self.fake_node_ip_result],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     status=executor.STATUS_FAILED,
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')]
        ]

        tcp_managment = tcpcloud.TCPCloudManagement(self.tcp_conf)
        service = tcp_managment.get_service(service_name)
        service.restart()

        cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': self.get_nodes_cmd}),
            mock.call([self.master_host], {'command': self.get_ips_cmd}),
            mock.call(self.hosts, {'command': cmd}, []),
            mock.call([self.hosts[1]], {'shell': service.restart_cmd}),
        ])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data(*tcpcloud.TCPCloudManagement.SERVICES.keys())
    def test_terminate(self, service_name, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [self.fake_node_ip_result],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     status=executor.STATUS_FAILED,
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')]
        ]

        tcp_managment = tcpcloud.TCPCloudManagement(self.tcp_conf)
        service = tcp_managment.get_service(service_name)
        service.terminate()

        cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': self.get_nodes_cmd}),
            mock.call([self.master_host], {'command': self.get_ips_cmd}),
            mock.call(self.hosts, {'command': cmd}, []),
            mock.call([self.hosts[1]], {'shell': service.terminate_cmd}),
        ])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data(*tcpcloud.TCPCloudManagement.SERVICES.keys())
    def test_start(self, service_name, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [self.fake_ansible_result],
            [self.fake_node_ip_result],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     status=executor.STATUS_FAILED,
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')],
            [fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.2'),
             fakes.FakeAnsibleResult(payload={'stdout': ''},
                                     host='10.0.0.3')]
        ]

        tcp_managment = tcpcloud.TCPCloudManagement(self.tcp_conf)
        service = tcp_managment.get_service(service_name)
        service.start()

        cmd = 'bash -c "ps ax | grep -v grep | grep \'{}\'"'.format(
            service.grep)
        ansible_runner_inst.execute.assert_has_calls([
            mock.call([self.master_host], {'command': self.get_nodes_cmd}),
            mock.call([self.master_host], {'command': self.get_ips_cmd}),
            mock.call(self.hosts, {'command': cmd}, []),
            mock.call([self.hosts[1]], {'shell': service.start_cmd}),
        ])
os-faults-0.1.17/os_faults/tests/unit/drivers/cloud/test_universal.py000066400000000000000000000100251317662032700260500ustar00rootroot00000000000000# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import ddt
import mock

from os_faults.api import node_collection
from os_faults.drivers.cloud import universal
from os_faults.tests.unit import fakes
from os_faults.tests.unit import test


@ddt.ddt
class UniversalManagementTestCase(test.TestCase):

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @ddt.data((
        dict(address='os.local', auth=dict(username='root')),
        dict(remote_user='root', private_key_file=None, password=None,
             become=None, become_password=None, jump_host=None,
             jump_user=None, serial=None),
    ), (
        dict(address='os.local',
             auth=dict(username='user', become=True,
                       become_password='secret'),
             serial=42),
        dict(remote_user='user', private_key_file=None, password=None,
             become=True, become_password='secret', jump_host=None,
             jump_user=None, serial=42),
    ))
    @ddt.unpack
    def test_init(self, config, expected_runner_call, mock_ansible_runner):
        ansible_runner_inst = mock_ansible_runner.return_value

        cloud = universal.UniversalCloudManagement(config)

        mock_ansible_runner.assert_called_with(**expected_runner_call)
        self.assertIs(cloud.cloud_executor, ansible_runner_inst)

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    @mock.patch('os_faults.drivers.cloud.universal.UniversalCloudManagement.'
                'discover_hosts')
    def test_verify(self, mock_discover_hosts, mock_ansible_runner):
        address = '10.0.0.10'
        ansible_result = fakes.FakeAnsibleResult(
            payload=dict(stdout='openstack.local'))
        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [ansible_result]
        ]
        hosts = [node_collection.Host(ip=address)]
        mock_discover_hosts.return_value = hosts

        cloud = universal.UniversalCloudManagement(dict(address=address))
        cloud.verify()

        ansible_runner_inst.execute.assert_has_calls([
            mock.call(hosts, {'command': 'hostname'}),
        ])

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    def test_discover_hosts(self, mock_ansible_runner):
        address = '10.0.0.10'
        hostname = 'openstack.local'

        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [fakes.FakeAnsibleResult(
                payload=dict(stdout=hostname))]
        ]

        expected_hosts = [node_collection.Host(
            ip=address, mac=None, fqdn=hostname)]

        cloud = universal.UniversalCloudManagement(dict(address=address))

        self.assertEqual(expected_hosts, cloud.discover_hosts())

    @mock.patch('os_faults.ansible.executor.AnsibleRunner', autospec=True)
    def test_discover_hosts_with_iface(self, mock_ansible_runner):
        address = '10.0.0.10'
        hostname = 'openstack.local'
        mac = '0b:fe:fe:13:12:11'

        ansible_runner_inst = mock_ansible_runner.return_value
        ansible_runner_inst.execute.side_effect = [
            [fakes.FakeAnsibleResult(
                payload=dict(stdout=mac))],
            [fakes.FakeAnsibleResult(
                payload=dict(stdout=hostname))],
        ]

        expected_hosts = [node_collection.Host(
            ip=address, mac=mac, fqdn=hostname)]

        cloud = universal.UniversalCloudManagement(
            dict(address=address, iface='eth1'))

        self.assertEqual(expected_hosts, cloud.discover_hosts())
os-faults-0.1.17/os_faults/tests/unit/drivers/nodes/
os-faults-0.1.17/os_faults/tests/unit/drivers/nodes/__init__.py
os-faults-0.1.17/os_faults/tests/unit/drivers/nodes/test_node_list.py
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from os_faults.api import node_collection
from os_faults.drivers.nodes import node_list
from os_faults.tests.unit import test


class NodeListDiscoverTestCase(test.TestCase):

    def test_discover_hosts(self):
        conf = [
            {'ip': '10.0.0.11', 'mac': '01', 'fqdn': 'node-1'},
            {'ip': '10.0.0.12', 'mac': '02', 'fqdn': 'node-2'},
        ]
        expected_hosts = [
            node_collection.Host(ip='10.0.0.11', mac='01', fqdn='node-1'),
            node_collection.Host(ip='10.0.0.12', mac='02', fqdn='node-2'),
        ]
        node_list_discover = node_list.NodeListDiscover(conf)
        hosts = node_list_discover.discover_hosts()
        self.assertEqual(expected_hosts, hosts)

os-faults-0.1.17/os_faults/tests/unit/drivers/power/
os-faults-0.1.17/os_faults/tests/unit/drivers/power/__init__.py
os-faults-0.1.17/os_faults/tests/unit/drivers/power/test_ipmi.py
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import ddt
import mock
from pyghmi import exceptions as pyghmi_exc

import os_faults
from os_faults.api import node_collection
from os_faults.drivers.power import ipmi
from os_faults import error
from os_faults.tests.unit import test


@ddt.ddt
class IPMIDriverTestCase(test.TestCase):

    def setUp(self):
        super(IPMIDriverTestCase, self).setUp()
        self.params = {
            'mac_to_bmc': {
                '00:00:00:00:00:00': {
                    'address': '55.55.55.55',
                    'username': 'foo',
                    'password': 'bar'
                }
            },
            'fqdn_to_bmc': {
                'node2.com': {
                    'address': '55.55.55.56',
                    'username': 'ham',
                    'password': 'eggs'
                }
            }
        }
        self.driver = ipmi.IPMIDriver(self.params)

        self.host = node_collection.Host(
            ip='10.0.0.2', mac='00:00:00:00:00:00', fqdn='node1.com')
        self.host2 = node_collection.Host(
            ip='10.0.0.3', mac='00:00:00:00:00:01', fqdn='node2.com')

    @ddt.data(
        {
            'mac_to_bmc': {
                '00:00:00:00:00:00': {
                    'address': '55.55.55.55',
                    'username': 'foo',
                    'password': 'bar'
                }
            }
        },
        {
            'fqdn_to_bmc': {
                'node2.com': {
                    'address': '55.55.55.56',
                    'username': 'ham',
                    'password': 'eggs'
                }
            }
        },
        {
            'mac_to_bmc': {
                '00:00:00:00:00:00': {
                    'address': '55.55.55.55',
                    'username': 'foo',
                    'password': 'bar'
                }
            },
            'fqdn_to_bmc': {
                'node2.com': {
                    'address': '55.55.55.56',
                    'username': 'ham',
                    'password': 'eggs'
                }
            }
        },
    )
    def test_init(self, config):
        os_faults._init_driver({'driver': 'ipmi', 'args': config})

    def test_supports(self):
        self.assertTrue(self.driver.supports(self.host))
        self.assertTrue(self.driver.supports(self.host2))

    def test_supports_false(self):
        host = node_collection.Host(
            ip='10.0.0.2', mac='00:00:00:00:00:01', fqdn='node1.com')
        self.assertFalse(self.driver.supports(host))

    @mock.patch('pyghmi.ipmi.command.Command')
    def test__run_set_power_cmd(self, mock_command):
        ipmicmd = mock_command.return_value
        ipmicmd.set_power.return_value = {'powerstate': 'off'}

        self.driver._run_set_power_cmd(self.host, 'off', expected_state='off')
        ipmicmd.set_power.assert_called_once_with('off', wait=True)

    @mock.patch('pyghmi.ipmi.command.Command')
    def test__run_set_power_cmd_ipmi_exc(self, mock_command):
        ipmicmd = mock_command.return_value
        ipmicmd.set_power.side_effect = pyghmi_exc.IpmiException()

        self.assertRaises(pyghmi_exc.IpmiException,
                          self.driver._run_set_power_cmd,
                          self.host, 'off', expected_state='off')

    @mock.patch('pyghmi.ipmi.command.Command')
    def test__run_set_power_cmd_unexpected_power_state(self, mock_command):
        ipmicmd = mock_command.return_value
        ipmicmd.set_power.return_value = {'powerstate': 'unexpected state'}

        self.assertRaises(error.PowerManagementError,
                          self.driver._run_set_power_cmd,
                          self.host, 'off', expected_state='off')

    @mock.patch('os_faults.drivers.power.ipmi.IPMIDriver._run_set_power_cmd')
    @ddt.data(('poweroff', 'off', 'off'), ('poweron', 'on', 'on'),
              ('reset', 'boot'), ('shutdown', 'shutdown', 'off'))
    def test_driver_actions(self, actions, mock__run_set_power_cmd):
        getattr(self.driver, actions[0])(self.host)
        if len(actions) == 3:
            mock__run_set_power_cmd.assert_called_once_with(
                self.host, cmd=actions[1], expected_state=actions[2])
        else:
            mock__run_set_power_cmd.assert_called_once_with(
                self.host, cmd=actions[1])

os-faults-0.1.17/os_faults/tests/unit/drivers/power/test_libvirt.py
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import ddt
import mock

from os_faults.api import node_collection
from os_faults.drivers.power import libvirt
from os_faults import error
from os_faults.tests.unit import test

DRIVER_PATH = 'os_faults.drivers.power.libvirt'


@ddt.ddt
class LibvirtDriverTestCase(test.TestCase):

    def setUp(self):
        super(LibvirtDriverTestCase, self).setUp()
        self.params = {'connection_uri': 'fake_connection_uri'}
        self.driver = libvirt.LibvirtDriver(self.params)
        self.host = node_collection.Host(
            ip='10.0.0.2', mac='00:00:00:00:00:00', fqdn='node1.com')

    @mock.patch('libvirt.open')
    def test__get_connection_no_cached_connection(self, mock_libvirt_open):
        self.driver._get_connection()
        self.assertNotEqual(self.driver._cached_conn, None)
        mock_libvirt_open.assert_called_once_with(
            self.params['connection_uri'])

    def test__get_connection_cached_connection(self):
        self.driver._cached_conn = 'some cached connection'
        conn = self.driver._get_connection()
        self.assertEqual(conn, 'some cached connection')

    @mock.patch(DRIVER_PATH + '.LibvirtDriver._get_connection')
    def test__find_domain_by_host_mac(self, mock__get_connection):
        host = node_collection.Host(ip='10.0.0.2', mac=':54:00:f9:b8:f9')
        domain1 = mock.MagicMock()
        domain1.XMLDesc.return_value = '52:54:00:ab:64:42'
        domain2 = mock.MagicMock()
        domain2.XMLDesc.return_value = '52:54:00:f9:b8:f9'
        self.driver.conn.listAllDomains.return_value = [domain1, domain2]

        domain = self.driver._find_domain_by_host(host)
        self.assertEqual(domain, domain2)

    @mock.patch(DRIVER_PATH + '.LibvirtDriver._get_connection')
    def test__find_domain_by_host_name(self, mock__get_connection):
        host = node_collection.Host(ip='10.0.0.2', libvirt_name='foo')
        domain1 = mock.MagicMock()
        domain1.XMLDesc.return_value = '52:54:00:ab:64:42'
        domain1.name.return_value = 'bar'
        domain2 = mock.MagicMock()
        domain2.XMLDesc.return_value = '52:54:00:f9:b8:f9'
        domain2.name.return_value = 'foo'
        self.driver.conn.listAllDomains.return_value = [domain1, domain2]

        domain = self.driver._find_domain_by_host(host)
        self.assertEqual(domain, domain2)

    @mock.patch(DRIVER_PATH + '.LibvirtDriver._get_connection')
    def test__find_domain_by_host_domain_not_found(
            self, mock__get_connection):
        host = node_collection.Host(ip='10.0.0.2')
        domain1 = mock.MagicMock()
        domain1.XMLDesc.return_value = '52:54:00:ab:64:42'
        domain2 = mock.MagicMock()
        domain2.XMLDesc.return_value = '52:54:00:f9:b8:f9'
        self.driver.conn.listAllDomains.return_value = [domain1, domain2]

        self.assertRaises(error.PowerManagementError,
                          self.driver._find_domain_by_host, host)

    @mock.patch(DRIVER_PATH + '.LibvirtDriver._get_connection')
    def test_supports(self, mock__get_connection):
        domain1 = mock.MagicMock()
        domain1.XMLDesc.return_value = '52:54:00:ab:64:42'
        domain2 = mock.MagicMock()
        domain2.XMLDesc.return_value = '00:00:00:00:00:00'
        self.driver.conn.listAllDomains.return_value = [domain1, domain2]

        self.assertTrue(self.driver.supports(self.host))

    @mock.patch(DRIVER_PATH + '.LibvirtDriver._get_connection')
    def test_supports_false(self, mock__get_connection):
        self.driver.conn.listAllDomains.return_value = []
        self.assertFalse(self.driver.supports(self.host))

    @mock.patch(DRIVER_PATH + '.LibvirtDriver._find_domain_by_host')
    @ddt.data(('poweroff', 'destroy'), ('poweron', 'create'),
              ('reset', 'reset'), ('shutdown', 'shutdown'))
    def test_driver_actions(self, actions, mock__find_domain_by_host):
        getattr(self.driver, actions[0])(self.host)
        domain = mock__find_domain_by_host.return_value
        getattr(domain, actions[1]).assert_called_once_with()

    @mock.patch(DRIVER_PATH + '.LibvirtDriver._find_domain_by_host')
    def test_snapshot(self, mock__find_domain_by_host):
        self.driver.snapshot(self.host, 'foo', suspend=False)
        domain = mock__find_domain_by_host.return_value
        domain.snapshotCreateXML.assert_called_once_with(
            'foo')

    @mock.patch(DRIVER_PATH + '.LibvirtDriver._find_domain_by_host')
    def test_snapshot_suspend(self, mock__find_domain_by_host):
        self.driver.snapshot(self.host, 'foo', suspend=True)
        domain = mock__find_domain_by_host.return_value
        domain.assert_has_calls((
            mock.call.suspend(),
            mock.call.snapshotCreateXML(
                'foo'),
            mock.call.resume(),
        ))

    @mock.patch(DRIVER_PATH + '.LibvirtDriver._find_domain_by_host')
    def test_revert(self, mock__find_domain_by_host):
        self.driver.revert(self.host, 'foo', resume=False)
        domain = mock__find_domain_by_host.return_value
        snapshot = domain.snapshotLookupByName.return_value
        domain.snapshotLookupByName.assert_called_once_with('foo')
        domain.revertToSnapshot.assert_called_once_with(snapshot)
        self.assertFalse(domain.resume.called)

    @mock.patch(DRIVER_PATH + '.LibvirtDriver._find_domain_by_host')
    def test_revert_resume(self, mock__find_domain_by_host):
        self.driver.revert(self.host, 'foo', resume=True)
        domain = mock__find_domain_by_host.return_value
        snapshot = domain.snapshotLookupByName.return_value
        domain.snapshotLookupByName.assert_called_once_with('foo')
        domain.revertToSnapshot.assert_called_once_with(snapshot)
        domain.resume.assert_called_once_with()

    @mock.patch(DRIVER_PATH + '.LibvirtDriver._find_domain_by_host')
    def test_revert_destroy(self, mock__find_domain_by_host):
        domain = mock__find_domain_by_host.return_value
        domain.isActive.return_value = True
        self.driver.revert(self.host, 'foo', resume=True)
        domain.destroy.assert_called_once_with()

    @mock.patch(DRIVER_PATH + '.LibvirtDriver._find_domain_by_host')
    def test_revert_destroy_nonactive(self, mock__find_domain_by_host):
        domain = mock__find_domain_by_host.return_value
        domain.isActive.return_value = False
        self.driver.revert(self.host, 'foo', resume=True)
        self.assertFalse(domain.destroy.called)
os-faults-0.1.17/os_faults/tests/unit/fakes.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from os_faults.ansible import executor


class FakeAnsibleResult(object):
    def __init__(self, payload, host=None, status=executor.STATUS_OK):
        self.host = host
        self.payload = payload
        self.status = status

    def __eq__(self, other):
        if not isinstance(other, self.__class__):
            return NotImplemented
        if self.host != other.host:
            return False
        if self.payload != other.payload:
            return False
        if self.status != other.status:
            return False
        return True

    def __ne__(self, other):
        return not self == other

os-faults-0.1.17/os_faults/tests/unit/test.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslotest import base


class TestCase(base.BaseTestCase):
    """Test case base class for all unit tests."""

os-faults-0.1.17/os_faults/tests/unit/test_os_faults.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os

import jsonschema
import mock
import yaml

import os_faults
from os_faults.ansible import executor
from os_faults.api import cloud_management
from os_faults.api import error
from os_faults.api import node_collection
from os_faults.api import service
from os_faults.drivers.cloud import devstack
from os_faults.drivers.cloud import devstack_systemd
from os_faults.drivers.cloud import fuel
from os_faults.drivers.nodes import node_list
from os_faults.drivers.power import ipmi
from os_faults.drivers.power import libvirt
from os_faults.tests.unit import test


class OSFaultsTestCase(test.TestCase):

    def setUp(self):
        super(OSFaultsTestCase, self).setUp()
        self.cloud_config = {
            'cloud_management': {
                'driver': 'fuel',
                'args': {
                    'address': '10.30.00.5',
                    'username': 'root',
                    'private_key_file': '/my/path/pk.key',
                }
            },
            'power_management': {
                'driver': 'libvirt',
                'args': {
                    'connection_uri': "qemu+ssh://user@10.30.20.21/system"
                }
            }
        }

    def test_connect_devstack(self):
        cloud_config = {
            'cloud_management': {
                'driver': 'devstack',
                'args': {
                    'address': 'devstack.local',
                    'username': 'developer',
                    'private_key_file': '/my/path/pk.key',
                }
            }
        }
        destructor = os_faults.connect(cloud_config)
        self.assertIsInstance(destructor, devstack.DevStackManagement)

    def test_connect_devstack_systemd(self):
        cloud_config = {
            'cloud_management': {
                'driver': 'devstack_systemd',
                'args': {
                    'address': 'devstack.local',
                    'username': 'developer',
                    'private_key_file': '/my/path/pk.key',
                }
            }
        }
        destructor = os_faults.connect(cloud_config)
        self.assertIsInstance(destructor,
                              devstack_systemd.DevStackSystemdManagement)

    def test_config_with_services(self):
        self.cloud_config['services'] = {
            'app': {
                'driver': 'process',
                'args': {'grep': 'myapp'}
            }
        }
        destructor = os_faults.connect(self.cloud_config)
        app = destructor.get_service('app')
        self.assertIsNotNone(app)

    def test_config_with_services_and_hosts(self):
        self.cloud_config['node_discover'] = {
            'driver': 'node_list',
            'args': [
                {
                    'ip': '10.0.0.11',
                    'mac': '01:ab:cd:01:ab:cd',
                    'fqdn': 'node-1'
                },
                {
                    'ip': '10.0.0.12',
                    'mac': '02:ab:cd:02:ab:cd',
                    'fqdn': 'node-2'
                },
            ]
        }
        self.cloud_config['services'] = {
            'app': {
                'driver': 'process',
                'args': {'grep': 'myapp'},
                'hosts': ['10.0.0.11', '10.0.0.12']
            }
        }
        destructor = os_faults.connect(self.cloud_config)
        app = destructor.get_service('app')
        self.assertIsNotNone(app)
        nodes = app.get_nodes()
        self.assertEqual(['10.0.0.11', '10.0.0.12'], nodes.get_ips())
        self.assertEqual(['node-1', 'node-2'], nodes.get_fqdns())
        self.assertEqual(['01:ab:cd:01:ab:cd', '02:ab:cd:02:ab:cd'],
                         nodes.get_macs())

    def test_connect_fuel_with_libvirt(self):
        destructor = os_faults.connect(self.cloud_config)
        self.assertIsInstance(destructor, fuel.FuelManagement)
        self.assertIsInstance(destructor.node_discover, fuel.FuelManagement)
        self.assertEqual(1, len(destructor.power_manager.power_drivers))
        self.assertIsInstance(destructor.power_manager.power_drivers[0],
                              libvirt.LibvirtDriver)

    def test_connect_fuel_with_ipmi_libvirt_and_node_list(self):
        cloud_config = {
            'node_discover': {
                'driver': 'node_list',
                'args': [
                    {
                        'ip': '10.0.0.11',
                        'mac': '01:ab:cd:01:ab:cd',
                        'fqdn': 'node-1'
                    },
                    {
                        'ip': '10.0.0.12',
                        'mac': '02:ab:cd:02:ab:cd',
                        'fqdn': 'node-2'},
                ]
            },
            'cloud_management': {
                'driver': 'fuel',
                'args': {
                    'address': '10.30.00.5',
                    'username': 'root',
                },
            },
            'power_managements': [
                {
                    'driver': 'ipmi',
                    'args': {
                        'mac_to_bmc': {
                            '00:00:00:00:00:00': {
                                'address': '55.55.55.55',
                                'username': 'foo',
                                'password': 'bar',
                            }
                        }
                    }
                },
                {
                    'driver': 'libvirt',
                    'args': {
                        'connection_uri': "qemu+ssh://user@10.30.20.21/system"
                    }
                }
            ]
        }
        destructor = os_faults.connect(cloud_config)
        self.assertIsInstance(destructor, fuel.FuelManagement)
        self.assertIsInstance(destructor.node_discover,
                              node_list.NodeListDiscover)
        self.assertEqual(2, len(destructor.power_manager.power_drivers))
        self.assertIsInstance(destructor.power_manager.power_drivers[0],
                              ipmi.IPMIDriver)
        self.assertIsInstance(destructor.power_manager.power_drivers[1],
                              libvirt.LibvirtDriver)

    def test_connect_driver_not_found(self):
        cloud_config = {
            'cloud_management': {
                'driver': 'non-existing',
                'args': {},
            }
        }
        self.assertRaises(
            error.OSFDriverNotFound, os_faults.connect, cloud_config)

    def test_connect_driver_not_specified(self):
        cloud_config = {'foo': 'bar'}
        self.assertRaises(
            jsonschema.ValidationError, os_faults.connect, cloud_config)

    @mock.patch('os.path.exists', return_value=True)
    def test_connect_with_config_file(self, mock_os_path_exists):
        mock_os_faults_open = mock.mock_open(
            read_data=yaml.dump(self.cloud_config))
        with mock.patch('os_faults.open', mock_os_faults_open, create=True):
            destructor = os_faults.connect()
            self.assertIsInstance(destructor, fuel.FuelManagement)
            self.assertEqual(1, len(destructor.power_manager.power_drivers))
            self.assertIsInstance(destructor.power_manager.power_drivers[0],
                                  libvirt.LibvirtDriver)

    @mock.patch.dict(os.environ, {'OS_FAULTS_CONFIG': '/my/conf.yaml'})
    @mock.patch('os.path.exists', return_value=True)
    def test_connect_with_env_config(self, mock_os_path_exists):
        mock_os_faults_open = mock.mock_open(
            read_data=yaml.dump(self.cloud_config))
        with mock.patch('os_faults.open', mock_os_faults_open, create=True):
            destructor = os_faults.connect()
            self.assertIsInstance(destructor, fuel.FuelManagement)
            self.assertEqual(1, len(destructor.power_manager.power_drivers))
            self.assertIsInstance(destructor.power_manager.power_drivers[0],
                                  libvirt.LibvirtDriver)
            mock_os_faults_open.assert_called_once_with('/my/conf.yaml')

    @mock.patch('os.path.exists', return_value=False)
    def test_connect_no_config_files(self, mock_os_path_exists):
        self.assertRaises(error.OSFError, os_faults.connect)

    @mock.patch('os.path.exists', side_effect=lambda x: 'bad' not in x)
    @mock.patch('os.walk', side_effect=lambda x: ([x, [], []],
                                                  [x + 'subdir', [], []]))
    @mock.patch.object(executor, 'MODULE_PATHS', set())
    def test_register_ansible_modules(self, mock_os_walk,
                                      mock_os_path_exists):
        os_faults.register_ansible_modules(['/my/path/', '/other/path/'])
        self.assertEqual(executor.get_module_paths(),
                         {'/my/path/', '/my/path/subdir',
                          '/other/path/', '/other/path/subdir'})

        self.assertRaises(error.OSFError,
                          os_faults.register_ansible_modules,
                          ['/my/bad/path/'])

    @mock.patch('os_faults.connect')
    def test_discover(self, mock_connect):
        cloud_config = {
            'cloud_management': {
                'driver': 'devstack',
                'args': {
                    'address': 'devstack.local',
                    'username': 'developer',
                    'private_key_file': '/my/path/pk.key',
                }
            }
        }
        cloud_management_mock = mock.create_autospec(
            cloud_management.CloudManagement)
        mock_connect.return_value = cloud_management_mock
        cloud_management_mock.get_nodes.return_value.hosts = [
            node_collection.Host(
                ip='10.0.0.2', mac='09:7b:74:90:63:c1', fqdn='node1.local'),
            node_collection.Host(
                ip='10.0.0.3', mac='09:7b:74:90:63:c2', fqdn='node2.local')]
        cloud_management_mock.list_supported_services.return_value = [
            'srv1', 'srv2']

        def mock_service(name, config, ips):
            m = mock.create_autospec(service.Service)
            m.NAME = name
            m.config = config
            m.get_nodes.return_value.get_ips.return_value = ips
            return m

        srv1 = mock_service('process', {'grep': 'srv1'}, [])
        srv2 = mock_service('linux_service',
                            {'grep': 'srv2', 'linux_service': 'srv2'},
                            ['10.0.0.2'])
        services = {'srv1': srv1, 'srv2': srv2}
        cloud_management_mock.get_service.side_effect = services.get

        discovered_config = os_faults.discover(cloud_config)

        self.assertEqual({
            'cloud_management': {
                'driver': 'devstack',
                'args': {
                    'address': 'devstack.local',
                    'private_key_file': '/my/path/pk.key',
                    'username': 'developer'
                }
            },
            'node_discover': {
                'driver': 'node_list',
                'args': [
                    {
                        'fqdn': 'node1.local',
                        'ip': '10.0.0.2',
                        'mac': '09:7b:74:90:63:c1'
                    },
                    {
                        'fqdn': 'node2.local',
                        'ip': '10.0.0.3',
                        'mac': '09:7b:74:90:63:c2'
                    }
                ]
            },
            'services': {
                'srv1': {
                    'driver': 'process',
                    'args': {'grep': 'srv1'},
                },
                'srv2': {
                    'driver': 'linux_service',
                    'args': {'grep': 'srv2', 'linux_service': 'srv2'},
                    'hosts': ['10.0.0.2']
                }
            }
        }, discovered_config)

os-faults-0.1.17/os_faults/tests/unit/test_registry.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import sys

import mock

from os_faults.api import base_driver
from os_faults.api import error
from os_faults import drivers
from os_faults import registry
from os_faults.tests.unit import test


class TestDriver(base_driver.BaseDriver):
    NAME = 'test'


class RegistryTestCase(test.TestCase):

    def setUp(self):
        super(RegistryTestCase, self).setUp()
        registry.DRIVERS.clear()  # reset global drivers list

    def tearDown(self):
        super(RegistryTestCase, self).tearDown()
        registry.DRIVERS.clear()  # reset global drivers list

    @mock.patch('oslo_utils.importutils.import_module')
    @mock.patch('os.walk')
    def test_get_drivers(self, mock_os_walk, mock_import_module):
        drivers_folder = os.path.dirname(drivers.__file__)
        mock_os_walk.return_value = [(drivers_folder, [], ['driver.py'])]
        mock_import_module.return_value = sys.modules[__name__]

        self.assertEqual({'test': TestDriver}, registry.get_drivers())

    @mock.patch('os_faults.registry._list_drivers')
    def test_name_collision(self, mock_list_drivers):
        class TestDriver1(base_driver.BaseDriver):
            NAME = 'test'

        class TestDriver2(base_driver.BaseDriver):
            NAME = 'test'

        mock_list_drivers.return_value = [TestDriver1, TestDriver2]
        self.assertRaises(error.OSFDriverWithSuchNameExists,
                          registry.get_drivers)

os-faults-0.1.17/os_faults/tests/unit/test_utils.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import threading

import mock

from os_faults.tests.unit import test
from os_faults import utils


class MyException(Exception):
    pass


class UtilsTestCase(test.TestCase):

    def test_start_thread(self):
        target = mock.Mock()
        target_params = {'param1': 'val1', 'param2': 'val2'}

        tw = utils.ThreadsWrapper()
        tw.start_thread(target, **target_params)
        tw.join_threads()

        target.assert_has_calls([mock.call(param1='val1', param2='val2')])
        self.assertIsInstance(tw.threads[0], threading.Thread)
        self.assertEqual(len(tw.errors), 0)

    def test_start_thread_raise_exception(self):
        target = mock.Mock()
        target.side_effect = MyException()

        tw = utils.ThreadsWrapper()
        tw.start_thread(target)
        tw.join_threads()

        self.assertEqual(type(tw.errors[0]), MyException)

    def test_join_threads(self):
        thread_1 = mock.Mock()
        thread_2 = mock.Mock()

        tw = utils.ThreadsWrapper()
        tw.threads = [thread_1, thread_2]
        tw.join_threads()

        thread_1.join.assert_called_once()
        thread_2.join.assert_called_once()


class MyClass(object):
    FOO = 10

    def __init__(self):
        self.BAR = None

    @utils.require_variables('FOO')
    def method(self, a, b):
        return self.FOO + a + b

    @utils.require_variables('BAR', 'BAZ')
    def method_that_miss_variables(self):
        return self.BAR, self.BAZ


class RequiredVariablesTestCase(test.TestCase):

    def test_require_variables(self):
        inst = MyClass()
        self.assertEqual(inst.method(1, b=2), 13)

    def test_require_variables_not_implemented(self):
        inst = MyClass()
        err = self.assertRaises(NotImplementedError,
                                inst.method_that_miss_variables)
        msg = 'BAR, BAZ required for MyClass.method_that_miss_variables'
        self.assertEqual(str(err), msg)


class MyPoint(utils.ComparableMixin):
    ATTRS = ('a', 'b')

    def __init__(self, a, b):
        self.a = a
        self.b = b


class ComparableMixinTestCase(test.TestCase):

    def test_operations(self):
        p1 = MyPoint(1, 'a')
        p2 = MyPoint(1, 'b')
        p3 = MyPoint(2, 'c')
        p4 = MyPoint(2, 'c')

        self.assertTrue(p1 < p2)
        self.assertTrue(p1 <= p2)
        self.assertFalse(p1 == p2)
        self.assertFalse(p1 >= p2)
        self.assertFalse(p1 > p2)
        self.assertTrue(p1 != p2)
        self.assertTrue(hash(p1) != hash(p2))

        self.assertTrue(p2 < p3)
        self.assertTrue(p2 <= p3)
        self.assertFalse(p2 == p3)
        self.assertFalse(p2 >= p3)
        self.assertFalse(p2 > p3)
        self.assertTrue(p2 != p3)
        self.assertTrue(hash(p2) != hash(p3))

        self.assertFalse(p3 < p4)
        self.assertTrue(p3 <= p4)
        self.assertTrue(p3 == p4)
        self.assertTrue(p3 >= p4)
        self.assertFalse(p3 > p4)
        self.assertFalse(p3 != p4)
        self.assertEqual(hash(p3), hash(p4))


class MyRepr(utils.ReprMixin):
    REPR_ATTRS = ('a', 'b', 'c')

    def __init__(self):
        self.a = 'foo'
        self.b = {'foo': 'bar'}
        self.c = 42


class ReprMixinTestCase(test.TestCase):

    def test_repr(self):
        r = MyRepr()
        self.assertEqual("MyRepr(a='foo', b={'foo': 'bar'}, c=42)", repr(r))

os-faults-0.1.17/os_faults/utils.py
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import functools
import logging
import threading

LOG = logging.getLogger(__name__)

MACADDR_REGEXP = '^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$'
FQDN_REGEXP = '.*'


class ThreadsWrapper(object):
    def __init__(self):
        self.threads = []
        self.errors = []

    def _target(self, fn, **kwargs):
        try:
            fn(**kwargs)
        except Exception as exc:
            LOG.error('%s raised exception: %s', fn, exc)
            self.errors.append(exc)

    def start_thread(self, fn, **kwargs):
        thread = threading.Thread(target=self._target, args=(fn, ),
                                  kwargs=kwargs)
        thread.start()
        self.threads.append(thread)

    def join_threads(self):
        for thread in self.threads:
            thread.join()


def require_variables(*variables):
    """Class method decorator to check that required variables are present"""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(self, *args, **kwargs):
            missing_vars = []
            for var in variables:
                if not getattr(self, var, None):
                    missing_vars.append(var)
            if missing_vars:
                missing_vars = ', '.join(missing_vars)
                msg = '{} required for {}.{}'.format(
                    missing_vars, self.__class__.__name__, fn.__name__)
                raise NotImplementedError(msg)
            return fn(self, *args, **kwargs)
        return wrapper
    return decorator


class ComparableMixin(object):
    ATTRS = ()

    def _cmp_attrs(self):
        return tuple(getattr(self, attr) for attr in self.ATTRS)

    def __lt__(self, other):
        return self._cmp_attrs() < other._cmp_attrs()

    def __le__(self, other):
        return self._cmp_attrs() <= other._cmp_attrs()

    def __eq__(self, other):
        return self._cmp_attrs() == other._cmp_attrs()

    def __ge__(self, other):
        return self._cmp_attrs() >= other._cmp_attrs()

    def __gt__(self, other):
        return self._cmp_attrs() > other._cmp_attrs()

    def __ne__(self, other):
        return not self.__eq__(other)

    def __hash__(self):
        return hash(self._cmp_attrs())


class ReprMixin(object):
    ATTRS = ()
    REPR_ATTRS = ()

    def __repr__(self):
        return '{}({})'.format(
            self.__class__.__name__,
            ', '.join('{}={}'.format(attr, repr(getattr(self, attr)))
                      for attr in self.REPR_ATTRS or self.ATTRS))
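`ThreadsWrapper` above fans a callable out to one thread per invocation and collects any raised exception in `errors` instead of letting it kill the thread. A self-contained usage sketch follows; the class body is copied from the module above so the snippet runs without installing os-faults, and the `ping` helper with its host names is made up for illustration:

```python
import logging
import threading

LOG = logging.getLogger(__name__)


class ThreadsWrapper(object):
    # Copied from os_faults.utils above: runs each target via _target,
    # which swallows exceptions and stores them in self.errors.
    def __init__(self):
        self.threads = []
        self.errors = []

    def _target(self, fn, **kwargs):
        try:
            fn(**kwargs)
        except Exception as exc:
            LOG.error('%s raised exception: %s', fn, exc)
            self.errors.append(exc)

    def start_thread(self, fn, **kwargs):
        thread = threading.Thread(target=self._target, args=(fn, ),
                                  kwargs=kwargs)
        thread.start()
        self.threads.append(thread)

    def join_threads(self):
        for thread in self.threads:
            thread.join()


results = []  # list.append is atomic, so no extra lock is needed here


def ping(host):
    # Hypothetical worker: fails for one host, succeeds for the rest.
    if host == 'bad':
        raise RuntimeError('unreachable: %s' % host)
    results.append(host)


tw = ThreadsWrapper()
for host in ('node-1', 'node-2', 'bad'):
    tw.start_thread(ping, host=host)
tw.join_threads()

print(sorted(results))   # ['node-1', 'node-2']
print(len(tw.errors))    # 1 -- the RuntimeError from the 'bad' host
```

This mirrors how the unit tests in test_utils.py exercise the class: one failing target does not prevent the other threads from finishing, and the caller inspects `tw.errors` after `join_threads()`.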
os-faults-0.1.17/readthedocs.yml

python:
    setup_py_install: true
requirements_file: rtd-requirements.txt


os-faults-0.1.17/releasenotes/notes/.placeholder
os-faults-0.1.17/releasenotes/source/_static/.placeholder
os-faults-0.1.17/releasenotes/source/_templates/.placeholder


os-faults-0.1.17/releasenotes/source/conf.py

# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Glance Release Notes documentation build configuration file, created by
# sphinx-quickstart on Tue Nov 3 17:40:50 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'openstackdocstheme',
    'reno.sphinxext',
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
# source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'os_faults Release Notes'
copyright = u'2016, OpenStack Foundation'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
# The full version, including alpha/beta/rc tags.
release = ''
# The short X.Y version.
version = ''

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []

# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []

# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False


# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
html_last_updated_fmt = '%Y-%m-%d %H:%M'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}

# If false, no module index is generated.
# html_domain_indices = True

# If false, no index is generated.
# html_use_index = True

# If true, the index is split into individual pages for each letter.
# html_split_index = False

# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True

# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True

# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''

# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None

# Output file base name for HTML help builder.
htmlhelp_basename = 'GlanceReleaseNotesdoc'


# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    # 'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    # 'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    # 'preamble': '',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    ('index', 'GlanceReleaseNotes.tex',
     u'Glance Release Notes Documentation',
     u'Glance Developers', 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False

# If true, show page references after internal links.
# latex_show_pagerefs = False

# If true, show URL addresses after external links.
# latex_show_urls = False

# Documents to append as an appendix to all manuals.
# latex_appendices = []

# If false, no module index is generated.
# latex_domain_indices = True


# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'glancereleasenotes',
     u'Glance Release Notes Documentation',
     [u'Glance Developers'], 1)
]

# If true, show URL addresses after external links.
# man_show_urls = False


# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    ('index', 'GlanceReleaseNotes',
     u'Glance Release Notes Documentation',
     u'Glance Developers', 'GlanceReleaseNotes',
     'One line description of project.',
     'Miscellaneous'),
]

# Documents to append as an appendix to all manuals.
# texinfo_appendices = []

# If false, no module index is generated.
# texinfo_domain_indices = True

# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'

# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False

# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']

# -- Options for openstackdocstheme -------------------------------------------
repository_name = 'openstack/os-faults'
bug_project = 'os-faults'
bug_tag = ''


os-faults-0.1.17/releasenotes/source/index.rst

=======================
OS-Faults Release Notes
=======================

.. toctree::
   :maxdepth: 1

   unreleased


os-faults-0.1.17/releasenotes/source/unreleased.rst

============================
Current Series Release Notes
============================

.. release-notes::


os-faults-0.1.17/requirements.txt

# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=2.0.0 # Apache-2.0
ansible>=2.2
appdirs>=1.3.0 # MIT License
click>=6.7 # BSD
iso8601>=0.1.11 # MIT
jsonschema!=2.5.0,<3.0.0,>=2.0.0 # MIT
oslo.i18n>=2.1.0 # Apache-2.0
oslo.serialization>=1.10.0 # Apache-2.0
oslo.utils>=3.20.0 # Apache-2.0
pyghmi>=1.0.9 # Apache-2.0
PyYAML>=3.10.0 # MIT
six>=1.9.0 # MIT


os-faults-0.1.17/rtd-requirements.txt

-r requirements.txt
-r test-requirements.txt


os-faults-0.1.17/setup.cfg

[metadata]
name = os_faults
summary = OpenStack fault-injection library
description-file = README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://os-faults.readthedocs.io/
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7

[files]
packages =
    os_faults

[entry_points]
console_scripts =
    os-inject-fault = os_faults.cmd.cmd:main
    os-faults = os_faults.cmd.main:main

[extras]
libvirt =
    libvirt-python>=1.2.5 # LGPLv2+

[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1

[upload_sphinx]
upload-dir = doc/build/html

[compile_catalog]
directory = os_faults/locale
domain = os_faults

[update_catalog]
domain = os_faults
output_dir = os_faults/locale
input_file = os_faults/locale/os_faults.pot

[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = os_faults/locale/os_faults.pot

[build_releasenotes]
all_files = 1
build-dir = releasenotes/build
source-dir = releasenotes/source


os-faults-0.1.17/setup.py

# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT

import setuptools

# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
    import multiprocessing  # noqa
except ImportError:
    pass

setuptools.setup(
    setup_requires=['pbr>=2.0.0'],
    pbr=True)


os-faults-0.1.17/test-requirements.txt

# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
pytest<=3.0.2,>=2.7 # MIT
pytest-cov<=2.3.1,>=2.2.1 # MIT
pytest-html<=1.10.0,>=1.10.0 # Mozilla Public License 2.0 (MPL 2.0)
pytest-logging==2015.11.4 # Apache-2.0
coverage>=4.0 # Apache-2.0
ddt>=1.0.1 # MIT
mock>=2.0 # BSD
python-subunit>=0.0.18 # Apache-2.0/BSD
sphinx>=1.6.2 # BSD
sphinxcontrib-programoutput # BSD
sphinx_rtd_theme # MIT
openstackdocstheme>=1.11.0 # Apache-2.0
oslotest>=1.10.0 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=1.4.0 # MIT

# releasenotes
reno>=1.8.0 # Apache-2.0


os-faults-0.1.17/tox.ini

[tox]
minversion = 2.6
envlist = pep8-constraints,py27-constraints,py35-constraints,cover
skipsdist = True

[testenv]
usedevelop = True
install_command =
    constraints: {[testenv:common-constraints]install_command}
    pip install -U {opts} {packages}
setenv =
    VIRTUAL_ENV={envdir}
whitelist_externals = find
                      rm
deps = -r{toxinidir}/test-requirements.txt
extras = libvirt
commands =
    find . -type f -name "*.pyc" -delete
    py.test -vvvv --html={envlogdir}/pytest_results.html --self-contained-html --durations=10 "os_faults/tests/unit" {posargs}
passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY

[testenv:common-constraints]
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}

[testenv:pep8]
commands = flake8 . doc/ext

[testenv:pep8-constraints]
install_command = {[testenv:common-constraints]install_command}
commands = {[testenv:pep8]commands}

[testenv:venv]
commands = {posargs}

[testenv:venv-constraints]
install_command = {[testenv:common-constraints]install_command}
commands = {posargs}

[testenv:cover]
commands =
    py.test --cov-config .coveragerc --cov-report html --cov=os_faults "os_faults/tests/unit"
    coverage html -d {envlogdir}
    coverage report

[testenv:cover-constraints]
install_command = {[testenv:common-constraints]install_command}
commands = {[testenv:cover]commands}

[testenv:integration]
setenv =
    {[testenv]setenv}
    OS_TEST_PATH=./os_faults/tests/integration
deps =
    {[testenv]deps}
    oslo.concurrency
commands =
    py.test -vvvv --html={envlogdir}/pytest_results.html --self-contained-html --durations=10 "os_faults/tests/integration" {posargs}

[testenv:integration-py27]
basepython = python2.7
setenv = {[testenv:integration]setenv}
deps = {[testenv:integration]deps}
commands = {[testenv:integration]commands}

[testenv:integration-py35]
basepython = python3.5
setenv = {[testenv:integration]setenv}
deps = {[testenv:integration]deps}
commands = {[testenv:integration]commands}

[testenv:docs]
commands =
    rm -rf doc/build
    python setup.py build_sphinx --warning-is-error

[testenv:releasenotes]
commands =
    sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html

[testenv:docs-constraints]
install_command = {[testenv:common-constraints]install_command}
commands = {[testenv:docs]commands}

[testenv:debug]
commands = oslo_debug_helper {posargs}

[testenv:debug-constraints]
install_command = {[testenv:common-constraints]install_command}
commands = {[testenv:debug]commands}

[flake8]
# E123 skipped because it is ignored by default in the default pep8.
# E125 skipped until https://github.com/jcrocholl/pep8/issues/126 is resolved.
ignore = E123,E125
show-source = True
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build