pcs-0.9.164/.gitignore

*.pyc
*.swp
/MANIFEST
/dist/
/pcs/bash_completion.d.pcs
/pcsd/pcs_settings.conf
/pcsd/pcs_settings.conf.*
/pcsd/pcs_users.conf
/pcsd/.bundle
/pcsd/vendor

pcs-0.9.164/CHANGELOG.md

# Change Log

## [0.9.164] - 2018-04-09

### Security
- CVE-2018-1086: Debug parameter removal bypass, allowing information disclosure ([rhbz#1557366])
- CVE-2018-1079: Privilege escalation via a malicious REST call made by an authorized user ([rhbz#1550243])
- CVE-2018-1000119 rack-protection: Timing attack in authenticity_token.rb ([rhbz#1534027])

[rhbz#1534027]: https://bugzilla.redhat.com/show_bug.cgi?id=1534027
[rhbz#1550243]: https://bugzilla.redhat.com/show_bug.cgi?id=1550243
[rhbz#1557366]: https://bugzilla.redhat.com/show_bug.cgi?id=1557366

## [0.9.163] - 2018-02-20

### Added
- Added `pcs status booth` as an alias to `pcs booth status`
- A warning is displayed in `pcs status` and in a stonith device's detail in the web UI when a stonith device has its `method` option set to `cycle` ([rhbz#1523378])

### Fixed
- `--skip-offline` is no longer ignored in the `pcs quorum device remove` command
- Pcs now waits up to 5 minutes (previously 10 seconds) for a pcsd restart when synchronizing pcsd certificates
- Usage and man page now correctly state that it is possible to enable or disable several stonith devices at once
- It is now possible to set the `action` option of stonith devices in the web UI by using force ([rhbz#1421702])
- Do not crash when `--wait` is used in `pcs stonith create` ([rhbz#1522813])
- Nodes are now authenticated after running `pcs cluster auth` even if an existing corosync.conf defines no nodes ([ghissue#153], [rhbz#1517333])
- Pcs now properly exits with code 1 when an error occurs in the `pcs cluster node add-remote` and `pcs cluster node add-guest` commands ([rhbz#1464781])
- Fixed a crash in the `pcs booth sync` command ([rhbz#1527530])
- Always replace the whole CIB instead of applying a diff when crm\_feature\_set <= 3.0.8 ([rhbz#1488044])
- Fixed `pcs cluster auth` in a cluster when not authenticated and using a non-default port ([rhbz#1415197])
- Fixed `pcs cluster auth` in a cluster when previously authenticated using a non-default port and reauthenticating using an implicit default port ([rhbz#1415197])

[ghissue#153]: https://github.com/ClusterLabs/pcs/issues/153
[rhbz#1415197]: https://bugzilla.redhat.com/show_bug.cgi?id=1415197
[rhbz#1421702]: https://bugzilla.redhat.com/show_bug.cgi?id=1421702
[rhbz#1464781]: https://bugzilla.redhat.com/show_bug.cgi?id=1464781
[rhbz#1488044]: https://bugzilla.redhat.com/show_bug.cgi?id=1488044
[rhbz#1517333]: https://bugzilla.redhat.com/show_bug.cgi?id=1517333
[rhbz#1522813]: https://bugzilla.redhat.com/show_bug.cgi?id=1522813
[rhbz#1523378]: https://bugzilla.redhat.com/show_bug.cgi?id=1523378
[rhbz#1527530]: https://bugzilla.redhat.com/show_bug.cgi?id=1527530

## [0.9.162] - 2017-11-15

### Added
- `pcs status --full` now displays information about tickets ([rhbz#1389943])
- Support for managing qdevice heuristics ([rhbz#1389209])
- An SNMP agent providing information about the cluster to the master agent. It supports only Python 2.7 for now ([rhbz#1367808]).
### Fixed
- Fixed a crash when loading a huge XML ([rhbz#1506864])
- Fixed adding an existing cluster into the web UI ([rhbz#1415197])
- Fixed false warnings about failed actions when a resource is mastered or unmastered from the web UI ([rhbz#1506220])

### Changed
- `pcs resource|stonith cleanup` no longer deletes the whole operation history of resources. Instead, it only deletes failed operations from the history. The original functionality is available in the `pcs resource|stonith refresh` command. ([rhbz#1508351], [rhbz#1508350])

[rhbz#1367808]: https://bugzilla.redhat.com/show_bug.cgi?id=1367808
[rhbz#1389209]: https://bugzilla.redhat.com/show_bug.cgi?id=1389209
[rhbz#1389943]: https://bugzilla.redhat.com/show_bug.cgi?id=1389943
[rhbz#1415197]: https://bugzilla.redhat.com/show_bug.cgi?id=1415197
[rhbz#1506220]: https://bugzilla.redhat.com/show_bug.cgi?id=1506220
[rhbz#1506864]: https://bugzilla.redhat.com/show_bug.cgi?id=1506864
[rhbz#1508350]: https://bugzilla.redhat.com/show_bug.cgi?id=1508350
[rhbz#1508351]: https://bugzilla.redhat.com/show_bug.cgi?id=1508351

## [0.9.161] - 2017-11-02

### Added
- List of pcs and pcsd capabilities ([rhbz#1230919])

### Fixed
- Fixed `pcs cluster auth` when already authenticated and using a different port ([rhbz#1415197])
- It is now possible to restart a bundle resource on one node ([rhbz#1501274])
- `resource update` no longer exits with an error when the `remote-node` meta attribute is set to the same value it already has ([rhbz#1502715], [ghissue#145])
- Listing and describing resource and stonith agents no longer crashes when agents' metadata contain non-ASCII characters ([rhbz#1503110], [ghissue#151])

[ghissue#145]: https://github.com/ClusterLabs/pcs/issues/145
[ghissue#151]: https://github.com/ClusterLabs/pcs/issues/151
[rhbz#1230919]: https://bugzilla.redhat.com/show_bug.cgi?id=1230919
[rhbz#1415197]: https://bugzilla.redhat.com/show_bug.cgi?id=1415197
[rhbz#1501274]: https://bugzilla.redhat.com/show_bug.cgi?id=1501274
[rhbz#1502715]: https://bugzilla.redhat.com/show_bug.cgi?id=1502715
[rhbz#1503110]: https://bugzilla.redhat.com/show_bug.cgi?id=1503110

## [0.9.160] - 2017-10-09

### Added
- Configurable pcsd port ([rhbz#1415197])
- Description of the `--force` option added to the man page and help ([rhbz#1491631])

### Fixed
- Fixed some crashes when pcs encounters a non-ASCII character in environment variables, command line arguments and so on ([rhbz#1435697])
- Fixed detecting whether systemd is in use ([ghissue#118])
- Upgrade the CIB schema version when the `resource-discovery` option is used in location constraints ([rhbz#1420437])
- Fixed error messages in `pcs cluster report` ([rhbz#1388783])
- Increased the request timeout when starting a cluster with a large number of nodes to prevent timeouts ([rhbz#1463327])
- Fixed an "Unable to update cib" error caused by invalid resource operation IDs
- `pcs resource op defaults` now fails on an invalid option ([rhbz#1341582])
- Fixed the behaviour of the `pcs cluster verify` command when run with a filename argument ([rhbz#1213946])

### Changed
- CIB changes are now pushed to pacemaker as a diff in commands overhauled to the new architecture (previously the whole CIB was pushed). This resolves race conditions and ACL-related errors when pushing the CIB. ([rhbz#1441673])
- All actions / operations defined in a resource agent's metadata (except meta-data, status and validate-all) are now copied to the CIB when creating a resource.
  ([rhbz#1418199], [ghissue#132])
- Improved documentation of the `pcs stonith confirm` command ([rhbz#1489682])

### Deprecated
- This is the last version fully supporting CMAN clusters and Python 2.6. Support for these will be gradually dropped.

[ghissue#118]: https://github.com/ClusterLabs/pcs/issues/118
[ghissue#132]: https://github.com/ClusterLabs/pcs/issues/132
[rhbz#1213946]: https://bugzilla.redhat.com/show_bug.cgi?id=1213946
[rhbz#1341582]: https://bugzilla.redhat.com/show_bug.cgi?id=1341582
[rhbz#1388783]: https://bugzilla.redhat.com/show_bug.cgi?id=1388783
[rhbz#1415197]: https://bugzilla.redhat.com/show_bug.cgi?id=1415197
[rhbz#1418199]: https://bugzilla.redhat.com/show_bug.cgi?id=1418199
[rhbz#1420437]: https://bugzilla.redhat.com/show_bug.cgi?id=1420437
[rhbz#1435697]: https://bugzilla.redhat.com/show_bug.cgi?id=1435697
[rhbz#1441673]: https://bugzilla.redhat.com/show_bug.cgi?id=1441673
[rhbz#1463327]: https://bugzilla.redhat.com/show_bug.cgi?id=1463327
[rhbz#1489682]: https://bugzilla.redhat.com/show_bug.cgi?id=1489682
[rhbz#1491631]: https://bugzilla.redhat.com/show_bug.cgi?id=1491631

## [0.9.159] - 2017-06-30

### Added
- Option to create a cluster with or without corosync encryption enabled; by default the encryption is disabled ([rhbz#1165821])
- It is now possible to disable, enable, unmanage and manage bundle resources and set their meta attributes ([rhbz#1447910])
- Pcs now warns against using the `action` option of stonith devices ([rhbz#1421702])

### Fixed
- Fixed a crash of the `pcs cluster setup` command when the `--force` flag was used ([rhbz#1176018])
- Fixed a crash of the `pcs cluster destroy --all` command when the cluster was not running ([rhbz#1176018])
- Fixed a crash of the `pcs config restore` command when restoring the pacemaker authkey ([rhbz#1176018])
- Fixed an "Error: unable to get cib" when adding a node to a stopped cluster ([rhbz#1176018])
- Fixed a crash in the `pcs cluster node add-remote` command when an id conflict occurs ([rhbz#1386114])
- Fixed creating a new cluster from the web UI ([rhbz#1284404])
- `pcs cluster node add-guest` now works with the flag `--skip-offline` ([rhbz#1176018])
- `pcs cluster node remove-guest` can be run again when the guest node was unreachable the first time ([rhbz#1176018])
- Fixed an "Error: Unable to read /etc/corosync/corosync.conf" when running `pcs resource create` ([rhbz#1386114])
- It is now possible to set the `debug` and `verbose` parameters of stonith devices ([rhbz#1432283])
- Resource operation ids are now properly validated and no longer ignored in the `pcs resource create`, `pcs resource update` and `pcs resource op add` commands ([rhbz#1443418])
- The flag `--force` works correctly when an operation is not successful on some nodes during `pcs cluster node add-remote` or `pcs cluster node add-guest` ([rhbz#1464781])

### Changed
- Binary data are stored in the corosync authkey ([rhbz#1165821])
- It is now mandatory to specify the container type in the `resource bundle create` command
- When creating a new cluster, corosync communication encryption is disabled by default (in 0.9.158 it was enabled by default, in 0.9.157 and older it was disabled)

[rhbz#1165821]: https://bugzilla.redhat.com/show_bug.cgi?id=1165821
[rhbz#1176018]: https://bugzilla.redhat.com/show_bug.cgi?id=1176018
[rhbz#1284404]: https://bugzilla.redhat.com/show_bug.cgi?id=1284404
[rhbz#1386114]: https://bugzilla.redhat.com/show_bug.cgi?id=1386114
[rhbz#1421702]: https://bugzilla.redhat.com/show_bug.cgi?id=1421702
[rhbz#1432283]: https://bugzilla.redhat.com/show_bug.cgi?id=1432283
[rhbz#1443418]: https://bugzilla.redhat.com/show_bug.cgi?id=1443418
[rhbz#1447910]: https://bugzilla.redhat.com/show_bug.cgi?id=1447910
[rhbz#1464781]: https://bugzilla.redhat.com/show_bug.cgi?id=1464781

## [0.9.158] - 2017-05-23

### Added
- Support for bundle resources (CLI only) ([rhbz#1433016])
- Commands for adding and removing guest and remote nodes, including handling of the pacemaker authkey (CLI only) ([rhbz#1176018], [rhbz#1254984], [rhbz#1386114], [rhbz#1386512])
- Command `pcs cluster node clear` to remove a node from pacemaker's configuration and caches
- Backing up and restoring cluster configuration with the `pcs config backup` and `pcs config restore` commands now supports corosync and pacemaker authkeys ([rhbz#1165821], [rhbz#1176018])

### Deprecated
- The `pcs cluster remote-node add` and `pcs cluster remote-node remove` commands have been deprecated in favor of the `pcs cluster node add-guest` and `pcs cluster node remove-guest` commands ([rhbz#1386512])

### Fixed
- Fixed a bug which under specific conditions caused pcsd to crash on start when running under systemd ([ghissue#134])
- `pcs resource unmanage` now sets the unmanaged flag on primitive resources even if a clone or master/slave resource is specified. Thus the primitive resources will not become managed just by uncloning. This also prevents some discrepancies between disabled monitor operations and the unmanaged flag. ([rhbz#1303969])
- `pcs resource unmanage --monitor` now properly disables monitor operations even if a clone or master/slave resource is specified ([rhbz#1303969])
- The `--help` option now shows help just for the specified command. Previously the usage for a whole group of commands was shown.
- Fixed a crash when `pcs cluster cib-push` is called with an explicit value of the `--wait` flag ([rhbz#1422667])
- Handle a pcsd crash when an unusable address is set in `PCSD_BIND_ADDR` ([rhbz#1373614])
- Removal of a pacemaker remote resource no longer causes the respective remote node to be fenced ([rhbz#1390609])

### Changed
- Newly created clusters are set up to encrypt corosync communication ([rhbz#1165821], [ghissue#98])

[ghissue#98]: https://github.com/ClusterLabs/pcs/issues/98
[ghissue#134]: https://github.com/ClusterLabs/pcs/issues/134
[rhbz#1176018]: https://bugzilla.redhat.com/show_bug.cgi?id=1176018
[rhbz#1254984]: https://bugzilla.redhat.com/show_bug.cgi?id=1254984
[rhbz#1303969]: https://bugzilla.redhat.com/show_bug.cgi?id=1303969
[rhbz#1373614]: https://bugzilla.redhat.com/show_bug.cgi?id=1373614
[rhbz#1386114]: https://bugzilla.redhat.com/show_bug.cgi?id=1386114
[rhbz#1386512]: https://bugzilla.redhat.com/show_bug.cgi?id=1386512
[rhbz#1390609]: https://bugzilla.redhat.com/show_bug.cgi?id=1390609
[rhbz#1422667]: https://bugzilla.redhat.com/show_bug.cgi?id=1422667
[rhbz#1433016]: https://bugzilla.redhat.com/show_bug.cgi?id=1433016
[rhbz#1165821]: https://bugzilla.redhat.com/show_bug.cgi?id=1165821

## [0.9.157] - 2017-04-10

### Added
- Resources in location constraints now may be specified by resource name patterns in addition to resource names ([rhbz#1362493])
- Proxy settings description in the pcsd configuration file ([rhbz#1315627])
- Man page for pcsd ([rhbz#1378742])
- Pcs now allows setting the `trace_ra` and `trace_file` options of `ocf:heartbeat` and `ocf:pacemaker` resources ([rhbz#1421702])
- `pcs resource describe` and `pcs stonith describe` commands now show all information about the specified agent if the `--full` flag is used
- `pcs resource manage | unmanage`
  enables or disables monitor operations, respectively, when the `--monitor` flag is specified ([rhbz#1303969])
- Support for shared storage in SBD. Currently, there is only very limited support in the web UI ([rhbz#1413958])

### Changed
- It is now possible to specify more than one resource in the `pcs resource enable` and `pcs resource disable` commands.

### Fixed
- Python 3: pcs no longer spams stderr with error messages when communicating with another node
- Stopping a cluster no longer times out too early, and it generally works better even if the cluster is running Virtual IP resources ([rhbz#1334429])
- `pcs booth remove` now works correctly even if the booth resource group is disabled (another fix) ([rhbz#1389941])
- Fixed a Cross-site scripting (XSS) vulnerability in the web UI ([CVE-2017-2661], [rhbz#1434111])
- Pcs no longer allows creating a stonith resource based on an agent whose name contains a colon ([rhbz#1415080])
- The pcs command now launches the Python interpreter with "sane" options (python -Es) ([rhbz#1328882])
- Clufter is now supported on both Python 2 and Python 3 ([rhbz#1428350])
- Do not colorize clufter output if saved to a file

[CVE-2017-2661]: https://access.redhat.com/security/cve/CVE-2017-2661
[rhbz#1303969]: https://bugzilla.redhat.com/show_bug.cgi?id=1303969
[rhbz#1315627]: https://bugzilla.redhat.com/show_bug.cgi?id=1315627
[rhbz#1328882]: https://bugzilla.redhat.com/show_bug.cgi?id=1328882
[rhbz#1334429]: https://bugzilla.redhat.com/show_bug.cgi?id=1334429
[rhbz#1362493]: https://bugzilla.redhat.com/show_bug.cgi?id=1362493
[rhbz#1378742]: https://bugzilla.redhat.com/show_bug.cgi?id=1378742
[rhbz#1389941]: https://bugzilla.redhat.com/show_bug.cgi?id=1389941
[rhbz#1413958]: https://bugzilla.redhat.com/show_bug.cgi?id=1413958
[rhbz#1415080]: https://bugzilla.redhat.com/show_bug.cgi?id=1415080
[rhbz#1421702]: https://bugzilla.redhat.com/show_bug.cgi?id=1421702
[rhbz#1428350]: https://bugzilla.redhat.com/show_bug.cgi?id=1428350
[rhbz#1434111]: https://bugzilla.redhat.com/show_bug.cgi?id=1434111

## [0.9.156] - 2017-02-10

### Added
- Fencing levels now may be targeted in the CLI by a node name pattern or a node attribute in addition to a node name ([rhbz#1261116])
- `pcs cluster cib-push` allows pushing a diff obtained internally by comparing CIBs in specified files ([rhbz#1404233], [rhbz#1419903])
- Added the flags `--wait`, `--disabled`, `--group`, `--after`, `--before` to the command `pcs stonith create`
- Added the commands `pcs stonith enable` and `pcs stonith disable`
- Command line option `--request-timeout` ([rhbz#1292858])
- Check whether a proxy is set when unable to connect to a node ([rhbz#1315627])

### Changed
- `pcs node [un]standby` and `pcs node [un]maintenance` are now atomic even if more than one node is specified ([rhbz#1315992])
- Restarting pcsd initiated from pcs is now a synchronous operation ([rhbz#1284404])
- Stopped bundling fonts used in the pcsd web UI ([ghissue#125])
- In `pcs resource create` the flags `--master` and `--clone` changed to the keywords `master` and `clone`
- libcurl is now used for node to node communication

### Fixed
- When upgrading the CIB to the latest schema version, check for the minimal common version across the cluster ([rhbz#1389443])
- `pcs booth remove` now works correctly even if the booth resource group is disabled ([rhbz#1389941])
- Adding a node in a CMAN cluster does not cause the new node to be fenced immediately ([rhbz#1394846])
- Show a proper error message when there is an HTTP communication failure ([rhbz#1394273])
- Fixed searching for files to remove in the `/var/lib` directory ([ghpull#119], [ghpull#120])
- Fixed messages when managing services (start, stop, enable, disable...)
- Fixed disabling services on systemd systems when using instances ([rhbz#1389501])
- Fixed parsing of command line options ([rhbz#1404229])
- Pcs no longer exits with a false error message when pcsd-cli.rb outputs to stderr ([ghissue#124])
- Pcs now exits with an error when both `--all` and a list of nodes are specified in the `pcs cluster start | stop | enable | disable` commands ([rhbz#1339355])
- Built-in help and man page fixes and improvements ([rhbz#1347335])
- In `pcs resource create` the flag `--clone` no longer steals arguments from the keywords `meta` and `op` ([rhbz#1395226])
- `pcs resource create` does not produce an invalid cib when the group id is already occupied by a non-resource element ([rhbz#1382004])
- Fixed misbehavior of the flag `--master` in the `pcs resource create` command ([rhbz#1378107])
- Fixed tacit acceptance of an invalid resource operation in `pcs resource create` ([rhbz#1398562])
- Fixed misplacing metadata for disabling when running `pcs resource create` with the flags `--clone` and `--disabled` ([rhbz#1402475])
- Fixed incorrect acceptance of an invalid attribute of a resource operation in `pcs resource create` ([rhbz#1382597])
- Fixed validation of options of resource operations in `pcs resource create` ([rhbz#1390071])
- Fixed silent omission of duplicate options ([rhbz#1390066])
- Added more validation for resource agent names ([rhbz#1387670])
- Fixed network communication issues in pcsd when a node was specified by an IPv6 address
- Fixed a JS error in the web UI when an empty cluster status is received ([rhbz#1396462])
- Fixed sending the user group in cookies from Python 3
- Fixed pcsd restart in Python 3
- Fixed parsing XML in Python 3 (caused crashes when reading resource agent metadata) ([rhbz#1419639])
- Fixed the recognition of the structure of a resource agent name that contains a systemd instance ([rhbz#1419661])

### Removed
- Ruby 1.8 and 1.9 are no longer supported due to poor libcurl support

[ghissue#124]: https://github.com/ClusterLabs/pcs/issues/124
[ghissue#125]: https://github.com/ClusterLabs/pcs/issues/125
[ghpull#119]: https://github.com/ClusterLabs/pcs/pull/119
[ghpull#120]: https://github.com/ClusterLabs/pcs/pull/120
[rhbz#1261116]: https://bugzilla.redhat.com/show_bug.cgi?id=1261116
[rhbz#1284404]: https://bugzilla.redhat.com/show_bug.cgi?id=1284404
[rhbz#1292858]: https://bugzilla.redhat.com/show_bug.cgi?id=1292858
[rhbz#1315627]: https://bugzilla.redhat.com/show_bug.cgi?id=1315627
[rhbz#1315992]: https://bugzilla.redhat.com/show_bug.cgi?id=1315992
[rhbz#1339355]: https://bugzilla.redhat.com/show_bug.cgi?id=1339355
[rhbz#1347335]: https://bugzilla.redhat.com/show_bug.cgi?id=1347335
[rhbz#1378107]: https://bugzilla.redhat.com/show_bug.cgi?id=1378107
[rhbz#1382004]: https://bugzilla.redhat.com/show_bug.cgi?id=1382004
[rhbz#1382597]: https://bugzilla.redhat.com/show_bug.cgi?id=1382597
[rhbz#1387670]: https://bugzilla.redhat.com/show_bug.cgi?id=1387670
[rhbz#1389443]: https://bugzilla.redhat.com/show_bug.cgi?id=1389443
[rhbz#1389501]: https://bugzilla.redhat.com/show_bug.cgi?id=1389501
[rhbz#1389941]: https://bugzilla.redhat.com/show_bug.cgi?id=1389941
[rhbz#1390066]: https://bugzilla.redhat.com/show_bug.cgi?id=1390066
[rhbz#1390071]: https://bugzilla.redhat.com/show_bug.cgi?id=1390071
[rhbz#1394273]: https://bugzilla.redhat.com/show_bug.cgi?id=1394273
[rhbz#1394846]: https://bugzilla.redhat.com/show_bug.cgi?id=1394846
[rhbz#1395226]: https://bugzilla.redhat.com/show_bug.cgi?id=1395226
[rhbz#1396462]: https://bugzilla.redhat.com/show_bug.cgi?id=1396462
[rhbz#1398562]: https://bugzilla.redhat.com/show_bug.cgi?id=1398562
[rhbz#1402475]: https://bugzilla.redhat.com/show_bug.cgi?id=1402475
[rhbz#1404229]: https://bugzilla.redhat.com/show_bug.cgi?id=1404229
[rhbz#1404233]: https://bugzilla.redhat.com/show_bug.cgi?id=1404233
[rhbz#1419639]: https://bugzilla.redhat.com/show_bug.cgi?id=1419639
[rhbz#1419661]: https://bugzilla.redhat.com/show_bug.cgi?id=1419661
[rhbz#1419903]: https://bugzilla.redhat.com/show_bug.cgi?id=1419903

## [0.9.155] - 2016-11-03

### Added
- Show daemon status in `pcs status` on non-systemd machines
- SBD support for cman clusters ([rhbz#1380352])
- Alerts management in pcsd ([rhbz#1376480])

### Changed
- Get all information about resource and stonith agents from pacemaker. Pcs now supports the same set of agents as pacemaker does. ([rhbz#1262001], [ghissue#81])
- `pcs resource create` now exits with an error if more than one resource agent matches the specified short agent name, instead of randomly selecting one of the agents
- Allow removing multiple alerts and alert recipients at once

### Fixed
- When stopping a cluster with some of the nodes unreachable, stop the cluster completely on all reachable nodes ([rhbz#1380372])
- Fixed a pcsd crash when the rpam rubygem is installed ([ghissue#109])
- Fixed occasional crashes / failures when using a locale other than en\_US.UTF8 ([rhbz#1387106])
- Fixed starting and stopping cluster services on systemd machines without the `service` executable ([ghissue#115])

[ghissue#81]: https://github.com/ClusterLabs/pcs/issues/81
[ghissue#109]: https://github.com/ClusterLabs/pcs/issues/109
[ghissue#115]: https://github.com/ClusterLabs/pcs/issues/115
[rhbz#1262001]: https://bugzilla.redhat.com/show_bug.cgi?id=1262001
[rhbz#1376480]: https://bugzilla.redhat.com/show_bug.cgi?id=1376480
[rhbz#1380352]: https://bugzilla.redhat.com/show_bug.cgi?id=1380352
[rhbz#1380372]: https://bugzilla.redhat.com/show_bug.cgi?id=1380372
[rhbz#1387106]: https://bugzilla.redhat.com/show_bug.cgi?id=1387106

## [0.9.154] - 2016-09-21
- There is no change log for this and previous releases. We are sorry.
- Take a look at the git history if you are interested.

pcs-0.9.164/COPYING

GNU GENERAL PUBLIC LICENSE
Version 2, June 1991

Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

Preamble

The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price.
Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. 
You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. 
For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. 
Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 
END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.

<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

Also add information on how to contact you by electronic and paper mail.

If the program is interactive, make it output a short notice like this when it starts in an interactive mode:

Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program.

You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names:

Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker.

<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice

This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License.

pcs-0.9.164/MANIFEST.in

include Makefile
include COPYING
include pcs/pcs.8
include pcs/bash_completion
include pcsd/.bundle/config
graft pcsd
graft pcsd/vendor/cache
prune pcsd/vendor/bundle
prune pcsd/test
recursive-exclude pcsd .gitignore
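An illustrative aside (not part of the original tree): MANIFEST.in controls what `setup.py sdist` packs into the source tarball — `include` adds single files, `graft` pulls in whole directory trees, and `prune`/`recursive-exclude` cut paths back out again, which is why the tarball ships the vendored gem cache but not installed bundles or the pcsd tests. A quick way to inspect the effect, assuming any Python interpreter supported by this release:

```shell
# python setup.py sdist --formats=tar
# tar -tf dist/pcs-*.tar | grep 'pcsd/vendor' | head
```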
pcs-0.9.164/Makefile

# Compatibility with GNU/Linux [i.e. Debian] based distros
UNAME_OS_GNU := $(shell if uname -o | grep -q "GNU/Linux" ; then echo true; else echo false; fi)
DISTRO_DEBIAN := $(shell if [ -e /etc/debian_version ] ; then echo true; else echo false; fi)
IS_DEBIAN=false
DISTRO_DEBIAN_VER_8=false

ifndef PYTHON
PYTHON := $(shell which python3 || which python2 || which python)
endif

ifeq ($(UNAME_OS_GNU),true)
ifeq ($(DISTRO_DEBIAN),true)
IS_DEBIAN=true
DISTRO_DEBIAN_VER_8 := $(shell if grep -q -i "^8\|jessie" /etc/debian_version ; then echo true; else echo false; fi)
# dpkg-architecture is in the optional dpkg-dev package, unfortunately.
#DEB_HOST_MULTIARCH := $(shell dpkg-architecture -qDEB_HOST_MULTIARCH)
# TODO: Use lsb_architecture to get the multiarch tuple if/when it becomes available in distributions.
DEB_HOST_MULTIARCH := $(shell dpkg -L libc6 | sed -nr 's|^/etc/ld\.so\.conf\.d/(.*)\.conf$$|\1|p')
endif
endif

ifndef PYTHON_SITELIB
PYTHON_SITELIB=$(shell $(PYTHON) -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")
endif
ifeq ($(PYTHON_SITELIB), /usr/lib/python2.6/dist-packages)
EXTRA_SETUP_OPTS="--install-layout=deb"
endif
ifeq ($(PYTHON_SITELIB), /usr/lib/python2.7/dist-packages)
EXTRA_SETUP_OPTS="--install-layout=deb"
endif
ifeq ($(PYTHON_SITELIB), /usr/lib/python3/dist-packages)
EXTRA_SETUP_OPTS="--install-layout=deb"
endif

# Check for systemd presence
ifeq ($(SYSTEMCTL_OVERRIDE),true)
IS_SYSTEMCTL=true
else
ifeq ($(SYSTEMCTL_OVERRIDE),false)
IS_SYSTEMCTL=false
else
IS_SYSTEMCTL = $(shell if [ -d /run/systemd/system ] || [ -d /var/run/systemd/system ] ; then echo true ; else echo false; fi)
endif
endif

# Check for an override for building gems
ifndef BUILD_GEMS
BUILD_GEMS=true
endif

MANDIR=/usr/share/man

ifndef PREFIX
PREFIX=$(shell prefix=`$(PYTHON) -c "import sys; print(sys.prefix)"` || prefix="/usr"; echo $$prefix)
endif

ifndef systemddir
systemddir=/usr/lib/systemd
endif

ifndef initdir
initdir=/etc/init.d
endif

ifndef install_settings
ifeq ($(IS_DEBIAN),true)
install_settings=true
else
install_settings=false
endif
endif

ifndef BASH_COMPLETION_DIR
BASH_COMPLETION_DIR=${DESTDIR}/etc/bash_completion.d
endif

ifndef PCSD_PARENT_DIR
ifeq ($(IS_DEBIAN),true)
PCSD_PARENT_DIR = /usr/share
else
PCSD_PARENT_DIR = ${PREFIX}/lib
endif
endif

ifndef PCS_PARENT_DIR
PCS_PARENT_DIR=${DESTDIR}/${PREFIX}/lib/pcs
endif

BUNDLED_LIB_INSTALL_DIR=${PCS_PARENT_DIR}/bundled
ifndef BUNDLED_LIB_DIR
BUNDLED_LIB_DIR=./pcs/bundled/
endif
BUNDLED_LIB_DIR_ABS=$(shell readlink -f ${BUNDLED_LIB_DIR})
BUNDLES_TMP_DIR=${BUNDLED_LIB_DIR_ABS}/tmp

ifndef SNMP_MIB_DIR
SNMP_MIB_DIR=/share/snmp/mibs/
endif
SNMP_MIB_DIR_FULL=${DESTDIR}/${PREFIX}/${SNMP_MIB_DIR}

pcsd_fonts = \
	LiberationSans-Regular.ttf;LiberationSans:style=Regular \
	LiberationSans-Bold.ttf;LiberationSans:style=Bold \
	LiberationSans-BoldItalic.ttf;LiberationSans:style=BoldItalic \
	LiberationSans-Italic.ttf;LiberationSans:style=Italic \
	Overpass-Regular.ttf;Overpass:style=Regular \
	Overpass-Bold.ttf;Overpass:style=Bold

install: install_bundled_libs
	# make Python interpreter execution sane (via -Es flags)
	printf "[build]\nexecutable = $(PYTHON) -Es\n" > setup.cfg
	$(PYTHON) setup.py install --root=$(or ${DESTDIR}, /) ${EXTRA_SETUP_OPTS}
	# fix excessive script interpreting "executable" quoting with old setuptools:
	# https://github.com/pypa/setuptools/issues/188
	# https://bugzilla.redhat.com/1353934
	sed -i '1s|^\(#!\)"\(.*\)"$$|\1\2|' ${DESTDIR}${PREFIX}/bin/pcs
	sed -i '1s|^\(#!\)"\(.*\)"$$|\1\2|' ${DESTDIR}${PREFIX}/bin/pcs_snmp_agent
	rm setup.cfg
	mkdir -p ${DESTDIR}${PREFIX}/sbin/
	mv ${DESTDIR}${PREFIX}/bin/pcs ${DESTDIR}${PREFIX}/sbin/pcs
	install -D -m644 pcs/bash_completion ${BASH_COMPLETION_DIR}/pcs
	install -m644 -D pcs/pcs.8 ${DESTDIR}/${MANDIR}/man8/pcs.8
	# pcs SNMP install
	mv ${DESTDIR}${PREFIX}/bin/pcs_snmp_agent ${PCS_PARENT_DIR}/pcs_snmp_agent
	install -d ${DESTDIR}/var/log/pcs
	install -d ${SNMP_MIB_DIR_FULL}
	install -m 644 pcs/snmp/mibs/PCMK-PCS*-MIB.txt ${SNMP_MIB_DIR_FULL}
	install -m 644 -D pcs/snmp/pcs_snmp_agent.conf ${DESTDIR}/etc/sysconfig/pcs_snmp_agent
	install -m 644 -D pcs/snmp/pcs_snmp_agent.8 ${DESTDIR}/${MANDIR}/man8/pcs_snmp_agent.8
ifeq ($(IS_SYSTEMCTL),true)
	install -d ${DESTDIR}/${systemddir}/system/
	install -m 644 pcs/snmp/pcs_snmp_agent.service ${DESTDIR}/${systemddir}/system/
endif
ifeq ($(IS_DEBIAN),true)
ifeq ($(install_settings),true)
	rm -f ${DESTDIR}${PYTHON_SITELIB}/pcs/settings.py
	tmp_settings=`mktemp`; \
	sed s/DEB_HOST_MULTIARCH/${DEB_HOST_MULTIARCH}/g pcs/settings.py.debian > $$tmp_settings; \
	install -m644 $$tmp_settings ${DESTDIR}${PYTHON_SITELIB}/pcs/settings.py; \
	rm -f $$tmp_settings
	$(PYTHON) -m compileall -fl ${DESTDIR}${PYTHON_SITELIB}/pcs/settings.py
endif
endif

install_pcsd:
ifeq ($(BUILD_GEMS),true)
	make -C pcsd build_gems
endif
	mkdir -p ${DESTDIR}/var/log/pcsd
ifeq ($(IS_DEBIAN),true)
	mkdir -p ${DESTDIR}/usr/share/
	cp -r pcsd ${DESTDIR}/usr/share/
	install -m 644 -D pcsd/pcsd.conf ${DESTDIR}/etc/default/pcsd
	install -d ${DESTDIR}/etc/pam.d
	install pcsd/pcsd.pam.debian ${DESTDIR}/etc/pam.d/pcsd
ifeq ($(install_settings),true)
	rm -f ${DESTDIR}/usr/share/pcsd/settings.rb
	tmp_settings_pcsd=`mktemp`; \
	sed s/DEB_HOST_MULTIARCH/${DEB_HOST_MULTIARCH}/g pcsd/settings.rb.debian > $$tmp_settings_pcsd; \
	install -m644 $$tmp_settings_pcsd ${DESTDIR}/usr/share/pcsd/settings.rb; \
	rm -f $$tmp_settings_pcsd
endif
ifeq ($(IS_SYSTEMCTL),true)
	install -d ${DESTDIR}/${systemddir}/system/
	install -m 644 pcsd/pcsd.service.debian ${DESTDIR}/${systemddir}/system/pcsd.service
else
	install -m 755 -D pcsd/pcsd.debian ${DESTDIR}/${initdir}/pcsd
endif
else
	mkdir -p ${DESTDIR}${PCSD_PARENT_DIR}/
	cp -r pcsd ${DESTDIR}${PCSD_PARENT_DIR}/
	install -m 644 -D pcsd/pcsd.conf ${DESTDIR}/etc/sysconfig/pcsd
	install -d ${DESTDIR}/etc/pam.d
	install pcsd/pcsd.pam ${DESTDIR}/etc/pam.d/pcsd
ifeq ($(IS_SYSTEMCTL),true)
	install -d ${DESTDIR}/${systemddir}/system/
	install -m 644 pcsd/pcsd.service ${DESTDIR}/${systemddir}/system/
	# ${DESTDIR}${PCSD_PARENT_DIR}/pcsd/pcsd holds the selinux context
	install -m 755 pcsd/pcsd.service-runner ${DESTDIR}${PCSD_PARENT_DIR}/pcsd/pcsd
	rm ${DESTDIR}${PCSD_PARENT_DIR}/pcsd/pcsd.service-runner
else
	install -m 755 -D pcsd/pcsd ${DESTDIR}/${initdir}/pcsd
endif
endif
	install -m 700 -d ${DESTDIR}/var/lib/pcsd
	install -m 644 -D pcsd/pcsd.logrotate ${DESTDIR}/etc/logrotate.d/pcsd
	install -m644 -D pcsd/pcsd.8 ${DESTDIR}/${MANDIR}/man8/pcsd.8
	$(foreach font,$(pcsd_fonts),\
		$(eval font_file = $(word 1,$(subst ;, ,$(font)))) \
		$(eval font_def = $(word 2,$(subst ;, ,$(font)))) \
		$(eval font_path = $(shell fc-match '--format=%{file}' '$(font_def)')) \
		$(if $(font_path),ln -s -f $(font_path) ${DESTDIR}${PCSD_PARENT_DIR}/pcsd/public/css/$(font_file);,$(error Font $(font_def) not found)) \
	)

build_bundled_libs:
ifndef PYAGENTX_INSTALLED
	rm -rf ${BUNDLES_TMP_DIR}
	mkdir -p ${BUNDLES_TMP_DIR}
	$(MAKE) -C pcs/snmp/ build_bundled_libs
	rm -rf ${BUNDLES_TMP_DIR}
endif

install_bundled_libs: build_bundled_libs
ifndef PYAGENTX_INSTALLED
	install -d ${BUNDLED_LIB_INSTALL_DIR}
	cp -r ${BUNDLED_LIB_DIR_ABS}/packages ${BUNDLED_LIB_INSTALL_DIR}
endif

uninstall:
	rm -f ${DESTDIR}${PREFIX}/sbin/pcs
	rm -rf ${DESTDIR}${PYTHON_SITELIB}/pcs
ifeq ($(IS_DEBIAN),true)
	rm -rf ${DESTDIR}/usr/share/pcsd
	rm -rf ${DESTDIR}/usr/share/pcs
else
	rm -rf ${DESTDIR}${PREFIX}/lib/pcsd
	rm -rf ${DESTDIR}${PREFIX}/lib/pcs
endif
ifeq ($(IS_SYSTEMCTL),true)
	rm -f ${DESTDIR}/${systemddir}/system/pcsd.service
	rm -f ${DESTDIR}/${systemddir}/system/pcs_snmp_agent.service
else
	rm -f ${DESTDIR}/${initdir}/pcsd
endif
	rm -f ${DESTDIR}/etc/pam.d/pcsd
	rm -rf ${DESTDIR}/var/lib/pcsd
	rm -f ${SNMP_MIB_DIR_FULL}/PCMK-PCS*-MIB.txt

tarball:
	$(PYTHON) setup.py sdist --formats=tar
	$(PYTHON) maketarballs.py

newversion:
	$(PYTHON) newversion.py

pcs-0.9.164/README.md

## PCS - Pacemaker/Corosync Configuration System

Pcs is a Corosync and Pacemaker configuration tool. It permits users to easily view, modify and create Pacemaker based clusters. Pcs contains pcsd, a pcs daemon, which operates as a remote server for pcs and provides a web UI.

---

### Installation from Source

These are the runtime dependencies of pcs and pcsd:
* python 2.7+
* python-lxml / python3-lxml
* python-pycurl / python3-pycurl
* python-setuptools / python3-setuptools
* ruby 2.0.0+
* killall (package psmisc)
* openssl
* corosync
* pacemaker

It is also recommended to have these:
* python-clufter / python3-clufter
* liberation fonts (package liberation-sans-fonts or fonts-liberation or fonts-liberation2)
* overpass fonts (package overpass-fonts)

If you plan to manage Corosync 1.x based clusters, you will also need:
* cman
* ccs

It is, however, highly recommended to use Corosync 2.x for new clusters. Support for Corosync 1.x and CMAN has been deprecated in 0.9.160 and will be removed.

Apart from the dependencies listed above, these are also required for installation:
* python development files (package python-devel / python3-devel)
* ruby development files (package ruby-devel)
* rubygems
* rubygem bundler (package rubygem-bundler or ruby-bundler or bundler)
* gcc
* gcc-c++
* PAM development files (package pam-devel or libpam0g-dev)
* FFI development files (package libffi-devel or libffi-dev)
* fontconfig
* printf (package coreutils)
* redhat-rpm-config if you are using Fedora
* wget (to download bundled libraries)

During the installation, all required rubygems are automatically downloaded and compiled.

To install pcs and pcsd run the following in a terminal (a sketch of overriding the Makefile variables during installation follows at the end of this README):
```shell
# tar -xzvf pcs-0.9.160.tar.gz
# cd pcs-0.9.160
# make install
# make install_pcsd
```

If you are using GNU/Linux with systemd, it is now time to:
```shell
# systemctl daemon-reload
```

Start pcsd and make it start on boot:
```shell
# systemctl start pcsd
# systemctl enable pcsd
```

---

### Packages

Currently this is built into Fedora, RHEL and its clones, and Debian and its derivatives.
* [Fedora package git repositories](https://src.fedoraproject.org/rpms/pcs)
* [Current Fedora .spec](https://src.fedoraproject.org/rpms/pcs/blob/master/f/pcs.spec)
* [Debian-HA project home page](https://wiki.debian.org/Debian-HA)

---

### Quick Start

* **Authenticate cluster nodes**

Set the same password for the `hacluster` user on all nodes.
```shell
# passwd hacluster
```

To authenticate the nodes, run the following command on one of the nodes (replacing node1, node2, node3 with a list of nodes in your future cluster). Specify all your cluster nodes in the command. Make sure pcsd is running on all nodes.
```shell
# pcs cluster auth node1 node2 node3 -u hacluster
```

* **Create a cluster**

To create a cluster run the following command on one node (replacing cluster\_name with a name for your cluster and node1, node2, node3 with a list of nodes in the cluster). `--start` and `--enable` will start your cluster and configure the nodes to start the cluster on boot, respectively.
```shell
# pcs cluster setup --name cluster_name node1 node2 node3 --start --enable
```

* **Check the cluster status**

After a few moments the cluster should start up and you can get the status of the cluster.
```shell
# pcs status
```

* **Add cluster resources**

After this you can add stonith agents and resources:
```shell
# pcs -h stonith create
```
and
```shell
# pcs -h resource create
```

---

### Accessing the Web UI

Apart from the command line interface you can use the web user interface to view and configure your cluster. To access the web UI, open a browser to the following URL (replace nodename with an address of your node):
```
https://nodename:2224
```
Login as the `hacluster` user.

---

### Further Documentation

The [ClusterLabs website](https://clusterlabs.org) is an excellent place to learn more about Pacemaker clusters.
* [ClusterLabs quick start](https://clusterlabs.org/quickstart.html)
* [Clusters from Scratch](https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/index.html)
* [ClusterLabs documentation page](https://clusterlabs.org/pacemaker/doc/)

---

### Inquiries

If you have any bug reports or feature requests, please feel free to open a GitHub issue on the pcs project. Alternatively you can use the ClusterLabs [users mailing list](https://oss.clusterlabs.org/mailman/listinfo/users), which is also a great place to ask Pacemaker cluster related questions.
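The `make install` and `make install_pcsd` steps in the installation section above go through the Makefile shown earlier, whose variables can all be overridden on the command line. As an illustrative sketch only — the staging path is hypothetical, while the variable names (`PYTHON`, `DESTDIR`, `PREFIX`, `BUILD_GEMS`, `SYSTEMCTL_OVERRIDE`) are the ones defined in that Makefile — a staged install pinned to Python 3 might look like:

```shell
# make install PYTHON=/usr/bin/python3 DESTDIR=/tmp/pcs-staging PREFIX=/usr
# make install_pcsd DESTDIR=/tmp/pcs-staging BUILD_GEMS=false SYSTEMCTL_OVERRIDE=true
```

`SYSTEMCTL_OVERRIDE` forces the systemd versus init.d decision instead of autodetecting it, and `BUILD_GEMS=false` skips the `make -C pcsd build_gems` step on hosts where the gems are already vendored.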
pcs-0.9.164/maketarballs.py

#!/usr/bin/python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import sys
import os

sys.path.insert(
    0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "pcs")
)
import settings

pcs_version = settings.pcs_version

print(os.system("cp dist/pcs-"+pcs_version+".tar dist/pcs-withgems-"+pcs_version+".tar"))
print(os.system("tar --delete -f dist/pcs-"+pcs_version+".tar '*/pcsd/vendor'"))
print(os.system("gzip dist/pcs-"+pcs_version+".tar"))
print(os.system("gzip dist/pcs-withgems-"+pcs_version+".tar"))

pcs-0.9.164/newversion.py

#!/usr/bin/python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import sys
import os
import locale
import datetime

sys.path.insert(
    0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "pcs")
)
import settings

locale.setlocale(locale.LC_ALL, ("en_US", "UTF-8"))

# Get the current version, increment by 1, verify changes, git commit & tag

pcs_version_split = settings.pcs_version.split('.')
pcs_version_split[2] = str(int(pcs_version_split[2]) + 1)
new_version = ".".join(pcs_version_split)

print(os.system("sed -i 's/"+settings.pcs_version+"/"+new_version+"/' setup.py"))
print(os.system("sed -i 's/"+settings.pcs_version+"/"+new_version+"/' pcs/settings_default.py"))
print(os.system("sed -i 's/"+settings.pcs_version+"/"+new_version+"/' pcsd/bootstrap.rb"))
print(os.system("sed -i 's/\#\# \[Unreleased\]/\#\# ["+new_version+"] - "+datetime.date.today().strftime('%Y-%m-%d')+"/' CHANGELOG.md"))

def manpage_head(component, package="pcs"):
    return '.TH {component} "8" "{date}" "{package} {version}" "System Administration Utilities"'.format(
        component=component.upper(),
        date=datetime.date.today().strftime('%B %Y'),
        version=new_version,
        package=package,
    )

print(os.system("sed -i '1c " + manpage_head("pcs") + "' pcs/pcs.8"))
print(os.system("sed -i '1c " + manpage_head("pcsd") + "' pcsd/pcsd.8"))
print(os.system(
    "sed -i '1c {man_head}' pcs/snmp/pcs_snmp_agent.8".format(
        man_head=manpage_head("pcs_snmp_agent", package="pcs-snmp"),
    )
))

print(os.system("git diff"))
print("Look good? (y/n)")
choice = sys.stdin.read(1)
if choice != "y":
    print("Ok, exiting")
    sys.exit(0)

print(os.system("git commit -a -m 'Bumped to "+new_version+"'"))
print(os.system("git tag "+new_version))
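Neither helper is normally run by hand; the `tarball` and `newversion` Makefile targets shown earlier wrap them. A minimal sketch of the release flow they implement, assuming a git work tree:

```shell
# make newversion
# make tarball
```

`newversion` rewrites the version string in setup.py, pcs/settings_default.py, pcsd/bootstrap.rb and the man page headers, then commits and tags the result; `tarball` runs `setup.py sdist` and post-processes the archive into gzipped variants with and without the vendored gems.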
You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. 
You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. 
(This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. 
It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. 
pcs-0.9.164/pcs/__init__.py000066400000000000000000000000001326265502500153520ustar00rootroot00000000000000pcs-0.9.164/pcs/acl.py000066400000000000000000000227631326265502500143760ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import sys from pcs import ( prop, usage, utils, ) from pcs.cli.common.console_report import indent from pcs.cli.common.errors import CmdLineInputError from pcs.lib.errors import LibraryError from pcs.lib.pacemaker.values import is_true def acl_cmd(lib, argv, modifiers): if len(argv) < 1: sub_cmd, argv_next = "show", [] else: sub_cmd, argv_next = argv[0], argv[1:] try: if sub_cmd == "help": usage.acl(argv_next) elif sub_cmd == "show": show_acl_config(lib, argv_next, modifiers) elif sub_cmd == "enable": acl_enable(argv_next) elif sub_cmd == "disable": acl_disable(argv_next) elif sub_cmd == "role": acl_role(lib, argv_next, modifiers) elif sub_cmd in ["target", "user"]: acl_user(lib, argv_next, modifiers) elif sub_cmd == "group": acl_group(lib, argv_next, modifiers) elif sub_cmd == "permission": acl_permission(lib, argv_next, modifiers) else: raise CmdLineInputError() except LibraryError as e: utils.process_library_reports(e.args) except CmdLineInputError as e: utils.exit_on_cmdline_input_errror(e, "acl", sub_cmd) def _print_list_of_objects(obj_list, transformation_fn): out = [] for obj in obj_list: out += transformation_fn(obj) if out: print("\n".join(out)) def show_acl_config(lib, argv, modifiers): # TODO move to lib once lib supports cluster properties # enabled/disabled should be part of the structure returned # by lib.acl.get_config properties = utils.get_set_properties(defaults=prop.get_default_properties()) acl_enabled = properties.get("enable-acl", "").lower() if is_true(acl_enabled): print("ACLs are enabled") else: print("ACLs are disabled, run 'pcs acl enable' to enable") print() data = lib.acl.get_config() _print_list_of_objects(data.get("target_list", []), target_to_str) _print_list_of_objects(data.get("group_list", []), group_to_str) _print_list_of_objects(data.get("role_list", []), role_to_str) def acl_enable(argv): # TODO move to lib once lib supports cluster properties prop.set_property(["enable-acl=true"]) def acl_disable(argv): # TODO move to lib once lib supports cluster properties prop.set_property(["enable-acl=false"]) def acl_role(lib, argv, modifiers): if len(argv) < 1: raise CmdLineInputError() sub_cmd, argv_next = argv[0], argv[1:] try: if sub_cmd == "create": role_create(lib, argv_next, modifiers) elif sub_cmd == "delete": role_delete(lib, argv_next, modifiers) elif sub_cmd == "assign": role_assign(lib, argv_next, modifiers) elif sub_cmd == "unassign": role_unassign(lib, argv_next, modifiers) else: usage.show("acl", ["role"]) sys.exit(1) except CmdLineInputError as e: utils.exit_on_cmdline_input_errror(e, "acl", "role {0}".format(sub_cmd)) def acl_user(lib, argv, modifiers): if len(argv) < 1: raise CmdLineInputError() sub_cmd, argv_next = argv[0], argv[1:] try: if sub_cmd == "create": user_create(lib, argv_next, modifiers) elif sub_cmd == "delete": user_delete(lib, argv_next, modifiers) else: usage.show("acl", ["user"]) sys.exit(1) except CmdLineInputError as e: utils.exit_on_cmdline_input_errror(e, "acl", "user {0}".format(sub_cmd)) def user_create(lib, argv, dummy_modifiers): if len(argv) < 1: raise CmdLineInputError() user_name, role_list = argv[0], argv[1:] lib.acl.create_target(user_name, role_list) def user_delete(lib, argv, dummy_modifiers): if len(argv) != 1: raise CmdLineInputError() 
lib.acl.remove_target(argv[0]) def acl_group(lib, argv, modifiers): if len(argv) < 1: raise CmdLineInputError() sub_cmd, argv_next = argv[0], argv[1:] try: if sub_cmd == "create": group_create(lib, argv_next, modifiers) elif sub_cmd == "delete": group_delete(lib, argv_next, modifiers) else: usage.show("acl", ["group"]) sys.exit(1) except CmdLineInputError as e: utils.exit_on_cmdline_input_errror( e, "acl", "group {0}".format(sub_cmd) ) def group_create(lib, argv, dummy_modifiers): if len(argv) < 1: raise CmdLineInputError() group_name, role_list = argv[0], argv[1:] lib.acl.create_group(group_name, role_list) def group_delete(lib, argv, dummy_modifiers): if len(argv) != 1: raise CmdLineInputError() lib.acl.remove_group(argv[0]) def acl_permission(lib, argv, modifiers): if len(argv) < 1: raise CmdLineInputError() sub_cmd, argv_next = argv[0], argv[1:] try: if sub_cmd == "add": permission_add(lib, argv_next, modifiers) elif sub_cmd == "delete": run_permission_delete(lib, argv_next, modifiers) else: usage.show("acl", ["permission"]) sys.exit(1) except CmdLineInputError as e: utils.exit_on_cmdline_input_errror( e, "acl", "permission {0}".format(sub_cmd) ) def argv_to_permission_info_list(argv): if len(argv) % 3 != 0: raise CmdLineInputError() #wrapping by list, #because in python3 zip() returns an iterator instead of a list #and the loop below makes iteration over it permission_info_list = list(zip( [permission.lower() for permission in argv[::3]], [scope_type.lower() for scope_type in argv[1::3]], argv[2::3] )) for permission, scope_type, dummy_scope in permission_info_list: if( permission not in ['read', 'write', 'deny'] or scope_type not in ['xpath', 'id'] ): raise CmdLineInputError() return permission_info_list def role_create(lib, argv, modifiers): if len(argv) < 1: raise CmdLineInputError() role_id = argv.pop(0) description = "" desc_key = 'description=' if argv and argv[0].startswith(desc_key) and len(argv[0]) > len(desc_key): description = argv.pop(0)[len(desc_key):] permission_info_list = argv_to_permission_info_list(argv) lib.acl.create_role(role_id, permission_info_list, description) def role_delete(lib, argv, modifiers): if len(argv) != 1: raise CmdLineInputError() lib.acl.remove_role(argv[0], autodelete_users_groups=True) def _role_assign_unassign(argv, keyword, not_specific_fn, user_fn, group_fn): argv_len = len(argv) if argv_len < 2: raise CmdLineInputError() if argv_len == 2: not_specific_fn(*argv) elif argv_len == 3: role_id, something, ug_id = argv if something == keyword: not_specific_fn(role_id, ug_id) elif something == "user": user_fn(role_id, ug_id) elif something == "group": group_fn(role_id, ug_id) else: raise CmdLineInputError() elif argv_len == 4 and argv[1] == keyword and argv[2] in ["group", "user"]: role_id, _, user_group, ug_id = argv if user_group == "user": user_fn(role_id, ug_id) else: group_fn(role_id, ug_id) else: raise CmdLineInputError() def role_assign(lib, argv, dummy_modifiers): _role_assign_unassign( argv, "to", lib.acl.assign_role_not_specific, lib.acl.assign_role_to_target, lib.acl.assign_role_to_group ) def role_unassign(lib, argv, modifiers): _role_assign_unassign( argv, "from", lambda role_id, ug_id: lib.acl.unassign_role_not_specific( role_id, ug_id, modifiers.get("autodelete", False) ), lambda role_id, ug_id: lib.acl.unassign_role_from_target( role_id, ug_id, modifiers.get("autodelete", False) ), lambda role_id, ug_id: lib.acl.unassign_role_from_group( role_id, ug_id, modifiers.get("autodelete", False) ) ) def permission_add(lib, argv, 
dummy_modifiers): if len(argv) < 4: raise CmdLineInputError() role_id, argv_next = argv[0], argv[1:] lib.acl.add_permission(role_id, argv_to_permission_info_list(argv_next)) def run_permission_delete(lib, argv, dummy_modifiers): if len(argv) != 1: raise CmdLineInputError() lib.acl.remove_permission(argv[0]) def _target_group_to_str(type_name, obj): return ["{0}: {1}".format(type_name.title(), obj.get("id"))] + indent( [" ".join(["Roles:"] + obj.get("role_list", []))] ) def target_to_str(target): return _target_group_to_str("user", target) def group_to_str(group): return _target_group_to_str("group", group) def role_to_str(role): out = [] if role.get("description"): out.append("Description: {0}".format(role.get("description"))) out += map(_permission_to_str, role.get("permission_list", [])) return ["Role: {0}".format(role.get("id"))] + indent(out) def _permission_to_str(permission): out = ["Permission:", permission.get("kind")] if permission.get("xpath") is not None: out += ["xpath", permission.get("xpath")] elif permission.get("reference") is not None: out += ["id", permission.get("reference")] out.append("({0})".format(permission.get("id"))) return " ".join(out) pcs-0.9.164/pcs/alert.py000066400000000000000000000152131326265502500147360ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import sys import json from functools import partial from pcs import ( usage, utils, ) from pcs.cli.common.errors import CmdLineInputError from pcs.cli.common.parse_args import prepare_options, group_by_keywords from pcs.cli.common.console_report import indent from pcs.lib.errors import LibraryError parse_cmd_sections = partial(group_by_keywords, implicit_first_group_key="main") def alert_cmd(*args): argv = args[1] if not argv: sub_cmd = "config" else: sub_cmd = argv.pop(0) try: if sub_cmd == "help": usage.alert(argv) elif sub_cmd == "create": alert_add(*args) elif sub_cmd == "update": alert_update(*args) elif sub_cmd == "remove": alert_remove(*args) elif sub_cmd == "config" or sub_cmd == "show": print_alert_config(*args) elif sub_cmd == "recipient": recipient_cmd(*args) elif sub_cmd == "get_all_alerts": print_alerts_in_json(*args) else: raise CmdLineInputError() except LibraryError as e: utils.process_library_reports(e.args) except CmdLineInputError as e: utils.exit_on_cmdline_input_errror(e, "alert", sub_cmd) def recipient_cmd(*args): argv = args[1] if not argv: usage.alert(["recipient"]) sys.exit(1) sub_cmd = argv.pop(0) try: if sub_cmd == "help": usage.alert(["recipient"]) elif sub_cmd == "add": recipient_add(*args) elif sub_cmd == "update": recipient_update(*args) elif sub_cmd == "remove": recipient_remove(*args) else: raise CmdLineInputError() except CmdLineInputError as e: utils.exit_on_cmdline_input_errror( e, "alert", "recipient {0}".format(sub_cmd) ) def ensure_only_allowed_options(parameter_dict, allowed_list): for arg, value in parameter_dict.items(): if arg not in allowed_list: raise CmdLineInputError( "Unexpected parameter '{0}={1}'".format(arg, value) ) def alert_add(lib, argv, modifiers): if not argv: raise CmdLineInputError() sections = parse_cmd_sections(argv, set(["options", "meta"])) main_args = prepare_options(sections["main"]) ensure_only_allowed_options(main_args, ["id", "description", "path"]) lib.alert.create_alert( main_args.get("id", None), main_args.get("path", None), prepare_options(sections["options"]), prepare_options(sections["meta"]), main_args.get("description", None) ) def alert_update(lib, argv, modifiers): if not argv: 
raise CmdLineInputError() alert_id = argv[0] sections = parse_cmd_sections(argv[1:], set(["options", "meta"])) main_args = prepare_options(sections["main"]) ensure_only_allowed_options(main_args, ["description", "path"]) lib.alert.update_alert( alert_id, main_args.get("path", None), prepare_options(sections["options"]), prepare_options(sections["meta"]), main_args.get("description", None) ) def alert_remove(lib, argv, modifiers): if len(argv) < 1: raise CmdLineInputError() lib.alert.remove_alert(argv) def recipient_add(lib, argv, modifiers): if len(argv) < 2: raise CmdLineInputError() alert_id = argv[0] sections = parse_cmd_sections(argv[1:], set(["options", "meta"])) main_args = prepare_options(sections["main"]) ensure_only_allowed_options(main_args, ["description", "id", "value"]) lib.alert.add_recipient( alert_id, main_args.get("value", None), prepare_options(sections["options"]), prepare_options(sections["meta"]), recipient_id=main_args.get("id", None), description=main_args.get("description", None), allow_same_value=modifiers["force"] ) def recipient_update(lib, argv, modifiers): if len(argv) < 1: raise CmdLineInputError() recipient_id = argv[0] sections = parse_cmd_sections(argv[1:], set(["options", "meta"])) main_args = prepare_options(sections["main"]) ensure_only_allowed_options(main_args, ["description", "value"]) lib.alert.update_recipient( recipient_id, prepare_options(sections["options"]), prepare_options(sections["meta"]), recipient_value=main_args.get("value", None), description=main_args.get("description", None), allow_same_value=modifiers["force"] ) def recipient_remove(lib, argv, modifiers): if len(argv) < 1: raise CmdLineInputError() lib.alert.remove_recipient(argv) def _nvset_to_str(nvset_obj): output = [] for nvpair_obj in nvset_obj: output.append("{key}={value}".format( key=nvpair_obj["name"], value=nvpair_obj["value"] )) return " ".join(output) def __description_attributes_to_str(obj): output = [] if obj.get("description"): output.append("Description: {desc}".format(desc=obj["description"])) if obj.get("instance_attributes"): output.append("Options: {attributes}".format( attributes=_nvset_to_str(obj["instance_attributes"]) )) if obj.get("meta_attributes"): output.append("Meta options: {attributes}".format( attributes=_nvset_to_str(obj["meta_attributes"]) )) return output def _alert_to_str(alert): content = [] content.extend(__description_attributes_to_str(alert)) recipients = [] for recipient in alert.get("recipient_list", []): recipients.extend( _recipient_to_str(recipient)) if recipients: content.append("Recipients:") content.extend(indent(recipients, 1)) return ["Alert: {alert_id} (path={path})".format( alert_id=alert["id"], path=alert["path"] )] + indent(content, 1) def _recipient_to_str(recipient): return ["Recipient: {id} (value={value})".format( value=recipient["value"], id=recipient["id"] )] + indent(__description_attributes_to_str(recipient), 1) def print_alert_config(lib, argv, modifiers): if argv: raise CmdLineInputError() print("Alerts:") alert_list = lib.alert.get_all_alerts() if alert_list: for alert in alert_list: print("\n".join(indent(_alert_to_str(alert), 1))) else: print(" No alerts defined") def print_alerts_in_json(lib, argv, dummy_modifiers): # This is used only by pcsd, will be removed in new architecture if argv: raise CmdLineInputError() print(json.dumps(lib.alert.get_all_alerts())) pcs-0.9.164/pcs/app.py000066400000000000000000000175471326265502500144230ustar00rootroot00000000000000from __future__ import ( absolute_import, division, 
print_function, ) import getopt import os import sys import logging from pcs import ( acl, booth, cluster, config, constraint, node, pcsd, prop, qdevice, quorum, resource, settings, status, stonith, usage, utils, alert, ) from pcs.cli.common import ( capabilities, completion, parse_args, ) logging.basicConfig() usefile = False filename = "" def main(argv=None): if completion.has_applicable_environment(os.environ): print(completion.make_suggestions( os.environ, usage.generate_completion_tree_from_usage() )) sys.exit() argv = argv if argv else sys.argv[1:] utils.subprocess_setup() global filename, usefile orig_argv = argv[:] utils.pcs_options = {} argv = parse_args.upgrade_args(argv) # we want to support optional arguments for --wait, so if an argument # is specified with --wait (ie. --wait=30) then we use them waitsecs = None new_argv = [] for arg in argv: if arg.startswith("--wait="): tempsecs = arg.replace("--wait=","") if len(tempsecs) > 0: waitsecs = tempsecs arg = "--wait" new_argv.append(arg) argv = new_argv try: pcs_options, dummy_argv = getopt.gnu_getopt( parse_args.filter_out_non_option_negative_numbers(argv), parse_args.PCS_SHORT_OPTIONS, parse_args.PCS_LONG_OPTIONS, ) except getopt.GetoptError as err: print(err) usage.main() sys.exit(1) argv = parse_args.filter_out_options(argv) full = False for option, dummy_value in pcs_options: if option == "--full": full = True break for o, a in pcs_options: if not o in utils.pcs_options: if o in ["--watchdog", "--device"]: a = [a] utils.pcs_options[o] = a else: # If any options are a list then they've been entered twice which isn't valid if o not in ["--watchdog", "--device"]: utils.err("%s can only be used once" % o) else: utils.pcs_options[o].append(a) if o == "-h" or o == "--help": if len(argv) == 0: usage.main() sys.exit() else: argv = [argv[0], "help" ] + argv[1:] elif o == "-f": usefile = True filename = a utils.usefile = usefile utils.filename = filename elif o == "--corosync_conf": settings.corosync_conf_file = a elif o == "--cluster_conf": settings.cluster_conf_file = a elif o == "--version": print(settings.pcs_version) if full: print(" ".join( sorted([ feat["id"] for feat in capabilities.get_pcs_capabilities() ]) )) sys.exit() elif o == "--fullhelp": usage.full_usage() sys.exit() elif o == "--wait": utils.pcs_options[o] = waitsecs elif o == "--request-timeout": request_timeout_valid = False try: timeout = int(a) if timeout > 0: utils.pcs_options[o] = timeout request_timeout_valid = True except ValueError: pass if not request_timeout_valid: utils.err( ( "'{0}' is not a valid --request-timeout value, use " "a positive integer" ).format(a) ) if len(argv) == 0: usage.main() sys.exit(1) # create a dummy logger # we do not have a log file for cli (yet), but library requires a logger logger = logging.getLogger("pcs") logger.propagate = 0 logger.handlers = [] command = argv.pop(0) if (command == "-h" or command == "help"): usage.main() return cmd_map = { "resource": resource.resource_cmd, "cluster": cluster.cluster_cmd, "stonith": stonith.stonith_cmd, "property": prop.property_cmd, "constraint": constraint.constraint_cmd, "acl": lambda argv: acl.acl_cmd( utils.get_library_wrapper(), argv, utils.get_modifiers() ), "status": lambda argv: status.status_cmd( utils.get_library_wrapper(), argv, utils.get_modifiers() ), "config": config.config_cmd, "pcsd": pcsd.pcsd_cmd, "node": lambda argv: node.node_cmd( utils.get_library_wrapper(), argv, utils.get_modifiers() ), "quorum": lambda argv: quorum.quorum_cmd( utils.get_library_wrapper(), argv, 
utils.get_modifiers() ), "qdevice": lambda argv: qdevice.qdevice_cmd( utils.get_library_wrapper(), argv, utils.get_modifiers() ), "alert": lambda args: alert.alert_cmd( utils.get_library_wrapper(), args, utils.get_modifiers() ), "booth": lambda argv: booth.booth_cmd( utils.get_library_wrapper(), argv, utils.get_modifiers() ), } if command not in cmd_map: usage.main() sys.exit(1) # root can run everything directly, also help can be displayed, # working on a local file also do not need to run under root if (os.getuid() == 0) or (argv and argv[0] == "help") or usefile: cmd_map[command](argv) return # specific commands need to be run under root account, pass them to pcsd # don't forget to allow each command in pcsd.rb in "post /run_pcs do" root_command_list = [ ['cluster', 'auth', '...'], ['cluster', 'corosync', '...'], ['cluster', 'destroy', '...'], ['cluster', 'disable', '...'], ['cluster', 'enable', '...'], ['cluster', 'node', '...'], ['cluster', 'pcsd-status', '...'], ['cluster', 'setup', '...'], ['cluster', 'start', '...'], ['cluster', 'stop', '...'], ['cluster', 'sync', '...'], # ['config', 'restore', '...'], # handled in config.config_restore ['pcsd', 'sync-certificates'], ['status', 'nodes', 'corosync-id'], ['status', 'nodes', 'pacemaker-id'], ['status', 'pcsd', '...'], ] argv_cmd = argv[:] argv_cmd.insert(0, command) for root_cmd in root_command_list: if ( (argv_cmd == root_cmd) or ( root_cmd[-1] == "..." and argv_cmd[:len(root_cmd)-1] == root_cmd[:-1] ) ): # handle interactivity of 'pcs cluster auth' if argv_cmd[0:2] == ["cluster", "auth"]: if "-u" not in utils.pcs_options: username = utils.get_terminal_input('Username: ') orig_argv.extend(["-u", username]) if "-p" not in utils.pcs_options: password = utils.get_terminal_password() orig_argv.extend(["-p", password]) # call the local pcsd err_msgs, exitcode, std_out, std_err = utils.call_local_pcsd( orig_argv, True ) if err_msgs: for msg in err_msgs: utils.err(msg, False) sys.exit(1) if std_out.strip(): print(std_out) if std_err.strip(): sys.stderr.write(std_err) sys.exit(exitcode) return cmd_map[command](argv) pcs-0.9.164/pcs/bash_completion000066400000000000000000000020051326265502500163410ustar00rootroot00000000000000# bash completion for pcs _pcs_completion(){ LENGTHS=() for WORD in "${COMP_WORDS[@]}"; do LENGTHS+=(${#WORD}) done COMPREPLY=( $( \ env COMP_WORDS="${COMP_WORDS[*]}" \ COMP_LENGTHS="${LENGTHS[*]}" \ COMP_CWORD=$COMP_CWORD \ PCS_AUTO_COMPLETE=1 pcs \ ) ) #examples what we get: #pcs #COMP_WORDS: pcs COMP_LENGTHS: 3 #pcs co #COMP_WORDS: pcs co COMP_LENGTHS: 3 2 # pcs config #COMP_WORDS: pcs config COMP_LENGTHS: 3 6 # pcs config " #COMP_WORDS: pcs config " COMP_LENGTHS: 3 6 4 # pcs config "'\\n #COMP_WORDS: pcs config "'\\n COMP_LENGTHS: 3 6 5'" } # -o default # Use readline's default filename completion if the compspec generates no # matches. # -F function # The shell function function is executed in the current shell environment. # When it finishes, the possible completions are retrieved from the value of # the COMPREPLY array variable. 
complete -o default -F _pcs_completion pcs pcs-0.9.164/pcs/booth.py000066400000000000000000000052371326265502500147470ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import sys from pcs import usage from pcs import utils from pcs.cli.booth import command from pcs.cli.common.errors import CmdLineInputError from pcs.lib.errors import LibraryError from pcs.resource import resource_remove, resource_restart def booth_cmd(lib, argv, modifiers): """ routes booth command """ if len(argv) < 1: usage.booth() sys.exit(1) sub_cmd, argv_next = argv[0], argv[1:] try: if sub_cmd == "help": usage.booth([" ".join(argv_next)] if argv_next else []) elif sub_cmd == "config": command.config_show(lib, argv_next, modifiers) elif sub_cmd == "setup": command.config_setup(lib, argv_next, modifiers) elif sub_cmd == "destroy": command.config_destroy(lib, argv_next, modifiers) elif sub_cmd == "ticket": if len(argv_next) < 1: raise CmdLineInputError() if argv_next[0] == "add": command.config_ticket_add(lib, argv_next[1:], modifiers) elif argv_next[0] == "remove": command.config_ticket_remove(lib, argv_next[1:], modifiers) elif argv_next[0] == "grant": command.ticket_grant(lib, argv_next[1:], modifiers) elif argv_next[0] == "revoke": command.ticket_revoke(lib, argv_next[1:], modifiers) else: raise CmdLineInputError() elif sub_cmd == "create": command.create_in_cluster(lib, argv_next, modifiers) elif sub_cmd == "remove": command.get_remove_from_cluster(resource_remove)( lib, argv_next, modifiers ) elif sub_cmd == "restart": command.get_restart(resource_restart)(lib, argv_next, modifiers) elif sub_cmd == "sync": command.sync(lib, argv_next, modifiers) elif sub_cmd == "pull": command.pull(lib, argv_next, modifiers) elif sub_cmd == "enable": command.enable(lib, argv_next, modifiers) elif sub_cmd == "disable": command.disable(lib, argv_next, modifiers) elif sub_cmd == "start": command.start(lib, argv_next, modifiers) elif sub_cmd == "stop": command.stop(lib, argv_next, modifiers) elif sub_cmd == "status": command.status(lib, argv_next, modifiers) else: raise CmdLineInputError() except LibraryError as e: utils.process_library_reports(e.args) except CmdLineInputError as e: utils.exit_on_cmdline_input_errror(e, "booth", sub_cmd) pcs-0.9.164/pcs/cli/000077500000000000000000000000001326265502500140225ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/__init__.py000066400000000000000000000000001326265502500161210ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/booth/000077500000000000000000000000001326265502500151355ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/booth/__init__.py000066400000000000000000000000001326265502500172340ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/booth/command.py000066400000000000000000000115041326265502500171260ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.cli.common.errors import CmdLineInputError from pcs.cli.common.parse_args import group_by_keywords, prepare_options DEFAULT_BOOTH_NAME = "booth" def __get_name(modifiers): return modifiers["name"] if modifiers["name"] else DEFAULT_BOOTH_NAME def config_setup(lib, arg_list, modifiers): """ create booth config """ peers = group_by_keywords( arg_list, set(["sites", "arbitrators"]), keyword_repeat_allowed=False ) if "sites" not in peers or not peers["sites"]: raise CmdLineInputError() booth_config = [] for site in peers["sites"]: booth_config.append({"key": "site", "value": site, "details": []}) for arbitrator in peers["arbitrators"]: 
booth_config.append({ "key": "arbitrator", "value": arbitrator, "details": [], }) lib.booth.config_setup(booth_config, modifiers["force"]) def config_destroy(lib, arg_list, modifiers): """ destroy booth config """ if arg_list: raise CmdLineInputError() lib.booth.config_destroy(ignore_config_load_problems=modifiers["force"]) def config_show(lib, arg_list, modifiers): """ print booth config """ if len(arg_list) > 1: raise CmdLineInputError() node = None if not arg_list else arg_list[0] print(lib.booth.config_text(DEFAULT_BOOTH_NAME, node), end="") def config_ticket_add(lib, arg_list, modifiers): """ add ticket to current configuration """ if not arg_list: raise CmdLineInputError() lib.booth.config_ticket_add( arg_list[0], prepare_options(arg_list[1:]), allow_unknown_options=modifiers["force"] ) def config_ticket_remove(lib, arg_list, modifiers): """ remove ticket from current configuration """ if len(arg_list) != 1: raise CmdLineInputError() lib.booth.config_ticket_remove(arg_list[0]) def ticket_operation(lib_call, arg_list, modifiers): site_ip = None if len(arg_list) == 2: site_ip = arg_list[1] elif len(arg_list) != 1: raise CmdLineInputError() ticket = arg_list[0] lib_call(__get_name(modifiers), ticket, site_ip) def ticket_revoke(lib, arg_list, modifiers): ticket_operation(lib.booth.ticket_revoke, arg_list, modifiers) def ticket_grant(lib, arg_list, modifiers): ticket_operation(lib.booth.ticket_grant, arg_list, modifiers) def create_in_cluster(lib, arg_list, modifiers): if len(arg_list) != 2 or arg_list[0] != "ip": raise CmdLineInputError() lib.booth.create_in_cluster( __get_name(modifiers), ip=arg_list[1], allow_absent_resource_agent=modifiers["force"] ) def get_remove_from_cluster(resource_remove): #TODO resource_remove is a provisional hack until resources are moved to #lib def remove_from_cluster(lib, arg_list, modifiers): if arg_list: raise CmdLineInputError() lib.booth.remove_from_cluster( __get_name(modifiers), resource_remove, modifiers["force"], ) return remove_from_cluster def get_restart(resource_restart): #TODO resource_restart is a provisional hack until resources are moved to #lib def restart(lib, arg_list, modifiers): if arg_list: raise CmdLineInputError() lib.booth.restart( __get_name(modifiers), resource_restart, modifiers["force"], ) return restart def sync(lib, arg_list, modifiers): if arg_list: raise CmdLineInputError() lib.booth.config_sync( DEFAULT_BOOTH_NAME, skip_offline_nodes=modifiers["skip_offline_nodes"] ) def enable(lib, arg_list, modifiers): if arg_list: raise CmdLineInputError() lib.booth.enable(DEFAULT_BOOTH_NAME) def disable(lib, arg_list, modifiers): if arg_list: raise CmdLineInputError() lib.booth.disable(DEFAULT_BOOTH_NAME) def start(lib, arg_list, modifiers): if arg_list: raise CmdLineInputError() lib.booth.start(DEFAULT_BOOTH_NAME) def stop(lib, arg_list, modifiers): if arg_list: raise CmdLineInputError() lib.booth.stop(DEFAULT_BOOTH_NAME) def pull(lib, arg_list, modifiers): if len(arg_list) != 1: raise CmdLineInputError() lib.booth.pull(arg_list[0], DEFAULT_BOOTH_NAME) def status(lib, arg_list, modifiers): if arg_list: raise CmdLineInputError() booth_status = lib.booth.status(DEFAULT_BOOTH_NAME) if booth_status.get("ticket"): print("TICKETS:") print(booth_status["ticket"]) if booth_status.get("peers"): print("PEERS:") print(booth_status["peers"]) if booth_status.get("status"): print("DAEMON STATUS:") print(booth_status["status"]) pcs-0.9.164/pcs/cli/booth/console_report.py000066400000000000000000000104641326265502500205510ustar00rootroot00000000000000from
__future__ import ( absolute_import, division, print_function, ) from pcs.common import report_codes as codes def format_booth_default(value, template): return "" if value in ("booth", "", None) else template.format(value) #Each value (a callable taking report_item.info) returns a message. #Force text will be appended if necessary. #If it is necessary to put the force text inside the string then the callable #must take the force_text parameter. CODE_TO_MESSAGE_BUILDER_MAP = { codes.BOOTH_LACK_OF_SITES: lambda info: "lack of sites for booth configuration (need 2 at least): sites {0}" .format(", ".join(info["sites"]) if info["sites"] else "missing") , codes.BOOTH_EVEN_PEERS_NUM: lambda info: "odd number of peers is required (entered {number} peers)" .format(**info) , codes.BOOTH_ADDRESS_DUPLICATION: lambda info: "duplicate address for booth configuration: {0}" .format(", ".join(info["addresses"])) , codes.BOOTH_CONFIG_UNEXPECTED_LINES: lambda info: "unexpected line appeared in config: \n{0}" .format("\n".join(info["line_list"])) , codes.BOOTH_INVALID_NAME: lambda info: "booth name '{name}' is not valid ({reason})" .format(**info) , codes.BOOTH_TICKET_NAME_INVALID: lambda info: "booth ticket name '{0}' is not valid, use alphanumeric chars or dash" .format(info["ticket_name"]) , codes.BOOTH_TICKET_DUPLICATE: lambda info: "booth ticket name '{ticket_name}' already exists in configuration" .format(**info) , codes.BOOTH_TICKET_DOES_NOT_EXIST: lambda info: "booth ticket name '{ticket_name}' does not exist" .format(**info) , codes.BOOTH_ALREADY_IN_CIB: lambda info: "booth instance '{name}' is already created as cluster resource" .format(**info) , codes.BOOTH_NOT_EXISTS_IN_CIB: lambda info: "booth instance '{name}' not found in cib" .format(**info) , codes.BOOTH_CONFIG_IS_USED: lambda info: "booth instance '{0}' is used{1}".format( info["name"], " {0}".format(info["detail"]) if info["detail"] else "", ) , codes.BOOTH_MULTIPLE_TIMES_IN_CIB: lambda info: "found more than one booth instance '{name}' in cib" .format(**info) , codes.BOOTH_CONFIG_DISTRIBUTION_STARTED: lambda info: "Sending booth configuration to cluster nodes..."
, codes.BOOTH_CONFIG_ACCEPTED_BY_NODE: lambda info: "{node_info}Booth config{desc} saved.".format( desc=( "" if info["name_list"] in [None, [], ["booth"]] else "(s) ({0})".format(", ".join(info["name_list"])) ), node_info="{0}: ".format(info["node"]) if info["node"] else "" ) , codes.BOOTH_CONFIG_DISTRIBUTION_NODE_ERROR: lambda info: "Unable to save booth config{desc} on node '{node}': {reason}".format( desc=format_booth_default(info["name"], " ({0})"), **info ) , codes.BOOTH_CONFIG_READ_ERROR: lambda info: "Unable to read booth config{desc}.".format( desc=format_booth_default(info["name"], " ({0})") ) , codes.BOOTH_FETCHING_CONFIG_FROM_NODE: lambda info: "Fetching booth config{desc} from node '{node}'...".format( desc=format_booth_default(info["config"], " '{0}'"), **info ) , codes.BOOTH_DAEMON_STATUS_ERROR: lambda info: "unable to get status of booth daemon: {reason}".format(**info) , codes.BOOTH_TICKET_STATUS_ERROR: "unable to get status of booth tickets", codes.BOOTH_PEERS_STATUS_ERROR: "unable to get status of booth peers", codes.BOOTH_CANNOT_DETERMINE_LOCAL_SITE_IP: lambda info: "cannot determine local site ip, please specify site parameter" , codes.BOOTH_TICKET_OPERATION_FAILED: lambda info: ( "unable to {operation} booth ticket '{ticket_name}'" " for site '{site_ip}', reason: {reason}" ).format(**info) , codes.BOOTH_SKIPPING_CONFIG: lambda info: "Skipping config file '{config_file}': {reason}".format(**info) , codes.BOOTH_CANNOT_IDENTIFY_KEYFILE: "cannot identify authfile in booth configuration" , } pcs-0.9.164/pcs/cli/booth/env.py000066400000000000000000000045041326265502500163020ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.cli.common import console_report from pcs.common.env_file_role_codes import BOOTH_CONFIG, BOOTH_KEY from pcs.lib.errors import LibraryEnvError from pcs.cli.common import env_file def middleware_config(name, config_path, key_path): if config_path and not key_path: raise console_report.error( "When --booth-conf is specified, --booth-key must be specified as well" ) if key_path and not config_path: raise console_report.error( "When --booth-key is specified, --booth-conf must be specified as well" ) is_mocked_environment = config_path and key_path def create_booth_env(): if not is_mocked_environment: return {"name": name} return { "name": name, "config_file": env_file.read(config_path), "key_file": env_file.read(key_path, is_binary=True), "key_path": key_path, } def flush(modified_env): if not is_mocked_environment: return if not modified_env: #TODO now this would not happen #for more information see comment in #pcs.cli.common.lib_wrapper.lib_env_to_cli_env raise console_report.error("Error during library communication") env_file.process_no_existing_file_expectation( "booth config file", modified_env["config_file"], config_path ) env_file.process_no_existing_file_expectation( "booth key file", modified_env["key_file"], key_path ) env_file.write(modified_env["key_file"], key_path) env_file.write(modified_env["config_file"], config_path) def apply(next_in_line, env, *args, **kwargs): env.booth = create_booth_env() try: result_of_next = next_in_line(env, *args, **kwargs) except LibraryEnvError as e: missing_file = env_file.MissingFileCandidateInfo env_file.evaluate_for_missing_files(e, [ missing_file(BOOTH_CONFIG, "Booth config file", config_path), missing_file(BOOTH_KEY, "Booth key file", key_path), ]) raise e flush(env.booth["modified_env"]) return result_of_next return apply
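The `middleware_config` function above backs the `--booth-conf` and `--booth-key` command line options: when both are given, booth commands operate on the specified local files instead of the cluster's live booth configuration. A hedged usage sketch follows (the paths and the ticket name are illustrative; as the checks at the top of the function enforce, the two options are only valid together):

```shell
# pcs booth ticket add ticketA --booth-conf=/tmp/booth.conf --booth-key=/tmp/booth.key
```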
pcs-0.9.164/pcs/cli/booth/test/000077500000000000000000000000001326265502500161145ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/booth/test/__init__.py000066400000000000000000000000001326265502500202130ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/booth/test/test_command.py000066400000000000000000000030651326265502500211470ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.cli.booth import command from pcs.test.tools.pcs_unittest import mock class ConfigSetupTest(TestCase): def test_call_lib_with_correct_args(self): lib = mock.MagicMock() lib.booth = mock.MagicMock() lib.booth.config_setup = mock.MagicMock() command.config_setup( lib, arg_list=[ "sites", "1.1.1.1", "2.2.2.2", "4.4.4.4", "arbitrators", "3.3.3.3" ], modifiers={ "force": False, } ) lib.booth.config_setup.assert_called_once_with( [ {"key": "site", "value": "1.1.1.1", "details": []}, {"key": "site", "value": "2.2.2.2", "details": []}, {"key": "site", "value": "4.4.4.4", "details": []}, {"key": "arbitrator", "value": "3.3.3.3", "details": []}, ], False ) class ConfigTicketAddTest(TestCase): def test_call_lib_with_ticket_name(self): lib = mock.MagicMock() lib.booth = mock.MagicMock() lib.booth.config_ticket_add = mock.MagicMock() command.config_ticket_add( lib, arg_list=["TICKET_A", "timeout=10"], modifiers={"force": True} ) lib.booth.config_ticket_add.assert_called_once_with( "TICKET_A", {"timeout": "10"}, allow_unknown_options=True ) pcs-0.9.164/pcs/cli/booth/test/test_env.py000066400000000000000000000102471326265502500203210ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.cli.booth import env from pcs.common import report_codes, env_file_role_codes from pcs.lib.errors import LibraryEnvError, ReportItem from pcs.test.tools.pcs_unittest import mock from pcs.test.tools.misc import create_setup_patch_mixin SetupPatchMixin = create_setup_patch_mixin(env) class BoothConfTest(TestCase, SetupPatchMixin): def setUp(self): self.write = self.setup_patch("env_file.write") self.read = self.setup_patch("env_file.read") self.process_no_existing_file_expectation = self.setup_patch( "env_file.process_no_existing_file_expectation" ) def test_sucessfully_care_about_local_file(self): def next_in_line(env): env.booth["modified_env"] = { "config_file": { "content": "file content", "no_existing_file_expected": False, }, "key_file": { "content": "key file content", "no_existing_file_expected": False, } } return "call result" mock_env = mock.MagicMock() booth_conf_middleware = env.middleware_config( "booth-name", "/local/file/path.conf", "/local/file/path.key", ) self.assertEqual( "call result", booth_conf_middleware(next_in_line, mock_env) ) self.assertEqual(self.read.mock_calls, [ mock.call('/local/file/path.conf'), mock.call('/local/file/path.key', is_binary=True), ]) self.assertEqual(self.process_no_existing_file_expectation.mock_calls, [ mock.call( 'booth config file', { 'content': 'file content', 'no_existing_file_expected': False }, '/local/file/path.conf' ), mock.call( 'booth key file', { 'content': 'key file content', 'no_existing_file_expected': False }, '/local/file/path.key' ), ]) self.assertEqual(self.write.mock_calls, [ mock.call( { 'content': 'key file content', 'no_existing_file_expected': False }, '/local/file/path.key' ), mock.call( { 'content': 'file content', 'no_existing_file_expected': False }, 
'/local/file/path.conf' ) ]) def test_catch_exactly_its_exception(self): report_missing = self.setup_patch("env_file.report_missing") next_in_line = mock.Mock(side_effect=LibraryEnvError( ReportItem.error(report_codes.FILE_DOES_NOT_EXIST, info={ "file_role": env_file_role_codes.BOOTH_CONFIG, }), ReportItem.error(report_codes.FILE_DOES_NOT_EXIST, info={ "file_role": env_file_role_codes.BOOTH_KEY, }), ReportItem.error("OTHER ERROR", info={}), )) mock_env = mock.MagicMock() self.read.return_value = {"content": None} booth_conf_middleware = env.middleware_config( "booth-name", "/local/file/path.conf", "/local/file/path.key", ) raised_exception = [] def run_middleware(): try: booth_conf_middleware(next_in_line, mock_env) except Exception as e: raised_exception.append(e) raise e self.assertRaises(LibraryEnvError, run_middleware) self.assertEqual(1, len(raised_exception[0].unprocessed)) self.assertEqual("OTHER ERROR", raised_exception[0].unprocessed[0].code) self.assertEqual(report_missing.mock_calls, [ mock.call('Booth config file', '/local/file/path.conf'), mock.call('Booth key file', '/local/file/path.key'), ]) pcs-0.9.164/pcs/cli/booth/test/test_reports.py000066400000000000000000000101431326265502500212220ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.cli.booth.console_report import CODE_TO_MESSAGE_BUILDER_MAP from pcs.common import report_codes as codes class BoothConfigAcceptedByNodeTest(TestCase): def setUp(self): self.build = CODE_TO_MESSAGE_BUILDER_MAP[ codes.BOOTH_CONFIG_ACCEPTED_BY_NODE ] def test_create_message_with_empty_name_list(self): self.assertEqual("Booth config saved.", self.build({ "node": None, "name_list": [], })) def test_create_message_with_name_booth_only(self): self.assertEqual("Booth config saved.", self.build({ "node": None, "name_list": ["booth"], })) def test_create_message_with_single_name(self): self.assertEqual("Booth config(s) (some) saved.", self.build({ "node": None, "name_list": ["some"], })) def test_create_message_with_multiple_name(self): self.assertEqual("Booth config(s) (some, another) saved.", self.build({ "node": None, "name_list": ["some", "another"], })) def test_create_message_with_node(self): self.assertEqual( "node1: Booth config(s) (some, another) saved.", self.build({ "node": "node1", "name_list": ["some", "another"], }), ) class BoothConfigDistributionNodeErrorTest(TestCase): def setUp(self): self.build = CODE_TO_MESSAGE_BUILDER_MAP[ codes.BOOTH_CONFIG_DISTRIBUTION_NODE_ERROR ] def test_create_message_for_empty_name(self): self.assertEqual( "Unable to save booth config on node 'node1': reason1", self.build({ "node": "node1", "reason": "reason1", "name": None, }) ) def test_create_message_for_booth_name(self): self.assertEqual( "Unable to save booth config on node 'node1': reason1", self.build({ "node": "node1", "reason": "reason1", "name": "booth", }) ) def test_create_message_for_another_name(self): self.assertEqual( "Unable to save booth config (another) on node 'node1': reason1", self.build({ "node": "node1", "reason": "reason1", "name": "another", }) ) class BoothConfigReadErrorTest(TestCase): def setUp(self): self.build = CODE_TO_MESSAGE_BUILDER_MAP[ codes.BOOTH_CONFIG_READ_ERROR ] def test_create_message_for_empty_name(self): self.assertEqual("Unable to read booth config.", self.build({ "name": None, })) def test_create_message_for_booth_name(self): self.assertEqual("Unable to read booth config.", self.build({ "name": "booth", }))
def test_create_message_for_another_name(self): self.assertEqual("Unable to read booth config (another).", self.build({ "name": "another", })) class BoothFetchingConfigFromNodeTest(TestCase): def setUp(self): self.build = CODE_TO_MESSAGE_BUILDER_MAP[ codes.BOOTH_FETCHING_CONFIG_FROM_NODE ] def test_create_message_for_empty_name(self): self.assertEqual( "Fetching booth config from node 'node1'...", self.build({ "config": None, "node": "node1", }) ) def test_create_message_for_booth_name(self): self.assertEqual( "Fetching booth config from node 'node1'...", self.build({ "config": "booth", "node": "node1", }) ) def test_create_message_for_another_name(self): self.assertEqual( "Fetching booth config 'another' from node 'node1'...", self.build({ "config": "another", "node": "node1", }) ) pcs-0.9.164/pcs/cli/cluster/000077500000000000000000000000001326265502500155035ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/cluster/__init__.py000066400000000000000000000000001326265502500176020ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/cluster/command.py000066400000000000000000000060351326265502500174770ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.cli.resource.parse_args import( parse_create_simple as parse_resource_create_args ) from pcs.cli.common.errors import CmdLineInputError from pcs.cli.common.parse_args import prepare_options def _node_add_remote_separate_host_and_name(arg_list): node_host = arg_list[0] if len(arg_list) == 1: node_name = node_host rest_args = [] elif "=" in arg_list[1] or arg_list[1] in ["op", "meta"]: node_name = node_host rest_args = arg_list[1:] else: node_name = arg_list[1] rest_args = arg_list[2:] return node_host, node_name, rest_args def node_add_remote(lib, arg_list, modifiers): if not arg_list: raise CmdLineInputError() node_host, node_name, rest_args = _node_add_remote_separate_host_and_name( arg_list ) parts = parse_resource_create_args(rest_args) force = modifiers["force"] lib.remote_node.node_add_remote( node_host, node_name, parts["op"], parts["meta"], parts["options"], skip_offline_nodes=modifiers["skip_offline_nodes"], allow_incomplete_distribution=force, allow_pacemaker_remote_service_fail=force, allow_invalid_operation=force, allow_invalid_instance_attributes=force, use_default_operations=not modifiers["no-default-ops"], wait=modifiers["wait"], ) def create_node_remove_remote(remove_resource): def node_remove_remote(lib, arg_list, modifiers): if not arg_list: raise CmdLineInputError() lib.remote_node.node_remove_remote( arg_list[0], remove_resource, skip_offline_nodes=modifiers["skip_offline_nodes"], allow_remove_multiple_nodes=modifiers["force"], allow_pacemaker_remote_service_fail=modifiers["force"], ) return node_remove_remote def node_add_guest(lib, arg_list, modifiers): if len(arg_list) < 2: raise CmdLineInputError() node_name = arg_list[0] resource_id = arg_list[1] meta_options = prepare_options(arg_list[2:]) lib.remote_node.node_add_guest( node_name, resource_id, meta_options, skip_offline_nodes=modifiers["skip_offline_nodes"], allow_incomplete_distribution=modifiers["force"], allow_pacemaker_remote_service_fail=modifiers["force"], wait=modifiers["wait"], ) def node_remove_guest(lib, arg_list, modifiers): if not arg_list: raise CmdLineInputError() lib.remote_node.node_remove_guest( arg_list[0], skip_offline_nodes=modifiers["skip_offline_nodes"], allow_remove_multiple_nodes=modifiers["force"], allow_pacemaker_remote_service_fail=modifiers["force"], wait=modifiers["wait"], ) def 
node_clear(lib, arg_list, modifiers): if len(arg_list) != 1: raise CmdLineInputError() lib.cluster.node_clear( arg_list[0], allow_clear_cluster_node=modifiers["force"] ) pcs-0.9.164/pcs/cli/cluster/test/000077500000000000000000000000001326265502500164625ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/cluster/test/__init__.py000066400000000000000000000000001326265502500205610ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/cluster/test/test_command.py000066400000000000000000000011741326265502500215140ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.cli.cluster import command class ParseNodeAddRemote(TestCase): def test_deal_with_explicit_name(self): self.assertEqual( command._node_add_remote_separate_host_and_name( ["host", "name", "a=b"] ), ("host", "name", ["a=b"]) ) def test_deal_with_implicit_name(self): self.assertEqual( command._node_add_remote_separate_host_and_name(["host", "a=b"]), ("host", "host", ["a=b"]) ) pcs-0.9.164/pcs/cli/common/000077500000000000000000000000001326265502500153125ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/common/__init__.py000066400000000000000000000000001326265502500174110ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/common/capabilities.py000066400000000000000000000027541326265502500203250ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree import os.path from textwrap import dedent from pcs import settings from pcs.cli.common.console_report import error from pcs.common.tools import xml_fromstring def get_capabilities_definition(): """ Read and parse capabilities file The point is to return all data in python structures for further processing. """ filename = os.path.join(settings.pcsd_exec_location, "capabilities.xml") try: with open(filename, mode="r") as file_: capabilities_xml = xml_fromstring(file_.read()) except (EnvironmentError, etree.XMLSyntaxError, etree.DocumentInvalid) as e: raise error( "Cannot read capabilities definition file '{0}': '{1}'" .format(filename, str(e)) ) capabilities = [] for feat_xml in capabilities_xml.findall(".//capability"): feat = dict(feat_xml.attrib) desc = feat_xml.find("./description") # dedent and strip remove indentation in the XML file feat["description"] = "" if desc is None else dedent(desc.text).strip() capabilities.append(feat) return capabilities def get_pcs_capabilities(): """ Get pcs capabilities from the capabilities file """ return [ { "id": feat["id"], "description": feat["description"], } for feat in get_capabilities_definition() if feat["in-pcs"] == "1" ] pcs-0.9.164/pcs/cli/common/completion.py000066400000000000000000000060271326265502500200420ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) def has_applicable_environment(environment): """ dict environment - very likely os.environ """ return bool( all( key in environment for key in ["COMP_WORDS", "COMP_LENGTHS", "COMP_CWORD", "PCS_AUTO_COMPLETE"] ) and environment['PCS_AUTO_COMPLETE'].strip() not in ('0', '') and environment['COMP_CWORD'].isdigit() ) def make_suggestions(environment, suggestion_tree): """ dict environment - very likely os.environ dict suggestion_tree - {'acl': {'role': {'create': ...}}}...
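    Example (hypothetical values, assuming PCS_AUTO_COMPLETE is set): with
    COMP_WORDS="pcs resource", COMP_LENGTHS="3 8" and COMP_CWORD="1", the
    typed words are ["pcs", "resource"] and the word under the cursor is
    "resource"; the function then returns the first-level keys of
    suggestion_tree that start with "resource", joined by newlines.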
""" if not has_applicable_environment(environment): raise EnvironmentError("Environment is not completion read") try: typed_word_list = _split_words( environment["COMP_WORDS"], environment["COMP_LENGTHS"].split(" "), ) except EnvironmentError: return "" return "\n".join(_find_suggestions( suggestion_tree, typed_word_list, int(environment['COMP_CWORD']) )) def _split_words(joined_words, word_lengths): cursor_position = 0 words_string_len = len(joined_words) word_list = [] for length in word_lengths: if not length.isdigit(): raise EnvironmentError( "Length of word '{0}' is not digit".format(length) ) next_position = cursor_position + int(length) if next_position > words_string_len: raise EnvironmentError( "Expected lengths are bigger than word lengths" ) if( next_position != words_string_len and not joined_words[next_position].isspace() ): raise EnvironmentError("Words separator is not expected space") word_list.append(joined_words[cursor_position:next_position]) cursor_position = next_position + 1 if words_string_len > next_position: raise EnvironmentError("Expected lengths are smaller then word lengths") return word_list def _find_suggestions(suggestion_tree, typed_word_list, word_under_cursor_idx): if not 1 <= word_under_cursor_idx <= len(typed_word_list): return [] if len(typed_word_list) == word_under_cursor_idx: #not started type the last word yet word_under_cursor = '' else: word_under_cursor = typed_word_list[word_under_cursor_idx] words_for_current_cursor_position = _get_subcommands( suggestion_tree, typed_word_list[1:word_under_cursor_idx] ) return [ word for word in words_for_current_cursor_position if word.startswith(word_under_cursor) ] def _get_subcommands(suggestion_tree, previous_subcommand_list): subcommand_tree = suggestion_tree for subcommand in previous_subcommand_list: if subcommand not in subcommand_tree: return [] subcommand_tree = subcommand_tree[subcommand] return sorted(list(subcommand_tree.keys())) pcs-0.9.164/pcs/cli/common/console_report.py000066400000000000000000001244271326265502500207330ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from collections import Iterable from functools import partial import sys from pcs.common import report_codes as codes from pcs.common.fencing_topology import TARGET_TYPE_ATTRIBUTE from pcs.common.tools import is_string INSTANCE_SUFFIX = "@{0}" NODE_PREFIX = "{0}: " _type_translation = { "acl_group": "ACL group", "acl_permission": "ACL permission", "acl_role": "ACL role", "acl_target": "ACL user", "primitive": "resource", } _type_articles = { "ACL group": "an", "ACL user": "an", "ACL role": "an", "ACL permission": "an", } def warn(message): sys.stdout.write(format_message(message, "Warning: ")) def format_message(message, prefix): return "{0}{1}\n".format(prefix, message) def error(message): sys.stderr.write(format_message(message, "Error: ")) return SystemExit(1) def indent(line_list, indent_step=2): """ return line list where each line of input is prefixed by N spaces list of string line_list are original lines int indent_step is count of spaces for line prefix """ return [ "{0}{1}".format(" "*indent_step, line) if line else line for line in line_list ] def format_optional(value, template, empty_case=""): return empty_case if not value else template.format(value) def format_fencing_level_target(target_type, target_value): if target_type == TARGET_TYPE_ATTRIBUTE: return "{0}={1}".format(target_value[0], target_value[1]) return target_value def service_operation_started(operation, 
info): return "{operation} {service}{instance_suffix}...".format( operation=operation, instance_suffix=format_optional(info["instance"], INSTANCE_SUFFIX), **info ) def service_operation_error(operation, info): return ( "{node_prefix}Unable to {operation} {service}{instance_suffix}:" " {reason}" ).format( operation=operation, instance_suffix=format_optional(info["instance"], INSTANCE_SUFFIX), node_prefix=format_optional(info["node"], NODE_PREFIX), **info ) def service_operation_success(operation, info): return "{node_prefix}{service}{instance_suffix} {operation}".format( operation=operation, instance_suffix=format_optional(info["instance"], INSTANCE_SUFFIX), node_prefix=format_optional(info["node"], NODE_PREFIX), **info ) def service_operation_skipped(operation, info): return ( "{node_prefix}not {operation} {service}{instance_suffix}: {reason}" ).format( operation=operation, instance_suffix=format_optional(info["instance"], INSTANCE_SUFFIX), node_prefix=format_optional(info["node"], NODE_PREFIX), **info ) def typelist_to_string(type_list, article=False): if not type_list: return "" new_list = sorted([ # get a translation or make a type_name a string _type_translation.get(type_name, "{0}".format(type_name)) for type_name in type_list ]) types = "/".join(new_list) if not article: return types return "{article} {types}".format( article=_type_articles.get(new_list[0], "a"), types=types ) def id_belongs_to_unexpected_type(info): return "'{id}' is not {expected_type}".format( id=info["id"], expected_type=typelist_to_string(info["expected_types"], article=True) ) def id_not_found(info): desc = format_optional(typelist_to_string(info["expected_types"]), "{0} ") if not info["context_type"] or not info["context_id"]: return "{desc}'{id}' does not exist".format(desc=desc, id=info["id"]) return ( "there is no {desc}'{id}' in the {context_type} '{context_id}'".format( desc=desc, id=info["id"], context_type=info["context_type"], context_id=info["context_id"], ) ) def resource_running_on_nodes(info): role_label_map = { "Started": "running", } state_info = {} for state, node_list in info["roles_with_nodes"].items(): state_info.setdefault( role_label_map.get(state, state.lower()), [] ).extend(node_list) return "resource '{resource_id}' is {detail_list}".format( resource_id=info["resource_id"], detail_list="; ".join(sorted([ "{run_type} on node{s} {node_list}".format( run_type=run_type, s="s" if len(node_list) > 1 else "", node_list=joined_list(node_list) ) for run_type, node_list in state_info.items() ])) ) def invalid_options(info): template = "invalid {desc}option{plural_options} {option_names_list}," if not info["allowed"] and not info["allowed_patterns"]: template += " there are no options allowed" elif not info["allowed_patterns"]: template += " allowed option{plural_allowed} {allowed_values}" elif not info["allowed"]: template += ( " allowed are options matching patterns: {allowed_patterns_values}" ) else: template += ( " allowed option{plural_allowed} {allowed_values}" " and" " options matching patterns: {allowed_patterns_values}" ) return template.format( desc=format_optional(info["option_type"], "{0} "), allowed_values=", ".join(sorted(info["allowed"])), allowed_patterns_values=", ".join(sorted(info["allowed_patterns"])), option_names_list=joined_list(info["option_names"]), plural_options=("s:" if len(info["option_names"]) > 1 else ""), plural_allowed=("s are:" if len(info["allowed"]) > 1 else " is"), **info ) def build_node_description(node_types): if not node_types: return "Node" label = "{0} 
node".format if is_string(node_types): return label(node_types) if len(node_types) == 1: return label(node_types[0]) return "nor " + " or ".join([label(ntype) for ntype in node_types]) def joined_list(item_list, optional_transformations=None): if not optional_transformations: optional_transformations={} return ", ".join(sorted([ "'{0}'".format(optional_transformations.get(item, item)) for item in item_list ])) #Each value (a callable taking report_item.info) returns a message. #Force text will be appended if necessary. #If it is necessary to put the force text inside the string then the callable #must take the force_text parameter. CODE_TO_MESSAGE_BUILDER_MAP = { codes.COMMON_ERROR: lambda info: info["text"], codes.COMMON_INFO: lambda info: info["text"], codes.EMPTY_RESOURCE_SET_LIST: "Resource set list is empty", codes.REQUIRED_OPTION_IS_MISSING: lambda info: "required {desc}option{s} {option_names_list} {are} missing" .format( desc=format_optional(info["option_type"], "{0} "), option_names_list=joined_list(info["option_names"]), s=("s" if len(info["option_names"]) > 1 else ""), are=( "are" if len(info["option_names"]) > 1 else "is" ) ) , codes.PREREQUISITE_OPTION_IS_MISSING: lambda info: ( "If {opt_desc}option '{option_name}' is specified, " "{pre_desc}option '{prerequisite_name}' must be specified as well" ).format( opt_desc=format_optional(info.get("option_type"), "{0} "), pre_desc=format_optional(info.get("prerequisite_type"), "{0} "), **info ) , codes.REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING: lambda info: "{desc}option {option_names_list} has to be specified" .format( desc=format_optional(info.get("option_type"), "{0} "), option_names_list=" or ".join(sorted([ "'{0}'".format(name) for name in info["option_names"] ])), ) , codes.INVALID_OPTIONS: invalid_options, codes.INVALID_OPTION_VALUE: lambda info: #value on key "allowed_values" is overloaded: # * it can be a list - then it express possible option values # * it can be a string - then it is verbal description of value "'{option_value}' is not a valid {option_name} value, use {hint}" .format( hint=( ", ".join(sorted(info["allowed_values"])) if ( isinstance(info["allowed_values"], Iterable) and not is_string(info["allowed_values"]) ) else info["allowed_values"] ), **info ) , codes.INVALID_OPTION_TYPE: lambda info: #value on key "allowed_types" is overloaded: # * it can be a list - then it express possible option types # * it can be a string - then it is verbal description of the type "specified {option_name} is not valid, use {hint}" .format( hint=( ", ".join(sorted(info["allowed_types"])) if ( isinstance(info["allowed_types"], Iterable) and not is_string(info["allowed_types"]) ) else info["allowed_types"] ), **info ) , codes.INVALID_USERDEFINED_OPTIONS: lambda info: ( "invalid {desc}option{plural_options} {option_names_list}, " "{allowed_description}" ).format( desc=format_optional(info["option_type"], "{0} "), option_names_list=joined_list(info["option_names"]), plural_options=("s:" if len(info["option_names"]) > 1 else ""), **info ) , codes.DEPRECATED_OPTION: lambda info: ( "{desc}option '{option_name}' is deprecated and should not be " "used, use {hint} instead" ).format( desc=format_optional(info["option_type"], "{0} "), hint=( ", ".join(sorted(info["replaced_by"])) if ( isinstance(info["replaced_by"], Iterable) and not is_string(info["replaced_by"]) ) else info["replaced_by"] ), **info ) , codes.MUTUALLY_EXCLUSIVE_OPTIONS: lambda info: # "{desc}options {option_names} are muttually exclusive".format( "Only one of {desc}options 
{option_names} can be used".format( desc=format_optional(info["option_type"], "{0} "), option_names=( joined_list(sorted(info["option_names"])[:-1]) + " and '{0}'".format(sorted(info["option_names"])[-1]) ) ) , codes.EMPTY_ID: lambda info: "{id_description} cannot be empty" .format(**info) , codes.INVALID_CIB_CONTENT: lambda info: "invalid cib: \n{0}" .format(info["report"]) , codes.INVALID_ID: lambda info: ( "invalid {id_description} '{id}', '{invalid_character}' " "is not a valid {desc}character for a {id_description}" ).format( desc="first " if info["is_first_char"] else "", **info ) , codes.INVALID_TIMEOUT_VALUE: lambda info: "'{timeout}' is not a valid number of seconds to wait" .format(**info) , codes.INVALID_SCORE: lambda info: "invalid score '{score}', use integer or INFINITY or -INFINITY" .format(**info) , codes.MULTIPLE_SCORE_OPTIONS: "you cannot specify multiple score options", codes.RUN_EXTERNAL_PROCESS_STARTED: lambda info: "Running: {command}\nEnvironment:{env_part}\n{stdin_part}".format( stdin_part=format_optional( info["stdin"], "--Debug Input Start--\n{0}\n--Debug Input End--\n" ), env_part=( "" if not info["environment"] else "\n" + "\n".join([ " {0}={1}".format(key, val) for key, val in sorted(info["environment"].items()) ]) ), **info ) , codes.RUN_EXTERNAL_PROCESS_FINISHED: lambda info: ( "Finished running: {command}\n" "Return value: {return_value}\n" "--Debug Stdout Start--\n" "{stdout}\n" "--Debug Stdout End--\n" "--Debug Stderr Start--\n" "{stderr}\n" "--Debug Stderr End--\n" ).format(**info) , codes.RUN_EXTERNAL_PROCESS_ERROR: lambda info: "unable to run command {command}: {reason}" .format(**info) , codes.NODE_COMMUNICATION_DEBUG_INFO: lambda info: ( "Communication debug info for calling: {target}\n" "--Debug Communication Info Start--\n" "{data}\n" "--Debug Communication Info End--\n" ).format(**info) , codes.NODE_COMMUNICATION_STARTED: lambda info: "Sending HTTP Request to: {target}\n{data_part}".format( data_part=format_optional( info["data"], "--Debug Input Start--\n{0}\n--Debug Input End--\n" ), **info ) , codes.NODE_COMMUNICATION_FINISHED: lambda info: ( "Finished calling: {target}\n" "Response Code: {response_code}\n" "--Debug Response Start--\n" "{response_data}\n" "--Debug Response End--\n" ).format(**info) , codes.NODE_COMMUNICATION_NOT_CONNECTED: lambda info: "Unable to connect to {node} ({reason})" .format(**info) , codes.NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED: lambda info: ( "Unable to authenticate to {node} ({reason})," " try running 'pcs cluster auth'" ) .format(**info) , codes.NODE_COMMUNICATION_ERROR_PERMISSION_DENIED: lambda info: "{node}: Permission denied ({reason})" .format(**info) , codes.NODE_COMMUNICATION_ERROR_UNSUPPORTED_COMMAND: lambda info: "{node}: Unsupported command ({reason}), try upgrading pcsd" .format(**info) , codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL: lambda info: "{node}: {reason}" .format(**info) , codes.NODE_COMMUNICATION_ERROR: lambda info: "Error connecting to {node} ({reason})" .format(**info) , codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT: lambda info: "Unable to connect to {node} ({reason})" .format(**info) , codes.NODE_COMMUNICATION_ERROR_TIMED_OUT: lambda info: ( "{node}: Connection timeout, try setting higher timeout in " "--request-timeout option ({reason})" ).format(**info) , codes.NODE_COMMUNICATION_PROXY_IS_SET: "Proxy is set in environment variables, try disabling it" , codes.CANNOT_ADD_NODE_IS_IN_CLUSTER: lambda info: "cannot add the node '{node}' because it is in a cluster" .format(**info) , 
codes.CANNOT_ADD_NODE_IS_RUNNING_SERVICE: lambda info: ( "cannot add the node '{node}' because it is running service" " '{service}'{guess}" ).format( guess=( "" if info["service"] not in ["pacemaker", "pacemaker_remote"] else " (is not the node already in a cluster?)" ), **info ) , codes.DEFAULTS_CAN_BE_OVERRIDEN: "Defaults do not apply to resources which override them with their " "own defined values" , codes.COROSYNC_CONFIG_DISTRIBUTION_STARTED: "Sending updated corosync.conf to nodes..." , codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE: lambda info: "{node}: Succeeded" .format(**info) , codes.COROSYNC_CONFIG_DISTRIBUTION_NODE_ERROR: lambda info: "{node}: Unable to set corosync config" .format(**info) , codes.COROSYNC_NOT_RUNNING_CHECK_STARTED: "Checking corosync is not running on nodes..." , codes.COROSYNC_NOT_RUNNING_CHECK_NODE_ERROR: lambda info: "{node}: Unable to check if corosync is not running" .format(**info) , codes.COROSYNC_NOT_RUNNING_ON_NODE: lambda info: "{node}: corosync is not running" .format(**info) , codes.COROSYNC_RUNNING_ON_NODE: lambda info: "{node}: corosync is running" .format(**info) , codes.COROSYNC_QUORUM_GET_STATUS_ERROR: lambda info: "Unable to get quorum status: {reason}" .format(**info) , codes.COROSYNC_QUORUM_SET_EXPECTED_VOTES_ERROR: lambda info: "Unable to set expected votes: {reason}" .format(**info) , codes.COROSYNC_QUORUM_HEURISTICS_ENABLED_WITH_NO_EXEC: "No exec_NAME options are specified, so heuristics are effectively " "disabled" , codes.COROSYNC_CONFIG_RELOADED: "Corosync configuration reloaded", codes.COROSYNC_CONFIG_RELOAD_ERROR: lambda info: "Unable to reload corosync configuration: {reason}" .format(**info) , codes.UNABLE_TO_READ_COROSYNC_CONFIG: lambda info: "Unable to read {path}: {reason}" .format(**info) , codes.PARSE_ERROR_COROSYNC_CONF_MISSING_CLOSING_BRACE: "Unable to parse corosync config: missing closing brace" , codes.PARSE_ERROR_COROSYNC_CONF_UNEXPECTED_CLOSING_BRACE: "Unable to parse corosync config: unexpected closing brace" , codes.PARSE_ERROR_COROSYNC_CONF: "Unable to parse corosync config" , codes.COROSYNC_OPTIONS_INCOMPATIBLE_WITH_QDEVICE: lambda info: "These options cannot be set when the cluster uses a quorum device: {0}" .format(", ".join(sorted(info["options_names"]))) , codes.QDEVICE_ALREADY_DEFINED: "quorum device is already defined" , codes.QDEVICE_NOT_DEFINED: "no quorum device is defined in this cluster" , codes.QDEVICE_REMOVE_OR_CLUSTER_STOP_NEEDED: ( "You need to stop the cluster or remove qdevice from the cluster to" " continue" ), codes.QDEVICE_CLIENT_RELOAD_STARTED: "Reloading qdevice configuration on nodes..." , codes.QDEVICE_ALREADY_INITIALIZED: lambda info: "Quorum device '{model}' has been already initialized" .format(**info) , codes.QDEVICE_NOT_INITIALIZED: lambda info: "Quorum device '{model}' has not been initialized yet" .format(**info) , codes.QDEVICE_INITIALIZATION_SUCCESS: lambda info: "Quorum device '{model}' initialized" .format(**info) , codes.QDEVICE_INITIALIZATION_ERROR: lambda info: "Unable to initialize quorum device '{model}': {reason}" .format(**info) , codes.QDEVICE_CERTIFICATE_DISTRIBUTION_STARTED: "Setting up qdevice certificates on nodes..." , codes.QDEVICE_CERTIFICATE_ACCEPTED_BY_NODE: lambda info: "{node}: Succeeded" .format(**info) , codes.QDEVICE_CERTIFICATE_REMOVAL_STARTED: "Removing qdevice certificates from nodes..." 
, codes.QDEVICE_CERTIFICATE_REMOVED_FROM_NODE: lambda info: "{node}: Succeeded" .format(**info) , codes.QDEVICE_CERTIFICATE_IMPORT_ERROR: lambda info: "Unable to import quorum device certificate: {reason}" .format(**info) , codes.QDEVICE_CERTIFICATE_SIGN_ERROR: lambda info: "Unable to sign quorum device certificate: {reason}" .format(**info) , codes.QDEVICE_DESTROY_SUCCESS: lambda info: "Quorum device '{model}' configuration files removed" .format(**info) , codes.QDEVICE_DESTROY_ERROR: lambda info: "Unable to destroy quorum device '{model}': {reason}" .format(**info) , codes.QDEVICE_NOT_RUNNING: lambda info: "Quorum device '{model}' is not running" .format(**info) , codes.QDEVICE_GET_STATUS_ERROR: lambda info: "Unable to get status of quorum device '{model}': {reason}" .format(**info) , codes.QDEVICE_USED_BY_CLUSTERS: lambda info: "Quorum device is currently being used by cluster(s): {cluster_list}" .format(cluster_list=", ".join(info["clusters"])) , codes.CMAN_UNSUPPORTED_COMMAND: "This command is not supported on CMAN clusters" , codes.ID_ALREADY_EXISTS: lambda info: "'{id}' already exists" .format(**info) , codes.ID_BELONGS_TO_UNEXPECTED_TYPE: id_belongs_to_unexpected_type, codes.ID_NOT_FOUND: id_not_found, codes.STONITH_RESOURCES_DO_NOT_EXIST: lambda info: "Stonith resource(s) '{stonith_id_list}' do not exist" .format( stonith_id_list="', '".join(info["stonith_ids"]), **info ) , codes.CIB_ACL_ROLE_IS_ALREADY_ASSIGNED_TO_TARGET: lambda info: "Role '{role_id}' is already assigned to '{target_id}'" .format(**info) , codes.CIB_ACL_ROLE_IS_NOT_ASSIGNED_TO_TARGET: lambda info: "Role '{role_id}' is not assigned to '{target_id}'" .format(**info) , codes.CIB_ACL_TARGET_ALREADY_EXISTS: lambda info: "'{target_id}' already exists" .format(**info) , codes.CIB_FENCING_LEVEL_ALREADY_EXISTS: lambda info: ( "Fencing level for '{target}' at level '{level}' " "with device(s) '{device_list}' already exists" ).format( device_list=",".join(info["devices"]), target=format_fencing_level_target( info["target_type"], info["target_value"] ), **info ) , codes.CIB_FENCING_LEVEL_DOES_NOT_EXIST: lambda info: "Fencing level {part_target}{part_level}{part_devices}does not exist" .format( part_target=( "for '{0}' ".format(format_fencing_level_target( info["target_type"], info["target_value"] )) if info["target_type"] and info["target_value"] else "" ), part_level=format_optional(info["level"], "at level '{0}' "), part_devices=format_optional( ",".join(info["devices"]) if info["devices"] else "", "with device(s) '{0}' " ) ) , codes.CIB_LOAD_ERROR: "unable to get cib", codes.CIB_LOAD_ERROR_SCOPE_MISSING: lambda info: "unable to get cib, scope '{scope}' not present in cib" .format(**info) , codes.CIB_LOAD_ERROR_BAD_FORMAT: lambda info: "unable to get cib, {reason}" .format(**info) , codes.CIB_CANNOT_FIND_MANDATORY_SECTION: lambda info: "Unable to get {section} section of cib" .format(**info) , codes.CIB_PUSH_ERROR: lambda info: "Unable to update cib\n{reason}\n{pushed_cib}" .format(**info) , codes.CIB_DIFF_ERROR: lambda info: "Unable to diff CIB: {reason}\n{cib_new}" .format(**info) , codes.CIB_PUSH_FORCED_FULL_DUE_TO_CRM_FEATURE_SET: lambda info: ( "Replacing the whole CIB instead of applying a diff, a race " "condition may happen if the CIB is pushed more than once " "simultaneously. To fix this, upgrade pacemaker to get " "crm_feature_set at least {required_set}, current is {current_set}."
).format(**info) , codes.CIB_SAVE_TMP_ERROR: lambda info: "Unable to save CIB to a temporary file: {reason}" .format(**info) , codes.CRM_MON_ERROR: "error running crm_mon, is pacemaker running?" , codes.BAD_CLUSTER_STATE_FORMAT: "cannot load cluster status, xml does not conform to the schema" , codes.WAIT_FOR_IDLE_NOT_SUPPORTED: "crm_resource does not support --wait, please upgrade pacemaker" , codes.WAIT_FOR_IDLE_NOT_LIVE_CLUSTER: "Cannot use '-f' together with '--wait'" , codes.WAIT_FOR_IDLE_TIMED_OUT: lambda info: "waiting timeout\n\n{reason}" .format(**info) , codes.WAIT_FOR_IDLE_ERROR: lambda info: "{reason}" .format(**info) , codes.RESOURCE_BUNDLE_ALREADY_CONTAINS_A_RESOURCE: lambda info: ( "bundle '{bundle_id}' already contains resource '{resource_id}'" ", a bundle may contain at most one resource" ).format(**info) , codes.RESOURCE_CLEANUP_ERROR: lambda info: ( ( "Unable to forget failed operations of resource: {resource}" "\n{reason}" ) if info["resource"] else "Unable to forget failed operations of resources\n{reason}" ).format(**info) , codes.RESOURCE_REFRESH_ERROR: lambda info: ( "Unable to delete history of resource: {resource}\n{reason}" if info["resource"] else "Unable to delete history of resources\n{reason}" ).format(**info) , codes.RESOURCE_REFRESH_TOO_TIME_CONSUMING: lambda info: ( "Deleting history of all resources on all nodes will execute more " "than {threshold} operations in the cluster, which may " "negatively impact the responsiveness of the cluster. " "Consider specifying resource and/or node" ).format(**info) , codes.RESOURCE_OPERATION_INTERVAL_DUPLICATION: lambda info: ( "multiple specification of the same operation with the same interval:\n" +"\n".join([ "{0} with intervals {1}".format(name, ", ".join(intervals)) for name, intervals_list in info["duplications"].items() for intervals in intervals_list ]) ), codes.RESOURCE_OPERATION_INTERVAL_ADAPTED: lambda info: ( "changing a {operation_name} operation interval" " from {original_interval}" " to {adapted_interval} to make the operation unique" ).format(**info) , codes.RESOURCE_RUNNING_ON_NODES: resource_running_on_nodes, codes.RESOURCE_DOES_NOT_RUN: lambda info: "resource '{resource_id}' is not running on any node" .format(**info) , codes.RESOURCE_IS_UNMANAGED: lambda info: "'{resource_id}' is unmanaged" .format(**info) , codes.RESOURCE_IS_GUEST_NODE_ALREADY: lambda info: "the resource '{resource_id}' is already a guest node" .format(**info) , codes.RESOURCE_MANAGED_NO_MONITOR_ENABLED: lambda info: ( "Resource '{resource_id}' has no enabled monitor operations." " Re-run with '--monitor' to enable them." 
) .format(**info) , codes.NODE_NOT_FOUND: lambda info: "{desc} '{node}' does not appear to exist in configuration".format( desc=build_node_description(info["searched_types"]), node=info["node"] ) , codes.NODE_REMOVE_IN_PACEMAKER_FAILED: lambda info: "unable to remove node '{node_name}' from pacemaker{reason_part}" .format( reason_part=format_optional(info["reason"], ": {0}"), **info ) , codes.NODE_TO_CLEAR_IS_STILL_IN_CLUSTER: lambda info: ( "node '{node}' seems to be still in the cluster" "; this command should be used only with nodes that have been" " removed from the cluster" ) .format(**info) , codes.MULTIPLE_RESULTS_FOUND: lambda info: "multiple {result_type} {search_description} found: {what_found}" .format( what_found=joined_list(info["result_identifier_list"]), search_description="" if not info["search_description"] else "for '{0}'".format(info["search_description"]) , result_type=info["result_type"] ) , codes.PACEMAKER_LOCAL_NODE_NAME_NOT_FOUND: lambda info: "unable to get local node name from pacemaker: {reason}" .format(**info) , codes.RRP_ACTIVE_NOT_SUPPORTED: "using an RRP mode of 'active' is not supported or tested" , codes.IGNORED_CMAN_UNSUPPORTED_OPTION: lambda info: "{option_name} ignored as it is not supported on CMAN clusters" .format(**info) , codes.NON_UDP_TRANSPORT_ADDR_MISMATCH: "--addr0 and --addr1 can only be used with --transport=udp" , codes.CMAN_UDPU_RESTART_REQUIRED: ( "Using udpu transport on a CMAN cluster," " cluster restart is required after node add or remove" ), codes.CMAN_BROADCAST_ALL_RINGS: ( "Enabling broadcast for all rings as CMAN does not support" " broadcast in only one ring" ), codes.SERVICE_START_STARTED: partial(service_operation_started, "Starting"), codes.SERVICE_START_ERROR: partial(service_operation_error, "start"), codes.SERVICE_START_SUCCESS: partial(service_operation_success, "started"), codes.SERVICE_START_SKIPPED: partial(service_operation_skipped, "starting"), codes.SERVICE_STOP_STARTED: partial(service_operation_started, "Stopping"), codes.SERVICE_STOP_ERROR: partial(service_operation_error, "stop"), codes.SERVICE_STOP_SUCCESS: partial(service_operation_success, "stopped"), codes.SERVICE_ENABLE_STARTED: partial( service_operation_started, "Enabling" ), codes.SERVICE_ENABLE_ERROR: partial(service_operation_error, "enable"), codes.SERVICE_ENABLE_SUCCESS: partial(service_operation_success, "enabled"), codes.SERVICE_ENABLE_SKIPPED: partial( service_operation_skipped, "enabling" ), codes.SERVICE_DISABLE_STARTED: partial(service_operation_started, "Disabling") , codes.SERVICE_DISABLE_ERROR: partial(service_operation_error, "disable"), codes.SERVICE_DISABLE_SUCCESS: partial(service_operation_success, "disabled"), codes.SERVICE_KILL_ERROR: lambda info: "Unable to kill {service_list}: {reason}" .format( service_list=", ".join(info["services"]), **info ) , codes.SERVICE_KILL_SUCCESS: lambda info: "{service_list} killed" .format( service_list=", ".join(info["services"]), **info ) , codes.UNABLE_TO_GET_AGENT_METADATA: lambda info: ( "Agent '{agent}' is not installed or does not provide valid" " metadata: {reason}" ).format(**info) , codes.INVALID_RESOURCE_AGENT_NAME: lambda info: ( "Invalid resource agent name '{name}'." " Use standard:provider:type when standard is 'ocf' or" " standard:type otherwise." " List of standards and providers can be obtained by using commands" " 'pcs resource standards' and 'pcs resource providers'" ) .format(**info) , codes.INVALID_STONITH_AGENT_NAME: lambda info: ( "Invalid stonith agent name '{name}'."
" List of agents can be obtained by using command" " 'pcs stonith list'. Do not use the 'stonith:' prefix. Agent name" " cannot contain the ':' character." ) .format(**info) , codes.AGENT_NAME_GUESS_FOUND_MORE_THAN_ONE: lambda info: ( "Multiple agents match '{agent}'" ", please specify full name: {possible_agents_str}" ).format(**info) , codes.AGENT_NAME_GUESS_FOUND_NONE: lambda info: "Unable to find agent '{agent}', try specifying its full name" .format(**info) , codes.AGENT_NAME_GUESSED: lambda info: "Assumed agent name '{guessed_name}' (deduced from '{entered_name}')" .format(**info) , codes.OMITTING_NODE: lambda info: "Omitting node '{node}'" .format(**info) , codes.SBD_CHECK_STARTED: "Running SBD pre-enabling checks...", codes.SBD_CHECK_SUCCESS: lambda info: "{node}: SBD pre-enabling checks done" .format(**info) , codes.SBD_CONFIG_DISTRIBUTION_STARTED: "Distributing SBD config...", codes.SBD_CONFIG_ACCEPTED_BY_NODE: lambda info: "{node}: SBD config saved" .format(**info) , codes.UNABLE_TO_GET_SBD_CONFIG: lambda info: "Unable to get SBD configuration from node '{node}'{reason_suffix}" .format( reason_suffix=format_optional(info["reason"], ": {0}"), **info ) , codes.SBD_ENABLING_STARTED: lambda info: "Enabling SBD service..." .format(**info) , codes.SBD_DISABLING_STARTED: "Disabling SBD service...", codes.SBD_DEVICE_INITIALIZATION_STARTED: lambda info: "Initializing device(s) {devices}..." .format(devices=", ".join(info["device_list"])) , codes.SBD_DEVICE_INITIALIZATION_SUCCESS: "Device(s) initialized successfuly", codes.SBD_DEVICE_INITIALIZATION_ERROR: lambda info: "Initialization of device(s) failed: {reason}" .format(**info) , codes.SBD_DEVICE_LIST_ERROR: lambda info: "Unable to get list of messages from device '{device}': {reason}" .format(**info) , codes.SBD_DEVICE_MESSAGE_ERROR: lambda info: "Unable to set message '{message}' for node '{node}' on device " "'{device}'" .format(**info) , codes.SBD_DEVICE_DUMP_ERROR: lambda info: "Unable to get SBD headers from device '{device}': {reason}" .format(**info) , codes.FILES_DISTRIBUTION_STARTED: lambda info: "Sending {description}{where}".format( where=( "" if not info["node_list"] else " to " + joined_list(info["node_list"]) ), description=info["description"] if info["description"] else joined_list(info["file_list"]) ) , codes.FILE_DISTRIBUTION_SUCCESS: lambda info: "{node}: successful distribution of the file '{file_description}'" .format( **info ) , codes.FILE_DISTRIBUTION_ERROR: lambda info: "{node}: unable to distribute file '{file_description}': {reason}" .format( **info ) , codes.FILES_REMOVE_FROM_NODE_STARTED: lambda info: "Requesting remove {description}{where}".format( where=( "" if not info["node_list"] else " from " + joined_list(info["node_list"]) ), description=info["description"] if info["description"] else joined_list(info["file_list"]) ) , codes.FILE_REMOVE_FROM_NODE_SUCCESS: lambda info: "{node}: successful removal of the file '{file_description}'" .format( **info ) , codes.FILE_REMOVE_FROM_NODE_ERROR: lambda info: "{node}: unable to remove file '{file_description}': {reason}" .format( **info ) , codes.SERVICE_COMMANDS_ON_NODES_STARTED: lambda info: "Requesting {description}{where}".format( where=( "" if not info["node_list"] else " on " + joined_list(info["node_list"]) ), description=info["description"] if info["description"] else joined_list(info["action_list"]) ) , codes.SERVICE_COMMAND_ON_NODE_SUCCESS: lambda info: "{node}: successful run of '{service_command_description}'" .format( **info ) , 
codes.SERVICE_COMMAND_ON_NODE_ERROR: lambda info: ( "{node}: service command failed:" " {service_command_description}: {reason}" ) .format( **info ) , codes.SBD_DEVICE_PATH_NOT_ABSOLUTE: lambda info: "Device path '{device}'{on_node} is not absolute" .format( on_node=format_optional( info["node"], " on node '{0}'".format(info["node"]) ), **info ) , codes.SBD_DEVICE_DOES_NOT_EXIST: lambda info: "{node}: device '{device}' not found" .format(**info) , codes.SBD_DEVICE_IS_NOT_BLOCK_DEVICE: lambda info: "{node}: device '{device}' is not a block device" .format(**info) , codes.INVALID_RESPONSE_FORMAT: lambda info: "{node}: Invalid format of response" .format(**info) , codes.SBD_NO_DEVICE_FOR_NODE: lambda info: "No device defined for node '{node}'" .format(**info) , codes.SBD_TOO_MANY_DEVICES_FOR_NODE: lambda info: ( "More than {max_devices} devices defined for node '{node}' " "(devices: {devices})" ) .format(devices=", ".join(info["device_list"]), **info) , codes.SBD_NOT_INSTALLED: lambda info: "SBD is not installed on node '{node}'" .format(**info) , codes.WATCHDOG_NOT_FOUND: lambda info: "Watchdog '{watchdog}' does not exist on node '{node}'" .format(**info) , codes.WATCHDOG_INVALID: lambda info: "Watchdog path '{watchdog}' is invalid." .format(**info) , codes.UNABLE_TO_GET_SBD_STATUS: lambda info: "Unable to get status of SBD from node '{node}'{reason_suffix}" .format( reason_suffix=format_optional(info["reason"], ": {0}"), **info ) , codes.CLUSTER_RESTART_REQUIRED_TO_APPLY_CHANGES: "Cluster restart is required in order to apply these changes." , codes.CIB_ALERT_RECIPIENT_ALREADY_EXISTS: lambda info: "Recipient '{recipient}' in alert '{alert}' already exists" .format(**info) , codes.CIB_ALERT_RECIPIENT_VALUE_INVALID: lambda info: "Recipient value '{recipient}' is not valid." .format(**info) , codes.CIB_UPGRADE_SUCCESSFUL: "CIB has been upgraded to the latest schema version." , codes.CIB_UPGRADE_FAILED: lambda info: "Upgrading of CIB to the latest schema failed: {reason}" .format(**info) , codes.CIB_UPGRADE_FAILED_TO_MINIMAL_REQUIRED_VERSION: lambda info: ( "Unable to upgrade CIB to required schema version" " {required_version} or higher. Current version is" " {current_version}. Newer version of pacemaker is needed." 
) .format(**info) , codes.FILE_ALREADY_EXISTS: lambda info: "{node_prefix}{role_prefix}file {file_path} already exists" .format( node_prefix=format_optional(info["node"], NODE_PREFIX), role_prefix=format_optional(info["file_role"], "{0} "), **info ) , codes.FILE_DOES_NOT_EXIST: lambda info: "{file_role} file {file_path} does not exist" .format(**info) , codes.FILE_IO_ERROR: lambda info: "unable to {operation} {file_role}{path_desc}: {reason}" .format( path_desc=format_optional(info["file_path"], " '{0}'"), **info ) , codes.UNABLE_TO_DETERMINE_USER_UID: lambda info: "Unable to determine uid of user '{user}'" .format(**info) , codes.UNABLE_TO_DETERMINE_GROUP_GID: lambda info: "Unable to determine gid of group '{group}'" .format(**info) , codes.UNSUPPORTED_OPERATION_ON_NON_SYSTEMD_SYSTEMS: "unsupported operation on non systemd systems" , codes.LIVE_ENVIRONMENT_REQUIRED: lambda info: "This command does not support {forbidden_options}" .format( forbidden_options=joined_list(info["forbidden_options"], { "BOOTH_CONF": "--booth-conf", "BOOTH_KEY": "--booth-key", "CIB": "-f", "COROSYNC_CONF": "--corosync_conf", }) ) , codes.LIVE_ENVIRONMENT_REQUIRED_FOR_LOCAL_NODE: "Node(s) must be specified if -f is used" , codes.NOLIVE_SKIP_FILES_DISTRIBUTION: lambda info: ( "the distribution of {files} to {nodes} was skipped because command" " does not run on live cluster (e.g. -f was used)." " You will have to do it manually." ).format( files=joined_list(info["files_description"]), nodes=joined_list(info["nodes"]), ) , codes.NOLIVE_SKIP_FILES_REMOVE: lambda info: ( "{files} remove from {nodes} was skipped because command" " does not run on live cluster (e.g. -f was used)." " You will have to do it manually." ).format( files=joined_list(info["files_description"]), nodes=joined_list(info["nodes"]), ) , codes.NOLIVE_SKIP_SERVICE_COMMAND_ON_NODES: lambda info: ( "running '{command}' on {nodes} was skipped" " because command does not run on live cluster (e.g. -f was" " used). You will have to run it manually." ).format( command="{0} {1}".format(info["service"], info["command"]), nodes=joined_list(info["nodes"]), ) , codes.COROSYNC_QUORUM_CANNOT_DISABLE_ATB_DUE_TO_SBD: lambda info: "unable to disable auto_tie_breaker: SBD fencing will have no effect" .format(**info) , codes.SBD_REQUIRES_ATB: lambda info: "auto_tie_breaker quorum option will be enabled to make SBD fencing " "effective. Cluster has to be offline to be able to make this change." , codes.CLUSTER_CONF_LOAD_ERROR_INVALID_FORMAT: lambda info: "unable to get cluster.conf: {reason}" .format(**info) , codes.CLUSTER_CONF_READ_ERROR: lambda info: "Unable to read {path}: {reason}" .format(**info) , codes.USE_COMMAND_NODE_ADD_REMOTE: lambda info: ( "this command is not sufficient for creating a remote connection," " use 'pcs cluster node add-remote'" ) , codes.USE_COMMAND_NODE_ADD_GUEST: lambda info: ( "this command is not sufficient for creating a guest node, use" " 'pcs cluster node add-guest'" ) , codes.USE_COMMAND_NODE_REMOVE_GUEST: lambda info: ( "this command is not sufficient for removing a guest node, use" " 'pcs cluster node remove-guest'" ) , codes.TMP_FILE_WRITE: lambda info: ( "Writing to a temporary file {file_path}:\n" "--Debug Content Start--\n{content}\n--Debug Content End--\n" ).format(**info) , codes.UNABLE_TO_PERFORM_OPERATION_ON_ANY_NODE: "Unable to perform operation on any available node/host, therefore it " "is not possible to continue." 
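    # A minimal rendering sketch (hypothetical caller code, not part of this
    # module): entries in this map are either plain strings or callables
    # taking report_item.info, so a renderer can do:
    #
    #   entry = CODE_TO_MESSAGE_BUILDER_MAP[report_item.code]
    #   message = entry(report_item.info) if callable(entry) else entry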
} pcs-0.9.164/pcs/cli/common/env_cli.py000066400000000000000000000007601326265502500173060ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) class Env(object): #pylint: disable=too-many-instance-attributes def __init__(self): self.cib_data = None self.user = None self.groups = None self.corosync_conf_data = None self.booth = None self.pacemaker = None self.token_file_data_getter = None self.debug = False self.cluster_conf_data = None self.request_timeout = None pcs-0.9.164/pcs/cli/common/env_file.py000066400000000000000000000041771326265502500174620ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import os.path from collections import namedtuple from pcs.cli.common import console_report from pcs.common import report_codes def report_missing(file_role, file_path): console_report.error( "{0} '{1}' does not exist".format(file_role, file_path) ) def is_missing_report(report, file_role_code): return ( report.code == report_codes.FILE_DOES_NOT_EXIST and report.info["file_role"] == file_role_code ) def process_no_existing_file_expectation(file_role, env_file, file_path): if( env_file["no_existing_file_expected"] and os.path.exists(file_path) ): msg = "{0} {1} already exists".format(file_role, file_path) if not env_file["can_overwrite_existing_file"]: raise console_report.error( "{0}, use --force to override".format(msg) ) console_report.warn(msg) def write(env_file, file_path): try: f = open(file_path, "wb" if env_file.get("is_binary", False) else "w") f.write(env_file["content"]) f.close() except EnvironmentError as e: raise console_report.error( "Unable to write {0}: {1}".format(file_path, e.strerror) ) def read(path, is_binary=False): try: mode = "rb" if is_binary else "r" return { "content": open(path, mode).read() if os.path.isfile(path) else None } except EnvironmentError as e: raise console_report.error( "Unable to read {0}: {1}".format(path, e.strerror) ) MissingFileCandidateInfo = namedtuple( "MissingFileCandidateInfo", "code desc path" ) def evaluate_for_missing_files(exception, file_info_list): """ list of MissingFileCandidateInfo file_info_list contains the info for files that can be missing """ for report in exception.args: for file_info in file_info_list: if is_missing_report(report, file_info.code): report_missing(file_info.desc, file_info.path) exception.sign_processed(report) pcs-0.9.164/pcs/cli/common/errors.py000066400000000000000000000013311326265502500171760ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) ERR_NODE_LIST_AND_ALL_MUTUALLY_EXCLUSIVE = ( "Cannot specify both --all and a list of nodes." ) class CmdLineInputError(Exception): """ Exception expressing that the user entered an incorrect command on the command line.
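    Example: a parser helper may raise
    CmdLineInputError("missing value of 'op' option") to have that exact
    message printed, or raise CmdLineInputError() with no message to have
    the appropriate usage text printed instead.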
""" def __init__(self, message=None): """ string message explain what was wrong with entered command The routine which handles this exception behaves according to whether the message was specified (prints this message to user) or not (prints appropriate part of documentation) """ super(CmdLineInputError, self).__init__(message) self.message = message pcs-0.9.164/pcs/cli/common/lib_wrapper.py000066400000000000000000000335361326265502500202040ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import logging import sys from collections import namedtuple from pcs.cli.common import middleware from pcs.cli.common.reports import ( LibraryReportProcessorToConsole, process_library_reports ) from pcs.lib.commands import ( acl, alert, booth, cluster, fencing_topology, node, qdevice, quorum, remote_node, resource_agent, resource, cib_options, stonith, sbd, stonith_agent, ) from pcs.lib.commands.constraint import ( colocation as constraint_colocation, order as constraint_order, ticket as constraint_ticket ) from pcs.lib.env import LibraryEnvironment from pcs.lib.errors import LibraryEnvError _CACHE = {} def wrapper(dictionary): return namedtuple('wrapper', dictionary.keys())(**dictionary) def cli_env_to_lib_env(cli_env): return LibraryEnvironment( logging.getLogger("pcs"), LibraryReportProcessorToConsole(cli_env.debug), cli_env.user, cli_env.groups, cli_env.cib_data, cli_env.corosync_conf_data, booth=cli_env.booth, token_file_data_getter=cli_env.token_file_data_getter, cluster_conf_data=cli_env.cluster_conf_data, request_timeout=cli_env.request_timeout, ) def lib_env_to_cli_env(lib_env, cli_env): if not lib_env.is_cib_live: cli_env.cib_data = lib_env.final_mocked_cib_content if not lib_env.is_corosync_conf_live: cli_env.corosync_conf_data = lib_env.get_corosync_conf_data() if not lib_env.is_cluster_conf_live: cli_env.cluster_conf_data = lib_env.get_cluster_conf_data() #TODO #now we know: if is in cli_env booth is in lib_env as well #when we communicate with the library over the network we will need extra #sanitization here #this applies generally, not only for booth #corosync_conf and cib suffers with this problem as well but in this cases #it is dangerously hidden: when inconsistency between cli and lib #environment inconsitency occurs, original content is put to file (which is #wrong) if cli_env.booth: cli_env.booth["modified_env"] = lib_env.booth.export() return cli_env def bind(cli_env, run_with_middleware, run_library_command): def run(cli_env, *args, **kwargs): lib_env = cli_env_to_lib_env(cli_env) lib_call_result = run_library_command(lib_env, *args, **kwargs) #midlewares needs finish its work and they see only cli_env #so we need reflect some changes to cli_env lib_env_to_cli_env(lib_env, cli_env) return lib_call_result def decorated_run(*args, **kwargs): try: return run_with_middleware(run, cli_env, *args, **kwargs) except LibraryEnvError as e: process_library_reports(e.unprocessed) #TODO we use explicit exit here - process_library_reports stil has #possibility to not exit - it will need deeper rethinking sys.exit(1) return decorated_run def bind_all(env, run_with_middleware, dictionary): return wrapper(dict( (exposed_fn, bind(env, run_with_middleware, library_fn)) for exposed_fn, library_fn in dictionary.items() )) def get_module(env, middleware_factory, name): if name not in _CACHE: _CACHE[name] = load_module(env, middleware_factory, name) return _CACHE[name] def load_module(env, middleware_factory, name): if name == "acl": return bind_all( 
env, middleware.build(middleware_factory.cib), { "create_role": acl.create_role, "remove_role": acl.remove_role, "assign_role_not_specific": acl.assign_role_not_specific, "assign_role_to_target": acl.assign_role_to_target, "assign_role_to_group": acl.assign_role_to_group, "unassign_role_not_specific": acl.unassign_role_not_specific, "unassign_role_from_target": acl.unassign_role_from_target, "unassign_role_from_group": acl.unassign_role_from_group, "create_target": acl.create_target, "create_group": acl.create_group, "remove_target": acl.remove_target, "remove_group": acl.remove_group, "add_permission": acl.add_permission, "remove_permission": acl.remove_permission, "get_config": acl.get_config, } ) if name == "alert": return bind_all( env, middleware.build(middleware_factory.cib), { "create_alert": alert.create_alert, "update_alert": alert.update_alert, "remove_alert": alert.remove_alert, "add_recipient": alert.add_recipient, "update_recipient": alert.update_recipient, "remove_recipient": alert.remove_recipient, "get_all_alerts": alert.get_all_alerts, } ) if name == "booth": return bind_all( env, middleware.build( middleware_factory.booth_conf, middleware_factory.cib ), { "config_setup": booth.config_setup, "config_destroy": booth.config_destroy, "config_text": booth.config_text, "config_ticket_add": booth.config_ticket_add, "config_ticket_remove": booth.config_ticket_remove, "create_in_cluster": booth.create_in_cluster, "remove_from_cluster": booth.remove_from_cluster, "restart": booth.restart, "config_sync": booth.config_sync, "enable": booth.enable_booth, "disable": booth.disable_booth, "start": booth.start_booth, "stop": booth.stop_booth, "pull": booth.pull_config, "status": booth.get_status, "ticket_grant": booth.ticket_grant, "ticket_revoke": booth.ticket_revoke, } ) if name == "cluster": return bind_all( env, middleware.build( middleware_factory.cib, middleware_factory.corosync_conf_existing, ), { "node_clear": cluster.node_clear, "verify": cluster.verify, } ) if name == "remote_node": return bind_all( env, middleware.build( middleware_factory.cib, middleware_factory.corosync_conf_existing, ), { "node_add_remote": remote_node.node_add_remote, "node_add_guest": remote_node.node_add_guest, "node_remove_remote": remote_node.node_remove_remote, "node_remove_guest": remote_node.node_remove_guest, } ) if name == 'constraint_colocation': return bind_all( env, middleware.build(middleware_factory.cib), { 'set': constraint_colocation.create_with_set, 'show': constraint_colocation.show, } ) if name == 'constraint_order': return bind_all( env, middleware.build(middleware_factory.cib), { 'set': constraint_order.create_with_set, 'show': constraint_order.show, } ) if name == 'constraint_ticket': return bind_all( env, middleware.build(middleware_factory.cib), { 'set': constraint_ticket.create_with_set, 'show': constraint_ticket.show, 'add': constraint_ticket.create, 'remove': constraint_ticket.remove, } ) if name == "fencing_topology": return bind_all( env, middleware.build(middleware_factory.cib), { "add_level": fencing_topology.add_level, "get_config": fencing_topology.get_config, "remove_all_levels": fencing_topology.remove_all_levels, "remove_levels_by_params": fencing_topology.remove_levels_by_params, "verify": fencing_topology.verify, } ) if name == "node": return bind_all( env, middleware.build(middleware_factory.cib), { "maintenance_unmaintenance_all": node.maintenance_unmaintenance_all, "maintenance_unmaintenance_list": node.maintenance_unmaintenance_list, 
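            # Note: modules are created once and cached (see get_module and
            # _CACHE above); each bound function builds a fresh
            # LibraryEnvironment per call and mirrors any changes back via
            # lib_env_to_cli_env().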
"maintenance_unmaintenance_local": node.maintenance_unmaintenance_local, "standby_unstandby_all": node.standby_unstandby_all, "standby_unstandby_list": node.standby_unstandby_list, "standby_unstandby_local": node.standby_unstandby_local, } ) if name == "qdevice": return bind_all( env, middleware.build(), { "status": qdevice.qdevice_status_text, "setup": qdevice.qdevice_setup, "destroy": qdevice.qdevice_destroy, "start": qdevice.qdevice_start, "stop": qdevice.qdevice_stop, "kill": qdevice.qdevice_kill, "enable": qdevice.qdevice_enable, "disable": qdevice.qdevice_disable, # following commands are internal use only, called from pcsd "client_net_setup": qdevice.client_net_setup, "client_net_import_certificate": qdevice.client_net_import_certificate, "client_net_destroy": qdevice.client_net_destroy, "sign_net_cert_request": qdevice.qdevice_net_sign_certificate_request, } ) if name == "quorum": return bind_all( env, middleware.build(middleware_factory.corosync_conf_existing), { "add_device": quorum.add_device, "get_config": quorum.get_config, "remove_device": quorum.remove_device, "remove_device_heuristics": quorum.remove_device_heuristics, "set_expected_votes_live": quorum.set_expected_votes_live, "set_options": quorum.set_options, "status": quorum.status_text, "status_device": quorum.status_device_text, "update_device": quorum.update_device, } ) if name == "resource_agent": return bind_all( env, middleware.build(), { "describe_agent": resource_agent.describe_agent, "list_agents": resource_agent.list_agents, "list_agents_for_standard_and_provider": resource_agent.list_agents_for_standard_and_provider, "list_ocf_providers": resource_agent.list_ocf_providers, "list_standards": resource_agent.list_standards, } ) if name == "resource": return bind_all( env, middleware.build( middleware_factory.cib, middleware_factory.corosync_conf_existing, ), { "bundle_create": resource.bundle_create, "bundle_update": resource.bundle_update, "create": resource.create, "create_as_master": resource.create_as_master, "create_as_clone": resource.create_as_clone, "create_in_group": resource.create_in_group, "create_into_bundle": resource.create_into_bundle, "disable": resource.disable, "enable": resource.enable, "manage": resource.manage, "unmanage": resource.unmanage, } ) if name == "cib_options": return bind_all( env, middleware.build( middleware_factory.cib, ), { "set_operations_defaults": cib_options.set_operations_defaults, "set_resources_defaults": cib_options.set_resources_defaults, } ) if name == "stonith": return bind_all( env, middleware.build( middleware_factory.cib, middleware_factory.corosync_conf_existing, ), { "create": stonith.create, "create_in_group": stonith.create_in_group, } ) if name == "sbd": return bind_all( env, middleware.build(), { "enable_sbd": sbd.enable_sbd, "disable_sbd": sbd.disable_sbd, "get_cluster_sbd_status": sbd.get_cluster_sbd_status, "get_cluster_sbd_config": sbd.get_cluster_sbd_config, "get_local_sbd_config": sbd.get_local_sbd_config, "initialize_block_devices": sbd.initialize_block_devices, "get_local_devices_info": sbd.get_local_devices_info, "set_message": sbd.set_message, } ) if name == "stonith_agent": return bind_all( env, middleware.build(), { "describe_agent": stonith_agent.describe_agent, "list_agents": stonith_agent.list_agents, } ) raise Exception("No library part '{0}'".format(name)) class Library(object): def __init__(self, env, middleware_factory): self.env = env self.middleware_factory = middleware_factory def __getattr__(self, name): return get_module(self.env, 
self.middleware_factory, name) pcs-0.9.164/pcs/cli/common/middleware.py000066400000000000000000000062001326265502500177770ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from collections import namedtuple from functools import partial from pcs.cli.common.console_report import error def build(*middleware_list): def run(command, env, *args, **kwargs): next_in_line = command for next_command in reversed(middleware_list): next_in_line = partial(next_command, next_in_line) return next_in_line(env, *args, **kwargs) return run def cib(filename, touch_cib_file): """ return configured middleware that cares about local cib bool use_local_cib is flag if local cib was required callable load_cib_content returns local cib content, take no params callable write_cib put content of cib to required place """ def apply(next_in_line, env, *args, **kwargs): if filename: touch_cib_file(filename) try: with open(filename, mode="r") as cib_file: original_content = cib_file.read() except EnvironmentError as e: raise error( "Cannot read cib file '{0}': '{1}'" .format(filename, str(e)) ) env.cib_data = original_content result_of_next = next_in_line(env, *args, **kwargs) if filename and env.cib_data != original_content: try: with open(filename, mode="w") as cib_file: cib_file.write(env.cib_data) except EnvironmentError as e: raise error( "Cannot write cib file '{0}': '{1}'" .format(filename, str(e)) ) return result_of_next return apply def corosync_conf_existing(local_file_path): def apply(next_in_line, env, *args, **kwargs): if local_file_path: try: env.corosync_conf_data = open(local_file_path).read() except EnvironmentError as e: raise error("Unable to read {0}: {1}".format( local_file_path, e.strerror )) result_of_next = next_in_line(env, *args, **kwargs) if local_file_path: try: f = open(local_file_path, "w") f.write(env.corosync_conf_data) f.close() except EnvironmentError as e: raise error("Unable to write {0}: {1}".format( local_file_path, e.strerror )) return result_of_next return apply def cluster_conf_read_only(local_file_path): def apply(next_in_line, env, *args, **kwargs): if local_file_path: try: env.cluster_conf_data = open(local_file_path).read() except EnvironmentError as e: raise error("Unable to read {0}: {1}".format( local_file_path, e.strerror )) return next_in_line(env, *args, **kwargs) return apply def create_middleware_factory(**kwargs): return namedtuple('MiddlewareFactory', kwargs.keys())(**kwargs) pcs-0.9.164/pcs/cli/common/parse_args.py000066400000000000000000000250731326265502500200210ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.cli.common.errors import CmdLineInputError ARG_TYPE_DELIMITER = "%" # h = help, f = file, # p = password (cluster auth), u = user (cluster auth), # V = verbose (cluster verify) PCS_SHORT_OPTIONS = "hf:p:u:V" PCS_LONG_OPTIONS = [ "debug", "version", "help", "fullhelp", "force", "skip-offline", "autocorrect", "interactive", "autodelete", "all", "full", "groups", "local", "wait", "config", "async", "start", "enable", "disabled", "off", "request-timeout=", "pacemaker", "corosync", "no-default-ops", "defaults", "nodesc", "clone", "master", "name=", "group=", "node=", "from=", "to=", "after=", "before=", "transport=", "rrpmode=", "ipv6", "addr0=", "bcast0=", "mcast0=", "mcastport0=", "ttl0=", "broadcast0", "addr1=", "bcast1=", "mcast1=", "mcastport1=", "ttl1=", "broadcast1", "wait_for_all=", "auto_tie_breaker=", "last_man_standing=", 
"last_man_standing_window=", "token=", "token_coefficient=", "consensus=", "join=", "miss_count_const=", "fail_recv_const=", "corosync_conf=", "cluster_conf=", "booth-conf=", "booth-key=", "remote", "watchdog=", "device=", "encryption=", #in pcs status - do not display resorce status on inactive node "hide-inactive", # pcs resource (un)manage - enable or disable monitor operations "monitor", ] def split_list(arg_list, separator): """return list of list of arg_list using separator as delimiter""" separator_indexes = [i for i, x in enumerate(arg_list) if x == separator] bounds = zip([0]+[i+1 for i in separator_indexes], separator_indexes+[None]) return [arg_list[i:j] for i, j in bounds] def split_option(arg): """ Get (key, value) from a key=value commandline argument. Split the argument by the first = and return resulting parts. Raise CmdLineInputError if the argument cannot be splitted. string arg -- commandline argument """ if "=" not in arg: raise CmdLineInputError("missing value of '{0}' option".format(arg)) if arg.startswith("="): raise CmdLineInputError("missing key in '{0}' option".format(arg)) return arg.split("=", 1) def prepare_options(cmdline_args): """return dictionary of options from commandline key=value args""" options = dict() for arg in cmdline_args: name, value = split_option(arg) if name not in options: options[name] = value elif options[name] != value: raise CmdLineInputError( "duplicate option '{0}' with different values '{1}' and '{2}'" .format(name, options[name], value) ) return options def group_by_keywords( arg_list, keyword_set, implicit_first_group_key=None, keyword_repeat_allowed=True, group_repeated_keywords=None, only_found_keywords=False ): """ Return dictionary with keywords as keys and following arguments as value. For example when keywords are "first" and "seconds" then for arg_list ["first", 1, 2, "second", 3] it returns {"first": [1, 2], "second": [3]} list arg_list is commandline arguments containing keywords set keyword_set contain all expected keywords string implicit_first_group_key is the key for capturing of arguments before the occurrence of the first keyword. implicit_first_group_key is not a keyword => its occurence in args is considered as ordinary argument. bool keyword_repeat_allowed is the flag to turn on/off checking the uniqueness of each keyword in arg_list. list group_repeated_keywords contains keywords for which each occurence is packed separately. For example when keywords are "first" and "seconds" and group_repeated_keywords is ["first"] then for arg_list ["first", 1, 2, "second", 3, "first", 4] it returns {"first": [[1, 2], [4]], "second": [3]}. For these keywords is allowed repeating. bool only_found_keywords is flag for deciding to (not)contain keywords that do not appeared in arg_list. """ def get_keywords_for_grouping(): if not group_repeated_keywords: return [] #implicit_first_group_key is not keyword: when it is in #group_repeated_keywords but not in keyword_set is considered as #unknown. 
def parse_typed_arg(arg, allowed_types, default_type):
    """
    Get (type, value) from a typed commandline argument.

    Split the argument by the type separator and return the type and the
    value. Raise CmdLineInputError if the argument format or type is not
    valid.

    string arg -- commandline argument
    Iterable allowed_types -- list of allowed argument types
    string default_type -- type to return if the argument doesn't specify
        a type
    """
    if ARG_TYPE_DELIMITER not in arg:
        return default_type, arg
    arg_type, arg_value = arg.split(ARG_TYPE_DELIMITER, 1)
    if not arg_type:
        return default_type, arg_value
    if arg_type not in allowed_types:
        raise CmdLineInputError(
            "'{arg_type}' is not an allowed type for '{arg_full}', use {hint}"
            .format(
                arg_type=arg_type,
                arg_full=arg,
                hint=", ".join(sorted(allowed_types))
            )
        )
    return arg_type, arg_value

def is_num(arg):
    return arg.isdigit() or arg.lower() == "infinity"

def is_negative_num(arg):
    return arg.startswith("-") and is_num(arg[1:])

def is_short_option_expecting_value(arg):
    return (
        len(arg) == 2
        and
        arg[0] == "-"
        and
        "{0}:".format(arg[1]) in PCS_SHORT_OPTIONS
    )

def is_long_option_expecting_value(arg):
    return (
        len(arg) > 2
        and
        arg[0:2] == "--"
        and
        "{0}=".format(arg[2:]) in PCS_LONG_OPTIONS
    )

def is_option_expecting_value(arg):
    return (
        is_short_option_expecting_value(arg)
        or
        is_long_option_expecting_value(arg)
    )
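# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original file): what the predicates
# above return, given PCS_SHORT_OPTIONS = "hf:p:u:V" and the
# PCS_LONG_OPTIONS list defined earlier.
#
#   is_num("10")                              == True
#   is_num("INFINITY")                        == True
#   is_negative_num("-1")                     == True
#   is_short_option_expecting_value("-f")     == True   # "f:" is listed
#   is_short_option_expecting_value("-V")     == False  # "V" is a plain flag
#   is_long_option_expecting_value("--name")  == True   # "name=" is listed
#   is_option_expecting_value("--force")      == False  # "force" has no "="
# ---------------------------------------------------------------------------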
def filter_out_non_option_negative_numbers(arg_list):
    """
    Return arg_list without non-option negative numbers.
    Negative numbers following an option expecting a value are kept.

    There is a problematic legacy: the argument "--" has a special meaning
    and can be used to signal that no more options will follow. This would
    solve the problem with negative numbers in a standard way: there would
    be no special approach to negative numbers and everything would be left
    in the hands of users. But now it would be a backward incompatible
    change.

    list arg_list contains command line arguments
    """
    args_without_negative_nums = []
    for i, arg in enumerate(arg_list):
        prev_arg = arg_list[i-1] if i > 0 else ""
        if not is_negative_num(arg) or is_option_expecting_value(prev_arg):
            args_without_negative_nums.append(arg)

    return args_without_negative_nums

def filter_out_options(arg_list):
    """
    Return arg_list without options and their values; negative numbers
    and "-" are kept.

    list arg_list contains command line arguments
    """
    args_without_options = []
    for i, arg in enumerate(arg_list):
        prev_arg = arg_list[i-1] if i > 0 else ""
        if(
            not is_option_expecting_value(prev_arg)
            and (
                not arg.startswith("-")
                or
                arg == "-"
                or
                is_negative_num(arg)
            )
        ):
            args_without_options.append(arg)
    return args_without_options

def upgrade_args(arg_list):
    """
    Return a modified copy of arg_list.

    This function transforms some old syntax to new syntax to keep backward
    compatibility.

    list arg_list contains command line arguments
    """
    upgraded_args = []
    args_without_options = filter_out_options(arg_list)
    for arg in arg_list:
        if arg in ["--cloneopt", "--clone"]:
            # for every command - kept as it was previously
            upgraded_args.append("clone")
        elif arg.startswith("--cloneopt="):
            # for every command - kept as it was previously
            upgraded_args.append("clone")
            upgraded_args.append(arg.split('=', 1)[1])
        elif(
            # only for resource create - currently the only known
            # problematic place
            arg == "--master"
            and
            args_without_options[:2] == ["resource", "create"]
        ):
            upgraded_args.append("master")
        else:
            upgraded_args.append(arg)
    return upgraded_args

pcs-0.9.164/pcs/cli/common/reports.py
from __future__ import (
    absolute_import,
    division,
    print_function,
)

import sys
import inspect
from functools import partial

from pcs.cli.booth.console_report import (
    CODE_TO_MESSAGE_BUILDER_MAP as BOOTH_CODE_TO_MESSAGE_BUILDER_MAP
)
from pcs.cli.common.console_report import (
    CODE_TO_MESSAGE_BUILDER_MAP,
    error,
    warn,
)
from pcs.cli.constraint_all.console_report import (
    CODE_TO_MESSAGE_BUILDER_MAP as CONSTRAINT_CODE_TO_MESSAGE_BUILDER_MAP
)
from pcs.common import report_codes as codes
from pcs.lib.errors import LibraryError, ReportItemSeverity

__CODE_BUILDER_MAP = {}
__CODE_BUILDER_MAP.update(CODE_TO_MESSAGE_BUILDER_MAP)
__CODE_BUILDER_MAP.update(CONSTRAINT_CODE_TO_MESSAGE_BUILDER_MAP)
__CODE_BUILDER_MAP.update(BOOTH_CODE_TO_MESSAGE_BUILDER_MAP)

def build_default_message_from_report(report_item, force_text):
    return "Unknown report: {0} info: {1}{2}".format(
        report_item.code,
        str(report_item.info),
        force_text,
    )

def build_message_from_report(code_builder_map, report_item, force_text=""):
    if report_item.code not in code_builder_map:
        return build_default_message_from_report(report_item, force_text)

    message = code_builder_map[report_item.code]

    # Sometimes report item info is not needed for message building.
    # In that case the message is a string. Otherwise the message is
    # a callable.
    if not callable(message):
        return message + force_text

    try:
        # Object functools.partial cannot be used with inspect because it is
        # not a regular python function. We have to use the original function
        # for that.
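        # ------------------------------------------------------------------
        # Illustrative sketch (not part of the original file): why partial
        # objects get the special handling below. On python 2.7,
        # inspect.getargspec() raises TypeError for a functools.partial, so
        # the wrapped function and the pre-bound arguments are inspected
        # separately:
        #
        #   bound = partial(lambda info, force_text: "", force_text="!")
        #   inspect.getargspec(bound)            # raises TypeError
        #   inspect.getargspec(bound.func).args  # ["info", "force_text"]
        #   bound.keywords                       # {"force_text": "!"}
        #   bound.args                           # ()
        # ------------------------------------------------------------------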
if isinstance(message, partial): keywords = message.keywords if message.keywords is not None else {} args = inspect.getargspec(message.func).args del args[:len(message.args)] args = [arg for arg in args if arg not in keywords] else: args = inspect.getargspec(message).args if "force_text" in args: return message(report_item.info, force_text) return message(report_item.info) + force_text except(TypeError, KeyError): return build_default_message_from_report(report_item, force_text) build_report_message = partial(build_message_from_report, __CODE_BUILDER_MAP) class LibraryReportProcessorToConsole(object): def __init__(self, debug=False): self.debug = debug self.items = [] def append(self, report_item): self.items.append(report_item) return self def extend(self, report_item_list): self.items.extend(report_item_list) return self @property def errors_count(self): return len([ item for item in self.items if item.severity == ReportItemSeverity.ERROR ]) def report(self, report_item): return self.report_list([report_item]) def report_list(self, report_item_list): return self._send(report_item_list) def process(self, report_item): self.append(report_item) self.send() def process_list(self, report_item_list): self.extend(report_item_list) self.send() def _send(self, report_item_list, print_errors=True): errors = [] for report_item in report_item_list: if report_item.severity == ReportItemSeverity.ERROR: if print_errors: error(build_report_message(report_item)) errors.append(report_item) elif report_item.severity == ReportItemSeverity.WARNING: warn(build_report_message(report_item)) elif self.debug or report_item.severity != ReportItemSeverity.DEBUG: print(build_report_message(report_item)) return errors def send(self): errors = self._send(self.items, print_errors=False) self.items = [] if errors: raise LibraryError(*errors) def _prepare_force_text(report_item): if report_item.forceable == codes.SKIP_OFFLINE_NODES: return ", use --skip-offline to override" return ", use --force to override" if report_item.forceable else "" def process_library_reports(report_item_list): """ report_item_list list of ReportItem """ if not report_item_list: raise error("Errors have occurred, therefore pcs is unable to continue") critical_error = False for report_item in report_item_list: if report_item.severity == ReportItemSeverity.WARNING: print("Warning: " + build_report_message(report_item)) continue if report_item.severity != ReportItemSeverity.ERROR: print(build_report_message(report_item)) continue sys.stderr.write('Error: {0}\n'.format(build_report_message( report_item, _prepare_force_text(report_item) ))) critical_error = True if critical_error: sys.exit(1) pcs-0.9.164/pcs/cli/common/test/000077500000000000000000000000001326265502500162715ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/common/test/__init__.py000066400000000000000000000000001326265502500203700ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/common/test/test_capabilities.py000066400000000000000000000044171326265502500223410ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.misc import get_test_resource as rc from pcs.test.tools.pcs_unittest import mock, TestCase from pcs.cli.common import capabilities @mock.patch("pcs.settings.pcsd_exec_location", rc("")) class Capabilities(TestCase): def test_get_definition(self): self.assertEqual( capabilities.get_capabilities_definition(), [ { "id": "test.in-pcs", "in-pcs": "1", "in-pcsd": "0", "description": "This capability is available in 
pcs.", }, { "id": "test.in-pcsd", "in-pcs": "0", "in-pcsd": "1", "description": "This capability is available in pcsd.", }, { "id": "test.both", "in-pcs": "1", "in-pcsd": "1", "description": "This capability is available in both pcs and pcsd.", }, { "id": "test.empty-description", "in-pcs": "1", "in-pcsd": "1", "description": "", }, { "id": "test.no-description", "in-pcs": "1", "in-pcsd": "1", "description": "", }, ] ) def test_get_pcs(self): self.assertEqual( capabilities.get_pcs_capabilities(), [ { "id": "test.in-pcs", "description": "This capability is available in pcs.", }, { "id": "test.both", "description": "This capability is available in both pcs and pcsd.", }, { "id": "test.empty-description", "description": "", }, { "id": "test.no-description", "description": "", }, ] ) pcs-0.9.164/pcs/cli/common/test/test_completion.py000066400000000000000000000114631326265502500220600ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.cli.common.completion import ( _find_suggestions, has_applicable_environment, make_suggestions, _split_words, ) tree = { "resource": { "op": { "add": {}, "defaults": {}, "remove": {}, }, "clone": {}, }, "cluster": { "auth": {}, "cib": {}, } } class SuggestionTest(TestCase): def test_suggest_nothing_when_cursor_on_first_word(self): self.assertEqual([], _find_suggestions(tree, ['pcs'], 0)) self.assertEqual([], _find_suggestions(tree, ['pcs', 'resource'], 0)) def test_suggest_nothing_when_cursor_possition_out_of_range(self): self.assertEqual([], _find_suggestions(tree, ['pcs', 'resource'], 3)) def test_suggest_when_last_word_not_started(self): self.assertEqual( ["clone", "op"], _find_suggestions(tree, ['pcs', 'resource'], 2) ) def test_suggest_when_last_word_started(self): self.assertEqual( ["clone"], _find_suggestions(tree, ['pcs', 'resource', 'c'], 2) ) def test_suggest_when_cursor_on_word_amid(self): self.assertEqual( ["clone"], _find_suggestions(tree, ['pcs', 'resource', 'c', 'add'], 2) ) def test_suggest_nothing_when_previously_typed_word_not_match(self): self.assertEqual( [], _find_suggestions(tree, ['pcs', 'invalid', 'c'], 2) ) class HasCompletionEnvironmentTest(TestCase): def test_returns_false_if_environment_inapplicable(self): inapplicable_environments = [ { 'COMP_CWORD': '1', 'PCS_AUTO_COMPLETE': '1', }, { 'COMP_WORDS': 'pcs resource', 'PCS_AUTO_COMPLETE': '1', }, { 'COMP_WORDS': 'pcs resource', 'COMP_CWORD': '1', }, { 'COMP_WORDS': 'pcs resource', 'COMP_CWORD': '1', 'PCS_AUTO_COMPLETE': '0', }, { 'COMP_WORDS': 'pcs resource', 'COMP_CWORD': '1a', 'PCS_AUTO_COMPLETE': '1', }, { 'COMP_WORDS': 'pcs resource', 'COMP_CWORD': '1', 'PCS_AUTO_COMPLETE': '1', }, ] for environment in inapplicable_environments: self.assertFalse( has_applicable_environment(environment), 'environment evaluated as applicable (should not be): ' +repr(environment) ) def test_returns_true_if_environment_is_set(self): self.assertTrue(has_applicable_environment({ "COMP_WORDS": "pcs resource", "COMP_CWORD": '1', "COMP_LENGTHS": "3 8", "PCS_AUTO_COMPLETE": "1", })) class MakeSuggestionsEnvironment(TestCase): def test_raises_for_incomlete_environment(self): self.assertRaises( EnvironmentError, lambda: make_suggestions( { 'COMP_CWORD': '1', 'PCS_AUTO_COMPLETE': '1', }, suggestion_tree=tree ) ) def test_suggest_on_correct_environment(self): self.assertEqual( "clone\nop", make_suggestions( { "COMP_WORDS": "pcs resource", "COMP_CWORD": "2", "COMP_LENGTHS": "3 8", "PCS_AUTO_COMPLETE": "1", }, 
suggestion_tree=tree ) ) class SplitWordsTest(TestCase): def test_return_word_list_on_compatible_words_and_lenght(self): self.assertEqual( ["pcs", "resource", "op", "a"], _split_words("pcs resource op a", ["3", "8", "2", "1"]) ) def test_refuse_when_no_int_in_lengths(self): self.assertRaises( EnvironmentError, lambda: _split_words("pcs resource op a", ["3", "8", "2", "A"]) ) def test_refuse_when_lengths_are_too_big(self): self.assertRaises( EnvironmentError, lambda: _split_words("pcs resource op a", ["3", "8", "2", "10"]) ) def test_refuse_when_separator_doesnot_match(self): self.assertRaises( EnvironmentError, lambda: _split_words("pc sresource op a", ["3", "8", "2", "1"]) ) def test_refuse_when_lengths_are_too_small(self): self.assertRaises( EnvironmentError, lambda: _split_words("pcs resource op a ", ["3", "8", "2", "1"]) ) pcs-0.9.164/pcs/cli/common/test/test_console_report.py000066400000000000000000001734161326265502500227530ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.cli.common.console_report import( indent, CODE_TO_MESSAGE_BUILDER_MAP, format_optional, ) from pcs.common import report_codes as codes from pcs.common.fencing_topology import ( TARGET_TYPE_NODE, TARGET_TYPE_REGEXP, TARGET_TYPE_ATTRIBUTE, ) class IndentTest(TestCase): def test_indent_list_of_lines(self): self.assertEqual( indent([ "first", "second" ]), [ " first", " second" ] ) class NameBuildTest(TestCase): """ Mixin for the testing of message building. """ code = None def assert_message_from_info(self, message, info=None): info = info if info else {} build = CODE_TO_MESSAGE_BUILDER_MAP[self.code] self.assertEqual( message, build(info) if callable(build) else build ) class BuildInvalidOptionsMessageTest(NameBuildTest): code = codes.INVALID_OPTIONS def test_build_message_with_type(self): self.assert_message_from_info( "invalid TYPE option 'NAME', allowed options are: FIRST, SECOND", { "option_names": ["NAME"], "option_type": "TYPE", "allowed": ["SECOND", "FIRST"], "allowed_patterns": [], } ) def test_build_message_without_type(self): self.assert_message_from_info( "invalid option 'NAME', allowed options are: FIRST, SECOND", { "option_names": ["NAME"], "option_type": "", "allowed": ["FIRST", "SECOND"], "allowed_patterns": [], } ) def test_build_message_with_multiple_names(self): self.assert_message_from_info( "invalid options: 'ANOTHER', 'NAME', allowed option is FIRST", { "option_names": ["NAME", "ANOTHER"], "option_type": "", "allowed": ["FIRST"], "allowed_patterns": [], } ) def test_pattern(self): self.assert_message_from_info( ( "invalid option 'NAME', allowed are options matching patterns: " "exec_" ), { "option_names": ["NAME"], "option_type": "", "allowed": [], "allowed_patterns": ["exec_"], } ) def test_allowed_and_patterns(self): self.assert_message_from_info( ( "invalid option 'NAME', allowed option is FIRST and options " "matching patterns: exec_" ), { "option_names": ["NAME"], "option_type": "", "allowed": ["FIRST"], "allowed_patterns": ["exec_"], } ) def test_no_allowed_options(self): self.assert_message_from_info( "invalid options: 'ANOTHER', 'NAME', there are no options allowed", { "option_names": ["NAME", "ANOTHER"], "option_type": "", "allowed": [], "allowed_patterns": [], } ) class InvalidUserdefinedOptions(NameBuildTest): code = codes.INVALID_USERDEFINED_OPTIONS def test_without_type(self): self.assert_message_from_info( ( "invalid option 'exec_NAME', " "exec_NAME cannot contain . 
and whitespace characters" ), { "option_names": ["exec_NAME"], "option_type": "", "allowed_description": "exec_NAME cannot contain . and whitespace characters" , } ) def test_with_type(self): self.assert_message_from_info( ( "invalid heuristics option 'exec_NAME', " "exec_NAME cannot contain . and whitespace characters" ), { "option_names": ["exec_NAME"], "option_type": "heuristics", "allowed_description": "exec_NAME cannot contain . and whitespace characters" , } ) def test_more_options(self): self.assert_message_from_info( "invalid TYPE options: 'ANOTHER', 'NAME', DESC", { "option_names": ["NAME", "ANOTHER"], "option_type": "TYPE", "allowed_description": "DESC", } ) class RequiredOptionIsMissing(NameBuildTest): code = codes.REQUIRED_OPTION_IS_MISSING def test_build_message_with_type(self): self.assert_message_from_info( "required TYPE option 'NAME' is missing", { "option_names": ["NAME"], "option_type": "TYPE", } ) def test_build_message_without_type(self): self.assert_message_from_info( "required option 'NAME' is missing", { "option_names": ["NAME"], "option_type": "", } ) def test_build_message_with_multiple_names(self): self.assert_message_from_info( "required options 'ANOTHER', 'NAME' are missing", { "option_names": ["NAME", "ANOTHER"], "option_type": "", } ) class BuildInvalidOptionValueMessageTest(NameBuildTest): code = codes.INVALID_OPTION_VALUE def test_build_message_with_multiple_allowed_values(self): self.assert_message_from_info( "'VALUE' is not a valid NAME value, use FIRST, SECOND", { "option_name": "NAME", "option_value": "VALUE", "allowed_values": sorted(["FIRST", "SECOND"]), } ) def test_build_message_with_hint(self): self.assert_message_from_info( "'VALUE' is not a valid NAME value, use some hint", { "option_name": "NAME", "option_value": "VALUE", "allowed_values": "some hint", } ) class BuildServiceStartErrorTest(NameBuildTest): code = codes.SERVICE_START_ERROR def test_build_message_with_instance_and_node(self): self.assert_message_from_info( "NODE: Unable to start SERVICE@INSTANCE: REASON", { "service": "SERVICE", "reason": "REASON", "node": "NODE", "instance": "INSTANCE", } ) def test_build_message_with_instance_only(self): self.assert_message_from_info( "Unable to start SERVICE@INSTANCE: REASON", { "service": "SERVICE", "reason": "REASON", "node": "", "instance": "INSTANCE", } ) def test_build_message_with_node_only(self): self.assert_message_from_info( "NODE: Unable to start SERVICE: REASON", { "service": "SERVICE", "reason": "REASON", "node": "NODE", "instance": "", } ) def test_build_message_without_node_and_instance(self): self.assert_message_from_info( "Unable to start SERVICE: REASON", { "service": "SERVICE", "reason": "REASON", "node": "", "instance": "", } ) class InvalidCibContent(NameBuildTest): code = codes.INVALID_CIB_CONTENT def test_build_message(self): report = "report\nlines" self.assert_message_from_info( "invalid cib: \n{0}".format(report), { "report": report, } ) class BuildInvalidIdTest(NameBuildTest): code = codes.INVALID_ID def test_build_message_with_first_char_invalid(self): self.assert_message_from_info( ( "invalid ID_DESCRIPTION 'ID', 'INVALID_CHARACTER' is not a" " valid first character for a ID_DESCRIPTION" ), { "id_description": "ID_DESCRIPTION", "id": "ID", "invalid_character": "INVALID_CHARACTER", "is_first_char": True, } ) def test_build_message_with_non_first_char_invalid(self): self.assert_message_from_info( ( "invalid ID_DESCRIPTION 'ID', 'INVALID_CHARACTER' is not a" " valid character for a ID_DESCRIPTION" ), { "id_description": 
"ID_DESCRIPTION", "id": "ID", "invalid_character": "INVALID_CHARACTER", "is_first_char": False, } ) class BuildRunExternalStartedTest(NameBuildTest): code = codes.RUN_EXTERNAL_PROCESS_STARTED def test_build_message_minimal(self): self.assert_message_from_info( "Running: COMMAND\nEnvironment:\n", { "command": "COMMAND", "stdin": "", "environment": dict(), } ) def test_build_message_with_stdin(self): self.assert_message_from_info( ( "Running: COMMAND\nEnvironment:\n" "--Debug Input Start--\n" "STDIN\n" "--Debug Input End--\n" ), { "command": "COMMAND", "stdin": "STDIN", "environment": dict(), } ) def test_build_message_with_env(self): self.assert_message_from_info( ( "Running: COMMAND\nEnvironment:\n" " env_a=A\n" " env_b=B\n" ), { "command": "COMMAND", "stdin": "", "environment": {"env_a": "A", "env_b": "B",}, } ) def test_build_message_maximal(self): self.assert_message_from_info( ( "Running: COMMAND\nEnvironment:\n" " env_a=A\n" " env_b=B\n" "--Debug Input Start--\n" "STDIN\n" "--Debug Input End--\n" ), { "command": "COMMAND", "stdin": "STDIN", "environment": {"env_a": "A", "env_b": "B",}, } ) def test_insidious_environment(self): self.assert_message_from_info( ( "Running: COMMAND\nEnvironment:\n" " test=a:{green},b:{red}\n" "--Debug Input Start--\n" "STDIN\n" "--Debug Input End--\n" ), { "command": "COMMAND", "stdin": "STDIN", "environment": {"test": "a:{green},b:{red}",}, } ) class BuildNodeCommunicationStartedTest(NameBuildTest): code = codes.NODE_COMMUNICATION_STARTED def test_build_message_with_data(self): self.assert_message_from_info( ( "Sending HTTP Request to: TARGET\n" "--Debug Input Start--\n" "DATA\n" "--Debug Input End--\n" ), { "target": "TARGET", "data": "DATA", } ) def test_build_message_without_data(self): self.assert_message_from_info( "Sending HTTP Request to: TARGET\n", { "target": "TARGET", "data": "", } ) class NodeCommunicationErrorTimedOut(NameBuildTest): code = codes.NODE_COMMUNICATION_ERROR_TIMED_OUT def test_success(self): self.assert_message_from_info( ( "node-1: Connection timeout, try setting higher timeout in " "--request-timeout option (Connection timed out after 60049 " "milliseconds)" ), { "node": "node-1", "command": "/remote/command", "reason": "Connection timed out after 60049 milliseconds", } ) class FormatOptionalTest(TestCase): def test_info_key_is_falsy(self): self.assertEqual("", format_optional("", "{0}: ")) def test_info_key_is_not_falsy(self): self.assertEqual("A: ", format_optional("A", "{0}: ")) def test_default_value(self): self.assertEqual("DEFAULT", format_optional("", "{0}: ", "DEFAULT")) class AgentNameGuessedTest(NameBuildTest): code = codes.AGENT_NAME_GUESSED def test_build_message_with_data(self): self.assert_message_from_info( "Assumed agent name 'ocf:heratbeat:Delay' (deduced from 'Delay')", { "entered_name": "Delay", "guessed_name": "ocf:heratbeat:Delay", } ) class InvalidResourceAgentNameTest(NameBuildTest): code = codes.INVALID_RESOURCE_AGENT_NAME def test_build_message_with_data(self): self.assert_message_from_info( "Invalid resource agent name ':name'." " Use standard:provider:type when standard is 'ocf' or" " standard:type otherwise. List of standards and providers can" " be obtained by using commands 'pcs resource standards' and" " 'pcs resource providers'" , { "name": ":name", } ) class InvalidiStonithAgentNameTest(NameBuildTest): code = codes.INVALID_STONITH_AGENT_NAME def test_build_message_with_data(self): self.assert_message_from_info( "Invalid stonith agent name 'fence:name'. 
List of agents can be" " obtained by using command 'pcs stonith list'. Do not use the" " 'stonith:' prefix. Agent name cannot contain the ':'" " character." , { "name": "fence:name", } ) class InvalidOptionType(NameBuildTest): code = codes.INVALID_OPTION_TYPE def test_allowed_string(self): self.assert_message_from_info( "specified option name is not valid, use allowed types", { "option_name": "option name", "allowed_types": "allowed types", } ) def test_allowed_list(self): self.assert_message_from_info( "specified option name is not valid, use allowed, types", { "option_name": "option name", "allowed_types": ["allowed", "types"], } ) class DeprecatedOption(NameBuildTest): code = codes.DEPRECATED_OPTION def test_no_desc_hint_array(self): self.assert_message_from_info( "option 'option name' is deprecated and should not be used," " use new_a, new_b instead" , { "option_name": "option name", "option_type": "", "replaced_by": ["new_b", "new_a"], } ) def test_desc_hint_string(self): self.assert_message_from_info( "option type option 'option name' is deprecated and should not be" " used, use new option instead" , { "option_name": "option name", "option_type": "option type", "replaced_by": "new option", } ) class StonithResourcesDoNotExist(NameBuildTest): code = codes.STONITH_RESOURCES_DO_NOT_EXIST def test_success(self): self.assert_message_from_info( "Stonith resource(s) 'device1', 'device2' do not exist", { "stonith_ids": ["device1", "device2"], } ) class FencingLevelAlreadyExists(NameBuildTest): code = codes.CIB_FENCING_LEVEL_ALREADY_EXISTS def test_target_node(self): self.assert_message_from_info( "Fencing level for 'nodeA' at level '1' with device(s) " "'device1,device2' already exists", { "level": "1", "target_type": TARGET_TYPE_NODE, "target_value": "nodeA", "devices": ["device1", "device2"], } ) def test_target_pattern(self): self.assert_message_from_info( "Fencing level for 'node-\d+' at level '1' with device(s) " "'device1,device2' already exists", { "level": "1", "target_type": TARGET_TYPE_REGEXP, "target_value": "node-\d+", "devices": ["device1", "device2"], } ) def test_target_attribute(self): self.assert_message_from_info( "Fencing level for 'name=value' at level '1' with device(s) " "'device1,device2' already exists", { "level": "1", "target_type": TARGET_TYPE_ATTRIBUTE, "target_value": ("name", "value"), "devices": ["device1", "device2"], } ) class FencingLevelDoesNotExist(NameBuildTest): code = codes.CIB_FENCING_LEVEL_DOES_NOT_EXIST def test_full_info(self): self.assert_message_from_info( "Fencing level for 'nodeA' at level '1' with device(s) " "'device1,device2' does not exist", { "level": "1", "target_type": TARGET_TYPE_NODE, "target_value": "nodeA", "devices": ["device1", "device2"], } ) def test_only_level(self): self.assert_message_from_info( "Fencing level at level '1' does not exist", { "level": "1", "target_type": None, "target_value": None, "devices": None, } ) def test_only_target(self): self.assert_message_from_info( "Fencing level for 'name=value' does not exist", { "level": None, "target_type": TARGET_TYPE_ATTRIBUTE, "target_value": ("name", "value"), "devices": None, } ) def test_only_devices(self): self.assert_message_from_info( "Fencing level with device(s) 'device1,device2' does not exist", { "level": None, "target_type": None, "target_value": None, "devices": ["device1", "device2"], } ) def test_no_info(self): self.assert_message_from_info( "Fencing level does not exist", { "level": None, "target_type": None, "target_value": None, "devices": None, } ) class 
ResourceBundleAlreadyContainsAResource(NameBuildTest): code = codes.RESOURCE_BUNDLE_ALREADY_CONTAINS_A_RESOURCE def test_build_message_with_data(self): self.assert_message_from_info( ( "bundle 'test_bundle' already contains resource " "'test_resource', a bundle may contain at most one resource" ), { "resource_id": "test_resource", "bundle_id": "test_bundle", } ) class ResourceOperationIntevalDuplicationTest(NameBuildTest): code = codes.RESOURCE_OPERATION_INTERVAL_DUPLICATION def test_build_message_with_data(self): self.assert_message_from_info( "multiple specification of the same operation with the same" " interval:" "\nmonitor with intervals 3600s, 60m, 1h" "\nmonitor with intervals 60s, 1m" , { "duplications": { "monitor": [ ["3600s", "60m", "1h"], ["60s", "1m"], ], }, } ) class ResourceOperationIntevalAdaptedTest(NameBuildTest): code = codes.RESOURCE_OPERATION_INTERVAL_ADAPTED def test_build_message_with_data(self): self.assert_message_from_info( "changing a monitor operation interval from 10 to 11 to make the" " operation unique" , { "operation_name": "monitor", "original_interval": "10", "adapted_interval": "11", } ) class IdBelongsToUnexpectedType(NameBuildTest): code = codes.ID_BELONGS_TO_UNEXPECTED_TYPE def test_build_message_with_data(self): self.assert_message_from_info( "'ID' is not a clone/master/resource", { "id": "ID", "expected_types": ["primitive", "master", "clone"], "current_type": "op", } ) def test_build_message_with_transformation_and_article(self): self.assert_message_from_info( "'ID' is not an ACL group/ACL user", { "id": "ID", "expected_types": ["acl_target", "acl_group"], "current_type": "op", } ) class ResourceRunOnNodes(NameBuildTest): code = codes.RESOURCE_RUNNING_ON_NODES def test_one_node(self): self.assert_message_from_info( "resource 'R' is running on node 'node1'", { "resource_id": "R", "roles_with_nodes": {"Started": ["node1"]}, } ) def test_multiple_nodes(self): self.assert_message_from_info( "resource 'R' is running on nodes 'node1', 'node2'", { "resource_id": "R", "roles_with_nodes": {"Started": ["node1","node2"]}, } ) def test_multiple_role_multiple_nodes(self): self.assert_message_from_info( "resource 'R' is master on node 'node3'" "; running on nodes 'node1', 'node2'" , { "resource_id": "R", "roles_with_nodes": { "Started": ["node1","node2"], "Master": ["node3"], }, } ) class ResourceDoesNotRun(NameBuildTest): code = codes.RESOURCE_DOES_NOT_RUN def test_build_message(self): self.assert_message_from_info( "resource 'R' is not running on any node", { "resource_id": "R", } ) class MutuallyExclusiveOptions(NameBuildTest): code = codes.MUTUALLY_EXCLUSIVE_OPTIONS def test_build_message(self): self.assert_message_from_info( "Only one of some options 'a' and 'b' can be used", { "option_type": "some", "option_names": ["a", "b"], } ) class ResourceIsUnmanaged(NameBuildTest): code = codes.RESOURCE_IS_UNMANAGED def test_build_message(self): self.assert_message_from_info( "'R' is unmanaged", { "resource_id": "R", } ) class ResourceManagedNoMonitorEnabled(NameBuildTest): code = codes.RESOURCE_MANAGED_NO_MONITOR_ENABLED def test_build_message(self): self.assert_message_from_info( "Resource 'R' has no enabled monitor operations." " Re-run with '--monitor' to enable them." 
, { "resource_id": "R", } ) class NodeIsInCluster(NameBuildTest): code = codes.CANNOT_ADD_NODE_IS_IN_CLUSTER def test_build_message(self): self.assert_message_from_info( "cannot add the node 'N1' because it is in a cluster", { "node": "N1", } ) class NodeIsRunningPacemakerRemote(NameBuildTest): code = codes.CANNOT_ADD_NODE_IS_RUNNING_SERVICE def test_build_message(self): self.assert_message_from_info( "cannot add the node 'N1' because it is running service" " 'pacemaker_remote' (is not the node already in a cluster?)" , { "node": "N1", "service": "pacemaker_remote", } ) def test_build_message_with_unknown_service(self): self.assert_message_from_info( "cannot add the node 'N1' because it is running service 'unknown'", { "node": "N1", "service": "unknown", } ) class SbdDeviceInitializationStarted(NameBuildTest): code = codes.SBD_DEVICE_INITIALIZATION_STARTED def test_build_message(self): self.assert_message_from_info( "Initializing device(s) /dev1, /dev2, /dev3...", { "device_list": ["/dev1", "/dev2", "/dev3"], } ) class SbdDeviceInitializationError(NameBuildTest): code = codes.SBD_DEVICE_INITIALIZATION_ERROR def test_build_message(self): self.assert_message_from_info( "Initialization of device(s) failed: this is reason", { "reason": "this is reason" } ) class SbdDeviceListError(NameBuildTest): code = codes.SBD_DEVICE_LIST_ERROR def test_build_message(self): self.assert_message_from_info( "Unable to get list of messages from device '/dev': this is reason", { "device": "/dev", "reason": "this is reason", } ) class SbdDeviceMessageError(NameBuildTest): code = codes.SBD_DEVICE_MESSAGE_ERROR def test_build_message(self): self.assert_message_from_info( "Unable to set message 'test' for node 'node1' on device '/dev1'", { "message": "test", "node": "node1", "device": "/dev1", } ) class SbdDeviceDumpError(NameBuildTest): code = codes.SBD_DEVICE_DUMP_ERROR def test_build_message(self): self.assert_message_from_info( "Unable to get SBD headers from device '/dev1': this is reason", { "device": "/dev1", "reason": "this is reason", } ) class SbdDevcePathNotAbsolute(NameBuildTest): code = codes.SBD_DEVICE_PATH_NOT_ABSOLUTE def test_build_message(self): self.assert_message_from_info( "Device path '/dev' on node 'node1' is not absolute", { "device": "/dev", "node": "node1", } ) def test_build_message_without_node(self): self.assert_message_from_info( "Device path '/dev' is not absolute", { "device": "/dev", "node": None, } ) class SbdDeviceDoesNotExist(NameBuildTest): code = codes.SBD_DEVICE_DOES_NOT_EXIST def test_build_message(self): self.assert_message_from_info( "node1: device '/dev' not found", { "node": "node1", "device": "/dev", } ) class SbdDeviceISNotBlockDevice(NameBuildTest): code = codes.SBD_DEVICE_IS_NOT_BLOCK_DEVICE def test_build_message(self): self.assert_message_from_info( "node1: device '/dev' is not a block device", { "node": "node1", "device": "/dev", } ) class SbdNoDEviceForNode(NameBuildTest): code = codes.SBD_NO_DEVICE_FOR_NODE def test_build_message(self): self.assert_message_from_info( "No device defined for node 'node1'", { "node": "node1", } ) class SbdTooManyDevicesForNode(NameBuildTest): code = codes.SBD_TOO_MANY_DEVICES_FOR_NODE def test_build_messages(self): self.assert_message_from_info( "More than 3 devices defined for node 'node1' (devices: /dev1, " "/dev2, /dev3)", { "max_devices": 3, "node": "node1", "device_list": ["/dev1", "/dev2", "/dev3"] } ) class RequiredOptionOfAlternativesIsMissing(NameBuildTest): code = codes.REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING def 
test_without_type(self): self.assert_message_from_info( "option 'aAa' or 'bBb' or 'cCc' has to be specified", { "option_names": ["aAa", "bBb", "cCc"], } ) def test_with_type(self): self.assert_message_from_info( "test option 'aAa' or 'bBb' or 'cCc' has to be specified", { "option_type": "test", "option_names": ["aAa", "bBb", "cCc"], } ) class PrerequisiteOptionIsMissing(NameBuildTest): code = codes.PREREQUISITE_OPTION_IS_MISSING def test_without_type(self): self.assert_message_from_info( "If option 'a' is specified, option 'b' must be specified as well", { "option_name": "a", "prerequisite_name": "b", } ) def test_with_type(self): self.assert_message_from_info( "If some option 'a' is specified, " "other option 'b' must be specified as well" , { "option_name": "a", "option_type": "some", "prerequisite_name": "b", "prerequisite_type": "other", } ) class FileDistributionStarted(NameBuildTest): code = codes.FILES_DISTRIBUTION_STARTED def test_build_messages(self): self.assert_message_from_info( "Sending 'first', 'second'", { "file_list": ["first", "second"], "node_list": None, "description": None, } ) def test_build_messages_with_nodes(self): self.assert_message_from_info( "Sending 'first', 'second' to 'node1', 'node2'", { "file_list": ["first", "second"], "node_list": ["node1", "node2"], "description": None, } ) def test_build_messages_with_description(self): self.assert_message_from_info( "Sending configuration files to 'node1', 'node2'", { "file_list": ["first", "second"], "node_list": ["node1", "node2"], "description": "configuration files", } ) class FileDistributionSucess(NameBuildTest): code = codes.FILE_DISTRIBUTION_SUCCESS def test_build_messages(self): self.assert_message_from_info( "node1: successful distribution of the file 'some authfile'", { "nodes_success_files": None, "node": "node1", "file_description": "some authfile", } ) class FileDistributionError(NameBuildTest): code = codes.FILE_DISTRIBUTION_ERROR def test_build_messages(self): self.assert_message_from_info( "node1: unable to distribute file 'file1': permission denied", { "node_file_errors": None, "node": "node1", "file_description": "file1", "reason": "permission denied", } ) class FileRemoveFromNodeStarted(NameBuildTest): code = codes.FILES_REMOVE_FROM_NODE_STARTED def test_build_messages(self): self.assert_message_from_info( "Requesting remove 'first', 'second' from 'node1', 'node2'", { "file_list": ["first", "second"], "node_list": ["node1", "node2"], "description": None, } ) def test_build_messages_with_description(self): self.assert_message_from_info( "Requesting remove remote configuration files from 'node1'," " 'node2'" , { "file_list": ["first", "second"], "node_list": ["node1", "node2"], "description": "remote configuration files", } ) class FileRemoveFromNodeSucess(NameBuildTest): code = codes.FILE_REMOVE_FROM_NODE_SUCCESS def test_build_messages(self): self.assert_message_from_info( "node1: successful removal of the file 'some authfile'", { "nodes_success_files": None, "node": "node1", "file_description": "some authfile", } ) class FileRemoveFromNodeError(NameBuildTest): code = codes.FILE_REMOVE_FROM_NODE_ERROR def test_build_messages(self): self.assert_message_from_info( "node1: unable to remove file 'file1': permission denied", { "node_file_errors": None, "node": "node1", "file_description": "file1", "reason": "permission denied", } ) class ActionsOnNodesStarted(NameBuildTest): code = codes.SERVICE_COMMANDS_ON_NODES_STARTED def test_build_messages(self): self.assert_message_from_info( "Requesting 'first', 
'second'", { "action_list": ["first", "second"], "node_list": None, "description": None, } ) def test_build_messages_with_nodes(self): self.assert_message_from_info( "Requesting 'first', 'second' on 'node1', 'node2'", { "action_list": ["first", "second"], "node_list": ["node1", "node2"], "description": None, } ) def test_build_messages_with_description(self): self.assert_message_from_info( "Requesting running pacemaker_remote on 'node1', 'node2'", { "action_list": ["first", "second"], "node_list": ["node1", "node2"], "description": "running pacemaker_remote", } ) class ActionsOnNodesSuccess(NameBuildTest): code = codes.SERVICE_COMMAND_ON_NODE_SUCCESS def test_build_messages(self): self.assert_message_from_info( "node1: successful run of 'service enable'", { "nodes_success_actions": None, "node": "node1", "service_command_description": "service enable", } ) class ActionOnNodesError(NameBuildTest): code = codes.SERVICE_COMMAND_ON_NODE_ERROR def test_build_messages(self): self.assert_message_from_info( "node1: service command failed: service1 start: permission denied", { "node_action_errors": None, "node": "node1", "service_command_description": "service1 start", "reason": "permission denied", } ) class resource_is_guest_node_already(NameBuildTest): code = codes.RESOURCE_IS_GUEST_NODE_ALREADY def test_build_messages(self): self.assert_message_from_info( "the resource 'some-resource' is already a guest node", {"resource_id": "some-resource"} ) class live_environment_required(NameBuildTest): code = codes.LIVE_ENVIRONMENT_REQUIRED def test_build_messages(self): self.assert_message_from_info( "This command does not support '--corosync_conf'", { "forbidden_options": ["--corosync_conf"] } ) def test_build_messages_transformable_codes(self): self.assert_message_from_info( "This command does not support '--corosync_conf', '-f'", { "forbidden_options": ["COROSYNC_CONF", "CIB"] } ) class nolive_skip_files_distribution(NameBuildTest): code = codes.NOLIVE_SKIP_FILES_DISTRIBUTION def test_build_messages(self): self.assert_message_from_info( "the distribution of 'file1', 'file2' to 'node1', 'node2' was" " skipped because command" " does not run on live cluster (e.g. -f was used)." " You will have to do it manually." , { "files_description": ["file1", 'file2'], "nodes": ["node1", "node2"], } ) class nolive_skip_files_remove(NameBuildTest): code = codes.NOLIVE_SKIP_FILES_REMOVE def test_build_messages(self): self.assert_message_from_info( "'file1', 'file2' remove from 'node1', 'node2'" " was skipped because command" " does not run on live cluster (e.g. -f was used)." " You will have to do it manually." , { "files_description": ["file1", 'file2'], "nodes": ["node1", "node2"], } ) class nolive_skip_service_command_on_nodes(NameBuildTest): code = codes.NOLIVE_SKIP_SERVICE_COMMAND_ON_NODES def test_build_messages(self): self.assert_message_from_info( "running 'pacemaker_remote start' on 'node1', 'node2' was skipped" " because command does not run on live cluster (e.g. -f was" " used). You will have to run it manually." 
, { "service": "pacemaker_remote", "command": "start", "nodes": ["node1", "node2"] } ) class NodeNotFound(NameBuildTest): code = codes.NODE_NOT_FOUND def test_build_messages(self): self.assert_message_from_info( "Node 'SOME_NODE' does not appear to exist in configuration", { "node": "SOME_NODE", "searched_types": [] } ) def test_build_messages_with_one_search_types(self): self.assert_message_from_info( "remote node 'SOME_NODE' does not appear to exist in configuration", { "node": "SOME_NODE", "searched_types": ["remote"] } ) def test_build_messages_with_string_search_types(self): self.assert_message_from_info( "remote node 'SOME_NODE' does not appear to exist in configuration", { "node": "SOME_NODE", "searched_types": "remote" } ) def test_build_messages_with_multiple_search_types(self): self.assert_message_from_info( "nor remote node or guest node 'SOME_NODE' does not appear to exist" " in configuration" , { "node": "SOME_NODE", "searched_types": ["remote", "guest"] } ) class MultipleResultFound(NameBuildTest): code = codes.MULTIPLE_RESULTS_FOUND def test_build_messages(self): self.assert_message_from_info( "multiple resource for 'NODE-NAME' found: 'ID1', 'ID2'", { "result_type": "resource", "result_identifier_list": ["ID1", "ID2"], "search_description": "NODE-NAME", } ) class UseCommandNodeAddRemote(NameBuildTest): code = codes.USE_COMMAND_NODE_ADD_REMOTE def test_build_messages(self): self.assert_message_from_info( "this command is not sufficient for creating a remote connection," " use 'pcs cluster node add-remote'" , {} ) class UseCommandNodeAddGuest(NameBuildTest): code = codes.USE_COMMAND_NODE_ADD_GUEST def test_build_messages(self): self.assert_message_from_info( "this command is not sufficient for creating a guest node, use " "'pcs cluster node add-guest'", {} ) class UseCommandNodeRemoveGuest(NameBuildTest): code = codes.USE_COMMAND_NODE_REMOVE_GUEST def test_build_messages(self): self.assert_message_from_info( "this command is not sufficient for removing a guest node, use " "'pcs cluster node remove-guest'", {} ) class NodeRemoveInPacemakerFailed(NameBuildTest): code = codes.NODE_REMOVE_IN_PACEMAKER_FAILED def test_build_messages(self): self.assert_message_from_info( "unable to remove node 'NODE' from pacemaker: reason", { "node_name": "NODE", "reason": "reason" } ) class NodeToClearIsStillInCluster(NameBuildTest): code = codes.NODE_TO_CLEAR_IS_STILL_IN_CLUSTER def test_build_messages(self): self.assert_message_from_info( "node 'node1' seems to be still in the cluster" "; this command should be used only with nodes that have been" " removed from the cluster" , { "node": "node1" } ) class ServiceStartStarted(NameBuildTest): code = codes.SERVICE_START_STARTED def test_minimal(self): self.assert_message_from_info( "Starting a_service...", { "service": "a_service", "instance": None, } ) def test_with_instance(self): self.assert_message_from_info( "Starting a_service@an_instance...", { "service": "a_service", "instance": "an_instance", } ) class ServiceStartError(NameBuildTest): code = codes.SERVICE_START_ERROR def test_minimal(self): self.assert_message_from_info( "Unable to start a_service: a_reason", { "service": "a_service", "reason": "a_reason", "node": None, "instance": None, } ) def test_node(self): self.assert_message_from_info( "a_node: Unable to start a_service: a_reason", { "service": "a_service", "reason": "a_reason", "node": "a_node", "instance": None, } ) def test_instance(self): self.assert_message_from_info( "Unable to start a_service@an_instance: a_reason", { 
"service": "a_service", "reason": "a_reason", "node": None, "instance": "an_instance", } ) def test_all(self): self.assert_message_from_info( "a_node: Unable to start a_service@an_instance: a_reason", { "service": "a_service", "reason": "a_reason", "node": "a_node", "instance": "an_instance", } ) class ServiceStartSuccess(NameBuildTest): code = codes.SERVICE_START_SUCCESS def test_minimal(self): self.assert_message_from_info( "a_service started", { "service": "a_service", "node": None, "instance": None, } ) def test_node(self): self.assert_message_from_info( "a_node: a_service started", { "service": "a_service", "node": "a_node", "instance": None, } ) def test_instance(self): self.assert_message_from_info( "a_service@an_instance started", { "service": "a_service", "node": None, "instance": "an_instance", } ) def test_all(self): self.assert_message_from_info( "a_node: a_service@an_instance started", { "service": "a_service", "node": "a_node", "instance": "an_instance", } ) class ServiceStartSkipped(NameBuildTest): code = codes.SERVICE_START_SKIPPED def test_minimal(self): self.assert_message_from_info( "not starting a_service: a_reason", { "service": "a_service", "reason": "a_reason", "node": None, "instance": None, } ) def test_node(self): self.assert_message_from_info( "a_node: not starting a_service: a_reason", { "service": "a_service", "reason": "a_reason", "node": "a_node", "instance": None, } ) def test_instance(self): self.assert_message_from_info( "not starting a_service@an_instance: a_reason", { "service": "a_service", "reason": "a_reason", "node": None, "instance": "an_instance", } ) def test_all(self): self.assert_message_from_info( "a_node: not starting a_service@an_instance: a_reason", { "service": "a_service", "reason": "a_reason", "node": "a_node", "instance": "an_instance", } ) class ServiceStopStarted(NameBuildTest): code = codes.SERVICE_STOP_STARTED def test_minimal(self): self.assert_message_from_info( "Stopping a_service...", { "service": "a_service", "instance": None, } ) def test_with_instance(self): self.assert_message_from_info( "Stopping a_service@an_instance...", { "service": "a_service", "instance": "an_instance", } ) class ServiceStopError(NameBuildTest): code = codes.SERVICE_STOP_ERROR def test_minimal(self): self.assert_message_from_info( "Unable to stop a_service: a_reason", { "service": "a_service", "reason": "a_reason", "node": None, "instance": None, } ) def test_node(self): self.assert_message_from_info( "a_node: Unable to stop a_service: a_reason", { "service": "a_service", "reason": "a_reason", "node": "a_node", "instance": None, } ) def test_instance(self): self.assert_message_from_info( "Unable to stop a_service@an_instance: a_reason", { "service": "a_service", "reason": "a_reason", "node": None, "instance": "an_instance", } ) def test_all(self): self.assert_message_from_info( "a_node: Unable to stop a_service@an_instance: a_reason", { "service": "a_service", "reason": "a_reason", "node": "a_node", "instance": "an_instance", } ) class ServiceStopSuccess(NameBuildTest): code = codes.SERVICE_STOP_SUCCESS def test_minimal(self): self.assert_message_from_info( "a_service stopped", { "service": "a_service", "node": None, "instance": None, } ) def test_node(self): self.assert_message_from_info( "a_node: a_service stopped", { "service": "a_service", "node": "a_node", "instance": None, } ) def test_instance(self): self.assert_message_from_info( "a_service@an_instance stopped", { "service": "a_service", "node": None, "instance": "an_instance", } ) def 
test_all(self): self.assert_message_from_info( "a_node: a_service@an_instance stopped", { "service": "a_service", "node": "a_node", "instance": "an_instance", } ) class ServiceEnableStarted(NameBuildTest): code = codes.SERVICE_ENABLE_STARTED def test_minimal(self): self.assert_message_from_info( "Enabling a_service...", { "service": "a_service", "instance": None, } ) def test_with_instance(self): self.assert_message_from_info( "Enabling a_service@an_instance...", { "service": "a_service", "instance": "an_instance", } ) class ServiceEnableError(NameBuildTest): code = codes.SERVICE_ENABLE_ERROR def test_minimal(self): self.assert_message_from_info( "Unable to enable a_service: a_reason", { "service": "a_service", "reason": "a_reason", "node": None, "instance": None, } ) def test_node(self): self.assert_message_from_info( "a_node: Unable to enable a_service: a_reason", { "service": "a_service", "reason": "a_reason", "node": "a_node", "instance": None, } ) def test_instance(self): self.assert_message_from_info( "Unable to enable a_service@an_instance: a_reason", { "service": "a_service", "reason": "a_reason", "node": None, "instance": "an_instance", } ) def test_all(self): self.assert_message_from_info( "a_node: Unable to enable a_service@an_instance: a_reason", { "service": "a_service", "reason": "a_reason", "node": "a_node", "instance": "an_instance", } ) class ServiceEnableSuccess(NameBuildTest): code = codes.SERVICE_ENABLE_SUCCESS def test_minimal(self): self.assert_message_from_info( "a_service enabled", { "service": "a_service", "node": None, "instance": None, } ) def test_node(self): self.assert_message_from_info( "a_node: a_service enabled", { "service": "a_service", "node": "a_node", "instance": None, } ) def test_instance(self): self.assert_message_from_info( "a_service@an_instance enabled", { "service": "a_service", "node": None, "instance": "an_instance", } ) def test_all(self): self.assert_message_from_info( "a_node: a_service@an_instance enabled", { "service": "a_service", "node": "a_node", "instance": "an_instance", } ) class ServiceEnableSkipped(NameBuildTest): code = codes.SERVICE_ENABLE_SKIPPED def test_minimal(self): self.assert_message_from_info( "not enabling a_service: a_reason", { "service": "a_service", "reason": "a_reason", "node": None, "instance": None, } ) def test_node(self): self.assert_message_from_info( "a_node: not enabling a_service: a_reason", { "service": "a_service", "reason": "a_reason", "node": "a_node", "instance": None, } ) def test_instance(self): self.assert_message_from_info( "not enabling a_service@an_instance: a_reason", { "service": "a_service", "reason": "a_reason", "node": None, "instance": "an_instance", } ) def test_all(self): self.assert_message_from_info( "a_node: not enabling a_service@an_instance: a_reason", { "service": "a_service", "reason": "a_reason", "node": "a_node", "instance": "an_instance", } ) class ServiceDisableStarted(NameBuildTest): code = codes.SERVICE_DISABLE_STARTED def test_minimal(self): self.assert_message_from_info( "Disabling a_service...", { "service": "a_service", "instance": None, } ) def test_with_instance(self): self.assert_message_from_info( "Disabling a_service@an_instance...", { "service": "a_service", "instance": "an_instance", } ) class ServiceDisableError(NameBuildTest): code = codes.SERVICE_DISABLE_ERROR def test_minimal(self): self.assert_message_from_info( "Unable to disable a_service: a_reason", { "service": "a_service", "reason": "a_reason", "node": None, "instance": None, } ) def test_node(self): 
self.assert_message_from_info( "a_node: Unable to disable a_service: a_reason", { "service": "a_service", "reason": "a_reason", "node": "a_node", "instance": None, } ) def test_instance(self): self.assert_message_from_info( "Unable to disable a_service@an_instance: a_reason", { "service": "a_service", "reason": "a_reason", "node": None, "instance": "an_instance", } ) def test_all(self): self.assert_message_from_info( "a_node: Unable to disable a_service@an_instance: a_reason", { "service": "a_service", "reason": "a_reason", "node": "a_node", "instance": "an_instance", } ) class ServiceDisableSuccess(NameBuildTest): code = codes.SERVICE_DISABLE_SUCCESS def test_minimal(self): self.assert_message_from_info( "a_service disabled", { "service": "a_service", "node": None, "instance": None, } ) def test_node(self): self.assert_message_from_info( "a_node: a_service disabled", { "service": "a_service", "node": "a_node", "instance": None, } ) def test_instance(self): self.assert_message_from_info( "a_service@an_instance disabled", { "service": "a_service", "node": None, "instance": "an_instance", } ) def test_all(self): self.assert_message_from_info( "a_node: a_service@an_instance disabled", { "service": "a_service", "node": "a_node", "instance": "an_instance", } ) class CibDiffError(NameBuildTest): code = codes.CIB_DIFF_ERROR def test_success(self): self.assert_message_from_info( "Unable to diff CIB: error message\n", { "reason": "error message", "cib_old": "", "cib_new": "", } ) class TmpFileWrite(NameBuildTest): code = codes.TMP_FILE_WRITE def test_success(self): self.assert_message_from_info( ( "Writing to a temporary file /tmp/pcs/test.tmp:\n" "--Debug Content Start--\n" "test file\ncontent\n\n" "--Debug Content End--\n" ), { "file_path": "/tmp/pcs/test.tmp", "content": "test file\ncontent\n", } ) class DefaultsCanBeOverriden(NameBuildTest): code = codes.DEFAULTS_CAN_BE_OVERRIDEN def test_message(self): self.assert_message_from_info( "Defaults do not apply to resources which override them with their " "own defined values" ) class CibLoadErrorBadFormat(NameBuildTest): code = codes.CIB_LOAD_ERROR_BAD_FORMAT def test_message(self): self.assert_message_from_info( "unable to get cib, something wrong", { "reason": "something wrong" } ) class CorosyncQuorumHeuristicsEnabledWithNoExec(NameBuildTest): code = codes.COROSYNC_QUORUM_HEURISTICS_ENABLED_WITH_NO_EXEC def test_message(self): self.assert_message_from_info( "No exec_NAME options are specified, so heuristics are effectively " "disabled" ) class ResourceCleanupError(NameBuildTest): code = codes.RESOURCE_CLEANUP_ERROR def test_minimal(self): self.assert_message_from_info( "Unable to forget failed operations of resources\nsomething wrong", { "reason": "something wrong", "resource": None, "node": None, } ) def test_node(self): self.assert_message_from_info( "Unable to forget failed operations of resources\nsomething wrong", { "reason": "something wrong", "resource": None, "node": "N1", } ) def test_resource(self): self.assert_message_from_info( "Unable to forget failed operations of resource: R1\n" "something wrong" , { "reason": "something wrong", "resource": "R1", "node": None, } ) def test_resource_and_node(self): self.assert_message_from_info( "Unable to forget failed operations of resource: R1\n" "something wrong" , { "reason": "something wrong", "resource": "R1", "node": "N1", } ) class ResourceRefreshError(NameBuildTest): code = codes.RESOURCE_REFRESH_ERROR def test_minimal(self): self.assert_message_from_info( "Unable to delete history of 
resources\nsomething wrong", { "reason": "something wrong", "resource": None, "node": None, } ) def test_node(self): self.assert_message_from_info( "Unable to delete history of resources\nsomething wrong", { "reason": "something wrong", "resource": None, "node": "N1", } ) def test_resource(self): self.assert_message_from_info( "Unable to delete history of resource: R1\nsomething wrong", { "reason": "something wrong", "resource": "R1", "node": None, } ) def test_resource_and_node(self): self.assert_message_from_info( "Unable to delete history of resource: R1\nsomething wrong", { "reason": "something wrong", "resource": "R1", "node": "N1", } ) class ResourceRefreshTooTimeConsuming(NameBuildTest): code = codes.RESOURCE_REFRESH_TOO_TIME_CONSUMING def test_success(self): self.assert_message_from_info( "Deleting history of all resources on all nodes will execute more " "than 25 operations in the cluster, which may negatively " "impact the responsiveness of the cluster. Consider specifying " "resource and/or node" , { "threshold": 25, } ) class IdNotFound(NameBuildTest): code = codes.ID_NOT_FOUND def test_id(self): self.assert_message_from_info( "'ID' does not exist", { "id": "ID", "expected_types": [], "context_type": "", "context_id": "", } ) def test_id_and_type(self): self.assert_message_from_info( "clone/master/resource 'ID' does not exist", { "id": "ID", "expected_types": ["primitive", "master", "clone"], "context_type": "", "context_id": "", } ) def test_context(self): self.assert_message_from_info( "there is no 'ID' in the C_TYPE 'C_ID'", { "id": "ID", "expected_types": [], "context_type": "C_TYPE", "context_id": "C_ID", } ) def test_type_and_context(self): self.assert_message_from_info( "there is no ACL user 'ID' in the C_TYPE 'C_ID'", { "id": "ID", "expected_types": ["acl_target"], "context_type": "C_TYPE", "context_id": "C_ID", } ) class CibPushForcedFullDueToCrmFeatureSet(NameBuildTest): code = codes.CIB_PUSH_FORCED_FULL_DUE_TO_CRM_FEATURE_SET def test_success(self): self.assert_message_from_info( ( "Replacing the whole CIB instead of applying a diff, a race " "condition may happen if the CIB is pushed more than once " "simultaneously. To fix this, upgrade pacemaker to get " "crm_feature_set at least 3.0.9, current is 3.0.6." 
            ),
            {"required_set": "3.0.9", "current_set": "3.0.6"}
        )

pcs-0.9.164/pcs/cli/common/test/test_env_file.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.test.tools.pcs_unittest import TestCase
from pcs.test.tools.pcs_unittest import mock
from pcs.cli.common import env_file
from pcs.test.tools.misc import create_patcher, create_setup_patch_mixin
from pcs.lib.errors import ReportItem
from pcs.common import report_codes

patch_env_file = create_patcher(env_file)
SetupPatchMixin = create_setup_patch_mixin(patch_env_file)
FILE_PATH = "/path/to/local/file"

class Write(TestCase, SetupPatchMixin):
    def setUp(self):
        self.mock_open = mock.mock_open()
        self.mock_error = self.setup_patch("console_report.error")

    def assert_params_causes_calls(self, env_file_dict, calls, path=FILE_PATH):
        with patch_env_file("open", self.mock_open, create=True):
            env_file.write(env_file_dict, path)
        self.assertEqual(self.mock_open.mock_calls, calls)

    def test_successfully_write(self):
        self.assert_params_causes_calls(
            {"content": "filecontent"},
            [
                mock.call(FILE_PATH, "w"),
                mock.call().write("filecontent"),
                mock.call().close(),
            ]
        )

    def test_successfully_write_binary(self):
        self.assert_params_causes_calls(
            {"content": "filecontent", "is_binary": True},
            [
                mock.call(FILE_PATH, "wb"),
                mock.call().write("filecontent"),
                mock.call().close(),
            ]
        )

    def test_exit_when_cannot_open_file(self):
        self.mock_open.side_effect = EnvironmentError()
        self.mock_error.side_effect = SystemExit()
        self.assertRaises(
            SystemExit,
            lambda: env_file.write({"content": "filecontent"}, FILE_PATH)
        )

class Read(TestCase, SetupPatchMixin):
    def setUp(self):
        self.is_file = self.setup_patch('os.path.isfile')
        self.mock_open = mock.mock_open(read_data='filecontent')
        self.mock_error = self.setup_patch("console_report.error")

    def assert_returns_content(self, content, is_file):
        self.is_file.return_value = is_file
        with patch_env_file("open", self.mock_open, create=True):
            self.assertEqual(content, env_file.read(FILE_PATH))

    def test_successfully_read(self):
        self.assert_returns_content({"content": "filecontent"}, is_file=True)

    def test_successfully_return_empty_content(self):
        self.assert_returns_content({"content": None}, is_file=False)

    def test_exit_when_cannot_open_file(self):
        self.mock_open.side_effect = EnvironmentError()
        self.mock_error.side_effect = SystemExit()
        self.assertRaises(SystemExit, lambda: env_file.read(FILE_PATH))

class ProcessNoExistingFileExpectation(TestCase, SetupPatchMixin):
    def setUp(self):
        self.exists = self.setup_patch('os.path.exists')
        self.mock_error = self.setup_patch("console_report.error")

    def run_process(
        self, no_existing_file_expected, file_exists, overwrite=False
    ):
        self.exists.return_value = file_exists
        env_file.process_no_existing_file_expectation(
            "role",
            {
                "no_existing_file_expected": no_existing_file_expected,
                "can_overwrite_existing_file": overwrite,
            },
            FILE_PATH
        )

    def test_do_nothing_when_expectation_does_not_conflict(self):
        self.run_process(no_existing_file_expected=False, file_exists=True)
        self.run_process(no_existing_file_expected=False, file_exists=False)
        self.run_process(no_existing_file_expected=True, file_exists=False)

    def test_overwrite_permission_produce_console_warning(self):
        warn = self.setup_patch("console_report.warn")
        self.run_process(
            no_existing_file_expected=True, file_exists=True, overwrite=True
        )
        warn.assert_called_once_with("role /path/to/local/file already exists")
    def test_non_overwritable_conflict_exits(self):
        self.mock_error.side_effect = SystemExit()
        self.assertRaises(
            SystemExit,
            lambda: self.run_process(
                no_existing_file_expected=True, file_exists=True
            )
        )

class ReportMissing(TestCase):
    @patch_env_file("console_report.error")
    def test_report_to_console(self, error):
        env_file.report_missing("role", "path")
        error.assert_called_once_with("role 'path' does not exist")

class IsMissingReport(TestCase):
    def test_recognize_missing_report(self):
        self.assertTrue(env_file.is_missing_report(
            ReportItem.error(
                report_codes.FILE_DOES_NOT_EXIST,
                info={"file_role": "role"}
            ),
            "role"
        ))

pcs-0.9.164/pcs/cli/common/test/test_lib_wrapper.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.test.tools.pcs_unittest import TestCase
from pcs.cli.common.lib_wrapper import Library, bind
from pcs.test.tools.pcs_unittest import mock
from pcs.lib.errors import ReportItem
from pcs.lib.errors import LibraryEnvError

class LibraryWrapperTest(TestCase):
    def test_raises_for_bad_path(self):
        mock_middleware_factory = mock.MagicMock()
        lib = Library('env', mock_middleware_factory)
        self.assertRaises(Exception, lambda: lib.no_valid_library_part)

    @mock.patch('pcs.cli.common.lib_wrapper.constraint_order.create_with_set')
    @mock.patch('pcs.cli.common.lib_wrapper.cli_env_to_lib_env')
    def test_bind_to_library(self, mock_cli_env_to_lib_env, mock_order_set):
        lib_env = mock.MagicMock()
        lib_env.is_cib_live = True
        lib_env.is_corosync_conf_live = True
        mock_cli_env_to_lib_env.return_value = lib_env

        def dummy_middleware(next_in_line, env, *args, **kwargs):
            return next_in_line(env, *args, **kwargs)

        mock_middleware_factory = mock.MagicMock()
        mock_middleware_factory.cib = dummy_middleware
        mock_middleware_factory.corosync_conf_existing = dummy_middleware
        mock_env = mock.MagicMock()
        Library(mock_env, mock_middleware_factory).constraint_order.set(
            'first', second="third"
        )
        mock_order_set.assert_called_once_with(lib_env, "first", second="third")

class BindTest(TestCase):
    @mock.patch("pcs.cli.common.lib_wrapper.process_library_reports")
    def test_report_unprocessed_library_env_errors(self, mock_process_report):
        report1 = ReportItem.error("OTHER ERROR", info={})
        report2 = ReportItem.error("OTHER ERROR", info={})
        report3 = ReportItem.error("OTHER ERROR", info={})
        e = LibraryEnvError(report1, report2, report3)
        e.sign_processed(report2)
        mock_middleware = mock.Mock(side_effect=e)
        bound = bind(
            cli_env=None,
            run_with_middleware=mock_middleware,
            run_library_command=None
        )
        self.assertRaises(SystemExit, lambda: bound(cli_env=None))
        mock_process_report.assert_called_once_with([report1, report3])

pcs-0.9.164/pcs/cli/common/test/test_middleware.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.test.tools.pcs_unittest import TestCase
from pcs.cli.common import middleware

class MiddlewareBuildTest(TestCase):
    def test_run_middleware_correctly_chained(self):
        log = []

        def command(lib, argv, modifiers):
            log.append('command: {0}, {1}, {2}'.format(lib, argv, modifiers))

        def m1(next, lib, argv, modifiers):
            log.append(
                'm1 start: {0}, {1}, {2}'.format(lib, argv, modifiers)
            )
            next(lib, argv, modifiers)
            log.append('m1 done')

        def m2(next, lib, argv, modifiers):
            log.append(
                'm2 start: {0}, {1}, {2}'.format(lib, argv, modifiers)
            )
            next(lib, argv, modifiers)
            log.append('m2 done')

        run_with_middleware = middleware.build(m1, m2)
        run_with_middleware(command, "1", "2", "3")
        self.assertEqual(log, [
            'm1 start: 1, 2, 3',
            'm2 start: 1, 2, 3',
            'command: 1, 2, 3',
            'm2 done',
            'm1 done',
        ])
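The test above pins down the onion ordering of middleware.build: the first
middleware passed in becomes the outermost wrapper, and each middleware
receives the next callable in the chain as its first argument. As a minimal
sketch of the same idea outside the test suite (the names `timing` and `run`
are made up for this illustration; only the call shapes asserted by the test
are assumed):

import time
from pcs.cli.common import middleware

def timing(next, lib, argv, modifiers):
    # wraps everything that runs deeper in the chain
    start = time.time()
    next(lib, argv, modifiers)
    print("took {0:.3f}s".format(time.time() - start))

def run(lib, argv, modifiers):
    print("command body")

chain = middleware.build(timing)
chain(run, None, [], {})  # prints "command body", then the elapsed time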
pcs-0.9.164/pcs/cli/common/test/test_parse_args.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.test.tools.pcs_unittest import TestCase
from pcs.cli.common.parse_args import (
    group_by_keywords,
    parse_typed_arg,
    prepare_options,
    split_list,
    filter_out_non_option_negative_numbers,
    filter_out_options,
    is_num,
    is_negative_num,
    is_short_option_expecting_value,
    is_long_option_expecting_value,
    is_option_expecting_value,
    upgrade_args,
)
from pcs.cli.common.errors import CmdLineInputError

class PrepareOptionsTest(TestCase):
    def test_refuse_option_without_value(self):
        self.assertRaises(CmdLineInputError, lambda: prepare_options(['abc']))

    def test_prepare_option_dict_form_args(self):
        self.assertEqual({'a': 'b', 'c': 'd'}, prepare_options(['a=b', 'c=d']))

    def test_prepare_option_dict_with_empty_value(self):
        self.assertEqual({'a': ''}, prepare_options(['a=']))

    def test_refuse_option_without_key(self):
        self.assertRaises(CmdLineInputError, lambda: prepare_options(['=a']))

    def test_refuse_options_with_same_key_and_different_value(self):
        self.assertRaises(
            CmdLineInputError, lambda: prepare_options(['a=a', "a=b"])
        )

    def test_accept_options_with_same_key_and_same_value(self):
        self.assertEqual({'a': '1'}, prepare_options(["a=1", "a=1"]))

class SplitListTest(TestCase):
    def test_returns_list_with_original_when_separator_not_in_original(self):
        self.assertEqual([['a', 'b']], split_list(['a', 'b'], 'c'))

    def test_returns_split_list(self):
        self.assertEqual(
            [['a', 'b'], ['c', 'd']],
            split_list(['a', 'b', '|', 'c', 'd'], '|')
        )

    def test_behave_like_string_split_when_the_separator_edges(self):
        self.assertEqual(
            [[], ['a', 'b'], ['c', 'd'], []],
            split_list(['|', 'a', 'b', '|', 'c', 'd', "|"], '|')
        )

class SplitByKeywords(TestCase):
    def test_split_with_implicit_first_keyword(self):
        self.assertEqual(
            group_by_keywords(
                [0, "first", 1, 2, "second", 3],
                set(["first", "second"]),
                implicit_first_group_key="zero"
            ),
            {
                "zero": [0],
                "first": [1, 2],
                "second": [3],
            }
        )

    def test_split_without_implicit_keyword(self):
        self.assertEqual(
            group_by_keywords(
                ["first", 1, 2, "second", 3],
                set(["first", "second"]),
            ),
            {
                "first": [1, 2],
                "second": [3],
            }
        )

    def test_raises_when_args_do_not_start_with_keyword_nor_implicit(self):
        self.assertRaises(CmdLineInputError, lambda: group_by_keywords(
            [0, "first", 1, 2, "second", 3],
            set(["first", "second"]),
        ))

    def test_returns_dict_with_empty_lists_for_no_args(self):
        self.assertEqual(
            group_by_keywords([], set(["first", "second"])),
            {
                "first": [],
                "second": [],
            }
        )

    def test_returns_dict_with_empty_lists_for_no_args_implicit_case(self):
        self.assertEqual(
            group_by_keywords(
                [],
                set(["first", "second"]),
                implicit_first_group_key="zero",
            ),
            {
                "zero": [],
                "first": [],
                "second": [],
            }
        )

    def test_returns_dict_with_empty_lists_for_no_opts_and_only_found_kws(self):
        self.assertEqual(
            group_by_keywords(
                ["first"],
                set(["first", "second"]),
                only_found_keywords=True,
            ),
            {
                "first": [],
            }
        )

    def test_returns_empty_lists_no_opts_and_only_found_kws_with_grouping(self):
        self.assertEqual(
            group_by_keywords(
                ["second", 1, "second", "second", 2, 3],
                set(["first", "second"]),
                group_repeated_keywords=["second"],
                only_found_keywords=True,
            ),
            {
                "second": [
                    [1],
                    [],
                    [2, 3],
                ],
            }
        )

    def test_empty_repeatable(self):
        self.assertEqual(
            group_by_keywords(
                ["second"],
                set(["first", "second"]),
                group_repeated_keywords=["second"],
                only_found_keywords=True,
            ),
            {
                "second": [
                    [],
                ],
            }
        )

    def test_allow_keywords_repeating(self):
        self.assertEqual(
            group_by_keywords(
                ["first", 1, 2, "second", 3, "first", 4],
                set(["first", "second"]),
            ),
            {
                "first": [1, 2, 4],
                "second": [3],
            }
        )

    def test_can_disallow_keywords_repeating(self):
        self.assertRaises(CmdLineInputError, lambda: group_by_keywords(
            ["first", 1, 2, "second", 3, "first"],
            set(["first", "second"]),
            keyword_repeat_allowed=False,
        ))

    def test_group_repeating_keyword_occurrences(self):
        self.assertEqual(
            group_by_keywords(
                ["first", 1, 2, "second", 3, "first", 4],
                set(["first", "second"]),
                group_repeated_keywords=["first"]
            ),
            {
                "first": [[1, 2], [4]],
                "second": [3],
            }
        )

    def test_raises_on_group_repeated_keywords_inconsistency(self):
        self.assertRaises(AssertionError, lambda: group_by_keywords(
            [],
            set(["first", "second"]),
            group_repeated_keywords=["first", "third"],
            implicit_first_group_key="third"
        ))

    def test_implicit_first_kw_not_applied_in_the_middle(self):
        self.assertEqual(
            group_by_keywords(
                [1, 2, "first", 3, "zero", 4],
                set(["first"]),
                implicit_first_group_key="zero"
            ),
            {
                "zero": [1, 2],
                "first": [3, "zero", 4],
            }
        )

    def test_implicit_first_kw_applied_in_the_middle_when_is_in_kwds(self):
        self.assertEqual(
            group_by_keywords(
                [1, 2, "first", 3, "zero", 4],
                set(["first", "zero"]),
                implicit_first_group_key="zero"
            ),
            {
                "zero": [1, 2, 4],
                "first": [3],
            }
        )

class ParseTypedArg(TestCase):
    def assert_parse(self, arg, parsed):
        self.assertEqual(
            parse_typed_arg(arg, ["t0", "t1", "t2"], "t0"),
            parsed
        )

    def test_no_type(self):
        self.assert_parse("value", ("t0", "value"))

    def test_escape(self):
        self.assert_parse("%value", ("t0", "value"))

    def test_allowed_type(self):
        self.assert_parse("t1%value", ("t1", "value"))

    def test_bad_type(self):
        self.assertRaises(
            CmdLineInputError,
            lambda: self.assert_parse("tX%value", "aaa")
        )

    def test_escape_delimiter(self):
        self.assert_parse("%%value", ("t0", "%value"))
        self.assert_parse("%val%ue", ("t0", "val%ue"))

    def test_more_delimiters(self):
        self.assert_parse("t2%va%lu%e", ("t2", "va%lu%e"))
        self.assert_parse("t2%%va%lu%e", ("t2", "%va%lu%e"))

class FilterOutNonOptionNegativeNumbers(TestCase):
    def test_does_not_remove_anything_when_no_negative_numbers(self):
        args = ["first", "second"]
        self.assertEqual(args, filter_out_non_option_negative_numbers(args))

    def test_remove_negative_number(self):
        self.assertEqual(
            ["first"],
            filter_out_non_option_negative_numbers(["first", "-1"])
        )

    def test_remove_negative_infinity(self):
        self.assertEqual(
            ["first"],
            filter_out_non_option_negative_numbers(["first", "-INFINITY"])
        )
        self.assertEqual(
            ["first"],
            filter_out_non_option_negative_numbers(["first", "-infinity"])
        )

    def test_not_remove_follower_of_short_signed_option(self):
        self.assertEqual(
            ["first", "-f", "-1"],
            filter_out_non_option_negative_numbers(["first", "-f", "-1"])
        )

    def test_remove_follower_of_short_unsigned_option(self):
        self.assertEqual(
            ["first", "-h"],
            filter_out_non_option_negative_numbers(["first", "-h", "-1"])
        )

    def test_not_remove_follower_of_long_signed_option(self):
        self.assertEqual(
            ["first", "--name", "-1"],
            filter_out_non_option_negative_numbers(["first", "--name", "-1"])
        )

    def test_remove_follower_of_long_unsigned_option(self):
        self.assertEqual(
            ["first", "--master"],
            filter_out_non_option_negative_numbers(["first", "--master", "-1"])
        )
    def test_does_not_remove_dash(self):
        self.assertEqual(
            ["first", "-"],
            filter_out_non_option_negative_numbers(["first", "-"])
        )

    def test_does_not_remove_dash_dash(self):
        self.assertEqual(
            ["first", "--"],
            filter_out_non_option_negative_numbers(["first", "--"])
        )

class FilterOutOptions(TestCase):
    def test_does_not_remove_anything_when_no_options(self):
        args = ["first", "second"]
        self.assertEqual(args, filter_out_options(args))

    def test_remove_unsigned_short_option(self):
        self.assertEqual(
            ["first", "second"],
            filter_out_options(["first", "-h", "second"])
        )

    def test_remove_signed_short_option_with_value(self):
        self.assertEqual(
            ["first"],
            filter_out_options(["first", "-f", "second"])
        )

    def test_not_remove_value_of_signed_short_option_when_value_bundled(self):
        self.assertEqual(
            ["first", "second"],
            filter_out_options(["first", "-fvalue", "second"])
        )

    def test_remove_unsigned_long_option(self):
        self.assertEqual(
            ["first", "second"],
            filter_out_options(["first", "--master", "second"])
        )

    def test_remove_signed_long_option_with_value(self):
        self.assertEqual(
            ["first"],
            filter_out_options(["first", "--name", "second"])
        )

    def test_not_remove_value_of_signed_long_option_when_value_bundled(self):
        self.assertEqual(
            ["first", "second"],
            filter_out_options(["first", "--name=value", "second"])
        )

    def test_does_not_remove_dash(self):
        self.assertEqual(
            ["first", "-"],
            filter_out_options(["first", "-"])
        )

    def test_remove_dash_dash(self):
        self.assertEqual(
            ["first"],
            filter_out_options(["first", "--"])
        )

class IsNum(TestCase):
    def test_returns_true_on_number(self):
        self.assertTrue(is_num("10"))

    def test_returns_true_on_infinity(self):
        self.assertTrue(is_num("infinity"))

    def test_returns_false_on_no_number(self):
        self.assertFalse(is_num("no-num"))

class IsNegativeNum(TestCase):
    def test_returns_true_on_negative_number(self):
        self.assertTrue(is_negative_num("-10"))

    def test_returns_true_on_infinity(self):
        self.assertTrue(is_negative_num("-INFINITY"))

    def test_returns_false_on_positive_number(self):
        self.assertFalse(is_negative_num("10"))

    def test_returns_false_on_no_number(self):
        self.assertFalse(is_negative_num("no-num"))

class IsShortOptionExpectingValue(TestCase):
    def test_returns_true_on_short_option_with_value(self):
        self.assertTrue(is_short_option_expecting_value("-f"))

    def test_returns_false_on_short_option_without_value(self):
        self.assertFalse(is_short_option_expecting_value("-h"))

    def test_returns_false_on_unknown_short_option(self):
        self.assertFalse(is_short_option_expecting_value("-x"))

    def test_returns_false_on_dash(self):
        self.assertFalse(is_short_option_expecting_value("-"))

    def test_returns_false_on_option_without_dash(self):
        self.assertFalse(is_short_option_expecting_value("ff"))

    def test_returns_false_on_option_including_value(self):
        self.assertFalse(is_short_option_expecting_value("-fvalue"))

class IsLongOptionExpectingValue(TestCase):
    def test_returns_true_on_long_option_with_value(self):
        self.assertTrue(is_long_option_expecting_value("--name"))

    def test_returns_false_on_long_option_without_value(self):
        self.assertFalse(is_long_option_expecting_value("--master"))

    def test_returns_false_on_unknown_long_option(self):
        self.assertFalse(
            is_long_option_expecting_value("--not-specified-long-opt")
        )

    def test_returns_false_on_dash_dash(self):
        self.assertFalse(is_long_option_expecting_value("--"))

    def test_returns_false_on_option_without_dash_dash(self):
        self.assertFalse(is_long_option_expecting_value("-long-option"))

    def test_returns_false_on_option_including_value(self):
        self.assertFalse(is_long_option_expecting_value("--name=Name"))
class IsOptionExpectingValue(TestCase):
    def test_returns_true_on_short_option_with_value(self):
        self.assertTrue(is_option_expecting_value("-f"))

    def test_returns_true_on_long_option_with_value(self):
        self.assertTrue(is_option_expecting_value("--name"))

    def test_returns_false_on_short_option_without_value(self):
        self.assertFalse(is_option_expecting_value("-h"))

    def test_returns_false_on_long_option_without_value(self):
        self.assertFalse(is_option_expecting_value("--master"))

    def test_returns_false_on_unknown_short_option(self):
        self.assertFalse(is_option_expecting_value("-x"))

    def test_returns_false_on_unknown_long_option(self):
        self.assertFalse(is_option_expecting_value("--not-specified-long-opt"))

    def test_returns_false_on_dash(self):
        self.assertFalse(is_option_expecting_value("-"))

    def test_returns_false_on_dash_dash(self):
        self.assertFalse(is_option_expecting_value("--"))

    def test_returns_false_on_option_including_value(self):
        self.assertFalse(is_option_expecting_value("--name=Name"))
        self.assertFalse(is_option_expecting_value("-fvalue"))

class UpgradeArgs(TestCase):
    def test_returns_the_same_args_when_no_older_versions_detected(self):
        args = ["first", "second"]
        self.assertEqual(args, upgrade_args(args))

    def test_upgrade_2dash_cloneopt(self):
        self.assertEqual(
            ["first", "clone", "second"],
            upgrade_args(["first", "--cloneopt", "second"])
        )

    def test_upgrade_2dash_clone(self):
        self.assertEqual(
            ["first", "clone", "second"],
            upgrade_args(["first", "--clone", "second"])
        )

    def test_upgrade_2dash_cloneopt_with_value(self):
        self.assertEqual(
            ["first", "clone", "1", "second"],
            upgrade_args(["first", "--cloneopt=1", "second"])
        )

    def test_upgrade_2dash_master_in_resource_create(self):
        self.assertEqual(
            ["resource", "create", "master", "second"],
            upgrade_args(["resource", "create", "--master", "second"])
        )

    def test_dont_upgrade_2dash_master_outside_of_resource_create(self):
        self.assertEqual(
            ["first", "--master", "second"],
            upgrade_args(["first", "--master", "second"])
        )

    def test_upgrade_2dash_master_in_resource_create_with_complications(self):
        self.assertEqual(
            [
                "-f", "path/to/file", "resource", "-V", "create", "master",
                "second"
            ],
            upgrade_args([
                "-f", "path/to/file", "resource", "-V", "create", "--master",
                "second"
            ])
        )
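Taken together, these tests document the contract of group_by_keywords: argv
is chopped into groups keyed by the given keywords, implicit_first_group_key
collects leading arguments, and group_repeated_keywords turns a group into a
list of lists (one per occurrence of the keyword). A quick sketch of the
behavior pinned down above; the expected value is inferred from the tests,
not from running a live checkout:

from pcs.cli.common.parse_args import group_by_keywords

# "op" is repeatable; everything before the first keyword goes to "options"
groups = group_by_keywords(
    ["a=b", "op", "monitor", "interval=10", "op", "start", "timeout=20"],
    set(["op", "meta"]),
    implicit_first_group_key="options",
    group_repeated_keywords=["op"],
)
# expected, per the tests above:
# {
#     "options": ["a=b"],
#     "op": [["monitor", "interval=10"], ["start", "timeout=20"]],
#     "meta": [],
# }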
ReportItem("SOME", {}), " force at the end", ) ) def test_returns_default_message_when_conflict_key_appear(self): info = {"message": "MESSAGE"} self.assertEqual( "Unknown report: SOME info: {0}".format(str(info)), build_message_from_report( { "SOME": lambda info: "Info: {message} {extra}".format( message="ANY", **info ), }, ReportItem("SOME", info), ) ) def test_returns_default_message_when_key_disappear(self): self.assertEqual( "Unknown report: SOME info: {}" , build_message_from_report( { "SOME": lambda info: "Info: {message}".format(**info), }, ReportItem("SOME", {}), ) ) def test_callable_is_partial_object(self): code_builder_map = { "SOME": partial( lambda title, info: "{title}: {message}".format( title=title, **info ), "Info" ) } self.assertEqual( "Info: MESSAGE", build_message_from_report( code_builder_map, ReportItem("SOME", {"message": "MESSAGE"}) ) ) def test_callable_is_partial_object_with_force(self): code_builder_map = { "SOME": partial( lambda title, info, force_text: "{title}: {message} {force_text}".format( title=title, force_text=force_text, **info ), "Info" ) } self.assertEqual( "Info: MESSAGE force text", build_message_from_report( code_builder_map, ReportItem("SOME", {"message": "MESSAGE"}), "force text" ) ) pcs-0.9.164/pcs/cli/constraint/000077500000000000000000000000001326265502500162065ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/constraint/__init__.py000066400000000000000000000000001326265502500203050ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/constraint/command.py000066400000000000000000000050001326265502500201710ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.cli.constraint import parse_args, console_report from pcs.cli.common.console_report import indent def create_with_set(create_with_set_library_call, argv, modifiers): """ callable create_with_set_library_call create constraint with set list argv part of comandline args see usage for "constraint (colocation|resource|ticket) set" dict like object modifiers can contain "force" allows resource in clone/master and constraint duplicity "autocorrect" allows correct resource to its clone/master parent """ resource_set_list, constraint_options = parse_args.prepare_set_args(argv) create_with_set_library_call( resource_set_list, constraint_options, can_repair_to_clone=modifiers["autocorrect"], resource_in_clone_alowed=modifiers["force"], duplication_alowed=modifiers["force"], ) def show_constraints_with_set(constraint_list, show_detail, indent_step=2): """ return list of console lines with info about constraints list of dict constraint_list see constraint in pcs/lib/exchange_formats.md bool with_id have to show id with options int indent_step is count of spaces for indenting """ return ["Resource Sets:"] + indent( [ console_report.constraint_with_sets(constraint, with_id=show_detail) for constraint in constraint_list ], indent_step=indent_step ) def show(caption, load_constraints, format_options, modifiers): """ load constraints and return console lines list with info about constraints string caption for example "Ticket Constraints:" callable load_constraints which returns desired constraints as dictionary like {"plain": [], "with_resource_sets": []} callable format_options takes dict of options and show_detail flag (bool) and returns string with constraint formated for commandline modifiers dict like object with command modifiers """ show_detail = modifiers["full"] constraints = load_constraints() line_list = [caption] line_list.extend([ " " + 
    line_list.extend([
        " " + format_options(constraint_options_dict, show_detail)
        for constraint_options_dict in constraints["plain"]
    ])

    if constraints["with_resource_sets"]:
        line_list.extend(
            indent(show_constraints_with_set(
                constraints["with_resource_sets"],
                show_detail
            ))
        )
    return line_list

pcs-0.9.164/pcs/cli/constraint/console_report.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

def constraint_plain(constraint_type, constraint_info, with_id=False):
    return constraint_type + " ".join(
        prepare_options(constraint_info["options"], with_id)
    )

def resource_sets(set_list, with_id=True):
    """
    list of dict set_list see resource set in pcs/lib/exchange_formats.md
    """
    report = []
    for resource_set in set_list:
        report.extend(
            ["set"] + resource_set["ids"] + options(resource_set["options"])
        )
        if with_id:
            report.append(id_from_options(resource_set["options"]))
    return report

def options(options_dict):
    return [
        key + "=" + value
        for key, value in sorted(options_dict.items())
        if key != "id"
    ]

def id_from_options(options_dict):
    return "(id:" + options_dict.get("id", "") + ")"

def constraint_with_sets(constraint_info, with_id=True):
    """
    dict constraint_info see constraint in pcs/lib/exchange_formats.md
    bool with_id whether to show the id with the options
    """
    options_dict = options(constraint_info["options"])
    return " ".join(
        resource_sets(constraint_info["resource_sets"], with_id)
        + (["setoptions"] + options_dict if options_dict else [])
        + ([id_from_options(constraint_info["options"])] if with_id else [])
    )

def prepare_options(options_dict, with_id=True):
    return (
        options(options_dict)
        + ([id_from_options(options_dict)] if with_id else [])
    )

pcs-0.9.164/pcs/cli/constraint/parse_args.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.cli.common import parse_args
from pcs.cli.common.errors import CmdLineInputError

def prepare_resource_sets(cmdline_args):
    return [
        {
            "ids": [id for id in args if "=" not in id],
            "options": parse_args.prepare_options(
                [opt for opt in args if "=" in opt]
            ),
        }
        for args in parse_args.split_list(cmdline_args, "set")
    ]

def prepare_set_args(argv):
    if argv.count("setoptions") > 1:
        raise CmdLineInputError(
            "Keyword 'setoptions' may be mentioned at most once"
        )

    resource_set_args, constraint_options_args = (
        parse_args.split_list(argv, "setoptions")
        if "setoptions" in argv
        else (argv, [])
    )

    if not resource_set_args:
        raise CmdLineInputError()

    resource_set_list = prepare_resource_sets(resource_set_args)
    if (
        not resource_set_list
        or
        not all(resource_set["ids"] for resource_set in resource_set_list)
    ):
        raise CmdLineInputError()

    constraint_options = {}
    if constraint_options_args:
        constraint_options = parse_args.prepare_options(constraint_options_args)

    return (resource_set_list, constraint_options)
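The two helpers above split a "... set ... setoptions ..." command line into
resource sets and global constraint options. A small sketch of what they
produce for a typical invocation; the expected values are read off the
parsing rules above rather than from a live run:

from pcs.cli.constraint.parse_args import prepare_set_args

resource_sets, constraint_options = prepare_set_args(
    ["rA", "rB", "sequential=true", "set", "rC", "setoptions", "score=100"]
)
# per the code above:
# resource_sets == [
#     {"ids": ["rA", "rB"], "options": {"sequential": "true"}},
#     {"ids": ["rC"], "options": {}},
# ]
# constraint_options == {"score": "100"}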
{"ids": ["a", "b"], "options": {"c": "d", "e": "f"}}, {"ids": ["g", "h"], "options": {"i": "j", "k": "l"}}, ], "options": {"m": "n", "o":"p"} } def fixture_constraint_console(): return " set a b c=d e=f (id:) set g h i=j k=l (id:) setoptions m=n o=p (id:)" class ShowConstraintsWithSetTest(TestCase): def test_return_line_list(self): self.assertEqual( [ "Resource Sets:", " set a b c=d e=f set g h i=j k=l setoptions m=n o=p", ], command.show_constraints_with_set( [fixture_constraint()], show_detail=False ) ) def test_return_line_list_with_id(self): self.assertEqual( [ "Resource Sets:", fixture_constraint_console(), ], command.show_constraints_with_set( [fixture_constraint()], show_detail=True ) ) class ShowTest(TestCase): def test_show_only_caption_when_no_constraint_loaded(self): self.assertEqual(["caption"], command.show( "caption", load_constraints=lambda: {"plain": [], "with_resource_sets": []}, format_options=lambda: None, modifiers={"full": False} )) def test_show_constraints_full(self): load_constraints = mock.Mock() load_constraints.return_value = { "plain": [{"options": {"id": "plain_id"}}], "with_resource_sets": [fixture_constraint()] } format_options = mock.Mock() format_options.return_value = "plain constraint listing" self.assertEqual( [ "caption", " plain constraint listing", " Resource Sets:", " "+fixture_constraint_console(), ], command.show( "caption", load_constraints, format_options, {"full": True} ) ) pcs-0.9.164/pcs/cli/constraint/test/test_console_report.py000066400000000000000000000044721326265502500236420ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.cli.constraint import console_report class OptionsTest(TestCase): def test_get_console_options_from_lib_options(self): self.assertEqual( ["a=b", "c=d"], console_report.options({"c": "d", "a": "b", "id":"some_id"}) ) class IdFromOptionsTest(TestCase): def test_get_id_from_options(self): self.assertEqual( '(id:some_id)', console_report.id_from_options({"c": "d", "a": "b", "id":"some_id"}) ) class PrepareOptionsTest(TestCase): def test_prepare_options_with_id(self): self.assertEqual( ["a=b", "c=d", '(id:some_id)'], console_report.prepare_options({"c": "d", "a": "b", "id":"some_id"}) ) def test_prepare_options_without_id(self): self.assertEqual( ["a=b", "c=d"], console_report.prepare_options( {"c": "d", "a": "b", "id":"some_id"}, with_id=False ) ) class ResourceSetsTest(TestCase): def test_prepare_resource_sets_without_id(self): self.assertEqual( ['set', 'a', 'b', 'c=d', 'e=f', 'set', 'g', 'h', 'i=j', 'k=l'], console_report.resource_sets( [ { "ids": ["a", "b"], "options": {"c": "d", "e": "f", "id": "some_id"}, }, { "ids": ["g", "h"], "options": {"i": "j", "k": "l", "id": "some_id_2"}, }, ], with_id=False ) ) def test_prepare_resource_sets_with_id(self): self.assertEqual( [ 'set', 'a', 'b', 'c=d', 'e=f', '(id:some_id)', 'set', 'g', 'h', 'i=j', 'k=l', '(id:some_id_2)' ], console_report.resource_sets([ { "ids": ["a", "b"], "options": {"c": "d", "e": "f", "id": "some_id"}, }, { "ids": ["g", "h"], "options": {"i": "j", "k": "l", "id": "some_id_2"}, }, ]) ) pcs-0.9.164/pcs/cli/constraint/test/test_parse_args.py000066400000000000000000000056041326265502500227310ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.cli.common.errors import CmdLineInputError from pcs.cli.constraint.parse_args import prepare_set_args, 
pcs-0.9.164/pcs/cli/constraint/test/test_parse_args.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.test.tools.pcs_unittest import TestCase
from pcs.cli.common.errors import CmdLineInputError
from pcs.cli.constraint.parse_args import prepare_set_args, prepare_resource_sets
from pcs.test.tools.pcs_unittest import mock

@mock.patch("pcs.cli.common.parse_args.prepare_options")
class PrepareResourceSetsTest(TestCase):
    def test_prepare_resource_sets(self, options):
        opts = [{"id": "1"}, {"id": "2", "sequential": "true"}]
        options.side_effect = opts
        self.assertEqual(
            [
                {"ids": ["resA", "resB"], "options": opts[0]},
                {"ids": ["resC"], "options": opts[1]},
            ],
            prepare_resource_sets([
                "resA", "resB", "id=resource-set-1",
                "set", "resC", "id=resource-set-2", "sequential=true",
            ])
        )

    def test_has_no_responsibility_to_assess_the_content(self, options):
        options.return_value = {}
        self.assertEqual([{"ids": [], "options": {}}], prepare_resource_sets([]))

@mock.patch("pcs.cli.common.parse_args.prepare_options")
@mock.patch("pcs.cli.constraint.parse_args.prepare_resource_sets")
class PrepareSetArgvTest(TestCase):
    def test_return_tuple_of_given_resource_set_list_and_options(
        self, res_sets, options
    ):
        res_sets.return_value = [{"ids": "A"}]
        options.return_value = 'O'
        self.assertEqual(
            ([{"ids": "A"}], "O"),
            prepare_set_args(['A', 'b=c', "setoptions", "d=e"])
        )

    def test_right_distribute_full_args(self, res_sets, options):
        prepare_set_args(['A', 'b=c', "setoptions", "d=e"])
        res_sets.assert_called_once_with(['A', 'b=c'])
        options.assert_called_once_with(["d=e"])

    def test_right_distribute_args_without_options(self, res_sets, options):
        prepare_set_args(['A', 'b=c'])
        res_sets.assert_called_once_with(['A', 'b=c'])
        options.assert_not_called()

    def test_right_distribute_args_with_empty_options(self, res_sets, options):
        prepare_set_args(['A', 'b=c', 'setoptions'])
        res_sets.assert_called_once_with(['A', 'b=c'])
        options.assert_not_called()

    def test_raises_when_no_set_specified(self, res_sets, options):
        self.assertRaises(CmdLineInputError, lambda: prepare_set_args([]))
        res_sets.assert_not_called()

    def test_raises_when_no_resource_in_set(self, res_sets, options):
        res_sets.return_value = [{"ids": [], "options": {"b": "c"}}]
        self.assertRaises(CmdLineInputError, lambda: prepare_set_args(["b=c"]))
        res_sets.assert_called_once_with(["b=c"])

    def test_raises_when_setoption_more_than_once(self, res_sets, options):
        self.assertRaises(CmdLineInputError, lambda: prepare_set_args(
            ['A', 'b=c', 'setoptions', "c=d", "setoptions", "e=f"]
        ))

pcs-0.9.164/pcs/cli/constraint_all/
pcs-0.9.164/pcs/cli/constraint_all/__init__.py
pcs-0.9.164/pcs/cli/constraint_all/console_report.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.cli.constraint.console_report import (
    constraint_plain as constraint_plain_default,
    constraint_with_sets,
)
from pcs.cli.constraint_colocation.console_report import (
    constraint_plain as colocation_plain
)
from pcs.cli.constraint_order.console_report import (
    constraint_plain as order_plain
)
from pcs.cli.constraint_ticket.console_report import (
    constraint_plain as ticket_plain
)
from pcs.common import report_codes as codes

def constraint(constraint_type, constraint_info, with_id=True):
    """
    dict constraint_info see constraint in pcs/lib/exchange_formats.md
    bool with_id whether to show the id with the options
    """
    if "resource_sets" in constraint_info:
        return constraint_with_sets(constraint_info, with_id)
    return constraint_plain(constraint_type, constraint_info, with_id)
def constraint_plain(constraint_type, options_dict, with_id=False):
    """return console shape for any constraint_type of plain constraint"""
    type_report_map = {
        "rsc_colocation": colocation_plain,
        "rsc_order": order_plain,
        "rsc_ticket": ticket_plain,
    }

    if constraint_type not in type_report_map:
        return constraint_plain_default(constraint_type, options_dict, with_id)

    return type_report_map[constraint_type](options_dict, with_id)

# Each value (a callable taking report_item.info) returns a message.
# Force text will be appended if necessary.
# If it is necessary to put the force text inside the string then the callable
# must take the force_text parameter.
CODE_TO_MESSAGE_BUILDER_MAP = {
    codes.DUPLICATE_CONSTRAINTS_EXIST: lambda info, force_text:
        "duplicate constraint already exists{0}\n".format(force_text)
        +
        "\n".join([
            " " + constraint(info["constraint_type"], constraint_info)
            for constraint_info in info["constraint_info_list"]
        ])
    ,
    codes.RESOURCE_FOR_CONSTRAINT_IS_MULTIINSTANCE: lambda info: (
        "{resource_id} is a {mode} resource, you should use the"
        " {parent_type} id: {parent_id} when adding constraints"
    ).format(
        mode="master/slave" if info["parent_type"] == "master"
            else info["parent_type"],
        **info
    ),
}

pcs-0.9.164/pcs/cli/constraint_all/test/
pcs-0.9.164/pcs/cli/constraint_all/test/__init__.py
pcs-0.9.164/pcs/cli/constraint_all/test/test_console_report.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.test.tools.pcs_unittest import TestCase
from pcs.test.tools.pcs_unittest import mock
from pcs.cli.constraint_all import console_report
from pcs.common import report_codes as codes

class ConstraintTest(TestCase):
    @mock.patch("pcs.cli.constraint_all.console_report.constraint_plain")
    def test_can_display_plain_constraint(self, mock_constraint_plain):
        mock_constraint_plain.return_value = "plain"
        self.assertEqual(
            'plain',
            console_report.constraint(
                "rsc_ticket",
                "constraint_in_library_representation"
            )
        )
        mock_constraint_plain.assert_called_once_with(
            "rsc_ticket",
            "constraint_in_library_representation",
            True
        )

    @mock.patch("pcs.cli.constraint_all.console_report.constraint_with_sets")
    def test_can_display_constraint_with_set(self, mock_constraint_with_sets):
        mock_constraint_with_sets.return_value = "with_set"
        self.assertEqual(
            'with_set',
            console_report.constraint(
                "rsc_ticket",
                {"resource_sets": "some_resource_sets", "options": {"a": "b"}},
                with_id=False
            )
        )
        mock_constraint_with_sets.assert_called_once_with(
            {"resource_sets": "some_resource_sets", "options": {"a": "b"}},
            False
        )

class ConstraintPlainTest(TestCase):
    @mock.patch("pcs.cli.constraint_all.console_report.colocation_plain")
    def test_choose_right_reporter(self, mock_colocation_plain):
        mock_colocation_plain.return_value = "some constraint formated"
        self.assertEqual(
            "some constraint formated",
            console_report.constraint_plain(
                "rsc_colocation",
                "constraint_in_library_representation",
                with_id=True
            )
        )
        mock_colocation_plain.assert_called_once_with(
            "constraint_in_library_representation",
            True
        )

class DuplicateConstraintsReportTest(TestCase):
    def setUp(self):
        self.build = console_report.CODE_TO_MESSAGE_BUILDER_MAP[
            codes.DUPLICATE_CONSTRAINTS_EXIST
        ]

    @mock.patch("pcs.cli.constraint_all.console_report.constraint")
    def test_translate_from_report_info(self, mock_constraint):
        mock_constraint.return_value = "constraint info"
        self.assertEqual(
            "\n".join([
                "duplicate constraint already exists force text",
                " constraint info"
            ]),
            self.build(
                {
                    "constraint_info_list": [{"options": {"a": "b"}}],
                    "constraint_type": "rsc_some"
                },
                force_text=" force text"
            )
        )
        mock_constraint.assert_called_once_with(
            "rsc_some",
            {"options": {"a": "b"}}
        )

class ResourceForConstraintIsMultiinstanceTest(TestCase):
    def setUp(self):
        self.build = console_report.CODE_TO_MESSAGE_BUILDER_MAP[
            codes.RESOURCE_FOR_CONSTRAINT_IS_MULTIINSTANCE
        ]

    def test_build_message_for_master(self):
        self.assertEqual(
            "RESOURCE_PRIMITIVE is a master/slave resource, you should use the"
            " master id: RESOURCE_MASTER when adding constraints",
            self.build({
                "resource_id": "RESOURCE_PRIMITIVE",
                "parent_type": "master",
                "parent_id": "RESOURCE_MASTER"
            })
        )

    def test_build_message_for_clone(self):
        self.assertEqual(
            "RESOURCE_PRIMITIVE is a clone resource, you should use the"
            " clone id: RESOURCE_CLONE when adding constraints",
            self.build({
                "resource_id": "RESOURCE_PRIMITIVE",
                "parent_type": "clone",
                "parent_id": "RESOURCE_CLONE"
            })
        )

    def test_build_message_for_bundle(self):
        self.assertEqual(
            "RESOURCE_PRIMITIVE is a bundle resource, you should use the"
            " bundle id: RESOURCE_CLONE when adding constraints",
            self.build({
                "resource_id": "RESOURCE_PRIMITIVE",
                "parent_type": "bundle",
                "parent_id": "RESOURCE_CLONE"
            })
        )

pcs-0.9.164/pcs/cli/constraint_colocation/
pcs-0.9.164/pcs/cli/constraint_colocation/__init__.py
pcs-0.9.164/pcs/cli/constraint_colocation/command.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.cli.constraint import command
from pcs.cli.constraint_colocation import console_report

def create_with_set(lib, argv, modifiers):
    """
    create colocation constraint with resource set
    object lib exposes library
    list argv see usage for "constraint colocation set"
    dict like object modifiers can contain
        "force" allows resource in clone/master and constraint duplicity
        "autocorrect" allows correcting a resource to its clone/master parent
    """
    command.create_with_set(
        lib.constraint_colocation.set,
        argv,
        modifiers,
    )

def show(lib, argv, modifiers):
    """
    show all colocation constraints
    object lib exposes library
    list argv see usage for "constraint colocation show"
    dict like object modifiers can contain "full"
    """
    print("\n".join(command.show(
        "Colocation Constraints:",
        lib.constraint_colocation.show,
        console_report.constraint_plain,
        modifiers,
    )))

pcs-0.9.164/pcs/cli/constraint_colocation/console_report.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

def constraint_plain(constraint_info, with_id=False):
    """
    dict constraint_info see constraint in pcs/lib/exchange_formats.md
    bool with_id whether to show the id with the options
    """
    options_dict = constraint_info["options"]
    co_resource1 = options_dict.get("rsc", "")
    co_resource2 = options_dict.get("with-rsc", "")
    co_id = options_dict.get("id", "")
    co_score = options_dict.get("score", "")
    score_text = "(score:" + co_score + ")"
    console_option_list = [
        "(%s:%s)" % (option[0], option[1])
        for option in sorted(options_dict.items())
        if option[0] not in ("rsc", "with-rsc", "id", "score")
    ]
    if with_id:
        console_option_list.append("(id:%s)" % co_id)
    return " ".join(
        [co_resource1, "with", co_resource2, score_text] + console_option_list
    )

pcs-0.9.164/pcs/cli/constraint_order/
pcs-0.9.164/pcs/cli/constraint_order/__init__.py
pcs-0.9.164/pcs/cli/constraint_order/command.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.cli.constraint import command
from pcs.cli.constraint_order import console_report

def create_with_set(lib, argv, modifiers):
    """
    create order constraint with resource set
    object lib exposes library
    list argv see usage for "constraint order set"
    dict like object modifiers can contain
        "force" allows resource in clone/master and constraint duplicity
        "autocorrect" allows correcting a resource to its clone/master parent
    """
    command.create_with_set(
        lib.constraint_order.set,
        argv,
        modifiers
    )

def show(lib, argv, modifiers):
    """
    show all order constraints
    object lib exposes library
    list argv see usage for "constraint order show"
    dict like object modifiers can contain "full"
    """
    print("\n".join(command.show(
        "Ordering Constraints:",
        lib.constraint_order.show,
        console_report.constraint_plain,
        modifiers,
    )))

pcs-0.9.164/pcs/cli/constraint_order/console_report.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.lib.pacemaker.values import is_true

def constraint_plain(constraint_info, with_id=False):
    """
    dict constraint_info see constraint in pcs/lib/exchange_formats.md
    bool with_id whether to show the id with the options
    """
    options = constraint_info["options"]
    oc_resource1 = options.get("first", "")
    oc_resource2 = options.get("then", "")
    first_action = options.get("first-action", "")
    then_action = options.get("then-action", "")
    oc_id = options.get("id", "")
    oc_score = options.get("score", "")
    oc_kind = options.get("kind", "")
    oc_sym = ""
    oc_id_out = ""
    oc_options = ""
    if (
        "symmetrical" in options
        and
        not is_true(options.get("symmetrical", "false"))
    ):
        oc_sym = "(non-symmetrical)"
    if oc_kind != "":
        score_text = "(kind:" + oc_kind + ")"
    elif oc_kind == "" and oc_score == "":
        score_text = "(kind:Mandatory)"
    else:
        score_text = "(score:" + oc_score + ")"
    if with_id:
        oc_id_out = "(id:" + oc_id + ")"
    already_processed_options = (
        "first", "then", "first-action", "then-action", "id", "score", "kind",
        "symmetrical"
    )
    oc_options = " ".join([
        "{0}={1}".format(name, value)
        for name, value in options.items()
        if name not in already_processed_options
    ])
    if oc_options:
        oc_options = "(Options: " + oc_options + ")"
    return " ".join([arg for arg in [
        first_action, oc_resource1, "then", then_action, oc_resource2,
        score_text, oc_sym, oc_options, oc_id_out
    ] if arg])

pcs-0.9.164/pcs/cli/constraint_ticket/
pcs-0.9.164/pcs/cli/constraint_ticket/__init__.py
pcs-0.9.164/pcs/cli/constraint_ticket/command.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.cli.common.errors import CmdLineInputError
from pcs.cli.constraint import command
from pcs.cli.constraint_ticket import parse_args, console_report
from pcs.cli.common.console_report import error
def create_with_set(lib, argv, modifiers):
    """
    create ticket constraint with resource set
    object lib exposes library
    list argv see usage for "constraint ticket set"
    dict like object modifiers can contain
        "force" allows resource in clone/master and constraint duplicity
        "autocorrect" allows correcting a resource to its clone/master parent
    """
    command.create_with_set(
        lib.constraint_ticket.set,
        argv,
        modifiers,
    )

def add(lib, argv, modifiers):
    """
    create ticket constraint
    object lib exposes library
    list argv see usage for "constraint ticket add"
    dict like object modifiers can contain
        "force" allows resource in clone/master and constraint duplicity
        "autocorrect" allows correcting a resource to its clone/master parent
    """
    ticket, resource_id, resource_role, options = parse_args.parse_add(argv)
    if "rsc-role" in options:
        raise CmdLineInputError(
            "Resource role must not be specified among options"
            ", specify it before resource id"
        )

    if resource_role:
        options["rsc-role"] = resource_role

    lib.constraint_ticket.add(
        ticket,
        resource_id,
        options,
        autocorrection_allowed=modifiers["autocorrect"],
        resource_in_clone_alowed=modifiers["force"],
        duplication_alowed=modifiers["force"],
    )

def remove(lib, argv, modifiers):
    if len(argv) != 2:
        raise CmdLineInputError()

    ticket, resource_id = argv
    if not lib.constraint_ticket.remove(ticket, resource_id):
        raise error("no matching ticket constraint found")

def show(lib, argv, modifiers):
    """
    show all ticket constraints
    object lib exposes library
    list argv see usage for "constraint ticket show"
    dict like object modifiers can contain "full"
    """
    print("\n".join(command.show(
        "Ticket Constraints:",
        lib.constraint_ticket.show,
        console_report.constraint_plain,
        modifiers,
    )))

pcs-0.9.164/pcs/cli/constraint_ticket/console_report.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.cli.constraint.console_report import prepare_options

def constraint_plain(constraint_info, with_id=False):
    """
    dict constraint_info see constraint in pcs/lib/exchange_formats.md
    bool with_id whether to show the id with the options
    """
    options = constraint_info["options"]
    role = options.get("rsc-role", "")
    role_prefix = "{0} ".format(role) if role else ""

    return role_prefix + " ".join(
        [options.get("rsc", "")]
        + prepare_options(
            dict(
                (name, value)
                for name, value in options.items()
                if name not in ["rsc-role", "rsc"]
            ),
            with_id
        )
    )

pcs-0.9.164/pcs/cli/constraint_ticket/parse_args.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.cli.common import parse_args
from pcs.cli.common.errors import CmdLineInputError

def separate_tail_option_candidates(arg_list):
    for i, arg in enumerate(arg_list):
        if "=" in arg:
            return arg_list[:i], arg_list[i:]

    return arg_list, []

def parse_add(arg_list):
    info, option_candidates = separate_tail_option_candidates(arg_list)

    if not info:
        raise CmdLineInputError("Ticket not specified")

    ticket, resource_specification = info[0], info[1:]

    if len(resource_specification) not in (1, 2):
        raise CmdLineInputError(
            "invalid resource specification: '{0}'"
            .format(" ".join(resource_specification))
        )

    if len(resource_specification) == 2:
        resource_role, resource_id = resource_specification
    else:
        resource_role = ""
        resource_id = resource_specification[0]

    return (
        ticket,
        resource_id,
        resource_role,
        parse_args.prepare_options(option_candidates)
    )
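parse_add above carves "constraint ticket add" argv into the ticket, an
optional role, the resource id, and trailing key=value options, exactly as
exercised by the tests that follow. A quick sketch of both accepted shapes;
the expected tuples are taken from the parsing rules above:

from pcs.cli.constraint_ticket.parse_args import parse_add

# without a role
parse_add(["T1", "resourceA", "loss-policy=fence"])
# -> ("T1", "resourceA", "", {"loss-policy": "fence"})

# with a role before the resource id
parse_add(["T1", "master", "resourceA", "loss-policy=fence"])
# -> ("T1", "resourceA", "master", {"loss-policy": "fence"})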
pcs-0.9.164/pcs/cli/constraint_ticket/test/
pcs-0.9.164/pcs/cli/constraint_ticket/test/__init__.py
pcs-0.9.164/pcs/cli/constraint_ticket/test/test_command.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.test.tools.pcs_unittest import TestCase
from pcs.test.tools.pcs_unittest import mock
from pcs.cli.common.errors import CmdLineInputError
from pcs.cli.constraint_ticket import command

class AddTest(TestCase):
    @mock.patch("pcs.cli.constraint_ticket.command.parse_args.parse_add")
    def test_call_library_with_correct_attrs(self, mock_parse_add):
        mock_parse_add.return_value = (
            "ticket", "resource_id", "", {"loss-policy": "fence"}
        )
        lib = mock.MagicMock()
        lib.constraint_ticket = mock.MagicMock()
        lib.constraint_ticket.add = mock.MagicMock()

        command.add(lib, ["argv"], {"force": True, "autocorrect": True})

        mock_parse_add.assert_called_once_with(["argv"])
        lib.constraint_ticket.add.assert_called_once_with(
            "ticket",
            "resource_id",
            {"loss-policy": "fence"},
            autocorrection_allowed=True,
            resource_in_clone_alowed=True,
            duplication_alowed=True,
        )

    @mock.patch("pcs.cli.constraint_ticket.command.parse_args.parse_add")
    def test_refuse_resource_role_in_options(self, mock_parse_add):
        mock_parse_add.return_value = (
            "ticket", "resource_id", "resource_role", {"rsc-role": "master"}
        )
        lib = None
        self.assertRaises(
            CmdLineInputError,
            lambda: command.add(
                lib, ["argv"], {"force": True, "autocorrect": True},
            )
        )

    @mock.patch("pcs.cli.constraint_ticket.command.parse_args.parse_add")
    def test_put_resource_role_to_options_for_library(self, mock_parse_add):
        mock_parse_add.return_value = (
            "ticket", "resource_id", "resource_role", {"loss-policy": "fence"}
        )
        lib = mock.MagicMock()
        lib.constraint_ticket = mock.MagicMock()
        lib.constraint_ticket.add = mock.MagicMock()

        command.add(lib, ["argv"], {"force": True, "autocorrect": True})

        mock_parse_add.assert_called_once_with(["argv"])
        lib.constraint_ticket.add.assert_called_once_with(
            "ticket",
            "resource_id",
            {"loss-policy": "fence", "rsc-role": "resource_role"},
            autocorrection_allowed=True,
            resource_in_clone_alowed=True,
            duplication_alowed=True,
        )

class RemoveTest(TestCase):
    def test_refuse_args_count(self):
        self.assertRaises(CmdLineInputError, lambda: command.remove(
            mock.MagicMock(),
            ["TICKET"],
            {},
        ))
        self.assertRaises(CmdLineInputError, lambda: command.remove(
            mock.MagicMock(),
            ["TICKET", "RESOURCE", "SOMETHING_ELSE"],
            {},
        ))

    def test_call_library_remove_with_correct_attrs(self):
        lib = mock.MagicMock(
            constraint_ticket=mock.MagicMock(remove=mock.Mock())
        )
        command.remove(lib, ["TICKET", "RESOURCE"], {})
        lib.constraint_ticket.remove.assert_called_once_with(
            "TICKET",
            "RESOURCE",
        )

pcs-0.9.164/pcs/cli/constraint_ticket/test/test_console_report.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.test.tools.pcs_unittest import TestCase
from pcs.cli.constraint_ticket import console_report

class ConstraintPlainTest(TestCase):
    def test_prepare_report(self):
        self.assertEqual(
            "Master resourceA (id:some_id)",
            console_report.constraint_plain(
                {"options": {
                    "rsc-role": "Master",
                    "rsc": "resourceA",
                    "id": "some_id"
                }},
                with_id=True
            )
        )

    def test_prepare_report_without_role(self):
(id:some_id)", console_report.constraint_plain( {"options": { "rsc": "resourceA", "id": "some_id" }}, with_id=True ) ) pcs-0.9.164/pcs/cli/constraint_ticket/test/test_parse_args.py000066400000000000000000000046431326265502500242760ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.cli.constraint_ticket import parse_args from pcs.cli.common.errors import CmdLineInputError class ParseAddTest(TestCase): def test_parse_add_args(self): self.assertEqual( parse_args.parse_add( ["T", "resource1", "ticket=T", "loss-policy=fence"] ), ( "T", "resource1", "", { "ticket": "T", "loss-policy": "fence", } ) ) def test_parse_add_args_with_resource_role(self): self.assertEqual( parse_args.parse_add( ["T", "master", "resource1", "ticket=T", "loss-policy=fence"] ), ( "T", "resource1", "master", { "ticket": "T", "loss-policy": "fence", } ) ) def test_raises_when_invalid_resource_specification(self): self.assertRaises( CmdLineInputError, lambda: parse_args.parse_add( ["T", "master", "resource1", "something_else"] ) ) def test_raises_when_ticket_and_resource_not_specified(self): self.assertRaises( CmdLineInputError, lambda: parse_args.parse_add( ["loss-policy=fence"] ) ) def test_raises_when_resource_not_specified(self): self.assertRaises( CmdLineInputError, lambda: parse_args.parse_add( ["T", "loss-policy=fence"] ) ) class SeparateTailOptionCandidatesTest(TestCase): def test_separate_when_both_parts_there(self): self.assertEqual( (["a", "b"], ["c=d", "e=f"]), parse_args.separate_tail_option_candidates(["a", "b", "c=d", "e=f"]) ) def test_returns_empty_head_when_options_there_only(self): self.assertEqual( ([], ["c=d", "e=f"]), parse_args.separate_tail_option_candidates(["c=d", "e=f"]) ) def test_returns_empty_tail_when_no_options_there(self): self.assertEqual( (["a", "b"], []), parse_args.separate_tail_option_candidates(["a", "b"]) ) pcs-0.9.164/pcs/cli/fencing_topology.py000066400000000000000000000007331326265502500177440ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.common.fencing_topology import ( TARGET_TYPE_NODE, TARGET_TYPE_REGEXP, TARGET_TYPE_ATTRIBUTE, ) __target_type_map = { "attrib": TARGET_TYPE_ATTRIBUTE, "node": TARGET_TYPE_NODE, "regexp": TARGET_TYPE_REGEXP, } target_type_map_cli_to_lib = __target_type_map target_type_map_lib_to_cli = dict([ (value, key) for key, value in __target_type_map.items() ]) pcs-0.9.164/pcs/cli/resource/000077500000000000000000000000001326265502500156515ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/resource/__init__.py000066400000000000000000000000001326265502500177500ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/resource/parse_args.py000066400000000000000000000141201326265502500203470ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.cli.common.parse_args import group_by_keywords, prepare_options from pcs.cli.common.errors import CmdLineInputError def parse_create_simple(arg_list): groups = group_by_keywords( arg_list, set(["op", "meta"]), implicit_first_group_key="options", group_repeated_keywords=["op"], ) parts = { "meta": prepare_options(groups.get("meta", [])), "options": prepare_options(groups.get("options", [])), "op": [ prepare_options(op) for op in build_operations(groups.get("op", [])) ], } return parts def parse_create(arg_list): groups = group_by_keywords( arg_list, set(["op", "meta", "clone", "master", "bundle"]), 
implicit_first_group_key="options", group_repeated_keywords=["op"], only_found_keywords=True, ) parts = { "meta": prepare_options(groups.get("meta", [])), "options": prepare_options(groups.get("options", [])), "op": [ prepare_options(op) for op in build_operations(groups.get("op", [])) ], } if "clone" in groups: parts["clone"] = prepare_options(groups["clone"]) if "master" in groups: parts["master"] = prepare_options(groups["master"]) if "bundle" in groups: parts["bundle"] = groups["bundle"] return parts def _parse_bundle_groups(arg_list): repeatable_keyword_list = ["port-map", "storage-map"] keyword_list = ["meta", "container", "network"] + repeatable_keyword_list groups = group_by_keywords( arg_list, set(keyword_list), group_repeated_keywords=repeatable_keyword_list, only_found_keywords=True, ) for keyword in keyword_list: if keyword not in groups: continue if keyword in repeatable_keyword_list: for repeated_section in groups[keyword]: if len(repeated_section) == 0: raise CmdLineInputError( "No {0} options specified".format(keyword) ) else: if len(groups[keyword]) == 0: raise CmdLineInputError( "No {0} options specified".format(keyword) ) return groups def parse_bundle_create_options(arg_list): groups = _parse_bundle_groups(arg_list) container_options = groups.get("container", []) container_type = "" if container_options and "=" not in container_options[0]: container_type = container_options.pop(0) parts = { "container_type": container_type, "container": prepare_options(container_options), "network": prepare_options(groups.get("network", [])), "port_map": [ prepare_options(port_map) for port_map in groups.get("port-map", []) ], "storage_map": [ prepare_options(storage_map) for storage_map in groups.get("storage-map", []) ], "meta": prepare_options(groups.get("meta", [])) } return parts def _split_bundle_map_update_op_and_options( map_arg_list, result_parts, map_name ): if len(map_arg_list) < 2: raise _bundle_map_update_not_valid(map_name) op, options = map_arg_list[0], map_arg_list[1:] if op == "add": result_parts[op].append(prepare_options(options)) elif op == "remove": result_parts[op].extend(options) else: raise _bundle_map_update_not_valid(map_name) def _bundle_map_update_not_valid(map_name): return CmdLineInputError( ( "When using '{map}' you must specify either 'add' and options or " "'remove' and id(s)" ).format(map=map_name) ) def parse_bundle_update_options(arg_list): groups = _parse_bundle_groups(arg_list) port_map = {"add": [], "remove": []} for map_group in groups.get("port-map", []): _split_bundle_map_update_op_and_options( map_group, port_map, "port-map" ) storage_map = {"add": [], "remove": []} for map_group in groups.get("storage-map", []): _split_bundle_map_update_op_and_options( map_group, storage_map, "storage-map" ) parts = { "container": prepare_options(groups.get("container", [])), "network": prepare_options(groups.get("network", [])), "port_map_add": port_map["add"], "port_map_remove": port_map["remove"], "storage_map_add": storage_map["add"], "storage_map_remove": storage_map["remove"], "meta": prepare_options(groups.get("meta", [])) } return parts def build_operations(op_group_list): """ Return a list of dicts. Each dict represents one operation. 
list of list op_group_list contains items that have parameters after "op" (so item can contain multiple operations) for example: [ [monitor timeout=1 start timeout=2], [monitor timeout=3 interval=10], ] """ operation_list = [] for op_group in op_group_list: #empty operation is not allowed if not op_group: raise __not_enough_parts_in_operation() #every operation group needs to start with operation name if "=" in op_group[0]: raise __every_operation_needs_name() for arg in op_group: if "=" not in arg: operation_list.append(["name={0}".format(arg)]) else: operation_list[-1].append(arg) #every operation needs at least name and one option #there can be more than one operation in op_group: check is after processing if any([len(operation) < 2 for operation in operation_list]): raise __not_enough_parts_in_operation() return operation_list def __not_enough_parts_in_operation(): return CmdLineInputError( "When using 'op' you must specify an operation name" " and at least one option" ) def __every_operation_needs_name(): return CmdLineInputError( "When using 'op' you must specify an operation name after 'op'" ) pcs-0.9.164/pcs/cli/resource/test/000077500000000000000000000000001326265502500166305ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/resource/test/__init__.py000066400000000000000000000000001326265502500207270ustar00rootroot00000000000000pcs-0.9.164/pcs/cli/resource/test/test_parse_args.py000066400000000000000000000537301326265502500223770ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.cli.resource import parse_args from pcs.cli.common.errors import CmdLineInputError class ParseCreateArgs(TestCase): def assert_produce(self, arg_list, result): self.assertEqual(parse_args.parse_create(arg_list), result) def test_no_args(self): self.assert_produce([], { "meta": {}, "options": {}, "op": [], }) def test_only_instance_attributes(self): self.assert_produce(["a=b", "c=d"], { "meta": {}, "options": { "a": "b", "c": "d", }, "op": [], }) def test_only_meta(self): self.assert_produce(["meta", "a=b", "c=d"], { "options": {}, "op": [], "meta": { "a": "b", "c": "d", }, }) def test_only_clone(self): self.assert_produce(["clone", "a=b", "c=d"], { "meta": {}, "options": {}, "op": [], "clone": { "a": "b", "c": "d", }, }) def test_only_operations(self): self.assert_produce([ "op", "monitor", "a=b", "c=d", "start", "e=f", ], { "meta": {}, "options": {}, "op": [ {"name": "monitor", "a": "b", "c": "d"}, {"name": "start", "e": "f"}, ], }) def test_args_op_clone_meta(self): self.assert_produce([ "a=b", "c=d", "meta", "e=f", "g=h", "op", "monitor", "i=j", "k=l", "start", "m=n", "clone", "o=p", "q=r", ], { "options": { "a": "b", "c": "d", }, "op": [ {"name": "monitor", "i": "j", "k": "l"}, {"name": "start", "m": "n"}, ], "meta": { "e": "f", "g": "h", }, "clone": { "o": "p", "q": "r", }, }) def assert_raises_cmdline(self, args): self.assertRaises( CmdLineInputError, lambda: parse_args.parse_create(args) ) def test_raises_when_operation_name_does_not_follow_op_keyword(self): self.assert_raises_cmdline(["op", "a=b"]) self.assert_raises_cmdline(["op", "monitor", "a=b", "op", "c=d"]) def test_raises_when_operation_have_no_option(self): self.assert_raises_cmdline( ["op", "monitor", "a=b", "start", "stop", "c=d"] ) self.assert_raises_cmdline( ["op", "monitor", "a=b", "stop", "c=d", "op", "start"] ) def test_allow_to_repeat_op(self): self.assert_produce([ "op", "monitor", "a=b", "c=d", "op", "start", "e=f", ], { "meta": 
{}, "options": {}, "op": [ {"name": "monitor", "a": "b", "c": "d"}, {"name": "start", "e": "f"}, ], }) def test_deal_with_empty_operatins(self): self.assert_raises_cmdline(["op", "monitoring", "a=b", "op"]) class ParseCreateSimple(TestCase): def assert_produce(self, arg_list, result): self.assertEqual(parse_args.parse_create_simple(arg_list), result) def test_without_args(self): self.assert_produce([], { "meta": {}, "options": {}, "op": [], }) def test_only_instance_attributes(self): self.assert_produce(["a=b", "c=d"], { "meta": {}, "options": { "a": "b", "c": "d", }, "op": [], }) def test_only_meta(self): self.assert_produce(["meta", "a=b", "c=d"], { "options": {}, "op": [], "meta": { "a": "b", "c": "d", }, }) def test_only_operations(self): self.assert_produce([ "op", "monitor", "a=b", "c=d", "start", "e=f", ], { "meta": {}, "options": {}, "op": [ {"name": "monitor", "a": "b", "c": "d"}, {"name": "start", "e": "f"}, ], }) def assert_raises_cmdline(self, args): self.assertRaises( CmdLineInputError, lambda: parse_args.parse_create_simple(args) ) def test_raises_when_operation_name_does_not_follow_op_keyword(self): self.assert_raises_cmdline(["op", "a=b"]) self.assert_raises_cmdline(["op", "monitor", "a=b", "op", "c=d"]) def test_raises_when_operation_have_no_option(self): self.assert_raises_cmdline( ["op", "monitor", "a=b", "start", "stop", "c=d"] ) self.assert_raises_cmdline( ["op", "monitor", "a=b", "stop", "c=d", "op", "start"] ) def test_allow_to_repeat_op(self): self.assert_produce([ "op", "monitor", "a=b", "c=d", "op", "start", "e=f", ], { "meta": {}, "options": {}, "op": [ {"name": "monitor", "a": "b", "c": "d"}, {"name": "start", "e": "f"}, ], }) class ParseBundleCreateOptions(TestCase): def assert_produce(self, arg_list, result): self.assertEqual( result, parse_args.parse_bundle_create_options(arg_list) ) def assert_raises_cmdline(self, arg_list): self.assertRaises( CmdLineInputError, lambda: parse_args.parse_bundle_create_options(arg_list) ) def test_no_args(self): self.assert_produce( [], { "container_type": "", "container": {}, "network": {}, "port_map": [], "storage_map": [], "meta": {}, } ) def test_container_empty(self): self.assert_raises_cmdline(["container"]) def test_container_type(self): self.assert_produce( ["container", "docker"], { "container_type": "docker", "container": {}, "network": {}, "port_map": [], "storage_map": [], "meta": {}, } ) def test_container_options(self): self.assert_produce( ["container", "a=b", "c=d"], { "container_type": "", "container": {"a": "b", "c": "d"}, "network": {}, "port_map": [], "storage_map": [], "meta": {}, } ) def test_container_type_and_options(self): self.assert_produce( ["container", "docker", "a=b", "c=d"], { "container_type": "docker", "container": {"a": "b", "c": "d"}, "network": {}, "port_map": [], "storage_map": [], "meta": {}, } ) def test_container_type_must_be_first(self): self.assert_raises_cmdline(["container", "a=b", "docker", "c=d"]) def test_container_missing_value(self): self.assert_raises_cmdline(["container", "docker", "a", "c=d"]) def test_container_missing_key(self): self.assert_raises_cmdline(["container", "docker", "=b", "c=d"]) def test_network(self): self.assert_produce( ["network", "a=b", "c=d"], { "container_type": "", "container": {}, "network": {"a": "b", "c": "d"}, "port_map": [], "storage_map": [], "meta": {}, } ) def test_network_empty(self): self.assert_raises_cmdline(["network"]) def test_network_missing_value(self): self.assert_raises_cmdline(["network", "a", "c=d"]) def 
test_network_missing_key(self): self.assert_raises_cmdline(["network", "=b", "c=d"]) def test_port_map_empty(self): self.assert_raises_cmdline(["port-map"]) def test_one_of_port_map_empty(self): self.assert_raises_cmdline( ["port-map", "a=b", "port-map", "network", "c=d"] ) def test_port_map_one(self): self.assert_produce( ["port-map", "a=b", "c=d"], { "container_type": "", "container": {}, "network": {}, "port_map": [{"a": "b", "c": "d"}], "storage_map": [], "meta": {}, } ) def test_port_map_more(self): self.assert_produce( ["port-map", "a=b", "c=d", "port-map", "e=f"], { "container_type": "", "container": {}, "network": {}, "port_map": [{"a": "b", "c": "d"}, {"e": "f"}], "storage_map": [], "meta": {}, } ) def test_port_map_missing_value(self): self.assert_raises_cmdline(["port-map", "a", "c=d"]) def test_port_map_missing_key(self): self.assert_raises_cmdline(["port-map", "=b", "c=d"]) def test_storage_map_empty(self): self.assert_raises_cmdline(["storage-map"]) def test_one_of_storage_map_empty(self): self.assert_raises_cmdline( ["storage-map", "port-map", "a=b", "storage-map", "c=d"] ) def test_storage_map_one(self): self.assert_produce( ["storage-map", "a=b", "c=d"], { "container_type": "", "container": {}, "network": {}, "port_map": [], "storage_map": [{"a": "b", "c": "d"}], "meta": {}, } ) def test_storage_map_more(self): self.assert_produce( ["storage-map", "a=b", "c=d", "storage-map", "e=f"], { "container_type": "", "container": {}, "network": {}, "port_map": [], "storage_map": [{"a": "b", "c": "d"}, {"e": "f"}], "meta": {}, } ) def test_storage_map_missing_value(self): self.assert_raises_cmdline(["storage-map", "a", "c=d"]) def test_storage_map_missing_key(self): self.assert_raises_cmdline(["storage-map", "=b", "c=d"]) def test_meta(self): self.assert_produce( ["meta", "a=b", "c=d"], { "container_type": "", "container": {}, "network": {}, "port_map": [], "storage_map": [], "meta": {"a": "b", "c": "d"}, } ) def test_meta_empty(self): self.assert_raises_cmdline(["meta"]) def test_meta_missing_value(self): self.assert_raises_cmdline(["meta", "a", "c=d"]) def test_meta_missing_key(self): self.assert_raises_cmdline(["meta", "=b", "c=d"]) def test_all(self): self.assert_produce( [ "container", "docker", "a=b", "c=d", "network", "e=f", "g=h", "port-map", "i=j", "k=l", "port-map", "m=n", "o=p", "storage-map", "q=r", "s=t", "storage-map", "u=v", "w=x", "meta", "y=z", "A=B", ], { "container_type": "docker", "container": {"a": "b", "c": "d"}, "network": {"e": "f", "g": "h"}, "port_map": [{"i": "j", "k": "l"}, {"m": "n", "o": "p"}], "storage_map": [{"q": "r", "s": "t"}, {"u": "v", "w": "x"}], "meta": {"y": "z", "A": "B"}, } ) def test_all_mixed(self): self.assert_produce( [ "storage-map", "q=r", "s=t", "meta", "y=z", "port-map", "i=j", "k=l", "network", "e=f", "container", "docker", "a=b", "storage-map", "u=v", "w=x", "port-map", "m=n", "o=p", "meta", "A=B", "network", "g=h", "container", "c=d", ], { "container_type": "docker", "container": {"a": "b", "c": "d"}, "network": {"e": "f", "g": "h"}, "port_map": [{"i": "j", "k": "l"}, {"m": "n", "o": "p"}], "storage_map": [{"q": "r", "s": "t"}, {"u": "v", "w": "x"}], "meta": {"y": "z", "A": "B"}, } ) class ParseBundleUpdateOptions(TestCase): def assert_produce(self, arg_list, result): self.assertEqual( result, parse_args.parse_bundle_update_options(arg_list) ) def assert_raises_cmdline(self, arg_list): self.assertRaises( CmdLineInputError, lambda: parse_args.parse_bundle_update_options(arg_list) ) def test_no_args(self): self.assert_produce( [], { 
"container": {}, "network": {}, "port_map_add": [], "port_map_remove": [], "storage_map_add": [], "storage_map_remove": [], "meta": {}, } ) def test_container_options(self): self.assert_produce( ["container", "a=b", "c=d"], { "container": {"a": "b", "c": "d"}, "network": {}, "port_map_add": [], "port_map_remove": [], "storage_map_add": [], "storage_map_remove": [], "meta": {}, } ) def test_container_empty(self): self.assert_raises_cmdline(["container"]) def test_container_missing_value(self): self.assert_raises_cmdline(["container", "a", "c=d"]) def test_container_missing_key(self): self.assert_raises_cmdline(["container", "=b", "c=d"]) def test_network(self): self.assert_produce( ["network", "a=b", "c=d"], { "container": {}, "network": {"a": "b", "c": "d"}, "port_map_add": [], "port_map_remove": [], "storage_map_add": [], "storage_map_remove": [], "meta": {}, } ) def test_network_empty(self): self.assert_raises_cmdline(["network"]) def test_network_missing_value(self): self.assert_raises_cmdline(["network", "a", "c=d"]) def test_network_missing_key(self): self.assert_raises_cmdline(["network", "=b", "c=d"]) def test_port_map_empty(self): self.assert_raises_cmdline(["port-map"]) def test_one_of_port_map_empty(self): self.assert_raises_cmdline( ["port-map", "a=b", "port-map", "network", "c=d"] ) def test_port_map_missing_params(self): self.assert_raises_cmdline(["port-map"]) self.assert_raises_cmdline(["port-map add"]) self.assert_raises_cmdline(["port-map remove"]) def test_port_map_wrong_keyword(self): self.assert_raises_cmdline(["port-map", "wrong", "a=b"]) def test_port_map_missing_value(self): self.assert_raises_cmdline(["port-map", "add", "a", "c=d"]) def test_port_map_missing_key(self): self.assert_raises_cmdline(["port-map", "add", "=b", "c=d"]) def test_port_map_more(self): self.assert_produce( [ "port-map", "add", "a=b", "port-map", "remove", "c", "d", "port-map", "add", "e=f", "g=h", "port-map", "remove", "i", ], { "container": {}, "network": {}, "port_map_add": [ {"a": "b", }, {"e": "f", "g": "h",}, ], "port_map_remove": ["c", "d", "i"], "storage_map_add": [], "storage_map_remove": [], "meta": {}, } ) def test_storage_map_empty(self): self.assert_raises_cmdline(["storage-map"]) def test_one_of_storage_map_empty(self): self.assert_raises_cmdline( ["storage-map", "port-map", "a=b", "storage-map", "c=d"] ) def test_storage_map_missing_params(self): self.assert_raises_cmdline(["storage-map"]) self.assert_raises_cmdline(["storage-map add"]) self.assert_raises_cmdline(["storage-map remove"]) def test_storage_map_wrong_keyword(self): self.assert_raises_cmdline(["storage-map", "wrong", "a=b"]) def test_storage_map_missing_value(self): self.assert_raises_cmdline(["storage-map", "add", "a", "c=d"]) def test_storage_map_missing_key(self): self.assert_raises_cmdline(["storage-map", "add", "=b", "c=d"]) def test_storage_map_more(self): self.assert_produce( [ "storage-map", "add", "a=b", "storage-map", "remove", "c", "d", "storage-map", "add", "e=f", "g=h", "storage-map", "remove", "i", ], { "container": {}, "network": {}, "port_map_add": [], "port_map_remove": [], "storage_map_add": [ {"a": "b", }, {"e": "f", "g": "h",}, ], "storage_map_remove": ["c", "d", "i"], "meta": {}, } ) def test_meta(self): self.assert_produce( ["meta", "a=b", "c=d"], { "container": {}, "network": {}, "port_map_add": [], "port_map_remove": [], "storage_map_add": [], "storage_map_remove": [], "meta": {"a": "b", "c": "d"}, } ) def test_meta_empty(self): self.assert_raises_cmdline(["meta"]) def 
test_meta_missing_value(self): self.assert_raises_cmdline(["meta", "a", "c=d"]) def test_meta_missing_key(self): self.assert_raises_cmdline(["meta", "=b", "c=d"]) def test_all(self): self.assert_produce( [ "container", "a=b", "c=d", "network", "e=f", "g=h", "port-map", "add", "i=j", "k=l", "port-map", "add", "m=n", "port-map", "remove", "o", "p", "port-map", "remove", "q", "storage-map", "add", "r=s", "t=u", "storage-map", "add", "v=w", "storage-map", "remove", "x", "y", "storage-map", "remove", "z", "meta", "A=B", "C=D", ], { "container": {"a": "b", "c": "d"}, "network": {"e": "f", "g": "h"}, "port_map_add": [ {"i": "j", "k": "l"}, {"m": "n"}, ], "port_map_remove": ["o", "p", "q"], "storage_map_add": [ {"r": "s", "t": "u"}, {"v": "w"}, ], "storage_map_remove": ["x", "y", "z"], "meta": {"A": "B", "C": "D"}, } ) def test_all_mixed(self): self.assert_produce( [ "storage-map", "remove", "x", "y", "meta", "A=B", "port-map", "remove", "o", "p", "network", "e=f", "g=h", "storage-map", "add", "r=s", "t=u", "port-map", "add", "i=j", "k=l", "container", "a=b", "c=d", "meta", "C=D", "port-map", "remove", "q", "storage-map", "remove", "z", "storage-map", "add", "v=w", "port-map", "add", "m=n", ], { "container": {"a": "b", "c": "d"}, "network": {"e": "f", "g": "h"}, "port_map_add": [ {"i": "j", "k": "l"}, {"m": "n"}, ], "port_map_remove": ["o", "p", "q"], "storage_map_add": [ {"r": "s", "t": "u"}, {"v": "w"}, ], "storage_map_remove": ["x", "y", "z"], "meta": {"A": "B", "C": "D"}, } ) class BuildOperations(TestCase): def assert_produce(self, arg_list, result): self.assertEqual(result, parse_args.build_operations(arg_list)) def assert_raises_cmdline(self, arg_list): self.assertRaises( CmdLineInputError, lambda: parse_args.build_operations(arg_list) ) def test_return_empty_list_on_empty_input(self): self.assert_produce([], []) def test_return_all_operations_specified_in_the_same_group(self): self.assert_produce( [ ["monitor", "interval=10s", "start", "timeout=20s"] ], [ ["name=monitor", "interval=10s"], ["name=start", "timeout=20s"], ] ) def test_return_all_operations_specified_in_different_groups(self): self.assert_produce( [ ["monitor", "interval=10s"], ["start", "timeout=20s"], ], [ ["name=monitor", "interval=10s"], ["name=start", "timeout=20s"], ] ) def test_refuse_empty_operation(self): self.assert_raises_cmdline([[]]) def test_refuse_operation_without_attribute(self): self.assert_raises_cmdline([["monitor"]]) def test_refuse_operation_without_name(self): self.assert_raises_cmdline([["interval=10s"]]) pcs-0.9.164/pcs/cluster.py000066400000000000000000002530451326265502500153170ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import math import os import subprocess import re import sys import socket import tempfile import datetime import json import time import xml.dom.minidom try: # python2 from commands import getstatusoutput except ImportError: # python3 from subprocess import getstatusoutput try: # python2 from urlparse import urlparse except ImportError: # python3 from urllib.parse import urlparse from pcs import ( constraint, node, pcsd, quorum, resource, settings, status, usage, utils, ) from pcs.utils import parallel_for_nodes from pcs.common import report_codes from pcs.cli.common.errors import ( CmdLineInputError, ERR_NODE_LIST_AND_ALL_MUTUALLY_EXCLUSIVE, ) from pcs.cli.common.reports import process_library_reports, build_report_message import pcs.cli.cluster.command as cluster_command from pcs.lib import ( sbd as lib_sbd, reports as 
lib_reports, ) from pcs.lib.booth import sync as booth_sync from pcs.lib.commands.remote_node import _share_authkey, _destroy_pcmk_remote_env from pcs.lib.commands.quorum import _add_device_model_net from pcs.lib.communication.corosync import CheckCorosyncOffline from pcs.lib.communication.nodes import DistributeFiles from pcs.lib.communication.sbd import ( CheckSbd, SetSbdConfig, EnableSbdService, DisableSbdService, ) from pcs.lib.communication.tools import run_and_raise from pcs.lib.corosync import ( config_parser as corosync_conf_utils, qdevice_net, ) from pcs.cli.common.console_report import warn, error from pcs.lib.corosync.config_facade import ConfigFacade as corosync_conf_facade from pcs.lib.errors import ( LibraryError, ReportItemSeverity, ) from pcs.lib.external import ( disable_service, is_systemctl, NodeCommandUnsuccessfulException, NodeCommunicationException, node_communicator_exception_to_report_item, ) from pcs.lib.env_tools import get_nodes from pcs.lib.node import NodeAddresses from pcs.lib import node_communication_format import pcs.lib.pacemaker.live as lib_pacemaker from pcs.lib.tools import ( environment_file_to_dict, generate_binary_key, generate_key, ) def cluster_cmd(argv): if len(argv) == 0: usage.cluster() exit(1) sub_cmd = argv.pop(0) if (sub_cmd == "help"): usage.cluster([" ".join(argv)] if argv else []) elif (sub_cmd == "setup"): if "--name" in utils.pcs_options: cluster_setup([utils.pcs_options["--name"]] + argv) else: utils.err( "A cluster name (--name ) is required to setup a cluster" ) elif (sub_cmd == "sync"): sync_nodes(utils.getNodesFromCorosyncConf(),utils.getCorosyncConf()) elif (sub_cmd == "status"): status.cluster_status(argv) elif (sub_cmd == "pcsd-status"): status.cluster_pcsd_status(argv) elif (sub_cmd == "certkey"): cluster_certkey(argv) elif (sub_cmd == "auth"): cluster_auth(argv) elif (sub_cmd == "token"): cluster_token(argv) elif (sub_cmd == "token-nodes"): cluster_token_nodes(argv) elif (sub_cmd == "start"): if "--all" in utils.pcs_options: if argv: utils.err(ERR_NODE_LIST_AND_ALL_MUTUALLY_EXCLUSIVE) start_cluster_all() else: start_cluster(argv) elif (sub_cmd == "stop"): if "--all" in utils.pcs_options: if argv: utils.err(ERR_NODE_LIST_AND_ALL_MUTUALLY_EXCLUSIVE) stop_cluster_all() else: stop_cluster(argv) elif (sub_cmd == "kill"): kill_cluster(argv) elif (sub_cmd == "standby"): try: node.node_standby_cmd( utils.get_library_wrapper(), argv, utils.get_modifiers(), True ) except LibraryError as e: utils.process_library_reports(e.args) except CmdLineInputError as e: utils.exit_on_cmdline_input_errror(e, "node", "standby") elif (sub_cmd == "unstandby"): try: node.node_standby_cmd( utils.get_library_wrapper(), argv, utils.get_modifiers(), False ) except LibraryError as e: utils.process_library_reports(e.args) except CmdLineInputError as e: utils.exit_on_cmdline_input_errror(e, "node", "unstandby") elif (sub_cmd == "enable"): if "--all" in utils.pcs_options: if argv: utils.err(ERR_NODE_LIST_AND_ALL_MUTUALLY_EXCLUSIVE) enable_cluster_all() else: enable_cluster(argv) elif (sub_cmd == "disable"): if "--all" in utils.pcs_options: if argv: utils.err(ERR_NODE_LIST_AND_ALL_MUTUALLY_EXCLUSIVE) disable_cluster_all() else: disable_cluster(argv) elif (sub_cmd == "remote-node"): try: cluster_remote_node(argv) except LibraryError as e: utils.process_library_reports(e.args) elif (sub_cmd == "cib"): get_cib(argv) elif (sub_cmd == "cib-push"): cluster_push(argv) elif (sub_cmd == "cib-upgrade"): utils.cluster_upgrade() elif (sub_cmd == "edit"): cluster_edit(argv) 
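    # The "node" sub-command below is dispatched in two ways: the
    # new-architecture commands (add-remote, add-guest, remove-remote,
    # remove-guest, clear) go through remote_node_command_map to handlers in
    # pcs.cli.cluster.command, while the legacy add/remove/delete variants
    # fall through to cluster_node() defined further below.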
elif (sub_cmd == "node"): if not argv: usage.cluster(["node"]) sys.exit(1) remote_node_command_map = { "add-remote": cluster_command.node_add_remote, "add-guest": cluster_command.node_add_guest, "remove-remote": cluster_command.create_node_remove_remote( resource.resource_remove ), "remove-guest": cluster_command.node_remove_guest, "clear": cluster_command.node_clear, } if argv[0] in remote_node_command_map: try: remote_node_command_map[argv[0]]( utils.get_library_wrapper(), argv[1:], utils.get_modifiers() ) except LibraryError as e: process_library_reports(e.args) except CmdLineInputError as e: utils.exit_on_cmdline_input_errror( e, "cluster", "node " + argv[0] ) else: cluster_node(argv) elif (sub_cmd == "localnode"): cluster_localnode(argv) elif (sub_cmd == "uidgid"): cluster_uidgid(argv) elif (sub_cmd == "corosync"): cluster_get_corosync_conf(argv) elif (sub_cmd == "reload"): cluster_reload(argv) elif (sub_cmd == "destroy"): cluster_destroy(argv) elif (sub_cmd == "verify"): cluster_verify(argv) elif (sub_cmd == "report"): cluster_report(argv) elif (sub_cmd == "quorum"): if argv and argv[0] == "unblock": quorum.quorum_unblock_cmd(argv[1:]) else: usage.cluster() sys.exit(1) else: usage.cluster() sys.exit(1) def sync_nodes(nodes,config): for node in nodes: utils.setCorosyncConfig(node,config) def cluster_auth(argv): if len(argv) == 0: auth_nodes(utils.getNodesFromCorosyncConf()) else: auth_nodes(argv) def cluster_token(argv): if len(argv) > 1: utils.err("Must specify only one node") elif len(argv) == 0: utils.err("Must specify a node to get authorization token from") node = argv[0] tokens = utils.readTokens() if node in tokens: print(tokens[node]) else: utils.err("No authorization token for: %s" % (node)) def cluster_token_nodes(argv): print("\n".join(sorted(utils.readTokens().keys()))) def auth_nodes(nodes): if "-u" in utils.pcs_options: username = utils.pcs_options["-u"] else: username = None if "-p" in utils.pcs_options: password = utils.pcs_options["-p"] else: password = None nodes_dict = parse_nodes_with_ports(nodes) need_auth = "--force" in utils.pcs_options or (username or password) if not need_auth: for node in nodes_dict.keys(): status = utils.checkAuthorization(node) if status[0] == 3: need_auth = True break mutually_authorized = False if status[0] == 0: try: auth_status = json.loads(status[1]) if auth_status["success"]: if set(nodes_dict.keys()).issubset( set(auth_status["node_list"]) ): mutually_authorized = True except (ValueError, KeyError): pass if not mutually_authorized: need_auth = True break if need_auth: if username == None: username = utils.get_terminal_input('Username: ') if password == None: password = utils.get_terminal_password() utils.auth_nodes_do( nodes_dict, username, password, '--force' in utils.pcs_options, '--local' in utils.pcs_options ) else: for node in nodes_dict.keys(): print(node + ": Already authorized") def parse_nodes_with_ports(node_list): result = {} for node in node_list: if node.count(":") > 1 and not node.startswith("["): # if IPv6 without port put it in parentheses node = "[{0}]".format(node) # adding protocol so urlparse will parse hostname/ip and port correctly url = urlparse("http://{0}".format(node)) if url.hostname in result and result[url.hostname] != url.port: raise CmdLineInputError( "Node '{0}' defined twice with different ports".format( url.hostname ) ) result[url.hostname] = url.port return result def cluster_certkey(argv): return pcsd.pcsd_certkey(argv) def cluster_setup(argv): modifiers = utils.get_modifiers() 
allowed_encryption_values = ["0", "1"] if modifiers["encryption"] not in allowed_encryption_values: process_library_reports([ lib_reports.invalid_option_value( "--encryption", modifiers["encryption"], allowed_encryption_values, severity=ReportItemSeverity.ERROR, forceable=None ) ]) if len(argv) < 2: usage.cluster(["setup"]) sys.exit(1) is_rhel6 = utils.is_rhel6() cluster_name = argv[0] wait = False wait_timeout = None if "--start" in utils.pcs_options and "--wait" in utils.pcs_options: wait_timeout = utils.validate_wait_get_timeout(False) wait = True # get nodes' addresses udpu_rrp = False node_list = [] primary_addr_list = [] all_addr_list = [] for node in argv[1:]: addr_list = utils.parse_multiring_node(node) primary_addr_list.append(addr_list[0]) all_addr_list.append(addr_list[0]) node_options = { "ring0_addr": addr_list[0], } if addr_list[1]: udpu_rrp = True all_addr_list.append(addr_list[1]) node_options["ring1_addr"] = addr_list[1] node_list.append(node_options) # special case of ring1 address on cman if is_rhel6 and not udpu_rrp and "--addr1" in utils.pcs_options: for node in node_list: node["ring1_addr"] = utils.pcs_options["--addr1"] # verify addresses if udpu_rrp: for node_options in node_list: if "ring1_addr" not in node_options: utils.err( "if one node is configured for RRP, " + "all nodes must be configured for RRP" ) nodes_unresolvable = False for node_addr in all_addr_list: try: socket.getaddrinfo(node_addr, None) except socket.error: print("Warning: Unable to resolve hostname: {0}".format(node_addr)) nodes_unresolvable = True if nodes_unresolvable and "--force" not in utils.pcs_options: utils.err("Unable to resolve all hostnames, use --force to override") # parse, validate and complete options if is_rhel6: options, messages = cluster_setup_parse_options_cman( utils.pcs_options, "--force" in utils.pcs_options ) else: options, messages = cluster_setup_parse_options_corosync( utils.pcs_options, "--force" in utils.pcs_options ) if udpu_rrp and "rrp_mode" not in options["transport_options"]: options["transport_options"]["rrp_mode"] = "passive" if messages: process_library_reports(messages) # prepare config file if is_rhel6: config, messages = cluster_setup_create_cluster_conf( cluster_name, node_list, options["transport_options"], options["totem_options"] ) else: config, messages = cluster_setup_create_corosync_conf( cluster_name, node_list, options["transport_options"], options["totem_options"], options["quorum_options"], modifiers["encryption"] == "1" ) if messages: process_library_reports(messages) # setup on the local node if "--local" in utils.pcs_options: # Config path can be overriden by --corosync_conf or --cluster_conf # command line options. If it is overriden we do not touch any cluster # which may be set up on the local node. 
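        # Typical defaults taken from pcs.settings, shown here only for
        # illustration: cluster.conf lives under /etc/cluster/ on RHEL 6 and
        # corosync.conf under /etc/corosync/ elsewhere; the command line
        # options above substitute a different path for the local setup.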
if is_rhel6: config_path = settings.cluster_conf_file else: config_path = settings.corosync_conf_file config_path_overriden = ( (is_rhel6 and "--cluster_conf" in utils.pcs_options) or (not is_rhel6 and "--corosync_conf" in utils.pcs_options) ) # verify and ensure no cluster is set up on the host if "--force" not in utils.pcs_options and os.path.exists(config_path): utils.err("{0} already exists, use --force to overwrite".format( config_path )) if not config_path_overriden: cib_path = os.path.join(settings.cib_dir, "cib.xml") if "--force" not in utils.pcs_options and os.path.exists(cib_path): utils.err("{0} already exists, use --force to overwrite".format( cib_path )) cluster_destroy([]) # set up the cluster utils.setCorosyncConf(config) if "--start" in utils.pcs_options: start_cluster([]) if "--enable" in utils.pcs_options: enable_cluster([]) if wait: wait_for_nodes_started([], wait_timeout) # setup on remote nodes else: # verify and ensure no cluster is set up on the nodes # checks that nodes are authenticated as well lib_env = utils.get_lib_env() if "--force" not in utils.pcs_options: all_nodes_available = True for node in primary_addr_list: available, message = utils.canAddNodeToCluster( lib_env.get_node_communicator(), lib_env.get_node_target_factory().get_target( NodeAddresses(node) ) ) if not available: all_nodes_available = False utils.err("{0}: {1}".format(node, message), False) if not all_nodes_available: utils.err( "nodes availability check failed, use --force to override. " + "WARNING: This will destroy existing cluster on the nodes." ) print("Destroying cluster on nodes: {0}...".format( ", ".join(primary_addr_list) )) destroy_cluster(primary_addr_list) print() try: file_definitions = {} file_definitions.update( node_communication_format.pcmk_authkey_file(generate_key()) ) if modifiers["encryption"] == "1": file_definitions.update( node_communication_format.corosync_authkey_file( generate_binary_key(random_bytes_count=128) ) ) com_cmd = DistributeFiles( lib_env.report_processor, file_definitions, skip_offline_targets=modifiers["skip_offline_nodes"], allow_fails=modifiers["force"], ) com_cmd.set_targets( lib_env.get_node_target_factory().get_target_list( [NodeAddresses(node) for node in primary_addr_list] ) ) run_and_raise(lib_env.get_node_communicator(), com_cmd) except LibraryError as e: #Theoretically, this should not happen utils.process_library_reports(e.args) # send local cluster pcsd configs to the new nodes print("Sending cluster config files to the nodes...") pcsd_data = { "nodes": primary_addr_list, "force": True, "clear_local_cluster_permissions": True, } err_msgs = [] output, retval = utils.run_pcsdcli("send_local_configs", pcsd_data) if retval == 0 and output["status"] == "ok" and output["data"]: try: for node in primary_addr_list: node_response = output["data"][node] if node_response["status"] == "notauthorized": err_msgs.append( "Unable to authenticate to " + node + ", try running 'pcs cluster auth'" ) if node_response["status"] not in ["ok", "not_supported"]: err_msgs.append( "Unable to set pcsd configs on {0}".format(node) ) except: err_msgs.append("Unable to communicate with pcsd") else: err_msgs.append("Unable to set pcsd configs") for err_msg in err_msgs: print("Warning: {0}".format(err_msg)) # send the cluster config for node in primary_addr_list: utils.setCorosyncConfig(node, config) # start and enable the cluster if requested if "--start" in utils.pcs_options: print("\nStarting cluster on nodes: {0}...".format( ", ".join(primary_addr_list) )) 
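            # start_cluster_nodes() below scales its per-request timeout with
            # the cluster size (one extra timeout unit per 8 nodes, see the
            # comment at its definition), so starting a large cluster from
            # here does not time out spuriously.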
start_cluster_nodes(primary_addr_list) if "--enable" in utils.pcs_options: enable_cluster(primary_addr_list) # sync certificates as the last step because it restarts pcsd print() pcsd.pcsd_sync_certs( [], exit_after_error=False, async_restart=modifiers["async"] ) if wait: print() wait_for_nodes_started(primary_addr_list, wait_timeout) def cluster_setup_parse_options_corosync(options, force=False): messages = [] parsed = { "transport_options": { "rings_options": [], }, "totem_options": {}, "quorum_options": {}, } severity = ReportItemSeverity.WARNING if force else ReportItemSeverity.ERROR forceable = None if force else report_codes.FORCE_OPTIONS transport = "udpu" if "--transport" in options: transport = options["--transport"] allowed_transport = ("udp", "udpu") if transport not in allowed_transport: messages.append(lib_reports.invalid_option_value( "transport", transport, allowed_transport, severity, forceable )) parsed["transport_options"]["transport"] = transport if transport == "udpu" and ("--addr0" in options or "--addr1" in options): messages.append(lib_reports.rrp_addresses_transport_mismatch()) rrpmode = None if "--rrpmode" in options or "--addr0" in options: rrpmode = "passive" if "--rrpmode" in options: rrpmode = options["--rrpmode"] allowed_rrpmode = ("passive", "active") if rrpmode not in allowed_rrpmode: messages.append(lib_reports.invalid_option_value( "RRP mode", rrpmode, allowed_rrpmode, severity, forceable )) if rrpmode == "active": messages.append(lib_reports.rrp_active_not_supported(force)) if rrpmode: parsed["transport_options"]["rrp_mode"] = rrpmode totem_options_names = { "--token": "token", "--token_coefficient": "token_coefficient", "--join": "join", "--consensus": "consensus", "--miss_count_const": "miss_count_const", "--fail_recv_const": "fail_recv_const", } for opt_name, parsed_name in totem_options_names.items(): if opt_name in options: parsed["totem_options"][parsed_name] = options[opt_name] if transport == "udp": interface_ids = [] if "--addr0" in options: interface_ids.append(0) if "--addr1" in options: interface_ids.append(1) for interface in interface_ids: ring_options = {} ring_options["addr"] = options["--addr{0}".format(interface)] if "--broadcast{0}".format(interface) in options: ring_options["broadcast"] = True else: if "--mcast{0}".format(interface) in options: mcastaddr = options["--mcast{0}".format(interface)] else: mcastaddr = "239.255.{0}.1".format(interface + 1) ring_options["mcastaddr"] = mcastaddr if "--mcastport{0}".format(interface) in options: mcastport = options["--mcastport{0}".format(interface)] else: mcastport = "5405" ring_options["mcastport"] = mcastport if "--ttl{0}".format(interface) in options: ring_options["ttl"] = options["--ttl{0}".format(interface)] parsed["transport_options"]["rings_options"].append(ring_options) if "--ipv6" in options: parsed["transport_options"]["ip_version"] = "ipv6" quorum_options_names = { "--wait_for_all": "wait_for_all", "--auto_tie_breaker": "auto_tie_breaker", "--last_man_standing": "last_man_standing", "--last_man_standing_window": "last_man_standing_window", } for opt_name, parsed_name in quorum_options_names.items(): if opt_name in options: parsed["quorum_options"][parsed_name] = options[opt_name] for opt_name in ( "--wait_for_all", "--auto_tie_breaker", "--last_man_standing" ): allowed_values = ("0", "1") if opt_name in options and options[opt_name] not in allowed_values: messages.append(lib_reports.invalid_option_value( opt_name, options[opt_name], allowed_values )) return parsed, messages def 
cluster_setup_parse_options_cman(options, force=False): messages = [] parsed = { "transport_options": { "rings_options": [], }, "totem_options": {}, } severity = ReportItemSeverity.WARNING if force else ReportItemSeverity.ERROR forceable = None if force else report_codes.FORCE_OPTIONS broadcast = ("--broadcast0" in options) or ("--broadcast1" in options) if broadcast: transport = "udpb" parsed["transport_options"]["broadcast"] = True ring_missing_broadcast = None if "--broadcast0" not in options: ring_missing_broadcast = "0" if "--broadcast1" not in options: ring_missing_broadcast = "1" if ring_missing_broadcast: messages.append(lib_reports.cman_broadcast_all_rings()) else: transport = "udp" if "--transport" in options: transport = options["--transport"] allowed_transport = ("udp", "udpu") if transport not in allowed_transport: messages.append(lib_reports.invalid_option_value( "transport", transport, allowed_transport, severity, forceable )) parsed["transport_options"]["transport"] = transport if transport == "udpu": messages.append(lib_reports.cman_udpu_restart_required()) if transport == "udpu" and ("--addr0" in options or "--addr1" in options): messages.append(lib_reports.rrp_addresses_transport_mismatch()) rrpmode = None if "--rrpmode" in options or "--addr0" in options: rrpmode = "passive" if "--rrpmode" in options: rrpmode = options["--rrpmode"] allowed_rrpmode = ("passive", "active") if rrpmode not in allowed_rrpmode: messages.append(lib_reports.invalid_option_value( "RRP mode", rrpmode, allowed_rrpmode, severity, forceable )) if rrpmode == "active": messages.append(lib_reports.rrp_active_not_supported(force)) if rrpmode: parsed["transport_options"]["rrp_mode"] = rrpmode totem_options_names = { "--token": "token", "--join": "join", "--consensus": "consensus", "--miss_count_const": "miss_count_const", "--fail_recv_const": "fail_recv_const", } for opt_name, parsed_name in totem_options_names.items(): if opt_name in options: parsed["totem_options"][parsed_name] = options[opt_name] if not broadcast: for interface in (0, 1): if "--addr{0}".format(interface) not in options: continue ring_options = {} if "--mcast{0}".format(interface) in options: mcastaddr = options["--mcast{0}".format(interface)] else: mcastaddr = "239.255.{0}.1".format(interface + 1) ring_options["mcastaddr"] = mcastaddr if "--mcastport{0}".format(interface) in options: ring_options["mcastport"] = options[ "--mcastport{0}".format(interface) ] if "--ttl{0}".format(interface) in options: ring_options["ttl"] = options["--ttl{0}".format(interface)] parsed["transport_options"]["rings_options"].append(ring_options) ignored_options_names = ( "--wait_for_all", "--auto_tie_breaker", "--last_man_standing", "--last_man_standing_window", "--token_coefficient", "--ipv6", ) for opt_name in ignored_options_names: if opt_name in options: messages.append(lib_reports.cman_ignored_option(opt_name)) return parsed, messages def cluster_setup_create_corosync_conf( cluster_name, node_list, transport_options, totem_options, quorum_options, encrypted ): messages = [] corosync_conf = corosync_conf_utils.Section("") totem_section = corosync_conf_utils.Section("totem") nodelist_section = corosync_conf_utils.Section("nodelist") quorum_section = corosync_conf_utils.Section("quorum") logging_section = corosync_conf_utils.Section("logging") corosync_conf.add_section(totem_section) corosync_conf.add_section(nodelist_section) corosync_conf.add_section(quorum_section) corosync_conf.add_section(logging_section) totem_section.add_attribute("version", "2") 
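    # The generated file is assembled from the four sections created above;
    # as a rough sketch of the output (not verbatim, values depend on the
    # options passed in):
    #   totem { version: 2  cluster_name: <name>  transport: <transport> }
    #   nodelist { node { ring0_addr: <addr>  nodeid: 1 } ... }
    #   quorum { provider: corosync_votequorum }
    #   logging { to_logfile: yes  logfile: /var/log/cluster/corosync.log
    #             to_syslog: yes }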
totem_section.add_attribute("cluster_name", cluster_name) if not encrypted: totem_section.add_attribute("secauth", "off") transport_options_names = ( "transport", "rrp_mode", "ip_version", ) for opt_name in transport_options_names: if opt_name in transport_options: totem_section.add_attribute(opt_name, transport_options[opt_name]) totem_options_names = ( "token", "token_coefficient", "join", "consensus", "miss_count_const", "fail_recv_const", ) for opt_name in totem_options_names: if opt_name in totem_options: totem_section.add_attribute(opt_name, totem_options[opt_name]) transport = None if "transport" in transport_options: transport = transport_options["transport"] if transport == "udp": if "rings_options" in transport_options: for ring_number, ring_options in enumerate( transport_options["rings_options"] ): interface_section = corosync_conf_utils.Section("interface") totem_section.add_section(interface_section) interface_section.add_attribute("ringnumber", ring_number) if "addr" in ring_options: interface_section.add_attribute( "bindnetaddr", ring_options["addr"] ) if "broadcast" in ring_options and ring_options["broadcast"]: interface_section.add_attribute("broadcast", "yes") else: for opt_name in ("mcastaddr", "mcastport", "ttl"): if opt_name in ring_options: interface_section.add_attribute( opt_name, ring_options[opt_name] ) for node_id, node_options in enumerate(node_list, 1): node_section = corosync_conf_utils.Section("node") nodelist_section.add_section(node_section) for opt_name in ("ring0_addr", "ring1_addr"): if opt_name in node_options: node_section.add_attribute(opt_name, node_options[opt_name]) node_section.add_attribute("nodeid", node_id) quorum_section.add_attribute("provider", "corosync_votequorum") quorum_options_names = ( "wait_for_all", "auto_tie_breaker", "last_man_standing", "last_man_standing_window", ) for opt_name in quorum_options_names: if opt_name in quorum_options: quorum_section.add_attribute(opt_name, quorum_options[opt_name]) auto_tie_breaker = ( "auto_tie_breaker" in quorum_options and quorum_options["auto_tie_breaker"] == "1" ) if len(node_list) == 2 and not auto_tie_breaker: quorum_section.add_attribute("two_node", "1") logging_section.add_attribute("to_logfile", "yes") logging_section.add_attribute("logfile", "/var/log/cluster/corosync.log") logging_section.add_attribute("to_syslog", "yes") return str(corosync_conf), messages def cluster_setup_create_cluster_conf( cluster_name, node_list, transport_options, totem_options ): broadcast = ( "broadcast" in transport_options and transport_options["broadcast"] ) commands = [] commands.append({ "cmd": ["-i", "--createcluster", cluster_name], "err": "error creating cluster: {0}".format(cluster_name), }) commands.append({ "cmd": ["-i", "--addfencedev", "pcmk-redirect", "agent=fence_pcmk"], "err": "error creating fence dev: {0}".format(cluster_name), }) cman_opts = [] if "transport" in transport_options: cman_opts.append("transport=" + transport_options["transport"]) cman_opts.append("broadcast=" + ("yes" if broadcast else "no")) if len(node_list) == 2: cman_opts.append("two_node=1") cman_opts.append("expected_votes=1") commands.append({ "cmd": ["--setcman"] + cman_opts, "err": "error setting cman options", }) for node_options in node_list: if "ring0_addr" in node_options: ring0_addr = node_options["ring0_addr"] commands.append({ "cmd": ["--addnode", ring0_addr], "err": "error adding node: {0}".format(ring0_addr), }) if "ring1_addr" in node_options: ring1_addr = node_options["ring1_addr"] commands.append({ "cmd": 
["--addalt", ring0_addr, ring1_addr], "err": ( "error adding alternative address for node: {0}".format( ring0_addr ) ), }) commands.append({ "cmd": ["-i", "--addmethod", "pcmk-method", ring0_addr], "err": "error adding fence method: {0}".format(ring0_addr), }) commands.append({ "cmd": [ "-i", "--addfenceinst", "pcmk-redirect", ring0_addr, "pcmk-method", "port=" + ring0_addr ], "err": "error adding fence instance: {0}".format(ring0_addr), }) if not broadcast: if "rings_options" in transport_options: for ring_number, ring_options in enumerate( transport_options["rings_options"] ): mcast_options = [] if "mcastaddr" in ring_options: mcast_options.append(ring_options["mcastaddr"]) if "mcastport" in ring_options: mcast_options.append("port=" + ring_options["mcastport"]) if "ttl" in ring_options: mcast_options.append("ttl=" + ring_options["ttl"]) if ring_number == 0: cmd_name = "--setmulticast" else: cmd_name = "--setaltmulticast" commands.append({ "cmd": [cmd_name] + mcast_options, "err": "error adding ring{0} settings".format(ring_number), }) totem_options_names = ( "token", "join", "consensus", "miss_count_const", "fail_recv_const", ) totem_cmd_options = [] for opt_name in totem_options_names: if opt_name in totem_options: totem_cmd_options.append( "{0}={1}".format(opt_name, totem_options[opt_name]) ) if "rrp_mode" in transport_options: totem_cmd_options.append( "rrp_mode={0}".format(transport_options["rrp_mode"]) ) if totem_cmd_options: commands.append({ "cmd": ["--settotem"] + totem_cmd_options, "err": "error setting totem options", }) messages = [] conf_temp = tempfile.NamedTemporaryFile(mode="w+", suffix=".pcs") conf_path = conf_temp.name cmd_prefix = ["ccs", "-f", conf_path] for cmd_item in commands: output, retval = utils.run(cmd_prefix + cmd_item["cmd"]) if retval != 0: if output: messages.append(lib_reports.common_info(output)) messages.append(lib_reports.common_error(cmd_item["err"])) conf_temp.close() return "", messages conf_temp.seek(0) cluster_conf = conf_temp.read() conf_temp.close() return cluster_conf, messages def get_local_network(): args = ["/sbin/ip", "route"] p = subprocess.Popen(args, stdout=subprocess.PIPE) iproute_out = p.stdout.read() network_addr = re.search(r"^([0-9\.]+)", iproute_out) if network_addr: return network_addr.group(1) else: utils.err("unable to determine network address, is interface up?") def start_cluster(argv): wait = False wait_timeout = None if "--wait" in utils.pcs_options: wait_timeout = utils.validate_wait_get_timeout(False) wait = True if len(argv) > 0: nodes = set(argv) # unique start_cluster_nodes(nodes) if wait: wait_for_nodes_started(nodes, wait_timeout) return print("Starting Cluster...") service_list = [] if utils.is_cman_cluster(): # Verify that CMAN_QUORUM_TIMEOUT is set, if not, then we set it to 0 retval, output = getstatusoutput('source /etc/sysconfig/cman ; [ -z "$CMAN_QUORUM_TIMEOUT" ]') if retval == 0: with open("/etc/sysconfig/cman", "a") as cman_conf_file: cman_conf_file.write("\nCMAN_QUORUM_TIMEOUT=0\n") output, retval = utils.start_service("cman") if retval != 0: print(output) utils.err("unable to start cman") else: service_list.append("corosync") if utils.need_to_handle_qdevice_service(): service_list.append("corosync-qdevice") service_list.append("pacemaker") for service in service_list: output, retval = utils.start_service(service) if retval != 0: print(output) utils.err("unable to start {0}".format(service)) if wait: wait_for_nodes_started([], wait_timeout) def start_cluster_all(): wait = False wait_timeout = None if 
"--wait" in utils.pcs_options: wait_timeout = utils.validate_wait_get_timeout(False) wait = True all_nodes = utils.getNodesFromCorosyncConf() start_cluster_nodes(all_nodes) if wait: wait_for_nodes_started(all_nodes, wait_timeout) def start_cluster_nodes(nodes): # Large clusters take longer time to start up. So we make the timeout longer # for each 8 nodes: # 1 - 8 nodes: 1 * timeout # 9 - 16 nodes: 2 * timeout # 17 - 24 nodes: 3 * timeout # and so on # Users can override this and set their own timeout by specifying # the --request-timeout option (see utils.sendHTTPRequest). timeout = int( settings.default_request_timeout * math.ceil(len(nodes) / 8.0) ) node_errors = parallel_for_nodes( utils.startCluster, nodes, quiet=True, timeout=timeout ) if node_errors: utils.err( "unable to start all nodes\n" + "\n".join(node_errors.values()) ) def is_node_fully_started(node_status): return ( "online" in node_status and "pending" in node_status and node_status["online"] and not node_status["pending"] ) def wait_for_local_node_started(stop_at, interval): try: while True: time.sleep(interval) node_status = lib_pacemaker.get_local_node_status( utils.cmd_runner() ) if is_node_fully_started(node_status): return 0, "Started" if datetime.datetime.now() > stop_at: return 1, "Waiting timeout" except LibraryError as e: return 1, "Unable to get node status: {0}".format( "\n".join([build_report_message(item) for item in e.args]) ) def wait_for_remote_node_started(node, stop_at, interval): while True: time.sleep(interval) code, output = utils.getPacemakerNodeStatus(node) # HTTP error, permission denied or unable to auth # there is no point in trying again as it won't get magically fixed if code in [1, 3, 4]: return 1, output if code == 0: try: status = json.loads(output) if (is_node_fully_started(status)): return 0, "Started" except (ValueError, KeyError): # this won't get fixed either return 1, "Unable to get node status" if datetime.datetime.now() > stop_at: return 1, "Waiting timeout" def wait_for_nodes_started(node_list, timeout=None): timeout = 60 * 15 if timeout is None else timeout interval = 2 stop_at = datetime.datetime.now() + datetime.timedelta(seconds=timeout) print("Waiting for node(s) to start...") if not node_list: code, output = wait_for_local_node_started(stop_at, interval) if code != 0: utils.err(output) else: print(output) else: node_errors = parallel_for_nodes( wait_for_remote_node_started, node_list, stop_at, interval ) if node_errors: utils.err("unable to verify all nodes have started") def stop_cluster_all(): stop_cluster_nodes(utils.getNodesFromCorosyncConf()) def stop_cluster_nodes(nodes): all_nodes = utils.getNodesFromCorosyncConf() unknown_nodes = set(nodes) - set(all_nodes) if unknown_nodes: utils.err( "nodes '%s' do not appear to exist in configuration" % "', '".join(unknown_nodes) ) stopping_all = set(nodes) >= set(all_nodes) if "--force" not in utils.pcs_options and not stopping_all: error_list = [] for node in nodes: retval, data = utils.get_remote_quorumtool_output(node) if retval != 0: error_list.append(node + ": " + data) continue # we are sure whether we are on cman cluster or not because only # nodes from a local cluster can be stopped (see nodes validation # above) if utils.is_rhel6(): quorum_info = utils.parse_cman_quorum_info(data) else: quorum_info = utils.parse_quorumtool_output(data) if quorum_info: if not quorum_info["quorate"]: continue if utils.is_node_stop_cause_quorum_loss( quorum_info, local=False, node_list=nodes ): utils.err( "Stopping the node(s) will cause a 
loss of the quorum" + ", use --force to override" ) else: # We have the info, no need to print errors error_list = [] break if not utils.is_node_offline_by_quorumtool_output(data): error_list.append("Unable to get quorum status") # else the node seems to be stopped already if error_list: utils.err( "Unable to determine whether stopping the nodes will cause " + "a loss of the quorum, use --force to override\n" + "\n".join(error_list) ) was_error = False node_errors = parallel_for_nodes( utils.repeat_if_timeout(utils.stopPacemaker), nodes, quiet=True ) accessible_nodes = [ node for node in nodes if node not in node_errors.keys() ] if node_errors: utils.err( "unable to stop all nodes\n" + "\n".join(node_errors.values()), exit_after_error=not accessible_nodes ) was_error = True for node in node_errors: print("{0}: Not stopping cluster - node is unreachable".format(node)) node_errors = parallel_for_nodes( utils.stopCorosync, accessible_nodes, quiet=True ) if node_errors: utils.err( "unable to stop all nodes\n" + "\n".join(node_errors.values()) ) if was_error: utils.err("unable to stop all nodes") def enable_cluster(argv): if len(argv) > 0: enable_cluster_nodes(argv) return try: utils.enableServices() except LibraryError as e: process_library_reports(e.args) def disable_cluster(argv): if len(argv) > 0: disable_cluster_nodes(argv) return try: utils.disableServices() except LibraryError as e: process_library_reports(e.args) def enable_cluster_all(): enable_cluster_nodes(utils.getNodesFromCorosyncConf()) def disable_cluster_all(): disable_cluster_nodes(utils.getNodesFromCorosyncConf()) def enable_cluster_nodes(nodes): error_list = utils.map_for_error_list(utils.enableCluster, nodes) if len(error_list) > 0: utils.err("unable to enable all nodes\n" + "\n".join(error_list)) def disable_cluster_nodes(nodes): error_list = utils.map_for_error_list(utils.disableCluster, nodes) if len(error_list) > 0: utils.err("unable to disable all nodes\n" + "\n".join(error_list)) def destroy_cluster(argv, keep_going=False): if len(argv) > 0: # stop pacemaker and resources while cluster is still quorate nodes = argv node_errors = parallel_for_nodes( utils.repeat_if_timeout(utils.stopPacemaker), nodes, quiet=True ) # proceed with destroy regardless of errors # destroy will stop any remaining cluster daemons node_errors = parallel_for_nodes(utils.destroyCluster, nodes, quiet=True) if node_errors: if keep_going: print( "Warning: unable to destroy cluster\n" + "\n".join(node_errors.values()) ) else: utils.err( "unable to destroy cluster\n" + "\n".join(node_errors.values()) ) def stop_cluster(argv): if len(argv) > 0: stop_cluster_nodes(argv) return if "--force" not in utils.pcs_options: if utils.is_rhel6(): output_status, dummy_retval = utils.run(["cman_tool", "status"]) output_nodes, dummy_retval = utils.run([ "cman_tool", "nodes", "-F", "id,type,votes,name" ]) if output_status == output_nodes: # when both commands return the same error output = output_status else: output = output_status + "\n---Votes---\n" + output_nodes quorum_info = utils.parse_cman_quorum_info(output) else: output, dummy_retval = utils.run(["corosync-quorumtool", "-p", "-s"]) # retval is 0 on success if node is not in partition with quorum # retval is 1 on error OR on success if node has quorum quorum_info = utils.parse_quorumtool_output(output) if quorum_info: if utils.is_node_stop_cause_quorum_loss(quorum_info, local=True): utils.err( "Stopping the node will cause a loss of the quorum" + ", use --force to override" ) elif not 
utils.is_node_offline_by_quorumtool_output(output): utils.err( "Unable to determine whether stopping the node will cause " + "a loss of the quorum, use --force to override" ) # else the node seems to be stopped already, proceed to be sure stop_all = ( "--pacemaker" not in utils.pcs_options and "--corosync" not in utils.pcs_options ) if stop_all or "--pacemaker" in utils.pcs_options: stop_cluster_pacemaker() if stop_all or "--corosync" in utils.pcs_options: stop_cluster_corosync() def stop_cluster_pacemaker(): print("Stopping Cluster (pacemaker)...") if not is_systemctl(): command = ["service", "pacemaker", "stop"] # If --skip-cman is not specified, pacemaker init script will stop cman # and corosync as well. That way some of the nodes may stop cman before # others stop pacemaker, which leads to quorum loss. We need to keep # quorum until all pacemaker resources are stopped as some of them may # need quorum to be able to stop. if utils.is_cman_cluster(): command.append("--skip-cman") else: command = ["systemctl", "stop", "pacemaker"] output, retval = utils.run(command) if retval != 0: print(output) utils.err("unable to stop pacemaker") def stop_cluster_corosync(): if utils.is_rhel6(): print("Stopping Cluster (cman)...") output, retval = utils.stop_service("cman") if retval != 0: print(output) utils.err("unable to stop cman") else: print("Stopping Cluster (corosync)...") service_list = [] if utils.need_to_handle_qdevice_service(): service_list.append("corosync-qdevice") service_list.append("corosync") for service in service_list: output, retval = utils.stop_service(service) if retval != 0: print(output) utils.err("unable to stop {0}".format(service)) def kill_cluster(argv): daemons = [ "crmd", "pengine", "attrd", "lrmd", "stonithd", "cib", "pacemakerd", "pacemaker_remoted", "corosync-qdevice", "corosync", ] dummy_output, dummy_retval = utils.run(["killall", "-9"] + daemons) # if dummy_retval != 0: # print "Error: unable to execute killall -9" # print output # sys.exit(1) def cluster_push(argv): if len(argv) > 2: usage.cluster(["cib-push"]) sys.exit(1) filename = None scope = None timeout = None diff_against = None if "--wait" in utils.pcs_options: timeout = utils.validate_wait_get_timeout() for arg in argv: if "=" not in arg: filename = arg else: arg_name, arg_value = arg.split("=", 1) if arg_name == "scope": if "--config" in utils.pcs_options: utils.err("Cannot use both scope and --config") if not utils.is_valid_cib_scope(arg_value): utils.err("invalid CIB scope '%s'" % arg_value) else: scope = arg_value elif arg_name == "diff-against": diff_against = arg_value else: usage.cluster(["cib-push"]) sys.exit(1) if "--config" in utils.pcs_options: scope = "configuration" if diff_against and scope: utils.err("Cannot use both scope and diff-against") if not filename: usage.cluster(["cib-push"]) sys.exit(1) try: new_cib_dom = xml.dom.minidom.parse(filename) if scope and not new_cib_dom.getElementsByTagName(scope): utils.err( "unable to push cib, scope '%s' not present in new cib" % scope ) except (EnvironmentError, xml.parsers.expat.ExpatError) as e: utils.err("unable to parse new cib: %s" % e) if diff_against: try: xml.dom.minidom.parse(diff_against) except (EnvironmentError, xml.parsers.expat.ExpatError) as e: utils.err("unable to parse original cib: %s" % e) runner = utils.cmd_runner() command = [ "crm_diff", "--original", diff_against, "--new", filename, "--no-version" ] patch, error, dummy_retval = runner.run(command) # dummy_retval == 1 means one of two things: # a) an error has occured # b) 
        #    --original and --new differ
        # therefore it's of no use to see if an error occurred
        if error.strip():
            utils.err("unable to diff the CIBs:\n" + error)
        if not patch.strip():
            utils.err(
                "The new CIB is the same as the original CIB, nothing to push."
            )

        command = ["cibadmin", "--patch", "--xml-pipe"]
        output, error, retval = runner.run(command, patch)
        if retval != 0:
            utils.err("unable to push cib\n" + error + output)

    else:
        command = ["cibadmin", "--replace", "--xml-file", filename]
        if scope:
            command.append("--scope=%s" % scope)
        output, retval = utils.run(command)
        if retval != 0:
            utils.err("unable to push cib\n" + output)

    print("CIB updated")

    if "--wait" not in utils.pcs_options:
        return
    cmd = ["crm_resource", "--wait"]
    if timeout:
        cmd.extend(["--timeout", str(timeout)])
    output, retval = utils.run(cmd)
    if retval != 0:
        msg = []
        if retval == settings.pacemaker_wait_timeout_status:
            msg.append("waiting timeout")
        if output:
            msg.append("\n" + output)
        utils.err("\n".join(msg).strip())

def cluster_edit(argv):
    if 'EDITOR' in os.environ:
        if len(argv) > 1:
            usage.cluster(["edit"])
            sys.exit(1)

        scope = None
        scope_arg = ""
        for arg in argv:
            if "=" not in arg:
                usage.cluster(["edit"])
                sys.exit(1)
            else:
                arg_name, arg_value = arg.split("=", 1)
                if arg_name == "scope" and "--config" not in utils.pcs_options:
                    if not utils.is_valid_cib_scope(arg_value):
                        utils.err("invalid CIB scope '%s'" % arg_value)
                    else:
                        scope_arg = arg
                        scope = arg_value
                else:
                    usage.cluster(["edit"])
                    sys.exit(1)
        if "--config" in utils.pcs_options:
            scope = "configuration"
            # Leave scope_arg empty as cluster_push will pick up a --config
            # option from utils.pcs_options
            scope_arg = ""

        editor = os.environ['EDITOR']
        tempcib = tempfile.NamedTemporaryFile(mode="w+", suffix=".pcs")
        cib = utils.get_cib(scope)
        tempcib.write(cib)
        tempcib.flush()
        try:
            subprocess.call([editor, tempcib.name])
        except OSError:
            utils.err("unable to open file with $EDITOR: " + editor)

        tempcib.seek(0)
        newcib = "".join(tempcib.readlines())
        if newcib == cib:
            print("CIB not updated, no changes detected")
        else:
            cluster_push([arg for arg in [tempcib.name, scope_arg] if arg])
    else:
        utils.err("$EDITOR environment variable is not set")

def get_cib(argv):
    if len(argv) > 2:
        usage.cluster(["cib"])
        sys.exit(1)

    filename = None
    scope = None
    for arg in argv:
        if "=" not in arg:
            filename = arg
        else:
            arg_name, arg_value = arg.split("=", 1)
            if arg_name == "scope" and "--config" not in utils.pcs_options:
                if not utils.is_valid_cib_scope(arg_value):
                    utils.err("invalid CIB scope '%s'" % arg_value)
                else:
                    scope = arg_value
            else:
                usage.cluster(["cib"])
                sys.exit(1)
    if "--config" in utils.pcs_options:
        scope = "configuration"

    if not filename:
        print(utils.get_cib(scope), end="")
    else:
        try:
            f = open(filename, 'w')
            output = utils.get_cib(scope)
            if output != "":
                f.write(output)
            else:
                utils.err("No data in the CIB")
        except IOError as e:
            utils.err(
                "Unable to write to file '%s', %s" % (filename, e.strerror)
            )

def _ensure_cluster_is_offline_if_atb_should_be_enabled(
    lib_env, node_num_modifier, skip_offline_nodes=False
):
    """
    Check that the cluster is offline if auto tie breaker should be enabled.
    Raises LibraryError if ATB needs to be enabled and the cluster is not
    offline.

    lib_env -- LibraryEnvironment
    node_num_modifier -- number which will be added to the number of nodes in
        the cluster when determining whether ATB is needed
    skip_offline_nodes -- if True, offline nodes will be skipped
    """
    if not lib_env.is_cman_cluster:
        corosync_conf = lib_env.get_corosync_conf()
        if lib_sbd.atb_has_to_be_enabled(
            lib_env.cmd_runner(), corosync_conf, node_num_modifier
        ):
            print(
                "Warning: auto_tie_breaker quorum option will be enabled to "
                "make SBD fencing effective after this change. Cluster has to "
                "be offline to be able to make this change."
            )
            com_cmd = CheckCorosyncOffline(
                lib_env.report_processor, skip_offline_nodes
            )
            com_cmd.set_targets(
                lib_env.get_node_target_factory().get_target_list(
                    corosync_conf.get_nodes()
                )
            )
            run_and_raise(lib_env.get_node_communicator(), com_cmd)

def cluster_node(argv):
    if len(argv) < 1:
        usage.cluster(["node"])
        sys.exit(1)

    if argv[0] == "add":
        add_node = True
    elif argv[0] in ["remove", "delete"]:
        add_node = False
    elif argv[0] == "add-outside":
        try:
            node_add_outside_cluster(
                utils.get_library_wrapper(),
                argv[1:],
                utils.get_modifiers(),
            )
        except CmdLineInputError as e:
            utils.exit_on_cmdline_input_errror(e, "cluster", "node")
        return
    else:
        usage.cluster(["node"])
        sys.exit(1)

    if len(argv) != 2:
        usage.cluster([" ".join(["node", argv[0]])])
        sys.exit(1)

    node = argv[1]
    node0, node1 = utils.parse_multiring_node(node)
    if not node0:
        utils.err("missing ring 0 address of the node")

    # allow to continue if removing a node with --force
    if add_node or "--force" not in utils.pcs_options:
        status, output = utils.checkAuthorization(node0)
        if status != 0:
            if status == 2:
                msg = "pcsd is not running on {0}".format(node0)
            elif status == 3:
                msg = (
                    "{node} is not yet authenticated"
                    " (try pcs cluster auth {node})"
                ).format(node=node0)
            else:
                msg = output
            if not add_node:
                msg += ", use --force to override"
            utils.err(msg)

    lib_env = utils.get_lib_env()
    modifiers = utils.get_modifiers()
    if add_node == True:
        node_add(lib_env, node0, node1, modifiers)
    else:
        node_remove(lib_env, node0, modifiers)

def node_add_outside_cluster(lib, argv, modifiers):
    if len(argv) != 2:
        raise CmdLineInputError(
            "Usage: pcs cluster node add-outside <node[,node-alt]>"
            " <cluster node>"
        )

    if len(modifiers["watchdog"]) > 1:
        raise CmdLineInputError("Multiple watchdogs defined")

    node_ring0, node_ring1 = utils.parse_multiring_node(argv[0])
    cluster_node = argv[1]
    data = [
        ("new_nodename", node_ring0),
    ]

    if node_ring1:
        data.append(("new_ring1addr", node_ring1))
    if modifiers["watchdog"]:
        data.append(("watchdog", modifiers["watchdog"][0]))
    if modifiers["device"]:
        # way to send data in an array
        data += [("devices[]", device) for device in modifiers["device"]]

    communicator = utils.get_lib_env().node_communicator()
    try:
        communicator.call_host(
            cluster_node,
            "remote/add_node_all",
            communicator.format_data_dict(data),
        )
    except NodeCommandUnsuccessfulException as e:
        print(e.reason)
    except NodeCommunicationException as e:
        process_library_reports([node_communicator_exception_to_report_item(e)])

def node_add(lib_env, node0, node1, modifiers):
    wait = False
    wait_timeout = None
    if "--start" in utils.pcs_options and "--wait" in utils.pcs_options:
        wait_timeout = utils.validate_wait_get_timeout(False)
        wait = True

    need_ring1_address = utils.need_ring1_address(utils.getCorosyncConf())
    if not node1 and need_ring1_address:
        utils.err(
            "cluster is configured for RRP, "
            "you have to specify ring 1 address for the node"
        )
    elif node1 and not need_ring1_address:
        utils.err(
            "cluster is not configured for RRP, "
            "you must not specify ring 1 address for the node"
        )

    node_addr = NodeAddresses(node0, node1)

    (canAdd, error) = utils.canAddNodeToCluster(
        lib_env.get_node_communicator(),
        lib_env.get_node_target_factory().get_target(node_addr)
    )
    if not canAdd:
        utils.err("Unable to add '%s' to cluster: %s" % (node0, error))

    report_processor = lib_env.report_processor
    com_factory = lib_env.communicator_factory

    # First set up everything other than corosync. Once the new node is
    # present in corosync.conf / cluster.conf, it's considered part of a
    # cluster and the node add command cannot be run again. So we need to
    # minimize the amount of actions (and therefore possible failures) after
    # adding the node to corosync.
    try:
        # qdevice setup
        if not utils.is_rhel6():
            conf_facade = corosync_conf_facade.from_string(
                utils.getCorosyncConf()
            )
            qdevice_model, qdevice_model_options, _, _ = (
                conf_facade.get_quorum_device_settings()
            )
            if qdevice_model == "net":
                _add_device_model_net(
                    lib_env,
                    qdevice_model_options["host"],
                    conf_facade.get_cluster_name(),
                    [node_addr],
                    skip_offline_nodes=False
                )

        # sbd setup
        new_node_target = lib_env.get_node_target_factory().get_target(
            node_addr
        )
        if lib_sbd.is_sbd_enabled(utils.cmd_runner()):
            if "--watchdog" not in utils.pcs_options:
                watchdog = settings.sbd_watchdog_default
                print("Warning: using default watchdog '{0}'".format(
                    watchdog
                ))
            else:
                watchdog = utils.pcs_options["--watchdog"][0]

            _ensure_cluster_is_offline_if_atb_should_be_enabled(
                lib_env, 1, modifiers["skip_offline_nodes"]
            )

            report_processor.process(lib_reports.sbd_check_started())
            device_list = utils.pcs_options.get("--device", [])
            device_num = len(device_list)
            sbd_with_device = lib_sbd.is_device_set_local()
            sbd_cfg = environment_file_to_dict(lib_sbd.get_local_sbd_config())
            if sbd_with_device and device_num not in range(1, 4):
                utils.err(
                    "SBD is configured to use shared storage, therefore it "
                    "is required to specify at least one device and at most "
                    "{0} devices (option --device).".format(
                        settings.sbd_max_device_num
                    )
                )
            elif not sbd_with_device and device_num > 0:
                utils.err(
                    "SBD is not configured to use shared device, "
                    "therefore --device should not be specified"
                )

            com_cmd = CheckSbd(lib_env.report_processor)
            com_cmd.add_request(new_node_target, watchdog, device_list)
            run_and_raise(com_factory.get_communicator(), com_cmd)

            com_cmd = SetSbdConfig(lib_env.report_processor)
            com_cmd.add_request(
                new_node_target,
                lib_sbd.create_sbd_config(
                    sbd_cfg, new_node_target.label, watchdog, device_list
                )
            )
            run_and_raise(com_factory.get_communicator(), com_cmd)

            com_cmd = EnableSbdService(lib_env.report_processor)
            com_cmd.add_request(new_node_target)
            run_and_raise(com_factory.get_communicator(), com_cmd)
        else:
            com_cmd = DisableSbdService(lib_env.report_processor)
            com_cmd.add_request(new_node_target)
            run_and_raise(com_factory.get_communicator(), com_cmd)

        # booth setup
        booth_sync.send_all_config_to_node(
            com_factory.get_communicator(),
            report_processor,
            new_node_target,
            rewrite_existing=modifiers["force"],
            skip_wrong_config=modifiers["force"]
        )

        if os.path.isfile(settings.corosync_authkey_file):
            com_cmd = DistributeFiles(
                lib_env.report_processor,
                node_communication_format.corosync_authkey_file(
                    open(settings.corosync_authkey_file, "rb").read()
                ),
                # added force, it was missing before
                # but it doesn't make sense here
                skip_offline_targets=modifiers["skip_offline_nodes"],
                allow_fails=modifiers["force"],
            )
            com_cmd.set_targets(
                lib_env.get_node_target_factory().get_target_list([node_addr])
            )
            run_and_raise(lib_env.get_node_communicator(), com_cmd)

        # do not send pcmk authkey to guest and remote nodes, they either have
        # it or are not working anyway
        # if the cluster is stopped, we cannot get the cib anyway
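        # Illustrative sketch (hypothetical helper, not part of pcs): the SBD
        # device-count rule enforced earlier in this function can be stated as
        # a standalone predicate, assuming settings.sbd_max_device_num == 3
        # (the value implied by range(1, 4) above):
        #
        #   def sbd_device_count_ok(sbd_with_device, device_num, max_devices=3):
        #       # shared storage: 1..max_devices devices; otherwise none at all
        #       if sbd_with_device:
        #           return 1 <= device_num <= max_devices
        #       return device_num == 0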
        _share_authkey(
            lib_env,
            get_nodes(lib_env.get_corosync_conf()),
            node_addr,
            skip_offline_nodes=modifiers["skip_offline_nodes"],
            allow_incomplete_distribution=modifiers["skip_offline_nodes"]
        )
    except LibraryError as e:
        process_library_reports(e.args)
    except NodeCommunicationException as e:
        process_library_reports(
            [node_communicator_exception_to_report_item(e)]
        )

    # Now add the new node to corosync.conf / cluster.conf
    corosync_conf = None
    for my_node in utils.getNodesFromCorosyncConf():
        retval, output = utils.addLocalNode(my_node, node0, node1)
        if retval != 0:
            utils.err(
                "unable to add %s on %s - %s" % (node0, my_node, output.strip()),
                False
            )
        else:
            print("%s: Corosync updated" % my_node)
            corosync_conf = output

    if not utils.is_cman_cluster():
        # When corosync 2 is in use, the procedure for adding a node is:
        # 1. add the new node to corosync.conf
        # 2. reload corosync.conf before the new node is started
        # 3. start the new node
        # If done otherwise, membership gets broken and qdevice hangs. The
        # cluster will recover after a minute or so, but it is still the
        # wrong way to do it.
        # When corosync 1 is in use, the procedure for adding a node is:
        # 1. add the new node to cluster.conf
        # 2. start the new node
        # Starting the node will automatically reload cluster.conf on all
        # nodes. If the config is reloaded before the new node is started,
        # the new node gets fenced by the cluster.
        output, retval = utils.reloadCorosync()

    if corosync_conf != None:
        # send local cluster pcsd configs to the new node
        # may be used for sending the corosync config as well in the future
        pcsd_data = {
            'nodes': [node0],
            'force': True,
        }
        output, retval = utils.run_pcsdcli('send_local_configs', pcsd_data)
        if retval != 0:
            utils.err("Unable to set pcsd configs")
        if output['status'] == 'notauthorized':
            utils.err(
                "Unable to authenticate to " + node0
                + ", try running 'pcs cluster auth'"
            )
        if output['status'] == 'ok' and output['data']:
            try:
                node_response = output['data'][node0]
                if node_response['status'] not in ['ok', 'not_supported']:
                    utils.err("Unable to set pcsd configs")
            except:
                utils.err('Unable to communicate with pcsd')

        print("Setting up corosync...")
        utils.setCorosyncConfig(node0, corosync_conf)

        if "--enable" in utils.pcs_options:
            retval, err = utils.enableCluster(node0)
            if retval != 0:
                print("Warning: enable cluster - {0}".format(err))
        if "--start" in utils.pcs_options or utils.is_rhel6():
            # Always start the new node on a cman cluster in order to reload
            # cluster.conf (see above).
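            # A compact restatement of the ordering described above
            # (illustrative only, using the helpers called in this function):
            #
            #   corosync 2: utils.addLocalNode(...)   # update corosync.conf
            #               utils.reloadCorosync()    # reload before starting
            #               utils.startCluster(node)  # then start the new node
            #
            #   corosync 1: utils.addLocalNode(...)   # update cluster.conf
            #               utils.startCluster(node)  # start reloads the conf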
            retval, err = utils.startCluster(node0)
            if retval != 0:
                print("Warning: start cluster - {0}".format(err))

        pcsd.pcsd_sync_certs([node0], exit_after_error=False)
    else:
        utils.err("Unable to update any nodes")

    if utils.is_cman_with_udpu_transport():
        print("Warning: Using udpu transport on a CMAN cluster, "
            + "cluster restart is required to apply node addition")
    if wait:
        print()
        wait_for_nodes_started([node0], wait_timeout)

def node_remove(lib_env, node0, modifiers):
    if node0 not in utils.getNodesFromCorosyncConf():
        utils.err(
            "node '%s' does not appear to exist in configuration" % node0
        )
    if "--force" not in utils.pcs_options:
        retval, data = utils.get_remote_quorumtool_output(node0)
        if retval != 0:
            utils.err(
                "Unable to determine whether removing the node will cause "
                + "a loss of the quorum, use --force to override\n"
                + data
            )
        # we are sure whether we are on a cman cluster or not because only
        # nodes from a local cluster can be stopped (see nodes validation
        # above)
        if utils.is_rhel6():
            quorum_info = utils.parse_cman_quorum_info(data)
        else:
            quorum_info = utils.parse_quorumtool_output(data)
        if quorum_info:
            if utils.is_node_stop_cause_quorum_loss(
                quorum_info, local=False, node_list=[node0]
            ):
                utils.err(
                    "Removing the node will cause a loss of the quorum"
                    + ", use --force to override"
                )
        elif not utils.is_node_offline_by_quorumtool_output(data):
            utils.err(
                "Unable to determine whether removing the node will cause "
                + "a loss of the quorum, use --force to override\n"
                + data
            )
        # else the node seems to be stopped already, we're ok to proceed

    try:
        _ensure_cluster_is_offline_if_atb_should_be_enabled(
            lib_env, -1, modifiers["skip_offline_nodes"]
        )
    except LibraryError as e:
        utils.process_library_reports(e.args)

    nodesRemoved = False
    c_nodes = utils.getNodesFromCorosyncConf()
    destroy_cluster([node0], keep_going=("--force" in utils.pcs_options))
    for my_node in c_nodes:
        if my_node == node0:
            continue
        retval, output = utils.removeLocalNode(my_node, node0)
        if retval != 0:
            utils.err(
                "unable to remove %s on %s - %s" % (node0, my_node, output.strip()),
                False
            )
        else:
            if output[0] == 0:
                print("%s: Corosync updated" % my_node)
                nodesRemoved = True
            else:
                utils.err(
                    "%s: Error occurred while executing command: %s"
                    % (my_node, "".join(output[1])),
                    False
                )

    if nodesRemoved == False:
        utils.err("Unable to update any nodes")

    output, retval = utils.reloadCorosync()
    output, retval = utils.run(["crm_node", "--force", "-R", node0])
    if utils.is_cman_with_udpu_transport():
        print("Warning: Using udpu transport on a CMAN cluster, "
            + "cluster restart is required to apply node removal")

def cluster_localnode(argv):
    if len(argv) != 2:
        usage.cluster()
        exit(1)
    elif argv[0] == "add":
        node = argv[1]
        if not utils.is_rhel6():
            success = utils.addNodeToCorosync(node)
        else:
            success = utils.addNodeToClusterConf(node)
        if success:
            print("%s: successfully added!" % node)
        else:
            utils.err("unable to add %s" % node)
    elif argv[0] in ["remove", "delete"]:
        node = argv[1]
        if not utils.is_rhel6():
            success = utils.removeNodeFromCorosync(node)
        else:
            success = utils.removeNodeFromClusterConf(node)

        # The removed node might be present in the CIB. If it is, pacemaker
        # will show it as offline, even though it's no longer in the corosync
        # / cman config. We remove the node by running 'crm_node -R <node>' on
        # the node where the remove command was run. This only works if
        # pacemaker is running. If it's not, we need to remove the node
        # manually from the CIB on all nodes.
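        # For reference, the fallback below amounts to running the following
        # (with <node> substituted) against the CIB file instead of the live
        # cluster:
        #
        #   cibadmin --delete-all --force \
        #       --xpath="/cib/configuration/nodes/node[@uname='<node>']"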
        cib_node_remove = None
        if utils.usefile:
            cib_node_remove = utils.filename
        elif not utils.is_service_running(utils.cmd_runner(), "pacemaker"):
            cib_node_remove = os.path.join(settings.cib_dir, "cib.xml")
        if cib_node_remove:
            original_usefile, original_filename = utils.usefile, utils.filename
            utils.usefile = True
            utils.filename = cib_node_remove
            dummy_output, dummy_retval = utils.run([
                "cibadmin",
                "--delete-all",
                "--force",
                "--xpath=/cib/configuration/nodes/node[@uname='{0}']".format(
                    node
                ),
            ])
            utils.usefile, utils.filename = original_usefile, original_filename

        if success:
            print("%s: successfully removed!" % node)
        else:
            utils.err("unable to remove %s" % node)
    else:
        usage.cluster()
        exit(1)

def cluster_uidgid_rhel6(argv, silent_list = False):
    if not os.path.isfile(settings.cluster_conf_file):
        utils.err(
            "the file %s doesn't exist on this machine, "
            "create a cluster before running this command"
            % settings.cluster_conf_file
        )
    if len(argv) == 0:
        found = False
        output, retval = utils.run(
            ["ccs", "-f", settings.cluster_conf_file, "--lsmisc"]
        )
        if retval != 0:
            utils.err("error running ccs\n" + output)
        lines = output.split('\n')
        for line in lines:
            if line.startswith('UID/GID: '):
                print(line)
                found = True
        if not found and not silent_list:
            print("No uidgids configured in cluster.conf")
        return

    command = argv.pop(0)
    uid = ""
    gid = ""

    if (command == "add" or command == "rm") and len(argv) > 0:
        for arg in argv:
            if arg.find('=') == -1:
                utils.err("uidgid options must be of the form uid=<uid> gid=<gid>")

            (k, v) = arg.split('=', 1)
            if k != "uid" and k != "gid":
                utils.err("%s is not a valid key, you must use uid or gid" % k)

            if k == "uid":
                uid = v
            if k == "gid":
                gid = v
        if uid == "" and gid == "":
            utils.err("you must set either uid or gid")

        if command == "add":
            output, retval = utils.run([
                "ccs", "-f", settings.cluster_conf_file,
                "--setuidgid", "uid="+uid, "gid="+gid
            ])
            if retval != 0:
                utils.err("unable to add uidgid\n" + output.rstrip())
        elif command == "rm":
            output, retval = utils.run([
                "ccs", "-f", settings.cluster_conf_file,
                "--rmuidgid", "uid="+uid, "gid="+gid
            ])
            if retval != 0:
                utils.err("unable to remove uidgid\n" + output.rstrip())

        # If we make a change, we sync out the changes to all nodes unless
        # we're using -f
        if not utils.usefile:
            sync_nodes(utils.getNodesFromCorosyncConf(), utils.getCorosyncConf())
    else:
        usage.cluster(["uidgid"])
        exit(1)

def cluster_uidgid(argv, silent_list = False):
    if utils.is_rhel6():
        cluster_uidgid_rhel6(argv, silent_list)
        return

    if len(argv) == 0:
        found = False
        uid_gid_files = os.listdir(settings.corosync_uidgid_dir)
        for ug_file in uid_gid_files:
            uid_gid_dict = utils.read_uid_gid_file(ug_file)
            if "uid" in uid_gid_dict or "gid" in uid_gid_dict:
                line = "UID/GID: uid="
                if "uid" in uid_gid_dict:
                    line += uid_gid_dict["uid"]
                line += " gid="
                if "gid" in uid_gid_dict:
                    line += uid_gid_dict["gid"]
                print(line)
                found = True
        if not found and not silent_list:
            print("No uidgids configured in cluster.conf")
        return

    command = argv.pop(0)
    uid = ""
    gid = ""
    if (command == "add" or command == "rm") and len(argv) > 0:
        for arg in argv:
            if arg.find('=') == -1:
                utils.err("uidgid options must be of the form uid=<uid> gid=<gid>")

            (k, v) = arg.split('=', 1)
            if k != "uid" and k != "gid":
                utils.err("%s is not a valid key, you must use uid or gid" % k)

            if k == "uid":
                uid = v
            if k == "gid":
                gid = v
        if uid == "" and gid == "":
            utils.err("you must set either uid or gid")

        if command == "add":
            utils.write_uid_gid_file(uid, gid)
        elif command == "rm":
            retval = utils.remove_uid_gid_file(uid, gid)
            if retval == False:
                utils.err("no uidgid files with uid=%s and gid=%s
found" % (uid,gid)) else: usage.cluster(["uidgid"]) exit(1) def cluster_get_corosync_conf(argv): if utils.is_rhel6(): utils.err("corosync.conf is not supported on CMAN clusters") if len(argv) > 1: usage.cluster() exit(1) if len(argv) == 0: print(utils.getCorosyncConf(), end="") return node = argv[0] retval, output = utils.getCorosyncConfig(node) if retval != 0: utils.err(output) else: print(output, end="") def cluster_reload(argv): if len(argv) != 1 or argv[0] != "corosync": usage.cluster(["reload"]) exit(1) output, retval = utils.reloadCorosync() if retval != 0 or "invalid option" in output: utils.err(output.rstrip()) print("Corosync reloaded") # Completely tear down the cluster & remove config files # Code taken from cluster-clean script in pacemaker def cluster_destroy(argv): if "--all" in utils.pcs_options: # destroy remote and guest nodes cib = None lib_env = utils.get_lib_env() try: cib = lib_env.get_cib() except LibraryError as e: warn( "Unable to load CIB to get guest and remote nodes from it, " "those nodes will not be deconfigured." ) if cib is not None: try: all_remote_nodes = get_nodes(tree=cib) if len(all_remote_nodes) > 0: _destroy_pcmk_remote_env( lib_env, all_remote_nodes, skip_offline_nodes=True, allow_fails=True ) except LibraryError as e: utils.process_library_reports(e.args) # destroy full-stack nodes destroy_cluster(utils.getNodesFromCorosyncConf()) else: print("Shutting down pacemaker/corosync services...") for service in ["pacemaker", "corosync-qdevice", "corosync"]: # Returns an error if a service is not running. It is safe to # ignore it since we want it not to be running anyways. utils.stop_service(service) print("Killing any remaining services...") os.system("killall -q -9 corosync corosync-qdevice aisexec heartbeat pacemakerd ccm stonithd ha_logd lrmd crmd pengine attrd pingd mgmtd cib fenced dlm_controld gfs_controld") try: utils.disableServices() except: # previously errors were suppressed in here, let's keep it that way # for now pass try: disable_service(utils.cmd_runner(), lib_sbd.get_sbd_service_name()) except: # it's not a big deal if sbd disable fails pass print("Removing all cluster configuration files...") if utils.is_rhel6(): os.system("rm -f /etc/cluster/cluster.conf") else: os.system("rm -f /etc/corosync/corosync.conf") os.system("rm -f {0}".format(settings.corosync_authkey_file)) state_files = ["cib.xml*", "cib-*", "core.*", "hostcache", "cts.*", "pe*.bz2","cib.*"] for name in state_files: os.system("find /var/lib/pacemaker -name '"+name+"' -exec rm -f \{\} \;") os.system("rm -f {0}".format(settings.pacemaker_authkey_file)) try: qdevice_net.client_destroy() except: # errors from deleting other files are suppressed as well # we do not want to fail if qdevice was not set up pass def cluster_verify(argv): if len(argv) > 1: usage.cluster("verify") raise SystemExit(1) if argv: filename = argv[0] if not utils.usefile: #We must operate on given cib everywhere. 
            utils.usefile = True
            utils.filename = filename
        elif os.path.abspath(filename) == os.path.abspath(utils.filename):
            warn("File '{0}' specified twice".format(os.path.abspath(filename)))
        else:
            raise error(
                "Ambiguous cib filename specification: '{0}' vs -f '{1}'"
                .format(filename, utils.filename)
            )

    lib = utils.get_library_wrapper()
    try:
        lib.cluster.verify(verbose="-V" in utils.pcs_options)
    except LibraryError as e:
        utils.process_library_reports(e.args)

def cluster_report(argv):
    if len(argv) != 1:
        usage.cluster(["report"])
        sys.exit(1)

    outfile = argv[0]
    dest_outfile = outfile + ".tar.bz2"
    if os.path.exists(dest_outfile):
        if "--force" not in utils.pcs_options:
            utils.err(dest_outfile + " already exists, use --force to overwrite")
        else:
            try:
                os.remove(dest_outfile)
            except OSError as e:
                utils.err("Unable to remove " + dest_outfile + ": " + e.strerror)
    crm_report_opts = []

    crm_report_opts.append("-f")
    if "--from" in utils.pcs_options:
        crm_report_opts.append(utils.pcs_options["--from"])
        if "--to" in utils.pcs_options:
            crm_report_opts.append("-t")
            crm_report_opts.append(utils.pcs_options["--to"])
    else:
        yesterday = datetime.datetime.now() - datetime.timedelta(1)
        crm_report_opts.append(yesterday.strftime("%Y-%m-%d %H:%M"))

    crm_report_opts.append(outfile)
    output, retval = utils.run([settings.crm_report] + crm_report_opts)
    if (
        retval != 0
        and
        "ERROR: Cannot determine nodes; specify --nodes or --single-node"
        in output
    ):
        utils.err("cluster is not configured on this node")
    newoutput = ""
    for line in output.split("\n"):
        if (
            line.startswith("cat:") or line.startswith("grep")
            or line.startswith("tail")
        ):
            continue
        if "We will attempt to remove" in line:
            continue
        if "-p option" in line:
            continue
        if "However, doing" in line:
            continue
        if "to diagnose" in line:
            continue
        if "--dest" in line:
            line = line.replace("--dest", "")
        newoutput = newoutput + line + "\n"
    if retval != 0:
        utils.err(newoutput)
    print(newoutput)

def cluster_remote_node(argv):
    usage_add = """\
remote-node add <hostname> <resource id> [options]
        Enables the specified resource as a remote-node resource on the
        specified hostname (hostname should be the same as 'uname -n')."""
    usage_remove = """\
remote-node remove <hostname>
        Disables any resources configured to be remote-node resource on the
        specified hostname (hostname should be the same as 'uname -n')."""

    if len(argv) < 1:
        print("\nUsage: pcs cluster remote-node...")
        print(usage_add)
        print()
        print(usage_remove)
        print()
        sys.exit(1)

    command = argv.pop(0)
    if command == "add":
        if len(argv) < 2:
            print("\nUsage: pcs cluster remote-node add...")
            print(usage_add)
            print()
            sys.exit(1)
        if "--force" in utils.pcs_options:
            warn("this command is deprecated, use 'pcs cluster node add-guest'")
        else:
            raise error(
                "this command is deprecated, use 'pcs cluster node add-guest'"
                ", use --force to override"
            )
        hostname = argv.pop(0)
        rsc = argv.pop(0)
        if not utils.dom_get_resource(utils.get_cib_dom(), rsc):
            utils.err("unable to find resource '%s'" % rsc)
        resource.resource_update(
            rsc,
            ["meta", "remote-node="+hostname] + argv,
            deal_with_guest_change=False
        )
    elif command in ["remove", "delete"]:
        if len(argv) < 1:
            print("\nUsage: pcs cluster remote-node remove...")
            print(usage_remove)
            print()
            sys.exit(1)
        if "--force" in utils.pcs_options:
            warn(
                "this command is deprecated, use"
                " 'pcs cluster node remove-guest'"
            )
        else:
            raise error(
                "this command is deprecated, use 'pcs cluster node"
                " remove-guest', use --force to override"
            )
        hostname = argv.pop(0)
        dom = utils.get_cib_dom()
        nvpairs = dom.getElementsByTagName("nvpair")
        nvpairs_to_remove =
[] for nvpair in nvpairs: if nvpair.getAttribute("name") == "remote-node" and nvpair.getAttribute("value") == hostname: for np in nvpair.parentNode.getElementsByTagName("nvpair"): if np.getAttribute("name").startswith("remote-"): nvpairs_to_remove.append(np) if len(nvpairs_to_remove) == 0: utils.err("unable to remove: cannot find remote-node '%s'" % hostname) for nvpair in nvpairs_to_remove[:]: nvpair.parentNode.removeChild(nvpair) dom = constraint.remove_constraints_containing_node(dom, hostname) utils.replace_cib_configuration(dom) if not utils.usefile: output, retval = utils.run([ "crm_node", "--force", "--remove", hostname ]) if retval != 0: utils.err("unable to remove: {0}".format(output)) else: print("\nUsage: pcs cluster remote-node...") print(usage_add) print() print(usage_remove) print() sys.exit(1) pcs-0.9.164/pcs/common/000077500000000000000000000000001326265502500145435ustar00rootroot00000000000000pcs-0.9.164/pcs/common/__init__.py000066400000000000000000000000001326265502500166420ustar00rootroot00000000000000pcs-0.9.164/pcs/common/env_file_role_codes.py000066400000000000000000000002611326265502500211010ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) BOOTH_CONFIG = "BOOTH_CONFIG" BOOTH_KEY = "BOOTH_KEY" PACEMAKER_AUTHKEY = "PACEMAKER_AUTHKEY" pcs-0.9.164/pcs/common/fencing_topology.py000066400000000000000000000002571326265502500204660ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) TARGET_TYPE_NODE = "node" TARGET_TYPE_REGEXP = "regexp" TARGET_TYPE_ATTRIBUTE = "attribute" pcs-0.9.164/pcs/common/node_communicator.py000066400000000000000000000443141326265502500206300ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import base64 import io import re from collections import namedtuple try: # python2 from urllib import urlencode as urllib_urlencode except ImportError: # python3 from urllib.parse import urlencode as urllib_urlencode # We should ignore SIGPIPE when using pycurl.NOSIGNAL - see the libcurl tutorial # for more info. 
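# In short (a note grounded in _create_request_handle() below, which sets
# handle.setopt(pycurl.NOSIGNAL, 1)): with NOSIGNAL enabled, libcurl avoids
# signal-based timeout handling, so the process itself should ignore SIGPIPE,
# which is what the guarded block below does.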
try:
    import signal
    signal.signal(signal.SIGPIPE, signal.SIG_IGN)
except ImportError:
    pass

from pcs import settings
from pcs.common import pcs_pycurl as pycurl


def _find_value_for_possible_keys(value_dict, possible_key_list):
    for key in possible_key_list:
        if key in value_dict:
            return value_dict[key]
    return None


class NodeTargetFactory(object):
    def __init__(self, auth_tokens, ports):
        self._auth_tokens = auth_tokens
        self._ports = ports

    def _get_token(self, possible_names):
        return _find_value_for_possible_keys(self._auth_tokens, possible_names)

    def _get_port(self, possible_names):
        return _find_value_for_possible_keys(self._ports, possible_names)

    def get_target(self, node_addresses):
        possible_names = [node_addresses.label, node_addresses.ring0]
        if node_addresses.ring1:
            possible_names.append(node_addresses.ring1)
        return RequestTarget.from_node_addresses(
            node_addresses,
            token=self._get_token(possible_names),
            port=self._get_port(possible_names),
        )

    def get_target_list(self, node_addresses_list):
        return [self.get_target(node) for node in node_addresses_list]

    def get_target_from_hostname(self, hostname):
        return RequestTarget(
            hostname,
            token=self._get_token([hostname]),
            port=self._get_port([hostname]),
        )


class RequestData(
    namedtuple("RequestData", ["action", "structured_data", "data"])
):
    """
    This class represents an action and the data associated with it, which
    will be sent in a request
    """
    def __new__(cls, action, structured_data=()):
        """
        string action -- action to perform
        list structured_data -- list of tuples, data to send with the
            specified action
        """
        return super(RequestData, cls).__new__(
            cls, action, structured_data, urllib_urlencode(structured_data)
        )


class RequestTarget(namedtuple(
    "RequestTarget", ["label", "address_list", "port", "token"]
)):
    """
    This class represents a target (host) for a request to be performed on
    """
    def __new__(cls, label, address_list=None, port=None, token=None):
        """
        string label -- label for the host, used as the only hostname if
            address_list is not defined
        list address_list -- list of all possible hostnames on which the host
            is reachable
        int port -- target communication port
        string token -- authentication token
        """
        if not address_list:
            address_list = [label]
        return super(RequestTarget, cls).__new__(
            cls, label, list(address_list), port, token
        )

    @classmethod
    def from_node_addresses(cls, node_addresses, port=None, token=None):
        """
        Create a RequestTarget object from a NodeAddresses instance. Returns
        a new RequestTarget instance.

        NodeAddresses node_addresses -- node which defines the target
        string port -- target communication port
        string token -- authentication token
        """
        address_list = [node_addresses.ring0]
        if node_addresses.ring1:
            address_list.append(node_addresses.ring1)
        return cls(
            node_addresses.label,
            address_list=address_list, port=port, token=token
        )


class Request(object):
    """
    This class represents a request. Using RequestTarget, it provides an
    interface for getting the next available host to make the request on.
    """
    def __init__(self, request_target, request_data):
        """
        RequestTarget request_target
        RequestData request_data
        """
        self._target = request_target
        self._data = request_data
        self._current_host_iterator = iter(request_target.address_list)
        self._current_host = None
        self.next_host()

    def next_host(self):
        """
        Move to the next available host. Raises StopIteration when there is
        no host to use.
        """
        self._current_host = next(self._current_host_iterator)

    @property
    def url(self):
        """
        URL representing the request using the current host.
""" return "https://{host}:{port}/{request}".format( host="[{0}]".format(self.host) if ":" in self.host else self.host, port=( self._target.port if self._target.port else settings.pcsd_default_port ), request=self._data.action ) @property def host(self): return self._current_host @property def host_label(self): return self._target.label @property def target(self): return self._target @property def data(self): return self._data.data @property def action(self): return self._data.action @property def cookies(self): cookies = {} if self._target.token: cookies["token"] = self._target.token return cookies def __repr__(self): return str("Request({0}, {1})").format(self._target, self._data) class Response(object): """ This class represents response for request which is available as instance property. """ def __init__(self, handle, was_connected, errno=None, error_msg=None): self._handle = handle self._was_connected = was_connected self._errno = errno self._error_msg = error_msg self._data = None self._debug = None @classmethod def connection_successful(cls, handle): """ Returns Response instance that is marked as successfully connected. pycurl.Curl handle -- curl easy handle, which connection was successful """ return cls(handle, True) @classmethod def connection_failure(cls, handle, errno, error_msg): """ Returns Response instance that is marked as not successfuly connected. pycurl.Curl handle -- curl easy handle, which was not connected int errno -- error number string error_msg -- text description of error """ return cls(handle, False, errno, error_msg) @property def request(self): return self._handle.request_obj @property def handle(self): return self._handle @property def was_connected(self): return self._was_connected @property def errno(self): return self._errno @property def error_msg(self): return self._error_msg @property def data(self): if self._data is None: self._data = self._handle.output_buffer.getvalue().decode("utf-8") return self._data @property def debug(self): if self._debug is None: self._debug = self._handle.debug_buffer.getvalue().decode("utf-8") return self._debug @property def response_code(self): if not self.was_connected: return None return self._handle.getinfo(pycurl.RESPONSE_CODE) def __repr__(self): return str( "Response({0} data='{1}' was_connected={2}) errno='{3}'" " error_msg='{4}' response_code='{5}')" ).format( self.request, self.data, self.was_connected, self.errno, self.error_msg, self.response_code, ) class NodeCommunicatorFactory(object): def __init__(self, communicator_logger, user, groups, request_timeout): self._logger = communicator_logger self._user = user self._groups = groups self._request_timeout = request_timeout def get_communicator(self): return self.get_simple_communicator() def get_simple_communicator(self): return Communicator( self._logger, self._user, self._groups, self._request_timeout ) def get_multiaddress_communicator(self): return MultiaddressCommunicator( self._logger, self._user, self._groups, self._request_timeout ) class Communicator(object): """ This class provides simple interface for making parallel requests. The instances of this class are not thread-safe! It is intended to use it only in a single thread. Use an unique instance for each thread. 
""" curl_multi_select_timeout_default = 0.8 # in seconds def __init__(self, communicator_logger, user, groups, request_timeout=None): self._logger = communicator_logger self._auth_cookies = _get_auth_cookies(user, groups) self._request_timeout = ( request_timeout if request_timeout is not None else settings.default_request_timeout ) self._multi_handle = pycurl.CurlMulti() self._is_running = False # This is used just for storing references of curl easy handles. # We need to have references for all the handles, so they don't be # cleaned up by the garbage collector. self._easy_handle_list = [] def add_requests(self, request_list): """ Add requests to queue to be processed. It is possible to call this method before getting generator using start_loop method and also during getting responses from generator. Requests are not performed after calling this method, but only when generator returned by start_loop method is in progress (returned at least one response and not raised StopIteration exception). list request_list -- Request objects to add to the queue """ for request in request_list: handle = _create_request_handle( request, self._auth_cookies, self._request_timeout, ) self._easy_handle_list.append(handle) self._multi_handle.add_handle(handle) if self._is_running: self._logger.log_request_start(request) def start_loop(self): """ Returns generator. When generator is invoked, all requests in queue (added by method add_requests) will be invoked in parallel, and generator will then return responses for these requests. It is possible to add new request to the queue while the generator is in progres. Generator will stop (raise StopIteration) after all requests (also those added after creation of generator) are processed. WARNING: do not use multiple instances of generator (of one Communicator instance) when there is one which didn't finish (raised StopIteration). It wil cause AssertionError. USAGE: com = Communicator(...) com.add_requests([ Request(...), ... 
]) for response in communicator.start_loop(): # do something with response # if needed, add some new requests to the queue com.add_requests([Request(...)]) """ if self._is_running: raise AssertionError("Method start_loop already running") self._is_running = True for handle in self._easy_handle_list: self._logger.log_request_start(handle.request_obj) finished_count = 0 while finished_count < len(self._easy_handle_list): self.__multi_perform() self.__wait_for_multi_handle() response_list = self.__get_all_ready_responses() for response in response_list: # free up memory for next usage of this Communicator instance self._multi_handle.remove_handle(response.handle) self._logger.log_response(response) yield response # if something was added to the queue in the meantime, run it # immediately, so we don't need to wait until all responses will # be processed self.__multi_perform() finished_count += len(response_list) self._easy_handle_list = [] self._is_running = False def __get_all_ready_responses(self): response_list = [] repeat = True while repeat: num_queued, ok_list, err_list = self._multi_handle.info_read() response_list.extend( [Response.connection_successful(handle) for handle in ok_list] + [ Response.connection_failure(handle, errno, error_msg) for handle, errno, error_msg in err_list ] ) repeat = num_queued > 0 return response_list def __multi_perform(self): # run all internal operation required by libcurl status, num_to_process = self._multi_handle.perform() # if perform returns E_CALL_MULTI_PERFORM it requires to call perform # once again right away while status == pycurl.E_CALL_MULTI_PERFORM: status, num_to_process = self._multi_handle.perform() return num_to_process def __wait_for_multi_handle(self): # try to wait until there is something to do for us need_to_wait = True while need_to_wait: timeout = self._multi_handle.timeout() if timeout == 0: # if timeout == 0 then there is something to precess already return timeout = ( timeout / 1000.0 if timeout > 0 # curl don't have timeout set, so we can use our default else self.curl_multi_select_timeout_default ) # when value returned from select is -1, it timed out, so we can # wait need_to_wait = (self._multi_handle.select(timeout) == -1) class MultiaddressCommunicator(Communicator): """ Class with same interface as Communicator. In difference with Communicator, it takes advantage of multiple hosts in RequestTarget. So if it is not possible to connect to target using first hostname, it will use next one until connection will be successful or there is no host left. """ def start_loop(self): for response in super(MultiaddressCommunicator, self).start_loop(): if response.was_connected: yield response continue try: previous_host = response.request.host response.request.next_host() self._logger.log_retry(response, previous_host) self.add_requests([response.request]) except StopIteration: self._logger.log_no_more_addresses(response) yield response class CommunicatorLoggerInterface(object): def log_request_start(self, request): raise NotImplementedError() def log_response(self, response): raise NotImplementedError() def log_retry(self, response, previous_host): raise NotImplementedError() def log_no_more_addresses(self, response): raise NotImplementedError() def _get_auth_cookies(user, group_list): """ Returns input parameters in a dictionary which is prepared to be converted to cookie string. string user -- CIB user string group_list -- CIB user groups """ # Let's be safe about characters in variables (they can come from env) # and do base64. 
    # We cannot do it for CIB_user, however, to be backward compatible, so we
    # at least remove disallowed characters.
    cookies = {}
    if user:
        cookies["CIB_user"] = re.sub(r"[^!-~]", "", user).replace(";", "")
    if group_list:
        # python3 requires the value to be bytes, not str
        cookies["CIB_user_groups"] = base64.b64encode(
            " ".join(group_list).encode("utf-8")
        )
    return cookies


def _create_request_handle(request, cookies, timeout):
    """
    Returns a Curl object (easy handle) which is set up with the specified
    parameters.

    Request request -- request specification
    dict cookies -- cookies to add to the request
    int timeout -- request timeout
    """
    # it is not possible to take this callback out of this function because
    # of the curl API
    def __debug_callback(data_type, debug_data):
        prefixes = {
            pycurl.DEBUG_TEXT: b"* ",
            pycurl.DEBUG_HEADER_IN: b"< ",
            pycurl.DEBUG_HEADER_OUT: b"> ",
            pycurl.DEBUG_DATA_IN: b"<< ",
            pycurl.DEBUG_DATA_OUT: b">> ",
        }
        if data_type in prefixes:
            debug_output.write(prefixes[data_type])
            debug_output.write(debug_data)
            if not debug_data.endswith(b"\n"):
                debug_output.write(b"\n")

    output = io.BytesIO()
    debug_output = io.BytesIO()
    cookies.update(request.cookies)
    handle = pycurl.Curl()
    handle.setopt(pycurl.PROTOCOLS, pycurl.PROTO_HTTPS)
    handle.setopt(pycurl.TIMEOUT, timeout)
    handle.setopt(pycurl.URL, request.url.encode("utf-8"))
    handle.setopt(pycurl.WRITEFUNCTION, output.write)
    handle.setopt(pycurl.VERBOSE, 1)
    handle.setopt(pycurl.DEBUGFUNCTION, __debug_callback)
    handle.setopt(pycurl.SSL_VERIFYHOST, 0)
    handle.setopt(pycurl.SSL_VERIFYPEER, 0)
    handle.setopt(pycurl.NOSIGNAL, 1) # required for multi-threading
    if cookies:
        handle.setopt(
            pycurl.COOKIE, _dict_to_cookies(cookies).encode("utf-8")
        )
    if request.data:
        handle.setopt(
            pycurl.COPYPOSTFIELDS, request.data.encode("utf-8")
        )
    # add references to the request object and output buffers to the handle,
    # so later we don't need to match these objects when they are returned
    # from pycurl after they've been processed
    # similar usage is in this pycurl example:
    # https://github.com/pycurl/pycurl/blob/REL_7_19_0_3/examples/retriever-multi.py
    handle.request_obj = request
    handle.output_buffer = output
    handle.debug_buffer = debug_output
    return handle


def _dict_to_cookies(cookies_dict):
    return ";".join([
        "{0}={1}".format(key, value)
        for key, value in sorted(cookies_dict.items())
    ])
pcs-0.9.164/pcs/common/pcs_pycurl.py000066400000000000000000000015031326265502500172770ustar00rootroot00000000000000
from __future__ import (
    absolute_import,
    division,
    print_function,
)
import sys

from pycurl import *

# This package defines constants which are not present in some older versions
# of pycurl but pcs needs to use them
required_constants = {
    "PROTOCOLS": 181,
    "PROTO_HTTPS": 2,
    "E_OPERATION_TIMEDOUT": 28,
    # these are types of debug messages
    # see https://curl.haxx.se/libcurl/c/CURLOPT_DEBUGFUNCTION.html
    "DEBUG_TEXT": 0,
    "DEBUG_HEADER_IN": 1,
    "DEBUG_HEADER_OUT": 2,
    "DEBUG_DATA_IN": 3,
    "DEBUG_DATA_OUT": 4,
    "DEBUG_SSL_DATA_IN": 5,
    "DEBUG_SSL_DATA_OUT": 6,
    "DEBUG_END": 7,
}

__current_module = sys.modules[__name__]
for constant, value in required_constants.items():
    if not hasattr(__current_module, constant):
        setattr(__current_module, constant, value)
pcs-0.9.164/pcs/common/report_codes.py000066400000000000000000000356521326265502500176140ustar00rootroot00000000000000
from __future__ import (
    absolute_import,
    division,
    print_function,
)

# force categories
FORCE_ACTIVE_RRP = "ACTIVE_RRP"
FORCE_ALERT_RECIPIENT_VALUE_NOT_UNIQUE = "FORCE_ALERT_RECIPIENT_VALUE_NOT_UNIQUE"
FORCE_BOOTH_DESTROY = "FORCE_BOOTH_DESTROY"
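# Illustrative sketch (hypothetical helper, not pcs API): codes in the force
# category block here are typically paired with a --force-like flag; a report
# tagged with such a category can be downgraded from an error to a warning
# when the matching category is passed in, e.g.:
#
#   def report_severity(forceable_category, given_categories):
#       if forceable_category in given_categories:
#           return "warning"
#       return "error"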
FORCE_BOOTH_REMOVE_FROM_CIB = "FORCE_BOOTH_REMOVE_FROM_CIB" FORCE_REMOVE_MULTIPLE_NODES = "FORCE_REMOVE_MULTIPLE_NODES" FORCE_CONSTRAINT_DUPLICATE = "CONSTRAINT_DUPLICATE" FORCE_CONSTRAINT_MULTIINSTANCE_RESOURCE = "CONSTRAINT_MULTIINSTANCE_RESOURCE" FORCE_FILE_OVERWRITE = "FORCE_FILE_OVERWRITE" FORCE_LOAD_THRESHOLD = "LOAD_THRESHOLD" FORCE_METADATA_ISSUE = "METADATA_ISSUE" FORCE_NODE_DOES_NOT_EXIST = "FORCE_NODE_DOES_NOT_EXIST" FORCE_OPTIONS = "OPTIONS" FORCE_QDEVICE_MODEL = "QDEVICE_MODEL" FORCE_QDEVICE_USED = "QDEVICE_USED" FORCE_STONITH_RESOURCE_DOES_NOT_EXIST = "FORCE_STONITH_RESOURCE_DOES_NOT_EXIST" FORCE_NOT_SUITABLE_COMMAND = "FORCE_NOT_SUITABLE_COMMAND" FORCE_CLEAR_CLUSTER_NODE = "FORCE_CLEAR_CLUSTER_NODE" SKIP_OFFLINE_NODES = "SKIP_OFFLINE_NODES" SKIP_FILE_DISTRIBUTION_ERRORS = "SKIP_FILE_DISTRIBUTION_ERRORS" SKIP_ACTION_ON_NODES_ERRORS = "SKIP_ACTION_ON_NODES_ERRORS" SKIP_UNREADABLE_CONFIG = "SKIP_UNREADABLE_CONFIG" AGENT_NAME_GUESS_FOUND_MORE_THAN_ONE = "AGENT_NAME_GUESS_FOUND_MORE_THAN_ONE" AGENT_NAME_GUESS_FOUND_NONE = "AGENT_NAME_GUESS_FOUND_NONE" AGENT_NAME_GUESSED = "AGENT_NAME_GUESSED" BAD_CLUSTER_STATE_FORMAT = 'BAD_CLUSTER_STATE_FORMAT' BOOTH_ADDRESS_DUPLICATION = "BOOTH_ADDRESS_DUPLICATION" BOOTH_ALREADY_IN_CIB = "BOOTH_ALREADY_IN_CIB" BOOTH_CANNOT_DETERMINE_LOCAL_SITE_IP = "BOOTH_CANNOT_DETERMINE_LOCAL_SITE_IP" BOOTH_CANNOT_IDENTIFY_KEYFILE = "BOOTH_CANNOT_IDENTIFY_KEYFILE" BOOTH_CONFIG_ACCEPTED_BY_NODE = "BOOTH_CONFIG_ACCEPTED_BY_NODE" BOOTH_CONFIG_DISTRIBUTION_NODE_ERROR = "BOOTH_CONFIG_DISTRIBUTION_NODE_ERROR" BOOTH_CONFIG_DISTRIBUTION_STARTED = "BOOTH_CONFIG_DISTRIBUTION_STARTED" BOOTH_CONFIG_FILE_ALREADY_EXISTS = "BOOTH_CONFIG_FILE_ALREADY_EXISTS" BOOTH_CONFIG_IO_ERROR = "BOOTH_CONFIG_IO_ERROR" BOOTH_CONFIG_IS_USED = "BOOTH_CONFIG_IS_USED" BOOTH_CONFIG_READ_ERROR = "BOOTH_CONFIG_READ_ERROR" BOOTH_CONFIG_UNEXPECTED_LINES = "BOOTH_CONFIG_UNEXPECTED_LINES" BOOTH_DAEMON_STATUS_ERROR = "BOOTH_DAEMON_STATUS_ERROR" BOOTH_EVEN_PEERS_NUM = "BOOTH_EVEN_PEERS_NUM" BOOTH_FETCHING_CONFIG_FROM_NODE = "BOOTH_FETCHING_CONFIG_FROM_NODE" BOOTH_INVALID_CONFIG_NAME = "BOOTH_INVALID_CONFIG_NAME" BOOTH_INVALID_NAME = "BOOTH_INVALID_NAME" BOOTH_LACK_OF_SITES = "BOOTH_LACK_OF_SITES" BOOTH_MULTIPLE_TIMES_IN_CIB = "BOOTH_MULTIPLE_TIMES_IN_CIB" BOOTH_NOT_EXISTS_IN_CIB = "BOOTH_NOT_EXISTS_IN_CIB" BOOTH_PEERS_STATUS_ERROR = "BOOTH_PEERS_STATUS_ERROR" BOOTH_SKIPPING_CONFIG = "BOOTH_SKIPPING_CONFIG" BOOTH_TICKET_DOES_NOT_EXIST = "BOOTH_TICKET_DOES_NOT_EXIST" BOOTH_TICKET_DUPLICATE = "BOOTH_TICKET_DUPLICATE" BOOTH_TICKET_NAME_INVALID = "BOOTH_TICKET_NAME_INVALID" BOOTH_TICKET_OPERATION_FAILED = "BOOTH_TICKET_OPERATION_FAILED" BOOTH_TICKET_STATUS_ERROR = "BOOTH_TICKET_STATUS_ERROR" BOOTH_UNSUPORTED_FILE_LOCATION = "BOOTH_UNSUPORTED_FILE_LOCATION" CIB_ACL_ROLE_IS_ALREADY_ASSIGNED_TO_TARGET = "CIB_ACL_ROLE_IS_ALREADY_ASSIGNED_TO_TARGET" CIB_ACL_ROLE_IS_NOT_ASSIGNED_TO_TARGET = "CIB_ACL_ROLE_IS_NOT_ASSIGNED_TO_TARGET" CIB_ACL_TARGET_ALREADY_EXISTS = "CIB_ACL_TARGET_ALREADY_EXISTS" CIB_ALERT_RECIPIENT_ALREADY_EXISTS = "CIB_ALERT_RECIPIENT_ALREADY_EXISTS" CIB_ALERT_RECIPIENT_VALUE_INVALID = "CIB_ALERT_RECIPIENT_VALUE_INVALID" CIB_CANNOT_FIND_MANDATORY_SECTION = "CIB_CANNOT_FIND_MANDATORY_SECTION" CIB_DIFF_ERROR = "CIB_DIFF_ERROR" CIB_FENCING_LEVEL_ALREADY_EXISTS = "CIB_FENCING_LEVEL_ALREADY_EXISTS" CIB_FENCING_LEVEL_DOES_NOT_EXIST = "CIB_FENCING_LEVEL_DOES_NOT_EXIST" CIB_LOAD_ERROR_BAD_FORMAT = "CIB_LOAD_ERROR_BAD_FORMAT" CIB_LOAD_ERROR = "CIB_LOAD_ERROR" CIB_LOAD_ERROR_SCOPE_MISSING = 
"CIB_LOAD_ERROR_SCOPE_MISSING" CIB_PUSH_FORCED_FULL_DUE_TO_CRM_FEATURE_SET = "CIB_PUSH_FORCED_FULL_DUE_TO_CRM_FEATURE_SET" CIB_PUSH_ERROR = "CIB_PUSH_ERROR" CIB_SAVE_TMP_ERROR = "CIB_SAVE_TMP_ERROR" CIB_UPGRADE_FAILED = "CIB_UPGRADE_FAILED" CIB_UPGRADE_FAILED_TO_MINIMAL_REQUIRED_VERSION = "CIB_UPGRADE_FAILED_TO_MINIMAL_REQUIRED_VERSION" CIB_UPGRADE_SUCCESSFUL = "CIB_UPGRADE_SUCCESSFUL" CLUSTER_CONF_LOAD_ERROR_INVALID_FORMAT = "CLUSTER_CONF_LOAD_ERROR_INVALID_FORMAT" CLUSTER_CONF_READ_ERROR = "CLUSTER_CONF_READ_ERROR" CLUSTER_RESTART_REQUIRED_TO_APPLY_CHANGES = "CLUSTER_RESTART_REQUIRED_TO_APPLY_CHANGES" CMAN_BROADCAST_ALL_RINGS = 'CMAN_BROADCAST_ALL_RINGS' CMAN_UDPU_RESTART_REQUIRED = 'CMAN_UDPU_RESTART_REQUIRED' CMAN_UNSUPPORTED_COMMAND = "CMAN_UNSUPPORTED_COMMAND" COMMON_ERROR = 'COMMON_ERROR' COMMON_INFO = 'COMMON_INFO' LIVE_ENVIRONMENT_REQUIRED = "LIVE_ENVIRONMENT_REQUIRED" LIVE_ENVIRONMENT_REQUIRED_FOR_LOCAL_NODE = "LIVE_ENVIRONMENT_REQUIRED_FOR_LOCAL_NODE" COROSYNC_CONFIG_ACCEPTED_BY_NODE = "COROSYNC_CONFIG_ACCEPTED_BY_NODE" COROSYNC_CONFIG_DISTRIBUTION_STARTED = "COROSYNC_CONFIG_DISTRIBUTION_STARTED" COROSYNC_CONFIG_DISTRIBUTION_NODE_ERROR = "COROSYNC_CONFIG_DISTRIBUTION_NODE_ERROR" COROSYNC_CONFIG_RELOADED = "COROSYNC_CONFIG_RELOADED" COROSYNC_CONFIG_RELOAD_ERROR = "COROSYNC_CONFIG_RELOAD_ERROR" COROSYNC_NOT_RUNNING_CHECK_STARTED = "COROSYNC_NOT_RUNNING_CHECK_STARTED" COROSYNC_NOT_RUNNING_CHECK_NODE_ERROR = "COROSYNC_NOT_RUNNING_CHECK_NODE_ERROR" COROSYNC_NOT_RUNNING_ON_NODE = "COROSYNC_NOT_RUNNING_ON_NODE" COROSYNC_OPTIONS_INCOMPATIBLE_WITH_QDEVICE = "COROSYNC_OPTIONS_INCOMPATIBLE_WITH_QDEVICE" COROSYNC_QUORUM_CANNOT_DISABLE_ATB_DUE_TO_SBD = "COROSYNC_QUORUM_CANNOT_DISABLE_ATB_DUE_TO_SBD" COROSYNC_QUORUM_GET_STATUS_ERROR = "COROSYNC_QUORUM_GET_STATUS_ERROR" COROSYNC_QUORUM_HEURISTICS_ENABLED_WITH_NO_EXEC = "COROSYNC_QUORUM_HEURISTICS_ENABLED_WITH_NO_EXEC" COROSYNC_QUORUM_SET_EXPECTED_VOTES_ERROR = "COROSYNC_QUORUM_SET_EXPECTED_VOTES_ERROR" COROSYNC_RUNNING_ON_NODE = "COROSYNC_RUNNING_ON_NODE" CRM_MON_ERROR = "CRM_MON_ERROR" DEFAULTS_CAN_BE_OVERRIDEN = "DEFAULTS_CAN_BE_OVERRIDEN" DEPRECATED_OPTION = "DEPRECATED_OPTION" DUPLICATE_CONSTRAINTS_EXIST = "DUPLICATE_CONSTRAINTS_EXIST" EMPTY_RESOURCE_SET_LIST = "EMPTY_RESOURCE_SET_LIST" EMPTY_ID = "EMPTY_ID" FILE_ALREADY_EXISTS = "FILE_ALREADY_EXISTS" FILE_DOES_NOT_EXIST = "FILE_DOES_NOT_EXIST" FILE_IO_ERROR = "FILE_IO_ERROR" FILES_DISTRIBUTION_STARTED = "FILES_DISTRIBUTION_STARTED" FILE_DISTRIBUTION_ERROR = "FILE_DISTRIBUTION_ERROR" FILE_DISTRIBUTION_SUCCESS = "FILE_DISTRIBUTION_SUCCESS" FILES_REMOVE_FROM_NODE_STARTED = "FILES_REMOVE_FROM_NODE_STARTED" FILE_REMOVE_FROM_NODE_ERROR = "FILE_REMOVE_FROM_NODE_ERROR" FILE_REMOVE_FROM_NODE_SUCCESS = "FILE_REMOVE_FROM_NODE_SUCCESS" ID_ALREADY_EXISTS = 'ID_ALREADY_EXISTS' ID_BELONGS_TO_UNEXPECTED_TYPE = "ID_BELONGS_TO_UNEXPECTED_TYPE" ID_NOT_FOUND = 'ID_NOT_FOUND' IGNORED_CMAN_UNSUPPORTED_OPTION = 'IGNORED_CMAN_UNSUPPORTED_OPTION' INVALID_CIB_CONTENT = "INVALID_CIB_CONTENT" INVALID_ID = "INVALID_ID" INVALID_OPTIONS = "INVALID_OPTIONS" INVALID_USERDEFINED_OPTIONS = "INVALID_USERDEFINED_OPTIONS" INVALID_OPTION_TYPE = "INVALID_OPTION_TYPE" INVALID_OPTION_VALUE = "INVALID_OPTION_VALUE" INVALID_RESOURCE_NAME = 'INVALID_RESOURCE_NAME' INVALID_RESOURCE_AGENT_NAME = 'INVALID_RESOURCE_AGENT_NAME' INVALID_RESPONSE_FORMAT = "INVALID_RESPONSE_FORMAT" INVALID_SCORE = "INVALID_SCORE" INVALID_STONITH_AGENT_NAME = "INVALID_STONITH_AGENT_NAME" INVALID_TIMEOUT_VALUE = "INVALID_TIMEOUT_VALUE" MULTIPLE_SCORE_OPTIONS 
= "MULTIPLE_SCORE_OPTIONS" MULTIPLE_RESULTS_FOUND = "MULTIPLE_RESULTS_FOUND" MUTUALLY_EXCLUSIVE_OPTIONS = "MUTUALLY_EXCLUSIVE_OPTIONS" CANNOT_ADD_NODE_IS_IN_CLUSTER = "CANNOT_ADD_NODE_IS_IN_CLUSTER" CANNOT_ADD_NODE_IS_RUNNING_SERVICE = "CANNOT_ADD_NODE_IS_RUNNING_SERVICE" NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL = "NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL" NODE_COMMUNICATION_DEBUG_INFO = "NODE_COMMUNICATION_DEBUG_INFO" NODE_COMMUNICATION_ERROR = "NODE_COMMUNICATION_ERROR" NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED = "NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED" NODE_COMMUNICATION_ERROR_PERMISSION_DENIED = "NODE_COMMUNICATION_ERROR_PERMISSION_DENIED" NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT = "NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT" NODE_COMMUNICATION_ERROR_UNSUPPORTED_COMMAND = "NODE_COMMUNICATION_ERROR_UNSUPPORTED_COMMAND" NODE_COMMUNICATION_ERROR_TIMED_OUT = "NODE_COMMUNICATION_ERROR_TIMED_OUT" NODE_COMMUNICATION_FINISHED = "NODE_COMMUNICATION_FINISHED" NODE_COMMUNICATION_NOT_CONNECTED = "NODE_COMMUNICATION_NOT_CONNECTED" NODE_COMMUNICATION_NO_MORE_ADDRESSES = "NODE_COMMUNICATION_NO_MORE_ADDRESSES" NODE_COMMUNICATION_PROXY_IS_SET = "NODE_COMMUNICATION_PROXY_IS_SET" NODE_COMMUNICATION_RETRYING = "NODE_COMMUNICATION_RETRYING" NODE_COMMUNICATION_STARTED = "NODE_COMMUNICATION_STARTED" NODE_NOT_FOUND = "NODE_NOT_FOUND" NODE_REMOVE_IN_PACEMAKER_FAILED = "NODE_REMOVE_IN_PACEMAKER_FAILED" NON_UDP_TRANSPORT_ADDR_MISMATCH = 'NON_UDP_TRANSPORT_ADDR_MISMATCH' NOLIVE_SKIP_FILES_DISTRIBUTION="NOLIVE_SKIP_FILES_DISTRIBUTION" NOLIVE_SKIP_FILES_REMOVE="NOLIVE_SKIP_FILES_REMOVE" NOLIVE_SKIP_SERVICE_COMMAND_ON_NODES="NOLIVE_SKIP_SERVICE_COMMAND_ON_NODES" NODE_TO_CLEAR_IS_STILL_IN_CLUSTER = "NODE_TO_CLEAR_IS_STILL_IN_CLUSTER" OMITTING_NODE = "OMITTING_NODE" OBJECT_WITH_ID_IN_UNEXPECTED_CONTEXT = "OBJECT_WITH_ID_IN_UNEXPECTED_CONTEXT" PACEMAKER_LOCAL_NODE_NAME_NOT_FOUND = "PACEMAKER_LOCAL_NODE_NAME_NOT_FOUND" PARSE_ERROR_COROSYNC_CONF_MISSING_CLOSING_BRACE = "PARSE_ERROR_COROSYNC_CONF_MISSING_CLOSING_BRACE" PARSE_ERROR_COROSYNC_CONF = "PARSE_ERROR_COROSYNC_CONF" PARSE_ERROR_COROSYNC_CONF_UNEXPECTED_CLOSING_BRACE = "PARSE_ERROR_COROSYNC_CONF_UNEXPECTED_CLOSING_BRACE" PREREQUISITE_OPTION_IS_MISSING = "PREREQUISITE_OPTION_IS_MISSING" QDEVICE_ALREADY_DEFINED = "QDEVICE_ALREADY_DEFINED" QDEVICE_ALREADY_INITIALIZED = "QDEVICE_ALREADY_INITIALIZED" QDEVICE_CERTIFICATE_ACCEPTED_BY_NODE = "QDEVICE_CERTIFICATE_ACCEPTED_BY_NODE" QDEVICE_CERTIFICATE_DISTRIBUTION_STARTED = "QDEVICE_CERTIFICATE_DISTRIBUTION_STARTED" QDEVICE_CERTIFICATE_REMOVAL_STARTED = "QDEVICE_CERTIFICATE_REMOVAL_STARTED" QDEVICE_CERTIFICATE_REMOVED_FROM_NODE = "QDEVICE_CERTIFICATE_REMOVED_FROM_NODE" QDEVICE_CERTIFICATE_IMPORT_ERROR = "QDEVICE_CERTIFICATE_IMPORT_ERROR" QDEVICE_CERTIFICATE_SIGN_ERROR = "QDEVICE_CERTIFICATE_SIGN_ERROR" QDEVICE_DESTROY_ERROR = "QDEVICE_DESTROY_ERROR" QDEVICE_DESTROY_SUCCESS = "QDEVICE_DESTROY_SUCCESS" QDEVICE_GET_STATUS_ERROR = "QDEVICE_GET_STATUS_ERROR" QDEVICE_INITIALIZATION_ERROR = "QDEVICE_INITIALIZATION_ERROR" QDEVICE_INITIALIZATION_SUCCESS = "QDEVICE_INITIALIZATION_SUCCESS" QDEVICE_NOT_DEFINED = "QDEVICE_NOT_DEFINED" QDEVICE_NOT_INITIALIZED = "QDEVICE_NOT_INITIALIZED" QDEVICE_NOT_RUNNING = "QDEVICE_NOT_RUNNING" QDEVICE_CLIENT_RELOAD_STARTED = "QDEVICE_CLIENT_RELOAD_STARTED" QDEVICE_REMOVE_OR_CLUSTER_STOP_NEEDED = "QDEVICE_REMOVE_OR_CLUSTER_STOP_NEEDED" QDEVICE_USED_BY_CLUSTERS = "QDEVICE_USED_BY_CLUSTERS" REQUIRED_OPTION_IS_MISSING = "REQUIRED_OPTION_IS_MISSING" REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING = 
"REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING" RESOURCE_BUNDLE_ALREADY_CONTAINS_A_RESOURCE = "RESOURCE_BUNDLE_ALREADY_CONTAINS_A_RESOURCE" RESOURCE_CANNOT_BE_NEXT_TO_ITSELF_IN_GROUP = "RESOURCE_CANNOT_BE_NEXT_TO_ITSELF_IN_GROUP" RESOURCE_CLEANUP_ERROR = "RESOURCE_CLEANUP_ERROR" RESOURCE_DOES_NOT_RUN = "RESOURCE_DOES_NOT_RUN" RESOURCE_FOR_CONSTRAINT_IS_MULTIINSTANCE = 'RESOURCE_FOR_CONSTRAINT_IS_MULTIINSTANCE' RESOURCE_IS_GUEST_NODE_ALREADY = "RESOURCE_IS_GUEST_NODE_ALREADY" RESOURCE_IS_UNMANAGED = "RESOURCE_IS_UNMANAGED" RESOURCE_MANAGED_NO_MONITOR_ENABLED = "RESOURCE_MANAGED_NO_MONITOR_ENABLED" RESOURCE_OPERATION_INTERVAL_DUPLICATION = "RESOURCE_OPERATION_INTERVAL_DUPLICATION" RESOURCE_OPERATION_INTERVAL_ADAPTED = "RESOURCE_OPERATION_INTERVAL_ADAPTED" RESOURCE_REFRESH_ERROR = "RESOURCE_REFRESH_ERROR" RESOURCE_REFRESH_TOO_TIME_CONSUMING = 'RESOURCE_REFRESH_TOO_TIME_CONSUMING' RESOURCE_RUNNING_ON_NODES = "RESOURCE_RUNNING_ON_NODES" RRP_ACTIVE_NOT_SUPPORTED = 'RRP_ACTIVE_NOT_SUPPORTED' RUN_EXTERNAL_PROCESS_ERROR = "RUN_EXTERNAL_PROCESS_ERROR" RUN_EXTERNAL_PROCESS_FINISHED = "RUN_EXTERNAL_PROCESS_FINISHED" RUN_EXTERNAL_PROCESS_STARTED = "RUN_EXTERNAL_PROCESS_STARTED" SBD_CHECK_STARTED = "SBD_CHECK_STARTED" SBD_CHECK_SUCCESS = "SBD_CHECK_SUCCESS" SBD_CONFIG_ACCEPTED_BY_NODE = "SBD_CONFIG_ACCEPTED_BY_NODE" SBD_CONFIG_DISTRIBUTION_STARTED = "SBD_CONFIG_DISTRIBUTION_STARTED" SBD_DEVICE_DOES_NOT_EXIST = "SBD_DEVICE_DOES_NOT_EXIST" SBD_DEVICE_DUMP_ERROR = "SBD_DEVICE_DUMP_ERROR" SBD_DEVICE_INITIALIZATION_ERROR = "SBD_DEVICE_INITIALIZATION_ERROR" SBD_DEVICE_INITIALIZATION_STARTED = "SBD_DEVICE_INITIALIZATION_STARTED" SBD_DEVICE_INITIALIZATION_SUCCESS = "SBD_DEVICE_INITIALIZATION_SUCCESS" SBD_DEVICE_IS_NOT_BLOCK_DEVICE = "SBD_DEVICE_IS_NOT_BLOCK_DEVICE" SBD_DEVICE_LIST_ERROR = "SBD_DEVICE_LIST_ERROR" SBD_DEVICE_MESSAGE_ERROR = "SBD_DEVICE_MESSAGE_ERROR" SBD_DEVICE_PATH_NOT_ABSOLUTE = "SBD_DEVICE_PATH_NOT_ABSOLUTE" SBD_DISABLING_STARTED = "SBD_DISABLING_STARTED" SBD_ENABLING_STARTED = "SBD_ENABLING_STARTED" SBD_NO_DEVICE_FOR_NODE = "SBD_NO_DEVICE_FOR_NODE" SBD_NOT_ENABLED = "SBD_NOT_ENABLED" SBD_NOT_INSTALLED = "SBD_NOT_INSTALLED" SBD_REQUIRES_ATB = "SBD_REQUIRES_ATB" SBD_TOO_MANY_DEVICES_FOR_NODE = "SBD_TOO_MANY_DEVICES_FOR_NODE" SERVICE_DISABLE_ERROR = "SERVICE_DISABLE_ERROR" SERVICE_DISABLE_STARTED = "SERVICE_DISABLE_STARTED" SERVICE_DISABLE_SUCCESS = "SERVICE_DISABLE_SUCCESS" SERVICE_ENABLE_ERROR = "SERVICE_ENABLE_ERROR" SERVICE_ENABLE_STARTED = "SERVICE_ENABLE_STARTED" SERVICE_ENABLE_SKIPPED = "SERVICE_ENABLE_SKIPPED" SERVICE_ENABLE_SUCCESS = "SERVICE_ENABLE_SUCCESS" SERVICE_KILL_ERROR = "SERVICE_KILL_ERROR" SERVICE_KILL_SUCCESS = "SERVICE_KILL_SUCCESS" SERVICE_START_ERROR = "SERVICE_START_ERROR" SERVICE_START_SKIPPED = "SERVICE_START_SKIPPED" SERVICE_START_STARTED = "SERVICE_START_STARTED" SERVICE_START_SUCCESS = "SERVICE_START_SUCCESS" SERVICE_STOP_ERROR = "SERVICE_STOP_ERROR" SERVICE_STOP_STARTED = "SERVICE_STOP_STARTED" SERVICE_STOP_SUCCESS = "SERVICE_STOP_SUCCESS" STONITH_RESOURCES_DO_NOT_EXIST = "STONITH_RESOURCES_DO_NOT_EXIST" SERVICE_COMMANDS_ON_NODES_STARTED = "SERVICE_COMMANDS_ON_NODES_STARTED" SERVICE_COMMAND_ON_NODE_ERROR = "SERVICE_COMMAND_ON_NODE_ERROR" SERVICE_COMMAND_ON_NODE_SUCCESS = "SERVICE_COMMAND_ON_NODE_SUCCESS" TMP_FILE_WRITE = "TMP_FILE_WRITE" UNABLE_TO_DETERMINE_USER_UID = "UNABLE_TO_DETERMINE_USER_UID" UNABLE_TO_DETERMINE_GROUP_GID = "UNABLE_TO_DETERMINE_GROUP_GID" UNABLE_TO_GET_AGENT_METADATA = 'UNABLE_TO_GET_AGENT_METADATA' UNABLE_TO_READ_COROSYNC_CONFIG = 
"UNABLE_TO_READ_COROSYNC_CONFIG" UNABLE_TO_GET_SBD_CONFIG = "UNABLE_TO_GET_SBD_CONFIG" UNABLE_TO_GET_SBD_STATUS = "UNABLE_TO_GET_SBD_STATUS" UNABLE_TO_PERFORM_OPERATION_ON_ANY_NODE = "UNABLE_TO_PERFORM_OPERATION_ON_ANY_NODE" UNKNOWN_COMMAND = 'UNKNOWN_COMMAND' WATCHDOG_INVALID = "WATCHDOG_INVALID" UNSUPPORTED_OPERATION_ON_NON_SYSTEMD_SYSTEMS = "UNSUPPORTED_OPERATION_ON_NON_SYSTEMD_SYSTEMS" USE_COMMAND_NODE_ADD_REMOTE = "USE_COMMAND_NODE_ADD_REMOTE" USE_COMMAND_NODE_ADD_GUEST = "USE_COMMAND_NODE_ADD_GUEST" USE_COMMAND_NODE_REMOVE_GUEST = "USE_COMMAND_NODE_REMOVE_GUEST" WAIT_FOR_IDLE_ERROR = "WAIT_FOR_IDLE_ERROR" WAIT_FOR_IDLE_NOT_LIVE_CLUSTER = "WAIT_FOR_IDLE_NOT_LIVE_CLUSTER" WAIT_FOR_IDLE_NOT_SUPPORTED = "WAIT_FOR_IDLE_NOT_SUPPORTED" WAIT_FOR_IDLE_TIMED_OUT = "WAIT_FOR_IDLE_TIMED_OUT" WATCHDOG_NOT_FOUND = "WATCHDOG_NOT_FOUND" pcs-0.9.164/pcs/common/test/000077500000000000000000000000001326265502500155225ustar00rootroot00000000000000pcs-0.9.164/pcs/common/test/__init__.py000066400000000000000000000000001326265502500176210ustar00rootroot00000000000000pcs-0.9.164/pcs/common/test/test_node_communicator.py000066400000000000000000000510431326265502500226430ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import io from pcs.test.tools.pcs_unittest import mock, TestCase from pcs.test.tools.custom_mock import ( MockCurl, MockCurlMulti, ) from pcs import settings from pcs.common import pcs_pycurl as pycurl from pcs.lib.node import NodeAddresses import pcs.common.node_communicator as lib class RequestDataUrlEncodeTest(TestCase): def test_no_data(self): action = "action" data = lib.RequestData(action) self.assertEqual(action, data.action) self.assertEqual(0, len(data.structured_data)) self.assertEqual("", data.data) def test_with_data(self): action = "action" orig_data = [ ("key1", "value1"), ("spacial characters", "+-+/%&?'\";[]()*^$#@!~`{:}<>") ] data = lib.RequestData(action, orig_data) self.assertEqual(action, data.action) self.assertEqual(orig_data, data.structured_data) expected_raw_data = ( "key1=value1&spacial+characters=%2B-%2B%2F%25%26%3F%27%22%3B%5B" + "%5D%28%29%2A%5E%24%23%40%21%7E%60%7B%3A%7D%3C%3E" ) self.assertEqual(expected_raw_data, data.data) class RequestTargetConstructorTest(TestCase): def test_no_adresses(self): label = "label" target = lib.RequestTarget(label) self.assertEqual(label, target.label) self.assertEqual([label], target.address_list) def test_with_adresses(self): label = "label" address_list = ["a1", "a2"] original_list = list(address_list) target = lib.RequestTarget(label, address_list=address_list) address_list.append("a3") self.assertEqual(label, target.label) self.assertIsNot(address_list, target.address_list) self.assertEqual(original_list, target.address_list) class RequestTargetFromNodeAdressesTest(TestCase): def test_ring0(self): ring0 = "ring0" target = lib.RequestTarget.from_node_addresses(NodeAddresses(ring0)) self.assertEqual(ring0, target.label) self.assertEqual([ring0], target.address_list) def test_ring1(self): ring0 = "ring0" ring1 = "ring1" target = lib.RequestTarget.from_node_addresses( NodeAddresses(ring0, ring1) ) self.assertEqual(ring0, target.label) self.assertEqual([ring0, ring1], target.address_list) def test_ring0_with_label(self): ring0 = "ring0" label = "label" target = lib.RequestTarget.from_node_addresses( NodeAddresses(ring0, name=label) ) self.assertEqual(label, target.label) self.assertEqual([ring0], target.address_list) def test_ring1_with_label(self): ring0 = "ring0" ring1 = "ring1" 
label = "label" target = lib.RequestTarget.from_node_addresses( NodeAddresses(ring0, ring1, name=label) ) self.assertEqual(label, target.label) self.assertEqual([ring0, ring1], target.address_list) class RequestUrlTest(TestCase): action = "action" def _get_request(self, target): return lib.Request(target, lib.RequestData(self.action)) def assert_url(self, actual_url, host, action, port=None): if port is None: port = settings.pcsd_default_port self.assertEqual( "https://{host}:{port}/{action}".format( host=host, action=action, port=port ), actual_url ) def test_url_basic(self): host = "host" self.assert_url( self._get_request(lib.RequestTarget(host)).url, host, self.action, ) def test_url_with_port(self): host = "host" port = 1234 self.assert_url( self._get_request(lib.RequestTarget(host, port=port)).url, host, self.action, port=port, ) def test_url_ipv6(self): host = "::1" self.assert_url( self._get_request(lib.RequestTarget(host)).url, "[{0}]".format(host), self.action, ) def test_url_multiaddr(self): hosts = ["ring0", "ring1"] action = "action" request = self._get_request( lib.RequestTarget.from_node_addresses(NodeAddresses(*hosts)) ) self.assert_url(request.url, hosts[0], action) request.next_host() self.assert_url(request.url, hosts[1], action) class RequestHostTest(TestCase): action = "action" def _get_request(self, target): return lib.Request(target, lib.RequestData(self.action)) def test_one_host(self): host = "host" request = self._get_request(lib.RequestTarget(host)) self.assertEqual(host, request.host) self.assertRaises(StopIteration, request.next_host) def test_multiple_hosts(self): hosts = ["host1", "host2", "host3"] request = self._get_request(lib.RequestTarget("label", hosts)) for host in hosts: self.assertEqual(host, request.host) if host == hosts[-1]: self.assertRaises(StopIteration, request.next_host) else: request.next_host() class RequestCookiesTest(TestCase): def _get_request(self, token=None): return lib.Request( lib.RequestTarget("host", token=token), lib.RequestData("action") ) def test_with_token(self): token = "token1" self.assertEqual({"token": token}, self._get_request(token).cookies) def test_without_token(self): self.assertEqual({}, self._get_request().cookies) class ResponseTest(TestCase): def fixture_handle(self, info, request, data, debug): handle = MockCurl(info) handle.request_obj = request handle.output_buffer = io.BytesIO() handle.output_buffer.write(data.encode("utf-8")) handle.debug_buffer = io.BytesIO() handle.debug_buffer.write(debug.encode("utf-8")) return handle def test_connection_successful(self): request = lib.Request( lib.RequestTarget("host"), lib.RequestData("request") ) output = "output" debug = "debug" response_code = 200 handle = self.fixture_handle( {pycurl.RESPONSE_CODE: 200}, request, output, debug ) response = lib.Response.connection_successful(handle) self.assertEqual(request, response.request) self.assertTrue(response.was_connected) self.assertIsNone(response.errno) self.assertIsNone(response.error_msg) self.assertEqual(output, response.data) self.assertEqual(debug, response.debug) self.assertEqual(response_code, response.response_code) def test_connection_failure(self): request = lib.Request( lib.RequestTarget("host"), lib.RequestData("request") ) output = "output" debug = "debug" errno = 1 error_msg = "error" handle = self.fixture_handle({}, request, output, debug) response = lib.Response.connection_failure(handle, errno, error_msg) self.assertEqual(request, response.request) self.assertFalse(response.was_connected) 
self.assertEqual(errno, response.errno) self.assertEqual(error_msg, response.error_msg) self.assertEqual(output, response.data) self.assertEqual(debug, response.debug) self.assertIsNone(response.response_code) @mock.patch("pcs.common.node_communicator.pycurl.Curl") class CreateRequestHandleTest(TestCase): _common_opts = { pycurl.PROTOCOLS: pycurl.PROTO_HTTPS, pycurl.VERBOSE: 1, pycurl.SSL_VERIFYHOST: 0, pycurl.SSL_VERIFYPEER: 0, pycurl.NOSIGNAL: 1, } def test_all_info(self, mock_curl): mock_curl.return_value = MockCurl( None, b"output", [ (pycurl.DEBUG_TEXT, b"debug"), (pycurl.DEBUG_DATA_OUT, b"info\n"), ] ) request = lib.Request( lib.RequestTarget( "label", ["host1", "host2"], port=123, token="token_val", ), lib.RequestData("action", [("data", "value")]) ) cookies = { "name1": "val1", "name2": "val2", } handle = lib._create_request_handle(request, cookies, 1) expected_opts = { pycurl.TIMEOUT: 1, pycurl.URL: request.url.encode("utf-8"), pycurl.COOKIE: "name1=val1;name2=val2;token=token_val".encode( "utf-8" ), pycurl.COPYPOSTFIELDS: "data=value".encode("utf-8"), } expected_opts.update(self._common_opts) self.assertLessEqual( set(expected_opts.items()), set(handle.opts.items()) ) self.assertIs(request, handle.request_obj) self.assertEqual("", handle.output_buffer.getvalue().decode("utf-8")) self.assertEqual("", handle.debug_buffer.getvalue().decode("utf-8")) handle.perform() self.assertEqual( "output", handle.output_buffer.getvalue().decode("utf-8") ) self.assertEqual( "* debug\n>> info\n", handle.debug_buffer.getvalue().decode("utf-8") ) def test_basic(self, mock_curl): mock_curl.return_value = MockCurl(None) request = lib.Request( lib.RequestTarget("label"), lib.RequestData("action") ) handle = lib._create_request_handle(request, {}, 10) expected_opts = { pycurl.TIMEOUT: 10, pycurl.URL: request.url.encode("utf-8"), } expected_opts.update(self._common_opts) self.assertLessEqual( set(expected_opts.items()), set(handle.opts.items()) ) self.assertFalse(pycurl.COOKIE in handle.opts) self.assertFalse(pycurl.COPYPOSTFIELDS in handle.opts) self.assertIs(request, handle.request_obj) self.assertEqual("", handle.output_buffer.getvalue().decode("utf-8")) self.assertEqual("", handle.debug_buffer.getvalue().decode("utf-8")) handle.perform() self.assertEqual("", handle.output_buffer.getvalue().decode("utf-8")) self.assertEqual("", handle.debug_buffer.getvalue().decode("utf-8")) def fixture_request(host_id=1, action="action"): return lib.Request( lib.RequestTarget("host{0}".format(host_id)), lib.RequestData(action), ) class CommunicatorBaseTest(TestCase): def setUp(self): self.mock_com_log = mock.MagicMock( spec_set=lib.CommunicatorLoggerInterface ) def get_communicator(self): return lib.Communicator(self.mock_com_log, None, None) def get_multiaddress_communicator(self): return lib.MultiaddressCommunicator(self.mock_com_log, None, None) @mock.patch( "pcs.common.node_communicator.pycurl.CurlMulti", side_effect=lambda: MockCurlMulti([1]) ) @mock.patch("pcs.common.node_communicator._create_request_handle") class CommunicatorSimpleTest(CommunicatorBaseTest): def get_response(self, com, mock_create_handle, handle): request = fixture_request(0, "action") handle.request_obj = request mock_create_handle.return_value = handle com.add_requests([request]) self.assertEqual(0, self.mock_com_log.log_request_start.call_count) response_list = list(com.start_loop()) self.assertEqual(1, len(response_list)) response = response_list[0] self.assertIs(handle, response.handle) self.assertIs(request, response.request) 
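# --- Illustrative aside (editor's sketch, not the pcs implementation).
# CreateRequestHandleTest nearby checks which pycurl options
# _create_request_handle() sets: TIMEOUT, URL, COOKIE and COPYPOSTFIELDS
# on top of the common HTTPS options. Hand-rolling a comparable handle
# with the real pycurl API looks roughly like this; the function name and
# signature are the editor's:
import pycurl

def make_handle(url, cookies, post_data, timeout):
    handle = pycurl.Curl()
    handle.setopt(pycurl.PROTOCOLS, pycurl.PROTO_HTTPS)
    handle.setopt(pycurl.NOSIGNAL, 1)
    handle.setopt(pycurl.TIMEOUT, timeout)
    handle.setopt(pycurl.URL, url.encode("utf-8"))
    if cookies:
        # the tests show cookies joined into a single "name=value;..." string
        cookie_str = ";".join(sorted(
            "{0}={1}".format(name, value) for name, value in cookies.items()
        ))
        handle.setopt(pycurl.COOKIE, cookie_str.encode("utf-8"))
    if post_data:
        handle.setopt(pycurl.COPYPOSTFIELDS, post_data.encode("utf-8"))
    return handle
# --- end of editor's aside ---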
mock_create_handle.assert_called_once_with( request, {}, settings.default_request_timeout ) return response def assert_common_checks(self, com, response): self.assertEqual(response.handle.error is None, response.was_connected) self.mock_com_log.log_request_start.assert_called_once_with(response.request) self.mock_com_log.log_response.assert_called_once_with(response) self.assertEqual(0, self.mock_com_log.log_retry.call_count) self.assertEqual(0, self.mock_com_log.log_no_more_addresses.call_count) com._multi_handle.assert_no_handle_left() def test_simple(self, mock_create_handle, _): com = self.get_communicator() response = self.get_response(com, mock_create_handle, MockCurl()) self.assert_common_checks(com, response) def test_failure(self, mock_create_handle, _): com = self.get_communicator() expected_reason = "expected reason" errno = pycurl.E_SEND_ERROR response = self.get_response( com, mock_create_handle, MockCurl(error=(errno, expected_reason)) ) self.assert_common_checks(com, response) self.assertEqual(errno, response.errno) self.assertEqual(expected_reason, response.error_msg) class CommunicatorMultiTest(CommunicatorBaseTest): @mock.patch("pcs.common.node_communicator._create_request_handle") @mock.patch( "pcs.common.node_communicator.pycurl.CurlMulti", side_effect=lambda: MockCurlMulti([1, 1]) ) def test_call_start_loop_multiple_times(self, _, mock_create_handle): com = self.get_communicator() mock_create_handle.side_effect = lambda request, _, __: MockCurl( request=request ) com.add_requests([fixture_request(i) for i in range(2)]) next(com.start_loop()) with self.assertRaises(AssertionError): next(com.start_loop()) @mock.patch("pcs.common.node_communicator.pycurl.Curl") @mock.patch( "pcs.common.node_communicator.pycurl.CurlMulti", side_effect=lambda: MockCurlMulti([2, 0, 0, 1, 0, 1, 1]) ) def test_multiple(self, _, mock_curl): com = self.get_communicator() action = "action" counter = {"counter": 0} def _create_mock_curl(): counter["counter"] += 1 return ( MockCurl() if counter["counter"] != 2 else MockCurl(error=(pycurl.E_SEND_ERROR, "reason")) ) mock_curl.side_effect = _create_mock_curl request_list = [fixture_request(i, action) for i in range(3)] com.add_requests(request_list) self.assertEqual(0, self.mock_com_log.log_request_start.call_count) response_list = [] for response in com.start_loop(): if len(response_list) == 0: request = fixture_request(3, action) request_list.append(request) com.add_requests([request]) elif len(response_list) == 3: request = fixture_request(4, action) request_list.append(request) com.add_requests([request]) response_list.append(response) self.assertEqual(len(request_list), len(response_list)) self.assertEqual(request_list, [r.request for r in response_list]) for i in range(len(request_list)): self.assertEqual(i != 1, response_list[i].was_connected) logger_calls = ( [mock.call.log_request_start(request_list[i]) for i in range(3)] + [ mock.call.log_response(response_list[0]), mock.call.log_request_start(request_list[3]), ] + [mock.call.log_response(response_list[i]) for i in range(1, 4)] + [ mock.call.log_request_start(request_list[4]), mock.call.log_response(response_list[4]), ] ) self.assertEqual(logger_calls, self.mock_com_log.mock_calls) com._multi_handle.assert_no_handle_left() def fixture_logger_request_retry_calls(response, host): return [ mock.call.log_request_start(response.request), mock.call.log_response(response), mock.call.log_retry(response, host), ] @mock.patch.object(lib.Response, "connection_failure") @mock.patch.object(lib.Response, 
"connection_successful") @mock.patch( "pcs.common.node_communicator.pycurl.CurlMulti", side_effect=lambda: MockCurlMulti([1, 0, 1, 1, 1]) ) @mock.patch("pcs.common.node_communicator._create_request_handle") class MultiaddressCommunicatorTest(CommunicatorBaseTest): def test_success( self, mock_create_handle, _, mock_con_successful, mock_con_failure ): com = self.get_multiaddress_communicator() counter = {"counter": 0} expected_response_list = [] def _con_successful(handle): response = lib.Response(handle, True) expected_response_list.append(response) return response def _con_failure(handle, errno, err_msg): response = lib.Response(handle, False, errno, err_msg) expected_response_list.append(response) return response def _mock_create_request_handle(request, _, __): counter["counter"] += 1 return( MockCurl(request=request) if counter["counter"] > 2 else MockCurl( error=(pycurl.E_SEND_ERROR, "reason"), request=request, ) ) mock_con_successful.side_effect = _con_successful mock_con_failure.side_effect = _con_failure mock_create_handle.side_effect = _mock_create_request_handle request = lib.Request( lib.RequestTarget("label", ["host{0}".format(i) for i in range(4)]), lib.RequestData("action") ) com.add_requests([request]) response_list = list(com.start_loop()) self.assertEqual(1, len(response_list)) response = response_list[0] self.assertIs(response, expected_response_list[-1]) self.assertTrue(response.was_connected) self.assertIs(request, response.request) self.assertEqual("host2", request.host) self.assertEqual(3, mock_create_handle.call_count) self.assertEqual(3, len(expected_response_list)) mock_create_handle.assert_has_calls([ mock.call(request, {}, settings.default_request_timeout) for _ in range(3) ]) logger_calls = ( fixture_logger_request_retry_calls( expected_response_list[0], "host0" ) + fixture_logger_request_retry_calls( expected_response_list[1], "host1" ) + [ mock.call.log_request_start(request), mock.call.log_response(response), ] ) self.assertEqual(logger_calls, self.mock_com_log.mock_calls) com._multi_handle.assert_no_handle_left() def test_failure( self, mock_create_handle, _, mock_con_successful, mock_con_failure ): expected_response_list = [] def _con_failure(handle, errno, err_msg): response = lib.Response(handle, False, errno, err_msg) expected_response_list.append(response) return response mock_con_failure.side_effect = _con_failure com = self.get_multiaddress_communicator() mock_create_handle.side_effect = lambda request, _, __: MockCurl( error=(pycurl.E_SEND_ERROR, "reason"), request=request, ) request = lib.Request( lib.RequestTarget("label", ["host{0}".format(i) for i in range(4)]), lib.RequestData("action") ) com.add_requests([request]) response_list = list(com.start_loop()) self.assertEqual(1, len(response_list)) response = response_list[0] self.assertFalse(response.was_connected) self.assertIs(request, response.request) self.assertEqual("host3", request.host) self.assertEqual(4, mock_create_handle.call_count) mock_con_successful.assert_not_called() self.assertEqual(4, len(expected_response_list)) mock_create_handle.assert_has_calls([ mock.call(request, {}, settings.default_request_timeout) for _ in range(3) ]) logger_calls = ( fixture_logger_request_retry_calls( expected_response_list[0], "host0" ) + fixture_logger_request_retry_calls( expected_response_list[1], "host1" ) + fixture_logger_request_retry_calls( expected_response_list[2], "host2" ) + [ mock.call.log_request_start(request), mock.call.log_response(response), mock.call.log_no_more_addresses(response) ] ) 
self.assertEqual(logger_calls, self.mock_com_log.mock_calls) com._multi_handle.assert_no_handle_left() pcs-0.9.164/pcs/common/test/test_tools.py000066400000000000000000000111071326265502500202730ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.common.tools import ( is_string, Version ) class IsString(TestCase): def test_recognize_plain_string(self): self.assertTrue(is_string("")) def test_recognize_unicode_string(self): #in python3 this is str type self.assertTrue(is_string(u"")) def test_rcognize_bytes(self): #in python3 this is str type self.assertTrue(is_string(b"")) def test_list_of_string_is_not_string(self): self.assertFalse(is_string(["a", "b"])) class VersionTest(TestCase): def assert_asterisk(self, expected, major, minor=None, revision=None): self.assertEqual(expected, (major, minor, revision)) def assert_eq_tuple(self, a, b): self.assert_eq(Version(*a), Version(*b)) def assert_lt_tuple(self, a, b): self.assert_lt(Version(*a), Version(*b)) def assert_eq(self, a, b): self.assertTrue(a == b) self.assertFalse(a != b) self.assertFalse(a < b) self.assertTrue(a <= b) self.assertFalse(a > b) self.assertTrue(a >= b) def assert_lt(self, a, b): self.assertFalse(a == b) self.assertTrue(a != b) self.assertTrue(a < b) self.assertTrue(a <= b) self.assertFalse(a > b) self.assertFalse(a >= b) def test_major(self): ver = Version(2) self.assert_asterisk((2, None, None), *ver) self.assertEqual(ver.major, 2) self.assertEqual(ver[0], 2) self.assertEqual(ver.minor, None) self.assertEqual(ver[1], None) self.assertEqual(ver.revision, None) self.assertEqual(ver[2], None) self.assertEqual(ver.as_full_tuple, (2, 0, 0)) self.assertEqual(str(ver), "2") self.assertEqual(str(ver.normalize()), "2.0.0") def test_major_minor(self): ver = Version(2, 3) self.assert_asterisk((2, 3, None), *ver) self.assertEqual(ver.major, 2) self.assertEqual(ver[0], 2) self.assertEqual(ver.minor, 3) self.assertEqual(ver[1], 3) self.assertEqual(ver.revision, None) self.assertEqual(ver[2], None) self.assertEqual(ver.as_full_tuple, (2, 3, 0)) self.assertEqual(str(ver), "2.3") self.assertEqual(str(ver.normalize()), "2.3.0") def test_major_minor_revision(self): ver = Version(2, 3, 4) self.assert_asterisk((2, 3, 4), *ver) self.assertEqual(ver.major, 2) self.assertEqual(ver[0], 2) self.assertEqual(ver.minor, 3) self.assertEqual(ver[1], 3) self.assertEqual(ver.revision, 4) self.assertEqual(ver[2], 4) self.assertEqual(ver.as_full_tuple, (2, 3, 4)) self.assertEqual(str(ver), "2.3.4") self.assertEqual(str(ver.normalize()), "2.3.4") def test_compare(self): self.assert_eq_tuple((2, ), (2, )) self.assert_lt_tuple((2, ), (3, )) self.assert_eq_tuple((2, 0), (2, 0)) self.assert_lt_tuple((2, 0), (2, 5)) self.assert_lt_tuple((2, 0), (3, 5)) self.assert_eq_tuple((2, 0), (2, )) self.assert_lt_tuple((2, 0), (3, )) self.assert_lt_tuple((2, 5), (3, )) self.assert_lt_tuple((3, ), (3, 5)) self.assert_eq_tuple((2, 0, 0), (2, 0, 0)) self.assert_lt_tuple((2, 0, 0), (2, 0, 1)) self.assert_lt_tuple((2, 0, 0), (2, 5, 0)) self.assert_lt_tuple((2, 0, 0), (2, 5, 1)) self.assert_lt_tuple((2, 0, 0), (3, 0, 0)) self.assert_lt_tuple((2, 0, 0), (3, 0, 1)) self.assert_lt_tuple((2, 0, 0), (3, 5, 0)) self.assert_lt_tuple((2, 0, 0), (3, 5, 1)) self.assert_eq_tuple((2, 0, 0), (2, 0)) self.assert_eq_tuple((2, 0, 0), (2, )) self.assert_lt_tuple((2, 0, 0), (2, 5)) self.assert_lt_tuple((2, 0, 0), (3, )) self.assert_lt_tuple((2, 5, 0), (3, )) self.assert_lt_tuple((2, ), (2, 5, 
0)) self.assert_eq_tuple((2, 5, 0), (2, 5)) self.assert_lt_tuple((2, 5, 0), (3, 5)) self.assert_lt_tuple((2, 0), (2, 5, 1)) self.assert_lt_tuple((2, 5), (2, 5, 1)) self.assert_lt_tuple((2, 5, 1), (3, 5)) self.assert_lt_tuple((2, 5, 1), (3, )) self.assert_lt_tuple((2, ), (2, 5, 1)) self.assert_lt_tuple((2, 5, 1), (3, )) self.assert_lt_tuple((2, ), (3, 5, 1)) self.assert_lt_tuple((3, ), (3, 5, 1)) self.assert_lt_tuple((2, 0), (3, 5, 1)) self.assert_lt_tuple((2, 5), (3, 5, 1)) self.assert_lt_tuple((3, 5), (3, 5, 1)) pcs-0.9.164/pcs/common/test/test_tools_xml_fromstring.py000066400000000000000000000074041326265502500234320ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.common.tools import xml_fromstring class XmlFromstring(TestCase): def test_large_xml(self): #it raises on a huge xml without the flag huge_tree=True #see https://bugzilla.redhat.com/show_bug.cgi?id=1506864 xml_fromstring(large_xml) large_xml = """ {0} {1} """.format( "".join([ """ """.format(i) for i in range(20000) ]), "".join([ """ {0} """.format("".join([ """ """.format("{0}-{1}".format(i, j)) for j in range(98) ]), i) for i in range(5) ]) ) pcs-0.9.164/pcs/common/tools.py000066400000000000000000000065351326265502500162660ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from collections import namedtuple from lxml import etree import threading import sys _PYTHON2 = (sys.version_info.major == 2) def simple_cache(func): cache = { "was_run": False, "value": None } def wrapper(): if not cache["was_run"]: cache["value"] = func() cache["was_run"] = True return cache["value"] return wrapper def run_parallel(worker, data_list): thread_list = [] for args, kwargs in data_list: thread = threading.Thread(target=worker, args=args, kwargs=kwargs) thread.daemon = True thread_list.append(thread) thread.start() for thread in thread_list: thread.join() def format_environment_error(e): if e.filename: return "{0}: '{1}'".format(e.strerror, e.filename) return e.strerror def join_multilines(strings): return "\n".join([a.strip() for a in strings if a.strip()]) def is_string(candidate): """ Return if candidate is string. Simply lookin solution isinstance(candidate, "".__class__) does not work: >>> isinstance("", "".__class__), isinstance(u"", "".__class__) (True, False) This code also needs to deal with python2 and python3 and unicode type is in python2 but not in python3. """ string_list = [str, bytes] try: string_list.append(unicode) except NameError: #unicode is not present in python3 pass return any([isinstance(candidate, string) for string in string_list]) def xml_fromstring(xml): # If the xml contains encoding declaration such as: # # we get an exception in python3: # ValueError: Unicode strings with encoding declaration are not supported. # Please use bytes input or XML fragments without declaration. # So we encode the string to bytes. # In python2 we cannot do that as it causes a UnicodeDecodeError if the xml # contains a non-ascii character. 
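    # Editor's illustrative note: the python3 failure described above can be
    # reproduced directly --
    #     etree.fromstring('<?xml version="1.0" encoding="utf-8"?><a/>')
    # raises the ValueError quoted above, while the bytes form parses fine:
    #     etree.fromstring(b'<?xml version="1.0" encoding="utf-8"?><a/>')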
return etree.fromstring( xml if _PYTHON2 else xml.encode("utf-8"), #it raises on a huge xml without the flag huge_tree=True #see https://bugzilla.redhat.com/show_bug.cgi?id=1506864 etree.XMLParser(huge_tree=True) ) class Version(namedtuple("Version", ["major", "minor", "revision"])): def __new__(cls, major, minor=None, revision=None): return super(Version, cls).__new__(cls, major, minor, revision) @property def as_full_tuple(self): return ( self.major, self.minor if self.minor is not None else 0, self.revision if self.revision is not None else 0, ) def normalize(self): return self.__class__(*self.as_full_tuple) def __str__(self): return ".".join([str(x) for x in self if x is not None]) def __lt__(self, other): return self.as_full_tuple < other.as_full_tuple def __le__(self, other): return self.as_full_tuple <= other.as_full_tuple def __eq__(self, other): return self.as_full_tuple == other.as_full_tuple def __ne__(self, other): return self.as_full_tuple != other.as_full_tuple def __gt__(self, other): return self.as_full_tuple > other.as_full_tuple def __ge__(self, other): return self.as_full_tuple >= other.as_full_tuple pcs-0.9.164/pcs/config.py000066400000000000000000000733611326265502500151040ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import sys import os import os.path import re import datetime from io import BytesIO import tarfile import json from xml.dom.minidom import parse import logging import pwd import grp import tempfile import time import platform import shutil try: import clufter.facts import clufter.format_manager import clufter.filter_manager import clufter.command_manager no_clufter = False except ImportError: no_clufter = True from pcs import ( cluster, constraint, prop, quorum, resource, settings, status, stonith, usage, utils, alert, ) from pcs.lib.errors import LibraryError from pcs.lib.commands import quorum as lib_quorum import pcs.cli.constraint_colocation.command as colocation_command import pcs.cli.constraint_order.command as order_command import pcs.cli.constraint_ticket.command as ticket_command from pcs.cli.common.console_report import indent def config_cmd(argv): if len(argv) == 0: config_show(argv) return sub_cmd = argv.pop(0) if sub_cmd == "help": usage.config(argv) elif sub_cmd == "show": config_show(argv) elif sub_cmd == "backup": config_backup(argv) elif sub_cmd == "restore": config_restore(argv) elif sub_cmd == "checkpoint": if not argv: config_checkpoint_list() elif argv[0] == "view": config_checkpoint_view(argv[1:]) elif argv[0] == "restore": config_checkpoint_restore(argv[1:]) else: usage.config(["checkpoint"]) sys.exit(1) elif sub_cmd == "import-cman": config_import_cman(argv) elif sub_cmd == "export": if not argv: usage.config(["export"]) sys.exit(1) elif argv[0] == "pcs-commands": config_export_pcs_commands(argv[1:]) elif argv[0] == "pcs-commands-verbose": config_export_pcs_commands(argv[1:], True) else: usage.config(["export"]) sys.exit(1) else: usage.config() sys.exit(1) def config_show(argv): print("Cluster Name: %s" % utils.getClusterName()) status.nodes_status(["config"]) print() config_show_cib() if ( utils.hasCorosyncConf() and ( utils.is_rhel6() or (not utils.usefile and "--corosync_conf" not in utils.pcs_options) ) ): # with corosync 1 and cman, uid gid is part of cluster.conf file # with corosync 2, uid gid is in a separate directory cluster.cluster_uidgid([], True) if ( "--corosync_conf" in utils.pcs_options or (not utils.is_rhel6() and utils.hasCorosyncConf()) ): print() 
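# --- Illustrative aside (editor's sketch; refers to the Version class
# defined in pcs/common/tools.py above and mirrors VersionTest). Missing
# parts compare as zero, so partial and full version tuples mix cleanly.
# Assumes the pcs package is importable:
from pcs.common.tools import Version

assert Version(2) == Version(2, 0, 0)
assert Version(2, 5) < Version(2, 5, 1)
assert Version(2, 5, 1) < Version(3)
assert str(Version(2, 3)) == "2.3"                # only the parts given
assert str(Version(2, 3).normalize()) == "2.3.0"  # padded to the full form
# --- end of editor's aside ---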
print("Quorum:") try: config = lib_quorum.get_config(utils.get_lib_env()) print("\n".join(indent(quorum.quorum_config_to_str(config)))) except LibraryError as e: utils.process_library_reports(e.args) def config_show_cib(): lib = utils.get_library_wrapper() modifiers = utils.get_modifiers() print("Resources:") utils.pcs_options["--all"] = 1 utils.pcs_options["--full"] = 1 resource.resource_show([]) print() print("Stonith Devices:") resource.resource_show([], True) print("Fencing Levels:") levels = stonith.stonith_level_config_to_str( lib.fencing_topology.get_config() ) if levels: print("\n".join(indent(levels, 2))) print() constraint.location_show([]) order_command.show(lib, [], modifiers) colocation_command.show(lib, [], modifiers) ticket_command.show(lib, [], modifiers) print() alert.print_alert_config(lib, [], modifiers) print() del utils.pcs_options["--all"] print("Resources Defaults:") resource.show_defaults("rsc_defaults", indent=" ") print("Operations Defaults:") resource.show_defaults("op_defaults", indent=" ") print() prop.list_property([]) def config_backup(argv): if len(argv) > 1: usage.config(["backup"]) sys.exit(1) outfile_name = None if argv: outfile_name = argv[0] if not outfile_name.endswith(".tar.bz2"): outfile_name += ".tar.bz2" tar_data = config_backup_local() if outfile_name: ok, message = utils.write_file( outfile_name, tar_data, permissions=0o600, binary=True ) if not ok: utils.err(message) else: # in python3 stdout accepts str so we need to use buffer if hasattr(sys.stdout, "buffer"): sys.stdout.buffer.write(tar_data) else: sys.stdout.write(tar_data) def config_backup_local(): file_list = config_backup_path_list() tar_data = BytesIO() try: tarball = tarfile.open(fileobj=tar_data, mode="w|bz2") config_backup_add_version_to_tarball(tarball) for tar_path, path_info in file_list.items(): if ( not os.path.exists(path_info["path"]) and not path_info["required"] ): continue tarball.add(path_info["path"], tar_path) tarball.close() except (tarfile.TarError, EnvironmentError) as e: utils.err("unable to create tarball: %s" % e) tar = tar_data.getvalue() tar_data.close() return tar def config_restore(argv): if len(argv) > 1: usage.config(["restore"]) sys.exit(1) infile_name = infile_obj = None if argv: infile_name = argv[0] if not infile_name: # in python3 stdin returns str so we need to use buffer if hasattr(sys.stdin, "buffer"): infile_obj = BytesIO(sys.stdin.buffer.read()) else: infile_obj = BytesIO(sys.stdin.read()) if os.getuid() == 0: if "--local" in utils.pcs_options: config_restore_local(infile_name, infile_obj) else: config_restore_remote(infile_name, infile_obj) else: new_argv = ['config', 'restore'] new_stdin = None if '--local' in utils.pcs_options: new_argv.append('--local') if infile_name: new_argv.append(os.path.abspath(infile_name)) else: new_stdin = infile_obj.read() err_msgs, exitcode, std_out, std_err = utils.call_local_pcsd( new_argv, True, new_stdin ) if err_msgs: for msg in err_msgs: utils.err(msg, False) sys.exit(1) print(std_out) sys.stderr.write(std_err) sys.exit(exitcode) def config_restore_remote(infile_name, infile_obj): extracted = { "version.txt": "", "corosync.conf": "", "cluster.conf": "", } try: tarball = tarfile.open(infile_name, "r|*", infile_obj) while True: # next(tarball) does not work in python2.6 tar_member_info = tarball.next() if tar_member_info is None: break if tar_member_info.name in extracted: tar_member = tarball.extractfile(tar_member_info) extracted[tar_member_info.name] = tar_member.read() tar_member.close() tarball.close() except 
(tarfile.TarError, EnvironmentError) as e: utils.err("unable to read the tarball: %s" % e) config_backup_check_version(extracted["version.txt"]) node_list = utils.getNodesFromCorosyncConf( extracted["cluster.conf" if utils.is_rhel6() else "corosync.conf"].decode("utf-8") ) if not node_list: utils.err("no nodes found in the tarball") err_msgs = [] for node in node_list: try: retval, output = utils.checkStatus(node) if retval != 0: err_msgs.append(output) continue status = json.loads(output) if ( status["corosync"] or status["pacemaker"] or status["cman"] or # not supported by older pcsd, do not fail if not present status.get("pacemaker_remote", False) ): err_msgs.append( "Cluster is currently running on node %s. You need to stop " "the cluster in order to restore the configuration." % node ) continue except (ValueError, NameError, LookupError): err_msgs.append("unable to determine status of the node %s" % node) if err_msgs: for msg in err_msgs: utils.err(msg, False) sys.exit(1) # Temporarily disable config files syncing thread in pcsd so it will not # rewrite restored files. 10 minutes should be enough time to restore. # If node returns HTTP 404 it does not support config syncing at all. for node in node_list: retval, output = utils.pauseConfigSyncing(node, 10 * 60) if not (retval == 0 or "(HTTP error: 404)" in output): utils.err(output) if infile_obj: infile_obj.seek(0) tarball_data = infile_obj.read() else: with open(infile_name, "rb") as tarball: tarball_data = tarball.read() error_list = [] for node in node_list: retval, error = utils.restoreConfig(node, tarball_data) if retval != 0: error_list.append(error) if error_list: utils.err("unable to restore all nodes\n" + "\n".join(error_list)) def config_restore_local(infile_name, infile_obj): if ( status.is_service_running("cman") or status.is_service_running("corosync") or status.is_service_running("pacemaker") or status.is_service_running("pacemaker_remote") ): utils.err( "Cluster is currently running on this node. You need to stop " "the cluster in order to restore the configuration." 
) file_list = config_backup_path_list(with_uid_gid=True) tarball_file_list = [] version = None tmp_dir = None try: tarball = tarfile.open(infile_name, "r|*", infile_obj) while True: # next(tarball) does not work in python2.6 tar_member_info = tarball.next() if tar_member_info is None: break if tar_member_info.name == "version.txt": version_data = tarball.extractfile(tar_member_info) version = version_data.read() version_data.close() continue tarball_file_list.append(tar_member_info.name) tarball.close() required_file_list = [ tar_path for tar_path, path_info in file_list.items() if path_info["required"] ] missing = set(required_file_list) - set(tarball_file_list) if missing: utils.err( "unable to restore the cluster, missing files in backup: %s" % ", ".join(missing) ) config_backup_check_version(version) if infile_obj: infile_obj.seek(0) tarball = tarfile.open(infile_name, "r|*", infile_obj) while True: # next(tarball) does not work in python2.6 tar_member_info = tarball.next() if tar_member_info is None: break extract_info = None path = tar_member_info.name while path: if path in file_list: extract_info = file_list[path] break path = os.path.dirname(path) if not extract_info: continue path_full = None if hasattr(extract_info.get("pre_store_call"), '__call__'): extract_info["pre_store_call"]() if "rename" in extract_info and extract_info["rename"]: if tmp_dir is None: tmp_dir = tempfile.mkdtemp() tarball.extractall(tmp_dir, [tar_member_info]) path_full = extract_info["path"] os.rename( os.path.join(tmp_dir, tar_member_info.name), path_full ) else: dir_path = os.path.dirname(extract_info["path"]) tarball.extractall(dir_path, [tar_member_info]) path_full = os.path.join(dir_path, tar_member_info.name) file_attrs = extract_info["attrs"] os.chmod(path_full, file_attrs["mode"]) os.chown(path_full, file_attrs["uid"], file_attrs["gid"]) tarball.close() except (tarfile.TarError, EnvironmentError, OSError) as e: utils.err("unable to restore the cluster: %s" % e) finally: if tmp_dir: shutil.rmtree(tmp_dir, ignore_errors=True) try: sig_path = os.path.join(settings.cib_dir, "cib.xml.sig") if os.path.exists(sig_path): os.remove(sig_path) except EnvironmentError as e: utils.err("unable to remove %s: %s" % (sig_path, e)) def config_backup_path_list(with_uid_gid=False, force_rhel6=None): rhel6 = utils.is_rhel6() if force_rhel6 is None else force_rhel6 corosync_attrs = { "mtime": int(time.time()), "mode": 0o644, "uname": "root", "gname": "root", "uid": 0, "gid": 0, } corosync_authkey_attrs = dict(corosync_attrs) corosync_authkey_attrs["mode"] = 0o400 cib_attrs = { "mtime": int(time.time()), "mode": 0o600, "uname": settings.pacemaker_uname, "gname": settings.pacemaker_gname, } if with_uid_gid: cib_attrs["uid"] = _get_uid(cib_attrs["uname"]) cib_attrs["gid"] = _get_gid(cib_attrs["gname"]) pcmk_authkey_attrs = dict(cib_attrs) pcmk_authkey_attrs["mode"] = 0o440 file_list = { "cib.xml": { "path": os.path.join(settings.cib_dir, "cib.xml"), "required": True, "attrs": dict(cib_attrs), }, "corosync_authkey": { "path": settings.corosync_authkey_file, "required": False, "attrs": corosync_authkey_attrs, "restore_procedure": None, "rename": True, }, "pacemaker_authkey": { "path": settings.pacemaker_authkey_file, "required": False, "attrs": pcmk_authkey_attrs, "restore_procedure": None, "rename": True, "pre_store_call": _ensure_etc_pacemaker_exists, }, } if rhel6: file_list["cluster.conf"] = { "path": settings.cluster_conf_file, "required": True, "attrs": dict(corosync_attrs), } else: file_list["corosync.conf"] = { 
"path": settings.corosync_conf_file, "required": True, "attrs": dict(corosync_attrs), } file_list["uidgid.d"] = { "path": settings.corosync_uidgid_dir.rstrip("/"), "required": False, "attrs": dict(corosync_attrs), } file_list["pcs_settings.conf"] = { "path": settings.pcsd_settings_conf_location, "required": False, "attrs": { "mtime": int(time.time()), "mode": 0o644, "uname": "root", "gname": "root", "uid": 0, "gid": 0, }, } return file_list def _get_uid(user_name): try: return pwd.getpwnam(user_name).pw_uid except KeyError: utils.err("Unable to determine uid of user '{0}'".format(user_name)) def _get_gid(group_name): try: return grp.getgrnam(group_name).gr_gid except KeyError: utils.err( "Unable to determine gid of group '{0}'".format(group_name) ) def _ensure_etc_pacemaker_exists(): dir_name = os.path.dirname(settings.pacemaker_authkey_file) if not os.path.exists(dir_name): os.mkdir(dir_name) os.chmod(dir_name, 0o750) os.chown( dir_name, _get_uid(settings.pacemaker_uname), _get_gid(settings.pacemaker_gname) ) def config_backup_check_version(version): try: version_number = int(version) supported_version = config_backup_version() if version_number > supported_version: utils.err( "Unsupported version of the backup, " "supported version is %d, backup version is %d" % (supported_version, version_number) ) if version_number < supported_version: print( "Warning: restoring from the backup version %d, " "current supported version is %s" % (version_number, supported_version) ) except TypeError: utils.err("Cannot determine version of the backup") def config_backup_add_version_to_tarball(tarball, version=None): ver = version if version is not None else str(config_backup_version()) return utils.tar_add_file_data(tarball, ver.encode("utf-8"), "version.txt") def config_backup_version(): return 1 def config_checkpoint_list(): try: file_list = os.listdir(settings.cib_dir) except OSError as e: utils.err("unable to list checkpoints: %s" % e) cib_list = [] cib_name_re = re.compile("^cib-(\d+)\.raw$") for filename in file_list: match = cib_name_re.match(filename) if not match: continue file_path = os.path.join(settings.cib_dir, filename) try: if os.path.isfile(file_path): cib_list.append( (float(os.path.getmtime(file_path)), match.group(1)) ) except OSError: pass cib_list.sort() if not cib_list: print("No checkpoints available") return for cib_info in cib_list: print( "checkpoint %s: date %s" % (cib_info[1], datetime.datetime.fromtimestamp(round(cib_info[0]))) ) def config_checkpoint_view(argv): if len(argv) != 1: usage.config(["checkpoint", "view"]) sys.exit(1) utils.usefile = True utils.filename = os.path.join(settings.cib_dir, "cib-%s.raw" % argv[0]) if not os.path.isfile(utils.filename): utils.err("unable to read the checkpoint") config_show_cib() def config_checkpoint_restore(argv): if len(argv) != 1: usage.config(["checkpoint", "restore"]) sys.exit(1) cib_path = os.path.join(settings.cib_dir, "cib-%s.raw" % argv[0]) try: snapshot_dom = parse(cib_path) except Exception as e: utils.err("unable to read the checkpoint: %s" % e) utils.replace_cib_configuration(snapshot_dom) def config_import_cman(argv): if no_clufter: utils.err("Unable to perform a CMAN cluster conversion due to missing python-clufter package") # prepare convertor options cluster_conf = settings.cluster_conf_file dry_run_output = None output_format = "cluster.conf" if utils.is_rhel6() else "corosync.conf" dist = None invalid_args = False for arg in argv: if "=" in arg: name, value = arg.split("=", 1) if name == "input": cluster_conf = value 
elif name == "output": dry_run_output = value elif name == "output-format": if value in ( "cluster.conf", "corosync.conf", "pcs-commands", "pcs-commands-verbose", ): output_format = value else: invalid_args = True elif name == "dist": dist = value else: invalid_args = True else: invalid_args = True if ( output_format not in ("pcs-commands", "pcs-commands-verbose") and (dry_run_output and not dry_run_output.endswith(".tar.bz2")) ): dry_run_output += ".tar.bz2" if invalid_args or not dry_run_output: usage.config(["import-cman"]) sys.exit(1) debug = "--debug" in utils.pcs_options force = "--force" in utils.pcs_options interactive = "--interactive" in utils.pcs_options if dist is not None: if output_format == "cluster.conf": if not clufter.facts.cluster_pcs_flatiron("linux", dist.split(",")): utils.err("dist does not match output-format") elif output_format == "corosync.conf": if not clufter.facts.cluster_pcs_needle("linux", dist.split(",")): utils.err("dist does not match output-format") elif ( (output_format == "cluster.conf" and utils.is_rhel6()) or (output_format == "corosync.conf" and not utils.is_rhel6()) ): dist = ",".join(platform.linux_distribution(full_distribution_name=0)) elif output_format == "cluster.conf": dist = "redhat,6.7,Santiago" elif output_format == "corosync.conf": dist = "redhat,7.1,Maipo" else: # for output-format=pcs-command[-verbose] dist = ",".join(platform.linux_distribution(full_distribution_name=0)) clufter_args = { "input": str(cluster_conf), "cib": {"passin": "bytestring"}, "nocheck": force, "batch": True, "sys": "linux", "dist": dist, } if interactive: if "EDITOR" not in os.environ: utils.err("$EDITOR environment variable is not set") clufter_args["batch"] = False clufter_args["editor"] = os.environ["EDITOR"] if debug: logging.getLogger("clufter").setLevel(logging.DEBUG) if output_format == "cluster.conf": clufter_args["ccs_pcmk"] = {"passin": "bytestring"} cmd_name = "ccs2pcs-flatiron" elif output_format == "corosync.conf": clufter_args["coro"] = {"passin": "struct"} cmd_name = "ccs2pcs-needle" elif output_format in ("pcs-commands", "pcs-commands-verbose"): clufter_args["output"] = {"passin": "bytestring"} clufter_args["start_wait"] = "60" clufter_args["tmp_cib"] = "tmp-cib.xml" clufter_args["force"] = force clufter_args["text_width"] = "80" clufter_args["silent"] = True clufter_args["noguidance"] = True if output_format == "pcs-commands-verbose": clufter_args["text_width"] = "-1" clufter_args["silent"] = False clufter_args["noguidance"] = False if clufter.facts.cluster_pcs_flatiron("linux", dist.split(",")): cmd_name = "ccs2pcscmd-flatiron" elif clufter.facts.cluster_pcs_needle("linux", dist.split(",")): cmd_name = "ccs2pcscmd-needle" else: utils.err( "unrecognized dist, try something recognized" + " (e. g. 
rhel,6.8 or redhat,7.3 or debian,7 or ubuntu,trusty)" ) clufter_args_obj = type(str("ClufterOptions"), (object, ), clufter_args) # run convertor run_clufter( cmd_name, clufter_args_obj, debug, force, "Error: unable to import cluster configuration" ) # save commands if output_format in ("pcs-commands", "pcs-commands-verbose"): ok, message = utils.write_file( dry_run_output, clufter_args_obj.output["passout"].decode() ) if not ok: utils.err(message) return # put new config files into tarball file_list = config_backup_path_list( force_rhel6=(output_format == "cluster.conf") ) for file_item in file_list.values(): file_item["attrs"]["uname"] = "root" file_item["attrs"]["gname"] = "root" file_item["attrs"]["uid"] = 0 file_item["attrs"]["gid"] = 0 file_item["attrs"]["mode"] = 0o600 tar_data = BytesIO() try: tarball = tarfile.open(fileobj=tar_data, mode="w|bz2") config_backup_add_version_to_tarball(tarball) utils.tar_add_file_data( tarball, clufter_args_obj.cib["passout"], "cib.xml", **file_list["cib.xml"]["attrs"] ) if output_format == "cluster.conf": utils.tar_add_file_data( tarball, clufter_args_obj.ccs_pcmk["passout"], "cluster.conf", **file_list["cluster.conf"]["attrs"] ) else: # put uidgid into separate files fmt_simpleconfig = clufter.format_manager.FormatManager.init_lookup( 'simpleconfig' ).plugins['simpleconfig'] corosync_struct = [] uidgid_list = [] for section in clufter_args_obj.coro["passout"][2]: if section[0] == "uidgid": uidgid_list.append(section[1]) else: corosync_struct.append(section) corosync_conf_data = fmt_simpleconfig( "struct", ("corosync", (), corosync_struct) )("bytestring") utils.tar_add_file_data( tarball, corosync_conf_data, "corosync.conf", **file_list["corosync.conf"]["attrs"] ) for uidgid in uidgid_list: uid = "" gid = "" for item in uidgid: if item[0] == "uid": uid = item[1] if item[0] == "gid": gid = item[1] filename = utils.get_uid_gid_file_name(uid, gid) uidgid_data = fmt_simpleconfig( "struct", ("corosync", (), [("uidgid", uidgid, None)]) )("bytestring") utils.tar_add_file_data( tarball, uidgid_data, "uidgid.d/" + filename, **file_list["uidgid.d"]["attrs"] ) tarball.close() except (tarfile.TarError, EnvironmentError) as e: utils.err("unable to create tarball: %s" % e) tar_data.seek(0) #save tarball / remote restore if dry_run_output: ok, message = utils.write_file( dry_run_output, tar_data.read(), permissions=0o600, binary=True ) if not ok: utils.err(message) else: config_restore_remote(None, tar_data) tar_data.close() def config_export_pcs_commands(argv, verbose=False): if no_clufter: utils.err( "Unable to perform export due to missing python-clufter package" ) # parse options debug = "--debug" in utils.pcs_options force = "--force" in utils.pcs_options interactive = "--interactive" in utils.pcs_options invalid_args = False output_file = None dist = None for arg in argv: if "=" in arg: name, value = arg.split("=", 1) if name == "output": output_file = value elif name == "dist": dist = value else: invalid_args = True else: invalid_args = True # check options if invalid_args: usage.config(["export", "pcs-commands"]) sys.exit(1) # complete optional options if dist is None: dist = ",".join(platform.linux_distribution(full_distribution_name=0)) # prepare convertor options clufter_args = { "nocheck": force, "batch": True, "sys": "linux", "dist": dist, "coro": settings.corosync_conf_file, "ccs": settings.cluster_conf_file, "start_wait": "60", "tmp_cib": "tmp-cib.xml", "force": force, "text_width": "80", "silent": True, "noguidance": True, } if output_file: 
clufter_args["output"] = {"passin": "bytestring"} else: clufter_args["output"] = "-" if interactive: if "EDITOR" not in os.environ: utils.err("$EDITOR environment variable is not set") clufter_args["batch"] = False clufter_args["editor"] = os.environ["EDITOR"] if debug: logging.getLogger("clufter").setLevel(logging.DEBUG) if utils.usefile: clufter_args["cib"] = os.path.abspath(utils.filename) else: clufter_args["cib"] = ("bytestring", utils.get_cib()) if verbose: clufter_args["text_width"] = "-1" clufter_args["silent"] = False clufter_args["noguidance"] = False clufter_args_obj = type(str("ClufterOptions"), (object, ), clufter_args) cmd_name = "pcs2pcscmd-flatiron" if utils.is_rhel6() else "pcs2pcscmd-needle" # run convertor run_clufter( cmd_name, clufter_args_obj, debug, force, "Error: unable to export cluster configuration" ) # save commands if not printed to stdout by clufter if output_file: ok, message = utils.write_file( output_file, clufter_args_obj.output["passout"].decode() ) if not ok: utils.err(message) def run_clufter(cmd_name, cmd_args, debug, force, err_prefix): try: result = None cmd_manager = clufter.command_manager.CommandManager.init_lookup( cmd_name ) result = cmd_manager.commands[cmd_name](cmd_args) error_message = "" except Exception as e: error_message = str(e) if error_message or result != 0: hints = [] hints.append("--interactive to solve the issues manually") if not debug: hints.append("--debug to get more information") if not force: hints.append("--force to override") hints_string = "\nTry using %s." % ", ".join(hints) if hints else "" sys.stderr.write( err_prefix + (": %s" % error_message if error_message else "") + hints_string + "\n" ) sys.exit(1 if result is None else result) pcs-0.9.164/pcs/constraint.py000066400000000000000000001301601326265502500160120ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import sys import xml.dom.minidom from collections import defaultdict from xml.dom.minidom import parseString from pcs import ( rule as rule_utils, usage, utils, ) from pcs.cli import ( constraint_colocation, constraint_order, ) from pcs.cli.common import parse_args from pcs.cli.common.errors import CmdLineInputError import pcs.cli.constraint_colocation.command as colocation_command import pcs.cli.constraint_order.command as order_command from pcs.cli.constraint_ticket import command as ticket_command from pcs.lib.cib.constraint import resource_set from pcs.lib.cib.constraint.order import ATTRIB as order_attrib from pcs.lib.errors import LibraryError from pcs.lib.pacemaker.values import sanitize_id OPTIONS_ACTION = resource_set.ATTRIB["action"] DEFAULT_ACTION = "start" DEFAULT_ROLE = "Started" OPTIONS_SYMMETRICAL = order_attrib["symmetrical"] OPTIONS_KIND = order_attrib["kind"] RESOURCE_TYPE_RESOURCE = "resource" RESOURCE_TYPE_REGEXP = "regexp" def constraint_cmd(argv): lib = utils.get_library_wrapper() modifiers = utils.get_modifiers() if len(argv) == 0: argv = ["list"] sub_cmd = argv.pop(0) try: if (sub_cmd == "help"): usage.constraint(argv) elif (sub_cmd == "location"): if len (argv) == 0: sub_cmd2 = "show" else: sub_cmd2 = argv.pop(0) if (sub_cmd2 == "add"): location_add(argv) elif (sub_cmd2 in ["remove","delete"]): location_add(argv,True) elif (sub_cmd2 == "show"): location_show(argv) elif len(argv) >= 2: if argv[0] == "rule": location_rule([sub_cmd2] + argv) else: location_prefer([sub_cmd2] + argv) else: usage.constraint() sys.exit(1) elif (sub_cmd == "order"): if (len(argv) == 0): sub_cmd2 = "show" else: 
sub_cmd2 = argv.pop(0) if (sub_cmd2 == "set"): try: order_command.create_with_set(lib, argv, modifiers) except CmdLineInputError as e: utils.exit_on_cmdline_input_errror(e, "constraint", 'order set') except LibraryError as e: utils.process_library_reports(e.args) elif (sub_cmd2 in ["remove","delete"]): order_rm(argv) elif (sub_cmd2 == "show"): order_command.show(lib, argv, modifiers) else: order_start([sub_cmd2] + argv) elif sub_cmd == "ticket": usage_name = "ticket" try: command_map = { "set": ticket_command.create_with_set, "add": ticket_command.add, "remove": ticket_command.remove, "show": ticket_command.show, } sub_command = argv[0] if argv else "show" if sub_command not in command_map: raise CmdLineInputError() usage_name = "ticket "+sub_command command_map[sub_command](lib, argv[1:], modifiers) except LibraryError as e: utils.process_library_reports(e.args) except CmdLineInputError as e: utils.exit_on_cmdline_input_errror(e, "constraint", usage_name) elif (sub_cmd == "colocation"): if (len(argv) == 0): sub_cmd2 = "show" else: sub_cmd2 = argv.pop(0) if (sub_cmd2 == "add"): colocation_add(argv) elif (sub_cmd2 in ["remove","delete"]): colocation_rm(argv) elif (sub_cmd2 == "set"): try: colocation_command.create_with_set(lib, argv, modifiers) except LibraryError as e: utils.process_library_reports(e.args) except CmdLineInputError as e: utils.exit_on_cmdline_input_errror(e, "constraint", "colocation set") elif (sub_cmd2 == "show"): colocation_command.show(lib, argv, modifiers) else: usage.constraint() sys.exit(1) elif (sub_cmd in ["remove","delete"]): constraint_rm(argv) elif (sub_cmd == "show" or sub_cmd == "list"): location_show(argv) order_command.show(lib, argv, modifiers) colocation_command.show(lib, argv, modifiers) ticket_command.show(lib, argv, modifiers) elif (sub_cmd == "ref"): constraint_ref(argv) elif (sub_cmd == "rule"): constraint_rule(argv) else: usage.constraint() sys.exit(1) except LibraryError as e: utils.process_library_reports(e.args) except CmdLineInputError as e: utils.exit_on_cmdline_input_errror(e, "resource", sub_cmd) def colocation_rm(argv): elementFound = False if len(argv) < 2: usage.constraint() sys.exit(1) (dom,constraintsElement) = getCurrentConstraints() resource1 = argv[0] resource2 = argv[1] for co_loc in constraintsElement.getElementsByTagName('rsc_colocation')[:]: if co_loc.getAttribute("rsc") == resource1 and co_loc.getAttribute("with-rsc") == resource2: constraintsElement.removeChild(co_loc) elementFound = True if co_loc.getAttribute("rsc") == resource2 and co_loc.getAttribute("with-rsc") == resource1: constraintsElement.removeChild(co_loc) elementFound = True if elementFound == True: utils.replace_cib_configuration(dom) else: print("No matching resources found in ordering list") # When passed an array of arguments if the first argument doesn't have an '=' # then it's the score, otherwise they're all arguments # Return a tuple with the score and array of name,value pairs def parse_score_options(argv): if len(argv) == 0: return "INFINITY",[] arg_array = [] first = argv[0] if first.find('=') != -1: score = "INFINITY" else: score = argv.pop(0) for arg in argv: args = arg.split('=') if (len(args) != 2): continue arg_array.append(args) return (score, arg_array) # There are two acceptable syntaxes # Deprecated - colocation add [score] [options] # Supported - colocation add [role] with [role] [score] [options] def colocation_add(argv): if len(argv) < 2: usage.constraint() sys.exit(1) role1 = "" role2 = "" if len(argv) > 2: if not 
utils.is_score_or_opt(argv[2]): if argv[2] == "with": role1 = argv.pop(0).lower().capitalize() resource1 = argv.pop(0) else: resource1 = argv.pop(0) argv.pop(0) # Pop 'with' if len(argv) == 1: resource2 = argv.pop(0) else: if utils.is_score_or_opt(argv[1]): resource2 = argv.pop(0) else: role2 = argv.pop(0).lower().capitalize() resource2 = argv.pop(0) else: resource1 = argv.pop(0) resource2 = argv.pop(0) else: resource1 = argv.pop(0) resource2 = argv.pop(0) cib_dom = utils.get_cib_dom() resource_valid, resource_error, correct_id \ = utils.validate_constraint_resource(cib_dom, resource1) if "--autocorrect" in utils.pcs_options and correct_id: resource1 = correct_id elif not resource_valid: utils.err(resource_error) resource_valid, resource_error, correct_id \ = utils.validate_constraint_resource(cib_dom, resource2) if "--autocorrect" in utils.pcs_options and correct_id: resource2 = correct_id elif not resource_valid: utils.err(resource_error) score,nv_pairs = parse_score_options(argv) id_in_nvpairs = None for name, value in nv_pairs: if name == "id": id_valid, id_error = utils.validate_xml_id(value, 'constraint id') if not id_valid: utils.err(id_error) if utils.does_id_exist(cib_dom, value): utils.err( "id '%s' is already in use, please specify another one" % value ) id_in_nvpairs = True if not id_in_nvpairs: nv_pairs.append(( "id", utils.find_unique_id( cib_dom, "colocation-%s-%s-%s" % (resource1, resource2, score) ) )) (dom,constraintsElement) = getCurrentConstraints(cib_dom) # If one role is specified, the other should default to "started" if role1 != "" and role2 == "": role2 = DEFAULT_ROLE if role2 != "" and role1 == "": role1 = DEFAULT_ROLE element = dom.createElement("rsc_colocation") element.setAttribute("rsc",resource1) element.setAttribute("with-rsc",resource2) element.setAttribute("score",score) if role1 != "": element.setAttribute("rsc-role", role1) if role2 != "": element.setAttribute("with-rsc-role", role2) for nv_pair in nv_pairs: element.setAttribute(nv_pair[0], nv_pair[1]) if "--force" not in utils.pcs_options: duplicates = colocation_find_duplicates(constraintsElement, element) if duplicates: utils.err( "duplicate constraint already exists, use --force to override\n" + "\n".join([ " " + constraint_colocation.console_report.constraint_plain( {"options": dict(dup.attributes.items())}, True ) for dup in duplicates ]) ) constraintsElement.appendChild(element) utils.replace_cib_configuration(dom) def colocation_find_duplicates(dom, constraint_el): def normalize(const_el): return ( const_el.getAttribute("rsc"), const_el.getAttribute("with-rsc"), const_el.getAttribute("rsc-role").capitalize() or DEFAULT_ROLE, const_el.getAttribute("with-rsc-role").capitalize() or DEFAULT_ROLE, ) normalized_el = normalize(constraint_el) return [ other_el for other_el in dom.getElementsByTagName("rsc_colocation") if not other_el.getElementsByTagName("resource_set") and constraint_el is not other_el and normalized_el == normalize(other_el) ] def order_rm(argv): if len(argv) == 0: usage.constraint() sys.exit(1) elementFound = False (dom,constraintsElement) = getCurrentConstraints() for resource in argv: for ord_loc in constraintsElement.getElementsByTagName('rsc_order')[:]: if ord_loc.getAttribute("first") == resource or ord_loc.getAttribute("then") == resource: constraintsElement.removeChild(ord_loc) elementFound = True resource_refs_to_remove = [] for ord_set in constraintsElement.getElementsByTagName('resource_ref'): if ord_set.getAttribute("id") == resource: resource_refs_to_remove.append(ord_set) 
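# --- Editor's illustrative note on colocation_find_duplicates() above: two
# simple colocation constraints count as duplicates when they match on
# (rsc, with-rsc, rsc-role, with-rsc-role), with a missing role defaulting
# to "Started"; the score is deliberately ignored. So these two collide
# unless --force is given:
#     <rsc_colocation rsc="A" with-rsc="B" score="100"/>
#     <rsc_colocation rsc="A" with-rsc="B" rsc-role="Started" score="-500"/>
# while swapping the resources does not:
#     <rsc_colocation rsc="B" with-rsc="A" score="100"/>
# --- end of editor's note ---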
elementFound = True for res_ref in resource_refs_to_remove: res_set = res_ref.parentNode res_order = res_set.parentNode res_ref.parentNode.removeChild(res_ref) if len(res_set.getElementsByTagName('resource_ref')) <= 0: res_set.parentNode.removeChild(res_set) if len(res_order.getElementsByTagName('resource_set')) <= 0: res_order.parentNode.removeChild(res_order) if elementFound == True: utils.replace_cib_configuration(dom) else: utils.err("No matching resources found in ordering list") def order_start(argv): if len(argv) < 3: usage.constraint() sys.exit(1) first_action = DEFAULT_ACTION then_action = DEFAULT_ACTION action = argv[0] if action in OPTIONS_ACTION: first_action = action argv.pop(0) resource1 = argv.pop(0) if argv.pop(0) != "then": usage.constraint() sys.exit(1) if len(argv) == 0: usage.constraint() sys.exit(1) action = argv[0] if action in OPTIONS_ACTION: then_action = action argv.pop(0) if len(argv) == 0: usage.constraint() sys.exit(1) resource2 = argv.pop(0) order_options = [] if len(argv) != 0: order_options = order_options + argv[:] order_options.append("first-action="+first_action) order_options.append("then-action="+then_action) order_add([resource1, resource2] + order_options) def order_add(argv,returnElementOnly=False): if len(argv) < 2: usage.constraint() sys.exit(1) resource1 = argv.pop(0) resource2 = argv.pop(0) cib_dom = utils.get_cib_dom() resource_valid, resource_error, correct_id \ = utils.validate_constraint_resource(cib_dom, resource1) if "--autocorrect" in utils.pcs_options and correct_id: resource1 = correct_id elif not resource_valid: utils.err(resource_error) resource_valid, resource_error, correct_id \ = utils.validate_constraint_resource(cib_dom, resource2) if "--autocorrect" in utils.pcs_options and correct_id: resource2 = correct_id elif not resource_valid: utils.err(resource_error) order_options = [] id_specified = False sym = None for arg in argv: if arg == "symmetrical": sym = "true" elif arg == "nonsymmetrical": sym = "false" elif "=" in arg: name, value = arg.split("=", 1) if name == "id": id_valid, id_error = utils.validate_xml_id(value, 'constraint id') if not id_valid: utils.err(id_error) if utils.does_id_exist(cib_dom, value): utils.err( "id '%s' is already in use, please specify another one" % value ) id_specified = True order_options.append((name, value)) elif name == "symmetrical": if value.lower() in OPTIONS_SYMMETRICAL: sym = value.lower() else: utils.err( "invalid symmetrical value '%s', allowed values are: %s" % (value, ", ".join(OPTIONS_SYMMETRICAL)) ) else: order_options.append((name, value)) if sym: order_options.append(("symmetrical", sym)) options = "" if order_options: options = " (Options: %s)" % " ".join([ "%s=%s" % (name, value) for name, value in order_options if name not in ("kind", "score") ]) scorekind = "kind: Mandatory" id_suffix = "mandatory" for opt in order_options: if opt[0] == "score": scorekind = "score: " + opt[1] id_suffix = opt[1] break if opt[0] == "kind": scorekind = "kind: " + opt[1] id_suffix = opt[1] break if not id_specified: order_id = "order-" + resource1 + "-" + resource2 + "-" + id_suffix order_id = utils.find_unique_id(cib_dom, order_id) order_options.append(("id", order_id)) (dom,constraintsElement) = getCurrentConstraints() element = dom.createElement("rsc_order") element.setAttribute("first",resource1) element.setAttribute("then",resource2) for order_opt in order_options: element.setAttribute(order_opt[0], order_opt[1]) constraintsElement.appendChild(element) if "--force" not in utils.pcs_options: 
duplicates = order_find_duplicates(constraintsElement, element) if duplicates: utils.err( "duplicate constraint already exists, use --force to override\n" + "\n".join([ " " + constraint_order.console_report.constraint_plain( {"options": dict(dup.attributes.items())}, True ) for dup in duplicates ]) ) print( "Adding " + resource1 + " " + resource2 + " ("+scorekind+")" + options ) if returnElementOnly == False: utils.replace_cib_configuration(dom) else: return element.toxml() def order_find_duplicates(dom, constraint_el): def normalize(constraint_el): return ( constraint_el.getAttribute("first"), constraint_el.getAttribute("then"), constraint_el.getAttribute("first-action").lower() or DEFAULT_ACTION, constraint_el.getAttribute("then-action").lower() or DEFAULT_ACTION, ) normalized_el = normalize(constraint_el) return [ other_el for other_el in dom.getElementsByTagName("rsc_order") if not other_el.getElementsByTagName("resource_set") and constraint_el is not other_el and normalized_el == normalize(other_el) ] # Show the currently configured location constraints by node or resource def location_show(argv): if (len(argv) != 0 and argv[0] == "nodes"): byNode = True showDetail = False elif "--full" in utils.pcs_options: byNode = False showDetail = True else: byNode = False showDetail = False if len(argv) > 1: if byNode: valid_noderes = argv[1:] else: valid_noderes = [ parse_args.parse_typed_arg( arg, [RESOURCE_TYPE_RESOURCE, RESOURCE_TYPE_REGEXP], RESOURCE_TYPE_RESOURCE ) for arg in argv[1:] ] else: valid_noderes = [] (dummy_dom,constraintsElement) = getCurrentConstraints() nodehashon = {} nodehashoff = {} rschashon = {} rschashoff = {} ruleshash = defaultdict(list) all_loc_constraints = constraintsElement.getElementsByTagName('rsc_location') print("Location Constraints:") for rsc_loc in all_loc_constraints: if rsc_loc.hasAttribute("rsc-pattern"): lc_rsc_type = RESOURCE_TYPE_REGEXP lc_rsc_value = rsc_loc.getAttribute("rsc-pattern") lc_name = "Resource pattern: {0}".format(lc_rsc_value) else: lc_rsc_type = RESOURCE_TYPE_RESOURCE lc_rsc_value = rsc_loc.getAttribute("rsc") lc_name = "Resource: {0}".format(lc_rsc_value) lc_rsc = lc_rsc_type, lc_rsc_value, lc_name lc_id = rsc_loc.getAttribute("id") lc_node = rsc_loc.getAttribute("node") lc_score = rsc_loc.getAttribute("score") lc_role = rsc_loc.getAttribute("role") lc_resource_discovery = rsc_loc.getAttribute("resource-discovery") for child in rsc_loc.childNodes: if child.nodeType == child.ELEMENT_NODE and child.tagName == "rule": ruleshash[lc_rsc].append(child) # NEED TO FIX FOR GROUP LOCATION CONSTRAINTS (where there are children of # rsc_location) if lc_score == "": lc_score = "0" if lc_score == "INFINITY": positive = True elif lc_score == "-INFINITY": positive = False elif int(lc_score) >= 0: positive = True else: positive = False if positive == True: nodeshash = nodehashon rschash = rschashon else: nodeshash = nodehashoff rschash = rschashoff hash_element = { "id": lc_id, "rsc_type": lc_rsc_type, "rsc_value": lc_rsc_value, "rsc_label": lc_name, "node": lc_node, "score": lc_score, "role": lc_role, "resource-discovery": lc_resource_discovery, } if lc_node in nodeshash: nodeshash[lc_node].append(hash_element) else: nodeshash[lc_node] = [hash_element] if lc_rsc in rschash: rschash[lc_rsc].append(hash_element) else: rschash[lc_rsc] = [hash_element] nodelist = sorted(set(list(nodehashon.keys()) + list(nodehashoff.keys()))) rsclist = sorted( set(list(rschashon.keys()) + list(rschashoff.keys())), key=lambda item: ( { RESOURCE_TYPE_RESOURCE: 1, 
RESOURCE_TYPE_REGEXP: 0, }[item[0]], item[1] ) ) if byNode == True: for node in nodelist: if len(valid_noderes) != 0: if node not in valid_noderes: continue print(" Node: " + node) nodehash_label = ( (nodehashon, " Allowed to run:"), (nodehashoff, " Not allowed to run:") ) for nodehash, label in nodehash_label: if node in nodehash: print(label) for options in nodehash[node]: line_parts = [( " " + options["rsc_label"] + " (" + options["id"] + ")" )] if options["role"]: line_parts.append( "(role: {0})".format(options["role"]) ) if options["resource-discovery"]: line_parts.append( "(resource-discovery={0})".format( options["resource-discovery"] ) ) line_parts.append("Score: " + options["score"]) print(" ".join(line_parts)) show_location_rules(ruleshash, showDetail) else: for rsc in rsclist: if len(valid_noderes) != 0: if rsc[0:2] not in valid_noderes: continue print(" {0}".format(rsc[2])) rschash_label = ( (rschashon, " Enabled on:"), (rschashoff, " Disabled on:"), ) for rschash, label in rschash_label: if rsc in rschash: for options in rschash[rsc]: if not options["node"]: continue line_parts = [ label, options["node"], "(score:{0})".format(options["score"]), ] if options["role"]: line_parts.append( "(role: {0})".format(options["role"]) ) if options["resource-discovery"]: line_parts.append( "(resource-discovery={0})".format( options["resource-discovery"] ) ) if showDetail: line_parts.append("(id:{0})".format(options["id"])) print(" ".join(line_parts)) miniruleshash={} miniruleshash[rsc] = ruleshash[rsc] show_location_rules(miniruleshash, showDetail, True) def show_location_rules(ruleshash, showDetail, noheader=False): constraint_options = {} for rsc in sorted( ruleshash.keys(), key=lambda item: ( { RESOURCE_TYPE_RESOURCE: 1, RESOURCE_TYPE_REGEXP: 0, }[item[0]], item[1] ) ): constrainthash = defaultdict(list) if not noheader: print(" {0}".format(rsc[2])) for rule in ruleshash[rsc]: constraint_id = rule.parentNode.getAttribute("id") constrainthash[constraint_id].append(rule) constraint_options[constraint_id] = [] if rule.parentNode.getAttribute("resource-discovery"): constraint_options[constraint_id].append("resource-discovery=%s" % rule.parentNode.getAttribute("resource-discovery")) for constraint_id in sorted(constrainthash.keys()): if constraint_id in constraint_options and len(constraint_options[constraint_id]) > 0: constraint_option_info = " (" + " ".join(constraint_options[constraint_id]) + ")" else: constraint_option_info = "" print(" Constraint: " + constraint_id + constraint_option_info) for rule in constrainthash[constraint_id]: print(rule_utils.ExportDetailed().get_string( rule, showDetail, " " )) def location_prefer(argv): rsc = argv.pop(0) prefer_option = argv.pop(0) dummy_rsc_type, rsc_value = parse_args.parse_typed_arg( rsc, [RESOURCE_TYPE_RESOURCE, RESOURCE_TYPE_REGEXP], RESOURCE_TYPE_RESOURCE ) if prefer_option == "prefers": prefer = True elif prefer_option == "avoids": prefer = False else: usage.constraint() sys.exit(1) for nodeconf in argv: nodeconf_a = nodeconf.split("=",1) if len(nodeconf_a) == 1: node = nodeconf_a[0] if prefer: score = "INFINITY" else: score = "-INFINITY" else: score = nodeconf_a[1] if not utils.is_score(score): utils.err("invalid score '%s', use integer or INFINITY or -INFINITY" % score) if not prefer: if score[0] == "-": score = score[1:] else: score = "-" + score node = nodeconf_a[0] location_add([ sanitize_id("location-{0}-{1}-{2}".format(rsc_value, node, score)), rsc, node, score ]) def location_add(argv,rm=False): if rm: location_remove(argv) return 
if len(argv) < 4: usage.constraint(["location add"]) sys.exit(1) constraint_id = argv.pop(0) rsc_type, rsc_value = parse_args.parse_typed_arg( argv.pop(0), [RESOURCE_TYPE_RESOURCE, RESOURCE_TYPE_REGEXP], RESOURCE_TYPE_RESOURCE ) node = argv.pop(0) score = argv.pop(0) options = [] # For now we only allow setting resource-discovery if len(argv) > 0: for arg in argv: if '=' in arg: options.append(arg.split('=',1)) else: print("Error: bad option '%s'" % arg) usage.constraint(["location add"]) sys.exit(1) if options[-1][0] != "resource-discovery" and "--force" not in utils.pcs_options: utils.err("bad option '%s', use --force to override" % options[-1][0]) id_valid, id_error = utils.validate_xml_id(constraint_id, 'constraint id') if not id_valid: utils.err(id_error) if not utils.is_score(score): utils.err("invalid score '%s', use integer or INFINITY or -INFINITY" % score) required_version = None if [x for x in options if x[0] == "resource-discovery"]: required_version = 2, 2, 0 if rsc_type == RESOURCE_TYPE_REGEXP: required_version = 2, 6, 0 if required_version: dom = utils.cluster_upgrade_to_version(required_version) else: dom = utils.get_cib_dom() if rsc_type == RESOURCE_TYPE_RESOURCE: rsc_valid, rsc_error, correct_id = utils.validate_constraint_resource( dom, rsc_value ) if "--autocorrect" in utils.pcs_options and correct_id: rsc_value = correct_id elif not rsc_valid: utils.err(rsc_error) # Verify current constraint doesn't already exist # If it does we replace it with the new constraint dummy_dom, constraintsElement = getCurrentConstraints(dom) elementsToRemove = [] # If the id matches, or the rsc & node match, then we replace/remove for rsc_loc in constraintsElement.getElementsByTagName('rsc_location'): if ( rsc_loc.getAttribute("id") == constraint_id or ( rsc_loc.getAttribute("node") == node and ( ( RESOURCE_TYPE_RESOURCE == rsc_type and rsc_loc.getAttribute("rsc") == rsc_value ) or ( RESOURCE_TYPE_REGEXP == rsc_type and rsc_loc.getAttribute("rsc-pattern") == rsc_value ) ) ) ): elementsToRemove.append(rsc_loc) for etr in elementsToRemove: constraintsElement.removeChild(etr) element = dom.createElement("rsc_location") element.setAttribute("id",constraint_id) if rsc_type == RESOURCE_TYPE_RESOURCE: element.setAttribute("rsc", rsc_value) elif rsc_type == RESOURCE_TYPE_REGEXP: element.setAttribute("rsc-pattern", rsc_value) element.setAttribute("node",node) element.setAttribute("score",score) for option in options: element.setAttribute(option[0], option[1]) constraintsElement.appendChild(element) utils.replace_cib_configuration(dom) def location_remove(argv): # This code was originally merged in the location_add function and was # documented to take 1 or 4 arguments: # location remove <constraint id> [<resource id> <node> <score>] # However it has always ignored all arguments but constraint id. Therefore # this command / function has no use as it can be fully replaced by "pcs # constraint remove" which also removes constraints by id. For now I keep # things as they are but we should solve this when moving these functions # to pcs.lib.
if len(argv) != 1: usage.constraint(["location remove"]) sys.exit(1) constraint_id = argv.pop(0) dom, constraintsElement = getCurrentConstraints() elementsToRemove = [] for rsc_loc in constraintsElement.getElementsByTagName('rsc_location'): if constraint_id == rsc_loc.getAttribute("id"): elementsToRemove.append(rsc_loc) if (len(elementsToRemove) == 0): utils.err("resource location id: " + constraint_id + " not found.") for etr in elementsToRemove: constraintsElement.removeChild(etr) utils.replace_cib_configuration(dom) def location_rule(argv): if len(argv) < 3: usage.constraint(["location", "rule"]) sys.exit(1) rsc_type, rsc_value = parse_args.parse_typed_arg( argv.pop(0), [RESOURCE_TYPE_RESOURCE, RESOURCE_TYPE_REGEXP], RESOURCE_TYPE_RESOURCE ) argv.pop(0) # pop "rule" options, rule_argv = rule_utils.parse_argv( argv, { "constraint-id": None, "resource-discovery": None, } ) resource_discovery = ( "resource-discovery" in options and options["resource-discovery"] ) required_version = None if resource_discovery: required_version = 2, 2, 0 if rsc_type == RESOURCE_TYPE_REGEXP: required_version = 2, 6, 0 if required_version: dom = utils.cluster_upgrade_to_version(required_version) else: dom = utils.get_cib_dom() if rsc_type == RESOURCE_TYPE_RESOURCE: rsc_valid, rsc_error, correct_id = utils.validate_constraint_resource( dom, rsc_value ) if "--autocorrect" in utils.pcs_options and correct_id: rsc_value = correct_id elif not rsc_valid: utils.err(rsc_error) cib, constraints = getCurrentConstraints(dom) lc = cib.createElement("rsc_location") # If resource-discovery is specified, we use it with the rsc_location # element not the rule if resource_discovery: lc.setAttribute("resource-discovery", options.pop("resource-discovery")) constraints.appendChild(lc) if options.get("constraint-id"): id_valid, id_error = utils.validate_xml_id( options["constraint-id"], 'constraint id' ) if not id_valid: utils.err(id_error) if utils.does_id_exist(dom, options["constraint-id"]): utils.err( "id '%s' is already in use, please specify another one" % options["constraint-id"] ) lc.setAttribute("id", options["constraint-id"]) del options["constraint-id"] else: lc.setAttribute( "id", utils.find_unique_id(dom, sanitize_id("location-" + rsc_value)) ) if rsc_type == RESOURCE_TYPE_RESOURCE: lc.setAttribute("rsc", rsc_value) elif rsc_type == RESOURCE_TYPE_REGEXP: lc.setAttribute("rsc-pattern", rsc_value) rule_utils.dom_rule_add(lc, options, rule_argv) location_rule_check_duplicates(constraints, lc) utils.replace_cib_configuration(cib) def location_rule_check_duplicates(dom, constraint_el): if "--force" not in utils.pcs_options: duplicates = location_rule_find_duplicates(dom, constraint_el) if duplicates: lines = [] for dup in duplicates: lines.append(" Constraint: %s" % dup.getAttribute("id")) for dup_rule in utils.dom_get_children_by_tag_name(dup, "rule"): lines.append(rule_utils.ExportDetailed().get_string( dup_rule, True, " " )) utils.err( "duplicate constraint already exists, use --force to override\n" + "\n".join(lines) ) def location_rule_find_duplicates(dom, constraint_el): def normalize(constraint_el): if constraint_el.hasAttribute("rsc-pattern"): rsc = ( RESOURCE_TYPE_REGEXP, constraint_el.getAttribute("rsc-pattern") ) else: rsc = ( RESOURCE_TYPE_RESOURCE, constraint_el.getAttribute("rsc") ) return ( rsc, [ rule_utils.ExportAsExpression().get_string(rule_el, True) for rule_el in constraint_el.getElementsByTagName("rule") ] ) normalized_el = normalize(constraint_el) return [ other_el for other_el in 
dom.getElementsByTagName("rsc_location") if other_el.getElementsByTagName("rule") and constraint_el is not other_el and normalized_el == normalize(other_el) ] # Grabs the current constraints and returns the dom and constraint element def getCurrentConstraints(passed_dom=None): if passed_dom: dom = passed_dom else: current_constraints_xml = utils.get_cib_xpath('//constraints') if current_constraints_xml == "": utils.err("unable to process cib") # Verify current constraint doesn't already exist # If it does we replace it with the new constraint dom = parseString(current_constraints_xml) constraintsElement = dom.getElementsByTagName('constraints')[0] return (dom, constraintsElement) # If returnStatus is set, then we don't error out, we just print the error # and return false def constraint_rm(argv,returnStatus=False, constraintsElement=None, passed_dom=None): if len(argv) < 1: usage.constraint() sys.exit(1) bad_constraint = False if len(argv) != 1: for arg in argv: if not constraint_rm([arg],True, passed_dom=passed_dom): bad_constraint = True if bad_constraint: sys.exit(1) return else: c_id = argv.pop(0) elementFound = False if not constraintsElement: (dom, constraintsElement) = getCurrentConstraints(passed_dom) use_cibadmin = True else: use_cibadmin = False for co in constraintsElement.childNodes[:]: if co.nodeType != xml.dom.Node.ELEMENT_NODE: continue if co.getAttribute("id") == c_id: constraintsElement.removeChild(co) elementFound = True if not elementFound: for rule in constraintsElement.getElementsByTagName("rule")[:]: if rule.getAttribute("id") == c_id: elementFound = True parent = rule.parentNode parent.removeChild(rule) if len(parent.getElementsByTagName("rule")) == 0: parent.parentNode.removeChild(parent) if elementFound == True: if passed_dom: return dom if use_cibadmin: utils.replace_cib_configuration(dom) if returnStatus: return True else: utils.err("Unable to find constraint - '%s'" % c_id, False) if returnStatus: return False sys.exit(1) def constraint_ref(argv): if len(argv) == 0: usage.constraint() sys.exit(1) for arg in argv: print("Resource: %s" % arg) constraints,set_constraints = find_constraints_containing(arg) if len(constraints) == 0 and len(set_constraints) == 0: print(" No Matches.") else: for constraint in constraints: print(" " + constraint) for constraint in sorted(set_constraints): print(" " + constraint) def remove_constraints_containing(resource_id,output=False,constraints_element = None, passed_dom=None): constraints,set_constraints = find_constraints_containing(resource_id, passed_dom) for c in constraints: if output == True: print("Removing Constraint - " + c) if constraints_element != None: constraint_rm([c], True, constraints_element, passed_dom=passed_dom) else: constraint_rm([c], passed_dom=passed_dom) if len(set_constraints) != 0: (dom, constraintsElement) = getCurrentConstraints(passed_dom) for c in constraintsElement.getElementsByTagName("resource_ref")[:]: # If resource id is in a set, remove it from the set, if the set # is empty, then we remove the set, if the parent of the set # is empty then we remove it if c.getAttribute("id") == resource_id: pn = c.parentNode pn.removeChild(c) if output == True: print("Removing %s from set %s" % (resource_id,pn.getAttribute("id"))) if pn.getElementsByTagName("resource_ref").length == 0: print("Removing set %s" % pn.getAttribute("id")) pn2 = pn.parentNode pn2.removeChild(pn) if pn2.getElementsByTagName("resource_set").length == 0: pn2.parentNode.removeChild(pn2) print("Removing constraint %s" % 
pn2.getAttribute("id")) if passed_dom: return dom utils.replace_cib_configuration(dom) def find_constraints_containing(resource_id, passed_dom=None): if passed_dom: dom = passed_dom else: dom = utils.get_cib_dom() constraints_found = [] set_constraints = [] resources = dom.getElementsByTagName("primitive") resource_match = None for res in resources: if res.getAttribute("id") == resource_id: resource_match = res break if resource_match: if resource_match.parentNode.tagName == "master" or resource_match.parentNode.tagName == "clone": constraints_found,set_constraints = find_constraints_containing(resource_match.parentNode.getAttribute("id"), dom) constraints = dom.getElementsByTagName("constraints") if len(constraints) == 0: return [],[] else: constraints = constraints[0] myConstraints = constraints.getElementsByTagName("rsc_colocation") myConstraints += constraints.getElementsByTagName("rsc_location") myConstraints += constraints.getElementsByTagName("rsc_order") myConstraints += constraints.getElementsByTagName("rsc_ticket") attr_to_match = ["rsc", "first", "then", "with-rsc", "first", "then"] for c in myConstraints: for attr in attr_to_match: if c.getAttribute(attr) == resource_id: constraints_found.append(c.getAttribute("id")) break setConstraints = constraints.getElementsByTagName("resource_ref") for c in setConstraints: if c.getAttribute("id") == resource_id: set_constraints.append(c.parentNode.parentNode.getAttribute("id")) # Remove duplicates set_constraints = list(set(set_constraints)) return constraints_found,set_constraints def remove_constraints_containing_node(dom, node, output=False): for constraint in find_constraints_containing_node(dom, node): if output: print("Removing Constraint - %s" % constraint.getAttribute("id")) constraint.parentNode.removeChild(constraint) return dom def find_constraints_containing_node(dom, node): return [ constraint for constraint in dom.getElementsByTagName("rsc_location") if constraint.getAttribute("node") == node ] # Re-assign any constraints referencing a resource to its parent (a clone # or master) def constraint_resource_update(old_id, passed_dom=None): dom = utils.get_cib_dom() if passed_dom is None else passed_dom new_id = None clone_ms_parent = utils.dom_get_resource_clone_ms_parent(dom, old_id) if clone_ms_parent: new_id = clone_ms_parent.getAttribute("id") if new_id: constraints = dom.getElementsByTagName("rsc_location") constraints += dom.getElementsByTagName("rsc_order") constraints += dom.getElementsByTagName("rsc_colocation") attrs_to_update=["rsc","first","then", "with-rsc"] for constraint in constraints: for attr in attrs_to_update: if constraint.getAttribute(attr) == old_id: constraint.setAttribute(attr, new_id) if passed_dom is None: utils.replace_cib_configuration(dom) if passed_dom: return dom def constraint_rule(argv): if len(argv) < 2: usage.constraint("rule") sys.exit(1) found = False command = argv.pop(0) constraint_id = None if command == "add": constraint_id = argv.pop(0) cib = utils.get_cib_dom() constraint = utils.dom_get_element_with_id( cib.getElementsByTagName("constraints")[0], "rsc_location", constraint_id ) if not constraint: utils.err("Unable to find constraint: " + constraint_id) options, rule_argv = rule_utils.parse_argv(argv) rule_utils.dom_rule_add(constraint, options, rule_argv) location_rule_check_duplicates(cib, constraint) utils.replace_cib_configuration(cib) elif command in ["remove","delete"]: cib = utils.get_cib_etree() temp_id = argv.pop(0) constraints = cib.find('.//constraints') loc_cons = 
cib.findall(str('.//rsc_location')) for loc_con in loc_cons: for rule in loc_con: if rule.get("id") == temp_id: if len(loc_con) > 1: print("Removing Rule: {0}".format(rule.get("id"))) loc_con.remove(rule) found = True break else: print( "Removing Constraint: {0}".format(loc_con.get("id")) ) constraints.remove(loc_con) found = True break if found == True: break if found: utils.replace_cib_configuration(cib) else: utils.err("unable to find rule with id: %s" % temp_id) else: usage.constraint("rule") sys.exit(1) pcs-0.9.164/pcs/lib/000077500000000000000000000000001326265502500140215ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/__init__.py000066400000000000000000000000001326265502500161200ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/booth/000077500000000000000000000000001326265502500151345ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/booth/__init__.py000066400000000000000000000000001326265502500172330ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/booth/config_exchange.py000066400000000000000000000011451326265502500206160ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.lib.booth.config_structure import ConfigItem def to_exchange_format(booth_configuration): return [ { "key": item.key, "value": item.value, "details": to_exchange_format(item.details), } for item in booth_configuration ] def from_exchange_format(exchange_format): return [ ConfigItem( item["key"], item["value"], from_exchange_format(item["details"]), ) for item in exchange_format ] pcs-0.9.164/pcs/lib/booth/config_files.py000066400000000000000000000055761326265502500201520ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import os from pcs.common import report_codes, env_file_role_codes as file_roles from pcs.common.tools import format_environment_error from pcs.lib import reports as lib_reports from pcs.lib.booth import reports from pcs.lib.errors import ReportItemSeverity from pcs.settings import booth_config_dir as BOOTH_CONFIG_DIR def get_all_configs_file_names(): """ Returns list of all file names ending with '.conf' in booth configuration directory. """ if not os.path.isdir(BOOTH_CONFIG_DIR): return [] return [ file_name for file_name in os.listdir(BOOTH_CONFIG_DIR) if file_name.endswith(".conf") and len(file_name) > len(".conf") and os.path.isfile(os.path.join(BOOTH_CONFIG_DIR, file_name)) ] def _read_config(file_name): """ Read specified booth config from default booth config directory. file_name -- string, name of file """ with open(os.path.join(BOOTH_CONFIG_DIR, file_name), "r") as file: return file.read() def read_configs(reporter, skip_wrong_config=False): """ Returns content of all configs present on local system in dictionary, where key is name of config and value is its content. reporter -- report processor skip_wrong_config -- if True skip local configs that are unreadable """ report_list = [] output = {} for file_name in get_all_configs_file_names(): try: output[file_name] = _read_config(file_name) except EnvironmentError: report_list.append(reports.booth_config_read_error( file_name, ( ReportItemSeverity.WARNING if skip_wrong_config else ReportItemSeverity.ERROR ), ( None if skip_wrong_config else report_codes.SKIP_UNREADABLE_CONFIG ) )) reporter.process_list(report_list) return output def read_authfile(reporter, path): """ Returns content of specified authfile as bytes. None if file is not in default booth directory or there was some IO error. 
reporter -- report processor path -- path to the authfile to be read """ if not path: return None if os.path.dirname(os.path.abspath(path)) != BOOTH_CONFIG_DIR: reporter.process(reports.booth_unsupported_file_location(path)) return None try: with open(path, "rb") as file: return file.read() except EnvironmentError as e: reporter.process(lib_reports.file_io_error( file_roles.BOOTH_KEY, path, reason=format_environment_error(e), operation="read", severity=ReportItemSeverity.WARNING )) return None pcs-0.9.164/pcs/lib/booth/config_parser.py000066400000000000000000000054161326265502500203350ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import re from pcs.lib.booth import config_structure, reports from pcs.lib.errors import LibraryError class InvalidLines(Exception): pass def parse(content): try: return organize_lines(parse_to_raw_lines(content)) except InvalidLines as e: raise LibraryError( reports.booth_config_unexpected_lines(e.args[0]) ) def build(config_line_list): newline = [""] return "\n".join(build_to_lines(config_line_list) + newline) def build_to_lines(config_line_list, deep=0): line_list = [] for key, value, details in config_line_list: line_value = value if key != "ticket" else '"{0}"'.format(value) line_list.append("{0}{1} = {2}".format(" "*deep, key, line_value)) if details: line_list.extend(build_to_lines(details, deep+1)) return line_list def organize_lines(raw_line_list): #Decision: a global key found below a ticket is moved up to the global #section. The alternative, moving it below all ticket details, would be #confusing. global_section = [] ticket_section = [] current_ticket = None for key, value in raw_line_list: if key == "ticket": current_ticket = config_structure.ConfigItem(key, value) ticket_section.append(current_ticket) elif key in config_structure.GLOBAL_KEYS or not current_ticket: global_section.append(config_structure.ConfigItem(key, value)) else: current_ticket.details.append( config_structure.ConfigItem(key, value) ) return global_section + ticket_section def search_with_multiple_re(re_object_list, string): """ return MatchObject of the first matching regular expression object or None list re_object_list contains regular expression objects (products of re.compile) """ for expression in re_object_list: match = expression.search(string) if match: return match return None def parse_to_raw_lines(config_content): keyword_part = r"^(?P<key>[a-zA-Z0-9_-]+)\s*=\s*" expression_list = [re.compile(pattern.format(keyword_part)) for pattern in [ r"""{0}(?P<value>[^'"]+)$""", r"""{0}'(?P<value>[^']*)'\s*(#.*)?$""", r"""{0}"(?P<value>[^"]*)"\s*(#.*)?$""", ]] line_list = [] invalid_line_list = [] for line in config_content.splitlines(): line = line.strip() match = search_with_multiple_re(expression_list, line) if match: line_list.append((match.group("key"), match.group("value"))) elif line and not line.startswith("#"): invalid_line_list.append(line) if invalid_line_list: raise InvalidLines(invalid_line_list) return line_list pcs-0.9.164/pcs/lib/booth/config_structure.py000066400000000000000000000114361326265502500211000ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import re import pcs.lib.reports as common_reports from pcs.lib.booth import reports from pcs.lib.errors import LibraryError, ReportItemSeverity as severities from pcs.common import report_codes from collections import namedtuple GLOBAL_KEYS = ( "transport", "port", "name", "authfile", "maxtimeskew", "site", "arbitrator", "site-user", "site-group", "arbitrator-user",
"arbitrator-group", "debug", "ticket", ) TICKET_KEYS = ( "acquire-after", "attr-prereq", "before-acquire-handler", "expire", "renewal-freq", "retries", "timeout", "weights", ) class ConfigItem(namedtuple("ConfigItem", "key value details")): def __new__(cls, key, value, details=None): details = details if details else [] return super(ConfigItem, cls).__new__(cls, key, value, details) def validate_peers(site_list, arbitrator_list): report = [] if len(site_list) < 2: report.append(reports.booth_lack_of_sites(site_list)) peer_list = site_list + arbitrator_list if len(peer_list) % 2 == 0: report.append(reports.booth_even_peers_num(len(peer_list))) address_set = set() duplicate_addresses = set() for address in peer_list: if address in address_set: duplicate_addresses.add(address) else: address_set.add(address) if duplicate_addresses: report.append(reports.booth_address_duplication(duplicate_addresses)) if report: raise LibraryError(*report) def take_peers(booth_configuration): return ( pick_list_by_key(booth_configuration, "site"), pick_list_by_key(booth_configuration, "arbitrator"), ) def pick_list_by_key(booth_configuration, key): return [item.value for item in booth_configuration if item.key == key] def remove_ticket(booth_configuration, ticket_name): validate_ticket_exists(booth_configuration, ticket_name) return [ config_item for config_item in booth_configuration if config_item.key != "ticket" or config_item.value != ticket_name ] def add_ticket( report_processor, booth_configuration, ticket_name, options, allow_unknown_options ): validate_ticket_name(ticket_name) validate_ticket_unique(booth_configuration, ticket_name) validate_ticket_options(report_processor, options, allow_unknown_options) return booth_configuration + [ ConfigItem("ticket", ticket_name, [ ConfigItem(key, value) for key, value in options.items() ]) ] def validate_ticket_exists(booth_configuration, ticket_name): if not ticket_exists(booth_configuration, ticket_name): raise LibraryError(reports.booth_ticket_does_not_exist(ticket_name)) def validate_ticket_unique(booth_configuration, ticket_name): if ticket_exists(booth_configuration, ticket_name): raise LibraryError(reports.booth_ticket_duplicate(ticket_name)) def validate_ticket_options(report_processor, options, allow_unknown_options): reports = [] for key in sorted(options): if key in GLOBAL_KEYS: reports.append(common_reports.invalid_options( [key], TICKET_KEYS, "booth ticket", )) elif key not in TICKET_KEYS: reports.append( common_reports.invalid_options( [key], TICKET_KEYS, "booth ticket", severity=( severities.WARNING if allow_unknown_options else severities.ERROR ), forceable=( None if allow_unknown_options else report_codes.FORCE_OPTIONS ), ) ) if not options[key].strip(): reports.append(common_reports.invalid_option_value( key, options[key], "no-empty", )) report_processor.process_list(reports) def ticket_exists(booth_configuration, ticket_name): return any( value for key, value, _ in booth_configuration if key == "ticket" and value == ticket_name ) def validate_ticket_name(ticket_name): if not re.compile(r"^[\w-]+$").search(ticket_name): raise LibraryError(reports.booth_ticket_name_invalid(ticket_name)) def set_authfile(booth_configuration, auth_file): return [ConfigItem("authfile", auth_file)] + [ config_item for config_item in booth_configuration if config_item.key != "authfile" ] def get_authfile(booth_configuration): for key, value, _ in reversed(booth_configuration): if key == "authfile": return value return None 
pcs-0.9.164/pcs/lib/booth/env.py000066400000000000000000000114721326265502500163030ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import os import pwd import grp from pcs import settings from pcs.common import env_file_role_codes from pcs.common.tools import format_environment_error from pcs.lib import reports as common_reports from pcs.lib.booth import reports from pcs.lib.env_file import GhostFile, RealFile from pcs.lib.errors import LibraryError from pcs.settings import booth_config_dir as BOOTH_CONFIG_DIR def get_booth_env_file_name(name, extension): report_list = [] if "/" in name: report_list.append( reports.booth_invalid_name(name, "contains illegal character '/'") ) if report_list: raise LibraryError(*report_list) return "{0}.{1}".format(os.path.join(BOOTH_CONFIG_DIR, name), extension) def get_config_file_name(name): return get_booth_env_file_name(name, "conf") def get_key_path(name): return get_booth_env_file_name(name, "key") def report_keyfile_io_error(file_path, operation, e): return LibraryError(common_reports.file_io_error( file_role=env_file_role_codes.BOOTH_KEY, file_path=file_path, operation=operation, reason=format_environment_error(e) )) def set_keyfile_access(file_path): #shutil.chown is not in python2 try: uid = pwd.getpwnam(settings.pacemaker_uname).pw_uid except KeyError: raise LibraryError(common_reports.unable_to_determine_user_uid( settings.pacemaker_uname )) try: gid = grp.getgrnam(settings.pacemaker_gname).gr_gid except KeyError: raise LibraryError(common_reports.unable_to_determine_group_gid( settings.pacemaker_gname )) try: os.chown(file_path, uid, gid) except EnvironmentError as e: raise report_keyfile_io_error(file_path, "chown", e) try: # According to booth documentation, user and group of booth authfile # should be set to hacluster/haclient (created and used by pacemaker) # but mode of file doesn't need to be same as pacemaker authfile. 
os.chmod(file_path, settings.booth_authkey_file_mode) except EnvironmentError as e: raise report_keyfile_io_error(file_path, "chmod", e) class BoothEnv(object): def __init__(self, report_processor, env_data): self.__report_processor = report_processor self.__name = env_data["name"] if "config_file" in env_data: self.__config = GhostFile( file_role=env_file_role_codes.BOOTH_CONFIG, content=env_data["config_file"]["content"] ) self.__key_path = env_data["key_path"] self.__key = GhostFile( file_role=env_file_role_codes.BOOTH_KEY, content=env_data["key_file"]["content"], is_binary=True ) else: self.__config = RealFile( file_role=env_file_role_codes.BOOTH_CONFIG, file_path=get_config_file_name(env_data["name"]), ) self.__set_key_path(get_key_path(env_data["name"])) def __set_key_path(self, path): self.__key_path = path self.__key = RealFile( file_role=env_file_role_codes.BOOTH_KEY, file_path=path, is_binary=True ) def command_expect_live_env(self): if not self.__config.is_live: raise LibraryError(common_reports.live_environment_required([ "BOOTH_CONF", "BOOTH_KEY", ])) def set_key_path(self, path): if not self.__config.is_live: raise AssertionError( "Setting the keyfile path is supported only in a live environment" ) self.__set_key_path(path) @property def name(self): return self.__name @property def key_path(self): return self.__key_path def get_config_content(self): return self.__config.read() def create_config(self, content, can_overwrite_existing=False): self.__config.assert_no_conflict_with_existing( self.__report_processor, can_overwrite_existing ) self.__config.write(content) def create_key(self, key_content, can_overwrite_existing=False): self.__key.assert_no_conflict_with_existing( self.__report_processor, can_overwrite_existing ) self.__key.write(key_content, set_keyfile_access) def push_config(self, content): self.__config.write(content) def remove_key(self): self.__key.remove(silence_no_existence=True) def remove_config(self): self.__config.remove() def export(self): return {} if self.__config.is_live else { "config_file": self.__config.export(), "key_file": self.__key.export(), } pcs-0.9.164/pcs/lib/booth/reports.py000066400000000000000000000232451326265502500172100ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.common import report_codes from pcs.lib.errors import ReportItem, ReportItemSeverity def booth_lack_of_sites(site_list): """ Less than 2 booth sites were entered, which does not make sense. list site_list contains the currently entered sites """ return ReportItem.error( report_codes.BOOTH_LACK_OF_SITES, info={ "sites": site_list, } ) def booth_even_peers_num(number): """ Booth requires an odd number of peers, but an even number of peers was entered. integer number is how many peers were entered """ return ReportItem.error( report_codes.BOOTH_EVEN_PEERS_NUM, info={ "number": number, } ) def booth_address_duplication(duplicate_addresses): """ The address of each peer must be unique, but address duplication appeared. set duplicate_addresses contains addresses entered multiple times """ return ReportItem.error( report_codes.BOOTH_ADDRESS_DUPLICATION, info={ "addresses": duplicate_addresses, } ) def booth_config_unexpected_lines(line_list): """ A booth config has a defined structure, but a line outside of that structure appeared.
list line_list contains the lines that are outside of the defined structure """ return ReportItem.error( report_codes.BOOTH_CONFIG_UNEXPECTED_LINES, info={ "line_list": line_list, } ) def booth_invalid_name(name, reason): """ A booth instance name must follow certain rules, for example it cannot contain illegal characters like '/'. One of these rules was violated. string name is the entered booth instance name """ return ReportItem.error( report_codes.BOOTH_INVALID_NAME, info={ "name": name, "reason": reason, } ) def booth_ticket_name_invalid(ticket_name): """ A booth ticket name may consist only of alphanumeric characters and dashes. The entered ticket name violates this rule. string ticket_name is the entered booth ticket name """ return ReportItem.error( report_codes.BOOTH_TICKET_NAME_INVALID, info={ "ticket_name": ticket_name, } ) def booth_ticket_duplicate(ticket_name): """ Each booth ticket name must be unique, but a duplicate booth ticket name was entered. string ticket_name is the entered booth ticket name """ return ReportItem.error( report_codes.BOOTH_TICKET_DUPLICATE, info={ "ticket_name": ticket_name, } ) def booth_ticket_does_not_exist(ticket_name): """ Some operations (like ticket remove) expect the ticket name to be present in the booth configuration, but the ticket name was not found there. string ticket_name is the entered booth ticket name """ return ReportItem.error( report_codes.BOOTH_TICKET_DOES_NOT_EXIST, info={ "ticket_name": ticket_name, } ) def booth_already_in_cib(name): """ Each booth instance should be in the cib at most once. An existing booth instance was detected in the cib while creating a new one. string name is the booth instance name """ return ReportItem.error( report_codes.BOOTH_ALREADY_IN_CIB, info={ "name": name, } ) def booth_not_exists_in_cib(name): """ Removal of a booth instance from the cib was requested, but no such instance was found in the cib. string name is the booth instance name """ return ReportItem.error( report_codes.BOOTH_NOT_EXISTS_IN_CIB, info={ "name": name, } ) def booth_config_is_used(name, detail=""): """ The booth config was detected as being in use during a destroy request. string name is the booth instance name string detail provides more details (for example the booth instance is used as a cluster resource or is started/enabled under systemd) """ return ReportItem.error( report_codes.BOOTH_CONFIG_IS_USED, info={ "name": name, "detail": detail, "detail_string": " {0}".format(detail) if detail else "", } ) def booth_multiple_times_in_cib( name, severity=ReportItemSeverity.ERROR ): """ Each booth instance should be in the cib at most once, but multiple occurrences were detected, for example while removing a booth instance from the cib. The user needs to be notified about this; when the operation is forced, the severity drops to a warning. string name is the booth instance name ReportItemSeverity severity should be ERROR or WARNING (depending on context); it is specified here because severity is coupled with ReportItem. """ return ReportItem( report_codes.BOOTH_MULTIPLE_TIMES_IN_CIB, severity, info={ "name": name, }, forceable=report_codes.FORCE_BOOTH_REMOVE_FROM_CIB if severity == ReportItemSeverity.ERROR else None ) def booth_config_distribution_started(): """ The booth configuration is about to be sent to nodes """ return ReportItem.info( report_codes.BOOTH_CONFIG_DISTRIBUTION_STARTED, ) def booth_config_accepted_by_node(node=None, name_list=None): """ The booth config has been saved on the specified node.
node -- name of the node name_list -- list of booth instance names """ return ReportItem.info( report_codes.BOOTH_CONFIG_ACCEPTED_BY_NODE, info={ "node": node, "name_list": name_list } ) def booth_config_distribution_node_error(node, reason, name=None): """ Saving the booth config failed on the specified node. node -- node name reason -- reason of the failure name -- name of the booth instance """ return ReportItem.error( report_codes.BOOTH_CONFIG_DISTRIBUTION_NODE_ERROR, info={ "node": node, "name": name, "reason": reason } ) def booth_config_read_error( name, severity=ReportItemSeverity.ERROR, forceable=None ): """ Unable to read the config of the specified booth instance. name -- name of the booth instance severity -- severity of the report item forceable -- is this report item forceable? by what category? """ return ReportItem( report_codes.BOOTH_CONFIG_READ_ERROR, severity, info={"name": name}, forceable=forceable ) def booth_fetching_config_from_node_started(node, config=None): """ Fetching of the booth config from the specified node has started node -- node from which the config is being fetched config -- config name """ return ReportItem.info( report_codes.BOOTH_FETCHING_CONFIG_FROM_NODE, info={ "node": node, "config": config, } ) def booth_unsupported_file_location(file): """ The location of a booth configuration file (config, authfile) is not supported (not in /etc/booth/) file -- file path """ return ReportItem.warning( report_codes.BOOTH_UNSUPORTED_FILE_LOCATION, info={"file": file} ) def booth_daemon_status_error(reason): """ Unable to get the status of the booth daemon because of an error. reason -- reason """ return ReportItem.error( report_codes.BOOTH_DAEMON_STATUS_ERROR, info={"reason": reason} ) def booth_tickets_status_error(reason=None): """ Unable to get the status of booth tickets because of an error. reason -- reason """ return ReportItem.error( report_codes.BOOTH_TICKET_STATUS_ERROR, info={ "reason": reason, } ) def booth_peers_status_error(reason=None): """ Unable to get the status of booth peers because of an error. reason -- reason """ return ReportItem.error( report_codes.BOOTH_PEERS_STATUS_ERROR, info={ "reason": reason, } ) def booth_cannot_determine_local_site_ip(): """ Some booth operations are performed on a specific site and require a site ip to be specified. When the site specification is omitted, pcs tries to determine the local ip, but determining the local site ip failed. """ return ReportItem.error( report_codes.BOOTH_CANNOT_DETERMINE_LOCAL_SITE_IP, info={} ) def booth_ticket_operation_failed(operation, reason, site_ip, ticket_name): """ Pcs uses external booth tools for some ticket operations, for example grant and revoke, but the external command failed. string operation determines what was intended to be performed with the ticket string reason is taken from the external booth command string site_ip specifies which site had to run the command string ticket_name specifies which ticket the command had to run with """ return ReportItem.error( report_codes.BOOTH_TICKET_OPERATION_FAILED, info={ "operation": operation, "reason": reason, "site_ip": site_ip, "ticket_name": ticket_name, } ) def booth_skipping_config(config_file, reason): """ Warning about skipping a booth config file.
config_file -- file name of config which is skipped reason -- reason """ return ReportItem.warning( report_codes.BOOTH_SKIPPING_CONFIG, info={ "config_file": config_file, "reason": reason, } ) def booth_cannot_identify_keyfile(severity=ReportItemSeverity.ERROR): return ReportItem( report_codes.BOOTH_CANNOT_IDENTIFY_KEYFILE, severity, info={}, forceable=report_codes.FORCE_BOOTH_DESTROY if severity == ReportItemSeverity.ERROR else None ) pcs-0.9.164/pcs/lib/booth/resource.py000066400000000000000000000041211326265502500173330ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.lib.cib.tools import find_unique_id def create_resource_id(resources_section, name, suffix): return find_unique_id( resources_section.getroottree(), "booth-{0}-{1}".format(name, suffix) ) def is_ip_resource(resource_element): return resource_element.attrib.get("type", "") == "IPaddr2" def find_grouped_ip_element_to_remove(booth_element): group = booth_element.getparent() if group.tag != "group": return None primitives = group.xpath("./primitive") if len(primitives) != 2: # Don't remove the IP resource if some other resources are in the group. # It is most likely manually configured by the user so we cannot delete # it automatically. return None for element in primitives: if is_ip_resource(element): return element return None def get_remover(resource_remove): def remove_from_cluster(booth_element_list): for element in booth_element_list: ip_resource_to_remove = find_grouped_ip_element_to_remove(element) if ip_resource_to_remove is not None: resource_remove(ip_resource_to_remove.attrib["id"]) resource_remove(element.attrib["id"]) return remove_from_cluster def find_for_config(resources_section, booth_config_file_path): return resources_section.xpath((""" .//primitive[ @type="booth-site" and instance_attributes[nvpair[@name="config" and @value="{0}"]] ] """).format(booth_config_file_path)) def find_bound_ip(resources_section, booth_config_file_path): return resources_section.xpath((""" .//group[ primitive[ @type="booth-site" and instance_attributes[ nvpair[@name="config" and @value="{0}"] ] ] ] /primitive[@type="IPaddr2"] /instance_attributes /nvpair[@name="ip"] /@value """).format(booth_config_file_path)) pcs-0.9.164/pcs/lib/booth/status.py000066400000000000000000000025021326265502500170300ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs import settings from pcs.common.tools import join_multilines from pcs.lib.booth import reports from pcs.lib.errors import LibraryError def get_daemon_status(runner, name=None): cmd = [settings.booth_binary, "status"] if name: cmd += ["-c", name] stdout, stderr, return_value = runner.run(cmd) # 7 means that there is no booth instance running if return_value not in [0, 7]: raise LibraryError( reports.booth_daemon_status_error(join_multilines([stderr, stdout])) ) return stdout def get_tickets_status(runner, name=None): cmd = [settings.booth_binary, "list"] if name: cmd += ["-c", name] stdout, stderr, return_value = runner.run(cmd) if return_value != 0: raise LibraryError( reports.booth_tickets_status_error( join_multilines([stderr, stdout]) ) ) return stdout def get_peers_status(runner, name=None): cmd = [settings.booth_binary, "peers"] if name: cmd += ["-c", name] stdout, stderr, return_value = runner.run(cmd) if return_value != 0: raise LibraryError( reports.booth_peers_status_error(join_multilines([stderr, stdout])) ) return stdout 
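# ---------------------------------------------------------------------------
# Editor's illustrative sketch (not part of the original pcs sources): the
# status helpers above share one contract -- they take a runner whose
# run(cmd) returns (stdout, stderr, return_value) and raise LibraryError on
# failure. Only get_daemon_status() also accepts booth's exit code 7, which
# means no booth instance is running. The stub runner below is hypothetical,
# just to show that contract; real callers pass the command runner from the
# library environment.
class StubRunner(object):
    def run(self, cmd):
        # pretend booth printed nothing and exited with 7 (not running)
        return ("", "", 7)

# get_daemon_status(StubRunner(), name="my_booth") would execute
# [settings.booth_binary, "status", "-c", "my_booth"] and return "", while
# get_tickets_status() / get_peers_status() with the same stub would raise
# LibraryError, since only "status" treats exit code 7 as success.
# ---------------------------------------------------------------------------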
pcs-0.9.164/pcs/lib/booth/sync.py000066400000000000000000000061271326265502500164700ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import os import base64 from pcs.common import report_codes from pcs.lib import reports as lib_reports from pcs.lib.communication.booth import BoothSaveFiles from pcs.lib.communication.tools import run_and_raise from pcs.lib.errors import LibraryError, ReportItemSeverity as Severities from pcs.lib.booth import ( config_files as booth_conf, config_structure, config_parser, reports, ) def send_all_config_to_node( communicator, reporter, target, rewrite_existing=False, skip_wrong_config=False ): """ Send all booth configs from the default booth config directory and their authfiles to the specified node. communicator -- NodeCommunicator reporter -- report processor target -- target node to send the configs to rewrite_existing -- if True rewrite existing files skip_wrong_config -- if True skip local configs that are unreadable """ config_dict = booth_conf.read_configs(reporter, skip_wrong_config) if not config_dict: return reporter.process(reports.booth_config_distribution_started()) file_list = [] for config, config_data in sorted(config_dict.items()): try: authfile_path = config_structure.get_authfile( config_parser.parse(config_data) ) file_list.append({ "name": config, "data": config_data, "is_authfile": False }) if authfile_path: content = booth_conf.read_authfile(reporter, authfile_path) if not content: continue file_list.append({ "name": os.path.basename(authfile_path), "data": base64.b64encode(content).decode("utf-8"), "is_authfile": True }) except LibraryError: reporter.process(reports.booth_skipping_config( config, "unable to parse config" )) com_cmd = BoothSaveFiles( reporter, file_list, rewrite_existing=rewrite_existing ) com_cmd.set_targets([target]) response = run_and_raise(communicator, com_cmd)[0][1] try: report_list = [] for file in response["existing"]: report_list.append(lib_reports.file_already_exists( None, file, Severities.WARNING if rewrite_existing else Severities.ERROR, ( None if rewrite_existing else report_codes.FORCE_FILE_OVERWRITE ), target.label )) for file, reason in response["failed"].items(): report_list.append(reports.booth_config_distribution_node_error( target.label, reason, file )) reporter.process_list(report_list) reporter.process( reports.booth_config_accepted_by_node(target.label, response["saved"]) ) except (KeyError, ValueError): raise LibraryError(lib_reports.invalid_response_format(target.label)) pcs-0.9.164/pcs/lib/booth/test/000077500000000000000000000000001326265502500161135ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/booth/test/__init__.py000066400000000000000000000000001326265502500202120ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/booth/test/test_config_exchange.py000066400000000000000000000046421326265502500226410ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.lib.booth import config_structure, config_exchange class FromExchangeFormatTest(TestCase): def test_convert_all_supported_items(self): self.assertEqual( [ config_structure.ConfigItem("authfile", "/path/to/auth.file"), config_structure.ConfigItem("site", "1.1.1.1"), config_structure.ConfigItem("site", "2.2.2.2"), config_structure.ConfigItem("arbitrator", "3.3.3.3"), config_structure.ConfigItem("ticket", "TA"), config_structure.ConfigItem("ticket", "TB", [ config_structure.ConfigItem("expire", "10") ]), ],
config_exchange.from_exchange_format([ {"key": "authfile","value": "/path/to/auth.file","details": []}, {"key": "site", "value": "1.1.1.1", "details": []}, {"key": "site", "value": "2.2.2.2", "details": []}, {"key": "arbitrator", "value": "3.3.3.3", "details": []}, {"key": "ticket", "value": "TA", "details": []}, {"key": "ticket", "value": "TB", "details": [ {"key": "expire", "value": "10", "details": []} ]}, ]) ) class GetExchenageFormatTest(TestCase): def test_convert_parsed_config_to_exchange_format(self): self.assertEqual( [ {"key": "site", "value": "1.1.1.1", "details": []}, {"key": "site", "value": "2.2.2.2", "details": []}, {"key": "arbitrator", "value": "3.3.3.3", "details": []}, {"key": "ticket", "value": "TA", "details": []}, {"key": "ticket", "value": "TB", "details": [ {"key": "timeout", "value": "10", "details": []} ]}, ], config_exchange.to_exchange_format([ config_structure.ConfigItem("site", "1.1.1.1"), config_structure.ConfigItem("site", "2.2.2.2"), config_structure.ConfigItem("arbitrator", "3.3.3.3"), config_structure.ConfigItem("ticket", "TA"), config_structure.ConfigItem("ticket", "TB", [ config_structure.ConfigItem("timeout", "10") ]), ]) ) pcs-0.9.164/pcs/lib/booth/test/test_config_files.py000066400000000000000000000231541326265502500221600ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import os.path from pcs.test.tools.pcs_unittest import TestCase from pcs.common import report_codes, env_file_role_codes as file_roles from pcs.lib.booth import config_files from pcs.lib.errors import ReportItemSeverity as severities from pcs.settings import booth_config_dir as BOOTH_CONFIG_DIR from pcs.test.tools.assertions import assert_raise_library_error, assert_report_item_list_equal from pcs.test.tools.custom_mock import MockLibraryReportProcessor from pcs.test.tools.misc import create_patcher from pcs.test.tools.pcs_unittest import mock patch_config_files = create_patcher("pcs.lib.booth.config_files") @mock.patch("os.path.isdir") @mock.patch("os.listdir") @mock.patch("os.path.isfile") class GetAllConfigsFileNamesTest(TestCase): def test_booth_config_dir_is_no_dir( self, mock_is_file, mock_listdir, mock_isdir ): mock_isdir.return_value = False self.assertEqual([], config_files.get_all_configs_file_names()) mock_isdir.assert_called_once_with(BOOTH_CONFIG_DIR) self.assertEqual(0, mock_is_file.call_count) self.assertEqual(0, mock_listdir.call_count) def test_success(self, mock_is_file, mock_listdir, mock_isdir): def mock_is_file_fn(file_name): if file_name in [ os.path.join(BOOTH_CONFIG_DIR, name) for name in ("dir.cong", "dir") ]: return False elif file_name in [ os.path.join(BOOTH_CONFIG_DIR, name) for name in ( "name1", "name2.conf", "name.conf.conf", ".conf", "name3.conf" ) ]: return True else: raise AssertionError("unexpected input") mock_isdir.return_value = True mock_is_file.side_effect = mock_is_file_fn mock_listdir.return_value = [ "name1", "name2.conf", "name.conf.conf", ".conf", "name3.conf", "dir.cong", "dir" ] self.assertEqual( ["name2.conf", "name.conf.conf", "name3.conf"], config_files.get_all_configs_file_names() ) mock_listdir.assert_called_once_with(BOOTH_CONFIG_DIR) class ReadConfigTest(TestCase): def test_success(self): self.maxDiff = None mock_open = mock.mock_open(read_data="config content") with patch_config_files("open", mock_open, create=True): self.assertEqual( "config content", config_files._read_config("my-file.conf") ) self.assertEqual( [ mock.call(os.path.join(BOOTH_CONFIG_DIR, "my-file.conf"), "r"), 
mock.call().__enter__(), mock.call().read(), mock.call().__exit__(None, None, None) ], mock_open.mock_calls ) @patch_config_files("_read_config") @patch_config_files("get_all_configs_file_names") class ReadConfigsTest(TestCase): def setUp(self): self.mock_reporter = MockLibraryReportProcessor() def test_success(self, mock_get_configs, mock_read): def _mock_read_cfg(file): if file == "name1.conf": return "config1" elif file == "name2.conf": return "config2" elif file == "name3.conf": return "config3" else: raise AssertionError("unexpected input: {0}".format(file)) mock_get_configs.return_value = [ "name1.conf", "name2.conf", "name3.conf" ] mock_read.side_effect = _mock_read_cfg self.assertEqual( { "name1.conf": "config1", "name2.conf": "config2", "name3.conf": "config3" }, config_files.read_configs(self.mock_reporter) ) mock_get_configs.assert_called_once_with() self.assertEqual(3, mock_read.call_count) mock_read.assert_has_calls([ mock.call("name1.conf"), mock.call("name2.conf"), mock.call("name3.conf") ]) self.assertEqual(0, len(self.mock_reporter.report_item_list)) def test_skip_failed(self, mock_get_configs, mock_read): def _mock_read_cfg(file): if file in ["name1.conf", "name3.conf"]: raise EnvironmentError() elif file == "name2.conf": return "config2" else: raise AssertionError("unexpected input: {0}".format(file)) mock_get_configs.return_value = [ "name1.conf", "name2.conf", "name3.conf" ] mock_read.side_effect = _mock_read_cfg self.assertEqual( {"name2.conf": "config2"}, config_files.read_configs(self.mock_reporter, True) ) mock_get_configs.assert_called_once_with() self.assertEqual(3, mock_read.call_count) mock_read.assert_has_calls([ mock.call("name1.conf"), mock.call("name2.conf"), mock.call("name3.conf") ]) assert_report_item_list_equal( self.mock_reporter.report_item_list, [ ( severities.WARNING, report_codes.BOOTH_CONFIG_READ_ERROR, {"name": "name1.conf"} ), ( severities.WARNING, report_codes.BOOTH_CONFIG_READ_ERROR, {"name": "name3.conf"} ) ] ) def test_do_not_skip_failed(self, mock_get_configs, mock_read): def _mock_read_cfg(file): if file in ["name1.conf", "name3.conf"]: raise EnvironmentError() elif file == "name2.conf": return "config2" else: raise AssertionError("unexpected input: {0}".format(file)) mock_get_configs.return_value = [ "name1.conf", "name2.conf", "name3.conf" ] mock_read.side_effect = _mock_read_cfg assert_raise_library_error( lambda: config_files.read_configs(self.mock_reporter), ( severities.ERROR, report_codes.BOOTH_CONFIG_READ_ERROR, {"name": "name1.conf"}, report_codes.SKIP_UNREADABLE_CONFIG ), ( severities.ERROR, report_codes.BOOTH_CONFIG_READ_ERROR, {"name": "name3.conf"}, report_codes.SKIP_UNREADABLE_CONFIG ) ) mock_get_configs.assert_called_once_with() self.assertEqual(3, mock_read.call_count) mock_read.assert_has_calls([ mock.call("name1.conf"), mock.call("name2.conf"), mock.call("name3.conf") ]) self.assertEqual(2, len(self.mock_reporter.report_item_list)) class ReadAuthfileTest(TestCase): def setUp(self): self.mock_reporter = MockLibraryReportProcessor() self.maxDiff = None def test_success(self): path = os.path.join(BOOTH_CONFIG_DIR, "file.key") mock_open = mock.mock_open(read_data="key") with patch_config_files("open", mock_open, create=True): self.assertEqual( "key", config_files.read_authfile(self.mock_reporter, path) ) self.assertEqual( [ mock.call(path, "rb"), mock.call().__enter__(), mock.call().read(), mock.call().__exit__(None, None, None) ], mock_open.mock_calls ) self.assertEqual(0, len(self.mock_reporter.report_item_list)) def 
test_path_none(self): self.assertTrue( config_files.read_authfile(self.mock_reporter, None) is None ) self.assertEqual(0, len(self.mock_reporter.report_item_list)) def test_invalid_path(self): path = "/not/etc/booth/booth.key" self.assertTrue( config_files.read_authfile(self.mock_reporter, path) is None ) assert_report_item_list_equal( self.mock_reporter.report_item_list, [( severities.WARNING, report_codes.BOOTH_UNSUPORTED_FILE_LOCATION, {"file": path} )] ) def test_not_abs_path(self): path = "/etc/booth/../booth.key" self.assertTrue( config_files.read_authfile(self.mock_reporter, path) is None ) assert_report_item_list_equal( self.mock_reporter.report_item_list, [( severities.WARNING, report_codes.BOOTH_UNSUPORTED_FILE_LOCATION, {"file": path} )] ) @patch_config_files("format_environment_error", return_value="reason") def test_read_failure(self, _): path = os.path.join(BOOTH_CONFIG_DIR, "file.key") mock_open = mock.mock_open() mock_open().read.side_effect = EnvironmentError() with patch_config_files("open", mock_open, create=True): return_value = config_files.read_authfile(self.mock_reporter, path) self.assertTrue(return_value is None) assert_report_item_list_equal( self.mock_reporter.report_item_list, [( severities.WARNING, report_codes.FILE_IO_ERROR, { "file_role": file_roles.BOOTH_KEY, "file_path": path, "reason": "reason", "operation": "read", } )] ) pcs-0.9.164/pcs/lib/booth/test/test_config_parser.py000066400000000000000000000132401326265502500223450ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.common import report_codes from pcs.lib.booth import config_parser from pcs.lib.booth.config_structure import ConfigItem from pcs.lib.errors import ReportItemSeverity as severities from pcs.test.tools.assertions import assert_raise_library_error from pcs.test.tools.pcs_unittest import TestCase class BuildTest(TestCase): def test_build_file_content_from_parsed_structure(self): self.assertEqual( "\n".join([ "authfile = /path/to/auth.file", "site = 1.1.1.1", "site = 2.2.2.2", "arbitrator = 3.3.3.3", 'ticket = "TA"', 'ticket = "TB"', " timeout = 10", "", #newline at the end ]), config_parser.build([ ConfigItem("authfile", "/path/to/auth.file"), ConfigItem("site", "1.1.1.1"), ConfigItem("site", "2.2.2.2"), ConfigItem("arbitrator", "3.3.3.3"), ConfigItem("ticket", "TA"), ConfigItem("ticket", "TB", [ ConfigItem("timeout", "10") ]), ]) ) class OrganizeLinesTest(TestCase): def test_move_non_ticket_config_keys_above_tickets(self): self.assertEqual( [ ConfigItem("site", "1.1.1.1"), ConfigItem('site', '2.2.2.2'), ConfigItem('arbitrator', '3.3.3.3'), ConfigItem("ticket", "TA"), ], config_parser.organize_lines([ ("site", "1.1.1.1"), ("ticket", "TA"), ('site', '2.2.2.2'), ('arbitrator', '3.3.3.3'), ]) ) def test_use_ticket_key_as_ticket_detail(self): self.maxDiff = None self.assertEqual( [ ConfigItem("site", "1.1.1.1"), ConfigItem('expire', '300'), ConfigItem('site', '2.2.2.2'), ConfigItem('arbitrator', '3.3.3.3'), ConfigItem("ticket", "TA", [ ConfigItem("timeout", "10"), ConfigItem('--nonexistent', 'value'), ConfigItem("expire", "300"), ]), ConfigItem("ticket", "TB", [ ConfigItem("timeout", "20"), ConfigItem("renewal-freq", "40"), ]), ], config_parser.organize_lines([ ("site", "1.1.1.1"), ("expire", "300"), # out of ticket content is kept global ("ticket", "TA"), ("site", "2.2.2.2"), # move to global ("timeout", "10"), ("--nonexistent", "value"), # no global is kept under ticket ("expire", "300"), ("ticket", "TB"), ('arbitrator', '3.3.3.3'), 
("timeout", "20"), ("renewal-freq", "40"), ]) ) class ParseRawLinesTest(TestCase): def test_parse_simple_correct_lines(self): self.assertEqual( [ ("site", "1.1.1.1"), ('site', '2.2.2.2'), ('arbitrator', '3.3.3.3'), ('syntactically_correct', 'nonsense'), ('line-with', 'hash#literal'), ], config_parser.parse_to_raw_lines("\n".join([ "site = 1.1.1.1", " site = 2.2.2.2 ", "arbitrator=3.3.3.3", "syntactically_correct = nonsense", "line-with = hash#literal", "", ])) ) def test_parse_lines_with_whole_line_comment(self): self.assertEqual( [("site", "1.1.1.1")], config_parser.parse_to_raw_lines("\n".join([ " # some comment", "site = 1.1.1.1", ])) ) def test_skip_empty_lines(self): self.assertEqual( [("site", "1.1.1.1")], config_parser.parse_to_raw_lines("\n".join([ " ", "site = 1.1.1.1", ])) ) def test_raises_when_unexpected_lines_appear(self): invalid_line_list = [ "first invalid line", "second = 'invalid line' something else #comment", "third = 'invalid line 'something#'#", ] line_list = ["site = 1.1.1.1"] + invalid_line_list with self.assertRaises(config_parser.InvalidLines) as context_manager: config_parser.parse_to_raw_lines("\n".join(line_list)) self.assertEqual(context_manager.exception.args[0], invalid_line_list) def test_parse_lines_finishing_with_comment(self): self.assertEqual( [("site", "1.1.1.1")], config_parser.parse_to_raw_lines("\n".join([ "site = '1.1.1.1' #comment", ])) ) class ParseTest(TestCase): def test_raises_when_invalid_lines_appear(self): invalid_line_list = [ "first invalid line", "second = 'invalid line' something else #comment" ] line_list = ["site = 1.1.1.1"] + invalid_line_list assert_raise_library_error( lambda: config_parser.parse("\n".join(line_list)) , ( severities.ERROR, report_codes.BOOTH_CONFIG_UNEXPECTED_LINES, { "line_list": invalid_line_list, }, ), ) def test_do_not_raises_when_no_invalid_liens_there(self): config_parser.parse("site = 1.1.1.1") pcs-0.9.164/pcs/lib/booth/test/test_config_structure.py000066400000000000000000000302431326265502500231130ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.common import report_codes from pcs.lib.booth import config_structure from pcs.lib.errors import ReportItemSeverity as severities from pcs.test.tools.assertions import ( assert_raise_library_error, assert_report_item_list_equal, ) from pcs.test.tools.custom_mock import MockLibraryReportProcessor from pcs.test.tools.pcs_unittest import mock class ValidateTicketExistsTest(TestCase): def test_raises_on_duplicate_ticket(self): assert_raise_library_error( lambda: config_structure.validate_ticket_exists( [config_structure.ConfigItem("ticket", "B")], "A" ), ( severities.ERROR, report_codes.BOOTH_TICKET_DOES_NOT_EXIST, { "ticket_name": "A", }, ), ) class ValidateTicketUniqueTest(TestCase): def test_raises_on_duplicate_ticket(self): assert_raise_library_error( lambda: config_structure.validate_ticket_unique( [config_structure.ConfigItem("ticket", "A")], "A" ), ( severities.ERROR, report_codes.BOOTH_TICKET_DUPLICATE, { "ticket_name": "A", }, ), ) def test_do_not_raises_when_no_duplicated_ticket(self): config_structure.validate_ticket_unique([], "A") class ValidateTicketOptionsTest(TestCase): def test_raises_on_invalid_options(self): report_processor = MockLibraryReportProcessor() expected_errors = [ ( severities.ERROR, report_codes.INVALID_OPTIONS, { "option_names": ["site"], "option_type": "booth ticket", "allowed": list(config_structure.TICKET_KEYS), 
"allowed_patterns": [], }, ), ( severities.ERROR, report_codes.INVALID_OPTIONS, { "option_names": ["port"], "option_type": "booth ticket", "allowed": list(config_structure.TICKET_KEYS), "allowed_patterns": [], }, ), ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "timeout", "option_value": " ", "allowed_values": "no-empty", }, ), ( severities.ERROR, report_codes.INVALID_OPTIONS, { "option_names": ["unknown"], "option_type": "booth ticket", "allowed": list(config_structure.TICKET_KEYS), "allowed_patterns": [], }, report_codes.FORCE_OPTIONS ), ] assert_raise_library_error( lambda: config_structure.validate_ticket_options( report_processor, { "site": "a", "port": "b", "timeout": " ", "unknown": "c", }, allow_unknown_options=False, ), *expected_errors ) assert_report_item_list_equal( report_processor.report_item_list, expected_errors ) def test_unknown_options_are_forceable(self): report_processor = MockLibraryReportProcessor() expected_errors = [ ( severities.ERROR, report_codes.INVALID_OPTIONS, { "option_names": ["site"], "option_type": "booth ticket", "allowed": list(config_structure.TICKET_KEYS), "allowed_patterns": [], }, ), ] assert_raise_library_error( lambda: config_structure.validate_ticket_options( report_processor, { "site": "a", "unknown": "c", }, allow_unknown_options=True, ), *expected_errors ) assert_report_item_list_equal( report_processor.report_item_list, expected_errors + [ ( severities.WARNING, report_codes.INVALID_OPTIONS, { "option_names": ["unknown"], "option_type": "booth ticket", "allowed": list(config_structure.TICKET_KEYS), "allowed_patterns": [], }, ), ] ) def test_success_on_valid_options(self): report_processor = MockLibraryReportProcessor() config_structure.validate_ticket_options( report_processor, {"timeout": "10"}, allow_unknown_options=False, ) assert_report_item_list_equal(report_processor.report_item_list, []) class TicketExistsTest(TestCase): def test_returns_true_if_ticket_in_structure(self): self.assertTrue(config_structure.ticket_exists( [config_structure.ConfigItem("ticket", "A")], "A" )) def test_returns_false_if_ticket_in_structure(self): self.assertFalse(config_structure.ticket_exists( [config_structure.ConfigItem("ticket", "A")], "B" )) class ValidateTicketNameTest(TestCase): def test_accept_valid_ticket_name(self): config_structure.validate_ticket_name("abc") def test_refuse_bad_ticket_name(self): assert_raise_library_error( lambda: config_structure.validate_ticket_name("@ticket"), ( severities.ERROR, report_codes.BOOTH_TICKET_NAME_INVALID, { "ticket_name": "@ticket", }, ), ) class ValidatePeersTest(TestCase): def test_do_no_raises_on_correct_args(self): config_structure.validate_peers( site_list=["1.1.1.1", "2.2.2.2"], arbitrator_list=["3.3.3.3"] ) def test_refuse_less_than_2_sites(self): assert_raise_library_error( lambda: config_structure.validate_peers( site_list=["1.1.1.1"], arbitrator_list=["3.3.3.3", "4.4.4.4"] ), ( severities.ERROR, report_codes.BOOTH_LACK_OF_SITES, { "sites": ["1.1.1.1"], } ), ) def test_refuse_even_number_peers(self): assert_raise_library_error( lambda: config_structure.validate_peers( site_list=["1.1.1.1", "2.2.2.2"], arbitrator_list=[] ), ( severities.ERROR, report_codes.BOOTH_EVEN_PEERS_NUM, { "number": 2, } ), ) def test_refuse_address_duplication(self): assert_raise_library_error( lambda: config_structure.validate_peers( site_list=["1.1.1.1", "1.1.1.1", "1.1.1.1"], arbitrator_list=["3.3.3.3", "4.4.4.4"] ), ( severities.ERROR, report_codes.BOOTH_ADDRESS_DUPLICATION, { "addresses": 
set(["1.1.1.1"]), } ), ) def test_refuse_problem_combination(self): assert_raise_library_error( lambda: config_structure.validate_peers( site_list=["1.1.1.1"], arbitrator_list=["1.1.1.1"] ), ( severities.ERROR, report_codes.BOOTH_LACK_OF_SITES, { "sites": ["1.1.1.1"], } ), ( severities.ERROR, report_codes.BOOTH_EVEN_PEERS_NUM, { "number": 2, } ), ( severities.ERROR, report_codes.BOOTH_ADDRESS_DUPLICATION, { "addresses": set(["1.1.1.1"]), } ), ) class RemoveTicketTest(TestCase): @mock.patch("pcs.lib.booth.config_structure.validate_ticket_exists") def test_successfully_remove_ticket(self, mock_validate_ticket_exists): configuration = [ config_structure.ConfigItem("ticket", "some-ticket"), config_structure.ConfigItem("ticket", "deprecated-ticket"), ] self.assertEqual( config_structure.remove_ticket(configuration, "deprecated-ticket"), [ config_structure.ConfigItem("ticket", "some-ticket"), ] ) mock_validate_ticket_exists.assert_called_once_with( configuration, "deprecated-ticket" ) class AddTicketTest(TestCase): @mock.patch("pcs.lib.booth.config_structure.validate_ticket_options") @mock.patch("pcs.lib.booth.config_structure.validate_ticket_unique") @mock.patch("pcs.lib.booth.config_structure.validate_ticket_name") def test_successfully_add_ticket( self, mock_validate_name, mock_validate_uniq, mock_validate_options ): configuration = [ config_structure.ConfigItem("ticket", "some-ticket"), ] self.assertEqual( config_structure.add_ticket( None, configuration, "new-ticket", { "timeout": "10", }, allow_unknown_options=False, ), [ config_structure.ConfigItem("ticket", "some-ticket"), config_structure.ConfigItem("ticket", "new-ticket", [ config_structure.ConfigItem("timeout", "10"), ]), ], ) mock_validate_name.assert_called_once_with("new-ticket") mock_validate_uniq.assert_called_once_with(configuration, "new-ticket") mock_validate_options.assert_called_once_with( None, {"timeout": "10"}, False ) class SetAuthfileTest(TestCase): def test_add_authfile(self): self.assertEqual( [ config_structure.ConfigItem("authfile", "/path/to/auth.file"), config_structure.ConfigItem("site", "1.1.1.1"), ], config_structure.set_authfile( [ config_structure.ConfigItem("site", "1.1.1.1"), ], "/path/to/auth.file" ) ) def test_reset_authfile(self): self.assertEqual( [ config_structure.ConfigItem("authfile", "/path/to/auth.file"), config_structure.ConfigItem("site", "1.1.1.1"), ], config_structure.set_authfile( [ config_structure.ConfigItem("site", "1.1.1.1"), config_structure.ConfigItem("authfile", "/old/path/to/auth1.file"), config_structure.ConfigItem("authfile", "/old/path/to/auth2.file"), ], "/path/to/auth.file" ) ) class TakePeersTest(TestCase): def test_returns_site_list_and_arbitrators_list(self): self.assertEqual( ( ["1.1.1.1", "2.2.2.2", "3.3.3.3"], ["4.4.4.4", "5.5.5.5"] ), config_structure.take_peers( [ config_structure.ConfigItem("site", "1.1.1.1"), config_structure.ConfigItem("site", "2.2.2.2"), config_structure.ConfigItem("site", "3.3.3.3"), config_structure.ConfigItem("arbitrator", "4.4.4.4"), config_structure.ConfigItem("arbitrator", "5.5.5.5"), ], ) ) pcs-0.9.164/pcs/lib/booth/test/test_env.py000066400000000000000000000155731326265502500203270ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.common import report_codes from pcs.lib.booth import env from pcs.lib.errors import ReportItemSeverity as severities from pcs.test.tools.assertions import assert_raise_library_error from pcs.test.tools.misc 
import create_patcher from pcs.test.tools.pcs_unittest import mock patch_env = create_patcher("pcs.lib.booth.env") class GetConfigFileNameTest(TestCase): @patch_env("os.path.exists") def test_refuse_when_name_starts_with_slash(self, mock_path_exists): mock_path_exists.return_value = True assert_raise_library_error( lambda: env.get_config_file_name("/booth"), ( severities.ERROR, report_codes.BOOTH_INVALID_NAME, { "name": "/booth", "reason": "contains illegal character '/'", } ), ) class BoothEnvTest(TestCase): @patch_env("RealFile") def test_get_content_from_file(self, mock_real_file): mock_real_file.return_value = mock.MagicMock( read=mock.MagicMock(return_value="content") ) self.assertEqual( "content", env.BoothEnv("report processor", env_data={"name": "booth"}) .get_config_content() ) @patch_env("set_keyfile_access") @patch_env("RealFile") def test_create_config(self, mock_real_file, mock_set_keyfile_access): mock_file = mock.MagicMock( assert_no_conflict_with_existing=mock.MagicMock(), write=mock.MagicMock(), ) mock_real_file.return_value = mock_file env.BoothEnv( "report processor", env_data={"name": "booth"} ).create_config("a", can_overwrite_existing=True) self.assertEqual(mock_file.assert_no_conflict_with_existing.mock_calls,[ mock.call('report processor', True), ]) self.assertEqual(mock_file.write.mock_calls, [mock.call('a')]) @patch_env("RealFile") def test_push_config(self, mock_real_file): mock_file = mock.MagicMock( assert_no_conflict_with_existing=mock.MagicMock(), write=mock.MagicMock(), ) mock_real_file.return_value = mock_file env.BoothEnv( "report processor", env_data={"name": "booth"} ).push_config("a") mock_file.write.assert_called_once_with("a") def test_export_config_file_when_was_present_in_env_data(self): self.assertEqual( env.BoothEnv( "report processor", { "name": "booth-name", "config_file": { "content": "a\nb", }, "key_file": { "content": "secure", }, "key_path": "/path/to/file.key", } ).export(), { "config_file": { "content": "a\nb", "can_overwrite_existing_file": False, "no_existing_file_expected": False, "is_binary": False, }, "key_file": { "content": "secure", "can_overwrite_existing_file": False, "no_existing_file_expected": False, "is_binary": True, }, } ) def test_do_not_export_config_file_when_no_provided(self): self.assertEqual( env.BoothEnv("report processor", {"name": "booth"}).export(), {} ) class SetKeyfileAccessTest(TestCase): @patch_env("os.chmod") @patch_env("os.chown") @patch_env("grp.getgrnam") @patch_env("pwd.getpwnam") @patch_env("settings") def test_do_everything_to_set_desired_file_access( self, settings, getpwnam, getgrnam, chown, chmod ): file_path = "/tmp/some_booth_file" env.set_keyfile_access(file_path) getpwnam.assert_called_once_with(settings.pacemaker_uname) getgrnam.assert_called_once_with(settings.pacemaker_gname) chown.assert_called_once_with( file_path, getpwnam.return_value.pw_uid, getgrnam.return_value.gr_gid, ) @patch_env("pwd.getpwnam", mock.MagicMock(side_effect=KeyError)) @patch_env("settings.pacemaker_uname", "some-user") def test_raises_when_cannot_get_uid(self): assert_raise_library_error( lambda: env.set_keyfile_access("/booth"), ( severities.ERROR, report_codes.UNABLE_TO_DETERMINE_USER_UID, { "user": "some-user", } ), ) @patch_env("grp.getgrnam", mock.MagicMock(side_effect=KeyError)) @patch_env("pwd.getpwnam", mock.MagicMock()) @patch_env("settings.pacemaker_gname", "some-group") def test_raises_when_cannot_get_gid(self): assert_raise_library_error( lambda: env.set_keyfile_access("/booth"), ( severities.ERROR, 
report_codes.UNABLE_TO_DETERMINE_GROUP_GID, { "group": "some-group", } ), ) @patch_env("format_environment_error", mock.Mock(return_value="err")) @patch_env("os.chown", mock.MagicMock(side_effect=EnvironmentError())) @patch_env("grp.getgrnam", mock.MagicMock()) @patch_env("pwd.getpwnam", mock.MagicMock()) @patch_env("settings.pacemaker_gname", "some-group") def test_raises_when_cannot_chown(self): assert_raise_library_error( lambda: env.set_keyfile_access("/booth"), ( severities.ERROR, report_codes.FILE_IO_ERROR, { 'reason': 'err', 'file_role': u'BOOTH_KEY', 'file_path': '/booth', 'operation': u'chown', } ), ) @patch_env("format_environment_error", mock.Mock(return_value="err")) @patch_env("os.chmod", mock.MagicMock(side_effect=EnvironmentError())) @patch_env("os.chown", mock.MagicMock()) @patch_env("grp.getgrnam", mock.MagicMock()) @patch_env("pwd.getpwnam", mock.MagicMock()) @patch_env("settings.pacemaker_gname", "some-group") def test_raises_when_cannot_chmod(self): assert_raise_library_error( lambda: env.set_keyfile_access("/booth"), ( severities.ERROR, report_codes.FILE_IO_ERROR, { 'reason': 'err', 'file_role': u'BOOTH_KEY', 'file_path': '/booth', 'operation': u'chmod', } ), ) pcs-0.9.164/pcs/lib/booth/test/test_resource.py000066400000000000000000000164471326265502500213670ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from lxml import etree import pcs.lib.booth.resource as booth_resource from pcs.test.tools.pcs_unittest import mock def fixture_resources_with_booth(booth_config_file_path): return etree.fromstring(''' '''.format(booth_config_file_path)) def fixture_booth_element(id, booth_config_file_path): return etree.fromstring(''' '''.format(id, booth_config_file_path)) def fixture_ip_element(id, ip=""): return etree.fromstring(''' '''.format(id, ip)) class CreateResourceIdTest(TestCase): @mock.patch("pcs.lib.booth.resource.find_unique_id") def test_return_new_uinq_id(self, mock_find_unique_id): resources_section = etree.fromstring('''''') mock_find_unique_id.side_effect = ( lambda resources_section, id: "{0}-n".format(id) ) self.assertEqual( "booth-some-name-ip-n", booth_resource.create_resource_id( resources_section, "some-name", "ip" ) ) class FindBoothResourceElementsTest(TestCase): def test_returns_empty_list_when_no_matching_booth_element(self): self.assertEqual([], booth_resource.find_for_config( fixture_resources_with_booth("/ANOTHER/PATH/TO/CONF"), "/PATH/TO/CONF" )) def test_returns_all_found_resource_elements(self): resources = etree.fromstring('') first = fixture_booth_element("first", "/PATH/TO/CONF") second = fixture_booth_element("second", "/ANOTHER/PATH/TO/CONF") third = fixture_booth_element("third", "/PATH/TO/CONF") for element in [first, second,third]: resources.append(element) self.assertEqual( [first, third], booth_resource.find_for_config( resources, "/PATH/TO/CONF" ) ) class RemoveFromClusterTest(TestCase): def call(self, element_list): mock_resource_remove = mock.Mock() booth_resource.get_remover(mock_resource_remove)(element_list) return mock_resource_remove def find_booth_resources(self, tree): return tree.xpath('.//primitive[@type="booth-site"]') def test_remove_ip_when_is_only_booth_sibling_in_group(self): group = etree.fromstring(''' ''') mock_resource_remove = self.call(self.find_booth_resources(group)) self.assertEqual( mock_resource_remove.mock_calls, [ mock.call('ip'), mock.call('booth'), ] ) def test_remove_ip_when_group_is_disabled_1(self): group = 
etree.fromstring(''' ''') mock_resource_remove = self.call(self.find_booth_resources(group)) self.assertEqual( mock_resource_remove.mock_calls, [ mock.call('ip'), mock.call('booth'), ] ) def test_remove_ip_when_group_is_disabled_2(self): group = etree.fromstring(''' ''') mock_resource_remove = self.call(self.find_booth_resources(group)) self.assertEqual( mock_resource_remove.mock_calls, [ mock.call('ip'), mock.call('booth'), ] ) def test_dont_remove_ip_when_group_has_other_resources(self): group = etree.fromstring(''' ''') mock_resource_remove = self.call(self.find_booth_resources(group)) self.assertEqual( mock_resource_remove.mock_calls, [ mock.call('booth'), ] ) class FindBoundIpTest(TestCase): def fixture_resource_section(self, ip_element_list): resources_section = etree.fromstring('') group = etree.SubElement(resources_section, "group") group.append(fixture_booth_element("booth1", "/PATH/TO/CONF")) for ip_element in ip_element_list: group.append(ip_element) return resources_section def test_returns_None_when_no_ip(self): self.assertEqual( [], booth_resource.find_bound_ip( self.fixture_resource_section([]), "/PATH/TO/CONF", ) ) def test_returns_ip_when_correctly_found(self): self.assertEqual( ["192.168.122.31"], booth_resource.find_bound_ip( self.fixture_resource_section([ fixture_ip_element("ip1", "192.168.122.31"), ]), "/PATH/TO/CONF", ) ) def test_returns_None_when_more_ip(self): self.assertEqual( ["192.168.122.31", "192.168.122.32"], booth_resource.find_bound_ip( self.fixture_resource_section([ fixture_ip_element("ip1", "192.168.122.31"), fixture_ip_element("ip2", "192.168.122.32"), ]), "/PATH/TO/CONF", ) ) pcs-0.9.164/pcs/lib/booth/test/test_status.py000066400000000000000000000105071326265502500210520ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase try: # python 2 #pylint: disable=unused-import from urlparse import parse_qs as url_decode except ImportError: # python 3 from urllib.parse import parse_qs as url_decode from pcs.test.tools.pcs_unittest import mock from pcs.test.tools.assertions import assert_raise_library_error from pcs import settings from pcs.common import report_codes from pcs.lib.errors import ReportItemSeverity as Severities from pcs.lib.external import CommandRunner import pcs.lib.booth.status as lib class GetDaemonStatusTest(TestCase): def setUp(self): self.mock_run = mock.MagicMock(spec_set=CommandRunner) def test_no_name(self): self.mock_run.run.return_value = ("output", "", 0) self.assertEqual("output", lib.get_daemon_status(self.mock_run)) self.mock_run.run.assert_called_once_with( [settings.booth_binary, "status"] ) def test_with_name(self): self.mock_run.run.return_value = ("output", "", 0) self.assertEqual("output", lib.get_daemon_status(self.mock_run, "name")) self.mock_run.run.assert_called_once_with( [settings.booth_binary, "status", "-c", "name"] ) def test_daemon_not_running(self): self.mock_run.run.return_value = ("", "error", 7) self.assertEqual("", lib.get_daemon_status(self.mock_run)) self.mock_run.run.assert_called_once_with( [settings.booth_binary, "status"] ) def test_failure(self): self.mock_run.run.return_value = ("out", "error", 1) assert_raise_library_error( lambda: lib.get_daemon_status(self.mock_run), ( Severities.ERROR, report_codes.BOOTH_DAEMON_STATUS_ERROR, {"reason": "error\nout"} ) ) self.mock_run.run.assert_called_once_with( [settings.booth_binary, "status"] ) class GetTicketsStatusTest(TestCase): def setUp(self): self.mock_run = 
mock.MagicMock(spec_set=CommandRunner) def test_no_name(self): self.mock_run.run.return_value = ("output", "", 0) self.assertEqual("output", lib.get_tickets_status(self.mock_run)) self.mock_run.run.assert_called_once_with( [settings.booth_binary, "list"] ) def test_with_name(self): self.mock_run.run.return_value = ("output", "", 0) self.assertEqual( "output", lib.get_tickets_status(self.mock_run, "name") ) self.mock_run.run.assert_called_once_with( [settings.booth_binary, "list", "-c", "name"] ) def test_failure(self): self.mock_run.run.return_value = ("out", "error", 1) assert_raise_library_error( lambda: lib.get_tickets_status(self.mock_run), ( Severities.ERROR, report_codes.BOOTH_TICKET_STATUS_ERROR, { "reason": "error\nout" } ) ) self.mock_run.run.assert_called_once_with( [settings.booth_binary, "list"] ) class GetPeersStatusTest(TestCase): def setUp(self): self.mock_run = mock.MagicMock(spec_set=CommandRunner) def test_no_name(self): self.mock_run.run.return_value = ("output", "", 0) self.assertEqual("output", lib.get_peers_status(self.mock_run)) self.mock_run.run.assert_called_once_with( [settings.booth_binary, "peers"] ) def test_with_name(self): self.mock_run.run.return_value = ("output", "", 0) self.assertEqual("output", lib.get_peers_status(self.mock_run, "name")) self.mock_run.run.assert_called_once_with( [settings.booth_binary, "peers", "-c", "name"] ) def test_failure(self): self.mock_run.run.return_value = ("out", "error", 1) assert_raise_library_error( lambda: lib.get_peers_status(self.mock_run), ( Severities.ERROR, report_codes.BOOTH_PEERS_STATUS_ERROR, { "reason": "error\nout" } ) ) self.mock_run.run.assert_called_once_with( [settings.booth_binary, "peers"] ) pcs-0.9.164/pcs/lib/booth/test/test_sync.py000066400000000000000000001015251326265502500205040ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase import json import base64 try: # python 2 from urlparse import parse_qs as url_decode except ImportError: # python 3 from urllib.parse import parse_qs as url_decode from pcs.test.tools.pcs_unittest import mock, skip from pcs.test.tools.assertions import ( assert_report_item_list_equal, assert_raise_library_error, ) from pcs.test.tools.custom_mock import MockLibraryReportProcessor from pcs.common import report_codes from pcs.common.node_communicator import RequestTarget from pcs.lib.node import NodeAddresses, NodeAddressesList from pcs.lib.errors import LibraryError, ReportItemSeverity as Severities from pcs.lib.external import NodeCommunicator, NodeConnectionException import pcs.lib.booth.sync as lib def to_b64(string): return base64.b64encode(string.encode("utf-8")).decode("utf-8") @skip("TODO: rewrite for pcs.lib.communication.booth.BoothSaveFiles") @mock.patch("pcs.lib.booth.sync.parallel_nodes_communication_helper") class SyncConfigInCluster(TestCase): def setUp(self): self.mock_communicator = mock.MagicMock(spec_set=NodeCommunicator) self.mock_reporter = MockLibraryReportProcessor() self.node_list = NodeAddressesList( [NodeAddresses("node" + str(i) for i in range(5))] ) def test_without_authfile(self, mock_parallel): lib.send_config_to_all_nodes( self.mock_communicator, self.mock_reporter, self.node_list, "cfg_name", "config data" ) mock_parallel.assert_called_once_with( lib._set_config_on_node, [ ( [ self.mock_communicator, self.mock_reporter, node, "cfg_name", "config data", None, None ], {} ) for node in self.node_list ], self.mock_reporter, False ) 
assert_report_item_list_equal( self.mock_reporter.report_item_list, [( Severities.INFO, report_codes.BOOTH_CONFIG_DISTRIBUTION_STARTED, {} )] ) def test_skip_offline(self, mock_parallel): lib.send_config_to_all_nodes( self.mock_communicator, self.mock_reporter, self.node_list, "cfg_name", "config data", skip_offline=True ) mock_parallel.assert_called_once_with( lib._set_config_on_node, [ ( [ self.mock_communicator, self.mock_reporter, node, "cfg_name", "config data", None, None ], {} ) for node in self.node_list ], self.mock_reporter, True ) assert_report_item_list_equal( self.mock_reporter.report_item_list, [( Severities.INFO, report_codes.BOOTH_CONFIG_DISTRIBUTION_STARTED, {} )] ) def test_with_authfile(self, mock_parallel): lib.send_config_to_all_nodes( self.mock_communicator, self.mock_reporter, self.node_list, "cfg_name", "config data", authfile="/my/auth/file.key", authfile_data="authfile data".encode("utf-8") ) mock_parallel.assert_called_once_with( lib._set_config_on_node, [ ( [ self.mock_communicator, self.mock_reporter, node, "cfg_name", "config data", "/my/auth/file.key", "authfile data".encode("utf-8") ], {} ) for node in self.node_list ], self.mock_reporter, False ) assert_report_item_list_equal( self.mock_reporter.report_item_list, [( Severities.INFO, report_codes.BOOTH_CONFIG_DISTRIBUTION_STARTED, {} )] ) @mock.patch("pcs.lib.booth.sync.run_and_raise") @mock.patch("pcs.lib.booth.config_structure.get_authfile") @mock.patch("pcs.lib.booth.config_parser.parse") @mock.patch("pcs.lib.booth.config_files.read_configs") @mock.patch("pcs.lib.booth.config_files.read_authfile") class SendAllConfigToNodeTest(TestCase): def setUp(self): self.mock_com = "communicator" self.mock_reporter = MockLibraryReportProcessor() self.node = RequestTarget("node") self.file_list = [ { "name": "name1.conf", "data": "config1", "is_authfile": False }, { "name": "file1.key", "data": to_b64("some key"), "is_authfile": True }, { "name": "name2.conf", "data": "config2", "is_authfile": False }, { "name": "file2.key", "data": to_b64("another key"), "is_authfile": True } ] @staticmethod def mock_parse_fn(config_content): if config_content not in ["config1", "config2"]: raise AssertionError( "unexpected input {0}".format(config_content) ) return config_content @staticmethod def mock_authfile_fn(parsed_config): _data = { "config1": "/path/to/file1.key", "config2": "/path/to/file2.key" } if parsed_config not in _data: raise AssertionError( "unexpected input {0}".format(parsed_config) ) return _data[parsed_config] @staticmethod def mock_read_authfile_fn(_, authfile_path): _data = { "/path/to/file1.key": "some key".encode("utf-8"), "/path/to/file2.key": "another key".encode("utf-8"), } if authfile_path not in _data: raise AssertionError( "unexpected input {0}".format(authfile_path) ) return _data[authfile_path] def test_success( self, mock_read_authfile, mock_read_configs, mock_parse, mock_authfile, mock_run_com, ): mock_parse.side_effect = self.mock_parse_fn mock_authfile.side_effect = self.mock_authfile_fn mock_read_authfile.side_effect = self.mock_read_authfile_fn mock_read_configs.return_value = { "name1.conf": "config1", "name2.conf": "config2" } mock_run_com.return_value = [( self.node, { "existing": [], "failed": {}, "saved": ["name1.conf", "file1.key", "name2.conf", "file2.key"] } )] lib.send_all_config_to_node( self.mock_com, self.mock_reporter, self.node ) self.assertEqual(2, mock_parse.call_count) mock_parse.assert_has_calls([ mock.call("config1"), mock.call("config2") ]) self.assertEqual(2, 
mock_authfile.call_count) mock_authfile.assert_has_calls([ mock.call("config1"), mock.call("config2") ]) self.assertEqual(2, mock_read_authfile.call_count) mock_read_authfile.assert_has_calls([ mock.call(self.mock_reporter, "/path/to/file1.key"), mock.call(self.mock_reporter, "/path/to/file2.key") ]) mock_read_configs.assert_called_once_with(self.mock_reporter, False) communicator, com_cmd = mock_run_com.call_args[0] self.assertEqual(self.mock_com, communicator) self.assertEqual(self.file_list, com_cmd._file_list) self.assertFalse(com_cmd._rewrite_existing) assert_report_item_list_equal( self.mock_reporter.report_item_list, [ ( Severities.INFO, report_codes.BOOTH_CONFIG_DISTRIBUTION_STARTED, {} ), ( Severities.INFO, report_codes.BOOTH_CONFIG_ACCEPTED_BY_NODE, { "node": self.node.label, "name_list": [ "name1.conf", "file1.key", "name2.conf", "file2.key" ] } ) ] ) def test_do_not_rewrite_existing( self, mock_read_authfile, mock_read_configs, mock_parse, mock_authfile, mock_run_com, ): mock_parse.side_effect = self.mock_parse_fn mock_authfile.side_effect = self.mock_authfile_fn mock_read_authfile.side_effect = self.mock_read_authfile_fn mock_read_configs.return_value = { "name1.conf": "config1", "name2.conf": "config2" } mock_run_com.return_value = [( self.node, { "existing": ["name1.conf", "file1.key"], "failed": {}, "saved": ["name2.conf", "file2.key"] } )] assert_raise_library_error( lambda: lib.send_all_config_to_node( self.mock_com, self.mock_reporter, self.node ), ( Severities.ERROR, report_codes.FILE_ALREADY_EXISTS, { "file_role": None, "file_path": "name1.conf", "node": self.node.label }, report_codes.FORCE_FILE_OVERWRITE ), ( Severities.ERROR, report_codes.FILE_ALREADY_EXISTS, { "file_role": None, "file_path": "file1.key", "node": self.node.label }, report_codes.FORCE_FILE_OVERWRITE ) ) self.assertEqual(2, mock_parse.call_count) mock_parse.assert_has_calls([ mock.call("config1"), mock.call("config2") ]) self.assertEqual(2, mock_authfile.call_count) mock_authfile.assert_has_calls([ mock.call("config1"), mock.call("config2") ]) self.assertEqual(2, mock_read_authfile.call_count) mock_read_authfile.assert_has_calls([ mock.call(self.mock_reporter, "/path/to/file1.key"), mock.call(self.mock_reporter, "/path/to/file2.key") ]) mock_read_configs.assert_called_once_with(self.mock_reporter, False) communicator, com_cmd = mock_run_com.call_args[0] self.assertEqual(self.mock_com, communicator) self.assertEqual(self.file_list, com_cmd._file_list) self.assertFalse(com_cmd._rewrite_existing) assert_report_item_list_equal( self.mock_reporter.report_item_list, [ ( Severities.INFO, report_codes.BOOTH_CONFIG_DISTRIBUTION_STARTED, {} ), ( Severities.ERROR, report_codes.FILE_ALREADY_EXISTS, { "file_role": None, "file_path": "name1.conf", "node": self.node.label }, report_codes.FORCE_FILE_OVERWRITE ), ( Severities.ERROR, report_codes.FILE_ALREADY_EXISTS, { "file_role": None, "file_path": "file1.key", "node": self.node.label }, report_codes.FORCE_FILE_OVERWRITE ) ] ) def test_rewrite_existing( self, mock_read_authfile, mock_read_configs, mock_parse, mock_authfile, mock_run_com, ): mock_parse.side_effect = self.mock_parse_fn mock_authfile.side_effect = self.mock_authfile_fn mock_read_authfile.side_effect = self.mock_read_authfile_fn mock_read_configs.return_value = { "name1.conf": "config1", "name2.conf": "config2" } mock_run_com.return_value = [( self.node, { "existing": ["name1.conf", "file1.key"], "failed": {}, "saved": ["name2.conf", "file2.key"] } )] lib.send_all_config_to_node( self.mock_com, 
self.mock_reporter, self.node, rewrite_existing=True ) mock_read_configs.assert_called_once_with(self.mock_reporter, False) self.assertEqual(2, mock_parse.call_count) mock_parse.assert_has_calls([ mock.call("config1"), mock.call("config2") ]) self.assertEqual(2, mock_authfile.call_count) mock_authfile.assert_has_calls([ mock.call("config1"), mock.call("config2") ]) self.assertEqual(2, mock_read_authfile.call_count) mock_read_authfile.assert_has_calls([ mock.call(self.mock_reporter, "/path/to/file1.key"), mock.call(self.mock_reporter, "/path/to/file2.key") ]) communicator, com_cmd = mock_run_com.call_args[0] self.assertEqual(self.mock_com, communicator) self.assertEqual(self.file_list, com_cmd._file_list) self.assertTrue(com_cmd._rewrite_existing) assert_report_item_list_equal( self.mock_reporter.report_item_list, [ ( Severities.INFO, report_codes.BOOTH_CONFIG_DISTRIBUTION_STARTED, {} ), ( Severities.WARNING, report_codes.FILE_ALREADY_EXISTS, { "file_role": None, "file_path": "name1.conf", "node": self.node.label } ), ( Severities.WARNING, report_codes.FILE_ALREADY_EXISTS, { "file_role": None, "file_path": "file1.key", "node": self.node.label } ), ( Severities.INFO, report_codes.BOOTH_CONFIG_ACCEPTED_BY_NODE, { "node": self.node.label, "name_list": ["name2.conf", "file2.key"] } ) ] ) def test_write_failure( self, mock_read_authfile, mock_read_configs, mock_parse, mock_authfile, mock_run_com, ): mock_parse.side_effect = self.mock_parse_fn mock_authfile.side_effect = self.mock_authfile_fn mock_read_authfile.side_effect = self.mock_read_authfile_fn mock_read_configs.return_value = { "name1.conf": "config1", "name2.conf": "config2" } mock_run_com.return_value = [( self.node, { "existing": [], "failed": { "name1.conf": "Error message", "file1.key": "Another error message" }, "saved": ["name2.conf", "file2.key"] } )] assert_raise_library_error( lambda: lib.send_all_config_to_node( self.mock_com, self.mock_reporter, self.node ), ( Severities.ERROR, report_codes.BOOTH_CONFIG_DISTRIBUTION_NODE_ERROR, { "node": self.node.label, "name": "name1.conf", "reason": "Error message" } ), ( Severities.ERROR, report_codes.BOOTH_CONFIG_DISTRIBUTION_NODE_ERROR, { "node": self.node.label, "name": "file1.key", "reason": "Another error message" } ) ) self.assertEqual(2, mock_parse.call_count) mock_parse.assert_has_calls([ mock.call("config1"), mock.call("config2") ]) self.assertEqual(2, mock_authfile.call_count) mock_authfile.assert_has_calls([ mock.call("config1"), mock.call("config2") ]) self.assertEqual(2, mock_read_authfile.call_count) mock_read_authfile.assert_has_calls([ mock.call(self.mock_reporter, "/path/to/file1.key"), mock.call(self.mock_reporter, "/path/to/file2.key") ]) mock_read_configs.assert_called_once_with(self.mock_reporter, False) communicator, com_cmd = mock_run_com.call_args[0] self.assertEqual(self.mock_com, communicator) self.assertEqual(self.file_list, com_cmd._file_list) self.assertFalse(com_cmd._rewrite_existing) assert_report_item_list_equal( self.mock_reporter.report_item_list, [ ( Severities.INFO, report_codes.BOOTH_CONFIG_DISTRIBUTION_STARTED, {} ), ( Severities.ERROR, report_codes.BOOTH_CONFIG_DISTRIBUTION_NODE_ERROR, { "node": self.node.label, "name": "name1.conf", "reason": "Error message" } ), ( Severities.ERROR, report_codes.BOOTH_CONFIG_DISTRIBUTION_NODE_ERROR, { "node": self.node.label, "name": "file1.key", "reason": "Another error message" } ) ] ) @skip("TODO: rewrite for pcs.lib.communication.booth.BoothSaveFiles") def test_communication_failure( self, mock_read_authfile, 
mock_read_configs, mock_parse, mock_authfile ): mock_parse.side_effect = self.mock_parse_fn mock_authfile.side_effect = self.mock_authfile_fn mock_read_authfile.side_effect = self.mock_read_authfile_fn mock_read_configs.return_value = { "name1.conf": "config1", "name2.conf": "config2" } self.mock_communicator.call_node.side_effect = NodeConnectionException( self.node.label, "command", "reason" ) assert_raise_library_error( lambda: lib.send_all_config_to_node( self.mock_communicator, self.mock_reporter, self.node ), ( Severities.ERROR, report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, { "node": self.node.label, "command": "command", "reason": "reason" } ) ) self.assertEqual(2, mock_parse.call_count) mock_parse.assert_has_calls([ mock.call("config1"), mock.call("config2") ]) self.assertEqual(2, mock_authfile.call_count) mock_authfile.assert_has_calls([ mock.call("config1"), mock.call("config2") ]) self.assertEqual(2, mock_read_authfile.call_count) mock_read_authfile.assert_has_calls([ mock.call(self.mock_reporter, "/path/to/file1.key"), mock.call(self.mock_reporter, "/path/to/file2.key") ]) mock_read_configs.assert_called_once_with(self.mock_reporter, False) self.assertEqual(1, self.mock_communicator.call_node.call_count) self.assertEqual( self.node, self.mock_communicator.call_node.call_args[0][0] ) self.assertEqual( "remote/booth_save_files", self.mock_communicator.call_node.call_args[0][1] ) data = url_decode(self.mock_communicator.call_node.call_args[0][2]) self.assertFalse("rewrite_existing" in data) self.assertTrue("data_json" in data) self.assertEqual( [ { "name": "name1.conf", "data": "config1", "is_authfile": False }, { "name": "file1.key", "data": to_b64("some key"), "is_authfile": True }, { "name": "name2.conf", "data": "config2", "is_authfile": False }, { "name": "file2.key", "data": to_b64("another key"), "is_authfile": True } ], json.loads(data["data_json"][0]) ) @skip("TODO: rewrite for pcs.lib.communication.booth.BoothSaveFiles") def test_wrong_response_format( self, mock_read_authfile, mock_read_configs, mock_parse, mock_authfile ): mock_parse.side_effect = self.mock_parse_fn mock_authfile.side_effect = self.mock_authfile_fn mock_read_authfile.side_effect = self.mock_read_authfile_fn mock_read_configs.return_value = { "name1.conf": "config1", "name2.conf": "config2" } self.mock_communicator.call_node.return_value = """ { "existing_files": [], "failed": { "name1.conf": "Error message", "file1.key": "Another error message" }, "saved": ["name2.conf", "file2.key"] } """ assert_raise_library_error( lambda: lib.send_all_config_to_node( self.mock_communicator, self.mock_reporter, self.node ), ( Severities.ERROR, report_codes.INVALID_RESPONSE_FORMAT, {"node": self.node.label} ) ) self.assertEqual(2, mock_parse.call_count) mock_parse.assert_has_calls([ mock.call("config1"), mock.call("config2") ]) self.assertEqual(2, mock_authfile.call_count) mock_authfile.assert_has_calls([ mock.call("config1"), mock.call("config2") ]) self.assertEqual(2, mock_read_authfile.call_count) mock_read_authfile.assert_has_calls([ mock.call(self.mock_reporter, "/path/to/file1.key"), mock.call(self.mock_reporter, "/path/to/file2.key") ]) mock_read_configs.assert_called_once_with(self.mock_reporter, False) self.assertEqual(1, self.mock_communicator.call_node.call_count) self.assertEqual( self.node, self.mock_communicator.call_node.call_args[0][0] ) self.assertEqual( "remote/booth_save_files", self.mock_communicator.call_node.call_args[0][1] ) data = 
url_decode(self.mock_communicator.call_node.call_args[0][2]) self.assertFalse("rewrite_existing" in data) self.assertTrue("data_json" in data) self.assertEqual( [ { "name": "name1.conf", "data": "config1", "is_authfile": False }, { "name": "file1.key", "data": to_b64("some key"), "is_authfile": True }, { "name": "name2.conf", "data": "config2", "is_authfile": False }, { "name": "file2.key", "data": to_b64("another key"), "is_authfile": True } ], json.loads(data["data_json"][0]) ) @skip("TODO: rewrite for pcs.lib.communication.booth.BoothSaveFiles") def test_response_not_json( self, mock_read_authfile, mock_read_configs, mock_parse, mock_authfile ): mock_parse.side_effect = self.mock_parse_fn mock_authfile.side_effect = self.mock_authfile_fn mock_read_authfile.side_effect = self.mock_read_authfile_fn mock_read_configs.return_value = { "name1.conf": "config1", "name2.conf": "config2" } self.mock_communicator.call_node.return_value = "not json" assert_raise_library_error( lambda: lib.send_all_config_to_node( self.mock_communicator, self.mock_reporter, self.node ), ( Severities.ERROR, report_codes.INVALID_RESPONSE_FORMAT, {"node": self.node.label} ) ) self.assertEqual(2, mock_parse.call_count) mock_parse.assert_has_calls([ mock.call("config1"), mock.call("config2") ]) self.assertEqual(2, mock_authfile.call_count) mock_authfile.assert_has_calls([ mock.call("config1"), mock.call("config2") ]) self.assertEqual(2, mock_read_authfile.call_count) mock_read_authfile.assert_has_calls([ mock.call(self.mock_reporter, "/path/to/file1.key"), mock.call(self.mock_reporter, "/path/to/file2.key") ]) mock_read_configs.assert_called_once_with(self.mock_reporter, False) self.assertEqual(1, self.mock_communicator.call_node.call_count) self.assertEqual( self.node, self.mock_communicator.call_node.call_args[0][0] ) self.assertEqual( "remote/booth_save_files", self.mock_communicator.call_node.call_args[0][1] ) data = url_decode(self.mock_communicator.call_node.call_args[0][2]) self.assertFalse("rewrite_existing" in data) self.assertTrue("data_json" in data) self.assertEqual( [ { "name": "name1.conf", "data": "config1", "is_authfile": False }, { "name": "file1.key", "data": to_b64("some key"), "is_authfile": True }, { "name": "name2.conf", "data": "config2", "is_authfile": False }, { "name": "file2.key", "data": to_b64("another key"), "is_authfile": True } ], json.loads(data["data_json"][0]) ) def test_configs_without_authfiles( self, mock_read_authfile, mock_read_configs, mock_parse, mock_authfile, mock_run_com ): def mock_authfile_fn(parsed_config): if parsed_config == "config1": return None elif parsed_config == "config2": return "/path/to/file2.key" else: raise AssertionError( "unexpected input: {0}".format(parsed_config) ) mock_parse.side_effect = self.mock_parse_fn mock_authfile.side_effect = mock_authfile_fn mock_read_authfile.return_value = "another key".encode("utf-8") mock_read_configs.return_value = { "name1.conf": "config1", "name2.conf": "config2" } mock_run_com.return_value = [( self.node, { "existing": [], "failed": {}, "saved": ["name1.conf", "name2.conf", "file2.key"] } )] lib.send_all_config_to_node( self.mock_com, self.mock_reporter, self.node ) self.assertEqual(2, mock_parse.call_count) mock_parse.assert_has_calls([ mock.call("config1"), mock.call("config2") ]) self.assertEqual(2, mock_authfile.call_count) mock_authfile.assert_has_calls([ mock.call("config1"), mock.call("config2") ]) mock_read_authfile.assert_called_once_with( self.mock_reporter, "/path/to/file2.key" ) 
mock_read_configs.assert_called_once_with(self.mock_reporter, False) expected_file_list = [ { "name": "name1.conf", "data": "config1", "is_authfile": False }, { "name": "name2.conf", "data": "config2", "is_authfile": False }, { "name": "file2.key", "data": to_b64("another key"), "is_authfile": True } ] communicator, com_cmd = mock_run_com.call_args[0] self.assertEqual(self.mock_com, communicator) self.assertEqual(expected_file_list, com_cmd._file_list) self.assertFalse(com_cmd._rewrite_existing) assert_report_item_list_equal( self.mock_reporter.report_item_list, [ ( Severities.INFO, report_codes.BOOTH_CONFIG_DISTRIBUTION_STARTED, {} ), ( Severities.INFO, report_codes.BOOTH_CONFIG_ACCEPTED_BY_NODE, { "node": self.node.label, "name_list": ["name1.conf", "name2.conf", "file2.key"] } ) ] ) def test_unable_to_parse_config( self, mock_read_authfile, mock_read_configs, mock_parse, mock_authfile, mock_run_com, ): def mock_parse_fn(config_data): if config_data == "config1": raise LibraryError() elif config_data == "config2": return "config2" else: raise AssertionError( "unexpected input: {0}".format(config_data) ) mock_parse.side_effect = mock_parse_fn mock_authfile.return_value = "/path/to/file2.key" mock_read_authfile.return_value = "another key".encode("utf-8") mock_read_configs.return_value = { "name1.conf": "config1", "name2.conf": "config2" } mock_run_com.return_value = [( self.node, { "existing": [], "failed": {}, "saved": ["name2.conf", "file2.key"] } )] lib.send_all_config_to_node( self.mock_com, self.mock_reporter, self.node ) self.assertEqual(2, mock_parse.call_count) mock_parse.assert_has_calls([ mock.call("config1"), mock.call("config2") ]) mock_authfile.assert_called_once_with("config2") mock_read_authfile.assert_called_once_with( self.mock_reporter, "/path/to/file2.key" ) mock_read_configs.assert_called_once_with(self.mock_reporter, False) expected_file_list = [ { "name": "name2.conf", "data": "config2", "is_authfile": False }, { "name": "file2.key", "data": to_b64("another key"), "is_authfile": True } ] communicator, com_cmd = mock_run_com.call_args[0] self.assertEqual(self.mock_com, communicator) self.assertEqual(expected_file_list, com_cmd._file_list) self.assertFalse(com_cmd._rewrite_existing) assert_report_item_list_equal( self.mock_reporter.report_item_list, [ ( Severities.INFO, report_codes.BOOTH_CONFIG_DISTRIBUTION_STARTED, {} ), ( Severities.WARNING, report_codes.BOOTH_SKIPPING_CONFIG, { "config_file": "name1.conf" } ), ( Severities.INFO, report_codes.BOOTH_CONFIG_ACCEPTED_BY_NODE, { "node": self.node.label, "name_list": ["name2.conf", "file2.key"] } ) ] ) pcs-0.9.164/pcs/lib/cib/000077500000000000000000000000001326265502500145565ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/cib/__init__.py000066400000000000000000000000001326265502500166550ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/cib/acl.py000066400000000000000000000310511326265502500156670ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial from lxml import etree from pcs.lib import reports from pcs.lib.errors import LibraryError from pcs.lib.cib.tools import ( check_new_id_applicable, does_id_exist, find_unique_id, find_element_by_tag_and_id, ) from pcs.lib.xml_tools import etree_element_attibutes_to_dict TAG_GROUP = "acl_group" TAG_ROLE = "acl_role" TAG_TARGET = "acl_target" TAG_PERMISSION = "acl_permission" def validate_permissions(tree, permission_info_list): """ Validate given permission list. 
    Raise LibraryError if any permission is not valid.

    tree -- cib tree
    permission_info_list -- list of tuples like this:
        ("read|write|deny", "xpath|id", <id-or-xpath-string>)
    """
    report_items = []
    allowed_permissions = ["read", "write", "deny"]
    allowed_scopes = ["xpath", "id"]
    for permission, scope_type, scope in permission_info_list:
        if permission not in allowed_permissions:
            report_items.append(reports.invalid_option_value(
                "permission", permission, allowed_permissions
            ))
        if scope_type not in allowed_scopes:
            report_items.append(reports.invalid_option_value(
                "scope type", scope_type, allowed_scopes
            ))
        if scope_type == "id" and not does_id_exist(tree, scope):
            report_items.append(reports.id_not_found(scope, ["id"]))
    if report_items:
        raise LibraryError(*report_items)


def _find(
    tag, acl_section, element_id, none_if_id_unused=False, id_types=None
):
    return find_element_by_tag_and_id(
        tag,
        acl_section,
        element_id,
        id_types=id_types,
        none_if_id_unused=none_if_id_unused,
    )

find_group = partial(_find, TAG_GROUP)
find_role = partial(_find, TAG_ROLE)
find_target = partial(_find, TAG_TARGET)


def find_target_or_group(acl_section, target_or_group_id):
    """
    Return the acl_target or acl_group element with id target_or_group_id.
    A target element has higher priority: if there are a target and a group
    with the same id, only the target element is returned by this function.
    Raises LibraryError if there is no target or group element with the
    specified id.

    This approach is DEPRECATED and is kept only for backward compatibility.
    It is better to know explicitly whether a target (user) or a group is
    needed.

    acl_section -- cib etree node
    target_or_group_id -- id of the target/group element to be returned
    """
    target = find_target(
        acl_section, target_or_group_id, none_if_id_unused=True
    )
    if target is not None:
        return target
    return find_group(
        acl_section, target_or_group_id, id_types=[TAG_GROUP, TAG_TARGET]
    )


def create_role(acl_section, role_id, description=None):
    """
    Create a new role element and add it to the cib.
    Returns the newly created role element.

    role_id -- id of the desired role
    description -- role description
    """
    check_new_id_applicable(acl_section, "ACL role", role_id)
    role = etree.SubElement(acl_section, TAG_ROLE, id=role_id)
    if description:
        role.set("description", description)
    return role


def remove_role(acl_section, role_id, autodelete_users_groups=False):
    """
    Remove the role with the specified id from the CIB together with all
    references to it.

    acl_section -- etree node
    role_id -- id of the role to be removed
    autodelete_users_groups -- if True, also remove targets/groups with no
        role left after the removal
    """
    acl_role = find_role(acl_section, role_id)
    acl_role.getparent().remove(acl_role)
    for role_el in acl_section.findall(".//role[@id='{0}']".format(role_id)):
        role_parent = role_el.getparent()
        role_parent.remove(role_el)
        if autodelete_users_groups and role_parent.find(".//role") is None:
            role_parent.getparent().remove(role_parent)
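# --- Illustrative sketch (not part of pcs) ---------------------------------
# What remove_role() above does to an ACL section: the acl_role element is
# dropped and every <role id="..."/> reference inside targets/groups goes
# with it; with autodelete enabled, a target left without any role is
# removed too. A standalone lxml demonstration on a hypothetical fragment:
from lxml import etree

acls = etree.fromstring(
    "<acls>"
    "<acl_role id='reader'/>"
    "<acl_target id='alice'><role id='reader'/></acl_target>"
    "</acls>"
)
role = acls.find("./acl_role[@id='reader']")
role.getparent().remove(role)
for reference in acls.findall(".//role[@id='reader']"):
    parent = reference.getparent()
    parent.remove(reference)
    if parent.find("./role") is None:  # the autodelete_users_groups case
        parent.getparent().remove(parent)

assert etree.tostring(acls) == b"<acls/>"
# ----------------------------------------------------------------------------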
def _assign_role(acl_section, role_id, target_el):
    try:
        role_el = find_role(acl_section, role_id)
    except LibraryError as e:
        return list(e.args)
    assigned_role = target_el.find(
        "./role[@id='{0}']".format(role_el.get("id"))
    )
    if assigned_role is not None:
        return [reports.acl_role_is_already_assigned_to_target(
            role_el.get("id"), target_el.get("id")
        )]
    etree.SubElement(target_el, "role", {"id": role_el.get("id")})
    return []


def assign_role(acl_section, role_id, target_el):
    """
    Assign a role to the specified target/group element.
    Raise LibraryError if the role is already assigned to the target/group.

    acl_section -- cib etree node
    role_id -- id of the role to be assigned
    target_el -- etree element of the target/group to which the role should
        be assigned
    """
    report_list = _assign_role(acl_section, role_id, target_el)
    if report_list:
        raise LibraryError(*report_list)


def assign_all_roles(acl_section, role_id_list, element):
    """
    Assign roles from role_id_list to element.
    Raises LibraryError on any failure.

    acl_section -- cib etree node
    element -- element to which the specified roles should be assigned
    role_id_list -- list of role ids
    """
    report_list = []
    for role_id in role_id_list:
        report_list.extend(_assign_role(acl_section, role_id, element))
    if report_list:
        raise LibraryError(*report_list)


def unassign_role(target_el, role_id, autodelete_target=False):
    """
    Unassign the role with role_id from the specified target/group target_el.
    Raise LibraryError if the role is not assigned to the target/group.

    target_el -- etree element of the target/group from which the role should
        be unassigned
    role_id -- id of the role
    autodelete_target -- if True, remove target_el if no role is assigned to
        it any more
    """
    assigned_role = target_el.find("./role[@id='{0}']".format(role_id))
    if assigned_role is None:
        raise LibraryError(reports.acl_role_is_not_assigned_to_target(
            role_id, target_el.get("id")
        ))
    target_el.remove(assigned_role)
    if autodelete_target and target_el.find("./role") is None:
        target_el.getparent().remove(target_el)


def provide_role(acl_section, role_id):
    """
    Return the role with id role_id. If it does not exist, it is created.

    role_id -- id of the desired role
    """
    role = find_role(acl_section, role_id, none_if_id_unused=True)
    return role if role is not None else create_role(acl_section, role_id)


def create_target(acl_section, target_id):
    """
    Create a new acl_target element with id target_id.
    Raises LibraryError if a target with the specified id already exists.

    acl_section -- etree node
    target_id -- id of the new target
    """
    # id of element acl_target is not type ID in CIB ACL schema so we don't
    # need to check if it is unique ID in whole CIB
    if (
        acl_section.find("./{0}[@id='{1}']".format(TAG_TARGET, target_id))
        is not None
    ):
        raise LibraryError(reports.acl_target_already_exists(target_id))
    return etree.SubElement(acl_section, TAG_TARGET, id=target_id)


def create_group(acl_section, group_id):
    """
    Create a new acl_group element with the specified id.
    Raises LibraryError if the tree contains an element with id group_id.

    acl_section -- etree node
    group_id -- id of the new group
    """
    check_new_id_applicable(acl_section, "ACL group", group_id)
    return etree.SubElement(acl_section, TAG_GROUP, id=group_id)


def remove_target(acl_section, target_id):
    """
    Remove the acl_target element with the specified id from acl_section.
    Raises LibraryError if a target with id target_id does not exist.

    acl_section -- etree node
    target_id -- id of the target element to remove
    """
    target = find_target(acl_section, target_id)
    target.getparent().remove(target)


def remove_group(acl_section, group_id):
    """
    Remove the acl_group element with the specified id from the tree.
    Raises LibraryError if a group with id group_id does not exist.

    acl_section -- etree node
    group_id -- id of the group element to remove
    """
    group = find_group(acl_section, group_id)
    group.getparent().remove(group)
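# --- Illustrative sketch (not part of pcs) ---------------------------------
# A typical flow with the helpers above: make sure a role exists, create a
# target (user), then assign the role to it. The ids are hypothetical and
# acl_section stands for the <acls> element of a CIB.
#
#     role_el = provide_role(acl_section, "operator")
#     target_el = create_target(acl_section, "alice")
#     assign_role(acl_section, "operator", target_el)
#
# After this the section contains
#     <acl_role id="operator"/>
#     <acl_target id="alice"><role id="operator"/></acl_target>
# and a second assign_role() call with the same ids would raise
# LibraryError, because _assign_role() reports an already assigned role.
# ----------------------------------------------------------------------------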
role_el -- acl_role element to which permissions should be added permission_info_list -- list of tuples, each containing (permission, scope_type, scope) """ area_type_attribute_map = { 'xpath': 'xpath', 'id': 'reference', } for permission, scope_type, scope in permission_info_list: perm = etree.SubElement(role_el, "acl_permission") perm.set( "id", find_unique_id( role_el, "{0}-{1}".format(role_el.get("id", "role"), permission) ) ) perm.set("kind", permission) perm.set(area_type_attribute_map[scope_type], scope) def remove_permission(acl_section, permission_id): """ Remove permission with id permission_id from acl_section. acl_section -- etree node permission_id -- id of permission element to be removed """ permission = _find(TAG_PERMISSION, acl_section, permission_id) permission.getparent().remove(permission) def get_role_list(acl_section): """ Returns list of all acl_role elements from acl_section. Format of items of output list: { "id": <role id>, "description": <role description>, "permission_list": [<permission dict>, ...] } acl_section -- etree node """ output_list = [] for role_el in acl_section.findall("./{0}".format(TAG_ROLE)): role = etree_element_attibutes_to_dict( role_el, ["id", "description"] ) role["permission_list"] = _get_permission_list(role_el) output_list.append(role) return output_list def _get_permission_list(role_el): """ Return list of all permissions of role element role_el. Format of items of output list (if an attribute is missing in the element, its key maps to None): { "id": <permission id>, "description": <permission description>, "kind": <read|write|deny>, "xpath": <xpath>, "reference": <reference>, "object-type": <object type>, "attribute": <attribute>, } role_el -- acl_role etree element of which permissions should be returned """ output_list = [] for permission in role_el.findall("./acl_permission"): output_list.append(etree_element_attibutes_to_dict( permission, [ "id", "description", "kind", "xpath", "reference", "object-type", "attribute" ] )) return output_list def get_target_list(acl_section): """ Returns list of acl_target elements in format: { "id": <target id>, "role_list": [<role id>, ...] } acl_section -- etree node """ return get_target_like_list(acl_section, TAG_TARGET) def get_group_list(acl_section): """ Returns list of acl_group elements in format: { "id": <group id>, "role_list": [<role id>, ...] } acl_section -- etree node """ return get_target_like_list(acl_section, TAG_GROUP) def get_target_like_list(acl_section, tag): output_list = [] for target_el in acl_section.findall("./{0}".format(tag)): output_list.append({ "id": target_el.get("id"), "role_list": _get_role_list_of_target(target_el), }) return output_list def _get_role_list_of_target(target): """ Returns all roles assigned to target element as list of strings. target -- etree acl_target/acl_group element of which roles should be returned """ return [ role.get("id") for role in target.findall("./role") if role.get("id") ] def remove_permissions_referencing(tree, reference): """ Removes all permissions with the specified reference.
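Illustrative call (id assumed, not from the original source): remove_permissions_referencing(cib, "dummy") drops every acl_permission whose reference attribute equals "dummy".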
tree -- etree node reference -- reference identifier """ xpath = './/acl_permission[@reference="{0}"]'.format(reference) for permission in tree.findall(xpath): permission.getparent().remove(permission) def dom_remove_permissions_referencing(dom, reference): # TODO: remove once we go fully lxml for permission in dom.getElementsByTagName("acl_permission"): if permission.getAttribute("reference") == reference: permission.parentNode.removeChild(permission) pcs-0.9.164/pcs/lib/cib/alert.py000066400000000000000000000205311326265502500162400ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from functools import partial from pcs.common import report_codes from pcs.lib import reports from pcs.lib.errors import ReportItemSeverity as Severities from pcs.lib.cib.nvpair import arrange_first_nvset, get_nvset from pcs.lib.cib.tools import ( check_new_id_applicable, find_unique_id, get_alerts, validate_id_does_not_exist, find_element_by_tag_and_id, ) from pcs.lib.xml_tools import get_sub_element TAG_ALERT = "alert" TAG_RECIPIENT = "recipient" find_alert = partial(find_element_by_tag_and_id, TAG_ALERT) find_recipient = partial(find_element_by_tag_and_id, TAG_RECIPIENT) update_instance_attributes = partial( arrange_first_nvset, "instance_attributes" ) update_meta_attributes = partial(arrange_first_nvset, "meta_attributes") def _update_optional_attribute(element, attribute, value): """ Update optional attribute of element. Remove the existing attribute if value is empty. element -- parent element of specified attribute attribute -- attribute to be updated value -- new value """ if value is None: return if value: element.set(attribute, value) elif attribute in element.attrib: del element.attrib[attribute] def ensure_recipient_value_is_unique( reporter, alert, recipient_value, recipient_id="", allow_duplicity=False ): """ Ensures that recipient_value is unique in alert. reporter -- report processor alert -- alert element recipient_value -- recipient value recipient_id -- id of the recipient to which the value belongs allow_duplicity -- if True, only a warning is reported if the value already exists """ recipient_list = alert.xpath( "./recipient[@value='{value}' and @id!='{id}']".format( value=recipient_value, id=recipient_id ) ) if recipient_list: reporter.process(reports.cib_alert_recipient_already_exists( alert.get("id", None), recipient_value, Severities.WARNING if allow_duplicity else Severities.ERROR, forceable=( None if allow_duplicity else report_codes.FORCE_ALERT_RECIPIENT_VALUE_NOT_UNIQUE ) )) def create_alert(tree, alert_id, path, description=""): """ Create new alert element. Returns newly created element. Raises LibraryError if element with specified id already exists. tree -- cib etree node alert_id -- id of new alert, it will be generated if it is None path -- path to script description -- description """ if alert_id: check_new_id_applicable(tree, "alert-id", alert_id) else: alert_id = find_unique_id(tree, "alert") alert = etree.SubElement(get_alerts(tree), "alert", id=alert_id, path=path) if description: alert.set("description", description) return alert def update_alert(tree, alert_id, path, description=None): """ Update existing alert. Return updated alert element. Raises LibraryError if alert with specified id doesn't exist.
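Illustrative call (values assumed, not from the original source): update_alert(cib, "alert-1", "/usr/local/bin/notify.sh") changes only the alert's path and leaves its description untouched.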
tree -- cib etree node alert_id -- id of alert to be updated path -- new value of path, stays unchanged if None description -- new value of description, stays unchanged if None, removed if empty """ alert = find_alert(get_alerts(tree), alert_id) if path: alert.set("path", path) _update_optional_attribute(alert, "description", description) return alert def remove_alert(tree, alert_id): """ Remove alert with specified id. Raises LibraryError if alert with specified id doesn't exist. tree -- cib etree node alert_id -- id of alert which should be removed """ alert = find_alert(get_alerts(tree), alert_id) alert.getparent().remove(alert) def add_recipient( reporter, tree, alert_id, recipient_value, recipient_id=None, description="", allow_same_value=False ): """ Add recipient to alert with specified id. Returns added recipient element. Raises LibraryError if alert with specified alert_id doesn't exist. Raises LibraryError if recipient already exists. reporter -- report processor tree -- cib etree node alert_id -- id of alert which should be parent of new recipient recipient_value -- value of recipient recipient_id -- id of new recipient, if None it will be generated description -- description of recipient allow_same_value -- if True, a unique recipient value is not required """ if recipient_id is None: recipient_id = find_unique_id(tree, "{0}-recipient".format(alert_id)) else: validate_id_does_not_exist(tree, recipient_id) alert = find_alert(get_alerts(tree), alert_id) ensure_recipient_value_is_unique( reporter, alert, recipient_value, allow_duplicity=allow_same_value ) recipient = etree.SubElement( alert, "recipient", id=recipient_id, value=recipient_value ) if description: recipient.set("description", description) return recipient def update_recipient( reporter, tree, recipient_id, recipient_value=None, description=None, allow_same_value=False ): """ Update specified recipient. Returns updated recipient element. Raises LibraryError if recipient doesn't exist. reporter -- report processor tree -- cib etree node recipient_id -- id of recipient to be updated recipient_value -- recipient value, stays unchanged if None description -- description, stays unchanged if None, removed if empty allow_same_value -- if True, a unique recipient value is not required """ recipient = find_recipient(get_alerts(tree), recipient_id) if recipient_value is not None: ensure_recipient_value_is_unique( reporter, recipient.getparent(), recipient_value, recipient_id=recipient_id, allow_duplicity=allow_same_value ) recipient.set("value", recipient_value) _update_optional_attribute(recipient, "description", description) return recipient def remove_recipient(tree, recipient_id): """ Remove specified recipient. Raises LibraryError if recipient doesn't exist. tree -- cib etree node recipient_id -- id of recipient to be removed """ recipient = find_recipient(get_alerts(tree), recipient_id) recipient.getparent().remove(recipient) def get_all_recipients(alert): """ Returns list of all recipients of specified alert.
Format: [ { "id": , "value": , "description": , "instance_attributes": , "meta_attributes": } ] alert -- parent element of recipients to return """ recipient_list = [] for recipient in alert.findall("./recipient"): recipient_list.append({ "id": recipient.get("id"), "value": recipient.get("value"), "description": recipient.get("description", ""), "instance_attributes": get_nvset( get_sub_element(recipient, "instance_attributes") ), "meta_attributes": get_nvset( get_sub_element(recipient, "meta_attributes") ) }) return recipient_list def get_all_alerts(tree): """ Returns list of all alerts specified in tree. Format: [ { "id": , "path": , "description": , "instance_attributes": , "meta_attributes": , "recipients_list": } ] tree -- cib etree node """ alert_list = [] for alert in get_alerts(tree).findall("./alert"): alert_list.append({ "id": alert.get("id"), "path": alert.get("path"), "description": alert.get("description", ""), "instance_attributes": get_nvset( get_sub_element(alert, "instance_attributes") ), "meta_attributes": get_nvset( get_sub_element(alert, "meta_attributes") ), "recipient_list": get_all_recipients(alert) }) return alert_list pcs-0.9.164/pcs/lib/cib/constraint/000077500000000000000000000000001326265502500167425ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/cib/constraint/__init__.py000066400000000000000000000000001326265502500210410ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/cib/constraint/colocation.py000066400000000000000000000022331326265502500214460ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial from pcs.lib import reports from pcs.lib.cib.constraint import constraint from pcs.lib.cib.tools import check_new_id_applicable from pcs.lib.errors import LibraryError from pcs.lib.pacemaker.values import is_score, SCORE_INFINITY TAG_NAME = 'rsc_colocation' DESCRIPTION = "constraint id" SCORE_NAMES = ("score", "score-attribute", "score-attribute-mangle") def prepare_options_with_set(cib, options, resource_set_list): options = constraint.prepare_options( tuple(SCORE_NAMES), options, partial(constraint.create_id, cib, TAG_NAME, resource_set_list), partial(check_new_id_applicable, cib, DESCRIPTION), ) if "score" in options and not is_score(options["score"]): raise LibraryError(reports.invalid_score(options["score"])) score_attrs_count = len([ name for name in options.keys() if name in SCORE_NAMES ]) if score_attrs_count > 1: raise LibraryError(reports.multiple_score_options()) if score_attrs_count == 0: options["score"] = SCORE_INFINITY return options pcs-0.9.164/pcs/lib/cib/constraint/constraint.py000066400000000000000000000110671326265502500215050ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from pcs.common import report_codes from pcs.lib import reports from pcs.lib.cib import resource from pcs.lib.cib.constraint import resource_set from pcs.lib.cib.tools import ( find_unique_id, find_element_by_tag_and_id, ) from pcs.lib.errors import LibraryError, ReportItemSeverity from pcs.lib.xml_tools import ( export_attributes, find_parent, ) def _validate_attrib_names(attrib_names, options): invalid_names = [ name for name in options.keys() if name not in attrib_names ] if invalid_names: raise LibraryError( reports.invalid_options(invalid_names, attrib_names, None) ) def find_valid_resource_id( report_processor, cib, can_repair_to_clone, in_clone_allowed, id ): parent_tags = resource.clone.ALL_TAGS + [resource.bundle.TAG] 
resource_element = find_element_by_tag_and_id( parent_tags + [resource.primitive.TAG, resource.group.TAG], cib, id, ) if resource_element.tag in parent_tags: return resource_element.attrib["id"] clone = find_parent(resource_element, parent_tags) if clone is None: return resource_element.attrib["id"] if can_repair_to_clone: #this is workaround for web ui, console should not use it, so we do not #warn about it return clone.attrib["id"] if in_clone_allowed: report_processor.process( reports.resource_for_constraint_is_multiinstance( resource_element.attrib["id"], clone.tag, clone.attrib["id"], ReportItemSeverity.WARNING, ) ) return resource_element.attrib["id"] raise LibraryError(reports.resource_for_constraint_is_multiinstance( resource_element.attrib["id"], clone.tag, clone.attrib["id"], ReportItemSeverity.ERROR, #repair to clone is workaround for web ui, so we put only information #about one forceable possibility forceable=report_codes.FORCE_CONSTRAINT_MULTIINSTANCE_RESOURCE )) def prepare_options(attrib_names, options, create_id, validate_id): _validate_attrib_names(attrib_names+("id",), options) options = options.copy() if "id" not in options: options["id"] = create_id() else: validate_id(options["id"]) return options def export_with_set(element): return { "resource_sets": [ resource_set.export(resource_set_item) for resource_set_item in element.findall(".//resource_set") ], "options": export_attributes(element), } def export_plain(element): return {"options": export_attributes(element)} def create_id(cib, type_prefix, resource_set_list): id = "pcs_" +type_prefix +"".join([ "_set_"+"_".join(id_set) for id_set in resource_set.extract_id_set_list(resource_set_list) ]) return find_unique_id(cib, id) def have_duplicate_resource_sets(element, other_element): get_id_set_list = lambda element: [ resource_set.get_resource_id_set_list(resource_set_item) for resource_set_item in element.findall(".//resource_set") ] return get_id_set_list(element) == get_id_set_list(other_element) def check_is_without_duplication( report_processor, constraint_section, element, are_duplicate, export_element, duplication_alowed=False ): duplicate_element_list = [ duplicate_element for duplicate_element in constraint_section.findall(".//"+element.tag) if( element is not duplicate_element and are_duplicate(element, duplicate_element) ) ] if not duplicate_element_list: return report_processor.process(reports.duplicate_constraints_exist( element.tag, [ export_element(duplicate_element) for duplicate_element in duplicate_element_list ], ReportItemSeverity.WARNING if duplication_alowed else ReportItemSeverity.ERROR, forceable=None if duplication_alowed else report_codes.FORCE_CONSTRAINT_DUPLICATE, )) def create_with_set(constraint_section, tag_name, options, resource_set_list): if not resource_set_list: raise LibraryError(reports.empty_resource_set_list()) element = etree.SubElement(constraint_section, tag_name) element.attrib.update(options) for resource_set_item in resource_set_list: resource_set.create(element, resource_set_item) return element pcs-0.9.164/pcs/lib/cib/constraint/order.py000066400000000000000000000027371326265502500204400ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial from pcs.lib import reports from pcs.lib.cib.constraint import constraint from pcs.lib.cib.tools import check_new_id_applicable from pcs.lib.errors import LibraryError TAG_NAME = "rsc_order" DESCRIPTION = "constraint id" ATTRIB = { "symmetrical": ("true", 
"false"), "kind": ("Optional", "Mandatory", "Serialize"), } def prepare_options_with_set(cib, options, resource_set_list): options = constraint.prepare_options( tuple(ATTRIB.keys()), options, create_id=partial( constraint.create_id, cib, TAG_NAME, resource_set_list ), validate_id=partial(check_new_id_applicable, cib, DESCRIPTION), ) report_items = [] if "kind" in options: kind = options["kind"].lower().capitalize() if kind not in ATTRIB["kind"]: report_items.append(reports.invalid_option_value( "kind", options["kind"], ATTRIB["kind"] )) options["kind"] = kind if "symmetrical" in options: symmetrical = options["symmetrical"].lower() if symmetrical not in ATTRIB["symmetrical"]: report_items.append(reports.invalid_option_value( "symmetrical", options["symmetrical"], ATTRIB["symmetrical"] )) options["symmetrical"] = symmetrical if report_items: raise LibraryError(*report_items) return options pcs-0.9.164/pcs/lib/cib/constraint/resource_set.py000066400000000000000000000041261326265502500220210ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from pcs.lib import reports from pcs.lib.cib.tools import find_unique_id from pcs.lib.errors import LibraryError from pcs.lib.xml_tools import export_attributes ATTRIB = { "sequential": ("true", "false"), "require-all":("true", "false"), "action" : ("start", "promote", "demote", "stop"), "role" : ("Stopped", "Started", "Master", "Slave"), } def prepare_set(find_valid_id, resource_set): """return resource_set with corrected ids""" validate_options(resource_set["options"]) return { "ids": [find_valid_id(id) for id in resource_set["ids"]], "options": resource_set["options"] } def validate_options(options): #Pacemaker does not care currently about meaningfulness for concrete #constraint, so we use all attribs. 
for name, value in options.items(): if name not in ATTRIB: raise LibraryError( reports.invalid_options([name], list(ATTRIB.keys()), None) ) if value not in ATTRIB[name]: raise LibraryError( reports.invalid_option_value(name, value, ATTRIB[name]) ) def extract_id_set_list(resource_set_list): return [resource_set["ids"] for resource_set in resource_set_list] def create(parent, resource_set): """ parent -- lxml element to which the new resource_set is appended """ element = etree.SubElement(parent, "resource_set") element.attrib.update(resource_set["options"]) element.attrib["id"] = find_unique_id( parent.getroottree(), "pcs_rsc_set_{0}".format("_".join(resource_set["ids"])) ) for id in resource_set["ids"]: etree.SubElement(element, "resource_ref").attrib["id"] = id return element def get_resource_id_set_list(element): return [ resource_ref_element.attrib["id"] for resource_ref_element in element.findall(".//resource_ref") ] def export(element): return { "ids": get_resource_id_set_list(element), "options": export_attributes(element), } pcs-0.9.164/pcs/lib/cib/constraint/ticket.py000066400000000000000000000113631326265502500206030ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial from lxml import etree from pcs.lib import reports from pcs.lib.cib.constraint import constraint from pcs.lib.cib import tools from pcs.lib.errors import LibraryError from pcs.lib.xml_tools import remove_when_pointless TAG_NAME = 'rsc_ticket' DESCRIPTION = "constraint id" ATTRIB = { "loss-policy": ("fence", "stop", "freeze", "demote"), "ticket": None, } ATTRIB_PLAIN = { "rsc": None, "rsc-role": ("Stopped", "Started", "Master", "Slave"), } def _validate_options_common(options): report = [] if "loss-policy" in options: loss_policy = options["loss-policy"].lower() # compare the normalized (lowercased) value so the check is # case-insensitive, matching the normalized value stored below if loss_policy not in ATTRIB["loss-policy"]: report.append(reports.invalid_option_value( "loss-policy", options["loss-policy"], ATTRIB["loss-policy"] )) options["loss-policy"] = loss_policy return report def _create_id(cib, ticket, resource_id, resource_role): return tools.find_unique_id( cib, "-".join(('ticket', ticket, resource_id)) +("-{0}".format(resource_role) if resource_role else "") ) def prepare_options_with_set(cib, options, resource_set_list): options = constraint.prepare_options( tuple(ATTRIB.keys()), options, create_id=partial( constraint.create_id, cib, TAG_NAME, resource_set_list ), validate_id=partial(tools.check_new_id_applicable, cib, DESCRIPTION), ) report = _validate_options_common(options) if "ticket" not in options or not options["ticket"].strip(): report.append(reports.required_option_is_missing(['ticket'])) if report: raise LibraryError(*report) return options def prepare_options_plain(cib, options, ticket, resource_id): options = options.copy() report = _validate_options_common(options) if not ticket: report.append(reports.required_option_is_missing(['ticket'])) options["ticket"] = ticket if not resource_id: report.append(reports.required_option_is_missing(['rsc'])) options["rsc"] = resource_id if "rsc-role" in options: if options["rsc-role"]: resource_role = options["rsc-role"].lower().capitalize() if resource_role not in ATTRIB_PLAIN["rsc-role"]: report.append(reports.invalid_option_value( "rsc-role", options["rsc-role"], ATTRIB_PLAIN["rsc-role"] )) options["rsc-role"] = resource_role else: del options["rsc-role"] if report: raise LibraryError(*report) return constraint.prepare_options( tuple(list(ATTRIB) + list(ATTRIB_PLAIN)), options, partial( _create_id,
cib, options["ticket"], resource_id, options.get("rsc-role", "") ), partial(tools.check_new_id_applicable, cib, DESCRIPTION) ) def create_plain(constraint_section, options): element = etree.SubElement(constraint_section, TAG_NAME) element.attrib.update(options) return element def remove_plain(constraint_section, ticket_key, resource_id): ticket_element_list = constraint_section.xpath( './/rsc_ticket[@ticket="{0}" and @rsc="{1}"]' .format(ticket_key, resource_id) ) for ticket_element in ticket_element_list: ticket_element.getparent().remove(ticket_element) return len(ticket_element_list) > 0 def remove_with_resource_set(constraint_section, ticket_key, resource_id): ref_element_list = constraint_section.xpath( './/rsc_ticket[@ticket="{0}"]/resource_set/resource_ref[@id="{1}"]' .format(ticket_key, resource_id) ) for ref_element in ref_element_list: set_element = ref_element.getparent() set_element.remove(ref_element) if not len(set_element): ticket_element = set_element.getparent() ticket_element.remove(set_element) #We do not care about attributes since without an attribute "rsc" #they are pointless. Attribute "rsc" is mutually exclusive with #resource_set (see rng) so it cannot be in this ticket_element. remove_when_pointless(ticket_element, attribs_important=False) return len(ref_element_list) > 0 def are_duplicate_plain(element, other_element): return all( element.attrib.get(name, "") == other_element.attrib.get(name, "") for name in ("ticket", "rsc", "rsc-role") ) def are_duplicate_with_resource_set(element, other_element): return ( element.attrib["ticket"] == other_element.attrib["ticket"] and constraint.have_duplicate_resource_sets(element, other_element) ) pcs-0.9.164/pcs/lib/cib/fencing_topology.py000066400000000000000000000260321326265502500205000ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from pcs.common import report_codes from pcs.common.fencing_topology import ( TARGET_TYPE_NODE, TARGET_TYPE_REGEXP, TARGET_TYPE_ATTRIBUTE, ) from pcs.lib import reports from pcs.lib.cib.stonith import is_stonith_resource from pcs.lib.cib.tools import find_unique_id from pcs.lib.errors import ReportItemSeverity from pcs.lib.pacemaker.values import sanitize_id, validate_id def add_level( reporter, topology_el, resources_el, level, target_type, target_value, devices, cluster_status_nodes, force_device=False, force_node=False ): """ Validate and add a new fencing level. Raise LibraryError if not valid. 
object reporter -- report processor etree topology_el -- etree element to add the level to etree resources_el -- etree element with resources definitions int|string level -- level (index) of the new fencing level constant target_type -- the new fencing level target value type mixed target_value -- the new fencing level target value Iterable devices -- list of stonith devices for the new fencing level Iterable cluster_status_nodes -- list of status of existing cluster nodes bool force_device -- continue even if a stonith device does not exist bool force_node -- continue even if a node (target) does not exist """ valid_level = _validate_level(reporter, level) _validate_target( reporter, cluster_status_nodes, target_type, target_value, force_node ) _validate_devices(reporter, resources_el, devices, force_device) reporter.send() _validate_level_target_devices_does_not_exist( reporter, topology_el, level, target_type, target_value, devices ) reporter.send() _append_level_element( topology_el, valid_level, target_type, target_value, devices ) def remove_all_levels(topology_el): """ Remove all fencing levels. etree topology_el -- etree element to remove the levels from """ for level_el in topology_el.findall("fencing-level"): level_el.getparent().remove(level_el) def remove_levels_by_params( reporter, topology_el, level=None, target_type=None, target_value=None, devices=None, ignore_if_missing=False ): """ Remove specified fencing level(s). Raise LibraryError if not found. object reporter -- report processor etree topology_el -- etree element to remove the levels from int|string level -- level (index) of the fencing level to remove constant target_type -- the removed fencing level target value type mixed target_value -- the removed fencing level target value Iterable devices -- list of stonith devices of the removed fencing level bool ignore_if_missing -- when True, do not raise if level not found """ if target_type: _validate_target_typewise(reporter, target_type) reporter.send() level_el_list = _find_level_elements( topology_el, level, target_type, target_value, devices ) if not level_el_list: if ignore_if_missing: return reporter.process(reports.fencing_level_does_not_exist( level, target_type, target_value, devices )) for el in level_el_list: el.getparent().remove(el) def remove_device_from_all_levels(topology_el, device_id): """ Remove specified stonith device from all fencing levels. etree topology_el -- etree element with levels to remove the device from string device_id -- stonith device to remove """ for level_el in topology_el.findall("fencing-level"): new_devices = [ dev for dev in level_el.get("devices").split(",") if dev != device_id ] if new_devices: level_el.set("devices", ",".join(new_devices)) else: level_el.getparent().remove(level_el) def export(topology_el): """ Export all fencing levels. Return a list of levels where each level is a dict with keys: target_type, target_value. level and devices. Devices is a list of stonith device ids. 
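Illustrative item of the returned list (values assumed, not from the original source): {"target_type": TARGET_TYPE_NODE, "target_value": "node1", "level": "1", "devices": ["fence-ipmi", "fence-sbd"]}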
etree topology_el -- etree element to export """ export_levels = [] for level_el in topology_el.iterfind("fencing-level"): target_type = target_value = None if "target" in level_el.attrib: target_type = TARGET_TYPE_NODE target_value = level_el.get("target") elif "target-pattern" in level_el.attrib: target_type = TARGET_TYPE_REGEXP target_value = level_el.get("target-pattern") elif "target-attribute" in level_el.attrib: target_type = TARGET_TYPE_ATTRIBUTE target_value = ( level_el.get("target-attribute"), level_el.get("target-value") ) if target_type and target_value: export_levels.append({ "target_type": target_type, "target_value": target_value, "level": level_el.get("index"), "devices": level_el.get("devices").split(",") }) return export_levels def verify(reporter, topology_el, resources_el, cluster_status_nodes): """ Check if all cluster nodes and stonith devices used in fencing levels exist. All errors are stored into the passed reporter. Calling function is responsible for processing the report. object reporter -- report processor etree topology_el -- etree element with fencing levels to check etree resources_el -- etree element with resources definitions Iterable cluster_status_nodes -- list of status of existing cluster nodes """ used_nodes = set() used_devices = set() for level_el in topology_el.iterfind("fencing-level"): used_devices.update(level_el.get("devices").split(",")) if "target" in level_el.attrib: used_nodes.add(level_el.get("target")) if used_devices: _validate_devices( reporter, resources_el, sorted(used_devices), allow_force=False ) for node in sorted(used_nodes): _validate_target_valuewise( reporter, cluster_status_nodes, TARGET_TYPE_NODE, node, allow_force=False ) def _validate_level(reporter, level): try: candidate = int(level) if candidate > 0: return candidate except ValueError: pass reporter.append( reports.invalid_option_value("level", level, "a positive integer") ) def _validate_target( reporter, cluster_status_nodes, target_type, target_value, force_node=False ): _validate_target_typewise(reporter, target_type) _validate_target_valuewise( reporter, cluster_status_nodes, target_type, target_value, force_node ) def _validate_target_typewise(reporter, target_type): if target_type not in [ TARGET_TYPE_NODE, TARGET_TYPE_ATTRIBUTE, TARGET_TYPE_REGEXP ]: reporter.append(reports.invalid_option_type( "target", ["node", "regular expression", "attribute_name=value"] )) def _validate_target_valuewise( reporter, cluster_status_nodes, target_type, target_value, force_node=False, allow_force=True ): if target_type == TARGET_TYPE_NODE: node_found = False for node in cluster_status_nodes: if target_value == node.attrs.name: node_found = True break if not node_found: reporter.append( reports.node_not_found( target_value, severity=ReportItemSeverity.WARNING if force_node and allow_force else ReportItemSeverity.ERROR , forceable=None if force_node or not allow_force else report_codes.FORCE_NODE_DOES_NOT_EXIST ) ) def _validate_devices( reporter, resources_el, devices, force_device=False, allow_force=True ): if not devices: reporter.append( reports.required_option_is_missing(["stonith devices"]) ) invalid_devices = [] for dev in devices: errors = reporter.errors_count validate_id(dev, description="device id", reporter=reporter) if reporter.errors_count > errors: continue # TODO use the new finding function if not is_stonith_resource(resources_el, dev): invalid_devices.append(dev) if invalid_devices: reporter.append( reports.stonith_resources_do_not_exist( invalid_devices, 
ReportItemSeverity.WARNING if force_device and allow_force else ReportItemSeverity.ERROR , None if force_device or not allow_force else report_codes.FORCE_STONITH_RESOURCE_DOES_NOT_EXIST ) ) def _validate_level_target_devices_does_not_exist( reporter, tree, level, target_type, target_value, devices ): if _find_level_elements(tree, level, target_type, target_value, devices): reporter.append( reports.fencing_level_already_exists( level, target_type, target_value, devices ) ) def _append_level_element(tree, level, target_type, target_value, devices): level_el = etree.SubElement( tree, "fencing-level", index=str(level), devices=",".join(devices) ) if target_type == TARGET_TYPE_NODE: level_el.set("target", target_value) id_part = target_value elif target_type == TARGET_TYPE_REGEXP: level_el.set("target-pattern", target_value) id_part = target_value elif target_type == TARGET_TYPE_ATTRIBUTE: level_el.set("target-attribute", target_value[0]) level_el.set("target-value", target_value[1]) id_part = target_value[0] level_el.set( "id", find_unique_id(tree, sanitize_id("fl-{0}-{1}".format(id_part, level))) ) return level_el def _find_level_elements( tree, level=None, target_type=None, target_value=None, devices=None ): xpath_target = "" if target_type and target_value: if target_type == TARGET_TYPE_NODE: xpath_target = "@target='{0}'".format(target_value) elif target_type == TARGET_TYPE_REGEXP: xpath_target = "@target-pattern='{0}'".format(target_value) elif target_type == TARGET_TYPE_ATTRIBUTE: xpath_target = ( "@target-attribute='{0}' and @target-value='{1}'".format( target_value[0], target_value[1] ) ) xpath_devices = "" if devices: xpath_devices = "@devices='{0}'".format(",".join(devices)) xpath_level = "" if level: xpath_level = "@index='{0}'".format(level) xpath_attrs = " and ".join( filter(None, [xpath_level, xpath_devices, xpath_target]) ) if xpath_attrs: return tree.xpath("fencing-level[{0}]".format(xpath_attrs)) return tree.findall("fencing-level") pcs-0.9.164/pcs/lib/cib/node.py000066400000000000000000000060561326265502500160640ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from pcs.lib import reports from pcs.lib.cib.nvpair import update_nvset from pcs.lib.cib.tools import get_nodes, find_unique_id from pcs.lib.errors import LibraryError def update_node_instance_attrs(cib, node_name, attrs, state_nodes=None): """ Update nvpairs in instance_attributes for a node specified by its name. Automatically creates instance_attributes element if needed. If the node has more than one instance_attributes element, the first one is modified. If the node is missing in the CIB, it is automatically created if its state is provided in state_nodes. etree cib -- cib string node_name -- name of the node to be updated dict attrs -- attrs to update, e.g. {'A': 'a', 'B': ''} iterable state_nodes -- optional list of node state objects """ node_el = _ensure_node_exists(get_nodes(cib), node_name, state_nodes) # If no instance_attributes id is specified, crm_attribute modifies the # first one found. So we just mimic this behavior here. attrs_el = node_el.find("./instance_attributes") if attrs_el is None: attrs_el = etree.SubElement( node_el, "instance_attributes", id=find_unique_id(cib, "nodes-{0}".format(node_el.get("id"))) ) update_nvset(attrs_el, attrs) def _ensure_node_exists(tree, node_name, state_nodes=None): """ Make sure node with specified name exists in the tree. If the node doesn't exist, raise LibraryError. 
If state_nodes is provided and contains state of a node with the specified name, create the node in the tree. Return existing or created node element. etree tree -- node parent element string name -- node name iterable state_nodes -- optional list of node state objects """ node_el = _get_node_by_uname(tree, node_name) if node_el is None and state_nodes: for node_state in state_nodes: if node_state.attrs.name == node_name: node_el = _create_node( tree, node_state.attrs.id, node_state.attrs.name, node_state.attrs.type ) break if node_el is None: raise LibraryError(reports.node_not_found(node_name)) return node_el def _get_node_by_uname(tree, uname): """ Return a node element with specified uname in the tree or None if not found etree tree -- node parent element string uname -- node name """ return tree.find("./node[@uname='{0}']".format(uname)) def _create_node(tree, node_id, uname, node_type=None): """ Create new node element as a direct child of the tree element etree tree -- node parent element string node_id -- node id string uname -- node name string node_type -- optional node type (normal, member, ping, remote) """ node = etree.SubElement(tree, "node", id=node_id, uname=uname) if node_type: node.set("type", node_type) return node pcs-0.9.164/pcs/lib/cib/nvpair.py000066400000000000000000000123631326265502500164340ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from functools import partial from pcs.lib.cib.tools import create_subelement_id from pcs.lib.xml_tools import( get_sub_element, remove_when_pointless, ) def _append_new_nvpair(nvset_element, name, value, id_provider=None): """ Create nvpair with name and value as subelement of nvset_element. etree.Element nvset_element is context of new nvpair string name is name attribute of new nvpair string value is value attribute of new nvpair IdProvider id_provider -- elements' ids generator """ etree.SubElement( nvset_element, "nvpair", id=create_subelement_id(nvset_element, name, id_provider), name=name, value=value ) def set_nvpair_in_nvset(nvset_element, name, value): """ Update nvpair, create new if it doesn't yet exist or remove existing nvpair if value is empty. nvset_element -- element in which nvpair should be added/updated/removed name -- name of nvpair value -- value of nvpair """ nvpair = nvset_element.find("./nvpair[@name='{0}']".format(name)) if nvpair is None: if value: _append_new_nvpair(nvset_element, name, value) else: if value: nvpair.set("value", value) else: nvset_element.remove(nvpair) def arrange_first_nvset(tag_name, context_element, nvpair_dict, new_id=None): """ Put nvpairs to the first tag_name nvset in the context_element. If the nvset does not exist, it will be created. WARNING: does not solve multiple nvsets (with the same tag_name) in the context_element! Consider carefully if this is your use case. Probably not. There could be more than one nvset. This function is DEPRECATED. Try to use update_nvset etc. 
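Illustrative usage (names assumed, not from the original source): arrange_first_nvset("meta_attributes", resource_el, {"target-role": "Stopped"}) sets the nvpair in the first meta_attributes nvset of resource_el, creating the nvset if needed.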
string tag_name -- tag name of nvset element etree context_element -- parent element of nvset dict nvpair_dict -- dictionary of nvpairs """ if not nvpair_dict: return nvset_element = get_sub_element( context_element, tag_name, new_id if new_id else create_subelement_id(context_element, tag_name), new_index=0 ) update_nvset(nvset_element, nvpair_dict) def append_new_nvset(tag_name, context_element, nvpair_dict, id_provider=None): """ Append new nvset_element comprising nvpairs children (corresponding nvpair_dict) to the context_element string tag_name should be "instance_attributes" or "meta_attributes" etree.Element context_element is element where new nvset will be appended dict nvpair_dict contains source for nvpair children IdProvider id_provider -- elements' ids generator """ nvset_element = etree.SubElement(context_element, tag_name, { "id": create_subelement_id(context_element, tag_name, id_provider) }) for name, value in sorted(nvpair_dict.items()): _append_new_nvpair(nvset_element, name, value, id_provider) append_new_instance_attributes = partial( append_new_nvset, "instance_attributes" ) append_new_meta_attributes = partial( append_new_nvset, "meta_attributes" ) def update_nvset(nvset_element, nvpair_dict): """ Add, remove or update nvpairs in nvset_element according to nvpair_dict. If the resulting nvset is empty, it will be removed. etree nvset_element -- container where nvpairs are set dict nvpair_dict -- contains source for nvpair children """ for name, value in sorted(nvpair_dict.items()): set_nvpair_in_nvset(nvset_element, name, value) remove_when_pointless(nvset_element) def get_nvset(nvset): """ Returns nvset element as list of nvpairs with format: [ { "id": <id>, "name": <name>, "value": <value> }, ... ] nvset -- nvset element """ nvpair_list = [] for nvpair in nvset.findall("./nvpair"): nvpair_list.append({ "id": nvpair.get("id"), "name": nvpair.get("name"), "value": nvpair.get("value", "") }) return nvpair_list def get_value(tag_name, context_element, name, default=None): """ Return value from nvpair.
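Illustrative usage (names assumed, not from the original source): get_value("meta_attributes", resource_el, "target-role", "Started") returns the resource's target-role, or "Started" when the nvpair is absent or empty.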
WARNING: does not solve multiple nvsets (with the same tag_name) in the context_element nor multiple nvpair with the same name string tag_name should be "instance_attributes" or "meta_attributes" etree.Element context_element is searched element string name specify nvpair name """ value_list = context_element.xpath(""" ./{0} /nvpair[ @name="{1}" and string-length(@value) > 0 ] /@value """.format(tag_name, name)) return value_list[0] if value_list else default def has_meta_attribute(resource_el, name): """ Return if the element contains meta attribute 'name' etree.Element resource_el is researched element string name specifies attribute """ return 0 < len(resource_el.xpath( './meta_attributes/nvpair[@name="{0}"]'.format(name) )) arrange_first_meta_attributes = partial( arrange_first_nvset, "meta_attributes" ) get_meta_attribute_value = partial(get_value, "meta_attributes") pcs-0.9.164/pcs/lib/cib/resource/000077500000000000000000000000001326265502500164055ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/cib/resource/__init__.py000066400000000000000000000003461326265502500205210ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.lib.cib.resource import ( bundle, clone, common, group, guest_node, operations, primitive, remote_node, ) pcs-0.9.164/pcs/lib/cib/resource/bundle.py000066400000000000000000000440201326265502500202300ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from pcs.common import report_codes from pcs.lib import reports, validate from pcs.lib.cib.nvpair import ( append_new_meta_attributes, arrange_first_meta_attributes, ) from pcs.lib.cib.resource.primitive import TAG as TAG_PRIMITIVE from pcs.lib.cib.tools import find_element_by_tag_and_id from pcs.lib.errors import ( LibraryError, ReportListAnalyzer, ) from pcs.lib.pacemaker.values import sanitize_id from pcs.lib.xml_tools import ( get_sub_element, update_attributes_remove_empty, remove_when_pointless, ) TAG = "bundle" _docker_options = set(( "image", "masters", "network", "options", "run-command", "replicas", "replicas-per-host", )) _network_options = set(( "control-port", "host-interface", "host-netmask", "ip-range-start", )) def is_bundle(resource_el): return resource_el.tag == TAG def validate_new( id_provider, bundle_id, container_type, container_options, network_options, port_map, storage_map, force_options=False ): """ Validate new bundle parameters, return list of report items IdProvider id_provider -- elements' ids generator and uniqueness checker string bundle_id -- id of the bundle string container_type -- bundle container type dict container_options -- container options dict network_options -- network options list of dict port_map -- list of port mapping options list of dict storage_map -- list of storage mapping options bool force_options -- return warnings instead of forceable errors """ report_list = [] report_list.extend( validate.run_collection_of_option_validators( {"id": bundle_id}, [ # with id_provider it validates that the id is available as well validate.value_id("id", "bundle name", id_provider), ] ) ) aux_reports = _validate_container_type(container_type) report_list.extend(aux_reports) if not ReportListAnalyzer(aux_reports).error_list: report_list.extend( # TODO call the proper function once more container_types are # supported by pacemaker _validate_container_docker_options_new( container_options, force_options ) ) report_list.extend( 
_validate_network_options_new(network_options, force_options) ) report_list.extend( _validate_port_map_list(port_map, id_provider, force_options) ) report_list.extend( _validate_storage_map_list(storage_map, id_provider, force_options) ) return report_list def append_new( parent_element, id_provider, bundle_id, container_type, container_options, network_options, port_map, storage_map, meta_attributes ): """ Create new bundle and add it to the CIB etree parent_element -- the bundle will be appended to this element IdProvider id_provider -- elements' ids generator string bundle_id -- id of the bundle string container_type -- bundle container type dict container_options -- container options dict network_options -- network options list of dict port_map -- list of port mapping options list of dict storage_map -- list of storage mapping options dict meta_attributes -- meta attributes """ bundle_element = etree.SubElement(parent_element, TAG, {"id": bundle_id}) # TODO create the proper element once more container_types are supported # by pacemaker docker_element = etree.SubElement(bundle_element, "docker") # Do not add options with empty values. When updating, an empty value means # remove the option. update_attributes_remove_empty(docker_element, container_options) if network_options or port_map: network_element = etree.SubElement(bundle_element, "network") # Do not add options with empty values. When updating, an empty value # means remove the option. update_attributes_remove_empty(network_element, network_options) for port_map_options in port_map: _append_port_map( network_element, id_provider, bundle_id, port_map_options ) if storage_map: storage_element = etree.SubElement(bundle_element, "storage") for storage_map_options in storage_map: _append_storage_map( storage_element, id_provider, bundle_id, storage_map_options ) if meta_attributes: append_new_meta_attributes(bundle_element, meta_attributes, id_provider) return bundle_element def validate_update( id_provider, bundle_el, container_options, network_options, port_map_add, port_map_remove, storage_map_add, storage_map_remove, force_options=False ): """ Validate modifying an existing bundle, return list of report items IdProvider id_provider -- elements' ids generator and uniqueness checker etree bundle_el -- the bundle to be updated dict container_options -- container options to modify dict network_options -- network options to modify list of dict port_map_add -- list of port mapping options to add list of string port_map_remove -- list of port mapping ids to remove list of dict storage_map_add -- list of storage mapping options to add list of string storage_map_remove -- list of storage mapping ids to remove bool force_options -- return warnings instead of forceable errors """ report_list = [] container_el = _get_container_element(bundle_el) if container_el.tag == "docker": # TODO call the proper function once more container types are # supported by pacemaker report_list.extend( _validate_container_docker_options_update( container_el, container_options, force_options ) ) network_el = bundle_el.find("network") if network_el is None: report_list.extend( _validate_network_options_new(network_options, force_options) ) else: report_list.extend( _validate_network_options_update( network_el, network_options, force_options ) ) # TODO It will probably be needed to split the following validators to # create and update variants. It should be done once the need exists and # not sooner. 
report_list.extend( _validate_port_map_list(port_map_add, id_provider, force_options) ) report_list.extend( _validate_storage_map_list(storage_map_add, id_provider, force_options) ) report_list.extend( _validate_map_ids_exist( bundle_el, "port-mapping", "port-map", port_map_remove ) ) report_list.extend( _validate_map_ids_exist( bundle_el, "storage-mapping", "storage-map", storage_map_remove ) ) return report_list def update( id_provider, bundle_el, container_options, network_options, port_map_add, port_map_remove, storage_map_add, storage_map_remove, meta_attributes ): """ Modify an existing bundle (does not touch encapsulated resources) IdProvider id_provider -- elements' ids generator and uniqueness checker etree bundle_el -- the bundle to be updated dict container_options -- container options to modify dict network_options -- network options to modify list of dict port_map_add -- list of port mapping options to add list of string port_map_remove -- list of port mapping ids to remove list of dict storage_map_add -- list of storage mapping options to add list of string storage_map_remove -- list of storage mapping ids to remove dict meta_attributes -- meta attributes to update """ bundle_id = bundle_el.get("id") update_attributes_remove_empty( _get_container_element(bundle_el), container_options ) network_element = get_sub_element(bundle_el, "network") if network_options: update_attributes_remove_empty(network_element, network_options) # It's crucial to remove port maps prior to appending new ones: If we are # adding a port map which in any way conflicts with another one and that # another one is being removed in the very same command, the removal must # be done first, otherwise the conflict would manifest itself (and then # possibly the old mapping would be removed) if port_map_remove: _remove_map_elements( network_element.findall("port-mapping"), port_map_remove ) for port_map_options in port_map_add: _append_port_map( network_element, id_provider, bundle_id, port_map_options ) storage_element = get_sub_element(bundle_el, "storage") # See the comment above about removing port maps prior to adding new ones. 
if storage_map_remove: _remove_map_elements( storage_element.findall("storage-mapping"), storage_map_remove ) for storage_map_options in storage_map_add: _append_storage_map( storage_element, id_provider, bundle_id, storage_map_options ) if meta_attributes: arrange_first_meta_attributes(bundle_el, meta_attributes) # remove empty elements with no attributes # meta attributes are handled in their own function remove_when_pointless(network_element) remove_when_pointless(storage_element) def add_resource(bundle_element, primitive_element): """ Add an existing resource to an existing bundle etree bundle_element -- where to add the resource to etree primitive_element -- the resource to be added to the bundle """ # TODO possibly split to 'validate' and 'do' functions # a bundle may currently contain at most one primitive resource inner_primitive = bundle_element.find(TAG_PRIMITIVE) if inner_primitive is not None: raise LibraryError(reports.resource_bundle_already_contains_a_resource( bundle_element.get("id"), inner_primitive.get("id") )) bundle_element.append(primitive_element) def get_inner_resource(bundle_el): resources = bundle_el.xpath("./primitive") if resources: return resources[0] return None def _validate_container_type(container_type): return validate.value_in("type", ("docker", ), "container type")({ "type": container_type, }) def _validate_container_docker_options_new(options, force_options): validators = [ validate.is_required("image", "container"), validate.value_not_empty("image", "image name"), validate.value_nonnegative_integer("masters"), validate.value_positive_integer("replicas"), validate.value_positive_integer("replicas-per-host"), ] return ( validate.run_collection_of_option_validators(options, validators) + validate.names_in( _docker_options, options.keys(), "container", report_codes.FORCE_OPTIONS, force_options ) ) def _validate_container_docker_options_update( docker_el, options, force_options ): validators = [ # image is a mandatory attribute and cannot be removed validate.value_not_empty("image", "image name"), validate.value_empty_or_valid( "masters", validate.value_nonnegative_integer("masters") ), validate.value_empty_or_valid( "replicas", validate.value_positive_integer("replicas") ), validate.value_empty_or_valid( "replicas-per-host", validate.value_positive_integer("replicas-per-host") ), ] return ( validate.run_collection_of_option_validators(options, validators) + validate.names_in( # allow to remove options even if they are not allowed _docker_options | _options_to_remove(options), options.keys(), "container", report_codes.FORCE_OPTIONS, force_options ) ) def _validate_network_options_new(options, force_options): validators = [ # TODO add validators for other keys (ip-range-start - IPv4) validate.value_port_number("control-port"), _value_host_netmask("host-netmask", force_options), ] return ( validate.run_collection_of_option_validators(options, validators) + validate.names_in( _network_options, options.keys(), "network", report_codes.FORCE_OPTIONS, force_options ) ) def _validate_network_options_update(network_el, options, force_options): validators = [ # TODO add validators for other keys (ip-range-start - IPv4) validate.value_empty_or_valid( "control-port", validate.value_port_number("control-port"), ), validate.value_empty_or_valid( "host-netmask", _value_host_netmask("host-netmask", force_options), ), ] return ( validate.run_collection_of_option_validators(options, validators) + validate.names_in( # allow to remove options even if they are not allowed 
_network_options | _options_to_remove(options), options.keys(), "network", report_codes.FORCE_OPTIONS, force_options ) ) def _validate_port_map_list(options_list, id_provider, force_options): allowed_options = [ "id", "port", "internal-port", "range", ] validators = [ validate.value_id("id", "port-map id", id_provider), validate.depends_on_option( "internal-port", "port", "port-map", "port-map" ), validate.is_required_some_of(["port", "range"], "port-map"), validate.mutually_exclusive(["port", "range"], "port-map"), validate.value_port_number("port"), validate.value_port_number("internal-port"), validate.value_port_range( "range", code_to_allow_extra_values=report_codes.FORCE_OPTIONS, allow_extra_values=force_options ), ] report_list = [] for options in options_list: report_list.extend( validate.run_collection_of_option_validators(options, validators) + validate.names_in( allowed_options, options.keys(), "port-map", report_codes.FORCE_OPTIONS, force_options ) ) return report_list def _validate_storage_map_list(options_list, id_provider, force_options): allowed_options = [ "id", "options", "source-dir", "source-dir-root", "target-dir", ] source_dir_options = ["source-dir", "source-dir-root"] validators = [ validate.value_id("id", "storage-map id", id_provider), validate.is_required_some_of(source_dir_options, "storage-map"), validate.mutually_exclusive(source_dir_options, "storage-map"), validate.is_required("target-dir", "storage-map"), ] report_list = [] for options in options_list: report_list.extend( validate.run_collection_of_option_validators(options, validators) + validate.names_in( allowed_options, options.keys(), "storage-map", report_codes.FORCE_OPTIONS, force_options ) ) return report_list def _validate_map_ids_exist(bundle_el, map_type, map_label, id_list): report_list = [] for id in id_list: try: find_element_by_tag_and_id( map_type, bundle_el, id, id_types=[map_label] ) except LibraryError as e: report_list.extend(e.args) return report_list def _value_host_netmask(option_name, force_options): return validate.value_cond( option_name, lambda value: validate.is_integer(value, 1, 32), "a number of bits of the mask (1-32)", # Leaving a possibility to force this validation, if pacemaker # starts supporting IPv6 or other format of the netmask code_to_allow_extra_values=report_codes.FORCE_OPTIONS, allow_extra_values=force_options ) def _append_port_map(parent_element, id_provider, id_base, port_map_options): if "id" not in port_map_options: id_suffix = None if "port" in port_map_options: id_suffix = port_map_options["port"] elif "range" in port_map_options: id_suffix = port_map_options["range"] if id_suffix: port_map_options["id"] = id_provider.allocate_id( sanitize_id("{0}-port-map-{1}".format(id_base, id_suffix)) ) port_map_element = etree.SubElement(parent_element, "port-mapping") # Do not add options with empty values. When updating, an empty value means # remove the option. update_attributes_remove_empty(port_map_element, port_map_options) return port_map_element def _append_storage_map( parent_element, id_provider, id_base, storage_map_options ): if "id" not in storage_map_options: storage_map_options["id"] = id_provider.allocate_id( # use just numbers to keep the ids reasonably short "{0}-storage-map".format(id_base) ) storage_map_element = etree.SubElement(parent_element, "storage-mapping") # Do not add options with empty values. When updating, an empty value means # remove the option. 
update_attributes_remove_empty(storage_map_element, storage_map_options) return storage_map_element def _get_container_element(bundle_el): # TODO get different types of container once supported by pacemaker return bundle_el.find("docker") def _remove_map_elements(element_list, id_to_remove_list): for el in element_list: if el.get("id", "") in id_to_remove_list: el.getparent().remove(el) def _options_to_remove(options): return set([ name for name, value in options.items() if validate.is_empty_string(value) ]) pcs-0.9.164/pcs/lib/cib/resource/clone.py000066400000000000000000000040501326265502500200560ustar00rootroot00000000000000""" Module for stuff related to clones. Multi-state resources are a specialization of clone resources, so this module also includes code related to masters. """ from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from pcs.lib.cib.nvpair import append_new_meta_attributes from pcs.lib.cib.tools import find_unique_id TAG_CLONE = "clone" TAG_MASTER = "master" ALL_TAGS = [TAG_CLONE, TAG_MASTER] def is_clone(resource_el): return resource_el.tag == TAG_CLONE def is_master(resource_el): return resource_el.tag == TAG_MASTER def is_any_clone(resource_el): return resource_el.tag in ALL_TAGS def create_id(clone_tag, primitive_element): """ Create id for clone element based on contained primitive_element. string clone_tag is tag of clone element. "master" is a specialization of "clone" and this function is common to both - "clone" and "master". etree.Element primitive_element is resource which will be cloned. It must be connected into the cib to ensure that the resulting id is unique! """ return find_unique_id( primitive_element, "{0}-{1}".format(primitive_element.get("id"), clone_tag) ) def append_new(clone_tag, resources_section, primitive_element, options): """ Append a new clone element (containing the primitive_element) to the resources_section. string clone_tag is tag of clone element. Expected values are "clone" and "master". etree.Element resources_section is the place where the new clone will be appended. etree.Element primitive_element is resource which will be cloned.
    dict options is the source for clone meta options
    """
    clone_element = etree.SubElement(
        resources_section,
        clone_tag,
        id=create_id(clone_tag, primitive_element),
    )
    clone_element.append(primitive_element)
    if options:
        append_new_meta_attributes(clone_element, options)
    return clone_element


def get_inner_resource(clone_el):
    return clone_el.xpath("./primitive | ./group")[0]

pcs-0.9.164/pcs/lib/cib/resource/common.py000066400000000000000000000170111326265502500202470ustar00rootroot00000000000000
from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.lib.cib import nvpair
from pcs.lib.cib.resource.bundle import (
    is_bundle,
    get_inner_resource as get_bundle_inner_resource,
)
from pcs.lib.cib.resource.clone import (
    is_any_clone,
    get_inner_resource as get_clone_inner_resource,
)
from pcs.lib.cib.resource.group import (
    is_group,
    get_inner_resources as get_group_inner_resources,
)
from pcs.lib.cib.resource.primitive import is_primitive
from pcs.lib.xml_tools import find_parent


def are_meta_disabled(meta_attributes):
    return meta_attributes.get("target-role", "Started").lower() == "stopped"


def _can_be_evaluated_as_positive_num(value):
    string_wo_leading_zeros = str(value).lstrip("0")
    return string_wo_leading_zeros and string_wo_leading_zeros[0].isdigit()


def is_clone_deactivated_by_meta(meta_attributes):
    return are_meta_disabled(meta_attributes) or any([
        not _can_be_evaluated_as_positive_num(meta_attributes.get(key, "1"))
        for key in ["clone-max", "clone-node-max"]
    ])


def find_primitives(resource_el):
    """
    Get a list of primitives contained in a given resource

    etree resource_el -- resource element
    """
    if is_bundle(resource_el):
        in_bundle = get_bundle_inner_resource(resource_el)
        return [in_bundle] if in_bundle is not None else []
    if is_any_clone(resource_el):
        resource_el = get_clone_inner_resource(resource_el)
    if is_group(resource_el):
        return get_group_inner_resources(resource_el)
    if is_primitive(resource_el):
        return [resource_el]
    return []


def find_resources_to_enable(resource_el):
    """
    Get resources to enable in order to enable the specified resource
    successfully

    etree resource_el -- resource element
    """
    if is_bundle(resource_el):
        to_enable = [resource_el]
        in_bundle = get_bundle_inner_resource(resource_el)
        if in_bundle is not None:
            to_enable.append(in_bundle)
        return to_enable

    if is_any_clone(resource_el):
        return [resource_el, get_clone_inner_resource(resource_el)]

    to_enable = [resource_el]
    parent = resource_el.getparent()
    if is_any_clone(parent) or is_bundle(parent):
        to_enable.append(parent)
    return to_enable


def enable(resource_el):
    """
    Enable the specified resource

    etree resource_el -- resource element
    """
    nvpair.arrange_first_nvset(
        "meta_attributes",
        resource_el,
        {
            "target-role": "",
        }
    )


def disable(resource_el):
    """
    Disable the specified resource

    etree resource_el -- resource element
    """
    nvpair.arrange_first_nvset(
        "meta_attributes",
        resource_el,
        {
            "target-role": "Stopped",
        }
    )


def find_resources_to_manage(resource_el):
    """
    Get resources to manage in order to manage the specified resource
    successfully

    etree resource_el -- resource element
    """
    # If the resource_el is a primitive in a group, we set both the group and
    # the primitive to managed mode. Otherwise the resource_el, all its
    # children and parents need to be set to managed mode. We do it to make
    # sure to remove the unmanaged flag from the whole tree. The flag could be
    # put there manually. If we didn't do it, the resource may stay unmanaged,
    # as a managed primitive in an unmanaged clone / group is still unmanaged
    # and vice versa.
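    # An illustrative example (hypothetical ids): for a primitive "d1" living
    # in a group "g1" which is wrapped in a clone "g1-clone", the xpath below
    # matches both the clone (it contains group/primitive with id "d1") and
    # the group, so the result is [d1, g1-clone, g1].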
    res_id = resource_el.attrib["id"]
    return (
        [resource_el] # the resource itself
        +
        # its parents
        find_parent(resource_el, "resources").xpath(
            # a master or a clone which contains a group, a primitive, or a
            # grouped primitive with the specified id
            # OR
            # a group (in a clone, master, etc. - hence //) which contains a
            # primitive with the specified id
            # OR
            # a bundle which contains a primitive with the specified id
            """
                (./master|./clone)[(group|group/primitive|primitive)[@id='{r}']]
                |
                //group[primitive[@id='{r}']]
                |
                ./bundle[primitive[@id='{r}']]
            """
            .format(r=res_id)
        )
        +
        # its children
        resource_el.xpath("(./group|./primitive|./group/primitive)")
    )


def find_resources_to_unmanage(resource_el):
    """
    Get resources to unmanage in order to unmanage the specified resource
    successfully

    etree resource_el -- resource element
    """
    # resource hierarchy - specified resource - what to return
    # a primitive - the primitive - the primitive
    #
    # a cloned primitive - the primitive - the primitive
    # a cloned primitive - the clone - the primitive
    #   The resource will run on all nodes after unclone. However that doesn't
    #   seem to be bad behavior. Moreover, if monitor operations were
    #   disabled, they wouldn't be enabled on unclone, but the resource would
    #   become managed, which is definitely bad.
    #
    # a primitive in a group - the primitive - the primitive
    #   Otherwise all primitives in the group would become unmanaged.
    # a primitive in a group - the group - all primitives in the group
    #   If only the group was set to unmanaged, setting any primitive in the
    #   group to managed would set all the primitives in the group to managed.
    #   If the group as well as all its primitives were set to unmanaged, any
    #   primitive added to the group would become unmanaged. This new
    #   primitive would become managed if any original group primitive becomes
    #   managed. Therefore changing one primitive influences another one,
    #   which we do not want to happen.
    #
    # a primitive in a cloned group - the primitive - the primitive
    # a primitive in a cloned group - the group - all primitives in the group
    #   See group notes above
    # a primitive in a cloned group - the clone - all primitives in the group
    #   See clone notes above
    #
    # a bundled primitive - the primitive - the primitive
    # a bundled primitive - the bundle - the bundle and the primitive
    #   We need to unmanage implicit resources created by pacemaker and there
    #   is no other way to do it than to unmanage the bundle itself.
    #   Since it is not possible to unbundle a resource, the concerns
    #   described at unclone don't apply here. However to prevent future bugs,
    #   in case unbundling becomes possible, we unmanage the primitive as
    #   well.
    # an empty bundle - the bundle - the bundle
    #   There is nothing else to unmanage.
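    # An illustrative example based on the table above (hypothetical ids):
    # called with a clone wrapping group "g1", this returns all primitives of
    # "g1"; called with bundle "b1" containing primitive "d1", it returns
    # [b1, d1].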
    if is_bundle(resource_el):
        in_bundle = get_bundle_inner_resource(resource_el)
        return (
            [resource_el, in_bundle] if in_bundle is not None
            else [resource_el]
        )
    if is_any_clone(resource_el):
        resource_el = get_clone_inner_resource(resource_el)
    if is_group(resource_el):
        return get_group_inner_resources(resource_el)
    if is_primitive(resource_el):
        return [resource_el]
    return []


def manage(resource_el):
    """
    Set the resource to be managed by the cluster

    etree resource_el -- resource element
    """
    nvpair.arrange_first_nvset(
        "meta_attributes",
        resource_el,
        {
            "is-managed": "",
        }
    )


def unmanage(resource_el):
    """
    Set the resource not to be managed by the cluster

    etree resource_el -- resource element
    """
    nvpair.arrange_first_nvset(
        "meta_attributes",
        resource_el,
        {
            "is-managed": "false",
        }
    )

pcs-0.9.164/pcs/lib/cib/resource/group.py000066400000000000000000000050621326265502500201160ustar00rootroot00000000000000
from __future__ import (
    absolute_import,
    division,
    print_function,
)

from lxml import etree

from pcs.lib import reports
from pcs.lib.cib.tools import find_element_by_tag_and_id
from pcs.lib.errors import LibraryError


TAG = "group"


def is_group(resource_el):
    return resource_el.tag == TAG


def provide_group(resources_section, group_id):
    """
    Provide the group with id=group_id. Create a new group if a group with
    id=group_id does not exist.

    etree.Element resources_section is the place where a new group will be
    appended
    string group_id is the id of the group
    """
    group_element = find_element_by_tag_and_id(
        TAG,
        resources_section,
        group_id,
        none_if_id_unused=True
    )
    if group_element is None:
        group_element = etree.SubElement(resources_section, TAG, id=group_id)
    return group_element


def place_resource(
    group_element, primitive_element,
    adjacent_resource_id=None, put_after_adjacent=False
):
    """
    Add a resource to a group. This function can also be used to modify the
    position of a resource, because the primitive element is re-planted from
    anywhere (including the group itself) to a concrete place inside the
    group.

    etree.Element group_element is the element where to put primitive_element
    etree.Element primitive_element is the element to be placed
    string adjacent_resource_id is the id of an existing resource in the
        group. primitive_element will be put beside adjacent_resource_id if
        specified.
    bool put_after_adjacent is a flag saying where to put primitive_element:
        before adjacent_resource_id if put_after_adjacent=False
        after adjacent_resource_id if put_after_adjacent=True
        Note that it makes sense only if adjacent_resource_id is specified
    """
    if primitive_element.attrib["id"] == adjacent_resource_id:
        raise LibraryError(reports.resource_cannot_be_next_to_itself_in_group(
            adjacent_resource_id,
            group_element.attrib["id"],
        ))

    if not adjacent_resource_id:
        return group_element.append(primitive_element)

    adjacent_resource = find_element_by_tag_and_id(
        "primitive",
        group_element,
        adjacent_resource_id,
    )

    if put_after_adjacent and adjacent_resource.getnext() is None:
        return group_element.append(primitive_element)

    index = group_element.index(
        adjacent_resource.getnext() if put_after_adjacent
        else adjacent_resource
    )
    group_element.insert(index, primitive_element)


def get_inner_resources(group_el):
    return group_el.xpath("./primitive")

pcs-0.9.164/pcs/lib/cib/resource/guest_node.py000066400000000000000000000146131326265502500211200ustar00rootroot00000000000000
from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.lib import reports, validate
from pcs.lib.cib.tools import does_id_exist
from pcs.lib.cib.nvpair import(
    has_meta_attribute,
    arrange_first_meta_attributes,
    get_meta_attribute_value,
)
from pcs.lib.node import (
    NodeAddresses,
    node_addresses_contain_host,
    node_addresses_contain_name,
)
from pcs.lib.xml_tools import remove_when_pointless

#TODO pcs currently does not care about multiple meta_attributes and here
#we don't care as well
GUEST_OPTIONS = [
    'remote-port',
    'remote-addr',
    'remote-connect-timeout',
]


def validate_conflicts(tree, nodes, node_name, options):
    report_list = []
    if(
        does_id_exist(tree, node_name)
        or
        node_addresses_contain_name(nodes, node_name)
        or (
            "remote-addr" not in options
            and
            node_addresses_contain_host(nodes, node_name)
        )
    ):
        report_list.append(reports.id_already_exists(node_name))

    if(
        "remote-addr" in options
        and
        node_addresses_contain_host(nodes, options["remote-addr"])
    ):
        report_list.append(reports.id_already_exists(options["remote-addr"]))

    return report_list


def is_node_name_in_options(options):
    return "remote-node" in options


def get_guest_option_value(options, default=None):
    return options.get("remote-node", default)


def validate_set_as_guest(tree, nodes, node_name, options):
    report_list = validate.names_in(
        GUEST_OPTIONS,
        options.keys(),
        "guest",
    )

    validator_list = [
        validate.value_time_interval("remote-connect-timeout"),
        validate.value_port_number("remote-port"),
    ]

    report_list.extend(
        validate.run_collection_of_option_validators(options, validator_list)
    )

    report_list.extend(
        validate_conflicts(tree, nodes, node_name, options)
    )

    if not node_name.strip():
        report_list.append(
            reports.invalid_option_value(
                "node name",
                node_name,
                "no empty value",
            )
        )

    return report_list


def is_guest_node(resource_element):
    """
    Return True if resource_element is already set as a guest node.

    etree.Element resource_element is the element to be checked
    """
    return has_meta_attribute(resource_element, "remote-node")


def validate_is_not_guest(resource_element):
    """
    etree.Element resource_element
    """
    if not is_guest_node(resource_element):
        return []

    return [
        reports.resource_is_guest_node_already(
            resource_element.attrib["id"]
        )
    ]


def set_as_guest(
    resource_element, node, addr=None, port=None, connect_timeout=None
):
    """
    Set the resource as a guest node.
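
    A hypothetical usage sketch (values are illustrative only):
        set_as_guest(resource_el, "guest1", addr="192.0.2.10", port="3121")
    arranges the meta attributes remote-node=guest1, remote-addr=192.0.2.10
    and remote-port=3121 on the resource element.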
    etree.Element resource_element
    """
    meta_options = {"remote-node": str(node)}
    if addr:
        meta_options["remote-addr"] = str(addr)
    if port:
        meta_options["remote-port"] = str(port)
    if connect_timeout:
        meta_options["remote-connect-timeout"] = str(connect_timeout)

    arrange_first_meta_attributes(resource_element, meta_options)


def unset_guest(resource_element):
    """
    Unset the resource as a guest node.

    etree.Element resource_element
    """
    guest_nvpair_list = resource_element.xpath(
        "./meta_attributes/nvpair[{0}]".format(
            " or ".join([
                '@name="{0}"'.format(option)
                for option in (GUEST_OPTIONS + ["remote-node"])
            ])
        )
    )
    for nvpair in guest_nvpair_list:
        meta_attributes = nvpair.getparent()
        meta_attributes.remove(nvpair)
        remove_when_pointless(meta_attributes)


def get_node(meta_attributes):
    """
    Return a NodeAddresses object corresponding to the guest node defined in
    meta_attributes. Return None if meta_attributes does not define a guest
    node.

    etree.Element meta_attributes is the element to be searched
    """
    host = None
    name = None
    for nvpair in meta_attributes:
        if nvpair.attrib.get("name", "") == "remote-addr":
            host = nvpair.attrib["value"]
        if nvpair.attrib.get("name", "") == "remote-node":
            name = nvpair.attrib["value"]
            if host is None:
                host = name
    return NodeAddresses(host, name=name) if name else None


def get_host_from_options(node_name, meta_options):
    """
    Return the host from meta options, falling back to node_name.

    dict meta_options
    """
    return meta_options.get("remote-addr", node_name)


def get_node_name_from_options(meta_options, default=None):
    """
    Return the node name from meta options.

    dict meta_options
    """
    return meta_options.get("remote-node", default)


def get_host(resource_element):
    host = get_meta_attribute_value(resource_element, "remote-addr")
    if host:
        return host
    return get_meta_attribute_value(resource_element, "remote-node")


def find_node_list(resources_section):
    """
    Return a list of nodes from resources_section

    etree.Element resources_section is the element to be searched
    """
    return [
        get_node(meta_attrs) for meta_attrs in resources_section.xpath("""
            .//primitive
                /meta_attributes[
                    nvpair[
                        @name="remote-node"
                        and
                        string-length(@value) > 0
                    ]
                ]
        """)
    ]


def find_node_resources(resources_section, node_identifier):
    """
    Return a list of etree.Element primitives that are guest nodes.
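    For example (illustrative): a node name matches primitives whose
    "remote-node" meta attribute equals that name, an address matches the
    "remote-addr" meta attribute, and a resource id matches a guest node
    primitive with that id.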
etree.Element resources_section is a researched element string node_identifier could be id of resource, node name or node address """ resources = resources_section.xpath(""" .//primitive[ ( @id="{0}" and meta_attributes[ nvpair[ @name="remote-node" and string-length(@value) > 0 ] ] ) or meta_attributes[ nvpair[ ( @name="remote-addr" or @name="remote-node" ) and @value="{0}" ] ] ] """.format(node_identifier)) return resources pcs-0.9.164/pcs/lib/cib/resource/operations.py000066400000000000000000000261361326265502500211520ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from collections import defaultdict from lxml import etree from pcs.common import report_codes from pcs.lib import reports, validate from pcs.lib.resource_agent import get_default_interval, complete_all_intervals from pcs.lib.cib.nvpair import append_new_instance_attributes from pcs.lib.cib.tools import ( create_subelement_id, does_id_exist, ) from pcs.lib.errors import LibraryError from pcs.lib.pacemaker.values import ( is_true, timeout_to_seconds, ) OPERATION_NVPAIR_ATTRIBUTES = [ "OCF_CHECK_LEVEL", ] ATTRIBUTES = [ "id", "description", "enabled", "interval", "interval-origin", "name", "on-fail", "record-pending", "requires", "role", "start-delay", "timeout", "OCF_CHECK_LEVEL", ] ROLE_VALUES = [ "Stopped", "Started", "Slave", "Master", ] REQUIRES_VALUES = [ "nothing", "quorum", "fencing", "unfencing", ] ON_FAIL_VALUES = [ "ignore", "block", "stop", "restart", "standby", "fence", "restart-container", ] BOOLEAN_VALUES = [ "0", "1", "true", "false", ] #normalize(key, value) -> normalized_value normalize = validate.option_value_normalization({ "role": lambda value: value.lower().capitalize(), "requires": lambda value: value.lower(), "on-fail": lambda value: value.lower(), "record-pending": lambda value: value.lower(), "enabled": lambda value: value.lower(), }) def prepare( report_processor, raw_operation_list, default_operation_list, allowed_operation_name_list, allow_invalid=False ): """ Return operation_list prepared from raw_operation_list and default_operation_list. 
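    The steps are roughly: normalize the entered values (e.g. role "master"
    becomes "Master"), validate the operations, check that intervals of the
    same operation differ, complete missing intervals and finally append the
    default operations the user did not override.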
report_processor is tool for warning/info/error reporting list of dicts raw_operation_list are entered operations that require follow-up care list of dicts default_operation_list are operations defined as default by (most probably) resource agent bool allow_invalid is flag for validation skipping """ operations_to_validate = operations_to_normalized(raw_operation_list) report_list = [] report_list.extend( validate_operation_list( operations_to_validate, allowed_operation_name_list, allow_invalid ) ) operation_list = normalized_to_operations(operations_to_validate) report_list.extend(validate_different_intervals(operation_list)) #can raise LibraryError report_processor.process_list(report_list) return complete_all_intervals(operation_list) + get_remaining_defaults( report_processor, operation_list, default_operation_list ) def operations_to_normalized(raw_operation_list): return [ validate.values_to_pairs(op, normalize) for op in raw_operation_list ] def normalized_to_operations(normalized_pairs): return [ validate.pairs_to_values(op) for op in normalized_pairs ] def validate_operation_list( operation_list, allowed_operation_name_list, allow_invalid=False ): options_validators = [ validate.is_required("name", "resource operation"), validate.value_in("role", ROLE_VALUES), validate.value_in("requires", REQUIRES_VALUES), validate.value_in("on-fail", ON_FAIL_VALUES), validate.value_in("record-pending", BOOLEAN_VALUES), validate.value_in("enabled", BOOLEAN_VALUES), validate.mutually_exclusive( ["interval-origin", "start-delay"], "resource operation" ), validate.value_in( "name", allowed_operation_name_list, option_name_for_report="operation name", code_to_allow_extra_values=report_codes.FORCE_OPTIONS, allow_extra_values=allow_invalid, ), validate.value_id("id", option_name_for_report="operation id"), ] report_list = [] for operation in operation_list: report_list.extend( validate_operation(operation, options_validators) ) return report_list def validate_operation(operation, options_validator_list): """ Return a list with reports (ReportItems) about problems inside operation. dict operation contains attributes of operation """ report_list = validate.names_in( ATTRIBUTES, operation.keys(), "resource operation", ) report_list.extend(validate.run_collection_of_option_validators( operation, options_validator_list )) return report_list def get_remaining_defaults( report_processor, operation_list, default_operation_list ): """ Return operations not mentioned in operation_list but contained in default_operation_list. report_processor is tool for warning/info/error reporting list operation_list contains dictionaries with attributes of operation list default_operation_list contains dictionaries with attributes of the operation """ return make_unique_intervals( report_processor, [ default_operation for default_operation in default_operation_list if default_operation["name"] not in [ operation["name"] for operation in operation_list ] ] ) def get_interval_uniquer(): used_intervals_map = defaultdict(set) def get_uniq_interval(name, initial_interval): """ Return unique interval for name based on initial_interval if initial_interval is valid or return initial_interval otherwise. 
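        For example (illustrative): if two monitor operations both ask for
        interval "10s", the first call returns "10s" and the second returns
        "11"; the value normalized to seconds is incremented until it is
        unique.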
        string name is the operation name for searching the interval
        initial_interval is the starting point for finding a free value
        """
        used_intervals = used_intervals_map[name]
        normalized_interval = timeout_to_seconds(initial_interval)
        if normalized_interval is None:
            return initial_interval

        if normalized_interval not in used_intervals:
            used_intervals.add(normalized_interval)
            return initial_interval

        while normalized_interval in used_intervals:
            normalized_interval += 1
        used_intervals.add(normalized_interval)
        return str(normalized_interval)
    return get_uniq_interval


def make_unique_intervals(report_processor, operation_list):
    """
    Return an operation list similar to operation_list where intervals for
    the same operation are unique

    report_processor is a tool for warning/info/error reporting
    list operation_list contains dictionaries with attributes of operation
    """
    get_unique_interval = get_interval_uniquer()
    adapted_operation_list = []
    for operation in operation_list:
        adapted = operation.copy()
        if "interval" in adapted:
            adapted["interval"] = get_unique_interval(
                operation["name"],
                operation["interval"]
            )
            if adapted["interval"] != operation["interval"]:
                report_processor.process(
                    reports.resource_operation_interval_adapted(
                        operation["name"],
                        operation["interval"],
                        adapted["interval"],
                    )
                )
        adapted_operation_list.append(adapted)
    return adapted_operation_list


def validate_different_intervals(operation_list):
    """
    Check that the same operations (e.g. monitor) have different intervals.

    list operation_list contains dictionaries with attributes of operation
    return see resource operation in pcs/lib/exchange_formats.md
    """
    duplication_map = defaultdict(lambda: defaultdict(list))
    for operation in operation_list:
        interval = operation.get(
            "interval",
            get_default_interval(operation["name"])
        )
        seconds = timeout_to_seconds(interval)
        duplication_map[operation["name"]][seconds].append(interval)

    duplications = defaultdict(list)
    for name, interval_map in duplication_map.items():
        for timeout in sorted(interval_map.values()):
            if len(timeout) > 1:
                duplications[name].append(timeout)

    if duplications:
        return [reports.resource_operation_interval_duplication(
            dict(duplications)
        )]
    return []


def create_id(context_element, name, interval):
    """
    Create an id for an op element.

    etree context_element is used for the id building
    string name is the name of the operation
    mixed interval is the interval attribute of the operation
    """
    return create_subelement_id(
        context_element,
        "{0}-interval-{1}".format(name, interval)
    )


def create_operations(primitive_element, operation_list):
    """
    Create an operations element containing operations from operation_list

    list operation_list contains dictionaries with attributes of operation
    etree primitive_element is the context element
    """
    operations_element = etree.SubElement(primitive_element, "operations")
    for operation in sorted(operation_list, key=lambda op: op["name"]):
        append_new_operation(operations_element, operation)


def append_new_operation(operations_element, options):
    """
    Create an op element and append it to operations_element.
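    For example (hypothetical ids): options {"name": "monitor",
    "interval": "10s", "OCF_CHECK_LEVEL": "20"} produce an op element with an
    auto-generated id such as "r1-monitor-interval-10s" (derived from the
    parent primitive) plus a nested instance_attributes element carrying
    OCF_CHECK_LEVEL.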
etree operations_element is the context element dict options are attributes of operation """ attribute_map = dict( (key, value) for key, value in options.items() if key not in OPERATION_NVPAIR_ATTRIBUTES ) if "id" in attribute_map: if does_id_exist(operations_element, attribute_map["id"]): raise LibraryError(reports.id_already_exists(attribute_map["id"])) else: attribute_map.update({ "id": create_id( operations_element.getparent(), options["name"], options["interval"] ) }) op_element = etree.SubElement( operations_element, "op", attribute_map, ) nvpair_attribute_map = dict( (key, value) for key, value in options.items() if key in OPERATION_NVPAIR_ATTRIBUTES ) if nvpair_attribute_map: append_new_instance_attributes(op_element, nvpair_attribute_map) return op_element def get_resource_operations(resource_el, names=None): """ Get operations of a given resource, optionally filtered by name etree resource_el -- resource element iterable names -- return only operations of these names if specified """ return [ op_el for op_el in resource_el.xpath("./operations/op") if not names or op_el.attrib.get("name", "") in names ] def disable(operation_element): """ Disable the specified operation etree operation_element -- the operation """ operation_element.attrib["enabled"] = "false" def enable(operation_element): """ Enable the specified operation etree operation_element -- the operation """ operation_element.attrib.pop("enabled", None) def is_enabled(operation_element): """ Check if the specified operation is enabled etree operation_element -- the operation """ return is_true(operation_element.attrib.get("enabled", "true")) pcs-0.9.164/pcs/lib/cib/resource/primitive.py000066400000000000000000000106111326265502500207660ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from pcs.lib import reports from pcs.lib.cib.nvpair import ( append_new_instance_attributes, append_new_meta_attributes, ) from pcs.lib.cib.resource.operations import( prepare as prepare_operations, create_operations, ) from pcs.lib.cib.tools import does_id_exist from pcs.lib.errors import LibraryError from pcs.lib.pacemaker.values import validate_id TAG = "primitive" def is_primitive(resource_el): return resource_el.tag == TAG def create( report_processor, resources_section, resource_id, resource_agent, raw_operation_list=None, meta_attributes=None, instance_attributes=None, allow_invalid_operation=False, allow_invalid_instance_attributes=False, use_default_operations=True, resource_type="resource" ): """ Prepare all parts of primitive resource and append it into cib. 
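    The flow is: check that resource_id is not yet used and is a valid id,
    prepare and validate operations, validate instance attributes against
    the agent metadata, and finally build the element via append_new below.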
report_processor is a tool for warning/info/error reporting etree.Element resources_section is place where new element will be appended string resource_id is id of new resource lib.resource_agent.CrmAgent resource_agent list of dict raw_operation_list specifies operations of resource dict meta_attributes specifies meta attributes of resource dict instance_attributes specifies instance attributes of resource bool allow_invalid_operation is flag for skipping validation of operations bool allow_invalid_instance_attributes is flag for skipping validation of instance_attributes bool use_default_operations is flag for completion operations with default actions specified in resource agent string resource_type -- describes the resource for reports """ if raw_operation_list is None: raw_operation_list = [] if meta_attributes is None: meta_attributes = {} if instance_attributes is None: instance_attributes = {} if does_id_exist(resources_section, resource_id): raise LibraryError(reports.id_already_exists(resource_id)) validate_id(resource_id, "{0} name".format(resource_type)) operation_list = prepare_operations( report_processor, raw_operation_list, resource_agent.get_cib_default_actions( necessary_only=not use_default_operations ), [operation["name"] for operation in resource_agent.get_actions()], allow_invalid=allow_invalid_operation, ) report_processor.process_list( resource_agent.validate_parameters( instance_attributes, parameters_type=resource_type, allow_invalid=allow_invalid_instance_attributes, ) ) return append_new( resources_section, resource_id, resource_agent.get_standard(), resource_agent.get_provider(), resource_agent.get_type(), instance_attributes=instance_attributes, meta_attributes=meta_attributes, operation_list=operation_list ) def append_new( resources_section, resource_id, standard, provider, agent_type, instance_attributes=None, meta_attributes=None, operation_list=None ): """ Append a new primitive element to the resources_section. etree.Element resources_section is place where new element will be appended string resource_id is id of new resource string standard is a standard of resource agent (e.g. ocf) string agent_type is a type of resource agent (e.g. IPaddr2) string provider is a provider of resource agent (e.g. heartbeat) dict instance_attributes will be nvpairs inside instance_attributes element dict meta_attributes will be nvpairs inside meta_attributes element list operation_list contains dicts representing operations (e.g. 
        [{"name": "monitor"}, {"name": "start"}])
    """
    attributes = {
        "id": resource_id,
        "class": standard,
        "type": agent_type,
    }
    if provider:
        attributes["provider"] = provider
    primitive_element = etree.SubElement(resources_section, TAG, attributes)

    if instance_attributes:
        append_new_instance_attributes(
            primitive_element,
            instance_attributes
        )

    if meta_attributes:
        append_new_meta_attributes(primitive_element, meta_attributes)

    create_operations(
        primitive_element,
        operation_list if operation_list else []
    )

    return primitive_element

pcs-0.9.164/pcs/lib/cib/resource/remote_node.py000066400000000000000000000153131326265502500212620ustar00rootroot00000000000000
from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.common import report_codes
from pcs.lib import reports
from pcs.lib.errors import LibraryError
from pcs.lib.cib.resource import primitive
from pcs.lib.node import(
    NodeAddresses,
    node_addresses_contain_host,
    node_addresses_contain_name,
)
from pcs.lib.resource_agent import(
    find_valid_resource_agent_by_name,
    ResourceAgentName,
)

AGENT_NAME = ResourceAgentName("ocf", "pacemaker", "remote")


def get_agent(report_processor, cmd_runner):
    return find_valid_resource_agent_by_name(
        report_processor,
        cmd_runner,
        AGENT_NAME.full_name,
    )

_IS_REMOTE_AGENT_XPATH_SNIPPET = """
    @class="{0}" and @provider="{1}" and @type="{2}"
""".format(AGENT_NAME.standard, AGENT_NAME.provider, AGENT_NAME.type)

_HAS_SERVER_XPATH_SNIPPET = """
    instance_attributes/nvpair[
        @name="server"
        and
        string-length(@value) > 0
    ]
"""

def find_node_list(resources_section):
    node_list = [
        NodeAddresses(
            nvpair.attrib["value"],
            name=nvpair.getparent().getparent().attrib["id"]
        )
        for nvpair in resources_section.xpath(
            ".//primitive[{is_remote}]/{has_server}"
            .format(
                is_remote=_IS_REMOTE_AGENT_XPATH_SNIPPET,
                has_server=_HAS_SERVER_XPATH_SNIPPET,
            )
        )
    ]

    node_list.extend([
        NodeAddresses(primitive.attrib["id"], name=primitive.attrib["id"])
        for primitive in resources_section.xpath(
            ".//primitive[{is_remote} and not({has_server})]"
            .format(
                is_remote=_IS_REMOTE_AGENT_XPATH_SNIPPET,
                has_server=_HAS_SERVER_XPATH_SNIPPET,
            )
        )
    ])

    return node_list


def find_node_resources(resources_section, node_identifier):
    """
    Return a list of resource elements that match node_identifier

    etree.Element resources_section is the element to be searched
    string node_identifier could be the id of the resource or its instance
        attribute "server"
    """
    return resources_section.xpath(
        """
        .//primitive[
            {is_remote}
            and (
                @id="{identifier}"
                or
                instance_attributes/nvpair[
                    @name="server"
                    and
                    @value="{identifier}"
                ]
            )
        ]
        """
        .format(
            is_remote=_IS_REMOTE_AGENT_XPATH_SNIPPET,
            identifier=node_identifier
        )
    )


def get_host(resource_element):
    """
    Return the first host from the resource element if one is there.
    Return None if no host is there.
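    The "server" instance attribute takes precedence; when it is missing, the
    resource id itself is used as the host. For elements that are not
    ocf:pacemaker:remote primitives, None is returned.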
etree.Element resource_element """ if not ( resource_element.attrib.get("class", "") == AGENT_NAME.standard and resource_element.attrib.get("provider", "") == AGENT_NAME.provider and resource_element.attrib.get("type", "") == AGENT_NAME.type ): return None host_list = resource_element.xpath( "./{has_server}/@value".format(has_server=_HAS_SERVER_XPATH_SNIPPET) ) if host_list: return host_list[0] return resource_element.attrib["id"] def _validate_server_not_used(agent, option_dict): if "server" in option_dict: return [reports.invalid_options( ["server"], sorted([ attr["name"] for attr in agent.get_parameters() if attr["name"] != "server" ]), "resource", )] return [] def validate_host_not_conflicts(nodes, node_name, instance_attributes): host = instance_attributes.get("server", node_name) if node_addresses_contain_host(nodes, host): return [reports.id_already_exists(host)] return [] def validate_create( nodes, resource_agent, host, node_name, instance_attributes ): """ validate inputs for create list of NodeAddresses nodes -- nodes already used string node_name -- name of future node dict instance_attributes -- data for future resource instance attributes """ report_list = _validate_server_not_used(resource_agent, instance_attributes) host_is_used = False if node_addresses_contain_host(nodes, host): report_list.append(reports.id_already_exists(host)) host_is_used = True if not host_is_used or host != node_name: if node_addresses_contain_name(nodes, node_name): report_list.append(reports.id_already_exists(node_name)) return report_list def prepare_instance_atributes(instance_attributes, host): enriched_instance_attributes = instance_attributes.copy() enriched_instance_attributes["server"] = host return enriched_instance_attributes def create( report_processor, resource_agent, resources_section, host, node_name, raw_operation_list=None, meta_attributes=None, instance_attributes=None, allow_invalid_operation=False, allow_invalid_instance_attributes=False, use_default_operations=True, ): """ Prepare all parts of remote resource and append it into the cib. 
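    When host differs from node_name, the host is stored in the "server"
    instance attribute of the new resource. The "server" option is also
    filtered out of the allowed options in INVALID_OPTIONS reports below,
    since users are not supposed to set it directly.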
    report_processor is a tool for warning/info/error reporting
    lib.resource_agent.CrmAgent resource_agent is the agent of the new
        resource (ocf:pacemaker:remote)
    etree.Element resources_section is the place where the new element will
        be appended
    string host is the address of the remote node
    string node_name is the name of the remote node and the id of the new
        resource as well
    list of dict raw_operation_list specifies operations of the resource
    dict meta_attributes specifies meta attributes of the resource
    dict instance_attributes specifies instance attributes of the resource
    bool allow_invalid_operation is a flag for skipping validation of
        operations
    bool allow_invalid_instance_attributes is a flag for skipping validation
        of instance_attributes
    bool use_default_operations is a flag for completing operations with
        default actions specified in the resource agent
    """
    all_instance_attributes = instance_attributes.copy()
    if host != node_name:
        all_instance_attributes.update({"server": host})
    try:
        return primitive.create(
            report_processor,
            resources_section,
            node_name,
            resource_agent,
            raw_operation_list,
            meta_attributes,
            all_instance_attributes,
            allow_invalid_operation,
            allow_invalid_instance_attributes,
            use_default_operations,
        )
    except LibraryError as e:
        for report in e.args:
            if report.code == report_codes.INVALID_OPTIONS:
                report.info["allowed"] = [
                    value for value in report.info["allowed"]
                    if value != "server"
                ]
        raise e

pcs-0.9.164/pcs/lib/cib/sections.py000066400000000000000000000033271326265502500167640ustar00rootroot00000000000000
"""
This module defines mandatory and optional cib sections. It provides a
function for getting an existing section from the cib (lxml) tree.
"""
from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.lib import reports
from pcs.lib.errors import LibraryError
from pcs.lib.xml_tools import get_sub_element


CONFIGURATION = "configuration"
CONSTRAINTS = "configuration/constraints"
NODES = "configuration/nodes"
RESOURCES = "configuration/resources"

ACLS = "acls"
ALERTS = "alerts"
FENCING_TOPOLOGY = "fencing-topology"
OP_DEFAULTS = "op_defaults"
RSC_DEFAULTS = "rsc_defaults"

__MANDATORY_SECTIONS = [
    CONFIGURATION,
    CONSTRAINTS,
    NODES,
    RESOURCES,
]

__OPTIONAL_SECTIONS = [
    ACLS,
    ALERTS,
    FENCING_TOPOLOGY,
    OP_DEFAULTS,
    RSC_DEFAULTS,
]

def get(tree, section_name):
    """
    Return the element which represents the section 'section_name' in the
    tree. If the section is mandatory and is not found in the tree, this
    function raises. If the section is optional and is not found in the tree,
    this function creates a new section.
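    For example (illustrative): get(cib, RESOURCES) returns the existing
    configuration/resources element, while get(cib, ALERTS) creates the
    alerts element under configuration on demand.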
lxml.etree.Element tree -- is tree in which the section is looked up string section_name -- name of desired section; it is strongly recommended to use constants defined in this module """ if section_name in __MANDATORY_SECTIONS: section = tree.find(".//{0}".format(section_name)) if section is not None: return section raise LibraryError(reports.cib_missing_mandatory_section(section_name)) if section_name in __OPTIONAL_SECTIONS: return get_sub_element(get(tree, CONFIGURATION), section_name) raise AssertionError("Unknown cib section '{0}'".format(section_name)) pcs-0.9.164/pcs/lib/cib/stonith.py000066400000000000000000000004611326265502500166210ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) # TODO replace by the new finding function def is_stonith_resource(resources_el, name): return len( resources_el.xpath( "primitive[@id='{0}' and @class='stonith']".format(name) ) ) > 0 pcs-0.9.164/pcs/lib/cib/test/000077500000000000000000000000001326265502500155355ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/cib/test/__init__.py000066400000000000000000000000001326265502500176340ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/cib/test/test_acl.py000066400000000000000000000765221326265502500177210ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from pcs.test.tools.assertions import ( assert_raise_library_error, assert_xml_equal, ExtendedAssertionsMixin, ) from pcs.test.tools.misc import get_test_resource as rc from pcs.test.tools.xml import get_xml_manipulation_creator_from_file from pcs.test.tools.pcs_unittest import mock, TestCase from pcs.common import report_codes from pcs.lib.cib import acl as lib from pcs.lib.cib.tools import get_acls from pcs.lib.errors import ReportItemSeverity as severities, LibraryError class LibraryAclTest(TestCase): def setUp(self): self.create_cib = get_xml_manipulation_creator_from_file( rc("cib-empty.xml") ) self.cib = self.create_cib() @property def acls(self): return get_acls(self.cib.tree) def fixture_add_role(self, role_id): self.cib.append_to_first_tag_name( 'configuration', ''.format(role_id) ) def assert_cib_equal(self, expected_cib): got_xml = str(self.cib) expected_xml = str(expected_cib) assert_xml_equal(expected_xml, got_xml) class ValidatePermissionsTest(LibraryAclTest): def setUp(self): self.xml = """ """ self.tree = etree.XML(self.xml) self.allowed_permissions = ["read", "write", "deny"] self.allowed_scopes = ["xpath", "id"] def test_success(self): permissions = [ ("read", "id", "test-id"), ("write", "id", "another-id"), ("deny", "id", "last-id"), ("read", "xpath", "any string"), ("write", "xpath", "maybe xpath"), ("deny", "xpath", "xpath") ] lib.validate_permissions(self.tree, permissions) def test_unknown_permission(self): permissions = [ ("read", "id", "test-id"), ("unknown", "id", "another-id"), ("write", "xpath", "my xpath"), ("allow", "xpath", "xpath") ] assert_raise_library_error( lambda: lib.validate_permissions(self.tree, permissions), ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_value": "unknown", "option_name": "permission", "allowed_values": self.allowed_permissions, }, None ), ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_value": "allow", "option_name": "permission", "allowed_values": self.allowed_permissions, }, None ) ) def test_unknown_scope(self): permissions = [ ("read", "id", "test-id"), ("write", "not_id", "test-id"), ("deny", "not_xpath", "some xpath"), ("read", "xpath", 
"xpath") ] assert_raise_library_error( lambda: lib.validate_permissions(self.tree, permissions), ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_value": "not_id", "option_name": "scope type", "allowed_values": self.allowed_scopes, }, None ), ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_value": "not_xpath", "option_name": "scope type", "allowed_values": self.allowed_scopes, }, None ) ) def test_not_existing_id(self): permissions = [ ("read", "id", "test-id"), ("write", "id", "id"), ("deny", "id", "last"), ("write", "xpath", "maybe xpath") ] assert_raise_library_error( lambda: lib.validate_permissions(self.tree, permissions), ( severities.ERROR, report_codes.ID_NOT_FOUND, { "id": "id", "expected_types": ["id"], "context_type": "", "context_id": "", }, None ), ( severities.ERROR, report_codes.ID_NOT_FOUND, { "id": "last", "expected_types": ["id"], "context_type": "", "context_id": "", }, None ) ) class CreateRoleTest(LibraryAclTest): def test_create_for_new_role_id(self): role_id = 'new-id' lib.create_role(self.acls, role_id) self.assert_cib_equal( self.create_cib().append_to_first_tag_name( 'configuration', ''.format(role_id) ) ) def test_refuse_invalid_id(self): assert_raise_library_error( lambda: lib.create_role(self.cib.tree, '#invalid'), ( severities.ERROR, report_codes.INVALID_ID, {'id': '#invalid'}, ), ) def test_refuse_existing_non_role_id(self): self.cib.append_to_first_tag_name( 'nodes', '' ) assert_raise_library_error( lambda: lib.create_role(self.cib.tree, 'node-id'), ( severities.ERROR, report_codes.ID_ALREADY_EXISTS, {'id': 'node-id'}, ), ) class RemoveRoleTest(LibraryAclTest, ExtendedAssertionsMixin): def setUp(self): self.xml = """ """ self.tree = etree.XML(self.xml) def test_success(self): expected_xml = """ """ lib.remove_role(self.tree, "role-id") assert_xml_equal(expected_xml, etree.tostring(self.tree).decode()) def test_autodelete(self): expected_xml = """ """ lib.remove_role(self.tree, "role-id", autodelete_users_groups=True) assert_xml_equal(expected_xml, etree.tostring(self.tree).decode()) def test_id_not_exists(self): assert_raise_library_error( lambda: lib.remove_role(self.tree.find(".//acls"), "id-of-role"), ( severities.ERROR, report_codes.ID_NOT_FOUND, { "context_type": "acls", "context_id": "", "id": "id-of-role", }, ), ) class AssignRoleTest(LibraryAclTest): def setUp(self): LibraryAclTest.setUp(self) self.cib.append_to_first_tag_name( "configuration", """ """ ) def test_success_target(self): target = self.cib.tree.find(".//acl_target[@id='target1']") lib.assign_role(self.cib.tree, "role1", target) self.assert_cib_equal(self.create_cib().append_to_first_tag_name( "configuration", """ """ )) def test_sucess_group(self): group = self.cib.tree.find(".//acl_group[@id='group1']") lib.assign_role(self.cib.tree, "role1", group) self.assert_cib_equal(self.create_cib().append_to_first_tag_name( "configuration", """ """ )) def test_role_already_assigned(self): target = self.cib.tree.find(".//acl_target[@id='target1']") assert_raise_library_error( lambda: lib.assign_role(self.cib.tree, "role2", target), ( severities.ERROR, report_codes.CIB_ACL_ROLE_IS_ALREADY_ASSIGNED_TO_TARGET, { "role_id": "role2", "target_id": "target1", } ) ) @mock.patch("pcs.lib.cib.acl._assign_role") class AssignAllRoles(TestCase): def test_success(self, assign_role): assign_role.return_value = [] lib.assign_all_roles("acl_section", ["1", "2", "3"], "element") assign_role.assert_has_calls([ mock.call("acl_section", "1", "element"), mock.call("acl_section", "2", 
"element"), mock.call("acl_section", "3", "element"), ], any_order=True) def test_fail_on_error_report(self, assign_role): assign_role.return_value = ['report'] self.assertRaises( LibraryError, lambda: lib.assign_all_roles("acl_section", ["1", "2", "3"], "element") ) class UnassignRoleTest(LibraryAclTest): def setUp(self): LibraryAclTest.setUp(self) self.cib.append_to_first_tag_name( "configuration", """ """ ) def test_success_target(self): target = self.cib.tree.find( ".//acl_target[@id='{0}']".format("target1") ) lib.unassign_role(target, "role2") self.assert_cib_equal(self.create_cib().append_to_first_tag_name( "configuration", """ """ )) def test_success_group(self): group = self.cib.tree.find(".//acl_group[@id='{0}']".format("group1")) lib.unassign_role(group, "role1") self.assert_cib_equal(self.create_cib().append_to_first_tag_name( "configuration", """ """ )) def test_not_existing_role(self): target = self.cib.tree.find( ".//acl_target[@id='{0}']".format("target1") ) lib.unassign_role(target, "role3") self.assert_cib_equal(self.create_cib().append_to_first_tag_name( "configuration", """ """ )) def test_role_not_assigned(self): target = self.cib.tree.find( ".//acl_target[@id='{0}']".format("target1") ) assert_raise_library_error( lambda: lib.unassign_role(target, "role1"), ( severities.ERROR, report_codes.CIB_ACL_ROLE_IS_NOT_ASSIGNED_TO_TARGET, { "role_id": "role1", "target_id": "target1", } ) ) def test_autodelete(self): target = self.cib.tree.find(".//acl_group[@id='{0}']".format("group1")) lib.unassign_role(target, "role1", True) self.assert_cib_equal(self.create_cib().append_to_first_tag_name( "configuration", """ """ )) class AddPermissionsToRoleTest(LibraryAclTest): def test_add_for_correct_permissions(self): role_id = 'role1' self.fixture_add_role(role_id) lib.add_permissions_to_role( self.cib.tree.find(".//acl_role[@id='{0}']".format(role_id)), [('read', 'xpath', '/whatever')] ) self.assert_cib_equal( self.create_cib().append_to_first_tag_name('configuration', ''' '''.format(role_id)) ) class ProvideRoleTest(LibraryAclTest): def test_add_role_for_nonexisting_id(self): role_id = 'new-id' lib.provide_role(self.acls, role_id) self.assert_cib_equal( self.create_cib().append_to_first_tag_name('configuration', ''' '''.format(role_id)) ) def test_add_role_for_nonexisting_role_id(self): self.fixture_add_role('role1') role_id = 'role1' lib.provide_role(self.cib.tree, role_id) self.assert_cib_equal( self.create_cib().append_to_first_tag_name('configuration', ''' '''.format(role_id)) ) class CreateTargetTest(LibraryAclTest): def setUp(self): LibraryAclTest.setUp(self) self.fixture_add_role("target3") self.cib.append_to_first_tag_name("acls", '') def test_success(self): lib.create_target(self.acls, "target1") self.assert_cib_equal(self.create_cib().append_to_first_tag_name( "configuration", """ """ )) def test_target_id_is_not_unique_id(self): lib.create_target(self.acls, "target3") self.assert_cib_equal(self.create_cib().append_to_first_tag_name( "configuration", """ """ )) def test_target_id_is_not_unique_target_id(self): assert_raise_library_error( lambda: lib.create_target(self.acls, "target2"), ( severities.ERROR, report_codes.CIB_ACL_TARGET_ALREADY_EXISTS, {"target_id":"target2"} ) ) class CreateGroupTest(LibraryAclTest): def setUp(self): LibraryAclTest.setUp(self) self.fixture_add_role("group2") def test_success(self): lib.create_group(self.acls, "group1") self.assert_cib_equal(self.create_cib().append_to_first_tag_name( "configuration", """ """ )) def test_existing_id(self): 
assert_raise_library_error( lambda: lib.create_group(self.acls, "group2"), ( severities.ERROR, report_codes.ID_ALREADY_EXISTS, {"id": "group2"} ) ) class RemoveTargetTest(LibraryAclTest, ExtendedAssertionsMixin): def setUp(self): LibraryAclTest.setUp(self) self.fixture_add_role("target2") self.cib.append_to_first_tag_name("acls", '') def test_success(self): lib.remove_target(self.cib.tree, "target1") self.assert_cib_equal(self.create_cib().append_to_first_tag_name( "configuration", """ """ )) def test_not_existing(self): assert_raise_library_error( lambda: lib.remove_target(self.acls, "target2"), ( severities.ERROR, report_codes.ID_BELONGS_TO_UNEXPECTED_TYPE, { "id": "target2", "expected_types": ["acl_target"], "current_type": "acl_role", } ) ) class RemoveGroupTest(LibraryAclTest, ExtendedAssertionsMixin): def setUp(self): LibraryAclTest.setUp(self) self.fixture_add_role("group2") self.cib.append_to_first_tag_name("acls", '') def test_success(self): lib.remove_group(self.cib.tree, "group1") self.assert_cib_equal(self.create_cib().append_to_first_tag_name( "configuration", """ """ )) def test_not_existing(self): assert_raise_library_error( lambda: lib.remove_group(self.cib.tree, "group2"), ( severities.ERROR, report_codes.ID_BELONGS_TO_UNEXPECTED_TYPE, { "id": "group2", "expected_types": ["acl_group"], "current_type": "acl_role", } ) ) class RemovePermissionForReferenceTest(LibraryAclTest): def test_has_no_efect_when_id_not_referenced(self): lib.remove_permissions_referencing(self.cib.tree, 'dummy') self.assert_cib_equal(self.create_cib()) def test_remove_all_references(self): self.cib.append_to_first_tag_name('configuration', ''' ''') lib.remove_permissions_referencing(self.cib.tree, 'dummy') self.assert_cib_equal( self.create_cib().append_to_first_tag_name('configuration', ''' ''') ) class RemovePermissionTest(LibraryAclTest): def setUp(self): self.xml = """ """ self.tree = etree.XML(self.xml) def test_success(self): expected_xml = """ """ lib.remove_permission(self.tree, "permission-id") assert_xml_equal(expected_xml, etree.tostring(self.tree).decode()) def test_not_existing_id(self): assert_raise_library_error( lambda: lib.remove_permission(self.tree, "role-id"), ( severities.ERROR, report_codes.ID_BELONGS_TO_UNEXPECTED_TYPE, { "id": "role-id", "expected_types": ["acl_permission"], "current_type": "acl_role", } ) ) class GetRoleListTest(LibraryAclTest): def test_success(self): self.cib.append_to_first_tag_name( "configuration", """ """ ) expected = [ { "id": "role1", "description": "desc1", "permission_list": [ { "id": "role1-perm1", "description": None, "kind": "read", "xpath": "XPATH", "reference": None, "object-type": None, "attribute": None, }, { "id": "role1-perm2", "description": "desc", "kind": "write", "xpath": None, "reference": "id", "object-type": None, "attribute": None, }, { "id": "role1-perm3", "description": None, "kind": "deny", "xpath": None, "reference": None, "object-type": "type", "attribute": "attr", } ] }, { "id": "role2", "description": None, "permission_list": [], } ] self.assertEqual(expected, lib.get_role_list(self.acls)) class GetPermissionListTest(LibraryAclTest): def test_success(self): role_el = etree.Element("acl_role") etree.SubElement( role_el, "acl_permission", { "id":"role1-perm1", "kind": "read", "xpath": "XPATH", } ) etree.SubElement( role_el, "acl_permission", { "id": "role1-perm2", "description": "desc", "kind": "write", "reference": "id", } ) etree.SubElement( role_el, "acl_permission", { "id": "role1-perm3", "kind": "deny", "object-type": 
"type", "attribute": "attr", } ) expected = [ { "id": "role1-perm1", "description": None, "kind": "read", "xpath": "XPATH", "reference": None, "object-type": None, "attribute": None, }, { "id": "role1-perm2", "description": "desc", "kind": "write", "xpath": None, "reference": "id", "object-type": None, "attribute": None, }, { "id": "role1-perm3", "description": None, "kind": "deny", "xpath": None, "reference": None, "object-type": "type", "attribute": "attr", } ] self.assertEqual(expected, lib._get_permission_list(role_el)) @mock.patch("pcs.lib.cib.acl.get_target_like_list") class GetTargetListTest(TestCase): def test_success(self, mock_fn): mock_fn.return_value = "returned data" self.assertEqual("returned data", lib.get_target_list("tree")) mock_fn.assert_called_once_with("tree", "acl_target") @mock.patch("pcs.lib.cib.acl.get_target_like_list") class GetGroupListTest(TestCase): def test_success(self, mock_fn): mock_fn.return_value = "returned data" self.assertEqual("returned data", lib.get_group_list("tree")) mock_fn.assert_called_once_with("tree", "acl_group") class GetTargetLikeListWithTagTest(LibraryAclTest): def setUp(self): LibraryAclTest.setUp(self) self.cib.append_to_first_tag_name( "configuration", """ """ ) def test_success_targets(self): self.assertEqual( [ { "id": "target1", "role_list": [], }, { "id": "target2", "role_list": ["role1", "role2", "role3"], } ], lib.get_target_like_list(self.acls, "acl_target") ) def test_success_groups(self): self.assertEqual( [ { "id": "group1", "role_list": ["role1"], }, { "id": "group2", "role_list": [], } ], lib.get_target_like_list(self.acls, "acl_group") ) class GetRoleListOfTargetTest(LibraryAclTest): def test_success(self): target_el = etree.Element("target") etree.SubElement(target_el, "role", {"id": "role1"}) etree.SubElement(target_el, "role", {"id": "role2"}) etree.SubElement(target_el, "role") etree.SubElement(target_el, "role", {"id": "role3"}) self.assertEqual( ["role1", "role2", "role3"], lib._get_role_list_of_target(target_el) ) @mock.patch("pcs.lib.cib.acl.find_group") @mock.patch("pcs.lib.cib.acl.find_target") class FindTargetOrGroup(TestCase): def test_returns_target(self, find_target, find_group): find_target.return_value = "target_element" self.assertEqual( lib.find_target_or_group("acl_section", "target_id"), "target_element" ) find_target.assert_called_once_with( "acl_section", "target_id", none_if_id_unused=True ) def test_returns_group_if_target_is_none(self, find_target, find_group): find_target.return_value = None find_group.return_value = "group_element" self.assertEqual( lib.find_target_or_group("acl_section", "group_id"), "group_element" ) find_target.assert_called_once_with( "acl_section", "group_id", none_if_id_unused=True ) find_group.assert_called_once_with( "acl_section", "group_id", id_types=["acl_group", "acl_target"] ) class Find(TestCase): @mock.patch("pcs.lib.cib.acl.find_element_by_tag_and_id") def test_map_well_to_common_finder(self, common_finder): common_finder.return_value = "element" self.assertEqual( "element", lib._find( lib.TAG_GROUP, "acl_section", "group_id", none_if_id_unused=True, id_types=["some", "types"] ) ) common_finder.assert_called_once_with( lib.TAG_GROUP, "acl_section", "group_id", none_if_id_unused=True, id_types=["some", "types"] ) @mock.patch("pcs.lib.cib.acl.find_element_by_tag_and_id") def test_map_well_to_common_finder_with_automatic_desc(self, common_finder): common_finder.return_value = "element" self.assertEqual("element", lib._find( lib.TAG_GROUP, "acl_section", "group_id", 
none_if_id_unused=True )) common_finder.assert_called_once_with( lib.TAG_GROUP, "acl_section", "group_id", none_if_id_unused=True, id_types=None ) pcs-0.9.164/pcs/lib/cib/test/test_alert.py000066400000000000000000001101731326265502500202600ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from lxml import etree from pcs.common import report_codes from pcs.lib.cib import alert from pcs.lib.errors import ReportItemSeverity as severities from pcs.test.tools.assertions import( assert_raise_library_error, assert_xml_equal, assert_report_item_list_equal, ) from pcs.test.tools.custom_mock import MockLibraryReportProcessor class UpdateOptionalAttributeTest(TestCase): def test_add(self): element = etree.Element("element") alert._update_optional_attribute(element, "attr", "value1") self.assertEqual(element.get("attr"), "value1") def test_update(self): element = etree.Element("element", attr="value") alert._update_optional_attribute(element, "attr", "value1") self.assertEqual(element.get("attr"), "value1") def test_remove(self): element = etree.Element("element", attr="value") alert._update_optional_attribute(element, "attr", "") self.assertTrue(element.get("attr") is None) class EnsureRecipientValueIsUniqueTest(TestCase): def setUp(self): self.mock_reporter = MockLibraryReportProcessor() self.alert = etree.Element("alert", id="alert-1") self.recipient = etree.SubElement( self.alert, "recipient", id="rec-1", value="value1" ) def test_is_unique_no_duplicity_allowed(self): alert.ensure_recipient_value_is_unique( self.mock_reporter, self.alert, "value2" ) self.assertEqual(0, len(self.mock_reporter.report_item_list)) def test_same_recipient_no_duplicity_allowed(self): alert.ensure_recipient_value_is_unique( self.mock_reporter, self.alert, "value1", recipient_id="rec-1" ) self.assertEqual(0, len(self.mock_reporter.report_item_list)) def test_same_recipient_duplicity_allowed(self): alert.ensure_recipient_value_is_unique( self.mock_reporter, self.alert, "value1", recipient_id="rec-1", allow_duplicity=True ) self.assertEqual(0, len(self.mock_reporter.report_item_list)) def test_not_unique_no_duplicity_allowed(self): report_item = ( severities.ERROR, report_codes.CIB_ALERT_RECIPIENT_ALREADY_EXISTS, { "alert": "alert-1", "recipient": "value1" }, report_codes.FORCE_ALERT_RECIPIENT_VALUE_NOT_UNIQUE ) assert_raise_library_error( lambda: alert.ensure_recipient_value_is_unique( self.mock_reporter, self.alert, "value1" ), report_item ) assert_report_item_list_equal( self.mock_reporter.report_item_list, [report_item] ) def test_is_unique_duplicity_allowed(self): alert.ensure_recipient_value_is_unique( self.mock_reporter, self.alert, "value2", allow_duplicity=True ) self.assertEqual(0, len(self.mock_reporter.report_item_list)) def test_not_unique_duplicity_allowed(self): alert.ensure_recipient_value_is_unique( self.mock_reporter, self.alert, "value1", allow_duplicity=True ) assert_report_item_list_equal( self.mock_reporter.report_item_list, [( severities.WARNING, report_codes.CIB_ALERT_RECIPIENT_ALREADY_EXISTS, { "alert": "alert-1", "recipient": "value1" } )] ) class CreateAlertTest(TestCase): def setUp(self): self.tree = etree.XML( """ """ ) def test_no_alerts(self): tree = etree.XML( """ """ ) assert_xml_equal( '', etree.tostring( alert.create_alert(tree, "my-alert", "/test/path") ).decode() ) assert_xml_equal( """ """, etree.tostring(tree).decode() ) def test_alerts_exists(self): assert_xml_equal( '', etree.tostring( 
alert.create_alert(self.tree, "my-alert", "/test/path") ).decode() ) assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) def test_alerts_exists_with_description(self): assert_xml_equal( '', etree.tostring(alert.create_alert( self.tree, "my-alert", "/test/path", "nothing" )).decode() ) assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) def test_invalid_id(self): assert_raise_library_error( lambda: alert.create_alert(self.tree, "1alert", "/path"), ( severities.ERROR, report_codes.INVALID_ID, { "id": "1alert", "id_description": "alert-id", "invalid_character": "1", "is_first_char": True, } ) ) def test_id_exists(self): assert_raise_library_error( lambda: alert.create_alert(self.tree, "alert", "/path"), ( severities.ERROR, report_codes.ID_ALREADY_EXISTS, {"id": "alert"} ) ) def test_no_id(self): assert_xml_equal( '', etree.tostring( alert.create_alert(self.tree, None, "/test/path") ).decode() ) assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) class UpdateAlertTest(TestCase): def setUp(self): self.tree = etree.XML( """ """ ) def test_update_path(self): assert_xml_equal( '', etree.tostring( alert.update_alert(self.tree, "alert", "/test/path") ).decode() ) assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) def test_remove_path(self): assert_xml_equal( '', etree.tostring(alert.update_alert(self.tree, "alert", "")).decode() ) assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) def test_update_description(self): assert_xml_equal( '', etree.tostring( alert.update_alert(self.tree, "alert", None, "desc") ).decode() ) assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) def test_remove_description(self): assert_xml_equal( '', etree.tostring( alert.update_alert(self.tree, "alert1", None, "") ).decode() ) assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) def test_id_not_exists(self): assert_raise_library_error( lambda: alert.update_alert(self.tree, "alert0", "/test"), ( severities.ERROR, report_codes.ID_NOT_FOUND, { "id": "alert0", "expected_types": ["alert"], "context_type": "alerts", "context_id": "", }, None ) ) class RemoveAlertTest(TestCase): def setUp(self): self.tree = etree.XML( """ """ ) def test_success(self): alert.remove_alert(self.tree, "alert") assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) def test_not_existing_id(self): assert_raise_library_error( lambda: alert.remove_alert(self.tree, "not-existing-id"), ( severities.ERROR, report_codes.ID_NOT_FOUND, { "id": "not-existing-id", "expected_types": ["alert"], "context_type": "alerts", "context_id": "", }, None ) ) class AddRecipientTest(TestCase): def setUp(self): self.mock_reporter = MockLibraryReportProcessor() self.tree = etree.XML( """ """ ) def test_with_id(self): assert_xml_equal( '', etree.tostring( alert.add_recipient( self.mock_reporter, self.tree, "alert", "value1", "my-recipient" ) ).decode() ) assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) self.assertEqual([], self.mock_reporter.report_item_list) def test_without_id(self): assert_xml_equal( '', etree.tostring( alert.add_recipient( self.mock_reporter, self.tree, "alert", "value1" ) ).decode() ) assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) self.assertEqual([], self.mock_reporter.report_item_list) def test_id_exists(self): assert_raise_library_error( lambda: alert.add_recipient( self.mock_reporter, self.tree, "alert", "value1", "alert-recipient" ), ( severities.ERROR, report_codes.ID_ALREADY_EXISTS, {"id": "alert-recipient"} ) ) 
self.assertEqual([], self.mock_reporter.report_item_list) def test_duplicity_of_value_not_allowed(self): report_item = ( severities.ERROR, report_codes.CIB_ALERT_RECIPIENT_ALREADY_EXISTS, { "alert": "alert", "recipient": "test_val" }, report_codes.FORCE_ALERT_RECIPIENT_VALUE_NOT_UNIQUE ) assert_raise_library_error( lambda: alert.add_recipient( self.mock_reporter, self.tree, "alert", "test_val" ), report_item ) assert_report_item_list_equal( self.mock_reporter.report_item_list, [report_item] ) def test_duplicity_of_value_allowed(self): assert_xml_equal( '', etree.tostring( alert.add_recipient( self.mock_reporter, self.tree, "alert", "test_val", allow_same_value=True ) ).decode() ) assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) assert_report_item_list_equal( self.mock_reporter.report_item_list, [( severities.WARNING, report_codes.CIB_ALERT_RECIPIENT_ALREADY_EXISTS, { "alert": "alert", "recipient": "test_val" } )] ) def test_alert_not_exist(self): assert_raise_library_error( lambda: alert.add_recipient( self.mock_reporter, self.tree, "alert1", "test_val" ), ( severities.ERROR, report_codes.ID_NOT_FOUND, { "id": "alert1", "expected_types": ["alert"], "context_type": "alerts", "context_id": "", }, None ) ) def test_with_description(self): assert_xml_equal( """ """, etree.tostring(alert.add_recipient( self.mock_reporter, self.tree, "alert", "value1", description="desc" )).decode() ) assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) self.assertEqual([], self.mock_reporter.report_item_list) class UpdateRecipientTest(TestCase): def setUp(self): self.mock_reporter = MockLibraryReportProcessor() self.tree = etree.XML( """ """ ) def test_update_value(self): assert_xml_equal( """ """, etree.tostring(alert.update_recipient( self.mock_reporter, self.tree, "alert-recipient", recipient_value="new_val" )).decode() ) assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) self.assertEqual([], self.mock_reporter.report_item_list) def test_update_same_value_no_duplicity_allowed(self): assert_xml_equal( '', etree.tostring(alert.update_recipient( self.mock_reporter, self.tree, "alert-recipient", recipient_value="test_val" )).decode() ) assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) self.assertEqual([], self.mock_reporter.report_item_list) def test_update_same_value_duplicity_allowed(self): assert_xml_equal( '', etree.tostring(alert.update_recipient( self.mock_reporter, self.tree, "alert-recipient", recipient_value="test_val", allow_same_value=True )).decode() ) assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) self.assertEqual([], self.mock_reporter.report_item_list) def test_duplicity_of_value_not_allowed(self): report_item = ( severities.ERROR, report_codes.CIB_ALERT_RECIPIENT_ALREADY_EXISTS, { "alert": "alert", "recipient": "value1" }, report_codes.FORCE_ALERT_RECIPIENT_VALUE_NOT_UNIQUE ) assert_raise_library_error( lambda: alert.update_recipient( self.mock_reporter, self.tree, "alert-recipient", "value1" ), report_item ) assert_report_item_list_equal( self.mock_reporter.report_item_list, [report_item] ) def test_duplicity_of_value_allowed(self): assert_xml_equal( """ """, etree.tostring(alert.update_recipient( self.mock_reporter, self.tree, "alert-recipient", recipient_value="value1", allow_same_value=True )).decode() ) assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) assert_report_item_list_equal( self.mock_reporter.report_item_list, [( severities.WARNING, report_codes.CIB_ALERT_RECIPIENT_ALREADY_EXISTS, { "alert": "alert", 
"recipient": "value1" } )] ) def test_add_description(self): assert_xml_equal( """ """, etree.tostring(alert.update_recipient( self.mock_reporter, self.tree, "alert-recipient", description="description" )).decode() ) assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) self.assertEqual([], self.mock_reporter.report_item_list) def test_update_description(self): assert_xml_equal( """ """, etree.tostring(alert.update_recipient( self.mock_reporter, self.tree, "alert-recipient-1", description="description" )).decode() ) assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) self.assertEqual([], self.mock_reporter.report_item_list) def test_remove_description(self): assert_xml_equal( """ """, etree.tostring( alert.update_recipient( self.mock_reporter, self.tree, "alert-recipient-1", description="" ) ).decode() ) assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) self.assertEqual([], self.mock_reporter.report_item_list) def test_recipient_not_exists(self): assert_raise_library_error( lambda: alert.update_recipient( self.mock_reporter, self.tree, "missing-recipient"), ( severities.ERROR, report_codes.ID_NOT_FOUND, { "id": "missing-recipient", "expected_types": ["recipient"], "context_type": "alerts", "context_id": "", }, None ) ) class RemoveRecipientTest(TestCase): def setUp(self): self.tree = etree.XML( """ """ ) def test_success(self): alert.remove_recipient(self.tree, "alert-recipient-2") assert_xml_equal( """ """, etree.tostring(self.tree).decode() ) def test_recipient_not_exists(self): assert_raise_library_error( lambda: alert.remove_recipient(self.tree, "recipient"), ( severities.ERROR, report_codes.ID_NOT_FOUND, { "id": "recipient", "expected_types": ["recipient"], "context_type": "alerts", "context_id": "", }, None ) ) class GetAllRecipientsTest(TestCase): def test_success(self): alert_obj = etree.XML( """ """ ) self.assertEqual( [ { "id": "alert-recipient", "value": "test_val", "description": "", "instance_attributes": [ { "id": "nvset-name1-value1", "name": "name1", "value": "value1" }, { "id": "nvset-name2-value2", "name": "name2", "value": "value2" } ], "meta_attributes": [ { "id": "nvset-name3", "name": "name3", "value": "" } ] }, { "id": "alert-recipient-1", "value": "value1", "description": "desc", "instance_attributes": [], "meta_attributes": [] } ], alert.get_all_recipients(alert_obj) ) class GetAllAlertsTest(TestCase): def test_success(self): alerts = etree.XML( """ """ ) self.assertEqual( [ { "id": "alert", "path": "/path", "description": "", "instance_attributes": [], "meta_attributes": [], "recipient_list": [ { "id": "alert-recipient", "value": "test_val", "description": "", "instance_attributes": [ { "id": "instance_attributes-name1-value1", "name": "name1", "value": "value1" }, { "id": "instance_attributes-name2-value2", "name": "name2", "value": "value2" } ], "meta_attributes": [ { "id": "meta_attributes-name3", "name": "name3", "value": "" } ] }, { "id": "alert-recipient-1", "value": "value1", "description": "desc", "instance_attributes": [], "meta_attributes": [] } ] }, { "id": "alert1", "path": "/test/path", "description": "desc", "instance_attributes": [ { "id": "alert1-name1-value1", "name": "name1", "value": "value1" }, { "id": "alert1-name2-value2", "name": "name2", "value": "value2" } ], "meta_attributes": [ { "id": "alert1-name3", "name": "name3", "value": "" } ], "recipient_list": [] } ], alert.get_all_alerts(alerts) ) 
pcs-0.9.164/pcs/lib/cib/test/test_constraint.py000066400000000000000000000332071326265502500213370ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial from pcs.test.tools.pcs_unittest import TestCase from lxml import etree from pcs.common import report_codes from pcs.lib.cib.constraint import constraint from pcs.lib.errors import ReportItemSeverity as severities from pcs.test.tools.assertions import( assert_raise_library_error, assert_xml_equal, ) from pcs.test.tools.custom_mock import MockLibraryReportProcessor from pcs.test.tools.pcs_unittest import mock from pcs.test.tools.assertions import ( assert_report_item_list_equal, ) def fixture_element(tag, id): element = mock.MagicMock() element.tag = tag element.attrib = {"id": id} return element @mock.patch("pcs.lib.cib.constraint.constraint.find_parent") @mock.patch("pcs.lib.cib.constraint.constraint.find_element_by_tag_and_id") class FindValidResourceId(TestCase): def setUp(self): self.cib = "cib" self.report_processor = MockLibraryReportProcessor() self.find = partial( constraint.find_valid_resource_id, self.report_processor, self.cib, can_repair_to_clone=False, in_clone_allowed=False, ) def fixture_error_multiinstance(self, parent_type, parent_id): return ( severities.ERROR, report_codes.RESOURCE_FOR_CONSTRAINT_IS_MULTIINSTANCE, { "resource_id": "resourceA", "parent_type": parent_type, "parent_id": parent_id, }, report_codes.FORCE_CONSTRAINT_MULTIINSTANCE_RESOURCE ) def fixture_warning_multiinstance(self, parent_type, parent_id): return ( severities.WARNING, report_codes.RESOURCE_FOR_CONSTRAINT_IS_MULTIINSTANCE, { "resource_id": "resourceA", "parent_type": parent_type, "parent_id": parent_id, }, None ) def test_return_same_id_when_resource_is_clone(self, mock_find_by_id, _): mock_find_by_id.return_value = fixture_element("clone", "resourceA") self.assertEqual("resourceA", self.find(id="resourceA")) def test_return_same_id_when_resource_is_master(self, mock_find_by_id, _): mock_find_by_id.return_value = fixture_element("master", "resourceA") self.assertEqual("resourceA", self.find(id="resourceA")) def test_return_same_id_when_resource_is_bundle(self, mock_find_by_id, _): mock_find_by_id.return_value = fixture_element("bundle", "resourceA") self.assertEqual("resourceA", self.find(id="resourceA")) def test_return_same_id_when_resource_is_standalone_primitive( self, mock_find_by_id, mock_find_parent ): mock_find_by_id.return_value = fixture_element("primitive", "resourceA") mock_find_parent.return_value = None self.assertEqual("resourceA", self.find(id="resourceA")) def test_refuse_when_resource_is_in_clone( self, mock_find_by_id, mock_find_parent ): mock_find_by_id.return_value = fixture_element("primitive", "resourceA") mock_find_parent.return_value = fixture_element("clone", "clone_id") assert_raise_library_error( lambda: self.find(id="resourceA"), self.fixture_error_multiinstance("clone", "clone_id"), ) def test_refuse_when_resource_is_in_master( self, mock_find_by_id, mock_find_parent ): mock_find_by_id.return_value = fixture_element("primitive", "resourceA") mock_find_parent.return_value = fixture_element("master", "master_id") assert_raise_library_error( lambda: self.find(id="resourceA"), self.fixture_error_multiinstance("master", "master_id"), ) def test_refuse_when_resource_is_in_bundle( self, mock_find_by_id, mock_find_parent ): mock_find_by_id.return_value = fixture_element("primitive", "resourceA") mock_find_parent.return_value = fixture_element("bundle", 
"bundle_id") assert_raise_library_error( lambda: self.find(id="resourceA"), self.fixture_error_multiinstance("bundle", "bundle_id"), ) def test_return_clone_id_when_repair_allowed( self, mock_find_by_id, mock_find_parent ): mock_find_by_id.return_value = fixture_element("primitive", "resourceA") mock_find_parent.return_value = fixture_element("clone", "clone_id") self.assertEqual( "clone_id", self.find(can_repair_to_clone=True, id="resourceA") ) assert_report_item_list_equal( self.report_processor.report_item_list, [] ) def test_return_master_id_when_repair_allowed( self, mock_find_by_id, mock_find_parent ): mock_find_by_id.return_value = fixture_element("primitive", "resourceA") mock_find_parent.return_value = fixture_element("master", "master_id") self.assertEqual( "master_id", self.find(can_repair_to_clone=True, id="resourceA") ) assert_report_item_list_equal( self.report_processor.report_item_list, [] ) def test_return_bundle_id_when_repair_allowed( self, mock_find_by_id, mock_find_parent ): mock_find_by_id.return_value = fixture_element("primitive", "resourceA") mock_find_parent.return_value = fixture_element("bundle", "bundle_id") self.assertEqual( "bundle_id", self.find(can_repair_to_clone=True, id="resourceA") ) assert_report_item_list_equal( self.report_processor.report_item_list, [] ) def test_return_resource_id_when_in_clone_allowed( self, mock_find_by_id, mock_find_parent ): mock_find_by_id.return_value = fixture_element("primitive", "resourceA") mock_find_parent.return_value = fixture_element("clone", "clone_id") self.assertEqual( "resourceA", self.find(in_clone_allowed=True, id="resourceA") ) assert_report_item_list_equal( self.report_processor.report_item_list, [ self.fixture_warning_multiinstance("clone", "clone_id"), ] ) def test_return_resource_id_when_in_master_allowed( self, mock_find_by_id, mock_find_parent ): mock_find_by_id.return_value = fixture_element("primitive", "resourceA") mock_find_parent.return_value = fixture_element("master", "master_id") self.assertEqual( "resourceA", self.find(in_clone_allowed=True, id="resourceA") ) assert_report_item_list_equal( self.report_processor.report_item_list, [ self.fixture_warning_multiinstance("master", "master_id"), ] ) def test_return_resource_id_when_in_bundle_allowed( self, mock_find_by_id, mock_find_parent ): mock_find_by_id.return_value = fixture_element("primitive", "resourceA") mock_find_parent.return_value = fixture_element("bundle", "bundle_id") self.assertEqual( "resourceA", self.find(in_clone_allowed=True, id="resourceA") ) assert_report_item_list_equal( self.report_processor.report_item_list, [ self.fixture_warning_multiinstance("bundle", "bundle_id"), ] ) class PrepareOptionsTest(TestCase): def test_refuse_unknown_option(self): assert_raise_library_error( lambda: constraint.prepare_options( ("a", ), {"b": "c"}, mock.MagicMock(), mock.MagicMock() ), ( severities.ERROR, report_codes.INVALID_OPTIONS, { "option_names": ["b"], "option_type": None, "allowed": ["a", "id"], "allowed_patterns": [], } ), ) def test_complete_id(self): mock_create_id = mock.MagicMock() mock_create_id.return_value = "new-id" self.assertEqual({"id": "new-id"}, constraint.prepare_options( ("a",), {}, mock_create_id, mock.MagicMock() )) def test_has_no_side_efect_on_input_options(self): mock_create_id = mock.MagicMock() mock_create_id.return_value = "new-id" options = {"a": "b"} self.assertEqual( {"id": "new-id", "a": "b"}, constraint.prepare_options( ("a",), options, mock_create_id, mock.MagicMock() ) ) self.assertEqual({"a": "b"}, options) 
def test_refuse_invalid_id(self): class SomeException(Exception): pass mock_validate_id = mock.MagicMock() mock_validate_id.side_effect = SomeException() self.assertRaises( SomeException, lambda: constraint.prepare_options( ("a", ), {"id": "invalid"}, mock.MagicMock(), mock_validate_id ), ) mock_validate_id.assert_called_once_with("invalid") class CreateIdTest(TestCase): @mock.patch( "pcs.lib.cib.constraint.constraint.resource_set.extract_id_set_list" ) @mock.patch("pcs.lib.cib.constraint.constraint.find_unique_id") def test_create_id_from_resource_set_list(self, mock_find_id, mock_extract): mock_extract.return_value = [["A", "B"], ["C"]] mock_find_id.return_value = "some_id" self.assertEqual( "some_id", constraint.create_id("cib", "PREFIX", "resource_set_list") ) mock_extract.assert_called_once_with("resource_set_list") mock_find_id.assert_called_once_with("cib", "pcs_PREFIX_set_A_B_set_C") def fixture_constraint_section(return_value): constraint_section = mock.MagicMock() constraint_section.findall = mock.MagicMock() constraint_section.findall.return_value = return_value return constraint_section @mock.patch("pcs.lib.cib.constraint.constraint.export_with_set") class CheckIsWithoutDuplicationTest(TestCase): def test_raises_when_duplicate_element_found(self, export_with_set): export_with_set.return_value = "exported_duplicate_element" element = mock.MagicMock() element.tag = "constraint_type" report_processor = MockLibraryReportProcessor() assert_raise_library_error( lambda: constraint.check_is_without_duplication( report_processor, fixture_constraint_section(["duplicate_element"]), element, are_duplicate=lambda e1, e2: True, export_element=constraint.export_with_set, ), ( severities.ERROR, report_codes.DUPLICATE_CONSTRAINTS_EXIST, { 'constraint_info_list': ['exported_duplicate_element'], 'constraint_type': 'constraint_type' }, report_codes.FORCE_CONSTRAINT_DUPLICATE ), ) def test_success_when_no_duplication_found(self, export_with_set): export_with_set.return_value = "exported_duplicate_element" element = mock.MagicMock() element.tag = "constraint_type" #no exception raised report_processor = MockLibraryReportProcessor() constraint.check_is_without_duplication( report_processor, fixture_constraint_section([]), element, are_duplicate=lambda e1, e2: True, export_element=constraint.export_with_set, ) def test_report_when_duplication_allowed(self, export_with_set): export_with_set.return_value = "exported_duplicate_element" element = mock.MagicMock() element.tag = "constraint_type" report_processor = MockLibraryReportProcessor() constraint.check_is_without_duplication( report_processor, fixture_constraint_section(["duplicate_element"]), element, are_duplicate=lambda e1, e2: True, export_element=constraint.export_with_set, duplication_alowed=True, ) assert_report_item_list_equal( report_processor.report_item_list, [ ( severities.WARNING, report_codes.DUPLICATE_CONSTRAINTS_EXIST, { 'constraint_info_list': ['exported_duplicate_element'], 'constraint_type': 'constraint_type' }, ) ] ) class CreateWithSetTest(TestCase): def test_put_new_constraint_to_constraint_section(self): constraint_section = etree.Element("constraints") constraint.create_with_set( constraint_section, "ticket", {"a": "b"}, [{"ids": ["A", "B"], "options": {"c": "d"}}] ) assert_xml_equal(etree.tostring(constraint_section).decode(), """ """) def test_refuse_empty_resource_set_list(self): constraint_section = etree.Element("constraints") assert_raise_library_error( lambda: constraint.create_with_set( constraint_section, "ticket", 
{"a": "b"}, [] ), (severities.ERROR, report_codes.EMPTY_RESOURCE_SET_LIST, {}) ) pcs-0.9.164/pcs/lib/cib/test/test_constraint_colocation.py000066400000000000000000000063471326265502500235560ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.common import report_codes from pcs.lib.cib.constraint import colocation from pcs.lib.errors import ReportItemSeverity as severities from pcs.test.tools.assertions import assert_raise_library_error from pcs.test.tools.pcs_unittest import mock #Patch check_new_id_applicable is always desired when working with #prepare_options_with_set. Patched function raises when id not applicable #and do nothing when applicable - in this case tests do no actions with it @mock.patch("pcs.lib.cib.constraint.colocation.check_new_id_applicable") class PrepareOptionsWithSetTest(TestCase): def setUp(self): self.cib = "cib" self.resource_set_list = "resource_set_list" self.prepare = lambda options: colocation.prepare_options_with_set( self.cib, options, self.resource_set_list, ) @mock.patch("pcs.lib.cib.constraint.colocation.constraint.create_id") def test_complete_id(self, mock_create_id, _): mock_create_id.return_value = "generated_id" options = {"score": "1"} expected_options = options.copy() expected_options.update({"id": "generated_id"}) self.assertEqual(expected_options, self.prepare(options)) mock_create_id.assert_called_once_with( self.cib, colocation.TAG_NAME, self.resource_set_list ) def test_refuse_invalid_id(self, mock_check_new_id_applicable): mock_check_new_id_applicable.side_effect = Exception() invalid_id = "invalid_id" self.assertRaises(Exception, lambda: self.prepare({ "score": "1", "id": invalid_id, })) mock_check_new_id_applicable.assert_called_once_with( self.cib, colocation.DESCRIPTION, invalid_id ) def test_refuse_bad_score(self, _): assert_raise_library_error( lambda: self.prepare({ "score": "bad", "id": "id", }), (severities.ERROR, report_codes.INVALID_SCORE, { 'score': 'bad' }), ) def test_refuse_more_scores(self, _): assert_raise_library_error( lambda: self.prepare({ "score": "1", "score-attribute": "2", "id": "id", }), (severities.ERROR, report_codes.MULTIPLE_SCORE_OPTIONS, {}), ) def test_refuse_unknown_attributes(self, _): assert_raise_library_error( lambda: self.prepare({ "score": "1", "unknown": "value", "id": "id", }), ( severities.ERROR, report_codes.INVALID_OPTIONS, { "option_names": ["unknown"], "option_type": None, "allowed": [ "id", "score", "score-attribute", "score-attribute-mangle", ], "allowed_patterns": [], } ), ) pcs-0.9.164/pcs/lib/cib/test/test_constraint_order.py000066400000000000000000000070041326265502500225260ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.common import report_codes from pcs.lib.cib.constraint import order from pcs.lib.errors import ReportItemSeverity as severities from pcs.test.tools.assertions import assert_raise_library_error from pcs.test.tools.pcs_unittest import mock #Patch check_new_id_applicable is always desired when working with #prepare_options_with_set. 
Patched function raises when id not applicable #and do nothing when applicable - in this case tests do no actions with it @mock.patch("pcs.lib.cib.constraint.order.check_new_id_applicable") class PrepareOptionsWithSetTest(TestCase): def setUp(self): self.cib = "cib" self.resource_set_list = "resource_set_list" self.prepare = lambda options: order.prepare_options_with_set( self.cib, options, self.resource_set_list, ) @mock.patch("pcs.lib.cib.constraint.order.constraint.create_id") def test_complete_id(self, mock_create_id, _): mock_create_id.return_value = "generated_id" options = {"symmetrical": "true", "kind": "Optional"} expected_options = options.copy() expected_options.update({"id": "generated_id"}) self.assertEqual(expected_options, self.prepare(options)) mock_create_id.assert_called_once_with( self.cib, order.TAG_NAME, self.resource_set_list ) def test_refuse_invalid_id(self, mock_check_new_id_applicable): mock_check_new_id_applicable.side_effect = Exception() invalid_id = "invalid_id" self.assertRaises(Exception, lambda: self.prepare({ "symmetrical": "true", "kind": "Optional", "id": invalid_id, })) mock_check_new_id_applicable.assert_called_once_with( self.cib, order.DESCRIPTION, invalid_id ) def test_refuse_unknown_kind(self, _): assert_raise_library_error( lambda: self.prepare({ "symmetrical": "true", "kind": "unknown", "id": "id", }), (severities.ERROR, report_codes.INVALID_OPTION_VALUE, { 'allowed_values': ('Optional', 'Mandatory', 'Serialize'), 'option_value': 'unknown', 'option_name': 'kind', }), ) def test_refuse_unknown_symmetrical(self, _): assert_raise_library_error( lambda: self.prepare({ "symmetrical": "unknown", "kind": "Optional", "id": "id", }), (severities.ERROR, report_codes.INVALID_OPTION_VALUE, { 'allowed_values': ('true', 'false'), 'option_value': 'unknown', 'option_name': 'symmetrical', }), ) def test_refuse_unknown_attributes(self, _): assert_raise_library_error( lambda: self.prepare({ "symmetrical": "unknown", "kind": "Optional", "unknown": "value", "id": "id", }), ( severities.ERROR, report_codes.INVALID_OPTIONS, { "option_names": ["unknown"], "option_type": None, "allowed": [ "id", "kind", "symmetrical"], "allowed_patterns": [], } ), ) pcs-0.9.164/pcs/lib/cib/test/test_constraint_ticket.py000066400000000000000000000350461326265502500227050ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial from pcs.test.tools.pcs_unittest import TestCase from lxml import etree from pcs.common import report_codes from pcs.lib.cib.constraint import ticket from pcs.lib.errors import ReportItemSeverity as severities from pcs.test.tools.assertions import ( assert_raise_library_error, assert_xml_equal, ) from pcs.test.tools.pcs_unittest import mock @mock.patch("pcs.lib.cib.constraint.ticket.tools.check_new_id_applicable") class PrepareOptionsPlainTest(TestCase): def setUp(self): self.cib = "cib" self.prepare = partial(ticket.prepare_options_plain, self.cib) @mock.patch("pcs.lib.cib.constraint.ticket._create_id") def test_prepare_correct_options(self, mock_create_id, _): mock_create_id.return_value = "generated_id" self.assertEqual( { 'id': 'generated_id', 'loss-policy': 'fence', 'rsc': 'resourceA', 'rsc-role': 'Master', 'ticket': 'ticket_key' }, self.prepare( {"loss-policy": "fence", "rsc-role": "master"}, "ticket_key", "resourceA", ) ) @mock.patch("pcs.lib.cib.constraint.ticket._create_id") def test_does_not_include_role_if_not_presented(self, mock_create_id, _): mock_create_id.return_value = 
"generated_id" self.assertEqual( { 'id': 'generated_id', 'loss-policy': 'fence', 'rsc': 'resourceA', 'ticket': 'ticket_key' }, self.prepare( {"loss-policy": "fence", "rsc-role": ""}, "ticket_key", "resourceA", ) ) def test_refuse_unknown_attributes(self, _): assert_raise_library_error( lambda: self.prepare( {"unknown": "nonsense", "rsc-role": "master"}, "ticket_key", "resourceA", ), ( severities.ERROR, report_codes.INVALID_OPTIONS, { "option_names": ["unknown"], "option_type": None, "allowed": ["id", "loss-policy", "rsc", "rsc-role", "ticket"], "allowed_patterns": [], } ), ) def test_refuse_bad_role(self, _): assert_raise_library_error( lambda: self.prepare( {"id": "id", "rsc-role": "bad_role"}, "ticket_key", "resourceA" ), (severities.ERROR, report_codes.INVALID_OPTION_VALUE, { 'allowed_values': ('Stopped', 'Started', 'Master', 'Slave'), 'option_value': 'bad_role', 'option_name': 'rsc-role', }), ) def test_refuse_missing_ticket(self, _): assert_raise_library_error( lambda: self.prepare( {"id": "id", "rsc-role": "master"}, "", "resourceA" ), ( severities.ERROR, report_codes.REQUIRED_OPTION_IS_MISSING, { "option_names": ["ticket"] } ), ) def test_refuse_missing_resource_id(self, _): assert_raise_library_error( lambda: self.prepare( {"id": "id", "rsc-role": "master"}, "ticket_key", "" ), ( severities.ERROR, report_codes.REQUIRED_OPTION_IS_MISSING, { "option_names": ["rsc"], } ), ) def test_refuse_unknown_lost_policy(self, mock_check_new_id_applicable): assert_raise_library_error( lambda: self.prepare( { "loss-policy": "unknown", "ticket": "T", "id": "id"}, "ticket_key", "resourceA", ), (severities.ERROR, report_codes.INVALID_OPTION_VALUE, { 'allowed_values': ('fence', 'stop', 'freeze', 'demote'), 'option_value': 'unknown', 'option_name': 'loss-policy', }), ) @mock.patch("pcs.lib.cib.constraint.ticket._create_id") def test_complete_id(self, mock_create_id, _): mock_create_id.return_value = "generated_id" options = {"loss-policy": "freeze", "ticket": "T", "rsc-role": "Master"} ticket_key = "ticket_key" resource_id = "resourceA" expected_options = options.copy() expected_options.update({ "id": "generated_id", "rsc": resource_id, "rsc-role": "Master", "ticket": ticket_key, }) self.assertEqual(expected_options, self.prepare( options, ticket_key, resource_id, )) mock_create_id.assert_called_once_with( self.cib, ticket_key, resource_id, "Master", ) #Patch check_new_id_applicable is always desired when working with #prepare_options_with_set. 
Patched function raises when id not applicable #and do nothing when applicable - in this case tests do no actions with it @mock.patch("pcs.lib.cib.constraint.ticket.tools.check_new_id_applicable") class PrepareOptionsWithSetTest(TestCase): def setUp(self): self.cib = "cib" self.resource_set_list = "resource_set_list" self.prepare = lambda options: ticket.prepare_options_with_set( self.cib, options, self.resource_set_list, ) @mock.patch("pcs.lib.cib.constraint.ticket.constraint.create_id") def test_complete_id(self, mock_create_id, _): mock_create_id.return_value = "generated_id" options = {"loss-policy": "freeze", "ticket": "T"} expected_options = options.copy() expected_options.update({"id": "generated_id"}) self.assertEqual(expected_options, self.prepare(options)) mock_create_id.assert_called_once_with( self.cib, ticket.TAG_NAME, self.resource_set_list ) def test_refuse_invalid_id(self, mock_check_new_id_applicable): class SomeException(Exception): pass mock_check_new_id_applicable.side_effect = SomeException() invalid_id = "invalid_id" self.assertRaises(SomeException, lambda: self.prepare({ "loss-policy": "freeze", "ticket": "T", "id": invalid_id, })) mock_check_new_id_applicable.assert_called_once_with( self.cib, ticket.DESCRIPTION, invalid_id ) def test_refuse_unknown_lost_policy(self, _): assert_raise_library_error( lambda: self.prepare({ "loss-policy": "unknown", "ticket": "T", "id": "id", }), (severities.ERROR, report_codes.INVALID_OPTION_VALUE, { 'allowed_values': ('fence', 'stop', 'freeze', 'demote'), 'option_value': 'unknown', 'option_name': 'loss-policy', }), ) def test_refuse_missing_ticket(self, _): assert_raise_library_error( lambda: self.prepare({"loss-policy": "stop", "id": "id"}), ( severities.ERROR, report_codes.REQUIRED_OPTION_IS_MISSING, {"option_names": ["ticket"]} ) ) def test_refuse_empty_ticket(self, _): assert_raise_library_error( lambda: self.prepare({ "loss-policy": "stop", "id": "id", "ticket": " " }), ( severities.ERROR, report_codes.REQUIRED_OPTION_IS_MISSING, {"option_names": ["ticket"]} ) ) class Element(object): def __init__(self, attrib): self.attrib = attrib def update(self, attrib): self.attrib.update(attrib) return self class AreDuplicatePlain(TestCase): def setUp(self): self.first = Element({ "ticket": "ticket_key", "rsc": "resourceA", "rsc-role": "Master" }) self.second = Element({ "ticket": "ticket_key", "rsc": "resourceA", "rsc-role": "Master" }) def test_returns_true_for_duplicate_elements(self): self.assertTrue(ticket.are_duplicate_plain(self.first, self.second)) def test_returns_false_for_different_ticket(self): self.assertFalse(ticket.are_duplicate_plain( self.first, self.second.update({"ticket": "X"}) )) def test_returns_false_for_different_resource(self): self.assertFalse(ticket.are_duplicate_plain( self.first, self.second.update({"rsc": "Y"}) )) def test_returns_false_for_different_role(self): self.assertFalse(ticket.are_duplicate_plain( self.first, self.second.update({"rsc-role": "Z"}) )) def test_returns_false_for_different_elements(self): self.second.update({ "ticket": "X", "rsc": "Y", "rsc-role": "Z" }) self.assertFalse(ticket.are_duplicate_plain(self.first, self.second)) @mock.patch("pcs.lib.cib.constraint.ticket.constraint.have_duplicate_resource_sets") class AreDuplicateWithResourceSet(TestCase): def test_returns_true_for_duplicate_elements( self, mock_have_duplicate_resource_sets ): mock_have_duplicate_resource_sets.return_value = True self.assertTrue(ticket.are_duplicate_with_resource_set( Element({"ticket": "ticket_key"}), 
Element({"ticket": "ticket_key"}), )) def test_returns_false_for_different_elements( self, mock_have_duplicate_resource_sets ): mock_have_duplicate_resource_sets.return_value = True self.assertFalse(ticket.are_duplicate_with_resource_set( Element({"ticket": "ticket_key"}), Element({"ticket": "X"}), )) class RemovePlainTest(TestCase): def test_remove_tickets_constraints_for_resource(self): constraint_section = etree.fromstring(""" """) self.assertTrue(ticket.remove_plain( constraint_section, ticket_key="tA", resource_id="rA", )) assert_xml_equal(etree.tostring(constraint_section).decode(), """ """) def test_remove_nothing_when_no_matching_found(self): constraint_section = etree.fromstring(""" """) self.assertFalse(ticket.remove_plain( constraint_section, ticket_key="tA", resource_id="rA", )) assert_xml_equal(etree.tostring(constraint_section).decode(), """ """) class RemoveWithSetTest(TestCase): def test_remove_resource_references_and_empty_remaining_parents(self): constraint_section = etree.fromstring(""" """) self.assertTrue(ticket.remove_with_resource_set( constraint_section, ticket_key="tA", resource_id="rA" )) assert_xml_equal( """ """, etree.tostring(constraint_section).decode() ) def test_remove_nothing_when_no_matching_found(self): constraint_section = etree.fromstring(""" """) self.assertFalse(ticket.remove_with_resource_set( constraint_section, ticket_key="tA", resource_id="rA" )) pcs-0.9.164/pcs/lib/cib/test/test_fencing_topology.py000066400000000000000000001001551326265502500225150ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from pcs.common import report_codes from pcs.lib import reports from pcs.lib.errors import LibraryError, ReportItemSeverity as severity from pcs.lib.pacemaker.state import ClusterState from pcs.test.tools.assertions import ( assert_raise_library_error, assert_report_item_list_equal, assert_xml_equal, ) from pcs.test.tools.custom_mock import MockLibraryReportProcessor from pcs.test.tools.misc import create_patcher from pcs.test.tools.pcs_unittest import TestCase#, mock from pcs.test.tools.xml import etree_to_str from pcs.common.fencing_topology import ( TARGET_TYPE_NODE, TARGET_TYPE_REGEXP, TARGET_TYPE_ATTRIBUTE, ) from pcs.lib.cib import fencing_topology as lib patch_lib = create_patcher("pcs.lib.cib.fencing_topology") class CibMixin(object): def get_cib(self): return etree.fromstring(""" """) class StatusNodesMixin(object): def get_status(self): return ClusterState(""" """).node_section.nodes @patch_lib("_append_level_element") @patch_lib("_validate_level_target_devices_does_not_exist") @patch_lib("_validate_devices") @patch_lib("_validate_target") @patch_lib("_validate_level", return_value="valid_level") class AddLevel(TestCase): def setUp(self): self.reporter = MockLibraryReportProcessor() def assert_validators_called( self, mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, dupl_called=True ): mock_val_level.assert_called_once_with(self.reporter, "level") mock_val_target.assert_called_once_with( self.reporter, "cluster_status_nodes", "target_type", "target_value", "force_node" ) mock_val_devices.assert_called_once_with( self.reporter, "resources_el", "devices", "force_device" ) if dupl_called: mock_val_dupl.assert_called_once_with( self.reporter, "topology_el", "level", "target_type", "target_value", "devices" ) else: mock_val_dupl.assert_not_called() def assert_called_invalid( self, mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, mock_append, 
dupl_called=True ): self.assertRaises( LibraryError, lambda: lib.add_level( self.reporter, "topology_el", "resources_el", "level", "target_type", "target_value", "devices", "cluster_status_nodes", "force_device", "force_node" ) ) self.assert_validators_called( mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, dupl_called ) mock_append.assert_not_called() def test_success( self, mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, mock_append ): lib.add_level( self.reporter, "topology_el", "resources_el", "level", "target_type", "target_value", "devices", "cluster_status_nodes", "force_device", "force_node" ) self.assert_validators_called( mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl ) mock_append.assert_called_once_with( "topology_el", "valid_level", "target_type", "target_value", "devices" ) def test_invalid_level( self, mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, mock_append ): mock_val_level.side_effect = lambda reporter, level: reporter.append( reports.invalid_option_value("level", level, "a positive integer") ) self.assert_called_invalid( mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, mock_append, dupl_called=False ) def test_invalid_target( self, mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, mock_append ): mock_val_target.side_effect = ( lambda reporter, status_nodes, target_type, target_value, force: reporter.append( reports.node_not_found(target_value) ) ) self.assert_called_invalid( mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, mock_append, dupl_called=False ) def test_invalid_devices( self, mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, mock_append ): mock_val_devices.side_effect = ( lambda reporter, resources, devices, force: reporter.append( reports.stonith_resources_do_not_exist(["device"]) ) ) self.assert_called_invalid( mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, mock_append, dupl_called=False ) def test_already_exists( self, mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, mock_append ): mock_val_dupl.side_effect = ( lambda reporter, tree, level, target_type, target_value, devices: reporter.append( reports.fencing_level_already_exists( level, target_type, target_value, devices ) ) ) self.assert_called_invalid( mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, mock_append, dupl_called=True ) class RemoveAllLevels(TestCase, CibMixin): def setUp(self): self.tree = self.get_cib() def test_success(self): lib.remove_all_levels(self.tree) assert_xml_equal( "", etree_to_str(self.tree) ) class RemoveLevelsByParams(TestCase, CibMixin): def setUp(self): self.tree = self.get_cib() self.reporter = MockLibraryReportProcessor() def get_remaining_ids(self): return [el.get("id") for el in self.tree.findall("fencing-level")] def test_level(self): lib.remove_levels_by_params( self.reporter, self.tree, level=2 ) self.assertEqual( self.get_remaining_ids(), ["fl1", "fl3", "fl5", "fl7", "fl8", "fl9", "fl10"] ) assert_report_item_list_equal(self.reporter.report_item_list, []) def test_target_node(self): lib.remove_levels_by_params( self.reporter, self.tree, target_type=TARGET_TYPE_NODE, target_value="nodeA" ) self.assertEqual( self.get_remaining_ids(), ["fl3", "fl4", "fl5", "fl6", "fl7", "fl8", "fl9", "fl10"] ) assert_report_item_list_equal(self.reporter.report_item_list, []) def test_target_pattern(self): lib.remove_levels_by_params( self.reporter, self.tree, target_type=TARGET_TYPE_REGEXP, 
target_value="node\d+" ) self.assertEqual( self.get_remaining_ids(), ["fl1", "fl2", "fl3", "fl4", "fl7", "fl8", "fl9", "fl10"] ) assert_report_item_list_equal(self.reporter.report_item_list, []) def test_target_attrib(self): lib.remove_levels_by_params( self.reporter, self.tree, target_type=TARGET_TYPE_ATTRIBUTE, target_value=("fencing", "improved") ) self.assertEqual( self.get_remaining_ids(), ["fl1", "fl2", "fl3", "fl4", "fl5", "fl6", "fl9", "fl10"] ) assert_report_item_list_equal(self.reporter.report_item_list, []) def test_one_device(self): lib.remove_levels_by_params( self.reporter, self.tree, devices=["d3"] ) self.assertEqual( self.get_remaining_ids(), ["fl1", "fl3", "fl5", "fl6", "fl7", "fl8", "fl9", "fl10"] ) assert_report_item_list_equal(self.reporter.report_item_list, []) def test_more_devices(self): lib.remove_levels_by_params( self.reporter, self.tree, devices=["d2", "d1"] ) self.assertEqual( self.get_remaining_ids(), ["fl1", "fl2", "fl4", "fl5", "fl6", "fl7", "fl8", "fl9", "fl10"] ) assert_report_item_list_equal(self.reporter.report_item_list, []) def test_combination(self): lib.remove_levels_by_params( self.reporter, self.tree, 2, TARGET_TYPE_NODE, "nodeB", ["d3"] ) self.assertEqual( self.get_remaining_ids(), ["fl1", "fl2", "fl3", "fl5", "fl6", "fl7", "fl8", "fl9", "fl10"] ) assert_report_item_list_equal(self.reporter.report_item_list, []) def test_invalid_target(self): assert_raise_library_error( lambda: lib.remove_levels_by_params( self.reporter, self.tree, target_type="bad_target", target_value="nodeA" ), ( severity.ERROR, report_codes.INVALID_OPTION_TYPE, { "option_name": "target", "allowed_types": [ "node", "regular expression", "attribute_name=value" ] }, None ), ) self.assertEqual( self.get_remaining_ids(), [ "fl1", "fl2", "fl3", "fl4", "fl5", "fl6", "fl7", "fl8", "fl9", "fl10" ] ) def test_no_such_level(self): assert_raise_library_error( lambda: lib.remove_levels_by_params( self.reporter, self.tree, 9, TARGET_TYPE_NODE, "nodeB", ["d3"] ), ( severity.ERROR, report_codes.CIB_FENCING_LEVEL_DOES_NOT_EXIST, { "devices": ["d3", ], "target_type": TARGET_TYPE_NODE, "target_value": "nodeB", "level": 9, }, None ), ) self.assertEqual( self.get_remaining_ids(), [ "fl1", "fl2", "fl3", "fl4", "fl5", "fl6", "fl7", "fl8", "fl9", "fl10" ] ) def test_no_such_level_ignore_missing(self): lib.remove_levels_by_params( self.reporter, self.tree, 9, TARGET_TYPE_NODE, "nodeB", ["d3"], True ) self.assertEqual( self.get_remaining_ids(), [ "fl1", "fl2", "fl3", "fl4", "fl5", "fl6", "fl7", "fl8", "fl9", "fl10" ] ) class RemoveDeviceFromAllLevels(TestCase, CibMixin): def setUp(self): self.tree = self.get_cib() def test_success(self): lib.remove_device_from_all_levels(self.tree, "d3") assert_xml_equal( """ """, etree_to_str(self.tree) ) def test_no_such_device(self): original_xml = etree_to_str(self.tree) lib.remove_device_from_all_levels(self.tree, "dX") assert_xml_equal(original_xml, etree_to_str(self.tree)) class Export(TestCase, CibMixin): def test_empty(self): self.assertEqual( lib.export(etree.fromstring("")), [] ) def test_success(self): self.assertEqual( lib.export(self.get_cib()), [ { "level": "1", "target_type": "node", "target_value": "nodeA", "devices": ["d1", "d2"], }, { "level": "2", "target_type": "node", "target_value": "nodeA", "devices": ["d3"], }, { "level": "1", "target_type": "node", "target_value": "nodeB", "devices": ["d2", "d1"], }, { "level": "2", "target_type": "node", "target_value": "nodeB", "devices": ["d3"], }, { "level": "1", "target_type": "regexp", "target_value": 
"node\d+", "devices": ["d3", "d4"], }, { "level": "2", "target_type": "regexp", "target_value": "node\d+", "devices": ["d1"], }, { "level": "3", "target_type": "attribute", "target_value": ("fencing", "improved"), "devices": ["d3", "d4"], }, { "level": "4", "target_type": "attribute", "target_value": ("fencing", "improved"), "devices": ["d5"], }, { "level": "3", "target_type": "regexp", "target_value": "node-R.*", "devices": ["dR"], }, { "level": "4", "target_type": "attribute", "target_value": ("fencing", "remote-special"), "devices": ["dR-special"], } ] ) class Verify(TestCase, CibMixin, StatusNodesMixin): def fixture_resource(self, tree, name): el = etree.SubElement(tree, "primitive", id=name, type="fence_dummy") el.set("class", "stonith") def test_empty(self): resources = etree.fromstring("") topology = etree.fromstring("") reporter = MockLibraryReportProcessor() lib.verify(reporter, topology, resources, self.get_status()) assert_report_item_list_equal(reporter.report_item_list, []) def test_success(self): resources = etree.fromstring("") for name in ["d1", "d2", "d3", "d4", "d5", "dR", "dR-special"]: self.fixture_resource(resources, name) reporter = MockLibraryReportProcessor() lib.verify(reporter, self.get_cib(), resources, self.get_status()) assert_report_item_list_equal(reporter.report_item_list, []) def test_failures(self): resources = etree.fromstring("") reporter = MockLibraryReportProcessor() lib.verify(reporter, self.get_cib(), resources, []) report = [ ( severity.ERROR, report_codes.STONITH_RESOURCES_DO_NOT_EXIST, { "stonith_ids": [ "d1", "d2", "d3", "d4", "d5", "dR", "dR-special" ], }, None ), ( severity.ERROR, report_codes.NODE_NOT_FOUND, { "node": "nodeA", }, None ), ( severity.ERROR, report_codes.NODE_NOT_FOUND, { "node": "nodeB", }, None ), ] assert_report_item_list_equal(reporter.report_item_list, report) class ValidateLevel(TestCase): def test_success(self): reporter = MockLibraryReportProcessor() lib._validate_level(reporter, 1) lib._validate_level(reporter, "1") lib._validate_level(reporter, 9) lib._validate_level(reporter, "9") lib._validate_level(reporter, "05") assert_report_item_list_equal(reporter.report_item_list, []) def test_invalid(self): reporter = MockLibraryReportProcessor() lib._validate_level(reporter, "") lib._validate_level(reporter, 0) lib._validate_level(reporter, "0") lib._validate_level(reporter, -1) lib._validate_level(reporter, "-1") lib._validate_level(reporter, "1abc") reports = [] for value in ["", 0, "0", -1, "-1", "1abc"]: reports.append(( severity.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_value": value, "option_name": "level", "allowed_values": "a positive integer", }, None )) assert_report_item_list_equal(reporter.report_item_list, reports) @patch_lib("_validate_target_valuewise") @patch_lib("_validate_target_typewise") class ValidateTarget(TestCase): def test_delegate(self, validate_type, validate_value): lib._validate_target("reporter", "status", "type", "value", "force") validate_type.assert_called_once_with("reporter", "type") validate_value.assert_called_once_with( "reporter", "status", "type", "value", "force" ) class ValidateTargetTypewise(TestCase): def test_success(self): reporter = MockLibraryReportProcessor() lib._validate_target_typewise(reporter, TARGET_TYPE_NODE) lib._validate_target_typewise(reporter, TARGET_TYPE_ATTRIBUTE) lib._validate_target_typewise(reporter, TARGET_TYPE_REGEXP) assert_report_item_list_equal(reporter.report_item_list, []) def test_empty(self): reporter = MockLibraryReportProcessor() 
lib._validate_target_typewise(reporter, "") report = [( severity.ERROR, report_codes.INVALID_OPTION_TYPE, { "option_name": "target", "allowed_types": [ "node", "regular expression", "attribute_name=value" ], }, None )] assert_report_item_list_equal(reporter.report_item_list, report) def test_invalid(self): reporter = MockLibraryReportProcessor() lib._validate_target_typewise(reporter, "bad_target") report = [( severity.ERROR, report_codes.INVALID_OPTION_TYPE, { "option_name": "target", "allowed_types": [ "node", "regular expression", "attribute_name=value" ], }, None )] assert_report_item_list_equal(reporter.report_item_list, report) class ValidateTargetValuewise(TestCase, StatusNodesMixin): def setUp(self): self.state = self.get_status() def test_node_valid(self): reporter = MockLibraryReportProcessor() lib._validate_target_valuewise( reporter, self.state, TARGET_TYPE_NODE, "nodeA" ) assert_report_item_list_equal(reporter.report_item_list, []) def test_node_empty(self): reporter = MockLibraryReportProcessor() lib._validate_target_valuewise( reporter, self.state, TARGET_TYPE_NODE, "" ) report = [( severity.ERROR, report_codes.NODE_NOT_FOUND, { "node": "", }, report_codes.FORCE_NODE_DOES_NOT_EXIST )] assert_report_item_list_equal(reporter.report_item_list, report) def test_node_invalid(self): reporter = MockLibraryReportProcessor() lib._validate_target_valuewise( reporter, self.state, TARGET_TYPE_NODE, "rh7-x" ) report = [( severity.ERROR, report_codes.NODE_NOT_FOUND, { "node": "rh7-x", }, report_codes.FORCE_NODE_DOES_NOT_EXIST )] assert_report_item_list_equal(reporter.report_item_list, report) def test_node_invalid_force(self): reporter = MockLibraryReportProcessor() lib._validate_target_valuewise( reporter, self.state, TARGET_TYPE_NODE, "rh7-x", force_node=True ) report = [( severity.WARNING, report_codes.NODE_NOT_FOUND, { "node": "rh7-x", }, None )] assert_report_item_list_equal(reporter.report_item_list, report) def test_node_invalid_not_forceable(self): reporter = MockLibraryReportProcessor() lib._validate_target_valuewise( reporter, self.state, TARGET_TYPE_NODE, "rh7-x", allow_force=False ) report = [( severity.ERROR, report_codes.NODE_NOT_FOUND, { "node": "rh7-x", }, None )] assert_report_item_list_equal(reporter.report_item_list, report) class ValidateDevices(TestCase): def setUp(self): self.resources_el = etree.fromstring(""" """) def test_success(self): reporter = MockLibraryReportProcessor() lib._validate_devices( reporter, self.resources_el, ["stonith1"] ) lib._validate_devices( reporter, self.resources_el, ["stonith1", "stonith2"] ) assert_report_item_list_equal(reporter.report_item_list, []) def test_empty(self): reporter = MockLibraryReportProcessor() lib._validate_devices(reporter, self.resources_el, []) report = [( severity.ERROR, report_codes.REQUIRED_OPTION_IS_MISSING, { "option_type": None, "option_names": ["stonith devices"], }, None )] assert_report_item_list_equal(reporter.report_item_list, report) def test_invalid(self): reporter = MockLibraryReportProcessor() lib._validate_devices(reporter, self.resources_el, ["dummy", "fenceX"]) report = [( severity.ERROR, report_codes.STONITH_RESOURCES_DO_NOT_EXIST, { "stonith_ids": ["dummy", "fenceX"], }, report_codes.FORCE_STONITH_RESOURCE_DOES_NOT_EXIST )] assert_report_item_list_equal(reporter.report_item_list, report) def test_invalid_forced(self): reporter = MockLibraryReportProcessor() lib._validate_devices( reporter, self.resources_el, ["dummy", "fenceX"], force_device=True ) report = [( severity.WARNING, 
report_codes.STONITH_RESOURCES_DO_NOT_EXIST, { "stonith_ids": ["dummy", "fenceX"], }, None )] assert_report_item_list_equal(reporter.report_item_list, report) def test_node_invalid_not_forceable(self): reporter = MockLibraryReportProcessor() lib._validate_devices( reporter, self.resources_el, ["dummy", "fenceX"], allow_force=False ) report = [( severity.ERROR, report_codes.STONITH_RESOURCES_DO_NOT_EXIST, { "stonith_ids": ["dummy", "fenceX"], }, None )] assert_report_item_list_equal(reporter.report_item_list, report) @patch_lib("_find_level_elements") class ValidateLevelTargetDevicesDoesNotExist(TestCase): def test_success(self, mock_find): mock_find.return_value = [] reporter = MockLibraryReportProcessor() lib._validate_level_target_devices_does_not_exist( reporter, "tree", "level", "target_type", "target_value", "devices" ) mock_find.assert_called_once_with( "tree", "level", "target_type", "target_value", "devices" ) assert_report_item_list_equal(reporter.report_item_list, []) def test_error(self, mock_find): mock_find.return_value = ["element"] reporter = MockLibraryReportProcessor() lib._validate_level_target_devices_does_not_exist( reporter, "tree", "level", "target_type", "target_value", "devices" ) mock_find.assert_called_once_with( "tree", "level", "target_type", "target_value", "devices" ) report = [( severity.ERROR, report_codes.CIB_FENCING_LEVEL_ALREADY_EXISTS, { "devices": "devices", "target_type": "target_type", "target_value": "target_value", "level": "level", }, None )] assert_report_item_list_equal(reporter.report_item_list, report) class AppendLevelElement(TestCase): def setUp(self): self.tree = etree.fromstring("") def test_node_name(self): lib._append_level_element( self.tree, 1, TARGET_TYPE_NODE, "node1", ["d1"] ) assert_xml_equal( """ """, etree_to_str(self.tree) ) def test_node_pattern(self): lib._append_level_element( self.tree, "2", TARGET_TYPE_REGEXP, "node-\d+", ["d1", "d2"] ) assert_xml_equal( """ """, etree_to_str(self.tree) ) def test_node_attribute(self): lib._append_level_element( self.tree, 3, TARGET_TYPE_ATTRIBUTE, ("name%@x", "val%@x"), ["d1"], ) assert_xml_equal( """ """, etree_to_str(self.tree) ) class FindLevelElements(TestCase, CibMixin): def setUp(self): self.tree = self.get_cib() def get_ids(self, elements): return [el.get("id") for el in elements] def test_no_filter(self): self.assertEqual( self.get_ids(lib._find_level_elements(self.tree)), [ "fl1", "fl2", "fl3", "fl4", "fl5", "fl6", "fl7", "fl8", "fl9", "fl10" ] ) def test_no_such_level(self): self.assertEqual( self.get_ids(lib._find_level_elements( self.tree, level=2, target_type=TARGET_TYPE_NODE, target_value="nodeB", devices=["d5"] )), [] ) def test_level(self): self.assertEqual( self.get_ids(lib._find_level_elements( self.tree, level=1 )), ["fl1", "fl3", "fl5"] ) def test_target_node(self): self.assertEqual( self.get_ids(lib._find_level_elements( self.tree, target_type=TARGET_TYPE_NODE, target_value="nodeB" )), ["fl3", "fl4"] ) def test_target_pattern(self): self.assertEqual( self.get_ids(lib._find_level_elements( self.tree, target_type=TARGET_TYPE_REGEXP, target_value="node-R.*" )), ["fl9"] ) def test_target_attribute(self): self.assertEqual( self.get_ids(lib._find_level_elements( self.tree, target_type=TARGET_TYPE_ATTRIBUTE, target_value=("fencing", "improved") )), ["fl7", "fl8"] ) def test_devices(self): self.assertEqual( self.get_ids(lib._find_level_elements( self.tree, devices=["d3"] )), ["fl2", "fl4"] ) self.assertEqual( self.get_ids(lib._find_level_elements( self.tree, devices=["d1", 
"d2"] )), ["fl1"] ) def test_combination(self): self.assertEqual( self.get_ids(lib._find_level_elements( self.tree, 2, TARGET_TYPE_NODE, "nodeB", ["d3"] )), ["fl4"] ) pcs-0.9.164/pcs/lib/cib/test/test_node.py000066400000000000000000000173261326265502500201040ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from pcs.common import report_codes from pcs.lib.errors import ReportItemSeverity as severity from pcs.lib.pacemaker.state import ClusterState from pcs.test.tools.assertions import ( assert_raise_library_error, assert_xml_equal, ) from pcs.test.tools.pcs_unittest import TestCase, mock from pcs.test.tools.xml import etree_to_str from pcs.lib.cib import node @mock.patch("pcs.lib.cib.node._ensure_node_exists") class UpdateNodeInstanceAttrs(TestCase): def setUp(self): self.node1 = etree.fromstring(""" """) self.node2 = etree.fromstring(""" """) self.node3 = etree.fromstring(""" """) self.cib = etree.fromstring(""" {0}{1}{2} """.format(*[ etree_to_str(el) for el in [self.node1, self.node2, self.node3] ])) self.state = "node state list" def test_empty_node(self, mock_get_node): mock_get_node.return_value = self.node1 node.update_node_instance_attrs( self.cib, "rh73-node1", {"x": "X"}, self.state ) assert_xml_equal( etree_to_str(self.node1), """ """ ) def test_exisitng_attrs(self, mock_get_node): mock_get_node.return_value = self.node2 node.update_node_instance_attrs( self.cib, "rh73-node2", {"a": "", "b": "b", "x": "X"}, self.state ) assert_xml_equal( etree_to_str(self.node2), """ """ ) def test_multiple_attrs_sets(self, mock_get_node): mock_get_node.return_value = self.node3 node.update_node_instance_attrs( self.cib, "rh73-node3", {"x": "X"}, self.state ) assert_xml_equal( etree_to_str(self.node3), """ """ ) class EnsureNodeExists(TestCase): def setUp(self): self.node1 = etree.fromstring(""" """) self.node2 = etree.fromstring(""" """) self.nodes = etree.Element("nodes") self.nodes.append(self.node1) self.state = ClusterState(""" """).node_section.nodes def test_node_already_exists(self): assert_xml_equal( etree_to_str(node._ensure_node_exists(self.nodes, "name-test1")), etree_to_str(self.node1) ) def test_node_missing_no_state(self): assert_raise_library_error( lambda: node._ensure_node_exists(self.nodes, "name-missing"), ( severity.ERROR, report_codes.NODE_NOT_FOUND, {"node": "name-missing"}, None ), ) def test_node_missing_not_in_state(self): assert_raise_library_error( lambda: node._ensure_node_exists( self.nodes, "name-missing", self.state ), ( severity.ERROR, report_codes.NODE_NOT_FOUND, {"node": "name-missing"}, None ), ) def test_node_missing_and_gets_created(self): assert_xml_equal( etree_to_str( node._ensure_node_exists(self.nodes, "name-test2", self.state) ), etree_to_str(self.node2) ) class GetNodeByUname(TestCase): def setUp(self): self.node1 = etree.fromstring(""" """) self.node2 = etree.fromstring(""" """) self.nodes = etree.Element("nodes") self.nodes.append(self.node1) self.nodes.append(self.node2) def test_found(self): assert_xml_equal( etree_to_str(node._get_node_by_uname(self.nodes, "name-test1")), """""" ) def test_not_found(self): self.assertTrue( node._get_node_by_uname(self.nodes, "id-test1") is None ) class CreateNode(TestCase): def setUp(self): self.nodes = etree.Element("nodes") def test_minimal(self): node._create_node(self.nodes, "id-test", "name-test") assert_xml_equal( """ """, etree_to_str(self.nodes) ) def test_with_type(self): node._create_node(self.nodes, "id-test", "name-test", 
"type-test") assert_xml_equal( """ """, etree_to_str(self.nodes) ) pcs-0.9.164/pcs/lib/cib/test/test_nvpair.py000066400000000000000000000327251326265502500204560ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from pcs.lib.cib import nvpair from pcs.lib.cib.tools import IdProvider from pcs.test.tools.assertions import assert_xml_equal from pcs.test.tools.pcs_unittest import TestCase, mock from pcs.test.tools.xml import etree_to_str class AppendNewNvpair(TestCase): def test_append_new_nvpair_to_given_element(self): nvset_element = etree.fromstring('') nvpair._append_new_nvpair(nvset_element, "b", "c") assert_xml_equal( etree_to_str(nvset_element), """ """ ) def test_with_id_provider(self): nvset_element = etree.fromstring('') provider = IdProvider(nvset_element) provider.book_ids("a-b") nvpair._append_new_nvpair(nvset_element, "b", "c", provider) assert_xml_equal( etree_to_str(nvset_element), """ """ ) class UpdateNvsetTest(TestCase): @mock.patch( "pcs.lib.cib.nvpair.create_subelement_id", mock.Mock(return_value="4") ) def test_updates_nvset(self): nvset_element = etree.fromstring(""" """) nvpair.update_nvset(nvset_element, { "a": "B", "c": "", "g": "h", }) assert_xml_equal( """ """, etree_to_str(nvset_element) ) def test_empty_value_has_no_effect(self): xml = """ """ nvset_element = etree.fromstring(xml) nvpair.update_nvset(nvset_element, {}) assert_xml_equal(xml, etree_to_str(nvset_element)) def test_remove_empty_nvset(self): xml_pre = """ """ xml_post = """ """ xml = etree.fromstring(xml_pre) nvset_element = xml.find("instance_attributes") nvpair.update_nvset(nvset_element, {"a": ""}) assert_xml_equal(xml_post, etree_to_str(xml)) class SetNvpairInNvsetTest(TestCase): def setUp(self): self.nvset = etree.Element("nvset", id="nvset") etree.SubElement( self.nvset, "nvpair", id="nvset-attr", name="attr", value="1" ) etree.SubElement( self.nvset, "nvpair", id="nvset-attr2", name="attr2", value="2" ) etree.SubElement( self.nvset, "notnvpair", id="nvset-test", name="test", value="0" ) def test_update(self): nvpair.set_nvpair_in_nvset(self.nvset, "attr", "10") assert_xml_equal( """ """, etree_to_str(self.nvset) ) def test_add(self): nvpair.set_nvpair_in_nvset(self.nvset, "test", "0") assert_xml_equal( """ """, etree_to_str(self.nvset) ) def test_remove(self): nvpair.set_nvpair_in_nvset(self.nvset, "attr2", "") assert_xml_equal( """ """, etree_to_str(self.nvset) ) def test_remove_not_existing(self): nvpair.set_nvpair_in_nvset(self.nvset, "attr3", "") assert_xml_equal( """ """, etree_to_str(self.nvset) ) class AppendNewNvsetTest(TestCase): def test_append_new_nvset_to_given_element(self): context_element = etree.fromstring('') nvpair.append_new_nvset("instance_attributes", context_element, { "a": "b", "c": "d", }) assert_xml_equal( """ """, etree_to_str(context_element) ) def test_with_id_provider(self): context_element = etree.fromstring('') provider = IdProvider(context_element) provider.book_ids("a-instance_attributes", "a-instance_attributes-1-a") nvpair.append_new_nvset( "instance_attributes", context_element, { "a": "b", "c": "d", }, provider ) assert_xml_equal( """ """, etree_to_str(context_element) ) class ArrangeFirstNvsetTest(TestCase): def setUp(self): self.root = etree.Element("root", id="root") self.nvset = etree.SubElement(self.root, "nvset", id="nvset") etree.SubElement( self.nvset, "nvpair", id="nvset-attr", name="attr", value="1" ) etree.SubElement( self.nvset, "nvpair", id="nvset-attr2", name="attr2", 
value="2" ) etree.SubElement( self.nvset, "notnvpair", id="nvset-test", name="test", value="0" ) def test_empty_value_has_no_effect(self): nvpair.arrange_first_nvset("nvset", self.root, {}) assert_xml_equal( """ """, etree_to_str(self.nvset) ) def test_update_existing_nvset(self): nvpair.arrange_first_nvset("nvset", self.root, { "attr": "10", "new_one": "20", "test": "0", "attr2": "" }) assert_xml_equal( """ """, etree_to_str(self.nvset) ) def test_create_new_nvset_if_does_not_exist(self): root = etree.Element("root", id="root") nvpair.arrange_first_nvset("nvset", root, { "attr": "10", "new_one": "20", "test": "0", "attr2": "" }) assert_xml_equal( """ """, etree_to_str(root) ) class GetNvsetTest(TestCase): def test_success(self): nvset = etree.XML( """ """ ) self.assertEqual( [ { "id": "nvset-name1", "name": "name1", "value": "value1" }, { "id": "nvset-name2", "name": "name2", "value": "value2" }, { "id": "nvset-name3", "name": "name3", "value": "" } ], nvpair.get_nvset(nvset) ) class GetValue(TestCase): def assert_find_value(self, tag_name, name, value, xml, default=None): self.assertEqual( value, nvpair.get_value(tag_name, etree.fromstring(xml), name, default) ) def test_return_value_when_name_exists(self): self.assert_find_value( "meta_attributes", "SOME-NAME", "some-value", """ """, ) def test_return_none_when_name_not_exists(self): self.assert_find_value( "instance_attributes", "SOME-NAME", value=None, xml=""" """, ) def test_return_default_when_name_not_exists(self): self.assert_find_value( "instance_attributes", "SOME-NAME", value="DEFAULT", xml=""" """, default="DEFAULT", ) def test_return_none_when_no_nvpair(self): self.assert_find_value( "instance_attributes", "SOME-NAME", value=None, xml=""" """, ) def test_return_none_when_no_nvset(self): self.assert_find_value( "instance_attributes", "SOME-NAME", value=None, xml=""" """, ) class HasMetaAttribute(TestCase): def test_return_false_if_does_not_have_such_attribute(self): resource_element = etree.fromstring("""""") self.assertFalse( nvpair.has_meta_attribute(resource_element, "attr_name") ) def test_return_true_if_such_meta_attribute_exists(self): resource_element = etree.fromstring(""" """) self.assertTrue( nvpair.has_meta_attribute(resource_element, "attr_name") ) def test_return_false_if_meta_attribute_exists_but_in_nested_element(self): resource_element = etree.fromstring(""" """) self.assertFalse( nvpair.has_meta_attribute(resource_element, "attr_name") ) pcs-0.9.164/pcs/lib/cib/test/test_resource_bundle.py000066400000000000000000000022761326265502500223350ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from pcs.lib.cib.resource import bundle from pcs.test.tools.pcs_unittest import TestCase # pcs.lib.cib.resource.bundle is covered by: # - pcs.lib.commands.test.resource.test_bundle_create # - pcs.lib.commands.test.resource.test_bundle_update # - pcs.lib.commands.test.resource.test_resource_create class IsBundle(TestCase): def test_is_bundle(self): self.assertTrue(bundle.is_bundle(etree.fromstring(""))) self.assertFalse(bundle.is_bundle(etree.fromstring(""))) self.assertFalse(bundle.is_bundle(etree.fromstring(""))) class GetInnerResource(TestCase): def assert_inner_resource(self, resource_id, xml): self.assertEqual( resource_id, bundle.get_inner_resource(etree.fromstring(xml)).get("id", "") ) def test_primitive(self): self.assert_inner_resource( "A", """ """ ) 
pcs-0.9.164/pcs/lib/cib/test/test_resource_clone.py000066400000000000000000000064211326265502500221600ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from pcs.lib.cib.resource import clone from pcs.test.tools.pcs_unittest import TestCase from pcs.test.tools.assertions import assert_xml_equal class AppendNewCommon(TestCase): def setUp(self): self.cib = etree.fromstring(""" """) self.resources = self.cib.find(".//resources") self.primitive = self.cib.find(".//primitive") def assert_clone_effect(self, options, xml): clone.append_new( clone.TAG_CLONE, self.resources, self.primitive, options ) assert_xml_equal(etree.tostring(self.cib).decode(), xml) def test_add_without_options(self): self.assert_clone_effect({}, """ """) def test_add_with_options(self): self.assert_clone_effect({"a": "b"}, """ """) class IsAnyClone(TestCase): def test_is_clone(self): self.assertTrue(clone.is_clone(etree.fromstring(""))) self.assertFalse(clone.is_clone(etree.fromstring(""))) self.assertFalse(clone.is_clone(etree.fromstring(""))) def test_is_master(self): self.assertTrue(clone.is_master(etree.fromstring(""))) self.assertFalse(clone.is_master(etree.fromstring(""))) self.assertFalse(clone.is_master(etree.fromstring(""))) def test_is_any_clone(self): self.assertTrue(clone.is_any_clone(etree.fromstring(""))) self.assertTrue(clone.is_any_clone(etree.fromstring(""))) self.assertFalse(clone.is_any_clone(etree.fromstring(""))) class GetInnerResource(TestCase): def assert_inner_resource(self, resource_id, xml): self.assertEqual( resource_id, clone.get_inner_resource(etree.fromstring(xml)).get("id", "") ) def test_primitive(self): self.assert_inner_resource( "A", """ """ ) def test_group(self): self.assert_inner_resource( "A", """ """ ) pcs-0.9.164/pcs/lib/cib/test/test_resource_common.py000066400000000000000000000442101326265502500223460ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from pcs.lib.cib.resource import common from pcs.test.tools.assertions import assert_xml_equal from pcs.test.tools.pcs_unittest import TestCase from pcs.test.tools.xml import etree_to_str fixture_cib = etree.fromstring(""" """) class AreMetaDisabled(TestCase): def test_detect_is_disabled(self): self.assertTrue(common.are_meta_disabled({"target-role": "Stopped"})) self.assertTrue(common.are_meta_disabled({"target-role": "stopped"})) def test_detect_is_not_disabled(self): self.assertFalse(common.are_meta_disabled({})) self.assertFalse(common.are_meta_disabled({"target-role": "any"})) class IsCloneDeactivatedByMeta(TestCase): def assert_is_disabled(self, meta_attributes): self.assertTrue(common.is_clone_deactivated_by_meta(meta_attributes)) def assert_is_not_disabled(self, meta_attributes): self.assertFalse(common.is_clone_deactivated_by_meta(meta_attributes)) def test_detect_is_disabled(self): self.assert_is_disabled({"target-role": "Stopped"}) self.assert_is_disabled({"target-role": "stopped"}) self.assert_is_disabled({"clone-max": "0"}) self.assert_is_disabled({"clone-max": "00"}) self.assert_is_disabled({"clone-max": 0}) self.assert_is_disabled({"clone-node-max": "0"}) self.assert_is_disabled({"clone-node-max": "abc1"}) def test_detect_is_not_disabled(self): self.assert_is_not_disabled({}) self.assert_is_not_disabled({"target-role": "any"}) self.assert_is_not_disabled({"clone-max": "1"}) self.assert_is_not_disabled({"clone-max": "01"}) self.assert_is_not_disabled({"clone-max": 1}) 
self.assert_is_not_disabled({"clone-node-max": "1"}) self.assert_is_not_disabled({"clone-node-max": 1}) self.assert_is_not_disabled({"clone-node-max": "1abc"}) self.assert_is_not_disabled({"clone-node-max": "1.1"}) class FindPrimitives(TestCase): def assert_find_resources(self, input_resource_id, output_resource_ids): self.assertEqual( output_resource_ids, [ element.get("id", "") for element in common.find_primitives( fixture_cib.find( './/*[@id="{0}"]'.format(input_resource_id) ) ) ] ) def test_primitive(self): self.assert_find_resources("A", ["A"]) def test_primitive_in_clone(self): self.assert_find_resources("B", ["B"]) def test_primitive_in_master(self): self.assert_find_resources("C", ["C"]) def test_primitive_in_group(self): self.assert_find_resources("D1", ["D1"]) self.assert_find_resources("D2", ["D2"]) self.assert_find_resources("E1", ["E1"]) self.assert_find_resources("E2", ["E2"]) self.assert_find_resources("F1", ["F1"]) self.assert_find_resources("F2", ["F2"]) def test_primitive_in_bundle(self): self.assert_find_resources("H", ["H"]) def test_group(self): self.assert_find_resources("D", ["D1", "D2"]) def test_group_in_clone(self): self.assert_find_resources("E", ["E1", "E2"]) def test_group_in_master(self): self.assert_find_resources("F", ["F1", "F2"]) def test_cloned_primitive(self): self.assert_find_resources("B-clone", ["B"]) def test_cloned_group(self): self.assert_find_resources("E-clone", ["E1", "E2"]) def test_mastered_primitive(self): self.assert_find_resources("C-master", ["C"]) def test_mastered_group(self): self.assert_find_resources("F-master", ["F1", "F2"]) def test_bundle_empty(self): self.assert_find_resources("G-bundle", []) def test_bundle_with_primitive(self): self.assert_find_resources("H-bundle", ["H"]) class FindResourcesToEnable(TestCase): def assert_find_resources(self, input_resource_id, output_resource_ids): self.assertEqual( output_resource_ids, [ element.get("id", "") for element in common.find_resources_to_enable( fixture_cib.find( './/*[@id="{0}"]'.format(input_resource_id) ) ) ] ) def test_primitive(self): self.assert_find_resources("A", ["A"]) def test_primitive_in_clone(self): self.assert_find_resources("B", ["B", "B-clone"]) def test_primitive_in_master(self): self.assert_find_resources("C", ["C", "C-master"]) def test_primitive_in_group(self): self.assert_find_resources("D1", ["D1"]) self.assert_find_resources("D2", ["D2"]) self.assert_find_resources("E1", ["E1"]) self.assert_find_resources("E2", ["E2"]) self.assert_find_resources("F1", ["F1"]) self.assert_find_resources("F2", ["F2"]) def test_primitive_in_bundle(self): self.assert_find_resources("H", ["H", "H-bundle"]) def test_group(self): self.assert_find_resources("D", ["D"]) def test_group_in_clone(self): self.assert_find_resources("E", ["E", "E-clone"]) def test_group_in_master(self): self.assert_find_resources("F", ["F", "F-master"]) def test_cloned_primitive(self): self.assert_find_resources("B-clone", ["B-clone", "B"]) def test_cloned_group(self): self.assert_find_resources("E-clone", ["E-clone", "E"]) def test_mastered_primitive(self): self.assert_find_resources("C-master", ["C-master", "C"]) def test_mastered_group(self): self.assert_find_resources("F-master", ["F-master", "F"]) def test_bundle_empty(self): self.assert_find_resources("G-bundle", ["G-bundle"]) def test_bundle_with_primitive(self): self.assert_find_resources("H-bundle", ["H-bundle", "H"]) class Enable(TestCase): def assert_enabled(self, pre, post): resource = etree.fromstring(pre) common.enable(resource) 
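        # enable() operates on the first meta_attributes set only (see the
        # "only_first_meta" tests below): it clears target-role, and since an
        # empty value removes an nvpair (per the nvpair semantics tested
        # above), the "Stopped" pair written by disable() disappears.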
assert_xml_equal(post, etree_to_str(resource)) def test_disabled(self): self.assert_enabled( """ """, """ """ ) def test_enabled(self): self.assert_enabled( """ """, """ """ ) def test_only_first_meta(self): # this captures the current behavior # once pcs supports more instance and meta attributes for each resource, # this test should be reconsidered self.assert_enabled( """ """, """ """ ) class Disable(TestCase): def assert_disabled(self, pre, post): resource = etree.fromstring(pre) common.disable(resource) assert_xml_equal(post, etree_to_str(resource)) def test_disabled(self): xml = """ """ self.assert_disabled(xml, xml) def test_enabled(self): self.assert_disabled( """ """, """ """ ) def test_only_first_meta(self): # this captures the current behavior # once pcs supports more instance and meta attributes for each resource, # this test should be reconsidered self.assert_disabled( """ """, """ """ ) class FindResourcesToManage(TestCase): def assert_find_resources(self, input_resource_id, output_resource_ids): self.assertEqual( output_resource_ids, [ element.get("id", "") for element in common.find_resources_to_manage( fixture_cib.find( './/*[@id="{0}"]'.format(input_resource_id) ) ) ] ) def test_primitive(self): self.assert_find_resources("A", ["A"]) def test_primitive_in_clone(self): self.assert_find_resources("B", ["B", "B-clone"]) def test_primitive_in_master(self): self.assert_find_resources("C", ["C", "C-master"]) def test_primitive_in_group(self): self.assert_find_resources("D1", ["D1", "D"]) self.assert_find_resources("D2", ["D2", "D"]) self.assert_find_resources("E1", ["E1", "E-clone", "E"]) self.assert_find_resources("E2", ["E2", "E-clone", "E"]) self.assert_find_resources("F1", ["F1", "F-master", "F"]) self.assert_find_resources("F2", ["F2", "F-master", "F"]) def test_primitive_in_bundle(self): self.assert_find_resources("H", ["H", "H-bundle"]) def test_group(self): self.assert_find_resources("D", ["D", "D1", "D2"]) def test_group_in_clone(self): self.assert_find_resources("E", ["E", "E-clone", "E1", "E2"]) def test_group_in_master(self): self.assert_find_resources("F", ["F", "F-master", "F1", "F2"]) def test_cloned_primitive(self): self.assert_find_resources("B-clone", ["B-clone", "B"]) def test_cloned_group(self): self.assert_find_resources("E-clone", ["E-clone", "E", "E1", "E2"]) def test_mastered_primitive(self): self.assert_find_resources("C-master", ["C-master", "C"]) def test_mastered_group(self): self.assert_find_resources("F-master", ["F-master", "F", "F1", "F2"]) def test_bundle_empty(self): self.assert_find_resources("G-bundle", ["G-bundle"]) def test_bundle_with_primitive(self): self.assert_find_resources("H-bundle", ["H-bundle", "H"]) class FindResourcesToUnmanage(TestCase): def assert_find_resources(self, input_resource_id, output_resource_ids): self.assertEqual( output_resource_ids, [ element.get("id", "") for element in common.find_resources_to_unmanage( fixture_cib.find( './/*[@id="{0}"]'.format(input_resource_id) ) ) ] ) def test_primitive(self): self.assert_find_resources("A", ["A"]) def test_primitive_in_clone(self): self.assert_find_resources("B", ["B"]) def test_primitive_in_master(self): self.assert_find_resources("C", ["C"]) def test_primitive_in_group(self): self.assert_find_resources("D1", ["D1"]) self.assert_find_resources("D2", ["D2"]) self.assert_find_resources("E1", ["E1"]) self.assert_find_resources("E2", ["E2"]) self.assert_find_resources("F1", ["F1"]) self.assert_find_resources("F2", ["F2"]) def test_primitive_in_bundle(self): 
self.assert_find_resources("H", ["H"]) def test_group(self): self.assert_find_resources("D", ["D1", "D2"]) def test_group_in_clone(self): self.assert_find_resources("E", ["E1", "E2"]) def test_group_in_master(self): self.assert_find_resources("F", ["F1", "F2"]) def test_cloned_primitive(self): self.assert_find_resources("B-clone", ["B"]) def test_cloned_group(self): self.assert_find_resources("E-clone", ["E1", "E2"]) def test_mastered_primitive(self): self.assert_find_resources("C-master", ["C"]) def test_mastered_group(self): self.assert_find_resources("F-master", ["F1", "F2"]) def test_bundle_empty(self): self.assert_find_resources("G-bundle", ["G-bundle"]) def test_bundle_with_primitive(self): self.assert_find_resources("H-bundle", ["H-bundle", "H"]) class Manage(TestCase): def assert_managed(self, pre, post): resource = etree.fromstring(pre) common.manage(resource) assert_xml_equal(post, etree_to_str(resource)) def test_unmanaged(self): self.assert_managed( """ """, """ """ ) def test_managed(self): self.assert_managed( """ """, """ """ ) def test_only_first_meta(self): # this captures the current behavior # once pcs supports more instance and meta attributes for each resource, # this test should be reconsidered self.assert_managed( """ """, """ """ ) class Unmanage(TestCase): def assert_unmanaged(self, pre, post): resource = etree.fromstring(pre) common.unmanage(resource) assert_xml_equal(post, etree_to_str(resource)) def test_unmanaged(self): xml = """ """ self.assert_unmanaged(xml, xml) def test_managed(self): self.assert_unmanaged( """ """, """ """ ) def test_only_first_meta(self): # this captures the current behavior # once pcs supports more instance and meta attributes for each resource, # this test should be reconsidered self.assert_unmanaged( """ """, """ """ ) pcs-0.9.164/pcs/lib/cib/test/test_resource_group.py000066400000000000000000000121241326265502500222110ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from pcs.common import report_codes from pcs.lib.cib.resource import group from pcs.lib.errors import ReportItemSeverity as severities from pcs.test.tools.assertions import assert_raise_library_error, assert_xml_equal from pcs.test.tools.pcs_unittest import TestCase, mock class IsGroup(TestCase): def test_is_group(self): self.assertTrue(group.is_group(etree.fromstring(""))) self.assertFalse(group.is_group(etree.fromstring(""))) self.assertFalse(group.is_group(etree.fromstring(""))) @mock.patch("pcs.lib.cib.resource.group.find_element_by_tag_and_id") class ProvideGroup(TestCase): def setUp(self): self.cib = etree.fromstring( '' ) self.group_element = self.cib.find('.//group') self.resources_section = self.cib.find('.//resources') def test_search_in_whole_tree(self, find_element_by_tag_and_id): def find_group(*args, **kwargs): return self.group_element find_element_by_tag_and_id.side_effect = find_group self.assertEqual( self.group_element, group.provide_group(self.resources_section, "g") ) def test_create_group_when_not_exists(self, find_element_by_tag_and_id): find_element_by_tag_and_id.return_value = None group_element = group.provide_group(self.resources_section, "g2") self.assertEqual('group', group_element.tag) self.assertEqual('g2', group_element.attrib["id"]) class PlaceResource(TestCase): def setUp(self): self.group_element = etree.fromstring(""" """) self.primitive_element = etree.Element("primitive", {"id": "c"}) def assert_final_order( self, id_list=None, adjacent_resource_id=None, 
put_after_adjacent=False ): group.place_resource( self.group_element, self.primitive_element, adjacent_resource_id, put_after_adjacent ) assert_xml_equal( etree.tostring(self.group_element).decode(), """ """.format(*id_list) ) def test_append_at_the_end_when_adjacent_is_not_specified(self): self.assert_final_order(["a", "b", "c"]) def test_insert_before_adjacent(self): self.assert_final_order(["c", "a", "b"], "a") def test_insert_after_adjacent(self): self.assert_final_order(["a", "c", "b"], "a", put_after_adjacent=True) def test_insert_after_adjacent_which_is_last(self): self.assert_final_order(["a", "b", "c"], "b", put_after_adjacent=True) def test_refuse_to_put_next_to_the_same_resource_id(self): assert_raise_library_error( lambda: group.place_resource( self.group_element, self.primitive_element, adjacent_resource_id="c", ), ( severities.ERROR, report_codes.RESOURCE_CANNOT_BE_NEXT_TO_ITSELF_IN_GROUP, { "group_id": "g", "resource_id": "c", }, ), ) def test_raises_when_adjacent_resource_not_in_group(self): assert_raise_library_error( lambda: group.place_resource( self.group_element, self.primitive_element, adjacent_resource_id="r", ), ( severities.ERROR, report_codes.ID_NOT_FOUND, { "id": "r", "expected_types": ["primitive"], "context_type": "group", "context_id": "g", }, None ), ) class GetInnerResource(TestCase): def assert_inner_resource(self, resource_id, xml): self.assertEqual( resource_id, [ element.attrib.get("id", "") for element in group.get_inner_resources(etree.fromstring(xml)) ] ) def test_one(self): self.assert_inner_resource( ["A"], """ """ ) def test_more(self): self.assert_inner_resource( ["A", "C", "B"], """ """ ) pcs-0.9.164/pcs/lib/cib/test/test_resource_guest_node.py000066400000000000000000000355301326265502500232170ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from pcs.common import report_codes from pcs.lib.cib.resource import guest_node from pcs.lib.errors import ReportItemSeverity as severities from pcs.test.tools.assertions import( assert_xml_equal, assert_report_item_list_equal, ) from pcs.test.tools.misc import create_setup_patch_mixin from pcs.test.tools.pcs_unittest import TestCase from pcs.lib.node import NodeAddresses SetupPatchMixin = create_setup_patch_mixin(guest_node) class ValidateHostConflicts(TestCase): def validate(self, node_name, options): tree = etree.fromstring(""" """) nodes = [ NodeAddresses("RING0", "RING1", name="R1"), NodeAddresses("REMOTE_CONFLICT", name="B"), NodeAddresses("GUEST_CONFLICT", name="GUEST_CONFLICT"), NodeAddresses("GUEST_ADDR_CONFLICT", name="some"), ] return guest_node.validate_conflicts(tree, nodes, node_name, options) def assert_already_exists_error( self, conflict_name, node_name, options=None ): assert_report_item_list_equal( self.validate(node_name, options if options else {}), [ ( severities.ERROR, report_codes.ID_ALREADY_EXISTS, { "id": conflict_name, }, None ), ] ) def test_report_conflict_with_id(self): self.assert_already_exists_error("CONFLICT", "CONFLICT") def test_report_conflict_guest_node(self): self.assert_already_exists_error("GUEST_CONFLICT", "GUEST_CONFLICT") def test_report_conflict_guest_addr(self): self.assert_already_exists_error( "GUEST_ADDR_CONFLICT", "GUEST_ADDR_CONFLICT", ) def test_report_conflict_guest_addr_by_addr(self): self.assert_already_exists_error( "GUEST_ADDR_CONFLICT", "GUEST_ADDR_CONFLICT", ) def test_no_conflict_guest_node_whe_addr_is_different(self): self.assertEqual([], self.validate("GUEST_ADDR_CONFLICT", { 
"remote-addr": "different", })) def test_report_conflict_remote_node(self): self.assert_already_exists_error("REMOTE_CONFLICT", "REMOTE_CONFLICT") def test_no_conflict_remote_node_whe_addr_is_different(self): self.assertEqual([], self.validate("REMOTE_CONFLICT", { "remote-addr": "different", })) def test_report_conflict_remote_node_by_addr(self): self.assert_already_exists_error("REMOTE_CONFLICT", "different", { "remote-addr": "REMOTE_CONFLICT", }) class ValidateOptions(TestCase): def validate(self, options, name="some_name"): return guest_node.validate_set_as_guest( etree.fromstring(''), [NodeAddresses( "EXISTING-HOST-RING0", "EXISTING-HOST-RING0", name="EXISTING-HOST-NAME" )], name, options ) def test_no_report_on_valid(self): self.assertEqual( [], self.validate({}, "node1") ) def test_report_invalid_option(self): assert_report_item_list_equal( self.validate({"invalid": "invalid"}, "node1"), [ ( severities.ERROR, report_codes.INVALID_OPTIONS, { "option_type": "guest", "option_names": ["invalid"], "allowed": sorted(guest_node.GUEST_OPTIONS), "allowed_patterns": [], }, None ), ] ) def test_report_invalid_interval(self): assert_report_item_list_equal( self.validate({"remote-connect-timeout": "invalid"}, "node1"), [ ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "remote-connect-timeout", "option_value": "invalid", }, None ), ] ) def test_report_invalid_node_name(self): assert_report_item_list_equal( self.validate({}, "EXISTING-HOST-NAME"), [ ( severities.ERROR, report_codes.ID_ALREADY_EXISTS, { "id": "EXISTING-HOST-NAME", }, None ), ] ) class ValidateInNotGuest(TestCase): #guest_node.is_guest_node is tested here as well def test_no_report_on_non_guest(self): self.assertEqual( [], guest_node.validate_is_not_guest(etree.fromstring("")) ) def test_report_when_is_guest(self): assert_report_item_list_equal( guest_node.validate_is_not_guest(etree.fromstring(""" """)), [ ( severities.ERROR, report_codes.RESOURCE_IS_GUEST_NODE_ALREADY, { "resource_id": "resource_id", }, None ), ] ) class SetAsGuest(TestCase): def test_set_guest_meta_correctly(self): resource_element = etree.fromstring('') guest_node.set_as_guest(resource_element, "node1", connect_timeout="10") assert_xml_equal( etree.tostring(resource_element).decode(), """ """ ) class UnsetGuest(TestCase): def test_unset_all_guest_attributes(self): resource_element = etree.fromstring(""" """) guest_node.unset_guest(resource_element) assert_xml_equal( etree.tostring(resource_element).decode(), """ """ ) def test_unset_all_guest_attributes_and_empty_meta_tag(self): resource_element = etree.fromstring(""" """) guest_node.unset_guest(resource_element) assert_xml_equal( etree.tostring(resource_element).decode(), '' ) class FindNodeList(TestCase, SetupPatchMixin): def assert_find_meta_attributes(self, xml, meta_attributes_xml_list): get_node = self.setup_patch("get_node", return_value=None) self.assertEquals( [None] * len(meta_attributes_xml_list), guest_node.find_node_list(etree.fromstring(xml)) ) for i, call in enumerate(get_node.mock_calls): assert_xml_equal( meta_attributes_xml_list[i], etree.tostring(call[1][0]).decode() ) def test_get_no_nodes_when_no_primitives(self): self.assert_find_meta_attributes("", []) def test_get_no_nodes_when_no_meta_remote_node(self): self.assert_find_meta_attributes( """ """, [] ) def test_get_multiple_nodes(self): self.assert_find_meta_attributes( """ """, [ """ """, """ """, ] ) class GetNode(TestCase): def assert_node(self, xml, expected_node): node = 
guest_node.get_node(etree.fromstring(xml)) self.assertEquals(expected_node, (node.ring0, node.name)) def test_return_none_when_is_not_guest_node(self): self.assertIsNone(guest_node.get_node(etree.fromstring( """ """ ))) def test_return_same_host_and_name_when_remote_node_only(self): self.assert_node( """ """, ("G1", "G1") ) def test_return_different_host_and_name_when_remote_addr_there(self): self.assert_node( """ """, ("G1addr", "G1") ) class GetHost(TestCase): def assert_find_host(self, host, xml): self.assertEqual(host, guest_node.get_host(etree.fromstring(xml))) def test_return_host_from_remote_addr(self): self.assert_find_host("HOST", """ """) def test_return_host_from_remote_node(self): self.assert_find_host("HOST", """ """) def test_return_none(self): self.assert_find_host(None, """ """) class FindNodeResources(TestCase): def assert_return_resources(self, identifier): resources_section = etree.fromstring(""" """) self.assertEquals( "RESOURCE_ID", guest_node.find_node_resources(resources_section, identifier)[0] .attrib["id"] ) def test_return_resources_by_resource_id(self): self.assert_return_resources("RESOURCE_ID") def test_return_resources_by_node_name(self): self.assert_return_resources("NODE_NAME") def test_return_resources_by_node_host(self): self.assert_return_resources("NODE_HOST") def test_no_result_when_no_guest_nodes(self): resources_section = etree.fromstring( '' ) self.assertEquals([], guest_node.find_node_resources( resources_section, "RESOURCE_ID" )) pcs-0.9.164/pcs/lib/cib/test/test_resource_operations.py000066400000000000000000000332261326265502500232460ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial from lxml import etree from pcs.common import report_codes from pcs.lib.cib.resource import operations from pcs.lib.errors import ReportItemSeverity as severities from pcs.lib.validate import ValuePair from pcs.test.tools.assertions import assert_report_item_list_equal from pcs.test.tools.custom_mock import MockLibraryReportProcessor from pcs.test.tools.misc import create_patcher from pcs.test.tools.pcs_unittest import TestCase, mock patch_operations = create_patcher("pcs.lib.cib.resource.operations") @patch_operations("get_remaining_defaults") @patch_operations("complete_all_intervals") @patch_operations("validate_different_intervals") @patch_operations("validate_operation_list") @patch_operations("normalized_to_operations") @patch_operations("operations_to_normalized") class Prepare(TestCase): def test_prepare( self, operations_to_normalized, normalized_to_operations, validate_operation_list, validate_different_intervals, complete_all_intervals, get_remaining_defaults ): validate_operation_list.return_value = ["options_report"] validate_different_intervals.return_value = [ "different_interval_report" ] operations_to_normalized.return_value = [ {"name": ValuePair("Start", "start")}, {"name": ValuePair("Monitor", "monitor")}, ] normalized_to_operations.return_value = [ {"name": "start"}, {"name": "monitor"}, ] report_processor = mock.MagicMock() raw_operation_list = [ {"name": "Start"}, {"name": "Monitor"}, ] default_operation_list = [ {"name": "stop"}, ] allowed_operation_name_list = ["start", "stop", "monitor"] allow_invalid = True operations.prepare( report_processor, raw_operation_list, default_operation_list, allowed_operation_name_list, allow_invalid, ) operations_to_normalized.assert_called_once_with(raw_operation_list) normalized_to_operations.assert_called_once_with( 
operations_to_normalized.return_value ) validate_operation_list.assert_called_once_with( operations_to_normalized.return_value, allowed_operation_name_list, allow_invalid ) validate_different_intervals.assert_called_once_with( normalized_to_operations.return_value ) complete_all_intervals.assert_called_once_with( normalized_to_operations.return_value ) get_remaining_defaults.assert_called_once_with( report_processor, normalized_to_operations.return_value, default_operation_list ) report_processor.process_list.assert_called_once_with([ "options_report", "different_interval_report", ]) class ValidateDifferentIntervals(TestCase): def test_return_empty_reports_on_empty_list(self): operations.validate_different_intervals([]) def test_return_empty_reports_on_operations_without_duplication(self): operations.validate_different_intervals([ {"name": "monitor", "interval": "10s"}, {"name": "monitor", "interval": "5s"}, {"name": "start", "interval": "5s"}, ]) def test_return_report_on_duplicated_intervals(self): assert_report_item_list_equal( operations.validate_different_intervals([ {"name": "monitor", "interval": "3600s"}, {"name": "monitor", "interval": "60m"}, {"name": "monitor", "interval": "1h"}, {"name": "monitor", "interval": "60s"}, {"name": "monitor", "interval": "1m"}, {"name": "monitor", "interval": "5s"}, ]), [( severities.ERROR, report_codes.RESOURCE_OPERATION_INTERVAL_DUPLICATION, { "duplications": { "monitor": [ ["3600s", "60m", "1h"], ["60s", "1m"], ], }, }, )] ) class MakeUniqueIntervals(TestCase): def setUp(self): self.report_processor = MockLibraryReportProcessor() self.run = partial( operations.make_unique_intervals, self.report_processor ) def test_return_copy_input_when_no_interval_duplication(self): operation_list = [ {"name": "monitor", "interval": "10s"}, {"name": "monitor", "interval": "5s"}, {"name": "monitor", }, {"name": "monitor", "interval": ""}, {"name": "start", "interval": "5s"}, ] self.assertEqual(operation_list, self.run(operation_list)) def test_adopt_duplicit_values(self): self.assertEqual( self.run([ {"name": "monitor", "interval": "60s"}, {"name": "monitor", "interval": "1m"}, {"name": "monitor", "interval": "5s"}, {"name": "monitor", "interval": "6s"}, {"name": "monitor", "interval": "5s"}, {"name": "start", "interval": "5s"}, ]), [ {"name": "monitor", "interval": "60s"}, {"name": "monitor", "interval": "61"}, {"name": "monitor", "interval": "5s"}, {"name": "monitor", "interval": "6s"}, {"name": "monitor", "interval": "7"}, {"name": "start", "interval": "5s"}, ] ) assert_report_item_list_equal(self.report_processor.report_item_list, [ ( severities.WARNING, report_codes.RESOURCE_OPERATION_INTERVAL_ADAPTED, { "operation_name": "monitor", "original_interval": "1m", "adapted_interval": "61", }, ), ( severities.WARNING, report_codes.RESOURCE_OPERATION_INTERVAL_ADAPTED, { "operation_name": "monitor", "original_interval": "5s", "adapted_interval": "7", }, ), ]) def test_keep_duplicit_values_when_are_not_valid_interval(self): self.assertEqual( self.run([ {"name": "monitor", "interval": "some"}, {"name": "monitor", "interval": "some"}, ]), [ {"name": "monitor", "interval": "some"}, {"name": "monitor", "interval": "some"}, ] ) class Normalize(TestCase): def test_return_operation_with_the_same_values(self): operation = { "name": "monitor", "role": "Master", "timeout": "10", } self.assertEqual(operation, dict([ (key, operations.normalize(key, value)) for key, value in operation.items() ])) def test_return_operation_with_normalized_values(self): self.assertEqual( { "name": 
"monitor", "role": "Master", "timeout": "10", "requires": "nothing", "on-fail": "ignore", "record-pending": "true", "enabled": "1", }, dict([(key, operations.normalize(key, value)) for key, value in { "name": "monitor", "role": "master", "timeout": "10", "requires": "Nothing", "on-fail": "Ignore", "record-pending": "True", "enabled": "1", }.items()]) ) class ValidateOperation(TestCase): def assert_operation_produces_report(self, operation, report_list): assert_report_item_list_equal( operations.validate_operation_list( [operation], ["monitor"], ), report_list ) def test_return_empty_report_on_valid_operation(self): self.assert_operation_produces_report( { "name": "monitor", "role": "Master" }, [] ) def test_validate_all_individual_options(self): self.assertEqual( ["REQUIRES REPORT", "ROLE REPORT"], sorted(operations.validate_operation({"name": "monitor"}, [ mock.Mock(return_value=["ROLE REPORT"]), mock.Mock(return_value=["REQUIRES REPORT"]), ])) ) def test_return_error_when_unknown_operation_attribute(self): self.assert_operation_produces_report( { "name": "monitor", "unknown": "invalid", }, [ ( severities.ERROR, report_codes.INVALID_OPTIONS, { "option_names": ["unknown"], "option_type": "resource operation", "allowed": sorted(operations.ATTRIBUTES), "allowed_patterns": [], }, None ), ], ) def test_return_errror_when_missing_key_name(self): self.assert_operation_produces_report( { "role": "Master" }, [ ( severities.ERROR, report_codes.REQUIRED_OPTION_IS_MISSING, { "option_names": ["name"], "option_type": "resource operation", }, None ), ], ) def test_return_error_when_both_interval_origin_and_start_delay(self): self.assert_operation_produces_report( { "name": "monitor", "interval-origin": "a", "start-delay": "b", }, [ ( severities.ERROR, report_codes.MUTUALLY_EXCLUSIVE_OPTIONS, { "option_names": ["interval-origin", "start-delay"], "option_type": "resource operation", }, None ), ], ) def test_return_error_on_invalid_id(self): self.assert_operation_produces_report( { "name": "monitor", "id": "a#b", }, [ ( severities.ERROR, report_codes.INVALID_ID, { "id": "a#b", "id_description": "operation id", "invalid_character": "#", "is_first_char": False, }, None ), ], ) class GetRemainingDefaults(TestCase): @mock.patch("pcs.lib.cib.resource.operations.make_unique_intervals") def test_returns_remining_operations(self, make_unique_intervals): make_unique_intervals.side_effect = ( lambda report_processor, operations: operations ) self.assertEqual( operations.get_remaining_defaults( report_processor=None, operation_list =[{"name": "monitor"}], default_operation_list=[{"name": "monitor"}, {"name": "start"}] ), [{"name": "start"}] ) class GetResourceOperations(TestCase): resource_el = etree.fromstring(""" """) resource_noop_el = etree.fromstring(""" """) def assert_op_list(self, op_list, expected_ids): self.assertEqual( [op.attrib.get("id") for op in op_list], expected_ids ) def test_all_operations(self): self.assert_op_list( operations.get_resource_operations(self.resource_el), ["dummy-start", "dummy-stop", "dummy-monitor-m", "dummy-monitor-s"] ) def test_filter_operations(self): self.assert_op_list( operations.get_resource_operations(self.resource_el, ["start"]), ["dummy-start"] ) def test_filter_more_operations(self): self.assert_op_list( operations.get_resource_operations( self.resource_el, ["monitor", "stop"] ), ["dummy-stop", "dummy-monitor-m", "dummy-monitor-s"] ) def test_filter_none(self): self.assert_op_list( operations.get_resource_operations(self.resource_el, ["promote"]), [] ) def 
test_no_operations(self): self.assert_op_list( operations.get_resource_operations(self.resource_noop_el), [] ) pcs-0.9.164/pcs/lib/cib/test/test_resource_primitive.py000066400000000000000000000057561326265502500231020ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial from lxml import etree from pcs.lib.cib.resource import primitive from pcs.test.tools.pcs_unittest import TestCase, mock @mock.patch("pcs.lib.cib.resource.primitive.append_new_instance_attributes") @mock.patch("pcs.lib.cib.resource.primitive.append_new_meta_attributes") @mock.patch("pcs.lib.cib.resource.primitive.create_operations") class AppendNew(TestCase): def setUp(self): self.resources_section = etree.fromstring("") self.instance_attributes = {"a": "b"} self.meta_attributes = {"c": "d"} self.operation_list = [{"name": "monitoring"}] self.run = partial( primitive.append_new, self.resources_section, instance_attributes=self.instance_attributes, meta_attributes=self.meta_attributes, operation_list=self.operation_list, ) def check_mocks( self, primitive_element, create_operations, append_new_meta_attributes, append_new_instance_attributes, ): create_operations.assert_called_once_with( primitive_element, self.operation_list ) append_new_meta_attributes.assert_called_once_with( primitive_element, self.meta_attributes ) append_new_instance_attributes.assert_called_once_with( primitive_element, self.instance_attributes ) def test_append_without_provider( self, create_operations, append_new_meta_attributes, append_new_instance_attributes, ): primitive_element = self.run("RESOURCE_ID", "OCF", None, "DUMMY") self.assertEqual( primitive_element, self.resources_section.find(".//primitive") ) self.assertEqual(primitive_element.attrib["class"], "OCF") self.assertEqual(primitive_element.attrib["type"], "DUMMY") self.assertFalse(primitive_element.attrib.has_key("provider")) self.check_mocks( primitive_element, create_operations, append_new_meta_attributes, append_new_instance_attributes, ) def test_append_with_provider( self, create_operations, append_new_meta_attributes, append_new_instance_attributes, ): primitive_element = self.run("RESOURCE_ID", "OCF", "HEARTBEAT", "DUMMY") self.assertEqual( primitive_element, self.resources_section.find(".//primitive") ) self.assertEqual(primitive_element.attrib["class"], "OCF") self.assertEqual(primitive_element.attrib["type"], "DUMMY") self.assertEqual(primitive_element.attrib["provider"], "HEARTBEAT") self.check_mocks( primitive_element, create_operations, append_new_meta_attributes, append_new_instance_attributes, ) pcs-0.9.164/pcs/lib/cib/test/test_resource_remote_node.py000066400000000000000000000207421326265502500233620ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from pcs.common import report_codes from pcs.lib.cib.resource import remote_node from pcs.lib.errors import ReportItemSeverity as severities from pcs.lib.node import NodeAddresses from pcs.test.tools.assertions import assert_report_item_list_equal from pcs.test.tools.pcs_unittest import TestCase, mock class FindNodeList(TestCase): def assert_nodes_equals(self, xml, expected_nodes): self.assertEquals( expected_nodes, [ (node.ring0, node.name) for node in remote_node.find_node_list(etree.fromstring(xml)) ] ) def test_find_multiple_nodes(self): self.assert_nodes_equals( """ """, [ ("H1", "R1"), ("H2", "R2"), ] ) def test_find_no_nodes(self): self.assert_nodes_equals( """ """, [] ) def 
test_find_nodes_without_server(self): self.assert_nodes_equals( """ """, [ ("R1", "R1"), ] ) def test_find_nodes_with_empty_server(self): #it does not work, but the node "R1" is visible as remote node in the #status self.assert_nodes_equals( """ """, [ ("R1", "R1"), ] ) class FindNodeResources(TestCase): def assert_resources_equals(self, node_identifier, xml, resource_id_list): self.assertEqual( resource_id_list, [ resource_element.attrib["id"] for resource_element in remote_node.find_node_resources( etree.fromstring(xml), node_identifier ) ] ) def test_find_all_resources(self): self.assert_resources_equals( "HOST", """ """, ["R1", "R2"] ) def test_find_by_resource_id(self): self.assert_resources_equals( "HOST", """ """, ["HOST"] ) def test_ignore_non_remote_primitives(self): self.assert_resources_equals( "HOST", """ """, [] ) class GetHost(TestCase): def test_return_host_when_there(self): self.assertEqual( "HOST", remote_node.get_host(etree.fromstring(""" """)) ) def test_return_none_when_host_not_found(self): self.assertIsNone(remote_node.get_host(etree.fromstring(""" """))) def test_return_none_when_primitive_is_without_agent(self): case_list = [ '', '', '', ] for case in case_list: self.assertIsNone( remote_node.get_host(etree.fromstring(case)), "for '{0}' is not returned None".format(case) ) def test_return_host_from_resource_id(self): self.assertEqual( "R", remote_node.get_host(etree.fromstring(""" """)) ) class Validate(TestCase): def validate( self, instance_attributes=None, node_name="NODE-NAME", host="node-host" ): nodes = [ NodeAddresses("RING0", "RING1", name="R"), ] resource_agent = mock.MagicMock() return remote_node.validate_create( nodes, resource_agent, host, node_name, instance_attributes if instance_attributes else {}, ) def test_report_conflict_node_name(self): assert_report_item_list_equal( self.validate( node_name="R", host="host", ), [ ( severities.ERROR, report_codes.ID_ALREADY_EXISTS, { "id": "R", }, None ) ] ) def test_report_conflict_node_host(self): assert_report_item_list_equal( self.validate( host="RING0", ), [ ( severities.ERROR, report_codes.ID_ALREADY_EXISTS, { "id": "RING0", }, None ) ] ) def test_report_conflict_node_host_ring1(self): assert_report_item_list_equal( self.validate( host="RING1", ), [ ( severities.ERROR, report_codes.ID_ALREADY_EXISTS, { "id": "RING1", }, None ) ] ) def test_report_used_disallowed_server(self): assert_report_item_list_equal( self.validate( instance_attributes={"server": "A"} ), [ ( severities.ERROR, report_codes.INVALID_OPTIONS, { 'option_type': 'resource', 'option_names': ['server'], 'allowed': [], "allowed_patterns": [], }, None ) ] ) pcs-0.9.164/pcs/lib/cib/test/test_resource_set.py000066400000000000000000000072251326265502500216560ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from lxml import etree from pcs.common import report_codes from pcs.lib.cib.constraint import resource_set from pcs.lib.errors import ReportItemSeverity as severities from pcs.test.tools.assertions import( assert_raise_library_error, assert_xml_equal ) from pcs.test.tools.pcs_unittest import mock class PrepareSetTest(TestCase): def test_return_corrected_resource_set(self): find_valid_id = mock.Mock() find_valid_id.side_effect = lambda id: {"A": "AA", "B": "BB"}[id] self.assertEqual( {"ids": ["AA", "BB"], "options": {"sequential": "true"}}, resource_set.prepare_set(find_valid_id, { "ids": ["A", "B"], "options": {"sequential": "true"} }) ) def 
test_refuse_invalid_attribute_name(self): assert_raise_library_error( lambda: resource_set.prepare_set(mock.Mock(), { "ids": ["A", "B"], "options": {"invalid_name": "true"} }), ( severities.ERROR, report_codes.INVALID_OPTIONS, { "option_names": ["invalid_name"], "option_type": None, "allowed": ["action", "require-all", "role", "sequential"], "allowed_patterns": [], }), ) def test_refuse_invalid_attribute_value(self): assert_raise_library_error( lambda: resource_set.prepare_set(mock.Mock(), { "ids": ["A", "B"], "options": {"role": "invalid"} }), (severities.ERROR, report_codes.INVALID_OPTION_VALUE, { 'option_name': 'role', 'allowed_values': ('Stopped', 'Started', 'Master', 'Slave'), 'option_value': 'invalid', }), ) class ExtractIdListTest(TestCase): def test_return_id_list_from_resource_set_list(self): self.assertEqual( [["A", "B"], ["C", "D"]], resource_set.extract_id_set_list([ {"ids": ["A", "B"], "options": {}}, {"ids": ["C", "D"], "options": {}}, ]) ) class CreateTest(TestCase): def test_resource_set_to_parent(self): constraint_element = etree.Element("constraint") resource_set.create( constraint_element, {"ids": ["A", "B"], "options": {"sequential": "true"}}, ) assert_xml_equal(etree.tostring(constraint_element).decode(), """ """) class GetResourceIdListTest(TestCase): def test_returns_id_list_from_element(self): element = etree.Element("resource_set") for id in ("A", "B"): etree.SubElement(element, "resource_ref").attrib["id"] = id self.assertEqual( ["A", "B"], resource_set.get_resource_id_set_list(element) ) class ExportTest(TestCase): def test_returns_element_in_dict_representation(self): element = etree.Element("resource_set") element.attrib.update({"role": "Master"}) for id in ("A", "B"): etree.SubElement(element, "resource_ref").attrib["id"] = id self.assertEqual( {'options': {'role': 'Master'}, 'ids': ['A', 'B']}, resource_set.export(element) ) pcs-0.9.164/pcs/lib/cib/test/test_sections.py000066400000000000000000000040161326265502500207760ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from pcs.common import report_codes from pcs.lib.cib import sections from pcs.lib.errors import ReportItemSeverity as severities from pcs.test.tools.assertions import( assert_xml_equal, assert_raise_library_error ) from pcs.test.tools.pcs_unittest import TestCase from pcs.test.tools.xml import etree_to_str class Get(TestCase): def setUp(self): self.tree = etree.fromstring( """ """ ) def assert_element_content(self, section_element, expected_xml): assert_xml_equal(etree_to_str(section_element), expected_xml) def test_get_existing_mandatory(self): self.assert_element_content( sections.get(self.tree, sections.CONFIGURATION), """ """ ) def test_get_existing_optinal(self): self.assert_element_content( sections.get(self.tree, sections.ACLS), "" ) def test_get_no_existing_optinal(self): self.assert_element_content( sections.get(self.tree, sections.ALERTS), "" ) self.assert_element_content( self.tree, """ """ ) def test_raises_on_no_existing_mandatory_section(self): assert_raise_library_error( lambda: sections.get(self.tree, sections.NODES), ( severities.ERROR, report_codes.CIB_CANNOT_FIND_MANDATORY_SECTION, { "section": "configuration/nodes", } ), ) pcs-0.9.164/pcs/lib/cib/test/test_tools.py000066400000000000000000000501601326265502500203100ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial from lxml import etree from pcs.test.tools.pcs_unittest import 
TestCase from pcs.test.tools.assertions import ( assert_raise_library_error, assert_report_item_list_equal, ) from pcs.test.tools import fixture from pcs.test.tools.misc import get_test_resource as rc from pcs.test.tools.pcs_unittest import mock from pcs.test.tools.xml import get_xml_manipulation_creator_from_file from pcs.common import report_codes from pcs.common.tools import Version from pcs.lib.errors import ReportItemSeverity as severities from pcs.lib.cib import tools as lib class CibToolsTest(TestCase): def setUp(self): self.create_cib = get_xml_manipulation_creator_from_file( rc("cib-empty.xml") ) self.cib = self.create_cib() def fixture_add_primitive_with_id(self, element_id): self.cib.append_to_first_tag_name( "resources", '' .format(element_id) ) class IdProviderTest(CibToolsTest): def setUp(self): super(IdProviderTest, self).setUp() self.provider = lib.IdProvider(self.cib.tree) def fixture_report(self, id): return ( severities.ERROR, report_codes.ID_ALREADY_EXISTS, { "id": id, }, None ) class IdProviderBook(IdProviderTest): def test_nonexisting_id(self): assert_report_item_list_equal( self.provider.book_ids("myId"), [] ) def test_existing_id(self): self.fixture_add_primitive_with_id("myId") assert_report_item_list_equal( self.provider.book_ids("myId"), [ self.fixture_report("myId"), ] ) def test_double_book(self): assert_report_item_list_equal( self.provider.book_ids("myId"), [] ) assert_report_item_list_equal( self.provider.book_ids("myId"), [ self.fixture_report("myId"), ] ) def test_more_ids(self): assert_report_item_list_equal( self.provider.book_ids("myId1", "myId2"), [] ) assert_report_item_list_equal( self.provider.book_ids("myId1", "myId2"), [ self.fixture_report("myId1"), self.fixture_report("myId2"), ] ) def test_complex(self): # test ids existing in the cib, double booked, available # test reports not repeated self.fixture_add_primitive_with_id("myId1") self.fixture_add_primitive_with_id("myId2") assert_report_item_list_equal( self.provider.book_ids( "myId1", "myId2", "myId3", "myId2", "myId3", "myId4", "myId3" ), [ self.fixture_report("myId1"), self.fixture_report("myId2"), self.fixture_report("myId3"), ] ) class IdProviderAllocate(IdProviderTest): def test_nonexisting_id(self): self.assertEqual("myId", self.provider.allocate_id("myId")) def test_existing_id(self): self.fixture_add_primitive_with_id("myId") self.assertEqual("myId-1", self.provider.allocate_id("myId")) def test_allocate_books(self): self.assertEqual("myId", self.provider.allocate_id("myId")) self.assertEqual("myId-1", self.provider.allocate_id("myId")) def test_booked_ids(self): self.fixture_add_primitive_with_id("myId") assert_report_item_list_equal( self.provider.book_ids("myId-1"), [] ) self.assertEqual("myId-2", self.provider.allocate_id("myId")) class DoesIdExistTest(CibToolsTest): def test_existing_id(self): self.fixture_add_primitive_with_id("myId") self.assertTrue(lib.does_id_exist(self.cib.tree, "myId")) def test_nonexisting_id(self): self.fixture_add_primitive_with_id("myId") self.assertFalse(lib.does_id_exist(self.cib.tree, "otherId")) self.assertFalse(lib.does_id_exist(self.cib.tree, "myid")) self.assertFalse(lib.does_id_exist(self.cib.tree, " myId")) self.assertFalse(lib.does_id_exist(self.cib.tree, "myId ")) self.assertFalse(lib.does_id_exist(self.cib.tree, "my Id")) def test_ignore_status_section(self): self.cib.append_to_first_tag_name("status", """ """) self.assertFalse(lib.does_id_exist(self.cib.tree, "status-1")) self.assertFalse(lib.does_id_exist(self.cib.tree, "status-1a")) 
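        # ids anywhere under /cib/status are deliberately invisible to
        # does_id_exist -- the status section may still reference previously
        # deleted resources, and matching those ids would prevent creating
        # them again (see the xpath in pcs/lib/cib/tools.py below).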
self.assertFalse(lib.does_id_exist(self.cib.tree, "status-1aa")) self.assertFalse(lib.does_id_exist(self.cib.tree, "status-1ab")) self.assertFalse(lib.does_id_exist(self.cib.tree, "status-1b")) self.assertFalse(lib.does_id_exist(self.cib.tree, "status-1ba")) self.assertFalse(lib.does_id_exist(self.cib.tree, "status-1bb")) def test_ignore_acl_target(self): self.cib.append_to_first_tag_name( "configuration", """ """ ) self.assertFalse(lib.does_id_exist(self.cib.tree, "target1")) def test_ignore_acl_role_references(self): self.cib.append_to_first_tag_name( "configuration", """ """ ) self.assertFalse(lib.does_id_exist(self.cib.tree, "role1")) self.assertFalse(lib.does_id_exist(self.cib.tree, "role2")) def test_ignore_sections_directly_under_cib(self): #this is side effect of current implementation but is not problem since #id attribute is not allowed for elements directly under cib tree = etree.fromstring('') self.assertFalse(lib.does_id_exist(tree, "a")) def test_find_id_when_cib_is_not_root_element(self): #for example we have only part of xml tree = etree.fromstring('') self.assertTrue(lib.does_id_exist(tree, "a")) def test_find_remote_node_pacemaker_internal_id(self): tree = etree.fromstring(""" """) self.assertTrue(lib.does_id_exist(tree, "a")) class FindUniqueIdTest(CibToolsTest): def test_already_unique(self): self.fixture_add_primitive_with_id("myId") self.assertEqual("other", lib.find_unique_id(self.cib.tree, "other")) def test_add_suffix(self): self.fixture_add_primitive_with_id("myId") self.assertEqual("myId-1", lib.find_unique_id(self.cib.tree, "myId")) self.fixture_add_primitive_with_id("myId-1") self.assertEqual("myId-2", lib.find_unique_id(self.cib.tree, "myId")) def test_suffix_not_needed(self): self.fixture_add_primitive_with_id("myId-1") self.assertEqual("myId", lib.find_unique_id(self.cib.tree, "myId")) def test_add_first_available_suffix(self): self.fixture_add_primitive_with_id("myId") self.fixture_add_primitive_with_id("myId-1") self.fixture_add_primitive_with_id("myId-3") self.assertEqual("myId-2", lib.find_unique_id(self.cib.tree, "myId")) def test_reserved_ids(self): self.fixture_add_primitive_with_id("myId-1") self.assertEqual( "myId-3", lib.find_unique_id(self.cib.tree, "myId", ["myId", "myId-2"]) ) class CreateNvsetIdTest(TestCase): def test_create_plain_id_when_no_confilicting_id_there(self): context = etree.fromstring('') self.assertEqual( "b-name", lib.create_subelement_id(context.find(".//a"), "name") ) def test_create_decorated_id_when_conflicting_id_there(self): context = etree.fromstring( '' ) self.assertEqual( "b-name-1", lib.create_subelement_id(context.find(".//a"), "name") ) class GetConfigurationTest(CibToolsTest): def test_success_if_exists(self): self.assertEqual( "configuration", lib.get_configuration(self.cib.tree).tag ) def test_raise_if_missing(self): for conf in self.cib.tree.findall(".//configuration"): conf.getparent().remove(conf) assert_raise_library_error( lambda: lib.get_configuration(self.cib.tree), ( severities.ERROR, report_codes.CIB_CANNOT_FIND_MANDATORY_SECTION, { "section": "configuration", } ), ) class GetConstraintsTest(CibToolsTest): def test_success_if_exists(self): self.assertEqual( "constraints", lib.get_constraints(self.cib.tree).tag ) def test_raise_if_missing(self): for section in self.cib.tree.findall(".//configuration/constraints"): section.getparent().remove(section) assert_raise_library_error( lambda: lib.get_constraints(self.cib.tree), ( severities.ERROR, report_codes.CIB_CANNOT_FIND_MANDATORY_SECTION, { "section": 
"configuration/constraints", } ), ) class GetResourcesTest(CibToolsTest): def test_success_if_exists(self): self.assertEqual( "resources", lib.get_resources(self.cib.tree).tag ) def test_raise_if_missing(self): for section in self.cib.tree.findall(".//configuration/resources"): section.getparent().remove(section) assert_raise_library_error( lambda: lib.get_resources(self.cib.tree), ( severities.ERROR, report_codes.CIB_CANNOT_FIND_MANDATORY_SECTION, { "section": "configuration/resources", } ), ) class GetNodes(CibToolsTest): def test_success_if_exists(self): self.assertEqual( "nodes", lib.get_nodes(self.cib.tree).tag ) def test_raise_if_missing(self): for section in self.cib.tree.findall(".//configuration/nodes"): section.getparent().remove(section) assert_raise_library_error( lambda: lib.get_nodes(self.cib.tree), ( severities.ERROR, report_codes.CIB_CANNOT_FIND_MANDATORY_SECTION, { "section": "configuration/nodes", }, None ), ) class GetAclsTest(CibToolsTest): def test_success_if_exists(self): self.cib.append_to_first_tag_name( "configuration", '' ) self.assertEqual( "test_role", lib.get_acls(self.cib.tree)[0].get("id") ) def test_success_if_missing(self): acls = lib.get_acls(self.cib.tree) self.assertEqual("acls", acls.tag) self.assertEqual("configuration", acls.getparent().tag) class GetFencingTopology(CibToolsTest): def test_success_if_exists(self): self.cib.append_to_first_tag_name( "configuration", "" ) self.assertEqual( "fencing-topology", lib.get_fencing_topology(self.cib.tree).tag ) def test_success_if_missing(self): ft = lib.get_fencing_topology(self.cib.tree) self.assertEqual("fencing-topology", ft.tag) self.assertEqual("configuration", ft.getparent().tag) @mock.patch('pcs.lib.cib.tools.does_id_exist') class ValidateIdDoesNotExistsTest(TestCase): def test_success_when_id_does_not_exists(self, does_id_exists): does_id_exists.return_value = False lib.validate_id_does_not_exist("tree", "some-id") does_id_exists.assert_called_once_with("tree", "some-id") def test_raises_whne_id_exists(self, does_id_exists): does_id_exists.return_value = True assert_raise_library_error( lambda: lib.validate_id_does_not_exist("tree", "some-id"), ( severities.ERROR, report_codes.ID_ALREADY_EXISTS, {"id": "some-id"}, ), ) does_id_exists.assert_called_once_with("tree", "some-id") class GetPacemakerVersionByWhichCibWasValidatedTest(TestCase): def test_missing_attribute(self): assert_raise_library_error( lambda: lib.get_pacemaker_version_by_which_cib_was_validated( etree.XML("") ), ( severities.ERROR, report_codes.CIB_LOAD_ERROR_BAD_FORMAT, { "reason": "the attribute 'validate-with' of the element" " 'cib' is missing" } ) ) def test_invalid_version(self): assert_raise_library_error( lambda: lib.get_pacemaker_version_by_which_cib_was_validated( etree.XML('') ), ( severities.ERROR, report_codes.CIB_LOAD_ERROR_BAD_FORMAT, { "reason": "the attribute 'validate-with' of the element" " 'cib' has an invalid value: 'something-1.2.3'" } ) ) def test_no_revision(self): self.assertEqual( Version(1, 2), lib.get_pacemaker_version_by_which_cib_was_validated( etree.XML('') ) ) def test_with_revision(self): self.assertEqual( Version(1, 2, 3), lib.get_pacemaker_version_by_which_cib_was_validated( etree.XML('') ) ) class getCibCrmFeatureSet(TestCase): def test_success(self): self.assertEqual( Version(3, 0, 9), lib.get_cib_crm_feature_set( etree.XML('') ) ) def test_success_no_revision(self): self.assertEqual( Version(3, 1), lib.get_cib_crm_feature_set( etree.XML('') ) ) def test_missing_attribute(self): 
assert_raise_library_error( lambda: lib.get_cib_crm_feature_set( etree.XML("") ), fixture.error( report_codes.CIB_LOAD_ERROR_BAD_FORMAT, reason=( "the attribute 'crm_feature_set' of the element 'cib' is " "missing" ) ) ) def test_missing_attribute_none(self): self.assertEqual( None, lib.get_cib_crm_feature_set( etree.XML(''), none_if_missing=True ) ) def test_invalid_version(self): assert_raise_library_error( lambda: lib.get_cib_crm_feature_set( etree.XML('') ), fixture.error( report_codes.CIB_LOAD_ERROR_BAD_FORMAT, reason=( "the attribute 'crm_feature_set' of the element 'cib' has " "an invalid value: '3'" ) ) ) find_group = partial(lib.find_element_by_tag_and_id, "group") class FindTagWithId(TestCase): def test_returns_element_when_exists(self): tree = etree.fromstring( '' ) element = find_group(tree.find(".//resources"), "a") self.assertEqual("group", element.tag) self.assertEqual("a", element.attrib["id"]) def test_returns_element_when_exists_one_of_tags(self): tree = etree.fromstring(""" """) element = lib.find_element_by_tag_and_id( ["group", "primitive"], tree.find(".//resources"), "a" ) self.assertEqual("group", element.tag) self.assertEqual("a", element.attrib["id"]) def test_raises_when_is_under_another_tag(self): tree = etree.fromstring( '' ) assert_raise_library_error( lambda: find_group(tree.find(".//resources"), "a"), ( severities.ERROR, report_codes.ID_BELONGS_TO_UNEXPECTED_TYPE, { "id": "a", "expected_types": ["group"], "current_type": "primitive", }, ), ) def test_raises_when_is_under_another_context(self): tree = etree.fromstring(""" """) assert_raise_library_error( lambda: lib.find_element_by_tag_and_id( "primitive", tree.find('.//resources/group[@id="g2"]'), "a" ), ( severities.ERROR, report_codes.OBJECT_WITH_ID_IN_UNEXPECTED_CONTEXT, { "type": "primitive", "id": "a", "expected_context_type": "group", "expected_context_id": "g2", }, ), ) def test_raises_when_id_does_not_exists(self): tree = etree.fromstring('') assert_raise_library_error( lambda: find_group(tree.find('.//resources'), "a"), ( severities.ERROR, report_codes.ID_NOT_FOUND, { "id": "a", "expected_types": ["group"], "context_type": "resources", "context_id": "", }, None ), ) assert_raise_library_error( lambda: find_group( tree.find('.//resources'), "a", id_types=["resource group"] ), ( severities.ERROR, report_codes.ID_NOT_FOUND, { "id": "a", "expected_types": ["resource group"], "context_type": "resources", "context_id": "", }, None ), ) def test_returns_none_if_id_do_not_exists(self): tree = etree.fromstring('') self.assertIsNone(find_group( tree.find('.//resources'), "a", none_if_id_unused=True )) pcs-0.9.164/pcs/lib/cib/tools.py000066400000000000000000000214261326265502500162750ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import re from pcs.common.tools import is_string, Version from pcs.lib import reports from pcs.lib.cib import sections from pcs.lib.errors import LibraryError from pcs.lib.pacemaker.values import ( sanitize_id, validate_id, ) from pcs.lib.xml_tools import get_root _VERSION_FORMAT = r"(?P\d+)\.(?P\d+)(\.(?P\d+))?" 
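# The three numeric fields of _VERSION_FORMAT are named groups -- they are
# read back as match.group("major"), match.group("minor") and
# match.group("rev") in _get_cib_version below -- i.e. the pattern, with the
# group names written out, is:
#
#     r"(?P<major>\d+)\.(?P<minor>\d+)(\.(?P<rev>\d+))?"
#
# so "1.2.3" matches with major="1", minor="2", rev="3", while "1.2" matches
# with rev=None; the tests above map these to Version(1, 2, 3) and
# Version(1, 2) respectively.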
class IdProvider(object): """ Book ids for future use in the CIB and generate new ids accordingly """ def __init__(self, cib_element): """ etree cib_element -- any element of the xml to being check against """ self._cib = get_root(cib_element) self._booked_ids = set() def allocate_id(self, proposed_id): """ Generate a new unique id based on the proposal and keep track of it string proposed_id -- requested id """ final_id = find_unique_id(self._cib, proposed_id, self._booked_ids) self._booked_ids.add(final_id) return final_id def book_ids(self, *id_list): """ Check if the ids are not already used and reserve them for future use strings *id_list -- ids """ reported_ids = set() report_list = [] for id in id_list: if id in reported_ids: continue if id in self._booked_ids or does_id_exist(self._cib, id): report_list.append(reports.id_already_exists(id)) reported_ids.add(id) continue self._booked_ids.add(id) return report_list def does_id_exist(tree, check_id): """ Checks to see if id exists in the xml dom passed tree cib etree node check_id id to check """ # do not search in /cib/status, it may contain references to previously # existing and deleted resources and thus preventing creating them again #pacemaker creates an implicit resource for the pacemaker_remote connection, #which will be named the same as the value of the remote-node attribute of #the explicit resource. So the value of nvpair named "remote-node" is #considered to be id existing = get_root(tree).xpath(""" ( /cib/*[name()!="status"] | /*[name()!="cib"] ) //*[ ( name()!="acl_target" and name()!="role" and @id="{0}" ) or ( name()="primitive" and meta_attributes[ nvpair[ @name="remote-node" and @value="{0}" ] ] ) ] """.format(check_id)) return len(existing) > 0 def validate_id_does_not_exist(tree, id): """ tree cib etree node """ if does_id_exist(tree, id): raise LibraryError(reports.id_already_exists(id)) def find_unique_id(tree, check_id, reserved_ids=None): """ Returns check_id if it doesn't exist in the dom, otherwise it adds an integer to the end of the id and increments it until a unique id is found etree tree -- cib etree node string check_id -- id to check iterable reserved_ids -- ids to think about as already used """ if not reserved_ids: reserved_ids = set() counter = 1 temp_id = check_id while temp_id in reserved_ids or does_id_exist(tree, temp_id): temp_id = "{0}-{1}".format(check_id, counter) counter += 1 return temp_id def find_element_by_tag_and_id( tag, context_element, element_id, none_if_id_unused=False, id_types=None ): """ Return element with given tag and element_id under context_element. When element does not exists raises LibraryError or return None if specified in none_if_id_unused. 
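A usage sketch for the id helpers above, assuming pcs itself is importable; the CIB snippet and all ids are invented for illustration. It shows how does_id_exist, find_unique_id and IdProvider interact, including the "remote-node" special case described in the comments:

from lxml import etree

from pcs.lib.cib.tools import IdProvider, does_id_exist, find_unique_id

cib = etree.fromstring(
    '<cib><configuration><resources>'
    '<primitive id="vm">'
    '<meta_attributes id="vm-meta">'
    '<nvpair id="vm-meta-rn" name="remote-node" value="vm-node"/>'
    '</meta_attributes>'
    '</primitive>'
    '</resources></configuration></cib>'
)

assert does_id_exist(cib, "vm")          # an element id
assert does_id_exist(cib, "vm-node")     # implicit pacemaker_remote resource
assert not does_id_exist(cib, "free-id")

assert find_unique_id(cib, "vm") == "vm-1"    # "vm" is taken, "-1" appended
provider = IdProvider(cib)
assert provider.allocate_id("vm") == "vm-1"   # taken in the CIB
assert provider.allocate_id("vm") == "vm-2"   # "vm-1" was booked just above
assert provider.book_ids("vm") != []          # reports: id already exists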
etree.Element(Tree) context_element is part of tree for element scan string|list tag is expected tag (or list of tags) of search element string element_id is id of search element bool none_if_id_unused if the element is not found then return None if True or raise a LibraryError if False list id_types optional list of descriptions for id / expected types of id """ tag_list = [tag] if is_string(tag) else tag if id_types is None: id_type_list = tag_list elif is_string(id_types): id_type_list = [id_types] else: id_type_list = id_types element_list = context_element.xpath( './/*[({0}) and @id="{1}"]'.format( " or ".join(["self::{0}".format(one_tag) for one_tag in tag_list]), element_id ) ) if element_list: return element_list[0] element = get_root(context_element).find( './/*[@id="{0}"]'.format(element_id) ) if element is not None: raise LibraryError( reports.id_belongs_to_unexpected_type( element_id, expected_types=tag_list, current_type=element.tag ) if element.tag not in tag_list else reports.object_with_id_in_unexpected_context( element.tag, element_id, context_element.tag, context_element.attrib.get("id", "") ) ) if none_if_id_unused: return None raise LibraryError( reports.id_not_found( element_id, id_type_list, context_element.tag, context_element.attrib.get("id", "") ) ) def create_subelement_id(context_element, suffix, id_provider=None): proposed_id = sanitize_id( "{0}-{1}".format(context_element.get("id"), suffix) ) if id_provider: return id_provider.allocate_id(proposed_id) return find_unique_id(context_element, proposed_id) def check_new_id_applicable(tree, description, id): validate_id(id, description) validate_id_does_not_exist(tree, id) def get_configuration(tree): """ Return 'configuration' element from tree, raise LibraryError if missing tree cib etree node """ return sections.get(tree, sections.CONFIGURATION) def get_acls(tree): """ Return 'acls' element from tree, create a new one if missing tree cib etree node """ return sections.get(tree, sections.ACLS) def get_alerts(tree): """ Return 'alerts' element from tree, create a new one if missing tree -- cib etree node """ return sections.get(tree, sections.ALERTS) def get_constraints(tree): """ Return 'constraint' element from tree tree cib etree node """ return sections.get(tree, sections.CONSTRAINTS) def get_fencing_topology(tree): """ Return the 'fencing-topology' element from the tree tree -- cib etree node """ return sections.get(tree, sections.FENCING_TOPOLOGY) def get_nodes(tree): """ Return 'nodes' element from the tree tree cib etree node """ return sections.get(tree, sections.NODES) def get_resources(tree): """ Return the 'resources' element from the tree tree -- cib etree node """ return sections.get(tree, sections.RESOURCES) def _get_cib_version(cib, attribute, regexp, none_if_missing=False): version = cib.get(attribute) if version is None: if none_if_missing: return None raise LibraryError(reports.cib_load_error_invalid_format( "the attribute '{0}' of the element 'cib' is missing".format( attribute ) )) match = regexp.match(version) if not match: raise LibraryError(reports.cib_load_error_invalid_format( ( "the attribute '{0}' of the element 'cib' has an invalid" " value: '{1}'" ).format(attribute, version) )) return Version( int(match.group("major")), int(match.group("minor")), int(match.group("rev")) if match.group("rev") else None ) def get_pacemaker_version_by_which_cib_was_validated(cib): """ Return version of pacemaker which validated specified cib as tree. 
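A sketch of the lookup behavior documented above, again assuming pcs is importable; the XML is invented:

from lxml import etree

from pcs.lib.cib.tools import find_element_by_tag_and_id

resources = etree.fromstring(
    '<resources><group id="g1"><primitive id="r1"/></group></resources>'
)

# the happy path returns the element itself
assert find_element_by_tag_and_id("group", resources, "g1").tag == "group"

# an unused id returns None instead of raising when asked to
assert find_element_by_tag_and_id(
    "primitive", resources, "no-such-id", none_if_id_unused=True
) is None

# an existing id of a different tag raises LibraryError
# (ID_BELONGS_TO_UNEXPECTED_TYPE):
# find_element_by_tag_and_id("primitive", resources, "g1")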
Version is returned as an instance of pcs.common.tools.Version. Raises LibraryError on any failure. cib -- cib etree """ return _get_cib_version( cib, "validate-with", re.compile(r"pacemaker-{0}".format(_VERSION_FORMAT)) ) def get_cib_crm_feature_set(cib, none_if_missing=False): """ Return crm_feature_set as pcs.common.tools.Version or raise LibraryError etree cib -- cib etree bool none_if_missing -- return None instead of raising when crm_feature_set is missing """ return _get_cib_version( cib, "crm_feature_set", re.compile(_VERSION_FORMAT), none_if_missing=none_if_missing ) pcs-0.9.164/pcs/lib/cluster_conf_facade.py000066400000000000000000000031571326265502500203520ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree from pcs.common.tools import xml_fromstring from pcs.lib import reports from pcs.lib.errors import LibraryError from pcs.lib.node import NodeAddresses, NodeAddressesList class ClusterConfFacade(object): """ Provides high level access to a corosync.conf file """ @classmethod def from_string(cls, config_string): """ Parse cluster.conf config and create a facade around it config_string -- cluster.conf file content as string """ try: return cls(xml_fromstring(config_string)) except (etree.XMLSyntaxError, etree.DocumentInvalid) as e: raise LibraryError(reports.cluster_conf_invalid_format(str(e))) def __init__(self, parsed_config): """ Create a facade around a parsed cluster.conf config file parsed_config parsed cluster.conf config """ self._config = parsed_config @property def config(self): return self._config def get_cluster_name(self): return self.config.get("name", "") def get_nodes(self): """ Get all defined nodes """ result = NodeAddressesList() for node in self.config.findall("./clusternodes/clusternode"): altname = node.find("altname") result.append(NodeAddresses( ring0=node.get("name"), ring1=altname.get("name") if altname is not None else None, name=None, id=node.get("nodeid") )) return result pcs-0.9.164/pcs/lib/commands/000077500000000000000000000000001326265502500156225ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/commands/__init__.py000066400000000000000000000000001326265502500177210ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/commands/acl.py000066400000000000000000000210331326265502500167320ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from contextlib import contextmanager from pcs.common.tools import Version from pcs.lib.cib import acl from pcs.lib.cib.tools import get_acls REQUIRED_CIB_VERSION = Version(2, 0, 0) @contextmanager def cib_acl_section(env): yield get_acls(env.get_cib(REQUIRED_CIB_VERSION)) env.push_cib() def create_role(lib_env, role_id, permission_info_list, description): """ Create new acl role. Raises LibraryError on any failure. lib_env -- LibraryEnvirnoment role_id -- id of new role which should be created permission_info_list -- list of permissons, items of list should be tuples: (, , ) description -- text description for role """ with cib_acl_section(lib_env) as acl_section: if permission_info_list: acl.validate_permissions(acl_section, permission_info_list) role_el = acl.create_role(acl_section, role_id, description) if permission_info_list: acl.add_permissions_to_role(role_el, permission_info_list) def remove_role(lib_env, role_id, autodelete_users_groups=False): """ Remove role with specified id from CIB. Raises LibraryError on any failure. 
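The permission tuples in the docstrings above lost their placeholders during extraction. Judging from the validation done in pcs.lib.cib.acl, each item is likely a (permission, scope-type, scope) triple, with permission one of read/write/deny and the scope given either as an xpath or an element id. A hypothetical call, with the role id and scopes invented:

# lib_env is a LibraryEnvironment, as elsewhere in this module
create_role(
    lib_env,
    "operator-role",
    [
        # (permission, scope-type, scope) -- layout recovered from the
        # surrounding validation code, treat it as an assumption
        ("read", "xpath", "/cib"),
        ("write", "id", "my-resource"),
    ],
    "can read everything, may modify one resource",
)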
    lib_env -- LibraryEnvironment
    role_id -- id of role which should be deleted
    autodelete_users_groups -- if True, targets and groups which are empty
        after the removal will be removed as well
    """
    with cib_acl_section(lib_env) as acl_section:
        acl.remove_role(acl_section, role_id, autodelete_users_groups)


def assign_role_not_specific(lib_env, role_id, target_or_group_id):
    """
    Assign role with id role_id to target or group with id target_or_group_id.
    Targets take precedence: if both a target and a group with the same id
    exist, only the target element will be affected by this function.
    Raises LibraryError on any failure.

    lib_env -- LibraryEnvironment
    role_id -- id of role which should be assigned to target/group
    target_or_group_id -- id of target/group element
    """
    with cib_acl_section(lib_env) as acl_section:
        acl.assign_role(
            acl_section,
            role_id,
            acl.find_target_or_group(acl_section, target_or_group_id),
        )


def assign_role_to_target(lib_env, role_id, target_id):
    """
    Assign role with id role_id to target with id target_id.
    Raises LibraryError on any failure.

    lib_env -- LibraryEnvironment
    role_id -- id of acl_role element which should be assigned to target
    target_id -- id of acl_target element to which role should be assigned
    """
    with cib_acl_section(lib_env) as acl_section:
        acl.assign_role(
            acl_section,
            role_id,
            acl.find_target(acl_section, target_id),
        )


def assign_role_to_group(lib_env, role_id, group_id):
    """
    Assign role with id role_id to group with id group_id.
    Raises LibraryError on any failure.

    lib_env -- LibraryEnvironment
    role_id -- id of acl_role element which should be assigned to group
    group_id -- id of acl_group element to which role should be assigned
    """
    with cib_acl_section(lib_env) as acl_section:
        acl.assign_role(
            acl_section,
            role_id,
            acl.find_group(acl_section, group_id),
        )


def unassign_role_not_specific(
    lib_env, role_id, target_or_group_id, autodelete_target_group=False
):
    """
    Unassign role with role_id from target/group with id target_or_group_id.
    Targets take precedence: if both a target and a group with the same id
    exist, only the target element will be affected by this function.
    Raises LibraryError on any failure.

    lib_env -- LibraryEnvironment
    role_id -- id of role which should be unassigned from target/group
    target_or_group_id -- id of acl_target/acl_group element
    autodelete_target_group -- if True, remove the target/group element if it
        has no more roles assigned
    """
    with cib_acl_section(lib_env) as acl_section:
        acl.unassign_role(
            acl.find_target_or_group(acl_section, target_or_group_id),
            role_id,
            autodelete_target_group
        )


def unassign_role_from_target(
    lib_env, role_id, target_id, autodelete_target=False
):
    """
    Unassign role with role_id from target with id target_id.
    Raises LibraryError on any failure.

    lib_env -- LibraryEnvironment
    role_id -- id of role which should be unassigned from target
    target_id -- id of acl_target element
    autodelete_target -- if True, remove the target element if it has no more
        roles assigned
    """
    with cib_acl_section(lib_env) as acl_section:
        acl.unassign_role(
            acl.find_target(acl_section, target_id),
            role_id,
            autodelete_target
        )


def unassign_role_from_group(
    lib_env, role_id, group_id, autodelete_group=False
):
    """
    Unassign role with role_id from group with id group_id.
    Raises LibraryError on any failure.
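Every command above follows the cib_acl_section pattern defined at the top of this module: load the CIB once, operate on its acls section, push back only if nothing raised. Restated generically as a sketch (not pcs API; the names are mine):

from contextlib import contextmanager

@contextmanager
def _edit_cib_section(lib_env, get_section, required_cib_version):
    # load the CIB and hand the requested section to the "with" body
    section = get_section(lib_env.get_cib(required_cib_version))
    yield section
    # reached only if the body did not raise, so a failed validation
    # never writes anything back to the cluster
    lib_env.push_cib()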
lib_env -- LibraryEnvironment role_id -- id of role which should be unassigned from group group_id -- id of acl_group element autodelete_target -- if True remove group element if has no more role assigned """ with cib_acl_section(lib_env) as acl_section: acl.unassign_role( acl.find_group(acl_section, group_id), role_id, autodelete_group ) def create_target(lib_env, target_id, role_list): """ Create new target with id target_id and assign roles role_list to it. Raises LibraryError on any failure. lib_env -- LibraryEnvironment target_id -- id of new target role_list -- list of roles to assign to new target """ with cib_acl_section(lib_env) as acl_section: acl.assign_all_roles( acl_section, role_list, acl.create_target(acl_section, target_id) ) def create_group(lib_env, group_id, role_list): """ Create new group with id group_id and assign roles role_list to it. Raises LibraryError on any failure. lib_env -- LibraryEnvironment group_id -- id of new group role_list -- list of roles to assign to new group """ with cib_acl_section(lib_env) as acl_section: acl.assign_all_roles( acl_section, role_list, acl.create_group(acl_section, group_id) ) def remove_target(lib_env, target_id): """ Remove acl_target element with id target_id. Raises LibraryError on any failure. lib_env -- LibraryEnvironment target_id -- id of taget which should be removed """ with cib_acl_section(lib_env) as acl_section: acl.remove_target(acl_section, target_id) def remove_group(lib_env, group_id): """ Remove acl_group element with id group_id. Raises LibraryError on any failure. lib_env -- LibraryEnvironment group_id -- id of group which should be removed """ with cib_acl_section(lib_env) as acl_section: acl.remove_group(acl_section, group_id) def add_permission(lib_env, role_id, permission_info_list): """ Add permissions do role with id role_id. If role doesn't exist it will be created. Raises LibraryError on any failure. lib_env -- LibraryEnvirnoment role_id -- id of role permission_info_list -- list of permissons, items of list should be tuples: (, , ) """ with cib_acl_section(lib_env) as acl_section: acl.validate_permissions(acl_section, permission_info_list) acl.add_permissions_to_role( acl.provide_role(acl_section, role_id), permission_info_list ) def remove_permission(lib_env, permission_id): """ Remove permission with id permission_id. Raises LibraryError on any failure. lib_env -- LibraryEnvironment permission_id -- id of permission element which should be removed """ with cib_acl_section(lib_env) as acl_section: acl.remove_permission(acl_section, permission_id) def get_config(lib_env): """ Returns ACL configuration in dictionary. Format of output: { "target_list": , "group_list": , "role_list": , } lib_env -- LibraryEnvironment """ acl_section = get_acls(lib_env.get_cib(REQUIRED_CIB_VERSION)) return { "target_list": acl.get_target_list(acl_section), "group_list": acl.get_group_list(acl_section), "role_list": acl.get_role_list(acl_section), } pcs-0.9.164/pcs/lib/commands/alert.py000066400000000000000000000136361326265502500173140ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.common.tools import Version from pcs.lib import reports from pcs.lib.cib import alert from pcs.lib.errors import LibraryError REQUIRED_CIB_VERSION = Version(2, 5, 0) def create_alert( lib_env, alert_id, path, instance_attribute_dict, meta_attribute_dict, description=None ): """ Create new alert. Raises LibraryError if path is not specified, or any other failure. 
    lib_env -- LibraryEnvironment
    alert_id -- id of alert to be created, if None it will be generated
    path -- path to script for alert
    instance_attribute_dict -- dictionary of instance attributes
    meta_attribute_dict -- dictionary of meta attributes
    description -- alert description
    """
    if not path:
        raise LibraryError(reports.required_option_is_missing(["path"]))

    alert_el = alert.create_alert(
        lib_env.get_cib(REQUIRED_CIB_VERSION), alert_id, path, description
    )
    alert.update_instance_attributes(alert_el, instance_attribute_dict)
    alert.update_meta_attributes(alert_el, meta_attribute_dict)

    lib_env.push_cib()


def update_alert(
    lib_env, alert_id, path, instance_attribute_dict, meta_attribute_dict,
    description=None
):
    """
    Update existing alert with specified id.

    lib_env -- LibraryEnvironment
    alert_id -- id of alert to be updated
    path -- new path, if None old value will stay unchanged
    instance_attribute_dict -- dictionary of instance attributes to update
    meta_attribute_dict -- dictionary of meta attributes to update
    description -- new description, if empty string, old description will be
        deleted, if None old value will stay unchanged
    """
    alert_el = alert.update_alert(
        lib_env.get_cib(REQUIRED_CIB_VERSION), alert_id, path, description
    )
    alert.update_instance_attributes(alert_el, instance_attribute_dict)
    alert.update_meta_attributes(alert_el, meta_attribute_dict)

    lib_env.push_cib()


def remove_alert(lib_env, alert_id_list):
    """
    Remove alerts with specified ids.

    lib_env -- LibraryEnvironment
    alert_id_list -- list of ids of alerts which should be removed
    """
    cib = lib_env.get_cib(REQUIRED_CIB_VERSION)
    report_list = []
    for alert_id in alert_id_list:
        try:
            alert.remove_alert(cib, alert_id)
        except LibraryError as e:
            report_list += e.args

    lib_env.report_processor.process_list(report_list)
    lib_env.push_cib()


def add_recipient(
    lib_env, alert_id, recipient_value, instance_attribute_dict,
    meta_attribute_dict, recipient_id=None, description=None,
    allow_same_value=False
):
    """
    Add new recipient to alert with id alert_id.

    lib_env -- LibraryEnvironment
    alert_id -- id of alert to which new recipient should be added
    recipient_value -- value of new recipient
    instance_attribute_dict -- dictionary of instance attributes to update
    meta_attribute_dict -- dictionary of meta attributes to update
    recipient_id -- id of new recipient, if None it will be generated
    description -- recipient description
    allow_same_value -- if True, a unique recipient value is not required
    """
    if not recipient_value:
        raise LibraryError(
            reports.required_option_is_missing(["value"])
        )

    recipient = alert.add_recipient(
        lib_env.report_processor,
        lib_env.get_cib(REQUIRED_CIB_VERSION),
        alert_id,
        recipient_value,
        recipient_id=recipient_id,
        description=description,
        allow_same_value=allow_same_value
    )
    alert.update_instance_attributes(recipient, instance_attribute_dict)
    alert.update_meta_attributes(recipient, meta_attribute_dict)

    lib_env.push_cib()


def update_recipient(
    lib_env, recipient_id, instance_attribute_dict, meta_attribute_dict,
    recipient_value=None, description=None, allow_same_value=False
):
    """
    Update existing recipient.
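A hypothetical call sketch for add_recipient above; all ids and values are invented:

add_recipient(
    lib_env,                           # LibraryEnvironment, as above
    "alert-1",                         # id of an existing alert
    "admin@example.com",               # recipient value handed to the agent
    instance_attribute_dict={},
    meta_attribute_dict={"timeout": "15s"},
    recipient_id=None,                 # None: let pcs generate an id
    description="mail the admins",
)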
lib_env -- LibraryEnvironment recipient_id -- id of recipient to be updated instance_attribute_dict -- dictionary of instance attributes to update meta_attribute_dict -- dictionary of meta attributes to update recipient_value -- new recipient value, if None old value will stay unchanged description -- new description, if empty string, old description will be deleted, if None old value will stay unchanged allow_same_value -- if True unique recipient value is not required """ if not recipient_value and recipient_value is not None: raise LibraryError( reports.cib_alert_recipient_invalid_value(recipient_value) ) recipient = alert.update_recipient( lib_env.report_processor, lib_env.get_cib(REQUIRED_CIB_VERSION), recipient_id, recipient_value=recipient_value, description=description, allow_same_value=allow_same_value ) alert.update_instance_attributes(recipient, instance_attribute_dict) alert.update_meta_attributes(recipient, meta_attribute_dict) lib_env.push_cib() def remove_recipient(lib_env, recipient_id_list): """ Remove specified recipients. lib_env -- LibraryEnvironment recipient_id_list -- list of recipients ids to be removed """ cib = lib_env.get_cib(REQUIRED_CIB_VERSION) report_list = [] for recipient_id in recipient_id_list: try: alert.remove_recipient(cib, recipient_id) except LibraryError as e: report_list += e.args lib_env.report_processor.process_list(report_list) lib_env.push_cib() def get_all_alerts(lib_env): """ Returns list of all alerts. See docs of pcs.lib.cib.alert.get_all_alerts for description of data format. lib_env -- LibraryEnvironment """ return alert.get_all_alerts(lib_env.get_cib()) pcs-0.9.164/pcs/lib/commands/booth.py000066400000000000000000000326151326265502500173160ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import base64 import os.path from functools import partial from pcs import settings from pcs.common.tools import join_multilines from pcs.lib import external, reports, tools from pcs.lib.cib.resource import primitive, group from pcs.lib.booth import ( config_exchange, config_files, config_structure, reports as booth_reports, resource, status, ) from pcs.lib.booth.config_parser import parse, build from pcs.lib.booth.env import get_config_file_name from pcs.lib.cib.tools import get_resources from pcs.lib.communication.booth import ( BoothGetConfig, BoothSendConfig, ) from pcs.lib.communication.tools import run_and_raise from pcs.lib.errors import LibraryError, ReportItemSeverity from pcs.lib.resource_agent import find_valid_resource_agent_by_name def config_setup(env, booth_configuration, overwrite_existing=False): """ create boot configuration list site_list contains site adresses of multisite list arbitrator_list contains arbitrator adresses of multisite """ config_content = config_exchange.from_exchange_format(booth_configuration) config_structure.validate_peers( *config_structure.take_peers(config_content) ) env.booth.create_key(tools.generate_key(), overwrite_existing) config_content = config_structure.set_authfile( config_content, env.booth.key_path ) env.booth.create_config(build(config_content), overwrite_existing) def config_destroy(env, ignore_config_load_problems=False): env.booth.command_expect_live_env() env.command_expect_live_corosync_env() name = env.booth.name config_is_used = partial(booth_reports.booth_config_is_used, name) report_list = [] if(env.is_node_in_cluster() and resource.find_for_config( get_resources(env.get_cib()), get_config_file_name(name), )): 
report_list.append(config_is_used("in cluster resource")) #Only systemd is currently supported. Initd does not supports multiple #instances (here specified by name) if external.is_systemctl(): if external.is_service_running(env.cmd_runner(), "booth", name): report_list.append(config_is_used("(running in systemd)")) if external.is_service_enabled(env.cmd_runner(), "booth", name): report_list.append(config_is_used("(enabled in systemd)")) if report_list: raise LibraryError(*report_list) authfile_path = None try: authfile_path = config_structure.get_authfile( parse(env.booth.get_config_content()) ) except LibraryError: if not ignore_config_load_problems: raise LibraryError(booth_reports.booth_cannot_identify_keyfile()) #if content not received, not valid,... still remove config needed env.report_processor.process( booth_reports.booth_cannot_identify_keyfile( severity=ReportItemSeverity.WARNING ) ) if( authfile_path and os.path.dirname(authfile_path) == settings.booth_config_dir ): env.booth.set_key_path(authfile_path) env.booth.remove_key() env.booth.remove_config() def config_text(env, name, node_name=None): """ get configuration in raw format string name -- name of booth instance whose config should be returned string node_name -- get the config from specified node or local host if None """ if node_name is None: # TODO add name support return env.booth.get_config_content() com_cmd = BoothGetConfig(env.report_processor, name) com_cmd.set_targets([ env.get_node_target_factory().get_target_from_hostname(node_name) ]) remote_data = run_and_raise(env.get_node_communicator(), com_cmd)[0][1] try: return remote_data["config"]["data"] except KeyError: raise LibraryError(reports.invalid_response_format(node_name)) def config_ticket_add(env, ticket_name, options, allow_unknown_options): """ add ticket to booth configuration dict options contains options for ticket bool allow_unknown_options decide if can be used options not listed in ticket options nor global options """ booth_configuration = config_structure.add_ticket( env.report_processor, parse(env.booth.get_config_content()), ticket_name, options, allow_unknown_options, ) env.booth.push_config(build(booth_configuration)) def config_ticket_remove(env, ticket_name): """ remove ticket from booth configuration """ booth_configuration = config_structure.remove_ticket( parse(env.booth.get_config_content()), ticket_name ) env.booth.push_config(build(booth_configuration)) def create_in_cluster(env, name, ip, allow_absent_resource_agent=False): """ Create group with ip resource and booth resource LibraryEnvironment env provides all for communication with externals string name identifies booth instance string ip determines float ip for the operation of the booth bool allow_absent_resource_agent is flag allowing create booth resource even if its agent is not installed """ resources_section = get_resources(env.get_cib()) booth_config_file_path = get_config_file_name(name) if resource.find_for_config(resources_section, booth_config_file_path): raise LibraryError(booth_reports.booth_already_in_cib(name)) create_id = partial( resource.create_resource_id, resources_section, name ) get_agent = partial( find_valid_resource_agent_by_name, env.report_processor, env.cmd_runner(), allowed_absent=allow_absent_resource_agent ) create_primitive = partial( primitive.create, env.report_processor, resources_section, ) into_booth_group = partial( group.place_resource, group.provide_group(resources_section, create_id("group")), ) into_booth_group(create_primitive( 
create_id("ip"), get_agent("ocf:heartbeat:IPaddr2"), instance_attributes={"ip": ip}, )) into_booth_group(create_primitive( create_id("service"), get_agent("ocf:pacemaker:booth-site"), instance_attributes={"config": booth_config_file_path}, )) env.push_cib() def remove_from_cluster(env, name, resource_remove, allow_remove_multiple): #TODO resource_remove is provisional hack until resources are not moved to #lib resource.get_remover(resource_remove)( _find_resource_elements_for_operation(env, name, allow_remove_multiple) ) def restart(env, name, resource_restart, allow_multiple): #TODO resource_restart is provisional hack until resources are not moved to #lib for booth_element in _find_resource_elements_for_operation( env, name, allow_multiple ): resource_restart([booth_element.attrib["id"]]) def ticket_operation(operation, env, name, ticket, site_ip): if not site_ip: site_ip_list = resource.find_bound_ip( get_resources(env.get_cib()), get_config_file_name(name) ) if len(site_ip_list) != 1: raise LibraryError( booth_reports.booth_cannot_determine_local_site_ip() ) site_ip = site_ip_list[0] stdout, stderr, return_code = env.cmd_runner().run([ settings.booth_binary, operation, "-s", site_ip, ticket ]) if return_code != 0: raise LibraryError( booth_reports.booth_ticket_operation_failed( operation, join_multilines([stderr, stdout]), site_ip, ticket ) ) ticket_grant = partial(ticket_operation, "grant") ticket_revoke = partial(ticket_operation, "revoke") def config_sync(env, name, skip_offline_nodes=False): """ Send specified local booth configuration to all nodes in cluster. env -- LibraryEnvironment name -- booth instance name skip_offline_nodes -- if True offline nodes will be skipped """ config = env.booth.get_config_content() authfile_path = config_structure.get_authfile(parse(config)) authfile_content = config_files.read_authfile( env.report_processor, authfile_path ) com_cmd = BoothSendConfig( env.report_processor, name, config, authfile=authfile_path, authfile_data=authfile_content, skip_offline_targets=skip_offline_nodes ) com_cmd.set_targets( env.get_node_target_factory().get_target_list( env.get_corosync_conf().get_nodes() ) ) run_and_raise(env.get_node_communicator(), com_cmd) def enable_booth(env, name=None): """ Enable specified instance of booth service. Currently it is supported only systemd systems. env -- LibraryEnvironment name -- string, name of booth instance """ external.ensure_is_systemd() try: external.enable_service(env.cmd_runner(), "booth", name) except external.EnableServiceError as e: raise LibraryError(reports.service_enable_error( "booth", e.message, instance=name )) env.report_processor.process(reports.service_enable_success( "booth", instance=name )) def disable_booth(env, name=None): """ Disable specified instance of booth service. Currently it is supported only systemd systems. env -- LibraryEnvironment name -- string, name of booth instance """ external.ensure_is_systemd() try: external.disable_service(env.cmd_runner(), "booth", name) except external.DisableServiceError as e: raise LibraryError(reports.service_disable_error( "booth", e.message, instance=name )) env.report_processor.process(reports.service_disable_success( "booth", instance=name )) def start_booth(env, name=None): """ Start specified instance of booth service. Currently it is supported only systemd systems. 
    On non-systemd systems it can be run like this:
    BOOTH_CONF_FILE=<booth config file> /etc/initd/booth-arbitrator

    env -- LibraryEnvironment
    name -- string, name of booth instance
    """
    external.ensure_is_systemd()
    try:
        external.start_service(env.cmd_runner(), "booth", name)
    except external.StartServiceError as e:
        raise LibraryError(reports.service_start_error(
            "booth", e.message, instance=name
        ))
    env.report_processor.process(reports.service_start_success(
        "booth", instance=name
    ))


def stop_booth(env, name=None):
    """
    Stop specified instance of booth service. Currently only systemd systems
    are supported.

    env -- LibraryEnvironment
    name -- string, name of booth instance
    """
    external.ensure_is_systemd()
    try:
        external.stop_service(env.cmd_runner(), "booth", name)
    except external.StopServiceError as e:
        raise LibraryError(reports.service_stop_error(
            "booth", e.message, instance=name
        ))
    env.report_processor.process(reports.service_stop_success(
        "booth", instance=name
    ))


def pull_config(env, node_name, name):
    """
    Get config from specified node and save it on the local system. Existing
    files will be overwritten.

    env -- LibraryEnvironment
    node_name -- string, name of the node from which the config should be
        fetched
    name -- string, name of booth instance whose config should be fetched
    """
    env.report_processor.process(
        booth_reports.booth_fetching_config_from_node_started(node_name, name)
    )
    com_cmd = BoothGetConfig(env.report_processor, name)
    com_cmd.set_targets([
        env.get_node_target_factory().get_target_from_hostname(node_name)
    ])
    output = run_and_raise(env.get_node_communicator(), com_cmd)[0][1]
    try:
        env.booth.create_config(output["config"]["data"], True)
        if (
            output["authfile"]["name"] is not None
            and
            output["authfile"]["data"]
        ):
            env.booth.set_key_path(os.path.join(
                settings.booth_config_dir, output["authfile"]["name"]
            ))
            env.booth.create_key(
                base64.b64decode(
                    output["authfile"]["data"].encode("utf-8")
                ),
                True
            )
        env.report_processor.process(
            booth_reports.booth_config_accepted_by_node(name_list=[name])
        )
    except KeyError:
        raise LibraryError(reports.invalid_response_format(node_name))


def get_status(env, name=None):
    return {
        "status": status.get_daemon_status(env.cmd_runner(), name),
        "ticket": status.get_tickets_status(env.cmd_runner(), name),
        "peers": status.get_peers_status(env.cmd_runner(), name),
    }


def _find_resource_elements_for_operation(env, name, allow_multiple):
    booth_element_list = resource.find_for_config(
        get_resources(env.get_cib()),
        get_config_file_name(name),
    )
    if not booth_element_list:
        raise LibraryError(booth_reports.booth_not_exists_in_cib(name))
    if len(booth_element_list) > 1:
        if not allow_multiple:
            raise LibraryError(booth_reports.booth_multiple_times_in_cib(name))
        env.report_processor.process(
            booth_reports.booth_multiple_times_in_cib(
                name,
                severity=ReportItemSeverity.WARNING,
            )
        )
    return booth_element_list
pcs-0.9.164/pcs/lib/commands/cib_options.py000066400000000000000000000021431326265502500205040ustar00rootroot00000000000000
from __future__ import (
    absolute_import,
    division,
    print_function,
)

from functools import partial

from pcs.lib import reports
from pcs.lib.cib import sections
from pcs.lib.xml_tools import remove_when_pointless
from pcs.lib.cib.nvpair import arrange_first_meta_attributes


def _set_any_defaults(section_name, env, options):
    """
    string section_name -- determines the section of defaults
    LibraryEnvironment env -- provides access to the outside environment
    dict options -- desired options with their values; when a value is empty,
        the option is to be removed
    """
env.report_processor.process(reports.defaults_can_be_overriden()) if not options: return defaults_section = sections.get(env.get_cib(), section_name) arrange_first_meta_attributes( defaults_section, options, new_id="{0}-options".format(section_name) ) remove_when_pointless(defaults_section) env.push_cib() set_operations_defaults = partial(_set_any_defaults, sections.OP_DEFAULTS) set_resources_defaults = partial(_set_any_defaults, sections.RSC_DEFAULTS) pcs-0.9.164/pcs/lib/commands/cluster.py000066400000000000000000000062121326265502500176560ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.common import report_codes from pcs.lib import reports from pcs.lib.cib import fencing_topology from pcs.lib.cib.tools import ( get_fencing_topology, get_resources, ) from pcs.lib.env_tools import get_nodes from pcs.lib.errors import LibraryError from pcs.lib.node import ( node_addresses_contain_name, node_addresses_contain_host, ) from pcs.lib.pacemaker.live import ( get_cib, get_cib_xml, get_cib_xml_cmd_results, get_cluster_status_xml, remove_node, verify as verify_cmd, ) from pcs.lib.pacemaker.state import ClusterState def node_clear(env, node_name, allow_clear_cluster_node=False): """ Remove specified node from various cluster caches. LibraryEnvironment env provides all for communication with externals string node_name bool allow_clear_cluster_node -- flag allows to clear node even if it's still in a cluster """ mocked_envs = [] if not env.is_cib_live: mocked_envs.append("CIB") if not env.is_corosync_conf_live: mocked_envs.append("COROSYNC_CONF") if mocked_envs: raise LibraryError(reports.live_environment_required(mocked_envs)) current_nodes = get_nodes(env.get_corosync_conf(), env.get_cib()) if( node_addresses_contain_name(current_nodes, node_name) or node_addresses_contain_host(current_nodes, node_name) ): env.report_processor.process( reports.get_problem_creator( report_codes.FORCE_CLEAR_CLUSTER_NODE, allow_clear_cluster_node )( reports.node_to_clear_is_still_in_cluster, node_name ) ) remove_node(env.cmd_runner(), node_name) def verify(env, verbose=False): runner = env.cmd_runner() dummy_stdout, verify_stderr, verify_returncode = verify_cmd( runner, verbose=verbose, ) #1) Do not even try to think about upgrading! #2) We do not need cib management in env (no need for push...). #So env.get_cib is not best choice here (there were considerations to #upgrade cib at all times inside env.get_cib). Go to a lower level here. if verify_returncode != 0: env.report_processor.append(reports.invalid_cib_content(verify_stderr)) #Cib is sometimes loadable even if `crm_verify` fails (e.g. when #fencing topology is invalid). On the other hand cib with id duplication #is not loadable. #We try extra checks when cib is possible to load. 
cib_xml, dummy_stderr, returncode = get_cib_xml_cmd_results(runner) if returncode != 0: #can raise; raise LibraryError is better but in this case we prefer #be consistent with raising below env.report_processor.send() else: cib_xml = get_cib_xml(runner) cib = get_cib(cib_xml) fencing_topology.verify( env.report_processor, get_fencing_topology(cib), get_resources(cib), ClusterState(get_cluster_status_xml(runner)).node_section.nodes ) #can raise env.report_processor.send() pcs-0.9.164/pcs/lib/commands/constraint/000077500000000000000000000000001326265502500200065ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/commands/constraint/__init__.py000066400000000000000000000000001326265502500221050ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/commands/constraint/colocation.py000066400000000000000000000010661326265502500225150ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial from pcs.lib.cib.constraint import colocation import pcs.lib.commands.constraint.common #configure common constraint command show = partial( pcs.lib.commands.constraint.common.show, colocation.TAG_NAME, lambda element: element.attrib.has_key('rsc') ) #configure common constraint command create_with_set = partial( pcs.lib.commands.constraint.common.create_with_set, colocation.TAG_NAME, colocation.prepare_options_with_set ) pcs-0.9.164/pcs/lib/commands/constraint/common.py000066400000000000000000000056231326265502500216560ustar00rootroot00000000000000""" Common functions used from specific constraint commands. Functions of this module are not intended to be used for direct call from client. """ from __future__ import ( absolute_import, division, print_function, ) from functools import partial from pcs.lib.cib.constraint import constraint, resource_set from pcs.lib.cib.tools import get_constraints def create_with_set( tag_name, prepare_options, env, resource_set_list, constraint_options, can_repair_to_clone=False, resource_in_clone_alowed=False, duplication_alowed=False, duplicate_check=None, ): """ string tag_name is constraint tag name callable prepare_options takes cib(Element), options(dict), resource_set_list and return corrected options or if options not usable raises error env is library environment list resource_set_list is description of resource set, for example: {"ids": ["A", "B"], "options": {"sequential": "true"}}, dict constraint_options is base for building attributes of constraint tag bool resource_in_clone_alowed flag for allowing to reference id which is in tag clone or master bool duplication_alowed flag for allowing create duplicate element callable duplicate_check takes two elements and decide if they are duplicates """ cib = env.get_cib() find_valid_resource_id = partial( constraint.find_valid_resource_id, env.report_processor, cib, can_repair_to_clone, resource_in_clone_alowed ) constraint_section = get_constraints(cib) constraint_element = constraint.create_with_set( constraint_section, tag_name, options=prepare_options(cib, constraint_options, resource_set_list), resource_set_list=[ resource_set.prepare_set(find_valid_resource_id, resource_set_item) for resource_set_item in resource_set_list ] ) if not duplicate_check: duplicate_check = constraint.have_duplicate_resource_sets constraint.check_is_without_duplication( env.report_processor, constraint_section, constraint_element, are_duplicate=duplicate_check, export_element=constraint.export_with_set, duplication_alowed=duplication_alowed, ) env.push_cib() def show(tag_name, 
is_plain, env): """ string tag_name is constraint tag name callable is_plain takes constraint element and returns if is plain (i.e. without resource set) env is library environment """ constraints_info = {"plain": [], "with_resource_sets": []} for element in get_constraints(env.get_cib()).findall(".//"+tag_name): if is_plain(element): constraints_info["plain"].append(constraint.export_plain(element)) else: constraints_info["with_resource_sets"].append( constraint.export_with_set(element) ) return constraints_info pcs-0.9.164/pcs/lib/commands/constraint/order.py000066400000000000000000000010431326265502500214710ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial from pcs.lib.cib.constraint import order import pcs.lib.commands.constraint.common #configure common constraint command show = partial( pcs.lib.commands.constraint.common.show, order.TAG_NAME, lambda element: element.attrib.has_key('first') ) #configure common constraint command create_with_set = partial( pcs.lib.commands.constraint.common.create_with_set, order.TAG_NAME, order.prepare_options_with_set ) pcs-0.9.164/pcs/lib/commands/constraint/ticket.py000066400000000000000000000051071326265502500216460ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial from pcs.lib.cib.constraint import constraint, ticket from pcs.lib.cib.tools import get_constraints import pcs.lib.commands.constraint.common #configure common constraint command show = partial( pcs.lib.commands.constraint.common.show, ticket.TAG_NAME, lambda element: element.attrib.has_key('rsc') ) #configure common constraint command create_with_set = partial( pcs.lib.commands.constraint.common.create_with_set, ticket.TAG_NAME, ticket.prepare_options_with_set, duplicate_check=ticket.are_duplicate_with_resource_set, ) def create( env, ticket_key, resource_id, options, autocorrection_allowed=False, resource_in_clone_alowed=False, duplication_alowed=False, ): """ create ticket constraint string ticket_key ticket for constraining resource dict options desired constraint attributes bool resource_in_clone_alowed flag for allowing to reference id which is in tag clone or master bool duplication_alowed flag for allowing create duplicate element callable duplicate_check takes two elements and decide if they are duplicates """ cib = env.get_cib() options = ticket.prepare_options_plain( cib, options, ticket_key, constraint.find_valid_resource_id( env.report_processor, cib, autocorrection_allowed, resource_in_clone_alowed, resource_id ), ) constraint_section = get_constraints(cib) constraint_element = ticket.create_plain(constraint_section, options) constraint.check_is_without_duplication( env.report_processor, constraint_section, constraint_element, are_duplicate=ticket.are_duplicate_plain, export_element=constraint.export_plain, duplication_alowed=duplication_alowed, ) env.push_cib() def remove(env, ticket_key, resource_id): """ remove all ticket constraint from resource If resource is in resource set with another resources then only resource ref is removed. If resource is alone in resource set whole constraint is removed. 
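For reference, the resource_set_list format consumed by create_with_set, as documented in pcs.lib.commands.constraint.common above, plus a hypothetical ticket constraint built from it (resource ids and option values invented):

resource_set_list = [
    {"ids": ["A", "B"], "options": {"sequential": "true"}},
    {"ids": ["C"], "options": {}},
]
# the ticket variant of create_with_set is pre-configured via partial above;
# it takes (env, resource_set_list, constraint_options, ...)
create_with_set(
    lib_env,
    resource_set_list,
    {"ticket": "ticket-A", "loss-policy": "fence"},
)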
""" constraint_section = get_constraints(env.get_cib()) any_plain_removed = ticket.remove_plain( constraint_section, ticket_key, resource_id ) any_with_resource_set_removed = ticket.remove_with_resource_set( constraint_section, ticket_key, resource_id ) env.push_cib() return any_plain_removed or any_with_resource_set_removed pcs-0.9.164/pcs/lib/commands/fencing_topology.py000066400000000000000000000072461326265502500215520ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.common.fencing_topology import ( TARGET_TYPE_REGEXP, TARGET_TYPE_ATTRIBUTE, ) from pcs.common.tools import Version from pcs.lib.cib import fencing_topology as cib_fencing_topology from pcs.lib.cib.tools import ( get_fencing_topology, get_resources, ) from pcs.lib.pacemaker.live import get_cluster_status_xml from pcs.lib.pacemaker.state import ClusterState def add_level( lib_env, level, target_type, target_value, devices, force_device=False, force_node=False ): """ Validate and add a new fencing level LibraryEnvironment lib_env -- environment int|string level -- level (index) of the new fencing level constant target_type -- the new fencing level target value type mixed target_value -- the new fencing level target value Iterable devices -- list of stonith devices for the new fencing level bool force_device -- continue even if a stonith device does not exist bool force_node -- continue even if a node (target) does not exist """ version_check = None if target_type == TARGET_TYPE_REGEXP: version_check = Version(2, 3, 0) elif target_type == TARGET_TYPE_ATTRIBUTE: version_check = Version(2, 4, 0) cib = lib_env.get_cib(version_check) cib_fencing_topology.add_level( lib_env.report_processor, get_fencing_topology(cib), get_resources(cib), level, target_type, target_value, devices, ClusterState( get_cluster_status_xml(lib_env.cmd_runner()) ).node_section.nodes, force_device, force_node ) lib_env.report_processor.send() lib_env.push_cib() def get_config(lib_env): """ Get fencing levels configuration. Return a list of levels where each level is a dict with keys: target_type, target_value. level and devices. Devices is a list of stonith device ids. 
LibraryEnvironment lib_env -- environment """ cib = lib_env.get_cib() return cib_fencing_topology.export(get_fencing_topology(cib)) def remove_all_levels(lib_env): """ Remove all fencing levels LibraryEnvironment lib_env -- environment """ cib_fencing_topology.remove_all_levels( get_fencing_topology(lib_env.get_cib()) ) lib_env.push_cib() def remove_levels_by_params( lib_env, level=None, target_type=None, target_value=None, devices=None, ignore_if_missing=False ): """ Remove specified fencing level(s) LibraryEnvironment lib_env -- environment int|string level -- level (index) of the fencing level to remove constant target_type -- the removed fencing level target value type mixed target_value -- the removed fencing level target value Iterable devices -- list of stonith devices of the removed fencing level bool ignore_if_missing -- when True, do not report if level not found """ cib_fencing_topology.remove_levels_by_params( lib_env.report_processor, get_fencing_topology(lib_env.get_cib()), level, target_type, target_value, devices, ignore_if_missing ) lib_env.report_processor.send() lib_env.push_cib() def verify(lib_env): """ Check if all cluster nodes and stonith devices used in fencing levels exist LibraryEnvironment lib_env -- environment """ cib = lib_env.get_cib() cib_fencing_topology.verify( lib_env.report_processor, get_fencing_topology(cib), get_resources(cib), ClusterState( get_cluster_status_xml(lib_env.cmd_runner()) ).node_section.nodes ) lib_env.report_processor.send() pcs-0.9.164/pcs/lib/commands/node.py000066400000000000000000000121141326265502500171200ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from contextlib import contextmanager from pcs.lib import reports from pcs.lib.cib.node import update_node_instance_attrs from pcs.lib.errors import LibraryError from pcs.lib.pacemaker.live import ( get_cluster_status_xml, get_local_node_name, ) from pcs.lib.pacemaker.state import ClusterState @contextmanager def cib_runner_nodes(lib_env, wait): lib_env.ensure_wait_satisfiable(wait) runner = lib_env.cmd_runner() state_nodes = ClusterState( get_cluster_status_xml(runner) ).node_section.nodes yield (lib_env.get_cib(), runner, state_nodes) lib_env.push_cib(wait=wait) def standby_unstandby_local(lib_env, standby, wait=False): """ Change local node standby mode LibraryEnvironment lib_env bool standby -- True: enable standby, False: disable standby mixed wait -- False: no wait, None: wait with default timeout, str or int: wait with specified timeout """ return _set_instance_attrs_local_node( lib_env, _create_standby_unstandby_dict(standby), wait ) def standby_unstandby_list(lib_env, standby, node_names, wait=False): """ Change specified nodes standby mode LibraryEnvironment lib_env bool standby -- True: enable standby, False: disable standby iterable node_names -- nodes to apply the change to mixed wait -- False: no wait, None: wait with default timeout, str or int: wait with specified timeout """ return _set_instance_attrs_node_list( lib_env, _create_standby_unstandby_dict(standby), node_names, wait ) def standby_unstandby_all(lib_env, standby, wait=False): """ Change all nodes standby mode LibraryEnvironment lib_env bool standby -- True: enable standby, False: disable standby mixed wait -- False: no wait, None: wait with default timeout, str or int: wait with specified timeout """ return _set_instance_attrs_all_nodes( lib_env, _create_standby_unstandby_dict(standby), wait ) def maintenance_unmaintenance_local(lib_env, maintenance, 
wait=False): """ Change local node maintenance mode LibraryEnvironment lib_env bool maintenance -- True: enable maintenance, False: disable maintenance mixed wait -- False: no wait, None: wait with default timeout, str or int: wait with specified timeout """ return _set_instance_attrs_local_node( lib_env, _create_maintenance_unmaintenance_dict(maintenance), wait ) def maintenance_unmaintenance_list( lib_env, maintenance, node_names, wait=False ): """ Change specified nodes maintenance mode LibraryEnvironment lib_env bool maintenance -- True: enable maintenance, False: disable maintenance iterable node_names -- nodes to apply the change to mixed wait -- False: no wait, None: wait with default timeout, str or int: wait with specified timeout """ return _set_instance_attrs_node_list( lib_env, _create_maintenance_unmaintenance_dict(maintenance), node_names, wait ) def maintenance_unmaintenance_all(lib_env, maintenance, wait=False): """ Change all nodes maintenance mode LibraryEnvironment lib_env bool maintenance -- True: enable maintenance, False: disable maintenance mixed wait -- False: no wait, None: wait with default timeout, str or int: wait with specified timeout """ return _set_instance_attrs_all_nodes( lib_env, _create_maintenance_unmaintenance_dict(maintenance), wait ) def _create_standby_unstandby_dict(standby): return {"standby": "on" if standby else ""} def _create_maintenance_unmaintenance_dict(maintenance): return {"maintenance": "on" if maintenance else ""} def _set_instance_attrs_local_node(lib_env, attrs, wait): if not lib_env.is_cib_live: # If we are not working with a live cluster we cannot get the local node # name. raise LibraryError(reports.live_environment_required_for_local_node()) with cib_runner_nodes(lib_env, wait) as (cib, runner, state_nodes): update_node_instance_attrs( cib, get_local_node_name(runner), attrs, state_nodes ) def _set_instance_attrs_node_list(lib_env, attrs, node_names, wait): with cib_runner_nodes(lib_env, wait) as (cib, dummy_runner, state_nodes): known_nodes = [node.attrs.name for node in state_nodes] report = [] for node in node_names: if node not in known_nodes: report.append(reports.node_not_found(node)) if report: raise LibraryError(*report) for node in node_names: update_node_instance_attrs(cib, node, attrs, state_nodes) def _set_instance_attrs_all_nodes(lib_env, attrs, wait): with cib_runner_nodes(lib_env, wait) as (cib, dummy_runner, state_nodes): for node in [node.attrs.name for node in state_nodes]: update_node_instance_attrs(cib, node, attrs, state_nodes) pcs-0.9.164/pcs/lib/commands/qdevice.py000066400000000000000000000205001326265502500176110ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import base64 import binascii from pcs.common import report_codes from pcs.lib import external, reports from pcs.lib.corosync import qdevice_net from pcs.lib.errors import LibraryError, ReportItemSeverity def qdevice_setup(lib_env, model, enable, start): """ Initialize qdevice on local host with specified model string model qdevice model to initialize bool enable make qdevice service start on boot bool start start qdevice now """ _ensure_not_cman(lib_env) _check_model(model) qdevice_net.qdevice_setup(lib_env.cmd_runner()) lib_env.report_processor.process( reports.qdevice_initialization_success(model) ) if enable: _service_enable(lib_env, qdevice_net.qdevice_enable) if start: _service_start(lib_env, qdevice_net.qdevice_start) def qdevice_destroy(lib_env, model, proceed_if_used=False): """ Stop and 
    disable qdevice on local host and remove its configuration

    string model qdevice model to destroy
    bool proceed_if_used destroy qdevice even if it is used by clusters
    """
    _ensure_not_cman(lib_env)
    _check_model(model)
    _check_qdevice_not_used(
        lib_env.report_processor,
        lib_env.cmd_runner(),
        model,
        proceed_if_used
    )
    _service_stop(lib_env, qdevice_net.qdevice_stop)
    _service_disable(lib_env, qdevice_net.qdevice_disable)
    qdevice_net.qdevice_destroy()
    lib_env.report_processor.process(reports.qdevice_destroy_success(model))


def qdevice_status_text(lib_env, model, verbose=False, cluster=None):
    """
    Get runtime status of a quorum device in plain text

    string model qdevice model to query
    bool verbose get more detailed output
    string cluster show information only about specified cluster
    """
    _ensure_not_cman(lib_env)
    _check_model(model)
    runner = lib_env.cmd_runner()
    try:
        return (
            qdevice_net.qdevice_status_generic_text(runner, verbose)
            +
            qdevice_net.qdevice_status_cluster_text(runner, cluster, verbose)
        )
    except qdevice_net.QnetdNotRunningException:
        raise LibraryError(
            reports.qdevice_not_running(model)
        )


def qdevice_enable(lib_env, model):
    """
    make qdevice start automatically on boot on local host
    """
    _ensure_not_cman(lib_env)
    _check_model(model)
    _service_enable(lib_env, qdevice_net.qdevice_enable)


def qdevice_disable(lib_env, model):
    """
    make qdevice not start automatically on boot on local host
    """
    _ensure_not_cman(lib_env)
    _check_model(model)
    _service_disable(lib_env, qdevice_net.qdevice_disable)


def qdevice_start(lib_env, model):
    """
    start qdevice now on local host
    """
    _ensure_not_cman(lib_env)
    _check_model(model)
    _service_start(lib_env, qdevice_net.qdevice_start)


def qdevice_stop(lib_env, model, proceed_if_used=False):
    """
    stop qdevice now on local host

    string model qdevice model to stop
    bool proceed_if_used stop qdevice even if it is used by clusters
    """
    _ensure_not_cman(lib_env)
    _check_model(model)
    _check_qdevice_not_used(
        lib_env.report_processor,
        lib_env.cmd_runner(),
        model,
        proceed_if_used
    )
    _service_stop(lib_env, qdevice_net.qdevice_stop)


def qdevice_kill(lib_env, model):
    """
    kill qdevice now on local host
    """
    _ensure_not_cman(lib_env)
    _check_model(model)
    _service_kill(lib_env, qdevice_net.qdevice_kill)


def qdevice_net_sign_certificate_request(
    lib_env, certificate_request, cluster_name
):
    """
    Sign node certificate request by qnetd CA

    string certificate_request base64 encoded certificate request
    string cluster_name name of the cluster to which qdevice is being added
    """
    _ensure_not_cman(lib_env)
    try:
        certificate_request_data = base64.b64decode(certificate_request)
    except (TypeError, binascii.Error):
        raise LibraryError(reports.invalid_option_value(
            "qnetd certificate request",
            certificate_request,
            ["base64 encoded certificate"]
        ))
    return base64.b64encode(
        qdevice_net.qdevice_sign_certificate_request(
            lib_env.cmd_runner(),
            certificate_request_data,
            cluster_name
        )
    )


def client_net_setup(lib_env, ca_certificate):
    """
    Initialize qdevice net client on local host

    ca_certificate base64 encoded qnetd CA certificate
    """
    _ensure_not_cman(lib_env)
    try:
        ca_certificate_data = base64.b64decode(ca_certificate)
    except (TypeError, binascii.Error):
        raise LibraryError(reports.invalid_option_value(
            "qnetd CA certificate",
            ca_certificate,
            ["base64 encoded certificate"]
        ))
    qdevice_net.client_setup(lib_env.cmd_runner(), ca_certificate_data)


def client_net_import_certificate(lib_env, certificate):
    """
    Import qnetd client certificate to local node certificate storage

    certificate base64 encoded qnetd client certificate
""" _ensure_not_cman(lib_env) try: certificate_data = base64.b64decode(certificate) except (TypeError, binascii.Error): raise LibraryError(reports.invalid_option_value( "qnetd client certificate", certificate, ["base64 encoded certificate"] )) qdevice_net.client_import_certificate_and_key( lib_env.cmd_runner(), certificate_data ) def client_net_destroy(lib_env): """ delete qdevice client config files on local host """ _ensure_not_cman(lib_env) qdevice_net.client_destroy() def _ensure_not_cman(lib_env): if lib_env.is_cman_cluster: raise LibraryError(reports.cman_unsupported_command()) def _check_model(model): if model != "net": raise LibraryError( reports.invalid_option_value("model", model, ["net"]) ) def _check_qdevice_not_used(reporter, runner, model, force=False): _check_model(model) connected_clusters = [] if model == "net": try: status = qdevice_net.qdevice_status_cluster_text(runner) connected_clusters = qdevice_net.qdevice_connected_clusters(status) except qdevice_net.QnetdNotRunningException: pass if connected_clusters: reporter.process(reports.qdevice_used_by_clusters( connected_clusters, ReportItemSeverity.WARNING if force else ReportItemSeverity.ERROR, None if force else report_codes.FORCE_QDEVICE_USED )) def _service_start(lib_env, func): lib_env.report_processor.process( reports.service_start_started("quorum device") ) try: func(lib_env.cmd_runner()) except external.StartServiceError as e: raise LibraryError( reports.service_start_error(e.service, e.message) ) lib_env.report_processor.process( reports.service_start_success("quorum device") ) def _service_stop(lib_env, func): lib_env.report_processor.process( reports.service_stop_started("quorum device") ) try: func(lib_env.cmd_runner()) except external.StopServiceError as e: raise LibraryError( reports.service_stop_error(e.service, e.message) ) lib_env.report_processor.process( reports.service_stop_success("quorum device") ) def _service_kill(lib_env, func): try: func(lib_env.cmd_runner()) except external.KillServicesError as e: raise LibraryError( reports.service_kill_error(e.service, e.message) ) lib_env.report_processor.process( reports.service_kill_success(["quorum device"]) ) def _service_enable(lib_env, func): try: func(lib_env.cmd_runner()) except external.EnableServiceError as e: raise LibraryError( reports.service_enable_error(e.service, e.message) ) lib_env.report_processor.process( reports.service_enable_success("quorum device") ) def _service_disable(lib_env, func): try: func(lib_env.cmd_runner()) except external.DisableServiceError as e: raise LibraryError( reports.service_disable_error(e.service, e.message) ) lib_env.report_processor.process( reports.service_disable_success("quorum device") ) pcs-0.9.164/pcs/lib/commands/quorum.py000066400000000000000000000306221326265502500175270ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.common import report_codes from pcs.lib import reports, sbd from pcs.lib.errors import LibraryError, ReportItemSeverity from pcs.lib.communication import ( qdevice as qdevice_com, qdevice_net as qdevice_net_com, ) from pcs.lib.communication.tools import run_and_raise from pcs.lib.corosync import ( live as corosync_live, qdevice_net, qdevice_client ) def get_config(lib_env): """ Extract and return quorum configuration from corosync.conf lib_env LibraryEnvironment """ __ensure_not_cman(lib_env) cfg = lib_env.get_corosync_conf() device = None if cfg.has_quorum_device(): model, model_options, generic_options, heuristics_options = ( 

def get_config(lib_env):
    """
    Extract and return quorum configuration from corosync.conf
    lib_env LibraryEnvironment
    """
    __ensure_not_cman(lib_env)
    cfg = lib_env.get_corosync_conf()
    device = None
    if cfg.has_quorum_device():
        model, model_options, generic_options, heuristics_options = (
            cfg.get_quorum_device_settings()
        )
        device = {
            "model": model,
            "model_options": model_options,
            "generic_options": generic_options,
            "heuristics_options": heuristics_options,
        }
    return {
        "options": cfg.get_quorum_options(),
        "device": device,
    }


def _check_if_atb_can_be_disabled(
    runner, report_processor, corosync_conf, was_enabled, force=False
):
    """
    Check whether auto_tie_breaker can be changed without affecting SBD.
    Raises LibraryError if the change of ATB would affect SBD functionality.

    runner -- CommandRunner
    report_processor -- report processor
    corosync_conf -- corosync conf facade
    was_enabled -- True if ATB was enabled, False otherwise
    force -- force change
    """
    if (
        was_enabled
        and
        not corosync_conf.is_enabled_auto_tie_breaker()
        and
        sbd.is_auto_tie_breaker_needed(runner, corosync_conf)
    ):
        report_processor.process(reports.quorum_cannot_disable_atb_due_to_sbd(
            ReportItemSeverity.WARNING if force else ReportItemSeverity.ERROR,
            None if force else report_codes.FORCE_OPTIONS
        ))


def set_options(lib_env, options, skip_offline_nodes=False, force=False):
    """
    Set corosync quorum options, distribute and reload corosync.conf if live
    lib_env LibraryEnvironment
    options quorum options (dict)
    skip_offline_nodes continue even if not all nodes are accessible
    bool force force changes
    """
    __ensure_not_cman(lib_env)
    cfg = lib_env.get_corosync_conf()
    atb_enabled = cfg.is_enabled_auto_tie_breaker()
    cfg.set_quorum_options(lib_env.report_processor, options)
    if lib_env.is_corosync_conf_live:
        _check_if_atb_can_be_disabled(
            lib_env.cmd_runner(),
            lib_env.report_processor,
            cfg,
            atb_enabled,
            force
        )
    lib_env.push_corosync_conf(cfg, skip_offline_nodes)


def status_text(lib_env):
    """
    Get quorum runtime status in plain text
    """
    __ensure_not_cman(lib_env)
    return corosync_live.get_quorum_status_text(lib_env.cmd_runner())


def status_device_text(lib_env, verbose=False):
    """
    Get quorum device client runtime status in plain text
    bool verbose get more detailed output
    """
    __ensure_not_cman(lib_env)
    return qdevice_client.get_status_text(lib_env.cmd_runner(), verbose)
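

# Illustrative sketch, not part of the original API: the value returned by
# get_config() is a plain dict, so a hypothetical caller could inspect the
# configured qdevice model like this:
def _example_get_quorum_device_model(lib_env):
    # returns the configured qdevice model, or None when no device is set
    config = get_config(lib_env)
    device = config["device"]
    return device["model"] if device else None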


def add_device(
    lib_env, model, model_options, generic_options, heuristics_options,
    force_model=False, force_options=False, skip_offline_nodes=False
):
    """
    Add a quorum device to a cluster, distribute and reload configs if live
    string model -- quorum device model
    dict model_options -- model specific options
    dict generic_options -- generic quorum device options
    dict heuristics_options -- heuristics options
    bool force_model -- continue even if the model is not valid
    bool force_options -- continue even if options are not valid
    bool skip_offline_nodes -- continue even if not all nodes are accessible
    """
    __ensure_not_cman(lib_env)
    cfg = lib_env.get_corosync_conf()
    # Try adding qdevice to corosync.conf. This validates all the options and
    # makes sure qdevice is not defined in corosync.conf yet.
    cfg.add_quorum_device(
        lib_env.report_processor,
        model,
        model_options,
        generic_options,
        heuristics_options,
        force_model=force_model,
        force_options=force_options
    )
    target_list = lib_env.get_node_target_factory().get_target_list(
        cfg.get_nodes()
    )
    # First set up certificates for qdevice, then send corosync.conf to nodes.
    # If anything fails, nodes will not have corosync.conf with qdevice in it,
    # so there is no effect on the cluster.
    if lib_env.is_corosync_conf_live:
        # do model specific configuration
        # if the model is not known to pcs and was forced, configure nothing
        # but corosync.conf, as we do not know what else to do anyway
        if model == "net":
            _add_device_model_net(
                lib_env,
                # we are sure it's there, it was validated in add_quorum_device
                model_options["host"],
                cfg.get_cluster_name(),
                cfg.get_nodes(),
                skip_offline_nodes
            )
        lib_env.report_processor.process(
            reports.service_enable_started("corosync-qdevice")
        )
        com_cmd = qdevice_com.Enable(
            lib_env.report_processor, skip_offline_nodes
        )
        com_cmd.set_targets(target_list)
        run_and_raise(lib_env.get_node_communicator(), com_cmd)
    # everything is set up, it's safe to tell the nodes to use qdevice
    lib_env.push_corosync_conf(cfg, skip_offline_nodes)
    # Now, when corosync.conf has been reloaded, we can start qdevice service.
    if lib_env.is_corosync_conf_live:
        lib_env.report_processor.process(
            reports.service_start_started("corosync-qdevice")
        )
        com_cmd = qdevice_com.Start(
            lib_env.report_processor, skip_offline_nodes
        )
        com_cmd.set_targets(target_list)
        run_and_raise(lib_env.get_node_communicator(), com_cmd)


def _add_device_model_net(
    lib_env, qnetd_host, cluster_name, cluster_nodes, skip_offline_nodes
):
    """
    set up cluster nodes for using qdevice model net
    string qnetd_host address of qdevice provider (qnetd host)
    string cluster_name name of the cluster to which qdevice is being added
    NodeAddressesList cluster_nodes list of cluster nodes addresses
    bool skip_offline_nodes continue even if not all nodes are accessible
    """
    runner = lib_env.cmd_runner()
    reporter = lib_env.report_processor
    target_factory = lib_env.get_node_target_factory()
    qnetd_target = target_factory.get_target_from_hostname(qnetd_host)
    target_list = target_factory.get_target_list(cluster_nodes)

    reporter.process(
        reports.qdevice_certificate_distribution_started()
    )
    # get qnetd CA certificate
    com_cmd = qdevice_net_com.GetCaCert(reporter)
    com_cmd.set_targets([qnetd_target])
    qnetd_ca_cert = run_and_raise(
        lib_env.get_node_communicator(), com_cmd
    )[0][1]
    # init certificate storage on all nodes
    com_cmd = qdevice_net_com.ClientSetup(
        reporter, qnetd_ca_cert, skip_offline_nodes
    )
    com_cmd.set_targets(target_list)
    run_and_raise(lib_env.get_node_communicator(), com_cmd)
    # create client certificate request
    cert_request = qdevice_net.client_generate_certificate_request(
        runner, cluster_name
    )
    # sign the request on qnetd host
    com_cmd = qdevice_net_com.SignCertificate(reporter)
    com_cmd.add_request(qnetd_target, cert_request, cluster_name)
    signed_certificate = run_and_raise(
        lib_env.get_node_communicator(), com_cmd
    )[0][1]
    # transform the signed certificate to pk12 format which can be sent
    # to nodes
    pk12 = qdevice_net.client_cert_request_to_pk12(runner, signed_certificate)
    # distribute final certificate to nodes
    com_cmd = qdevice_net_com.ClientImportCertificateAndKey(
        reporter, pk12, skip_offline_nodes
    )
    com_cmd.set_targets(target_list)
    run_and_raise(lib_env.get_node_communicator(), com_cmd)


def update_device(
    lib_env, model_options, generic_options, heuristics_options,
    force_options=False, skip_offline_nodes=False
):
    """
    Change quorum device settings, distribute and reload configs if live
    dict model_options -- model specific options
    dict generic_options -- generic quorum device options
    dict heuristics_options -- heuristics options
    bool force_options -- continue even if options are not valid
    bool skip_offline_nodes -- continue even if not all nodes are accessible
    """
    __ensure_not_cman(lib_env)
    cfg = lib_env.get_corosync_conf()
    cfg.update_quorum_device(
        lib_env.report_processor,
        model_options,
        generic_options,
        heuristics_options,
        force_options=force_options
    )
    lib_env.push_corosync_conf(cfg, skip_offline_nodes)


def remove_device_heuristics(lib_env, skip_offline_nodes=False):
    """
    Stop using quorum device heuristics, distribute and reload configs if live
    bool skip_offline_nodes -- continue even if not all nodes are accessible
    """
    __ensure_not_cman(lib_env)
    cfg = lib_env.get_corosync_conf()
    cfg.remove_quorum_device_heuristics()
    lib_env.push_corosync_conf(cfg, skip_offline_nodes)


def remove_device(lib_env, skip_offline_nodes=False):
    """
    Stop using quorum device, distribute and reload configs if live
    skip_offline_nodes continue even if not all nodes are accessible
    """
    __ensure_not_cman(lib_env)
    cfg = lib_env.get_corosync_conf()
    model, dummy_options, dummy_options, dummy_options = (
        cfg.get_quorum_device_settings()
    )
    cfg.remove_quorum_device()

    if lib_env.is_corosync_conf_live:
        target_list = lib_env.get_node_target_factory().get_target_list(
            cfg.get_nodes()
        )
        # fix quorum options for SBD to work properly
        if sbd.atb_has_to_be_enabled(lib_env.cmd_runner(), cfg):
            lib_env.report_processor.process(reports.sbd_requires_atb())
            cfg.set_quorum_options(
                lib_env.report_processor, {"auto_tie_breaker": "1"}
            )

        # disable qdevice
        lib_env.report_processor.process(
            reports.service_disable_started("corosync-qdevice")
        )
        com_cmd = qdevice_com.Disable(
            lib_env.report_processor, skip_offline_nodes
        )
        com_cmd.set_targets(target_list)
        run_and_raise(lib_env.get_node_communicator(), com_cmd)
        # stop qdevice
        lib_env.report_processor.process(
            reports.service_stop_started("corosync-qdevice")
        )
        com_cmd = qdevice_com.Stop(
            lib_env.report_processor, skip_offline_nodes
        )
        com_cmd.set_targets(target_list)
        run_and_raise(lib_env.get_node_communicator(), com_cmd)
        # handle model specific configuration
        if model == "net":
            _remove_device_model_net(
                lib_env,
                cfg.get_nodes(),
                skip_offline_nodes
            )

    lib_env.push_corosync_conf(cfg, skip_offline_nodes)


def _remove_device_model_net(lib_env, cluster_nodes, skip_offline_nodes):
    """
    remove configuration used by qdevice model net
    NodeAddressesList cluster_nodes list of cluster nodes addresses
    bool skip_offline_nodes continue even if not all nodes are accessible
    """
    reporter = lib_env.report_processor

    reporter.process(
        reports.qdevice_certificate_removal_started()
    )
    com_cmd = qdevice_net_com.ClientDestroy(reporter, skip_offline_nodes)
    com_cmd.set_targets(
        lib_env.get_node_target_factory().get_target_list(cluster_nodes)
    )
    run_and_raise(lib_env.get_node_communicator(), com_cmd)


def set_expected_votes_live(lib_env, expected_votes):
    """
    set expected votes in live cluster to specified value
    numeric expected_votes desired value of expected votes
    """
    if lib_env.is_cman_cluster:
        raise LibraryError(reports.cman_unsupported_command())

    try:
        votes_int = int(expected_votes)
        if votes_int < 1:
            raise ValueError()
    except ValueError:
        raise LibraryError(reports.invalid_option_value(
            "expected votes",
            expected_votes,
            "positive integer"
        ))

    corosync_live.set_expected_votes(lib_env.cmd_runner(), votes_int)


def __ensure_not_cman(lib_env):
    if lib_env.is_corosync_conf_live and lib_env.is_cman_cluster:
        raise LibraryError(reports.cman_unsupported_command())
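

# Illustrative sketch, not part of the original API: the "positive integer"
# rule enforced by set_expected_votes_live() above, shown in isolation. A
# hypothetical caller-side pre-check could reuse the same rule:
def _example_is_valid_expected_votes(value):
    # returns True when value parses as an integer greater than or equal to 1
    try:
        return int(value) >= 1
    except (TypeError, ValueError):
        return False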

pcs-0.9.164/pcs/lib/commands/remote_node.py000066400000000000000000000411231326265502500204750ustar00rootroot00000000000000
from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.common import report_codes
from pcs.lib import reports, node_communication_format
from pcs.lib.node import (
    NodeAddresses,
    NodeAddressesList,
)
from pcs.lib.tools import generate_key
from pcs.lib.cib.resource import guest_node, primitive, remote_node
from pcs.lib.cib.tools import get_resources, find_element_by_tag_and_id
from pcs.lib.communication.nodes import (
    availability_checker_remote_node,
    DistributeFiles,
    PrecheckNewNode,
    RemoveFiles,
    ServiceAction,
)
from pcs.lib.communication.tools import run, run_and_raise
from pcs.lib.env_tools import get_nodes, get_nodes_remote, get_nodes_guest
from pcs.lib.errors import LibraryError
from pcs.lib.pacemaker import state
from pcs.lib.pacemaker.live import remove_node


def _ensure_can_add_node_to_remote_cluster(
    env, node_addresses, warn_on_communication_exception=False
):
    report_items = []
    com_cmd = PrecheckNewNode(
        report_items,
        availability_checker_remote_node,
        skip_offline_targets=warn_on_communication_exception,
    )
    com_cmd.add_request(
        env.get_node_target_factory().get_target(node_addresses)
    )
    run(env.get_node_communicator(), com_cmd)
    env.report_processor.process_list(report_items)


def _share_authkey(
    env, current_nodes, candidate_node_addresses,
    skip_offline_nodes=False,
    allow_incomplete_distribution=False
):
    if env.pacemaker.has_authkey:
        authkey_content = env.pacemaker.get_authkey_content()
        node_addresses_list = NodeAddressesList([candidate_node_addresses])
    else:
        authkey_content = generate_key()
        node_addresses_list = current_nodes + [candidate_node_addresses]

    com_cmd = DistributeFiles(
        env.report_processor,
        node_communication_format.pcmk_authkey_file(authkey_content),
        skip_offline_targets=skip_offline_nodes,
        allow_fails=allow_incomplete_distribution,
        description="remote node configuration files",
    )
    com_cmd.set_targets(
        env.get_node_target_factory().get_target_list(node_addresses_list)
    )
    run_and_raise(env.get_node_communicator(), com_cmd)


def _start_and_enable_pacemaker_remote(
    env, node_list, skip_offline_nodes=False, allow_fails=False
):
    com_cmd = ServiceAction(
        env.report_processor,
        node_communication_format.create_pcmk_remote_actions([
            "start",
            "enable",
        ]),
        skip_offline_targets=skip_offline_nodes,
        allow_fails=allow_fails,
        description="start of service pacemaker_remote"
    )
    com_cmd.set_targets(
        env.get_node_target_factory().get_target_list(node_list)
    )
    run_and_raise(env.get_node_communicator(), com_cmd)


def _prepare_pacemaker_remote_environment(
    env, current_nodes, node_host, skip_offline_nodes,
    allow_incomplete_distribution, allow_fails
):
    if not env.is_corosync_conf_live:
        env.report_processor.process_list([
            reports.nolive_skip_files_distribution(
                ["pacemaker authkey"],
                [node_host]
            ),
            reports.nolive_skip_service_command_on_nodes(
                "pacemaker_remote",
                "start",
                [node_host]
            ),
            reports.nolive_skip_service_command_on_nodes(
                "pacemaker_remote",
                "enable",
                [node_host]
            ),
        ])
        return

    candidate_node = NodeAddresses(node_host)
    _ensure_can_add_node_to_remote_cluster(
        env,
        candidate_node,
        skip_offline_nodes
    )
    _share_authkey(
        env,
        current_nodes,
        candidate_node,
        skip_offline_nodes,
        allow_incomplete_distribution
    )
    _start_and_enable_pacemaker_remote(
        env,
        [candidate_node],
        skip_offline_nodes,
        allow_fails
    )


def _ensure_resource_running(env, resource_id):
    env.report_processor.process(
        state.ensure_resource_running(env.get_cluster_state(), resource_id)
    )


def _ensure_consistently_live_env(env):
    if env.is_cib_live and env.is_corosync_conf_live:
        return

    #we accept it as well, we need it for tests
    if not env.is_cib_live and not env.is_corosync_conf_live:
        return

    raise LibraryError(reports.live_environment_required([
        "CIB" if not env.is_cib_live else "COROSYNC_CONF"
    ]))
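

# Illustrative sketch, not part of the original API: _share_authkey() above
# either reuses the existing pacemaker authkey (sending it only to the new
# node) or generates a fresh key for every node. The hypothetical helper
# below shows just that decision, assuming the same inputs:
def _example_authkey_targets(env, current_nodes, candidate):
    # returns a (key_content, nodes_to_receive_it) pair
    if env.pacemaker.has_authkey:
        return env.pacemaker.get_authkey_content(), [candidate]
    return generate_key(), current_nodes + [candidate]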


def node_add_remote(
    env, host, node_name, operations, meta_attributes, instance_attributes,
    skip_offline_nodes=False,
    allow_incomplete_distribution=False,
    allow_pacemaker_remote_service_fail=False,
    allow_invalid_operation=False,
    allow_invalid_instance_attributes=False,
    use_default_operations=True,
    wait=False,
):
    """
    create resource ocf:pacemaker:remote and use it as a remote node

    LibraryEnvironment env provides all for communication with externals
    string host is the hostname of the remote node
    string node_name is the name of the remote node
    list of dict operations contains attributes for each entered operation
    dict meta_attributes contains attributes for primitive/meta_attributes
    dict instance_attributes contains attributes for
        primitive/instance_attributes
    bool skip_offline_nodes -- a flag for ignoring when some nodes are offline
    bool allow_incomplete_distribution -- a flag for allowing this command to
        finish successfully even if file distribution does not succeed
    bool allow_pacemaker_remote_service_fail -- a flag for allowing this
        command to finish successfully even if starting/enabling
        pacemaker_remote does not succeed
    bool allow_invalid_operation -- a flag for allowing the use of operations
        that are not listed in the resource agent metadata
    bool allow_invalid_instance_attributes -- a flag for allowing the use of
        instance attributes that are not listed in the resource agent metadata
        and for allowing to omit instance_attributes that are required by the
        resource agent metadata
    bool use_default_operations -- a flag for skipping the addition of default
        cib operations (specified in the resource agent)
    mixed wait -- a flag for controlling the wait for the pacemaker idle
        mechanism
    """
    _ensure_consistently_live_env(env)
    env.ensure_wait_satisfiable(wait)

    cib = env.get_cib()
    current_nodes = get_nodes(env.get_corosync_conf(), cib)

    resource_agent = remote_node.get_agent(
        env.report_processor,
        env.cmd_runner()
    )

    report_list = remote_node.validate_create(
        current_nodes,
        resource_agent,
        host,
        node_name,
        instance_attributes
    )

    try:
        remote_resource_element = remote_node.create(
            env.report_processor,
            resource_agent,
            get_resources(cib),
            host,
            node_name,
            operations,
            meta_attributes,
            instance_attributes,
            allow_invalid_operation,
            allow_invalid_instance_attributes,
            use_default_operations,
        )
    except LibraryError as e:
        #Check for unique id conflicts with a check against nodes. Until
        #resource create validation is separated, we need to deduplicate the
        #conflict reports after validation.
        already_exists = []
        unified_report_list = []
        for report in report_list + list(e.args):
            if report.code != report_codes.ID_ALREADY_EXISTS:
                unified_report_list.append(report)
            elif report.info["id"] not in already_exists:
                unified_report_list.append(report)
                already_exists.append(report.info["id"])
        report_list = unified_report_list

    env.report_processor.process_list(report_list)

    _prepare_pacemaker_remote_environment(
        env,
        current_nodes,
        host,
        skip_offline_nodes,
        allow_incomplete_distribution,
        allow_pacemaker_remote_service_fail,
    )
    env.push_cib(wait=wait)
    if wait:
        _ensure_resource_running(env, remote_resource_element.attrib["id"])


def node_add_guest(
    env, node_name, resource_id, options,
    skip_offline_nodes=False,
    allow_incomplete_distribution=False,
    allow_pacemaker_remote_service_fail=False,
    wait=False,
):
    """
    setup resource (resource_id) as guest node and setup node as guest

    LibraryEnvironment env provides all for communication with externals
    string node_name is the name of the guest node
    string resource_id -- specifies the resource that should be a guest node
    dict options could contain keys remote-node, remote-port, remote-addr,
        remote-connect-timeout
    bool skip_offline_nodes -- a flag for ignoring when some nodes are offline
    bool allow_incomplete_distribution -- a flag for allowing this command to
        finish successfully even if file distribution does not succeed
    bool allow_pacemaker_remote_service_fail -- a flag for allowing this
        command to finish successfully even if starting/enabling
        pacemaker_remote does not succeed
    mixed wait -- a flag for controlling the wait for the pacemaker idle
        mechanism
    """
    _ensure_consistently_live_env(env)
    env.ensure_wait_satisfiable(wait)

    cib = env.get_cib()
    current_nodes = get_nodes(env.get_corosync_conf(), cib)

    report_list = guest_node.validate_set_as_guest(
        cib,
        current_nodes,
        node_name,
        options
    )
    try:
        resource_element = find_element_by_tag_and_id(
            primitive.TAG,
            get_resources(cib),
            resource_id
        )
        report_list.extend(guest_node.validate_is_not_guest(resource_element))
    except LibraryError as e:
        report_list.extend(e.args)

    env.report_processor.process_list(report_list)

    guest_node.set_as_guest(
        resource_element,
        node_name,
        options.get("remote-addr", None),
        options.get("remote-port", None),
        options.get("remote-connect-timeout", None),
    )

    _prepare_pacemaker_remote_environment(
        env,
        current_nodes,
        guest_node.get_host_from_options(node_name, options),
        skip_offline_nodes,
        allow_incomplete_distribution,
        allow_pacemaker_remote_service_fail,
    )

    env.push_cib(wait=wait)
    if wait:
        _ensure_resource_running(env, resource_id)


def _find_resources_to_remove(
    cib, report_processor,
    node_type, node_identifier, allow_remove_multiple_nodes,
    find_resources
):
    resource_element_list = find_resources(get_resources(cib), node_identifier)

    if not resource_element_list:
        raise LibraryError(reports.node_not_found(node_identifier, node_type))

    if len(resource_element_list) > 1:
        report_processor.process(
            reports.get_problem_creator(
                report_codes.FORCE_REMOVE_MULTIPLE_NODES,
                allow_remove_multiple_nodes
            )(
                reports.multiple_result_found,
                "resource",
                [resource.attrib["id"] for resource in resource_element_list],
                node_identifier
            )
        )

    return resource_element_list


def _get_node_addresses_from_resources(nodes, resource_element_list, get_host):
    node_addresses_set = set()
    for resource_element in resource_element_list:
        for node in nodes:
            #remote nodes use ring0 only
            if get_host(resource_element) == node.ring0:
                node_addresses_set.add(node)
    return sorted(node_addresses_set, key=lambda node: node.ring0)
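

# Illustrative sketch, not part of the original API: the report
# deduplication performed inside node_add_remote() above, in isolation.
# Reports with code ID_ALREADY_EXISTS are kept only once per conflicting id:
def _example_deduplicate_id_exists_reports(report_list):
    seen_ids = []
    result = []
    for report in report_list:
        if report.code != report_codes.ID_ALREADY_EXISTS:
            result.append(report)
        elif report.info["id"] not in seen_ids:
            result.append(report)
            seen_ids.append(report.info["id"])
    return result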


def _destroy_pcmk_remote_env(
    env, node_addresses_list, skip_offline_nodes, allow_fails
):
    actions = node_communication_format.create_pcmk_remote_actions([
        "stop",
        "disable",
    ])
    files = {
        "pacemaker_remote authkey": {"type": "pcmk_remote_authkey"},
    }
    target_list = env.get_node_target_factory().get_target_list(
        node_addresses_list
    )

    com_cmd = ServiceAction(
        env.report_processor,
        actions,
        skip_offline_targets=skip_offline_nodes,
        allow_fails=allow_fails,
        description="stop of service pacemaker_remote",
    )
    com_cmd.set_targets(target_list)
    run_and_raise(env.get_node_communicator(), com_cmd)

    com_cmd = RemoveFiles(
        env.report_processor,
        files,
        skip_offline_targets=skip_offline_nodes,
        allow_fails=allow_fails,
        description="remote node files",
    )
    com_cmd.set_targets(target_list)
    run_and_raise(env.get_node_communicator(), com_cmd)


def _report_skip_live_parts_in_remove(node_addresses_list):
    #remote nodes use ring0 only
    node_host_list = [addresses.ring0 for addresses in node_addresses_list]
    return [
        reports.nolive_skip_service_command_on_nodes(
            "pacemaker_remote",
            "stop",
            node_host_list
        ),
        reports.nolive_skip_service_command_on_nodes(
            "pacemaker_remote",
            "disable",
            node_host_list
        ),
        reports.nolive_skip_files_remove(["pacemaker authkey"], node_host_list)
    ]


def node_remove_remote(
    env, node_identifier, remove_resource,
    skip_offline_nodes=False,
    allow_remove_multiple_nodes=False,
    allow_pacemaker_remote_service_fail=False
):
    """
    remove a resource representing a remote node and destroy the remote node

    LibraryEnvironment env provides all for communication with externals
    string node_identifier -- node name or hostname
    callable remove_resource -- function for removing the resource
    bool skip_offline_nodes -- a flag for ignoring when some nodes are offline
    bool allow_remove_multiple_nodes -- a flag for allowing removal of
        unexpected multiple occurrences of a remote node matching
        node_identifier
    bool allow_pacemaker_remote_service_fail -- a flag for allowing this
        command to finish successfully even if stopping/disabling
        pacemaker_remote does not succeed
    """
    _ensure_consistently_live_env(env)
    cib = env.get_cib()

    resource_element_list = _find_resources_to_remove(
        cib,
        env.report_processor,
        "remote",
        node_identifier,
        allow_remove_multiple_nodes,
        remote_node.find_node_resources,
    )

    node_addresses_list = _get_node_addresses_from_resources(
        get_nodes_remote(cib),
        resource_element_list,
        remote_node.get_host,
    )

    if not env.is_corosync_conf_live:
        env.report_processor.process_list(
            _report_skip_live_parts_in_remove(node_addresses_list)
        )
    else:
        _destroy_pcmk_remote_env(
            env,
            node_addresses_list,
            skip_offline_nodes,
            allow_pacemaker_remote_service_fail
        )

    #removal of the node from pcmk caches is currently integrated in the
    #remove_resource function
    for resource_element in resource_element_list:
        remove_resource(
            resource_element.attrib["id"],
            is_remove_remote_context=True,
        )


def node_remove_guest(
    env, node_identifier,
    skip_offline_nodes=False,
    allow_remove_multiple_nodes=False,
    allow_pacemaker_remote_service_fail=False,
    wait=False,
):
    """
    remove a resource representing a guest node and destroy the guest node

    LibraryEnvironment env provides all for communication with externals
    string node_identifier -- node name, hostname or resource id
    bool skip_offline_nodes -- a flag for ignoring when some nodes are offline
    bool allow_remove_multiple_nodes -- a flag for allowing removal of
        unexpected multiple occurrences of a guest node matching
        node_identifier
    bool allow_pacemaker_remote_service_fail -- a flag for allowing this
        command to finish successfully even if stopping/disabling
        pacemaker_remote does not succeed
    """
    _ensure_consistently_live_env(env)
    env.ensure_wait_satisfiable(wait)
    cib = env.get_cib()

    resource_element_list = _find_resources_to_remove(
        cib,
        env.report_processor,
        "guest",
        node_identifier,
        allow_remove_multiple_nodes,
        guest_node.find_node_resources,
    )

    node_addresses_list = _get_node_addresses_from_resources(
        get_nodes_guest(cib),
        resource_element_list,
        guest_node.get_host,
    )

    if not env.is_corosync_conf_live:
        env.report_processor.process_list(
            _report_skip_live_parts_in_remove(node_addresses_list)
        )
    else:
        _destroy_pcmk_remote_env(
            env,
            node_addresses_list,
            skip_offline_nodes,
            allow_pacemaker_remote_service_fail
        )

    for resource_element in resource_element_list:
        guest_node.unset_guest(resource_element)

    env.push_cib(wait=wait)

    #remove node from pcmk caches
    if env.is_cib_live:
        for node_addresses in node_addresses_list:
            remove_node(env.cmd_runner(), node_addresses.name)

pcs-0.9.164/pcs/lib/commands/resource.py000066400000000000000000000674611326265502500200370ustar00rootroot00000000000000
from __future__ import (
    absolute_import,
    division,
    print_function,
)

from contextlib import contextmanager
from functools import partial

from pcs.common import report_codes
from pcs.common.tools import Version
from pcs.lib import reports
from pcs.lib.cib import resource
from pcs.lib.cib.resource import operations, remote_node, guest_node
from pcs.lib.cib.tools import (
    find_element_by_tag_and_id,
    get_resources,
    IdProvider,
)
from pcs.lib.env_tools import get_nodes
from pcs.lib.errors import LibraryError
from pcs.lib.pacemaker.values import validate_id
from pcs.lib.pacemaker.state import (
    ensure_resource_state,
    info_resource_state,
    is_resource_managed,
    ResourceNotFound,
)
from pcs.lib.resource_agent import(
    find_valid_resource_agent_by_name as get_agent
)


@contextmanager
def resource_environment(
    env, wait=False, wait_for_resource_ids=None,
    resource_state_reporter=info_resource_state,
    required_cib_version=None
):
    env.ensure_wait_satisfiable(wait)
    yield get_resources(env.get_cib(required_cib_version))
    env.push_cib(wait=wait)
    if wait is not False and wait_for_resource_ids:
        state = env.get_cluster_state()
        env.report_processor.process_list([
            resource_state_reporter(state, res_id)
            for res_id in wait_for_resource_ids
        ])


def _ensure_disabled_after_wait(disabled_after_wait):
    def inner(state, resource_id):
        return ensure_resource_state(
            not disabled_after_wait, state, resource_id
        )
    return inner


def _validate_remote_connection(
    resource_agent, nodes_to_validate_against, resource_id,
    instance_attributes, allow_not_suitable_command
):
    if resource_agent.get_name() != remote_node.AGENT_NAME.full_name:
        return []

    report_list = []
    report_list.append(
        reports.get_problem_creator(
            report_codes.FORCE_NOT_SUITABLE_COMMAND,
            allow_not_suitable_command
        )(reports.use_command_node_add_remote)
    )

    report_list.extend(
        remote_node.validate_host_not_conflicts(
            nodes_to_validate_against,
            resource_id,
            instance_attributes
        )
    )
    return report_list
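

# Illustrative sketch, not part of the original API: how the commands in
# this module consume the resource_environment() context manager. The
# hypothetical minimal command below only inspects the resources section;
# the context manager loads the cib, yields its resources section, and
# pushes the (possibly modified) cib back when the block exits cleanly.
def _example_wrapped_command(env, wait=False):
    with resource_environment(env, wait) as resources_section:
        # a real command would modify resources_section here
        return len(resources_section)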


def _validate_guest_change(
    tree, nodes_to_validate_against, meta_attributes,
    allow_not_suitable_command, detect_remove=False
):
    if not guest_node.is_node_name_in_options(meta_attributes):
        return []

    node_name = guest_node.get_node_name_from_options(meta_attributes)

    report_list = []
    create_report = reports.use_command_node_add_guest
    if detect_remove and not guest_node.get_guest_option_value(meta_attributes):
        create_report = reports.use_command_node_remove_guest

    report_list.append(
        reports.get_problem_creator(
            report_codes.FORCE_NOT_SUITABLE_COMMAND,
            allow_not_suitable_command
        )(create_report)
    )

    report_list.extend(
        guest_node.validate_conflicts(
            tree,
            nodes_to_validate_against,
            node_name,
            meta_attributes
        )
    )

    return report_list


def _get_nodes_to_validate_against(env, tree):
    if not env.is_corosync_conf_live and env.is_cib_live:
        raise LibraryError(
            reports.live_environment_required(["COROSYNC_CONF"])
        )

    if not env.is_cib_live and env.is_corosync_conf_live:
        #we do not try to get corosync.conf from the live cluster when the
        #cib is not taken from the live cluster
        return get_nodes(tree=tree)

    return get_nodes(env.get_corosync_conf(), tree)


def _check_special_cases(
    env, resource_agent, resources_section, resource_id, meta_attributes,
    instance_attributes, allow_not_suitable_command
):
    if(
        resource_agent.get_name() != remote_node.AGENT_NAME.full_name
        and
        not guest_node.is_node_name_in_options(meta_attributes)
    ):
        #if no special case applies, we do not need corosync.conf, which is
        #only required for getting the nodes to validate against
        return

    nodes_to_validate_against = _get_nodes_to_validate_against(
        env, resources_section
    )

    report_list = []
    report_list.extend(_validate_remote_connection(
        resource_agent,
        nodes_to_validate_against,
        resource_id,
        instance_attributes,
        allow_not_suitable_command,
    ))
    report_list.extend(_validate_guest_change(
        resources_section,
        nodes_to_validate_against,
        meta_attributes,
        allow_not_suitable_command,
    ))

    env.report_processor.process_list(report_list)


def create(
    env, resource_id, resource_agent_name,
    operations, meta_attributes, instance_attributes,
    allow_absent_agent=False,
    allow_invalid_operation=False,
    allow_invalid_instance_attributes=False,
    use_default_operations=True,
    ensure_disabled=False,
    wait=False,
    allow_not_suitable_command=False,
):
    """
    Create a resource in a cib.

    LibraryEnvironment env provides all for communication with externals
    string resource_id is the identifier of the resource
    string resource_agent_name contains the name for the identification of
        the agent
    list of dict operations contains attributes for each entered operation
    dict meta_attributes contains attributes for primitive/meta_attributes
    dict instance_attributes contains attributes for
        primitive/instance_attributes
    bool allow_absent_agent is a flag for allowing an agent that is not
        installed in the system
    bool allow_invalid_operation is a flag for allowing the use of operations
        that are not listed in the resource agent metadata
    bool allow_invalid_instance_attributes is a flag for allowing the use of
        instance attributes that are not listed in the resource agent metadata
        and for allowing to omit instance_attributes that are required by the
        resource agent metadata
    bool use_default_operations is a flag for skipping the addition of default
        cib operations (specified in the resource agent)
    bool ensure_disabled is a flag that keeps the resource in target-role
        "Stopped"
    mixed wait is a flag for controlling the wait for the pacemaker idle
        mechanism
    bool allow_not_suitable_command -- flag for FORCE_NOT_SUITABLE_COMMAND
    """
    resource_agent = get_agent(
        env.report_processor,
        env.cmd_runner(),
        resource_agent_name,
        allow_absent_agent,
    )
    with resource_environment(
        env,
        wait,
        [resource_id],
        _ensure_disabled_after_wait(
            ensure_disabled
            or
            resource.common.are_meta_disabled(meta_attributes)
        )
    ) as resources_section:
        _check_special_cases(
            env,
            resource_agent,
            resources_section,
            resource_id,
            meta_attributes,
            instance_attributes,
            allow_not_suitable_command
        )
        primitive_element = resource.primitive.create(
            env.report_processor, resources_section,
            resource_id, resource_agent,
            operations, meta_attributes, instance_attributes,
            allow_invalid_operation,
            allow_invalid_instance_attributes,
            use_default_operations,
        )
        if ensure_disabled:
            resource.common.disable(primitive_element)


def _create_as_clone_common(
    tag, env, resource_id, resource_agent_name,
    operations, meta_attributes, instance_attributes, clone_meta_options,
    allow_absent_agent=False,
    allow_invalid_operation=False,
    allow_invalid_instance_attributes=False,
    use_default_operations=True,
    ensure_disabled=False,
    wait=False,
    allow_not_suitable_command=False,
):
    """
    Create a resource in some kind of clone (clone or master).

    Currently the only difference between the commands "create_as_clone" and
    "create_as_master" is in the tag, so both commands are created by passing
    the tag with partial.

    string tag is any clone tag. Currently it can be "clone" or "master".
    LibraryEnvironment env provides all for communication with externals
    string resource_id is the identifier of the resource
    string resource_agent_name contains the name for the identification of
        the agent
    list of dict operations contains attributes for each entered operation
    dict meta_attributes contains attributes for primitive/meta_attributes
    dict instance_attributes contains attributes for
        primitive/instance_attributes
    dict clone_meta_options contains attributes for clone/meta_attributes
    bool allow_absent_agent is a flag for allowing an agent that is not
        installed in the system
    bool allow_invalid_operation is a flag for allowing the use of operations
        that are not listed in the resource agent metadata
    bool allow_invalid_instance_attributes is a flag for allowing the use of
        instance attributes that are not listed in the resource agent metadata
        and for allowing to omit instance_attributes that are required by the
        resource agent metadata
    bool use_default_operations is a flag for skipping the addition of default
        cib operations (specified in the resource agent)
    bool ensure_disabled is a flag that keeps the resource in target-role
        "Stopped"
    mixed wait is a flag for controlling the wait for the pacemaker idle
        mechanism
    bool allow_not_suitable_command -- flag for FORCE_NOT_SUITABLE_COMMAND
    """
    resource_agent = get_agent(
        env.report_processor,
        env.cmd_runner(),
        resource_agent_name,
        allow_absent_agent,
    )
    with resource_environment(
        env,
        wait,
        [resource_id],
        _ensure_disabled_after_wait(
            ensure_disabled
            or
            resource.common.are_meta_disabled(meta_attributes)
            or
            resource.common.is_clone_deactivated_by_meta(clone_meta_options)
        )
    ) as resources_section:
        _check_special_cases(
            env,
            resource_agent,
            resources_section,
            resource_id,
            meta_attributes,
            instance_attributes,
            allow_not_suitable_command
        )
        primitive_element = resource.primitive.create(
            env.report_processor, resources_section,
            resource_id, resource_agent,
            operations, meta_attributes, instance_attributes,
            allow_invalid_operation,
            allow_invalid_instance_attributes,
            use_default_operations,
        )
        clone_element = resource.clone.append_new(
            tag,
            resources_section,
            primitive_element,
            clone_meta_options,
        )
        if ensure_disabled:
            resource.common.disable(clone_element)
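

# Illustrative note, not part of the original module: as the docstring above
# explains, the public clone commands are produced from this shared helper
# via functools.partial further below, e.g.
#
#     create_as_clone = partial(_create_as_clone_common, resource.clone.TAG_CLONE)
#
# A hypothetical third variant for another clone-like tag would follow the
# same one-liner pattern.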


def create_in_group(
    env, resource_id, resource_agent_name, group_id,
    operations, meta_attributes, instance_attributes,
    allow_absent_agent=False,
    allow_invalid_operation=False,
    allow_invalid_instance_attributes=False,
    use_default_operations=True,
    ensure_disabled=False,
    adjacent_resource_id=None,
    put_after_adjacent=False,
    wait=False,
    allow_not_suitable_command=False,
):
    """
    Create a resource in a cib and put it into the defined group

    LibraryEnvironment env provides all for communication with externals
    string resource_id is the identifier of the resource
    string resource_agent_name contains the name for the identification of
        the agent
    string group_id is the identifier of the group to put the primitive
        resource inside
    list of dict operations contains attributes for each entered operation
    dict meta_attributes contains attributes for primitive/meta_attributes
    dict instance_attributes contains attributes for
        primitive/instance_attributes
    bool allow_absent_agent is a flag for allowing an agent that is not
        installed in the system
    bool allow_invalid_operation is a flag for allowing the use of operations
        that are not listed in the resource agent metadata
    bool allow_invalid_instance_attributes is a flag for allowing the use of
        instance attributes that are not listed in the resource agent metadata
        and for allowing to omit instance_attributes that are required by the
        resource agent metadata
    bool use_default_operations is a flag for skipping the addition of default
        cib operations (specified in the resource agent)
    bool ensure_disabled is a flag that keeps the resource in target-role
        "Stopped"
    string adjacent_resource_id identifies the neighbor of a newly created
        resource
    bool put_after_adjacent is a flag to put a newly created resource
        before/after the adjacent resource
    mixed wait is a flag for controlling the wait for the pacemaker idle
        mechanism
    bool allow_not_suitable_command -- flag for FORCE_NOT_SUITABLE_COMMAND
    """
    resource_agent = get_agent(
        env.report_processor,
        env.cmd_runner(),
        resource_agent_name,
        allow_absent_agent,
    )
    with resource_environment(
        env,
        wait,
        [resource_id],
        _ensure_disabled_after_wait(
            ensure_disabled
            or
            resource.common.are_meta_disabled(meta_attributes)
        )
    ) as resources_section:
        _check_special_cases(
            env,
            resource_agent,
            resources_section,
            resource_id,
            meta_attributes,
            instance_attributes,
            allow_not_suitable_command
        )
        primitive_element = resource.primitive.create(
            env.report_processor, resources_section,
            resource_id, resource_agent,
            operations, meta_attributes, instance_attributes,
            allow_invalid_operation,
            allow_invalid_instance_attributes,
            use_default_operations,
        )
        if ensure_disabled:
            resource.common.disable(primitive_element)
        validate_id(group_id, "group name")
        resource.group.place_resource(
            resource.group.provide_group(resources_section, group_id),
            primitive_element,
            adjacent_resource_id,
            put_after_adjacent,
        )


create_as_clone = partial(_create_as_clone_common, resource.clone.TAG_CLONE)
create_as_master = partial(_create_as_clone_common, resource.clone.TAG_MASTER)


def create_into_bundle(
    env, resource_id, resource_agent_name,
    operations, meta_attributes, instance_attributes,
    bundle_id,
    allow_absent_agent=False,
    allow_invalid_operation=False,
    allow_invalid_instance_attributes=False,
    use_default_operations=True,
    ensure_disabled=False,
    wait=False,
    allow_not_suitable_command=False,
):
    """
    Create a new resource in a cib and put it into an existing bundle

    LibraryEnvironment env provides all for communication with externals
    string resource_id is the identifier of the resource
    string resource_agent_name contains the name for the identification of
        the agent
    list of dict operations contains attributes for each entered operation
    dict meta_attributes contains attributes for primitive/meta_attributes
    dict instance_attributes contains attributes for
        primitive/instance_attributes
    string bundle_id is the id of an existing bundle to put the created
        resource in
    bool allow_absent_agent is a flag for allowing an agent that is not
        installed in the system
    bool allow_invalid_operation is a flag for allowing the use of operations
        that are not listed in the resource agent metadata
    bool allow_invalid_instance_attributes is a flag for allowing the use of
        instance attributes that are not listed in the resource agent metadata
        and for allowing to omit instance_attributes that are required by the
        resource agent metadata
    bool use_default_operations is a flag for skipping the addition of default
        cib operations (specified in the resource agent)
    bool ensure_disabled is a flag that keeps the resource in target-role
        "Stopped"
    mixed wait is a flag for controlling the wait for the pacemaker idle
        mechanism
    bool allow_not_suitable_command -- flag for FORCE_NOT_SUITABLE_COMMAND
    """
    resource_agent = get_agent(
        env.report_processor,
        env.cmd_runner(),
        resource_agent_name,
        allow_absent_agent,
    )
    with resource_environment(
        env,
        wait,
        [resource_id],
        _ensure_disabled_after_wait(
            ensure_disabled
            or
            resource.common.are_meta_disabled(meta_attributes)
        ),
        required_cib_version=Version(2, 8, 0)
    ) as resources_section:
        _check_special_cases(
            env,
            resource_agent,
            resources_section,
            resource_id,
            meta_attributes,
            instance_attributes,
            allow_not_suitable_command
        )
        primitive_element = resource.primitive.create(
            env.report_processor, resources_section,
            resource_id, resource_agent,
            operations, meta_attributes, instance_attributes,
            allow_invalid_operation,
            allow_invalid_instance_attributes,
            use_default_operations,
        )
        if ensure_disabled:
            resource.common.disable(primitive_element)
        resource.bundle.add_resource(
            find_element_by_tag_and_id(
                resource.bundle.TAG, resources_section, bundle_id
            ),
            primitive_element
        )


def bundle_create(
    env, bundle_id, container_type,
    container_options=None,
    network_options=None,
    port_map=None,
    storage_map=None,
    meta_attributes=None,
    force_options=False,
    ensure_disabled=False,
    wait=False,
):
    """
    Create a new bundle containing no resources

    LibraryEnvironment env -- provides communication with externals
    string bundle_id -- id of the new bundle
    string container_type -- container engine name (docker, lxc...)
    dict container_options -- container options
    dict network_options -- network options
    list of dict port_map -- a list of port mapping options
    list of dict storage_map -- a list of storage mapping options
    dict meta_attributes -- bundle's meta attributes
    bool force_options -- return warnings instead of forceable errors
    bool ensure_disabled -- set the bundle's target-role to "Stopped"
    mixed wait -- False: no wait, None: wait default timeout, int: wait timeout
    """
    container_options = container_options or {}
    network_options = network_options or {}
    port_map = port_map or []
    storage_map = storage_map or []
    meta_attributes = meta_attributes or {}

    with resource_environment(
        env,
        wait,
        [bundle_id],
        _ensure_disabled_after_wait(
            ensure_disabled
            or
            resource.common.are_meta_disabled(meta_attributes)
        ),
        required_cib_version=Version(2, 8, 0)
    ) as resources_section:
        # no need to run validations related to remote and guest nodes as
        # those nodes can only be created from primitive resources
        id_provider = IdProvider(resources_section)
        env.report_processor.process_list(
            resource.bundle.validate_new(
                id_provider,
                bundle_id,
                container_type,
                container_options,
                network_options,
                port_map,
                storage_map,
                # TODO meta attributes - there is no validation for now
                force_options
            )
        )
        bundle_element = resource.bundle.append_new(
            resources_section,
            id_provider,
            bundle_id,
            container_type,
            container_options,
            network_options,
            port_map,
            storage_map,
            meta_attributes
        )
        if ensure_disabled:
            resource.common.disable(bundle_element)


def bundle_update(
    env, bundle_id,
    container_options=None,
    network_options=None,
    port_map_add=None,
    port_map_remove=None,
    storage_map_add=None,
    storage_map_remove=None,
    meta_attributes=None,
    force_options=False,
    wait=False,
):
    """
    Modify an existing bundle (does not touch encapsulated resources)

    LibraryEnvironment env -- provides communication with externals
    string bundle_id -- id of the bundle to modify
    dict container_options -- container options to modify
    dict network_options -- network options to modify
    list of dict port_map_add -- list of port mapping options to add
    list of string port_map_remove -- list of port mapping ids to remove
    list of dict storage_map_add -- list of storage mapping options to add
    list of string storage_map_remove -- list of storage mapping ids to remove
    dict meta_attributes -- meta attributes to update
    bool force_options -- return warnings instead of forceable errors
    mixed wait -- False: no wait, None: wait default timeout, int: wait timeout
    """
    container_options = container_options or {}
    network_options = network_options or {}
    port_map_add = port_map_add or []
    port_map_remove = port_map_remove or []
    storage_map_add = storage_map_add or []
    storage_map_remove = storage_map_remove or []
    meta_attributes = meta_attributes or {}

    with resource_environment(
        env, wait, [bundle_id], required_cib_version=Version(2, 8, 0)
    ) as resources_section:
        # no need to run validations related to remote and guest nodes as
        # those nodes can only be created from primitive resources
        id_provider = IdProvider(resources_section)
        bundle_element = find_element_by_tag_and_id(
            resource.bundle.TAG, resources_section, bundle_id
        )
        env.report_processor.process_list(
            resource.bundle.validate_update(
                id_provider,
                bundle_element,
                container_options,
                network_options,
                port_map_add,
                port_map_remove,
                storage_map_add,
                storage_map_remove,
                # TODO meta attributes - there is no validation for now
                force_options
            )
        )
        resource.bundle.update(
            id_provider,
            bundle_element,
            container_options,
            network_options,
            port_map_add,
            port_map_remove,
            storage_map_add,
            storage_map_remove,
            meta_attributes
        )


def disable(env, resource_ids, wait):
    """
    Disallow specified resources to be started by the cluster

    LibraryEnvironment env --
    strings resource_ids -- ids of the resources to be disabled
    mixed wait -- False: no wait, None: wait default timeout, int: wait timeout
    """
    with resource_environment(
        env, wait, resource_ids, _ensure_disabled_after_wait(True)
    ) as resources_section:
        resource_el_list = _find_resources_or_raise(
            resources_section,
            resource_ids
        )
        env.report_processor.process_list(
            _resource_list_enable_disable(
                resource_el_list,
                resource.common.disable,
                env.get_cluster_state()
            )
        )


def enable(env, resource_ids, wait):
    """
    Allow specified resources to be started by the cluster

    LibraryEnvironment env --
    strings resource_ids -- ids of the resources to be enabled
    mixed wait -- False: no wait, None: wait default timeout, int: wait timeout
    """
    with resource_environment(
        env, wait, resource_ids, _ensure_disabled_after_wait(False)
    ) as resources_section:
        resource_el_list = _find_resources_or_raise(
            resources_section,
            resource_ids,
            resource.common.find_resources_to_enable
        )
        env.report_processor.process_list(
            _resource_list_enable_disable(
                resource_el_list,
                resource.common.enable,
                env.get_cluster_state()
            )
        )


def _resource_list_enable_disable(resource_el_list, func, cluster_state):
    report_list = []
    for resource_el in resource_el_list:
        res_id = resource_el.attrib["id"]
        try:
            if not is_resource_managed(cluster_state, res_id):
                report_list.append(reports.resource_is_unmanaged(res_id))
            func(resource_el)
        except ResourceNotFound:
            report_list.append(
                reports.id_not_found(
                    res_id,
                    ["primitive", "clone", "master", "group", "bundle"]
                )
            )
    return report_list
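

# Illustrative sketch, not part of the original API: a hypothetical caller
# disabling two resources and waiting up to 30 seconds for the cluster to
# report them stopped (wait semantics: False = no wait, None = default
# timeout, int = timeout in seconds). The resource ids are made up.
def _example_disable_and_wait(env):
    disable(env, ["virtual-ip", "webserver"], wait=30)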


def unmanage(env, resource_ids, with_monitor=False):
    """
    Set specified resources not to be managed by the cluster

    LibraryEnvironment env --
    strings resource_ids -- ids of the resources to become unmanaged
    bool with_monitor -- disable resources' monitor operations
    """
    with resource_environment(env) as resources_section:
        resource_el_list = _find_resources_or_raise(
            resources_section,
            resource_ids,
            resource.common.find_resources_to_unmanage
        )
        primitives = []

        for resource_el in resource_el_list:
            resource.common.unmanage(resource_el)
            if with_monitor:
                primitives.extend(
                    resource.common.find_primitives(resource_el)
                )

        for resource_el in set(primitives):
            for op in operations.get_resource_operations(
                resource_el, ["monitor"]
            ):
                operations.disable(op)


def manage(env, resource_ids, with_monitor=False):
    """
    Set specified resources to be managed by the cluster

    LibraryEnvironment env --
    strings resource_ids -- ids of the resources to become managed
    bool with_monitor -- enable resources' monitor operations
    """
    with resource_environment(env) as resources_section:
        report_list = []
        resource_el_list = _find_resources_or_raise(
            resources_section,
            resource_ids,
            resource.common.find_resources_to_manage
        )
        primitives = []

        for resource_el in resource_el_list:
            resource.common.manage(resource_el)
            primitives.extend(
                resource.common.find_primitives(resource_el)
            )

        for resource_el in sorted(
            set(primitives),
            key=lambda element: element.get("id", "")
        ):
            op_list = operations.get_resource_operations(
                resource_el, ["monitor"]
            )
            if with_monitor:
                for op in op_list:
                    operations.enable(op)
            else:
                monitor_enabled = False
                for op in op_list:
                    if operations.is_enabled(op):
                        monitor_enabled = True
                        break
                if op_list and not monitor_enabled:
                    # do not advise enabling monitors if there are none defined
                    report_list.append(
                        reports.resource_managed_no_monitor_enabled(
                            resource_el.get("id", "")
                        )
                    )

        env.report_processor.process_list(report_list)


def _find_resources_or_raise(
    resources_section, resource_ids, additional_search=None
):
    if not additional_search:
        additional_search = lambda x: [x]
    report_list = []
    resource_el_list = []
    resource_tags = (
        resource.clone.ALL_TAGS
        +
        [resource.group.TAG, resource.primitive.TAG, resource.bundle.TAG]
    )
    for res_id in resource_ids:
        try:
            resource_el_list.extend(
                additional_search(
                    find_element_by_tag_and_id(
                        resource_tags, resources_section, res_id
                    )
                )
            )
        except LibraryError as e:
            report_list.extend(e.args)
    if report_list:
        raise LibraryError(*report_list)
    return resource_el_list

pcs-0.9.164/pcs/lib/commands/resource_agent.py000066400000000000000000000075341326265502500212110ustar00rootroot00000000000000
from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.lib import resource_agent


def list_standards(lib_env):
    """
    List resource agents standards (ocf, lsb, ...) on the local host
    """
    return resource_agent.list_resource_agents_standards(lib_env.cmd_runner())


def list_ocf_providers(lib_env):
    """
    List resource agents ocf providers on the local host
    """
    return resource_agent.list_resource_agents_ocf_providers(
        lib_env.cmd_runner()
    )


def list_agents_for_standard_and_provider(lib_env, standard_provider=None):
    """
    List resource agents for specified standard on the local host
    string standard_provider standard[:provider], e.g. None, ocf, ocf:pacemaker
    """
    if standard_provider:
        standards = [standard_provider]
    else:
        standards = resource_agent.list_resource_agents_standards(
            lib_env.cmd_runner()
        )
    agents = []
    for std in standards:
        agents += resource_agent.list_resource_agents(lib_env.cmd_runner(), std)
    return sorted(
        agents,
        # works with both str and unicode in both python 2 and 3
        key=lambda x: x.lower()
    )


def list_agents(lib_env, describe=True, search=None):
    """
    List all resource agents on the local host, optionally filtered and
    described

    bool describe load and return agents' description as well
    string search return only agents which name contains this string
    """
    runner = lib_env.cmd_runner()

    # list agents for all standards and providers
    agent_names = []
    for std in resource_agent.list_resource_agents_standards_and_providers(
        runner
    ):
        agent_names += [
            "{0}:{1}".format(std, agent)
            for agent in resource_agent.list_resource_agents(runner, std)
        ]
    agent_names.sort(
        # works with both str and unicode in both python 2 and 3
        key=lambda x: x.lower()
    )

    return _complete_agent_list(
        runner,
        agent_names,
        describe,
        search,
        resource_agent.ResourceAgent
    )


def _complete_agent_list(
    runner, agent_names, describe, search, metadata_class
):
    # filter agents by name if requested
    if search:
        search_lower = search.lower()
        agent_names = [
            name for name in agent_names if search_lower in name.lower()
        ]

    # complete the output and load descriptions if requested
    agent_list = []
    for name in agent_names:
        try:
            agent_metadata = metadata_class(runner, name)
            if describe:
                agent_list.append(agent_metadata.get_description_info())
            else:
                agent_list.append(agent_metadata.get_name_info())
        except resource_agent.ResourceAgentError:
            #we don't return it in the list:
            #
            #UnableToGetAgentMetadata - if we cannot get valid metadata, it's
            #not a resource agent
            #
            #InvalidResourceAgentName - an invalid name cannot be used with a
            #new resource. The list of names is gained from "crm_resource"
            #whilst pcs is doing the validation. So there can be a name that
            #pcs does not recognize as valid.
            #
            #Providing a warning is not the way (currently). Other components
            #read this list and do not expect warnings there. Using the stderr
            #(to separate warnings) is currently difficult.
            pass
    return agent_list
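

# Illustrative sketch, not part of the original API: the case-insensitive
# substring filter used by _complete_agent_list() above, in isolation:
def _example_filter_agent_names(agent_names, search):
    # keeps only names containing the search string, ignoring case
    search_lower = search.lower()
    return [name for name in agent_names if search_lower in name.lower()]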


def describe_agent(lib_env, agent_name):
    """
    Get agent's description (metadata) in a structure
    string agent_name name of the agent
    """
    agent = resource_agent.find_valid_resource_agent_by_name(
        lib_env.report_processor,
        lib_env.cmd_runner(),
        agent_name,
        absent_agent_supported=False
    )
    return agent.get_full_info()

pcs-0.9.164/pcs/lib/commands/sbd.py000066400000000000000000000363761326265502500167560ustar00rootroot00000000000000
from __future__ import (
    absolute_import,
    division,
    print_function,
)

import os

from pcs import settings
from pcs.common import report_codes
from pcs.lib.communication.sbd import (
    CheckSbd,
    DisableSbdService,
    EnableSbdService,
    GetSbdConfig,
    GetSbdStatus,
    RemoveStonithWatchdogTimeout,
    SetSbdConfig,
    SetStonithWatchdogTimeoutToZero,
)
from pcs.lib.communication.nodes import GetOnlineTargets
from pcs.lib.communication.corosync import CheckCorosyncOffline
from pcs.lib.communication.tools import (
    run as run_com,
    run_and_raise,
)
from pcs.lib import (
    sbd,
    reports,
)
from pcs.lib.tools import environment_file_to_dict
from pcs.lib.errors import (
    LibraryError,
    ReportItemSeverity as Severities
)
from pcs.lib.node import NodeNotFound
from pcs.lib.validate import (
    names_in,
    run_collection_of_option_validators,
    value_nonnegative_integer,
)


UNSUPPORTED_SBD_OPTION_LIST = [
    "SBD_WATCHDOG_DEV", "SBD_OPTS", "SBD_PACEMAKER", "SBD_DEVICE"
]
ALLOWED_SBD_OPTION_LIST = [
    "SBD_DELAY_START", "SBD_STARTMODE", "SBD_WATCHDOG_TIMEOUT"
]


def _validate_sbd_options(sbd_config, allow_unknown_opts=False):
    """
    Validate user SBD configuration. Options 'SBD_WATCHDOG_DEV' and 'SBD_OPTS'
    are restricted. Returns list of ReportItem

    sbd_config -- dictionary in format: <SBD config option>: <value>
    allow_unknown_opts -- if True, accept also unknown options.
    """
    report_item_list = []
    for sbd_opt in sbd_config:
        if sbd_opt in UNSUPPORTED_SBD_OPTION_LIST:
            report_item_list.append(reports.invalid_options(
                [sbd_opt], ALLOWED_SBD_OPTION_LIST, None
            ))
        elif sbd_opt not in ALLOWED_SBD_OPTION_LIST:
            report_item_list.append(reports.invalid_options(
                [sbd_opt],
                ALLOWED_SBD_OPTION_LIST,
                None,
                severity=(
                    Severities.WARNING if allow_unknown_opts
                    else Severities.ERROR
                ),
                forceable=(
                    None if allow_unknown_opts
                    else report_codes.FORCE_OPTIONS
                )
            ))
    if "SBD_WATCHDOG_TIMEOUT" in sbd_config:
        report_item = reports.invalid_option_value(
            "SBD_WATCHDOG_TIMEOUT",
            sbd_config["SBD_WATCHDOG_TIMEOUT"],
            "a non-negative integer"
        )
        try:
            if int(sbd_config["SBD_WATCHDOG_TIMEOUT"]) < 0:
                report_item_list.append(report_item)
        except (ValueError, TypeError):
            report_item_list.append(report_item)

    return report_item_list


def _validate_watchdog_dict(watchdog_dict):
    """
    Validates if all watchdogs are specified by an absolute path.
    Returns list of ReportItem.

    watchdog_dict -- dictionary with NodeAddresses as keys and watchdog paths
        as values
    """
    return [
        reports.invalid_watchdog_path(watchdog)
        for watchdog in watchdog_dict.values()
        if not watchdog or not os.path.isabs(watchdog)
    ]


def _validate_device_dict(node_device_dict):
    """
    Validates device lists for all nodes. For each node it checks that there
    is at least one device and at most settings.sbd_max_device_num devices,
    and that all devices are specified with an absolute path.
    Returns list of ReportItem

    node_device_dict -- dictionary with NodeAddresses as keys and lists of
        devices as values
    """
    report_item_list = []
    for node_label, device_list in node_device_dict.items():
        if not device_list:
            report_item_list.append(
                reports.sbd_no_device_for_node(node_label)
            )
            continue
        elif len(device_list) > settings.sbd_max_device_num:
            report_item_list.append(reports.sbd_too_many_devices_for_node(
                node_label, device_list, settings.sbd_max_device_num
            ))
            continue
        for device in device_list:
            if not device or not os.path.isabs(device):
                report_item_list.append(
                    reports.sbd_device_path_not_absolute(device, node_label)
                )

    return report_item_list


def _check_node_names_in_cluster(node_list, node_name_list):
    """
    Check whether all node names from node_name_list exist in node_list.
    Returns list of ReportItem

    node_list -- NodeAddressesList
    node_name_list -- list of strings
    """
    not_existing_node_set = set()
    for node_name in node_name_list:
        try:
            node_list.find_by_label(node_name)
        except NodeNotFound:
            not_existing_node_set.add(node_name)

    return [reports.node_not_found(node) for node in not_existing_node_set]


def _get_full_target_dict(target_list, node_value_dict, default_value):
    """
    Returns a dictionary where keys are the labels of all nodes in the cluster
    and the value is obtained from node_value_dict for the node name, or
    default_value if the node is not specified in node_value_dict.

    target_list -- list of cluster nodes (RequestTarget objects)
    node_value_dict -- dictionary, keys: node names, values: some value
    default_value -- some default value
    """
    return dict([
        (target.label, node_value_dict.get(target.label, default_value))
        for target in target_list
    ])
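

# Illustrative note, not part of the original module: what
# _get_full_target_dict() above produces for sample data. Assuming targets
# with labels "node1" and "node2" (the values here are made up):
#
#     _get_full_target_dict(
#         targets, {"node1": "/dev/watchdog1"}, "/dev/watchdog"
#     ) == {"node1": "/dev/watchdog1", "node2": "/dev/watchdog"}
#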


def enable_sbd(
    lib_env, default_watchdog, watchdog_dict, sbd_options,
    default_device_list=None, node_device_dict=None, allow_unknown_opts=False,
    ignore_offline_nodes=False,
):
    """
    Enable SBD on all nodes in the cluster.

    lib_env -- LibraryEnvironment
    default_watchdog -- watchdog for nodes which are not specified in
        watchdog_dict. Uses the default value from settings if None.
    watchdog_dict -- dictionary with node names as keys and watchdog paths
        as values
    sbd_options -- dictionary in format: <SBD config option>: <value>
    default_device_list -- list of devices for all nodes
    node_device_dict -- dictionary with node names as keys and lists of
        devices as values
    allow_unknown_opts -- if True, accept also unknown options.
    ignore_offline_nodes -- if True, omit offline nodes
    """
    node_list = _get_cluster_nodes(lib_env)
    target_list = lib_env.get_node_target_factory().get_target_list(node_list)
    using_devices = not (
        default_device_list is None and node_device_dict is None
    )
    if default_device_list is None:
        default_device_list = []
    if node_device_dict is None:
        node_device_dict = {}
    if not default_watchdog:
        default_watchdog = settings.sbd_watchdog_default
    sbd_options = dict([(opt.upper(), val) for opt, val in sbd_options.items()])

    full_watchdog_dict = _get_full_target_dict(
        target_list, watchdog_dict, default_watchdog
    )
    full_device_dict = _get_full_target_dict(
        target_list, node_device_dict, default_device_list
    )

    lib_env.report_processor.process_list(
        _check_node_names_in_cluster(
            node_list,
            list(watchdog_dict.keys()) + list(node_device_dict.keys())
        )
        +
        _validate_watchdog_dict(full_watchdog_dict)
        +
        (_validate_device_dict(full_device_dict) if using_devices else [])
        +
        _validate_sbd_options(sbd_options, allow_unknown_opts)
    )

    com_cmd = GetOnlineTargets(
        lib_env.report_processor, ignore_offline_targets=ignore_offline_nodes,
    )
    com_cmd.set_targets(target_list)
    online_targets = run_and_raise(lib_env.get_node_communicator(), com_cmd)

    # check if SBD can be enabled
    com_cmd = CheckSbd(lib_env.report_processor)
    for target in online_targets:
        com_cmd.add_request(
            target,
            full_watchdog_dict[target.label],
            full_device_dict[target.label] if using_devices else [],
        )
    run_and_raise(lib_env.get_node_communicator(), com_cmd)

    # enable ATB if needed
    if not lib_env.is_cman_cluster and not using_devices:
        corosync_conf = lib_env.get_corosync_conf()
        if sbd.atb_has_to_be_enabled_pre_enable_check(corosync_conf):
            lib_env.report_processor.process(reports.sbd_requires_atb())
            corosync_conf.set_quorum_options(
                lib_env.report_processor, {"auto_tie_breaker": "1"}
            )
            lib_env.push_corosync_conf(corosync_conf, ignore_offline_nodes)

    # distribute SBD configuration
    config = sbd.get_default_sbd_config()
    config.update(sbd_options)
    com_cmd = SetSbdConfig(lib_env.report_processor)
    for target in online_targets:
        com_cmd.add_request(
            target,
            sbd.create_sbd_config(
                config,
                target.label,
                full_watchdog_dict[target.label],
                full_device_dict[target.label]
            )
        )
    run_and_raise(lib_env.get_node_communicator(), com_cmd)

    # remove cluster prop 'stonith_watchdog_timeout'
    com_cmd = RemoveStonithWatchdogTimeout(lib_env.report_processor)
    com_cmd.set_targets(online_targets)
    run_and_raise(lib_env.get_node_communicator(), com_cmd)

    # enable SBD service on all nodes
    com_cmd = EnableSbdService(lib_env.report_processor)
    com_cmd.set_targets(online_targets)
    run_and_raise(lib_env.get_node_communicator(), com_cmd)

    lib_env.report_processor.process(
        reports.cluster_restart_required_to_apply_changes()
    )


def disable_sbd(lib_env, ignore_offline_nodes=False):
    """
    Disable SBD on all nodes in the cluster.
lib_env -- LibraryEnvironment ignore_offline_nodes -- if True, omit offline nodes """ com_cmd = GetOnlineTargets( lib_env.report_processor, ignore_offline_targets=ignore_offline_nodes, ) com_cmd.set_targets( lib_env.get_node_target_factory().get_target_list( _get_cluster_nodes(lib_env) ) ) online_nodes = run_and_raise(lib_env.get_node_communicator(), com_cmd) if lib_env.is_cman_cluster: com_cmd = CheckCorosyncOffline( lib_env.report_processor, skip_offline_targets=ignore_offline_nodes, ) com_cmd.set_targets(online_nodes) run_and_raise(lib_env.get_node_communicator(), com_cmd) com_cmd = SetStonithWatchdogTimeoutToZero(lib_env.report_processor) com_cmd.set_targets(online_nodes) run_and_raise(lib_env.get_node_communicator(), com_cmd) com_cmd = DisableSbdService(lib_env.report_processor) com_cmd.set_targets(online_nodes) run_and_raise(lib_env.get_node_communicator(), com_cmd) if not lib_env.is_cman_cluster: lib_env.report_processor.process( reports.cluster_restart_required_to_apply_changes() ) def get_cluster_sbd_status(lib_env): """ Returns status of SBD service in cluster in dictionary with format: { <node name>: { "installed": <bool>, "enabled": <bool>, "running": <bool> }, ... } lib_env -- LibraryEnvironment """ com_cmd = GetSbdStatus(lib_env.report_processor) com_cmd.set_targets( lib_env.get_node_target_factory().get_target_list( _get_cluster_nodes(lib_env) ) ) return run_com(lib_env.get_node_communicator(), com_cmd) def get_cluster_sbd_config(lib_env): """ Returns a list of SBD configs from all cluster nodes. Structure of data: [ { "node": <node name>, "config": <SBD config as dictionary> or None if there was a failure, }, ... ] If an error occurs while obtaining the config from some node, its config will be None. If obtaining the config fails on all nodes, an empty dictionary is returned. lib_env -- LibraryEnvironment """ com_cmd = GetSbdConfig(lib_env.report_processor) com_cmd.set_targets( lib_env.get_node_target_factory().get_target_list( _get_cluster_nodes(lib_env) ) ) return run_com(lib_env.get_node_communicator(), com_cmd) def get_local_sbd_config(lib_env): """ Returns local SBD config as dictionary. lib_env -- LibraryEnvironment """ return environment_file_to_dict(sbd.get_local_sbd_config()) def _get_cluster_nodes(lib_env): if lib_env.is_cman_cluster: return lib_env.get_cluster_conf().get_nodes() return lib_env.get_corosync_conf().get_nodes() def initialize_block_devices(lib_env, device_list, option_dict): """ Initialize SBD devices in device_list with option_dict. lib_env -- LibraryEnvironment device_list -- list of strings option_dict -- dictionary """ report_item_list = [] if not device_list: report_item_list.append(reports.required_option_is_missing(["device"])) supported_options = sbd.DEVICE_INITIALIZATION_OPTIONS_MAPPING.keys() report_item_list += names_in(supported_options, option_dict.keys()) validator_list = [ value_nonnegative_integer(key) for key in supported_options ] report_item_list += run_collection_of_option_validators( option_dict, validator_list ) lib_env.report_processor.process_list(report_item_list) sbd.initialize_block_devices( lib_env.report_processor, lib_env.cmd_runner(), device_list, option_dict ) def get_local_devices_info(lib_env, dump=False): """ Returns list of local devices info in format: { "device": <device path>, "list": <output of 'sbd list'>, "dump": <output of 'sbd dump'> if dump is True, None otherwise } If SBD is not enabled, an empty list will be returned. 
lib_env -- LibraryEnvironment dump -- if True, also return the output of the command 'sbd dump' """ if not sbd.is_sbd_enabled(lib_env.cmd_runner()): return [] device_list = sbd.get_local_sbd_device_list() report_item_list = [] output = [] for device in device_list: obj = { "device": device, "list": None, "dump": None, } try: obj["list"] = sbd.get_device_messages_info( lib_env.cmd_runner(), device ) if dump: obj["dump"] = sbd.get_device_sbd_header_dump( lib_env.cmd_runner(), device ) except LibraryError as e: report_item_list += e.args output.append(obj) for report_item in report_item_list: report_item.severity = Severities.WARNING lib_env.report_processor.process_list(report_item_list) return output def set_message(lib_env, device, node_name, message): """ Set message on device for node_name. lib_env -- LibraryEnvironment device -- string, absolute path to device node_name -- string message -- string, message type, should be one of settings.sbd_message_types """ report_item_list = [] missing_options = [] if not device: missing_options.append("device") if not node_name: missing_options.append("node") if missing_options: report_item_list.append( reports.required_option_is_missing(missing_options) ) supported_messages = settings.sbd_message_types if message not in supported_messages: report_item_list.append( reports.invalid_option_value("message", message, supported_messages) ) lib_env.report_processor.process_list(report_item_list) sbd.set_message(lib_env.cmd_runner(), device, node_name, message) pcs-0.9.164/pcs/lib/commands/stonith.py000066400000000000000000000140511326265502500176650ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.lib.cib import resource from pcs.lib.cib.resource.common import are_meta_disabled from pcs.lib.commands.resource import ( _ensure_disabled_after_wait, resource_environment ) from pcs.lib.pacemaker.values import validate_id from pcs.lib.resource_agent import find_valid_stonith_agent_by_name as get_agent def create( env, stonith_id, stonith_agent_name, operations, meta_attributes, instance_attributes, allow_absent_agent=False, allow_invalid_operation=False, allow_invalid_instance_attributes=False, use_default_operations=True, ensure_disabled=False, wait=False, ): """ Create a stonith resource in a cib. 
LibraryEnvironment env provides everything needed for communication with externals string stonith_id is an identifier of the stonith resource string stonith_agent_name contains the name identifying the agent list of dict operations contains attributes for each entered operation dict meta_attributes contains attributes for primitive/meta_attributes dict instance_attributes contains attributes for primitive/instance_attributes bool allow_absent_agent is a flag allowing an agent that is not installed in the system bool allow_invalid_operation is a flag allowing the use of operations that are not listed in the stonith agent metadata bool allow_invalid_instance_attributes is a flag allowing the use of instance attributes that are not listed in the stonith agent metadata and allowing required instance attributes to be omitted bool use_default_operations is a flag preventing the addition of default cib operations (specified in the stonith agent) bool ensure_disabled is a flag that keeps the resource in target-role "Stopped" mixed wait is a flag controlling waiting for the pacemaker idle mechanism """ stonith_agent = get_agent( env.report_processor, env.cmd_runner(), stonith_agent_name, allow_absent_agent, ) if stonith_agent.get_provides_unfencing(): meta_attributes["provides"] = "unfencing" with resource_environment( env, wait, [stonith_id], _ensure_disabled_after_wait( ensure_disabled or are_meta_disabled(meta_attributes), ) ) as resources_section: stonith_element = resource.primitive.create( env.report_processor, resources_section, stonith_id, stonith_agent, raw_operation_list=operations, meta_attributes=meta_attributes, instance_attributes=instance_attributes, allow_invalid_operation=allow_invalid_operation, allow_invalid_instance_attributes=allow_invalid_instance_attributes, use_default_operations=use_default_operations, resource_type="stonith" ) if ensure_disabled: resource.common.disable(stonith_element) def create_in_group( env, stonith_id, stonith_agent_name, group_id, operations, meta_attributes, instance_attributes, allow_absent_agent=False, allow_invalid_operation=False, allow_invalid_instance_attributes=False, use_default_operations=True, ensure_disabled=False, adjacent_resource_id=None, put_after_adjacent=False, wait=False, ): """ Create a stonith resource in a cib and put it into the defined group. 
LibraryEnvironment env provides everything needed for communication with externals string stonith_id is an identifier of the stonith resource string stonith_agent_name contains the name identifying the agent string group_id is an identifier of the group to put the stonith into list of dict operations contains attributes for each entered operation dict meta_attributes contains attributes for primitive/meta_attributes dict instance_attributes contains attributes for primitive/instance_attributes bool allow_absent_agent is a flag allowing an agent that is not installed in the system bool allow_invalid_operation is a flag allowing the use of operations that are not listed in the stonith agent metadata bool allow_invalid_instance_attributes is a flag allowing the use of instance attributes that are not listed in the stonith agent metadata and allowing required instance attributes to be omitted bool use_default_operations is a flag preventing the addition of default cib operations (specified in the stonith agent) bool ensure_disabled is a flag that keeps the resource in target-role "Stopped" string adjacent_resource_id identifies the neighbor of the newly created stonith bool put_after_adjacent is a flag to put the newly created resource before/after the adjacent stonith mixed wait is a flag controlling waiting for the pacemaker idle mechanism """ stonith_agent = get_agent( env.report_processor, env.cmd_runner(), stonith_agent_name, allow_absent_agent, ) if stonith_agent.get_provides_unfencing(): meta_attributes["provides"] = "unfencing" with resource_environment( env, wait, [stonith_id], _ensure_disabled_after_wait( ensure_disabled or are_meta_disabled(meta_attributes), ) ) as resources_section: stonith_element = resource.primitive.create( env.report_processor, resources_section, stonith_id, stonith_agent, operations, meta_attributes, instance_attributes, allow_invalid_operation, allow_invalid_instance_attributes, use_default_operations, ) if ensure_disabled: resource.common.disable(stonith_element) validate_id(group_id, "group name") resource.group.place_resource( resource.group.provide_group(resources_section, group_id), stonith_element, adjacent_resource_id, put_after_adjacent, ) pcs-0.9.164/pcs/lib/commands/stonith_agent.py000066400000000000000000000021351326265502500210430ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.lib import resource_agent from pcs.lib.commands.resource_agent import _complete_agent_list def list_agents(lib_env, describe=True, search=None): """ List all stonith agents on the local host, optionally filtered and described bool describe load and return agents' description as well string search return only agents whose name contains this string """ runner = lib_env.cmd_runner() agent_names = resource_agent.list_stonith_agents(runner) return _complete_agent_list( runner, agent_names, describe, search, resource_agent.StonithAgent ) def describe_agent(lib_env, agent_name): """ Get agent's description (metadata) in a structure string agent_name name of the agent (not containing "stonith:" prefix) """ agent = resource_agent.find_valid_stonith_agent_by_name( lib_env.report_processor, lib_env.cmd_runner(), agent_name, absent_agent_supported=False ) return agent.get_full_info() 
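# Illustrative sketch, not part of the pcs sources above: one way a caller
# might drive the stonith library commands defined in stonith.py and
# stonith_agent.py. It assumes `env` is an already constructed
# LibraryEnvironment; the agent name and the instance attribute values below
# are hypothetical example data, not taken from pcs itself.

from pcs.lib.commands import stonith, stonith_agent

def sketch_create_fence_device(env):
    # Look up candidate agents; describe=False skips loading each agent's
    # metadata, search filters agents by a substring of their name.
    agents = stonith_agent.list_agents(env, describe=False, search="ipmi")

    # Create a stonith resource and keep it disabled until it is verified;
    # the keyword arguments mirror the create() signature defined above.
    stonith.create(
        env,
        stonith_id="fence-node1",
        stonith_agent_name="fence_ipmilan",
        operations=[],
        meta_attributes={},
        instance_attributes={"ip": "192.0.2.10", "login": "admin"},
        ensure_disabled=True,
    )
    return agents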
pcs-0.9.164/pcs/lib/commands/test/000077500000000000000000000000001326265502500166015ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/commands/test/__init__.py000066400000000000000000000000001326265502500207000ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/commands/test/cib_options/000077500000000000000000000000001326265502500211115ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/commands/test/cib_options/__init__.py000066400000000000000000000000001326265502500232100ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/commands/test/cib_options/test_operations_defaults.py000066400000000000000000000065701326265502500266040ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.lib.commands import cib_options from pcs.test.tools.command_env import get_env_tools from pcs.test.tools.pcs_unittest import TestCase from pcs.test.tools import fixture from pcs.common import report_codes FIXTURE_INITIAL_DEFAULTS = """ """ class SetOperationsDefaults(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) self.config.runner.cib.load(optional_in_conf=FIXTURE_INITIAL_DEFAULTS) def tearDown(self): self.env_assist.assert_reports([ fixture.warn(report_codes.DEFAULTS_CAN_BE_OVERRIDEN) ]) def assert_options_produces_op_defaults_xml(self, options, op_defaults_xml): self.config.env.push_cib( replace={ "./configuration/op_defaults/meta_attributes": op_defaults_xml } ) cib_options.set_operations_defaults(self.env_assist.get_env(), options) def test_change(self): self.assert_options_produces_op_defaults_xml( { "a": "B", "b": "C", }, """ """ ) def test_add(self): self.assert_options_produces_op_defaults_xml( {"c": "d"}, """ """ ) def test_remove(self): self.config.env.push_cib( remove= "./configuration/op_defaults/meta_attributes/nvpair[@name='a']" ) cib_options.set_operations_defaults( self.env_assist.get_env(), {"a": ""}, ) def test_add_when_section_does_not_exists(self): (self.config .remove("runner.cib.load") .runner.cib.load() .env.push_cib( optional_in_conf=""" """ ) ) cib_options.set_operations_defaults( self.env_assist.get_env(), {"a": "b"}, ) def test_remove_section_when_empty(self): self.config.env.push_cib(remove="./configuration/op_defaults") cib_options.set_operations_defaults( self.env_assist.get_env(), { "a": "", "b": "", } ) pcs-0.9.164/pcs/lib/commands/test/cib_options/test_resources_defaults.py000066400000000000000000000066611326265502500264340ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.lib.commands import cib_options from pcs.test.tools.command_env import get_env_tools from pcs.test.tools.pcs_unittest import TestCase from pcs.test.tools import fixture from pcs.common import report_codes FIXTURE_INITIAL_DEFAULTS = """ """ class SetResourcesDefaults(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) self.config.runner.cib.load(optional_in_conf=FIXTURE_INITIAL_DEFAULTS) def tearDown(self): self.env_assist.assert_reports([ fixture.warn(report_codes.DEFAULTS_CAN_BE_OVERRIDEN) ]) def assert_options_produces_rsc_defaults_xml( self, options, rsc_defaults_xml ): self.config.env.push_cib( replace={ "./configuration/rsc_defaults/meta_attributes": rsc_defaults_xml } ) cib_options.set_resources_defaults(self.env_assist.get_env(), options) def test_change(self): self.assert_options_produces_rsc_defaults_xml( { "a": "B", "b": "C", }, """ """ ) def test_add(self): self.assert_options_produces_rsc_defaults_xml( {"c": "d"}, """ """ 
) def test_remove(self): self.config.env.push_cib( remove= "./configuration/rsc_defaults/meta_attributes/nvpair[@name='a']" ) cib_options.set_resources_defaults( self.env_assist.get_env(), {"a": ""}, ) def test_add_when_section_does_not_exists(self): (self.config .remove("runner.cib.load") .runner.cib.load() .env.push_cib( optional_in_conf=""" """ ) ) cib_options.set_resources_defaults( self.env_assist.get_env(), {"a": "b"}, ) def test_remove_section_when_empty(self): (self.config .env.push_cib(remove="./configuration/rsc_defaults") ) cib_options.set_resources_defaults( self.env_assist.get_env(), { "a": "", "b": "", } ) pcs-0.9.164/pcs/lib/commands/test/cluster/000077500000000000000000000000001326265502500202625ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/commands/test/cluster/__init__.py000066400000000000000000000000001326265502500223610ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/commands/test/cluster/test_verify.py000066400000000000000000000071221326265502500232010ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.common import report_codes from pcs.lib.commands.cluster import verify from pcs.test.tools import fixture from pcs.test.tools.command_env import get_env_tools from pcs.test.tools.pcs_unittest import TestCase CRM_VERIFY_ERROR_REPORT = "someting wrong\nsomething else wrong" BAD_FENCING_TOPOLOGY = """ """ BAD_FENCING_TOPOLOGY_REPORTS = [ fixture.error( report_codes.STONITH_RESOURCES_DO_NOT_EXIST, stonith_ids=["FX"], ), fixture.error( report_codes.NODE_NOT_FOUND, node="node1", searched_types=[], ), ] class CibAsWholeValid(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) self.config.runner.pcmk.verify() def test_success_on_valid(self): (self.config .runner.cib.load() .runner.pcmk.load_state() ) verify(self.env_assist.get_env()) def test_fail_on_invalid_fence_topology(self): (self.config .runner.cib.load(optional_in_conf=BAD_FENCING_TOPOLOGY) .runner.pcmk.load_state() ) self.env_assist.assert_raise_library_error( lambda: verify(self.env_assist.get_env()), list(BAD_FENCING_TOPOLOGY_REPORTS) ) class CibAsWholeInvalid(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) self.config.runner.pcmk.verify(stderr=CRM_VERIFY_ERROR_REPORT) def assert_raises_invalid_cib_content(self, extra_reports=None): extra_reports = extra_reports if extra_reports else [] self.env_assist.assert_raise_library_error( lambda: verify(self.env_assist.get_env()), [ fixture.error( report_codes.INVALID_CIB_CONTENT, report=CRM_VERIFY_ERROR_REPORT, ), ] + extra_reports, ) def test_fail_immediately_on_unloadable_cib(self): self.config.runner.cib.load(returncode=1) self.assert_raises_invalid_cib_content() def test_continue_on_loadable_cib(self): (self.config .runner.cib.load() .runner.pcmk.load_state() ) self.assert_raises_invalid_cib_content() def test_add_following_errors(self): #More fencing topology tests are provided by tests of #pcs.lib.commands.fencing_topology (self.config .runner.cib.load(optional_in_conf=BAD_FENCING_TOPOLOGY) .runner.pcmk.load_state() ) self.assert_raises_invalid_cib_content( list(BAD_FENCING_TOPOLOGY_REPORTS) ) class CibIsMocked(TestCase): def test_success_on_valid_cib(self): cib_tempfile = "/fake/tmp/file" env_assist, config = get_env_tools(test_case=self) (config .env.set_cib_data("", cib_tempfile=cib_tempfile) .runner.pcmk.verify(cib_tempfile=cib_tempfile) .runner.cib.load() .runner.pcmk.load_state() ) verify(env_assist.get_env()) class 
VerboseMode(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) self.config.runner.pcmk.verify(verbose=True) def test_success_on_valid_cib(self): (self.config .runner.cib.load() .runner.pcmk.load_state() ) verify(self.env_assist.get_env(), verbose=True) pcs-0.9.164/pcs/lib/commands/test/remote_node/000077500000000000000000000000001326265502500211015ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/commands/test/remote_node/__init__.py000066400000000000000000000000001326265502500232000ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/commands/test/remote_node/fixtures_add.py000066400000000000000000000160441326265502500241410ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import base64 import json from pcs.common import report_codes from pcs.test.tools import fixture from pcs.test.tools.pcs_unittest import mock OFFLINE_ERROR_MSG = "Could not resolve host" FAIL_HTTP_KWARGS = dict( output="", was_connected=False, errno='6', error_msg_template=OFFLINE_ERROR_MSG, ) class EnvConfigMixin(object): PCMK_AUTHKEY_PATH = "/etc/pacemaker/authkey" def __init__(self, call_collection, wrap_helper, config): self.__calls = call_collection self.config = config def distribute_authkey( self, communication_list, pcmk_authkey_content, result=None, **kwargs ): if kwargs.get("was_connected", True): result = result if result is not None else { "code": "written", "message": "", } kwargs["results"] = { "pacemaker_remote authkey": result } elif result is not None: raise AssertionError( "Keyword 'result' makes no sense with 'was_connected=False'" ) self.config.http.put_file( communication_list=communication_list, files={ "pacemaker_remote authkey": { "type": "pcmk_remote_authkey", "data": base64 .b64encode(pcmk_authkey_content) .decode("utf-8") , "rewrite_existing": True } }, **kwargs ) def check_node_availability(self, label, result=True, **kwargs): if "output" not in kwargs: kwargs["output"] = json.dumps({"node_available": result}) self.config.http.place_multinode_call( "node_available", communication_list=[dict(label=label)], action="remote/node_available", **kwargs ) def authkey_exists(self, return_value): self.config.fs.exists(self.PCMK_AUTHKEY_PATH, return_value=return_value) def open_authkey(self, pcmk_authkey_content="", fail=False): kwargs = {} if fail: kwargs["side_effect"] = EnvironmentError("open failed") else: kwargs["return_value"] = mock.mock_open( read_data=pcmk_authkey_content )() self.config.fs.open( self.PCMK_AUTHKEY_PATH, mode="rb", **kwargs ) def push_existing_authkey_to_remote( self, remote_host, distribution_result=None ): pcmk_authkey_content = b"password" (self.config .local.authkey_exists(return_value=True) .local.open_authkey(pcmk_authkey_content) .local.distribute_authkey( communication_list=[dict(label=remote_host)], pcmk_authkey_content=pcmk_authkey_content, result=distribution_result ) ) def run_pacemaker_remote(self, label, result=None, **kwargs): if kwargs.get("was_connected", True): result = result if result is not None else { "code": "success", "message": "", } kwargs["results"] = { "pacemaker_remote enable": result, "pacemaker_remote start": result } elif result is not None: raise AssertionError( "Keyword 'result' makes no sense with 'was_connected=False'" ) self.config.http.manage_services( communication_list=[dict(label=label)], action_map={ "pacemaker_remote enable": { "type": "service_command", "service": "pacemaker_remote", "command": "enable", }, "pacemaker_remote start": { "type": 
"service_command", "service": "pacemaker_remote", "command": "start", }, }, **kwargs ) REPORTS = (fixture.ReportStore() .info( "authkey_distribution_started" , report_codes.FILES_DISTRIBUTION_STARTED, #python 3 has dict_keys so list is not the right structure file_list={"pacemaker_remote authkey": None}.keys(), description="remote node configuration files", ) .info( "authkey_distribution_success", report_codes.FILE_DISTRIBUTION_SUCCESS, file_description="pacemaker_remote authkey", ) .info( "pcmk_remote_start_enable_started", report_codes.SERVICE_COMMANDS_ON_NODES_STARTED, #python 3 has dict_keys so list is not the right structure action_list={ "pacemaker_remote start": None, "pacemaker_remote enable": None, }.keys(), description="start of service pacemaker_remote", ) .info( "pcmk_remote_enable_success", report_codes.SERVICE_COMMAND_ON_NODE_SUCCESS, service_command_description="pacemaker_remote enable", ) .info( "pcmk_remote_start_success", report_codes.SERVICE_COMMAND_ON_NODE_SUCCESS, service_command_description="pacemaker_remote start", ) ) EXTRA_REPORTS = (fixture.ReportStore() .error( "manage_services_connection_failed", report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, command="remote/manage_services", reason=OFFLINE_ERROR_MSG, force_code=report_codes.SKIP_OFFLINE_NODES ) .as_warn( "manage_services_connection_failed", "manage_services_connection_failed_warn", ) .copy( "manage_services_connection_failed", "check_availability_connection_failed", command="remote/node_available", ) .as_warn( "check_availability_connection_failed", "check_availability_connection_failed_warn", ) .copy( "manage_services_connection_failed", "put_file_connection_failed", command="remote/put_file", ) .as_warn( "put_file_connection_failed", "put_file_connection_failed_warn", ) .error( "pcmk_remote_enable_failed", report_codes.SERVICE_COMMAND_ON_NODE_ERROR, reason="Operation failed.", service_command_description="pacemaker_remote enable", force_code=report_codes.SKIP_ACTION_ON_NODES_ERRORS, ) .as_warn("pcmk_remote_enable_failed", "pcmk_remote_enable_failed_warn") .copy( "pcmk_remote_enable_failed", "pcmk_remote_start_failed", service_command_description="pacemaker_remote start", ) .as_warn("pcmk_remote_start_failed", "pcmk_remote_start_failed_warn") .error( "authkey_distribution_failed", report_codes.FILE_DISTRIBUTION_ERROR, reason="File already exists", file_description="pacemaker_remote authkey", force_code=report_codes.SKIP_FILE_DISTRIBUTION_ERRORS ) .as_warn("authkey_distribution_failed", "authkey_distribution_failed_warn") ) pcs-0.9.164/pcs/lib/commands/test/remote_node/fixtures_remove.py000066400000000000000000000125321326265502500247040ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.common import report_codes from pcs.test.tools import fixture OFFLINE_ERROR_MSG = "Could not resolve host" class EnvConfigMixin(object): def __init__(self, call_collection, wrap_helper, config): self.__calls = call_collection self.config = config def destroy_pacemaker_remote( self, label=None, address_list=None, result=None, **kwargs ): if kwargs.get("was_connected", True): result = result if result is not None else { "code": "success", "message": "", } kwargs["results"] = { "pacemaker_remote stop": result, "pacemaker_remote disable": result } elif result is not None: raise AssertionError( "Keyword 'result' makes no sense with 'was_connected=False'" ) if label or address_list: if kwargs.get("communication_list", None): raise AssertionError( "Keywords 'label' and 
'address_list' makes no sense with" " 'communication_list != None'" ) kwargs["communication_list"] = [ dict(label=label, address_list=address_list) ] self.config.http.manage_services( action_map={ "pacemaker_remote stop": { "type": "service_command", "service": "pacemaker_remote", "command": "stop", }, "pacemaker_remote disable": { "type": "service_command", "service": "pacemaker_remote", "command": "disable", }, }, **kwargs ) def remove_authkey( self, communication_list, result=None, **kwargs ): if kwargs.get("was_connected", True): result = result if result is not None else { "code": "deleted", "message": "", } kwargs["results"] = { "pacemaker_remote authkey": result } elif result is not None: raise AssertionError( "Keyword 'result' makes no sense with 'was_connected=False'" ) self.config.http.remove_file( communication_list=communication_list, files={ "pacemaker_remote authkey": { "type": "pcmk_remote_authkey", } }, **kwargs ) REPORTS = (fixture.ReportStore() .info( "pcmk_remote_disable_stop_started", report_codes.SERVICE_COMMANDS_ON_NODES_STARTED, #python 3 has dict_keys so list is not the right structure action_list={ "pacemaker_remote disable": None, "pacemaker_remote stop": None, }.keys(), description="stop of service pacemaker_remote", ) .info( "pcmk_remote_disable_success", report_codes.SERVICE_COMMAND_ON_NODE_SUCCESS, service_command_description="pacemaker_remote disable", ) .info( "pcmk_remote_stop_success", report_codes.SERVICE_COMMAND_ON_NODE_SUCCESS, service_command_description="pacemaker_remote stop", ) .info( "authkey_remove_started" , report_codes.FILES_REMOVE_FROM_NODE_STARTED, #python 3 has dict_keys so list is not the right structure file_list={"pacemaker_remote authkey": None}.keys(), description="remote node files", ) .info( "authkey_remove_success", report_codes.FILE_REMOVE_FROM_NODE_SUCCESS, file_description="pacemaker_remote authkey", ) ) EXTRA_REPORTS = (fixture.ReportStore() .error( "manage_services_connection_failed", report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, command="remote/manage_services", reason=OFFLINE_ERROR_MSG, force_code=report_codes.SKIP_OFFLINE_NODES ) .as_warn( "manage_services_connection_failed", "manage_services_connection_failed_warn", ) .copy( "manage_services_connection_failed", "remove_file_connection_failed", command="remote/remove_file", ) .as_warn( "remove_file_connection_failed", "remove_file_connection_failed_warn", ) .error( "authkey_remove_failed", report_codes.FILE_REMOVE_FROM_NODE_ERROR, reason="Access denied", file_description="pacemaker_remote authkey", force_code=report_codes.SKIP_FILE_DISTRIBUTION_ERRORS, ) .as_warn( "authkey_remove_failed", "authkey_remove_failed_warn", ) .error( "pcmk_remote_disable_failed", report_codes.SERVICE_COMMAND_ON_NODE_ERROR, reason="Operation failed.", service_command_description="pacemaker_remote disable", force_code=report_codes.SKIP_ACTION_ON_NODES_ERRORS, ) .as_warn( "pcmk_remote_disable_failed", "pcmk_remote_disable_failed_warn", ) .copy( "pcmk_remote_disable_failed", "pcmk_remote_stop_failed", service_command_description="pacemaker_remote stop", ) .as_warn( "pcmk_remote_stop_failed", "pcmk_remote_stop_failed_warn", ) ) pcs-0.9.164/pcs/lib/commands/test/remote_node/test_node_add_guest.py000066400000000000000000000365511326265502500254700ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial from pcs.common import report_codes from pcs.lib.commands.remote_node import node_add_guest as node_add_guest_orig from 
pcs.lib.commands.test.remote_node.fixtures_add import( EnvConfigMixin, REPORTS as FIXTURE_REPORTS, EXTRA_REPORTS as FIXTURE_EXTRA_REPORTS, FAIL_HTTP_KWARGS, ) from pcs.test.tools import fixture from pcs.test.tools.command_env import get_env_tools from pcs.test.tools.pcs_unittest import TestCase, mock NODE_NAME = "node-name" REMOTE_HOST = "remote-host" VIRTUAL_MACHINE_ID = "virtual_machine_id" NODE_1 = "node-1" NODE_2 = "node-2" def node_add_guest( env, node_name=NODE_NAME, resource_id=VIRTUAL_MACHINE_ID, options=None, **kwargs ): options = options or {"remote-addr": REMOTE_HOST} node_add_guest_orig(env, node_name, resource_id, options, **kwargs) FIXTURE_RESOURCES = """ """.format(VIRTUAL_MACHINE_ID) FIXTURE_META_ATTRIBUTES = """ """ class LocalConfig(EnvConfigMixin): def load_cib(self): self.config.runner.cib.load(resources=FIXTURE_RESOURCES) def push_cib(self, wait=False, meta_attributes=FIXTURE_META_ATTRIBUTES): self.config.env.push_cib( append={ './/resources/primitive[@id="{0}"]' .format(VIRTUAL_MACHINE_ID): meta_attributes , }, wait=wait ) get_env_tools = partial(get_env_tools, local_extensions={"local": LocalConfig}) def base_reports_for_host(host=REMOTE_HOST): return ( FIXTURE_REPORTS .adapt("authkey_distribution_started", node_list=[host]) .adapt("authkey_distribution_success", node=host) .adapt("pcmk_remote_start_enable_started", node_list=[host]) .adapt("pcmk_remote_enable_success", node=host) .adapt("pcmk_remote_start_success", node=host) ) REPORTS = base_reports_for_host() EXTRA_REPORTS = (FIXTURE_EXTRA_REPORTS.adapt_multi( [ "manage_services_connection_failed", "manage_services_connection_failed_warn", "check_availability_connection_failed", "check_availability_connection_failed_warn", "put_file_connection_failed", "put_file_connection_failed_warn", "pcmk_remote_enable_failed", "pcmk_remote_enable_failed_warn", "pcmk_remote_start_failed", "pcmk_remote_start_failed_warn", "authkey_distribution_failed", "authkey_distribution_failed_warn", ], node=REMOTE_HOST )) class AddGuest(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(self) def test_success_base(self): (self.config .local.load_cib() .corosync_conf.load(node_name_list=[NODE_1, NODE_2]) .local.check_node_availability(REMOTE_HOST) .local.push_existing_authkey_to_remote(REMOTE_HOST) .local.run_pacemaker_remote(REMOTE_HOST) .local.push_cib() ) node_add_guest(self.env_assist.get_env()) self.env_assist.assert_reports(REPORTS) @mock.patch("pcs.lib.commands.remote_node.generate_key") def test_success_generated_authkey(self, generate_key): generate_key.return_value = b"password" (self.config .local.load_cib() .corosync_conf.load(node_name_list=[NODE_1, NODE_2]) .local.check_node_availability(REMOTE_HOST, result=True) .local.authkey_exists(return_value=False) .local.distribute_authkey( communication_list=[ dict(label=NODE_1), dict(label=NODE_2), dict(label=REMOTE_HOST), ], pcmk_authkey_content=generate_key.return_value, ) .local.run_pacemaker_remote(REMOTE_HOST) .local.push_cib() ) node_add_guest(self.env_assist.get_env()) generate_key.assert_called_once_with() self.env_assist.assert_reports( REPORTS .adapt( "authkey_distribution_started", node_list=[NODE_1, NODE_2, REMOTE_HOST] ) .copy( "authkey_distribution_success", "authkey_distribution_success_node1", node=NODE_1, ) .copy( "authkey_distribution_success", "authkey_distribution_success_node2", node=NODE_2, ) ) def test_can_skip_all_offline(self): pcmk_authkey_content = b"password" (self.config .local.load_cib() .corosync_conf.load(node_name_list=[NODE_1, 
NODE_2]) .local.check_node_availability(REMOTE_HOST, **FAIL_HTTP_KWARGS) .local.authkey_exists(return_value=True) .local.open_authkey(pcmk_authkey_content) .local.distribute_authkey( communication_list=[dict(label=REMOTE_HOST)], pcmk_authkey_content=pcmk_authkey_content, **FAIL_HTTP_KWARGS ) .local.run_pacemaker_remote(REMOTE_HOST, **FAIL_HTTP_KWARGS) .local.push_cib() ) node_add_guest(self.env_assist.get_env(), skip_offline_nodes=True) self.env_assist.assert_reports( REPORTS.select( "authkey_distribution_started", "pcmk_remote_start_enable_started", ) + EXTRA_REPORTS.select( "check_availability_connection_failed_warn", "put_file_connection_failed_warn", "manage_services_connection_failed_warn", ) ) def test_changed_options(self): meta_attributes=""" """ (self.config .local.load_cib() .corosync_conf.load(node_name_list=[NODE_1, NODE_2]) .local.check_node_availability(NODE_NAME) .local.push_existing_authkey_to_remote(NODE_NAME) .local.run_pacemaker_remote(NODE_NAME) .local.push_cib(meta_attributes=meta_attributes) ) node_add_guest(self.env_assist.get_env(), options={ # remote-addr is omitted here "remote-port": 1234, "remote-connect-timeout": 20 }) self.env_assist.assert_reports(base_reports_for_host(NODE_NAME)) def test_noexistent_resource(self): (self.config .local.load_cib() .corosync_conf.load(node_name_list=[NODE_1, NODE_2]) ) self.env_assist.assert_raise_library_error( lambda: node_add_guest( self.env_assist.get_env(), resource_id="NOEXISTENT" ), [ fixture.error( report_codes.ID_NOT_FOUND, expected_types=["primitive"], context_type="resources", id="NOEXISTENT", context_id="" ) ], ) def test_validate_values(self): (self.config .local.load_cib() .corosync_conf.load(node_name_list=[NODE_1, NODE_2]) ) self.env_assist.assert_raise_library_error( lambda: node_add_guest( self.env_assist.get_env(), node_name="*name", options={ "remote-addr": "*addr", "remote-port": "abc", "remote-connect-timeout": "def", } ), [ fixture.error( report_codes.INVALID_OPTION_VALUE, option_name="remote-connect-timeout", option_value="def", allowed_values="time interval (e.g.
1, 2s, 3m, 4h, ...)" ), fixture.error( report_codes.INVALID_OPTION_VALUE, option_name="remote-port", option_value="abc", allowed_values="a port number (1-65535)" ) ] ) class WithWait(TestCase): def setUp(self): self.wait = 1 self.env_assist, self.config = get_env_tools(self) (self.config .runner.pcmk.can_wait() .local.load_cib() .corosync_conf.load(node_name_list=[NODE_1, NODE_2]) .local.check_node_availability(REMOTE_HOST) .local.push_existing_authkey_to_remote(REMOTE_HOST) .local.run_pacemaker_remote(REMOTE_HOST) .local.push_cib(wait=self.wait) ) def test_success_when_resource_started(self): (self.config .runner.pcmk.load_state(raw_resources=dict( resource_id=VIRTUAL_MACHINE_ID, resource_agent="ocf::pacemaker:remote", node_name=NODE_1, )) ) node_add_guest(self.env_assist.get_env(), wait=self.wait) self.env_assist.assert_reports( REPORTS .info( "resource_running", report_codes.RESOURCE_RUNNING_ON_NODES, roles_with_nodes={"Started": [NODE_1]}, resource_id=VIRTUAL_MACHINE_ID ) ) def test_fail_when_resource_not_started(self): (self.config .runner.pcmk.load_state(raw_resources=dict( resource_id=VIRTUAL_MACHINE_ID, resource_agent="ocf::pacemaker:remote", node_name=NODE_1, failed="true", )) ) self.env_assist.assert_raise_library_error( lambda: node_add_guest(self.env_assist.get_env(), wait=self.wait), [ fixture.error( report_codes.RESOURCE_DOES_NOT_RUN, resource_id=VIRTUAL_MACHINE_ID, ) ] ) self.env_assist.assert_reports(REPORTS) class RemoteService(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(self) (self.config .local.load_cib() .corosync_conf.load(node_name_list=[NODE_1, NODE_2]) .local.check_node_availability(REMOTE_HOST) .local.push_existing_authkey_to_remote(REMOTE_HOST) ) def test_fails_when_offline(self): (self.config .local.run_pacemaker_remote(label=REMOTE_HOST, **FAIL_HTTP_KWARGS) ) self.env_assist.assert_raise_library_error( lambda: node_add_guest(self.env_assist.get_env()), ) self.env_assist.assert_reports( REPORTS[:"pcmk_remote_enable_success"] + EXTRA_REPORTS.select("manage_services_connection_failed") ) def test_fail_when_remotely_fail(self): (self.config .local.run_pacemaker_remote(REMOTE_HOST, result={ "code": "fail", "message": "Action failed", }) ) self.env_assist.assert_raise_library_error( lambda: node_add_guest(self.env_assist.get_env()), ) self.env_assist.assert_reports( REPORTS[:"pcmk_remote_enable_success"] + EXTRA_REPORTS.select( "pcmk_remote_enable_failed", "pcmk_remote_start_failed", ) ) def test_forceable_when_remotely_fail(self): (self.config .local.run_pacemaker_remote(REMOTE_HOST, result={ "code": "fail", "message": "Action failed", }) .local.push_cib() ) node_add_guest( self.env_assist.get_env(), allow_pacemaker_remote_service_fail=True ) self.env_assist.assert_reports( REPORTS[:"pcmk_remote_enable_success"] + EXTRA_REPORTS.select( "pcmk_remote_enable_failed_warn", "pcmk_remote_start_failed_warn", ) ) class AuthkeyDistribution(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(self) (self.config .local.load_cib() .corosync_conf.load(node_name_list=[NODE_1, NODE_2]) .local.check_node_availability(REMOTE_HOST) ) def test_fails_when_offline(self): pcmk_authkey_content = b"password" (self.config .local.authkey_exists(return_value=True) .local.open_authkey(pcmk_authkey_content) .local.distribute_authkey( communication_list=[dict(label=REMOTE_HOST)], pcmk_authkey_content=pcmk_authkey_content, **FAIL_HTTP_KWARGS ) ) self.env_assist.assert_raise_library_error( lambda: node_add_guest(self.env_assist.get_env()) ) 
self.env_assist.assert_reports( REPORTS[:"authkey_distribution_success"] + EXTRA_REPORTS.only( "manage_services_connection_failed", command="remote/put_file", ) ) def test_fail_when_remotely_fail(self): (self.config .local.push_existing_authkey_to_remote( REMOTE_HOST, distribution_result={ "code": "conflict", "message": "", } ) ) self.env_assist.assert_raise_library_error( lambda: node_add_guest(self.env_assist.get_env()) ) self.env_assist.assert_reports( REPORTS[:"authkey_distribution_success"] + EXTRA_REPORTS.select("authkey_distribution_failed") ) def test_forceable_when_remotely_fail(self): (self.config .local.push_existing_authkey_to_remote( REMOTE_HOST, distribution_result={ "code": "conflict", "message": "", } ) .local.run_pacemaker_remote(REMOTE_HOST) .local.push_cib() ) node_add_guest( self.env_assist.get_env(), allow_incomplete_distribution=True, ) self.env_assist.assert_reports( REPORTS.remove("authkey_distribution_success") + EXTRA_REPORTS.select("authkey_distribution_failed_warn") ) pcs-0.9.164/pcs/lib/commands/test/remote_node/test_node_add_remote.py000066400000000000000000000446561326265502500256410ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial from pcs.common import report_codes, env_file_role_codes from pcs.lib.commands.remote_node import node_add_remote as node_add_remote_orig from pcs.lib.commands.test.remote_node.fixtures_add import( EnvConfigMixin, REPORTS as FIXTURE_REPORTS, EXTRA_REPORTS as FIXTURE_EXTRA_REPORTS, FAIL_HTTP_KWARGS, ) from pcs.test.tools import fixture from pcs.test.tools.command_env import get_env_tools from pcs.test.tools.pcs_unittest import TestCase, mock REMOTE_HOST = "remote-host" NODE_NAME = "node-name" NODE_1 = "node-1" NODE_2 = "node-2" def node_add_remote( env, host=None, node_name=None, operations=None, meta_attributes=None, instance_attributes=None, **kwargs ): operations = operations or [] meta_attributes = meta_attributes or {} instance_attributes = instance_attributes or {} host = host or REMOTE_HOST node_name = node_name or NODE_NAME node_add_remote_orig( env, host, node_name, operations, meta_attributes, instance_attributes, **kwargs ) class LocalConfig(EnvConfigMixin): def load_cluster_configs(self, cluster_node_list): (self.config .runner.cib.load() .corosync_conf.load(node_name_list=cluster_node_list) .runner.pcmk.load_agent(agent_name="ocf:pacemaker:remote") ) get_env_tools = partial(get_env_tools, local_extensions={"local": LocalConfig}) REPORTS = (FIXTURE_REPORTS .adapt("authkey_distribution_started", node_list=[REMOTE_HOST]) .adapt("authkey_distribution_success", node=REMOTE_HOST) .adapt("pcmk_remote_start_enable_started", node_list=[REMOTE_HOST]) .adapt("pcmk_remote_enable_success", node=REMOTE_HOST) .adapt("pcmk_remote_start_success", node=REMOTE_HOST) ) EXTRA_REPORTS = (FIXTURE_EXTRA_REPORTS.adapt_multi( [ "manage_services_connection_failed", "manage_services_connection_failed_warn", "check_availability_connection_failed", "check_availability_connection_failed_warn", "put_file_connection_failed", "put_file_connection_failed_warn", "pcmk_remote_enable_failed", "pcmk_remote_enable_failed_warn", "pcmk_remote_start_failed", "pcmk_remote_start_failed_warn", "authkey_distribution_failed", "authkey_distribution_failed_warn", ], node=REMOTE_HOST )) FIXTURE_RESOURCES = """ """ class AddRemote(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(self) def test_success_base(self): (self.config 
.local.load_cluster_configs(cluster_node_list=[NODE_1, NODE_2]) .local.check_node_availability(REMOTE_HOST, result=True) .local.push_existing_authkey_to_remote(REMOTE_HOST) .local.run_pacemaker_remote(REMOTE_HOST) .env.push_cib(resources=FIXTURE_RESOURCES) ) node_add_remote(self.env_assist.get_env()) self.env_assist.assert_reports(REPORTS) def test_success_base_host_as_name(self): #validation and creation of resource is covered in resource create tests (self.config .local.load_cluster_configs(cluster_node_list=[NODE_1, NODE_2]) .local.check_node_availability(REMOTE_HOST, result=True) .local.push_existing_authkey_to_remote(REMOTE_HOST) .local.run_pacemaker_remote(REMOTE_HOST) .env.push_cib( resources=""" """ ) ) node_add_remote(self.env_assist.get_env(), node_name=REMOTE_HOST) self.env_assist.assert_reports(REPORTS) def test_node_name_conflict_report_is_unique(self): (self.config .runner.cib.load( resources=""" """ ) .corosync_conf.load(node_name_list=[NODE_1, NODE_2]) .runner.pcmk.load_agent(agent_name="ocf:pacemaker:remote") ) self.env_assist.assert_raise_library_error( lambda: node_add_remote(self.env_assist.get_env()), [ fixture.error( report_codes.ID_ALREADY_EXISTS, id=NODE_NAME, ) ] ) @mock.patch("pcs.lib.commands.remote_node.generate_key") def test_success_generated_authkey(self, generate_key): generate_key.return_value = b"password" (self.config .local.load_cluster_configs(cluster_node_list=[NODE_1, NODE_2]) .local.check_node_availability(REMOTE_HOST, result=True) .local.authkey_exists(return_value=False) .local.distribute_authkey( communication_list=[ dict(label=NODE_1), dict(label=NODE_2), dict(label=REMOTE_HOST), ], pcmk_authkey_content=generate_key.return_value, ) .local.run_pacemaker_remote(REMOTE_HOST) .env.push_cib(resources=FIXTURE_RESOURCES) ) node_add_remote(self.env_assist.get_env()) generate_key.assert_called_once_with() self.env_assist.assert_reports( REPORTS .adapt( "authkey_distribution_started", node_list=[NODE_1, NODE_2, REMOTE_HOST] ) .copy( "authkey_distribution_success", "authkey_distribution_success_node1", node=NODE_1, ) .copy( "authkey_distribution_success", "authkey_distribution_success_node2", node=NODE_2, ) ) def test_can_skip_all_offline(self): pcmk_authkey_content = b"password" (self.config .local.load_cluster_configs(cluster_node_list=[NODE_1, NODE_2]) .local.check_node_availability(REMOTE_HOST, **FAIL_HTTP_KWARGS) .local.authkey_exists(return_value=True) .local.open_authkey(pcmk_authkey_content) .local.distribute_authkey( communication_list=[dict(label=REMOTE_HOST)], pcmk_authkey_content=pcmk_authkey_content, **FAIL_HTTP_KWARGS ) .local.run_pacemaker_remote(REMOTE_HOST, **FAIL_HTTP_KWARGS) .env.push_cib(resources=FIXTURE_RESOURCES) ) node_add_remote(self.env_assist.get_env(), skip_offline_nodes=True) self.env_assist.assert_reports( REPORTS.select( "authkey_distribution_started", "pcmk_remote_start_enable_started", ) + EXTRA_REPORTS.select( "check_availability_connection_failed_warn", "put_file_connection_failed_warn", "manage_services_connection_failed_warn", ) ) def test_fails_when_remote_node_is_not_prepared(self): (self.config .local.load_cluster_configs(cluster_node_list=[NODE_1, NODE_2]) .local.check_node_availability(REMOTE_HOST, result=False) ) self.env_assist.assert_raise_library_error( lambda: node_add_remote(self.env_assist.get_env()), [ fixture.error( report_codes.CANNOT_ADD_NODE_IS_IN_CLUSTER, node=REMOTE_HOST, ) ] ) def test_fails_when_remote_node_returns_invalid_output(self): (self.config 
.local.load_cluster_configs(cluster_node_list=[NODE_1, NODE_2]) .local.check_node_availability(REMOTE_HOST, output="INVALID_OUTPUT") ) self.env_assist.assert_raise_library_error( lambda: node_add_remote(self.env_assist.get_env()), [ fixture.error( report_codes.INVALID_RESPONSE_FORMAT, node=REMOTE_HOST, ) ] ) def test_open_failed(self): (self.config .local.load_cluster_configs(cluster_node_list=[NODE_1, NODE_2]) .local.check_node_availability(REMOTE_HOST, result=True) .local.authkey_exists(return_value=True) .local.open_authkey(fail=True) ) self.env_assist.assert_raise_library_error( lambda: node_add_remote( self.env_assist.get_env(), ), [ fixture.error( report_codes.FILE_IO_ERROR, file_role=env_file_role_codes.PACEMAKER_AUTHKEY, file_path=LocalConfig.PCMK_AUTHKEY_PATH, operation="read", ) ], expected_in_processor=False ) def test_validate_host_already_exists(self): (self.config .local.load_cluster_configs(cluster_node_list=[NODE_1, NODE_2]) ) #more validation tests in pcs/lib/cib/test/test_resource_remote_node.py self.env_assist.assert_raise_library_error( lambda: node_add_remote( self.env_assist.get_env(), host=NODE_1, ), [ fixture.error( report_codes.ID_ALREADY_EXISTS, id=NODE_1 ) ] ) class WithWait(TestCase): def setUp(self): self. wait = 1 self.env_assist, self.config = get_env_tools(self) (self.config .runner.pcmk.can_wait() .local.load_cluster_configs(cluster_node_list=[NODE_1, NODE_2]) .local.check_node_availability(REMOTE_HOST, result=True) .local.push_existing_authkey_to_remote(REMOTE_HOST) .local.run_pacemaker_remote(REMOTE_HOST) .env.push_cib(resources=FIXTURE_RESOURCES, wait=self.wait) ) def test_success_when_resource_started(self): (self.config .runner.pcmk.load_state(raw_resources=dict( resource_id=NODE_NAME, resource_agent="ocf::pacemaker:remote", node_name=NODE_1, )) ) node_add_remote(self.env_assist.get_env(), wait=self.wait) self.env_assist.assert_reports( REPORTS .info( "resource_running", report_codes.RESOURCE_RUNNING_ON_NODES, roles_with_nodes={"Started": [NODE_1]}, resource_id=NODE_NAME ) ) def test_fail_when_resource_not_started(self): (self.config .runner.pcmk.load_state(raw_resources=dict( resource_id=NODE_NAME, resource_agent="ocf::pacemaker:remote", node_name=NODE_1, failed="true", )) ) self.env_assist.assert_raise_library_error( lambda: node_add_remote(self.env_assist.get_env(), wait=self.wait), [ fixture.error( report_codes.RESOURCE_DOES_NOT_RUN, resource_id=NODE_NAME, ) ] ) self.env_assist.assert_reports(REPORTS) class AddRemotePcmkRemoteService(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(self) (self.config .local.load_cluster_configs(cluster_node_list=[NODE_1, NODE_2]) .local.check_node_availability(REMOTE_HOST, result=True) .local.push_existing_authkey_to_remote(REMOTE_HOST) ) def test_fails_when_offline(self): (self.config .local.run_pacemaker_remote(label=REMOTE_HOST, **FAIL_HTTP_KWARGS) ) self.env_assist.assert_raise_library_error( lambda: node_add_remote(self.env_assist.get_env()) ) self.env_assist.assert_reports( REPORTS[:"pcmk_remote_enable_success"] + EXTRA_REPORTS.select("manage_services_connection_failed") ) def test_fail_when_remotely_fail(self): (self.config .local.run_pacemaker_remote(REMOTE_HOST, result={ "code": "fail", "message": "Action failed", }) ) self.env_assist.assert_raise_library_error( lambda: node_add_remote(self.env_assist.get_env()) ) self.env_assist.assert_reports( REPORTS[:"pcmk_remote_enable_success"] + EXTRA_REPORTS.select( "pcmk_remote_enable_failed", "pcmk_remote_start_failed", ) ) def 
test_forceable_when_remotely_fail(self): (self.config .local.run_pacemaker_remote(REMOTE_HOST, result={ "code": "fail", "message": "Action failed", }) .env.push_cib(resources=FIXTURE_RESOURCES) ) node_add_remote( self.env_assist.get_env(), allow_pacemaker_remote_service_fail=True ) self.env_assist.assert_reports( REPORTS[:"pcmk_remote_enable_success"] + EXTRA_REPORTS.select( "pcmk_remote_enable_failed_warn", "pcmk_remote_start_failed_warn", ) ) class AddRemoteAuthkeyDistribution(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(self) (self.config .local.load_cluster_configs(cluster_node_list=[NODE_1, NODE_2]) .local.check_node_availability(REMOTE_HOST, result=True) ) def test_fails_when_offline(self): pcmk_authkey_content = b"password" (self.config .local.authkey_exists(return_value=True) .local.open_authkey(pcmk_authkey_content) .local.distribute_authkey( communication_list=[dict(label=REMOTE_HOST)], pcmk_authkey_content=pcmk_authkey_content, **FAIL_HTTP_KWARGS ) ) self.env_assist.assert_raise_library_error( lambda: node_add_remote(self.env_assist.get_env()) ) self.env_assist.assert_reports( REPORTS[:"authkey_distribution_success"] + EXTRA_REPORTS.only( "manage_services_connection_failed", command="remote/put_file", ) ) def test_fail_when_remotely_fail(self): (self.config .local.push_existing_authkey_to_remote( REMOTE_HOST, distribution_result={ "code": "conflict", "message": "", } ) ) self.env_assist.assert_raise_library_error( lambda: node_add_remote(self.env_assist.get_env()) ) self.env_assist.assert_reports( REPORTS[:"authkey_distribution_success"] + EXTRA_REPORTS.select("authkey_distribution_failed") ) def test_forceable_when_remotely_fail(self): (self.config .local.push_existing_authkey_to_remote( REMOTE_HOST, distribution_result={ "code": "conflict", "message": "", } ) .local.run_pacemaker_remote(REMOTE_HOST) .env.push_cib(resources=FIXTURE_RESOURCES) ) node_add_remote( self.env_assist.get_env(), allow_incomplete_distribution=True, ) self.env_assist.assert_reports( REPORTS.remove("authkey_distribution_success") + EXTRA_REPORTS.select("authkey_distribution_failed_warn") ) pcs-0.9.164/pcs/lib/commands/test/remote_node/test_node_remove_guest.py000066400000000000000000000375301326265502500262330ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial from pcs.common import report_codes from pcs.lib.commands.remote_node import( node_remove_guest as node_remove_guest_orig ) from pcs.lib.commands.test.remote_node.fixtures_add import FAIL_HTTP_KWARGS from pcs.lib.commands.test.remote_node.fixtures_remove import( EnvConfigMixin, REPORTS as FIXTURE_REPORTS, EXTRA_REPORTS as FIXTURE_EXTRA_REPORTS, ) from pcs.test.tools import fixture from pcs.test.tools.command_env import get_env_tools from pcs.test.tools.pcs_unittest import TestCase REMOTE_HOST = "remote-host" NODE_NAME = "node-name" VIRTUAL_MACHINE_ID = "virtual_machine_id" def node_remove_guest(env, node_identifier=REMOTE_HOST, **kwargs): node_remove_guest_orig(env, node_identifier, **kwargs) REPORTS = (FIXTURE_REPORTS .adapt("pcmk_remote_disable_stop_started", node_list=[NODE_NAME]) .adapt("pcmk_remote_disable_success", node=NODE_NAME) .adapt("pcmk_remote_stop_success", node=NODE_NAME) .adapt("authkey_remove_started", node_list=[NODE_NAME]) .adapt("authkey_remove_success", node=NODE_NAME) ) EXTRA_REPORTS = (FIXTURE_EXTRA_REPORTS .adapt_multi( [ "manage_services_connection_failed", "manage_services_connection_failed_warn", 
"remove_file_connection_failed", "remove_file_connection_failed_warn", ], node=REMOTE_HOST ) .adapt_multi( [ "authkey_remove_failed", "authkey_remove_failed_warn", "pcmk_remote_disable_failed", "pcmk_remote_disable_failed_warn", "pcmk_remote_stop_failed", "pcmk_remote_stop_failed_warn", ], node=NODE_NAME ) ) FIXTURE_RESOURCES = """ """.format(VIRTUAL_MACHINE_ID, REMOTE_HOST, NODE_NAME) get_env_tools = partial(get_env_tools, local_extensions={ "local": EnvConfigMixin }) class RemoveGuest(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(self) def find_by(self, identifier): (self.config .runner.cib.load(resources=FIXTURE_RESOURCES) .local.destroy_pacemaker_remote( label=NODE_NAME, address_list=[REMOTE_HOST] ) .local.remove_authkey( communication_list=[ dict(label=NODE_NAME, address_list=[REMOTE_HOST]) ], ) .env.push_cib(remove=".//primitive/meta_attributes") .runner.pcmk.remove_node(NODE_NAME) ) node_remove_guest(self.env_assist.get_env(), node_identifier=identifier) self.env_assist.assert_reports(REPORTS) def test_success_base(self): self.find_by(REMOTE_HOST) def test_can_find_by_node_name(self): self.find_by(NODE_NAME) def test_can_find_by_resource_id(self): self.find_by(VIRTUAL_MACHINE_ID) class RemoveGuestOthers(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(self) def test_success_with_wait(self): wait = 10 (self.config .runner.pcmk.can_wait() .runner.cib.load(resources=FIXTURE_RESOURCES) .local.destroy_pacemaker_remote( label=NODE_NAME, address_list=[REMOTE_HOST] ) .local.remove_authkey( communication_list=[ dict(label=NODE_NAME, address_list=[REMOTE_HOST]) ], ) .env.push_cib(remove=".//primitive/meta_attributes", wait=wait) .runner.pcmk.remove_node(NODE_NAME) ) node_remove_guest(self.env_assist.get_env(), wait=wait) self.env_assist.assert_reports(REPORTS) def test_can_skip_all_offline(self): (self.config .runner.cib.load(resources=FIXTURE_RESOURCES) .local.destroy_pacemaker_remote( label=NODE_NAME, address_list=[REMOTE_HOST], **FAIL_HTTP_KWARGS ) .local.remove_authkey( communication_list=[ dict(label=NODE_NAME, address_list=[REMOTE_HOST]) ], **FAIL_HTTP_KWARGS ) .env.push_cib(remove=".//primitive/meta_attributes") .runner.pcmk.remove_node(NODE_NAME) ) node_remove_guest(self.env_assist.get_env(), skip_offline_nodes=True) self.env_assist.assert_reports( REPORTS.remove( "pcmk_remote_disable_success", "pcmk_remote_stop_success", "authkey_remove_success", ) + EXTRA_REPORTS.select( "manage_services_connection_failed_warn", "remove_file_connection_failed_warn" )) def test_fail_when_identifier_not_found(self): (self.config .runner.cib.load(resources=FIXTURE_RESOURCES) ) self.env_assist.assert_raise_library_error( lambda: node_remove_guest( self.env_assist.get_env(), node_identifier="NOEXISTENT" ), [ fixture.error( report_codes.NODE_NOT_FOUND, node="NOEXISTENT", searched_types="guest", ) ], expected_in_processor=False ) class MultipleResults(TestCase): fixture_multi_resources = """ """.format( VIRTUAL_MACHINE_ID, REMOTE_HOST, NODE_NAME, "B-ADDR", "B-NAME" ) def setUp(self): self.env_assist, self.config = get_env_tools(self) self.config.runner.cib.load(resources=self.fixture_multi_resources) self.multiple_result_reports = (fixture.ReportStore() .error( "multiple_result_found", report_codes.MULTIPLE_RESULTS_FOUND, result_identifier_list=[ VIRTUAL_MACHINE_ID, REMOTE_HOST, "C", ], result_type="resource", search_description=REMOTE_HOST, force_code=report_codes.FORCE_REMOVE_MULTIPLE_NODES ) .as_warn( "multiple_result_found", 
"multiple_result_found_warn", ) ) def test_fail(self): self.env_assist.assert_raise_library_error( lambda: node_remove_guest( self.env_assist.get_env(), node_identifier=REMOTE_HOST ), self.multiple_result_reports.select("multiple_result_found").reports ) def test_force(self): (self.config .local.destroy_pacemaker_remote( communication_list=[ dict(label="B-NAME", address_list=["B-ADDR"]), dict(label=REMOTE_HOST, address_list=[NODE_NAME]), dict(label=NODE_NAME, address_list=[REMOTE_HOST]), ], ) .local.remove_authkey( communication_list=[ dict(label="B-NAME", address_list=["B-ADDR"]), dict(label=REMOTE_HOST, address_list=[NODE_NAME]), dict(label=NODE_NAME, address_list=[REMOTE_HOST]), ], ) .env.push_cib(remove=[ ".//meta_attributes[@id='A-M']", ".//meta_attributes[@id='B-M']", ".//meta_attributes[@id='C-M']", ]) .runner.pcmk.remove_node("B-NAME", name="runner.pcmk.remove_node3") .runner.pcmk.remove_node(REMOTE_HOST) .runner.pcmk.remove_node(NODE_NAME, name="runner.pcmk.remove_node2") ) node_remove_guest( self.env_assist.get_env(), node_identifier=REMOTE_HOST, allow_remove_multiple_nodes=True ) self.env_assist.assert_reports( REPORTS .adapt( "pcmk_remote_disable_stop_started", node_list=["B-NAME", REMOTE_HOST, NODE_NAME] ) .copy( "pcmk_remote_disable_success", "pcmk_remote_disable_success_b_name", node="B-NAME", ) .copy( "pcmk_remote_stop_success", "pcmk_remote_stop_success_b_name", node="B-NAME", ) .copy( "pcmk_remote_disable_success", "pcmk_remote_disable_success_remote_host", node=REMOTE_HOST, ) .copy( "pcmk_remote_stop_success", "pcmk_remote_stop_success_remote_host", node=REMOTE_HOST, ) .adapt( "authkey_remove_started", node_list=["B-NAME", REMOTE_HOST, NODE_NAME] ) .copy( "authkey_remove_success", "authkey_remove_success_b_name", node="B-NAME", ) .copy( "authkey_remove_success", "authkey_remove_success_remote_host", node=REMOTE_HOST, ) + self.multiple_result_reports.select("multiple_result_found_warn") ) class AuthkeyRemove(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(self) (self.config .runner.cib.load(resources=FIXTURE_RESOURCES) .local.destroy_pacemaker_remote( label=NODE_NAME, address_list=[REMOTE_HOST] ) ) def test_fails_when_offline(self): self.config.local.remove_authkey( communication_list=[ dict(label=NODE_NAME, address_list=[REMOTE_HOST]) ], **FAIL_HTTP_KWARGS ) self.env_assist.assert_raise_library_error( lambda: node_remove_guest(self.env_assist.get_env()) ) self.env_assist.assert_reports( REPORTS.remove("authkey_remove_success") + EXTRA_REPORTS.select("remove_file_connection_failed") ) def test_fails_when_remotely_fails(self): self.config.local.remove_authkey( communication_list=[ dict(label=NODE_NAME, address_list=[REMOTE_HOST]) ], result={ "code": "unexpected", "message": "Access denied", } ) self.env_assist.assert_raise_library_error( lambda: node_remove_guest(self.env_assist.get_env()) ) self.env_assist.assert_reports( REPORTS.remove("authkey_remove_success") + EXTRA_REPORTS.select("authkey_remove_failed") ) def test_forceable_when_remotely_fail(self): (self.config .local.remove_authkey( communication_list=[ dict(label=NODE_NAME, address_list=[REMOTE_HOST]) ], result={ "code": "unexpected", "message": "Access denied", } ) .env.push_cib(remove=".//primitive/meta_attributes") .runner.pcmk.remove_node(NODE_NAME) ) node_remove_guest( self.env_assist.get_env(), allow_pacemaker_remote_service_fail=True ) self.env_assist.assert_reports( REPORTS.remove("authkey_remove_success") + EXTRA_REPORTS.select("authkey_remove_failed_warn") ) class 
class PcmkRemoteServiceDestroy(TestCase):
    def setUp(self):
        self.env_assist, self.config = get_env_tools(self)
        self.config.runner.cib.load(resources=FIXTURE_RESOURCES)

    def test_fails_when_offline(self):
        (self.config
            .local.destroy_pacemaker_remote(
                label=NODE_NAME, address_list=[REMOTE_HOST], **FAIL_HTTP_KWARGS
            )
        )
        self.env_assist.assert_raise_library_error(
            lambda: node_remove_guest(self.env_assist.get_env())
        )
        self.env_assist.assert_reports(
            REPORTS[:"pcmk_remote_disable_success"]
            +
            EXTRA_REPORTS.select("manage_services_connection_failed")
        )

    def test_fails_when_remotely_fails(self):
        (self.config
            .local.destroy_pacemaker_remote(
                label=NODE_NAME, address_list=[REMOTE_HOST],
                result={
                    "code": "fail",
                    "message": "Action failed",
                }
            )
        )
        self.env_assist.assert_raise_library_error(
            lambda: node_remove_guest(self.env_assist.get_env())
        )
        self.env_assist.assert_reports(
            REPORTS[:"pcmk_remote_disable_success"]
            +
            EXTRA_REPORTS.select(
                "pcmk_remote_disable_failed",
                "pcmk_remote_stop_failed",
            )
        )

    def test_forceable_when_remotely_fail(self):
        (self.config
            .local.destroy_pacemaker_remote(
                label=NODE_NAME, address_list=[REMOTE_HOST],
                result={
                    "code": "fail",
                    "message": "Action failed",
                }
            )
            .local.remove_authkey(
                communication_list=[
                    dict(label=NODE_NAME, address_list=[REMOTE_HOST])
                ],
            )
            .env.push_cib(remove=".//primitive/meta_attributes")
            .runner.pcmk.remove_node(NODE_NAME)
        )
        node_remove_guest(
            self.env_assist.get_env(),
            allow_pacemaker_remote_service_fail=True
        )
        self.env_assist.assert_reports(
            REPORTS.remove(
                "pcmk_remote_disable_success",
                "pcmk_remote_stop_success",
            )
            +
            EXTRA_REPORTS.select(
                "pcmk_remote_disable_failed_warn",
                "pcmk_remote_stop_failed_warn",
            )
        )

pcs-0.9.164/pcs/lib/commands/test/remote_node/test_node_remove_remote.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from functools import partial

from pcs.common import report_codes
from pcs.lib.commands.remote_node import (
    node_remove_remote as node_remove_remote_orig
)
from pcs.lib.commands.test.remote_node.fixtures_add import FAIL_HTTP_KWARGS
from pcs.lib.commands.test.remote_node.fixtures_remove import (
    EnvConfigMixin,
    REPORTS as FIXTURE_REPORTS,
    EXTRA_REPORTS as FIXTURE_EXTRA_REPORTS,
)
from pcs.test.tools import fixture
from pcs.test.tools.command_env import get_env_tools
from pcs.test.tools.pcs_unittest import TestCase, mock

NODE_NAME = "node-name"
REMOTE_HOST = "remote-host"
NODE_1 = "node-1"
NODE_2 = "node-2"

def node_remove_remote(env, node_identifier=REMOTE_HOST, *args, **kwargs):
    node_remove_remote_orig(env, node_identifier, *args, **kwargs)

FIXTURE_RESOURCES = """ """.format(
    NODE_NAME, REMOTE_HOST,
)

REPORTS = (FIXTURE_REPORTS
    .adapt("pcmk_remote_disable_stop_started", node_list=[NODE_NAME])
    .adapt("pcmk_remote_disable_success", node=NODE_NAME)
    .adapt("pcmk_remote_stop_success", node=NODE_NAME)
    .adapt("authkey_remove_started", node_list=[NODE_NAME])
    .adapt("authkey_remove_success", node=NODE_NAME)
)

EXTRA_REPORTS = (FIXTURE_EXTRA_REPORTS
    .adapt_multi(
        [
            "manage_services_connection_failed",
            "manage_services_connection_failed_warn",
            "remove_file_connection_failed",
            "remove_file_connection_failed_warn",
        ],
        node=REMOTE_HOST
    )
    .adapt_multi(
        [
            "authkey_remove_failed",
            "authkey_remove_failed_warn",
            "pcmk_remote_disable_failed",
            "pcmk_remote_disable_failed_warn",
            "pcmk_remote_stop_failed",
            "pcmk_remote_stop_failed_warn",
        ],
        node=NODE_NAME
    )
)

get_env_tools = partial(get_env_tools, local_extensions={
    "local": EnvConfigMixin
})
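# NOTE (behaviour inferred from how REPORTS and EXTRA_REPORTS are built here):
# ReportStore.adapt(name, **kwargs) seems to return a copy of the store with
# the named report's fields overridden, and adapt_multi applies the same
# override to several named reports at once. This rebinds the shared fixtures
# from fixtures_remove to this file's NODE_NAME / REMOTE_HOST constants.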
class RemoveRemote(TestCase):
    def setUp(self):
        self.env_assist, self.config = get_env_tools(self)
        self.remove_resource = mock.Mock()

    def find_by(self, identifier):
        (self.config
            .runner.cib.load(resources=FIXTURE_RESOURCES)
            .local.destroy_pacemaker_remote(
                label=NODE_NAME, address_list=[REMOTE_HOST]
            )
            .local.remove_authkey(
                communication_list=[
                    dict(label=NODE_NAME, address_list=[REMOTE_HOST])
                ],
            )
        )
        node_remove_remote(
            self.env_assist.get_env(),
            node_identifier=identifier,
            remove_resource=self.remove_resource
        )
        self.remove_resource.assert_called_once_with(
            NODE_NAME, is_remove_remote_context=True
        )
        self.env_assist.assert_reports(REPORTS)

    def test_success_base(self):
        self.find_by(REMOTE_HOST)

    def test_can_find_by_node_name(self):
        self.find_by(NODE_NAME)


class RemoveRemoteOthers(TestCase):
    def setUp(self):
        self.env_assist, self.config = get_env_tools(self)
        self.remove_resource = mock.Mock()

    def test_can_skip_all_offline(self):
        (self.config
            .runner.cib.load(resources=FIXTURE_RESOURCES)
            .local.destroy_pacemaker_remote(
                label=NODE_NAME, address_list=[REMOTE_HOST], **FAIL_HTTP_KWARGS
            )
            .local.remove_authkey(
                communication_list=[
                    dict(label=NODE_NAME, address_list=[REMOTE_HOST])
                ],
                **FAIL_HTTP_KWARGS
            )
        )
        node_remove_remote(
            self.env_assist.get_env(),
            remove_resource=self.remove_resource,
            skip_offline_nodes=True
        )
        self.remove_resource.assert_called_once_with(
            NODE_NAME, is_remove_remote_context=True
        )
        self.env_assist.assert_reports(
            REPORTS.remove(
                "pcmk_remote_disable_success",
                "pcmk_remote_stop_success",
                "authkey_remove_success",
            )
            +
            EXTRA_REPORTS.select(
                "manage_services_connection_failed_warn",
                "remove_file_connection_failed_warn"
            )
        )

    def test_fail_when_identifier_not_found(self):
        (self.config
            .runner.cib.load(resources=FIXTURE_RESOURCES)
        )
        self.env_assist.assert_raise_library_error(
            lambda: node_remove_remote(
                self.env_assist.get_env(),
                remove_resource=self.remove_resource,
                node_identifier="NOEXISTENT"
            ),
            [
                fixture.error(
                    report_codes.NODE_NOT_FOUND,
                    node="NOEXISTENT",
                    searched_types="remote",
                )
            ],
            expected_in_processor=False
        )
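# NOTE: unlike the guest-node variant, node_remove_remote takes a
# remove_resource callback; the tests inject a mock.Mock() for it and the
# assertion remove_resource.assert_called_once_with(NODE_NAME,
# is_remove_remote_context=True) pins down the delegation contract --
# presumably because the actual deletion of the remote-node primitive still
# lives outside this library command.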
class MultipleResults(TestCase):
    fixture_multi_resources = """ """.format(
        NODE_NAME, REMOTE_HOST, "OTHER-REMOTE"
    )

    def setUp(self):
        self.env_assist, self.config = get_env_tools(self)
        self.remove_resource = mock.Mock()
        (self.config
            .runner.cib.load(resources=self.fixture_multi_resources)
        )
        self.multiple_result_reports = (fixture.ReportStore()
            .error(
                "multiple_result_found",
                report_codes.MULTIPLE_RESULTS_FOUND,
                result_identifier_list=[
                    NODE_NAME,
                    REMOTE_HOST,
                ],
                result_type="resource",
                search_description=REMOTE_HOST,
                force_code=report_codes.FORCE_REMOVE_MULTIPLE_NODES
            )
            .as_warn(
                "multiple_result_found",
                "multiple_result_found_warn",
            )
        )

    def test_fail(self):
        self.env_assist.assert_raise_library_error(
            lambda: node_remove_remote(
                self.env_assist.get_env(),
                node_identifier=REMOTE_HOST,
                remove_resource=self.remove_resource
            ),
            self.multiple_result_reports.select("multiple_result_found").reports
        )

    def test_force(self):
        (self.config
            .local.destroy_pacemaker_remote(
                communication_list=[
                    dict(label=REMOTE_HOST, address_list=["OTHER-REMOTE"]),
                    dict(label=NODE_NAME, address_list=[REMOTE_HOST]),
                ]
            )
            .local.remove_authkey(
                communication_list=[
                    dict(label=REMOTE_HOST, address_list=["OTHER-REMOTE"]),
                    dict(label=NODE_NAME, address_list=[REMOTE_HOST]),
                ],
            )
        )
        node_remove_remote(
            self.env_assist.get_env(),
            node_identifier=REMOTE_HOST,
            remove_resource=self.remove_resource,
            allow_remove_multiple_nodes=True,
        )
        self.env_assist.assert_reports(
            REPORTS
            .adapt(
                "pcmk_remote_disable_stop_started",
                node_list=[REMOTE_HOST, NODE_NAME]
            )
            .copy(
                "pcmk_remote_disable_success",
                "pcmk_remote_disable_success_remote_host",
                node=REMOTE_HOST,
            )
            .copy(
                "pcmk_remote_stop_success",
                "pcmk_remote_stop_success_remote_host",
                node=REMOTE_HOST,
            )
            .adapt(
                "authkey_remove_started",
                node_list=[REMOTE_HOST, NODE_NAME]
            )
            .copy(
                "authkey_remove_success",
                "authkey_remove_success_remote_host",
                node=REMOTE_HOST,
            )
            +
            self.multiple_result_reports.select("multiple_result_found_warn")
        )
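# NOTE (inferred from the fixture above): a report created with a force_code
# is an error the caller may override; ReportStore.as_warn(...) appears to
# register the warning-level twin that becomes the expected report once the
# matching allow_* flag (here allow_remove_multiple_nodes=True) is passed,
# as test_force demonstrates.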
class AuthkeyRemove(TestCase):
    def setUp(self):
        self.env_assist, self.config = get_env_tools(self)
        (self.config
            .runner.cib.load(resources=FIXTURE_RESOURCES)
            .local.destroy_pacemaker_remote(
                label=NODE_NAME, address_list=[REMOTE_HOST]
            )
        )
        self.remove_resource = mock.Mock()

    def test_fails_when_offline(self):
        self.config.local.remove_authkey(
            communication_list=[
                dict(label=NODE_NAME, address_list=[REMOTE_HOST])
            ],
            **FAIL_HTTP_KWARGS
        )
        self.env_assist.assert_raise_library_error(
            lambda: node_remove_remote(
                self.env_assist.get_env(),
                remove_resource=self.remove_resource,
            )
        )
        self.env_assist.assert_reports(
            REPORTS.remove("authkey_remove_success")
            +
            EXTRA_REPORTS.select("remove_file_connection_failed")
        )

    def test_fails_when_remotely_fails(self):
        self.config.local.remove_authkey(
            communication_list=[
                dict(label=NODE_NAME, address_list=[REMOTE_HOST])
            ],
            result={
                "code": "unexpected",
                "message": "Access denied",
            }
        )
        self.env_assist.assert_raise_library_error(
            lambda: node_remove_remote(
                self.env_assist.get_env(),
                remove_resource=self.remove_resource,
            )
        )
        self.env_assist.assert_reports(
            REPORTS.remove("authkey_remove_success")
            +
            EXTRA_REPORTS.select("authkey_remove_failed")
        )

    def test_forceable_when_remotely_fail(self):
        self.config.local.remove_authkey(
            communication_list=[
                dict(label=NODE_NAME, address_list=[REMOTE_HOST])
            ],
            result={
                "code": "unexpected",
                "message": "Access denied",
            }
        )
        node_remove_remote(
            self.env_assist.get_env(),
            remove_resource=self.remove_resource,
            allow_pacemaker_remote_service_fail=True
        )
        self.env_assist.assert_reports(
            REPORTS.remove("authkey_remove_success")
            +
            EXTRA_REPORTS.select("authkey_remove_failed_warn")
        )


class PcmkRemoteServiceDestroy(TestCase):
    def setUp(self):
        self.env_assist, self.config = get_env_tools(self)
        self.config.runner.cib.load(resources=FIXTURE_RESOURCES)
        self.remove_resource = mock.Mock()

    def test_fails_when_offline(self):
        (self.config
            .local.destroy_pacemaker_remote(
                label=NODE_NAME, address_list=[REMOTE_HOST], **FAIL_HTTP_KWARGS
            )
        )
        self.env_assist.assert_raise_library_error(
            lambda: node_remove_remote(
                self.env_assist.get_env(),
                remove_resource=self.remove_resource,
            )
        )
        self.env_assist.assert_reports(
            REPORTS[:"pcmk_remote_disable_success"]
            +
            EXTRA_REPORTS.select("manage_services_connection_failed")
        )

    def test_fails_when_remotely_fails(self):
        (self.config
            .local.destroy_pacemaker_remote(
                label=NODE_NAME, address_list=[REMOTE_HOST],
                result={
                    "code": "fail",
                    "message": "Action failed",
                }
            )
        )
        self.env_assist.assert_raise_library_error(
            lambda: node_remove_remote(
                self.env_assist.get_env(),
                remove_resource=self.remove_resource,
            )
        )
        self.env_assist.assert_reports(
            REPORTS[:"pcmk_remote_disable_success"]
            +
            EXTRA_REPORTS.select(
                "pcmk_remote_disable_failed",
                "pcmk_remote_stop_failed",
            )
        )

    def test_forceable_when_remotely_fail(self):
        (self.config
            .local.destroy_pacemaker_remote(
                label=NODE_NAME, address_list=[REMOTE_HOST],
                result={
                    "code": "fail",
                    "message": "Action failed",
                }
            )
            .local.remove_authkey(
                communication_list=[
                    dict(label=NODE_NAME, address_list=[REMOTE_HOST])
                ],
            )
        )
        node_remove_remote(
            self.env_assist.get_env(),
            remove_resource=self.remove_resource,
            allow_pacemaker_remote_service_fail=True
        )
        self.env_assist.assert_reports(
            REPORTS.remove(
                "pcmk_remote_disable_success",
                "pcmk_remote_stop_success",
            )
            +
            EXTRA_REPORTS.select(
                "pcmk_remote_disable_failed_warn",
                "pcmk_remote_stop_failed_warn",
            )
        )

pcs-0.9.164/pcs/lib/commands/test/resource/
pcs-0.9.164/pcs/lib/commands/test/resource/__init__.py
pcs-0.9.164/pcs/lib/commands/test/resource/test_bundle_create.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from functools import partial
from textwrap import dedent

from pcs.common import report_codes
from pcs.lib import reports
from pcs.lib.commands import resource
from pcs.lib.errors import (
    LibraryError,
    ReportItemSeverity as severities,
)
from pcs.test.tools import fixture
from pcs.test.tools.command_env import get_env_tools
from pcs.test.tools.misc import skip_unless_pacemaker_supports_bundle
from pcs.test.tools.pcs_unittest import TestCase

TIMEOUT = 10

get_env_tools = partial(
    get_env_tools,
    base_cib_filename="cib-empty-2.8.xml"
)

def simple_bundle_create(env, wait=TIMEOUT, disabled=False):
    return resource.bundle_create(
        env, "B1", "docker",
        container_options={"image": "pcs:test"},
        ensure_disabled=disabled,
        wait=wait,
    )

fixture_cib_pre = ""

fixture_resources_bundle_simple = """ """


class MinimalCreate(TestCase):
    def setUp(self):
        self.env_assist, self.config = get_env_tools(test_case=self)
        (self.config
            .runner.cib.load()
            .env.push_cib(resources=fixture_resources_bundle_simple)
        )

    def test_success(self):
        simple_bundle_create(self.env_assist.get_env(), wait=False)

    def test_errors(self):
        self.config.remove("env.push_cib")
        self.env_assist.assert_raise_library_error(
            lambda: resource.bundle_create(
                self.env_assist.get_env(), "B#1", "nonsense"
            ),
            [
                (
                    severities.ERROR,
                    report_codes.INVALID_ID,
                    {
                        "invalid_character": "#",
                        "id": "B#1",
                        "id_description": "bundle name",
                        "is_first_char": False,
                    },
                    None
                ),
                (
                    severities.ERROR,
                    report_codes.INVALID_OPTION_VALUE,
                    {
                        "option_name": "container type",
                        "option_value": "nonsense",
                        "allowed_values": ("docker", ),
                    },
                    None
                ),
            ]
        )

    def test_cib_upgrade(self):
        (self.config
            .runner.cib.load(
                name="load_cib_old_version",
                filename="cib-empty.xml",
                before="runner.cib.load"
            )
            .runner.cib.upgrade(before="runner.cib.load")
        )
        simple_bundle_create(self.env_assist.get_env(), wait=False)
        self.env_assist.assert_reports([
            (
                severities.INFO,
                report_codes.CIB_UPGRADE_SUCCESSFUL,
                {
                },
                None
            ),
        ])


class CreateDocker(TestCase):
    allowed_options = [
        "image",
        "masters",
        "network",
        "options",
        "replicas",
        "replicas-per-host",
        "run-command",
    ]

    def setUp(self):
        self.env_assist, self.config = get_env_tools(test_case=self)
        self.config.runner.cib.load(resources=fixture_cib_pre)

    def test_minimal(self):
        self.config.env.push_cib(resources=fixture_resources_bundle_simple)
        simple_bundle_create(self.env_assist.get_env(), wait=False)

    def test_all_options(self):
        self.config.env.push_cib(
            resources=""" """
        )
        resource.bundle_create(
            self.env_assist.get_env(), "B1", "docker",
            container_options={
                "image": "pcs:test",
                "masters": "0",
                "network": "extra network settings",
                "options": "extra options",
                "run-command": "/bin/true",
                "replicas": "4",
                "replicas-per-host": "2",
            }
        )
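    # NOTE: the validation rules exercised by the tests below (as far as this
    # suite shows): "image" is required and must be non-empty, "masters" must
    # be a non-negative integer, and "replicas" / "replicas-per-host" must be
    # positive integers. Unknown container options are rejected unless
    # force_options=True downgrades the report to a warning.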
    def test_options_errors(self):
        self.env_assist.assert_raise_library_error(
            lambda: resource.bundle_create(
                self.env_assist.get_env(), "B1", "docker",
                container_options={
                    "replicas-per-host": "0",
                    "replicas": "0",
                    "masters": "-1",
                },
                force_options=True
            ),
            [
                (
                    severities.ERROR,
                    report_codes.REQUIRED_OPTION_IS_MISSING,
                    {
                        "option_type": "container",
                        "option_names": ["image", ],
                    },
                    None
                ),
                (
                    severities.ERROR,
                    report_codes.INVALID_OPTION_VALUE,
                    {
                        "option_name": "masters",
                        "option_value": "-1",
                        "allowed_values": "a non-negative integer",
                    },
                    None
                ),
                (
                    severities.ERROR,
                    report_codes.INVALID_OPTION_VALUE,
                    {
                        "option_name": "replicas",
                        "option_value": "0",
                        "allowed_values": "a positive integer",
                    },
                    None
                ),
                (
                    severities.ERROR,
                    report_codes.INVALID_OPTION_VALUE,
                    {
                        "option_name": "replicas-per-host",
                        "option_value": "0",
                        "allowed_values": "a positive integer",
                    },
                    None
                ),
            ]
        )

    def test_empty_image(self):
        self.env_assist.assert_raise_library_error(
            lambda: resource.bundle_create(
                self.env_assist.get_env(), "B1", "docker",
                container_options={
                    "image": "",
                },
                force_options=True
            ),
            [
                (
                    severities.ERROR,
                    report_codes.INVALID_OPTION_VALUE,
                    {
                        "option_name": "image",
                        "option_value": "",
                        "allowed_values": "image name",
                    },
                    None
                ),
            ]
        )

    def test_unknown_option(self):
        self.env_assist.assert_raise_library_error(
            lambda: resource.bundle_create(
                self.env_assist.get_env(), "B1", "docker",
                container_options={
                    "image": "pcs:test",
                    "extra": "option",
                }
            ),
            [
                (
                    severities.ERROR,
                    report_codes.INVALID_OPTIONS,
                    {
                        "option_names": ["extra", ],
                        "option_type": "container",
                        "allowed": self.allowed_options,
                        "allowed_patterns": [],
                    },
                    report_codes.FORCE_OPTIONS
                ),
            ]
        )

    def test_unknown_option_forced(self):
        self.config.env.push_cib(
            resources=""" """
        )
        resource.bundle_create(
            self.env_assist.get_env(), "B1", "docker",
            container_options={
                "image": "pcs:test",
                "extra": "option",
            },
            force_options=True
        )
        self.env_assist.assert_reports([
            (
                severities.WARNING,
                report_codes.INVALID_OPTIONS,
                {
                    "option_names": ["extra", ],
                    "option_type": "container",
                    "allowed": self.allowed_options,
                    "allowed_patterns": [],
                },
                None
            ),
        ])
"network", "allowed": self.allowed_options, "allowed_patterns": [], }, report_codes.FORCE_OPTIONS ), ] ) def test_options_forced(self): self.config.env.push_cib( resources=""" """ ) resource.bundle_create( self.env_assist.get_env(), "B1", "docker", { "image": "pcs:test", }, network_options={ "host-netmask": "abc", "extra": "option", }, force_options=True ) self.env_assist.assert_reports([ ( severities.WARNING, report_codes.INVALID_OPTION_VALUE, { "option_name": "host-netmask", "option_value": "abc", "allowed_values": "a number of bits of the mask (1-32)", }, None ), ( severities.WARNING, report_codes.INVALID_OPTIONS, { "option_names": ["extra", ], "option_type": "network", "allowed": self.allowed_options, "allowed_patterns": [], }, None ), ]) class CreateWithPortMap(TestCase): allowed_options = [ "id", "internal-port", "port", "range", ] def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) self.config.runner.cib.load(resources=fixture_cib_pre) def test_no_options(self): self.config.env.push_cib(resources=fixture_resources_bundle_simple) resource.bundle_create( self.env_assist.get_env(), "B1", "docker", {"image": "pcs:test", }, port_map=[] ) def test_several_mappings_and_handle_their_ids(self): self.config.env.push_cib( resources=""" """ ) resource.bundle_create( self.env_assist.get_env(), "B1", "docker", {"image": "pcs:test", }, port_map=[ { "port": "1001", }, { # use an autogenerated id of the previous item "id": "B1-port-map-1001", "port": "2000", "internal-port": "2002", }, { "range": "3000-3300", }, ] ) def test_options_errors(self): self.env_assist.assert_raise_library_error( lambda: resource.bundle_create( self.env_assist.get_env(), "B1", "docker", {"image": "pcs:test", }, port_map=[ { }, { "id": "not#valid", }, { "internal-port": "1000", }, { "port": "abc", }, { "port": "2000", "range": "3000-4000", "internal-port": "def", }, ], force_options=True ), [ # first ( severities.ERROR, report_codes.REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING, { "option_type": "port-map", "option_names": ["port", "range"], }, None ), # second ( severities.ERROR, report_codes.INVALID_ID, { "invalid_character": "#", "id": "not#valid", "id_description": "port-map id", "is_first_char": False, }, None ), ( severities.ERROR, report_codes.REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING, { "option_type": "port-map", "option_names": ["port", "range"], }, None ), # third ( severities.ERROR, report_codes.PREREQUISITE_OPTION_IS_MISSING, { "option_type": "port-map", "option_name": "internal-port", "prerequisite_type": "port-map", "prerequisite_name": "port", }, None ), ( severities.ERROR, report_codes.REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING, { "option_type": "port-map", "option_names": ["port", "range"], }, None ), # fourth ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "port", "option_value": "abc", "allowed_values": "a port number (1-65535)", }, None ), # fifth ( severities.ERROR, report_codes.MUTUALLY_EXCLUSIVE_OPTIONS, { "option_names": ["port", "range", ], "option_type": "port-map", }, None ), ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "internal-port", "option_value": "def", "allowed_values": "a port number (1-65535)", }, None ), ] ) def test_forceable_options_errors(self): self.env_assist.assert_raise_library_error( lambda: resource.bundle_create( self.env_assist.get_env(), "B1", "docker", {"image": "pcs:test", }, port_map=[ { "range": "3000", "extra": "option", }, ] ), [ ( severities.ERROR, report_codes.INVALID_OPTIONS, { "option_names": 
["extra", ], "option_type": "port-map", "allowed": self.allowed_options, "allowed_patterns": [], }, report_codes.FORCE_OPTIONS ), ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "range", "option_value": "3000", "allowed_values": "port-port", }, report_codes.FORCE_OPTIONS ), ] ) def test_forceable_options_errors_forced(self): self.config.env.push_cib( resources=""" """, ) resource.bundle_create( self.env_assist.get_env(), "B1", "docker", { "image": "pcs:test", }, port_map=[ { "range": "3000", "extra": "option", }, ], force_options=True ) self.env_assist.assert_reports([ ( severities.WARNING, report_codes.INVALID_OPTIONS, { "option_names": ["extra", ], "option_type": "port-map", "allowed": self.allowed_options, "allowed_patterns": [], }, None ), ( severities.WARNING, report_codes.INVALID_OPTION_VALUE, { "option_name": "range", "option_value": "3000", "allowed_values": "port-port", }, None ), ]) class CreateWithStorageMap(TestCase): allowed_options = [ "id", "options", "source-dir", "source-dir-root", "target-dir", ] def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) self.config.runner.cib.load(resources=fixture_cib_pre) def test_several_mappings_and_handle_their_ids(self): self.config.env.push_cib( resources=""" """ ) resource.bundle_create( self.env_assist.get_env(), "B1", "docker", {"image": "pcs:test", }, storage_map=[ { "source-dir": "/tmp/docker1a", "target-dir": "/tmp/docker1b", }, { # use an autogenerated id of the previous item "id": "B1-storage-map", "source-dir": "/tmp/docker2a", "target-dir": "/tmp/docker2b", "options": "extra options 1" }, { "source-dir-root": "/tmp/docker3a", "target-dir": "/tmp/docker3b", }, { # use an autogenerated id of the previous item "id": "B1-storage-map-2", "source-dir-root": "/tmp/docker4a", "target-dir": "/tmp/docker4b", "options": "extra options 2" }, ] ) def test_options_errors(self): self.env_assist.assert_raise_library_error( lambda: resource.bundle_create( self.env_assist.get_env(), "B1", "docker", {"image": "pcs:test", }, storage_map=[ { }, { "id": "not#valid", "source-dir": "/tmp/docker1a", "source-dir-root": "/tmp/docker1b", "target-dir": "/tmp/docker1c", }, ], force_options=True ), [ # first ( severities.ERROR, report_codes.REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING, { "option_type": "storage-map", "option_names": ["source-dir", "source-dir-root"], }, None ), ( severities.ERROR, report_codes.REQUIRED_OPTION_IS_MISSING, { "option_type": "storage-map", "option_names": ["target-dir", ], }, None ), # second ( severities.ERROR, report_codes.INVALID_ID, { "invalid_character": "#", "id": "not#valid", "id_description": "storage-map id", "is_first_char": False, }, None ), ( severities.ERROR, report_codes.MUTUALLY_EXCLUSIVE_OPTIONS, { "option_type": "storage-map", "option_names": ["source-dir", "source-dir-root"], }, None ), ] ) def test_forceable_options_errors(self): self.env_assist.assert_raise_library_error( lambda: resource.bundle_create( self.env_assist.get_env(), "B1", "docker", {"image": "pcs:test", }, storage_map=[ { "source-dir": "/tmp/docker1a", "target-dir": "/tmp/docker1b", "extra": "option", }, ] ), [ ( severities.ERROR, report_codes.INVALID_OPTIONS, { "option_names": ["extra", ], "option_type": "storage-map", "allowed": self.allowed_options, "allowed_patterns": [], }, report_codes.FORCE_OPTIONS ), ] ) def test_forceable_options_errors_forced(self): self.config.env.push_cib( resources=""" """, ) resource.bundle_create( self.env_assist.get_env(), "B1", "docker", { "image": "pcs:test", }, 
    def test_forceable_options_errors_forced(self):
        self.config.env.push_cib(
            resources=""" """,
        )
        resource.bundle_create(
            self.env_assist.get_env(), "B1", "docker",
            {
                "image": "pcs:test",
            },
            storage_map=[
                {
                    "source-dir": "/tmp/docker1a",
                    "target-dir": "/tmp/docker1b",
                    "extra": "option",
                },
            ],
            force_options=True
        )
        self.env_assist.assert_reports(
            [
                (
                    severities.WARNING,
                    report_codes.INVALID_OPTIONS,
                    {
                        "option_names": ["extra", ],
                        "option_type": "storage-map",
                        "allowed": self.allowed_options,
                        "allowed_patterns": [],
                    },
                    None
                ),
            ]
        )


class CreateWithMeta(TestCase):
    def setUp(self):
        self.env_assist, self.config = get_env_tools(test_case=self)
        self.config.runner.cib.load(resources=fixture_cib_pre)

    def test_success(self):
        self.config.env.push_cib(
            resources=""" """
        )
        resource.bundle_create(
            self.env_assist.get_env(), "B1", "docker",
            container_options={"image": "pcs:test", },
            meta_attributes={
                "target-role": "Stopped",
                "is-managed": "false",
            }
        )

    def test_disabled(self):
        self.config.env.push_cib(
            resources=""" """
        )
        resource.bundle_create(
            self.env_assist.get_env(), "B1", "docker",
            container_options={"image": "pcs:test", },
            ensure_disabled=True
        )


class CreateWithAllOptions(TestCase):
    def setUp(self):
        self.env_assist, self.config = get_env_tools(test_case=self)
        self.config.runner.cib.load(resources=fixture_cib_pre)

    def test_success(self):
        self.config.env.push_cib(
            resources=""" """
        )
        resource.bundle_create(
            self.env_assist.get_env(), "B1", "docker",
            container_options={
                "image": "pcs:test",
                "masters": "0",
                "network": "extra network settings",
                "options": "extra options",
                "run-command": "/bin/true",
                "replicas": "4",
                "replicas-per-host": "2",
            },
            network_options={
                "control-port": "12345",
                "host-interface": "eth0",
                "host-netmask": "24",
                "ip-range-start": "192.168.100.200",
            },
            port_map=[
                {
                    "port": "1001",
                },
                {
                    # use an autogenerated id of the previous item
                    "id": "B1-port-map-1001",
                    "port": "2000",
                    "internal-port": "2002",
                },
                {
                    "range": "3000-3300",
                },
            ],
            storage_map=[
                {
                    "source-dir": "/tmp/docker1a",
                    "target-dir": "/tmp/docker1b",
                },
                {
                    # use an autogenerated id of the previous item
                    "id": "B1-storage-map",
                    "source-dir": "/tmp/docker2a",
                    "target-dir": "/tmp/docker2b",
                    "options": "extra options 1"
                },
                {
                    "source-dir-root": "/tmp/docker3a",
                    "target-dir": "/tmp/docker3b",
                },
                {
                    # use an autogenerated id of the previous item
                    "id": "B1-port-map-1001-1",
                    "source-dir-root": "/tmp/docker4a",
                    "target-dir": "/tmp/docker4b",
                    "options": "extra options 2"
                },
            ]
        )


class Wait(TestCase):
    fixture_status_running = """ """

    fixture_status_not_running = """ """

    fixture_resources_bundle_simple_disabled = """ """

    def setUp(self):
        self.env_assist, self.config = get_env_tools(test_case=self)
        (self.config
            .runner.pcmk.can_wait()
            .runner.cib.load(resources=fixture_cib_pre)
        )

    def test_wait_fail(self):
        wait_error_message = dedent(
            """\
            Pending actions:
                    Action 12: B1-node2-stop on node2
            Error performing operation: Timer expired
            """
        ).strip()
        self.config.env.push_cib(
            resources=fixture_resources_bundle_simple,
            wait=TIMEOUT,
            exception=LibraryError(
                reports.wait_for_idle_timed_out(wait_error_message)
            )
        )
        self.env_assist.assert_raise_library_error(
            lambda: simple_bundle_create(self.env_assist.get_env()),
            [
                fixture.report_wait_for_idle_timed_out(wait_error_message)
            ],
            expected_in_processor=False
        )

    @skip_unless_pacemaker_supports_bundle
    def test_wait_ok_run_ok(self):
        (self.config
            .env.push_cib(
                resources=fixture_resources_bundle_simple,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(resources=self.fixture_status_running)
        )
        simple_bundle_create(self.env_assist.get_env())
        self.env_assist.assert_reports([
            fixture.report_resource_running(
                "B1", {"Started": ["node1", "node2"]}
            ),
        ])
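    # NOTE: the remaining wait tests need a pacemaker that supports bundles
    # (hence the skip_unless_pacemaker_supports_bundle decorator), since they
    # load cluster state via runner.pcmk.load_state() and check whether the
    # bundle actually runs after the wait.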
    @skip_unless_pacemaker_supports_bundle
    def test_wait_ok_run_fail(self):
        (self.config
            .env.push_cib(
                resources=fixture_resources_bundle_simple,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(resources=self.fixture_status_not_running)
        )
        self.env_assist.assert_raise_library_error(
            lambda: simple_bundle_create(self.env_assist.get_env()),
            [
                fixture.report_resource_not_running("B1", severities.ERROR),
            ]
        )

    @skip_unless_pacemaker_supports_bundle
    def test_disabled_wait_ok_run_ok(self):
        (self.config
            .env.push_cib(
                resources=self.fixture_resources_bundle_simple_disabled,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(resources=self.fixture_status_not_running)
        )
        simple_bundle_create(self.env_assist.get_env(), disabled=True)
        self.env_assist.assert_reports([
            (
                severities.INFO,
                report_codes.RESOURCE_DOES_NOT_RUN,
                {
                    "resource_id": "B1"
                },
                None
            )
        ])

    @skip_unless_pacemaker_supports_bundle
    def test_disabled_wait_ok_run_fail(self):
        (self.config
            .env.push_cib(
                resources=self.fixture_resources_bundle_simple_disabled,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(resources=self.fixture_status_running)
        )
        self.env_assist.assert_raise_library_error(
            lambda: simple_bundle_create(
                self.env_assist.get_env(), disabled=True
            ),
            [
                fixture.report_resource_running(
                    "B1", {"Started": ["node1", "node2"]}, severities.ERROR
                )
            ]
        )

pcs-0.9.164/pcs/lib/commands/test/resource/test_bundle_update.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from functools import partial
from textwrap import dedent

from pcs.common import report_codes
from pcs.lib import reports
from pcs.lib.commands import resource
from pcs.lib.errors import (
    LibraryError,
    ReportItemSeverity as severities,
)
from pcs.test.tools import fixture
from pcs.test.tools.command_env import get_env_tools
from pcs.test.tools.misc import skip_unless_pacemaker_supports_bundle
from pcs.test.tools.pcs_unittest import TestCase

TIMEOUT = 10

get_env_tools = partial(
    get_env_tools,
    base_cib_filename="cib-empty-2.8.xml"
)

def simple_bundle_update(env, wait=TIMEOUT):
    return resource.bundle_update(env, "B1", {"image": "new:image"}, wait=wait)

fixture_resources_minimal = """ """


class Basics(TestCase):
    def setUp(self):
        self.env_assist, self.config = get_env_tools(test_case=self)

    def test_nonexisting_id(self):
        self.config.runner.cib.load()
        self.env_assist.assert_raise_library_error(
            lambda: resource.bundle_update(self.env_assist.get_env(), "B1"),
            [
                (
                    severities.ERROR,
                    report_codes.ID_NOT_FOUND,
                    {
                        "id": "B1",
                        "expected_types": ["bundle"],
                        "context_type": "resources",
                        "context_id": "",
                    },
                    None
                ),
            ],
            expected_in_processor=False
        )

    def test_not_bundle_id(self):
        self.config.runner.cib.load(
            resources=""" """
        )
        self.env_assist.assert_raise_library_error(
            lambda: resource.bundle_update(self.env_assist.get_env(), "B1"),
            [
                (
                    severities.ERROR,
                    report_codes.ID_BELONGS_TO_UNEXPECTED_TYPE,
                    {
                        "id": "B1",
                        "expected_types": ["bundle"],
                        "current_type": "primitive",
                    },
                    None
                ),
            ],
            expected_in_processor=False
        )

    def test_no_updates(self):
        (self.config
            .runner.cib.load(
                resources=""" """
            )
            .env.push_cib()
        )
        resource.bundle_update(self.env_assist.get_env(), "B1")

    def test_cib_upgrade(self):
        (self.config
            .runner.cib.load(
                filename="cib-empty.xml",
                name="load_cib_old_version"
            )
            .runner.cib.upgrade()
            .runner.cib.load(
                resources=""" """
            )
            .env.push_cib()
        )
        resource.bundle_update(self.env_assist.get_env(), "B1")
        self.env_assist.assert_reports([
            (
                severities.INFO,
                report_codes.CIB_UPGRADE_SUCCESSFUL,
                {
                },
                None
            ),
        ])
"masters", "network", "options", "replicas", "replicas-per-host", "run-command", ] fixture_cib_extra_option = """ """ def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_success(self): (self.config .runner.cib.load( resources=""" """ ) .env.push_cib( resources=""" """ ) ) resource.bundle_update( self.env_assist.get_env(), "B1", container_options={ "options": "test", "replicas": "3", "masters": "", } ) def test_cannot_remove_required_options(self): self.config.runner.cib.load(resources=fixture_resources_minimal) self.env_assist.assert_raise_library_error( lambda: resource.bundle_update( self.env_assist.get_env(), "B1", container_options={ "image": "", "options": "test", }, force_options=True ), [ ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "image", "option_value": "", "allowed_values": "image name", }, None ), ] ) def test_unknow_option(self): self.config.runner.cib.load(resources=fixture_resources_minimal) self.env_assist.assert_raise_library_error( lambda: resource.bundle_update( self.env_assist.get_env(), "B1", container_options={ "extra": "option", } ), [ ( severities.ERROR, report_codes.INVALID_OPTIONS, { "option_names": ["extra", ], "option_type": "container", "allowed": self.allowed_options, }, report_codes.FORCE_OPTIONS ), ] ) def test_unknow_option_forced(self): (self.config .runner.cib.load(resources=fixture_resources_minimal) .env.push_cib(resources=self.fixture_cib_extra_option) ) resource.bundle_update( self.env_assist.get_env(), "B1", container_options={ "extra": "option", }, force_options=True ) self.env_assist.assert_reports([ ( severities.WARNING, report_codes.INVALID_OPTIONS, { "option_names": ["extra", ], "option_type": "container", "allowed": self.allowed_options, }, None ), ]) def test_unknown_option_remove(self): (self.config .runner.cib.load(resources=self.fixture_cib_extra_option) .env.push_cib(resources=fixture_resources_minimal) ) resource.bundle_update( self.env_assist.get_env(), "B1", container_options={ "extra": "", }, force_options=True ) class Network(TestCase): allowed_options = [ "control-port", "host-interface", "host-netmask", "ip-range-start", ] fixture_cib_interface = """ """ fixture_cib_extra_option = """ """ def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_add_network(self): (self.config .runner.cib.load(resources=fixture_resources_minimal) .env.push_cib(resources=self.fixture_cib_interface) ) resource.bundle_update( self.env_assist.get_env(), "B1", network_options={ "host-interface": "eth0", } ) def test_remove_network(self): (self.config .runner.cib.load(resources=self.fixture_cib_interface) .env.push_cib(resources=fixture_resources_minimal) ) resource.bundle_update( self.env_assist.get_env(), "B1", network_options={ "host-interface": "", } ) def test_keep_network_when_port_map_set(self): (self.config .runner.cib.load( resources=""" """ ) .env.push_cib( resources=""" """ ) ) resource.bundle_update( self.env_assist.get_env(), "B1", network_options={ "host-interface": "", } ) def test_success(self): (self.config .runner.cib.load( resources=""" """ ) .env.push_cib( resources=""" """ ) ) resource.bundle_update( self.env_assist.get_env(), "B1", network_options={ "control-port": "", "host-netmask": "24", } ) def test_unknow_option(self): (self.config.runner.cib.load(resources=self.fixture_cib_interface)) self.env_assist.assert_raise_library_error( lambda: resource.bundle_update( self.env_assist.get_env(), "B1", network_options={ "extra": "option", } ), [ ( 
    def test_unknown_option(self):
        (self.config.runner.cib.load(resources=self.fixture_cib_interface))
        self.env_assist.assert_raise_library_error(
            lambda: resource.bundle_update(
                self.env_assist.get_env(), "B1",
                network_options={
                    "extra": "option",
                }
            ),
            [
                (
                    severities.ERROR,
                    report_codes.INVALID_OPTIONS,
                    {
                        "option_names": ["extra", ],
                        "option_type": "network",
                        "allowed": self.allowed_options,
                    },
                    report_codes.FORCE_OPTIONS
                ),
            ]
        )

    def test_unknown_option_forced(self):
        (self.config
            .runner.cib.load(resources=self.fixture_cib_interface)
            .env.push_cib(resources=self.fixture_cib_extra_option)
        )
        resource.bundle_update(
            self.env_assist.get_env(), "B1",
            network_options={
                "extra": "option",
            },
            force_options=True
        )
        self.env_assist.assert_reports(
            [
                (
                    severities.WARNING,
                    report_codes.INVALID_OPTIONS,
                    {
                        "option_names": ["extra", ],
                        "option_type": "network",
                        "allowed": self.allowed_options,
                    },
                    None
                ),
            ]
        )

    def test_unknown_option_remove(self):
        (self.config
            .runner.cib.load(resources=self.fixture_cib_extra_option)
            .env.push_cib(resources=self.fixture_cib_interface)
        )
        resource.bundle_update(
            self.env_assist.get_env(), "B1",
            network_options={
                "extra": "",
            }
        )


class PortMap(TestCase):
    allowed_options = [
        "id",
        "port",
        "internal-port",
        "range",
    ]

    fixture_cib_port_80 = """ """

    fixture_cib_port_80_8080 = """ """

    def setUp(self):
        self.env_assist, self.config = get_env_tools(test_case=self)

    def test_add_network(self):
        (self.config
            .runner.cib.load(resources=fixture_resources_minimal)
            .env.push_cib(resources=self.fixture_cib_port_80)
        )
        resource.bundle_update(
            self.env_assist.get_env(), "B1",
            port_map_add=[
                {
                    "port": "80",
                }
            ]
        )

    def test_remove_network(self):
        (self.config
            .runner.cib.load(resources=self.fixture_cib_port_80)
            .env.push_cib(resources=fixture_resources_minimal)
        )
        resource.bundle_update(
            self.env_assist.get_env(), "B1",
            port_map_remove=[
                "B1-port-map-80",
            ]
        )

    def test_keep_network_when_options_set(self):
        (self.config
            .runner.cib.load(
                resources=""" """
            )
            .env.push_cib(
                resources=""" """
            )
        )
        resource.bundle_update(
            self.env_assist.get_env(), "B1",
            port_map_remove=[
                "B1-port-map-80",
            ]
        )

    def test_add(self):
        (self.config
            .runner.cib.load(resources=self.fixture_cib_port_80)
            .env.push_cib(resources=self.fixture_cib_port_80_8080)
        )
        resource.bundle_update(
            self.env_assist.get_env(), "B1",
            port_map_add=[
                {
                    "port": "8080",
                }
            ]
        )

    def test_remove(self):
        (self.config
            .runner.cib.load(resources=self.fixture_cib_port_80_8080)
            .env.push_cib(resources=self.fixture_cib_port_80)
        )
        resource.bundle_update(
            self.env_assist.get_env(), "B1",
            port_map_remove=[
                "B1-port-map-8080",
            ]
        )

    def test_remove_missing(self):
        self.config.runner.cib.load(resources=self.fixture_cib_port_80)
        self.env_assist.assert_raise_library_error(
            lambda: resource.bundle_update(
                self.env_assist.get_env(), "B1",
                port_map_remove=[
                    "B1-port-map-8080",
                ]
            ),
            [
                (
                    severities.ERROR,
                    report_codes.ID_NOT_FOUND,
                    {
                        "id": "B1-port-map-8080",
                        "expected_types": ["port-map"],
                        "context_type": "bundle",
                        "context_id": "B1",
                    },
                    None
                ),
            ]
        )


class StorageMap(TestCase):
    allowed_options = [
        "id",
        "options",
        "source-dir",
        "source-dir-root",
        "target-dir",
    ]

    fixture_cib_storage_1 = """ """

    fixture_cib_storage_1_2 = """ """

    def setUp(self):
        self.env_assist, self.config = get_env_tools(test_case=self)

    def test_add_storage(self):
        (self.config
            .runner.cib.load(resources=fixture_resources_minimal)
            .env.push_cib(resources=self.fixture_cib_storage_1)
        )
        resource.bundle_update(
            self.env_assist.get_env(), "B1",
            storage_map_add=[
                {
                    "source-dir": "/tmp/docker1a",
                    "target-dir": "/tmp/docker1b",
                }
            ]
        )

    def test_remove_storage(self):
        (self.config
            .runner.cib.load(resources=self.fixture_cib_storage_1)
            .env.push_cib(resources=fixture_resources_minimal)
        )
        resource.bundle_update(
            self.env_assist.get_env(), "B1",
            storage_map_remove=[
                "B1-storage-map",
            ]
        )
    def test_add(self):
        (self.config
            .runner.cib.load(resources=self.fixture_cib_storage_1)
            .env.push_cib(resources=self.fixture_cib_storage_1_2)
        )
        resource.bundle_update(
            self.env_assist.get_env(), "B1",
            storage_map_add=[
                {
                    "source-dir": "/tmp/docker2a",
                    "target-dir": "/tmp/docker2b",
                }
            ]
        )

    def test_remove(self):
        (self.config
            .runner.cib.load(resources=self.fixture_cib_storage_1_2)
            .env.push_cib(resources=self.fixture_cib_storage_1)
        )
        resource.bundle_update(
            self.env_assist.get_env(), "B1",
            storage_map_remove=[
                "B1-storage-map-1",
            ]
        )

    def test_remove_missing(self):
        (self.config
            .runner.cib.load(resources=self.fixture_cib_storage_1)
        )
        self.env_assist.assert_raise_library_error(
            lambda: resource.bundle_update(
                self.env_assist.get_env(), "B1",
                storage_map_remove=[
                    "B1-storage-map-1",
                ]
            ),
            [
                (
                    severities.ERROR,
                    report_codes.ID_NOT_FOUND,
                    {
                        "id": "B1-storage-map-1",
                        "expected_types": ["storage-map"],
                        "context_type": "bundle",
                        "context_id": "B1",
                    },
                    None
                )
            ]
        )


class Meta(TestCase):
    fixture_no_meta = """ """

    fixture_meta_stopped = """ """

    def setUp(self):
        self.env_assist, self.config = get_env_tools(test_case=self)

    def test_add_meta_element(self):
        (self.config
            .runner.cib.load(resources=self.fixture_no_meta)
            .env.push_cib(resources=self.fixture_meta_stopped)
        )
        resource.bundle_update(
            self.env_assist.get_env(), "B1",
            meta_attributes={
                "target-role": "Stopped",
            }
        )

    def test_remove_meta_element(self):
        (self.config
            .runner.cib.load(resources=self.fixture_meta_stopped)
            .env.push_cib(resources=self.fixture_no_meta)
        )
        resource.bundle_update(
            self.env_assist.get_env(), "B1",
            meta_attributes={
                "target-role": "",
            }
        )

    def test_change_meta(self):
        fixture_cib_pre = """ """
        fixture_cib_post = """ """
        (self.config
            .runner.cib.load(resources=fixture_cib_pre)
            .env.push_cib(resources=fixture_cib_post)
        )
        resource.bundle_update(
            self.env_assist.get_env(), "B1",
            meta_attributes={
                "priority": "10",
                "resource-stickiness": "100",
                "is-managed": "",
            }
        )


class Wait(TestCase):
    fixture_status_running = """ """

    fixture_status_not_running = """ """

    fixture_cib_pre = """ """

    fixture_resources_bundle_simple = """ """

    def setUp(self):
        self.env_assist, self.config = get_env_tools(test_case=self)
        (self.config
            .runner.pcmk.can_wait()
            .runner.cib.load(resources=self.fixture_cib_pre)
            .env.push_cib(
                resources=self.fixture_resources_bundle_simple,
                wait=TIMEOUT
            )
        )

    def test_wait_fail(self):
        wait_error_message = dedent(
            """\
            Pending actions:
                    Action 12: B1-node2-stop on node2
            Error performing operation: Timer expired
            """
        ).strip()
        self.config.env.push_cib(
            resources=self.fixture_resources_bundle_simple,
            wait=TIMEOUT,
            exception=LibraryError(
                reports.wait_for_idle_timed_out(wait_error_message)
            ),
            instead="env.push_cib"
        )
        self.env_assist.assert_raise_library_error(
            lambda: simple_bundle_update(self.env_assist.get_env()),
            [
                fixture.report_wait_for_idle_timed_out(wait_error_message)
            ],
            expected_in_processor=False
        )

    @skip_unless_pacemaker_supports_bundle
    def test_wait_ok_running(self):
        (self.config
            .runner.pcmk.load_state(resources=self.fixture_status_running)
        )
        simple_bundle_update(self.env_assist.get_env())
        self.env_assist.assert_reports([
            fixture.report_resource_running(
                "B1", {"Started": ["node1", "node2"]}
            ),
        ])

    @skip_unless_pacemaker_supports_bundle
    def test_wait_ok_not_running(self):
        (self.config
            .runner.pcmk.load_state(resources=self.fixture_status_not_running)
        )
        simple_bundle_update(self.env_assist.get_env())
        self.env_assist.assert_reports([
            fixture.report_resource_not_running("B1", severities.INFO),
        ])
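# NOTE (semantics inferred from the tests in this file): bundle_update treats
# an empty string value as "remove this option" -- e.g.
# container_options={"masters": ""} or network_options={"host-interface": ""}
# drop the attribute, and the enclosing network element is kept only while a
# port mapping still needs it. Port and storage mappings are added via
# port_map_add / storage_map_add and removed by id via the *_remove lists.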
pcs-0.9.164/pcs/lib/commands/test/resource/test_resource_create.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.common import report_codes
from pcs.lib import reports
from pcs.lib.commands import resource
from pcs.lib.errors import LibraryError
from pcs.test.tools import fixture
from pcs.test.tools.command_env import get_env_tools
from pcs.test.tools.misc import (
    outdent,
    skip_unless_pacemaker_supports_bundle,
)
from pcs.test.tools.pcs_unittest import TestCase

TIMEOUT = 10

def create(
    env, wait=False, disabled=False, meta_attributes=None, operations=None,
    allow_invalid_operation=False
):
    return resource.create(
        env, "A", "ocf:heartbeat:Dummy",
        operations=operations if operations else [],
        meta_attributes=meta_attributes if meta_attributes else {},
        instance_attributes={},
        wait=wait,
        ensure_disabled=disabled,
        allow_invalid_operation=allow_invalid_operation
    )

def create_master(
    env, wait=TIMEOUT, disabled=False, meta_attributes=None,
    master_meta_options=None
):
    return resource.create_as_master(
        env, "A", "ocf:heartbeat:Dummy",
        operations=[],
        meta_attributes=meta_attributes if meta_attributes else {},
        instance_attributes={},
        clone_meta_options=master_meta_options if master_meta_options else {},
        wait=wait,
        ensure_disabled=disabled
    )

def create_group(env, wait=TIMEOUT, disabled=False, meta_attributes=None):
    return resource.create_in_group(
        env, "A", "ocf:heartbeat:Dummy", "G",
        operations=[],
        meta_attributes=meta_attributes if meta_attributes else {},
        instance_attributes={},
        wait=wait,
        ensure_disabled=disabled
    )

def create_clone(
    env, wait=TIMEOUT, disabled=False, meta_attributes=None, clone_options=None
):
    return resource.create_as_clone(
        env, "A", "ocf:heartbeat:Dummy",
        operations=[],
        meta_attributes=meta_attributes if meta_attributes else {},
        instance_attributes={},
        clone_meta_options=clone_options if clone_options else {},
        wait=wait,
        ensure_disabled=disabled
    )

def create_bundle(env, wait=TIMEOUT, disabled=False, meta_attributes=None):
    return resource.create_into_bundle(
        env, "A", "ocf:heartbeat:Dummy",
        operations=[],
        meta_attributes=meta_attributes if meta_attributes else {},
        instance_attributes={},
        bundle_id="B",
        wait=wait,
        ensure_disabled=disabled
    )

wait_error_message = outdent(
    """\
    Pending actions:
            Action 39: stonith-vm-rhel72-1-reboot on vm-rhel72-1
    Error performing operation: Timer expired
    """
).strip()

fixture_cib_resources_xml_primitive_simplest = """ """

fixture_cib_resources_xml_simplest_disabled = """ """

fixture_cib_resources_xml_master_simplest = """ """

fixture_cib_resources_xml_master_simplest_disabled = """ """

fixture_cib_resources_xml_master_simplest_disabled_meta_after = """ """

fixture_cib_resources_xml_group_simplest = """ """

fixture_cib_resources_xml_group_simplest_disabled = """ """

fixture_cib_resources_xml_clone_simplest = """ """

fixture_cib_resources_xml_clone_simplest_disabled = """ """


class Create(TestCase):
    fixture_sanitized_operation = """ """

    def setUp(self):
        self.env_assist, self.config = get_env_tools(test_case=self)
        (self.config
            .runner.pcmk.load_agent()
            .runner.cib.load()
        )

    def test_simplest_resource(self):
        self.config.env.push_cib(
            resources=fixture_cib_resources_xml_primitive_simplest
        )
        return create(self.env_assist.get_env())

    def test_resource_with_operation(self):
        self.config.env.push_cib(
            resources=""" """
        )
        create(
            self.env_assist.get_env(),
            operations=[
                {"name": "monitor", "timeout": "10s", "interval": "10"}
            ]
        )
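    # NOTE: the two tests below cover sanitization of operation ids. Operation
    # ids appear to be derived from the resource id and the operation name, so
    # a name containing characters invalid in an XML id ("moni*tor", or an
    # agent-declared action with odd characters) must be sanitized before it
    # is written to the CIB; fixture_sanitized_operation holds the expected
    # result.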
    def test_sanitize_operation_id_from_agent(self):
        self.config.runner.pcmk.load_agent(
            instead="runner.pcmk.load_agent",
            agent_filename="resource_agent_ocf_heartbeat_dummy_insane_action.xml"
        )
        self.config.env.push_cib(
            resources=self.fixture_sanitized_operation
        )
        return create(self.env_assist.get_env())

    def test_sanitize_operation_id_from_user(self):
        self.config.env.push_cib(
            resources=self.fixture_sanitized_operation
        )
        create(
            self.env_assist.get_env(),
            operations=[
                {"name": "moni*tor", "timeout": "20", "interval": "20"}
            ],
            allow_invalid_operation=True
        )
        self.env_assist.assert_reports([
            fixture.warn(
                report_codes.INVALID_OPTION_VALUE,
                option_name="operation name",
                option_value="moni*tor",
                allowed_values=[
                    "start", "stop", "monitor", "reload", "migrate_to",
                    "migrate_from", "meta-data", "validate-all"
                ]
            ),
        ])


class CreateWait(TestCase):
    def setUp(self):
        self.env_assist, self.config = get_env_tools(test_case=self)
        (self.config
            .runner.pcmk.load_agent()
            .runner.pcmk.can_wait()
            .runner.cib.load()
            .env.push_cib(
                resources=fixture_cib_resources_xml_primitive_simplest,
                wait=TIMEOUT
            )
        )

    def test_fail_wait(self):
        self.config.env.push_cib(
            resources=fixture_cib_resources_xml_primitive_simplest,
            wait=TIMEOUT,
            exception=LibraryError(
                reports.wait_for_idle_timed_out(wait_error_message)
            ),
            instead="env.push_cib"
        )
        self.env_assist.assert_raise_library_error(
            lambda: create(self.env_assist.get_env(), wait=TIMEOUT),
            [
                fixture.report_wait_for_idle_timed_out(wait_error_message)
            ],
            expected_in_processor=False
        )

    def test_wait_ok_run_fail(self):
        (self.config
            .runner.pcmk.load_state(raw_resources=dict(failed="true"))
        )
        self.env_assist.assert_raise_library_error(
            lambda: create(self.env_assist.get_env(), wait=TIMEOUT),
            [
                fixture.error(
                    report_codes.RESOURCE_DOES_NOT_RUN,
                    resource_id="A",
                )
            ]
        )

    def test_wait_ok_run_ok(self):
        self.config.runner.pcmk.load_state(raw_resources=dict())
        create(self.env_assist.get_env(), wait=TIMEOUT)
        self.env_assist.assert_reports([
            fixture.info(
                report_codes.RESOURCE_RUNNING_ON_NODES,
                roles_with_nodes={"Started": ["node1"]},
                resource_id="A",
            ),
        ])

    def test_wait_ok_disable_fail(self):
        (self.config
            .runner.pcmk.load_state(raw_resources=dict())
            .env.push_cib(
                resources=fixture_cib_resources_xml_simplest_disabled,
                wait=TIMEOUT,
                instead="env.push_cib"
            )
        )
        self.env_assist.assert_raise_library_error(
            lambda: create(
                self.env_assist.get_env(), wait=TIMEOUT, disabled=True
            ),
            [
                fixture.error(
                    report_codes.RESOURCE_RUNNING_ON_NODES,
                    roles_with_nodes={"Started": ["node1"]},
                    resource_id="A",
                ),
            ]
        )

    def test_wait_ok_disable_ok(self):
        (self.config
            .runner.pcmk.load_state(raw_resources=dict(role="Stopped"))
            .env.push_cib(
                resources=fixture_cib_resources_xml_simplest_disabled,
                wait=TIMEOUT,
                instead="env.push_cib"
            )
        )
        create(self.env_assist.get_env(), wait=TIMEOUT, disabled=True)
        self.env_assist.assert_reports([
            fixture.info(
                report_codes.RESOURCE_DOES_NOT_RUN,
                resource_id="A",
            )
        ])

    def test_wait_ok_disable_ok_by_target_role(self):
        (self.config
            .runner.pcmk.load_state(raw_resources=dict(role="Stopped"))
            .env.push_cib(
                resources=fixture_cib_resources_xml_simplest_disabled,
                wait=TIMEOUT,
                instead="env.push_cib"
            )
        )
        create(
            self.env_assist.get_env(),
            wait=TIMEOUT,
            meta_attributes={"target-role": "Stopped"}
        )
        self.env_assist.assert_reports([
            fixture.info(
                report_codes.RESOURCE_DOES_NOT_RUN,
                resource_id="A",
            )
        ])


class CreateAsMaster(TestCase):
    def setUp(self):
        self.env_assist, self.config = get_env_tools(test_case=self)
        (self.config
            .runner.pcmk.load_agent()
            .runner.pcmk.can_wait()
            .runner.cib.load()
        )
    def test_simplest_resource(self):
        (self.config
            .remove(name="runner.pcmk.can_wait")
            .env.push_cib(
                resources=fixture_cib_resources_xml_master_simplest
            )
        )
        create_master(self.env_assist.get_env(), wait=False)

    def test_fail_wait(self):
        self.config.env.push_cib(
            resources=fixture_cib_resources_xml_master_simplest,
            wait=TIMEOUT,
            exception=LibraryError(
                reports.wait_for_idle_timed_out(wait_error_message)
            )
        )
        self.env_assist.assert_raise_library_error(
            lambda: create_master(self.env_assist.get_env()),
            [
                fixture.report_wait_for_idle_timed_out(wait_error_message)
            ],
            expected_in_processor=False
        )

    def test_wait_ok_run_fail(self):
        (self.config
            .env.push_cib(
                resources=fixture_cib_resources_xml_master_simplest,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict(failed="true"))
        )
        self.env_assist.assert_raise_library_error(
            lambda: create_master(self.env_assist.get_env()),
            [
                fixture.error(
                    report_codes.RESOURCE_DOES_NOT_RUN,
                    resource_id="A"
                )
            ]
        )

    def test_wait_ok_run_ok(self):
        (self.config
            .env.push_cib(
                resources=fixture_cib_resources_xml_master_simplest,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict())
        )
        create_master(self.env_assist.get_env())
        self.env_assist.assert_reports([
            fixture.info(
                report_codes.RESOURCE_RUNNING_ON_NODES,
                roles_with_nodes={"Started": ["node1"]},
                resource_id="A",
            )
        ])

    def test_wait_ok_disable_fail(self):
        (self.config
            .env.push_cib(
                resources=fixture_cib_resources_xml_master_simplest_disabled,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict())
        )
        self.env_assist.assert_raise_library_error(
            lambda: create_master(self.env_assist.get_env(), disabled=True),
            [
                fixture.error(
                    report_codes.RESOURCE_RUNNING_ON_NODES,
                    roles_with_nodes={'Started': ['node1']},
                    resource_id='A'
                )
            ],
        )

    def test_wait_ok_disable_ok(self):
        (self.config
            .env.push_cib(
                resources=fixture_cib_resources_xml_master_simplest_disabled,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict(role="Stopped"))
        )
        create_master(self.env_assist.get_env(), disabled=True)
        self.env_assist.assert_reports([
            fixture.info(
                report_codes.RESOURCE_DOES_NOT_RUN,
                resource_id="A",
            )
        ])

    def test_wait_ok_disable_ok_by_target_role(self):
        (self.config
            .env.push_cib(
                resources=""" """,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict(role="Stopped"))
        )
        create_master(
            self.env_assist.get_env(),
            meta_attributes={"target-role": "Stopped"}
        )
        self.env_assist.assert_reports([
            fixture.info(
                report_codes.RESOURCE_DOES_NOT_RUN,
                resource_id="A",
            )
        ])

    def test_wait_ok_disable_ok_by_target_role_in_master(self):
        (self.config
            .env.push_cib(
                resources=(
                    fixture_cib_resources_xml_master_simplest_disabled_meta_after
                ),
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict(role="Stopped"))
        )
        create_master(
            self.env_assist.get_env(),
            master_meta_options={"target-role": "Stopped"}
        )
        self.env_assist.assert_reports([
            fixture.info(
                report_codes.RESOURCE_DOES_NOT_RUN,
                resource_id="A",
            )
        ])

    def test_wait_ok_disable_ok_by_clone_max(self):
        (self.config
            .env.push_cib(
                resources=""" """,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict(role="Stopped"))
        )
        create_master(
            self.env_assist.get_env(),
            master_meta_options={"clone-max": "0"}
        )
        self.env_assist.assert_reports([
            fixture.info(
                report_codes.RESOURCE_DOES_NOT_RUN,
                resource_id="A",
            )
        ])

    def test_wait_ok_disable_ok_by_clone_node_max(self):
        (self.config
            .env.push_cib(
                resources=""" """,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict(role="Stopped"))
        )
        create_master(
            self.env_assist.get_env(),
            master_meta_options={"clone-node-max": "0"}
        )
        self.env_assist.assert_reports([
            fixture.info(
                report_codes.RESOURCE_DOES_NOT_RUN,
                resource_id="A",
            )
        ])
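# NOTE (pattern visible in CreateAsMaster above and CreateAsClone below):
# a newly created resource counts as intentionally disabled not only with
# ensure_disabled=True, but also when its meta attributes imply it --
# target-role "Stopped" on the primitive or on the clone/master wrapper, or
# clone-max / clone-node-max set to "0". In all of those cases "resource does
# not run" is reported as info rather than as an error after the wait.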
class CreateInGroup(TestCase):
    def setUp(self):
        self.env_assist, self.config = get_env_tools(test_case=self)
        (self.config
            .runner.pcmk.load_agent()
            .runner.pcmk.can_wait()
            .runner.cib.load()
        )

    def test_simplest_resource(self):
        (self.config
            .remove(name="runner.pcmk.can_wait")
            .env.push_cib(
                resources=""" """
            )
        )
        create_group(self.env_assist.get_env(), wait=False)

    def test_fail_wait(self):
        self.config.env.push_cib(
            resources=fixture_cib_resources_xml_group_simplest,
            wait=TIMEOUT,
            exception=LibraryError(
                reports.wait_for_idle_timed_out(wait_error_message)
            )
        )
        self.env_assist.assert_raise_library_error(
            lambda: create_group(self.env_assist.get_env()),
            [
                fixture.report_wait_for_idle_timed_out(wait_error_message)
            ],
            expected_in_processor=False
        )

    def test_wait_ok_run_fail(self):
        (self.config
            .env.push_cib(
                resources=fixture_cib_resources_xml_group_simplest,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict(failed="true"))
        )
        self.env_assist.assert_raise_library_error(
            lambda: create_group(self.env_assist.get_env()),
            [
                fixture.error(
                    report_codes.RESOURCE_DOES_NOT_RUN,
                    resource_id="A"
                )
            ]
        )

    def test_wait_ok_run_ok(self):
        (self.config
            .env.push_cib(
                resources=fixture_cib_resources_xml_group_simplest,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict())
        )
        create_group(self.env_assist.get_env())
        self.env_assist.assert_reports([
            fixture.info(
                report_codes.RESOURCE_RUNNING_ON_NODES,
                roles_with_nodes={"Started": ["node1"]},
                resource_id="A",
            )
        ])

    def test_wait_ok_disable_fail(self):
        (self.config
            .env.push_cib(
                resources=fixture_cib_resources_xml_group_simplest_disabled,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict())
        )
        self.env_assist.assert_raise_library_error(
            lambda: create_group(self.env_assist.get_env(), disabled=True),
            [
                fixture.error(
                    report_codes.RESOURCE_RUNNING_ON_NODES,
                    roles_with_nodes={'Started': ['node1']},
                    resource_id='A'
                )
            ],
        )

    def test_wait_ok_disable_ok(self):
        (self.config
            .env.push_cib(
                resources=fixture_cib_resources_xml_group_simplest_disabled,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict(role="Stopped"))
        )
        create_group(self.env_assist.get_env(), disabled=True)
        self.env_assist.assert_reports([
            fixture.info(
                report_codes.RESOURCE_DOES_NOT_RUN,
                resource_id="A",
            )
        ])

    def test_wait_ok_disable_ok_by_target_role(self):
        (self.config
            .env.push_cib(
                resources=fixture_cib_resources_xml_group_simplest_disabled,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict(role="Stopped"))
        )
        create_group(
            self.env_assist.get_env(),
            meta_attributes={"target-role": "Stopped"}
        )
        self.env_assist.assert_reports([
            fixture.info(
                report_codes.RESOURCE_DOES_NOT_RUN,
                resource_id="A",
            )
        ])


class CreateAsClone(TestCase):
    def setUp(self):
        self.env_assist, self.config = get_env_tools(test_case=self)
        (self.config
            .runner.pcmk.load_agent()
            .runner.pcmk.can_wait()
            .runner.cib.load()
        )

    def test_simplest_resource(self):
        (self.config
            .remove(name="runner.pcmk.can_wait")
            .env.push_cib(resources=fixture_cib_resources_xml_clone_simplest)
        )
        create_clone(self.env_assist.get_env(), wait=False)

    def test_fail_wait(self):
        self.config.env.push_cib(
            resources=fixture_cib_resources_xml_clone_simplest,
            wait=TIMEOUT,
            exception=LibraryError(
                reports.wait_for_idle_timed_out(wait_error_message)
            )
        )
        self.env_assist.assert_raise_library_error(
            lambda: create_clone(self.env_assist.get_env()),
            [
                fixture.report_wait_for_idle_timed_out(wait_error_message)
            ],
            expected_in_processor=False
        )
    def test_wait_ok_run_fail(self):
        (self.config
            .env.push_cib(
                resources=fixture_cib_resources_xml_clone_simplest,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict(failed="true"))
        )
        self.env_assist.assert_raise_library_error(
            lambda: create_clone(self.env_assist.get_env()),
            [
                fixture.error(
                    report_codes.RESOURCE_DOES_NOT_RUN,
                    resource_id="A"
                )
            ]
        )

    def test_wait_ok_run_ok(self):
        (self.config
            .env.push_cib(
                resources=fixture_cib_resources_xml_clone_simplest,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict())
        )
        create_clone(self.env_assist.get_env())
        self.env_assist.assert_reports([
            fixture.info(
                report_codes.RESOURCE_RUNNING_ON_NODES,
                roles_with_nodes={"Started": ["node1"]},
                resource_id="A",
            )
        ])

    def test_wait_ok_disable_fail(self):
        (self.config
            .env.push_cib(
                resources=fixture_cib_resources_xml_clone_simplest_disabled,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict())
        )
        self.env_assist.assert_raise_library_error(
            lambda: create_clone(self.env_assist.get_env(), disabled=True),
            [
                fixture.error(
                    report_codes.RESOURCE_RUNNING_ON_NODES,
                    roles_with_nodes={'Started': ['node1']},
                    resource_id='A'
                )
            ],
        )

    def test_wait_ok_disable_ok(self):
        (self.config
            .env.push_cib(
                resources=fixture_cib_resources_xml_clone_simplest_disabled,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict(role="Stopped"))
        )
        create_clone(self.env_assist.get_env(), disabled=True)
        self.env_assist.assert_reports([
            fixture.info(
                report_codes.RESOURCE_DOES_NOT_RUN,
                resource_id="A",
            )
        ])

    def test_wait_ok_disable_ok_by_target_role(self):
        (self.config
            .env.push_cib(
                resources=""" """,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict(role="Stopped"))
        )
        create_clone(
            self.env_assist.get_env(),
            meta_attributes={"target-role": "Stopped"}
        )
        self.env_assist.assert_reports([
            fixture.info(
                report_codes.RESOURCE_DOES_NOT_RUN,
                resource_id="A",
            )
        ])

    def test_wait_ok_disable_ok_by_target_role_in_clone(self):
        (self.config
            .env.push_cib(
                resources=""" """,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict(role="Stopped"))
        )
        create_clone(
            self.env_assist.get_env(),
            clone_options={"target-role": "Stopped"}
        )
        self.env_assist.assert_reports([
            fixture.info(
                report_codes.RESOURCE_DOES_NOT_RUN,
                resource_id="A",
            )
        ])

    def test_wait_ok_disable_ok_by_clone_max(self):
        (self.config
            .env.push_cib(
                resources=""" """,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict(role="Stopped"))
        )
        create_clone(
            self.env_assist.get_env(),
            clone_options={"clone-max": "0"}
        )
        self.env_assist.assert_reports([
            fixture.info(
                report_codes.RESOURCE_DOES_NOT_RUN,
                resource_id="A",
            )
        ])

    def test_wait_ok_disable_ok_by_clone_node_max(self):
        (self.config
            .env.push_cib(
                resources=""" """,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(raw_resources=dict(role="Stopped"))
        )
        create_clone(
            self.env_assist.get_env(),
            clone_options={"clone-node-max": "0"}
        )
        self.env_assist.assert_reports([
            fixture.info(
                report_codes.RESOURCE_DOES_NOT_RUN,
                resource_id="A",
            )
        ])


class CreateInToBundle(TestCase):
    fixture_empty_resources = ""

    fixture_resources_pre = """ """

    fixture_resources_post_simple = """ """

    fixture_resources_post_disabled = """ """

    fixture_status_stopped = """ """

    fixture_status_running_with_primitive = """ """

    fixture_status_primitive_not_running = """ """

    def setUp(self):
        self.env_assist, self.config = get_env_tools(
            test_case=self,
            base_cib_filename="cib-empty-2.8.xml",
        )
.runner.cib.load(resources=self.fixture_resources_pre) .env.push_cib(resources=self.fixture_resources_post_simple) ) create_bundle(self.env_assist.get_env(), wait=False) self.env_assist.assert_reports([ fixture.info(report_codes.CIB_UPGRADE_SUCCESSFUL) ]) def test_simplest_resource(self): (self.config .runner.pcmk.load_agent() .runner.cib.load(resources=self.fixture_resources_pre) .env.push_cib(resources=self.fixture_resources_post_simple) ) create_bundle(self.env_assist.get_env(), wait=False) def test_bundle_doesnt_exist(self): (self.config .runner.pcmk.load_agent() .runner.cib.load(resources=self.fixture_empty_resources) ) self.env_assist.assert_raise_library_error( lambda: create_bundle(self.env_assist.get_env(), wait=False), [ fixture.error( report_codes.ID_NOT_FOUND, id="B", expected_types=["bundle"], context_type="resources", context_id="", ) ], expected_in_processor=False ) def test_id_not_bundle(self): (self.config .runner.pcmk.load_agent() .runner.cib.load( resources=""" """ ) ) self.env_assist.assert_raise_library_error( lambda: create_bundle(self.env_assist.get_env(), wait=False), [ fixture.error( report_codes.ID_BELONGS_TO_UNEXPECTED_TYPE, id="B", expected_types=["bundle"], current_type="primitive", ) ], expected_in_processor=False ) def test_bundle_not_empty(self): (self.config .runner.pcmk.load_agent() .runner.cib.load( resources=""" """ ) ) self.env_assist.assert_raise_library_error( lambda: create_bundle(self.env_assist.get_env(), wait=False), [ fixture.error( report_codes.RESOURCE_BUNDLE_ALREADY_CONTAINS_A_RESOURCE, bundle_id="B", resource_id="P", ) ], expected_in_processor=False ) def test_wait_fail(self): (self.config .runner.pcmk.load_agent() .runner.pcmk.can_wait() .runner.cib.load(resources=self.fixture_resources_pre) .env.push_cib( resources=self.fixture_resources_post_simple, wait=TIMEOUT, exception=LibraryError( reports.wait_for_idle_timed_out(wait_error_message) ) ) ) self.env_assist.assert_raise_library_error( lambda: create_bundle(self.env_assist.get_env()), [ fixture.report_wait_for_idle_timed_out(wait_error_message), ], expected_in_processor=False ) @skip_unless_pacemaker_supports_bundle def test_wait_ok_run_ok(self): (self.config .runner.pcmk.load_agent() .runner.pcmk.can_wait() .runner.cib.load(resources=self.fixture_resources_pre) .env.push_cib( resources=self.fixture_resources_post_simple, wait=TIMEOUT ) .runner.pcmk.load_state( resources=self.fixture_status_running_with_primitive ) ) create_bundle(self.env_assist.get_env()) self.env_assist.assert_reports([ fixture.report_resource_running("A", {"Started": ["node1"]}), ]) @skip_unless_pacemaker_supports_bundle def test_wait_ok_run_fail(self): (self.config .runner.pcmk.load_agent() .runner.pcmk.can_wait() .runner.cib.load(resources=self.fixture_resources_pre) .env.push_cib( resources=self.fixture_resources_post_simple, wait=TIMEOUT ) .runner.pcmk.load_state( resources=self.fixture_status_primitive_not_running ) ) self.env_assist.assert_raise_library_error( lambda: create_bundle(self.env_assist.get_env()), [ fixture.error( report_codes.RESOURCE_DOES_NOT_RUN, resource_id="A" ) ] ) @skip_unless_pacemaker_supports_bundle def test_disabled_wait_ok_not_running(self): (self.config .runner.pcmk.load_agent() .runner.pcmk.can_wait() .runner.cib.load(resources=self.fixture_resources_pre) .env.push_cib( resources=self.fixture_resources_post_disabled, wait=TIMEOUT ) .runner.pcmk.load_state( resources=self.fixture_status_primitive_not_running ) ) create_bundle(self.env_assist.get_env(), disabled=True) 
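        # create_bundle() has returned at this point; what the command
        # reported is checked below. The division of labour in these
        # wait-enabled tests: env.push_cib(wait=TIMEOUT) models
        # "crm_resource --wait", and the runner.pcmk.load_state() queued
        # after it supplies the cluster status the command then inspects.
        # A wait timeout is instead simulated by letting push_cib raise,
        # as in test_wait_fail above:
        #
        #   .env.push_cib(
        #       resources=self.fixture_resources_post_simple,
        #       wait=TIMEOUT,
        #       exception=LibraryError(
        #           reports.wait_for_idle_timed_out(wait_error_message)
        #       )
        #   )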
        self.env_assist.assert_reports([
            fixture.report_resource_not_running("A")
        ])

    @skip_unless_pacemaker_supports_bundle
    def test_disabled_wait_ok_running(self):
        (self.config
            .runner.pcmk.load_agent()
            .runner.pcmk.can_wait()
            .runner.cib.load(resources=self.fixture_resources_pre)
            .env.push_cib(
                resources=self.fixture_resources_post_disabled,
                wait=TIMEOUT
            )
            .runner.pcmk.load_state(
                resources=self.fixture_status_running_with_primitive
            )
        )
        self.env_assist.assert_raise_library_error(
            lambda: create_bundle(self.env_assist.get_env(), disabled=True),
            [
                fixture.error(
                    report_codes.RESOURCE_RUNNING_ON_NODES,
                    resource_id="A",
                    roles_with_nodes={"Started": ["node1"]},
                )
            ]
        )

pcs-0.9.164/pcs/lib/commands/test/resource/test_resource_enable_disable.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.common import report_codes
from pcs.lib import reports
from pcs.lib.commands import resource
from pcs.lib.errors import (
    LibraryError,
    ReportItemSeverity as severities,
)
from pcs.test.tools import fixture
from pcs.test.tools.command_env import get_env_tools
from pcs.test.tools.misc import (
    outdent,
    skip_unless_pacemaker_supports_bundle,
)
from pcs.test.tools.pcs_unittest import TestCase

TIMEOUT=10

fixture_primitive_cib_enabled = """ """
fixture_primitive_cib_disabled = """ """
fixture_primitive_status_managed = """ """
fixture_primitive_status_unmanaged = """ """
fixture_two_primitives_cib_enabled = """ """
fixture_two_primitives_cib_disabled = """ """
fixture_two_primitives_cib_disabled_both = """ """
fixture_two_primitives_status_managed = """ """
fixture_group_cib_enabled = """ """
fixture_group_cib_disabled_group = """ """
fixture_group_cib_disabled_primitive = """ """
fixture_group_cib_disabled_both = """ """
fixture_group_status_managed = """ """
fixture_group_status_unmanaged = """ """
fixture_clone_cib_enabled = """ """
fixture_clone_cib_disabled_clone = """ """
fixture_clone_cib_disabled_primitive = """ """
fixture_clone_cib_disabled_both = """ """
fixture_clone_status_managed = """ """
fixture_clone_status_unmanaged = """ """
fixture_master_cib_enabled = """ """
fixture_master_cib_disabled_master = """ """
fixture_master_cib_disabled_primitive = """ """
fixture_master_cib_disabled_both = """ """
fixture_master_status_managed = """ """
fixture_master_status_unmanaged = """ """
fixture_clone_group_cib_enabled = """ """
fixture_clone_group_cib_disabled_clone = """ """
fixture_clone_group_cib_disabled_group = """ """
fixture_clone_group_cib_disabled_primitive = """ """
fixture_clone_group_cib_disabled_clone_group = """ """
fixture_clone_group_cib_disabled_all = """ """
fixture_clone_group_status_managed = """ """
fixture_clone_group_status_unmanaged = """ """
fixture_bundle_cib_enabled = """ """
fixture_bundle_cib_disabled_primitive = """ """
fixture_bundle_cib_disabled_bundle = """ """
fixture_bundle_cib_disabled_both = """ """
fixture_bundle_status_managed = """ """
fixture_bundle_status_unmanaged = """ """

def fixture_report_unmanaged(resource):
    return (
        severities.WARNING,
        report_codes.RESOURCE_IS_UNMANAGED,
        {
            "resource_id": resource,
        },
        None
    )

class DisablePrimitive(TestCase):
    def setUp(self):
        self.env_assist, self.config = get_env_tools(test_case=self)

    def test_nonexistent_resource(self):
        (self.config
            .runner.cib.load(resources=fixture_primitive_cib_enabled)
        )
        self.env_assist.assert_raise_library_error(
            lambda: resource.disable(self.env_assist.get_env(), ["B"], False),
            [
fixture.report_not_found("B", "resources") ], expected_in_processor=False ) def test_nonexistent_resource_in_status(self): (self.config .runner.cib.load(resources=fixture_two_primitives_cib_enabled) .runner.pcmk.load_state(resources=fixture_primitive_status_managed) ) self.env_assist.assert_raise_library_error( lambda: resource.disable(self.env_assist.get_env(), ["B"], False), [ fixture.report_not_found("B") ], ) def test_correct_resource(self): (self.config .runner.cib.load(resources=fixture_two_primitives_cib_enabled) .runner.pcmk.load_state( resources=fixture_two_primitives_status_managed ) .env.push_cib(resources=fixture_two_primitives_cib_disabled) ) resource.disable(self.env_assist.get_env(), ["A"], False) def test_unmanaged(self): # The code doesn't care what causes the resource to be unmanaged # (cluster property, resource's meta-attribute or whatever). It only # checks the cluster state (crm_mon). (self.config .runner.cib.load(resources=fixture_primitive_cib_enabled) .runner.pcmk.load_state( resources=fixture_primitive_status_unmanaged ) .env.push_cib(resources=fixture_primitive_cib_disabled) ) resource.disable(self.env_assist.get_env(), ["A"], False) self.env_assist.assert_reports([fixture_report_unmanaged("A")]) class EnablePrimitive(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_nonexistent_resource(self): (self.config .runner.cib.load(resources=fixture_primitive_cib_disabled) ) self.env_assist.assert_raise_library_error( lambda: resource.enable(self.env_assist.get_env(), ["B"], False), [ fixture.report_not_found("B", "resources") ], expected_in_processor=False ) def test_nonexistent_resource_in_status(self): (self.config .runner.cib.load(resources=fixture_two_primitives_cib_disabled) .runner.pcmk.load_state(resources=fixture_primitive_status_managed) ) self.env_assist.assert_raise_library_error( lambda: resource.enable(self.env_assist.get_env(), ["B"], False), [ fixture.report_not_found("B") ] ) def test_correct_resource(self): (self.config .runner.cib.load(resources=fixture_two_primitives_cib_disabled_both) .runner.pcmk.load_state( resources=fixture_two_primitives_status_managed ) .env.push_cib(resources=fixture_two_primitives_cib_disabled) ) resource.enable(self.env_assist.get_env(), ["B"], False) def test_unmanaged(self): # The code doesn't care what causes the resource to be unmanaged # (cluster property, resource's meta-attribute or whatever). It only # checks the cluster state (crm_mon). 
(self.config .runner.cib.load(resources=fixture_primitive_cib_disabled) .runner.pcmk.load_state( resources=fixture_primitive_status_unmanaged ) .env.push_cib(resources=fixture_primitive_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A"], False) self.env_assist.assert_reports([fixture_report_unmanaged("A")]) class MoreResources(TestCase): fixture_cib_enabled = """ """ fixture_cib_disabled = """ """ fixture_status = """ """ def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_success_enable(self): fixture_enabled = """ """ (self.config .runner.cib.load(resources=self.fixture_cib_disabled) .runner.pcmk.load_state(resources=self.fixture_status) .env.push_cib(resources=fixture_enabled) ) resource.enable(self.env_assist.get_env(), ["A", "B", "D"], False) self.env_assist.assert_reports([ fixture_report_unmanaged("B"), fixture_report_unmanaged("D"), ]) def test_success_disable(self): fixture_disabled = """ """ (self.config .runner.cib.load(resources=self.fixture_cib_enabled) .runner.pcmk.load_state(resources=self.fixture_status) .env.push_cib(resources=fixture_disabled) ) resource.disable(self.env_assist.get_env(), ["A", "B", "D"], False) self.env_assist.assert_reports([ fixture_report_unmanaged("B"), fixture_report_unmanaged("D"), ]) def test_bad_resource_enable(self): (self.config .runner.cib.load(resources=self.fixture_cib_disabled) ) self.env_assist.assert_raise_library_error( lambda: resource.enable( self.env_assist.get_env(), ["B", "X", "Y", "A"], wait=False ), [ fixture.report_not_found("X", "resources"), fixture.report_not_found("Y", "resources"), ], expected_in_processor=False ) def test_bad_resource_disable(self): (self.config .runner.cib.load(resources=self.fixture_cib_enabled) ) self.env_assist.assert_raise_library_error( lambda: resource.disable( self.env_assist.get_env(), ["B", "X", "Y", "A"], wait=False ), [ fixture.report_not_found("X", "resources"), fixture.report_not_found("Y", "resources"), ], expected_in_processor=False ) class Wait(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) self.config.runner.pcmk.can_wait() fixture_status_running = """ """ fixture_status_stopped = """ """ fixture_status_mixed = """ """ fixture_wait_timeout_error = outdent( """\ Pending actions: Action 12: B-node2-stop on node2 Error performing operation: Timer expired """ ).strip() def test_enable_dont_wait_on_error(self): (self.config .runner.cib.load(resources=fixture_primitive_cib_disabled) ) self.env_assist.assert_raise_library_error( lambda: resource.enable(self.env_assist.get_env(), ["B"], TIMEOUT), [ fixture.report_not_found("B", "resources"), ], expected_in_processor=False ) def test_disable_dont_wait_on_error(self): (self.config .runner.cib.load(resources=fixture_primitive_cib_enabled) ) self.env_assist.assert_raise_library_error( lambda: resource.disable(self.env_assist.get_env(), ["B"], TIMEOUT), [ fixture.report_not_found("B", "resources"), ], expected_in_processor=False ) def test_enable_resource_stopped(self): (self.config .runner.cib.load(resources=fixture_two_primitives_cib_disabled_both) .runner.pcmk.load_state(resources=self.fixture_status_stopped) .env.push_cib( resources=fixture_two_primitives_cib_enabled, wait=TIMEOUT ) .runner.pcmk.load_state( name="", resources=self.fixture_status_stopped, ) ) self.env_assist.assert_raise_library_error( lambda: resource.enable( self.env_assist.get_env(), ["A", "B"], TIMEOUT ), [ fixture.report_resource_not_running("A", severities.ERROR), 
fixture.report_resource_not_running("B", severities.ERROR), ] ) def test_disable_resource_stopped(self): (self.config .runner.cib.load(resources=fixture_two_primitives_cib_enabled) .runner.pcmk.load_state(resources=self.fixture_status_running) .env.push_cib( resources=fixture_two_primitives_cib_disabled_both, wait=TIMEOUT ) .runner.pcmk.load_state( name="", resources=self.fixture_status_stopped, ) ) resource.disable(self.env_assist.get_env(), ["A", "B"], TIMEOUT) self.env_assist.assert_reports([ fixture.report_resource_not_running("A"), fixture.report_resource_not_running("B"), ]) def test_enable_resource_running(self): (self.config .runner.cib.load(resources=fixture_two_primitives_cib_disabled_both) .runner.pcmk.load_state(resources=self.fixture_status_stopped) .env.push_cib( resources=fixture_two_primitives_cib_enabled, wait=TIMEOUT ) .runner.pcmk.load_state( name="", resources=self.fixture_status_running, ) ) resource.enable(self.env_assist.get_env(), ["A", "B"], TIMEOUT) self.env_assist.assert_reports([ fixture.report_resource_running("A", {"Started": ["node1"]}), fixture.report_resource_running("B", {"Started": ["node2"]}), ]) def test_disable_resource_running(self): (self.config .runner.cib.load(resources=fixture_two_primitives_cib_enabled) .runner.pcmk.load_state(resources=self.fixture_status_running) .env.push_cib( resources=fixture_two_primitives_cib_disabled_both, wait=TIMEOUT ) .runner.pcmk.load_state( name="", resources=self.fixture_status_running, ) ) self.env_assist.assert_raise_library_error( lambda: resource.disable( self.env_assist.get_env(), ["A", "B"], TIMEOUT ), [ fixture.report_resource_running( "A", {"Started": ["node1"]}, severities.ERROR ), fixture.report_resource_running( "B", {"Started": ["node2"]}, severities.ERROR ), ] ) def test_enable_wait_timeout(self): (self.config .runner.cib.load(resources=fixture_primitive_cib_disabled) .runner.pcmk.load_state(resources=self.fixture_status_stopped) .env.push_cib( resources=fixture_primitive_cib_enabled, wait=TIMEOUT, exception=LibraryError( reports.wait_for_idle_timed_out( self.fixture_wait_timeout_error ) ) ) ) self.env_assist.assert_raise_library_error( lambda: resource.enable(self.env_assist.get_env(), ["A"], TIMEOUT), [ fixture.report_wait_for_idle_timed_out( self.fixture_wait_timeout_error ) ], expected_in_processor=False ) def test_disable_wait_timeout(self): (self.config .runner.cib.load(resources=fixture_primitive_cib_enabled) .runner.pcmk.load_state(resources=self.fixture_status_running) .env.push_cib( resources=fixture_primitive_cib_disabled, wait=TIMEOUT, exception=LibraryError( reports.wait_for_idle_timed_out( self.fixture_wait_timeout_error ) ) ) ) self.env_assist.assert_raise_library_error( lambda: resource.disable(self.env_assist.get_env(), ["A"], TIMEOUT), [ fixture.report_wait_for_idle_timed_out( self.fixture_wait_timeout_error ) ], expected_in_processor=False ) class WaitClone(TestCase): fixture_status_running = """ """ fixture_status_stopped = """ """ def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) self.config.runner.pcmk.can_wait() def test_disable_clone(self): (self.config .runner.cib.load(resources=fixture_clone_cib_enabled) .runner.pcmk.load_state(resources=self.fixture_status_running) .env.push_cib( resources=fixture_clone_cib_disabled_clone, wait=TIMEOUT ) .runner.pcmk.load_state( name="", resources=self.fixture_status_stopped, ) ) resource.disable(self.env_assist.get_env(), ["A-clone"], TIMEOUT) self.env_assist.assert_reports([ ( severities.INFO, 
report_codes.RESOURCE_DOES_NOT_RUN, { "resource_id": "A-clone", }, None ) ]) def test_enable_clone(self): (self.config .runner.cib.load(resources=fixture_clone_cib_disabled_clone) .runner.pcmk.load_state(resources=self.fixture_status_stopped) .env.push_cib( resources=fixture_clone_cib_enabled, wait=TIMEOUT ) .runner.pcmk.load_state( name="", resources=self.fixture_status_running, ) ) resource.enable(self.env_assist.get_env(), ["A-clone"], TIMEOUT) self.env_assist.assert_reports([ ( severities.INFO, report_codes.RESOURCE_RUNNING_ON_NODES, { "resource_id": "A-clone", "roles_with_nodes": {"Started": ["node1", "node2"]}, }, None ) ]) class DisableGroup(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) self.config.runner.cib.load(resources=fixture_group_cib_enabled) def test_primitive(self): (self.config .runner.pcmk.load_state(resources=fixture_group_status_managed) .env.push_cib(resources=fixture_group_cib_disabled_primitive) ) resource.disable(self.env_assist.get_env(), ["A1"], wait=False) def test_group(self): (self.config .runner.pcmk.load_state(resources=fixture_group_status_managed) .env.push_cib(resources=fixture_group_cib_disabled_group) ) resource.disable(self.env_assist.get_env(), ["A"], wait=False) def test_primitive_unmanaged(self): (self.config .runner.pcmk.load_state(resources=fixture_group_status_unmanaged) .env.push_cib(resources=fixture_group_cib_disabled_primitive) ) resource.disable(self.env_assist.get_env(), ["A1"], wait=False) self.env_assist.assert_reports([ fixture_report_unmanaged("A1"), ]) def test_group_unmanaged(self): (self.config .runner.pcmk.load_state(resources=fixture_group_status_unmanaged) .env.push_cib(resources=fixture_group_cib_disabled_group) ) resource.disable(self.env_assist.get_env(), ["A"], wait=False) self.env_assist.assert_reports([ fixture_report_unmanaged("A"), ]) class EnableGroup(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_primitive(self): (self.config .runner.cib.load(resources=fixture_group_cib_disabled_primitive) .runner.pcmk.load_state(resources=fixture_group_status_managed) .env.push_cib(resources=fixture_group_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A1"], wait=False) def test_primitive_disabled_both(self): (self.config .runner.cib.load(resources=fixture_group_cib_disabled_both) .runner.pcmk.load_state(resources=fixture_group_status_managed) .env.push_cib(resources=fixture_group_cib_disabled_group) ) resource.enable(self.env_assist.get_env(), ["A1"], wait=False) def test_group(self): (self.config .runner.cib.load(resources=fixture_group_cib_disabled_group) .runner.pcmk.load_state(resources=fixture_group_status_managed) .env.push_cib(resources=fixture_group_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A"], wait=False) def test_group_both_disabled(self): (self.config .runner.cib.load(resources=fixture_group_cib_disabled_both) .runner.pcmk.load_state(resources=fixture_group_status_managed) .env.push_cib(resources=fixture_group_cib_disabled_primitive) ) resource.enable(self.env_assist.get_env(), ["A"], wait=False) def test_primitive_unmanaged(self): (self.config .runner.cib.load(resources=fixture_group_cib_disabled_primitive) .runner.pcmk.load_state(resources=fixture_group_status_unmanaged) .env.push_cib(resources=fixture_group_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A1"], wait=False) self.env_assist.assert_reports([ fixture_report_unmanaged("A1"), ]) def test_group_unmanaged(self): (self.config 
.runner.cib.load(resources=fixture_group_cib_disabled_group) .runner.pcmk.load_state(resources=fixture_group_status_unmanaged) .env.push_cib(resources=fixture_group_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A"], wait=False) self.env_assist.assert_reports([ fixture_report_unmanaged("A"), ]) class DisableClone(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) self.config.runner.cib.load(resources=fixture_clone_cib_enabled) def test_primitive(self): (self.config .runner.pcmk.load_state(resources=fixture_clone_status_managed) .env.push_cib(resources=fixture_clone_cib_disabled_primitive) ) resource.disable(self.env_assist.get_env(), ["A"], wait=False) def test_clone(self): (self.config .runner.pcmk.load_state(resources=fixture_clone_status_managed) .env.push_cib(resources=fixture_clone_cib_disabled_clone) ) resource.disable(self.env_assist.get_env(), ["A-clone"], wait=False) def test_primitive_unmanaged(self): (self.config .runner.pcmk.load_state(resources=fixture_clone_status_unmanaged) .env.push_cib(resources=fixture_clone_cib_disabled_primitive) ) resource.disable(self.env_assist.get_env(), ["A"], wait=False) self.env_assist.assert_reports([ fixture_report_unmanaged("A"), ]) def test_clone_unmanaged(self): (self.config .runner.pcmk.load_state(resources=fixture_clone_status_unmanaged) .env.push_cib(resources=fixture_clone_cib_disabled_clone) ) resource.disable(self.env_assist.get_env(), ["A-clone"], wait=False) self.env_assist.assert_reports([ fixture_report_unmanaged("A-clone"), ]) class EnableClone(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_primitive(self): (self.config .runner.cib.load(resources=fixture_clone_cib_disabled_primitive) .runner.pcmk.load_state(resources=fixture_clone_status_managed) .env.push_cib(resources=fixture_clone_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A"], wait=False) def test_primitive_disabled_both(self): (self.config .runner.cib.load(resources=fixture_clone_cib_disabled_both) .runner.pcmk.load_state(resources=fixture_clone_status_managed) .env.push_cib(resources=fixture_clone_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A"], wait=False) def test_clone(self): (self.config .runner.cib.load(resources=fixture_clone_cib_disabled_clone) .runner.pcmk.load_state(resources=fixture_clone_status_managed) .env.push_cib(resources=fixture_clone_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A-clone"], wait=False) def test_clone_disabled_both(self): (self.config .runner.cib.load(resources=fixture_clone_cib_disabled_both) .runner.pcmk.load_state(resources=fixture_clone_status_managed) .env.push_cib(resources=fixture_clone_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A-clone"], wait=False) def test_primitive_unmanaged(self): (self.config .runner.cib.load(resources=fixture_clone_cib_disabled_primitive) .runner.pcmk.load_state(resources=fixture_clone_status_unmanaged) .env.push_cib(resources=fixture_clone_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A"], wait=False) self.env_assist.assert_reports([ fixture_report_unmanaged("A-clone"), fixture_report_unmanaged("A"), ]) def test_clone_unmanaged(self): (self.config .runner.cib.load(resources=fixture_clone_cib_disabled_clone) .runner.pcmk.load_state(resources=fixture_clone_status_unmanaged) .env.push_cib(resources=fixture_clone_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A-clone"], wait=False) self.env_assist.assert_reports([ 
fixture_report_unmanaged("A-clone"), fixture_report_unmanaged("A"), ]) class DisableMaster(TestCase): # same as clone, minimum tests in here def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) (self.config .runner.cib.load(resources=fixture_master_cib_enabled) .runner.pcmk.load_state(resources=fixture_master_status_managed) ) def test_primitive(self): self.config.env.push_cib( resources=fixture_master_cib_disabled_primitive ) resource.disable(self.env_assist.get_env(), ["A"], False) def test_master(self): self.config.env.push_cib( resources=fixture_master_cib_disabled_master ) resource.disable(self.env_assist.get_env(), ["A-master"], False) class EnableMaster(TestCase): # same as clone, minimum tests in here def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_primitive(self): (self.config .runner.cib.load(resources=fixture_master_cib_disabled_primitive) .runner.pcmk.load_state(resources=fixture_master_status_managed) .env.push_cib(resources=fixture_master_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A"], False) def test_primitive_disabled_both(self): (self.config .runner.cib.load(resources=fixture_master_cib_disabled_both) .runner.pcmk.load_state(resources=fixture_master_status_managed) .env.push_cib(resources=fixture_master_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A"], False) def test_master(self): (self.config .runner.cib.load(resources=fixture_master_cib_disabled_master) .runner.pcmk.load_state(resources=fixture_master_status_managed) .env.push_cib(resources=fixture_master_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A-master"], False) def test_master_disabled_both(self): (self.config .runner.cib.load(resources=fixture_master_cib_disabled_both) .runner.pcmk.load_state(resources=fixture_master_status_managed) .env.push_cib(resources=fixture_master_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A-master"], False) class DisableClonedGroup(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_clone(self): (self.config .runner.cib.load(resources=fixture_clone_group_cib_enabled) .runner.pcmk.load_state( resources=fixture_clone_group_status_managed ) .env.push_cib(resources=fixture_clone_group_cib_disabled_clone) ) resource.disable(self.env_assist.get_env(), ["A-clone"], False) def test_group(self): (self.config .runner.cib.load(resources=fixture_clone_group_cib_enabled) .runner.pcmk.load_state( resources=fixture_clone_group_status_managed ) .env.push_cib(resources=fixture_clone_group_cib_disabled_group) ) resource.disable(self.env_assist.get_env(), ["A"], False) def test_primitive(self): (self.config .runner.cib.load(resources=fixture_clone_group_cib_enabled) .runner.pcmk.load_state( resources=fixture_clone_group_status_managed ) .env.push_cib( resources=fixture_clone_group_cib_disabled_primitive ) ) resource.disable(self.env_assist.get_env(), ["A1"], False) def test_clone_unmanaged(self): (self.config .runner.cib.load(resources=fixture_clone_group_cib_enabled) .runner.pcmk.load_state( resources=fixture_clone_group_status_unmanaged ) .env.push_cib(resources=fixture_clone_group_cib_disabled_clone) ) resource.disable(self.env_assist.get_env(), ["A-clone"], False) self.env_assist.assert_reports([ fixture_report_unmanaged("A-clone"), ]) def test_group_unmanaged(self): (self.config .runner.cib.load(resources=fixture_clone_group_cib_enabled) .runner.pcmk.load_state( resources=fixture_clone_group_status_unmanaged ) 
.env.push_cib(resources=fixture_clone_group_cib_disabled_group) ) resource.disable(self.env_assist.get_env(), ["A"], False) self.env_assist.assert_reports([ fixture_report_unmanaged("A"), ]) def test_primitive_unmanaged(self): (self.config .runner.cib.load(resources=fixture_clone_group_cib_enabled) .runner.pcmk.load_state( resources=fixture_clone_group_status_unmanaged ) .env.push_cib( resources=fixture_clone_group_cib_disabled_primitive ) ) resource.disable(self.env_assist.get_env(), ["A1"], False) self.env_assist.assert_reports([ fixture_report_unmanaged("A1"), ]) class EnableClonedGroup(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_clone(self): (self.config .runner.cib.load(resources=fixture_clone_group_cib_disabled_clone) .runner.pcmk.load_state( resources=fixture_clone_group_status_managed ) .env.push_cib(resources=fixture_clone_group_cib_enabled,) ) resource.enable(self.env_assist.get_env(), ["A-clone"], False) def test_clone_disabled_all(self): (self.config .runner.cib.load(resources=fixture_clone_group_cib_disabled_all) .runner.pcmk.load_state( resources=fixture_clone_group_status_managed ) .env.push_cib( resources=fixture_clone_group_cib_disabled_primitive ) ) resource.enable(self.env_assist.get_env(), ["A-clone"], False) def test_group(self): (self.config .runner.cib.load(resources=fixture_clone_group_cib_disabled_group) .runner.pcmk.load_state( resources=fixture_clone_group_status_managed ) .env.push_cib(resources=fixture_clone_group_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A"], False) def test_group_disabled_all(self): (self.config .runner.cib.load(resources=fixture_clone_group_cib_disabled_all) .runner.pcmk.load_state( resources=fixture_clone_group_status_managed ) .env.push_cib( resources=fixture_clone_group_cib_disabled_primitive ) ) resource.enable(self.env_assist.get_env(), ["A"], False) def test_primitive(self): (self.config .runner.cib.load( resources=fixture_clone_group_cib_disabled_primitive ) .runner.pcmk.load_state( resources=fixture_clone_group_status_managed ) .env.push_cib(resources=fixture_clone_group_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A1"], False) def test_primitive_disabled_all(self): (self.config .runner.cib.load(resources=fixture_clone_group_cib_disabled_all) .runner.pcmk.load_state( resources=fixture_clone_group_status_managed ) .env.push_cib( resources=fixture_clone_group_cib_disabled_clone_group ) ) resource.enable(self.env_assist.get_env(), ["A1"], False) def test_clone_unmanaged(self): (self.config .runner.cib.load(resources=fixture_clone_group_cib_disabled_clone) .runner.pcmk.load_state( resources=fixture_clone_group_status_unmanaged ) .env.push_cib(resources=fixture_clone_group_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A-clone"], False) self.env_assist.assert_reports([ fixture_report_unmanaged("A-clone"), fixture_report_unmanaged("A"), ]) def test_group_unmanaged(self): (self.config .runner.cib.load(resources=fixture_clone_group_cib_disabled_group) .runner.pcmk.load_state( resources=fixture_clone_group_status_unmanaged ) .env.push_cib(resources=fixture_clone_group_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A"], False) self.env_assist.assert_reports([ fixture_report_unmanaged("A"), fixture_report_unmanaged("A-clone"), ]) def test_primitive_unmanaged(self): (self.config .runner.cib.load( resources=fixture_clone_group_cib_disabled_primitive ) .runner.pcmk.load_state( resources=fixture_clone_group_status_unmanaged ) 
.env.push_cib(resources=fixture_clone_group_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A1"], False) self.env_assist.assert_reports([ fixture_report_unmanaged("A1"), ]) @skip_unless_pacemaker_supports_bundle class DisableBundle(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_primitive(self): (self.config .runner.cib.load(resources=fixture_bundle_cib_enabled) .runner.pcmk.load_state(resources=fixture_bundle_status_managed) .env.push_cib(resources=fixture_bundle_cib_disabled_primitive) ) resource.disable(self.env_assist.get_env(), ["A"], False) def test_bundle(self): (self.config .runner.cib.load(resources=fixture_bundle_cib_enabled) .runner.pcmk.load_state(resources=fixture_bundle_status_managed) .env.push_cib(resources=fixture_bundle_cib_disabled_bundle) ) resource.disable(self.env_assist.get_env(), ["A-bundle"], False) def test_primitive_unmanaged(self): (self.config .runner.cib.load(resources=fixture_bundle_cib_enabled) .runner.pcmk.load_state(resources=fixture_bundle_status_unmanaged) .env.push_cib(resources=fixture_bundle_cib_disabled_primitive) ) resource.disable(self.env_assist.get_env(), ["A"], False) self.env_assist.assert_reports([ fixture_report_unmanaged("A"), ]) def test_bundle_unmanaged(self): (self.config .runner.cib.load(resources=fixture_bundle_cib_enabled) .runner.pcmk.load_state(resources=fixture_bundle_status_unmanaged) .env.push_cib(resources=fixture_bundle_cib_disabled_bundle) ) resource.disable(self.env_assist.get_env(), ["A-bundle"], False) self.env_assist.assert_reports([ fixture_report_unmanaged("A-bundle"), ]) @skip_unless_pacemaker_supports_bundle class EnableBundle(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_primitive(self): (self.config .runner.cib.load(resources=fixture_bundle_cib_disabled_primitive) .runner.pcmk.load_state(resources=fixture_bundle_status_managed) .env.push_cib(resources=fixture_bundle_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A"], False) def test_primitive_disabled_both(self): (self.config .runner.cib.load(resources=fixture_bundle_cib_disabled_both) .runner.pcmk.load_state(resources=fixture_bundle_status_managed) .env.push_cib(resources=fixture_bundle_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A"], False) def test_bundle(self): (self.config .runner.cib.load(resources=fixture_bundle_cib_disabled_bundle) .runner.pcmk.load_state(resources=fixture_bundle_status_managed) .env.push_cib(resources=fixture_bundle_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A-bundle"], False) def test_bundle_disabled_both(self): (self.config .runner.cib.load(resources=fixture_bundle_cib_disabled_both) .runner.pcmk.load_state(resources=fixture_bundle_status_managed) .env.push_cib(resources=fixture_bundle_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A-bundle"], False) def test_primitive_unmanaged(self): (self.config .runner.cib.load(resources=fixture_bundle_cib_disabled_primitive) .runner.pcmk.load_state(resources=fixture_bundle_status_unmanaged) .env.push_cib(resources=fixture_bundle_cib_enabled) ) resource.enable(self.env_assist.get_env(), ["A"], False) self.env_assist.assert_reports([ fixture_report_unmanaged("A"), fixture_report_unmanaged("A-bundle"), ]) def test_bundle_unmanaged(self): (self.config .runner.cib.load(resources=fixture_bundle_cib_disabled_primitive) .runner.pcmk.load_state(resources=fixture_bundle_status_unmanaged) .env.push_cib(resources=fixture_bundle_cib_enabled) ) 
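        # Enabling through the bundle id still touches the member
        # primitive, so both ids are reported below -- the same pair as
        # in test_primitive_unmanaged above, discovered from the other
        # end. Each fixture_report_unmanaged("X") expands to a plain
        # report tuple (the helper is defined near the top of this file):
        #
        #   (
        #       severities.WARNING,
        #       report_codes.RESOURCE_IS_UNMANAGED,
        #       {"resource_id": "X"},
        #       None
        #   )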
        resource.enable(self.env_assist.get_env(), ["A-bundle"], False)
        self.env_assist.assert_reports([
            fixture_report_unmanaged("A-bundle"),
            fixture_report_unmanaged("A"),
        ])

pcs-0.9.164/pcs/lib/commands/test/resource/test_resource_manage_unmanage.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.common import report_codes
from pcs.lib.commands import resource
from pcs.lib.errors import ReportItemSeverity as severities
from pcs.test.tools import fixture
from pcs.test.tools.command_env import get_env_tools
from pcs.test.tools.pcs_unittest import TestCase

fixture_primitive_cib_managed = """ """
fixture_primitive_cib_unmanaged = """ """
fixture_primitive_cib_managed_op_enabled = """ """
fixture_primitive_cib_managed_op_disabled = """ """
fixture_primitive_cib_unmanaged_op_enabled = """ """
fixture_primitive_cib_unmanaged_op_disabled = """ """
fixture_group_cib_managed = """ """
fixture_group_cib_unmanaged_resource = """ """
fixture_group_cib_unmanaged_resource_and_group = """ """
fixture_group_cib_unmanaged_all_resources = """ """
fixture_clone_cib_managed = """ """
fixture_clone_cib_unmanaged_clone = """ """
fixture_clone_cib_unmanaged_primitive = """ """
fixture_clone_cib_unmanaged_both = """ """
fixture_clone_cib_managed_op_enabled = """ """
fixture_clone_cib_unmanaged_primitive_op_disabled = """ """
fixture_master_cib_managed = """ """
fixture_master_cib_unmanaged_master = """ """
fixture_master_cib_unmanaged_primitive = """ """
fixture_master_cib_unmanaged_both = """ """
fixture_master_cib_managed_op_enabled = """ """
fixture_master_cib_unmanaged_primitive_op_disabled = """ """
fixture_clone_group_cib_managed = """ """
fixture_clone_group_cib_unmanaged_primitive = """ """
fixture_clone_group_cib_unmanaged_all_primitives = """ """
fixture_clone_group_cib_unmanaged_clone = """ """
fixture_clone_group_cib_unmanaged_everything = """ """
fixture_clone_group_cib_managed_op_enabled = """ """
fixture_clone_group_cib_unmanaged_primitive_op_disabled = """ """
fixture_clone_group_cib_unmanaged_all_primitives_op_disabled = """ """
fixture_bundle_empty_cib_managed = """ """
fixture_bundle_empty_cib_unmanaged_bundle = """ """
fixture_bundle_cib_managed = """ """
fixture_bundle_cib_unmanaged_bundle = """ """
fixture_bundle_cib_unmanaged_primitive = """ """
fixture_bundle_cib_unmanaged_both = """ """
fixture_bundle_cib_managed_op_enabled = """ """
fixture_bundle_cib_unmanaged_primitive_op_disabled = """ """
fixture_bundle_cib_unmanaged_both_op_disabled = """ """

def fixture_report_no_monitors(resource):
    return (
        severities.WARNING,
        report_codes.RESOURCE_MANAGED_NO_MONITOR_ENABLED,
        {
            "resource_id": resource,
        },
        None
    )

class UnmanagePrimitive(TestCase):
    def setUp(self):
        self.env_assist, self.config = get_env_tools(test_case=self)

    def test_nonexistent_resource(self):
        (self.config
            .runner.cib.load(resources=fixture_primitive_cib_managed)
        )
        self.env_assist.assert_raise_library_error(
            lambda: resource.unmanage(self.env_assist.get_env(), ["B"]),
            [
                fixture.report_not_found("B", "resources")
            ],
            expected_in_processor=False
        )

    def test_primitive(self):
        (self.config
            .runner.cib.load(resources=fixture_primitive_cib_managed)
            .env.push_cib(resources=fixture_primitive_cib_unmanaged)
        )
        resource.unmanage(self.env_assist.get_env(), ["A"])

    def test_primitive_unmanaged(self):
        (self.config
            .runner.cib.load(resources=fixture_primitive_cib_unmanaged)
            .env.push_cib(resources=fixture_primitive_cib_unmanaged)
        )
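        # Unmanaging an already unmanaged resource is expected to be a
        # no-op: the CIB pushed back is the one loaded
        # (fixture_primitive_cib_unmanaged on both sides) and no report
        # is asserted.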
resource.unmanage(self.env_assist.get_env(), ["A"]) class ManagePrimitive(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_nonexistent_resource(self): (self.config .runner.cib.load(resources=fixture_primitive_cib_unmanaged) ) self.env_assist.assert_raise_library_error( lambda: resource.manage(self.env_assist.get_env(), ["B"]), [ fixture.report_not_found("B", "resources") ], expected_in_processor=False ) def test_primitive(self): (self.config .runner.cib.load(resources=fixture_primitive_cib_unmanaged) .env.push_cib(resources=fixture_primitive_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A"]) def test_primitive_managed(self): (self.config .runner.cib.load(resources=fixture_primitive_cib_managed) .env.push_cib(resources=fixture_primitive_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A"]) class UnmanageGroup(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_primitive(self): (self.config .runner.cib.load(resources=fixture_group_cib_managed) .env.push_cib(resources=fixture_group_cib_unmanaged_resource) ) resource.unmanage(self.env_assist.get_env(), ["A1"]) def test_group(self): (self.config .runner.cib.load(resources=fixture_group_cib_managed) .env.push_cib( resources=fixture_group_cib_unmanaged_all_resources ) ) resource.unmanage(self.env_assist.get_env(), ["A"]) class ManageGroup(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_primitive(self): (self.config .runner.cib.load( resources=fixture_group_cib_unmanaged_all_resources ) .env.push_cib(resources=fixture_group_cib_unmanaged_resource) ) resource.manage(self.env_assist.get_env(), ["A2"]) def test_primitive_unmanaged_group(self): (self.config .runner.cib.load( resources=fixture_group_cib_unmanaged_resource_and_group ) .env.push_cib(resources=fixture_group_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A1"]) def test_group(self): (self.config .runner.cib.load( resources=fixture_group_cib_unmanaged_all_resources ) .env.push_cib(resources=fixture_group_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A"]) def test_group_unmanaged_group(self): (self.config .runner.cib.load( resources=fixture_group_cib_unmanaged_resource_and_group ) .env.push_cib(resources=fixture_group_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A"]) class UnmanageClone(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_primitive(self): (self.config .runner.cib.load(resources=fixture_clone_cib_managed) .env.push_cib(resources=fixture_clone_cib_unmanaged_primitive) ) resource.unmanage(self.env_assist.get_env(), ["A"]) def test_clone(self): (self.config .runner.cib.load(resources=fixture_clone_cib_managed) .env.push_cib(resources=fixture_clone_cib_unmanaged_primitive) ) resource.unmanage(self.env_assist.get_env(), ["A-clone"]) class ManageClone(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_primitive(self): (self.config .runner.cib.load(resources=fixture_clone_cib_unmanaged_clone) .env.push_cib(resources=fixture_clone_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A"]) def test_primitive_unmanaged_primitive(self): (self.config .runner.cib.load(resources=fixture_clone_cib_unmanaged_primitive) .env.push_cib(resources=fixture_clone_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A"]) def test_primitive_unmanaged_both(self): (self.config 
.runner.cib.load(resources=fixture_clone_cib_unmanaged_both) .env.push_cib(resources=fixture_clone_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A"]) def test_clone(self): (self.config .runner.cib.load(resources=fixture_clone_cib_unmanaged_clone) .env.push_cib(resources=fixture_clone_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A-clone"]) def test_clone_unmanaged_primitive(self): (self.config .runner.cib.load(resources=fixture_clone_cib_unmanaged_primitive) .env.push_cib(resources=fixture_clone_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A-clone"]) def test_clone_unmanaged_both(self): (self.config .runner.cib.load(resources=fixture_clone_cib_unmanaged_both) .env.push_cib(resources=fixture_clone_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A-clone"]) class UnmanageMaster(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_primitive(self): (self.config .runner.cib.load(resources=fixture_master_cib_managed) .env.push_cib(resources=fixture_master_cib_unmanaged_primitive) ) resource.unmanage(self.env_assist.get_env(), ["A"]) def test_master(self): (self.config .runner.cib.load(resources=fixture_master_cib_managed) .env.push_cib(resources=fixture_master_cib_unmanaged_primitive) ) resource.unmanage(self.env_assist.get_env(), ["A-master"]) class ManageMaster(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_primitive(self): (self.config .runner.cib.load(resources=fixture_master_cib_unmanaged_primitive) .env.push_cib(resources=fixture_master_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A"]) def test_primitive_unmanaged_master(self): (self.config .runner.cib.load(resources=fixture_master_cib_unmanaged_master) .env.push_cib(resources=fixture_master_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A"]) def test_primitive_unmanaged_both(self): (self.config .runner.cib.load(resources=fixture_master_cib_unmanaged_both) .env.push_cib(resources=fixture_master_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A"]) def test_master(self): (self.config .runner.cib.load(resources=fixture_master_cib_unmanaged_master) .env.push_cib(resources=fixture_master_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A-master"]) def test_master_unmanaged_primitive(self): (self.config .runner.cib.load(resources=fixture_master_cib_unmanaged_primitive) .env.push_cib(resources=fixture_master_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A-master"]) def test_master_unmanaged_both(self): (self.config .runner.cib.load(resources=fixture_master_cib_unmanaged_both) .env.push_cib(resources=fixture_master_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A-master"]) class UnmanageClonedGroup(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_primitive(self): (self.config .runner.cib.load(resources=fixture_clone_group_cib_managed) .env.push_cib( resources=fixture_clone_group_cib_unmanaged_primitive ) ) resource.unmanage(self.env_assist.get_env(), ["A1"]) def test_group(self): (self.config .runner.cib.load(resources=fixture_clone_group_cib_managed) .env.push_cib( resources=fixture_clone_group_cib_unmanaged_all_primitives ) ) resource.unmanage(self.env_assist.get_env(), ["A"]) def test_clone(self): (self.config .runner.cib.load(resources=fixture_clone_group_cib_managed) .env.push_cib( resources=fixture_clone_group_cib_unmanaged_all_primitives ) ) 
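        # Judging by the fixtures pushed in this class, unmanage always
        # lands on the member primitives rather than on the wrapping
        # elements, whichever id is given:
        #
        #   resource.unmanage(env, ["A1"])      # -> ..._unmanaged_primitive
        #   resource.unmanage(env, ["A"])       # -> ..._unmanaged_all_primitives
        #   resource.unmanage(env, ["A-clone"]) # -> ..._unmanaged_all_primitives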
resource.unmanage(self.env_assist.get_env(), ["A-clone"]) class ManageClonedGroup(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_primitive(self): (self.config .runner.cib.load( resources=fixture_clone_group_cib_unmanaged_primitive ) .env.push_cib(resources=fixture_clone_group_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A1"]) def test_primitive_unmanaged_all(self): (self.config .runner.cib.load( resources=fixture_clone_group_cib_unmanaged_everything ) .env.push_cib( resources=fixture_clone_group_cib_unmanaged_primitive ) ) resource.manage(self.env_assist.get_env(), ["A2"]) def test_group(self): (self.config .runner.cib.load( resources=fixture_clone_group_cib_unmanaged_all_primitives ) .env.push_cib(resources=fixture_clone_group_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A"]) def test_group_unmanaged_all(self): (self.config .runner.cib.load( resources=fixture_clone_group_cib_unmanaged_everything ) .env.push_cib(resources=fixture_clone_group_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A"]) def test_clone(self): (self.config .runner.cib.load(resources=fixture_clone_group_cib_unmanaged_clone) .env.push_cib(resources=fixture_clone_group_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A-clone"]) def test_clone_unmanaged_all(self): (self.config .runner.cib.load( resources=fixture_clone_group_cib_unmanaged_everything ) .env.push_cib(resources=fixture_clone_group_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A-clone"]) class UnmanageBundle(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_primitive(self): (self.config .runner.cib.load(resources=fixture_bundle_cib_managed) .env.push_cib(resources=fixture_bundle_cib_unmanaged_primitive) ) resource.unmanage(self.env_assist.get_env(), ["A"]) def test_bundle(self): (self.config .runner.cib.load(resources=fixture_bundle_cib_managed) .env.push_cib(resources=fixture_bundle_cib_unmanaged_both) ) resource.unmanage(self.env_assist.get_env(), ["A-bundle"]) def test_bundle_empty(self): (self.config .runner.cib.load(resources=fixture_bundle_empty_cib_managed) .env.push_cib( resources=fixture_bundle_empty_cib_unmanaged_bundle ) ) resource.unmanage(self.env_assist.get_env(), ["A-bundle"]) class ManageBundle(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_primitive(self): (self.config .runner.cib.load(resources=fixture_bundle_cib_unmanaged_primitive) .env.push_cib(resources=fixture_bundle_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A"]) def test_primitive_unmanaged_bundle(self): (self.config .runner.cib.load(resources=fixture_bundle_cib_unmanaged_bundle) .env.push_cib(resources=fixture_bundle_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A"]) def test_primitive_unmanaged_both(self): (self.config .runner.cib.load(resources=fixture_bundle_cib_unmanaged_both) .env.push_cib(resources=fixture_bundle_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A"]) def test_bundle(self): (self.config .runner.cib.load(resources=fixture_bundle_cib_unmanaged_bundle) .env.push_cib(resources=fixture_bundle_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A-bundle"]) def test_bundle_unmanaged_primitive(self): (self.config .runner.cib.load(resources=fixture_bundle_cib_unmanaged_primitive) .env.push_cib(resources=fixture_bundle_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A-bundle"]) def 
test_bundle_unmanaged_both(self): (self.config .runner.cib.load(resources=fixture_bundle_cib_unmanaged_both) .env.push_cib(resources=fixture_bundle_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A-bundle"]) def test_bundle_empty(self): (self.config .runner.cib.load( resources=fixture_bundle_empty_cib_unmanaged_bundle ) .env.push_cib(resources=fixture_bundle_empty_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A-bundle"]) class MoreResources(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) fixture_cib_managed = """ """ fixture_cib_unmanaged = """ """ def test_success_unmanage(self): fixture_cib_unmanaged = """ """ (self.config .runner.cib.load(resources=self.fixture_cib_managed) .env.push_cib(resources=fixture_cib_unmanaged) ) resource.unmanage(self.env_assist.get_env(), ["A", "C"]) def test_success_manage(self): fixture_cib_managed = """ """ (self.config .runner.cib.load(resources=self.fixture_cib_unmanaged) .env.push_cib(resources=fixture_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A", "C"]) def test_bad_resource_unmanage(self): (self.config .runner.cib.load(resources=self.fixture_cib_managed) ) self.env_assist.assert_raise_library_error( lambda: resource.unmanage(self.env_assist.get_env(), ["B", "X", "Y", "A"]), [ fixture.report_not_found("X", "resources"), fixture.report_not_found("Y", "resources"), ], expected_in_processor=False ) def test_bad_resource_enable(self): (self.config .runner.cib.load(resources=self.fixture_cib_unmanaged) ) self.env_assist.assert_raise_library_error( lambda: resource.manage(self.env_assist.get_env(), ["B", "X", "Y", "A"]), [ fixture.report_not_found("X", "resources"), fixture.report_not_found("Y", "resources"), ], expected_in_processor=False ) class WithMonitor(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_unmanage_noop(self): (self.config .runner.cib.load(resources=fixture_primitive_cib_managed) .env.push_cib(resources=fixture_primitive_cib_unmanaged) ) resource.unmanage(self.env_assist.get_env(), ["A"], True) def test_manage_noop(self): (self.config .runner.cib.load(resources=fixture_primitive_cib_unmanaged) .env.push_cib(resources=fixture_primitive_cib_managed) ) resource.manage(self.env_assist.get_env(), ["A"], True) def test_unmanage(self): (self.config .runner.cib.load(resources=fixture_primitive_cib_managed_op_enabled) .env.push_cib( resources=fixture_primitive_cib_unmanaged_op_disabled ) ) resource.unmanage(self.env_assist.get_env(), ["A"], True) def test_manage(self): (self.config .runner.cib.load( resources=fixture_primitive_cib_unmanaged_op_disabled ) .env.push_cib(resources=fixture_primitive_cib_managed_op_enabled) ) resource.manage(self.env_assist.get_env(), ["A"], True) def test_unmanage_enabled_monitors(self): (self.config .runner.cib.load(resources=fixture_primitive_cib_managed_op_enabled) .env.push_cib( resources=fixture_primitive_cib_unmanaged_op_enabled ) ) resource.unmanage(self.env_assist.get_env(), ["A"], False) def test_manage_disabled_monitors(self): (self.config .runner.cib.load( resources=fixture_primitive_cib_unmanaged_op_disabled ) .env.push_cib( resources=fixture_primitive_cib_managed_op_disabled ) ) resource.manage(self.env_assist.get_env(), ["A"], False) self.env_assist.assert_reports([ fixture_report_no_monitors("A"), ]) def test_unmanage_clone(self): (self.config .runner.cib.load(resources=fixture_clone_cib_managed_op_enabled) .env.push_cib( 
                resources=fixture_clone_cib_unmanaged_primitive_op_disabled
            )
        )
        resource.unmanage(self.env_assist.get_env(), ["A-clone"], True)

    def test_unmanage_in_clone(self):
        (self.config
            .runner.cib.load(resources=fixture_clone_cib_managed_op_enabled)
            .env.push_cib(
                resources=fixture_clone_cib_unmanaged_primitive_op_disabled
            )
        )
        resource.unmanage(self.env_assist.get_env(), ["A"], True)

    def test_unmanage_master(self):
        (self.config
            .runner.cib.load(resources=fixture_master_cib_managed_op_enabled)
            .env.push_cib(
                resources=fixture_master_cib_unmanaged_primitive_op_disabled
            )
        )
        resource.unmanage(self.env_assist.get_env(), ["A-master"], True)

    def test_unmanage_in_master(self):
        (self.config
            .runner.cib.load(resources=fixture_master_cib_managed_op_enabled)
            .env.push_cib(
                resources=fixture_master_cib_unmanaged_primitive_op_disabled
            )
        )
        resource.unmanage(self.env_assist.get_env(), ["A"], True)

    def test_unmanage_clone_with_group(self):
        (self.config
            .runner.cib.load(
                resources=fixture_clone_group_cib_managed_op_enabled
            )
            .env.push_cib(resources=
                fixture_clone_group_cib_unmanaged_all_primitives_op_disabled
            )
        )
        resource.unmanage(self.env_assist.get_env(), ["A-clone"], True)

    def test_unmanage_group_in_clone(self):
        (self.config
            .runner.cib.load(
                resources=fixture_clone_group_cib_managed_op_enabled
            )
            .env.push_cib(resources=
                fixture_clone_group_cib_unmanaged_all_primitives_op_disabled
            )
        )
        resource.unmanage(self.env_assist.get_env(), ["A"], True)

    def test_unmanage_in_cloned_group(self):
        (self.config
            .runner.cib.load(
                resources=fixture_clone_group_cib_managed_op_enabled
            )
            .env.push_cib(resources=
                fixture_clone_group_cib_unmanaged_primitive_op_disabled
            )
        )
        resource.unmanage(self.env_assist.get_env(), ["A1"], True)

    def test_unmanage_bundle(self):
        (self.config
            .runner.cib.load(resources=fixture_bundle_cib_managed_op_enabled)
            .env.push_cib(
                resources=fixture_bundle_cib_unmanaged_both_op_disabled
            )
        )
        resource.unmanage(self.env_assist.get_env(), ["A-bundle"], True)

    def test_unmanage_in_bundle(self):
        (self.config
            .runner.cib.load(resources=fixture_bundle_cib_managed_op_enabled)
            .env.push_cib(
                resources=fixture_bundle_cib_unmanaged_primitive_op_disabled
            )
        )
        resource.unmanage(self.env_assist.get_env(), ["A"], True)

    def test_unmanage_bundle_empty(self):
        (self.config
            .runner.cib.load(resources=fixture_bundle_empty_cib_managed)
            .env.push_cib(
                resources=fixture_bundle_empty_cib_unmanaged_bundle
            )
        )
        resource.unmanage(self.env_assist.get_env(), ["A-bundle"], True)

pcs-0.9.164/pcs/lib/commands/test/sbd/
pcs-0.9.164/pcs/lib/commands/test/sbd/__init__.py
pcs-0.9.164/pcs/lib/commands/test/sbd/test_disable_sbd.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.common import report_codes
from pcs.lib.commands.sbd import disable_sbd
from pcs.test.tools import fixture
from pcs.test.tools.command_env import get_env_tools
from pcs.test.tools.pcs_unittest import TestCase

class DisableSbd(TestCase):
    def setUp(self):
        self.env_assist, self.config = get_env_tools(self)
        self.corosync_conf_name = "corosync-3nodes.conf"
        self.node_list = ["rh7-1", "rh7-2", "rh7-3"]

    def test_success(self):
        self.config.runner.corosync.version()
        self.config.corosync_conf.load(filename=self.corosync_conf_name)
        self.config.http.host.check_auth(node_labels=self.node_list)
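        # The happy path fans out over the cluster in three steps: auth
        # is checked on every node, stonith-watchdog-timeout is zeroed
        # through a single node (node_labels=self.node_list[:1] -- it is
        # a cluster-wide property, so one node is assumed to suffice),
        # and the sbd service is then disabled on all nodes.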
self.config.http.pcmk.set_stonith_watchdog_timeout_to_zero( node_labels=self.node_list[:1] ) self.config.http.sbd.disable_sbd(node_labels=self.node_list) disable_sbd(self.env_assist.get_env()) self.env_assist.assert_reports( [fixture.info(report_codes.SBD_DISABLING_STARTED)] + [ fixture.info( report_codes.SERVICE_DISABLE_SUCCESS, service="sbd", node=node, instance=None ) for node in self.node_list ] + [ fixture.warn( report_codes.CLUSTER_RESTART_REQUIRED_TO_APPLY_CHANGES ) ] ) def test_node_offline(self): err_msg = "Failed connect to rh7-3:2224; No route to host" self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.config.http.host.check_auth( communication_list=[ {"label": "rh7-1"}, {"label": "rh7-2"}, { "label": "rh7-3", "was_connected": False, "errno": 7, "error_msg": err_msg, } ] ) self.env_assist.assert_raise_library_error( lambda: disable_sbd(self.env_assist.get_env()), [], expected_in_processor=False ) self.env_assist.assert_reports([ fixture.error( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, force_code=report_codes.SKIP_OFFLINE_NODES, node="rh7-3", reason=err_msg, command="remote/check_auth" ) ]) def test_success_node_offline_skip_offline(self): err_msg = "Failed connect to rh7-3:2224; No route to host" online_nodes_list = ["rh7-2", "rh7-3"] self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.config.http.host.check_auth( communication_list=[ { "label": "rh7-1", "was_connected": False, "errno": 7, "error_msg": err_msg, }, {"label": "rh7-2"}, {"label": "rh7-3"} ] ) self.config.http.pcmk.set_stonith_watchdog_timeout_to_zero( node_labels=online_nodes_list[:1] ) self.config.http.sbd.disable_sbd(node_labels=online_nodes_list) disable_sbd(self.env_assist.get_env(), ignore_offline_nodes=True) self.env_assist.assert_reports( [fixture.warn(report_codes.OMITTING_NODE, node="rh7-1")] + [fixture.info(report_codes.SBD_DISABLING_STARTED)] + [ fixture.info( report_codes.SERVICE_DISABLE_SUCCESS, service="sbd", node=node, instance=None ) for node in online_nodes_list ] + [ fixture.warn( report_codes.CLUSTER_RESTART_REQUIRED_TO_APPLY_CHANGES ) ] ) def test_set_stonith_watchdog_timeout_fails_on_some_nodes(self): err_msg = "Error" self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.config.http.host.check_auth(node_labels=self.node_list) self.config.http.pcmk.set_stonith_watchdog_timeout_to_zero( communication_list=[ [{ "label": "rh7-1", "was_connected": False, "errno": 7, "error_msg": err_msg, }], [{ "label": "rh7-2", "response_code": 400, "output": "FAILED", }], [{"label": "rh7-3"}] ] ) self.config.http.sbd.disable_sbd(node_labels=self.node_list) disable_sbd(self.env_assist.get_env()) self.env_assist.assert_reports( [ fixture.warn( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, node="rh7-1", reason=err_msg, command="remote/set_stonith_watchdog_timeout_to_zero" ), fixture.warn( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, node="rh7-2", reason="FAILED", command="remote/set_stonith_watchdog_timeout_to_zero" ) ] + [fixture.info(report_codes.SBD_DISABLING_STARTED)] + [ fixture.info( report_codes.SERVICE_DISABLE_SUCCESS, service="sbd", node=node, instance=None ) for node in self.node_list ] + [ fixture.warn( report_codes.CLUSTER_RESTART_REQUIRED_TO_APPLY_CHANGES ) ] ) def test_set_stonith_watchdog_timeout_fails_on_all_nodes(self): err_msg = "Error" self.config.runner.corosync.version() 
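        # communication_list entries model per-node HTTP outcomes: a
        # dict with was_connected=False plus an errno simulates a
        # transport failure, while response_code=400 plus output
        # simulates a node that answered and refused. The
        # unreachable-node shape used in the tests above:
        #
        #   {
        #       "label": "rh7-1",
        #       "was_connected": False,
        #       "errno": 7,
        #       "error_msg": err_msg,
        #   }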
self.config.corosync_conf.load(filename=self.corosync_conf_name) self.config.http.host.check_auth(node_labels=self.node_list) self.config.http.pcmk.set_stonith_watchdog_timeout_to_zero( communication_list=[ [dict(label=node, response_code=400, output=err_msg)] for node in self.node_list ] ) self.env_assist.assert_raise_library_error( lambda: disable_sbd(self.env_assist.get_env()), [], ) self.env_assist.assert_reports( [ fixture.warn( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, node=node, reason=err_msg, command="remote/set_stonith_watchdog_timeout_to_zero" ) for node in self.node_list ] + [ fixture.error( report_codes.UNABLE_TO_PERFORM_OPERATION_ON_ANY_NODE, ) ] ) def test_disable_failed(self): err_msg = "Error" self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.config.http.host.check_auth(node_labels=self.node_list) self.config.http.pcmk.set_stonith_watchdog_timeout_to_zero( node_labels=self.node_list[:1] ) self.config.http.sbd.disable_sbd( communication_list=[ {"label": "rh7-1"}, {"label": "rh7-2"}, { "label": "rh7-3", "response_code": 400, "output": err_msg } ] ) self.env_assist.assert_raise_library_error( lambda: disable_sbd(self.env_assist.get_env()), [], ) self.env_assist.assert_reports( [fixture.info(report_codes.SBD_DISABLING_STARTED)] + [ fixture.info( report_codes.SERVICE_DISABLE_SUCCESS, service="sbd", node=node, instance=None ) for node in self.node_list[:2] ] + [ fixture.error( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, node="rh7-3", reason=err_msg, command="remote/sbd_disable" ) ] ) pcs-0.9.164/pcs/lib/commands/test/sbd/test_enable_sbd.py000066400000000000000000001646201326265502500230510ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import json from pcs import settings from pcs.common import report_codes from pcs.lib.commands.sbd import enable_sbd, ALLOWED_SBD_OPTION_LIST from pcs.test.tools import fixture from pcs.test.tools.command_env import get_env_tools from pcs.test.tools.pcs_unittest import TestCase, mock from pcs.test.tools.misc import get_test_resource, outdent from pcs.lib.corosync.config_parser import parse_string def _check_sbd_comm_success_fixture(node, watchdog, device_list): return dict( label=node, output=json.dumps({ "sbd": { "installed": True, }, "watchdog": { "exist": True, "path": watchdog, }, "device_list": [ dict(path=dev, exist=True, block_device=True) for dev in device_list ], }), param_list=[ ("watchdog", watchdog), ("device_list", json.dumps(device_list)), ] ) def _get_corosync_conf_text_with_atb(orig_cfg_file): corosync_conf = parse_string(open(get_test_resource(orig_cfg_file)).read()) for quorum in corosync_conf.get_sections(name="quorum"): quorum.del_attributes_by_name("two_node") quorum.set_attribute("auto_tie_breaker", 1) return corosync_conf.export() def _sbd_enable_successful_report_list_fixture( online_node_list, skipped_offline_node_list=(), err_msg="err", atb_set=False ): report_list = ( [ fixture.warn(report_codes.OMITTING_NODE, node=node) for node in skipped_offline_node_list ] + [fixture.info(report_codes.SBD_CHECK_STARTED)] + [ fixture.info(report_codes.SBD_CHECK_SUCCESS, node=node) for node in online_node_list ] ) if atb_set: report_list += ( [ fixture.warn(report_codes.SBD_REQUIRES_ATB), fixture.info(report_codes.COROSYNC_NOT_RUNNING_CHECK_STARTED), ] + [ fixture.info( report_codes.COROSYNC_NOT_RUNNING_ON_NODE, node=node ) for node in online_node_list ] + [ fixture.warn( 
report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, node=node, reason=err_msg, command="remote/status", ) for node in skipped_offline_node_list ] + [ fixture.warn( report_codes.COROSYNC_NOT_RUNNING_CHECK_NODE_ERROR, node=node, ) for node in skipped_offline_node_list ] + [fixture.info(report_codes.COROSYNC_CONFIG_DISTRIBUTION_STARTED)] + [ fixture.warn( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, node=node, reason=err_msg, command="remote/set_corosync_conf", ) for node in skipped_offline_node_list ] + [ fixture.warn( report_codes.COROSYNC_CONFIG_DISTRIBUTION_NODE_ERROR, node=node, ) for node in skipped_offline_node_list ] + [ fixture.info( report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, node=node ) for node in online_node_list ] ) return ( report_list + [fixture.info(report_codes.SBD_CONFIG_DISTRIBUTION_STARTED)] + [ fixture.info(report_codes.SBD_CONFIG_ACCEPTED_BY_NODE, node=node) for node in online_node_list ] + [fixture.info(report_codes.SBD_ENABLING_STARTED)] + [ fixture.info( report_codes.SERVICE_ENABLE_SUCCESS, service="sbd", node=node, instance=None ) for node in online_node_list ] + [fixture.warn(report_codes.CLUSTER_RESTART_REQUIRED_TO_APPLY_CHANGES)] ) class OddNumOfNodesSuccess(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(self) self.corosync_conf_name = "corosync-3nodes.conf" self.node_list = ["rh7-1", "rh7-2", "rh7-3"] self.sbd_options = { "SBD_WATCHDOG_TIMEOUT": "10", "SBD_STARTMODE": "clean", } self.sbd_config_template = outdent("""\ # This file has been generated by pcs. SBD_DELAY_START=no {devices}SBD_OPTS="-n {node_name}" SBD_PACEMAKER=yes SBD_STARTMODE=clean SBD_WATCHDOG_DEV={watchdog} SBD_WATCHDOG_TIMEOUT=10 """) self.watchdog_dict = { node: "/dev/watchdog-{0}".format(node) for node in self.node_list } self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.config.http.host.check_auth(node_labels=self.node_list) def test_with_devices(self): device_dict = { node: ["/dev/{0}-sbd{1}".format(node, j) for j in range(i)] for i, node in enumerate(self.node_list, start=1) } config_generator = lambda node: self.sbd_config_template.format( node_name=node, watchdog=self.watchdog_dict[node], devices='SBD_DEVICE="{0}"\n'.format(";".join(device_dict[node])), ) self.config.http.sbd.check_sbd( communication_list=[ _check_sbd_comm_success_fixture( node, self.watchdog_dict[node], device_dict[node] ) for node in self.node_list ] ) self.config.http.sbd.set_sbd_config( config_generator=config_generator, node_labels=self.node_list, ) self.config.http.pcmk.remove_stonith_watchdog_timeout( node_labels=[self.node_list[0]] ) self.config.http.sbd.enable_sbd(node_labels=self.node_list) enable_sbd( self.env_assist.get_env(), default_watchdog=None, watchdog_dict=self.watchdog_dict, sbd_options=self.sbd_options, default_device_list=[], node_device_dict=device_dict, ) self.env_assist.assert_reports( _sbd_enable_successful_report_list_fixture(self.node_list) ) def test_no_device(self): config_generator = lambda node: self.sbd_config_template.format( node_name=node, watchdog=self.watchdog_dict[node], devices="", ) self.config.http.sbd.check_sbd( communication_list=[ _check_sbd_comm_success_fixture( node, self.watchdog_dict[node], [] ) for node in self.node_list ] ) self.config.corosync_conf.load( filename=self.corosync_conf_name, name="corosync_conf.load2", ) self.config.http.sbd.set_sbd_config( config_generator=config_generator, node_labels=self.node_list, ) 
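# The sbd_config_template above is rendered once per node: the node name is
# baked into SBD_OPTS and the optional SBD_DEVICE line is emitted only when
# the node actually has devices, joined by ";". A standalone sketch of the
# same rendering, using a simplified template (illustrative only, not the
# code pcs itself uses to write the sbd sysconfig file):
TEMPLATE = (
    "SBD_DELAY_START=no\n"
    '{devices}SBD_OPTS="-n {node_name}"\n'
    "SBD_WATCHDOG_DEV={watchdog}\n"
)

def render_sbd_config(node_name, watchdog, device_list):
    devices = ""
    if device_list:
        # Multiple devices are packed into one quoted, ";"-separated value.
        devices = 'SBD_DEVICE="{0}"\n'.format(";".join(device_list))
    return TEMPLATE.format(
        devices=devices, node_name=node_name, watchdog=watchdog
    )

print(render_sbd_config("rh7-1", "/dev/watchdog-rh7-1", ["/dev/rh7-1-sbd0"]))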
self.config.http.pcmk.remove_stonith_watchdog_timeout( node_labels=[self.node_list[0]] ) self.config.http.sbd.enable_sbd(node_labels=self.node_list) enable_sbd( self.env_assist.get_env(), default_watchdog=None, watchdog_dict=self.watchdog_dict, sbd_options=self.sbd_options, ) self.env_assist.assert_reports( _sbd_enable_successful_report_list_fixture(self.node_list) ) class OddNumOfNodesDefaultsSuccess(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(self) self.corosync_conf_name = "corosync-3nodes.conf" self.node_list = ["rh7-1", "rh7-2", "rh7-3"] self.sbd_config_template = outdent("""\ # This file has been generated by pcs. SBD_DELAY_START=no {devices}SBD_OPTS="-n {node_name}" SBD_PACEMAKER=yes SBD_STARTMODE=always SBD_WATCHDOG_DEV=/dev/watchdog SBD_WATCHDOG_TIMEOUT=5 """) self.watchdog = "/dev/watchdog" self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.config.http.host.check_auth(node_labels=self.node_list) def test_with_device(self): device_list = ["/dev/sdb"] config_generator = lambda node: self.sbd_config_template.format( node_name=node, devices='SBD_DEVICE="{0}"\n'.format(";".join(device_list)), ) self.config.http.sbd.check_sbd( communication_list=[ _check_sbd_comm_success_fixture( node, self.watchdog, device_list ) for node in self.node_list ] ) self.config.http.sbd.set_sbd_config( config_generator=config_generator, node_labels=self.node_list, ) self.config.http.pcmk.remove_stonith_watchdog_timeout( node_labels=[self.node_list[0]] ) self.config.http.sbd.enable_sbd(node_labels=self.node_list) enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, default_device_list=device_list, ) self.env_assist.assert_reports( _sbd_enable_successful_report_list_fixture(self.node_list) ) def test_no_device(self): config_generator = lambda node: self.sbd_config_template.format( node_name=node, devices="", ) self.config.http.sbd.check_sbd( communication_list=[ _check_sbd_comm_success_fixture(node, self.watchdog, []) for node in self.node_list ] ) self.config.corosync_conf.load( filename=self.corosync_conf_name, name="corosync_conf.load2", ) self.config.http.sbd.set_sbd_config( config_generator=config_generator, node_labels=self.node_list, ) self.config.http.pcmk.remove_stonith_watchdog_timeout( node_labels=[self.node_list[0]] ) self.config.http.sbd.enable_sbd(node_labels=self.node_list) enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, ) self.env_assist.assert_reports( _sbd_enable_successful_report_list_fixture(self.node_list) ) class EvenNumOfNodes(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(self) self.corosync_conf_name = "corosync.conf" self.node_list = ["rh7-1", "rh7-2"] self.sbd_config_template = outdent("""\ # This file has been generated by pcs. 
SBD_DELAY_START=no {devices}SBD_OPTS="-n {node_name}" SBD_PACEMAKER=yes SBD_STARTMODE=always SBD_WATCHDOG_DEV=/dev/watchdog SBD_WATCHDOG_TIMEOUT=5 """) self.watchdog = "/dev/watchdog" self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.config.http.host.check_auth(node_labels=self.node_list) def test_with_device(self): device_list = ["/dev/sdb"] config_generator = lambda node: self.sbd_config_template.format( node_name=node, devices='SBD_DEVICE="{0}"\n'.format(";".join(device_list)), ) self.config.http.sbd.check_sbd( communication_list=[ _check_sbd_comm_success_fixture( node, self.watchdog, device_list ) for node in self.node_list ] ) self.config.http.sbd.set_sbd_config( config_generator=config_generator, node_labels=self.node_list, ) self.config.http.pcmk.remove_stonith_watchdog_timeout( node_labels=[self.node_list[0]] ) self.config.http.sbd.enable_sbd(node_labels=self.node_list) enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, default_device_list=device_list, ) self.env_assist.assert_reports( _sbd_enable_successful_report_list_fixture(self.node_list) ) @mock.patch("pcs.lib.external.is_systemctl", lambda: True) def test_no_device(self): config_generator = lambda node: self.sbd_config_template.format( node_name=node, devices="", ) self.config.http.sbd.check_sbd( communication_list=[ _check_sbd_comm_success_fixture(node, self.watchdog, []) for node in self.node_list ] ) self.config.corosync_conf.load( filename=self.corosync_conf_name, name="corosync_conf.load2", ) self.config.http.corosync.check_corosync_offline( node_labels=self.node_list ) self.config.http.corosync.set_corosync_conf( _get_corosync_conf_text_with_atb(self.corosync_conf_name), node_labels=self.node_list, ) self.config.runner.systemctl.is_active("corosync", is_active=False) self.config.http.sbd.set_sbd_config( config_generator=config_generator, node_labels=self.node_list, ) self.config.http.pcmk.remove_stonith_watchdog_timeout( node_labels=[self.node_list[0]] ) self.config.http.sbd.enable_sbd(node_labels=self.node_list) enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, ) self.env_assist.assert_reports( _sbd_enable_successful_report_list_fixture( self.node_list, atb_set=True, ) ) def test_no_device_auto_tie_breaker_enabled(self): config_generator = lambda node: self.sbd_config_template.format( node_name=node, devices="", ) self.config.http.sbd.check_sbd( communication_list=[ _check_sbd_comm_success_fixture(node, self.watchdog, []) for node in self.node_list ] ) self.config.corosync_conf.load( filename=self.corosync_conf_name, name="corosync_conf.load2", auto_tie_breaker=True, ) self.config.http.sbd.set_sbd_config( config_generator=config_generator, node_labels=self.node_list, ) self.config.http.pcmk.remove_stonith_watchdog_timeout( node_labels=[self.node_list[0]] ) self.config.http.sbd.enable_sbd(node_labels=self.node_list) enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, ) self.env_assist.assert_reports( _sbd_enable_successful_report_list_fixture(self.node_list) ) def test_no_device_with_qdevice(self): config_generator = lambda node: self.sbd_config_template.format( node_name=node, devices="", ) self.config.http.sbd.check_sbd( communication_list=[ _check_sbd_comm_success_fixture(node, self.watchdog, []) for node in self.node_list ] ) self.config.corosync_conf.load( filename="corosync-qdevice.conf", 
name="corosync_conf.load2", ) self.config.http.sbd.set_sbd_config( config_generator=config_generator, node_labels=self.node_list, ) self.config.http.pcmk.remove_stonith_watchdog_timeout( node_labels=[self.node_list[0]] ) self.config.http.sbd.enable_sbd(node_labels=self.node_list) enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, ) self.env_assist.assert_reports( _sbd_enable_successful_report_list_fixture(self.node_list) ) class OfflineNodes(TestCase): #pylint: disable=too-many-instance-attributes def setUp(self): self.env_assist, self.config = get_env_tools(self) self.corosync_conf_name = "corosync.conf" node_list = ["rh7-1", "rh7-2"] self.online_node_list = node_list[:-1] self.offline_node_list = node_list[-1:] self.watchdog = "/dev/watchdog" self.err_msg = "error msg" self.sbd_config_generator = outdent("""\ # This file has been generated by pcs. SBD_DELAY_START=no SBD_OPTS="-n {0}" SBD_PACEMAKER=yes SBD_STARTMODE=always SBD_WATCHDOG_DEV=/dev/watchdog SBD_WATCHDOG_TIMEOUT=5 """).format self.offline_communication_list = ( [dict(label=node) for node in self.online_node_list] + [ dict( label=node, was_connected=False, errno=1, error_msg=self.err_msg, ) for node in self.offline_node_list ] ) self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.config.http.host.check_auth( communication_list=self.offline_communication_list ) def test_no_ignore_offline_nodes(self): self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog=None, watchdog_dict={}, sbd_options={}, ), [], ) self.env_assist.assert_reports( [ fixture.error( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, force_code=report_codes.SKIP_OFFLINE_NODES, node=node, command="remote/check_auth", reason=self.err_msg, ) for node in self.offline_node_list ] ) def test_ignore_offline_nodes(self): self.config.http.sbd.check_sbd( communication_list=[ _check_sbd_comm_success_fixture(node, self.watchdog, []) for node in self.online_node_list ] ) self.config.corosync_conf.load( filename="corosync-qdevice.conf", name="corosync_conf.load2", ) self.config.http.sbd.set_sbd_config( config_generator=self.sbd_config_generator, node_labels=self.online_node_list, ) self.config.http.pcmk.remove_stonith_watchdog_timeout( node_labels=[self.online_node_list[0]] ) self.config.http.sbd.enable_sbd(node_labels=self.online_node_list) enable_sbd( self.env_assist.get_env(), default_watchdog=None, watchdog_dict={}, sbd_options={}, ignore_offline_nodes=True, ) self.env_assist.assert_reports( [ fixture.warn( report_codes.OMITTING_NODE, node=node, ) for node in self.offline_node_list ] + _sbd_enable_successful_report_list_fixture(self.online_node_list) ) @mock.patch("pcs.lib.external.is_systemctl", lambda: True) def test_ignore_offline_nodes_atb_needed(self): self.config.http.sbd.check_sbd( communication_list=[ _check_sbd_comm_success_fixture(node, self.watchdog, []) for node in self.online_node_list ] ) self.config.corosync_conf.load( filename=self.corosync_conf_name, name="corosync_conf.load2", ) self.config.http.corosync.check_corosync_offline( communication_list=self.offline_communication_list ) self.config.http.corosync.set_corosync_conf( _get_corosync_conf_text_with_atb(self.corosync_conf_name), communication_list=self.offline_communication_list, ) self.config.runner.systemctl.is_active("corosync", is_active=False) self.config.http.sbd.set_sbd_config( config_generator=self.sbd_config_generator, 
node_labels=self.online_node_list, ) self.config.http.pcmk.remove_stonith_watchdog_timeout( node_labels=[self.online_node_list[0]] ) self.config.http.sbd.enable_sbd(node_labels=self.online_node_list) enable_sbd( self.env_assist.get_env(), default_watchdog=None, watchdog_dict={}, sbd_options={}, ignore_offline_nodes=True, ) self.env_assist.assert_reports( _sbd_enable_successful_report_list_fixture( self.online_node_list, skipped_offline_node_list=self.offline_node_list, err_msg=self.err_msg, atb_set=True, ) ) class Validations(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(self) self.corosync_conf_name = "corosync.conf" self.node_list = ["rh7-1", "rh7-2"] self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) def test_non_existing_node_in_watchdogs(self): unknown_node = "unknown_node" self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog=None, watchdog_dict={ node: "/dev/watchdog" for node in (self.node_list + [unknown_node]) }, sbd_options={}, ), [ fixture.error( report_codes.NODE_NOT_FOUND, node=unknown_node, searched_types=[], ) ], ) self.env_assist.assert_reports([]) def test_non_existing_node_in_devices(self): unknown_node = "unknown_node" self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog="/dev/watchdog", watchdog_dict={}, sbd_options={}, default_device_list="/device", node_device_dict={ node: ["/device"] for node in (self.node_list + [unknown_node]) } ), [ fixture.error( report_codes.NODE_NOT_FOUND, node=unknown_node, searched_types=[], ) ], ) self.env_assist.assert_reports([]) def test_watchdog_not_abs_path(self): self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog="wd1", watchdog_dict={self.node_list[0]: "wd2"}, sbd_options={}, ), [ fixture.error( report_codes.WATCHDOG_INVALID, watchdog=w, ) for w in ["wd1", "wd2"] ], ) self.env_assist.assert_reports([]) def test_device_not_abs_path(self): self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog="/dev/watchdog", watchdog_dict={}, sbd_options={}, default_device_list=["device1"], node_device_dict={self.node_list[0]: ["device2"]} ), [ fixture.error( report_codes.SBD_DEVICE_PATH_NOT_ABSOLUTE, node=node, device=dev, ) for node, dev in [ (self.node_list[0], "device2"), (self.node_list[1], "device1"), ] ], ) self.env_assist.assert_reports([]) def test_no_device_for_node(self): self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog="/dev/watchdog", watchdog_dict={}, sbd_options={}, default_device_list=[], node_device_dict={self.node_list[0]: ["/dev/device1"]} ), [ fixture.error( report_codes.SBD_NO_DEVICE_FOR_NODE, node=self.node_list[1], ) ], ) self.env_assist.assert_reports([]) def test_too_many_devices(self): max_dev_num = settings.sbd_max_device_num dev_list = ["/dev/dev{0}".format(i) for i in range(max_dev_num + 1)] self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog="/dev/watchdog", watchdog_dict={}, sbd_options={}, default_device_list=["/dev/dev1"], node_device_dict={self.node_list[0]: dev_list} ), [ fixture.error( report_codes.SBD_TOO_MANY_DEVICES_FOR_NODE, node=self.node_list[0], device_list=dev_list, max_devices=max_dev_num, ) ], ) self.env_assist.assert_reports([]) def test_unknown_sbd_opts(self): 
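# This test (together with test_unknown_sbd_opts_allowed below) pins down two
# layers of option validation: unknown SBD_* options are errors that can be
# downgraded to warnings via allow_unknown_opts (FORCE_OPTIONS), while options
# pcs manages itself, such as SBD_WATCHDOG_DEV, are rejected unconditionally.
# A toy version of that rule (names and lists here are illustrative, not the
# real ALLOWED_SBD_OPTION_LIST):
ALLOWED = ["SBD_DELAY_START", "SBD_STARTMODE", "SBD_WATCHDOG_TIMEOUT"]
UNFORCEABLE = ["SBD_WATCHDOG_DEV", "SBD_OPTS", "SBD_PACEMAKER", "SBD_DEVICE"]

def validate_sbd_options(options, allow_unknown=False):
    # Returns (errors, warnings) as lists of offending option names.
    errors, warnings = [], []
    for name in sorted(options):
        if name in ALLOWED:
            continue
        if name in UNFORCEABLE:
            errors.append(name)      # managed by pcs: never forceable
        elif allow_unknown:
            warnings.append(name)    # forced: downgraded to a warning
        else:
            errors.append(name)      # forceable error
    return errors, warnings

assert validate_sbd_options(
    {"UNKNOWN_OPT1": 1, "SBD_WATCHDOG_DEV": "dev"}, allow_unknown=True
) == (["SBD_WATCHDOG_DEV"], ["UNKNOWN_OPT1"])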
self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog="/dev/watchdog", watchdog_dict={}, sbd_options={ "UNKNOWN_OPT1": 1, "SBD_STARTMODE": "clean", "UNKNOWN_OPT2": "val", "SBD_WATCHDOG_DEV": "dev", }, ), [ fixture.error( report_codes.INVALID_OPTIONS, option_names=[opt], option_type=None, allowed=sorted(ALLOWED_SBD_OPTION_LIST), allowed_patterns=[], force_code=report_codes.FORCE_OPTIONS, ) for opt in ["UNKNOWN_OPT1", "UNKNOWN_OPT2"] ] + [ fixture.error( report_codes.INVALID_OPTIONS, option_names=["SBD_WATCHDOG_DEV"], option_type=None, allowed=sorted(ALLOWED_SBD_OPTION_LIST), allowed_patterns=[], ), ] ) self.env_assist.assert_reports([]) def test_unknown_sbd_opts_allowed(self): self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog="/dev/watchdog", watchdog_dict={}, sbd_options={ "UNKNOWN_OPT1": 1, "SBD_STARTMODE": "clean", "UNKNOWN_OPT2": "val", "SBD_WATCHDOG_DEV": "dev", }, allow_unknown_opts=True, ), [ fixture.error( report_codes.INVALID_OPTIONS, option_names=["SBD_WATCHDOG_DEV"], option_type=None, allowed=sorted(ALLOWED_SBD_OPTION_LIST), allowed_patterns=[], ), ] ) self.env_assist.assert_reports( [ fixture.warn( report_codes.INVALID_OPTIONS, option_names=[opt], option_type=None, allowed=sorted(ALLOWED_SBD_OPTION_LIST), allowed_patterns=[], ) for opt in ["UNKNOWN_OPT1", "UNKNOWN_OPT2"] ] ) def test_sbd_not_installed(self): watchdog = "/dev/watchdog" self.config.http.host.check_auth(node_labels=self.node_list) self.config.http.sbd.check_sbd( communication_list=[ _check_sbd_comm_success_fixture( self.node_list[0], watchdog, [] ), dict( label=self.node_list[1], output=json.dumps({ "sbd": { "installed": False, }, "watchdog": { "exist": True, "path": watchdog, }, "device_list": [], }), param_list=[ ("watchdog", watchdog), ("device_list", "[]"), ] ) ] ) self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog=watchdog, watchdog_dict={}, sbd_options={}, ), [] ) self.env_assist.assert_reports( [fixture.info(report_codes.SBD_CHECK_STARTED)] + [ fixture.error( report_codes.SBD_NOT_INSTALLED, node=self.node_list[1] ) ] + [ fixture.info( report_codes.SBD_CHECK_SUCCESS, node=self.node_list[0] ) ] ) def test_watchdog_not_found(self): watchdog = "/dev/watchdog" self.config.http.host.check_auth(node_labels=self.node_list) self.config.http.sbd.check_sbd( communication_list=[ _check_sbd_comm_success_fixture( self.node_list[0], watchdog, [] ), dict( label=self.node_list[1], output=json.dumps({ "sbd": { "installed": True, }, "watchdog": { "exist": False, "path": watchdog, }, "device_list": [], }), param_list=[ ("watchdog", watchdog), ("device_list", "[]"), ] ) ] ) self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog=watchdog, watchdog_dict={}, sbd_options={}, ), [] ) self.env_assist.assert_reports( [fixture.info(report_codes.SBD_CHECK_STARTED)] + [ fixture.error( report_codes.WATCHDOG_NOT_FOUND, node=self.node_list[1], watchdog=watchdog, ) ] + [ fixture.info( report_codes.SBD_CHECK_SUCCESS, node=self.node_list[0] ) ] ) def test_device_not_exists_not_block_device(self): watchdog = "/dev/watchdog" device_list = ["/dev/dev0", "/dev/dev1"] self.config.http.host.check_auth(node_labels=self.node_list) self.config.http.sbd.check_sbd( communication_list=[ _check_sbd_comm_success_fixture( self.node_list[0], watchdog, device_list ), dict( label=self.node_list[1], output=json.dumps({ "sbd": { "installed": 
True, }, "watchdog": { "exist": True, "path": watchdog, }, "device_list": [ dict( path=device_list[0], exist=False, block_device=False, ), dict( path=device_list[1], exist=True, block_device=False, ), ], }), param_list=[ ("watchdog", watchdog), ("device_list", json.dumps(device_list)), ] ) ] ) self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog=watchdog, watchdog_dict={}, sbd_options={}, default_device_list=device_list, ), [] ) self.env_assist.assert_reports( [fixture.info(report_codes.SBD_CHECK_STARTED)] + [ fixture.error( report_codes.SBD_DEVICE_DOES_NOT_EXIST, node=self.node_list[1], device=device_list[0], ), fixture.error( report_codes.SBD_DEVICE_IS_NOT_BLOCK_DEVICE, node=self.node_list[1], device=device_list[1], ) ] + [ fixture.info( report_codes.SBD_CHECK_SUCCESS, node=self.node_list[0] ) ] ) def test_multiple_validation_failures(self): unknown_node_list = ["unknown_node{0}".format(i) for i in range(2)] max_dev_num = settings.sbd_max_device_num self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog="watchdog0", watchdog_dict={ unknown_node_list[0]: "/dev/watchdog", }, sbd_options={ "UNKNOWN_OPT1": 1, "SBD_STARTMODE": "clean", "UNKNOWN_OPT2": "val", "SBD_WATCHDOG_DEV": "dev", }, default_device_list=[], node_device_dict={ self.node_list[0]: ["dev", "/dev0", "/dev1", "/dev2"], unknown_node_list[0]: ["/dev/device0"], unknown_node_list[1]: ["/dev/device0"], } ), [ fixture.error( report_codes.NODE_NOT_FOUND, node=node, searched_types=[], ) for node in unknown_node_list ] + [ fixture.error( report_codes.WATCHDOG_INVALID, watchdog="watchdog0" ), fixture.error( report_codes.WATCHDOG_INVALID, watchdog="watchdog0" ), fixture.error( report_codes.SBD_NO_DEVICE_FOR_NODE, node=self.node_list[1], ), fixture.error( report_codes.SBD_TOO_MANY_DEVICES_FOR_NODE, node=self.node_list[0], device_list=["dev", "/dev0", "/dev1", "/dev2"], max_devices=max_dev_num, ), fixture.error( report_codes.INVALID_OPTIONS, option_names=["SBD_WATCHDOG_DEV"], option_type=None, allowed=sorted(ALLOWED_SBD_OPTION_LIST), allowed_patterns=[], ), ] + [ fixture.error( report_codes.INVALID_OPTIONS, option_names=[opt], option_type=None, allowed=sorted(ALLOWED_SBD_OPTION_LIST), allowed_patterns=[], force_code=report_codes.FORCE_OPTIONS, ) for opt in ["UNKNOWN_OPT1", "UNKNOWN_OPT2"] ] ) self.env_assist.assert_reports([]) class FailureHandling(TestCase): #pylint: disable=too-many-instance-attributes def setUp(self): self.env_assist, self.config = get_env_tools(self) self.corosync_conf_name = "corosync.conf" self.node_list = ["rh7-1", "rh7-2"] self.sbd_config_generator = outdent("""\ # This file has been generated by pcs. 
SBD_DELAY_START=no SBD_OPTS="-n {0}" SBD_PACEMAKER=yes SBD_STARTMODE=always SBD_WATCHDOG_DEV=/dev/watchdog SBD_WATCHDOG_TIMEOUT=5 """).format self.watchdog = "/dev/watchdog" self.reason = "failure reason" self.communication_list_failure = [ dict( label=self.node_list[0], response_code=400, output=self.reason, ), dict( label=self.node_list[1], ) ] self.communication_list_not_connected = [ dict( label=self.node_list[0], errno=1, error_msg=self.reason, was_connected=False, ), dict( label=self.node_list[1], ) ] self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.config.http.host.check_auth(node_labels=self.node_list) self.config.http.sbd.check_sbd( communication_list=[ _check_sbd_comm_success_fixture(node, self.watchdog, []) for node in self.node_list ] ) self.config.corosync_conf.load( filename=self.corosync_conf_name, name="corosync_conf.load2", ) self.config.http.corosync.check_corosync_offline( node_labels=self.node_list ) self.config.http.corosync.set_corosync_conf( _get_corosync_conf_text_with_atb(self.corosync_conf_name), node_labels=self.node_list, ) self.config.runner.systemctl.is_active("corosync", is_active=False) self.config.http.sbd.set_sbd_config( config_generator=self.sbd_config_generator, node_labels=self.node_list, ) self.config.http.pcmk.remove_stonith_watchdog_timeout( node_labels=[self.node_list[0]] ) def _remove_calls(self, n): for name in self.config.calls.names[-n:]: self.config.calls.remove(name) @mock.patch("pcs.lib.external.is_systemctl", lambda: True) def test_enable_failed(self): self.config.http.sbd.enable_sbd( communication_list=self.communication_list_failure ) self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, ), [] ) self.env_assist.assert_reports( _sbd_enable_successful_report_list_fixture( self.node_list, atb_set=True )[:-3] + [ fixture.info( report_codes.SERVICE_ENABLE_SUCCESS, service="sbd", node=self.node_list[1], instance=None ), fixture.error( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, node=self.node_list[0], reason=self.reason, command="remote/sbd_enable" ) ] ) @mock.patch("pcs.lib.external.is_systemctl", lambda: True) def test_enable_not_connected(self): self.config.http.sbd.enable_sbd( communication_list=self.communication_list_not_connected ) self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, ), [] ) self.env_assist.assert_reports( _sbd_enable_successful_report_list_fixture( self.node_list, atb_set=True )[:-3] + [ fixture.info( report_codes.SERVICE_ENABLE_SUCCESS, service="sbd", node=self.node_list[1], instance=None ), fixture.error( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, node=self.node_list[0], reason=self.reason, command="remote/sbd_enable" ) ] ) @mock.patch("pcs.lib.external.is_systemctl", lambda: True) def test_removing_stonith_wd_timeout_failure(self): self._remove_calls(2) self.config.http.pcmk.remove_stonith_watchdog_timeout( communication_list=[ self.communication_list_failure[:1], [dict(label=self.node_list[1])] ] ) self.config.http.sbd.enable_sbd(node_labels=self.node_list) enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, ) self.env_assist.assert_reports( _sbd_enable_successful_report_list_fixture( self.node_list, atb_set=True ) + [ fixture.warn( 
report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, node=self.node_list[0], reason=self.reason, command="remote/remove_stonith_watchdog_timeout", ) ] ) @mock.patch("pcs.lib.external.is_systemctl", lambda: True) def test_removing_stonith_wd_timeout_not_connected(self): self._remove_calls(2) self.config.http.pcmk.remove_stonith_watchdog_timeout( communication_list=[ self.communication_list_not_connected[:1], [dict(label=self.node_list[1])] ] ) self.config.http.sbd.enable_sbd(node_labels=self.node_list) enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, ) self.env_assist.assert_reports( _sbd_enable_successful_report_list_fixture( self.node_list, atb_set=True ) + [ fixture.warn( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, node=self.node_list[0], reason=self.reason, command="remote/remove_stonith_watchdog_timeout", ) ] ) @mock.patch("pcs.lib.external.is_systemctl", lambda: True) def test_removing_stonith_wd_timeout_complete_failure(self): self._remove_calls(2) self.config.http.pcmk.remove_stonith_watchdog_timeout( communication_list=[ self.communication_list_not_connected[:1], [dict( label=self.node_list[1], response_code=400, output=self.reason, )], ] ) self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, ), [] ) self.env_assist.assert_reports( _sbd_enable_successful_report_list_fixture( self.node_list, atb_set=True )[:-4] + [ fixture.warn( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, node=self.node_list[0], reason=self.reason, command="remote/remove_stonith_watchdog_timeout", ), fixture.warn( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, node=self.node_list[1], reason=self.reason, command="remote/remove_stonith_watchdog_timeout", ), fixture.error( report_codes.UNABLE_TO_PERFORM_OPERATION_ON_ANY_NODE, ) ] ) @mock.patch("pcs.lib.external.is_systemctl", lambda: True) def test_set_sbd_config_failure(self): self._remove_calls(4) self.config.http.sbd.set_sbd_config( communication_list=[ dict( label=self.node_list[0], param_list=[ ("config", self.sbd_config_generator(self.node_list[0])) ], response_code=400, output=self.reason, ), dict( label=self.node_list[1], param_list=[ ("config", self.sbd_config_generator(self.node_list[1])) ], ), ] ) self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, ), [] ) self.env_assist.assert_reports( _sbd_enable_successful_report_list_fixture( self.node_list, atb_set=True )[:-6] + [ fixture.error( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, node=self.node_list[0], reason=self.reason, command="remote/set_sbd_config", ), fixture.info( report_codes.SBD_CONFIG_ACCEPTED_BY_NODE, node=self.node_list[1], ) ] ) def test_set_corosync_conf_failed(self): self._remove_calls(7) self.config.http.corosync.set_corosync_conf( _get_corosync_conf_text_with_atb(self.corosync_conf_name), communication_list=self.communication_list_failure, ) self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, ), [] ) self.env_assist.assert_reports( _sbd_enable_successful_report_list_fixture( self.node_list, atb_set=True )[:-9] + [ fixture.error( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, node=self.node_list[0], reason=self.reason, command="remote/set_corosync_conf", 
force_code=report_codes.SKIP_OFFLINE_NODES, ), fixture.error( report_codes.COROSYNC_CONFIG_DISTRIBUTION_NODE_ERROR, node=self.node_list[0], force_code=report_codes.SKIP_OFFLINE_NODES, ), fixture.info( report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, node=self.node_list[1], ) ] ) def test_set_corosync_conf_not_connected(self): self._remove_calls(7) self.config.http.corosync.set_corosync_conf( _get_corosync_conf_text_with_atb(self.corosync_conf_name), communication_list=self.communication_list_not_connected, ) self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, ), [] ) self.env_assist.assert_reports( _sbd_enable_successful_report_list_fixture( self.node_list, atb_set=True )[:-9] + [ fixture.error( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, node=self.node_list[0], reason=self.reason, command="remote/set_corosync_conf", force_code=report_codes.SKIP_OFFLINE_NODES, ), fixture.error( report_codes.COROSYNC_CONFIG_DISTRIBUTION_NODE_ERROR, node=self.node_list[0], force_code=report_codes.SKIP_OFFLINE_NODES, ), fixture.info( report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, node=self.node_list[1], ) ] ) def test_corosync_not_running_failed(self): self._remove_calls(9) self.config.http.corosync.check_corosync_offline( communication_list=self.communication_list_failure, ) self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, ), [] ) self.env_assist.assert_reports( _sbd_enable_successful_report_list_fixture( self.node_list, atb_set=True )[:-12] + [ fixture.error( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, node=self.node_list[0], reason=self.reason, command="remote/status", force_code=report_codes.SKIP_OFFLINE_NODES, ), fixture.error( report_codes.COROSYNC_NOT_RUNNING_CHECK_NODE_ERROR, node=self.node_list[0], force_code=report_codes.SKIP_OFFLINE_NODES, ), fixture.info( report_codes.COROSYNC_NOT_RUNNING_ON_NODE, node=self.node_list[1] ) ] ) def test_corosync_not_running_not_connected(self): self._remove_calls(9) self.config.http.corosync.check_corosync_offline( communication_list=self.communication_list_not_connected, ) self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, ), [] ) self.env_assist.assert_reports( _sbd_enable_successful_report_list_fixture( self.node_list, atb_set=True )[:-12] + [ fixture.error( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, node=self.node_list[0], reason=self.reason, command="remote/status", force_code=report_codes.SKIP_OFFLINE_NODES, ), fixture.error( report_codes.COROSYNC_NOT_RUNNING_CHECK_NODE_ERROR, node=self.node_list[0], force_code=report_codes.SKIP_OFFLINE_NODES, ), fixture.info( report_codes.COROSYNC_NOT_RUNNING_ON_NODE, node=self.node_list[1] ) ] ) def test_check_sbd_invalid_data_format(self): self._remove_calls(12) self.config.http.sbd.check_sbd( communication_list=[ dict( label=self.node_list[0], param_list=[ ("watchdog", self.watchdog), ("device_list", "[]"), ], output="{}", ), dict( label=self.node_list[1], param_list=[ ("watchdog", self.watchdog), ("device_list", "[]"), ], output="not JSON", ), ] ) self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, ), [] ) self.env_assist.assert_reports( 
[fixture.info(report_codes.SBD_CHECK_STARTED)] + [ fixture.error(report_codes.INVALID_RESPONSE_FORMAT, node=node) for node in self.node_list ] ) def test_check_sbd_failure(self): self._remove_calls(12) self.config.http.sbd.check_sbd( communication_list=[ dict( label=self.node_list[0], param_list=[ ("watchdog", self.watchdog), ("device_list", "[]"), ], response_code=400, output=self.reason, ), _check_sbd_comm_success_fixture( self.node_list[1], self.watchdog, [] ) ] ) self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, ), [] ) self.env_assist.assert_reports( [ fixture.info(report_codes.SBD_CHECK_STARTED), fixture.error( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, node=self.node_list[0], reason=self.reason, command="remote/check_sbd", ), fixture.info( report_codes.SBD_CHECK_SUCCESS, node=self.node_list[1] ) ] ) def test_check_sbd_not_connected(self): self._remove_calls(12) self.config.http.sbd.check_sbd( communication_list=[ dict( label=self.node_list[0], param_list=[ ("watchdog", self.watchdog), ("device_list", "[]"), ], errno=1, error_msg=self.reason, was_connected=False, ), _check_sbd_comm_success_fixture( self.node_list[1], self.watchdog, [] ) ] ) self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, ), [] ) self.env_assist.assert_reports( [ fixture.info(report_codes.SBD_CHECK_STARTED), fixture.error( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, node=self.node_list[0], reason=self.reason, command="remote/check_sbd", ), fixture.info( report_codes.SBD_CHECK_SUCCESS, node=self.node_list[1] ) ] ) def test_get_online_targets_failed(self): self._remove_calls(14) self.config.http.host.check_auth( communication_list=self.communication_list_failure ) self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, ), [] ) self.env_assist.assert_reports( [ fixture.error( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, node=self.node_list[0], reason=self.reason, command="remote/check_auth", ) ] ) def test_get_online_targets_not_connected(self): self._remove_calls(14) self.config.http.host.check_auth( communication_list=self.communication_list_not_connected ) self.env_assist.assert_raise_library_error( lambda: enable_sbd( self.env_assist.get_env(), default_watchdog=self.watchdog, watchdog_dict={}, sbd_options={}, ), [] ) self.env_assist.assert_reports( [ fixture.error( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, node=self.node_list[0], reason=self.reason, command="remote/check_auth", force_code=report_codes.SKIP_OFFLINE_NODES, ) ] ) pcs-0.9.164/pcs/lib/commands/test/sbd/test_get_cluster_sbd_config.py000066400000000000000000000077351326265502500254730ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.common import report_codes from pcs.lib.commands.sbd import get_cluster_sbd_config from pcs.test.tools import fixture from pcs.test.tools.command_env import get_env_tools from pcs.test.tools.misc import outdent from pcs.test.tools.pcs_unittest import TestCase class GetClusterSbdConfig(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(self) def test_different_responses(self): (self.config .runner.corosync.version() .corosync_conf.load( node_name_list=[ "node-1", "node-2", "node-3", "node-4", 
"node-5", ], auto_tie_breaker=True, ) .http.add_communication( "get_sbd_config", [ dict( label="node-1", output=outdent( """\ # This file has been generated by pcs. SBD_DELAY_START=no SBD_OPTS="-n node-1" SBD_PACEMAKER=yes SBD_STARTMODE=always SBD_WATCHDOG_DEV=/dev/watchdog SBD_WATCHDOG_TIMEOUT=5 """ ), response_code=200, ), dict( label="node-2", was_connected=False, errno=7, error_msg="Failed connect to node-2:2224;" " No route to host" , ), dict( label="node-3", output= "OPTION= value", response_code=200, ), dict( label="node-4", output= "# just comment", response_code=200, ), dict( label="node-5", output= "invalid value", response_code=200, ), ], action="remote/get_sbd_config", ) ) self.assertEqual( get_cluster_sbd_config(self.env_assist.get_env()), [ { 'node': 'node-1', 'config': { 'SBD_WATCHDOG_TIMEOUT': '5', 'SBD_WATCHDOG_DEV': '/dev/watchdog', 'SBD_PACEMAKER': 'yes', 'SBD_OPTS': '"-n node-1"', 'SBD_STARTMODE': 'always', 'SBD_DELAY_START': 'no' }, }, { 'node': 'node-3', 'config': { "OPTION": "value", } }, { 'node': 'node-4', 'config': {}, }, { 'node': 'node-5', 'config': {}, }, { 'node': 'node-2', 'config': None, }, ] ) self.env_assist.assert_reports([ fixture.warn( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, node="node-2", reason="Failed connect to node-2:2224; No route to host", command="remote/get_sbd_config", ), fixture.warn( report_codes.UNABLE_TO_GET_SBD_CONFIG, node="node-2", reason="", ), ]) pcs-0.9.164/pcs/lib/commands/test/sbd/test_get_cluster_sbd_status.py000066400000000000000000000114501326265502500255360ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial import json from pcs.common import report_codes from pcs.lib.commands.sbd import get_cluster_sbd_status from pcs.test.tools import fixture from pcs.test.tools.command_env import get_env_tools from pcs.test.tools.pcs_unittest import TestCase warn_unable_to_get_sbd_status = partial( fixture.warn, report_codes.UNABLE_TO_GET_SBD_STATUS, ) class GetClusterSbdStatus(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(self) def test_default_different_results_on_different_nodes(self): (self.config .runner.corosync.version() .corosync_conf.load( node_name_list=[ "node-1", "node-2", "node-3", "node-4", "node-5" ] ) .http.add_communication( "check_sbd", [ dict( label="node-1", output='{"notauthorized":"true"}', response_code=401, ), dict( label="node-2", was_connected=False, errno=6, error_msg="Could not resolve host: node-2;" " Name or service not known" , ), dict( label="node-3", output=json.dumps({ "sbd":{ "installed": True, "enabled": False, "running":False }, "watchdog":{ "path":"", "exist":False }, "device_list":[] }), response_code=200, ), dict( label="node-4", output=json.dumps({ "watchdog":{ "path":"", "exist":False }, "device_list":[] }), response_code=200, ), dict( label="node-5", output="invalid json", response_code=200, ), ], action="remote/check_sbd", param_list=[("watchdog", ""), ("device_list", "[]")], ) ) default_status = { 'running': None, 'enabled': None, 'installed': None, } self.assertEqual( get_cluster_sbd_status(self.env_assist.get_env()), [ { 'node': 'node-3', 'status': { 'running': False, 'enabled': False, 'installed': True, } }, { 'node': 'node-1', 'status': default_status }, { 'node': 'node-2', 'status': default_status }, { 'node': 'node-4', 'status': default_status }, { 'node': 'node-5', 'status': default_status }, ] ) self.env_assist.assert_reports([ fixture.warn( 
report_codes.NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED, node="node-1", reason="HTTP error: 401", command="remote/check_sbd", ), warn_unable_to_get_sbd_status(node="node-1", reason=""), fixture.warn( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, node="node-2", reason= "Could not resolve host: node-2; Name or service not known" , command="remote/check_sbd", ), warn_unable_to_get_sbd_status(node="node-2", reason=""), warn_unable_to_get_sbd_status(node="node-4", reason="'sbd'"), warn_unable_to_get_sbd_status( node="node-5", #the reason differs in python3 #reason="No JSON object could be decoded", ), ]) pcs-0.9.164/pcs/lib/commands/test/test_acl.py000066400000000000000000000255471326265502500207660ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import pcs.lib.commands.acl as cmd_acl from pcs.common.tools import Version from pcs.lib.env import LibraryEnvironment from pcs.test.tools.assertions import ExtendedAssertionsMixin from pcs.test.tools.custom_mock import MockLibraryReportProcessor from pcs.test.tools.pcs_unittest import mock, TestCase REQUIRED_CIB_VERSION = Version(2, 0, 0) class AclCommandsTest(TestCase, ExtendedAssertionsMixin): def setUp(self): self.mock_rep = MockLibraryReportProcessor() self.mock_env = mock.MagicMock(spec_set=LibraryEnvironment) self.mock_env.report_processor = self.mock_rep self.cib = "cib" self.mock_env.get_cib.return_value = self.cib def assert_get_cib_called(self): self.mock_env.get_cib.assert_called_once_with(REQUIRED_CIB_VERSION) def assert_same_cib_pushed(self): self.mock_env.push_cib.assert_called_once_with() def assert_cib_not_pushed(self): self.assertEqual(0, self.mock_env.push_cib.call_count) @mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) class CibAclSection(TestCase): def test_push_cib_on_success(self): env = mock.MagicMock() env.get_cib = mock.Mock(return_value="cib") with cmd_acl.cib_acl_section(env): pass env.get_cib.assert_called_once_with(cmd_acl.REQUIRED_CIB_VERSION) env.push_cib.assert_called_once_with() def test_does_not_push_cib_on_exception(self): env = mock.MagicMock() def run(): with cmd_acl.cib_acl_section(env): raise AssertionError() self.assertRaises(AssertionError, run) env.get_cib.assert_called_once_with(cmd_acl.REQUIRED_CIB_VERSION) env.push_cib.assert_not_called() @mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.validate_permissions") @mock.patch("pcs.lib.cib.acl.create_role") @mock.patch("pcs.lib.cib.acl.add_permissions_to_role") class CreateRoleTest(AclCommandsTest): def test_success(self, mock_add_perm, mock_create_role, mock_validate): perm_list = ["my", "list"] mock_create_role.return_value = "role el" cmd_acl.create_role(self.mock_env, "role_id", perm_list, "desc") self.assert_get_cib_called() mock_validate.assert_called_once_with(self.cib, perm_list) mock_create_role.assert_called_once_with(self.cib, "role_id", "desc") mock_add_perm.assert_called_once_with("role el", perm_list) self.assert_same_cib_pushed() def test_no_permission( self, mock_add_perm, mock_create_role, mock_validate ): mock_create_role.return_value = "role el" cmd_acl.create_role(self.mock_env, "role_id", [], "desc") self.assert_get_cib_called() self.assertEqual(0, mock_validate.call_count) mock_create_role.assert_called_once_with(self.cib, "role_id", "desc") self.assertEqual(0, mock_add_perm.call_count) self.assert_same_cib_pushed() @mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) 
@mock.patch("pcs.lib.cib.acl.remove_role") class RemoveRoleTest(AclCommandsTest): def test_success(self, mock_remove): cmd_acl.remove_role(self.mock_env, "role_id", False) self.assert_get_cib_called() mock_remove.assert_called_once_with(self.cib, "role_id", False) self.assert_same_cib_pushed() @mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.find_target_or_group") @mock.patch("pcs.lib.cib.acl.assign_role") class AssignRoleNotSpecific(AclCommandsTest, ExtendedAssertionsMixin): def test_success(self, mock_assign, find_target_or_group): find_target_or_group.return_value = "target_el" cmd_acl.assign_role_not_specific(self.mock_env, "role_id", "target_id") self.assert_get_cib_called() find_target_or_group.assert_called_once_with(self.cib, "target_id") mock_assign.assert_called_once_with(self.cib, "role_id", "target_el") self.assert_same_cib_pushed() @mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.find_target") @mock.patch("pcs.lib.cib.acl.assign_role") class AssignRoleToTargetTest(AclCommandsTest): def test_success(self, mock_assign, find_target): find_target.return_value = "target_el" cmd_acl.assign_role_to_target(self.mock_env, "role_id", "target_id") self.assert_get_cib_called() mock_assign.assert_called_once_with(self.cib, "role_id", "target_el") self.assert_same_cib_pushed() @mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.find_group") @mock.patch("pcs.lib.cib.acl.assign_role") class AssignRoleToGroupTest(AclCommandsTest): def test_success(self, mock_assign, find_group): find_group.return_value = "group_el" cmd_acl.assign_role_to_group(self.mock_env, "role_id", "group_id") self.assert_get_cib_called() mock_assign.assert_called_once_with(self.cib, "role_id", "group_el") self.assert_same_cib_pushed() @mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.unassign_role") @mock.patch("pcs.lib.cib.acl.find_target_or_group") class UnassignRoleNotSpecificTest(AclCommandsTest): def test_success(self, find_target_or_group, mock_unassign): find_target_or_group.return_value = "target_el" cmd_acl.unassign_role_not_specific( self.mock_env, "role_id", "target_id", False ) self.assert_get_cib_called() find_target_or_group.assert_called_once_with(self.cib, "target_id") mock_unassign.assert_called_once_with("target_el", "role_id", False) self.assert_same_cib_pushed() @mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.unassign_role") @mock.patch("pcs.lib.cib.acl.find_target") class UnassignRoleFromTargetTest(AclCommandsTest): def test_success(self, find_target, mock_unassign): find_target.return_value = "el" cmd_acl.unassign_role_from_target( self.mock_env, "role_id", "el_id", False ) self.assert_get_cib_called() find_target.assert_called_once_with(self.cib, "el_id") mock_unassign.assert_called_once_with("el", "role_id", False) self.assert_same_cib_pushed() @mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.unassign_role") @mock.patch("pcs.lib.cib.acl.find_group") class UnassignRoleFromGroupTest(AclCommandsTest): def test_success(self, find_group, mock_unassign): find_group.return_value = "el" cmd_acl.unassign_role_from_group( self.mock_env, "role_id", "el_id", False ) self.assert_get_cib_called() find_group.assert_called_once_with(self.cib, "el_id") 
mock_unassign.assert_called_once_with("el", "role_id", False) self.assert_same_cib_pushed() @mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.create_target") @mock.patch("pcs.lib.cib.acl.assign_all_roles") class CreateTargetTest(AclCommandsTest): def test_success(self, mock_assign, mock_create): mock_create.return_value = "el" cmd_acl.create_target( self.mock_env, "el_id", ["role1", "role2", "role3"] ) self.assert_get_cib_called() mock_create.assert_called_once_with(self.cib, "el_id") mock_assign(self.cib, "el", ["role1", "role2", "role3"]) self.assert_same_cib_pushed() @mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.create_group") @mock.patch("pcs.lib.cib.acl.assign_all_roles") class CreateGroupTest(AclCommandsTest): def test_success(self, mock_assign, mock_create): mock_create.return_value = "el" cmd_acl.create_group( self.mock_env, "el_id", ["role1", "role2", "role3"] ) self.assert_get_cib_called() mock_create.assert_called_once_with(self.cib, "el_id") mock_assign(self.cib, "el", ["role1", "role2", "role3"]) self.assert_same_cib_pushed() @mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.remove_target") class RemoveTargetTest(AclCommandsTest): def test_success(self, mock_remove): cmd_acl.remove_target(self.mock_env, "el_id") self.assert_get_cib_called() mock_remove.assert_called_once_with(self.cib, "el_id") self.assert_same_cib_pushed() @mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.remove_group") class RemoveGroupTest(AclCommandsTest): def test_success(self, mock_remove): cmd_acl.remove_group(self.mock_env, "el_id") self.assert_get_cib_called() mock_remove.assert_called_once_with(self.cib, "el_id") self.assert_same_cib_pushed() @mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.validate_permissions") @mock.patch("pcs.lib.cib.acl.provide_role") @mock.patch("pcs.lib.cib.acl.add_permissions_to_role") class AddPermissionTest(AclCommandsTest): def test_success(self, mock_add_perm, mock_provide_role, mock_validate): mock_provide_role.return_value = "role_el" cmd_acl.add_permission(self.mock_env, "role_id", "permission_list") self.assert_get_cib_called() mock_validate.assert_called_once_with(self.cib, "permission_list") mock_provide_role.assert_called_once_with(self.cib, "role_id") mock_add_perm.assert_called_once_with("role_el", "permission_list") self.assert_same_cib_pushed() @mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.remove_permission") class RemovePermission(AclCommandsTest): def test_success(self, mock_remove): cmd_acl.remove_permission(self.mock_env, "id") self.assert_get_cib_called() mock_remove.assert_called_once_with(self.cib, "id") self.assert_same_cib_pushed() @mock.patch("pcs.lib.cib.acl.get_target_list") @mock.patch("pcs.lib.cib.acl.get_group_list") @mock.patch("pcs.lib.cib.acl.get_role_list") @mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) class GetConfigTest(AclCommandsTest): def test_success(self, mock_role, mock_group, mock_target): mock_role.return_value = "role" mock_group.return_value = "group" mock_target.return_value = "target" self.assertEqual( { "target_list": "target", "group_list": "group", "role_list": "role", }, cmd_acl.get_config(self.mock_env) ) 
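# The CibAclSection tests near the top of this file pin down a context-manager
# contract: the CIB is fetched on entry and pushed back only when the block
# exits cleanly; an exception skips the push. A minimal sketch of such a
# context manager, assuming an env object exposing get_cib()/push_cib()
# (illustrative only, not the actual pcs.lib.commands.acl code):
from contextlib import contextmanager

@contextmanager
def cib_section(env, required_version):
    # Fetch the CIB, upgrading its schema to required_version if needed ...
    cib = env.get_cib(required_version)
    # ... hand it to the caller; any exception propagates without a push.
    yield cib
    # Reached only on a clean exit: persist the modifications.
    env.push_cib()

class FakeEnv(object):
    pushed = False
    def get_cib(self, version):
        return "<cib/>"
    def push_cib(self):
        self.pushed = True

env = FakeEnv()
with cib_section(env, (2, 0, 0)) as cib:
    pass  # modify the cib here
assert env.pushed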
pcs-0.9.164/pcs/lib/commands/test/test_alert.py000066400000000000000000000547161326265502500213360ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial import logging from pcs.common import report_codes from pcs.lib.errors import ReportItemSeverity as Severities from pcs.lib.env import LibraryEnvironment from pcs.lib.external import CommandRunner from pcs.test.tools.command_env import get_env_tools from pcs.test.tools.custom_mock import MockLibraryReportProcessor from pcs.test.tools.pcs_unittest import mock, TestCase import pcs.lib.commands.alert as cmd_alert get_env_tools = partial( get_env_tools, base_cib_filename="cib-empty-2.5.xml", exception_reports_in_processor_by_default=False, ) class CreateAlertTest(TestCase): fixture_final_alerts = """ """ def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_no_path(self): self.env_assist.assert_raise_library_error( lambda: cmd_alert.create_alert( self.env_assist.get_env(), None, None, None, None ), [ ( Severities.ERROR, report_codes.REQUIRED_OPTION_IS_MISSING, {"option_names": ["path"]}, None ), ], ) def test_create_no_upgrade(self): (self.config .runner.cib.load() .env.push_cib(optional_in_conf=self.fixture_final_alerts) ) cmd_alert.create_alert( self.env_assist.get_env(), "my-alert", "/my/path", { "instance": "value", "another": "val" }, {"meta1": "val1"}, "my description" ) def test_create_upgrade(self): (self.config .runner.cib.load( filename="cib-empty.xml", name="load_cib_old_version" ) .runner.cib.upgrade() .runner.cib.load() .env.push_cib(optional_in_conf=self.fixture_final_alerts) ) cmd_alert.create_alert( self.env_assist.get_env(), "my-alert", "/my/path", { "instance": "value", "another": "val" }, {"meta1": "val1"}, "my description" ) self.env_assist.assert_reports([ ( Severities.INFO, report_codes.CIB_UPGRADE_SUCCESSFUL, {}, None ), ]) class UpdateAlertTest(TestCase): fixture_initial_alerts = """ """ def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_update_all(self): fixture_final_alerts = """ """ (self.config .runner.cib.load(optional_in_conf=self.fixture_initial_alerts) .env.push_cib( replace={"./configuration/alerts": fixture_final_alerts} ) ) cmd_alert.update_alert( self.env_assist.get_env(), "my-alert", "/another/one", { "instance": "", "my-attr": "its_val" }, {"meta1": "val2"}, "" ) def test_update_instance_attribute(self): (self.config .runner.cib.load(optional_in_conf=self.fixture_initial_alerts) .env.push_cib( replace={ './configuration/alerts/alert[@id="my-alert"]/' 'instance_attributes/nvpair[@name="instance"]' : """ """ } ) ) cmd_alert.update_alert( self.env_assist.get_env(), "my-alert", None, {"instance": "new_val"}, {}, None ) def test_alert_doesnt_exist(self): (self.config .runner.cib.load( optional_in_conf=""" """ ) ) self.env_assist.assert_raise_library_error( lambda: cmd_alert.update_alert( self.env_assist.get_env(), "unknown", "test", {}, {}, None ), [ ( Severities.ERROR, report_codes.ID_NOT_FOUND, { "context_type": "alerts", "context_id": "", "id": "unknown", "expected_types": ["alert"], }, None ), ], ) class RemoveAlertTest(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) self.config.runner.cib.load( optional_in_conf=""" """ ) def test_one_alert(self): self.config.env.push_cib( remove="./configuration/alerts/alert[@id='alert2']" ) cmd_alert.remove_alert( self.env_assist.get_env(), ["alert2"] ) def test_multiple_alerts(self): 
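# The push_cib(remove=...) assertions used by these alert tests take XPath
# expressions and check that exactly those elements disappeared from the
# pushed CIB. The underlying removal is plain element surgery; a minimal
# sketch of deleting an alert by id, assuming lxml (which pcs itself uses)
# and an invented CIB snippet:
from lxml import etree

cib = etree.fromstring(
    "<cib><configuration><alerts>"
    '<alert id="alert1" path="/p1"/><alert id="alert2" path="/p2"/>'
    "</alerts></configuration></cib>"
)
for el in cib.findall("./configuration/alerts/alert[@id='alert2']"):
    el.getparent().remove(el)
assert [a.get("id") for a in cib.findall(".//alert")] == ["alert1"]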
self.config.env.push_cib( remove=[ "./configuration/alerts/alert[@id='alert1']", "./configuration/alerts/alert[@id='alert3']", "./configuration/alerts/alert[@id='alert4']", ] ) cmd_alert.remove_alert( self.env_assist.get_env(), ["alert1", "alert3", "alert4"] ) def test_no_alert(self): self.config.env.push_cib() cmd_alert.remove_alert( self.env_assist.get_env(), [] ) def test_alerts_dont_exist(self): self.env_assist.assert_raise_library_error( lambda: cmd_alert.remove_alert( self.env_assist.get_env(), ["unknown1", "alert1", "unknown2", "alert2"] ), [ ( Severities.ERROR, report_codes.ID_NOT_FOUND, { "context_type": "alerts", "context_id": "", "id": "unknown1", "expected_types": ["alert"], }, None ), ( Severities.ERROR, report_codes.ID_NOT_FOUND, { "context_type": "alerts", "context_id": "", "id": "unknown2", "expected_types": ["alert"], }, None ), ], expected_in_processor=True ) class AddRecipientTest(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) self.config.runner.cib.load( optional_in_conf=""" """ ) def test_value_not_defined(self): self.config.remove("runner.cib.load") self.env_assist.assert_raise_library_error( lambda: cmd_alert.add_recipient( self.env_assist.get_env(), "unknown", "", {}, {} ), [ ( Severities.ERROR, report_codes.REQUIRED_OPTION_IS_MISSING, {"option_names": ["value"]} ) ], ) def test_recipient_already_exists(self): self.env_assist.assert_raise_library_error( lambda: cmd_alert.add_recipient( self.env_assist.get_env(), "alert", "value1", {}, {}, recipient_id="alert-recipient" ), [ ( Severities.ERROR, report_codes.ID_ALREADY_EXISTS, {"id": "alert-recipient"} ) ], ) def test_without_id(self): self.config.env.push_cib( replace={ './/alert[@id="alert"]' : """ """ } ) cmd_alert.add_recipient( self.env_assist.get_env(), "alert", "value", {"attr1": "val1"}, { "attr2": "val2", "attr1": "val1" } ) def test_with_id(self): self.config.env.push_cib( replace={ './/alert[@id="alert"]': """ """ } ) cmd_alert.add_recipient( self.env_assist.get_env(), "alert", "value", {"attr1": "val1"}, { "attr2": "val2", "attr1": "val1" }, recipient_id="my-recipient" ) class UpdateRecipientTest(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) self.config.runner.cib.load( optional_in_conf=""" """ ) def test_empty_value(self): self.config.remove("runner.cib.load") self.env_assist.assert_raise_library_error( lambda: cmd_alert.update_recipient( self.env_assist.get_env(), "alert-recipient-1", {}, {}, recipient_value="" ), [ ( Severities.ERROR, report_codes.CIB_ALERT_RECIPIENT_VALUE_INVALID, {"recipient": ""} ) ], ) def test_recipient_not_found(self): self.env_assist.assert_raise_library_error( lambda: cmd_alert.update_recipient( self.env_assist.get_env(), "recipient", {}, {} ), [ ( Severities.ERROR, report_codes.ID_NOT_FOUND, { "id": "recipient", "expected_types": ["recipient"], "context_id": "", "context_type": "alerts", }, None ) ], ) def test_update_all(self): self.config.env.push_cib( replace={ './/alert[@id="alert"]': """ """, } ) cmd_alert.update_recipient( self.env_assist.get_env(), "alert-recipient-1", {"attr1": "value"}, { "attr1": "", "attr3": "new_val" }, recipient_value="new_val", description="desc" ) class RemoveRecipientTest(TestCase): fixture_initial_alerts = """ """ def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) self.config.runner.cib.load( optional_in_conf=self.fixture_initial_alerts ) def test_recipient_not_found(self): self.env_assist.assert_raise_library_error( lambda: 
cmd_alert.remove_recipient( self.env_assist.get_env(), ["recipient", "alert-recip1", "alert2-recip1"] ), [ ( Severities.ERROR, report_codes.ID_NOT_FOUND, {"id": "recipient"}, None ), ( Severities.ERROR, report_codes.ID_NOT_FOUND, {"id": "alert2-recip1"}, None ) ], expected_in_processor=True ) def test_one_recipient(self): self.config.env.push_cib( remove="./configuration/alerts/alert/recipient[@id='alert-recip1']" ) cmd_alert.remove_recipient( self.env_assist.get_env(), ["alert-recip1"] ) def test_multiple_recipients(self): self.config.env.push_cib( remove=[ "./configuration/alerts/alert/recipient[@id='alert-recip1']", "./configuration/alerts/alert/recipient[@id='alert-recip2']", "./configuration/alerts/alert/recipient[@id='alert2-recip4']", ] ) cmd_alert.remove_recipient( self.env_assist.get_env(), ["alert-recip1", "alert-recip2", "alert2-recip4"] ) def test_no_recipient(self): self.config.env.push_cib() cmd_alert.remove_recipient( self.env_assist.get_env(), [] ) @mock.patch("pcs.lib.cib.alert.get_all_alerts") class GetAllAlertsTest(TestCase): def setUp(self): self.mock_log = mock.MagicMock(spec_set=logging.Logger) self.mock_run = mock.MagicMock(spec_set=CommandRunner) self.mock_rep = MockLibraryReportProcessor() self.mock_env = LibraryEnvironment( self.mock_log, self.mock_rep, cib_data='' ) def test_success(self, mock_alerts): mock_alerts.return_value = [{"id": "alert"}] self.assertEqual( [{"id": "alert"}], cmd_alert.get_all_alerts(self.mock_env) ) self.assertEqual(1, mock_alerts.call_count) pcs-0.9.164/pcs/lib/commands/test/test_booth.py000066400000000000000000001255561326265502500213430ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import os from collections import namedtuple from pcs.test.tools.pcs_unittest import TestCase, mock from pcs.test.tools import fixture from pcs.test.tools.command_env import get_env_tools from pcs.test.tools.custom_mock import MockLibraryReportProcessor from pcs.test.tools.assertions import ( assert_raise_library_error, assert_report_item_list_equal, ) from pcs.test.tools.misc import create_patcher from pcs import settings from pcs.common import report_codes, env_file_role_codes as file_roles from pcs.lib.env import LibraryEnvironment from pcs.lib.errors import LibraryError, ReportItemSeverity as Severities from pcs.lib.commands import booth as commands from pcs.lib.external import ( CommandRunner, EnableServiceError, DisableServiceError, StartServiceError, StopServiceError ) patch_commands = create_patcher("pcs.lib.commands.booth") @mock.patch("pcs.lib.tools.generate_key", return_value="key value") @mock.patch("pcs.lib.commands.booth.build", return_value="config content") @mock.patch("pcs.lib.booth.config_structure.validate_peers") class ConfigSetupTest(TestCase): def test_successfuly_build_and_write_to_std_path( self, mock_validate_peers, mock_build, mock_generate_key ): env = mock.MagicMock() commands.config_setup( env, booth_configuration=[ {"key": "site", "value": "1.1.1.1", "details": []}, {"key": "arbitrator", "value": "2.2.2.2", "details": []}, ], ) env.booth.create_config.assert_called_once_with( "config content", False ) env.booth.create_key.assert_called_once_with( "key value", False ) mock_validate_peers.assert_called_once_with( ["1.1.1.1"], ["2.2.2.2"] ) def test_sanitize_peers_before_validation( self, mock_validate_peers, mock_build, mock_generate_key ): commands.config_setup(env=mock.MagicMock(), booth_configuration={}) mock_validate_peers.assert_called_once_with([], []) class 
ConfigDestroyTest(TestCase): @patch_commands("external.is_systemctl", mock.Mock(return_value=True)) @patch_commands("external.is_service_enabled", mock.Mock(return_value=True)) @patch_commands("external.is_service_running", mock.Mock(return_value=True)) @patch_commands("resource.find_for_config", mock.Mock(return_value=[True])) def test_raises_when_booth_config_in_use(self): env = mock.MagicMock() env.booth.name = "somename" assert_raise_library_error( lambda: commands.config_destroy(env), ( Severities.ERROR, report_codes.BOOTH_CONFIG_IS_USED, { "name": "somename", "detail": "in cluster resource", } ), ( Severities.ERROR, report_codes.BOOTH_CONFIG_IS_USED, { "name": "somename", "detail": "(enabled in systemd)", } ), ( Severities.ERROR, report_codes.BOOTH_CONFIG_IS_USED, { "name": "somename", "detail": "(running in systemd)", } ) ) @patch_commands("external.is_systemctl", mock.Mock(return_value=False)) @patch_commands("resource.find_for_config", mock.Mock(return_value=[])) @patch_commands("parse", mock.Mock(side_effect=LibraryError())) def test_raises_when_cannot_get_content_of_config(self): env = mock.MagicMock() env.booth.name = "somename" assert_raise_library_error( lambda: commands.config_destroy(env), ( Severities.ERROR, report_codes.BOOTH_CANNOT_IDENTIFY_KEYFILE, {}, report_codes.FORCE_BOOTH_DESTROY ) ) @patch_commands("external.is_systemctl", mock.Mock(return_value=False)) @patch_commands("resource.find_for_config", mock.Mock(return_value=[])) @patch_commands("parse", mock.Mock(side_effect=LibraryError())) def test_remove_config_even_if_cannot_get_its_content_when_forced(self): env = mock.MagicMock() env.booth.name = "somename" env.report_processor = MockLibraryReportProcessor() commands.config_destroy(env, ignore_config_load_problems=True) env.booth.remove_config.assert_called_once_with() assert_report_item_list_equal(env.report_processor.report_item_list, [ ( Severities.WARNING, report_codes.BOOTH_CANNOT_IDENTIFY_KEYFILE, {} ) ]) class ConfigSyncTest(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(self) self.name = "booth" self.config_path = os.path.join( settings.booth_config_dir, "{}.conf".format(self.name) ) self.node_list = ["rh7-1", "rh7-2"] self.config.env.set_booth({"name": self.name}) self.reason = "fail" def test_success(self): auth_file = "auth.file" auth_file_path = os.path.join(settings.booth_config_dir, auth_file) config_content = "authfile={}".format(auth_file_path) auth_file_content = b"auth" (self.config .fs.open( self.config_path, mock.mock_open(read_data=config_content)(), name="open.conf" ) .fs.open( auth_file_path, mock.mock_open(read_data=auth_file_content)(), mode="rb", name="open.authfile", ) .corosync_conf.load() .http.booth.send_config( self.name, config_content, authfile=auth_file, authfile_data=auth_file_content, node_labels=self.node_list, ) ) commands.config_sync(self.env_assist.get_env(), self.name) self.env_assist.assert_reports( [fixture.info(report_codes.BOOTH_CONFIG_DISTRIBUTION_STARTED)] + [ fixture.info( report_codes.BOOTH_CONFIG_ACCEPTED_BY_NODE, node=node, name_list=[self.name] ) for node in self.node_list ] ) def test_node_failure(self): (self.config .fs.open( self.config_path, mock.mock_open(read_data="")(), name="open.conf" ) .corosync_conf.load() .http.booth.send_config( self.name, "", communication_list=[ dict( label=self.node_list[0], response_code=400, output=self.reason, ), dict( label=self.node_list[1], ) ] ) ) self.env_assist.assert_raise_library_error( lambda: commands.config_sync(self.env_assist.get_env(), 
self.name), [] ) self.env_assist.assert_reports( [ fixture.info(report_codes.BOOTH_CONFIG_DISTRIBUTION_STARTED), fixture.info( report_codes.BOOTH_CONFIG_ACCEPTED_BY_NODE, node=self.node_list[1], name_list=[self.name] ), fixture.error( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, node=self.node_list[0], reason=self.reason, command="remote/booth_set_config", force_code=report_codes.SKIP_OFFLINE_NODES, ), ] ) def test_node_failure_skip_offline(self): (self.config .fs.open( self.config_path, mock.mock_open(read_data="")(), name="open.conf" ) .corosync_conf.load() .http.booth.send_config( self.name, "", communication_list=[ dict( label=self.node_list[0], response_code=400, output=self.reason, ), dict( label=self.node_list[1], ) ] ) ) commands.config_sync( self.env_assist.get_env(), self.name, skip_offline_nodes=True ) self.env_assist.assert_reports( [ fixture.info(report_codes.BOOTH_CONFIG_DISTRIBUTION_STARTED), fixture.info( report_codes.BOOTH_CONFIG_ACCEPTED_BY_NODE, node=self.node_list[1], name_list=[self.name] ), fixture.warn( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, node=self.node_list[0], reason=self.reason, command="remote/booth_set_config", ), ] ) def test_node_offline(self): (self.config .fs.open( self.config_path, mock.mock_open(read_data="")(), name="open.conf" ) .corosync_conf.load() .http.booth.send_config( self.name, "", communication_list=[ dict( label=self.node_list[0], errno=1, error_msg=self.reason, was_connected=False, ), dict( label=self.node_list[1], ) ], ) ) self.env_assist.assert_raise_library_error( lambda: commands.config_sync(self.env_assist.get_env(), self.name), [] ) self.env_assist.assert_reports( [ fixture.info(report_codes.BOOTH_CONFIG_DISTRIBUTION_STARTED), fixture.info( report_codes.BOOTH_CONFIG_ACCEPTED_BY_NODE, node=self.node_list[1], name_list=[self.name] ), fixture.error( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, node=self.node_list[0], reason=self.reason, command="remote/booth_set_config", force_code=report_codes.SKIP_OFFLINE_NODES, ), ] ) def test_node_offline_skip_offline(self): (self.config .fs.open( self.config_path, mock.mock_open(read_data="")(), name="open.conf" ) .corosync_conf.load() .http.booth.send_config( self.name, "", communication_list=[ dict( label=self.node_list[0], errno=1, error_msg=self.reason, was_connected=False, ), dict( label=self.node_list[1], ) ], ) ) commands.config_sync( self.env_assist.get_env(), self.name, skip_offline_nodes=True ) self.env_assist.assert_reports( [ fixture.info(report_codes.BOOTH_CONFIG_DISTRIBUTION_STARTED), fixture.info( report_codes.BOOTH_CONFIG_ACCEPTED_BY_NODE, node=self.node_list[1], name_list=[self.name] ), fixture.warn( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, node=self.node_list[0], reason=self.reason, command="remote/booth_set_config", ), ] ) def test_config_not_accessible(self): self.config.fs.open( self.config_path, side_effect=EnvironmentError(0, self.reason, self.config_path), ) self.env_assist.assert_raise_library_error( lambda: commands.config_sync(self.env_assist.get_env(), self.name), [ fixture.error( report_codes.FILE_IO_ERROR, reason="{}: '{}'".format(self.reason, self.config_path), file_role=file_roles.BOOTH_CONFIG, file_path=self.config_path, operation="read", ) ], expected_in_processor=False, ) self.env_assist.assert_reports([]) def test_authfile_not_accessible(self): auth_file = "auth.file" auth_file_path = os.path.join(settings.booth_config_dir, auth_file) config_content = "authfile={}".format(auth_file_path) (self.config .fs.open( 
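                # mock.mock_open(read_data=...)() builds a file-like mock
                # whose read() returns the given data; the fs.open shim
                # replays it when the command reads the booth config
                # (descriptive note on the standard mock.mock_open pattern).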
self.config_path, mock.mock_open(read_data=config_content)(), name="open.conf" ) .fs.open( auth_file_path, mode="rb", name="open.authfile", side_effect=EnvironmentError(0, self.reason, auth_file_path), ) .corosync_conf.load() .http.booth.send_config( self.name, config_content, node_labels=self.node_list, ) ) commands.config_sync(self.env_assist.get_env(), self.name) self.env_assist.assert_reports( [ fixture.warn( report_codes.FILE_IO_ERROR, reason="{}: '{}'".format(self.reason, auth_file_path), file_role=file_roles.BOOTH_KEY, file_path=auth_file_path, operation="read", ), fixture.info(report_codes.BOOTH_CONFIG_DISTRIBUTION_STARTED) ] + [ fixture.info( report_codes.BOOTH_CONFIG_ACCEPTED_BY_NODE, node=node, name_list=[self.name] ) for node in self.node_list ] ) def test_no_authfile(self): (self.config .fs.open( self.config_path, mock.mock_open(read_data="")(), name="open.conf" ) .corosync_conf.load() .http.booth.send_config( self.name, "", node_labels=self.node_list, ) ) commands.config_sync(self.env_assist.get_env(), self.name) self.env_assist.assert_reports( [fixture.info(report_codes.BOOTH_CONFIG_DISTRIBUTION_STARTED)] + [ fixture.info( report_codes.BOOTH_CONFIG_ACCEPTED_BY_NODE, node=node, name_list=[self.name] ) for node in self.node_list ] ) def test_authfile_not_in_booth_dir(self): config_file_content = "authfile=/etc/my_booth.conf" (self.config .fs.open( self.config_path, mock.mock_open(read_data=config_file_content)(), name="open.conf" ) .corosync_conf.load() .http.booth.send_config( self.name, config_file_content, node_labels=self.node_list, ) ) commands.config_sync(self.env_assist.get_env(), self.name) self.env_assist.assert_reports( [ fixture.warn(report_codes.BOOTH_UNSUPORTED_FILE_LOCATION), fixture.info(report_codes.BOOTH_CONFIG_DISTRIBUTION_STARTED) ] + [ fixture.info( report_codes.BOOTH_CONFIG_ACCEPTED_BY_NODE, node=node, name_list=[self.name] ) for node in self.node_list ] ) @mock.patch("pcs.lib.commands.booth.external.ensure_is_systemd") @mock.patch("pcs.lib.external.enable_service") class EnableBoothTest(TestCase): def setUp(self): self.mock_env = mock.MagicMock(spec_set=LibraryEnvironment) self.mock_rep = MockLibraryReportProcessor() self.mock_run = mock.MagicMock(spec_set=CommandRunner) self.mock_env.cmd_runner.return_value = self.mock_run self.mock_env.report_processor = self.mock_rep def test_success(self, mock_enable, mock_is_systemctl): commands.enable_booth(self.mock_env, "name") mock_enable.assert_called_once_with(self.mock_run, "booth", "name") mock_is_systemctl.assert_called_once_with() assert_report_item_list_equal( self.mock_rep.report_item_list, [( Severities.INFO, report_codes.SERVICE_ENABLE_SUCCESS, { "service": "booth", "node": None, "instance": "name", } )] ) def test_failed(self, mock_enable, mock_is_systemctl): mock_enable.side_effect = EnableServiceError("booth", "msg", "name") assert_raise_library_error( lambda: commands.enable_booth(self.mock_env, "name"), ( Severities.ERROR, report_codes.SERVICE_ENABLE_ERROR, { "service": "booth", "reason": "msg", "node": None, "instance": "name", } ) ) mock_enable.assert_called_once_with(self.mock_run, "booth", "name") mock_is_systemctl.assert_called_once_with() @mock.patch("pcs.lib.commands.booth.external.ensure_is_systemd") @mock.patch("pcs.lib.external.disable_service") class DisableBoothTest(TestCase): def setUp(self): self.mock_env = mock.MagicMock(spec_set=LibraryEnvironment) self.mock_rep = MockLibraryReportProcessor() self.mock_run = mock.MagicMock(spec_set=CommandRunner) 
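        # spec_set limits the mock to CommandRunner's real attributes, so a
        # mistyped method name raises AttributeError instead of silently
        # creating a new mock attribute.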
self.mock_env.cmd_runner.return_value = self.mock_run self.mock_env.report_processor = self.mock_rep def test_success(self, mock_disable, mock_is_systemctl): commands.disable_booth(self.mock_env, "name") mock_disable.assert_called_once_with(self.mock_run, "booth", "name") mock_is_systemctl.assert_called_once_with() assert_report_item_list_equal( self.mock_rep.report_item_list, [( Severities.INFO, report_codes.SERVICE_DISABLE_SUCCESS, { "service": "booth", "node": None, "instance": "name", } )] ) def test_failed(self, mock_disable, mock_is_systemctl): mock_disable.side_effect = DisableServiceError("booth", "msg", "name") assert_raise_library_error( lambda: commands.disable_booth(self.mock_env, "name"), ( Severities.ERROR, report_codes.SERVICE_DISABLE_ERROR, { "service": "booth", "reason": "msg", "node": None, "instance": "name", } ) ) mock_disable.assert_called_once_with(self.mock_run, "booth", "name") mock_is_systemctl.assert_called_once_with() @mock.patch("pcs.lib.commands.booth.external.ensure_is_systemd") @mock.patch("pcs.lib.external.start_service") class StartBoothTest(TestCase): def setUp(self): self.mock_env = mock.MagicMock(spec_set=LibraryEnvironment) self.mock_rep = MockLibraryReportProcessor() self.mock_run = mock.MagicMock(spec_set=CommandRunner) self.mock_env.cmd_runner.return_value = self.mock_run self.mock_env.report_processor = self.mock_rep def test_success(self, mock_start, mock_is_systemctl): commands.start_booth(self.mock_env, "name") mock_start.assert_called_once_with(self.mock_run, "booth", "name") mock_is_systemctl.assert_called_once_with() assert_report_item_list_equal( self.mock_rep.report_item_list, [( Severities.INFO, report_codes.SERVICE_START_SUCCESS, { "service": "booth", "node": None, "instance": "name", } )] ) def test_failed(self, mock_start, mock_is_systemctl): mock_start.side_effect = StartServiceError("booth", "msg", "name") assert_raise_library_error( lambda: commands.start_booth(self.mock_env, "name"), ( Severities.ERROR, report_codes.SERVICE_START_ERROR, { "service": "booth", "reason": "msg", "node": None, "instance": "name", } ) ) mock_start.assert_called_once_with(self.mock_run, "booth", "name") mock_is_systemctl.assert_called_once_with() @mock.patch("pcs.lib.commands.booth.external.ensure_is_systemd") @mock.patch("pcs.lib.external.stop_service") class StopBoothTest(TestCase): def setUp(self): self.mock_env = mock.MagicMock(spec_set=LibraryEnvironment) self.mock_rep = MockLibraryReportProcessor() self.mock_run = mock.MagicMock(spec_set=CommandRunner) self.mock_env.cmd_runner.return_value = self.mock_run self.mock_env.report_processor = self.mock_rep def test_success(self, mock_stop, mock_is_systemctl): commands.stop_booth(self.mock_env, "name") mock_stop.assert_called_once_with(self.mock_run, "booth", "name") mock_is_systemctl.assert_called_once_with() assert_report_item_list_equal( self.mock_rep.report_item_list, [( Severities.INFO, report_codes.SERVICE_STOP_SUCCESS, { "service": "booth", "node": None, "instance": "name", } )] ) def test_failed(self, mock_stop, mock_is_systemctl): mock_stop.side_effect = StopServiceError("booth", "msg", "name") assert_raise_library_error( lambda: commands.stop_booth(self.mock_env, "name"), ( Severities.ERROR, report_codes.SERVICE_STOP_ERROR, { "service": "booth", "reason": "msg", "node": None, "instance": "name", } ) ) mock_stop.assert_called_once_with(self.mock_run, "booth", "name") mock_is_systemctl.assert_called_once_with() def _get_booth_file_path(file): return os.path.join(settings.booth_config_dir, file) 
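# Shared helper for the pull-config tests below: it resolves a booth file
# name against settings.booth_config_dir. For example, assuming the default
# directory is "/etc/booth" (the real value comes from pcs.settings),
# _get_booth_file_path("booth.conf") would return "/etc/booth/booth.conf".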
class PullConfigBase(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(self) self.name = "booth" self.node_name = "node" self.config_data = "config" self.config_path = _get_booth_file_path("{}.conf".format(self.name)) self.report_list = [ fixture.info( report_codes.BOOTH_FETCHING_CONFIG_FROM_NODE, node=self.node_name, config=self.name ), fixture.info( report_codes.BOOTH_CONFIG_ACCEPTED_BY_NODE, node=None, name_list=[self.name], ) ] self.config.env.set_booth({"name": self.name}) class PullConfigSuccess(PullConfigBase): def setUp(self): super(PullConfigSuccess, self).setUp() self.booth_cfg_open_mock = mock.mock_open()() (self.config .http.booth.get_config( self.name, self.config_data, node_labels=[self.node_name] ) .fs.exists(self.config_path, False) .fs.open(self.config_path, self.booth_cfg_open_mock, mode="w") ) self.addCleanup( lambda: self.booth_cfg_open_mock.write.assert_called_once_with( self.config_data ) ) def test_success(self): commands.pull_config( self.env_assist.get_env(), self.node_name, self.name ) self.env_assist.assert_reports(self.report_list) def test_success_config_exists(self): self.config.fs.exists(self.config_path, True, instead="fs.exists") commands.pull_config( self.env_assist.get_env(), self.node_name, self.name ) self.env_assist.assert_reports( self.report_list + [ fixture.warn( report_codes.FILE_ALREADY_EXISTS, node=None, file_role=file_roles.BOOTH_CONFIG, file_path=self.config_path, ), ] ) class PullConfigFailure(PullConfigBase): reason = "reason" def test_write_failure(self): (self.config .http.booth.get_config( self.name, self.config_data, node_labels=[self.node_name] ) .fs.exists(self.config_path, False) .fs.open( self.config_path, mode="w", side_effect=EnvironmentError(0, self.reason, self.config_path), ) ) self.env_assist.assert_raise_library_error( lambda: commands.pull_config( self.env_assist.get_env(), self.node_name, self.name ), [ fixture.error( report_codes.FILE_IO_ERROR, reason="{}: '{}'".format(self.reason, self.config_path), file_role=file_roles.BOOTH_CONFIG, file_path=self.config_path, operation="write", ) ], expected_in_processor=False, ) self.env_assist.assert_reports(self.report_list[:1]) def test_network_failure(self): self.config.http.booth.get_config( self.name, communication_list=[dict( label=self.node_name, was_connected=False, errno=1, error_msg=self.reason, )] ) self.env_assist.assert_raise_library_error( lambda: commands.pull_config( self.env_assist.get_env(), self.node_name, self.name ), [], ) self.env_assist.assert_reports([ self.report_list[0], fixture.error( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, force_code=None, node=self.node_name, command="remote/booth_get_config", reason=self.reason, ), ]) def test_network_request_failure(self): self.config.http.booth.get_config( self.name, communication_list=[dict( label=self.node_name, response_code=400, output=self.reason, )] ) self.env_assist.assert_raise_library_error( lambda: commands.pull_config( self.env_assist.get_env(), self.node_name, self.name ), [], ) self.env_assist.assert_reports([ self.report_list[0], fixture.error( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, force_code=None, node=self.node_name, command="remote/booth_get_config", reason=self.reason, ), ]) def test_request_response_not_json(self): self.config.http.booth.get_config( self.name, communication_list=[dict( label=self.node_name, output="not json", )] ) self.env_assist.assert_raise_library_error( lambda: commands.pull_config( self.env_assist.get_env(), self.node_name, 
self.name ), [], ) self.env_assist.assert_reports([ self.report_list[0], fixture.error( report_codes.INVALID_RESPONSE_FORMAT, node=self.node_name, ), ]) def test_request_response_missing_keys(self): self.config.http.booth.get_config( self.name, communication_list=[dict( label=self.node_name, output="{'config':{}}", )] ) self.env_assist.assert_raise_library_error( lambda: commands.pull_config( self.env_assist.get_env(), self.node_name, self.name ), [], ) self.env_assist.assert_reports([ self.report_list[0], fixture.error( report_codes.INVALID_RESPONSE_FORMAT, node=self.node_name, ), ]) class PullConfigWithAuthfile(PullConfigBase): def setUp(self): super(PullConfigWithAuthfile, self).setUp() self.booth_cfg_open_mock = mock.mock_open()() self.authfile = "authfile" self.authfile_path = _get_booth_file_path(self.authfile) self.authfile_data = b"auth" self.pcmk_uid = 2 self.pcmk_gid = 3 (self.config .http.booth.get_config( self.name, self.config_data, authfile=self.authfile, authfile_data=self.authfile_data, node_labels=[self.node_name], ) .fs.exists(self.config_path, False) .fs.open(self.config_path, self.booth_cfg_open_mock, mode="w") .fs.exists(self.authfile_path, False, name="fs.exists.authfile") ) self.addCleanup( lambda: self.booth_cfg_open_mock.write.assert_called_once_with( self.config_data ) ) def _set_pwd_mock(self, pwd_mock): pwd_mock.return_value = namedtuple("Pw", "pw_uid")(self.pcmk_uid) self.addCleanup( lambda: pwd_mock.assert_called_once_with(settings.pacemaker_uname) ) def _set_grp_mock(self, grp_mock): grp_mock.return_value = namedtuple("Gr", "gr_gid")(self.pcmk_gid) self.addCleanup( lambda: grp_mock.assert_called_once_with(settings.pacemaker_gname) ) @mock.patch("grp.getgrnam") @mock.patch("pwd.getpwnam") class PullConfigWithAuthfileSuccess(PullConfigWithAuthfile): def setUp(self): super(PullConfigWithAuthfileSuccess, self).setUp() self.booth_authfile_open_mock = mock.mock_open()() (self.config .fs.open( self.authfile_path, self.booth_authfile_open_mock, mode="wb", name="fs.open.authfile.write" ) .fs.chown(self.authfile_path, self.pcmk_uid, self.pcmk_gid) .fs.chmod(self.authfile_path, settings.booth_authkey_file_mode) ) self.addCleanup( lambda: self.booth_authfile_open_mock.write.assert_called_once_with( self.authfile_data ) ) def test_success(self, pwd_mock, grp_mock): self._set_pwd_mock(pwd_mock) self._set_grp_mock(grp_mock) commands.pull_config( self.env_assist.get_env(), self.node_name, self.name ) self.env_assist.assert_reports(self.report_list) def test_success_authfile_exists(self, pwd_mock, grp_mock): self._set_pwd_mock(pwd_mock) self._set_grp_mock(grp_mock) self.config.fs.exists( self.authfile_path, True, name="fs.exists.authfile", instead="fs.exists.authfile", ) commands.pull_config( self.env_assist.get_env(), self.node_name, self.name ) self.env_assist.assert_reports( self.report_list + [ fixture.warn( report_codes.FILE_ALREADY_EXISTS, node=None, file_role=file_roles.BOOTH_KEY, file_path=self.authfile_path, ) ] ) def test_success_config_and_authfile_exists(self, pwd_mock, grp_mock): self._set_pwd_mock(pwd_mock) self._set_grp_mock(grp_mock) (self.config .fs.exists(self.config_path, True, instead="fs.exists") .fs.exists( self.authfile_path, True, name="fs.exists.authfile", instead="fs.exists.authfile", ) ) commands.pull_config( self.env_assist.get_env(), self.node_name, self.name ) self.env_assist.assert_reports( self.report_list + [ fixture.warn( report_codes.FILE_ALREADY_EXISTS, node=None, file_role=role, file_path=path, ) for role, path in [ 
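                    # One FILE_ALREADY_EXISTS warning is expected per
                    # pre-existing file, covering both the config and the key: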
(file_roles.BOOTH_CONFIG, self.config_path), (file_roles.BOOTH_KEY, self.authfile_path) ] ] ) @mock.patch("grp.getgrnam") @mock.patch("pwd.getpwnam") class PullConfigWithAuthfileFailure(PullConfigWithAuthfile): def setUp(self): super(PullConfigWithAuthfileFailure, self).setUp() self.reason = "reason" self.booth_authfile_open_mock = mock.mock_open()() def assert_authfile_written(self): self.booth_authfile_open_mock.write.assert_called_once_with( self.authfile_data ) def test_authfile_write_failure(self, pwd_mock, grp_mock): self.config.fs.open( self.authfile_path, mode="wb", name="fs.open.authfile.write", side_effect=EnvironmentError(1, self.reason, self.authfile_path) ) self.env_assist.assert_raise_library_error( lambda: commands.pull_config( self.env_assist.get_env(), self.node_name, self.name ), [ fixture.error( report_codes.FILE_IO_ERROR, reason="{}: '{}'".format(self.reason, self.authfile_path), file_role=file_roles.BOOTH_KEY, file_path=self.authfile_path, operation="write", ) ], expected_in_processor=False, ) self.env_assist.assert_reports(self.report_list[:1]) def test_unable_to_get_uid(self, pwd_mock, grp_mock): pwd_mock.side_effect = KeyError() self.config.fs.open( self.authfile_path, self.booth_authfile_open_mock, mode="wb", name="fs.open.authfile.write" ) self.env_assist.assert_raise_library_error( lambda: commands.pull_config( self.env_assist.get_env(), self.node_name, self.name ), [ fixture.error( report_codes.UNABLE_TO_DETERMINE_USER_UID, user=settings.pacemaker_uname, ) ], expected_in_processor=False, ) self.assert_authfile_written() pwd_mock.assert_called_once_with(settings.pacemaker_uname) self.assertEqual(0, grp_mock.call_count) self.env_assist.assert_reports(self.report_list[:1]) def test_unable_to_get_gid(self, pwd_mock, grp_mock): self._set_pwd_mock(pwd_mock) grp_mock.side_effect = KeyError() self.config.fs.open( self.authfile_path, self.booth_authfile_open_mock, mode="wb", name="fs.open.authfile.write" ) self.env_assist.assert_raise_library_error( lambda: commands.pull_config( self.env_assist.get_env(), self.node_name, self.name ), [ fixture.error( report_codes.UNABLE_TO_DETERMINE_GROUP_GID, group=settings.pacemaker_gname, ) ], expected_in_processor=False, ) self.assert_authfile_written() grp_mock.assert_called_once_with(settings.pacemaker_gname) self.env_assist.assert_reports(self.report_list[:1]) def test_unable_to_set_authfile_uid_gid(self, pwd_mock, grp_mock): self._set_pwd_mock(pwd_mock) self._set_grp_mock(grp_mock) (self.config .fs.open( self.authfile_path, self.booth_authfile_open_mock, mode="wb", name="fs.open.authfile.write" ) .fs.chown( self.authfile_path, self.pcmk_uid, self.pcmk_gid, side_effect=EnvironmentError(1, self.reason, self.authfile_path) ) ) self.env_assist.assert_raise_library_error( lambda: commands.pull_config( self.env_assist.get_env(), self.node_name, self.name ), [ fixture.error( report_codes.FILE_IO_ERROR, reason="{}: '{}'".format(self.reason, self.authfile_path), file_role=file_roles.BOOTH_KEY, file_path=self.authfile_path, operation="chown", ) ], expected_in_processor=False, ) self.assert_authfile_written() self.env_assist.assert_reports(self.report_list[:1]) def test_unable_to_set_authfile_mode(self, pwd_mock, grp_mock): self._set_pwd_mock(pwd_mock) self._set_grp_mock(grp_mock) (self.config .fs.open( self.authfile_path, self.booth_authfile_open_mock, mode="wb", name="fs.open.authfile.write" ) .fs.chown( self.authfile_path, self.pcmk_uid, self.pcmk_gid, ) .fs.chmod( self.authfile_path, settings.booth_authkey_file_mode, 
side_effect=EnvironmentError(1, self.reason, self.authfile_path) ) ) self.env_assist.assert_raise_library_error( lambda: commands.pull_config( self.env_assist.get_env(), self.node_name, self.name ), [ fixture.error( report_codes.FILE_IO_ERROR, reason="{}: '{}'".format(self.reason, self.authfile_path), file_role=file_roles.BOOTH_KEY, file_path=self.authfile_path, operation="chmod", ) ], expected_in_processor=False, ) self.assert_authfile_written() self.env_assist.assert_reports(self.report_list[:1]) class TicketOperationTest(TestCase): @mock.patch("pcs.lib.booth.resource.find_bound_ip") def test_raises_when_implicit_site_not_found_in_cib( self, mock_find_bound_ip ): mock_find_bound_ip.return_value = [] assert_raise_library_error( lambda: commands.ticket_operation( "grant", mock.Mock(), "booth", "ABC", site_ip=None ), ( Severities.ERROR, report_codes.BOOTH_CANNOT_DETERMINE_LOCAL_SITE_IP, {} ), ) def test_raises_when_command_fail(self): mock_run = mock.Mock(return_value=("some message", "error", 1)) mock_env = mock.MagicMock( cmd_runner=mock.Mock(return_value=mock.MagicMock(run=mock_run)) ) assert_raise_library_error( lambda: commands.ticket_operation( "grant", mock_env, "booth", "ABC", site_ip="1.2.3.4" ), ( Severities.ERROR, report_codes.BOOTH_TICKET_OPERATION_FAILED, { "operation": "grant", "reason": "error\nsome message", "site_ip": "1.2.3.4", "ticket_name": "ABC", } ), ) class CreateInClusterTest(TestCase): @patch_commands("get_resources", mock.MagicMock()) def test_raises_when_is_created_already(self): assert_raise_library_error( lambda: commands.create_in_cluster( mock.MagicMock(), "somename", ip="1.2.3.4", ), ( Severities.ERROR, report_codes.BOOTH_ALREADY_IN_CIB, { "name": "somename", } ), ) class FindResourceElementsForOperationTest(TestCase): @patch_commands("resource.find_for_config", mock.Mock(return_value=[])) def test_raises_when_no_booth_resource_found(self): assert_raise_library_error( lambda: commands._find_resource_elements_for_operation( mock.MagicMock(), "somename", allow_multiple=False ), ( Severities.ERROR, report_codes.BOOTH_NOT_EXISTS_IN_CIB, { 'name': 'somename', } ), ) @patch_commands( "resource.find_for_config", mock.Mock(return_value=["b_el1", "b_el2"]) ) def test_raises_when_multiple_booth_resource_found(self): assert_raise_library_error( lambda: commands._find_resource_elements_for_operation( mock.MagicMock(), "somename", allow_multiple=False ), ( Severities.ERROR, report_codes.BOOTH_MULTIPLE_TIMES_IN_CIB, { 'name': 'somename', }, report_codes.FORCE_BOOTH_REMOVE_FROM_CIB, ), ) @patch_commands("get_resources", mock.Mock(return_value="resources")) @patch_commands("resource.get_remover", mock.MagicMock()) @patch_commands("resource.find_for_config", mock.Mock(return_value=[1, 2])) def test_warn_when_multiple_booth_resources_removed(self): report_processor=MockLibraryReportProcessor() commands._find_resource_elements_for_operation( mock.MagicMock(report_processor=report_processor), "somename", allow_multiple=True, ) assert_report_item_list_equal(report_processor.report_item_list, [( Severities.WARNING, report_codes.BOOTH_MULTIPLE_TIMES_IN_CIB, { 'name': 'somename', }, )]) pcs-0.9.164/pcs/lib/commands/test/test_constraint_common.py000066400000000000000000000157501326265502500237560ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from lxml import etree from pcs.common import report_codes from pcs.lib.commands.constraint import common as constraint from pcs.lib.errors 
import ReportItemSeverity as severities from pcs.test.tools.assertions import( assert_raise_library_error, assert_xml_equal, ) from pcs.test.tools.custom_mock import MockLibraryReportProcessor from pcs.test.tools.pcs_unittest import mock def fixture_cib_and_constraints(): cib = etree.Element("cib") resources_section = etree.SubElement(cib, "resources") for id in ("A", "B", "E", "F"): etree.SubElement(resources_section, "primitive").attrib["id"] = id constraint_section = etree.SubElement( etree.SubElement(cib, "configuration"), "constraints" ) return cib, constraint_section def fixture_env(cib): env = mock.MagicMock() env.get_cib = mock.Mock() env.get_cib.return_value = cib env.push_cib = mock.Mock() env.report_processor = MockLibraryReportProcessor() return env class CreateWithSetTest(TestCase): def setUp(self): self.cib, self.constraint_section = fixture_cib_and_constraints() self.env = fixture_env(self.cib) self.independent_cib = etree.XML(etree.tostring(self.cib)) def create(self, duplication_alowed=False): constraint.create_with_set( "rsc_some", lambda cib, options, resource_set_list: options, self.env, [ {"ids": ["A", "B"], "options": {"role": "Master"}}, {"ids": ["E", "F"], "options": {"action": "start"}}, ], {"id":"some_id", "symmetrical": "true"}, duplication_alowed=duplication_alowed ) def test_put_new_constraint_to_constraint_section(self): self.create() self.env.push_cib.assert_called_once_with() self.independent_cib.find(".//constraints").append(etree.XML(""" """)) assert_xml_equal( etree.tostring(self.independent_cib).decode(), etree.tostring(self.cib).decode() ) def test_refuse_duplicate(self): self.create() self.env.push_cib.assert_called_once_with() assert_raise_library_error(self.create, ( severities.ERROR, report_codes.DUPLICATE_CONSTRAINTS_EXIST, { 'constraint_type': 'rsc_some', 'constraint_info_list': [{ 'options': {'symmetrical': 'true', 'id': 'some_id'}, 'resource_sets': [ { 'ids': ['A', 'B'], 'options':{'role':'Master', 'id':'pcs_rsc_set_A_B'} }, { 'ids': ['E', 'F'], 'options':{'action':'start', 'id':'pcs_rsc_set_E_F'} } ], }] }, report_codes.FORCE_CONSTRAINT_DUPLICATE )) def test_put_duplicate_constraint_when_duplication_allowed(self): self.create() self.create(duplication_alowed=True) expected_calls = [ mock.call(), mock.call(), ] self.assertEqual(self.env.push_cib.call_count, len(expected_calls)) self.env.push_cib.assert_has_calls(expected_calls) constraint_section = self.independent_cib.find(".//constraints") constraint_section.append(etree.XML(""" """)) constraint_section.append(etree.XML(""" """)) assert_xml_equal( etree.tostring(self.independent_cib).decode(), etree.tostring(self.cib).decode() ) class ShowTest(TestCase): def setUp(self): self.cib, self.constraint_section = fixture_cib_and_constraints() self.env = fixture_env(self.cib) def create(self, tag_name, resource_set_list): constraint.create_with_set( tag_name, lambda cib, options, resource_set_list: options, self.env, resource_set_list, {"id":"some_id", "symmetrical": "true"}, ) def test_returns_export_of_found_elements(self): tag_name = "rsc_some" self.create(tag_name, [ {"ids": ["A", "B"], "options": {"role": "Master"}}, ]) self.create(tag_name, [ {"ids": ["E", "F"], "options": {"action": "start"}}, ]) etree.SubElement(self.constraint_section, tag_name).attrib.update({ "id": "plain1", "is_plain": "true" }) is_plain = lambda element: element.attrib.has_key("is_plain") self.assertEqual( constraint.show(tag_name, is_plain, self.env), { 'plain': [{"options": {'id': 'plain1', 'is_plain': 'true'}}], 
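                # Elements matched by the is_plain predicate are exported
                # under "plain"; constraints built via create_with_set are
                # exported under "with_resource_sets".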
'with_resource_sets': [ { 'resource_sets': [{ 'ids': ['A', 'B'], 'options': {'role': 'Master', 'id': 'pcs_rsc_set_A_B'}, }], 'options': {'symmetrical': 'true', 'id': 'some_id'} }, { 'options': {'symmetrical': 'true', 'id': 'some_id'}, 'resource_sets': [{ 'ids': ['E', 'F'], 'options': {'action': 'start', 'id': 'pcs_rsc_set_E_F'} }] } ] }) pcs-0.9.164/pcs/lib/commands/test/test_fencing_topology.py000066400000000000000000000206421326265502500235630ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial import logging from pcs.common.fencing_topology import ( TARGET_TYPE_REGEXP, TARGET_TYPE_ATTRIBUTE, ) from pcs.common.tools import Version from pcs.lib.env import LibraryEnvironment from pcs.test.tools.misc import create_patcher from pcs.test.tools.pcs_unittest import mock, TestCase from pcs.test.tools.custom_mock import MockLibraryReportProcessor from pcs.lib.commands import fencing_topology as lib create_lib_env = partial( LibraryEnvironment, mock.MagicMock(logging.Logger), MockLibraryReportProcessor() ) patch_env = partial(mock.patch.object, LibraryEnvironment) patch_command = create_patcher("pcs.lib.commands.fencing_topology") @patch_command("cib_fencing_topology.add_level") @patch_command("get_resources") @patch_command("get_fencing_topology") @patch_env("push_cib") @patch_command("ClusterState") @patch_command("get_cluster_status_xml") @patch_env("get_cib") @patch_env("cmd_runner", lambda self: "mocked cmd_runner") class AddLevel(TestCase): def prepare_mocks( self, mock_get_cib, mock_status_xml, mock_status, mock_get_topology, mock_get_resources ): mock_get_cib.return_value = "mocked cib" mock_status_xml.return_value = "mock get_cluster_status_xml" mock_status.return_value = mock.MagicMock( node_section=mock.MagicMock(nodes="nodes") ) mock_get_topology.return_value = "topology el" mock_get_resources.return_value = "resources_el" def assert_mocks( self, mock_status_xml, mock_status, mock_get_topology, mock_get_resources, mock_push_cib ): mock_status_xml.assert_called_once_with("mocked cmd_runner") mock_status.assert_called_once_with("mock get_cluster_status_xml") mock_get_topology.assert_called_once_with("mocked cib") mock_get_resources.assert_called_once_with("mocked cib") mock_push_cib.assert_called_once_with() def test_success( self, mock_get_cib, mock_status_xml, mock_status, mock_push_cib, mock_get_topology, mock_get_resources, mock_add_level ): self.prepare_mocks( mock_get_cib, mock_status_xml, mock_status, mock_get_topology, mock_get_resources ) lib_env = create_lib_env() lib.add_level( lib_env, "level", "target type", "target value", "devices", "force device", "force node" ) mock_add_level.assert_called_once_with( lib_env.report_processor, "topology el", "resources_el", "level", "target type", "target value", "devices", "nodes", "force device", "force node" ) mock_get_cib.assert_called_once_with(None) self.assert_mocks( mock_status_xml, mock_status, mock_get_topology, mock_get_resources, mock_push_cib ) def test_target_attribute_updates_cib( self, mock_get_cib, mock_status_xml, mock_status, mock_push_cib, mock_get_topology, mock_get_resources, mock_add_level ): self.prepare_mocks( mock_get_cib, mock_status_xml, mock_status, mock_get_topology, mock_get_resources ) lib_env = create_lib_env() lib.add_level( lib_env, "level", TARGET_TYPE_ATTRIBUTE, "target value", "devices", "force device", "force node" ) mock_add_level.assert_called_once_with( lib_env.report_processor, "topology el", "resources_el", "level", 
TARGET_TYPE_ATTRIBUTE, "target value", "devices", "nodes", "force device", "force node" ) mock_get_cib.assert_called_once_with(Version(2, 4, 0)) self.assert_mocks( mock_status_xml, mock_status, mock_get_topology, mock_get_resources, mock_push_cib ) def test_target_regexp_updates_cib( self, mock_get_cib, mock_status_xml, mock_status, mock_push_cib, mock_get_topology, mock_get_resources, mock_add_level ): self.prepare_mocks( mock_get_cib, mock_status_xml, mock_status, mock_get_topology, mock_get_resources ) lib_env = create_lib_env() lib.add_level( lib_env, "level", TARGET_TYPE_REGEXP, "target value", "devices", "force device", "force node" ) mock_add_level.assert_called_once_with( lib_env.report_processor, "topology el", "resources_el", "level", TARGET_TYPE_REGEXP, "target value", "devices", "nodes", "force device", "force node" ) mock_get_cib.assert_called_once_with(Version(2, 3, 0)) self.assert_mocks( mock_status_xml, mock_status, mock_get_topology, mock_get_resources, mock_push_cib ) @patch_command("cib_fencing_topology.export") @patch_command("get_fencing_topology") @patch_env("push_cib") @patch_env("get_cib", lambda self: "mocked cib") class GetConfig(TestCase): def test_success(self, mock_push_cib, mock_get_topology, mock_export): mock_get_topology.return_value = "topology el" mock_export.return_value = "exported config" lib_env = create_lib_env() self.assertEqual( "exported config", lib.get_config(lib_env) ) mock_export.assert_called_once_with("topology el") mock_get_topology.assert_called_once_with("mocked cib") mock_push_cib.assert_not_called() @patch_command("cib_fencing_topology.remove_all_levels") @patch_command("get_fencing_topology") @patch_env("push_cib") @patch_env("get_cib", lambda self: "mocked cib") class RemoveAllLevels(TestCase): def test_success(self, mock_push_cib, mock_get_topology, mock_remove): mock_get_topology.return_value = "topology el" lib_env = create_lib_env() lib.remove_all_levels(lib_env) mock_remove.assert_called_once_with("topology el") mock_get_topology.assert_called_once_with("mocked cib") mock_push_cib.assert_called_once_with() @patch_command("cib_fencing_topology.remove_levels_by_params") @patch_command("get_fencing_topology") @patch_env("push_cib") @patch_env("get_cib", lambda self: "mocked cib") class RemoveLevelsByParams(TestCase): def test_success(self, mock_push_cib, mock_get_topology, mock_remove): mock_get_topology.return_value = "topology el" lib_env = create_lib_env() lib.remove_levels_by_params( lib_env, "level", "target type", "target value", "devices", "ignore" ) mock_remove.assert_called_once_with( lib_env.report_processor, "topology el", "level", "target type", "target value", "devices", "ignore" ) mock_get_topology.assert_called_once_with("mocked cib") mock_push_cib.assert_called_once_with() @patch_command("cib_fencing_topology.verify") @patch_command("get_resources") @patch_command("get_fencing_topology") @patch_env("push_cib") @patch_command("ClusterState") @patch_command("get_cluster_status_xml") @patch_env("get_cib", lambda self: "mocked cib") @patch_env("cmd_runner", lambda self: "mocked cmd_runner") class Verify(TestCase): def test_success( self, mock_status_xml, mock_status, mock_push_cib, mock_get_topology, mock_get_resources, mock_verify ): mock_status_xml.return_value = "mock get_cluster_status_xml" mock_status.return_value = mock.MagicMock( node_section=mock.MagicMock(nodes="nodes") ) mock_get_topology.return_value = "topology el" mock_get_resources.return_value = "resources_el" lib_env = create_lib_env() lib.verify(lib_env) 
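        # verify() only reads the CIB and cluster state, so besides checking
        # the call wiring, the assertions below confirm no CIB push happened.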
mock_verify.assert_called_once_with( lib_env.report_processor, "topology el", "resources_el", "nodes" ) mock_status_xml.assert_called_once_with("mocked cmd_runner") mock_status.assert_called_once_with("mock get_cluster_status_xml") mock_get_topology.assert_called_once_with("mocked cib") mock_get_resources.assert_called_once_with("mocked cib") mock_push_cib.assert_not_called() pcs-0.9.164/pcs/lib/commands/test/test_node.py000066400000000000000000000232261326265502500211440ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial from contextlib import contextmanager from lxml import etree import logging from pcs.test.tools.assertions import assert_raise_library_error from pcs.test.tools.custom_mock import MockLibraryReportProcessor from pcs.test.tools.pcs_unittest import mock, TestCase from pcs.test.tools.misc import create_patcher from pcs.common import report_codes from pcs.lib.env import LibraryEnvironment from pcs.lib.errors import ReportItemSeverity as severity, LibraryError from pcs.lib.commands import node as lib mocked_cib = etree.fromstring("") patch_env = partial(mock.patch.object, LibraryEnvironment) patch_command = create_patcher("pcs.lib.commands.node") create_env = partial( LibraryEnvironment, mock.MagicMock(logging.Logger), MockLibraryReportProcessor() ) def fixture_node(order_num): node = mock.MagicMock(attrs=mock.MagicMock()) node.attrs.name = "node-{0}".format(order_num) return node class StandbyMaintenancePassParameters(TestCase): def setUp(self): self.lib_env = "lib_env" self.nodes = "nodes" self.wait = "wait" self.standby_on = {"standby": "on"} self.standby_off = {"standby": ""} self.maintenance_on = {"maintenance": "on"} self.maintenance_off = {"maintenance": ""} @patch_command("_set_instance_attrs_local_node") class StandbyMaintenancePassParametersLocal(StandbyMaintenancePassParameters): def test_standby(self, mock_doer): lib.standby_unstandby_local(self.lib_env, True, self.wait) mock_doer.assert_called_once_with( self.lib_env, self.standby_on, self.wait ) def test_unstandby(self, mock_doer): lib.standby_unstandby_local(self.lib_env, False, self.wait) mock_doer.assert_called_once_with( self.lib_env, self.standby_off, self.wait ) def test_maintenance(self, mock_doer): lib.maintenance_unmaintenance_local(self.lib_env, True, self.wait) mock_doer.assert_called_once_with( self.lib_env, self.maintenance_on, self.wait ) def test_unmaintenance(self, mock_doer): lib.maintenance_unmaintenance_local(self.lib_env, False, self.wait) mock_doer.assert_called_once_with( self.lib_env, self.maintenance_off, self.wait ) @patch_command("_set_instance_attrs_node_list") class StandbyMaintenancePassParametersList(StandbyMaintenancePassParameters): def test_standby(self, mock_doer): lib.standby_unstandby_list(self.lib_env, True, self.nodes, self.wait) mock_doer.assert_called_once_with( self.lib_env, self.standby_on, self.nodes, self.wait ) def test_unstandby(self, mock_doer): lib.standby_unstandby_list(self.lib_env, False, self.nodes, self.wait) mock_doer.assert_called_once_with( self.lib_env, self.standby_off, self.nodes, self.wait ) def test_maintenance(self, mock_doer): lib.maintenance_unmaintenance_list( self.lib_env, True, self.nodes, self.wait ) mock_doer.assert_called_once_with( self.lib_env, self.maintenance_on, self.nodes, self.wait ) def test_unmaintenance(self, mock_doer): lib.maintenance_unmaintenance_list( self.lib_env, False, self.nodes, self.wait ) mock_doer.assert_called_once_with( self.lib_env, 
self.maintenance_off, self.nodes, self.wait ) @patch_command("_set_instance_attrs_all_nodes") class StandbyMaintenancePassParametersAll(StandbyMaintenancePassParameters): def test_standby(self, mock_doer): lib.standby_unstandby_all(self.lib_env, True, self.wait) mock_doer.assert_called_once_with( self.lib_env, self.standby_on, self.wait ) def test_unstandby(self, mock_doer): lib.standby_unstandby_all(self.lib_env, False, self.wait) mock_doer.assert_called_once_with( self.lib_env, self.standby_off, self.wait ) def test_maintenance(self, mock_doer): lib.maintenance_unmaintenance_all(self.lib_env, True, self.wait) mock_doer.assert_called_once_with( self.lib_env, self.maintenance_on, self.wait ) def test_unmaintenance(self, mock_doer): lib.maintenance_unmaintenance_all(self.lib_env, False, self.wait) mock_doer.assert_called_once_with( self.lib_env, self.maintenance_off, self.wait ) class SetInstaceAttrsBase(TestCase): node_count = 2 def setUp(self): self.cluster_nodes = [fixture_node(i) for i in range(self.node_count)] self.launch = {"pre": False, "post": False} @contextmanager def cib_runner_nodes_contextmanager(env, wait): self.launch["pre"] = True yield ("cib", "mock_runner", self.cluster_nodes) self.launch["post"] = True patcher = patch_command('cib_runner_nodes') self.addCleanup(patcher.stop) patcher.start().side_effect = cib_runner_nodes_contextmanager def assert_context_manager_launched(self, pre=False, post=False): self.assertEqual(self.launch, {"pre": pre, "post": post}) @patch_command("update_node_instance_attrs") @patch_command("get_local_node_name") class SetInstaceAttrsLocal(SetInstaceAttrsBase): node_count = 2 def test_not_possible_with_cib_file(self, mock_name, mock_attrs): assert_raise_library_error( lambda: lib._set_instance_attrs_local_node( create_env(cib_data=""), "attrs", "wait" ), ( severity.ERROR, report_codes.LIVE_ENVIRONMENT_REQUIRED_FOR_LOCAL_NODE, {} ) ) self.assert_context_manager_launched(pre=False, post=False) mock_name.assert_not_called() mock_attrs.assert_not_called() def test_success(self, mock_name, mock_attrs): mock_name.return_value = "node-1" lib._set_instance_attrs_local_node(create_env(), "attrs", False) self.assert_context_manager_launched(pre=True, post=True) mock_name.assert_called_once_with("mock_runner") mock_attrs.assert_called_once_with( "cib", "node-1", "attrs", self.cluster_nodes ) @patch_command("update_node_instance_attrs") class SetInstaceAttrsAll(SetInstaceAttrsBase): node_count = 2 def test_success(self, mock_attrs): lib._set_instance_attrs_all_nodes(create_env(), "attrs", False) self.assertEqual(2, len(mock_attrs.mock_calls)) mock_attrs.assert_has_calls([ mock.call("cib", "node-0", "attrs", self.cluster_nodes), mock.call("cib", "node-1", "attrs", self.cluster_nodes), ]) @patch_command("update_node_instance_attrs") class SetInstaceAttrsList(SetInstaceAttrsBase): node_count = 4 def test_success(self, mock_attrs): lib._set_instance_attrs_node_list( create_env(), "attrs", ["node-1", "node-2"], False ) self.assert_context_manager_launched(pre=True, post=True) self.assertEqual(2, len(mock_attrs.mock_calls)) mock_attrs.assert_has_calls([ mock.call("cib", "node-1", "attrs", self.cluster_nodes), mock.call("cib", "node-2", "attrs", self.cluster_nodes), ]) def test_bad_node(self, mock_attrs): assert_raise_library_error( lambda: lib._set_instance_attrs_node_list( create_env(), "attrs", ["node-1", "node-9"], False ), ( severity.ERROR, report_codes.NODE_NOT_FOUND, { "node": "node-9", } ) ) mock_attrs.assert_not_called() @patch_env("push_cib") class 
CibRunnerNodes(TestCase): def setUp(self): self.env = create_env() @patch_env("get_cib", lambda self: "mocked cib") @patch_env("cmd_runner", lambda self: "mocked cmd_runner") @patch_env("ensure_wait_satisfiable") @patch_command("ClusterState") @patch_command("get_cluster_status_xml") def test_wire_together_all_expected_dependecies( self, get_cluster_status_xml, ClusterState, ensure_wait_satisfiable, push_cib ): ClusterState.return_value = mock.MagicMock( node_section=mock.MagicMock(nodes="nodes") ) get_cluster_status_xml.return_value = "mock get_cluster_status_xml" wait = 10 with lib.cib_runner_nodes(self.env, wait) as (cib, runner, nodes): self.assertEqual(cib, "mocked cib") self.assertEqual(runner, "mocked cmd_runner") self.assertEqual(nodes, "nodes") ensure_wait_satisfiable.assert_called_once_with(wait) get_cluster_status_xml.assert_called_once_with("mocked cmd_runner") ClusterState.assert_called_once_with("mock get_cluster_status_xml") push_cib.assert_called_once_with(wait=wait) @patch_env("ensure_wait_satisfiable", mock.Mock(side_effect=LibraryError)) def test_raises_when_wait_is_not_satisfiable(self, push_cib): def run(): #pylint: disable=unused-variable with lib.cib_runner_nodes(self.env, "wait") as (cib, runner, nodes): pass self.assertRaises(LibraryError, run) push_cib.assert_not_called() pcs-0.9.164/pcs/lib/commands/test/test_quorum.py000066400000000000000000003035571326265502500215570ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import base64 import logging from pcs.test.tools import fixture from pcs.test.tools.assertions import ( ac, assert_raise_library_error, assert_report_item_list_equal, ) from pcs.test.tools.command_env import get_env_tools from pcs.test.tools.custom_mock import MockLibraryReportProcessor from pcs.test.tools.misc import ( get_test_resource as rc, outdent, ) from pcs.test.tools.pcs_unittest import mock, TestCase from pcs.common import report_codes from pcs.lib.env import LibraryEnvironment from pcs.lib.errors import ReportItemSeverity as severity from pcs.lib.corosync.config_facade import ConfigFacade from pcs.lib.commands import quorum as lib class CmanMixin(object): def assert_disabled_on_cman(self, func): assert_raise_library_error( func, ( severity.ERROR, report_codes.CMAN_UNSUPPORTED_COMMAND, {} ) ) @mock.patch.object(LibraryEnvironment, "get_corosync_conf_data") class GetQuorumConfigTest(TestCase, CmanMixin): def setUp(self): self.mock_logger = mock.MagicMock(logging.Logger) self.mock_reporter = MockLibraryReportProcessor() @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: True) def test_disabled_on_cman(self, mock_get_corosync): lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) self.assert_disabled_on_cman(lambda: lib.get_config(lib_env)) mock_get_corosync.assert_not_called() @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: False) def test_enabled_on_cman_if_not_live(self, mock_get_corosync): original_conf = open(rc("corosync.conf")).read() mock_get_corosync.return_value = original_conf lib_env = LibraryEnvironment( self.mock_logger, self.mock_reporter, corosync_conf_data=original_conf ) self.assertEqual( { "options": {}, "device": None, }, lib.get_config(lib_env) ) self.assertEqual([], self.mock_reporter.report_item_list) @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: False) def test_no_options(self, mock_get_corosync): original_conf = open(rc("corosync.conf")).read() mock_get_corosync.return_value = original_conf lib_env = 
LibraryEnvironment(self.mock_logger, self.mock_reporter) self.assertEqual( { "options": {}, "device": None, }, lib.get_config(lib_env) ) self.assertEqual([], self.mock_reporter.report_item_list) @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: False) def test_options(self, mock_get_corosync): original_conf = "quorum {\nwait_for_all: 1\n}\n" mock_get_corosync.return_value = original_conf lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) self.assertEqual( { "options": { "wait_for_all": "1", }, "device": None, }, lib.get_config(lib_env) ) self.assertEqual([], self.mock_reporter.report_item_list) @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: False) def test_device(self, mock_get_corosync): original_conf = """\ quorum { provider: corosync_votequorum wait_for_all: 1 device { option: value model: net net { host: 127.0.0.1 port: 4433 } } } """ mock_get_corosync.return_value = original_conf lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) self.assertEqual( { "options": { "wait_for_all": "1", }, "device": { "model": "net", "model_options": { "host": "127.0.0.1", "port": "4433", }, "generic_options": { "option": "value", }, "heuristics_options": { }, }, }, lib.get_config(lib_env) ) self.assertEqual([], self.mock_reporter.report_item_list) @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: False) def test_device_with_heuristics(self, mock_get_corosync): original_conf = """\ quorum { provider: corosync_votequorum wait_for_all: 1 device { option: value model: net net { host: 127.0.0.1 port: 4433 } heuristics { mode: on exec_ls: test -f /tmp/test } } } """ mock_get_corosync.return_value = original_conf lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) self.assertEqual( { "options": { "wait_for_all": "1", }, "device": { "model": "net", "model_options": { "host": "127.0.0.1", "port": "4433", }, "generic_options": { "option": "value", }, "heuristics_options": { "exec_ls": "test -f /tmp/test", "mode": "on", }, }, }, lib.get_config(lib_env) ) self.assertEqual([], self.mock_reporter.report_item_list) @mock.patch("pcs.lib.sbd.is_auto_tie_breaker_needed") class CheckIfAtbCanBeDisabledTest(TestCase): def setUp(self): self.mock_reporter = MockLibraryReportProcessor() self.mock_runner = "cmd_runner" self.mock_corosync_conf = mock.MagicMock(spec_set=ConfigFacade) def test_atb_no_need_was_disabled_atb_disabled(self, mock_atb_needed): mock_atb_needed.return_value = False self.mock_corosync_conf.is_enabled_auto_tie_breaker.return_value = False lib._check_if_atb_can_be_disabled( self.mock_runner, self.mock_reporter, self.mock_corosync_conf, False ) self.assertEqual([], self.mock_reporter.report_item_list) def test_atb_no_need_was_disabled_atb_enabled(self, mock_atb_needed): mock_atb_needed.return_value = False self.mock_corosync_conf.is_enabled_auto_tie_breaker.return_value = True lib._check_if_atb_can_be_disabled( self.mock_runner, self.mock_reporter, self.mock_corosync_conf, False ) self.assertEqual([], self.mock_reporter.report_item_list) def test_atb_no_need_was_enable_atb_disabled(self, mock_atb_needed): mock_atb_needed.return_value = False self.mock_corosync_conf.is_enabled_auto_tie_breaker.return_value = False lib._check_if_atb_can_be_disabled( self.mock_runner, self.mock_reporter, self.mock_corosync_conf, True ) self.assertEqual([], self.mock_reporter.report_item_list) def test_atb_no_need_was_enabled_atb_enabled(self, mock_atb_needed): mock_atb_needed.return_value = False self.mock_corosync_conf.is_enabled_auto_tie_breaker.return_value = 
True lib._check_if_atb_can_be_disabled( self.mock_runner, self.mock_reporter, self.mock_corosync_conf, True ) self.assertEqual([], self.mock_reporter.report_item_list) def test_atb_needed_was_disabled_atb_disabled(self, mock_atb_needed): mock_atb_needed.return_value = True self.mock_corosync_conf.is_enabled_auto_tie_breaker.return_value = False lib._check_if_atb_can_be_disabled( self.mock_runner, self.mock_reporter, self.mock_corosync_conf, False ) self.assertEqual([], self.mock_reporter.report_item_list) def test_atb_needed_was_disabled_atb_enabled(self, mock_atb_needed): mock_atb_needed.return_value = True self.mock_corosync_conf.is_enabled_auto_tie_breaker.return_value = True lib._check_if_atb_can_be_disabled( self.mock_runner, self.mock_reporter, self.mock_corosync_conf, False ) self.assertEqual([], self.mock_reporter.report_item_list) def test_atb_needed_was_enable_atb_disabled(self, mock_atb_needed): mock_atb_needed.return_value = True self.mock_corosync_conf.is_enabled_auto_tie_breaker.return_value = False report_item = ( severity.ERROR, report_codes.COROSYNC_QUORUM_CANNOT_DISABLE_ATB_DUE_TO_SBD, {}, report_codes.FORCE_OPTIONS ) assert_raise_library_error( lambda: lib._check_if_atb_can_be_disabled( self.mock_runner, self.mock_reporter, self.mock_corosync_conf, True ), report_item ) assert_report_item_list_equal( self.mock_reporter.report_item_list, [report_item] ) def test_atb_needed_was_enabled_atb_enabled(self, mock_atb_needed): mock_atb_needed.return_value = True self.mock_corosync_conf.is_enabled_auto_tie_breaker.return_value = True lib._check_if_atb_can_be_disabled( self.mock_runner, self.mock_reporter, self.mock_corosync_conf, True ) self.assertEqual([], self.mock_reporter.report_item_list) def test_atb_no_need_was_disabled_atb_disabled_force( self, mock_atb_needed ): mock_atb_needed.return_value = False self.mock_corosync_conf.is_enabled_auto_tie_breaker.return_value = False lib._check_if_atb_can_be_disabled( self.mock_runner, self.mock_reporter, self.mock_corosync_conf, False, force=True ) self.assertEqual([], self.mock_reporter.report_item_list) def test_atb_no_need_was_disabled_atb_enabled_force( self, mock_atb_needed ): mock_atb_needed.return_value = False self.mock_corosync_conf.is_enabled_auto_tie_breaker.return_value = True lib._check_if_atb_can_be_disabled( self.mock_runner, self.mock_reporter, self.mock_corosync_conf, False, force=True ) self.assertEqual([], self.mock_reporter.report_item_list) def test_atb_no_need_was_enable_atb_disabled_force(self, mock_atb_needed): mock_atb_needed.return_value = False self.mock_corosync_conf.is_enabled_auto_tie_breaker.return_value = False lib._check_if_atb_can_be_disabled( self.mock_runner, self.mock_reporter, self.mock_corosync_conf, True, force=True ) self.assertEqual([], self.mock_reporter.report_item_list) def test_atb_no_need_was_enabled_atb_enabled_force(self, mock_atb_needed): mock_atb_needed.return_value = False self.mock_corosync_conf.is_enabled_auto_tie_breaker.return_value = True lib._check_if_atb_can_be_disabled( self.mock_runner, self.mock_reporter, self.mock_corosync_conf, True, force=True ) self.assertEqual([], self.mock_reporter.report_item_list) def test_atb_needed_was_disabled_atb_disabled_force( self, mock_atb_needed ): mock_atb_needed.return_value = True self.mock_corosync_conf.is_enabled_auto_tie_breaker.return_value = False lib._check_if_atb_can_be_disabled( self.mock_runner, self.mock_reporter, self.mock_corosync_conf, False, force=True ) self.assertEqual([], self.mock_reporter.report_item_list) def 
test_atb_needed_was_disabled_atb_enabled_force(self, mock_atb_needed): mock_atb_needed.return_value = True self.mock_corosync_conf.is_enabled_auto_tie_breaker.return_value = True lib._check_if_atb_can_be_disabled( self.mock_runner, self.mock_reporter, self.mock_corosync_conf, False, force=True ) self.assertEqual([], self.mock_reporter.report_item_list) def test_atb_needed_was_enable_atb_disabled_force(self, mock_atb_needed): mock_atb_needed.return_value = True self.mock_corosync_conf.is_enabled_auto_tie_breaker.return_value = False lib._check_if_atb_can_be_disabled( self.mock_runner, self.mock_reporter, self.mock_corosync_conf, True, force=True ) assert_report_item_list_equal( self.mock_reporter.report_item_list, [( severity.WARNING, report_codes.COROSYNC_QUORUM_CANNOT_DISABLE_ATB_DUE_TO_SBD, {}, None )] ) def test_atb_needed_was_enabled_atb_enabled_force(self, mock_atb_needed): mock_atb_needed.return_value = True self.mock_corosync_conf.is_enabled_auto_tie_breaker.return_value = True lib._check_if_atb_can_be_disabled( self.mock_runner, self.mock_reporter, self.mock_corosync_conf, True, force=True ) self.assertEqual([], self.mock_reporter.report_item_list) @mock.patch("pcs.lib.commands.quorum._check_if_atb_can_be_disabled") @mock.patch.object(LibraryEnvironment, "push_corosync_conf") @mock.patch.object(LibraryEnvironment, "get_corosync_conf_data") @mock.patch.object(LibraryEnvironment, "cmd_runner") class SetQuorumOptionsTest(TestCase, CmanMixin): def setUp(self): self.mock_logger = mock.MagicMock(logging.Logger) self.mock_reporter = MockLibraryReportProcessor() @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: True) def test_disabled_on_cman( self, mock_runner, mock_get_corosync, mock_push_corosync, mock_check ): lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) self.assert_disabled_on_cman(lambda: lib.set_options(lib_env, {})) mock_get_corosync.assert_not_called() mock_push_corosync.assert_not_called() mock_check.assert_not_called() @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: True) def test_enabled_on_cman_if_not_live( self, mock_runner, mock_get_corosync, mock_push_corosync, mock_check ): original_conf = "invalid {\nconfig: stop after cman test" mock_get_corosync.return_value = original_conf lib_env = LibraryEnvironment( self.mock_logger, self.mock_reporter, corosync_conf_data=original_conf ) options = {"wait_for_all": "1"} assert_raise_library_error( lambda: lib.set_options(lib_env, options), ( severity.ERROR, report_codes.PARSE_ERROR_COROSYNC_CONF_MISSING_CLOSING_BRACE, {} ) ) mock_push_corosync.assert_not_called() mock_check.assert_not_called() mock_runner.assert_not_called() @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: False) def test_success( self, mock_runner, mock_get_corosync, mock_push_corosync, mock_check ): original_conf = open(rc("corosync-3nodes.conf")).read() mock_get_corosync.return_value = original_conf mock_runner.return_value = "cmd_runner" lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) new_options = {"wait_for_all": "1"} lib.set_options(lib_env, new_options) self.assertEqual(1, len(mock_push_corosync.mock_calls)) ac( mock_push_corosync.mock_calls[0][1][0].config.export(), original_conf.replace( "provider: corosync_votequorum\n", "provider: corosync_votequorum\n wait_for_all: 1\n" ) ) self.assertEqual([], self.mock_reporter.report_item_list) self.assertEqual(1, mock_check.call_count) self.assertEqual("cmd_runner", mock_check.call_args[0][0]) self.assertEqual(self.mock_reporter, 
mock_check.call_args[0][1]) self.assertFalse(mock_check.call_args[0][3]) self.assertFalse(mock_check.call_args[0][4]) @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: False) def test_bad_options( self, mock_runner, mock_get_corosync, mock_push_corosync, mock_check ): original_conf = open(rc("corosync.conf")).read() mock_get_corosync.return_value = original_conf lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) new_options = {"invalid": "option"} assert_raise_library_error( lambda: lib.set_options(lib_env, new_options), ( severity.ERROR, report_codes.INVALID_OPTIONS, { "option_names": ["invalid"], "option_type": "quorum", "allowed": [ "auto_tie_breaker", "last_man_standing", "last_man_standing_window", "wait_for_all", ], "allowed_patterns": [], } ) ) mock_push_corosync.assert_not_called() mock_check.assert_not_called() @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: False) def test_bad_config( self, mock_runner, mock_get_corosync, mock_push_corosync, mock_check ): original_conf = "invalid {\nconfig: this is" mock_get_corosync.return_value = original_conf lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) new_options = {"wait_for_all": "1"} assert_raise_library_error( lambda: lib.set_options(lib_env, new_options), ( severity.ERROR, report_codes.PARSE_ERROR_COROSYNC_CONF_MISSING_CLOSING_BRACE, {} ) ) mock_push_corosync.assert_not_called() mock_check.assert_not_called() @mock.patch("pcs.lib.commands.quorum.corosync_live.get_quorum_status_text") @mock.patch.object( LibraryEnvironment, "cmd_runner", lambda self: "mock_runner" ) class StatusTextTest(TestCase, CmanMixin): def setUp(self): self.mock_logger = mock.MagicMock(logging.Logger) self.mock_reporter = MockLibraryReportProcessor() self.lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: True) def test_disabled_on_cman(self, mock_status): self.assert_disabled_on_cman( lambda: lib.status_text(self.lib_env) ) mock_status.assert_not_called() @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: False) def test_success(self, mock_status): mock_status.return_value = "status text" self.assertEqual( lib.status_text(self.lib_env), "status text" ) mock_status.assert_called_once_with("mock_runner") @mock.patch("pcs.lib.commands.quorum.qdevice_client.get_status_text") @mock.patch.object( LibraryEnvironment, "cmd_runner", lambda self: "mock_runner" ) class StatusDeviceTextTest(TestCase, CmanMixin): def setUp(self): self.mock_logger = mock.MagicMock(logging.Logger) self.mock_reporter = MockLibraryReportProcessor() self.lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: True) def test_disabled_on_cman(self, mock_status): self.assert_disabled_on_cman( lambda: lib.status_device_text(self.lib_env) ) mock_status.assert_not_called() @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: False) def test_success(self, mock_status): mock_status.return_value = "status text" self.assertEqual( lib.status_device_text(self.lib_env), "status text" ) mock_status.assert_called_once_with("mock_runner", False) @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: False) def test_success_verbose(self, mock_status): mock_status.return_value = "status text" self.assertEqual( lib.status_device_text(self.lib_env, True), "status text" ) mock_status.assert_called_once_with("mock_runner", True) class AddDeviceNetTest(TestCase): def setUp(self): self.env_assist, self.config = 
get_env_tools(self) self.qnetd_host = "qnetd-host" self.corosync_conf_name = "corosync-3nodes.conf" # the cluster name is defined in the corosync-3nodes.conf file self.cluster_name = "test99" # nodes are defined in the corosync-3nodes.conf file self.cluster_nodes = ["rh7-1", "rh7-2", "rh7-3"] self.certs = { "cacert": { "path": rc("qdevice-certs/qnetd-cacert.crt"), }, "cert_request": { "path": rc("qdevice-certs/qdevice-cert-request.crq"), }, "signed_request": { "path": rc("qdevice-certs/signed-certificate.crt"), }, "final_cert": { "path": rc("qdevice-certs/final-certificate.pk12"), }, } for cert_info in self.certs.values(): # b64encode accepts bytes in python3, so we must read the file as # binary to get bytes instead of a string. In python2, it doesn't # matter. plain = open(cert_info["path"], "rb").read() cert_info["data"] = plain cert_info["b64data"] = base64.b64encode(plain) def fixture_config_http_get_ca_cert(self, output=None): self.config.http.add_communication( "http.get_ca_certificate", [ {"label": self.qnetd_host, }, ], action="remote/qdevice_net_get_ca_certificate", response_code=200, output=(output or self.certs["cacert"]["b64data"]) ) def fixture_config_http_client_init(self): self.config.http.add_communication( "http.client_init", [{"label": node} for node in self.cluster_nodes], action="remote/qdevice_net_client_init_certificate_storage", param_list=[ ("ca_certificate", self.certs["cacert"]["b64data"]), ], response_code=200, ) def fixture_config_runner_get_cert_request(self): self.config.runner.place( "corosync-qdevice-net-certutil -r -n {cluster_name}".format( cluster_name=self.cluster_name ), name="runner.corosync.qdevice.cert-request", stdout="Certificate request stored in {path}".format( path=self.certs["cert_request"]["path"] ) ) def fixture_config_http_sign_cert_request(self, output=None): self.config.http.add_communication( "http.sign_certificate_request", [ {"label": self.qnetd_host, }, ], action="remote/qdevice_net_sign_node_certificate", param_list=[ ( "certificate_request", self.certs["cert_request"]["b64data"] ), ("cluster_name", self.cluster_name), ], response_code=200, output=(output or self.certs["signed_request"]["b64data"]) ) def fixture_config_runner_cert_to_pk12(self, cert_file_path): self.config.runner.place( "corosync-qdevice-net-certutil -M -c {file_path}".format( file_path=cert_file_path ), name="runner.corosync.qdevice.cert-to-pk12", stdout="Certificate request stored in {path}".format( path=self.certs["final_cert"]["path"] ) ) def fixture_config_http_import_final_cert(self): self.config.http.add_communication( "http.client_import_certificate", [{"label": node} for node in self.cluster_nodes], action="remote/qdevice_net_client_import_certificate", param_list=[ ("certificate", self.certs["final_cert"]["b64data"]), ], response_code=200, ) def fixture_config_success( self, expected_corosync_conf, cert_to_pk12_cert_path ): self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.fixture_config_http_get_ca_cert() self.fixture_config_http_client_init() self.fixture_config_runner_get_cert_request() self.fixture_config_http_sign_cert_request() self.fixture_config_runner_cert_to_pk12(cert_to_pk12_cert_path) self.fixture_config_http_import_final_cert() self.config.http.corosync.qdevice_client_enable( node_labels=self.cluster_nodes ) self.config.env.push_corosync_conf( corosync_conf_text=expected_corosync_conf ) self.config.http.corosync.qdevice_client_start( node_labels=self.cluster_nodes ) def 
fixture_reports_success(self): return [ fixture.info(report_codes.QDEVICE_CERTIFICATE_DISTRIBUTION_STARTED), ] + [ fixture.info( report_codes.QDEVICE_CERTIFICATE_ACCEPTED_BY_NODE, node=node ) for node in self.cluster_nodes ] + [ fixture.info( report_codes.SERVICE_ENABLE_STARTED, service="corosync-qdevice" ), ] + [ fixture.info( report_codes.SERVICE_ENABLE_SUCCESS, node=node, service="corosync-qdevice", instance=None ) for node in self.cluster_nodes ] + [ fixture.info( report_codes.SERVICE_START_STARTED, service="corosync-qdevice" ), ] + [ fixture.info( report_codes.SERVICE_START_SUCCESS, node=node, service="corosync-qdevice", instance=None ) for node in self.cluster_nodes ] def test_disabled_on_cman(self): self.config.runner.corosync.version(version="1.4.7") self.env_assist.assert_raise_library_error( lambda: lib.add_device( self.env_assist.get_env(), "net", {"host": "qnetd-host"}, {}, {} ), [ fixture.error(report_codes.CMAN_UNSUPPORTED_COMMAND), ], expected_in_processor=False ) def test_does_not_check_cman_if_not_live(self): (self.config .env.set_corosync_conf_data(open(rc("corosync-3nodes.conf")).read()) ) self.env_assist.assert_raise_library_error( lambda: lib.add_device( self.env_assist.get_env(), "bad model", {}, {}, {} ), [ fixture.error( report_codes.INVALID_OPTION_VALUE, force_code=report_codes.FORCE_QDEVICE_MODEL, option_name="model", option_value="bad model", allowed_values=("net", ) ), ] ) def test_fail_if_device_already_set(self): corosync_conf = open( rc(self.corosync_conf_name) ).read().replace( " provider: corosync_votequorum\n", outdent("""\ provider: corosync_votequorum device { model: net net { algorithm: ffsplit host: qnetd-host } } """ ) ) self.config.runner.corosync.version() self.config.corosync_conf.load_content(corosync_conf) self.env_assist.assert_raise_library_error( lambda: lib.add_device( self.env_assist.get_env(), "net", {"host": "qnetd-host"}, {}, {} ), [ fixture.error(report_codes.QDEVICE_ALREADY_DEFINED), ], expected_in_processor=False ) @mock.patch("pcs.lib.corosync.qdevice_net.client_initialized", lambda: True) @mock.patch("pcs.lib.corosync.qdevice_net.write_tmpfile") def test_success_minimal(self, mock_write_tmpfile): tmpfile_instance = mock.MagicMock() tmpfile_instance.name = rc("file.tmp") mock_write_tmpfile.return_value = tmpfile_instance expected_corosync_conf = open( rc(self.corosync_conf_name) ).read().replace( " provider: corosync_votequorum\n", outdent("""\ provider: corosync_votequorum device { model: net votes: 1 net { algorithm: ffsplit host: qnetd-host } } """ ) ) self.fixture_config_success( expected_corosync_conf, tmpfile_instance.name ) lib.add_device( self.env_assist.get_env(), "net", {"host": self.qnetd_host, "algorithm": "ffsplit"}, {}, {} ) mock_write_tmpfile.assert_called_once_with( self.certs["signed_request"]["data"], binary=True ) self.env_assist.assert_reports(self.fixture_reports_success()) @mock.patch("pcs.lib.corosync.qdevice_net.client_initialized", lambda: True) @mock.patch("pcs.lib.corosync.qdevice_net.write_tmpfile") def test_success_corosync_not_running_not_enabled(self, mock_write_tmpfile): tmpfile_instance = mock.MagicMock() tmpfile_instance.name = rc("file.tmp") mock_write_tmpfile.return_value = tmpfile_instance expected_corosync_conf = open( rc(self.corosync_conf_name) ).read().replace( " provider: corosync_votequorum\n", outdent("""\ provider: corosync_votequorum device { model: net votes: 1 net { algorithm: ffsplit host: qnetd-host } } """ ) ) self.config.runner.corosync.version() 
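# The fixture chain below mirrors fixture_config_success(), but overrides the
# qdevice_client_enable / qdevice_client_start HTTP responses so each node
# answers "corosync is not enabled/running, skipping"; the command is then
# expected to emit SERVICE_ENABLE_SKIPPED / SERVICE_START_SKIPPED reports
# instead of the *_SUCCESS reports asserted in the other success tests.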
self.config.corosync_conf.load(filename=self.corosync_conf_name) self.fixture_config_http_get_ca_cert() self.fixture_config_http_client_init() self.fixture_config_runner_get_cert_request() self.fixture_config_http_sign_cert_request() self.fixture_config_runner_cert_to_pk12(tmpfile_instance.name) self.fixture_config_http_import_final_cert() self.config.http.corosync.qdevice_client_enable( communication_list=[ { "label": label, "output": "corosync is not enabled, skipping", } for label in self.cluster_nodes ] ) self.config.env.push_corosync_conf( corosync_conf_text=expected_corosync_conf ) self.config.http.corosync.qdevice_client_start( communication_list=[ { "label": label, "output": "corosync is not running, skipping", } for label in self.cluster_nodes ] ) lib.add_device( self.env_assist.get_env(), "net", {"host": self.qnetd_host, "algorithm": "ffsplit"}, {}, {} ) mock_write_tmpfile.assert_called_once_with( self.certs["signed_request"]["data"], binary=True ) self.env_assist.assert_reports( [ fixture.info( report_codes.QDEVICE_CERTIFICATE_DISTRIBUTION_STARTED ), ] + [ fixture.info( report_codes.QDEVICE_CERTIFICATE_ACCEPTED_BY_NODE, node=node ) for node in self.cluster_nodes ] + [ fixture.info( report_codes.SERVICE_ENABLE_STARTED, service="corosync-qdevice" ), ] + [ fixture.info( report_codes.SERVICE_ENABLE_SKIPPED, node=node, service="corosync-qdevice", instance=None, reason="corosync is not enabled" ) for node in self.cluster_nodes ] + [ fixture.info( report_codes.SERVICE_START_STARTED, service="corosync-qdevice" ), ] + [ fixture.info( report_codes.SERVICE_START_SKIPPED, node=node, service="corosync-qdevice", instance=None, reason="corosync is not running" ) for node in self.cluster_nodes ] ) @mock.patch("pcs.lib.corosync.qdevice_net.client_initialized", lambda: True) @mock.patch("pcs.lib.corosync.qdevice_net.write_tmpfile") def test_success_heuristics_no_exec(self, mock_write_tmpfile): tmpfile_instance = mock.MagicMock() tmpfile_instance.name = rc("file.tmp") mock_write_tmpfile.return_value = tmpfile_instance expected_corosync_conf = open( rc(self.corosync_conf_name) ).read().replace( " provider: corosync_votequorum\n", outdent("""\ provider: corosync_votequorum device { model: net votes: 1 net { algorithm: ffsplit host: qnetd-host } heuristics { mode: on } } """ ) ) self.fixture_config_success( expected_corosync_conf, tmpfile_instance.name ) lib.add_device( self.env_assist.get_env(), "net", {"host": self.qnetd_host, "algorithm": "ffsplit"}, {}, { "mode": "on"} ) mock_write_tmpfile.assert_called_once_with( self.certs["signed_request"]["data"], binary=True ) self.env_assist.assert_reports( self.fixture_reports_success() + [ fixture.warn( report_codes.COROSYNC_QUORUM_HEURISTICS_ENABLED_WITH_NO_EXEC ) ] ) @mock.patch("pcs.lib.corosync.qdevice_net.client_initialized", lambda: True) @mock.patch("pcs.lib.corosync.qdevice_net.write_tmpfile") def test_success_full(self, mock_write_tmpfile): tmpfile_instance = mock.MagicMock() tmpfile_instance.name = rc("file.tmp") mock_write_tmpfile.return_value = tmpfile_instance expected_corosync_conf = open( rc(self.corosync_conf_name) ).read().replace( " provider: corosync_votequorum\n", outdent("""\ provider: corosync_votequorum device { sync_timeout: 34567 timeout: 23456 model: net votes: 1 net { algorithm: ffsplit connect_timeout: 12345 force_ip_version: 4 host: qnetd-host port: 4433 tie_breaker: lowest } heuristics { exec_ls: test -f /tmp/test exec_ping: ping -q -c 1 "127.0.0.1" interval: 30 mode: on sync_timeout: 15 timeout: 5 } } """ ) ) 
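# All three option dictionaries are exercised at once here: model options
# (rendered into the net { } subsection), generic device options (timeout,
# sync_timeout) and heuristics options. As the expected config above shows,
# the options within each subsection come out sorted by name, with model and
# votes appended after the generic device options.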
self.fixture_config_success( expected_corosync_conf, tmpfile_instance.name ) lib.add_device( self.env_assist.get_env(), "net", { "host": self.qnetd_host, "port": "4433", "algorithm": "ffsplit", "connect_timeout": "12345", "force_ip_version": "4", "tie_breaker": "lowest", }, { "timeout": "23456", "sync_timeout": "34567" }, { "mode": "on", "timeout": "5", "sync_timeout": "15", "interval": "30", "exec_ping": 'ping -q -c 1 "127.0.0.1"', "exec_ls": "test -f /tmp/test", } ) mock_write_tmpfile.assert_called_once_with( self.certs["signed_request"]["data"], binary=True ) self.env_assist.assert_reports(self.fixture_reports_success()) @mock.patch("pcs.lib.corosync.qdevice_net.client_initialized", lambda: True) @mock.patch("pcs.lib.corosync.qdevice_net.write_tmpfile") def test_success_one_node_offline(self, mock_write_tmpfile): node_2_offline_msg = ( "Failed connect to {0}:2224; No route to host".format( self.cluster_nodes[1] ) ) node_2_offline_responses = [ {"label": self.cluster_nodes[0]}, { "label": self.cluster_nodes[1], "was_connected": False, "errno": 7, "error_msg": node_2_offline_msg, }, {"label": self.cluster_nodes[2]}, ] def node_2_offline_warning(command): return fixture.warn( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, node=self.cluster_nodes[1], reason=node_2_offline_msg, command=command ) tmpfile_instance = mock.MagicMock() tmpfile_instance.name = rc("file.tmp") mock_write_tmpfile.return_value = tmpfile_instance expected_corosync_conf = open( rc(self.corosync_conf_name) ).read().replace( " provider: corosync_votequorum\n", outdent("""\ provider: corosync_votequorum device { model: net votes: 1 net { algorithm: ffsplit host: qnetd-host } } """ ) ) self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.fixture_config_http_get_ca_cert() self.config.http.add_communication( "http.client_init", node_2_offline_responses, action="remote/qdevice_net_client_init_certificate_storage", param_list=[ ("ca_certificate", self.certs["cacert"]["b64data"]), ], response_code=200, ) self.fixture_config_runner_get_cert_request() self.fixture_config_http_sign_cert_request() self.fixture_config_runner_cert_to_pk12(tmpfile_instance.name) self.config.http.add_communication( "http.client_import_certificate", node_2_offline_responses, action="remote/qdevice_net_client_import_certificate", param_list=[ ("certificate", self.certs["final_cert"]["b64data"]), ], response_code=200, ) self.config.http.corosync.qdevice_client_enable( communication_list=node_2_offline_responses ) self.config.env.push_corosync_conf( corosync_conf_text=expected_corosync_conf ) self.config.http.corosync.qdevice_client_start( communication_list=node_2_offline_responses ) lib.add_device( self.env_assist.get_env(), "net", {"host": self.qnetd_host, "algorithm": "ffsplit"}, {}, {}, skip_offline_nodes=True ) mock_write_tmpfile.assert_called_once_with( self.certs["signed_request"]["data"], binary=True ) self.env_assist.assert_reports([ fixture.info(report_codes.QDEVICE_CERTIFICATE_DISTRIBUTION_STARTED), node_2_offline_warning( "remote/qdevice_net_client_init_certificate_storage" ), fixture.info( report_codes.QDEVICE_CERTIFICATE_ACCEPTED_BY_NODE, node=self.cluster_nodes[0] ), node_2_offline_warning( "remote/qdevice_net_client_import_certificate" ), fixture.info( report_codes.QDEVICE_CERTIFICATE_ACCEPTED_BY_NODE, node=self.cluster_nodes[2] ), fixture.info( report_codes.SERVICE_ENABLE_STARTED, service="corosync-qdevice" ), fixture.info( report_codes.SERVICE_ENABLE_SUCCESS, 
node=self.cluster_nodes[0], service="corosync-qdevice" ), node_2_offline_warning("remote/qdevice_client_enable"), fixture.info( report_codes.SERVICE_ENABLE_SUCCESS, node=self.cluster_nodes[2], service="corosync-qdevice" ), fixture.info( report_codes.SERVICE_START_STARTED, service="corosync-qdevice" ), fixture.info( report_codes.SERVICE_START_SUCCESS, node=self.cluster_nodes[0], service="corosync-qdevice" ), node_2_offline_warning("remote/qdevice_client_start"), fixture.info( report_codes.SERVICE_START_SUCCESS, node=self.cluster_nodes[2], service="corosync-qdevice" ), ]) def test_success_file_minimal(self): original_corosync_conf = open(rc(self.corosync_conf_name)).read() expected_corosync_conf = original_corosync_conf.replace( " provider: corosync_votequorum\n", outdent("""\ provider: corosync_votequorum device { model: net votes: 1 net { algorithm: ffsplit host: qnetd-host } } """ ) ) (self.config .env.set_corosync_conf_data(original_corosync_conf) .env.push_corosync_conf( corosync_conf_text=expected_corosync_conf ) ) lib.add_device( self.env_assist.get_env(), "net", {"host": "qnetd-host", "algorithm": "ffsplit"}, {}, {} ) def test_success_file_full(self): expected_corosync_conf = open( rc(self.corosync_conf_name) ).read().replace( " provider: corosync_votequorum\n", outdent("""\ provider: corosync_votequorum device { sync_timeout: 34567 timeout: 23456 model: net votes: 1 net { algorithm: ffsplit connect_timeout: 12345 force_ip_version: 4 host: qnetd-host port: 4433 tie_breaker: lowest } heuristics { exec_ls: test -f /tmp/test exec_ping: ping -q -c 1 "127.0.0.1" interval: 30 mode: on sync_timeout: 15 timeout: 5 } } """ ) ) (self.config .env.set_corosync_conf_data( open(rc(self.corosync_conf_name)).read() ) .env.push_corosync_conf( corosync_conf_text=expected_corosync_conf ) ) lib.add_device( self.env_assist.get_env(), "net", { "host": self.qnetd_host, "port": "4433", "algorithm": "ffsplit", "connect_timeout": "12345", "force_ip_version": "4", "tie_breaker": "lowest", }, { "timeout": "23456", "sync_timeout": "34567" }, { "mode": "on", "timeout": "5", "sync_timeout": "15", "interval": "30", "exec_ping": 'ping -q -c 1 "127.0.0.1"', "exec_ls": "test -f /tmp/test", } ) def test_invalid_options(self): (self.config .runner.corosync.version() .corosync_conf.load(filename=self.corosync_conf_name) ) self.env_assist.assert_raise_library_error( lambda: lib.add_device( self.env_assist.get_env(), "net", {"host": "qnetd-host", "algorithm": "ffsplit"}, {"bad_option": "bad_value"}, {"mode": "bad-mode", "bad_heur": "abc", "exec_bad.name": ""} ), [ fixture.error( report_codes.INVALID_OPTIONS, force_code=report_codes.FORCE_OPTIONS, option_names=["bad_option"], option_type="quorum device", allowed=["sync_timeout", "timeout"], allowed_patterns=[] ), fixture.error( report_codes.INVALID_OPTION_VALUE, force_code=report_codes.FORCE_OPTIONS, option_name="mode", option_value="bad-mode", allowed_values=("off", "on", "sync") ), fixture.error( report_codes.INVALID_OPTIONS, force_code=report_codes.FORCE_OPTIONS, option_names=["bad_heur"], option_type="heuristics", allowed=["interval", "mode", "sync_timeout", "timeout"], allowed_patterns=["exec_NAME"] ), fixture.error( report_codes.INVALID_USERDEFINED_OPTIONS, option_names=["exec_bad.name"], option_type="heuristics", allowed_description=( "exec_NAME cannot contain '.:{}#' and whitespace " "characters" ) ), ] ) @mock.patch("pcs.lib.corosync.qdevice_net.client_initialized", lambda: True) @mock.patch("pcs.lib.corosync.qdevice_net.write_tmpfile") def 
test_invalid_options_forced(self, mock_write_tmpfile): tmpfile_instance = mock.MagicMock() tmpfile_instance.name = rc("file.tmp") mock_write_tmpfile.return_value = tmpfile_instance expected_corosync_conf = open( rc(self.corosync_conf_name) ).read().replace( " provider: corosync_votequorum\n", outdent("""\ provider: corosync_votequorum device { bad_option: bad_value model: net votes: 1 net { algorithm: ffsplit host: qnetd-host } heuristics { bad_heur: abc mode: bad-mode } } """ ) ) self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.fixture_config_http_get_ca_cert() self.fixture_config_http_client_init() self.fixture_config_runner_get_cert_request() self.fixture_config_http_sign_cert_request() self.fixture_config_runner_cert_to_pk12(tmpfile_instance.name) self.fixture_config_http_import_final_cert() self.config.http.corosync.qdevice_client_enable( node_labels=self.cluster_nodes ) self.config.env.push_corosync_conf( corosync_conf_text=expected_corosync_conf ) self.config.http.corosync.qdevice_client_start( node_labels=self.cluster_nodes ) lib.add_device( self.env_assist.get_env(), "net", {"host": "qnetd-host", "algorithm": "ffsplit"}, {"bad_option": "bad_value"}, {"mode": "bad-mode", "bad_heur": "abc",}, force_options=True ) self.env_assist.assert_reports([ fixture.warn( report_codes.INVALID_OPTIONS, option_names=["bad_option"], option_type="quorum device", allowed=["sync_timeout", "timeout"], allowed_patterns=[] ), fixture.warn( report_codes.INVALID_OPTION_VALUE, option_name="mode", option_value="bad-mode", allowed_values=("off", "on", "sync") ), fixture.warn( report_codes.INVALID_OPTIONS, option_names=["bad_heur"], option_type="heuristics", allowed=["interval", "mode", "sync_timeout", "timeout"], allowed_patterns=["exec_NAME"] ), fixture.info(report_codes.QDEVICE_CERTIFICATE_DISTRIBUTION_STARTED), ] + [ fixture.info( report_codes.QDEVICE_CERTIFICATE_ACCEPTED_BY_NODE, node=node ) for node in self.cluster_nodes ] + [ fixture.info( report_codes.SERVICE_ENABLE_STARTED, service="corosync-qdevice" ), ] + [ fixture.info( report_codes.SERVICE_ENABLE_SUCCESS, node=node, service="corosync-qdevice" ) for node in self.cluster_nodes ] + [ fixture.info( report_codes.SERVICE_START_STARTED, service="corosync-qdevice" ), ] + [ fixture.info( report_codes.SERVICE_START_SUCCESS, node=node, service="corosync-qdevice" ) for node in self.cluster_nodes ]) def test_invalid_model(self): (self.config .runner.corosync.version() .corosync_conf.load(filename=self.corosync_conf_name) ) self.env_assist.assert_raise_library_error( lambda: lib.add_device( self.env_assist.get_env(), "bad_model", {}, {}, {} ), [ fixture.error( report_codes.INVALID_OPTION_VALUE, force_code=report_codes.FORCE_QDEVICE_MODEL, option_name="model", option_value="bad_model", allowed_values=("net", ), ), ] ) def test_invalid_model_forced(self): expected_corosync_conf = open( rc(self.corosync_conf_name) ).read().replace( " provider: corosync_votequorum\n", outdent("""\ provider: corosync_votequorum device { model: bad_model } """ ) ) self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) # model is not "net" - do not set up certificates self.config.http.corosync.qdevice_client_enable( node_labels=self.cluster_nodes ) self.config.env.push_corosync_conf( corosync_conf_text=expected_corosync_conf ) self.config.http.corosync.qdevice_client_start( node_labels=self.cluster_nodes ) lib.add_device( self.env_assist.get_env(), "bad_model", {}, {}, {}, 
force_model=True ) self.env_assist.assert_reports([ fixture.warn( report_codes.INVALID_OPTION_VALUE, option_name="model", option_value="bad_model", allowed_values=("net", ), ), ] + [ fixture.info( report_codes.SERVICE_ENABLE_STARTED, service="corosync-qdevice" ), ] + [ fixture.info( report_codes.SERVICE_ENABLE_SUCCESS, node=node, service="corosync-qdevice" ) for node in self.cluster_nodes ] + [ fixture.info( report_codes.SERVICE_START_STARTED, service="corosync-qdevice" ), ] + [ fixture.info( report_codes.SERVICE_START_SUCCESS, node=node, service="corosync-qdevice" ) for node in self.cluster_nodes ]) def test_get_ca_cert_error_communication(self): self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.config.http.add_communication( "http.get_ca_certificate", [ {"label": self.qnetd_host, }, ], action="remote/qdevice_net_get_ca_certificate", response_code=400, output="Unable to read certificate: error description" ) self.env_assist.assert_raise_library_error( lambda: lib.add_device( self.env_assist.get_env(), "net", {"host": "qnetd-host", "algorithm": "ffsplit"}, {"timeout": "20"}, {}, skip_offline_nodes=True # test that this does not matter ), [], # an empty LibraryError is raised expected_in_processor=False ) self.env_assist.assert_reports([ fixture.info(report_codes.QDEVICE_CERTIFICATE_DISTRIBUTION_STARTED), fixture.error( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, force_code=None, node=self.qnetd_host, command="remote/qdevice_net_get_ca_certificate", reason="Unable to read certificate: error description", ) ]) def test_get_ca_cert_error_decode_certificate(self): self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.fixture_config_http_get_ca_cert( output="invalid base64 encoded certificate data" ) self.env_assist.assert_raise_library_error( lambda: lib.add_device( self.env_assist.get_env(), "net", {"host": self.qnetd_host, "algorithm": "ffsplit"}, {"timeout": "20"}, {}, skip_offline_nodes=True # test that this does not matter ), [], # an empty LibraryError is raised expected_in_processor=False ) self.env_assist.assert_reports([ fixture.info(report_codes.QDEVICE_CERTIFICATE_DISTRIBUTION_STARTED), fixture.error( report_codes.INVALID_RESPONSE_FORMAT, force_code=None, node=self.qnetd_host, ) ]) def test_error_client_setup(self): self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.fixture_config_http_get_ca_cert() self.config.http.add_communication( "http.client_init", [ {"label": self.cluster_nodes[0]}, { "label": self.cluster_nodes[1], "response_code": 400, "output": "some error occurred", }, {"label": self.cluster_nodes[2]}, ], action="remote/qdevice_net_client_init_certificate_storage", param_list=[ ("ca_certificate", self.certs["cacert"]["b64data"]), ], response_code=200, ) self.env_assist.assert_raise_library_error( lambda: lib.add_device( self.env_assist.get_env(), "net", {"host": "qnetd-host", "algorithm": "ffsplit"}, {"timeout": "20"}, {} ), [], # an empty LibraryError is raised expected_in_processor=False ) self.env_assist.assert_reports([ fixture.info(report_codes.QDEVICE_CERTIFICATE_DISTRIBUTION_STARTED), fixture.error( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, force_code=report_codes.SKIP_OFFLINE_NODES, node=self.cluster_nodes[1], command="remote/qdevice_net_client_init_certificate_storage", reason="some error occurred", ) ]) @mock.patch("pcs.lib.corosync.qdevice_net.client_initialized", lambda: 
True) def test_generate_cert_request_error(self): self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.fixture_config_http_get_ca_cert() self.fixture_config_http_client_init() self.config.runner.place( "corosync-qdevice-net-certutil -r -n {cluster_name}".format( cluster_name=self.cluster_name ), name="runner.corosync.qdevice.cert-request", stderr="some error occurred", returncode=1 ) self.env_assist.assert_raise_library_error( lambda: lib.add_device( self.env_assist.get_env(), "net", {"host": "qnetd-host", "algorithm": "ffsplit"}, {"timeout": "20"}, {} ), [ fixture.error( report_codes.QDEVICE_INITIALIZATION_ERROR, force_code=None, model="net", reason="some error occurred", ), ], expected_in_processor=False ) self.env_assist.assert_reports([ fixture.info(report_codes.QDEVICE_CERTIFICATE_DISTRIBUTION_STARTED), ]) @mock.patch("pcs.lib.corosync.qdevice_net.client_initialized", lambda: True) def test_sign_certificate_error_communication(self): self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.fixture_config_http_get_ca_cert() self.fixture_config_http_client_init() self.fixture_config_runner_get_cert_request() self.config.http.add_communication( "http.sign_certificate_request", [ {"label": self.qnetd_host, }, ], action="remote/qdevice_net_sign_node_certificate", param_list=[ ( "certificate_request", self.certs["cert_request"]["b64data"] ), ("cluster_name", self.cluster_name), ], response_code=400, output="some error occurred" ) self.env_assist.assert_raise_library_error( lambda: lib.add_device( self.env_assist.get_env(), "net", {"host": "qnetd-host", "algorithm": "ffsplit"}, {"timeout": "20"}, {} ), [], # an empty LibraryError is raised expected_in_processor=False ) self.env_assist.assert_reports([ fixture.info(report_codes.QDEVICE_CERTIFICATE_DISTRIBUTION_STARTED), fixture.error( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, force_code=None, node=self.qnetd_host, command="remote/qdevice_net_sign_node_certificate", reason="some error occurred", ) ]) @mock.patch("pcs.lib.corosync.qdevice_net.client_initialized", lambda: True) def test_sign_certificate_error_decode_certificate(self): self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.fixture_config_http_get_ca_cert() self.fixture_config_http_client_init() self.fixture_config_runner_get_cert_request() self.fixture_config_http_sign_cert_request( output="invalid base64 encoded certificate data" ) self.env_assist.assert_raise_library_error( lambda: lib.add_device( self.env_assist.get_env(), "net", {"host": "qnetd-host", "algorithm": "ffsplit"}, {"timeout": "20"}, {} ), [], # an empty LibraryError is raised expected_in_processor=False ) self.env_assist.assert_reports([ fixture.info(report_codes.QDEVICE_CERTIFICATE_DISTRIBUTION_STARTED), fixture.error( report_codes.INVALID_RESPONSE_FORMAT, force_code=None, node=self.qnetd_host, ) ]) @mock.patch("pcs.lib.corosync.qdevice_net.client_initialized", lambda: True) @mock.patch("pcs.lib.corosync.qdevice_net.write_tmpfile") def test_certificate_to_pk12_error(self, mock_write_tmpfile): tmpfile_instance = mock.MagicMock() tmpfile_instance.name = rc("file.tmp") mock_write_tmpfile.return_value = tmpfile_instance self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.fixture_config_http_get_ca_cert() self.fixture_config_http_client_init() self.fixture_config_runner_get_cert_request() 
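# Everything up to this point succeeds; the failure is injected below into
# the "corosync-qdevice-net-certutil -M" conversion step (returncode=1), so
# the only info report expected afterwards is
# QDEVICE_CERTIFICATE_DISTRIBUTION_STARTED, alongside the
# QDEVICE_CERTIFICATE_IMPORT_ERROR error itself.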
self.fixture_config_http_sign_cert_request() self.config.runner.place( "corosync-qdevice-net-certutil -M -c {file_path}".format( file_path=tmpfile_instance.name ), name="runner.corosync.qdevice.cert-to-pk12", stderr="some error occurred", returncode=1 ) self.env_assist.assert_raise_library_error( lambda: lib.add_device( self.env_assist.get_env(), "net", {"host": "qnetd-host", "algorithm": "ffsplit"}, {"timeout": "20"}, {} ), [ fixture.error( report_codes.QDEVICE_CERTIFICATE_IMPORT_ERROR, force_code=None, reason="some error occurred", ), ], expected_in_processor=False ) self.env_assist.assert_reports([ fixture.info(report_codes.QDEVICE_CERTIFICATE_DISTRIBUTION_STARTED), ]) @mock.patch("pcs.lib.corosync.qdevice_net.client_initialized", lambda: True) @mock.patch("pcs.lib.corosync.qdevice_net.write_tmpfile") def test_client_import_cert_error(self, mock_write_tmpfile): tmpfile_instance = mock.MagicMock() tmpfile_instance.name = rc("file.tmp") mock_write_tmpfile.return_value = tmpfile_instance self.config.runner.corosync.version() self.config.corosync_conf.load(filename=self.corosync_conf_name) self.fixture_config_http_get_ca_cert() self.fixture_config_http_client_init() self.fixture_config_runner_get_cert_request() self.fixture_config_http_sign_cert_request() self.fixture_config_runner_cert_to_pk12(tmpfile_instance.name) self.config.http.add_communication( "http.client_import_certificate", [ {"label": self.cluster_nodes[0]}, { "label": self.cluster_nodes[1], "response_code": 400, "output": "some error occurred", }, {"label": self.cluster_nodes[2]}, ], action="remote/qdevice_net_client_import_certificate", param_list=[ ("certificate", self.certs["final_cert"]["b64data"]), ], response_code=200, ) self.env_assist.assert_raise_library_error( lambda: lib.add_device( self.env_assist.get_env(), "net", {"host": "qnetd-host", "algorithm": "ffsplit"}, {"timeout": "20"}, {} ), [], # an empty LibraryError is raised expected_in_processor=False ) self.env_assist.assert_reports([ fixture.info(report_codes.QDEVICE_CERTIFICATE_DISTRIBUTION_STARTED), fixture.info( report_codes.QDEVICE_CERTIFICATE_ACCEPTED_BY_NODE, node=self.cluster_nodes[0], ), fixture.error( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, force_code=report_codes.SKIP_OFFLINE_NODES, node=self.cluster_nodes[1], command="remote/qdevice_net_client_import_certificate", reason="some error occurred", ), fixture.info( report_codes.QDEVICE_CERTIFICATE_ACCEPTED_BY_NODE, node=self.cluster_nodes[2], ), ]) class RemoveDeviceHeuristics(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(self) def test_disabled_on_cman(self): self.config.runner.corosync.version(version="1.4.7") self.env_assist.assert_raise_library_error( lambda: lib.remove_device_heuristics(self.env_assist.get_env()), [ fixture.error(report_codes.CMAN_UNSUPPORTED_COMMAND), ], expected_in_processor=False ) def test_enabled_on_cman_if_not_live(self): (self.config .env.set_corosync_conf_data(open(rc("corosync-3nodes.conf")).read()) ) self.env_assist.assert_raise_library_error( lambda: lib.remove_device_heuristics(self.env_assist.get_env()), [ fixture.error(report_codes.QDEVICE_NOT_DEFINED), ], expected_in_processor=False ) def test_success(self): config_no_heuristics = open(rc("corosync-3nodes-qdevice.conf")).read() config_heuristics = config_no_heuristics.replace( outdent("""\ net { host: 127.0.0.1 } """), outdent("""\ net { host: 127.0.0.1 } heuristics { mode: on exec_ls: test -f /tmp/test } """) ) self.config.runner.corosync.version() 
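# Removing heuristics is a pure corosync.conf rewrite: load a config that
# contains a heuristics { } section and push the same config back without
# it. No per-node certificate cleanup is involved, unlike full device
# removal in RemoveDeviceNetTest below.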
self.config.corosync_conf.load_content(config_heuristics) self.config.env.push_corosync_conf( corosync_conf_text=config_no_heuristics ) lib.remove_device_heuristics(self.env_assist.get_env()) @mock.patch("pcs.lib.external.is_systemctl", lambda: True) class RemoveDeviceNetTest(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(self) def conf_2nodes(self, quorum_line): cluster_nodes = ["rh7-1", "rh7-2"] original_conf = open(rc("corosync-qdevice.conf")).read() expected_conf = original_conf.replace( outdent("""\ quorum { provider: corosync_votequorum device { model: net net { host: 127.0.0.1 } } } """ ), # cluster consists of two nodes, two_node must be set outdent("""\ quorum { provider: corosync_votequorum """ + quorum_line + """ } """ ) ) return cluster_nodes, original_conf, expected_conf def conf_3nodes(self): cluster_nodes = ["rh7-1", "rh7-2", "rh7-3"] original_conf = open(rc("corosync-3nodes-qdevice.conf")).read() expected_conf = original_conf.replace( outdent("""\ quorum { provider: corosync_votequorum device { model: net net { host: 127.0.0.1 } } } """ ), outdent("""\ quorum { provider: corosync_votequorum } """ ) ) return cluster_nodes, original_conf, expected_conf def fixture_config_http_qdevice_net_destroy(self, nodes, responses=None): responses = responses or [{"label": node} for node in nodes] self.config.http.add_communication( "http.qdevice_net_destroy", responses, action="remote/qdevice_net_client_destroy", response_code=200 ) def fixture_config_runner_sbd_installed(self, sbd_installed): units = { "non_sbd": "enabled", } if sbd_installed: units["sbd"] = "enabled" # enabled/disabled doesn't matter self.config.runner.systemctl.list_unit_files( units, before="http.corosync.qdevice_client_disable_requests", ) def fixture_config_runner_sbd_enabled(self, sbd_enabled): self.config.runner.systemctl.is_enabled( "sbd", sbd_enabled, before="http.corosync.qdevice_client_disable_requests", ) def fixture_config_success( self, cluster_nodes, original_corosync_conf, expected_corosync_conf ): self.config.runner.corosync.version() self.config.corosync_conf.load_content(original_corosync_conf) self.config.http.corosync.qdevice_client_disable( node_labels=cluster_nodes ) self.config.http.corosync.qdevice_client_stop(node_labels=cluster_nodes) self.fixture_config_http_qdevice_net_destroy(cluster_nodes) self.config.env.push_corosync_conf( corosync_conf_text=expected_corosync_conf ) def fixture_config_success_sbd_part( self, sbd_installed, sbd_enabled ): self.fixture_config_runner_sbd_installed(sbd_installed) if sbd_installed: self.fixture_config_runner_sbd_enabled(sbd_enabled) def fixture_reports_success(self, cluster_nodes, atb_enabled=False): reports = [] if atb_enabled: reports.append( fixture.warn(report_codes.SBD_REQUIRES_ATB) ) reports += [ fixture.info( report_codes.SERVICE_DISABLE_STARTED, service="corosync-qdevice", ), ] + [ fixture.info( report_codes.SERVICE_DISABLE_SUCCESS, node=node, service="corosync-qdevice", ) for node in cluster_nodes ] + [ fixture.info( report_codes.SERVICE_STOP_STARTED, service="corosync-qdevice", ), ] + [ fixture.info( report_codes.SERVICE_STOP_SUCCESS, node=node, service="corosync-qdevice", ) for node in cluster_nodes ] + [ fixture.info(report_codes.QDEVICE_CERTIFICATE_REMOVAL_STARTED), ] + [ fixture.info( report_codes.QDEVICE_CERTIFICATE_REMOVED_FROM_NODE, node=node, ) for node in cluster_nodes ] return reports @mock.patch("pcs.lib.sbd.get_local_sbd_device_list", lambda: []) def test_disabled_on_cman(self): 
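# In these fixtures a CMAN (corosync 1.x) cluster is modelled by having the
# runner report corosync version 1.4.7; qdevice management needs corosync
# 2.x votequorum, so the command must fail with CMAN_UNSUPPORTED_COMMAND.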
self.config.runner.corosync.version(version="1.4.7") self.env_assist.assert_raise_library_error( lambda: lib.remove_device(self.env_assist.get_env()), [ fixture.error(report_codes.CMAN_UNSUPPORTED_COMMAND), ], expected_in_processor=False ) @mock.patch("pcs.lib.sbd.get_local_sbd_device_list", lambda: []) def test_does_not_check_cman_if_not_live(self): (self.config .env.set_corosync_conf_data(open(rc("corosync-3nodes.conf")).read()) ) self.env_assist.assert_raise_library_error( lambda: lib.remove_device(self.env_assist.get_env()), [ fixture.error(report_codes.QDEVICE_NOT_DEFINED), ], expected_in_processor=False ) @mock.patch("pcs.lib.sbd.get_local_sbd_device_list", lambda: []) def test_fail_if_device_not_set(self): self.config.runner.corosync.version() self.config.corosync_conf.load_content( open(rc("corosync-3nodes.conf")).read() ) self.env_assist.assert_raise_library_error( lambda: lib.remove_device(self.env_assist.get_env()), [ fixture.error(report_codes.QDEVICE_NOT_DEFINED), ], expected_in_processor=False ) @mock.patch("pcs.lib.sbd.get_local_sbd_device_list", lambda: []) def test_success_2nodes_no_sbd(self): # cluster consists of two nodes, two_node must be set cluster_nodes, original_conf, expected_conf = self.conf_2nodes( "two_node: 1" ) self.fixture_config_success(cluster_nodes, original_conf, expected_conf) self.fixture_config_success_sbd_part(False, False) lib.remove_device(self.env_assist.get_env()) self.env_assist.assert_reports( self.fixture_reports_success(cluster_nodes) ) @mock.patch("pcs.lib.sbd.get_local_sbd_device_list", lambda: []) def test_success_2nodes_sbd_installed_disabled(self): # cluster consists of two nodes, two_node must be set cluster_nodes, original_conf, expected_conf = self.conf_2nodes( "two_node: 1" ) self.fixture_config_success(cluster_nodes, original_conf, expected_conf) self.fixture_config_success_sbd_part(True, False) lib.remove_device(self.env_assist.get_env()) self.env_assist.assert_reports( self.fixture_reports_success(cluster_nodes, atb_enabled=False) ) @mock.patch("pcs.lib.sbd.get_local_sbd_device_list", lambda: []) def test_success_2nodes_sbd_enabled(self): # cluster consists of two nodes and SBD is in use, so two_node must be # disabled and auto_tie_breaker must be enabled cluster_nodes, original_conf, expected_conf = self.conf_2nodes( "auto_tie_breaker: 1" ) self.fixture_config_success(cluster_nodes, original_conf, expected_conf) self.fixture_config_success_sbd_part(True, True) lib.remove_device(self.env_assist.get_env()) self.env_assist.assert_reports( self.fixture_reports_success(cluster_nodes, atb_enabled=True) ) @mock.patch("pcs.lib.sbd.get_local_sbd_device_list", lambda: ["/dev/sdb"]) def test_success_2nodes_sbd_enabled_with_devices(self): # cluster consists of two nodes, but SBD with shared storage is in use # auto tie breaker doesn't need to be enabled cluster_nodes, original_conf, expected_conf = self.conf_2nodes( "two_node: 1" ) self.fixture_config_success(cluster_nodes, original_conf, expected_conf) self.fixture_config_success_sbd_part(True, True) lib.remove_device(self.env_assist.get_env()) self.env_assist.assert_reports( self.fixture_reports_success(cluster_nodes, atb_enabled=False) ) @mock.patch("pcs.lib.sbd.get_local_sbd_device_list", lambda: []) def test_success_3nodes(self): # with odd number of nodes it doesn't matter if sbd is used cluster_nodes, original_conf, expected_conf = self.conf_3nodes() self.fixture_config_success(cluster_nodes, original_conf, expected_conf) lib.remove_device(self.env_assist.get_env())
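# The two-node variants above cover the SBD decision matrix for removing a
# qdevice:
#   no SBD, or SBD installed but disabled -> keep "two_node: 1"
#   SBD enabled without a shared device   -> drop two_node, set
#                                            "auto_tie_breaker: 1"
#   SBD enabled with a shared device      -> keep "two_node: 1"
# With an odd node count (this test) neither option is needed.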
self.env_assist.assert_reports( self.fixture_reports_success(cluster_nodes) ) @mock.patch("pcs.lib.sbd.get_local_sbd_device_list", lambda: []) def test_success_3nodes_file(self): # with odd number of nodes it doesn't matter if sbd is used dummy_cluster_nodes, original_conf, expected_conf = self.conf_3nodes() (self.config .env.set_corosync_conf_data(original_conf) .env.push_corosync_conf(corosync_conf_text=expected_conf) ) lib.remove_device(self.env_assist.get_env()) self.env_assist.assert_reports([]) @mock.patch("pcs.lib.sbd.get_local_sbd_device_list", lambda: []) def test_success_3nodes_one_node_offline(self): # with odd number of nodes it doesn't matter if sbd is used cluster_nodes, original_conf, expected_conf = self.conf_3nodes() node_2_offline_msg = ( "Failed connect to {0}:2224; No route to host".format( cluster_nodes[1] ) ) node_2_offline_responses = [ {"label": cluster_nodes[0]}, { "label": cluster_nodes[1], "was_connected": False, "errno": 7, "error_msg": node_2_offline_msg, }, {"label": cluster_nodes[2]}, ] def node_2_offline_warning(command): return fixture.warn( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, node=cluster_nodes[1], reason=node_2_offline_msg, command=command ) self.config.runner.corosync.version() self.config.corosync_conf.load_content(original_conf) self.config.http.corosync.qdevice_client_disable( communication_list=node_2_offline_responses ) self.config.http.corosync.qdevice_client_stop( communication_list=node_2_offline_responses ) self.fixture_config_http_qdevice_net_destroy( cluster_nodes, node_2_offline_responses ) self.config.env.push_corosync_conf( corosync_conf_text=expected_conf ) lib.remove_device(self.env_assist.get_env(), skip_offline_nodes=True) self.env_assist.assert_reports([ fixture.info( report_codes.SERVICE_DISABLE_STARTED, service="corosync-qdevice", ), fixture.info( report_codes.SERVICE_DISABLE_SUCCESS, node=cluster_nodes[0], service="corosync-qdevice", ), node_2_offline_warning("remote/qdevice_client_disable"), fixture.info( report_codes.SERVICE_DISABLE_SUCCESS, node=cluster_nodes[2], service="corosync-qdevice", ), fixture.info( report_codes.SERVICE_STOP_STARTED, service="corosync-qdevice", ), fixture.info( report_codes.SERVICE_STOP_SUCCESS, node=cluster_nodes[0], service="corosync-qdevice", ), node_2_offline_warning("remote/qdevice_client_stop"), fixture.info( report_codes.SERVICE_STOP_SUCCESS, node=cluster_nodes[2], service="corosync-qdevice", ), fixture.info(report_codes.QDEVICE_CERTIFICATE_REMOVAL_STARTED), fixture.info( report_codes.QDEVICE_CERTIFICATE_REMOVED_FROM_NODE, node=cluster_nodes[0], ), node_2_offline_warning("remote/qdevice_net_client_destroy"), fixture.info( report_codes.QDEVICE_CERTIFICATE_REMOVED_FROM_NODE, node=cluster_nodes[2], ), ]) @mock.patch("pcs.lib.sbd.get_local_sbd_device_list", lambda: []) def test_error_disable_qdevice(self): cluster_nodes, original_conf, dummy_expected_conf = self.conf_3nodes() self.config.runner.corosync.version() self.config.corosync_conf.load_content(original_conf) self.config.http.corosync.qdevice_client_disable( communication_list=[ {"label": cluster_nodes[0]}, { "label": cluster_nodes[1], "response_code": 400, "output": "some error occurred", }, {"label": cluster_nodes[2]}, ] ) self.env_assist.assert_raise_library_error( lambda: lib.remove_device( self.env_assist.get_env(), skip_offline_nodes=False ), [], # an empty LibraryError is raised expected_in_processor=False ) self.env_assist.assert_reports([ fixture.info( report_codes.SERVICE_DISABLE_STARTED, 
service="corosync-qdevice", ), fixture.info( report_codes.SERVICE_DISABLE_SUCCESS, node=cluster_nodes[0], service="corosync-qdevice", ), fixture.error( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, force_code=report_codes.SKIP_OFFLINE_NODES, node=cluster_nodes[1], command="remote/qdevice_client_disable", reason="some error occurred", ), fixture.info( report_codes.SERVICE_DISABLE_SUCCESS, node=cluster_nodes[2], service="corosync-qdevice", ), ]) @mock.patch("pcs.lib.sbd.get_local_sbd_device_list", lambda: []) def test_error_stop_qdevice(self): cluster_nodes, original_conf, dummy_expected_conf = self.conf_3nodes() self.config.runner.corosync.version() self.config.corosync_conf.load_content(original_conf) self.config.http.corosync.qdevice_client_disable( node_labels=cluster_nodes ) self.config.http.corosync.qdevice_client_stop( communication_list=[ {"label": cluster_nodes[0]}, { "label": cluster_nodes[1], "response_code": 400, "output": "some error occurred", }, {"label": cluster_nodes[2]}, ], ) self.env_assist.assert_raise_library_error( lambda: lib.remove_device( self.env_assist.get_env(), skip_offline_nodes=False ), [], # an empty LibraryError is raised expected_in_processor=False ) self.env_assist.assert_reports([ fixture.info( report_codes.SERVICE_DISABLE_STARTED, service="corosync-qdevice", ), ] + [ fixture.info( report_codes.SERVICE_DISABLE_SUCCESS, node=node, service="corosync-qdevice", ) for node in cluster_nodes ] + [ fixture.info( report_codes.SERVICE_STOP_STARTED, service="corosync-qdevice", ), fixture.info( report_codes.SERVICE_STOP_SUCCESS, node=cluster_nodes[0], service="corosync-qdevice", ), fixture.error( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, force_code=report_codes.SKIP_OFFLINE_NODES, node=cluster_nodes[1], command="remote/qdevice_client_stop", reason="some error occurred", ), fixture.info( report_codes.SERVICE_STOP_SUCCESS, node=cluster_nodes[2], service="corosync-qdevice", ), ]) @mock.patch("pcs.lib.sbd.get_local_sbd_device_list", lambda: []) def test_error_destroy_qdevice_net(self): cluster_nodes, original_conf, dummy_expected_conf = self.conf_3nodes() self.config.runner.corosync.version() self.config.corosync_conf.load_content(original_conf) self.config.http.corosync.qdevice_client_disable( node_labels=cluster_nodes ) self.config.http.corosync.qdevice_client_stop(node_labels=cluster_nodes) self.fixture_config_http_qdevice_net_destroy( cluster_nodes, [ {"label": cluster_nodes[0]}, { "label": cluster_nodes[1], "response_code": 400, "output": "some error occurred", }, {"label": cluster_nodes[2]}, ], ) self.env_assist.assert_raise_library_error( lambda: lib.remove_device( self.env_assist.get_env(), skip_offline_nodes=False ), [], # an empty LibraryError is raised expected_in_processor=False ) self.env_assist.assert_reports([ fixture.info( report_codes.SERVICE_DISABLE_STARTED, service="corosync-qdevice", ), ] + [ fixture.info( report_codes.SERVICE_DISABLE_SUCCESS, node=node, service="corosync-qdevice", ) for node in cluster_nodes ] + [ fixture.info( report_codes.SERVICE_STOP_STARTED, service="corosync-qdevice", ), ] + [ fixture.info( report_codes.SERVICE_STOP_SUCCESS, node=node, service="corosync-qdevice", ) for node in cluster_nodes ] + [ fixture.info(report_codes.QDEVICE_CERTIFICATE_REMOVAL_STARTED), fixture.info( report_codes.QDEVICE_CERTIFICATE_REMOVED_FROM_NODE, node=cluster_nodes[0], ), fixture.error( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, force_code=report_codes.SKIP_OFFLINE_NODES, node=cluster_nodes[1], 
command="remote/qdevice_net_client_destroy", reason="some error occurred", ), fixture.info( report_codes.QDEVICE_CERTIFICATE_REMOVED_FROM_NODE, node=cluster_nodes[2], ), ]) @mock.patch.object(LibraryEnvironment, "push_corosync_conf") @mock.patch.object(LibraryEnvironment, "get_corosync_conf_data") class UpdateDeviceTest(TestCase, CmanMixin): def setUp(self): self.mock_logger = mock.MagicMock(logging.Logger) self.mock_reporter = MockLibraryReportProcessor() @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: True) def test_disabled_on_cman(self, mock_get_corosync, mock_push_corosync): lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) self.assert_disabled_on_cman( lambda: lib.update_device(lib_env, {"host": "127.0.0.1"}, {}, {}) ) mock_get_corosync.assert_not_called() mock_push_corosync.assert_not_called() @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: True) def test_enabled_on_cman_if_not_live( self, mock_get_corosync, mock_push_corosync ): original_conf = open(rc("corosync-3nodes.conf")).read() mock_get_corosync.return_value = original_conf lib_env = LibraryEnvironment( self.mock_logger, self.mock_reporter, corosync_conf_data=original_conf ) assert_raise_library_error( lambda: lib.update_device(lib_env, {"host": "127.0.0.1"}, {}, {}), ( severity.ERROR, report_codes.QDEVICE_NOT_DEFINED, {} ) ) @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: False) def test_no_device(self, mock_get_corosync, mock_push_corosync): original_conf = open(rc("corosync-3nodes.conf")).read() mock_get_corosync.return_value = original_conf lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) assert_raise_library_error( lambda: lib.update_device(lib_env, {"host": "127.0.0.1"}, {}, {}), ( severity.ERROR, report_codes.QDEVICE_NOT_DEFINED, {} ) ) mock_push_corosync.assert_not_called() @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: False) def test_success(self, mock_get_corosync, mock_push_corosync): original_conf = open(rc("corosync-3nodes-qdevice.conf")).read() mock_get_corosync.return_value = original_conf lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) lib.update_device( lib_env, {"host": "127.0.0.2"}, {"timeout": "12345"}, {"mode": "on", "exec_ls": "test -f /tmp/test"} ) self.assertEqual(1, len(mock_push_corosync.mock_calls)) ac( mock_push_corosync.mock_calls[0][1][0].config.export(), original_conf .replace( " host: 127.0.0.1\n", outdent("""\ host: 127.0.0.2 } heuristics { exec_ls: test -f /tmp/test mode: on """) ) .replace( "model: net", "model: net\n timeout: 12345" ) ) self.assertEqual([], self.mock_reporter.report_item_list) @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: False) def test_success_heuristics_no_exec( self, mock_get_corosync, mock_push_corosync ): original_conf = open(rc("corosync-3nodes-qdevice.conf")).read() mock_get_corosync.return_value = original_conf lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) lib.update_device(lib_env, {}, {}, {"mode": "on"}) self.assertEqual(1, len(mock_push_corosync.mock_calls)) ac( mock_push_corosync.mock_calls[0][1][0].config.export(), original_conf .replace( " host: 127.0.0.1\n", outdent("""\ host: 127.0.0.1 } heuristics { mode: on """) ) ) assert_report_item_list_equal( self.mock_reporter.report_item_list, [ fixture.warn( report_codes.COROSYNC_QUORUM_HEURISTICS_ENABLED_WITH_NO_EXEC ) ] ) @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: False) def test_invalid_options(self, mock_get_corosync, mock_push_corosync): original_conf = 
open(rc("corosync-3nodes-qdevice.conf")).read() mock_get_corosync.return_value = original_conf lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) assert_raise_library_error( lambda: lib.update_device( lib_env, {}, {"bad_option": "bad_value", }, {"mode": "bad mode"} ), ( severity.ERROR, report_codes.INVALID_OPTIONS, { "option_names": ["bad_option"], "option_type": "quorum device", "allowed": ["sync_timeout", "timeout"], "allowed_patterns": [], }, report_codes.FORCE_OPTIONS ), fixture.error( report_codes.INVALID_OPTION_VALUE, force_code=report_codes.FORCE_OPTIONS, option_name="mode", option_value="bad mode", allowed_values=("off", "on", "sync") ), ) self.assertEqual(1, mock_get_corosync.call_count) self.assertEqual(0, mock_push_corosync.call_count) @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: False) def test_invalid_options_forced(self, mock_get_corosync, mock_push_corosync): original_conf = open(rc("corosync-3nodes-qdevice.conf")).read() mock_get_corosync.return_value = original_conf lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) lib.update_device( lib_env, {}, {"bad_option": "bad_value", }, {"mode": "bad mode"}, force_options=True ) assert_report_item_list_equal( self.mock_reporter.report_item_list, [ ( severity.WARNING, report_codes.INVALID_OPTIONS, { "option_names": ["bad_option"], "option_type": "quorum device", "allowed": ["sync_timeout", "timeout"], "allowed_patterns": [], } ), fixture.warn( report_codes.INVALID_OPTION_VALUE, option_name="mode", option_value="bad mode", allowed_values=("off", "on", "sync") ), ] ) self.assertEqual(1, mock_get_corosync.call_count) self.assertEqual(1, len(mock_push_corosync.mock_calls)) ac( mock_push_corosync.mock_calls[0][1][0].config.export(), original_conf.replace( outdent("""\ net { host: 127.0.0.1 } """), outdent("""\ bad_option: bad_value net { host: 127.0.0.1 } heuristics { mode: bad mode } """) ) ) @mock.patch("pcs.lib.commands.quorum.corosync_live.set_expected_votes") @mock.patch.object( LibraryEnvironment, "cmd_runner", lambda self: "mock_runner" ) class SetExpectedVotesLiveTest(TestCase, CmanMixin): def setUp(self): self.mock_logger = mock.MagicMock(logging.Logger) self.mock_reporter = MockLibraryReportProcessor() @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: True) def test_disabled_on_cman(self, mock_set_votes): lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) self.assert_disabled_on_cman( lambda: lib.set_expected_votes_live(lib_env, "5") ) mock_set_votes.assert_not_called() @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: False) def test_success(self, mock_set_votes): lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) lib.set_expected_votes_live(lib_env, "5") mock_set_votes.assert_called_once_with("mock_runner", 5) @mock.patch("pcs.lib.env.is_cman_cluster", lambda self: False) def test_invalid_votes(self, mock_set_votes): lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) assert_raise_library_error( lambda: lib.set_expected_votes_live(lib_env, "-5"), ( severity.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "expected votes", "option_value": "-5", "allowed_values": "positive integer", } ) ) mock_set_votes.assert_not_called() pcs-0.9.164/pcs/lib/commands/test/test_resource_agent.py000066400000000000000000000365501326265502500232300ustar00rootroot00000000000000# coding=utf-8 from __future__ import ( absolute_import, division, print_function, ) import logging from lxml import etree from pcs.test.tools.assertions import 
assert_raise_library_error, start_tag_error_text from pcs.test.tools.command_env import get_env_tools from pcs.test.tools.custom_mock import MockLibraryReportProcessor from pcs.test.tools.pcs_unittest import mock, TestCase from pcs.common import report_codes from pcs.lib import resource_agent as lib_ra from pcs.lib.env import LibraryEnvironment from pcs.lib.errors import ReportItemSeverity as severity from pcs.lib.commands import resource_agent as lib @mock.patch("pcs.lib.resource_agent.list_resource_agents_standards") @mock.patch.object( LibraryEnvironment, "cmd_runner", lambda self: "mock_runner" ) class TestListStandards(TestCase): def setUp(self): self.mock_logger = mock.MagicMock(logging.Logger) self.mock_reporter = MockLibraryReportProcessor() self.lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) def test_success(self, mock_list_standards): standards = [ "lsb", "nagios", "ocf", "service", "systemd", ] mock_list_standards.return_value = standards self.assertEqual( lib.list_standards(self.lib_env), standards ) mock_list_standards.assert_called_once_with("mock_runner") @mock.patch("pcs.lib.resource_agent.list_resource_agents_ocf_providers") @mock.patch.object( LibraryEnvironment, "cmd_runner", lambda self: "mock_runner" ) class TestListOcfProviders(TestCase): def setUp(self): self.mock_logger = mock.MagicMock(logging.Logger) self.mock_reporter = MockLibraryReportProcessor() self.lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) def test_success(self, mock_list_providers): providers = [ "booth", "heartbeat", "openstack", "pacemaker", ] mock_list_providers.return_value = providers self.assertEqual( lib.list_ocf_providers(self.lib_env), providers ) mock_list_providers.assert_called_once_with("mock_runner") @mock.patch("pcs.lib.resource_agent.list_resource_agents_standards") @mock.patch("pcs.lib.resource_agent.list_resource_agents") @mock.patch.object( LibraryEnvironment, "cmd_runner", lambda self: "mock_runner" ) class TestListAgentsForStandardAndProvider(TestCase): def setUp(self): self.mock_logger = mock.MagicMock(logging.Logger) self.mock_reporter = MockLibraryReportProcessor() self.lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) def test_standard_specified(self, mock_list_agents, mock_list_standards): agents = [ "Delay", "Dummy", "Stateful", ] mock_list_agents.return_value = agents self.assertEqual( lib.list_agents_for_standard_and_provider(self.lib_env, "ocf:test"), agents ) mock_list_agents.assert_called_once_with("mock_runner", "ocf:test") mock_list_standards.assert_not_called() def test_standard_not_specified( self, mock_list_agents, mock_list_standards ): agents_ocf = [ "Delay", "Dummy", "Stateful", ] agents_service = [ "corosync", "pacemaker", "pcsd", ] mock_list_standards.return_value = ["ocf:test", "service"] mock_list_agents.side_effect = [agents_ocf, agents_service] self.assertEqual( lib.list_agents_for_standard_and_provider(self.lib_env), sorted(agents_ocf + agents_service, key=lambda x: x.lower()) ) mock_list_standards.assert_called_once_with("mock_runner") self.assertEqual(2, len(mock_list_agents.mock_calls)) mock_list_agents.assert_has_calls([ mock.call("mock_runner", "ocf:test"), mock.call("mock_runner", "service"), ]) @mock.patch( "pcs.lib.resource_agent.list_resource_agents_standards_and_providers", lambda runner: ["service", "ocf:test"] ) @mock.patch( "pcs.lib.resource_agent.list_resource_agents", lambda runner, standard: { "ocf:test": [ "Stateful", "Delay", ], "service": [ "corosync", "pacemaker_remote", ], 
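# NOTE: an illustrative aside added by the editor, not original code. When no
# standard is given, list_agents_for_standard_and_provider() concatenates the
# per-standard agent lists and sorts them case-insensitively:
#
#   sorted(["Delay", "corosync", "Dummy"], key=lambda x: x.lower())
#   # -> ["corosync", "Delay", "Dummy"]
#
# which is why the expected result in test_standard_not_specified mixes
# upper- and lower-case names in a single ordering.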
}.get(standard, []) ) @mock.patch.object( LibraryEnvironment, "cmd_runner", lambda self: "mock_runner" ) class TestListAgents(TestCase): def setUp(self): self.mock_logger = mock.MagicMock(logging.Logger) self.mock_reporter = MockLibraryReportProcessor() self.lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) def test_list_all(self): self.assertEqual( lib.list_agents(self.lib_env, False, None), [ { "name": "ocf:test:Delay", "shortdesc": "", "longdesc": "", "parameters": [], "actions": [], }, { "name": "ocf:test:Stateful", "shortdesc": "", "longdesc": "", "parameters": [], "actions": [], }, { "name": "service:corosync", "shortdesc": "", "longdesc": "", "parameters": [], "actions": [], }, { "name": "service:pacemaker_remote", "shortdesc": "", "longdesc": "", "parameters": [], "actions": [], }, ] ) def test_search(self): self.assertEqual( lib.list_agents(self.lib_env, False, "te"), [ { "name": "ocf:test:Delay", "shortdesc": "", "longdesc": "", "parameters": [], "actions": [], }, { "name": "ocf:test:Stateful", "shortdesc": "", "longdesc": "", "parameters": [], "actions": [], }, { "name": "service:pacemaker_remote", "shortdesc": "", "longdesc": "", "parameters": [], "actions": [], }, ] ) @mock.patch.object(lib_ra.Agent, "_get_metadata", autospec=True) def test_describe(self, mock_metadata): def mock_metadata_func(self): if self.get_name() == "ocf:test:Stateful": raise lib_ra.UnableToGetAgentMetadata( self.get_name(), "test exception" ) return etree.XML(""" short {name} long {name} """.format(name=self.get_name())) mock_metadata.side_effect = mock_metadata_func # Stateful is missing as it does not provide valid metadata - see above self.assertEqual( lib.list_agents(self.lib_env, True, None), [ { "name": "ocf:test:Delay", "shortdesc": "short ocf:test:Delay", "longdesc": "long ocf:test:Delay", "parameters": [], "actions": [], }, { "name": "service:corosync", "shortdesc": "short service:corosync", "longdesc": "long service:corosync", "parameters": [], "actions": [], }, { "name": "service:pacemaker_remote", "shortdesc": "short service:pacemaker_remote", "longdesc": "long service:pacemaker_remote", "parameters": [], "actions": [], }, ] ) class CompleteAgentList(TestCase): def test_skip_agent_name_when_InvalidResourceAgentName_raised(self): invalid_agent_name = "systemd:lvm2-pvscan@252:2"#suppose it is invalid class Agent(object): def __init__(self, runner, name): if name == invalid_agent_name: raise lib_ra.InvalidResourceAgentName(name) self.name = name def get_name_info(self): return self.name self.assertEqual(["ocf:heartbeat:Dummy"], lib._complete_agent_list( mock.MagicMock(), ["ocf:heartbeat:Dummy", invalid_agent_name], describe=False, search=False, metadata_class=Agent, )) @mock.patch.object(lib_ra.ResourceAgent, "_load_metadata", autospec=True) @mock.patch("pcs.lib.resource_agent.guess_exactly_one_resource_agent_full_name") @mock.patch.object( LibraryEnvironment, "cmd_runner", lambda self: "mock_runner" ) class TestDescribeAgent(TestCase): def setUp(self): self.mock_logger = mock.MagicMock(logging.Logger) self.mock_reporter = MockLibraryReportProcessor() self.lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) self.metadata = """ short desc long desc """ self.description = { "name": "ocf:test:Dummy", "shortdesc": "short desc", "longdesc": "long desc", "parameters": [], "actions": [], "default_actions": [{"interval": "60s", "name": "monitor"}], } def test_full_name_success(self, mock_guess, mock_metadata): mock_metadata.return_value = self.metadata self.assertEqual( 
lib.describe_agent(self.lib_env, "ocf:test:Dummy"), self.description ) self.assertEqual(len(mock_metadata.mock_calls), 1) mock_guess.assert_not_called() def test_guess_success(self, mock_guess, mock_metadata): mock_metadata.return_value = self.metadata mock_guess.return_value = lib_ra.ResourceAgent( self.lib_env.cmd_runner(), "ocf:test:Dummy" ) self.assertEqual( lib.describe_agent(self.lib_env, "dummy"), self.description ) self.assertEqual(len(mock_metadata.mock_calls), 1) mock_guess.assert_called_once_with("mock_runner", "dummy") def test_full_name_fail(self, mock_guess, mock_metadata): mock_metadata.return_value = "invalid xml" assert_raise_library_error( lambda: lib.describe_agent(self.lib_env, "ocf:test:Dummy"), ( severity.ERROR, report_codes.UNABLE_TO_GET_AGENT_METADATA, { "agent": "ocf:test:Dummy", "reason": start_tag_error_text(), } ) ) self.assertEqual(len(mock_metadata.mock_calls), 1) mock_guess.assert_not_called() class DescribeAgentUtf8(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) self.config.runner.pcmk.load_agent( agent_filename="resource_agent_ocf_heartbeat_dummy_utf8.xml" ) def test_describe(self): name = "ocf:heartbeat:Dummy" self.assertEqual( lib.describe_agent(self.env_assist.get_env(), name), { "name": name, "shortdesc": u"Example stateless resource agent: ®", "longdesc": u"This is a Dummy Resource Agent for testing utf-8" u" in metadata: ®" , "parameters": [ { "advanced": False, "default": u"/var/run/resource-agents/Dummy-®.state", "deprecated": False, "longdesc": u"Location to store the resource state in: ®", "name": u"state-®", "obsoletes": None, "pcs_deprecated_warning": "", "required": False, "shortdesc": u"State file: ®", "type": "string", }, { "advanced": True, "default": 0, "deprecated": False, "longdesc": "Set to 1 to turn on resource agent tracing" " (expect large output) The trace output will be " "saved to trace_file, if set, or by default to " "$HA_VARRUN/ra_trace//.." " e.g. $HA_VARRUN/ra_trace/oracle/db." 
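# (editor's sketch, not part of the fixture) describe_agent() returns a plain
# dict; a minimal call looks like:
#
#   desc = lib.describe_agent(self.env_assist.get_env(), "ocf:heartbeat:Dummy")
#   desc["name"]          # "ocf:heartbeat:Dummy"
#   sorted(desc.keys())   # includes "actions", "default_actions",
#                         # "longdesc", "parameters", "shortdesc"
#
# The assertion continuing below spells out the full expected structure,
# including utf-8 values such as "custom-®".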
"start.2012-11-27.08:37:08", "name": "trace_ra", "obsoletes": None, "pcs_deprecated_warning": "", "required": False, "shortdesc": "Set to 1 to turn on resource agent " "tracing (expect large output)", "type": "integer", }, { "advanced": True, "default": "", "deprecated": False, "longdesc": "Path to a file to store resource agent " "tracing log", "name": "trace_file", "obsoletes": None, "pcs_deprecated_warning": "", "required": False, "shortdesc": "Path to a file to store resource agent " "tracing log", "type": "string", } ], "actions": [ {"name": "start", "timeout": "20"}, {"name": "stop", "timeout": "20"}, {"name": "monitor", "interval": "10", "timeout": "20"}, {"name": "meta-data", "timeout": "5"}, {"name": "validate-all", "timeout": "20"}, {"name": u"custom-®", "timeout": "20"}, ], "default_actions": [ {"name": "start", "interval": "0s", "timeout": "20"}, {"name": "stop", "interval": "0s", "timeout": "20"}, {"name": "monitor", "interval": "10", "timeout": "20"}, {"name": u"custom-®", "interval": "0s", "timeout": "20"}, ], } ) pcs-0.9.164/pcs/lib/commands/test/test_stonith.py000066400000000000000000000145631326265502500217130ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.common import report_codes from pcs.lib.commands import stonith from pcs.lib.resource_agent import StonithAgent from pcs.test.tools import fixture from pcs.test.tools.command_env import get_env_tools from pcs.test.tools.pcs_unittest import TestCase class Create(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) self.agent_name = "test_simple" self.instance_name = "stonith-test" self.timeout = 10 self.expected_cib = """ """ self.expected_status = """ """.format(id=self.instance_name, agent=self.agent_name) (self.config .runner.pcmk.load_agent( agent_name="stonith:{0}".format(self.agent_name), agent_filename="stonith_agent_fence_simple.xml" ) .runner.cib.load() .runner.pcmk.load_stonithd_metadata() ) def tearDown(self): StonithAgent.clear_stonithd_metadata_cache() def test_minimal_success(self): self.config.env.push_cib(resources=self.expected_cib) stonith.create( self.env_assist.get_env(), self.instance_name, self.agent_name, operations=[], meta_attributes={}, instance_attributes={"must-set": "value"} ) def test_minimal_wait_ok_run_ok(self): (self.config .runner.pcmk.can_wait(before="runner.cib.load") .env.push_cib( resources=self.expected_cib, wait=self.timeout ) .runner.pcmk.load_state(resources=self.expected_status) ) stonith.create( self.env_assist.get_env(), self.instance_name, self.agent_name, operations=[], meta_attributes={}, instance_attributes={"must-set": "value"}, wait=self.timeout ) self.env_assist.assert_reports([ fixture.info( report_codes.RESOURCE_RUNNING_ON_NODES, roles_with_nodes={"Started": ["node1"]}, resource_id=self.instance_name, ), ]) class CreateInGroup(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) self.agent_name = "test_simple" self.instance_name = "stonith-test" self.timeout = 10 self.expected_cib = """ """ self.expected_status = """ """.format(id=self.instance_name, agent=self.agent_name) (self.config .runner.pcmk.load_agent( agent_name="stonith:{0}".format(self.agent_name), agent_filename="stonith_agent_fence_simple.xml" ) .runner.cib.load() .runner.pcmk.load_stonithd_metadata() ) def tearDown(self): StonithAgent.clear_stonithd_metadata_cache() def test_minimal_success(self): self.config.env.push_cib(resources=self.expected_cib) stonith.create_in_group( 
self.env_assist.get_env(), self.instance_name, self.agent_name, "my-group", operations=[], meta_attributes={}, instance_attributes={"must-set": "value"} ) def test_minimal_wait_ok_run_ok(self): (self.config .runner.pcmk.can_wait(before="runner.cib.load") .env.push_cib( resources=self.expected_cib, wait=self.timeout ) .runner.pcmk.load_state(resources=self.expected_status) ) stonith.create_in_group( self.env_assist.get_env(), self.instance_name, self.agent_name, "my-group", operations=[], meta_attributes={}, instance_attributes={"must-set": "value"}, wait=self.timeout ) self.env_assist.assert_reports([ fixture.info( report_codes.RESOURCE_RUNNING_ON_NODES, roles_with_nodes={"Started": ["node1"]}, resource_id=self.instance_name, ), ]) pcs-0.9.164/pcs/lib/commands/test/test_stonith_agent.py000066400000000000000000000261201326265502500230610ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import logging from lxml import etree from pcs.test.tools.assertions import ( assert_raise_library_error, assert_report_item_list_equal, start_tag_error_text, ) from pcs.test.tools.custom_mock import MockLibraryReportProcessor from pcs.test.tools.pcs_unittest import mock, TestCase from pcs.common import report_codes from pcs.lib import resource_agent as lib_ra from pcs.lib.env import LibraryEnvironment from pcs.lib.errors import ReportItemSeverity as severity from pcs.lib.external import CommandRunner from pcs.lib.commands import stonith_agent as lib @mock.patch( "pcs.lib.resource_agent.list_stonith_agents", lambda runner: [ "fence_apc", "fence_dummy", "fence_xvm", ] ) @mock.patch.object( LibraryEnvironment, "cmd_runner", lambda self: "mock_runner" ) class TestListAgents(TestCase): def setUp(self): self.mock_logger = mock.MagicMock(logging.Logger) self.mock_reporter = MockLibraryReportProcessor() self.lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) def tearDown(self): lib_ra.StonithAgent._stonithd_metadata = None def test_list_all(self): self.assertEqual( lib.list_agents(self.lib_env, False, None), [ { "name": "fence_apc", "shortdesc": "", "longdesc": "", "parameters": [], "actions": [], }, { "name": "fence_dummy", "shortdesc": "", "longdesc": "", "parameters": [], "actions": [], }, { "name": "fence_xvm", "shortdesc": "", "longdesc": "", "parameters": [], "actions": [], }, ] ) def test_search(self): self.assertEqual( lib.list_agents(self.lib_env, False, "M"), [ { "name": "fence_dummy", "shortdesc": "", "longdesc": "", "parameters": [], "actions": [], }, { "name": "fence_xvm", "shortdesc": "", "longdesc": "", "parameters": [], "actions": [], }, ] ) @mock.patch.object(lib_ra.Agent, "_get_metadata", autospec=True) def test_describe(self, mock_metadata): self.maxDiff = None def mock_metadata_func(self): if self.get_name() == "ocf:test:Stateful": raise lib_ra.UnableToGetAgentMetadata( self.get_name(), "test exception" ) return etree.XML(""" short {name} long {name} """.format(name=self.get_name())) mock_metadata.side_effect = mock_metadata_func # Stateful is missing as it does not provide valid metadata - see above self.assertEqual( lib.list_agents(self.lib_env, True, None), [ { "name": "fence_apc", "shortdesc": "short fence_apc", "longdesc": "long fence_apc", "parameters": [], "actions": [], }, { "name": "fence_dummy", "shortdesc": "short fence_dummy", "longdesc": "long fence_dummy", "parameters": [], "actions": [], }, { "name": "fence_xvm", "shortdesc": "short fence_xvm", "longdesc": "long fence_xvm", "parameters": [], "actions": [], }, ] ) 
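# A standalone sketch (editor's addition): judging by test_search above, the
# search filter used by list_agents appears to be a case-insensitive
# substring match -- searching for "M" keeps "fence_dummy" and "fence_xvm"
# but not "fence_apc". A minimal model of that behaviour, with a
# hypothetical helper name:

def _search_filter_sketch(agent_names, search):
    # hypothetical helper, not the pcs implementation
    if not search:
        return list(agent_names)
    return [
        name for name in agent_names
        if search.lower() in name.lower()
    ]

assert _search_filter_sketch(
    ["fence_apc", "fence_dummy", "fence_xvm"], "M"
) == ["fence_dummy", "fence_xvm"]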
@mock.patch.object(lib_ra.StonithAgent, "_load_metadata", autospec=True) @mock.patch.object( lib_ra.StonithdMetadata, "get_parameters", lambda self: [] ) @mock.patch.object( LibraryEnvironment, "cmd_runner", lambda self: "mock_runner" ) class TestDescribeAgent(TestCase): def setUp(self): self.mock_logger = mock.MagicMock(logging.Logger) self.mock_reporter = MockLibraryReportProcessor() self.lib_env = LibraryEnvironment(self.mock_logger, self.mock_reporter) self.metadata = """ short desc long desc """ self.description = { "name": "fence_dummy", "shortdesc": "short desc", "longdesc": "long desc", "parameters": [], "actions": [], "default_actions": [{"name": "monitor", "interval": "60s"}], } def tearDown(self): lib_ra.StonithAgent._stonithd_metadata = None def test_success(self, mock_metadata): mock_metadata.return_value = self.metadata self.assertEqual( lib.describe_agent(self.lib_env, "fence_dummy"), self.description ) self.assertEqual(len(mock_metadata.mock_calls), 1) def test_fail(self, mock_metadata): mock_metadata.return_value = "invalid xml" assert_raise_library_error( lambda: lib.describe_agent(self.lib_env, "fence_dummy"), ( severity.ERROR, report_codes.UNABLE_TO_GET_AGENT_METADATA, { "agent": "fence_dummy", "reason": start_tag_error_text(), } ) ) self.assertEqual(len(mock_metadata.mock_calls), 1) class ValidateParameters(TestCase): def setUp(self): self.agent = lib_ra.StonithAgent( mock.MagicMock(spec_set=CommandRunner), "fence_dummy" ) self.metadata = etree.XML(""" Long description short description Fencing action """) patcher = mock.patch.object(lib_ra.StonithAgent, "_get_metadata") self.addCleanup(patcher.stop) self.get_metadata = patcher.start() self.get_metadata.return_value = self.metadata patcher_stonithd = mock.patch.object( lib_ra.StonithdMetadata, "_get_metadata" ) self.addCleanup(patcher_stonithd.stop) self.get_stonithd_metadata = patcher_stonithd.start() self.get_stonithd_metadata.return_value = etree.XML(""" """) def test_action_is_deprecated(self): assert_report_item_list_equal( self.agent.validate_parameters({ "action": "reboot", "required_param": "value", }), [ ( severity.ERROR, report_codes.DEPRECATED_OPTION, { "option_name": "action", "option_type": "stonith", "replaced_by": [ "pcmk_off_action", "pcmk_reboot_action" ], }, report_codes.FORCE_OPTIONS ), ], ) def test_action_is_deprecated_forced(self): assert_report_item_list_equal( self.agent.validate_parameters({ "action": "reboot", "required_param": "value", }, allow_invalid=True), [ ( severity.WARNING, report_codes.DEPRECATED_OPTION, { "option_name": "action", "option_type": "stonith", "replaced_by": [ "pcmk_off_action", "pcmk_reboot_action" ], }, None ), ], ) def test_action_not_reported_deprecated_when_empty(self): assert_report_item_list_equal( self.agent.validate_parameters({ "action": "", "required_param": "value", }), [ ], ) def test_required_not_specified_on_update(self): assert_report_item_list_equal( self.agent.validate_parameters({ "test_param": "value", }, update=True), [ ], ) @mock.patch.object(lib_ra.StonithAgent, "get_actions") class StonithAgentMetadataGetCibDefaultActions(TestCase): fixture_actions = [ {"name": "custom1", "timeout": "40s"}, {"name": "custom2", "interval": "25s", "timeout": "60s"}, {"name": "meta-data"}, {"name": "monitor", "interval": "10s", "timeout": "30s"}, {"name": "start", "interval": "40s"}, {"name": "status", "interval": "15s", "timeout": "20s"}, {"name": "validate-all"}, ] def setUp(self): self.agent = lib_ra.StonithAgent( mock.MagicMock(spec_set=CommandRunner), 
"fence_dummy" ) def test_select_only_actions_for_cib(self, get_actions): get_actions.return_value = self.fixture_actions self.assertEqual( [ {"name": "monitor", "interval": "10s", "timeout": "30s"} ], self.agent.get_cib_default_actions() ) def test_select_only_necessary_actions_for_cib(self, get_actions): get_actions.return_value = self.fixture_actions self.assertEqual( [ {"name": "monitor", "interval": "10s", "timeout": "30s"} ], self.agent.get_cib_default_actions(necessary_only=True) ) pcs-0.9.164/pcs/lib/commands/test/test_ticket.py000066400000000000000000000070521326265502500215010ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.test.tools.pcs_unittest import mock from pcs.test.tools.misc import create_patcher from pcs.common import report_codes from pcs.lib.commands.constraint import ticket as ticket_command from pcs.lib.errors import ReportItemSeverity as severities from pcs.lib.test.misc import get_mocked_env from pcs.test.tools.assertions import assert_raise_library_error from pcs.test.tools.command_env import get_env_tools from pcs.test.tools.misc import get_test_resource as rc from pcs.test.tools.xml import get_xml_manipulation_creator_from_file patch_commands = create_patcher("pcs.lib.commands.constraint.ticket") class CreateTest(TestCase): def setUp(self): self.create_cib = get_xml_manipulation_creator_from_file( rc("cib-empty.xml") ) def test_sucess_create(self): env_assist, config = get_env_tools(test_case=self) (config .runner.cib.load( resources=""" """ ) .env.push_cib( optional_in_conf=""" """ ) ) ticket_command.create( env_assist.get_env(), "ticketA", "resourceA", { "loss-policy": "fence", "rsc-role": "master" } ) def test_refuse_for_nonexisting_resource(self): env = get_mocked_env(cib_data=str(self.create_cib())) assert_raise_library_error( lambda: ticket_command.create( env, "ticketA", "resourceA", "master", {"loss-policy": "fence"} ), ( severities.ERROR, report_codes.ID_NOT_FOUND, { "context_type": "cib", "context_id": "", "id": "resourceA", "expected_types": [ "bundle", "clone", "group", "master", "primitive" ], }, None ), ) @patch_commands("get_constraints", mock.Mock) class RemoveTest(TestCase): @patch_commands("ticket.remove_plain", mock.Mock(return_value=1)) @patch_commands("ticket.remove_with_resource_set",mock.Mock(return_value=0)) def test_successfully_remove_plain(self): self.assertTrue(ticket_command.remove(mock.MagicMock(), "T", "R")) @patch_commands("ticket.remove_plain", mock.Mock(return_value=0)) @patch_commands("ticket.remove_with_resource_set",mock.Mock(return_value=1)) def test_successfully_remove_with_resource_set(self): self.assertTrue(ticket_command.remove(mock.MagicMock(), "T", "R")) @patch_commands("ticket.remove_plain", mock.Mock(return_value=0)) @patch_commands("ticket.remove_with_resource_set",mock.Mock(return_value=0)) def test_raises_library_error_when_no_matching_constraint_found(self): self.assertFalse(ticket_command.remove(mock.MagicMock(), "T", "R")) pcs-0.9.164/pcs/lib/communication/000077500000000000000000000000001326265502500166665ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/communication/__init__.py000066400000000000000000000000001326265502500207650ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/communication/booth.py000066400000000000000000000065661326265502500203700ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import base64 import json import os from 
pcs.common.node_communicator import RequestData from pcs.lib import reports from pcs.lib.booth import reports as reports_booth from pcs.lib.communication.tools import ( AllAtOnceStrategyMixin, AllSameDataMixin, RunRemotelyBase, SkipOfflineMixin, SimpleResponseProcessingMixin, ) class BoothSendConfig( SimpleResponseProcessingMixin, SkipOfflineMixin, AllSameDataMixin, AllAtOnceStrategyMixin, RunRemotelyBase, ): def __init__( self, report_processor, booth_name, config_data, authfile=None, authfile_data=None, skip_offline_targets=False ): super(BoothSendConfig, self).__init__(report_processor) self._set_skip_offline(skip_offline_targets) self._booth_name = booth_name self._config_data = config_data self._authfile = authfile self._authfile_data = authfile_data def _get_request_data(self): data = { "config": { "name": "{0}.conf".format(self._booth_name), "data": self._config_data } } if self._authfile is not None and self._authfile_data is not None: data["authfile"] = { "name": os.path.basename(self._authfile), "data": base64.b64encode(self._authfile_data).decode("utf-8") } return RequestData( "remote/booth_set_config", [("data_json", json.dumps(data))] ) def _get_success_report(self, node_label): return reports_booth.booth_config_accepted_by_node( node_label, [self._booth_name] ) def before(self): self._report(reports_booth.booth_config_distribution_started()) class ProcessJsonDataMixin(object): __data = None @property def _data(self): if self.__data is None: self.__data = [] return self.__data def _process_response(self, response): report = self._get_response_report(response) if report is not None: self._report(report) return target = response.request.target try: self._data.append((target, json.loads(response.data))) except ValueError: self._report(reports.invalid_response_format(target.label)) def on_complete(self): return self._data class BoothGetConfig( ProcessJsonDataMixin, AllSameDataMixin, AllAtOnceStrategyMixin, RunRemotelyBase, ): def __init__(self, report_processor, booth_name): super(BoothGetConfig, self).__init__(report_processor) self._booth_name = booth_name def _get_request_data(self): return RequestData( "remote/booth_get_config", [("name", self._booth_name)] ) class BoothSaveFiles( ProcessJsonDataMixin, AllSameDataMixin, AllAtOnceStrategyMixin, RunRemotelyBase, ): def __init__(self, report_processor, file_list, rewrite_existing=True): super(BoothSaveFiles, self).__init__(report_processor) self._file_list = file_list self._rewrite_existing = rewrite_existing def _get_request_data(self): data = [("data_json", json.dumps(self._file_list))] if self._rewrite_existing: data.append(("rewrite_existing", "1")) return RequestData("remote/booth_save_files", data) pcs-0.9.164/pcs/lib/communication/corosync.py000066400000000000000000000055461326265502500211110ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import json from pcs.common.node_communicator import RequestData from pcs.lib import reports from pcs.lib.communication.tools import ( AllAtOnceStrategyMixin, AllSameDataMixin, RunRemotelyBase, SkipOfflineMixin, ) class CheckCorosyncOffline( SkipOfflineMixin, AllSameDataMixin, AllAtOnceStrategyMixin, RunRemotelyBase ): def __init__(self, report_processor, skip_offline_targets=False): super(CheckCorosyncOffline, self).__init__(report_processor) self._set_skip_offline(skip_offline_targets) def _get_request_data(self): return RequestData("remote/status") def _process_response(self, response): report = self._get_response_report(response) 
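        # (editor's note) the "remote/status" payload is JSON; the only key
        # consulted here is "corosync", e.g. a node with corosync stopped is
        # expected to answer something like:
        #
        #   {"corosync": false}
        #
        # (the full payload shape is an assumption). Anything unparsable, or
        # missing the key, falls through to the generic check-node error.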
node_label = response.request.target.label if report is not None: self._report_list([ report, reports.corosync_not_running_check_node_error( node_label, self._failure_severity, self._failure_forceable, ) ]) return try: status = response.data if not json.loads(status)["corosync"]: report = reports.corosync_not_running_on_node_ok(node_label) else: report = reports.corosync_running_on_node_fail(node_label) except (ValueError, LookupError): report = reports.corosync_not_running_check_node_error( node_label, self._failure_severity, self._failure_forceable ) self._report(report) def before(self): self._report(reports.corosync_not_running_check_started()) class DistributeCorosyncConf( SkipOfflineMixin, AllSameDataMixin, AllAtOnceStrategyMixin, RunRemotelyBase ): def __init__( self, report_processor, config_text, skip_offline_targets=False ): super(DistributeCorosyncConf, self).__init__(report_processor) self._config_text = config_text self._set_skip_offline(skip_offline_targets) def _get_request_data(self): return RequestData( "remote/set_corosync_conf", [("corosync_conf", self._config_text)] ) def _process_response(self, response): report = self._get_response_report(response) node_label = response.request.target.label if report is None: self._report(reports.corosync_config_accepted_by_node(node_label)) else: self._report_list([ report, reports.corosync_config_distribution_node_error( node_label, self._failure_severity, self._failure_forceable, ) ]) def before(self): self._report(reports.corosync_config_distribution_started()) pcs-0.9.164/pcs/lib/communication/nodes.py000066400000000000000000000211031326265502500203450ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import json from pcs.common import report_codes from pcs.common.node_communicator import RequestData from pcs.lib import reports, node_communication_format from pcs.lib.communication.tools import ( AllAtOnceStrategyMixin, AllSameDataMixin, RunRemotelyBase, SkipOfflineMixin, ) from pcs.lib.errors import ReportItemSeverity from pcs.lib.node_communication import response_to_report_item class GetOnlineTargets( AllSameDataMixin, AllAtOnceStrategyMixin, RunRemotelyBase ): def __init__(self, report_processor, ignore_offline_targets=False): super(GetOnlineTargets, self).__init__(report_processor) self._ignore_offline_targets = ignore_offline_targets self._online_target_list = [] def _get_request_data(self): return RequestData("remote/check_auth", [("check_auth_only", 1)]) def _process_response(self, response): report = response_to_report_item(response) if report is None: self._online_target_list.append(response.request.target) return if not response.was_connected: report = ( reports.omitting_node(response.request.target.label) if self._ignore_offline_targets else response_to_report_item( response, forceable=report_codes.SKIP_OFFLINE_NODES ) ) self._report(report) def on_complete(self): return self._online_target_list class PrecheckNewNode( SkipOfflineMixin, AllSameDataMixin, AllAtOnceStrategyMixin, RunRemotelyBase ): def __init__( self, report_items, check_response, skip_offline_targets=False ): super(PrecheckNewNode, self).__init__(None) self._set_skip_offline(skip_offline_targets) self._report_items = report_items self._check_response = check_response def _get_request_data(self): return RequestData("remote/node_available") def _process_response(self, response): # do not send outside any report, just append them into specified list report = self._get_response_report(response) if report: 
self._report_items.append(report) return target = response.request.target data = None try: data = json.loads(response.data) except ValueError: self._report_items.append( reports.invalid_response_format(target.label) ) return is_in_expected_format = ( #node_available is a mandatory field isinstance(data, dict) and "node_available" in data ) if not is_in_expected_format: self._report_items.append( reports.invalid_response_format(target.label) ) return self._check_response(data, self._report_items, target.label) class RunActionBase( SkipOfflineMixin, AllSameDataMixin, AllAtOnceStrategyMixin, RunRemotelyBase ): def __init__( self, report_processor, action_definition, skip_offline_targets=False, allow_fails=False, description="", ): super(RunActionBase, self).__init__(report_processor) self._init_properties() self._set_skip_offline(skip_offline_targets) self._action_error_force = _force(self._force_code, allow_fails) self._action_definition = action_definition self._description = description def _init_properties(self): raise NotImplementedError() def _is_success(self, action_response): raise NotImplementedError() def _get_request_data(self): return RequestData( self._request_url, [("data_json", json.dumps(self._action_definition))], ) def _process_response(self, response): report = self._get_response_report(response) if report: self._report(report) return results = None target = response.request.target try: results = json.loads(response.data) except ValueError: self._report(reports.invalid_response_format(target.label)) return results = node_communication_format.response_to_result( results, self._response_key, self._action_definition.keys(), target.label ) for key, item_response in sorted(results.items()): if self._is_success(item_response): #only success process individually report = self._success_report(target.label, key) else: report = self._failure_report( target.label, key, node_communication_format.get_format_result( self._code_message_map )(item_response), **self._action_error_force ) self._report(report) def before(self): self._report(self._start_report( self._action_definition.keys(), [target.label for target in self._target_list], self._description )) class ServiceAction(RunActionBase): def _init_properties(self): self._request_url = "remote/manage_services" self._response_key = "actions" self._force_code = report_codes.SKIP_ACTION_ON_NODES_ERRORS self._start_report = reports.service_commands_on_nodes_started self._success_report = reports.service_command_on_node_success self._failure_report = reports.service_command_on_node_error self._code_message_map = {"fail": "Operation failed."} def _is_success(self, action_response): return action_response.code == "success" class FileActionBase(RunActionBase): #pylint: disable=abstract-method def _init_properties(self): self._response_key = "files" self._force_code = report_codes.SKIP_FILE_DISTRIBUTION_ERRORS class DistributeFiles(FileActionBase): def _init_properties(self): super(DistributeFiles, self)._init_properties() self._request_url = "remote/put_file" self._start_report = reports.files_distribution_started self._success_report = reports.file_distribution_success self._failure_report = reports.file_distribution_error self._code_message_map = {"conflict": "File already exists"} def _is_success(self, action_response): return action_response.code in ["written", "rewritten", "same_content"] class RemoveFiles(FileActionBase): def _init_properties(self): super(RemoveFiles, self)._init_properties() self._request_url = "remote/remove_file" 
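        # (editor's note) RunActionBase is a template method: each concrete
        # action only fills in the request URL, the report factories and
        # _is_success(); the shared _process_response() then decides per-item
        # success. For file removal, "deleted" and "not_found" both count as
        # success -- removing an already absent file is not treated as an
        # error.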
self._start_report = reports.files_remove_from_node_started self._success_report = reports.file_remove_from_node_success self._failure_report = reports.file_remove_from_node_error self._code_message_map = {} def _is_success(self, action_response): return action_response.code in ["deleted", "not_found"] def _force(force_code, is_forced): if is_forced: return dict( severity=ReportItemSeverity.WARNING, forceable=None, ) return dict( severity=ReportItemSeverity.ERROR, forceable=force_code, ) def availability_checker_node(availability_info, report_items, node_label): """ Check if availability_info means that the node is suitable as cluster (corosync) node. """ if availability_info["node_available"]: return if availability_info.get("pacemaker_running", False): report_items.append(reports.cannot_add_node_is_running_service( node_label, "pacemaker" )) return if availability_info.get("pacemaker_remote", False): report_items.append(reports.cannot_add_node_is_running_service( node_label, "pacemaker_remote" )) return report_items.append(reports.cannot_add_node_is_in_cluster(node_label)) def availability_checker_remote_node( availability_info, report_items, node_label ): """ Check if availability_info means that the node is suitable as remote node. """ if availability_info["node_available"]: return if availability_info.get("pacemaker_running", False): report_items.append(reports.cannot_add_node_is_running_service( node_label, "pacemaker" )) return if not availability_info.get("pacemaker_remote", False): report_items.append(reports.cannot_add_node_is_in_cluster(node_label)) return pcs-0.9.164/pcs/lib/communication/qdevice.py000066400000000000000000000051571326265502500206700ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.common.node_communicator import RequestData from pcs.lib import reports from pcs.lib.communication.tools import ( AllAtOnceStrategyMixin, AllSameDataMixin, RunRemotelyBase, SkipOfflineMixin, SimpleResponseProcessingMixin, ) class QdeviceBase( SkipOfflineMixin, AllSameDataMixin, AllAtOnceStrategyMixin, RunRemotelyBase ): #pylint: disable=abstract-method def __init__(self, report_processor, skip_offline_targets=False): super(QdeviceBase, self).__init__(report_processor) self._set_skip_offline(skip_offline_targets) class Stop(SimpleResponseProcessingMixin, QdeviceBase): def _get_request_data(self): return RequestData("remote/qdevice_client_stop") def _get_success_report(self, node_label): return reports.service_stop_success("corosync-qdevice", node_label) class Start(QdeviceBase): def _get_request_data(self): return RequestData("remote/qdevice_client_start") def _process_response(self, response): report = self._get_response_report(response) node_label = response.request.target.label if report is None: if response.data == "corosync is not running, skipping": report = reports.service_start_skipped( "corosync-qdevice", "corosync is not running", node_label ) else: report = reports.service_start_success( "corosync-qdevice", node_label ) self._report(report) class Enable(QdeviceBase): def _get_request_data(self): return RequestData("remote/qdevice_client_enable") def _process_response(self, response): report = self._get_response_report(response) node_label = response.request.target.label if report is None: if response.data == "corosync is not enabled, skipping": report = reports.service_enable_skipped( "corosync-qdevice", "corosync is not enabled", node_label ) else: report = reports.service_enable_success( "corosync-qdevice", 
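                # (editor's note) pcsd signals a no-op with a literal string
                # body -- "corosync is not enabled, skipping" -- rather than
                # an error status, so the command reports "skipped" instead
                # of "success" for such nodes; Start handles
                # "corosync is not running, skipping" the same way.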
                node_label
            )
        self._report(report)


class Disable(SimpleResponseProcessingMixin, QdeviceBase):
    def _get_request_data(self):
        return RequestData("remote/qdevice_client_disable")

    def _get_success_report(self, node_label):
        return reports.service_disable_success("corosync-qdevice", node_label)

pcs-0.9.164/pcs/lib/communication/qdevice_net.py
from __future__ import (
    absolute_import,
    division,
    print_function,
)

import base64
import binascii

from pcs.common.node_communicator import (
    Request,
    RequestData,
)
from pcs.lib import reports
from pcs.lib.communication.tools import (
    AllAtOnceStrategyMixin,
    AllSameDataMixin,
    RunRemotelyBase,
    SkipOfflineMixin,
    SimpleResponseProcessingMixin,
    SimpleResponseProcessingNoResponseOnSuccessMixin,
)
from pcs.lib.communication.qdevice import QdeviceBase


class GetCaCert(
    AllSameDataMixin, AllAtOnceStrategyMixin, RunRemotelyBase
):
    def __init__(self, report_processor):
        super(GetCaCert, self).__init__(report_processor)
        self._data = []

    def _get_request_data(self):
        return RequestData("remote/qdevice_net_get_ca_certificate")

    def _process_response(self, response):
        report = self._get_response_report(response)
        if report is not None:
            self._report(report)
            return
        target = response.request.target
        try:
            self._data.append((target, base64.b64decode(response.data)))
        except (TypeError, binascii.Error):
            self._report(reports.invalid_response_format(target.label))

    def on_complete(self):
        return self._data


class ClientSetup(
    SimpleResponseProcessingNoResponseOnSuccessMixin,
    SkipOfflineMixin,
    AllSameDataMixin,
    AllAtOnceStrategyMixin,
    RunRemotelyBase,
):
    def __init__(self, report_processor, ca_cert, skip_offline_targets=False):
        super(ClientSetup, self).__init__(report_processor)
        self._set_skip_offline(skip_offline_targets)
        self._ca_cert = ca_cert

    def _get_request_data(self):
        return RequestData(
            "remote/qdevice_net_client_init_certificate_storage",
            [("ca_certificate", base64.b64encode(self._ca_cert))]
        )


class SignCertificate(AllAtOnceStrategyMixin, RunRemotelyBase):
    def __init__(self, report_processor):
        super(SignCertificate, self).__init__(report_processor)
        self._output_data = []
        self._input_data = []

    def add_request(self, target, cert, cluster_name):
        self._input_data.append((target, cert, cluster_name))

    def _prepare_initial_requests(self):
        return [
            Request(
                target,
                RequestData(
                    "remote/qdevice_net_sign_node_certificate",
                    [
                        ("certificate_request", base64.b64encode(cert)),
                        ("cluster_name", cluster_name),
                    ]
                )
            )
            for target, cert, cluster_name in self._input_data
        ]

    def _process_response(self, response):
        report = self._get_response_report(response)
        if report is not None:
            self._report(report)
            return
        target = response.request.target
        try:
            self._output_data.append((target, base64.b64decode(response.data)))
        except (TypeError, binascii.Error):
            self._report(reports.invalid_response_format(target.label))

    def on_complete(self):
        return self._output_data


class ClientImportCertificateAndKey(
    SimpleResponseProcessingMixin,
    SkipOfflineMixin,
    AllSameDataMixin,
    AllAtOnceStrategyMixin,
    RunRemotelyBase
):
    def __init__(self, report_processor, pk12, skip_offline_targets=False):
        super(ClientImportCertificateAndKey, self).__init__(report_processor)
        self._set_skip_offline(skip_offline_targets)
        self._pk12 = pk12

    def _get_request_data(self):
        return RequestData(
            "remote/qdevice_net_client_import_certificate",
            [("certificate", base64.b64encode(self._pk12))]
        )

    def _get_success_report(self, node_label):
        return
reports.qdevice_certificate_accepted_by_node(node_label) class ClientDestroy(SimpleResponseProcessingMixin, QdeviceBase): def _get_request_data(self): return RequestData("remote/qdevice_net_client_destroy") def _get_success_report(self, node_label): return reports.qdevice_certificate_removed_from_node(node_label) pcs-0.9.164/pcs/lib/communication/sbd.py000066400000000000000000000211761326265502500200170ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import json from pcs.common.node_communicator import ( Request, RequestData, ) from pcs.lib import reports from pcs.lib.communication.tools import ( AllAtOnceStrategyMixin, AllSameDataMixin, OneByOneStrategyMixin, RunRemotelyBase, SimpleResponseProcessingMixin, ) from pcs.lib.errors import ReportItemSeverity from pcs.lib.node_communication import response_to_report_item from pcs.lib.tools import environment_file_to_dict class ServiceAction( SimpleResponseProcessingMixin, AllSameDataMixin, AllAtOnceStrategyMixin, RunRemotelyBase ): def _get_request_action(self): raise NotImplementedError() def _get_before_report(self): raise NotImplementedError() def _get_success_report(self, node_label): raise NotImplementedError() def _get_request_data(self): return RequestData(self._get_request_action()) def before(self): self._report(self._get_before_report()) class EnableSbdService(ServiceAction): def _get_request_action(self): return "remote/sbd_enable" def _get_before_report(self): return reports.sbd_enabling_started() def _get_success_report(self, node_label): return reports.service_enable_success("sbd", node_label) class DisableSbdService(ServiceAction): def _get_request_action(self): return "remote/sbd_disable" def _get_before_report(self): return reports.sbd_disabling_started() def _get_success_report(self, node_label): return reports.service_disable_success("sbd", node_label) class StonithWatchdogTimeoutAction( AllSameDataMixin, OneByOneStrategyMixin, RunRemotelyBase ): def _get_request_action(self): raise NotImplementedError() def _get_request_data(self): return RequestData(self._get_request_action()) def _process_response(self, response): report = response_to_report_item( response, severity=ReportItemSeverity.WARNING ) if report is None: self._on_success() return self._report(report) return self._get_next_list() class RemoveStonithWatchdogTimeout(StonithWatchdogTimeoutAction): def _get_request_action(self): return "remote/remove_stonith_watchdog_timeout" class SetStonithWatchdogTimeoutToZero(StonithWatchdogTimeoutAction): def _get_request_action(self): return "remote/set_stonith_watchdog_timeout_to_zero" class SetSbdConfig( SimpleResponseProcessingMixin, AllAtOnceStrategyMixin, RunRemotelyBase ): def __init__(self, report_processor): super(SetSbdConfig, self).__init__(report_processor) self._request_data_list = [] def _prepare_initial_requests(self): return [ Request( target, RequestData("remote/set_sbd_config", [("config", config)]) ) for target, config in self._request_data_list ] def _get_success_report(self, node_label): return reports.sbd_config_accepted_by_node(node_label) def add_request(self, target, config): self._request_data_list.append((target, config)) def before(self): self._report(reports.sbd_config_distribution_started()) class GetSbdConfig(AllSameDataMixin, AllAtOnceStrategyMixin, RunRemotelyBase): def __init__(self, report_processor): super(GetSbdConfig, self).__init__(report_processor) self._config_list = [] self._successful_target_list = [] def _get_request_data(self): return 
RequestData("remote/get_sbd_config") def _process_response(self, response): report = response_to_report_item( response, severity=ReportItemSeverity.WARNING ) node_label = response.request.target.label if report is not None: if not response.was_connected: self._report(report) self._report( reports.unable_to_get_sbd_config( node_label, "", ReportItemSeverity.WARNING ) ) return self._config_list.append({ "node": node_label, "config": environment_file_to_dict(response.data) }) self._successful_target_list.append(node_label) def on_complete(self): for node in self._target_list: if node.label not in self._successful_target_list: self._config_list.append({ "node": node.label, "config": None }) return self._config_list class GetSbdStatus(AllSameDataMixin, AllAtOnceStrategyMixin, RunRemotelyBase): def __init__(self, report_processor): super(GetSbdStatus, self).__init__(report_processor) self._status_list = [] self._successful_target_list = [] def _get_request_data(self): return RequestData("remote/check_sbd", # here we just need info about sbd service, therefore watchdog and # device list is empty [ ("watchdog", ""), ("device_list", "[]"), ] ) def _process_response(self, response): report = response_to_report_item( response, severity=ReportItemSeverity.WARNING ) node_label = response.request.target.label if report is not None: self._report_list([ report, #reason is in previous report item, warning is there implicit reports.unable_to_get_sbd_status(node_label, "") ]) return try: self._status_list.append({ "node": node_label, "status": json.loads(response.data)["sbd"] }) self._successful_target_list.append(node_label) except (ValueError, KeyError) as e: self._report(reports.unable_to_get_sbd_status(node_label, str(e))) def on_complete(self): for node in self._target_list: if node.label not in self._successful_target_list: self._status_list.append({ "node": node.label, "status": { "installed": None, "enabled": None, "running": None } }) return self._status_list class CheckSbd(AllAtOnceStrategyMixin, RunRemotelyBase): def __init__(self, report_processor): super(CheckSbd, self).__init__(report_processor) self._request_data_list = [] def _prepare_initial_requests(self): return [ Request( target, RequestData( "remote/check_sbd", [ ("watchdog", watchdog), ("device_list", json.dumps(device_list)) ] ) ) for target, watchdog, device_list in self._request_data_list ] def _process_response(self, response): report = response_to_report_item(response) if report: self._report(report) return report_list = [] node_label = response.request.target.label try: data = json.loads(response.data) if not data["sbd"]["installed"]: report_list.append(reports.sbd_not_installed(node_label)) if not data["watchdog"]["exist"]: report_list.append(reports.watchdog_not_found( node_label, data["watchdog"]["path"] )) for device in data.get("device_list", []): if not device["exist"]: report_list.append(reports.sbd_device_does_not_exist( device["path"], node_label )) elif not device["block_device"]: report_list.append(reports.sbd_device_is_not_block_device( device["path"], node_label )) # TODO maybe we can check whenever device is initialized by sbd (by # running 'sbd -d dump;') except (ValueError, KeyError, TypeError): report_list.append(reports.invalid_response_format(node_label)) if report_list: self._report_list(report_list) else: self._report( reports.sbd_check_success(response.request.target.label) ) def add_request(self, target, watchdog, device_list): self._request_data_list.append((target, watchdog, device_list)) def before(self): 
self._report(reports.sbd_check_started()) pcs-0.9.164/pcs/lib/communication/test/000077500000000000000000000000001326265502500176455ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/communication/test/__init__.py000066400000000000000000000000001326265502500217440ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/communication/test/test_booth.py000066400000000000000000000013341326265502500223720ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase, skip class BoothSendConfig(TestCase): """ tested in: pcs.lib.commands.test.test_booth.ConfigSyncTest """ class BoothGetConfig(TestCase): """ tested in: pcs.lib.commands.test.test_booth.PullConfigSuccess pcs.lib.commands.test.test_booth.PullConfigFailure pcs.lib.commands.test.test_booth.PullConfigWithAuthfileSuccess pcs.lib.commands.test.test_booth.PullConfigWithAuthfileFailure """ @skip("TODO: missing tests for pcs.lib.communication.booth.BoothSaveFiles") class BoothSaveFiles(TestCase): def test_skip(self): pass pcs-0.9.164/pcs/lib/communication/test/test_corosync.py000066400000000000000000000007661326265502500231260ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase class CheckCorosyncOffline(TestCase): """ tested in: pcs.lib.test.test_env.PushCorosyncConfLiveNoQdeviceTest pcs.lib.commands.test.sbd.test_enable_sbd """ class DistributeCorosyncConf(TestCase): """ tested in: pcs.lib.test.test_env.PushCorosyncConfLiveNoQdeviceTest pcs.lib.commands.test.sbd.test_enable_sbd """ pcs-0.9.164/pcs/lib/communication/test/test_nodes.py000066400000000000000000000074011326265502500223700ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.test.tools.assertions import assert_report_item_list_equal from pcs.common import report_codes from pcs.lib.errors import ReportItemSeverity as severity from pcs.lib.communication import nodes class GetOnlineTargets(TestCase): """ tested in: pcs.lib.commands.test.sbd.test_enable_sbd """ class AvailabilityCheckerNode(TestCase): def setUp(self): self.node = "node1" def assert_result_causes_reports( self, availability_info, expected_report_items ): report_items = [] nodes.availability_checker_node( availability_info, report_items, self.node ) assert_report_item_list_equal(report_items, expected_report_items) def test_no_reports_when_available(self): self.assert_result_causes_reports({"node_available": True}, []) def test_report_node_is_in_cluster(self): self.assert_result_causes_reports({"node_available": False}, [ ( severity.ERROR, report_codes.CANNOT_ADD_NODE_IS_IN_CLUSTER, { "node": self.node } ), ]) def test_report_node_is_running_pacemaker_remote(self): self.assert_result_causes_reports( {"node_available": False, "pacemaker_remote": True}, [ ( severity.ERROR, report_codes.CANNOT_ADD_NODE_IS_RUNNING_SERVICE, { "node": self.node, "service": "pacemaker_remote", } ), ] ) def test_report_node_is_running_pacemaker(self): self.assert_result_causes_reports( {"node_available": False, "pacemaker_running": True}, [ ( severity.ERROR, report_codes.CANNOT_ADD_NODE_IS_RUNNING_SERVICE, { "node": self.node, "service": "pacemaker", } ), ] ) class AvailabilityCheckerRemoteNode(TestCase): def setUp(self): self.node = "node1" def assert_result_causes_reports( self, availability_info, expected_report_items ): report_items = [] 
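        # (editor's note) both availability checkers share one contract: they
        # never raise and return nothing -- they only append ReportItems to
        # the caller-supplied list, so an empty list means the node is
        # usable, e.g.:
        #
        #   report_items = []
        #   nodes.availability_checker_node(
        #       {"node_available": True}, report_items, "node1"
        #   )
        #   assert report_items == []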
        nodes.availability_checker_remote_node(
            availability_info, report_items, self.node
        )
        assert_report_item_list_equal(report_items, expected_report_items)

    def test_no_reports_when_available(self):
        self.assert_result_causes_reports({"node_available": True}, [])

    def test_report_node_is_running_pacemaker(self):
        self.assert_result_causes_reports(
            {"node_available": False, "pacemaker_running": True},
            [
                (
                    severity.ERROR,
                    report_codes.CANNOT_ADD_NODE_IS_RUNNING_SERVICE,
                    {
                        "node": self.node,
                        "service": "pacemaker",
                    }
                ),
            ]
        )

    def test_report_node_is_in_cluster(self):
        self.assert_result_causes_reports({"node_available": False}, [
            (
                severity.ERROR,
                report_codes.CANNOT_ADD_NODE_IS_IN_CLUSTER,
                {
                    "node": self.node
                }
            ),
        ])

    def test_no_reports_when_pacemaker_remote_there(self):
        self.assert_result_causes_reports(
            {"node_available": False, "pacemaker_remote": True},
            []
        )

pcs-0.9.164/pcs/lib/communication/test/test_qdevice.py
from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.lib.communication import qdevice
from pcs.test.tools.pcs_unittest import TestCase


class Stop(TestCase):
    """
    tested in:
        pcs.lib.commands.test.test_quorum.RemoveDeviceNetTest
        pcs.lib.test.test_env.PushCorosyncConfLiveWithQdeviceTest
    """

class Start(TestCase):
    """
    tested in:
        pcs.lib.commands.test.test_quorum.AddDeviceNetTest
        pcs.lib.test.test_env.PushCorosyncConfLiveWithQdeviceTest
    """

class Enable(TestCase):
    """
    tested in:
        pcs.lib.commands.test.test_quorum.AddDeviceNetTest
    """

class Disable(TestCase):
    """
    tested in:
        pcs.lib.commands.test.test_quorum.RemoveDeviceNetTest
    """

pcs-0.9.164/pcs/lib/communication/test/test_qdevice_net.py
from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.lib.communication import qdevice_net
from pcs.test.tools.pcs_unittest import TestCase


class GetCaCert(TestCase):
    """
    tested in:
        pcs.lib.commands.test.test_quorum.AddDeviceNetTest
    """

class ClientSetup(TestCase):
    """
    tested in:
        pcs.lib.commands.test.test_quorum.AddDeviceNetTest
    """

class SignCertificate(TestCase):
    """
    tested in:
        pcs.lib.commands.test.test_quorum.AddDeviceNetTest
    """

class ClientImportCertificateAndKey(TestCase):
    """
    tested in:
        pcs.lib.commands.test.test_quorum.AddDeviceNetTest
    """

class ClientDestroy(TestCase):
    """
    tested in:
        pcs.lib.commands.test.test_quorum.RemoveDeviceNetTest
    """

pcs-0.9.164/pcs/lib/communication/test/test_sbd.py
from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.test.tools.pcs_unittest import TestCase


class EnableSbdService(TestCase):
    """
    tested in:
        pcs.lib.commands.test.sbd.test_enable_sbd
    """

class DisableSbdService(TestCase):
    """
    tested in:
        pcs.lib.commands.test.sbd.test_disable_sbd.DisableSbd
    """

class RemoveStonithWatchdogTimeout(TestCase):
    """
    tested in:
        pcs.lib.commands.test.sbd.test_enable_sbd
    """

class SetStonithWatchdogTimeoutToZero(TestCase):
    """
    tested in:
        pcs.lib.commands.test.sbd.test_disable_sbd.DisableSbd
    """

class SetSbdConfig(TestCase):
    """
    tested in:
        pcs.lib.commands.test.sbd.test_enable_sbd
    """

class GetSbdConfig(TestCase):
    """
    tested in:
        pcs.lib.commands.test.sbd.test_get_cluster_sbd_config.GetClusterSbdConfig
    """

class GetSbdStatus(TestCase):
    """
    tested in:
        pcs.lib.commands.test.sbd.test_get_cluster_sbd_status.GetClusterSbdStatus
    """

class CheckSbd(TestCase):
    """
    tested in:
pcs.lib.commands.test.sbd.test_enable_sbd """ pcs-0.9.164/pcs/lib/communication/tools.py000066400000000000000000000224271326265502500204070ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.common import report_codes from pcs.common.node_communicator import Request from pcs.lib import reports from pcs.lib.node_communication import response_to_report_item from pcs.lib.errors import ( LibraryError, ReportItemSeverity, ) def run(communicator, cmd): """ Run a communication command. Returns the return value of method on_complete() of the communication command after the run. NodeCommunicator communicator -- object used for communication CommunicationCommandInterface cmd """ cmd.before() communicator.add_requests(cmd.get_initial_request_list()) for response in communicator.start_loop(): extra_requests = cmd.on_response(response) if extra_requests: communicator.add_requests(extra_requests) return cmd.on_complete() def run_and_raise(communicator, cmd): """ Run a communication command. Returns the return value of method on_complete() of the communication command after the run. Raises LibraryError (with no report item) when some errors occurred while running the communication command. NodeCommunicator communicator -- object used for communication CommunicationCommandInterface cmd """ to_return = run(communicator, cmd) if cmd.error_list: raise LibraryError() return to_return class CommunicationCommandInterface(object): """ Interface for all communication commands. """ def get_initial_request_list(self): """ Returns an initial list of Request objects. """ raise NotImplementedError() def on_response(self, response): """ Process a received response. Returns a list of new Request objects that should be added to the executing queue. Response response -- a response to be processed """ raise NotImplementedError() def on_complete(self): """ Runs after all requests have finished. """ raise NotImplementedError() def before(self): """ Runs before executing the requests. """ raise NotImplementedError() @property def error_list(self): """ List of errors which occurred while running the requests. """ raise NotImplementedError() class RunRemotelyBase(CommunicationCommandInterface): """ Abstract class for communication commands. This class provides methods for reporting. """ #pylint: disable=abstract-method def __init__(self, report_processor): self._report_processor = report_processor self._error_list = [] def _get_response_report(self, response): """ Convert the specified response to a report item. Returns None if the response has no failures. Response response -- a response to be converted """ return response_to_report_item(response) def _report_list(self, report_list): """ Send reports from report_list to the report processor. list report_list -- list of ReportItem objects """ self._error_list.extend(self._report_processor.report_list(report_list)) def _report(self, report): """ Send the specified report to the report processor. ReportItem report -- report which will be reported """ self._report_list([report]) def _process_response(self, response): """ Process a received response. Returns a list of new Request objects that should be added to the executing queue. If no new Request should be added, it is not necessary to return an empty list; returning None works as well. Response response -- a response to be processed """ raise NotImplementedError() def on_response(self, response): returned = self._process_response(response) return returned if returned else [] def on_complete(self): return None def before(self): pass @property def error_list(self): return self._error_list class StrategyBase(object): """ Abstract base class of the communication strategies. Always use at most one strategy mixin in a communication command class. """ def _prepare_initial_requests(self): """ Returns a list of all Request objects which should be run. A full strategy implementation uses this list for creating the initial request list and the following ones. """ raise NotImplementedError() def get_initial_request_list(self): """ This method has to be implemented by the descendants. """ raise NotImplementedError() class OneByOneStrategyMixin(StrategyBase): """ Communication strategy in which requests are executed one by one, so only one request from _prepare_initial_requests is chosen as the initial request list. The remaining requests are then available by calling the method _get_next_list. """ #pylint: disable=abstract-method __iter = None __successful = False def get_initial_request_list(self): """ Returns only the first request from _prepare_initial_requests. """ self.__iter = iter(self._prepare_initial_requests()) return self._get_next_list() def _get_next_list(self): """ Returns a list which contains the next Request object from _prepare_initial_requests. Returns an empty list when there is no request left. """ try: return [next(self.__iter)] except StopIteration: return [] def _on_success(self): self.__successful = True def on_complete(self): if not self.__successful: self._report(reports.unable_to_perform_operation_on_any_node()) return None class AllAtOnceStrategyMixin(StrategyBase): """ Communication strategy in which all requests are executed at once in parallel. """ #pylint: disable=abstract-method def get_initial_request_list(self): return self._prepare_initial_requests() class AllSameDataMixin(object): """ Communication command mixin which adds common methods for commands where requests to all targets have the same data. """ __targets = None def _get_request_data(self): """ Returns a RequestData object to use as data for requests to all targets. """ raise NotImplementedError() def _prepare_initial_requests(self): return [ Request(target, self._get_request_data()) for target in self.__target_list ] def add_request(self, target): """ Add a target to which a request will be sent. RequestTarget target -- target that will be added. """ self.set_targets([target]) def set_targets(self, target_list): """ Add targets to which requests will be sent. list target_list -- RequestTarget list """ self.__target_list.extend(target_list) @property def __target_list(self): if self.__targets is None: self.__targets = [] return self.__targets @property def _target_list(self): """ List of RequestTarget objects to which requests will be sent. """ return list(self.__target_list) class SimpleResponseProcessingMixin(object): """ Communication command mixin which adds common response processing. When a request fails, an error/warning is reported. Otherwise the report returned by _get_success_report is reported. """ def _get_success_report(self, node_label): """ Returns the ReportItem which should be reported when a request was successful. string node_label -- node identifier on which the request was successful """ raise NotImplementedError() def _process_response(self, response): report = self._get_response_report(response) if report is None: report = self._get_success_report(response.request.target.label) self._report(report) class SimpleResponseProcessingNoResponseOnSuccessMixin(object): """ Communication command mixin which adds common response processing. When a request fails, an error/warning is reported. """ def _process_response(self, response): report = self._get_response_report(response) if report is not None: self._report(report) class SkipOfflineMixin(object): """ Communication command mixin which simplifies handling of forcing skip offline nodes. This mixin provides the method _set_skip_offline which should be called from __init__ of the descendants. The severity of the report item returned from _get_response_report is then set according to the value of skip_offline_targets. """ _failure_severity = ReportItemSeverity.ERROR _failure_forceable = report_codes.SKIP_OFFLINE_NODES def _set_skip_offline(self, skip_offline_targets): """ Set the value of the skip_offline_targets flag. boolean skip_offline_targets """ if skip_offline_targets: self._failure_severity = ReportItemSeverity.WARNING self._failure_forceable = None def _get_response_report(self, response): return response_to_report_item( response, severity=self._failure_severity, forceable=self._failure_forceable, )
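A minimal sketch of how the mixins above compose into a concrete communication command (illustrative only, not part of the archive; it assumes RequestData is available next to Request in pcs.common.node_communicator, and the endpoint name and report code are made up):

from pcs.common.node_communicator import RequestData
from pcs.lib.communication.tools import (
    AllAtOnceStrategyMixin,
    AllSameDataMixin,
    RunRemotelyBase,
    SimpleResponseProcessingMixin,
)
from pcs.lib.errors import ReportItem

class PingAllNodes(
    AllSameDataMixin, AllAtOnceStrategyMixin,
    SimpleResponseProcessingMixin, RunRemotelyBase,
):
    # AllSameDataMixin builds one Request per target from this shared data;
    # "remote/ping" is a hypothetical pcsd endpoint
    def _get_request_data(self):
        return RequestData("remote/ping")

    # SimpleResponseProcessingMixin reports this for every node whose request
    # succeeded; "NODE_PING_OK" is a made-up report code, real commands use
    # helpers from pcs.lib.reports instead
    def _get_success_report(self, node_label):
        return ReportItem.info("NODE_PING_OK", info={"node": node_label})

Typical driver code then mirrors pcs.lib.env: create the command with a report processor, pass RequestTarget objects to set_targets() and hand the command to run_and_raise(communicator, cmd).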
pcs-0.9.164/pcs/lib/corosync/000077500000000000000000000000001326265502500156605ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/corosync/__init__.py000066400000000000000000000000001326265502500177570ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/corosync/config_facade.py000066400000000000000000000732311326265502500207700ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import re from pcs.common import report_codes from pcs.lib import reports, validate from pcs.lib.errors import ReportItemSeverity, LibraryError from pcs.lib.corosync import config_parser from pcs.lib.node import NodeAddresses, NodeAddressesList class ConfigFacade(object): """ Provides high level access to a corosync config file """ QUORUM_OPTIONS = ( "auto_tie_breaker", "last_man_standing", "last_man_standing_window", "wait_for_all", ) QUORUM_OPTIONS_INCOMPATIBLE_WITH_QDEVICE = ( "auto_tie_breaker", "last_man_standing", "last_man_standing_window", ) __QUORUM_DEVICE_HEURISTICS_EXEC_NAME_RE = re.compile(r"^exec_[^.:{}#\s]+$") @classmethod def from_string(cls, config_string): """ Parse a corosync config and create a facade around it config_string corosync config text """ try: return cls(config_parser.parse_string(config_string)) except config_parser.MissingClosingBraceException: raise LibraryError( reports.corosync_config_parser_missing_closing_brace() ) except config_parser.UnexpectedClosingBraceException: raise LibraryError( reports.corosync_config_parser_unexpected_closing_brace() ) except config_parser.CorosyncConfParserException: raise LibraryError( reports.corosync_config_parser_other_error() ) def __init__(self, parsed_config): """ Create a facade around a parsed corosync config file parsed_config parsed corosync config """ self._config = parsed_config # set to True if changes cannot be applied on a running cluster self._need_stopped_cluster = False # set to True if a qdevice reload is required to apply changes self._need_qdevice_reload = False @property def config(self): return self._config @property def need_stopped_cluster(self): return self._need_stopped_cluster @property def need_qdevice_reload(self): return self._need_qdevice_reload def get_cluster_name(self): cluster_name = "" for totem in self.config.get_sections("totem"): for attrs in totem.get_attributes("cluster_name"): cluster_name = attrs[1] return cluster_name def get_nodes(self): """ Get all defined nodes """ result = NodeAddressesList() for nodelist in self.config.get_sections("nodelist"): for node in nodelist.get_sections("node"): node_data = { "ring0_addr": None, "ring1_addr": None, "name": None, "nodeid": None, } for attr_name, attr_value in node.get_attributes(): if attr_name in node_data: node_data[attr_name] = attr_value result.append(NodeAddresses( node_data["ring0_addr"], node_data["ring1_addr"], node_data["name"], node_data["nodeid"] )) return result def set_quorum_options(self, report_processor, options): """ Set options in the quorum section options quorum options dict """ report_processor.process_list( self.__validate_quorum_options(options) ) quorum_section_list = self.__ensure_section(self.config, "quorum") self.__set_section_options(quorum_section_list, options) self.__update_two_node() self.__remove_empty_sections(self.config) self._need_stopped_cluster = True def get_quorum_options(self): """ Get configurable options from the quorum section """ options = {} for section in self.config.get_sections("quorum"): for name, value in section.get_attributes(): if name in self.__class__.QUORUM_OPTIONS: options[name] = value return options def is_enabled_auto_tie_breaker(self): """ Returns True if the auto tie breaker option is enabled, False otherwise. """ auto_tie_breaker = "0" for quorum in self.config.get_sections("quorum"): for attr in quorum.get_attributes("auto_tie_breaker"): auto_tie_breaker = attr[1] return auto_tie_breaker == "1" def __validate_quorum_options(self, options): report_items = [] has_qdevice = self.has_quorum_device() qdevice_incompatible_options = [] for name, value in sorted(options.items()): allowed_names = self.__class__.QUORUM_OPTIONS if name not in allowed_names: report_items.append( reports.invalid_options([name], allowed_names, "quorum") ) continue if value == "": continue if ( has_qdevice and name in self.__class__.QUORUM_OPTIONS_INCOMPATIBLE_WITH_QDEVICE ): qdevice_incompatible_options.append(name) if name == "last_man_standing_window": if not value.isdigit(): report_items.append(reports.invalid_option_value( name, value, "positive integer" )) else: allowed_values = ("0", "1") if value not in allowed_values: report_items.append(reports.invalid_option_value( name, value, allowed_values )) if qdevice_incompatible_options: report_items.append( reports.corosync_options_incompatible_with_qdevice( qdevice_incompatible_options ) ) return report_items def has_quorum_device(self): """ Check if a quorum device is present in the config """ for quorum in self.config.get_sections("quorum"): for device in quorum.get_sections("device"): if device.get_attributes("model"): return True return False def get_quorum_device_settings(self): """ Get configurable options from the quorum.device section """ model = None model_options = {} generic_options = {} heuristics_options = {} for quorum in self.config.get_sections("quorum"): for device in quorum.get_sections("device"): for name, value in device.get_attributes(): if name == "model": model = value else: generic_options[name] = value for subsection in device.get_sections(): if subsection.name == "heuristics": heuristics_options.update(subsection.get_attributes()) continue if subsection.name not in model_options:
model_options[subsection.name] = {} model_options[subsection.name].update( subsection.get_attributes() ) return ( model, model_options.get(model, {}), generic_options, heuristics_options, ) def add_quorum_device( self, report_processor, model, model_options, generic_options, heuristics_options, force_model=False, force_options=False, ): """ Add quorum device configuration string model -- quorum device model dict model_options -- model specific options dict generic_options -- generic quorum device options dict heuristics_options -- heuristics options bool force_model -- continue even if the model is not valid bool force_options -- continue even if options are not valid """ # validation if self.has_quorum_device(): raise LibraryError(reports.qdevice_already_defined()) report_processor.process_list( self.__validate_quorum_device_model(model, force_model) + self.__validate_quorum_device_model_options( model, model_options, need_required=True, force=force_options ) + self.__validate_quorum_device_generic_options( generic_options, force=force_options ) + self.__validate_quorum_device_add_heuristics( heuristics_options, force_options=force_options ) ) # configuration cleanup remove_need_stopped_cluster = dict([ (name, "") for name in self.__class__.QUORUM_OPTIONS_INCOMPATIBLE_WITH_QDEVICE ]) # remove old device settings quorum_section_list = self.__ensure_section(self.config, "quorum") for quorum in quorum_section_list: for device in quorum.get_sections("device"): quorum.del_section(device) for name, value in quorum.get_attributes(): if ( name in remove_need_stopped_cluster and value not in ["", "0"] ): self._need_stopped_cluster = True # remove conflicting quorum options attrs_to_remove = { "allow_downscale": "", "two_node": "", } attrs_to_remove.update(remove_need_stopped_cluster) self.__set_section_options(quorum_section_list, attrs_to_remove) # remove nodes' votes for nodelist in self.config.get_sections("nodelist"): for node in nodelist.get_sections("node"): node.del_attributes_by_name("quorum_votes") # add new configuration quorum = quorum_section_list[-1] new_device = config_parser.Section("device") quorum.add_section(new_device) self.__set_section_options([new_device], generic_options) new_device.set_attribute("model", model) new_model = config_parser.Section(model) self.__set_section_options([new_model], model_options) new_device.add_section(new_model) new_heuristics = config_parser.Section("heuristics") self.__set_section_options([new_heuristics], heuristics_options) new_device.add_section(new_heuristics) if self.__is_heuristics_enabled_with_no_exec(): report_processor.process( reports.corosync_quorum_heuristics_enabled_with_no_exec() ) self.__update_qdevice_votes() self.__update_two_node() self.__remove_empty_sections(self.config) def update_quorum_device( self, report_processor, model_options, generic_options, heuristics_options, force_options=False ): """ Update existing quorum device configuration dict model_options -- model specific options dict generic_options -- generic quorum device options dict heuristics_options -- heuristics options bool force_options -- continue even if options are not valid """ # validation if not self.has_quorum_device(): raise LibraryError(reports.qdevice_not_defined()) model = None for quorum in self.config.get_sections("quorum"): for device in quorum.get_sections("device"): for dummy_name, value in device.get_attributes("model"): model = value report_processor.process_list( self.__validate_quorum_device_model_options( model, model_options, 
need_required=False, force=force_options ) + self.__validate_quorum_device_generic_options( generic_options, force=force_options ) + self.__validate_quorum_device_update_heuristics( heuristics_options, force_options=force_options ) ) # set new configuration device_sections = [] model_sections = [] heuristics_sections = [] for quorum in self.config.get_sections("quorum"): device_sections.extend(quorum.get_sections("device")) for device in quorum.get_sections("device"): model_sections.extend(device.get_sections(model)) heuristics_sections.extend(device.get_sections("heuristics")) # we know device sections exist, otherwise the function would exit at # has_quorum_device line above if not model_sections: new_model = config_parser.Section(model) device_sections[-1].add_section(new_model) model_sections.append(new_model) if not heuristics_sections: new_heuristics = config_parser.Section("heuristics") device_sections[-1].add_section(new_heuristics) heuristics_sections.append(new_heuristics) self.__set_section_options(device_sections, generic_options) self.__set_section_options(model_sections, model_options) self.__set_section_options(heuristics_sections, heuristics_options) if self.__is_heuristics_enabled_with_no_exec(): report_processor.process( reports.corosync_quorum_heuristics_enabled_with_no_exec() ) self.__update_qdevice_votes() self.__update_two_node() self.__remove_empty_sections(self.config) self._need_qdevice_reload = True def remove_quorum_device_heuristics(self): """ Remove quorum device heuristics configuration """ if not self.has_quorum_device(): raise LibraryError(reports.qdevice_not_defined()) for quorum in self.config.get_sections("quorum"): for device in quorum.get_sections("device"): for heuristics in device.get_sections("heuristics"): device.del_section(heuristics) self.__remove_empty_sections(self.config) self._need_qdevice_reload = True def remove_quorum_device(self): """ Remove all quorum device configuration """ if not self.has_quorum_device(): raise LibraryError(reports.qdevice_not_defined()) for quorum in self.config.get_sections("quorum"): for device in quorum.get_sections("device"): quorum.del_section(device) self.__update_two_node() self.__remove_empty_sections(self.config) def __validate_quorum_device_model(self, model, force_model=False): report_items = [] allowed_values = ( "net", ) if model not in allowed_values: report_items.append(reports.invalid_option_value( "model", model, allowed_values, severity=( ReportItemSeverity.WARNING if force_model else ReportItemSeverity.ERROR ), forceable=( None if force_model else report_codes.FORCE_QDEVICE_MODEL ) )) return report_items def __validate_quorum_device_model_options( self, model, model_options, need_required, force=False ): if model == "net": return self.__validate_quorum_device_model_net_options( model_options, need_required, force=force ) return [] def __validate_quorum_device_model_net_options( self, model_options, need_required, force=False ): required_options = frozenset(["host", "algorithm"]) optional_options = frozenset([ "connect_timeout", "force_ip_version", "port", "tie_breaker", ]) allowed_options = required_options | optional_options model_options_names = frozenset(model_options.keys()) missing_options = [] report_items = [] severity = ( ReportItemSeverity.WARNING if force else ReportItemSeverity.ERROR ) forceable = None if force else report_codes.FORCE_OPTIONS if need_required: missing_options += required_options - model_options_names for name, value in sorted(model_options.items()): if name not in 
allowed_options: report_items.append(reports.invalid_options( [name], allowed_options, "quorum device model", severity=severity, forceable=forceable )) continue if value == "": # do not allow to remove required options if name in required_options: missing_options.append(name) else: continue if name == "algorithm": allowed_values = ("ffsplit", "lms") if value not in allowed_values: report_items.append(reports.invalid_option_value( name, value, allowed_values, severity=severity, forceable=forceable )) if name == "connect_timeout": minimum, maximum = 1000, 2*60*1000 if not (value.isdigit() and minimum <= int(value) <= maximum): min_max = "{min}-{max}".format(min=minimum, max=maximum) report_items.append(reports.invalid_option_value( name, value, min_max, severity=severity, forceable=forceable )) if name == "force_ip_version": allowed_values = ("0", "4", "6") if value not in allowed_values: report_items.append(reports.invalid_option_value( name, value, allowed_values, severity=severity, forceable=forceable )) if name == "port": minimum, maximum = 1, 65535 if not (value.isdigit() and minimum <= int(value) <= maximum): min_max = "{min}-{max}".format(min=minimum, max=maximum) report_items.append(reports.invalid_option_value( name, value, min_max, severity=severity, forceable=forceable )) if name == "tie_breaker": node_ids = [node.id for node in self.get_nodes()] allowed_nonid = ["lowest", "highest"] if value not in allowed_nonid + node_ids: allowed_values = allowed_nonid + ["valid node id"] report_items.append(reports.invalid_option_value( name, value, allowed_values, severity=severity, forceable=forceable )) if missing_options: report_items.append( reports.required_option_is_missing(sorted(missing_options)) ) return report_items def __validate_quorum_device_generic_options( self, generic_options, force=False ): optional_options = frozenset([ "sync_timeout", "timeout", ]) allowed_options = optional_options report_items = [] severity = ( ReportItemSeverity.WARNING if force else ReportItemSeverity.ERROR ) forceable = None if force else report_codes.FORCE_OPTIONS for name, value in sorted(generic_options.items()): if name not in allowed_options: # model is never allowed in generic options, it is passed # in its own argument report_items.append(reports.invalid_options( [name], allowed_options, "quorum device", severity=( severity if name != "model" else ReportItemSeverity.ERROR ), forceable=(forceable if name != "model" else None) )) continue if value == "": continue if not value.isdigit(): report_items.append(reports.invalid_option_value( name, value, "positive integer", severity=severity, forceable=forceable )) return report_items def __split_heuristics_exec_options(self, heuristics_options): options_exec = dict() options_nonexec = dict() for name, value in heuristics_options.items(): if name.startswith("exec_"): options_exec[name] = value else: options_nonexec[name] = value return options_nonexec, options_exec def __get_heuristics_options_validators( self, allow_empty_values=False, force_options=False ): validators = { "mode": validate.value_in( "mode", ("off", "on", "sync"), code_to_allow_extra_values=report_codes.FORCE_OPTIONS, allow_extra_values=force_options ), "interval": validate.value_positive_integer( "interval", code_to_allow_extra_values=report_codes.FORCE_OPTIONS, allow_extra_values=force_options ), "sync_timeout": validate.value_positive_integer( "sync_timeout", code_to_allow_extra_values=report_codes.FORCE_OPTIONS, allow_extra_values=force_options ), "timeout": 
validate.value_positive_integer( "timeout", code_to_allow_extra_values=report_codes.FORCE_OPTIONS, allow_extra_values=force_options ), } if not allow_empty_values: # make sure to return a list even in python3 so we can call append # on it return list(validators.values()) return [ validate.value_empty_or_valid(option_name, validator) for option_name, validator in validators.items() ] def __validate_heuristics_noexec_option_names( self, options_nonexec, force_options=False ): return validate.names_in( ("mode", "interval", "sync_timeout", "timeout"), options_nonexec.keys(), "heuristics", report_codes.FORCE_OPTIONS, allow_extra_names=force_options, allowed_option_patterns=["exec_NAME"] ) def __validate_heuristics_exec_option_names(self, options_exec): # We must be strict and not allow this validation to be overridden, # otherwise setting a crafted exec_NAME could be misused for setting # arbitrary corosync.conf settings. regexp = self.__QUORUM_DEVICE_HEURISTICS_EXEC_NAME_RE report_list = [] valid_options = [] not_valid_options = [] for name in options_exec: if regexp.match(name) is None: not_valid_options.append(name) else: valid_options.append(name) if not_valid_options: report_list.append( reports.invalid_userdefined_options( not_valid_options, "exec_NAME cannot contain '.:{}#' and whitespace characters", "heuristics", severity=ReportItemSeverity.ERROR, forceable=None ) ) return report_list, valid_options def __validate_quorum_device_add_heuristics( self, heuristics_options, force_options=False ): report_list = [] options_nonexec, options_exec = self.__split_heuristics_exec_options( heuristics_options ) validators = self.__get_heuristics_options_validators( force_options=force_options ) exec_options_reports, valid_exec_options = ( self.__validate_heuristics_exec_option_names(options_exec) ) for option in valid_exec_options: validators.append( validate.value_not_empty(option, "a command to be run") ) report_list.extend( validate.run_collection_of_option_validators( heuristics_options, validators ) + self.__validate_heuristics_noexec_option_names( options_nonexec, force_options=force_options ) + exec_options_reports ) return report_list def __validate_quorum_device_update_heuristics( self, heuristics_options, force_options=False ): report_list = [] options_nonexec, options_exec = self.__split_heuristics_exec_options( heuristics_options ) validators = self.__get_heuristics_options_validators( allow_empty_values=True, force_options=force_options ) # no validation necessary for values of valid exec options - they are # either empty (meaning they should be removed) or nonempty strings exec_options_reports, dummy_valid_exec_options = ( self.__validate_heuristics_exec_option_names(options_exec) ) report_list.extend( validate.run_collection_of_option_validators( heuristics_options, validators ) + self.__validate_heuristics_noexec_option_names( options_nonexec, force_options=force_options ) + exec_options_reports ) return report_list def __is_heuristics_enabled_with_no_exec(self): regexp = self.__QUORUM_DEVICE_HEURISTICS_EXEC_NAME_RE mode = None exec_found = False for quorum in self.config.get_sections("quorum"): for device in quorum.get_sections("device"): for heuristics in device.get_sections("heuristics"): for name, value in heuristics.get_attributes(): if name == "mode" and value: # Cannot break, must go through all modes, the last # one matters mode = value elif regexp.match(name) and value: exec_found = True return not exec_found and mode in ("on", "sync") def __update_two_node(self): # get
relevant status has_quorum_device = self.has_quorum_device() has_two_nodes = len(self.get_nodes()) == 2 auto_tie_breaker = self.is_enabled_auto_tie_breaker() # update two_node if has_two_nodes and not auto_tie_breaker and not has_quorum_device: quorum_section_list = self.__ensure_section(self.config, "quorum") self.__set_section_options(quorum_section_list, {"two_node": "1"}) else: for quorum in self.config.get_sections("quorum"): quorum.del_attributes_by_name("two_node") def __update_qdevice_votes(self): # ffsplit won't start if votes is missing or not set to 1 # for other algorithms it's required not to put votes at all model = None algorithm = None device_sections = [] for quorum in self.config.get_sections("quorum"): for device in quorum.get_sections("device"): device_sections.append(device) for dummy_name, value in device.get_attributes("model"): model = value for device in device_sections: for model_section in device.get_sections(model): for dummy_name, value in model_section.get_attributes( "algorithm" ): algorithm = value if model == "net": if algorithm == "ffsplit": self.__set_section_options(device_sections, {"votes": "1"}) else: self.__set_section_options(device_sections, {"votes": ""}) def __set_section_options(self, section_list, options): for section in section_list[:-1]: for name in options: section.del_attributes_by_name(name) for name, value in sorted(options.items()): if value == "": section_list[-1].del_attributes_by_name(name) else: section_list[-1].set_attribute(name, value) def __ensure_section(self, parent_section, section_name): section_list = parent_section.get_sections(section_name) if not section_list: new_section = config_parser.Section(section_name) parent_section.add_section(new_section) section_list.append(new_section) return section_list def __remove_empty_sections(self, parent_section): for section in parent_section.get_sections(): self.__remove_empty_sections(section) if section.empty: parent_section.del_section(section) pcs-0.9.164/pcs/lib/corosync/config_parser.py000066400000000000000000000111311326265502500210500ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) class Section(object): def __init__(self, name): self._parent = None self._attr_list = [] self._section_list = [] self._name = name @property def parent(self): return self._parent @property def name(self): return self._name @property def empty(self): return not self._attr_list and not self._section_list def export(self, indent=" "): lines = [] for attr in self._attr_list: lines.append("{0}: {1}".format(*attr)) if self._attr_list and self._section_list: lines.append("") section_count = len(self._section_list) for index, section in enumerate(self._section_list, 1): lines.extend(str(section).split("\n")) if not lines[-1].strip(): del lines[-1] if index < section_count: lines.append("") if self.parent: lines = [indent + x if x else x for x in lines] lines.insert(0, self.name + " {") lines.append("}") final = "\n".join(lines) if final: final += "\n" return final def get_root(self): parent = self while parent.parent: parent = parent.parent return parent def get_attributes(self, name=None): return [ attr for attr in self._attr_list if name is None or attr[0] == name ] def add_attribute(self, name, value): self._attr_list.append([name, value]) return self def del_attribute(self, attribute): self._attr_list = [ attr for attr in self._attr_list if attr != attribute ] return self def del_attributes_by_name(self, name, value=None): self._attr_list = [ attr for 
attr in self._attr_list if not(attr[0] == name and (value is None or attr[1] == value)) ] return self def set_attribute(self, name, value): found = False new_attr_list = [] for attr in self._attr_list: if attr[0] != name: new_attr_list.append(attr) elif not found: found = True attr[1] = value new_attr_list.append(attr) self._attr_list = new_attr_list if not found: self.add_attribute(name, value) return self def get_sections(self, name=None): return [ section for section in self._section_list if name is None or section.name == name ] def add_section(self, section): parent = self while parent: if parent == section: raise CircularParentshipException() parent = parent.parent if section.parent: section.parent.del_section(section) section._parent = self self._section_list.append(section) return self def del_section(self, section): self._section_list.remove(section) # don't set parent to None if the section was not found in the list # thanks to remove raising a ValueError in that case section._parent = None return self def __str__(self): return self.export() def parse_string(conf_text): root = Section("") _parse_section(conf_text.split("\n"), root) return root def _parse_section(lines, section): # parser is trying to work the same way as an original corosync parser while lines: current_line = lines.pop(0).strip() if not current_line or current_line[0] == "#": continue if "{" in current_line: section_name, dummy_junk = current_line.rsplit("{", 1) new_section = Section(section_name.strip()) section.add_section(new_section) _parse_section(lines, new_section) elif "}" in current_line: if not section.parent: raise UnexpectedClosingBraceException() return elif ":" in current_line: section.add_attribute( *[x.strip() for x in current_line.split(":", 1)] ) if section.parent: raise MissingClosingBraceException() class CorosyncConfParserException(Exception): pass class CircularParentshipException(CorosyncConfParserException): pass class ParseErrorException(CorosyncConfParserException): pass class MissingClosingBraceException(ParseErrorException): pass class UnexpectedClosingBraceException(ParseErrorException): pass pcs-0.9.164/pcs/lib/corosync/live.py000066400000000000000000000043541326265502500171770ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import os.path from pcs import settings from pcs.common.tools import join_multilines from pcs.lib import reports from pcs.lib.errors import LibraryError def get_local_corosync_conf(): """ Read corosync.conf file from local machine """ path = settings.corosync_conf_file try: return open(path).read() except EnvironmentError as e: raise LibraryError(reports.corosync_config_read_error(path, e.strerror)) def get_local_cluster_conf(): """ Read cluster.conf file from local machine """ path = settings.cluster_conf_file try: return open(path).read() except EnvironmentError as e: raise LibraryError(reports.cluster_conf_read_error(path, e.strerror)) def exists_local_corosync_conf(): return os.path.exists(settings.corosync_conf_file) def reload_config(runner): """ Ask corosync to reload its configuration """ stdout, stderr, retval = runner.run([ os.path.join(settings.corosync_binaries, "corosync-cfgtool"), "-R" ]) message = join_multilines([stderr, stdout]) if retval != 0 or "invalid option" in message: raise LibraryError(reports.corosync_config_reload_error(message)) def get_quorum_status_text(runner): """ Get runtime quorum status from the local node """ stdout, stderr, retval = runner.run([ 
os.path.join(settings.corosync_binaries, "corosync-quorumtool"), "-p" ]) # retval is 0 on success if node is not in partition with quorum # retval is 1 on error OR on success if node has quorum if retval not in [0, 1] or stderr.strip(): raise LibraryError(reports.corosync_quorum_get_status_error(stderr)) return stdout def set_expected_votes(runner, votes): """ set expected votes in live cluster to specified value """ stdout, stderr, retval = runner.run([ os.path.join(settings.corosync_binaries, "corosync-quorumtool"), # format votes to handle the case where they are int "-e", "{0}".format(votes) ]) if retval != 0: raise LibraryError( reports.corosync_quorum_set_expected_votes_error(stderr) ) return stdout pcs-0.9.164/pcs/lib/corosync/qdevice_client.py000066400000000000000000000014221326265502500212070ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import os.path from pcs import settings from pcs.common.tools import join_multilines from pcs.lib import reports from pcs.lib.errors import LibraryError def get_status_text(runner, verbose=False): """ Get quorum device client runtime status in plain text bool verbose get more detailed output """ cmd = [ os.path.join(settings.corosync_binaries, "corosync-qdevice-tool"), "-s" ] if verbose: cmd.append("-v") stdout, stderr, retval = runner.run(cmd) if retval != 0: raise LibraryError( reports.corosync_quorum_get_status_error( join_multilines([stderr, stdout]) ) ) return stdout pcs-0.9.164/pcs/lib/corosync/qdevice_net.py000066400000000000000000000247271326265502500205340ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import functools import os import os.path import re import shutil from pcs import settings from pcs.common.tools import join_multilines from pcs.lib import external, reports from pcs.lib.errors import LibraryError from pcs.lib.tools import write_tmpfile __model = "net" __service_name = "corosync-qnetd" __qnetd_certutil = os.path.join( settings.corosync_qnet_binaries, "corosync-qnetd-certutil" ) __qnetd_tool = os.path.join( settings.corosync_qnet_binaries, "corosync-qnetd-tool" ) __qdevice_certutil = os.path.join( settings.corosync_binaries, "corosync-qdevice-net-certutil" ) class QnetdNotRunningException(Exception): pass def qdevice_setup(runner): """ initialize qdevice on local host """ if external.is_dir_nonempty(settings.corosync_qdevice_net_server_certs_dir): raise LibraryError(reports.qdevice_already_initialized(__model)) stdout, stderr, retval = runner.run([ __qnetd_certutil, "-i" ]) if retval != 0: raise LibraryError( reports.qdevice_initialization_error( __model, join_multilines([stderr, stdout]) ) ) def qdevice_initialized(): """ check if qdevice server certificate database has been initialized """ return os.path.exists(os.path.join( settings.corosync_qdevice_net_server_certs_dir, "cert8.db" )) def qdevice_destroy(): """ delete qdevice configuration on local host """ try: if qdevice_initialized(): shutil.rmtree(settings.corosync_qdevice_net_server_certs_dir) except EnvironmentError as e: raise LibraryError( reports.qdevice_destroy_error(__model, e.strerror) ) def qdevice_status_generic_text(runner, verbose=False): """ get qdevice runtime status in plain text bool verbose get more detailed output """ args = ["-s"] if verbose: args.append("-v") stdout, stderr, retval = _qdevice_run_tool(runner, args) if retval != 0: raise LibraryError( reports.qdevice_get_status_error( __model, join_multilines([stderr, stdout]) ) ) return stdout def 
qdevice_status_cluster_text(runner, cluster=None, verbose=False): """ get qdevice runtime status in plain text bool verbose get more detailed output string cluster show information only about specified cluster """ args = ["-l"] if verbose: args.append("-v") if cluster: args.extend(["-c", cluster]) stdout, stderr, retval = _qdevice_run_tool(runner, args) if retval != 0: raise LibraryError( reports.qdevice_get_status_error( __model, join_multilines([stderr, stdout]) ) ) return stdout def qdevice_connected_clusters(status_cluster_text): """ parse qnetd cluster status listing and return connected clusters' names string status_cluster_text output of corosync-qnetd-tool -l """ connected_clusters = [] regexp = re.compile(r'^Cluster "(?P<cluster>[^"]+)":$') for line in status_cluster_text.splitlines(): match = regexp.search(line) if match: connected_clusters.append(match.group("cluster")) return connected_clusters def _qdevice_run_tool(runner, args): """ run corosync-qnetd-tool, raise QnetdNotRunningException if qnetd not running CommandRunner runner iterable args corosync-qnetd-tool arguments """ stdout, stderr, retval = runner.run([__qnetd_tool] + args) if retval == 3 and "is qnetd running?" in stderr.lower(): raise QnetdNotRunningException() return stdout, stderr, retval def qdevice_enable(runner): """ make qdevice start automatically on boot on local host """ external.enable_service(runner, __service_name) def qdevice_disable(runner): """ make qdevice not start automatically on boot on local host """ external.disable_service(runner, __service_name) def qdevice_start(runner): """ start qdevice now on local host """ external.start_service(runner, __service_name) def qdevice_stop(runner): """ stop qdevice now on local host """ external.stop_service(runner, __service_name) def qdevice_kill(runner): """ kill qdevice now on local host """ external.kill_services(runner, [__service_name]) def qdevice_sign_certificate_request(runner, cert_request, cluster_name): """ sign client certificate request cert_request certificate request data string cluster_name name of the cluster to which qdevice is being added """ if not qdevice_initialized(): raise LibraryError(reports.qdevice_not_initialized(__model)) # save the certificate request, corosync tool only works with files tmpfile = _store_to_tmpfile( cert_request, reports.qdevice_certificate_sign_error ) # sign the request stdout, stderr, retval = runner.run([ __qnetd_certutil, "-s", "-c", tmpfile.name, "-n", cluster_name ]) tmpfile.close() # temp file is deleted on close if retval != 0: raise LibraryError( reports.qdevice_certificate_sign_error( join_multilines([stderr, stdout]) ) ) # get signed certificate, corosync tool only works with files return _get_output_certificate( stdout, reports.qdevice_certificate_sign_error ) def client_setup(runner, ca_certificate): """ initialize qdevice client on local host ca_certificate qnetd CA certificate """ client_destroy() # save CA certificate, corosync tool only works with files ca_file_path = os.path.join( settings.corosync_qdevice_net_client_certs_dir, settings.corosync_qdevice_net_client_ca_file_name ) try: if not os.path.exists(ca_file_path): os.makedirs( settings.corosync_qdevice_net_client_certs_dir, mode=0o700 ) with open(ca_file_path, "wb") as ca_file: ca_file.write(ca_certificate) except EnvironmentError as e: raise LibraryError( reports.qdevice_initialization_error(__model, e.strerror) ) # initialize client's certificate storage stdout, stderr, retval = runner.run([ __qdevice_certutil, "-i", "-c", ca_file_path ])
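# a non-zero exit code from corosync-qdevice-net-certutil -i means the # client certificate database could not be initialized from the CA file # saved above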
if retval != 0: raise LibraryError( reports.qdevice_initialization_error( __model, join_multilines([stderr, stdout]) ) ) def client_initialized(): """ check if qdevice net client certificate database has been initialized """ return os.path.exists(os.path.join( settings.corosync_qdevice_net_client_certs_dir, "cert8.db" )) def client_destroy(): """ delete qdevice client config files on local host """ try: if client_initialized(): shutil.rmtree(settings.corosync_qdevice_net_client_certs_dir) except EnvironmentError as e: raise LibraryError( reports.qdevice_destroy_error(__model, e.strerror) ) def client_generate_certificate_request(runner, cluster_name): """ create a certificate request which can be signed by qnetd server string cluster_name name of the cluster to which qdevice is being added """ if not client_initialized(): raise LibraryError(reports.qdevice_not_initialized(__model)) stdout, stderr, retval = runner.run([ __qdevice_certutil, "-r", "-n", cluster_name ]) if retval != 0: raise LibraryError( reports.qdevice_initialization_error( __model, join_multilines([stderr, stdout]) ) ) return _get_output_certificate( stdout, functools.partial(reports.qdevice_initialization_error, __model) ) def client_cert_request_to_pk12(runner, cert_request): """ transform signed certificate request to pk12 certificate which can be imported to nodes cert_request signed certificate request """ if not client_initialized(): raise LibraryError(reports.qdevice_not_initialized(__model)) # save the signed certificate request, corosync tool only works with files tmpfile = _store_to_tmpfile( cert_request, reports.qdevice_certificate_import_error ) # transform it stdout, stderr, retval = runner.run([ __qdevice_certutil, "-M", "-c", tmpfile.name ]) tmpfile.close() # temp file is deleted on close if retval != 0: raise LibraryError( reports.qdevice_certificate_import_error( join_multilines([stderr, stdout]) ) ) # get resulting pk12, corosync tool only works with files return _get_output_certificate( stdout, reports.qdevice_certificate_import_error ) def client_import_certificate_and_key(runner, pk12_certificate): """ import qdevice client certificate to the local node certificate storage """ if not client_initialized(): raise LibraryError(reports.qdevice_not_initialized(__model)) # save the certificate, corosync tool only works with files tmpfile = _store_to_tmpfile( pk12_certificate, reports.qdevice_certificate_import_error ) stdout, stderr, retval = runner.run([ __qdevice_certutil, "-m", "-c", tmpfile.name ]) tmpfile.close() # temp file is deleted on close if retval != 0: raise LibraryError( reports.qdevice_certificate_import_error( join_multilines([stderr, stdout]) ) ) def _store_to_tmpfile(data, report_func): try: return write_tmpfile(data, binary=True) except EnvironmentError as e: raise LibraryError(report_func(e.strerror)) def _get_output_certificate(cert_tool_output, report_func): regexp = re.compile(r"^Certificate( request)? 
stored in (?P<path>.+)$") filename = None for line in cert_tool_output.splitlines(): match = regexp.search(line) if match: filename = match.group("path") if not filename: raise LibraryError(report_func(cert_tool_output)) try: with open(filename, "rb") as cert_file: return cert_file.read() except EnvironmentError as e: raise LibraryError(report_func( "{path}: {error}".format(path=filename, error=e.strerror) )) pcs-0.9.164/pcs/lib/env.py000066400000000000000000000361741326265502500151740ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import os.path from pcs import settings from pcs.common.node_communicator import ( NodeCommunicatorFactory, NodeTargetFactory ) from pcs.common.tools import Version from pcs.lib import reports from pcs.lib.booth.env import BoothEnv from pcs.lib.cib.tools import get_cib_crm_feature_set from pcs.lib.pacemaker.env import PacemakerEnv from pcs.lib.cluster_conf_facade import ClusterConfFacade from pcs.lib.communication import qdevice from pcs.lib.communication.corosync import ( CheckCorosyncOffline, DistributeCorosyncConf, ) from pcs.lib.communication.tools import ( run, run_and_raise, ) from pcs.lib.corosync.config_facade import ConfigFacade as CorosyncConfigFacade from pcs.lib.corosync.live import ( exists_local_corosync_conf, get_local_corosync_conf, get_local_cluster_conf, reload_config as reload_corosync_config, ) from pcs.lib.external import ( is_cman_cluster, is_service_running, CommandRunner, NodeCommunicator, ) from pcs.lib.errors import LibraryError from pcs.lib.node_communication import LibCommunicatorLogger from pcs.lib.pacemaker.live import ( diff_cibs_xml, ensure_cib_version, ensure_wait_for_idle_support, get_cib, get_cib_xml, get_cluster_status_xml, push_cib_diff_xml, replace_cib_configuration, wait_for_idle, ) from pcs.lib.pacemaker.state import get_cluster_state_dom from pcs.lib.pacemaker.values import get_valid_timeout_seconds from pcs.lib.tools import write_tmpfile from pcs.lib.xml_tools import etree_to_str class LibraryEnvironment(object): # pylint: disable=too-many-instance-attributes def __init__( self, logger, report_processor, user_login=None, user_groups=None, cib_data=None, corosync_conf_data=None, booth=None, pacemaker=None, token_file_data_getter=None, cluster_conf_data=None, request_timeout=None, ): self._logger = logger self._report_processor = report_processor self._user_login = user_login self._user_groups = [] if user_groups is None else user_groups self._cib_data = cib_data self._corosync_conf_data = corosync_conf_data self._cluster_conf_data = cluster_conf_data self._booth = ( BoothEnv(report_processor, booth) if booth is not None else None ) #pacemaker is currently not mocked and it provides only access to #the authkey self._pacemaker = PacemakerEnv() self._request_timeout = request_timeout self._is_cman_cluster = None # TODO tokens probably should not be inserted from outside, but we're # postponing dealing with them, because it's not that easy to move # related code currently - it's in pcsd self._token_file_data_getter = token_file_data_getter self._token_file = None self._cib_upgrade_reported = False self._cib_data_tmp_file = None self.__loaded_cib_diff_source = None self.__loaded_cib_diff_source_feature_set = None self.__loaded_cib_to_modify = None self._communicator_factory = NodeCommunicatorFactory( LibCommunicatorLogger(self.logger, self.report_processor), self.user_login, self.user_groups, self._request_timeout ) self.__timeout_cache = {} @property def logger(self): return
self._logger @property def report_processor(self): return self._report_processor @property def user_login(self): return self._user_login @property def user_groups(self): return self._user_groups @property def is_cman_cluster(self): if self._is_cman_cluster is None: self._is_cman_cluster = is_cman_cluster(self.cmd_runner()) return self._is_cman_cluster def get_cib(self, minimal_version=None): if self.__loaded_cib_diff_source is not None: raise AssertionError("CIB has already been loaded") self.__loaded_cib_diff_source = get_cib_xml(self.cmd_runner()) self.__loaded_cib_to_modify = get_cib(self.__loaded_cib_diff_source) if minimal_version is not None: upgraded_cib = ensure_cib_version( self.cmd_runner(), self.__loaded_cib_to_modify, minimal_version ) if upgraded_cib is not None: self.__loaded_cib_to_modify = upgraded_cib self.__loaded_cib_diff_source = etree_to_str(upgraded_cib) if not self._cib_upgrade_reported: self.report_processor.process( reports.cib_upgrade_successful() ) self._cib_upgrade_reported = True self.__loaded_cib_diff_source_feature_set = ( get_cib_crm_feature_set( self.__loaded_cib_to_modify, none_if_missing=True ) or Version(0, 0, 0) ) return self.__loaded_cib_to_modify @property def cib(self): if self.__loaded_cib_diff_source is None: raise AssertionError("CIB has not been loaded") return self.__loaded_cib_to_modify def get_cluster_state(self): return get_cluster_state_dom(get_cluster_status_xml(self.cmd_runner())) def _get_wait_timeout(self, wait): if wait is False: return False if wait not in self.__timeout_cache: if not self.is_cib_live: raise LibraryError(reports.wait_for_idle_not_live_cluster()) ensure_wait_for_idle_support(self.cmd_runner()) self.__timeout_cache[wait] = get_valid_timeout_seconds(wait) return self.__timeout_cache[wait] def ensure_wait_satisfiable(self, wait): """ Raise when wait is not supported or when wait is not valid wait value. mixed wait can be False when waiting is not required or valid timeout """ self._get_wait_timeout(wait) def push_cib(self, custom_cib=None, wait=False): """ Push previously loaded instance of CIB or a custom CIB etree custom_cib -- push a custom CIB instead of a loaded instance (allows to push an externally provided CIB and replace the one in the cluster completely) mixed wait -- how many seconds to wait for pacemaker to process new CIB or False for not waiting at all """ if custom_cib is not None: if self.__loaded_cib_diff_source is not None: raise AssertionError( "CIB has been loaded, cannot push custom CIB" ) return self.__push_cib_full(custom_cib, wait) if self.__loaded_cib_diff_source is None: raise AssertionError("CIB has not been loaded") # Push by diff works with crm_feature_set > 3.0.8, see # https://bugzilla.redhat.com/show_bug.cgi?id=1488044 for details. We # only check the version if a CIB has been loaded, otherwise the push # fails anyway. By my testing it seems that only the source CIB's # version matters. 
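# Hence a source CIB with crm_feature_set 3.0.8 or older takes the full # replace branch below, with a report explaining the fallback.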
if self.__loaded_cib_diff_source_feature_set < Version(3, 0, 9): self.report_processor.process( reports.cib_push_forced_full_due_to_crm_feature_set( Version(3, 0, 9), self.__loaded_cib_diff_source_feature_set ) ) return self.__push_cib_full(self.__loaded_cib_to_modify, wait=wait) return self.__push_cib_diff(wait=wait) def __push_cib_full(self, cib_to_push, wait=False): cmd_runner = self.cmd_runner() self.__do_push_cib( cmd_runner, lambda: replace_cib_configuration(cmd_runner, cib_to_push), wait ) def __push_cib_diff(self, wait=False): cmd_runner = self.cmd_runner() self.__do_push_cib( cmd_runner, lambda: self.__main_push_cib_diff(cmd_runner), wait ) def __main_push_cib_diff(self, cmd_runner): cib_diff_xml = diff_cibs_xml( cmd_runner, self.report_processor, self.__loaded_cib_diff_source, etree_to_str(self.__loaded_cib_to_modify) ) if cib_diff_xml: push_cib_diff_xml(cmd_runner, cib_diff_xml) def __do_push_cib(self, cmd_runner, push_strategy, wait): timeout = self._get_wait_timeout(wait) push_strategy() self._cib_upgrade_reported = False self.__loaded_cib_diff_source = None self.__loaded_cib_diff_source_feature_set = None self.__loaded_cib_to_modify = None if self.is_cib_live and timeout is not False: wait_for_idle(cmd_runner, timeout) @property def is_cib_live(self): return self._cib_data is None @property def final_mocked_cib_content(self): if self.is_cib_live: raise AssertionError( "Final mocked cib content does not make sense in live env." ) if self._cib_data_tmp_file: self._cib_data_tmp_file.seek(0) return self._cib_data_tmp_file.read() return self._cib_data def get_corosync_conf_data(self): if self._corosync_conf_data is None: return get_local_corosync_conf() return self._corosync_conf_data def get_corosync_conf(self): return CorosyncConfigFacade.from_string(self.get_corosync_conf_data()) def push_corosync_conf( self, corosync_conf_facade, skip_offline_nodes=False ): corosync_conf_data = corosync_conf_facade.config.export() if self.is_corosync_conf_live: self._push_corosync_conf_live( self.get_node_target_factory().get_target_list( corosync_conf_facade.get_nodes() ), corosync_conf_data, corosync_conf_facade.need_stopped_cluster, corosync_conf_facade.need_qdevice_reload, skip_offline_nodes, ) else: self._corosync_conf_data = corosync_conf_data def _push_corosync_conf_live( self, target_list, corosync_conf_data, need_stopped_cluster, need_qdevice_reload, skip_offline_nodes ): if need_stopped_cluster: com_cmd = CheckCorosyncOffline( self.report_processor, skip_offline_nodes ) com_cmd.set_targets(target_list) run_and_raise(self.get_node_communicator(), com_cmd) com_cmd = DistributeCorosyncConf( self.report_processor, corosync_conf_data, skip_offline_nodes ) com_cmd.set_targets(target_list) run_and_raise(self.get_node_communicator(), com_cmd) if is_service_running(self.cmd_runner(), "corosync"): reload_corosync_config(self.cmd_runner()) self.report_processor.process( reports.corosync_config_reloaded() ) if need_qdevice_reload: self.report_processor.process( reports.qdevice_client_reload_started() ) com_cmd = qdevice.Stop(self.report_processor, skip_offline_nodes) com_cmd.set_targets(target_list) run(self.get_node_communicator(), com_cmd) report_list = com_cmd.error_list com_cmd = qdevice.Start(self.report_processor, skip_offline_nodes) com_cmd.set_targets(target_list) run(self.get_node_communicator(), com_cmd) report_list += com_cmd.error_list if report_list: raise LibraryError() def get_cluster_conf_data(self): if self.is_cluster_conf_live: return get_local_cluster_conf() return 
self._cluster_conf_data def get_cluster_conf(self): return ClusterConfFacade.from_string(self.get_cluster_conf_data()) @property def is_cluster_conf_live(self): return self._cluster_conf_data is None def is_node_in_cluster(self): if self.is_cman_cluster: #TODO --cluster_conf is not propagated here, so no live check is #done here. But this should not be permanent return os.path.exists(settings.corosync_conf_file) if not self.is_corosync_conf_live: raise AssertionError( "Cannot check if node is in cluster with mocked corosync_conf." ) return exists_local_corosync_conf() def command_expect_live_corosync_env(self): if not self.is_corosync_conf_live: raise LibraryError( reports.live_environment_required(["COROSYNC_CONF"]) ) @property def is_corosync_conf_live(self): return self._corosync_conf_data is None def cmd_runner(self): runner_env = { # make sure to get output of external processes in English and ASCII "LC_ALL": "C", } if self.user_login: runner_env["CIB_user"] = self.user_login if not self.is_cib_live: # Dump CIB data to a temporary file and set it up in the runner. # This way every called pacemaker tool can access the CIB and we # don't need to take care of it every time the runner is called. if not self._cib_data_tmp_file: try: cib_data = self._cib_data self._cib_data_tmp_file = write_tmpfile(cib_data) self.report_processor.process( reports.tmp_file_write( self._cib_data_tmp_file.name, cib_data ) ) except EnvironmentError as e: raise LibraryError(reports.cib_save_tmp_error(str(e))) runner_env["CIB_file"] = self._cib_data_tmp_file.name return CommandRunner(self.logger, self.report_processor, runner_env) @property def communicator_factory(self): return self._communicator_factory def get_node_communicator(self): return self.communicator_factory.get_communicator() def get_node_target_factory(self): token_file = self.__get_token_file() return NodeTargetFactory(token_file["tokens"], token_file["ports"]) # deprecated, use communicator_factory or get_node_communicator() def node_communicator(self): return NodeCommunicator( self.logger, self.report_processor, self.__get_token_file()["tokens"], self.user_login, self.user_groups, self._request_timeout ) def __get_token_file(self): if self._token_file is None: if self._token_file_data_getter: self._token_file = self._token_file_data_getter() else: self._token_file = { "tokens": {}, "ports": {}, } return self._token_file @property def booth(self): return self._booth @property def pacemaker(self): return self._pacemaker pcs-0.9.164/pcs/lib/env_file.py000066400000000000000000000101231326265502500161570ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import os.path from pcs.common import report_codes from pcs.common.tools import format_environment_error from pcs.lib import reports from pcs.lib.errors import ReportItemSeverity, LibraryError, LibraryEnvError class GhostFile(object): is_live = False def __init__(self, file_role, content=None, is_binary=False): self.__file_role = file_role self.__content = content self.__no_existing_file_expected = False self.__can_overwrite_existing_file = False self.__is_binary = is_binary def read(self): if self.__content is None: raise LibraryEnvError( reports.file_does_not_exist(self.__file_role) ) return self.__content @property def exists(self): #the file is considered to exist once it has been written: this is #symmetrical with RealFile return self.__content is not None def remove(self, silence_no_existence): raise AssertionError("Remove GhostFile is not supported.") def
write(self, content, file_operation=None): """ callable file_operation is there only for RealFile compatible interface it has no efect """ self.__content = content def assert_no_conflict_with_existing( self, report_processor, can_overwrite_existing=False ): self.__no_existing_file_expected = True self.__can_overwrite_existing_file = can_overwrite_existing def export(self): return { "content": self.__content, "no_existing_file_expected": self.__no_existing_file_expected, "can_overwrite_existing_file": self.__can_overwrite_existing_file, "is_binary": self.__is_binary, } class RealFile(object): is_live = True def __init__(self, file_role, file_path, is_binary=False): self.__file_role = file_role self.__file_path = file_path self.__is_binary=is_binary def assert_no_conflict_with_existing( self, report_processor, can_overwrite_existing=False ): if self.exists: report_processor.process(reports.file_already_exists( self.__file_role, self.__file_path, ReportItemSeverity.WARNING if can_overwrite_existing else ReportItemSeverity.ERROR, forceable=None if can_overwrite_existing else report_codes.FORCE_FILE_OVERWRITE, )) @property def exists(self): return os.path.exists(self.__file_path) def write(self, content, file_operation=None): """ callable file_operation takes path and proces operation on it e.g. chmod """ mode = "wb" if self.__is_binary else "w" try: with open(self.__file_path, mode) as config_file: config_file.write(content) if file_operation: file_operation(self.__file_path) except EnvironmentError as e: raise self.__report_io_error(e, "write") def read(self): try: mode = "rb" if self.__is_binary else "r" with open(self.__file_path, mode) as file: return file.read() except EnvironmentError as e: raise self.__report_io_error(e, "read") def remove(self, silence_no_existence=False): if self.exists: try: os.remove(self.__file_path) except EnvironmentError as e: raise self.__report_io_error(e, "remove") elif not silence_no_existence: raise LibraryError(reports.file_io_error( self.__file_role, file_path=self.__file_path, operation="remove", reason="File does not exist" )) def __report_io_error(self, e, operation): return LibraryError(reports.file_io_error( self.__file_role, file_path=self.__file_path, operation=operation, reason=format_environment_error(e) )) pcs-0.9.164/pcs/lib/env_tools.py000066400000000000000000000015451326265502500164100ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.lib.cib.resource import remote_node, guest_node from pcs.lib.xml_tools import get_root from pcs.lib.node import NodeAddressesList def get_nodes(corosync_conf=None, tree=None): return NodeAddressesList( ( corosync_conf.get_nodes() if corosync_conf else NodeAddressesList([]) ) + ( get_nodes_remote(tree) if tree is not None else NodeAddressesList([]) ) + ( get_nodes_guest(tree) if tree is not None else NodeAddressesList([]) ) ) def get_nodes_remote(tree): return NodeAddressesList(remote_node.find_node_list(get_root(tree))) def get_nodes_guest(tree): return NodeAddressesList(guest_node.find_node_list(get_root(tree))) pcs-0.9.164/pcs/lib/errors.py000066400000000000000000000042361326265502500157140ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) class LibraryError(Exception): pass class LibraryEnvError(LibraryError): def __init__(self, *args, **kwargs): super(LibraryEnvError, self).__init__(*args, **kwargs) self.processed = [] def sign_processed(self, report): self.processed.append(report) @property def 
unprocessed(self): return [report for report in self.args if report not in self.processed] class ReportItemSeverity(object): ERROR = 'ERROR' WARNING = 'WARNING' INFO = 'INFO' DEBUG = 'DEBUG' class ReportItem(object): @classmethod def error(cls, code, **kwargs): return cls(code, ReportItemSeverity.ERROR, **kwargs) @classmethod def warning(cls, code, **kwargs): return cls(code, ReportItemSeverity.WARNING, **kwargs) @classmethod def info(cls, code, **kwargs): return cls(code, ReportItemSeverity.INFO, **kwargs) @classmethod def debug(cls, code, **kwargs): return cls(code, ReportItemSeverity.DEBUG, **kwargs) def __init__( self, code, severity, forceable=None, info=None ): self.code = code self.severity = severity self.forceable = forceable self.info = info if info else dict() def __repr__(self): return "{severity} {code}: {info} forceable: {forceable}".format( severity=self.severity, code=self.code, info=self.info, forceable=self.forceable, ) class ReportListAnalyzer(object): def __init__(self, report_list): self.__error_list = None self.__report_list = report_list def reports_with_severities(self, severity_list): return [ report_item for report_item in self.report_list if report_item.severity in severity_list ] @property def report_list(self): return self.__report_list @property def error_list(self): if self.__error_list is None: self.__error_list = self.reports_with_severities( [ReportItemSeverity.ERROR] ) return self.__error_list pcs-0.9.164/pcs/lib/exchange_formats.md000066400000000000000000000021621326265502500176610ustar00rootroot00000000000000Exchange formats ================ Library exchanges with the client data in the formats described below. Resource set ------------ Dictionary with keys are "options" and "ids". On the key "options" is a dictionary of resource set options. On the key "ids" is a list of resource id. ```python { "options": {"id": "id"}, "ids": ["resourceA", "resourceB"], } ``` Constraint ---------- When constraint is plain (without resource sets) there is only dictionary with constraint options. ```python {"id": "id", "rsc": "resourceA"} ``` When is constraint with resource sets there is dictionary with keys "resource_sets" and "options". On the key "options" is a dictionary of constraint options. On the key "resource_sets" is a dictionary of resource sets (see Resource set). ```python { "options": {"id": "id"}, "resource_sets": {"options": {"id": "id"}, "ids": ["resourceA", "resourceB"]}, } ``` Resource operation interval duplication --------------------------------------- Dictionary. Key is operation name. Value is list of list of interval. 
```python { "monitor": [ ["3600s", "60m", "1h"], ["60s", "1m"], ], }, ``` pcs-0.9.164/pcs/lib/external.py000066400000000000000000000555551326265502500162340ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import base64 import io import json import os try: # python 2 from pipes import quote as shell_quote except ImportError: # python 3 from shlex import quote as shell_quote import re import signal import subprocess import sys try: # python2 from urllib import urlencode as urllib_urlencode except ImportError: # python3 from urllib.parse import urlencode as urllib_urlencode from pcs import settings from pcs.common import pcs_pycurl as pycurl from pcs.common.tools import ( join_multilines, simple_cache, ) from pcs.lib import reports from pcs.lib.errors import LibraryError, ReportItemSeverity _chkconfig = settings.chkconfig_binary _service = settings.service_binary _systemctl = settings.systemctl_binary class ManageServiceError(Exception): #pylint: disable=super-init-not-called def __init__(self, service, message=None, instance=None): self.service = service self.message = message self.instance = instance class DisableServiceError(ManageServiceError): pass class EnableServiceError(ManageServiceError): pass class StartServiceError(ManageServiceError): pass class StopServiceError(ManageServiceError): pass class KillServicesError(ManageServiceError): pass def is_dir_nonempty(path): if not os.path.exists(path): return False if not os.path.isdir(path): return True return len(os.listdir(path)) > 0 def _get_service_name(service, instance=None): return "{0}{1}.service".format( service, "" if instance is None else "@{0}".format(instance) ) def ensure_is_systemd(): """ Ensure if current system is systemd system. Raises Library error if not. """ if not is_systemctl(): raise LibraryError( reports.unsupported_operation_on_non_systemd_systems() ) @simple_cache def is_systemctl(): """ Check whenever is local system running on systemd. Returns True if current system is systemctl compatible, False otherwise. """ systemd_paths = [ '/run/systemd/system', '/var/run/systemd/system', ] for path in systemd_paths: if os.path.isdir(path): return True return False def disable_service(runner, service, instance=None): """ Disable specified service in local system. Raise DisableServiceError or LibraryError on failure. runner -- CommandRunner service -- name of service instance -- instance name, it ha no effect on not systemd systems. If None no instance name will be used. """ if not is_service_installed(runner, service, instance): return if is_systemctl(): stdout, stderr, retval = runner.run([ _systemctl, "disable", _get_service_name(service, instance) ]) else: stdout, stderr, retval = runner.run([_chkconfig, service, "off"]) if retval != 0: raise DisableServiceError( service, join_multilines([stderr, stdout]), instance ) def enable_service(runner, service, instance=None): """ Enable specified service in local system. Raise EnableServiceError or LibraryError on failure. runner -- CommandRunner service -- name of service instance -- instance name, it ha no effect on not systemd systems. If None no instance name will be used. 
""" if is_systemctl(): stdout, stderr, retval = runner.run([ _systemctl, "enable", _get_service_name(service, instance) ]) else: stdout, stderr, retval = runner.run([_chkconfig, service, "on"]) if retval != 0: raise EnableServiceError( service, join_multilines([stderr, stdout]), instance ) def start_service(runner, service, instance=None): """ Start specified service in local system CommandRunner runner string service service name string instance instance name, it ha no effect on not systemd systems. If None no instance name will be used. """ if is_systemctl(): stdout, stderr, retval = runner.run([ _systemctl, "start", _get_service_name(service, instance) ]) else: stdout, stderr, retval = runner.run([_service, service, "start"]) if retval != 0: raise StartServiceError( service, join_multilines([stderr, stdout]), instance ) def stop_service(runner, service, instance=None): """ Stop specified service in local system CommandRunner runner string service service name string instance instance name, it ha no effect on not systemd systems. If None no instance name will be used. """ if is_systemctl(): stdout, stderr, retval = runner.run([ _systemctl, "stop", _get_service_name(service, instance) ]) else: stdout, stderr, retval = runner.run([_service, service, "stop"]) if retval != 0: raise StopServiceError( service, join_multilines([stderr, stdout]), instance ) def kill_services(runner, services): """ Kill specified services in local system CommandRunner runner iterable services service names """ # make killall not report that a process is not running stdout, stderr, retval = runner.run( ["killall", "--quiet", "--signal", "9", "--"] + list(services) ) # If a process isn't running, killall will still return 1 even with --quiet. # We don't consider that an error, so we check for output string as well. # If it's empty, no actuall error happened. if retval != 0: message = join_multilines([stderr, stdout]) if message: raise KillServicesError(list(services), message) def is_service_enabled(runner, service, instance=None): """ Check if specified service is enabled in local system. runner -- CommandRunner service -- name of service """ if is_systemctl(): dummy_stdout, dummy_stderr, retval = runner.run( [_systemctl, "is-enabled", _get_service_name(service, instance)] ) else: dummy_stdout, dummy_stderr, retval = runner.run([_chkconfig, service]) return retval == 0 def is_service_running(runner, service, instance=None): """ Check if specified service is currently running on local system. runner -- CommandRunner service -- name of service """ if is_systemctl(): dummy_stdout, dummy_stderr, retval = runner.run([ _systemctl, "is-active", _get_service_name(service, instance) ]) else: dummy_stdout, dummy_stderr, retval = runner.run( [_service, service, "status"] ) return retval == 0 def is_service_installed(runner, service, instance=None): """ Check if specified service is installed on local system. runner -- CommandRunner service -- name of service instance -- systemd service instance """ if not is_systemctl(): return service in get_non_systemd_services(runner) service_name = "{0}{1}".format(service, "" if instance is None else "@") return service_name in get_systemd_services(runner) def get_non_systemd_services(runner): """ Returns list of all installed services on non systemd system. 
runner -- CommandRunner """ if is_systemctl(): return [] stdout, dummy_stderr, return_code = runner.run([_chkconfig]) if return_code != 0: return [] service_list = [] for service in stdout.splitlines(): service = service.split(" ", 1)[0] if service: service_list.append(service) return service_list def get_systemd_services(runner): """ Returns list of all systemd services installed on local system. runner -- CommandRunner """ if not is_systemctl(): return [] stdout, dummy_stderr, return_code = runner.run([ _systemctl, "list-unit-files", "--full" ]) if return_code != 0: return [] service_list = [] for service in stdout.splitlines(): match = re.search(r'^([\S]*)\.service', service) if match: service_list.append(match.group(1)) return service_list def is_cman_cluster(runner): """ Detect if underlaying locally installed cluster is CMAN based """ # Checking corosync version works in most cases and supports non-rhel # distributions as well as running (manually compiled) corosync2 on rhel6. # - corosync2 does not support cman at all # - corosync1 runs with cman on rhel6 # - corosync1 can be used without cman, but we don't support it anyways # - corosync2 is the default result if errors occur stdout, dummy_stderr, retval = runner.run([ os.path.join(settings.corosync_binaries, "corosync"), "-v" ]) if retval != 0: return False match = re.search(r"version\D+(\d+)", stdout) return match is not None and match.group(1) == "1" def is_proxy_set(env_dict): """ Returns True whenever any of proxy environment variables (https_proxy, HTTPS_PROXY, all_proxy, ALL_PROXY) are set in env_dict. False otherwise. env_dict -- environment variables in dict """ proxy_list = ["https_proxy", "all_proxy"] for var in proxy_list + [v.upper() for v in proxy_list]: if env_dict.get(var, "") != "": return True return False class CommandRunner(object): def __init__(self, logger, reporter, env_vars=None): self._logger = logger self._reporter = reporter # Reset environment variables by empty dict is desired here. We need # to get rid of defaults - we do not know the context and environment # where the library runs. We also get rid of PATH settings, so all # executables must be specified with full path unless the PATH variable # is set from outside. self._env_vars = env_vars if env_vars else dict() self._python2 = (sys.version_info.major == 2) @property def env_vars(self): return self._env_vars.copy() def run( self, args, stdin_string=None, env_extend=None, binary_output=False ): # Allow overriding default settings. If a piece of code really wants to # set own PATH or CIB_file, we must allow it. I.e. it wants to run # a pacemaker tool on a CIB in a file but cannot afford the risk of # changing the CIB in the file specified by the user. 
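# A minimal illustration of that precedence (hypothetical values, not part
# of this module): a runner constructed with {"CIB_file": "/tmp/cib.xml"}
# and then called as
#     runner.run([crm_verify_path], env_extend={"CIB_file": "/dev/null"})
# executes the child process with CIB_file=/dev/null, because env_extend
# is merged over self._env_vars below and therefore wins on key conflicts.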
env_vars = self._env_vars.copy() env_vars.update( dict(env_extend) if env_extend else dict() ) log_args = " ".join([shell_quote(x) for x in args]) self._logger.debug( "Running: {args}\nEnvironment:{env_vars}{stdin_string}".format( args=log_args, stdin_string=("" if not stdin_string else ( "\n--Debug Input Start--\n{0}\n--Debug Input End--" .format(stdin_string) )), env_vars=("" if not env_vars else ( "\n" + "\n".join([ " {0}={1}".format(key, val) for key, val in sorted(env_vars.items()) ]) )) ) ) self._reporter.process( reports.run_external_process_started( log_args, stdin_string, env_vars ) ) try: process = subprocess.Popen( args, # Some commands react differently if they get anything via stdin stdin=(subprocess.PIPE if stdin_string is not None else None), stdout=subprocess.PIPE, stderr=subprocess.PIPE, preexec_fn=( lambda: signal.signal(signal.SIGPIPE, signal.SIG_DFL) ), close_fds=True, shell=False, env=env_vars, # decodes newlines and in python3 also converts bytes to str universal_newlines=(not self._python2 and not binary_output) ) out_std, out_err = process.communicate(stdin_string) retval = process.returncode except OSError as e: raise LibraryError( reports.run_external_process_error(log_args, e.strerror) ) self._logger.debug( ( "Finished running: {args}\nReturn value: {retval}" + "\n--Debug Stdout Start--\n{out_std}\n--Debug Stdout End--" + "\n--Debug Stderr Start--\n{out_err}\n--Debug Stderr End--" ).format( args=log_args, retval=retval, out_std=out_std, out_err=out_err ) ) self._reporter.process(reports.run_external_process_finished( log_args, retval, out_std, out_err )) return out_std, out_err, retval # deprecated class NodeCommunicationException(Exception): # pylint: disable=super-init-not-called def __init__(self, node, command, reason): self.node = node self.command = command self.reason = reason # deprecated class NodeConnectionException(NodeCommunicationException): pass # deprecated class NodeAuthenticationException(NodeCommunicationException): pass # deprecated class NodePermissionDeniedException(NodeCommunicationException): pass # deprecated class NodeCommandUnsuccessfulException(NodeCommunicationException): pass # deprecated class NodeUnsupportedCommandException(NodeCommunicationException): pass # deprecated class NodeConnectionTimedOutException(NodeCommunicationException): pass # deprecated def node_communicator_exception_to_report_item( e, severity=ReportItemSeverity.ERROR, forceable=None ): """ Transform NodeCommunicationException to ReportItem """ if isinstance(e, NodeCommandUnsuccessfulException): return reports.node_communication_command_unsuccessful( e.node, e.command, e.reason ) exception_to_report = { NodeAuthenticationException: reports.node_communication_error_not_authorized, NodePermissionDeniedException: reports.node_communication_error_permission_denied, NodeUnsupportedCommandException: reports.node_communication_error_unsupported_command, NodeCommunicationException: reports.node_communication_error_other_error, NodeConnectionException: reports.node_communication_error_unable_to_connect, NodeConnectionTimedOutException: reports.node_communication_error_timed_out, } if e.__class__ in exception_to_report: return exception_to_report[e.__class__]( e.node, e.command, e.reason, severity, forceable ) raise e # deprecated, use pcs.common.node_communicator.Communicator class NodeCommunicator(object): """ Sends requests to nodes """ @classmethod def format_data_dict(cls, data): """ Encode data for transport (only plain dict is supported) """ return 
urllib_urlencode(data) @classmethod def format_data_json(cls, data): """ Encode data for transport (more complex data than in format_data_dict) """ return json.dumps(data) def __init__( self, logger, reporter, auth_tokens, user=None, groups=None, request_timeout=None ): """ auth_tokens authorization tokens for nodes: {node: token} user username groups groups the user is member of request_timeout -- positive integer, time for one reqest in seconds """ self._logger = logger self._reporter = reporter self._auth_tokens = auth_tokens self._user = user self._groups = groups self._request_timeout = request_timeout @property def request_timeout(self): return ( settings.default_request_timeout if self._request_timeout is None else self._request_timeout ) def call_node(self, node_addr, request, data, request_timeout=None): """ Send a request to a node node_addr destination node, instance of NodeAddresses request command to be run on the node data command parameters, encoded by format_data_* method """ return self.call_host(node_addr.ring0, request, data, request_timeout) def call_host(self, host, request, data, request_timeout=None): """ Send a request to a host host host address request command to be run on the host data command parameters, encoded by format_data_* method request timeout float timeout for request, if not set object property will be used """ def __debug_callback(data_type, debug_data): prefixes = { pycurl.DEBUG_TEXT: b"* ", pycurl.DEBUG_HEADER_IN: b"< ", pycurl.DEBUG_HEADER_OUT: b"> ", pycurl.DEBUG_DATA_IN: b"<< ", pycurl.DEBUG_DATA_OUT: b">> ", } if data_type in prefixes: debug_output.write(prefixes[data_type]) debug_output.write(debug_data) if not debug_data.endswith(b"\n"): debug_output.write(b"\n") output = io.BytesIO() debug_output = io.BytesIO() cookies = self.__prepare_cookies(host) timeout = ( request_timeout if request_timeout is not None else self.request_timeout ) url = "https://{host}:2224/{request}".format( host=("[{0}]".format(host) if ":" in host else host), request=request ) handler = pycurl.Curl() handler.setopt(pycurl.PROTOCOLS, pycurl.PROTO_HTTPS) handler.setopt(pycurl.TIMEOUT_MS, int(timeout * 1000)) handler.setopt(pycurl.URL, url.encode("utf-8")) handler.setopt(pycurl.WRITEFUNCTION, output.write) handler.setopt(pycurl.VERBOSE, 1) handler.setopt(pycurl.DEBUGFUNCTION, __debug_callback) handler.setopt(pycurl.SSL_VERIFYHOST, 0) handler.setopt(pycurl.SSL_VERIFYPEER, 0) handler.setopt(pycurl.NOSIGNAL, 1) # required for multi-threading if cookies: handler.setopt(pycurl.COOKIE, ";".join(cookies).encode("utf-8")) if data: handler.setopt(pycurl.COPYPOSTFIELDS, data.encode("utf-8")) msg = "Sending HTTP Request to: {url}" if data: msg += "\n--Debug Input Start--\n{data}\n--Debug Input End--" self._logger.debug(msg.format(url=url, data=data)) self._reporter.process( reports.node_communication_started(url, data) ) result_msg = ( "Finished calling: {url}\nResponse Code: {code}" + "\n--Debug Response Start--\n{response}\n--Debug Response End--" ) try: handler.perform() response_data = output.getvalue().decode("utf-8") response_code = handler.getinfo(pycurl.RESPONSE_CODE) self._logger.debug(result_msg.format( url=url, code=response_code, response=response_data )) self._reporter.process(reports.node_communication_finished( url, response_code, response_data )) if response_code == 400: # old pcsd protocol: error messages are commonly passed in plain # text in response body with HTTP code 400 # we need to be backward compatible with that raise NodeCommandUnsuccessfulException( host, 
request, response_data.rstrip() ) elif response_code == 401: raise NodeAuthenticationException( host, request, "HTTP error: {0}".format(response_code) ) elif response_code == 403: raise NodePermissionDeniedException( host, request, "HTTP error: {0}".format(response_code) ) elif response_code == 404: raise NodeUnsupportedCommandException( host, request, "HTTP error: {0}".format(response_code) ) elif response_code >= 400: raise NodeCommunicationException( host, request, "HTTP error: {0}".format(response_code) ) return response_data except pycurl.error as e: # In pycurl versions lower then 7.19.3 it is not possible to set # NOPROXY option. Therefore for the proper support of proxy settings # we have to use environment variables. if is_proxy_set(os.environ): self._logger.warning("Proxy is set") self._reporter.process( reports.node_communication_proxy_is_set() ) errno, reason = e.args msg = "Unable to connect to {node} ({reason})" self._logger.debug(msg.format(node=host, reason=reason)) self._reporter.process( reports.node_communication_not_connected(host, reason) ) if errno == pycurl.E_OPERATION_TIMEDOUT: raise NodeConnectionTimedOutException(host, request, reason) else: raise NodeConnectionException(host, request, reason) finally: debug_data = debug_output.getvalue().decode("utf-8", "ignore") self._logger.debug( ( "Communication debug info for calling: {url}\n" "--Debug Communication Info Start--\n" "{data}\n" "--Debug Communication Info End--" ).format(url=url, data=debug_data) ) self._reporter.process( reports.node_communication_debug_info(url, debug_data) ) def __prepare_cookies(self, host): # Let's be safe about characters in variables (they can come from env) # and do base64. We cannot do it for CIB_user however to be backward # compatible so we at least remove disallowed characters. 
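# Sketch of the result (hypothetical values): for user "fred" in groups
# ["haclient", "wheel"] with a known auth token, the list built below is
# roughly
#     ["token=abc123", "CIB_user=fred",
#      "CIB_user_groups=aGFjbGllbnQgd2hlZWw="]
# where the last value is base64("haclient wheel").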
cookies = [] if host in self._auth_tokens: cookies.append("token={0}".format(self._auth_tokens[host])) if self._user: cookies.append("CIB_user={0}".format( re.sub(r"[^!-~]", "", self._user).replace(";", "") )) if self._groups: cookies.append("CIB_user_groups={0}".format( # python3 requires the value to be bytes not str base64.b64encode( " ".join(self._groups).encode("utf-8") ).decode("utf-8") )) return cookies pcs-0.9.164/pcs/lib/node.py000066400000000000000000000052651326265502500153300ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) class NodeNotFound(Exception): pass def node_addresses_contain_host(node_addresses_list, host): return ( host in [node.ring0 for node in node_addresses_list] or host in [node.ring1 for node in node_addresses_list if node.ring1] ) def node_addresses_contain_name(node_addresses_list, name): return name in [node.name for node in node_addresses_list] class NodeAddresses(object): def __init__(self, ring0, ring1=None, name=None, id=None): self._ring0 = ring0 self._ring1 = ring1 self._name = name self._id = id def __hash__(self): return hash(self.label) def __eq__(self, other): return self.label == other.label def __ne__(self, other): return not (self == other) def __lt__(self, other): return self.label < other.label def __repr__(self): #the "dict" with name and id is "written" inside string because in #python3 the order is not return str("<{0}.{1} {2}, {{'name': {3}, 'id': {4}}}>").format( self.__module__, self.__class__.__name__, repr( [self.ring0] if self.ring1 is None else [self.ring0, self.ring1] ), repr(self.name), repr(self.id), ) @property def ring0(self): return self._ring0 @property def ring1(self): return self._ring1 @property def name(self): return self._name @property def id(self): return self._id @property def label(self): return self.name if self.name else self.ring0 class NodeAddressesList(object): def __init__(self, node_addrs_list=None): self._list = [] if node_addrs_list: for node_addr in node_addrs_list: self._list.append(node_addr) def append(self, item): self._list.append(item) def __len__(self): return self._list.__len__() def __getitem__(self, key): return self._list.__getitem__(key) def __iter__(self): return self._list.__iter__() def __reversed__(self): return self._list.__reversed__() def __add__(self, other): if isinstance(other, NodeAddressesList): return NodeAddressesList(self._list + other._list) #Suppose that the other is a list. If it is not a list it correctly #raises. 
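# Sketch of both supported operand types (hypothetical node addresses):
#     NodeAddressesList([a]) + NodeAddressesList([b])  # -> contains a, b
#     NodeAddressesList([a]) + [b]                     # -> contains a, b
# Any other right-hand operand raises TypeError from the list
# concatenation below.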
return NodeAddressesList(self._list + other) def find_by_label(self, label): for node in self._list: if node.label == label: return node raise NodeNotFound() pcs-0.9.164/pcs/lib/node_communication.py000066400000000000000000000141741326265502500202540ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import os from pcs.common import pcs_pycurl as pycurl from pcs.common.node_communicator import CommunicatorLoggerInterface from pcs.lib.errors import ReportItemSeverity from pcs.lib import reports class LibCommunicatorLogger(CommunicatorLoggerInterface): def __init__(self, logger, reporter): self._logger = logger self._reporter = reporter def log_request_start(self, request): msg = "Sending HTTP Request to: {url}" if request.data: msg += "\n--Debug Input Start--\n{data}\n--Debug Input End--" self._logger.debug( msg.format(url=request.url, data=request.data) ) self._reporter.process( reports.node_communication_started(request.url, request.data) ) def log_response(self, response): if response.was_connected: self._log_response_successful(response) else: self._log_response_failure(response) self._log_debug(response) def _log_response_successful(self, response): url = response.request.url msg = ( "Finished calling: {url}\nResponse Code: {code}" + "\n--Debug Response Start--\n{response}\n--Debug Response End--" ) self._logger.debug(msg.format( url=url, code=response.response_code, response=response.data )) self._reporter.process(reports.node_communication_finished( url, response.response_code, response.data )) def _log_response_failure(self, response): msg = "Unable to connect to {node} ({reason})" self._logger.debug(msg.format( node=response.request.host, reason=response.error_msg )) self._reporter.process( reports.node_communication_not_connected( response.request.host, response.error_msg ) ) if is_proxy_set(os.environ): self._logger.warning("Proxy is set") self._reporter.process(reports.node_communication_proxy_is_set( response.request.host_label, response.request.host )) def _log_debug(self, response): url = response.request.url debug_data = response.debug self._logger.debug( ( "Communication debug info for calling: {url}\n" "--Debug Communication Info Start--\n" "{data}\n" "--Debug Communication Info End--" ).format(url=url, data=debug_data) ) self._reporter.process( reports.node_communication_debug_info(url, debug_data) ) def log_retry(self, response, previous_host): msg = ( "Unable to connect to '{label}' via address '{old_addr}'. Retrying " "request '{req}' via address '{new_addr}'" ).format( label=response.request.host_label, old_addr=previous_host, new_addr=response.request.host, req=response.request.url, ) self._logger.warning(msg) self._reporter.process(reports.node_communication_retrying( response.request.host_label, previous_host, response.request.host, response.request.url, )) def log_no_more_addresses(self, response): msg = "No more addresses for node {label} to run '{req}'".format( label=response.request.host_label, req=response.request.url, ) self._logger.warning(msg) self._reporter.process(reports.node_communication_no_more_addresses( response.request.host_label, response.request.url )) def response_to_report_item( response, severity=ReportItemSeverity.ERROR, forceable=None ): """ Returns report item which corresponds to response if was not successful. Otherwise returns None. 
Response response -- response from which report item shoculd be created ReportItemseverity severity -- severity of report item string forceable -- force code """ response_code = response.response_code report = None reason = None if response.was_connected: if response_code == 400: # old pcsd protocol: error messages are commonly passed in plain # text in response body with HTTP code 400 # we need to be backward compatible with that report = reports.node_communication_command_unsuccessful reason = response.data.rstrip() elif response_code == 401: report = reports.node_communication_error_not_authorized reason = "HTTP error: {0}".format(response_code) elif response_code == 403: report = reports.node_communication_error_permission_denied reason = "HTTP error: {0}".format(response_code) elif response_code == 404: report = reports.node_communication_error_unsupported_command reason = "HTTP error: {0}".format(response_code) elif response_code >= 400: report = reports.node_communication_error_other_error reason = "HTTP error: {0}".format(response_code) else: if response.errno in [ pycurl.E_OPERATION_TIMEDOUT, pycurl.E_OPERATION_TIMEOUTED ]: report = reports.node_communication_error_timed_out reason = response.error_msg else: report = reports.node_communication_error_unable_to_connect reason = response.error_msg if not report: return None return report( response.request.host, response.request.action, reason, severity, forceable, ) def is_proxy_set(env_dict): """ Returns True whenever any of proxy environment variables (https_proxy, HTTPS_PROXY, all_proxy, ALL_PROXY) are set in env_dict. False otherwise. dict env_dict -- environment variables in dict """ proxy_list = ["https_proxy", "all_proxy"] for var in proxy_list + [v.upper() for v in proxy_list]: if env_dict.get(var, "") != "": return True return False pcs-0.9.164/pcs/lib/node_communication_format.py000066400000000000000000000114401326265502500216150ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from collections import namedtuple from pcs.lib import reports from pcs.lib.errors import LibraryError import base64 def create_pcmk_remote_actions(action_list): return dict([ ( "pacemaker_remote {0}".format(action), service_cmd_format( "pacemaker_remote", action ) ) for action in action_list ]) def pcmk_authkey_format(authkey_content): """ Return a dict usable in the communication with a remote/put_file authkey_content is raw authkey content """ return { "data": base64.b64encode(authkey_content).decode("utf-8"), "type": "pcmk_remote_authkey", "rewrite_existing": True, } def corosync_authkey_format(authkey_content): """ Return a dict usable in the communication with a remote/put_file authkey_content is raw authkey content """ return { "data": base64.b64encode(authkey_content).decode("utf-8"), "type": "corosync_authkey", "rewrite_existing": True, } def pcmk_authkey_file(authkey_content): return { "pacemaker_remote authkey": pcmk_authkey_format(authkey_content) } def corosync_authkey_file(authkey_content): return { "corosync authkey": corosync_authkey_format(authkey_content) } def service_cmd_format(service, command): """ Return a dict usable in the communication with a remote/run_action string service is name of requested service (eg. pacemaker_remote) string command specifies an action on service (eg. 
start) """ return { "type": "service_command", "service": service, "command": command, } class Result(namedtuple("Result", "code message")): """ Wrapper over some call results """ def unpack_items_from_response(main_response, main_key, node_label): """ Check format of main_response and return main_response[main_key]. dict main_response has on the key 'main_key' dict with item name as key and dict with result as value. E.g. { "files": { "file1": {"code": "success", "message": ""} } } string main_key is name of key under that is a dict with results string node_label is a node label for reporting an invalid format """ is_in_expected_format = ( isinstance(main_response, dict) and main_key in main_response and isinstance(main_response[main_key], dict) ) if not is_in_expected_format: raise LibraryError(reports.invalid_response_format(node_label)) return main_response[main_key] def response_items_to_result(response_items, expected_keys, node_label): """ Check format of response_items and return dict where keys are transformed to Result. E.g. {"file1": {"code": "success", "message": ""}} -> {"file1": Result("success", "")}} dict resposne_items has item name as key and dict with result as value. list expected_keys contains expected keys in a dict main_response[main_key] string node_label is a node label for reporting an invalid format """ if set(expected_keys) != set(response_items.keys()): raise LibraryError(reports.invalid_response_format(node_label)) for result in response_items.values(): if( not isinstance(result, dict) or "code" not in result or "message" not in result ): raise LibraryError(reports.invalid_response_format(node_label)) return dict([ ( file_key, Result(raw_result["code"], raw_result["message"]) ) for file_key, raw_result in response_items.items() ]) def response_to_result( main_response, main_key, expected_keys, node_label ): """ Validate response (from remote/put_file or remote/run_action) and transform results from dict to Result. dict main_response has on the key 'main_key' dict with item name as key and dict with result as value. E.g. 
{ "files": { "file1": {"code": "success", "message": ""} } } string main_key is name of key under that is a dict with results list expected_keys contains expected keys in a dict main_response[main_key] string node_label is a node label for reporting an invalid format """ return response_items_to_result( unpack_items_from_response(main_response, main_key, node_label), expected_keys, node_label ) def get_format_result(code_message_map): def format_result(result): if result.code in code_message_map: return code_message_map[result.code] return result.message return format_result pcs-0.9.164/pcs/lib/pacemaker/000077500000000000000000000000001326265502500157515ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/pacemaker/__init__.py000066400000000000000000000000001326265502500200500ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/pacemaker/env.py000066400000000000000000000012301326265502500171070ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.common import env_file_role_codes from pcs.lib.env_file import RealFile from pcs import settings class PacemakerEnv(object): def __init__(self): """ callable get_cib should return cib as lxml tree """ self.__authkey = RealFile( file_role=env_file_role_codes.PACEMAKER_AUTHKEY, file_path=settings.pacemaker_authkey_file, is_binary=True, ) @property def has_authkey(self): return self.__authkey.exists def get_authkey_content(self): return self.__authkey.read() pcs-0.9.164/pcs/lib/pacemaker/live.py000066400000000000000000000275631326265502500172770ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree import os.path from pcs import settings from pcs.common.tools import ( join_multilines, xml_fromstring ) from pcs.lib import reports from pcs.lib.cib.tools import get_pacemaker_version_by_which_cib_was_validated from pcs.lib.errors import LibraryError from pcs.lib.pacemaker.state import ClusterState from pcs.lib.tools import write_tmpfile from pcs.lib.xml_tools import etree_to_str __EXITCODE_WAIT_TIMEOUT = 62 __EXITCODE_CIB_SCOPE_VALID_BUT_NOT_PRESENT = 6 __EXITCODE_CIB_SCHEMA_IS_THE_LATEST_AVAILABLE = 211 __RESOURCE_REFRESH_OPERATION_COUNT_THRESHOLD = 100 class CrmMonErrorException(LibraryError): pass ### status def get_cluster_status_xml(runner): stdout, stderr, retval = runner.run( [__exec("crm_mon"), "--one-shot", "--as-xml", "--inactive"] ) if retval != 0: raise CrmMonErrorException( reports.cluster_state_cannot_load(join_multilines([stderr, stdout])) ) return stdout ### cib def get_cib_xml_cmd_results(runner, scope=None): command = [__exec("cibadmin"), "--local", "--query"] if scope: command.append("--scope={0}".format(scope)) stdout, stderr, returncode = runner.run(command) return stdout, stderr, returncode def get_cib_xml(runner, scope=None): stdout, stderr, retval = get_cib_xml_cmd_results(runner, scope) if retval != 0: if retval == __EXITCODE_CIB_SCOPE_VALID_BUT_NOT_PRESENT and scope: raise LibraryError( reports.cib_load_error_scope_missing( scope, join_multilines([stderr, stdout]) ) ) else: raise LibraryError( reports.cib_load_error(join_multilines([stderr, stdout])) ) return stdout def parse_cib_xml(xml): return xml_fromstring(xml) def get_cib(xml): try: return parse_cib_xml(xml) except (etree.XMLSyntaxError, etree.DocumentInvalid) as e: raise LibraryError(reports.cib_load_error_invalid_format(str(e))) def verify(runner, verbose=False): crm_verify_cmd = [__exec("crm_verify")] if verbose: crm_verify_cmd.append("-V") #With the 
`crm_verify` command it is not possible simply use the environment #variable CIB_file because `crm_verify` simply tries to connect to cib file #via tool that can fail because: Update does not conform to the configured #schema #So we use the explicit flag `--xml-file`. cib_tmp_file = runner.env_vars.get("CIB_file", None) if cib_tmp_file is None: crm_verify_cmd.append("--live-check") else: crm_verify_cmd.extend(["--xml-file", cib_tmp_file]) #the tuple (stdout, stderr, returncode) is returned here return runner.run(crm_verify_cmd) def replace_cib_configuration_xml(runner, xml): cmd = [ __exec("cibadmin"), "--replace", "--verbose", "--xml-pipe", "--scope", "configuration", ] stdout, stderr, retval = runner.run(cmd, stdin_string=xml) if retval != 0: raise LibraryError(reports.cib_push_error(stderr, stdout)) def replace_cib_configuration(runner, tree): return replace_cib_configuration_xml(runner, etree_to_str(tree)) def push_cib_diff_xml(runner, cib_diff_xml): cmd = [ __exec("cibadmin"), "--patch", "--verbose", "--xml-pipe", ] stdout, stderr, retval = runner.run(cmd, stdin_string=cib_diff_xml) if retval != 0: raise LibraryError(reports.cib_push_error(stderr, stdout)) def diff_cibs_xml(runner, reporter, cib_old_xml, cib_new_xml): """ Return xml diff of two CIBs CommandRunner runner string cib_old_xml -- original CIB string cib_new_xml -- modified CIB """ try: cib_old_tmp_file = write_tmpfile(cib_old_xml) reporter.process( reports.tmp_file_write(cib_old_tmp_file.name, cib_old_xml) ) cib_new_tmp_file = write_tmpfile(cib_new_xml) reporter.process( reports.tmp_file_write(cib_new_tmp_file.name, cib_new_xml) ) except EnvironmentError as e: raise LibraryError(reports.cib_save_tmp_error(str(e))) command = [ __exec("crm_diff"), "--original", cib_old_tmp_file.name, "--new", cib_new_tmp_file.name, "--no-version", ] # dummy_retval == 1 means one of two things: # a) an error has occured # b) --original and --new differ # therefore it's of no use to see if an error occurred stdout, stderr, dummy_retval = runner.run(command) if stderr.strip(): raise LibraryError( reports.cib_diff_error(stderr.strip(), cib_old_xml, cib_new_xml) ) return stdout.strip() def ensure_cib_version(runner, cib, version): """ This method ensures that specified cib is verified by pacemaker with version 'version' or newer. If cib doesn't correspond to this version, method will try to upgrade cib. Returns cib which was verified by pacemaker version 'version' or later. Raises LibraryError on any failure. CommandRunner runner -- runner etree cib -- cib tree pcs.common.tools.Version version -- required cib version """ current_version = get_pacemaker_version_by_which_cib_was_validated(cib) if current_version >= version: return None _upgrade_cib(runner) new_cib_xml = get_cib_xml(runner) try: new_cib = parse_cib_xml(new_cib_xml) except (etree.XMLSyntaxError, etree.DocumentInvalid) as e: raise LibraryError(reports.cib_upgrade_failed(str(e))) current_version = get_pacemaker_version_by_which_cib_was_validated(new_cib) if current_version >= version: return new_cib raise LibraryError(reports.unable_to_upgrade_cib_to_required_version( current_version, version )) def _upgrade_cib(runner): """ Upgrade CIB to the latest schema available locally or clusterwise. CommandRunner runner """ stdout, stderr, retval = runner.run( [__exec("cibadmin"), "--upgrade", "--force"] ) # If we are already on the latest schema available, do not consider it an # error. We do not know here what version is required. 
The caller however # knows and is responsible for dealing with it. if retval not in (0, __EXITCODE_CIB_SCHEMA_IS_THE_LATEST_AVAILABLE): raise LibraryError( reports.cib_upgrade_failed(join_multilines([stderr, stdout])) ) ### wait for idle def has_wait_for_idle_support(runner): # returns 1 on success so we don't care about retval stdout, stderr, dummy_retval = runner.run( [__exec("crm_resource"), "-?"] ) # help goes to stderr but we check stdout as well if that gets changed return "--wait" in stderr or "--wait" in stdout def ensure_wait_for_idle_support(runner): if not has_wait_for_idle_support(runner): raise LibraryError(reports.wait_for_idle_not_supported()) def wait_for_idle(runner, timeout=None): """ Run waiting command. Raise LibraryError if command failed. runner is preconfigured object for running external programs string timeout is waiting timeout """ args = [__exec("crm_resource"), "--wait"] if timeout is not None: args.append("--timeout={0}".format(timeout)) stdout, stderr, retval = runner.run(args) if retval != 0: # Usefull info goes to stderr - not only error messages, a list of # pending actions in case of timeout goes there as well. # We use stdout just to be sure if that's get changed. if retval == __EXITCODE_WAIT_TIMEOUT: raise LibraryError( reports.wait_for_idle_timed_out( join_multilines([stderr, stdout]) ) ) else: raise LibraryError( reports.wait_for_idle_error( join_multilines([stderr, stdout]) ) ) ### nodes def get_local_node_name(runner): # It would be possible to run "crm_node --name" to get the name in one call, # but it returns false names when cluster is not running (or we are on # a remote node). Getting node id first is reliable since it fails in those # cases. stdout, dummy_stderr, retval = runner.run( [__exec("crm_node"), "--cluster-id"] ) if retval != 0: raise LibraryError( reports.pacemaker_local_node_name_not_found("node id not found") ) node_id = stdout.strip() stdout, dummy_stderr, retval = runner.run( [__exec("crm_node"), "--name-for-id={0}".format(node_id)] ) if retval != 0: raise LibraryError( reports.pacemaker_local_node_name_not_found("node name not found") ) node_name = stdout.strip() if node_name == "(null)": raise LibraryError( reports.pacemaker_local_node_name_not_found("node name is null") ) return node_name def get_local_node_status(runner): try: cluster_status = ClusterState(get_cluster_status_xml(runner)) except CrmMonErrorException: return {"offline": True} node_name = get_local_node_name(runner) for node_status in cluster_status.node_section.nodes: if node_status.attrs.name == node_name: result = { "offline": False, } for attr in ( 'id', 'name', 'type', 'online', 'standby', 'standby_onfail', 'maintenance', 'pending', 'unclean', 'shutdown', 'expected_up', 'is_dc', 'resources_running', ): result[attr] = getattr(node_status.attrs, attr) return result raise LibraryError(reports.node_not_found(node_name)) def remove_node(runner, node_name): stdout, stderr, retval = runner.run([ __exec("crm_node"), "--force", "--remove", node_name, ]) if retval != 0: raise LibraryError( reports.node_remove_in_pacemaker_failed( node_name, reason=join_multilines([stderr, stdout]) ) ) ### resources def resource_cleanup(runner, resource=None, node=None): cmd = [__exec("crm_resource"), "--cleanup"] if resource: cmd.extend(["--resource", resource]) if node: cmd.extend(["--node", node]) stdout, stderr, retval = runner.run(cmd) if retval != 0: raise LibraryError( reports.resource_cleanup_error( join_multilines([stderr, stdout]), resource, node ) ) # usefull output (what 
has been done) goes to stderr return join_multilines([stdout, stderr]) def resource_refresh(runner, resource=None, node=None, full=False, force=None): if not force and not node and not resource: summary = ClusterState(get_cluster_status_xml(runner)).summary operations = summary.nodes.attrs.count * summary.resources.attrs.count if operations > __RESOURCE_REFRESH_OPERATION_COUNT_THRESHOLD: raise LibraryError( reports.resource_refresh_too_time_consuming( __RESOURCE_REFRESH_OPERATION_COUNT_THRESHOLD ) ) cmd = [__exec("crm_resource"), "--refresh"] if resource: cmd.extend(["--resource", resource]) if node: cmd.extend(["--node", node]) if full: cmd.extend(["--force"]) stdout, stderr, retval = runner.run(cmd) if retval != 0: raise LibraryError( reports.resource_refresh_error( join_multilines([stderr, stdout]), resource, node ) ) # usefull output (what has been done) goes to stderr return join_multilines([stdout, stderr]) ### tools # shortcut for getting a full path to a pacemaker executable def __exec(name): return os.path.join(settings.pacemaker_binaries, name) pcs-0.9.164/pcs/lib/pacemaker/state.py000066400000000000000000000212151326265502500174440ustar00rootroot00000000000000''' The intention is put there knowledge about cluster state structure. Hide information about underlaying xml is desired too. ''' from __future__ import ( absolute_import, division, print_function, ) import os.path from collections import defaultdict from lxml import etree from pcs import settings from pcs.common.tools import xml_fromstring from pcs.lib import reports from pcs.lib.errors import LibraryError, ReportItemSeverity as severities from pcs.lib.pacemaker.values import ( is_false, is_true, ) from pcs.lib.xml_tools import find_parent class ResourceNotFound(Exception): pass class _Attrs(object): def __init__(self, owner_name, attrib, required_attrs): ''' attrib lxml.etree._Attrib - wrapped attribute collection required_attrs dict of required atribute names object_name:xml_attribute ''' self.owner_name = owner_name self.attrib = attrib self.required_attrs = required_attrs def __getattr__(self, name): if name in self.required_attrs.keys(): try: attr_specification = self.required_attrs[name] if isinstance(attr_specification, tuple): attr_name, attr_transform = attr_specification return attr_transform(self.attrib[attr_name]) else: return self.attrib[attr_specification] except KeyError: raise AttributeError( "Missing attribute '{0}' ('{1}' in source) in '{2}'" .format(name, self.required_attrs[name], self.owner_name) ) raise AttributeError( "'{0}' does not declare attribute '{1}'" .format(self.owner_name, name) ) class _Children(object): def __init__(self, owner_name, dom_part, children, sections): self.owner_name = owner_name self.dom_part = dom_part self.children = children self.sections = sections def __getattr__(self, name): if name in self.children.keys(): element_name, wrapper = self.children[name] return [ wrapper(element) for element in self.dom_part.findall('.//' + element_name) ] if name in self.sections.keys(): element_name, wrapper = self.sections[name] return wrapper(self.dom_part.findall('.//' + element_name)[0]) raise AttributeError( "'{0}' does not declare child or section '{1}'" .format(self.owner_name, name) ) class _Element(object): required_attrs = {} children = {} sections = {} def __init__(self, dom_part): self.dom_part = dom_part self.attrs = _Attrs( self.__class__.__name__, self.dom_part.attrib, self.required_attrs ) self.children_access = _Children( self.__class__.__name__, self.dom_part, 
self.children, self.sections, ) def __getattr__(self, name): return getattr(self.children_access, name) class _SummaryNodes(_Element): required_attrs = { 'count': ('number', int), } class _SummaryResources(_Element): required_attrs = { 'count': ('number', int), } class _SummarySection(_Element): sections = { 'nodes': ('nodes_configured', _SummaryNodes), 'resources': ('resources_configured', _SummaryResources), } class _Node(_Element): required_attrs = { 'id': 'id', 'name': 'name', 'type': 'type', 'online': ('online', is_true), 'standby': ('standby', is_true), 'standby_onfail': ('standby_onfail', is_true), 'maintenance': ('maintenance', is_true), 'pending': ('pending', is_true), 'unclean': ('unclean', is_true), 'shutdown': ('shutdown', is_true), 'expected_up': ('expected_up', is_true), 'is_dc': ('is_dc', is_true), 'resources_running': ('resources_running', int), } class _NodeSection(_Element): children = { 'nodes': ('node', _Node), } def get_cluster_state_dom(xml): try: dom = xml_fromstring(xml) if os.path.isfile(settings.crm_mon_schema): etree.RelaxNG(file=settings.crm_mon_schema).assertValid(dom) return dom except (etree.XMLSyntaxError, etree.DocumentInvalid): raise LibraryError(reports.cluster_state_invalid_format()) class ClusterState(_Element): sections = { 'summary': ('summary', _SummarySection), 'node_section': ('nodes', _NodeSection), } def __init__(self, xml): self.dom = get_cluster_state_dom(xml) super(ClusterState, self).__init__(self.dom) def _id_xpath_predicate(resource_id): return """(@id="{0}" or starts-with(@id, "{0}:"))""".format(resource_id) def _get_primitives_for_state_check( cluster_state, resource_id, expected_running ): primitives = cluster_state.xpath(""" .//resource[{predicate_id}] | .//group[{predicate_id}]/resource[{predicate_position}] | .//clone[@id="{id}"]/resource | .//clone[@id="{id}"]/group/resource[{predicate_position}] | .//bundle[@id="{id}"]/replica/resource """.format( id=resource_id, predicate_id=_id_xpath_predicate(resource_id), predicate_position=("last()" if expected_running else "1") )) return [ element for element in primitives if not is_true(element.attrib.get("failed", "")) ] def _get_primitive_roles_with_nodes(primitive_el_list): # Clone resources are represented by multiple primitive elements. 
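# For instance, a promotable clone may appear in the status XML as two
# entries sharing one id (a hypothetical, trimmed sample):
#     <resource id="r0" role="Master" ...><node name="node1"/></resource>
#     <resource id="r0" role="Slave"  ...><node name="node2"/></resource>
# which the loop below folds into {"Master": ["node1"], "Slave": ["node2"]}.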
roles_with_nodes = defaultdict(set) for resource_element in primitive_el_list: if resource_element.attrib["role"] in ["Started", "Master", "Slave"]: roles_with_nodes[resource_element.attrib["role"]].update([ node.attrib["name"] for node in resource_element.findall(".//node") ]) return dict([ (role, sorted(nodes)) for role, nodes in roles_with_nodes.items() ]) def info_resource_state(cluster_state, resource_id): roles_with_nodes = _get_primitive_roles_with_nodes( _get_primitives_for_state_check( cluster_state, resource_id, expected_running=True ) ) if not roles_with_nodes: return reports.resource_does_not_run( resource_id, severities.INFO ) return reports.resource_running_on_nodes( resource_id, roles_with_nodes, severities.INFO ) def ensure_resource_state(expected_running, cluster_state, resource_id): roles_with_nodes = _get_primitive_roles_with_nodes( _get_primitives_for_state_check( cluster_state, resource_id, expected_running ) ) if not roles_with_nodes: return reports.resource_does_not_run( resource_id, severities.INFO if not expected_running else severities.ERROR ) return reports.resource_running_on_nodes( resource_id, roles_with_nodes, severities.INFO if expected_running else severities.ERROR ) def ensure_resource_running(cluster_state, resource_id): return ensure_resource_state( expected_running=True, cluster_state=cluster_state, resource_id=resource_id, ) def is_resource_managed(cluster_state, resource_id): """ Check if the resource is managed etree cluster_state -- status of the cluster string resource_id -- id of the resource """ primitive_list = cluster_state.xpath(""" .//resource[{predicate_id}] | .//group[{predicate_id}]/resource """.format(predicate_id=_id_xpath_predicate(resource_id)) ) if primitive_list: for primitive in primitive_list: if is_false(primitive.attrib.get("managed", "")): return False parent = find_parent(primitive, ["clone", "bundle"]) if ( parent is not None and is_false(parent.attrib.get("managed", "")) ): return False return True parent_list = cluster_state.xpath(""" .//clone[@id="{0}"] | .//bundle[@id="{0}"] """.format(resource_id) ) for parent in parent_list: if is_false(parent.attrib.get("managed", "")): return False for primitive in parent.xpath(".//resource"): if is_false(primitive.attrib.get("managed", "")): return False return True raise ResourceNotFound(resource_id) pcs-0.9.164/pcs/lib/pacemaker/test/000077500000000000000000000000001326265502500167305ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/pacemaker/test/__init__.py000066400000000000000000000000001326265502500210270ustar00rootroot00000000000000pcs-0.9.164/pcs/lib/pacemaker/test/test_live.py000066400000000000000000001051631326265502500213060ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree import os.path from pcs.test.tools.assertions import ( assert_raise_library_error, assert_xml_equal, start_tag_error_text, ) from pcs.test.tools import fixture from pcs.test.tools.command_env import get_env_tools from pcs.test.tools.misc import get_test_resource as rc from pcs.test.tools.pcs_unittest import TestCase, mock from pcs.test.tools.xml import XmlManipulation from pcs import settings from pcs.common import report_codes from pcs.common.tools import Version import pcs.lib.pacemaker.live as lib from pcs.lib.errors import ReportItemSeverity as Severity from pcs.lib.external import CommandRunner def get_runner(stdout="", stderr="", returncode=0, env_vars=None): runner = mock.MagicMock(spec_set=CommandRunner) runner.run.return_value 
= (stdout, stderr, returncode) runner.env_vars = env_vars if env_vars else {} return runner class LibraryPacemakerTest(TestCase): def path(self, name): return os.path.join(settings.pacemaker_binaries, name) def crm_mon_cmd(self): return [self.path("crm_mon"), "--one-shot", "--as-xml", "--inactive"] class LibraryPacemakerNodeStatusTest(LibraryPacemakerTest): def setUp(self): self.status = XmlManipulation.from_file(rc("crm_mon.minimal.xml")) def fixture_get_node_status(self, node_name, node_id): return { "id": node_id, "name": node_name, "type": "member", "online": True, "standby": False, "standby_onfail": False, "maintenance": True, "pending": True, "unclean": False, "shutdown": False, "expected_up": True, "is_dc": True, "resources_running": 7, } def fixture_add_node_status(self, node_attrs): xml_attrs = [] for name, value in node_attrs.items(): if value is True: value = "true" elif value is False: value = "false" xml_attrs.append('{0}="{1}"'.format(name, value)) node_xml = "".format(" ".join(xml_attrs)) self.status.append_to_first_tag_name("nodes", node_xml) class GetClusterStatusXmlTest(LibraryPacemakerTest): def test_success(self): expected_stdout = "" expected_stderr = "" expected_retval = 0 mock_runner = get_runner( expected_stdout, expected_stderr, expected_retval ) real_xml = lib.get_cluster_status_xml(mock_runner) mock_runner.run.assert_called_once_with(self.crm_mon_cmd()) self.assertEqual(expected_stdout, real_xml) def test_error(self): expected_stdout = "some info" expected_stderr = "some error" expected_retval = 1 mock_runner = get_runner( expected_stdout, expected_stderr, expected_retval ) assert_raise_library_error( lambda: lib.get_cluster_status_xml(mock_runner), ( Severity.ERROR, report_codes.CRM_MON_ERROR, { "reason": expected_stderr + "\n" + expected_stdout, } ) ) mock_runner.run.assert_called_once_with(self.crm_mon_cmd()) class GetCibXmlTest(LibraryPacemakerTest): def test_success(self): expected_stdout = "" expected_stderr = "" expected_retval = 0 mock_runner = get_runner( expected_stdout, expected_stderr, expected_retval ) real_xml = lib.get_cib_xml(mock_runner) mock_runner.run.assert_called_once_with( [self.path("cibadmin"), "--local", "--query"] ) self.assertEqual(expected_stdout, real_xml) def test_error(self): expected_stdout = "some info" expected_stderr = "some error" expected_retval = 1 mock_runner = get_runner( expected_stdout, expected_stderr, expected_retval ) assert_raise_library_error( lambda: lib.get_cib_xml(mock_runner), ( Severity.ERROR, report_codes.CIB_LOAD_ERROR, { "reason": expected_stderr + "\n" + expected_stdout, } ) ) mock_runner.run.assert_called_once_with( [self.path("cibadmin"), "--local", "--query"] ) def test_success_scope(self): expected_stdout = "" expected_stderr = "" expected_retval = 0 scope = "test_scope" mock_runner = get_runner( expected_stdout, expected_stderr, expected_retval ) real_xml = lib.get_cib_xml(mock_runner, scope) mock_runner.run.assert_called_once_with( [ self.path("cibadmin"), "--local", "--query", "--scope={0}".format(scope) ] ) self.assertEqual(expected_stdout, real_xml) def test_scope_error(self): expected_stdout = "some info" expected_stderr = "some error" expected_retval = 6 scope = "test_scope" mock_runner = get_runner( expected_stdout, expected_stderr, expected_retval ) assert_raise_library_error( lambda: lib.get_cib_xml(mock_runner, scope=scope), ( Severity.ERROR, report_codes.CIB_LOAD_ERROR_SCOPE_MISSING, { "scope": scope, "reason": expected_stderr + "\n" + expected_stdout, } ) ) 
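# Note: the retval of 6 used above mirrors
# __EXITCODE_CIB_SCOPE_VALID_BUT_NOT_PRESENT in pcs.lib.pacemaker.live,
# i.e. cibadmin's "scope is valid but no such section exists" exit code,
# which get_cib_xml translates into CIB_LOAD_ERROR_SCOPE_MISSING.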
class GetClusterStatusXmlTest(LibraryPacemakerTest):
    def test_success(self):
        expected_stdout = ""
        expected_stderr = ""
        expected_retval = 0
        mock_runner = get_runner(
            expected_stdout, expected_stderr, expected_retval
        )

        real_xml = lib.get_cluster_status_xml(mock_runner)

        mock_runner.run.assert_called_once_with(self.crm_mon_cmd())
        self.assertEqual(expected_stdout, real_xml)

    def test_error(self):
        expected_stdout = "some info"
        expected_stderr = "some error"
        expected_retval = 1
        mock_runner = get_runner(
            expected_stdout, expected_stderr, expected_retval
        )

        assert_raise_library_error(
            lambda: lib.get_cluster_status_xml(mock_runner),
            (
                Severity.ERROR,
                report_codes.CRM_MON_ERROR,
                {
                    "reason": expected_stderr + "\n" + expected_stdout,
                }
            )
        )

        mock_runner.run.assert_called_once_with(self.crm_mon_cmd())

class GetCibXmlTest(LibraryPacemakerTest):
    def test_success(self):
        expected_stdout = ""
        expected_stderr = ""
        expected_retval = 0
        mock_runner = get_runner(
            expected_stdout, expected_stderr, expected_retval
        )

        real_xml = lib.get_cib_xml(mock_runner)

        mock_runner.run.assert_called_once_with(
            [self.path("cibadmin"), "--local", "--query"]
        )
        self.assertEqual(expected_stdout, real_xml)

    def test_error(self):
        expected_stdout = "some info"
        expected_stderr = "some error"
        expected_retval = 1
        mock_runner = get_runner(
            expected_stdout, expected_stderr, expected_retval
        )

        assert_raise_library_error(
            lambda: lib.get_cib_xml(mock_runner),
            (
                Severity.ERROR,
                report_codes.CIB_LOAD_ERROR,
                {
                    "reason": expected_stderr + "\n" + expected_stdout,
                }
            )
        )

        mock_runner.run.assert_called_once_with(
            [self.path("cibadmin"), "--local", "--query"]
        )

    def test_success_scope(self):
        expected_stdout = ""
        expected_stderr = ""
        expected_retval = 0
        scope = "test_scope"
        mock_runner = get_runner(
            expected_stdout, expected_stderr, expected_retval
        )

        real_xml = lib.get_cib_xml(mock_runner, scope)

        mock_runner.run.assert_called_once_with(
            [
                self.path("cibadmin"), "--local", "--query",
                "--scope={0}".format(scope)
            ]
        )
        self.assertEqual(expected_stdout, real_xml)

    def test_scope_error(self):
        expected_stdout = "some info"
        expected_stderr = "some error"
        expected_retval = 6
        scope = "test_scope"
        mock_runner = get_runner(
            expected_stdout, expected_stderr, expected_retval
        )

        assert_raise_library_error(
            lambda: lib.get_cib_xml(mock_runner, scope=scope),
            (
                Severity.ERROR,
                report_codes.CIB_LOAD_ERROR_SCOPE_MISSING,
                {
                    "scope": scope,
                    "reason": expected_stderr + "\n" + expected_stdout,
                }
            )
        )

        mock_runner.run.assert_called_once_with(
            [
                self.path("cibadmin"), "--local", "--query",
                "--scope={0}".format(scope)
            ]
        )

class GetCibTest(LibraryPacemakerTest):
    def test_success(self):
        xml = ""
        assert_xml_equal(xml, str(XmlManipulation((lib.get_cib(xml)))))

    def test_invalid_xml(self):
        xml = ""
        assert_raise_library_error(
            lambda: lib.get_cib(xml),
            (
                Severity.ERROR,
                report_codes.CIB_LOAD_ERROR_BAD_FORMAT,
                {
                }
            )
        )

class Verify(LibraryPacemakerTest):
    def test_run_on_live_cib(self):
        runner = get_runner()
        self.assertEqual(
            lib.verify(runner),
            runner.run.return_value
        )
        runner.run.assert_called_once_with(
            [self.path("crm_verify"), "--live-check"],
        )

    def test_run_on_mocked_cib(self):
        fake_tmp_file = "/fake/tmp/file"
        runner = get_runner(env_vars={"CIB_file": fake_tmp_file})

        self.assertEqual(lib.verify(runner), runner.run.return_value)
        runner.run.assert_called_once_with(
            [self.path("crm_verify"), "--xml-file", fake_tmp_file],
        )

    def test_run_verbose(self):
        runner = get_runner()
        self.assertEqual(
            lib.verify(runner, verbose=True),
            runner.run.return_value
        )
        runner.run.assert_called_once_with(
            [self.path("crm_verify"), "-V", "--live-check"],
        )

class ReplaceCibConfigurationTest(LibraryPacemakerTest):
    def test_success(self):
        xml = ""
        expected_stdout = "expected output"
        expected_stderr = ""
        expected_retval = 0
        mock_runner = get_runner(
            expected_stdout, expected_stderr, expected_retval
        )

        lib.replace_cib_configuration(
            mock_runner,
            XmlManipulation.from_str(xml).tree
        )

        mock_runner.run.assert_called_once_with(
            [
                self.path("cibadmin"), "--replace", "--verbose", "--xml-pipe",
                "--scope", "configuration"
            ],
            stdin_string=xml
        )

    def test_error(self):
        xml = ""
        expected_stdout = "expected output"
        expected_stderr = "expected stderr"
        expected_retval = 1
        mock_runner = get_runner(
            expected_stdout, expected_stderr, expected_retval
        )

        assert_raise_library_error(
            lambda: lib.replace_cib_configuration(
                mock_runner,
                XmlManipulation.from_str(xml).tree
            ),
            (
                Severity.ERROR,
                report_codes.CIB_PUSH_ERROR,
                {
                    "reason": expected_stderr,
                    "pushed_cib": expected_stdout,
                }
            )
        )

        mock_runner.run.assert_called_once_with(
            [
                self.path("cibadmin"), "--replace", "--verbose", "--xml-pipe",
                "--scope", "configuration"
            ],
            stdin_string=xml
        )

class UpgradeCibTest(TestCase):
    def test_success(self):
        mock_runner = get_runner("", "", 0)
        lib._upgrade_cib(mock_runner)
        mock_runner.run.assert_called_once_with(
            ["/usr/sbin/cibadmin", "--upgrade", "--force"]
        )

    def test_error(self):
        error = "Call cib_upgrade failed (-62): Timer expired"
        mock_runner = get_runner("", error, 62)
        assert_raise_library_error(
            lambda: lib._upgrade_cib(mock_runner),
            (
                Severity.ERROR,
                report_codes.CIB_UPGRADE_FAILED,
                {
                    "reason": error,
                }
            )
        )
        mock_runner.run.assert_called_once_with(
            ["/usr/sbin/cibadmin", "--upgrade", "--force"]
        )

    def test_already_at_latest_schema(self):
        error = ("Call cib_upgrade failed (-211): Schema is already "
            "the latest available")
        mock_runner = get_runner("", error, 211)
        lib._upgrade_cib(mock_runner)
        mock_runner.run.assert_called_once_with(
            ["/usr/sbin/cibadmin", "--upgrade", "--force"]
        )

@mock.patch("pcs.lib.pacemaker.live.get_cib_xml")
@mock.patch("pcs.lib.pacemaker.live._upgrade_cib")
class EnsureCibVersionTest(TestCase):
    def setUp(self):
        self.mock_runner = mock.MagicMock(spec_set=CommandRunner)
        self.cib = etree.XML('')

    def test_same_version(self, mock_upgrade, mock_get_cib):
        self.assertTrue(
            lib.ensure_cib_version(
                self.mock_runner, self.cib, Version(2, 3, 4)
            ) is None
        )
        mock_upgrade.assert_not_called()
        mock_get_cib.assert_not_called()

    def test_higher_version(self, mock_upgrade, mock_get_cib):
        self.assertTrue(
            lib.ensure_cib_version(
                self.mock_runner, self.cib, Version(2, 3, 3)
            ) is None
        )
        mock_upgrade.assert_not_called()
        mock_get_cib.assert_not_called()

    def test_upgraded_same_version(self, mock_upgrade, mock_get_cib):
        upgraded_cib = ''
        mock_get_cib.return_value = upgraded_cib
        assert_xml_equal(
            upgraded_cib,
            etree.tostring(
                lib.ensure_cib_version(
                    self.mock_runner, self.cib, Version(2, 3, 5)
                )
            ).decode()
        )
        mock_upgrade.assert_called_once_with(self.mock_runner)
        mock_get_cib.assert_called_once_with(self.mock_runner)

    def test_upgraded_higher_version(self, mock_upgrade, mock_get_cib):
        upgraded_cib = ''
        mock_get_cib.return_value = upgraded_cib
        assert_xml_equal(
            upgraded_cib,
            etree.tostring(
                lib.ensure_cib_version(
                    self.mock_runner, self.cib, Version(2, 3, 5)
                )
            ).decode()
        )
        mock_upgrade.assert_called_once_with(self.mock_runner)
        mock_get_cib.assert_called_once_with(self.mock_runner)

    def test_upgraded_lower_version(self, mock_upgrade, mock_get_cib):
        mock_get_cib.return_value = etree.tostring(self.cib).decode()
        assert_raise_library_error(
            lambda: lib.ensure_cib_version(
                self.mock_runner, self.cib, Version(2, 3, 5)
            ),
            (
                Severity.ERROR,
                report_codes.CIB_UPGRADE_FAILED_TO_MINIMAL_REQUIRED_VERSION,
                {
                    "required_version": "2.3.5",
                    "current_version": "2.3.4"
                }
            )
        )
        mock_upgrade.assert_called_once_with(self.mock_runner)
        mock_get_cib.assert_called_once_with(self.mock_runner)

    def test_cib_parse_error(self, mock_upgrade, mock_get_cib):
        mock_get_cib.return_value = "not xml"
        assert_raise_library_error(
            lambda: lib.ensure_cib_version(
                self.mock_runner, self.cib, Version(2, 3, 5)
            ),
            (
                Severity.ERROR,
                report_codes.CIB_UPGRADE_FAILED,
                {
                    "reason": start_tag_error_text(),
                }
            )
        )
        mock_upgrade.assert_called_once_with(self.mock_runner)
        mock_get_cib.assert_called_once_with(self.mock_runner)
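
# ---------------------------------------------------------------------------
# Editorial note (not part of the original test module): the tests above pin
# down the ensure_cib_version() contract: it returns None when the CIB's
# validate-with schema already satisfies the required version, returns the
# freshly loaded CIB element after a successful upgrade, and reports an error
# when even the upgraded CIB is too old or unparsable. A condensed sketch;
# the CIB string is an assumed minimal document:
def _example_ensure_cib_version_contract():
    cib = etree.XML('<cib validate-with="pacemaker-2.3.4"/>')
    runner = get_runner()
    # schema 2.3.4 already satisfies the required 2.3.3, so no upgrade runs
    assert lib.ensure_cib_version(runner, cib, Version(2, 3, 3)) is None
    runner.run.assert_not_called()
# ---------------------------------------------------------------------------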
class GetLocalNodeStatusTest(LibraryPacemakerNodeStatusTest):
    def test_offline(self):
        expected_stdout = "some info"
        expected_stderr = "some error"
        expected_retval = 1
        mock_runner = get_runner(
            expected_stdout, expected_stderr, expected_retval
        )

        self.assertEqual(
            {"offline": True},
            lib.get_local_node_status(mock_runner)
        )
        mock_runner.run.assert_called_once_with(self.crm_mon_cmd())

    def test_invalid_status(self):
        expected_stdout = "invalid xml"
        expected_stderr = ""
        expected_retval = 0
        mock_runner = get_runner(
            expected_stdout, expected_stderr, expected_retval
        )

        assert_raise_library_error(
            lambda: lib.get_local_node_status(mock_runner),
            (
                Severity.ERROR,
                report_codes.BAD_CLUSTER_STATE_FORMAT,
                {}
            )
        )
        mock_runner.run.assert_called_once_with(self.crm_mon_cmd())

    def test_success(self):
        node_id = "id_1"
        node_name = "name_1"
        node_status = self.fixture_get_node_status(node_name, node_id)
        expected_status = dict(node_status, offline=False)
        self.fixture_add_node_status(
            self.fixture_get_node_status("name_2", "id_2")
        )
        self.fixture_add_node_status(node_status)
        self.fixture_add_node_status(
            self.fixture_get_node_status("name_3", "id_3")
        )

        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        call_list = [
            mock.call(self.crm_mon_cmd()),
            mock.call([self.path("crm_node"), "--cluster-id"]),
            mock.call(
                [self.path("crm_node"), "--name-for-id={0}".format(node_id)]
            ),
        ]
        return_value_list = [
            (str(self.status), "", 0),
            (node_id, "", 0),
            (node_name, "", 0)
        ]
        mock_runner.run.side_effect = return_value_list

        real_status = lib.get_local_node_status(mock_runner)

        self.assertEqual(len(return_value_list), len(call_list))
        self.assertEqual(len(return_value_list), mock_runner.run.call_count)
        mock_runner.run.assert_has_calls(call_list)
        self.assertEqual(expected_status, real_status)
    def test_node_not_in_status(self):
        node_id = "id_1"
        node_name = "name_1"
        node_name_bad = "name_X"
        node_status = self.fixture_get_node_status(node_name, node_id)
        self.fixture_add_node_status(node_status)

        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        call_list = [
            mock.call(self.crm_mon_cmd()),
            mock.call([self.path("crm_node"), "--cluster-id"]),
            mock.call(
                [self.path("crm_node"), "--name-for-id={0}".format(node_id)]
            ),
        ]
        return_value_list = [
            (str(self.status), "", 0),
            (node_id, "", 0),
            (node_name_bad, "", 0)
        ]
        mock_runner.run.side_effect = return_value_list

        assert_raise_library_error(
            lambda: lib.get_local_node_status(mock_runner),
            (
                Severity.ERROR,
                report_codes.NODE_NOT_FOUND,
                {"node": node_name_bad}
            )
        )

        self.assertEqual(len(return_value_list), len(call_list))
        self.assertEqual(len(return_value_list), mock_runner.run.call_count)
        mock_runner.run.assert_has_calls(call_list)

    def test_error_1(self):
        node_id = "id_1"
        node_name = "name_1"
        node_status = self.fixture_get_node_status(node_name, node_id)
        self.fixture_add_node_status(node_status)

        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        call_list = [
            mock.call(self.crm_mon_cmd()),
            mock.call([self.path("crm_node"), "--cluster-id"]),
        ]
        return_value_list = [
            (str(self.status), "", 0),
            ("", "some error", 1),
        ]
        mock_runner.run.side_effect = return_value_list

        assert_raise_library_error(
            lambda: lib.get_local_node_status(mock_runner),
            (
                Severity.ERROR,
                report_codes.PACEMAKER_LOCAL_NODE_NAME_NOT_FOUND,
                {"reason": "node id not found"}
            )
        )

        self.assertEqual(len(return_value_list), len(call_list))
        self.assertEqual(len(return_value_list), mock_runner.run.call_count)
        mock_runner.run.assert_has_calls(call_list)

    def test_error_2(self):
        node_id = "id_1"
        node_name = "name_1"
        node_status = self.fixture_get_node_status(node_name, node_id)
        self.fixture_add_node_status(node_status)

        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        call_list = [
            mock.call(self.crm_mon_cmd()),
            mock.call([self.path("crm_node"), "--cluster-id"]),
            mock.call(
                [self.path("crm_node"), "--name-for-id={0}".format(node_id)]
            ),
        ]
        return_value_list = [
            (str(self.status), "", 0),
            (node_id, "", 0),
            ("", "some error", 1),
        ]
        mock_runner.run.side_effect = return_value_list

        assert_raise_library_error(
            lambda: lib.get_local_node_status(mock_runner),
            (
                Severity.ERROR,
                report_codes.PACEMAKER_LOCAL_NODE_NAME_NOT_FOUND,
                {"reason": "node name not found"}
            )
        )

        self.assertEqual(len(return_value_list), len(call_list))
        self.assertEqual(len(return_value_list), mock_runner.run.call_count)
        mock_runner.run.assert_has_calls(call_list)

    def test_error_3(self):
        node_id = "id_1"
        node_name = "name_1"
        node_status = self.fixture_get_node_status(node_name, node_id)
        self.fixture_add_node_status(node_status)

        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        call_list = [
            mock.call(self.crm_mon_cmd()),
            mock.call([self.path("crm_node"), "--cluster-id"]),
            mock.call(
                [self.path("crm_node"), "--name-for-id={0}".format(node_id)]
            ),
        ]
        return_value_list = [
            (str(self.status), "", 0),
            (node_id, "", 0),
            ("(null)", "", 0),
        ]
        mock_runner.run.side_effect = return_value_list

        assert_raise_library_error(
            lambda: lib.get_local_node_status(mock_runner),
            (
                Severity.ERROR,
                report_codes.PACEMAKER_LOCAL_NODE_NAME_NOT_FOUND,
                {"reason": "node name is null"}
            )
        )

        self.assertEqual(len(return_value_list), len(call_list))
        self.assertEqual(len(return_value_list), mock_runner.run.call_count)
        mock_runner.run.assert_has_calls(call_list)
class RemoveNode(LibraryPacemakerTest):
    def test_success(self):
        mock_runner = get_runner("", "", 0)
        lib.remove_node(
            mock_runner,
            "NODE_NAME"
        )
        mock_runner.run.assert_called_once_with([
            self.path("crm_node"),
            "--force",
            "--remove",
            "NODE_NAME",
        ])

    def test_error(self):
        expected_stderr = "expected stderr"
        mock_runner = get_runner("", expected_stderr, 1)
        assert_raise_library_error(
            lambda: lib.remove_node(mock_runner, "NODE_NAME"),
            (
                Severity.ERROR,
                report_codes.NODE_REMOVE_IN_PACEMAKER_FAILED,
                {
                    "node_name": "NODE_NAME",
                    "reason": expected_stderr,
                }
            )
        )

class ResourceCleanupTest(TestCase):
    def setUp(self):
        self.stdout = "expected output"
        self.stderr = "expected stderr"
        self.resource = "my_resource"
        self.node = "my_node"
        self.env_assist, self.config = get_env_tools(test_case=self)

    def assert_output(self, real_output):
        self.assertEqual(
            self.stdout + "\n" + self.stderr,
            real_output
        )

    def test_basic(self):
        self.config.runner.pcmk.resource_cleanup(
            stdout=self.stdout, stderr=self.stderr
        )
        env = self.env_assist.get_env()
        real_output = lib.resource_cleanup(env.cmd_runner())
        self.assert_output(real_output)

    def test_resource(self):
        self.config.runner.pcmk.resource_cleanup(
            stdout=self.stdout, stderr=self.stderr, resource=self.resource
        )
        env = self.env_assist.get_env()
        real_output = lib.resource_cleanup(
            env.cmd_runner(), resource=self.resource
        )
        self.assert_output(real_output)

    def test_node(self):
        self.config.runner.pcmk.resource_cleanup(
            stdout=self.stdout, stderr=self.stderr, node=self.node
        )
        env = self.env_assist.get_env()
        real_output = lib.resource_cleanup(
            env.cmd_runner(), node=self.node
        )
        self.assert_output(real_output)

    def test_all_options(self):
        self.config.runner.pcmk.resource_cleanup(
            stdout=self.stdout, stderr=self.stderr,
            resource=self.resource, node=self.node
        )
        env = self.env_assist.get_env()
        real_output = lib.resource_cleanup(
            env.cmd_runner(), resource=self.resource, node=self.node
        )
        self.assert_output(real_output)

    def test_error_cleanup(self):
        self.config.runner.pcmk.resource_cleanup(
            stdout=self.stdout, stderr=self.stderr, returncode=1
        )
        env = self.env_assist.get_env()
        self.env_assist.assert_raise_library_error(
            lambda: lib.resource_cleanup(env.cmd_runner()),
            [
                fixture.error(
                    report_codes.RESOURCE_CLEANUP_ERROR,
                    force_code=None,
                    reason=(self.stderr + "\n" + self.stdout)
                )
            ],
            expected_in_processor=False
        )
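
# ---------------------------------------------------------------------------
# Editorial note (not part of the original test module): ResourceCleanupTest
# uses the newer command-environment test tools instead of hand-built mocks.
# The pattern, in outline: get_env_tools() returns an (env_assist, config)
# pair, the expected runner calls are declared up front on `config`, and the
# command under test is then run against env_assist.get_env(). A schematic
# re-statement of test_basic() above:
#
#     env_assist, config = get_env_tools(test_case=self)
#     config.runner.pcmk.resource_cleanup(stdout="...", stderr="...")
#     lib.resource_cleanup(env_assist.get_env().cmd_runner())
#
# If the real calls deviate from the declared ones, the fixture itself fails
# the test.
# ---------------------------------------------------------------------------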
class ResourceRefreshTest(LibraryPacemakerTest):
    def fixture_status_xml(self, nodes, resources):
        xml_man = XmlManipulation.from_file(rc("crm_mon.minimal.xml"))
        doc = xml_man.tree.getroottree()
        doc.find("/summary/nodes_configured").set("number", str(nodes))
        doc.find("/summary/resources_configured").set(
            "number", str(resources)
        )
        return str(XmlManipulation(doc))

    def test_basic(self):
        expected_stdout = "expected output"
        expected_stderr = "expected stderr"
        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        call_list = [
            mock.call(self.crm_mon_cmd()),
            mock.call([self.path("crm_resource"), "--refresh"]),
        ]
        return_value_list = [
            (self.fixture_status_xml(1, 1), "", 0),
            (expected_stdout, expected_stderr, 0),
        ]
        mock_runner.run.side_effect = return_value_list

        real_output = lib.resource_refresh(mock_runner)

        self.assertEqual(len(return_value_list), len(call_list))
        self.assertEqual(len(return_value_list), mock_runner.run.call_count)
        mock_runner.run.assert_has_calls(call_list)
        self.assertEqual(
            expected_stdout + "\n" + expected_stderr,
            real_output
        )

    def test_threshold_exceeded(self):
        mock_runner = get_runner(
            self.fixture_status_xml(1000, 1000), "", 0
        )

        assert_raise_library_error(
            lambda: lib.resource_refresh(mock_runner),
            (
                Severity.ERROR,
                report_codes.RESOURCE_REFRESH_TOO_TIME_CONSUMING,
                {"threshold": 100},
                report_codes.FORCE_LOAD_THRESHOLD
            )
        )

        mock_runner.run.assert_called_once_with(self.crm_mon_cmd())

    def test_threshold_exceeded_forced(self):
        expected_stdout = "expected output"
        expected_stderr = "expected stderr"
        mock_runner = get_runner(expected_stdout, expected_stderr, 0)

        real_output = lib.resource_refresh(mock_runner, force=True)

        mock_runner.run.assert_called_once_with(
            [self.path("crm_resource"), "--refresh"]
        )
        self.assertEqual(
            expected_stdout + "\n" + expected_stderr,
            real_output
        )

    def test_resource(self):
        resource = "test_resource"
        expected_stdout = "expected output"
        expected_stderr = "expected stderr"
        mock_runner = get_runner(expected_stdout, expected_stderr, 0)

        real_output = lib.resource_refresh(mock_runner, resource=resource)

        mock_runner.run.assert_called_once_with(
            [self.path("crm_resource"), "--refresh", "--resource", resource]
        )
        self.assertEqual(
            expected_stdout + "\n" + expected_stderr,
            real_output
        )

    def test_node(self):
        node = "test_node"
        expected_stdout = "expected output"
        expected_stderr = "expected stderr"
        mock_runner = get_runner(expected_stdout, expected_stderr, 0)

        real_output = lib.resource_refresh(mock_runner, node=node)

        mock_runner.run.assert_called_once_with(
            [self.path("crm_resource"), "--refresh", "--node", node]
        )
        self.assertEqual(
            expected_stdout + "\n" + expected_stderr,
            real_output
        )

    def test_full(self):
        expected_stdout = "expected output"
        expected_stderr = "expected stderr"
        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        call_list = [
            mock.call(self.crm_mon_cmd()),
            mock.call([self.path("crm_resource"), "--refresh", "--force"]),
        ]
        return_value_list = [
            (self.fixture_status_xml(1, 1), "", 0),
            (expected_stdout, expected_stderr, 0),
        ]
        mock_runner.run.side_effect = return_value_list

        real_output = lib.resource_refresh(mock_runner, full=True)

        self.assertEqual(len(return_value_list), len(call_list))
        self.assertEqual(len(return_value_list), mock_runner.run.call_count)
        mock_runner.run.assert_has_calls(call_list)
        self.assertEqual(
            expected_stdout + "\n" + expected_stderr,
            real_output
        )

    def test_all_options(self):
        node = "test_node"
        resource = "test_resource"
        expected_stdout = "expected output"
        expected_stderr = "expected stderr"
        mock_runner = get_runner(expected_stdout, expected_stderr, 0)

        real_output = lib.resource_refresh(
            mock_runner, resource=resource, node=node, full=True
        )

        mock_runner.run.assert_called_once_with(
            [
                self.path("crm_resource"), "--refresh",
                "--resource", resource, "--node", node, "--force"
            ]
        )
        self.assertEqual(
            expected_stdout + "\n" + expected_stderr,
            real_output
        )

    def test_error_state(self):
        expected_stdout = "some info"
        expected_stderr = "some error"
        expected_retval = 1
        mock_runner = get_runner(
            expected_stdout, expected_stderr, expected_retval
        )

        assert_raise_library_error(
            lambda: lib.resource_refresh(mock_runner),
            (
                Severity.ERROR,
                report_codes.CRM_MON_ERROR,
                {
                    "reason": expected_stderr + "\n" + expected_stdout,
                }
            )
        )

        mock_runner.run.assert_called_once_with(self.crm_mon_cmd())
    def test_error_refresh(self):
        expected_stdout = "some info"
        expected_stderr = "some error"
        expected_retval = 1
        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        call_list = [
            mock.call(self.crm_mon_cmd()),
            mock.call([self.path("crm_resource"), "--refresh"]),
        ]
        return_value_list = [
            (self.fixture_status_xml(1, 1), "", 0),
            (expected_stdout, expected_stderr, expected_retval),
        ]
        mock_runner.run.side_effect = return_value_list

        assert_raise_library_error(
            lambda: lib.resource_refresh(mock_runner),
            (
                Severity.ERROR,
                report_codes.RESOURCE_REFRESH_ERROR,
                {
                    "reason": expected_stderr + "\n" + expected_stdout,
                }
            )
        )

        self.assertEqual(len(return_value_list), len(call_list))
        self.assertEqual(len(return_value_list), mock_runner.run.call_count)
        mock_runner.run.assert_has_calls(call_list)

class ResourcesWaitingTest(LibraryPacemakerTest):
    def test_has_support(self):
        expected_stdout = ""
        expected_stderr = "something --wait something else"
        expected_retval = 1
        mock_runner = get_runner(
            expected_stdout, expected_stderr, expected_retval
        )

        self.assertTrue(
            lib.has_wait_for_idle_support(mock_runner)
        )
        mock_runner.run.assert_called_once_with(
            [self.path("crm_resource"), "-?"]
        )

    def test_has_support_stdout(self):
        expected_stdout = "something --wait something else"
        expected_stderr = ""
        expected_retval = 1
        mock_runner = get_runner(
            expected_stdout, expected_stderr, expected_retval
        )

        self.assertTrue(
            lib.has_wait_for_idle_support(mock_runner)
        )
        mock_runner.run.assert_called_once_with(
            [self.path("crm_resource"), "-?"]
        )

    def test_doesnt_have_support(self):
        expected_stdout = "something something else"
        expected_stderr = "something something else"
        expected_retval = 1
        mock_runner = get_runner(
            expected_stdout, expected_stderr, expected_retval
        )

        self.assertFalse(
            lib.has_wait_for_idle_support(mock_runner)
        )
        mock_runner.run.assert_called_once_with(
            [self.path("crm_resource"), "-?"]
        )

    @mock.patch(
        "pcs.lib.pacemaker.live.has_wait_for_idle_support",
        autospec=True
    )
    def test_ensure_support_success(self, mock_obj):
        mock_obj.return_value = True
        self.assertEqual(None, lib.ensure_wait_for_idle_support(mock.Mock()))

    @mock.patch(
        "pcs.lib.pacemaker.live.has_wait_for_idle_support",
        autospec=True
    )
    def test_ensure_support_error(self, mock_obj):
        mock_obj.return_value = False
        assert_raise_library_error(
            lambda: lib.ensure_wait_for_idle_support(mock.Mock()),
            (
                Severity.ERROR,
                report_codes.WAIT_FOR_IDLE_NOT_SUPPORTED,
                {}
            )
        )

    def test_wait_success(self):
        expected_stdout = "expected output"
        expected_stderr = "expected stderr"
        expected_retval = 0
        mock_runner = get_runner(
            expected_stdout, expected_stderr, expected_retval
        )

        self.assertEqual(None, lib.wait_for_idle(mock_runner))

        mock_runner.run.assert_called_once_with(
            [self.path("crm_resource"), "--wait"]
        )

    def test_wait_timeout_success(self):
        expected_stdout = "expected output"
        expected_stderr = "expected stderr"
        expected_retval = 0
        timeout = 10
        mock_runner = get_runner(
            expected_stdout, expected_stderr, expected_retval
        )

        self.assertEqual(None, lib.wait_for_idle(mock_runner, timeout))

        mock_runner.run.assert_called_once_with(
            [
                self.path("crm_resource"), "--wait",
                "--timeout={0}".format(timeout)
            ]
        )

    def test_wait_error(self):
        expected_stdout = "some info"
        expected_stderr = "some error"
        expected_retval = 1
        mock_runner = get_runner(
            expected_stdout, expected_stderr, expected_retval
        )

        assert_raise_library_error(
            lambda: lib.wait_for_idle(mock_runner),
            (
                Severity.ERROR,
                report_codes.WAIT_FOR_IDLE_ERROR,
                {
                    "reason": expected_stderr + "\n" + expected_stdout,
                }
            )
        )

        mock_runner.run.assert_called_once_with(
            [self.path("crm_resource"), "--wait"]
        )

    def test_wait_error_timeout(self):
        expected_stdout = "some info"
        expected_stderr = "some error"
        expected_retval = 62
        mock_runner = get_runner(
            expected_stdout, expected_stderr, expected_retval
        )

        assert_raise_library_error(
            lambda: lib.wait_for_idle(mock_runner),
            (
                Severity.ERROR,
                report_codes.WAIT_FOR_IDLE_TIMED_OUT,
                {
                    "reason": expected_stderr + "\n" + expected_stdout,
                }
            )
        )

        mock_runner.run.assert_called_once_with(
            [self.path("crm_resource"), "--wait"]
        )
pcs-0.9.164/pcs/lib/pacemaker/test/test_state.py
from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.test.tools.pcs_unittest import TestCase, mock
from lxml import etree

from pcs.test.tools.assertions import (
    assert_raise_library_error,
    assert_report_item_equal,
)
from pcs.test.tools.misc import get_test_resource as rc
from pcs.test.tools.xml import get_xml_manipulation_creator_from_file

from pcs.lib.pacemaker import state
from pcs.lib.pacemaker.state import (
    ClusterState,
    _Attrs,
    _Children,
)
from pcs.common import report_codes
from pcs.lib.errors import ReportItemSeverity as severities

class AttrsTest(TestCase):
    def test_get_declared_attr(self):
        attrs = _Attrs('test', {'node-name': 'node1'}, {'name': 'node-name'})
        self.assertEqual('node1', attrs.name)

    def test_raises_on_undeclared_attribute(self):
        attrs = _Attrs('test', {'node-name': 'node1'}, {})
        self.assertRaises(AttributeError, lambda: attrs.name)

    def test_raises_on_missing_required_attribute(self):
        attrs = _Attrs('test', {}, {'name': 'node-name'})
        self.assertRaises(AttributeError, lambda: attrs.name)

    def test_attr_transformation_success(self):
        attrs = _Attrs('test', {'number': '7'}, {'count': ('number', int)})
        self.assertEqual(7, attrs.count)

    def test_attr_transformation_fail(self):
        attrs = _Attrs('test', {'number': 'abc'}, {'count': ('number', int)})
        self.assertRaises(ValueError, lambda: attrs.count)
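
# ---------------------------------------------------------------------------
# Editorial note (not part of the original test module): _Attrs maps friendly
# attribute names onto raw XML attribute names, optionally applying a
# transform, exactly as the tests above exercise. A compact sketch of both
# forms in one declaration:
def _example_attrs_sketch():
    attrs = _Attrs(
        'example',
        {'node-name': 'node1', 'number': '7'},
        {'name': 'node-name', 'count': ('number', int)},
    )
    assert attrs.name == 'node1'   # plain rename
    assert attrs.count == 7        # rename plus int() transform
# ---------------------------------------------------------------------------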
class ChildrenTest(TestCase):
    def setUp(self):
        self.dom = etree.fromstring('')

    def wrap(self, element):
        return '{0}.{1}'.format(element.tag, element.attrib['name'])

    def test_get_declared_section(self):
        children = _Children(
            'test', self.dom, {}, {'some_section': ('some', self.wrap)}
        )
        self.assertEqual('some.0', children.some_section)

    def test_get_declared_children(self):
        children = _Children(
            'test', self.dom, {'anys': ('any', self.wrap)}, {}
        )
        self.assertEqual(['any.1', 'any.2'], children.anys)

    def test_raises_on_undeclared_children(self):
        children = _Children('test', self.dom, {}, {})
        self.assertRaises(AttributeError, lambda: children.some_section)

class TestBase(TestCase):
    def setUp(self):
        self.create_covered_status = get_xml_manipulation_creator_from_file(
            rc('crm_mon.minimal.xml')
        )
        self.covered_status = self.create_covered_status()

class ClusterStatusTest(TestBase):
    def test_minimal_crm_mon_is_valid(self):
        ClusterState(str(self.covered_status))

    def test_refuse_invalid_xml(self):
        assert_raise_library_error(
            lambda: ClusterState('invalid xml'),
            (severities.ERROR, report_codes.BAD_CLUSTER_STATE_FORMAT, {})
        )

    def test_refuse_invalid_document(self):
        self.covered_status.append_to_first_tag_name(
            'nodes',
            ''
        )
        assert_raise_library_error(
            lambda: ClusterState(str(self.covered_status)),
            (severities.ERROR, report_codes.BAD_CLUSTER_STATE_FORMAT, {})
        )

class WorkWithClusterStatusNodesTest(TestBase):
    def fixture_node_string(self, **kwargs):
        attrs = dict(name='name', id='id', type='member')
        attrs.update(kwargs)
        return ''''''.format(**attrs)

    def test_can_get_node_names(self):
        self.covered_status.append_to_first_tag_name(
            'nodes',
            self.fixture_node_string(name='node1', id='1'),
            self.fixture_node_string(name='node2', id='2'),
        )
        xml = str(self.covered_status)
        self.assertEqual(
            ['node1', 'node2'],
            [node.attrs.name for node in ClusterState(xml).node_section.nodes]
        )

    def test_can_filter_out_remote_nodes(self):
        self.covered_status.append_to_first_tag_name(
            'nodes',
            self.fixture_node_string(name='node1', id='1'),
            self.fixture_node_string(name='node2', type='remote', id='2'),
        )
        xml = str(self.covered_status)
        self.assertEqual(
            ['node1'],
            [
                node.attrs.name
                for node in ClusterState(xml).node_section.nodes
                if node.attrs.type != 'remote'
            ]
        )

class WorkWithClusterStatusSummaryTest(TestBase):
    def test_nodes_count(self):
        xml = str(self.covered_status)
        self.assertEqual(0, ClusterState(xml).summary.nodes.attrs.count)

    def test_resources_count(self):
        xml = str(self.covered_status)
        self.assertEqual(0, ClusterState(xml).summary.resources.attrs.count)

class GetPrimitiveRolesWithNodes(TestCase):
    def test_success(self):
        primitives_xml = [
            """ """,
            """ """,
            """ """,
            """ """,
            """ """,
            """ """,
        ]
        primitives = [
            etree.fromstring(xml) for xml in primitives_xml
        ]
        self.assertEqual(
            state._get_primitive_roles_with_nodes(primitives),
            {
                "Started": ["node1", "node5"],
                "Master": ["node2"],
                "Slave": ["node3", "node4"]
            }
        )

    def test_empty(self):
        self.assertEqual(
            state._get_primitive_roles_with_nodes([]),
            {
            }
        )
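
# ---------------------------------------------------------------------------
# Editorial note (not part of the original test module): the transformation
# checked above collapses a list of resource status elements into a
# {role: sorted node names} mapping. A standalone sketch with an assumed
# minimal element (only the "role" attribute and nested <node name=""/>
# children matter to the function):
def _example_roles_with_nodes_sketch():
    primitives = [
        etree.fromstring(
            '<resource id="A" role="Started">'
            '<node name="node2"/><node name="node1"/>'
            '</resource>'
        ),
    ]
    assert state._get_primitive_roles_with_nodes(primitives) == {
        "Started": ["node1", "node2"],  # node names come back sorted
    }
# ---------------------------------------------------------------------------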
class GetPrimitivesForStateCheck(TestCase):
    status_xml = etree.fromstring(""" """)

    def setUp(self):
        self.status = etree.parse(rc("crm_mon.minimal.xml")).getroot()
        self.status.append(self.status_xml)
        for resource in self.status.xpath(".//resource"):
            resource.attrib.update({
                "resource_agent": "ocf::pacemaker:Stateful",
                "role": "Started",
                "active": "true",
                "orphaned": "false",
                "blocked": "false",
                "managed": "true",
                "failure_ignored": "false",
                "nodes_running_on": "1",
            })

    def assert_primitives(self, resource_id, primitive_ids, expected_running):
        self.assertEqual(
            [
                elem.attrib["id"]
                for elem in state._get_primitives_for_state_check(
                    self.status, resource_id, expected_running
                )
            ],
            primitive_ids
        )

    def test_missing(self):
        self.assert_primitives("Rxx", [], True)
        self.assert_primitives("Rxx", [], False)

    def test_primitive(self):
        self.assert_primitives("R01", ["R01"], True)
        self.assert_primitives("R01", ["R01"], False)

    def test_primitive_failed(self):
        self.assert_primitives("R02", [], True)
        self.assert_primitives("R02", [], False)

    def test_group(self):
        self.assert_primitives("G1", ["R04"], True)
        self.assert_primitives("G1", ["R03"], False)

    def test_group_failed_primitive(self):
        self.assert_primitives("G2", [], True)
        self.assert_primitives("G2", [], False)

    def test_primitive_in_group(self):
        self.assert_primitives("R03", ["R03"], True)
        self.assert_primitives("R03", ["R03"], False)

    def test_primitive_in_group_failed(self):
        self.assert_primitives("R05", [], True)
        self.assert_primitives("R05", [], False)

    def test_clone(self):
        self.assert_primitives("R07-clone", ["R07", "R07"], True)
        self.assert_primitives("R07-clone", ["R07", "R07"], False)
        self.assert_primitives("R10-clone", ["R10:0", "R10:1"], True)
        self.assert_primitives("R10-clone", ["R10:0", "R10:1"], False)

    def test_clone_partially_failed(self):
        self.assert_primitives("R08-clone", ["R08"], True)
        self.assert_primitives("R08-clone", ["R08"], False)
        self.assert_primitives("R11-clone", ["R11:0"], True)
        self.assert_primitives("R11-clone", ["R11:0"], False)

    def test_clone_failed(self):
        self.assert_primitives("R09-clone", [], True)
        self.assert_primitives("R09-clone", [], False)
        self.assert_primitives("R12-clone", [], True)
        self.assert_primitives("R12-clone", [], False)

    def test_primitive_in_clone(self):
        self.assert_primitives("R07", ["R07", "R07"], True)
        self.assert_primitives("R07", ["R07", "R07"], False)
        self.assert_primitives("R10", ["R10:0", "R10:1"], True)
        self.assert_primitives("R10", ["R10:0", "R10:1"], False)

    def test_primitive_in_clone_partially_failed(self):
        self.assert_primitives("R08", ["R08"], True)
        self.assert_primitives("R08", ["R08"], False)
        self.assert_primitives("R11", ["R11:0"], True)
        self.assert_primitives("R11", ["R11:0"], False)

    def test_primitive_in_clone_failed(self):
        self.assert_primitives("R09", [], True)
        self.assert_primitives("R09", [], False)
        self.assert_primitives("R12", [], True)
        self.assert_primitives("R12", [], False)

    def test_clone_containing_group(self):
        self.assert_primitives("G3-clone", ["R14", "R14"], True)
        self.assert_primitives("G3-clone", ["R13", "R13"], False)
        self.assert_primitives("G6-clone", ["R20:0", "R20:1"], True)
        self.assert_primitives("G6-clone", ["R19:0", "R19:1"], False)

    def test_clone_containing_group_partially_failed(self):
        self.assert_primitives("G4-clone", ["R16"], True)
        self.assert_primitives("G4-clone", ["R15"], False)
        self.assert_primitives("G7-clone", ["R22:1"], True)
        self.assert_primitives("G7-clone", ["R21:1"], False)

    def test_clone_containing_group_failed(self):
        self.assert_primitives("G5-clone", [], True)
        self.assert_primitives("G5-clone", [], False)
        self.assert_primitives("G8-clone", [], True)
        self.assert_primitives("G8-clone", [], False)

    def test_group_in_clone_containing_group(self):
        self.assert_primitives("G3", ["R14", "R14"], True)
        self.assert_primitives("G3", ["R13", "R13"], False)
        self.assert_primitives("G6", ["R20:0", "R20:1"], True)
        self.assert_primitives("G6", ["R19:0", "R19:1"], False)

    def test_group_in_clone_containing_group_partially_failed(self):
        self.assert_primitives("G4", ["R16"], True)
        self.assert_primitives("G4", ["R15"], False)
["R22:1"], True) self.assert_primitives("G7", ["R21:1"], False) def test_group_in_clone_containing_group_failed(self): self.assert_primitives("G5", [], True) self.assert_primitives("G5", [], False) self.assert_primitives("G8", [], True) self.assert_primitives("G8", [], False) def test_primitive_in_clone_containing_group(self): self.assert_primitives("R14", ["R14", "R14"], True) self.assert_primitives("R14", ["R14", "R14"], False) self.assert_primitives("R20", ["R20:0", "R20:1"], True) self.assert_primitives("R20", ["R20:0", "R20:1"], False) def test_primitive_in_clone_containing_group_partially_failed(self): self.assert_primitives("R16", ["R16"], True) self.assert_primitives("R16", ["R16"], False) self.assert_primitives("R22", ["R22:1"], True) self.assert_primitives("R22", ["R22:1"], False) def test_primitive_in_clone_containing_group_failed(self): self.assert_primitives("R18", [], True) self.assert_primitives("R18", [], False) self.assert_primitives("R24", [], True) self.assert_primitives("R24", [], False) def test_bundle(self): self.assert_primitives("B1", ["B1-R1", "B1-R2"], True) self.assert_primitives("B1", ["B1-R1", "B1-R2"], False) self.assert_primitives("B2", ["B2-R2", "B2-R1", "B2-R2"], True) self.assert_primitives("B2", ["B2-R2", "B2-R1", "B2-R2"], False) def test_primitive_in_bundle(self): self.assert_primitives("B1-R1", ["B1-R1"], True) self.assert_primitives("B1-R1", ["B1-R1"], False) self.assert_primitives("B2-R1", ["B2-R1"], True) self.assert_primitives("B2-R1", ["B2-R1"], False) self.assert_primitives("B2-R2", ["B2-R2", "B2-R2"], True) self.assert_primitives("B2-R2", ["B2-R2", "B2-R2"], False) class CommonResourceState(TestCase): resource_id = "R" def setUp(self): self.cluster_state = "state" patcher_primitives = mock.patch( "pcs.lib.pacemaker.state._get_primitives_for_state_check" ) self.addCleanup(patcher_primitives.stop) self.get_primitives_for_state_check = patcher_primitives.start() patcher_roles = mock.patch( "pcs.lib.pacemaker.state._get_primitive_roles_with_nodes" ) self.addCleanup(patcher_roles.stop) self.get_primitive_roles_with_nodes = patcher_roles.start() def fixture_running_state_info(self): return { "Started": ["node1"], "Master": ["node2"], "Slave": ["node3", "node4"], } def fixture_running_report(self, severity): return (severity, report_codes.RESOURCE_RUNNING_ON_NODES, { "resource_id": self.resource_id, "roles_with_nodes": self.fixture_running_state_info(), }) def fixture_not_running_report(self, severity): return (severity, report_codes.RESOURCE_DOES_NOT_RUN, { "resource_id": self.resource_id }) class EnsureResourceState(CommonResourceState): def assert_running_info_transform(self, run_info, report, expected_running): self.get_primitives_for_state_check.return_value = ["elem1", "elem2"] self.get_primitive_roles_with_nodes.return_value = run_info assert_report_item_equal( state.ensure_resource_state( expected_running, self.cluster_state, self.resource_id ), report ) self.get_primitives_for_state_check.assert_called_once_with( self.cluster_state, self.resource_id, expected_running ) self.get_primitive_roles_with_nodes.assert_called_once_with( ["elem1", "elem2"] ) def test_report_info_running(self): self.assert_running_info_transform( self.fixture_running_state_info(), self.fixture_running_report(severities.INFO), expected_running=True, ) def test_report_error_running(self): self.assert_running_info_transform( self.fixture_running_state_info(), self.fixture_running_report(severities.ERROR), expected_running=False, ) def test_report_error_not_running(self): 
    def test_report_error_not_running(self):
        self.assert_running_info_transform(
            [],
            self.fixture_not_running_report(severities.ERROR),
            expected_running=True,
        )

    def test_report_info_not_running(self):
        self.assert_running_info_transform(
            [],
            self.fixture_not_running_report(severities.INFO),
            expected_running=False,
        )

class InfoResourceState(CommonResourceState):
    def assert_running_info_transform(self, run_info, report):
        self.get_primitives_for_state_check.return_value = ["elem1", "elem2"]
        self.get_primitive_roles_with_nodes.return_value = run_info
        assert_report_item_equal(
            state.info_resource_state(self.cluster_state, self.resource_id),
            report
        )
        self.get_primitives_for_state_check.assert_called_once_with(
            self.cluster_state, self.resource_id, expected_running=True
        )
        self.get_primitive_roles_with_nodes.assert_called_once_with(
            ["elem1", "elem2"]
        )

    def test_report_info_running(self):
        self.assert_running_info_transform(
            self.fixture_running_state_info(),
            self.fixture_running_report(severities.INFO)
        )

    def test_report_info_not_running(self):
        self.assert_running_info_transform(
            [],
            self.fixture_not_running_report(severities.INFO)
        )

class IsResourceManaged(TestCase):
    status_xml = etree.fromstring(""" """)

    def setUp(self):
        self.status = etree.parse(rc("crm_mon.minimal.xml")).getroot()
        self.status.append(self.status_xml)
        for resource in self.status.xpath(".//resource"):
            resource.attrib.update({
                "resource_agent": "ocf::pacemaker:Stateful",
                "role": "Started",
                "active": "true",
                "orphaned": "false",
                "blocked": "false",
                "failed": "false",
                "failure_ignored": "false",
                "nodes_running_on": "1",
            })

    def assert_managed(self, resource, managed):
        self.assertEqual(
            managed,
            state.is_resource_managed(self.status, resource)
        )

    def test_missing(self):
        self.assertRaises(
            state.ResourceNotFound,
            self.assert_managed, "Rxx", True
        )

    def test_primitive(self):
        self.assert_managed("R01", True)
        self.assert_managed("R02", False)

    def test_group(self):
        self.assert_managed("G1", True)
        self.assert_managed("G2", False)
        self.assert_managed("G3", False)
        self.assert_managed("G4", False)

    def test_primitive_in_group(self):
        self.assert_managed("R03", True)
        self.assert_managed("R04", True)
        self.assert_managed("R05", False)
        self.assert_managed("R06", True)
        self.assert_managed("R07", True)
        self.assert_managed("R08", False)
        self.assert_managed("R09", False)
        self.assert_managed("R10", False)

    def test_clone(self):
        self.assert_managed("R11-clone", True)
        self.assert_managed("R12-clone", False)
        self.assert_managed("R13-clone", False)
        self.assert_managed("R14-clone", False)
        self.assert_managed("R15-clone", True)
        self.assert_managed("R16-clone", False)
        self.assert_managed("R17-clone", False)
        self.assert_managed("R18-clone", False)

    def test_primitive_in_clone(self):
        self.assert_managed("R11", True)
        self.assert_managed("R12", False)
        self.assert_managed("R13", False)
        self.assert_managed("R14", False)

    def test_primitive_in_unique_clone(self):
        self.assert_managed("R15", True)
        self.assert_managed("R16", False)
        self.assert_managed("R17", False)
        self.assert_managed("R18", False)

    def test_clone_containing_group(self):
        self.assert_managed("G5-clone", True)
        self.assert_managed("G6-clone", False)
        self.assert_managed("G7-clone", False)
        self.assert_managed("G8-clone", False)
        self.assert_managed("G9-clone", False)
        self.assert_managed("G10-clone", True)
        self.assert_managed("G11-clone", False)
        self.assert_managed("G12-clone", False)
        self.assert_managed("G13-clone", False)
        self.assert_managed("G14-clone", False)

    def test_group_in_clone(self):
        self.assert_managed("G5", True)
        self.assert_managed("G6", False)
self.assert_managed("G7", False) self.assert_managed("G8", False) self.assert_managed("G9", False) def test_group_in_unique_clone(self): self.assert_managed("G10", True) self.assert_managed("G11", False) self.assert_managed("G12", False) self.assert_managed("G13", False) self.assert_managed("G14", False) def test_primitive_in_group_in_clone(self): self.assert_managed("R19", True) self.assert_managed("R20", True) self.assert_managed("R21", False) self.assert_managed("R22", False) self.assert_managed("R23", False) self.assert_managed("R24", True) self.assert_managed("R25", True) self.assert_managed("R26", False) self.assert_managed("R27", False) self.assert_managed("R28", False) def test_primitive_in_group_in_unique_clone(self): self.assert_managed("R29", True) self.assert_managed("R30", True) self.assert_managed("R31", False) self.assert_managed("R32", False) self.assert_managed("R33", False) self.assert_managed("R34", True) self.assert_managed("R35", True) self.assert_managed("R36", False) self.assert_managed("R37", False) self.assert_managed("R38", False) def test_bundle(self): self.assert_managed("B1", True) self.assert_managed("B2", False) self.assert_managed("B3", True) self.assert_managed("B4", False) self.assert_managed("B5", False) self.assert_managed("B6", False) self.assert_managed("B7", False) def test_primitive_in_bundle(self): self.assert_managed("R39", True) self.assert_managed("R40", True) self.assert_managed("R41", False) self.assert_managed("R42", False) self.assert_managed("R43", False) self.assert_managed("R44", True) self.assert_managed("R45", True) self.assert_managed("R46", False) self.assert_managed("R47", False) self.assert_managed("R48", False) pcs-0.9.164/pcs/lib/pacemaker/test/test_values.py000066400000000000000000000243661326265502500216530ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.test.tools.assertions import assert_raise_library_error from pcs.common import report_codes from pcs.lib.errors import ReportItemSeverity as severity import pcs.lib.pacemaker.values as lib class BooleanTest(TestCase): def test_true_is_true(self): self.assertTrue(lib.is_true("true")) self.assertTrue(lib.is_true("tRue")) self.assertTrue(lib.is_true("on")) self.assertTrue(lib.is_true("ON")) self.assertTrue(lib.is_true("yes")) self.assertTrue(lib.is_true("yeS")) self.assertTrue(lib.is_true("y")) self.assertTrue(lib.is_true("Y")) self.assertTrue(lib.is_true("1")) def test_nontrue_is_not_true(self): self.assertFalse(lib.is_true("")) self.assertFalse(lib.is_true(" 1 ")) self.assertFalse(lib.is_true("a")) self.assertFalse(lib.is_true("2")) self.assertFalse(lib.is_true("10")) self.assertFalse(lib.is_true("yes please")) def test_true_is_boolean(self): self.assertTrue(lib.is_boolean("true")) self.assertTrue(lib.is_boolean("tRue")) self.assertTrue(lib.is_boolean("on")) self.assertTrue(lib.is_boolean("ON")) self.assertTrue(lib.is_boolean("yes")) self.assertTrue(lib.is_boolean("yeS")) self.assertTrue(lib.is_boolean("y")) self.assertTrue(lib.is_boolean("Y")) self.assertTrue(lib.is_boolean("1")) def test_false_is_false(self): self.assertTrue(lib.is_false("false")) self.assertTrue(lib.is_false("faLse")) self.assertTrue(lib.is_false("off")) self.assertTrue(lib.is_false("OFF")) self.assertTrue(lib.is_false("no")) self.assertTrue(lib.is_false("nO")) self.assertTrue(lib.is_false("n")) self.assertTrue(lib.is_false("N")) self.assertTrue(lib.is_false("0")) def test_nonfalse_is_not_false(self): 
self.assertFalse(lib.is_false("")) self.assertFalse(lib.is_false(" 0 ")) self.assertFalse(lib.is_false("x")) self.assertFalse(lib.is_false("-1")) self.assertFalse(lib.is_false("10")) self.assertFalse(lib.is_false("heck no")) def test_false_is_boolean(self): self.assertTrue(lib.is_boolean("false")) self.assertTrue(lib.is_boolean("fAlse")) self.assertTrue(lib.is_boolean("off")) self.assertTrue(lib.is_boolean("oFf")) self.assertTrue(lib.is_boolean("no")) self.assertTrue(lib.is_boolean("nO")) self.assertTrue(lib.is_boolean("n")) self.assertTrue(lib.is_boolean("N")) self.assertTrue(lib.is_boolean("0")) def test_nonboolean_is_not_boolean(self): self.assertFalse(lib.is_boolean("")) self.assertFalse(lib.is_boolean("a")) self.assertFalse(lib.is_boolean("2")) self.assertFalse(lib.is_boolean("10")) self.assertFalse(lib.is_boolean("yes please")) self.assertFalse(lib.is_boolean(" y")) self.assertFalse(lib.is_boolean("n ")) self.assertFalse(lib.is_boolean("NO!")) class TimeoutTest(TestCase): def test_valid(self): self.assertEqual(10, lib.timeout_to_seconds(10)) self.assertEqual(10, lib.timeout_to_seconds("10")) self.assertEqual(10, lib.timeout_to_seconds("10s")) self.assertEqual(10, lib.timeout_to_seconds("10sec")) self.assertEqual(600, lib.timeout_to_seconds("10m")) self.assertEqual(600, lib.timeout_to_seconds("10min")) self.assertEqual(36000, lib.timeout_to_seconds("10h")) self.assertEqual(36000, lib.timeout_to_seconds("10hr")) def test_invalid(self): self.assertEqual(None, lib.timeout_to_seconds(-10)) self.assertEqual(None, lib.timeout_to_seconds("1a1s")) self.assertEqual(None, lib.timeout_to_seconds("10mm")) self.assertEqual(None, lib.timeout_to_seconds("10mim")) self.assertEqual(None, lib.timeout_to_seconds("aaa")) self.assertEqual(None, lib.timeout_to_seconds("")) self.assertEqual(-10, lib.timeout_to_seconds(-10, True)) self.assertEqual("1a1s", lib.timeout_to_seconds("1a1s", True)) self.assertEqual("10mm", lib.timeout_to_seconds("10mm", True)) self.assertEqual("10mim", lib.timeout_to_seconds("10mim", True)) self.assertEqual("aaa", lib.timeout_to_seconds("aaa", True)) self.assertEqual("", lib.timeout_to_seconds("", True)) class ValidateIdTest(TestCase): def test_valid(self): self.assertEqual(None, lib.validate_id("dummy")) self.assertEqual(None, lib.validate_id("DUMMY")) self.assertEqual(None, lib.validate_id("dUmMy")) self.assertEqual(None, lib.validate_id("dummy0")) self.assertEqual(None, lib.validate_id("dum0my")) self.assertEqual(None, lib.validate_id("dummy-")) self.assertEqual(None, lib.validate_id("dum-my")) self.assertEqual(None, lib.validate_id("dummy.")) self.assertEqual(None, lib.validate_id("dum.my")) self.assertEqual(None, lib.validate_id("_dummy")) self.assertEqual(None, lib.validate_id("dummy_")) self.assertEqual(None, lib.validate_id("dum_my")) def test_invalid_empty(self): assert_raise_library_error( lambda: lib.validate_id("", "test id"), ( severity.ERROR, report_codes.EMPTY_ID, { "id": "", "id_description": "test id", } ) ) def test_invalid_first_character(self): desc = "test id" info = { "id": "", "id_description": desc, "invalid_character": "", "is_first_char": True, } report = (severity.ERROR, report_codes.INVALID_ID, info) info["id"] = "0" info["invalid_character"] = "0" assert_raise_library_error( lambda: lib.validate_id("0", desc), report ) info["id"] = "-" info["invalid_character"] = "-" assert_raise_library_error( lambda: lib.validate_id("-", desc), report ) info["id"] = "." info["invalid_character"] = "." 
        assert_raise_library_error(
            lambda: lib.validate_id(".", desc),
            report
        )

        info["id"] = ":"
        info["invalid_character"] = ":"
        assert_raise_library_error(
            lambda: lib.validate_id(":", desc),
            report
        )

        info["id"] = "0dummy"
        info["invalid_character"] = "0"
        assert_raise_library_error(
            lambda: lib.validate_id("0dummy", desc),
            report
        )

        info["id"] = "-dummy"
        info["invalid_character"] = "-"
        assert_raise_library_error(
            lambda: lib.validate_id("-dummy", desc),
            report
        )

        info["id"] = ".dummy"
        info["invalid_character"] = "."
        assert_raise_library_error(
            lambda: lib.validate_id(".dummy", desc),
            report
        )

        info["id"] = ":dummy"
        info["invalid_character"] = ":"
        assert_raise_library_error(
            lambda: lib.validate_id(":dummy", desc),
            report
        )

    def test_invalid_character(self):
        desc = "test id"
        info = {
            "id": "",
            "id_description": desc,
            "invalid_character": "",
            "is_first_char": False,
        }
        report = (severity.ERROR, report_codes.INVALID_ID, info)

        info["id"] = "dum:my"
        info["invalid_character"] = ":"
        assert_raise_library_error(
            lambda: lib.validate_id("dum:my", desc),
            report
        )

        info["id"] = "dummy:"
        info["invalid_character"] = ":"
        assert_raise_library_error(
            lambda: lib.validate_id("dummy:", desc),
            report
        )

        info["id"] = "dum?my"
        info["invalid_character"] = "?"
        assert_raise_library_error(
            lambda: lib.validate_id("dum?my", desc),
            report
        )

        info["id"] = "dummy?"
        info["invalid_character"] = "?"
        assert_raise_library_error(
            lambda: lib.validate_id("dummy?", desc),
            report
        )

class SanitizeId(TestCase):
    def test_dont_change_valid_id(self):
        self.assertEqual("d", lib.sanitize_id("d"))
        self.assertEqual("dummy", lib.sanitize_id("dummy"))
        self.assertEqual("dum0my", lib.sanitize_id("dum0my"))
        self.assertEqual("dum-my", lib.sanitize_id("dum-my"))
        self.assertEqual("dum.my", lib.sanitize_id("dum.my"))
        self.assertEqual("dum_my", lib.sanitize_id("dum_my"))
        self.assertEqual("_dummy", lib.sanitize_id("_dummy"))

    def test_empty(self):
        self.assertEqual("", lib.sanitize_id(""))

    def test_invalid_id(self):
        self.assertEqual("", lib.sanitize_id("0"))
        self.assertEqual("", lib.sanitize_id("-"))
        self.assertEqual("", lib.sanitize_id("."))
        self.assertEqual("", lib.sanitize_id(":", "_"))

        self.assertEqual("dummy", lib.sanitize_id("0dummy"))
        self.assertEqual("dummy", lib.sanitize_id("-dummy"))
        self.assertEqual("dummy", lib.sanitize_id(".dummy"))
        self.assertEqual("dummy", lib.sanitize_id(":dummy", "_"))

        self.assertEqual("dummy", lib.sanitize_id("dum:my"))
        self.assertEqual("dum_my", lib.sanitize_id("dum:my", "_"))

class IsScoreValueTest(TestCase):
    def test_returns_true_for_number(self):
        self.assertTrue(lib.is_score("1"))

    def test_returns_true_for_minus_number(self):
        self.assertTrue(lib.is_score("-1"))

    def test_returns_true_for_plus_number(self):
        self.assertTrue(lib.is_score("+1"))

    def test_returns_true_for_infinity(self):
        self.assertTrue(lib.is_score("INFINITY"))

    def test_returns_true_for_minus_infinity(self):
        self.assertTrue(lib.is_score("-INFINITY"))

    def test_returns_true_for_plus_infinity(self):
        self.assertTrue(lib.is_score("+INFINITY"))

    def test_returns_false_for_nonumber_noinfinity(self):
        self.assertFalse(lib.is_score("something else"))

    def test_returns_false_for_multiple_operators(self):
        self.assertFalse(lib.is_score("++INFINITY"))
pcs-0.9.164/pcs/lib/pacemaker/values.py
from __future__ import (
    absolute_import,
    division,
    print_function,
)

import re

from pcs.lib import reports
from pcs.lib.errors import LibraryError

_BOOLEAN_TRUE = frozenset(["true", "on", "yes", "y", "1"])
_BOOLEAN_FALSE = frozenset(["false", "off", "no", "n", "0"])
_BOOLEAN = _BOOLEAN_TRUE | _BOOLEAN_FALSE
_ID_FIRST_CHAR_NOT_RE = re.compile("[^a-zA-Z_]")
_ID_REST_CHARS_NOT_RE = re.compile("[^a-zA-Z0-9_.-]")
SCORE_INFINITY = "INFINITY"

def is_boolean(val):
    """
    Does pacemaker consider a value to be a boolean?
    See crm_is_true in pacemaker/lib/common/utils.c

    val checked value
    """
    return val.lower() in _BOOLEAN

def is_true(val):
    """
    Does pacemaker consider a value to be true?
    See crm_is_true in pacemaker/lib/common/utils.c

    val checked value
    """
    return val.lower() in _BOOLEAN_TRUE

def is_false(val):
    """
    Does pacemaker consider a value to be false?
    See crm_is_true in pacemaker/lib/common/utils.c

    val checked value
    """
    return val.lower() in _BOOLEAN_FALSE

def is_score(value):
    if not value:
        return False
    unsigned_value = value[1:] if value[0] in ("+", "-") else value
    return unsigned_value == SCORE_INFINITY or unsigned_value.isdigit()

def timeout_to_seconds(timeout, return_unknown=False):
    """
    Transform a pacemaker style timeout to a number of seconds

    timeout timeout string
    return_unknown if timeout is not valid then return None on False or
        timeout on True (default False)
    """
    try:
        candidate = int(timeout)
        if candidate >= 0:
            return candidate
        return timeout if return_unknown else None
    except ValueError:
        pass
    # now we know the timeout is not an integer nor an integer string
    suffix_multiplier = {
        "s": 1,
        "sec": 1,
        "m": 60,
        "min": 60,
        "h": 3600,
        "hr": 3600,
    }
    for suffix, multiplier in suffix_multiplier.items():
        if timeout.endswith(suffix) and timeout[:-len(suffix)].isdigit():
            return int(timeout[:-len(suffix)]) * multiplier
    return timeout if return_unknown else None
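
# ---------------------------------------------------------------------------
# Editorial note (not part of the original module): a few illustrative
# conversions for timeout_to_seconds(), mirroring the pacemaker duration
# suffixes handled above:
def _example_timeout_to_seconds():
    assert timeout_to_seconds("90") == 90
    assert timeout_to_seconds("10min") == 600      # 10 * 60
    assert timeout_to_seconds("2h") == 7200        # 2 * 3600
    assert timeout_to_seconds("bogus") is None
    # with return_unknown=True, the invalid input is passed back unchanged
    assert timeout_to_seconds("bogus", return_unknown=True) == "bogus"
# ---------------------------------------------------------------------------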
def get_valid_timeout_seconds(timeout_candidate):
    """
    Transform a pacemaker style timeout to a number of seconds, raise
    LibraryError on an invalid timeout

    timeout_candidate timeout string or None
    """
    if timeout_candidate is None:
        return None
    wait_timeout = timeout_to_seconds(timeout_candidate)
    if wait_timeout is None:
        raise LibraryError(reports.invalid_timeout(timeout_candidate))
    return wait_timeout

def validate_id(id_candidate, description="id", reporter=None):
    """
    Validate a pacemaker id, raise LibraryError on an invalid id.

    id_candidate id's value
    description id's role description (default "id")
    """
    # see NCName definition
    # http://www.w3.org/TR/REC-xml-names/#NT-NCName
    # http://www.w3.org/TR/REC-xml/#NT-Name
    if len(id_candidate) < 1:
        report = reports.invalid_id_is_empty(id_candidate, description)
        if reporter is not None:
            # we check for None so it works with an empty list as well
            reporter.append(report)
            return
        else:
            raise LibraryError(report)
    if _ID_FIRST_CHAR_NOT_RE.match(id_candidate[0]):
        report = reports.invalid_id_bad_char(
            id_candidate, description, id_candidate[0], True
        )
        if reporter is not None:
            reporter.append(report)
        else:
            raise LibraryError(report)
    for char in id_candidate[1:]:
        if _ID_REST_CHARS_NOT_RE.match(char):
            report = reports.invalid_id_bad_char(
                id_candidate, description, char, False
            )
            if reporter is not None:
                reporter.append(report)
            else:
                raise LibraryError(report)

def sanitize_id(id_candidate, replacement=""):
    if not id_candidate:
        return id_candidate
    return "".join([
        "" if _ID_FIRST_CHAR_NOT_RE.match(id_candidate[0])
            else id_candidate[0],
        _ID_REST_CHARS_NOT_RE.sub(replacement, id_candidate[1:])
    ])
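
# ---------------------------------------------------------------------------
# Editorial note (not part of the original module): validate_id() and
# sanitize_id() implement the XML NCName rules referenced above. An id must
# start with a letter or "_" and may then contain letters, digits, "_", "."
# and "-". A short sketch:
def _example_id_handling():
    validate_id("dummy-id.0")    # valid: no report, returns None
    report_list = []
    validate_id("0wrong:id", reporter=report_list)
    assert len(report_list) == 2  # bad first char plus the bad ":" char
    # sanitize_id drops a bad first character and replaces the other
    # offending characters with the given replacement
    assert sanitize_id("0wrong:id", replacement="_") == "wrong_id"
# ---------------------------------------------------------------------------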
pcs-0.9.164/pcs/lib/reports.py
from __future__ import (
    absolute_import,
    division,
    print_function,
)

from functools import partial

from pcs.common import report_codes
from pcs.lib.errors import ReportItem, ReportItemSeverity

def forceable_error(force_code, report_creator, *args, **kwargs):
    """
    Return ReportItem created by report_creator.

    This is an experimental shortcut for a common pattern. It is intended to
    cooperate with the functions "error" and "warning".

    string force_code is a code for forcing the error
    callable report_creator is a function that produces a ReportItem. It must
        take the parameters forceable (None or a force code) and severity
        (from ReportItemSeverity)
    rest of args are for the report_creator
    """
    return report_creator(
        *args,
        forceable=force_code,
        severity=ReportItemSeverity.ERROR,
        **kwargs
    )

def warning(report_creator, *args, **kwargs):
    """
    Return ReportItem created by report_creator.

    This is an experimental shortcut for a common pattern. It is intended to
    cooperate with the functions "error" and "forceable_error".

    callable report_creator is a function that produces a ReportItem. It must
        take the parameters forceable (None or a force code) and severity
        (from ReportItemSeverity)
    rest of args are for the report_creator
    """
    return report_creator(
        *args,
        forceable=None,
        severity=ReportItemSeverity.WARNING,
        **kwargs
    )

def error(report_creator, *args, **kwargs):
    """
    Return ReportItem created by report_creator.

    This is an experimental shortcut for a common pattern. It is intended to
    cooperate with the functions "forceable_error" and "warning".

    callable report_creator is a function that produces a ReportItem. It must
        take the parameters forceable (None or a force code) and severity
        (from ReportItemSeverity)
    rest of args are for the report_creator
    """
    return report_creator(
        *args,
        forceable=None,
        severity=ReportItemSeverity.ERROR,
        **kwargs
    )

def get_problem_creator(force_code=None, is_forced=False):
    """
    Return a report creator wrapper (forceable_error or warning).

    This is an experimental shortcut for deciding whether a ReportItem will
    be a forceable_error or a warning.

    string force_code is a code for forcing the error. It can be useful to
        prepare it for a whole module by using functools.partial.
    bool is_forced is a flag for selecting the wrapper
    """
    if not force_code:
        return error
    if is_forced:
        return warning
    return partial(forceable_error, force_code)
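
# ---------------------------------------------------------------------------
# Editorial note (not part of the original module): a sketch of how the
# helpers above combine. get_problem_creator() picks between a hard error and
# a forced warning, so a single call site can serve both modes. The force
# code used here is an assumption for illustration; invalid_option_value is
# defined further down in this module and accepts the severity/forceable
# parameters the wrappers pass through:
def _example_problem_creator_sketch():
    creator = get_problem_creator(
        force_code=report_codes.FORCE_OPTIONS, is_forced=False
    )
    report = creator(invalid_option_value, "role", "bogus", ["a", "b"])
    assert report.severity == ReportItemSeverity.ERROR
    # the same call with is_forced=True would produce a WARNING instead
# ---------------------------------------------------------------------------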
""" return ReportItem( report_codes.REQUIRED_OPTION_IS_MISSING, severity, forceable=forceable, info={ "option_names": option_names, "option_type": option_type, } ) def prerequisite_option_is_missing( option_name, prerequisite_name, option_type="", prerequisite_type="" ): """ if the option_name is specified, the prerequisite_option must be specified string option_name -- an option which depends on the prerequisite_option string prerequisite_name -- the prerequisite option string option_type -- describes the option string prerequisite_type -- describes the prerequisite_option """ return ReportItem.error( report_codes.PREREQUISITE_OPTION_IS_MISSING, info={ "option_name": option_name, "option_type": option_type, "prerequisite_name": prerequisite_name, "prerequisite_type": prerequisite_type, } ) def required_option_of_alternatives_is_missing( option_names, option_type=None ): """ at least one option has to be specified iterable option_names -- options from which at least one has to be specified string option_type -- describes the option """ severity = ReportItemSeverity.ERROR forceable = None return ReportItem( report_codes.REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING, severity, forceable=forceable, info={ "option_names": option_names, "option_type": option_type, } ) def invalid_options( option_names, allowed_options, option_type, allowed_option_patterns=None, severity=ReportItemSeverity.ERROR, forceable=None ): """ specified option names are not valid, usualy an error or a warning list option_names -- specified invalid option names list allowed_options -- possible allowed option names string option_type -- describes the option list allowed_option_patterns -- allowed user defind options patterns string severity -- report item severity mixed forceable -- is this report item forceable? by what cathegory? """ return ReportItem( report_codes.INVALID_OPTIONS, severity, forceable, info={ "option_names": option_names, "option_type": option_type, "allowed": sorted(allowed_options), "allowed_patterns": sorted(allowed_option_patterns or []), } ) def invalid_userdefined_options( option_names, allowed_description, option_type, severity=ReportItemSeverity.ERROR, forceable=None ): """ specified option names defined by a user are not valid This is different than invalid_options. In this case, the options are supposed to be defined by a user. This report carries information that the option names do not meet requirements, i.e. contain not allowed characters. Invalid_options is used when the options are predefined by pcs (or underlying tools). list option_names -- specified invalid option names string allowed_description -- describes what option names should look like string option_type -- describes the option string severity -- report item severity mixed forceable -- is this report item forceable? by what cathegory? 
""" return ReportItem( report_codes.INVALID_USERDEFINED_OPTIONS, severity, forceable, info={ "option_names": sorted(option_names), "option_type": option_type, "allowed_description": allowed_description, } ) def invalid_option_type(option_name, allowed_types): """ specified value is not of a valid type for the option string option_name -- option name whose value is not of a valid type list|string allowed_types -- list of allowed types or string description """ return ReportItem.error( report_codes.INVALID_OPTION_TYPE, info={ "option_name": option_name, "allowed_types": allowed_types, }, ) def invalid_option_value( option_name, option_value, allowed_values, severity=ReportItemSeverity.ERROR, forceable=None ): """ specified value is not valid for the option, usualy an error or a warning option_name specified option name whose value is not valid option_value specified value which is not valid allowed_options list of allowed values or string description severity report item severity forceable is this report item forceable? by what cathegory? """ return ReportItem( report_codes.INVALID_OPTION_VALUE, severity, info={ "option_value": option_value, "option_name": option_name, "allowed_values": allowed_values, }, forceable=forceable ) def deprecated_option( option_name, replaced_by_options, option_type, severity=ReportItemSeverity.ERROR, forceable=None ): """ Specified option name is deprecated and has been replaced by other option(s) string option_name -- the deprecated option iterable or string replaced_by_options -- new option(s) to be used instead string option_type -- option description string severity -- report item severity string forceable -- a category by which the report is forceable """ return ReportItem( report_codes.DEPRECATED_OPTION, severity, info={ "option_name": option_name, "option_type": option_type, "replaced_by": sorted(replaced_by_options), }, forceable=forceable ) def mutually_exclusive_options(option_names, option_type): """ entered options can not coexist set option_names contain entered mutually exclusive options string option_type describes the option """ return ReportItem.error( report_codes.MUTUALLY_EXCLUSIVE_OPTIONS, info={ "option_names": option_names, "option_type": option_type, }, ) def invalid_cib_content(report): """ Given cib content is not valid. string report -- is human readable explanation of a cib invalidity. For example a stderr of `crm_verify`. """ return ReportItem.error( report_codes.INVALID_CIB_CONTENT, info={ "report": report, } ) def invalid_id_is_empty(id, id_description): """ empty string was specified as an id, which is not valid id string specified id id_description string decribe id's role """ return ReportItem.error( report_codes.EMPTY_ID, info={ "id": id, "id_description": id_description, } ) def invalid_id_bad_char(id, id_description, bad_char, is_first_char): """ specified id is not valid as it contains a forbidden character id string specified id id_description string decribe id's role bad_char forbidden character is_first_char is it the first character which is forbidden? """ return ReportItem.error( report_codes.INVALID_ID, info={ "id": id, "id_description": id_description, "is_first_char": is_first_char, "invalid_character": bad_char, } ) def invalid_timeout(timeout): """ specified timeout is not valid (number or other format e.g. 
def invalid_timeout(timeout):
    """
    specified timeout is not valid (a number or another valid format, e.g.
    2min)
    timeout string specified invalid timeout
    """
    return ReportItem.error(
        report_codes.INVALID_TIMEOUT_VALUE,
        info={"timeout": timeout}
    )

def invalid_score(score):
    """
    specified score value is not valid
    score specified score value
    """
    return ReportItem.error(
        report_codes.INVALID_SCORE,
        info={"score": score}
    )

def multiple_score_options():
    """
    more than one of the mutually exclusive score options has been set
    (score, score-attribute, score-attribute-mangle in rules or colocation
    sets)
    """
    return ReportItem.error(report_codes.MULTIPLE_SCORE_OPTIONS)

def run_external_process_started(command, stdin, environment):
    """
    information about running an external process
    command string the external process command
    stdin string passed to the external process via its stdin
    """
    return ReportItem.debug(
        report_codes.RUN_EXTERNAL_PROCESS_STARTED,
        info={
            "command": command,
            "stdin": stdin,
            "environment": environment,
        }
    )

def run_external_process_finished(command, retval, stdout, stderr):
    """
    information about the result of running an external process
    command string the external process command
    retval external process's return (exit) code
    stdout string external process's stdout
    stderr string external process's stderr
    """
    return ReportItem.debug(
        report_codes.RUN_EXTERNAL_PROCESS_FINISHED,
        info={
            "command": command,
            "return_value": retval,
            "stdout": stdout,
            "stderr": stderr,
        }
    )

def run_external_process_error(command, reason):
    """
    attempt to run an external process failed
    command string the external process command
    reason string error description
    """
    return ReportItem.error(
        report_codes.RUN_EXTERNAL_PROCESS_ERROR,
        info={"command": command, "reason": reason}
    )

def node_communication_started(target, data):
    """
    request is about to be sent to a remote node, debug info
    target string where the request is about to be sent to
    data string request's data
    """
    return ReportItem.debug(
        report_codes.NODE_COMMUNICATION_STARTED,
        info={"target": target, "data": data}
    )

def node_communication_finished(target, retval, data):
    """
    remote node request has been finished, debug info
    target string where the request was sent to
    retval response return code
    data response data
    """
    return ReportItem.debug(
        report_codes.NODE_COMMUNICATION_FINISHED,
        info={
            "target": target,
            "response_code": retval,
            "response_data": data,
        }
    )

def node_communication_debug_info(target, data):
    """
    Node communication debug info from pycurl
    """
    return ReportItem.debug(
        report_codes.NODE_COMMUNICATION_DEBUG_INFO,
        info={"target": target, "data": data}
    )

def node_communication_not_connected(node, reason):
    """
    an error occurred when connecting to a remote node, debug info
    node string node address / name
    reason string description of the error
    """
    return ReportItem.debug(
        report_codes.NODE_COMMUNICATION_NOT_CONNECTED,
        info={"node": node, "reason": reason}
    )

def node_communication_no_more_addresses(node, request):
    """
    the request failed and there are no more addresses to try again
    """
    return ReportItem.warning(
        report_codes.NODE_COMMUNICATION_NO_MORE_ADDRESSES,
        info={"node": node, "request": request}
    )

def node_communication_error_not_authorized(
    node, command, reason,
    severity=ReportItemSeverity.ERROR, forceable=None
):
    """
    node rejected a request as we are not authorized
    node string node address / name
    reason string description of the error
    """
    return ReportItem(
        report_codes.NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED,
        severity,
        info={"node": node, "command": command, "reason": reason},
        forceable=forceable
    )

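# Illustrative sketch (not part of upstream pcs): the RUN_EXTERNAL_PROCESS_*
# debug reports above bracket every external command pcs runs. A simplified
# runner could emit them like this; the report_processor interface and the
# string-only stdin handling are assumptions of this example.
import subprocess

def _example_run(report_processor, args, stdin=None, env=None):
    command = " ".join(args)
    report_processor.process(
        run_external_process_started(command, stdin, env or {})
    )
    process = subprocess.Popen(
        args,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        env=env,
    )
    stdout, stderr = process.communicate(stdin)
    report_processor.process(
        run_external_process_finished(
            command, process.returncode, stdout, stderr
        )
    )
    return stdout, stderr, process.returncode
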
def node_communication_error_permission_denied(
    node, command, reason,
    severity=ReportItemSeverity.ERROR, forceable=None
):
    """
    node rejected a request as we do not have permissions to run the request
    node string node address / name
    reason string description of the error
    """
    return ReportItem(
        report_codes.NODE_COMMUNICATION_ERROR_PERMISSION_DENIED,
        severity,
        info={"node": node, "command": command, "reason": reason},
        forceable=forceable
    )

def node_communication_error_unsupported_command(
    node, command, reason,
    severity=ReportItemSeverity.ERROR, forceable=None
):
    """
    node rejected a request as it does not support the request
    node string node address / name
    reason string description of the error
    """
    return ReportItem(
        report_codes.NODE_COMMUNICATION_ERROR_UNSUPPORTED_COMMAND,
        severity,
        info={"node": node, "command": command, "reason": reason},
        forceable=forceable
    )

def node_communication_command_unsuccessful(
    node, command, reason,
    severity=ReportItemSeverity.ERROR, forceable=None
):
    """
    node rejected a request for another reason with a plain text explanation
    node string node address / name
    reason string description of the error
    """
    return ReportItem(
        report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL,
        severity,
        info={"node": node, "command": command, "reason": reason},
        forceable=forceable
    )

def node_communication_error_other_error(
    node, command, reason,
    severity=ReportItemSeverity.ERROR, forceable=None
):
    """
    node rejected a request for another reason (may be a faulty node)
    node string node address / name
    reason string description of the error
    """
    return ReportItem(
        report_codes.NODE_COMMUNICATION_ERROR,
        severity,
        info={"node": node, "command": command, "reason": reason},
        forceable=forceable
    )

def node_communication_error_unable_to_connect(
    node, command, reason,
    severity=ReportItemSeverity.ERROR, forceable=None
):
    """
    we were unable to connect to a node
    node string node address / name
    reason string description of the error
    """
    return ReportItem(
        report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT,
        severity,
        info={"node": node, "command": command, "reason": reason},
        forceable=forceable
    )

def node_communication_error_timed_out(
    node, command, reason,
    severity=ReportItemSeverity.ERROR, forceable=None
):
    """
    Communication with node timed out.
    """
    return ReportItem(
        report_codes.NODE_COMMUNICATION_ERROR_TIMED_OUT,
        severity,
        info={"node": node, "command": command, "reason": reason},
        forceable=forceable
    )

def node_communication_proxy_is_set(node=None, address=None):
    """
    Warning when a connection failed and a proxy is set in the environment
    variables
    """
    return ReportItem.warning(
        report_codes.NODE_COMMUNICATION_PROXY_IS_SET,
        info={"node": node, "address": address}
    )

def node_communication_retrying(node, failed_address, next_address, request):
    """
    The request failed due to a communication error when connecting via the
    specified address, therefore another address is tried if there is any.
    """
    return ReportItem.warning(
        report_codes.NODE_COMMUNICATION_RETRYING,
        info={
            "node": node,
            "failed_address": failed_address,
            "next_address": next_address,
            "request": request,
        }
    )

def cannot_add_node_is_in_cluster(node):
    """
    Node is in a cluster. It is not possible to add it as a new cluster node.
    """
    return ReportItem.error(
        report_codes.CANNOT_ADD_NODE_IS_IN_CLUSTER,
        info={"node": node}
    )

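# Illustrative sketch (not part of upstream pcs): how a communicator might
# map an HTTP response to one of the error reports above. The status-code
# mapping and the function's signature are assumptions of this example.
def _example_response_to_report(node, command, response_code, reason):
    code_to_report = {
        400: node_communication_command_unsuccessful,
        401: node_communication_error_not_authorized,
        403: node_communication_error_permission_denied,
        404: node_communication_error_unsupported_command,
    }
    builder = code_to_report.get(
        response_code, node_communication_error_other_error
    )
    return builder(node, command, reason)
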
def cannot_add_node_is_running_service(node, service):
    """
    Node is running a service. It is not possible to add it as a new cluster
    node.
    string node address of the desired node
    string service name of the service (pacemaker, pacemaker_remote)
    """
    return ReportItem.error(
        report_codes.CANNOT_ADD_NODE_IS_RUNNING_SERVICE,
        info={"node": node, "service": service}
    )

def defaults_can_be_overriden():
    """
    Warning when setting defaults (op_defaults, rsc_defaults...)
    """
    return ReportItem.warning(report_codes.DEFAULTS_CAN_BE_OVERRIDEN)

def corosync_config_distribution_started():
    """corosync configuration is about to be sent to nodes"""
    return ReportItem.info(report_codes.COROSYNC_CONFIG_DISTRIBUTION_STARTED)

def corosync_config_accepted_by_node(node):
    """
    corosync configuration has been accepted by a node
    node string node address / name
    """
    return ReportItem.info(
        report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE,
        info={"node": node}
    )

def corosync_config_distribution_node_error(
    node, severity=ReportItemSeverity.ERROR, forceable=None
):
    """
    a communication error occurred when saving the corosync configuration to
    a node
    node string faulty node address / name
    """
    return ReportItem(
        report_codes.COROSYNC_CONFIG_DISTRIBUTION_NODE_ERROR,
        severity,
        info={"node": node},
        forceable=forceable
    )

def corosync_not_running_check_started():
    """we are about to make sure corosync is not running on nodes"""
    return ReportItem.info(report_codes.COROSYNC_NOT_RUNNING_CHECK_STARTED)

def corosync_not_running_check_node_error(
    node, severity=ReportItemSeverity.ERROR, forceable=None
):
    """
    a communication error occurred when checking that corosync is not running
    on a node
    node string faulty node address / name
    """
    return ReportItem(
        report_codes.COROSYNC_NOT_RUNNING_CHECK_NODE_ERROR,
        severity,
        info={"node": node},
        forceable=forceable
    )

def corosync_not_running_on_node_ok(node):
    """
    corosync is not running on a node, which is ok
    node string node address / name
    """
    return ReportItem.info(
        report_codes.COROSYNC_NOT_RUNNING_ON_NODE,
        info={"node": node}
    )

def corosync_running_on_node_fail(node):
    """
    corosync is running on a node, which is not ok
    node string node address / name
    """
    return ReportItem.error(
        report_codes.COROSYNC_RUNNING_ON_NODE,
        info={"node": node}
    )

def corosync_quorum_get_status_error(reason):
    """
    unable to get the runtime status of quorum on the local node
    string reason an error message
    """
    return ReportItem.error(
        report_codes.COROSYNC_QUORUM_GET_STATUS_ERROR,
        info={"reason": reason}
    )

def corosync_quorum_heuristics_enabled_with_no_exec():
    """
    no exec_ is specified, therefore heuristics are effectively disabled
    """
    return ReportItem.warning(
        report_codes.COROSYNC_QUORUM_HEURISTICS_ENABLED_WITH_NO_EXEC
    )

def corosync_quorum_set_expected_votes_error(reason):
    """
    unable to set expected votes in a live cluster
    string reason an error message
    """
    return ReportItem.error(
        report_codes.COROSYNC_QUORUM_SET_EXPECTED_VOTES_ERROR,
        info={"reason": reason}
    )

def corosync_config_reloaded():
    """corosync configuration has been reloaded"""
    return ReportItem.info(report_codes.COROSYNC_CONFIG_RELOADED)

def corosync_config_reload_error(reason):
    """
    an error occurred when reloading the corosync configuration
    reason string an error message
    """
    return ReportItem.error(
        report_codes.COROSYNC_CONFIG_RELOAD_ERROR,
        info={"reason": reason}
    )

def corosync_config_read_error(path, reason):
    """
    an error occurred when reading the corosync configuration file from disk
    reason string an error message
    """
    return ReportItem.error(
        report_codes.UNABLE_TO_READ_COROSYNC_CONFIG,
        info={"path": path, "reason": reason}
    )

def corosync_config_parser_missing_closing_brace():
    """corosync config cannot be parsed due to a missing closing brace"""
    return ReportItem.error(
        report_codes.PARSE_ERROR_COROSYNC_CONF_MISSING_CLOSING_BRACE
    )

def corosync_config_parser_unexpected_closing_brace():
    """corosync config cannot be parsed due to an unexpected closing brace"""
    return ReportItem.error(
        report_codes.PARSE_ERROR_COROSYNC_CONF_UNEXPECTED_CLOSING_BRACE
    )

def corosync_config_parser_other_error():
    """
    corosync config cannot be parsed, the cause is not specified
    It is better to use a more specific error if possible.
    """
    return ReportItem.error(report_codes.PARSE_ERROR_COROSYNC_CONF)

def corosync_options_incompatible_with_qdevice(options):
    """
    cannot set the specified corosync options when qdevice is in use
    iterable options incompatible option names
    """
    return ReportItem.error(
        report_codes.COROSYNC_OPTIONS_INCOMPATIBLE_WITH_QDEVICE,
        info={"options_names": options}
    )

def qdevice_already_defined():
    """
    qdevice is already set up in a cluster, when it was expected not to be
    """
    return ReportItem.error(report_codes.QDEVICE_ALREADY_DEFINED)

def qdevice_not_defined():
    """
    qdevice is not set up in a cluster, when it was expected to be
    """
    return ReportItem.error(report_codes.QDEVICE_NOT_DEFINED)

def qdevice_remove_or_cluster_stop_needed():
    """
    the operation cannot be executed, qdevice removal or cluster stop is
    needed
    """
    return ReportItem.error(report_codes.QDEVICE_REMOVE_OR_CLUSTER_STOP_NEEDED)

def qdevice_client_reload_started():
    """
    qdevice client configuration is about to be reloaded on nodes
    """
    return ReportItem.info(report_codes.QDEVICE_CLIENT_RELOAD_STARTED)

def qdevice_already_initialized(model):
    """
    cannot create qdevice on the local host, it has already been created
    string model qdevice model
    """
    return ReportItem.error(
        report_codes.QDEVICE_ALREADY_INITIALIZED,
        info={"model": model}
    )

def qdevice_not_initialized(model):
    """
    cannot work with qdevice on the local host, it has not been created yet
    string model qdevice model
    """
    return ReportItem.error(
        report_codes.QDEVICE_NOT_INITIALIZED,
        info={"model": model}
    )

def qdevice_initialization_success(model):
    """
    qdevice was successfully initialized on the local host
    string model qdevice model
    """
    return ReportItem.info(
        report_codes.QDEVICE_INITIALIZATION_SUCCESS,
        info={"model": model}
    )

def qdevice_initialization_error(model, reason):
    """
    an error occurred when creating qdevice on the local host
    string model qdevice model
    string reason an error message
    """
    return ReportItem.error(
        report_codes.QDEVICE_INITIALIZATION_ERROR,
        info={"model": model, "reason": reason}
    )

def qdevice_certificate_distribution_started():
    """Qdevice certificates are about to be set up on nodes"""
    return ReportItem.info(
        report_codes.QDEVICE_CERTIFICATE_DISTRIBUTION_STARTED
    )

def qdevice_certificate_accepted_by_node(node):
    """
    Qdevice certificates have been saved to a node
    string node node on which certificates have been saved
    """
    return ReportItem.info(
        report_codes.QDEVICE_CERTIFICATE_ACCEPTED_BY_NODE,
        info={"node": node}
    )

def qdevice_certificate_removal_started():
    """Qdevice certificates are about to be removed from nodes"""
    return ReportItem.info(report_codes.QDEVICE_CERTIFICATE_REMOVAL_STARTED)

def qdevice_certificate_removed_from_node(node):
    """
    Qdevice certificates have been removed from a node
    string node node on which certificates have been deleted
    """
    return ReportItem.info(
        report_codes.QDEVICE_CERTIFICATE_REMOVED_FROM_NODE,
        info={"node": node}
    )

def qdevice_certificate_import_error(reason):
    """
    an error occurred when importing a qdevice certificate to a node
    string reason an error message
    """
    return ReportItem.error(
        report_codes.QDEVICE_CERTIFICATE_IMPORT_ERROR,
        info={"reason": reason}
    )

def qdevice_certificate_sign_error(reason):
    """
    an error occurred when signing a qdevice certificate
    string reason an error message
    """
    return ReportItem.error(
        report_codes.QDEVICE_CERTIFICATE_SIGN_ERROR,
        info={"reason": reason}
    )

def qdevice_destroy_success(model):
    """
    qdevice configuration was successfully removed from the local host
    string model qdevice model
    """
    return ReportItem.info(
        report_codes.QDEVICE_DESTROY_SUCCESS,
        info={"model": model}
    )

def qdevice_destroy_error(model, reason):
    """
    an error occurred when removing qdevice configuration from the local host
    string model qdevice model
    string reason an error message
    """
    return ReportItem.error(
        report_codes.QDEVICE_DESTROY_ERROR,
        info={"model": model, "reason": reason}
    )

def qdevice_not_running(model):
    """
    qdevice is expected to be running but is not running
    string model qdevice model
    """
    return ReportItem.error(
        report_codes.QDEVICE_NOT_RUNNING,
        info={"model": model}
    )

def qdevice_get_status_error(model, reason):
    """
    unable to get the runtime status of qdevice
    string model qdevice model
    string reason an error message
    """
    return ReportItem.error(
        report_codes.QDEVICE_GET_STATUS_ERROR,
        info={"model": model, "reason": reason}
    )

def qdevice_used_by_clusters(
    clusters, severity=ReportItemSeverity.ERROR, forceable=None
):
    """
    Qdevice is currently being used by clusters, cannot stop it unless forced
    """
    return ReportItem(
        report_codes.QDEVICE_USED_BY_CLUSTERS,
        severity,
        info={"clusters": clusters},
        forceable=forceable
    )

def cman_unsupported_command():
    """
    requested library command is not available as the local cluster is CMAN
    based
    """
    return ReportItem.error(report_codes.CMAN_UNSUPPORTED_COMMAND)

def id_already_exists(id):
    """
    specified id already exists in CIB and cannot be used for a new CIB object
    id string existing id
    """
    return ReportItem.error(
        report_codes.ID_ALREADY_EXISTS,
        info={"id": id}
    )

def id_belongs_to_unexpected_type(id, expected_types, current_type):
    """
    Specified id exists but belongs to an unexpected element. For example, the
    user wants to create a resource in a group specified by its id, but that
    id does not belong to a group.
    """
    return ReportItem.error(
        report_codes.ID_BELONGS_TO_UNEXPECTED_TYPE,
        info={
            "id": id,
            "expected_types": expected_types,
            "current_type": current_type,
        }
    )

""" return ReportItem.error( report_codes.OBJECT_WITH_ID_IN_UNEXPECTED_CONTEXT, info={ "type": object_type, "id": object_id, "expected_context_type": expected_context_type, "expected_context_id": expected_context_id, } ) def id_not_found(id, expected_types, context_type="", context_id=""): """ specified id does not exist in CIB, user referenced a nonexisting id string id -- specified id list expected_types -- list of id's roles - expected types with the id string context_type -- context_id's role / type string context_id -- specifies the search area """ return ReportItem.error( report_codes.ID_NOT_FOUND, info={ "id": id, "expected_types": sorted(expected_types), "context_type": context_type, "context_id": context_id, } ) def resource_bundle_already_contains_a_resource(bundle_id, resource_id): """ The bundle already contains a resource, another one caanot be added string bundle_id -- id of the bundle string resource_id -- id of the resource already contained in the bundle """ return ReportItem.error( report_codes.RESOURCE_BUNDLE_ALREADY_CONTAINS_A_RESOURCE, info={ "bundle_id": bundle_id, "resource_id": resource_id, } ) def resource_cannot_be_next_to_itself_in_group(resource_id, group_id): """ Cannot put resource(id=resource_id) into group(id=group_id) next to itself: resource(id=resource_id). """ return ReportItem.error( report_codes.RESOURCE_CANNOT_BE_NEXT_TO_ITSELF_IN_GROUP, info={ "resource_id": resource_id, "group_id": group_id, } ) def stonith_resources_do_not_exist( stonith_ids, severity=ReportItemSeverity.ERROR, forceable=None ): """ specified stonith resource doesn't exist (e.g. when creating in constraints) iterable stoniths -- list of specified stonith id """ return ReportItem( report_codes.STONITH_RESOURCES_DO_NOT_EXIST, severity, info={ "stonith_ids": stonith_ids, }, forceable=forceable ) def resource_running_on_nodes( resource_id, roles_with_nodes, severity=ReportItemSeverity.INFO ): """ Resource is running on some nodes. Taken from cluster state. string resource_id represent the resource list of tuple roles_with_nodes contain pairs (role, node) """ return ReportItem( report_codes.RESOURCE_RUNNING_ON_NODES, severity, info={ "resource_id": resource_id, "roles_with_nodes": roles_with_nodes, } ) def resource_does_not_run(resource_id, severity=ReportItemSeverity.INFO): """ Resource is not running on any node. Taken from cluster state. string resource_id represent the resource """ return ReportItem( report_codes.RESOURCE_DOES_NOT_RUN, severity, info={ "resource_id": resource_id, } ) def resource_is_guest_node_already(resource_id): """ The resource is already used as guest node (i.e. has meta attribute remote-node). string resource_id -- id of the resource that is guest node """ return ReportItem.error( report_codes.RESOURCE_IS_GUEST_NODE_ALREADY, info={ "resource_id": resource_id, } ) def resource_is_unmanaged(resource_id): """ The resource the user works with is unmanaged (e.g. 
def resource_is_unmanaged(resource_id):
    """
    The resource the user works with is unmanaged (e.g. in enable/disable)
    string resource_id -- id of the unmanaged resource
    """
    return ReportItem.warning(
        report_codes.RESOURCE_IS_UNMANAGED,
        info={"resource_id": resource_id}
    )

def resource_managed_no_monitor_enabled(resource_id):
    """
    The resource which was set to managed mode has no monitor operations
    enabled
    string resource_id -- id of the resource
    """
    return ReportItem.warning(
        report_codes.RESOURCE_MANAGED_NO_MONITOR_ENABLED,
        info={"resource_id": resource_id}
    )

def cib_load_error(reason):
    """
    cannot load cib from cibadmin, cibadmin exited with a non-zero code
    string reason error description
    """
    return ReportItem.error(
        report_codes.CIB_LOAD_ERROR,
        info={"reason": reason}
    )

def cib_load_error_scope_missing(scope, reason):
    """
    cannot load cib from cibadmin, the specified scope is missing in the cib
    scope string requested cib scope
    string reason error description
    """
    return ReportItem.error(
        report_codes.CIB_LOAD_ERROR_SCOPE_MISSING,
        info={"scope": scope, "reason": reason}
    )

def cib_load_error_invalid_format(reason):
    """cib does not conform to the schema"""
    return ReportItem.error(
        report_codes.CIB_LOAD_ERROR_BAD_FORMAT,
        info={"reason": reason}
    )

def cib_missing_mandatory_section(section_name):
    """
    CIB is missing a section which is required to be present
    section_name string name of the missing section (element name or path)
    """
    return ReportItem.error(
        report_codes.CIB_CANNOT_FIND_MANDATORY_SECTION,
        info={"section": section_name}
    )

def cib_push_error(reason, pushed_cib):
    """
    cannot push cib via cibadmin, cibadmin exited with a non-zero code
    string reason error description
    string pushed_cib cib which failed to be pushed
    """
    return ReportItem.error(
        report_codes.CIB_PUSH_ERROR,
        info={"reason": reason, "pushed_cib": pushed_cib}
    )

def cib_save_tmp_error(reason):
    """
    cannot save the CIB into a temporary file
    string reason error description
    """
    return ReportItem.error(
        report_codes.CIB_SAVE_TMP_ERROR,
        info={"reason": reason}
    )

def cib_diff_error(reason, cib_old, cib_new):
    """
    cannot obtain a diff of CIBs
    string reason -- error description
    string cib_old -- the CIB to be diffed against
    string cib_new -- the CIB diffed against the old cib
    """
    return ReportItem.error(
        report_codes.CIB_DIFF_ERROR,
        info={"reason": reason, "cib_old": cib_old, "cib_new": cib_new}
    )

def cib_push_forced_full_due_to_crm_feature_set(required_set, current_set):
    """
    Pcs uses the "push full CIB" approach, so race conditions may occur.
    pcs.common.tools.Version required_set -- crm_feature_set required for diff
    pcs.common.tools.Version current_set -- actual CIB crm_feature_set
    """
    return ReportItem.warning(
        report_codes.CIB_PUSH_FORCED_FULL_DUE_TO_CRM_FEATURE_SET,
        info={
            "required_set": str(required_set),
            "current_set": str(current_set),
        }
    )

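# Illustrative sketch (not part of upstream pcs): the report above is emitted
# when the cluster's crm_feature_set is too old for pushing the CIB as a
# diff, and pcs falls back to pushing the full CIB. Version objects are
# assumed to be comparable, like pcs.common.tools.Version.
def _example_choose_cib_push_strategy(current_set, required_set):
    if current_set < required_set:
        return (
            "push-full",
            cib_push_forced_full_due_to_crm_feature_set(
                required_set, current_set
            ),
        )
    return ("push-diff", None)
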
def cluster_state_cannot_load(reason):
    """
    cannot load the cluster status from crm_mon, crm_mon exited with a
    non-zero code
    string reason error description
    """
    return ReportItem.error(
        report_codes.CRM_MON_ERROR,
        info={"reason": reason}
    )

def cluster_state_invalid_format():
    """crm_mon xml output does not conform to the schema"""
    return ReportItem.error(report_codes.BAD_CLUSTER_STATE_FORMAT)

def wait_for_idle_not_supported():
    """crm_resource does not support --wait"""
    return ReportItem.error(report_codes.WAIT_FOR_IDLE_NOT_SUPPORTED)

def wait_for_idle_timed_out(reason):
    """
    waiting for resources (crm_resource --wait) failed, timeout expired
    string reason error description
    """
    return ReportItem.error(
        report_codes.WAIT_FOR_IDLE_TIMED_OUT,
        info={"reason": reason}
    )

def wait_for_idle_error(reason):
    """
    waiting for resources (crm_resource --wait) failed
    string reason error description
    """
    return ReportItem.error(
        report_codes.WAIT_FOR_IDLE_ERROR,
        info={"reason": reason}
    )

def wait_for_idle_not_live_cluster():
    """
    cannot wait for the cluster if not running with a live cluster
    """
    return ReportItem.error(report_codes.WAIT_FOR_IDLE_NOT_LIVE_CLUSTER)

def resource_cleanup_error(reason, resource=None, node=None):
    """
    An error occurred when deleting resource failed operations in pacemaker
    string reason -- error description
    string resource -- resource which has been cleaned up
    string node -- node which has been cleaned up
    """
    return ReportItem.error(
        report_codes.RESOURCE_CLEANUP_ERROR,
        info={"reason": reason, "resource": resource, "node": node}
    )

def resource_refresh_error(reason, resource=None, node=None):
    """
    An error occurred when deleting resource history in pacemaker
    string reason -- error description
    string resource -- resource which has been cleaned up
    string node -- node which has been cleaned up
    """
    return ReportItem.error(
        report_codes.RESOURCE_REFRESH_ERROR,
        info={"reason": reason, "resource": resource, "node": node}
    )

def resource_refresh_too_time_consuming(threshold):
    """
    Resource refresh would execute more than threshold operations in a cluster
    int threshold -- current threshold for triggering this error
    """
    return ReportItem.error(
        report_codes.RESOURCE_REFRESH_TOO_TIME_CONSUMING,
        info={"threshold": threshold},
        forceable=report_codes.FORCE_LOAD_THRESHOLD
    )

def resource_operation_interval_duplication(duplications):
    """
    More operations with the same name and the same interval appeared. Each
    operation with the same name (e.g. monitor) needs to have a unique
    interval.
    dict duplications see resource operation interval duplication
        in pcs/lib/exchange_formats.md
    """
    return ReportItem.error(
        report_codes.RESOURCE_OPERATION_INTERVAL_DUPLICATION,
        info={"duplications": duplications}
    )

""" return ReportItem.warning( report_codes.RESOURCE_OPERATION_INTERVAL_ADAPTED, info={ "operation_name": operation_name, "original_interval": original_interval, "adapted_interval": adapted_interval, } ) def node_not_found( node, searched_types=None, severity=ReportItemSeverity.ERROR, forceable=None ): """ specified node does not exist node string specified node searched_types list|string """ return ReportItem( report_codes.NODE_NOT_FOUND, severity, info={ "node": node, "searched_types": searched_types if searched_types else [] }, forceable=forceable ) def node_to_clear_is_still_in_cluster( node, severity=ReportItemSeverity.ERROR, forceable=None ): """ specified node is still in cluster and `crm_node --remove` should be not used node string specified node """ return ReportItem( report_codes.NODE_TO_CLEAR_IS_STILL_IN_CLUSTER, severity, info={ "node": node, }, forceable=forceable ) def node_remove_in_pacemaker_failed(node_name, reason): """ calling of crm_node --remove failed string reason is caught reason """ return ReportItem.error( report_codes.NODE_REMOVE_IN_PACEMAKER_FAILED, info={ "node_name": node_name, "reason": reason, } ) def multiple_result_found( result_type, result_identifier_list, search_description="", severity=ReportItemSeverity.ERROR, forceable=None ): """ Multiple result was found when something was looked for. E.g. resource for remote node. string result_type specifies what was looked for, e.g. "resource" list result_identifier_list contains identifiers of results e.g. resource ids string search_description e.g. name of remote_node """ return ReportItem( report_codes.MULTIPLE_RESULTS_FOUND, severity, info={ "result_type": result_type, "result_identifier_list": result_identifier_list, "search_description": search_description, }, forceable=forceable ) def pacemaker_local_node_name_not_found(reason): """ we are unable to figure out pacemaker's local node's name reason string error message """ return ReportItem.error( report_codes.PACEMAKER_LOCAL_NODE_NAME_NOT_FOUND, info={"reason": reason} ) def rrp_active_not_supported(warning=False): """ active RRP mode is not supported, require user confirmation warning set to True if user confirmed he/she wants to proceed """ return ReportItem( report_codes.RRP_ACTIVE_NOT_SUPPORTED, ReportItemSeverity.WARNING if warning else ReportItemSeverity.ERROR, forceable=(None if warning else report_codes.FORCE_ACTIVE_RRP) ) def cman_ignored_option(option): """ specified option is ignored as CMAN clusters do not support it options string option name """ return ReportItem.warning( report_codes.IGNORED_CMAN_UNSUPPORTED_OPTION, info={'option_name': option} ) def rrp_addresses_transport_mismatch(): """ RRP defined by network addresses is not allowed when udp transport is used """ return ReportItem.error( report_codes.NON_UDP_TRANSPORT_ADDR_MISMATCH, ) def cman_udpu_restart_required(): """ warn user it is required to restart CMAN cluster for changes to take effect """ return ReportItem.warning( report_codes.CMAN_UDPU_RESTART_REQUIRED, ) def cman_broadcast_all_rings(): """ broadcast enabled in all rings, CMAN doesn't support 1 ring only broadcast """ return ReportItem.warning( report_codes.CMAN_BROADCAST_ALL_RINGS, ) def service_start_started(service, instance=None): """ system service is being started string service service name or description string instance instance of service """ return ReportItem.info( report_codes.SERVICE_START_STARTED, info={ "service": service, "instance": instance, } ) def service_start_error(service, reason, node=None, 
def service_start_error(service, reason, node=None, instance=None):
    """
    system service start failed
    string service service name or description
    string reason error message
    string node node on which the service has been requested to start
    string instance instance of the service
    """
    return ReportItem.error(
        report_codes.SERVICE_START_ERROR,
        info={
            "service": service,
            "reason": reason,
            "node": node,
            "instance": instance,
        }
    )

def service_start_success(service, node=None, instance=None):
    """
    system service was started successfully
    string service service name or description
    string node node on which the service has been requested to start
    string instance instance of the service
    """
    return ReportItem.info(
        report_codes.SERVICE_START_SUCCESS,
        info={"service": service, "node": node, "instance": instance}
    )

def service_start_skipped(service, reason, node=None, instance=None):
    """
    starting a system service was skipped, no error occurred
    string service service name or description
    string reason why the start has been skipped
    string node node on which the service has been requested to start
    string instance instance of the service
    """
    return ReportItem.info(
        report_codes.SERVICE_START_SKIPPED,
        info={
            "service": service,
            "reason": reason,
            "node": node,
            "instance": instance,
        }
    )

def service_stop_started(service, instance=None):
    """
    system service is being stopped
    string service service name or description
    string instance instance of the service
    """
    return ReportItem.info(
        report_codes.SERVICE_STOP_STARTED,
        info={"service": service, "instance": instance}
    )

def service_stop_error(service, reason, node=None, instance=None):
    """
    system service stop failed
    string service service name or description
    string reason error message
    string node node on which the service has been requested to stop
    string instance instance of the service
    """
    return ReportItem.error(
        report_codes.SERVICE_STOP_ERROR,
        info={
            "service": service,
            "reason": reason,
            "node": node,
            "instance": instance,
        }
    )

def service_stop_success(service, node=None, instance=None):
    """
    system service was stopped successfully
    string service service name or description
    string node node on which the service has been requested to stop
    string instance instance of the service
    """
    return ReportItem.info(
        report_codes.SERVICE_STOP_SUCCESS,
        info={"service": service, "node": node, "instance": instance}
    )

def service_kill_error(services, reason):
    """
    system services kill failed
    iterable services service names or descriptions
    string reason error message
    """
    return ReportItem.error(
        report_codes.SERVICE_KILL_ERROR,
        info={"services": services, "reason": reason}
    )

def service_kill_success(services):
    """
    system services were killed successfully
    iterable services service names or descriptions
    """
    return ReportItem.info(
        report_codes.SERVICE_KILL_SUCCESS,
        info={"services": services}
    )

def service_enable_started(service, instance=None):
    """
    system service is being enabled
    string service service name or description
    string instance instance of the service
    """
    return ReportItem.info(
        report_codes.SERVICE_ENABLE_STARTED,
        info={"service": service, "instance": instance}
    )

def service_enable_error(service, reason, node=None, instance=None):
    """
    system service enable failed
    string service service name or description
    string reason error message
    string node node on which the service was enabled
    string instance instance of the service
    """
    return ReportItem.error(
        report_codes.SERVICE_ENABLE_ERROR,
        info={
            "service": service,
            "reason": reason,
            "node": node,
            "instance": instance,
        }
    )

def service_enable_success(service, node=None, instance=None):
    """
    system service was enabled successfully
    string service service name or description
    string node node on which the service has been enabled
    string instance instance of the service
    """
    return ReportItem.info(
        report_codes.SERVICE_ENABLE_SUCCESS,
        info={"service": service, "node": node, "instance": instance}
    )

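# Illustrative sketch (not part of upstream pcs): the service_* reports above
# come in started / success / error triples. A wrapper around an action
# callable could emit them like this; the report_processor interface and the
# exception raised by the action are assumptions of this example.
def _example_enable_service(report_processor, service, enable_action):
    report_processor.process(service_enable_started(service))
    try:
        enable_action(service)
    except EnvironmentError as e:
        report_processor.process(service_enable_error(service, str(e)))
        return False
    report_processor.process(service_enable_success(service))
    return True
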
def service_enable_skipped(service, reason, node=None, instance=None):
    """
    enabling a system service was skipped, no error occurred
    string service service name or description
    string reason why the enabling has been skipped
    string node node on which the service has been requested to enable
    string instance instance of the service
    """
    return ReportItem.info(
        report_codes.SERVICE_ENABLE_SKIPPED,
        info={
            "service": service,
            "reason": reason,
            "node": node,
            "instance": instance,
        }
    )

def service_disable_started(service, instance=None):
    """
    system service is being disabled
    string service service name or description
    string instance instance of the service
    """
    return ReportItem.info(
        report_codes.SERVICE_DISABLE_STARTED,
        info={"service": service, "instance": instance}
    )

def service_disable_error(service, reason, node=None, instance=None):
    """
    system service disable failed
    string service service name or description
    string reason error message
    string node node on which the service was disabled
    string instance instance of the service
    """
    return ReportItem.error(
        report_codes.SERVICE_DISABLE_ERROR,
        info={
            "service": service,
            "reason": reason,
            "node": node,
            "instance": instance,
        }
    )

def service_disable_success(service, node=None, instance=None):
    """
    system service was disabled successfully
    string service service name or description
    string node node on which the service was disabled
    string instance instance of the service
    """
    return ReportItem.info(
        report_codes.SERVICE_DISABLE_SUCCESS,
        info={"service": service, "node": node, "instance": instance}
    )

def unable_to_get_agent_metadata(
    agent, reason, severity=ReportItemSeverity.ERROR, forceable=None
):
    """
    There were some issues trying to get the metadata of an agent
    string agent agent whose metadata could not be obtained
    string reason reason of the failure
    """
    return ReportItem(
        report_codes.UNABLE_TO_GET_AGENT_METADATA,
        severity,
        info={"agent": agent, "reason": reason},
        forceable=forceable
    )

def invalid_resource_agent_name(name):
    """
    The entered resource agent name is not valid. The name has an internal
    structure; the code needs to work with parts of this structure and fails
    if the parts cannot be obtained.
    string name entered name
    """
    return ReportItem.error(
        report_codes.INVALID_RESOURCE_AGENT_NAME,
        info={"name": name}
    )

def invalid_stonith_agent_name(name):
    """
    The entered stonith agent name is not valid.
    string name -- entered stonith agent name
    """
    return ReportItem.error(
        report_codes.INVALID_STONITH_AGENT_NAME,
        info={"name": name}
    )

def agent_name_guessed(entered_name, guessed_name):
    """
    The resource agent name was deduced from the entered name. Pcs supports
    using an abbreviated resource agent name (e.g. ocf:heartbeat:Delay =>
    Delay) when it can be clearly deduced.
    string entered_name entered name
    string guessed_name deduced name
    """
    return ReportItem.info(
        report_codes.AGENT_NAME_GUESSED,
        info={
            "entered_name": entered_name,
            "guessed_name": guessed_name,
        }
    )

def agent_name_guess_found_more_than_one(agent, possible_agents):
    """
    More than one agent was found based on the search string, specify one of
    them
    string agent searched name of an agent
    iterable possible_agents full names of agents matching the search
    """
    return ReportItem.error(
        report_codes.AGENT_NAME_GUESS_FOUND_MORE_THAN_ONE,
        info={
            "agent": agent,
            "possible_agents": possible_agents,
            "possible_agents_str": ", ".join(sorted(possible_agents)),
        }
    )

def agent_name_guess_found_none(agent):
    """
    Specified agent doesn't exist
    string agent name of the agent which doesn't exist
    """
    return ReportItem.error(
        report_codes.AGENT_NAME_GUESS_FOUND_NONE,
        info={"agent": agent}
    )

def omitting_node(node):
    """
    warning that the specified node will be omitted in following actions
    node -- node name
    """
    return ReportItem.warning(
        report_codes.OMITTING_NODE,
        info={"node": node}
    )

def sbd_check_started():
    """info that SBD pre-enabling checks started"""
    return ReportItem.info(report_codes.SBD_CHECK_STARTED)

def sbd_check_success(node):
    """
    info that SBD pre-enabling checks finished without issues on the
    specified node
    node -- node name
    """
    return ReportItem.info(
        report_codes.SBD_CHECK_SUCCESS,
        info={"node": node}
    )

def sbd_config_distribution_started():
    """distribution of the SBD configuration started"""
    return ReportItem.info(report_codes.SBD_CONFIG_DISTRIBUTION_STARTED)

def sbd_config_accepted_by_node(node):
    """
    info that the SBD configuration has been saved successfully on the
    specified node
    node -- node name
    """
    return ReportItem.info(
        report_codes.SBD_CONFIG_ACCEPTED_BY_NODE,
        info={"node": node}
    )

def unable_to_get_sbd_config(node, reason, severity=ReportItemSeverity.ERROR):
    """
    unable to get the SBD config from the specified node (communication or
    parsing error)
    node -- node name
    reason -- reason of the failure
    """
    return ReportItem(
        report_codes.UNABLE_TO_GET_SBD_CONFIG,
        severity,
        info={"node": node, "reason": reason}
    )

def sbd_enabling_started():
    """enabling the SBD service started"""
    return ReportItem.info(report_codes.SBD_ENABLING_STARTED)

def sbd_disabling_started():
    """disabling the SBD service started"""
    return ReportItem.info(report_codes.SBD_DISABLING_STARTED)

def sbd_device_initialization_started(device_list):
    """initialization of SBD device(s) started"""
    return ReportItem.info(
        report_codes.SBD_DEVICE_INITIALIZATION_STARTED,
        info={"device_list": device_list}
    )

def sbd_device_initialization_success(device_list):
    """initialization of SBD device(s) succeeded"""
    return ReportItem.info(
        report_codes.SBD_DEVICE_INITIALIZATION_SUCCESS,
        info={"device_list": device_list}
    )

def sbd_device_initialization_error(device_list, reason):
    """initialization of an SBD device failed"""
    return ReportItem.error(
        report_codes.SBD_DEVICE_INITIALIZATION_ERROR,
        info={"device_list": device_list, "reason": reason}
    )

def sbd_device_list_error(device, reason):
    """command 'sbd list' failed"""
    return ReportItem.error(
        report_codes.SBD_DEVICE_LIST_ERROR,
        info={"device": device, "reason": reason}
    )

""" return ReportItem.error( report_codes.SBD_DEVICE_MESSAGE_ERROR, info={ "device": device, "node": node, "message": message, "reason": reason, } ) def sbd_device_dump_error(device, reason): """ command 'sbd dump' failed """ return ReportItem.error( report_codes.SBD_DEVICE_DUMP_ERROR, info={ "device": device, "reason": reason, } ) def files_distribution_started(file_list, node_list=None, description=None): """ files is about to be sent to nodes """ file_list = file_list if file_list else [] return ReportItem.info( report_codes.FILES_DISTRIBUTION_STARTED, info={ "file_list": file_list, "node_list": node_list, "description": description, } ) def file_distribution_success(node=None, file_description=None): """ files was successfuly distributed on nodes string node -- name of destination node string file_description -- name (code) of sucessfully put files """ return ReportItem.info( report_codes.FILE_DISTRIBUTION_SUCCESS, info={ "node": node, "file_description": file_description, }, ) def file_distribution_error( node=None, file_description=None, reason=None, severity=ReportItemSeverity.ERROR, forceable=None ): """ cannot put files to specific nodes string node -- name of destination node string file_description -- is file code string reason -- is error message """ return ReportItem( report_codes.FILE_DISTRIBUTION_ERROR, severity, info={ "node": node, "file_description": file_description, "reason": reason, }, forceable=forceable ) def files_remove_from_node_started(file_list, node_list=None, description=None): """ files is about to be removed from nodes """ file_list = file_list if file_list else [] return ReportItem.info( report_codes.FILES_REMOVE_FROM_NODE_STARTED, info={ "file_list": file_list, "node_list": node_list, "description": description, } ) def file_remove_from_node_success(node=None, file_description=None): """ files was successfuly removed nodes string node -- name of destination node string file_description -- name (code) of sucessfully put files """ return ReportItem.info( report_codes.FILE_REMOVE_FROM_NODE_SUCCESS, info={ "node": node, "file_description": file_description, }, ) def file_remove_from_node_error( node=None, file_description=None, reason=None, severity=ReportItemSeverity.ERROR, forceable=None ): """ cannot remove files from specific nodes string node -- name of destination node string file_description -- is file code string reason -- is error message """ return ReportItem( report_codes.FILE_REMOVE_FROM_NODE_ERROR, severity, info={ "node": node, "file_description": file_description, "reason": reason, }, forceable=forceable ) def service_commands_on_nodes_started( action_list, node_list=None, description=None ): """ node was requested for actions """ action_list = action_list if action_list else [] return ReportItem.info( report_codes.SERVICE_COMMANDS_ON_NODES_STARTED, info={ "action_list": action_list, "node_list": node_list, "description": description, } ) def service_command_on_node_success( node=None, service_command_description=None ): """ files was successfuly distributed on nodes string service_command_description -- name (code) of sucessfully service command """ return ReportItem.info( report_codes.SERVICE_COMMAND_ON_NODE_SUCCESS, info={ "node": node, "service_command_description": service_command_description, }, ) def service_command_on_node_error( node=None, service_command_description=None, reason=None, severity=ReportItemSeverity.ERROR, forceable=None ): """ action on nodes failed string service_command_description -- name (code) of sucessfully service 
def service_command_on_node_error(
    node=None, service_command_description=None, reason=None,
    severity=ReportItemSeverity.ERROR, forceable=None
):
    """
    a service command failed on a node
    string service_command_description -- name (code) of the failed service
        command
    string reason -- error message
    """
    return ReportItem(
        report_codes.SERVICE_COMMAND_ON_NODE_ERROR,
        severity,
        info={
            "node": node,
            "service_command_description": service_command_description,
            "reason": reason,
        },
        forceable=forceable
    )

def invalid_response_format(node):
    """
    a response in an invalid format has been received from the specified node
    node -- node name
    """
    return ReportItem.error(
        report_codes.INVALID_RESPONSE_FORMAT,
        info={"node": node}
    )

def sbd_no_device_for_node(node):
    """
    there is no device defined for a node when enabling sbd with devices
    """
    return ReportItem.error(
        report_codes.SBD_NO_DEVICE_FOR_NODE,
        info={"node": node}
    )

def sbd_too_many_devices_for_node(node, device_list, max_devices):
    """
    more than 3 devices defined for a node
    """
    return ReportItem.error(
        report_codes.SBD_TOO_MANY_DEVICES_FOR_NODE,
        info={
            "node": node,
            "device_list": device_list,
            "max_devices": max_devices,
        }
    )

def sbd_device_path_not_absolute(device, node=None):
    """
    the path of an SBD device is not absolute
    """
    return ReportItem.error(
        report_codes.SBD_DEVICE_PATH_NOT_ABSOLUTE,
        info={"device": device, "node": node}
    )

def sbd_device_does_not_exist(device, node):
    """
    the specified device does not exist on the node
    """
    return ReportItem.error(
        report_codes.SBD_DEVICE_DOES_NOT_EXIST,
        info={"device": device, "node": node}
    )

def sbd_device_is_not_block_device(device, node):
    """
    the specified device on the node is not a block device
    """
    return ReportItem.error(
        report_codes.SBD_DEVICE_IS_NOT_BLOCK_DEVICE,
        info={"device": device, "node": node}
    )

def sbd_not_installed(node):
    """
    sbd is not installed on the specified node
    node -- node name
    """
    return ReportItem.error(
        report_codes.SBD_NOT_INSTALLED,
        info={"node": node}
    )

def watchdog_not_found(node, watchdog):
    """
    the watchdog doesn't exist on the specified node
    node -- node name
    watchdog -- watchdog device path
    """
    return ReportItem.error(
        report_codes.WATCHDOG_NOT_FOUND,
        info={"node": node, "watchdog": watchdog}
    )

def invalid_watchdog_path(watchdog):
    """
    the watchdog path is not an absolute path
    watchdog -- watchdog device path
    """
    return ReportItem.error(
        report_codes.WATCHDOG_INVALID,
        info={"watchdog": watchdog}
    )

def unable_to_get_sbd_status(node, reason):
    """
    there was a (communication or parsing) failure when obtaining the status
    of SBD from the specified node
    node -- node name
    reason -- reason of the failure
    """
    return ReportItem.warning(
        report_codes.UNABLE_TO_GET_SBD_STATUS,
        info={"node": node, "reason": reason}
    )

def cluster_restart_required_to_apply_changes():
    """
    warn the user a cluster needs to be manually restarted to use the new
    configuration
    """
    return ReportItem.warning(
        report_codes.CLUSTER_RESTART_REQUIRED_TO_APPLY_CHANGES
    )

def cib_alert_recipient_already_exists(
    alert_id, recipient_value,
    severity=ReportItemSeverity.ERROR, forceable=None
):
    """
    A recipient with the specified value already exists in the alert with id
    'alert_id'
    alert_id -- id of the alert to which the recipient belongs
    recipient_value -- value of the recipient
    """
    return ReportItem(
        report_codes.CIB_ALERT_RECIPIENT_ALREADY_EXISTS,
        severity,
        info={"recipient": recipient_value, "alert": alert_id},
        forceable=forceable
    )

def cib_alert_recipient_invalid_value(recipient_value):
    """
    Invalid recipient value.
    recipient_value -- recipient value
    """
    return ReportItem.error(
        report_codes.CIB_ALERT_RECIPIENT_VALUE_INVALID,
        info={"recipient": recipient_value}
    )

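# Illustrative sketch (not part of upstream pcs): a device-list check which
# would emit the SBD device reports above. The limit of 3 devices matches
# the docstring of sbd_too_many_devices_for_node; the function itself is
# hypothetical.
def _example_check_sbd_devices(node, device_list, max_devices=3):
    import os.path
    report_list = []
    if not device_list:
        report_list.append(sbd_no_device_for_node(node))
    elif len(device_list) > max_devices:
        report_list.append(
            sbd_too_many_devices_for_node(node, device_list, max_devices)
        )
    for device in device_list:
        if not os.path.isabs(device):
            report_list.append(sbd_device_path_not_absolute(device, node))
    return report_list
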
""" return ReportItem.info( report_codes.CIB_UPGRADE_SUCCESSFUL, ) def cib_upgrade_failed(reason): """ Upgrade of CIB schema failed. reason -- reason of failure """ return ReportItem.error( report_codes.CIB_UPGRADE_FAILED, info={"reason": reason} ) def unable_to_upgrade_cib_to_required_version( current_version, required_version ): """ Unable to upgrade CIB to minimal required schema version. pcs.common.tools.Version current_version -- current version of CIB schema pcs.common.tools.Version required_version -- required version of CIB schema """ return ReportItem.error( report_codes.CIB_UPGRADE_FAILED_TO_MINIMAL_REQUIRED_VERSION, info={ "required_version": str(required_version), "current_version": str(current_version) } ) def file_already_exists( file_role, file_path, severity=ReportItemSeverity.ERROR, forceable=None, node=None ): return ReportItem( report_codes.FILE_ALREADY_EXISTS, severity, info={ "file_role": file_role, "file_path": file_path, "node": node, }, forceable=forceable, ) def file_does_not_exist(file_role, file_path=""): return ReportItem.error( report_codes.FILE_DOES_NOT_EXIST, info={ "file_role": file_role, "file_path": file_path, }, ) def file_io_error( file_role, file_path="", reason="", operation="work with", severity=ReportItemSeverity.ERROR ): return ReportItem( report_codes.FILE_IO_ERROR, severity, info={ "file_role": file_role, "file_path": file_path, "reason": reason, "operation": operation }, ) def unable_to_determine_user_uid(user): return ReportItem.error( report_codes.UNABLE_TO_DETERMINE_USER_UID, info={ "user": user } ) def unable_to_determine_group_gid(group): return ReportItem.error( report_codes.UNABLE_TO_DETERMINE_GROUP_GID, info={ "group": group } ) def unsupported_operation_on_non_systemd_systems(): return ReportItem.error( report_codes.UNSUPPORTED_OPERATION_ON_NON_SYSTEMD_SYSTEMS, ) def live_environment_required(forbidden_options): return ReportItem.error( report_codes.LIVE_ENVIRONMENT_REQUIRED, info={ "forbidden_options": forbidden_options, } ) def live_environment_required_for_local_node(): """ The operation cannot be performed on CIB in file (not live cluster) if no node name is specified i.e. working with the local node """ return ReportItem.error( report_codes.LIVE_ENVIRONMENT_REQUIRED_FOR_LOCAL_NODE, ) def nolive_skip_files_distribution(files_description, nodes): """ When running action with e.g. -f the files was not distributed to nodes. list files_description -- contains description of files list nodes -- destinations where should be files distributed """ return ReportItem.info( report_codes.NOLIVE_SKIP_FILES_DISTRIBUTION, info={ "files_description": files_description, "nodes": nodes, } ) def nolive_skip_files_remove(files_description, nodes): """ When running action with e.g. -f the files was not removed from nodes. list files_description -- contains description of files list nodes -- destinations from where should be files removed """ return ReportItem.info( report_codes.NOLIVE_SKIP_FILES_REMOVE, info={ "files_description": files_description, "nodes": nodes, } ) def nolive_skip_service_command_on_nodes(service, command, nodes): """ When running action with e.g. -f the service command is not run on nodes. string service -- e.g. pacemaker, pacemaker_remote, corosync string command -- e.g. 
def nolive_skip_service_command_on_nodes(service, command, nodes):
    """
    When running an action with e.g. -f, the service command is not run on
    nodes.
    string service -- e.g. pacemaker, pacemaker_remote, corosync
    string command -- e.g. start, enable, stop, disable
    list nodes -- destinations where the command should have been run
    """
    return ReportItem.info(
        report_codes.NOLIVE_SKIP_SERVICE_COMMAND_ON_NODES,
        info={"service": service, "command": command, "nodes": nodes}
    )

def quorum_cannot_disable_atb_due_to_sbd(
    severity=ReportItemSeverity.ERROR, forceable=None
):
    """
    The quorum option auto_tie_breaker cannot be disabled due to SBD.
    """
    return ReportItem(
        report_codes.COROSYNC_QUORUM_CANNOT_DISABLE_ATB_DUE_TO_SBD,
        severity,
        forceable=forceable
    )

def sbd_requires_atb():
    """
    Warning that ATB will be enabled in order to make SBD fencing effective.
    """
    return ReportItem.warning(report_codes.SBD_REQUIRES_ATB)

def acl_role_is_already_assigned_to_target(role_id, target_id):
    """
    Error that the ACL target or group already has the role assigned.
    """
    return ReportItem.error(
        report_codes.CIB_ACL_ROLE_IS_ALREADY_ASSIGNED_TO_TARGET,
        info={"role_id": role_id, "target_id": target_id}
    )

def acl_role_is_not_assigned_to_target(role_id, target_id):
    """
    Error that the acl role is not assigned to the target or group
    """
    return ReportItem.error(
        report_codes.CIB_ACL_ROLE_IS_NOT_ASSIGNED_TO_TARGET,
        info={"role_id": role_id, "target_id": target_id}
    )

def acl_target_already_exists(target_id):
    """
    Error that a target with the specified id already exists in the
    configuration.
    """
    return ReportItem.error(
        report_codes.CIB_ACL_TARGET_ALREADY_EXISTS,
        info={"target_id": target_id}
    )

def cluster_conf_invalid_format(reason):
    """cluster.conf parsing error"""
    return ReportItem.error(
        report_codes.CLUSTER_CONF_LOAD_ERROR_INVALID_FORMAT,
        info={"reason": reason}
    )

def cluster_conf_read_error(path, reason):
    """Unable to read cluster.conf"""
    return ReportItem.error(
        report_codes.CLUSTER_CONF_READ_ERROR,
        info={"path": path, "reason": reason}
    )

def fencing_level_already_exists(level, target_type, target_value, devices):
    """
    Fencing level already exists, it cannot be created
    """
    return ReportItem.error(
        report_codes.CIB_FENCING_LEVEL_ALREADY_EXISTS,
        info={
            "level": level,
            "target_type": target_type,
            "target_value": target_value,
            "devices": devices,
        }
    )

def fencing_level_does_not_exist(level, target_type, target_value, devices):
    """
    Fencing level does not exist, it cannot be updated or deleted
    """
    return ReportItem.error(
        report_codes.CIB_FENCING_LEVEL_DOES_NOT_EXIST,
        info={
            "level": level,
            "target_type": target_type,
            "target_value": target_value,
            "devices": devices,
        }
    )

def use_command_node_add_remote(
    severity=ReportItemSeverity.ERROR, forceable=None
):
    """
    Advise the user to use a more appropriate command.
    """
    return ReportItem(
        report_codes.USE_COMMAND_NODE_ADD_REMOTE,
        severity,
        info={},
        forceable=forceable
    )

def use_command_node_add_guest(
    severity=ReportItemSeverity.ERROR, forceable=None
):
    """
    Advise the user to use a more appropriate command.
    """
    return ReportItem(
        report_codes.USE_COMMAND_NODE_ADD_GUEST,
        severity,
        info={},
        forceable=forceable
    )

""" return ReportItem( report_codes.USE_COMMAND_NODE_REMOVE_GUEST, severity, info={}, forceable=forceable ) def tmp_file_write(file_path, content): """ It has been written into a temporary file string file_path -- the file path string content -- content which has been written """ return ReportItem.debug( report_codes.TMP_FILE_WRITE, info={ "file_path": file_path, "content": content, } ) def unable_to_perform_operation_on_any_node(): """ This report is raised whenever pcs.lib.communication.tools.OneByOneStrategyMixin strategy mixin is used for network communication and operation failed on all available hosts and because of this it is not possible to continue. """ return ReportItem.error( report_codes.UNABLE_TO_PERFORM_OPERATION_ON_ANY_NODE, ) pcs-0.9.164/pcs/lib/resource_agent.py000066400000000000000000000747541326265502500174210ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from collections import namedtuple from lxml import etree import os import re from pcs import settings from pcs.common import report_codes from pcs.common.tools import xml_fromstring from pcs.lib import reports from pcs.lib.errors import LibraryError, ReportItemSeverity from pcs.lib.pacemaker.values import is_true _crm_resource = os.path.join(settings.pacemaker_binaries, "crm_resource") # Operation monitor is required always! No matter if --no-default-ops was # entered or if agent does not specify it. See # http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#_resource_operations NECESSARY_CIB_ACTION_NAMES = ["monitor"] #These are all standards valid in cib. To get a list of standards supported by #pacemaker in local environment use result of "pcs resource standards". STANDARD_LIST = [ "ocf", "lsb", "heartbeat", "stonith", "upstart", "service", "systemd", "nagios", ] DEFAULT_INTERVALS = { "monitor": "60s" } _STONITH_ACTION_REPLACED_BY = ("pcmk_off_action", "pcmk_reboot_action") def get_default_interval(operation_name): """ Return default interval for given operation_name. string operation_name """ return DEFAULT_INTERVALS.get(operation_name, "0s") def complete_all_intervals(raw_operation_list): """ Return operation_list based on raw_operation_list where each item has key "interval". list of dict raw_operation_list can include items withou key "interval". 
""" operation_list = [] for raw_operation in raw_operation_list: operation = raw_operation.copy() if "interval" not in operation: operation["interval"] = get_default_interval(operation["name"]) operation_list.append(operation) return operation_list class ResourceAgentError(Exception): # pylint: disable=super-init-not-called def __init__(self, agent, message=""): self.agent = agent self.message = message class UnableToGetAgentMetadata(ResourceAgentError): pass class InvalidResourceAgentName(ResourceAgentError): pass class InvalidStonithAgentName(ResourceAgentError): pass class ResourceAgentName( namedtuple("ResourceAgentName", "standard provider type") ): @property def full_name(self): return ":".join( filter( None, [self.standard, self.provider, self.type] ) ) def get_resource_agent_name_from_string(full_agent_name): #full_agent_name could be for example systemd:lvm2-pvscan@252:2 #note that the second colon is not separator of provider and type match = re.match( "^(?Psystemd|service):(?P[^:@]+@.*)$", full_agent_name ) if match: return ResourceAgentName( match.group("standard"), None, match.group("agent_type") ) match = re.match( "^(?P[^:]+)(:(?P[^:]+))?:(?P[^:]+)$", full_agent_name ) if not match: raise InvalidResourceAgentName(full_agent_name) standard = match.group("standard") provider = match.group("provider") if match.group("provider") else None agent_type = match.group("type") if standard not in STANDARD_LIST: raise InvalidResourceAgentName(full_agent_name) if standard == "ocf" and not provider: raise InvalidResourceAgentName(full_agent_name) if standard != "ocf" and provider: raise InvalidResourceAgentName(full_agent_name) return ResourceAgentName(standard, provider, agent_type) def list_resource_agents_standards(runner): """ Return list of resource agents standards (ocf, lsb, ... ) on the local host CommandRunner runner """ # retval is number of standards found stdout, dummy_stderr, dummy_retval = runner.run([ _crm_resource, "--list-standards" ]) ignored_standards = frozenset([ # we are only interested in RESOURCE agents "stonith", ]) return _prepare_agent_list(stdout, ignored_standards) def list_resource_agents_ocf_providers(runner): """ Return list of resource agents ocf providers on the local host CommandRunner runner """ # retval is number of providers found stdout, dummy_stderr, dummy_retval = runner.run([ _crm_resource, "--list-ocf-providers" ]) return _prepare_agent_list(stdout) def list_resource_agents_standards_and_providers(runner): """ Return list of all standard[:provider] on the local host CommandRunner runner """ standards = ( list_resource_agents_standards(runner) + [ "ocf:{0}".format(provider) for provider in list_resource_agents_ocf_providers(runner) ] ) # do not list ocf resources twice try: standards.remove("ocf") except ValueError: pass return sorted( standards, # works with both str and unicode in both python 2 and 3 key=lambda x: x.lower() ) def list_resource_agents(runner, standard_provider): """ Return list of resource agents for specified standard on the local host CommandRunner runner string standard_provider standard[:provider], e.g. 
def list_resource_agents(runner, standard_provider):
    """
    Return a list of resource agents for the specified standard on the local
    host
    CommandRunner runner
    string standard_provider -- standard[:provider], e.g. lsb, ocf,
        ocf:pacemaker
    """
    # retval is 0 on success, anything else when no agents found
    stdout, dummy_stderr, retval = runner.run([
        _crm_resource, "--list-agents", standard_provider
    ])
    if retval != 0:
        return []
    return _prepare_agent_list(stdout)

def list_stonith_agents(runner):
    """
    Return a list of fence agents on the local host
    CommandRunner runner
    """
    # retval is 0 on success, anything else when no agents found
    stdout, dummy_stderr, retval = runner.run([
        _crm_resource, "--list-agents", "stonith"
    ])
    if retval != 0:
        return []
    ignored_agents = frozenset([
        "fence_ack_manual",
        "fence_check",
        "fence_kdump_send",
        "fence_legacy",
        "fence_na",
        "fence_node",
        "fence_nss_wrapper",
        "fence_pcmk",
        "fence_sanlockd",
        "fence_tool",
        "fence_virtd",
        "fence_vmware_helper",
    ])
    return _prepare_agent_list(stdout, ignored_agents)

def _prepare_agent_list(agents_string, filter_list=None):
    ignored = frozenset(filter_list) if filter_list else frozenset([])
    result = [
        name
        for name in [line.strip() for line in agents_string.splitlines()]
        if name and name not in ignored
    ]
    return sorted(
        result,
        # works with both str and unicode in both python 2 and 3
        key=lambda x: x.lower()
    )

def guess_resource_agent_full_name(runner, search_agent_name):
    """
    List resource agents matching the specified search term
    string search_agent_name -- part of a full agent name
    """
    search_lower = search_agent_name.lower()
    # list all possible names
    possible_names = []
    for std in list_resource_agents_standards_and_providers(runner):
        for agent in list_resource_agents(runner, std):
            if search_lower == agent.lower():
                possible_names.append("{0}:{1}".format(std, agent))
    # construct agent wrappers
    agent_candidates = [
        ResourceAgent(runner, agent) for agent in possible_names
    ]
    # check if the agent is valid
    return [
        agent for agent in agent_candidates if agent.is_valid_metadata()
    ]

def guess_exactly_one_resource_agent_full_name(runner, search_agent_name):
    """
    Get the one resource agent matching the specified search term
    string search_agent_name -- last part of a full agent name
    Raise LibraryError if zero or more than one agents found
    """
    agents = guess_resource_agent_full_name(runner, search_agent_name)
    if not agents:
        raise LibraryError(
            reports.agent_name_guess_found_none(search_agent_name)
        )
    if len(agents) > 1:
        raise LibraryError(
            reports.agent_name_guess_found_more_than_one(
                search_agent_name,
                [agent.get_name() for agent in agents]
            )
        )
    return agents[0]

def find_valid_resource_agent_by_name(
    report_processor, runner, name,
    allowed_absent=False, absent_agent_supported=True
):
    """
    Return an instance of ResourceAgent corresponding to name

    report_processor -- tool for warning/info/error reporting
    runner -- tool for launching external commands
    string name -- specifies the searched agent
    bool absent_agent_supported -- decides whether it is possible to return
        an absent agent and whether a forceable or a non-forceable error is
        produced
    """
    if ":" not in name:
        agent = guess_exactly_one_resource_agent_full_name(runner, name)
        report_processor.process(
            reports.agent_name_guessed(name, agent.get_name())
        )
        return agent
    return _find_valid_agent_by_name(
        report_processor,
        runner,
        name,
        ResourceAgent,
        AbsentResourceAgent if allowed_absent else None,
        absent_agent_supported=absent_agent_supported,
    )

def find_valid_stonith_agent_by_name(
    report_processor, runner, name,
    allowed_absent=False, absent_agent_supported=True
):
    return _find_valid_agent_by_name(
        report_processor,
        runner,
        name,
        StonithAgent,
        AbsentStonithAgent if allowed_absent else None,
        absent_agent_supported=absent_agent_supported,
    )
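# Illustrative sketch (not part of pcs): when the caller passes a bare agent
# name with no colon, find_valid_resource_agent_by_name() above guesses the
# full name from the agents installed locally and reports what it picked:
#
#     agent = find_valid_resource_agent_by_name(
#         report_processor, runner, "Dummy"
#     )
#     agent.get_name()  # e.g. -> "ocf:heartbeat:Dummy", assuming exactly
#                       # one matching agent exists on the local host; an
#                       # agent_name_guessed info report is emitted as well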
def _find_valid_agent_by_name(
    report_processor, runner, name, PresentAgentClass, AbsentAgentClass,
    absent_agent_supported=True
):
    try:
        return PresentAgentClass(runner, name).validate_metadata()
    except (InvalidResourceAgentName, InvalidStonithAgentName) as e:
        raise LibraryError(resource_agent_error_to_report_item(e))
    except UnableToGetAgentMetadata as e:
        if not absent_agent_supported:
            raise LibraryError(resource_agent_error_to_report_item(e))
        if not AbsentAgentClass:
            raise LibraryError(resource_agent_error_to_report_item(
                e, forceable=True
            ))
        report_processor.process(resource_agent_error_to_report_item(
            e, severity=ReportItemSeverity.WARNING,
        ))
        return AbsentAgentClass(runner, name)

class Agent(object):
    """
    Base class for providing convenient access to an agent's metadata
    """
    def __init__(self, runner):
        """
        Create an instance which reads metadata by itself on demand
        CommandRunner runner
        """
        self._runner = runner
        self._metadata = None

    def get_name(self):
        raise NotImplementedError()

    def get_name_info(self):
        """
        Get structured agent's info, only name is populated
        """
        return {
            "name": self.get_name(),
            "shortdesc": "",
            "longdesc": "",
            "parameters": [],
            "actions": [],
        }

    def get_description_info(self):
        """
        Get structured agent's info, only name and description are populated
        """
        agent_info = self.get_name_info()
        agent_info["shortdesc"] = self.get_shortdesc()
        agent_info["longdesc"] = self.get_longdesc()
        return agent_info

    def get_full_info(self):
        """
        Get structured agent's info, all items are populated
        """
        agent_info = self.get_description_info()
        agent_info["parameters"] = self.get_parameters()
        agent_info["actions"] = self.get_actions()
        agent_info["default_actions"] = self.get_cib_default_actions()
        return agent_info

    def get_shortdesc(self):
        """
        Get a short description of agent's purpose
        """
        return (
            self._get_text_from_dom_element(
                self._get_metadata().find("shortdesc")
            )
            or
            self._get_metadata().get("shortdesc", "")
        )

    def get_longdesc(self):
        """
        Get a long description of agent's purpose
        """
        return self._get_text_from_dom_element(
            self._get_metadata().find("longdesc")
        )

    def get_parameters(self):
        """
        Get the list of agent's parameters, each parameter is described by
        a dict:
        {
            name: name of the parameter
            longdesc: long description,
            shortdesc: short description,
            type: data type of the parameter,
            default: default value,
            required: True if it is a required parameter, False otherwise
        }
        """
        params_element = self._get_metadata().find("parameters")
        if params_element is None:
            return []
        param_list = []
        for param_el in params_element.iter("parameter"):
            param = self._get_parameter(param_el)
            if not param["obsoletes"]:
                param_list.append(param)
        return param_list

    def _get_parameter(self, parameter_element):
        value_type = "string"
        default_value = None
        content_element = parameter_element.find("content")
        if content_element is not None:
            value_type = content_element.get("type", value_type)
            default_value = content_element.get("default", default_value)
        return self._create_parameter({
            "name": parameter_element.get("name", ""),
            "longdesc": self._get_text_from_dom_element(
                parameter_element.find("longdesc")
            ),
            "shortdesc": self._get_text_from_dom_element(
                parameter_element.find("shortdesc")
            ),
            "type": value_type,
            "default": default_value,
            "required": is_true(parameter_element.get("required", "0")),
            "advanced": False,
            "deprecated": is_true(parameter_element.get("deprecated", "0")),
            "obsoletes": parameter_element.get("obsoletes", None),
        })
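    # Illustrative sketch (not part of pcs): given an agent metadata
    # fragment such as
    #
    #     <parameter name="ip" required="1">
    #         <shortdesc>IPv4 address</shortdesc>
    #         <content type="string" default=""/>
    #     </parameter>
    #
    # _get_parameter() above would normalize it to
    #
    #     {"name": "ip", "longdesc": "", "shortdesc": "IPv4 address",
    #      "type": "string", "default": "", "required": True,
    #      "advanced": False, "deprecated": False, "obsoletes": None,
    #      "pcs_deprecated_warning": ""}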
    def validate_parameters(
        self, parameters,
        parameters_type="resource",
        allow_invalid=False,
        update=False
    ):
        forceable = report_codes.FORCE_OPTIONS if not allow_invalid else None
        severity = (
            ReportItemSeverity.ERROR if not allow_invalid
            else ReportItemSeverity.WARNING
        )

        report_list = []
        bad_opts, missing_req_opts = self.validate_parameters_values(
            parameters
        )

        if bad_opts:
            report_list.append(reports.invalid_options(
                bad_opts,
                sorted([attr["name"] for attr in self.get_parameters()]),
                parameters_type,
                severity=severity,
                forceable=forceable,
            ))

        if not update and missing_req_opts:
            report_list.append(reports.required_option_is_missing(
                missing_req_opts,
                parameters_type,
                severity=severity,
                forceable=forceable,
            ))

        return report_list

    def validate_parameters_values(self, parameters_values):
        """
        Return a tuple of lists:
        (<invalid attributes>, <missing required attributes>)
        dict parameters_values -- key is attribute name, value is attribute
            value
        """
        # TODO Add value and type checking (e.g. if parameter["type"] is
        # integer, its value cannot be "abc"). This most probably will require
        # redefining the format of the return value and rewriting the whole
        # function, which will only be good. For now we just stick to the
        # original legacy code.
        agent_params = self.get_parameters()

        required_missing = []
        for attr in agent_params:
            if attr["required"] and attr["name"] not in parameters_values:
                required_missing.append(attr["name"])

        valid_attrs = [attr["name"] for attr in agent_params]
        return (
            [attr for attr in parameters_values if attr not in valid_attrs],
            required_missing
        )

    def _get_raw_actions(self):
        actions_element = self._get_metadata().find("actions")
        if actions_element is None:
            return []
        # TODO Resulting dict should contain all keys defined for an action.
        # But we do not know what are those, because the metadata xml schema is
        # outdated and doesn't describe current agents' metadata xml.
        return [
            dict(action.items())
            for action in actions_element.iter("action")
        ]

    def get_actions(self):
        """
        Get the list of agent's actions (operations). Each action is
        represented as a dict. Example:
        [{"name": "monitor", "timeout": 20, "interval": 10}]
        """
        action_list = []
        for raw_action in self._get_raw_actions():
            action = {}
            for key, value in raw_action.items():
                if key != "depth":
                    action[key] = value
                elif value != "0":
                    action["OCF_CHECK_LEVEL"] = value
            action_list.append(action)
        return action_list

    def _is_cib_default_action(self, action):
        return False

    def get_cib_default_actions(self, necessary_only=False):
        """
        List actions that should be put to the resource on its creation.
        Note that every action has at least the attribute name.
        """
        action_list = [
            action for action in self.get_actions()
            if (
                necessary_only
                and
                action.get("name") in NECESSARY_CIB_ACTION_NAMES
            )
            or
            (
                not necessary_only
                and
                self._is_cib_default_action(action)
            )
        ]

        for action_name in NECESSARY_CIB_ACTION_NAMES:
            if action_name not in [action["name"] for action in action_list]:
                action_list.append({"name": action_name})

        return complete_all_intervals(action_list)
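    # Illustrative sketch (not part of pcs): even if an agent's metadata
    # defines no action that qualifies as a CIB default,
    # get_cib_default_actions() above appends the mandatory monitor action
    # and complete_all_intervals() fills in its default interval, so a newly
    # created resource always ends up with at least
    #
    #     [{"name": "monitor", "interval": "60s"}]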
    def _get_metadata(self):
        """
        Return metadata DOM
        Raise UnableToGetAgentMetadata if the agent doesn't exist or we are
        unable to get or parse its metadata
        """
        if self._metadata is None:
            self._metadata = self._parse_metadata(self._load_metadata())
        return self._metadata

    def _load_metadata(self):
        raise NotImplementedError()

    def _parse_metadata(self, metadata):
        try:
            dom = xml_fromstring(metadata)
            # TODO Majority of agents don't provide valid metadata, so we skip
            # the validation for now. We want to enable it once the schema
            # and/or agents are fixed.
            # When enabling this check for overrides in child classes.
            #if os.path.isfile(settings.agent_metadata_schema):
            #    etree.DTD(file=settings.agent_metadata_schema).assertValid(dom)
            return dom
        except (etree.XMLSyntaxError, etree.DocumentInvalid) as e:
            raise UnableToGetAgentMetadata(self.get_name(), str(e))

    def _get_text_from_dom_element(self, element):
        if element is None or element.text is None:
            return ""
        return element.text.strip()

    def _create_parameter(self, properties):
        new_param = {
            "name": "",
            "longdesc": "",
            "shortdesc": "",
            "type": "string",
            "default": None,
            "required": False,
            "advanced": False,
            "deprecated": False,
            "obsoletes": None,
            "pcs_deprecated_warning": "",
        }
        new_param.update(properties)
        return new_param

class FakeAgentMetadata(Agent):
    #pylint:disable=abstract-method
    pass

class StonithdMetadata(FakeAgentMetadata):
    def get_name(self):
        return "stonithd"

    def _get_parameter(self, parameter_element):
        parameter = super(StonithdMetadata, self)._get_parameter(
            parameter_element
        )
        # Metadata are written in such a way that a longdesc text is a
        # continuation of a shortdesc text.
        parameter["longdesc"] = "{0}\n{1}".format(
            parameter["shortdesc"], parameter["longdesc"]
        ).strip()
        parameter["advanced"] = parameter["shortdesc"].startswith(
            "Advanced use only"
        )
        return parameter

    def _load_metadata(self):
        stdout, stderr, dummy_retval = self._runner.run(
            [settings.stonithd_binary, "metadata"]
        )
        metadata = stdout.strip()
        if not metadata:
            raise UnableToGetAgentMetadata(self.get_name(), stderr.strip())
        return metadata

class CrmAgent(Agent):
    #pylint:disable=abstract-method
    def __init__(self, runner, name):
        """
        init
        CommandRunner runner
        """
        super(CrmAgent, self).__init__(runner)
        self._name_parts = self._prepare_name_parts(name)

    def _prepare_name_parts(self, name):
        raise NotImplementedError()

    def _get_full_name(self):
        return self._name_parts.full_name

    def get_standard(self):
        return self._name_parts.standard

    def get_provider(self):
        return self._name_parts.provider

    def get_type(self):
        return self._name_parts.type

    def is_valid_metadata(self):
        """
        If we are able to get the metadata, we consider the agent existing
        and valid
        """
        # if the agent is valid, we do not need to load its metadata again
        try:
            self._get_metadata()
        except UnableToGetAgentMetadata:
            return False
        return True
""" self._get_metadata() return self def _load_metadata(self): env_path = ":".join([ # otherwise pacemaker cannot run RHEL fence agents to get their # metadata settings.fence_agent_binaries, # otherwise heartbeat and cluster-glue agents don't work "/bin/", # otherwise heartbeat and cluster-glue agents don't work "/usr/bin/", ]) stdout, stderr, retval = self._runner.run( [_crm_resource, "--show-metadata", self._get_full_name()], env_extend={ "PATH": env_path, } ) if retval != 0: raise UnableToGetAgentMetadata(self.get_name(), stderr.strip()) return stdout.strip() class ResourceAgent(CrmAgent): """ Provides convinient access to a resource agent's metadata """ def _prepare_name_parts(self, name): return get_resource_agent_name_from_string(name) def get_name(self): return self._get_full_name() def get_parameters(self): parameters = super(ResourceAgent, self).get_parameters() if ( self.get_standard() == "ocf" and (self.get_provider() in ("heartbeat", "pacemaker")) ): trace_ra_found = False trace_file_found = False for param in parameters: param_name = param["name"].lower() if param_name == "trace_ra": trace_ra_found = True if param_name == "trace_file": trace_file_found = True if trace_file_found and trace_ra_found: break if not trace_ra_found: shortdesc = ( "Set to 1 to turn on resource agent tracing" " (expect large output)" ) parameters.append(self._create_parameter({ "name": "trace_ra", "longdesc": ( shortdesc + " The trace output will be saved to trace_file, if set," " or by default to" " $HA_VARRUN/ra_trace//.." " e.g. $HA_VARRUN/ra_trace/oracle/" "db.start.2012-11-27.08:37:08" ), "shortdesc": shortdesc, "type": "integer", "default": 0, "required": False, "advanced": True, })) if not trace_file_found: shortdesc = ( "Path to a file to store resource agent tracing log" ) parameters.append(self._create_parameter({ "name": "trace_file", "longdesc": shortdesc, "shortdesc": shortdesc, "type": "string", "default": "", "required": False, "advanced": True, })) return parameters def _is_cib_default_action(self, action): # Copy all actions to the CIB even those not defined in the OCF standard # or pacemaker. This way even custom actions defined in a resource agent # will be copied to the CIB and run by pacemaker if they specify # an interval. 
    def _is_cib_default_action(self, action):
        # Copy all actions to the CIB even those not defined in the OCF
        # standard or pacemaker. This way even custom actions defined in
        # a resource agent will be copied to the CIB and run by pacemaker
        # if they specify an interval.
        # See https://github.com/ClusterLabs/pcs/issues/132
        return action.get("name") not in [
            # one-time action, not meant to be processed by pacemaker
            "meta-data",
            # deprecated alias of monitor
            "status",
            # one-time action, not meant to be processed by pacemaker
            "validate-all",
        ]

class AbsentAgentMixin(object):
    def _load_metadata(self):
        return ""

    def validate_parameters_values(self, parameters_values):
        return ([], [])

class AbsentResourceAgent(AbsentAgentMixin, ResourceAgent):
    pass

class StonithAgent(CrmAgent):
    """
    Provides convenient access to a stonith agent's metadata
    """
    _stonithd_metadata = None

    @classmethod
    def clear_stonithd_metadata_cache(cls):
        cls._stonithd_metadata = None

    def _prepare_name_parts(self, name):
        # pacemaker doesn't support stonith (nor resource) agents with ':'
        # in the type
        if ":" in name:
            raise InvalidStonithAgentName(name)
        return ResourceAgentName("stonith", None, name)

    def get_name(self):
        return self.get_type()

    def get_parameters(self):
        return (
            self._filter_parameters(
                super(StonithAgent, self).get_parameters()
            )
            +
            self._get_stonithd_metadata().get_parameters()
        )

    def validate_parameters(
        self, parameters,
        parameters_type="stonith",
        allow_invalid=False,
        update=False
    ):
        report_list = super(StonithAgent, self).validate_parameters(
            parameters,
            parameters_type=parameters_type,
            allow_invalid=allow_invalid,
            update=update
        )
        if parameters.get("action", ""):
            report_list.append(reports.deprecated_option(
                "action",
                _STONITH_ACTION_REPLACED_BY,
                parameters_type,
                severity=(
                    ReportItemSeverity.ERROR if not allow_invalid
                    else ReportItemSeverity.WARNING
                ),
                forceable=(
                    report_codes.FORCE_OPTIONS if not allow_invalid
                    else None
                )
            ))
        return report_list

    def _filter_parameters(self, parameters):
        """
        Remove parameters that should not be available to the user.
        """
        # We don't allow the user to change these options which are only
        # intended to be used interactively on command line.
        remove_parameters = frozenset([
            "help",
            "version",
        ])
        filtered = []
        for param in parameters:
            if param["name"] in remove_parameters:
                continue
            elif param["name"] == "action":
                # However we still need the user to be able to set 'action'
                # due to backward compatibility reasons. So we just mark it
                # as not required. We also move it to advanced params to
                # indicate users should not set it in most cases.
                new_param = dict(param)
                new_param["required"] = False
                new_param["advanced"] = True
                new_param["pcs_deprecated_warning"] = (
                    "Specifying 'action' is deprecated and not necessary with"
                    " current Pacemaker versions. Use {0} instead."
                ).format(
                    ", ".join(
                        ["'{0}'".format(x)
                            for x in _STONITH_ACTION_REPLACED_BY]
                    )
                )
                filtered.append(new_param)
            else:
                filtered.append(param)
        # 'port' parameter is required by a fence agent, but it is filled
        # automatically by pacemaker based on 'pcmk_host_map' or
        # 'pcmk_host_list' parameter (defined in stonithd metadata).
        # Pacemaker marks the 'port' parameter as not required for us.
        return filtered
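    # Illustrative sketch (not part of pcs): validating stonith parameters
    # that still use the deprecated 'action' option yields an extra report,
    # forceable unless allow_invalid is set:
    #
    #     report_list = agent.validate_parameters({"action": "reboot"})
    #     # -> [..., deprecated_option("action",
    #     #          ("pcmk_off_action", "pcmk_reboot_action"), "stonith")]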
    def _get_stonithd_metadata(self):
        if not self.__class__._stonithd_metadata:
            self.__class__._stonithd_metadata = StonithdMetadata(self._runner)
        return self.__class__._stonithd_metadata

    def get_provides_unfencing(self):
        # self.get_actions returns an empty list
        for action in self._get_raw_actions():
            if (
                action.get("name", "") == "on"
                and
                action.get("on_target", "0") == "1"
                and
                action.get("automatic", "0") == "1"
            ):
                return True
        return False

    def _is_cib_default_action(self, action):
        return action.get("name") == "monitor"

class AbsentStonithAgent(AbsentAgentMixin, StonithAgent):
    def get_parameters(self):
        return []

def resource_agent_error_to_report_item(
    e, severity=ReportItemSeverity.ERROR, forceable=False
):
    """
    Transform a ResourceAgentError to a ReportItem
    """
    force = None
    if e.__class__ == UnableToGetAgentMetadata:
        if severity == ReportItemSeverity.ERROR and forceable:
            force = report_codes.FORCE_METADATA_ISSUE
        return reports.unable_to_get_agent_metadata(
            e.agent, e.message, severity, force
        )
    if e.__class__ == InvalidResourceAgentName:
        return reports.invalid_resource_agent_name(e.agent)
    if e.__class__ == InvalidStonithAgentName:
        return reports.invalid_stonith_agent_name(e.agent)
    raise e

pcs-0.9.164/pcs/lib/sbd.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from os import path

from pcs import settings
from pcs.lib import (
    external,
    reports,
)
from pcs.lib.tools import dict_to_environment_file, environment_file_to_dict
from pcs.lib.errors import LibraryError

DEVICE_INITIALIZATION_OPTIONS_MAPPING = {
    "watchdog-timeout": "-1",
    "allocate-timeout": "-2",
    "loop-timeout": "-3",
    "msgwait-timeout": "-4",
}

def _even_number_of_nodes_and_no_qdevice(
    corosync_conf_facade, node_number_modifier=0
):
    """
    Return True if the cluster has no quorum device configured and the number
    of nodes + node_number_modifier is an even number, False otherwise.

    corosync_conf_facade --
    node_number_modifier -- this value will be added to the current number of
        nodes. This can be useful to test whether ATB is needed when
        adding/removing a node.
    """
    return (
        not corosync_conf_facade.has_quorum_device()
        and
        (len(corosync_conf_facade.get_nodes()) + node_number_modifier) % 2 == 0
    )

def is_auto_tie_breaker_needed(
    runner, corosync_conf_facade, node_number_modifier=0
):
    """
    Return True if the quorum option auto tie breaker needs to be enabled for
    SBD fencing to work properly, False if it is not needed.

    runner -- command runner
    corosync_conf_facade --
    node_number_modifier -- this value will be added to the current number of
        nodes. This can be useful to test whether ATB is needed when
        adding/removing a node.
    """
    return (
        _even_number_of_nodes_and_no_qdevice(
            corosync_conf_facade, node_number_modifier
        )
        and
        is_sbd_installed(runner)
        and
        is_sbd_enabled(runner)
        and
        not is_device_set_local()
    )

def atb_has_to_be_enabled_pre_enable_check(corosync_conf_facade):
    """
    Return True if the quorum option auto_tie_breaker needs to be enabled for
    SBD fencing to work properly, False if it is not needed. This function
    checks neither whether SBD is installed nor whether it is enabled.
    """
    return (
        not corosync_conf_facade.is_enabled_auto_tie_breaker()
        and
        _even_number_of_nodes_and_no_qdevice(corosync_conf_facade)
    )

def atb_has_to_be_enabled(runner, corosync_conf_facade, node_number_modifier=0):
    """
    Return True if the quorum option auto tie breaker has to be enabled for
    SBD fencing to work properly, False if it is not needed or it is already
    enabled.

    runner -- command runner
    corosync_conf_facade --
    node_number_modifier -- this value will be added to the current number of
        nodes. This can be useful to test whether ATB is needed when
        adding/removing a node.
    """
    return (
        not corosync_conf_facade.is_enabled_auto_tie_breaker()
        and
        is_auto_tie_breaker_needed(
            runner, corosync_conf_facade, node_number_modifier
        )
    )
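# Illustrative sketch (not part of pcs): a worked example of the checks
# above. In a 2-node cluster (even node count) without a qdevice, running
# diskless SBD (assuming SBD is installed and enabled with no shared device
# configured), auto_tie_breaker must be on for SBD fencing to be effective:
#
#     _even_number_of_nodes_and_no_qdevice(facade)     # -> True
#     is_auto_tie_breaker_needed(runner, facade)       # -> True
#     # adding a third node flips the parity check:
#     _even_number_of_nodes_and_no_qdevice(facade, 1)  # -> False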
def create_sbd_config(base_config, node_label, watchdog, device_list=None):
    # TODO: figure out which name/ring has to be in SBD_OPTS
    config = dict(base_config)
    config["SBD_OPTS"] = '"-n {node_name}"'.format(node_name=node_label)
    if watchdog:
        config["SBD_WATCHDOG_DEV"] = watchdog
    if device_list:
        config["SBD_DEVICE"] = '"{0}"'.format(";".join(device_list))
    return dict_to_environment_file(config)

def get_default_sbd_config():
    """
    Return the default SBD configuration as a dictionary.
    """
    return {
        "SBD_DELAY_START": "no",
        "SBD_PACEMAKER": "yes",
        "SBD_STARTMODE": "always",
        "SBD_WATCHDOG_DEV": settings.sbd_watchdog_default,
        "SBD_WATCHDOG_TIMEOUT": "5"
    }

def get_local_sbd_config():
    """
    Get the local SBD configuration.
    Return the SBD configuration file as a string.
    Raise LibraryError on any failure.
    """
    try:
        with open(settings.sbd_config, "r") as sbd_cfg:
            return sbd_cfg.read()
    except EnvironmentError as e:
        raise LibraryError(reports.unable_to_get_sbd_config(
            "local node", str(e)
        ))

def get_sbd_service_name():
    return "sbd" if external.is_systemctl() else "sbd_helper"

def is_sbd_enabled(runner):
    """
    Check if the SBD service is enabled on the local system.
    Return True if the SBD service is enabled, False otherwise.

    runner -- CommandRunner
    """
    return external.is_service_enabled(runner, get_sbd_service_name())

def is_sbd_installed(runner):
    """
    Check if the SBD service is installed on the local system.
    Return True if the SBD service is installed, False otherwise.

    runner -- CommandRunner
    """
    return external.is_service_installed(runner, get_sbd_service_name())

def initialize_block_devices(
    report_processor, cmd_runner, device_list, option_dict
):
    """
    Initialize devices with the options specified in option_dict.
    Raise LibraryError on failure.

    report_processor -- report processor
    cmd_runner -- CommandRunner
    device_list -- list of strings
    option_dict -- dictionary of options and their values
    """
    report_processor.process(
        reports.sbd_device_initialization_started(device_list)
    )

    cmd = [settings.sbd_binary]
    for device in device_list:
        cmd += ["-d", device]

    for option, value in sorted(option_dict.items()):
        cmd += [DEVICE_INITIALIZATION_OPTIONS_MAPPING[option], str(value)]

    cmd.append("create")
    _, std_err, ret_val = cmd_runner.run(cmd)
    if ret_val != 0:
        raise LibraryError(
            reports.sbd_device_initialization_error(device_list, std_err)
        )
    report_processor.process(
        reports.sbd_device_initialization_success(device_list)
    )

def get_local_sbd_device_list():
    """
    Return the list of devices specified in the local SBD config
    """
    if not path.exists(settings.sbd_config):
        return []

    cfg = environment_file_to_dict(get_local_sbd_config())
    if "SBD_DEVICE" not in cfg:
        return []

    devices = cfg["SBD_DEVICE"]
    if devices.startswith('"') and devices.endswith('"'):
        devices = devices[1:-1]
    return [
        device.strip()
        for device in devices.split(";")
        if device.strip()
    ]

def is_device_set_local():
    """
    Return True if there is at least one device specified in the local SBD
    config, False otherwise.
    """
    return len(get_local_sbd_device_list()) > 0
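# Illustrative sketch (not part of pcs): the SBD_DEVICE value is written
# quoted and semicolon-separated by create_sbd_config() and parsed back by
# get_local_sbd_device_list(); the exact environment-file layout produced
# by dict_to_environment_file is assumed here:
#
#     create_sbd_config({}, "node1", None, ["/dev/sdb1", "/dev/sdc1"])
#     # writes, among other lines: SBD_DEVICE="/dev/sdb1;/dev/sdc1"
#     # ...which get_local_sbd_device_list() parses back to
#     # ["/dev/sdb1", "/dev/sdc1"]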
def get_device_messages_info(cmd_runner, device):
    """
    Return info about messages (string) stored on the specified SBD device.

    cmd_runner -- CommandRunner
    device -- string
    """
    std_out, dummy_std_err, ret_val = cmd_runner.run(
        [settings.sbd_binary, "-d", device, "list"]
    )
    if ret_val != 0:
        # sbd writes the error message into std_out
        raise LibraryError(reports.sbd_device_list_error(device, std_out))
    return std_out

def get_device_sbd_header_dump(cmd_runner, device):
    """
    Return the header dump (string) of the specified SBD device.

    cmd_runner -- CommandRunner
    device -- string
    """
    std_out, dummy_std_err, ret_val = cmd_runner.run(
        [settings.sbd_binary, "-d", device, "dump"]
    )
    if ret_val != 0:
        # sbd writes the error message into std_out
        raise LibraryError(reports.sbd_device_dump_error(device, std_out))
    return std_out

def set_message(cmd_runner, device, node_name, message):
    """
    Set a message of the specified type 'message' on the SBD device for
    the node.

    cmd_runner -- CommandRunner
    device -- string, device path
    node_name -- string, name of the node for which the message should be set
    message -- string, message type
    """
    dummy_std_out, std_err, ret_val = cmd_runner.run(
        [settings.sbd_binary, "-d", device, "message", node_name, message]
    )
    if ret_val != 0:
        raise LibraryError(reports.sbd_device_message_error(
            device, node_name, message, std_err
        ))

pcs-0.9.164/pcs/lib/test/__init__.py

pcs-0.9.164/pcs/lib/test/misc.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)
import logging

from pcs.lib.env import LibraryEnvironment as Env
from pcs.test.tools.custom_mock import MockLibraryReportProcessor
from pcs.test.tools.pcs_unittest import mock

def get_mocked_env(**kwargs):
    return Env(
        logger=mock.MagicMock(logging.Logger),
        report_processor=MockLibraryReportProcessor(),
        **kwargs
    )

pcs-0.9.164/pcs/lib/test/test_cluster_conf_facade.py

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.test.tools.pcs_unittest import TestCase

from pcs.test.tools.assertions import (
    assert_raise_library_error,
    assert_xml_equal,
)
from pcs.test.tools.misc import outdent

from lxml import etree

from pcs.common import report_codes
from pcs.lib.errors import ReportItemSeverity as severity
from pcs.lib import cluster_conf_facade as lib

class FromStringTest(TestCase):
    def test_success(self):
        facade = lib.ClusterConfFacade.from_string("")
        self.assertTrue(isinstance(facade, lib.ClusterConfFacade))
        assert_xml_equal("", etree.tostring(facade._config).decode())

    def test_syntax_error(self):
        assert_raise_library_error(
            lambda: lib.ClusterConfFacade.from_string(""),
            (
                severity.ERROR,
                report_codes.CLUSTER_CONF_LOAD_ERROR_INVALID_FORMAT,
                {}
            )
        )

    def test_invalid_document_error(self):
        assert_raise_library_error(
            lambda: lib.ClusterConfFacade.from_string("string"),
            (
                severity.ERROR,
                report_codes.CLUSTER_CONF_LOAD_ERROR_INVALID_FORMAT,
                {}
            )
        )

class GetClusterNameTest(TestCase):
    def test_success(self):
        cfg = etree.XML('')
        self.assertEqual(
            "cluster-name",
            lib.ClusterConfFacade(cfg).get_cluster_name()
        )

    def test_no_name(self):
        cfg = etree.XML('')
        self.assertEqual("", lib.ClusterConfFacade(cfg).get_cluster_name())

    def test_not_cluster_element(self):
        cfg = etree.XML('')
        self.assertEqual("", lib.ClusterConfFacade(cfg).get_cluster_name())

class GetNodesTest(TestCase):
    def assert_equal_nodelist(self, expected_nodes, real_nodelist):
real_nodes = [ {"ring0": n.ring0, "ring1": n.ring1, "name": n.name, "id": n.id} for n in real_nodelist ] self.assertEqual(expected_nodes, real_nodes) def test_success(self): config = outdent(""" """) self.assert_equal_nodelist( [ { "ring0": "node1", "ring1": "node1-altname", "name": None, "id": "1", }, { "ring0": "node2", "ring1": None, "name": None, "id": "2", }, { "ring0": "node3", "ring1": None, "name": None, "id": "3", } ], lib.ClusterConfFacade(etree.XML(config)).get_nodes() ) def test_no_nodes(self): config = "" self.assert_equal_nodelist( [], lib.ClusterConfFacade(etree.XML(config)).get_nodes() ) def test_missing_info(self): config = outdent(""" """) self.assert_equal_nodelist( [ { "ring0": None, "ring1": None, "name": None, "id": "1", }, { "ring0": "node2", "ring1": None, "name": None, "id": None, }, { "ring0": None, "ring1": None, "name": None, "id": None, } ], lib.ClusterConfFacade(etree.XML(config)).get_nodes() ) pcs-0.9.164/pcs/lib/test/test_env.py000066400000000000000000001012201326265502500171750ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase import logging from functools import partial from lxml import etree from pcs.test.tools import fixture from pcs.test.tools.assertions import ( assert_raise_library_error, assert_xml_equal, ) from pcs.test.tools.command_env import get_env_tools from pcs.test.tools.custom_mock import MockLibraryReportProcessor from pcs.test.tools.misc import get_test_resource as rc, create_patcher from pcs.test.tools.pcs_unittest import mock from pcs.lib.env import LibraryEnvironment from pcs.common import report_codes from pcs.lib.node import NodeAddresses, NodeAddressesList from pcs.lib.cluster_conf_facade import ClusterConfFacade from pcs.lib.corosync.config_facade import ConfigFacade as CorosyncConfigFacade from pcs.lib.errors import ReportItemSeverity as severity patch_env = create_patcher("pcs.lib.env") patch_env_object = partial(mock.patch.object, LibraryEnvironment) class LibraryEnvironmentTest(TestCase): def setUp(self): self.mock_logger = mock.MagicMock(logging.Logger) self.mock_reporter = MockLibraryReportProcessor() def test_logger(self): env = LibraryEnvironment(self.mock_logger, self.mock_reporter) self.assertEqual(self.mock_logger, env.logger) def test_report_processor(self): env = LibraryEnvironment(self.mock_logger, self.mock_reporter) self.assertEqual(self.mock_reporter, env.report_processor) def test_user_set(self): user = "testuser" env = LibraryEnvironment( self.mock_logger, self.mock_reporter, user_login=user ) self.assertEqual(user, env.user_login) def test_user_not_set(self): env = LibraryEnvironment(self.mock_logger, self.mock_reporter) self.assertEqual(None, env.user_login) def test_usergroups_set(self): groups = ["some", "group"] env = LibraryEnvironment( self.mock_logger, self.mock_reporter, user_groups=groups ) self.assertEqual(groups, env.user_groups) def test_usergroups_not_set(self): env = LibraryEnvironment(self.mock_logger, self.mock_reporter) self.assertEqual([], env.user_groups) @patch_env("is_cman_cluster") def test_is_cman_cluster(self, mock_is_cman): mock_is_cman.return_value = True env = LibraryEnvironment(self.mock_logger, self.mock_reporter) self.assertTrue(env.is_cman_cluster) self.assertTrue(env.is_cman_cluster) self.assertEqual(1, mock_is_cman.call_count) @patch_env("get_local_cluster_conf") def test_get_cluster_conf_live(self, mock_get_local_cluster_conf): env = LibraryEnvironment( self.mock_logger, 
self.mock_reporter, cluster_conf_data=None ) mock_get_local_cluster_conf.return_value = "cluster.conf data" self.assertEqual("cluster.conf data", env.get_cluster_conf_data()) mock_get_local_cluster_conf.assert_called_once_with() @patch_env("get_local_cluster_conf") def test_get_cluster_conf_not_live(self, mock_get_local_cluster_conf): env = LibraryEnvironment( self.mock_logger, self.mock_reporter, cluster_conf_data="data" ) self.assertEqual("data", env.get_cluster_conf_data()) self.assertEqual(0, mock_get_local_cluster_conf.call_count) @mock.patch.object( LibraryEnvironment, "get_cluster_conf_data", lambda self: "" ) def test_get_cluster_conf(self): env = LibraryEnvironment(self.mock_logger, self.mock_reporter) facade_obj = env.get_cluster_conf() self.assertTrue(isinstance(facade_obj, ClusterConfFacade)) assert_xml_equal( '', etree.tostring(facade_obj._config).decode() ) def test_is_cluster_conf_live_live(self): env = LibraryEnvironment(self.mock_logger, self.mock_reporter) self.assertTrue(env.is_cluster_conf_live) def test_is_cluster_conf_live_not_live(self): env = LibraryEnvironment( self.mock_logger, self.mock_reporter, cluster_conf_data="data" ) self.assertFalse(env.is_cluster_conf_live) @patch_env("CommandRunner") class CmdRunner(TestCase): def setUp(self): self.mock_logger = mock.MagicMock(logging.Logger) self.mock_reporter = MockLibraryReportProcessor() def test_no_options(self, mock_runner): expected_runner = mock.MagicMock() mock_runner.return_value = expected_runner env = LibraryEnvironment(self.mock_logger, self.mock_reporter) runner = env.cmd_runner() self.assertEqual(expected_runner, runner) mock_runner.assert_called_once_with( self.mock_logger, self.mock_reporter, { "LC_ALL": "C", } ) def test_user(self, mock_runner): expected_runner = mock.MagicMock() mock_runner.return_value = expected_runner user = "testuser" env = LibraryEnvironment( self.mock_logger, self.mock_reporter, user_login=user ) runner = env.cmd_runner() self.assertEqual(expected_runner, runner) mock_runner.assert_called_once_with( self.mock_logger, self.mock_reporter, { "CIB_user": user, "LC_ALL": "C", } ) @patch_env("write_tmpfile") def test_dump_cib_file(self, mock_tmpfile, mock_runner): expected_runner = mock.MagicMock() mock_runner.return_value = expected_runner mock_instance = mock.MagicMock() mock_instance.name = rc("file.tmp") mock_tmpfile.return_value = mock_instance env = LibraryEnvironment( self.mock_logger, self.mock_reporter, cib_data="" ) runner = env.cmd_runner() self.assertEqual(expected_runner, runner) mock_runner.assert_called_once_with( self.mock_logger, self.mock_reporter, { "LC_ALL": "C", "CIB_file": rc("file.tmp"), } ) mock_tmpfile.assert_called_once_with("") @patch_env_object("cmd_runner", lambda self: "runner") class EnsureValidWait(TestCase): def setUp(self): self.create_env = partial( LibraryEnvironment, mock.MagicMock(logging.Logger), MockLibraryReportProcessor() ) @property def env_live(self): return self.create_env() @property def env_fake(self): return self.create_env(cib_data="") def test_not_raises_if_waiting_false_no_matter_if_env_is_live(self): self.env_live.ensure_wait_satisfiable(False) self.env_fake.ensure_wait_satisfiable(False) def test_raises_when_is_not_live(self): env = self.env_fake assert_raise_library_error( lambda: env.ensure_wait_satisfiable(10), ( severity.ERROR, report_codes.WAIT_FOR_IDLE_NOT_LIVE_CLUSTER, {}, None ) ) @patch_env("get_valid_timeout_seconds") @patch_env("ensure_wait_for_idle_support") def test_do_checks(self, ensure_wait_for_idle_support, 
get_valid_timeout): env = self.env_live env.ensure_wait_satisfiable(10) ensure_wait_for_idle_support.assert_called_once_with(env.cmd_runner()) get_valid_timeout.assert_called_once_with(10) class PushCorosyncConfLiveBase(TestCase): def setUp(self): self.env_assistant, self.config = get_env_tools(self) self.corosync_conf_facade = mock.MagicMock(CorosyncConfigFacade) self.corosync_conf_text = "corosync conf" self.corosync_conf_facade.config.export.return_value = ( self.corosync_conf_text ) self.corosync_conf_facade.get_nodes.return_value = NodeAddressesList([ NodeAddresses("node-1"), NodeAddresses("node-2"), ]) self.corosync_conf_facade.need_stopped_cluster = False self.corosync_conf_facade.need_qdevice_reload = False self.node_labels = ["node-1", "node-2"] @mock.patch("pcs.lib.external.is_systemctl", lambda: True) class PushCorosyncConfLiveNoQdeviceTest(PushCorosyncConfLiveBase): def test_dont_need_stopped_cluster(self): (self.config .http.corosync.set_corosync_conf( self.corosync_conf_text, node_labels=self.node_labels ) .runner.systemctl.is_active("corosync") .runner.corosync.reload() ) self.env_assistant.get_env().push_corosync_conf( self.corosync_conf_facade ) self.env_assistant.assert_reports([ fixture.info(report_codes.COROSYNC_CONFIG_DISTRIBUTION_STARTED), fixture.info( report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, node="node-1", ), fixture.info( report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, node="node-2", ), fixture.info(report_codes.COROSYNC_CONFIG_RELOADED) ]) def test_dont_need_stopped_cluster_error(self): (self.config .http.corosync.set_corosync_conf( self.corosync_conf_text, communication_list=[ { "label": "node-1", }, { "label": "node-2", "response_code": 400, "output": "Failed" }, ] ) ) env = self.env_assistant.get_env() self.env_assistant.assert_raise_library_error( lambda: env.push_corosync_conf(self.corosync_conf_facade), [] ) self.env_assistant.assert_reports([ fixture.info(report_codes.COROSYNC_CONFIG_DISTRIBUTION_STARTED), fixture.info( report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, node="node-1", ), fixture.error( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, force_code=report_codes.SKIP_OFFLINE_NODES, node="node-2", command="remote/set_corosync_conf", reason="Failed", ), fixture.error( report_codes.COROSYNC_CONFIG_DISTRIBUTION_NODE_ERROR, force_code=report_codes.SKIP_OFFLINE_NODES, node="node-2", ), ]) def test_dont_need_stopped_cluster_error_skip_offline(self): (self.config .http.corosync.set_corosync_conf( self.corosync_conf_text, communication_list=[ { "label": "node-1", }, { "label": "node-2", "response_code": 400, "output": "Failed" }, ] ) .runner.systemctl.is_active("corosync") .runner.corosync.reload() ) self.env_assistant.get_env().push_corosync_conf( self.corosync_conf_facade, skip_offline_nodes=True ) self.env_assistant.assert_reports([ fixture.info(report_codes.COROSYNC_CONFIG_DISTRIBUTION_STARTED), fixture.info( report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, node="node-1", ), fixture.warn( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, node="node-2", command="remote/set_corosync_conf", reason="Failed", ), fixture.warn( report_codes.COROSYNC_CONFIG_DISTRIBUTION_NODE_ERROR, node="node-2", ), fixture.info(report_codes.COROSYNC_CONFIG_RELOADED) ]) def test_need_stopped_cluster(self): self.corosync_conf_facade.need_stopped_cluster = True (self.config .http.corosync.check_corosync_offline( node_labels=self.node_labels ) .http.corosync.set_corosync_conf( self.corosync_conf_text, node_labels=self.node_labels ) 
.runner.systemctl.is_active("corosync", is_active=False) ) self.env_assistant.get_env().push_corosync_conf( self.corosync_conf_facade ) self.env_assistant.assert_reports([ fixture.info(report_codes.COROSYNC_NOT_RUNNING_CHECK_STARTED), fixture.info( report_codes.COROSYNC_NOT_RUNNING_ON_NODE, node="node-1", ), fixture.info( report_codes.COROSYNC_NOT_RUNNING_ON_NODE, node="node-2", ), fixture.info(report_codes.COROSYNC_CONFIG_DISTRIBUTION_STARTED), fixture.info( report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, node="node-1", ), fixture.info( report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, node="node-2", ), ]) def test_need_stopped_cluster_not_stopped(self): self.corosync_conf_facade.need_stopped_cluster = True (self.config .http.corosync.check_corosync_offline( communication_list=[ { "label": node, "output": '{"corosync":true}' } for node in self.node_labels ] ) ) env = self.env_assistant.get_env() self.env_assistant.assert_raise_library_error( lambda: env.push_corosync_conf(self.corosync_conf_facade), [] ) self.env_assistant.assert_reports([ fixture.info(report_codes.COROSYNC_NOT_RUNNING_CHECK_STARTED), fixture.error( report_codes.COROSYNC_RUNNING_ON_NODE, node="node-1", ), fixture.error( report_codes.COROSYNC_RUNNING_ON_NODE, node="node-2", ), ]) def test_need_stopped_cluster_not_stopped_skip_offline(self): # If we know for sure that corosync is running, skip_offline doesn't # matter. self.corosync_conf_facade.need_stopped_cluster = True (self.config .http.corosync.check_corosync_offline( communication_list=[ dict( label="node-1", output='{"corosync":true}', ), dict( label="node-2", ), ] ) ) env = self.env_assistant.get_env() self.env_assistant.assert_raise_library_error( lambda: env.push_corosync_conf( self.corosync_conf_facade, skip_offline_nodes=True ), [] ) self.env_assistant.assert_reports([ fixture.info(report_codes.COROSYNC_NOT_RUNNING_CHECK_STARTED), fixture.error( report_codes.COROSYNC_RUNNING_ON_NODE, node="node-1", ), fixture.info( report_codes.COROSYNC_NOT_RUNNING_ON_NODE, node="node-2", ) ]) def test_need_stopped_cluster_json_error(self): self.corosync_conf_facade.need_stopped_cluster = True (self.config .http.corosync.check_corosync_offline( communication_list=[ dict( label="node-1", output="{" # not valid json ), dict( label="node-2", # The expected key (/corosync) is missing, we don't # care about version 2 status key # (/services/corosync/running) output='{"services":{"corosync":{"running":true}}}' ), ] ) ) env = self.env_assistant.get_env() self.env_assistant.assert_raise_library_error( lambda: env.push_corosync_conf(self.corosync_conf_facade), [] ) self.env_assistant.assert_reports([ fixture.info(report_codes.COROSYNC_NOT_RUNNING_CHECK_STARTED), fixture.error( report_codes.COROSYNC_NOT_RUNNING_CHECK_NODE_ERROR, force_code=report_codes.SKIP_OFFLINE_NODES, node="node-1", ), fixture.error( report_codes.COROSYNC_NOT_RUNNING_CHECK_NODE_ERROR, force_code=report_codes.SKIP_OFFLINE_NODES, node="node-2", ), ]) def test_need_stopped_cluster_comunnication_failure(self): self.corosync_conf_facade.need_stopped_cluster = True (self.config .http.corosync.check_corosync_offline( communication_list=[ dict( label="node-1", ), dict( label="node-2", response_code=401, output="""{"notauthorized":"true"}""" ), ] ) ) env = self.env_assistant.get_env() self.env_assistant.assert_raise_library_error( lambda: env.push_corosync_conf(self.corosync_conf_facade), [] ) self.env_assistant.assert_reports([ fixture.info(report_codes.COROSYNC_NOT_RUNNING_CHECK_STARTED), fixture.info( 
report_codes.COROSYNC_NOT_RUNNING_ON_NODE, node="node-1", ), fixture.error( report_codes.NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED, force_code=report_codes.SKIP_OFFLINE_NODES, node="node-2", ), fixture.error( report_codes.COROSYNC_NOT_RUNNING_CHECK_NODE_ERROR, force_code=report_codes.SKIP_OFFLINE_NODES, node="node-2", ), ]) def test_need_stopped_cluster_comunnication_failures_skip_offline(self): # If we don't know if corosync is running, skip_offline matters. self.corosync_conf_facade.need_stopped_cluster = True (self.config .http.corosync.check_corosync_offline( communication_list=[ dict( label="node-1", output="{" # not valid json ), dict( label="node-2", response_code=401, output="""{"notauthorized":"true"}""" ), ] ) .http.corosync.set_corosync_conf( self.corosync_conf_text, communication_list=[ dict( label="node-1", ), dict( label="node-2", response_code=401, output="""{"notauthorized":"true"}""", ) ] ) .runner.systemctl.is_active("corosync", is_active=False) ) self.env_assistant.get_env().push_corosync_conf( self.corosync_conf_facade, skip_offline_nodes=True ) self.env_assistant.assert_reports([ fixture.info(report_codes.COROSYNC_NOT_RUNNING_CHECK_STARTED), fixture.warn( report_codes.COROSYNC_NOT_RUNNING_CHECK_NODE_ERROR, node="node-1", ), fixture.warn( report_codes.NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED, node="node-2", reason="HTTP error: 401", command="remote/status", ), fixture.warn( report_codes.COROSYNC_NOT_RUNNING_CHECK_NODE_ERROR, node="node-2", ), fixture.info(report_codes.COROSYNC_CONFIG_DISTRIBUTION_STARTED), fixture.info( report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, node="node-1", ), fixture.warn( report_codes.NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED, node="node-2", reason="HTTP error: 401", command="remote/set_corosync_conf", ), fixture.warn( report_codes.COROSYNC_CONFIG_DISTRIBUTION_NODE_ERROR, node="node-2", ), ]) @mock.patch("pcs.lib.external.is_systemctl", lambda: True) class PushCorosyncConfLiveWithQdeviceTest(PushCorosyncConfLiveBase): def test_qdevice_reload(self): self.corosync_conf_facade.need_qdevice_reload = True (self.config .http.corosync.set_corosync_conf( self.corosync_conf_text, node_labels=self.node_labels ) .runner.systemctl.is_active("corosync", is_active=False) .http.corosync.qdevice_client_stop( node_labels=self.node_labels ) .http.corosync.qdevice_client_start( node_labels=self.node_labels ) ) self.env_assistant.get_env().push_corosync_conf( self.corosync_conf_facade ) self.env_assistant.assert_reports([ fixture.info(report_codes.COROSYNC_CONFIG_DISTRIBUTION_STARTED), fixture.info( report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, node="node-1", ), fixture.info( report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, node="node-2", ), fixture.info(report_codes.QDEVICE_CLIENT_RELOAD_STARTED), fixture.info( report_codes.SERVICE_STOP_SUCCESS, node="node-1", service="corosync-qdevice", instance=None, ), fixture.info( report_codes.SERVICE_STOP_SUCCESS, node="node-2", service="corosync-qdevice", instance=None, ), fixture.info( report_codes.SERVICE_START_SUCCESS, node="node-1", service="corosync-qdevice", instance=None, ), fixture.info( report_codes.SERVICE_START_SUCCESS, node="node-2", service="corosync-qdevice", instance=None, ), ]) def test_qdevice_reload_corosync_stopped(self): self.corosync_conf_facade.need_qdevice_reload = True (self.config .http.corosync.set_corosync_conf( self.corosync_conf_text, node_labels=self.node_labels ) .runner.systemctl.is_active("corosync", is_active=False) .http.corosync.qdevice_client_stop( node_labels=self.node_labels ) 
.http.corosync.qdevice_client_start( communication_list=[ { "label": label, "output": "corosync is not running, skipping", } for label in self.node_labels ] ) ) self.env_assistant.get_env().push_corosync_conf( self.corosync_conf_facade ) self.env_assistant.assert_reports([ fixture.info(report_codes.COROSYNC_CONFIG_DISTRIBUTION_STARTED), fixture.info( report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, node="node-1", ), fixture.info( report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, node="node-2", ), fixture.info(report_codes.QDEVICE_CLIENT_RELOAD_STARTED), fixture.info( report_codes.SERVICE_STOP_SUCCESS, node="node-1", service="corosync-qdevice", instance=None, ), fixture.info( report_codes.SERVICE_STOP_SUCCESS, node="node-2", service="corosync-qdevice", instance=None, ), fixture.info( report_codes.SERVICE_START_SKIPPED, node="node-1", service="corosync-qdevice", instance=None, reason="corosync is not running", ), fixture.info( report_codes.SERVICE_START_SKIPPED, node="node-2", service="corosync-qdevice", instance=None, reason="corosync is not running", ), ]) def test_qdevice_reload_failures(self): # This also tests that failing to stop qdevice on a node doesn't prevent # starting qdevice on the same node. self.corosync_conf_facade.need_qdevice_reload = True (self.config .http.corosync.set_corosync_conf( self.corosync_conf_text, node_labels=self.node_labels ) .runner.systemctl.is_active("corosync", is_active=False) .http.corosync.qdevice_client_stop( communication_list=[ dict( label="node-1", ), dict( label="node-2", response_code=400, output="error", ), ] ) .http.corosync.qdevice_client_start( communication_list=[ dict( label="node-1", errno=8, error_msg="failure", was_connected=False, ), dict( label="node-2", ), ] ) ) env = self.env_assistant.get_env() self.env_assistant.assert_raise_library_error( lambda: env.push_corosync_conf(self.corosync_conf_facade), [] ) self.env_assistant.assert_reports([ fixture.info(report_codes.COROSYNC_CONFIG_DISTRIBUTION_STARTED), fixture.info( report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, node="node-1", ), fixture.info( report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, node="node-2", ), fixture.info(report_codes.QDEVICE_CLIENT_RELOAD_STARTED), fixture.info( report_codes.SERVICE_STOP_SUCCESS, node="node-1", service="corosync-qdevice", instance=None, ), fixture.error( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, force_code=report_codes.SKIP_OFFLINE_NODES, node="node-2", reason="error", command="remote/qdevice_client_stop", ), fixture.error( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, force_code=report_codes.SKIP_OFFLINE_NODES, node="node-1", reason="failure", command="remote/qdevice_client_start", ), fixture.info( report_codes.SERVICE_START_SUCCESS, node="node-2", service="corosync-qdevice", instance=None, ), ]) def test_qdevice_reload_failures_skip_offline(self): self.corosync_conf_facade.need_qdevice_reload = True (self.config .http.corosync.set_corosync_conf( self.corosync_conf_text, communication_list=[ dict( label="node-1", ), dict( label="node-2", errno=8, error_msg="failure", was_connected=False, ), ] ) .runner.systemctl.is_active("corosync", is_active=False) .http.corosync.qdevice_client_stop( communication_list=[ dict( label="node-1", ), dict( label="node-2", response_code=400, output="error", ), ] ) .http.corosync.qdevice_client_start( communication_list=[ dict( label="node-1", errno=8, error_msg="failure", was_connected=False, ), dict( label="node-2", ), ] ) ) env = self.env_assistant.get_env() env.push_corosync_conf( 
self.corosync_conf_facade, skip_offline_nodes=True ) self.env_assistant.assert_reports([ fixture.info(report_codes.COROSYNC_CONFIG_DISTRIBUTION_STARTED), fixture.info( report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, node="node-1", ), fixture.warn( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, node="node-2", reason="failure", command="remote/set_corosync_conf", ), fixture.warn( report_codes.COROSYNC_CONFIG_DISTRIBUTION_NODE_ERROR, node="node-2", ), fixture.info(report_codes.QDEVICE_CLIENT_RELOAD_STARTED), fixture.info( report_codes.SERVICE_STOP_SUCCESS, node="node-1", service="corosync-qdevice", instance=None, ), fixture.warn( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, node="node-2", reason="error", command="remote/qdevice_client_stop", ), fixture.warn( report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT, node="node-1", reason="failure", command="remote/qdevice_client_start", ), fixture.info( report_codes.SERVICE_START_SUCCESS, node="node-2", service="corosync-qdevice", instance=None, ), ]) class PushCorosyncConfFile(TestCase): def setUp(self): self.env_assistant, self.config = get_env_tools(test_case=self) self.config.env.set_corosync_conf_data("totem {\n version: 2\n}\n") def test_success(self): new_corosync_conf_data = "totem {\n version: 3\n}\n" self.config.env.push_corosync_conf( corosync_conf_text=new_corosync_conf_data ) env = self.env_assistant.get_env() env.push_corosync_conf( CorosyncConfigFacade.from_string(new_corosync_conf_data) ) class GetCorosyncConfFile(TestCase): def setUp(self): self.corosync_conf_data = "totem {\n version: 2\n}\n" self.env_assistant, self.config = get_env_tools(test_case=self) self.config.env.set_corosync_conf_data(self.corosync_conf_data) def test_success(self): env = self.env_assistant.get_env() self.assertFalse(env.is_corosync_conf_live) self.assertEqual(self.corosync_conf_data, env.get_corosync_conf_data()) self.assertEqual( self.corosync_conf_data, env.get_corosync_conf().config.export() ) class GetCorosyncConfLive(TestCase): def setUp(self): self.env_assistant, self.config = get_env_tools(self) def test_success(self): corosync_conf_data = "totem {\n version: 2\n}\n" self.config.corosync_conf.load_content(corosync_conf_data) env = self.env_assistant.get_env() self.assertTrue(env.is_corosync_conf_live) self.assertEqual( corosync_conf_data, env.get_corosync_conf().config.export() ) pcs-0.9.164/pcs/lib/test/test_env_cib.py000066400000000000000000000401331326265502500200170ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from functools import partial from lxml import etree from pcs.common import report_codes from pcs.common.tools import Version from pcs.lib.env import LibraryEnvironment from pcs.test.tools import fixture from pcs.test.tools.assertions import assert_xml_equal from pcs.test.tools.command_env import get_env_tools from pcs.test.tools.misc import ( get_test_resource as rc, create_setup_patch_mixin, ) from pcs.test.tools.pcs_unittest import TestCase, mock from pcs.test.tools.xml import etree_to_str def mock_tmpfile(filename): mock_file = mock.MagicMock() mock_file.name = rc(filename) return mock_file SetupPatchMixin = create_setup_patch_mixin( partial(mock.patch.object, LibraryEnvironment) ) class ManageCibAssertionMixin(object): def assert_raises_cib_error(self, callable_obj, message): with self.assertRaises(AssertionError) as context_manager: callable_obj() self.assertEqual(str(context_manager.exception), message) def assert_raises_cib_not_loaded(self, callable_obj): 
self.assert_raises_cib_error( callable_obj, "CIB has not been loaded" ) def assert_raises_cib_already_loaded(self, callable_obj): self.assert_raises_cib_error( callable_obj, "CIB has already been loaded" ) def assert_raises_cib_loaded_cannot_custom(self, callable_obj): self.assert_raises_cib_error( callable_obj, "CIB has been loaded, cannot push custom CIB" ) class IsCibLive(TestCase): def test_is_live_when_no_cib_data_specified(self): env_assist, _ = get_env_tools(test_case=self) self.assertTrue(env_assist.get_env().is_cib_live) def test_is_not_live_when_cib_data_specified(self): env_assist, config = get_env_tools(test_case=self) config.env.set_cib_data("") self.assertFalse(env_assist.get_env().is_cib_live) class UpgradeCib(TestCase): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_get_and_push_cib_version_upgrade_needed(self): (self.config .runner.cib.load(name="load_cib_old", filename="cib-empty-2.6.xml") .runner.cib.upgrade() .runner.cib.load(filename="cib-empty-2.8.xml") ) env = self.env_assist.get_env() env.get_cib(Version(2, 8, 0)) self.env_assist.assert_reports( [fixture.info(report_codes.CIB_UPGRADE_SUCCESSFUL)] ) def test_get_and_push_cib_version_upgrade_not_needed(self): self.config.runner.cib.load(filename="cib-empty-2.6.xml") env = self.env_assist.get_env() env.get_cib(Version(2, 5, 0)) class GetCib(TestCase, ManageCibAssertionMixin): def setUp(self): self.env_assist, self.config = get_env_tools(test_case=self) def test_raise_library_error_when_cibadmin_failed(self): stderr = "cibadmin: Connection to local file failed..." (self.config #Value of cib_data is unimportant here. This content is only put #into tempfile when the runner is not mocked. And content is then #loaded from tempfile by `cibadmin --local --query`. Runner is #mocked in tests so the value of cib_data is not in the fact used. .env.set_cib_data("whatever") .runner.cib.load(returncode=203, stderr=stderr) ) self.env_assist.assert_raise_library_error( self.env_assist.get_env().get_cib, [ fixture.error(report_codes.CIB_LOAD_ERROR, reason=stderr) ], expected_in_processor=False ) def test_returns_cib_from_cib_data(self): cib_filename = "cib-empty.xml" (self.config #Value of cib_data is unimportant here. See details in sibling test. 
.env.set_cib_data("whatever") .runner.cib.load(filename=cib_filename) ) assert_xml_equal( etree_to_str(self.env_assist.get_env().get_cib()), open(rc(cib_filename)).read() ) def test_get_and_property(self): self.config.runner.cib.load() env = self.env_assist.get_env() self.assertEqual(env.get_cib(), env.cib) def test_property_without_get(self): env = self.env_assist.get_env() # need to use lambda because env.cib is a property self.assert_raises_cib_not_loaded(lambda: env.cib) def test_double_get(self): self.config.runner.cib.load() env = self.env_assist.get_env() env.get_cib() self.assert_raises_cib_already_loaded(env.get_cib) class PushLoadedCib(TestCase, ManageCibAssertionMixin): wait_timeout = 10 def setUp(self): tmpfile_patcher = mock.patch("pcs.lib.pacemaker.live.write_tmpfile") self.addCleanup(tmpfile_patcher.stop) self.mock_write_tmpfile = tmpfile_patcher.start() self.tmpfile_old = mock_tmpfile("old.cib") self.tmpfile_new = mock_tmpfile("new.cib") self.mock_write_tmpfile.side_effect = [ self.tmpfile_old, self.tmpfile_new ] self.cib_can_diff = "cib-empty-2.0.xml" self.cib_cannot_diff = "cib-empty-1.2.xml" self.env_assist, self.config = get_env_tools(test_case=self) def config_load_and_push_diff(self): (self.config .runner.cib.load(filename=self.cib_can_diff) .runner.cib.diff(self.tmpfile_old.name, self.tmpfile_new.name) .runner.cib.push_diff() ) def config_load_and_push(self): (self.config .runner.cib.load(filename=self.cib_cannot_diff) .runner.cib.push() ) def push_reports(self, cib_old=None, cib_new=None): # No test changes the CIB between load and push. The point is to test # loading and pushing, not editing the CIB. loaded_cib = self.config.calls.get("runner.cib.load").stdout return [ fixture.debug( report_codes.TMP_FILE_WRITE, file_path=self.tmpfile_old.name, content=(cib_old if cib_old is not None else loaded_cib) ), fixture.debug( report_codes.TMP_FILE_WRITE, file_path=self.tmpfile_new.name, content=(cib_new if cib_new is not None else loaded_cib).strip() ), ] def push_full_forced_reports(self, version): return [ fixture.warn( report_codes.CIB_PUSH_FORCED_FULL_DUE_TO_CRM_FEATURE_SET, current_set=version, required_set="3.0.9" ) ] def test_get_and_push(self): self.config_load_and_push_diff() env = self.env_assist.get_env() env.get_cib() env.push_cib() self.env_assist.assert_reports(self.push_reports()) def test_get_and_push_cannot_diff(self): self.config_load_and_push() env = self.env_assist.get_env() env.get_cib() env.push_cib() self.env_assist.assert_reports( self.push_full_forced_reports("3.0.8") ) def test_modified_cib_features_do_not_matter(self): self.config_load_and_push_diff() env = self.env_assist.get_env() cib = env.get_cib() cib.set("crm_feature_set", "3.0.8") env.push_cib() self.env_assist.assert_reports(self.push_reports( cib_new=self.config.calls.get("runner.cib.load").stdout.replace( "3.0.9", "3.0.8" ) )) def test_push_no_features_goes_with_full(self): (self.config .runner.cib.load_content("", name="runner.cib.load_content") .runner.cib.push(load_key="runner.cib.load_content") ) env = self.env_assist.get_env() env.get_cib() env.push_cib() self.env_assist.assert_reports( self.push_full_forced_reports("0.0.0") ) def test_can_get_after_push(self): self.config_load_and_push_diff() self.config.runner.cib.load( name="load_cib_2", filename=self.cib_can_diff ) env = self.env_assist.get_env() env.get_cib() env.push_cib() # need to use lambda because env.cib is a property self.assert_raises_cib_not_loaded(lambda: env.cib) env.get_cib() 
        self.env_assist.assert_reports(self.push_reports())

    def test_can_get_after_push_cannot_diff(self):
        self.config_load_and_push()
        self.config.runner.cib.load(
            name="load_cib_2", filename=self.cib_cannot_diff
        )
        env = self.env_assist.get_env()
        env.get_cib()
        env.push_cib()
        # need to use lambda because env.cib is a property
        self.assert_raises_cib_not_loaded(lambda: env.cib)
        env.get_cib()
        self.env_assist.assert_reports(
            self.push_full_forced_reports("3.0.8")
        )

    def test_not_loaded(self):
        env = self.env_assist.get_env()
        self.assert_raises_cib_not_loaded(env.push_cib)

    def test_tmpfile_fails(self):
        self.config.runner.cib.load(filename=self.cib_can_diff)
        self.mock_write_tmpfile.side_effect = EnvironmentError("test error")
        env = self.env_assist.get_env()
        env.get_cib()
        self.env_assist.assert_raise_library_error(
            env.push_cib,
            [
                fixture.error(
                    report_codes.CIB_SAVE_TMP_ERROR,
                    reason="test error",
                )
            ],
            expected_in_processor=False
        )

    def test_diff_fails(self):
        (self.config
            .runner.cib.load(filename=self.cib_can_diff)
            .runner.cib.diff(
                self.tmpfile_old.name,
                self.tmpfile_new.name,
                stderr="invalid cib",
                returncode=1
            )
        )
        env = self.env_assist.get_env()
        env.get_cib()
        self.env_assist.assert_raise_library_error(
            env.push_cib,
            [
                fixture.error(
                    report_codes.CIB_DIFF_ERROR,
                    reason="invalid cib",
                )
            ],
            expected_in_processor=False
        )
        self.env_assist.assert_reports(self.push_reports())

    def test_push_diff_fails(self):
        (self.config
            .runner.cib.load(filename=self.cib_can_diff)
            .runner.cib.diff(self.tmpfile_old.name, self.tmpfile_new.name)
            .runner.cib.push_diff(stderr="invalid cib", returncode=1)
        )
        env = self.env_assist.get_env()
        env.get_cib()
        self.env_assist.assert_raise_library_error(
            env.push_cib,
            [
                fixture.error(
                    report_codes.CIB_PUSH_ERROR,
                    reason="invalid cib",
                )
            ],
            expected_in_processor=False
        )
        self.env_assist.assert_reports(self.push_reports())

    def test_push_fails(self):
        (self.config
            .runner.cib.load(filename=self.cib_cannot_diff)
            .runner.cib.push(stderr="invalid cib", returncode=1)
        )
        env = self.env_assist.get_env()
        env.get_cib()
        self.env_assist.assert_raise_library_error(
            env.push_cib,
            [
                fixture.error(
                    report_codes.CIB_PUSH_ERROR,
                    reason="invalid cib",
                )
            ],
            expected_in_processor=False
        )
        self.env_assist.assert_reports(
            self.push_full_forced_reports("3.0.8")
        )

    def test_wait(self):
        (self.config
            .runner.cib.load(filename=self.cib_can_diff)
            .runner.pcmk.can_wait()
            .runner.cib.diff(self.tmpfile_old.name, self.tmpfile_new.name)
            .runner.cib.push_diff()
            .runner.pcmk.wait(timeout=self.wait_timeout)
        )
        env = self.env_assist.get_env()
        env.get_cib()
        env.push_cib(wait=self.wait_timeout)
        self.env_assist.assert_reports(self.push_reports())

    def test_wait_cannot_diff(self):
        (self.config
            .runner.cib.load(filename=self.cib_cannot_diff)
            .runner.pcmk.can_wait()
            .runner.cib.push()
            .runner.pcmk.wait(timeout=self.wait_timeout)
        )
        env = self.env_assist.get_env()
        env.get_cib()
        env.push_cib(wait=self.wait_timeout)
        self.env_assist.assert_reports(
            self.push_full_forced_reports("3.0.8")
        )

    def test_wait_not_supported(self):
        (self.config
            .runner.cib.load(filename=self.cib_can_diff)
            .runner.pcmk.can_wait(stdout="cannot wait")
        )
        env = self.env_assist.get_env()
        env.get_cib()
        self.env_assist.assert_raise_library_error(
            lambda: env.push_cib(wait=self.wait_timeout),
            [
                fixture.error(report_codes.WAIT_FOR_IDLE_NOT_SUPPORTED),
            ],
            expected_in_processor=False
        )

    def test_wait_raises_on_invalid_value(self):
        (self.config
            .runner.cib.load(filename=self.cib_can_diff)
            .runner.pcmk.can_wait()
        )
        env = self.env_assist.get_env()
        env.get_cib()
        self.env_assist.assert_raise_library_error(
            lambda: env.push_cib(wait="abc"),
            [
                fixture.error(
                    report_codes.INVALID_TIMEOUT_VALUE,
                    timeout="abc"
                ),
            ],
            expected_in_processor=False
        )

class PushCustomCib(TestCase, ManageCibAssertionMixin):
    # The original XML literal was stripped of its markup in this dump; any
    # valid element works here, it only has to parse and round-trip.
    custom_cib = "<new_cib/>"
    wait_timeout = 10

    def setUp(self):
        self.env_assist, self.config = get_env_tools(test_case=self)

    def test_push_without_get(self):
        self.config.runner.cib.push_independent(self.custom_cib)
        self.env_assist.get_env().push_cib(etree.XML(self.custom_cib))

    def test_push_after_get(self):
        self.config.runner.cib.load()
        env = self.env_assist.get_env()
        env.get_cib()
        self.assert_raises_cib_loaded_cannot_custom(
            partial(env.push_cib, etree.XML(self.custom_cib))
        )

    def test_wait(self):
        (self.config
            .runner.pcmk.can_wait()
            .runner.cib.push_independent(self.custom_cib)
            .runner.pcmk.wait(timeout=self.wait_timeout)
        )
        env = self.env_assist.get_env()
        env.push_cib(
            etree.XML(self.custom_cib),
            wait=self.wait_timeout
        )

    def test_wait_not_supported(self):
        self.config.runner.pcmk.can_wait(stdout="cannot wait")
        env = self.env_assist.get_env()
        self.env_assist.assert_raise_library_error(
            lambda: env.push_cib(
                etree.XML(self.custom_cib),
                wait=self.wait_timeout
            ),
            [
                fixture.error(report_codes.WAIT_FOR_IDLE_NOT_SUPPORTED),
            ],
            expected_in_processor=False
        )

    def test_wait_raises_on_invalid_value(self):
        self.config.runner.pcmk.can_wait()
        env = self.env_assist.get_env()
        self.env_assist.assert_raise_library_error(
            lambda: env.push_cib(etree.XML(self.custom_cib), wait="abc"),
            [
                fixture.error(
                    report_codes.INVALID_TIMEOUT_VALUE,
                    timeout="abc"
                ),
            ],
            expected_in_processor=False
        )

class PushCibMockedWithWait(TestCase):
    def setUp(self):
        self.env_assist, self.config = get_env_tools(test_case=self)

    def test_wait_not_supported_for_mocked_cib(self):
        (self.config
            .env.set_cib_data("")
            .runner.cib.load()
        )
        env = self.env_assist.get_env()
        env.get_cib()
        self.env_assist.assert_raise_library_error(
            lambda: env.push_cib(wait=10),
            [
                fixture.error(report_codes.WAIT_FOR_IDLE_NOT_LIVE_CLUSTER),
            ],
            expected_in_processor=False
        )

---- pcs-0.9.164/pcs/lib/test/test_env_file.py ----

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.test.tools.pcs_unittest import TestCase

from pcs.common import report_codes
from pcs.lib import env_file
from pcs.lib.errors import ReportItemSeverity as severities
from pcs.test.tools.misc import create_patcher
from pcs.test.tools.assertions import (
    assert_raise_library_error,
    assert_report_item_list_equal
)
from pcs.test.tools.custom_mock import MockLibraryReportProcessor
from pcs.test.tools.pcs_unittest import mock

patch_env_file = create_patcher(env_file)

FILE_PATH = "/path/to.file"
MISSING_PATH = "/no/existing/file.path"
CONF_PATH = "/etc/booth/some-name.conf"

class GhostFileInit(TestCase):
    def test_is_not_binary_default(self):
        ghost_file = env_file.GhostFile("some role", content=None)
        self.assertFalse(ghost_file.export()["is_binary"])

    def test_accepts_is_binary_attribute(self):
        ghost_file = env_file.GhostFile(
            "some role", content=None, is_binary=True
        )
        self.assertTrue(ghost_file.export()["is_binary"])

class GhostFileReadTest(TestCase):
    def test_raises_when_trying_read_nonexistent_file(self):
        assert_raise_library_error(
            lambda: env_file.GhostFile("some role", content=None).read(),
            (
                severities.ERROR,
                report_codes.FILE_DOES_NOT_EXIST,
                {
                    "file_role": "some role",
                }
            ),
        )

class GhostFileExists(TestCase):
    def test_return_true_if_file_exists(self):
        self.assertTrue(env_file.GhostFile("some_role", "any content").exists)

    def test_return_false_if_file_does_not_exist(self):
        self.assertFalse(env_file.GhostFile("some_role").exists)

    def test_return_true_after_write(self):
        ghost_file = env_file.GhostFile("some_role")
        ghost_file.write("any content")
        self.assertTrue(ghost_file.exists)

class RealFileExists(TestCase):
    @patch_env_file("os.path.exists", return_value=True)
    def test_return_true_if_file_exists(self, exists):
        self.assertTrue(env_file.RealFile("some role", FILE_PATH).exists)

    @patch_env_file("os.path.exists", return_value=False)
    def test_return_false_if_file_does_not_exist(self, exists):
        self.assertFalse(env_file.RealFile("some role", FILE_PATH).exists)

@patch_env_file("os.path.exists", return_value=True)
class RealFileAssertNoConflictWithExistingTest(TestCase):
    def check(self, report_processor, can_overwrite_existing=False):
        real_file = env_file.RealFile("some role", CONF_PATH)
        real_file.assert_no_conflict_with_existing(
            report_processor, can_overwrite_existing
        )

    def test_success_when_config_not_exists(self, mock_exists):
        mock_exists.return_value = False
        report_processor = MockLibraryReportProcessor()
        self.check(report_processor)
        assert_report_item_list_equal(report_processor.report_item_list, [])

    def test_raises_when_config_exists_and_overwrite_not_allowed(self, mock_ex):
        assert_raise_library_error(
            lambda: self.check(MockLibraryReportProcessor()),
            (
                severities.ERROR,
                report_codes.FILE_ALREADY_EXISTS,
                {
                    "file_path": CONF_PATH
                },
                report_codes.FORCE_FILE_OVERWRITE,
            ),
        )

    def test_warn_when_config_exists_and_overwrite_allowed(self, mock_exists):
        report_processor = MockLibraryReportProcessor()
        self.check(report_processor, can_overwrite_existing=True)
        assert_report_item_list_equal(report_processor.report_item_list, [(
            severities.WARNING,
            report_codes.FILE_ALREADY_EXISTS,
            {
                "file_path": CONF_PATH
            },
        )])

class RealFileWriteTest(TestCase):
    def test_success_write_content_to_path(self):
        mock_open = mock.mock_open()
        mock_file_operation = mock.Mock()
        with patch_env_file("open", mock_open, create=True):
            env_file.RealFile("some role", CONF_PATH).write(
                "config content", file_operation=mock_file_operation
            )
            mock_open.assert_called_once_with(CONF_PATH, "w")
            mock_open().write.assert_called_once_with("config content")
            mock_file_operation.assert_called_once_with(CONF_PATH)

    def test_success_binary(self):
        mock_open = mock.mock_open()
        mock_file_operation = mock.Mock()
        with patch_env_file("open", mock_open, create=True):
            env_file.RealFile("some role", CONF_PATH, is_binary=True).write(
                "config content".encode("utf-8"),
                file_operation=mock_file_operation,
            )
            mock_open.assert_called_once_with(CONF_PATH, "wb")
            mock_open().write.assert_called_once_with(
                "config content".encode("utf-8")
            )
            mock_file_operation.assert_called_once_with(CONF_PATH)

    def test_raises_when_could_not_write(self):
        assert_raise_library_error(
            lambda:
            env_file.RealFile("some role", MISSING_PATH).write(["content"]),
            (
                severities.ERROR,
                report_codes.FILE_IO_ERROR,
                {
                    "reason":
                        "No such file or directory: '{0}'".format(MISSING_PATH),
                }
            )
        )

class RealFileReadTest(TestCase):
    def assert_read_in_correct_mode(self, real_file, mode):
        mock_open = mock.mock_open()
        with patch_env_file("open", mock_open, create=True):
            mock_open().read.return_value = "test booth\nconfig"
            self.assertEqual("test booth\nconfig", real_file.read())
        mock_open.assert_has_calls([mock.call(FILE_PATH, mode)])

    def test_success_read_content_from_file(self):
        self.assert_read_in_correct_mode(
            env_file.RealFile("some role", FILE_PATH, is_binary=False),
mode="r" ) def test_success_read_content_from_binary_file(self): self.assert_read_in_correct_mode( env_file.RealFile("some role", FILE_PATH, is_binary=True), mode="rb" ) def test_raises_when_could_not_read(self): assert_raise_library_error( lambda: env_file.RealFile("some role", MISSING_PATH).read(), ( severities.ERROR, report_codes.FILE_IO_ERROR, { "reason": "No such file or directory: '{0}'".format(MISSING_PATH) , } ) ) class RealFileRemoveTest(TestCase): @patch_env_file("os.remove") @patch_env_file("os.path.exists", return_value=True) def test_success_remove_file(self, _, mock_remove): env_file.RealFile("some role", FILE_PATH).remove() mock_remove.assert_called_once_with(FILE_PATH) @patch_env_file( "os.remove", side_effect=EnvironmentError(1, "mock remove failed", FILE_PATH) ) @patch_env_file("os.path.exists", return_value=True) def test_raise_library_error_when_remove_failed(self, _, dummy): assert_raise_library_error( lambda: env_file.RealFile("some role", FILE_PATH).remove(), ( severities.ERROR, report_codes.FILE_IO_ERROR, { 'reason': "mock remove failed: '/path/to.file'", 'file_role': 'some role', 'file_path': '/path/to.file' } ) ) @patch_env_file("os.path.exists", return_value=False) def test_existence_is_required(self, _): assert_raise_library_error( lambda: env_file.RealFile("some role", FILE_PATH).remove(), ( severities.ERROR, report_codes.FILE_IO_ERROR, { 'reason': "File does not exist", 'file_role': 'some role', 'file_path': '/path/to.file' } ) ) @patch_env_file("os.path.exists", return_value=False) def test_noexistent_can_be_silenced(self, _): env_file.RealFile("some role", FILE_PATH).remove( silence_no_existence=True ) pcs-0.9.164/pcs/lib/test/test_errors.py000066400000000000000000000045151326265502500177320ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from pcs.test.tools.pcs_unittest import TestCase from pcs.lib import errors class LibraryEnvErrorTest(TestCase): def test_can_sign_solved_reports(self): e = errors.LibraryEnvError("first", "second", "third") for report in e.args: if report == "second": e.sign_processed(report) self.assertEqual(["first", "third"], e.unprocessed) class ReportListAnalyzerSelectSeverities(TestCase): def setUp(self): self.severities = [ errors.ReportItemSeverity.WARNING, errors.ReportItemSeverity.INFO, errors.ReportItemSeverity.DEBUG, ] def assert_select_reports(self, all_reports, expected_errors): self.assertEqual( expected_errors, errors.ReportListAnalyzer(all_reports) .reports_with_severities(self.severities) ) def test_returns_empty_on_no_reports(self): self.assert_select_reports([], []) def test_returns_empty_on_reports_with_other_severities(self): self.assert_select_reports([errors.ReportItem.error("ERR")], []) def test_returns_selection_of_desired_severities(self): err = errors.ReportItem.error("ERR") warn = errors.ReportItem.warning("WARN") info = errors.ReportItem.info("INFO") debug = errors.ReportItem.debug("DEBUG") self.assert_select_reports( [ err, warn, info, debug, ], [ warn, info, debug, ] ) class ReportListAnalyzerErrorList(TestCase): def assert_select_reports(self, all_reports, expected_errors): self.assertEqual( expected_errors, errors.ReportListAnalyzer(all_reports).error_list ) def test_returns_empty_on_no_reports(self): self.assert_select_reports([], []) def test_returns_empty_on_no_errors(self): self.assert_select_reports([errors.ReportItem.warning("WARN")], []) def test_returns_only_errors_on_mixed_content(self): err = errors.ReportItem.error("ERR") 
        self.assert_select_reports(
            [errors.ReportItem.warning("WARN"), err],
            [err]
        )

---- pcs-0.9.164/pcs/lib/test/test_node_communication.py ----

from __future__ import (
    absolute_import,
    division,
    print_function,
)

import io
import logging

from pcs.test.tools.assertions import assert_report_item_equal
from pcs.test.tools.custom_mock import (
    MockCurl,
    MockCurlSimple,
    MockLibraryReportProcessor,
)
from pcs.test.tools.misc import outdent
from pcs.test.tools.pcs_unittest import (
    mock,
    TestCase,
)
from pcs.common import (
    pcs_pycurl as pycurl,
    report_codes,
)
from pcs.common.node_communicator import (
    Request,
    RequestData,
    RequestTarget,
    Response,
)
from pcs.lib.errors import ReportItemSeverity as severity
import pcs.lib.node_communication as lib

class ResponseToReportItemTest(TestCase):
    def fixture_response_connected(self, response_code):
        handle = MockCurl({pycurl.RESPONSE_CODE: response_code})
        handle.request_obj = Request(
            RequestTarget(self.host), RequestData(self.request)
        )
        handle.output_buffer = io.BytesIO()
        handle.output_buffer.write(self.data)
        return Response.connection_successful(handle)

    def fixture_response_not_connected(self, errno, error_msg):
        handle = MockCurl()
        handle.request_obj = Request(
            RequestTarget(self.host), RequestData(self.request)
        )
        return Response.connection_failure(handle, errno, error_msg)

    def setUp(self):
        self.host = "host"
        self.request = "request"
        self.data = b"data"

    def test_code_200(self):
        self.assertIsNone(
            lib.response_to_report_item(self.fixture_response_connected(200))
        )

    def test_code_400(self):
        assert_report_item_equal(
            lib.response_to_report_item(self.fixture_response_connected(400)),
            (
                severity.ERROR,
                report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL,
                {
                    "node": self.host,
                    "command": self.request,
                    "reason": self.data.decode("utf-8")
                },
                None
            )
        )

    def test_code_401(self):
        assert_report_item_equal(
            lib.response_to_report_item(self.fixture_response_connected(401)),
            (
                severity.ERROR,
                report_codes.NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED,
                {
                    "node": self.host,
                    "command": self.request,
                    "reason": "HTTP error: 401"
                },
                None
            )
        )

    def test_code_403(self):
        assert_report_item_equal(
            lib.response_to_report_item(self.fixture_response_connected(403)),
            (
                severity.ERROR,
                report_codes.NODE_COMMUNICATION_ERROR_PERMISSION_DENIED,
                {
                    "node": self.host,
                    "command": self.request,
                    "reason": "HTTP error: 403"
                },
                None
            )
        )

    def test_code_404(self):
        assert_report_item_equal(
            lib.response_to_report_item(self.fixture_response_connected(404)),
            (
                severity.ERROR,
                report_codes.NODE_COMMUNICATION_ERROR_UNSUPPORTED_COMMAND,
                {
                    "node": self.host,
                    "command": self.request,
                    "reason": "HTTP error: 404"
                },
                None
            )
        )

    def test_code_other(self):
        assert_report_item_equal(
            lib.response_to_report_item(self.fixture_response_connected(500)),
            (
                severity.ERROR,
                report_codes.NODE_COMMUNICATION_ERROR,
                {
                    "node": self.host,
                    "command": self.request,
                    "reason": "HTTP error: 500"
                },
                None
            )
        )

    def test_timed_out(self):
        response = self.fixture_response_not_connected(
            pycurl.E_OPERATION_TIMEDOUT, "err"
        )
        assert_report_item_equal(
            lib.response_to_report_item(response),
            (
                severity.ERROR,
                report_codes.NODE_COMMUNICATION_ERROR_TIMED_OUT,
                {
                    "node": self.host,
                    "command": self.request,
                    "reason": "err"
                },
                None
            )
        )

    def test_timedouted(self):
        response = self.fixture_response_not_connected(
            pycurl.E_OPERATION_TIMEOUTED, "err"
        )
        assert_report_item_equal(
            lib.response_to_report_item(response),
            (
                severity.ERROR,
                report_codes.NODE_COMMUNICATION_ERROR_TIMED_OUT,
                {
                    "node": self.host,
                    "command": self.request,
                    "reason": "err"
                },
                None
            )
        )

    def test_unable_to_connect(self):
        response = self.fixture_response_not_connected(
            pycurl.E_SEND_ERROR, "err"
        )
        assert_report_item_equal(
            lib.response_to_report_item(response),
            (
                severity.ERROR,
                report_codes.NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT,
                {
                    "node": self.host,
                    "command": self.request,
                    "reason": "err"
                },
                None
            )
        )

class IsProxySetTest(TestCase):
    def test_without_proxy(self):
        self.assertFalse(lib.is_proxy_set({
            "var1": "value",
            "var2": "val",
        }))

    def test_multiple(self):
        self.assertTrue(lib.is_proxy_set({
            "var1": "val",
            "https_proxy": "test.proxy",
            "var2": "val",
            "all_proxy": "test2.proxy",
            "var3": "val",
        }))

    def test_empty_string(self):
        self.assertFalse(lib.is_proxy_set({
            "all_proxy": "",
        }))

    def test_http_proxy(self):
        self.assertFalse(lib.is_proxy_set({
            "http_proxy": "test.proxy",
        }))

    def test_HTTP_PROXY(self):
        self.assertFalse(lib.is_proxy_set({
            "HTTP_PROXY": "test.proxy",
        }))

    def test_https_proxy(self):
        self.assertTrue(lib.is_proxy_set({
            "https_proxy": "test.proxy",
        }))

    def test_HTTPS_PROXY(self):
        self.assertTrue(lib.is_proxy_set({
            "HTTPS_PROXY": "test.proxy",
        }))

    def test_all_proxy(self):
        self.assertTrue(lib.is_proxy_set({
            "all_proxy": "test.proxy",
        }))

    def test_ALL_PROXY(self):
        self.assertTrue(lib.is_proxy_set({
            "ALL_PROXY": "test.proxy",
        }))

    def test_no_proxy(self):
        self.assertTrue(lib.is_proxy_set({
            "no_proxy": "*",
            "all_proxy": "test.proxy",
        }))

def fixture_logger_call_send(url, data):
    send_msg = "Sending HTTP Request to: {url}"
    if data:
        send_msg += "\n--Debug Input Start--\n{data}\n--Debug Input End--"
    return mock.call.debug(send_msg.format(url=url, data=data))

def fixture_logger_call_debug_data(url, data):
    send_msg = outdent("""\
        Communication debug info for calling: {url}
        --Debug Communication Info Start--
        {data}
        --Debug Communication Info End--"""
    )
    return mock.call.debug(send_msg.format(url=url, data=data))

def fixture_logger_call_connected(url, response_code, response_data):
    result_msg = (
        "Finished calling: {url}\nResponse Code: {code}"
        + "\n--Debug Response Start--\n{response}\n--Debug Response End--"
    )
    return mock.call.debug(result_msg.format(
        url=url, code=response_code, response=response_data
    ))

def fixture_logger_call_not_connected(node, reason):
    msg = "Unable to connect to {node} ({reason})"
    return mock.call.debug(msg.format(node=node, reason=reason))

def fixture_logger_call_proxy_set():
    return mock.call.warning("Proxy is set")

def fixture_logger_calls_on_success(
    url, response_code, response_data, debug_data
):
    return [
        fixture_logger_call_connected(url, response_code, response_data),
        fixture_logger_call_debug_data(url, debug_data),
    ]

def fixture_report_item_list_send(url, data):
    return [(
        severity.DEBUG,
        report_codes.NODE_COMMUNICATION_STARTED,
        {
            "target": url,
            "data": data,
        }
    )]

def fixture_report_item_list_debug(url, data):
    return [(
        severity.DEBUG,
        report_codes.NODE_COMMUNICATION_DEBUG_INFO,
        {
            "target": url,
            "data": data,
        }
    )]

def fixture_report_item_list_connected(url, response_code, response_data):
    return [(
        severity.DEBUG,
        report_codes.NODE_COMMUNICATION_FINISHED,
        {
            "target": url,
            "response_code": response_code,
            "response_data": response_data,
        }
    )]

def fixture_report_item_list_not_connected(node, reason):
    return [(
        severity.DEBUG,
        report_codes.NODE_COMMUNICATION_NOT_CONNECTED,
        {
            "node": node,
            "reason": reason,
        },
        None
    )]

def fixture_report_item_list_proxy_set(node, address):
    return [(
        severity.WARNING,
        report_codes.NODE_COMMUNICATION_PROXY_IS_SET,
        {
            "node": node,
            "address": address,
        },
        None
    )]

def fixture_report_item_list_on_success(
    url, response_code, response_data, debug_data
):
    return (
        fixture_report_item_list_connected(url, response_code, response_data)
        +
        fixture_report_item_list_debug(url, debug_data)
    )

def fixture_request():
    return Request(RequestTarget("host"), RequestData("action"))

class CommunicatorLoggerTest(TestCase):
    def setUp(self):
        self.logger = mock.MagicMock(spec_set=logging.Logger)
        self.reporter = MockLibraryReportProcessor()
        self.com_logger = lib.LibCommunicatorLogger(self.logger, self.reporter)

    def test_log_request_start(self):
        request = fixture_request()
        self.com_logger.log_request_start(request)
        self.reporter.assert_reports(
            fixture_report_item_list_send(request.url, request.data)
        )
        self.assertEqual(
            [fixture_logger_call_send(request.url, request.data)],
            self.logger.mock_calls
        )

    def test_log_response_connected(self):
        expected_code = 200
        expected_data = "data"
        expected_debug_data = "* text\n>> data out\n"
        response = Response.connection_successful(
            MockCurlSimple(
                info={pycurl.RESPONSE_CODE: expected_code},
                output=expected_data.encode("utf-8"),
                debug_output=expected_debug_data.encode("utf-8"),
                request=fixture_request(),
            )
        )
        self.com_logger.log_response(response)
        self.reporter.assert_reports(
            fixture_report_item_list_on_success(
                response.request.url, expected_code, expected_data,
                expected_debug_data
            )
        )
        logger_calls = fixture_logger_calls_on_success(
            response.request.url, expected_code, expected_data,
            expected_debug_data
        )
        self.assertEqual(logger_calls, self.logger.mock_calls)

    @mock.patch("pcs.lib.node_communication.is_proxy_set")
    def test_log_response_not_connected(self, mock_proxy):
        mock_proxy.return_value = False
        expected_debug_data = "* text\n>> data out\n"
        error_msg = "error"
        response = Response.connection_failure(
            MockCurlSimple(
                debug_output=expected_debug_data.encode("utf-8"),
                request=fixture_request(),
            ),
            pycurl.E_HTTP_POST_ERROR,
            error_msg,
        )
        self.com_logger.log_response(response)
        self.reporter.assert_reports(
            fixture_report_item_list_not_connected(
                response.request.host_label, error_msg
            )
            +
            fixture_report_item_list_debug(
                response.request.url, expected_debug_data
            )
        )
        logger_calls = [
            fixture_logger_call_not_connected(
                response.request.host_label, error_msg
            ),
            fixture_logger_call_debug_data(
                response.request.url, expected_debug_data
            )
        ]
        self.assertEqual(logger_calls, self.logger.mock_calls)

    @mock.patch("pcs.lib.node_communication.is_proxy_set")
    def test_log_response_not_connected_with_proxy(self, mock_proxy):
        mock_proxy.return_value = True
        expected_debug_data = "* text\n>> data out\n"
        error_msg = "error"
        response = Response.connection_failure(
            MockCurlSimple(
                debug_output=expected_debug_data.encode("utf-8"),
                request=fixture_request(),
            ),
            pycurl.E_HTTP_POST_ERROR,
            error_msg,
        )
        self.com_logger.log_response(response)
        self.reporter.assert_reports(
            fixture_report_item_list_not_connected(
                response.request.host_label, error_msg
            )
            +
            fixture_report_item_list_proxy_set(
                response.request.host_label, response.request.host
            )
            +
            fixture_report_item_list_debug(
                response.request.url, expected_debug_data
            )
        )
        logger_calls = [
            fixture_logger_call_not_connected(
                response.request.host_label, error_msg
            ),
            fixture_logger_call_proxy_set(),
            fixture_logger_call_debug_data(
                response.request.url, expected_debug_data
            )
        ]
        self.assertEqual(logger_calls, self.logger.mock_calls)

    def test_log_retry(self):
        prev_host = "prev host"
        response = Response.connection_failure(
            MockCurlSimple(request=fixture_request()),
            pycurl.E_HTTP_POST_ERROR,
            "e",
        )
        self.com_logger.log_retry(response, prev_host)
        self.reporter.assert_reports([(
            severity.WARNING,
            report_codes.NODE_COMMUNICATION_RETRYING,
            {
                "node": response.request.host_label,
                "failed_address": prev_host,
                "next_address": response.request.host,
                "request": response.request.url,
            },
            None
        )])
        logger_call = mock.call.warning(
            (
                "Unable to connect to '{label}' via address '{old_addr}'. "
                "Retrying request '{req}' via address '{new_addr}'"
            ).format(
                label=response.request.host_label,
                old_addr=prev_host,
                new_addr=response.request.host,
                req=response.request.url,
            )
        )
        self.assertEqual([logger_call], self.logger.mock_calls)

    def test_log_no_more_addresses(self):
        response = Response.connection_failure(
            MockCurlSimple(request=fixture_request()),
            pycurl.E_HTTP_POST_ERROR,
            "e"
        )
        self.com_logger.log_no_more_addresses(response)
        self.reporter.assert_reports([(
            severity.WARNING,
            report_codes.NODE_COMMUNICATION_NO_MORE_ADDRESSES,
            {
                "node": response.request.host_label,
                "request": response.request.url,
            },
            None
        )])
        logger_call = mock.call.warning(
            "No more addresses for node {label} to run '{req}'".format(
                label=response.request.host_label,
                req=response.request.url,
            )
        )
        self.assertEqual([logger_call], self.logger.mock_calls)

---- pcs-0.9.164/pcs/lib/test/test_node_communication_format.py ----

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from pcs.test.tools.assertions import assert_raise_library_error
from pcs.test.tools.misc import create_setup_patch_mixin
from pcs.test.tools.pcs_unittest import TestCase

from pcs.common import report_codes
from pcs.lib import node_communication_format
from pcs.lib.errors import ReportItemSeverity as severity

SetupPatchMixin = create_setup_patch_mixin(node_communication_format)

class PcmkAuthkeyFormat(TestCase, SetupPatchMixin):
    def test_create_expected_dict(self):
        b64encode = self.setup_patch("base64.b64encode")
        b64encode.return_value = "encoded_content".encode()
        self.assertEqual(
            node_communication_format.pcmk_authkey_format("content"),
            {
                "data": b64encode.return_value.decode("utf-8"),
                "type": "pcmk_remote_authkey",
                "rewrite_existing": True,
            }
        )

class ServiceCommandFormat(TestCase):
    def test_create_expected_dict(self):
        self.assertEqual(
            node_communication_format.service_cmd_format("pcsd", "start"),
            {
                "type": "service_command",
                "service": "pcsd",
                "command": "start",
            }
        )

def fixture_invalid_response_format(node_label):
    return (
        severity.ERROR,
        report_codes.INVALID_RESPONSE_FORMAT,
        {
            "node": node_label
        },
        None
    )

class ResponseToNodeActionResults(TestCase):
    def setUp(self):
        self.expected_keys = ["file"]
        self.main_key = "files"
        self.node_label = "node1"

    def assert_result_causes_invalid_format(self, result):
        assert_raise_library_error(
            lambda: node_communication_format.response_to_result(
                result,
                self.main_key,
                self.expected_keys,
                self.node_label,
            ),
            fixture_invalid_response_format(self.node_label)
        )

    def test_report_response_is_not_dict(self):
        self.assert_result_causes_invalid_format("bad answer")

    def test_report_dict_without_mandatory_key(self):
        self.assert_result_causes_invalid_format({})

    def test_report_when_on_files_is_not_dict(self):
        self.assert_result_causes_invalid_format({"files": True})

    def test_report_when_on_some_result_is_not_dict(self):
        self.assert_result_causes_invalid_format({
            "files": {
                "file": True
            }
        })

    def test_report_when_on_some_result_is_without_code(self):
        self.assert_result_causes_invalid_format({
            "files": {
                "file": {"message": "some_message"}
            }
        })
    def test_report_when_on_some_result_is_without_message(self):
        self.assert_result_causes_invalid_format({
            "files": {
                "file": {"code": "some_code"}
            }
        })

    def test_report_when_some_result_key_is_missing(self):
        self.assert_result_causes_invalid_format({
            "files": {
            }
        })

    def test_report_when_some_result_key_is_extra(self):
        self.assert_result_causes_invalid_format({
            "files": {
                "file": {
                    "code": "some_code",
                    "message": "some_message",
                },
                "extra": {
                    "code": "some_extra_code",
                    "message": "some_extra_message",
                }
            }
        })

---- pcs-0.9.164/pcs/lib/test/test_resource_agent.py ----

from __future__ import (
    absolute_import,
    division,
    print_function,
)

from lxml import etree
from functools import partial

from pcs.test.tools.assertions import (
    ExtendedAssertionsMixin,
    assert_raise_library_error,
    assert_report_item_list_equal,
    assert_xml_equal,
    start_tag_error_text,
)
from pcs.test.tools.misc import create_patcher
from pcs.test.tools.pcs_unittest import TestCase, mock
from pcs.test.tools.xml import XmlManipulation

from pcs.common import report_codes
from pcs.lib import resource_agent as lib_ra
from pcs.lib.errors import ReportItemSeverity as severity, LibraryError
from pcs.lib.external import CommandRunner

patch_agent = create_patcher("pcs.lib.resource_agent")
patch_agent_object = partial(mock.patch.object, lib_ra.Agent)

class GetDefaultInterval(TestCase):
    def test_return_0s_on_name_different_from_monitor(self):
        self.assertEqual("0s", lib_ra.get_default_interval("start"))

    def test_return_60s_on_monitor(self):
        self.assertEqual("60s", lib_ra.get_default_interval("monitor"))

@patch_agent("get_default_interval", mock.Mock(return_value="10s"))
class CompleteAllIntervals(TestCase):
    def test_add_intervals_everywhere_is_missing(self):
        self.assertEqual(
            [
                {"name": "monitor", "interval": "20s"},
                {"name": "start", "interval": "10s"},
            ],
            lib_ra.complete_all_intervals([
                {"name": "monitor", "interval": "20s"},
                {"name": "start"},
            ])
        )

class GetResourceAgentNameFromString(TestCase):
    def test_returns_resource_agent_name_when_is_valid(self):
        self.assertEqual(
            lib_ra.ResourceAgentName("ocf", "heartbeat", "Dummy"),
            lib_ra.get_resource_agent_name_from_string("ocf:heartbeat:Dummy")
        )

    def test_refuses_string_if_is_not_valid(self):
        self.assertRaises(
            lib_ra.InvalidResourceAgentName,
            lambda: lib_ra.get_resource_agent_name_from_string(
                "invalid:resource:agent:string"
            )
        )

    def test_refuses_with_unknown_standard(self):
        self.assertRaises(
            lib_ra.InvalidResourceAgentName,
            lambda: lib_ra.get_resource_agent_name_from_string("unknown:Dummy")
        )

    def test_refuses_ocf_agent_name_without_provider(self):
        self.assertRaises(
            lib_ra.InvalidResourceAgentName,
            lambda: lib_ra.get_resource_agent_name_from_string("ocf:Dummy")
        )

    def test_refuses_non_ocf_agent_name_with_provider(self):
        self.assertRaises(
            lib_ra.InvalidResourceAgentName,
            lambda:
            lib_ra.get_resource_agent_name_from_string("lsb:provider:Dummy")
        )

    def test_returns_resource_agent_containing_systemd_instance(self):
        self.assertEqual(
            lib_ra.ResourceAgentName("systemd", None, "lvm2-pvscan@252:2"),
            lib_ra.get_resource_agent_name_from_string(
                "systemd:lvm2-pvscan@252:2"
            )
        )

    def test_returns_resource_agent_containing_service_instance(self):
        self.assertEqual(
            lib_ra.ResourceAgentName("service", None, "lvm2-pvscan@252:2"),
            lib_ra.get_resource_agent_name_from_string(
                "service:lvm2-pvscan@252:2"
            )
        )

    def test_returns_resource_agent_containing_systemd_instance_short(self):
        self.assertEqual(
            lib_ra.ResourceAgentName("service", None, "getty@tty1"),
lib_ra.get_resource_agent_name_from_string("service:getty@tty1") ) def test_refuses_systemd_agent_name_with_provider(self): self.assertRaises( lib_ra.InvalidResourceAgentName, lambda: lib_ra.get_resource_agent_name_from_string( "sytemd:lvm2-pvscan252:@2" ) ) class ListResourceAgentsStandardsTest(TestCase): def test_success_and_filter_stonith_out(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) agents = [ "ocf", "lsb", "service", "systemd", "nagios", "stonith", ] # retval is number of providers found mock_runner.run.return_value = ( "\n".join(agents) + "\n", "", len(agents) ) self.assertEqual( lib_ra.list_resource_agents_standards(mock_runner), [ "lsb", "nagios", "ocf", "service", "systemd", ] ) mock_runner.run.assert_called_once_with([ "/usr/sbin/crm_resource", "--list-standards" ]) def test_success_filter_whitespace(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) agents = [ "", "ocf", " lsb", "service ", "systemd", " nagios ", "", "stonith", "", ] # retval is number of providers found mock_runner.run.return_value = ( "\n".join(agents) + "\n", "", len(agents) ) self.assertEqual( lib_ra.list_resource_agents_standards(mock_runner), [ "lsb", "nagios", "ocf", "service", "systemd", ] ) mock_runner.run.assert_called_once_with([ "/usr/sbin/crm_resource", "--list-standards" ]) def test_empty(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) mock_runner.run.return_value = ("", "", 0) self.assertEqual( lib_ra.list_resource_agents_standards(mock_runner), [] ) mock_runner.run.assert_called_once_with([ "/usr/sbin/crm_resource", "--list-standards" ]) def test_error(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) mock_runner.run.return_value = ("lsb", "error", 1) self.assertEqual( lib_ra.list_resource_agents_standards(mock_runner), ["lsb"] ) mock_runner.run.assert_called_once_with([ "/usr/sbin/crm_resource", "--list-standards" ]) class ListResourceAgentsOcfProvidersTest(TestCase): def test_success(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) providers = [ "heartbeat", "openstack", "pacemaker", "booth", ] # retval is number of providers found mock_runner.run.return_value = ( "\n".join(providers) + "\n", "", len(providers) ) self.assertEqual( lib_ra.list_resource_agents_ocf_providers(mock_runner), [ "booth", "heartbeat", "openstack", "pacemaker", ] ) mock_runner.run.assert_called_once_with([ "/usr/sbin/crm_resource", "--list-ocf-providers" ]) def test_success_filter_whitespace(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) providers = [ "", "heartbeat", " openstack", "pacemaker ", " booth ", ] # retval is number of providers found mock_runner.run.return_value = ( "\n".join(providers) + "\n", "", len(providers) ) self.assertEqual( lib_ra.list_resource_agents_ocf_providers(mock_runner), [ "booth", "heartbeat", "openstack", "pacemaker", ] ) mock_runner.run.assert_called_once_with([ "/usr/sbin/crm_resource", "--list-ocf-providers" ]) def test_empty(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) mock_runner.run.return_value = ("", "", 0) self.assertEqual( lib_ra.list_resource_agents_ocf_providers(mock_runner), [] ) mock_runner.run.assert_called_once_with([ "/usr/sbin/crm_resource", "--list-ocf-providers" ]) def test_error(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) mock_runner.run.return_value = ("booth", "error", 1) self.assertEqual( lib_ra.list_resource_agents_ocf_providers(mock_runner), ["booth"] ) mock_runner.run.assert_called_once_with([ "/usr/sbin/crm_resource", "--list-ocf-providers" ]) class 
class ListResourceAgentsStandardsAndProvidersTest(TestCase):
    def test_success(self):
        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        mock_runner.run.side_effect = [
            (
                "\n".join([
                    "ocf",
                    "lsb",
                    "service",
                    "systemd",
                    "nagios",
                    "stonith",
                    "",
                ]),
                "",
                0
            ),
            (
                "\n".join([
                    "heartbeat",
                    "openstack",
                    "pacemaker",
                    "booth",
                    "",
                ]),
                "",
                0
            ),
        ]
        self.assertEqual(
            lib_ra.list_resource_agents_standards_and_providers(mock_runner),
            [
                "lsb",
                "nagios",
                "ocf:booth",
                "ocf:heartbeat",
                "ocf:openstack",
                "ocf:pacemaker",
                "service",
                "systemd",
            ]
        )
        self.assertEqual(2, len(mock_runner.run.mock_calls))
        mock_runner.run.assert_has_calls([
            mock.call(["/usr/sbin/crm_resource", "--list-standards"]),
            mock.call(["/usr/sbin/crm_resource", "--list-ocf-providers"]),
        ])

class ListResourceAgentsTest(TestCase):
    def test_success_standard(self):
        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        mock_runner.run.return_value = (
            "\n".join([
                "docker",
                "Dummy",
                "dhcpd",
                "Dummy",
                "ethmonitor",
                "",
            ]),
            "",
            0
        )
        self.assertEqual(
            lib_ra.list_resource_agents(mock_runner, "ocf"),
            [
                "dhcpd",
                "docker",
                "Dummy",
                "Dummy",
                "ethmonitor",
            ]
        )
        mock_runner.run.assert_called_once_with([
            "/usr/sbin/crm_resource", "--list-agents", "ocf"
        ])

    def test_success_standard_provider(self):
        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        mock_runner.run.return_value = (
            "\n".join([
                "ping",
                "SystemHealth",
                "SysInfo",
                "HealthCPU",
                "Dummy",
                "",
            ]),
            "",
            0
        )
        self.assertEqual(
            lib_ra.list_resource_agents(mock_runner, "ocf:pacemaker"),
            [
                "Dummy",
                "HealthCPU",
                "ping",
                "SysInfo",
                "SystemHealth",
            ]
        )
        mock_runner.run.assert_called_once_with([
            "/usr/sbin/crm_resource", "--list-agents", "ocf:pacemaker"
        ])

    def test_bad_standard(self):
        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        mock_runner.run.return_value = (
            "",
            "No agents found for standard=nonsense, provider=*",
            1
        )
        self.assertEqual(lib_ra.list_resource_agents(mock_runner, "nonsense"), [])
        mock_runner.run.assert_called_once_with([
            "/usr/sbin/crm_resource", "--list-agents", "nonsense"
        ])

class ListStonithAgentsTest(TestCase):
    def test_success(self):
        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        mock_runner.run.return_value = (
            "\n".join([
                "fence_xvm",
                "fence_wti",
                "fence_vmware_soap",
                "fence_virt",
                "fence_scsi",
                "",
            ]),
            "",
            0
        )
        self.assertEqual(
            lib_ra.list_stonith_agents(mock_runner),
            [
                "fence_scsi",
                "fence_virt",
                "fence_vmware_soap",
                "fence_wti",
                "fence_xvm",
            ]
        )
        mock_runner.run.assert_called_once_with([
            "/usr/sbin/crm_resource", "--list-agents", "stonith"
        ])

    def test_no_agents(self):
        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        mock_runner.run.return_value = (
            "",
            "No agents found for standard=stonith provider=*",
            1
        )
        self.assertEqual(lib_ra.list_stonith_agents(mock_runner), [])
        mock_runner.run.assert_called_once_with([
            "/usr/sbin/crm_resource", "--list-agents", "stonith"
        ])

    def test_filter_hidden_agents(self):
        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        mock_runner.run.return_value = (
            "\n".join([
                "fence_na",
                "fence_wti",
                "fence_scsi",
                "fence_vmware_helper",
                "fence_nss_wrapper",
                "fence_node",
                "fence_vmware_soap",
                "fence_virt",
                "fence_pcmk",
                "fence_sanlockd",
                "fence_xvm",
                "fence_ack_manual",
                "fence_legacy",
                "fence_check",
                "fence_tool",
                "fence_kdump_send",
                "fence_virtd",
                "",
            ]),
            "",
            0
        )
        self.assertEqual(
            lib_ra.list_stonith_agents(mock_runner),
            [
                "fence_scsi",
                "fence_virt",
                "fence_vmware_soap",
                "fence_wti",
                "fence_xvm",
            ]
        )
        mock_runner.run.assert_called_once_with([
            "/usr/sbin/crm_resource", "--list-agents", "stonith"
        ])

class GuessResourceAgentFullNameTest(TestCase):
    def setUp(self):
        self.mock_runner_side_effect = [
            # list standards
            ("ocf\n", "", 0),
            # list providers
            ("heartbeat\npacemaker\n", "", 0),
            # list agents for standard-provider pairs
            ("Delay\nDummy\n", "", 0),
            ("Dummy\nStateful\n", "", 0),
        ]

    # The agent metadata stdout in the side effects below was stripped of its
    # markup in this dump; a minimal valid "<resource-agent/>" element stands
    # in for the valid-metadata cases.
    def test_one_agent_list(self):
        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        mock_runner.run.side_effect = (
            self.mock_runner_side_effect + [("<resource-agent/>", "", 0)]
        )
        self.assertEqual(
            [
                agent.get_name() for agent in
                lib_ra.guess_resource_agent_full_name(mock_runner, "delay")
            ],
            ["ocf:heartbeat:Delay"]
        )

    def test_one_agent_exception(self):
        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        mock_runner.run.side_effect = (
            self.mock_runner_side_effect + [("<resource-agent/>", "", 0)]
        )
        self.assertEqual(
            lib_ra.guess_exactly_one_resource_agent_full_name(
                mock_runner, "delay"
            ).get_name(),
            "ocf:heartbeat:Delay"
        )

    def test_two_agents_list(self):
        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        mock_runner.run.side_effect = (
            self.mock_runner_side_effect + [
                ("<resource-agent/>", "", 0),
                ("<resource-agent/>", "", 0),
            ]
        )
        self.assertEqual(
            [
                agent.get_name() for agent in
                lib_ra.guess_resource_agent_full_name(mock_runner, "dummy")
            ],
            ["ocf:heartbeat:Dummy", "ocf:pacemaker:Dummy"]
        )

    def test_two_agents_one_valid_list(self):
        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        mock_runner.run.side_effect = (
            self.mock_runner_side_effect + [
                ("<resource-agent/>", "", 0),
                ("invalid metadata", "", 0),
            ]
        )
        self.assertEqual(
            [
                agent.get_name() for agent in
                lib_ra.guess_resource_agent_full_name(mock_runner, "dummy")
            ],
            ["ocf:heartbeat:Dummy"]
        )

    def test_two_agents_exception(self):
        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        mock_runner.run.side_effect = (
            self.mock_runner_side_effect + [
                ("<resource-agent/>", "", 0),
                ("<resource-agent/>", "", 0),
            ]
        )
        assert_raise_library_error(
            lambda: lib_ra.guess_exactly_one_resource_agent_full_name(
                mock_runner, "dummy"
            ),
            (
                severity.ERROR,
                report_codes.AGENT_NAME_GUESS_FOUND_MORE_THAN_ONE,
                {
                    "agent": "dummy",
                    "possible_agents": [
                        "ocf:heartbeat:Dummy", "ocf:pacemaker:Dummy"
                    ],
                }
            ),
        )

    def test_no_agents_list(self):
        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        mock_runner.run.side_effect = self.mock_runner_side_effect
        self.assertEqual(
            lib_ra.guess_resource_agent_full_name(mock_runner, "missing"),
            []
        )

    def test_no_agents_exception(self):
        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        mock_runner.run.side_effect = self.mock_runner_side_effect
        assert_raise_library_error(
            lambda: lib_ra.guess_exactly_one_resource_agent_full_name(
                mock_runner, "missing"
            ),
            (
                severity.ERROR,
                report_codes.AGENT_NAME_GUESS_FOUND_NONE,
                {
                    "agent": "missing",
                }
            ),
        )

    def test_no_valid_agent_list(self):
        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        mock_runner.run.side_effect = (
            self.mock_runner_side_effect + [("invalid metadata", "", 0)]
        )
        self.assertEqual(
            lib_ra.guess_resource_agent_full_name(mock_runner, "Delay"),
            []
        )

@patch_agent_object("_get_metadata")
class AgentMetadataGetShortdescTest(TestCase):
    def setUp(self):
        self.agent = lib_ra.Agent(
            mock.MagicMock(spec_set=CommandRunner)
        )

    def test_no_desc(self, mock_metadata):
        xml = '<resource-agent/>'
        mock_metadata.return_value = etree.XML(xml)
        self.assertEqual(self.agent.get_shortdesc(), "")

    def test_shortdesc_attribute(self, mock_metadata):
        xml = '<resource-agent shortdesc="short description"/>'
        mock_metadata.return_value = etree.XML(xml)
        self.assertEqual(self.agent.get_shortdesc(), "short description")

    def test_shortdesc_element(self, mock_metadata):
        xml = """
            <resource-agent>
                <shortdesc>short \n description</shortdesc>
            </resource-agent>
        """
        mock_metadata.return_value = etree.XML(xml)
        self.assertEqual(self.agent.get_shortdesc(), "short \n description")
@patch_agent_object("_get_metadata") class AgentMetadataGetLongdescTest(TestCase): def setUp(self): self.agent = lib_ra.Agent( mock.MagicMock(spec_set=CommandRunner) ) def test_no_desc(self, mock_metadata): xml = '' mock_metadata.return_value = etree.XML(xml) self.assertEqual( self.agent.get_longdesc(), "" ) def test_longesc_element(self, mock_metadata): xml = """ long \n description """ mock_metadata.return_value = etree.XML(xml) self.assertEqual( self.agent.get_longdesc(), "long \n description" ) @patch_agent_object("_get_metadata") class AgentMetadataGetParametersTest(TestCase): def setUp(self): self.agent = lib_ra.Agent( mock.MagicMock(spec_set=CommandRunner) ) def test_no_parameters(self, mock_metadata): xml = """ """ mock_metadata.return_value = etree.XML(xml) self.assertEqual( self.agent.get_parameters(), [] ) def test_empty_parameters(self, mock_metadata): xml = """ """ mock_metadata.return_value = etree.XML(xml) self.assertEqual( self.agent.get_parameters(), [] ) def test_empty_parameter(self, mock_metadata): xml = """ """ mock_metadata.return_value = etree.XML(xml) self.assertEqual( self.agent.get_parameters(), [ { "name": "", "longdesc": "", "shortdesc": "", "type": "string", "required": False, "default": None, "advanced": False, "deprecated": False, "obsoletes": None, "pcs_deprecated_warning": "", } ] ) def test_all_data_and_minimal_data(self, mock_metadata): xml = """ Long description short description """ mock_metadata.return_value = etree.XML(xml) self.assertEqual( self.agent.get_parameters(), [ { "name": "test_param", "longdesc": "Long description", "shortdesc": "short description", "type": "test_type", "required": True, "default": "default_value", "advanced": False, "deprecated": False, "obsoletes": None, "pcs_deprecated_warning": "", }, { "name": "another parameter", "longdesc": "", "shortdesc": "", "type": "string", "required": False, "default": None, "advanced": False, "deprecated": False, "obsoletes": None, "pcs_deprecated_warning": "", } ] ) def test_remove_obsoletes_keep_deprecated(self, mock_metadata): xml = """ """ mock_metadata.return_value = etree.XML(xml) self.assertEqual( self.agent.get_parameters(), [ { "name": "deprecated", "longdesc": "", "shortdesc": "", "type": "string", "required": False, "default": None, "advanced": False, "deprecated": True, "obsoletes": None, "pcs_deprecated_warning": "", }, ] ) @patch_agent_object("_get_metadata") class AgentMetadataGetActionsTest(TestCase): def setUp(self): self.agent = lib_ra.Agent( mock.MagicMock(spec_set=CommandRunner) ) def test_no_actions(self, mock_metadata): xml = """ """ mock_metadata.return_value = etree.XML(xml) self.assertEqual( self.agent.get_actions(), [] ) def test_empty_actions(self, mock_metadata): xml = """ """ mock_metadata.return_value = etree.XML(xml) self.assertEqual( self.agent.get_actions(), [] ) def test_empty_action(self, mock_metadata): xml = """ """ mock_metadata.return_value = etree.XML(xml) self.assertEqual( self.agent.get_actions(), [{}] ) def test_more_actions(self, mock_metadata): xml = """ """ mock_metadata.return_value = etree.XML(xml) self.assertEqual( self.agent.get_actions(), [ { "name": "on", "automatic": "0" }, {"name": "off"}, {"name": "reboot"}, {"name": "status"} ] ) def test_remove_depth_with_0(self, mock_metadata): xml = """ """ mock_metadata.return_value = etree.XML(xml) self.assertEqual( self.agent.get_actions(), [ { "name": "monitor", "timeout": "20" }, ] ) def test_transfor_depth_to_OCF_CHECK_LEVEL(self, mock_metadata): xml = """ """ mock_metadata.return_value = 
        mock_metadata.return_value = etree.XML(xml)
        self.assertEqual(
            self.agent.get_actions(),
            [
                {
                    "name": "monitor",
                    "timeout": "20",
                    "OCF_CHECK_LEVEL": "1",
                },
            ]
        )

@patch_agent_object(
    "_is_cib_default_action",
    lambda self, action: action.get("name") == "monitor"
)
@patch_agent_object("get_actions")
class AgentMetadataGetCibDefaultActions(TestCase):
    def setUp(self):
        self.agent = lib_ra.Agent(
            mock.MagicMock(spec_set=CommandRunner)
        )

    def test_complete_monitor(self, get_actions):
        get_actions.return_value = [{"name": "meta-data"}]
        self.assertEqual(
            [{"name": "monitor", "interval": "60s"}],
            self.agent.get_cib_default_actions()
        )

    def test_complete_intervals(self, get_actions):
        get_actions.return_value = [
            {"name": "meta-data"},
            {"name": "monitor", "timeout": "30s"},
        ]
        self.assertEqual(
            [{"name": "monitor", "interval": "60s", "timeout": "30s"}],
            self.agent.get_cib_default_actions()
        )

@mock.patch.object(lib_ra.ResourceAgent, "get_actions")
class ResourceAgentMetadataGetCibDefaultActions(TestCase):
    fixture_actions = [
        {"name": "custom1", "timeout": "40s"},
        {"name": "custom2", "interval": "25s", "timeout": "60s"},
        {"name": "meta-data"},
        {"name": "monitor", "interval": "10s", "timeout": "30s"},
        {"name": "start", "timeout": "40s"},
        {"name": "status", "interval": "15s", "timeout": "20s"},
        {"name": "validate-all"},
    ]

    def setUp(self):
        self.agent = lib_ra.ResourceAgent(
            mock.MagicMock(spec_set=CommandRunner),
            "ocf:pacemaker:Dummy"
        )

    def test_select_only_actions_for_cib(self, get_actions):
        get_actions.return_value = self.fixture_actions
        self.assertEqual(
            [
                {"name": "custom1", "interval": "0s", "timeout": "40s"},
                {"name": "custom2", "interval": "25s", "timeout": "60s"},
                {"name": "monitor", "interval": "10s", "timeout": "30s"},
                {"name": "start", "interval": "0s", "timeout": "40s"},
            ],
            self.agent.get_cib_default_actions()
        )

    def test_complete_monitor(self, get_actions):
        get_actions.return_value = [{"name": "meta-data"}]
        self.assertEqual(
            [{"name": "monitor", "interval": "60s"}],
            self.agent.get_cib_default_actions()
        )

    def test_complete_intervals(self, get_actions):
        get_actions.return_value = [
            {"name": "meta-data"},
            {"name": "monitor", "timeout": "30s"},
        ]
        self.assertEqual(
            [{"name": "monitor", "interval": "60s", "timeout": "30s"}],
            self.agent.get_cib_default_actions()
        )

    def test_select_only_necessary_actions_for_cib(self, get_actions):
        get_actions.return_value = self.fixture_actions
        self.assertEqual(
            [
                {"name": "monitor", "interval": "10s", "timeout": "30s"}
            ],
            self.agent.get_cib_default_actions(necessary_only=True)
        )

@patch_agent_object("_get_metadata")
@patch_agent_object("get_name", lambda self: "agent-name")
class AgentMetadataGetInfoTest(TestCase):
    def setUp(self):
        self.agent = lib_ra.Agent(
            mock.MagicMock(spec_set=CommandRunner)
        )
        # Reconstructed from the expectations in the tests below; the markup
        # of this fixture was stripped in this dump.
        self.metadata = etree.XML("""
            <resource-agent>
                <shortdesc>short description</shortdesc>
                <longdesc>long description</longdesc>
                <parameters>
                    <parameter name="test_param" required="1">
                        <longdesc>Long description</longdesc>
                        <shortdesc>short description</shortdesc>
                        <content type="test_type" default="default_value"/>
                    </parameter>
                    <parameter name="another parameter"/>
                </parameters>
                <actions>
                    <action name="on" automatic="0"/>
                    <action name="off"/>
                </actions>
            </resource-agent>
        """)

    def test_name_info(self, mock_metadata):
        mock_metadata.return_value = self.metadata
        self.assertEqual(
            self.agent.get_name_info(),
            {
                "name": "agent-name",
                "shortdesc": "",
                "longdesc": "",
                "parameters": [],
                "actions": [],
            }
        )

    def test_description_info(self, mock_metadata):
        mock_metadata.return_value = self.metadata
        self.assertEqual(
            self.agent.get_description_info(),
            {
                "name": "agent-name",
                "shortdesc": "short description",
                "longdesc": "long description",
                "parameters": [],
                "actions": [],
            }
        )

    def test_full_info(self, mock_metadata):
        mock_metadata.return_value = self.metadata
        self.assertEqual(
            self.agent.get_full_info(),
            {
                "name": "agent-name",
                "shortdesc": "short description",
                "longdesc": "long description",
"parameters": [ { "name": "test_param", "longdesc": "Long description", "shortdesc": "short description", "type": "test_type", "required": True, "default": "default_value", "advanced": False, "deprecated": False, "obsoletes": None, "pcs_deprecated_warning": "", }, { "name": "another parameter", "longdesc": "", "shortdesc": "", "type": "string", "required": False, "default": None, "advanced": False, "deprecated": False, "obsoletes": None, "pcs_deprecated_warning": "", } ], "actions": [ { "name": "on", "automatic": "0" }, {"name": "off"}, ], "default_actions": [{"name": "monitor", "interval": "60s"}], } ) @patch_agent_object("_get_metadata") class AgentMetadataValidateParametersValuesTest(TestCase): def setUp(self): self.agent = lib_ra.Agent( mock.MagicMock(spec_set=CommandRunner) ) self.metadata = etree.XML(""" Long description short description """) def test_all_required(self, mock_metadata): mock_metadata.return_value = self.metadata self.assertEqual( self.agent.validate_parameters_values({ "another_required_param": "value1", "required_param": "value2", }), ([], []) ) def test_all_required_and_optional(self, mock_metadata): mock_metadata.return_value = self.metadata self.assertEqual( self.agent.validate_parameters_values({ "another_required_param": "value1", "required_param": "value2", "test_param": "value3", }), ([], []) ) def test_all_required_and_invalid(self, mock_metadata): mock_metadata.return_value = self.metadata self.assertEqual( self.agent.validate_parameters_values({ "another_required_param": "value1", "required_param": "value2", "invalid_param": "value3", }), (["invalid_param"], []) ) def test_missing_required(self, mock_metadata): mock_metadata.return_value = self.metadata self.assertEqual( self.agent.validate_parameters_values({ }), ([], ["required_param", "another_required_param"]) ) def test_missing_required_and_invalid(self, mock_metadata): mock_metadata.return_value = self.metadata self.assertEqual( self.agent.validate_parameters_values({ "another_required_param": "value1", "invalid_param": "value3", }), (["invalid_param"], ["required_param"]) ) def test_ignore_obsoletes_use_deprecated(self, mock_metadata): xml = """ """ mock_metadata.return_value = etree.XML(xml) self.assertEqual( self.agent.validate_parameters_values({ }), ([], ["deprecated"]) ) def test_dont_allow_obsoletes_use_deprecated(self, mock_metadata): xml = """ """ mock_metadata.return_value = etree.XML(xml) self.assertEqual( self.agent.validate_parameters_values({ "obsoletes": "value", }), (["obsoletes"], ["deprecated"]) ) class AgentMetadataValidateParameters(TestCase): def setUp(self): self.agent = lib_ra.Agent(mock.MagicMock(spec_set=CommandRunner)) self.metadata = etree.XML(""" Long description short description """) patcher = patch_agent_object("_get_metadata") self.addCleanup(patcher.stop) self.get_metadata = patcher.start() self.get_metadata.return_value = self.metadata def test_returns_empty_report_when_all_required_there(self): self.assertEqual( [], self.agent.validate_parameters({ "another_required_param": "value1", "required_param": "value2", }), ) def test_returns_empty_report_when_all_required_and_optional_there(self): self.assertEqual( [], self.agent.validate_parameters({ "another_required_param": "value1", "required_param": "value2", "test_param": "value3", }) ) def test_report_invalid_option(self): assert_report_item_list_equal( self.agent.validate_parameters({ "another_required_param": "value1", "required_param": "value2", "invalid_param": "value3", }), [ ( severity.ERROR, 
                    report_codes.INVALID_OPTIONS,
                    {
                        "option_names": ["invalid_param"],
                        "option_type": "resource",
                        "allowed": [
                            "another_required_param",
                            "required_param",
                            "test_param",
                        ],
                        "allowed_patterns": [],
                    },
                    report_codes.FORCE_OPTIONS
                ),
            ],
        )

    def test_report_missing_option(self):
        assert_report_item_list_equal(
            self.agent.validate_parameters({}),
            [
                (
                    severity.ERROR,
                    report_codes.REQUIRED_OPTION_IS_MISSING,
                    {
                        "option_names": [
                            "required_param",
                            "another_required_param",
                        ],
                        "option_type": "resource",
                    },
                    report_codes.FORCE_OPTIONS
                ),
            ],
        )

    def test_warn_missing_required(self):
        assert_report_item_list_equal(
            self.agent.validate_parameters({}, allow_invalid=True),
            [
                (
                    severity.WARNING,
                    report_codes.REQUIRED_OPTION_IS_MISSING,
                    {
                        "option_names": [
                            "required_param",
                            "another_required_param",
                        ],
                        "option_type": "resource",
                    },
                ),
            ]
        )

    def test_ignore_obsoletes_use_deprecated(self):
        xml = """
            <resource-agent>
                <parameters>
                    <parameter name="deprecated" deprecated="1" required="1"/>
                    <parameter name="obsoletes" obsoletes="deprecated"
                        required="1"/>
                </parameters>
            </resource-agent>
        """
        self.get_metadata.return_value = etree.XML(xml)
        assert_report_item_list_equal(
            self.agent.validate_parameters({}),
            [
                (
                    severity.ERROR,
                    report_codes.REQUIRED_OPTION_IS_MISSING,
                    {
                        "option_names": [
                            "deprecated",
                        ],
                        "option_type": "resource",
                    },
                    report_codes.FORCE_OPTIONS
                ),
            ]
        )

    def test_dont_allow_obsoletes_use_deprecated(self):
        xml = """
            <resource-agent>
                <parameters>
                    <parameter name="deprecated" deprecated="1" required="1"/>
                    <parameter name="obsoletes" obsoletes="deprecated"
                        required="1"/>
                </parameters>
            </resource-agent>
        """
        self.get_metadata.return_value = etree.XML(xml)
        assert_report_item_list_equal(
            self.agent.validate_parameters({"obsoletes": "value"}),
            [
                (
                    severity.ERROR,
                    report_codes.REQUIRED_OPTION_IS_MISSING,
                    {
                        "option_names": [
                            "deprecated",
                        ],
                        "option_type": "resource",
                    },
                    report_codes.FORCE_OPTIONS
                ),
                (
                    severity.ERROR,
                    report_codes.INVALID_OPTIONS,
                    {
                        "option_names": ["obsoletes"],
                        "option_type": "resource",
                        "allowed": [
                            "deprecated",
                        ],
                        "allowed_patterns": [],
                    },
                    report_codes.FORCE_OPTIONS
                ),
            ]
        )

    def test_required_not_specified_on_update(self):
        assert_report_item_list_equal(
            self.agent.validate_parameters({
                "test_param": "value",
            }, update=True),
            [
            ],
        )

class StonithdMetadataGetMetadataTest(TestCase, ExtendedAssertionsMixin):
    def setUp(self):
        self.mock_runner = mock.MagicMock(spec_set=CommandRunner)
        self.agent = lib_ra.StonithdMetadata(self.mock_runner)

    def test_success(self):
        metadata = """
            <resource-agent name="stonithd">
                stonithd test metadata
            </resource-agent>
        """
        self.mock_runner.run.return_value = (metadata, "", 0)
        assert_xml_equal(
            str(XmlManipulation(self.agent._get_metadata())),
            metadata
        )
        self.mock_runner.run.assert_called_once_with(
            ["/usr/libexec/pacemaker/stonithd", "metadata"]
        )

    def test_failed_to_get_xml(self):
        self.mock_runner.run.return_value = ("", "some error", 1)
        self.assert_raises(
            lib_ra.UnableToGetAgentMetadata,
            self.agent._get_metadata,
            {
                "agent": "stonithd",
                "message": "some error",
            }
        )
        self.mock_runner.run.assert_called_once_with(
            ["/usr/libexec/pacemaker/stonithd", "metadata"]
        )

    def test_invalid_xml(self):
        self.mock_runner.run.return_value = ("some garbage", "", 0)
        self.assert_raises(
            lib_ra.UnableToGetAgentMetadata,
            self.agent._get_metadata,
            {
                "agent": "stonithd",
                "message": start_tag_error_text(),
            }
        )
        self.mock_runner.run.assert_called_once_with(
            ["/usr/libexec/pacemaker/stonithd", "metadata"]
        )

@patch_agent_object("_get_metadata")
class StonithdMetadataGetParametersTest(TestCase):
    def setUp(self):
        self.agent = lib_ra.StonithdMetadata(
            mock.MagicMock(spec_set=CommandRunner)
        )

    def test_success(self, mock_metadata):
        xml = """
            <resource-agent>
                <parameters>
                    <parameter name="test_param" required="0">
                        <longdesc>Long description</longdesc>
                        <shortdesc>Advanced use only: short description</shortdesc>
                        <content type="test_type" default="default_value"/>
                    </parameter>
                    <parameter name="another parameter"/>
                </parameters>
            </resource-agent>
        """
        mock_metadata.return_value = etree.XML(xml)
        self.assertEqual(
            self.agent.get_parameters(),
            [
                {
                    "name": "test_param",
                    "longdesc":
                        "Advanced use only: short description\nLong "
                        "description",
                    "shortdesc": "Advanced use only: short description",
"type": "test_type", "required": False, "default": "default_value", "advanced": True, "deprecated": False, "obsoletes": None, "pcs_deprecated_warning": "", }, { "name": "another parameter", "longdesc": "", "shortdesc": "", "type": "string", "required": False, "default": None, "advanced": False, "deprecated": False, "obsoletes": None, "pcs_deprecated_warning": "", } ] ) class CrmAgentDescendant(lib_ra.CrmAgent): def _prepare_name_parts(self, name): return lib_ra.ResourceAgentName("STANDARD", None, name) def get_name(self): return self.get_type() class CrmAgentMetadataGetMetadataTest(TestCase, ExtendedAssertionsMixin): def setUp(self): self.mock_runner = mock.MagicMock(spec_set=CommandRunner) self.agent = CrmAgentDescendant(self.mock_runner, "TYPE") def test_success(self): metadata = """ crm agent test metadata """ self.mock_runner.run.return_value = (metadata, "", 0) assert_xml_equal( str(XmlManipulation(self.agent._get_metadata())), metadata ) self.mock_runner.run.assert_called_once_with( [ "/usr/sbin/crm_resource", "--show-metadata", self.agent._get_full_name() ], env_extend={ "PATH": "/usr/sbin/:/bin/:/usr/bin/", } ) def test_failed_to_get_xml(self): self.mock_runner.run.return_value = ("", "some error", 1) self.assert_raises( lib_ra.UnableToGetAgentMetadata, self.agent._get_metadata, { "agent": self.agent.get_name(), "message": "some error", } ) self.mock_runner.run.assert_called_once_with( [ "/usr/sbin/crm_resource", "--show-metadata", self.agent._get_full_name() ], env_extend={ "PATH": "/usr/sbin/:/bin/:/usr/bin/", } ) def test_invalid_xml(self): self.mock_runner.run.return_value = ("some garbage", "", 0) self.assert_raises( lib_ra.UnableToGetAgentMetadata, self.agent._get_metadata, { "agent": self.agent.get_name(), "message": start_tag_error_text(), } ) self.mock_runner.run.assert_called_once_with( [ "/usr/sbin/crm_resource", "--show-metadata", self.agent._get_full_name() ], env_extend={ "PATH": "/usr/sbin/:/bin/:/usr/bin/", } ) class CrmAgentMetadataIsValidAgentTest(TestCase): def setUp(self): self.mock_runner = mock.MagicMock(spec_set=CommandRunner) self.agent = CrmAgentDescendant(self.mock_runner, "TYPE") def test_success(self): metadata = """ crm agent test metadata """ self.mock_runner.run.return_value = (metadata, "", 0) self.assertTrue(self.agent.is_valid_metadata()) def test_fail(self): self.mock_runner.run.return_value = ("", "", 1) self.assertFalse(self.agent.is_valid_metadata()) class StonithAgentMetadataGetNameTest(TestCase, ExtendedAssertionsMixin): def test_success(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) agent_name = "fence_dummy" agent = lib_ra.StonithAgent(mock_runner, agent_name) self.assertEqual(agent.get_name(), agent_name) class StonithAgentMetadataGetMetadataTest(TestCase, ExtendedAssertionsMixin): # Only test that correct name is going to crm_resource. Everything else is # covered by the parent class and therefore tested in its test. 
def setUp(self): self.mock_runner = mock.MagicMock(spec_set=CommandRunner) self.agent_name = "fence_dummy" self.agent = lib_ra.StonithAgent( self.mock_runner, self.agent_name ) def tearDown(self): lib_ra.StonithAgent._stonithd_metadata = None def test_success(self): metadata = """ crm agent test metadata """ self.mock_runner.run.return_value = (metadata, "", 0) assert_xml_equal( str(XmlManipulation(self.agent._get_metadata())), metadata ) self.mock_runner.run.assert_called_once_with( [ "/usr/sbin/crm_resource", "--show-metadata", "stonith:{0}".format(self.agent_name) ], env_extend={ "PATH": "/usr/sbin/:/bin/:/usr/bin/", } ) class StonithAgentMetadataGetParametersTest(TestCase): def setUp(self): self.mock_runner = mock.MagicMock(spec_set=CommandRunner) self.agent_name = "fence_dummy" self.agent = lib_ra.StonithAgent( self.mock_runner, self.agent_name ) def tearDown(self): lib_ra.StonithAgent._stonithd_metadata = None def test_success(self): metadata = """ crm agent test metadata Fencing Action """ stonithd_metadata = """ """ self.mock_runner.run.side_effect = [ (metadata, "", 0), (stonithd_metadata, "", 0), ] self.assertEqual( self.agent.get_parameters(), [ { "name": "debug", "longdesc": "", "shortdesc": "", "type": "string", "required": False, "default": None, "advanced": False, "deprecated": False, "obsoletes": None, "pcs_deprecated_warning": "", }, { "name": "valid_param", "longdesc": "", "shortdesc": "", "type": "string", "required": False, "default": None, "advanced": False, "deprecated": False, "obsoletes": None, "pcs_deprecated_warning": "", }, { "name": "verbose", "longdesc": "", "shortdesc": "", "type": "string", "required": False, "default": None, "advanced": False, "deprecated": False, "obsoletes": None, "pcs_deprecated_warning": "", }, { "name": "action", "longdesc": "", "shortdesc": "Fencing Action", "type": "string", "required": False, "default": None, "advanced": True, "deprecated": False, "obsoletes": None, "pcs_deprecated_warning": "Specifying 'action' is" " deprecated and not necessary with current Pacemaker" " versions. Use 'pcmk_off_action'," " 'pcmk_reboot_action' instead." 
                    ,
                },
                {
                    "name": "another_param",
                    "longdesc": "",
                    "shortdesc": "",
                    "type": "string",
                    "required": False,
                    "default": None,
                    "advanced": False,
                    "deprecated": False,
                    "obsoletes": None,
                    "pcs_deprecated_warning": "",
                },
                {
                    "name": "stonithd_param",
                    "longdesc": "",
                    "shortdesc": "",
                    "type": "string",
                    "required": False,
                    "default": None,
                    "advanced": False,
                    "deprecated": False,
                    "obsoletes": None,
                    "pcs_deprecated_warning": "",
                },
            ]
        )
        self.assertEqual(2, len(self.mock_runner.run.mock_calls))
        self.mock_runner.run.assert_has_calls([
            mock.call(
                [
                    "/usr/sbin/crm_resource",
                    "--show-metadata",
                    "stonith:{0}".format(self.agent_name)
                ],
                env_extend={
                    "PATH": "/usr/sbin/:/bin/:/usr/bin/",
                }
            ),
            mock.call(
                ["/usr/libexec/pacemaker/stonithd", "metadata"]
            ),
        ])


@patch_agent_object("_get_metadata")
class StonithAgentMetadataGetProvidesUnfencingTest(TestCase):
    def setUp(self):
        self.agent = lib_ra.StonithAgent(
            mock.MagicMock(spec_set=CommandRunner),
            "fence_dummy"
        )

    def tearDown(self):
        lib_ra.StonithAgent._stonithd_metadata = None

    def test_true(self, mock_metadata):
        xml = """ """
        mock_metadata.return_value = etree.XML(xml)
        self.assertTrue(self.agent.get_provides_unfencing())

    def test_no_action_on(self, mock_metadata):
        xml = """ """
        mock_metadata.return_value = etree.XML(xml)
        self.assertFalse(self.agent.get_provides_unfencing())

    def test_no_target(self, mock_metadata):
        xml = """ """
        mock_metadata.return_value = etree.XML(xml)
        self.assertFalse(self.agent.get_provides_unfencing())

    def test_no_automatic(self, mock_metadata):
        xml = """ """
        mock_metadata.return_value = etree.XML(xml)
        self.assertFalse(self.agent.get_provides_unfencing())


class ResourceAgentTest(TestCase):
    def test_raises_on_invalid_name(self):
        self.assertRaises(
            lib_ra.InvalidResourceAgentName,
            lambda: lib_ra.ResourceAgent(mock.MagicMock(), "invalid_name")
        )

    def test_does_not_raise_on_valid_name(self):
        lib_ra.ResourceAgent(mock.MagicMock(), "ocf:heartbeat:name")


@patch_agent_object("_get_metadata")
class ResourceAgentGetParameters(TestCase):
    def fixture_metadata(self, params):
        return etree.XML(
            """ {0} """.format([''.format(name) for name in params])
        )

    def assert_param_names(self, expected_names, actual_params):
        self.assertEqual(
            expected_names,
            [param["name"] for param in actual_params]
        )

    def test_add_trace_parameters_to_ocf(self, mock_metadata):
        mock_metadata.return_value = self.fixture_metadata(["test_param"])
        agent = lib_ra.ResourceAgent(
            mock.MagicMock(spec_set=CommandRunner),
            "ocf:pacemaker:test"
        )
        self.assert_param_names(
            ["test_param", "trace_ra", "trace_file"],
            agent.get_parameters()
        )

    def test_do_not_add_trace_parameters_if_present(self, mock_metadata):
        mock_metadata.return_value = self.fixture_metadata([
            "trace_ra", "test_param", "trace_file"
        ])
        agent = lib_ra.ResourceAgent(
            mock.MagicMock(spec_set=CommandRunner),
            "ocf:pacemaker:test"
        )
        self.assert_param_names(
            ["trace_ra", "test_param", "trace_file"],
            agent.get_parameters()
        )

    def test_do_not_add_trace_parameters_to_others(self, mock_metadata):
        mock_metadata.return_value = self.fixture_metadata(["test_param"])
        agent = lib_ra.ResourceAgent(
            mock.MagicMock(spec_set=CommandRunner),
            "service:test"
        )
        self.assert_param_names(
            ["test_param"],
            agent.get_parameters()
        )


class FindResourceAgentByNameTest(TestCase):
    def setUp(self):
        self.report_processor = mock.MagicMock()
        self.runner = mock.MagicMock()
        self.run = partial(
            lib_ra.find_valid_resource_agent_by_name,
            self.report_processor,
            self.runner,
        )

    @patch_agent("reports.agent_name_guessed")
    @patch_agent("guess_exactly_one_resource_agent_full_name")
    def test_returns_guessed_agent(self, mock_guess, mock_report):
        #setup
        name = "Delay"
        guessed_name = "ocf:heartbeat:Delay"
        report = "AGENT_NAME_GUESSED"

        agent = mock.MagicMock(get_name=mock.Mock(return_value=guessed_name))
        mock_guess.return_value = agent
        mock_report.return_value = report

        #test
        self.assertEqual(agent, self.run(name))
        mock_guess.assert_called_once_with(self.runner, name)
        self.report_processor.process.assert_called_once_with(report)
        mock_report.assert_called_once_with(name, guessed_name)

    @patch_agent("ResourceAgent")
    def test_returns_real_agent_when_is_there(self, ResourceAgent):
        #setup
        name = "ocf:heartbeat:Delay"

        agent = mock.MagicMock()
        agent.validate_metadata = mock.Mock(return_value=agent)
        ResourceAgent.return_value = agent

        #test
        self.assertEqual(agent, self.run(name))
        ResourceAgent.assert_called_once_with(self.runner, name)

    @patch_agent("resource_agent_error_to_report_item")
    @patch_agent("AbsentResourceAgent")
    @patch_agent("ResourceAgent")
    def test_returns_absent_agent_on_metadata_load_fail(
        self, ResourceAgent, AbsentResourceAgent, error_to_report_item
    ):
        #setup
        name = "ocf:heartbeat:Some"
        report = "UNABLE_TO_GET_AGENT_METADATA"
        e = lib_ra.UnableToGetAgentMetadata(name, "metadata missing")
        agent = 'absent agent'

        ResourceAgent.side_effect = e
        error_to_report_item.return_value = report
        AbsentResourceAgent.return_value = agent

        #test
        self.assertEqual(agent, self.run(name, allowed_absent=True))
        ResourceAgent.assert_called_once_with(self.runner, name)
        AbsentResourceAgent.assert_called_once_with(self.runner, name)
        error_to_report_item.assert_called_once_with(
            e, severity=severity.WARNING
        )
        self.report_processor.process.assert_called_once_with(report)

    @patch_agent("resource_agent_error_to_report_item")
    @patch_agent("ResourceAgent")
    def test_raises_on_metadata_load_fail_disallowed_absent(
        self, ResourceAgent, error_to_report_item
    ):
        name = "ocf:heartbeat:Some"
        report = "UNABLE_TO_GET_AGENT_METADATA"
        e = lib_ra.UnableToGetAgentMetadata(name, "metadata missing")

        ResourceAgent.side_effect = e
        error_to_report_item.return_value = report

        with self.assertRaises(LibraryError) as context_manager:
            self.run(name)

        self.assertEqual(report, context_manager.exception.args[0])
        ResourceAgent.assert_called_once_with(self.runner, name)
        error_to_report_item.assert_called_once_with(e, forceable=True)

    @patch_agent("resource_agent_error_to_report_item")
    @patch_agent("ResourceAgent")
    def test_raises_on_invalid_name(self, ResourceAgent, error_to_report_item):
        name = "ocf:heartbeat:Something:else"
        report = "INVALID_RESOURCE_AGENT_NAME"
        e = lib_ra.InvalidResourceAgentName(name, "invalid agent name")

        ResourceAgent.side_effect = e
        error_to_report_item.return_value = report

        with self.assertRaises(LibraryError) as context_manager:
            self.run(name)

        self.assertEqual(report, context_manager.exception.args[0])
        ResourceAgent.assert_called_once_with(self.runner, name)
        error_to_report_item.assert_called_once_with(e)


class FindStonithAgentByName(TestCase):
    # It is quite similar to find_valid_resource_agent_by_name, so only
    # minimum tests here:
    # - test success
    # - test with ":" in agent name - there was a bug
    def setUp(self):
        self.report_processor = mock.MagicMock()
        self.runner = mock.MagicMock()
        self.run = partial(
            lib_ra.find_valid_stonith_agent_by_name,
            self.report_processor,
            self.runner,
        )

    @patch_agent("StonithAgent")
    def test_returns_real_agent_when_is_there(self, StonithAgent):
        #setup
        name = "fence_xvm"

        agent = mock.MagicMock()
        agent.validate_metadata = mock.Mock(return_value=agent)
        StonithAgent.return_value = agent
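        # (Illustrative note: validate_metadata is stubbed to return the
        # agent object itself; presumably the real method returns self so
        # callers can chain the call - an assumption inferred from this mock
        # setup, mirroring FindResourceAgentByNameTest above.)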
#test self.assertEqual(agent, self.run(name)) StonithAgent.assert_called_once_with(self.runner, name) @patch_agent("resource_agent_error_to_report_item") @patch_agent("StonithAgent") def test_raises_on_invalid_name(self, StonithAgent, error_to_report_item): name = "fence_xvm:invalid" report = "INVALID_STONITH_AGENT_NAME" e = lib_ra.InvalidStonithAgentName(name, "invalid agent name") StonithAgent.side_effect = e error_to_report_item.return_value = report with self.assertRaises(LibraryError) as context_manager: self.run(name) self.assertEqual(report, context_manager.exception.args[0]) StonithAgent.assert_called_once_with(self.runner, name) error_to_report_item.assert_called_once_with(e) class AbsentResourceAgentTest(TestCase): @mock.patch.object(lib_ra.CrmAgent, "_load_metadata") def test_behaves_like_a_proper_agent(self, load_metadata): name = "ocf:heartbeat:Absent" runner = mock.MagicMock(spec_set=CommandRunner) load_metadata.return_value = "" agent = lib_ra.ResourceAgent(runner, name) absent = lib_ra.AbsentResourceAgent(runner, name) #metadata are valid absent.validate_metadata() self.assertTrue(absent.is_valid_metadata()) self.assertEqual(agent.get_name(), absent.get_name()) self.assertEqual( agent.get_description_info(), absent.get_description_info() ) self.assertEqual(agent.get_full_info(), absent.get_full_info()) self.assertEqual(agent.get_shortdesc(), absent.get_shortdesc()) self.assertEqual(agent.get_longdesc(), absent.get_longdesc()) self.assertEqual(agent.get_parameters(), absent.get_parameters()) self.assertEqual(agent.get_actions(), absent.get_actions()) self.assertEqual(([], []), absent.validate_parameters_values({ "whatever": "anything" })) pcs-0.9.164/pcs/lib/test/test_validate.py000066400000000000000000001016741326265502500202130ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) from lxml import etree import re from pcs.common import report_codes from pcs.lib import validate from pcs.lib.cib.tools import IdProvider from pcs.lib.errors import ReportItemSeverity as severities from pcs.test.tools.assertions import assert_report_item_list_equal from pcs.test.tools.pcs_unittest import TestCase class ValuesToPairs(TestCase): def test_create_from_plain_values(self): self.assertEqual( { "first": validate.ValuePair("A", "a"), "second": validate.ValuePair("B", "b"), }, validate.values_to_pairs( { "first": "A", "second": "B", }, lambda key, value: value.lower() ) ) def test_keep_pair_if_is_already_there(self): self.assertEqual( { "first": validate.ValuePair("A", "aaa"), "second": validate.ValuePair("B", "b"), }, validate.values_to_pairs( { "first": validate.ValuePair("A", "aaa"), "second": "B", }, lambda key, value: value.lower() ) ) class PairsToValues(TestCase): def test_keep_values_if_is_not_pair(self): self.assertEqual( { "first": "A", "second": "B", }, validate.pairs_to_values( { "first": "A", "second": "B", } ) ) def test_extract_normalized_values(self): self.assertEqual( { "first": "aaa", "second": "B", }, validate.pairs_to_values( { "first": validate.ValuePair( original="A", normalized="aaa" ), "second": "B", } ) ) class OptionValueNormalization(TestCase): def test_return_normalized_value_if_normalization_for_key_specified(self): normalize = validate.option_value_normalization({ "first": lambda value: value.upper() }) self.assertEqual("ONE", normalize("first", "one")) def test_return_value_if_normalization_for_key_unspecified(self): normalize = validate.option_value_normalization({}) self.assertEqual("one", normalize("first", 
"one")) class DependsOn(TestCase): def test_success_when_dependency_present(self): assert_report_item_list_equal( validate.depends_on_option("name", "prerequisite", "type")({ "name": "value", "prerequisite": "value", }), [] ) def test_report_when_dependency_missing(self): assert_report_item_list_equal( validate.depends_on_option( "name", "prerequisite", "type1", "type2" )({ "name": "value", }), [ ( severities.ERROR, report_codes.PREREQUISITE_OPTION_IS_MISSING, { "option_name": "name", "option_type": "type1", "prerequisite_name": "prerequisite", "prerequisite_type": "type2", }, None ), ] ) class IsRequired(TestCase): def test_returns_no_report_when_required_is_present(self): assert_report_item_list_equal( validate.is_required("name", "some type")({"name": "monitor"}), [] ) def test_returns_report_when_required_is_missing(self): assert_report_item_list_equal( validate.is_required("name", "some type")({}), [ ( severities.ERROR, report_codes.REQUIRED_OPTION_IS_MISSING, { "option_names": ["name"], "option_type": "some type", }, None ), ] ) class IsRequiredSomeOf(TestCase): def test_returns_no_report_when_first_is_present(self): assert_report_item_list_equal( validate.is_required_some_of(["first", "second"], "type")({ "first": "value", }), [] ) def test_returns_no_report_when_second_is_present(self): assert_report_item_list_equal( validate.is_required_some_of(["first", "second"], "type")({ "second": "value", }), [] ) def test_returns_report_when_missing(self): assert_report_item_list_equal( validate.is_required_some_of(["first", "second"], "type")({ "third": "value", }), [ ( severities.ERROR, report_codes.REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING, { "option_names": ["first", "second"], "option_type": "type", }, None ), ] ) class ValueCondTest(TestCase): def setUp(self): self.predicate = lambda a: a == "b" def test_returns_empty_report_on_valid_option(self): self.assertEqual( [], validate.value_cond("a", self.predicate, "test")({"a": "b"}) ) def test_returns_empty_report_on_valid_normalized_option(self): self.assertEqual( [], validate.value_cond("a", self.predicate, "test")( {"a": validate.ValuePair(original="C", normalized="b")} ), ) def test_returns_report_about_invalid_option(self): assert_report_item_list_equal( validate.value_cond("a", self.predicate, "test")({"a": "c"}), [ ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "a", "option_value": "c", "allowed_values": "test", }, None ), ] ) def test_support_OptionValuePair(self): assert_report_item_list_equal( validate.value_cond("a", self.predicate, "test")( {"a": validate.ValuePair(original="b", normalized="c")} ), [ ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "a", "option_value": "b", "allowed_values": "test", }, None ), ] ) def test_supports_another_report_option_name(self): assert_report_item_list_equal( validate.value_cond( "a", self.predicate, "test", option_name_for_report="option a" )( {"a": "c"} ), [ ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "option a", "option_value": "c", "allowed_values": "test", }, None ), ] ) def test_supports_forceable_errors(self): assert_report_item_list_equal( validate.value_cond( "a", self.predicate, "test", code_to_allow_extra_values="FORCE" )( {"a": "c"} ), [ ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "a", "option_value": "c", "allowed_values": "test", }, "FORCE" ), ] ) def test_supports_warning(self): assert_report_item_list_equal( validate.value_cond( "a", self.predicate, "test", 
code_to_allow_extra_values="FORCE", allow_extra_values=True )( {"a": "c"} ), [ ( severities.WARNING, report_codes.INVALID_OPTION_VALUE, { "option_name": "a", "option_value": "c", "allowed_values": "test", }, None ), ] ) class ValueEmptyOrValid(TestCase): def setUp(self): self.validator = validate.value_cond("a", lambda a: a == "b", "test") def test_missing(self): assert_report_item_list_equal( validate.value_empty_or_valid("a", self.validator)({"b": "c"}), [ ] ) def test_empty(self): assert_report_item_list_equal( validate.value_empty_or_valid("a", self.validator)({"a": ""}), [ ] ) def test_valid(self): assert_report_item_list_equal( validate.value_empty_or_valid("a", self.validator)({"a": "b"}), [ ] ) def test_not_valid(self): assert_report_item_list_equal( validate.value_empty_or_valid("a", self.validator)({"a": "c"}), [ ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "a", "option_value": "c", "allowed_values": "test", }, None ), ] ) class ValueId(TestCase): def test_empty_id(self): assert_report_item_list_equal( validate.value_id("id", "test id")({"id": ""}), [ ( severities.ERROR, report_codes.EMPTY_ID, { "id": "", "id_description": "test id", }, None ), ] ) def test_invalid_first_char(self): assert_report_item_list_equal( validate.value_id("id", "test id")({"id": "0-test"}), [ ( severities.ERROR, report_codes.INVALID_ID, { "id": "0-test", "id_description": "test id", "invalid_character": "0", "is_first_char": True, }, None ), ] ) def test_invalid_char(self): assert_report_item_list_equal( validate.value_id("id", "test id")({"id": "te#st"}), [ ( severities.ERROR, report_codes.INVALID_ID, { "id": "te#st", "id_description": "test id", "invalid_character": "#", "is_first_char": False, }, None ), ] ) def test_used_id(self): id_provider = IdProvider(etree.fromstring("")) assert_report_item_list_equal( validate.value_id("id", "test id", id_provider)({"id": "used"}), [ ( severities.ERROR, report_codes.ID_ALREADY_EXISTS, { "id": "used", }, None ), ] ) def test_pair_invalid(self): assert_report_item_list_equal( validate.value_id("id", "test id")({ "id": validate.ValuePair("@&#", "") }), [ ( severities.ERROR, report_codes.EMPTY_ID, { # TODO: This should be "@&#". However an old validator # is used and it doesn't work with pairs. "id": "", "id_description": "test id", }, None ), ] ) def test_pair_used_id(self): id_provider = IdProvider(etree.fromstring("")) assert_report_item_list_equal( validate.value_id("id", "test id", id_provider)({ "id": validate.ValuePair("not-used", "used") }), [ ( severities.ERROR, report_codes.ID_ALREADY_EXISTS, { # TODO: This should be "not-used". However an old # validator is used and it doesn't work with pairs. 
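                        # (For context: a ValuePair carries both the
                        # user-supplied "original" form and the "normalized"
                        # form of a value - see ValuesToPairs and
                        # PairsToValues above; reports should ideally show
                        # the original form, hence these TODOs.)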
"id": "used", }, None ), ] ) def test_success(self): id_provider = IdProvider(etree.fromstring("")) assert_report_item_list_equal( validate.value_id("id", "test id", id_provider)({"id": "correct"}), [] ) def test_pair_success(self): id_provider = IdProvider(etree.fromstring("")) assert_report_item_list_equal( validate.value_id("id", "test id", id_provider)({ "id": validate.ValuePair("correct", "correct") }), [] ) class ValueIn(TestCase): def test_returns_empty_report_on_valid_option(self): self.assertEqual( [], validate.value_in("a", ["b"])({"a": "b"}) ) def test_returns_empty_report_on_valid_normalized_option(self): self.assertEqual( [], validate.value_in("a", ["b"])( {"a": validate.ValuePair(original="C", normalized="b")} ), ) def test_returns_report_about_invalid_option(self): assert_report_item_list_equal( validate.value_in("a", ["b"])({"a": "c"}), [ ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "a", "option_value": "c", "allowed_values": ["b"], }, None ), ] ) def test_support_OptionValuePair(self): assert_report_item_list_equal( validate.value_in("a", ["b"])( {"a": validate.ValuePair(original="C", normalized="c")} ), [ ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "a", "option_value": "C", "allowed_values": ["b"], }, None ), ] ) def test_supports_another_report_option_name(self): assert_report_item_list_equal( validate.value_in("a", ["b"], option_name_for_report="option a")( {"a": "c"} ), [ ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "option a", "option_value": "c", "allowed_values": ["b"], }, None ), ] ) def test_supports_forceable_errors(self): assert_report_item_list_equal( validate.value_in("a", ["b"], code_to_allow_extra_values="FORCE")( {"a": "c"} ), [ ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "a", "option_value": "c", "allowed_values": ["b"], }, "FORCE" ), ] ) def test_supports_warning(self): assert_report_item_list_equal( validate.value_in( "a", ["b"], code_to_allow_extra_values="FORCE", allow_extra_values=True )( {"a": "c"} ), [ ( severities.WARNING, report_codes.INVALID_OPTION_VALUE, { "option_name": "a", "option_value": "c", "allowed_values": ["b"], }, None ), ] ) class ValueNonnegativeInteger(TestCase): # The real code only calls value_cond => only basic tests here. def test_empty_report_on_valid_option(self): assert_report_item_list_equal( validate.value_nonnegative_integer("key")({"key": "10"}), [] ) def test_report_invalid_value(self): assert_report_item_list_equal( validate.value_nonnegative_integer("key")({"key": "-10"}), [ ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "key", "option_value": "-10", "allowed_values": "a non-negative integer", }, None ), ] ) class ValueNotEmpty(TestCase): def test_empty_report_on_not_empty_value(self): assert_report_item_list_equal( validate.value_not_empty("key", "description")({"key": "abc"}), [] ) def test_empty_report_on_zero_int_value(self): assert_report_item_list_equal( validate.value_not_empty("key", "description")({"key": 0}), [] ) def test_report_on_empty_string(self): assert_report_item_list_equal( validate.value_not_empty("key", "description")({"key": ""}), [ ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "key", "option_value": "", "allowed_values": "description", }, None ), ] ) class ValuePortNumber(TestCase): # The real code only calls value_cond => only basic tests here. 
def test_empty_report_on_valid_option(self): assert_report_item_list_equal( validate.value_port_number("key")({"key": "54321"}), [] ) def test_report_invalid_value(self): assert_report_item_list_equal( validate.value_port_number("key")({"key": "65536"}), [ ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "key", "option_value": "65536", "allowed_values": "a port number (1-65535)", }, None ), ] ) class ValuePortRange(TestCase): # The real code only calls value_cond => only basic tests here. def test_empty_report_on_valid_option(self): assert_report_item_list_equal( validate.value_port_range("key")({"key": "100-200"}), [] ) def test_report_nonsense(self): assert_report_item_list_equal( validate.value_port_range("key")({"key": "10-20-30"}), [ ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "key", "option_value": "10-20-30", "allowed_values": "port-port", }, None ), ] ) def test_report_bad_start(self): assert_report_item_list_equal( validate.value_port_range("key")({"key": "0-100"}), [ ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "key", "option_value": "0-100", "allowed_values": "port-port", }, None ), ] ) def test_report_bad_end(self): assert_report_item_list_equal( validate.value_port_range("key")({"key": "100-65536"}), [ ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "key", "option_value": "100-65536", "allowed_values": "port-port", }, None ), ] ) class ValuePositiveInteger(TestCase): # The real code only calls value_cond => only basic tests here. def test_empty_report_on_valid_option(self): assert_report_item_list_equal( validate.value_positive_integer("key")({"key": "10"}), [] ) def test_report_invalid_value(self): assert_report_item_list_equal( validate.value_positive_integer("key")({"key": "0"}), [ ( severities.ERROR, report_codes.INVALID_OPTION_VALUE, { "option_name": "key", "option_value": "0", "allowed_values": "a positive integer", }, None ), ] ) class MutuallyExclusive(TestCase): def test_returns_empty_report_when_valid(self): assert_report_item_list_equal( validate.mutually_exclusive(["a", "b"])({"a": "A"}), [], ) def test_returns_mutually_exclusive_report_on_2_names_conflict(self): assert_report_item_list_equal( validate.mutually_exclusive(["a", "b", "c"])({ "a": "A", "b": "B", "d": "D", }), [ ( severities.ERROR, report_codes.MUTUALLY_EXCLUSIVE_OPTIONS, { "option_type": "option", "option_names": ["a", "b"], }, None ), ], ) def test_returns_mutually_exclusive_report_on_multiple_name_conflict(self): assert_report_item_list_equal( validate.mutually_exclusive(["a", "b", "c", "e"])({ "a": "A", "b": "B", "c": "C", "d": "D", }), [ ( severities.ERROR, report_codes.MUTUALLY_EXCLUSIVE_OPTIONS, { "option_type": "option", "option_names": ["a", "b", "c"], }, None ), ], ) class CollectOptionValidations(TestCase): def test_collect_all_errors_from_specifications(self): specification = [ lambda option_dict: ["A{0}".format(option_dict["x"])], lambda option_dict: ["B"], ] self.assertEqual( ["Ay", "B"], validate.run_collection_of_option_validators( {"x": "y"}, specification ) ) class NamesIn(TestCase): def test_return_empty_report_on_allowed_names(self): assert_report_item_list_equal( validate.names_in( ["a", "b", "c"], ["a", "b"], ), [], ) def test_return_error_on_not_allowed_names(self): assert_report_item_list_equal( validate.names_in( ["a", "b", "c"], ["x", "y"], ), [ ( severities.ERROR, report_codes.INVALID_OPTIONS, { "option_names": ["x", "y"], "allowed": ["a", "b", "c"], "option_type": "option", 
"allowed_patterns": [], }, None ) ] ) def test_return_error_with_allowed_patterns(self): assert_report_item_list_equal( validate.names_in( ["a", "b", "c"], ["x", "y"], allowed_option_patterns=["pattern"] ), [ ( severities.ERROR, report_codes.INVALID_OPTIONS, { "option_names": ["x", "y"], "allowed": ["a", "b", "c"], "option_type": "option", "allowed_patterns": ["pattern"], }, None ) ] ) def test_return_error_on_not_allowed_names_without_force_code(self): assert_report_item_list_equal( validate.names_in( ["a", "b", "c"], ["x", "y"], #does now work without code_to_allow_extra_names allow_extra_names=True, ), [ ( severities.ERROR, report_codes.INVALID_OPTIONS, { "option_names": ["x", "y"], "allowed": ["a", "b", "c"], "option_type": "option", "allowed_patterns": [], }, None ) ] ) def test_return_forceable_error_on_not_allowed_names(self): assert_report_item_list_equal( validate.names_in( ["a", "b", "c"], ["x", "y"], option_type="some option", code_to_allow_extra_names="FORCE_CODE", ), [ ( severities.ERROR, report_codes.INVALID_OPTIONS, { "option_names": ["x", "y"], "allowed": ["a", "b", "c"], "option_type": "some option", "allowed_patterns": [], }, "FORCE_CODE" ) ] ) def test_return_warning_on_not_allowed_names(self): assert_report_item_list_equal( validate.names_in( ["a", "b", "c"], ["x", "y"], option_type="some option", code_to_allow_extra_names="FORCE_CODE", allow_extra_names=True, ), [ ( severities.WARNING, report_codes.INVALID_OPTIONS, { "option_names": ["x", "y"], "allowed": ["a", "b", "c"], "option_type": "some option", "allowed_patterns": [], }, None ) ] ) class IsInteger(TestCase): def test_no_range(self): self.assertTrue(validate.is_integer(1)) self.assertTrue(validate.is_integer("1")) self.assertTrue(validate.is_integer(-1)) self.assertTrue(validate.is_integer("-1")) self.assertTrue(validate.is_integer(+1)) self.assertTrue(validate.is_integer("+1")) self.assertTrue(validate.is_integer(" 1")) self.assertTrue(validate.is_integer("-1 ")) self.assertTrue(validate.is_integer("+1 ")) self.assertFalse(validate.is_integer("")) self.assertFalse(validate.is_integer("1a")) self.assertFalse(validate.is_integer("a1")) self.assertFalse(validate.is_integer("aaa")) self.assertFalse(validate.is_integer(1.0)) self.assertFalse(validate.is_integer("1.0")) def test_at_least(self): self.assertTrue(validate.is_integer(5, 5)) self.assertTrue(validate.is_integer(5, 4)) self.assertTrue(validate.is_integer("5", 5)) self.assertTrue(validate.is_integer("5", 4)) self.assertFalse(validate.is_integer(5, 6)) self.assertFalse(validate.is_integer("5", 6)) def test_at_most(self): self.assertTrue(validate.is_integer(5, None, 5)) self.assertTrue(validate.is_integer(5, None, 6)) self.assertTrue(validate.is_integer("5", None, 5)) self.assertTrue(validate.is_integer("5", None, 6)) self.assertFalse(validate.is_integer(5, None, 4)) self.assertFalse(validate.is_integer("5", None, 4)) def test_range(self): self.assertTrue(validate.is_integer(5, 5, 5)) self.assertTrue(validate.is_integer(5, 4, 6)) self.assertTrue(validate.is_integer("5", 5, 5)) self.assertTrue(validate.is_integer("5", 4, 6)) self.assertFalse(validate.is_integer(3, 4, 6)) self.assertFalse(validate.is_integer(7, 4, 6)) self.assertFalse(validate.is_integer("3", 4, 6)) self.assertFalse(validate.is_integer("7", 4, 6)) class IsPortNumber(TestCase): def test_valid_port(self): self.assertTrue(validate.is_port_number(1)) self.assertTrue(validate.is_port_number("1")) self.assertTrue(validate.is_port_number(65535)) self.assertTrue(validate.is_port_number("65535")) 
        self.assertTrue(validate.is_port_number(8192))
        self.assertTrue(validate.is_port_number(" 8192 "))

    def test_bad_port(self):
        self.assertFalse(validate.is_port_number(0))
        self.assertFalse(validate.is_port_number("0"))
        self.assertFalse(validate.is_port_number(65536))
        self.assertFalse(validate.is_port_number("65536"))
        self.assertFalse(validate.is_port_number(-128))
        self.assertFalse(validate.is_port_number("-128"))
        self.assertFalse(validate.is_port_number("abcd"))


class MatchesRegexp(TestCase):
    def test_matches_string(self):
        self.assertTrue(validate.matches_regexp("abcdcba", "^[a-d]+$"))

    def test_matches_regexp(self):
        self.assertTrue(validate.matches_regexp(
            "abCDCBa",
            re.compile("^[a-d]+$", re.IGNORECASE)
        ))

    def test_not_matches_string(self):
        self.assertFalse(validate.matches_regexp("abcDcba", "^[a-d]+$"))

    def test_not_matches_regexp(self):
        self.assertFalse(validate.matches_regexp(
            "abCeCBa",
            re.compile("^[a-d]+$", re.IGNORECASE)
        ))


class IsEmptyString(TestCase):
    def test_empty_string(self):
        self.assertTrue(validate.is_empty_string(""))

    def test_not_empty_string(self):
        self.assertFalse(validate.is_empty_string("a"))
        self.assertFalse(validate.is_empty_string("0"))
        self.assertFalse(validate.is_empty_string(0))


class IsTimeInterval(TestCase):
    def test_no_reports_for_valid_time_interval(self):
        for interval in ["0", "1s", "2sec", "3m", "4min", "5h", "6hr"]:
            self.assertEqual(
                [],
                validate.value_time_interval("a")({"a": interval}),
                "interval: {0}".format(interval)
            )

    def test_reports_about_invalid_interval(self):
        assert_report_item_list_equal(
            validate.value_time_interval("a")({"a": "invalid_value"}),
            [
                (
                    severities.ERROR,
                    report_codes.INVALID_OPTION_VALUE,
                    {
                        "option_name": "a",
                        "option_value": "invalid_value",
                        "allowed_values":
                            "time interval (e.g. 1, 2s, 3m, 4h, ...)"
                        ,
                    },
                    None
                ),
            ]
        )
pcs-0.9.164/pcs/lib/test/test_xml_tools.py000066400000000000000000000133411326265502500204330ustar00rootroot00000000000000from __future__ import (
    absolute_import,
    division,
    print_function,
)

from lxml import etree

from pcs.lib import xml_tools as lib
from pcs.test.tools.assertions import assert_xml_equal
from pcs.test.tools.pcs_unittest import TestCase


class GetSubElementTest(TestCase):
    def setUp(self):
        self.root = etree.Element("root")
        self.sub = etree.SubElement(self.root, "sub_element")

    def test_sub_element_exists(self):
        self.assertEqual(
            self.sub,
            lib.get_sub_element(self.root, "sub_element")
        )

    def test_new_no_id(self):
        assert_xml_equal(
            '',
            etree.tostring(
                lib.get_sub_element(self.root, "new_element")
            ).decode()
        )
        assert_xml_equal(
            """ """,
            etree.tostring(self.root).decode()
        )

    def test_new_with_id(self):
        assert_xml_equal(
            '',
            etree.tostring(
                lib.get_sub_element(self.root, "new_element", "new_id")
            ).decode()
        )
        assert_xml_equal(
            """ """,
            etree.tostring(self.root).decode()
        )

    def test_new_first(self):
        lib.get_sub_element(self.root, "new_element", "new_id", 0)
        assert_xml_equal(
            """ """,
            etree.tostring(self.root).decode()
        )

    def test_new_last(self):
        lib.get_sub_element(self.root, "new_element", "new_id", None)
        assert_xml_equal(
            """ """,
            etree.tostring(self.root).decode()
        )


class UpdateAttributeRemoveEmpty(TestCase):
    def setUp(self):
        self.el = etree.Element(
            "test_element",
            {
                "a": "A",
                "b": "B",
            }
        )

    def assert_xml_equal(self, expected):
        assert_xml_equal(expected, etree.tostring(self.el).decode())

    def test_set_new_attr(self):
        lib.update_attribute_remove_empty(self.el, "c", "C")
        self.assert_xml_equal('')

    def test_change_existing_attr(self):
        lib.update_attribute_remove_empty(self.el, "b", "b1")
        self.assert_xml_equal('')

    def
test_remove_existing_attr(self): lib.update_attribute_remove_empty(self.el, "b", "") self.assert_xml_equal('') def test_zero_does_not_remove(self): lib.update_attribute_remove_empty(self.el, "b", "0") self.assert_xml_equal('') def test_remove_missing_attr(self): lib.update_attribute_remove_empty(self.el, "c", "") self.assert_xml_equal('') def test_more(self): lib.update_attributes_remove_empty(self.el, { "a": "X", "b": "", "c": "C", "d": "", }) self.assert_xml_equal('') class EtreeElementAttributesToDictTest(TestCase): def setUp(self): self.el = etree.Element( "test_element", { "id": "test_id", "description": "some description", "attribute": "value", } ) def test_only_existing(self): self.assertEqual( { "id": "test_id", "attribute": "value", }, lib.etree_element_attibutes_to_dict(self.el, ["id", "attribute"]) ) def test_only_not_existing(self): self.assertEqual( { "_id": None, "not_existing": None, }, lib.etree_element_attibutes_to_dict( self.el, ["_id", "not_existing"] ) ) def test_mix(self): self.assertEqual( { "id": "test_id", "attribute": "value", "not_existing": None, }, lib.etree_element_attibutes_to_dict( self.el, ["id", "not_existing", "attribute"] ) ) class RemoveWhenPointless(TestCase): def assert_count_tags_after_call(self, count, tag, **kwargs): tree = etree.fromstring( """ """ ) xpath=".//{0}".format(tag) lib.remove_when_pointless(tree.find(xpath), **kwargs) self.assertEqual(len(tree.xpath(xpath)), count) def assert_remove(self, tag, **kwargs): self.assert_count_tags_after_call(0, tag, **kwargs) def assert_keep(self, tag, **kwargs): self.assert_count_tags_after_call(1, tag, **kwargs) def test_remove_empty(self): self.assert_remove("empty") def test_keep_with_subelement(self): self.assert_keep("with-subelement") def test_keep_when_attr(self): self.assert_keep("with-attr") def test_remove_when_attr_not_important(self): self.assert_remove("with-attr", attribs_important=False) def test_remove_when_only_id(self): self.assert_remove("with-only-id") pcs-0.9.164/pcs/lib/tools.py000066400000000000000000000036031326265502500155350ustar00rootroot00000000000000from __future__ import ( absolute_import, division, print_function, ) import binascii import os import tempfile def generate_key(random_bytes_count=32): return binascii.hexlify(generate_binary_key(random_bytes_count)) def generate_binary_key(random_bytes_count): return os.urandom(random_bytes_count) def environment_file_to_dict(config): """ Parse systemd Environment file. This parser is simplified version of parser in systemd, because of their poor implementation. Returns configuration in dictionary in format: {