ClusterShell-1.9.2/COPYING.LGPLv2.1

                  GNU LESSER GENERAL PUBLIC LICENSE
                       Version 2.1, February 1999

 Copyright (C) 1991, 1999 Free Software Foundation, Inc.
 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

[This is the first released version of the Lesser GPL. It also counts as the successor of the GNU Library Public License, version 2, hence the version number 2.1.]

                            Preamble

  The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public Licenses are intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users.

  This license, the Lesser General Public License, applies to some specially designated software packages--typically libraries--of the Free Software Foundation and other authors who decide to use it. You can use it too, but we suggest you first think carefully about whether this license or the ordinary General Public License is the better strategy to use in any particular case, based on the explanations below.

  When we speak of free software, we are referring to freedom of use, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish); that you receive source code or can get it if you want it; that you can change the software and use pieces of it in new free programs; and that you are informed that you can do these things.

  To protect your rights, we need to make restrictions that forbid distributors to deny you these rights or to ask you to surrender these rights. These restrictions translate to certain responsibilities for you if you distribute copies of the library or if you modify it.

  For example, if you distribute copies of the library, whether gratis or for a fee, you must give the recipients all the rights that we gave you. You must make sure that they, too, receive or can get the source code. If you link other code with the library, you must provide complete object files to the recipients, so that they can relink them with the library after making changes to the library and recompiling it. And you must show them these terms so they know their rights.

  We protect your rights with a two-step method: (1) we copyright the library, and (2) we offer you this license, which gives you legal permission to copy, distribute and/or modify the library.

  To protect each distributor, we want to make it very clear that there is no warranty for the free library. Also, if the library is modified by someone else and passed on, the recipients should know that what they have is not the original version, so that the original author's reputation will not be affected by problems that might be introduced by others.

  Finally, software patents pose a constant threat to the existence of any free program. We wish to make sure that a company cannot effectively restrict the users of a free program by obtaining a restrictive license from a patent holder.
Therefore, we insist that any patent license obtained for a version of the library must be consistent with the full freedom of use specified in this license.

  Most GNU software, including some libraries, is covered by the ordinary GNU General Public License. This license, the GNU Lesser General Public License, applies to certain designated libraries, and is quite different from the ordinary General Public License. We use this license for certain libraries in order to permit linking those libraries into non-free programs.

  When a program is linked with a library, whether statically or using a shared library, the combination of the two is legally speaking a combined work, a derivative of the original library. The ordinary General Public License therefore permits such linking only if the entire combination fits its criteria of freedom. The Lesser General Public License permits more lax criteria for linking other code with the library.

  We call this license the "Lesser" General Public License because it does Less to protect the user's freedom than the ordinary General Public License. It also provides other free software developers Less of an advantage over competing non-free programs. These disadvantages are the reason we use the ordinary General Public License for many libraries. However, the Lesser license provides advantages in certain special circumstances.

  For example, on rare occasions, there may be a special need to encourage the widest possible use of a certain library, so that it becomes a de-facto standard. To achieve this, non-free programs must be allowed to use the library. A more frequent case is that a free library does the same job as widely used non-free libraries. In this case, there is little to gain by limiting the free library to free software only, so we use the Lesser General Public License.

  In other cases, permission to use a particular library in non-free programs enables a greater number of people to use a large body of free software. For example, permission to use the GNU C Library in non-free programs enables many more people to use the whole GNU operating system, as well as its variant, the GNU/Linux operating system.

  Although the Lesser General Public License is Less protective of the users' freedom, it does ensure that the user of a program that is linked with the Library has the freedom and the wherewithal to run that program using a modified version of the Library.

  The precise terms and conditions for copying, distribution and modification follow. Pay close attention to the difference between a "work based on the library" and a "work that uses the library". The former contains code derived from the library, whereas the latter must be combined with the library in order to run.

                  GNU LESSER GENERAL PUBLIC LICENSE
   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

  0. This License Agreement applies to any software library or other program which contains a notice placed by the copyright holder or other authorized party saying it may be distributed under the terms of this Lesser General Public License (also called "this License"). Each licensee is addressed as "you".

  A "library" means a collection of software functions and/or data prepared so as to be conveniently linked with application programs (which use some of those functions and data) to form executables.

  The "Library", below, refers to any such software library or work which has been distributed under these terms.
A "work based on the Library" means either the Library or any derivative work under copyright law: that is to say, a work containing the Library or a portion of it, either verbatim or with modifications and/or translated straightforwardly into another language. (Hereinafter, translation is included without limitation in the term "modification".) "Source code" for a work means the preferred form of the work for making modifications to it. For a library, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the library. Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running a program using the Library is not restricted, and output from such a program is covered only if its contents constitute a work based on the Library (independent of the use of the Library in a tool for writing it). Whether that is true depends on what the Library does and what the program that uses the Library does. 1. You may copy and distribute verbatim copies of the Library's complete source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and distribute a copy of this License along with the Library. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Library or any portion of it, thus forming a work based on the Library, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) The modified work must itself be a software library. b) You must cause the files modified to carry prominent notices stating that you changed the files and the date of any change. c) You must cause the whole of the work to be licensed at no charge to all third parties under the terms of this License. d) If a facility in the modified Library refers to a function or a table of data to be supplied by an application program that uses the facility, other than as an argument passed when the facility is invoked, then you must make a good faith effort to ensure that, in the event an application does not supply such function or table, the facility still operates, and performs whatever part of its purpose remains meaningful. (For example, a function in a library to compute square roots has a purpose that is entirely well-defined independent of the application. Therefore, Subsection 2d requires that any application-supplied function or table used by this function must be optional: if the application does not supply it, the square root function must still compute square roots.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Library, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. 
But when you distribute the same sections as part of a whole which is a work based on the Library, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it.

  Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Library.

  In addition, mere aggregation of another work not based on the Library with the Library (or with a work based on the Library) on a volume of a storage or distribution medium does not bring the other work under the scope of this License.

  3. You may opt to apply the terms of the ordinary GNU General Public License instead of this License to a given copy of the Library. To do this, you must alter all the notices that refer to this License, so that they refer to the ordinary GNU General Public License, version 2, instead of to this License. (If a newer version than version 2 of the ordinary GNU General Public License has appeared, then you can specify that version instead if you wish.) Do not make any other change in these notices.

  Once this change is made in a given copy, it is irreversible for that copy, so the ordinary GNU General Public License applies to all subsequent copies and derivative works made from that copy.

  This option is useful when you wish to copy part of the code of the Library into a program that is not a library.

  4. You may copy and distribute the Library (or a portion or derivative of it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange.

  If distribution of object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place satisfies the requirement to distribute the source code, even though third parties are not compelled to copy the source along with the object code.

  5. A program that contains no derivative of any portion of the Library, but is designed to work with the Library by being compiled or linked with it, is called a "work that uses the Library". Such a work, in isolation, is not a derivative work of the Library, and therefore falls outside the scope of this License.

  However, linking a "work that uses the Library" with the Library creates an executable that is a derivative of the Library (because it contains portions of the Library), rather than a "work that uses the library". The executable is therefore covered by this License. Section 6 states terms for distribution of such executables.

  When a "work that uses the Library" uses material from a header file that is part of the Library, the object code for the work may be a derivative work of the Library even though the source code is not. Whether this is true is especially significant if the work can be linked without the Library, or if the work is itself a library. The threshold for this to be true is not precisely defined by law.

  If such an object file uses only numerical parameters, data structure layouts and accessors, and small macros and small inline functions (ten lines or less in length), then the use of the object file is unrestricted, regardless of whether it is legally a derivative work. (Executables containing this object code plus portions of the Library will still fall under Section 6.)

  Otherwise, if the work is a derivative of the Library, you may distribute the object code for the work under the terms of Section 6. Any executables containing that work also fall under Section 6, whether or not they are linked directly with the Library itself.

  6. As an exception to the Sections above, you may also combine or link a "work that uses the Library" with the Library to produce a work containing portions of the Library, and distribute that work under terms of your choice, provided that the terms permit modification of the work for the customer's own use and reverse engineering for debugging such modifications.

  You must give prominent notice with each copy of the work that the Library is used in it and that the Library and its use are covered by this License. You must supply a copy of this License. If the work during execution displays copyright notices, you must include the copyright notice for the Library among them, as well as a reference directing the user to the copy of this License. Also, you must do one of these things:

    a) Accompany the work with the complete corresponding machine-readable source code for the Library including whatever changes were used in the work (which must be distributed under Sections 1 and 2 above); and, if the work is an executable linked with the Library, with the complete machine-readable "work that uses the Library", as object code and/or source code, so that the user can modify the Library and then relink to produce a modified executable containing the modified Library. (It is understood that the user who changes the contents of definitions files in the Library will not necessarily be able to recompile the application to use the modified definitions.)

    b) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (1) uses at run time a copy of the library already present on the user's computer system, rather than copying library functions into the executable, and (2) will operate properly with a modified version of the library, if the user installs one, as long as the modified version is interface-compatible with the version that the work was made with.

    c) Accompany the work with a written offer, valid for at least three years, to give the same user the materials specified in Subsection 6a, above, for a charge no more than the cost of performing this distribution.

    d) If distribution of the work is made by offering access to copy from a designated place, offer equivalent access to copy the above specified materials from the same place.

    e) Verify that the user has already received a copy of these materials or that you have already sent this user a copy.

  For an executable, the required form of the "work that uses the Library" must include any data and utility programs needed for reproducing the executable from it. However, as a special exception, the materials to be distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable.

  It may happen that this requirement contradicts the license restrictions of other proprietary libraries that do not normally accompany the operating system. Such a contradiction means you cannot use both them and the Library together in an executable that you distribute.

  7. You may place library facilities that are a work based on the Library side-by-side in a single library together with other library facilities not covered by this License, and distribute such a combined library, provided that the separate distribution of the work based on the Library and of the other library facilities is otherwise permitted, and provided that you do these two things:

    a) Accompany the combined library with a copy of the same work based on the Library, uncombined with any other library facilities. This must be distributed under the terms of the Sections above.

    b) Give prominent notice with the combined library of the fact that part of it is a work based on the Library, and explaining where to find the accompanying uncombined form of the same work.

  8. You may not copy, modify, sublicense, link with, or distribute the Library except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, link with, or distribute the Library is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.

  9. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Library or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Library (or any work based on the Library), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Library or works based on it.

  10. Each time you redistribute the Library (or any work based on the Library), the recipient automatically receives a license from the original licensor to copy, distribute, link with or modify the Library subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties with this License.

  11. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Library at all. For example, if a patent license would not permit royalty-free redistribution of the Library by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Library.

  If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply, and the section as a whole is intended to apply in other circumstances.

  It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice.

  This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.

  12. If the distribution and/or use of the Library is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Library under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.

  13. The Free Software Foundation may publish revised and/or new versions of the Lesser General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

  Each version is given a distinguishing version number. If the Library specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Library does not specify a license version number, you may choose any version ever published by the Free Software Foundation.

  14. If you wish to incorporate parts of the Library into other free programs whose distribution conditions are incompatible with these, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.

                            NO WARRANTY

  15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

                     END OF TERMS AND CONDITIONS

           How to Apply These Terms to Your New Libraries

  If you develop a new library, and you want it to be of the greatest possible use to the public, we recommend making it free software that everyone can redistribute and change. You can do so by permitting redistribution under these terms (or, alternatively, under the terms of the ordinary General Public License).

  To apply these terms, attach the following notices to the library. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.

    Copyright (C) <year> <name of author>

    This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version.

    This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.

    You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

Also add information on how to contact you by electronic and paper mail.

You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the library, if necessary. Here is a sample; alter the names:

  Yoyodyne, Inc., hereby disclaims all copyright interest in the library `Frob' (a library for tweaking knobs) written by James Random Hacker.

  <signature of Ty Coon>, 1 April 1990
  Ty Coon, President of Vice

That's all there is to it!


ClusterShell-1.9.2/ChangeLog

2023-09-29  S. Thiell

  * Version 1.9.2 released. The main changes are listed below.
  * CLI: fix line buffering with Python 3 (#528)
  * clush: fix --[r]copy dest when --dest is omitted (#525)
  * NodeUtils: allow null values in cluster.yaml (#533)
  * Topology: check that node groups/wildcards are non-empty (#527)
  * rpm: xcat.conf.example missing (#526)

2023-02-09  S. Thiell

  * Version 1.9.1 released. The main changes are listed below.
  * clush: select proper last parsed config file (#511) (#512)
  * setup.py: update download url and remove python 2.6 support (#508)
  * setup.py: improvements for pip install and venv (#510)
  * doc: correct typo 'sterr' (#513)
  * Fix typos found with codespell (#514)
  * RangeSet: support negative ranges (#515) (#518)
  * RangeSet: remove duplicate intiter() definition (#519)

2022-11-25  S. Thiell

  * Version 1.9 released. The main changes are listed below.
  * clush: add --mode support with sudo and sshpass examples (#198, #234, #423)
  * clush: add same arguments '--outdir=OUTDIR' and '--errdir=ERRDIR' as pssh (#470)
  * clush: always close stdin stream of worker when it is not used (#478)
  * clush: use daemon attribute instead of deprecated setDaemon() (#479)
  * slurm.conf.example: filter out more Slurm node state flags (#469)
  * NodeSet: add special notation @@source to expand group names (#468)
  * RangeSet: nD folding optimization (#485)
  * RangeSet: support ranges with zero padding of mixed lengths (#293, #473)
  * RangeSet: add explicit intiter() method to iterate over integers (#476); see the sketch after this list
  * EngineClient: EnginePort improvements, add event ev_port_start() (#481)
  * Tree: fix start and bufferize early writes (#482)
  * Tree: fix error with intermediate gateways (#471)
  * Defaults: introduce CLUSTERSHELL_CFGDIR (#483)
  * Fix for python-3.10 (#484)
  * Worker: deprecate old EventHandler method signatures (#358)
  * Worker: remove old last_*() methods (#226)
  * tests: misc. improvements (#110, #501, #502, #503)
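
A minimal sketch of the RangeSet items above (a hedged example: it assumes ClusterShell >= 1.9 for intiter(); the range values are placeholders):

    from ClusterShell.RangeSet import RangeSet

    rs = RangeSet("001-009/2")     # zero-padded range with step 2
    print(rs.padding)              # auto-detected zero-padding width: 3
    print(list(rs.striter()))      # padded strings: ['001', '003', ..., '009']
    print(list(rs.intiter()))      # plain integers (1.9+): [1, 3, 5, 7, 9]
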
2021-11-03  S. Thiell

  * Version 1.8.4 released. The main changes are listed below.
  * RangeSetND: fix padding info when slicing using __getitem__() (#429)
  * Defaults: allow out-of-tree worker modules
  * NodeUtils: allow YAML list to declare node groups (#438)
  * Tree: use default local_worker and allow overriding Defaults
  * Worker/Rsh: return maxrc properly for Rsh Worker (#448)
  * xCAT binding: add support for spaces in group names (#459)
  * CLI/Clush: avoid python3 error with no stdin (#460)
  * CLI/Clush: use os.read() in stdin thread (#463)
  * CLI/Clush: add maxrc option to clush.conf (#451)
  * CLI/Display: add support for NO_COLOR and CLICOLOR (#428) (#432)

2019-12-01  S. Thiell

  * Version 1.8.3 released. The main changes are listed below.
  * NodeUtils: use yaml.safe_load() instead of default loader (#417)
  * Tree-mode doesn't work with multi-hop gateways (#419)
  * Silent error on Python version mismatch with gateway (#388)
  * clush: support --worker/-R when topology.conf is present (#386)

2019-08-12  S. Thiell

  * Version 1.8.2 released. The main changes are listed below.
  * CLI/Display: use utf-8 (or non-ascii) encoding (#400)
  * Defaults: add NodeSet's default folding axis (#401)
  * EngineTimer: fix misuse of epsilon when firing timer (#399)
  * xCAT group bindings: performance update (#398)
  * Slurm group bindings: fix issue where job ids were used instead of user names (#405)
  * Doc: disable smartquotes (#402) and fix typo, %r is not for rank (#409)
  * Packaging: add support for RHEL8 (#413)

2018-10-30  S. Thiell

  * Version 1.8.1 released. The main changes are listed below.
  * Tree: support offline gateways (#260)
  * CLI: added the following command line options (#336): --conf to specify an alternative clush.conf (clush only), and --groupsconf to specify an alternative groups.conf (all CLIs)
  * NodeSet: speed-up nodeset parsing (#370)
  * EventHandler: reinstate ev_error and ev_timeout as deprecated (#377)
  * nodeset/cluset CLI: allow literal new line in -S, so both -S "\n" and -S $'\n' will work
  * nodeset/cluset CLI: handle multiline shell arguments in options (#394)

2017-10-23  S. Thiell

  * Version 1.8 released.

2017-10-18  S. Thiell

  * 1.8 RC1 (1.7.91) release.

2017-10-14  S. Thiell

  * NodeSet.py: add node wildcard support (ticket #349).

2017-10-02  S. Thiell

  * 1.8 beta2 (1.7.82) release.
  * CLI: make color output legible on dark backgrounds (#334)
  * Event API 2.0: new ev_* method signatures (#232)
  * CLI/Clush.py: initialize logging earlier (#348)
  * NodeUtils: ignore YAML group files with permission error (#348)
  * CLI/Clush.py: fix mishandled broken pipe in Python 3
  * CLI/Nodeset: display warning on misplaced set operation (#318)

2017-09-01  S. Thiell

  * 1.8 beta1 (1.7.81) release.

2017-08-09  S. Thiell

  * Added Python 3 support (#337).

2017-07-27  S. Thiell

  * NodeUtils.py: fix external reverse node group upcall caching issue.

2017-06-27  S. Thiell

  * CLI/Clush.py: add -n as an alias of --nostdin (#333).

2017-04-29  S. Thiell

  * NodeSet.py: allow fully numeric node names (ticket #338).

2017-02-21  S. Thiell

  * Worker/Tree.py: fix defect in file copying code when a destination target directory is provided (ticket #332).

2016-12-20  S. Thiell

  * Version 1.7.3 released.
  * conf/groups.conf.d: add an external group source example file for xCAT static groups (xcat.conf.example).

2016-12-18  S. Thiell

  * Clush.py: fix sorting issue with clush -L (ticket #326).

2016-11-06  S. Thiell

  * cluset: add cluset command with doc to avoid a conflict with xCAT's nodeset command (ticket #300).

2016-10-12  All contributors

  * Change license from "CeCILL-C V1" to "LGPL v2.1 or later".

2016-10-04  S. Thiell

  * setup.py: remove scripts/*; use console_scripts instead.

2016-10-02  S. Thiell

  * Tree.py: in copy mode, do not send tar data to local targets, but only to remote ones; this fixes broken pipe errors (ticket #319).
  * Engine.py: implement basic per-worker fanout (private), allowing the use of fanout=1 in tree mode (ticket #322).

2016-06-18  S. Thiell

  * Version 1.7.2 released.

2016-06-07  D. Martinet

  * Clush/Nodeset: add --pick N option (ticket #311).

2016-05-22  S. Thiell

  * Tree.py: fix the tracking of gateway active targets (ticket #308).
  * EngineClient.py: handle broken pipe on write() (ticket #196).

2016-04-24  S. Thiell

  * NodeSet.py: allow empty string as a valid argument for empty NodeSet objects (ticket #294).

2016-02-28  S. Thiell

  * Version 1.7.1 released.

2016-02-27  S. Thiell

  * Worker/Tree.py: implement tree mode reverse copy using tar commands to fix clush --rcopy (ticket #290).
  * Communication.py: remove the 76-char base64 encoding fixed length restriction for tree XML payload communication. The default max length is now 64K, which gives good results. The environment variable 'CLUSTERSHELL_GW_B64_LINE_LENGTH' is propagated to gateways and may be used to override this value.

2016-02-12  S. Thiell

  * RangeSet.py and NodeSet.py: fix bad 0-padding handling by RangeSetND or NodeSet objects in nD (ticket #286).

2016-02-09  S. Thiell

  * NodeSet.py: fix parser issue when brackets were used with a nodeset starting with a digit (ticket #284).

2015-11-30  S. Thiell

  * CLI/Nodeset.py: fix --output-format / -O when folding (-f) by applying the provided format to each node (ticket #277).

2015-11-10  S. Thiell

  * Version 1.7 released.

2015-11-01  S. Thiell

  * Clush.py: added -P/--progress to force display of the live progress indicator and display global write bandwidth when writing standard input.

2015-10-25  S. Thiell

  * Clush.py: added --option/-O clush.conf settings override (pull request #248).

2015-10-18  S. Thiell

  * Clush.py: added --hostfile command line option to specify a file containing single hosts, node sets or node groups (ticket #235).

2015-10-16  S. Thiell

  * NodeSet.py: enhanced the parser to recognize nodesets with brackets having leading/trailing digits, as in "prod-00[01-99]" (ticket #228).
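
A short sketch of the bracket-parsing enhancement in the entry above (the node names are placeholders):

    from ClusterShell.NodeSet import NodeSet

    # Leading/trailing digits around brackets are now recognized:
    ns = NodeSet("prod-00[01-99]")
    print(len(ns))          # 99
    print(ns[0], ns[-1])    # prod-0001 prod-0099
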
2015-08-29  S. Thiell

  * CLI/Nodeset.py: added --axis option to choose nD fold axis (ticket #269).
  * NodeSet.py: added fold_axis public member to NodeSetBase, along with an expand algorithm used when casting to string to choose the nD fold axis (ticket #269).

2015-08-28  S. Thiell

  * CLI/Config.py: better per-user clush.conf support. clush now also checks for $XDG_CONFIG_HOME/clustershell/clush.conf and $HOME/.local/etc/clustershell/clush.conf (ticket #111).

2015-08-27  S. Thiell

  * CLI/Nodeset.py: add --list-all / -L to list groups from all group sources (ticket #266). If repeated, it has the same behavior as -l.
  * NodeUtils.py: add support for built-in groups definition files based on YAML. Added an autodir configuration option in groups.conf to declare a directory where .yaml files are automatically loaded. Example available in groups.d/cluster.yaml.example. Added support for groups.conf sections with multiple source names separated by commas. This is also the case for groups.conf.d/*.conf extensions. Also added a new upcall command $SOURCE variable that is replaced by the calling source name before execution. Finally, /etc/clustershell/groups is now deprecated and replaced by /etc/clustershell/groups.d/local.cfg for new installations (ticket #258).

2015-07-07  S. Thiell

  * CLI/Nodeset.py: add --autostep=auto and --autostep=x% option (#161).
  * NodeSet: add autostep property to allow changing the way every RangeSet of a NodeSet object is displayed (e.g. node[2-8/2] instead of node[2,4,6,8]). The autostep value is the minimum number of indexes found at equal distance from each other inside a range before NodeSet starts to use this syntax.
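
A minimal sketch of the autostep property described above (the threshold value is illustrative):

    from ClusterShell.NodeSet import NodeSet

    ns = NodeSet("node[2,4,6,8]")
    print(ns)          # node[2,4,6,8] -- autostep disabled by default
    ns.autostep = 4    # fold when >= 4 indexes are equally spaced
    print(ns)          # node[2-8/2]
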
2015-05-18  S. Thiell

  * Version 1.6.92 released.

2015-04-10  S. Thiell

  * Tree: implement task.copy() in tree mode using a temporary tar file.

2015-04-01  S. Thiell

  * Tree: allow local command execution on gateways by adding remote=False to task.shell()/run(). In practice with this patch, we can now easily execute local commands on (remote) gateways using a node argument like `ipmitool -H %h` to spread the load between gateways.

2015-03-24  S. Thiell

  * NodeSet.py: disallow opening bracket after digit (ticket #228).

2015-03-23  S. Thiell

  * Clush.py: warn the user of possible use of shell globbing, especially when using brackets and bash without GLOBIGNORE set (ticket #225).
  * Clush.py: fix --diff against null content (ticket #214).

2015-03-19  S. Thiell

  * Task.py: make max_retcode() return None on no-op. Until now, max_retcode() returned 0 by default, even when no command was able to finish (for example, due to a reached timeout). This behavior did not allow users to distinguish between successful commands and such a no-op.

2015-03-11  S. Thiell

  * Worker.py: introduce StreamWorker as a generic worker class to manage a set of streams (using one EngineClient with multiple I/O streams internally). It's a concrete class that is now used in Gateway.py to manage I/O from the parent host in tree propagation mode. Also changed WorkerSimple (and thus WorkerPopen) to inherit from StreamWorker.

2014-05-20  S. Thiell

  * EngineClient.py: code improvements to support multiple customizable I/O streams per EngineClient in different modes, each being named and having its own read/write buffers and attributes.

2014-04-30  A. Degremont

  * Clush.py: add a 'worker' option to switch the default worker (ticket #221).

2014-04-23  S. Thiell

  * EPoll.py: close the epoll control file descriptor when the engine is released.

2014-01-26  A. Degremont

  * NodeUtils.py: add group source caching expiration (ticket #98).

2014-01-16  S. Thiell

  * RangeSet.py: multidimensional RangeSet support (new RangeSetND class). Created the RangeSetND class to manage a matrix of RangeSet objects. Folding of such objects is quite complex and time consuming. A special optimization is provided when only one dimension is varying. Patch by aurelien.degremont@cea.fr and stephane.thiell@cea.fr.
  * NodeSet.py: multidimensional nodeset support. Added support of RangeSetND to NodeSet. Optimized NodeSet so that 1D NodeSet objects still use RangeSet (ticket #92). Also benefits from the RangeSetND optimization when only one dimension is varying. Patch by aurelien.degremont@cea.fr and stephane.thiell@cea.fr.

2014-01-14  S. Thiell

  * NodeSet.py: fix and clean fromall()/@* magic and add a resolver option to grouplist()'s NodeSet module function.

2014-01-14  S. Thiell

  * NodeSet.py: define a module API to access and set the group resolver used for @-prefixed (e.g. '@group') resolution. This circumvents accessing and setting the NodeSet module's variable 'RESOLVER_STD_GROUP' directly, which is not convenient and is error prone. The new functions are std_group_resolver() and set_std_group_resolver(); see the sketch after this entry. Updated User Guide.
  * CLI/Clush.py: ignore IOError on the stdin reader thread, but print a warning in verbose or debug mode (ticket #201).
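
A hedged sketch of the module-level resolver API named above ('@compute' assumes a configured group source; restoring the default via set_std_group_resolver(None) is an assumption):

    from ClusterShell.NodeSet import (NodeSet, std_group_resolver,
                                      set_std_group_resolver)

    resolver = std_group_resolver()   # resolver used for '@group' syntax
    print(NodeSet("@compute"))        # resolved through that resolver
    # set_std_group_resolver(my_resolver)  # install a custom GroupResolver
    # set_std_group_resolver(None)         # assumed to restore the default
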
2014-01-06  S. Thiell

  * NodeSet.py: fix the internal implementation of NodeSet.contiguous(): as NodeSet is mutable, we should avoid using the same NodeSet instance in NodeSet.contiguous() for different NodeSet values.

2013-12-17  S. Thiell

  * Task.py: fix task.iter_buffers() and worker.iter_buffers() to allow the optional argument match_keys to be an empty list for convenience. It should be set to None to disable the match_keys check. Also check that match_keys is a true key/node sequence and not a string.

2013-11-05  S. Thiell

  * EngineClient.py: hide unwanted debug messages: when aborting a task, cleanup of associated resources may lead to dropped inter-task messages through the EnginePort mechanism. We now only display the associated warning messages when debugging is enabled.
  * Task.py: fix abort() race condition.
  * CLI/Clush.py: fix a defect to allow the use of command timeout when copying files (with clush -u delay -c ..., ticket #220).

2013-11-04  A. Degremont

  * Worker/Rsh.py: add an Rsh worker. It is compatible with rsh clones like mrsh/krsh (ticket #216).
  * Task.py: add a 'worker' default option for Task objects. It is used in Task.shell() and Task.copy().

2012-09-13  S. Thiell

  * Engine.py: allow EngineTimer with an immediate fire date, that is, a fire delay of 0s. Obviously not fired in time, such a timer will still be armed and fired as soon as possible (ticket #200).

2012-08-27  S. Thiell

  * Engine.py: fix a catch-all used in case of a KeyboardInterrupt exception during the runloop, which resulted in ghost engine clients and results possibly not cleaned properly (ticket #199).

2012-08-01  S. Thiell

  * CLI/Clush.py: fix clush_exit() side effects thanks to the latest task termination improvements (ticket #185).
  * Task.py: avoid a termination race condition when using multiple threads and calling abort()+join() from another thread (ticket #197).

2012-07-09  S. Thiell

  * NodeSet.py: "all nodes" extended pattern support with @* (ticket #193).

2012-04-08  S. Thiell

  * Version 1.6 released.
  * doc/guide: add ClusterShell User and Programming Guide LaTeX source to the repository.

2012-04-07  S. Thiell

  * doc/examples/check_nodes.py: add a simple example of an event-driven script.

2012-03-31  S. Thiell

  * CLI/Nodeset.py: allow -a and common nodeset operations when using -l to list belonging groups (a new 1.6 feature, see ticket #162).

2012-03-29  S. Thiell

  * Worker/Worker.py: added documentation for worker.current_[node,msg,errmsg,rc] variables (ticket #160).
  * Task.py: timeout parameters better explained (ticket #157).

2012-03-28  S. Thiell

  * CLI/OptionParser.py: add --diff option to enable diff display between gathered outputs. Enabled in clush and clubak (ticket #176).
  * CLI/Display.py: add _print_diff() and flush() methods.
  * Task.py: initialize MsgTree instances in the constructor according to default values in order to allow no-op calls to buffer getters before resume() (ticket #186).

2012-03-26  S. Thiell

  * CLI/Clush.py: fix clush --[r]copy behavior when no source directory is specified (ticket #172).
  * CLI/Clush.py: fix interactive mode gather/standard toggle error when using the special character '=' (ticket #171).
  * CLI/Clubak.py: add -v/-q verbosity options (ticket #174).

2012-03-24  S. Thiell

  * CLI/Clubak.py: add --interpret-keys=never,always,auto option to clubak to allow a more generic usage of clubak, i.e. even in cases where keys are not nodeset compliant (ticket #180).

2012-03-21  S. Thiell

  * conf/groups.conf: fix group cross-reference issue (ticket #183); we now use sed commands instead of awk ones in this default groups.conf file.

2012-03-18  S. Thiell

  * conf/groups.conf: fix default source regexp for mawk (ticket #178).
  * Packaging: add groups.conf.d directory and sample files.

2012-03-17  S. Thiell

  * CLI/Nodeset.py: add support for -l[ll] to list belonging groups (CLI interface to NodeSet.groups()) (ticket #162).
  * NodeSet: add groups() public method to list the groups a nodeset belongs to.

2012-03-15  S. Thiell

  * NodeUtils.py: add groupsdir option (ticket #179).

2012-03-14  S. Thiell

  * CLI/Nodeset.py: add --contiguous splitting option (ticket #173).
  * NodeSet.py: add contiguous() iterator.
  * RangeSet.py: add contiguous() iterator; see the sketch after this entry.
  * RangeSet.py: allow a slice object in the fromone() constructor.
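
A small sketch of the contiguous() iterators added above (the node set is a placeholder):

    from ClusterShell.NodeSet import NodeSet

    ns = NodeSet("node[1-3,7-9,12]")
    for contig in ns.contiguous():   # yields contiguous sub-nodesets
        print(contig)
    # node[1-3]
    # node[7-9]
    # node12
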
2012-02-26  S. Thiell

  * Gateway.py: improved logging facility, configurable through the CLUSTERSHELL_GW_LOG_DIR and CLUSTERSHELL_GW_LOG_LEVEL environment variables from the root node.
  * Communication.py: messages are now transferred in the XML payload instead of the 'output' attribute, for improved handling of multi-line messages in StdOutMessage and StdErrMessage.

2012-02-24  S. Thiell

  * Worker/EngineClient.py: fix gateway write performance issue, as seen on a very large cluster with a no-grooming test case and lots of small messages sent, by calling os.write() as soon as possible (it might safely fail if not ready, as we are in non-blocking mode).
  * NodeSet.py: internal parsing optimization by adding a "should copy RangeSet object?" flag to the NodeSetBase constructor in order to save useless but slightly costly RangeSet.copy() calls.
  * NodeSet.py: small rangeset parsing optimization in the single node string parsing code.

2012-02-19  S. Thiell

  * NodeSet.py: add NodeSet.nsiter(), a fast iterator on nodes as NodeSet objects, to avoid object-to-string-to-object conversion in some cases when using __iter__() -- like in PropagationTreeRouter.dispatch().

2012-02-15  S. Thiell

  * Clush.py: add --topology hidden option to enable the V2 tree propagation technology preview.

2012-02-01  S. Thiell

  * RangeSet.py: fix RangeSet.__setstate__() for proper object unpickling from older RangeSet versions. Add unpickling tests.

2012-01-28  S. Thiell

  * RangeSet.py: discard the AVL-tree based implementation, as we noticed that the built-in set is much faster. The new implementation is based on the built-in set, and slightly changes padding and __iter__() behaviors. The padding value is now accessible and settable at any time via a public variable "padding". Auto-detection of padding is still available, but it is used as a convenience for printing range sets. Moreover, all set-like operations are now based only on integers, ignoring RangeSet's padding value. __iter__() has been changed to iterate over sorted inner set integers, instead of string items. A new method striter() is available to iterate over string padding-enabled items. These changes allow us to offer a full set-like API for RangeSet (new methods like isdisjoint(), pop(), etc. are available according to your Python version). Also, a new constructor that takes any iterable of integers is available. Finally, this implementation should be much faster than all previous ones, especially for large range sets (ten thousand elements and more) with lots of holes.

2012-01-10  S. Thiell

  * RangeSet.py: move the RangeSet class from NodeSet.py to this new module dedicated to scalable management of cluster range sets (tens of thousands of disjoint ranges). Change the internal algorithm used to manage ranges from a list to an AVL tree based on the bintrees project's avltree implementation. Got rid of the expand/fold() methods that don't scale; all set-like methods have been rewritten using the AVL tree.

2012-01-04  S. Thiell

  * Task.py: change the behavior of shell()'s tree=None (auto) parameter: added a Task default parameter "auto_tree", defaulting to False and checked by shell() when tree=None. This means that even with a valid topology configuration file, the user has to explicitly enable tree mode for now. This is for the next 1.6 release and should be changed to True in version 2.0.

2011-11-28  S. Thiell

  * Task.py: fix the 'tree' option of shell(), which can be either True (force enable tree mode), False (disable tree mode) or None (automatic).

2011-11-24  S. Thiell

  * CLI/Clush.py: enable tree mode by default with the grooming option.
  * Worker/Tree.py: integrate WorkerTree within the ClusterShell Engine framework; it will be used instead of PropagationTree.
  * Engine/Engine.py: inhibit any engine client changes when the client is not registered.
  * Topology.py: change the DEFAULT section to a Main section in topology.conf. Cosmetic changes.

2011-06-09  S. Thiell

  * Version 1.5.1 released.
  * NodeSet.py: added a workaround to allow pickling/unpickling of RangeSet objects for Python 2.4 (ticket #156).

2011-06-08  S. Thiell

  * Version 1.5 released (Sedona release).

2011-06-07  S. Thiell

  * MsgTree.py: improved the MsgTree API to lighten updates of keys associated to tree elements (ticket #131).
  * CLI/Clubak.py: updated for the new MsgTree API and added a -F/--fast switch to enable preloading of whole messages to speed up processing, at the cost of increased memory consumption (ticket #131).

2011-05-31  S. Thiell

  * NodeSet.py: optimized the NodeSet.fromlist() method by adding an updaten() method, which is roughly O(num_patterns).
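
A brief sketch of NodeSet.fromlist() mentioned above (hypothetical node names):

    from ClusterShell.NodeSet import NodeSet

    ns = NodeSet.fromlist(["node1", "node2", "node5", "node[7-9]"])
    print(ns)   # node[1-2,5,7-9]
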
2011-05-29  S. Thiell

  * NodeSet.py: fixed a missing autostep check in _fold() which could lead to autostep not being taken into account (ticket #150).
  * Worker/Ssh.py: fix the scp user option in the Scp class (ticket #152).
  * Engine/*.py: internal engine design change: do not retry the engine event loop on any EngineClient registration changes, so that more events are processed per chunk (should be faster), and add a loop iteration counter to work around internally re-used FDs (finalizes ticket #153).

2011-05-26  S. Thiell

  * Worker/EngineClient.py: enable the fastsubprocess module, and use file descriptors instead of file objects everywhere (ticket #153).
  * Worker/fastsubprocess.py: a faster, relaxed version of Python 2.6's subprocess.py with non-blocking fd support.

2011-05-15  S. Thiell

  * Engine/Engine.py: improved the start_all() fanout algorithm by adding a separate pending clients list.
  * Created 1.5 branch.

2011-03-19  S. Thiell

  * Version 1.4.3 released.
  * CLI/Nodeset.py: make the stdin '-' keyword work when used for -i/x/X operations (ticket #148).
  * CLI/Clush.py: fixed an issue when using clush -bL (missing argument) due to the latest 1.4.2 changes. Added tests/ClushScriptTest.py to detect that in the future (ticket #147).

2011-02-15  S. Thiell

  * Version 1.4.2 released.

2011-03-12  S. Thiell

  * NodeSet.py: fixed issues with object copying, so got rid of the copy module and added optimized RangeSet.copy() and NodeSet.copy() methods (ticket #146).

2011-03-09  S. Thiell

  * CLI/Clush.py: added a running progress indicator for --[r]copy commands.

2011-03-08  S. Thiell

  * CLI/Clush.py: improved the -v switch (closes ticket #100: print live node output plus noderange-grouped output at the end).
  * CLI/Clubak.py: add -T,--tree message tree mode option (ticket #144).
  * MsgTree.py: class initialization variant (trace mode) to keep track of old keys/nodes for each message (part of #144).

2011-03-06  S. Thiell

  * CLI/Clush.py: implement clush -L (not -bL) to order output by node name, like clubak -L (ticket #141).
  * CLI/Nodeset.py: added -I/--slice command option to select node(s) by index(es) or RangeSet-style slice (ticket #140).
  * CLI/Nodeset.py: remove the pending limitation when using -[ixX] operations with nodesets specified by -a (all) or through stdin.
  * NodeSet.py: add RangeSet.slices() method.

2011-03-05  S. Thiell

  * NodeSet.py: internal changes to use the slice type to represent ranges in RangeSet. Changed the semantics of RangeSet.add_range()'s 'stop' argument; it now conforms to range()'s.
  * NodeSet.py: fix issue with in-place operators returning None. Added tests.

2011-02-27  S. Thiell

  * NodeSet.py: fix issue when using a negative index or negative slice indices with RangeSet and NodeSet.

2011-02-24  S. Thiell

  * CLI/Nodeset.py: add -ll and -lll extended options to list corresponding group nodes, and also the group node count (ticket #143).

2011-02-13  S. Thiell

  * Version 1.4.1 released.

2011-02-08  S. Thiell

  * CLI/Config.py: add fd_max integer parameter to set the maximum number of open files (soft limit) permitted per clush process. This fixes an issue on systems where the hard limit is not reasonable.

2011-02-07  S. Thiell

  * CLI/OptionParser.py: add clush -E hidden option to enforce a specific I/O events engine (should not be needed, but can be useful for testing). Improve engine selection error handling.

2011-02-06  S. Thiell

  * Engine/Select.py: new select()-based engine (from H. Doreau, ticket #8).
  * CLI/{Clush,Display}.py: do not display the exit code with clush when -qS is specified (ticket #117).
  * CLI/Clush.py: allow clush to run without argument when stdin is not a tty, by disabling ssh pseudo-tty allocation. You can now type `echo uname | clush -w ` (ticket #134).

2011-02-05  S. Thiell

  * CLI/Clush.py: fix issue when executing a local command with clush -b in interactive mode (e.g. !uname).
  * Worker/Worker.py: define new current_node, current_msg, current_errmsg and current_rc Worker variables, updated at each event (last_read(), last_node() and last_retcode() will be deprecated from version 2.0); see the sketch after this entry.
  * Worker/*.py: performance: removed _invoke() indirections when generating events, plus local variables optimization.
  * Task.py: performance: replaced the _TaskMsgTree metaclass by direct calls to MsgTree methods, plus local variables optimization.
  * Worker/Ssh.py: local variables optimization.
  * CLI/Clush.py: do not disable internal message gathering when using -bL, for proper display after Ctrl-C interruption (#133).
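
A hedged sketch of the current_* attributes above ('node[1-4]' is a placeholder; this legacy ev_read(self, worker) signature was later deprecated in 1.8):

    from ClusterShell.Event import EventHandler
    from ClusterShell.Task import task_self

    class Output(EventHandler):
        def ev_read(self, worker):
            # current_node and current_msg are updated before each event
            print("%s: %s" % (worker.current_node, worker.current_msg))

    task_self().run("uname -r", nodes="node[1-4]", handler=Output())
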
2011-01-26  S. Thiell

  * tests/config: test config-template directory created.

2011-01-17  S. Thiell

  * Communication.py: new module from 2.0 dev branch (author: H. Doreau).
  * Gateway.py: new module from 2.0 dev branch (author: H. Doreau).
  * Propagation.py: new module from 2.0 dev branch (author: H. Doreau).
  * Topology.py: new module from 2.0 dev branch (author: H. Doreau).

2011-01-15  S. Thiell

  * Version 1.4 released.
  * NodeSet.py: add docstring for NodeSet string arithmetic (, ! & ^), which is also called extended string pattern (trac ticket #127).

2010-12-14  S. Thiell

  * Version 1.4 beta 1 released.
  * CLI/Display.py: in buffer header (for -b/-B without -L), print node count in brackets if > 1 and enabled by configuration (trac ticket #130).
  * CLI/Config.py: add boolean node_count param (part of trac ticket #130).

2010-12-08  S. Thiell

  * CLI/Nodeset.py: support nodeset --split option (trac ticket #91).
  * CLI/OptionParser.py: add --split option (part of #91).
  * NodeSet.py: avoid overflow by returning truncated results when there are not enough elements in the set for RangeSet.split(n) and NodeSet.split(n).

2010-12-02  S. Thiell

  * NodeSet.py: much improved algorithm for RangeSet.add_range().

2010-11-30  S. Thiell

  * Worker/{Popen,Pdsh,Ssh}.py: tell the system to release resources associated with the child process on abort.

2010-11-30  S. Thiell

  * Worker/Popen.py: fix stderr pipe leak (trac ticket #121).
  * Worker/Ssh.py: fix stderr pipe leak (trac ticket #121).
  * Worker/Pdsh.py: fix stderr pipe leak (trac ticket #121).
  * tests/TaskRLimitsTest.py: new test.

2010-11-28  S. Thiell

  * NodeSet.py: optimized NodeSet.__getitem__() (trac ticket #18).

2010-11-25  S. Thiell

  * NodeSet.py: slice-optimized version of RangeSet.__getitem__().

2010-11-03  S. Thiell

  * CLI/Clush.py: added --rcopy support (trac ticket #55).
  * Task.py: added rcopy() method (part of trac ticket #55).
  * Worker/Pdsh.py: support for reverse file copy (part of trac ticket #55).
  * Worker/Ssh.py: support for reverse file copy (part of trac ticket #55).

2010-11-02  S. Thiell

  * Worker/Ssh.py: fix missing ev_start trigger when using task.copy() (trac ticket #125).

2010-11-01  S. Thiell

  * CLI/OptionParser.py: make -c/--copy an option that can take several source arguments.
  * CLI/Clush.py: improve signal handling (trac ticket #65).

2010-10-25  S. Thiell

  * CLI/Clush.py: add launched-in-background checks before enabling user interaction (fixes trac ticket #114).

2010-10-20  S. Thiell

  * Task.py: docstring improvements (trac tickets #120, #122).

2010-10-20  A. Degremont

  * NodeSet.py: optimize NodeSetBase iteration.

2010-10-17  S. Thiell

  * Engine/Factory.py: re-enable the EPoll engine (closes trac ticket #56).
  * Engine/EPoll.py: cleanup and minor fix in the way event masks are modified.
  * CLI/Clush.py: changed the way stdin is read: it is now based on blocking reads in a dedicated thread and thread-safe messaging with acknowledgement using a task port (part of trac ticket #56).

2010-10-11  S. Thiell

  * Worker/Worker.py: add Worker.abort() base method and ensure proper implementation in all workers (trac ticket #63).

2010-10-10  S. Thiell

  * Worker/Worker.py: the WorkerBadArgumentError exception is now deprecated; use ValueError instead. Also added an exception message in each worker (trac ticket #116).

2010-10-01  A. Degremont

  * Task.py: add new Task.run() method (trac ticket #119).

2010-09-28  S. Thiell

  * CLI/OptionParser.py: do not allow option values starting with '-' in some cases.

2010-09-26  S. Thiell

  * CLI: package created.

2010-09-03  S. Thiell

  * Worker/Ssh.py: fix issue with clush -l USER by separating the underlying ssh "-l USER" into two shell arguments (trac ticket #113).

2010-08-31  S. Thiell

  * scripts/clush.py: live per-line gathering (-bL mode) improvements.
  * Task.py: fixed Task.timer() when called from another thread - it used to return None (trac ticket #112).

2010-08-29  S. Thiell

  * Task.py: add docstring for the timer's autoclose feature (trac ticket #109).
  * Worker/Worker.py: attribute 'last_errmsg' not properly initialized (trac ticket #107).
  * setup.py: switch to setuptools.
  * clustershell.spec.in: fix issue on el5 with if condition when defining python_sitelib.

2010-08-26  S. Thiell

  * Packaging automation engineering and improved specfile.
  * License files converted to UTF-8.

2010-07-27  S. Thiell

  * Version 1.3 released.

2010-07-21  S. Thiell

  * Version 1.3 RC 2 released.
  * NodeSet.py: like in some previous versions, support None as argument for most methods (trac ticket #106).

2010-07-16  S. Thiell

  * scripts/clush.py: fix uncaught exceptions introduced in 1.3 RC 1 (trac ticket #105).

2010-07-12  S. Thiell

  * Version 1.3 RC 1 released.
  * Task.py: raise a proper KeyError exception in Task.key_retcode(key) when key is not found in any finished workers (trac ticket #102).

2010-07-06  S. Thiell

  * Task.py: added documentation for reserved set_default() and set_info() keys (trac ticket #101).
  * scripts/clubak.py: merge latest code display changes made on clush into clubak, including "--color={never,always,auto}" (trac ticket #89). Updated documentation accordingly.

2010-06-29  H. Doreau

  * Worker/Pdsh.py: removed obsolete _read() and _readerr() methods that overrode EngineClient methods without raising an EOFException when read() reads nothing (trac ticket #97).

2010-06-28  S. Thiell

  * scripts/clush.py: centralized handling of exceptions raised from the Main and separate Task threads, because some exceptions handled only in the Main thread were not caught (this also fixes trac ticket #93).

2010-06-17  S. Thiell

  * Version 1.3 beta 6 released.

2010-06-16  S. Thiell

  * scripts/clush.py: check for trailing args when using -c/--copy (trac ticket #88).
  * NodeSet.py, NodeUtils.py: add a way to retrieve all nodes when the "all" external call is missing but "map" and "list" calls are specified (trac ticket #90).
  * Task.py: add handling of stderr during task.copy().
  * Worker/Ssh.py: add handling of stderr (when needed) during scp.
  * scripts/clush.py: fix display issue with clush --copy when some nodes are not reachable.
  * Version 1.3 beta 5 released.

2010-06-15  S. Thiell

  * scripts/clush.py: add --color={never,always,auto} command line option and color: {never,always,auto} config option (trac ticket #68), defaulting to `never'. Also did some code refactoring/lightening (created a Display class).
Updated clush and clush.conf man pages. 2010-06-09 S. Thiell * scripts/clush.py: Automatically increase open files soft limit (trac ticket #61). Handle "Too many open files" exception. * Task.py: Add excepthook and default_excepthook methods to handle uncaught exceptions in a Task thread. Make it compliant with sys.excepthook also. 2010-06-08 S. Thiell * Version 1.3 beta 4 released. * doc/extras/vim/syntax/groupsconf.vim: Improved vim syntax file for groups.conf (trac ticket #85): now $GROUP and $NODE are keywords. * scripts/clush.py: Do not wait for the end of all commands when using -bL switches when possible (trac ticket #69). * MsgTree.py: Added remove(match) method to remove an entry from the tree. * Task.py: Added flush_buffers() and flush_errors() methods. * Worker/Worker.py: Added flush_buffers() and flush_errors() methods. 2010-05-26 S. Thiell * Version 1.3 beta 3 released. * scripts/clush.py: Fixed issue (-g/-X group not working as expected) found in release 1.3 beta 2. 2010-05-25 S. Thiell * Version 1.3 beta 2 released. * scripts/clush.py: Added -G, --groupbase to strip group source prefix when using -r. * scripts/clubak.py: Added -G, --groupbase to strip group source prefix when using -r. * scripts/nodeset.py: Changed -N, --noprefix to -G, --groupbase to avoid conflict with clush -N. * scripts/clush.py: Fixed missing support for group source (-s GROUPSOURCE) when using -a or -g GROUP. * scripts/nodeset.py: Added --all, -a support (also works with -s GROUPSOURCE). Almost-silently removed -a for --autostep, I hope nobody's using it. :) * Updated man pages of clush, clubak and nodeset to match latest options changes (trac #58). * scripts/clubak.py: Added regroup support to clubak (trac ticket #78). Added -S to specify a user-settable separator string (trac ticket #62). 2010-05-24 S. Thiell * tests/NodeSetGroupTest.py: Some cleanup in tests (use setUp, tearDown) and create temporary groups test files. * tests/NodeSetRegroupTest.py: Removed (tests moved to NodeSetGroupTest.py). * scripts/nodeset.py: Add -N option to avoid display of group source prefix (trac ticket #79). * NodeSet.py: Add noprefix boolean option to regroup() to avoid building nodegroups with group source prefixes. Added test. * scripts/clush.py: Fix unhandled GroupResolverSourceError exception (part of trac ticket #74). * scripts/nodeset.py: Renamed -n NAMESPACE option to -s GROUPSOURCE (or --groupsource=GROUPSOURCE). Fixed trac ticket #76 so that -f, -e or -c take -s into account. Improved error handling (trac ticket #74). Added --groupsources command to list configured group sources (trac #77). 2010-05-20 S. Thiell * tests/NodeSetRegroupTest.py: added tests for nodeset.regroup(). 2010-05-19 S. Thiell * doc/extras/vim/ftdetect/clustershell.vim: renamed clush.vim to clustershell.vim. * doc/extras/vim/syntax/clushconf.vim: renamed clush.vim to clushconf.vim and cleaned up old external groups keywords. * doc/extras/vim/syntax/groupsconf.vim: added vim syntax file for groups.conf (trac ticket #73). 2010-04-08 S. Thiell * NodeSet.py: Added __getstate__() and __setstate__() methods to support pickling of NodeSet objects. * scripts/clush.py: Add option flag -n NAMESPACE to specify the groups.conf(5) namespace to use for regrouping displayed nodesets. * scripts/clush.py: Add -r (--regroup) option to display default groups in nodeset when possible. 2010-04-07 S. Thiell * scripts/clush.py: Modified script to support new external "all nodes" upcall and node groups.
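The pickling support added above (2010-04-08) makes NodeSet objects easy to store or pass between processes. A minimal sketch:

    import pickle
    from ClusterShell.NodeSet import NodeSet

    ns = NodeSet("node[1-100]")
    data = pickle.dumps(ns)          # serialize, e.g. to a file or a socket
    assert pickle.loads(data) == ns  # the round trip preserves equality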
* scripts/nodeset.py: Added command flags -l (list groups), -r (used to regroup nodes in groups), and also added option flag -n to specify the desired namespace. * NodeSet.py: Added node group support with the help of the new NodeUtils module (trac ticket #41). Improved parser to support basic node/nodegroup arithmetic (trac ticket #44). * NodeUtils.py: New module that provides binding support to external node group sources (trac ticket #43). 2010-03-05 S. Thiell * Worker/*.py: Do not forget to keep the last line and generate an ev_read event when it does not contain EOL (trac ticket #66). Added tests. 2010-02-26 S. Thiell * Version 1.2 RC 1 released. 2010-02-25 S. Thiell * Important code cleaning (use absolute imports, remove some unused imports, remove duplicate code, etc., thanks to pylint). 2010-02-22 S. Thiell * scripts/nodeset.py: Change command syntax: operations are now specified inline between nodesets (trac ticket #45). Update doc and tests. * scripts/clubak.py: Fix TypeError exception raised on unexpected input and accept 'node:message' line pattern (trac ticket #59). * scripts/clush.py: Add -B flag (trac ticket #60) to gather with stderr. * NodeSet.py: NodeSet constructor now raises a NodeSetParseError exception when an unsupported type is used as input (trac ticket #53). 2010-02-21 S. Thiell * Task.py: Fix a deadlock when a task is resumed two times from another thread (raise AlreadyRunningError instead). Added test. * Worker/Worker.py: Improve usage error handling for some methods (trac ticket #28), raising WorkerError when needed. Add library misusage tests. 2010-02-18 S. Thiell * scripts/clush.py: Disable MsgTree buffering when not performing any gathering of results (when -b is not used). * Task.py: Allow disabling of MsgTree buffering (trac ticket #3) via 'stdout_msgtree' and 'stderr_msgtree' Task default keywords, useful if we don't want MsgTree internal buffering for fully event-based scripts (eg. clush without -b). When disabled, any Task method accessing MsgTree data like iter_buffers() will raise a new exception (TaskMsgTreeError). 2010-02-17 S. Thiell * Version 1.2 beta 5 released. 2010-02-16 S. Thiell * NodeSet.py: Fix mixed-type comparisons, which, like standard set(), are now allowed instead of raising TypeError. 2010-02-15 S. Thiell * Version 1.2 beta 4 released. * MsgTree.py: Added MsgTreeElem.splitlines() method as an alias of lines(). 2010-02-14 S. Thiell * Updated doc/man pages for latest clush changes and added clubak tool. * Worker/Ssh.py: Fix Ssh worker issue where sometimes the stderr buffer could not be read completely (trac ticket #50). 2010-02-13 S. Thiell * scripts/clush.py: Comply with clubak by adding -L option that allows switching to alternative line mode display (when using -b). Also, sort buffers by nodes or nodeset length like clubak (fix trac ticket #54). 2010-02-11 S. Thiell * Version 1.2 beta 3 released. * scripts/clush.py: For clush --copy, when --dest is not specified, set the destination path to the source dirname path and not the source full path. * scripts/clush.py: Added option --nostdin to prevent reading from standard input (fix trac ticket #49). * Engine/Factory.py: Disable Engine.EPoll automatic selection as an issue has been found with clush when stdin is a plain file ( * Worker/Worker.py: Added missing WorkerSimple.last_error() method. Fixed worker bad argument error exception. * Worker/Ssh.py: Added command, source and dest public instance variables. * Worker/Pdsh.py: Added command, source and dest public instance variables.
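The 'stdout_msgtree' and 'stderr_msgtree' Task default keywords mentioned above (trac ticket #3) are set through the Task default dictionary. A minimal sketch, with illustrative node names:

    from ClusterShell.Task import task_self

    task = task_self()
    # fully event-based script: disable internal stdout buffering
    task.set_default("stdout_msgtree", False)
    task.run("uptime", nodes="node[1-4]")
    # calling task.iter_buffers() would now raise TaskMsgTreeError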
* scripts/clush.py: Due to set_info() behaviour modifications in multi-thread mode, replace some set_info() calls with set_default() to modify the task-specific dictionary synchronously. Also remove splitlines() where MsgTreeElem objects are returned instead of whole buffers after latest MsgTree improvements. * scripts/clubak.py: Added clubak utility (trac ticket #47). It provides dshbak backward-compatibility, but always tries to sort buffers by nodes or nodeset. It also provides an additional -L option to switch to alternative line mode display. 2010-02-09 S. Thiell * Worker.py: Updated Task/MsgTree dependencies. Added iter_node_errors() method. Added match_keys optional parameter to iter_node_buffers() and iter_node_errors(). Added WorkerSimple.error() method (read stderr). Added tests. * Task.py: Updated MsgTree dependencies. Factorized most tree data access methods. * MsgTree.py: Merged Msg and _MsgTreeElem into one class MsgTreeElem. All message objects returned are now instances of MsgTreeElem. Some algorithm improvements. Renamed main MsgTree access methods: messages(), items() and walk(). Added more docstrings. * NodeSet.py: Modified NodeSet.__iter__() and __str__() so that nodes are now always sorted by name/pattern (eg. acluster2, bcluster1). 2010-02-07 S. Thiell * MsgTree.py: Rewrite of MsgTree module with a better API (part of trac ticket #47). Adapted library classes. Added specific tests. 2010-02-02 S. Thiell * Task.py: Add Task.key_error() and its alias node_error() methods for easy retrieval of error buffers for a specified key/node. * scripts/clush.py: Fix stdout/stderr separation issue (introduced in 1.2b2) thanks to the new Task.set_default() method. * Task.py: As set_info() is now dispatched through the task special port, and applied only on task.resume() when called from another thread, add two new methods default() and set_default() to synchronously manage another task-specific dictionary, useful for default configuration parameters. 2010-02-01 S. Thiell * Version 1.2 beta 2 released. 2010-02-01 A. Degremont * NodeSet.py: Added __getslice__() and split() methods to RangeSet. Added split() to NodeSet (trac ticket #18). 2010-02-01 S. Thiell * NodeSet.py: Added equality comparisons for RangeSet and NodeSet. Fixed a bug in NodeSet.issuperset(). * mkrpm.sh: Improve RPM build process and allow SRPM package to be easily rebuilt (trac ticket #51). 2010-01-31 S. Thiell * scripts/clush.py: Fix broken pipe issue (trac ticket #34). * scripts/clush.py: Fix unhandled NodeSet parse error (trac ticket #36). * scripts/clush.py: Display uncompleted nodes on keyboard interrupt. 2010-01-29 S. Thiell * scripts/clush.py: Return some error code when -S -u TIMEOUT is used and some command timeout occurred (trac ticket #48). * scripts/clush.py: Display output messages on KeyboardInterrupt (trac ticket #22). * tests/TaskThreadJoinTest.py: Added test cases for task.join(). * tests/TaskThreadSuspendTest.py: Added test cases for task.suspend(). * tests/TaskPortTest.py: Added test cases for task.port(). * Task.py: Improved features in multithreaded environments thanks to the new port feature: abort(), suspend(), resume(), schedule(), etc. are now thread-safe (trac ticket #21). * Worker/EngineClient.py: Added port feature, a way to communicate between different tasks. 2009-12-09 A. Degremont * scripts/clush.py: Add -X flag to exclude node groups. Node flags -w/-x/-g/-X can now be specified multiple times. 2009-12-17 S. Thiell * Engine/Factory.py: Add engine automatic selection mechanism (trac ticket #10).
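The renamed MsgTree access methods mentioned above (messages(), items() and walk()) gather identical messages by key, which is how clubak/clush group identical outputs. A short illustration:

    from ClusterShell.MsgTree import MsgTree

    tree = MsgTree()
    tree.add("node1", "foo")
    tree.add("node2", "foo")
    tree.add("node3", "bar")
    for msg, keys in tree.walk():      # walk() yields (message, keys) pairs
        print("%s: %s" % (keys, msg))  # e.g. ['node1', 'node2']: foo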
* Task.py: Add task_terminate() function for convenience. 2009-12-15 S. Thiell * scripts/clush.py: Fix clush -q/--quiet issue again! 2009-12-09 A. Degremont * scripts/nodeset.py: Protect --separator from code injection and gracefully handle an incorrect separator. 2009-12-09 S. Thiell * Version 1.2 beta 1 released. * scripts/clush.py and library: Add -p option when using --copy to preserve file modification times and modes. * scripts/clush.py: Fix clush -q/--quiet issue. * scripts/nodeset.py: Add separator option to nodeset --expand with -S (trac ticket #39). * Worker/Pdsh.py: Added copy support for directories (automatic detection). Added non-reg tests. 2009-12-08 S. Thiell * scripts/clush.py: Added source presence check on copy. 2009-12-07 S. Thiell * Worker/Ssh.py: Added copy support for directories (automatic detection). * Worker/Ssh.py: Fix Scp Popen4->subprocess.Popen issue (simple quote escape not needed). 2009-11-10 S. Thiell * Version 1.2 beta 0 released. Updated doc and man pages for 1.2. 2009-11-09 S. Thiell * Engine/EPoll.py: Add stdout/stderr support (still experimental). * Worker/Pdsh.py: Fix stdout/stderr support. * Backport recent 1.1-branch improvements: tests code coverage, also resulting in some fixes (see 1.1 2009-10-28). 2009-11-09 S. Thiell * scripts/clush.py: Added stdout/stderr support in clush script. 2009-11-04 S. Thiell * Added optional separate stdout/stderr handling (with 1.1 Task API compat). Added some tests for that. * Create a MsgTree class in MsgTree.py and remove this code from Task.py. * First changes to use setUp() in test case objects. 2009-08-02 S. Thiell * clush.py: (1) remove /step in displayed nodesets when using -b (to allow copy/paste to other tools like ipmipower that don't support N-M/STEP ranges), (2) when command timeout is specified (-u), show nodes (on stderr) that didn't have time to fully complete, (3) flush stdio buffers before exiting. [merged from branch 1.1] 2009-07-29 S. Thiell * tests/NodeSetScriptTest.py: added unit test for scripts/nodeset.py * NodeSet.py: fixed a problem with intersection_update() when used with two simple nodes (no rangeset). * scripts/nodeset.py: merge -i and -X options issue fix from 1.1 branch (#29) 2009-07-28 S. Thiell * scripts/clush.py: remove DeprecationWarning ignore filter (the library is now natively Python 2.6/Fedora 11 ready) * Change all sets to use the built-in set type available since Python 2.4 (the sets module is deprecated). * Engine/EPoll.py: added epoll-based Engine (Python 2.6+ needed) * Engine/Poll.py: added _register_specific() and _unregister_specific() methods to match the modified Engine base class. * Engine/Engine.py: added calls to derived class's _register_specific() and _unregister_specific() instead of only _modify_specific() 2009-07-23 S. Thiell * Replaced popen2.Popen4 (deprecated) by subprocess.Popen (Python 2.4+), renaming Worker.Popen2 to Worker.Popen. * clush.py: (backport for 1.1 branch) fix another command_timeout (-u) issue; now the command_timeout value is passed as the timeout value at worker level. * Version 1.1 branched. 2009-07-22 S. Thiell * Version 1.1 RC 2 released. * clush.py: change -u timeout behavior: if set, it's now the timeout value passed to task.shell() (and not connect_timeout + command_timeout). * clush.py: add -o option to pass custom ssh options (#24). * Worker/Ssh.py: simple quote escape fix (trac ticket #25). * Worker/Popen2.py: simple quote escape fix (trac ticket #25) * clush.py: fix options issue when using -f, -u or -t. 2009-07-13 S.
Thiell * Version 1.1 RC 1 released. * Changed license to CeCILL-C (http://www.cecill.info). * clush.py (ttyloop): (feature) added '=' special command in interactive mode to toggle output format mode (standard/gathered). * Engine/Engine.py (register): (bug) register writer fd even when set_writer_eof() has previously been called. * Worker/EngineClient.py (_handle_write): (bug) don't close writer when some data remains in the write buffer, even if self._weof is True. 2009-07-10 S. Thiell * clush.py (ttyloop): added a workaround to replace raw_input() which is not interruptible in Python 2.3 (issue #706406). 2009-07-09 S. Thiell * NodeSet.py (__contains__): fixed issue that could appear when padding was used, eg. "node113" in "node[030,113]" didn't work. 2009-07-08 S. Thiell * Version 1.1 beta 6 released. * clush.py: major improvements (added write support, better interactive mode with readline, launch task in a separate thread so the main thread can perform blocking tty input, added Enter key press support during run, added node groups support (-a and -g) using external commands defined in clush.conf, added --copy toggle to clush to copy files to the cluster nodes, added -q option, added progress indicator when clush is called with gather option -b) * Added man pages for clush and nodeset commands. * doc/extras/vim (clush.vim): added vim syntax files for clush.conf * Engine.py: (feature) added write support to workers * Worker: (api) created a base class WorkerSimple 2009-04-17 S. Thiell * Version 1.1 beta 5 released (LUG'09 live update). * Worker/Worker.py: (bug) update last_node so that user can call worker.last_node() in an ev_timeout handler callback. 2009-04-17 A. Degremont * clush.py: (feature) make use of optparse.OptionParser 2009-04-15 S. Thiell * Version 1.1 beta 4 released. 2009-04-14 S. Thiell * Engine/Engine.py (EngineBaseTimer): (bug) fixed issue in timers when invalidated twice. 2009-04-06 S. Thiell * Version 1.1 beta 3 released. * Engine/Engine.py (_EngineTimerQ): (bug) fixed issue in timer invalidation. 2009-04-03 S. Thiell * Engine/Engine.py (EngineTimer): (api) added is_valid() method to check if a timer is still valid. * Task.py: (api) added optional `match_keys' parameter in Task and Worker iter_buffers() and iter_retcodes() methods. 2009-03-26 S. Thiell * Version 1.1 beta 2 released. 2009-03-23 S. Thiell * Worker/Worker.py: (api) added Worker.did_timeout() method to check if a worker has timed out (useful for Popen2 workers, others use DistantWorker.num_timeout()). 2009-02-21 S. Thiell * Version 1.1 beta 1 released. 2009-02-20 S. Thiell * NodeSet.py (NodeSet): (api) added clear() method. (RangeSet): likewise. * NodeSet.py (NodeSet): added workaround to allow NodeSet to be properly pickled (+inf floating number pickle bug with Python 2.4) * NodeSet.py (RangeSet): (bug) don't keep a reference on the internal RangeSet when creating a NodeSet from another one. 2009-02-16 S. Thiell * Version 1.1 beta 0 released. * Worker/Ssh.py: (feature) new worker, based on OpenSSH, with fanout support (thus removing ClusterShell's mandatory pdsh dependency). * Engine/Engine.py: (feature, api) added timer and repeater support. * 1.0->1.1 internal design changes. Copyright CEA/DAM/DIF (2009, 2010, 2011) Copying and distribution of this file, with or without modification, are permitted provided the copyright notice and this notice are preserved.
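As an illustration of the timer and repeater support introduced in 1.1 above, here is a minimal sketch using the Task API (timer values and tick count are arbitrary):

    from ClusterShell.Event import EventHandler
    from ClusterShell.Task import task_self

    class Ticker(EventHandler):
        def __init__(self):
            EventHandler.__init__(self)
            self.count = 0
        def ev_timer(self, timer):
            self.count += 1
            print("tick %d" % self.count)
            if self.count == 3:
                timer.invalidate()  # stop repeating so the task can finish

    task = task_self()
    task.timer(1.0, handler=Ticker(), interval=2.0)  # fire at 1s, repeat every 2s
    task.resume()  # returns once no active timers or workers remain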
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/MANIFEST.in0000644104717000001440000000164514501416555015340 0ustar00sthiellusersinclude ChangeLog include README.md include COPYING.LGPLv2.1 include conf/*.conf include conf/*.example include conf/clush.conf.d/README include conf/clush.conf.d/*.example include conf/groups.d/README include conf/groups.d/*.cfg include conf/groups.d/*.example include conf/groups.conf.d/README include conf/groups.conf.d/*.example include doc/txt/README include doc/txt/*.txt include doc/txt/*.rst include doc/man/man1/*.1 include doc/man/man5/*.5 include doc/sphinx/Makefile include doc/sphinx/conf.py include doc/sphinx/*.png include doc/sphinx/*.rst include doc/sphinx/_static/*.png include doc/sphinx/_static/*.css include doc/sphinx/tools/*.rst include doc/sphinx/guide/*.rst include doc/sphinx/api/*.rst include doc/sphinx/api/workers/*.rst include doc/extras/vim/syntax/*.vim include doc/extras/vim/ftdetect/*.vim include doc/examples/*.py include doc/examples/defaults.conf-rsh include doc/epydoc/*.conf include tests/*.py ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3353298 ClusterShell-1.9.2/PKG-INFO0000644104717000001440000000677014505640536014705 0ustar00sthiellusersMetadata-Version: 1.1 Name: ClusterShell Version: 1.9.2 Summary: ClusterShell library and tools Home-page: https://clustershell.readthedocs.io/ Author: Stephane Thiell Author-email: sthiell@stanford.edu License: LGPLv2+ Download-URL: https://github.com/cea-hpc/clustershell/archive/refs/tags/v1.9.2.tar.gz Description: ClusterShell is an event-driven open source Python framework, designed to run local or distant commands in parallel on server farms or on large Linux clusters. It will take care of common issues encountered on HPC clusters, such as operating on groups of nodes, running distributed commands using optimized execution algorithms, as well as gathering results and merging identical outputs, or retrieving return codes. ClusterShell takes advantage of existing remote shell facilities already installed on your systems, like SSH. User tools ---------- ClusterShell provides clush, clubak and cluset/nodeset, convenient command-line tools that allow traditional shell scripts to benefit from some of the library's features: - **clush**: issue commands to cluster nodes and format output Example of use: :: $ clush -abL uname -r node[32-49,51-71,80,82-150,156-159]: 2.6.18-164.11.1.el5 node[3-7,72-79]: 2.6.18-164.11.1.el5_lustre1.10.0.36 node[2,151-155]: 2.6.31.6-145.fc11.2.x86_64 See *man clush* for more details. - **clubak**: improved dshbak to gather and sort dsh-like outputs See *man clubak* for more details. - **nodeset** (or **cluset**): compute advanced nodeset/nodegroup operations Examples of use: :: $ echo node160 node161 node162 node163 | nodeset -f node[160-163] $ nodeset -f node[0-7,32-159] node[160-163] node[0-7,32-163] $ nodeset -e node[160-163] node160 node161 node162 node163 $ nodeset -f node[32-159] -x node33 node[32,34-159] $ nodeset -f node[32-159] -i node[0-7,20-21,32,156-159] node[32,156-159] $ nodeset -f node[33-159] --xor node[32-33,156-159] node[32,34-155] $ nodeset -l @oss @mds @io @compute $ nodeset -e @mds node6 node7 See *man nodeset* (or *man cluset*) for more details. Please visit the ClusterShell website_. .. 
_website: http://cea-hpc.github.io/clustershell/ Keywords: clustershell,clush,clubak,nodeset Platform: GNU/Linux Platform: BSD Platform: MacOSX Classifier: Development Status :: 5 - Production/Stable Classifier: Environment :: Console Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: GNU Lesser General Public License v2 or later (LGPLv2+) Classifier: Operating System :: MacOS :: MacOS X Classifier: Operating System :: POSIX :: BSD Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Topic :: Software Development :: Libraries :: Python Modules Classifier: Topic :: System :: Clustering Classifier: Topic :: System :: Distributed Computing License-File: COPYING.LGPLv2.1 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/README.md0000644104717000001440000000653614501416555015065 0ustar00sthiellusersClusterShell Python Library and Tools ===================================== ClusterShell is an event-driven open source Python library, designed to run local or distant commands in parallel on server farms or on large Linux clusters. It will take care of common issues encountered on HPC clusters, such as operating on groups of nodes, running distributed commands using optimized execution algorithms, as well as gathering results and merging identical outputs, or retrieving return codes. ClusterShell takes advantage of existing remote shell facilities already installed on your systems, like SSH. ClusterShell's primary goal is to improve the administration of high- performance clusters by providing a lightweight but scalable Python API for developers. It also provides clush, clubak and cluset/nodeset, convenient command-line tools that allow traditional shell scripts to benefit from some of the library features. Requirements ------------ * GNU/Linux, BSD, Mac OS X * OpenSSH (ssh/scp) or rsh * Python 2.x (x >= 7) or Python 3.x (x >= 6) * PyYAML License ------- ClusterShell is distributed under the GNU Lesser General Public License version 2.1 or later (LGPL v2.1+). Read the file `COPYING.LGPLv2.1` for details. Documentation ------------- Online documentation is available here: http://clustershell.readthedocs.org/ The Sphinx documentation source is available under the doc/sphinx directory. Type 'make' to see all available formats (you need Sphinx installed and sphinx_rtd_theme to build the documentation). For example, to generate html docs, just type: make html BUILDDIR=/dest/path For local library API documentation, just type: $ pydoc ClusterShell The following man pages are also provided: clush(1), clubak(1), nodeset(1), clush.conf(5), groups.conf(5) Test Suite ---------- Regression testing scripts are available in the 'tests' directory: $ cd tests $ nosetests -sv $ nosetests -sv --all-modules You have to allow 'ssh localhost' and 'ssh $HOSTNAME' without any warnings for "remote" tests to run as expected. $HOSTNAME should not be 127.0.0.1 nor ::1. Also some tests use the 'bc' command. Python code (simple example) ---------------------------- ```python >>> from ClusterShell.Task import task_self >>> from ClusterShell.NodeSet import NodeSet >>> task = task_self() >>> task.run("/bin/uname -r", nodes="linux[4-6,32-39]") >>> for buf, key in task.iter_buffers(): ... print NodeSet.fromlist(key), buf ... 
linux[32-39] 2.6.40.6-0.fc15.x86_64 linux[4-6] 2.6.32-71.el6.x86_64 ``` Links ----- Web site: http://cea-hpc.github.com/clustershell/ Online documentation: http://clustershell.readthedocs.org/ Github source repository: https://github.com/cea-hpc/clustershell Github Wiki: https://github.com/cea-hpc/clustershell/wiki Github Issue tracking system: https://github.com/cea-hpc/clustershell/issues Python Package Index (PyPI) links: https://pypi.org/project/ClusterShell/ http://pypi.python.org/pypi/ClusterShell ClusterShell was born along with Shine, a scalable Lustre FS admin tool: https://github.com/cea-hpc/shine Core developers/reviewers ------------------------- * Stephane Thiell * Aurelien Degremont * Henri Doreau * Dominique Martinet CEA/DAM 2010, 2011, 2012, 2013, 2014, 2015 - http://www-hpc.cea.fr ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3253293 ClusterShell-1.9.2/conf/0000755104717000001440000000000014505640536014523 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/conf/clush.conf0000644104717000001440000000063714501416555016514 0ustar00sthiellusers# Configuration file for clush # # Please see man clush.conf(5) # [Main] fanout: 64 connect_timeout: 15 command_timeout: 0 color: auto fd_max: 8192 history_size: 100 maxrc: no node_count: yes verbosity: 1 confdir: /etc/clustershell/clush.conf.d $CFGDIR/clush.conf.d # Add always all remote hosts to known_hosts without confirmation #ssh_user: root #ssh_path: /usr/bin/ssh #ssh_options: -oStrictHostKeyChecking=no ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3253293 ClusterShell-1.9.2/conf/clush.conf.d/0000755104717000001440000000000014505640536017007 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/conf/clush.conf.d/README0000644104717000001440000000032714501416555017667 0ustar00sthiellusersclush.conf.d/README Default directory for additional clush configuration files. clush scans the directory set by the confdir variable, defined in /etc/clustershell/clush.conf, loading all files of the form *.conf. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/conf/clush.conf.d/sshpass.conf.example0000644104717000001440000000132614501416555022774 0ustar00sthiellusers# Example configuration file for ssh password auth support with sshpass. # # Copy as sshpass.conf to enable and edit the paths below as needed. # sshpass needs to be installed on your operating system. # # To activate sshpass mode, use clush -m sshpath ... [mode:sshpass] password_prompt: yes ssh_path: /usr/bin/sshpass /usr/bin/ssh scp_path: /usr/bin/sshpass /usr/bin/scp ssh_options: -oBatchMode=no -oStrictHostKeyChecking=no # Another mode that reads the password from a local file instead [mode:sshpass-file] password_prompt: no ssh_path: /usr/bin/sshpass -f /root/remotepasswordfile /usr/bin/ssh scp_path: /usr/bin/sshpass -f /root/remotepasswordfile /usr/bin/scp ssh_options: -oBatchMode=no -oStrictHostKeyChecking=no ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/conf/clush.conf.d/sudo.conf.example0000644104717000001440000000044314501416555022261 0ustar00sthiellusers# Example configuration file for sudo support. 
# # Copy as sudo.conf to enable and edit sudo's path as needed # (sudo needs to be installed on your operating system). # # To activate sudo mode, use clush -m sudo ... [mode:sudo] password_prompt: yes command_prefix: /usr/bin/sudo -S -p "''" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/conf/groups.conf0000644104717000001440000000404114501416555016706 0ustar00sthiellusers# ClusterShell node groups main configuration file # # Please see `man 5 groups.conf` and # http://clustershell.readthedocs.org/en/latest/config.html#node-groups # for further details. # # NOTE: This is a simple group configuration example file, not a # default config file. Please edit it to fit your own needs. # [Main] # Default group source default: local # Group source config directory list (space separated, use quotes if needed). # Examples are provided. Copy them from *.conf.example to *.conf to enable. # # $CFGDIR is replaced by the highest priority config directory found. # Default confdir value enables both system-wide and user configuration. confdir: /etc/clustershell/groups.conf.d $CFGDIR/groups.conf.d # New in 1.7, autodir defines a directory list (space separated, use quotes if # needed) where group data files will be auto-loaded. # Only *.yaml file are loaded. Copy *.yaml.example files to enable. # Group data files avoid the need of external calls for static config files. # # $CFGDIR is replaced by the highest priority config directory found. # Default autodir value enables both system-wide and user configuration. autodir: /etc/clustershell/groups.d $CFGDIR/groups.d # Sections below also define group sources. # # NOTE: /etc/clustershell/groups is deprecated since version 1.7, thus if it # doesn't exist, the "local.cfg" file from autodir will be used. # # See the documentation for $CFGDIR, $SOURCE, $GROUP and $NODE upcall special # variables. Please remember that they are substituted before the shell command # is effectively executed. # [local] # flat file "group: nodeset" based group source using $CFGDIR/groups.d/local.cfg # with backward support for /etc/clustershell/groups map: [ -f $CFGDIR/groups ] && f=$CFGDIR/groups || f=$CFGDIR/groups.d/local.cfg; sed -n 's/^$GROUP:\(.*\)/\1/p' $f all: [ -f $CFGDIR/groups ] && f=$CFGDIR/groups || f=$CFGDIR/groups.d/local.cfg; sed -n 's/^all:\(.*\)/\1/p' $f list: [ -f $CFGDIR/groups ] && f=$CFGDIR/groups || f=$CFGDIR/groups.d/local.cfg; sed -n 's/^\([0-9A-Za-z_-]*\):.*/\1/p' $f ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3253293 ClusterShell-1.9.2/conf/groups.conf.d/0000755104717000001440000000000014505640536017210 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/conf/groups.conf.d/README0000644104717000001440000000035514501416555020071 0ustar00sthiellusersgroups.conf.d/README Default directory for additional node group sources configuration files. ClusterShell scans the directory set by the confdir variable, defined in /etc/clustershell/groups.conf, loading all files of the form *.conf. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/conf/groups.conf.d/ace.conf.example0000644104717000001440000000102114501416555022231 0ustar00sthiellusers# Additional ClusterShell group source config file # # Please see `man 5 groups.conf` for further details. 
# # This config file provided as an example of group sources for Cray # Advanced Cluster Engine (ACE) system management software. # # ACE @type -> host(s) # # example: # $ nodeset -f @ace:compute # prod-[0001-0144] # [ace] map: ace servers | awk '/$GROUP/ {gsub("*",""); print $11}' all: ace servers | awk '!/Type/ && $11 != "-" {gsub("*",""); print $11}' list: ace servers | awk '!/Type/ && $11 != "-" {print $2}' ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/conf/groups.conf.d/genders.conf.example0000644104717000001440000000032214501416555023133 0ustar00sthiellusers# Additional ClusterShell group source config file # # Please see `man 5 groups.conf` for further details. # # LLNL genders bindings # [genders] map: nodeattr -n $GROUP all: nodeattr -n ALL list: nodeattr -l ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/conf/groups.conf.d/slurm.conf.example0000644104717000001440000000146114501416555022653 0ustar00sthiellusers# Additional ClusterShell group source config file # # Please see `man 5 groups.conf` for further details. # # # SLURM partition bindings # [slurmpart,sp] map: sinfo -h -o "%N" -p $GROUP all: sinfo -h -o "%N" list: sinfo -h -o "%R" reverse: sinfo -h -N -o "%R" -n $NODE # # SLURM state bindings # [slurmstate,st] map: sinfo -h -o "%N" -t $GROUP all: sinfo -h -o "%N" list: sinfo -h -o "%T" | tr -d '*~#!%$@+^-' reverse: sinfo -h -N -o "%T" -n $NODE | tr -d '*~#!%$@+^-' cache_time: 60 # # SLURM job bindings # [slurmjob,sj] map: squeue -h -j $GROUP -o "%N" list: squeue -h -o "%i" -t R reverse: squeue -h -w $NODE -o "%i" cache_time: 60 # # SLURM user bindings for running jobs # [slurmuser,su] map: squeue -h -u $GROUP -o "%N" -t R list: squeue -h -o "%u" -t R reverse: squeue -h -w $NODE -o "%u" cache_time: 60 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/conf/groups.conf.d/xcat.conf.example0000644104717000001440000000060414501416555022446 0ustar00sthiellusers# Additional ClusterShell group source config file # # Please see `man 5 groups.conf` for further details. # # xCAT static node group binding # [xcat] # list the nodes in the specified node group map: lsdef -s -t node "$GROUP" | cut -d' ' -f1 # list all the nodes defined in the xCAT tables all: lsdef -s -t node | cut -d' ' -f1 # list all groups list: lsdef -t group | cut -d' ' -f1 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3253293 ClusterShell-1.9.2/conf/groups.d/0000755104717000001440000000000014505640536016264 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/conf/groups.d/README0000644104717000001440000000117614501416555017147 0ustar00sthiellusersgroups.d/README Default directory for YAML node group sources definition files. ClusterShell scans the directory set by the autodir variable, defined in /etc/clustershell/groups.conf, loading all files of the form *.yaml. These files are automatically parsed by ClusterShell to avoid the need of external upcalls for flat files-based group sources. Each file may contain one or several group sources definitions. Format of each YAML file is as follow: source1: group1: 'nodeset1' group2: 'nodeset2' source2: group3: 'nodeset3' group4: 'nodeset4' ... Please take a look at *.yaml.example files for more examples. 
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/conf/groups.d/cluster.yaml.example0000644104717000001440000000336514501416555022270 0ustar00sthiellusers# ClusterShell groups config cluster.yaml.example # # Example of YAML groups config file with multiple sources. # ^^^^^^^ # Here you can describe your cluster nodes and equipment using several # group sources. # # Example of group source use-cases are: # - functional info (compute, storage, service nodes, etc.) # - location (room, rack position, etc.) # - physical attributes (cpu type, gpu types, memory size, etc.) # - vendors and hardware models, useful info for firmware update # - infrastructure (pdu, network and interco switches) # - ownership of nodes and partitions... # # File will be auto-loaded if renamed to .yaml # # Break and adapt to fit your own needs. Use nodeset CLI to test config. # Group source roles: # define groups @roles:adm, @roles:io, etc. roles: adm: 'example0' io: '@racks:rack2,example2' compute: '@racks:rack[3-4]' gpu: '@racks:rack4' # the 'all' special group is only needed if we don't want all nodes from # this group source included, here we don't want example0 for clush -a all: '@io,@compute' # Group source racks: # define groups @racks:rack[1-4], @racks:old and @racks:new racks: rack1: 'example[0,2]' rack2: 'example[4-5]' rack3: 'example[32-159]' rack4: 'example[156-159]' # groups from same source may be referenced without the "source:" prefix # and yes, ranges work for groups too! old: '@rack[1,3]' new: '@rack[2,4]' # YAML lists rack5: - 'example[200-205]' # some comment about example[200-205] - 'example245' - 'example300,example[401-406]' # Group source cpu: # define groups @cpu:ivy, @cpu:hsw and @cpu:all cpu: ivy: 'example[32-63]' # groups from other sources must be prefixed with "source:" hsw: '@roles:compute!@ivy' ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/conf/groups.d/local.cfg0000644104717000001440000000070114501416555020033 0ustar00sthiellusers# ClusterShell groups config local.cfg # # Replace /etc/clustershell/groups # # Note: file auto-loaded unless /etc/clustershell/groups is present # # See also groups.d/cluster.yaml.example for an example of multiple # sources single flat file setup using YAML syntax. # # Feel free to edit to fit your needs. 
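#
# Each line below maps one group name to a nodeset; for example, the
# hypothetical commented-out entry here would define an extra @rack1 group:
#rack1: example[32-63]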
adm: example0 oss: example4 example5 mds: example6 io: example[4-6] compute: example[32-159] gpu: example[156-159] all: example[4-6,32-159] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/conf/topology.conf.example0000644104717000001440000000032214501416555020673 0ustar00sthiellusers# ClusterShell cluster topology example file # # rio0 # |- rio[10-11] # | `- rio[100-240] # `- rio[12-13] # `- rio[300-440] [routes] rio0: rio[10-13] rio[10-11]: rio[100-240] rio[12-13]: rio[300-440] ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3243294 ClusterShell-1.9.2/doc/0000755104717000001440000000000014505640536014343 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3253293 ClusterShell-1.9.2/doc/epydoc/0000755104717000001440000000000014505640536015626 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/epydoc/clustershell_epydoc.conf0000644104717000001440000000204214501416555022545 0ustar00sthiellusers# To generate ClusterShell epydoc documentation, set your current # directory to the package root directory, then use the following # command: # # $ epydoc --config doc/epydoc/clustershell_epydoc.conf # [epydoc] # Epydoc section marker (required by ConfigParser) # Information about the project. name: ClusterShell url: http://clustershell.sourceforge.net # The list of modules to document. modules: lib/ClusterShell, scripts/clubak.py, scripts/clush.py, scripts/nodeset.py #exclude: ClusterShell\.Worker\.Paramiko # The type of the output that should be generated. output: html #output: pdf # Write html output to the following directory target: doc/epydoc/html # Include all automatically generated graphs. These graphs are # generated using Graphviz dot. graph: all dotpath: /usr/bin/dot # The format for showing inheritance objects. # It should be one of: 'grouped', 'listed', 'included'. #inheritance: listed # Whether or not to include syntax highlighted source code in # the output (HTML only). sourcecode: yes #docformat: restructuredtext ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3253293 ClusterShell-1.9.2/doc/examples/0000755104717000001440000000000014505640536016161 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/examples/check_nodes.py0000755104717000001440000001013214501416555020776 0ustar00sthiellusers#!/usr/bin/python # check_nodes.py: ClusterShell simple example script. # # This script runs a simple command on remote nodes and report node # availability (basic health check) and also min/max boot dates. # It shows an example of use of Task, NodeSet and EventHandler objects. # Feel free to copy and modify it to fit your needs. 
# # Usage example: ./check_nodes.py -n node[1-99] import optparse from datetime import date, datetime import time from ClusterShell.Event import EventHandler from ClusterShell.NodeSet import NodeSet from ClusterShell.Task import task_self class CheckNodesResult(object): """Our result class""" def __init__(self): """Initialize result class""" self.nodes_ok = NodeSet() self.nodes_ko = NodeSet() self.min_boot_date = None self.max_boot_date = None def show(self): """Display results""" if self.nodes_ok: print "%s: OK (boot date: min %s, max %s)" % \ (self.nodes_ok, self.min_boot_date, self.max_boot_date) if self.nodes_ko: print "%s: FAILED" % self.nodes_ko class CheckNodesHandler(EventHandler): """Our ClusterShell EventHandler""" def __init__(self, result): """Initialize our event handler with a ref to our result object.""" EventHandler.__init__(self) self.result = result def ev_read(self, worker, node, sname, msg): """Read event from remote nodes""" # this is an example to demonstrate remote result parsing bootime = " ".join(msg.strip().split()[2:]) date_boot = None for fmt in ("%Y-%m-%d %H:%M",): # formats with year try: # datetime.strptime() is Python2.5+, use old method instead date_boot = datetime(*(time.strptime(bootime, fmt)[0:6])) except ValueError: pass for fmt in ("%b %d %H:%M",): # formats without year try: date_boot = datetime(date.today().year, \ *(time.strptime(bootime, fmt)[1:6])) except ValueError: pass if date_boot: if not self.result.min_boot_date or \ self.result.min_boot_date > date_boot: self.result.min_boot_date = date_boot if not self.result.max_boot_date or \ self.result.max_boot_date < date_boot: self.result.max_boot_date = date_boot self.result.nodes_ok.add(node) else: self.result.nodes_ko.add(node) def ev_close(self, worker, timedout): """Worker has finished (command done on all nodes)""" if timedout: nodeset = NodeSet.fromlist(worker.iter_keys_timeout()) self.result.nodes_ko.add(nodeset) self.result.show() def main(): """ Main script function """ # Initialize option parser parser = optparse.OptionParser() parser.add_option("-d", "--debug", action="store_true", dest="debug", default=False, help="Enable debug mode") parser.add_option("-n", "--nodes", action="store", dest="nodes", default="@all", help="Target nodes (default @all group)") parser.add_option("-f", "--fanout", action="store", dest="fanout", default="128", help="Fanout window size (default 128)", type=int) parser.add_option("-t", "--timeout", action="store", dest="timeout", default="5", help="Timeout in seconds (default 5)", type=float) options, _ = parser.parse_args() # Get current task (associated to main thread) task = task_self() nodes_target = NodeSet(options.nodes) task.set_info("fanout", options.fanout) if options.debug: print "nodeset : %s" % nodes_target task.set_info("debug", True) # Create ClusterShell event handler handler = CheckNodesHandler(CheckNodesResult()) # Schedule remote command and run task (blocking call) task.run("who -b", nodes=nodes_target, handler=handler, \ timeout=options.timeout) if __name__ == '__main__': main() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/examples/defaults.conf-rsh0000644104717000001440000000071514501416555021432 0ustar00sthiellusers# # ClusterShell Library Defaults # # Example defaults.conf file for clusters using rsh instead of ssh. 
# # To enable this file, install it in one of the following locations: # $CLUSTERSHELL_CFGDIR/defaults.conf (global configuration, default to # /etc/clustershell/defaults.conf) # $XDG_CONFIG_HOME/clustershell/defaults.conf (per-user) # $HOME/.local/etc/clustershell/defaults.conf (per-user) # [task.default] distant_workername: rsh ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3233292 ClusterShell-1.9.2/doc/extras/0000755104717000001440000000000014505640536015651 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3233292 ClusterShell-1.9.2/doc/extras/vim/0000755104717000001440000000000014505640536016444 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3263295 ClusterShell-1.9.2/doc/extras/vim/ftdetect/0000755104717000001440000000000014505640536020246 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/extras/vim/ftdetect/clustershell.vim0000644104717000001440000000050014501416555023465 0ustar00sthiellusers" " Installed As: vim/ftdetect/clustershell.vim " au BufNewFile,BufRead *clush.conf setlocal filetype=clushconf au BufNewFile,BufRead *clush.conf.d/*.conf setlocal filetype=clushconf au BufNewFile,BufRead *groups.conf setlocal filetype=groupsconf au BufNewFile,BufRead *groups.conf.d/*.conf setlocal filetype=groupsconf ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3263295 ClusterShell-1.9.2/doc/extras/vim/syntax/0000755104717000001440000000000014505640536017772 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/extras/vim/syntax/clushconf.vim0000644104717000001440000000321714501416555022474 0ustar00sthiellusers " Vim syntax file for clush.conf " For version 5.x: Clear all syntax items " For version 6.x: Quit when a syntax file was already loaded if version < 600 syntax clear elseif exists("b:current_syntax") finish endif " shut case off syn case ignore syn match clushComment "#.*$" syn match clushComment ";.*$" syn match clushHeader "\[\w\+\]$" syn match clushHeaderMode "\[mode:\S\+\]$" syn match confDirGroup "^confdir\(:\|=\).*$" contains=confDirKeys,confDirVars syn match confDirVars "$CFGDIR" contained syn match confDirKeys "^\w\+\(:\|=\)"me=e-1 contained syn keyword clushKeys fanout command_timeout connect_timeout color fd_max history_size node_count maxrc verbosity syn keyword clushKeys ssh_user ssh_path ssh_options syn keyword clushKeys scp_user scp_path scp_options syn keyword clushKeys rsh_path rcp_path rcp_options syn keyword clushKeys command_prefix password_prompt " Define the default highlighting. 
" For version 5.7 and earlier: only when not done already " For version 5.8 and later: only when an item doesn't have highlighting yet if version >= 508 || !exists("did_clushconf_syntax_inits") if version < 508 let did_clushconf_syntax_inits = 1 command -nargs=+ HiLink hi link else command -nargs=+ HiLink hi def link endif HiLink clushHeader Special HiLink clushHeaderMode Constant HiLink clushComment Comment HiLink clushLabel Type HiLink clushKeys Identifier HiLink confDirKeys Identifier HiLink confDirVars Keyword delcommand HiLink endif let b:current_syntax = "clushconf" " vim:ts=8 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/extras/vim/syntax/groupsconf.vim0000644104717000001440000000352114501416555022673 0ustar00sthiellusers " Vim syntax file for ClusterShell groups.conf " For version 5.x: Clear all syntax items " For version 6.x: Quit when a syntax file was already loaded if version < 600 syntax clear elseif exists("b:current_syntax") finish endif " shut case off syn case ignore " Main/default syn match groupsDefaultValue "\(:\|=\)\s*\w\+$"ms=s+1 contained syn match groupsColonValue "\(:\|=\).*" contained contains=groupsDefaultValue syn match groupsDefaultKey "^default\(:\|=\).*$" contains=groupsColonValue syn match groupsGroupsDirKey "^\(groupsdir\|confdir\|autodir\)\(:\|=\).*$" contains=groupsKeys,groupsVars " Sources syn match groupsVars "\(\$GROUP\|\$NODE\|$SOURCE\|$CFGDIR\)" contained syn match groupsKeys "^\w\+\(:\|=\)"me=e-1 contained syn match groupsKeyValue "^\(map\|all\|list\|reverse\|cache_time\)\+\(:\|=\).*$" contains=groupsKeys,groupsVars syn match groupsComment "#.*$" syn match groupsComment ";.*$" syn match groupsHeader "\[\w\+\(,\w\+\)*\]" contains=gHdrSource,gHdrSourceDelim syn match groupsMainHeader "\[Main\]" syn match gHdrSource '[^,]' contained syn match gHdrSourceDelim ',' contained " Define the default highlighting. " For version 5.7 and earlier: only when not done already " For version 5.8 and later: only when an item doesn't have highlighting yet if version >= 508 || !exists("did_groupsconf_syntax_inits") if version < 508 let did_groupsconf_syntax_inits = 1 command -nargs=+ HiLink hi link else command -nargs=+ HiLink hi def link endif HiLink gHdrSource Keyword HiLink gHdrSourceDelim Delimiter HiLink groupsComment Comment HiLink groupsMainHeader Constant HiLink groupsDefaultKey Identifier HiLink groupsDefaultValue Special HiLink groupsKeys Identifier HiLink groupsVars Keyword delcommand HiLink endif let b:current_syntax = "groupsconf" " vim:ts=8 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3233292 ClusterShell-1.9.2/doc/man/0000755104717000001440000000000014505640536015116 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3263295 ClusterShell-1.9.2/doc/man/man1/0000755104717000001440000000000014505640536015752 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/doc/man/man1/clubak.10000644104717000001440000001107114505632065017273 0ustar00sthiellusers.\" Man page generated from reStructuredText. . .TH CLUBAK 1 "2023-09-29" "1.9.2" "ClusterShell User Manual" .SH NAME clubak \- format output from clush/pdsh-like output and more . .nr rst2man-indent-level 0 . 
.de1 rstReportMargin \\$1 \\n[an-margin] level \\n[rst2man-indent-level] level margin: \\n[rst2man-indent\\n[rst2man-indent-level]] - \\n[rst2man-indent0] \\n[rst2man-indent1] \\n[rst2man-indent2] .. .de1 INDENT .\" .rstReportMargin pre: . RS \\$1 . nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin] . nr rst2man-indent-level +1 .\" .rstReportMargin post: .. .de UNINDENT . RE .\" indent \\n[an-margin] .\" old: \\n[rst2man-indent\\n[rst2man-indent-level]] .nr rst2man-indent-level -1 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .in \\n[rst2man-indent\\n[rst2man-indent-level]]u .. .SH SYNOPSIS .sp \fBclubak\fP [ OPTIONS ] .SH DESCRIPTION .sp \fBclubak\fP formats text from standard input containing lines of the form "\fInode:output\fP". It is fully backward compatible with \fBdshbak\fP(1) but provides additional features. For instance, \fBclubak\fP always displays its results sorted by node/nodeset. .sp You do not need to use \fBclubak\fP when using \fBclush\fP(1) as all output formatting features are already included in. It is provided for other usages, like post\-processing results of the form "\fInode:output\fP". .sp Like \fBclush\fP(1), \fBclubak\fP uses the \fIClusterShell.MsgTree\fP module of the ClusterShell library (see \fBpydoc ClusterShell.MsgTree\fP). .SH INVOCATION .sp \fBclubak\fP should be started with connected standard input. .SH OPTIONS .INDENT 0.0 .TP .B \-\-version show \fBclubak\fP version number and exit .TP .B \-b\fP,\fB \-c gather nodes with same output (\-c is provided for \fBdshbak\fP(1) compatibility) .TP .B \-d\fP,\fB \-\-debug output more messages for debugging purpose .TP .B \-L disable header block and order output by nodes .TP .B \-r\fP,\fB \-\-regroup fold nodeset using node groups .TP .BI \-s \ GROUPSOURCE\fR,\fB \ \-\-groupsource\fB= GROUPSOURCE optional \fBgroups.conf\fP(5) group source to use .TP .BI \-\-groupsconf\fB= FILE use alternate config file for groups.conf(5) .TP .B \-G\fP,\fB \-\-groupbase do not display group source prefix (always \fI@groupname\fP) .TP .BI \-S \ SEPARATOR\fR,\fB \ \-\-separator\fB= SEPARATOR node / line content separator string (default: \fI:\fP) .TP .B \-F\fP,\fB \-\-fast faster but memory hungry mode (preload all messages per node) .TP .B \-T\fP,\fB \-\-tree message tree trace mode; switch to enable \fBClusterShell.MsgTree\fP trace mode, all keys/nodes being kept for each message element of the tree, thus allowing special output gathering .TP .BI \-\-color\fB= WHENCOLOR \fBclush\fP can use NO_COLOR, CLICOLOR and CLICOLOR_FORCE environment variables. \fB\-\-color\fP command line option always takes precedence over environment variables. NO_COLOR takes precedence over CLICOLOR_FORCE which takes precedence over CLICOLOR. \fB\-\-color\fP tells whether to use ANSI colors to surround node or nodeset prefix/header with escape sequences to display them in color on the terminal. \fIWHENCOLOR\fP is \fBnever\fP, \fBalways\fP or \fBauto\fP (which use color if standard output refers to a terminal). Color is set to [34m (blue foreground text) and cannot be modified. .TP .B \-\-diff show diff between gathered outputs .UNINDENT .SH EXIT STATUS .sp An exit status of zero indicates success of the \fBclubak\fP command. .SH EXAMPLES .INDENT 0.0 .IP 1. 
3 \fBclubak\fP can be used to gather some recorded \fBclush\fP(1) results: .UNINDENT .INDENT 0.0 .TP .B Record \fBclush\fP(1) results in a file: .nf # clush \-w node[1\-7] uname \-r >/tmp/clush_output # clush \-w node[32\-159] uname \-r >>/tmp/clush_output .fi .sp .TP .B Display file gathered results (in line\-mode): .nf # clubak \-bL .SH COPYRIGHT GNU Lesser General Public License version 2.1 or later (LGPLv2.1+) .\" Generated by docutils manpage writer. . ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/doc/man/man1/cluset.10000644104717000001440000002733014505632065017336 0ustar00sthiellusers.\" Man page generated from reStructuredText. . .TH CLUSET 1 "2023-09-29" "1.9.2" "ClusterShell User Manual" .SH NAME cluset \- compute advanced cluster node set operations . .nr rst2man-indent-level 0 . .de1 rstReportMargin \\$1 \\n[an-margin] level \\n[rst2man-indent-level] level margin: \\n[rst2man-indent\\n[rst2man-indent-level]] - \\n[rst2man-indent0] \\n[rst2man-indent1] \\n[rst2man-indent2] .. .de1 INDENT .\" .rstReportMargin pre: . RS \\$1 . nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin] . nr rst2man-indent-level +1 .\" .rstReportMargin post: .. .de UNINDENT . RE .\" indent \\n[an-margin] .\" old: \\n[rst2man-indent\\n[rst2man-indent-level]] .nr rst2man-indent-level -1 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .in \\n[rst2man-indent\\n[rst2man-indent-level]]u .. .SH SYNOPSIS .INDENT 0.0 .INDENT 3.5 \fBcluset\fP [OPTIONS] [COMMAND] [nodeset1 [OPERATION] nodeset2|...] .UNINDENT .UNINDENT .SH DESCRIPTION .sp Note: \fBcluset\fP and \fBnodeset\fP are the same command. .sp \fBcluset\fP is an utility command provided with the ClusterShell library which implements some features of ClusterShell\(aqs NodeSet and RangeSet Python classes. It provides easy manipulation of 1D or nD\-indexed cluster nodes and node groups. .sp Also, \fBcluset\fP is automatically bound to the library node group resolution mechanism. Thus, it is especially useful to enhance cluster aware administration shell scripts. .SH OPTIONS .INDENT 0.0 .INDENT 3.5 .INDENT 0.0 .TP .B \-\-version show program\(aqs version number and exit .TP .B \-h\fP,\fB \-\-help show this help message and exit .TP .BI \-s \ GROUPSOURCE\fR,\fB \ \-\-groupsource\fB= GROUPSOURCE optional \fBgroups.conf\fP(5) group source to use .TP .BI \-\-groupsconf\fB= FILE use alternate config file for groups.conf(5) .UNINDENT .INDENT 0.0 .TP .B Commands: .INDENT 7.0 .TP .B \-c\fP,\fB \-\-count show number of nodes in nodeset(s) .TP .B \-e\fP,\fB \-\-expand expand nodeset(s) to separate nodes (see also \-S \fISEPARATOR\fP) .TP .B \-f\fP,\fB \-\-fold fold nodeset(s) (or separate nodes) into one nodeset .TP .B \-l\fP,\fB \-\-list list node groups, list node groups and nodes (\fB\-ll\fP) or list node groups, nodes and node count (\fB\-lll\fP). When no argument is specified at all, this command will list all node group names found in selected group source (see also \-s \fIGROUPSOURCE\fP). If any nodesets are specified as argument, this command will find node groups these nodes belongs to (individually). Optionally for each group, the fraction of these nodes being member of the group may be displayed (with \fB\-ll\fP), and also member count/total group node count (with \fB\-lll\fP). If a single hyphen\-minus (\-) is given as a nodeset, it will be read from standard input. 
.TP .B \-r\fP,\fB \-\-regroup fold nodes using node groups (see \-s \fIGROUPSOURCE\fP) .TP .B \-\-groupsources list all active group sources (see \fBgroups.conf\fP(5)) .UNINDENT .TP .B Operations: .INDENT 7.0 .TP .BI \-x \ SUB_NODES\fR,\fB \ \-\-exclude\fB= SUB_NODES exclude specified set .TP .BI \-i \ AND_NODES\fR,\fB \ \-\-intersection\fB= AND_NODES calculate sets intersection .TP .BI \-X \ XOR_NODES\fR,\fB \ \-\-xor\fB= XOR_NODES calculate symmetric difference between sets .UNINDENT .TP .B Options: .INDENT 7.0 .TP .B \-a\fP,\fB \-\-all call external node groups support to display all nodes .TP .BI \-\-autostep\fB= AUTOSTEP enable a\-b/step style syntax when folding nodesets, value is min node count threshold (integer \(aq4\(aq, percentage \(aq50%\(aq or \(aqauto\(aq). If not specified, auto step is disabled (best for compatibility with other cluster tools. Example: autostep=4, "node2 node4 node6" folds in node[2,4,6] but autostep=3, "node2 node4 node6" folds in node[2\-6/2]. .TP .B \-d\fP,\fB \-\-debug output more messages for debugging purpose .TP .B \-q\fP,\fB \-\-quiet be quiet, print essential output only .TP .B \-R\fP,\fB \-\-rangeset switch to RangeSet instead of NodeSet. Useful when working on numerical cluster ranges, eg. 1,5,18\-31 .TP .B \-G\fP,\fB \-\-groupbase hide group source prefix (always \fI@groupname\fP) .TP .BI \-S \ SEPARATOR\fR,\fB \ \-\-separator\fB= SEPARATOR separator string to use when expanding nodesets (default: \(aq \(aq) .TP .BI \-O \ FORMAT\fR,\fB \ \-\-output\-format\fB= FORMAT output format (default: \(aq%s\(aq) .TP .BI \-I \ SLICE_RANGESET\fR,\fB \ \-\-slice\fB= SLICE_RANGESET return sliced off result; examples of SLICE_RANGESET are "0" for simple index selection, or "1\-9/2,16" for complex rangeset selection .TP .BI \-\-split\fB= MAXSPLIT split result into a number of subsets .TP .B \-\-contiguous split result into contiguous subsets (ie. for nodeset, subsets will contain nodes with same pattern name and a contiguous range of indexes, like foobar[1\-100]; for rangeset, subsets with consists in contiguous index ranges)""" .TP .BI \-\-axis\fB= RANGESET for nD nodesets, fold along provided axis only. Axis are indexed from 1 to n and can be specified here either using the rangeset syntax, eg. \(aq1\(aq, \(aq1\-2\(aq, \(aq1,3\(aq, or by a single negative number meaning that the indices is counted from the end. Because some nodesets may have several different dimensions, axis indices are silently truncated to fall in the allowed range. .TP .BI \-\-pick\fB= N pick N node(s) at random in nodeset .UNINDENT .UNINDENT .UNINDENT .UNINDENT .sp For a short explanation of these options, see \fB\-h, \-\-help\fP\&. .sp If a single hyphen\-minus (\-) is given as a nodeset, it will be read from standard input. .SH EXTENDED PATTERNS .sp The \fBcluset\fP command benefits from ClusterShell NodeSet basic arithmetic addition. This feature extends recognized string patterns by supporting operators matching all Operations seen previously. String patterns are read from left to right, by proceeding any character operators accordingly. 
.INDENT 0.0 .TP .B Supported character operators .INDENT 7.0 .TP .B \fB,\fP indicates that the \fIunion\fP of both left and right nodeset should be computed before continuing .TP .B \fB!\fP indicates the \fIdifference\fP operation .TP .B \fB&\fP indicates the \fIintersection\fP operation .TP .B \fB^\fP indicates the \fIsymmetric difference\fP (XOR) operation .UNINDENT .sp Care should be taken to escape these characters as needed, as the shell may otherwise interpret them instead of passing them literally. .TP .B Examples of use of extended patterns .INDENT 7.0 .TP .B $ cluset \-f node[0\-7],node[8\-10] .UNINDENT .nf node[0\-10] .fi .sp .INDENT 7.0 .TP .B $ cluset \-f node[0\-10]!node[8\-10] .UNINDENT .nf node[0\-7] .fi .sp .INDENT 7.0 .TP .B $ cluset \-f node[0\-10]&node[5\-13] .UNINDENT .nf node[5\-10] .fi .sp .INDENT 7.0 .TP .B $ cluset \-f node[0\-10]^node[5\-13] .UNINDENT .nf node[0\-4,11\-13] .fi .sp .TP .B Example of advanced usage .INDENT 7.0 .TP .B $ cluset \-f @gpu^@slurm:bigmem!@chassis[1\-9/2] .UNINDENT .sp This computes a folded nodeset containing nodes found in group @gpu and @slurm:bigmem, but not in both, minus the nodes found in odd chassis groups from 1 to 9. .TP .B "All nodes" extension The \fB@*\fP and \fB@SOURCE:*\fP special notations may be used in extended patterns to represent all nodes (in SOURCE) according to the \fIall\fP external shell command (see \fBgroups.conf\fP(5)) and are equivalent to: .INDENT 7.0 .INDENT 3.5 .INDENT 0.0 .TP .B $ cluset [\-s SOURCE] \-a \-f .UNINDENT .UNINDENT .UNINDENT .TP .B Group names in expressions The \fB@@SOURCE\fP notation may be used to access all group names from the specified SOURCE (or from the default group source when just \fB@@\fP is used) in node set expressions; this works with either file\-based group sources or with external group sources that have the \fIlist\fP upcall defined (see \fBgroups.conf\fP(5)): .INDENT 7.0 .INDENT 3.5 .INDENT 0.0 .TP .B $ cluset \-f @@rack .UNINDENT .nf J[1\-3] .fi .sp .UNINDENT .UNINDENT .UNINDENT .SH NODE WILDCARDS .sp Any wildcard mask found is matched against all nodes from the group source (see \fBgroups.conf\fP(5) and the \fB\-a/\-\-all\fP option above). \fB*\fP means match zero or more characters of any type; \fB?\fP means match exactly one character of any type. This can be especially useful for server farms, or when cluster node names differ. .INDENT 0.0 .TP .B Say that your group configuration is set to return the following "all nodes": .INDENT 7.0 .TP .B $ cluset \-f \-a .UNINDENT .nf bckserv[1\-2],dbserv[1\-4],wwwserv[1\-9] .fi .sp .TP .B Then, you can use wildcards to select particular nodes, as shown below: .INDENT 7.0 .TP .B $ cluset \-f \(aqwww*\(aq .UNINDENT .nf wwwserv[1\-9] .fi .sp .INDENT 7.0 .TP .B $ cluset \-f \(aqwww*[1\-4]\(aq .UNINDENT .nf wwwserv[1\-4] .fi .sp .INDENT 7.0 .TP .B $ cluset \-f \(aq*serv1\(aq .UNINDENT .nf bckserv1,dbserv1,wwwserv1 .fi .sp .UNINDENT .sp Wildcard masks are resolved prior to extended patterns, but each mask is evaluated as a whole node set operand. In the example below, we select all nodes matching \fB*serv*\fP before removing all nodes matching \fBwww*\fP: .INDENT 0.0 .INDENT 3.5 .INDENT 0.0 .TP .B $ cluset \-f \(aq*serv*!www*\(aq .UNINDENT .nf bckserv[1\-2],dbserv[1\-4] .fi .sp .UNINDENT .UNINDENT .SH EXIT STATUS .sp An exit status of zero indicates success of the \fBcluset\fP command. A non\-zero exit status indicates failure.
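.sp Because \fBcluset\fP output is plain text and its exit status is easily testable, it combines well with standard shell constructs in administration scripts. The following is a minimal, illustrative sketch; the node set \(aqnode[1\-3]\(aq and the group \fI@compute\fP are assumptions, not part of your configuration:
.INDENT 0.0
.INDENT 3.5
.nf
# run a check on each node, one at a time (illustrative node set)
$ for n in $(cluset \-e \(aqnode[1\-3]\(aq); do echo "checking $n"; done
# act on a node group only if it is not empty (illustrative group)
$ [ "$(cluset \-c @compute)" \-gt 0 ] && echo "@compute is not empty"
.fi
.sp
.UNINDENT
.UNINDENT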
.SH EXAMPLES .INDENT 0.0 .TP .B Getting the node count .INDENT 7.0 .TP .B $ cluset \-c node[0\-7,32\-159] .UNINDENT .nf 136 .fi .sp .INDENT 7.0 .TP .B $ cluset \-c node[0\-7,32\-159] node[160\-163] .UNINDENT .nf 140 .fi .sp .INDENT 7.0 .TP .B $ cluset \-c dc[1\-2]n[100\-199] .UNINDENT .nf 200 .fi .sp .INDENT 7.0 .TP .B $ cluset \-c @login .UNINDENT .nf 4 .fi .sp .TP .B Folding nodesets .INDENT 7.0 .TP .B $ cluset \-f node[0\-7,32\-159] node[160\-163] .UNINDENT .nf node[0\-7,32\-163] .fi .sp .INDENT 7.0 .TP .B $ echo node3 node6 node1 node2 node7 node5 | cluset \-f .UNINDENT .nf node[1\-3,5\-7] .fi .sp .INDENT 7.0 .TP .B $ cluset \-f dc1n2 dc2n2 dc1n1 dc2n1 .UNINDENT .nf dc[1\-2]n[1\-2] .fi .sp .INDENT 7.0 .TP .B $ cluset \-\-axis=1 \-f dc1n2 dc2n2 dc1n1 dc2n1 .UNINDENT .nf dc[1\-2]n1,dc[1\-2]n2 .fi .sp .TP .B Expanding nodesets .INDENT 7.0 .TP .B $ cluset \-e node[160\-163] .UNINDENT .nf node160 node161 node162 node163 .fi .sp .INDENT 7.0 .TP .B $ echo \(aqdc[1\-2]n[2\-6/2]\(aq | cluset \-e .UNINDENT .nf dc1n2 dc1n4 dc1n6 dc2n2 dc2n4 dc2n6 .fi .sp .TP .B Excluding nodes from nodeset .INDENT 7.0 .TP .B $ cluset \-f node[32\-159] \-x node33 .UNINDENT .nf node[32,34\-159] .fi .sp .TP .B Computing nodesets intersection .INDENT 7.0 .TP .B $ cluset \-f node[32\-159] \-i node[0\-7,20\-21,32,156\-159] .UNINDENT .nf node[32,156\-159] .fi .sp .TP .B Computing nodesets symmetric difference (xor) .INDENT 7.0 .TP .B $ cluset \-f node[33\-159] \-\-xor node[32\-33,156\-159] .UNINDENT .nf node[32,34\-155] .fi .sp .TP .B Splitting nodes into several nodesets (expanding results) .INDENT 7.0 .TP .B $ cluset \-\-split=3 \-e node[1\-9] .UNINDENT .nf node1 node2 node3 node4 node5 node6 node7 node8 node9 .fi .sp .TP .B Splitting non\-contiguous nodesets (folding results) .INDENT 7.0 .TP .B $ cluset \-\-contiguous \-f node2 node3 node4 node8 node9 .UNINDENT .nf node[2\-4] node[8\-9] .fi .sp .INDENT 7.0 .TP .B $ cluset \-\-contiguous \-f dc[1,3]n[1\-2,4\-5] .UNINDENT .nf dc1n[1\-2] dc1n[4\-5] dc3n[1\-2] dc3n[4\-5] .fi .sp .UNINDENT .SH HISTORY .sp \fBcluset\fP was added in 1.7.3 to avoid a conflict with xCAT\(aqs \fBnodeset\fP command and also to conform with ClusterShell\(aqs "clu*" command nomenclature. .SH SEE ALSO .sp \fBclubak\fP(1), \fBclush\fP(1), \fBnodeset\fP(1), \fBgroups.conf\fP(5). .sp \fI\%http://clustershell.readthedocs.org/\fP .SH BUG REPORTS .INDENT 0.0 .TP .B Use the following URL to submit a bug report or feedback: \fI\%https://github.com/cea\-hpc/clustershell/issues\fP .UNINDENT .SH AUTHOR Stephane Thiell .SH COPYRIGHT GNU Lesser General Public License version 2.1 or later (LGPLv2.1+) .\" Generated by docutils manpage writer. . ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/doc/man/man1/clush.10000644104717000001440000004047314505632065017160 0ustar00sthiellusers.\" Man page generated from reStructuredText. . .TH CLUSH 1 "2023-09-29" "1.9.2" "ClusterShell User Manual" .SH NAME clush \- execute shell commands on a cluster . .nr rst2man-indent-level 0 . .de1 rstReportMargin \\$1 \\n[an-margin] level \\n[rst2man-indent-level] level margin: \\n[rst2man-indent\\n[rst2man-indent-level]] - \\n[rst2man-indent0] \\n[rst2man-indent1] \\n[rst2man-indent2] .. .de1 INDENT .\" .rstReportMargin pre: . RS \\$1 . nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin] . nr rst2man-indent-level +1 .\" .rstReportMargin post: .. .de UNINDENT . 
RE .\" indent \\n[an-margin] .\" old: \\n[rst2man-indent\\n[rst2man-indent-level]] .nr rst2man-indent-level -1 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .in \\n[rst2man-indent\\n[rst2man-indent-level]]u .. .SH SYNOPSIS .sp \fBclush\fP \fB\-a\fP | \fB\-g\fP \fIgroup\fP | \fB\-w\fP \fInodes\fP [OPTIONS] .sp \fBclush\fP \fB\-a\fP | \fB\-g\fP \fIgroup\fP | \fB\-w\fP \fInodes\fP [OPTIONS] \fIcommand\fP .sp \fBclush\fP \fB\-a\fP | \fB\-g\fP \fIgroup\fP | \fB\-w\fP \fInodes\fP [OPTIONS] \-\-copy \fIfile\fP | \fIdir\fP [ \fIfile\fP | \fIdir\fP ...] [ \-\-dest \fIpath\fP ] .sp \fBclush\fP \fB\-a\fP | \fB\-g\fP \fIgroup\fP | \fB\-w\fP \fInodes\fP [OPTIONS] \-\-rcopy \fIfile\fP | \fIdir\fP [ \fIfile\fP | \fIdir\fP ...] [ \-\-dest \fIpath\fP ] .SH DESCRIPTION .sp \fBclush\fP is a program for executing commands in parallel on a cluster and for gathering their results. \fBclush\fP executes commands interactively or can be used within shell scripts and other applications. It is a partial front\-end to the ClusterShell library that ensures a light, unified and robust parallel command execution framework. Thus, it allows traditional shell scripts to benefit from some of the library features. \fBclush\fP currently makes use of the Ssh worker of ClusterShell by default, which only requires \fBssh\fP(1) (OpenSSH SSH client). .SH INVOCATION .sp \fBclush\fP can be started non\-interactively to run a shell \fIcommand\fP, or can be invoked as an interactive shell. To start a \fBclush\fP interactive session, invoke the \fBclush\fP command without providing \fIcommand\fP\&. .INDENT 0.0 .TP .B Non\-interactive mode When \fBclush\fP is started non\-interactively, the \fIcommand\fP is executed on the specified remote hosts in parallel. If option \fB\-b\fP or \fB\-\-dshbak\fP is specified, \fBclush\fP waits for command completion and then displays gathered output results. .sp The \fB\-w\fP option allows you to specify remote hosts by using ClusterShell NodeSet syntax, including the node groups \fB@group\fP special syntax and the \fBExtended Patterns\fP syntax to benefit from NodeSet basic arithmetic (like \fB@Agroup\e&@Bgroup\fP). See EXTENDED PATTERNS in \fBnodeset\fP(1) and also \fBgroups.conf\fP(5) for more information. .sp Unless the option \fB\-\-nostdin\fP (or \fB\-n\fP) is specified, \fBclush\fP detects when its standard input is connected to a terminal (as determined by \fBisatty\fP(3)). If actually connected to a terminal, \fBclush\fP listens to standard input when commands are running, waiting for an \fIEnter\fP key press. Doing so will display the status of current nodes. If standard input is not connected to a terminal, and unless the option \fB\-\-nostdin\fP is specified, \fBclush\fP binds the standard input of the remote commands to its own standard input, allowing scripting methods like: .INDENT 7.0 .INDENT 3.5 .nf # echo foo | clush \-w node[40\-42] \-b cat \-\-\-\-\-\-\-\-\-\-\-\-\-\-\- node[40\-42] \-\-\-\-\-\-\-\-\-\-\-\-\-\-\- foo .fi .sp .UNINDENT .UNINDENT .sp Please see some other great examples in the EXAMPLES section below. .TP .B Interactive session If a \fIcommand\fP is not specified, and its standard input is connected to a terminal, \fBclush\fP runs interactively. In this mode, \fBclush\fP uses the GNU \fBreadline\fP library to read command lines. Readline provides commands for searching through the command history for lines containing a specified string. For instance, type Control\-R to search in the history for the next entry matching the search string typed so far.
\fBclush\fP also recognizes special single\-character prefixes that allow the user to see and modify the current nodeset (the nodes where the commands are executed). .INDENT 7.0 .TP .B Single\-character interactive commands are: .INDENT 7.0 .TP .B clush> ? show current nodeset .TP .B clush> @ set current nodeset .TP .B clush> + add nodes to current nodeset .TP .B clush> \- remove nodes from current nodeset .TP .B clush> !COMMAND execute COMMAND on the local system .TP .B clush> = toggle the output format (gathered or standard mode) .UNINDENT .UNINDENT .sp To leave an interactive session, type \fBquit\fP or Control\-D. .TP .B Local execution ( \fB\-\-worker=exec\fP or \fB\-R exec\fP ) Instead of running provided command on remote nodes, \fBclush\fP can use the dedicated \fIexec\fP worker to launch the command \fIlocally\fP, for each node. Some parameters can be used in the command line to build a different command for each node. \fB%h\fP or \fB%host\fP will be replaced by the node name, and \fB%n\fP or \fB%rank\fP by the remote rank [0\-N] (to get a literal %, use %%). .TP .B File copying mode ( \fB\-\-copy\fP ) When \fBclush\fP is started with the \fB\-c\fP or \fB\-\-copy\fP option, it will attempt to copy specified \fIfiles\fP and/or \fIdirectories\fP to the provided cluster nodes. The \fB\-\-dest\fP option can be used to specify a single path where all the file(s) should be copied to on the target nodes. In the absence of \fB\-\-dest\fP, \fBclush\fP will attempt to copy each file or directory found in the command line to the same location on the target nodes. .TP .B Reverse file copying mode ( \fB\-\-rcopy\fP ) When \fBclush\fP is started with the \fB\-\-rcopy\fP option, it will attempt to retrieve specified \fIfile\fP and/or \fIdir\fP from provided cluster nodes. If the \fB\-\-dest\fP option is specified, it must be a directory path where the files will be stored with their hostname appended. If the destination path is not specified, it will take each \fIfile\fP or \fIdirectory\fP\(aqs parent directory as the local destination. .UNINDENT .SH OPTIONS .INDENT 0.0 .TP .B \-\-version show \fBclush\fP version number and exit .TP .BI \-s \ GROUPSOURCE\fR,\fB \ \-\-groupsource\fB= GROUPSOURCE optional \fBgroups.conf\fP(5) group source to use .TP .B \-n\fP,\fB \-\-nostdin do not watch for possible input from stdin; this should be used when \fBclush\fP is run in the background (or in scripts).
.TP .BI \-\-groupsconf\fB= FILE use alternate config file for groups.conf(5) .TP .BI \-\-conf\fB= FILE use alternate config file for clush.conf(5) .TP .BI \-O \ KEY=VALUE\fR,\fB \ \-\-option\fB= KEY=VALUE override any key=value \fBclush.conf\fP(5) options (repeat as needed) .UNINDENT .INDENT 0.0 .TP .B Selecting target nodes: .INDENT 7.0 .TP .BI \-w \ NODES nodes where to run the command .TP .BI \-x \ NODES exclude nodes from the node list .TP .B \-a\fP,\fB \-\-all run command on all nodes .TP .BI \-g \ GROUP\fR,\fB \ \-\-group\fB= GROUP run command on a group of nodes .TP .BI \-X \ GROUP exclude nodes from this group .TP .BI \-\-hostfile\fB= FILE\fR,\fB \ \-\-machinefile\fB= FILE path to a file containing a list of single hosts, node sets or node groups, separated by spaces and lines (may be specified multiple times, one per file) .TP .BI \-\-topology\fB= FILE topology configuration file to use for tree mode .TP .BI \-\-pick\fB= N pick N node(s) at random in nodeset .UNINDENT .TP .B Output behaviour: .INDENT 7.0 .TP .B \-q\fP,\fB \-\-quiet be quiet, print essential output only .TP .B \-v\fP,\fB \-\-verbose be verbose, print informative messages .TP .B \-d\fP,\fB \-\-debug output more messages for debugging purpose .TP .B \-G\fP,\fB \-\-groupbase do not display group source prefix .TP .B \-L disable header block and order output by nodes; if \-b/\-B is not specified, \fBclush\fP will wait for all commands to finish and then display aggregated output of commands with the same return codes, ordered by node name; alternatively, when used in conjunction with \-b/\-B (eg. \-bL), \fBclush\fP will enable a "live gathering" of results by line, so that the next line is displayed as soon as possible (eg. when all nodes have sent the line) .TP .B \-N disable labeling of command line .TP .B \-P\fP,\fB \-\-progress show progress during command execution; if writing is performed to standard input, the live progress indicator will display the global bandwidth of data written to the target nodes .TP .B \-b\fP,\fB \-\-dshbak display gathered results in a dshbak\-like way (note: it will only try to aggregate the output of commands with the same return codes) .TP .B \-B like \-b but including standard error .TP .B \-r\fP,\fB \-\-regroup fold nodeset using node groups .TP .B \-S\fP,\fB \-\-maxrc return the largest of command return codes .TP .BI \-\-color\fB= WHENCOLOR \fBclush\fP can use NO_COLOR, CLICOLOR and CLICOLOR_FORCE environment variables. NO_COLOR takes precedence over CLICOLOR_FORCE which takes precedence over CLICOLOR. When the \fB\-\-color\fP option is used these environment variables are not taken into account. \fB\-\-color\fP tells whether to use ANSI colors to surround node or nodeset prefix/header with escape sequences to display them in color on the terminal. \fIWHENCOLOR\fP is \fBnever\fP, \fBalways\fP or \fBauto\fP (which uses color if standard output/error refers to a terminal). Colors are set to [34m (blue foreground text) for stdout and [31m (red foreground text) for stderr, and cannot be modified.
.TP .B \-\-diff show diff between common outputs (the best reference output is selected by focusing on the largest nodeset and the smallest command return code) .TP .BI \-\-outdir\fB= OUTDIR output directory for stdout files (OPTIONAL) .TP .BI \-\-errdir\fB= ERRDIR output directory for stderr files (OPTIONAL) .UNINDENT .TP .B File copying: .INDENT 7.0 .TP .B \-c\fP,\fB \-\-copy copy local file or directory to remote nodes .TP .B \-\-rcopy copy file or directory from remote nodes .TP .BI \-\-dest\fB= DEST_PATH destination file or directory on the nodes (optional: use the first source directory path when not specified) .TP .B \-p preserve modification times and modes .UNINDENT .TP .B Connection options: .INDENT 7.0 .TP .BI \-f \ FANOUT\fR,\fB \ \-\-fanout\fB= FANOUT do not execute more than FANOUT commands at the same time, useful to limit resource usage. In tree mode, the same \fIfanout\fP value is used on the head node and on each gateway (the \fIfanout\fP value is propagated). That is, if the \fIfanout\fP is \fB16\fP, each gateway will initiate up to \fB16\fP connections to their target nodes at the same time. Default \fIfanout\fP value is defined in \fBclush.conf\fP(5). .TP .BI \-l \ USER\fR,\fB \ \-\-user\fB= USER execute remote command as user .TP .BI \-o \ OPTIONS\fR,\fB \ \-\-options\fB= OPTIONS can be used to give ssh options, eg. \fB\-o "\-p 2022 \-i ~/.ssh/myidrsa"\fP; these options are added first to ssh and override default ones .TP .BI \-t \ CONNECT_TIMEOUT\fR,\fB \ \-\-connect_timeout\fB= CONNECT_TIMEOUT limit time to connect to a node .TP .BI \-u \ COMMAND_TIMEOUT\fR,\fB \ \-\-command_timeout\fB= COMMAND_TIMEOUT limit time for command to run on the node .TP .BI \-m \ MODE\fR,\fB \ \-\-mode\fB= MODE run mode; define MODEs in \fB<confdir>/*.conf\fP .TP .BI \-R \ WORKER\fR,\fB \ \-\-worker\fB= WORKER worker name to use for connection (\fBexec\fP, \fBssh\fP, \fBrsh\fP, \fBpdsh\fP, or the name of a Python worker module), default is \fBssh\fP .TP .BI \-\-remote\fB= REMOTE whether to enable remote execution: in tree mode, \(aqyes\(aq forces connections to the leaf nodes for execution, \(aqno\(aq establishes connections up to the leaf parent nodes for execution (default is \(aqyes\(aq) .UNINDENT .UNINDENT .sp For a short explanation of these options, see \fB\-h, \-\-help\fP\&. .SH EXIT STATUS .sp By default, an exit status of zero indicates success of the \fBclush\fP command but gives no information about the exit status of the remote commands. However, when the \fB\-S\fP option is specified, the exit status of \fBclush\fP is the largest value of the remote command return codes. .sp For failed remote commands whose exit status is non\-zero, and unless the combination of options \fB\-qS\fP is specified, \fBclush\fP displays messages similar to: .INDENT 0.0 .TP .B clush: node[40\-42]: exited with exit code 1 .UNINDENT .SH EXAMPLES .SS Remote parallel execution .INDENT 0.0 .TP .B # clush \-w node[3\-5,62] uname \-r Run command \fIuname \-r\fP in parallel on nodes: node3, node4, node5 and node62 .UNINDENT .SS Local parallel execution .INDENT 0.0 .TP .B # clush \-w node[1\-3] \-\-worker=exec ping \-c1 %host Run locally, in parallel, a ping command for nodes: node1, node2 and node3. You may also use \fB\-R exec\fP as the shorter and pdsh\-compatible option. .UNINDENT .SS Display features .INDENT 0.0 .TP .B # clush \-w node[3\-5,62] \-b uname \-r Run command \fIuname \-r\fP on node[3\-5,62] and display gathered output results (integrated \fBdshbak\fP\-like).
.TP .B # clush \-w node[3\-5,62] \-bL uname \-r Line mode: run command \fIuname \-r\fP on node[3\-5,62] and display gathered output results without default header block. .TP .B # ssh node32 find /etc/yum.repos.d \-type f | clush \-w node[40\-42] \-b xargs ls \-l Search some files on node32 in /etc/yum.repos.d and use clush to list the matching ones on node[40\-42], and use \fB\-b\fP to display gathered results. .TP .B # clush \-w node[3\-5,62] \-\-diff dmidecode \-s bios\-version Run this Linux command to get BIOS version on node[3\-5,62] and show version differences (if any). .UNINDENT .SS All nodes .INDENT 0.0 .TP .B # clush \-a uname \-r Run command \fIuname \-r\fP on all cluster nodes, see \fBgroups.conf\fP(5) to set up all cluster nodes (\fIall:\fP field). .TP .B # clush \-a \-x node[5,7] uname \-r Run command \fIuname \-r\fP on all cluster nodes except on nodes node5 and node7. .TP .B # clush \-a \-\-diff cat /some/file Run command \fIcat /some/file\fP on all cluster nodes and show differences (if any), line by line, between common outputs. .UNINDENT .SS Node groups .INDENT 0.0 .TP .B # clush \-w @oss modprobe lustre Run command \fImodprobe lustre\fP on nodes from node group named \fIoss\fP, see \fBgroups.conf\fP(5) to set up node groups (\fImap:\fP field). .TP .B # clush \-g oss modprobe lustre Same as previous example but using \fB\-g\fP to avoid the \fI@\fP group prefix. .TP .B # clush \-w @mds,@oss modprobe lustre You may specify several node groups by separating them with commas (please see EXTENDED PATTERNS in \fBnodeset\fP(1) and also \fBgroups.conf\fP(5) for more information). .UNINDENT .SS Copy files .INDENT 0.0 .TP .B # clush \-w node[3\-5,62] \-\-copy /etc/motd Copy local file \fI/etc/motd\fP to remote nodes node[3\-5,62]. .TP .B # clush \-w node[3\-5,62] \-\-copy /etc/motd \-\-dest /tmp/motd2 Copy local file \fI/etc/motd\fP to remote nodes node[3\-5,62] at path \fI/tmp/motd2\fP\&. .TP .B # clush \-w node[3\-5,62] \-c /usr/share/doc/clustershell Recursively copy local directory \fI/usr/share/doc/clustershell\fP to the same path on remote nodes node[3\-5,62]. .TP .B # clush \-w node[3\-5,62] \-\-rcopy /etc/motd \-\-dest /tmp Copy \fI/etc/motd\fP from remote nodes node[3\-5,62] to local \fI/tmp\fP directory, each file having its remote hostname appended, eg. \fI/tmp/motd.node3\fP\&. .UNINDENT .SH FILES .INDENT 0.0 .TP .B \fI$CLUSTERSHELL_CFGDIR/clush.conf\fP Global clush configuration file. If $CLUSTERSHELL_CFGDIR is not defined, \fI/etc/clustershell/clush.conf\fP is used instead. .TP .B \fI$XDG_CONFIG_HOME/clustershell/clush.conf\fP User configuration file for clush. If $XDG_CONFIG_HOME is not defined, \fI$HOME/.config/clustershell/clush.conf\fP is used instead. .TP .B \fI$HOME/.local/etc/clustershell/clush.conf\fP Local user configuration file for clush (default installation for pip \-\-user) .TP .B \fI~/.clush.conf\fP Deprecated per\-user clush configuration file. .TP .B \fI~/.clush_history\fP File in which interactive \fBclush\fP command history is saved. .UNINDENT .SH SEE ALSO .sp \fBclubak\fP(1), \fBcluset\fP(1), \fBnodeset\fP(1), \fBreadline\fP(3), \fBclush.conf\fP(5), \fBgroups.conf\fP(5). .sp \fI\%http://clustershell.readthedocs.org/\fP .SH BUG REPORTS .INDENT 0.0 .TP .B Use the following URL to submit a bug report or feedback: \fI\%https://github.com/cea\-hpc/clustershell/issues\fP .UNINDENT .SH AUTHOR Stephane Thiell .SH COPYRIGHT GNU Lesser General Public License version 2.1 or later (LGPLv2.1+) .\" Generated by docutils manpage writer. .
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/doc/man/man1/nodeset.10000644104717000001440000003034614505632065017501 0ustar00sthiellusers.\" Man page generated from reStructuredText. . .TH NODESET 1 "2023-09-29" "1.9.2" "ClusterShell User Manual" .SH NAME nodeset \- compute advanced nodeset operations . .nr rst2man-indent-level 0 . .de1 rstReportMargin \\$1 \\n[an-margin] level \\n[rst2man-indent-level] level margin: \\n[rst2man-indent\\n[rst2man-indent-level]] - \\n[rst2man-indent0] \\n[rst2man-indent1] \\n[rst2man-indent2] .. .de1 INDENT .\" .rstReportMargin pre: . RS \\$1 . nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin] . nr rst2man-indent-level +1 .\" .rstReportMargin post: .. .de UNINDENT . RE .\" indent \\n[an-margin] .\" old: \\n[rst2man-indent\\n[rst2man-indent-level]] .nr rst2man-indent-level -1 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .in \\n[rst2man-indent\\n[rst2man-indent-level]]u .. .SH SYNOPSIS .INDENT 0.0 .INDENT 3.5 \fBnodeset\fP [OPTIONS] [COMMAND] [nodeset1 [OPERATION] nodeset2|...] .UNINDENT .UNINDENT .SH DESCRIPTION .sp Note: \fBnodeset\fP and \fBcluset\fP are the same command. .sp \fBnodeset\fP is a utility command provided with the ClusterShell library which implements some features of ClusterShell\(aqs NodeSet and RangeSet Python classes. It provides easy manipulation of 1D or nD\-indexed cluster nodes and node groups. .sp Also, \fBnodeset\fP is automatically bound to the library node group resolution mechanism. Thus, it is especially useful for enhancing cluster\-aware administration shell scripts. .SH OPTIONS .INDENT 0.0 .INDENT 3.5 .INDENT 0.0 .TP .B \-\-version show program\(aqs version number and exit .TP .B \-h\fP,\fB \-\-help show this help message and exit .TP .BI \-s \ GROUPSOURCE\fR,\fB \ \-\-groupsource\fB= GROUPSOURCE optional \fBgroups.conf\fP(5) group source to use .TP .BI \-\-groupsconf\fB= FILE use alternate config file for groups.conf(5) .UNINDENT .INDENT 0.0 .TP .B Commands: .INDENT 7.0 .TP .B \-c\fP,\fB \-\-count show number of nodes in nodeset(s) .TP .B \-e\fP,\fB \-\-expand expand nodeset(s) to separate nodes (see also \-S \fISEPARATOR\fP) .TP .B \-f\fP,\fB \-\-fold fold nodeset(s) (or separate nodes) into one nodeset .TP .B \-l\fP,\fB \-\-list list node groups, list node groups and nodes (\fB\-ll\fP) or list node groups, nodes and node count (\fB\-lll\fP). When no argument is specified at all, this command will list all node group names found in the selected group source (see also \-s \fIGROUPSOURCE\fP). If any nodesets are specified as arguments, this command will find the node groups these nodes belong to (individually). Optionally for each group, the fraction of these nodes being member of the group may be displayed (with \fB\-ll\fP), and also member count/total group node count (with \fB\-lll\fP). If a single hyphen\-minus (\-) is given as a nodeset, it will be read from standard input.
.TP .B \-r\fP,\fB \-\-regroup fold nodes using node groups (see \-s \fIGROUPSOURCE\fP) .TP .B \-\-groupsources list all active group sources (see \fBgroups.conf\fP(5)) .UNINDENT .TP .B Operations: .INDENT 7.0 .TP .BI \-x \ SUB_NODES\fR,\fB \ \-\-exclude\fB= SUB_NODES exclude specified nodeset .TP .BI \-i \ AND_NODES\fR,\fB \ \-\-intersection\fB= AND_NODES calculate nodesets intersection .TP .BI \-X \ XOR_NODES\fR,\fB \ \-\-xor\fB= XOR_NODES calculate symmetric difference between nodesets .UNINDENT .TP .B Options: .INDENT 7.0 .TP .B \-a\fP,\fB \-\-all call external node groups support to display all nodes .TP .BI \-\-autostep\fB= AUTOSTEP enable a\-b/step style syntax when folding nodesets, value is min node count threshold (integer \(aq4\(aq, percentage \(aq50%\(aq or \(aqauto\(aq). If not specified, auto step is disabled (best for compatibility with other cluster tools). Example: with autostep=4, "node2 node4 node6" folds into node[2,4,6] but with autostep=3, "node2 node4 node6" folds into node[2\-6/2]. .TP .B \-d\fP,\fB \-\-debug output more messages for debugging purpose .TP .B \-q\fP,\fB \-\-quiet be quiet, print essential output only .TP .B \-R\fP,\fB \-\-rangeset switch to RangeSet instead of NodeSet. Useful when working on numerical cluster ranges, eg. 1,5,18\-31 .TP .B \-G\fP,\fB \-\-groupbase hide group source prefix (always \fI@groupname\fP) .TP .BI \-S \ SEPARATOR\fR,\fB \ \-\-separator\fB= SEPARATOR separator string to use when expanding nodesets (default: \(aq \(aq) .TP .BI \-O \ FORMAT\fR,\fB \ \-\-output\-format\fB= FORMAT output format (default: \(aq%s\(aq) .TP .BI \-I \ SLICE_RANGESET\fR,\fB \ \-\-slice\fB= SLICE_RANGESET return sliced off result; examples of SLICE_RANGESET are "0" for simple index selection, or "1\-9/2,16" for complex rangeset selection .TP .BI \-\-split\fB= MAXSPLIT split result into a number of subsets .TP .B \-\-contiguous split result into contiguous subsets (ie. for nodeset, subsets will contain nodes with same pattern name and a contiguous range of indexes, like foobar[1\-100]; for rangeset, subsets will consist of contiguous index ranges) .TP .BI \-\-axis\fB= RANGESET for nD nodesets, fold along provided axis only. Axes are indexed from 1 to n and can be specified here either using the rangeset syntax, eg. \(aq1\(aq, \(aq1\-2\(aq, \(aq1,3\(aq, or by a single negative number meaning that the index is counted from the end. Because some nodesets may have several different dimensions, axis indices are silently truncated to fall in the allowed range. .TP .BI \-\-pick\fB= N pick N node(s) at random in nodeset .UNINDENT .UNINDENT .UNINDENT .UNINDENT .sp For a short explanation of these options, see \fB\-h, \-\-help\fP\&. .sp If a single hyphen\-minus (\-) is given as a nodeset, it will be read from standard input. .SH EXTENDED PATTERNS .sp The \fBnodeset\fP command benefits from ClusterShell NodeSet basic arithmetic addition. This feature extends recognized string patterns by supporting operators matching all Operations seen previously. String patterns are read from left to right, processing any character operators accordingly.
.INDENT 0.0 .TP .B Supported character operators .INDENT 7.0 .TP .B \fB,\fP indicates that the \fIunion\fP of both left and right nodeset should be computed before continuing .TP .B \fB!\fP indicates the \fIdifference\fP operation .TP .B \fB&\fP indicates the \fIintersection\fP operation .TP .B \fB^\fP indicates the \fIsymmetric difference\fP (XOR) operation .UNINDENT .sp Care should be taken to escape these characters as needed, as the shell may otherwise interpret them instead of passing them literally. .TP .B Examples of use of extended patterns .INDENT 7.0 .TP .B $ nodeset \-f node[0\-7],node[8\-10] .UNINDENT .nf node[0\-10] .fi .sp .INDENT 7.0 .TP .B $ nodeset \-f node[0\-10]!node[8\-10] .UNINDENT .nf node[0\-7] .fi .sp .INDENT 7.0 .TP .B $ nodeset \-f node[0\-10]&node[5\-13] .UNINDENT .nf node[5\-10] .fi .sp .INDENT 7.0 .TP .B $ nodeset \-f node[0\-10]^node[5\-13] .UNINDENT .nf node[0\-4,11\-13] .fi .sp .TP .B Example of advanced usage .INDENT 7.0 .TP .B $ nodeset \-f @gpu^@slurm:bigmem!@chassis[1\-9/2] .UNINDENT .sp This computes a folded nodeset containing nodes found in group @gpu and @slurm:bigmem, but not in both, minus the nodes found in odd chassis groups from 1 to 9. .TP .B "All nodes" extension The \fB@*\fP and \fB@SOURCE:*\fP special notations may be used in extended patterns to represent all nodes (in SOURCE) according to the \fIall\fP external shell command (see \fBgroups.conf\fP(5)) and are equivalent to: .INDENT 7.0 .INDENT 3.5 .INDENT 0.0 .TP .B $ nodeset [\-s SOURCE] \-a \-f .UNINDENT .UNINDENT .UNINDENT .TP .B Group names in expressions The \fB@@SOURCE\fP notation may be used to access all group names from the specified SOURCE (or from the default group source when just \fB@@\fP is used) in node set expressions; this works with either file\-based group sources or with external group sources that have the \fIlist\fP upcall defined (see \fBgroups.conf\fP(5)): .INDENT 7.0 .INDENT 3.5 .INDENT 0.0 .TP .B $ nodeset \-f @@rack .UNINDENT .nf J[1\-3] .fi .sp .UNINDENT .UNINDENT .UNINDENT .SH NODE WILDCARDS .sp Any wildcard mask found is matched against all nodes from the group source (see \fBgroups.conf\fP(5) and the \fB\-a/\-\-all\fP option above). \fB*\fP means match zero or more characters of any type; \fB?\fP means match exactly one character of any type. This can be especially useful for server farms, or when cluster node names differ. .INDENT 0.0 .TP .B Say that your group configuration is set to return the following "all nodes": .INDENT 7.0 .TP .B $ nodeset \-f \-a .UNINDENT .nf bckserv[1\-2],dbserv[1\-4],wwwserv[1\-9] .fi .sp .TP .B Then, you can use wildcards to select particular nodes, as shown below: .INDENT 7.0 .TP .B $ nodeset \-f \(aqwww*\(aq .UNINDENT .nf wwwserv[1\-9] .fi .sp .INDENT 7.0 .TP .B $ nodeset \-f \(aqwww*[1\-4]\(aq .UNINDENT .nf wwwserv[1\-4] .fi .sp .INDENT 7.0 .TP .B $ nodeset \-f \(aq*serv1\(aq .UNINDENT .nf bckserv1,dbserv1,wwwserv1 .fi .sp .UNINDENT .sp Wildcard masks are resolved prior to extended patterns, but each mask is evaluated as a whole node set operand. In the example below, we select all nodes matching \fB*serv*\fP before removing all nodes matching \fBwww*\fP: .INDENT 0.0 .INDENT 3.5 .INDENT 0.0 .TP .B $ nodeset \-f \(aq*serv*!www*\(aq .UNINDENT .nf bckserv[1\-2],dbserv[1\-4] .fi .sp .UNINDENT .UNINDENT .SH EXIT STATUS .sp An exit status of zero indicates success of the \fBnodeset\fP command. A non\-zero exit status indicates failure.
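.sp Because a single hyphen\-minus (\-) makes \fBnodeset\fP read from standard input, the output of other tools can be folded or counted directly. A short sketch, reusing the Slurm \fBsinfo\fP(1) commands shown in the \fBgroups.conf\fP(5) examples (this assumes a Slurm cluster):
.INDENT 0.0
.INDENT 3.5
.nf
# fold the node list of all Slurm partitions (assumes Slurm)
$ sinfo \-h \-o "%N" | nodeset \-f \-
# count these nodes
$ sinfo \-h \-o "%N" | nodeset \-c \-
.fi
.sp
.UNINDENT
.UNINDENT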
.SH EXAMPLES .INDENT 0.0 .TP .B Getting the node count .INDENT 7.0 .TP .B $ nodeset \-c node[0\-7,32\-159] .UNINDENT .nf 136 .fi .sp .INDENT 7.0 .TP .B $ nodeset \-c node[0\-7,32\-159] node[160\-163] .UNINDENT .nf 140 .fi .sp .INDENT 7.0 .TP .B $ nodeset \-c dc[1\-2]n[100\-199] .UNINDENT .nf 200 .fi .sp .INDENT 7.0 .TP .B $ nodeset \-c @login .UNINDENT .nf 4 .fi .sp .TP .B Folding nodesets .INDENT 7.0 .TP .B $ nodeset \-f node[0\-7,32\-159] node[160\-163] .UNINDENT .nf node[0\-7,32\-163] .fi .sp .INDENT 7.0 .TP .B $ echo node3 node6 node1 node2 node7 node5 | nodeset \-f .UNINDENT .nf node[1\-3,5\-7] .fi .sp .INDENT 7.0 .TP .B $ nodeset \-f dc1n2 dc2n2 dc1n1 dc2n1 .UNINDENT .nf dc[1\-2]n[1\-2] .fi .sp .INDENT 7.0 .TP .B $ nodeset \-\-axis=1 \-f dc1n2 dc2n2 dc1n1 dc2n1 .UNINDENT .nf dc[1\-2]n1,dc[1\-2]n2 .fi .sp .TP .B Expanding nodesets .INDENT 7.0 .TP .B $ nodeset \-e node[160\-163] .UNINDENT .nf node160 node161 node162 node163 .fi .sp .INDENT 7.0 .TP .B $ echo \(aqdc[1\-2]n[2\-6/2]\(aq | nodeset \-e .UNINDENT .nf dc1n2 dc1n4 dc1n6 dc2n2 dc2n4 dc2n6 .fi .sp .TP .B Excluding nodes from nodeset .INDENT 7.0 .TP .B $ nodeset \-f node[32\-159] \-x node33 .UNINDENT .nf node[32,34\-159] .fi .sp .TP .B Computing nodesets intersection .INDENT 7.0 .TP .B $ nodeset \-f node[32\-159] \-i node[0\-7,20\-21,32,156\-159] .UNINDENT .nf node[32,156\-159] .fi .sp .TP .B Computing nodesets symmetric difference (xor) .INDENT 7.0 .TP .B $ nodeset \-f node[33\-159] \-\-xor node[32\-33,156\-159] .UNINDENT .nf node[32,34\-155] .fi .sp .TP .B Splitting nodes into several nodesets (expanding results) .INDENT 7.0 .TP .B $ nodeset \-\-split=3 \-e node[1\-9] .UNINDENT .nf node1 node2 node3 node4 node5 node6 node7 node8 node9 .fi .sp .TP .B Splitting non\-contiguous nodesets (folding results) .INDENT 7.0 .TP .B $ nodeset \-\-contiguous \-f node2 node3 node4 node8 node9 .UNINDENT .nf node[2\-4] node[8\-9] .fi .sp .INDENT 7.0 .TP .B $ nodeset \-\-contiguous \-f dc[1,3]n[1\-2,4\-5] .UNINDENT .nf dc1n[1\-2] dc1n[4\-5] dc3n[1\-2] dc3n[4\-5] .fi .sp .UNINDENT .SH HISTORY .sp Command syntax has changed since the \fBnodeset\fP command available with ClusterShell v1.1. Operations, like \fI\-\-intersection\fP or \fI\-x\fP, are now specified between nodesets in the command line. .INDENT 0.0 .TP .B ClusterShell v1.1: .INDENT 7.0 .TP .B $ nodeset \-f \-x node[3,5\-6,9] node[1\-9] .UNINDENT .nf node[1\-2,4,7\-8] .fi .sp .TP .B ClusterShell v1.2+: .INDENT 7.0 .TP .B $ nodeset \-f node[1\-9] \-x node[3,5\-6,9] .UNINDENT .nf node[1\-2,4,7\-8] .fi .sp .UNINDENT .sp \fBcluset\fP was added in 1.7.3 to avoid a conflict with xCAT\(aqs \fBnodeset\fP command and also to conform with ClusterShell\(aqs "clu*" command nomenclature. .SH SEE ALSO .sp \fBclubak\fP(1), \fBcluset\fP(1), \fBclush\fP(1), \fBgroups.conf\fP(5). .sp \fI\%http://clustershell.readthedocs.org/\fP .SH BUG REPORTS .INDENT 0.0 .TP .B Use the following URL to submit a bug report or feedback: \fI\%https://github.com/cea\-hpc/clustershell/issues\fP .UNINDENT .SH AUTHOR Stephane Thiell .SH COPYRIGHT GNU Lesser General Public License version 2.1 or later (LGPLv2.1+) .\" Generated by docutils manpage writer. .
././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3263295 ClusterShell-1.9.2/doc/man/man5/0000755104717000001440000000000014505640536015756 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/doc/man/man5/clush.conf.50000644104717000001440000001775114505632065020117 0ustar00sthiellusers.\" Man page generated from reStructuredText. . .TH CLUSH.CONF 5 "2023-09-29" "1.9.2" "ClusterShell User Manual" .SH NAME clush.conf \- Configuration file for clush . .nr rst2man-indent-level 0 . .de1 rstReportMargin \\$1 \\n[an-margin] level \\n[rst2man-indent-level] level margin: \\n[rst2man-indent\\n[rst2man-indent-level]] - \\n[rst2man-indent0] \\n[rst2man-indent1] \\n[rst2man-indent2] .. .de1 INDENT .\" .rstReportMargin pre: . RS \\$1 . nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin] . nr rst2man-indent-level +1 .\" .rstReportMargin post: .. .de UNINDENT . RE .\" indent \\n[an-margin] .\" old: \\n[rst2man-indent\\n[rst2man-indent-level]] .nr rst2man-indent-level -1 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .in \\n[rst2man-indent\\n[rst2man-indent-level]]u .. .SH DESCRIPTION .sp \fBclush\fP(1) obtains configuration options from the following sources in the following order: .INDENT 0.0 .INDENT 3.5 .INDENT 0.0 .IP 1. 3 command\-line options .IP 2. 3 user configuration file (\fI$XDG_CONFIG_HOME/clustershell/clush.conf\fP) .IP 3. 3 local pip user installation (\fI$HOME/.local/etc/clustershell/clush.conf\fP) .IP 4. 3 global configuration file (\fI$CLUSTERSHELL_CFGDIR/clush.conf\fP, defaults to \fI/etc/clustershell/clush.conf\fP) .UNINDENT .UNINDENT .UNINDENT .sp For each parameter, the first obtained value will be used. .sp The configuration file has a format in the style of RFC 822 composed of one main section: .INDENT 0.0 .TP .B Main Program options definition .UNINDENT .SS [Main] .sp Configuration parameters of the \fBMain\fP section are described below. .INDENT 0.0 .TP .B fanout Size of the sliding window (fanout) of active commands for \fBclush\fP\&. This \fIfanout\fP is used to avoid too many concurrent connections and to conserve resources on the initiating hosts. In tree mode, the same \fIfanout\fP value is used on the head node and on each gateway (the \fIfanout\fP value is propagated). That is, if the \fIfanout\fP is \fB16\fP on the head node, each gateway will initiate up to \fB16\fP connections to their target nodes at the same time. .TP .B confdir Optional list of directory paths where clush should look for \fI\&.conf\fP files which define run modes that can then be activated with \fB\-\-mode\fP\&. All other clush config file settings defined here might be overridden in a run mode. Each mode section should have a name prefixed by "mode:" to clearly identify a section defining a mode. Duplicate modes are not allowed in those files. Configuration files that are not readable by the current user are ignored. The variable \fI$CFGDIR\fP is replaced by the path of the highest priority configuration directory found (where clush.conf resides). The default confdir value enables both system\-wide and any installed user configuration (thanks to \fI$CFGDIR\fP). Duplicate directory paths are ignored. .TP .B connect_timeout Timeout in seconds to allow a connection to establish. This parameter is passed to ssh. If set to \fI0\fP, no timeout occurs. .TP .B command_prefix Command prefix. 
Generally used for specific run modes, for example to implement \fBsudo\fP(8) support. .TP .B command_timeout Timeout in seconds to allow a command to complete since the connection has been established. This parameter is passed to ssh. In addition, the ClusterShell library ensures that any commands complete in less than ( connect_timeout + command_timeout ). If set to \fI0\fP, no timeout occurs. .TP .B color \fBclush\fP can use NO_COLOR, CLICOLOR and CLICOLOR_FORCE environment variables. NO_COLOR takes precedence over CLICOLOR_FORCE which takes precedence over CLICOLOR. When the option is set in the configuration file, environment variables are taken into account only with the \fIauto\fP argument. \fBcolor\fP tells whether to use ANSI colors to surround node or nodeset prefix/header with escape sequences to display them in color on the terminal. Valid arguments are \fBnever\fP, \fBalways\fP or \fBauto\fP (which uses color if standard output/error refers to a terminal). Colors are set to [34m (blue foreground text) for stdout and [31m (red foreground text) for stderr, and cannot be modified. .TP .B fd_max Maximum number of open file descriptors permitted per clush process (soft resource limit for open files). This limit can never exceed the system (hard) limit. The \fIfd_max\fP (soft) and system (hard) limits should be high enough to run \fBclush\fP, although their values depend on your \fIfanout\fP value. .TP .B history_size Set the maximum number of history entries saved in the GNU readline history list. Negative values imply unlimited history file size. .TP .B node_count Should \fBclush\fP display additional (node count) information in buffer header? (\fIyes\fP/\fIno\fP) .TP .B maxrc Should \fBclush\fP return the largest of command return codes? (yes/no) .TP .B password_prompt Enable password prompt and password forwarding to stdin? (yes/no) Generally used for specific run modes, for example to implement interactive \fBsudo\fP(8) support. .TP .B verbosity Set the verbosity level: \fI0\fP (quiet), \fI1\fP (default), \fI2\fP (verbose) or more (debug). .TP .B ssh_user Set the ssh user to use for remote connection (default is to not specify). .TP .B ssh_path Set the ssh binary path to use for remote connection (default is \fIssh\fP). .TP .B ssh_options Set additional options to pass to the underlying ssh command. .TP .B scp_path Set the scp binary path to use for remote copy (default is \fIscp\fP). .TP .B scp_options Set additional options to pass to the underlying scp command. If not specified, ssh_options are used instead. .TP .B rsh_path Set the rsh binary path to use for remote connection (default is \fIrsh\fP). You could easily use mrsh or krsh by simply changing this value. .TP .B rcp_path Same as rsh_path but for the rcp command (default is \fIrcp\fP). .TP .B rsh_options Set additional options to pass to the underlying rsh/rcp command. .UNINDENT .SS Run modes .sp Since version 1.9, clush has support for run modes, which are special \fBclush.conf\fP(5) settings with a given name. Two run modes are provided in example configuration files that can be copied and modified. They implement password\-based authentication with \fBsshpass\fP(1) and support of interactive \fBsudo\fP(8) with password. .sp To use a run mode with \fBclush \-\-mode\fP, install a configuration file in one of \fBclush.conf\fP(5)\(aqs \fIconfdir\fP (usually \fBclush.conf.d\fP). Only configuration files ending in \fI\&.conf\fP are scanned.
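.sp As an illustration, a run mode configuration file could look like the following minimal sketch; the mode name, file path and option values are examples to adapt (working run mode files are shipped as examples with ClusterShell):
.INDENT 0.0
.INDENT 3.5
.nf
# /etc/clustershell/clush.conf.d/sudo.conf (illustrative path)
[mode:sudo]
password_prompt: yes
command_prefix: /usr/bin/sudo \-S \-p "\(aq\(aq"
.fi
.sp
.UNINDENT
.UNINDENT
.sp Such a mode would then be activated with \fBclush \-\-mode=sudo\fP\&.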
If the user running \fBclush\fP(1) doesn\(aqt have read access to a configuration file, it is ignored. When \fB\-\-mode\fP is specified, you can display all available run modes for the current user by enabling debug mode (\fB\-d\fP). .SH EXAMPLES .sp Simple configuration file. .SS \fIclush.conf\fP .nf [Main] fanout: 128 connect_timeout: 15 command_timeout: 0 history_size: 100 color: auto fd_max: 10240 maxrc: no node_count: yes confdir: /etc/clustershell/clush.conf.d .fi .sp .SH FILES .INDENT 0.0 .TP .B \fI$CLUSTERSHELL_CFGDIR/clush.conf\fP Global clush configuration file. If $CLUSTERSHELL_CFGDIR is not defined, \fI/etc/clustershell/clush.conf\fP is used instead. .TP .B \fI$XDG_CONFIG_HOME/clustershell/clush.conf\fP User configuration file for clush. If $XDG_CONFIG_HOME is not defined, \fI$HOME/.config/clustershell/clush.conf\fP is used instead. .TP .B \fI$HOME/.local/etc/clustershell/clush.conf\fP Local user configuration file for clush (default installation for pip \-\-user) .TP .B \fI~/.clush.conf\fP Deprecated per\-user clush configuration file. .UNINDENT .SH HISTORY .sp As of ClusterShell version 1.3, the \fBExternal\fP section has been removed from \fIclush.conf\fP\&. External commands whose outputs were used by \fBclush\fP (\-a, \-g, \-X) are now handled by the library itself and defined in \fBgroups.conf\fP(5). .SH SEE ALSO .sp \fBclush\fP(1), \fBgroups.conf\fP(5), \fBsshpass\fP(1), \fBsudo\fP(8). .sp \fI\%http://clustershell.readthedocs.org/\fP .SH AUTHOR Stephane Thiell .SH COPYRIGHT GNU Lesser General Public License version 2.1 or later (LGPLv2.1+) .\" Generated by docutils manpage writer. . ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/doc/man/man5/groups.conf.50000644104717000001440000002004214505632065020303 0ustar00sthiellusers.\" Man page generated from reStructuredText. . .TH GROUPS.CONF 5 "2023-09-29" "1.9.2" "ClusterShell User Manual" .SH NAME groups.conf \- Configuration file for ClusterShell node groups . .nr rst2man-indent-level 0 . .de1 rstReportMargin \\$1 \\n[an-margin] level \\n[rst2man-indent-level] level margin: \\n[rst2man-indent\\n[rst2man-indent-level]] - \\n[rst2man-indent0] \\n[rst2man-indent1] \\n[rst2man-indent2] .. .de1 INDENT .\" .rstReportMargin pre: . RS \\$1 . nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin] . nr rst2man-indent-level +1 .\" .rstReportMargin post: .. .de UNINDENT . RE .\" indent \\n[an-margin] .\" old: \\n[rst2man-indent\\n[rst2man-indent-level]] .nr rst2man-indent-level -1 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .in \\n[rst2man-indent\\n[rst2man-indent-level]]u .. .SH DESCRIPTION .sp The ClusterShell library obtains its node groups configuration from the following sources in the following order: .INDENT 0.0 .INDENT 3.5 .INDENT 0.0 .IP 1. 3 user configuration file (\fI$XDG_CONFIG_HOME/clustershell/groups.conf\fP) .IP 2. 3 local pip user installation (\fI$HOME/.local/etc/clustershell/groups.conf\fP) .IP 3. 3 global configuration file (\fI$CLUSTERSHELL_CFGDIR/groups.conf\fP, defaults to \fI/etc/clustershell/groups.conf\fP) .UNINDENT .UNINDENT .UNINDENT .sp If no \fIgroups.conf\fP is found, group support will be disabled. .sp Additional configuration files are also read from the directories set by the confdir option, if present. See the \fBconfdir\fP option below for further details. .sp Configuration files have a format in the style of RFC 822 potentially composed of several sections which may be present in any order.
There are two types of sections: Main and \fIGroup_source\fP: .INDENT 0.0 .TP .B Main Global configuration options. There should be only one Main section. .TP .B \fIGroup_source\fP The \fIGroup_source\fP section(s) define the configuration for each node group source (or namespace). This configuration consists of external command definitions (map, all, list and reverse). .UNINDENT .sp Only \fIGroup_source\fP section(s) are allowed in additional configuration files. .SS [Main] OPTIONS .sp Configuration parameters of the \fBMain\fP section are described below. .INDENT 0.0 .TP .B default Specify the default group source (group namespace) used by the NodeSet parser when the user does not explicitly specify the group source (eg. "@io"). .TP .B confdir Optional list of directories where the ClusterShell library should look for \fB\&.conf\fP files which define group sources to use. Each file in these directories with the .conf suffix should contain one or more \fIGroup_source\fP sections as documented in [\fIGroup_source\fP] options below. These will be merged with the group sources defined in \fI/etc/clustershell/groups.conf\fP to form the complete set of group sources that ClusterShell will use. Duplicate \fIGroup_source\fP sections are not allowed. Note: .conf files that are not readable by the current user are ignored (except the one that defines the default group source). The variable \fI$CFGDIR\fP is replaced by the path of the highest priority configuration directory found (where groups.conf resides). The default confdir value enables both system\-wide and any installed user configuration (thanks to \fI$CFGDIR\fP). Duplicate directory paths are ignored. .TP .B autodir Optional list of directories where the ClusterShell library should look for \fB\&.yaml\fP files that define in\-file group dictionaries. There is no need to call external commands for these files; they are parsed by the ClusterShell library itself. Multiple group source definitions in the same file are supported. The variable \fI$CFGDIR\fP is replaced by the path of the highest priority configuration directory found (where groups.conf resides). The default confdir value enables both system\-wide and any installed user configuration (thanks to \fI$CFGDIR\fP). Duplicate directory paths are ignored. .UNINDENT .SS [\fIGroup_source\fP] OPTIONS .sp Configuration parameters of each group source section are described below. .INDENT 0.0 .TP .B map Specify the external shell command used to resolve a group name into a nodeset, list of nodes or list of nodesets (separated by space characters or by carriage returns). The variable \fI$GROUP\fP is replaced before executing the command. .TP .B all Optional external shell command that should return a nodeset, list of nodes or list of nodesets of all nodes for this group source. If not specified, the library will try to resolve all nodes by using the \fBlist\fP external command in the same group source followed by \fBmap\fP for each group. .TP .B list Optional external shell command that should return the list of all groups for this group source (separated by space characters or by carriage returns). .TP .B reverse Optional external shell command used to find the group(s) of a single node. The variable $NODE is replaced before executing the command. If this upcall is not specified, the reverse operation is computed in memory by the library from the \fIlist\fP and \fImap\fP external calls.
Also, if the number of nodes to reverse is greater than the number of available groups, the \fIreverse\fP external command is avoided automatically. .TP .B cache_time Number of seconds each upcall result is kept in cache, in memory only. Default is 3600 seconds. This is useful only for daemons using nodegroups. .UNINDENT .sp When the library executes a group source external shell command, the current working directory is first set to the corresponding confdir. This allows the use of relative paths for third party files in the command. .sp In addition to the context\-dependent $GROUP and $NODE variables described above, the two following variables are always available and also replaced before executing shell commands: .INDENT 0.0 .IP \(bu 2 \fI$CFGDIR\fP is replaced by groups.conf highest priority base directory path .IP \(bu 2 \fI$SOURCE\fP is replaced by current source name .UNINDENT .sp Each external command might return a non\-zero return code when the operation is not doable. But if the call returns zero, for instance for a non\-existing group, the user will not receive any error when trying to resolve such an unknown group. The desired behaviour is up to the system administrator. .SH RESOURCE USAGE .sp All external command results are cached in memory to avoid multiple calls. Each result is kept for a limited amount of time. See the cache_time option to tune this behaviour. .SH EXAMPLES .sp Simple configuration file for local groups and slurm partitions binding. .SS \fIgroups.conf\fP .nf [Main] default: local confdir: /etc/clustershell/groups.conf.d $CFGDIR/groups.conf.d autodir: /etc/clustershell/groups.d $CFGDIR/groups.d [local] map: sed \-n \(aqs/^$GROUP:\e(.*\e)/\e1/p\(aq /etc/clustershell/groups list: sed \-n \(aqs/^\e(\fB[0\-9A\-Za\-z_\-]\fP*\e):.*/\e1/p\(aq /etc/clustershell/groups [slurm] map: sinfo \-h \-o "%N" \-p $GROUP all: sinfo \-h \-o "%N" list: sinfo \-h \-o "%P" reverse: sinfo \-h \-N \-o "%P" \-n $NODE .fi .sp .SH FILES .INDENT 0.0 .TP .B \fI$CLUSTERSHELL_CFGDIR/groups.conf\fP (defaults to \fI/etc/clustershell/groups.conf\fP) Global node groups configuration file. .TP .B \fI$CLUSTERSHELL_CFGDIR/groups.conf.d/\fP (defaults to \fI/etc/clustershell/groups.conf.d/\fP) Recommended directory for additional configuration files. .TP .B \fI$CLUSTERSHELL_CFGDIR/groups.d/\fP (defaults to \fI/etc/clustershell/groups.d/\fP) Recommended directory for \fIautodir\fP, where native group definition files (.yaml files) are found. .TP .B \fI$XDG_CONFIG_HOME/clustershell/groups.conf\fP Main user groups.conf configuration file. If $XDG_CONFIG_HOME is not defined, \fI$HOME/.config/clustershell/groups.conf\fP is used instead. .TP .B \fI$HOME/.local/etc/clustershell/groups.conf\fP Local groups.conf user configuration file (default installation for pip \-\-user) .UNINDENT .SH SEE ALSO .sp \fBclubak\fP(1), \fBcluset\fP(1), \fBclush\fP(1), \fBnodeset\fP(1) .sp \fI\%http://clustershell.readthedocs.org/\fP .SH AUTHOR Stephane Thiell .SH COPYRIGHT GNU Lesser General Public License version 2.1 or later (LGPLv2.1+) .\" Generated by docutils manpage writer. .
././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3273294 ClusterShell-1.9.2/doc/sphinx/0000755104717000001440000000000014505640536015654 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/Makefile0000644104717000001440000001314714501416555017320 0ustar00sthiellusers# Makefile for Sphinx documentation # # You can set these variables from the command line. TMPDIR ?= /tmp SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = $(TMPDIR) # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make <target>' where <target> is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/{html,dirhtml,singlehtml,pickle,json,htmlhelp,qthelp,devhelp,epub,latex,text,man,textinfo,gettext,changes,linkcheck,doctest} html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/clustershell.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/clustershell.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/clustershell" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/clustershell" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." 
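# Example usage (illustrative; as noted above, variables like BUILDDIR can be
# overridden from the command line):
#   make html BUILDDIR=/tmp/clustershell-doc
#   make man BUILDDIR=/tmp/clustershell-doc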
././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3273294 ClusterShell-1.9.2/doc/sphinx/_static/0000755104717000001440000000000014505640536017302 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/_static/clustershell-nautilus-logo200.png0000644104717000001440000004203214501416555025542 0ustar00sthiellusers[binary PNG image data omitted: ClusterShell nautilus logo, 200px]
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/_static/theme_overrides.css0000644104717000001440000000071614501416555023202 0ustar00sthiellusers/* passed to app.add_stylesheet() in conf.py to override readthedocs.org table width restrictions... from https://github.com/snide/sphinx_rtd_theme/issues/117 */ .wy-table-responsive table td, .wy-table-responsive table th { /* !important prevents the common CSS stylesheets from overriding this as on RTD they are loaded after this stylesheet */ white-space: normal !important; } .wy-table-responsive { overflow: visible !important; } ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3283296 ClusterShell-1.9.2/doc/sphinx/api/0000755104717000001440000000000014505640536016425 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/api/Defaults.rst0000644104717000001440000000032014501416555020719 0ustar00sthiellusersDefaults -------- .. automodule:: ClusterShell.Defaults .. py:currentmodule:: ClusterShell.Defaults .. autoclass:: Defaults :members: .. data:: DEFAULTS Globally accessible :class:`Defaults` object. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/api/EngineTimer.rst0000644104717000001440000000024214501416555021361 0ustar00sthiellusersEngineTimer ----------- .. py:currentmodule:: ClusterShell.Engine.Engine .. autoclass:: EngineTimer :members: :inherited-members: :special-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/api/Event.rst0000644104717000001440000000024114501416555020233 0ustar00sthiellusersEvent ----- .. automodule:: ClusterShell.Event .. py:currentmodule:: ClusterShell.Event .. autoclass:: EventHandler :members: :member-order: bysource ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/api/MsgTree.rst0000644104717000001440000000023514501416555020523 0ustar00sthiellusersMsgTree ------- .. automodule:: ClusterShell.MsgTree .. py:currentmodule:: ClusterShell.MsgTree .. autoclass:: MsgTree :members: :special-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/api/NodeSet.rst0000644104717000001440000000051714501416555020521 0ustar00sthiellusersNodeSet ------- .. automodule:: ClusterShell.NodeSet .. py:currentmodule:: ClusterShell.NodeSet .. autoclass:: NodeSet :members: :special-members: :inherited-members: .. autofunction:: expand .. autofunction:: fold .. autofunction:: grouplist .. autofunction:: std_group_resolver .. 
autofunction:: set_std_group_resolver ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/api/NodeUtils.rst0000644104717000001440000000046114501416555021064 0ustar00sthiellusersNodeUtils --------- .. automodule:: ClusterShell.NodeUtils .. py:currentmodule:: ClusterShell.NodeUtils .. autoclass:: GroupSource :members: :special-members: .. autoclass:: GroupResolver :members: :special-members: .. autoclass:: GroupResolverConfig :members: :special-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/api/RangeSet.rst0000644104717000001440000000037114501416555020666 0ustar00sthiellusersRangeSet -------- .. automodule:: ClusterShell.RangeSet .. py:currentmodule:: ClusterShell.RangeSet .. autoclass:: RangeSet :members: :special-members: RangeSetND ---------- .. autoclass:: RangeSetND :members: :special-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/api/Task.rst0000644104717000001440000000040614501416555020057 0ustar00sthiellusersTask ---- .. automodule:: ClusterShell.Task .. py:currentmodule:: ClusterShell.Task .. autoclass:: Task :members: :special-members: .. autofunction:: task_self .. autofunction:: task_wait .. autofunction:: task_terminate .. autofunction:: task_cleanup ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/api/index.rst0000644104717000001440000000031514501416555020263 0ustar00sthiellusersPython API ========== ClusterShell public API autodoc. .. toctree:: :maxdepth: 3 NodeSet NodeUtils RangeSet MsgTree Task Defaults Event EngineTimer workers/index ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3283296 ClusterShell-1.9.2/doc/sphinx/api/workers/0000755104717000001440000000000014505640536020121 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/api/workers/ExecWorker.rst0000644104717000001440000000030514501416555022725 0ustar00sthiellusersExecWorker ---------- .. py:currentmodule:: ClusterShell.Worker.Exec .. autoclass:: ExecWorker :members: :special-members: .. autoclass:: ExecClient :members: :special-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/api/workers/StreamWorker.rst0000644104717000001440000000031614501416555023276 0ustar00sthiellusersStreamWorker ------------ .. py:currentmodule:: ClusterShell.Worker.Worker .. autoclass:: StreamWorker :members: :special-members: .. autoclass:: StreamClient :members: :special-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/api/workers/TreeWorker.rst0000644104717000001440000000025614501416555022745 0ustar00sthiellusersTreeWorker ---------- .. automodule:: ClusterShell.Worker.Tree .. py:currentmodule:: ClusterShell.Worker.Tree .. autoclass:: TreeWorker :members: :special-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/api/workers/Worker.rst0000644104717000001440000000035014501416555022120 0ustar00sthiellusersWorker ------ .. automodule:: ClusterShell.Worker.Worker .. py:currentmodule:: ClusterShell.Worker.Worker .. 
autoclass:: Worker :members: :special-members: .. autoclass:: DistantWorker :members: :special-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/api/workers/WorkerPdsh.rst0000644104717000001440000000040314501416555022736 0ustar00sthiellusersWorkerPdsh ---------- .. py:currentmodule:: ClusterShell.Worker.Pdsh .. autoclass:: WorkerPdsh :members: :special-members: .. autoclass:: PdshClient :members: :special-members: .. autoclass:: PdcpClient :members: :special-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/api/workers/WorkerPopen.rst0000644104717000001440000000031214501416555023120 0ustar00sthiellusersWorkerPopen ----------- .. py:currentmodule:: ClusterShell.Worker.Popen .. autoclass:: WorkerPopen :members: :special-members: .. autoclass:: PopenClient :members: :special-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/api/workers/WorkerRsh.rst0000644104717000001440000000037514501416555022604 0ustar00sthiellusersWorkerRsh --------- .. py:currentmodule:: ClusterShell.Worker.Rsh .. autoclass:: WorkerRsh :members: :special-members: .. autoclass:: RshClient :members: :special-members: .. autoclass:: RcpClient :members: :special-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/api/workers/WorkerSsh.rst0000644104717000001440000000037514501416555022605 0ustar00sthiellusersWorkerSsh --------- .. py:currentmodule:: ClusterShell.Worker.Ssh .. autoclass:: WorkerSsh :members: :special-members: .. autoclass:: SshClient :members: :special-members: .. autoclass:: ScpClient :members: :special-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/api/workers/index.rst0000644104717000001440000000051314501416555021757 0ustar00sthiellusersWorkers ======= ClusterShell public Workers API autodoc. Notes: * Workers named *NameWorker* are new-style workers. * Workers named *WorkerName* are old-style workers. Contents: .. 
toctree:: :maxdepth: 2 Worker ExecWorker StreamWorker TreeWorker WorkerRsh WorkerPdsh WorkerPopen WorkerSsh ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/clustershell-nautilus-logo200.png0000644104717000001440000004203214501416555024114 0ustar00sthiellusers[binary PNG image data omitted: duplicate of the ClusterShell nautilus logo in _static/]
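The autodoc stubs above pull everything from docstrings at build time; as a quick orientation, here is a minimal usage sketch of the public API they document (node names are placeholders and password-less ssh access to the targets is assumed)::

    from ClusterShell.Task import task_self
    from ClusterShell.NodeSet import NodeSet

    nodes = NodeSet("node[1-4]")           # placeholder node set
    task = task_self()                     # Task bound to the current thread
    task.run("uname -r", nodes=nodes)      # run remotely, wait for completion
    for buf, keys in task.iter_buffers():  # output grouped by identical buffer
        print("%s: %s" % (NodeSet.fromlist(keys), buf.message().decode()))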
f¬*VàŽ6ç¡^ÊQŠ5ù²v ü ÕŠM"ƒ¯öóB@íJA÷‡Dω¼ ^K޲OŠW377–M¯{¶^ö-r*}hÑ÷ìaàÈÆs¸·FÍ‹/·/)=^^ilT+vÅ6PÜÏñù¿^ÒÂÊÎ Ç8·/ãÆÐ»Ý/$ë]Ÿ†uC½¨~o»pÊÿaæàЬ׋›Òzáñ¸níá Ææ%2“UTÏçõRn/yýõ s ºâ㼃£ãl“ìºÛ× k ú\=w[¸:*\BRz-Íp!Uàô‚îÅÓ`£hŸ‚êR|\ŽŒñ©¢×xZ|`›ì·‘’!^þªèSžÂ9(nqºsæOY£¦:ÑßBH$#…0'Œ‡55³€²bÀ5Ð XVÈ\“÷Ñv±%zÌãni| °yEê˜Ñ3‹RI‘T‹ÇÊã×X×nll|ïäHVÜØu®°ènîÙSk4s}-Ô¼©JQª6`øÆÎg¨¬EñÕk\â~õAÆßA.ã*¨uãó]Ygw]/3²æÅ‡Gó>ÆYü¦Âf¿Õ"ŠH%‚Ußg#¨4O{"À¢£¾¥ 2Ï>>k¨7«øÄèù‰,‚/í신ЛÑýˆð˜.5 ~…Ï:µ<«“Ó<ä:1s¿9èdnë‘,¸A,n×Ðʆžçf³}^2mÔ¸‡®Öõ: §ò säúq.â²ÎÝì›töAü£T™P3Ø‚NiÌøúüZÅ0XúûAIy€¨sÑBÀ³pÈ´ó|½Z…«ß¶²ui˜»qMö^b‡u³¾|Þ!d{qÊ!|\ÉùsÂñŽ:Ç£­‘Ú¨ÅÏúâKWn÷…çlæê,•{@ îÞ”Ø$%ÌØmh +0ñ’‰…>}ƒNwºÆää†Þ~ì”ÅÒœµ?´wwjÌ:ŠF!7ÍãÄà/’P›€ýÝ šÌÓI ¢;%Ó06W‰J´rÉÊÆþ˜/¼ê²½¨7…À<ïNÑÂLÆýPO@}-Ṃ?ÎJ™Á{:/œƒ^8'Ù¾o·œQ˜m ÝÌXŸ}lb AoŸóªø`)(4‚Š-u¬ÃûúR†²œ¨DB£µ¶ž7’›ã/[”fÏí)ÍHîžËžH iÒ?Ê÷Ÿ‰øôìpþÕÀ–ð „ôÌÄt< ÉíH¥x94¹Ÿ-wh»(X ¥ÁPñ?6ûô¶(ô›i´@퉤 ž®QýrÔ6DVÊøMænA yuâ\Yç«2œf==¸B*ú¬Œ×ÅÞƒz TnúHkÈ^ë_u'oóNÅKÓÊŒSÒýÃóžù–‰Ç-­Á¾óãã×n¯Îõz½!š²¾r½¸¬Xô'ö' „±`äúgaà‰ˆØ"¬::Ä.ÛþFPÑi¯À72Ö²_} Ú÷ÁØ~rÂV¬c8—õN6)ÐñPœ?ÈP+K5f¿'j…ΞM0± šjÜÑ÷ ­Ê¨åÊ~‘±2×¢­¨òŸSÙˆ„ñêRK©ã¨¥ª®ŽPW”œ††„Y€—Q†ú&5½´àÞóÐ:õAû#““×|èþÈW}%)&¦6ú³­%£&štÌîíjàýºÌˆ‡^ZhÙD¾=³ò4`JãÚî¯6Ö ;Aåg¨¤,âNe¤¯‚Tzj ‘I| ´ Þo{XwÉÚæ»:¼éÉ{µ[yQ¿C¾€‰’é–í{èÇ'v9mI%MÍR"Ý$Q*i@H©M» ¼í6¸í÷ÖûêëVßqð€†¸ü© èãø§k×^ÛõÖ^xü@Ry"$ü¡ÞVMÐĠ#X¯£«œr‚JtñŽ–[t«ß²Ðë›èåú>ã¬EI¦œ“:®ªþÊ`¤¢þÎÔpÒ*é'ÐYGK71Aõsæí‡ľ€j°þ‘y_ {áU[@ÓˆžJÐëÞ £$8 6³‘2¸±ïùó§·ÐÛ/øŸ¶×©7Ë‹¿FaÅM.üÐÏÒ’i*‚;þXí¶¢§”þ^¢ùHH žÚ ù½×EÐ å­àD´™6Nu-"N¿C¿:I /¸$ïM “òO_s˜U‘#èÛ¼¥S#’$gÕ„Ã?MEP×@Ð&®—ÏÛŠïO¹š$$”éT™ü²ÈÁ[}&Äs’7%¢Ö”"D8‚t4b*xe‰`Š!ÐÁ ãõ2j?›=ŽO8Î=ݨ™:³ù®i…®i‰:ÚçÏŽ3±8†Ÿÿ°µä|%ç‹âà¥=  € qêÌœ qƶ[ÌæÙíæˆ}h“DN:üþk±x~¹³"ˆP7F¤ÔÞozí÷‚ï^¨ã·o6Â'Y:ú)¡ùúS~qÖáU'Ô5+x“Õ@Ø0‘0Lä&éEo@=9;€ÝTï¼ÂHEßG’òâSz­ô2¿ª9ªà×J'ÃD­¼¹4ãl8œëõ¬õ90¡€†bêt„Pô_Ò;óŒÊ¯Ò •F¿3«q$™w}É{êCçT—ލL.g'cŸyR¼Þ9Rn HµyðécGïùr.Ë€5AOˆ¬TÒ¦XZ»É×ëkqyqé;Ê{ίCк( 7–°ASDœ à"@½ kœï&ÞÈwçô;K /k*Ú«FÉää|ôD§Ôg„r Ää×ÎKÿQ œzªð#Œ@WíëI ÐõÚ”¾÷N¹"ìGÌÔ$èù:Ë$Œ6ämû¸WP:٣첔"ß»º+îL>ª@ÏÆîʳÚKNˆçT{­‘È€rÃa9ŸývȺ7\ÿóÙä⯥…Nç²3ÐÔBD†× ¥¡ºO-0DÇeâÖ¹Àž"•‚„ÊP`*‡ÇÞä<{å_Qse}†PS4êŒYtµî-šÿQ 8|÷>VÆ&_¿"#ðòl§é}¿“¯KÚùoŸòÙ‰‘霔ži‚—xXôz”Þh4‘·|E¡ªƧ«)ɨzÛ©¬Z×ßüFš}ŠèTü°èp˵4—Îéu ‡÷£Áæ’¤ä9ÂzÌø…i Ö1@(Mtèf¡–!Þ¥oÔØ i1Ë ôz çÞÍÆé¼~x9Ó÷.A/–|xöiµÏ3yjN!jÂ?xvÀ™³(CñÆÍu—O)Ñ3œ?ìi¯BR8Ô¶7@p—CnN_Ïm÷°I¶–ÿøLÜ:~\D´’×F0BúLabC•ïíünRmp¥ñÁ9΀ñé÷öÏ¿vlij´Žú+8Çd5¤É¿œhÀw‡ã“cêóI(®dïó0nêK Z¹¯ïz §ê‚C`vdù8'«CŸ‹«*ÙE^>ÿîôƱ¤ÔŸÕÅ(l~Mã ­~±âd}DL=äû†‰%,Òh©™Ô…Зnܦ%0gOwsž©p‘•µq“í ´ý¸{Ú”Œ€€U™—Šõb}qç{Þ§4÷»R))ït‰:;\_¼±C¶°gpÆ&ÚcåPWsgMë ö©::bE”Ðo@nH @wÎÛD|¸c\Çå)¨Õ©eí½Õô=É ®zËÕüý³bÊêÞB§ ìÕŸ”USÿtào£*óÑn*Ñ/ <©À®T dMôßX˜´ÝY9ǦæÞ\‚@"ÝJ ºŠc"=LÏÊ5ÓpÐJH *W6<)gž<Ô-nEê©b;Qà¸ãŒÅ§wF÷â2E…EÔs ½ÄK§ O'o4š°ÑçÀ~xôvÞ–ö`&zà|ïí.P¥H|ω1U"馭ÊñA˜ä¹úÓ7¯XdÁ¼_¿.zèüºÿNkpæiÃeCþ_\nRÿG}ˆð—Æ7<‹öýlü×^äEÆ&yœútrïT\x•VÐzÔ£ÉÃxOk-YËÈíƒ{G&n²~ý”(7ŒÆôQ¥áŒ•†štÂ5•rÞ07$˜‘ w–ôô(ú¤ooðÚ º~Íiî¶0…–4›¤(€‹P&^áµ~褦ªz뇡n+€¼h¬(½ ‰Ä>J¥.,€Û o%è½|F»µ›wioúå™J8Gô©˜.0býYšJJ<¿(ˆ’ͯ:BhßÙ|$Ì p%9²îäãL£_ÙÁ½·ø˜õè$Ò3$ÙÕÕe¤«ýÅA{{™Iv—ìZ‰ {æ³üuÕTbj¡ñ7ôðÕÒãľÍÓWÝ)¾WM)föu úr¸ AÑ™& ¼ÛJÚF@ûšx¥h¿–«žßÌUe`Ùß^ÊÈ©iA÷éù¯+Yi·XøxoÎÕE²û{ÚwCvÀ?¦@‰½)ìÍu¾ªÞ|ˆ ôO?%Q‘#U¼áPIÒýDÙïæqªF‘R€dÉù*œÓÇìZfKÎØØ`präÑ©£mù‡€Ø„{í^±–‘¾âÎL"Ñlj¯aàaÓ¤K{ôê.ÿçÌjࡽÕ9¬ ˆ€ŠJw3Íb§ö÷r°¹)½°¨¹7!m·ðÅ«7¯~šò0'!d[Ë#TÎÉNt[XG5;ÇÛ÷¸Zçv²Ž.¦ª‹ T> F´w!‹þy_+!]™ß¬–ˆsmLGl^l ¯:k%E+ -LÌÎoVríNKëåxYר£¼â€ØbKß¡w¨,ª$íXííJY^*(\|]2LJ`JÁRÇå¡B$¨‰`”2 ©Ê@™Þ§{ZùLºˆu ¸Ny½ ˆ®ÞXÎû¼†Õ%y±ªòf ððœ±xïFr+÷Ñ¥ÞŽâbƒŸ+}à( Ÿç¢¸ò“rÕ^UÖ©2FT4x5Wý½C4Hté®(¾;¤ÒH".ù(A‡CÃiÄÌìÍ&×.Ä‹sÉŽÄFÆÊ»ÆWdäeìÜ?nÂìÜ'½5mn§@KFdÅ€_9ÕScèìU¼må%¹H É,§¡vÑù˜”ZZòÄ"÷HYÑ7›ˆìCgK× ¨Q³X½‹Q¤3«ÔÔ‰­avl@ÁÔG'ŠæÒDf ë¡”çÇbCƒø“ƒ3F‹Ã`/…,¥–s¹hßË Üné×&¢aÿ±b•!OÎÍÕ2>y‘õ¨»ÃäÈõ 9ŸZ£cWïûúii‰oŸ¸úÅe„¯Îµ.vÞ3Ô‹.\"¹È”JT„Ý›3d?A!é*Òf M÷ |·¹Zh´yI2 §ËªÀÓMÐø‰«kX=Ɉ3 £¦¨Ëjj` ™¤ãð0»Ã41Ø+¾ûÞòtAàõK¹Mƒ~7ü3ãi] ^ß÷UÃlQúÃ…sg¿z”ìÎø` ö§?E¿V©¬O:âujcF>}ÿþlÝ©„Ö§{;1{ê"œ¦ 
“ëÓ¬u‚ÚÊÖ3,·8ù_…ÚÎò³àÕ˜ín½#ôó#nÈÓ+£ÝÍàl3—¼;æÍU"¦¿À€¹îsW"4”@”U· *R>a**SR2ߊγ!MŒ€Þ›©1Ô6ÐÂY?ŽDÉ=û j-Ùk~ma³ýÌ]P,Õ ½pàÝûÊÕåò– ¸Y ¹éótF?BåŸSG˜&xdéÙëWõ*ªÊ«8{{;ï¬|Ýû/ß~üzRAL½6Ÿ¢/.®%¨/ ïôo­eáÿé—ýüV=õ#”=uÜ\âÝ•+¹÷ô⢓™ØÓÞSynýüü”‚ü­‰.è\„\¼c¡©ï›àô©=TÂäRXr[¨Vó1 &a„4ßúÔüÙ‹²¾fö¶âN‡?VÜ?¸Öeg#¿Xçx¤¡¹³Oú(7Oetœ¬€Ìf M0ÿ¬ð>Ú"²ú„Gó®GxH_PòîÞFÆÈ¶ÛÇÂ9/ñÜ㣡ùù¨šP7kq­4ýd}—83­ ÈéojÒS·]3ÇA±Ë÷U¿ÛÜTÕ›÷³ŽZ4ì’_ò¥Dò0áô͘É7ŒÀóL\=´­!hWš‚Z3‚ºÕ9L™™—ˆ™Ð¹Ep9p3 ‹z®o‘V–oœ·<.Ën{ÿƒõàì+ š§H~ ½À’øõöHðé31@jcòÜm6JÂߢ-˜r[ë^"A]–~âq:w—›3K‹î>ÌÏÕr[ï¼;T\,ë%ËÅå%ÛïnÿîP XjmmòÄë§§žÜu楦äC©1ëéÑÁéJ–a2þS¯RR’-‹4 ³Ì-|Î ¥hA·€jGÈ)‰àsAgÄ¢hRõ ŸPÓ·Œ"®M!£j.$>4Ðr>|–©(ãØoýfè<}P^üígÐTIý×WÐ%™þ¼¼P®³ôŠ—×V‚ ”hÖˆßÕ#` æSZ}ÉâA®òÛÙíòq#7 ûÖ•qyYweäŸÊî¹8|ÙS+- «ü©BSšGFFEPN¸à‰Å|щC½¹i7:Ôm˜¾ °uehÀ¥Ö‰’ïÁC·Øðs˜­)ÓÅÀÐÛñv@€;U¦8‘0#†×o¿mDI55±^µ ¤&¥”¹Òo¹ýé â¬ö•jàê{zjÊóÓê8Wøë‹ôO9Ä54qÕ¾ªõê+-‹Àº º×ñjÁ0ýæï@”JÃ}CblÅÓäÛ‹=¬³Ö­]»âÊ"µ¼bm›+üÆyù¸Ó\?<Í'ÇLÇ¡ T•è4ˆ‘ÝYçS¦j«â«®Ñb µ ºY…Dÿèc×ðîý¡ZáSçg‘Šì3€ôœc20È4yḣ „Ôå˜8)Ï|sÿ|!40eäH1ZZ²ËúÈ®WÎ]×vmÙJ³"‡Îû·f§X¸œ‡B>|ò»Üñ f °„ âWd³-‰‡¿;k€E<÷ œ»Ôáê”R>”ùAL¡3ÿò¥îÃ…Güd 3, J¤¤t’aŠ*Ô"Ì|tºma¬XR3{¶œúü|ñ4-‡€üîèHË\ÌΦdìÚCK—Ž;(R\„ÈÓa\RɽÏÔØ˜ Àó]ÆyÁÒ„¤Ì‚6*žPn¸‰‹õH©"Ç(ÕÚCö"`Èò•|Pï’¼2•æÙÓ%õ?Ó }›rjÆq½}<ÕÕp~  È¡ŸD‘ù\Gà»ð'¡\*˜™ÝÍNœ™å‡Þ]B‘ÁP,R’>(;u9fAj±e¤œ†),Éé>‡1ëáÄ­…ƑɮòònA–UÝ¥øØ›¤f-ã3KÐX Ó$Ú‚qŸ$J.ˆÌ×'÷£ÝëLîµ0(Ë‘ªqêêánžóÒŠ•Ë–õ<'Caƒªu¸¡p1débpèvoo`ÌNVù甿¨×±×/… 0Eq/‚–­ ÈS w¡¸§>¼Tv ×JÌ.ÉYýN"Bût62 ꌻýÅþÞŠµ‹bbd,Ä´“Ühfzphk/IGÅÊÄN6¥`—.1=è›ÕâÙàT«eâ:éæU;ì´@“4Z3e&e&G7{Í2 jŒž…ì¼Çõ‹ç›7"‚ª0J¼8 ‘E¿È®X|ê–T`õeÇ—->ÆèˆÇÜw¨í. Ã[ÖŸu>ì„ërœ0”:z°(Q²L7_h­–F¡˜mù€QSªñ±Ð¡h*ý /®ºh%¯Ïš 3càv(Tõ$6¹£Ð-ÁÊŒi"`v;]þxŸËb0«UÜ52ÅB¼Ø²Î$vÛ#-‹Dôöýąŵњ ËO›I¦ÎdP=P®˜q=À'´ kŒ"3)E˜Æ™KË€XÇÇ“5 jâvuð±ínErÝÚõþëþv,¼ß ˜å!¯¢ÀÌuSªï½ªÜ­÷\N«qp’º/O4.p_°¿Î‚‘PþµöËWãC øáÜk}ø8ÝÁ¤¬£Ày]Tùt‹„S÷9t¨yGœD›ŽvµQ}2îc§~xx¬lÖåÌVqYñŒÒ˜ü€ö€í„í±‰¡@.ñS]§>™(úß|«Ã0NYÄW"¡køíx“’R⤠--cäÎv]ÀÃÌ'çG¼r÷-Þ %¿‹1BÒìÊ®ØÙ‹™¯UT5½Sµ§Ï»Y®I á wÓ‰¯Ò‰×çòð°¶Lˆib!ç#áüæ\Eî-AÉ™'ïDn^€^‚T!&ÏxR9Aä WWy®‘Æòɉ@áRoQgymùz>uymQߎO† NŽ^tÓä˜Y©*­-nÃ(}Î;¸ÝÈúøqleë’ËÀ©€ß@¨*YY*íðŽ||Tf›{§!º?¿ìôåä‡Oy¹f»P†6ßyY~iN™ù/+Úû2¬e+2oÔÎí5vîÛ&‡Æ¤õ… ÄÉû¹®Ž‹¡‰ù¨à£\!×iD¤DìW¯`î?[´ýñd.²ê¨EDŒ&u‚Ü<ĽN„ç7­Å§•OåŠïš°ðàËA‚¼ã[ù{4z(2’ûŽlà0JWE`ºozê‹_˜œ=ˆÛywßMkÞöu«0[? 
‰<öM8+Û˜NK+F.6¹é¸ã…!éüü@ýØ×äP€J[ÁW¶K6ÍË50<¼0scµ3_<óòÔfsdvÐÃÆË&y±úsYïÌ*ïïå8_©­M ¦~-hÿÁí7÷¯UllC‹€4RþcPe,=Ñwqíùl,¾ ™îê‘m²ÖT öòP™m - ’mÒÞIÿÞûÄ¿BTÕ*u»[=å&ßäŸép¥´öŸI<.(K{1ønñ· Q“‰Äàê%Z:I©Ií¨ÿ¤YµyÒ¹ûȇ$´,dR+]N°¹ºÏçé$ ù\òÛ\²&)Ïvæ …„º=ŒÊȞȟ*¼=û\ÿsžNÏãœèûNl¬ôkûÂܧC|;w¬§%» UGµÄÐZEhû‘úÙ¢Þd?ñ‘¯÷Ò*ž§Þ¼FO<%Vg¢RUd¢yšySú­®¥øúq|J”“û;ñÛ‰ •4pƒ[× D¹íYèDi$¤fŸÛ;G~Ÿ¡#ãˆÖ 1 rH³õ y˜˜²úüÎî\Æ“žÕ O6<½ o”¶iŠÃqÌl=±M~Þè<ÿ’oN+³Šo…"[èÁ(¯=ïõ‡õòðâä§áƲGæ“NŒ@­ôÀ–ðp’³ ¥¨0¦¦h›‚¯(JÚÝ7~]ÇÃíÙumi I±ð|FÃ+l8mHgCÎ P˜W kÛÐÑÒŸÒ± ¸M–¸`F |èâ<ºU‰€Ñp0ö6–Õu¼¤ƒ‰’$®»k¹q‰»¼¨{É„ÓÐ߉ÓY"¡A°º½ˆ{ɧ`n_ãu*¯_˜ ™X»7w%m§æ O6c§U¸º7‰’l¸Öãåpþ–ó¥ÂÑÈUŠpb¤Ä½LÕ$É(üžmÑKZ¡›”ög –_^@bÍ&¸Ž¸BÖè˜öxYÕ•¯B‡@:L4=ã "ðGˉ“ °8u 6>5I Be¾þƒ2쟳SëbMI`HZ1Z1~;Z„YFY N@»cmœACcsOÎ.¶~IÌÆÏ7îQìÖ“«Ÿ£o2ðì²~¤ç…ÇEE‹E;òO_­l­m6mwµl vf Í›û¿ª¨™hŒ‰ýѵt#sVOÚˆ‰YŽ’GAtú$¦£¶ŸÍðônrϳàÁú¾Ý%ΰéÝЀÀçÑüWÍѤôŠ}e0¬¡û™§67 ++133+3Ëgv¥hyFL7Õ'·ˆKô{™›’h8RŒÝÎLŒ›†î–_Ú&;›<‡ëÔG¤9­Y¹cì3wv¶Jv~?“ìŽó~ù}i\^eí\ƒ…ës®?÷üÁ³x¹&jãùÕLÈŠ%±Äj†ˆ×Y9Í=£ƒ ¡ÎŸÞŸœˆ7^Sh“¦ Å a©{Žú,åC ¢BëëëS\ÒŠJõ³{I5Çý¹²·®¾ç/jX­rˆ×_ñ¡PÃ’ È}èèÔ8EXH÷×C¾cNüæHS À(&''Jùû3D% * 2 77-ÿݪ˜l©72¡µnÈÏkâlÀB`P‰ÊÌŽt»Ñ—–Ëiþ¬0Æ·çnkÑü­s˲Öwïf­Ïí?9½ÿ>×^#‚þ&¢¹t£á]ÅjÊÛI˜¹¡.J(4lØ/pæ$ÊùìÕ%Ïåú,MÞÅ¡ÏO˺êcÚâæêô€Aâ]ŒIÝÜÝ1Çl·l—:^IS{))*¹$âÄ&ÃèÛ5Wg¸%àN§Ï°Óa•ñ–7«ýŸ@x”XÑp6n¼† i¯÷ íO¢ù¹ üWëB¡IäGäDŠ{.á7ß$AtWh¾×~­Gø³¢æLÛòîî«9£çÆGßœ÷L»\34µå‘Àc(µÙ»Á&ö!cÛ¤×íK) ÆJJEUàsbKcèS#ýÁÒ¡ØeÄÍmD+@+9ÔW¾/Þú}ïèK~fE+áчú”:±DÅäÃ$â„ÆÉiP‰Ø‹ fÉ™a§[c.$ãÕ功:ëf-ëæá&«”\Ñk›=™\p·+Éz‚ÑÁ6Ä[O8÷H ¹kŽÑÃòÑI°Ð±<ºTxá:†öÜ­’âßÚyPL¿ö¢H·‘#iÙìì0Tô“C‚ÆÆÆGâåA ‡¡ªŸëw’  ìàfÍ;A~žƒÎâ=b-ÝL¬Ë/[Dyžz‘ÿšÖ}º±×VçðŒ³ßNaPòÈÖ—“‘_¯–”¸Â×Ãs^—ùÂ’˜9¥E2J{Ó‚Üä=‚¬û’;æZ˵â~t+Ýò#V¸úÔÜ.Ý]XˆêJkÆÈöÐØ3 J>„„šAôìÆ;¤¶pÃá‡j…ûÈŸÝÈÏ¢ˆÄomNÔºæöö‰bĨÄCw; FÀf‡’Àәъ!DÙ×fiļY›y:îºÞ2[Tû¬¼J¶lj¹g'ùmžCQSS“‚sLýy§Ê¢‚‹·&Ê ì2H¼¶N¿hÎóΛîäâä Ç"œ© eÕ›•—@¿X­>ßø©òZ¼ýsýª?Qb†ä^ª©­Í¸Â¬†‘0lÖ*OÄ ÷&¥LEI ¢Ïа† Òî9?pJ·c“øÙôþœYqþ ­çCÌ^i?ižèogƒ!Èȸ‘$ìHnZ2 ŒM Ï&6IË=µ?_n¸9³ðôüPWÞúÓ?7ÛRwóôt3ÄBNJ Ÿ¤…9ôå[ÂP­Wœø€ çh®á¸ZMªˆ£az“?X¯å2˜åVåéŸ? ³sØúUgÏ|Ò¬rTöä,äˬBP=)¬a¾3ê4xeé"‚ŽTN›ÔÿÕðqI:?ê÷„êLSõ¶‘#'AòÓr ÙÙ‰‰ÉÈØøapv6362¿=Þ¤ÁØ‘üÜð¥Âý†¨ÏøE?MÕ‰™È ´-ŸŸ1ÃKŒØ,–ƒŠÆ Ù?Üa ½ñ½TŸ"Â)ƒ00ÂòÚPR*©K-O=K Ñi.ÍjtѪÊ*¸ï·³¡_ÛyÍ€½ñ³e¨øÙÁÊôé¥3üÂ,]÷\‚{aÌêrêÌähÿÓ·œJ'ÅÈ$˜ÿJs&¢9 1š …F d¢H8^íÙÈPüHv2„;ÍúÓ-ñW†æbÝõ~Å\:í.^-zÎ,¥‹!`n -Â*Êö½ðAöÄâ ·÷äx¹I)„@K¥Ç¤(o<[íh.¼¹ù¦¹z÷êXÅ‚˜Ûðæ'kv4»užµ7²ö³gÎ\t_«Ê–Ý™U0´G"Å.:Gû'NŸ¡e'!Uúkí²øÝ%‚5¥“ ¡#Æçð04 œ' †± 0$ EFLkü mlô{j0É'£O>!ÖÁËqdÞÝ\Á€BKÑ¡0aÞ«˺@SÇð}ý€aΣ¬ðHóØp2éªæôéW­ÚnÜzeH ë'Ñ–c‰'c{0ë’|ûnw¡–xéððíÍþšžÛJŸž,sÜîrйóáe47¿»Ó¤=»Ã_o`fVÐÆJqðá9x-¡ã  EFÆB€$CrÃÑÄ,4r§"‹æÜ•éQ—¹\ecËîù6™ÛÃÈ`ÄÂÄparVR²i¬õFzi&X–GWž ’VRÂ…™ÛÆ=»²ß{©imšÍGä<1ˆàÎl&V¡xpk60kíªkdÔ4ÛÒ‡ñw.´^mšùpÚéÁe/Ù  ëЫþ´4ì(’Æó_ubyÓë j«‘ó‰Òáw ÅG“!ˆ PÄx+g‡£XPÜH’ɬÎýQn1…]ËzY“{q™îf‰æH4šã#lÊ!ªK𹜉ð:âÊûM‡»9]¸Þ‡ßº©Ù& ‰,Ù¼~˜¶³¿16ºâ?í³\ÄK¡ýá4ÂÔÞùÁžç“«¶ј$Sš…+ówÏëT5Gµ,¿z6”58ÑDŒf‡!H4þFK¹•4µ"¥6• f%aA³ ˆadÄ(8 ïUð¿E‹±¡ r,žu+Øñ³ež*vqà:Ja'¨äfƒ£D%ÉÕ¨$D}ÌlŠ®E}ÀëHƒu@ZÂç·ßðÊ^ñ&`èt“6v–Œ€—±z$&·ee>ßP"8{¨¢í#DfDs?x¹vÁ¯Þ¢…Œ’œc!·/´#<$£âùÄ㢧N£9Ë+/'iØ„5„ÿv“?· /'©©¨¨‹¨0 7^,h¼|ð…B#ÙHnç¾y2“XI|/ÀEa 'C²‘³Ðѱà] Þ»¿Š_[ð?£pÕbB"DZå%‰V#I›þ'""o% %JØC€KîÊÒw]EóáBÌþ]ùÒ‰÷ÑxS,Í +½ç~PðÎJ)ITc©!É÷.š]X#*,ZI#yø¸ÕsçU©gPÊjfÎí‚»ÎÎ{+%t|ÚD€œõSæ¡©c“¤„¥0>QÄw5¡ÃÓ|¢Äh˜°ÿz¯ûRɤÁ]ßYÝÃX …©¤ºàÏ{×JÕ›GÕ(LO@SºÐLóúÝñ«™åŸ /E›Cá}e²:oy5²Œp­(ì¢s!“VˆÙœÜ(ŒîÃ`EH9…í/^xuñÌâòšÓñ­ë {Ïo½Ÿqj03U·‘¡æeüÇ÷ˆ””lpLTTjää¬R$ >>:*ÄõÉÅE¶Êh;¾õZN£ëëöhrÊ_%ÎèíMí­G!ÔÞ–€åj=öΦ…œ|YcË×@Ïþ¨,C,µò_!I…)E¡¤0J„¨ ž¶ 07 §€)‰ùzÿÕŠ?7-ܼ¤²aã5^ôþ|IºB7SÿñJ+z!^|b Ne*)IN.ENNnŠ!'ÇP Oœc©^"A°˜&r¨P I´)~¿1U©½qŠ*L^H‰I©*šx¿¨QâN.¬Ø¦ÍËÁ§&LAtø’ˆ6 ®¤I¥ GSS§´Qb¥%§ãf' £½¿wºš¶:ÚÜ>š›½á¾š›F-†3⟛ ÕTö–6jÓã Å2³J‘«aÕX}Ô¨$I1¥›‰~&RF¯þÃ6D ÔF"FŠÌ€l†±ÜîaÑÅQ¦Ç#džN)gD$#ÀÏ ,y=-D탤âLb¢¤ðs#‰áb¹Éª[.<¾BK“¾”˜h€÷ä$If !ÂzV—(•Aš:B—B@D›ŠUC]]T= Ë´6,"©F¥. 
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/doc/sphinx/conf.py0000644104717000001440000002043014505632065017150 0ustar00sthiellusers# -*- coding: utf-8 -*- # # clustershell documentation build configuration file, created by # sphinx-quickstart on Mon Jul 13 20:46:35 2015. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys, os # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath('../../lib')) # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx_rtd_theme'] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'clustershell' copyright = u'2023, Stephane Thiell' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = '1.9.2' # The full version, including alpha/beta/rc tags. release = '1.9.2' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. 
#show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. #html_theme = 'default' html_theme = 'sphinx_rtd_theme' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # "<project> v<release> documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. html_logo = '_static/clustershell-nautilus-logo200.png' # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] def setup(app): if 'READTHEDOCS' in os.environ: # RTD does not line wrap CSV tables, so we override this behavior. app.add_stylesheet("theme_overrides.css") # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # Better to disable this to prevent alteration of command-line arguments # Sphinx <1.6 use `html_use_smartypants` and >=1.6 use `smartquotes` from sphinx import __version__ as sphinx_version sphinx_version_parts = [int(i) for i in sphinx_version.split('.')] if sphinx_version_parts[0] <= 1 and sphinx_version_parts[1] < 6: html_use_smartypants = False else: smartquotes = False # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a <link> tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. 
htmlhelp_basename = 'clustershelldoc' # -- Options for LaTeX output -------------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'clustershell.tex', u'ClusterShell Documentation', u'Stephane Thiell', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'clustershell', u'ClusterShell Documentation', [u'Stephane Thiell'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------------ # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'clustershell', u'ClusterShell Documentation', u'Stephane Thiell', 'clustershell', 'Manage node sets, node groups and execute commands on cluster', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/doc/sphinx/config.rst0000644104717000001440000007004414505632065017656 0ustar00sthiellusersConfiguration ============= .. highlight:: ini clush ----- .. _clush-config: clush.conf ^^^^^^^^^^ The following configuration file defines system-wide default values for several ``clush`` tool parameters:: /etc/clustershell/clush.conf ``clush`` settings might then be overridden (globally, or per user) if one of the following files is found, in priority order:: $XDG_CONFIG_HOME/clustershell/clush.conf $HOME/.config/clustershell/clush.conf (only if $XDG_CONFIG_HOME is not defined) {sys.prefix}/etc/clustershell/clush.conf $HOME/.local/etc/clustershell/clush.conf $HOME/.clush.conf (deprecated, for 1.6 compatibility only) .. note:: The path using `sys.prefix`_ was added in version 1.9.1 and is useful for Python virtual environments. In addition, if the environment variable ``$CLUSTERSHELL_CFGDIR`` is defined and valid, it will be used instead. In that case, the following configuration file will be tried first for ``clush``:: $CLUSTERSHELL_CFGDIR/clush.conf The following table describes available ``clush`` config file settings. 
+-----------------+----------------------------------------------------+ | Key | Value | +=================+====================================================+ | fanout | Size of the sliding window of connectors (eg. max | | | number of *ssh(1)* allowed to run at the same | | | time). | +-----------------+----------------------------------------------------+ | confdir | Optional list of directory paths where ``clush`` | | | should look for **.conf** files which define | | | :ref:`run modes <clushmode-config>` that can then | | | be activated with `--mode`. All other ``clush`` | | | config file settings defined in this table might | | | be overridden in a run mode. Each mode section | | | should have a name prefixed by "mode:" to clearly | | | identify a section defining a mode. Duplicate | | | modes are not allowed in those files. | | | Configuration files that are not readable by the | | | current user are ignored. The variable `$CFGDIR` | | | is replaced by the path of the highest priority | | | configuration directory found (where *clush.conf* | | | resides). The default *confdir* value enables both | | | system-wide and any installed user configuration | | | (thanks to `$CFGDIR`). Duplicate directory paths | | | are ignored. | +-----------------+----------------------------------------------------+ | connect_timeout | Timeout in seconds to allow a connection to | | | establish. This parameter is passed to *ssh(1)*. | | | If set to 0, no timeout occurs. | +-----------------+----------------------------------------------------+ | command_prefix | Command prefix. Generally used for specific | | | :ref:`run modes <clushmode-config>`, for example | | | to implement *sudo(8)* support. | +-----------------+----------------------------------------------------+ | command_timeout | Timeout in seconds to allow a command to complete | | | since the connection has been established. This | | | parameter is passed to *ssh(1)*. In addition, the | | | ClusterShell library ensures that any commands | | | complete in less than (connect_timeout \+ | | | command_timeout). If set to 0, no timeout occurs. | +-----------------+----------------------------------------------------+ | color | Whether to use ANSI colors to surround node | | | or nodeset prefix/header with escape sequences to | | | display them in color on the terminal. Valid | | | arguments are *never*, *always* or *auto* (which | | | use color if standard output/error refer to a | | | terminal). | | | Colors are set to ``[34m`` (blue foreground text) | | | for stdout and ``[31m`` (red foreground text) for | | | stderr, and cannot be modified. | +-----------------+----------------------------------------------------+ | fd_max | Maximum number of open file descriptors | | | permitted per ``clush`` process (soft resource | | | limit for open files). This limit can never exceed | | | the system (hard) limit. The *fd_max* (soft) and | | | system (hard) limits should be high enough to | | | run ``clush``, although their values depend on | | | your fanout value. | +-----------------+----------------------------------------------------+ | history_size | Set the maximum number of history entries saved in | | | the GNU readline history list. Negative values | | | imply unlimited history file size. | +-----------------+----------------------------------------------------+ | node_count | Should ``clush`` display additional (node count) | | | information in buffer header? 
(yes/no) | +-----------------+----------------------------------------------------+ | maxrc | Should ``clush`` return the largest of command | | | return codes? (yes/no) | | | If set to no (the default), ``clush`` exit status | | | gives no information about command return codes, | | | but rather reports on ``clush`` execution itself | | | (zero indicating a successful run). | +-----------------+----------------------------------------------------+ | password_prompt | Enable password prompt and password forwarding to | | | stdin? (yes/no) | | | Generally used for specific | | | :ref:`run modes <clushmode-config>`, for example | | | to implement interactive *sudo(8)* support. | +-----------------+----------------------------------------------------+ | verbosity | Set the verbosity level: 0 (quiet), 1 (default), | | | 2 (verbose) or more (debug). | +-----------------+----------------------------------------------------+ | ssh_user | Set the *ssh(1)* user to use for remote connection | | | (default is to not specify). | +-----------------+----------------------------------------------------+ | ssh_path | Set the *ssh(1)* binary path to use for remote | | | connection (default is *ssh*). | +-----------------+----------------------------------------------------+ | ssh_options | Set additional (raw) options to pass to the | | | underlying *ssh(1)* command. | +-----------------+----------------------------------------------------+ | scp_path | Set the *scp(1)* binary path to use for remote | | | copy (default is *scp*). | +-----------------+----------------------------------------------------+ | scp_options | Set additional options to pass to the underlying | | | *scp(1)* command. If not specified, *ssh_options* | | | are used instead. | +-----------------+----------------------------------------------------+ | rsh_path | Set the *rsh(1)* binary path to use for remote | | | connection (default is *rsh*). You could easily | | | use *mrsh* or *krsh* by simply changing this | | | value. | +-----------------+----------------------------------------------------+ | rcp_path | Same as *rsh_path* but for rcp command (default is | | | *rcp*). | +-----------------+----------------------------------------------------+ | rsh_options | Set additional options to pass to the underlying | | | rsh/rcp command. | +-----------------+----------------------------------------------------+ .. _clushmode-config: Run modes ^^^^^^^^^ Since version 1.9, ``clush`` has support for run modes, which are special :ref:`clush-config` settings with a given name. Two run modes are provided in example configuration files that can be copied and modified. They implement password-based authentication with *sshpass(1)* and support of interactive *sudo(8)* with password. To use a run mode with ``clush --mode``, install a configuration file in one of :ref:`clush-config`'s ``confdir`` (usually ``clush.conf.d``). Only configuration files ending in **.conf** are scanned. If the user running ``clush`` doesn't have read access to a configuration file, it is ignored. When ``--mode`` is specified, you can display all available run modes for the current user by enabling debug mode (``-d``). Example of a run mode configuration file (eg. ``/etc/clustershell/clush.conf.d/sudo.conf``) to add support for interactive sudo:: [mode:sudo] password_prompt: yes command_prefix: /usr/bin/sudo -S -p "''" System administrators or users can easily create additional run modes by adding configuration files to :ref:`clush-config`'s ``confdir``. 
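For instance, a password-based ssh run mode could be defined as follows (a minimal sketch only, assuming *sshpass(1)* is installed; the *sshpass.conf.example* file shipped with ClusterShell may differ in its details)::

    [mode:sshpass]
    password_prompt: yes
    ssh_path: /usr/bin/sshpass -d0 /usr/bin/ssh -oBatchMode=no
    scp_path: /usr/bin/sshpass -d0 /usr/bin/scp

With *password_prompt* enabled, ``clush`` asks for the password once and forwards it to the standard input of each spawned command, where ``sshpass -d0`` reads it from file descriptor 0.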
More details about using run modes can be found :ref:`here `. .. _groups-config: Node groups ----------- ClusterShell defines a *node group* syntax to represent a collection of nodes. This is a convenient way to manipulate node sets, especially in HPC (High Performance Computing) or with large server farms. This section explains how to configure node group **sources**. Please see also :ref:`nodeset node groups ` for specific usage examples. .. _groups_config_conf: groups.conf ^^^^^^^^^^^ ClusterShell loads *groups.conf* configuration files that define how to obtain node groups configuration, ie. the way the library should access file-based or external node group **sources**. The following configuration file defines system-wide default values for *groups.conf*:: /etc/clustershell/groups.conf *groups.conf* settings might then be overridden (globally, or per user) if one of the following files is found, in priority order:: $XDG_CONFIG_HOME/clustershell/groups.conf $HOME/.config/clustershell/groups.conf (only if $XDG_CONFIG_HOME is not defined) {sys.prefix}/etc/clustershell/groups.conf $HOME/.local/etc/clustershell/groups.conf .. note:: The path using `sys.prefix`_ was added in version 1.9.1 and is useful for Python virtual environments. In addition, if the environment variable ``$CLUSTERSHELL_CFGDIR`` is defined and valid, it will be used instead. In that case, the following configuration file will be tried first for *groups.conf*:: $CLUSTERSHELL_CFGDIR/groups.conf This makes it possible for a user to have their own *node groups* configuration. If no readable configuration file is found, group support will be disabled but other node set operations will still work. *groups.conf* defines configuration sub-directories, but may also define source definitions by itself. These **sources** provide external calls that are detailed in :ref:`group-external-sources`. The following example shows the content of a *groups.conf* file where node groups are bound to the source named *genders* by default:: [Main] default: genders confdir: /etc/clustershell/groups.conf.d $CFGDIR/groups.conf.d autodir: /etc/clustershell/groups.d $CFGDIR/groups.d [genders] map: nodeattr -n $GROUP all: nodeattr -n ALL list: nodeattr -l [slurm] map: sinfo -h -o "%N" -p $GROUP all: sinfo -h -o "%N" list: sinfo -h -o "%P" reverse: sinfo -h -N -o "%P" -n $NODE The *groups.conf* files are parsed with Python's `ConfigParser`_: * The first section whose name is *Main* accepts the following keywords: * *default* defines a **default node group source** (eg. by referencing a valid section header) * *confdir* defines an optional list of directory paths where the ClusterShell library should look for **.conf** files which define group sources to use. Each file in these directories with the .conf suffix should contain one or more node group source sections as documented below. These will be merged with the group sources defined in the main *groups.conf* to form the complete set of group sources to use. Duplicate group source sections are not allowed in those files. Configuration files that are not readable by the current user are ignored (except the one that defines the default group source). The variable `$CFGDIR` is replaced by the path of the highest priority configuration directory found (where *groups.conf* resides). The default *confdir* value enables both system-wide and any installed user configuration (thanks to `$CFGDIR`). Duplicate directory paths are ignored. 
* *autodir* defines an optional list of directories where the ClusterShell library should look for **.yaml** files that define in-file group dictionaries. No need to call external commands for these files: they are parsed by the ClusterShell library itself. Multiple group source definitions in the same file are supported. The variable `$CFGDIR` is replaced by the path of the highest priority configuration directory found (where *groups.conf* resides). The default *autodir* value enables both system-wide and any installed user configuration (thanks to `$CFGDIR`). Duplicate directory paths are ignored. * Each following section (`genders`, `slurm`) defines a group source. The map, all, list and reverse upcalls are explained below in :ref:`group-sources-upcalls`. .. _group-file-based: File-based group sources ^^^^^^^^^^^^^^^^^^^^^^^^ Version 1.7 introduces support for native handling of flat files with different group sources to avoid the use of external upcalls for such static configuration. This can be achieved through the *autodir* feature and YAML files described below. YAML group files """""""""""""""" Cluster node groups can be defined in straightforward YAML files. In such a file, each YAML dictionary defines a group-to-nodes mapping. **Different dictionaries** are handled as **different group sources**. For compatibility reasons with previous versions of ClusterShell, this is not the default way to define node groups yet. So here are the steps needed to try this out: Rename the following file:: /etc/clustershell/groups.d/cluster.yaml.example to a file having the **.yaml** extension, for example:: /etc/clustershell/groups.d/cluster.yaml Ensure that *autodir* is set in :ref:`groups_config_conf`:: autodir: /etc/clustershell/groups.d $CFGDIR/groups.d In the following example, we also changed the default group source to **roles** in :ref:`groups_config_conf` (the first dictionary defined in the example), so that *@roles:groupname* can just be shortened to *@groupname*. .. highlight:: yaml Here is an example of **/etc/clustershell/groups.d/cluster.yaml**:: roles: adm: 'mgmt[1-2]' # define groups @roles:adm and @adm login: 'login[1-2]' compute: 'node[0001-0288]' gpu: 'node[0001-0008]' servers: # example of yaml list syntax for nodes - 'server001' # in a group - 'server002,server101' - 'server[003-006]' cpu_only: '@compute!@gpu' # example of inline set operation # define group @cpu_only with node[0009-0288] storage: '@lustre:mds,@lustre:oss' # example of external source reference all: '@login,@compute,@storage' # special group used for clush/nodeset -a # only needed if not including all groups lustre: mds: 'mds[1-4]' oss: 'oss[0-15]' rbh: 'rbh[1-2]' If you wish to define an empty group (with no nodes), you can either use an empty string ``''`` or any valid YAML null value (``null`` or ``~``). .. highlight:: console Testing the syntax of your group file can be quickly performed through the ``-L`` or ``--list-all`` command of :ref:`nodeset-tool`:: $ nodeset -LL @adm mgmt[1-2] @all login[1-2],mds[1-4],node[0001-0288],oss[0-15],rbh[1-2] @compute node[0001-0288] @cpu_only node[0009-0288] @gpu node[0001-0008] @login login[1-2] @storage mds[1-4],oss[0-15],rbh[1-2] @lustre:mds mds[1-4] @lustre:oss oss[0-15] @lustre:rbh rbh[1-2] .. _group-external-sources: External group sources ^^^^^^^^^^^^^^^^^^^^^^ .. 
.. _group-sources-upcalls:

Group source upcalls
""""""""""""""""""""

Each node group source is defined by a section name (*source* name) and up
to four upcalls:

* **map**: External shell command used to resolve a group name into a node
  set, list of nodes or list of node sets (separated by space characters or
  by carriage returns). The variable *$GROUP* is replaced before executing
  the command.

* **all**: Optional external shell command that should return a node set,
  list of nodes or list of node sets of all nodes for this group source. If
  not specified, the library will try to resolve all nodes by using the
  **list** external command in the same group source followed by **map** for
  each available group. The notion of *all nodes* is used by ``clush -a``
  and also by the special group name ``@*`` (or ``@source:*``).

* **list**: Optional external shell command that should return the list of
  all groups for this group source (separated by space characters or by
  carriage returns). If this upcall is not specified, ClusterShell won't be
  able to list any available groups (eg. with ``nodeset -l``), so it is
  highly recommended to set it.

* **reverse**: Optional external shell command used to find the group(s) of
  a single node. The variable *$NODE* is replaced before executing the
  command. If this external call is not specified, the reverse operation is
  computed in memory by the library from the **list** and **map** external
  calls, if available. Also, if the number of nodes to reverse is greater
  than the number of available groups, the reverse external command is
  avoided automatically to reduce resolution time.

In addition to the context-dependent *$GROUP* and *$NODE* variables
described above, the two following variables are always available and also
replaced before executing shell commands:

* *$CFGDIR* is replaced by the *groups.conf* base directory path

* *$SOURCE* is replaced by the current source name (see a usage example just
  below)

.. _group-external-caching:

Caching considerations
""""""""""""""""""""""

External command results are cached in memory, for a limited amount of time,
to avoid multiple similar calls. The optional parameter **cache_time**, when
specified within a group source section, defines the number of seconds each
upcall result is kept in cache, in memory only. Please note that caching is
actually only useful for long-running programs (like daemons) that are using
node groups, not for one-shot commands like :ref:`clush ` or
:ref:`cluset `/:ref:`nodeset `.

The default value of **cache_time** is 3600 seconds.

Multiple sources section
""""""""""""""""""""""""

.. highlight:: ini

Use a comma-separated list of source names in the section header if you want
to define multiple group sources with similar upcall commands.
The special variable `$SOURCE` is always replaced by the source name before
command execution (here `cluster`, `racks` and `cpu`), for example::

    [cluster,racks,cpu]
    map: get_nodes_from_source.sh $SOURCE $GROUP
    all: get_all_nodes_from_source.sh $SOURCE
    list: list_nodes_from_source.sh $SOURCE

is equivalent to::

    [cluster]
    map: get_nodes_from_source.sh cluster $GROUP
    all: get_all_nodes_from_source.sh cluster
    list: list_nodes_from_source.sh cluster

    [racks]
    map: get_nodes_from_source.sh racks $GROUP
    all: get_all_nodes_from_source.sh racks
    list: list_nodes_from_source.sh racks

    [cpu]
    map: get_nodes_from_source.sh cpu $GROUP
    all: get_all_nodes_from_source.sh cpu
    list: list_nodes_from_source.sh cpu

Return code of external calls
"""""""""""""""""""""""""""""

Each external command might return a non-zero return code when the operation
is not doable. But if the call returns zero for a non-existing group, for
instance, the user will not receive any error when trying to resolve such an
unknown group. The desired behavior is up to the system administrator.

.. _group-slurm-bindings:

Slurm group bindings
""""""""""""""""""""

Enable Slurm node group bindings by renaming the example configuration file
usually installed as ``/etc/clustershell/groups.conf.d/slurm.conf.example``
to ``slurm.conf``. Three group sources are defined in this file and are
detailed below. Each section comes with a long and a short name (for
convenience), but actually defines the same group source.

While the examples below are based on the :ref:`nodeset-tool` tool, all
Python tools using ClusterShell and the :class:`.NodeSet` class will
automatically benefit from these additional node groups.

.. highlight:: ini

The first section **slurmpart,sp** defines a group source based on Slurm
partitions. Each group is named after the partition name and contains the
partition's nodes::

    [slurmpart,sp]
    map: sinfo -h -o "%N" -p $GROUP
    all: sinfo -h -o "%N"
    list: sinfo -h -o "%R"
    reverse: sinfo -h -N -o "%R" -n $NODE

.. highlight:: console

Example of use with :ref:`nodeset ` on a cluster having two Slurm
partitions named *kepler* and *pascal*::

    $ nodeset -s sp -ll
    @sp:kepler cluster-[0001-0065]
    @sp:pascal cluster-[0066-0068]

.. highlight:: ini

The second section **slurmstate,st** defines a group source based on Slurm
node states. Each group is based on a different state name and contains the
nodes currently in that state::

    [slurmstate,st]
    map: sinfo -h -o "%N" -t $GROUP
    all: sinfo -h -o "%N"
    list: sinfo -h -o "%T" | tr -d '*~#$@+'
    reverse: sinfo -h -N -o "%T" -n $NODE | tr -d '*~#$@+'
    cache_time: 60

Here, :ref:`cache_time ` is set to 60 seconds instead
of the default (3600s) to avoid caching results in memory for too long, in
case of state change (this is only useful for long-running processes, not
one-shot commands).

.. highlight:: console

Example of use with :ref:`nodeset ` to get the current nodes that
are in the Slurm state *drained*::

    $ nodeset -f @st:drained
    cluster-[0058,0067]

.. highlight:: ini

The third section **slurmjob,sj** defines a group source based on Slurm
jobs. Each group is based on a running job ID and contains the nodes
currently allocated for this job::

    [slurmjob,sj]
    map: squeue -h -j $GROUP -o "%N"
    list: squeue -h -o "%i" -t R
    reverse: squeue -h -w $NODE -o "%i"
    cache_time: 60

The fourth section **slurmuser,su** defines a group source based on Slurm
users.
Each group is based on a username and contains the nodes currently allocated
to jobs belonging to that username::

    [slurmuser,su]
    map: squeue -h -u $GROUP -o "%N" -t R
    list: squeue -h -o "%u" -t R
    reverse: squeue -h -w $NODE -o "%u"
    cache_time: 60

Example of use with :ref:`clush ` to execute a command on all
nodes with running jobs of a given username::

    $ clush -bw@su:username 'df -Ph /scratch'
    $ clush -bw@su:username 'du -s /scratch/username'

:ref:`cache_time ` is also set to 60 seconds instead
of the default (3600s) to avoid caching results in memory for too long,
because this group source is likely very dynamic (this is only useful for
long-running processes, not one-shot commands).

.. highlight:: console

You can then easily find nodes associated with a Slurm job ID::

    $ nodeset -f @sj:686518
    cluster-[0003,0005,0010,0012,0015,0017,0021,0055]

.. _group-xcat-bindings:

xCAT group bindings
"""""""""""""""""""

Enable xCAT node group bindings by renaming the example configuration file
usually installed as ``/etc/clustershell/groups.conf.d/xcat.conf.example``
to ``xcat.conf``. A single group source is defined in this file and is
detailed below.

.. warning:: xCAT installs its own `nodeset`_ command which usually takes
   precedence over ClusterShell's :ref:`nodeset-tool` command. In that case,
   simply use :ref:`cluset ` instead.

While the examples below are based on the :ref:`cluset-tool` tool, all
Python tools using ClusterShell and the :class:`.NodeSet` class will
automatically benefit from these additional node groups.

.. highlight:: ini

The section **xcat** defines a group source based on xCAT static node
groups::

    [xcat]

    # list the nodes in the specified node group
    map: lsdef -s -t node $GROUP | cut -d' ' -f1

    # list all the nodes defined in the xCAT tables
    all: lsdef -s -t node | cut -d' ' -f1

    # list all groups
    list: lsdef -t group | cut -d' ' -f1

.. highlight:: console

Example of use with :ref:`cluset-tool`::

    $ lsdef -s -t node dtn
    sh-dtn01  (node)
    sh-dtn02  (node)

    $ cluset -s xcat -f @dtn
    sh-dtn[01-02]

.. highlight:: text

.. _defaults-config:

Library Defaults
----------------

.. warning:: Modifying library defaults is for advanced users only, as it
   could change the behavior of tools using ClusterShell. Moreover, tools
   are free to enforce their own defaults, so changing library defaults may
   not change a global behavior as expected.

Since version 1.7, most defaults of the ClusterShell library may be
overridden in *defaults.conf*.

The following configuration file defines ClusterShell system-wide defaults::

    /etc/clustershell/defaults.conf

*defaults.conf* settings might then be overridden (globally, or per user) if
one of the following files is found, in priority order::

    $XDG_CONFIG_HOME/clustershell/defaults.conf
    $HOME/.config/clustershell/defaults.conf (only if $XDG_CONFIG_HOME is not defined)
    {sys.prefix}/etc/clustershell/defaults.conf
    $HOME/.local/etc/clustershell/defaults.conf

In addition, if the environment variable ``$CLUSTERSHELL_CFGDIR`` is defined
and valid, it will be used instead. In that case, the following
configuration file will be tried first for ClusterShell defaults::

    $CLUSTERSHELL_CFGDIR/defaults.conf

Use case: rsh
^^^^^^^^^^^^^

If your cluster uses a rsh variant like ``mrsh`` or ``krsh``, you may want
to change it in the library defaults. An example file is usually available
in ``/usr/share/doc/clustershell-*/examples/defaults.conf-rsh`` and could be
copied to ``/etc/clustershell/defaults.conf`` or to an alternate path
described above.
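If you only need this change for a given Python application rather than
system-wide, a similar effect can be obtained at runtime through the library
API. This is a minimal sketch mirroring the call shown in the
:ref:`task-default-worker` section of the Programming Guide::

    from ClusterShell.Task import task_self
    from ClusterShell.Worker.Rsh import WorkerRsh

    # use the rsh-based worker class for remote commands instead of ssh
    task_self().set_default('distant_worker', WorkerRsh)

The system-wide configuration file change itself is described next.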
Basically, the change consists in defining an alternate distant worker by
Python module name as follows::

    [task.default]
    distant_workername: Rsh

.. _defaults-config-slurm:

Use case: Slurm
^^^^^^^^^^^^^^^

If your cluster naming scheme has multiple dimensions, as in ``node-93-02``,
we recommend disabling some nD folding when using Slurm, which is currently
unable to detect some multidimensional node indexes when they are not
explicitly enclosed within square brackets. To do so, set ``fold_axis`` to
-1 in the :ref:`defaults-config` so that nD folding is only computed on the
last axis (this seems to work best with Slurm)::

    [nodeset]
    fold_axis: -1

That way, node sets computed by ClusterShell tools can be passed to Slurm
without error.

.. _ConfigParser: http://docs.python.org/library/configparser.html
.. _nodeset: https://xcat-docs.readthedocs.io/en/stable/guides/admin-guides/references/man8/nodeset.8.html
.. _sys.prefix: https://docs.python.org/3/library/sys.html#sys.prefix

Going further
=============

.. highlight:: console

Running the test suite
----------------------

Get the latest :ref:`install-source` code first.

.. note:: "The intent of regression testing is to assure that in the process
   of fixing a defect no existing functionality has been broken.
   Non-regression testing is performed to test that an intentional change
   has had the desired effect." (from `Wikipedia`_)

The *tests* directory of the source archive (not the RPM) contains all
regression and non-regression tests. To run all tests with Python 2, use the
following commands::

    $ cd tests
    $ nosetests -sv --all-modules .

Or run all tests with Python 3 by using the following command instead::

    $ nosetests-3 -sv --all-modules .

Some tests assume that *ssh(1)* to localhost is allowed for the current
user. Some tests use *bc(1)*. And some tests need *pdsh(1)* installed.

Bug reports
-----------

We use `Github Issues`_ as the issue tracking system for the ClusterShell
development project. There, you can report bugs or suggestions after logging
in with your Github account.

.. _Wikipedia: https://en.wikipedia.org/wiki/Non-regression_testing
.. _Github Issues: https://github.com/cea-hpc/clustershell/issues

.. _prog-examples:

Programming Examples
====================

.. highlight:: python
.. _prog-example-seq:

Remote command example (sequential mode)
----------------------------------------

The following example shows how to send a command on some nodes, how to get
a specific buffer and how to get gathered buffers::

    from ClusterShell.Task import task_self

    task = task_self()
    task.run("/bin/uname -r", nodes="green[36-39,133]")

    print task.node_buffer("green37")

    for buf, nodes in task.iter_buffers():
        print nodes, buf

    if task.max_retcode() != 0:
        print "An error occurred (max rc = %s)" % task.max_retcode()

Result::

    2.6.32-431.el6.x86_64
    ['green37', 'green38', 'green36', 'green39'] 2.6.32-431.el6.x86_64
    ['green133'] 3.10.0-123.20.1.el7.x86_64

.. _prog-example-ev:

Remote command example with live output (event-based mode)
----------------------------------------------------------

The following example shows how to use the event-based programming model by
installing an EventHandler and listening for :meth:`.EventHandler.ev_read`
(we've got a line to read) and :meth:`.EventHandler.ev_hup` (one command has
just completed) events. The goal here is to print the standard output of
``uname -a`` commands during their execution and also to notify the user of
any erroneous return codes::

    from ClusterShell.Task import task_self
    from ClusterShell.Event import EventHandler

    class MyHandler(EventHandler):
        def ev_read(self, worker, node, sname, msg):
            print "%s: %s" % (node, msg)
        def ev_hup(self, worker, node, rc):
            if rc != 0:
                print "%s: returned with error code %s" % (node, rc)

    task = task_self()

    # Submit command, install event handler for this command and run task
    task.run("/bin/uname -a", nodes="fortoy[32-159]", handler=MyHandler())

.. _prog-example-script:

*check_nodes.py* example script
-------------------------------

The following script is available as an example in the source repository and
is usually packaged with ClusterShell::

    #!/usr/bin/python
    # check_nodes.py: ClusterShell simple example script.
    #
    # This script runs a simple command on remote nodes and report node
    # availability (basic health check) and also min/max boot dates.
    # It shows an example of use of Task, NodeSet and EventHandler objects.
    # Feel free to copy and modify it to fit your needs.
    #
    # Usage example: ./check_nodes.py -n node[1-99]

    import optparse
    from datetime import date, datetime
    import time

    from ClusterShell.Event import EventHandler
    from ClusterShell.NodeSet import NodeSet
    from ClusterShell.Task import task_self

    class CheckNodesResult(object):
        """Our result class"""
        def __init__(self):
            """Initialize result class"""
            self.nodes_ok = NodeSet()
            self.nodes_ko = NodeSet()
            self.min_boot_date = None
            self.max_boot_date = None

        def show(self):
            """Display results"""
            if self.nodes_ok:
                print "%s: OK (boot date: min %s, max %s)" % \
                    (self.nodes_ok, self.min_boot_date, self.max_boot_date)
            if self.nodes_ko:
                print "%s: FAILED" % self.nodes_ko

    class CheckNodesHandler(EventHandler):
        """Our ClusterShell EventHandler"""

        def __init__(self, result):
            """Initialize our event handler with a ref to our result object."""
            EventHandler.__init__(self)
            self.result = result

        def ev_read(self, worker, node, sname, msg):
            """Read event from remote nodes"""
            # this is an example to demonstrate remote result parsing
            bootime = " ".join(msg.strip().split()[2:])
            date_boot = None
            for fmt in ("%Y-%m-%d %H:%M",): # formats with year
                try:
                    # datetime.strptime() is Python2.5+, use old method instead
                    date_boot = datetime(*(time.strptime(bootime, fmt)[0:6]))
                except ValueError:
                    pass
            for fmt in ("%b %d %H:%M",): # formats without year
                try:
                    date_boot = datetime(date.today().year, \
                        *(time.strptime(bootime, fmt)[1:6]))
                except ValueError:
                    pass
            if date_boot:
                if not self.result.min_boot_date or \
                    self.result.min_boot_date > date_boot:
                    self.result.min_boot_date = date_boot
                if not self.result.max_boot_date or \
                    self.result.max_boot_date < date_boot:
                    self.result.max_boot_date = date_boot
                self.result.nodes_ok.add(node)
            else:
                self.result.nodes_ko.add(node)

        def ev_close(self, worker, timedout):
            """Worker has finished (command done on all nodes)"""
            if timedout:
                nodeset = NodeSet.fromlist(worker.iter_keys_timeout())
                self.result.nodes_ko.add(nodeset)
            self.result.show()

    def main():
        """ Main script function """
        # Initialize option parser
        parser = optparse.OptionParser()
        parser.add_option("-d", "--debug", action="store_true", dest="debug",
                          default=False, help="Enable debug mode")
        parser.add_option("-n", "--nodes", action="store", dest="nodes",
                          default="@all", help="Target nodes (default @all group)")
        parser.add_option("-f", "--fanout", action="store", dest="fanout",
                          default="128", help="Fanout window size (default 128)",
                          type=int)
        parser.add_option("-t", "--timeout", action="store", dest="timeout",
                          default="5", help="Timeout in seconds (default 5)",
                          type=float)
        options, _ = parser.parse_args()

        # Get current task (associated to main thread)
        task = task_self()
        nodes_target = NodeSet(options.nodes)
        task.set_info("fanout", options.fanout)

        if options.debug:
            print "nodeset : %s" % nodes_target
            task.set_info("debug", True)

        # Create ClusterShell event handler
        handler = CheckNodesHandler(CheckNodesResult())

        # Schedule remote command and run task (blocking call)
        task.run("who -b", nodes=nodes_target, handler=handler, \
            timeout=options.timeout)

    if __name__ == '__main__':
        main()

.. _prog-example-pp-sbatch:

Using NodeSet with Parallel Python Batch script using SLURM
-----------------------------------------------------------

The following example shows how to use the NodeSet class to expand the
``$SLURM_NODELIST`` environment variable in a Parallel Python batch script
launched by SLURM. This variable may contain folded node sets. If
ClusterShell is not available system-wide on your compute cluster, you need
to follow :ref:`install-pip-user` first.
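The expansion step itself is easy to try interactively. Below is a minimal
sketch using a made-up folded value standing in for ``$SLURM_NODELIST``::

    from ClusterShell.NodeSet import NodeSet

    # made-up folded node list, as typically found in $SLURM_NODELIST
    nodeset = NodeSet("cluster[001-004]")

    # iterating over a NodeSet yields individual node names, so casting
    # to list (or tuple) expands the folded set
    print(list(nodeset))
    # ['cluster001', 'cluster002', 'cluster003', 'cluster004']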
.. highlight:: bash

Example of SLURM ``pp.sbatch`` to submit using ``sbatch pp.sbatch``::

    #!/bin/bash
    #SBATCH -N 2
    #SBATCH --ntasks-per-node 1

    # run the servers
    srun ~/.local/bin/ppserver.py -w $SLURM_CPUS_PER_TASK -t 300 &
    sleep 10

    # launch the parallel processing
    python -u ./pp_jobs.py

.. highlight:: python

Example of a ``pp_jobs.py`` script::

    #!/usr/bin/env python

    import os, time
    import pp
    from ClusterShell.NodeSet import NodeSet

    # get the node list from Slurm
    nodeset = NodeSet(os.environ['SLURM_NODELIST'])

    # start the servers (ncpus=0 will make sure that none is started locally)
    # casting the nodeset to tuple/list will correctly expand $SLURM_NODELIST
    job_server = pp.Server(ncpus=0, ppservers=tuple(nodeset))

    # make sure the servers have enough time to start
    time.sleep(5)

    # test function to execute on the remote nodes
    def test_func():
        print os.uname()

    # start the jobs
    job_1 = job_server.submit(test_func, (), (), ("os",))
    job_2 = job_server.submit(test_func, (), (), ("os",))

    # retrieve the results
    print job_1()
    print job_2()

    # Cleanup
    job_server.print_stats()
    job_server.destroy()

Programming Guide
=================

This part provides programming information for using ClusterShell in Python
applications. It is divided into two sections: node sets handling and
cluster task management, in that order, because managing cluster tasks
requires some knowledge of how to deal with node sets. Each section also
describes the conceptual structures of ClusterShell and provides examples of
how to use them.

This part is intended for intermediate and advanced programmers who are
familiar with Python programming and basic concepts of high-performance
computing (HPC).

.. toctree::
    :maxdepth: 2

    nodesets
    rangesets
    taskmgnt
    examples

.. _guide-NodeSet:

Node sets handling
==================

.. highlight:: python

.. _class-NodeSet:

NodeSet class
-------------

:class:`.NodeSet` is a class to represent an ordered set of node names
(optionally indexed). It's a convenient way to deal with cluster nodes and
ease their administration. :class:`.NodeSet` is implemented with the help of
two other ClusterShell public classes, :class:`.RangeSet` and
:class:`.RangeSetND`, which implement methods to manage a set of numeric
ranges in one or multiple dimensions. :class:`.NodeSet`, :class:`.RangeSet`
and :class:`.RangeSetND` APIs match standard Python sets. A command-line
interface (:ref:`nodeset-tool`), which implements most of :class:`.NodeSet`
features, is also available.

Other classes of the ClusterShell library make use of the :class:`.NodeSet`
class when they need to deal with distant nodes.

Using NodeSet
^^^^^^^^^^^^^

If you are used to `Python sets`_, the :class:`.NodeSet` interface will be
easy for you to learn. The main conceptual difference is that
:class:`.NodeSet` iterators always provide ordered results (and also
:meth:`.NodeSet.__getitem__()` by index or slice is allowed).
Furthermore, :class:`.NodeSet` provides specific methods like
:meth:`.NodeSet.split()`, :meth:`.NodeSet.contiguous()` (see below), or
:meth:`.NodeSet.groups()`, :meth:`.NodeSet.regroup()` (these last two are
related to :ref:`class-NodeSet-groups`).

The following code snippet shows you a basic usage of the :class:`.NodeSet`
class::

    >>> from ClusterShell.NodeSet import NodeSet
    >>> nodeset = NodeSet()
    >>> nodeset.add("node7")
    >>> nodeset.add("node6")
    >>> print nodeset
    node[6-7]

The :class:`.NodeSet` class provides several object constructors::

    >>> print NodeSet("node[1-5]")
    node[1-5]
    >>> print NodeSet.fromlist(["node1", "node2", "node3"])
    node[1-3]
    >>> print NodeSet.fromlist(["node[1-5]", "node[6-10]"])
    node[1-10]
    >>> print NodeSet.fromlist(["clu-1-[1-4]", "clu-2-[1-4]"])
    clu-[1-2]-[1-4]

All corresponding Python sets operations are available, for example::

    >>> from ClusterShell.NodeSet import NodeSet
    >>> ns1 = NodeSet("node[10-42]")
    >>> ns2 = NodeSet("node[11-16,18-39]")
    >>> print ns1.difference(ns2)
    node[10,17,40-42]
    >>> print ns1 - ns2
    node[10,17,40-42]
    >>> ns3 = NodeSet("node[1-14,40-200]")
    >>> print ns3.intersection(ns1)
    node[10-14,40-42]

Unlike Python sets, it is important to notice that :class:`.NodeSet` is not
so strict about the type of element used for set operations. Thus when a
string object is encountered, it is automatically converted to a NodeSet
object for convenience. The following example shows this (the set operation
works with either a native nodeset or a string)::

    >>> nodeset = NodeSet("node[1-10]")
    >>> nodeset2 = NodeSet("node7")
    >>> nodeset.difference_update(nodeset2)
    >>> print nodeset
    node[1-6,8-10]
    >>>
    >>> nodeset.difference_update("node8")
    >>> print nodeset
    node[1-6,9-10]

NodeSet ordered content leads to the following being allowed::

    >>> nodeset = NodeSet("node[10-49]")
    >>> print nodeset[0]
    node10
    >>> print nodeset[-1]
    node49
    >>> print nodeset[10:]
    node[20-49]
    >>> print nodeset[:5]
    node[10-14]
    >>> print nodeset[::4]
    node[10,14,18,22,26,30,34,38,42,46]

And it works for node names without index, for example::

    >>> nodeset = NodeSet("lima,oscar,zulu,alpha,delta,foxtrot,tango,x-ray")
    >>> print nodeset
    alpha,delta,foxtrot,lima,oscar,tango,x-ray,zulu
    >>> print nodeset[0]
    alpha
    >>> print nodeset[-2]
    x-ray

And also for multidimensional node sets::

    >>> nodeset = NodeSet("clu1-[1-10]-ib[0-1],clu2-[1-10]-ib[0-1]")
    >>> print nodeset
    clu[1-2]-[1-10]-ib[0-1]
    >>> print nodeset[0]
    clu1-1-ib0
    >>> print nodeset[-1]
    clu2-10-ib1
    >>> print nodeset[::2]
    clu[1-2]-[1-10]-ib0

.. _class-NodeSet-split:

To split a NodeSet object into *n* subsets, use the :meth:`.NodeSet.split()`
method, for example::

    >>> for nodeset in NodeSet("node[10-49]").split(2):
    ...     print nodeset
    ...
    node[10-29]
    node[30-49]

.. _class-NodeSet-contiguous:

To split a NodeSet object into contiguous subsets, use the
:meth:`.NodeSet.contiguous()` method, for example::

    >>> for nodeset in NodeSet("node[10-49,51-53,60-64]").contiguous():
    ...     print nodeset
    ...
    node[10-49]
    node[51-53]
    node[60-64]

For further details, please refer to the full :class:`.NodeSet` API
documentation.

.. _class-NodeSet-nD:

Multidimensional considerations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Version 1.7 introduces full support of multidimensional NodeSet (eg.
*da[2-5]c[1-2]p[0-1]*). The :class:`.NodeSet` interface is the same;
multidimensional patterns are automatically detected by the parser and
processed internally.
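For instance, a multidimensional pattern can be manipulated like any other
node set. A short sketch reusing the pattern mentioned above::

    >>> from ClusterShell.NodeSet import NodeSet
    >>> nodeset = NodeSet("da[2-5]c[1-2]p[0-1]")
    >>> len(nodeset)    # 4 x 2 x 2 nodes
    16
    >>> nodeset[0]
    'da2c1p0'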
While expanding a multidimensional NodeSet is easily solved by performing a
cartesian product of all dimensions, folding nodes is much more complex and
time consuming. To reduce the performance impact of this feature, the
:class:`.NodeSet` class still relies on :class:`.RangeSet` when only one
dimension is varying (see :ref:`class-RangeSet`). Otherwise, it uses a class
named :class:`.RangeSetND` for full multidimensional support (see
:ref:`class-RangeSetND`).

.. _class-NodeSet-extended-patterns:

Extended String Pattern
^^^^^^^^^^^^^^^^^^^^^^^

The :class:`.NodeSet` class parsing engine recognizes an *extended string
pattern*, adding support for union (with special character *","*),
difference (with special character *"!"*), intersection (with special
character *"&"*) and symmetric difference (with special character *"^"*)
operations. String patterns are read from left to right, processing each
operator accordingly. The following example shows how you can use this
feature::

    >>> print NodeSet("node[10-42],node46!node10")
    node[11-42,46]

.. _class-NodeSet-groups:

Node groups
-----------

Node groups are very useful and are needed to group similar cluster nodes in
terms of configuration, installed software, available resources, etc. A node
can be a member of more than one node group.

Using node groups
^^^^^^^^^^^^^^^^^

Node groups are prefixed with the **@** character. Please see
:ref:`nodeset-groupsexpr` for more details about node group
expression/syntax rules. Please also have a look at
:ref:`Node groups configuration ` to learn how to configure
external node group bindings (sources). Once set up (please use the
:ref:`nodeset-tool` command to check your configuration), the NodeSet
parsing engine automatically resolves node groups. For example::

    >>> print NodeSet("@oss")
    example[4-5]
    >>> print NodeSet("@compute")
    example[32-159]
    >>> print NodeSet("@compute,@oss")
    example[4-5,32-159]

That is, all NodeSet-based applications share the same system-wide node
group configuration (unless explicitly disabled --- see
:ref:`class-NodeSet-disable-group`).

When the **all** group upcall is configured (:ref:`node groups configuration
`), you can also use the following :class:`.NodeSet`
constructor::

    >>> print NodeSet.fromall()
    example[4-6,32-159]

When group upcalls are not properly configured, this constructor will raise
a *NodeSetExternalError* exception.

.. _class-NodeSet-groups-finding:

Finding node groups
^^^^^^^^^^^^^^^^^^^

In order to find the node groups a specified node set belongs to, you can
use the :meth:`.NodeSet.groups()` method. This method is used by the
``nodeset -l `` command (see :ref:`nodeset-group-finding`). It returns a
Python dictionary where keys are the groups found and values, provided for
convenience, are tuples of the form *(group_nodeset, contained_nodeset)*.
For example::

    >>> for group, (group_nodes, contained_nodes) in NodeSet("@oss").groups().iteritems():
    ...     print group, group_nodes, contained_nodes
    ...
    @all example[4-6,32-159] example[4-5]
    @oss example[4-5] example[4-5]

More usage examples follow::

    >>> print NodeSet("example4").groups().keys()
    ['@all', '@oss']
    >>> print NodeSet("@mds").groups().keys()
    ['@all', '@mds']
    >>> print NodeSet("dummy0").groups().keys()
    []

.. _class-NodeSet-regroup:

Regrouping node sets
^^^^^^^^^^^^^^^^^^^^
If the needed group configuration conditions are met (cf. :ref:`node groups
configuration `), you can use the :meth:`.NodeSet.regroup()` method to
reduce node sets using matching groups, whenever possible::

    >>> print NodeSet("example[4-6]").regroup()
    @mds,@oss

The nodeset command makes use of the :meth:`.NodeSet.regroup()` method when
using the *-r* switch (see :ref:`nodeset-regroup`).

.. _class-NodeSet-groups-override:

Overriding default groups configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to override the library default groups configuration by
changing the default :class:`.NodeSet` *resolver* object. Usually, this is
done for testing or special purposes. Here is an example of how to override
the *resolver* object using :func:`.NodeSet.set_std_group_resolver()` in
order to use another configuration file::

    >>> from ClusterShell.NodeSet import NodeSet, set_std_group_resolver
    >>> from ClusterShell.NodeUtils import GroupResolverConfig
    >>> set_std_group_resolver(GroupResolverConfig("/other/groups.conf"))
    >>> print NodeSet("@oss")
    other[10-20]

It is possible to restore the :class:`.NodeSet` *default group resolver* by
passing None to the :func:`.NodeSet.set_std_group_resolver()` module
function, for example::

    >>> from ClusterShell.NodeSet import set_std_group_resolver
    >>> set_std_group_resolver(None)

.. _class-NodeSet-disable-group:

Disabling node group resolution
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If, for any reason, you want to disable host group resolution, you can use
the special resolver value *RESOLVER_NOGROUP*. In that case, the
:class:`.NodeSet` parsing engine will not recognize **@** group characters
anymore, for instance::

    >>> from ClusterShell.NodeSet import NodeSet, RESOLVER_NOGROUP
    >>> print NodeSet("@oss")
    example[4-5]
    >>> print NodeSet("@oss", resolver=RESOLVER_NOGROUP)
    @oss

Any attempt to use a group-based method (like :meth:`.NodeSet.groups()` or
:meth:`.NodeSet.regroup()`) on such a "no group" NodeSet will raise a
*NodeSetExternalError* exception.

NodeSet object serialization
----------------------------

The :class:`.NodeSet` class supports object serialization through standard
*pickling*. Group resolution is done before *pickling*.

.. _Python sets: http://docs.python.org/library/sets.html

Range sets
==========

.. highlight:: python

Cluster node names being typically indexed, common node sets rely heavily on
numerical range sets. The :mod:`.RangeSet` module provides two public
classes to deal directly with such range sets, :class:`.RangeSet` and
:class:`.RangeSetND`, presented in the following sections.

.. _class-RangeSet:

RangeSet class
--------------

The :class:`.RangeSet` class implements a mutable, ordered set of cluster
node indexes (over a single dimension) featuring a fast range-based API.
This class is used by the :class:`.NodeSet` class (see
:ref:`class-NodeSet`). Since version 1.6, :class:`.RangeSet` actually
derives from the standard Python set class (`Python sets`_), and thus
provides methods like :meth:`.RangeSet.union`,
:meth:`.RangeSet.intersection`, :meth:`.RangeSet.difference`,
:meth:`.RangeSet.symmetric_difference` and their in-place versions
:meth:`.RangeSet.update`, :meth:`.RangeSet.intersection_update`,
:meth:`.RangeSet.difference_update` and
:meth:`.RangeSet.symmetric_difference_update`.

In v1.9, the implementation of zero-based padding of indexes (e.g. `001`)
has been improved.
The inner set contains indexes as strings with the padding included, which
allows the use of mixed length zero-padded indexes (eg. using both `01` and
`001` is valid and supported in the same object). Prior to v1.9,
zero-padding was a simple display feature of fixed length per
:class:`.RangeSet` object, and indexes were stored as integers in the inner
set.

To iterate over indexes as strings with zero-padding included, you can now
iterate over the :class:`.RangeSet` object
(:meth:`.RangeSet.__iter__()`), or still use the
:meth:`.RangeSet.striter()` method, which has not changed. To iterate over
the set's indexes as integers, you may use the new method
:meth:`.RangeSet.intiter()`, which is the equivalent of iterating over the
:class:`.RangeSet` object before v1.9.

.. _class-RangeSetND:

RangeSetND class
----------------

The :class:`.RangeSetND` class builds an N-dimensional RangeSet mutable
object and provides the common set methods. This class is public and may be
used directly, however we think it is less convenient to manipulate than
:class:`.NodeSet` and it does not necessarily provide the same one-dimension
optimization (see :ref:`class-NodeSet-nD`).

Several constructors are available, using RangeSet objects, strings or
individual multidimensional tuples, for instance::

    >>> from ClusterShell.RangeSet import RangeSet, RangeSetND
    >>> r1 = RangeSet("1-5/2")
    >>> list(r1)
    ['1', '3', '5']
    >>> r2 = RangeSet("10-12")
    >>> r3 = RangeSet("0-4/2")
    >>> r4 = RangeSet("10-12")
    >>> print r1, r2, r3, r4
    1,3,5 10-12 0,2,4 10-12
    >>> rnd = RangeSetND([[r1, r2], [r3, r4]])
    >>> print rnd
    0-5; 10-12
    >>> print list(rnd)
    [('0', '10'), ('0', '11'), ('0', '12'), ('1', '10'), ('1', '11'), ('1', '12'), ('2', '10'), ('2', '11'), ('2', '12'), ('3', '10'), ('3', '11'), ('3', '12'), ('4', '10'), ('4', '11'), ('4', '12'), ('5', '10'), ('5', '11'), ('5', '12')]
    >>> r1 = RangeSetND([(0, 4), (0, 5), (1, 4), (1, 5)])
    >>> len(r1)
    4
    >>> str(r1)
    '0-1; 4-5\n'
    >>> r2 = RangeSetND([(1, 4), (1, 5), (1, 6), (2, 5)])
    >>> str(r2)
    '1; 4-6\n2; 5\n'
    >>> r = r1 & r2
    >>> str(r)
    '1; 4-5\n'
    >>> list(r)
    [('1', '4'), ('1', '5')]

.. _Python sets: http://docs.python.org/library/sets.html

Task management
===============

.. highlight:: python

.. _class-Task:

Structure of Task
-----------------

A ClusterShell *Task* and its underlying *Engine* class are the fundamental
infrastructure associated with a thread. An *Engine* implements an event
processing loop that you use to schedule work and coordinate the receipt of
incoming events. The purpose of this run loop is to keep your thread busy
when there is work to do and put your thread to sleep when there is none.
When calling the :meth:`.Task.resume()` or :meth:`.Task.run()` methods, your
thread enters the Task Engine run loop and calls installed event handlers in
response to incoming events.

Using Task objects
------------------

A *Task* object provides the main interface for adding shell commands, files
to copy or timers, and then running them. Every thread has a single *Task*
object (and underlying *Engine* object) associated with it. The *Task*
object is an instance of the :class:`.Task` class.
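Before detailing each step, here is a compact sketch of the typical Task
lifecycle (node names are made up for illustration)::

    from ClusterShell.Task import task_self

    task = task_self()                       # Task bound to current thread
    task.run("echo ok", nodes="node[1-2]")   # schedule and run (blocking)

    for buf, nodes in task.iter_buffers():   # iterate over gathered outputs
        print("%s: %s" % (nodes, buf))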
Getting a Task object
^^^^^^^^^^^^^^^^^^^^^

To get the *Task* object bound to the **current thread**, you use one of the
following:

* Use the :func:`.Task.task_self()` function available at the root of the
  Task module
* or use ``task = Task()``; Task objects are only instantiated when needed.

Example of getting the current task object::

    >>> from ClusterShell.Task import task_self
    >>> task = task_self()

So for a single-threaded application, a Task is a simple singleton (whose
instance is also available through :func:`.Task.task_self()`).

To get the *Task* object associated to a specific thread identified by the
identifier *tid*, you use the following::

    >>> from ClusterShell.Task import Task
    >>> task = Task(thread_id=tid)

.. _class-Task-configure:

Configuring the Task object
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Each *Task* provides an info dictionary that shares both internal
*Task*-specific parameters and user-defined (key, value) parameters. Use the
following :class:`.Task` class methods to get or set parameters:

* :meth:`.Task.info`
* :meth:`.Task.set_info`

For example, to configure the task debugging behavior::

    >>> task.set_info('debug', True)
    >>> task.info('debug')
    True

You can also use the *Task* info dictionary to set your own *Task*-specific
key, value pairs. You may use any free keys but only keys starting with
*USER_* are guaranteed not to be used by ClusterShell in the future.

Task info keys and their default values:

+-----------------+----------------+------------------------------------+
| Info key string | Default value  | Comment                            |
+=================+================+====================================+
| debug           | False          | Enable debugging support (boolean) |
+-----------------+----------------+------------------------------------+
| print_debug     | internal using | Default is to print debug lines to |
|                 | *print*        | stdout using *print*. To override  |
|                 |                | this behavior, set a function that |
|                 |                | takes two arguments (the task      |
|                 |                | object and a string) as the value. |
+-----------------+----------------+------------------------------------+
| fanout          | 64             | Ssh *fanout* window (integer)      |
+-----------------+----------------+------------------------------------+
| connect_timeout | 10             | Value passed to ssh or pdsh        |
|                 |                | (integer)                          |
+-----------------+----------------+------------------------------------+
| command_timeout | 0 (no timeout) | Value passed to ssh or pdsh        |
|                 |                | (integer)                          |
+-----------------+----------------+------------------------------------+

Below is an example of a `print_debug` override. As you can see, we set the
function `print_csdebug(task, s)` as the value. When debugging is enabled,
this function will be called for any debug text line. For example, this
function searches for any known patterns and prints a modified debug line to
stdout when found::

    def print_csdebug(task, s):
        m = re.search("(\w+): SHINE:\d:(\w+):", s)
        if m:
            print "%s" % m.group(0)
        else:
            print s

    # Install the new debug printing function
    task_self().set_info("print_debug", print_csdebug)

.. _taskshell:

Submitting a shell command
^^^^^^^^^^^^^^^^^^^^^^^^^^

You can submit a set of commands for local or distant execution in parallel
with :meth:`.Task.shell`.

Local usage::

    task.shell(command [, key=key] [, handler=handler] [, timeout=secs])

Distant usage::

    task.shell(command, nodes=nodeset [, handler=handler] [, timeout=secs])

This method makes use of the default local or distant worker. ClusterShell
uses a default Worker based on the Python Popen2 standard module to execute
local commands, and a Worker based on *ssh* (Secure SHell) for distant
commands.
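For example, a minimal sketch submitting both local and distant commands
(node names are made up)::

    from ClusterShell.Task import task_self

    task = task_self()

    # local command (no nodes argument), identified by a custom key
    task.shell("date", key="localhost")

    # distant command on several nodes, with a per-worker timeout
    task.shell("uname -r", nodes="node[1-2]", timeout=10)

    task.resume()   # run all submitted commands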
If the Task is not running, the command is scheduled for later execution. If
the Task is currently running, the command is executed as soon as possible
(depending on the current *fanout*).

To set a per-worker (eg. per-command) timeout value, just use the timeout
parameter (in seconds), for example::

    task.shell("uname -r", nodes=remote_nodes, handler=ehandler, timeout=5)

This is the preferred way to specify a command timeout. The
:meth:`.EventHandler.ev_timeout` event is generated before the worker has
finished to indicate that some nodes have timed out. You may then retrieve
the nodes with :meth:`.DistantWorker.iter_keys_timeout()`.

Submitting a file copy action
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Local file copy to distant nodes is supported. You can submit a copy action
with :meth:`.Task.copy`::

    task.copy(source, dest, nodes=nodeset [, handler=handler] [, timeout=secs])

This method makes use of the default distant copy worker, which is based on
*scp* (Secure CoPy) as provided with OpenSSH. If the Task is not running,
the copy is scheduled for later execution. If the Task is currently running,
the copy is started as soon as possible (depending on the current *fanout*).

Starting the Task
^^^^^^^^^^^^^^^^^

Before you run a Task, you must add at least one worker (shell command, file
copy) or timer to it. If a Task does not have any worker to execute and
monitor, it exits immediately when you try to run it with::

    task.resume()

At this time, all previously submitted commands will start in the associated
Task thread. From a library user point of view, the task thread is blocked
until the end of the command executions. Please note that the special method
:meth:`.Task.run` performs a :meth:`.Task.shell` and a :meth:`.Task.resume`
in one call.

To set a Task execution timeout, use the optional *timeout* parameter to set
the timeout value in seconds. If this time elapses while the Task is still
running, the Task raises a ``TimeoutError`` exception, cleaning up all
scheduled workers and timers in the process. Using such a timeout ensures
that the Task will not exceed a given time for all its scheduled work. You
can also configure a per-worker timeout that generates an
:meth:`.EventHandler.ev_timeout` event but does not raise an exception,
allowing the Task to continue. Indeed, using a per-worker timeout is the
preferred way for most applications.

Getting Task results
^^^^^^^^^^^^^^^^^^^^

After the task is finished (after :meth:`.Task.resume` or :meth:`.Task.run`)
or after a worker is completed when you have previously defined an event
handler (at :meth:`.EventHandler.ev_close`), you can use Task result
getters:

* :meth:`.Task.iter_buffers`
* :meth:`.Task.iter_errors`
* :meth:`.Task.node_buffer`
* :meth:`.Task.node_error`
* :meth:`.Task.max_retcode`
* :meth:`.Task.num_timeout`
* :meth:`.Task.iter_keys_timeout`

Note: *buffer* refers to standard output, *error* to standard error. Please
see some examples in :ref:`prog-examples`.

Exiting the Task
^^^^^^^^^^^^^^^^

If a Task does not have any more scheduled workers or timers (for example,
if you run one shell command and then it closes), it exits automatically
from :meth:`.Task.resume`. Still, except from a signal handler, you can
always call the following method to abort the Task execution:

* :meth:`.Task.abort`

For example, it is safe to call this method from an event handler within the
task itself.
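For instance, a handler may stop the whole run as soon as one node reports a
failure. This is a minimal sketch, assuming the event handler signatures of
ClusterShell 1.8 and later (node names are made up)::

    from ClusterShell.Task import task_self
    from ClusterShell.Event import EventHandler

    class StopOnError(EventHandler):
        def ev_hup(self, worker, node, rc):
            if rc != 0:
                # abort the task from within one of its event handlers
                worker.task.abort()

    task = task_self()
    task.run("/bin/true", nodes="node[1-4]", handler=StopOnError())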
On abort, all scheduled workers (shell command, file copy) and timers are
cleaned and :meth:`.Task.resume` returns, unblocking the Task thread from a
library user point of view. Please note that commands being executed
remotely are not necessarily stopped (this is due to *ssh(1)* behavior).

.. _configuring-a-timer:

Configuring a Timer
^^^^^^^^^^^^^^^^^^^

A timer is bound to a Task (and its underlying Engine) and fires at a preset
time in the future. Timers can fire either only once or repeatedly at fixed
time intervals. Repeating timers can also have their next firing time
manually adjusted (see :meth:`.Task.timer`).

A timer is not a real-time mechanism; it fires when the Task's underlying
Engine to which the timer has been added is running and able to check if the
timer firing time has passed.

When a timer fires, the method :meth:`.EventHandler.ev_timer` of the
associated EventHandler is called.

To configure a timer, use the following (secs in seconds with floating point
precision)::

    task.timer(fire=secs, handler=handler [, interval=secs])

.. _task-default-worker:

Changing default worker
^^^^^^^^^^^^^^^^^^^^^^^

When calling :meth:`.Task.shell` or :meth:`.Task.copy`, the Task object
creates a worker instance for each call. When the *nodes* argument is
defined, the worker class used for these calls is based on the Task default
*distant_worker*. Change this value to use another worker class, for example
**Rsh**::

    from ClusterShell.Task import task_self
    from ClusterShell.Worker.Rsh import WorkerRsh

    task_self().set_default('distant_worker', WorkerRsh)

Thread safety and Task objects
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

ClusterShell is an event-based library and one of its advantages is to avoid
the use of threads (and their safety issues), so it's mainly not
thread-safe. When possible, avoid the use of threads with ClusterShell.
However, it's sometimes not so easy, first because another library you want
to use in some event handler may not be event-based and may block the
current thread (which is enough to break the deal). Also, in some cases, it
could be useful for you to run several Tasks at the same time. Since version
1.1, ClusterShell provides support for launching a Task in another thread
and some experimental support for multiple Tasks, but:

* you should ensure that a Task is configured and accessed from one thread
  at a time before it's running (there is no API lock/mutex protection),
* once the Task is running, you should modify it only from the same thread
  that owns that Task (for example, you cannot call :meth:`.Task.abort` from
  another thread).

The library provides two thread-safe methods and a function for basic Task
interactions: :meth:`.Task.wait`, :meth:`.Task.join` and
:func:`.Task.task_wait` (function defined at the root of the Task module).
Please refer to the API documentation.

Configuring explicit Shell Worker objects
-----------------------------------------

We have seen in :ref:`taskshell` how to easily submit shell commands to the
Task. The :meth:`.Task.shell` method returns an already scheduled Worker
object.
It is possible to instantiate the Worker object explicitly, for example::

    from ClusterShell.Worker.Ssh import WorkerSsh

    worker = WorkerSsh('node3', command="/bin/echo alright")

To be used in a Task, add the worker to it with::

    task.schedule(worker)

If you have pdsh installed, you can use it by easily switching to the Pdsh
worker, which should behave in the same manner as the Ssh worker::

    from ClusterShell.Worker.Pdsh import WorkerPdsh

    worker = WorkerPdsh('node3', command="/bin/echo alright")

ClusterShell |release| documentation
====================================

Contents:

.. toctree::
    :maxdepth: 3

    intro
    release
    install
    config
    tools/index
    guide/index
    api/index
    further

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

.. highlight:: console

Installation
============

ClusterShell is distributed in several packages. On RedHat-like OS, we
recommend using the RPM package (.rpm) distribution. As system software for
clusters, ClusterShell is primarily made for system-wide installation to be
used by system administrators. However, changes have been made so that it's
now possible to install it without root access (see :ref:`install-pip-user`).

.. _install-requirements:

Requirements
------------

ClusterShell should work with any Unix [#]_ operating system which provides
Python 2.7 or 3.x and OpenSSH or any compatible Secure Shell client.
Furthermore, ClusterShell's engine has been optimized when the ``poll()``
syscall is available or, even better, when the ``epoll_wait()`` syscall is
available (Linux only).

For instance, ClusterShell is known to work on the following operating
systems:

* GNU/Linux

  * Red Hat Enterprise Linux 7 (Python 2.7)
  * Red Hat Enterprise Linux 8 (Python 3.6)
  * Red Hat Enterprise Linux 9 (Python 3.9)
  * Fedora 30 and above (Python 2.7 to 3.10+)
  * Debian 10 "buster" (Python 3.7)
  * Debian 11 "bullseye" (Python 3.9)
  * Ubuntu 20.04 (Python 3.8)

* Mac OS X 12+ (Python 2.7 and 3.8)

Distribution
------------

ClusterShell is an open-source project distributed under the GNU Lesser
General Public License version 2.1 or later (`LGPL v2.1+`_), which means
that many possibilities are offered to the end user. Also, as a software
library, ClusterShell should remain easily available to everyone.
Packages are currently available for Fedora Linux, RHEL (through EPEL
repositories), Debian, Arch Linux and more.

.. _install-python-support-overview:

Python support overview
^^^^^^^^^^^^^^^^^^^^^^^

As seen in :ref:`install-requirements`, ClusterShell supports Python 2.7 and
onwards, at least up to Python 3.10 at the time of writing.
The table below provides a few examples of versions of Python supported by ClusterShell packages as found in some common Linux distributions: +------------------+----------------------------+-----------------------------------+ | Operating | System Python version used | Alternate Python support | | System | by the clustershell tools | packaged (version-suffixed tools) | +==================+============================+===================================+ | RHEL 7 | Python 2.7 | Python 3.6 | +------------------+----------------------------+-----------------------------------+ | RHEL 8 | **Python 3.6** | | +------------------+----------------------------+-----------------------------------+ | RHEL 9 | **Python 3.9** | | +------------------+----------------------------+-----------------------------------+ | Fedora 36 | **Python 3.10** | | +------------------+----------------------------+-----------------------------------+ | openSUSE Leap 15 | Python 2.7 | Python 3.6 | +------------------+----------------------------+-----------------------------------+ | SUSE SLES 12 | Python 2.7 | Python 3.4 | +------------------+----------------------------+-----------------------------------+ | SUSE SLES 15 | Python 2.7 | Python 3.6 | +------------------+----------------------------+-----------------------------------+ | Ubuntu 18.04 LTS | **Python 3.6** | | +------------------+----------------------------+-----------------------------------+ | Ubuntu 20.04 LTS | **Python 3.8** | | +------------------+----------------------------+-----------------------------------+ Red Hat Enterprise Linux ^^^^^^^^^^^^^^^^^^^^^^^^ ClusterShell packages are maintained on Extra Packages for Enterprise Linux `EPEL`_ for Red Hat Enterprise Linux (RHEL) and its compatible spinoffs such as `Alma Linux`_ and `Rocky Linux`_. At the time of writing, ClusterShell |version| is available on EPEL 7, 8 and 9. Install ClusterShell from EPEL """""""""""""""""""""""""""""" First you have to enable the ``yum`` EPEL repository. We recommend to download and install the `EPEL`_ repository RPM package. On CentOS, this can be easily done using the following command:: $ yum --enablerepo=extras install epel-release Then, the ClusterShell installation procedure is quite the same as for *Fedora Updates*, for instance:: $ yum install clustershell With EPEL 7, the Python 2 modules and tools are installed by default. If interested in Python 3 support, simply install the additional ClusterShell's Python 3 subpackage using the following command:: $ yum install python36-clustershell .. note:: The Python 3 subpackage is named ``python34-clustershell`` or ``python36-clustershell`` instead of ``python3-clustershell`` on EPEL 7 only. On EPEL 7, Python 3 versions of the tools are installed as *tool-pythonversion*, like ``clush-3.6``, ``cluset-3.6`` or ``nodeset-3.6``. With EPEL 8 and 9, however, Python 3 is the system default, and Python 2 has been deprecated. Thus only Python 3 is supported by the EPEL clustershell packages, the tools are using Python 3 by default and are not suffixed anymore. Fedora ^^^^^^ At the time of writing, ClusterShell |version| is available on Fedora 37 (releases being maintained by the Fedora Project). Install ClusterShell from *Fedora Updates* """""""""""""""""""""""""""""""""""""""""" ClusterShell is part of Fedora, so it is really easy to install it with ``dnf``, although you have to keep the Fedora *updates* default repository. 
The following command checks whether the packages are available on a Fedora system:: $ dnf list \*clustershell Available Packages clustershell.noarch 1.8-1.fc26 fedora python2-clustershell.noarch 1.8-1.fc26 fedora python3-clustershell.noarch 1.8-1.fc26 fedora Then, install ClusterShell's library module and tools using the following command:: $ dnf install clustershell Prior to Fedora 31, Python 2 modules and tools were installed by default. If interested in Python 3 support, simply install the additional ClusterShell's Python 3 subpackage using the following command:: $ dnf install python3-clustershell Prior to Fedora 31, Python 3 versions of the tools are installed as *tool-pythonversion*, like ``clush-3.6``, ``cluset-3.6`` or ``nodeset-3.6``. On Fedora 31 and onwards, only Python 3 is supported. Install ClusterShell from Fedora Updates Testing """""""""""""""""""""""""""""""""""""""""""""""" Recent releases of ClusterShell are first available through the `Test Updates`_ repository of Fedora, then it is later pushed to the stable *updates* repository. The following ``dnf`` command will also checks for packages availability in the *updates-testing* repository:: $ dnf list \*clustershell --enablerepo=updates-testing To install, also add the ``--enablerepo=updates-testing`` option, for instance:: $ dnf install clustershell --enablerepo=updates-testing openSUSE ^^^^^^^^ ClusterShell is available in openSUSE Tumbleweed (Factory) and Leap since 2017:: $ zypper search clustershell Loading repository data... Reading installed packages... S | Name | Summary | Type --+----------------------+-------------------------------------------------------+-------- | clustershell | Python framework for efficient cluster administration | package | python2-clustershell | ClusterShell module for Python 2 | package | python3-clustershell | ClusterShell module for Python 3 | package To install ClusterShell on openSUSE, use:: $ zypper install clustershell Python 2 module and tools are installed by default. If interested in Python 3 support, simply install the additional ClusterShell's Python 3 subpackage using the following command:: $ zypper install python3-clustershell Python 3 versions of the tools are installed as *tool-pythonversion*, like ``clush-3.6``, ``cluset-3.6`` or ``nodeset-3.6``. Debian ^^^^^^ ClusterShell is available in Debian **main** repository (since 2011). To install it on Debian, simply use:: $ apt-get install clustershell You can get the latest version on:: * http://packages.debian.org/sid/clustershell Ubuntu ^^^^^^ Like Debian, it is easy to get and install ClusterShell on Ubuntu (also with ``apt-get``). To do so, please first enable the **universe** repository. ClusterShell is available since "Natty" release (11.04): * http://packages.ubuntu.com/clustershell .. _install-python: Installing ClusterShell the Python way ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. warning:: Installing ClusterShell as root using pip [#]_ is discouraged and can result in conflicting behaviour with the system package manager. Use packages provided by your OS instead to install ClusterShell system-wide. .. 
_install-pip-user: Installing ClusterShell as user using pip """"""""""""""""""""""""""""""""""""""""" To install ClusterShell as a standard Python package using pip as an user:: $ pip install --user ClusterShell Or alternatively, using the source tarball:: $ pip install --user ClusterShell-1.x.tar.gz Then, you might need to update your ``PATH`` to easily use the :ref:`tools`, and possibly set the ``PYTHONPATH`` environment variable to be able to import the library, and finally ``MANPATH`` for the man pages:: $ export PATH=$PATH:~/.local/bin $ $ # Might also be needed: $ export PYTHONPATH=$PYTHONPATH:~/.local/lib $ export MANPATH=$MANPATH:$HOME/.local/share/man Configuration files are installed in ``~/.local/etc/clustershell`` and are automatically loaded before system-wide ones (for more info about supported user config files, please see the :ref:`clush-config` or :ref:`groups-config` config sections). .. _install-venv-pip: Isolated environment using virtualenv and pip """"""""""""""""""""""""""""""""""""""""""""" It is possible to use virtual env (`venv`_) and pip to install ClusterShell in an isolated environment:: $ python3 -m venv venv $ source venv/bin/activate $ pip install ClusterShell .. _install-source: Source ------ Current source is available through Git, use the following command to retrieve the latest development version from the repository:: $ git clone git@github.com:cea-hpc/clustershell.git .. [#] Unix in the same sense of the *Availability: Unix* notes in the Python documentation .. [#] pip is a tool for installing and managing Python packages, such as those found in the Python Package Index .. _LGPL v2.1+: https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html .. _Test Updates: http://fedoraproject.org/wiki/QA/Updates_Testing .. _EPEL: http://fedoraproject.org/wiki/EPEL .. _Alma Linux: https://almalinux.org/ .. _Rocky Linux: https://rockylinux.org/ .. _venv: https://docs.python.org/3/tutorial/venv.html ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/sphinx/intro.rst0000644104717000001440000000366214501416555017546 0ustar00sthiellusersIntroduction ============ ClusterShell provides a light and unified command execution Python framework to help administer GNU/Linux or BSD clusters. Some of the most important benefits of using ClusterShell are to: * provide an efficient, parallel and highly scalable command execution engine in Python, * support an unified node groups syntax and external group access (see the :class:`.NodeSet` class), * significantly speed up initial cluster setup and daily administrative tasks when using tools like :ref:`clush-tool` and :ref:`cluset-tool` / :ref:`nodeset-tool`. Originally created by the HPC Linux system development team at CEA [#]_ HPC center in France, ClusterShell is designed around medium and long term ideas of sharing cluster administration development time, and this according to two axes: * sharing administrative applications between main components of the computing center: compute clusters, but also storage clusters and server farms (so they can use the same efficient framework for their administrative applications), * sharing cluster administration techniques across multiple generations of super-computing clusters (first of all, to avoid that each cluster administration application has to implement its own command execution layer, but also to encourage the adoption of event-based coding model in cluster management scripts). 
Two coding models are available, making the library well-suited for simple scripts as well as complex applications. The library is fully cluster-aware and was primarily made for executing remote shell commands in parallel and gathering output results, but it now also provides the developer with a set of extra features for administrative applications, like file copy support or time-based notifications (timers), which are discussed in this documentation.

.. [#] French Alternative Energies and Atomic Energy Commission, a leading technological research organization in Europe

.. highlight:: console

Release Notes
=============

Version 1.9
-----------

We are pleased to announce the availability of this new release, which comes with some exciting new features and improvements. We would like to thank everyone who participated in this release in one way or another.

Version 1.9.2
^^^^^^^^^^^^^

This version contains a few bug fixes and improvements over 1.9.1:

* :ref:`clush-tool` and :ref:`clubak-tool`: we fixed the line-buffered output with recent versions of Python 3 for the standard output and standard error streams. A welcome consequence of this change is that non-printable characters will now be displayed as �.
* :ref:`clush-tool`: when multiple files or directories are specified as arguments with ``--[r]copy``, and ``--dest`` is omitted, use each argument's dirname for each destination, instead of the dirname of the first argument only.
* In YAML configuration files used for :ref:`group-file-based`, a valid YAML null value (for example: ``null`` or ``~``) is now interpreted as an empty node set.
* Router and destination node sets defined in :ref:`topology.conf <clush-tree-enabling>` may use :ref:`nodeset-groups` and :ref:`node-wildcards`, but any route definition with an empty node set will now be ignored.

For more details, please have a look at `GitHub Issues for 1.9.2 milestone`_.

Version 1.9.1
^^^^^^^^^^^^^

This version contains a few bug fixes and improvements over 1.9, mostly affecting packaging:

* Allow ``clustershell`` to be installed as a user in a ``venv`` using ``pip install``, or using ``pip install --user`` with man pages. Root installation using pip is now discouraged. If done, ``/usr/local`` is likely to be used as the install prefix. See :ref:`install-python` for more information.
* :ref:`clush-tool`: ``$CFGDIR`` was broken if ``/etc/clustershell`` did not exist
* Add support for negative ranges in :class:`.RangeSet`.

For more details, please have a look at `GitHub Issues for 1.9.1 milestone`_.

Main changes in 1.9
^^^^^^^^^^^^^^^^^^^

Python support
""""""""""""""

.. warning:: Support for Python 2.6 has been dropped in this version. Upgrading to Python 3 is highly recommended as Python 2 reached end of life in 2020. See :ref:`install-requirements`.

clush
"""""

* :ref:`clush-tool` now has support for :ref:`clush-modes` to support more authentication use cases. A run mode is a set of pre-defined :ref:`clush.conf <clush-config>` settings with a given name, which can then be activated with ``--mode=MODE``. We also added the new options ``command_prefix`` and ``password_prompt`` (see :ref:`clush.conf <clush-config>`). Two examples of run modes are included and can be easily enabled:

  * :ref:`password-based ssh authentication with sshpass <clush-sshpass>`
  * :ref:`sudo password forwarding over stdin <clush-sudo>`
  .. note:: ``clush.conf`` comes with a new variable :ref:`confdir <clush-config>` to specify where to look for run mode configuration files. If you upgrade from 1.8.4 and want to use run modes, make sure ``confdir`` is present in your ``clush.conf``.

* :ref:`clush-tool`: add arguments ``--outdir=OUTDIR`` and ``--errdir=ERRDIR``; similar to *pssh(1)*, this allows saving the standard output (stdout) and/or error (stderr) of all remote commands to local files. See :ref:`clush-outdir`.

Node sets and node groups
"""""""""""""""""""""""""

.. warning:: To support mixed-length 0-padding ranges, version 1.9 introduces changes in :class:`.RangeSet`'s API that might break existing code. If you use :class:`.RangeSet` directly, see below for more information.

* :class:`.NodeSet`, :class:`.RangeSet` and :class:`.RangeSetND` objects now support sets with mixed-length zero padding, meaning you can safely mix ranges like ``2-3``, ``03-09`` and ``005-123``. The following example with :ref:`nodeset-tool` shows that not only are ``01`` and ``001`` now seen as separate indexes, but it is also possible to mix non-padded indexes like ``1`` with zero-padded indexes::

      $ nodeset --fold node001 node1 node01
      node[1,01,001]

  See ``nodeset``'s :ref:`zero padding ` for more examples.

  :class:`.RangeSet` now internally manages indexes as strings with the zero padding included. Prior to v1.9, indexes were stored as integers and zero padding was a simple display feature of fixed length per :class:`.RangeSet` object. If you are using this class directly in your code, please see :ref:`class-RangeSet` in the Programming Guide section for portability recommendations (especially the new method :meth:`.RangeSet.intiter()`).

  .. note:: The :class:`.NodeSet` class API has NOT changed, so as long as you do not use :class:`.RangeSet` directly, you may safely upgrade to 1.9.

* :ref:`nodeset-rawgroupnames`: the **@@** operator may be used in any node set expression to manipulate group names as a node set::

      $ nodeset -l -s rack
      @rack:J1
      @rack:J2
      @rack:J3
      $ nodeset -f @@rack
      J[1-3]

* :class:`.RangeSet`: multidimensional folding performance optimization, useful for "xnames" on HPE Cray EX supercomputers that encode up to 5 dimensions.
* :ref:`Slurm group bindings `: filter out more Slurm node state flags

Configuration
"""""""""""""

* Introduce ``$CLUSTERSHELL_CFGDIR`` as an alternate location for configuration files; useful on a cluster where ClusterShell is provided as a user-facing tool installed on a shared file system (see :ref:`clush-config`, :ref:`groups_config_conf` and :ref:`defaults-config`).

Tree mode
"""""""""

* Fix start by implementing a proper asynchronous start for :class:`.TreeWorker`, which is now only triggered when the engine actually starts.
* Fix error with intermediate gateways

For more details, please have a look at `GitHub Issues for 1.9 milestone`_.

Version 1.8
-----------

This adaptive major release is compatible with both Python 2 and Python 3. We hope this release will help you manage your clusters, server farms or cloud farms! Special thanks to the many of you that have sent us feedback on GitHub!

.. warning:: Support for Python 2.5 and below has been dropped in this version.
Version 1.8.4
^^^^^^^^^^^^^

This version contains a few bug fixes and improvements:

* allow out-of-tree worker modules
* use default local_worker and allow overriding :ref:`defaults-config` (tree mode)
* return maxrc properly in the case of the Rsh Worker
* :ref:`clush-tool`: improve stdin support with Python 3
* :ref:`clush-tool`: add maxrc option to :ref:`clush.conf <clush-config>`
* :ref:`clush-tool`: add support for NO_COLOR and CLICOLOR

For more details, please have a look at `GitHub Issues for 1.8.4 milestone`_.

Version 1.8.3
^^^^^^^^^^^^^

This version contains a few bug fixes and improvements, mostly affecting the :ref:`tree mode <clush-tree>`:

* propagate the ``CLUSTERSHELL_GW_PYTHON_EXECUTABLE`` environment variable to remote gateways (see :ref:`clush-tree-python`)
* fix defect to properly close the gateway channel when a worker has aborted
* improve error reporting from gateways
* :ref:`clush-tool`: now properly handles ``--worker=ssh`` when :ref:`topology.conf <clush-tree-enabling>` is present to explicitly disable :ref:`tree mode <clush-tree>`
* use the safe yaml load variant to avoid a warning from :class:`.YAMLGroupLoader`

For more details, please have a look at `GitHub Issues for 1.8.3 milestone`_.

We also added a :ref:`Python support matrix ` for the main Linux distributions.

Version 1.8.2
^^^^^^^^^^^^^

This version contains a few minor fixes:

* :ref:`clush-tool`: support UTF-8 string encoding with :ref:`--diff <clush-diff>`
* in some cases, :ref:`timers ` were too fast due to an issue in :class:`.EngineTimer`
* fix issue in the :ref:`Slurm group bindings ` where job ids were used instead of user names
* performance update for the :ref:`xCAT group bindings `

For more details, please have a look at `GitHub Issues for 1.8.2 milestone`_.

Python support
""""""""""""""

Version 1.8.2 adds support for Python 3.7.

.. note:: This version still supports Python 2.6 and thus also RHEL/CentOS 6, but please note that ClusterShell 1.9 is expected to require at least Python 2.7.

OS support
""""""""""

Version 1.8.2 adds support for RHEL 8/CentOS 8 and Fedora 31+, where only the Python 3 package is provided. The ``clustershell`` packages will be made available in EPEL-8 as soon as possible. No packaging changes were made to ``clustershell`` in RHEL/CentOS 6 or 7.

Version 1.8.1
^^^^^^^^^^^^^

This update contains a few bug fixes and some performance improvements to the :class:`.NodeSet` class. The :ref:`tree mode <clush-tree>` has been fixed to properly support offline gateways. We added the following command line options:

* ``--conf`` to specify an alternative clush.conf (clush only)
* ``--groupsconf`` to specify an alternative groups.conf (all CLIs)

In :class:`.EventHandler`, we reinstated :meth:`.EventHandler.ev_error` and :meth:`.EventHandler.ev_timeout` (as deprecated) for compatibility purposes. Please see below for more details about important :class:`.EventHandler` changes in 1.8.

Finally, :ref:`cluset <cluset-tool>`/:ref:`nodeset <nodeset-tool>` have been improved by adding support for:

* literal new line in ``-S``
* multiline shell variables in options

For more details, please have a look at `GitHub Issues for 1.8.1 milestone`_.

Main changes in 1.8
^^^^^^^^^^^^^^^^^^^

For more details, please have a look at `GitHub Issues for 1.8 milestone`_.
CLI (command line interface)
""""""""""""""""""""""""""""

If you use the :ref:`clush <clush-tool>` or :ref:`cluset <cluset-tool>`/:ref:`nodeset <nodeset-tool>` tools, there are no major changes since 1.7, though a few bug fixes and improvements have been made:

* It is now possible to work with numeric node names with cluset/nodeset::

      $ nodeset --fold 6704 6705 r931 r930
      [6704-6705],r[930-931]
      $ squeue -h -o '%i' -u $USER | cluset -f
      [680240-680245,680310]

  As a reminder, cluset/nodeset has always had an option to switch to numerical cluster ranges (only), using ``-R/--rangeset``::

      $ squeue -h -o '%i' -u $USER | cluset -f -R
      680240-680245,680310

* Node group configuration is now loaded and processed only when required. This is actually an improvement of the :class:`.NodeSet` class that the tools readily benefit from. This should improve both usability and performance.
* YAML group files are now ignored for users that don't have the permission to read them (see :ref:`group-file-based` for more info about group files).
* :ref:`clush <clush-tool>` now uses slightly different colors that are legible on dark backgrounds.
* :ref:`clush-tree`:

  + Better detection of the Python executable, and, if needed, we added a new environment variable to override it, see :ref:`clush-tree-python`.
  + You must use the same major version of Python on the gateways and the root node.

.. highlight:: python

Python library
""""""""""""""

If you're a developer and use the ClusterShell Python library, please read below.

Python 3 support
++++++++++++++++

Starting in 1.8, the library can also be used with Python 3. The code is compatible with both Python 2 and 3 at the same time. To make this possible, we performed a full code refactoring (without changing the behavior).

.. note:: When using Python 3, we recommend Python 3.4 or any more recent version.

Improved Event API
++++++++++++++++++

We've made some changes to :class:`.EventHandler`, a class that defines a simple interface to handle events generated by :class:`.Worker`, :class:`.EventTimer` and :class:`.EventPort` objects.

Please note that all programs already based on :class:`.EventHandler` should work with this new version of ClusterShell without any code change (backward API compatibility across 1.x versions is enforced). We use object *introspection*, the ability to determine the type of an object at runtime, to make the Event API evolve smoothly. We do still recommend changing your code as soon as possible, as we'll break backward compatibility in the future major release 2.0.

The signatures of the following :class:`.EventHandler` methods **changed** in 1.8:

* :meth:`.EventHandler.ev_pickup`: new ``node`` argument
* :meth:`.EventHandler.ev_read`: new ``node``, ``sname`` and ``msg`` arguments
* :meth:`.EventHandler.ev_hup`: new ``node`` and ``rc`` arguments
* :meth:`.EventHandler.ev_close`: new ``timedout`` argument

Both old and new signatures are supported in 1.8. The old signatures will be deprecated in a future 1.x release and **removed** in version 2.0. The new methods aim to be more convenient to use by avoiding the need to access context-specific :class:`.Worker` attributes like ``worker.current_node`` (replaced with the ``node`` argument in that case).
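To illustrate, here is a minimal, hedged sketch of an event handler written against the new 1.8 signatures (the handler name and node names below are ours, not part of the API)::

    from ClusterShell.Event import EventHandler
    from ClusterShell.Task import task_self

    class OutputHandler(EventHandler):
        """Use the new per-event ``node`` argument instead of
        context-specific attributes like worker.current_node."""

        def ev_read(self, worker, node, sname, msg):
            # msg is a bytes object with Python 3
            print('%s: %s' % (node, msg.decode()))

        def ev_hup(self, worker, node, rc):
            if rc != 0:
                print('%s: exited with %d' % (node, rc))

    task = task_self()
    task.run('uname -r', nodes='node[1-2]', handler=OutputHandler())

Thanks to the introspection-based compatibility described above, the same program would still run if the handler used the old 1.x signatures instead.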
Also, please note that the following :class:`.EventHandler` methods will be removed in 2.0:

* ``EventHandler.ev_error()``: its use should be replaced with :meth:`.EventHandler.ev_read` by comparing the stream name ``sname`` with :attr:`.Worker.SNAME_STDERR`, like in the example below::

      class MyEventHandler(EventHandler):
          def ev_read(self, worker, node, sname, msg):
              if sname == worker.SNAME_STDERR:
                  print('error from %s: %s' % (node, msg))

* ``EventHandler.ev_timeout()``: its use should be replaced with :meth:`.EventHandler.ev_close` by checking for the new ``timedout`` argument, which is set to ``True`` when a timeout occurred.

We recommend that developers start using the improved :mod:`.Event` API now. Please don't forget to update your packaging requirements to use ClusterShell 1.8 or later.

Task and standard input (stdin)
+++++++++++++++++++++++++++++++

:meth:`.Task.shell` and :meth:`.Task.run` have a new ``stdin`` boolean argument which, if set to ``False``, prevents the use of stdin by sending EOF at first read, as if it were connected to /dev/null. If not specified, its value is managed by the :ref:`defaults-config`. Its default value in :class:`.Defaults` is set to ``True`` for backward compatibility, but could change in a future major release. If your program doesn't plan to listen to stdin, it is recommended to set ``stdin=False`` when calling these two methods.
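As a short, hedged sketch (node names are hypothetical), disabling stdin for a run looks like this::

    from ClusterShell.Task import task_self

    task = task_self()
    # stdin=False sends EOF at first read, as if stdin were /dev/null
    task.run('uptime', nodes='node[1-4]', stdin=False)
    for buf, nodes in task.iter_buffers():
        print(nodes, buf)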
.. highlight:: console

Packaging changes
"""""""""""""""""

We recommend that package maintainers use separate subpackages for Python 2 and Python 3 to install ClusterShell modules and related command line tools. The Python 2 and Python 3 stacks should be fully installable in parallel.

For the RPM packaging, there are now two subpackages ``python2-clustershell`` and ``python3-clustershell`` (or ``python34-clustershell`` in EPEL), each providing the library and tools for the corresponding version of Python. The ``clustershell`` package includes the common configuration files and documentation and requires ``python2-clustershell``, mainly because Python 2 is still the default interpreter on most operating systems.

``vim-clustershell`` was confusing, so we removed it and added the vim extensions to the main ``clustershell`` subpackage.

Version 1.8 should be readily available as RPMs in the following distributions or RPM repositories:

* EPEL 6 and 7
* Fedora 26 and 27
* openSUSE Factory and Leap

On a supported environment, you can expect a smooth upgrade from version 1.6+. We also expect the packaging to be updated for Debian.

Version 1.7
-----------

It's just a small version bump from the well-known 1.6 version, but ClusterShell 1.7 comes with some nice new features that we hope you'll enjoy! Most of these features have already been tested on some very large Linux production systems.

Version 1.7 and possible future minor versions 1.7.x are compatible with Python 2.4 up to Python 2.7 (for example: from RedHat EL5 to EL7). Upgrade from version 1.6 to 1.7 should be painless and is fully supported.

Version 1.7.3
^^^^^^^^^^^^^

This update contains a few bug fixes and some interesting performance improvements. This is also the first release published under the GNU Lesser General Public License, version 2.1 or later (`LGPL v2.1+`_). Previous releases were published under the `CeCILL-C V1`_.

Quite a bit of work has been done on the *fanout* of processes that the library uses to execute commands. We implemented a basic per-worker *fanout* to fix the broken behaviour in tree mode. Thanks to this, it is now possible to use fanout=1 with gateways. The :ref:`documentation <clush-tree-fanout>` has also been clarified.

An issue that led to broken pipe errors but also affected performance has been fixed in :ref:`tree mode <clush-tree>` when copying files. An issue with :ref:`clush-tool` ``-L``, where nodes weren't always properly sorted, has been fixed. The performance of :class:`.MsgTree`, the class used by the library to aggregate identical command outputs, has been improved. We have seen up to 75% speed improvement in some cases.

Finally, a :ref:`cluset <cluset-tool>` command has been added to avoid a conflict with the `xCAT`_ nodeset command. It is the same command as :ref:`nodeset-tool`.

For more details, please have a look at `GitHub Issues for 1.7.3 milestone`_.

ClusterShell 1.7.3 is compatible with Python 2.4 up to Python 2.7 (for example: from RedHat EL5 to EL7). Upgrades from versions 1.6 or 1.7 are supported.

Version 1.7.2
^^^^^^^^^^^^^

This minor version fixes a defect in :ref:`tree mode <clush-tree>` that led to broken pipe errors or unwanted backtraces.

The :class:`.NodeSet` class now supports the empty string as input. In practice, you may now safely reuse the output of a :ref:`nodeset <nodeset-tool>` command as an input argument for another :ref:`nodeset <nodeset-tool>` command, even if the result is an empty string.

A new option ``--pick`` is available for :ref:`clush <clush-tool>` and :ref:`nodeset <nodeset-tool>` to pick N node(s) at random from the resulting node set.

For more details, please have a look at `GitHub Issues for 1.7.2 milestone`_.

ClusterShell 1.7.2 is compatible with Python 2.4 up to Python 2.7 (for example: from RedHat EL5 to EL7). Upgrades from versions 1.6 or 1.7 are supported.

Version 1.7.1
^^^^^^^^^^^^^

This minor version contains a few bug fixes, mostly related to :ref:`guide-NodeSet`. This version also contains bug fixes and performance improvements in tree propagation mode.

For more details, please have a look at `GitHub Issues for 1.7.1 milestone`_.

ClusterShell 1.7.1 is compatible with Python 2.4 up to Python 2.7 (for example: from RedHat EL5 to EL7). Upgrades from versions 1.6 or 1.7 are supported.

Main changes in 1.7
^^^^^^^^^^^^^^^^^^^

This new version comes with a refreshed documentation, based on the Sphinx documentation generator, available on http://clustershell.readthedocs.org.

The main new features of version 1.7 are described below.

Multidimensional nodesets
"""""""""""""""""""""""""

The :class:`.NodeSet` class and the :ref:`nodeset <nodeset-tool>` command line tool have been improved to support multidimensional node sets with folding capability. An nD naming scheme is sometimes used to map node names to a physical location (for example, a name encoding rack and slot positions) or to a node position within the cluster interconnect network topology.

A first example of 3D nodeset expansion is a good way to start::

    $ nodeset -e gpu-[1,3]-[4-5]-[0-6/2]
    gpu-1-4-0 gpu-1-4-2 gpu-1-4-4 gpu-1-4-6 gpu-1-5-0 gpu-1-5-2 gpu-1-5-4
    gpu-1-5-6 gpu-3-4-0 gpu-3-4-2 gpu-3-4-4 gpu-3-4-6 gpu-3-5-0 gpu-3-5-2
    gpu-3-5-4 gpu-3-5-6

You've probably noticed the ``/2`` notation of the last dimension. It's called a step, behaves as one would expect, and is fully supported with nD nodesets. All other :ref:`nodeset <nodeset-tool>` commands and options are supported with nD nodesets.
For example, it's always useful to have a quick way to count the number of nodes in a nodeset::

    $ nodeset -c gpu-[1,3]-[4-5]-[0-6/2]
    16

Then, to show the most interesting new capability of the underlying :class:`.NodeSet` class in version 1.7, a folding example is probably appropriate::

    $ nodeset -f compute-1-[1-34] compute-2-[1-34]
    compute-[1-2]-[1-34]

In the above example, nodeset tries to find the most compact node set representation whenever possible. ClusterShell is probably the first and only cluster tool capable of doing such complex nodeset folding.

Note, however, that not all cluster tools support this kind of complex nodeset, even for nodeset expansion, so we added an ``--axis`` option to fold along some desired dimensions only::

    $ nodeset --axis 2 -f compute-[1-2]-[1-34]
    compute-1-[1-34],compute-2-[1-34]

The last dimension can also be selected using ``-1``::

    $ nodeset --axis -1 -f compute-[1-2]-[1-34]
    compute-1-[1-34],compute-2-[1-34]

All set-like operations are also supported with several dimensions, for example *difference* (``-x``)::

    $ nodeset -f c-[1-10]-[1-44] -x c-[5-10]-[1-34]
    c-[1-4]-[1-44],c-[5-10]-[35-44]

Hard to follow? Don't worry, ClusterShell does it for you!

File-based node groups
""""""""""""""""""""""

Cluster node groups have been a great success of previous versions of ClusterShell and are now widely adopted. So we worked on improving them even more for version 1.7.

For those of you who use the file ``/etc/clustershell/group`` to describe node groups, that file is still supported in 1.7, and an upgrade from your 1.6 setup should work just fine. However, for new 1.7 installations, we have put this file in a different location by default::

    $ vim /etc/clustershell/groups.d/local.cfg

Especially if you're starting a new setup, you also have the choice to switch to a more advanced YAML groups configuration file that can define multiple *sources* in a single file (equivalent to separate namespaces for node groups). The YAML format allows you to edit the file content with YAML tools, but it is also a file format convenient to edit just using the vim editor.

To enable the example file, you need to rename it first, as it needs to have the **.yaml** extension::

    $ cd /etc/clustershell/groups.d
    $ mv cluster.yaml.example cluster.yaml

You can make the first dictionary found in this file (named *roles*) the **default** source by changing ``default: local`` to ``default: roles`` in ``/etc/clustershell/groups.conf`` (the main config file for groups).
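To give an idea of the format, here is a short illustrative sketch of such a YAML group file (the source and group names below are hypothetical); each top-level dictionary defines a group source, mapping group names to node sets:

.. code-block:: yaml

    roles:
        adm: 'mgmt[1-2]'
        io: 'io[1-4]'
    rack1:
        compute: 'node[001-288]'

With such a file installed in ``/etc/clustershell/groups.d/``, ``@adm`` would resolve to ``mgmt[1-2]`` (assuming *roles* is the default source), and ``@rack1:compute`` to ``node[001-288]``.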
For more info about the YAML group files, please see :ref:`group-file-based`. Please also see :ref:`node groups configuration <groups-config>` for node groups configuration in general.

nodeset -L/--list-all option
""""""""""""""""""""""""""""

Additionally, the :ref:`nodeset <nodeset-tool>` command also has a new option ``-L`` or ``--list-all`` to list groups from all sources (``-l`` only lists groups from the **default** source). This can be useful when configuring ClusterShell and/or troubleshooting node group sources::

    $ nodeset -LL
    @adm example0
    @all example[2,4-5,32-159]
    @compute example[32-159]
    @gpu example[156-159]
    @io example[2,4-5]
    @racks:new example[4-5,156-159]
    @racks:old example[0,2,32-159]
    @racks:rack1 example[0,2]
    @racks:rack2 example[4-5]
    @racks:rack3 example[32-159]
    @racks:rack4 example[156-159]
    @cpu:hsw example[64-159]
    @cpu:ivy example[32-63]

Special group @*
""""""""""""""""

The special group syntax ``@*`` (or ``@source:*`` if using explicit source selection) has been added and can be used in configuration files or with command line tools. This special group is always available for file-based node groups (it returns the content of the **all** group, or all groups from the source otherwise). For external sources, it is available when either the **all** upcall is defined or both **map** and **list** upcalls are defined. The **all** special group is also used by ``clush -a`` and ``nodeset -a``. For example, the two following commands are equivalent::

    $ nodeset -a -f
    example[2,4-5,32-159]
    $ nodeset -f @*
    example[2,4-5,32-159]

Exec worker
"""""""""""

Version 1.7 introduces a new generic execution worker named :class:`.ExecWorker` as the new base class for most exec()-based worker classes. In practice with :ref:`clush-tool`, you can now specify the worker on the command line using ``--worker`` or ``-R`` and use **exec**. It also supports special placeholders for the node (**%h**) or rank (**%n**). For example, the following command will execute *ping* commands in parallel, each with a different host from *cs01* through *cs05* as an argument, and then aggregate the results::

    $ clush -R exec -w cs[01-05] -bL 'ping -c1 %h >/dev/null && echo ok'
    cs[01-04]: ok
    clush: cs05: exited with exit code 1

This feature allows the system administrator to use non-cluster-aware tools in a more efficient way. You may also want to explicitly set the fanout (using ``-f``) to limit the number of parallel local commands launched. Please see also :ref:`clush worker selection <clush-worker>`.

Rsh worker
""""""""""

Version 1.7 adds support for ``rsh`` or any of its variants, like ``krsh`` or ``mrsh``. ``rsh`` and ``ssh`` share a lot of common mechanisms, so Worker Rsh was added by moving a lot of Worker Ssh code into it. For ``clush``, please see :ref:`clush worker selection <clush-worker>` to enable ``rsh``. To use ``rsh`` by default instead of ``ssh`` at the library level, install the provided example file named ``defaults.conf-rsh`` to ``/etc/clustershell/defaults.conf``.

Tree Propagation Mode
"""""""""""""""""""""

The ClusterShell Tree Mode allows you to send commands to target nodes through a set of predefined gateways (using ssh by default). It can be useful to access servers that are behind some other servers, like bastion hosts, or to scale on very large clusters when the flat mode (eg. a sliding window of ssh commands) is not enough anymore.

The tree mode is now :ref:`documented <clush-tree>`; it has been improved and is enabled by default when a ``topology.conf`` file is found. While it is still a work in progress, the tree mode is known to work pretty well when all gateways are online. We'll continue to improve it and make it more robust in the next versions.

Configuration files
"""""""""""""""""""

When ``$CLUSTERSHELL_CFGDIR`` or ``$XDG_CONFIG_HOME`` are defined, ClusterShell will use them to search for additional configuration files. If ``$CLUSTERSHELL_CFGDIR`` is not defined, the global configuration files will be searched for in ``/etc/clustershell``.

PIP user installation support
"""""""""""""""""""""""""""""

ClusterShell 1.7 is now fully compatible with PIP and supports user configuration files::

    $ pip install --user clustershell

Please see :ref:`install-pip-user`.

.. _GitHub Issues for 1.7.1 milestone: https://github.com/cea-hpc/clustershell/issues?utf8=%E2%9C%93&q=is%3Aissue+milestone%3A1.7.1
.. _GitHub Issues for 1.7.2 milestone: https://github.com/cea-hpc/clustershell/issues?utf8=%E2%9C%93&q=is%3Aissue+milestone%3A1.7.2
.. _GitHub Issues for 1.7.3 milestone: https://github.com/cea-hpc/clustershell/issues?utf8=%E2%9C%93&q=is%3Aissue+milestone%3A1.7.3
.. _GitHub Issues for 1.8 milestone: https://github.com/cea-hpc/clustershell/issues?utf8=%E2%9C%93&q=is%3Aissue+milestone%3A1.8
.. _GitHub Issues for 1.8.1 milestone: https://github.com/cea-hpc/clustershell/issues?utf8=%E2%9C%93&q=is%3Aissue+milestone%3A1.8.1
.. _GitHub Issues for 1.8.2 milestone: https://github.com/cea-hpc/clustershell/issues?utf8=%E2%9C%93&q=is%3Aissue+milestone%3A1.8.2
.. _GitHub Issues for 1.8.3 milestone: https://github.com/cea-hpc/clustershell/issues?utf8=%E2%9C%93&q=is%3Aissue+milestone%3A1.8.3
.. _GitHub Issues for 1.8.4 milestone: https://github.com/cea-hpc/clustershell/issues?utf8=%E2%9C%93&q=is%3Aissue+milestone%3A1.8.4
.. _GitHub Issues for 1.9 milestone: https://github.com/cea-hpc/clustershell/issues?utf8=%E2%9C%93&q=is%3Aissue+milestone%3A1.9
.. _GitHub Issues for 1.9.1 milestone: https://github.com/cea-hpc/clustershell/issues?q=milestone%3A1.9.1
.. _GitHub Issues for 1.9.2 milestone: https://github.com/cea-hpc/clustershell/issues?q=milestone%3A1.9.2
.. _LGPL v2.1+: https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
.. _CeCILL-C V1: http://www.cecill.info/licences/Licence_CeCILL-C_V1-en.html
.. _xCAT: https://xcat.org/

.. _clubak-tool:

clubak
------

.. highlight:: console

Overview
^^^^^^^^

*clubak* is another utility provided with the ClusterShell library; it tries to gather and sort dsh-like output such as::

    node17: MD5 (cstest.py) = 62e23bcf2e11143d4875c9826ef6183f
    node14: MD5 (cstest.py) = 62e23bcf2e11143d4875c9826ef6183f
    node16: MD5 (cstest.py) = e88f238673933b08d2b36904e3a207df
    node15: MD5 (cstest.py) = 62e23bcf2e11143d4875c9826ef6183f

If *file* content is made of such output, you get the following result::

    $ clubak -b < file
    ---------------
    node[14-15,17] (3)
    ---------------
    MD5 (cstest.py) = 62e23bcf2e11143d4875c9826ef6183f
    ---------------
    node16
    ---------------
    MD5 (cstest.py) = e88f238673933b08d2b36904e3a207df

Or with the ``-L`` display option to disable the header block::

    $ clubak -bL < file
    node[14-15,17]: MD5 (cstest.py) = 62e23bcf2e11143d4875c9826ef6183f
    node16: MD5 (cstest.py) = e88f238673933b08d2b36904e3a207df

Indeed, *clubak* formats text from standard input containing lines of the form *node: output*. It is fully backward compatible with *dshbak(1)* available with *pdsh* but provides additional features. For instance, *clubak* always displays its results sorted by node/nodeset.

But you do not need to execute *clubak* when using *clush*, as all output formatting features are already included in *clush* (see *clush -b / -B / -L* examples, :ref:`clush-oneshot`). There are several advantages to having *clubak* features included in *clush*: for example, it is possible, with *clush*, to still get partial results when interrupted during command execution (eg. with *Control-C*), something not possible by just piping commands together.

Most *clubak* options are the same as *clush*'s. For instance, to try to resolve node groups in results, use ``-r, --regroup``::

    $ clubak -br < file

Like *clush*, *clubak* uses the :mod:`ClusterShell.MsgTree` module of the ClusterShell library.
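To give a feel for what :mod:`ClusterShell.MsgTree` does under the hood, here is a minimal, hedged sketch (the keys and messages are made up) that aggregates identical messages the way *clubak* does:

.. code-block:: python

    from ClusterShell.MsgTree import MsgTree

    tree = MsgTree()
    tree.add("node14", b"MD5 = 62e2...")
    tree.add("node16", b"MD5 = e88f...")
    tree.add("node15", b"MD5 = 62e2...")

    # walk() yields each distinct message along with the keys
    # (here, node names) that produced it
    for msg, keys in tree.walk():
        print("%s: %s" % (",".join(keys), msg.message().decode()))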
Tree trace mode (-T)
^^^^^^^^^^^^^^^^^^^^

A special option ``-T, --tree``, only available with *clubak*, can switch on :class:`.MsgTree` trace mode (all keys/nodes are kept for each message element of the tree, thus allowing special output display). This mode was first added to replace *padb* [#]_ in some cases, to display a whole cluster job digested backtrace. For example::

    $ cat trace_test
    node3: first_func()
    node1: first_func()
    node2: first_func()
    node5: first_func()
    node1: second_func()
    node4: first_func()
    node3: bis_second_func()
    node2: second_func()
    node5: second_func()
    node4: bis_second_func()
    $ cat trace_test | clubak -TL
    node[1-5]: first_func()
    node[1-2,5]: second_func()
    node[3-4]: bis_second_func()

.. [#] *padb*, a parallel application debugger (http://padb.pittman.org.uk/)

.. _ticket #166: https://github.com/cea-hpc/clustershell/issues/166
.. _ticket: https://github.com/cea-hpc/clustershell/issues/new

.. _cluset-tool:

cluset
------

.. highlight:: console

The *cluset* command is the same as :ref:`nodeset-tool`; it was added in ClusterShell 1.7.3 to avoid a conflict with xCAT's nodeset command.

.. _clush-tool:

clush
-------

.. highlight:: console

*clush* is a program for executing commands in parallel on a cluster and for gathering their results. It can execute commands interactively or can be used within shell scripts and other applications. It is a partial front-end to the :class:`.Task` class of the ClusterShell library (cf. :ref:`class-Task`). *clush* currently makes use of the Ssh worker of ClusterShell, which only requires *ssh(1)* (we tested with the OpenSSH SSH client).

Some features of the *clush* command line tool are:

* two modes of parallel cluster command execution:

  + :ref:`flat mode <clush-flat>`: sliding window of local or remote (eg. *ssh(1)*) commands
  + :ref:`tree mode <clush-tree>`: commands propagated to the targets through a tree of pre-configured gateways; gateways then use a sliding window of local or *ssh(1)* commands to reach the targets (if the target count per gateway is greater than the :ref:`fanout <clush-fanout>` value)

* smart display of command results (integrated output gathering, sorting by node, nodeset or node groups)
* standard input redirection to remote nodes
* files copying in parallel
* *pdsh* [#]_ options backward compatibility

*clush* can be started non-interactively to run a shell command, or can be invoked as an interactive shell. Both modes are discussed here (see :ref:`clush-oneshot` and :ref:`clush-interactive`).

Target and filter nodes
^^^^^^^^^^^^^^^^^^^^^^^

*clush* offers different ways to select or filter target nodes through command line options or files containing a list of hosts.

Command line options
""""""""""""""""""""

The ``-w`` option allows you to specify remote hosts by using the ClusterShell :class:`.NodeSet` syntax, including the node groups *@group* special syntax (cf. :ref:`nodeset-groupsexpr`) and the Extended String Patterns syntax (see :ref:`class-NodeSet-extended-patterns`) to benefit from :class:`.NodeSet` basic arithmetic (like ``@Agroup&@Bgroup``).
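As an illustrative sketch (the group names are hypothetical), the intersection operator ``&`` can be used to target only the nodes that belong to both groups; remember to quote the expression so that the shell does not interpret ``&``::

    $ clush -b -w '@rack1&@compute' uname -r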
Additionally, the ``-x`` option allows you to exclude nodes from the remote hosts list (the same NodeSet syntax can be used here). Node exclusion has priority over node addition.

Using node groups
"""""""""""""""""

If you have ClusterShell :ref:`node groups <groups-config>` configured on your cluster, any node group syntax may be used in place of nodes for ``-w`` as well as ``-x``. For example::

    $ clush -w @rhel6 cat /proc/loadavg
    node26: 0.02 0.01 0.00 1/202 23042

For *pdsh* backward compatibility, *clush* supports the two options ``-g`` and ``-X`` to respectively select and exclude node group(s), specified by omitting the *"@"* group prefix (see example below). In general, though, it is advised to use the *@*-prefixed group syntax, as the non-prefixed notation is only recognized by *clush* and not by other tools like *nodeset*. For example::

    $ clush -g rhel6 cat /proc/loadavg
    node26: 0.02 0.01 0.00 1/202 23033

.. _clush-all-nodes:

Selecting all nodes
"""""""""""""""""""

The special option ``-a`` (without argument) can be used to select **all** nodes, in the sense of ClusterShell node groups (see :ref:`node groups configuration <groups-config>` for more details on the special **all** external shell command upcall). If not properly configured, the ``-a`` option may lead to a runtime error like::

    clush: External error: Not enough working external calls (all, or map + list) defined to get all node

.. _clush-pick:

Picking node(s) at random
"""""""""""""""""""""""""

Use ``--pick`` with the maximum number of nodes you wish to pick randomly from the targeted node set. **clush** will then run only on the selected node(s). The following example will run a script on a single random node picked from the ``@compute`` group::

    $ clush -w @compute --pick=1 ./nonreg-single-client-fs-io.sh

Host files
""""""""""

The option ``--hostfile`` (or ``--machinefile``) may be used to specify a path to a file containing a list of single hosts, node sets or node groups, separated by spaces or newlines. It may be specified multiple times (once per file). For example::

    $ clush --hostfile ./host_file -b systemctl is-enabled httpd

This option has been added for backward compatibility with other parallel shell tools. Indeed, ClusterShell provides a preferred way to provision node sets from node group sources and flat files to all cluster tools using :class:`.NodeSet` (including *clush*). Please see :ref:`node groups configuration <groups-config>`.

.. note:: Use ``--debug`` or ``-d`` to see the resulting node sets from host files.

.. _clush-flat:

Flat execution mode
^^^^^^^^^^^^^^^^^^^

The default execution mode is to launch commands (local or remote) in parallel, up to a certain limit fixed by the :ref:`fanout <clush-fanout>` value, which is the number of child processes allowed to run at a time. This "sliding window" of active commands is a common technique used on large clusters to conserve resources on the initiating host, while allowing some commands to time out. If used with *ssh(1)*, this effectively limits the number of concurrent ssh connections.

.. _clush-fanout:

Fanout (sliding window)
"""""""""""""""""""""""

The ``--fanout`` (or ``-f``) option of **clush** allows the user to change the default *fanout* value defined in :ref:`clush.conf <clush-config>`, or in the :ref:`library defaults <defaults-config>` if not specified. Indeed, it is sometimes useful to change the fanout value for a specific command, for example to avoid flooding a remote service with concurrent requests generated by that command.
The following example will launch up to ten *puppet* commands at a time on the node group named *@compute*::

    $ clush -w @compute -f 10 puppet agent -t

If the fanout value is set to 1, commands are executed sequentially::

    $ clush -w node[40-42] -f 1 'date +%s; sleep 1'
    node40: 1505366138
    node41: 1505366139
    node42: 1505366140

.. _clush-tree:

Tree execution mode
^^^^^^^^^^^^^^^^^^^

ClusterShell's tree execution mode is a major horizontal scalability improvement, providing a hierarchical command propagation scheme. The Tree mode of ClusterShell has been the subject of `this paper`_ presented at the Ottawa Linux Symposium Conference in 2012 and at the PyHPC 2013 workshop in Denver, USA.

.. highlight:: text

The diagram below illustrates the hierarchical command propagation principle with a head node, gateways (GW) and target nodes::

                   .-----------.
                   | Head node |
                   '-----------'
                        /|\
          .------------' | '--.-----------.
         /               |     \           \
     .-----.          .-----.   \        .-----.
     | GW1 |          | GW2 |    \       | GW3 |
     '-----'          '-----'     \      '-----'
       /|\              /|\        \       |\
    .-' | '-.        .-' | '-.      \      | '---.
   /    |    \      /    |    \      \     |      \
 .---. .---. .---. .---. .---. .---.  .---. .---. .-----.
 '---' '---' '---' '---' '---' '---'  '---' '---' | GW4 |
               target nodes                       '-----'
                                                     |
                                                    ...

The Tree mode is implemented at the library level, so that all applications using ClusterShell may benefit from it. However, this section describes how to use the tree mode with the **clush** command only.

.. _clush-tree-enabling:

Configuration
"""""""""""""

The system-wide library configuration file **/etc/clustershell/topology.conf** defines the available/preferred routes for the command propagation tree. It is recommended that all connections between parent and child nodes be carefully pre-configured, for example, to avoid any SSH warnings when connecting (if using the default SSH remote worker, of course).

.. highlight:: ini

The file **topology.conf** defines a set of routes under a ``[routes]`` section. Think of it as a routing table, but for cluster commands. Node sets should be used when possible, for example::

    [routes]
    rio0: rio[10-13]
    rio[10-11]: rio[100-240]
    rio[12-13]: rio[300-440]

.. highlight:: text

The example above defines the following topology graph::

    rio0
    |- rio[10-11]
    |  `- rio[100-240]
    `- rio[12-13]
       `- rio[300-440]

:ref:`nodeset-groups` and :ref:`node-wildcards` are supported in **topology.conf**, but any route definition with an empty node set is ignored (a message is printed in debug mode in that case).

At runtime, ClusterShell will pick an initial propagation tree from this topology graph definition and the current root node. Multiple admin/root nodes may be defined in the file.

.. note:: The algorithm used in Tree mode does not rely on gateway system hostnames anymore. In topology.conf, just use the hosts or aliases needed to connect to each node.

.. highlight:: console

Enabling tree mode
""""""""""""""""""

Since version 1.7, the tree mode is enabled by default when a configuration file is present. When the configuration file **/etc/clustershell/topology.conf** exists, *clush* will use it by default for target nodes that are defined there. The topology file path can be changed using the ``--topology`` command line option.

.. note:: If using ``clush -d`` (debug option), clush will display an ASCII representation of the initial propagation tree used. This is useful when working on Tree mode configuration.
Enabling tree mode should be as transparent as possible to the end user. Most **clush** options, including options defined in :ref:`clush.conf <clush-config>` or specified using ``-O`` or ``-o`` (ssh options), are propagated to the gateways and taken into account there.

.. _clush-tree-options:

Tree mode specific options
""""""""""""""""""""""""""

The ``--remote=yes|no`` command line option controls the remote execution behavior:

* The default is **yes**, which makes *clush* establish connections up to the leaf nodes using a *distant worker* like *ssh*.
* Changing it to **no** will make *clush* establish connections up to the leaf parent nodes only; the commands are then executed locally on the gateways (as they would be with ``--worker=exec`` on the gateways themselves).

This execution mode allows users to schedule remote commands on gateways that take a node as an argument. On large clusters, this is useful to spread the load and resource usage of one-shot monitoring, IPMI, or other commands across gateways. A simple example of use is::

    $ clush -w node[100-199] --remote=no /usr/sbin/ipmipower -h %h-ipmi -s

This command is also valid if you don't have any tree configured, because in that case, ``--remote=no`` is an alias of the ``--worker=exec`` worker.

The ``--grooming`` command line option allows users to change the grooming delay (float, in seconds). This feature allows gateways to aggregate responses received within a certain timeframe before transmitting them back to the root node in a batch fashion. This contributes to reducing the load on the root node by delegating the first steps of this CPU-intensive task to the gateways.

.. _clush-tree-fanout:

Fanout considerations
"""""""""""""""""""""

ClusterShell uses a "sliding window" or *fanout* of processes to avoid too many concurrent connections and to conserve resources on the initiating hosts. See :ref:`clush-flat` for more details about this.

In tree mode, the same *fanout* value is used on the head node and on each gateway. That is, if the *fanout* is **16**, each gateway will initiate up to **16** connections to their target nodes at the same time.

.. note:: This is likely to **change** in the future, as it makes the *fanout* behaviour differ depending on whether you are using tree mode or not. For example, some administrators are using a *fanout* value of 1 to "sequentialize" a command on the cluster. In tree mode, please note that in that case, each gateway will be able to run a command at the same time.

.. _clush-tree-python:

Remote Python executable
""""""""""""""""""""""""

You must use the same major version of Python on the gateways and the root node. By default, the same Python executable name as the one used on the root node will be used to launch the gateways, that is, ``python`` or ``python3`` (using a relative path for added flexibility). You may override the selection of the remote Python interpreter by defining the following environment variable::

    $ export CLUSTERSHELL_GW_PYTHON_EXECUTABLE=/path/to/python3

.. note:: It is highly recommended to have the same Python interpreter installed on all gateways and the root node.

Debugging Tree mode
"""""""""""""""""""

To debug Tree mode, you can define the following environment variables before running **clush** (or any other application using ClusterShell)::

    $ export CLUSTERSHELL_GW_LOG_LEVEL=DEBUG   (default value is INFO)
    $ export CLUSTERSHELL_GW_LOG_DIR=/tmp      (default value is /tmp)

This will generate log files of the form ``$HOSTNAME.gw.log`` in ``CLUSTERSHELL_GW_LOG_DIR``.

.. _clush-oneshot:
Non-interactive (or one-shot) mode
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When *clush* is started non-interactively, the command is executed on the specified remote hosts in parallel (given the current *fanout* value and the number of commands to execute; see the *fanout* library settings in :ref:`class-Task-configure`).

.. _clush-gather:

Output gathering options
""""""""""""""""""""""""

If option ``-b`` or ``--dshbak`` is specified, *clush* waits for command completion while displaying a :ref:`progress indicator <clush-progress>` and then displays gathered output results. If standard output is redirected to a file, *clush* detects it and disables any progress indicator.

.. warning:: *clush* will only consolidate identical command outputs if the command return codes are also the same.

The following is a simple example of a *clush* command used to execute ``uname -r`` on *node40*, *node41* and *node42*, wait for their completion and finally display digested output results::

    $ clush -b -w node[40-42] uname -r
    ---------------
    node[40-42]
    ---------------
    2.6.35.6-45.fc14.x86_64

It is common to cancel such command execution because a node is hung. When using *pdsh* and *dshbak*, due to the pipe, all node output will be lost, even if all nodes have successfully run the command. When you hit CTRL-C with *clush*, the task is canceled but received output is not lost::

    $ clush -b -w node[1-5] uname -r
    Warning: Caught keyboard interrupt!
    ---------------
    node[2-4] (3)
    ---------------
    2.6.31.6-145.fc11
    ---------------
    node5
    ---------------
    2.6.18-164.11.1.el5
    Keyboard interrupt (node1 did not complete).

.. _clush-diff:

Performing *diff* of cluster-wide outputs
"""""""""""""""""""""""""""""""""""""""""

Since version 1.6, you can use the ``--diff`` *clush* option to show differences between common outputs. This feature is implemented using `Python unified diff`_. This special option implies ``-b`` (gather common stdout outputs), so you don't need to specify it. Example::

    $ clush -w node[40-42] --diff dmidecode -s bios-version
    --- node[40,42] (2)
    +++ node41
    @@ -1,1 +1,1 @@
    -1.0.5S56
    +1.1c

A nodeset is automatically selected as the "reference nodeset" according to these criteria:

#. lowest command return code (to discard failed commands)
#. largest nodeset with the same output result
#. otherwise the first nodeset is taken (ordered (1) by name and (2) lowest range indexes)

.. _clush-outdir:

Saving output in files
""""""""""""""""""""""

To save the standard output (stdout) and/or error (stderr) of all remote commands to local files identified with the node name in a given directory, use the options ``--outdir`` and/or ``--errdir``. Any directory that doesn't exist will be automatically created. These options provide similar functionality to *pssh(1)*. For example, to save all logs from *journalctl(1)* in a local directory ``/tmp/run1/stdout``, you could use::

    $ clush -w node[40-42] --outdir=/tmp/run1/stdout/ journalctl >/dev/null

Standard input bindings
"""""""""""""""""""""""

Unless the option ``--nostdin`` (or ``-n``) is specified, *clush* detects when its standard input is connected to a terminal (as determined by *isatty(3)*). If actually connected to a terminal, *clush* listens to standard input when commands are running, waiting for an Enter key press. Doing so will display the status of current nodes.
If standard input is not connected to a terminal, and unless the option ``--nostdin`` (or ``-n``) is specified, *clush* binds the standard input of the remote commands to its own standard input, allowing scripting methods like::

    $ echo foo | clush -w node[40-42] -b cat
    ---------------
    node[40-42]
    ---------------
    foo

Another stdin-bound *clush* usage example::

    $ ssh node10 'ls /etc/yum.repos.d/*.repo' | clush -w node[11-14] -b xargs ls
    ---------------
    node[11-14] (4)
    ---------------
    /etc/yum.repos.d/cobbler-config.repo

.. note:: Use ``--nostdin`` (or ``-n``) in the same way you would use ``ssh -n`` to disable standard input. Indeed, if this option is set, EOF is sent at first read, as if stdin were actually connected to /dev/null.

.. _clush-progress:

Progress indicator
""""""""""""""""""

In :ref:`output gathering mode <clush-gather>`, *clush* will display a live progress indicator as a simple but convenient way to follow the completion of parallel commands. It can be disabled just by using the ``-q`` or ``--quiet`` options. The progress indicator will appear after 1 to 2 seconds and should look like this::

    clush: <completed>/<total>

If writing is performed to *clush* standard input, like in ``command | clush``, the live progress indicator will display the global bandwidth of data written to the target nodes.

Finally, the special option ``--progress`` can be used to force the display of the live progress indicator. Using this option may interfere with some command outputs, but it can be useful when using stdin while remote commands are silent. As an example, the following command will copy a local file to node[1-3] and display the global write bandwidth to the target nodes::

    $ dd if=/path/to/local/file | clush -w node[1-3] --progress 'dd of=/path/to/remote/file'
    clush: 0/3 write: 212.27 MiB/s

.. _clush-interactive:

Interactive mode
^^^^^^^^^^^^^^^^

If a command is not specified, *clush* runs interactively. In this mode, *clush* uses the *GNU readline* library to read command lines from the terminal. *Readline* provides commands for searching through the command history for lines containing a specified string. For instance, you can type *Control-R* to search in the history for the next entry matching the search string typed so far.

Single-character interactive commands
"""""""""""""""""""""""""""""""""""""

*clush* also recognizes special single-character prefixes that allow the user to see and modify the current nodeset (the nodes where the commands are executed).
These single-character interactive commands are detailed below:

+------------------------------+-----------------------------------------------+
| Interactive special commands | Comment                                       |
+==============================+===============================================+
| ``clush> ?``                 | show current nodeset                          |
+------------------------------+-----------------------------------------------+
| ``clush> +<NODESET>``        | add nodes to current nodeset                  |
+------------------------------+-----------------------------------------------+
| ``clush> -<NODESET>``        | remove nodes from current nodeset             |
+------------------------------+-----------------------------------------------+
| ``clush> @<NODESET>``        | set current nodeset                           |
+------------------------------+-----------------------------------------------+
| ``clush> !<COMMAND>``        | execute ``<COMMAND>`` on the local system     |
+------------------------------+-----------------------------------------------+
| ``clush> =``                 | toggle the output format (gathered or         |
|                              | standard mode)                                |
+------------------------------+-----------------------------------------------+

To leave an interactive session, type ``quit`` or *Control-D*. As of version 1.6, it is not possible to cancel a command while staying in a *clush* interactive session: for instance, *Control-C* is not supported and will abort the current *clush* interactive command (see `ticket #166`_).

Example of a *clush* interactive session::

    $ clush -w node[11-14] -b
    Enter 'quit' to leave this interactive mode
    Working with nodes: node[11-14]
    clush> uname
    ---------------
    node[11-14] (4)
    ---------------
    Linux
    clush> !pwd
    LOCAL: /root
    clush> -node[11,13]
    Working with nodes: node[12,14]
    clush> uname
    ---------------
    node[12,14] (2)
    ---------------
    Linux
    clush>

The interactive mode and the commands described above are subject to change and improvements in future releases. Feel free to open an enhancement `ticket`_ if you use the interactive mode and have some suggestions.

.. _clush-copy:

File copying mode
^^^^^^^^^^^^^^^^^

When *clush* is started with the ``-c`` or ``--copy`` option, it will attempt to copy the specified files and/or directories to the provided cluster nodes. The ``--dest`` option can be used to specify a single path where all the file(s) should be copied to on the target nodes. In the absence of ``--dest``, *clush* will attempt to copy each file or directory found on the command line to the same location on the target nodes.

Here are some examples of file copying with *clush*::

    $ clush -v -w node[11-12] --copy /tmp/foo
    `/tmp/foo' -> node[11-12]:`/tmp'

    $ clush -v -w node[11-12] --copy /tmp/foo /tmp/bar
    `/tmp/bar' -> node[11-12]:`/tmp'
    `/tmp/foo' -> node[11-12]:`/tmp'

    $ clush -v -w node[11-12] --copy /tmp/foo --dest /var/tmp/
    `/tmp/foo' -> node[11-12]:`/var/tmp/'

.. note:: To copy a file to nodes under a different user, use the ``--user=$USER`` option and **NOT** ``$USER@node[11-12]``.

Reverse file copying mode
^^^^^^^^^^^^^^^^^^^^^^^^^

When *clush* is started with the ``--rcopy`` option, it will attempt to retrieve the specified files and/or directories from the provided cluster nodes. If the ``--dest`` option is specified, it must be a directory path where the files will be stored with their hostname appended. If the destination path is not specified, it will take each file or directory's parent directory as the local destination, for example::

    $ clush -v -w node[11-12] --rcopy /tmp/foo
    node[11-12]:`/tmp/foo' -> `/tmp'

    $ ls /tmp/foo.*
    /tmp/foo.node11 /tmp/foo.node12

.. _clush-modes:
Run modes
^^^^^^^^^

Since version 1.9, *clush* has support for run modes, which are special :ref:`clush.conf <clush-config>` settings with a given name. See :ref:`run mode configuration ` for more details on how to install a run mode. This section describes how to use the run modes from the provided example files.

To use an installed run mode, just use the ``--mode`` or ``-m`` command line option followed by the mode name (eg. ``sudo``, ``sshpass``, etc.).

.. _clush-sshpass:

Run mode: sshpass
"""""""""""""""""

Since version 1.9, *clush* has support for password-based ssh authentication, implemented thanks to the external `sshpass`_ tool and the provided sshpass run mode example. When using this run mode, you will be prompted for a password that will be forwarded to sshpass. This can be convenient, for example, in a new environment to install ssh keys on a large number of servers.

Make sure you have *sshpass(1)* installed on your operating system and install the sshpass run mode by creating ``sshpass.conf`` in ``clush.conf.d``::

    $ cd /etc/clustershell/clush.conf.d   # or $CLUSTERSHELL_CFGDIR/clush.conf.d
    $ cp sshpass.conf.example sshpass.conf

Then, run *clush* with ``--mode=sshpass`` (or ``-m sshpass``) to activate this run mode. You will be prompted for a password that will be forwarded on stdin to sshpass to authenticate your ssh workers. The following example shows how to check the date on four servers with password-based ssh authentication::

    $ clush -w n[1-2]c[01-02] --mode=sshpass -b date
    Password:
    ---------------
    n[1-2]c[01-02] (4)
    ---------------
    Thu Nov 17 16:08:04 PST 2022

The following example shows how to install an ``authorized_keys`` file with the :ref:`clush-copy` and password-based ssh authentication on four nodes::

    $ clush -w n[1-2]c[01-02] -m sshpass -v --copy ~/authorized_keys --dest ~/.ssh/authorized_keys
    [sshpass] run mode activated
    [sshpass] password prompt enabled
    Password:
    `/home/user/authorized_keys' -> n[1-2]c[01-02]:`/home/user/.ssh/authorized_keys'

.. _clush-sudo:

Run mode: sudo
""""""""""""""

Since version 1.9, *clush* has support for `sudo`_ password forwarding over stdin. This may be useful in an environment that only allows sysadmins to perform interactive *sudo* work with a password.

.. warning:: In this section, it is assumed that *sudo* always requires a password for the user on the target nodes. If *sudo* does NOT require any password (i.e. **NOPASSWD** is specified in your sudoers file), you do not need any extra options to run your *sudo* commands with *clush*.

Make sure you have *sudo(8)* installed on your operating system. Then install the sudo run mode by creating ``sudo.conf`` in ``clush.conf.d``::

    $ cd /etc/clustershell/clush.conf.d   # or $CLUSTERSHELL_CFGDIR/clush.conf.d
    $ cp sudo.conf.example sudo.conf

Then, run *clush* with ``--mode=sudo`` (or ``-m sudo``) to **enable a password prompt** to type your *sudo* password, after which *sudo* (well, the ``command_prefix`` from the sudo run mode – see below) will be used to run your commands on the target nodes. The password is broadcast to all target nodes over *ssh(1)* (or via your :ref:`favorite worker <clush-worker>`) and as such, must be the same on all target nodes. It is not stored on disk at any time and is only kept in memory for the duration of the *clush* command. Thus, the password will be prompted for every time you run *clush*. When you start *clush* in :ref:`interactive mode <clush-interactive>` along with ``--mode=sudo``, you can run multiple commands in that mode without having to type your password every time.
When ``--mode=sudo`` is used, *clush* will run *sudo* for you on each target node, so your command itself should NOT start with ``sudo``. The actual *sudo* command used by *clush* can be changed in ``clush.conf.d/sudo.conf`` or on the command line using ``-O command_prefix="..."``. The configured ``command_prefix`` must be able to read a password on stdin followed by a new line (which is what ``sudo -S`` does). Usage example:: $ clush -w n[1-2]c[01-02] --mode=sudo -b id Password: --------------- n[1-2]c[01-02] (4) --------------- uid=0(root) gid=0(root) groups=0(root) Other options ^^^^^^^^^^^^^ Overriding clush.conf settings """""""""""""""""""""""""""""" *clush* default settings are found in a configuration described in :ref:`clush configuration `. To override any settings, use the ``--option`` command line option (or ``-O`` for the shorter version), and repeat as needed. Here is a simple example to disable the use of colors in the output nodeset header:: $ clush -O color=never -w node[11-12] -b echo ok --------------- node[11-12] (2) --------------- ok The NO_COLOR, CLICOLOR_FORCE and CLICOLOR environment variables can also be used to change the way *clush* uses colors to display messages. .. _clush-worker: Worker selection """""""""""""""" By default, *clush* uses the default library worker configuration when running commands or copying files. In most cases, this is *ssh* (see :ref:`task-default-worker` for default worker selection). Worker selection can be performed at runtime thanks to the ``--worker`` command line option (or ``-R`` for the shorter version, in order to be compatible with the *pdsh* remote command selection option):: $ clush -w node[11-12] --worker=rsh echo ok node11: ok node12: ok By default, ClusterShell supports the following worker identifiers: * **exec**: this local worker supports parallel command execution, doesn't rely on any external tool and provides the command line placeholders described below: * ``%h`` and ``%host`` are substituted with each *target hostname* * ``%hosts`` is substituted with the full *target nodeset* * ``%n`` and ``%rank`` are substituted with the remote *rank* (0 to n-1) For example, the following would request the exec worker to locally run multiple *ipmitool* commands across the hosts foo[0-10] and automatically aggregate output results (-b):: $ clush -R exec -w foo[0-10] -b ipmitool -H %h-ipmi chassis power status --------------- foo[0-10] (11) --------------- Chassis Power is on * **rsh**: remote worker based on *rsh* * **ssh**: remote worker based on *ssh* (default) * **pdsh**: remote worker based on *pdsh* that requires *pdsh* to be installed; doesn't provide write support (eg. you cannot ``cat file | clush --worker pdsh``); it is primarily a 1-to-n worker example. Worker modules distributed outside of ClusterShell are also supported by specifying the case-sensitive full Python module name of a worker module.
.. [#] LLNL parallel remote shell utility (https://computing.llnl.gov/linux/pdsh.html) .. _seq(1): http://linux.die.net/man/1/seq .. _Python unified diff: http://docs.python.org/library/difflib.html#difflib.unified_diff .. _ticket #166: https://github.com/cea-hpc/clustershell/issues/166 .. _ticket: https://github.com/cea-hpc/clustershell/issues/new .. _this paper: https://www.kernel.org/doc/ols/2012/ols2012-thiell.pdf .. _sshpass: http://sshpass.sourceforge.net/ .. _sudo: https://www.sudo.ws/ .. _tools: Tools ===== Three Python scripts using the ClusterShell library are provided with the distribution: * `cluset` or `nodeset`, both are the same tool to manage cluster node sets and groups, * `clush`, a powerful parallel command execution tool with output gathering, * `clubak`, a tool to gather and display results from clush/pdsh-like output (and more). .. toctree:: :maxdepth: 2 nodeset cluset clush clubak .. _nodeset-tool: nodeset ------- .. highlight:: console The *nodeset* command enables easy manipulation of node sets, as well as node groups, at the command line level. As it is very user-friendly and efficient, the *nodeset* command can quickly improve traditional cluster shell scripts. It is also full-featured as it provides most of the :class:`.NodeSet` and :class:`.RangeSet` class methods (see also :ref:`class-NodeSet`, and :ref:`class-RangeSet`). Most of the examples in this section are using simple indexed node sets, however, *nodeset* supports multidimensional node sets, like *dc[1-2]n[1-99]*, introduced in version 1.7 (see :ref:`class-RangeSetND` for more info). This section will guide you through the basics and also more advanced features of *nodeset*. Usage basics ^^^^^^^^^^^^ One exclusive command must be specified to *nodeset*, for example:: $ nodeset --expand node[13-15,17-19] node13 node14 node15 node17 node18 node19 $ nodeset --count node[13-15,17-19] 6 $ nodeset --fold node1-ipmi node2-ipmi node3-ipmi node[1-3]-ipmi Commands with inputs """""""""""""""""""" Some *nodeset* commands require input (eg. node names, node sets or node groups), and some only give output. The following table shows commands that require some input: +-------------------+--------------------------------------------------------+ | Command | Description | +===================+========================================================+ | ``-c, --count`` | Count and display the total number of nodes in node | | | sets or/and node groups. | +-------------------+--------------------------------------------------------+ | ``-e, --expand`` | Expand node sets or/and node groups as unitary node | | | names separated by current separator string (see | | | ``--separator`` option described in | | | :ref:`nodeset-commands-formatting`). | +-------------------+--------------------------------------------------------+ | ``-f, --fold`` | Fold (compact) node sets or/and node groups into one | | | set of nodes (by previously resolving any groups). The | | | resulting node set is guaranteed to be free from node | | | groups (see ``--regroup`` below if you want to resolve | | | node groups in the result). Please note that folding | | | may be time consuming for multidimensional node sets. | +-------------------+--------------------------------------------------------+ | ``-r, --regroup`` | Fold (compact) node sets or/and node groups into one | | | set of nodes using node groups whenever possible (by | | | previously resolving any groups). | | | See :ref:`nodeset-groups`.
| +-------------------+--------------------------------------------------------+ There are three ways to give some input to the *nodeset* command: * from command line arguments, * from standard input (enabled when no arguments are found on command line), * from both command line and standard input, by using the dash special argument *"-"* meaning you need to use stdin instead. The following example illustrates the three ways to feed *nodeset*:: $ nodeset -f node1 node6 node7 node[1,6-7] $ echo node1 node6 node7 | nodeset -f node[1,6-7] $ echo node1 node6 node7 | nodeset -f node0 - node[0-1,6-7] Furthermore, *nodeset*'s standard input reader is able to process multiple lines and multiple node sets or groups per line. The following example shows a simple use case:: $ mount -t nfs | cut -d':' -f1 nfsserv1 nfsserv2 nfsserv3 $ mount -t nfs | cut -d':' -f1 | nodeset -f nfsserv[1-3] Other usage examples of *nodeset* below show how it can be useful to provide node sets from standard input (*sinfo* is a SLURM [#]_ command to view nodes and partitions information and *sacct* is a command to display SLURM accounting data):: $ sinfo -p cuda -o '%N' -h node[156-159] $ sinfo -p cuda -o '%N' -h | nodeset -e node156 node157 node158 node159 $ for node in $(sinfo -p cuda -o '%N' -h | nodeset -e); do sacct -a -N $node > /tmp/cudajobs.$node; done Previous rules also apply when working with node groups, for example when using ``nodeset -r`` reading from standard input (and a matching group is found):: $ nodeset -f @gpu node[156-159] $ sinfo -p cuda -o '%N' -h | nodeset -r @gpu Most commands described in this section produce output results that may be formatted using ``--output-format`` and ``--separator`` which are described in :ref:`nodeset-commands-formatting`. Commands with no input """""""""""""""""""""" The following table shows all other commands that are supported by *nodeset*. These commands don't support any input (like node sets), but can still recognize options as specified below. +--------------------+-----------------------------------------------------+ | Command w/o input | Description | +====================+=====================================================+ | ``-l, --list`` | List node groups from selected *group source* as | | | specified with ``-s`` or ``--groupsource``. If | | | not specified, node groups from the default *group | | | source* are listed (see :ref:`groups configuration | | | ` for default *group source* | | | configuration). | +--------------------+-----------------------------------------------------+ | ``--groupsources`` | List all configured *group sources*, one per line, | | | as configured in *groups.conf* (see | | | :ref:`groups configuration `). | | | The default *group source* is appended with | | | `` (default)``, unless the ``-q``, ``--quiet`` | | | option is specified. This command is mainly here to | | | avoid reading any configuration files, or to check | | | if all work fine when configuring *group sources*. | +--------------------+-----------------------------------------------------+ .. _nodeset-commands-formatting: Output result formatting """""""""""""""""""""""" When using the expand command (``-e, --expand``), a separator string is used when displaying results. The option ``-S``, ``--separator`` allows you to modify it. The specified string is interpreted, so that you can use special characters as separator, like ``\n`` or ``\t``. The default separator is the space character *" "*. 
This is an example showing such separator string change:: $ nodeset -e --separator='\n' node[0-3] node0 node1 node2 node3 The ``-O, --output-format`` option can be used to format output results of most *nodeset* commands. The string passed to this option is used as a base format pattern applied to each node or each result (depending on the command and other options requested). The default format string is *"%s"*. Formatting is performed using the Python builtin string formatting operator, so you must use one format operator of the right type (*%s* is guaranteed to work in all cases). Here is an output formatting example when using the expand command:: $ nodeset --output-format='%s-ipmi' -e node[1-2]x[1-2] node1x1-ipmi node1x2-ipmi node2x1-ipmi node2x2-ipmi Output formatting and separator combined can be useful when using the expand command, as shown here:: $ nodeset -O '%s-ipmi' -S '\n' -e node[1-2]x[1-2] node1x1-ipmi node1x2-ipmi node2x1-ipmi node2x2-ipmi When using the output formatting option along with the folding command, the format is applied to each node but the result is still folded:: $ nodeset -O '%s-ipmi' -f mgmt1 mgmt2 login[1-4] login[1-4]-ipmi,mgmt[1-2]-ipmi
.. _nodeset-stepping: Stepping and auto-stepping ^^^^^^^^^^^^^^^^^^^^^^^^^^ The *nodeset* command, as does the *clush* command, is able to recognize by default a factorized notation for range sets of the form *a-b/c*, indicating a list of integers starting from *a*, less than or equal to *b*, with the increment (step) *c*. For example, the *0-6/2* format indicates a range of 0-6 stepped by 2; that is 0,2,4,6:: $ nodeset -e node[0-6/2] node0 node2 node4 node6 However, by default, *nodeset* never uses this stepping notation in output results, as other cluster tools seldom if ever support this feature. Thus, to enable such factorized output in *nodeset*, you must specify ``--autostep=AUTOSTEP`` to set an auto step threshold number when folding node sets (ie. when using ``-f`` or ``-r``). This threshold number (AUTOSTEP) is the minimum occurrence of equally-spaced integers needed to enable auto-stepping. For example:: $ nodeset -f --autostep=3 node1 node3 node5 node[1-5/2] $ nodeset -f --autostep=4 node1 node3 node5 node[1,3,5] It is important to note that resulting node sets with enabled auto-stepping never create overlapping ranges, for example:: $ nodeset -f --autostep=3 node1 node5 node9 node13 node[1-13/4] $ nodeset -f --autostep=3 node1 node5 node7 node9 node13 node[1,5-9/2,13] However, any ranges given as input may still overlap (in this case, *nodeset* will automatically spread them out so that they do not overlap), for example:: $ nodeset -f --autostep=3 node[1-13/4,7] node[1,5-9/2,13] A minimum node count threshold **percentage** before autostep is enabled may also be specified as the autostep value (or ``auto``, which is currently 100%). In the two following examples, only the first 4 of the 7 indexes may be represented using the step syntax (57% of them):: $ nodeset -f --autostep=50% node[1,3,5,7,34,39,99] node[1-7/2,34,39,99] $ nodeset -f --autostep=90% node[1,3,5,7,34,39,99] node[1,3,5,7,34,39,99] .. _nodeset-zeropadding: Zero-padding ^^^^^^^^^^^^ Sometimes, cluster node names are padded with zeros (eg. *node007*). With *nodeset*, when leading zeros are used, resulting host names or node sets are automatically padded with zeros as well. For example:: $ nodeset -e node[08-11] node08 node09 node10 node11 $ nodeset -f node001 node002 node003 node005 node[001-003,005] Zero-padding and stepping (as seen in :ref:`nodeset-stepping`) together are also supported, for example:: $ nodeset -e node[000-012/4] node000 node004 node008 node012 Since v1.9, mixed length padding is allowed, for example:: $ nodeset -f node2 node01 node001 node[2,01,001] When mixed length zero-padding is encountered, indexes with smaller padding length are returned first, as you can see in the example above (``2`` comes before ``01``). Since v1.9, when using node sets with multiple dimensions, each dimension (or axis) may also use mixed length zero-padding:: $ nodeset -f foo1bar1 foo1bar00 foo1bar01 foo004bar1 foo004bar00 foo004bar01 foo[1,004]bar[1,00-01] Leading and trailing digits ^^^^^^^^^^^^^^^^^^^^^^^^^^^ Version 1.7 introduced improved support for bracket leading and trailing digits. Those digits are automatically included within the range set, allowing all node set operations to be fully supported. Examples with bracket leading digits:: $ nodeset -f node-00[00-99] node-[0000-0099] $ nodeset -f node-01[01,09,42] node-[0101,0109,0142] Examples with bracket trailing digits:: $ nodeset -f node-[1-2]0-[0-2]5 node-[10,20]-[05,15,25] Examples with both bracket leading and trailing digits:: $ nodeset -f node-00[1-6]0 node-[0010,0020,0030,0040,0050,0060] $ nodeset --autostep=auto -f node-00[1-6]0 node-[0010-0060/10] Example with a leading digit and mixed length zero-padding (supported since v1.9):: $ nodeset -f node1[00-02,000-032/8] node[100-102,1000,1008,1016,1024,1032] Using this syntax can be error-prone, especially if used with node sets without 0-padding or with the */step* syntax, and it also requires additional processing by the parser. In general, we recommend writing the whole rangeset inside the brackets. .. warning:: Using the step syntax (seen above) within a bracket-delimited range set is not compatible with **trailing** digits. For instance, this is **not** supported: ``node-00[1-6/2]0`` Arithmetic operations ^^^^^^^^^^^^^^^^^^^^^ As a preamble to this section, keep in mind that all operations can be repeated/mixed within the same *nodeset* command line; they will be processed from left to right.
Union operation """"""""""""""" Union is the easiest arithmetic operation supported by *nodeset*: there is no special command line option for that, just provide several node sets and the union operation will be computed, for example:: $ nodeset -f node[1-3] node[4-7] node[1-7] $ nodeset -f node[1-3] node[2-7] node[5-8] node[1-8] Other operations """""""""""""""" As an extension to the above, other arithmetic operations are available by using the following command-line options (*working set* is the node set currently processed on the command line -- always from left to right): +--------------------------------------------+---------------------------------+ | *nodeset* command option | Operation | +============================================+=================================+ | ``-x NODESET``, ``--exclude=NODESET`` | compute a new set with elements | | | in *working set* but not in | | | ``NODESET`` | +--------------------------------------------+---------------------------------+ | ``-i NODESET``, ``--intersection=NODESET`` | compute a new set with elements | | | common to *working set* and | | | ``NODESET`` | +--------------------------------------------+---------------------------------+ | ``-X NODESET``, ``--xor=NODESET`` | compute a new set with elements | | | that are in exactly one of the | | | *working set* and ``NODESET`` | +--------------------------------------------+---------------------------------+ If rangeset mode (``-R``) is turned on, all arithmetic operations are supported by replacing ``NODESET`` by any ``RANGESET``. See :ref:`nodeset-rangeset` for more info about *nodeset*'s rangeset mode. Arithmetic operations usage examples:: $ nodeset -f node[1-9] -x node6 node[1-5,7-9] $ nodeset -f node[1-9] -i node[6-11] node[6-9] $ nodeset -f node[1-9] -X node[6-11] node[1-5,10-11] $ nodeset -f node[1-9] -x node6 -i node[6-12] node[7-9] .. _nodeset-extended-patterns: *Extended patterns* support """"""""""""""""""""""""""" *nodeset* also supports arithmetic operations through its "extended patterns" (inherited from the :class:`.NodeSet` extended pattern feature, see :ref:`class-NodeSet-extended-patterns`); here is an example of use:: $ nodeset -f node[1-4],node[5-9] node[1-9] $ nodeset -f node[1-9]\!node6 node[1-5,7-9] $ nodeset -f node[1-9]\&node[6-12] node[6-9] $ nodeset -f node[1-9]^node[6-11] node[1-5,10-11]
Special operations ^^^^^^^^^^^^^^^^^^ A few special operations are currently available: node set slicing, splitting on a predefined node count, splitting non-contiguous subsets, choosing fold axis (for multidimensional node sets) and picking N nodes randomly. They are all explained below. Slicing """"""" Slicing is a way to select elements from a node set by their index (or from a range set when using the ``-R`` toggle option, see :ref:`nodeset-rangeset`). In this case actually, and because *nodeset*'s underlying :class:`.NodeSet` class sorts elements as observed after folding (for example), the word *set* may sound like a stretch of language (a *set* isn't usually sorted). Indeed, :class:`.NodeSet` further guarantees that its iterator will traverse the set in order, so we should see it as an *ordered set*. The following simple example illustrates this sorting behavior:: $ nodeset -f b2 b1 b0 b c a0 a a,a0,b,b[0-2],c Slicing is performed through the following command-line option: +---------------------------------------+-----------------------------------+ | *nodeset* command option | Operation | +=======================================+===================================+ | ``-I RANGESET``, ``--slice=RANGESET`` | *slicing*: get sliced off result, | | | selecting elements from provided | | | rangeset's indexes | +---------------------------------------+-----------------------------------+ Some slicing examples are shown below:: $ nodeset -f -I 0 node[4-8] node4 $ nodeset -f --slice=0 bnode[0-9] anode[0-9] anode0 $ nodeset -f --slice=1,4,7,9,15 bnode[0-9] anode[0-9] anode[1,4,7,9],bnode5 $ nodeset -f --slice=0-18/2 bnode[0-9] anode[0-9] anode[0,2,4,6,8],bnode[0,2,4,6,8] Splitting into *n* subsets """""""""""""""""""""""""" Splitting a node set into several parts is often useful to get separate groups of nodes, for instance when you want to check MPI comm between nodes, etc. Based on the :meth:`.NodeSet.split` method, the *nodeset* command provides the following additional command-line option (since v1.4): +--------------------------+--------------------------------------------+ | *nodeset* command option | Operation | +==========================+============================================+ | ``--split=MAXSPLIT`` | *splitting*: split result into a number of | | | subsets | +--------------------------+--------------------------------------------+ ``MAXSPLIT`` is an integer specifying the number of separate groups of nodes to compute. The input node set is divided into smaller groups, whenever possible with the same size (only the last ones may be smaller due to rounding). Obviously, if ``MAXSPLIT`` is higher than or equal to the number N of elements in the set, then the set is split into N single sets. Some node set splitting examples:: $ nodeset -f --split=4 node[0-7] node[0-1] node[2-3] node[4-5] node[6-7] $ nodeset -f --split=4 node[0-6] node[0-1] node[2-3] node[4-5] node6 $ nodeset -f --split=10000 node[0-4] node0 node1 node2 node3 node4 $ nodeset -f --autostep=3 --split=2 node[0-38/2] node[0-18/2] node[20-38/2] Splitting off non-contiguous subsets """""""""""""""""""""""""""""""""""" It can be useful to split a node set into several contiguous subsets (with same pattern name and contiguous range indexes, eg. *node[1-100]* or *dc[1-4]node[1-100]*). The ``--contiguous`` option allows you to do that. It is based on the :meth:`.NodeSet.contiguous` method, and should be specified with standard commands (fold, expand, count, regroup). The following example shows how to split off non-contiguous subsets of a specified node set, and display each resulting contiguous node set folded on separate lines:: $ nodeset -f --contiguous node[1-100,200-300,500] node[1-100] node[200-300] node500 Similarly, the following example shows how to display each resulting contiguous node set expanded on separate lines:: $ nodeset -e --contiguous node[1-9,11-19] node1 node2 node3 node4 node5 node6 node7 node8 node9 node11 node12 node13 node14 node15 node16 node17 node18 node19
Choosing fold axis (nD) """"""""""""""""""""""" The default folding behavior for multidimensional node sets is to fold along all axes. However, other cluster tools barely support the nD nodeset syntax, so it may be useful to fold along one (or a few) axes only. The ``--axis`` option allows you to specify the indexes of the dimensions to fold. Using this option, range sets of unspecified axes won't be folded. Please note however that the obtained result may be suboptimal, because :class:`.NodeSet` algorithms are optimized for folding along all axes. The ``--axis`` value is a set of integers from 1 to n representing the selected axes, in the form of a number or a rangeset. A common case is to restrict folding to a single axis, like in the following simple examples:: $ nodeset --axis=1 -f node1-ib0 node2-ib0 node1-ib1 node2-ib1 node[1-2]-ib0,node[1-2]-ib1 $ nodeset --axis=2 -f node1-ib0 node2-ib0 node1-ib1 node2-ib1 node1-ib[0-1],node2-ib[0-1] Because a single nodeset may have several different dimensions, axis indices are silently truncated to fall in the allowed range. Negative indices are useful to fold along the last axis whatever the number of dimensions used:: $ nodeset --axis=-1 -f comp-[1-2]-[1-36],login-[1-2] comp-1-[1-36],comp-2-[1-36],login-[1-2] See also the :ref:`defaults-config-slurm` of Library Defaults for changing it permanently. .. _nodeset-pick: Picking N node(s) at random """"""""""""""""""""""""""" Use ``--pick`` with a maximum number of nodes you wish to pick randomly from the resulting node set (or from the resulting range set with ``-R``):: $ nodeset --pick=1 -f node11 node12 node13 node12 $ nodeset --pick=2 -f node11 node12 node13 node[11,13] .. _nodeset-groups: Node groups ^^^^^^^^^^^ This section tackles the node groups feature, available more particularly through the *nodeset* command-line tool. The ClusterShell library defines a node group syntax and allows you to bind group sources to your applications (cf. :ref:`node groups configuration `). With those group sources, group provisioning is easily done through user-defined external shell commands. Thus, node groups might be very dynamic and their nodes might change very often. However, for performance reasons, external call results are still cached in memory to avoid duplicate external calls during *nodeset* execution. For example, a group source can be bound to a resource manager or a custom cluster database. For further details about using node groups in Python, please see :ref:`class-NodeSet-groups`. For advanced usage, you should also be able to define your own group source directly in Python (cf. :ref:`class-NodeSet-groups-override`). .. _nodeset-groupsexpr: Node group expression rules """"""""""""""""""""""""""" The general node group expression is ``@source:groupname``. For example, ``@slurm:bigmem`` represents the group *bigmem* of the group source *slurm*. Moreover, a shortened expression is available when using the default group source (defined by configuration); for instance ``@compute`` represents the *compute* group of the default group source. Valid group source names and group names can contain alphanumeric characters, hyphens and underscores (no space allowed). Indeed, the same rules apply to node names. Listing group sources """"""""""""""""""""" As already mentioned, the following *nodeset* command is available to list configured group sources and also display the default group source (unless ``-q`` is provided):: $ nodeset --groupsources local (default) genders slurm Listing group names """"""""""""""""""" It is always possible to list the groups from a group source if the source is :ref:`file-based `. If the source is an :ref:`external group source `, the **list** upcall must be configured (see also: :ref:`node groups configuration `).
To list available groups *from the default source*, use the following command:: $ nodeset -l @mgnt @mds @oss @login @compute To list groups *from a specific group source*, use *-l* in conjunction with *-s* (or *--groupsource*):: $ nodeset -l -s slurm @slurm:parallel @slurm:cuda Or, to list groups *from all available group sources*, use *-L* (or *--list-all*):: $ nodeset -L @mgnt @mds @oss @login @compute @slurm:parallel @slurm:cuda You can also use ``nodeset -ll`` or ``nodeset -LL`` to see each group's associated node sets. .. _nodeset-rawgroupnames: Listing group names in expressions """""""""""""""""""""""""""""""""" ClusterShell 1.9 introduces a new operator **@@** optionally followed by a source name (e.g. **@@source**) to access the list of *raw group names* of the source (without the **@** prefix). If no source is specified (as in *just* **@@**), the default group source is used (see :ref:`groups_config_conf`). The **@@** operator may be used in any node set expression to manipulate group names as a node set. Example with the default group source:: $ nodeset -l @mgnt @mds @oss @login @compute $ nodeset -e @@ compute login mds mgnt oss Example with a group source "rack" that defines group names from rack locations in a data center:: $ nodeset -l -s rack @rack:J1 @rack:J2 @rack:J3 $ nodeset -f @@rack J[1-3] A set of valid, indexed group sources is also accepted by the **@@** operator (e.g. **@@dc[1-3]**). .. warning:: An error is generated when using **@@** in an expression if the source is not valid (e.g. invalid name, not configured or upcalls not currently working). Using node groups in basic commands """"""""""""""""""""""""""""""""""" The use of node groups with the *nodeset* command is very straightforward. Indeed, any group name, prefixed by **@** as mentioned above, can be used in lieu of a node name, where it will be substituted for all nodes in that group. A first, simple example is a group expansion (using default source) with *nodeset*:: $ nodeset -e @oss node40 node41 node42 node43 node44 node45 The *nodeset* count command works as expected:: $ nodeset -c @oss 6 Also *nodeset* folding command can always resolve node groups:: $ nodeset -f @oss node[40-45] There are usually two ways to use a specific group source (need to be properly configured):: $ nodeset -f @slurm:parallel node[50-81] $ nodeset -f -s slurm @parallel node[50-81] .. _nodeset-group-finding: Finding node groups """"""""""""""""""" As an extension to the **list** command, you can search node groups that a specified node set belongs to with ``nodeset -l[ll]`` as follow:: $ nodeset -l node40 @all @oss $ nodeset -ll node40 @all node[1-159] @oss node[40-45] This feature is implemented with the help of the :meth:`.NodeSet.groups` method (see :ref:`class-NodeSet-groups-finding` for further details). .. _nodeset-regroup: Resolving node groups """"""""""""""""""""" If needed group configuration conditions are met (cf. :ref:`node groups configuration `), you can try group lookups thanks to the ``-r, --regroup`` command. This feature is implemented with the help of the :meth:`.NodeSet.regroup()` method (see :ref:`class-NodeSet-regroup` for further details). 
Only exact matching groups are returned (all containing nodes needed), for example:: $ nodeset -r node[40-45] @oss $ nodeset -r node[0,40-45] @mgnt,@oss When resolving node groups, *nodeset* always returns the largest groups first, instead of several smaller matching groups, for instance:: $ nodeset -ll @login node[50-51] @compute node[52-81] @intel node[50-81] $ nodeset -r node[50-81] @intel If no matching group is found, ``nodeset -r`` still returns a folded result (as does ``-f``):: $ nodeset -r node40 node42 node[40,42] Indexed node groups """"""""""""""""""" Node groups are themselves some kind of node sets and can be indexed. To use this feature, node groups external shell commands need to return indexed group names (automatically handled by the library as needed). For example, take a look at these indexed node groups:: $ nodeset -l @io1 @io2 @io3 $ nodeset -f @io[1-3] node[40-45] Arithmetic operations on node groups """""""""""""""""""""""""""""""""""" Arithmetic and special operations (as explained for node sets in the sections above) are also supported with node groups. Any group name can be used in lieu of a node set, where it will be substituted for all nodes in that group before processing the requested operations. Some typical examples are:: $ nodeset -f @lustre -x @mds node[40-45] $ nodeset -r @lustre -x @mds @oss $ nodeset -r -a -x @lustre @compute,@login,@mgnt More advanced examples, with the use of node group sets, follow:: $ nodeset -r @io[1-3] -x @io2 @io[1,3] $ nodeset -f -I0 @io[1-3] node40 $ nodeset -f --split=3 @oss node[40-41] node[42-43] node[44-45] $ nodeset -r --split=3 @oss @io1 @io2 @io3 *Extended patterns* support with node groups """""""""""""""""""""""""""""""""""""""""""" Even for node groups, the *nodeset* command supports arithmetic operations through its *extended pattern* feature (see :ref:`class-NodeSet-extended-patterns`). A first example illustrates node group intersection, which can be used in practice to get nodes available from two dynamic group sources at a given time:: $ nodeset -f @db:prod\&@compute The following fictitious example computes a folded node set containing nodes found in node group ``@gpu`` and ``@slurm:bigmem``, but not in both, minus the nodes found in odd ``@chassis`` groups from 1 to 9 (computed from left to right):: $ nodeset -f @gpu^@slurm:bigmem\!@chassis[1-9/2] Also, version 1.7 introduced the notation extension ``@*`` (or ``@SOURCE:*``) to quickly represent *all nodes* (please refer to :ref:`clush-all-nodes` for more details). .. _nodeset-all-nodes: Selecting all nodes """"""""""""""""""" The option ``-a`` (without argument) can be used to select **all** nodes from a group source (see :ref:`node groups configuration ` for more details on the special **all** external shell command upcall). Example of use for the default group source:: $ nodeset -a -f example[4-6,32-159] Use ``-s/--groupsource`` to select another group source. If not properly configured, the ``-a`` option may lead to runtime errors like:: $ nodeset -s mybrokensource -a -f nodeset: External error: Not enough working methods (all or map + list) to get all nodes A similar option is available with :ref:`clush-tool`, see :ref:`selecting all nodes with clush `. .. _node-wildcards: Node wildcards """""""""""""" ClusterShell 1.8 introduces node wildcards: ``*`` means match zero or more characters of any type; ``?`` means match exactly one character of any type.
Any wildcard mask found is matched against **all** nodes from the group source (see :ref:`nodeset-all-nodes`). This can be especially useful for server farms, or when cluster node names differ. Say that your :ref:`group configuration ` is set to return the following "all nodes":: $ nodeset -f -a bckserv[1-2],dbserv[1-4],wwwserv[1-9] Then, you can use wildcards to select particular nodes, as shown below:: $ nodeset -f 'www*' wwwserv[1-9] $ nodeset -f 'www*[1-4]' wwwserv[1-4] $ nodeset -f '*serv1' bckserv1,dbserv1,wwwserv1 Wildcard masks are resolved prior to :ref:`extended patterns `, but each mask is evaluated as a whole node set operand. In the example below, we select all nodes matching ``*serv*`` before removing all nodes matching ``www*``:: $ nodeset -f '*serv*!www*' bckserv[1-2],dbserv[1-4] .. _nodeset-rangeset: Range sets ^^^^^^^^^^ Working with range sets """"""""""""""""""""""" By default, the *nodeset* command works with node or group sets and its functionality match most :class:`.NodeSet` class methods. Similarly, *nodeset* will match :class:`.RangeSet` methods when you make use of the ``-R`` option switch. In that case, all operations are restricted to numerical ranges. For example, to expand the range "``1-10``", you should use:: $ nodeset -e -R 1-10 1 2 3 4 5 6 7 8 9 10 Almost all commands and operations available for node sets are also available with range sets. The only restrictions are commands and operations related to node groups. For instance, the following command options are **not** available with ``nodeset -R``: * ``-r, --regroup`` as this feature is obviously related to node groups, * ``-a / --all`` as the **all** external call is also related to node groups. Using range sets instead of node sets doesn't change the general command usage, like the need of one command option presence (cf. nodeset-commands), or the way to give some input (cf. nodeset-stdin), for example:: $ echo 3 2 36 0 4 1 37 | nodeset -fR 0-4,36-37 $ echo 0-8/4 | nodeset -eR -S'\n' 0 4 8 Stepping and auto-stepping are supported (cf. :ref:`nodeset-stepping`) and also zero-padding (cf. nodeset-zpad), which are both :class:`.RangeSet` class features anyway. The following examples illustrate these last points:: $ nodeset -fR 03 05 01 07 11 09 01,03,05,07,09,11 $ nodeset -fR --autostep=3 03 05 01 07 11 09 01-11/2 Arithmetic and special operations """"""""""""""""""""""""""""""""" All arithmetic operations, as seen for node sets (cf. nodeset-arithmetic), are available for range sets, for example:: $ nodeset -fR 1-14 -x 10-20 1-9 $ nodeset -fR 1-14 -i 10-20 10-14 $ nodeset -fR 1-14 -X 10-20 1-9,15-20 For now, there is no *extended patterns* syntax for range sets as for node sets (cf. :ref:`nodeset-extended-patterns`). However, as the union operator ``,`` is available natively by design, such expressions are still allowed:: $ nodeset -fR 4-10,1-2 1-2,4-10 Besides arithmetic operations, special operations may be very convenient for range sets also. Below is an example with ``-I / --slice`` (cf. nodeset-slice):: $ nodeset -fR -I 0 100-131 100 $ nodeset -fR -I 0-15 100-131 100-115 There is another special operation example with ``--split`` (cf. nodeset-splitting-n):: $ nodeset -fR --split=2 100-131 100-115 116-131 Finally, an example of the special operation ``--contiguous`` (cf. nodeset-splitting-contiguous):: $ nodeset -f -R --contiguous 1-9,11,13-19 1-9 11 13-19 *rangeset* alias """""""""""""""" When using *nodeset* with range sets intensively (eg. 
for scripting), it may be convenient to create a local command alias, as shown in the following example (Bourne shell), making it sort of a super `seq(1)`_ command:: $ alias rangeset='nodeset -R' $ rangeset -e 0-8/2 0 2 4 6 8 .. [#] SLURM is an open-source resource manager (https://computing.llnl.gov/linux/slurm/) .. _seq(1): http://linux.die.net/man/1/seq Files found in this directory are text files in reStructuredText format (Markup Syntax of Docutils). We use rst1man.py to convert them to roff man pages. See: http://docutils.sourceforge.net/rst.html ========= clubak ========= -------------------------------------------------- format output from clush/pdsh-like output and more -------------------------------------------------- :Author: Stephane Thiell :Date: 2023-09-29 :Copyright: GNU Lesser General Public License version 2.1 or later (LGPLv2.1+) :Version: 1.9.2 :Manual section: 1 :Manual group: ClusterShell User Manual SYNOPSIS ======== ``clubak`` [ OPTIONS ] DESCRIPTION =========== ``clubak`` formats text from standard input containing lines of the form "`node:output`". It is fully backward compatible with ``dshbak``\(1) but provides additional features. For instance, ``clubak`` always displays its results sorted by node/nodeset. You do not need to use ``clubak`` when using ``clush``\(1), as all output formatting features are already included in it. It is provided for other usages, like post-processing results of the form "`node:output`". Like ``clush``\(1), ``clubak`` uses the `ClusterShell.MsgTree` module of the ClusterShell library (see ``pydoc ClusterShell.MsgTree``). INVOCATION ========== ``clubak`` should be started with connected standard input.
OPTIONS ======= --version show ``clubak`` version number and exit -b, -c gather nodes with same output (-c is provided for ``dshbak``\(1) compatibility) -d, --debug output more messages for debugging purpose -L disable header block and order output by nodes -r, --regroup fold nodeset using node groups -s GROUPSOURCE, --groupsource=GROUPSOURCE optional ``groups.conf``\(5) group source to use --groupsconf=FILE use alternate config file for groups.conf(5) -G, --groupbase do not display group source prefix (always `@groupname`) -S SEPARATOR, --separator=SEPARATOR node / line content separator string (default: `:`) -F, --fast faster but memory hungry mode (preload all messages per node) -T, --tree message tree trace mode; switch to enable ``ClusterShell.MsgTree`` trace mode, all keys/nodes being kept for each message element of the tree, thus allowing special output gathering --color=WHENCOLOR ``clubak`` can use the NO_COLOR, CLICOLOR and CLICOLOR_FORCE environment variables. The ``--color`` command line option always takes precedence over environment variables. NO_COLOR takes precedence over CLICOLOR_FORCE, which takes precedence over CLICOLOR. ``--color`` tells whether to use ANSI colors to surround node or nodeset prefix/header with escape sequences to display them in color on the terminal. *WHENCOLOR* is ``never``, ``always`` or ``auto`` (which uses color if standard output refers to a terminal). Color is set to ESC[34m (blue foreground text) and cannot be modified. --diff show diff between gathered outputs EXIT STATUS =========== An exit status of zero indicates success of the ``clubak`` command. EXAMPLES =========== 1. ``clubak`` can be used to gather some recorded ``clush``\(1) results: Record ``clush``\(1) results in a file: | # clush -w node[1-7] uname -r >/tmp/clush_output | # clush -w node[32-159] uname -r >>/tmp/clush_output Display file gathered results (in line-mode): | # clubak -bL </tmp/clush_output ========= cluset ========= :Author: Stephane Thiell :Date: 2023-09-29 :Copyright: GNU Lesser General Public License version 2.1 or later (LGPLv2.1+) :Version: 1.9.2 :Manual section: 1 :Manual group: ClusterShell User Manual SYNOPSIS ======== ``cluset`` [OPTIONS] [COMMAND] [nodeset1 [OPERATION] nodeset2|...] DESCRIPTION =========== Note: ``cluset`` and ``nodeset`` are the same command. ``cluset`` is a utility command provided with the ClusterShell library which implements some features of ClusterShell's NodeSet and RangeSet Python classes. It provides easy manipulation of 1D or nD-indexed cluster nodes and node groups. Also, ``cluset`` is automatically bound to the library node group resolution mechanism. Thus, it is especially useful to enhance cluster aware administration shell scripts. OPTIONS ======= --version show program's version number and exit -h, --help show this help message and exit -s GROUPSOURCE, --groupsource=GROUPSOURCE optional ``groups.conf``\(5) group source to use --groupsconf=FILE use alternate config file for groups.conf(5) Commands: -c, --count show number of nodes in nodeset(s) -e, --expand expand nodeset(s) to separate nodes (see also -S *SEPARATOR*) -f, --fold fold nodeset(s) (or separate nodes) into one nodeset -l, --list list node groups, list node groups and nodes (``-ll``) or list node groups, nodes and node count (``-lll``). When no argument is specified at all, this command will list all node group names found in the selected group source (see also -s *GROUPSOURCE*). If any nodesets are specified as argument, this command will find node groups these nodes belong to (individually). Optionally for each group, the fraction of these nodes being member of the group may be displayed (with ``-ll``), and also member count/total group node count (with ``-lll``). If a single hyphen-minus (-) is given as a nodeset, it will be read from standard input. -r, --regroup fold nodes using node groups (see -s *GROUPSOURCE*) --groupsources list all active group sources (see ``groups.conf``\(5)) Operations: -x SUB_NODES, --exclude=SUB_NODES exclude specified set -i AND_NODES, --intersection=AND_NODES calculate sets intersection -X XOR_NODES, --xor=XOR_NODES calculate symmetric difference between sets Options: -a, --all call external node groups support to display all nodes --autostep=AUTOSTEP enable a-b/step style syntax when folding nodesets, value is min node count threshold (integer '4', percentage '50%' or 'auto'). If not specified, auto step is disabled (best for compatibility with other cluster tools). Example: with autostep=4, "node2 node4 node6" folds into node[2,4,6], but with autostep=3 it folds into node[2-6/2]. -d, --debug output more messages for debugging purpose -q, --quiet be quiet, print essential output only -R, --rangeset switch to RangeSet instead of NodeSet.
Useful when working on numerical cluster ranges, eg. 1,5,18-31 -G, --groupbase hide group source prefix (always `@groupname`) -S SEPARATOR, --separator=SEPARATOR separator string to use when expanding nodesets (default: ' ') -O FORMAT, --output-format=FORMAT output format (default: '%s') -I SLICE_RANGESET, --slice=SLICE_RANGESET return sliced off result; examples of SLICE_RANGESET are "0" for simple index selection, or "1-9/2,16" for complex rangeset selection --split=MAXSPLIT split result into a number of subsets --contiguous split result into contiguous subsets (ie. for nodeset, subsets will contain nodes with same pattern name and a contiguous range of indexes, like foobar[1-100]; for rangeset, subsets will consist in contiguous index ranges) --axis=RANGESET for nD nodesets, fold along provided axis only. Axes are indexed from 1 to n and can be specified here either using the rangeset syntax, eg. '1', '1-2', '1,3', or by a single negative number meaning that the index is counted from the end. Because some nodesets may have several different dimensions, axis indices are silently truncated to fall in the allowed range. --pick=N pick N node(s) at random in nodeset For a short explanation of these options, see ``-h, --help``. If a single hyphen-minus (-) is given as a nodeset, it will be read from standard input. EXTENDED PATTERNS ================= The ``cluset`` command benefits from ClusterShell NodeSet basic arithmetic addition. This feature extends recognized string patterns by supporting operators matching all Operations seen previously. String patterns are read from left to right, by processing any character operators accordingly. Supported character operators ``,`` indicates that the *union* of both left and right nodeset should be computed before continuing ``!`` indicates the *difference* operation ``&`` indicates the *intersection* operation ``^`` indicates the *symmetric difference* (XOR) operation Care should be taken to escape these characters as needed when the shell does not interpret them literally. Examples of use of extended patterns :$ cluset -f node[0-7],node[8-10]: | node[0-10] :$ cluset -f node[0-10]\!node[8-10]: | node[0-7] :$ cluset -f node[0-10]\&node[5-13]: | node[5-10] :$ cluset -f node[0-10]^node[5-13]: | node[0-4,11-13] Example of advanced usage :$ cluset -f @gpu^@slurm\:bigmem!@chassis[1-9/2]: This computes a folded nodeset containing nodes found in group @gpu and @slurm:bigmem, but not in both, minus the nodes found in odd chassis groups from 1 to 9. "All nodes" extension The ``@*`` and ``@SOURCE:*`` special notations may be used in extended patterns to represent all nodes (in SOURCE) according to the *all* external shell command (see ``groups.conf``\(5)) and are equivalent to: :$ cluset [-s SOURCE] -a -f: Group names in expressions The ``@@SOURCE`` notation may be used to access all group names from the specified SOURCE (or from the default group source when just ``@@`` is used) in node set expressions; this works with either file-based group sources or with external group sources that have the *list* upcall defined (see ``groups.conf``\(5)): :$ cluset -f @@rack: | J[1-3]
NODE WILDCARDS ============== Any wildcard mask found is matched against all nodes from the group source (see ``groups.conf``\(5) and the ``-a/--all`` option above). ``*`` means match zero or more characters of any type; ``?`` means match exactly one character of any type. This can be especially useful for server farms, or when cluster node names differ. Say that your group configuration is set to return the following "all nodes": :$ cluset -f -a: | bckserv[1-2],dbserv[1-4],wwwserv[1-9] Then, you can use wildcards to select particular nodes, as shown below: :$ cluset -f 'www\*': | wwwserv[1-9] :$ cluset -f 'www\*[1-4]': | wwwserv[1-4] :$ cluset -f '\*serv1': | bckserv1,dbserv1,wwwserv1 Wildcard masks are resolved prior to extended patterns, but each mask is evaluated as a whole node set operand. In the example below, we select all nodes matching ``*serv*`` before removing all nodes matching ``www*``: :$ cluset -f '\*serv\*\!www\*': | bckserv[1-2],dbserv[1-4] EXIT STATUS =========== An exit status of zero indicates success of the ``cluset`` command. A non-zero exit status indicates failure. EXAMPLES =========== Getting the node count :$ cluset -c node[0-7,32-159]: | 136 :$ cluset -c node[0-7,32-159] node[160-163]: | 140 :$ cluset -c dc[1-2]n[100-199]: | 200 :$ cluset -c @login: | 4 Folding nodesets :$ cluset -f node[0-7,32-159] node[160-163]: | node[0-7,32-163] :$ echo node3 node6 node1 node2 node7 node5 | cluset -f: | node[1-3,5-7] :$ cluset -f dc1n2 dc2n2 dc1n1 dc2n1: | dc[1-2]n[1-2] :$ cluset --axis=1 -f dc1n2 dc2n2 dc1n1 dc2n1: | dc[1-2]n1,dc[1-2]n2 Expanding nodesets :$ cluset -e node[160-163]: | node160 node161 node162 node163 :$ echo 'dc[1-2]n[2-6/2]' | cluset -e: | dc1n2 dc1n4 dc1n6 dc2n2 dc2n4 dc2n6 Excluding nodes from nodeset :$ cluset -f node[32-159] -x node33: | node[32,34-159] Computing nodesets intersection :$ cluset -f node[32-159] -i node[0-7,20-21,32,156-159]: | node[32,156-159] Computing nodesets symmetric difference (xor) :$ cluset -f node[33-159] --xor node[32-33,156-159]: | node[32,34-155] Splitting nodes into several nodesets (expanding results) :$ cluset --split=3 -e node[1-9]: | node1 node2 node3 | node4 node5 node6 | node7 node8 node9 Splitting non-contiguous nodesets (folding results) :$ cluset --contiguous -f node2 node3 node4 node8 node9: | node[2-4] | node[8-9] :$ cluset --contiguous -f dc[1,3]n[1-2,4-5]: | dc1n[1-2] | dc1n[4-5] | dc3n[1-2] | dc3n[4-5] HISTORY ======= ``cluset`` was added in 1.7.3 to avoid a conflict with xCAT's ``nodeset`` command and also to conform with ClusterShell's "clu*" command nomenclature. SEE ALSO ======== ``clubak``\(1), ``clush``\(1), ``nodeset``\(1), ``groups.conf``\(5). http://clustershell.readthedocs.org/ BUG REPORTS =========== Use the following URL to submit a bug report or feedback: https://github.com/cea-hpc/clustershell/issues ============ clush.conf ============ ------------------------------ Configuration file for `clush` ------------------------------ :Author: Stephane Thiell :Date: 2023-09-29 :Copyright: GNU Lesser General Public License version 2.1 or later (LGPLv2.1+) :Version: 1.9.2 :Manual section: 5 :Manual group: ClusterShell User Manual DESCRIPTION =========== ``clush``\(1) obtains configuration options from the following sources in the following order: 1. command-line options 2. user configuration file (*$XDG_CONFIG_HOME/clustershell/clush.conf*) 3. local pip user installation (*$HOME/.local/etc/clustershell/clush.conf*) 4. global configuration file (*$CLUSTERSHELL_CFGDIR/clush.conf*, defaults to */etc/clustershell/clush.conf*) For each parameter, the first obtained value will be used.
The configuration file has a format in the style of RFC 822 composed of one main section: Main Program options definition [Main] ------ Configuration parameters of the ``Main`` section are described below. fanout Size of the sliding window (fanout) of active commands for ``clush``. This `fanout` is used to avoid too many concurrent connections and to conserve resources on the initiating hosts. In tree mode, the same `fanout` value is used on the head node and on each gateway (the `fanout` value is propagated). That is, if the `fanout` is **16** on the head node, each gateway will initiate up to **16** connections to their target nodes at the same time. confdir Optional list of directory paths where clush should look for `.conf` files which define run modes that can then be activated with ``--mode``. All other clush config file settings defined here might be overridden in a run mode. Each mode section should have a name prefixed by "mode:" to clearly identify a section defining a mode. Duplicate modes are not allowed in those files. Configuration files that are not readable by the current user are ignored. The variable `$CFGDIR` is replaced by the path of the highest priority configuration directory found (where clush.conf resides). The default confdir value enables both system-wide and any installed user configuration (thanks to `$CFGDIR`). Duplicate directory paths are ignored. connect_timeout Timeout in seconds to allow a connection to establish. This parameter is passed to ssh. If set to *0*, no timeout occurs. command_prefix Command prefix. Generally used for specific run modes, for example to implement ``sudo``\(8\) support. command_timeout Timeout in seconds to allow a command to complete since the connection has been established. This parameter is passed to ssh. In addition, the ClusterShell library ensures that any commands complete in less than ( connect_timeout + command_timeout ). If set to *0*, no timeout occurs. color ``clush`` can use the NO_COLOR, CLICOLOR and CLICOLOR_FORCE environment variables. NO_COLOR takes precedence over CLICOLOR_FORCE, which takes precedence over CLICOLOR. When this option is set in the configuration file, environment variables are taken into account only with the `auto` argument. ``color`` tells whether to use ANSI colors to surround node or nodeset prefix/header with escape sequences to display them in color on the terminal. Valid arguments are ``never``, ``always`` or ``auto`` (which uses color if standard output/error refer to a terminal). Colors are set to ESC[34m (blue foreground text) for stdout and ESC[31m (red foreground text) for stderr, and cannot be modified. fd_max Maximum number of open file descriptors permitted per clush process (soft resource limit for open files). This limit can never exceed the system (hard) limit. The `fd_max` (soft) and system (hard) limits should be high enough to run ``clush``, although their values depend on your `fanout` value. history_size Set the maximum number of history entries saved in the GNU readline history list. Negative values imply unlimited history file size. node_count Should ``clush`` display additional (node count) information in buffer header? (`yes`/`no`) maxrc Should ``clush`` return the largest of command return codes? (yes/no) password_prompt Enable password prompt and password forwarding to stdin? (yes/no) Generally used for specific run modes, for example to implement interactive ``sudo``\(8\) support. verbosity Set the verbosity level: `0` (quiet), `1` (default), `2` (verbose) or more (debug).
ssh_user Set the ssh user to use for remote connection (default is to not specify). ssh_path Set the ssh binary path to use for remote connection (default is `ssh`). ssh_options Set additional options to pass to the underlying ssh command. scp_path Set the scp binary path to use for remote copy (default is `scp`). scp_options Set additional options to pass to the underlying scp command. If not specified, ssh_options are used instead. rsh_path Set the rsh binary path to use for remote connection (default is `rsh`). You could easily use mrsh or krsh by simply changing this value. rcp_path Same as rsh_path, but for the rcp command (default is `rcp`). rsh_options Set additional options to pass to the underlying rsh/rcp command. Run modes --------- Since version 1.9, clush has support for run modes, which are special ``clush.conf``\(5) settings with a given name. Two run modes are provided in example configuration files that can be copied and modified. They implement password-based authentication with ``sshpass``\(1\) and support of interactive ``sudo``\(8\) with password. To use a run mode with ``clush --mode``, install a configuration file in one of ``clush.conf``\(5)'s `confdir` (usually ``clush.conf.d``). Only configuration files ending in `.conf` are scanned. If the user running ``clush``\(1\) doesn't have read access to a configuration file, it is ignored. When ``--mode`` is specified, you can display all available run modes for the current user by enabling debug mode (``-d``). EXAMPLES =========== Simple configuration file. *clush.conf* ------------ | [Main] | fanout: 128 | connect_timeout: 15 | command_timeout: 0 | history_size: 100 | color: auto | fd_max: 10240 | maxrc: no | node_count: yes | confdir: /etc/clustershell/clush.conf.d | FILES ===== *$CLUSTERSHELL_CFGDIR/clush.conf* Global clush configuration file. If $CLUSTERSHELL_CFGDIR is not defined, */etc/clustershell/clush.conf* is used instead. *$XDG_CONFIG_HOME/clustershell/clush.conf* User configuration file for clush. If $XDG_CONFIG_HOME is not defined, *$HOME/.config/clustershell/clush.conf* is used instead. *$HOME/.local/etc/clustershell/clush.conf* Local user configuration file for clush (default installation for pip --user) *~/.clush.conf* Deprecated per-user clush configuration file. HISTORY ======= As of ClusterShell version 1.3, the ``External`` section has been removed from *clush.conf*. External commands whose outputs were used by ``clush`` (-a, -g, -X) are now handled by the library itself and defined in ``groups.conf``\(5). SEE ALSO ======== ``clush``\(1), ``groups.conf``\(5), ``sshpass``\(1\), ``sudo``\(8\). http://clustershell.readthedocs.org/ ========= clush ========= ----------------------------------- execute shell commands on a cluster ----------------------------------- :Author: Stephane Thiell :Date: 2023-09-29 :Copyright: GNU Lesser General Public License version 2.1 or later (LGPLv2.1+) :Version: 1.9.2 :Manual section: 1 :Manual group: ClusterShell User Manual SYNOPSIS ======== ``clush`` ``-a`` | ``-g`` *group* | ``-w`` *nodes* [OPTIONS] ``clush`` ``-a`` | ``-g`` *group* | ``-w`` *nodes* [OPTIONS] *command* ``clush`` ``-a`` | ``-g`` *group* | ``-w`` *nodes* [OPTIONS] --copy *file* | *dir* [ *file* | *dir* ...]
[ --dest *path* ] ``clush`` ``-a`` | ``-g`` *group* | ``-w`` *nodes* [OPTIONS] --rcopy *file* | *dir* [ *file* | *dir* ...] [ --dest *path* ] DESCRIPTION =========== ``clush`` is a program for executing commands in parallel on a cluster and for gathering their results. ``clush`` executes commands interactively or can be used within shell scripts and other applications. It is a partial front-end to the ClusterShell library that ensures a light, unified and robust parallel command execution framework. Thus, it allows traditional shell scripts to benefit from some of the library features. By default, ``clush`` makes use of the Ssh worker of ClusterShell, which only requires ``ssh``\(1) (OpenSSH SSH client). INVOCATION ========== ``clush`` can be started non-interactively to run a shell *command*, or can be invoked as an interactive shell. To start a ``clush`` interactive session, invoke the ``clush`` command without providing *command*. Non-interactive mode When ``clush`` is started non-interactively, the *command* is executed on the specified remote hosts in parallel. If option ``-b`` or ``--dshbak`` is specified, ``clush`` waits for command completion and then displays gathered output results. The ``-w`` option allows you to specify remote hosts by using ClusterShell NodeSet syntax, including the node groups ``@group`` special syntax and the ``Extended Patterns`` syntax to benefit from NodeSet basic arithmetic (like ``@Agroup\&@Bgroup``). See EXTENDED PATTERNS in ``nodeset``\(1) and also ``groups.conf``\(5) for more information. Unless the option ``--nostdin`` (or ``-n``) is specified, ``clush`` detects when its standard input is connected to a terminal (as determined by ``isatty``\(3)). If actually connected to a terminal, ``clush`` listens to standard input when commands are running, waiting for an `Enter` key press. Doing so will display the status of current nodes. If standard input is not connected to a terminal, and unless the option ``--nostdin`` is specified, ``clush`` binds the standard input of the remote commands to its own standard input, allowing scripting methods like: | # echo foo | clush -w node[40-42] -b cat | --------------- | node[40-42] | --------------- | foo Please see more examples in the EXAMPLES section below. Interactive session If a *command* is not specified, and its standard input is connected to a terminal, ``clush`` runs interactively. In this mode, ``clush`` uses the GNU ``readline`` library to read command lines. Readline provides commands for searching through the command history for lines containing a specified string. For instance, type Control-R to search in the history for the next entry matching the search string typed so far. ``clush`` also recognizes special single-character prefixes that allow the user to see and modify the current nodeset (the nodes where the commands are executed). Single-character interactive commands are: clush> ? show current nodeset clush> @ set current nodeset clush> + add nodes to current nodeset clush> - remove nodes from current nodeset clush> !COMMAND execute COMMAND on the local system clush> = toggle the output format (gathered or standard mode) To leave an interactive session, type ``quit`` or Control-D. Local execution ( ``--worker=exec`` or ``-R exec`` ) Instead of running the provided command on remote nodes, ``clush`` can use the dedicated *exec* worker to launch the command *locally*, for each node. Some parameters can be used in the command line to build a different command for each node.
``%h`` or ``%host`` will be replaced by node name and ``%n`` or ``%rank`` by the remote rank [0-N] (to get a literal % use %%) File copying mode ( ``--copy`` ) When ``clush`` is started with the ``-c`` or ``--copy`` option, it will attempt to copy specified *files* and/or *directories* to the provided cluster nodes. The ``--dest`` option can be used to specify a single path where all the file(s) should be copied to on the target nodes. In the absence of ``--dest``, ``clush`` will attempt to copy each file or directory found in the command line to their same location on the target nodes. Reverse file copying mode ( ``--rcopy`` ) When ``clush`` is started with the ``--rcopy`` option, it will attempt to retrieve specified *file* and/or *dir* from provided cluster nodes. If the ``--dest`` option is specified, it must be a directory path where the files will be stored with their hostname appended. If the destination path is not specified, it will take each *file* or *directory*'s parent directory as the local destination. OPTIONS ======= --version show ``clush`` version number and exit -s GROUPSOURCE, --groupsource=GROUPSOURCE optional ``groups.conf``\(5) group source to use -n, --nostdin do not watch for possible input from stdin; this should be used when ``clush`` is run in the background (or in scripts). --groupsconf=FILE use alternate config file for groups.conf(5) --conf=FILE use alternate config file for clush.conf(5) -O , --option= override any key=value ``clush.conf``\(5) options (repeat as needed) Selecting target nodes: -w NODES nodes where to run the command -x NODES exclude nodes from the node list -a, --all run command on all nodes -g GROUP, --group=GROUP run command on a group of nodes -X GROUP exclude nodes from this group --hostfile=FILE, --machinefile=FILE path to a file containing a list of single hosts, node sets or node groups, separated by spaces and lines (may be specified multiple times, one per file) --topology=FILE topology configuration file to use for tree mode --pick=N pick N node(s) at random in nodeset Output behaviour: -q, --quiet be quiet, print essential output only -v, --verbose be verbose, print informative messages -d, --debug output more messages for debugging purpose -G, --groupbase do not display group source prefix -L disable header block and order output by nodes; if -b/-B is not specified, ``clush`` will wait for all commands to finish and then display aggregated output of commands with same return codes, ordered by node name; alternatively, when used in conjunction with -b/-B (eg. -bL), ``clush`` will enable a "life gathering" of results by line, such as the next line is displayed as soon as possible (eg. when all nodes have sent the line) -N disable labeling of command line -P, --progress show progress during command execution; if writing is performed to standard input, the live progress indicator will display the global bandwidth of data written to the target nodes -b, --dshbak display gathered results in a dshbak-like way (note: it will only try to aggregate the output of commands with same return codes) -B like -b but including standard error -r, --regroup fold nodeset using node groups -S, --maxrc return the largest of command return codes --color=WHENCOLOR ``clush`` can use NO_COLOR, CLICOLOR and CLICOLOR_FORCE environment variables. NO_COLOR takes precedence over CLICOLOR_FORCE which takes precedence over CLICOLOR. When ``--color`` option is used these environment variables are not taken into account. 
``--color`` tells whether to use ANSI colors to surround node or nodeset prefix/header with escape sequences to display them in color on the terminal. *WHENCOLOR* is ``never``, ``always`` or ``auto`` (which use color if standard output/error refer to a terminal). Colors are set to [34m (blue foreground text) for stdout and [31m (red foreground text) for stderr, and cannot be modified. --diff show diff between common outputs (find the best reference output by focusing on largest nodeset and also smaller command return code) --outdir=OUTDIR output directory for stdout files (OPTIONAL) --errdir=ERRDIR output directory for stderr files (OPTIONAL) File copying: -c, --copy copy local file or directory to remote nodes --rcopy copy file or directory from remote nodes --dest=DEST_PATH destination file or directory on the nodes (optional: use the first source directory path when not specified) -p preserve modification times and modes Connection options: -f FANOUT, --fanout=FANOUT do not execute more than FANOUT commands at the same time, useful to limit resource usage. In tree mode, the same *fanout* value is used on the head node and on each gateway (the *fanout* value is propagated). That is, if the *fanout* is **16**, each gateway will initiate up to **16** connections to their target nodes at the same time. Default *fanout* value is defined in ``clush.conf``\(5). -l USER, --user=USER execute remote command as user -o OPTIONS, --options=OPTIONS can be used to give ssh options, eg. ``-o "-p 2022 -i ~/.ssh/myidrsa"``; these options are added first to ssh and override default ones -t CONNECT_TIMEOUT, --connect_timeout=CONNECT_TIMEOUT limit time to connect to a node -u COMMAND_TIMEOUT, --command_timeout=COMMAND_TIMEOUT limit time for command to run on the node -m MODE, --mode=MODE run mode; define MODEs in ``/*.conf`` -R WORKER, --worker=WORKER worker name to use for connection (``exec``, ``ssh``, ``rsh``, ``pdsh``, or the name of a Python worker module), default is ``ssh`` --remote=REMOTE whether to enable remote execution: in tree mode, 'yes' forces connections to the leaf nodes for execution, 'no' establishes connections up to the leaf parent nodes for execution (default is 'yes') For a short explanation of these options, see ``-h, --help``. EXIT STATUS =========== By default, an exit status of zero indicates success of the ``clush`` command but gives no information about the remote commands exit status. However, when the ``-S`` option is specified, the exit status of ``clush`` is the largest value of the remote commands return codes. For failed remote commands whose exit status is non-zero, and unless the combination of options ``-qS`` is specified, ``clush`` displays messages similar to: :clush\: node[40-42]\: exited with exit code 1: EXAMPLES =========== Remote parallel execution ------------------------- :# clush -w node[3-5,62] uname -r: Run command `uname -r` in parallel on nodes: node3, node4, node5 and node62 Local parallel execution ------------------------ :# clush -w node[1-3] --worker=exec ping -c1 %host: Run locally, in parallel, a ping command for nodes: node1, node2 and node3. You may also use ``-R exec`` as the shorter and pdsh compatible option. Display features ---------------- :# clush -w node[3-5,62] -b uname -r: Run command `uname -r` on nodes[3-5,62] and display gathered output results (integrated ``dshbak``-like). :# clush -w node[3-5,62] -bL uname -r: Line mode: run command `uname -r` on nodes[3-5,62] and display gathered output results without default header block. 
:# ssh node32 find /etc/yum.repos.d -type f | clush -w node[40-42] -b xargs ls -l: Search some files on node32 in /etc/yum.repos.d and use clush to list the matching ones on node[40-42], and use ``-b`` to display gathered results. :# clush -w node[3-5,62] --diff dmidecode -s bios-version: Run this Linux command to get BIOS version on nodes[3-5,62] and show version differences (if any). All nodes --------- :# clush -a uname -r: Run command `uname -r` on all cluster nodes, see ``groups.conf``\(5) to setup all cluster nodes (`all:` field). :# clush -a -x node[5,7] uname -r: Run command `uname -r` on all cluster nodes except on nodes node5 and node7. :# clush -a --diff cat /some/file: Run command `cat /some/file` on all cluster nodes and show differences (if any), line by line, between common outputs. Node groups ----------- :# clush -w @oss modprobe lustre: Run command `modprobe lustre` on nodes from node group named `oss`, see ``groups.conf``\(5) to setup node groups (`map:` field). :# clush -g oss modprobe lustre: Same as previous example but using ``-g`` to avoid `@` group prefix. :# clush -w @mds,@oss modprobe lustre: You may specify several node groups by separating them with commas (please see EXTENDED PATTERNS in ``nodeset``\(1) and also ``groups.conf``\(5) for more information). Copy files ---------- :# clush -w node[3-5,62] --copy /etc/motd: Copy local file `/etc/motd` to remote nodes node[3-5,62]. :# clush -w node[3-5,62] --copy /etc/motd --dest /tmp/motd2: Copy local file `/etc/motd` to remote nodes node[3-5,62] at path `/tmp/motd2`. :# clush -w node[3-5,62] -c /usr/share/doc/clustershell: Recursively copy local directory `/usr/share/doc/clustershell` to the same path on remote nodes node[3-5,62]. :# clush -w node[3-5,62] --rcopy /etc/motd --dest /tmp: Copy `/etc/motd` from remote nodes node[3-5,62] to local `/tmp` directory, each file having their remote hostname appended, eg. `/tmp/motd.node3`. FILES ===== *$CLUSTERSHELL_CFGDIR/clush.conf* Global clush configuration file. If $CLUSTERSHELL_CFGDIR is not defined, */etc/clustershell/clush.conf* is used instead. *$XDG_CONFIG_HOME/clustershell/clush.conf* User configuration file for clush. If $XDG_CONFIG_HOME is not defined, *$HOME/.config/clustershell/clush.conf* is used instead. *$HOME/.local/etc/clustershell/clush.conf* Local user configuration file for clush (default installation for pip --user) *~/.clush.conf* Deprecated per-user clush configuration file. *~/.clush_history* File in which interactive ``clush`` command history is saved. SEE ALSO ======== ``clubak``\(1), ``cluset``\(1), ``nodeset``\(1), ``readline``\(3), ``clush.conf``\(5), ``groups.conf``\(5). http://clustershell.readthedocs.org/ BUG REPORTS =========== Use the following URL to submit a bug report or feedback: https://github.com/cea-hpc/clustershell/issues ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/doc/txt/clustershell.rst0000644104717000001440000000351714501416555020431 0ustar00sthiellusersClusterShell is an event-driven open source Python framework, designed to run local or distant commands in parallel on server farms or on large Linux clusters. It will take care of common issues encountered on HPC clusters, such as operating on groups of nodes, running distributed commands using optimized execution algorithms, as well as gathering results and merging identical outputs, or retrieving return codes. 
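As a quick illustration of the library side (not just the command-line tools), here is a minimal sketch using the Task interface to run a command in parallel and retrieve merged outputs; the node names are hypothetical and working ssh access to them is assumed: ::

    from ClusterShell.Task import task_self
    from ClusterShell.NodeSet import NodeSet

    task = task_self()
    # run the command in parallel on the target nodes (ssh worker by default)
    task.run("uname -r", nodes="node[32-49]")
    # identical outputs are merged into common buffers, dshbak-style
    for buf, nodelist in task.iter_buffers():
        print("%s: %s" % (NodeSet.fromlist(nodelist), buf.message().decode()))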
ClusterShell takes advantage of existing remote shell facilities already installed on your systems, like SSH. User tools ---------- ClusterShell provides clush, clubak and cluset/nodeset, convenient command-line tools that allow traditional shell scripts to benefit from some of the library's features: - **clush**: issue commands to cluster nodes and format output Example of use: :: $ clush -abL uname -r node[32-49,51-71,80,82-150,156-159]: 2.6.18-164.11.1.el5 node[3-7,72-79]: 2.6.18-164.11.1.el5_lustre1.10.0.36 node[2,151-155]: 2.6.31.6-145.fc11.2.x86_64 See *man clush* for more details. - **clubak**: improved dshbak to gather and sort dsh-like outputs See *man clubak* for more details. - **nodeset** (or **cluset**): compute advanced nodeset/nodegroup operations Examples of use: :: $ echo node160 node161 node162 node163 | nodeset -f node[160-163] $ nodeset -f node[0-7,32-159] node[160-163] node[0-7,32-163] $ nodeset -e node[160-163] node160 node161 node162 node163 $ nodeset -f node[32-159] -x node33 node[32,34-159] $ nodeset -f node[32-159] -i node[0-7,20-21,32,156-159] node[32,156-159] $ nodeset -f node[33-159] --xor node[32-33,156-159] node[32,34-155] $ nodeset -l @oss @mds @io @compute $ nodeset -e @mds node6 node7 See *man nodeset* (or *man cluset*) for more details. Please visit the ClusterShell website_. .. _website: http://cea-hpc.github.io/clustershell/ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/doc/txt/groups.conf.txt0000644104717000001440000001621714505632065020173 0ustar00sthiellusers============= groups.conf ============= ----------------------------------------------- Configuration file for ClusterShell node groups ----------------------------------------------- :Author: Stephane Thiell, :Date: 2023-09-29 :Copyright: GNU Lesser General Public License version 2.1 or later (LGPLv2.1+) :Version: 1.9.2 :Manual section: 5 :Manual group: ClusterShell User Manual DESCRIPTION =========== The ClusterShell library obtains its node groups configuration from the following sources in the following order: 1. user configuration file (*$XDG_CONFIG_HOME/clustershell/groups.conf*) 2. local pip user installation (*$HOME/.local/etc/clustershell/groups.conf*) 3. Global configuration file (*$CLUSTERSHELL_CFGDIR/groups.conf*, defaults to */etc/clustershell/groups.conf*) If no *groups.conf* is found, group support will be disabled. Additional configuration files are also read from the directories set by the confdir option, if present. See the ``confdir`` option below for further details. Configuration files have a format in the style of RFC 822 potentially composed of several sections which may be present in any order. There are two types of sections: Main and *Group_source*: Main Global configuration options. There should be only one Main section. *Group_source* The *Group_source* section(s) define the configuration for each node group source (or namespace). This configuration consists in external commands definition (map, all, list and reverse). Only *Group_source* section(s) are allowed in additional configuration files. [Main] OPTIONS -------------- Configuration parameters of the ``Main`` section are described below. default Specify the default group source (group namespace) used by the NodeSet parser when the user does not explicitly specify the group source (eg. "@io"). confdir Optional list of directories where the ClusterShell library should look for **.conf** files which define group sources to use. 
Each file in these directories with the .conf suffix should contain one or more *Group_source* sections as documented in [*Group_source*] options below. These will be merged with the group sources defined in */etc/clustershell/groups.conf* to form the complete set of group sources that ClusterShell will use. Duplicate *Group_source* sections are not allowed. Note: .conf files that are not readable by the current user are ignored (except the one that defines the default group source). The variable *$CFGDIR* is replaced by the path of the highest priority configuration directory found (where groups.conf resides). The default confdir value enables both system-wide and any installed user configuration (thanks to *$CFGDIR*). Duplicate directory paths are ignored. autodir Optional list of directories where the ClusterShell library should look for **.yaml** files that define in-file group dictionaries. No external commands need to be called for these files; they are parsed by the ClusterShell library itself. Multiple group source definitions in the same file are supported. The variable *$CFGDIR* is replaced by the path of the highest priority configuration directory found (where groups.conf resides). The default confdir value enables both system-wide and any installed user configuration (thanks to *$CFGDIR*). Duplicate directory paths are ignored. [*Group_source*] OPTIONS ------------------------ Configuration parameters of each group source section are described below. map Specify the external shell command used to resolve a group name into a nodeset, list of nodes or list of nodesets (separated by space characters or by carriage returns). The variable *$GROUP* is replaced before executing the command. all Optional external shell command that should return a nodeset, list of nodes or list of nodesets of all nodes for this group source. If not specified, the library will try to resolve all nodes by using the ``list`` external command in the same group source followed by ``map`` for each group. list Optional external shell command that should return the list of all groups for this group source (separated by space characters or by carriage returns). reverse Optional external shell command used to find the group(s) of a single node. The variable *$NODE* is replaced before executing the command. If this upcall is not specified, the reverse operation is computed in memory by the library from the *list* and *map* external calls. Also, if the number of nodes to reverse is greater than the number of available groups, the *reverse* external command is avoided automatically. cache_time Number of seconds each upcall result is kept in cache, in memory only. Default is 3600 seconds. This is useful only for daemons using nodegroups. When the library executes a group source external shell command, the current working directory is first set to the corresponding confdir. This allows the use of relative paths for third party files in the command. In addition to the context-dependent $GROUP and $NODE variables described above, the two following variables are always available and also replaced before executing shell commands: * *$CFGDIR* is replaced by groups.conf highest priority base directory path * *$SOURCE* is replaced by current source name Each external command might return a non-zero return code when the operation is not doable. But if the call returns zero, for instance for a non-existing group, the user will not receive any error when trying to resolve such an unknown group. The desired behaviour is up to the system administrator.
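Once a group source is configured, these upcalls stay hidden behind the library's normal node set API. A minimal Python sketch, assuming a hypothetical `oss` group with hypothetical membership: ::

    from ClusterShell.NodeSet import NodeSet

    # "@oss" triggers the map upcall of the default group source
    print(NodeSet("@oss"))                      # e.g. node[10-19]
    # regroup() relies on the list/map (or reverse) upcalls
    print(NodeSet("node[10-19]").regroup())     # e.g. @oss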
RESOURCE USAGE ============== All external command results are cached in memory to avoid multiple calls. Each result is kept for a limited amount of time. See cache_time option to tune this behaviour. EXAMPLES ======== Simple configuration file for local groups and slurm partitions binding. *groups.conf* ------------- | [Main] | default: local | confdir: /etc/clustershell/groups.conf.d $CFGDIR/groups.conf.d | autodir: /etc/clustershell/groups.d $CFGDIR/groups.d | | [local] | map: sed -n 's/^$GROUP:\(.*\)/\1/p' /etc/clustershell/groups | list: sed -n \'s/^\\(``[0-9A-Za-z_-]``\*\\):.*/\\1/p' /etc/clustershell/groups | | [slurm] | map: sinfo -h -o "%N" -p $GROUP | all: sinfo -h -o "%N" | list: sinfo -h -o "%P" | reverse: sinfo -h -N -o "%P" -n $NODE FILES ===== *$CLUSTERSHELL_CFGDIR/groups.conf* (defaults to */etc/clustershell/groups.conf*) Global node groups configuration file. *$CLUSTERSHELL_CFGDIR/groups.conf.d/* (defaults to */etc/clustershell/groups.conf.d/*) Recommended directory for additional configuration files. *$CLUSTERSHELL_CFGDIR/groups.d/* (defaults to */etc/clustershell/groups.d/*) Recommended directory for *autodir*, where native group definition files (.yaml files) are found. *$XDG_CONFIG_HOME/clustershell/groups.conf* Main user groups.conf configuration file. If $XDG_CONFIG_HOME is not defined, *$HOME/.config/clustershell/groups.conf* is used instead. *$HOME/.local/etc/clustershell/groups.conf* Local groups.conf user configuration file (default installation for pip --user) SEE ALSO ======== ``clubak``\(1), ``cluset``\(1), ``clush``\(1), ``nodeset``\(1) http://clustershell.readthedocs.org/ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/doc/txt/nodeset.txt0000644104717000001440000002376014505632065017372 0ustar00sthiellusers========= nodeset ========= ----------------------------------- compute advanced nodeset operations ----------------------------------- :Author: Stephane Thiell :Date: 2023-09-29 :Copyright: GNU Lesser General Public License version 2.1 or later (LGPLv2.1+) :Version: 1.9.2 :Manual section: 1 :Manual group: ClusterShell User Manual SYNOPSIS ======== ``nodeset`` [OPTIONS] [COMMAND] [nodeset1 [OPERATION] nodeset2|...] DESCRIPTION =========== Note: ``nodeset`` and ``cluset`` are the same command. ``nodeset`` is an utility command provided with the ClusterShell library which implements some features of ClusterShell's NodeSet and RangeSet Python classes. It provides easy manipulation of 1D or nD-indexed cluster nodes and node groups. Also, ``nodeset`` is automatically bound to the library node group resolution mechanism. Thus, it is especially useful to enhance cluster aware administration shell scripts. OPTIONS ======= --version show program's version number and exit -h, --help show this help message and exit -s GROUPSOURCE, --groupsource=GROUPSOURCE optional ``groups.conf``\(5) group source to use --groupsconf=FILE use alternate config file for groups.conf(5) Commands: -c, --count show number of nodes in nodeset(s) -e, --expand expand nodeset(s) to separate nodes (see also -S *SEPARATOR*) -f, --fold fold nodeset(s) (or separate nodes) into one nodeset -l, --list list node groups, list node groups and nodes (``-ll``) or list node groups, nodes and node count (``-lll``). When no argument is specified at all, this command will list all node group names found in selected group source (see also -s *GROUPSOURCE*). 
If any nodesets are specified as argument, this command will find the node groups these nodes belong to (individually). Optionally for each group, the fraction of these nodes being members of the group may be displayed (with ``-ll``), and also member count/total group node count (with ``-lll``). If a single hyphen-minus (-) is given as a nodeset, it will be read from standard input. -r, --regroup fold nodes using node groups (see -s *GROUPSOURCE*) --groupsources list all active group sources (see ``groups.conf``\(5)) Operations: -x SUB_NODES, --exclude=SUB_NODES exclude specified nodeset -i AND_NODES, --intersection=AND_NODES calculate nodesets intersection -X XOR_NODES, --xor=XOR_NODES calculate symmetric difference between nodesets Options: -a, --all call external node groups support to display all nodes --autostep=AUTOSTEP enable a-b/step style syntax when folding nodesets, value is min node count threshold (integer '4', percentage '50%' or 'auto'). If not specified, auto step is disabled (best for compatibility with other cluster tools). Example: with autostep=4, "node2 node4 node6" folds into node[2,4,6], but with autostep=3, "node2 node4 node6" folds into node[2-6/2]. -d, --debug output more messages for debugging purpose -q, --quiet be quiet, print essential output only -R, --rangeset switch to RangeSet instead of NodeSet. Useful when working on numerical cluster ranges, eg. 1,5,18-31 -G, --groupbase hide group source prefix (always `@groupname`) -S SEPARATOR, --separator=SEPARATOR separator string to use when expanding nodesets (default: ' ') -O FORMAT, --output-format=FORMAT output format (default: '%s') -I SLICE_RANGESET, --slice=SLICE_RANGESET return sliced off result; examples of SLICE_RANGESET are "0" for simple index selection, or "1-9/2,16" for complex rangeset selection --split=MAXSPLIT split result into a number of subsets --contiguous split result into contiguous subsets (ie. for nodeset, subsets will contain nodes with same pattern name and a contiguous range of indexes, like foobar[1-100]; for rangeset, subsets will consist of contiguous index ranges) --axis=RANGESET for nD nodesets, fold along provided axis only. Axes are indexed from 1 to n and can be specified here either using the rangeset syntax, eg. '1', '1-2', '1,3', or by a single negative number meaning that the axis is counted from the end. Because some nodesets may have several different dimensions, axis indices are silently truncated to fall in the allowed range. --pick=N pick N node(s) at random in nodeset For a short explanation of these options, see ``-h, --help``. If a single hyphen-minus (-) is given as a nodeset, it will be read from standard input. EXTENDED PATTERNS ================= The ``nodeset`` command benefits from ClusterShell NodeSet basic arithmetic addition. This feature extends recognized string patterns by supporting operators matching all Operations seen previously. String patterns are read from left to right, processing any character operators accordingly. Supported character operators ``,`` indicates that the *union* of both left and right nodeset should be computed before continuing ``!`` indicates the *difference* operation ``&`` indicates the *intersection* operation ``^`` indicates the *symmetric difference* (XOR) operation Care should be taken to escape these characters as needed when the shell does not interpret them literally.
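These operators are also understood by the ClusterShell NodeSet class itself, so the same expressions can be evaluated from Python; a minimal sketch: ::

    from ClusterShell.NodeSet import NodeSet

    # extended patterns are parsed directly by the constructor
    print(NodeSet("node[0-10]!node[8-10]"))                # node[0-7]
    # equivalent set-like operators are available as well
    print(NodeSet("node[0-10]") ^ NodeSet("node[5-13]"))   # node[0-4,11-13]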
Examples of use of extended patterns :$ nodeset -f node[0-7],node[8-10]: | node[0-10] :$ nodeset -f node[0-10]\!node[8-10]: | node[0-7] :$ nodeset -f node[0-10]\&node[5-13]: | node[5-10] :$ nodeset -f node[0-10]^node[5-13]: | node[0-4,11-13] Example of advanced usage :$ nodeset -f @gpu^@slurm\:bigmem!@chassis[1-9/2]: This computes a folded nodeset containing nodes found in group @gpu and @slurm:bigmem, but not in both, minus the nodes found in odd chassis groups from 1 to 9. "All nodes" extension The ``@*`` and ``@SOURCE:*`` special notations may be used in extended patterns to represent all nodes (in SOURCE) according to the *all* external shell command (see ``groups.conf``\(5)) and are equivalent to: :$ nodeset [-s SOURCE] -a -f: Group names in expressions The ``@@SOURCE`` notation may be used to access all group names from the specified SOURCE (or from the default group source when just ``@@`` is used) in node set expressions; this works with either file-based group sources or with external group sources that have the *list* upcall defined (see ``groups.conf``\(5)): :$ nodeset -f @@rack: | J[1-3] NODE WILDCARDS ============== Any wildcard mask found is matched against all nodes from the group source (see ``groups.conf``\(5) and the ``-a/--all`` option above). ``*`` means match zero or more characters of any type; ``?`` means match exactly one character of any type. This can be especially useful for server farms, or when cluster node names differ. Say that your group configuration is set to return the following "all nodes": :$ nodeset -f -a: | bckserv[1-2],dbserv[1-4],wwwserv[1-9] Then, you can use wildcards to select particular nodes, as shown below: :$ nodeset -f 'www\*': | wwwserv[1-9] :$ nodeset -f 'www\*[1-4]': | wwwserv[1-4] :$ nodeset -f '\*serv1': | bckserv1,dbserv1,wwwserv1 Wildcard masks are resolved prior to extended patterns, but each mask is evaluated as a whole node set operand. In the example below, we select all nodes matching ``*serv*`` before removing all nodes matching ``www*``: :$ nodeset -f '\*serv\*\!www\*': | bckserv[1-2],dbserv[1-4] EXIT STATUS =========== An exit status of zero indicates success of the ``nodeset`` command. A non-zero exit status indicates failure.
EXAMPLES =========== Getting the node count :$ nodeset -c node[0-7,32-159]: | 136 :$ nodeset -c node[0-7,32-159] node[160-163]: | 140 :$ nodeset -c dc[1-2]n[100-199]: | 200 :$ nodeset -c @login: | 4 Folding nodesets :$ nodeset -f node[0-7,32-159] node[160-163]: | node[0-7,32-163] :$ echo node3 node6 node1 node2 node7 node5 | nodeset -f: | node[1-3,5-7] :$ nodeset -f dc1n2 dc2n2 dc1n1 dc2n1: | dc[1-2]n[1-2] :$ nodeset --axis=1 -f dc1n2 dc2n2 dc1n1 dc2n1: | dc[1-2]n1,dc[1-2]n2 Expanding nodesets :$ nodeset -e node[160-163]: | node160 node161 node162 node163 :$ echo 'dc[1-2]n[2-6/2]' | nodeset -e: | dc1n2 dc1n4 dc1n6 dc2n2 dc2n4 dc2n6 Excluding nodes from nodeset :$ nodeset -f node[32-159] -x node33: | node[32,34-159] Computing nodesets intersection :$ nodeset -f node[32-159] -i node[0-7,20-21,32,156-159]: | node[32,156-159] Computing nodesets symmetric difference (xor) :$ nodeset -f node[33-159] --xor node[32-33,156-159]: | node[32,34-155] Splitting nodes into several nodesets (expanding results) :$ nodeset --split=3 -e node[1-9]: | node1 node2 node3 | node4 node5 node6 | node7 node8 node9 Splitting non-contiguous nodesets (folding results) :$ nodeset --contiguous -f node2 node3 node4 node8 node9: | node[2-4] | node[8-9] :$ nodeset --contiguous -f dc[1,3]n[1-2,4-5]: | dc1n[1-2] | dc1n[4-5] | dc3n[1-2] | dc3n[4-5] HISTORY ======= Command syntax has been changed since ``nodeset`` command available with ClusterShell v1.1. Operations, like *--intersection* or *-x*, are now specified between nodesets in the command line. ClusterShell v1.1: :$ nodeset -f -x node[3,5-6,9] node[1-9]: | node[1-2,4,7-8] ClusterShell v1.2+: :$ nodeset -f node[1-9] -x node[3,5-6,9]: | node[1-2,4,7-8] ``cluset`` was added in 1.7.3 to avoid a conflict with xCAT's ``nodeset`` command and also to conform with ClusterShell's "clu*" command nomenclature. SEE ALSO ======== ``clubak``\(1), ``cluset``\(1), ``clush``\(1), ``groups.conf``\(5). http://clustershell.readthedocs.org/ BUG REPORTS =========== Use the following URL to submit a bug report or feedback: https://github.com/cea-hpc/clustershell/issues ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3243294 ClusterShell-1.9.2/lib/0000755104717000001440000000000014505640536014344 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3303297 ClusterShell-1.9.2/lib/ClusterShell/0000755104717000001440000000000014505640536016755 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3313296 ClusterShell-1.9.2/lib/ClusterShell/CLI/0000755104717000001440000000000014505640536017364 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/lib/ClusterShell/CLI/Clubak.py0000755104717000001440000001525314505632065021146 0ustar00sthiellusers# # Copyright (C) 2010-2012 CEA/DAM # Copyright (C) 2017-2018 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. 
# # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ format dsh/pdsh-like output for humans and more For help, type:: $ clubak --help """ from __future__ import print_function import sys from ClusterShell.MsgTree import MsgTree, MODE_DEFER, MODE_TRACE from ClusterShell.NodeSet import NodeSet, NodeSetParseError, std_group_resolver from ClusterShell.NodeSet import set_std_group_resolver_config from ClusterShell.CLI.Display import Display, THREE_CHOICES from ClusterShell.CLI.Display import sys_stdin from ClusterShell.CLI.Error import GENERIC_ERRORS, handle_generic_error from ClusterShell.CLI.OptionParser import OptionParser from ClusterShell.CLI.Utils import nodeset_cmpkey def display_tree(tree, disp, out): """display sub-routine for clubak -T (msgtree trace mode)""" togh = True offset = 2 reldepth = -offset reldepths = {} line_mode = disp.line_mode for msgline, keys, depth, nchildren in tree.walk_trace(): if togh: if depth in reldepths: reldepth = reldepths[depth] else: reldepth = reldepths[depth] = reldepth + offset nodeset = NodeSet.fromlist(keys) if line_mode: out.write(str(nodeset) + ':\n') else: out.write(disp.format_header(nodeset, reldepth)) out.write(' ' * reldepth + bytes(msgline).decode(errors='replace') + '\n') togh = nchildren != 1 def display(tree, disp, gather, trace_mode, enable_nodeset_key): """nicely display MsgTree instance `tree' content according to `disp' Display object and `gather' boolean flag""" if trace_mode: display_tree(tree, disp, sys.stdout) sys.stdout.flush() return if gather: if enable_nodeset_key: # lambda to create a NodeSet from keys returned by walk() ns_getter = lambda x: NodeSet.fromlist(x[1]) for nodeset in sorted((ns_getter(item) for item in tree.walk()), key=nodeset_cmpkey): disp.print_gather(nodeset, tree[nodeset[0]]) else: for msg, key in tree.walk(): disp.print_gather_keys(key, msg) else: if enable_nodeset_key: # nodes are automagically sorted by NodeSet for node in NodeSet.fromlist(tree.keys()).nsiter(): disp.print_gather(node, tree[str(node)]) else: for key in tree.keys(): disp.print_gather_keys([ key ], tree[key]) def clubak(): """script subroutine""" # Argument management parser = OptionParser("%prog [options]") parser.install_groupsconf_option() parser.install_display_options(verbose_options=True, separator_option=True, dshbak_compat=True, msgtree_mode=True) options = parser.parse_args()[0] set_std_group_resolver_config(options.groupsconf) if options.interpret_keys == THREE_CHOICES[-1]: # auto? enable_nodeset_key = None # AUTO else: enable_nodeset_key = (options.interpret_keys == THREE_CHOICES[2]) # Create new message tree if options.trace_mode: tree_mode = MODE_TRACE else: tree_mode = MODE_DEFER tree = MsgTree(mode=tree_mode) fast_mode = options.fast_mode if fast_mode: if tree_mode != MODE_DEFER or options.line_mode: parser.error("incompatible tree options") preload_msgs = {} separator = options.separator.encode() # Feed the tree from standard input lines for line in sys_stdin(): try: linestripped = line.rstrip(b'\r\n') if options.verbose or options.debug: sys.stdout.write('INPUT ' + linestripped.decode(errors='replace') + '\n') key, content = linestripped.split(separator, 1) # NodeSet requires encoded string key = key.strip().decode(errors='replace') if not key: raise ValueError("no node found") if enable_nodeset_key is False: # interpret-keys=never? 
keyset = [ key ] else: try: keyset = NodeSet(key) except NodeSetParseError: if enable_nodeset_key: # interpret-keys=always? raise enable_nodeset_key = False # auto => switch off keyset = [ key ] if fast_mode: for node in keyset: preload_msgs.setdefault(node, []).append(content) else: for node in keyset: tree.add(node, content) except ValueError as ex: raise ValueError('%s: "%s"' % (ex, linestripped.decode(errors='replace'))) if fast_mode: # Messages per node have been aggregated, now add to tree one # full msg per node for key, wholemsg in preload_msgs.items(): tree.add(key, b'\n'.join(wholemsg)) # Display results try: disp = Display(options) except ValueError as exc: parser.error("option mismatch (%s)" % exc) return if options.debug: std_group_resolver().set_verbosity(1) print("clubak: line_mode=%s gather=%s tree_depth=%d" % (bool(options.line_mode), bool(disp.gather), tree._depth()), file=sys.stderr) display(tree, disp, disp.gather or disp.regroup, \ options.trace_mode, enable_nodeset_key is not False) def main(): """main script function""" try: clubak() except GENERIC_ERRORS as ex: sys.exit(handle_generic_error(ex)) except ValueError as ex: print("%s:" % sys.argv[0], ex, file=sys.stderr) sys.exit(1) sys.exit(0) if __name__ == '__main__': main() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/lib/ClusterShell/CLI/Clush.py0000755104717000001440000013546014505632065021026 0ustar00sthiellusers# # Copyright (C) 2007-2016 CEA/DAM # Copyright (C) 2015-2022 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ Execute cluster commands in parallel clush is an utility program to run commands on a cluster which benefits from the ClusterShell library and its Ssh worker. It features an integrated output results gathering system (dshbak-like), can get node groups by running predefined external commands and can redirect lines read on its standard input to the remote commands. When no command are specified, clush runs interactively. 
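Example of use (gathered output, node names are site-specific): ::

    $ clush -w node[3-5,62] -b uname -r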
""" from __future__ import print_function import getpass import logging import os from os.path import abspath, dirname, exists, isdir, join import random import resource import shlex import signal import sys import time import threading # Python 3 compatibility try: raw_input except NameError: raw_input = input from ClusterShell.Defaults import DEFAULTS, _load_workerclass from ClusterShell.CLI.Config import ClushConfig, ClushConfigError from ClusterShell.CLI.Display import Display, sys_stdin from ClusterShell.CLI.Display import VERB_QUIET, VERB_STD, VERB_VERB, VERB_DEBUG from ClusterShell.CLI.OptionParser import OptionParser from ClusterShell.CLI.Error import GENERIC_ERRORS, handle_generic_error from ClusterShell.CLI.Utils import bufnodeset_cmpkey, human_bi_bytes_unit from ClusterShell.Event import EventHandler from ClusterShell.MsgTree import MsgTree from ClusterShell.NodeSet import RESOLVER_NOGROUP, set_std_group_resolver_config from ClusterShell.NodeSet import NodeSet, NodeSetParseError, std_group_resolver from ClusterShell.Task import Task, task_self class UpdatePromptException(Exception): """Exception used by the signal handler""" class StdInputHandler(EventHandler): """Standard input event handler class.""" def __init__(self, worker): EventHandler.__init__(self) self.master_worker = worker def ev_msg(self, port, msg): """invoked when a message is received from port object""" if not msg: self.master_worker.set_write_eof() return # Forward messages to master worker self.master_worker.write(msg) class OutputHandler(EventHandler): """Base class for generic output handlers.""" def __init__(self, prog=None): EventHandler.__init__(self) self._runtimer = None self._prog = prog if prog else os.path.basename(sys.argv[0]) def runtimer_init(self, task, ntotal=0): """Init timer for live command-completed progressmeter.""" thandler = RunTimer(task, ntotal, prog=self._prog) self._runtimer = task.timer(1.33, thandler, interval=1./3., autoclose=True) def _runtimer_clean(self): """Hide runtimer counter""" if self._runtimer: self._runtimer.eh.erase_line() def _runtimer_set_dirty(self): """Force redisplay of counter""" if self._runtimer: self._runtimer.eh.set_dirty() def _runtimer_finalize(self, worker): """Finalize display of runtimer counter""" if self._runtimer: self._runtimer.eh.finalize(worker.task.default("USER_interactive")) self._runtimer.invalidate() self._runtimer = None def update_prompt(self, worker): """ If needed, notify main thread to update its prompt by sending a SIGUSR1 signal. We use task-specific user-defined variable to record current states (prefixed by USER_). 
""" worker.task.set_default("USER_running", False) if worker.task.default("USER_handle_SIGUSR1"): os.kill(os.getpid(), signal.SIGUSR1) def ev_start(self, worker): """Worker is starting.""" if self._runtimer: self._runtimer.eh.start_time = time.time() def ev_written(self, worker, node, sname, size): """Bytes written on worker""" if self._runtimer: self._runtimer.eh.bytes_written += size class DirectOutputHandler(OutputHandler): """Direct output event handler class.""" def __init__(self, display, prog=None): OutputHandler.__init__(self, prog=prog) self._display = display def ev_read(self, worker, node, sname, msg): if sname == worker.SNAME_STDOUT: self._display.print_line(node, msg) elif sname == worker.SNAME_STDERR: self._display.print_line_error(node, msg) def ev_hup(self, worker, node, rc): if rc > 0: verb = VERB_QUIET if self._display.maxrc: verb = VERB_STD self._display.vprint_err(verb, "%s: %s: exited with exit code %d" % (self._prog, node, rc)) def ev_close(self, worker, timedout): if timedout: nodeset = NodeSet._fromlist1(worker.iter_keys_timeout()) self._display.vprint_err(VERB_QUIET, "%s: %s: command timeout" % (self._prog, nodeset)) self.update_prompt(worker) class DirectOutputDirHandler(DirectOutputHandler): """Direct output files event handler class. pssh style""" def __init__(self, display, ns, prog=None): DirectOutputHandler.__init__(self, display, prog) self._ns = ns self._outfiles = {} self._errfiles = {} if display.outdir: for n in self._ns: self._outfiles[n] = open(join(display.outdir, n), mode="w") if display.errdir: for n in self._ns: self._errfiles[n] = open(join(display.errdir, n), mode="w") def ev_read(self, worker, node, sname, msg): DirectOutputHandler.ev_read(self, worker, node, sname, msg) if sname == worker.SNAME_STDOUT: if self._display.outdir: self._outfiles[node].write("{}\n".format(msg.decode())) elif sname == worker.SNAME_STDERR: if self._display.errdir: self._errfiles[node].write("{}\n".format(msg.decode())) def ev_close(self, worker, timedout): DirectOutputHandler.ev_close(self, worker, timedout) if self._display.outdir: for v in self._outfiles.values(): v.close() if self._display.errdir: for v in self._errfiles.values(): v.close() class DirectProgressOutputHandler(DirectOutputHandler): """Direct output event handler class with progress support.""" # NOTE: This class is very similar to DirectOutputHandler, thus it could # first look overkill, but merging both is slightly impacting ev_read # performance of current DirectOutputHandler. 
def ev_read(self, worker, node, sname, msg): self._runtimer_clean() # it is ~10% faster to avoid calling super here if sname == worker.SNAME_STDOUT: self._display.print_line(node, msg) elif sname == worker.SNAME_STDERR: self._display.print_line_error(node, msg) def ev_close(self, worker, timedout): self._runtimer_clean() DirectOutputHandler.ev_close(self, worker, timedout) class CopyOutputHandler(DirectProgressOutputHandler): """Copy output event handler.""" def __init__(self, display, reverse=False, prog=None): DirectOutputHandler.__init__(self, display, prog=prog) self.reverse = reverse def ev_close(self, worker, timedout): """A copy worker has finished.""" for rc, nodes in worker.iter_retcodes(): if rc == 0: if self.reverse: self._display.vprint(VERB_VERB, "%s:`%s' -> `%s'" % \ (nodes, worker.source, worker.dest)) else: self._display.vprint(VERB_VERB, "`%s' -> %s:`%s'" % \ (worker.source, nodes, worker.dest)) break # multiple copy workers may be running (handled by this task's thread) copies = worker.task.default("USER_copies") - 1 worker.task.set_default("USER_copies", copies) if copies == 0: self._runtimer_finalize(worker) # handle timeout DirectOutputHandler.ev_close(self, worker, timedout) class GatherOutputHandler(OutputHandler): """Gathered output event handler class (e.g. clush -b).""" def __init__(self, display, prog=None): OutputHandler.__init__(self, prog=prog) self._display = display def ev_read(self, worker, node, sname, msg): if sname == worker.SNAME_STDOUT: if self._display.verbosity == VERB_VERB: self._display.print_line(node, worker.current_msg) elif sname == worker.SNAME_STDERR: self._runtimer_clean() self._display.print_line_error(node, msg) self._runtimer_set_dirty() def ev_close(self, worker, timedout): # Worker is closing -- it's time to gather results... self._runtimer_finalize(worker) # Display command output, try to order buffers by rc nodesetify = lambda v: (v[0], NodeSet._fromlist1(v[1])) cleaned = False for _rc, nodelist in sorted(worker.iter_retcodes()): ns_remain = NodeSet._fromlist1(nodelist) # Then order by node/nodeset (see nodeset_cmpkey) for buf, nodeset in sorted(map(nodesetify, worker.iter_buffers(nodelist)), key=bufnodeset_cmpkey): if not cleaned: # clean runtimer line before printing first result self._runtimer_clean() cleaned = True self._display.print_gather(nodeset, buf) ns_remain.difference_update(nodeset) if ns_remain: self._display.print_gather_finalize(ns_remain) self._display.flush() self._close_common(worker) # Notify main thread to update its prompt self.update_prompt(worker) def _close_common(self, worker): verbexit = VERB_QUIET if self._display.maxrc: verbexit = VERB_STD # Display return code if not ok ( != 0) for rc, nodelist in worker.iter_retcodes(): if rc != 0: nsdisp = ns = NodeSet._fromlist1(nodelist) if self._display.verbosity > VERB_QUIET and len(ns) > 1: nsdisp = "%s (%d)" % (ns, len(ns)) msgrc = "%s: %s: exited with exit code %d" % (self._prog, nsdisp, rc) self._display.vprint_err(verbexit, msgrc) # Display nodes that didn't answer within command timeout delay if worker.num_timeout() > 0: self._display.vprint_err(verbexit, "%s: %s: command timeout" % \ (self._prog, NodeSet._fromlist1(worker.iter_keys_timeout()))) class SortedOutputHandler(GatherOutputHandler): """Sorted by node output event handler class (e.g. 
clush -L).""" def ev_close(self, worker, timedout): # Overrides GatherOutputHandler.ev_close() self._runtimer_finalize(worker) # Display command output, try to order buffers by rc for _rc, nodelist in sorted(worker.iter_retcodes()): for node in nodelist: # NOTE: msg should be a MsgTreeElem as Display will iterate # over it to display multiple lines. As worker.node_buffer() # returns either a string or None if there is no output, it # cannot be used here. We use worker.iter_node_buffers() with # a single node as match_keys instead. for node, msg in worker.iter_node_buffers(match_keys=(node,)): self._display.print_gather(node, msg) self._close_common(worker) # Notify main thread to update its prompt self.update_prompt(worker) class LiveGatherOutputHandler(GatherOutputHandler): """Live line-gathered output event handler class (-bL).""" def __init__(self, display, nodes, prog=None): assert nodes is not None, "cannot gather local command" GatherOutputHandler.__init__(self, display, prog=prog) self._nodes = NodeSet(nodes) self._nodecnt = dict.fromkeys(self._nodes, 0) self._mtreeq = [] self._offload = 0 def ev_read(self, worker, node, sname, msg): if sname != worker.SNAME_STDOUT: GatherOutputHandler.ev_read(self, worker, node, sname, msg) return # Read new line from node self._nodecnt[node] += 1 cnt = self._nodecnt[node] if len(self._mtreeq) < cnt: self._mtreeq.append(MsgTree()) self._mtreeq[cnt - self._offload - 1].add(node, msg) self._live_line(worker) def ev_hup(self, worker, node, rc): if self._mtreeq and node not in self._mtreeq[0]: # forget a node that doesn't answer to continue live line # gathering anyway self._nodes.remove(node) self._live_line(worker) def _live_line(self, worker): # if all nodes have replied, display gathered line while self._mtreeq and len(self._mtreeq[0]) == len(self._nodes): mtree = self._mtreeq.pop(0) self._offload += 1 self._runtimer_clean() nodesetify = lambda v: (v[0], NodeSet.fromlist(v[1])) for buf, nodeset in sorted(map(nodesetify, mtree.walk()), key=bufnodeset_cmpkey): self._display.print_gather(nodeset, buf) self._runtimer_set_dirty() def ev_close(self, worker, timedout): # Worker is closing -- it's time to gather results... 
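        # flush any message trees still queued (e.g. lines from nodes that
        # stopped answering) before displaying the common footer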
self._runtimer_finalize(worker) for mtree in self._mtreeq: nodesetify = lambda v: (v[0], NodeSet.fromlist(v[1])) for buf, nodeset in sorted(map(nodesetify, mtree.walk()), key=bufnodeset_cmpkey): self._display.print_gather(nodeset, buf) self._close_common(worker) # Notify main thread to update its prompt self.update_prompt(worker) class RunTimer(EventHandler): """Running progress timer event handler""" def __init__(self, task, total, prog=None): EventHandler.__init__(self) self.task = task self.total = total self.cnt_last = -1 self.tslen = len(str(self.total)) self.wholelen = 0 self.started = False # updated by worker handler for progress self.start_time = 0 self.bytes_written = 0 self._prog = prog if prog else os.path.basename(sys.argv[0]) def ev_timer(self, timer): self.update() def set_dirty(self): self.cnt_last = -1 def erase_line(self): if self.wholelen: sys.stderr.write(' ' * self.wholelen + '\r') self.wholelen = 0 def update(self): """Update runtime progress info""" wrbwinfo = '' if self.bytes_written > 0: bandwidth = self.bytes_written/(time.time() - self.start_time) wrbwinfo = " write: %s/s" % human_bi_bytes_unit(bandwidth) gwcnt = len(self.task.gateways) if gwcnt: # tree mode act_targets = NodeSet() for gw, (chan, metaworkers) in self.task.gateways.items(): act_targets.updaten(mw.gwtargets[gw] for mw in metaworkers) cnt = len(act_targets) + len(self.task._engine.clients()) - gwcnt gwinfo = ' gw %d' % gwcnt else: cnt = len(self.task._engine.clients()) gwinfo = '' if self.bytes_written > 0 or cnt != self.cnt_last: self.cnt_last = cnt # display completed/total clients towrite = '%s: %*d/%*d%s%s\r' % (self._prog, self.tslen, self.total - cnt, self.tslen, self.total, gwinfo, wrbwinfo) self.wholelen = len(towrite) sys.stderr.write(towrite) self.started = True def finalize(self, force_cr): """finalize display of runtimer""" if not self.started: return self.erase_line() # display completed/total clients fmt = '%s: %*d/%*d' if force_cr: fmt += '\n' else: fmt += '\r' sys.stderr.write(fmt % (self._prog, self.tslen, self.total, self.tslen, self.total)) def signal_handler(signum, frame): """Signal handler used for main thread notification""" if signum == signal.SIGUSR1: signal.signal(signal.SIGUSR1, signal.SIG_IGN) raise UpdatePromptException() def get_history_file(): """Turn the history file path""" return join(os.environ["HOME"], ".clush_history") def readline_setup(): """ Configure readline to automatically load and save a history file named .clush_history """ import readline readline.parse_and_bind("tab: complete") readline.set_completer_delims("") try: readline.read_history_file(get_history_file()) except IOError: pass def ttyloop(task, nodeset, timeout, display, remote, trytree): """Manage the interactive prompt to run command""" readline_avail = False interactive = task.default("USER_interactive") if interactive: try: import readline readline_setup() readline_avail = True except ImportError: pass display.vprint(VERB_STD, \ "Enter 'quit' to leave this interactive mode") rc = 0 ns = NodeSet(nodeset) ns_info = True cmd = "" while task.default("USER_running") or \ (interactive and cmd.lower() != 'quit'): try: # Set SIGUSR1 handler if needed if task.default("USER_handle_SIGUSR1"): signal.signal(signal.SIGUSR1, signal_handler) if task.default("USER_interactive") and \ not task.default("USER_running"): if ns_info: display.vprint(VERB_QUIET, \ "Working with nodes: %s" % ns) ns_info = False prompt = "clush> " else: prompt = "" try: cmd = raw_input(prompt) assert cmd is not None, "Result of 
raw_input() is None!" finally: signal.signal(signal.SIGUSR1, signal.SIG_IGN) except EOFError: print() return except UpdatePromptException: if task.default("USER_interactive"): continue return except KeyboardInterrupt as kbe: # Caught SIGINT here (main thread) but the signal will also reach # subprocesses (that will most likely kill them) if display.gather: # Suspend task, so we can safely access its data from here task.suspend() # If USER_running is not set, the task had time to finish, # that could mean all subprocesses have been killed and all # handlers have been processed. if not task.default("USER_running"): # let's clush_excepthook handle the rest raise kbe # If USER_running is set, the task didn't have time to finish # its work, so we must print something for the user... print_warn = False # Display command output, but cannot order buffers by rc nodesetify = lambda v: (v[0], NodeSet._fromlist1(v[1])) for buf, nodeset in sorted(map(nodesetify, task.iter_buffers()), key=bufnodeset_cmpkey): if not print_warn: print_warn = True display.vprint_err(VERB_STD, \ "Warning: Caught keyboard interrupt!") display.print_gather(nodeset, buf) # Return code handling verbexit = VERB_QUIET if display.maxrc: verbexit = VERB_STD ns_ok = NodeSet() for rc, nodelist in task.iter_retcodes(): ns_ok.add(NodeSet._fromlist1(nodelist)) if rc != 0: # Display return code if not ok ( != 0) nsdisp = ns = NodeSet._fromlist1(nodelist) if display.verbosity >= VERB_QUIET and len(ns) > 1: nsdisp = "%s (%d)" % (ns, len(ns)) msgrc = "clush: %s: exited with exit code %d" % (nsdisp, rc) display.vprint_err(verbexit, msgrc) # Add uncompleted nodeset to exception object kbe.uncompleted_nodes = ns - ns_ok # Display nodes that didn't answer within command timeout delay if task.num_timeout() > 0: display.vprint_err(verbexit, \ "clush: %s: command timeout" % \ NodeSet._fromlist1(task.iter_keys_timeout())) raise kbe if task.default("USER_running"): ns_reg, ns_unreg = NodeSet(), NodeSet() for client in task._engine.clients(): if client.registered: ns_reg.add(client.key) else: ns_unreg.add(client.key) if ns_unreg: pending = "\nclush: pending(%d): %s" % (len(ns_unreg), ns_unreg) else: pending = "" display.vprint_err(VERB_QUIET, "clush: interrupt (^C to abort task)") gws = list(task.gateways) if not gws: display.vprint_err(VERB_QUIET, "clush: in progress(%d): %s%s" % (len(ns_reg), ns_reg, pending)) else: display.vprint_err(VERB_QUIET, "clush: in progress(%d): %s%s\n" "clush: [tree] open gateways(%d): %s" % (len(ns_reg), ns_reg, pending, len(gws), NodeSet._fromlist1(gws))) for gw, (chan, metaworkers) in task.gateways.items(): act_targets = NodeSet.fromlist(mw.gwtargets[gw] for mw in metaworkers) if act_targets: display.vprint_err(VERB_QUIET, "clush: [tree] in progress(%d) on %s: %s" % (len(act_targets), gw, act_targets)) else: cmdl = cmd.lower() try: ns_info = True if cmdl.startswith('+'): ns.update(cmdl[1:]) elif cmdl.startswith('-'): ns.difference_update(cmdl[1:]) elif cmdl.startswith('@'): ns = NodeSet(cmdl[1:]) elif cmdl == '=': display.gather = not display.gather if display.gather: display.vprint(VERB_STD, \ "Switching to gathered output format") else: display.vprint(VERB_STD, \ "Switching to standard output format") task.set_default("stdout_msgtree", \ display.gather or display.line_mode) ns_info = False continue elif not cmdl.startswith('?'): # if ?, just print ns_info ns_info = False except NodeSetParseError: display.vprint_err(VERB_QUIET, \ "clush: nodeset parse error (ignoring)") if ns_info: continue if cmdl.startswith('!') and 
len(cmd.strip()) > 0: run_command(task, cmd[1:], None, timeout, display, remote, trytree) elif cmdl != "quit": if not cmd: continue if readline_avail: readline.write_history_file(get_history_file()) if task.default("USER_command_prefix"): prefix_cmdl = shlex.split(task.default("USER_command_prefix")) cmd = "%s %s" % (' '.join(prefix_cmdl), cmd) run_command(task, cmd, ns, timeout, display, remote, trytree) return rc def _stdin_thread_start(stdin_port, display): """Standard input reader thread entry point.""" try: # Note: read length should be as large as possible for performance # yet not too large to not introduce artificial latency. # 64k seems to be perfect with an openssh backend (they issue 64k # reads) ; could consider making it an option for e.g. gsissh. bufsize = 64 * 1024 # thread loop: read stdin + send messages to specified port object # use os.read() to work around https://bugs.python.org/issue42717 while True: buf = os.read(sys_stdin().fileno(), bufsize) if not buf: break # send message to specified port object (with ack) stdin_port.msg(buf) except IOError as ex: display.vprint(VERB_VERB, "stdin: %s" % ex) # send a None message to indicate EOF stdin_port.msg(None) def bind_stdin(worker, display): """Create a stdin->port->worker binding: connect specified worker to stdin with the help of a reader thread and a ClusterShell Port object.""" assert sys.stdin is not None and not sys.stdin.isatty() # Create a ClusterShell Port object bound to worker's task. This object # is able to receive messages in a thread-safe manner and then will safely # trigger ev_msg() on a specified event handler. port = worker.task.port(handler=StdInputHandler(worker), autoclose=True) # Launch a dedicated thread to read stdin in blocking mode. Indeed stdin # can be a file, so we cannot use a WorkerSimple here as polling on file # may result in different behaviors depending on selected engine. stdin_thread = threading.Thread(None, _stdin_thread_start, args=(port, display)) # Set thread as daemon because we're sometimes left with data that have # been read but the ssh connection is already closed. stdin_thread.daemon = True stdin_thread.start() def run_command(task, cmd, ns, timeout, display, remote, trytree): """ Create and run the specified command line, displaying results in a dshbak way when gathering is used. """ task.set_default("USER_running", True) if (display.gather or display.line_mode) and ns is not None: if display.gather and display.line_mode: handler = LiveGatherOutputHandler(display, ns) elif not display.gather and display.line_mode: handler = SortedOutputHandler(display) else: handler = GatherOutputHandler(display) if display.verbosity in (VERB_STD, VERB_VERB) or \ (display.progress and display.verbosity > VERB_QUIET): handler.runtimer_init(task, len(ns)) elif display.progress and display.verbosity > VERB_QUIET: handler = DirectProgressOutputHandler(display) handler.runtimer_init(task, len(ns)) elif (display.outdir or display.errdir) and ns is not None: if display.outdir and not exists(display.outdir): os.makedirs(display.outdir) if display.errdir and not exists(display.errdir): os.makedirs(display.errdir) handler = DirectOutputDirHandler(display, ns) else: # this is the simpler but faster output handler handler = DirectOutputHandler(display) stdin = task.default("USER_stdin_worker") # stdin forwarding? 
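    # NOTE (editorial sketch, not part of the original source): the stdin
    # forwarding set up by bind_stdin() above can be reproduced standalone
    # with the public Task API; 'node1' below is a hypothetical target node.
    #
    #   from ClusterShell.Task import task_self
    #   task = task_self()
    #   worker = task.shell("cat", nodes="node1", stdin=True)
    #   worker.write(b"hello\n")    # forwarded to the remote stdin
    #   worker.set_write_eof()      # close remote stdin so 'cat' exits
    #   task.resume()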
prompt_passwd = task.default("USER_password_prompt") # from --mode worker = task.shell(cmd, nodes=ns, handler=handler, timeout=timeout, remote=remote, tree=trytree, stdin=stdin or prompt_passwd is not None) if ns is None: worker.set_key('LOCAL') if prompt_passwd: worker.write(prompt_passwd.encode() + b'\n') if stdin: bind_stdin(worker, display) if prompt_passwd and not stdin: worker.set_write_eof() # we only enabled stdin to send the password task.resume() def run_copy(task, sources, dests, ns, timeout, preserve_flag, display): """run copy command""" task.set_default("USER_running", True) task.set_default("USER_copies", len(sources)) copyhandler = CopyOutputHandler(display) if display.verbosity in (VERB_STD, VERB_VERB): copyhandler.runtimer_init(task, len(ns) * len(sources)) # Sources check for source in sources: if not exists(source): display.vprint_err(VERB_QUIET, 'ERROR: file "%s" not found' % source) clush_exit(1, task) task.copy(source, dests.pop(0), ns, handler=copyhandler, timeout=timeout, preserve=preserve_flag) task.resume() def run_rcopy(task, sources, dests, ns, timeout, preserve_flag, display): """run reverse copy command""" task.set_default("USER_running", True) task.set_default("USER_copies", len(sources)) # Sanity checks for dest in dests: if not exists(dest): display.vprint_err(VERB_QUIET, 'ERROR: directory "%s" not found' % dest) clush_exit(1, task) if not isdir(dest): display.vprint_err(VERB_QUIET, 'ERROR: destination "%s" is not a directory' % dest) clush_exit(1, task) copyhandler = CopyOutputHandler(display, True) if display.verbosity == VERB_STD or display.verbosity == VERB_VERB: copyhandler.runtimer_init(task, len(ns) * len(sources)) for source in sources: task.rcopy(source, dests.pop(0), ns, handler=copyhandler, timeout=timeout, stderr=True, preserve=preserve_flag) task.resume() def set_fdlimit(fd_max, display): """Make open file descriptors soft limit the max.""" soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE) if hard < fd_max: msgfmt = 'Warning: fd_max set to %d but max open files hard limit is %d' display.vprint_err(VERB_VERB, msgfmt % (fd_max, hard)) rlim_max = min(hard, fd_max) if soft != rlim_max: msgfmt = 'Changing max open files soft limit from %d to %d' display.vprint(VERB_DEBUG, msgfmt % (soft, rlim_max)) try: resource.setrlimit(resource.RLIMIT_NOFILE, (rlim_max, hard)) except (ValueError, resource.error) as exc: # Most probably the requested limit exceeds the system imposed limit msgfmt = 'Warning: Failed to set max open files limit to %d (%s)' display.vprint_err(VERB_VERB, msgfmt % (rlim_max, exc)) def ask_pass(): """Prompt for password (--mode with password_prompt=True)""" return getpass.getpass() def clush_exit(status, task=None): """Exit script, flushing stdio buffers and stopping ClusterShell task.""" if task: # Clean, usual termination task.abort() task.join() sys.exit(status) else: # Best effort cleanup if no task is set for stream in [sys.stdout, sys.stderr]: try: stream.flush() except IOError: pass # Use os._exit to avoid threads cleanup os._exit(status) def clush_excepthook(extype, exp, traceback): """Exceptions hook for clush: this method centralizes exception handling from main thread and from (possible) separate task thread. 
    This hook has to be previously installed on startup by overriding
    sys.excepthook and task.excepthook."""
    try:
        raise exp
    except ClushConfigError as econf:
        print("ERROR: %s" % econf, file=sys.stderr)
        clush_exit(1)
    except KeyboardInterrupt as kbe:
        uncomp_nodes = getattr(kbe, 'uncompleted_nodes', None)
        if uncomp_nodes:
            print("Keyboard interrupt (%s did not complete)."
                  % uncomp_nodes, file=sys.stderr)
        else:
            print("Keyboard interrupt.", file=sys.stderr)
        clush_exit(128 + signal.SIGINT)
    except GENERIC_ERRORS as exc:
        clush_exit(handle_generic_error(exc))

    # Error not handled
    task_self().default_excepthook(extype, exp, traceback)

def main():
    """clush script entry point"""
    sys.excepthook = clush_excepthook

    #
    # Argument management
    #
    usage = "%prog [options] command"

    parser = OptionParser(usage)
    parser.add_option("-n", "--nostdin", action="store_true", dest="nostdin",
                      help="don't watch for possible input from stdin")

    parser.install_groupsconf_option()
    parser.install_clush_config_options()
    parser.install_nodes_options()
    parser.install_display_options(verbose_options=True)
    parser.install_filecopy_options()
    parser.install_connector_options()

    (options, args) = parser.parse_args()

    set_std_group_resolver_config(options.groupsconf)

    #
    # Load config file and apply overrides
    #
    config = ClushConfig(options, options.conf)

    # Initialize logging
    if config.verbosity >= VERB_DEBUG:
        logging.basicConfig(level=logging.DEBUG)
        logging.debug("clush: STARTING DEBUG")
    else:
        logging.basicConfig(level=logging.CRITICAL)

    # Should we use ANSI colors for nodes?
    if config.color == "auto":
        color = sys.stdout.isatty() and (options.gatherall or \
                                         sys.stderr.isatty())
    else:
        color = config.color == "always"

    try:
        # Create and configure display object.
        display = Display(options, config, color)
    except ValueError as exc:
        parser.error("option mismatch (%s)" % exc)

    if options.groupsource:
        # Be sure -a/-g with -s source work as expected.
        std_group_resolver().default_source_name = options.groupsource

    # Compute the nodeset and warn for possible use of shell pathname
    # expansion (#225)
    wnodelist = []
    xnodelist = []
    if options.nodes:
        wnodelist = [NodeSet(nodes) for nodes in options.nodes]
    if options.exclude:
        xnodelist = [NodeSet(nodes) for nodes in options.exclude]
    for (opt, nodelist) in (('w', wnodelist), ('x', xnodelist)):
        for nodes in nodelist:
            if len(nodes) == 1 and exists(str(nodes)):
                display.vprint_err(VERB_STD, "Warning: using '-%s %s' and "
                                   "local path '%s' exists, was it expanded "
                                   "by the shell?" % (opt, nodes, nodes))

    # --hostfile support (#235)
    for opt_hostfile in options.hostfile:
        try:
            fnodeset = NodeSet()
            with open(opt_hostfile) as hostfile:
                for line in hostfile.read().splitlines():
                    fnodeset.updaten(nodes for nodes in line.split())
            display.vprint_err(VERB_DEBUG, "Using nodeset %s from hostfile %s"
                               % (fnodeset, opt_hostfile))
            wnodelist.append(fnodeset)
        except IOError as exc:
            # re-raise as OSError to be properly handled
            errno, strerror = exc.args
            raise OSError(errno, strerror, exc.filename)

    # Instantiate target nodeset from command line and hostfile
    nodeset_base = NodeSet.fromlist(wnodelist)
    # Instantiate filter nodeset (command line only)
    nodeset_exclude = NodeSet.fromlist(xnodelist)

    # Specified engine prevails over default engine
    DEFAULTS.engine = options.engine

    # Do we have nodes group?
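    # NOTE (editorial sketch, not part of the original source): the "@group"
    # syntax used by -g/-X below resolves through groups.conf(5). Assuming a
    # hypothetical group source defining @compute as node[1-8]:
    #
    #   from ClusterShell.NodeSet import NodeSet
    #   ns = NodeSet("@compute")    # expanded by the group resolver
    #   ns.update("login[1-2]")     # plain union with another nodeset
    #   print(ns)                   # folded result, e.g. login[1-2],node[1-8]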
task = task_self() task.set_info("debug", config.verbosity >= VERB_DEBUG) if config.verbosity == VERB_DEBUG: std_group_resolver().set_verbosity(1) if options.nodes_all: all_nodeset = NodeSet.fromall() display.vprint(VERB_DEBUG, "Adding nodes from option -a: %s" % \ all_nodeset) nodeset_base.add(all_nodeset) if options.group: grp_nodeset = NodeSet.fromlist(options.group, resolver=RESOLVER_NOGROUP) for grp in grp_nodeset: addingrp = NodeSet("@" + grp) display.vprint(VERB_DEBUG, \ "Adding nodes from option -g %s: %s" % (grp, addingrp)) nodeset_base.update(addingrp) if options.exgroup: grp_nodeset = NodeSet.fromlist(options.exgroup, resolver=RESOLVER_NOGROUP) for grp in grp_nodeset: removingrp = NodeSet("@" + grp) display.vprint(VERB_DEBUG, \ "Excluding nodes from option -X %s: %s" % (grp, removingrp)) nodeset_exclude.update(removingrp) # Do we have an exclude list? (-x ...) nodeset_base.difference_update(nodeset_exclude) if len(nodeset_base) < 1: parser.error('No node to run on.') if options.pick and options.pick < len(nodeset_base): # convert to string for sample as nsiter() is slower for big # nodesets; and we assume options.pick will remain small-ish keep = random.sample(list(nodeset_base), options.pick) nodeset_base.intersection_update(','.join(keep)) if config.verbosity >= VERB_VERB: msg = "Picked random nodes: %s" % nodeset_base print(Display.COLOR_RESULT_FMT % msg) # Set open files limit. set_fdlimit(config.fd_max, display) # # Task management # # check for clush interactive mode interactive = not len(args) and \ not (options.copy or options.rcopy) # check for foreground ttys presence (input) stdin_isafgtty = sys.stdin is not None and sys.stdin.isatty() and \ os.tcgetpgrp(sys.stdin.fileno()) == os.getpgrp() # check for special condition (empty command and stdin not a tty) if interactive and not stdin_isafgtty: # looks like interactive but stdin is not a tty: # switch to non-interactive + disable ssh pseudo-tty interactive = False # SSH: disable pseudo-tty allocation (-T) ssh_options = config.ssh_options or '' ssh_options += ' -T' config._set_main("ssh_options", ssh_options) if options.nostdin and interactive: parser.error("illegal option `--nostdin' in that case") # Force user_interaction if Clush._f_user_interaction for test purposes user_interaction = hasattr(sys.modules[__name__], '_f_user_interaction') if not options.nostdin: # Try user interaction: check for foreground ttys presence (output) stdout_isafgtty = sys.stdout.isatty() and \ os.tcgetpgrp(sys.stdout.fileno()) == os.getpgrp() user_interaction |= stdin_isafgtty and stdout_isafgtty display.vprint(VERB_DEBUG, "User interaction: %s" % user_interaction) if user_interaction: # Standard input is a terminal and we want to perform some user # interactions in the main thread (using blocking calls), so # we run cluster commands in a new ClusterShell Task (a new # thread is created). 
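        # NOTE (editorial sketch, not part of the original source): a
        # minimal illustration of running commands in a separate task
        # thread while the main thread stays free for blocking user
        # interaction (node names hypothetical):
        #
        #   from ClusterShell.Task import Task, task_wait
        #   task = Task()                   # new task bound to a new thread
        #   task.shell("date", nodes="node[1-2]")
        #   task.resume()                   # executes asynchronously
        #   task_wait()                     # block until all tasks complete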
task = Task() # else: perform everything in the main thread # Handle special signal only when user_interaction is set task.set_default("USER_handle_SIGUSR1", user_interaction) task.excepthook = sys.excepthook task.set_default("USER_stdin_worker", not (sys.stdin is None or \ sys.stdin.isatty() or \ options.nostdin or \ user_interaction)) display.vprint(VERB_DEBUG, "Create STDIN worker: %s" % \ task.default("USER_stdin_worker")) task.set_info("debug", config.verbosity >= VERB_DEBUG) task.set_info("fanout", config.fanout) if options.mode: display.vprint(VERB_DEBUG, "ClushConfig parsed: %s" % config.parsed) display.vprint(VERB_DEBUG, "Available run modes: %s" % ' '.join(config.modes())) config.set_mode(options.mode) display.vprint(VERB_VERB, "[%s] run mode activated" % options.mode) command_prefix = config.command_prefix if command_prefix: # keep command_prefix for interactive mode ttyloop() task.set_default("USER_command_prefix", command_prefix) prefix_cmdl = shlex.split(command_prefix) display.vprint(VERB_VERB, "[%s] command prefix: %s" % \ (options.mode, prefix_cmdl)) args = prefix_cmdl + args # amend actual command with prefix if config.password_prompt: display.vprint(VERB_VERB, "[%s] password prompt enabled" % options.mode) # prompt for password task.set_default("USER_password_prompt", ask_pass()) if options.worker: try: if options.remote == 'no': task.set_default('local_worker', _load_workerclass(options.worker)) else: task.set_default('distant_worker', _load_workerclass(options.worker)) except (ImportError, AttributeError): msg = "ERROR: Could not load worker '%s'" % options.worker display.vprint_err(VERB_QUIET, msg) clush_exit(1, task) elif options.topofile or task._default_tree_is_enabled(): if options.topofile: task.load_topology(options.topofile) if config.verbosity >= VERB_VERB: roots = len(task.topology.root.nodeset) gws = task.topology.inner_node_count() - roots msg = "enabling tree topology (%d gateways)" % gws print("clush: %s" % msg, file=sys.stderr) if options.grooming_delay: if config.verbosity >= VERB_VERB: msg = Display.COLOR_RESULT_FMT % ("Grooming delay: %f" % options.grooming_delay) print(msg, file=sys.stderr) task.set_info("grooming_delay", options.grooming_delay) elif options.rcopy: # By default, --rcopy should inhibit grooming task.set_info("grooming_delay", 0) if config.ssh_user: task.set_info("ssh_user", config.ssh_user) if config.ssh_path: task.set_info("ssh_path", config.ssh_path) if config.ssh_options: task.set_info("ssh_options", config.ssh_options) if config.scp_path: task.set_info("scp_path", config.scp_path) if config.scp_options: task.set_info("scp_options", config.scp_options) if config.rsh_path: task.set_info("rsh_path", config.rsh_path) if config.rcp_path: task.set_info("rcp_path", config.rcp_path) if config.rsh_options: task.set_info("rsh_options", config.rsh_options) # Set detailed timeout values task.set_info("connect_timeout", config.connect_timeout) task.set_info("command_timeout", config.command_timeout) # Enable stdout/stderr separation task.set_default("stderr", not options.gatherall) # Disable MsgTree buffering if not gathering outputs task.set_default("stdout_msgtree", display.gather or display.line_mode) # Always disable stderr MsgTree buffering task.set_default("stderr_msgtree", False) # Set timeout at worker level when command_timeout is defined. 
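    # NOTE (editorial sketch, not part of the original source): how the two
    # timeouts set above differ in practice (values illustrative):
    #
    #   task.set_info("connect_timeout", 10)  # limit connection phase (s)
    #   task.set_info("command_timeout", 30)  # limit remote execution (s)
    #   # nodes exceeding a limit are reported via task.num_timeout() and
    #   # task.iter_keys_timeout(), as done in the interrupt handler above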
    if config.command_timeout > 0:
        timeout = config.command_timeout
    else:
        timeout = -1

    # Configure task custom status
    task.set_default("USER_interactive", interactive)
    task.set_default("USER_running", False)

    if (options.copy or options.rcopy) and not args:
        parser.error("--[r]copy option requires at least one argument")

    dest_paths = []
    if options.copy:
        if options.dest_path:
            for arg in args:
                dest_paths.append(options.dest_path)
        else:
            # append '/' to clearly indicate a directory for tree mode
            for arg in args:
                dest_paths.append(join(dirname(abspath(arg)), ''))
        op = "copy sources=%s dest=%s" % (args, dest_paths)
    elif options.rcopy:
        if options.dest_path:
            for arg in args:
                dest_paths.append(options.dest_path)
        else:
            for arg in args:
                dest_paths.append(dirname(abspath(arg)))
        op = "rcopy sources=%s dest=%s" % (args, dest_paths)
    else:
        op = "command=\"%s\"" % ' '.join(args)

    # print debug values (fanout value is taken from the config object and
    # not the task itself, as set_info() is an asynchronous call)
    display.vprint(VERB_DEBUG, "clush: nodeset=%s fanout=%d [timeout " \
                   "conn=%.1f cmd=%.1f] %s" % (nodeset_base, config.fanout,
                                               config.connect_timeout,
                                               config.command_timeout, op))
    if not task.default("USER_interactive"):
        if display.verbosity >= VERB_DEBUG and task.topology:
            print(Display.COLOR_RESULT_FMT % '-' * 15)
            print(Display.COLOR_RESULT_FMT % task.topology, end='')
            print(Display.COLOR_RESULT_FMT % '-' * 15)
        if options.copy:
            run_copy(task, args, dest_paths, nodeset_base, timeout,
                     options.preserve_flag, display)
        elif options.rcopy:
            run_rcopy(task, args, dest_paths, nodeset_base, timeout,
                      options.preserve_flag, display)
        else:
            run_command(task, ' '.join(args), nodeset_base, timeout, display,
                        options.remote != 'no', options.worker is None)

    if user_interaction:
        ttyloop(task, nodeset_base, timeout, display, options.remote != 'no',
                options.worker is None)
    elif task.default("USER_interactive"):
        display.vprint_err(VERB_QUIET, \
            "ERROR: interactive mode requires a tty")
        clush_exit(1, task)

    rc = 0
    if config.maxrc:
        # Instead of clush return code, return commands retcode
        rc = task.max_retcode()
        if task.num_timeout() > 0:
            rc = 255
    clush_exit(rc, task)

if __name__ == '__main__':
    main()

# ClusterShell-1.9.2/lib/ClusterShell/CLI/Config.py
#
# Copyright (C) 2010-2016 CEA/DAM
# Copyright (C) 2017 Stephane Thiell
#
# This file is part of ClusterShell.
#
# ClusterShell is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# ClusterShell is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with ClusterShell; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

"""
CLI configuration classes
"""

try:
    from configparser import ConfigParser, NoOptionError, NoSectionError
except ImportError:
    # Python 2 compat
    from ConfigParser import ConfigParser, NoOptionError, NoSectionError

import glob
import os
import shlex
from string import Template

from ClusterShell.Defaults import config_paths, DEFAULTS
from ClusterShell.CLI.Display import VERB_QUIET, VERB_STD, \
    VERB_VERB, VERB_DEBUG, THREE_CHOICES

class ClushConfigError(Exception):
    """Exception used by ClushConfig to report an error."""
    def __init__(self, section=None, option=None, msg=None):
        Exception.__init__(self)
        self.section = section
        self.option = option
        self.msg = msg

    def __str__(self):
        serr = ""
        if self.section and self.option:
            serr += "(Config %s.%s): " % (self.section, self.option)
        if self.msg:
            serr += str(self.msg)
        return serr

class ClushConfig(ConfigParser, object):
    """Config class for clush (specialized ConfigParser)"""

    MAIN_SECTION = 'Main'
    MAIN_DEFAULTS = {"fanout": "%d" % DEFAULTS.fanout,
                     "connect_timeout": "%f" % DEFAULTS.connect_timeout,
                     "command_timeout": "%f" % DEFAULTS.command_timeout,
                     "history_size": "100",
                     "color": THREE_CHOICES[0],  # ''
                     "verbosity": "%d" % VERB_STD,
                     "node_count": "yes",
                     "maxrc": "no",
                     "fd_max": "8192",
                     "command_prefix": "",
                     "password_prompt": "no"}

    def __init__(self, options, filename=None):
        """Initialize ClushConfig object from corresponding
        OptionParser options."""
        ConfigParser.__init__(self)
        self.mode = None
        # create Main section with default values
        self.add_section(self.MAIN_SECTION)
        for key, value in self.MAIN_DEFAULTS.items():
            self.set(self.MAIN_SECTION, key, value)
        # config files override default values
        if filename:
            files = [filename]
        else:
            files = config_paths('clush.conf')
        self.parsed = self.read(files)
        if self.parsed:
            # for proper $CFGDIR selection, take last parsed configfile only
            cfg_dirname = os.path.dirname(self.parsed[-1])
            # parse Main.confdir
            try:
                # keep track of loaded confdirs
                loaded_confdirs = set()
                confdirstr = self.get(self.MAIN_SECTION, "confdir")
                for confdir in shlex.split(confdirstr):
                    # substitute $CFGDIR, set to the highest priority
                    # configuration directory that has been found
                    confdir = Template(confdir).safe_substitute(
                        CFGDIR=cfg_dirname)
                    confdir = os.path.normpath(confdir)
                    if confdir in loaded_confdirs:
                        continue  # load each confdir only once
                    loaded_confdirs.add(confdir)
                    if not os.path.isdir(confdir):
                        if not os.path.exists(confdir):
                            continue
                        msg = "Defined confdir %s is not a directory" % confdir
                        raise ClushConfigError(msg=msg)
                    # add config declared in clush.conf.d file parts
                    for cfgfn in sorted(glob.glob('%s/*.conf' % confdir)):
                        # ignore files that cannot be read
                        self.parsed += self.read(cfgfn)
            except (NoSectionError, NoOptionError):
                pass
        # Apply command line overrides
        if options.quiet:
            self._set_main("verbosity", VERB_QUIET)
        if options.verbose:
            self._set_main("verbosity", VERB_VERB)
        if options.debug:
            self._set_main("verbosity", VERB_DEBUG)
        if options.fanout:
            self._set_main("fanout", options.fanout)
        if options.user:
            self._set_main("ssh_user", options.user)
        if options.options:
            self._set_main("ssh_options", options.options)
        if options.connect_timeout:
            self._set_main("connect_timeout", options.connect_timeout)
        if options.command_timeout:
            self._set_main("command_timeout", options.command_timeout)
        if options.whencolor:
self._set_main("color", options.whencolor) if options.maxrc: self._set_main("maxrc", options.maxrc) try: # -O/--option KEY=VALUE for cfgopt in options.option: optkey, optvalue = cfgopt.split('=', 1) self._set_main(optkey, optvalue) except ValueError as exc: raise ClushConfigError(self.MAIN_SECTION, cfgopt, "invalid -O/--option value") def _set_main(self, option, value): """Set given option/value pair in the Main section.""" self.set(self.MAIN_SECTION, option, str(value)) def _getx(self, xtype, section, option): """Return a value of specified type for the named option.""" try: return getattr(ConfigParser, 'get%s' % xtype)(self, \ section, option) except (NoOptionError, NoSectionError, TypeError, ValueError) as exc: raise ClushConfigError(section, option, exc) def getboolean(self, section, option): """Return a boolean value for the named option.""" return self._getx('boolean', section, option) def getfloat(self, section, option): """Return a float value for the named option.""" return self._getx('float', section, option) def getint(self, section, option): """Return an integer value for the named option.""" return self._getx('int', section, option) def _get_optional(self, section, option): """Utility method to get a value for the named option, but do not raise an exception if the option doesn't exist.""" try: return self.get(section, option) except (NoOptionError, NoSectionError): pass def _getboolean_mode_optional(self, option): """Return a boolean value for the named option in the current mode (optionally defined).""" if self.mode: try: return getattr(ConfigParser, 'getboolean')(self, "mode:%s" % self.mode, option) except (NoOptionError, NoSectionError): pass return self.getboolean(self.MAIN_SECTION, option) def _getint_mode_optional(self, option): """Return an integer value for the named option in the current mode (optionally defined).""" if self.mode: try: return getattr(ConfigParser, 'getint')(self, "mode:%s" % self.mode, option) except (NoOptionError, NoSectionError): pass return self.getint(self.MAIN_SECTION, option) def _getfloat_mode_optional(self, option): """Return a float value for the named option in the current mode (optionally defined).""" if self.mode: try: return getattr(ConfigParser, 'getfloat')(self, "mode:%s" % self.mode, option) except (NoOptionError, NoSectionError): pass return self.getfloat(self.MAIN_SECTION, option) def _get_mode_optional(self, option): """Utility method to get a value for the named option in the current mode, but do not raise an exception if the option doesn't exist.""" if self.mode: try: return self.get("mode:%s" % self.mode, option) except (NoOptionError, NoSectionError): pass return self._get_optional(self.MAIN_SECTION, option) @property def verbosity(self): """verbosity value as an integer""" try: return self.getint(self.MAIN_SECTION, "verbosity") except ClushConfigError: return 0 @property def fanout(self): """fanout value as an integer""" return self._getint_mode_optional("fanout") @property def connect_timeout(self): """connect_timeout value as a float""" return self._getfloat_mode_optional("connect_timeout") @property def command_timeout(self): """command_timeout value as a float""" return self._getfloat_mode_optional("command_timeout") @property def ssh_user(self): """ssh_user value as a string (optional)""" return self._get_mode_optional("ssh_user") @property def ssh_path(self): """ssh_path value as a string (optional)""" return self._get_mode_optional("ssh_path") @property def ssh_options(self): """ssh_options value as a string 
(optional)""" return self._get_mode_optional("ssh_options") @property def scp_path(self): """scp_path value as a string (optional)""" return self._get_mode_optional("scp_path") @property def scp_options(self): """scp_options value as a string (optional)""" return self._get_mode_optional("scp_options") @property def rsh_path(self): """rsh_path value as a string (optional)""" return self._get_mode_optional("rsh_path") @property def rcp_path(self): """rcp_path value as a string (optional)""" return self._get_mode_optional("rcp_path") @property def rsh_options(self): """rsh_options value as a string (optional)""" return self._get_mode_optional("rsh_options") @property def color(self): """color value as a string in (never, always, auto)""" whencolor = self._get_mode_optional("color") if whencolor not in THREE_CHOICES: raise ClushConfigError(self.mode or self.MAIN_SECTION, "color", "choose from %s" % THREE_CHOICES) return whencolor @property def node_count(self): """node_count value as a boolean""" return self._getboolean_mode_optional("node_count") @property def maxrc(self): """maxrc value as a boolean""" return self._getboolean_mode_optional("maxrc") @property def fd_max(self): """max number of open files (soft rlimit)""" return self.getint(self.MAIN_SECTION, "fd_max") def modes(self): """return available run modes""" for section in self.sections(): if section.startswith("mode:"): yield section[5:] # could use removeprefix() in py3.9+ def set_mode(self, mode): """set run mode; properties will use it by default""" if mode not in self.modes(): raise ClushConfigError(msg='invalid mode "%s" (available: %s)' % (mode, ' '.join(self.modes()))) self.mode = mode @property def command_prefix(self): """command_prefix value as a string (optional)""" return self._get_mode_optional("command_prefix") @property def password_prompt(self): """password_prompt value as a boolean (optional)""" return self._getboolean_mode_optional("password_prompt") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/lib/ClusterShell/CLI/Display.py0000644104717000001440000002717514505632065021355 0ustar00sthiellusers# # Copyright (C) 2010-2015 CEA/DAM # Copyright (C) 2023 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. 
# # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ CLI results display class """ from __future__ import print_function import difflib import sys import os from ClusterShell.NodeSet import NodeSet # Display constants VERB_QUIET = 0 VERB_STD = 1 VERB_VERB = 2 VERB_DEBUG = 3 THREE_CHOICES = ["", "never", "always", "auto"] WHENCOLOR_CHOICES = THREE_CHOICES # deprecated; use THREE_CHOICES if sys.getdefaultencoding() == 'ascii': STRING_ENCODING = 'utf-8' # enforce UTF-8 with Python 2 else: STRING_ENCODING = sys.getdefaultencoding() # Python 3 compat: wrapper for stdin def sys_stdin(): return getattr(sys.stdin, 'buffer', sys.stdin) class Display(object): """ Output display class for command line scripts. """ COLOR_RESULT_FMT = "\033[92m%s\033[0m" COLOR_STDOUT_FMT = "\033[94m%s\033[0m" COLOR_STDERR_FMT = "\033[91m%s\033[0m" COLOR_DIFFHDR_FMT = "\033[1m%s\033[0m" COLOR_DIFFHNK_FMT = "\033[96m%s\033[0m" COLOR_DIFFADD_FMT = "\033[92m%s\033[0m" COLOR_DIFFDEL_FMT = "\033[91m%s\033[0m" SEP = "-" * 15 class _KeySet(set): """Private NodeSet substitution to display raw keys""" def __str__(self): return ",".join(self) def __init__(self, options, config=None, color=None): """Initialize a Display object from CLI.OptionParser options and optional CLI.ClushConfig. If `color' boolean flag is not specified, it is auto detected according to options.whencolor. """ if options.diff: self._print_buffer = self._print_diff else: self._print_buffer = self._print_content self._display = self._print_buffer self._diffref = None # diff implies at least -b self.gather = options.gatherall or options.gather or options.diff self.progress = getattr(options, 'progress', False) # only in clush # check parameter compatibility if options.diff and options.line_mode: raise ValueError("diff not supported in line_mode") self.line_mode = options.line_mode self.label = options.label self.regroup = options.regroup self.groupsource = options.groupsource self.noprefix = options.groupbase self.outdir = options.outdir self.errdir = options.errdir # display may change when 'max return code' option is set self.maxrc = getattr(options, 'maxrc', False) # Be compliant with NO_COLOR and CLI_COLORS trying to solve #428 # See https://no-color.org/ and https://bixense.com/clicolors/ # NO_COLOR takes precedence over CLI_COLORS. --color option always # takes precedence over any environment variable. if options.whencolor is None and color is not False: if (config is None) or (config.color == '' or config.color == 'auto'): if 'NO_COLOR' not in os.environ: color = self._has_cli_color() else: color = False if color is None: # Should we use ANSI colors? 
color = False if not options.whencolor or options.whencolor == "auto": color = sys.stdout.isatty() elif options.whencolor == "always": color = True self._color = color # GH#528 enable line buffering self.out = sys.stdout try : if not self.out.line_buffering: self.out.reconfigure(line_buffering=True) except AttributeError: # < py3.7 pass self.err = sys.stderr try : if not self.err.line_buffering: self.err.reconfigure(line_buffering=True) except AttributeError: # < py3.7 pass if self._color: self.color_stdout_fmt = self.COLOR_STDOUT_FMT self.color_stderr_fmt = self.COLOR_STDERR_FMT self.color_diffhdr_fmt = self.COLOR_DIFFHDR_FMT self.color_diffctx_fmt = self.COLOR_DIFFHNK_FMT self.color_diffadd_fmt = self.COLOR_DIFFADD_FMT self.color_diffdel_fmt = self.COLOR_DIFFDEL_FMT else: self.color_stdout_fmt = self.color_stderr_fmt = \ self.color_diffhdr_fmt = self.color_diffctx_fmt = \ self.color_diffadd_fmt = self.color_diffdel_fmt = "%s" # Set display verbosity if config: # config object does already apply options overrides self.node_count = config.node_count self.verbosity = config.verbosity else: self.node_count = True self.verbosity = VERB_STD if hasattr(options, 'quiet') and options.quiet: self.verbosity = VERB_QUIET if hasattr(options, 'verbose') and options.verbose: self.verbosity = VERB_VERB if hasattr(options, 'debug') and options.debug: self.verbosity = VERB_DEBUG def _has_cli_color(self): """Tests CLICOLOR environment variable to determine whether to use color or not on output.""" # When CLICOLOR_FORCE is set to something else than 0 # colors must be used. if os.getenv("CLICOLOR_FORCE", "0") != "0": return True cli_color = os.getenv("CLICOLOR") if cli_color is None: return None elif cli_color != "0": # CLICOLOR is set and colored output should # be used if stdout is a tty return sys.stdout.isatty() else: # CLICOLOR is set to not display colors. 
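            # NOTE (editorial sketch, not part of the original source):
            # resulting precedence of the environment variables handled
            # above, assuming no --color option is given:
            #
            #   NO_COLOR=1 clush ...         -> colors disabled
            #   CLICOLOR_FORCE=1 clush ...   -> colors forced, even piped
            #   CLICOLOR=1 clush ...         -> colors only if stdout is a tty
            #   CLICOLOR=0 clush ...         -> colors disabled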
return False def flush(self): """flush display object buffers""" # only used to reset diff display for now self._diffref = None def _getlmode(self): """line_mode getter""" return self._display == self._print_lines def _setlmode(self, value): """line_mode setter""" if value: self._display = self._print_lines else: self._display = self._print_buffer line_mode = property(_getlmode, _setlmode) def _format_nodeset(self, nodeset): """Sub-routine to format nodeset string.""" if self.regroup: return nodeset.regroup(self.groupsource, noprefix=self.noprefix) return str(nodeset) def format_header(self, nodeset, indent=0): """Format nodeset-based header.""" if not self.label: return "" indstr = " " * indent nodecntstr = "" if self.verbosity >= VERB_STD and self.node_count and len(nodeset) > 1: nodecntstr = " (%d)" % len(nodeset) hdr = self.color_stdout_fmt % ("%s%s\n%s%s%s\n%s%s" % \ (indstr, self.SEP, indstr, self._format_nodeset(nodeset), nodecntstr, indstr, self.SEP)) return hdr + '\n' def print_line(self, nodeset, line): """Display a line with optional label.""" linestr = line.decode(STRING_ENCODING, errors='replace') + '\n' if self.label: prefix = self.color_stdout_fmt % ("%s: " % nodeset) self.out.write(prefix + linestr) else: self.out.write(linestr) def print_line_error(self, nodeset, line): """Display an error line with optional label.""" linestr = line.decode(STRING_ENCODING, errors='replace') + '\n' if self.label: prefix = self.color_stderr_fmt % ("%s: " % nodeset) self.err.write(prefix + linestr) else: self.err.write(linestr) def print_gather(self, nodeset, obj): """Generic method for displaying nodeset/content according to current object settings.""" return self._display(NodeSet(nodeset), obj) def print_gather_finalize(self, nodeset): """Finalize display of diff-like gathered contents.""" if self._display == self._print_diff and self._diffref: return self._display(nodeset, '') def print_gather_keys(self, keys, obj): """Generic method for displaying raw keys/content according to current object settings (used by clubak).""" return self._display(self.__class__._KeySet(keys), obj) def _print_content(self, nodeset, content): """Display a dshbak-like header block and content.""" s = bytes(content).decode(STRING_ENCODING, errors='replace') self.out.write(self.format_header(nodeset) + s + '\n') def _print_diff(self, nodeset, content): """Display unified diff between remote gathered outputs.""" if self._diffref is None: self._diffref = (nodeset, content) else: nodeset_ref, content_ref = self._diffref nsstr_ref = self._format_nodeset(nodeset_ref) nsstr = self._format_nodeset(nodeset) if self.verbosity >= VERB_STD and self.node_count: if len(nodeset_ref) > 1: nsstr_ref += " (%d)" % len(nodeset_ref) if len(nodeset) > 1: nsstr += " (%d)" % len(nodeset) alist = [aline.decode('utf-8', 'ignore') for aline in content_ref] blist = [bline.decode('utf-8', 'ignore') for bline in content] udiff = difflib.unified_diff(alist, blist, fromfile=nsstr_ref, tofile=nsstr, lineterm='') output = '' for line in udiff: if line.startswith('---') or line.startswith('+++'): output += self.color_diffhdr_fmt % line.rstrip() elif line.startswith('@@'): output += self.color_diffctx_fmt % line elif line.startswith('+'): output += self.color_diffadd_fmt % line elif line.startswith('-'): output += self.color_diffdel_fmt % line else: output += line output += '\n' self.out.write(output) def _print_lines(self, nodeset, msg): """Display a MsgTree buffer by line with prefixed header.""" out = self.out if self.label: header = 
                     self.color_stdout_fmt % \
                ("%s: " % self._format_nodeset(nodeset))
            for line in msg:
                out.write(header +
                          line.decode(STRING_ENCODING, errors='replace') +
                          '\n')
        else:
            for line in msg:
                out.write(line.decode(STRING_ENCODING, errors='replace') +
                          '\n')

    def vprint(self, level, message):
        """Utility method to print a message if verbose level
        is high enough."""
        if self.verbosity >= level:
            print(message)

    def vprint_err(self, level, message):
        """Utility method to print a message on stderr if verbose level
        is high enough."""
        if self.verbosity >= level:
            print(message, file=sys.stderr)

# ClusterShell-1.9.2/lib/ClusterShell/CLI/Error.py
#
# Copyright (C) 2010-2012 CEA/DAM
#
# This file is part of ClusterShell.
#
# ClusterShell is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# ClusterShell is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with ClusterShell; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

"""
CLI error handling helper functions
"""

from __future__ import print_function

try:
    import configparser
except ImportError:
    # Python 2 compat
    import ConfigParser as configparser

import errno
import logging
import os.path
from resource import getrlimit, RLIMIT_NOFILE
import signal
import sys

from ClusterShell.Engine.Engine import EngineNotSupportedError
from ClusterShell.NodeUtils import GroupResolverConfigError
from ClusterShell.NodeUtils import GroupResolverIllegalCharError
from ClusterShell.NodeUtils import GroupResolverSourceError
from ClusterShell.NodeUtils import GroupSourceError
from ClusterShell.NodeUtils import GroupSourceNoUpcall
from ClusterShell.NodeSet import NodeSetExternalError, NodeSetParseError
from ClusterShell.NodeSet import RangeSetParseError
from ClusterShell.Propagation import RouteResolvingError
from ClusterShell.Topology import TopologyError
from ClusterShell.Worker.EngineClient import EngineClientError
from ClusterShell.Worker.Worker import WorkerError

GENERIC_ERRORS = (configparser.Error,
                  EngineNotSupportedError,
                  EngineClientError,
                  NodeSetExternalError,
                  NodeSetParseError,
                  RangeSetParseError,
                  GroupResolverConfigError,
                  GroupResolverIllegalCharError,
                  GroupResolverSourceError,
                  GroupSourceError,
                  GroupSourceNoUpcall,
                  RouteResolvingError,
                  TopologyError,
                  TypeError,
                  IOError,
                  OSError,
                  KeyboardInterrupt,
                  ValueError,
                  WorkerError)

LOGGER = logging.getLogger(__name__)

def handle_generic_error(excobj):
    """handle error given `excobj' generic script exception"""
    prog = os.path.basename(sys.argv[0])
    try:
        raise excobj
    except EngineNotSupportedError as exc:
        msgfmt = "%s: I/O events engine '%s' not supported on this host"
        print(msgfmt % (prog, exc.engineid), file=sys.stderr)
    except EngineClientError as exc:
        print("%s: EngineClientError: %s" % (prog, exc), file=sys.stderr)
    except NodeSetExternalError as exc:
        print("%s: External error: %s" % (prog, exc), file=sys.stderr)
    except (NodeSetParseError, RangeSetParseError) as exc:
print("%s: Parse error: %s" % (prog, exc), file=sys.stderr) except GroupResolverIllegalCharError as exc: print('%s: Illegal group character: "%s"' % (prog, exc), file=sys.stderr) except GroupResolverConfigError as exc: print('%s: Group resolver error: %s' % (prog, exc), file=sys.stderr) except GroupResolverSourceError as exc: print('%s: Unknown group source: "%s"' % (prog, exc), file=sys.stderr) except GroupSourceNoUpcall as exc: msgfmt = '%s: No %s upcall defined for group source "%s"' print(msgfmt % (prog, exc, exc.group_source.name), file=sys.stderr) except GroupSourceError as exc: print("%s: Group error: %s" % (prog, exc), file=sys.stderr) except (RouteResolvingError, TopologyError) as exc: print("%s: TREE MODE: %s" % (prog, exc), file=sys.stderr) except configparser.Error as exc: print("%s: %s" % (prog, exc), file=sys.stderr) except (TypeError, ValueError, WorkerError) as exc: print("%s: %s" % (prog, exc), file=sys.stderr) except (IOError, OSError) as exc: # see PEP 3151 if exc.errno == errno.EPIPE: # be quiet on broken pipe LOGGER.debug(exc) else: print("ERROR: %s" % exc, file=sys.stderr) if exc.errno == errno.EMFILE: print("ERROR: maximum number of open file descriptors: " "soft=%d hard=%d" % getrlimit(RLIMIT_NOFILE), file=sys.stderr) except KeyboardInterrupt as exc: return 128 + signal.SIGINT except: assert False, "wrong GENERIC_ERRORS" # Exit with error code 1 (generic failure) return 1 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/CLI/Nodeset.py0000755104717000001440000003241214501416555021342 0ustar00sthiellusers# # Copyright (C) 2008-2016 CEA/DAM # Copyright (C) 2015-2018 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ compute advanced nodeset operations The nodeset command is an utility command provided with the ClusterShell library which implements some features of the NodeSet and RangeSet classes. 
""" from __future__ import print_function import logging import math import random import sys from ClusterShell.CLI.Error import GENERIC_ERRORS, handle_generic_error from ClusterShell.CLI.OptionParser import OptionParser from ClusterShell.NodeSet import NodeSet, RangeSet, std_group_resolver from ClusterShell.NodeSet import grouplist, set_std_group_resolver_config from ClusterShell.NodeUtils import GroupSourceNoUpcall def process_stdin(xsetop, xsetcls, autostep): """Process standard input and operate on xset.""" # Build temporary set (stdin accumulator) tmpset = xsetcls(autostep=autostep) for line in sys.stdin: # read lines of text stream (not bytes) # Support multi-lines and multi-nodesets per line line = line[0:line.find('#')].strip() for elem in line.split(): # Do explicit object creation for RangeSet tmpset.update(xsetcls(elem, autostep=autostep)) # Perform operation on xset if tmpset: xsetop(tmpset) def compute_nodeset(xset, args, autostep): """Apply operations and operands from args on xset, an initial RangeSet or NodeSet.""" class_set = xset.__class__ # Process operations from command arguments. # The special argument string "-" indicates to read stdin. # We also take care of multiline shell arguments (#394). while args: arg = args.pop(0) if arg in ("-i", "--intersection"): val = args.pop(0) if val == '-': process_stdin(xset.intersection_update, class_set, autostep) else: xset.intersection_update(class_set.fromlist(val.splitlines(), autostep=autostep)) elif arg in ("-x", "--exclude"): val = args.pop(0) if val == '-': process_stdin(xset.difference_update, class_set, autostep) else: xset.difference_update(class_set.fromlist(val.splitlines(), autostep=autostep)) elif arg in ("-X", "--xor"): val = args.pop(0) if val == '-': process_stdin(xset.symmetric_difference_update, class_set, autostep) else: xset.symmetric_difference_update( class_set.fromlist(val.splitlines(), autostep=autostep)) elif arg == '-': process_stdin(xset.update, xset.__class__, autostep) else: xset.update(class_set.fromlist(arg.splitlines(), autostep=autostep)) return xset def print_source_groups(source, level, xset, opts): """ Print groups from a source, a level of verbosity and an optional nodeset acting as a filter. """ # list groups of some specified nodes? if opts.all or xset or opts.and_nodes or opts.sub_nodes or opts.xor_nodes: # When some node sets are provided as argument, the list command # retrieves node groups these nodes belong to, thanks to the # groups() method. # Note: stdin support is enabled when '-' is found. 
groups = xset.groups(source, opts.groupbase) # sort by group name for group, (gnodes, inodes) in sorted(groups.items()): if level == 1: print(group) elif level == 2: print("%s %s" % (group, inodes)) else: print("%s %s %d/%d" % (group, inodes, len(inodes), len(gnodes))) else: # "raw" group list when no argument at all for group in grouplist(source): if source and not opts.groupbase: nsgroup = "@%s:%s" % (source, group) else: nsgroup = "@%s" % group if level == 1: print(nsgroup) else: nodes = NodeSet(nsgroup) if level == 2: print("%s %s" % (nsgroup, nodes)) else: print("%s %s %d" % (nsgroup, nodes, len(nodes))) def command_list(options, xset, group_resolver): """List command handler (-l/-ll/-lll/-L/-LL/-LLL).""" list_level = options.list + options.listall if options.listall: # useful: sources[0] is always the default or selected source sources = group_resolver.sources() # do not print name of default group source unless -s specified if sources and not options.groupsource: sources[0] = None else: sources = [options.groupsource] for source in sources: try: print_source_groups(source, list_level, xset, options) except GroupSourceNoUpcall as exc: if not options.listall: raise # missing list upcall is not fatal with -L msgfmt = "Warning: No %s upcall defined for group source %s" print(msgfmt % (exc, source), file=sys.stderr) def nodeset(): """script subroutine""" class_set = NodeSet usage = "%prog [COMMAND] [OPTIONS] [ns1 [-ixX] ns2|...]" parser = OptionParser(usage) parser.install_groupsconf_option() parser.install_nodeset_commands() parser.install_nodeset_operations() parser.install_nodeset_options() (options, args) = parser.parse_args() set_std_group_resolver_config(options.groupsconf) group_resolver = std_group_resolver() if options.debug: logging.basicConfig(level=logging.DEBUG) # Check for command presence cmdcount = int(options.count) + int(options.expand) + \ int(options.fold) + int(bool(options.list)) + \ int(bool(options.listall)) + int(options.regroup) + \ int(options.groupsources) if not cmdcount: parser.error("No command specified.") elif cmdcount > 1: parser.error("Multiple commands not allowed.") if options.rangeset: class_set = RangeSet if options.all or options.regroup: if class_set != NodeSet: parser.error("-a/-r only supported in NodeSet mode") if options.maxsplit is not None and options.contiguous: parser.error("incompatible splitting options (split, contiguous)") if options.maxsplit is None: options.maxsplit = 1 if options.axis and (not options.fold or options.rangeset): parser.error("--axis option is only supported when folding nodeset") if options.groupsource and not options.quiet and class_set == RangeSet: print("WARNING: option group source \"%s\" ignored" % options.groupsource, file=sys.stderr) # We want -s to act as a substitution of default groupsource # (ie. it's not necessary to prefix group names by this group source). if options.groupsource: group_resolver.default_source_name = options.groupsource # The groupsources command simply lists group sources. if options.groupsources: if options.quiet: dispdefault = "" # don't show (default) if quiet is set else: dispdefault = " (default)" for src in group_resolver.sources(): print("%s%s" % (src, dispdefault)) dispdefault = "" return autostep = options.autostep # Do not use autostep for computation when a percentage or the special # value 'auto' is specified. Real autostep value is set post-process. 
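    # NOTE (editorial sketch, not part of the original source): effect of
    # autostep on folding (node names illustrative):
    #
    #   >>> str(NodeSet("node1,node3,node5,node7"))
    #   'node[1,3,5,7]'
    #   >>> str(NodeSet("node1,node3,node5,node7", autostep=3))
    #   'node[1-7/2]'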
if isinstance(autostep, float) or autostep == 'auto': autostep = None # Instantiate RangeSet or NodeSet object xset = class_set(autostep=autostep) if options.all: # Include all nodes from external node groups support. xset.update(NodeSet.fromall()) # uses default_source when set if not args and not options.all and not (options.list or options.listall): # No need to specify '-' to read stdin in these cases process_stdin(xset.update, xset.__class__, autostep) if not xset and (options.and_nodes or options.sub_nodes or options.xor_nodes) and not options.quiet: print('WARNING: empty left operand for set operation', file=sys.stderr) # Apply first operations (before first non-option) for nodes in options.and_nodes: if nodes == '-': process_stdin(xset.intersection_update, xset.__class__, autostep) else: xset.intersection_update(class_set(nodes, autostep=autostep)) for nodes in options.sub_nodes: if nodes == '-': process_stdin(xset.difference_update, xset.__class__, autostep) else: xset.difference_update(class_set(nodes, autostep=autostep)) for nodes in options.xor_nodes: if nodes == '-': process_stdin(xset.symmetric_difference_update, xset.__class__, autostep) else: xset.symmetric_difference_update(class_set(nodes, autostep=autostep)) # Finish xset computing from args compute_nodeset(xset, args, autostep) # The list command has a special handling if options.list > 0 or options.listall > 0: return command_list(options, xset, group_resolver) # Interpret special characters (may raise SyntaxError) separator = eval('\'\'\'%s\'\'\'' % options.separator, {"__builtins__":None}, {}) if options.slice_rangeset: _xset = class_set() for sli in RangeSet(options.slice_rangeset).slices(): _xset.update(xset[sli]) xset = _xset if options.autostep == 'auto': # Simple implementation of --autostep=auto # if we have at least 3 nodes, all index should be foldable as a-b/n xset.autostep = max(3, len(xset)) elif isinstance(options.autostep, float): # at least % of nodes should be foldable as a-b/n autofactor = float(options.autostep) xset.autostep = int(math.ceil(float(len(xset)) * autofactor)) # user-specified nD-nodeset fold axis if options.axis: if not options.axis.startswith('-'): # axis are 1-indexed in nodeset CLI (0 ignored) xset.fold_axis = tuple(x-1 for x in \ RangeSet(options.axis).intiter() if x > 0) else: # negative axis index (only single number supported) xset.fold_axis = [int(options.axis)] if options.pick and options.pick < len(xset): # convert to string for sample as nsiter() is slower for big # nodesets; and we assume options.pick will remain small-ish keep = random.sample(list(xset), options.pick) # explicit class_set creation and str() conversion for RangeSet keep = class_set(','.join([str(x) for x in keep])) xset.intersection_update(keep) fmt = options.output_format # default to '%s' # Display result according to command choice if options.expand: xsubres = lambda x: separator.join((fmt % s for s in x.striter())) elif options.fold: # Special case when folding using NodeSet and format is set (#277) if class_set is NodeSet and fmt != '%s': # Create a new set after format has been applied to each node xset = class_set._fromlist1((fmt % xnodestr for xnodestr in xset), autostep=xset.autostep) xsubres = lambda x: x else: xsubres = lambda x: fmt % x elif options.regroup: xsubres = lambda x: fmt % x.regroup(options.groupsource, noprefix=options.groupbase) else: xsubres = lambda x: fmt % len(x) if not xset or options.maxsplit <= 1 and not options.contiguous: print(xsubres(xset)) else: if options.contiguous: 
            xiterator = xset.contiguous()
        else:
            xiterator = xset.split(options.maxsplit)
        for xsubset in xiterator:
            print(xsubres(xsubset))

def main():
    """main script function"""
    try:
        nodeset()
    except (AssertionError, IndexError, ValueError) as ex:
        print("ERROR: %s" % ex, file=sys.stderr)
        sys.exit(1)
    except SyntaxError:
        print("ERROR: invalid separator", file=sys.stderr)
        sys.exit(1)
    except GENERIC_ERRORS as ex:
        sys.exit(handle_generic_error(ex))

    sys.exit(0)

if __name__ == '__main__':
    main()

# ClusterShell-1.9.2/lib/ClusterShell/CLI/OptionParser.py
#
# Copyright (C) 2010-2015 CEA/DAM
#
# This file is part of ClusterShell.
#
# ClusterShell is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# ClusterShell is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with ClusterShell; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

"""
Common ClusterShell CLI OptionParser

With few exceptions, ClusterShell command-lines share most option
arguments. This module provides a common OptionParser class.
"""

from copy import copy
import optparse

from ClusterShell import __version__
from ClusterShell.Engine.Factory import PreferredEngine
from ClusterShell.CLI.Display import THREE_CHOICES

def check_autostep(option, opt, value):
    """type-checker function for autostep"""
    try:
        if '%' in value:
            return float(value[:-1]) / 100.0
        return int(value)
    except ValueError:
        if value == 'auto':
            return value
        error_fmt = "option %s: invalid value: %r, should be node count, " \
                    "node percentage or 'auto'"
        raise optparse.OptionValueError(error_fmt % (opt, value))

def check_safestring(option, opt, value):
    """type-checker function for safestring"""
    try:
        safestr = str(value)
        # check if the string is not empty and not an option
        if not safestr or safestr.startswith('-'):
            raise ValueError()
        return safestr
    except ValueError:
        raise optparse.OptionValueError(
            "option %s: invalid value: %r" % (opt, value))

class Option(optparse.Option):
    """This Option subclass adds a new safestring type."""
    TYPES = optparse.Option.TYPES + ("autostep", "safestring",)
    TYPE_CHECKER = copy(optparse.Option.TYPE_CHECKER)
    TYPE_CHECKER["autostep"] = check_autostep
    TYPE_CHECKER["safestring"] = check_safestring

class OptionParser(optparse.OptionParser):
    """Derived OptionParser for all CLIs"""

    def __init__(self, usage, **kwargs):
        """Initialize ClusterShell CLI OptionParser"""
        optparse.OptionParser.__init__(self, usage,
                                       version="%%prog %s" % __version__,
                                       option_class=Option, **kwargs)
        # Set parsing to stop on the first non-option
        self.disable_interspersed_args()
        # Always install groupsource support
        self.add_option("-s", "--groupsource", action="store",
                        type="safestring", dest="groupsource",
                        help="optional groups.conf(5) group source to use")

    def install_clush_config_options(self):
        """Install config options for clush"""
        # config file override (--conf)
        self.add_option("--conf", action="store", metavar='FILE',
help="use alternate config file for clush.conf(5)") # config file option overrides (-O) self.add_option("-O", "--option", action="append", metavar="KEY=VALUE", dest="option", default=[], help="override any key=value clush.conf(5) options") def install_groupsconf_option(self): """Install an alternate groups.conf file option""" self.add_option("--groupsconf", action="store", metavar='FILE', help="use alternate config file for groups.conf(5)") def install_nodes_options(self): """Install nodes selection options""" optgrp = optparse.OptionGroup(self, "Selecting target nodes") optgrp.add_option("-w", action="append", type="safestring", dest="nodes", help="nodes where to run the command") optgrp.add_option("-x", action="append", type="safestring", dest="exclude", metavar="NODES", help="exclude nodes from the node list") optgrp.add_option("-a", "--all", action="store_true", dest="nodes_all", help="run command on all nodes") optgrp.add_option("-g", "--group", action="append", type="safestring", dest="group", help="run command on a group of nodes") optgrp.add_option("-X", action="append", dest="exgroup", metavar="GROUP", type="safestring", help="exclude nodes from this group") optgrp.add_option("-E", "--engine", action="store", dest="engine", choices=["auto"] + list(PreferredEngine.engines), default="auto", help=optparse.SUPPRESS_HELP) optgrp.add_option("--hostfile", "--machinefile", action="append", dest="hostfile", default=[], metavar='FILE', help="path to file containing a list of target hosts") optgrp.add_option("--topology", action="store", dest="topofile", default=None, metavar='FILE', help="topology configuration file to use for tree " "mode") optgrp.add_option("--pick", action="store", dest="pick", metavar="N", type="int", help="pick N node(s) at random in nodeset") self.add_option_group(optgrp) def install_display_options(self, debug_option=True, verbose_options=False, separator_option=False, dshbak_compat=False, msgtree_mode=False): """Install options needed by Display class""" optgrp = optparse.OptionGroup(self, "Output behaviour") if verbose_options: optgrp.add_option("-q", "--quiet", action="store_true", dest="quiet", help="be quiet, print essential output only") optgrp.add_option("-v", "--verbose", action="store_true", dest="verbose", help="be verbose, print informative messages") if debug_option: optgrp.add_option("-d", "--debug", action="store_true", dest="debug", help="output more messages for debugging purpose") optgrp.add_option("-G", "--groupbase", action="store_true", dest="groupbase", default=False, help="do not display group source prefix") optgrp.add_option("-L", action="store_true", dest="line_mode", help="disable header block and order output by nodes") optgrp.add_option("-N", action="store_false", dest="label", default=True, help="disable labeling of command line") if dshbak_compat: optgrp.add_option("-b", "-c", "--dshbak", action="store_true", dest="gather", help="gather nodes with same output") else: optgrp.add_option("-P", "--progress", action="store_true", dest="progress", help="show progress during command execution") optgrp.add_option("-b", "--dshbak", action="store_true", dest="gather", help="gather nodes with same output") optgrp.add_option("-B", action="store_true", dest="gatherall", default=False, help="like -b but including standard error") optgrp.add_option("-r", "--regroup", action="store_true", dest="regroup", default=False, help="fold nodeset using node groups") if separator_option: optgrp.add_option("-S", "--separator", action="store", dest="separator", 
default=':', help="node / line content separator string " "(default: ':')") else: optgrp.add_option("-S", "--maxrc", action="store_true", dest="maxrc", help="return the largest of command return codes") if msgtree_mode: # clubak specific optgrp.add_option("-F", "--fast", action="store_true", dest="fast_mode", help="faster but memory hungry mode") optgrp.add_option("-T", "--tree", action="store_true", dest="trace_mode", help="message tree trace mode") optgrp.add_option("--interpret-keys", action="store", dest="interpret_keys", choices=THREE_CHOICES, default=THREE_CHOICES[-1], help="whether to " "interpret keys (never, always or auto)") optgrp.add_option("--color", action="store", dest="whencolor", choices=THREE_CHOICES, help="whether to use ANSI " "colors (never, always or auto)") optgrp.add_option("--diff", action="store_true", dest="diff", help="show diff between gathered outputs") optgrp.add_option("--outdir", action="store", dest="outdir", help="output directory for stdout files (OPTIONAL)") optgrp.add_option("--errdir", action="store", dest="errdir", help="output directory for stderr files (OPTIONAL)") self.add_option_group(optgrp) def _copy_callback(self, option, opt_str, value, parser): """special callback method for copy and rcopy toggles""" # enable interspersed args again self.enable_interspersed_args() # set True to dest option attribute setattr(parser.values, option.dest, True) def install_filecopy_options(self): """Install file copying specific options""" optgrp = optparse.OptionGroup(self, "File copying") optgrp.add_option("-c", "--copy", action="callback", dest="copy", callback=self._copy_callback, help="copy local file or directory to remote nodes") optgrp.add_option("--rcopy", action="callback", dest="rcopy", callback=self._copy_callback, help="copy file or directory from remote nodes") optgrp.add_option("--dest", action="store", dest="dest_path", help="destination file or directory on the nodes") optgrp.add_option("-p", action="store_true", dest="preserve_flag", help="preserve modification times and modes") self.add_option_group(optgrp) def install_connector_options(self): """Install engine/connector (ssh, ...) options""" optgrp = optparse.OptionGroup(self, "Connection options") optgrp.add_option("-f", "--fanout", action="store", dest="fanout", help="use a specified fanout", type="int") #help="queueing delay for traffic grooming" optgrp.add_option("--grooming", action="store", dest="grooming_delay", help=optparse.SUPPRESS_HELP, type="float") optgrp.add_option("-l", "--user", action="store", type="safestring", dest="user", help="execute remote command as user") optgrp.add_option("-o", "--options", action="store", dest="options", help="can be used to give ssh options") optgrp.add_option("-t", "--connect_timeout", action="store", dest="connect_timeout", help="limit time to connect to a node", type="float") optgrp.add_option("-u", "--command_timeout", action="store", dest="command_timeout", help="limit time for command to run on the node", type="float") optgrp.add_option("-m", "--mode", action="store", dest="mode", help="run mode; define MODEs in /*.conf") optgrp.add_option("-R", "--worker", action="store", dest="worker", help="worker name to use for command execution " "('exec', 'rsh', 'ssh', etc. 
default is 'ssh')") optgrp.add_option("--remote", action="store", dest="remote", choices=('yes', 'no'), help="whether to enable remote execution: in tree " "mode, 'yes' forces connections to the leaf " "nodes for execution, 'no' establishes " "connections up to the leaf parent nodes for " "execution (default is 'yes')") self.add_option_group(optgrp) def install_nodeset_commands(self): """Install nodeset commands""" optgrp = optparse.OptionGroup(self, "Commands") optgrp.add_option("-c", "--count", action="store_true", dest="count", default=False, help="show number of nodes in nodeset(s)") optgrp.add_option("-e", "--expand", action="store_true", dest="expand", default=False, help="expand nodeset(s) to separate nodes") optgrp.add_option("-f", "--fold", action="store_true", dest="fold", default=False, help="fold nodeset(s) (or separate " "nodes) into one nodeset") optgrp.add_option("-l", "--list", action="count", dest="list", default=False, help="list node groups from one " "source (see -s GROUPSOURCE)") optgrp.add_option("-L", "--list-all", action="count", dest="listall", default=False, help="list node groups from all group sources") optgrp.add_option("-r", "--regroup", action="store_true", dest="regroup", default=False, help="fold nodes using node groups (see -s " "GROUPSOURCE)") optgrp.add_option("--list-sources", "--groupsources", action="store_true", dest="groupsources", default=False, help="list all active group sources (see " "groups.conf(5))") self.add_option_group(optgrp) def install_nodeset_operations(self): """Install nodeset operations""" optgrp = optparse.OptionGroup(self, "Operations") optgrp.add_option("-x", "--exclude", action="append", dest="sub_nodes", default=[], type="string", help="exclude specified nodeset") optgrp.add_option("-i", "--intersection", action="append", dest="and_nodes", default=[], type="string", help="calculate nodesets intersection") optgrp.add_option("-X", "--xor", action="append", dest="xor_nodes", default=[], type="string", help="calculate symmetric difference between " "nodesets") self.add_option_group(optgrp) def install_nodeset_options(self): """Install nodeset options""" optgrp = optparse.OptionGroup(self, "Options") optgrp.add_option("-a", "--all", action="store_true", dest="all", help="call external node groups support to " "display all nodes") optgrp.add_option("--autostep", action="store", dest="autostep", help="enable a-b/step style syntax when folding, " "value is min node count threshold (eg. '4', " "'50%' or 'auto')", type="autostep") optgrp.add_option("-d", "--debug", action="store_true", dest="debug", help="output more messages for debugging purpose") optgrp.add_option("-q", "--quiet", action="store_true", dest="quiet", help="be quiet, print essential output only") optgrp.add_option("-R", "--rangeset", action="store_true", dest="rangeset", help="switch to RangeSet instead " "of NodeSet. Useful when working on numerical " "cluster ranges, eg. 
1,5,18-31") optgrp.add_option("-G", "--groupbase", action="store_true", dest="groupbase", help="hide group source prefix " "(always \"@groupname\")") optgrp.add_option("-S", "--separator", action="store", dest="separator", default=' ', help="separator string to use when " "expanding nodesets (default: ' ')") optgrp.add_option("-O", "--output-format", action="store", dest="output_format", metavar='FORMAT', default='%s', help="output format (default: '%s')") optgrp.add_option("-I", "--slice", action="store", dest="slice_rangeset", help="return sliced off result", type="string") optgrp.add_option("--split", action="store", dest="maxsplit", help="split result into a number of subsets", type="int") optgrp.add_option("--contiguous", action="store_true", dest="contiguous", help="split result into " "contiguous subsets") optgrp.add_option("--axis", action="store", dest="axis", metavar="RANGESET", help="fold along these axis only " "(axis 1..n for nD nodeset)") optgrp.add_option("--pick", action="store", dest="pick", metavar="N", type="int", help="pick N node(s) at random in nodeset") self.add_option_group(optgrp) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/CLI/Utils.py0000644104717000001440000000330314501416555021033 0ustar00sthiellusers# # Copyright (C) 2010-2015 CEA/DAM # Copyright (C) 2018 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ CLI utility functions """ (KIBI, MEBI, GIBI, TEBI) = (1024.0, 1024.0 ** 2, 1024.0 ** 3, 1024.0 ** 4) def human_bi_bytes_unit(value): """ Format numerical `value` to display it using human readable unit with binary prefix like (KiB, MiB, GiB, ...). """ if value >= TEBI: fmt = "%.2f TiB" % (value / TEBI) elif value >= GIBI: fmt = "%.2f GiB" % (value / GIBI) elif value >= MEBI: fmt = "%.2f MiB" % (value / MEBI) elif value >= KIBI: fmt = "%.2f KiB" % (value / KIBI) else: fmt = "%d B" % value return fmt def nodeset_cmpkey(nodeset): """We want larger nodeset first, then sorted by first node index.""" return -len(nodeset), nodeset[0] def bufnodeset_cmpkey(buf): """Helper to get nodeset compare key from a buffer (buf, nodeset)""" return nodeset_cmpkey(buf[1]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/CLI/__init__.py0000644104717000001440000000000014501416555021461 0ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/Communication.py0000644104717000001440000003731014501416555022136 0ustar00sthiellusers# # Copyright (C) 2010-2016 CEA/DAM # Copyright (C) 2010-2011 Henri Doreau # Copyright (C) 2015-2017 Stephane Thiell # # This file is part of ClusterShell. 
#
# ClusterShell is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# ClusterShell is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with ClusterShell; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

"""
ClusterShell inter-node communication module

This module contains the material required for nodes to communicate with
each other within the propagation tree. At the highest level, messages are
instances of several classes. They can be converted into XML to be sent over
SSH links through a Channel instance. On the other side, the XML is parsed
and new message objects are instantiated.

Communication channels are implemented as ClusterShell event handlers.
Whenever a message chunk is read, the data is fed to a SAX XML parser, which
uses it to build the corresponding message instances, acting as a message
factory. As soon as an instance is ready, it is passed to the channel's
recv() method.

The recv() method of the Channel class is a stub that must be implemented in
a subclass to process incoming messages, and so must the start() method.
Subclassing the Channel class allows implementing whatever logic you want on
top of a communication channel.
"""

try:
    import _pickle as cPickle
except ImportError:
    # Python 2 compat
    import cPickle

import base64
import binascii
import logging
import os
import xml.sax

from xml.sax.handler import ContentHandler
from xml.sax.saxutils import XMLGenerator
from xml.sax import SAXParseException

from collections import deque

try:
    # Use cStringIO by default as it is faster
    from cStringIO import StringIO as BytesIO
except ImportError:
    # Python 3 compat
    from io import BytesIO

from ClusterShell import __version__
from ClusterShell.Event import EventHandler


# XML character encoding
ENCODING = 'utf-8'

# See Message.data_encode()
DEFAULT_B64_LINE_LENGTH = 65536


class MessageProcessingError(Exception):
    """base exception raised when an error occurs while processing
    incoming or outgoing messages.
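    Raised by the XML reader on invalid or unknown tags and message types,
    and by the Message payload helpers data_decode() and data_update() on
    invalid or unexpected payloads.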
""" class XMLReader(ContentHandler): """SAX handler for XML -> Messages instances conversion""" def __init__(self): """XMLReader initializer""" ContentHandler.__init__(self) self.msg_queue = deque() self.version = None # current packet under construction self._draft = None self._sections_map = None def startElement(self, name, attrs): """read a starting xml tag""" if name == 'channel': self.version = attrs.get('version') self.msg_queue.appendleft(StartMessage()) elif name == 'message': self._draft_new(attrs) else: raise MessageProcessingError('Invalid starting tag %s' % name) def endElement(self, name): """read an ending xml tag""" # end of message if name == 'message': self.msg_queue.appendleft(self._draft) self._draft = None elif name == 'channel': self.msg_queue.appendleft(EndMessage()) def characters(self, content): """read content characters (always decoded string)""" if self._draft is not None: self._draft.data_update(content.encode(ENCODING)) def msg_available(self): """return whether a message is available for delivery or not""" return len(self.msg_queue) > 0 def pop_msg(self): """pop and return the oldest message queued""" if self.msg_available(): return self.msg_queue.pop() def _draft_new(self, attributes): """start a new packet construction""" # associative array to select to correct constructor according to the # message type field contained in the serialized representation ctors_map = { ConfigurationMessage.ident: ConfigurationMessage, ControlMessage.ident: ControlMessage, ACKMessage.ident: ACKMessage, ErrorMessage.ident: ErrorMessage, StdOutMessage.ident: StdOutMessage, StdErrMessage.ident: StdErrMessage, RetcodeMessage.ident: RetcodeMessage, TimeoutMessage.ident: TimeoutMessage, } try: msg_type = attributes['type'] # select the good constructor ctor = ctors_map[msg_type] except KeyError: raise MessageProcessingError('Unknown message type') # build message with its attributes self._draft = ctor() self._draft.selfbuild(attributes) class Channel(EventHandler): """Use this event handler to establish a communication channel between to hosts within the propagation tree. The endpoint's logic has to be implemented by subclassing the Channel class and overriding the start() and recv() methods. There is no default behavior for these methods apart raising a NotImplementedError. Usage: >> chan = MyChannel() # inherits Channel >> task = task_self() >> task.shell("uname -a", node="host2", handler=chan) >> task.resume() """ # Common channel stream names SNAME_WRITER = 'ch-writer' SNAME_READER = 'ch-reader' SNAME_ERROR = 'ch-error' def __init__(self, initiator=False): """ """ EventHandler.__init__(self) self.worker = None # channel state flags self.opened = False self.setup = False # Are we initiating the Channel? False on the receiving Gateway. 
# if True, this Channel will report errors to subclass as StdErrMessage # if False, this Channel will send communication error responses self.initiator = initiator self._xml_reader = XMLReader() self._parser = xml.sax.make_parser(["IncrementalParser"]) self._parser.setContentHandler(self._xml_reader) self.logger = logging.getLogger(__name__) def _init(self): """start xml document for communication""" XMLGenerator(self.worker, encoding=ENCODING).startDocument() def _open(self): """open a new communication channel from src to dst""" xmlgen = XMLGenerator(self.worker, encoding=ENCODING) xmlgen.startElement('channel', {'version': __version__}) def _close(self): """close an already opened channel""" send_endtag = self.opened if send_endtag: XMLGenerator(self.worker, encoding=ENCODING).endElement('channel') self.worker.abort() self.opened = self.setup = False def ev_start(self, worker): """connection established. Open higher level channel""" self.worker = worker self.start() def ev_read(self, worker, node, sname, msg): """channel has data to read""" # sname can be either SNAME_READER or self.SNAME_ERROR if sname == self.SNAME_ERROR: if self.initiator: self.recv(StdErrMessage(node, msg)) # This is not considered fatal from our side, so we choose to not # close the channel on stderr message. return try: self._parser.feed(msg + b'\n') except SAXParseException as ex: self.logger.error("SAXParseException: %s: %s", ex.getMessage(), msg) # Warning: do not send malformed raw message back if self.initiator: self.recv(StdErrMessage(node, ex.getMessage())) else: # target, not initiator: we can send an error message back self.send(ErrorMessage('Parse error: %s' % ex.getMessage())) # This constitutes a fatal channel error, close it now. self._close() return except MessageProcessingError as ex: self.logger.error("MessageProcessingError: %s", ex) if self.initiator: self.recv(StdErrMessage(node, str(ex))) else: # target, not initiator: we can send an error message back self.send(ErrorMessage(str(ex))) # This constitutes a fatal channel error, close it now. self._close() return # pass messages to the driver if ready while self._xml_reader.msg_available(): msg = self._xml_reader.pop_msg() assert msg is not None self.recv(msg) def send(self, msg): """write an outgoing message as its XML representation""" #self.logger.debug('SENDING to worker %s: "%s"', id(self.worker), # msg.xml()) self.worker.write(msg.xml() + b'\n', sname=self.SNAME_WRITER) def start(self): """initialization logic""" raise NotImplementedError('Abstract method: subclasses must implement') def recv(self, msg): """callback: process incoming message""" raise NotImplementedError('Abstract method: subclasses must implement') class Message(object): """base message class""" _inst_counter = 0 ident = 'GEN' has_payload = False def __init__(self): """ """ self.attr = {'type': str, 'msgid': int} self.type = self.__class__.ident self.msgid = Message._inst_counter self.data = None Message._inst_counter += 1 def data_encode(self, inst): """serialize an instance and store the result""" # Base64 transfer encoding for MIME mandates a fixed line length # of 76 characters, which is way too small for our per-line ev_read # mechanism. So use b64encode() here instead of encodestring(). encoded = base64.b64encode(cPickle.dumps(inst)) # We now follow relaxed RFC-4648 for base64, but we still add some # newlines to very long lines to avoid memory pressure (eg. --rcopy). # In RFC-4648, CRLF characters constitute "non-alphabet characters" # and are ignored. 
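# Illustrative example (hypothetical tiny limit): with line_length=4, an
# encoded payload b'AAAABBBBCC' would be stored as b'AAAA\nBBBB\nCC';
# base64.b64decode() discards the embedded newlines by default, so the
# round trip in data_decode() is unaffected.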
line_length = int(os.environ.get('CLUSTERSHELL_GW_B64_LINE_LENGTH', DEFAULT_B64_LINE_LENGTH)) self.data = b'\n'.join(encoded[pos:pos+line_length] for pos in range(0, len(encoded), line_length)) def data_decode(self): """deserialize a previously encoded instance and return it""" # NOTE: name is confusing, data_decode() returns pickle-decoded bytes # (encoded string) and not (decoded) string... # if self.data is None then an exception is raised here try: return cPickle.loads(base64.b64decode(self.data)) except (EOFError, TypeError, cPickle.UnpicklingError, binascii.Error): # raised by cPickle.loads() if self.data is not valid raise MessageProcessingError('Message %s has an invalid payload' % self.ident) def data_update(self, raw): """append data to the instance (used for deserialization)""" if self.has_payload: if self.data is None: self.data = raw # first encoded packet else: self.data += raw else: # ensure that incoming messages don't contain unexpected payloads raise MessageProcessingError('Got unexpected payload for Message %s' % self.ident) def selfbuild(self, attributes): """self construction from a table of attributes""" for k, fmt in self.attr.items(): try: setattr(self, k, fmt(attributes[k])) except KeyError: raise MessageProcessingError( 'Invalid "message" attributes: missing key "%s"' % k) def __str__(self): """printable representation""" elts = ['%s: %s' % (k, str(self.__dict__[k])) for k in self.attr.keys()] attributes = ', '.join(elts) return "Message %s (%s)" % (self.type, attributes) def xml(self): """generate XML version of a configuration message""" out = BytesIO() generator = XMLGenerator(out, encoding=ENCODING) # "stringify" entries for XML conversion state = {} for k in self.attr: state[k] = str(getattr(self, k)) generator.startElement('message', state) if self.data: generator.characters(self.data) generator.endElement('message') xml_msg = out.getvalue() out.close() return xml_msg class ConfigurationMessage(Message): """configuration propagation container""" ident = 'CFG' has_payload = True def __init__(self, gateway=''): """initialize with gateway node name""" Message.__init__(self) self.attr.update({'gateway': str}) self.gateway = gateway class RoutedMessageBase(Message): """abstract class for routed message (with worker source id)""" def __init__(self, srcid): Message.__init__(self) self.attr.update({'srcid': int}) self.srcid = srcid class ControlMessage(RoutedMessageBase): """action request""" ident = 'CTL' has_payload = True def __init__(self, srcid=0): """ """ RoutedMessageBase.__init__(self, srcid) self.attr.update({'action': str, 'target': str}) self.action = '' self.target = '' class ACKMessage(Message): """acknowledgement message""" ident = 'ACK' def __init__(self, ackid=0): """ """ Message.__init__(self) self.attr.update({'ack': int}) self.ack = ackid class ErrorMessage(Message): """error message""" ident = 'ERR' def __init__(self, err=''): """ """ Message.__init__(self) self.attr.update({'reason': str}) self.reason = err class StdOutMessage(RoutedMessageBase): """container message for standard output""" ident = 'OUT' has_payload = True def __init__(self, nodes='', output=None, srcid=0): """ Initialized either with empty payload (to be loaded, already encoded), or with payload provided (via output to encode here). 
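A minimal sketch (hypothetical values)::

    >>> msg = StdOutMessage(nodes='node1', output=b'hello', srcid=7)
    >>> msg.data_decode()
    b'hello'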
""" RoutedMessageBase.__init__(self, srcid) self.attr.update({'nodes': str}) self.nodes = nodes self.data = None # something encoded or None if output is not None: self.data_encode(output) class StdErrMessage(StdOutMessage): """container message for stderr output""" ident = 'SER' class RetcodeMessage(RoutedMessageBase): """container message for return code""" ident = 'RET' def __init__(self, nodes='', retcode=0, srcid=0): """ """ RoutedMessageBase.__init__(self, srcid) self.attr.update({'retcode': int, 'nodes': str}) self.retcode = retcode self.nodes = nodes class TimeoutMessage(RoutedMessageBase): """container message for timeout notification""" ident = 'TIM' def __init__(self, nodes='', srcid=0): """ """ RoutedMessageBase.__init__(self, srcid) self.attr.update({'nodes': str}) self.nodes = nodes class StartMessage(Message): """message indicating the start of a channel communication""" ident = 'CHA' class EndMessage(Message): """end of channel message""" ident = 'END' ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/Defaults.py0000644104717000001440000002676714501416555021116 0ustar00sthiellusers# # Copyright (C) 2015-2019 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ ClusterShell Defaults module. Manage library defaults. """ from __future__ import print_function # Imported early # Should not import any other ClusterShell modules when loaded try: from configparser import ConfigParser, NoOptionError, NoSectionError except ImportError: # Python 2 compat from ConfigParser import ConfigParser, NoOptionError, NoSectionError import os import sys # # defaults.conf sections # CFG_SECTION_TASK_DEFAULT = 'task.default' CFG_SECTION_TASK_INFO = 'task.info' CFG_SECTION_NODESET = 'nodeset' CFG_SECTION_ENGINE = 'engine' # # Functions # def _task_print_debug(task, line): """Default task debug printing function.""" print(line) def _load_workerclass(workername): """ Return the class pointer matching `workername`. This can be the 'short' name (such as `ssh`) or a fully-qualified module path (such as ClusterShell.Worker.Ssh). The module is loaded if not done yet. 
""" # First try the worker name as a module under ClusterShell.Worker, # but if that fails, try the worker name directly try: modname = "ClusterShell.Worker.%s" % workername.capitalize() _import_module(modname) except ImportError: modname = workername _import_module(modname) # Get the class pointer return sys.modules[modname].WORKER_CLASS def _import_module(modname): """Import a python module if not done yet.""" # Iterate over a copy of sys.modules' keys to avoid RuntimeError if modname.lower() not in [mod.lower() for mod in list(sys.modules)]: # Import module if not yet loaded __import__(modname) def _local_workerclass(defaults): """Return default local worker class.""" return _load_workerclass(defaults.local_workername) def _distant_workerclass(defaults): """Return default distant worker class.""" return _load_workerclass(defaults.distant_workername) def config_paths(config_name): """Return default path list for a ClusterShell config file name.""" paths = [os.path.join('/etc/clustershell', config_name), # system-wide # default pip --user config file os.path.expanduser('~/.local/etc/clustershell/%s' % config_name), # Python installation prefix (for venv) os.path.join(sys.prefix, 'etc/clustershell', config_name), # per-user config (XDG Base Directory Specification) os.path.join(os.environ.get('XDG_CONFIG_HOME', os.path.expanduser('~/.config')), 'clustershell', config_name)] # $CLUSTERSHELL_CFGDIR has precedence over any other config paths if 'CLUSTERSHELL_CFGDIR' in os.environ: paths.append(os.path.join(os.environ['CLUSTERSHELL_CFGDIR'], config_name)) return paths def _converter_integer_tuple(value): """ConfigParser converter for tuple of integers""" # NOTE: compatible with ConfigParser 'converters' argument (Python 3.5+) return tuple(int(x) for x in value.split(',') if x.strip()) def _parser_get_integer_tuple(parser, section, option, **kwargs): """ Compatible converter for parsing tuple of integers until we can use converters from new ConfigParser (Python 3.5+). """ return _converter_integer_tuple( ConfigParser.get(parser, section, option, **kwargs)) # # Classes # class Defaults(object): """ Class used to manipulate ClusterShell defaults. The following attributes may be read at any time and also changed programmatically, for most of them **before** ClusterShell objects (Task or NodeSet) are initialized. NodeSet defaults: * fold_axis (tuple of axis integers; default is empty tuple ``()``) Task defaults: * stderr (boolean; default is ``False``) * stdin (boolean; default is ``True``) * stdout_msgtree (boolean; default is ``True``) * stderr_msgtree (boolean; default is ``True``) * engine (string; default is ``'auto'``) * local_workername (string; default is ``'exec'``) * distant_workername (string; default is ``'ssh'``) * debug (boolean; default is ``False``) * print_debug (function; default is internal) * fanout (integer; default is ``64``) * grooming_delay (float; default is ``0.25``) * connect_timeout (float; default is ``10``) * command_timeout (float; default is ``0``) Engine defaults: * port_qlimit (integer; default is ``100``) Example of use:: >>> from ClusterShell.Defaults import DEFAULTS >>> from ClusterShell.Task import task_self >>> # Change default distant worker to rsh (WorkerRsh) ... 
DEFAULTS.distant_workername = 'rsh' >>> task = task_self() >>> task.run("uname -r", nodes="cs[01-03]") >>> list((str(msg), nodes) for msg, nodes in task.iter_buffers()) [('3.10.0-229.7.2.el7.x86_64', ['cs02', 'cs01', 'cs03'])] The library default values of all of the above attributes may be changed using the defaults.conf configuration file, except for *print_debug* (cf. :ref:`defaults-config`). An example defaults.conf file should be included with ClusterShell. Remember that this could affect all applications using ClusterShell. """ # # Default values for task "default" sync dict # _TASK_DEFAULT = {"stderr" : False, "stdin" : True, "stdout_msgtree" : True, "stderr_msgtree" : True, "engine" : 'auto', "port_qlimit" : 100, # 1.8 compat "auto_tree" : True, "local_workername" : 'exec', "distant_workername" : 'ssh'} # # Datatype converters for task_default # _TASK_DEFAULT_CONVERTERS = {"stderr" : ConfigParser.getboolean, "stdin" : ConfigParser.getboolean, "stdout_msgtree" : ConfigParser.getboolean, "stderr_msgtree" : ConfigParser.getboolean, "engine" : ConfigParser.get, "port_qlimit" : ConfigParser.getint, # 1.8 compat "auto_tree" : ConfigParser.getboolean, "local_workername" : ConfigParser.get, "distant_workername" : ConfigParser.get} # # Default values for task "info" async dict # _TASK_INFO = {"debug" : False, "print_debug" : _task_print_debug, "fanout" : 64, "grooming_delay" : 0.25, "connect_timeout" : 10, "command_timeout" : 0} # # Datatype converters for task_info # _TASK_INFO_CONVERTERS = {"debug" : ConfigParser.getboolean, "fanout" : ConfigParser.getint, "grooming_delay" : ConfigParser.getfloat, "connect_timeout" : ConfigParser.getfloat, "command_timeout" : ConfigParser.getfloat} # # Black list of info keys whose values cannot safely be propagated # in tree mode # _TASK_INFO_PKEYS_BL = ['engine', 'print_debug'] # # Default values for NodeSet # _NODESET = {"fold_axis" : ()} # # Datatype converters for NodeSet defaults # _NODESET_CONVERTERS = {"fold_axis" : _parser_get_integer_tuple} # # Default values for Engine objects # _ENGINE = {"port_qlimit" : 100} # # Datatype converters for Engine defaults # _ENGINE_CONVERTERS = {"port_qlimit" : ConfigParser.getint} def __init__(self, filenames): """Initialize Defaults from config filenames""" self._task_default = self._TASK_DEFAULT.copy() self._task_info = self._TASK_INFO.copy() self._task_info_pkeys_bl = list(self._TASK_INFO_PKEYS_BL) self._nodeset = self._NODESET.copy() self._engine = self._ENGINE.copy() config = ConfigParser() parsed = config.read(filenames) if parsed: self._parse_config(config) def _parse_config(self, config): """parse config""" # task_default overrides for key, conv in self._TASK_DEFAULT_CONVERTERS.items(): try: self._task_default[key] = conv(config, CFG_SECTION_TASK_DEFAULT, key) except (NoSectionError, NoOptionError): pass # task_info overrides for key, conv in self._TASK_INFO_CONVERTERS.items(): try: self._task_info[key] = conv(config, CFG_SECTION_TASK_INFO, key) except (NoSectionError, NoOptionError): pass # NodeSet for key, conv in self._NODESET_CONVERTERS.items(): try: self._nodeset[key] = conv(config, CFG_SECTION_NODESET, key) except (NoSectionError, NoOptionError): pass # Engine for key, conv in self._ENGINE_CONVERTERS.items(): try: self._engine[key] = conv(config, CFG_SECTION_ENGINE, key) except (NoSectionError, NoOptionError): pass def __getattr__(self, name): """Defaults attribute lookup""" # 1.8 compat: port_qlimit moved into engine section if name == 'port_qlimit': if self._engine[name] == self._ENGINE[name]: 
return self._task_default[name] if name in self._engine: return self._engine[name] elif name in self._task_default: return self._task_default[name] elif name in self._task_info: return self._task_info[name] elif name in self._nodeset: return self._nodeset[name] raise AttributeError(name) def __setattr__(self, name, value): """Defaults attribute assignment""" if name in ('_task_default', '_task_info', '_task_info_pkeys_bl', '_nodeset', '_engine'): object.__setattr__(self, name, value) elif name in self._engine: self._engine[name] = value elif name in self._task_default: self._task_default[name] = value elif name in self._task_info: self._task_info[name] = value elif name in self._nodeset: self._nodeset[name] = value else: raise AttributeError(name) # # Globally accessible Defaults object # DEFAULTS = Defaults(config_paths('defaults.conf')) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3323298 ClusterShell-1.9.2/lib/ClusterShell/Engine/0000755104717000001440000000000014505640536020162 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/Engine/EPoll.py0000644104717000001440000001570514501416555021555 0ustar00sthiellusers# # Copyright (C) 2009-2015 CEA/DAM # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ A ClusterShell Engine using epoll, an I/O event notification facility. The epoll event distribution interface is available on Linux 2.6, and has been included in Python 2.6. """ import errno import select import time from ClusterShell.Engine.Engine import Engine, E_READ, E_WRITE from ClusterShell.Engine.Engine import EngineNotSupportedError from ClusterShell.Engine.Engine import EngineTimeoutException from ClusterShell.Worker.EngineClient import EngineClientEOF class EngineEPoll(Engine): """ EPoll Engine ClusterShell Engine class using the select.epoll mechanism. """ identifier = "epoll" def __init__(self, info): """ Initialize Engine. """ Engine.__init__(self, info) try: # get an epoll object self.epolling = select.epoll() except AttributeError: raise EngineNotSupportedError(EngineEPoll.identifier) def release(self): """Release engine-specific resources.""" self.epolling.close() def _register_specific(self, fd, event): """Engine-specific fd registering. Called by Engine register.""" if event & E_READ: eventmask = select.EPOLLIN else: assert event & E_WRITE eventmask = select.EPOLLOUT self.epolling.register(fd, eventmask) def _unregister_specific(self, fd, ev_is_set): """ Engine-specific fd unregistering. Called by Engine unregister. 
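If ev_is_set is false, no event was actually registered with epoll for this fd, so there is nothing to unregister.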
""" self._debug("UNREGSPEC fd=%d ev_is_set=%x"% (fd, ev_is_set)) if ev_is_set: self.epolling.unregister(fd) def _modify_specific(self, fd, event, setvalue): """ Engine-specific modifications after a interesting event change for a file descriptor. Called automatically by Engine set_events(). For the epoll engine, it modifies the event mask associated to a file descriptor. """ self._debug("MODSPEC fd=%d event=%x setvalue=%d" % (fd, event, setvalue)) if setvalue: self._register_specific(fd, event) else: self.epolling.unregister(fd) def runloop(self, timeout): """ Run epoll main loop. """ if not timeout: timeout = -1 start_time = time.time() # run main event loop... while self.evlooprefcnt > 0: self._debug("LOOP evlooprefcnt=%d (reg_clifds=%s) (timers=%d)" % \ (self.evlooprefcnt, self.reg_clifds.keys(), len(self.timerq))) try: timeo = self.timerq.nextfire_delay() if timeout > 0 and timeo >= timeout: # task timeout may invalidate clients timeout self.timerq.clear() timeo = timeout elif timeo == -1: timeo = timeout self._current_loopcnt += 1 if timeo < 0: poll_timeo = -1 else: poll_timeo = timeo evlist = self.epolling.poll(poll_timeo) except IOError as ex: # might get interrupted by a signal if ex.errno == errno.EINTR: continue for fd, event in evlist: # get client instance client, stream = self._fd2client(fd) if client is None: continue fdev = stream.evmask sname = stream.name # set as current processed stream self._current_stream = stream # check for poll error condition of some sort if event & select.EPOLLERR: self._debug("EPOLLERR fd=%d sname=%s fdev=0x%x (%s)" % \ (fd, sname, fdev, client)) assert fdev & E_WRITE self.remove_stream(client, stream) self._current_stream = None continue # check for data to read if event & select.EPOLLIN: assert fdev & E_READ assert stream.events & fdev, (stream.events, fdev) self.modify(client, sname, 0, fdev) try: client._handle_read(sname) except EngineClientEOF: self._debug("EngineClientEOF %s %s" % (client, sname)) self.remove_stream(client, stream) self._current_stream = None continue # or check for end of stream (do not handle both at the same # time because handle_read() may perform a partial read) elif event & select.EPOLLHUP: assert fdev & E_READ, "fdev 0x%x & E_READ" % fdev self._debug("EPOLLHUP fd=%d sname=%s %s (%s)" % \ (fd, sname, client, client.streams)) self.remove_stream(client, stream) self._current_stream = None continue # check for writing if event & select.EPOLLOUT: self._debug("EPOLLOUT fd=%d sname=%s %s (%s)" % \ (fd, sname, client, client.streams)) assert fdev & E_WRITE assert stream.events & fdev, (stream.events, fdev) self.modify(client, sname, 0, fdev) client._handle_write(sname) self._current_stream = None # apply any changes occurred during processing if client.registered: self.set_events(client, stream) # check for task runloop timeout if timeout > 0 and time.time() >= start_time + timeout: raise EngineTimeoutException() # process clients timeout self.fire_timers() self._debug("LOOP EXIT evlooprefcnt=%d (reg_clifds=%s) (timers=%d)" % \ (self.evlooprefcnt, self.reg_clifds, len(self.timerq))) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/lib/ClusterShell/Engine/Engine.py0000644104717000001440000006507014505632065021747 0ustar00sthiellusers# # Copyright (C) 2007-2016 CEA/DAM # Copyright (C) 2015-2016 Stephane Thiell # # This file is part of ClusterShell. 
# # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ Interface of underlying Task's Engine. An Engine implements a loop your thread enters and uses to call event handlers in response to incoming events (from workers, timers, etc.). """ import errno import heapq import logging import sys import time import traceback LOGGER = logging.getLogger(__name__) # Engine client fd I/O event interest bits E_READ = 0x1 E_WRITE = 0x2 # Define epsilon value for time float arithmetic operations EPSILON = 1.0e-3 # Special fanout value for unlimited FANOUT_UNLIMITED = -1 # Special fanout value to use default Engine fanout FANOUT_DEFAULT = None class EngineException(Exception): """ Base engine exception. """ class EngineAbortException(EngineException): """ Raised on user abort. """ def __init__(self, kill): EngineException.__init__(self) self.kill = kill class EngineTimeoutException(EngineException): """ Raised when a timeout is encountered. """ class EngineIllegalOperationError(EngineException): """ Error raised when an illegal operation has been performed. """ class EngineAlreadyRunningError(EngineIllegalOperationError): """ Error raised when the engine is already running. """ class EngineNotSupportedError(EngineException): """ Error raised when the engine mechanism is not supported. """ def __init__(self, engineid): EngineException.__init__(self) self.engineid = engineid class EngineBaseTimer(object): """ Abstract class for ClusterShell's engine timer. Such a timer requires a relative fire time (delay) in seconds (as float), and supports an optional repeating interval in seconds (as float too). See EngineTimer for more information about ClusterShell timers. """ def __init__(self, fire_delay, interval=-1.0, autoclose=False): """ Create a base timer. """ # fire_delay is used for comparison between timers and MUST NOT be # None in Python 3 as comparison with float is not possible and could # lead to confusion anyway. If None is passed, fire_delay is now set # to -1 to avoid the timer to be armed in _EngineTimerQ.schedule(). if fire_delay is None: self.fire_delay = -1.0 else: self.fire_delay = fire_delay self.interval = interval self.autoclose = autoclose self._engine = None self._timercase = None def _set_engine(self, engine): """ Bind to engine, called by Engine. """ if self._engine: # A timer can be registered to only one engine at a time. raise EngineIllegalOperationError("Already bound to engine.") self._engine = engine def invalidate(self): """ Invalidates a timer object, stopping it from ever firing again. """ if self._engine: self._engine.timerq.invalidate(self) self._engine = None def is_valid(self): """ Returns a boolean value that indicates whether an EngineTimer object is valid and able to fire. 
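A sketch, assuming ``timer`` is an EngineTimer previously added to a
running task::

    >>> timer.is_valid()
    True
    >>> timer.invalidate()
    >>> timer.is_valid()
    False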
""" return self._engine is not None def set_nextfire(self, fire_delay, interval=-1): """ Set the next firing delay in seconds for an EngineTimer object. The optional parameter *interval* sets the firing interval of the timer. If not specified, the timer fires once and then is automatically invalidated. Time values are expressed in second using floating point values. Precision is implementation (and system) dependent. It is safe to call this method from the task owning this timer object, in any event handlers, anywhere. However, resetting a timer's next firing time may be a relatively expensive operation. It is more efficient to let timers autorepeat or to use this method from the timer's own event handler callback (ie. from its ev_timer). """ if not self.is_valid(): raise EngineIllegalOperationError("Operation on invalid timer.") self.fire_delay = fire_delay self.interval = interval self._engine.timerq.reschedule(self) def _fire(self): raise NotImplementedError("Derived classes must implement.") class EngineTimer(EngineBaseTimer): """ Concrete class EngineTimer An EngineTimer object represents a timer bound to an engine that fires at a preset time in the future. Timers can fire either only once or repeatedly at fixed time intervals. Repeating timers can also have their next firing time manually adjusted. A timer is not a real-time mechanism; it fires when the task's underlying engine to which the timer has been added is running and able to check if the timer's firing time has passed. """ def __init__(self, fire_delay, interval, autoclose, handler): EngineBaseTimer.__init__(self, fire_delay, interval, autoclose) self.eh = handler assert self.eh is not None, "An event handler is needed for timer." def _fire(self): self.eh.ev_timer(self) class _EngineTimerQ(object): class _EngineTimerCase(object): """ Helper class that allows comparisons of fire times, to be easily used in an heapq. """ def __init__(self, client): self.client = client self.client._timercase = self # arm timer (first time) assert self.client.fire_delay > -EPSILON self.fire_date = self.client.fire_delay + time.time() def __lt__(self, other): # NOTE: add @total_ordering decorator in Python 2.7+ return self.fire_date < other.fire_date def __cmp__(self, other): # DEPRECATED: no longer used in Python 3 return cmp(self.fire_date, other.fire_date) def arm(self, client): assert client is not None self.client = client self.client._timercase = self # setup next firing date time_current = time.time() if self.client.fire_delay > -EPSILON: self.fire_date = self.client.fire_delay + time_current else: interval = float(self.client.interval) assert interval > 0 # Keep it simple: increase fire_date by interval even if # fire_date stays in the past, as in that case it's going to # fire again at next runloop anyway. self.fire_date += interval # Just print a debug message that could help detect issues # coming from a long-running timer handler. if self.fire_date < time_current: LOGGER.debug("Warning: passed interval time for %r " "(long running event handler?)", self.client) def disarm(self): client = self.client client._timercase = None self.client = None return client def armed(self): return self.client is not None def __init__(self, engine): """ Initializer. """ self._engine = engine self.timers = [] self.armed_count = 0 def __len__(self): """ Return the number of active timers. """ return self.armed_count def schedule(self, client): """ Insert and arm a client's timer. 
""" # arm only if fire is set if client.fire_delay > -EPSILON: heapq.heappush(self.timers, _EngineTimerQ._EngineTimerCase(client)) self.armed_count += 1 if not client.autoclose: self._engine.evlooprefcnt += 1 def reschedule(self, client): """ Re-insert client's timer. """ if client._timercase: self.invalidate(client) self._dequeue_disarmed() self.schedule(client) def invalidate(self, client): """ Invalidate client's timer. Current implementation doesn't really remove the timer, but simply flags it as disarmed. """ if not client._timercase: # if timer is being fire, invalidate its values client.fire_delay = -1.0 client.interval = -1.0 return if self.armed_count <= 0: raise ValueError("Engine client timer not found in timer queue") client._timercase.disarm() self.armed_count -= 1 if not client.autoclose: self._engine.evlooprefcnt -= 1 def _dequeue_disarmed(self): """ Dequeue disarmed timers (sort of garbage collection). """ while len(self.timers) > 0 and not self.timers[0].armed(): heapq.heappop(self.timers) def fire_expired(self): """ Remove expired timers from the queue and fire associated clients. """ self._dequeue_disarmed() # Build a queue of expired timercases. Any expired (and still armed) # timer is fired, but only once per call. expired_timercases = [] now = time.time() while self.timers and self.timers[0].fire_date <= now: expired_timercases.append(heapq.heappop(self.timers)) self._dequeue_disarmed() for timercase in expired_timercases: # Be careful to recheck and skip any disarmed timers (eg. timer # could be invalidated from another timer's event handler) if not timercase.armed(): continue # Disarm timer client = timercase.disarm() # Fire timer client.fire_delay = -1.0 client._fire() # Rearm it if needed - Note: fire=0 is valid, interval=0 is not if client.fire_delay >= -EPSILON or client.interval > EPSILON: timercase.arm(client) heapq.heappush(self.timers, timercase) else: self.armed_count -= 1 if not client.autoclose: self._engine.evlooprefcnt -= 1 def nextfire_delay(self): """ Return next timer fire delay (relative time). """ self._dequeue_disarmed() if len(self.timers) > 0: return max(0., self.timers[0].fire_date - time.time()) return -1 def clear(self): """ Stop and clear all timers. """ for timer in self.timers: if timer.armed(): timer.client.invalidate() self.timers = [] self.armed_count = 0 class Engine(object): """ Base class for ClusterShell Engines. Subclasses have to implement a runloop listening for client events. Subclasses that override other than "pure virtual methods" should call corresponding base class methods. """ identifier = "(none)" def __init__(self, info): """Initialize base class.""" # take a reference on info dict self.info = info # and update engine id self.info['engine'] = self.identifier # keep track of all clients self._clients = set() self._ports = set() # keep track of the number of registered clients per worker # (this does not include ports) self._reg_stats = {} # keep track of registered file descriptors in a dict where keys # are fileno and values are (EngineClient, EngineClientStream) tuples self.reg_clifds = {} # fanout cache used to speed up client launch when fanout changed self._prev_fanout = 0 # fanout_diff != 0 the first time # Current loop iteration counter. It is the number of performed engine # loops in order to keep track of client registration epoch, so we can # safely process FDs by chunk and re-use FDs (see Engine._fd2client). 
self._current_loopcnt = 0 # Current stream being processed self._current_stream = None # timer queue to handle both timers and clients timeout self.timerq = _EngineTimerQ(self) # reference count to the event loop (must include registered # clients and timers configured WITHOUT autoclose) self.evlooprefcnt = 0 # running state self.running = False # runloop-has-exited flag self._exited = False def release(self): """Release engine-specific resources.""" pass def clients(self): """Get a copy of clients set.""" return self._clients.copy() def ports(self): """ Get a copy of ports set. """ return self._ports.copy() def _fd2client(self, fd): client, stream = self.reg_clifds.get(fd, (None, None)) if client: if client._reg_epoch < self._current_loopcnt: return client, stream else: LOGGER.debug("_fd2client: ignoring just re-used FD %d", stream.fd) return (None, None) def _can_register(self, client): assert not client.registered if not client.delayable or client.worker._fanout == FANOUT_UNLIMITED: return True elif client.worker._fanout is FANOUT_DEFAULT: return self._reg_stats.get('default', 0) < self.info['fanout'] else: worker = client.worker return self._reg_stats.get(worker, 0) < worker._fanout def _update_reg_stats(self, client, offset): if client.worker._fanout is FANOUT_DEFAULT: key = 'default' else: key = client.worker self._reg_stats.setdefault(key, 0) self._reg_stats[key] += offset def add(self, client): """Add a client to engine.""" # bind to engine client._set_engine(self) if client.delayable: # add to regular client set self._clients.add(client) else: # add to port set (non-delayable) self._ports.add(client) if self.running and self._can_register(client): # in-fly add if running self.register(client._start()) def _remove(self, client, abort, did_timeout=False): """Remove a client from engine (subroutine).""" # be careful to also remove ports when engine has not started yet if client.registered or not client.delayable: if client.registered: self.unregister(client) # care should be taken to ensure correct closing flags client._close(abort=abort, timeout=did_timeout) def remove(self, client, abort=False, did_timeout=False): """ Remove a client from engine. Does NOT aim to flush individual stream read buffers. """ self._debug("REMOVE %s" % client) if client.delayable: self._clients.remove(client) else: self._ports.remove(client) self._remove(client, abort, did_timeout) # we just removed a client, so start pending client(s) self.start_clients() def remove_stream(self, client, stream): """ Regular way to remove a client stream from engine, performing needed read flush as needed. If no more retainable stream remains for this client, this method automatically removes the entire client from engine. This function does nothing if the stream is not registered. """ if stream.fd not in self.reg_clifds: LOGGER.debug("remove_stream: %s not registered", stream) return self.unregister_stream(client, stream) # _close_stream() will flush pending read buffers so may generate events client._close_stream(stream.name) # client may have been removed by previous events, if not check whether # some retained streams still remain if client in self._clients and not client.streams.retained(): self.remove(client) def clear(self, did_timeout=False, clear_ports=False): """ Remove all clients. Does not flush read buffers. Subclasses that override this method should call base class method. 
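When clear_ports is true, port (non-delayable) clients are removed as well.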
""" all_clients = [self._clients] if clear_ports: all_clients.append(self._ports) for clients in all_clients: while len(clients) > 0: client = clients.pop() self._remove(client, True, did_timeout) def register(self, client): """ Register an engine client. Subclasses that override this method should call base class method. """ assert client in self._clients or client in self._ports assert not client.registered self._debug("REG %s (%s)(autoclose=%s)" % \ (client.__class__.__name__, client.streams, client.autoclose)) client.registered = True client._reg_epoch = self._current_loopcnt if client.delayable: self._update_reg_stats(client, 1) # set interest event bits... for streams, ievent in ((client.streams.active_readers, E_READ), (client.streams.active_writers, E_WRITE)): for stream in streams(): self.reg_clifds[stream.fd] = client, stream stream.events |= ievent if not client.autoclose: self.evlooprefcnt += 1 self._register_specific(stream.fd, ievent) # start timeout timer self.timerq.schedule(client) def unregister_stream(self, client, stream): """Unregister a stream from a client.""" self._debug("UNREG_STREAM stream=%s" % stream) assert stream is not None and stream.fd is not None assert stream.fd in self.reg_clifds, \ "stream fd %d not registered" % stream.fd assert client.registered self._unregister_specific(stream.fd, stream.events & stream.evmask) self._debug("UNREG_STREAM unregistering stream fd %d (%d)" % \ (stream.fd, len(client.streams))) stream.events &= ~stream.evmask del self.reg_clifds[stream.fd] if not client.autoclose: self.evlooprefcnt -= 1 def unregister(self, client): """Unregister a client""" # sanity check assert client.registered self._debug("UNREG %s (%s)" % (client.__class__.__name__, \ client.streams)) # remove timeout timer self.timerq.invalidate(client) # clear interest events... 
for streams, ievent in ((client.streams.active_readers, E_READ), (client.streams.active_writers, E_WRITE)): for stream in streams(): if stream.fd in self.reg_clifds: self._unregister_specific(stream.fd, stream.events & ievent) stream.events &= ~ievent del self.reg_clifds[stream.fd] if not client.autoclose: self.evlooprefcnt -= 1 client.registered = False if client.delayable: self._update_reg_stats(client, -1) def modify(self, client, sname, setmask, clearmask): """Modify the next loop interest events bitset for a client stream.""" self._debug("MODEV set:0x%x clear:0x%x %s (%s)" % (setmask, clearmask, client, sname)) stream = client.streams[sname] stream.new_events &= ~clearmask stream.new_events |= setmask if self._current_stream is not stream: # modifying a non processing stream, apply new_events now self.set_events(client, stream) def _register_specific(self, fd, event): """Engine-specific register fd for event method.""" raise NotImplementedError("Derived classes must implement.") def _unregister_specific(self, fd, ev_is_set): """Engine-specific unregister fd method.""" raise NotImplementedError("Derived classes must implement.") def _modify_specific(self, fd, event, setvalue): """Engine-specific modify fd for event method.""" raise NotImplementedError("Derived classes must implement.") def set_events(self, client, stream): """Set the active interest events bitset for a client stream.""" self._debug("SETEV new_events:0x%x events:0x%x for %s[%s]" % \ (stream.new_events, stream.events, client, stream.name)) if not client.registered: LOGGER.debug("set_events: client %s not registered", self) return chgbits = stream.new_events ^ stream.events if chgbits == 0: return # configure interest events as appropriate for interest in (E_READ, E_WRITE): if chgbits & interest: assert stream.evmask & interest status = stream.new_events & interest self._modify_specific(stream.fd, interest, status) if status: stream.events |= interest else: stream.events &= ~interest stream.new_events = stream.events def set_reading(self, client, sname): """Set client reading state.""" # listen for readable events self.modify(client, sname, E_READ, 0) def set_writing(self, client, sname): """Set client writing state.""" # listen for writable events self.modify(client, sname, E_WRITE, 0) def add_timer(self, timer): """Add a timer instance to engine.""" timer._set_engine(self) self.timerq.schedule(timer) def remove_timer(self, timer): """Remove engine timer from engine.""" self.timerq.invalidate(timer) def fire_timers(self): """Fire expired timers for processing.""" # Only fire timers if runloop is still retained if self.evlooprefcnt > 0: # Fire once any expired timers self.timerq.fire_expired() def start_ports(self): """Start and register all port clients.""" # Ports are special, non-delayable engine clients for port in self._ports: if not port.registered: self._debug("START PORT %s" % port) self.register(port._start()) def start_clients(self): """Start and register regular engine clients in respect of fanout.""" # check if engine fanout has changed # NOTE: worker._fanout live changes not supported (see #323) fanout_diff = self.info['fanout'] - self._prev_fanout if fanout_diff: self._prev_fanout = self.info['fanout'] for client in self._clients: if not client.registered and self._can_register(client): self._debug("START CLIENT %s" % client.__class__.__name__) self.register(client._start()) # if first time or engine fanout has changed, we do a full scan if fanout_diff == 0: # if engine fanout has not changed, we only start 
1 client break def run(self, timeout): """Run engine in calling thread.""" # change to running state if self.running: raise EngineAlreadyRunningError() try: self.running = True # start port clients self.start_ports() # peek in ports for early pending messages self.snoop_ports() # start all other clients self.start_clients() # run loop until all clients and timers are removed self.runloop(timeout) except EngineTimeoutException: self.clear(did_timeout=True) raise except: # MUST use BaseException as soon as possible (py2.5+) # The game is over. exc_t, exc_val, exc_tb = sys.exc_info() try: # Close Engine clients self.clear() except: # self.clear() may still generate termination events that # may raise exceptions, overriding the other one above. # In the future, we should block new user events to avoid # that. Also, such cases could be better handled with # BaseException. For now, print a backtrace in debug to # help detect the problem. tbexc = traceback.format_exception(exc_t, exc_val, exc_tb) LOGGER.debug(''.join(tbexc)) raise raise finally: # cleanup self.timerq.clear() self.running = False self._prev_fanout = 0 def snoop_ports(self): """ Peek in ports for possible early pending messages. This method simply tries to read port pipes in non-blocking mode. """ # make a copy so that early messages on installed ports may # lead to new ports ports = self._ports.copy() for port in ports: try: port._handle_read('in') except (IOError, OSError) as ex: if ex.errno in (errno.EAGAIN, errno.EWOULDBLOCK): # no pending message return # raise any other error raise def runloop(self, timeout): """Engine-specific run loop. Derived classes must implement.""" raise NotImplementedError("Derived classes must implement.") def abort(self, kill): """Abort runloop.""" if self.running: raise EngineAbortException(kill) self.clear(clear_ports=kill) def exited(self): """Returns True if the engine has exited the runloop once.""" return not self.running and self._exited def _debug(self, s): """library engine verbose debugging hook""" #LOGGER.debug(s) pass ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/Engine/Factory.py0000644104717000001440000000504214501416555022142 0ustar00sthiellusers# # Copyright (C) 2009-2016 CEA/DAM # Copyright (C) 2016 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ Engine Factory to select the best working event engine for the current version of Python and Operating System.
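Example (an illustrative sketch only; the 'auto' hint and the contents of the
info dict are assumptions drawn from the factory code below, not documented
API):

    >>> from ClusterShell.Engine.Factory import PreferredEngine
    >>> engine = PreferredEngine('auto', {'fanout': 64})    # best available engine
    >>> engine = PreferredEngine('select', {'fanout': 64})  # force a specific engine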
""" import logging from ClusterShell.Engine.Engine import EngineNotSupportedError # Available event engines from ClusterShell.Engine.EPoll import EngineEPoll from ClusterShell.Engine.Poll import EnginePoll from ClusterShell.Engine.Select import EngineSelect class PreferredEngine(object): """ Preferred Engine selection metaclass (DP Abstract Factory). """ engines = {EngineEPoll.identifier: EngineEPoll, EnginePoll.identifier: EnginePoll, EngineSelect.identifier: EngineSelect} def __new__(cls, hint, info): """ Create a new preferred Engine. """ if not hint or hint == 'auto': # in order or preference for engine_class in [EngineEPoll, EnginePoll, EngineSelect]: try: return engine_class(info) except EngineNotSupportedError: pass raise RuntimeError("FATAL: No supported ClusterShell.Engine found") else: # User overriding engine selection engines = cls.engines.copy() try: tryengine = engines.pop(hint) while True: try: return tryengine(info) except EngineNotSupportedError: if len(engines) == 0: raise tryengine = engines.popitem()[1] except KeyError: msg = "Invalid engine identifier: %s" % hint logging.getLogger(__name__).error(msg) raise ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/Engine/Poll.py0000644104717000001440000001606714501416555021452 0ustar00sthiellusers# # Copyright (C) 2007-2016 CEA/DAM # Copyright (C) 2016 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ A poll() based ClusterShell Engine. The poll() system call is available on Linux and BSD. """ import errno import logging import select import time from ClusterShell.Engine.Engine import Engine, E_READ, E_WRITE from ClusterShell.Engine.Engine import EngineException from ClusterShell.Engine.Engine import EngineNotSupportedError from ClusterShell.Engine.Engine import EngineTimeoutException from ClusterShell.Worker.EngineClient import EngineClientEOF class EnginePoll(Engine): """ Poll Engine ClusterShell engine using the select.poll mechanism (Linux poll() syscall). """ identifier = "poll" def __init__(self, info): """ Initialize Engine. """ Engine.__init__(self, info) try: # get a polling object self.polling = select.poll() except AttributeError: raise EngineNotSupportedError(EnginePoll.identifier) def _register_specific(self, fd, event): """Engine-specific fd registering. Called by Engine register.""" if event & E_READ: eventmask = select.POLLIN else: assert event & E_WRITE eventmask = select.POLLOUT self.polling.register(fd, eventmask) def _unregister_specific(self, fd, ev_is_set): if ev_is_set: self.polling.unregister(fd) def _modify_specific(self, fd, event, setvalue): """ Engine-specific modifications after a interesting event change for a file descriptor. Called automatically by Engine register/unregister and set_events(). 
For the poll() engine, it reg/unreg or modifies the event mask associated to a file descriptor. """ self._debug("MODSPEC fd=%d event=%x setvalue=%d" % (fd, event, setvalue)) if setvalue: self._register_specific(fd, event) else: self.polling.unregister(fd) def runloop(self, timeout): """ Poll engine run(): start clients and properly get replies """ if not timeout: timeout = -1 start_time = time.time() # run main event loop... while self.evlooprefcnt > 0: self._debug("LOOP evlooprefcnt=%d (reg_clifds=%s) (timers=%d)" \ % (self.evlooprefcnt, self.reg_clifds.keys(), \ len(self.timerq))) try: timeo = self.timerq.nextfire_delay() if timeout > 0 and timeo >= timeout: # task timeout may invalidate clients timeout self.timerq.clear() timeo = timeout elif timeo == -1: timeo = timeout self._current_loopcnt += 1 if timeo < 0: poll_timeo = -1 else: poll_timeo = timeo * 1000.0 evlist = self.polling.poll(poll_timeo) except select.error as ex: # might get interrupted by a signal if ex.args[0] == errno.EINTR: continue elif ex.args[0] == errno.EINVAL: msg = "Increase RLIMIT_NOFILE?" logging.getLogger(__name__).error(msg) raise for fd, event in evlist: if event & select.POLLNVAL: raise EngineException("Caught POLLNVAL on fd %d" % fd) # get client instance client, stream = self._fd2client(fd) if client is None: continue fdev = stream.evmask sname = stream.name # process this stream self._current_stream = stream # check for poll error condition of some sort if event & select.POLLERR: self._debug("POLLERR %s" % client) assert fdev & E_WRITE self._debug("POLLERR: remove_stream sname %s fdev 0x%x" % (sname, fdev)) self.remove_stream(client, stream) self._current_stream = None continue # check for data to read if event & select.POLLIN: assert fdev & E_READ assert stream.events & fdev, (stream.events, fdev) self.modify(client, sname, 0, fdev) try: client._handle_read(sname) except EngineClientEOF: self._debug("EngineClientEOF %s %s" % (client, sname)) self.remove_stream(client, stream) self._current_stream = None continue # or check for end of stream (do not handle both at the same # time because handle_read() may perform a partial read) elif event & select.POLLHUP: self._debug("POLLHUP fd=%d %s (%s)" % (fd, client.__class__.__name__, client.streams)) self.remove_stream(client, stream) self._current_stream = None continue # check for writing if event & select.POLLOUT: self._debug("POLLOUT fd=%d %s (%s)" % (fd, client.__class__.__name__, client.streams)) assert fdev == E_WRITE assert stream.events & fdev self.modify(client, sname, 0, fdev) client._handle_write(sname) self._current_stream = None # apply any changes occurred during processing if client.registered: self.set_events(client, stream) # check for task runloop timeout if timeout > 0 and time.time() >= start_time + timeout: raise EngineTimeoutException() # process clients timeout self.fire_timers() self._debug("LOOP EXIT evlooprefcnt=%d (reg_clifds=%s) (timers=%d)" % \ (self.evlooprefcnt, self.reg_clifds, len(self.timerq))) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/Engine/Select.py0000644104717000001440000001457414501416555021764 0ustar00sthiellusers# # Copyright (C) 2009-2016 CEA/DAM # Copyright (C) 2009-2012 Henri Doreau # Copyright (C) 2009-2012 Aurelien Degremont # Copyright (C) 2016 Stephane Thiell # # This file is part of ClusterShell. 
# # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ A select() based ClusterShell Engine. The select() system call is available on almost every UNIX-like system. """ import errno import logging import select import sys import time from ClusterShell.Engine.Engine import Engine, E_READ, E_WRITE from ClusterShell.Engine.Engine import EngineTimeoutException from ClusterShell.Worker.EngineClient import EngineClientEOF class EngineSelect(Engine): """ Select Engine ClusterShell engine using the select.select mechanism """ identifier = "select" def __init__(self, info): """ Initialize Engine. """ Engine.__init__(self, info) self._fds_r = [] self._fds_w = [] def _register_specific(self, fd, event): """ Engine-specific fd registering. Called by Engine register. """ if event & E_READ: self._fds_r.append(fd) else: assert event & E_WRITE self._fds_w.append(fd) def _unregister_specific(self, fd, ev_is_set): """ Engine-specific fd unregistering. Called by Engine unregister. """ if ev_is_set or True: if fd in self._fds_r: self._fds_r.remove(fd) if fd in self._fds_w: self._fds_w.remove(fd) def _modify_specific(self, fd, event, setvalue): """ Engine-specific modifications after an interesting event change for a file descriptor. Called automatically by Engine register/unregister and set_events(). For the select() engine, it appends/removes the fd to/from the concerned fd_sets. """ self._debug("MODSPEC fd=%d event=%x setvalue=%d" % (fd, event, setvalue)) if setvalue: self._register_specific(fd, event) else: self._unregister_specific(fd, True) def runloop(self, timeout): """ Select engine run(): start clients and properly get replies """ if not timeout: timeout = -1 start_time = time.time() # run main event loop... while self.evlooprefcnt > 0: self._debug("LOOP evlooprefcnt=%d (reg_clifds=%s) (timers=%d)" % (self.evlooprefcnt, self.reg_clifds.keys(), len(self.timerq))) try: timeo = self.timerq.nextfire_delay() if timeout > 0 and timeo >= timeout: # task timeout may invalidate clients timeout self.timerq.clear() timeo = timeout elif timeo == -1: timeo = timeout self._current_loopcnt += 1 if timeo >= 0: r_ready, w_ready, x_ready = \ select.select(self._fds_r, self._fds_w, [], timeo) else: # no timeout specified, do not supply the timeout argument r_ready, w_ready, x_ready = \ select.select(self._fds_r, self._fds_w, []) except select.error as ex: # might get interrupted by a signal if ex.args[0] == errno.EINTR: continue elif ex.args[0] in (errno.EINVAL, errno.EBADF, errno.ENOMEM): msg = "Increase RLIMIT_NOFILE?"
logging.getLogger(__name__).error(msg) raise # iterate over fd on which events occurred for fd in set(r_ready) | set(w_ready): # get client instance client, stream = self._fd2client(fd) if client is None: continue fdev = stream.evmask sname = stream.name # process this stream self._current_stream = stream # check for possible unblocking read on this fd if fd in r_ready: self._debug("R_READY fd=%d %s (%s)" % (fd, client.__class__.__name__, client.streams)) assert fdev & E_READ assert stream.events & fdev self.modify(client, sname, 0, fdev) try: client._handle_read(sname) except EngineClientEOF: self._debug("EngineClientEOF %s" % client) self.remove_stream(client, stream) # check for writing if fd in w_ready: self._debug("W_READY fd=%d %s (%s)" % (fd, client.__class__.__name__, client.streams)) assert fdev == E_WRITE assert stream.events & fdev self.modify(client, sname, 0, fdev) client._handle_write(sname) # post processing self._current_stream = None # apply any changes occurred during processing if client.registered: self.set_events(client, stream) # check for task runloop timeout if timeout > 0 and time.time() >= start_time + timeout: raise EngineTimeoutException() # process clients timeout self.fire_timers() self._debug("LOOP EXIT evlooprefcnt=%d (reg_clifds=%s) (timers=%d)" % (self.evlooprefcnt, self.reg_clifds, len(self.timerq))) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/Engine/__init__.py0000644104717000001440000000000014501416555022257 0ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/Event.py0000644104717000001440000001420614501416555020411 0ustar00sthiellusers# # Copyright (C) 2007-2015 CEA/DAM # Copyright (C) 2015-2022 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ ClusterShell Event handling. This module contains the base class :class:`.EventHandler` which defines a simple interface to handle events generated by :class:`.Worker`, :class:`.EngineTimer` and :class:`.EnginePort` objects. """ class EventHandler(object): """ClusterShell EventHandler interface. Derived class should implement any of the following methods to listen for :class:`.Worker`, :class:`.EnginePort` or :class:`.EngineTimer` events. If not implemented, the default behavior is to do nothing. NOTE: ``ev_timeout(self, worker)`` was removed from this class definition in ClusterShell 1.9. For compatibility, it is still called if defined by subclasses. Use ``ev_close()`` instead and check whether its argument ``timedout`` is ``True``, which means that the :class:`.Worker` has timed out. """ ### Worker events def ev_start(self, worker): """ Called to indicate that a worker has just started. 
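        A minimal subclass sketch (illustrative only; the handler class name
        is hypothetical):

        >>> class StartPrinter(EventHandler):
        ...     def ev_start(self, worker):
        ...         print("worker started: %s" % worker)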
:param worker: :class:`.Worker` derived object """ def ev_pickup(self, worker, node): """ Called for each node to indicate that a worker command for a specific node (or key) has just started. .. warning:: The signature of :meth:`EventHandler.ev_pickup` changed in ClusterShell 1.8, please update your :class:`.EventHandler` derived classes and add the node argument. *New in version 1.7.* :param worker: :class:`.Worker` derived object :param node: node (or key) """ def ev_read(self, worker, node, sname, msg): """ Called to indicate that a worker has data to read from a specific node (or key). .. warning:: The signature of :meth:`EventHandler.ev_read` changed in ClusterShell 1.8, please update your :class:`.EventHandler` derived classes and add the node, sname and msg arguments. :param worker: :class:`.Worker` derived object :param node: node (or key) :param sname: stream name :param msg: message """ def ev_error(self, worker): """ Called to indicate that a worker has error output to read on stderr from a specific node (or key). [DEPRECATED] use ev_read instead and test if sname is 'stderr' :param worker: :class:`.Worker` object Available worker attributes: * :attr:`.Worker.current_node` - node (or key) * :attr:`.Worker.current_errmsg` - read error message """ def ev_written(self, worker, node, sname, size): """ Called to indicate that some writing has been done by the worker to a node on a given stream. This event is only generated when ``write()`` is previously called on the worker. This handler may be called very often depending on the number of target nodes, the amount of data to write and the block size used by the worker. *New in version 1.7.* :param worker: :class:`.Worker` derived object :param node: node (or key) :param sname: stream name :param size: amount of bytes that has just been written to node/stream associated with this event """ def ev_hup(self, worker, node, rc): """ Called for each node to indicate that a worker command for a specific node has just finished. .. warning:: The signature of :meth:`EventHandler.ev_hup` changed in ClusterShell 1.8, please update your :class:`.EventHandler` derived classes to add the node and rc arguments. :param worker: :class:`.Worker` derived object :param node: node (or key) :param rc: command return code (or None if the worker doesn't support command return codes) """ def ev_close(self, worker, timedout): """ Called to indicate that a worker has just finished. .. warning:: The signature of :meth:`EventHandler.ev_close` changed in ClusterShell 1.8, please update your :class:`.EventHandler` derived classes to add the timedout argument. Please use this argument instead of the old method ``ev_timeout()``. :param worker: :class:`.Worker` derived object :param timedout: boolean set to True if the worker has timed out """ def _ev_routing(self, worker, arg): """ Routing event (private). Called to indicate that a (meta)worker has just updated one of its route paths. You can safely ignore this event. """ ### EnginePort events def ev_port_start(self, port): """ Called to indicate that a :class:`.EnginePort` object has just started. :param port: :class:`.EnginePort` derived object """ def ev_msg(self, port, msg): """ Called to indicate that a message has been received on an :class:`.EnginePort`. Used to deliver messages reliably between tasks. :param port: :class:`.EnginePort` object on which a message has been received :param msg: the message object received """ ### EngineTimer events def ev_timer(self, timer): """ Called to indicate that a timer is firing.
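        A short sketch (illustrative only; it mirrors the positional
        ``task.timer(fire_delay, handler, interval)`` call used by
        TreeWorkerResponder in Gateway.py below):

        >>> from ClusterShell.Task import task_self
        >>> class Beeper(EventHandler):
        ...     def ev_timer(self, timer):
        ...         print("beep")
        >>> timer = task_self().timer(1.0, Beeper(), 2.0)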
:param timer: :class:`.EngineTimer` object that is firing """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/Gateway.py0000755104717000001440000003473214501416555020742 0ustar00sthiellusers# # Copyright (C) 2010-2016 CEA/DAM # Copyright (C) 2010-2011 Henri Doreau # Copyright (C) 2015-2017 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ ClusterShell agent launched on remote gateway nodes. This script reads messages on stdin via the SSH connection, interprets them, takes decisions, and prints out replies on stdout. """ import logging import os import sys import traceback from ClusterShell.Event import EventHandler from ClusterShell.NodeSet import NodeSet from ClusterShell.Task import task_self, _getshorthostname from ClusterShell.Engine.Engine import EngineAbortException from ClusterShell.Worker.fastsubprocess import set_nonblock_flag from ClusterShell.Worker.Worker import StreamWorker, FANOUT_UNLIMITED from ClusterShell.Worker.Tree import TreeWorker from ClusterShell.Communication import Channel, ConfigurationMessage, \ ControlMessage, ACKMessage, ErrorMessage, StartMessage, EndMessage, \ StdOutMessage, StdErrMessage, RetcodeMessage, TimeoutMessage, \ MessageProcessingError def _gw_print_debug(task, line): """Default gateway task debug printing function""" logging.getLogger(__name__).debug(line) def gateway_excepthook(exc_type, exc_value, tb): """ Default excepthook for Gateway to redirect any unhandled exception to logger instead of stderr. 
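    It is installed in gateway_main() below via
    ``sys.excepthook = gateway_excepthook``.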
""" tbexc = traceback.format_exception(exc_type, exc_value, tb) logging.getLogger(__name__).error(''.join(tbexc)) class TreeWorkerResponder(EventHandler): """Gateway TreeWorker handler""" def __init__(self, task, gwchan, srcwkr): EventHandler.__init__(self) self.gwchan = gwchan # gateway channel self.srcwkr = srcwkr # id of distant parent TreeWorker self.worker = None # local TreeWorker instance self.retcodes = {} # self-managed retcodes self.logger = logging.getLogger(__name__) # Grooming initialization self.timer = None qdelay = task.info("grooming_delay") if qdelay > 1.0e-3: # Enable messages and rc grooming - enable msgtree (#181) task.set_default("stdout_msgtree", True) task.set_default("stderr_msgtree", True) # create auto-closing timer object for grooming self.timer = task.timer(qdelay, self, qdelay, autoclose=True) self.logger.debug("TreeWorkerResponder initialized grooming=%f", qdelay) def ev_start(self, worker): self.logger.debug("TreeWorkerResponder: ev_start") self.worker = worker def ev_timer(self, timer): """perform gateway traffic grooming""" if not self.worker: return logger = self.logger # check for grooming opportunities for stdout/stderr for msg_elem, nodes in self.worker.iter_errors(): logger.debug("iter(stderr): %s: %d bytes", nodes, len(msg_elem.message())) self.gwchan.send(StdErrMessage(nodes, msg_elem.message(), self.srcwkr)) for msg_elem, nodes in self.worker.iter_buffers(): logger.debug("iter(stdout): %s: %d bytes", nodes, len(msg_elem.message())) self.gwchan.send(StdOutMessage(nodes, msg_elem.message(), self.srcwkr)) # empty internal MsgTree buffers self.worker.flush_buffers() self.worker.flush_errors() # specifically manage retcodes to periodically return latest # retcodes to parent node, instead of doing it at ev_hup (no msg # aggregation) or at ev_close (no parent node live updates) for rc, nodes in self.retcodes.items(): self.logger.debug("iter(rc): %s: rc=%d", nodes, rc) self.gwchan.send(RetcodeMessage(nodes, rc, self.srcwkr)) self.retcodes.clear() def ev_read(self, worker, node, sname, msg): """message received""" if sname == worker.SNAME_STDOUT: msg_class = StdOutMessage elif sname == worker.SNAME_STDERR: msg_class = StdErrMessage self.logger.debug("TreeWorkerResponder: ev_error %s %s", node, msg) if self.timer is None: self.gwchan.send(msg_class(node, msg, self.srcwkr)) def ev_hup(self, worker, node, rc): """Received end of command from one node""" if self.timer is None: self.gwchan.send(RetcodeMessage(node, rc, self.srcwkr)) else: # retcode grooming if rc in self.retcodes: self.retcodes[rc].add(node) else: self.retcodes[rc] = NodeSet(node) def ev_close(self, worker, timedout): """End of CTL responder""" self.logger.debug("TreeWorkerResponder: ev_close timedout=%s", timedout) if timedout: # some nodes did timeout msg = TimeoutMessage(NodeSet._fromlist1(worker.iter_keys_timeout()), self.srcwkr) self.gwchan.send(msg) if self.timer is not None: # finalize grooming self.ev_timer(None) self.timer.invalidate() class GatewayChannel(Channel): """high level logic for gateways""" def __init__(self, task): Channel.__init__(self) self.task = task self.nodename = None self.topology = None self.propagation = None self.logger = logging.getLogger(__name__) def start(self): """initialization""" # prepare communication self._init() self.logger.debug('ready to accept channel communication') def close(self): """close gw channel""" self.logger.debug('closing gateway channel') self._close() def recv(self, msg): """handle incoming message""" try: self.logger.debug('handling 
incoming message: %s', str(msg)) if msg.type == EndMessage.ident: self.logger.debug('recv: got EndMessage') self._close() elif self.setup: self.recv_ctl(msg) elif self.opened: self.recv_cfg(msg) elif msg.type == StartMessage.ident: self.logger.debug('got start message %s', msg) self.opened = True self._open() self.logger.debug('channel started (version %s on remote end)', self._xml_reader.version) else: self.logger.error('unexpected message: %s', str(msg)) raise MessageProcessingError('unexpected message: %s' % msg) except MessageProcessingError as ex: self.logger.error('on recv(): %s', str(ex)) self.send(ErrorMessage(str(ex))) self._close() except EngineAbortException: # gateway task abort: don't handle like other exceptions raise except Exception as ex: self.logger.exception('on recv(): %s', str(ex)) self.send(ErrorMessage(str(ex))) self._close() def recv_cfg(self, msg): """receive cfg/topology configuration""" if msg.type != ConfigurationMessage.ident: raise MessageProcessingError('unexpected message: %s' % msg) self.logger.debug('got channel configuration') # gw node name hostname = _getshorthostname() if not msg.gateway: self.nodename = hostname self.logger.warn('gw name not provided, using system hostname %s', self.nodename) else: self.nodename = msg.gateway self.logger.debug('using gateway node name %s', self.nodename) if self.nodename.lower() != hostname.lower(): self.logger.debug('gw name %s does not match system hostname %s', self.nodename, hostname) # topology task_self().topology = self.topology = msg.data_decode() self.logger.debug('decoded propagation tree') self.logger.debug('\n%s', self.topology) self.setup = True self._ack(msg) def recv_ctl(self, msg): """receive control message with actions to perform""" if msg.type == ControlMessage.ident: self.logger.debug('GatewayChannel._state_ctl') if msg.action == 'shell': data = msg.data_decode() cmd = data['cmd'] stderr = data['stderr'] timeout = data['timeout'] remote = data['remote'] #self.propagation.invoke_gateway = data['invoke_gateway'] self.logger.debug('decoded gw invoke (%s)', data['invoke_gateway']) taskinfo = data['taskinfo'] self.logger.debug('assigning task infos (%s)', data['taskinfo']) task = task_self() task._info.update(taskinfo) task.set_info('print_debug', _gw_print_debug) for infokey in taskinfo: if infokey.startswith('tree_default:'): self.logger.debug('Setting default %s to %s', infokey[13:], taskinfo[infokey]) task.set_default(infokey[13:], taskinfo[infokey]) if task.info('debug'): self.logger.setLevel(logging.DEBUG) self.logger.debug('inherited fanout value=%d', task.info("fanout")) self.logger.debug('launching execution/enter gathering state') responder = TreeWorkerResponder(task, self, msg.srcid) self.propagation = TreeWorker(msg.target, responder, timeout, command=cmd, topology=self.topology, newroot=self.nodename, stderr=stderr, remote=remote) # FIXME ev_start-not-called workaround responder.worker = self.propagation self.propagation.upchannel = self task.schedule(self.propagation) self.logger.debug("TreeWorker scheduled") self._ack(msg) elif msg.action == 'write': data = msg.data_decode() self.logger.debug('GatewayChannel write: %d bytes', len(data['buf'])) self.propagation.write(data['buf']) self._ack(msg) elif msg.action == 'eof': self.logger.debug('GatewayChannel eof') self.propagation.set_write_eof() self._ack(msg) else: self.logger.error('unexpected CTL action: %s', msg.action) else: self.logger.error('unexpected message: %s', str(msg)) def _ack(self, msg): """acknowledge a received message""" 
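        # ACKMessage carries only the acknowledged message id (msg.msgid), so
        # the parent end of the channel can match this reply to the request it
        # sent; see the self._ack(msg) calls in recv_cfg() and recv_ctl() above.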
self.send(ACKMessage(msg.msgid)) def ev_close(self, worker, timedout): """Gateway (parent) channel is closing. We abort the whole gateway task to stop other running workers. This avoids any unwanted remaining processes on gateways. """ self.logger.debug('GatewayChannel: ev_close') self.worker.task.abort() def gateway_main(): """ClusterShell gateway entry point""" host = _getshorthostname() # configure root logger logdir = os.path.expanduser(os.environ.get('CLUSTERSHELL_GW_LOG_DIR', '/tmp')) loglevel = os.environ.get('CLUSTERSHELL_GW_LOG_LEVEL', 'INFO') try: log_level = getattr(logging, loglevel.upper(), logging.INFO) log_fmt = '%(asctime)s %(name)s %(levelname)s %(message)s' logging.basicConfig(level=log_level, format=log_fmt, filename=os.path.join(logdir, "%s.gw.log" % host)) except (IOError, OSError): pass # logging failure is not fatal logger = logging.getLogger(__name__) sys.excepthook = gateway_excepthook if sys.stdin is None: logger.critical('Gateway failure: sys.stdin is None') sys.exit(1) if sys.stdin.isatty(): logger.critical('Gateway failure: sys.stdin.isatty() is True') sys.exit(1) logger.debug('Starting gateway on %s', host) logger.debug("environ=%s", os.environ) set_nonblock_flag(sys.stdin.fileno()) set_nonblock_flag(sys.stdout.fileno()) set_nonblock_flag(sys.stderr.fileno()) task = task_self() # Disable MsgTree buffering, it is enabled later when needed task.set_default("stdout_msgtree", False) task.set_default("stderr_msgtree", False) gateway = GatewayChannel(task) worker = StreamWorker(handler=gateway) # Define worker._fanout to not rely on the engine's fanout, and use # the special value FANOUT_UNLIMITED to always allow registration worker._fanout = FANOUT_UNLIMITED worker.set_reader(gateway.SNAME_READER, sys.stdin) worker.set_writer(gateway.SNAME_WRITER, sys.stdout, retain=False) # must stay disabled for now (see #274) #worker.set_writer(gateway.SNAME_ERROR, sys.stderr, retain=False) task.schedule(worker) logger.debug('Starting task') try: task.resume() logger.debug('Task performed') except EngineAbortException as exc: logger.debug('EngineAbortException') except IOError as exc: logger.debug('Broken pipe (%s)', exc) raise except Exception as exc: logger.exception('Gateway failure: %s', exc) logger.debug('-------- The End --------') if __name__ == '__main__': __name__ = 'ClusterShell.Gateway' # To enable gateway profiling: #import cProfile #cProfile.run('gateway_main()', '/tmp/gwprof') gateway_main() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/lib/ClusterShell/MsgTree.py0000644104717000001440000003056714505632065020706 0ustar00sthiellusers# # Copyright (C) 2007-2016 CEA/DAM # Copyright (C) 2016-2017 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. 
# # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ MsgTree ClusterShell message tree module. The purpose of MsgTree is to provide a shared message tree for storing message lines received from ClusterShell Workers (for example, from remote cluster commands). It should be efficient, in terms of algorithm and memory consumption, especially when remote messages are the same. """ try: from itertools import filterfalse except ImportError: # Python 2 compat from itertools import ifilterfalse as filterfalse import sys # MsgTree behavior modes MODE_DEFER = 0 MODE_SHIFT = 1 MODE_TRACE = 2 class MsgTreeElem(object): """ Class representing an element of the MsgTree and its associated message. Objects of this class are returned by the various MsgTree methods like messages() or walk(). The object can then be used as an iterator over the message lines or cast into a bytes buffer. """ def __init__(self, msgline=None, parent=None, trace=False): """ Initialize message tree element. """ # structure self.parent = parent self.children = {} if trace: # special behavior for trace mode self._shift = self._shift_trace else: self._shift = self._shift_notrace # content self.msgline = msgline self.keys = None def __len__(self): """Length of whole message buffer.""" return len(bytes(self)) def __eq__(self, other): """Comparison method compares whole message buffers.""" return bytes(self) == bytes(other) def _add_key(self, key): """Add a key to this tree element.""" if self.keys is None: self.keys = set([key]) else: self.keys.add(key) def _shift_notrace(self, key, target_elem): """Shift one of our keys to specified target element.""" if self.keys and len(self.keys) == 1: shifting = self.keys self.keys = None else: shifting = set([key]) if self.keys: self.keys.difference_update(shifting) if not target_elem.keys: target_elem.keys = shifting else: target_elem.keys.update(shifting) return target_elem def _shift_trace(self, key, target_elem): """Shift one of our keys to specified target element (trace mode: keep backtrace of keys).""" if not target_elem.keys: target_elem.keys = set([key]) else: target_elem.keys.add(key) return target_elem def __getitem__(self, i): return list(self.lines())[i] def __iter__(self): """Iterate over message lines up to this element.""" bottomtop = [] if self.msgline is not None: bottomtop.append(self.msgline) parent = self.parent while parent.msgline is not None: bottomtop.append(parent.msgline) parent = parent.parent return reversed(bottomtop) def lines(self): """Get an iterator over all message lines up to this element.""" return iter(self) splitlines = lines def message(self): """ Get the whole message buffer (from this tree element) as bytes. """ return b'\n'.join(self.lines()) __bytes__ = message def __str__(self): """ Get the whole message buffer (from this tree element) as a string. DEPRECATED: use message() or cast to bytes instead. """ if sys.version_info >= (3, 0): raise TypeError('cannot get string from %s, use bytes instead' % self.__class__.__name__) else: # in Python 2, str and bytes are actually the same type return self.message() def append(self, msgline, key=None): """ A new message is coming, append it to the tree element with optional associated source key. Called by MsgTree.add(). Return corresponding MsgTreeElem (possibly newly created).
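        Note: children elements are keyed by message line, so two keys that
        append the same line share a single child element; this sharing is
        what keeps memory consumption low when many nodes return identical
        output.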
""" # get/create child element elem = self.children.get(msgline) if elem is None: elem = self.__class__(msgline, self, self._shift == self._shift_trace) self.children[msgline] = elem # if no key is given, MsgTree is in MODE_DEFER # shift down the given key otherwise # Note: replace with ternary operator in py2.5+ if key is None: return elem else: return self._shift(key, elem) class MsgTree(object): """ MsgTree maps key objects to multi-lines messages. MsgTree is a mutable object. Keys are almost arbitrary values (must be hashable). Message lines are organized as a tree internally. MsgTree provides low memory consumption especially on a cluster when all nodes return similar messages. Also, the gathering of messages is done automatically. """ def __init__(self, mode=MODE_DEFER): """MsgTree initializer The *mode* parameter should be set to one of the following constant: MODE_DEFER: all messages are processed immediately, saving memory from duplicate message lines, but keys are associated to tree elements usually later when tree is first "walked", saving useless state updates and CPU time. Once the tree is "walked" for the first time, its mode changes to MODE_SHIFT to keep track of further tree updates. This is the default mode. MODE_SHIFT: all keys and messages are processed immediately, it is more CPU time consuming as MsgTree full state is updated at each add() call. MODE_TRACE: all keys and messages and processed immediately, and keys are kept for each message element of the tree. The special method walk_trace() is then available to walk all elements of the tree. """ self.mode = mode # root element of MsgTree self._root = MsgTreeElem(trace=(mode == MODE_TRACE)) # dict of keys to MsgTreeElem self._keys = {} def clear(self): """Remove all items from the MsgTree.""" self._root = MsgTreeElem(trace=(self.mode == MODE_TRACE)) self._keys.clear() def __len__(self): """Return the number of keys contained in the MsgTree.""" return len(self._keys) def __getitem__(self, key): """Return the message of MsgTree with specified key. Raises a KeyError if key is not in the MsgTree.""" return self._keys[key] def get(self, key, default=None): """ Return the message for key if key is in the MsgTree, else default. If default is not given, it defaults to None, so that this method never raises a KeyError. """ return self._keys.get(key, default) def add(self, key, msgline): """ Add a message line (in bytes) associated with the given key to the MsgTree. """ # try to get current element in MsgTree for the given key, # defaulting to the root element e_msg = self._keys.get(key, self._root) if self.mode >= MODE_SHIFT: key_shift = key else: key_shift = None # add child msg and update keys dict self._keys[key] = e_msg.append(msgline, key_shift) def _update_keys(self): """Update keys associated to tree elements (MODE_DEFER).""" for key, e_msg in self._keys.items(): assert key is not None and e_msg is not None e_msg._add_key(key) # MODE_DEFER is no longer valid as keys are now assigned to MsgTreeElems self.mode = MODE_SHIFT def keys(self): """Return an iterator over MsgTree's keys.""" return iter(self._keys.keys()) __iter__ = keys def messages(self, match=None): """Return an iterator over MsgTree's messages.""" return (item[0] for item in self.walk(match)) def items(self, match=None, mapper=None): """ Return (key, message) for each key of the MsgTree. 
""" if mapper is None: mapper = lambda k: k for key, elem in self._keys.items(): if match is None or match(key): yield mapper(key), elem def _depth(self): """ Return the depth of the MsgTree, ie. the max number of lines per message. Added for debugging. """ depth = 0 # stack of (element, depth) tuples used to walk the tree estack = [(self._root, depth)] while estack: elem, edepth = estack.pop() if len(elem.children) > 0: estack += [(v, edepth + 1) for v in elem.children.values()] depth = max(depth, edepth) return depth def walk(self, match=None, mapper=None): """ Walk the tree. Optionally filter keys on match parameter, and optionally map resulting keys with mapper function. Return an iterator over (message, keys) tuples for each different message in the tree. """ if self.mode == MODE_DEFER: self._update_keys() # stack of elements used to walk the tree (depth-first) estack = [self._root] while estack: elem = estack.pop() children = elem.children if len(children) > 0: estack += children.values() if elem.keys: # has some keys mkeys = list(filter(match, elem.keys)) if len(mkeys): if mapper is not None: keys = [mapper(key) for key in mkeys] else: keys = mkeys yield elem, keys def walk_trace(self, match=None, mapper=None): """ Walk the tree in trace mode. Optionally filter keys on match parameter, and optionally map resulting keys with mapper function. Return an iterator over 4-length tuples (msgline, keys, depth, num_children). """ assert self.mode == MODE_TRACE, \ "walk_trace() is only callable in trace mode" # stack of (element, depth) tuples used to walk the tree estack = [(self._root, 0)] while estack: elem, edepth = estack.pop() children = elem.children nchildren = len(children) if nchildren > 0: estack += [(v, edepth + 1) for v in children.values()] if elem.keys: mkeys = list(filter(match, elem.keys)) if len(mkeys): if mapper is not None: keys = [mapper(key) for key in mkeys] else: keys = mkeys yield elem.msgline, keys, edepth, nchildren def remove(self, match=None): """ Modify the tree by removing any matching key references from the messages tree. Example of use: >>> msgtree.remove(lambda k: k > 3) """ # do not walk tree in MODE_DEFER as no key is associated if self.mode != MODE_DEFER: estack = [self._root] # walk the tree to keep only matching keys while estack: elem = estack.pop() if len(elem.children) > 0: estack += elem.children.values() if elem.keys: # has some keys elem.keys = set(filterfalse(match, elem.keys)) # remove key(s) from known keys dict for key in list(filter(match, self._keys.keys())): del self._keys[key] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/lib/ClusterShell/NodeSet.py0000644104717000001440000017054514505632065020702 0ustar00sthiellusers# # Copyright (C) 2007-2016 CEA/DAM # Copyright (C) 2007-2017 Aurelien Degremont # Copyright (C) 2015-2017 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. 
# # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ Cluster node set module. A module to efficiently deal with node sets and node groups. Instances of NodeSet provide similar operations than the builtin set() type, see http://www.python.org/doc/lib/set-objects.html Usage example ============= >>> # Import NodeSet class ... from ClusterShell.NodeSet import NodeSet >>> >>> # Create a new nodeset from string ... nodeset = NodeSet("cluster[1-30]") >>> # Add cluster32 to nodeset ... nodeset.update("cluster32") >>> # Remove from nodeset ... nodeset.difference_update("cluster[2-5,8-31]") >>> # Print nodeset as a pdsh-like pattern ... print nodeset cluster[1,6-7,32] >>> # Iterate over node names in nodeset ... for node in nodeset: ... print node cluster1 cluster6 cluster7 cluster32 """ import fnmatch import re import string import sys from ClusterShell.Defaults import config_paths, DEFAULTS import ClusterShell.NodeUtils as NodeUtils # Import all RangeSet module public objects from ClusterShell.RangeSet import RangeSet, RangeSetND, AUTOSTEP_DISABLED from ClusterShell.RangeSet import RangeSetParseError # Python 3 compatibility try: basestring except NameError: basestring = str # Define default GroupResolver object used by NodeSet DEF_GROUPS_CONFIGS = config_paths('groups.conf') ILLEGAL_GROUP_CHARS = set("@,!&^*") _DEF_RESOLVER_STD_GROUP = NodeUtils.GroupResolverConfig(DEF_GROUPS_CONFIGS, ILLEGAL_GROUP_CHARS) # Standard group resolver RESOLVER_STD_GROUP = _DEF_RESOLVER_STD_GROUP # Special constants for NodeSet's resolver parameter # RESOLVER_NOGROUP => avoid any group resolution at all # RESOLVER_NOINIT => reserved use for optimized copy() RESOLVER_NOGROUP = -1 RESOLVER_NOINIT = -2 # 1.5 compat (deprecated) STD_GROUP_RESOLVER = RESOLVER_STD_GROUP NOGROUP_RESOLVER = RESOLVER_NOGROUP class NodeSetException(Exception): """Base NodeSet exception class.""" class NodeSetError(NodeSetException): """Raised when an error is encountered.""" class NodeSetParseError(NodeSetError): """Raised when NodeSet parsing cannot be done properly.""" def __init__(self, part, msg): if part: msg = "%s: \"%s\"" % (msg, part) NodeSetError.__init__(self, msg) # faulty part; this allows you to target the error self.part = part class NodeSetParseRangeError(NodeSetParseError): """Raised when bad range is encountered during NodeSet parsing.""" def __init__(self, rset_exc): NodeSetParseError.__init__(self, str(rset_exc), "bad range") class NodeSetExternalError(NodeSetError): """Raised when an external error is encountered.""" class NodeSetBase(object): """ Base class for NodeSet. This class allows node set base object creation from specified string pattern and rangeset object. If optional copy_rangeset boolean flag is set to True (default), provided rangeset object is copied (if needed), otherwise it may be referenced (should be seen as an ownership transfer upon creation). This class implements core node set arithmetic (no string parsing here). 
Example: >>> nsb = NodeSetBase('node%s-ipmi', RangeSet('1-5,7'), False) >>> str(nsb) 'node[1-5,7]-ipmi' >>> nsb = NodeSetBase('node%s-ib%s', RangeSetND([['1-5,7', '1-2']]), False) >>> str(nsb) 'node[1-5,7]-ib[1-2]' """ def __init__(self, pattern=None, rangeset=None, copy_rangeset=True, autostep=None, fold_axis=None): """New NodeSetBase object initializer""" self._autostep = autostep self._length = 0 self._patterns = {} self.fold_axis = fold_axis #: iterable over nD 0-indexed axis if self.fold_axis is None and DEFAULTS.fold_axis: self.fold_axis = DEFAULTS.fold_axis # non-empty tuple if pattern: self._add(pattern, rangeset, copy_rangeset) elif rangeset: raise ValueError("missing pattern") def get_autostep(self): """Get autostep value (property)""" return self._autostep def set_autostep(self, val): """Set autostep value (property)""" if val is None: self._autostep = None else: # Work around the pickling issue of sys.maxint (+inf) in py2.4 self._autostep = min(int(val), AUTOSTEP_DISABLED) # Update our RangeSet/RangeSetND objects for pat, rset in self._patterns.items(): if rset: rset.autostep = self._autostep autostep = property(get_autostep, set_autostep) def _iter(self): """Iterator on internal item tuples (pattern, indexes, autostep).""" for pat, rset in sorted(self._patterns.items()): if rset: autostep = rset.autostep if rset.dim() == 1: assert isinstance(rset, RangeSet) for idx in rset: yield pat, (idx,), autostep else: for rvec in rset: yield pat, rvec, autostep else: yield pat, None, None def _iterbase(self): """Iterator on single, one-item NodeSetBase objects.""" for pat, ivec, autostep in self._iter(): rset = None # 'no node index' by default if ivec is not None: assert len(ivec) > 0 if len(ivec) == 1: rset = RangeSet.fromone(ivec[0], autostep) else: rset = RangeSetND([ivec], autostep) yield NodeSetBase(pat, rset) def __iter__(self): """Iterator on single nodes as string.""" # Does not call self._iterbase() + str() for better performance. for pat, ivec, _ in self._iter(): if ivec is not None: yield pat % ivec else: yield pat % () # define striter() alias for convenience (to match RangeSet.striter()) striter = __iter__ # define nsiter() as an object-based iterator that could be used for # __iter__() in the future... def nsiter(self): """Object-based NodeSet iterator on single nodes.""" for pat, ivec, autostep in self._iter(): nodeset = self.__class__() if ivec is not None: if len(ivec) == 1: nodeset._add_new(pat, RangeSet.fromone(ivec[0])) else: nodeset._add_new(pat, RangeSetND([ivec], autostep)) else: nodeset._add_new(pat, None) yield nodeset def contiguous(self): """Object-based NodeSet iterator on contiguous node sets. Contiguous node set contains nodes with same pattern name and a contiguous range of indexes, like foobar[1-100].""" for pat, rangeset in sorted(self._patterns.items()): if rangeset: for cont_rset in rangeset.contiguous(): nodeset = self.__class__() nodeset._add_new(pat, cont_rset) yield nodeset else: nodeset = self.__class__() nodeset._add_new(pat, None) yield nodeset def __len__(self): """Get the number of nodes in NodeSet.""" cnt = 0 for rangeset in self._patterns.values(): if rangeset: cnt += len(rangeset) else: cnt += 1 return cnt def _iter_nd_pat(self, pat, rset): """ Take a pattern and a RangeSetND object and iterate over nD computed nodeset strings while following fold_axis constraints. 
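        Worked example (illustrative): with pat='node%s-ib%s' and a RangeSetND
        built from [['1-2', '0-1']], fold_axis=[0] folds axis 0 and expands
        axis 1, yielding 'node[1-2]-ib0' and 'node[1-2]-ib1', while
        fold_axis=[0, 1] yields the single string 'node[1-2]-ib[0-1]'.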
""" try: dimcnt = rset.dim() if self.fold_axis is None: # fold along all axis (default) fold_axis = range(dimcnt) else: # set of user-provided fold axis (support negative numbers) fold_axis = [int(x) % dimcnt for x in self.fold_axis if -dimcnt <= int(x) < dimcnt] except (TypeError, ValueError) as exc: raise NodeSetParseError("fold_axis=%s" % self.fold_axis, exc) for rgvec in rset.vectors(): rgnargs = [] # list of str rangeset args for axis, rangeset in enumerate(rgvec): # build an iterator over rangeset strings to add if len(rangeset) > 1: if axis not in fold_axis: # expand rgstrit = rangeset.striter() else: rgstrit = ["[%s]" % rangeset] else: rgstrit = [str(rangeset)] # aggregate/expand along previous computed axis... t_rgnargs = [] for rgstr in rgstrit: # 1-time when not expanding if not rgnargs: t_rgnargs.append([rgstr]) else: for rga in rgnargs: t_rgnargs.append(rga + [rgstr]) rgnargs = t_rgnargs # get nodeset patterns formatted with range strings for rgargs in rgnargs: yield pat % tuple(rgargs) def __str__(self): """Get ranges-based pattern of node list.""" results = [] try: for pat, rset in sorted(self._patterns.items()): if not rset: results.append(pat % ()) elif rset.dim() == 1: # check if allowed to fold even for 1D pattern if self.fold_axis is None or \ list(x for x in self.fold_axis if -1 <= int(x) < 1): rgs = str(rset) cnt = len(rset) if cnt > 1: rgs = "[%s]" % rgs results.append(pat % rgs) else: results.extend((pat % rgs for rgs in rset.striter())) elif rset.dim() > 1: results.extend(self._iter_nd_pat(pat, rset)) except TypeError: raise NodeSetParseError(pat, "Internal error: node pattern and " "ranges mismatch") return ",".join(results) def copy(self): """Return a shallow copy.""" cpy = self.__class__() cpy.fold_axis = self.fold_axis cpy._autostep = self._autostep cpy._length = self._length dic = {} for pat, rangeset in self._patterns.items(): if rangeset is None: dic[pat] = None else: dic[pat] = rangeset.copy() cpy._patterns = dic return cpy def __contains__(self, other): """Is node contained in NodeSet ?""" return self.issuperset(other) def _binary_sanity_check(self, other): # check that the other argument to a binary operation is also # a NodeSet, raising a TypeError otherwise. 
if not isinstance(other, NodeSetBase): raise TypeError("Binary operation only permitted between " "NodeSetBase") def issubset(self, other): """Report whether another nodeset contains this nodeset.""" self._binary_sanity_check(other) return other.issuperset(self) def issuperset(self, other): """Report whether this nodeset contains another nodeset.""" self._binary_sanity_check(other) status = True for pat, erangeset in other._patterns.items(): rangeset = self._patterns.get(pat) if rangeset: status = rangeset.issuperset(erangeset) else: # might be an unnumbered node (key in dict but no value) status = pat in self._patterns if not status: break return status def __eq__(self, other): """NodeSet equality comparison.""" # See comment for for RangeSet.__eq__() if not isinstance(other, NodeSetBase): return NotImplemented return len(self) == len(other) and self.issuperset(other) # inequality comparisons using the is-subset relation __le__ = issubset __ge__ = issuperset def __lt__(self, other): """x.__lt__(y) <==> x x>y""" self._binary_sanity_check(other) return len(self) > len(other) and self.issuperset(other) def _extractslice(self, index): """Private utility function: extract slice parameters from slice object `index` for an list-like object of size `length`.""" length = len(self) if index.start is None: sl_start = 0 elif index.start < 0: sl_start = max(0, length + index.start) else: sl_start = index.start if index.stop is None: sl_stop = sys.maxsize elif index.stop < 0: sl_stop = max(0, length + index.stop) else: sl_stop = index.stop if index.step is None: sl_step = 1 elif index.step < 0: # We support negative step slicing with no start/stop, ie. r[::-n]. if index.start is not None or index.stop is not None: raise IndexError("illegal start and stop when negative step " "is used") # As RangeSet elements are ordered internally, adjust sl_start # to fake backward stepping in case of negative slice step. stepmod = (length + -index.step - 1) % -index.step if stepmod > 0: sl_start += stepmod sl_step = -index.step else: sl_step = index.step if not isinstance(sl_start, int) or not isinstance(sl_stop, int) \ or not isinstance(sl_step, int): raise TypeError("slice indices must be integers") return sl_start, sl_stop, sl_step def __getitem__(self, index): """Return the node at specified index or a subnodeset when a slice is specified.""" if isinstance(index, slice): inst = NodeSetBase() sl_start, sl_stop, sl_step = self._extractslice(index) sl_next = sl_start if sl_stop <= sl_next: return inst length = 0 for pat, rangeset in sorted(self._patterns.items()): if rangeset: cnt = len(rangeset) offset = sl_next - length if offset < cnt: num = min(sl_stop - sl_next, cnt - offset) inst._add(pat, rangeset[offset:offset + num:sl_step]) else: #skip until sl_next is reached length += cnt continue else: cnt = num = 1 if sl_next > length: length += cnt continue inst._add(pat, None) # adjust sl_next... 
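                # worked example of the rounding below: with sl_start=0 and
                # sl_step=3, if sl_next reaches 4 after adding num, then
                # (4 - 0) % 3 is non-zero and sl_next is rounded up to 6,
                # the next index congruent to sl_start modulo sl_step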
sl_next += num if (sl_next - sl_start) % sl_step: sl_next = sl_start + \ ((sl_next - sl_start)/sl_step + 1) * sl_step if sl_next >= sl_stop: break length += cnt return inst elif isinstance(index, int): if index < 0: length = len(self) if index >= -length: index = length + index # - -index else: raise IndexError("%d out of range" % index) length = 0 for pat, rangeset in sorted(self._patterns.items()): if rangeset: cnt = len(rangeset) if index < length + cnt: # return a subrangeset of size 1 to manage padding if rangeset.dim() == 1: return pat % rangeset[index-length:index-length+1] else: sub = rangeset[index-length:index-length+1] for rgvec in sub.vectors(): return pat % (tuple(rgvec)) else: cnt = 1 if index == length: return pat length += cnt raise IndexError("%d out of range" % index) else: raise TypeError("NodeSet indices must be integers") def _add_new(self, pat, rangeset): """Add nodes from a (pat, rangeset) tuple. Predicate: pattern does not exist in current set. RangeSet object is referenced (not copied).""" assert pat not in self._patterns self._patterns[pat] = rangeset def _add(self, pat, rangeset, copy_rangeset=True): """Add nodes from a (pat, rangeset) tuple. `pat' may be an existing pattern and `rangeset' may be None. RangeSet or RangeSetND objects are copied if re-used internally when provided and if copy_rangeset flag is set. """ if pat in self._patterns: # existing pattern: get RangeSet or RangeSetND entry... pat_e = self._patterns[pat] # sanity checks if (pat_e is None) is not (rangeset is None): raise NodeSetError("Invalid operation") # entry may exist but set to None (single node) if pat_e: pat_e.update(rangeset) else: # new pattern... if rangeset and copy_rangeset: # default is to inherit rangeset autostep value rangeset = rangeset.copy() # but if set, self._autostep does override it if self._autostep is not None: # works with rangeset 1D or nD rangeset.autostep = self._autostep self._add_new(pat, rangeset) def union(self, other): """ s.union(t) returns a new set with elements from both s and t. """ self_copy = self.copy() self_copy.update(other) return self_copy def __or__(self, other): """ Implements the | operator. So s | t returns a new nodeset with elements from both s and t. """ if not isinstance(other, NodeSetBase): return NotImplemented return self.union(other) def add(self, other): """ Add node to NodeSet. """ self.update(other) def update(self, other): """ s.update(t) updates nodeset s with elements added from t. """ for pat, rangeset in other._patterns.items(): self._add(pat, rangeset) def updaten(self, others): """ s.updaten(list) updates nodeset s with elements added from given list. """ for other in others: self.update(other) def clear(self): """ Remove all nodes from this nodeset. """ self._patterns.clear() def __ior__(self, other): """ Implements the ``|=`` operator. So ``s |= t`` returns nodeset s with elements added from t. (Python version 2.5+ required) """ self._binary_sanity_check(other) self.update(other) return self def intersection(self, other): """ s.intersection(t) returns a new set with elements common to s and t. """ self_copy = self.copy() self_copy.intersection_update(other) return self_copy def __and__(self, other): """ Implements the & operator. So ``s & t`` returns a new nodeset with elements common to s and t. """ if not isinstance(other, NodeSet): return NotImplemented return self.intersection(other) def intersection_update(self, other): """ ``s.intersection_update(t)`` updates nodeset s keeping only elements also found in t. 
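        Example (illustrative, reusing the NodeSetBase constructor form shown
        in the class docstring above):

        >>> ns = NodeSetBase('node%s', RangeSet('1-10'))
        >>> ns.intersection_update(NodeSetBase('node%s', RangeSet('5-20')))
        >>> str(ns)
        'node[5-10]'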
""" if other is self: return tmp_ns = NodeSetBase() for pat, irangeset in other._patterns.items(): rangeset = self._patterns.get(pat) if rangeset: irset = rangeset.intersection(irangeset) # ignore pattern if empty rangeset if len(irset) > 0: tmp_ns._add(pat, irset, copy_rangeset=False) elif not irangeset and pat in self._patterns: # intersect two nodes with no rangeset tmp_ns._add(pat, None) # Substitute self._patterns = tmp_ns._patterns def __iand__(self, other): """ Implements the &= operator. So ``s &= t`` returns nodeset s keeping only elements also found in t. (Python version 2.5+ required) """ self._binary_sanity_check(other) self.intersection_update(other) return self def difference(self, other): """ ``s.difference(t)`` returns a new NodeSet with elements in s but not in t. """ self_copy = self.copy() self_copy.difference_update(other) return self_copy def __sub__(self, other): """ Implement the - operator. So ``s - t`` returns a new nodeset with elements in s but not in t. """ if not isinstance(other, NodeSetBase): return NotImplemented return self.difference(other) def difference_update(self, other, strict=False): """ ``s.difference_update(t)`` removes from s all the elements found in t. :raises KeyError: an element cannot be removed (only if strict is True) """ # the purge of each empty pattern is done afterward to allow self = ns purge_patterns = [] # iterate first over exclude nodeset rangesets which is usually smaller for pat, erangeset in other._patterns.items(): # if pattern is found, deal with it rangeset = self._patterns.get(pat) if rangeset: # sub rangeset, raise KeyError if not found rangeset.difference_update(erangeset, strict) # check if no range left and add pattern to purge list if len(rangeset) == 0: purge_patterns.append(pat) else: # unnumbered node exclusion if pat in self._patterns: purge_patterns.append(pat) elif strict: raise KeyError(pat) for pat in purge_patterns: del self._patterns[pat] def __isub__(self, other): """ Implement the -= operator. So ``s -= t`` returns nodeset s after removing elements found in t. (Python version 2.5+ required) """ self._binary_sanity_check(other) self.difference_update(other) return self def remove(self, elem): """ Remove element elem from the nodeset. Raise KeyError if elem is not contained in the nodeset. :raises KeyError: elem is not contained in the nodeset """ self.difference_update(elem, True) def symmetric_difference(self, other): """ ``s.symmetric_difference(t)`` returns the symmetric difference of two nodesets as a new NodeSet. (ie. all nodes that are in exactly one of the nodesets.) """ self_copy = self.copy() self_copy.symmetric_difference_update(other) return self_copy def __xor__(self, other): """ Implement the ^ operator. So ``s ^ t`` returns a new NodeSet with nodes that are in exactly one of the nodesets. """ if not isinstance(other, NodeSet): return NotImplemented return self.symmetric_difference(other) def symmetric_difference_update(self, other): """ ``s.symmetric_difference_update(t)`` updates nodeset s keeping all nodes that are in exactly one of the nodesets. 
""" purge_patterns = [] # iterate over our rangesets for pat, rangeset in self._patterns.items(): brangeset = other._patterns.get(pat) if brangeset: rangeset.symmetric_difference_update(brangeset) else: if pat in other._patterns: purge_patterns.append(pat) # iterate over other's rangesets for pat, brangeset in other._patterns.items(): rangeset = self._patterns.get(pat) if not rangeset and not pat in self._patterns: self._add(pat, brangeset) # check for patterns cleanup for pat, rangeset in self._patterns.items(): if rangeset is not None and len(rangeset) == 0: purge_patterns.append(pat) # cleanup for pat in purge_patterns: del self._patterns[pat] def __ixor__(self, other): """ Implement the ^= operator. So ``s ^= t`` returns nodeset s after keeping all nodes that are in exactly one of the nodesets. (Python version 2.5+ required) """ self._binary_sanity_check(other) self.symmetric_difference_update(other) return self def _strip_escape(nsstr): """ Helper to prepare a nodeset string for parsing: trim boundary whitespaces and escape special characters. """ return nsstr.strip().replace('%', '%%') def _rsets4nsb(rsets, autostep): """ Helper to convert a list of RangeSet objects into the proper object for NodeSetBase: RangeSet, RangeSetND or None (no node index). """ if len(rsets) > 1: return RangeSetND([rsets], None, autostep, copy_rangeset=False) elif len(rsets) == 1: return rsets[0] class ParsingEngine(object): """ Class that is able to transform a source into a NodeSetBase. """ OP_CODES = {',': 'update', '!': 'difference_update', '&': 'intersection_update', '^': 'symmetric_difference_update'} OP_CODES_PAT = '[%s]' % re.escape(''.join(OP_CODES.keys())) BRACKET_OPEN = '[' BRACKET_CLOSE = ']' def __init__(self, group_resolver, node_wildcard_enable=True): """ Initialize Parsing Engine. """ self.group_resolver = group_resolver self.base_node_re = re.compile(r"(\D*)(\d*)") self.node_wc = node_wildcard_enable # node wildcard support def parse(self, nsobj, autostep): """ Parse provided object if possible and return a NodeSetBase object. """ # passing None is supported if nsobj is None: return NodeSetBase() # is nsobj a NodeSetBase instance? if isinstance(nsobj, NodeSetBase): return nsobj # or is nsobj a string? if isinstance(nsobj, basestring): try: return self.parse_string(str(nsobj), autostep) except (NodeUtils.GroupSourceQueryFailed, RuntimeError) as exc: raise NodeSetParseError(nsobj, str(exc)) raise TypeError("Unsupported NodeSet input %s" % type(nsobj)) def parse_string(self, nsstr, autostep, namespace=None): """Parse provided string in optional namespace. This method parses string, resolves all node groups, and computes set operations. Return a NodeSetBase object. """ alln_cache = None # used to compute 'all nodes' only once nodeset = NodeSetBase() nsstr = _strip_escape(nsstr) for opc, pat, rgnd in self._scan_string(nsstr, autostep): # Parser main debugging: #print "OPC %s PAT %s RANGESETS %s" % (opc, pat, rgnd) if self.group_resolver and pat[0] == '@': ns_group = NodeSetBase() for nodegroup in NodeSetBase(pat, rgnd): # parse/expand nodes group: get group string and namespace ns_str_ext, ns_nsp_ext = self.parse_group_string(nodegroup, namespace) if ns_str_ext: # may still contain groups # recursively parse and aggregate result ns_group.update(self.parse_string(ns_str_ext, autostep, ns_nsp_ext)) # perform operation getattr(nodeset, opc)(ns_group) elif self.group_resolver and self.node_wc and ('*' in pat or '?' 
in pat): # We support ranges with wildcard mask by testing all nodes # against each expanded mask (wcmasks). wcmasks = (str(wcn) for wcn in NodeSetBase(pat, rgnd, False)) # Our reference set is 'all nodes', we need to build it from # NodeSetBase to iterate over each individual node. if alln_cache is None: self.node_wc = False # avoid infinite recursion try: nsb = NodeSetBase() for res in self.all_nodes(namespace): nsb.update(self.parse_string(res, autostep, namespace)) alln_cache = set(str(node) for node in nsb) finally: self.node_wc = True alln = alln_cache.copy() # A wildcarded nodeset can be seen as a single nodeset, so we # compute the union of nodes matching the wildcard mask(s) and # use the resulting NodeSetBase object as argument of the next # operation (opc). wcns = NodeSetBase() for wcmask in wcmasks: # Expand nodes matching any of the wildcard mask for node in fnmatch.filter(alln, wcmask): alln.remove(node) # remove matching node for next iter wcp, wcr = self._scan_string_single(node, autostep) wcrgnd = _rsets4nsb(wcr, autostep) wcns.update(NodeSetBase(wcp, wcrgnd, False)) getattr(nodeset, opc)(wcns) else: getattr(nodeset, opc)(NodeSetBase(pat, rgnd, False)) return nodeset def parse_string_single(self, nsstr, autostep): """Parse provided string and return a NodeSetBase object.""" pat, rangesets = self._scan_string_single(_strip_escape(nsstr), autostep) if len(rangesets) > 1: rgobj = RangeSetND([rangesets], None, autostep, copy_rangeset=False) elif len(rangesets) == 1: rgobj = rangesets[0] else: # non-indexed nodename rgobj = None return NodeSetBase(pat, rgobj, False) def parse_group(self, group, namespace=None, autostep=None): """Parse provided single group name (without @ prefix).""" assert self.group_resolver is not None nodestr = self.group_resolver.group_nodes(group, namespace) return self.parse(",".join(nodestr), autostep) def parse_group_string(self, nodegroup, namespace=None): """Parse provided raw nodegroup string in optional namespace. Warning: 1 pass only, may still return groups. Return a tuple (grp_resolved_string, namespace). """ assert nodegroup[0] == '@' assert self.group_resolver is not None grpstr = group = nodegroup[1:] if grpstr.find(':') >= 0: # specified namespace does always override namespace, group = grpstr.split(':', 1) if group == '*': # @* or @source:* magic reslist = self.all_nodes(namespace) elif group.startswith('@'): # @@source group name list reslist = self.grouplist(grpstr[1:]) else: reslist = self.group_resolver.group_nodes(group, namespace) return ','.join(reslist), namespace def grouplist(self, namespace=None): """ Return a sorted list of groups from current resolver (in optional group source / namespace). """ grpset = NodeSetBase() for grpstr in self.group_resolver.grouplist(namespace): # We scan each group string to expand any range seen... grpstr = _strip_escape(grpstr) for opc, pat, rgnd in self._scan_string(grpstr, None): getattr(grpset, opc)(NodeSetBase(pat, rgnd, False)) return list(grpset) def all_nodes(self, namespace=None): """Get all nodes from group resolver as a list of strings.""" # namespace is the optional group source assert self.group_resolver is not None alln = [] try: # Ask resolver to provide all nodes. 
alln = self.group_resolver.all_nodes(namespace) except NodeUtils.GroupSourceNoUpcall: try: # As the resolver is not able to provide all nodes directly, # fall back to the list + map(s) method: for grp in self.grouplist(namespace): alln += self.group_resolver.group_nodes(grp, namespace) except NodeUtils.GroupSourceNoUpcall: # We are definitely not able to find "all" nodes. msg = "Not enough working methods (all or map + list) to " \ "get all nodes" raise NodeSetExternalError(msg) except NodeUtils.GroupSourceQueryFailed as exc: raise NodeSetExternalError("Failed to get all nodes: %s" % exc) return alln def _next_op(self, pat): """Opcode parsing subroutine.""" mobj = re.search(ParsingEngine.OP_CODES_PAT, pat) if mobj: return mobj.span()[0], mobj.group() else: return -1, None def _scan_string_single(self, nsstr, autostep): """Single node scan, returns (pat, list of rangesets)""" # single node parsing pfx_nd = [mobj.groups() for mobj in self.base_node_re.finditer(nsstr)] pfx_nd = pfx_nd[:-1] if not pfx_nd: raise NodeSetParseError(nsstr, "parse error") pat = "" rangesets = [] for pfx, idx in pfx_nd: if idx: # optimization: process single index padding directly pad = 0 if int(idx) != 0: idxs = idx.lstrip("0") if len(idx) - len(idxs) > 0: pad = len(idx) idxint = int(idxs) else: if len(idx) > 1: pad = len(idx) idxint = 0 if idxint > 1e100: raise NodeSetParseRangeError( \ RangeSetParseError(idx, "invalid rangeset index")) # optimization: use numerical RangeSet constructor pat += "%s%%s" % pfx rangesets.append(RangeSet.fromone(idxint, pad, autostep)) else: # undefined pad means no node index pat += pfx return pat, rangesets def _scan_string(self, nsstr, autostep): """Parsing engine's string scanner method (iterator).""" next_op_code = ',' # if no operator, the default one is to update the nodeset while nsstr: # Ignore whitespace(s) for convenience nsstr = nsstr.lstrip() rsets = [] op_code = next_op_code op_idx, next_op_code = self._next_op(nsstr) bracket_idx = nsstr.find(self.BRACKET_OPEN) # Check if the operator is after the bracket, or if there # is no operator at all but some brackets. if bracket_idx >= 0 and (op_idx > bracket_idx or op_idx < 0): # In this case, we have a pattern of potentially several # nodes. # Fill prefix, range and suffix from pattern # eg.
"forbin[3,4-10]-ilo" -> "forbin", "3,4-10", "-ilo" newpat = "" sfx = nsstr while bracket_idx >= 0 and (op_idx > bracket_idx or op_idx < 0): pfx, sfx = sfx.split(self.BRACKET_OPEN, 1) try: rng, sfx = sfx.split(self.BRACKET_CLOSE, 1) except ValueError: raise NodeSetParseError(nsstr, "missing bracket") # illegal closing bracket checks if pfx.find(self.BRACKET_CLOSE) > -1: raise NodeSetParseError(pfx, "illegal closing bracket") if len(sfx) > 0: bra_end = sfx.find(self.BRACKET_CLOSE) bra_start = sfx.find(self.BRACKET_OPEN) if bra_start == -1: bra_start = bra_end + 1 if bra_end >= 0 and bra_end < bra_start: msg = "illegal closing bracket" raise NodeSetParseError(sfx, msg) pfxlen, sfxlen = len(pfx), len(sfx) if sfxlen > 0: try: # amending trailing digits generates /steps sfx, rng = self._amend_trailing_digits(sfx, rng) except RangeSetParseError as ex: raise NodeSetParseRangeError(ex) if pfxlen > 0: try: # this method supports /steps pfx, rng = self._amend_leading_digits(pfx, rng) except RangeSetParseError as ex: raise NodeSetParseRangeError(ex) if pfx: # scan any nonempty pfx as a single node (no bracket) pfx, pfxrvec = self._scan_string_single(pfx, autostep) rsets += pfxrvec # readahead for sanity check bracket_idx = sfx.find(self.BRACKET_OPEN, bracket_idx - pfxlen) op_idx, next_op_code = self._next_op(sfx) if len(sfx) > 0 and sfx[0] == '[': msg = "illegal reopening bracket" raise NodeSetParseError(sfx, msg) newpat += "%s%%s" % pfx try: rsets.append(RangeSet(rng, autostep)) except RangeSetParseError as ex: raise NodeSetParseRangeError(ex) # Check if we have a next op-separated node or pattern op_idx, next_op_code = self._next_op(sfx) if op_idx < 0: nsstr = None else: sfx, nsstr = sfx.split(next_op_code, 1) # Detected character operator so right operand is mandatory if not nsstr: msg = "missing nodeset operand with '%s' " \ "operator" % next_op_code raise NodeSetParseError(None, msg) # Ignore whitespace(s) sfx = sfx.rstrip() if sfx: sfx, sfxrvec = self._scan_string_single(sfx, autostep) newpat += sfx rsets += sfxrvec else: # In this case, either there is no comma and no bracket, # or the bracket is after the comma, then just return # the node. if op_idx < 0: node = nsstr nsstr = None # break next time else: node, nsstr = nsstr.split(next_op_code, 1) # Detected character operator so both operands are mandatory if not node or not nsstr: msg = "missing nodeset operand with '%s' " \ "operator" % next_op_code raise NodeSetParseError(node or nsstr, msg) # Check for illegal closing bracket if node.find(self.BRACKET_CLOSE) > -1: raise NodeSetParseError(node, "illegal closing bracket") # Ignore whitespace(s) node = node.rstrip() newpat, rsets = self._scan_string_single(node, autostep) op = ParsingEngine.OP_CODES[op_code] yield op, newpat, _rsets4nsb(rsets, autostep) def _amend_leading_digits(self, outer, inner): """Helper to get rid of leading bracket digits. Take a bracket outer prefix string and an inner range set string and return amended strings. """ outerstrip = outer.rstrip(string.digits) outerlen, outerstriplen = len(outer), len(outerstrip) if outerstriplen < outerlen: # get outer bracket leading digits outerdigits = outer[outerstriplen:] inner = ','.join( '-'.join(outerdigits + bound for bound in elem.split('-')) for elem in (str(subrng) for subrng in RangeSet(inner).contiguous())) return outerstrip, inner def _amend_trailing_digits(self, outer, inner): """Helper to get rid of trailing bracket digits. Take a bracket outer suffix string and an inner range set string and return amended strings. 
""" outerstrip = outer.lstrip(string.digits) outerlen, outerstriplen = len(outer), len(outerstrip) if outerstriplen < outerlen: # step syntax is not compatible with trailing digits if '/' in inner: msg = "illegal trailing digits after range with steps" raise NodeSetParseError(outer, msg) # get outer bracket trailing digits outerdigits = outer[0:outerlen-outerstriplen] outlen = len(outerdigits) def shiftstep(orig, power): """Add needed step after shifting range indexes""" if '-' in orig: return orig + '/1' + '0' * power return orig # do not use /step for single index inner = ','.join(shiftstep(s, outlen) for s in ('-'.join(bound + outerdigits for bound in elem.split('-')) for elem in inner.split(','))) return outerstrip, inner class NodeSet(NodeSetBase): """ Iterable class of nodes with node ranges support. NodeSet creation examples: >>> nodeset = NodeSet() # empty NodeSet >>> nodeset = NodeSet("cluster3") # contains only cluster3 >>> nodeset = NodeSet("cluster[5,10-42]") >>> nodeset = NodeSet("cluster[0-10/2]") >>> nodeset = NodeSet("cluster[0-10/2],othername[7-9,120-300]") NodeSet provides methods like update(), intersection_update() or difference_update() methods, which conform to the Python Set API. However, unlike RangeSet or standard Set, NodeSet is somewhat not so strict for convenience, and understands NodeSet instance or NodeSet string as argument. Also, there is no strict definition of one element, for example, it IS allowed to do: >>> nodeset = NodeSet("blue[1-50]") >>> nodeset.remove("blue[36-40]") >>> print nodeset blue[1-35,41-50] Additionally, the NodeSet class recognizes the "extended string pattern" which adds support for union (special character ","), difference ("!"), intersection ("&") and symmetric difference ("^") operations. String patterns are read from left to right, by proceeding any character operators accordingly. Extended string pattern usage examples: >>> nodeset = NodeSet("node[0-10],node[14-16]") # union >>> nodeset = NodeSet("node[0-10]!node[8-10]") # difference >>> nodeset = NodeSet("node[0-10]&node[5-13]") # intersection >>> nodeset = NodeSet("node[0-10]^node[5-13]") # xor """ _VERSION = 2 def __init__(self, nodes=None, autostep=None, resolver=None, fold_axis=None): """Initialize a NodeSet object. The `nodes` argument may be a valid nodeset string or a NodeSet object. If no nodes are specified, an empty NodeSet is created. The optional `autostep` argument is passed to underlying :class:`.RangeSet.RangeSet` objects and aims to enable and make use of the range/step syntax (eg. ``node[1-9/2]``) when converting NodeSet to string (using folding). To enable this feature, autostep must be set there to the min number of indexes that are found at equal distance of each other inside a range before NodeSet starts to use this syntax. For example, `autostep=3` (or less) will pack ``n[2,4,6]`` into ``n[2-6/2]``. Default autostep value is None which means "inherit whenever possible", ie. do not enable it unless set in NodeSet objects passed as `nodes` here or during arithmetic operations. You may however use the special ``AUTOSTEP_DISABLED`` constant to force turning off autostep feature. The optional `resolver` argument may be used to override the group resolving behavior for this NodeSet object. 
It can either be set to a :class:`.NodeUtils.GroupResolver` object, to the ``RESOLVER_NOGROUP`` constant to disable any group resolution, or to None (default) to use standard NodeSet group resolver (see :func:`.set_std_group_resolver()` at the module level to change it if needed). nD nodeset only: the optional `fold_axis` parameter, if specified, set the public instance member `fold_axis` to an iterable over nD 0-indexed axis integers. This parameter may be used to disengage some nD folding. That may be useful as all cluster tools don't support folded-nD nodeset syntax. Pass ``[0]``, for example, to only fold along first axis (that is, to fold first dimension using ``[a-b]`` rangeset syntax whenever possible). Using `fold_axis` ensures that rangeset won't be folded on unspecified axis, but please note however, that using `fold_axis` may lead to suboptimal folding, this is because NodeSet algorithms are optimized for folding along all axis (default behavior). """ NodeSetBase.__init__(self, autostep=autostep, fold_axis=fold_axis) # Set group resolver. if resolver in (RESOLVER_NOGROUP, RESOLVER_NOINIT): self._resolver = None else: self._resolver = resolver or RESOLVER_STD_GROUP # Initialize default parser. if resolver == RESOLVER_NOINIT: self._parser = None else: self._parser = ParsingEngine(self._resolver) self.update(nodes) @classmethod def _fromlist1(cls, nodelist, autostep=None, resolver=None): """Class method that returns a new NodeSet with single nodes from provided list (optimized constructor).""" inst = NodeSet(autostep=autostep, resolver=resolver) for single in nodelist: inst.update(inst._parser.parse_string_single(single, autostep)) return inst @classmethod def fromlist(cls, nodelist, autostep=None, resolver=None): """Class method that returns a new NodeSet with nodes from provided list.""" inst = NodeSet(autostep=autostep, resolver=resolver) inst.updaten(nodelist) return inst @classmethod def fromall(cls, groupsource=None, autostep=None, resolver=None): """Class method that returns a new NodeSet with all nodes from optional groupsource.""" inst = NodeSet(autostep=autostep, resolver=resolver) try: if not inst._resolver: raise NodeSetExternalError("Group resolver is not defined") else: # fill this nodeset with all nodes found by resolver inst.updaten(inst._parser.all_nodes(groupsource)) except NodeUtils.GroupResolverError as exc: errmsg = "Group source error (%s: %s)" % (exc.__class__.__name__, exc) raise NodeSetExternalError(errmsg) return inst def __getstate__(self): """Called when pickling: remove references to group resolver.""" odict = self.__dict__.copy() odict['_version'] = NodeSet._VERSION del odict['_resolver'] del odict['_parser'] return odict def __setstate__(self, dic): """Called when unpickling: restore parser using non group resolver.""" self.__dict__.update(dic) self._resolver = None self._parser = ParsingEngine(None) if getattr(self, '_version', 1) <= 1: self.fold_axis = None # if setting state from first version, a conversion is needed to # support native RangeSetND old_patterns = self._patterns self._patterns = {} for pat, rangeset in sorted(old_patterns.items()): if rangeset: assert isinstance(rangeset, RangeSet) rgs = str(rangeset) if len(rangeset) > 1: rgs = "[%s]" % rgs self.update(pat % rgs) else: self.update(pat) def copy(self): """Return a shallow copy of a NodeSet.""" cpy = self.__class__(resolver=RESOLVER_NOINIT) dic = {} for pat, rangeset in self._patterns.items(): if rangeset is None: dic[pat] = None else: dic[pat] = rangeset.copy() cpy._patterns = dic 
cpy.fold_axis = self.fold_axis cpy._autostep = self._autostep cpy._resolver = self._resolver cpy._parser = self._parser return cpy __copy__ = copy # For the copy module def _find_groups(self, node, namespace, allgroups): """Find groups of node by namespace.""" if allgroups: # find node groups using in-memory allgroups for grp, nodeset in allgroups.items(): if node in nodeset: yield grp else: # find node groups using resolver try: for group in self._resolver.node_groups(node, namespace): yield group except NodeUtils.GroupSourceQueryFailed as exc: msg = "Group source query failed: %s" % exc raise NodeSetExternalError(msg) def _groups2(self, groupsource=None, autostep=None): """Find node groups this nodeset belongs to. [private]""" if not self._resolver: raise NodeSetExternalError("No node group resolver") try: # Get all groups in specified group source. allgrplist = self._parser.grouplist(groupsource) except NodeUtils.GroupSourceError: # If list query failed, we still might be able to regroup # using reverse. allgrplist = None groups_info = {} allgroups = {} # Check for external reverse presence, and also use the # following heuristic: external reverse is used only when number # of groups is greater than the NodeSet size. if self._resolver.has_node_groups(groupsource) and \ (not allgrplist or len(allgrplist) >= len(self)): # use external reverse pass else: if not allgrplist: # list query failed and no way to reverse! return groups_info # empty try: # use internal reverse: populate allgroups for grp in allgrplist: nodelist = self._resolver.group_nodes(grp, groupsource) allgroups[grp] = NodeSet(",".join(nodelist), resolver=self._resolver) except NodeUtils.GroupSourceQueryFailed as exc: # External result inconsistency raise NodeSetExternalError("Unable to map a group " \ "previously listed\n\tFailed command: %s" % exc) # For each NodeSetBase in self, find its groups. for node in self._iterbase(): for grp in self._find_groups(node, groupsource, allgroups): if grp not in groups_info: nodes = self._parser.parse_group(grp, groupsource, autostep) groups_info[grp] = (1, nodes) else: i, nodes = groups_info[grp] groups_info[grp] = (i + 1, nodes) return groups_info def groups(self, groupsource=None, noprefix=False): """Find node groups this nodeset belongs to. Return a dictionary of the form: group_name => (group_nodeset, contained_nodeset) Group names are always prefixed with "@". If groupsource is provided, they are prefixed with "@groupsource:", unless noprefix is True. """ groups = self._groups2(groupsource, self._autostep) result = {} for grp, (_, nsb) in groups.items(): if groupsource and not noprefix: key = "@%s:%s" % (groupsource, grp) else: key = "@" + grp result[key] = (NodeSet(nsb, resolver=self._resolver), self.intersection(nsb)) return result def regroup(self, groupsource=None, autostep=None, overlap=False, noprefix=False): """Regroup nodeset using node groups. Try to find fully matching node groups (within specified groupsource) and return a string that represents this node set (containing these potential node groups). When no matching node groups are found, this method returns the same result as str().""" groups = self._groups2(groupsource, autostep) if not groups: return str(self) # Keep only groups that are full. fulls = [] for k, (i, nodes) in groups.items(): assert i <= len(nodes) if i == len(nodes): fulls.append((i, k)) rest = NodeSet(self, resolver=RESOLVER_NOGROUP) regrouped = NodeSet(resolver=RESOLVER_NOGROUP) # Build regrouped NodeSet by selecting largest groups first. 
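# e.g. with fulls == [(16, '@all'), (8, '@rack1'), (8, '@rack2')]
# (illustrative group names), '@all' is tried first (largest count);
# once it consumes all nodes from 'rest' the method returns early, so
# the smaller rack groups are never emitted in this example.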
for _, grp in sorted(fulls, key=lambda x: (-x[0], x[1])): if not overlap and groups[grp][1] not in rest: continue if groupsource and not noprefix: regrouped.update("@%s:%s" % (groupsource, grp)) else: regrouped.update("@" + grp) rest.difference_update(groups[grp][1]) if not rest: return str(regrouped) if regrouped: return "%s,%s" % (regrouped, rest) return str(rest) def issubset(self, other): """ Report whether another nodeset contains this nodeset. """ nodeset = self._parser.parse(other, self._autostep) return NodeSetBase.issuperset(nodeset, self) def issuperset(self, other): """ Report whether this nodeset contains another nodeset. """ nodeset = self._parser.parse(other, self._autostep) return NodeSetBase.issuperset(self, nodeset) def __getitem__(self, index): """ Return the node at specified index or a subnodeset when a slice is specified. """ base = NodeSetBase.__getitem__(self, index) if not isinstance(base, NodeSetBase): return base # return a real NodeSet inst = NodeSet(autostep=self._autostep, resolver=self._resolver) inst._patterns = base._patterns return inst def split(self, nbr): """ Split the nodeset into nbr sub-nodesets (at most). Each sub-nodeset will have the same number of elements more or less 1. Current nodeset remains unmodified. >>> for nodeset in NodeSet("foo[1-5]").split(3): ... print nodeset foo[1-2] foo[3-4] foo5 """ assert(nbr > 0) # We put the same number of element in each sub-nodeset. slice_size = len(self) // nbr left = len(self) % nbr begin = 0 for i in range(0, min(nbr, len(self))): length = slice_size + int(i < left) yield self[begin:begin + length] begin += length def update(self, other): """ s.update(t) returns nodeset s with elements added from t. """ nodeset = self._parser.parse(other, self._autostep) NodeSetBase.update(self, nodeset) def intersection_update(self, other): """ s.intersection_update(t) returns nodeset s keeping only elements also found in t. """ nodeset = self._parser.parse(other, self._autostep) NodeSetBase.intersection_update(self, nodeset) def difference_update(self, other, strict=False): """ s.difference_update(t) removes from s all the elements found in t. If strict is True, raise KeyError if an element in t cannot be removed from s. """ nodeset = self._parser.parse(other, self._autostep) NodeSetBase.difference_update(self, nodeset, strict) def symmetric_difference_update(self, other): """ s.symmetric_difference_update(t) returns nodeset s keeping all nodes that are in exactly one of the nodesets. """ nodeset = self._parser.parse(other, self._autostep) NodeSetBase.symmetric_difference_update(self, nodeset) def expand(pat): """ Commodity function that expands a nodeset pattern into a list of nodes. """ return list(NodeSet(pat)) def fold(pat): """ Commodity function that clean dups and fold provided pattern with ranges and "/step" support. """ return str(NodeSet(pat)) def grouplist(namespace=None, resolver=None): """ Commodity function that retrieves the list of raw groups for a specified group namespace (or use default namespace). Group names are not prefixed with "@". """ return ParsingEngine(resolver or RESOLVER_STD_GROUP).grouplist(namespace) def std_group_resolver(): """ Get the current resolver used for standard "@" group resolution. """ return RESOLVER_STD_GROUP def set_std_group_resolver(new_resolver): """ Override the resolver used for standard "@" group resolution. The new resolver should be either an instance of NodeUtils.GroupResolver or None. In the latter case, the group resolver is restored to the default one. 
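    Example (the configuration file path below is illustrative):

        >>> from ClusterShell.NodeUtils import GroupResolverConfig
        >>> set_std_group_resolver(
        ...     GroupResolverConfig("/etc/clustershell/groups.conf"))
        >>> set_std_group_resolver(None)  # restore the default resolver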
""" global RESOLVER_STD_GROUP RESOLVER_STD_GROUP = new_resolver or _DEF_RESOLVER_STD_GROUP def set_std_group_resolver_config(groupsconf, illegal_chars=None): """ Helper to create and set std group resolver from a config file path. By default, the GroupResolverConfig object is created using illegal_chars=NodeSet.ILLEGAL_GROUP_CHARS. This method does nothing if groupsconf is not defined. """ if groupsconf: if illegal_chars is None: illegal_chars = ILLEGAL_GROUP_CHARS group_resolver = NodeUtils.GroupResolverConfig(groupsconf, illegal_chars) set_std_group_resolver(group_resolver) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/lib/ClusterShell/NodeUtils.py0000644104717000001440000006154414505632065021245 0ustar00sthiellusers# # Copyright (C) 2010-2016 CEA/DAM # Copyright (C) 2010-2016 Aurelien Degremont # Copyright (C) 2015-2017 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ Cluster nodes utility module The NodeUtils module is a ClusterShell helper module that provides supplementary services to manage nodes in a cluster. It is primarily designed to enhance the NodeSet module providing some binding support to external node groups sources in separate namespaces (example of group sources are: files, jobs scheduler, custom scripts, etc.). """ try: from configparser import ConfigParser, NoOptionError, NoSectionError except ImportError: # Python 2 compat from ConfigParser import ConfigParser, NoOptionError, NoSectionError import errno from functools import wraps import glob import logging import os import shlex import time from string import Template from subprocess import Popen, PIPE try: basestring except NameError: basestring = str LOGGER = logging.getLogger(__name__) class GroupSourceError(Exception): """Base GroupSource error exception""" def __init__(self, message, group_source): Exception.__init__(self, message) self.group_source = group_source class GroupSourceNoUpcall(GroupSourceError): """Raised when upcall or method is not available""" class GroupSourceQueryFailed(GroupSourceError): """Raised when a query failed (eg. no group found)""" class GroupResolverError(Exception): """Base GroupResolver error""" class GroupResolverSourceError(GroupResolverError): """Raised when upcall is not available""" class GroupResolverIllegalCharError(GroupResolverError): """Raised when an illegal group character is encountered""" class GroupResolverConfigError(GroupResolverError): """Raised when a configuration error is encountered""" _DEFAULT_CACHE_TIME = 3600 class GroupSource(object): """ClusterShell Group Source class. A Group Source object defines resolv_map, resolv_list, resolv_all and optional resolv_reverse methods for node group resolution. It is constituting a group resolution namespace. 
""" def __init__(self, name, groups=None, allgroups=None): """Initialize GroupSource :param name: group source name :param groups: group to nodes dict :param allgroups: optional "all groups" result (string) """ self.name = name self.groups = groups or {} # we avoid the use of {} as default argument self.allgroups = allgroups self.has_reverse = False def resolv_map(self, group): """Get nodes from group `group`""" return self.groups.get(group, '') def resolv_list(self): """Return a list of all group names for this group source""" return list(self.groups) def resolv_all(self): """Return the content of all groups as defined by this GroupSource""" if self.allgroups is None: raise GroupSourceNoUpcall("All groups info not available", self) return self.allgroups def resolv_reverse(self, node): """ Return the group name matching the provided node. """ raise GroupSourceNoUpcall("Not implemented", self) class FileGroupSource(GroupSource): """File-based Group Source using loader for file format and cache expiry.""" def __init__(self, name, loader): """ Initialize FileGroupSource object. :param name: group source name (eg. key name of yaml root dict) :param loader: associated content loader (eg. YAMLGroupLoader object) """ # do not call super.__init__ to allow the use of r/o properties self.name = name self.loader = loader self.has_reverse = False @property def groups(self): """groups property (dict)""" return self.loader.groups(self.name) @property def allgroups(self): """allgroups property (string)""" # FileGroupSource uses the 'all' group to implement resolv_all return self.groups.get('all') class UpcallGroupSource(GroupSource): """ GroupSource class managing external calls for nodegroup support. Upcall results are cached for a customizable amount of time. This is controlled by `cache_time` attribute. Default is 3600 seconds. """ def __init__(self, name, map_upcall, all_upcall=None, list_upcall=None, reverse_upcall=None, cfgdir=None, cache_time=None): GroupSource.__init__(self, name) self.verbosity = 0 # deprecated self.cfgdir = cfgdir self.logger = logging.getLogger(__name__) # Supported external upcalls self.upcalls = {} self.upcalls['map'] = map_upcall if all_upcall: self.upcalls['all'] = all_upcall if list_upcall: self.upcalls['list'] = list_upcall if reverse_upcall: self.upcalls['reverse'] = reverse_upcall self.has_reverse = True # Cache upcall data if cache_time is None: self.cache_time = _DEFAULT_CACHE_TIME else: self.cache_time = cache_time self._cache = {} self.clear_cache() def clear_cache(self): """ Remove all previously cached upcall results whatever their lifetime is. """ self._cache = { 'map': {}, 'reverse': {} } def _upcall_read(self, cmdtpl, args=dict()): """ Invoke the specified upcall command, raise an Exception if something goes wrong and return the command output otherwise. """ cmdline = Template(self.upcalls[cmdtpl]).safe_substitute(args) self.logger.debug("EXEC '%s'", cmdline) proc = Popen(cmdline, stdout=PIPE, shell=True, cwd=self.cfgdir, universal_newlines=True) output = proc.communicate()[0].strip() self.logger.debug("READ '%s'", output) if proc.returncode != 0: self.logger.debug("ERROR '%s' returned %d", cmdline, proc.returncode) raise GroupSourceQueryFailed(cmdline, self) return output def _upcall_cache(self, upcall, cache, key, **args): """ Look for `key' in provided `cache'. If not found, call the corresponding `upcall'. If `key' is missing, it is added to provided `cache'. Each entry in a cache is kept only for a limited time equal to self.cache_time . 
""" if not self.upcalls.get(upcall): raise GroupSourceNoUpcall(upcall, self) # Purge expired data from cache if key in cache and cache[key][1] < time.time(): self.logger.debug("PURGE EXPIRED (%d)'%s'", cache[key][1], key) del cache[key] # Fetch the data if unknown of just purged if key not in cache: cache_expiry = time.time() + self.cache_time # $CFGDIR and $SOURCE always replaced args['CFGDIR'] = self.cfgdir args['SOURCE'] = self.name cache[key] = (self._upcall_read(upcall, args), cache_expiry) return cache[key][0] def resolv_map(self, group): """ Get nodes from group 'group', using the cached value if available. """ return self._upcall_cache('map', self._cache['map'], group, GROUP=group) def resolv_list(self): """ Return a list of all group names for this group source, using the cached value if available. """ return self._upcall_cache('list', self._cache, 'list') def resolv_all(self): """ Return the content of special group ALL, using the cached value if available. """ return self._upcall_cache('all', self._cache, 'all') def resolv_reverse(self, node): """ Return the group name matching the provided node, using the cached value if available. """ # Cast node to string as cache key must be hashable node_str = str(node) return self._upcall_cache('reverse', self._cache['reverse'], node_str, NODE=node_str) class YAMLGroupLoader(object): """ YAML group file loader/reloader. Load or reload a YAML multi group sources file: - create GroupSource objects - gather groups dict content on load - reload the file once cache_time has expired """ def __init__(self, filename, cache_time=None): """ Initialize YAMLGroupLoader and load file. :param filename: YAML file path :param cache_time: cache time (seconds) """ if cache_time is None: self.cache_time = _DEFAULT_CACHE_TIME else: self.cache_time = cache_time self.cache_expiry = 0 self.filename = filename self.sources = {} self._groups = {} # must be loaded after initialization so self.sources is set self._load() def _load(self): """Load or reload YAML group file to create GroupSource objects.""" with open(self.filename) as yamlfile: try: import yaml sources = yaml.safe_load(yamlfile) except ImportError as exc: msg = "Disable autodir or install PyYAML!" 
raise GroupResolverConfigError("%s (%s)" % (str(exc), msg)) except yaml.YAMLError as exc: raise GroupResolverConfigError("%s: %s" % (self.filename, exc)) # NOTE: change to isinstance(sources, collections.Mapping) with py2.6+ if not isinstance(sources, dict): fmt = "%s: invalid content (base is not a dict)" raise GroupResolverConfigError(fmt % self.filename) first = not self.sources for srcname, groups in sources.items(): # check for valid types returned by PyYAML Loader if not isinstance(srcname, basestring): fmt = '%s: group source %s not a string (add quotes?)' raise GroupResolverConfigError(fmt % (self.filename, srcname)) if not isinstance(groups, dict): fmt = "%s: invalid content (group source '%s' is not a dict)" raise GroupResolverConfigError(fmt % (self.filename, srcname)) for grp, grpnodes in groups.items(): if not isinstance(grp, basestring): fmt = '%s: %s: group name %s not a string (add quotes?)' raise GroupResolverConfigError(fmt % (self.filename, srcname, grp)) # GH#533: interpret null value as empty set if grpnodes is None: groups[grp] = '' if first: self._groups[srcname] = groups self.sources[srcname] = FileGroupSource(srcname, self) elif srcname in self.sources: # update groups of existing source self._groups[srcname] = groups # else: cannot add new source on reload - just ignore it # groups are loaded, set cache expiry self.cache_expiry = time.time() + self.cache_time def __iter__(self): """Iterate over GroupSource objects.""" # safe as long as self.sources is set at init (once) return iter(self.sources.values()) def groups(self, sourcename): """ Groups dict accessor for sourcename. This method is called by associated FileGroupSource objects and simply returns dict content, after reloading file if cache_time has expired. """ if self.cache_expiry < time.time(): # reload whole file if cache time expired self._load() return self._groups[sourcename] class GroupResolver(object): """ Base class GroupResolver that aims to provide node/group resolution from multiple GroupSources. A GroupResolver object might be initialized with a default GroupSource object, that is later used when group resolution is requested with no source information. As of version 1.7, a set of illegal group characters may also be provided for sanity check (raising GroupResolverIllegalCharError when found). """ def __init__(self, default_source=None, illegal_chars=None): """Lazy initialization of a new GroupResolver object.""" self._sources = {} self._default_source = default_source self._initialized = False self.illegal_chars = illegal_chars or set() def _late_init(self): """Override method to initialize object just before it is needed.""" if self._default_source: self._sources[self._default_source.name] = self._default_source self._initialized = True # overriding methods should call super def init(func): @wraps(func) def wrapper(self, *args): if not self._initialized: self._late_init() return func(self, *args) return wrapper @init def set_verbosity(self, value): """Set debugging verbosity value (DEPRECATED: use logging.DEBUG).""" for source in self._sources.values(): source.verbosity = value @init def add_source(self, group_source): """Add a GroupSource to this resolver.""" if group_source.name in self._sources: raise ValueError("GroupSource '%s': name collision" % \ group_source.name) self._sources[group_source.name] = group_source @init def sources(self): """Get the list of all resolver source names. 
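        The default source name, if any, always comes first; for
        example, with a default source "local" and an added source
        "slurm" (names illustrative), this returns ['local', 'slurm'].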
""" srcs = list(self._sources) if srcs and srcs[0] is not self._default_source: srcs.remove(self._default_source.name) srcs.insert(0, self._default_source.name) return srcs @init def _get_default_source_name(self): """Get default source name of resolver.""" if self._default_source is None: return None return self._default_source.name @init def _set_default_source_name(self, sourcename): """Set default source of resolver (by name).""" try: self._default_source = self._sources[sourcename] except KeyError: raise GroupResolverSourceError(sourcename) default_source_name = property(_get_default_source_name, _set_default_source_name) def _list_nodes(self, source, what, *args): """Helper method that returns a list of results (nodes) when the source is defined.""" result = [] assert source raw = getattr(source, 'resolv_%s' % what)(*args) if isinstance(raw, list): raw = ','.join(raw) for line in raw.splitlines(): [result.append(x) for x in line.strip().split()] return result def _list_groups(self, source, what, *args): """Helper method that returns a list of results (groups) when the source is defined.""" result = [] assert source raw = getattr(source, 'resolv_%s' % what)(*args) try: grpiter = raw.splitlines() except AttributeError: grpiter = raw for line in grpiter: for grpstr in line.strip().split(): if self.illegal_chars.intersection(grpstr): errmsg = ' '.join(self.illegal_chars.intersection(grpstr)) raise GroupResolverIllegalCharError(errmsg) result.append(grpstr) return result @init def _source(self, namespace): """Helper method that returns the source by namespace name.""" if not namespace: source = self._default_source else: source = self._sources.get(namespace) if not source: raise GroupResolverSourceError(namespace or "") return source def group_nodes(self, group, namespace=None): """ Find nodes for specified group name and optional namespace. """ source = self._source(namespace) return self._list_nodes(source, 'map', group) def all_nodes(self, namespace=None): """ Find all nodes. You may specify an optional namespace. """ source = self._source(namespace) return self._list_nodes(source, 'all') def grouplist(self, namespace=None): """ Get full group list. You may specify an optional namespace. """ source = self._source(namespace) return self._list_groups(source, 'list') def has_node_groups(self, namespace=None): """ Return whether finding group list for a specified node is supported by the resolver (in optional namespace). """ try: return self._source(namespace).has_reverse except GroupResolverSourceError: return False def node_groups(self, node, namespace=None): """ Find group list for specified node and optional namespace. """ source = self._source(namespace) return self._list_groups(source, 'reverse', node) class GroupResolverConfig(GroupResolver): """ GroupResolver class that is able to automatically setup its GroupSource's from a configuration file. This is the default resolver for NodeSet. """ SECTION_MAIN = 'Main' def __init__(self, filenames, illegal_chars=None): """ Lazy init GroupResolverConfig object from filenames. """ GroupResolver.__init__(self, illegal_chars=illegal_chars) self.filenames = filenames self.config = None def _late_init(self): """ Initialize object when needed. Only the first accessible config filename is loaded. 
""" GroupResolver._late_init(self) # support single or multiple config filenames self.config = ConfigParser() parsed = self.config.read(self.filenames) # check if at least one parsable config file has been found, otherwise # continue with an empty self._sources if parsed: # for proper $CFGDIR selection, take last parsed configfile only self._parse_config(os.path.dirname(parsed[-1])) def _parse_config(self, cfg_dirname): """parse config using relative dir cfg_dirname""" # parse Main.confdir try: if self.config.has_option(self.SECTION_MAIN, 'groupsdir'): opt_confdir = 'groupsdir' else: opt_confdir = 'confdir' # keep track of loaded confdirs loaded_confdirs = set() confdirstr = self.config.get(self.SECTION_MAIN, opt_confdir) for confdir in shlex.split(confdirstr): # substitute $CFGDIR, set to the highest priority clustershell # configuration directory that has been found confdir = Template(confdir).safe_substitute(CFGDIR=cfg_dirname) confdir = os.path.normpath(confdir) if confdir in loaded_confdirs: continue # load each confdir only once loaded_confdirs.add(confdir) if not os.path.isdir(confdir): if not os.path.exists(confdir): continue raise GroupResolverConfigError("Defined confdir %s is not" " a directory" % confdir) # add sources declared in groups.conf.d file parts for groupsfn in sorted(glob.glob('%s/*.conf' % confdir)): grpcfg = ConfigParser() grpcfg.read(groupsfn) # ignore files that cannot be read self._sources_from_cfg(grpcfg, confdir) except (NoSectionError, NoOptionError): pass # parse Main.autodir try: # keep track of loaded autodirs loaded_autodirs = set() autodirstr = self.config.get(self.SECTION_MAIN, 'autodir') for autodir in shlex.split(autodirstr): # substitute $CFGDIR, set to the highest priority clustershell # configuration directory that has been found autodir = Template(autodir).safe_substitute(CFGDIR=cfg_dirname) autodir = os.path.normpath(autodir) if autodir in loaded_autodirs: continue # load each autodir only once loaded_autodirs.add(autodir) if not os.path.isdir(autodir): if not os.path.exists(autodir): continue raise GroupResolverConfigError("Defined autodir %s is not" " a directory" % autodir) # add auto sources declared in groups.d YAML files for autosfn in sorted(glob.glob('%s/*.yaml' % autodir)): try: self._sources_from_yaml(autosfn) except IOError as exc: # same as OSError in Python 3 # in Python 3 only, we could just catch PermissionError if exc.errno in (errno.EACCES, errno.EPERM): # ignore YAML files that we don't have access to LOGGER.debug(exc) continue except (NoSectionError, NoOptionError): pass # add sources declared directly in groups.conf self._sources_from_cfg(self.config, cfg_dirname) # parse Main.default try: def_sourcename = self.config.get('Main', 'default') # warning: default_source_name is a property self.default_source_name = def_sourcename except (NoSectionError, NoOptionError): pass except GroupResolverSourceError: if def_sourcename: # allow empty Main.default fmt = 'Default group source not found: "%s"' raise GroupResolverConfigError(fmt % self.config.get('Main', 'default')) # pick random default source if not provided by config if not self.default_source_name and self._sources: self.default_source_name = list(self._sources)[0] def _sources_from_cfg(self, cfg, cfgdir): """ Instantiate as many UpcallGroupSources needed from cfg object, cfgdir (CWD for callbacks) and cfg filename. 
""" try: for section in cfg.sections(): # Support grouped sections: section1,section2,section3 for srcname in section.split(','): if srcname != self.SECTION_MAIN: # only map is a mandatory upcall map_upcall = cfg.get(section, 'map', raw=True) all_upcall = list_upcall = reverse_upcall = ctime = None if cfg.has_option(section, 'all'): all_upcall = cfg.get(section, 'all', raw=True) if cfg.has_option(section, 'list'): list_upcall = cfg.get(section, 'list', raw=True) if cfg.has_option(section, 'reverse'): reverse_upcall = cfg.get(section, 'reverse', raw=True) if cfg.has_option(section, 'cache_time'): ctime = float(cfg.get(section, 'cache_time', raw=True)) # add new group source self.add_source(UpcallGroupSource(srcname, map_upcall, all_upcall, list_upcall, reverse_upcall, cfgdir, ctime)) except (NoSectionError, NoOptionError, ValueError) as exc: raise GroupResolverConfigError(str(exc)) def _sources_from_yaml(self, filepath): """Load source(s) from YAML file.""" for source in YAMLGroupLoader(filepath): self.add_source(source) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/Propagation.py0000644104717000001440000004007614501416555021617 0ustar00sthiellusers# # Copyright (C) 2010-2016 CEA/DAM # Copyright (C) 2010-2011 Henri Doreau # Copyright (C) 2015-2018 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ ClusterShell Propagation module. Use the topology tree to send commands through gateways and gather results. """ from collections import deque import logging from ClusterShell.Defaults import DEFAULTS from ClusterShell.NodeSet import NodeSet from ClusterShell.Communication import Channel from ClusterShell.Communication import ControlMessage, StdOutMessage from ClusterShell.Communication import StdErrMessage, RetcodeMessage from ClusterShell.Communication import StartMessage, EndMessage from ClusterShell.Communication import RoutedMessageBase, ErrorMessage from ClusterShell.Communication import ConfigurationMessage, TimeoutMessage from ClusterShell.Topology import TopologyError class RouteResolvingError(Exception): """error raised on invalid conditions during routing operations""" class PropagationTreeRouter(object): """performs routes resolving operations within a propagation tree. This object provides a next_hop method, that will look for the best directly connected node to use to forward a message to a remote node. Upon instantiation, the router will parse the topology tree to generate its routing table. 
""" def __init__(self, root, topology, fanout=0): self.root = root self.topology = topology self.fanout = fanout self.nodes_fanin = {} self.table = None self.table_generate(root, topology) self._unreachable_hosts = NodeSet() def table_generate(self, root, topology): """The router relies on a routing table. The keys are the destination nodes and the values are the next hop gateways to use to reach these nodes. """ try: root_group = topology.find_nodegroup(root) except TopologyError: msgfmt = "Invalid root or gateway node: %s" raise RouteResolvingError(msgfmt % root) self.table = [] for group in root_group.children(): dest = NodeSet() stack = [group] while len(stack) > 0: curr = stack.pop() dest.update(curr.children_ns()) stack += curr.children() self.table.append((dest, group.nodeset)) def dispatch(self, dst): """dispatch nodes from a target nodeset to the directly connected gateways. The method acts as an iterator, returning a gateway and the associated hosts. It should provide a rather good load balancing between the gateways. """ ### Disabled to handle all remaining nodes as directly connected nodes ## Check for directly connected targets #res = [tmp & dst for tmp in self.table.values()] #nexthop = NodeSet() #[nexthop.add(x) for x in res] #if len(nexthop) > 0: # yield nexthop, nexthop # Check for remote targets, that require a gateway to be reached for network, _ in self.table: dst_inter = network & dst dst.difference_update(dst_inter) for host in dst_inter.nsiter(): yield self.next_hop(host), host # remaining nodes are considered as directly connected nodes if dst: yield dst, dst def next_hop(self, dst): """perform the next hop resolution. If several hops are available, then, the one with the least number of current jobs will be returned """ if dst in self._unreachable_hosts: raise RouteResolvingError( 'Invalid destination: %s, host is unreachable' % dst) # can't resolve if source == destination if self.root == dst: raise RouteResolvingError( 'Invalid resolution request: %s -> %s' % (self.root, dst)) ## ------------------ # the routing table is organized this way: # # NETWORK | NEXT HOP # ------------+----------- # node[0-9] | gateway0 # node[10-19] | gateway[1-2] # ... # --------- for network, nexthops in self.table: # destination contained in current network if dst in network: res = self._best_next_hop(nexthops) if res is None: raise RouteResolvingError('No route available to %s' % \ str(dst)) self.nodes_fanin[res] += len(dst) return res # destination contained in current next hops (ie. directly # connected) if dst in nexthops: return dst raise RouteResolvingError( 'No route from %s to host %s' % (self.root, dst)) def mark_unreachable(self, dst): """mark node dst as unreachable and don't advertise routes through it anymore. The cache will be updated only when necessary to avoid performing expensive traversals. """ # Simply mark dst as unreachable in a dedicated NodeSet. 
This # list will be consulted by the resolution method self._unreachable_hosts.add(dst) def _best_next_hop(self, candidates): """find out a good next hop gateway""" backup = None backup_connections = 1e400 # infinity candidates = candidates.difference(self._unreachable_hosts) for host in candidates: # the router tracks established connections in the # nodes_fanin table to avoid overloading a gateway connections = self.nodes_fanin.setdefault(host, 0) # FIXME #if connections < self.fanout: # # currently, the first one is the best # return host if backup_connections > connections: backup = host backup_connections = connections return backup class PropagationChannel(Channel): """Admin node propagation logic. Instances are able to handle incoming messages from a directly connected gateway, process them and reply. In order to take decisions, the instance acts as a finite states machine, whose current state evolves according to received data. -- INTERNALS -- Instance can be in one of the 4 different states: - init (implicit) This is the very first state. The instance enters the init state at start() method, and will then send the configuration to the remote node. Once the configuration is sent away, the state changes to cfg. - cfg During this second state, the instance will wait for a valid acknowledgement from the gateway to the previously sent configuration message. If such a message is delivered, the control message (the one that contains the actions to perform) is sent, and the state is set to ctl. - ctl Third state, the instance is waiting for a valid ack for from the gateway to the ctl packet. Then, the state switch to gtr (gather). - gtr Final state: wait for results from the subtree and store them. """ def __init__(self, task, gateway): """ """ Channel.__init__(self, initiator=True) self.task = task self.gateway = gateway self.workers = {} self._cfg_write_hist = deque() # track write requests self._sendq = deque() self._rc = None self.logger = logging.getLogger(__name__) def send_queued(self, ctl): """helper used to send a message, using msg queue if needed""" if self.setup and not self._sendq: # send now if channel is setup and sendq empty self.send(ctl) else: self.logger.debug("send_queued: %d", len(self._sendq)) self._sendq.appendleft(ctl) def send_dequeue(self): """helper used to send one queued message (if any)""" if self._sendq: ctl = self._sendq.pop() self.logger.debug("dequeuing sendq: %s", ctl) self.send(ctl) def start(self): """start propagation channel""" self._init() self._open() # Immediately send CFG cfg = ConfigurationMessage(self.gateway) cfg.data_encode(self.task.topology) self.send(cfg) def recv(self, msg): """process incoming messages""" self.logger.debug("recv: %s", msg) if msg.type == EndMessage.ident: #??#self.ptree.notify_close() self.logger.debug("got EndMessage; closing") # abort worker (now working) self.worker.abort() elif msg.type == StdErrMessage.ident and msg.srcid == 0: # Handle error messages when channel is not established yet # or if messages are non-routed (eg. 
gateway-related) nodeset = NodeSet(msg.nodes) decoded = msg.data_decode() + b'\n' for metaworker in self.workers.values(): for line in decoded.splitlines(): for node in nodeset: metaworker._on_remote_node_msgline(node, line, 'stderr', self.gateway) elif self.setup: self.recv_ctl(msg) elif self.opened: self.recv_cfg(msg) elif msg.type == StartMessage.ident: self.opened = True self.logger.debug('channel started (version %s on remote gateway)', self._xml_reader.version) else: self.logger.error('unexpected message: %s', str(msg)) def shell(self, nodes, command, worker, timeout, stderr, gw_invoke_cmd, remote): """command execution through channel""" self.logger.debug("shell nodes=%s timeout=%s worker=%s remote=%s", nodes, timeout, id(worker), remote) self.workers[id(worker)] = worker ctl = ControlMessage(id(worker)) ctl.action = 'shell' ctl.target = nodes # keep only valid task info pairs info = dict((k, v) for k, v in self.task._info.items() if k not in DEFAULTS._task_info_pkeys_bl) ctl_data = { 'cmd': command, 'invoke_gateway': gw_invoke_cmd, # XXX 'taskinfo': info, 'stderr': stderr, 'timeout': timeout, 'remote': remote, } ctl.data_encode(ctl_data) self.send_queued(ctl) def write(self, nodes, buf, worker): """write buffer through channel to nodes on standard input""" self.logger.debug("write buflen=%d", len(buf)) assert id(worker) in self.workers ctl = ControlMessage(id(worker)) ctl.action = 'write' ctl.target = nodes ctl_data = { 'buf': buf, } ctl.data_encode(ctl_data) self._cfg_write_hist.appendleft((ctl.msgid, nodes, len(buf), worker)) self.send_queued(ctl) def set_write_eof(self, nodes, worker): """send EOF through channel to specified nodes""" self.logger.debug("set_write_eof") assert id(worker) in self.workers ctl = ControlMessage(id(worker)) ctl.action = 'eof' ctl.target = nodes self.send_queued(ctl) def recv_cfg(self, msg): """handle incoming messages for state 'propagate configuration'""" self.logger.debug("recv_cfg") if msg.type == 'ACK': self.logger.debug("CTL - connection with gateway fully established") self.setup = True self.send_dequeue() else: self.logger.debug("_state_config error (msg=%s)", msg) def recv_ctl(self, msg): """handle incoming messages for state 'control'""" if msg.type == 'ACK': self.logger.debug("got ack (%s)", msg.type) # check if ack matches write history msgid to generate ev_written if self._cfg_write_hist and msg.ack == self._cfg_write_hist[-1][0]: _, nodes, bytes_count, metaworker = self._cfg_write_hist.pop() for node in nodes: # we are losing track of the gateway here, we could override # on_written in TreeWorker if needed (eg. for stats) metaworker._on_written(node, bytes_count, 'stdin') self.send_dequeue() elif isinstance(msg, RoutedMessageBase): metaworker = self.workers[msg.srcid] if msg.type == StdOutMessage.ident: nodeset = NodeSet(msg.nodes) # msg.data_decode()'s name is a bit confusing, but returns # pickle-decoded bytes (encoded string) and not string... 
decoded = msg.data_decode() + b'\n' for line in decoded.splitlines(): for node in nodeset: metaworker._on_remote_node_msgline(node, line, 'stdout', self.gateway) elif msg.type == StdErrMessage.ident: nodeset = NodeSet(msg.nodes) decoded = msg.data_decode() + b'\n' for line in decoded.splitlines(): for node in nodeset: metaworker._on_remote_node_msgline(node, line, 'stderr', self.gateway) elif msg.type == RetcodeMessage.ident: rc = msg.retcode for node in NodeSet(msg.nodes): metaworker._on_remote_node_close(node, rc, self.gateway) elif msg.type == TimeoutMessage.ident: self.logger.debug("TimeoutMessage for %s", msg.nodes) for node in NodeSet(msg.nodes): metaworker._on_remote_node_timeout(node, self.gateway) elif msg.type == ErrorMessage.ident: # tree runtime error, could generate a new event later raise TopologyError("%s: %s" % (self.gateway, msg.reason)) else: self.logger.debug("recv_ctl: unhandled msg %s", msg) """ return if self.ptree.upchannel is not None: self.logger.debug("_state_gather ->upchan %s" % msg) self.ptree.upchannel.send(msg) # send to according event handler passed by shell() else: assert False """ def ev_hup(self, worker, node, rc): """Channel command is closing""" self._rc = rc def ev_close(self, worker, timedout): """Channel is closing""" # do not use worker buffer or rc accessors here as we doesn't use # common stream names gateway = str(worker.nodes) self.logger.debug("ev_close gateway=%s %s", gateway, self) self.logger.debug("ev_close rc=%s", self._rc) # may be None # NOTE: self._rc may be None if the communication channel has aborted if self._rc != 0: self.logger.debug("error on gateway %s (setup=%s)", gateway, self.setup) self.task.router.mark_unreachable(gateway) self.logger.debug("gateway %s now set as unreachable", gateway) if not self.setup: # channel was not set up: we can safely repropagate commands for mw in set(self.task.gateways[gateway][1]): mw._relaunch(gateway) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/RangeSet.py0000644104717000001440000015236114501416555021045 0ustar00sthiellusers# # Copyright (C) 2012-2016 CEA/DAM # Copyright (C) 2012-2016 Aurelien Degremont # Copyright (C) 2015-2017 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ Cluster range set module. Instances of RangeSet provide similar operations than the builtin set type, extended to support cluster ranges-like format and stepping support ("0-8/2"). """ from functools import reduce from itertools import product from operator import mul __all__ = ['RangeSetException', 'RangeSetParseError', 'RangeSetPaddingError', 'RangeSet', 'RangeSetND', 'AUTOSTEP_DISABLED'] # Special constant used to force turn off autostep feature. 
# Note: +inf is 1E400, but a bug in python 2.4 makes it impossible to be # pickled, so we use less. Later, we could consider sys.maxint here. AUTOSTEP_DISABLED = 1E100 class RangeSetException(Exception): """Base RangeSet exception class.""" class RangeSetParseError(RangeSetException): """Raised when RangeSet parsing cannot be done properly.""" def __init__(self, part, msg): if part: msg = "%s : \"%s\"" % (msg, part) RangeSetException.__init__(self, msg) # faulty subrange; this allows you to target the error self.part = part class RangeSetPaddingError(RangeSetParseError): """Raised when a fatal padding incoherence occurs""" def __init__(self, part, msg): RangeSetParseError.__init__(self, part, "padding mismatch (%s)" % msg) class RangeSet(set): """ Mutable set of cluster node indexes featuring a fast range-based API. This class aims to ease the management of potentially large cluster range sets and is used by the :class:`.NodeSet` class. RangeSet basic constructors: >>> rset = RangeSet() # empty RangeSet >>> rset = RangeSet("5,10-42") # contains '5', '10' to '42' >>> rset = RangeSet("0-10/2") # contains '0', '2', '4', '6', '8', '10' >>> rset = RangeSet("00-10/2") # contains '00', '02', '04', '06', '08', '10' Also any iterable of integers can be specified as first argument: >>> RangeSet([3, 6, 8, 7, 1]) 1,3,6-8 >>> rset2 = RangeSet(rset) Padding of ranges (eg. "003-009") is inferred from input arguments and managed automatically. This is new in ClusterShell v1.9, where mixed lengths zero padding is now supported within the same RangeSet. The instance variable `padding` has become a property that can still be used to either get the max padding length in the set, or force a fixed length zero-padding on the set. RangeSet is itself a set and as such, provides an iterator over its items as strings (strings are used since v1.9). It is recommended to use the explicit iterators :meth:`RangeSet.intiter` and :meth:`RangeSet.striter` when iterating over a RangeSet. RangeSet provides methods like :meth:`RangeSet.union`, :meth:`RangeSet.intersection`, :meth:`RangeSet.difference`, :meth:`RangeSet.symmetric_difference` and their in-place versions :meth:`RangeSet.update`, :meth:`RangeSet.intersection_update`, :meth:`RangeSet.difference_update`, :meth:`RangeSet.symmetric_difference_update` which conform to the Python Set API. """ _VERSION = 4 # serial version number def __init__(self, pattern=None, autostep=None): """Initialize RangeSet object. 
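Example (illustrative): the autostep threshold controls when the folded
string output may use the x-y/step notation:

>>> str(RangeSet("0-20/4"))              # autostep disabled by default
'0,4,8,12,16,20'
>>> str(RangeSet("0-20/4", autostep=3))  # fold when >= 3 stepped items
'0-20/4'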
:param pattern: optional string pattern :param autostep: optional autostep threshold """ set.__init__(self) if pattern is not None and not isinstance(pattern, str): pattern = ",".join("%s" % i for i in pattern) if isinstance(pattern, RangeSet): self._autostep = pattern._autostep else: self._autostep = None self.autostep = autostep #: autostep threshold public instance attribute if isinstance(pattern, str): self._parse(pattern) def _parse(self, pattern): """Parse string of comma-separated x-y/step -like ranges""" # Comma separated ranges for subrange in pattern.split(','): subrange = subrange.strip() # ignore whitespaces if subrange.find('/') < 0: baserange, step = subrange, 1 else: baserange, step = subrange.split('/', 1) try: step = int(step) except ValueError: raise RangeSetParseError(subrange, "cannot convert string to integer") begin_sign = end_sign = 1 # sign "scale factor" if baserange.find('-') < 0: if step != 1: raise RangeSetParseError(subrange, "invalid step usage") begin = end = baserange else: # ignore whitespaces in a range try: begin, end = (n.strip() for n in baserange.split('-')) if not begin: # single negative number "-5" begin = end begin_sign = end_sign = -1 except ValueError: try: # -0-3 _, begin, end = (n.strip() for n in baserange.split('-')) begin_sign = -1 except ValueError: # -8--4 _, begin, _, end = (n.strip() for n in baserange.split('-')) begin_sign = end_sign = -1 # compute padding and return node range info tuple try: pad = endpad = 0 if int(begin) != 0: begins = begin.lstrip("0") if len(begin) - len(begins) > 0: pad = len(begin) start = int(begins) else: if len(begin) > 1: pad = len(begin) start = 0 if int(end) != 0: ends = end.lstrip("0") else: ends = end # explicit padding for begin and end must match if len(end) - len(ends) > 0: endpad = len(end) if (pad > 0 or endpad > 0) and len(begin) != len(end): raise RangeSetParseError(subrange, "padding length mismatch") stop = int(ends) except ValueError: if len(subrange) == 0: msg = "empty range" else: msg = "cannot convert string to integer" raise RangeSetParseError(subrange, msg) # check preconditions if pad > 0 and begin_sign < 0: errmsg = "padding not supported in negative ranges" raise RangeSetParseError(subrange, errmsg) if stop > 1e100 or start * begin_sign > stop * end_sign or step < 1: raise RangeSetParseError(subrange, "invalid values in range") self.add_range(start * begin_sign, stop * end_sign + 1, step, pad) @classmethod def fromlist(cls, rnglist, autostep=None): """Class method that returns a new RangeSet with ranges from provided list.""" inst = RangeSet(autostep=autostep) inst.updaten(rnglist) return inst @classmethod def fromone(cls, index, pad=0, autostep=None): """ Class method that returns a new RangeSet of one single item or a single range. Accepted input arguments can be: - integer and padding length - slice object and padding length - string (1.9+) with padding automatically detected (pad is ignored) """ inst = RangeSet(autostep=autostep) # support slice object with duck-typing try: inst.add(index, pad) except TypeError: if not index.stop: raise ValueError("Invalid range upper limit (%s)" % index.stop) inst.add_range(index.start or 0, index.stop, index.step or 1, pad) return inst @property def padding(self): """Get largest padding value of whole set""" result = None for si in self: idx, digitlen = int(si), len(si) # explicitly padded? 
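# Example (illustrative): with mixed-length zero-padding, allowed since
# v1.9, this getter reports the largest padding length found in the set:
#
#   >>> RangeSet("01-03,005").padding
#   3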
if digitlen > 1 and si[0] == '0': # result always grows bigger as we iterate over a sorted set # with largest padded values at the end result = digitlen return result @padding.setter def padding(self, value): """Force padding length on the whole set""" if value is None: value = 1 cpyset = set(self) self.clear() for i in cpyset: self.add(int(i), pad=value) def get_autostep(self): """Get autostep value (property)""" if self._autostep >= AUTOSTEP_DISABLED: return None else: # +1 as user wants node count but it means real steps here return self._autostep + 1 def set_autostep(self, val): """Set autostep value (property)""" if val is None: # disabled by default for compat with other cluster tools self._autostep = AUTOSTEP_DISABLED else: # - 1 because user means node count, but we mean real steps # (this operation has no effect on AUTOSTEP_DISABLED value) self._autostep = int(val) - 1 autostep = property(get_autostep, set_autostep) def dim(self): """Get the number of dimensions of this RangeSet object. Common method with RangeSetND. Here, it will always return 1 unless the object is empty, in that case it will return 0.""" return int(len(self) > 0) def _sorted(self): """Get sorted list from inner set.""" # For mixed padding support, sort by both string length and index return sorted(set.__iter__(self), key=lambda x: (-len(x), int(x)) if x.startswith('-') \ else (len(x), x)) def __iter__(self): """Iterate over each element in RangeSet, currently as integers, with no padding information. To guarantee future compatibility, please use the methods intiter() or striter() instead.""" return iter(self._sorted()) def striter(self): """Iterate over each element in RangeSet as strings with optional zero-padding.""" return iter(self._sorted()) def intiter(self): """Iterate over each element in RangeSet as integer. Zero padding info is ignored.""" for e in self._sorted(): yield int(e) def contiguous(self): """Object-based iterator over contiguous range sets.""" for sli, pad in self._contiguous_slices(): yield RangeSet.fromone(slice(sli.start, sli.stop, sli.step), pad) def __reduce__(self): """Return state information for pickling.""" return self.__class__, (str(self),), \ { 'padding': self.padding, \ '_autostep': self._autostep, \ '_version' : RangeSet._VERSION } def __setstate__(self, dic): """called upon unpickling""" self.__dict__.update(dic) if getattr(self, '_version', 0) < RangeSet._VERSION: # unpickle from old version? 
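# Example (illustrative): thanks to __reduce__/__setstate__, a RangeSet
# survives a pickle round trip with padding and autostep preserved:
#
#   >>> import pickle
#   >>> str(pickle.loads(pickle.dumps(RangeSet("001-005"))))
#   '001-005'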
if getattr(self, '_version', 0) <= 1: # v1 (no object versioning) - CSv1.3 setattr(self, '_ranges', [(slice(start, stop + 1, step), pad) \ for start, stop, step, pad in getattr(self, '_ranges')]) elif hasattr(self, '_ranges'): # v2 - CSv1.4-1.5 self_ranges = getattr(self, '_ranges') if self_ranges and not isinstance(self_ranges[0][0], slice): # workaround for object pickled from Python < 2.5 setattr(self, '_ranges', [(slice(start, stop, step), pad) \ for (start, stop, step), pad in self_ranges]) if hasattr(self, '_ranges'): # convert to v3 for sli, pad in getattr(self, '_ranges'): self.add_range(sli.start, sli.stop, sli.step, pad) delattr(self, '_ranges') delattr(self, '_length') if getattr(self, '_version', 0) == 3: # 1.6 - 1.8 padding = getattr(self, 'padding', 0) # convert integer set to string set cpyset = set(self) self.clear() for i in cpyset: self.add(i, pad=padding) # automatic conversion def _strslices(self): """Stringify slices list (x-y/step format)""" for sli, pad in self._folded_slices(): if sli.start + 1 == sli.stop: yield "%0*d" % (pad, sli.start) else: assert sli.step >= 0, "Internal error: sli.step < 0" if sli.step == 1: yield "%0*d-%0*d" % (pad, sli.start, pad, sli.stop - 1) else: yield "%0*d-%0*d/%d" % (pad, sli.start, pad, sli.stop - 1, \ sli.step) def __str__(self): """Get comma-separated range-based string (x-y/step format).""" return ','.join(self._strslices()) # __repr__ is the same as __str__ as it is a valid expression that # could be used to recreate a RangeSet with the same value __repr__ = __str__ def _slices_padding(self, autostep=AUTOSTEP_DISABLED): """Iterator over (slices, padding). Iterator over RangeSet slices, either a:b:1 slices if autostep is disabled (default), or a:b:step slices if autostep is specified. """ # # Now support mixed lengths zero-padding (v1.9) cur_pad = 0 cur_padded = False cur_start = None cur_step = None last_idx = None for si in self._sorted(): # numerical index and length of digits idx, digitlen = int(si), len(si) # is current digit zero-padded? padded = (digitlen > 1 and si[0] == '0') if cur_start is not None: padding_mismatch = False step_mismatch = False # check conditions to yield # - padding mismatch # - step check (step=1 is just a special case if contiguous) if cur_padded: # currently strictly padded, our next item could be # unpadded but with the same length if digitlen != cur_pad: padding_mismatch = True else: # current not padded, and because the set is sorted, # it should stay that way if padded: padding_mismatch = True if not padding_mismatch: # does current range lead to broken step? 
if cur_step is not None: # only consider it if step is defined if cur_step != idx - last_idx: step_mismatch = True if padding_mismatch or step_mismatch: if cur_step is not None: # stepped is True when autostep setting does apply stepped = (cur_step == 1) or (last_idx - cur_start >= autostep * cur_step) step = cur_step else: stepped = True step = 1 if stepped: yield slice(cur_start, last_idx + 1, step), cur_pad if cur_padded else 0 cur_start = idx cur_padded = padded cur_pad = digitlen else: if padding_mismatch: stop = last_idx + 1 else: stop = last_idx - step + 1 for j in range(cur_start, stop, step): yield slice(j, j + 1, 1), cur_pad if cur_padded else 0 if padding_mismatch: cur_start = idx cur_padded = padded cur_pad = digitlen else: cur_start = last_idx cur_step = idx - last_idx if step_mismatch else None last_idx = idx continue else: # first index cur_padded = padded cur_pad = digitlen cur_start = idx cur_step = None last_idx = idx continue cur_step = idx - last_idx last_idx = idx if cur_start is not None: if cur_step is not None: # stepped is True when autostep setting does apply stepped = (last_idx - cur_start >= self._autostep * cur_step) else: stepped = True if stepped or cur_step == 1: yield slice(cur_start, last_idx + 1, cur_step), cur_pad if cur_padded else 0 else: for j in range(cur_start, last_idx + 1, cur_step): yield slice(j, j + 1, 1), cur_pad if cur_padded else 0 def _contiguous_slices(self): """Internal iterator over contiguous slices in RangeSet.""" return self._slices_padding() def _folded_slices(self): """Internal generator over ranges organized by step.""" return self._slices_padding(self._autostep) def slices(self): """ Iterate over RangeSet ranges as Python slice objects. NOTE: zero-padding info is not provided """ for sli, pad in self._folded_slices(): yield sli def __getitem__(self, index): """ Return the element at index or a subrange when a slice is specified. """ if isinstance(index, slice): inst = RangeSet() inst._autostep = self._autostep inst.update(self._sorted()[index]) return inst elif isinstance(index, int): return self._sorted()[index] else: raise TypeError("%s indices must be integers" % self.__class__.__name__) def split(self, nbr): """ Split the rangeset into nbr sub-rangesets (at most). Each sub-rangeset will have the same number of elements, more or less one. Current rangeset remains unmodified. Returns an iterator. >>> RangeSet("1-5").split(3) RangeSet("1-2") RangeSet("3-4") RangeSet("5") """ assert(nbr > 0) # We put the same number of elements in each sub-rangeset. slice_size = len(self) // int(nbr) left = len(self) % nbr begin = 0 for i in range(0, min(nbr, len(self))): length = slice_size + int(i < left) yield self[begin:begin + length] begin += length def add_range(self, start, stop, step=1, pad=0): """ Add a range (start, stop, step and padding length) to RangeSet. Like the Python built-in function *range()*, the last element is the largest start + i * step less than stop. """ assert start < stop, "please provide ordered node index ranges" assert step > 0 assert pad >= 0 assert stop - start < 1e9, "range too large" if pad == 0: set.update(self, ("%d" % i for i in range(start, stop, step))) else: set.update(self, ("%0*d" % (pad, i) for i in range(start, stop, step))) def copy(self): """Return a shallow copy of a RangeSet.""" cpy = self.__class__() cpy._autostep = self._autostep cpy.update(self) return cpy __copy__ = copy # For the copy module def __eq__(self, other): """ RangeSet equality comparison.
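Example (illustrative):

>>> RangeSet("1-3") == RangeSet("3,1-2")
True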
""" # Return NotImplemented instead of raising TypeError, to # indicate that the comparison is not implemented with respect # to the other type (the other comparand then gets a chance to # determine the result, then it falls back to object address # comparison). if not isinstance(other, RangeSet): return NotImplemented return len(self) == len(other) and self.issubset(other) # Standard set operations: union, intersection, both differences. # Each has an operator version (e.g. __or__, invoked with |) and a # method version (e.g. union). # Subtle: Each pair requires distinct code so that the outcome is # correct when the type of other isn't suitable. For example, if # we did "union = __or__" instead, then Set().union(3) would return # NotImplemented instead of raising TypeError (albeit that *why* it # raises TypeError as-is is also a bit subtle). def __or__(self, other): """Return the union of two RangeSets as a new RangeSet. (I.e. all elements that are in either set.) """ if not isinstance(other, set): return NotImplemented return self.union(other) def union(self, other): """Return the union of two RangeSets as a new RangeSet. (I.e. all elements that are in either set.) """ self_copy = self.copy() self_copy.update(other) return self_copy def __and__(self, other): """Return the intersection of two RangeSets as a new RangeSet. (I.e. all elements that are in both sets.) """ if not isinstance(other, set): return NotImplemented return self.intersection(other) def intersection(self, other): """Return the intersection of two RangeSets as a new RangeSet. (I.e. all elements that are in both sets.) """ self_copy = self.copy() self_copy.intersection_update(other) return self_copy def __xor__(self, other): """Return the symmetric difference of two RangeSets as a new RangeSet. (I.e. all elements that are in exactly one of the sets.) """ if not isinstance(other, set): return NotImplemented return self.symmetric_difference(other) def symmetric_difference(self, other): """Return the symmetric difference of two RangeSets as a new RangeSet. (ie. all elements that are in exactly one of the sets.) """ self_copy = self.copy() self_copy.symmetric_difference_update(other) return self_copy def __sub__(self, other): """Return the difference of two RangeSets as a new RangeSet. (I.e. all elements that are in this set and not in the other.) """ if not isinstance(other, set): return NotImplemented return self.difference(other) def difference(self, other): """Return the difference of two RangeSets as a new RangeSet. (I.e. all elements that are in this set and not in the other.) """ self_copy = self.copy() self_copy.difference_update(other) return self_copy # Membership test def __contains__(self, element): """Report whether an element is a member of a RangeSet. Element can be either another RangeSet object, a string or an integer. Called in response to the expression ``element in self``. """ if isinstance(element, set): return element.issubset(self) return set.__contains__(self, str(element)) # Subset and superset test def issubset(self, other): """Report whether another set contains this RangeSet.""" self._binary_sanity_check(other) return set.issubset(self, other) def issuperset(self, other): """Report whether this RangeSet contains another set.""" self._binary_sanity_check(other) return set.issuperset(self, other) # Inequality comparisons using the is-subset relation. 
__le__ = issubset __ge__ = issuperset def __lt__(self, other): self._binary_sanity_check(other) return len(self) < len(other) and self.issubset(other) def __gt__(self, other): self._binary_sanity_check(other) return len(self) > len(other) and self.issuperset(other) # Assorted helpers def _binary_sanity_check(self, other): """Check that the other argument to a binary operation is also a set, raising a TypeError otherwise.""" if not isinstance(other, set): raise TypeError("Binary operation only permitted between sets") # In-place union, intersection, differences. # Subtle: The xyz_update() functions deliberately return None, # as do all mutating operations on built-in container types. # The __xyz__ spellings have to return self, though. def __ior__(self, other): """Update a RangeSet with the union of itself and another.""" self._binary_sanity_check(other) set.__ior__(self, other) return self def union_update(self, other): """Update a RangeSet with the union of itself and another.""" self.update(other) def __iand__(self, other): """Update a RangeSet with the intersection of itself and another.""" self._binary_sanity_check(other) set.__iand__(self, other) return self def intersection_update(self, other): """Update a RangeSet with the intersection of itself and another.""" set.intersection_update(self, other) def __ixor__(self, other): """Update a RangeSet with the symmetric difference of itself and another.""" self._binary_sanity_check(other) set.symmetric_difference_update(self, other) return self def symmetric_difference_update(self, other): """Update a RangeSet with the symmetric difference of itself and another.""" set.symmetric_difference_update(self, other) def __isub__(self, other): """Remove all elements of another set from this RangeSet.""" self._binary_sanity_check(other) set.difference_update(self, other) return self def difference_update(self, other, strict=False): """Remove all elements of another set from this RangeSet. If strict is True, raise KeyError if an element cannot be removed. (strict is a RangeSet addition)""" if strict and other not in self: raise KeyError(set.difference(other, self).pop()) set.difference_update(self, other) # Python dict-like mass mutations: update, clear def update(self, iterable): """Add all indexes (as strings) from an iterable (such as a list).""" assert not isinstance(iterable, str) set.update(self, iterable) def updaten(self, rangesets): """ Update a rangeset with the union of itself and several others. """ for rng in rangesets: if isinstance(rng, set): self.update(str(i) for i in rng) # 1.9+: force cast to str else: self.update(RangeSet(rng)) def clear(self): """Remove all elements from this RangeSet.""" set.clear(self) # Single-element mutations: add, remove, discard def add(self, element, pad=0): """Add an element to a RangeSet. This has no effect if the element is already present. ClusterShell 1.9+ uses strings instead of integers to better manage zero-padded ranges with mixed lengths. This method supports either a string or an integer with padding info. :param element: the element to add (integer or string) :param pad: zero padding length (integer); ignored if element is string """ if isinstance(element, str): set.add(self, element) else: set.add(self, "%0*d" % (pad, int(element))) def remove(self, element, pad=0): """Remove an element from a RangeSet. ClusterShell 1.9+ uses strings instead of integers to better manage zero-padded ranges with mixed lengths. This method supports either a string or an integer with padding info. 
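Example (illustrative):

>>> rset = RangeSet("001-003")
>>> rset.remove(2, pad=3)
>>> str(rset)
'001,003'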
:param element: the element to remove (integer or string) :param pad: zero padding length (integer); ignored if element is string :raises KeyError: element is not contained in RangeSet :raises ValueError: element is not castable to integer """ if isinstance(element, str): set.remove(self, element) else: set.remove(self, "%0*d" % (pad, int(element))) def discard(self, element, pad=0): """Discard an element from a RangeSet if it is a member. If the element is not a member, do nothing. ClusterShell 1.9+ uses strings instead of integers to better manage zero-padded ranges with mixed lengths. This method supports either a string or an integer with padding info. :param element: the element to remove (integer or string) :param pad: zero padding length (integer); ignored if element is string """ try: if isinstance(element, str): set.discard(self, element) else: set.discard(self, "%0*d" % (pad, int(element))) except ValueError: pass # ignore other object types class RangeSetND(object): """ Build a N-dimensional RangeSet object. .. warning:: You don't usually need to use this class directly, use :class:`.NodeSet` instead that has ND support. Empty constructor:: RangeSetND() Build from a list of list of :class:`RangeSet` objects:: RangeSetND([[rs1, rs2, rs3, ...], ...]) Strings are also supported:: RangeSetND([["0-3", "4-10", ...], ...]) Integers are also supported:: RangeSetND([(0, 4), (0, 5), (1, 4), (1, 5), ...] """ def __init__(self, args=None, pads=None, autostep=None, copy_rangeset=True): """RangeSetND initializer All parameters are optional. :param args: generic "list of list" input argument (default is None) :param pads: list of 0-padding length (default is to not pad any dimensions) :param autostep: autostep threshold (use range/step notation if more than #autostep items meet the condition) - default is off (None) :param copy_rangeset: (advanced) if set to False, do not copy RangeSet objects from args (transfer ownership), which is faster. In that case, you should not modify these objects afterwards (default is True). 
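Example (illustrative): a 2-dimensional RangeSetND built from strings:

>>> rsnd = RangeSetND([["0-1", "2-3"]])
>>> rsnd.dim()
2
>>> len(rsnd)
4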
""" # RangeSetND are arranged as a list of N-dimensional RangeSet vectors self._veclist = [] # Dirty flag to avoid doing veclist folding too often self._dirty = True # Initialize autostep through property self._autostep = None self.autostep = autostep #: autostep threshold public instance attribute # Hint on whether several dimensions are varying or not self._multivar_hint = False if args is None: return for rgvec in args: if rgvec: if isinstance(rgvec[0], str): self._veclist.append([RangeSet(rg, autostep=autostep) \ for rg in rgvec]) elif isinstance(rgvec[0], RangeSet): if copy_rangeset: self._veclist.append([rg.copy() for rg in rgvec]) else: self._veclist.append(rgvec) else: if pads is None: self._veclist.append( \ [RangeSet.fromone(rg, autostep=autostep) \ for rg in rgvec]) else: self._veclist.append( \ [RangeSet.fromone(rg, pad, autostep) \ for rg, pad in zip(rgvec, pads)]) class precond_fold(object): """Decorator to ease internal folding management""" def __call__(self, func): def inner(*args, **kwargs): rgnd, fargs = args[0], args[1:] if rgnd._dirty: rgnd._fold() return func(rgnd, *fargs, **kwargs) # modify the decorator meta-data for pydoc # Note: should be later replaced by @wraps (functools) # as of Python 2.5 inner.__name__ = func.__name__ inner.__doc__ = func.__doc__ inner.__dict__ = func.__dict__ inner.__module__ = func.__module__ return inner @precond_fold() def copy(self): """Return a new, mutable shallow copy of a RangeSetND.""" cpy = self.__class__() # Shallow "to the extent possible" says the copy module, so here that # means calling copy() on each sub-RangeSet to keep mutability. cpy._veclist = [[rg.copy() for rg in rgvec] for rgvec in self._veclist] cpy._dirty = self._dirty return cpy __copy__ = copy # For the copy module def __eq__(self, other): """RangeSetND equality comparison.""" # Return NotImplemented instead of raising TypeError, to # indicate that the comparison is not implemented with respect # to the other type (the other comparand then gets a change to # determine the result, then it falls back to object address # comparison). if not isinstance(other, RangeSetND): return NotImplemented return len(self) == len(other) and self.issubset(other) def __bool__(self): return bool(self._veclist) __nonzero__ = __bool__ # Python 2 compat def __len__(self): """Count unique elements in N-dimensional rangeset.""" return sum([reduce(mul, [len(rg) for rg in rgvec]) \ for rgvec in self.veclist]) @precond_fold() def __str__(self): """String representation of N-dimensional RangeSet.""" result = "" for rgvec in self._veclist: result += "; ".join([str(rg) for rg in rgvec]) result += "\n" return result @precond_fold() def __iter__(self): return self._iter() def _iter(self): """Iterate through individual items as tuples.""" for vec in self._veclist: for ivec in product(*vec): yield ivec @precond_fold() def iter_padding(self): """Iterate through individual items as tuples with padding info. 
As of v1.9, this method returns the largest padding value of each items, as mixed length padding is allowed.""" for vec in self._veclist: for ivec in product(*vec): yield ivec, [rg.padding for rg in vec] @precond_fold() def _get_veclist(self): """Get folded veclist""" return self._veclist def _set_veclist(self, val): """Set veclist and set dirty flag for deferred folding.""" self._veclist = val self._dirty = True veclist = property(_get_veclist, _set_veclist) def vectors(self): """Get underlying :class:`RangeSet` vectors""" return iter(self.veclist) def dim(self): """Get the current number of dimensions of this RangeSetND object. Return 0 when object is empty.""" try: return len(self._veclist[0]) except IndexError: return 0 def pads(self): """Get a tuple of padding length info for each dimension.""" # return a tuple of max padding length for each axis pad_veclist = ((rg.padding or 0 for rg in vec) for vec in self._veclist) return tuple(max(pads) for pads in zip(*pad_veclist)) def get_autostep(self): """Get autostep value (property)""" if self._autostep >= AUTOSTEP_DISABLED: return None else: # +1 as user wants node count but _autostep means real steps here return self._autostep + 1 def set_autostep(self, val): """Set autostep value (property)""" # Must conform to RangeSet.autostep logic if val is None: self._autostep = AUTOSTEP_DISABLED else: # Like in RangeSet.set_autostep(): -1 because user means node count, # but we mean real steps (this operation has no effect on # AUTOSTEP_DISABLED value) self._autostep = int(val) - 1 # Update our RangeSet objects for rgvec in self._veclist: for rg in rgvec: rg._autostep = self._autostep autostep = property(get_autostep, set_autostep) @precond_fold() def __getitem__(self, index): """ Return the element at index or a subrange when a slice is specified. """ if isinstance(index, slice): iveclist = [] for rgvec in self._veclist: iveclist += product(*rgvec) assert(len(iveclist) == len(self)) rnd = RangeSetND(iveclist[index], autostep=self.autostep) return rnd elif isinstance(index, int): # find a tuple of integer (multi-dimensional) at position index if index < 0: length = len(self) if index >= -length: index = length + index else: raise IndexError("%d out of range" % index) length = 0 for rgvec in self._veclist: cnt = reduce(mul, [len(rg) for rg in rgvec]) if length + cnt < index: length += cnt else: for ivec in product(*rgvec): if index == length: return ivec length += 1 raise IndexError("%d out of range" % index) else: raise TypeError("%s indices must be integers" % self.__class__.__name__) @precond_fold() def contiguous(self): """Object-based iterator over contiguous range sets.""" veclist = self._veclist try: dim = len(veclist[0]) except IndexError: return for dimidx in range(dim): new_veclist = [] for rgvec in veclist: for rgsli in rgvec[dimidx].contiguous(): rgvec = list(rgvec) rgvec[dimidx] = rgsli new_veclist.append(rgvec) veclist = new_veclist for rgvec in veclist: yield RangeSetND([rgvec]) # Membership test @precond_fold() def __contains__(self, element): """Report whether an element is a member of a RangeSetND. Element can be either another RangeSetND object, a string or an integer. Called in response to the expression ``element in self``. 
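Example (illustrative, single dimension for simplicity):

>>> 5 in RangeSetND([["1-9"]])
True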
""" if isinstance(element, RangeSetND): rgnd_element = element else: rgnd_element = RangeSetND([[str(element)]]) return rgnd_element.issubset(self) # Subset and superset test def issubset(self, other): """Report whether another set contains this RangeSetND.""" self._binary_sanity_check(other) return other.issuperset(self) @precond_fold() def issuperset(self, other): """Report whether this RangeSetND contains another RangeSetND.""" self._binary_sanity_check(other) if self.dim() == 1 and other.dim() == 1: return self._veclist[0][0].issuperset(other._veclist[0][0]) if not other._veclist: return True test = other.copy() test.difference_update(self) return not bool(test) # Inequality comparisons using the is-subset relation. __le__ = issubset __ge__ = issuperset def __lt__(self, other): self._binary_sanity_check(other) return len(self) < len(other) and self.issubset(other) def __gt__(self, other): self._binary_sanity_check(other) return len(self) > len(other) and self.issuperset(other) # Assorted helpers def _binary_sanity_check(self, other): """Check that the other argument to a binary operation is also a RangeSetND, raising a TypeError otherwise.""" if not isinstance(other, RangeSetND): msg = "Binary operation only permitted between RangeSetND" raise TypeError(msg) def _sort(self): """N-dimensional sorting.""" def rgveckeyfunc(rgvec): # key used for sorting purposes, based on the following # conditions: # (1) larger vector first (#elements) # (2) larger dim first (#elements) # (3) lower first index first # (4) lower last index first return (-reduce(mul, [len(rg) for rg in rgvec]), \ tuple((-len(rg), rg[0], rg[-1]) for rg in rgvec)) self._veclist.sort(key=rgveckeyfunc) @precond_fold() def fold(self): """Explicit folding call. Please note that folding of RangeSetND nD vectors are automatically managed, so you should not have to call this method. It may be still useful in some extreme cases where the RangeSetND is heavily modified.""" pass def _fold(self): """In-place N-dimensional folding.""" assert self._dirty if len(self._veclist) > 1: self._fold_univariate() or self._fold_multivariate() else: self._dirty = False def _fold_univariate(self): """Univariate nD folding. Return True on success and False when a multivariate folding is required.""" dim = self.dim() vardim = dimdiff = 0 if dim > 1: # We got more than one dimension, see if only one is changing... for i in range(dim): # Are all rangesets on this dimension the same? slist = [vec[i] for vec in self._veclist] if slist.count(slist[0]) != len(slist): dimdiff += 1 if dimdiff > 1: break vardim = i univar = (dim == 1 or dimdiff == 1) if univar: # Eligible for univariate folding (faster!) for vec in self._veclist[1:]: self._veclist[0][vardim].update(vec[vardim]) del self._veclist[1:] self._dirty = False self._multivar_hint = not univar return univar def _fold_multivariate(self): """Multivariate nD folding""" # PHASE 1: expand with respect to uniqueness self._fold_multivariate_expand() # PHASE 2: merge self._fold_multivariate_merge() self._dirty = False def _fold_multivariate_expand(self): """Multivariate nD folding: expand [phase 1]""" self._veclist = [[RangeSet.fromone(i, autostep=self.autostep) for i in tvec] for tvec in set(self._iter())] def _fold_multivariate_merge(self): """Multivariate nD folding: merge [phase 2]""" full = False # try easy O(n) passes first chg = True # new pass (eg. 
after change on veclist) while chg: chg = False self._sort() # sort veclist before new pass index1, index2 = 0, 1 while (index1 + 1) < len(self._veclist): # use 2 references on iterator to compare items by couples item1 = self._veclist[index1] index2 = index1 + 1 index1 += 1 while index2 < len(self._veclist): item2 = self._veclist[index2] index2 += 1 new_item = [None] * len(item1) nb_diff = 0 # compare 2 rangeset vector, item by item, the idea being # to merge vectors if they differ only by one item for pos, (rg1, rg2) in enumerate(zip(item1, item2)): if rg1 == rg2: new_item[pos] = rg1 elif not rg1 & rg2: # merge on disjoint ranges nb_diff += 1 if nb_diff > 1: break new_item[pos] = rg1 | rg2 # if fully contained, keep the largest one elif (rg1 > rg2 or rg1 < rg2): # and nb_diff == 0: nb_diff += 1 if nb_diff > 1: break new_item[pos] = max(rg1, rg2) # otherwise, compute rangeset intersection and # keep the two disjoint part to be handled # later... else: # intersection but do nothing nb_diff = 2 break # one change has been done: use this new item to compare # with other if nb_diff <= 1: chg = True item1 = self._veclist[index1 - 1] = new_item index2 -= 1 self._veclist.pop(index2) elif not full: # easy pass so break to avoid scanning all # index2; advance with next index1 for now break if not chg and not full: # if no change was done during the last normal pass, we do a # full O(n^2) pass. This pass is done only at the end in the # hope that most vectors have already been merged by easy # O(n) passes. chg = full = True def __or__(self, other): """Return the union of two RangeSetNDs as a new RangeSetND. (I.e. all elements that are in either set.) """ if not isinstance(other, RangeSetND): return NotImplemented return self.union(other) def union(self, other): """Return the union of two RangeSetNDs as a new RangeSetND. (I.e. all elements that are in either set.) """ rgnd_copy = self.copy() rgnd_copy.update(other) return rgnd_copy def update(self, other): """Add all RangeSetND elements to this RangeSetND.""" if isinstance(other, RangeSetND): iterable = other._veclist else: iterable = other for vec in iterable: # copy rangesets and set custom autostep assert isinstance(vec[0], RangeSet) cpyvec = [] for rg in vec: cpyrg = rg.copy() cpyrg.autostep = self.autostep cpyvec.append(cpyrg) self._veclist.append(cpyvec) self._dirty = True if not self._multivar_hint: self._fold_univariate() union_update = update def __ior__(self, other): """Update a RangeSetND with the union of itself and another.""" self._binary_sanity_check(other) self.update(other) return self def __isub__(self, other): """Remove all elements of another set from this RangeSetND.""" self._binary_sanity_check(other) self.difference_update(other) return self def difference_update(self, other, strict=False): """Remove all elements of another set from this RangeSetND. 
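Example (illustrative):

>>> rsnd = RangeSetND([["1-9"]])
>>> rsnd.difference_update(RangeSetND([["4-6"]]))
>>> len(rsnd)  # "1-3" and "7-9" remain
6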
If strict is True, raise KeyError if an element cannot be removed (strict is a RangeSet addition)""" if strict and not other in self: raise KeyError(other.difference(self)[0]) ergvx = other._veclist # read only rgnd_new = [] index1 = 0 while index1 < len(self._veclist): rgvec1 = self._veclist[index1] procvx1 = [ rgvec1 ] nextvx1 = [] index2 = 0 while index2 < len(ergvx): rgvec2 = ergvx[index2] while len(procvx1) > 0: # refine diff for each resulting vector rgproc1 = procvx1.pop(0) tmpvx = [] for pos, (rg1, rg2) in enumerate(zip(rgproc1, rgvec2)): if rg1 == rg2 or rg1 < rg2: # issubset pass elif rg1 & rg2: # intersect tmpvec = list(rgproc1) tmpvec[pos] = rg1.difference(rg2) tmpvx.append(tmpvec) else: # disjoint tmpvx = [ rgproc1 ] # reset previous work break if tmpvx: nextvx1 += tmpvx if nextvx1: procvx1 = nextvx1 nextvx1 = [] index2 += 1 if procvx1: rgnd_new += procvx1 index1 += 1 self.veclist = rgnd_new def __sub__(self, other): """Return the difference of two RangeSetNDs as a new RangeSetND. (I.e. all elements that are in this set and not in the other.) """ if not isinstance(other, RangeSetND): return NotImplemented return self.difference(other) def difference(self, other): """ ``s.difference(t)`` returns a new object with elements in s but not in t. """ self_copy = self.copy() self_copy.difference_update(other) return self_copy def intersection(self, other): """ ``s.intersection(t)`` returns a new object with elements common to s and t. """ self_copy = self.copy() self_copy.intersection_update(other) return self_copy def __and__(self, other): """ Implements the & operator. So ``s & t`` returns a new object with elements common to s and t. """ if not isinstance(other, RangeSetND): return NotImplemented return self.intersection(other) def intersection_update(self, other): """ ``s.intersection_update(t)`` returns nodeset s keeping only elements also found in t. """ if other is self: return tmp_rnd = RangeSetND() empty_rset = RangeSet() for rgvec in self._veclist: for ergvec in other._veclist: irgvec = [rg.intersection(erg) \ for rg, erg in zip(rgvec, ergvec)] if not empty_rset in irgvec: tmp_rnd.update([irgvec]) # substitute self.veclist = tmp_rnd.veclist def __iand__(self, other): """ Implements the &= operator. So ``s &= t`` returns object s keeping only elements also found in t (Python 2.5+ required). """ self._binary_sanity_check(other) self.intersection_update(other) return self def symmetric_difference(self, other): """ ``s.symmetric_difference(t)`` returns the symmetric difference of two objects as a new RangeSetND. (ie. all items that are in exactly one of the RangeSetND.) """ self_copy = self.copy() self_copy.symmetric_difference_update(other) return self_copy def __xor__(self, other): """ Implement the ^ operator. So ``s ^ t`` returns a new RangeSetND with nodes that are in exactly one of the RangeSetND. """ if not isinstance(other, RangeSetND): return NotImplemented return self.symmetric_difference(other) def symmetric_difference_update(self, other): """ ``s.symmetric_difference_update(t)`` returns RangeSetND s keeping all nodes that are in exactly one of the objects. """ diff2 = other.difference(self) self.difference_update(other) self.update(diff2) def __ixor__(self, other): """ Implement the ^= operator. So ``s ^= t`` returns object s after keeping all items that are in exactly one of the RangeSetND (Python 2.5+ required). 
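Example (illustrative):

>>> rsnd = RangeSetND([["1-4"]])
>>> rsnd ^= RangeSetND([["3-6"]])
>>> len(rsnd)  # 1-2 kept from the left side, 5-6 from the right
4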
""" self._binary_sanity_check(other) self.symmetric_difference_update(other) return self ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/lib/ClusterShell/Task.py0000644104717000001440000015377314505632065020247 0ustar00sthiellusers# # Copyright (C) 2007-2016 CEA/DAM # Copyright (C) 2015-2017 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ ClusterShell Task module. Simple example of use: >>> from ClusterShell.Task import task_self, NodeSet >>> >>> # get task associated with calling thread ... task = task_self() >>> >>> # add a command to execute on distant nodes ... task.shell("/bin/uname -r", nodes="tiger[1-30,35]") >>> >>> # run task in calling thread ... task.run() >>> >>> # get results ... for output, nodelist in task.iter_buffers(): ... print '%s: %s' % (NodeSet.fromlist(nodelist), output) ... """ from __future__ import print_function import logging from operator import itemgetter import os import socket import sys import threading from time import sleep import traceback try: basestring except NameError: # Python 3 compat basestring = str from ClusterShell.Defaults import config_paths, DEFAULTS from ClusterShell.Defaults import _local_workerclass, _distant_workerclass, _load_workerclass from ClusterShell.Engine.Engine import EngineAbortException from ClusterShell.Engine.Engine import EngineTimeoutException from ClusterShell.Engine.Engine import EngineAlreadyRunningError from ClusterShell.Engine.Engine import EngineTimer from ClusterShell.Engine.Factory import PreferredEngine from ClusterShell.Worker.EngineClient import EnginePort, EngineClientError from ClusterShell.Worker.Popen import WorkerPopen from ClusterShell.Worker.Tree import TreeWorker from ClusterShell.Worker.Worker import FANOUT_UNLIMITED from ClusterShell.Event import EventHandler from ClusterShell.MsgTree import MsgTree from ClusterShell.NodeSet import NodeSet from ClusterShell.Topology import TopologyParser, TopologyError from ClusterShell.Propagation import PropagationTreeRouter, PropagationChannel class TaskException(Exception): """Base task exception.""" class TaskError(TaskException): """Base task error exception.""" class TimeoutError(TaskError): """Raised when the task timed out.""" class AlreadyRunningError(TaskError): """Raised when trying to resume an already running task.""" class TaskMsgTreeError(TaskError): """Raised when trying to access disabled MsgTree.""" def _getshorthostname(): """Get short hostname (host name cut at the first dot)""" return socket.gethostname().split('.')[0] class Task(object): """ The Task class defines an essential ClusterShell object which aims to execute commands in parallel and easily get their results. More precisely, a Task object manages a coordinated (ie. 
with respect of its current parameters) collection of independent parallel Worker objects. See ClusterShell.Worker.Worker for further details on ClusterShell Workers. Always bound to a specific thread, a Task object acts like a "thread singleton". So most of the time, and even more for single-threaded applications, you can get the current task object with the following top-level Task module function: >>> task = task_self() However, if you want to create a task in a new thread, use: >>> task = Task() To create or get the instance of the task associated with the thread object thr (threading.Thread): >>> task = Task(thread=thr) To submit a command to execute locally within task, use: >>> task.shell("/bin/hostname") To submit a command to execute to some distant nodes in parallel, use: >>> task.shell("/bin/hostname", nodes="tiger[1-20]") The previous examples submit commands to execute but do not allow result interaction during their execution. For your program to interact during command execution, it has to define event handlers that will listen for local or remote events. These handlers are based on the EventHandler class, defined in ClusterShell.Event. The following example shows how to submit a command on a cluster with a registered event handler: >>> task.shell("uname -r", nodes="node[1-9]", handler=MyEventHandler()) Run task in its associated thread (will block only if the calling thread is the task associated thread): >>> task.resume() or: >>> task.run() You can also pass arguments to task.run() to schedule a command exactly like in task.shell(), and run it: >>> task.run("hostname", nodes="tiger[1-20]", handler=MyEventHandler()) A common need is to set a maximum delay for command execution, especially when the command time is not known. Doing this with ClusterShell Task is very straightforward. To limit the execution time on each node, use the timeout parameter of shell() or run() methods to set a delay in seconds, like: >>> task.run("check_network.sh", nodes="tiger[1-20]", timeout=30) You can then either use Task's iter_keys_timeout() method after execution to see on what nodes the command has timed out, or listen for ev_close() events in your event handler and check the timedout boolean. To get command result, you can either use Task's iter_buffers() method for standard output, iter_errors() for standard error after command execution (common output contents are automatically gathered), or you can listen for ev_read() events in your event handler and get live command output. To get command return codes, you can either use Task's iter_retcodes(), node_retcode() and max_retcode() methods after command execution, or listen for ev_hup() events in your event handler. """ # topology.conf file path list TOPOLOGY_CONFIGS = config_paths('topology.conf') _tasks = {} _taskid_max = 0 _task_lock = threading.Lock() class _SyncMsgHandler(EventHandler): """Special task control port event handler. 
When a message is received on the port, call appropriate task method.""" def __init__(self, task): EventHandler.__init__(self) self.task = task def ev_msg(self, port, msg): """Message received: call appropriate task method.""" # pull out function and its arguments from message func, (args, kwargs) = msg[0], msg[1:] # call task method func(self.task, *args, **kwargs) class tasksyncmethod(object): """Class encapsulating a function that checks if the calling task is running or is the current task, and allowing it to be used as a decorator making the wrapped task method thread-safe.""" def __call__(self, f): def taskfunc(*args, **kwargs): # pull out the class instance task, fargs = args[0], args[1:] # check if the calling task is the current thread task if task._is_task_self(): return f(task, *fargs, **kwargs) elif task._dispatch_port: # no, safely call the task method by message # through the task special dispatch port task._dispatch_port.msg_send((f, fargs, kwargs)) else: task.info("print_debug")(task, "%s: dropped call: %s" % \ (task, str(fargs))) # modify the decorator meta-data for pydoc # Note: should be later replaced by @wraps (functools) # as of Python 2.5 taskfunc.__name__ = f.__name__ taskfunc.__doc__ = f.__doc__ taskfunc.__dict__ = f.__dict__ taskfunc.__module__ = f.__module__ return taskfunc class _SuspendCondition(object): """Special class to manage task suspend condition.""" def __init__(self, lock=threading.RLock(), initial=0): self._cond = threading.Condition(lock) self.suspend_count = initial def atomic_inc(self): """Increase suspend count.""" self._cond.acquire() self.suspend_count += 1 self._cond.release() def atomic_dec(self): """Decrease suspend count.""" self._cond.acquire() self.suspend_count -= 1 self._cond.release() def wait_check(self, release_lock=None): """Wait for condition if needed.""" self._cond.acquire() try: if self.suspend_count > 0: if release_lock: release_lock.release() self._cond.wait() finally: self._cond.release() def notify_all(self): """Signal all threads waiting for condition.""" self._cond.acquire() try: self.suspend_count = min(self.suspend_count, 0) self._cond.notify_all() finally: self._cond.release() def __new__(cls, thread=None, defaults=None): """ For task bound to a specific thread, this class acts like a "thread singleton", so new style class is used and new object are only instantiated if needed. 
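For example, given any thread object thr, the expression Task(thread=thr) is Task(thread=thr) evaluates to True: the second call returns the instance cached for thr instead of creating a new one.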
""" if thread: if thread not in cls._tasks: cls._tasks[thread] = object.__new__(cls) return cls._tasks[thread] return object.__new__(cls) def __init__(self, thread=None, defaults=None): """Initialize a Task, creating a new non-daemonic thread if needed.""" if not getattr(self, "_engine", None): # first time called self._default_lock = threading.Lock() if defaults is None: defaults = DEFAULTS self._default = defaults._task_default.copy() self._default.update( {"local_worker": _local_workerclass(defaults), "distant_worker": _distant_workerclass(defaults)}) self._info = defaults._task_info.copy() # use factory class PreferredEngine that gives the proper # engine instance self._engine = PreferredEngine(self.default("engine"), self._info) self.timeout = None # task synchronization objects self._run_lock = threading.Lock() # primitive lock self._suspend_lock = threading.RLock() # reentrant lock # both join and suspend conditions share the same underlying lock self._suspend_cond = Task._SuspendCondition(self._suspend_lock, 1) self._join_cond = threading.Condition(self._suspend_lock) self._suspended = False self._quit = False self._terminated = False # Default router self.topology = None self.router = None self.gateways = {} # dict of MsgTree by sname self._msgtrees = {} # dict of sources to return codes self._d_source_rc = {} # dict of return codes to sources self._d_rc_sources = {} # keep max rc self._max_rc = None # keep timeout'd sources self._timeout_sources = set() # allow no-op call to getters before resume() self._reset() # special engine port for task method dispatching self._dispatch_port = EnginePort(handler=Task._SyncMsgHandler(self), autoclose=True) self._engine.add(self._dispatch_port) # set taskid used as Thread name Task._task_lock.acquire() Task._taskid_max += 1 self._taskid = Task._taskid_max Task._task_lock.release() # create new thread if needed self._thread_foreign = bool(thread) if self._thread_foreign: self.thread = thread else: self.thread = thread = \ threading.Thread(None, Task._thread_start, "Task-%d" % self._taskid, args=(self,)) Task._tasks[thread] = self thread.start() def _is_task_self(self): """Private method used by the library to check if the task is task_self(), but do not create any task_self() instance.""" return self.thread == threading.current_thread() def default_excepthook(self, exc_type, exc_value, tb): """Default excepthook for a newly Task. When an exception is raised and uncaught on Task thread, excepthook is called, which is default_excepthook by default. Once excepthook overridden, you can still call default_excepthook if needed.""" print('Exception in thread %s:' % self.thread, file=sys.stderr) traceback.print_exception(exc_type, exc_value, tb, file=sys.stderr) _excepthook = default_excepthook def _getexcepthook(self): return self._excepthook def _setexcepthook(self, hook): self._excepthook = hook # If thread has not been created by us, install sys.excepthook which # might handle uncaught exception. if self._thread_foreign: sys.excepthook = self._excepthook # When an exception is raised and uncaught on Task's thread, # excepthook is called. 
You may want to override this three # arguments method (very similar of what you can do with # sys.excepthook).""" excepthook = property(_getexcepthook, _setexcepthook) def _thread_start(self): """Task-managed thread entry point""" while not self._quit: self._suspend_cond.wait_check() if self._quit: # may be set by abort() break try: self._resume() except: self.excepthook(*sys.exc_info()) self._quit = True self._terminate(kill=True) def _run(self, timeout): """Run task (always called from its self thread).""" # check if task is already running if self._run_lock.locked(): raise AlreadyRunningError("task is already running") # use with statement later try: self._run_lock.acquire() self._engine.run(timeout) finally: self._run_lock.release() def _default_tree_is_enabled(self): """Return whether default tree is enabled (load topology_file btw)""" if self.topology is None: for topology_file in self.TOPOLOGY_CONFIGS[::-1]: if os.path.exists(topology_file): self.load_topology(topology_file) break return (self.topology is not None) and self.default("auto_tree") def load_topology(self, topology_file): """Load propagation topology from provided file. On success, task.topology is set to a corresponding TopologyTree instance. On failure, task.topology is left untouched and a TopologyError exception is raised. """ self.topology = TopologyParser(topology_file).tree(_getshorthostname()) def _default_router(self, router=None): """ Helper to instantiate or bind a default PropagationTreeRouter for the task which can then be shared by multiple workers. Called by a TreeWorker when it is scheduled with this task. """ if router is None: if self.router is None: # Init router with the task's topology (e.g. root node) self.router = \ PropagationTreeRouter(str(self.topology.root.nodeset), self.topology) else: if self.router is not None: # Update default router if a different one is used by a worker. logger = logging.getLogger(__name__) logger.debug("_default_router: overriding previous default " \ "router %s with %s", self.router, router) self.router = router return self.router def default(self, default_key, def_val=None): """ Return per-task value for key from the "default" dictionary. See set_default() for a list of reserved task default_keys. """ self._default_lock.acquire() try: return self._default.get(default_key, def_val) finally: self._default_lock.release() def set_default(self, default_key, value): """ Set task value for specified key in the dictionary "default". Users may store their own task-specific key, value pairs using this method and retrieve them with default(). Task default_keys are: - "stderr": Boolean value indicating whether to enable stdout/stderr separation when using task.shell(), if not specified explicitly (default: False). - "stdin": Boolean value indicating whether to enable stdin when using task.shell(), if not explicitly specified (default: True) - "stdout_msgtree": Whether to instantiate standard output MsgTree for automatic internal gathering of result messages coming from Workers (default: True). - "stderr_msgtree": Same for stderr (default: True). - "engine": Used to specify an underlying Engine explicitly (default: "auto"). - "port_qlimit": Size of port messages queue (default: 32). - "worker": Worker-based class used when spawning workers through shell()/run(). Threading considerations: Unlike set_info(), when called from the task's thread or not, set_default() immediately updates the underlying dictionary in a thread-safe manner. 
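For example, a minimal illustration that enables stdout/stderr separation for subsequent task.shell() calls:

>>> task.set_default("stderr", True)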
This method doesn't wake up the engine when called. """ self._default_lock.acquire() try: self._default[default_key] = value if default_key == 'local_workername': self._default['local_worker'] = _load_workerclass(value) elif default_key == 'distant_workername': self._default['distant_worker'] = _load_workerclass(value) finally: self._default_lock.release() def info(self, info_key, def_val=None): """ Return per-task information. See set_info() for a list of reserved task info_keys. """ return self._info.get(info_key, def_val) @tasksyncmethod() def set_info(self, info_key, value): """ Set task value for a specific key information. Key, value pairs can be passed to the engine and/or workers. Users may store their own task-specific info key, value pairs using this method and retrieve them with info(). The following example changes the fanout value to 128: >>> task.set_info('fanout', 128) The following example enables debug messages: >>> task.set_info('debug', True) Task info_keys are: - "debug": Boolean value indicating whether to enable library debugging messages (default: False). - "print_debug": Debug messages processing function. This function takes 2 arguments: the task instance and the message string (default: an internal function doing standard print). - "fanout": Max number of registered clients in Engine at a time (default: 64). - "grooming_delay": Message maximum end-to-end delay requirement used for traffic grooming, in seconds as float (default: 0.5). - "connect_timeout": Time in seconds to wait for connecting to remote host before aborting (default: 10). - "command_timeout": Time in seconds to wait for a command to complete before aborting (default: 0, which means unlimited). - "tree_default:": In tree mode, overrides the key in Defaults (settings normally set in defaults.conf) Threading considerations: Unlike set_default(), the underlying info dictionary is only modified from the task's thread. So calling set_info() from another thread leads to queueing the request for late apply (at run time) using the task dispatch port. When received, the request wakes up the engine when the task is running and the info dictionary is then updated. """ self._info[info_key] = value def shell(self, command, **kwargs): """ Schedule a shell command for local or distant parallel execution. This essential method creates a local or remote Worker (depending on the presence of the nodes parameter) and immediately schedules it for execution in task's runloop. So, if the task is already running (ie. called from an event handler), the command is started immediately, assuming current execution constraints are met (eg. fanout value). If the task is not running, the command is not started but scheduled for late execution. See resume() to start task runloop. The following optional parameters are passed to the underlying local or remote Worker constructor: - handler: EventHandler instance to notify (on event) -- default is no handler (None) - timeout: command timeout delay expressed in second using a floating point value -- default is unlimited (None) - autoclose: if set to True, the underlying Worker is automatically aborted as soon as all other non-autoclosing task objects (workers, ports, timers) have finished -- default is False - stderr: separate stdout/stderr if set to True -- default is False. - stdin: enable stdin if set to True or prevent its use otherwise -- default is True. 
Local usage: task.shell(command [, key=key] [, handler=handler] [, timeout=secs] [, autoclose=enable_autoclose] [, stderr=enable_stderr][, stdin=enable_stdin])) Distant usage: task.shell(command, nodes=nodeset [, handler=handler] [, timeout=secs], [, autoclose=enable_autoclose] [, tree=None|False|True] [, remote=False|True] [, stderr=enable_stderr][, stdin=enable_stdin])) Example: >>> task = task_self() >>> task.shell("/bin/date", nodes="node[1-2345]") >>> task.resume() """ handler = kwargs.get("handler", None) timeo = kwargs.get("timeout", None) autoclose = kwargs.get("autoclose", False) stderr = kwargs.get("stderr", self.default("stderr")) stdin = kwargs.get("stdin", self.default("stdin")) remote = kwargs.get("remote", True) if kwargs.get("nodes", None): assert kwargs.get("key", None) is None, \ "'key' argument not supported for distant command" tree = kwargs.get("tree") # tree == None means auto if tree != False and self._default_tree_is_enabled(): # fail if tree is forced without any topology if tree and self.topology is None: raise TaskError("tree mode required for distant shell " "command with unknown topology!") # create tree worker wrkcls = TreeWorker elif not remote: # create local worker wrkcls = self.default('local_worker') else: # create distant worker wrkcls = self.default('distant_worker') worker = wrkcls(NodeSet(kwargs["nodes"]), command=command, handler=handler, stderr=stderr, timeout=timeo, autoclose=autoclose, remote=remote) else: # create old fashioned local worker worker = WorkerPopen(command, key=kwargs.get("key", None), handler=handler, stderr=stderr, timeout=timeo, autoclose=autoclose) if not stdin: try: worker.set_write_eof() # prevent reading from stdin except EngineClientError: # not all workers support writing pass # schedule worker for execution in this task self.schedule(worker) return worker def copy(self, source, dest, nodes, **kwargs): """ Copy local file to distant nodes. """ assert nodes != None, "local copy not supported" handler = kwargs.get("handler", None) stderr = kwargs.get("stderr", self.default("stderr")) timeo = kwargs.get("timeout", None) preserve = kwargs.get("preserve", None) reverse = kwargs.get("reverse", False) tree = kwargs.get("tree") # tree == None means auto if tree != False and self._default_tree_is_enabled(): # fail if tree is forced without any topology if tree and self.topology is None: raise TaskError("tree mode required for distant shell " "command with unknown topology!") # create tree worker wrkcls = TreeWorker else: # create a new copy worker wrkcls = self.default('distant_worker') worker = wrkcls(nodes, source=source, dest=dest, handler=handler, stderr=stderr, timeout=timeo, preserve=preserve, reverse=reverse) self.schedule(worker) return worker def rcopy(self, source, dest, nodes, **kwargs): """ Copy distant file or directory to local node. """ kwargs['reverse'] = True return self.copy(source, dest, nodes, **kwargs) @tasksyncmethod() def _add_port(self, port): """Add an EnginePort instance to Engine (private method).""" self._engine.add(port) @tasksyncmethod() def remove_port(self, port): """Close and remove a port from task previously created with port().""" self._engine.remove(port) def port(self, handler=None, autoclose=False): """ Create a new task port. A task port is an abstraction object to deliver messages reliably between tasks. Basic rules: - A task can send messages to another task port (thread safe). 
- A task can receive messages from an acquired port either by setting up a notification mechanism or using a polling mechanism that may block the task waiting for a message sent on the port. - A port can be acquired by one task only. If handler is set to a valid EventHandler object, the port is a send-once port, ie. a message sent to this port generates an ev_msg event notification issued the port's task. If handler is not set, the task can only receive messages on the port by calling port.msg_recv(). """ port = EnginePort(handler, autoclose) self._add_port(port) return port def timer(self, fire, handler, interval=-1.0, autoclose=False): """ Create a timer bound to this task that fires at a preset time in the future by invoking the ev_timer() method of *handler* (provided EventHandler object). Timers can fire either only once or repeatedly at fixed time intervals. Repeating timers can also have their next firing time manually adjusted. The mandatory parameter *fire* sets the firing delay in seconds. The optional parameter *interval* sets the firing interval of the timer. If not specified, the timer fires once and then is automatically invalidated. Time values are expressed in second using floating point values. Precision is implementation (and system) dependent. The optional parameter *autoclose*, if set to True, creates an "autoclosing" timer: it will be automatically invalidated as soon as all other non-autoclosing task's objects (workers, ports, timers) have finished. Default value is False, which means the timer will retain task's runloop until it is invalidated. Return a new EngineTimer instance. See ClusterShell.Engine.Engine.EngineTimer for more details. """ assert fire >= 0.0, \ "timer's relative fire time must be a positive floating number" timer = EngineTimer(fire, interval, autoclose, handler) # The following method may be sent through msg port (async # call) if called from another task. self._add_timer(timer) # always return new timer (sync) return timer @tasksyncmethod() def _add_timer(self, timer): """Add a timer to task engine (thread-safe).""" self._engine.add_timer(timer) @tasksyncmethod() def schedule(self, worker): """ Schedule a worker for execution, ie. add worker in task running loop. Worker will start processing immediately if the task is running (eg. called from an event handler) or as soon as the task is started otherwise. Only useful for manually instantiated workers, for example: >>> task = task_self() >>> worker = WorkerSsh("node[2-3]", None, 10, command="/bin/ls") >>> task.schedule(worker) >>> task.resume() """ assert self in Task._tasks.values(), \ "deleted task instance, call task_self() again!" # bind worker to task self worker._set_task(self) # add worker clients to engine for client in worker._engine_clients(): self._engine.add(client) def _resume_thread(self): """Resume task - called from another thread.""" self._suspend_cond.notify_all() def _resume(self): """Resume task - called from self thread.""" assert self.thread == threading.current_thread() try: try: self._reset() self._run(self.timeout) except EngineTimeoutException: raise TimeoutError() except EngineAbortException as exc: self._terminate(exc.kill) except EngineAlreadyRunningError: raise AlreadyRunningError("task engine is already running") finally: # task becomes joinable self._join_cond.acquire() self._suspend_cond.atomic_inc() self._join_cond.notify_all() self._join_cond.release() def resume(self, timeout=None): """ Resume task. 
If task is task_self(), workers are executed in the calling thread so this method will block until all (non-autoclosing) workers have finished. This is always the case for a single-threaded application (eg. which doesn't create other Task() instance than task_self()). Otherwise, the current thread doesn't block. In that case, you may then want to call task_wait() to wait for completion. Warning: the timeout parameter can be used to set an hard limit of task execution time (in seconds). In that case, a TimeoutError exception is raised if this delay is reached. Its value is 0 by default, which means no task time limit (TimeoutError is never raised). In order to set a maximum delay for individual command execution, you should use Task.shell()'s timeout parameter instead. """ # If you change options here, check Task.run() compatibility. self.timeout = timeout self._suspend_cond.atomic_dec() if self._is_task_self(): self._resume() else: self._resume_thread() def run(self, command=None, **kwargs): """ With arguments, it will schedule a command exactly like a Task.shell() would have done it and run it. This is the easiest way to simply run a command. >>> task.run("hostname", nodes="foo") Without argument, it starts all outstanding actions. It behaves like Task.resume(). >>> task.shell("hostname", nodes="foo") >>> task.shell("hostname", nodes="bar") >>> task.run() When used with a command, you can set a maximum delay of individual command execution with the help of the timeout parameter (see Task.shell's parameters). You can then listen for ev_close() events and check the timedout boolean in your Worker event handlers, or use num_timeout() or iter_keys_timeout() afterwards. But, when used as an alias to Task.resume(), the timeout parameter sets an hard limit of task execution time. In that case, a TimeoutError exception is raised if this delay is reached. """ worker = None timeout = None # Both resume() and shell() support a 'timeout' parameter. We need a # trick to behave correctly for both cases. # # Here, we mock: task.resume(10) if type(command) in (int, float): timeout = command command = None # Here, we mock: task.resume(timeout=10) elif 'timeout' in kwargs and command is None: timeout = kwargs.pop('timeout') # All other cases mean a classical: shell(...) # we mock: task.shell("mycommand", [timeout=..., ...]) elif command is not None: worker = self.shell(command, **kwargs) self.resume(timeout) return worker @tasksyncmethod() def _suspend_wait(self): """Suspend request received.""" assert task_self() == self # atomically set suspend state self._suspend_lock.acquire() self._suspended = True self._suspend_lock.release() # wait for special suspend condition, while releasing l_run self._suspend_cond.wait_check(self._run_lock) # waking up, atomically unset suspend state self._suspend_lock.acquire() self._suspended = False self._suspend_lock.release() def suspend(self): """ Suspend task execution. This method may be called from another task (thread-safe). The function returns False if the task cannot be suspended (eg. it's not running), or returns True if the task has been successfully suspended. To resume a suspended task, use task.resume(). """ # first of all, increase suspend count self._suspend_cond.atomic_inc() # call synchronized suspend method self._suspend_wait() # wait for stopped task self._run_lock.acquire() # run_lock ownership transfer # get result: are we really suspended or just stopped? 
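# note: _suspend_wait() above was dispatched through the task port; if the
# task stopped before processing that request, self._suspended was never
# set to True, so we report a plain stop rather than a suspend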
result = True self._suspend_lock.acquire() if not self._suspended: # not acknowledging suspend state, task is stopped result = False self._run_lock.release() self._suspend_lock.release() return result @tasksyncmethod() def _abort(self, kill=False): """Abort request received.""" assert task_self() == self # raise an EngineAbortException when task is running self._quit = True self._engine.abort(kill) def abort(self, kill=False): """ Abort a task. Aborting a task removes (and stops when needed) all workers. If optional parameter kill is True, the task object is unbound from the current thread, so calling task_self() creates a new Task object. """ if not self._run_lock.acquire(0): # self._run_lock is locked, try to call synchronized method self._abort(kill) # but there is no guarantee that it has really been called, as the # task could have aborted during the same time, so we use polling while not self._run_lock.acquire(0): sleep(0.001) # in any case, once _run_lock has been acquired, confirm abort self._quit = True self._run_lock.release() if self._is_task_self(): self._terminate(kill) else: # abort on stopped/suspended task self._suspend_cond.notify_all() def _terminate(self, kill): """ Abort completion subroutine. """ assert self._quit == True self._terminated = True if kill: # invalidate dispatch port self._dispatch_port = None # clear engine self._engine.clear(clear_ports=kill) if kill: self._engine.release() self._engine = None # clear result objects self._reset() # unlock any remaining threads that are waiting for our # termination (late join()s) # must be called after _terminated is set to True self._join_cond.acquire() self._join_cond.notify_all() self._join_cond.release() # destroy task if needed if kill: Task._task_lock.acquire() try: del Task._tasks[threading.current_thread()] finally: Task._task_lock.release() def join(self): """ Suspend execution of the calling thread until the target task terminates, unless the target task has already terminated. """ self._join_cond.acquire() try: if self._suspend_cond.suspend_count > 0 and not self._suspended: # ignore stopped task return if self._terminated: # ignore join() on dead task return self._join_cond.wait() finally: self._join_cond.release() def running(self): """ Return True if the task is running. """ return self._engine and self._engine.running def _reset(self): """ Reset buffers and retcodes management variables. """ # reinit MsgTree dict self._msgtrees = {} # other re-init's self._d_source_rc = {} self._d_rc_sources = {} self._max_rc = None self._timeout_sources.clear() def _msgtree(self, sname, strict=True): """Helper method to return msgtree instance by sname if allowed.""" if self.default("%s_msgtree" % sname): if sname not in self._msgtrees: self._msgtrees[sname] = MsgTree() return self._msgtrees[sname] elif strict: raise TaskMsgTreeError("%s_msgtree not set" % sname) def _msg_add(self, worker, node, sname, msg): """ Process a new message into Task's MsgTree that is coming from: - a worker instance of this task - a node - a stream name sname (string identifier) """ assert worker.task == self, "better to add messages from my workers" msgtree = self._msgtree(sname, strict=False) # As strict=False, if msgtree is None, this means task is set to NOT # record messages... in that case we ignore this request, still # keeping possible existing MsgTree, thus allowing temporarily # disabled ones. 
if msgtree is not None: msgtree.add((worker, node), msg) def _rc_set(self, worker, node, rc): """ Add a worker return code (rc) that is coming from a node of a worker instance. """ assert rc is not None source = (worker, node) # store rc by source self._d_source_rc[source] = rc # store source by rc self._d_rc_sources.setdefault(rc, set()).add(source) # update max rc if self._max_rc is None or rc > self._max_rc: self._max_rc = rc def _timeout_add(self, worker, node): """ Add a timeout indicator that is coming from a node of a worker instance. """ # store source in timeout set self._timeout_sources.add((worker, node)) def _msg_by_source(self, worker, node, sname): """Get a message by its worker instance, node and stream name.""" msg = self._msgtree(sname).get((worker, node)) if msg is None: return None return bytes(msg) def _call_tree_matcher(self, tree_match_func, match_keys=None, worker=None): """Call identified tree matcher (items, walk) method with options.""" if isinstance(match_keys, basestring): # change to str for Python 3 raise TypeError("Sequence of keys/nodes expected for 'match_keys'.") # filter by worker and optionally by matching keys if worker and match_keys is None: match = lambda k: k[0] is worker elif worker and match_keys is not None: match = lambda k: k[0] is worker and k[1] in match_keys elif match_keys: match = lambda k: k[1] in match_keys else: match = None # Call tree matcher function (items or walk) return tree_match_func(match, itemgetter(1)) def _rc_by_source(self, worker, node): """Get a return code by worker instance and node.""" return self._d_source_rc[(worker, node)] def _rc_iter_by_key(self, key): """ Return an iterator over return codes for the given key. """ for (w, k), rc in self._d_source_rc.items(): if k == key: yield rc def _rc_iter_by_worker(self, worker, match_keys=None): """ Return an iterator over return codes and keys list for a specific worker and optional matching keys. """ if match_keys: # Use the items iterator for the underlying dict. for rc, src in self._d_rc_sources.items(): keys = [t[1] for t in src if t[0] is worker and \ t[1] in match_keys] if len(keys) > 0: yield rc, keys else: for rc, src in self._d_rc_sources.items(): keys = [t[1] for t in src if t[0] is worker] if len(keys) > 0: yield rc, keys def _krc_iter_by_worker(self, worker): """ Return an iterator over key, rc for a specific worker. """ for rc, src in self._d_rc_sources.items(): for w, k in src: if w is worker: yield k, rc def _num_timeout_by_worker(self, worker): """ Return the number of timed out "keys" for a specific worker. """ cnt = 0 for (w, k) in self._timeout_sources: if w is worker: cnt += 1 return cnt def _iter_keys_timeout_by_worker(self, worker): """ Iterate over timed out keys (ie. nodes) for a specific worker. """ for (w, k) in self._timeout_sources: if w is worker: yield k def _flush_buffers_by_worker(self, worker): """ Remove any messages from specified worker. """ msgtree = self._msgtree('stdout', strict=False) if msgtree is not None: msgtree.remove(lambda k: k[0] == worker) def _flush_errors_by_worker(self, worker): """ Remove any error messages from specified worker. """ errtree = self._msgtree('stderr', strict=False) if errtree is not None: errtree.remove(lambda k: k[0] == worker) def key_buffer(self, key): """ Get buffer for a specific key. When the key is associated to multiple workers, the resulting buffer will contain all workers content that may overlap. This method returns an empty buffer if key is not found in any workers. 
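A minimal usage sketch (Python 3 session, assuming a reachable node "n1" and a finished run):

>>> worker = task.run("echo ok", nodes="n1")
>>> task.key_buffer("n1")
b'ok'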
""" msgtree = self._msgtree('stdout') select_key = lambda k: k[1] == key return b''.join(bytes(msg) for msg in msgtree.messages(select_key)) node_buffer = key_buffer def key_error(self, key): """ Get error buffer for a specific key. When the key is associated to multiple workers, the resulting buffer will contain all workers content that may overlap. This method returns an empty error buffer if key is not found in any workers. """ errtree = self._msgtree('stderr') select_key = lambda k: k[1] == key return b''.join(bytes(msg) for msg in errtree.messages(select_key)) node_error = key_error def key_retcode(self, key): """ Return return code for a specific key. When the key is associated to multiple workers, return the max return code from these workers. Raises a KeyError if key is not found in any finished workers. """ codes = list(self._rc_iter_by_key(key)) if not codes: raise KeyError(key) return max(codes) node_retcode = key_retcode def max_retcode(self): """ Get max return code encountered during last run or None in the following cases: - all commands timed out - no command-based worker was executed How do retcodes work? If the process exits normally, the return code is its exit status. If the process is terminated by a signal, the return code is 128 + signal number. """ return self._max_rc def _iter_msgtree(self, sname, match_keys=None): """Helper method to iterate over recorded buffers by sname.""" try: msgtree = self._msgtrees[sname] return self._call_tree_matcher(msgtree.walk, match_keys) except KeyError: if not self.default("%s_msgtree" % sname): raise TaskMsgTreeError("%s_msgtree not set" % sname) return iter([]) def iter_buffers(self, match_keys=None): """ Iterate over buffers, returns a tuple (buffer, keys). For remote workers (Ssh), keys are list of nodes. In that case, you should use NodeSet.fromlist(keys) to get a NodeSet instance (which is more convenient and efficient): Optional parameter match_keys add filtering on these keys. Usage example: >>> for buffer, nodelist in task.iter_buffers(): ... print NodeSet.fromlist(nodelist) ... print buffer """ return self._iter_msgtree('stdout', match_keys) def iter_errors(self, match_keys=None): """ Iterate over error buffers, returns a tuple (buffer, keys). See iter_buffers(). """ return self._iter_msgtree('stderr', match_keys) def iter_retcodes(self, match_keys=None): """ Iterate over return codes of command-based workers, returns a tuple (rc, keys). Optional parameter *match_keys* add filtering on these keys. How do retcodes work? If the process exits normally, the return code is its exit status. If the process is terminated by a signal, the return code is 128 + signal number. """ if match_keys: # Use the items iterator for the underlying dict. for rc, src in self._d_rc_sources.items(): keys = [t[1] for t in src if t[1] in match_keys] yield rc, keys else: for rc, src in self._d_rc_sources.items(): yield rc, [t[1] for t in src] def num_timeout(self): """ Return the number of timed out "keys" (ie. nodes). """ return len(self._timeout_sources) def iter_keys_timeout(self): """ Iterate over timed out keys (ie. nodes). """ for (w, k) in self._timeout_sources: yield k def flush_buffers(self): """ Flush all task messages (from all task workers). """ msgtree = self._msgtree('stdout', strict=False) if msgtree is not None: msgtree.clear() def flush_errors(self): """ Flush all task error messages (from all task workers). 
""" errtree = self._msgtree('stderr', strict=False) if errtree is not None: errtree.clear() @classmethod def wait(cls, from_thread): """ Class method that blocks calling thread until all tasks have finished (from a ClusterShell point of view, for instance, their task.resume() return). It doesn't necessarily mean that associated threads have finished. """ Task._task_lock.acquire() try: tasks = Task._tasks.copy() finally: Task._task_lock.release() for thread, task in tasks.items(): if thread != from_thread: task.join() def _pchannel(self, gateway, metaworker): """Get propagation channel for gateway (create one if needed). Use self.gateways dictionary that allows lookup like: gateway (string) => (worker channel, set of metaworkers) """ gwstr = str(gateway) # create gateway channel if needed if gwstr not in self.gateways: chan = PropagationChannel(self, gateway) logger = logging.getLogger(__name__) logger.debug("pchannel: creating new channel %s", chan) # invoke gateway timeout = None # FIXME: handle timeout for gateway channels wrkcls = self.default('distant_worker') chanworker = wrkcls(gateway, command=metaworker.invoke_gateway, handler=chan, stderr=True, timeout=timeout) chanworker._update_task_rc = False # gateway is special! define worker._fanout to not rely on the # engine's fanout, and use the special value FANOUT_UNLIMITED to # always allow registration of gateways chanworker._fanout = FANOUT_UNLIMITED # change default stream names to avoid internal task buffering # and conform with channel stream names chanworker.SNAME_STDIN = chan.SNAME_WRITER chanworker.SNAME_STDOUT = chan.SNAME_READER chanworker.SNAME_STDERR = chan.SNAME_ERROR self.schedule(chanworker) # update gateways dict self.gateways[gwstr] = (chanworker, set([metaworker])) else: # TODO: assert chanworker is running (need Worker.running()) chanworker, metaworkers = self.gateways[gwstr] metaworkers.add(metaworker) return chanworker.eh def _pchannel_release(self, gateway, metaworker): """Release propagation channel associated to gateway. Lookup by gateway, decref associated metaworker set and release channel worker if needed. """ logger = logging.getLogger(__name__) logger.debug("pchannel_release %s %s", gateway, metaworker) gwstr = str(gateway) if gwstr not in self.gateways: logger.error("pchannel_release: no pchannel found for gateway %s", gwstr) else: # TODO: delay gateway closing when other gateways are running chanworker, metaworkers = self.gateways[gwstr] metaworkers.remove(metaworker) if len(metaworkers) == 0: logger.debug("pchannel_release: destroying channel %s", chanworker.eh) chanworker.abort() # delete gateway reference del self.gateways[gwstr] def task_self(defaults=None): """ Return the current Task object, corresponding to the caller's thread of control (a Task object is always bound to a specific thread). This function provided as a convenience is available in the top-level ClusterShell.Task package namespace. """ return Task(thread=threading.current_thread(), defaults=defaults) def task_wait(): """ Suspend execution of the calling thread until all tasks terminate, unless all tasks have already terminated. This function is provided as a convenience and is available in the top-level ClusterShell.Task package namespace. """ Task.wait(threading.current_thread()) def task_terminate(): """ Destroy the Task instance bound to the current thread. A next call to task_self() will create a new Task object. Not to be called from a signal handler. 
This function provided as a convenience is available in the top-level ClusterShell.Task package namespace. """ task_self().abort(kill=True) def task_cleanup(): """ Cleanup routine to destroy all created tasks. This function provided as a convenience is available in the top-level ClusterShell.Task package namespace. This is mainly used for testing purposes and should be avoided otherwise. task_cleanup() may be called from any threads but not from a signal handler. """ # be sure to return to a clean state (no task at all) while True: Task._task_lock.acquire() try: tasks = Task._tasks.copy() if len(tasks) == 0: break finally: Task._task_lock.release() # send abort to all known tasks (it's needed to retry as we may have # missed the engine notification window (it was just exiting, which is # quite a common case if we didn't task_join() previously), or we may # have lost some task's dispatcher port messages. for task in tasks.values(): task.abort(kill=True) # also, for other task than self, task.abort() is async and performed # through an EngineAbortException, so tell the Python scheduler to give # up control to raise this exception (handled by task._terminate())... sleep(0.001) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/lib/ClusterShell/Topology.py0000644104717000001440000004127514505632065021152 0ustar00sthiellusers# # Copyright (C) 2010-2016 CEA/DAM # Copyright (C) 2010-2011 Henri Doreau # Copyright (C) 2015-2017 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ ClusterShell topology module This module contains the network topology parser and its related classes. These classes are used to build a topology tree of nodegroups according to the configuration file. This file must be written using the following syntax: # for now only [routes] tree is taken in account: [routes] admin: first_level_gateways[0-10] first_level_gateways[0-10]: second_level_gateways[0-100] second_level_gateways[0-100]: nodes[0-2000] ... """ try: import configparser except ImportError: # Python 2 compat import ConfigParser as configparser import logging from ClusterShell.NodeSet import NodeSet LOGGER = logging.getLogger(__name__) class TopologyError(Exception): """topology parser error to report invalid configurations or parsing errors """ class TopologyNodeGroup(object): """Base element for in-memory representation of the propagation tree. Contains a nodeset, with parent-children relationships with other instances. 
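A minimal sketch of how instances are linked (hypothetical node sets):

>>> root = TopologyNodeGroup(NodeSet("admin"))
>>> gw = TopologyNodeGroup(NodeSet("gw[0-1]"))
>>> root.add_child(gw)
>>> gw.parent is root
True
>>> str(root.children_ns())
'gw[0-1]'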
""" def __init__(self, nodeset=None): """initialize a new TopologyNodeGroup instance.""" # Base nodeset self.nodeset = nodeset # Parent TopologyNodeGroup (TNG) instance self.parent = None # List of children TNG instances self._children = [] self._children_len = 0 # provided for convenience self._children_ns = None def printable_subtree(self, prefix=''): """recursive method that returns a printable version the subtree from the current node with a nice presentation """ res = '' # For now, it is ok to use a recursive method here as we consider that # tree depth is relatively small. if self.parent is None: # root res = '%s\n' % str(self.nodeset) elif self.parent.parent is None: # first level if not self._is_last(): res = '|- %s\n' % str(self.nodeset) else: res = '`- %s\n' % str(self.nodeset) else: # deepest levels... if not self.parent._is_last(): prefix += '| ' else: # fix last line prefix += ' ' if not self._is_last(): res = '%s|- %s\n' % (prefix, str(self.nodeset)) else: res = '%s`- %s\n' % (prefix, str(self.nodeset)) # perform recursive calls to print out every node for child in self._children: res += child.printable_subtree(prefix) return res def add_child(self, child): """add a child to the children list and define the current instance as its parent """ assert isinstance(child, TopologyNodeGroup) assert len(child.nodeset) > 0, "empty nodeset in child" if child in self._children: return child.parent = self self._children.append(child) if self._children_ns is None: self._children_ns = NodeSet() self._children_ns.add(child.nodeset) def clear_child(self, child, strict=False): """remove a child""" try: self._children.remove(child) self._children_ns.difference_update(child.nodeset) if len(self._children_ns) == 0: self._children_ns = None except ValueError: if strict: raise def clear_children(self): """delete all children""" self._children = [] self._children_ns = None def children(self): """get the children list""" return self._children def children_ns(self): """return the children as a nodeset""" return self._children_ns def children_len(self): """returns the number of children as the sum of the size of the children's nodeset """ if self._children_ns is None: return 0 else: return len(self._children_ns) def _is_last(self): """used to display the subtree: we won't prefix the line the same way if the current instance is the last child of the children list of its parent. 
""" return self.parent._children[-1::][0] == self def __str__(self): """printable representation of the nodegroup""" return '' % str(self.nodeset) class TopologyTree(object): """represent a simplified network topology as a tree of machines to use to connect to other ones """ class TreeIterator(object): """efficient tool for tree-traversal""" def __init__(self, tree): """we do simply manage a stack with the remaining nodes""" self._stack = [tree.root] def __next__(self): # Python 3 """return the next node in the stack or raise a StopIteration exception if the stack is empty """ if len(self._stack) > 0 and self._stack[0] is not None: node = self._stack.pop() self._stack += node.children() return node else: raise StopIteration() next = __next__ # Python 2 def __init__(self): """initialize a new TopologyTree instance.""" self.root = None self.groups = [] def load(self, rootnode): """load topology tree""" self.root = rootnode stack = [rootnode] while len(stack) > 0: curr = stack.pop() self.groups.append(curr) if curr.children_len() > 0: stack += curr.children() def __iter__(self): """provide an iterator on the tree's elements""" return TopologyTree.TreeIterator(self) def __str__(self): """printable representation of the tree""" if self.root is None: return '' return self.root.printable_subtree() def find_nodegroup(self, node): """Find TopologyNodeGroup from given node (helper to find new root)""" for group in self.groups: if node in group.nodeset: return group raise TopologyError('TopologyNodeGroup not found for node %s' % node) def inner_node_count(self): """helper to get inner node count (root and gateway nodes)""" return sum(len(group.nodeset) for group in self.groups if group.children_len() > 0) def leaf_node_count(self): """helper to get leaf node count""" return sum(len(group.nodeset) for group in self.groups if group.children_len() == 0) class TopologyRoute(object): """A single route between two nodesets""" def __init__(self, src_ns, dst_ns): """both src_ns and dst_ns are expected to be non-empty NodeSet instances """ self.src = src_ns self.dst = dst_ns if len(src_ns & dst_ns) != 0: raise TopologyError( 'Source and destination nodesets overlap') def dest(self, nodeset=None): """get the route's destination. The optional argument serves for convenience and provides a way to use the method for a subset of the whole source nodeset """ if nodeset is None or nodeset in self.src: return self.dst else: return None def __str__(self): """printable representation""" return '%s -> %s' % (str(self.src), str(self.dst)) class TopologyRoutingTable(object): """This class provides a convenient way to store and manage topology routes """ def __init__(self): """Initialize a new TopologyRoutingTable instance.""" self._routes = [] self.aggregated_src = NodeSet() self.aggregated_dst = NodeSet() def add_route(self, route): """add a new route to the table. The route argument is expected to be a TopologyRoute instance """ if self._introduce_circular_reference(route): raise TopologyError( 'Loop detected! Cannot add route %s' % str(route)) if self._introduce_convergent_paths(route): raise TopologyError( 'Convergent path detected! Cannot add route %s' % str(route)) self._routes.append(route) self.aggregated_src.add(route.src) self.aggregated_dst.add(route.dst) def connected(self, src_ns): """find out and return the aggregation of directly connected children from src_ns. Argument src_ns is expected to be a NodeSet instance. 
Result is returned as a NodeSet instance """ next_hop = NodeSet.fromlist(dst for dst in [route.dest(src_ns) for route in self._routes] if dst is not None) if len(next_hop) == 0: return None return next_hop def __str__(self): """printable representation""" return '\n'.join([str(route) for route in self._routes]) def __iter__(self): """return an iterator over the list of routes""" return iter(self._routes) def _introduce_circular_reference(self, route): """check whether the last added route adds a topology loop or not""" current_ns = route.dst # iterate over the destinations until we find None or we come back on # the src while True: _dest = self.connected(current_ns) if _dest is None or len(_dest) == 0: return False if len(_dest & route.src) != 0: return True current_ns = _dest def _introduce_convergent_paths(self, route): """check for undesired convergent paths""" for known_route in self._routes: # source cannot be a superset of an already known destination if route.src > known_route.dst: return True # same thing... if route.dst < known_route.src: return True # two different nodegroups cannot point to the same one if len(route.dst & known_route.dst) != 0 \ and route.src != known_route.src: return True return False class TopologyGraph(object): """represent a complete network topology by storing every "can reach" relation between nodes. """ def __init__(self): """initialize a new TopologyGraph instance.""" self._routing = TopologyRoutingTable() self._nodegroups = {} self._root = '' def add_route(self, src_ns, dst_ns): """add a new route from src nodeset to dst nodeset. The destination nodeset must not overlap with already known destination nodesets (otherwise a TopologyError is raised) """ assert isinstance(src_ns, NodeSet) assert isinstance(dst_ns, NodeSet) self._routing.add_route(TopologyRoute(src_ns, dst_ns)) def dest(self, from_nodeset): """return the aggregation of the destinations for a given nodeset""" return self._routing.connected(from_nodeset) def to_tree(self, root): """convert the routing table to a topology tree of nodegroups""" # convert the routing table into a table of linked TopologyNodeGroup's self._routes_to_tng() # ensure this is a valid pseudo-tree self._validate(root) tree = TopologyTree() tree.load(self._nodegroups[self._root]) return tree def __str__(self): """printable representation of the graph""" res = '<TopologyGraph>\n' res += '\n'.join(['%s: %s' % (str(k), str(v)) for k, v in self._nodegroups.items()]) return res def _routes_to_tng(self): """convert the routing table into a graph of TopologyNodeGroup instances. Loops are not very expensive here as the number of routes will always be much lower than the number of nodes. """ # instantiate nodegroups as biggest groups of nodes sharing both parent # and destination aggregated_src = self._routing.aggregated_src for route in self._routing: self._nodegroups[str(route.src)] = TopologyNodeGroup(route.src) # create a nodegroup for the destination if it is a leaf group.
# Otherwise, it will be created as src for another route leaf = route.dst - aggregated_src if len(leaf) > 0: self._nodegroups[str(leaf)] = TopologyNodeGroup(leaf) # add the parent <--> children relationships for group in self._nodegroups.values(): dst_ns = self._routing.connected(group.nodeset) if dst_ns is not None: for child in self._nodegroups.values(): if child.nodeset in dst_ns: group.add_child(child) def _validate(self, root): """ensure that the graph is valid for conversion to tree""" if len(self._nodegroups) == 0: raise TopologyError("No route found in topology definition!") # ensure that every node is reachable src_all = self._routing.aggregated_src dst_all = self._routing.aggregated_dst res = [(k, v) for k, v in self._nodegroups.items() if root in v.nodeset] if len(res) > 0: kgroup, group = res[0] del self._nodegroups[kgroup] self._nodegroups[root] = group else: raise TopologyError('"%s" is not a valid root node!' % root) self._root = root class TopologyParser(configparser.ConfigParser): """This class offers a way to interpret network topologies supplied under the form : # Comment <source>: <destination> """ def __init__(self, filename=None): """instance wide variables initialization""" configparser.ConfigParser.__init__(self) self.optionxform = str # case sensitive parser self._topology = {} self.graph = None self._tree = None if filename: self.load(filename) def load(self, filename): """read a given topology configuration file and store the results in self._routes. Then build a propagation tree. """ try: self.read(filename) if self.has_section("routes"): self._topology = self.items("routes") else: # compat routes section [deprecated since v1.7] self._topology = self.items("Main") except configparser.Error: raise TopologyError( 'Invalid configuration file: %s' % filename) self._build_graph() def _build_graph(self): """build a network topology graph according to the information we got from the configuration file. """ self.graph = TopologyGraph() for src, dst in self._topology: # GH#527: router and destination node sets may use NodeSet groups # but we ignore any empty sets src_ns = NodeSet(src) if not src_ns: LOGGER.debug('Failed to resolve router node set: %s', src) dst_ns = NodeSet(dst) if not dst_ns: LOGGER.debug('Failed to resolve destination node set "%s" for ' \ 'router node set %s', dst, src_ns) if src_ns and dst_ns: self.graph.add_route(src_ns, dst_ns) def tree(self, root, force_rebuild=False): """Return a previously generated propagation tree or build it if required. As rebuilding tree can be quite expensive, once built, the propagation tree is cached. You can force a re-generation using the optional `force_rebuild' parameter. """ if self._tree is None or force_rebuild: self._tree = self.graph.to_tree(root) return self._tree ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3323298 ClusterShell-1.9.2/lib/ClusterShell/Worker/0000755104717000001440000000000014505640536020226 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/Worker/EngineClient.py0000644104717000001440000004473714501416555023151 0ustar00sthiellusers# # Copyright (C) 2009-2016 CEA/DAM # Copyright (C) 2016-2017 Stephane Thiell # # This file is part of ClusterShell.
# # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ EngineClient ClusterShell engine's client interface. An engine client is similar to a process, you can start/stop it, read data from it and write data to it. Multiple data channels are supported (eg. stdin, stdout and stderr, or even more...) """ import errno import logging import os try: import queue except ImportError: # Python 2 compatibility import Queue as queue import threading from ClusterShell.Defaults import DEFAULTS from ClusterShell.Worker.fastsubprocess import Popen, PIPE, STDOUT, \ set_nonblock_flag from ClusterShell.Engine.Engine import EngineBaseTimer, E_READ, E_WRITE LOGGER = logging.getLogger(__name__) class EngineClientException(Exception): """Generic EngineClient exception.""" class EngineClientEOF(EngineClientException): """EOF from client.""" class EngineClientError(EngineClientException): """Base EngineClient error exception.""" class EngineClientNotSupportedError(EngineClientError): """Operation not supported by EngineClient.""" class EngineClientStream(object): """EngineClient I/O stream object. Internal object used by EngineClient to manage its Engine-registered I/O streams. Each EngineClientStream is bound to a file object (file descriptor). It can be either an input, an output or a bidirectional stream (not used for now).""" def __init__(self, name, sfile=None, evmask=0): """Initialize an EngineClientStream object. @param name: Name of stream. @param sfile: File object or file descriptor. @param evmask: Config I/O event bitmask. """ self.name = name self.fd = None self.rbuf = bytes() self.wbuf = bytes() self.eof = False self.evmask = evmask self.events = 0 self.new_events = 0 self.retain = False self.closefd = False self.set_file(sfile) def set_file(self, sfile, evmask=0, retain=True, closefd=True): """ Set the stream file and event mask for this object. sfile should be a file object or a file descriptor. Event mask can be either E_READ, E_WRITE or both. Currently does NOT retain file object. 
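A minimal illustration (hypothetical pipe file descriptors):

>>> import os
>>> rfd, wfd = os.pipe()
>>> stream = EngineClientStream('reader')
>>> stream.set_file(rfd, E_READ)
>>> bool(stream.readable())
True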
""" try: # file descriptor self.fd = sfile.fileno() except AttributeError: self.fd = sfile # Set I/O event mask self.evmask = evmask # Set retain flag self.retain = retain # Set closefd flag self.closefd = closefd def __repr__(self): return "<%s at 0x%s (name=%s fd=%s rbuflen=%d wbuflen=%d eof=%d " \ "evmask=0x%x)>" % (self.__class__.__name__, id(self), self.name, self.fd, len(self.rbuf), len(self.wbuf), self.eof, self.evmask) def close(self): """Close stream.""" if self.closefd and self.fd is not None: os.close(self.fd) def readable(self): """Return whether the stream is setup as readable.""" return self.evmask & E_READ def writable(self): """Return whether the stream is setup as writable.""" return self.evmask & E_WRITE class EngineClientStreamDict(dict): """EngineClient's named stream dictionary.""" def set_stream(self, sname, sfile=None, evmask=0, retain=True, closefd=True): """Set stream based on file object or file descriptor. This method can be used to add a stream or update its parameters. """ engfile = dict.setdefault(self, sname, EngineClientStream(sname)) engfile.set_file(sfile, evmask, retain, closefd) return engfile def set_reader(self, sname, sfile=None, retain=True, closefd=True): """Set readable stream based on file object or file descriptor.""" self.set_stream(sname, sfile, E_READ, retain, closefd) def set_writer(self, sname, sfile=None, retain=True, closefd=True): """Set writable stream based on file object or file descriptor.""" self.set_stream(sname, sfile, E_WRITE, retain, closefd) def destroy(self, key): """Close file object and remove it from this pool.""" self[key].close() dict.pop(self, key) def __delitem__(self, key): self.destroy(key) def clear(self): """Clear File Pool""" for stream in self.values(): stream.close() dict.clear(self) def active_readers(self): """Get an iterator on readable streams (with fd set).""" return (s for s in self.readers() if s.fd is not None) def readers(self): """Get an iterator on all streams setup as readable.""" return (s for s in list(self.values()) if s.evmask & E_READ) def active_writers(self): """Get an iterator on writable streams (with fd set).""" return (s for s in self.writers() if s.fd is not None) def writers(self): """Get an iterator on all streams setup as writable.""" return (s for s in list(self.values()) if s.evmask & E_WRITE) def retained(self): """Check whether this set of streams is retained. Note on retain: an active stream with retain=True keeps the engine client alive. When only streams with retain=False remain, the engine client terminates. Return: True -- when at least one stream is retained False -- when no retainable stream remain """ for stream in self.values(): if stream.fd is not None and stream.retain: return True return False class EngineClient(EngineBaseTimer): """ Abstract class EngineClient. """ def __init__(self, worker, key, stderr, timeout, autoclose): """EngineClient initializer. Should be called from derived classes. Arguments: worker -- parent worker instance key -- client key used by MsgTree (eg. node name) stderr -- boolean set if stderr is on a separate stream timeout -- client execution timeout value (float) autoclose -- boolean set to indicate whether this engine client should be aborted as soon as all other non-autoclosing clients have finished. 
""" EngineBaseTimer.__init__(self, timeout, -1, autoclose) self._reg_epoch = 0 # registration generation number # read-only public self.registered = False # registered on engine or not self.delayable = True # subject to fanout limit self.worker = worker if key is None: key = id(worker) self.key = key # boolean indicating whether stderr is on a separate fd self._stderr = stderr # streams associated with this client self.streams = EngineClientStreamDict() def __repr__(self): # added repr(self.key) return '<%s.%s instance at 0x%x key %r>' % (self.__module__, self.__class__.__name__, id(self), self.key) def _fire(self): """ Fire timeout timer. """ if self._engine: self._engine.remove(self, abort=True, did_timeout=True) def _start(self): """ Starts client and returns client instance as a convenience. Derived classes must implement. """ raise NotImplementedError("Derived classes must implement.") def _close(self, abort, timeout): """ Close client. Called by the engine after client has been unregistered. This method should handle both termination types (normal or aborted) and should set timeout status accordingly. Derived classes should implement. """ for sname in list(self.streams): self._close_stream(sname) self.invalidate() # set self._engine to None def _close_stream(self, sname): """ Close specific stream by name (internal, called by engine). This method is the regular way to close a stream flushing read buffers accordingly. """ self._flush_read(sname) # flush_read() is useful but may generate user events (ev_read) that # could lead to worker abort and then ev_close. Be careful there. if sname in self.streams: del self.streams[sname] def _set_reading(self, sname): """ Set reading state. """ self._engine.set_reading(self, sname) def _set_writing(self, sname): """ Set writing state. """ self._engine.set_writing(self, sname) def _read(self, sname, size=65536): """ Read data from process. """ result = os.read(self.streams[sname].fd, size) if len(result) == 0: raise EngineClientEOF() self._set_reading(sname) return result def _flush_read(self, sname): """Called when stream is closing to flush read buffers.""" pass # derived classes may implement def _handle_read(self, sname): """ Handle a read notification. Called by the engine as the result of an event indicating that a read is available. """ raise NotImplementedError("Derived classes must implement.") def _handle_write(self, sname): """ Handle a write notification. Called by the engine as the result of an event indicating that a write can be performed now. 
""" wfile = self.streams[sname] if not wfile.wbuf and wfile.eof: # remove stream from engine (not directly) if self._engine: self._engine.remove_stream(self, wfile) elif len(wfile.wbuf) > 0: try: wcnt = os.write(wfile.fd, wfile.wbuf) except OSError as exc: if exc.errno == errno.EAGAIN: # _handle_write() is not only called by the engine but also # by _write(), so this is legit: we just try again later self._set_writing(sname) return if exc.errno == errno.EPIPE: # broken pipe: log warning message and do NOT retry LOGGER.warning('%r: %s', self, exc) return raise if wcnt > 0: # dequeue written buffer wfile.wbuf = wfile.wbuf[wcnt:] # check for possible ending if wfile.eof and not wfile.wbuf: self.worker._on_written(self.key, wcnt, sname) # remove stream from engine (not directly) if self._engine: self._engine.remove_stream(self, wfile) else: self._set_writing(sname) self.worker._on_written(self.key, wcnt, sname) def _exec_nonblock(self, commandlist, shell=False, env=None): """ Utility method to launch a command with stdin/stdout file descriptors configured in non-blocking mode. """ full_env = None if env: full_env = os.environ.copy() full_env.update(env) if self._stderr: stderr_setup = PIPE else: stderr_setup = STDOUT # Launch process in non-blocking mode proc = Popen(commandlist, bufsize=0, stdin=PIPE, stdout=PIPE, stderr=stderr_setup, shell=shell, env=full_env) if self._stderr: self.streams.set_stream(self.worker.SNAME_STDERR, proc.stderr, E_READ) self.streams.set_stream(self.worker.SNAME_STDOUT, proc.stdout, E_READ) self.streams.set_stream(self.worker.SNAME_STDIN, proc.stdin, E_WRITE, retain=False) return proc def _readlines(self, sname): """Utility method to read client lines.""" # read a chunk of data, may raise eof readbuf = self._read(sname) assert len(readbuf) > 0, "assertion failed: len(readbuf) > 0" # Current version implements line-buffered reads. If needed, we could # easily provide direct, non-buffered, data reads in the future. rfile = self.streams[sname] buf = rfile.rbuf + readbuf lines = buf.splitlines(True) rfile.rbuf = bytes() for line in lines: if line.endswith(b'\n'): if line.endswith(b'\r\n'): yield line[:-2] # trim CRLF else: # trim LF yield line[:-1] # trim LF else: # keep partial line in buffer rfile.rbuf = line # breaking here def _write(self, sname, buf): """Add some data to be written to the client.""" wfile = self.streams[sname] if self._engine and wfile.fd: wfile.wbuf += buf # give it a try now (will set writing flag anyhow) self._handle_write(sname) else: # bufferize until pipe is ready wfile.wbuf += buf def _set_write_eof(self, sname): """Set EOF on specific writable stream.""" if sname not in self.streams: LOGGER.debug("stream %s was already closed on client %s, skipping", sname, self.key) return wfile = self.streams[sname] wfile.eof = True if self._engine and wfile.fd and not wfile.wbuf: # sendq empty, remove stream now self._engine.remove_stream(self, wfile) def abort(self): """Abort processing any action by this client. Safe to call on an already closing or aborting client. """ engine = self._engine if engine: self.invalidate() # set self._engine to None engine.remove(self, abort=True) class EnginePort(EngineClient): """ An EnginePort is an abstraction object to deliver messages reliably between tasks. """ class _Msg(object): """Private class representing a port message. A port message may be any Python object. 
""" def __init__(self, user_msg, sync): self._user_msg = user_msg self._sync_msg = sync self.reply_lock = threading.Lock() self.reply_lock.acquire() def get(self): """ Get and acknowledge message. """ self.reply_lock.release() return self._user_msg def sync(self): """ Wait for message acknowledgment if needed. """ if self._sync_msg: self.reply_lock.acquire() def __init__(self, handler=None, autoclose=False): """ Initialize EnginePort object. """ EngineClient.__init__(self, None, None, False, -1, autoclose) self.eh = handler # ports are no subject to fanout self.delayable = False # Port messages queue self._msgq = queue.Queue(DEFAULTS.port_qlimit) # Request pipe (readfd, writefd) = os.pipe() # Set nonblocking flag set_nonblock_flag(readfd) set_nonblock_flag(writefd) self.streams.set_stream('in', readfd, E_READ) self.streams.set_stream('out', writefd, E_WRITE) def __repr__(self): try: fd_in = self.streams['in'].fd except KeyError: fd_in = None try: fd_out = self.streams['out'].fd except KeyError: fd_out = None return "<%s at 0x%s (streams=(%s, %s))>" % (self.__class__.__name__, id(self), fd_in, fd_out) def _start(self): """Start port.""" if self.eh is not None: self.eh.ev_port_start(self) return self def _close(self, abort, timeout): """Close port.""" if not self._msgq.empty(): # purge msgq try: while not self._msgq.empty(): pmsg = self._msgq.get(block=False) LOGGER.debug('%r: dropped msg: %s', self, pmsg.get()) except queue.Empty: pass self._msgq = None del self.streams['out'] del self.streams['in'] self.invalidate() def _handle_read(self, sname): """ Handle a read notification. Called by the engine as the result of an event indicating that a read is available. """ readbuf = self._read(sname, 4096) for dummy_char in readbuf: # raise Empty if empty (should never happen) pmsg = self._msgq.get(block=False) self.eh.ev_msg(self, pmsg.get()) def msg(self, send_msg, send_once=False): """ Port message send method that will wait for acknowledgement unless the send_once parameter if set. May be called from another thread. Will generate ev_msg() on Port event handler (in Port task/thread). Return False if the message cannot be sent (eg. port closed). """ if self._msgq is None: # called after port closed? return False pmsg = EnginePort._Msg(send_msg, not send_once) self._msgq.put(pmsg, block=True, timeout=None) try: ret = os.write(self.streams['out'].fd, b'M') except OSError: raise pmsg.sync() return ret == 1 def msg_send(self, send_msg): """ Port message send-once method (no acknowledgement). See msg(). Return False if the message cannot be sent (eg. port closed). """ return self.msg(send_msg, send_once=True) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/lib/ClusterShell/Worker/Exec.py0000644104717000001440000003207714505632065021473 0ustar00sthiellusers# # Copyright (C) 2014-2015 CEA/DAM # Copyright (C) 2014-2015 Aurelien Degremont # Copyright (C) 2014-2017 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. 
#
# You should have received a copy of the GNU Lesser General Public
# License along with ClusterShell; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

"""
ClusterShell base worker for process-based workers.

This module manages the worker class used to spawn local commands,
possibly using a nodeset to behave like a distant worker. Like other
workers, it can run commands or copy files locally. This is the base
class for most other distant workers.
"""

import os

from string import Template

from ClusterShell.NodeSet import NodeSet
from ClusterShell.Worker.EngineClient import EngineClient
from ClusterShell.Worker.Worker import WorkerError, DistantWorker
from ClusterShell.Worker.Worker import _eh_sigspec_invoke_compat


def _replace_cmd(pattern, node, rank):
    """
    Replace keywords in `pattern' with value from `node' and `rank'.

        %h, %host map `node'
        %n, %rank map `rank'
    """
    variables = {
        'h': node,
        'host': node,
        'hosts': node,
        'n': rank or 0,
        'rank': rank or 0,
        # 'u': None,
    }

    class Replacer(Template):
        delimiter = '%'

    try:
        cmd = Replacer(pattern).substitute(variables)
    except (KeyError, ValueError) as error:
        msg = "%s is not a valid pattern, use '%%%%' to escape '%%'" % error
        raise WorkerError(msg)
    return cmd


class ExecClient(EngineClient):
    """
    Run a simple local command.

    Useful as a superclass for other more specific workers.
    """

    def __init__(self, node, command, worker, stderr, timeout,
                 autoclose=False, rank=None):
        """
        Create an EngineClient-type instance to locally run *command*.

        :param node: will be used as key.
        """
        EngineClient.__init__(self, worker, node, stderr, timeout, autoclose)
        self.rank = rank
        self.command = command
        self.popen = None
        # Declare writer stream to allow early buffering
        self.streams.set_writer(worker.SNAME_STDIN, None, retain=True)

    def _build_cmd(self):
        """
        Build the shell command line to start the command.

        Return a tuple containing the command and arguments, as a string or
        a list of strings, and a dict of additional environment variables.
        None may be returned if no environment change is required.
        """
        return (_replace_cmd(self.command, self.key, self.rank), None)

    def _start(self):
        """Prepare command and start client."""
        # Build command
        cmd, cmd_env = self._build_cmd()

        # If the command line is a string, interpret it as a shell command
        shell = isinstance(cmd, str)

        task = self.worker.task
        if task.info("debug", False):
            name = self.__class__.__name__.upper().split('.')[-1]
            if shell:
                task.info("print_debug")(task, "%s: %s" % (name, cmd))
            else:
                task.info("print_debug")(task, "%s: %s" % (name,
                                                           ' '.join(cmd)))

        self.popen = self._exec_nonblock(cmd, env=cmd_env, shell=shell)
        self._on_nodeset_start(self.key)
        return self

    def _close(self, abort, timeout):
        """Close client.
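        As a worked example of the bash-like convention applied below: a
        process killed by SIGKILL (signum 9) makes poll() return -9, which
        is reported as rc = 128 + 9 = 137.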
See EngineClient._close().""" if abort: # it's safer to call poll() first for long time completed processes prc = self.popen.poll() # if prc is None, process is still running if prc is None: try: # try to kill it self.popen.kill() except OSError: pass prc = self.popen.wait() self.streams.clear() self.invalidate() if prc >= 0: self._on_nodeset_close(self.key, prc) elif timeout: assert abort, "abort flag not set on timeout" self.worker._on_node_timeout(self.key) elif not abort: # if process was signaled, return 128 + signum (bash-like) self._on_nodeset_close(self.key, 128 + -prc) self.worker._check_fini() def _on_nodeset_start(self, nodes): """local wrapper over _on_start that can also handle nodeset""" if isinstance(nodes, NodeSet): for node in nodes: self.worker._on_start(node) else: self.worker._on_start(nodes) def _on_nodeset_close(self, nodes, rc): """local wrapper over _on_node_rc that can also handle nodeset""" if isinstance(nodes, NodeSet): for node in nodes: self.worker._on_node_close(node, rc) else: self.worker._on_node_close(nodes, rc) def _on_nodeset_msgline(self, nodes, msg, sname): """local wrapper over _on_node_msgline that can also handle nodeset""" if isinstance(nodes, NodeSet): for node in nodes: self.worker._on_node_msgline(node, msg, sname) else: self.worker._on_node_msgline(nodes, msg, sname) def _flush_read(self, sname): """Called at close time to flush stream read buffer.""" stream = self.streams[sname] if stream.readable() and stream.rbuf: # We still have some read data available in buffer, but no # EOL. Generate a final message before closing. self._on_nodeset_msgline(self.key, stream.rbuf, sname) def _handle_read(self, sname): """ Handle a read notification. Called by the engine as the result of an event indicating that a read is available. """ # Local variables optimization worker = self.worker task = worker.task key = self.key node_msgline = self._on_nodeset_msgline debug = task.info("debug", False) if debug: print_debug = task.info("print_debug") for msg in self._readlines(sname): if debug: print_debug(task, "%s: %s" % (key, msg)) node_msgline(key, msg, sname) # handle full msg line class CopyClient(ExecClient): """ Run a local `cp' between a source and destination. Destination could be a directory. """ def __init__(self, node, source, dest, worker, stderr, timeout, autoclose, preserve, reverse, rank=None): """Create an EngineClient-type instance to locally run 'cp'.""" ExecClient.__init__(self, node, None, worker, stderr, timeout, autoclose, rank) self.source = source self.dest = dest # Preserve modification times and modes? self.preserve = preserve # Reverse copy? self.reverse = reverse # Directory? # FIXME: file sanity checks could be moved to Copy._start() as we # should now be able to handle error when starting (#215). if self.reverse: self.isdir = os.path.isdir(self.dest) if not self.isdir: raise ValueError("reverse copy dest must be a directory") else: self.isdir = os.path.isdir(self.source) def _build_cmd(self): """ Build the shell command line to start the rcp command. Return an array of command and arguments. """ source = _replace_cmd(self.source, self.key, self.rank) dest = _replace_cmd(self.dest, self.key, self.rank) cmd_l = [ "cp" ] if self.isdir: cmd_l.append("-r") if self.preserve: cmd_l.append("-p") if self.reverse: cmd_l.append(dest) cmd_l.append(source) else: cmd_l.append(source) cmd_l.append(dest) return (cmd_l, None) class ExecWorker(DistantWorker): """ ClusterShell simple execution worker Class. It runs commands locally. 
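    Keyword placeholders are expanded per client by _replace_cmd() above;
    an illustrative sketch (command value assumed):

        >>> worker = ExecWorker(nodeset, handler=MyEventHandler(),
        ...                     command="echo %host rank=%rank")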
If a node list is provided, one command will be launched for each node and specific keywords will be replaced based on node name and rank. Local shell usage example: >>> worker = ExecWorker(nodeset, handler=MyEventHandler(), ... timeout=30, command="/bin/uptime") >>> task.schedule(worker) # schedule worker for execution >>> task.run() # run Local copy usage example: >>> worker = ExecWorker(nodeset, handler=MyEventHandler(), ... source="/etc/my.cnf", ... dest="/etc/my.cnf.bak") >>> task.schedule(worker) # schedule worker for execution >>> task.run() # run connect_timeout option is ignored by this worker. """ SHELL_CLASS = ExecClient COPY_CLASS = CopyClient def __init__(self, nodes, handler, timeout=None, **kwargs): """Create an ExecWorker and its engine client instances.""" DistantWorker.__init__(self, handler) self._close_count = 0 self._has_timeout = False self._clients = [] self.nodes = NodeSet(nodes) self.command = kwargs.get('command') self.source = kwargs.get('source') self.dest = kwargs.get('dest') self._create_clients(timeout=timeout, **kwargs) # # Spawn and manage EngineClient classes # def _create_clients(self, **kwargs): """ Create several shell and copy engine client instances based on worker properties. Additional arguments in `kwargs' will be used for client creation. There will be one client per node in self.nodes """ # do not iterate if special %hosts placeholder is found in command if self.command and ('%hosts' in self.command or '%{hosts}' in self.command): self._add_client(self.nodes, rank=None, **kwargs) else: for rank, node in enumerate(self.nodes): self._add_client(node, rank=rank, **kwargs) def _add_client(self, nodes, **kwargs): """Create one shell or copy client.""" autoclose = kwargs.get('autoclose', False) stderr = kwargs.get('stderr', False) rank = kwargs.get('rank') timeout = kwargs.get('timeout') if self.command is not None: cls = self.__class__.SHELL_CLASS self._clients.append(cls(nodes, self.command, self, stderr, timeout, autoclose, rank)) elif self.source: cls = self.__class__.COPY_CLASS self._clients.append(cls(nodes, self.source, self.dest, self, stderr, timeout, autoclose, kwargs.get('preserve', False), kwargs.get('reverse', False), rank)) else: raise ValueError("missing command or source parameter in " "worker constructor") def _engine_clients(self): """ Used by upper layer to get the list of underlying created engine clients. """ return self._clients def write(self, buf, sname=None): """Write to worker clients.""" sname = sname or self.SNAME_STDIN for client in self._clients: if sname in client.streams: client._write(sname, buf) def set_write_eof(self, sname=None): """ Tell worker to close its writer file descriptors once flushed. Do not perform writes after this call. """ for client in self._clients: client._set_write_eof(sname or self.SNAME_STDIN) def abort(self): """Abort processing any action by this worker.""" for client in self._clients: client.abort() # # Events # def _on_node_timeout(self, node): DistantWorker._on_node_timeout(self, node) self._has_timeout = True def _check_fini(self): """ Must be called by each client when closing. If they are all closed, trigger the required events. 
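        On the consumer side, the outcome is observed through the event
        handler; an illustrative sketch using the 1.8+ ev_close signature:

            >>> class DoneHandler(EventHandler):
            ...     def ev_close(self, worker, timedout):
            ...         if timedout:
            ...             print("worker closed after timeout")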
""" self._close_count += 1 assert self._close_count <= len(self._clients) if self._close_count == len(self._clients) and self.eh is not None: # also use hasattr check because ev_timeout was missing in 1.8.0 if self._has_timeout and hasattr(self.eh, 'ev_timeout'): # Legacy ev_timeout event self.eh.ev_timeout(self) _eh_sigspec_invoke_compat(self.eh.ev_close, 2, self, self._has_timeout) WORKER_CLASS = ExecWorker ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/Worker/Pdsh.py0000644104717000001440000002241614501416555021501 0ustar00sthiellusers# # Copyright (C) 2007-2016 CEA/DAM # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ WorkerPdsh ClusterShell worker for executing commands with LLNL pdsh. """ import errno import os import shlex from ClusterShell.NodeSet import NodeSet from ClusterShell.Worker.EngineClient import EngineClientError from ClusterShell.Worker.EngineClient import EngineClientNotSupportedError from ClusterShell.Worker.Worker import WorkerError from ClusterShell.Worker.Exec import ExecWorker, ExecClient, CopyClient class PdshClient(ExecClient): """EngineClient which run 'pdsh'""" MODE = 'pdsh' def __init__(self, node, command, worker, stderr, timeout, autoclose=False, rank=None): ExecClient.__init__(self, node, command, worker, stderr, timeout, autoclose, rank) self._closed_nodes = NodeSet() def _build_cmd(self): """ Build the shell command line to start the command. Return an array of command and arguments. """ task = self.worker.task pdsh_env = {} # Build pdsh command path = task.info("pdsh_path") or "pdsh" cmd_l = [os.path.expanduser(pathc) for pathc in shlex.split(path)] cmd_l.append("-b") fanout = task.info("fanout", 0) if fanout > 0: cmd_l.append("-f %d" % fanout) # Pdsh flag '-t' do not really works well. Better to use # PDSH_SSH_ARGS_APPEND variable to transmit ssh ConnectTimeout # flag. connect_timeout = task.info("connect_timeout", 0) if connect_timeout > 0: pdsh_env['PDSH_SSH_ARGS_APPEND'] = "-o ConnectTimeout=%d" % \ connect_timeout command_timeout = task.info("command_timeout", 0) if command_timeout > 0: cmd_l.append("-u %d" % command_timeout) cmd_l.append("-w %s" % self.key) cmd_l.append("%s" % self.command) return (cmd_l, pdsh_env) def _close(self, abort, timeout): """Close client. 
See EngineClient._close().""" if abort: # it's safer to call poll() first for long time completed processes prc = self.popen.poll() # if prc is None, process is still running if prc is None: try: # try to kill it self.popen.kill() except OSError: pass prc = self.popen.wait() if prc > 0: raise WorkerError("Cannot run pdsh (error %d)" % prc) self.streams.clear() self.invalidate() if timeout: assert abort, "abort flag not set on timeout" for node in (self.key - self._closed_nodes): self.worker._on_node_timeout(node) else: for node in (self.key - self._closed_nodes): self.worker._on_node_close(node, 0) self.worker._check_fini() def _parse_line(self, line, sname): """ Parse Pdsh line syntax. """ if line.startswith(b"pdsh@") or \ line.startswith(b"pdcp@") or \ line.startswith(b"sending "): try: # pdsh@cors113: cors115: ssh exited with exit code 1 # 0 1 2 3 4 5 6 7 # corsUNKN: ssh: corsUNKN: Name or service not known # 0 1 2 3 4 5 6 7 # pdsh@fortoy0: fortoy101: command timeout # 0 1 2 3 # sending SIGTERM to ssh fortoy112 pid 32014 # 0 1 2 3 4 5 6 # pdcp@cors113: corsUNKN: ssh exited with exit code 255 # 0 1 2 3 4 5 6 7 # pdcp@cors113: cors115: fatal: /var/cache/shine/... # 0 1 2 3... words = line.split() # Set return code for nodename of worker if self.MODE == 'pdsh': if len(words) == 4 and words[2] == b"command" and \ words[3] == b"timeout": pass elif len(words) == 8 and words[3] == b"exited" and \ words[7].isdigit(): nodename = words[1][:-1].decode() self._closed_nodes.add(nodename) self.worker._on_node_close(nodename, int(words[7])) elif self.MODE == 'pdcp': nodename = words[1][:-1].decode() self._closed_nodes.add(nodename) self.worker._on_node_close(nodename, errno.ENOENT) except Exception as exc: raise EngineClientError("Pdsh parser error: %s" % exc) else: # split pdsh reply "nodename: msg" nodename, msg = line.split(b': ', 1) self.worker._on_node_msgline(nodename.decode(), msg, sname) def _flush_read(self, sname): """Called at close time to flush stream read buffer.""" pass def _handle_read(self, sname): """Engine is telling us a read is available.""" debug = self.worker.task.info("debug", False) if debug: print_debug = self.worker.task.info("print_debug") suffix = "" if sname == 'stderr': suffix = "@STDERR" for msg in self._readlines(sname): if debug: print_debug(self.worker.task, "PDSH%s: %s" % (suffix, msg)) self._parse_line(msg, sname) class PdcpClient(CopyClient, PdshClient): """EngineClient when pdsh is run to copy file, using pdcp.""" MODE = 'pdcp' def __init__(self, node, source, dest, worker, stderr, timeout, autoclose, preserve, reverse, rank=None): CopyClient.__init__(self, node, source, dest, worker, stderr, timeout, autoclose, preserve, reverse, rank) PdshClient.__init__(self, node, None, worker, stderr, timeout, autoclose, rank) def _build_cmd(self): cmd_l = [] # Build pdcp command if self.reverse: path = self.worker.task.info("rpdcp_path") or "rpdcp" else: path = self.worker.task.info("pdcp_path") or "pdcp" cmd_l = [os.path.expanduser(pathc) for pathc in shlex.split(path)] cmd_l.append("-b") fanout = self.worker.task.info("fanout", 0) if fanout > 0: cmd_l.append("-f %d" % fanout) connect_timeout = self.worker.task.info("connect_timeout", 0) if connect_timeout > 0: cmd_l.append("-t %d" % connect_timeout) cmd_l.append("-w %s" % self.key) if self.isdir: cmd_l.append("-r") if self.preserve: cmd_l.append("-p") cmd_l.append(self.source) cmd_l.append(self.dest) return (cmd_l, None) class WorkerPdsh(ExecWorker): """ ClusterShell pdsh-based worker Class. 
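    The external pdsh command is required. Its path may be customized
    through task info, e.g. (illustrative path):

    >>> task.set_info("pdsh_path", "/opt/pdsh/bin/pdsh")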
Remote Shell (pdsh) usage example: >>> worker = WorkerPdsh(nodeset, handler=MyEventHandler(), ... timeout=30, command="/bin/hostname") >>> task.schedule(worker) # schedule worker for execution >>> task.resume() # run Remote Copy (pdcp) usage example: >>> worker = WorkerPdsh(nodeset, handler=MyEventHandler(), ... timeout=30, source="/etc/my.conf", ... dest="/etc/my.conf") >>> task.schedule(worker) # schedule worker for execution >>> task.resume() # run Known limitations: - write() is not supported by WorkerPdsh - return codes == 0 are not guaranteed when a timeout is used (rc > 0 are fine) """ SHELL_CLASS = PdshClient COPY_CLASS = PdcpClient # # Spawn and control # def _create_clients(self, **kwargs): self._add_client(self.nodes, **kwargs) def write(self, buf): """ Write data to process. Not supported with Pdsh worker. """ raise EngineClientNotSupportedError("writing to stdin is not " "supported by pdsh worker") def set_write_eof(self): """ Tell worker to close its writer file descriptor once flushed. Do not perform writes after this call. Not supported by PDSH Worker. """ raise EngineClientNotSupportedError("writing to stdin is not " "supported by pdsh worker") WORKER_CLASS = WorkerPdsh ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/lib/ClusterShell/Worker/Popen.py0000644104717000001440000000772614505632065021673 0ustar00sthiellusers# # Copyright (C) 2008-2015 CEA/DAM # Copyright (C) 2015 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ WorkerPopen ClusterShell worker for executing local commands. Usage example: >>> worker = WorkerPopen("/bin/uname", key="mykernel") >>> task.schedule(worker) # schedule worker >>> task.resume() # run task >>> worker.retcode() # get return code 0 >>> worker.read() # read command output 'Linux' """ from ClusterShell.Worker.Worker import WorkerSimple, StreamClient from ClusterShell.Worker.Worker import _eh_sigspec_invoke_compat class PopenClient(StreamClient): def __init__(self, worker, key, stderr, timeout, autoclose): """PopenClient initializer""" StreamClient.__init__(self, worker, key, stderr, timeout, autoclose) self.popen = None self.rc = None # Declare writer stream to allow early buffering self.streams.set_writer(worker.SNAME_STDIN, None, retain=False) def _start(self): """Worker is starting.""" assert not self.worker.started assert self.popen is None self.popen = self._exec_nonblock(self.worker.command, shell=True) task = self.worker.task if task.info("debug", False): task.info("print_debug")(task, "POPEN: %s" % self.worker.command) self.worker._on_start(self.key) return self def _close(self, abort, timeout): """ Close client. See EngineClient._close(). 
""" if abort: # it's safer to call poll() first for long time completed processes prc = self.popen.poll() # if prc is None, process is still running if prc is None: try: # try to kill it self.popen.kill() except OSError: pass prc = self.popen.wait() self.streams.clear() self.invalidate() if prc >= 0: # filter valid rc self.rc = prc self.worker._on_close(self.key, prc) elif timeout: assert abort, "abort flag not set on timeout" self.worker._on_timeout(self.key) elif not abort: # if process was signaled, return 128 + signum (bash-like) self.rc = 128 + -prc self.worker._on_close(self.key, self.rc) if self.worker.eh is not None: _eh_sigspec_invoke_compat(self.worker.eh.ev_close, 2, self.worker, timeout) class WorkerPopen(WorkerSimple): """ Implements the Popen Worker. """ def __init__(self, command, key=None, handler=None, stderr=False, timeout=-1, autoclose=False): """Initialize Popen worker.""" WorkerSimple.__init__(self, None, None, None, key, handler, stderr, timeout, autoclose, client_class=PopenClient) self.command = command if not self.command: raise ValueError("missing command parameter in WorkerPopen " "constructor") self.key = key def retcode(self): """Return return code or None if command is still in progress.""" return self.clients[0].rc WORKER_CLASS = WorkerPopen ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/Worker/Rsh.py0000644104717000001440000001164414501416555021340 0ustar00sthiellusers# # Copyright (C) 2013-2015 CEA/DAM # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ ClusterShell RSH support It could also handles rsh forks, like krsh or mrsh. This is also the base class for rsh evolutions, like Ssh worker. """ import os import shlex import re from ClusterShell.Worker.Exec import ExecClient, CopyClient, ExecWorker class RshClient(ExecClient): """ Rsh EngineClient. """ def __init__(self, node, command, worker, stderr, timeout, autoclose=False, rank=None): ExecClient.__init__(self, node, command, worker, stderr, timeout, autoclose, rank) self.rsh_rc = None def _build_cmd(self): """ Build the shell command line to start the rsh command. Return an array of command and arguments. 
""" # Does not support 'connect_timeout' task = self.worker.task path = task.info("rsh_path") or "rsh" user = task.info("rsh_user") options = task.info("rsh_options") cmd_l = [os.path.expanduser(pathc) for pathc in shlex.split(path)] if user: cmd_l.append("-l") cmd_l.append(user) # Add custom options if options: cmd_l += shlex.split(options) cmd_l.append("%s" % self.key) # key is the node cmd_l.append("%s" % self.command) # rsh does not properly return exit status # force the exit status to be printed out cmd_l.append("; echo XXRETCODE: $?") return (cmd_l, None) def _on_nodeset_msgline(self, nodes, msg, sname): """Override _on_nodeset_msgline to parse magic return code""" match = re.search(r"^XXRETCODE: (\d+)$", msg.decode()) if match: self.rsh_rc = int(match.group(1)) else: ExecClient._on_nodeset_msgline(self, nodes, msg, sname) def _on_nodeset_close(self, nodes, rc): """Override _on_nodeset_close to return rsh_rc""" if (rc == 0 or rc == 1) and self.rsh_rc is not None: rc = self.rsh_rc ExecClient._on_nodeset_close(self, nodes, rc) class RcpClient(CopyClient): """ Rcp EngineClient. """ def _build_cmd(self): """ Build the shell command line to start the rcp command. Return an array of command and arguments. """ # Does not support 'connect_timeout' task = self.worker.task path = task.info("rcp_path") or "rcp" user = task.info("rsh_user") options = task.info("rcp_options") or task.info("rsh_options") cmd_l = [os.path.expanduser(pathc) for pathc in shlex.split(path)] if self.isdir: cmd_l.append("-r") if self.preserve: cmd_l.append("-p") # Add custom rcp options if options: cmd_l += shlex.split(options) if self.reverse: if user: cmd_l.append("%s@%s:%s" % (user, self.key, self.source)) else: cmd_l.append("%s:%s" % (self.key, self.source)) cmd_l.append(os.path.join(self.dest, "%s.%s" % \ (os.path.basename(self.source), self.key))) else: cmd_l.append(self.source) if user: cmd_l.append("%s@%s:%s" % (user, self.key, self.dest)) else: cmd_l.append("%s:%s" % (self.key, self.dest)) return (cmd_l, None) class WorkerRsh(ExecWorker): """ ClusterShell rsh-based worker Class. Remote Shell (rsh) usage example: >>> worker = WorkerRsh(nodeset, handler=MyEventHandler(), ... timeout=30, command="/bin/hostname") >>> task.schedule(worker) # schedule worker for execution >>> task.resume() # run Remote Copy (rcp) usage example: >>> worker = WorkerRsh(nodeset, handler=MyEventHandler(), ... source="/etc/my.conf", ... dest="/etc/my.conf") >>> task.schedule(worker) # schedule worker for execution >>> task.resume() # run connect_timeout option is ignored by this worker. """ SHELL_CLASS = RshClient COPY_CLASS = RcpClient WORKER_CLASS=WorkerRsh ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/Worker/Ssh.py0000644104717000001440000001255114501416555021337 0ustar00sthiellusers# # Copyright (C) 2008-2015 CEA/DAM # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. 
# # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ ClusterShell Ssh/Scp support This module implements OpenSSH engine client and task's worker. """ import os # Older versions of shlex can not handle unicode correctly. # Consider using ushlex instead. import shlex from ClusterShell.Worker.Exec import ExecClient, CopyClient, ExecWorker class SshClient(ExecClient): """ Ssh EngineClient. """ def _build_cmd(self): """ Build the shell command line to start the ssh command. Return an array of command and arguments. """ task = self.worker.task path = task.info("ssh_path") or "ssh" user = task.info("ssh_user") options = task.info("ssh_options") # Build ssh command cmd_l = [os.path.expanduser(pathc) for pathc in shlex.split(path)] # Add custom ssh options first as the first obtained value is # used. Thus all options are overridable by custom options. if options: # use expanduser() for options like '-i ~/.ssh/my_id_rsa' cmd_l += [os.path.expanduser(opt) for opt in shlex.split(options)] # Hardwired options (overridable by ssh_options) # note: you should use only long-format options here cmd_l += ["-oForwardAgent=no", "-oForwardX11=no"] if user: cmd_l.append("-l") cmd_l.append(user) connect_timeout = task.info("connect_timeout", 0) if connect_timeout > 0: cmd_l.append("-oConnectTimeout=%d" % connect_timeout) # Disable passphrase/password querying # When used together with sshpass this must be overwritten # by a custom option to "-oBatchMode=no". cmd_l.append("-oBatchMode=yes") cmd_l.append("%s" % self.key) cmd_l.append("%s" % self.command) return (cmd_l, None) class ScpClient(CopyClient): """ Scp EngineClient. """ def _build_cmd(self): """ Build the shell command line to start the scp command. Return an array of command and arguments. """ task = self.worker.task path = task.info("scp_path") or "scp" user = task.info("scp_user") or task.info("ssh_user") # If defined exclusively use scp_options. If no scp_options # given use ssh_options instead. options = task.info("scp_options") or task.info("ssh_options") # Build scp command cmd_l = [os.path.expanduser(pathc) for pathc in shlex.split(path)] # Add custom ssh options first as the first obtained value is # used. Thus all options are overridable by custom options. if options: # use expanduser() for options like '-i ~/.ssh/my_id_rsa' cmd_l += [os.path.expanduser(opt) for opt in shlex.split(options)] # Hardwired options if self.isdir: cmd_l.append("-r") if self.preserve: cmd_l.append("-p") connect_timeout = task.info("connect_timeout", 0) if connect_timeout > 0: cmd_l.append("-oConnectTimeout=%d" % connect_timeout) # Disable passphrase/password querying # When used together with sshpass this must be overwritten # by a custom option to "-oBatchMode=no". #cmd_l.append("-oBatchMode=yes") if self.reverse: if user: cmd_l.append("%s@[%s]:%s" % (user, self.key, self.source)) else: cmd_l.append("[%s]:%s" % (self.key, self.source)) cmd_l.append(os.path.join(self.dest, "%s.%s" % \ (os.path.basename(self.source), self.key))) else: cmd_l.append(self.source) if user: cmd_l.append("%s@[%s]:%s" % (user, self.key, self.dest)) else: cmd_l.append("[%s]:%s" % (self.key, self.dest)) return (cmd_l, None) class WorkerSsh(ExecWorker): """ ClusterShell ssh-based worker Class. Remote Shell (ssh) usage example: >>> worker = WorkerSsh(nodeset, handler=MyEventHandler(), ... 
timeout=30, command="/bin/hostname") >>> task.schedule(worker) # schedule worker for execution >>> task.resume() # run Remote Copy (scp) usage example: >>> worker = WorkerSsh(nodeset, handler=MyEventHandler(), ... timeout=30, source="/etc/my.conf", ... dest="/etc/my.conf") >>> task.schedule(worker) # schedule worker for execution >>> task.resume() # run """ SHELL_CLASS = SshClient COPY_CLASS = ScpClient WORKER_CLASS=WorkerSsh ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/Worker/Tree.py0000644104717000001440000005630014501416555021501 0ustar00sthiellusers# # Copyright (C) 2011-2016 CEA/DAM # Copyright (C) 2015-2017 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA # This file is part of the ClusterShell library. """ ClusterShell tree propagation worker """ import base64 import logging import os from os.path import basename, dirname, isfile, normpath import sys import tarfile import tempfile from ClusterShell.Event import EventHandler from ClusterShell.NodeSet import NodeSet from ClusterShell.Worker.EngineClient import EnginePort from ClusterShell.Worker.Worker import DistantWorker, WorkerError from ClusterShell.Worker.Worker import _eh_sigspec_invoke_compat from ClusterShell.Worker.Exec import ExecWorker from ClusterShell.Propagation import PropagationTreeRouter class MetaWorkerEventHandler(EventHandler): """Handle events for the meta worker TreeWorker""" def __init__(self, metaworker): self.metaworker = metaworker self.logger = logging.getLogger(__name__) def ev_start(self, worker): """ Called to indicate that a worker has just started. """ self.logger.debug("MetaWorkerEventHandler: ev_start") self.metaworker._start_count += 1 def ev_read(self, worker, node, sname, msg): """ Called to indicate that a worker has data to read. """ self.metaworker._on_node_msgline(node, msg, sname) def ev_written(self, worker, node, sname, size): """ Called to indicate that writing has been done. """ metaworker = self.metaworker metaworker.current_node = node metaworker.current_sname = sname if metaworker.eh: metaworker.eh.ev_written(metaworker, node, sname, size) def ev_hup(self, worker, node, rc): """ Called to indicate that a worker's connection has been closed. """ self.metaworker._on_node_close(node, rc) def ev_close(self, worker, timedout): """ Called to indicate that a worker has just finished. It may have failed on timeout if timedout is set. """ self.logger.debug("MetaWorkerEventHandler: ev_close, timedout=%s", timedout) if timedout: # WARNING!!! this is not possible as metaworker is changing task's # shared timeout set! 
#for node in worker.iter_keys_timeout(): # self.metaworker._on_node_timeout(node) # we use NodeSet to copy set for node in NodeSet._fromlist1(worker.iter_keys_timeout()): self.metaworker._on_node_timeout(node) self.metaworker._check_fini() class TreeWorker(DistantWorker): """ ClusterShell tree worker Class. """ # copy and rcopy tar command formats # the choice of single or double quotes is essential UNTAR_CMD_FMT = "tar -xf - -C '%s'" TAR_CMD_FMT = "tar -cf - -C '%s' " \ "--transform \"s,^\\([^/]*\\)[/]*,\\1.$(hostname -s)/,\" " \ "'%s' | base64 -w 65536" class _IOPortHandler(EventHandler): """ Special control port event handler used for: * start the TreeWorker when the engine starts * early write handling: write buffering and eof tracking """ def __init__(self, treeworker): EventHandler.__init__(self) self.treeworker = treeworker def ev_port_start(self, port): """Event when port is registered.""" self.treeworker._start() def ev_msg(self, port, msg): """ Message received: call appropriate worker method. Used for TreeWorker.write() and set_write_eof(). """ func, args = msg[0], msg[1:] func(self.treeworker, *args) def __init__(self, nodes, handler, timeout, **kwargs): """ Initialize Tree worker instance. :param nodes: Targeted nodeset. :param handler: Worker EventHandler. :param timeout: Timeout value for worker. :param command: Command to execute. :param topology: Force specific TopologyTree. :param newroot: Root node of TopologyTree. """ DistantWorker.__init__(self, handler) self.logger = logging.getLogger(__name__) self.workers = [] self.nodes = NodeSet(nodes) self.timeout = timeout self.command = kwargs.get('command') self.source = kwargs.get('source') self.dest = kwargs.get('dest') autoclose = kwargs.get('autoclose', False) self.stderr = kwargs.get('stderr', False) self.logger.debug("stderr=%s", self.stderr) self.remote = kwargs.get('remote', True) self.preserve = kwargs.get('preserve', None) self.reverse = kwargs.get('reverse', False) self._rcopy_bufs = {} self._rcopy_tars = {} self._close_count = 0 self._start_count = 0 self._child_count = 0 self._target_count = 0 self._has_timeout = False self._started = False if self.command is None and self.source is None: raise ValueError("missing command or source parameter in " "TreeWorker constructor") # rcopy is enforcing separated stderr to handle tar error messages # because stdout is used for data transfer if self.source and self.reverse: self.stderr = True # build gateway invocation command invoke_gw_args = [] for envname in ('PYTHONPATH', 'CLUSTERSHELL_GW_PYTHON_EXECUTABLE', 'CLUSTERSHELL_GW_LOG_DIR', 'CLUSTERSHELL_GW_LOG_LEVEL', 'CLUSTERSHELL_GW_B64_LINE_LENGTH'): envval = os.getenv(envname) if envval: invoke_gw_args.append("%s=%s" % (envname, envval)) # It is critical to launch a remote Python executable with the same # major version (ie. python or python3) as we use the (default) pickle # protocol and for example, version 3+ (Python 3 with bytes # support) cannot be unpickled by Python 2. 
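        # Illustrative sketch (not executed): with, say,
        # CLUSTERSHELL_GW_LOG_LEVEL=debug set in the environment, the
        # invoke_gateway string built below would look roughly like:
        #   'CLUSTERSHELL_GW_LOG_LEVEL=debug python3 -m ClusterShell.Gateway -Bu'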
python_executable = os.getenv('CLUSTERSHELL_GW_PYTHON_EXECUTABLE', basename(sys.executable or 'python')) invoke_gw_args.append(python_executable) invoke_gw_args.extend(['-m', 'ClusterShell.Gateway', '-Bu']) self.invoke_gateway = ' '.join(invoke_gw_args) self.topology = kwargs.get('topology') if self.topology is not None: self.newroot = kwargs.get('newroot') or \ str(self.topology.root.nodeset) self.router = PropagationTreeRouter(self.newroot, self.topology) else: self.router = None self.upchannel = None self.metahandler = MetaWorkerEventHandler(self) # gateway (string) -> active targets selection self.gwtargets = {} # IO port self._port = EnginePort(handler=TreeWorker._IOPortHandler(self), autoclose=True) def _start(self): # Engine has started: initialize router self.topology = self.topology or self.task.topology self.router = self.task._default_router(self.router) self._launch(self.nodes) self._check_ini() self._started = True def _launch(self, nodes): self.logger.debug("TreeWorker._launch on %s (fanout=%d)", nodes, self.task.info("fanout")) # Prepare copy params if source is defined destdir = None if self.source: if self.reverse: self.logger.debug("rcopy source=%s, dest=%s", self.source, self.dest) # dest is a directory destdir = self.dest else: self.logger.debug("copy source=%s, dest=%s", self.source, self.dest) # Special processing to determine best arcname and destdir for # tar. The only case that we don't support is when source is a # file and dest is a dir without a finishing / (in that case we # cannot determine remotely whether it is a file or a # directory). if isfile(self.source): # dest is not normalized here arcname = basename(self.dest) or \ basename(normpath(self.source)) destdir = dirname(self.dest) else: # source is a directory: if dest has a trailing slash # like in /tmp/ then arcname is basename(source) # but if dest is /tmp/newname (without leading slash) then # arcname becomes newname. if self.dest[-1] == '/': arcname = basename(self.source) else: arcname = basename(self.dest) # dirname has not the same behavior when a leading slash is # present, and we want that. 
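                    # Illustrative values: source='/var/tmp/data' with
                    # dest='/tmp/' gives arcname='data', while
                    # dest='/tmp/newname' gives arcname='newname'; destdir
                    # is computed just below.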
destdir = dirname(self.dest) self.logger.debug("copy arcname=%s destdir=%s", arcname, destdir) # And launch stuffs next_hops = self._distribute(self.task.info("fanout"), nodes.copy()) self.logger.debug("next_hops=%s" % [(str(n), str(v)) for n, v in next_hops]) for gw, targets in next_hops: if gw == targets: self.logger.debug('task.shell cmd=%s source=%s nodes=%s ' 'timeout=%s remote=%s', self.command, self.source, nodes, self.timeout, self.remote) self._child_count += 1 self._target_count += len(targets) if self.remote: if self.source: # Note: specific case where targets are not in topology # as self.source is never used on remote gateways # so we try a direct copy/rcopy: self.logger.debug('_launch copy r=%s source=%s dest=%s', self.reverse, self.source, self.dest) worker = self.task.copy(self.source, self.dest, targets, handler=self.metahandler, stderr=self.stderr, timeout=self.timeout, preserve=self.preserve, reverse=self.reverse, tree=False) else: worker = self.task.shell(self.command, nodes=targets, timeout=self.timeout, handler=self.metahandler, stderr=self.stderr, tree=False) else: assert self.source is None workerclass = self.task.default('local_worker') worker = workerclass(nodes=targets, command=self.command, handler=self.metahandler, timeout=self.timeout, stderr=self.stderr) self.task.schedule(worker) self.workers.append(worker) self.logger.debug("added child worker %s count=%d", worker, len(self.workers)) else: self.logger.debug("trying gateway %s to reach %s", gw, targets) if self.source: self._copy_remote(self.source, destdir, targets, gw, self.timeout, self.reverse) else: self._execute_remote(self.command, targets, gw, self.timeout) # Copy mode: send tar data after above workers have been initialized if self.source and not self.reverse: try: # create temporary tar file with all source files tmptar = tempfile.TemporaryFile() tar = tarfile.open(fileobj=tmptar, mode='w:') tar.add(self.source, arcname=arcname) tar.close() tmptar.flush() # read generated tar file tmptar.seek(0) rbuf = tmptar.read(32768) # send tar data to remote targets only while len(rbuf) > 0: self._write_remote(rbuf) rbuf = tmptar.read(32768) except OSError as exc: raise WorkerError(exc) def _distribute(self, fanout, dst_nodeset): """distribute target nodes between next hop gateways""" self.router.fanout = fanout distribution = {} for gw, dstset in self.router.dispatch(dst_nodeset): distribution.setdefault(str(gw), NodeSet()).add(dstset) return tuple((NodeSet(k), v) for k, v in distribution.items()) def _copy_remote(self, source, dest, targets, gateway, timeout, reverse): """run a remote copy in tree mode (using gateway)""" self.logger.debug("_copy_remote gateway=%s source=%s dest=%s " "reverse=%s", gateway, source, dest, reverse) self._target_count += len(targets) self.gwtargets.setdefault(str(gateway), NodeSet()).add(targets) # tar commands are built here and launched on targets if reverse: # these weird replace calls aim to escape single quotes ' within '' srcdir = dirname(source).replace("'", '\'\"\'\"\'') srcbase = basename(normpath(self.source)).replace("'", '\'\"\'\"\'') cmd = self.TAR_CMD_FMT % (srcdir, srcbase) else: cmd = self.UNTAR_CMD_FMT % dest.replace("'", '\'\"\'\"\'') self.logger.debug('_copy_remote: tar cmd: %s', cmd) pchan = self.task._pchannel(gateway, self) pchan.shell(nodes=targets, command=cmd, worker=self, timeout=timeout, stderr=self.stderr, gw_invoke_cmd=self.invoke_gateway, remote=self.remote) def _execute_remote(self, cmd, targets, gateway, timeout): """run command against a remote 
node via a gateway""" self.logger.debug("_execute_remote gateway=%s cmd=%s targets=%s", gateway, cmd, targets) self._target_count += len(targets) self.gwtargets.setdefault(str(gateway), NodeSet()).add(targets) pchan = self.task._pchannel(gateway, self) pchan.shell(nodes=targets, command=cmd, worker=self, timeout=timeout, stderr=self.stderr, gw_invoke_cmd=self.invoke_gateway, remote=self.remote) def _relaunch(self, previous_gateway): """Redistribute and relaunch commands on targets that were running on previous_gateway (which is probably marked unreachable by now) NOTE: Relaunch is always called after failed remote execution, so previous_gateway must be defined. However, it is not guaranteed that the relaunch is going to be performed using gateways (that's a feature). """ targets = self.gwtargets[previous_gateway].copy() self.logger.debug("_relaunch on targets %s from previous_gateway %s", targets, previous_gateway) for target in targets: self.gwtargets[previous_gateway].remove(target) self._check_fini(previous_gateway) self._target_count -= len(targets) self._launch(targets) def _engine_clients(self): """ Access underlying engine clients. """ return [self._port] def _on_remote_node_msgline(self, node, msg, sname, gateway): """remote msg received""" if not self.source or not self.reverse or sname != 'stdout': DistantWorker._on_node_msgline(self, node, msg, sname) return # rcopy only: we expect base64 encoded tar content on stdout encoded = self._rcopy_bufs.setdefault(node, b'') + msg if node not in self._rcopy_tars: self._rcopy_tars[node] = tempfile.TemporaryFile() # partial base64 decoding requires a multiple of 4 characters encoded_sz = (len(encoded) // 4) * 4 # write decoded binary msg to node temporary tarfile self._rcopy_tars[node].write(base64.b64decode(encoded[0:encoded_sz])) # keep trailing encoded chars for next time self._rcopy_bufs[node] = encoded[encoded_sz:] def _on_remote_node_close(self, node, rc, gateway): """remote node closing with return code""" DistantWorker._on_node_close(self, node, rc) self.logger.debug("_on_remote_node_close %s %s via gw %s", node, self._close_count, gateway) # finalize rcopy: extract tar data if self.source and self.reverse: for bnode, buf in self._rcopy_bufs.items(): tarfileobj = self._rcopy_tars[bnode] if len(buf) > 0: self.logger.debug("flushing node %s buf %d bytes", bnode, len(buf)) tarfileobj.write(buf) tarfileobj.flush() tarfileobj.seek(0) tmptar = tarfile.open(fileobj=tarfileobj) try: self.logger.debug("%s extracting %d members in dest %s", bnode, len(tmptar.getmembers()), self.dest) tmptar.extractall(path=self.dest) except IOError as ex: self._on_remote_node_msgline(bnode, ex, 'stderr', gateway) finally: tmptar.close() self._rcopy_bufs = {} self._rcopy_tars = {} self.gwtargets[str(gateway)].remove(node) self._close_count += 1 self._check_fini(gateway) def _on_remote_node_timeout(self, node, gateway): """remote node timeout received""" DistantWorker._on_node_timeout(self, node) self.logger.debug("_on_remote_node_timeout %s via gw %s", node, gateway) self._close_count += 1 self._has_timeout = True self.gwtargets[str(gateway)].remove(node) self._check_fini(gateway) def _on_node_close(self, node, rc): DistantWorker._on_node_close(self, node, rc) self.logger.debug("_on_node_close %s %s (%s)", node, rc, self._close_count) self._close_count += 1 def _on_node_timeout(self, node): DistantWorker._on_node_timeout(self, node) self.logger.debug("_on_node_timeout %s (%s)", node, self._close_count) self._close_count += 1 self._has_timeout = True def 
_check_ini(self): self.logger.debug("TreeWorker: _check_ini (%d, %d)", self._start_count, self._child_count) if self.eh and self._start_count >= self._child_count: # this part is called once self.eh.ev_start(self) # Blindly generate pickup events: this could maybe be improved, for # example, generated only when commands are sent to the gateways # or for direct targets, using MetaWorkerEventHandler. for node in self.nodes: _eh_sigspec_invoke_compat(self.eh.ev_pickup, 2, self, node) def _check_fini(self, gateway=None): self.logger.debug("check_fini %s %s", self._close_count, self._target_count) if self._close_count >= self._target_count: handler = self.eh if handler: # also use hasattr check because ev_timeout was missing in 1.8.0 if self._has_timeout and hasattr(handler, 'ev_timeout'): handler.ev_timeout(self) _eh_sigspec_invoke_compat(handler.ev_close, 2, self, self._has_timeout) # check completion of targets per gateway if gateway: targets = self.gwtargets[str(gateway)] if not targets: # no more active targets for this gateway self.logger.debug("TreeWorker._check_fini %s call pchannel_" "release for gw %s", self, gateway) self.task._pchannel_release(gateway, self) del self.gwtargets[str(gateway)] def _write_remote(self, buf): """Write buf to remote clients only.""" for gateway, targets in self.gwtargets.items(): assert len(targets) > 0 self.task._pchannel(gateway, self).write(nodes=targets, buf=buf, worker=self) def _set_write_eof_remote(self): for gateway, targets in self.gwtargets.items(): assert len(targets) > 0 self.task._pchannel(gateway, self).set_write_eof(nodes=targets, worker=self) def write(self, buf): """Write to worker clients.""" if not self._started: self._port.msg_send((TreeWorker.write, buf)) return osexc = None # Differentiate directly handled writes from remote ones for worker in self.workers: try: worker.write(buf) except OSError as exc: osexc = exc self._write_remote(buf) if osexc: raise osexc def set_write_eof(self): """ Tell worker to close its writer file descriptor once flushed. Do not perform writes after this call. """ if not self._started: self._port.msg_send((TreeWorker.set_write_eof, )) return # Differentiate directly handled EOFs from remote ones for worker in self.workers: worker.set_write_eof() self._set_write_eof_remote() def abort(self): """Abort processing any action by this worker.""" # Not yet supported by TreeWorker raise NotImplementedError("see github issue #229") # TreeWorker's former name (deprecated as of 1.8) WorkerTree = TreeWorker ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/lib/ClusterShell/Worker/Worker.py0000644104717000001440000006153314505632065022057 0ustar00sthiellusers# # Copyright (C) 2007-2016 CEA/DAM # Copyright (C) 2015-2017 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. 
#
# You should have received a copy of the GNU Lesser General Public
# License along with ClusterShell; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

"""
ClusterShell worker interface.

A worker is a generic object which provides "grouped" work in a specific
task.
"""

try:
    from inspect import getfullargspec  # py3
except ImportError:
    from inspect import getargspec as getfullargspec  # py2

import warnings

from ClusterShell.Worker.EngineClient import EngineClient
from ClusterShell.NodeSet import NodeSet
from ClusterShell.Engine.Engine import FANOUT_UNLIMITED, FANOUT_DEFAULT


def _eh_sigspec_invoke_compat(method, argc_legacy, *args):
    """
    Helper function to invoke an event handler method, with legacy signature
    compatibility when the method's actual argc matches argc_legacy.
    This should be removed when old signatures (< 1.8) aren't supported
    anymore (in 2.x).
    """
    argc_actual = len(getfullargspec(method)[0])
    if argc_actual == argc_legacy:
        # Use legacy signature (1.x) deprecated as of 1.9
        warnings.warn("%s should use new %s() signature"
                      % (method.__self__, method.__name__),
                      DeprecationWarning)
        return method(*args[0:argc_legacy - 1])
    else:
        # Assume new signature (2.x)
        return method(*args)


def _eh_sigspec_ev_read_17(ev_read):
    """Helper function to check whether ev_read has the old 1.7 signature."""
    if len(getfullargspec(ev_read)[0]) == 2:
        warnings.warn("%s should use new ev_read() signature" % \
                      ev_read.__self__, DeprecationWarning)
        return True
    return False


class WorkerException(Exception):
    """Generic worker exception."""

class WorkerError(WorkerException):
    """Generic worker error."""

# DEPRECATED: WorkerBadArgumentError exception is deprecated as of 1.4,
# use ValueError instead.
WorkerBadArgumentError = ValueError


class Worker(object):
    """
    Worker is an essential base class for the ClusterShell library. The goal
    of a worker object is to execute a common piece of work on a single
    target or on several targets (abstract notion) in parallel. Concrete
    targets, and also the notion of local or distant targets, are managed by
    Worker's subclasses (for example, see the DistantWorker base class).

    A configured Worker object is associated to a specific ClusterShell
    Task, which can be seen as a single-threaded Worker supervisor. Indeed,
    the work to be done is executed in parallel depending on other Workers
    and the Task's current parameters, like the current fanout value.

    ClusterShell is designed to write event-driven applications, and the
    Worker class is key here as Worker objects are passed as a parameter to
    most event handler methods (see the ClusterShell.Event.EventHandler
    class).

    Example of use:

        >>> from ClusterShell.Event import EventHandler
        >>> class MyOutputHandler(EventHandler):
        ...     def ev_read(self, worker, node, sname, msg):
        ...         print("%s: %s" % (node, msg))
        ...
    """

    # The following common stream names are recognized by the Task class.
    # They can be changed per Worker, thus avoiding any Task buffering.
    SNAME_STDIN = 'stdin'    #: stream name usually used for stdin
    SNAME_STDOUT = 'stdout'  #: stream name usually used for stdout
    SNAME_STDERR = 'stderr'  #: stream name usually used for stderr

    def __init__(self, handler):
        """Initializer. Should be called from derived classes."""
        # Associated EventHandler object
        self.eh = handler  #: associated :class:`.EventHandler`

        # Per Worker fanout value (positive integer).
        # Default is FANOUT_DEFAULT to use the fanout set at the Task level.
        # Change to FANOUT_UNLIMITED to always schedule this worker.
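        # Illustrative sketch (assumed caller code; _fanout is private):
        #     worker = ExecWorker(nodes, handler=None, command="uptime")
        #     worker._fanout = FANOUT_UNLIMITED  # before task.schedule()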
# NOTE: the fanout value must be set before the Worker starts and # cannot currently be changed afterwards. self._fanout = FANOUT_DEFAULT # Update task rc? [private] # TODO: to be replaced with Task Event Handlers self._update_task_rc = True # Parent task (once bound) self.task = None #: worker's task when scheduled or None self.started = False #: set to True when worker has started self.metaworker = None self.metarefcnt = 0 # current_x public variables (updated at each event accordingly) #: set to node in event handler; DEPRECATED: use :class:`.EventHandler` method argument **node** self.current_node = None #: set to stdout in event handler; DEPRECATED: use :class:`.EventHandler` method argument **msg** if ``sname==SNAME_STDOUT`` self.current_msg = None #: set to stderr message in event handler; DEPRECATED: use :class:`.EventHandler` method argument **msg** if ``sname==SNAME_STDERR`` self.current_errmsg = None #: set to return code in event handler; DEPRECATED: use :class:`.EventHandler` method argument **rc** self.current_rc = 0 #: set to stream name in event handler; DEPRECATED: use :class:`.EventHandler` method argument **sname** self.current_sname = None def _set_task(self, task): """Bind worker to task. Called by task.schedule().""" if self.task is not None: # one-shot-only schedule supported for now raise WorkerError("worker has already been scheduled") self.task = task def _task_bound_check(self): """Helper method to check that worker is bound to a task.""" if not self.task: raise WorkerError("worker is not task bound") def _engine_clients(self): """Return a list of underlying engine clients.""" raise NotImplementedError("Derived classes must implement.") # Event generators def _on_start(self, key): """Called on command start.""" self.current_node = key if not self.started: self.started = True if self.eh is not None: self.eh.ev_start(self) if self.eh is not None: _eh_sigspec_invoke_compat(self.eh.ev_pickup, 2, self, key) def _on_close(self, key, rc=None): """Called to generate events when the Worker is closing.""" if self._update_task_rc: # rc may be None here for example when called from StreamClient # Only update task if rc is not None. if rc is not None: self.task._rc_set(self, key, rc) self.current_node = key self.current_rc = rc if self.eh is not None: _eh_sigspec_invoke_compat(self.eh.ev_hup, 2, self, key, rc) def _on_written(self, key, bytes_count, sname): """Notification of bytes written.""" # set node and stream name (compat only) self.current_node = key self.current_sname = sname if self.eh is not None: self.eh.ev_written(self, key, sname, bytes_count) # Base getters def did_timeout(self): """Return whether this worker has aborted due to timeout.""" self._task_bound_check() return self.task._num_timeout_by_worker(self) > 0 def read(self, node=None, sname='stdout'): """Read worker stream buffer. Return stream read buffer of current worker. Arguments: :param node: node name, can also be set to None for simple worker having worker.key defined (default is None) :param sname: stream name (default is 'stdout') """ self._task_bound_check() return self.task._msg_by_source(self, node, sname) # Base actions def abort(self): """Abort processing any action by this worker. Safe to call on an already closing or aborting worker. 
""" raise NotImplementedError("Derived classes must implement.") def flush_buffers(self): """Flush any messages associated to this worker.""" self._task_bound_check() self.task._flush_buffers_by_worker(self) def flush_errors(self): """Flush any error messages associated to this worker.""" self._task_bound_check() self.task._flush_errors_by_worker(self) class DistantWorker(Worker): """Base class DistantWorker. DistantWorker provides a useful set of setters/getters to use with distant workers like ssh or pdsh. """ # Event generators def _on_node_msgline(self, node, msg, sname): """Message received from node, update last* stuffs.""" # Maxoptimize this method as it might be called very often. task = self.task assert not isinstance(node, NodeSet) # for testing # update task msgtree task._msg_add(self, node, sname, msg) ### LEGACY (1.x) ### # set stream name self.current_sname = sname # generate event self.current_node = node if sname == self.SNAME_STDERR: self.current_errmsg = msg if self.eh is not None: # call old ev_error for compat (default is no-op) if hasattr(self.eh, 'ev_error'): # missing in 1.8.0! self.eh.ev_error(self) # /!\ NOT elif if not _eh_sigspec_ev_read_17(self.eh.ev_read): ### FUTURE (2.x) ### self.eh.ev_read(self, node, sname, msg) else: self.current_msg = msg if self.eh is not None: # ev_read: check for old signature first (< 1.8) if _eh_sigspec_ev_read_17(self.eh.ev_read): self.eh.ev_read(self) else: ### FUTURE (2.x) ### self.eh.ev_read(self, node, sname, msg) def _on_node_close(self, node, rc): """Command return code received.""" Worker._on_close(self, node, rc) def _on_node_timeout(self, node): """Update on node timeout.""" # Update current_node to allow node resolution after ev_timeout. self.current_node = node self.task._timeout_add(self, node) def node_buffer(self, node): """Get specific node buffer.""" return self.read(node, self.SNAME_STDOUT) def node_error(self, node): """Get specific node error buffer.""" return self.read(node, self.SNAME_STDERR) node_error_buffer = node_error def node_retcode(self, node): """ Get specific node return code. :raises KeyError: command on node has not yet finished (no return code available), or this node is not known by this worker """ self._task_bound_check() try: rc = self.task._rc_by_source(self, node) except KeyError: raise KeyError(node) return rc node_rc = node_retcode def iter_buffers(self, match_keys=None): """ Returns an iterator over available buffers and associated NodeSet. If the optional parameter match_keys is defined, only keys found in match_keys are returned. """ self._task_bound_check() for msg, keys in self.task._call_tree_matcher( self.task._msgtree(self.SNAME_STDOUT).walk, match_keys, self): yield msg, NodeSet.fromlist(keys) def iter_errors(self, match_keys=None): """ Returns an iterator over available error buffers and associated NodeSet. If the optional parameter match_keys is defined, only keys found in match_keys are returned. """ self._task_bound_check() for msg, keys in self.task._call_tree_matcher( self.task._msgtree(self.SNAME_STDERR).walk, match_keys, self): yield msg, NodeSet.fromlist(keys) def iter_node_buffers(self, match_keys=None): """ Returns an iterator over each node and associated buffer. """ self._task_bound_check() return self.task._call_tree_matcher( self.task._msgtree(self.SNAME_STDOUT).items, match_keys, self) def iter_node_errors(self, match_keys=None): """ Returns an iterator over each node and associated error buffer. 
""" self._task_bound_check() return self.task._call_tree_matcher( self.task._msgtree(self.SNAME_STDERR).items, match_keys, self) def iter_retcodes(self, match_keys=None): """ Returns an iterator over return codes and associated NodeSet. If the optional parameter match_keys is defined, only keys found in match_keys are returned. """ self._task_bound_check() for rc, keys in self.task._rc_iter_by_worker(self, match_keys): yield rc, NodeSet.fromlist(keys) def iter_node_retcodes(self): """ Returns an iterator over each node and associated return code. """ self._task_bound_check() return self.task._krc_iter_by_worker(self) def num_timeout(self): """ Return the number of timed out "keys" (ie. nodes) for this worker. """ self._task_bound_check() return self.task._num_timeout_by_worker(self) def iter_keys_timeout(self): """ Iterate over timed out keys (ie. nodes) for a specific worker. """ self._task_bound_check() return self.task._iter_keys_timeout_by_worker(self) class StreamClient(EngineClient): """StreamWorker's default EngineClient. StreamClient is the EngineClient subclass used by default by StreamWorker. It handles some generic methods to pass data to the StreamWorker. """ def _start(self): """Called on EngineClient start.""" assert not self.worker.started self.worker._on_start(self.key) return self def _read(self, sname, size=65536): """Read data from process.""" return EngineClient._read(self, sname, size) def _close(self, abort, timeout): """Close client. See EngineClient._close().""" EngineClient._close(self, abort, timeout) if timeout: assert abort, "abort flag not set on timeout" self.worker._on_timeout(self.key) # return code not available self.worker._on_close(self.key) if self.worker.eh: _eh_sigspec_invoke_compat(self.worker.eh.ev_close, 2, self, timeout) def _handle_read(self, sname): """Engine is telling us there is data available for reading.""" # Local variables optimization task = self.worker.task msgline = self.worker._on_msgline debug = task.info("debug", False) if debug: print_debug = task.info("print_debug") for msg in self._readlines(sname): print_debug(task, "LINE %s" % msg) msgline(self.key, msg, sname) else: for msg in self._readlines(sname): msgline(self.key, msg, sname) def _flush_read(self, sname): """Called at close time to flush stream read buffer.""" stream = self.streams[sname] if stream.readable() and stream.rbuf: # We still have some read data available in buffer, but no # EOL. Generate a final message before closing. self.worker._on_msgline(self.key, stream.rbuf, sname) def write(self, buf, sname=None): """Write to writable stream(s).""" if sname is not None: self._write(sname, buf) return # sname not specified: "broadcast" to all writable streams... for writer in self.streams.writers(): self._write(writer.name, buf) def set_write_eof(self, sname=None): """Set EOF flag to writable stream(s).""" if sname is not None: self._set_write_eof(sname) return # sname not specified: set eof flag on all writable streams... for writer in self.streams.writers(): self._set_write_eof(writer.name) class StreamWorker(Worker): """StreamWorker base class [v1.7+] The StreamWorker class implements a base (but concrete) Worker that can read and write to multiple streams. Unlike most other Workers, it does not execute any external commands by itself. Rather, it should be pre-bound to "streams", ie. 
file(s) or file descriptor(s), using the following two methods: >>> worker.set_reader('stream1', fd1) >>> worker.set_writer('stream2', fd2) Like other Workers, the StreamWorker instance should be associated with a Task using task.schedule(worker). When the task engine is ready to process the StreamWorker, all of its streams are processed together. For that reason, it is not possible to add new readers or writers to a running StreamWorker (i.e. the task is running and the worker is already scheduled). Configured readers will generate ev_read() events when data is available for reading, with the stream name passed as one of its arguments. Configured writers will allow the use of the method write(), e.g. worker.write(data, 'stream2'), to write to the stream. """ def __init__(self, handler, key=None, stderr=False, timeout=-1, autoclose=False, client_class=StreamClient): Worker.__init__(self, handler) if key is None: # allow key=0 key = self self.clients = [client_class(self, key, stderr, timeout, autoclose)] def set_reader(self, sname, sfile, retain=True, closefd=True): """Add a readable stream to StreamWorker. Arguments: :param sname: the name of the stream (string) :param sfile: the stream file or file descriptor :param retain: whether the stream retains engine client (default is True) :param closefd: whether to close fd when the stream is closed (default is True) """ if not self.clients[0].registered: self.clients[0].streams.set_reader(sname, sfile, retain, closefd) else: raise WorkerError("cannot add new stream at runtime") def set_writer(self, sname, sfile, retain=True, closefd=True): """Add a writable stream to StreamWorker. Arguments: :param sname: the name of the stream (string) :param sfile: the stream file or file descriptor :param retain: whether the stream retains engine client (default is True) :param closefd: whether to close fd when the stream is closed (default is True) """ if not self.clients[0].registered: self.clients[0].streams.set_writer(sname, sfile, retain, closefd) else: raise WorkerError("cannot add new stream at runtime") def _engine_clients(self): """Return a list of underlying engine clients.""" return self.clients def set_key(self, key): """Set a custom source key for this worker. The source key is free for use by the caller; it defaults to the worker object itself. """ self.clients[0].key = key def _on_msgline(self, key, msg, sname): """Add a message.""" # update task msgtree self.task._msg_add(self, key, sname, msg) ### LEGACY (1.x) ### # set stream name self.current_sname = sname # generate event if sname == 'stderr': self.current_errmsg = msg if self.eh is not None: # call old ev_error for compat (default is no-op) if hasattr(self.eh, 'ev_error'): # missing in 1.8.0! self.eh.ev_error(self) # /!\ NOT elif if not _eh_sigspec_ev_read_17(self.eh.ev_read): ### FUTURE (2.x) ### self.eh.ev_read(self, key, sname, msg) else: self.current_msg = msg if self.eh is not None: # ev_read: check for old signature first (< 1.8) if _eh_sigspec_ev_read_17(self.eh.ev_read): self.eh.ev_read(self) else: ### FUTURE (2.x) ### self.eh.ev_read(self, key, sname, msg) def _on_timeout(self, key): """Update on timeout.""" self.task._timeout_add(self, key) # trigger timeout event (deprecated in 1.8+) # also use hasattr check because ev_timeout was missing in 1.8.0 if self.eh and hasattr(self.eh, 'ev_timeout'): warnings.warn("%s should use new ev_close() instead of " \ "ev_timeout()" % self.eh, DeprecationWarning) self.eh.ev_timeout(self) def abort(self): """Abort processing any action by this worker.
Safe to call on an already closing or aborting worker. """ self.clients[0].abort() def read(self, node=None, sname='stdout'): """Read worker stream buffer. Return stream read buffer of current worker. Arguments: :param node: node name, can also be set to None for simple worker having worker.key defined (default is None) :param sname: stream name (default is 'stdout') """ return Worker.read(self, node or self.clients[0].key, sname) def write(self, buf, sname=None): """Write to worker. If sname is specified, write to the associated stream, otherwise write to all writable streams. """ self.clients[0].write(buf, sname) def set_write_eof(self, sname=None): """ Tell worker to close its writer file descriptor once flushed. Do not perform writes after this call. Like write(), sname can be optionally specified to target a specific writable stream, otherwise all writable streams are marked as EOF. """ self.clients[0].set_write_eof(sname) class WorkerSimple(StreamWorker): """WorkerSimple base class [DEPRECATED] Implements a simple Worker to manage common process stdin/stdout/stderr streams. [DEPRECATED] use StreamWorker. """ def __init__(self, file_reader, file_writer, file_error, key, handler, stderr=False, timeout=-1, autoclose=False, closefd=True, client_class=StreamClient): """Initialize WorkerSimple worker.""" StreamWorker.__init__(self, handler, key, stderr, timeout, autoclose, client_class=client_class) if file_reader: self.set_reader('stdout', file_reader, closefd=closefd) if file_error: self.set_reader('stderr', file_error, closefd=closefd) if file_writer: self.set_writer('stdin', file_writer, closefd=closefd) # keep reference of provided file objects during worker lifetime self._filerefs = (file_reader, file_writer, file_error) def error_fileno(self): """Return the standard error reader file descriptor (integer).""" return self.clients[0].streams['stderr'].fd def reader_fileno(self): """Return the reader file descriptor (integer).""" return self.clients[0].streams['stdout'].fd def writer_fileno(self): """Return the writer file descriptor as an integer.""" return self.clients[0].streams['stdin'].fd def error(self): """Read worker error buffer.""" return self.read(sname='stderr') def _on_start(self, key): """Called on command start.""" if not self.started: self.started = True if self.eh is not None: self.eh.ev_start(self) if self.eh is not None: # generate ev_pickup _eh_sigspec_invoke_compat(self.eh.ev_pickup, 2, self, key) def _on_close(self, key, rc=None): """Called to generate events when the Worker is closing.""" self.current_rc = rc # rc may be None here for example when called from StreamClient # Only update task if rc is not None. 
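# (a StreamClient-backed worker carries no process exit status, so rc
# stays None and the task's return code bookkeeping is skipped)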
if rc is not None: self.task._rc_set(self, key, rc) if self.eh is not None: # generate ev_hup _eh_sigspec_invoke_compat(self.eh.ev_hup, 2, self, key, rc) def _on_written(self, key, bytes_count, sname): """Notification of bytes written.""" # set node and stream name (compat only) self.current_sname = sname if self.eh is not None: # generate ev_written self.eh.ev_written(self, key, sname, bytes_count) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/Worker/__init__.py0000644104717000001440000000000014501416555022323 0ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/lib/ClusterShell/Worker/fastsubprocess.py0000644104717000001440000003670214501416555023654 0ustar00sthiellusers# fastsubprocess - POSIX relaxed revision of subprocess.py # Based on Python 2.6.4 subprocess.py # This is a performance oriented version of subprocess module. # Modified by Stephane Thiell # Changes: # * removed Windows specific code parts # * removed pipe for transferring possible exec failure from child to # parent, to avoid os.read() blocking call after each fork. # * child returns status code 255 on execv failure, which can be # handled with Popen.wait(). # * removed file objects creation using costly fdopen(): this version # returns non-blocking file descriptors bound to child # * added module method set_nonblock_flag() and used it in Popen(). ## # Original Disclaimer: # # For more information about this module, see PEP 324. # # This module should remain compatible with Python 2.2, see PEP 291. # # Copyright (c) 2003-2005 by Peter Astrand # # Licensed to PSF under a Contributor Agreement. # See http://www.python.org/2.4/license for licensing details. """_subprocess - Subprocesses with accessible I/O non-blocking file descriptors Faster revision of subprocess-like module. """ import gc import os import signal import sys import types # Python 3 compatibility try: basestring except NameError: basestring = str # Exception classes used by this module. class CalledProcessError(Exception): """This exception is raised when a process run by check_call() returns a non-zero exit status. The exit status will be stored in the returncode attribute.""" def __init__(self, returncode, cmd): self.returncode = returncode self.cmd = cmd def __str__(self): return "Command '%s' returned non-zero exit status %d" % (self.cmd, self.returncode) import select import errno import fcntl __all__ = ["Popen", "PIPE", "STDOUT", "call", "check_call", \ "CalledProcessError"] try: MAXFD = os.sysconf("SC_OPEN_MAX") except: MAXFD = 256 _active = [] def _cleanup(): for inst in _active[:]: if inst._internal_poll(_deadstate=sys.maxsize) >= 0: try: _active.remove(inst) except ValueError: # This can happen if two threads create a new Popen instance. # It's harmless that it was already removed, so ignore. pass PIPE = -1 STDOUT = -2 def call(*popenargs, **kwargs): """Run command with arguments. Wait for command to complete, then return the returncode attribute. The arguments are the same as for the Popen constructor. Example: retcode = call(["ls", "-l"]) """ return Popen(*popenargs, **kwargs).wait() def check_call(*popenargs, **kwargs): """Run command with arguments. Wait for command to complete. If the exit code was zero then return, otherwise raise CalledProcessError. The CalledProcessError object will have the return code in the returncode attribute. 
The arguments are the same as for the Popen constructor. Example: check_call(["ls", "-l"]) """ retcode = call(*popenargs, **kwargs) cmd = kwargs.get("args") if cmd is None: cmd = popenargs[0] if retcode: raise CalledProcessError(retcode, cmd) return retcode def set_nonblock_flag(fd): """Set non blocking flag to file descriptor fd""" old = fcntl.fcntl(fd, fcntl.F_GETFL) fcntl.fcntl(fd, fcntl.F_SETFL, old | os.O_NDELAY) class Popen(object): """A faster Popen""" def __init__(self, args, bufsize=0, executable=None, stdin=None, stdout=None, stderr=None, preexec_fn=None, shell=False, cwd=None, env=None, universal_newlines=False): """Create new Popen instance.""" _cleanup() self._child_created = False if not isinstance(bufsize, int): raise TypeError("bufsize must be an integer") self.pid = None self.returncode = None self.universal_newlines = universal_newlines # Input and output objects. The general principle is like # this: # # Parent Child # ------ ----- # p2cwrite ---stdin---> p2cread # c2pread <--stdout--- c2pwrite # errread <--stderr--- errwrite # # On POSIX, the child objects are file descriptors. On # Windows, these are Windows file handles. The parent objects # are file descriptors on both platforms. The parent objects # are None when not using PIPEs. The child objects are None # when not redirecting. (p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) = self._get_handles(stdin, stdout, stderr) self._execute_child(args, executable, preexec_fn, cwd, env, universal_newlines, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) if p2cwrite is not None: set_nonblock_flag(p2cwrite) self.stdin = p2cwrite if c2pread is not None: set_nonblock_flag(c2pread) self.stdout = c2pread if errread is not None: set_nonblock_flag(errread) self.stderr = errread def _translate_newlines(self, data): data = data.replace("\r\n", "\n") data = data.replace("\r", "\n") return data def __del__(self, sys=sys): if not self._child_created: # We didn't get to successfully create a child process. return # In case the child hasn't been waited on, check if it's done. self._internal_poll(_deadstate=sys.maxsize) if self.returncode is None and _active is not None: # Child is still running, keep us alive until we can wait on it. _active.append(self) def communicate(self, input=None): """Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. The optional input argument should be a string to be sent to the child process, or None, if no data should be sent to the child. communicate() returns a tuple (stdout, stderr).""" # Optimization: If we are only using one pipe, or no pipe at # all, using select() or threads is unnecessary. 
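# (count(None) >= 2 on [stdin, stdout, stderr] means at most one pipe
# is open, so sequential blocking I/O cannot deadlock)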
if [self.stdin, self.stdout, self.stderr].count(None) >= 2: stdout = None stderr = None if self.stdin: if input: self.stdin.write(input) self.stdin.close() elif self.stdout: stdout = self.stdout.read() self.stdout.close() elif self.stderr: stderr = self.stderr.read() self.stderr.close() self.wait() return (stdout, stderr) return self._communicate(input) def poll(self): return self._internal_poll() def _get_handles(self, stdin, stdout, stderr): """Construct and return tuple with IO objects: p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite """ p2cread, p2cwrite = None, None c2pread, c2pwrite = None, None errread, errwrite = None, None if stdin is None: pass elif stdin == PIPE: p2cread, p2cwrite = os.pipe() elif isinstance(stdin, int): p2cread = stdin else: # Assuming file-like object p2cread = stdin.fileno() if stdout is None: pass elif stdout == PIPE: try: c2pread, c2pwrite = os.pipe() except: # Cleanup of previous pipe() descriptors if stdin == PIPE: os.close(p2cread) os.close(p2cwrite) raise elif isinstance(stdout, int): c2pwrite = stdout else: # Assuming file-like object c2pwrite = stdout.fileno() if stderr is None: pass elif stderr == PIPE: try: errread, errwrite = os.pipe() except: # Cleanup of previous pipe() descriptors if stdin == PIPE: os.close(p2cread) os.close(p2cwrite) if stdout == PIPE: os.close(c2pread) os.close(c2pwrite) raise elif stderr == STDOUT: errwrite = c2pwrite elif isinstance(stderr, int): errwrite = stderr else: # Assuming file-like object errwrite = stderr.fileno() return (p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) def _execute_child(self, args, executable, preexec_fn, cwd, env, universal_newlines, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite): """Execute program (POSIX version)""" if isinstance(args, basestring): args = [args] else: args = list(args) if shell: args = ["/bin/sh", "-c"] + args if executable is None: executable = args[0] gc_was_enabled = gc.isenabled() # Disable gc to avoid bug where gc -> file_dealloc -> # write to stderr -> hang. http://bugs.python.org/issue1336 gc.disable() try: self.pid = os.fork() except: if gc_was_enabled: gc.enable() raise self._child_created = True if self.pid == 0: # Child try: # Close parent's pipe ends if p2cwrite is not None: os.close(p2cwrite) if c2pread is not None: os.close(c2pread) if errread is not None: os.close(errread) # Dup fds for child if p2cread is not None: os.dup2(p2cread, 0) if c2pwrite is not None: os.dup2(c2pwrite, 1) if errwrite is not None: os.dup2(errwrite, 2) # Close pipe fds. Make sure we don't close the same # fd more than once, or standard fds. 
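# (e.g. with stderr=STDOUT, errwrite aliases c2pwrite and must not be
# closed a second time)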
if p2cread is not None and p2cread not in (0,): os.close(p2cread) if c2pwrite is not None and c2pwrite not in (p2cread, 1): os.close(c2pwrite) if errwrite is not None and errwrite not in \ (p2cread, c2pwrite, 2): os.close(errwrite) if cwd is not None: os.chdir(cwd) if preexec_fn: preexec_fn() if env is None: os.execvp(executable, args) else: os.execvpe(executable, args, env) except: # Child execution failure os._exit(255) # Parent if gc_was_enabled: gc.enable() if p2cread is not None and p2cwrite is not None: os.close(p2cread) if c2pwrite is not None and c2pread is not None: os.close(c2pwrite) if errwrite is not None and errread is not None: os.close(errwrite) def _handle_exitstatus(self, sts): if os.WIFSIGNALED(sts): self.returncode = -os.WTERMSIG(sts) elif os.WIFEXITED(sts): self.returncode = os.WEXITSTATUS(sts) else: # Should never happen raise RuntimeError("Unknown child exit status!") def _internal_poll(self, _deadstate=None): """Check if child process has terminated. Returns returncode attribute.""" if self.returncode is None: try: pid, sts = os.waitpid(self.pid, os.WNOHANG) if pid == self.pid: self._handle_exitstatus(sts) except os.error: if _deadstate is not None: self.returncode = _deadstate return self.returncode def wait(self): """Wait for child process to terminate. Returns returncode attribute.""" if self.returncode is None: pid, sts = os.waitpid(self.pid, 0) self._handle_exitstatus(sts) return self.returncode def _communicate(self, input): read_set = [] write_set = [] stdout = None # Return stderr = None # Return if self.stdin: # Flush stdio buffer. This might block, if the user has # been writing to .stdin in an uncontrolled fashion. self.stdin.flush() if input: write_set.append(self.stdin) else: self.stdin.close() if self.stdout: read_set.append(self.stdout) stdout = [] if self.stderr: read_set.append(self.stderr) stderr = [] input_offset = 0 while read_set or write_set: try: rlist, wlist, xlist = select.select(read_set, write_set, []) except select.error as ex: if ex.args[0] == errno.EINTR: continue raise if self.stdin in wlist: # When select has indicated that the file is writable, # we can write up to PIPE_BUF bytes without risk # blocking. POSIX defines PIPE_BUF >= 512 chunk = input[input_offset : input_offset + 512] bytes_written = os.write(self.stdin.fileno(), chunk) input_offset += bytes_written if input_offset >= len(input): self.stdin.close() write_set.remove(self.stdin) if self.stdout in rlist: data = os.read(self.stdout.fileno(), 1024) if data == "": self.stdout.close() read_set.remove(self.stdout) stdout.append(data) if self.stderr in rlist: data = os.read(self.stderr.fileno(), 1024) if data == "": self.stderr.close() read_set.remove(self.stderr) stderr.append(data) # All data exchanged. Translate lists into strings. if stdout is not None: stdout = ''.join(stdout) if stderr is not None: stderr = ''.join(stderr) # Translate newlines, if requested. We cannot let the file # object do the translation: It is based on stdio, which is # impossible to combine with select (unless forcing no # buffering). 
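# NOTE: 'file' below is the Python 2 builtin; on Python 3 this test
# would raise NameError if universal_newlines were set, so newline
# translation is effectively Python 2 only here.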
if self.universal_newlines and hasattr(file, 'newlines'): if stdout: stdout = self._translate_newlines(stdout) if stderr: stderr = self._translate_newlines(stderr) self.wait() return (stdout, stderr) def send_signal(self, sig): """Send a signal to the process """ os.kill(self.pid, sig) def terminate(self): """Terminate the process with SIGTERM """ self.send_signal(signal.SIGTERM) def kill(self): """Kill the process with SIGKILL """ self.send_signal(signal.SIGKILL) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/lib/ClusterShell/__init__.py0000644104717000001440000000336414505632065021072 0ustar00sthiellusers# # Copyright (C) 2007-2016 CEA/DAM # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ClusterShell Python Library ClusterShell is an event-driven open source Python library, designed to run local or distant commands in parallel on server farms or on large clusters. You can use ClusterShell as a building block to create cluster aware administration scripts and system applications in Python. It will take care of common issues encountered on HPC clusters, such as operating on groups of nodes, running distributed commands using optimized execution algorithms, as well as gathering results and merging identical outputs, or retrieving return codes. ClusterShell takes advantage of existing remote shell facilities already installed on your systems, like SSH. Please see first: - ClusterShell.NodeSet - ClusterShell.Task """ __version__ = '1.9.2' __version_info__ = tuple([ int(_n) for _n in __version__.split('.')]) __date__ = '2023/09/29' __author__ = 'Stephane Thiell ' __url__ = 'http://clustershell.readthedocs.org/' ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3313296 ClusterShell-1.9.2/lib/ClusterShell.egg-info/0000755104717000001440000000000014505640536020447 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696022878.0 ClusterShell-1.9.2/lib/ClusterShell.egg-info/PKG-INFO0000644104717000001440000000677014505640536021556 0ustar00sthiellusersMetadata-Version: 1.1 Name: ClusterShell Version: 1.9.2 Summary: ClusterShell library and tools Home-page: https://clustershell.readthedocs.io/ Author: Stephane Thiell Author-email: sthiell@stanford.edu License: LGPLv2+ Download-URL: https://github.com/cea-hpc/clustershell/archive/refs/tags/v1.9.2.tar.gz Description: ClusterShell is an event-driven open source Python framework, designed to run local or distant commands in parallel on server farms or on large Linux clusters. 
It will take care of common issues encountered on HPC clusters, such as operating on groups of nodes, running distributed commands using optimized execution algorithms, as well as gathering results and merging identical outputs, or retrieving return codes. ClusterShell takes advantage of existing remote shell facilities already installed on your systems, like SSH. User tools ---------- ClusterShell provides clush, clubak and cluset/nodeset, convenient command-line tools that allow traditional shell scripts to benefit from some of the library's features: - **clush**: issue commands to cluster nodes and format output Example of use: :: $ clush -abL uname -r node[32-49,51-71,80,82-150,156-159]: 2.6.18-164.11.1.el5 node[3-7,72-79]: 2.6.18-164.11.1.el5_lustre1.10.0.36 node[2,151-155]: 2.6.31.6-145.fc11.2.x86_64 See *man clush* for more details. - **clubak**: improved dshbak to gather and sort dsh-like outputs See *man clubak* for more details. - **nodeset** (or **cluset**): compute advanced nodeset/nodegroup operations Examples of use: :: $ echo node160 node161 node162 node163 | nodeset -f node[160-163] $ nodeset -f node[0-7,32-159] node[160-163] node[0-7,32-163] $ nodeset -e node[160-163] node160 node161 node162 node163 $ nodeset -f node[32-159] -x node33 node[32,34-159] $ nodeset -f node[32-159] -i node[0-7,20-21,32,156-159] node[32,156-159] $ nodeset -f node[33-159] --xor node[32-33,156-159] node[32,34-155] $ nodeset -l @oss @mds @io @compute $ nodeset -e @mds node6 node7 See *man nodeset* (or *man cluset*) for more details. Please visit the ClusterShell website_. .. _website: http://cea-hpc.github.io/clustershell/ Keywords: clustershell,clush,clubak,nodeset Platform: GNU/Linux Platform: BSD Platform: MacOSX Classifier: Development Status :: 5 - Production/Stable Classifier: Environment :: Console Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: GNU Lesser General Public License v2 or later (LGPLv2+) Classifier: Operating System :: MacOS :: MacOS X Classifier: Operating System :: POSIX :: BSD Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Topic :: Software Development :: Libraries :: Python Modules Classifier: Topic :: System :: Clustering Classifier: Topic :: System :: Distributed Computing License-File: COPYING.LGPLv2.1 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696022878.0 ClusterShell-1.9.2/lib/ClusterShell.egg-info/SOURCES.txt0000644104717000001440000001050014505640536022327 0ustar00sthiellusersCOPYING.LGPLv2.1 ChangeLog MANIFEST.in README.md setup.cfg setup.py conf/clush.conf conf/groups.conf conf/topology.conf.example conf/clush.conf.d/README conf/clush.conf.d/sshpass.conf.example conf/clush.conf.d/sudo.conf.example conf/groups.conf.d/README conf/groups.conf.d/ace.conf.example conf/groups.conf.d/genders.conf.example conf/groups.conf.d/slurm.conf.example conf/groups.conf.d/xcat.conf.example conf/groups.d/README conf/groups.d/cluster.yaml.example conf/groups.d/local.cfg doc/epydoc/clustershell_epydoc.conf doc/examples/check_nodes.py doc/examples/defaults.conf-rsh doc/extras/vim/ftdetect/clustershell.vim doc/extras/vim/syntax/clushconf.vim doc/extras/vim/syntax/groupsconf.vim doc/man/man1/clubak.1 doc/man/man1/cluset.1 doc/man/man1/clush.1 doc/man/man1/nodeset.1 doc/man/man5/clush.conf.5 doc/man/man5/groups.conf.5 doc/sphinx/Makefile 
doc/sphinx/clustershell-nautilus-logo200.png doc/sphinx/conf.py doc/sphinx/config.rst doc/sphinx/further.rst doc/sphinx/index.rst doc/sphinx/install.rst doc/sphinx/intro.rst doc/sphinx/release.rst doc/sphinx/_static/clustershell-nautilus-logo200.png doc/sphinx/_static/theme_overrides.css doc/sphinx/api/Defaults.rst doc/sphinx/api/EngineTimer.rst doc/sphinx/api/Event.rst doc/sphinx/api/MsgTree.rst doc/sphinx/api/NodeSet.rst doc/sphinx/api/NodeUtils.rst doc/sphinx/api/RangeSet.rst doc/sphinx/api/Task.rst doc/sphinx/api/index.rst doc/sphinx/api/workers/ExecWorker.rst doc/sphinx/api/workers/StreamWorker.rst doc/sphinx/api/workers/TreeWorker.rst doc/sphinx/api/workers/Worker.rst doc/sphinx/api/workers/WorkerPdsh.rst doc/sphinx/api/workers/WorkerPopen.rst doc/sphinx/api/workers/WorkerRsh.rst doc/sphinx/api/workers/WorkerSsh.rst doc/sphinx/api/workers/index.rst doc/sphinx/guide/examples.rst doc/sphinx/guide/index.rst doc/sphinx/guide/nodesets.rst doc/sphinx/guide/rangesets.rst doc/sphinx/guide/taskmgnt.rst doc/sphinx/tools/clubak.rst doc/sphinx/tools/cluset.rst doc/sphinx/tools/clush.rst doc/sphinx/tools/index.rst doc/sphinx/tools/nodeset.rst doc/txt/README doc/txt/clubak.txt doc/txt/cluset.txt doc/txt/clush.conf.txt doc/txt/clush.txt doc/txt/clustershell.rst doc/txt/groups.conf.txt doc/txt/nodeset.txt lib/ClusterShell/Communication.py lib/ClusterShell/Defaults.py lib/ClusterShell/Event.py lib/ClusterShell/Gateway.py lib/ClusterShell/MsgTree.py lib/ClusterShell/NodeSet.py lib/ClusterShell/NodeUtils.py lib/ClusterShell/Propagation.py lib/ClusterShell/RangeSet.py lib/ClusterShell/Task.py lib/ClusterShell/Topology.py lib/ClusterShell/__init__.py lib/ClusterShell.egg-info/PKG-INFO lib/ClusterShell.egg-info/SOURCES.txt lib/ClusterShell.egg-info/dependency_links.txt lib/ClusterShell.egg-info/entry_points.txt lib/ClusterShell.egg-info/requires.txt lib/ClusterShell.egg-info/top_level.txt lib/ClusterShell/CLI/Clubak.py lib/ClusterShell/CLI/Clush.py lib/ClusterShell/CLI/Config.py lib/ClusterShell/CLI/Display.py lib/ClusterShell/CLI/Error.py lib/ClusterShell/CLI/Nodeset.py lib/ClusterShell/CLI/OptionParser.py lib/ClusterShell/CLI/Utils.py lib/ClusterShell/CLI/__init__.py lib/ClusterShell/Engine/EPoll.py lib/ClusterShell/Engine/Engine.py lib/ClusterShell/Engine/Factory.py lib/ClusterShell/Engine/Poll.py lib/ClusterShell/Engine/Select.py lib/ClusterShell/Engine/__init__.py lib/ClusterShell/Worker/EngineClient.py lib/ClusterShell/Worker/Exec.py lib/ClusterShell/Worker/Pdsh.py lib/ClusterShell/Worker/Popen.py lib/ClusterShell/Worker/Rsh.py lib/ClusterShell/Worker/Ssh.py lib/ClusterShell/Worker/Tree.py lib/ClusterShell/Worker/Worker.py lib/ClusterShell/Worker/__init__.py lib/ClusterShell/Worker/fastsubprocess.py tests/CLIClubakTest.py tests/CLIClushTest.py tests/CLIConfigTest.py tests/CLIDisplayTest.py tests/CLINodesetTest.py tests/CLIOptionParserTest.py tests/DefaultsTest.py tests/MisusageTest.py tests/MsgTreeTest.py tests/NodeSetGroupTest.py tests/NodeSetTest.py tests/RangeSetNDTest.py tests/RangeSetTest.py tests/StreamWorkerTest.py tests/TLib.py tests/TaskDistantMixin.py tests/TaskDistantPdshMixin.py tests/TaskDistantPdshTest.py tests/TaskDistantTest.py tests/TaskEventTest.py tests/TaskLocalMixin.py tests/TaskLocalTest.py tests/TaskMsgTreeTest.py tests/TaskPortTest.py tests/TaskRLimitsTest.py tests/TaskThreadJoinTest.py tests/TaskThreadSuspendTest.py tests/TaskTimeoutTest.py tests/TaskTimerTest.py tests/TreeGatewayTest.py tests/TreeTaskTest.py tests/TreeTopologyTest.py tests/TreeWorkerTest.py 
tests/WorkerExecTest.py././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696022878.0 ClusterShell-1.9.2/lib/ClusterShell.egg-info/dependency_links.txt0000644104717000001440000000000114505640536024515 0ustar00sthiellusers ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696022878.0 ClusterShell-1.9.2/lib/ClusterShell.egg-info/entry_points.txt0000644104717000001440000000025414505640536023746 0ustar00sthiellusers[console_scripts] clubak = ClusterShell.CLI.Clubak:main cluset = ClusterShell.CLI.Nodeset:main clush = ClusterShell.CLI.Clush:main nodeset = ClusterShell.CLI.Nodeset:main ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696022878.0 ClusterShell-1.9.2/lib/ClusterShell.egg-info/requires.txt0000644104717000001440000000000714505640536023044 0ustar00sthiellusersPyYAML ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696022878.0 ClusterShell-1.9.2/lib/ClusterShell.egg-info/top_level.txt0000644104717000001440000000001514505640536023175 0ustar00sthiellusersClusterShell ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3353298 ClusterShell-1.9.2/setup.cfg0000644104717000001440000000007614505640536015422 0ustar00sthiellusers[install] optimize = 1 [egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/setup.py0000755104717000001440000000767614505632065015331 0ustar00sthiellusers#!/usr/bin/env python # # Copyright (C) 2008-2016 CEA/DAM # Copyright (C) 2016-2023 Stephane Thiell # # This file is part of ClusterShell. # # ClusterShell is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # ClusterShell is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. 
# # You should have received a copy of the GNU Lesser General Public # License along with ClusterShell; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA import os from setuptools import setup, find_packages VERSION = '1.9.2' CFGDIR = 'etc/clustershell' MANDIR = 'share/man' # Dependencies (for pip install) REQUIRES = ['PyYAML'] setup(name='ClusterShell', version=VERSION, package_dir={'': 'lib'}, packages=find_packages('lib'), data_files=[(CFGDIR, ['conf/clush.conf', 'conf/groups.conf', 'conf/topology.conf.example']), (os.path.join(CFGDIR, 'clush.conf.d'), ['conf/clush.conf.d/sshpass.conf.example', 'conf/clush.conf.d/sudo.conf.example', 'conf/clush.conf.d/README']), (os.path.join(CFGDIR, 'groups.conf.d'), ['conf/groups.conf.d/genders.conf.example', 'conf/groups.conf.d/slurm.conf.example', 'conf/groups.conf.d/xcat.conf.example', 'conf/groups.conf.d/README']), (os.path.join(CFGDIR, 'groups.d'), ['conf/groups.d/cluster.yaml.example', 'conf/groups.d/local.cfg', 'conf/groups.d/README']), (os.path.join(MANDIR, 'man1'), ['doc/man/man1/clubak.1', 'doc/man/man1/cluset.1', 'doc/man/man1/clush.1', 'doc/man/man1/nodeset.1']), (os.path.join(MANDIR, 'man5'), ['doc/man/man5/clush.conf.5', 'doc/man/man5/groups.conf.5']), ], entry_points={'console_scripts': ['clubak=ClusterShell.CLI.Clubak:main', 'cluset=ClusterShell.CLI.Nodeset:main', 'clush=ClusterShell.CLI.Clush:main', 'nodeset=ClusterShell.CLI.Nodeset:main'], }, author='Stephane Thiell', author_email='sthiell@stanford.edu', license='LGPLv2+', url='https://clustershell.readthedocs.io/', download_url='https://github.com/cea-hpc/clustershell/archive/refs/tags/v%s.tar.gz' % VERSION, platforms=['GNU/Linux', 'BSD', 'MacOSX'], keywords=['clustershell', 'clush', 'clubak', 'nodeset'], description='ClusterShell library and tools', long_description=open('doc/txt/clustershell.rst').read(), classifiers=[ "Development Status :: 5 - Production/Stable", "Environment :: Console", "Intended Audience :: System Administrators", "License :: OSI Approved :: GNU Lesser General Public License v2 or later (LGPLv2+)", "Operating System :: MacOS :: MacOS X", "Operating System :: POSIX :: BSD", "Operating System :: POSIX :: Linux", "Programming Language :: Python", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3", "Topic :: Software Development :: Libraries :: Python Modules", "Topic :: System :: Clustering", "Topic :: System :: Distributed Computing" ], install_requires=REQUIRES, ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1696022878.3353298 ClusterShell-1.9.2/tests/0000755104717000001440000000000014505640536014740 5ustar00sthiellusers././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/tests/CLIClubakTest.py0000644104717000001440000002504714505632065017711 0ustar00sthiellusers# ClusterShell.CLI.Clubak test suite # Written by S. 
Thiell """Unit test for CLI.Clubak""" import re from textwrap import dedent import unittest from TLib import * from ClusterShell.CLI.Clubak import main from ClusterShell.NodeSet import set_std_group_resolver, \ set_std_group_resolver_config def _outfmt(*args): outfmt = "---------------\n%s\n---------------\n bar\n" res = outfmt % args return res.encode() def _outfmt_verb(*args): outfmt = "INPUT foo: bar\n---------------\n%s\n---------------\n bar\n" res = outfmt % args return res.encode() class CLIClubakTest(unittest.TestCase): """Unit test class for testing CLI/Clubak.py""" def _clubak_t(self, args, stdin, expected_stdout, expected_rc=0, expected_stderr=None): CLI_main(self, main, ['clubak'] + args, stdin, expected_stdout, expected_rc, expected_stderr) def test_000_noargs(self): """test clubak (no argument)""" self._clubak_t([], b"foo: bar\n", _outfmt('foo')) self._clubak_t([], b"foo space: bar\n", _outfmt('foo space')) self._clubak_t([], b"foo space1: bar\n", _outfmt('foo space1')) self._clubak_t([], b"foo space1: bar\nfoo space2: bar", _outfmt('foo space1') + _outfmt('foo space2')) self._clubak_t([], b": bar\n", b'', 1, b'clubak: no node found: ": bar"\n') self._clubak_t([], b"foo[: bar\n", _outfmt('foo[')) self._clubak_t([], b"]o[o]: bar\n", _outfmt(']o[o]')) self._clubak_t([], b"foo:\n", b'---------------\nfoo\n---------------\n\n') self._clubak_t([], b"foo: \n", b'---------------\nfoo\n---------------\n \n') # nD self._clubak_t([], b"n1c1: bar\n", _outfmt('n1c1')) # Ticket #286 self._clubak_t([], b"n1c01: bar\n", _outfmt('n1c01')) self._clubak_t([], b"n01c01: bar\n", _outfmt('n01c01')) self._clubak_t([], b"n001c01: bar\nn001c02: bar\n", _outfmt('n001c01') + _outfmt('n001c02')) def test_001_verbosity(self): """test clubak (-q/-v/-d)""" self._clubak_t(["-d"], b"foo: bar\n", _outfmt_verb('foo'), 0, b'line_mode=False gather=False tree_depth=1\n') self._clubak_t(["-d", "-b"], b"foo: bar\n", _outfmt_verb('foo'), 0, b'line_mode=False gather=True tree_depth=1\n') self._clubak_t(["-d", "-L"], b"foo: bar\n", b'INPUT foo: bar\nfoo: bar\n', 0, b'line_mode=True gather=False tree_depth=1\n') self._clubak_t(["-v"], b"foo: bar\n", _outfmt_verb('foo'), 0) self._clubak_t(["-v", "-b"], b"foo: bar\n", _outfmt_verb('foo'), 0) # no node count with -q self._clubak_t(["-q", "-b"], b"foo[1-5]: bar\n", _outfmt('foo[1-5]'), 0) # non-printable characters replaced by the replacement character self._clubak_t(["-L"], b"foo:\xffbar\n", "foo: \ufffdbar\n".encode(), 0) self._clubak_t(["-d", "-L"], b"foo:\xf8bar\n", 'INPUT foo:\ufffdbar\nfoo: \ufffdbar\n'.encode(), 0, b'line_mode=True gather=False tree_depth=1\n') def test_002_b(self): """test clubak (gather -b)""" self._clubak_t(["-b"], b"foo: bar\n", _outfmt('foo')) self._clubak_t(["-b"], b"foo space: bar\n", _outfmt("foo space")) self._clubak_t(["-b"], b"foo space1: bar\n", _outfmt("foo space1")) self._clubak_t(["-b"], b"foo space1: bar\nfoo space2: bar", _outfmt("foo space[1-2] (2)")) self._clubak_t(["-b"], b"foo space1: bar\nfoo space2: foo", b"---------------\nfoo space1\n---------------\n bar\n---------------\nfoo space2\n---------------\n foo\n") self._clubak_t(["-b"], b": bar\n", b"", 1, b'clubak: no node found: ": bar"\n') self._clubak_t(["-b"], b"foo[: bar\n", _outfmt("foo[")) self._clubak_t(["-b"], b"]o[o]: bar\n", _outfmt("]o[o]")) self._clubak_t(["-b"], b"foo:\n", b"---------------\nfoo\n---------------\n\n") self._clubak_t(["-b"], b"foo: \n", b"---------------\nfoo\n---------------\n \n") # nD self._clubak_t(["-b"], b"n1c1: bar\n", _outfmt("n1c1")) # 
Ticket #286 self._clubak_t(["-b"], b"n1c01: bar\n", _outfmt("n1c01")) self._clubak_t(["-b"], b"n001c01: bar\n", _outfmt("n001c01")) self._clubak_t(["-b"], b"n001c01: bar\nn001c02: bar\n", _outfmt("n001c[01-02] (2)")) def test_003_L(self): """test clubak (line mode -L)""" self._clubak_t(["-L"], b"foo: bar\n", b"foo: bar\n") self._clubak_t(["-L", "-S", ": "], b"foo: bar\n", b"foo: bar\n") self._clubak_t(["-bL"], b"foo: bar\n", b"foo: bar\n") self._clubak_t(["-bL", "-S", ": "], b"foo: bar\n", b"foo: bar\n") # nD self._clubak_t(["-bL", "-S", ": "], b"n1c01: bar\n", b"n1c01: bar\n") def test_004_N(self): """test clubak (no header -N)""" self._clubak_t(["-N"], b"foo: bar\n", b" bar\n") self._clubak_t(["-NL"], b"foo: bar\n", b" bar\n") self._clubak_t(["-N", "-S", ": "], b"foo: bar\n", b"bar\n") self._clubak_t(["-bN"], b"foo: bar\n", b" bar\n") self._clubak_t(["-bN", "-S", ": "], b"foo: bar\n", b"bar\n") def test_005_fast(self): """test clubak (fast mode --fast)""" self._clubak_t(["--fast"], b"foo: bar\n", _outfmt("foo")) self._clubak_t(["-b", "--fast"], b"foo: bar\n", _outfmt("foo")) self._clubak_t(["-b", "--fast"], b"foo2: bar\nfoo1: bar\nfoo4: bar", _outfmt("foo[1-2,4] (3)")) # check conflicting options self._clubak_t(["-L", "--fast"], b"foo2: bar\nfoo1: bar\nfoo4: bar", b'', 2, b"error: incompatible tree options\n") def test_006_tree(self): """test clubak (tree mode --tree)""" self._clubak_t(["--tree"], b"foo: bar\n", _outfmt("foo")) self._clubak_t(["--tree", "-L"], b"foo: bar\n", b"foo:\n bar\n") self._clubak_t(["--tree", "-L"], b"foo: \xf8bar\n", "foo:\n \ufffdbar\n".encode()) stdin_buf = dedent("""foo1:bar foo2:bar foo1:moo foo1:bla foo2:m00 foo2:bla foo1:abc """).encode() self._clubak_t(["--tree", "-L"], stdin_buf, re.compile(br"(foo\[1-2\]:\nbar\nfoo2:\n m00\n bla\nfoo1:\n moo\n bla\n abc\n" br"|foo\[1-2\]:\nbar\nfoo1:\n moo\n bla\n abc\nfoo2:\n m00\n)")) # check conflicting options self._clubak_t(["--tree", "--fast"], stdin_buf, b'', 2, b"error: incompatible tree options\n") def test_007_interpret_keys(self): """test clubak (--interpret-keys)""" self._clubak_t(["--interpret-keys=auto"], b"foo: bar\n", _outfmt("foo")) self._clubak_t(["-b", "--interpret-keys=auto"], b"foo: bar\n", _outfmt("foo")) self._clubak_t(["-b", "--interpret-keys=never"], b"foo: bar\n", _outfmt("foo")) self._clubak_t(["-b", "--interpret-keys=always"], b"foo: bar\n", _outfmt("foo")) self._clubak_t(["-b", "--interpret-keys=always"], b"foo[1-3]: bar\n", _outfmt("foo[1-3] (3)")) self._clubak_t(["-b", "--interpret-keys=auto"], b"[]: bar\n", _outfmt("[]")) self._clubak_t(["-b", "--interpret-keys=never"], b"[]: bar\n", _outfmt("[]")) self._clubak_t(["-b", "--interpret-keys=always"], b"[]: bar\n", b'', 1, b"Parse error: bad range: \"empty range\"\n") def test_008_color(self): """test clubak (--color)""" self._clubak_t(["-b"], b"foo: bar\n", _outfmt("foo")) self._clubak_t(["-b", "--color=never"], b"foo: bar\n", _outfmt("foo")) self._clubak_t(["-b", "--color=auto"], b"foo: bar\n", _outfmt("foo")) self._clubak_t(["-L", "--color=always"], b"foo: bar\n", b"\x1b[94mfoo: \x1b[0m bar\n") self._clubak_t(["-b", "--color=always"], b"foo: bar\n", b"\x1b[94m---------------\nfoo\n---------------\x1b[0m\n bar\n") def test_009_diff(self): """test clubak (--diff)""" self._clubak_t(["--diff"], b"foo1: bar\nfoo2: bar", b'') self._clubak_t(["--diff"], b"foo1: bar\nfoo2: BAR\nfoo2: end\nfoo1: end", b"--- foo1\n+++ foo2\n@@ -1,2 +1,2 @@\n- bar\n+ BAR\n end\n") self._clubak_t(["--diff"], b"foo1: bar\nfoo2: BAR\nfoo3: bar\nfoo2: end\nfoo1: 
end\nfoo3: end", b"--- foo[1,3] (2)\n+++ foo2\n@@ -1,2 +1,2 @@\n- bar\n+ BAR\n end\n") self._clubak_t(["--diff", "--color=always"], b"foo1: bar\nfoo2: BAR\nfoo3: bar\nfoo2: end\nfoo1: end\nfoo3: end", b"\x1b[1m--- foo[1,3] (2)\x1b[0m\n\x1b[1m+++ foo2\x1b[0m\n\x1b[96m@@ -1,2 +1,2 @@\x1b[0m\n\x1b[91m- bar\x1b[0m\n\x1b[92m+ BAR\x1b[0m\n end\n") self._clubak_t(["--diff", "-d"], b"foo: bar\n", b"INPUT foo: bar\n", 0, b"line_mode=False gather=True tree_depth=1\n") self._clubak_t(["--diff", "-L"], b"foo1: bar\nfoo2: bar", b'', 2, b"error: option mismatch (diff not supported in line_mode)\n") # GH #400 self._clubak_t(["--diff"], b"host1: \xc3\xa5\nhost2: a\nhost2: b\nhost1: b\n", b"--- host1\n+++ host2\n@@ -1,2 +1,2 @@\n- \xc3\xa5\n+ a\n b\n") class CLIClubakTestGroupsConf(CLIClubakTest): """Unit test class for testing --groupsconf option""" def setUp(self): self.gconff = make_temp_file(dedent(""" [Main] default: global_default [global_default] map: echo foo[1-2] all: echo @foo list: echo foo """).encode()) set_std_group_resolver_config(self.gconff.name) # passed to --groupsconf self.custf = make_temp_file(dedent(""" [Main] default: custom [custom] map: echo foo[1-2] all: echo @bar list: echo bar """).encode()) def tearDown(self): set_std_group_resolver(None) self.gconff = None self.custf = None def test_groupsconf_option(self): """test nodeset with --groupsconf""" # use -r (--regroup) to test group resolution self._clubak_t(["-r"], b"foo1: bar\nfoo2: bar", _outfmt("@foo (2)")) self._clubak_t(["--groupsconf", self.custf.name, "-r"], b"foo1: bar\nfoo2: bar", _outfmt("@bar (2)")) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/tests/CLIClushTest.py0000644104717000001440000011355314505632065017566 0ustar00sthiellusers# ClusterShell.CLI.Clush test suite # Written by S. 
Thiell """Unit test for CLI.Clush""" import codecs import errno import os from os.path import basename, dirname import pwd import re import resource import signal import sys import tempfile from textwrap import dedent import threading import time import unittest from subprocess import Popen, PIPE from TLib import * import ClusterShell.CLI.Clush from ClusterShell.CLI.Clush import main from ClusterShell.NodeSet import NodeSet from ClusterShell.NodeSet import set_std_group_resolver, \ set_std_group_resolver_config from ClusterShell.Task import task_cleanup from ClusterShell.Worker.EngineClient import EngineClientNotSupportedError class CLIClushTest_A(unittest.TestCase): """Unit test class for testing CLI/Clush.py""" def setUp(self): """define constants used in tests""" s = "%s: ok\n" % HOSTNAME self.output_ok = s.encode() s = "\x1b[94m%s: \x1b[0mok\n" % HOSTNAME self.output_ok_color = s.encode() self.soft, self.hard = resource.getrlimit(resource.RLIMIT_NOFILE) def tearDown(self): """cleanup all tasks""" task_cleanup() # we played with fd_max: restore original nofile resource limits resource.setrlimit(resource.RLIMIT_NOFILE, (self.soft, self.hard)) def _clush_t(self, args, stdin, expected_stdout, expected_rc=0, expected_stderr=None): """This new version allows code coverage checking by calling clush's main entry point.""" def raw_input_mock(prompt): # trusty sleep wait_time = 60 start = time.time() while time.time() - start < wait_time: time.sleep(wait_time - (time.time() - start)) return "" try: raw_input_save = ClusterShell.CLI.Clush.raw_input except: raw_input_save = raw_input ClusterShell.CLI.Clush.raw_input = raw_input_mock try: CLI_main(self, main, ['clush'] + args, stdin, expected_stdout, expected_rc, expected_stderr) finally: ClusterShell.CLI.Clush.raw_input = raw_input_save def test_000_display(self): """test clush (display options)""" self._clush_t(["-w", HOSTNAME, "true"], None, b"") s = "%s: ok ok\n" % HOSTNAME exp_output2 = s.encode() self._clush_t(["-w", HOSTNAME, "echo", "ok"], None, self.output_ok) self._clush_t(["-w", HOSTNAME, "echo", "ok", "ok"], None, exp_output2) self._clush_t(["-N", "-w", HOSTNAME, "echo", "ok", "ok"], None, b"ok ok\n") self._clush_t(["-w", "badhost,%s" % HOSTNAME, "-x", "badhost", "echo", "ok"], None, self.output_ok) self._clush_t(["-qw", HOSTNAME, "echo", "ok"], None, self.output_ok) self._clush_t(["-vw", HOSTNAME, "echo", "ok"], None, self.output_ok) self._clush_t(["-qvw", HOSTNAME, "echo", "ok"], None, self.output_ok) self._clush_t(["-Sw", HOSTNAME, "echo", "ok"], None, self.output_ok) self._clush_t(["-Sqw", HOSTNAME, "echo", "ok"], None, self.output_ok) self._clush_t(["-Svw", HOSTNAME, "echo", "ok"], None, self.output_ok) self._clush_t(["--nostdin", "-w", HOSTNAME, "echo", "ok"], None, self.output_ok) self._clush_t(["-w", HOSTNAME, "--color=always", "echo", "ok"], None, self.output_ok_color) self._clush_t(["-w", HOSTNAME, "--color=never", "echo", "ok"], None, self.output_ok) # issue #352 self._clush_t(["-N", "-R", "exec", "-w", 'foo[1-2]', "-b", "echo", "test"], None, b"test\n") def test_001_display_tty(self): """test clush (display options) [tty]""" setattr(ClusterShell.CLI.Clush, '_f_user_interaction', True) try: self.test_000_display() finally: delattr(ClusterShell.CLI.Clush, '_f_user_interaction') def test_002_fanout(self): """test clush (fanout)""" self._clush_t(["-f", "10", "-w", HOSTNAME, "true"], None, b"") self._clush_t(["-f", "1", "-w", HOSTNAME, "true"], None, b"") self._clush_t(["-f", "1", "-w", HOSTNAME, "echo", "ok"], None, 
self.output_ok) def test_003_fanout_tty(self): """test clush (fanout) [tty]""" setattr(ClusterShell.CLI.Clush, '_f_user_interaction', True) try: self.test_002_fanout() finally: delattr(ClusterShell.CLI.Clush, '_f_user_interaction') def test_004_ssh_options(self): """test clush (ssh options)""" self._clush_t(["-o", "-oStrictHostKeyChecking=no", "-w", HOSTNAME, "echo", "ok"], None, self.output_ok) self._clush_t(["-o", "-oStrictHostKeyChecking=no -oForwardX11=no", "-w", HOSTNAME, "echo", "ok"], None, self.output_ok) self._clush_t(["-o", "-oStrictHostKeyChecking=no", "-o", "-oForwardX11=no", "-w", HOSTNAME, "echo", "ok"], None, self.output_ok) self._clush_t(["-o-oStrictHostKeyChecking=no", "-o-oForwardX11=no", "-w", HOSTNAME, "echo", "ok"], None, self.output_ok) self._clush_t(["-u", "30", "-w", HOSTNAME, "echo", "ok"], None, self.output_ok) self._clush_t(["-t", "30", "-u", "30", "-w", HOSTNAME, "echo", "ok"], None, self.output_ok) def test_005_ssh_options_tty(self): """test clush (ssh options) [tty]""" setattr(ClusterShell.CLI.Clush, '_f_user_interaction', True) try: self.test_004_ssh_options() finally: delattr(ClusterShell.CLI.Clush, '_f_user_interaction') def test_006_output_gathering(self): """test clush (output gathering)""" self._clush_t(["-w", HOSTNAME, "-bL", "echo", "ok"], None, self.output_ok) self._clush_t(["-w", HOSTNAME, "-qbL", "echo", "ok"], None, self.output_ok) self._clush_t(["-w", HOSTNAME, "-BL", "echo", "ok"], None, self.output_ok) self._clush_t(["-w", HOSTNAME, "-qBL", "echo", "ok"], None, self.output_ok) self._clush_t(["-w", HOSTNAME, "-BLS", "echo", "ok"], None, self.output_ok) self._clush_t(["-w", HOSTNAME, "-qBLS", "echo", "ok"], None, self.output_ok) s = "%s: ok\n---------------\n%s\n---------------\nok\n" \ % (HOSTNAME, HOSTNAME) self._clush_t(["-w", HOSTNAME, "-vb", "echo", "ok"], None, s.encode()) def test_007_output_gathering_tty(self): """test clush (output gathering) [tty]""" setattr(ClusterShell.CLI.Clush, '_f_user_interaction', True) try: self.test_006_output_gathering() finally: delattr(ClusterShell.CLI.Clush, '_f_user_interaction') def test_008_file_copy(self): """test clush (file copy)""" content = "%f" % time.time() content = content.encode() sf = make_temp_file(content) self._clush_t(["-w", HOSTNAME, "-c", sf.name], None, b"") sf.seek(0) self.assertEqual(sf.read(), content) # test --dest option f2 = tempfile.NamedTemporaryFile() self._clush_t(["-w", HOSTNAME, "-c", sf.name, "--dest", f2.name], None, b"") f2.seek(0) self.assertEqual(f2.read(), content) # test multi --dest (manual) tdir = make_temp_dir() sf2 = make_temp_file(b'second') try: f2 = tempfile.NamedTemporaryFile() self._clush_t(["-w", HOSTNAME, "-c", sf.name, sf2.name, "--dest", tdir.name], None, b"") with open(os.path.join(tdir.name, basename(sf.name)), 'rb') as chkf: self.assertEqual(chkf.read(), content) with open(os.path.join(tdir.name, basename(sf2.name)), 'rb') as chkf: self.assertEqual(chkf.read(), b'second') finally: sf2.close() tdir.cleanup() # test multi --dest (auto) tdir = make_temp_dir() sf2 = make_temp_file(b'second', dir=tdir.name) try: f2 = tempfile.NamedTemporaryFile() self._clush_t(["-w", HOSTNAME, "-c", sf.name, sf2.name], None, b"") sf.seek(0) sf2.seek(0) self.assertEqual(sf.read(), content) self.assertEqual(sf2.read(), b'second') finally: sf2.close() tdir.cleanup() # test --user option f2 = tempfile.NamedTemporaryFile() self._clush_t(["--user", pwd.getpwuid(os.getuid())[0], "-w", HOSTNAME, "--copy", sf.name, "--dest", f2.name], None, b"") f2.seek(0) self.assertEqual(f2.read(), 
content) # test --rcopy self._clush_t(["--user", pwd.getpwuid(os.getuid())[0], "-w", HOSTNAME, "--rcopy", sf.name, "--dest", dirname(sf.name)], None, b"") f2.seek(0) self.assertEqual(open("%s.%s" % (sf.name, HOSTNAME), 'rb').read(), content) # test --rcopy with implicit --dest self._clush_t(["--user", pwd.getpwuid(os.getuid())[0], "-w", HOSTNAME, "--rcopy", sf.name], None, b"") f2.seek(0) self.assertEqual(open("%s.%s" % (sf.name, HOSTNAME), 'rb').read(), content) def test_009_file_copy_tty(self): """test clush (file copy) [tty]""" setattr(ClusterShell.CLI.Clush, '_f_user_interaction', True) try: self.test_008_file_copy() finally: delattr(ClusterShell.CLI.Clush, '_f_user_interaction') @unittest.skipIf(HOSTNAME == 'localhost', "does not work with hostname set to 'localhost'") def test_010_diff(self): """test clush (diff)""" self._clush_t(["-w", HOSTNAME, "--diff", "echo", "ok"], None, b"") self._clush_t(["-w", "%s,localhost" % HOSTNAME, "--diff", "echo", "ok"], None, b"") @unittest.skipIf(HOSTNAME == 'localhost', "does not work with hostname set to 'localhost'") def test_011_diff_tty(self): """test clush (diff) [tty]""" setattr(ClusterShell.CLI.Clush, '_f_user_interaction', True) try: self.test_010_diff() finally: delattr(ClusterShell.CLI.Clush, '_f_user_interaction') @unittest.skipIf(HOSTNAME == 'localhost', "does not work with hostname set to 'localhost'") def test_012_diff_null(self): """test clush (diff w/o output)""" rxs = r"^--- %s\n\+\+\+ localhost\n@@ -1(,1)? \+[01],0 @@\n-ok\n$" % HOSTNAME self._clush_t(["-R", "exec", "-w", "%s,localhost" % HOSTNAME, "--diff", 'echo %h | egrep -q "^localhost$" || echo ok'], None, re.compile(rxs.encode())) def test_013_stdin(self): """test clush (stdin)""" self._clush_t(["-w", HOSTNAME, "sleep 1 && cat"], b"ok", self.output_ok) s = "%s: ok\n%s: ok\n" % (HOSTNAME, HOSTNAME) self._clush_t(["-w", HOSTNAME, "cat"], b"ok\nok", s.encode()) # write binary to stdin s = "%s: foo bar\n" % HOSTNAME self._clush_t(["-w", HOSTNAME, "gzip -d"], codecs.decode(b'1f8b0800869a744f00034bcbcf57484a2ce2020027b4dd1308000000', 'hex'), s.encode()) def test_015_stderr(self): """test clush (stderr)""" s = "%s: err\n" % HOSTNAME self._clush_t(["-w", HOSTNAME, "echo err 1>&2"], None, b"", 0, s.encode()) self._clush_t(["-b", "-w", HOSTNAME, "-q", "echo err 1>&2"], None, b"", 0, s.encode()) s = "---------------\n%s\n---------------\nerr\n" % HOSTNAME self._clush_t(["-B", "-w", HOSTNAME, "-q", "echo err 1>&2"], None, s.encode()) def test_016_stderr_tty(self): """test clush (stderr) [tty]""" setattr(ClusterShell.CLI.Clush, '_f_user_interaction', True) try: self.test_015_stderr() finally: delattr(ClusterShell.CLI.Clush, '_f_user_interaction') @unittest.skipIf(HOSTNAME == 'localhost', "does not work with hostname set to 'localhost'") def test_017_retcodes(self): """test clush (retcodes)""" s = "clush: %s: exited with exit code 1\n" % HOSTNAME exp_err = s.encode() self._clush_t(["-w", HOSTNAME, "/bin/false"], None, b"", 0, exp_err) self._clush_t(["-w", HOSTNAME, "-b", "/bin/false"], None, b"", 0, exp_err) self._clush_t(["-S", "-w", HOSTNAME, "/bin/false"], None, b"", 1, exp_err) self._clush_t(["--maxrc", "-w", HOSTNAME, "/bin/false"], None, b"", 1, exp_err) self._clush_t(["-O", "maxrc=yes", "-w", HOSTNAME, "/bin/false"], None, b"", 1, exp_err) # -O takes precedence over --maxrc self._clush_t(["--maxrc", "-O", "maxrc=no", "-w", HOSTNAME, "/bin/false"], None, b"", 0, exp_err) self._clush_t(["-O", "maxrc=no", "--maxrc", "-w", HOSTNAME, "/bin/false"], None, b"", 0, exp_err) for i in (1, 2, 
    @unittest.skipIf(HOSTNAME == 'localhost',
                     "does not work with hostname set to 'localhost'")
    def test_017_retcodes(self):
        """test clush (retcodes)"""
        s = "clush: %s: exited with exit code 1\n" % HOSTNAME
        exp_err = s.encode()
        self._clush_t(["-w", HOSTNAME, "/bin/false"], None, b"", 0, exp_err)
        self._clush_t(["-w", HOSTNAME, "-b", "/bin/false"], None, b"", 0,
                      exp_err)
        self._clush_t(["-S", "-w", HOSTNAME, "/bin/false"], None, b"", 1,
                      exp_err)
        self._clush_t(["--maxrc", "-w", HOSTNAME, "/bin/false"], None, b"",
                      1, exp_err)
        self._clush_t(["-O", "maxrc=yes", "-w", HOSTNAME, "/bin/false"],
                      None, b"", 1, exp_err)
        # -O takes precedence over --maxrc
        self._clush_t(["--maxrc", "-O", "maxrc=no", "-w", HOSTNAME,
                       "/bin/false"], None, b"", 0, exp_err)
        self._clush_t(["-O", "maxrc=no", "--maxrc", "-w", HOSTNAME,
                       "/bin/false"], None, b"", 0, exp_err)
        for i in (1, 2, 127, 128, 255):
            s = "clush: %s: exited with exit code %d\n" % (HOSTNAME, i)
            self._clush_t(["-S", "-w", HOSTNAME, "exit %d" % i], None, b"",
                          i, s.encode())
        self._clush_t(["-v", "-w", HOSTNAME, "/bin/false"], None, b"", 0,
                      exp_err)
        duo = str(NodeSet("%s,localhost" % HOSTNAME))
        s = "clush: %s (%d): exited with exit code 1\n" % (duo, 2)
        self._clush_t(["-w", duo, "-b", "/bin/false"], None, b"", 0,
                      s.encode())
        s = "clush: %s: exited with exit code 1\n" % duo
        self._clush_t(["-w", duo, "-b", "-q", "/bin/false"], None, b"", 0,
                      s.encode())
        s = "clush: %s (%d): exited with exit code 1\n" % (duo, 2)
        self._clush_t(["-w", duo, "-S", "-b", "/bin/false"], None, b"", 1,
                      s.encode())
        self._clush_t(["-w", duo, "-S", "-b", "-q", "/bin/false"], None,
                      b"", 1)

    @unittest.skipIf(HOSTNAME == 'localhost',
                     "does not work with hostname set to 'localhost'")
    def test_018_retcodes_tty(self):
        """test clush (retcodes) [tty]"""
        setattr(ClusterShell.CLI.Clush, '_f_user_interaction', True)
        try:
            self.test_017_retcodes()
        finally:
            delattr(ClusterShell.CLI.Clush, '_f_user_interaction')

    def test_019_timeout(self):
        """test clush (timeout)"""
        s = "clush: %s: command timeout\n" % HOSTNAME
        self._clush_t(["-w", HOSTNAME, "-u", "1", "sleep 3"], None, b"", 0,
                      s.encode())
        self._clush_t(["-w", HOSTNAME, "-u", "1", "-b", "sleep 3"], None,
                      b"", 0, s.encode())

    def test_020_timeout_tty(self):
        """test clush (timeout) [tty]"""
        setattr(ClusterShell.CLI.Clush, '_f_user_interaction', True)
        try:
            self.test_019_timeout()
        finally:
            delattr(ClusterShell.CLI.Clush, '_f_user_interaction')

    def test_021_file_copy_timeout(self):
        """test clush file copy (timeout)"""
        content = "%f" % time.time()
        content = content.encode()
        f = make_temp_file(content)
        s = "clush: %s: command timeout\n" % HOSTNAME
        self._clush_t(["-w", HOSTNAME, "-u", "0.01", "-c", f.name], None,
                      b"", 0, s.encode())

    def test_022_file_copy_timeout_tty(self):
        """test clush file copy (timeout) [tty]"""
        setattr(ClusterShell.CLI.Clush, '_f_user_interaction', True)
        try:
            self.test_021_file_copy_timeout()
        finally:
            delattr(ClusterShell.CLI.Clush, '_f_user_interaction')

    def test_023_load_workerclass(self):
        """test _load_workerclass()"""
        for name in ('rsh', 'ssh', 'pdsh'):
            cls = ClusterShell.CLI.Clush._load_workerclass(name)
            self.assertTrue(cls)

    def test_024_load_workerclass_error(self):
        """test _load_workerclass() bad use cases"""
        func = ClusterShell.CLI.Clush._load_workerclass
        # Bad module
        self.assertRaises(ImportError, func, 'not_a_module')
        # Worker module but not supported
        self.assertRaises(AttributeError, func, 'worker')

    def test_025_worker(self):
        """test clush (worker)"""
        self._clush_t(["-w", HOSTNAME, "--worker=ssh", "echo ok"], None,
                      self.output_ok, 0)
        self._clush_t(["-w", HOSTNAME, "-R", "ssh", "echo ok"], None,
                      self.output_ok, 0)
        # also test in debug mode...
        # Warning: Python3 will display b'...' in debug mode
        rxs = r"EXECCLIENT: echo ok\n%s: [b\\']{0,2}ok[']{0,1}\n%s: ok\n" \
              % (HOSTNAME, HOSTNAME)
        self._clush_t(["-w", HOSTNAME, "--worker=exec", "-d", "echo ok"],
                      None, re.compile(rxs.encode()), 0)
        self._clush_t(["-w", HOSTNAME, "-R", "exec", "-d", "echo ok"], None,
                      re.compile(rxs.encode()), 0)
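    # Note (illustrative, not part of the original suite): as exercised by
    # test_017 above, plain clush exits with 0 even when a remote command
    # fails (the failure is only reported on stderr); -S exits with the
    # largest remote return code; and --maxrc / -O maxrc=yes enables the
    # same behavior through the configuration layer, with -O taking
    # precedence over --maxrc whatever their order on the command line.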
    def test_026_keyboard_interrupt(self):
        """test clush on keyboard interrupt"""
        # Note: the scope of this test is still limited as we cannot force
        # user interaction (as clush is launched by subprocess). For
        # replicated observation, we use --nostdin and only check whether
        # the Keyboard interrupt message is printed...

        class KillerThread(threading.Thread):
            def run(self):
                time.sleep(1)
                # replace later by process.send_signal() [py2.6+]
                os.kill(self.pidkill, signal.SIGINT)

        kth = KillerThread()
        args = ["-w", HOSTNAME, "--worker=exec", "-q", "--nostdin", "-b",
                "echo start; sleep 10"]
        python_exec = basename(sys.executable or 'python')
        process = Popen([python_exec, '-m', 'ClusterShell.CLI.Clush'] + args,
                        stderr=PIPE, stdout=PIPE, bufsize=0)
        kth.pidkill = process.pid
        kth.start()
        stderr = process.communicate()[1]
        self.assertEqual(stderr, b"Keyboard interrupt.\n")

    def test_027_warn_shell_globbing_nodes(self):
        """test clush warning on shell globbing (-w)"""
        tdir = make_temp_dir()
        tfile = open(os.path.join(tdir.name, HOSTNAME), "w")
        curdir = os.getcwd()
        try:
            os.chdir(tdir.name)
            s = "Warning: using '-w %s' and local path '%s' exists, was it " \
                "expanded by the shell?\n" % (HOSTNAME, HOSTNAME)
            self._clush_t(["-w", HOSTNAME, "echo", "ok"], None,
                          self.output_ok, 0, s.encode())
        finally:
            os.chdir(curdir)
            tfile.close()
            os.unlink(tfile.name)
            tdir.cleanup()

    def test_028_warn_shell_globbing_exclude(self):
        """test clush warning on shell globbing (-x)"""
        tdir = make_temp_dir()
        tfile = open(os.path.join(tdir.name, HOSTNAME), "wb")
        curdir = os.getcwd()
        try:
            os.chdir(tdir.name)
            rxs = r"^Warning: using '-x %s' and local path " \
                  r"'%s' exists, was it expanded by the shell\?\n" \
                  % (HOSTNAME, HOSTNAME)
            self._clush_t(["-S", "-w", "badhost,%s" % HOSTNAME, "-x",
                           HOSTNAME, "echo", "ok"], None, b"", 255,
                          re.compile(rxs.encode()))
        finally:
            os.chdir(curdir)
            tfile.close()
            os.unlink(tfile.name)
            tdir.cleanup()

    def test_029_hostfile(self):
        """test clush --hostfile"""
        f = make_temp_file(HOSTNAME.encode())
        self._clush_t(["--hostfile", f.name, "echo", "ok"], None,
                      self.output_ok)
        f2 = make_temp_file(HOSTNAME.encode())
        self._clush_t(["--hostfile", f.name, "--hostfile", f2.name, "echo",
                       "ok"], None, self.output_ok)
        self.assertRaises(OSError, self._clush_t,
                          ["--hostfile", "/I/do/NOT/exist", "echo", "ok"],
                          None, 1)

    def test_030_config_options(self):
        """test clush -O/--option"""
        self._clush_t(["--option", "color=never", "-w", HOSTNAME, "echo",
                       "ok"], None, self.output_ok)
        self._clush_t(["--option", "color=always", "-w", HOSTNAME, "echo",
                       "ok"], None, self.output_ok_color)
        self._clush_t(["--option=color=never", "-w", HOSTNAME, "echo",
                       "ok"], None, self.output_ok)
        self._clush_t(["--option=color=always", "-w", HOSTNAME, "echo",
                       "ok"], None, self.output_ok_color)
        self._clush_t(["-O", "fd_max=220", "--option", "color=never", "-w",
                       HOSTNAME, "echo", "ok"], None, self.output_ok)
        self._clush_t(["-O", "fd_max=220", "--option", "color=always", "-w",
                       HOSTNAME, "echo", "ok"], None, self.output_ok_color)
        self._clush_t(["--option", "color=never", "-O", "fd_max=220", "-w",
                       HOSTNAME, "echo", "ok"], None, self.output_ok)
        self._clush_t(["--option", "color=always", "-O", "fd_max=220", "-w",
                       HOSTNAME, "echo", "ok"], None, self.output_ok_color)
        self._clush_t(["--option", "color=never", "-O", "fd_max=220", "-O",
                       "color=always", "-w", HOSTNAME, "echo", "ok"], None,
                      self.output_ok_color)
        self._clush_t(["--option", "color=always", "-O", "fd_max=220", "-O",
                       "color=never", "-w", HOSTNAME, "echo", "ok"], None,
                      self.output_ok)
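    # Note (illustrative, not part of the original suite): as test_030
    # shows, when the same option is given several times the last
    # occurrence of -O/--option wins, regardless of unrelated -O options
    # (like fd_max) given in between.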
self._clush_t(["-w", HOSTNAME, "--progress", "sleep", "2"], b'AAAAAAAA', b'', 0, re.compile(r'clush: 0/1 write: \d B/s\r.*'.encode())) self._clush_t(["-w", "%s,localhost" % HOSTNAME, "--progress", "sleep", "2"], b'AAAAAAAAAAAAAA', b'', 0, re.compile(r'clush: 0/2 write: \d+ B/s\r.*'.encode())) self._clush_t(["-w", HOSTNAME, "-b", "--progress", "sleep", "2"], None, b'', 0, re.compile(r'clush: 0/1\r.*'.encode())) self._clush_t(["-w", HOSTNAME, "-b", "--progress", "sleep", "2"], b'AAAAAAAAAAAAAAAA', b'', 0, re.compile(r'clush: 0/1 write: \d+ B/s\r.*'.encode())) # -q and --progress: explicit -q wins self._clush_t(["-w", HOSTNAME, "--progress", "-q", "sleep", "2"], None, b'', 0) self._clush_t(["-w", HOSTNAME, "-b", "--progress", "-q", "sleep", "2"], None, b'', 0, b'') self._clush_t(["-w", HOSTNAME, "-b", "--progress", "-q", "sleep", "2"], b'AAAAAAAAAAAAAAAA', b'', 0, b'') # cover stderr output and --progress s = "%s: bar\n" % HOSTNAME err_rxs = r'%s: foo\nclush: 0/1\r.*' % HOSTNAME self._clush_t(["-w", HOSTNAME, "--progress", "echo foo >&2; echo bar; sleep 2"], None, s.encode(), 0, re.compile(err_rxs.encode())) def test_032_worker_pdsh(self): """test clush (worker pdsh)""" # Warning: same as: echo -n | clush --worker=pdsh when launched from # jenkins (not a tty), so we need --nostdin as pdsh worker doesn't # support write self._clush_t(["-w", HOSTNAME, "--worker=pdsh", "--nostdin", "echo ok"], None, self.output_ok, 0) # write not supported by pdsh worker self.assertRaises(EngineClientNotSupportedError, self._clush_t, ["-w", HOSTNAME, "-R", "pdsh", "cat"], b"bar", None, 1) def test_033_worker_pdsh_tty(self): """test clush (worker pdsh) [tty]""" setattr(ClusterShell.CLI.Clush, '_f_user_interaction', True) try: self._clush_t(["-w", HOSTNAME, "--worker=pdsh", "echo ok"], None, self.output_ok, 0) finally: delattr(ClusterShell.CLI.Clush, '_f_user_interaction') @unittest.skipIf(HOSTNAME == 'localhost', "does not work with hostname set to 'localhost'") def test_034_pick(self): """test clush --pick""" rxs = r"^(localhost|%s): foo\n$" % HOSTNAME self._clush_t(["-w", "%s,localhost" % HOSTNAME, "--pick", "1", "echo foo"], None, re.compile(rxs.encode())) rxs = r"^((localhost|%s): foo\n){2}$" % HOSTNAME self._clush_t(["-w", "%s,localhost" % HOSTNAME, "--pick", "2", "echo foo"], None, re.compile(rxs.encode())) def test_035_sorted_line_mode(self): """test clush (sorted line mode -L)""" self._clush_t(["-w", HOSTNAME, "-L", "echo", "ok"], None, self.output_ok) # Issue #326 cmd = "bash -c 's=%h; n=${s//[!0-9]/}; if [[ $(expr $n %% 2) == 0 ]]; then " \ "echo foo; else echo bar; fi'" self._clush_t(["-w", "cs[01-03]", "--worker=exec", "-L", cmd], None, b'cs01: bar\ncs02: foo\ncs03: bar\n', 0) def test_036_sorted_gather(self): """test clush (CLI.Utils.bufnodeset_cmpkey)""" # test 1st sort criteria: largest nodeset first cmd = "bash -c 's=%h; n=${s//[!0-9]/}; if [[ $(expr $n %% 2) == 0 ]];" \ "then echo foo; else echo bar; fi'" self._clush_t(["-w", "cs[01-03]", "--worker=exec", "-b", cmd], None, b'---------------\ncs[01,03] (2)\n---------------\nbar\n' b'---------------\ncs02\n---------------\nfoo\n', 0) # test 2nd sort criteria: smaller node index first cmd = "bash -c 's=%h; n=${s//[!0-9]/}; if [[ $(expr $n %% 2) == 0 ]];" \ "then echo bar; else echo foo; fi'" self._clush_t(["-w", "cs[01-04]", "--worker=exec", "-b", cmd], None, b'---------------\ncs[01,03] (2)\n---------------\nfoo\n' b'---------------\ncs[02,04] (2)\n---------------\nbar\n', 0) def test_037_nostdin(self): """test clush (nostdin)""" 
self._clush_t(["-n", "-w", HOSTNAME, "cat"], b"dummy", b"") self._clush_t(["--nostdin", "-w", HOSTNAME, "cat"], b"dummy", b"") def test_038_rlimits(self): """test clush error with low fd_max""" # These tests also cover pipe() fd cleanup handling code in # fastsubprocess' Popen._gethandles(). All file descriptors should # be properly cleaned. # # Each fork creates 3 FDs remaining in the parent process. We have # two tests here with a different fd_max each time in order to raise # the exception during stdout and stderr pipe creation. # Depending on the current available FDs during the test, the two # tests below might be reversed. # # test for error when creating stdout pipes: # 99 used OK + 1 (stdin) self.assertRaises(OSError, self._clush_t, ["-N", "-R", "exec", "-w", 'foo[1-1000]', "-b", "-f", "1000", "-O", "fd_max=100", "echo ok"], None, b"ok\n") # # test for error when creating stderr pipes: # 99 OK + 1 (stdin) + 1 (stdout) self.assertRaises(OSError, self._clush_t, ["-N", "-R", "exec", "-w", 'foo[1-1000]', "-b", "-f", "1000", "-O", "fd_max=101", "echo ok"], None, b"ok\n") def test_039_conf_option(self): """test clush --conf option""" custf = make_temp_file(dedent(""" [Main] node_count: no """).encode()) # simple test that checks if "node_count:" no from custom conf file # is taken into account self._clush_t(["-b", "-R", "exec", "-w", "foo[1-10]", "echo ok"], b"", b"---------------\nfoo[1-10] (10)\n---------------\nok\n") self._clush_t(["--conf", custf.name, "-b", "-R", "exec", "-w", "foo[1-10]", "echo ok"], b"", b"---------------\nfoo[1-10]\n---------------\nok\n") def test_040_stdin_eof(self): """test clush (stdin eof)""" # should not block if connection to stdin cannot be established # or of --nostdin is specified self._clush_t(["-w", HOSTNAME, "cat"], None, b'') self._clush_t(["-w", HOSTNAME, "--nostdin", "cat"], None, b'') setattr(ClusterShell.CLI.Clush, '_f_user_interaction', True) try: self._clush_t(["-w", HOSTNAME, "cat"], None, b'') finally: delattr(ClusterShell.CLI.Clush, '_f_user_interaction') def test_041_outdir_errdir(self): """test clush --outdir and --errdir""" odir = make_temp_dir() edir = make_temp_dir() tofilepath = os.path.join(odir.name, HOSTNAME) tefilepath = os.path.join(edir.name, HOSTNAME) try: self._clush_t(["-w", HOSTNAME, "--outdir", odir.name, "echo", "ok"], None, self.output_ok) self.assertTrue(os.path.isfile(tofilepath)) with open(tofilepath, "r") as f: self.assertEqual(f.read(), "ok\n") finally: os.unlink(tofilepath) try: self._clush_t(["-w", HOSTNAME, "--errdir", edir.name, "echo", "ok", ">&2"], None, None, 0, self.output_ok) self.assertTrue(os.path.isfile(tefilepath)) with open(tefilepath, "r") as f: self.assertEqual(f.read(), "ok\n") finally: os.unlink(tefilepath) try: serr = "%s: err\n" % HOSTNAME self._clush_t(["-w", HOSTNAME, "--outdir", odir.name, "--errdir", edir.name, "echo", "ok", ";", "echo", "err", ">&2"], None, self.output_ok, 0, serr.encode()) self.assertTrue(os.path.isfile(tofilepath)) self.assertTrue(os.path.isfile(tefilepath)) with open(tofilepath, "r") as f: self.assertEqual(f.read(), "ok\n") with open(tefilepath, "r") as f: self.assertEqual(f.read(), "err\n") finally: os.unlink(tofilepath) os.unlink(tefilepath) odir.cleanup() edir.cleanup() def test_042_command_prefix(self): """test clush -O command_prefix""" s = "%s: foobar\n" % HOSTNAME self._clush_t(["-O", "command_prefix=echo", "-w", HOSTNAME, "foobar"], None, s.encode()) self._clush_t(["-O", "command_prefix=echo", "--nostdin", "-w", HOSTNAME, "foobar"], None, s.encode()) def 
    def test_043_password_prompt(self):
        """test clush -O password_prompt"""
        def ask_pass_mock():
            return "passok"
        ask_pass_save = ClusterShell.CLI.Clush.ask_pass
        ClusterShell.CLI.Clush.ask_pass = ask_pass_mock
        try:
            s = "%s: passok\n" % HOSTNAME
            expected = s.encode()
            self._clush_t(["-O", "password_prompt=yes", "-w", HOSTNAME,
                           "cat"], None, expected)
            self._clush_t(["-O", "password_prompt=yes", "-w", HOSTNAME,
                           "cat"], b"test\n",
                          expected + ('%s: test\n' % HOSTNAME).encode())
            self._clush_t(["-O", "password_prompt=yes", "--nostdin", "-w",
                           HOSTNAME, "cat"], None, expected)
            self._clush_t(["-O", "password_prompt=yes", "--nostdin", "-w",
                           HOSTNAME, "cat"], b"test\n", expected)
            # write to stdin is not supported by pdsh worker
            self.assertRaises(EngineClientNotSupportedError, self._clush_t,
                              ["-O", "password_prompt=yes", "-w", HOSTNAME,
                               "-R", "pdsh", "cat"], b"test stdin",
                              expected, 1)
            self.assertRaises(EngineClientNotSupportedError, self._clush_t,
                              ["--nostdin", "-O", "password_prompt=yes",
                               "-w", HOSTNAME, "-R", "pdsh", "cat"],
                              b"test stdin", expected, 1)
        finally:
            ClusterShell.CLI.Clush.ask_pass = ask_pass_save

    def test_044_command_prefix_and_password_prompt(self):
        """test clush -O command_prefix and -O password_prompt"""
        def ask_pass_mock():
            return "passok"
        ask_pass_save = ClusterShell.CLI.Clush.ask_pass
        ClusterShell.CLI.Clush.ask_pass = ask_pass_mock
        try:
            s = "%s: passok\n" % HOSTNAME
            expected = s.encode()
            # using 'exec' command to simulate sudo-like behavior
            self._clush_t(["-O", "command_prefix=exec", "-O",
                           "password_prompt=yes", "-w", HOSTNAME, "cat"],
                          None, expected)
            self._clush_t(["-O", "command_prefix=exec", "-O",
                           "password_prompt=yes", "-w", HOSTNAME, "cat"],
                          b"test\n",
                          expected + ('%s: test\n' % HOSTNAME).encode())
            self._clush_t(["-O", "command_prefix=exec", "-O",
                           "password_prompt=yes", "--nostdin", "-w",
                           HOSTNAME, "cat"], None, expected)
            self._clush_t(["-O", "command_prefix=exec", "-O",
                           "password_prompt=yes", "--nostdin", "-w",
                           HOSTNAME, "cat"], b"test\n", expected)
            # test password forwarding followed by stdin stream
            s = "%s: test stdin\n" % HOSTNAME
            expected += s.encode()
            self._clush_t(["-O", "command_prefix=exec", "-O",
                           "password_prompt=yes", "-w", HOSTNAME, "cat"],
                          b"test stdin\n", expected)
        finally:
            ClusterShell.CLI.Clush.ask_pass = ask_pass_save


class CLIClushTest_B_StdinFailure(unittest.TestCase):
    """Unit test class for testing CLI/Clush.py and stdin failure"""

    def setUp(self):
        class BrokenStdinMock(object):
            def isatty(self):
                return False

            def fileno(self):
                raise IOError(errno.EINVAL, "Invalid argument")

        sys.stdin = BrokenStdinMock()

    def tearDown(self):
        """cleanup all tasks"""
        task_cleanup()
        sys.stdin = sys.__stdin__

    def _clush_t(self, args, stdin, expected_stdout, expected_rc=0,
                 expected_stderr=None):
        CLI_main(self, main, ['clush'] + args, stdin, expected_stdout,
                 expected_rc, expected_stderr)

    def test_100_broken_stdin(self):
        """test clush with broken stdin"""
        self._clush_t(["-w", HOSTNAME, "-v", "sleep 1"], None,
                      b"stdin: [Errno 22] Invalid argument\n", 0, b"")
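# Note (illustrative, not part of the original suite): with
# password_prompt=yes, clush asks for a password (mocked via ask_pass in
# test_043/test_044 above) and forwards it on the standard input of the
# remote command before any regular stdin data, which is why "cat" first
# echoes back "passok" and then the test payload.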
class CLIClushTest_C_GroupsConf(unittest.TestCase):
    """Unit test class for testing CLI/Clush.py with --groupsconf"""

    def setUp(self):
        self.gconff = make_temp_file(dedent("""
            [Main]
            default: global_default
            [global_default]
            map: echo example[1-100]
            all: echo @foo,@bar,@moo
            list: echo foo bar moo
            """).encode())
        set_std_group_resolver_config(self.gconff.name)
        # passed to --groupsconf
        self.custf = make_temp_file(dedent("""
            [Main]
            default: custom
            [custom]
            map: echo custom[7-42]
            all: echo @selene,@artemis
            list: echo selene artemis
            """).encode())

    def tearDown(self):
        set_std_group_resolver(None)
        self.gconff = None
        self.custf = None

    def _clush_t(self, args, stdin, expected_stdout, expected_rc=0,
                 expected_stderr=None):
        CLI_main(self, main, ['clush'] + args, stdin, expected_stdout,
                 expected_rc, expected_stderr)

    def test_200_groupsconf_option(self):
        """test clush --groupsconf"""
        self._clush_t(["-R", "exec", "-w", "@foo", "-bL", "echo ok"], None,
                      b"example[1-100]: ok\n", 0, b"")
        self._clush_t(["--groupsconf", self.custf.name, "-R", "exec", "-w",
                       "@foo", "-bL", "echo ok"], None,
                      b"custom[7-42]: ok\n", 0, b"")


class CLIClushTest_D_StdinFIFO(unittest.TestCase):
    """Unit test class for testing CLI/Clush.py when stdin is a named
    pipe"""

    def setUp(self):
        # Create fifo
        self.fname = tempfile.mktemp()
        os.mkfifo(self.fname)

        # Launch write thread
        class FIFOThread(threading.Thread):
            def run(self):
                fifo_wr = open(self.fname, "w")
                fifo_wr.write("0123456789")
                fifo_wr.close()

        fifoth = FIFOThread()
        fifoth.fname = self.fname
        fifoth.start()
        # Use read end of fifo as stdin
        sys.stdin = open(self.fname, "r")

    def tearDown(self):
        """cleanup all tasks"""
        sys.stdin.close()
        os.unlink(self.fname)
        task_cleanup()
        sys.stdin = sys.__stdin__

    def _clush_t(self, args, stdin, expected_stdout, expected_rc=0,
                 expected_stderr=None):
        CLI_main(self, main, ['clush'] + args, stdin, expected_stdout,
                 expected_rc, expected_stderr)

    def test_300_fifo_stdin(self):
        """test clush when stdin is a fifo (read)"""
        s = "%s: 0123456789\n" % HOSTNAME
        self._clush_t(["-w", HOSTNAME, "-v", "cat"], None, s.encode(), 0,
                      b"")

    def test_301_fifo_stdin(self):
        """test clush when stdin is a fifo (not read)"""
        s = "%s: ok\n" % HOSTNAME
        self._clush_t(["-w", HOSTNAME, "-v", "echo ok"], None, s.encode(),
                      0, b"")
""" def testClushConfigEmpty(self): """test CLI.Config.ClushConfig (empty)""" f = tempfile.NamedTemporaryFile(prefix='testclushconfig') f.write(b"\n") parser = OptionParser("dummy") parser.install_clush_config_options() parser.install_display_options(verbose_options=True) parser.install_connector_options() options, _ = parser.parse_args([]) config = ClushConfig(options, filename=f.name) self.assertEqual(config.color, THREE_CHOICES[0]) self.assertEqual(config.verbosity, VERB_STD) self.assertEqual(config.fanout, 64) self.assertEqual(config.maxrc, False) self.assertEqual(config.node_count, True) self.assertEqual(config.connect_timeout, 10) self.assertEqual(config.command_timeout, 0) self.assertEqual(config.ssh_user, None) self.assertEqual(config.ssh_path, None) self.assertEqual(config.ssh_options, None) f.close() def testClushConfigAlmostEmpty(self): """test CLI.Config.ClushConfig (almost empty)""" f = tempfile.NamedTemporaryFile(prefix='testclushconfig') f.write("[Main]\n".encode()) parser = OptionParser("dummy") parser.install_clush_config_options() parser.install_display_options(verbose_options=True) parser.install_connector_options() options, _ = parser.parse_args([]) config = ClushConfig(options, filename=f.name) self.assertEqual(config.color, THREE_CHOICES[0]) self.assertEqual(config.verbosity, VERB_STD) self.assertEqual(config.maxrc, False) self.assertEqual(config.node_count, True) self.assertEqual(config.fanout, 64) self.assertEqual(config.connect_timeout, 10) self.assertEqual(config.command_timeout, 0) self.assertEqual(config.ssh_user, None) self.assertEqual(config.ssh_path, None) self.assertEqual(config.ssh_options, None) f.close() def testClushConfigDefault(self): """test CLI.Config.ClushConfig (default)""" f = tempfile.NamedTemporaryFile(prefix='testclushconfig') f.write(dedent(""" [Main] fanout: 42 connect_timeout: 14 command_timeout: 0 history_size: 100 color: auto verbosity: 1 #ssh_user: root #ssh_path: /usr/bin/ssh #ssh_options: -oStrictHostKeyChecking=no""").encode()) f.flush() parser = OptionParser("dummy") parser.install_clush_config_options() parser.install_display_options(verbose_options=True) parser.install_connector_options() options, _ = parser.parse_args([]) config = ClushConfig(options, filename=f.name) display = Display(options, config) display.vprint(VERB_STD, "test") display.vprint(VERB_DEBUG, "shouldn't see this") self.assertEqual(config.color, THREE_CHOICES[-1]) self.assertEqual(config.verbosity, VERB_STD) self.assertEqual(config.maxrc, False) self.assertEqual(config.node_count, True) self.assertEqual(config.fanout, 42) self.assertEqual(config.connect_timeout, 14) self.assertEqual(config.command_timeout, 0) self.assertEqual(config.ssh_user, None) self.assertEqual(config.ssh_path, None) self.assertEqual(config.ssh_options, None) f.close() def testClushConfigFull(self): """test CLI.Config.ClushConfig (full)""" f = tempfile.NamedTemporaryFile(prefix='testclushconfig') f.write(dedent(""" [Main] fanout: 42 connect_timeout: 14 command_timeout: 0 history_size: 100 color: auto maxrc: yes node_count: yes verbosity: 1 ssh_user: root ssh_path: /usr/bin/ssh ssh_options: -oStrictHostKeyChecking=no """).encode()) f.flush() parser = OptionParser("dummy") parser.install_clush_config_options() parser.install_display_options(verbose_options=True) parser.install_connector_options() options, _ = parser.parse_args([]) config = ClushConfig(options, filename=f.name) self.assertEqual(config.color, THREE_CHOICES[-1]) self.assertEqual(config.verbosity, VERB_STD) 
    def testClushConfigFull(self):
        """test CLI.Config.ClushConfig (full)"""
        f = tempfile.NamedTemporaryFile(prefix='testclushconfig')
        f.write(dedent("""
            [Main]
            fanout: 42
            connect_timeout: 14
            command_timeout: 0
            history_size: 100
            color: auto
            maxrc: yes
            node_count: yes
            verbosity: 1
            ssh_user: root
            ssh_path: /usr/bin/ssh
            ssh_options: -oStrictHostKeyChecking=no
            """).encode())
        f.flush()
        parser = OptionParser("dummy")
        parser.install_clush_config_options()
        parser.install_display_options(verbose_options=True)
        parser.install_connector_options()
        options, _ = parser.parse_args([])
        config = ClushConfig(options, filename=f.name)
        self.assertEqual(config.color, THREE_CHOICES[-1])
        self.assertEqual(config.verbosity, VERB_STD)
        self.assertEqual(config.maxrc, True)
        self.assertEqual(config.node_count, True)
        self.assertEqual(config.fanout, 42)
        self.assertEqual(config.connect_timeout, 14)
        self.assertEqual(config.command_timeout, 0)
        self.assertEqual(config.ssh_user, "root")
        self.assertEqual(config.ssh_path, "/usr/bin/ssh")
        self.assertEqual(config.ssh_options, "-oStrictHostKeyChecking=no")
        f.close()

    def testClushConfigError(self):
        """test CLI.Config.ClushConfig (error)"""
        f = tempfile.NamedTemporaryFile(prefix='testclushconfig')
        f.write(dedent("""
            [Main]
            fanout: 3.2
            connect_timeout: foo
            command_timeout: bar
            history_size: 100
            color: maybe
            node_count: 3
            verbosity: bar
            ssh_user: root
            ssh_path: /usr/bin/ssh
            ssh_options: -oStrictHostKeyChecking=no
            """).encode())
        f.flush()
        parser = OptionParser("dummy")
        parser.install_clush_config_options()
        parser.install_display_options(verbose_options=True)
        parser.install_connector_options()
        options, _ = parser.parse_args([])
        config = ClushConfig(options, filename=f.name)
        try:
            c = config.color
            self.fail("Exception ClushConfigError not raised (color)")
        except ClushConfigError:
            pass
        self.assertEqual(config.verbosity, 0)  # probably for compatibility
        try:
            # note: do not reuse 'f' here, it still names the temp file
            fanout = config.fanout
            self.fail("Exception ClushConfigError not raised (fanout)")
        except ClushConfigError:
            pass
        try:
            node_count = config.node_count
            self.fail("Exception ClushConfigError not raised (node_count)")
        except ClushConfigError:
            pass
        try:
            fanout = config.fanout
        except ClushConfigError as e:
            self.assertEqual(str(e)[0:20], "(Config Main.fanout)")
        try:
            t = config.connect_timeout
            self.fail("Exception ClushConfigError not raised "
                      "(connect_timeout)")
        except ClushConfigError:
            pass
        try:
            m = config.command_timeout
            self.fail("Exception ClushConfigError not raised "
                      "(command_timeout)")
        except ClushConfigError:
            pass
        f.close()

    def testClushConfigSetRlimit(self):
        """test CLI.Config.ClushConfig (setrlimit)"""
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        hard2 = min(32768, hard)
        f = tempfile.NamedTemporaryFile(prefix='testclushconfig')
        f.write(dedent("""
            [Main]
            fanout: 42
            connect_timeout: 14
            command_timeout: 0
            history_size: 100
            color: auto
            fd_max: %d
            verbosity: 1
            """ % hard2).encode())
        f.flush()
        parser = OptionParser("dummy")
        parser.install_clush_config_options()
        parser.install_display_options(verbose_options=True)
        parser.install_connector_options()
        options, _ = parser.parse_args([])
        config = ClushConfig(options, filename=f.name)
        display = Display(options, config)
        # force a lower soft limit
        resource.setrlimit(resource.RLIMIT_NOFILE, (hard2 // 2, hard))
        # max_fdlimit should increase soft limit again
        set_fdlimit(config.fd_max, display)
        # verify
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        self.assertEqual(soft, hard2)
        f.close()
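    # Note (illustrative, not part of the original suite): RLIMIT_NOFILE
    # is a (soft, hard) pair; as the two setrlimit tests suggest,
    # set_fdlimit() only raises the *soft* limit up to the configured
    # fd_max and leaves it untouched on failure, roughly:
    #
    #     soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    #     resource.setrlimit(resource.RLIMIT_NOFILE,
    #                        (min(fd_max, hard), hard))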
    def testClushConfigSetRlimitValueError(self):
        """test CLI.Config.ClushConfig (setrlimit ValueError)"""
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        f = tempfile.NamedTemporaryFile(prefix='testclushconfig')
        f.write(dedent("""
            [Main]
            fanout: 42
            connect_timeout: 14
            command_timeout: 0
            history_size: 100
            color: auto
            # Use wrong fd_max value to generate ValueError
            fd_max: -1
            verbosity: 1""").encode())
        f.flush()
        parser = OptionParser("dummy")
        parser.install_clush_config_options()
        parser.install_display_options(verbose_options=True)
        parser.install_connector_options()
        options, _ = parser.parse_args([])
        config = ClushConfig(options, filename=f.name)
        f.close()
        display = Display(options, config)

        class TestException(Exception):
            pass

        def mock_vprint_err(level, message):
            if message.startswith('Warning: Failed to set max open files'):
                raise TestException()

        display.vprint_err = mock_vprint_err
        self.assertRaises(TestException, set_fdlimit, config.fd_max,
                          display)
        soft2, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
        self.assertEqual(soft, soft2)

    def testClushConfigDefaultWithOptions(self):
        """test CLI.Config.ClushConfig (default with options)"""
        f = tempfile.NamedTemporaryFile(prefix='testclushconfig')
        f.write(dedent("""
            [Main]
            fanout: 42
            connect_timeout: 14
            command_timeout: 0
            history_size: 100
            color: auto
            verbosity: 1""").encode())
        f.flush()
        parser = OptionParser("dummy")
        parser.install_clush_config_options()
        parser.install_display_options(verbose_options=True)
        parser.install_connector_options()
        options, _ = parser.parse_args(["-f", "36", "-u", "3", "-t", "7",
                                        "--user", "foobar", "--color",
                                        "always", "-d", "-v", "-q", "-o",
                                        "-oSomething"])
        config = ClushConfig(options, filename=f.name)
        display = Display(options, config)
        display.vprint(VERB_STD, "test")
        display.vprint(VERB_DEBUG, "test")
        self.assertEqual(config.color, THREE_CHOICES[2])
        self.assertEqual(config.verbosity, VERB_DEBUG)  # takes biggest
        self.assertEqual(config.fanout, 36)
        self.assertEqual(config.connect_timeout, 7)
        self.assertEqual(config.command_timeout, 3)
        self.assertEqual(config.ssh_user, "foobar")
        self.assertEqual(config.ssh_path, None)
        self.assertEqual(config.ssh_options, "-oSomething")
        f.close()

    def testClushConfigWithInstalledConfig(self):
        """test CLI.Config.ClushConfig (installed config required)"""
        # This test needs installed configuration files (needed for
        # maximum coverage).
        parser = OptionParser("dummy")
        parser.install_clush_config_options()
        parser.install_display_options(verbose_options=True)
        parser.install_connector_options()
        options, _ = parser.parse_args([])
        config = ClushConfig(options)
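    # Note (illustrative, not part of the original suite): as checked in
    # testClushConfigDefaultWithOptions, explicit command-line options
    # (here -f 36, -t 7, -u 3) override the values coming from clush.conf,
    # while verbosity keeps the highest level requested across -d/-v/-q.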
    def testClushConfigCustomGlobal(self):
        """test CLI.Config.ClushConfig (CLUSTERSHELL_CFGDIR global custom
        config)"""
        # Save existing environment variable, if it's defined
        custom_config_save = os.environ.get('CLUSTERSHELL_CFGDIR')
        # Create fake CLUSTERSHELL_CFGDIR
        custom_cfg_dir = make_temp_dir()
        try:
            os.environ['CLUSTERSHELL_CFGDIR'] = custom_cfg_dir.name
            cfgfile = open(os.path.join(custom_cfg_dir.name, 'clush.conf'),
                           'w')
            cfgfile.write(dedent("""
                [Main]
                fanout: 42
                connect_timeout: 14
                command_timeout: 0
                history_size: 100
                color: never
                verbosity: 2
                ssh_user: joebar
                ssh_path: ~/bin/ssh
                ssh_options: -oSomeDummyUserOption=yes
                """))
            cfgfile.flush()
            parser = OptionParser("dummy")
            parser.install_clush_config_options()
            parser.install_display_options(verbose_options=True)
            parser.install_connector_options()
            options, _ = parser.parse_args([])
            config = ClushConfig(options)  # filename=None to use defaults!
            self.assertEqual(config.color, THREE_CHOICES[1])
            self.assertEqual(config.verbosity, VERB_VERB)  # takes biggest
            self.assertEqual(config.fanout, 42)
            self.assertEqual(config.connect_timeout, 14)
            self.assertEqual(config.command_timeout, 0)
            self.assertEqual(config.ssh_user, 'joebar')
            self.assertEqual(config.ssh_path, '~/bin/ssh')
            self.assertEqual(config.ssh_options,
                             '-oSomeDummyUserOption=yes')
            cfgfile.close()
        finally:
            if custom_config_save:
                os.environ['CLUSTERSHELL_CFGDIR'] = custom_config_save
            else:
                del os.environ['CLUSTERSHELL_CFGDIR']
            custom_cfg_dir.cleanup()

    def testClushConfigUserOverride(self):
        """test CLI.Config.ClushConfig (XDG_CONFIG_HOME user config)"""
        xdg_config_home_save = os.environ.get('XDG_CONFIG_HOME')
        # Create fake XDG_CONFIG_HOME
        tdir = make_temp_dir()
        try:
            os.environ['XDG_CONFIG_HOME'] = tdir.name
            # create $XDG_CONFIG_HOME/clustershell/clush.conf
            usercfgdir = os.path.join(tdir.name, 'clustershell')
            os.mkdir(usercfgdir)
            cfgfile = open(os.path.join(usercfgdir, 'clush.conf'), 'w')
            cfgfile.write(dedent("""
                [Main]
                fanout: 42
                connect_timeout: 14
                command_timeout: 0
                history_size: 100
                color: never
                verbosity: 2
                ssh_user: trump
                ssh_path: ~/bin/ssh
                ssh_options: -oSomeDummyUserOption=yes
                """))
            cfgfile.flush()
            parser = OptionParser("dummy")
            parser.install_clush_config_options()
            parser.install_display_options(verbose_options=True)
            parser.install_connector_options()
            options, _ = parser.parse_args([])
            config = ClushConfig(options)  # filename=None to use defaults!
            self.assertEqual(config.color, THREE_CHOICES[1])
            self.assertEqual(config.verbosity, VERB_VERB)  # takes biggest
            self.assertEqual(config.fanout, 42)
            self.assertEqual(config.connect_timeout, 14)
            self.assertEqual(config.command_timeout, 0)
            self.assertEqual(config.ssh_user, 'trump')
            self.assertEqual(config.ssh_path, '~/bin/ssh')
            self.assertEqual(config.ssh_options,
                             '-oSomeDummyUserOption=yes')
            cfgfile.close()
        finally:
            if xdg_config_home_save:
                os.environ['XDG_CONFIG_HOME'] = xdg_config_home_save
            else:
                del os.environ['XDG_CONFIG_HOME']
            tdir.cleanup()

    def testClushConfigConfDirModesEmpty(self):
        """test CLI.Config.ClushConfig (confdir with no modes)"""
        tdir1 = make_temp_dir()
        dname1 = tdir1.name
        tdir2 = make_temp_dir()
        dname2 = tdir2.name
        f = make_temp_file(dedent("""
            [Main]
            fanout: 42
            connect_timeout: 14
            command_timeout: 0
            history_size: 100
            color: auto
            maxrc: yes
            node_count: yes
            verbosity: 1
            confdir: %s "%s" %s
            """ % (dname1, dname2, dname1)).encode())
        try:
            parser = OptionParser("dummy")
            parser.install_clush_config_options()
            parser.install_display_options(verbose_options=True)
            parser.install_connector_options()
            options, _ = parser.parse_args([])
            config = ClushConfig(options, filename=f.name)
            self.assertEqual(config.color, THREE_CHOICES[-1])
            self.assertEqual(config.verbosity, VERB_STD)
            self.assertTrue(config.maxrc)
            self.assertTrue(config.node_count)
            self.assertEqual(config.fanout, 42)
            self.assertEqual(config.connect_timeout, 14)
            self.assertEqual(config.command_timeout, 0)
            self.assertEqual(config.ssh_user, None)
            self.assertEqual(config.ssh_path, None)
            self.assertEqual(config.ssh_options, None)
            self.assertEqual(config.command_prefix, "")
            self.assertFalse(config.command_prefix)
            self.assertFalse(config.password_prompt)
            self.assertEqual(len(set(config.modes())), 0)
            self.assertRaises(ClushConfigError, config.set_mode, "sshpass")
        finally:
            f.close()
            tdir2.cleanup()
            tdir1.cleanup()
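    # Note (illustrative, not part of the original suite): the two tests
    # above cover two distinct lookup mechanisms: CLUSTERSHELL_CFGDIR
    # globally relocates the configuration directory, while
    # $XDG_CONFIG_HOME/clustershell/clush.conf is a per-user override of
    # the system-wide clush.conf.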
    def testClushConfigConfDirModes(self):
        """test CLI.Config.ClushConfig (confdir and modes)"""
        tdir1 = make_temp_dir()
        dname1 = tdir1.name
        tdir2 = make_temp_dir()
        dname2 = tdir2.name
        # Notes:
        # - use dname1 two times to check the dup checking code
        # - use quotes on one of the directory paths
        # - enable each run mode and test config options
        f = make_temp_file(dedent("""
            [Main]
            fanout: 42
            connect_timeout: 14
            command_timeout: 0
            history_size: 100
            color: auto
            maxrc: yes
            node_count: yes
            verbosity: 1
            ssh_user: root
            ssh_path: /usr/bin/ssh
            ssh_options: -oStrictHostKeyChecking=no
            confdir: %s "%s" %s
            """ % (dname1, dname2, dname1)).encode())
        f1 = make_temp_file(dedent("""
            [mode:sshpass]
            password_prompt: yes
            ssh_path: /usr/bin/sshpass /usr/bin/ssh
            scp_path: /usr/bin/sshpass /usr/bin/scp
            ssh_options: -oBatchMode=no
            """).encode(), suffix=".conf", dir=dname1)
        f2 = make_temp_file(dedent("""
            [mode:sudo]
            password_prompt: yes
            command_prefix: /usr/bin/sudo -S -p "''"
            """).encode(), suffix=".conf", dir=dname2)
        f3 = make_temp_file(dedent("""
            [mode:test]
            fanout: 100
            connect_timeout: 6
            command_timeout: 5
            history_size: 200
            color: always
            maxrc: no
            node_count: no
            verbosity: 0
            ssh_user: nobody
            ssh_path: /some/other/ssh
            ssh_options:
            """).encode(), suffix=".conf", dir=dname2)
        try:
            parser = OptionParser("dummy")
            parser.install_clush_config_options()
            parser.install_display_options(verbose_options=True)
            parser.install_connector_options()
            options, _ = parser.parse_args([])
            config = ClushConfig(options, filename=f.name)
            self.assertEqual(config.color, THREE_CHOICES[-1])
            self.assertEqual(config.verbosity, VERB_STD)
            self.assertTrue(config.maxrc)
            self.assertTrue(config.node_count)
            self.assertEqual(config.fanout, 42)
            self.assertEqual(config.connect_timeout, 14)
            self.assertEqual(config.command_timeout, 0)
            self.assertEqual(config.ssh_user, "root")
            self.assertEqual(config.ssh_path, "/usr/bin/ssh")
            self.assertEqual(config.ssh_options,
                             "-oStrictHostKeyChecking=no")
            self.assertEqual(config.command_prefix, "")
            self.assertFalse(config.command_prefix)
            self.assertFalse(config.password_prompt)
            self.assertEqual(set(config.modes()),
                             {'sshpass', 'sudo', 'test'})

            config.set_mode("sshpass")
            self.assertEqual(config.color, THREE_CHOICES[-1])
            self.assertEqual(config.verbosity, VERB_STD)
            self.assertTrue(config.maxrc)
            self.assertTrue(config.node_count)
            self.assertEqual(config.fanout, 42)
            self.assertEqual(config.connect_timeout, 14)
            self.assertEqual(config.command_timeout, 0)
            self.assertEqual(config.ssh_user, "root")
            self.assertEqual(config.ssh_path,
                             "/usr/bin/sshpass /usr/bin/ssh")
            self.assertEqual(config.ssh_options, "-oBatchMode=no")
            self.assertEqual(config.command_prefix, "")
            self.assertFalse(config.command_prefix)
            self.assertTrue(config.password_prompt)

            config.set_mode("sudo")
            self.assertEqual(config.color, THREE_CHOICES[-1])
            self.assertEqual(config.verbosity, VERB_STD)
            self.assertTrue(config.maxrc)
            self.assertTrue(config.node_count)
            self.assertEqual(config.fanout, 42)
            self.assertEqual(config.connect_timeout, 14)
            self.assertEqual(config.command_timeout, 0)
            self.assertEqual(config.ssh_user, "root")
            self.assertEqual(config.ssh_path, "/usr/bin/ssh")
            self.assertEqual(config.ssh_options,
                             "-oStrictHostKeyChecking=no")
            self.assertEqual(config.command_prefix,
                             '/usr/bin/sudo -S -p "\'\'"')
            self.assertTrue(config.command_prefix)
            self.assertTrue(config.password_prompt)

            config.set_mode("test")
            self.assertEqual(config.color, THREE_CHOICES[2])
            self.assertEqual(config.verbosity, VERB_STD)
            self.assertFalse(config.maxrc)
            self.assertFalse(config.node_count)
            self.assertEqual(config.fanout, 100)
            self.assertEqual(config.connect_timeout, 6)
            self.assertEqual(config.command_timeout, 5)
            self.assertEqual(config.ssh_user, "nobody")
            self.assertEqual(config.ssh_path, "/some/other/ssh")
            self.assertEqual(config.ssh_options, "")
            self.assertEqual(config.command_prefix, "")
            self.assertFalse(config.command_prefix)
            self.assertFalse(config.password_prompt)
        finally:
            f3.close()
            f2.close()
            f1.close()
            f.close()
            tdir2.cleanup()
            tdir1.cleanup()
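    # Note (illustrative, not part of the original suite): run modes are
    # plain INI sections named [mode:<name>] in *.conf files dropped into
    # a confdir, e.g. the sudo-like mode used above:
    #
    #     [mode:sudo]
    #     password_prompt: yes
    #     command_prefix: /usr/bin/sudo -S -p "''"
    #
    # set_mode() then overlays these options on top of the [Main] values.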


ClusterShell-1.9.2/tests/CLIDisplayTest.py

# ClusterShell.CLI.Display test suite
# Written by S. Thiell

"""Unit test for CLI.Display"""

import tempfile
import unittest
import os

from io import StringIO

from ClusterShell.CLI.Display import Display, THREE_CHOICES, VERB_STD
from ClusterShell.CLI.OptionParser import OptionParser
from ClusterShell.MsgTree import MsgTree
from ClusterShell.NodeSet import NodeSet, set_std_group_resolver
from ClusterShell.NodeUtils import GroupResolverConfig


def makeTestFile(text):
    """Create a temporary file with the provided text."""
    f = tempfile.NamedTemporaryFile()
    f.write(text)
    f.flush()
    return f


class CLIDisplayTest(unittest.TestCase):
    """This test case performs a complete CLI.Display verification.
    CLI.OptionParser is also used and some of its parts are verified along
    the way.
    """

    def testDisplay(self):
        """test CLI.Display"""
        parser = OptionParser("dummy")
        parser.install_display_options(verbose_options=True)
        options, _ = parser.parse_args([])

        ns = NodeSet("hostfoo")
        mtree = MsgTree()
        mtree.add("hostfoo", b"message0")
        mtree.add("hostfoo", b"message1")

        list_env_vars = []
        list_env_vars.append(dict())
        list_env_vars.append(dict(NO_COLOR='0'))
        list_env_vars.append(dict(CLICOLOR='0'))
        list_env_vars.append(dict(CLICOLOR='1'))
        list_env_vars.append(dict(CLICOLOR='0', CLICOLOR_FORCE='0'))
        list_env_vars.append(dict(CLICOLOR_FORCE='1'))

        for env_vars in list_env_vars:
            for var_name in env_vars:
                var_value = env_vars[var_name]
                os.environ[var_name] = var_value

            for whencolor in THREE_CHOICES:  # test whencolor switch
                if whencolor == "":
                    options.whencolor = None
                else:
                    options.whencolor = whencolor
                for label in [True, False]:  # test no-label switch
                    options.label = label
                    disp = Display(options)
                    # inhibit output
                    disp.out = StringIO()
                    disp.err = StringIO()
                    # test print_* methods...
                    disp.print_line(ns, b"foo bar")
                    disp.print_line_error(ns, b"foo bar")
                    disp.print_gather(ns, list(mtree.walk())[0][0])
                    # test also string nodeset as parameter
                    disp.print_gather("hostfoo", list(mtree.walk())[0][0])
                    # test line_mode property
                    self.assertEqual(disp.line_mode, False)
                    disp.line_mode = True
                    self.assertEqual(disp.line_mode, True)
                    disp.print_gather("hostfoo", list(mtree.walk())[0][0])
                    disp.line_mode = False
                    self.assertEqual(disp.line_mode, False)

            for var_name in env_vars:
                os.environ.pop(var_name)
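    # Note (illustrative, not part of the original suite): the environment
    # variables exercised above follow the usual CLI color conventions
    # (NO_COLOR and CLICOLOR=0 to disable colors, CLICOLOR=1 to enable
    # them on a tty, CLICOLOR_FORCE=1 to force them); the loop crosses
    # them with every whencolor setting and label switch to cover all
    # combinations.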
    def testDisplayRegroup(self):
        """test CLI.Display (regroup)"""
        f = makeTestFile(b"""
# A comment
[Main]
default: local
[local]
map: echo hostfoo
#all:
list: echo all
#reverse:
""")
        res = GroupResolverConfig(f.name)
        set_std_group_resolver(res)
        try:
            parser = OptionParser("dummy")
            parser.install_display_options(verbose_options=True)
            options, _ = parser.parse_args(["-r"])

            disp = Display(options, color=False)
            self.assertEqual(disp.regroup, True)
            disp.out = StringIO()
            disp.err = StringIO()
            self.assertEqual(disp.line_mode, False)

            ns = NodeSet("hostfoo")
            # nodeset.regroup() is performed by print_gather()
            disp.print_gather(ns, b"message0\nmessage1\n")
            self.assertEqual(disp.out.getvalue(),
                             "---------------\n@all\n---------------\n"
                             "message0\nmessage1\n\n")
        finally:
            set_std_group_resolver(None)

    def testDisplayClubak(self):
        """test CLI.Display for clubak"""
        parser = OptionParser("dummy")
        parser.install_display_options(separator_option=True,
                                       dshbak_compat=True)
        options, _ = parser.parse_args([])
        disp = Display(options)
        self.assertEqual(bool(disp.gather), False)
        self.assertEqual(disp.line_mode, False)
        self.assertEqual(disp.label, True)
        self.assertEqual(disp.regroup, False)
        self.assertEqual(bool(disp.groupsource), False)
        self.assertEqual(disp.noprefix, False)
        self.assertEqual(disp.maxrc, False)
        self.assertEqual(disp.node_count, True)
        self.assertEqual(disp.verbosity, VERB_STD)

    def testDisplayDecodingErrors(self):
        """test CLI.Display (decoding errors)"""
        parser = OptionParser("dummy")
        parser.install_display_options()
        options, _ = parser.parse_args([])
        disp = Display(options, color=False)
        disp.out = StringIO()
        disp.err = StringIO()
        self.assertEqual(bool(disp.gather), False)
        self.assertEqual(disp.line_mode, False)
        ns = NodeSet("node")
        disp.print_line(ns, b"message0\n\xf8message1\n")
        self.assertEqual(disp.out.getvalue(),
                         "node: message0\n\ufffdmessage1\n\n")
        disp.print_line_error(ns, b"message0\n\xf8message1\n")
        self.assertEqual(disp.err.getvalue(),
                         "node: message0\n\ufffdmessage1\n\n")

    def testDisplayDecodingErrorsGather(self):
        """test CLI.Display (decoding errors, gather)"""
        parser = OptionParser("dummy")
        parser.install_display_options(dshbak_compat=True)
        options, _ = parser.parse_args(["-b"])
        disp = Display(options, color=False)
        disp.out = StringIO()
        disp.err = StringIO()
        self.assertEqual(bool(disp.gather), True)
        self.assertEqual(disp.line_mode, False)
        ns = NodeSet("node")
        disp._print_buffer(ns, b"message0\n\xf8message1\n")
        self.assertEqual(disp.out.getvalue(),
                         "---------------\nnode\n---------------\n"
                         "message0\n\ufffdmessage1\n\n")

    def testDisplayDecodingErrorsLineMode(self):
        """test CLI.Display (decoding errors, line mode)"""
        parser = OptionParser("dummy")
        parser.install_display_options(dshbak_compat=True)
        options, _ = parser.parse_args(["-b", "-L"])
        disp = Display(options, color=False)
        disp.out = StringIO()
        disp.err = StringIO()
        self.assertEqual(bool(disp.gather), True)
        self.assertEqual(disp.label, True)
        self.assertEqual(disp.line_mode, True)
        ns = NodeSet("node")
        disp.print_gather(ns, [b"message0\n", b"\xf8message1\n"])
        self.assertEqual(disp.out.getvalue(),
                         "node: message0\n\nnode: \ufffdmessage1\n\n")
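    # Note (illustrative, not part of the original suite): the decoding
    # tests above rely on the display layer decoding remote output with a
    # replacement-based error handler, so the invalid byte 0xf8 shows up
    # as the U+FFFD replacement character instead of raising
    # UnicodeDecodeError.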
b"\xf8message1\n"]) self.assertEqual(disp.out.getvalue(), "node: message0\n\nnode: \ufffdmessage1\n\n") def testDisplayDecodingErrorsLineModeNoLabel(self): """test CLI.Display (decoding errors, line mode, no label)""" parser = OptionParser("dummy") parser.install_display_options(dshbak_compat=True) options, _ = parser.parse_args(["-b", "-L", "-N"]) disp = Display(options, color=False) disp.out = StringIO() disp.err = StringIO() self.assertEqual(bool(disp.gather), True) self.assertEqual(disp.label, False) self.assertEqual(disp.line_mode, True) ns = NodeSet("node") disp.print_gather(ns, [b"message0\n", b"\xf8message1\n"]) self.assertEqual(disp.out.getvalue(), "message0\n\n\ufffdmessage1\n\n") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/tests/CLINodesetTest.py0000644104717000001440000014462414505632065020114 0ustar00sthiellusers# ClusterShell.CLI.Nodeset test suite # Written by S. Thiell """Unit test for CLI.Nodeset""" import os import random from textwrap import dedent import unittest from TLib import * from ClusterShell.CLI.Nodeset import main from ClusterShell.NodeUtils import GroupResolverConfig from ClusterShell.NodeSet import set_std_group_resolver, \ set_std_group_resolver_config class CLINodesetTestBase(unittest.TestCase): """Base unit test class for testing CLI/Nodeset.py""" def _nodeset_t(self, args, stdin, expected_stdout, expected_rc=0, expected_stderr=None): CLI_main(self, main, ['nodeset'] + args, stdin, expected_stdout, expected_rc, expected_stderr) class CLINodesetTest(CLINodesetTestBase): """Unit test class for testing CLI/Nodeset.py""" def _battery_count(self, args): self._nodeset_t(args + ["--count", ""], None, b"0\n") self._nodeset_t(args + ["--count", "foo"], None, b"1\n") self._nodeset_t(args + ["--count", "foo", "bar"], None, b"2\n") self._nodeset_t(args + ["--count", "foo", "foo"], None, b"1\n") self._nodeset_t(args + ["--count", "foo", "foo", "bar"], None, b"2\n") self._nodeset_t(args + ["--count", "foo[0]"], None, b"1\n") self._nodeset_t(args + ["--count", "foo[2]"], None, b"1\n") self._nodeset_t(args + ["--count", "foo[1,2]"], None, b"2\n") self._nodeset_t(args + ["--count", "foo[1-2]"], None, b"2\n") self._nodeset_t(args + ["--count", "foo[1,2]", "foo[1-2]"], None, b"2\n") self._nodeset_t(args + ["--count", "foo[1-200,245-394]"], None, b"350\n") self._nodeset_t(args + ["--count", "foo[395-442]", "foo[1-200,245-394]"], None, b"398\n") self._nodeset_t(args + ["--count", "foo[395-442]", "foo", "foo[1-200,245-394]"], None, b"399\n") self._nodeset_t(args + ["--count", "foo[395-442]", "foo", "foo[0-200,245-394]"], None, b"400\n") self._nodeset_t(args + ["--count", "foo[395-442]", "bar3,bar24", "foo[1-200,245-394]"], None, b"400\n") # from stdin: use string not bytes as input because CLI/Nodeset.py works in text mode self._nodeset_t(args + ["--count"], "\n", b"0\n") self._nodeset_t(args + ["--count"], "foo\n", b"1\n") self._nodeset_t(args + ["--count"], "foo\nbar\n", b"2\n") self._nodeset_t(args + ["--count"], "foo\nfoo\n", b"1\n") self._nodeset_t(args + ["--count"], "foo\nfoo\nbar\n", b"2\n") self._nodeset_t(args + ["--count"], "foo[0]\n", b"1\n") self._nodeset_t(args + ["--count"], "foo[2]\n", b"1\n") self._nodeset_t(args + ["--count"], "foo[1,2]\n", b"2\n") self._nodeset_t(args + ["--count"], "foo[1-2]\n", b"2\n") self._nodeset_t(args + ["--count"], "foo[1,2]\nfoo[1-2]\n", b"2\n") self._nodeset_t(args + ["--count"], "foo[1-200,245-394]\n", b"350\n") self._nodeset_t(args + ["--count"], 
"foo[395-442]\nfoo[1-200,245-394]\n", b"398\n") self._nodeset_t(args + ["--count"], "foo[395-442]\nfoo\nfoo[1-200,245-394]\n", b"399\n") self._nodeset_t(args + ["--count"], "foo[395-442]\nfoo\nfoo[0-200,245-394]\n", b"400\n") self._nodeset_t(args + ["--count"], "foo[395-442]\nbar3,bar24\nfoo[1-200,245-394]\n", b"400\n") def test_001_count(self): """test nodeset --count""" self._battery_count([]) self._battery_count(["--autostep=1"]) self._battery_count(["--autostep=2"]) self._battery_count(["--autostep=5"]) self._battery_count(["--autostep=auto"]) self._battery_count(["--autostep=0%"]) self._battery_count(["--autostep=50%"]) self._battery_count(["--autostep=100%"]) def test_002_count_intersection(self): """test nodeset --count --intersection""" self._nodeset_t(["--count", "foo", "--intersection", "bar"], None, b"0\n") self._nodeset_t(["--count", "foo", "--intersection", "foo"], None, b"1\n") self._nodeset_t(["--count", "foo", "--intersection", "foo", "-i", "bar"], None, b"0\n") self._nodeset_t(["--count", "foo[0]", "--intersection", "foo0"], None, b"1\n") self._nodeset_t(["--count", "foo[2]", "--intersection", "foo"], None, b"0\n") self._nodeset_t(["--count", "foo[1,2]", "--intersection", "foo[1-2]"], None, b"2\n") self._nodeset_t(["--count", "foo[395-442]", "--intersection", "foo[1-200,245-394]"], None, b"0\n") self._nodeset_t(["--count", "foo[395-442]", "--intersection", "foo", "-i", "foo[1-200,245-394]"], None, b"0\n") self._nodeset_t(["--count", "foo[395-442]", "-i", "foo", "-i", "foo[0-200,245-394]"], None, b"0\n") self._nodeset_t(["--count", "foo[395-442]", "--intersection", "bar3,bar24", "-i", "foo[1-200,245-394]"], None, b"0\n") # multiline args (#394) self._nodeset_t(["--count", "foo[1,2]", "-i", "foo1\nfoo2"], None, b"2\n") self._nodeset_t(["--count", "foo[1,2]", "-i", "foo1\nfoo2", "foo3\nfoo4"], None, b"4\n") def test_003_count_intersection_stdin(self): """test nodeset --count --intersection (stdin)""" self._nodeset_t(["--count", "--intersection", "bar"], "foo\n", b"0\n") self._nodeset_t(["--count", "--intersection", "foo"], "foo\n", b"1\n") self._nodeset_t(["--count", "--intersection", "foo", "-i", "bar"], "foo\n", b"0\n") self._nodeset_t(["--count", "--intersection", "foo0"], "foo[0]\n", b"1\n") self._nodeset_t(["--count", "--intersection", "foo"], "foo[2]\n", b"0\n") self._nodeset_t(["--count", "--intersection", "foo[1-2]"], "foo[1,2]\n", b"2\n") self._nodeset_t(["--count", "--intersection", "foo[1-200,245-394]"], "foo[395-442]\n", b"0\n") self._nodeset_t(["--count", "--intersection", "foo", "-i", "foo[1-200,245-394]"], "foo[395-442]\n", b"0\n") self._nodeset_t(["--count", "-i", "foo", "-i", "foo[0-200,245-394]"], "foo[395-442]\n", b"0\n") self._nodeset_t(["--count", "--intersection", "bar3,bar24", "-i", "foo[1-200,245-394]"], "foo[395-442]\n", b"0\n") def _battery_fold(self, args): self._nodeset_t(args + ["--fold", ""], None, b"\n") self._nodeset_t(args + ["--fold", "foo"], None, b"foo\n") self._nodeset_t(args + ["--fold", "foo", "bar"], None, b"bar,foo\n") self._nodeset_t(args + ["--fold", "foo", "foo"], None, b"foo\n") self._nodeset_t(args + ["--fold", "foo", "foo", "bar"], None, b"bar,foo\n") self._nodeset_t(args + ["--fold", "foo[0]"], None, b"foo0\n") self._nodeset_t(args + ["--fold", "foo[2]"], None, b"foo2\n") self._nodeset_t(args + ["--fold", "foo[1,2]"], None, b"foo[1-2]\n") self._nodeset_t(args + ["--fold", "foo[1-2]"], None, b"foo[1-2]\n") self._nodeset_t(args + ["--fold", "foo[1,2]", "foo[1-2]"], None, b"foo[1-2]\n") self._nodeset_t(args + ["--fold", 
"foo[1-200,245-394]"], None, b"foo[1-200,245-394]\n") self._nodeset_t(args + ["--fold", "foo[395-442]", "foo[1-200,245-394]"], None, b"foo[1-200,245-442]\n") self._nodeset_t(args + ["--fold", "foo[395-442]", "foo", "foo[1-200,245-394]"], None, b"foo,foo[1-200,245-442]\n") self._nodeset_t(args + ["--fold", "foo[395-442]", "foo", "foo[0-200,245-394]"], None, b"foo,foo[0-200,245-442]\n") self._nodeset_t(args + ["--fold", "foo[395-442]", "bar3,bar24", "foo[1-200,245-394]"], None, b"bar[3,24],foo[1-200,245-442]\n") # multiline arg (#394) self._nodeset_t(args + ["--fold", "foo3\nfoo1\nfoo2\nbar"], None, b"bar,foo[1-3]\n") self._nodeset_t(args + ["--fold", "foo3\n\n\nfoo1\n\nfoo2\n\n"], None, b"foo[1-3]\n") # stdin self._nodeset_t(args + ["--fold"], "\n", b"\n") self._nodeset_t(args + ["--fold"], "foo\n", b"foo\n") self._nodeset_t(args + ["--fold"], "foo\nbar\n", b"bar,foo\n") self._nodeset_t(args + ["--fold"], "foo\nfoo\n", b"foo\n") self._nodeset_t(args + ["--fold"], "foo\nfoo\nbar\n", b"bar,foo\n") self._nodeset_t(args + ["--fold"], "foo[0]\n", b"foo0\n") self._nodeset_t(args + ["--fold"], "foo[2]\n", b"foo2\n") self._nodeset_t(args + ["--fold"], "foo[1,2]\n", b"foo[1-2]\n") self._nodeset_t(args + ["--fold"], "foo[1-2]\n", b"foo[1-2]\n") self._nodeset_t(args + ["--fold"], "foo[1,2]\nfoo[1-2]\n", b"foo[1-2]\n") self._nodeset_t(args + ["--fold"], "foo[1-200,245-394]\n", b"foo[1-200,245-394]\n") self._nodeset_t(args + ["--fold"], "foo[395-442]\nfoo[1-200,245-394]\n", b"foo[1-200,245-442]\n") self._nodeset_t(args + ["--fold"], "foo[395-442]\nfoo\nfoo[1-200,245-394]\n", b"foo,foo[1-200,245-442]\n") self._nodeset_t(args + ["--fold"], "foo[395-442]\nfoo\nfoo[0-200,245-394]\n", b"foo,foo[0-200,245-442]\n") self._nodeset_t(args + ["--fold"], "foo[395-442]\nbar3,bar24\nfoo[1-200,245-394]\n", b"bar[3,24],foo[1-200,245-442]\n") def test_004_fold(self): """test nodeset --fold""" self._battery_fold([]) self._battery_fold(["--autostep=3"]) # --autostep=auto (1.7) self._battery_fold(["--autostep=auto"]) self._battery_count(["--autostep=0%"]) self._battery_count(["--autostep=50%"]) self._battery_count(["--autostep=100%"]) def test_005_fold_autostep(self): """test nodeset --fold --autostep=X""" self._nodeset_t(["--autostep=2", "-f", "foo0", "foo2", "foo4", "foo6"], None, b"foo[0-6/2]\n") self._nodeset_t(["--autostep=2", "-f", "foo4", "foo2", "foo0", "foo6"], None, b"foo[0-6/2]\n") self._nodeset_t(["--autostep=3", "-f", "foo0", "foo2", "foo4", "foo6"], None, b"foo[0-6/2]\n") self._nodeset_t(["--autostep=4", "-f", "foo0", "foo2", "foo4", "foo6"], None, b"foo[0-6/2]\n") self._nodeset_t(["--autostep=5", "-f", "foo0", "foo2", "foo4", "foo6"], None, b"foo[0,2,4,6]\n") self._nodeset_t(["--autostep=auto", "-f", "foo0", "foo2", "foo4", "foo6"], None, b"foo[0-6/2]\n") self._nodeset_t(["--autostep=auto", "-f", "foo4", "foo2", "foo0", "foo6"], None, b"foo[0-6/2]\n") self._nodeset_t(["--autostep=auto", "-f", "foo4", "foo2", "foo0", "foo2", "foo6"], None, b"foo[0-6/2]\n") self._nodeset_t(["--autostep=auto", "-f", "foo4", "foo2", "foo0", "foo5", "foo6"], None, b"foo[0,2,4-6]\n") self._nodeset_t(["--autostep=auto", "-f", "foo4", "foo2", "foo0", "foo9", "foo6"], None, b"foo[0,2,4,6,9]\n") self._nodeset_t(["--autostep=75%", "-f", "foo0", "foo2", "foo4", "foo6"], None, b"foo[0-6/2]\n") self._nodeset_t(["--autostep=75%", "-f", "foo4", "foo2", "foo0", "foo6"], None, b"foo[0-6/2]\n") self._nodeset_t(["--autostep=80%", "-f", "foo4", "foo2", "foo0", "foo2", "foo6"], None, b"foo[0-6/2]\n") self._nodeset_t(["--autostep=80%", "-f", 
"foo4", "foo2", "foo0", "foo5", "foo6"], None, b"foo[0,2,4-6]\n") self._nodeset_t(["--autostep=80%", "-f", "foo4", "foo2", "foo0", "foo9", "foo6"], None, b"foo[0-6/2,9]\n") self._nodeset_t(["--autostep=81%", "-f", "foo4", "foo2", "foo0", "foo9", "foo6"], None, b"foo[0,2,4,6,9]\n") self._nodeset_t(["--autostep=100%", "-f", "foo4", "foo2", "foo0", "foo9", "foo6"], None, b"foo[0,2,4,6,9]\n") def test_006_expand(self): """test nodeset --expand""" self._nodeset_t(["--expand", ""], None, b"\n") self._nodeset_t(["--expand", "foo"], None, b"foo\n") self._nodeset_t(["--expand", "foo", "bar"], None, b"bar foo\n") self._nodeset_t(["--expand", "foo", "foo"], None, b"foo\n") self._nodeset_t(["--expand", "foo[0]"], None, b"foo0\n") self._nodeset_t(["--expand", "foo[2]"], None, b"foo2\n") self._nodeset_t(["--expand", "foo[1,2]"], None, b"foo1 foo2\n") self._nodeset_t(["--expand", "foo[1-2]"], None, b"foo1 foo2\n") self._nodeset_t(["--expand", "foo[1-2],bar"], None, b"bar foo1 foo2\n") def test_007_expand_stdin(self): """test nodeset --expand (stdin)""" self._nodeset_t(["--expand"], "\n", b"\n") self._nodeset_t(["--expand"], "foo\n", b"foo\n") self._nodeset_t(["--expand"], "foo\nbar\n", b"bar foo\n") self._nodeset_t(["--expand"], "foo\nfoo\n", b"foo\n") self._nodeset_t(["--expand"], "foo[0]\n", b"foo0\n") self._nodeset_t(["--expand"], "foo[2]\n", b"foo2\n") self._nodeset_t(["--expand"], "foo[1,2]\n", b"foo1 foo2\n") self._nodeset_t(["--expand"], "foo[1-2]\n", b"foo1 foo2\n") self._nodeset_t(["--expand"], "foo[1-2],bar\n", b"bar foo1 foo2\n") def test_008_expand_separator(self): """test nodeset --expand -S""" self._nodeset_t(["--expand", "-S", ":", "foo"], None, b"foo\n") self._nodeset_t(["--expand", "-S", ":", "foo", "bar"], None, b"bar:foo\n") self._nodeset_t(["--expand", "--separator", ":", "foo", "bar"], None, b"bar:foo\n") self._nodeset_t(["--expand", "--separator=:", "foo", "bar"], None, b"bar:foo\n") self._nodeset_t(["--expand", "-S", ":", "foo", "foo"], None, b"foo\n") self._nodeset_t(["--expand", "-S", ":", "foo[0]"], None, b"foo0\n") self._nodeset_t(["--expand", "-S", ":", "foo[2]"], None, b"foo2\n") self._nodeset_t(["--expand", "-S", ":", "foo[1,2]"], None, b"foo1:foo2\n") self._nodeset_t(["--expand", "-S", ":", "foo[1-2]"], None, b"foo1:foo2\n") self._nodeset_t(["--expand", "-S", " ", "foo[1-2]"], None, b"foo1 foo2\n") self._nodeset_t(["--expand", "-S", ",", "foo[1-2],bar"], None, b"bar,foo1,foo2\n") self._nodeset_t(["--expand", "-S", "uuu", "foo[1-2],bar"], None, b"baruuufoo1uuufoo2\n") self._nodeset_t(["--expand", "-S", "\\n", "foo[1-2]"], None, b"foo1\nfoo2\n") self._nodeset_t(["--expand", "-S", "\n", "foo[1-2]"], None, b"foo1\nfoo2\n") def test_009_fold_xor(self): """test nodeset --fold --xor""" self._nodeset_t(["--fold", "foo", "-X", "bar"], None, b"bar,foo\n") self._nodeset_t(["--fold", "foo", "-X", "foo"], None, b"\n") self._nodeset_t(["--fold", "foo[1,2]", "-X", "foo[1-2]"], None, b"\n") self._nodeset_t(["--fold", "foo[1-10]", "-X", "foo[5-15]"], None, b"foo[1-4,11-15]\n") self._nodeset_t(["--fold", "foo[395-442]", "-X", "foo[1-200,245-394]"], None, b"foo[1-200,245-442]\n") self._nodeset_t(["--fold", "foo[395-442]", "-X", "foo", "-X", "foo[1-200,245-394]"], None, b"foo,foo[1-200,245-442]\n") self._nodeset_t(["--fold", "foo[395-442]", "-X", "foo", "-X", "foo[0-200,245-394]"], None, b"foo,foo[0-200,245-442]\n") self._nodeset_t(["--fold", "foo[395-442]", "-X", "bar3,bar24", "-X", "foo[1-200,245-394]"], None, b"bar[3,24],foo[1-200,245-442]\n") # multiline args (#394) 
self._nodeset_t(["--fold", "foo[1-10]", "-X", "foo5\nfoo6\nfoo7"], None, b"foo[1-4,8-10]\n") self._nodeset_t(["--fold", "foo[1-10]", "-X", "foo5\nfoo6\nfoo7", "foo5\nfoo6"], None, b"foo[1-6,8-10]\n") def test_010_fold_xor_stdin(self): """test nodeset --fold --xor (stdin)""" self._nodeset_t(["--fold", "-X", "bar"], "foo\n", b"bar,foo\n") self._nodeset_t(["--fold", "-X", "foo"], "foo\n", b"\n") self._nodeset_t(["--fold", "-X", "foo[1-2]"], "foo[1,2]\n", b"\n") self._nodeset_t(["--fold", "-X", "foo[5-15]"], "foo[1-10]\n", b"foo[1-4,11-15]\n") self._nodeset_t(["--fold", "-X", "foo[1-200,245-394]"], "foo[395-442]\n", b"foo[1-200,245-442]\n") self._nodeset_t(["--fold", "-X", "foo", "-X", "foo[1-200,245-394]"], "foo[395-442]\n", b"foo,foo[1-200,245-442]\n") self._nodeset_t(["--fold", "-X", "foo", "-X", "foo[0-200,245-394]"], "foo[395-442]\n", b"foo,foo[0-200,245-442]\n") self._nodeset_t(["--fold", "-X", "bar3,bar24", "-X", "foo[1-200,245-394]"], "foo[395-442]\n", b"bar[3,24],foo[1-200,245-442]\n") # using stdin for -X self._nodeset_t(["-f", "foo[2-4]", "-X", "-"], "foo4 foo5 foo6\n", b"foo[2-3,5-6]\n") self._nodeset_t(["-f", "-X", "-", "foo[1-6]"], "foo4 foo5 foo6\n", b"foo[1-6]\n", 0, b"WARNING: empty left operand for set operation\n") def test_011_fold_exclude(self): """test nodeset --fold --exclude""" # Empty result self._nodeset_t(["--fold", "foo", "-x", "foo"], None, b"\n") # With no range self._nodeset_t(["--fold", "foo,bar", "-x", "foo"], None, b"bar\n") # Normal with range self._nodeset_t(["--fold", "foo[0-5]", "-x", "foo[0-10]"], None, b"\n") self._nodeset_t(["--fold", "foo[0-10]", "-x", "foo[0-5]"], None, b"foo[6-10]\n") # Do no change self._nodeset_t(["--fold", "foo[6-10]", "-x", "bar[0-5]"], None, b"foo[6-10]\n") self._nodeset_t(["--fold", "foo[0-10]", "foo[13-18]", "--exclude", "foo[5-10,15]"], None, b"foo[0-4,13-14,16-18]\n") # multiline args (#394) self._nodeset_t(["--fold", "foo[0-5]", "-x", "foo0\nfoo9\nfoo3\nfoo2\nfoo1"], None, b"foo[4-5]\n") self._nodeset_t(["--fold", "foo[0-5]", "-x", "foo0\nfoo9\nfoo3\nfoo2\nfoo1", "foo5\nfoo6"], None, b"foo[4-6]\n") def test_012_fold_exclude_stdin(self): """test nodeset --fold --exclude (stdin)""" # Empty result self._nodeset_t(["--fold", "-x", "foo"], "", b"\n", 0, b"WARNING: empty left operand for set operation\n") self._nodeset_t(["--fold", "-x", "foo"], "\n", b"\n", 0, b"WARNING: empty left operand for set operation\n") self._nodeset_t(["--fold", "-x", "foo"], "foo\n", b"\n") # With no range self._nodeset_t(["--fold", "-x", "foo"], "foo,bar\n", b"bar\n") # Normal with range self._nodeset_t(["--fold", "-x", "foo[0-10]"], "foo[0-5]\n", b"\n") self._nodeset_t(["--fold", "-x", "foo[0-5]"], "foo[0-10]\n", b"foo[6-10]\n") # Do no change self._nodeset_t(["--fold", "-x", "bar[0-5]"], "foo[6-10]\n", b"foo[6-10]\n") self._nodeset_t(["--fold", "--exclude", "foo[5-10,15]"], "foo[0-10]\nfoo[13-18]\n", b"foo[0-4,13-14,16-18]\n") # using stdin for -x self._nodeset_t(["-f", "foo[1-6]", "-x", "-"], "foo4 foo5 foo6\n", b"foo[1-3]\n") self._nodeset_t(["-f", "-x", "-", "foo[1-6]"], "foo4 foo5 foo6\n", b"foo[1-6]\n", 0, b"WARNING: empty left operand for set operation\n") def test_013_fold_intersection(self): """test nodeset --fold --intersection""" # Empty result self._nodeset_t(["--fold", "foo", "-i", "foo"], None, b"foo\n") # With no range self._nodeset_t(["--fold", "foo,bar", "--intersection", "foo"], None, b"foo\n") # Normal with range self._nodeset_t(["--fold", "foo[0-5]", "-i", "foo[0-10]"], None, b"foo[0-5]\n") self._nodeset_t(["--fold", "foo[0-10]", 
"-i", "foo[0-5]"], None, b"foo[0-5]\n") self._nodeset_t(["--fold", "foo[6-10]", "-i", "bar[0-5]"], None, b"\n") self._nodeset_t(["--fold", "foo[0-10]", "foo[13-18]", "-i", "foo[5-10,15]"], None, b"foo[5-10,15]\n") # numerical bracket folding (#228) self._nodeset_t(["--fold", "node123[1-2]", "-i", "node1232"], None, b"node1232\n") self._nodeset_t(["--fold", "node023[1-2]0", "-i", "node02320"], None, b"node02320\n") self._nodeset_t(["--fold", "node023[1-2]0-ipmi2", "-i", "node02320-ipmi2"], None, b"node02320-ipmi2\n") self._nodeset_t(["--fold", "-i", "foo", "foo"], None, b"foo\n", 0, b"WARNING: empty left operand for set operation\n") def test_014_fold_intersection_stdin(self): """test nodeset --fold --intersection (stdin)""" # Empty result self._nodeset_t(["--fold", "--intersection", "foo"], "", b"\n", 0, b"WARNING: empty left operand for set operation\n") self._nodeset_t(["--fold", "--intersection", "foo"], "\n", b"\n", 0, b"WARNING: empty left operand for set operation\n") self._nodeset_t(["--fold", "-i", "foo"], "foo\n", b"foo\n") # With no range self._nodeset_t(["--fold", "-i", "foo"], "foo,bar\n", b"foo\n") # Normal with range self._nodeset_t(["--fold", "-i", "foo[0-10]"], "foo[0-5]\n", b"foo[0-5]\n") self._nodeset_t(["--fold", "-i", "foo[0-5]"], "foo[0-10]\n", b"foo[0-5]\n") # Do no change self._nodeset_t(["--fold", "-i", "bar[0-5]"], "foo[6-10]\n", b"\n") self._nodeset_t(["--fold", "-i", "foo[5-10,15]"], "foo[0-10]\nfoo[13-18]\n", b"foo[5-10,15]\n") # using stdin for -i self._nodeset_t(["-f", "foo[1-6]", "-i", "-"], "foo4 foo5 foo6\n", b"foo[4-6]\n") self._nodeset_t(["-f", "-i", "-", "foo[1-6]"], "foo4 foo5 foo6\n", b"foo[1-6]\n", 0, b"WARNING: empty left operand for set operation\n") # numerical bracket folding (#228) self._nodeset_t(["--fold", "-i", "node123[1-2]"], "node1232\n", b"node1232\n") self._nodeset_t(["--fold", "-i", "node023[1-2]0"], "node02320\n", b"node02320\n") self._nodeset_t(["--fold", "-i", "node023[1-2]0-ipmi2"], "node02320-ipmi2\n", b"node02320-ipmi2\n") def test_015_rangeset(self): """test nodeset --rangeset""" self._nodeset_t(["--fold", "--rangeset", "1,2"], None, b"1-2\n") self._nodeset_t(["--expand", "-R", "1-2"], None, b"1 2\n") self._nodeset_t(["--fold", "-R", "1-2", "-X", "2-3"], None, b"1,3\n") def test_016_rangeset_stdin(self): """test nodeset --rangeset (stdin)""" self._nodeset_t(["--fold", "--rangeset"], "1,2\n", b"1-2\n") self._nodeset_t(["--expand", "-R"], "1-2\n", b"1 2\n") self._nodeset_t(["--fold", "-R", "-X", "2-3"], "1-2\n", b"1,3\n") def test_017_stdin(self): """test nodeset - (stdin)""" self._nodeset_t(["-f", "-"], "foo\n", b"foo\n") self._nodeset_t(["-f", "-"], "foo1 foo2 foo3\n", b"foo[1-3]\n") self._nodeset_t(["--autostep=2", "-f"], "foo0 foo2 foo4 foo6\n", b"foo[0-6/2]\n") self._nodeset_t(["--autostep=auto", "-f"], "foo0 foo2 foo4 foo6\n", b"foo[0-6/2]\n") self._nodeset_t(["--autostep=100%", "-f"], "foo0 foo2 foo4 foo6\n", b"foo[0-6/2]\n") self._nodeset_t(["--autostep=0%", "-f"], "foo0 foo2 foo4 foo6\n", b"foo[0-6/2]\n") def test_018_split(self): """test nodeset --split""" self._nodeset_t(["--split=2", "-f", "bar"], None, b"bar\n") self._nodeset_t(["--split", "2", "-f", "foo,bar"], None, b"bar\nfoo\n") self._nodeset_t(["--split", "2", "-e", "foo", "bar", "bur", "oof", "gcc"], None, b"bar bur foo\ngcc oof\n") self._nodeset_t(["--split=2", "-f", "foo[2-9]"], None, b"foo[2-5]\nfoo[6-9]\n") self._nodeset_t(["--split=2", "-f", "foo[2-3,7]", "bar9"], None, b"bar9,foo2\nfoo[3,7]\n") self._nodeset_t(["--split=3", "-f", "foo[2-9]"], None, 
b"foo[2-4]\nfoo[5-7]\nfoo[8-9]\n") self._nodeset_t(["--split=1", "-f", "foo2", "foo3"], None, b"foo[2-3]\n") self._nodeset_t(["--split=4", "-f", "foo[2-3]"], None, b"foo2\nfoo3\n") self._nodeset_t(["--split=4", "-f", "foo3", "foo2"], None, b"foo2\nfoo3\n") self._nodeset_t(["--split=2", "-e", "foo[2-9]"], None, b"foo2 foo3 foo4 foo5\nfoo6 foo7 foo8 foo9\n") self._nodeset_t(["--split=3", "-e", "foo[2-9]"], None, b"foo2 foo3 foo4\nfoo5 foo6 foo7\nfoo8 foo9\n") self._nodeset_t(["--split=1", "-e", "foo3", "foo2"], None, b"foo2 foo3\n") self._nodeset_t(["--split=4", "-e", "foo[2-3]"], None, b"foo2\nfoo3\n") self._nodeset_t(["--split=4", "-e", "foo2", "foo3"], None, b"foo2\nfoo3\n") self._nodeset_t(["--split=2", "-c", "foo2", "foo3"], None, b"1\n1\n") def test_019_contiguous(self): """test nodeset --contiguous""" self._nodeset_t(["--contiguous", "-f", "bar"], None, b"bar\n") self._nodeset_t(["--contiguous", "-f", "foo,bar"], None, b"bar\nfoo\n") self._nodeset_t(["--contiguous", "-f", "foo", "bar", "bur", "oof", "gcc"], None, b"bar\nbur\nfoo\ngcc\noof\n") self._nodeset_t(["--contiguous", "-e", "foo", "bar", "bur", "oof", "gcc"], None, b"bar\nbur\nfoo\ngcc\noof\n") self._nodeset_t(["--contiguous", "-f", "foo2"], None, b"foo2\n") self._nodeset_t(["--contiguous", "-R", "-f", "2"], None, b"2\n") self._nodeset_t(["--contiguous", "-f", "foo[2-9]"], None, b"foo[2-9]\n") self._nodeset_t(["--contiguous", "-f", "foo[2-3,7]", "bar9"], None, b"bar9\nfoo[2-3]\nfoo7\n") self._nodeset_t(["--contiguous", "-R", "-f", "2-3,7", "9"], None, b"2-3\n7\n9\n") self._nodeset_t(["--contiguous", "-f", "foo2", "foo3"], None, b"foo[2-3]\n") self._nodeset_t(["--contiguous", "-f", "foo3", "foo2"], None, b"foo[2-3]\n") self._nodeset_t(["--contiguous", "-f", "foo3", "foo1"], None, b"foo1\nfoo3\n") self._nodeset_t(["--contiguous", "-f", "foo[1-5/2]", "foo7"], None, b"foo1\nfoo3\nfoo5\nfoo7\n") def test_020_slice(self): """test nodeset -I/--slice""" self._nodeset_t(["--slice=0", "-f", "bar"], None, b"bar\n") self._nodeset_t(["--slice=0", "-e", "bar"], None, b"bar\n") self._nodeset_t(["--slice=1", "-f", "bar"], None, b"\n") self._nodeset_t(["--slice=0-1", "-f", "bar"], None, b"bar\n") self._nodeset_t(["-I0", "-f", "bar[34-68,89-90]"], None, b"bar34\n") self._nodeset_t(["-R", "-I0", "-f", "34-68,89-90"], None, b"34\n") self._nodeset_t(["-I 0", "-f", "bar[34-68,89-90]"], None, b"bar34\n") self._nodeset_t(["-I 0", "-e", "bar[34-68,89-90]"], None, b"bar34\n") self._nodeset_t(["-I 0-3", "-f", "bar[34-68,89-90]"], None, b"bar[34-37]\n") self._nodeset_t(["-I 0-3", "-f", "bar[34-68,89-90]", "-x", "bar34"], None, b"bar[35-38]\n") self._nodeset_t(["-I 0-3", "-f", "bar[34-68,89-90]", "-x", "bar35"], None, b"bar[34,36-38]\n") self._nodeset_t(["-I 0-3", "-e", "bar[34-68,89-90]"], None, b"bar34 bar35 bar36 bar37\n") self._nodeset_t(["-I 3,1,0,2", "-f", "bar[34-68,89-90]"], None, b"bar[34-37]\n") self._nodeset_t(["-I 1,3,7,10,16,20,30,34-35,37", "-f", "bar[34-68,89-90]"], None, b"bar[35,37,41,44,50,54,64,68,89]\n") self._nodeset_t(["-I 8", "-f", "bar[34-68,89-90]"], None, b"bar42\n") self._nodeset_t(["-I 8-100", "-f", "bar[34-68,89-90]"], None, b"bar[42-68,89-90]\n") self._nodeset_t(["-I 0-100", "-f", "bar[34-68,89-90]"], None, b"bar[34-68,89-90]\n") self._nodeset_t(["-I 8-100/2", "-f", "bar[34-68,89-90]"], None, b"bar[42,44,46,48,50,52,54,56,58,60,62,64,66,68,90]\n") self._nodeset_t(["--autostep=2", "-I 8-100/2", "-f", "bar[34-68,89-90]"], None, b"bar[42-68/2,90]\n") self._nodeset_t(["--autostep=93%", "-I 8-100/2", "-f", "bar[34-68,89-90]"], 
None, b"bar[42-68/2,90]\n") self._nodeset_t(["--autostep=94%", "-I 8-100/2", "-f", "bar[34-68,89-90]"], None, b"bar[42,44,46,48,50,52,54,56,58,60,62,64,66,68,90]\n") self._nodeset_t(["--autostep=auto", "-I 8-100/2", "-f", "bar[34-68,89-90]"], None, b"bar[42,44,46,48,50,52,54,56,58,60,62,64,66,68,90]\n") self._nodeset_t(["--autostep=auto", "-I 8-100/2", "-f", "bar[34-68]"], None, b"bar[42-68/2]\n") self._nodeset_t(["--autostep=100%", "-I 8-100/2", "-f", "bar[34-68]"], None, b"bar[42-68/2]\n") def test_021_slice_stdin(self): """test nodeset -I/--slice (stdin)""" self._nodeset_t(["--slice=0", "-f"], "bar\n", b"bar\n") self._nodeset_t(["--slice=0", "-e"], "bar\n", b"bar\n") self._nodeset_t(["--slice=1", "-f"], "bar\n", b"\n") self._nodeset_t(["--slice=0-1", "-f"], "bar\n", b"bar\n") self._nodeset_t(["-I0", "-f"], "bar[34-68,89-90]\n", b"bar34\n") self._nodeset_t(["-R", "-I0", "-f"], "34-68,89-90\n", b"34\n") self._nodeset_t(["-I 0", "-f"], "bar[34-68,89-90]\n", b"bar34\n") self._nodeset_t(["-I 0", "-e"], "bar[34-68,89-90]\n", b"bar34\n") self._nodeset_t(["-I 0-3", "-f"], "bar[34-68,89-90]\n", b"bar[34-37]\n") self._nodeset_t(["-I 0-3", "-f", "-x", "bar34"], "bar[34-68,89-90]\n", b"bar[35-38]\n") self._nodeset_t(["-I 0-3", "-f", "-x", "bar35"], "bar[34-68,89-90]\n", b"bar[34,36-38]\n") self._nodeset_t(["-I 0-3", "-e"], "bar[34-68,89-90]\n", b"bar34 bar35 bar36 bar37\n") self._nodeset_t(["-I 3,1,0,2", "-f"], "bar[34-68,89-90]\n", b"bar[34-37]\n") self._nodeset_t(["-I 1,3,7,10,16,20,30,34-35,37", "-f"], "bar[34-68,89-90]\n", b"bar[35,37,41,44,50,54,64,68,89]\n") self._nodeset_t(["-I 8", "-f"], "bar[34-68,89-90]\n", b"bar42\n") self._nodeset_t(["-I 8-100", "-f"], "bar[34-68,89-90]\n", b"bar[42-68,89-90]\n") self._nodeset_t(["-I 0-100", "-f"], "bar[34-68,89-90]\n", b"bar[34-68,89-90]\n") self._nodeset_t(["-I 8-100/2", "-f"], "bar[34-68,89-90]\n", b"bar[42,44,46,48,50,52,54,56,58,60,62,64,66,68,90]\n") self._nodeset_t(["--autostep=2", "-I 8-100/2", "-f"], "bar[34-68,89-90]\n", b"bar[42-68/2,90]\n") self._nodeset_t(["--autostep=93%", "-I 8-100/2", "-f"], "bar[34-68,89-90]\n", b"bar[42-68/2,90]\n") self._nodeset_t(["--autostep=93.33%", "-I 8-100/2", "-f"], "bar[34-68,89-90]\n", b"bar[42-68/2,90]\n") self._nodeset_t(["--autostep=94%", "-I 8-100/2", "-f"], "bar[34-68,89-90]\n", b"bar[42,44,46,48,50,52,54,56,58,60,62,64,66,68,90]\n") self._nodeset_t(["--autostep=auto", "-I 8-100/2", "-f"], "bar[34-68,89-90]\n", b"bar[42,44,46,48,50,52,54,56,58,60,62,64,66,68,90]\n") self._nodeset_t(["--autostep=2", "-I 8-100/2", "-f"], "bar[34-68]\n", b"bar[42-68/2]\n") def test_022_output_format(self): """test nodeset -O""" self._nodeset_t(["--expand", "--output-format", "/path/%s/", "foo"], None, b"/path/foo/\n") self._nodeset_t(["--expand", "-O", "/path/%s/", "-S", ":", "foo"], None, b"/path/foo/\n") self._nodeset_t(["--expand", "-O", "/path/%s/", "foo[2]"], None, b"/path/foo2/\n") self._nodeset_t(["--expand", "-O", "%s-ib0", "foo[1-4]"], None, b"foo1-ib0 foo2-ib0 foo3-ib0 foo4-ib0\n") self._nodeset_t(["--expand", "-O", "%s-ib0", "-S", ":", "foo[1-4]"], None, b"foo1-ib0:foo2-ib0:foo3-ib0:foo4-ib0\n") self._nodeset_t(["--fold", "-O", "%s-ipmi", "foo", "bar"], None, b"bar-ipmi,foo-ipmi\n") self._nodeset_t(["--fold", "-O", "%s-ib0", "foo1", "foo2"], None, b"foo[1-2]-ib0\n") self._nodeset_t(["--fold", "-O", "%s-ib0", "foo1", "foo2", "bar1", "bar2"], None, b"bar[1-2]-ib0,foo[1-2]-ib0\n") self._nodeset_t(["--fold", "-O", "%s-ib0", "--autostep=auto", "foo[1-9/2]"], None, b"foo[1-9/2]-ib0\n") self._nodeset_t(["--fold", "-O", 
"%s-ib0", "--autostep=6", "foo[1-9/2]"], None, b"foo[1,3,5,7,9]-ib0\n") self._nodeset_t(["--fold", "-O", "%s-ib0", "--autostep=5", "foo[1-9/2]"], None, b"foo[1-9/2]-ib0\n") self._nodeset_t(["--count", "-O", "result-%s", "foo1", "foo2"], None, b"result-2\n") self._nodeset_t(["--contiguous", "-O", "%s-ipmi", "-f", "foo[2-3,7]", "bar9"], None, b"bar9-ipmi\nfoo[2-3]-ipmi\nfoo7-ipmi\n") self._nodeset_t(["--split=2", "-O", "%s-ib", "-e", "foo[2-9]"], None, b"foo2-ib foo3-ib foo4-ib foo5-ib\nfoo6-ib foo7-ib foo8-ib foo9-ib\n") self._nodeset_t(["--split=3", "-O", "hwm-%s", "-f", "foo[2-9]"], None, b"hwm-foo[2-4]\nhwm-foo[5-7]\nhwm-foo[8-9]\n") self._nodeset_t(["-I0", "-O", "{%s}", "-f", "bar[34-68,89-90]"], None, b"{bar34}\n") # RangeSet mode (-R) self._nodeset_t(["--fold", "-O", "{%s}", "--rangeset", "1,2"], None, b"{1-2}\n") self._nodeset_t(["--expand", "-O", "{%s}", "-R", "1-2"], None, b"{1} {2}\n") self._nodeset_t(["--fold", "-O", "{%s}", "-R", "1-2", "-X", "2-3"], None, b"{1,3}\n") self._nodeset_t(["--fold", "-O", "{%s}", "-S", ":", "--rangeset", "1,2"], None, b"{1-2}\n") self._nodeset_t(["--expand", "-O", "{%s}", "-S", ":", "-R", "1-2"], None, b"{1}:{2}\n") self._nodeset_t(["--fold", "-O", "{%s}", "-S", ":", "-R", "1-2", "-X", "2-3"], None, b"{1,3}\n") self._nodeset_t(["-R", "-I0", "-O", "{%s}", "-f", "34-68,89-90"], None, b"{34}\n") def test_023_axis(self): """test nodeset folding with --axis""" self._nodeset_t(["--axis=0", "-f", "bar"], None, b"bar\n") self._nodeset_t(["--axis=1", "-f", "bar"], None, b"bar\n") self._nodeset_t(["--axis=1", "-R", "-f", "1,2,3"], None, None, 2, b"--axis option is only supported when folding nodeset\n") self._nodeset_t(["--axis=1", "-e", "bar"], None, None, 2, b"--axis option is only supported when folding nodeset\n") # 1D and 2D nodeset: fold along axis 0 only self._nodeset_t(["--axis=1", "-f", "comp-[1-2]-[1-3],login-[1-2]"], None, b'comp-[1-2]-1,comp-[1-2]-2,comp-[1-2]-3,login-[1-2]\n') # 1D and 2D nodeset: fold along axis 1 only self._nodeset_t(["--axis=2", "-f", "comp-[1-2]-[1-3],login-[1-2]"], None, b'comp-1-[1-3],comp-2-[1-3],login-1,login-2\n') # 1D and 2D nodeset: fold along last axis only self._nodeset_t(["--axis=-1", "-f", "comp-[1-2]-[1-3],login-[1-2]"], None, b'comp-1-[1-3],comp-2-[1-3],login-[1-2]\n') # test for a common case ndnodes = [] for ib in range(2): for idx in range(500): ndnodes.append("node%d-ib%d" % (idx, ib)) random.shuffle(ndnodes) self._nodeset_t(["--axis=1", "-f"] + ndnodes, None, b"node[0-499]-ib0,node[0-499]-ib1\n") exp_result = [] for idx in range(500): exp_result.append("node%d-ib[0-1]" % idx) exp_result_str = '%s\n' % ','.join(exp_result) self._nodeset_t(["--axis=2", "-f"] + ndnodes, None, exp_result_str.encode()) # 4D test ndnodes = ["c-1-2-3-4", "c-2-2-3-4", "c-3-2-3-4", "c-5-5-5-5", "c-5-7-5-5", "c-5-9-5-5", "c-5-11-5-5", "c-9-8-8-08", "c-9-8-8-09"] self._nodeset_t(["--axis=1", "-f"] + ndnodes, None, b"c-5-5-5-5,c-5-7-5-5,c-5-9-5-5,c-5-11-5-5,c-[1-3]-2-3-4,c-9-8-8-08,c-9-8-8-09\n") self._nodeset_t(["--axis=2", "-f"] + ndnodes, None, b"c-5-[5,7,9,11]-5-5,c-1-2-3-4,c-2-2-3-4,c-3-2-3-4,c-9-8-8-08,c-9-8-8-09\n") self._nodeset_t(["--axis=3", "-f"] + ndnodes, None, b"c-5-5-5-5,c-5-7-5-5,c-5-9-5-5,c-5-11-5-5,c-1-2-3-4,c-2-2-3-4,c-3-2-3-4,c-9-8-8-08,c-9-8-8-09\n") self._nodeset_t(["--axis=4", "-f"] + ndnodes, None, b"c-5-5-5-5,c-5-7-5-5,c-5-9-5-5,c-5-11-5-5,c-1-2-3-4,c-2-2-3-4,c-3-2-3-4,c-9-8-8-[08-09]\n") self._nodeset_t(["--axis=1-2", "-f"] + ndnodes, None, b"c-5-[5,7,9,11]-5-5,c-[1-3]-2-3-4,c-9-8-8-08,c-9-8-8-09\n") 
self._nodeset_t(["--axis=2-3", "-f"] + ndnodes, None, b"c-5-[5,7,9,11]-5-5,c-1-2-3-4,c-2-2-3-4,c-3-2-3-4,c-9-8-8-08,c-9-8-8-09\n") self._nodeset_t(["--axis=3-4", "-f"] + ndnodes, None, b"c-5-5-5-5,c-5-7-5-5,c-5-9-5-5,c-5-11-5-5,c-1-2-3-4,c-2-2-3-4,c-3-2-3-4,c-9-8-8-[08-09]\n") self._nodeset_t(["--axis=1-3", "-f"] + ndnodes, None, b"c-5-[5,7,9,11]-5-5,c-[1-3]-2-3-4,c-9-8-8-08,c-9-8-8-09\n") self._nodeset_t(["--axis=2-4", "-f"] + ndnodes, None, b"c-5-[5,7,9,11]-5-5,c-1-2-3-4,c-2-2-3-4,c-3-2-3-4,c-9-8-8-[08-09]\n") self._nodeset_t(["--axis=1-4", "-f"] + ndnodes, None, b"c-5-[5,7,9,11]-5-5,c-[1-3]-2-3-4,c-9-8-8-[08-09]\n") self._nodeset_t(["-f"] + ndnodes, None, b"c-5-[5,7,9,11]-5-5,c-[1-3]-2-3-4,c-9-8-8-[08-09]\n") # a case where axis and autostep are working self._nodeset_t(["--autostep=4", "--axis=1-2", "-f"] + ndnodes, None, b"c-5-[5-11/2]-5-5,c-[1-3]-2-3-4,c-9-8-8-08,c-9-8-8-09\n") def test_024_axis_stdin(self): """test nodeset folding with --axis (stdin)""" self._nodeset_t(["--axis=0", "-f"], "bar\n", b"bar\n") self._nodeset_t(["--axis=1", "-f"], "bar\n", b"bar\n") self._nodeset_t(["--axis=1", "-R", "-f"], "1,2,3", None, 2, b"--axis option is only supported when folding nodeset\n") self._nodeset_t(["--axis=1", "-e"], "bar\n", None, 2, b"--axis option is only supported when folding nodeset\n") # 1D and 2D nodeset: fold along axis 0 only self._nodeset_t(["--axis=1", "-f"], "comp-[1-2]-[1-3],login-[1-2]\n", b'comp-[1-2]-1,comp-[1-2]-2,comp-[1-2]-3,login-[1-2]\n') # 1D and 2D nodeset: fold along axis 1 only self._nodeset_t(["--axis=2", "-f"], "comp-[1-2]-[1-3],login-[1-2]\n", b'comp-1-[1-3],comp-2-[1-3],login-1,login-2\n') # 1D and 2D nodeset: fold along last axis only self._nodeset_t(["--axis=-1", "-f"], "comp-[1-2]-[1-3],login-[1-2]\n", b'comp-1-[1-3],comp-2-[1-3],login-[1-2]\n') # test for a common case ndnodes = [] for ib in range(2): for idx in range(500): ndnodes.append("node%d-ib%d" % (idx, ib)) random.shuffle(ndnodes) self._nodeset_t(["--axis=1", "-f"], '\n'.join(ndnodes) + '\n', b"node[0-499]-ib0,node[0-499]-ib1\n") exp_result = [] for idx in range(500): exp_result.append("node%d-ib[0-1]" % idx) exp_result_str = '%s\n' % ','.join(exp_result) self._nodeset_t(["--axis=2", "-f"], '\n'.join(ndnodes) + '\n', exp_result_str.encode()) def test_025_pick(self): """test nodeset --pick""" for num in range(1, 100): self._nodeset_t(["--count", "--pick", str(num), "foo[1-100]"], None, str(num).encode() + b'\n') self._nodeset_t(["--count", "--pick", str(num), "-R", "1-100"], None, str(num).encode() + b'\n') class CLINodesetGroupResolverTest1(CLINodesetTestBase): """Unit test class for testing CLI/Nodeset.py with custom Group Resolver""" def setUp(self): # Special tests that require a default group source set # # The temporary file needs to be persistent during the tests # because GroupResolverConfig does lazy init, this is why we # use an instance variable self.f # self.f = make_temp_file(dedent(""" [Main] default: local [local] map: echo example[1-100] all: echo example[1-1000] list: echo foo bar moo """).encode()) set_std_group_resolver(GroupResolverConfig(self.f.name)) def tearDown(self): set_std_group_resolver(None) self.f = None # used to release temp file def test_022_list(self): """test nodeset --list""" self._nodeset_t(["--list"], None, b"@bar\n@foo\n@moo\n") self._nodeset_t(["-ll"], None, b"@bar example[1-100]\n@foo example[1-100]\n@moo example[1-100]\n") self._nodeset_t(["-lll"], None, b"@bar example[1-100] 100\n@foo example[1-100] 100\n@moo example[1-100] 100\n") self._nodeset_t(["-l", 
"example[4,95]", "example5"], None, b"@bar\n@foo\n@moo\n") self._nodeset_t(["-ll", "example[4,95]", "example5"], None, b"@bar example[4-5,95]\n@foo example[4-5,95]\n@moo example[4-5,95]\n") self._nodeset_t(["-lll", "example[4,95]", "example5"], None, b"@bar example[4-5,95] 3/100\n@foo example[4-5,95] 3/100\n@moo example[4-5,95] 3/100\n") # test empty result self._nodeset_t(["-l", "foo[3-70]", "bar6"], None, b"") # more arg-mixed tests self._nodeset_t(["-a", "-l"], None, b"@bar\n@foo\n@moo\n") self._nodeset_t(["-a", "-l", "-x example[1-100]"], None, b"") self._nodeset_t(["-a", "-l", "-x example[1-40]"], None, b"@bar\n@foo\n@moo\n") self._nodeset_t(["-l", "-x example3"], None, b"") # no -a, remove from nothing self._nodeset_t(["-l", "-i example3"], None, b"") # no -a, intersect from nothing self._nodeset_t(["-l", "-X example3"], None, b"@bar\n@foo\n@moo\n") # no -a, xor from nothing self._nodeset_t(["-l", "-", "-i example3"], "example[3,500]\n", b"@bar\n@foo\n@moo\n") def test_023_list_all(self): """test nodeset --list-all""" self._nodeset_t(["--list-all"], None, b"@bar\n@foo\n@moo\n") self._nodeset_t(["-L"], None, b"@bar\n@foo\n@moo\n") self._nodeset_t(["-LL"], None, b"@bar example[1-100]\n@foo example[1-100]\n@moo example[1-100]\n") self._nodeset_t(["-LLL"], None, b"@bar example[1-100] 100\n@foo example[1-100] 100\n@moo example[1-100] 100\n") class CLINodesetGroupResolverTest2(CLINodesetTestBase): """Unit test class for testing CLI/Nodeset.py with custom Group Resolver""" def setUp(self): # Special tests that require a default group source set self.f = make_temp_file(dedent(""" [Main] default: test [test] map: echo example[1-100] all: echo @foo,@bar,@moo list: echo foo bar moo [other] map: echo nova[030-489] all: echo @baz,@qux,@norf list: echo baz qux norf """).encode()) set_std_group_resolver(GroupResolverConfig(self.f.name)) def tearDown(self): set_std_group_resolver(None) self.f = None # used to release temp file def test_024_groups(self): self._nodeset_t(["--split=2", "-r", "unknown2", "unknown3"], None, b"unknown2\nunknown3\n") self._nodeset_t(["-f", "-a"], None, b"example[1-100]\n") self._nodeset_t(["-f", "@moo"], None, b"example[1-100]\n") self._nodeset_t(["-f", "@moo", "@bar"], None, b"example[1-100]\n") self._nodeset_t(["-e", "-a"], None, ' '.join(["example%d" % i for i in range(1, 101)]).encode() + b'\n') self._nodeset_t(["-c", "-a"], None, b"100\n") self._nodeset_t(["-r", "-a"], None, b"@bar\n") self._nodeset_t(["--split=2", "-r", "unknown2", "unknown3"], None, b"unknown2\nunknown3\n") # We need to split following unit tests in order to reset group # source in setUp/tearDown... def test_025_groups(self): self._nodeset_t(["-s", "test", "-c", "-a", "-d"], None, b"100\n") def test_026_groups(self): self._nodeset_t(["-s", "test", "-r", "-a"], None, b"@test:bar\n") def test_027_groups(self): self._nodeset_t(["-s", "test", "-G", "-r", "-a"], None, b"@bar\n") def test_028_groups(self): self._nodeset_t(["-s", "test", "--groupsources"], None, b"test (default)\nother\n") def test_029_groups(self): self._nodeset_t(["-s", "test", "-q", "--groupsources"], None, b"test\nother\n") def test_030_groups(self): self._nodeset_t(["-f", "-a", "-"], "example101\n", b"example[1-101]\n") self._nodeset_t(["-f", "-a", "-"], "example102 example101\n", b"example[1-102]\n") # Check default group source switching... 
def test_031_groups(self): self._nodeset_t(["-s", "other", "-c", "-a", "-d"], None, b"460\n") self._nodeset_t(["-s", "test", "-c", "-a", "-d"], None, b"100\n") def test_032_groups(self): self._nodeset_t(["-s", "other", "-r", "-a"], None, b"@other:baz\n") self._nodeset_t(["-s", "test", "-r", "-a"], None, b"@test:bar\n") def test_033_groups(self): self._nodeset_t(["-s", "other", "-G", "-r", "-a"], None, b"@baz\n") self._nodeset_t(["-s", "test", "-G", "-r", "-a"], None, b"@bar\n") def test_034_groups(self): self._nodeset_t(["--groupsources"], None, b"test (default)\nother\n") def test_035_groups(self): self._nodeset_t(["-s", "other", "--groupsources"], None, b"other (default)\ntest\n") def test_036_groups(self): self._nodeset_t(["--groupsources"], None, b"test (default)\nother\n") def test_037_groups_output_format(self): self._nodeset_t(["-r", "-O", "{%s}", "-a"], None, b"{@bar}\n") def test_038_groups_output_format(self): self._nodeset_t(["-O", "{%s}", "-s", "other", "-r", "-a"], None, b"{@other:baz}\n") def test_039_list_all(self): """test nodeset --list-all (multi sources)""" self._nodeset_t(["--list-all"], None, b"@bar\n@foo\n@moo\n@other:baz\n@other:norf\n@other:qux\n") self._nodeset_t(["--list-all", "-G"], None, b"@bar\n@foo\n@moo\n@baz\n@norf\n@qux\n") self._nodeset_t(["-GL"], None, b"@bar\n@foo\n@moo\n@baz\n@norf\n@qux\n") self._nodeset_t(["--list-all", "-s", "other"], None, b"@other:baz\n@other:norf\n@other:qux\n@test:bar\n@test:foo\n@test:moo\n") self._nodeset_t(["--list-all", "-G", "-s", "other"], None, b"@baz\n@norf\n@qux\n@bar\n@foo\n@moo\n") # 'other' source first def test_040_wildcards(self): """test nodeset with wildcards""" self._nodeset_t(["-f", "*"], None, b"example[1-100]\n") self._nodeset_t(["-f", "x*"], None, b"\n") # no match self._nodeset_t(["-s", "other", "-f", "*"], None, b"nova[030-489]\n") self._nodeset_t(["-G", "-s", "other", "-f", "*"], None, b"nova[030-489]\n") self._nodeset_t(["-s", "other", "-f", "nova0??"], None, b"nova[030-099]\n") self._nodeset_t(["-s", "other", "-f", "nova?[12-42]"], None, b"nova[030-042,112-142,212-242,312-342,412-442]\n") self._nodeset_t(["-s", "other", "-f", "*!*[033]"], None, b"nova[030-032,034-489]\n") self._nodeset_t(["-s", "other", "--autostep=3", "-f", "*!*[033-099/2]"], None, b"nova[030-032,034-100/2,101-489]\n") class CLINodesetGroupResolverTest3(CLINodesetTestBase): """Unit test class for testing CLI/Nodeset.py with custom Group Resolver A case we support: one of the source misses the list upcall. 
""" def setUp(self): # Special tests that require a default group source set self.f = make_temp_file(dedent(""" [Main] default: test [test] map: echo example[1-100] all: echo @foo,@bar,@moo list: echo foo bar moo [other] map: echo nova[030-489] all: echo @baz,@qux,@norf list: echo baz qux norf [pdu] map: echo pdu-[0-3]-[1-2] """).encode()) set_std_group_resolver(GroupResolverConfig(self.f.name)) def tearDown(self): set_std_group_resolver(None) self.f = None # used to release temp file def test_list_all(self): """test nodeset --list-all (w/ missing list upcall)""" self._nodeset_t(["--list-all"], None, b"@bar\n@foo\n@moo\n@other:baz\n@other:norf\n@other:qux\n", 0, b"Warning: No list upcall defined for group source pdu\n") self._nodeset_t(["-LL"], None, b"@bar example[1-100]\n@foo example[1-100]\n@moo example[1-100]\n" b"@other:baz nova[030-489]\n@other:norf nova[030-489]\n@other:qux nova[030-489]\n", 0, b"Warning: No list upcall defined for group source pdu\n") self._nodeset_t(["-LLL"], None, b"@bar example[1-100] 100\n@foo example[1-100] 100\n@moo example[1-100] 100\n" b"@other:baz nova[030-489] 460\n@other:norf nova[030-489] 460\n@other:qux nova[030-489] 460\n", 0, b"Warning: No list upcall defined for group source pdu\n") def test_list_failure(self): """test nodeset --list -s source w/ missing list upcall""" self._nodeset_t(["--list", "-s", "pdu"], None, b"", 1, b'No list upcall defined for group source "pdu"\n') class CLINodesetGroupResolverConfigErrorTest(CLINodesetTestBase): """Unit test class for testing GroupResolverConfigError""" def setUp(self): self.tdir = make_temp_dir() self.gconff = make_temp_file(dedent(""" [Main] default: default autodir: %s """ % self.tdir.name).encode('ascii')) self.yamlf = make_temp_file(dedent(""" default: compute: 'foo' broken: i am not a dict """).encode('ascii'), suffix=".yaml", dir=self.tdir.name) set_std_group_resolver(GroupResolverConfig(self.gconff.name)) def tearDown(self): set_std_group_resolver(None) self.yamlf.close() self.gconff.close() self.tdir.cleanup() def test_bad_yaml_config(self): """test nodeset with bad yaml config""" self._nodeset_t(["--list-all"], None, b"", 1, b"invalid content (group source 'broken' is not a dict)\n") class CLINodesetEmptyGroupsConf(CLINodesetTestBase): """Unit test class for testing empty groups.conf""" def setUp(self): self.gconff = make_temp_file(b"") set_std_group_resolver(GroupResolverConfig(self.gconff.name)) def tearDown(self): set_std_group_resolver(None) self.gconff = None def test_empty_groups_conf(self): """test nodeset with empty groups.conf""" self._nodeset_t(["--list-all"], None, b"") class CLINodesetMalformedGroupsConf(CLINodesetTestBase): """Unit test class for testing malformed groups.conf""" def setUp(self): self.gconff = make_temp_file(b"[Main") set_std_group_resolver(GroupResolverConfig(self.gconff.name)) def tearDown(self): set_std_group_resolver(None) self.gconff = None def test_malformed_groups_conf(self): """test nodeset with malformed groups.conf""" self._nodeset_t(["--list-all"], None, b"", 1, b"'[Main'\n") class CLINodesetGroupsConfOption(CLINodesetTestBase): """Unit test class for testing --groupsconf option""" def setUp(self): self.gconff = make_temp_file(dedent(""" [Main] default: global_default [global_default] map: echo example[1-100] all: echo @foo,@bar,@moo list: echo foo bar moo """).encode()) set_std_group_resolver_config(self.gconff.name) # passed to --groupsconf self.custf = make_temp_file(dedent(""" [Main] default: custom [custom] map: echo custom[7-42] all: echo 
@selene,@artemis list: echo selene artemis """).encode())

    def tearDown(self):
        set_std_group_resolver(None)
        self.gconff = None
        self.custf = None

    def test_groupsconf_option(self):
        """test nodeset with --groupsconf"""
        self._nodeset_t(["--list-all"], None, b"@bar\n@foo\n@moo\n")
        self._nodeset_t(["-f", "@foo"], None, b"example[1-100]\n")
        self._nodeset_t(["--groupsconf", self.custf.name, "--list-all"], None, b"@artemis\n@selene\n")
        self._nodeset_t(["--groupsconf", self.custf.name, "-f", "@artemis"], None, b"custom[7-42]\n")

# ---- ClusterShell-1.9.2/tests/CLIOptionParserTest.py ----
# ClusterShell.CLI.OptionParser test suite
# Written by S. Thiell

"""Unit test for CLI.OptionParser"""

from optparse import OptionConflictError
import unittest

from ClusterShell.CLI.OptionParser import OptionParser


class CLIOptionParserTest(unittest.TestCase):
    """This test case performs a complete CLI.OptionParser verification."""

    def testOptionParser(self):
        """test CLI.OptionParser (1)"""
        parser = OptionParser("dummy")
        parser.install_nodes_options()
        parser.install_display_options(verbose_options=True)
        parser.install_filecopy_options()
        parser.install_connector_options()
        options, _ = parser.parse_args([])

    def testOptionParser2(self):
        """test CLI.OptionParser (2)"""
        parser = OptionParser("dummy")
        parser.install_nodes_options()
        parser.install_display_options(verbose_options=True, separator_option=True)
        parser.install_filecopy_options()
        parser.install_connector_options()
        options, _ = parser.parse_args([])

    def testOptionParserConflicts(self):
        """test CLI.OptionParser (conflicting options)"""
        parser = OptionParser("dummy")
        parser.install_nodes_options()
        parser.install_display_options(dshbak_compat=True)
        self.assertRaises(OptionConflictError, parser.install_filecopy_options)

    def testOptionParserClubak(self):
        """test CLI.OptionParser for clubak"""
        parser = OptionParser("dummy")
        parser.install_nodes_options()
        parser.install_display_options(separator_option=True, dshbak_compat=True)
        options, _ = parser.parse_args([])

# ---- ClusterShell-1.9.2/tests/DefaultsTest.py ----
# ClusterShell.Defaults test suite # Written by S.
Thiell """Unit test for ClusterShell.Defaults""" import os import sys import shutil from textwrap import dedent import unittest from TLib import make_temp_file, make_temp_dir from ClusterShell.Defaults import Defaults, _task_print_debug from ClusterShell.Task import task_self, task_terminate from ClusterShell.Worker.Pdsh import WorkerPdsh from ClusterShell.Worker.Ssh import WorkerSsh class Defaults000NoConfigTest(unittest.TestCase): def setUp(self): """setup test - initialize Defaults instance""" self.defaults = Defaults([]) def test_000_initial(self): """test Defaults initial values""" # nodeset self.assertEqual(self.defaults.fold_axis, ()) # task_default self.assertFalse(self.defaults.stderr) self.assertTrue(self.defaults.stdout_msgtree) self.assertTrue(self.defaults.stderr_msgtree) self.assertEqual(self.defaults.engine, 'auto') self.assertEqual(self.defaults.port_qlimit, 100) self.assertTrue(self.defaults.auto_tree) self.assertEqual(self.defaults.local_workername, 'exec') self.assertEqual(self.defaults.distant_workername, 'ssh') # task_info self.assertFalse(self.defaults.debug) self.assertEqual(self.defaults.print_debug, _task_print_debug) self.assertFalse(self.defaults.print_debug is None) self.assertEqual(self.defaults.fanout, 64) self.assertEqual(self.defaults.grooming_delay, 0.25) self.assertEqual(self.defaults.connect_timeout, 10) self.assertEqual(self.defaults.command_timeout, 0) def test_001_setattr(self): """test Defaults setattr""" # nodeset self.defaults.fold_axis = (0, 2) self.assertEqual(self.defaults.fold_axis, (0, 2)) # task_default self.defaults.stderr = True self.assertTrue(self.defaults.stderr) self.defaults.stdout_msgtree = False self.assertFalse(self.defaults.stdout_msgtree) self.defaults.stderr_msgtree = False self.assertFalse(self.defaults.stderr_msgtree) self.defaults.engine = 'select' self.assertEqual(self.defaults.engine, 'select') self.defaults.port_qlimit = 1000 self.assertEqual(self.defaults.port_qlimit, 1000) self.defaults.auto_tree = False self.assertFalse(self.defaults.auto_tree) self.defaults.local_workername = 'none' self.assertEqual(self.defaults.local_workername, 'none') self.defaults.distant_workername = 'pdsh' self.assertEqual(self.defaults.distant_workername, 'pdsh') # task_info self.defaults.debug = True self.assertTrue(self.defaults.debug) self.defaults.print_debug = None self.assertEqual(self.defaults.print_debug, None) self.defaults.fanout = 256 self.assertEqual(self.defaults.fanout, 256) self.defaults.grooming_delay = 0.5 self.assertEqual(self.defaults.grooming_delay, 0.5) self.defaults.connect_timeout = 12.5 self.assertEqual(self.defaults.connect_timeout, 12.5) self.defaults.connect_timeout = 30.5 def test_002_reinit_defaults(self): """Test Defaults manual reinit""" # For testing purposes only self.defaults.__init__(filenames=[]) self.test_000_initial() def test_004_workerclass(self): """test Defaults workerclass""" self.defaults.distant_workername = 'pdsh' task_terminate() task = task_self(self.defaults) self.assertTrue(task.default("distant_worker") is WorkerPdsh) self.defaults.distant_workername = 'ssh' self.assertTrue(task.default("distant_worker") is WorkerPdsh) task_terminate() task = task_self(self.defaults) self.assertTrue(task.default("distant_worker") is WorkerSsh) task_terminate() tdir = make_temp_dir() modfile = open(os.path.join(tdir.name, 'OutOfTree.py'), 'w') modfile.write(dedent(""" class OutOfTreeWorker(object): pass WORKER_CLASS = OutOfTreeWorker""")) modfile.flush() modfile.close() sys.path.append(tdir.name) 
self.defaults.distant_workername = 'OutOfTree' task = task_self(self.defaults) self.assertEqual(task.default("distant_worker").__name__, 'OutOfTreeWorker') task_terminate() tdir.cleanup() def test_005_misc_value_errors(self): """test Defaults misc value errors""" task_terminate() self.defaults.local_workername = 'dummy1' self.assertRaises(ImportError, task_self, self.defaults) self.defaults.local_workername = 'exec' self.defaults.distant_workername = 'dummy2' self.assertRaises(ImportError, task_self, self.defaults) self.defaults.distant_workername = 'ssh' self.defaults.engine = 'unknown' self.assertRaises(KeyError, task_self, self.defaults) self.defaults.engine = 'auto' task = task_self(self.defaults) self.assertEqual(task.default('engine'), 'auto') task_terminate() class Defaults001ConfigTest(unittest.TestCase): def setUp(self): self.defaults = None def _assert_default_values(self): # nodeset self.assertEqual(self.defaults.fold_axis, ()) # task_default self.assertFalse(self.defaults.stderr) self.assertTrue(self.defaults.stdout_msgtree) self.assertTrue(self.defaults.stderr_msgtree) self.assertEqual(self.defaults.engine, 'auto') self.assertEqual(self.defaults.port_qlimit, 100) self.assertTrue(self.defaults.auto_tree) self.assertEqual(self.defaults.local_workername, 'exec') self.assertEqual(self.defaults.distant_workername, 'ssh') # task_info self.assertFalse(self.defaults.debug) self.assertEqual(self.defaults.print_debug, _task_print_debug) self.assertFalse(self.defaults.print_debug is None) self.assertEqual(self.defaults.fanout, 64) self.assertEqual(self.defaults.grooming_delay, 0.25) self.assertEqual(self.defaults.connect_timeout, 10) self.assertEqual(self.defaults.command_timeout, 0) def test_000_empty(self): """test Defaults config file (empty)""" conf_test = make_temp_file(b'') self.defaults = Defaults(filenames=[conf_test.name]) self._assert_default_values() def test_001_defaults(self): """test Defaults config file (defaults)""" conf_test = make_temp_file(dedent(""" [nodeset] fold_axis: [task.default] stderr: false stdout_msgtree: true stderr_msgtree: true engine: auto port_qlimit: 100 auto_tree: true local_workername: exec distant_workername: ssh [task.info] debug: false fanout: 64 grooming_delay: 0.25 connect_timeout: 10 command_timeout: 0""").encode('ascii')) self.defaults = Defaults(filenames=[conf_test.name]) self._assert_default_values() def test_002_changed(self): """test Defaults config file (changed)""" conf_test = make_temp_file(dedent(""" [nodeset] fold_axis: -1 [task.default] stderr: true stdout_msgtree: false stderr_msgtree: false engine: select port_qlimit: 1000 auto_tree: false local_workername: none distant_workername: pdsh [task.info] debug: true fanout: 256 grooming_delay: 0.5 connect_timeout: 12.5 command_timeout: 30.5""").encode('ascii')) self.defaults = Defaults(filenames=[conf_test.name]) # nodeset self.assertEqual(self.defaults.fold_axis, (-1,)) # task_default self.assertTrue(self.defaults.stderr) self.assertFalse(self.defaults.stdout_msgtree) self.assertFalse(self.defaults.stderr_msgtree) self.assertEqual(self.defaults.engine, 'select') self.assertEqual(self.defaults.port_qlimit, 1000) # 1.8 compat self.assertFalse(self.defaults.auto_tree) self.assertEqual(self.defaults.local_workername, 'none') self.assertEqual(self.defaults.distant_workername, 'pdsh') # task_info self.assertTrue(self.defaults.debug) self.assertEqual(self.defaults.fanout, 256) self.assertEqual(self.defaults.grooming_delay, 0.5) self.assertEqual(self.defaults.connect_timeout, 12.5) def 
test_003_engine(self):
        """test Defaults config file (engine section)"""
        conf_test = make_temp_file(dedent("""
            [nodeset]
            fold_axis: -1
            [task.default]
            stderr: true
            stdout_msgtree: false
            stderr_msgtree: false
            engine: select
            auto_tree: false
            local_workername: none
            distant_workername: pdsh
            [task.info]
            debug: true
            fanout: 256
            grooming_delay: 0.5
            connect_timeout: 12.5
            command_timeout: 30.5
            [engine]
            port_qlimit: 1000""").encode('ascii'))
        self.defaults = Defaults(filenames=[conf_test.name])
        # nodeset
        self.assertEqual(self.defaults.fold_axis, (-1,))
        # task_default
        self.assertTrue(self.defaults.stderr)
        self.assertFalse(self.defaults.stdout_msgtree)
        self.assertFalse(self.defaults.stderr_msgtree)
        self.assertEqual(self.defaults.engine, 'select')
        self.assertEqual(self.defaults.port_qlimit, 1000)
        self.assertFalse(self.defaults.auto_tree)
        self.assertEqual(self.defaults.local_workername, 'none')
        self.assertEqual(self.defaults.distant_workername, 'pdsh')
        # task_info
        self.assertTrue(self.defaults.debug)
        self.assertEqual(self.defaults.fanout, 256)
        self.assertEqual(self.defaults.grooming_delay, 0.5)
        self.assertEqual(self.defaults.connect_timeout, 12.5)

# ---- ClusterShell-1.9.2/tests/MisusageTest.py ----
# ClusterShell test suite
# Written by S. Thiell

"""Unit test for ClusterShell common library misusages"""

import unittest

from TLib import HOSTNAME
from ClusterShell.Event import EventHandler
from ClusterShell.Worker.Popen import WorkerPopen
from ClusterShell.Worker.Ssh import WorkerSsh
from ClusterShell.Worker.Worker import WorkerError
from ClusterShell.Task import task_self, AlreadyRunningError


class MisusageTest(unittest.TestCase):

    def testTaskResumedTwice(self):
        """test library misusage (task_self resumed twice)"""
        class ResumeAgainHandler(EventHandler):
            def ev_read(self, worker, node, sname, msg):
                worker.task.resume()
        task = task_self()
        task.shell("/bin/echo OK", handler=ResumeAgainHandler())
        self.assertRaises(AlreadyRunningError, task.resume)

    def testWorkerNotScheduledLocal(self):
        """test library misusage (local worker not scheduled)"""
        task = task_self()
        worker = WorkerPopen(command="/bin/hostname")
        task.resume()
        self.assertRaises(WorkerError, worker.read)

    def testWorkerNotScheduledDistant(self):
        """test library misusage (distant worker not scheduled)"""
        task = task_self()
        worker = WorkerSsh(HOSTNAME, command="/bin/hostname", handler=None, timeout=0)
        task.resume()
        self.assertRaises(WorkerError, worker.node_buffer, HOSTNAME)

    def testTaskScheduleTwice(self):
        """test task worker schedule twice error"""
        task = task_self()
        worker = task.shell("/bin/echo itsme")
        self.assertRaises(WorkerError, task.schedule, worker)
        task.abort()

# ---- ClusterShell-1.9.2/tests/MsgTreeTest.py ----
# ClusterShell test suite # Written by S.
Thiell """Unit test for ClusterShell MsgTree Class""" from operator import itemgetter import sys import unittest from ClusterShell.MsgTree import * class MsgTreeTest(unittest.TestCase): def test_001_basics(self): """test MsgTree basics""" tree = MsgTree() self.assertEqual(len(tree), 0) tree.add("key", b"message") self.assertEqual(len(tree), 1) tree.add("key", b"message2") self.assertEqual(len(tree), 1) tree.add("key2", b"message3") self.assertEqual(len(tree), 2) def test_002_elem(self): """test MsgTreeElem""" elem = MsgTreeElem() self.assertEqual(len(elem), 0) for s in elem: self.fail("found line in empty MsgTreeElem!") def test_003_iterators(self): """test MsgTree iterators""" # build tree... tree = MsgTree() self.assertEqual(len(tree), 0) tree.add(("item1", "key"), b"message0") self.assertEqual(len(tree), 1) tree.add(("item2", "key"), b"message2") self.assertEqual(len(tree), 2) tree.add(("item3", "key"), b"message3") self.assertEqual(len(tree), 3) tree.add(("item4", "key"), b"message3") tree.add(("item2", "newkey"), b"message4") self.assertEqual(len(tree), 5) self.assertEqual(tree._depth(), 1) # test standard iterator (over keys) cnt = 0 what = set([ ("item1", "key"), ("item2", "key"), ("item3", "key"), \ ("item4", "key"), ("item2", "newkey") ]) for key in tree: cnt += 1 what.remove(key) self.assertEqual(cnt, 5) self.assertEqual(len(what), 0) # test keys() iterator cnt = 0 for key in tree.keys(): # keep this test for return value check cnt += 1 self.assertEqual(cnt, 5) self.assertEqual(len(list(iter(tree.keys()))), 5) # test messages() iterator (iterate over different messages) cnt = 0 for msg in tree.messages(): cnt += 1 self.assertEqual(len(msg), len(b"message0")) self.assertEqual(msg[0][:-1], b"message") self.assertEqual(cnt, 4) self.assertEqual(len(list(iter(tree.messages()))), 4) # test items() iterator (iterate over all key, msg pairs) cnt = 0 for key, msg in tree.items(): cnt += 1 self.assertEqual(cnt, 5) self.assertEqual(len(list(iter(tree.items()))), 5) # test walk() iterator (iterate by msg and give the list of # associated keys) cnt = 0 cnt_2 = 0 for msg, keys in tree.walk(): cnt += 1 if len(keys) == 2: self.assertEqual(msg, b"message3") cnt_2 += 1 self.assertEqual(cnt, 4) self.assertEqual(cnt_2, 1) self.assertEqual(len(list(iter(tree.walk()))), 4) # test walk() with provided key-filter cnt = 0 for msg, keys in tree.walk(match=lambda s: s[1] == "newkey"): cnt += 1 self.assertEqual(cnt, 1) # test walk() with provided key-mapper cnt = 0 cnt_2 = 0 for msg, keys in tree.walk(mapper=itemgetter(0)): cnt += 1 if len(keys) == 2: for k in keys: self.assertEqual(type(k), str) cnt_2 += 1 self.assertEqual(cnt, 4) self.assertEqual(cnt_2, 1) # test walk with full options: key-filter and key-mapper cnt = 0 for msg, keys in tree.walk(match=lambda k: k[1] == "newkey", mapper=itemgetter(0)): cnt += 1 self.assertEqual(msg, b"message4") self.assertEqual(keys[0], "item2") self.assertEqual(cnt, 1) cnt = 0 for msg, keys in tree.walk(match=lambda k: k[1] == "key", mapper=itemgetter(0)): cnt += 1 self.assertEqual(keys[0][:-1], "item") self.assertEqual(cnt, 3) # 3 and not 4 because item3 and item4 are merged def test_004_getitem(self): """test MsgTree get and __getitem__""" # build tree... 
tree = MsgTree() tree.add("item1", b"message0") self.assertEqual(len(tree), 1) tree.add("item2", b"message2") tree.add("item3", b"message2") tree.add("item4", b"message3") tree.add("item2", b"message4") tree.add("item3", b"message4") self.assertEqual(len(tree), 4) self.assertEqual(tree["item1"], b"message0") self.assertEqual(tree.get("item1"), b"message0") self.assertEqual(tree["item2"], b"message2\nmessage4") self.assertEqual(tree.get("item2"), b"message2\nmessage4") self.assertEqual(tree.get("item5", b"default_buf"), b"default_buf") self.assertEqual(tree._depth(), 2) def test_005_remove(self): """test MsgTree.remove()""" # build tree tree = MsgTree() self.assertEqual(len(tree), 0) tree.add(("w1", "key1"), b"message0") self.assertEqual(len(tree), 1) tree.add(("w1", "key2"), b"message0") self.assertEqual(len(tree), 2) tree.add(("w1", "key3"), b"message0") self.assertEqual(len(tree), 3) tree.add(("w2", "key4"), b"message1") self.assertEqual(len(tree), 4) tree.remove(lambda k: k[1] == "key2") self.assertEqual(len(tree), 3) for msg, keys in tree.walk(match=lambda k: k[0] == "w1", mapper=itemgetter(1)): self.assertEqual(msg, b"message0") self.assertEqual(len(keys), 2) tree.remove(lambda k: k[0] == "w1") self.assertEqual(len(tree), 1) tree.remove(lambda k: k[0] == "w2") self.assertEqual(len(tree), 0) tree.clear() self.assertEqual(len(tree), 0) def test_006_scalability(self): """test MsgTree scalability""" # test tree of 10k nodes with a single different line each tree = MsgTree() for i in range(10000): tree.add("node%d" % i, b"message" + str(i).encode('ascii')) self.assertEqual(len(tree), 10000) cnt = 0 for msg, keys in tree.walk(): cnt += 1 self.assertEqual(cnt, 10000) # test tree of 1 node with 10k lines tree = MsgTree() for i in range(10000): tree.add("nodeX", b"message" + str(i).encode('ascii')) self.assertEqual(len(tree), 1) cnt = 0 for msg, keys in tree.walk(): testlines = bytes(msg) # test MsgTreeElem.__iter__() self.assertEqual(len(testlines.splitlines()), 10000) cnt += 1 self.assertEqual(cnt, 1) # test tree of 100 nodes with the same 1000 message lines each tree = MsgTree() for j in range(100): for i in range(1000): tree.add("node%d" % j, b"message" + str(i).encode('ascii')) self.assertEqual(len(tree), 100) cnt = 0 for msg, keys in tree.walk(): testlines = bytes(msg) # test MsgTreeElem.__iter__() self.assertEqual(len(testlines.splitlines()), 1000) cnt += 1 self.assertEqual(cnt, 1) def test_007_shift_mode(self): """test MsgTree in shift mode""" tree = MsgTree(mode=MODE_SHIFT) tree.add("item1", b"message0") self.assertEqual(len(tree), 1) tree.add("item2", b"message2") tree.add("item3", b"message2") tree.add("item4", b"message3") tree.add("item2", b"message4") tree.add("item3", b"message4") self.assertEqual(len(tree), 4) self.assertEqual(tree["item1"], b"message0") self.assertEqual(tree.get("item1"), b"message0") self.assertEqual(tree["item2"], b"message2\nmessage4") self.assertEqual(tree.get("item2"), b"message2\nmessage4") self.assertEqual(tree.get("item5", b"default_buf"), b"default_buf") self.assertEqual(tree._depth(), 2) self.assertEqual(len(list(tree.walk())), 3) def test_008_trace_mode(self): """test MsgTree in trace mode""" tree = MsgTree(mode=MODE_TRACE) tree.add("item1", b"message0") self.assertEqual(len(tree), 1) tree.add("item2", b"message2") tree.add("item3", b"message2") tree.add("item4", b"message3") tree.add("item2", b"message4") tree.add("item3", b"message4") self.assertEqual(len(tree), 4) self.assertEqual(tree["item1"], b"message0") self.assertEqual(tree.get("item1"), 
b"message0") self.assertEqual(tree["item2"], b"message2\nmessage4") self.assertEqual(tree.get("item2"), b"message2\nmessage4") self.assertEqual(tree.get("item5", b"default_buf"), b"default_buf") self.assertEqual(tree._depth(), 2) self.assertEqual(len(list(tree.walk())), 4) # /!\ results are not sorted result = [(m, sorted(k), d, c) for m, k, d, c in sorted(list(tree.walk_trace()))] self.assertEqual(result, [(b'message0', ['item1'], 1, 0), (b'message2', ['item2', 'item3'], 1, 1), (b'message3', ['item4'], 1, 0), (b'message4', ['item2', 'item3'], 2, 0)]) def test_009_defer_to_shift_mode(self): """test MsgTree defer to shift mode""" tree = MsgTree(mode=MODE_DEFER) tree.add("item1", b"message0") self.assertEqual(len(tree), 1) tree.add("item2", b"message1") self.assertEqual(len(tree), 2) tree.add("item3", b"message2") self.assertEqual(len(tree), 3) tree.add("item2", b"message4") tree.add("item1", b"message3") self.assertEqual(tree["item1"], b"message0\nmessage3") self.assertEqual(tree.mode, MODE_DEFER) # calling walk with call _update_keys() and change to MODE_SHIFT self.assertEqual(sorted([(k, e.message()) for e, k in tree.walk()]), [(['item1'], b'message0\nmessage3'), (['item2'], b'message1\nmessage4'), (['item3'], b'message2')]) self.assertEqual(tree.mode, MODE_SHIFT) # further tree modifications should be safe... tree.add("item1", b"message5") tree.add("item2", b"message6") self.assertEqual(tree["item1"], b"message0\nmessage3\nmessage5") # /!\ results are not sorted self.assertEqual(sorted([(k, e.message()) for e, k in tree.walk()]), [(['item1'], b'message0\nmessage3\nmessage5'), (['item2'], b'message1\nmessage4\nmessage6'), (['item3'], b'message2')]) def test_010_remove_in_defer_mode(self): """test MsgTree remove in defer mode""" tree = MsgTree(mode=MODE_DEFER) tree.add("item1", b"message0") self.assertEqual(len(tree), 1) tree.add("item2", b"message1") self.assertEqual(len(tree), 2) tree.add("item3", b"message2") self.assertEqual(len(tree), 3) tree.add("item2", b"message4") tree.add("item1", b"message3") tree.remove(lambda k: k == "item2") self.assertEqual(tree["item1"], b"message0\nmessage3") self.assertRaises(KeyError, tree.__getitem__, "item2") # calling walk with call _update_keys() and change to MODE_SHIFT self.assertEqual(sorted([(k, e.message()) for e, k in tree.walk()]), [(['item1'], b'message0\nmessage3'), (['item3'], b'message2')]) self.assertEqual(tree.mode, MODE_SHIFT) # further tree modifications should be safe... tree.add("item1", b"message5") tree.add("item2", b"message6") self.assertEqual(tree["item1"], b"message0\nmessage3\nmessage5") self.assertEqual(tree["item2"], b"message6") # /!\ results are not sorted self.assertEqual(sorted([(k, e.message()) for e, k in tree.walk()]), [(['item1'], b'message0\nmessage3\nmessage5'), (['item2'], b'message6'), (['item3'], b'message2')]) def test_011_str_compat(self): """test MsgTreeElem.__str__() compatibility""" tree = MsgTree() tree.add("item1", b"message0") elem = tree["item1"] if sys.version_info >= (3, 0): # casting to string is definitively not supported in Python 3, # use bytes() instead. 
            self.assertRaises(TypeError, elem.__str__)
            self.assertEqual(bytes(elem), b"message0")
        else:
            self.assertEqual(str(elem), "message0")

# ---- ClusterShell-1.9.2/tests/NodeSetGroupTest.py ----

"""
Unit test for NodeSet with Group support
"""

import os
import posixpath
import sys
from textwrap import dedent
import unittest

from TLib import *  # Wildcard import for testing purpose
from ClusterShell.NodeSet import *
from ClusterShell.NodeUtils import *


def makeTestG1():
    """Create a temporary group file 1"""
    f1 = make_temp_file(dedent("""
        #
        oss: montana5,montana4
        mds: montana6
        io: montana[4-6]
        #42: montana3
        compute: montana[32-163]
        chassis1: montana[32-33]
        chassis2: montana[34-35]
        chassis3: montana[36-37]
        chassis4: montana[38-39]
        chassis5: montana[40-41]
        chassis6: montana[42-43]
        chassis7: montana[44-45]
        chassis8: montana[46-47]
        chassis9: montana[48-49]
        chassis10: montana[50-51]
        chassis11: montana[52-53]
        chassis12: montana[54-55]
        Uppercase: montana[1-2]
        gpuchassis: @chassis[4-5]
        gpu: montana[38-41]
        all: montana[1-6,32-163]
        """).encode('ascii'))
    # /!\ Need to return file object and not f1.name, otherwise the temporary
    # file might be immediately unlinked.
    return f1


def makeTestG2():
    """Create a temporary group file 2"""
    f2 = make_temp_file(dedent("""
        #
        #
        para: montana[32-37,42-55]
        gpu: montana[38-41]
        escape%test: montana[87-90]
        esc%test2: @escape%test
        """).encode('ascii'))
    return f2


def makeTestG3():
    """Create a temporary group file 3"""
    f3 = make_temp_file(dedent("""
        #
        #
        all: montana[32-55]
        para: montana[32-37,42-55]
        gpu: montana[38-41]
        login: montana[32-33]
        overclock: montana[41-42]
        chassis1: montana[32-33]
        chassis2: montana[34-35]
        chassis3: montana[36-37]
        single: idaho
        """).encode('ascii'))
    return f3


def makeTestR3():
    """Create a temporary reverse group file 3"""
    r3 = make_temp_file(dedent("""
        #
        #
        montana32: all,para,login,chassis1
        montana33: all,para,login,chassis1
        montana34: all,para,chassis2
        montana35: all,para,chassis2
        montana36: all,para,chassis3
        montana37: all,para,chassis3
        montana38: all,gpu
        montana39: all,gpu
        montana40: all,gpu
        montana41: all,gpu,overclock
        montana42: all,para,overclock
        montana43: all,para
        montana44: all,para
        montana45: all,para
        montana46: all,para
        montana47: all,para
        montana48: all,para
        montana49: all,para
        montana50: all,para
        montana51: all,para
        montana52: all,para
        montana53: all,para
        montana54: all,para
        montana55: all,para
        idaho: single
        """).encode('ascii'))
    return r3


def makeTestG4():
    """Create a temporary group file 4 (nD)"""
    f4 = make_temp_file(dedent("""
        #
        rack-x1y1: idaho1z1,idaho2z1
        rack-x1y2: idaho2z1,idaho3z1
        rack-x2y1: idaho4z1,idaho5z1
        rack-x2y2: idaho6z1,idaho7z1
        rack-x1: @rack-x1y[1-2]
        rack-x2: @rack-x2y[1-2]
        rack-y1: @rack-x[1-2]y1
        rack-y2: @rack-x[1-2]y2
        rack-all: @rack-x[1-2]y[1-2]
        """).encode('ascii'))
    return f4


class NodeSetGroupTest(unittest.TestCase):

    def setUp(self):
        """setUp test reproducibility: change standard group resolver
        to ensure that no local group source is used during tests"""
        set_std_group_resolver(GroupResolver())  # dummy resolver

    def tearDown(self):
        """tearDown: restore standard group resolver"""
        set_std_group_resolver(None)  # restore std resolver

    def testGroupResolverSimple(self):
        """test NodeSet with simple custom GroupResolver"""
        test_groups1 = makeTestG1()
        source = UpcallGroupSource(
            "simple",
            "sed -n 's/^$GROUP:\(.*\)/\\1/p' %s" % test_groups1.name,
            "sed -n 's/^all:\(.*\)/\\1/p' %s" %
test_groups1.name, "sed -n 's/^\([0-9A-Za-z_-]*\):.*/\\1/p' %s" % test_groups1.name, None) # create custom resolver with default source res = GroupResolver(source) self.assertFalse(res.has_node_groups()) self.assertFalse(res.has_node_groups("dummy_namespace")) nodeset = NodeSet("@gpu", resolver=res) self.assertEqual(nodeset, NodeSet("montana[38-41]")) self.assertEqual(str(nodeset), "montana[38-41]") nodeset = NodeSet("@chassis3", resolver=res) self.assertEqual(str(nodeset), "montana[36-37]") nodeset = NodeSet("@chassis[3-4]", resolver=res) self.assertEqual(str(nodeset), "montana[36-39]") nodeset = NodeSet("@chassis[1,3,5]", resolver=res) self.assertEqual(str(nodeset), "montana[32-33,36-37,40-41]") nodeset = NodeSet("@chassis[2-12/2]", resolver=res) self.assertEqual(str(nodeset), "montana[34-35,38-39,42-43,46-47,50-51,54-55]") nodeset = NodeSet("@chassis[1,3-4,5-11/3]", resolver=res) self.assertEqual(str(nodeset), "montana[32-33,36-41,46-47,52-53]") # test recursive group gpuchassis nodeset1 = NodeSet("@chassis[4-5]", resolver=res) nodeset2 = NodeSet("@gpu", resolver=res) nodeset3 = NodeSet("@gpuchassis", resolver=res) self.assertEqual(nodeset1, nodeset2) self.assertEqual(nodeset2, nodeset3) # test also with some inline operations nodeset = NodeSet("montana3,@gpuchassis!montana39,montana77^montana38", resolver=res) self.assertEqual(str(nodeset), "montana[3,40-41,77]") def testAllNoResolver(self): """test NodeSet.fromall() with no resolver""" self.assertRaises(NodeSetExternalError, NodeSet.fromall, resolver=RESOLVER_NOGROUP) # Also test with a nonfunctional resolver (#263) res = GroupResolver() self.assertRaises(NodeSetExternalError, NodeSet.fromall, resolver=res) def testGroupsNoResolver(self): """test NodeSet.groups() with no resolver""" nodeset = NodeSet("foo", resolver=RESOLVER_NOGROUP) self.assertRaises(NodeSetExternalError, nodeset.groups) def testGroupResolverAddSourceError(self): """test GroupResolver.add_source() error""" test_groups1 = makeTestG1() source = UpcallGroupSource("simple", "sed -n 's/^$GROUP:\(.*\)/\\1/p' %s" % test_groups1.name, "sed -n 's/^all:\(.*\)/\\1/p' %s" % test_groups1.name, "sed -n 's/^\([0-9A-Za-z_-]*\):.*/\\1/p' %s" % test_groups1.name, None) res = GroupResolver(source) # adding the same source again should raise ValueError self.assertRaises(ValueError, res.add_source, source) def testGroupResolverMinimal(self): """test NodeSet with minimal GroupResolver""" test_groups1 = makeTestG1() source = UpcallGroupSource("minimal", "sed -n 's/^$GROUP:\(.*\)/\\1/p' %s" % test_groups1.name, None, None, None) # create custom resolver with default source res = GroupResolver(source) nodeset = NodeSet("@gpu", resolver=res) self.assertEqual(nodeset, NodeSet("montana[38-41]")) self.assertEqual(str(nodeset), "montana[38-41]") self.assertRaises(NodeSetExternalError, NodeSet.fromall, resolver=res) def testConfigEmpty(self): """test groups with an empty configuration file""" f = make_temp_file(b"") res = GroupResolverConfig(f.name) # NodeSet should work nodeset = NodeSet("example[1-100]", resolver=res) self.assertEqual(str(nodeset), "example[1-100]") # without group support self.assertRaises(GroupResolverSourceError, nodeset.regroup) self.assertRaises(GroupResolverSourceError, NodeSet, "@bar", resolver=res) def testConfigResolverEmpty(self): """test groups resolver with an empty file list""" # empty file list OR as if no config file is parsable res = GroupResolverConfig([]) # NodeSet should work nodeset = NodeSet("example[1-100]", resolver=res) self.assertEqual(str(nodeset), 
"example[1-100]") # without group support self.assertRaises(GroupResolverSourceError, nodeset.regroup) self.assertRaises(GroupResolverSourceError, NodeSet, "@bar", resolver=res) def testConfigBasicLocal(self): """test groups with a basic local config file""" f = make_temp_file(dedent(""" # A comment [Main] default: local [local] map: echo example[1-100] #all: list: echo foo #reverse: """).encode('ascii')) res = GroupResolverConfig(f.name) nodeset = NodeSet("example[1-100]", resolver=res) self.assertEqual(str(nodeset), "example[1-100]") self.assertEqual(nodeset.regroup(), "@foo") self.assertEqual(list(nodeset.groups().keys()), ["@foo"]) self.assertEqual(str(NodeSet("@foo", resolver=res)), "example[1-100]") # No 'all' defined: all_nodes() should raise an error self.assertRaises(GroupSourceNoUpcall, res.all_nodes) # No 'reverse' defined: node_groups() should raise an error self.assertRaises(GroupSourceNoUpcall, res.node_groups, "example1") # regroup with rest nodeset = NodeSet("example[1-101]", resolver=res) self.assertEqual(nodeset.regroup(), "@foo,example101") # regroup incomplete nodeset = NodeSet("example[50-200]", resolver=res) self.assertEqual(nodeset.regroup(), "example[50-200]") # regroup no matching nodeset = NodeSet("example[102-200]", resolver=res) self.assertEqual(nodeset.regroup(), "example[102-200]") # test default_source_name property self.assertEqual(res.default_source_name, "local") # check added with lazy init res = GroupResolverConfig(f.name) self.assertEqual(res.default_source_name, "local") def testConfigWrongSyntax(self): """test wrong groups config syntax""" f = make_temp_file(dedent(""" # A comment [Main] default: local [local] something: echo example[1-100] """).encode('ascii')) resolver = GroupResolverConfig(f.name) self.assertRaises(GroupResolverConfigError, resolver.grouplist) def testConfigBasicLocalVerbose(self): """test groups with a basic local config file (verbose)""" f = make_temp_file(dedent(""" # A comment [Main] default: local [local] map: echo example[1-100] #all: list: echo foo #reverse: """).encode('ascii')) res = GroupResolverConfig(f.name) nodeset = NodeSet("example[1-100]", resolver=res) self.assertEqual(str(nodeset), "example[1-100]") self.assertEqual(nodeset.regroup(), "@foo") self.assertEqual(str(NodeSet("@foo", resolver=res)), "example[1-100]") def testConfigBasicLocalAlternative(self): """test groups with a basic local config file (= alternative)""" f = make_temp_file(dedent(""" # A comment [Main] default=local [local] map=echo example[1-100] #all= list=echo foo #reverse= """).encode('ascii')) res = GroupResolverConfig(f.name) nodeset = NodeSet("example[1-100]", resolver=res) self.assertEqual(str(nodeset), "example[1-100]") self.assertEqual(nodeset.regroup(), "@foo") self.assertEqual(str(NodeSet("@foo", resolver=res)), "example[1-100]") # @truc? 
    def testConfigBasicEmptyDefault(self):
        """test groups with an empty default namespace"""
        f = make_temp_file(dedent("""
            # A comment
            [Main]
            default:
            [local]
            map: echo example[1-100]
            #all:
            list: echo foo
            #reverse:
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        nodeset = NodeSet("example[1-100]", resolver=res)
        self.assertEqual(str(nodeset), "example[1-100]")
        self.assertEqual(nodeset.regroup(), "@foo")
        self.assertEqual(str(NodeSet("@foo", resolver=res)), "example[1-100]")

    def testConfigBasicNoMain(self):
        """test groups with a local config without main section"""
        f = make_temp_file(dedent("""
            # A comment
            [local]
            map: echo example[1-100]
            #all:
            list: echo foo
            #reverse:
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        nodeset = NodeSet("example[1-100]", resolver=res)
        self.assertEqual(str(nodeset), "example[1-100]")
        self.assertEqual(nodeset.regroup(), "@foo")
        self.assertEqual(str(NodeSet("@foo", resolver=res)), "example[1-100]")

    def testConfigBasicWrongDefault(self):
        """test groups with a wrong default namespace"""
        f = make_temp_file(dedent("""
            # A comment
            [Main]
            default: pointless
            [local]
            map: echo example[1-100]
            #all:
            list: echo foo
            #reverse:
            """).encode('ascii'))
        resolver = GroupResolverConfig(f.name)
        self.assertRaises(GroupResolverConfigError, resolver.grouplist)

    def testConfigQueryFailed(self):
        """test groups with config and failed query"""
        f = make_temp_file(dedent("""
            # A comment
            [Main]
            default: local
            [local]
            map: false
            all: false
            list: echo foo
            #reverse:
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        nodeset = NodeSet("example[1-100]", resolver=res)
        self.assertEqual(str(nodeset), "example[1-100]")
        self.assertRaises(NodeSetExternalError, nodeset.regroup)
        # all_nodes()
        self.assertRaises(NodeSetExternalError, NodeSet.fromall, resolver=res)

    def testConfigQueryFailedReverse(self):
        """test groups with config and failed query (reverse)"""
        f = make_temp_file(dedent("""
            # A comment
            [Main]
            default: local
            [local]
            map: echo example1
            list: echo foo
            reverse: false
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        nodeset = NodeSet("@foo", resolver=res)
        self.assertEqual(str(nodeset), "example1")
        self.assertRaises(NodeSetExternalError, nodeset.regroup)

    def testConfigRegroupWrongNamespace(self):
        """test groups by calling regroup(wrong_namespace)"""
        f = make_temp_file(dedent("""
            # A comment
            [Main]
            default: local
            [local]
            map: echo example[1-100]
            #all:
            list: echo foo
            #reverse:
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        nodeset = NodeSet("example[1-100]", resolver=res)
        self.assertRaises(GroupResolverSourceError, nodeset.regroup, "unknown")

    def testConfigNoListNoReverse(self):
        """test groups with no list nor reverse upcall"""
        f = make_temp_file(dedent("""
            # A comment
            [Main]
            default: local
            [local]
            map: echo example[1-100]
            #all:
            #list:
            #reverse:
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        nodeset = NodeSet("example[1-100]", resolver=res)
        self.assertEqual(str(nodeset), "example[1-100]")
        # not able to regroup, should still return valid nodeset
        self.assertEqual(nodeset.regroup(), "example[1-100]")

    def testConfigNoListButReverseQuery(self):
        """test groups with no list but reverse upcall"""
        f = make_temp_file(dedent("""
            # A comment
            [Main]
            default: local
            [local]
            map: echo example[1-100]
            #all:
            #list: echo foo
            reverse: echo foo
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        nodeset = NodeSet("example[1-100]", resolver=res)
        self.assertEqual(str(nodeset), "example[1-100]")
        self.assertEqual(nodeset.regroup(), "@foo")

    def testConfigNoMap(self):
        """test groups with no map upcall"""
        f = make_temp_file(dedent("""
            # A comment
            [Main]
            default: local
            [local]
            #map: echo example[1-100]
            all:
            list: echo foo
            #reverse: echo foo
            """).encode('ascii'))
        # map is a mandatory upcall, an exception should be raised early
        resolver = GroupResolverConfig(f.name)
        self.assertRaises(GroupResolverConfigError, resolver.grouplist)

    def testConfigWithEmptyList(self):
        """test groups with list upcall returning nothing"""
        f = make_temp_file(dedent("""
            # A comment
            [Main]
            default: local
            [local]
            map: echo example[1-100]
            #all:
            list: :
            reverse: echo foo
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        nodeset = NodeSet("example[1-100]", resolver=res)
        self.assertEqual(str(nodeset), "example[1-100]")
        self.assertEqual(nodeset.regroup(), "@foo")

    def testConfigListAllWithAll(self):
        """test all groups listing with all upcall"""
        f = make_temp_file(dedent("""
            # A comment
            [Main]
            default: local
            [local]
            map: echo example[1-100]
            all: echo foo bar
            list: echo foo
            #reverse:
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        nodeset = NodeSet("example[1-50]", resolver=res)
        self.assertEqual(str(nodeset), "example[1-50]")
        self.assertEqual(str(NodeSet.fromall(resolver=res)), "bar,foo")
        # test "@*" magic group listing
        nodeset = NodeSet("@*", resolver=res)
        self.assertEqual(str(nodeset), "bar,foo")
        nodeset = NodeSet("rab,@*,oof", resolver=res)
        self.assertEqual(str(nodeset), "bar,foo,oof,rab")
        # with group source
        nodeset = NodeSet("@local:*", resolver=res)
        self.assertEqual(str(nodeset), "bar,foo")
        nodeset = NodeSet("rab,@local:*,oof", resolver=res)
        self.assertEqual(str(nodeset), "bar,foo,oof,rab")

    def testConfigListAllWithoutAll(self):
        """test all groups listing without all upcall"""
        f = make_temp_file(dedent("""
            # A comment
            [Main]
            default: local
            [local]
            map: echo example[1-100]
            #all:
            list: echo foo bar
            #reverse:
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        nodeset = NodeSet("example[1-50]", resolver=res)
        self.assertEqual(str(nodeset), "example[1-50]")
        self.assertEqual(str(NodeSet.fromall(resolver=res)), "example[1-100]")
        # test "@*" magic group listing
        nodeset = NodeSet("@*", resolver=res)
        self.assertEqual(str(nodeset), "example[1-100]")
        nodeset = NodeSet("@*,example[101-104]", resolver=res)
        self.assertEqual(str(nodeset), "example[1-104]")
        nodeset = NodeSet("example[105-149],@*,example[101-104]", resolver=res)
        self.assertEqual(str(nodeset), "example[1-149]")
        # with group source
        nodeset = NodeSet("@local:*", resolver=res)
        self.assertEqual(str(nodeset), "example[1-100]")
        nodeset = NodeSet("example0,@local:*,example[101-110]", resolver=res)
        self.assertEqual(str(nodeset), "example[0-110]")

    def testConfigListAllNDWithoutAll(self):
        """test all groups listing without all upcall (nD)"""
        # Even in nD, ensure that $GROUP is a simple group that has been
        # previously expanded
        f = make_temp_file(dedent("""
            # A comment
            [Main]
            default: local
            [local]
            map: if [ "$GROUP" = "x1y[3-4]" ]; then exit 1; elif [ "$GROUP" = "x1y1" ]; then echo rack[1-5]z[1-42]; else echo rack[6-10]z[1-42]; fi
            #all:
            list: echo x1y1 x1y2 x1y[3-4]
            #reverse:
            """).encode('ascii'))
        res = GroupResolverConfig(f.name, illegal_chars=ILLEGAL_GROUP_CHARS)
        nodeset = NodeSet("rack3z40", resolver=res)
        self.assertEqual(str(NodeSet.fromall(resolver=res)),
                         "rack[1-10]z[1-42]")
        self.assertEqual(res.grouplist(), ['x1y1', 'x1y2', 'x1y[3-4]'])  # raw
        self.assertEqual(grouplist(resolver=res),
                         ['x1y1', 'x1y2', 'x1y3', 'x1y4'])  # cleaned
        # test "@@" group name set
        nodeset = NodeSet("@@", resolver=res)
        self.assertEqual(str(nodeset), "x1y[1-4]")
        nodeset = NodeSet("@@local", resolver=res)
        self.assertEqual(str(nodeset), "x1y[1-4]")
        # test "@*" magic group listing
        nodeset = NodeSet("@*", resolver=res)
        self.assertEqual(str(nodeset), "rack[1-10]z[1-42]")
        # with group source
        nodeset = NodeSet("@local:*", resolver=res)
        self.assertEqual(str(nodeset), "rack[1-10]z[1-42]")
        nodeset = NodeSet("rack11z1,@local:*,rack11z[2-42]", resolver=res)
        self.assertEqual(str(nodeset), "rack[1-11]z[1-42]")

    def testConfigIllegalCharsND(self):
        """test group list containing illegal characters"""
        f = make_temp_file(dedent("""
            # A comment
            [Main]
            default: local
            [local]
            map: echo rack[6-10]z[1-42]
            #all:
            list: echo x1y1 x1y2 @illegal x1y[3-4]
            #reverse:
            """).encode('ascii'))
        res = GroupResolverConfig(f.name, illegal_chars=ILLEGAL_GROUP_CHARS)
        nodeset = NodeSet("rack3z40", resolver=res)
        self.assertRaises(GroupResolverIllegalCharError, res.grouplist)

    def testConfigResolverSources(self):
        """test sources() with groups config of 2 sources"""
        f = make_temp_file(dedent("""
            # A comment
            [Main]
            default: local
            [local]
            map: echo example[1-100]
            [other]
            map: echo example[1-10]
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        self.assertEqual(len(res.sources()), 2)
        self.assertTrue('local' in res.sources())
        self.assertTrue('other' in res.sources())

    def testConfigCrossRefs(self):
        """test groups config with cross references"""
        f = make_temp_file(dedent("""
            # A comment
            [Main]
            default: other
            [local]
            map: echo example[1-100]
            [other]
            map: echo "foo: @local:foo" | sed -n 's/^$GROUP:\(.*\)/\\1/p'
            [third]
            map: printf "bar: @ref-rel\\nref-rel: @other:foo\\nref-all: @*\\n" | sed -n 's/^$GROUP:\(.*\)/\\1/p'
            list: echo bar
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        nodeset = NodeSet("@other:foo", resolver=res)
        self.assertEqual(str(nodeset), "example[1-100]")
        # @third:bar -> @ref-rel (third) -> @other:foo -> @local:foo -> nodes
        nodeset = NodeSet("@third:bar", resolver=res)
        self.assertEqual(str(nodeset), "example[1-100]")
        nodeset = NodeSet("@third:ref-all", resolver=res)
        self.assertEqual(str(nodeset), "example[1-100]")

    def testConfigGroupsDirDummy(self):
        """test groups with groupsdir defined (dummy)"""
        f = make_temp_file(dedent("""
            [Main]
            default: local
            groupsdir: /path/to/nowhere
            [local]
            map: echo example[1-100]
            #all:
            list: echo foo
            #reverse:
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        nodeset = NodeSet("example[1-100]", resolver=res)
        self.assertEqual(str(nodeset), "example[1-100]")
        self.assertEqual(nodeset.regroup(), "@foo")
        self.assertEqual(str(NodeSet("@foo", resolver=res)), "example[1-100]")

    def testConfigGroupsDirExists(self):
        """test groups with groupsdir defined (real, other)"""
        tdir = make_temp_dir()
        f = make_temp_file(dedent("""
            [Main]
            default: new_local
            groupsdir: %s
            [local]
            map: echo example[1-100]
            #all:
            list: echo foo
            #reverse:
            """ % tdir.name).encode('ascii'))
        f2 = make_temp_file(dedent("""
            [new_local]
            map: echo example[1-100]
            #all:
            list: echo bar
            #reverse:
            """).encode('ascii'), suffix=".conf", dir=tdir.name)
        try:
            res = GroupResolverConfig(f.name)
            nodeset = NodeSet("example[1-100]", resolver=res)
            self.assertEqual(str(nodeset), "example[1-100]")
            self.assertEqual(nodeset.regroup(), "@bar")
            self.assertEqual(str(NodeSet("@bar", resolver=res)),
                             "example[1-100]")
        finally:
            f2.close()
            f.close()
            tdir.cleanup()

    def testConfigGroupsMultipleDirs(self):
        """test groups with multiple confdir defined"""
        tdir1 = make_temp_dir()
        dname1 = tdir1.name
        tdir2 = make_temp_dir()
        dname2 = tdir2.name
        # Notes:
        #   - use dname1 two times to check the dup checking code
        #   - use quotes on one of the directory paths
        f = make_temp_file(dedent("""
            [Main]
            default: local2
            confdir: "%s" %s %s
            [local]
            map: echo example[1-100]
            list: echo foo
            """ % (dname1, dname2, dname1)).encode('ascii'))
        fs1 = make_temp_file(dedent("""
            [local1]
            map: echo loc1node[1-100]
            list: echo bar
            """).encode('ascii'), suffix=".conf", dir=dname1)
        fs2 = make_temp_file(dedent("""
            [local2]
            map: echo loc2node[02-50]
            list: echo toto
            """).encode('ascii'), suffix=".conf", dir=dname2)
        try:
            res = GroupResolverConfig(f.name)
            nodeset = NodeSet("example[1-100]", resolver=res)
            self.assertEqual(str(nodeset), "example[1-100]")
            # local
            self.assertEqual(nodeset.regroup("local"), "@local:foo")
            self.assertEqual(str(NodeSet("@local:foo", resolver=res)),
                             "example[1-100]")
            # local1
            nodeset = NodeSet("loc1node[1-100]", resolver=res)
            self.assertEqual(nodeset.regroup("local1"), "@local1:bar")
            self.assertEqual(str(NodeSet("@local1:bar", resolver=res)),
                             "loc1node[1-100]")
            # local2
            nodeset = NodeSet("loc2node[02-50]", resolver=res)
            self.assertEqual(nodeset.regroup(), "@toto")  # default group source
            self.assertEqual(str(NodeSet("@toto", resolver=res)),
                             "loc2node[02-50]")
        finally:
            fs2.close()
            fs1.close()
            f.close()
            tdir2.cleanup()
            tdir1.cleanup()

    def testConfigGroupsDirDupConfig(self):
        """test groups with duplicate in groupsdir"""
        tdir = make_temp_dir()
        f = make_temp_file(dedent("""
            [Main]
            default: iamdup
            groupsdir: %s
            [local]
            map: echo example[1-100]
            #all:
            list: echo foo
            #reverse:
            """ % tdir.name).encode('ascii'))
        f2 = make_temp_file(dedent("""
            [iamdup]
            map: echo example[1-100]
            #all:
            list: echo bar
            #reverse:
            """).encode('ascii'), suffix=".conf", dir=tdir.name)
        f3 = make_temp_file(dedent("""
            [iamdup]
            map: echo example[10-200]
            #all:
            list: echo patato
            #reverse:
            """).encode('ascii'), suffix=".conf", dir=tdir.name)
        try:
            resolver = GroupResolverConfig(f.name)
            self.assertRaises(GroupResolverConfigError, resolver.grouplist)
        finally:
            f3.close()
            f2.close()
            f.close()
            tdir.cleanup()

    def testConfigGroupsDirExistsNoOther(self):
        """test groups with groupsdir defined (real, no other)"""
        tdir1 = make_temp_dir()
        dname1 = tdir1.name
        tdir2 = make_temp_dir()
        dname2 = tdir2.name
        f = make_temp_file(dedent("""
            [Main]
            default: new_local
            groupsdir: %s %s
            """ % (dname1, dname2)).encode('ascii'))
        f2 = make_temp_file(dedent("""
            [new_local]
            map: echo example[1-100]
            #all:
            list: echo bar
            #reverse:
            """).encode('ascii'), suffix=".conf", dir=dname2)
        try:
            res = GroupResolverConfig(f.name)
            nodeset = NodeSet("example[1-100]", resolver=res)
            self.assertEqual(str(nodeset), "example[1-100]")
            self.assertEqual(nodeset.regroup(), "@bar")
            self.assertEqual(str(NodeSet("@bar", resolver=res)),
                             "example[1-100]")
        finally:
            f2.close()
            f.close()
            tdir2.cleanup()
            tdir1.cleanup()

    def testConfigGroupsDirNotADirectory(self):
        """test groups with groupsdir defined (not a directory)"""
        tdir = make_temp_dir()
        fdummy = make_temp_file(b"wrong")
        f = make_temp_file(dedent("""
            [Main]
            default: new_local
            groupsdir: %s
            """ % fdummy.name).encode('ascii'), dir=tdir.name)
        try:
            resolver = GroupResolverConfig(f.name)
            self.assertRaises(GroupResolverConfigError, resolver.grouplist)
        finally:
            fdummy.close()
            f.close()
            tdir.cleanup()

    def testConfigIllegalChars(self):
        """test groups with illegal characters"""
        f = make_temp_file(dedent("""
            # A comment
            [Main]
            default: local
            [local]
            map: echo example[1-100]
            #all:
            list: echo 'foo *'
            reverse: echo f^oo
            """).encode('ascii'))
        res = GroupResolverConfig(f.name, illegal_chars=set("@,&!&^*"))
        nodeset = NodeSet("example[1-100]", resolver=res)
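        # Both lookups below must fail: the 'list' upcall above returns
        # "foo *" and 'reverse' returns "f^oo", and '*' and '^' are part of
        # the illegal_chars set given to the resolver.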
        self.assertRaises(GroupResolverIllegalCharError, nodeset.groups)
        self.assertRaises(GroupResolverIllegalCharError, nodeset.regroup)

    def testConfigMaxRecursionError(self):
        """test groups maximum recursion depth exceeded error"""
        f = make_temp_file(dedent("""
            # A comment
            [Main]
            default: local
            [local]
            map: echo @deep
            list: echo deep
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        self.assertRaises(NodeSetParseError, NodeSet, "@deep", resolver=res)

    def testGroupResolverND(self):
        """test NodeSet with simple custom GroupResolver (nD)"""
        test_groups4 = makeTestG4()

        source = UpcallGroupSource(
            "simple",
            "sed -n 's/^$GROUP:\(.*\)/\\1/p' %s" % test_groups4.name,
            "sed -n 's/^all:\(.*\)/\\1/p' %s" % test_groups4.name,
            "sed -n 's/^\([0-9A-Za-z_-]*\):.*/\\1/p' %s" % test_groups4.name,
            None)

        # create custom resolver with default source
        res = GroupResolver(source)
        self.assertFalse(res.has_node_groups())
        self.assertFalse(res.has_node_groups("dummy_namespace"))

        nodeset = NodeSet("@rack-x1y2", resolver=res)
        self.assertEqual(nodeset, NodeSet("idaho[2-3]z1"))
        self.assertEqual(str(nodeset), "idaho[2-3]z1")

        nodeset = NodeSet("@rack-y1", resolver=res)
        self.assertEqual(str(nodeset), "idaho[1-2,4-5]z1")

        nodeset = NodeSet("@rack-all", resolver=res)
        self.assertEqual(str(nodeset), "idaho[1-7]z1")

        # test NESTED nD groups()
        self.assertEqual(sorted(nodeset.groups().keys()),
                         ['@rack-all', '@rack-x1', '@rack-x1y1', '@rack-x1y2',
                          '@rack-x2', '@rack-x2y1', '@rack-x2y2', '@rack-y1',
                          '@rack-y2'])
        self.assertEqual(sorted(nodeset.groups(groupsource="simple").keys()),
                         ['@simple:rack-all', '@simple:rack-x1',
                          '@simple:rack-x1y1', '@simple:rack-x1y2',
                          '@simple:rack-x2', '@simple:rack-x2y1',
                          '@simple:rack-x2y2', '@simple:rack-y1',
                          '@simple:rack-y2'])
        self.assertEqual(sorted(nodeset.groups(groupsource="simple",
                                               noprefix=True).keys()),
                         ['@rack-all', '@rack-x1', '@rack-x1y1', '@rack-x1y2',
                          '@rack-x2', '@rack-x2y1', '@rack-x2y2', '@rack-y1',
                          '@rack-y2'])
        testns = NodeSet()
        for gnodes, inodes in nodeset.groups().values():
            testns.update(inodes)
        self.assertEqual(testns, nodeset)

        # more tests with nested groups
        nodeset = NodeSet("idaho5z1", resolver=res)
        self.assertEqual(sorted(nodeset.groups().keys()),
                         ['@rack-all', '@rack-x2', '@rack-x2y1', '@rack-y1'])
        nodeset = NodeSet("idaho5z1,idaho4z1", resolver=res)
        self.assertEqual(sorted(nodeset.groups().keys()),
                         ['@rack-all', '@rack-x2', '@rack-x2y1', '@rack-y1'])
        nodeset = NodeSet("idaho5z1,idaho7z1", resolver=res)
        self.assertEqual(sorted(nodeset.groups().keys()),
                         ['@rack-all', '@rack-x2', '@rack-x2y1', '@rack-x2y2',
                          '@rack-y1', '@rack-y2'])

    def testConfigCFGDIR(self):
        """test groups with $CFGDIR use in upcalls"""
        f = make_temp_file(dedent("""
            [Main]
            default: local
            [local]
            map: echo example[1-100]
            list: basename $CFGDIR
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        nodeset = NodeSet("example[1-100]", resolver=res)
        # just a trick to check $CFGDIR resolution...
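        # ($CFGDIR expands to the directory holding the configuration file,
        # so the 'list' upcall above echoes that directory's basename as the
        # only group name; the same value is recomputed below with
        # os.path.dirname/os.path.basename for comparison)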
        tmpgroup = os.path.basename(os.path.dirname(f.name))
        self.assertEqual(list(nodeset.groups().keys()), ['@%s' % tmpgroup])
        self.assertEqual(str(nodeset), "example[1-100]")
        self.assertEqual(nodeset.regroup(), "@%s" % tmpgroup)
        self.assertEqual(str(NodeSet("@%s" % tmpgroup, resolver=res)),
                         "example[1-100]")

    def test_fromall_grouplist(self):
        """test NodeSet.fromall() without all upcall"""
        # Group Source that has no all upcall and that can handle special char
        test_groups2 = makeTestG2()

        source = UpcallGroupSource(
            "simple",
            "sed -n 's/^$GROUP:\(.*\)/\\1/p' %s" % test_groups2.name,
            None,
            "sed -n 's/^\([0-9A-Za-z_\%%-]*\):.*/\\1/p' %s" % test_groups2.name,
            None)

        res = GroupResolver(source)

        # fromall will trigger ParserEngine.grouplist(), which we want to
        # test here
        nsall = NodeSet.fromall(resolver=res)

        # if working, group resolution worked with % char
        self.assertEqual(str(NodeSet.fromall(resolver=res)),
                         "montana[32-55,87-90]")
        self.assertEqual(len(nsall), 28)

        # btw explicitly check escaped char
        nsesc = NodeSet('@escape%test', resolver=res)
        self.assertEqual(str(nsesc), 'montana[87-90]')
        self.assertEqual(len(nsesc), 4)
        nsesc2 = NodeSet('@esc%test2', resolver=res)
        self.assertEqual(nsesc, nsesc2)
        ns = NodeSet('montana[87-90]', resolver=res)
        # could also result in escape%test?
        self.assertEqual(ns.regroup(), '@esc%test2')

    def test_nodeset_wildcard_support(self):
        """test NodeSet wildcard support"""
        f = make_temp_file(dedent("""
            [local]
            map: echo blargh
            all: echo foo1 foo2 foo3 bar bar1 bar2 foobar foobar1
            list: echo g1 g2 g3
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        self.assertEqual(res.grouplist(), ['g1', 'g2', 'g3'])
        # wildcard expansion computes against 'all'
        nodeset = NodeSet("*foo*", resolver=res)
        self.assertEqual(str(nodeset), "foo[1-3],foobar,foobar1")
        self.assertEqual(len(nodeset), 5)
        nodeset = NodeSet("foo?", resolver=res)
        self.assertEqual(str(nodeset), "foo[1-3]")
        nodeset = NodeSet("*bar", resolver=res)
        self.assertEqual(str(nodeset), "bar,foobar")
        # to exercise 'all nodes' caching
        nodeset = NodeSet("foo*,bar1,*bar", resolver=res)
        self.assertEqual(str(nodeset), "bar,bar1,foo[1-3],foobar,foobar1")
        nodeset = NodeSet("*", resolver=res)
        self.assertEqual(str(nodeset), "bar,bar[1-2],foo[1-3],foobar,foobar1")
        # wildcard matching is done with fnmatch, which is always case
        # sensitive on UNIX-like systems, the only supported systems
        self.assertEqual(os.path, posixpath)
        nodeset = NodeSet("*Foo*", resolver=res)  # case sensitive
        self.assertEqual(str(nodeset), "")

    def test_nodeset_wildcard_support_noall(self):
        """test NodeSet wildcard support (without all upcall)"""
        f = make_temp_file(dedent("""
            [local]
            map: echo foo1 foo2 foo3 bar bar1 bar2 foobar foobar1
            list: echo g1 g2 g3
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        # wildcard expansion computes against 'all', which if absent
        # is resolved using list+map
        nodeset = NodeSet("*foo*", resolver=res)
        self.assertEqual(str(nodeset), "foo[1-3],foobar,foobar1")
        nodeset = NodeSet("foo?", resolver=res)
        self.assertEqual(str(nodeset), "foo[1-3]")
        nodeset = NodeSet("*bar", resolver=res)
        self.assertEqual(str(nodeset), "bar,foobar")
        nodeset = NodeSet("*", resolver=res)
        self.assertEqual(str(nodeset), "bar,bar[1-2],foo[1-3],foobar,foobar1")

    def test_nodeset_wildcard_infinite_recursion(self):
        """test NodeSet wildcard infinite recursion protection"""
        f = make_temp_file(dedent(r"""
            [local]
            map: echo foo1 foo2 foo3
            all: echo foo1 foo2 foo\*
            list: echo g1
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        nodeset = NodeSet("*foo*", resolver=res)
        # wildcard mask should be automatically ignored on foo* due to
        # infinite recursion
        self.assertEqual(str(nodeset), "foo[1-2],foo*")
        self.assertEqual(len(nodeset), 3)

    def test_nodeset_wildcard_grouplist(self):
        """test NodeSet wildcard support and grouplist()"""
        f = make_temp_file(dedent(r"""
            [local]
            map: echo other
            all: echo foo1 foo2 foo3 bar bar1 bar2 foobar foobar1
            list: echo a b\* c d e
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        # grouplist() shouldn't trigger wildcard expansion
        self.assertEqual(grouplist(resolver=res), ['a', 'b*', 'c', 'd', 'e'])
        nodeset = NodeSet("*foo*", resolver=res)
        self.assertEqual(str(nodeset), "foo[1-3],foobar,foobar1")
        # currently @@ triggers group name wildcard expansion
        nodeset = NodeSet("@@", resolver=res)
        self.assertEqual(str(nodeset), "a,bar,bar[1-2],c,d,e")
        nodeset = NodeSet("@@local", resolver=res)
        self.assertEqual(str(nodeset), "a,bar,bar[1-2],c,d,e")

    def test_nodeset_wildcard_support_ranges(self):
        """test NodeSet wildcard support with ranges"""
        f = make_temp_file(dedent("""
            [local]
            map: echo blargh
            all: echo foo1 foo2 foo3 foo1-ib0 foo2-ib0 foo1-ib1 foo2-ib1 bar10 foobar foobar1
            list: echo g1 g2 g3
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        nodeset = NodeSet("*foo[1-2]", resolver=res)
        self.assertEqual(str(nodeset), "foo[1-2]")
        nodeset = NodeSet("f?o[1]", resolver=res)
        self.assertEqual(str(nodeset), "foo1")
        nodeset = NodeSet("foo*", resolver=res)
        self.assertEqual(str(nodeset),
                         "foo[1-3],foo[1-2]-ib[0-1],foobar,foobar1")
        nodeset = NodeSet("foo[1-2]*[0]", resolver=res)
        self.assertEqual(str(nodeset), "foo[1-2]-ib0")
        nodeset = NodeSet("foo[1-2]*[0-1]", resolver=res)
        self.assertEqual(str(nodeset), "foo[1-2]-ib[0-1]")
        nodeset = NodeSet("*[1-2]*[0-1]", resolver=res)  # we do it all :)
        self.assertEqual(str(nodeset), "bar10,foo[1-2]-ib[0-1]")  # bar10 too
        nodeset = NodeSet("*[1-2]*", resolver=res)
        self.assertEqual(str(nodeset),
                         "bar10,foo[1-2],foo[1-2]-ib[0-1],foobar1")

    def test_nodeset_wildcard_precedence(self):
        """test NodeSet wildcard support precedence"""
        f = make_temp_file(dedent("""
            [local]
            map: echo blargh
            all: echo foo1-ib0 foo2-ib0 foo1-ib1 foo2-ib1 bar001 bar002
            list: echo g1 g2 g3
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        nodeset = NodeSet("foo*!foo[1-2]-ib0", resolver=res)
        self.assertEqual(str(nodeset), "foo[1-2]-ib1")
        nodeset = NodeSet("foo2-ib0!*", resolver=res)
        self.assertEqual(str(nodeset), "")
        nodeset = NodeSet("bar[001-002],foo[1-2]-ib[0-1]!*foo*", resolver=res)
        self.assertEqual(str(nodeset), "bar[001-002]")
        nodeset = NodeSet("bar0*,foo[1-2]-ib[0-1]!*foo*", resolver=res)
        self.assertEqual(str(nodeset), "bar[001-002]")
        nodeset = NodeSet("bar??1,foo[1-2]-ib[0-1]!*foo*", resolver=res)
        self.assertEqual(str(nodeset), "bar001")
        nodeset = NodeSet("*,*", resolver=res)
        self.assertEqual(str(nodeset), "bar[001-002],foo[1-2]-ib[0-1]")
        nodeset = NodeSet("*!*", resolver=res)
        self.assertEqual(str(nodeset), "")
        nodeset = NodeSet("*&*", resolver=res)
        self.assertEqual(str(nodeset), "bar[001-002],foo[1-2]-ib[0-1]")
        nodeset = NodeSet("*^*", resolver=res)
        self.assertEqual(str(nodeset), "")

    def test_nodeset_wildcard_no_resolver(self):
        """test NodeSet wildcard without resolver"""
        nodeset = NodeSet("foo*", resolver=RESOLVER_NOGROUP)
        self.assertEqual(str(nodeset), "foo*")


class NodeSetGroup2GSTest(unittest.TestCase):

    def setUp(self):
        """configure simple RESOLVER_STD_GROUP"""
        # create temporary groups file and keep a reference to avoid file
        # closing
        self.test_groups1 = makeTestG1()
        self.test_groups2 = makeTestG2()

        # create 2 GroupSource objects
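        # (the first source below becomes the resolver's default namespace;
        # groups from the second one must be referenced with an explicit
        # prefix, e.g. @source2:para, as exercised by the tests of this class)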
        default = UpcallGroupSource(
            "default",
            "sed -n 's/^$GROUP:\(.*\)/\\1/p' %s" % self.test_groups1.name,
            "sed -n 's/^all:\(.*\)/\\1/p' %s" % self.test_groups1.name,
            "sed -n 's/^\([0-9A-Za-z_-]*\):.*/\\1/p' %s" % self.test_groups1.name,
            None)

        source2 = UpcallGroupSource(
            "source2",
            "sed -n 's/^$GROUP:\(.*\)/\\1/p' %s" % self.test_groups2.name,
            "sed -n 's/^all:\(.*\)/\\1/p' %s" % self.test_groups2.name,
            "sed -n 's/^\([0-9A-Za-z_-]*\):.*/\\1/p' %s" % self.test_groups2.name,
            None)

        resolver = GroupResolver(default)
        resolver.add_source(source2)
        set_std_group_resolver(resolver)

    def tearDown(self):
        """restore default RESOLVER_STD_GROUP"""
        set_std_group_resolver(None)
        del self.test_groups1
        del self.test_groups2

    def testGroupSyntaxes(self):
        """test NodeSet group operation syntaxes"""
        nodeset = NodeSet("@gpu")
        self.assertEqual(str(nodeset), "montana[38-41]")

        nodeset = NodeSet("@chassis[1-3,5]&@chassis[2-3]")
        self.assertEqual(str(nodeset), "montana[34-37]")

        nodeset1 = NodeSet("@io!@mds")
        nodeset2 = NodeSet("@oss")
        self.assertEqual(str(nodeset1), str(nodeset2))
        self.assertEqual(str(nodeset1), "montana[4-5]")

    def testGroupListDefault(self):
        """test NodeSet group listing GroupResolver.grouplist()"""
        groups = std_group_resolver().grouplist()
        self.assertEqual(len(groups), 20)
        helper_groups = grouplist()
        self.assertEqual(len(helper_groups), 20)
        total = 0
        nodes = NodeSet()
        for group in groups:
            ns = NodeSet("@%s" % group)
            total += len(ns)
            nodes.update(ns)
        self.assertEqual(total, 310)

        all_nodes = NodeSet.fromall()
        self.assertEqual(len(all_nodes), len(nodes))
        self.assertEqual(all_nodes, nodes)

        # @@ special operator
        gl_nodes = NodeSet("@@")
        self.assertEqual(len(gl_nodes), 20)
        self.assertEqual(str(gl_nodes),
                         "Uppercase,all,chassis[1-12],compute,gpu,gpuchassis,"
                         "io,mds,oss")
        gl_nodes = NodeSet("@@default")
        self.assertEqual(len(gl_nodes), 20)
        self.assertEqual(str(gl_nodes),
                         "Uppercase,all,chassis[1-12],compute,gpu,gpuchassis,"
                         "io,mds,oss")

    def testGroupListSource2(self):
        """test NodeSet group listing GroupResolver.grouplist(source)"""
        groups = std_group_resolver().grouplist("source2")
        self.assertEqual(len(groups), 2)
        total = 0
        for group in groups:
            total += len(NodeSet("@source2:%s" % group))
        self.assertEqual(total, 24)

        # @@ special operator
        gl_nodes = NodeSet("@@source2")
        self.assertEqual(len(gl_nodes), 2)
        self.assertEqual(str(gl_nodes), "gpu,para")

    def testGroupNoPrefix(self):
        """test NodeSet group noprefix option"""
        nodeset = NodeSet("montana[32-37,42-55]")
        self.assertEqual(nodeset.regroup("source2"), "@source2:para")
        self.assertEqual(nodeset.regroup("source2", noprefix=True), "@para")

    def testGroupGroups(self):
        """test NodeSet.groups()"""
        nodeset = NodeSet("montana[32-37,42-55]")
        self.assertEqual(sorted(nodeset.groups().keys()),
                         ['@all', '@chassis1', '@chassis10', '@chassis11',
                          '@chassis12', '@chassis2', '@chassis3', '@chassis6',
                          '@chassis7', '@chassis8', '@chassis9', '@compute'])
        testns = NodeSet()
        for gnodes, inodes in nodeset.groups().values():
            testns.update(inodes)
        self.assertEqual(testns, nodeset)


class NodeSetRegroupTest(unittest.TestCase):

    def setUp(self):
        """setUp test reproducibility: change standard group resolver to
        ensure that no local group source is used during tests"""
        set_std_group_resolver(GroupResolver())  # dummy resolver

    def tearDown(self):
        """tearDown: restore standard group resolver"""
        set_std_group_resolver(None)  # restore std resolver

    def testGroupResolverReverse(self):
        """test NodeSet GroupResolver with reverse upcall"""
        test_groups3 = makeTestG3()
        test_reverse3 = makeTestR3()

        source = UpcallGroupSource(
            "test",
            "sed -n 's/^$GROUP:\(.*\)/\\1/p' %s" % test_groups3.name,
            "sed -n 's/^all:\(.*\)/\\1/p' %s" % test_groups3.name,
            "sed -n 's/^\([0-9A-Za-z_-]*\):.*/\\1/p' %s" % test_groups3.name,
            "awk -F: '/^$NODE:/ { gsub(\",\",\"\\n\",$2); print $2 }' %s" % test_reverse3.name)

        # create custom resolver with default source
        res = GroupResolver(source)

        nodeset = NodeSet("@all", resolver=res)
        self.assertEqual(nodeset, NodeSet("montana[32-55]"))
        self.assertEqual(str(nodeset), "montana[32-55]")
        self.assertEqual(nodeset.regroup(), "@all")
        self.assertEqual(nodeset.regroup(), "@all")

        nodeset = NodeSet("@overclock", resolver=res)
        self.assertEqual(nodeset, NodeSet("montana[41-42]"))
        self.assertEqual(str(nodeset), "montana[41-42]")
        self.assertEqual(nodeset.regroup(), "@overclock")
        self.assertEqual(nodeset.regroup(), "@overclock")

        nodeset = NodeSet("@gpu,@overclock", resolver=res)
        self.assertEqual(str(nodeset), "montana[38-42]")
        self.assertEqual(nodeset, NodeSet("montana[38-42]"))
        # un-overlap :)
        self.assertEqual(nodeset.regroup(), "@gpu,montana42")
        self.assertEqual(nodeset.regroup(), "@gpu,montana42")
        self.assertEqual(nodeset.regroup(overlap=True), "@gpu,@overclock")

        nodeset = NodeSet("montana41", resolver=res)
        self.assertEqual(nodeset.regroup(), "montana41")
        self.assertEqual(nodeset.regroup(), "montana41")

        # test regroup code when using unindexed node
        nodeset = NodeSet("idaho", resolver=res)
        self.assertEqual(nodeset.regroup(), "@single")
        self.assertEqual(nodeset.regroup(), "@single")
        nodeset = NodeSet("@single", resolver=res)
        self.assertEqual(str(nodeset), "idaho")

        # unresolved unindexed:
        nodeset = NodeSet("utah", resolver=res)
        self.assertEqual(nodeset.regroup(), "utah")
        self.assertEqual(nodeset.regroup(), "utah")

        nodeset = NodeSet("@all!montana38", resolver=res)
        self.assertEqual(nodeset, NodeSet("montana[32-37,39-55]"))
        self.assertEqual(str(nodeset), "montana[32-37,39-55]")
        self.assertEqual(nodeset.regroup(), "@para,montana[39-41]")
        self.assertEqual(nodeset.regroup(), "@para,montana[39-41]")
        self.assertEqual(nodeset.regroup(overlap=True),
                         "@chassis[1-3],@login,@overclock,@para,montana[39-40]")
        self.assertEqual(nodeset.regroup(overlap=True),
                         "@chassis[1-3],@login,@overclock,@para,montana[39-40]")

        nodeset = NodeSet("montana[32-37]", resolver=res)
        self.assertEqual(nodeset.regroup(), "@chassis[1-3]")
        self.assertEqual(nodeset.regroup(), "@chassis[1-3]")


class StaticGroupSource(UpcallGroupSource):
    """
    A memory only group source based on a provided dict.
    """

    def __init__(self, name, data):
        all_upcall = None
        if 'all' in data:
            all_upcall = 'fake_all'
        list_upcall = None
        if 'list' in data:
            list_upcall = 'fake_list'
        reverse_upcall = None
        if 'reverse' in data:
            reverse_upcall = 'fake_reverse'
        UpcallGroupSource.__init__(self, name, "fake_map", all_upcall,
                                   list_upcall, reverse_upcall)
        self._data = data

    def _upcall_read(self, cmdtpl, args=dict()):
        if cmdtpl == 'map':
            return self._data[cmdtpl].get(args['GROUP'])
        elif cmdtpl == 'reverse':
            return self._data[cmdtpl].get(args['NODE'])
        else:
            return self._data[cmdtpl]


class GroupSourceCacheTest(unittest.TestCase):

    def test_clear_cache(self):
        """test GroupSource.clear_cache()"""
        source = StaticGroupSource('cache', {'map': {'a': 'foo1', 'b': 'foo2'}})

        # create custom resolver with default source
        res = GroupResolver(source)

        # Populate map cache
        self.assertEqual("foo1", str(NodeSet("@a", resolver=res)))
        self.assertEqual("foo2", str(NodeSet("@b", resolver=res)))
        self.assertEqual(len(source._cache['map']), 2)

        # Clear cache
        source.clear_cache()
        self.assertEqual(len(source._cache['map']), 0)

    def test_expired_cache(self):
        """test UpcallGroupSource expired cache entries"""
        # create custom resolver with default source
        source = StaticGroupSource('cache', {'map': {'a': 'foo1', 'b': 'foo2'}})
        source.cache_time = 0.2
        res = GroupResolver(source)

        # Populate map cache
        self.assertEqual("foo1", str(NodeSet("@a", resolver=res)))
        self.assertEqual("foo2", str(NodeSet("@b", resolver=res)))
        # Query one more time to check that cache key is unique
        self.assertEqual("foo2", str(NodeSet("@b", resolver=res)))
        self.assertEqual(len(source._cache['map']), 2)

        # Be sure 0.2 cache time is expired (especially for old Python version)
        time.sleep(0.25)

        source._data['map']['a'] = 'something_else'
        self.assertEqual('something_else', str(NodeSet("@a", resolver=res)))
        self.assertEqual(len(source._cache['map']), 2)

    def test_expired_cache_reverse(self):
        """test UpcallGroupSource expired cache entries (reverse)"""
        source = StaticGroupSource('cache',
                                   {'map': {'a': 'foo1', 'b': 'foo2'},
                                    'reverse': {'foo1': 'a', 'foo2': 'b'}})
        source.cache_time = 0.2
        res = GroupResolver(source)

        # Populate reverse cache
        self.assertEqual("@a", str(NodeSet("foo1", resolver=res).regroup()))
        self.assertEqual("@b", str(NodeSet("foo2", resolver=res).regroup()))
        # Query one more time to check that cache key is unique
        self.assertEqual("@b", str(NodeSet("foo2", resolver=res).regroup()))
        self.assertEqual(len(source._cache['reverse']), 2)

        # Be sure 0.2 cache time is expired (especially for old Python version)
        time.sleep(0.25)

        source._data['map']['c'] = 'foo1'
        source._data['reverse']['foo1'] = 'c'
        self.assertEqual('@c', NodeSet("foo1", resolver=res).regroup())
        self.assertEqual(len(source._cache['reverse']), 2)

    def test_config_cache_time(self):
        """test group config cache_time options"""
        f = make_temp_file(dedent("""
            [local]
            cache_time: 0.2
            map: echo foo1
            """).encode('ascii'))
        res = GroupResolverConfig(f.name)
        dummy = res.group_nodes('dummy')  # init res to access res._sources
        self.assertEqual(res._sources['local'].cache_time, 0.2)
        self.assertEqual("foo1", str(NodeSet("@local:foo", resolver=res)))


class GroupSourceTest(unittest.TestCase):
    """Test class for 1.7 dict-based GroupSource"""

    def test_base_class0(self):
        """test base GroupSource class (empty)"""
        gs = GroupSource("emptysrc")
        self.assertEqual(gs.resolv_map('gr1'), '')
        self.assertEqual(gs.resolv_map('gr2'), '')
        self.assertEqual(gs.resolv_list(), [])
        self.assertRaises(GroupSourceNoUpcall, gs.resolv_all)
        self.assertRaises(GroupSourceNoUpcall, gs.resolv_reverse, 'n4')

    def test_base_class1(self):
        """test base GroupSource class (map and list)"""
        gs = GroupSource("testsrc",
                         {'gr1': ['n1', 'n4', 'n3', 'n2'],
                          'gr2': ['n9', 'n4']})
        self.assertEqual(gs.resolv_map('gr1'), ['n1', 'n4', 'n3', 'n2'])
        self.assertEqual(gs.resolv_map('gr2'), ['n9', 'n4'])
        self.assertEqual(sorted(gs.resolv_list()), ['gr1', 'gr2'])
        self.assertRaises(GroupSourceNoUpcall, gs.resolv_all)
        self.assertRaises(GroupSourceNoUpcall, gs.resolv_reverse, 'n4')

    def test_base_class2(self):
        """test base GroupSource class (all)"""
        gs = GroupSource("testsrc",
                         {'gr1': ['n1', 'n4', 'n3', 'n2'],
                          'gr2': ['n9', 'n4']},
                         'n[1-9]')
        self.assertEqual(gs.resolv_all(), 'n[1-9]')


class YAMLGroupLoaderTest(unittest.TestCase):

    def test_missing_pyyaml(self):
        """test YAMLGroupLoader with missing PyYAML"""
        sys_path_saved = sys.path
        try:
            sys.path = []  # make the yaml import fail
            if 'yaml' in sys.modules:
                # forget about previous yaml import
                del sys.modules['yaml']
            f = make_temp_file(dedent("""
                vendors:
                    apricot: node""").encode('ascii'))
            self.assertRaises(GroupResolverConfigError, YAMLGroupLoader, f.name)
        finally:
            sys.path = sys_path_saved

    def test_one_source(self):
        """test YAMLGroupLoader one source"""
        f = make_temp_file(dedent("""
            vendors:
                apricot: node""").encode('ascii'))
        loader = YAMLGroupLoader(f.name)
        sources = list(loader)
        self.assertEqual(len(sources), 1)
        self.assertEqual(loader.groups("vendors"), {'apricot': 'node'})

    def test_multi_sources(self):
        """test YAMLGroupLoader multi sources"""
        f = make_temp_file(dedent("""
            vendors:
                apricot: node
            customers:
                cherry: client-4-2""").encode('ascii'))
        loader = YAMLGroupLoader(f.name)
        sources = list(loader)
        self.assertEqual(len(sources), 2)
        self.assertEqual(loader.groups("vendors"), {'apricot': 'node'})
        self.assertEqual(loader.groups("customers"), {'cherry': 'client-4-2'})

    def test_reload(self):
        """test YAMLGroupLoader cache_time"""
        f = make_temp_file(dedent("""
            vendors:
                apricot: "node[1-10]"
                avocado: 'node[11-20]'
                banana: node[21-30]
            customers:
                cherry: client-4-2""").encode('ascii'))
        loader = YAMLGroupLoader(f.name, cache_time=1)
        self.assertEqual(loader.groups("vendors"),
                         {'apricot': 'node[1-10]',
                          'avocado': 'node[11-20]',
                          'banana': 'node[21-30]'})

        # modify YAML file and check that it is reloaded after cache_time
        f.write(b"\n    nut: node42\n")
        # oh and BTW for ultimate code coverage, test if we add a new source
        # on-the-fly, this is not supported but should be ignored
        f.write(b"thieves:\n    pomegranate: node100\n")
        f.flush()

        time.sleep(0.1)  # too soon
        self.assertEqual(loader.groups("customers"), {'cherry': 'client-4-2'})
        time.sleep(1.0)
        self.assertEqual(loader.groups("vendors"),
                         {'apricot': 'node[1-10]',
                          'avocado': 'node[11-20]',
                          'banana': 'node[21-30]'})
        self.assertEqual(loader.groups("customers"),
                         {'cherry': 'client-4-2', 'nut': 'node42'})

    def test_iter(self):
        """test YAMLGroupLoader iterator"""
        f = make_temp_file(dedent("""
            src1:
                src1grp1: node11
                src1grp2: node12
            src2:
                src2grp1: node21
                src2grp2: node22
            src3:
                src3grp1: node31
                src3grp2: node32""").encode('ascii'))
        loader = YAMLGroupLoader(f.name, cache_time=0.1)
        # iterate sources with cache expired
        for source in loader:
            time.sleep(0.5)  # force reload
            self.assertEqual(len(source.groups), 2)

    def test_numeric_sources(self):
        """test YAMLGroupLoader with numeric sources"""
        # good
        f = make_temp_file(b"'111': { compute: 'sgisummit-rcf-111-[08,10]' }")
        loader = YAMLGroupLoader(f.name)
        sources = list(loader)
        self.assertEqual(len(sources), 1)
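        # source names must be YAML strings: the quoted '111' above is
        # accepted, while the bare 111 below parses as an integer and must
        # be rejected with GroupResolverConfigError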
        self.assertEqual(loader.groups("111"),
                         {'compute': 'sgisummit-rcf-111-[08,10]'})
        # bad
        f = make_temp_file(b"111: { compute: 'sgisummit-rcf-111-[08,10]' }")
        self.assertRaises(GroupResolverConfigError, YAMLGroupLoader, f.name)

    def test_numeric_group(self):
        """test YAMLGroupLoader with numeric group"""
        # good
        f = make_temp_file(b"courses: { '101': 'workstation-[1-10]' }")
        loader = YAMLGroupLoader(f.name)
        sources = list(loader)
        self.assertEqual(len(sources), 1)
        self.assertEqual(loader.groups("courses"),
                         {'101': 'workstation-[1-10]'})
        # bad
        f = make_temp_file(b"courses: { 101: 'workstation-[1-10]' }")
        self.assertRaises(GroupResolverConfigError, YAMLGroupLoader, f.name)

    def test_list_group(self):
        """test YAMLGroupLoader with a group defined as a YAML list"""
        f = make_temp_file(dedent("""
            rednecks:
                bubba:
                    - pickup-1
                    - pickup-2
                    - tractor-[1-2]""").encode('ascii'))
        loader = YAMLGroupLoader(f.name)
        sources = list(loader)
        resolver = GroupResolver(sources[0])
        self.assertEqual(resolver.group_nodes('bubba'),
                         ['pickup-1,pickup-2,tractor-[1-2]'])


class GroupResolverYAMLTest(unittest.TestCase):

    def setUp(self):
        """setUp test reproducibility: change standard group resolver to
        ensure that no local group source is used during tests"""
        set_std_group_resolver(GroupResolver())  # dummy resolver

    def tearDown(self):
        """tearDown: restore standard group resolver"""
        set_std_group_resolver(None)  # restore std resolver

    def test_yaml_basic(self):
        """test groups with a basic YAML config file"""
        tdir = make_temp_dir()
        f = make_temp_file(dedent("""
            # A comment
            [Main]
            default: yaml
            autodir: %s
            """ % tdir.name).encode('ascii'))
        yamlfile = make_temp_file(dedent("""
            yaml:
                foo: example[1-4,91-100],example90
                bar: example[5-89]
            """).encode('ascii'), suffix=".yaml", dir=tdir.name)
        try:
            res = GroupResolverConfig(f.name)

            # Group resolution
            nodeset = NodeSet("@foo", resolver=res)
            self.assertEqual(str(nodeset), "example[1-4,90-100]")
            nodeset = NodeSet("@bar", resolver=res)
            self.assertEqual(str(nodeset), "example[5-89]")
            nodeset = NodeSet("@foo,@bar", resolver=res)
            self.assertEqual(str(nodeset), "example[1-100]")
            nodeset = NodeSet("@unknown", resolver=res)
            self.assertEqual(len(nodeset), 0)

            # @@ grouplist operator
            nodeset = NodeSet("@@", resolver=res)
            self.assertEqual(str(nodeset), "bar,foo")
            nodeset = NodeSet("@@yaml", resolver=res)
            self.assertEqual(str(nodeset), "bar,foo")

            # Regroup
            nodeset = NodeSet("example[1-4,90-100]", resolver=res)
            self.assertEqual(str(nodeset), "example[1-4,90-100]")
            self.assertEqual(nodeset.regroup(), "@foo")
            self.assertEqual(list(nodeset.groups().keys()), ["@foo"])
            self.assertEqual(str(NodeSet("@foo", resolver=res)),
                             "example[1-4,90-100]")

            # No 'all' defined: all_nodes() should raise an error
            self.assertRaises(GroupSourceError, res.all_nodes)
            # but then NodeSet falls back to the union of all groups
            nodeset = NodeSet.fromall(resolver=res)
            self.assertEqual(str(nodeset), "example[1-100]")
            # regroup doesn't use @all in that case
            self.assertEqual(nodeset.regroup(), "@bar,@foo")

            # No 'reverse' defined: node_groups() should raise an error
            self.assertRaises(GroupSourceError, res.node_groups, "example1")

            # regroup with rest
            nodeset = NodeSet("example[1-101]", resolver=res)
            self.assertEqual(nodeset.regroup(), "@bar,@foo,example101")

            # regroup incomplete
            nodeset = NodeSet("example[50-200]", resolver=res)
            self.assertEqual(nodeset.regroup(), "example[50-200]")

            # regroup no matching
            nodeset = NodeSet("example[102-200]", resolver=res)
            self.assertEqual(nodeset.regroup(), "example[102-200]")
        finally:
            yamlfile.close()
            tdir.cleanup()

    def test_yaml_fromall(self):
        """test groups special 'all' group"""
        tdir = make_temp_dir()
        f = make_temp_file(dedent("""
            [Main]
            default: yaml
            autodir: %s
            """ % tdir.name).encode('ascii'))
        yamlfile = make_temp_file(dedent("""
            yaml:
                foo: example[1-4,91-100],example90
                bar: example[5-89]
                all: example[90-100]
            """).encode('ascii'), suffix=".yaml", dir=tdir.name)
        try:
            res = GroupResolverConfig(f.name)
            nodeset = NodeSet.fromall(resolver=res)
            self.assertEqual(str(nodeset), "example[90-100]")
            # regroup uses @all if it is defined
            self.assertEqual(nodeset.regroup(), "@all")
        finally:
            yamlfile.close()
            tdir.cleanup()

    def test_yaml_invalid_groups_not_dict(self):
        """test groups with an invalid YAML config file (1)"""
        tdir = make_temp_dir()
        f = make_temp_file(dedent("""
            [Main]
            default: yaml
            autodir: %s
            """ % tdir.name).encode('ascii'))
        yamlfile = make_temp_file(dedent("""
            yaml: bar
            """).encode('ascii'), suffix=".yaml", dir=tdir.name)
        try:
            resolver = GroupResolverConfig(f.name)
            self.assertRaises(GroupResolverConfigError, resolver.grouplist)
        finally:
            yamlfile.close()
            tdir.cleanup()

    def test_yaml_invalid_root_dict(self):
        """test groups with an invalid YAML config file (2)"""
        tdir = make_temp_dir()
        f = make_temp_file(dedent("""
            [Main]
            default: yaml
            autodir: %s
            """ % tdir.name).encode('ascii'))
        yamlfile = make_temp_file(dedent("""
            - Casablanca
            - North by Northwest
            - The Man Who Wasn't There
            """).encode('ascii'), suffix=".yaml", dir=tdir.name)
        try:
            resolver = GroupResolverConfig(f.name)
            self.assertRaises(GroupResolverConfigError, resolver.grouplist)
        finally:
            yamlfile.close()
            tdir.cleanup()

    def test_yaml_invalid_not_yaml(self):
        """test groups with an invalid YAML config file (3)"""
        tdir = make_temp_dir()
        f = make_temp_file(dedent("""
            [Main]
            default: yaml
            autodir: %s
            """ % tdir.name).encode('ascii'))
        yamlfile = make_temp_file(dedent("""
            [Dummy]
            one: un
            two: deux
            three: trois
            """).encode('ascii'), suffix=".yaml", dir=tdir.name)
        try:
            resolver = GroupResolverConfig(f.name)
            self.assertRaises(GroupResolverConfigError, resolver.grouplist)
        finally:
            yamlfile.close()
            tdir.cleanup()

    def test_yaml_unsafe_yaml(self):
        """test groups with an unsafe YAML config file (4)"""
        tdir = make_temp_dir()
        f = make_temp_file(dedent("""
            [Main]
            default: yaml
            autodir: %s
            """ % tdir.name).encode('ascii'))
        yamlfile = make_temp_file(dedent("""
            yaml:
                foo: !!python/object/new:os.system [echo EXPLOIT!]
                bar: example[5-89]
                all: example[90-100]
            """).encode('ascii'), suffix=".yaml", dir=tdir.name)
        try:
            resolver = GroupResolverConfig(f.name)
            self.assertRaises(GroupResolverConfigError, resolver.grouplist)
        finally:
            yamlfile.close()
            tdir.cleanup()

    def test_wrong_autodir(self):
        """test wrong autodir (doesn't exist)"""
        f = make_temp_file(dedent("""
            [Main]
            autodir: /i/do/not/=exist=
            default: local
            """).encode('ascii'))
        # absent autodir itself doesn't raise any exception, but default
        # pointing to nothing does...
        resolver = GroupResolverConfig(f.name)
        self.assertRaises(GroupResolverConfigError, resolver.grouplist)

    def test_wrong_autodir_is_file(self):
        """test wrong autodir (is a file)"""
        fe = make_temp_file(b"")
        f = make_temp_file(dedent("""
            [Main]
            autodir: %s
            default: local
            [local]
            map: node
            """ % fe.name).encode('ascii'))
        resolver = GroupResolverConfig(f.name)
        self.assertRaises(GroupResolverConfigError, resolver.grouplist)

    def test_yaml_permission_denied(self):
        """test groups when not allowed to read some YAML config file"""
        # This test doesn't work if run as root, as root can read the
        # yaml group file even with restricted file permissions...
        if os.geteuid() == 0:
            return

        tdir = make_temp_dir()
        f = make_temp_file(dedent("""
            [Main]
            default: yaml1
            autodir: %s
            """ % tdir.name).encode('ascii'))
        yamlfile1 = make_temp_file(b'yaml1: {foo: "example[1-4]"}',
                                   suffix=".yaml", dir=tdir.name)
        yamlfile2 = make_temp_file(b'yaml2: {bar: "example[5-8]"}',
                                   suffix=".yaml", dir=tdir.name)
        try:
            # do not allow read access to yamlfile2
            os.chmod(yamlfile2.name, 0)
            self.assertFalse(os.access(yamlfile2.name, os.R_OK))
            res = GroupResolverConfig(f.name)
            # using yaml1 should work
            nodeset = NodeSet("@foo", resolver=res)
            self.assertEqual(str(nodeset), "example[1-4]")
            # using yaml2 won't, of course
            self.assertRaises(GroupResolverSourceError, NodeSet, "@yaml2:bar",
                              resolver=res)
        finally:
            yamlfile1.close()
            yamlfile2.close()
            tdir.cleanup()

    def test_yaml_null_value(self):
        """test null value in groups yaml file"""
        tdir = make_temp_dir()
        f = make_temp_file(dedent("""
            [Main]
            default: yaml
            autodir: %s
            """ % tdir.name).encode('ascii'))
        yamlfile = make_temp_file(dedent("""
            yaml:
                c0: nid[0001-0032]
                c1:
                c0r7: nid[0017-0032]
            """).encode('ascii'), suffix=".yaml", dir=tdir.name)
        try:
            res = GroupResolverConfig(f.name)
            nodeset = NodeSet.fromall(resolver=res)
            self.assertEqual(str(nodeset), "nid[0001-0032]")
            nodeset = NodeSet("@c1", resolver=res)
            self.assertEqual(len(nodeset), 0)
            self.assertEqual(str(nodeset), "")
        finally:
            yamlfile.close()
            tdir.cleanup()

# ==== ClusterShell-1.9.2/tests/NodeSetTest.py ====

# -*- coding: utf-8 -*-
# ClusterShell.NodeSet test suite
# Written by S. Thiell (first version in 2007)

"""Unit test for NodeSet"""

import binascii
import copy
import pickle
import sys
import unittest

from ClusterShell.NodeSet import RangeSet, RangeSetND, NodeSet, fold, expand
from ClusterShell.NodeSet import NodeSetBase, AUTOSTEP_DISABLED, \
                                 NodeSetError, NodeSetParseError, \
                                 NodeSetParseRangeError


class NodeSetTest(unittest.TestCase):

    def _assertEqual(self, pattern, result=None):
        ns = NodeSet(pattern)
        if result is None:
            result = pattern
        self.assertEqual(str(ns), result)

    def _assertNode(self, nodeset, nodename):
        """helper to assert single node presence"""
        self.assertEqual(str(nodeset), nodename)
        self.assertEqual(list(nodeset), [nodename])
        self.assertEqual(len(nodeset), 1)

    def testEmptyNode(self):
        """test NodeSet with empty node"""
        # empty strings and any strip()able chars are OK
        for arg in (None, " ", "\n", "\t", " " * 100):
            nodeset = NodeSet(arg)
            self.assertEqual(str(nodeset), "")
            self.assertEqual(len(nodeset), 0)

    def testUnnumberedNode(self):
        """test NodeSet with unnumbered node"""
        nodeset = NodeSet("cws-machin")
        self._assertNode(nodeset, "cws-machin")

    def testNodeZero(self):
        """test NodeSet with node0"""
        nodeset = NodeSet("supercluster0")
        self._assertNode(nodeset, "supercluster0")

    def testNoPrefix(self):
        """test NodeSet with node without prefix"""
        nodeset = NodeSet("0cluster")
        self._assertNode(nodeset, "0cluster")
        nodeset = NodeSet("[0]cluster")
        self._assertNode(nodeset, "0cluster")

    def testWhitespacePrefix(self):
        """test NodeSet parsing ignoring whitespace"""
        nodeset = NodeSet(" tigrou2 , tigrou7 , tigrou[5,9-11] ")
        self.assertEqual(str(nodeset), "tigrou[2,5,7,9-11]")
        nodeset = NodeSet(" tigrou2 , tigrou5,tigrou7 , tigrou[ 9 - 11 ] ")
        self.assertEqual(str(nodeset), "tigrou[2,5,7,9-11]")

    def testWhitespaceInsideNodeName(self):
        """test NodeSet parsing keeping whitespace inside a node name"""
        nodeset = NodeSet("tigrou 0, tigrou [1],tigrou [2-3]")
        self.assertEqual(str(nodeset), "tigrou [0-3]")
        nsstr = "tigrou 1,tigrou 0 1 2 abc,tigrou [2-3] ourgit"
        nodeset = NodeSet(nsstr)
        self.assertEqual(str(nodeset), nsstr)
        nsstr = " tigrou [1-5] & tigrou [0,2,4] ! tigrou [2-3]"
        nsstr += " ^ tigrou [3-5], tigrou 1 "
        nodeset = NodeSet(nsstr)
        self.assertEqual(str(nodeset), "tigrou [1,3,5]")

    def testFromListConstructor(self):
        """test NodeSet.fromlist() constructor"""
        nodeset = NodeSet.fromlist(["cluster33"])
        self._assertNode(nodeset, "cluster33")
        nodeset = NodeSet.fromlist(["cluster0", "cluster1", "cluster2",
                                    "cluster5", "cluster8", "cluster4",
                                    "cluster3"])
        self.assertEqual(str(nodeset), "cluster[0-5,8]")
        self.assertEqual(len(nodeset), 7)
        # updaten() test
        nodeset.updaten(["cluster10", "cluster9"])
        self.assertEqual(str(nodeset), "cluster[0-5,8-10]")
        self.assertEqual(len(nodeset), 9)
        # single nodes test
        nodeset = NodeSet.fromlist(["cluster0", "cluster1", "cluster",
                                    "wool", "cluster3"])
        self.assertEqual(str(nodeset), "cluster,cluster[0-1,3],wool")
        self.assertEqual(len(nodeset), 5)

    def testDigitInPrefix(self):
        """test NodeSet digit in prefix"""
        nodeset = NodeSet("clu-0-3")
        self._assertNode(nodeset, "clu-0-3")
        nodeset = NodeSet("clu-0-[3-23]")
        self.assertEqual(str(nodeset), "clu-0-[3-23]")

    def testNodeWithPercent(self):
        """test NodeSet on nodename with % character"""
        # unindexed node with percent (issue #261)
        nodeset = NodeSet("cluster%s")
        self._assertNode(nodeset, "cluster%s")
        # single node indexed
        nodeset = NodeSet("cluster%s3")
        self._assertNode(nodeset, "cluster%s3")
        # more nodes
        nodeset = NodeSet("clust%ser[3-30]")
        self.assertEqual(str(nodeset), "clust%ser[3-30]")
        nodeset = NodeSet("myclu%ster,clust%ser[3-30]")
        self.assertEqual(str(nodeset), "clust%ser[3-30],myclu%ster")
        # issue #275
        nodeset = NodeSet.fromlist(["cluster%eth0", "cluster%eth1"])
        self.assertEqual(str(nodeset), "cluster%eth[0-1]")
        nodeset = NodeSet.fromlist(["cluster%eth[0-8]", "cluster%eth9"])
        self.assertEqual(str(nodeset), "cluster%eth[0-9]")
        nodeset = NodeSet.fromlist(["super%cluster", "hyper%cluster"])
        self.assertEqual(str(nodeset), "hyper%cluster,super%cluster")
        # test also private _fromlist1 constructor
        nodeset = NodeSet._fromlist1(["cluster%eth0", "cluster%eth1"])
        self.assertEqual(str(nodeset), "cluster%eth[0-1]")
        nodeset = NodeSet._fromlist1(["super%cluster", "hyper%cluster"])
        self.assertEqual(str(nodeset), "hyper%cluster,super%cluster")
        # real use-case!? exercise nD and escaping!
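        # (the IPv6-with-zone-ID node name below mixes ':' and '%'; note the
        # trailing space in the first parse, which must be stripped away)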
        nodeset = NodeSet("fe80::5054:ff:feff:6944%eth0 ")
        self._assertNode(nodeset, "fe80::5054:ff:feff:6944%eth0")
        nodeset = NodeSet.fromlist(["fe80::5054:ff:feff:6944%eth0"])
        self._assertNode(nodeset, "fe80::5054:ff:feff:6944%eth0")
        nodeset = NodeSet._fromlist1(["fe80::5054:ff:feff:6944%eth0"])
        self._assertNode(nodeset, "fe80::5054:ff:feff:6944%eth0")

    def _assertNS(self, pattern, expected_exc):
        self.assertRaises(expected_exc, NodeSet, pattern)

    def testBadRangeUsages(self):
        """test NodeSet parse errors in range"""
        self._assertNS("nova[]", NodeSetParseRangeError)
        self._assertNS("nova[-]", NodeSetParseRangeError)
        self._assertNS("nova[A]", NodeSetParseRangeError)
        self._assertNS("nova[2-5/a]", NodeSetParseRangeError)
        self._assertNS("nova[3/2]", NodeSetParseRangeError)
        self._assertNS("nova[3-/2]", NodeSetParseRangeError)
        self._assertNS("nova[-/2]", NodeSetParseRangeError)
        self._assertNS("nova[4-a/2]", NodeSetParseRangeError)
        self._assertNS("nova[4-3/2]", NodeSetParseRangeError)
        self._assertNS("nova[4-5/-2]", NodeSetParseRangeError)
        self._assertNS("nova[4-2/-2]", NodeSetParseRangeError)
        self._assertNS("nova[004-002]", NodeSetParseRangeError)
        self._assertNS("nova[3-59/2,102a]", NodeSetParseRangeError)
        self._assertNS("nova[3-59/2,,102]", NodeSetParseRangeError)
        self._assertNS("nova%s" % ("3" * 101), NodeSetParseRangeError)
        # nD
        self._assertNS("nova[]p0", NodeSetParseRangeError)
        self._assertNS("nova[-]p0", NodeSetParseRangeError)
        self._assertNS("nova[A]p0", NodeSetParseRangeError)
        self._assertNS("nova[2-5/a]p0", NodeSetParseRangeError)
        self._assertNS("nova[3/2]p0", NodeSetParseRangeError)
        self._assertNS("nova[3-/2]p0", NodeSetParseRangeError)
        self._assertNS("nova[-/2]p0", NodeSetParseRangeError)
        self._assertNS("nova[4-a/2]p0", NodeSetParseRangeError)
        self._assertNS("nova[4-3/2]p0", NodeSetParseRangeError)
        self._assertNS("nova[4-5/-2]p0", NodeSetParseRangeError)
        self._assertNS("nova[4-2/-2]p0", NodeSetParseRangeError)
        self._assertNS("nova[004-002]p0", NodeSetParseRangeError)
        self._assertNS("nova[3-59/2,102a]p0", NodeSetParseRangeError)
        self._assertNS("nova[3-59/2,,102]p0", NodeSetParseRangeError)
        self._assertNS("nova%sp0" % ("3" * 101), NodeSetParseRangeError)
        self._assertNS("x4nova[]p0", NodeSetParseRangeError)
        self._assertNS("x4nova[-]p0", NodeSetParseRangeError)
        self._assertNS("x4nova[A]p0", NodeSetParseRangeError)
        self._assertNS("x4nova[2-5/a]p0", NodeSetParseRangeError)
        self._assertNS("x4nova[3/2]p0", NodeSetParseRangeError)
        self._assertNS("x4nova[3-/2]p0", NodeSetParseRangeError)
        self._assertNS("x4nova[-/2]p0", NodeSetParseRangeError)
        self._assertNS("x4nova[4-a/2]p0", NodeSetParseRangeError)
        self._assertNS("x4nova[4-3/2]p0", NodeSetParseRangeError)
        self._assertNS("x4nova[4-5/-2]p0", NodeSetParseRangeError)
        self._assertNS("x4nova[4-2/-2]p0", NodeSetParseRangeError)
        self._assertNS("x4nova[004-002]p0", NodeSetParseRangeError)
        self._assertNS("x4nova[3-59/2,102a]p0", NodeSetParseRangeError)
        self._assertNS("x4nova[3-59/2,,102]p0", NodeSetParseRangeError)
        self._assertNS("x4nova%sp0" % ("3" * 101), NodeSetParseRangeError)

    def testBadUsages(self):
        """test NodeSet other parse errors"""
        self._assertNS("nova[3-59/2,102", NodeSetParseError)
        self._assertNS("nova3,nova4,,nova6", NodeSetParseError)
        self._assertNS("nova6,", NodeSetParseError)
        self._assertNS("nova6[", NodeSetParseError)
        self._assertNS("nova6]", NodeSetParseError)
        self._assertNS("n6[1-4]]", NodeSetParseError)
        # reopening bracket: no pfx/sfx between delimited ranges
        self._assertNS("n[1-4]0[3-4]", NodeSetParseError)
        self._assertNS("n6[1-4][3-4]", NodeSetParseError)
        self._assertNS("n6[1-4]56[3-4]", NodeSetParseError)
        # illegal numerical bracket folding with /step syntax
        self._assertNS("prod-0[01-06/2]0", NodeSetParseError)
        self._assertNS("prod-0[1-7/2,9]0", NodeSetParseError)
        self._assertNS("prod-0[1-5/2,7-9]0", NodeSetParseError)
        self._assertNS("prod-00[1-6/2]0", NodeSetParseError)  # and not NodeSetParseRangeError
        # nD more
        self._assertNS("[1-30][4-9]", NodeSetParseError)
        self._assertNS("[1-30][4-9]p", NodeSetParseError)
        self._assertNS("x[1-30][4-9]p", NodeSetParseError)
        self._assertNS("x[1-30]p4-9]", NodeSetParseError)
        self._assertNS("xazer][1-30]p[4-9]", NodeSetParseError)
        self._assertNS("xa[[zer[1-30]p[4-9]", NodeSetParseRangeError)

    def testTypeSanityCheck(self):
        """test NodeSet input type sanity check"""
        self.assertRaises(TypeError, NodeSet, dict())
        self.assertRaises(TypeError, NodeSet, list())
        self.assertRaises(ValueError, NodeSetBase, None, RangeSet("1-10"))

    def testRangeSetEntryMismatch(self):
        """test NodeSet RangeSet entry mismatch"""
        nodeset = NodeSet("toto%s")
        self.assertRaises(NodeSetError, nodeset._add, "toto%%s", RangeSet("5"))

    def test_binary_bad_object_type(self):
        nodeset = NodeSet("cluster[1-30]c[1-2]")

        class Dummy:
            pass

        dummy = Dummy()
        self.assertRaises(TypeError, nodeset.add, dummy)

    def test_internal_mismatch(self):
        nodeset = NodeSet("cluster[1-30]c[1-2]")
        self.assertTrue("cluster%sc%s" in nodeset._patterns)
        nodeset._patterns["cluster%sc%s"] = RangeSetND([[1]])
        self.assertRaises(NodeSetParseError, str, nodeset)
        nodeset._patterns["cluster%sc%s"] = RangeSetND([[1, 1]])
        self.assertEqual(str(nodeset), "cluster1c1")
        nodeset._patterns["cluster%sc%s"] = RangeSetND([[1, 1, 1]])
        self.assertRaises(NodeSetParseError, str, nodeset)

    def test_empty_operand(self):
        # right
        self.assertRaises(NodeSetParseError, NodeSet, "foo!")
        self.assertRaises(NodeSetParseError, NodeSet, "foo,")
        self.assertRaises(NodeSetParseError, NodeSet, "foo&")
        self.assertRaises(NodeSetParseError, NodeSet, "foo^")
        self.assertRaises(NodeSetParseError, NodeSet, "c[1-30]c[1-2]!")
        # left
        self.assertRaises(NodeSetParseError, NodeSet, "!foo")
        self.assertRaises(NodeSetParseError, NodeSet, ",foo")
        self.assertRaises(NodeSetParseError, NodeSet, "&foo")
        self.assertRaises(NodeSetParseError, NodeSet, "^foo")
        self.assertRaises(NodeSetParseError, NodeSet, "!c[1-30]c[1-2]")
        # other
        self.assertRaises(NodeSetParseError, NodeSet, "!")
        self.assertRaises(NodeSetParseError, NodeSet, ",")
        self.assertRaises(NodeSetParseError, NodeSet, "&")
        self.assertRaises(NodeSetParseError, NodeSet, "^")
        self.assertRaises(NodeSetParseError, NodeSet, ",,,")
        self.assertRaises(NodeSetParseError, NodeSet, "foo,,bar")

    def testNodeEightPad(self):
        """test NodeSet padding feature"""
        nodeset = NodeSet("cluster008")
        self._assertNode(nodeset, "cluster008")

    def testNodeRangeIncludingZero(self):
        """test NodeSet with node range including zero"""
        nodeset = NodeSet("cluster[0-10]")
        self.assertEqual(str(nodeset), "cluster[0-10]")
        self.assertEqual(list(nodeset),
                         ["cluster0", "cluster1", "cluster2", "cluster3",
                          "cluster4", "cluster5", "cluster6", "cluster7",
                          "cluster8", "cluster9", "cluster10"])
        self.assertEqual(len(nodeset), 11)

    def testSingle(self):
        """test NodeSet single cluster node"""
        nodeset = NodeSet("cluster115")
        self._assertNode(nodeset, "cluster115")

    def testSingleNodeInRange(self):
        """test NodeSet single cluster node in range"""
        nodeset = NodeSet("cluster[115]")
        self._assertNode(nodeset, "cluster115")

    def testRange(self):
        """test NodeSet with simple range"""
        nodeset = NodeSet("cluster[1-100]")
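        # note: the 100 node names are not stored individually; NodeSet keeps
        # the folded pattern with a range set and generates each name during
        # the iterations below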
self.assertEqual(str(nodeset), "cluster[1-100]") self.assertEqual(len(nodeset), 100) i = 1 for n in nodeset: self.assertEqual(n, "cluster%d" % i) i += 1 self.assertEqual(i, 101) lst = copy.deepcopy(list(nodeset)) i = 1 for n in lst: self.assertEqual(n, "cluster%d" % i) i += 1 self.assertEqual(i, 101) def testRangeWithPadding1(self): """test NodeSet with range with padding (1)""" nodeset = NodeSet("cluster[0001-0100]") self.assertEqual(str(nodeset), "cluster[0001-0100]") self.assertEqual(len(nodeset), 100) i = 1 for n in nodeset: self.assertEqual(n, "cluster%04d" % i) i += 1 self.assertEqual(i, 101) def testRangeWithPadding2(self): """test NodeSet with range with padding (2)""" nodeset = NodeSet("cluster[0034-8127]") self.assertEqual(str(nodeset), "cluster[0034-8127]") self.assertEqual(len(nodeset), 8094) i = 34 for n in nodeset: self.assertEqual(n, "cluster%04d" % i) i += 1 self.assertEqual(i, 8128) def testRangeWithSuffix(self): """test NodeSet with simple range with suffix""" nodeset = NodeSet("cluster[50-99]-ipmi") self.assertEqual(str(nodeset), "cluster[50-99]-ipmi") i = 50 for n in nodeset: self.assertEqual(n, "cluster%d-ipmi" % i) i += 1 self.assertEqual(i, 100) def testCommaSeparatedAndRangeWithPadding(self): """test NodeSet comma separated, range and padding""" nodeset = NodeSet("cluster[0001,0002,1555-1559]") self.assertEqual(str(nodeset), "cluster[0001-0002,1555-1559]") self.assertEqual(list(nodeset), ["cluster0001", "cluster0002", "cluster1555", "cluster1556", "cluster1557", "cluster1558", "cluster1559"]) def testCommaSeparatedAndRangeWithPaddingWithSuffix(self): """test NodeSet comma separated, range and padding with suffix""" nodeset = NodeSet("cluster[0001,0002,1555-1559]-ipmi") self.assertEqual(str(nodeset), "cluster[0001-0002,1555-1559]-ipmi") self.assertEqual(list(nodeset), ["cluster0001-ipmi", "cluster0002-ipmi", "cluster1555-ipmi", "cluster1556-ipmi", "cluster1557-ipmi", "cluster1558-ipmi", "cluster1559-ipmi"]) def testVeryBigRange(self): """test NodeSet iterations with big range size""" nodeset = NodeSet("bigcluster[1-1000000]") self.assertEqual(str(nodeset), "bigcluster[1-1000000]") self.assertEqual(len(nodeset), 1000000) i = 1 for n in nodeset: assert n == "bigcluster%d" % i i += 1 def test_numerical_bracket_folding(self): """test NodeSet numerical bracket folding (eg. 
1[2-3]4)""" # Ticket #228 nodeset = NodeSet("node0[0]") self.assertEqual(str(nodeset), "node00") nodeset = NodeSet("node0[1]") self.assertEqual(str(nodeset), "node01") nodeset = NodeSet("node1[0]") self.assertEqual(str(nodeset), "node10") nodeset = NodeSet("node01[0-1]") self.assertEqual(str(nodeset), "node[010-011]") nodeset = NodeSet("prod-02[10-20]") self.assertEqual(str(nodeset), "prod-[0210-0220]") nodeset = NodeSet("prod-2[10-320]") self.assertEqual(str(nodeset), "prod-[210-2320]") nodeset = NodeSet("prod-02[010-320]") self.assertEqual(str(nodeset), "prod-[02010-02320]") nodeset = NodeSet("prod-000[1-9]") self.assertEqual(str(nodeset), "prod-[0001-0009]") nodeset = NodeSet("prod-100[1-9]") self.assertEqual(str(nodeset), "prod-[1001-1009]") nodeset = NodeSet("prod-100[040-042]") self.assertEqual(str(nodeset), "prod-[100040-100042]") self.assertEqual(len(nodeset), 3) # complex ranges nodeset = NodeSet("prod-10[01,05,09-15,40-50,52]") self.assertEqual(str(nodeset), "prod-[1001,1005,1009-1015,1040-1050,1052]") nodeset.autostep = 3 self.assertEqual(str(nodeset), "prod-[1001-1009/4,1010-1015,1040-1050,1052]") # multi patterns nodeset = NodeSet("prod-0[040-042],sysgrp-00[01-02]") self.assertEqual(str(nodeset), "prod-[0040-0042],sysgrp-[0001-0002]") nodeset = NodeSet("prod-100[040-042],sysgrp-00[01-02]") self.assertEqual(str(nodeset), "prod-[100040-100042],sysgrp-[0001-0002]") # leading digits with step notation (supported) nodeset = NodeSet("prod-000[0-8/2]", autostep=3) self.assertEqual(str(nodeset), "prod-[0000-0008/2]") nodeset = NodeSet("n1[01-40/4]", autostep=3) self.assertEqual(str(nodeset), "n[101-137/4]") nodeset = NodeSet("prod-000[0-8/2],prod-000[1-9/2]") self.assertEqual(str(nodeset), "prod-[0000-0009]") self.assertEqual(len(nodeset), 10) # Tricky case due to absence of padding: the one that requires # RangeSet.contiguous() in ParsingEngine._amend_leading_digits() nodeset = NodeSet("node-1[0-48/16]") # => not equal to node-[10-148/16]! 
self.assertEqual(str(nodeset), "node-[10,116,132,148]") self.assertEqual(len(nodeset), 4) # same case with padding nodeset = NodeSet("node-1[00-48/16]") # equal to node-[100-148/16] self.assertEqual(nodeset, NodeSet("node-[100-148/16]")) self.assertEqual(str(nodeset), "node-[100,116,132,148]") self.assertEqual(len(nodeset), 4) # see also NodeSetErrorTest.py for unsupported trailing digits w/ steps # /!\ padding mismatch cases: mixed padding allowed since 1.9 nodeset = NodeSet("prod-1[10-345]") # no padding so no mismatch there: OK self.assertEqual(str(nodeset), "prod-[110-1345]") nodeset = NodeSet("prod-02[10-34,069-099]") # no padding mismatch within a range: OK self.assertEqual(str(nodeset), "prod-[0210-0234,02069-02099]") self._assertNS("prod-0[10-345]", NodeSetParseRangeError) # padding length mismatch in a range self._assertNS("prod-02[10-345]", NodeSetParseRangeError) # padding length mismatch in a range # numerical folding with nD nodesets nodeset = NodeSet("x01[0-1]y01[0-1]z01[0-1]") self.assertEqual(str(nodeset), "x[010-011]y[010-011]z[010-011]") self.assertEqual(len(nodeset), 2*2*2) nodeset = NodeSet("x22[0-1]y00[0-1]z03[0-1]") self.assertEqual(str(nodeset), "x[220-221]y[000-001]z[030-031]") self.assertEqual(len(nodeset), 2*2*2) nodeset = NodeSet("x22[0-1]y000z03[0-1]") self.assertEqual(str(nodeset), "x[220-221]y000z[030-031]") self.assertEqual(len(nodeset), 2*1*2) # trigger trailing digits to step code nodeset = NodeSet("x22[0-1]0y03[0-1]0") self.assertEqual(str(nodeset), "x[2200,2210]y[0300,0310]") self.assertEqual(len(nodeset), 4) nodeset = NodeSet("x22[0-1]0y03[0-1]0-ipmi") self.assertEqual(str(nodeset), "x[2200,2210]y[0300,0310]-ipmi") self.assertEqual(len(nodeset), 4) # more numerical folding (with suffix) nodeset = NodeSet("node[0]0") self.assertEqual(str(nodeset), "node00") nodeset = NodeSet("node[0]1") self.assertEqual(str(nodeset), "node01") nodeset = NodeSet("node[1]0") self.assertEqual(str(nodeset), "node10") nodeset = NodeSet("n[1-9,15,59,10-50,142]0") self.assertEqual(str(nodeset), str(NodeSet("n[10-90/10,150,590,100-500/10,1420]"))) self.assertEqual(nodeset, NodeSet("n[10-90/10,150,590,100-500/10,1420]")) nodeset = NodeSet("nova[1-4]56") self.assertEqual(nodeset, NodeSet("nova[156-456/100]")) self.assertEqual(len(nodeset), 4) nodeset = NodeSet("nova16[1-4]56") self.assertEqual(str(nodeset), "nova[16156,16256,16356,16456]") self.assertEqual(len(nodeset), 4) nodeset = NodeSet("nova16[1-4]56c") self.assertEqual(str(nodeset), "nova[16156,16256,16356,16456]c") self.assertEqual(len(nodeset), 4) nodeset = NodeSet("prod-[01-34]0") self.assertEqual(nodeset, NodeSet("prod-[010-340/10]")) nodeset = NodeSet("prod-01[1-5]0") self.assertEqual(nodeset, NodeSet("prod-[0110-0150/10]")) nodeset = NodeSet("node123[1-2]") self.assertEqual(nodeset, NodeSet("node[1231-1232]")) self.assertEqual(str(nodeset), "node[1231-1232]") inodeset = NodeSet("node1232") self.assertEqual(str(nodeset.intersection(inodeset)), "node1232") nodeset = NodeSet("node0[0]0") self.assertEqual(str(nodeset), "node000") nodeset = NodeSet("node0[1]0") self.assertEqual(str(nodeset), "node010") nodeset = NodeSet("node1[0]1") self.assertEqual(str(nodeset), "node101") nodeset = NodeSet("node01[0]10") self.assertEqual(str(nodeset), "node01010") # misordered ranges nodeset = NodeSet("n1[1-9,15,59,10-50,142]0") self.assertEqual(nodeset, NodeSet("n[110-190/10,1100-1500/10,1590,11420]")) # more nD (with suffix) nodeset = NodeSet("x01[0-1]y01[0-1]z01[0-1]-ipmi") self.assertEqual(str(nodeset), 
"x[010-011]y[010-011]z[010-011]-ipmi") self.assertEqual(len(nodeset), 2*2*2) # #284 - hostname labels starting with digits (RFC 1123) nodeset = NodeSet("0[3-9/2]abc") self.assertEqual(str(nodeset), "[03,05,07,09]abc") nodeset = NodeSet("0[3-9]abc") self.assertEqual(str(nodeset), "[03-09]abc") nodeset = NodeSet("[3,5,7,9]0abc") self.assertEqual(str(nodeset), "[30,50,70,90]abc") nodeset = NodeSet("[3-9]0abc") self.assertEqual(str(nodeset), "[30,40,50,60,70,80,90]abc") nodeset = NodeSet("3abc0[1]0") self.assertEqual(str(nodeset), "3abc010") nodeset = NodeSet("3abc16[1-4]56d") self.assertEqual(str(nodeset), "3abc[16156,16256,16356,16456]d") nodeset = NodeSet("0[3,6,9]1abc16[1-4]56d") self.assertEqual(str(nodeset), "[031,061,091]abc[16156,16256,16356,16456]d") # bogus range with padding, we are stricter in v1.9+ self._assertNS("0123[0-100]L6", NodeSetParseRangeError) # ok when no mismatch within a given range nodeset = NodeSet("[01230-99999,100000-123100]L6") self.assertEqual(str(nodeset), "[01230-99999,100000-123100]L6") nodeset = NodeSet("0123[000-100]L6") self.assertEqual(str(nodeset), "[0123000-0123100]L6") def testCommaSeparated(self): """test NodeSet comma separated to ranges (folding)""" nodeset = NodeSet("cluster115,cluster116,cluster117,cluster130," "cluster166") self.assertEqual(str(nodeset), "cluster[115-117,130,166]") self.assertEqual(len(nodeset), 5) def testCommaSeparatedAndRange(self): """test NodeSet comma separated and range to ranges (folding)""" nodeset = NodeSet("cluster115,cluster116,cluster117,cluster130," "cluster[166-169],cluster170") self.assertEqual(str(nodeset), "cluster[115-117,130,166-170]") def testCommaSeparatedAndRanges(self): """test NodeSet comma separated and ranges to ranges (folding)""" nodeset = NodeSet("cluster[115-117],cluster130,cluster[166-169]," "cluster170") self.assertEqual(str(nodeset), "cluster[115-117,130,166-170]") def testSimpleStringUpdates(self): """test NodeSet simple string-based update()""" nodeset = NodeSet("cluster[115-117,130,166-170]") self.assertEqual(str(nodeset), "cluster[115-117,130,166-170]") nodeset.update("cluster171") self.assertEqual(str(nodeset), "cluster[115-117,130,166-171]") nodeset.update("cluster172") self.assertEqual(str(nodeset), "cluster[115-117,130,166-172]") nodeset.update("cluster174") self.assertEqual(str(nodeset), "cluster[115-117,130,166-172,174]") nodeset.update("cluster113") self.assertEqual(str(nodeset), "cluster[113,115-117,130,166-172,174]") nodeset.update("cluster173") self.assertEqual(str(nodeset), "cluster[113,115-117,130,166-174]") nodeset.update("cluster114") self.assertEqual(str(nodeset), "cluster[113-117,130,166-174]") def test_nd_fold_axis_errors(self): """test NodeSet fold_axis errors""" n1 = NodeSet("a3b2c0,a2b3c1,a2b4c1,a1b2c0,a1b2c1,a3b2c1,a2b5c1") n1.fold_axis = 0 self.assertRaises(NodeSetParseError, str, n1) n1.fold_axis = 1 self.assertRaises(NodeSetParseError, str, n1) n1.fold_axis = "0-1" # nok self.assertRaises(NodeSetParseError, str, n1) n1.fold_axis = range(2) # ok self.assertEqual(str(n1), "a[1,3]b2c0,a[1,3]b2c1,a2b[3-5]c1") self.assertEqual(n1, NodeSet("a[1,3]b2c0,a[1,3]b2c1,a2b[3-5]c1")) n1.fold_axis = RangeSet("0-1") # ok self.assertEqual(str(n1), "a[1,3]b2c0,a[1,3]b2c1,a2b[3-5]c1") self.assertEqual(n1, NodeSet("a[1,3]b2c0,a[1,3]b2c1,a2b[3-5]c1")) n1.fold_axis = (0, 1) # ok self.assertEqual(str(n1), "a[1,3]b2c0,a[1,3]b2c1,a2b[3-5]c1") self.assertEqual(n1, NodeSet("a[1,3]b2c0,a[1,3]b2c1,a2b[3-5]c1")) def testSimpleNodeSetUpdates(self): """test NodeSet simple nodeset-based update()""" 
nodeset = NodeSet("cluster[115-117,130,166-170]") self.assertEqual(str(nodeset), "cluster[115-117,130,166-170]") nodeset.update(NodeSet("cluster171")) self.assertEqual(str(nodeset), "cluster[115-117,130,166-171]") nodeset.update(NodeSet("cluster172")) self.assertEqual(str(nodeset), "cluster[115-117,130,166-172]") nodeset.update(NodeSet("cluster174")) self.assertEqual(str(nodeset), "cluster[115-117,130,166-172,174]") nodeset.update(NodeSet("cluster113")) self.assertEqual(str(nodeset), "cluster[113,115-117,130,166-172,174]") nodeset.update(NodeSet("cluster173")) self.assertEqual(str(nodeset), "cluster[113,115-117,130,166-174]") nodeset.update(NodeSet("cluster114")) self.assertEqual(str(nodeset), "cluster[113-117,130,166-174]") def testStringUpdatesFromEmptyNodeSet(self): """test NodeSet string-based NodeSet.update() from empty nodeset""" nodeset = NodeSet() self.assertEqual(str(nodeset), "") nodeset.update("") self.assertEqual(str(nodeset), "") nodeset.update(" ") self.assertEqual(str(nodeset), "") nodeset.update("cluster115") self.assertEqual(str(nodeset), "cluster115") nodeset.update("cluster118") self.assertEqual(str(nodeset), "cluster[115,118]") nodeset.update("cluster[116-117]") self.assertEqual(str(nodeset), "cluster[115-118]") def testNodeSetUpdatesFromEmptyNodeSet(self): """test NodeSet-based update() method from empty nodeset""" nodeset = NodeSet() self.assertEqual(str(nodeset), "") nodeset.update(NodeSet("cluster115")) self.assertEqual(str(nodeset), "cluster115") nodeset.update(NodeSet("cluster118")) self.assertEqual(str(nodeset), "cluster[115,118]") nodeset.update(NodeSet("cluster[116-117]")) self.assertEqual(str(nodeset), "cluster[115-118]") def testUpdatesWithSeveralPrefixes(self): """test NodeSet.update() using several prefixes""" nodeset = NodeSet("cluster3") self.assertEqual(str(nodeset), "cluster3") nodeset.update("cluster5") self.assertEqual(str(nodeset), "cluster[3,5]") nodeset.update("tiger5") self.assertEqual(str(nodeset), "cluster[3,5],tiger5") nodeset.update("tiger7") self.assertEqual(str(nodeset), "cluster[3,5],tiger[5,7]") nodeset.update("tiger6") self.assertEqual(str(nodeset), "cluster[3,5],tiger[5-7]") nodeset.update("cluster4") self.assertEqual(str(nodeset), "cluster[3-5],tiger[5-7]") def testOperatorUnion(self): """test NodeSet union | operator""" nodeset = NodeSet("cluster[115-117,130,166-170]") self.assertEqual(str(nodeset), "cluster[115-117,130,166-170]") # 1 n_test1 = nodeset | NodeSet("cluster171") self.assertEqual(str(n_test1), "cluster[115-117,130,166-171]") nodeset2 = nodeset.copy() self.assertEqual(str(nodeset), "cluster[115-117,130,166-170]") nodeset2 |= NodeSet("cluster171") self.assertEqual(str(nodeset2), "cluster[115-117,130,166-171]") # btw validate modifying a copy did not change original self.assertEqual(str(nodeset), "cluster[115-117,130,166-170]") # 2 n_test2 = n_test1 | NodeSet("cluster172") self.assertEqual(str(n_test2), "cluster[115-117,130,166-172]") nodeset2 |= NodeSet("cluster172") self.assertEqual(str(nodeset2), "cluster[115-117,130,166-172]") self.assertEqual(str(nodeset), "cluster[115-117,130,166-170]") # 3 n_test1 = n_test2 | NodeSet("cluster113") self.assertEqual(str(n_test1), "cluster[113,115-117,130,166-172]") nodeset2 |= NodeSet("cluster113") self.assertEqual(str(nodeset2), "cluster[113,115-117,130,166-172]") self.assertEqual(str(nodeset), "cluster[115-117,130,166-170]") # 4 n_test2 = n_test1 | NodeSet("cluster114") self.assertEqual(str(n_test2), "cluster[113-117,130,166-172]") nodeset2 |= NodeSet("cluster114") 
self.assertEqual(str(nodeset2), "cluster[113-117,130,166-172]") self.assertEqual(nodeset2, NodeSet("cluster[113-117,130,166-172]")) self.assertEqual(str(nodeset), "cluster[115-117,130,166-170]") # more original = NodeSet("cluster0") nodeset = original.copy() for i in range(1, 3000): nodeset = nodeset | NodeSet("cluster%d" % i) self.assertEqual(len(nodeset), 3000) self.assertEqual(str(nodeset), "cluster[0-2999]") self.assertEqual(len(original), 1) self.assertEqual(str(original), "cluster0") nodeset2 = original.copy() for i in range(1, 3000): nodeset2 |= NodeSet("cluster%d" % i) self.assertEqual(nodeset, nodeset2) for i in range(3000, 5000): nodeset2 |= NodeSet("cluster%d" % i) self.assertEqual(len(nodeset2), 5000) self.assertEqual(str(nodeset2), "cluster[0-4999]") self.assertEqual(len(nodeset), 3000) self.assertEqual(str(nodeset), "cluster[0-2999]") self.assertEqual(len(original), 1) self.assertEqual(str(original), "cluster0") def testOperatorUnionFromEmptyNodeSet(self): """test NodeSet union | operator from empty nodeset""" nodeset = NodeSet() self.assertEqual(str(nodeset), "") n_test1 = nodeset | NodeSet("cluster115") self.assertEqual(str(n_test1), "cluster115") n_test2 = n_test1 | NodeSet("cluster118") self.assertEqual(str(n_test2), "cluster[115,118]") n_test1 = n_test2 | NodeSet("cluster[116,117]") self.assertEqual(str(n_test1), "cluster[115-118]") def testOperatorUnionWithSeveralPrefixes(self): """test NodeSet union | operator using several prefixes""" nodeset = NodeSet("cluster3") self.assertEqual(str(nodeset), "cluster3") n_test1 = nodeset | NodeSet("cluster5") self.assertEqual(str(n_test1), "cluster[3,5]") n_test2 = n_test1 | NodeSet("tiger5") self.assertEqual(str(n_test2), "cluster[3,5],tiger5") n_test1 = n_test2 | NodeSet("tiger7") self.assertEqual(str(n_test1), "cluster[3,5],tiger[5,7]") n_test2 = n_test1 | NodeSet("tiger6") self.assertEqual(str(n_test2), "cluster[3,5],tiger[5-7]") n_test1 = n_test2 | NodeSet("cluster4") self.assertEqual(str(n_test1), "cluster[3-5],tiger[5-7]") def testOperatorSub(self): """test NodeSet difference/sub - operator""" nodeset = NodeSet("cluster[115-117,130,166-170]") self.assertEqual(str(nodeset), "cluster[115-117,130,166-170]") # __sub__ n_test1 = nodeset - NodeSet("cluster[115,130]") self.assertEqual(str(n_test1), "cluster[116-117,166-170]") nodeset2 = copy.copy(nodeset) nodeset2 -= NodeSet("cluster[115,130]") self.assertEqual(str(nodeset2), "cluster[116-117,166-170]") self.assertEqual(nodeset2, NodeSet("cluster[116-117,166-170]")) def testOperatorAnd(self): """test NodeSet intersection/and & operator""" nodeset = NodeSet("cluster[115-117,130,166-170]") self.assertEqual(str(nodeset), "cluster[115-117,130,166-170]") # __and__ n_test1 = nodeset & NodeSet("cluster[115-167]") self.assertEqual(str(n_test1), "cluster[115-117,130,166-167]") nodeset2 = copy.copy(nodeset) nodeset2 &= NodeSet("cluster[115-167]") self.assertEqual(str(nodeset2), "cluster[115-117,130,166-167]") self.assertEqual(nodeset2, NodeSet("cluster[115-117,130,166-167]")) def testOperatorXor(self): """test NodeSet symmetric_difference/xor & operator""" nodeset = NodeSet("cluster[115-117,130,166-170]") self.assertEqual(str(nodeset), "cluster[115-117,130,166-170]") # __xor__ n_test1 = nodeset ^ NodeSet("cluster[115-167]") self.assertEqual(str(n_test1), "cluster[118-129,131-165,168-170]") nodeset2 = copy.copy(nodeset) nodeset2 ^= NodeSet("cluster[115-167]") self.assertEqual(str(nodeset2), "cluster[118-129,131-165,168-170]") self.assertEqual(nodeset2, 
NodeSet("cluster[118-129,131-165,168-170]")) def testLen(self): """test NodeSet len() results""" nodeset = NodeSet() self.assertEqual(len(nodeset), 0) nodeset.update("cluster[116-120]") self.assertEqual(len(nodeset), 5) nodeset = NodeSet("roma[50-99]-ipmi,cors[113,115-117,130,166-172]," "cws-tigrou,tigrou3") self.assertEqual(len(nodeset), 50 + 12 + 1 + 1) nodeset = NodeSet("roma[50-99]-ipmi,cors[113,115-117,130,166-172]," "cws-tigrou,tigrou3,tigrou3,tigrou3,cors116") self.assertEqual(len(nodeset), 50 + 12 + 1 + 1) def testIntersection(self): """test NodeSet.intersection()""" nsstr = "red[34-55,76-249,300-403],blue,green" nodeset = NodeSet(nsstr) self.assertEqual(len(nodeset), 302) nsstr2 = "red[32-57,72-249,300-341],blue,yellow" nodeset2 = NodeSet(nsstr2) self.assertEqual(len(nodeset2), 248) inodeset = nodeset.intersection(nodeset2) # originals should not change self.assertEqual(len(nodeset), 302) self.assertEqual(len(nodeset2), 248) self.assertEqual(str(nodeset), "blue,green,red[34-55,76-249,300-403]") self.assertEqual(str(nodeset2), "blue,red[32-57,72-249,300-341],yellow") # result self.assertEqual(len(inodeset), 239) self.assertEqual(str(inodeset), "blue,red[34-55,76-249,300-341]") def testIntersectUpdate(self): """test NodeSet.intersection_update()""" nsstr = "red[34-55,76-249,300-403]" nodeset = NodeSet(nsstr) self.assertEqual(len(nodeset), 300) nodeset = NodeSet(nsstr) nodeset.intersection_update("red[78-80]") self.assertEqual(str(nodeset), "red[78-80]") nodeset = NodeSet(nsstr) nodeset.intersection_update("red[54-249]") self.assertEqual(str(nodeset), "red[54-55,76-249]") nodeset = NodeSet(nsstr) nodeset.intersection_update("red[55-249]") self.assertEqual(str(nodeset), "red[55,76-249]") nodeset = NodeSet(nsstr) nodeset.intersection_update("red[55-100]") self.assertEqual(str(nodeset), "red[55,76-100]") nodeset = NodeSet(nsstr) nodeset.intersection_update("red[55-76]") self.assertEqual(str(nodeset), "red[55,76]") nodeset = NodeSet(nsstr) nodeset.intersection_update("red[55,76]") self.assertEqual(str(nodeset), "red[55,76]") nodeset = NodeSet(nsstr) nodeset.intersection_update("red55,red76") self.assertEqual(str(nodeset), "red[55,76]") # same with intersect(NodeSet) nodeset = NodeSet(nsstr) nodeset.intersection_update(NodeSet("red[78-80]")) self.assertEqual(str(nodeset), "red[78-80]") nodeset = NodeSet(nsstr) nodeset.intersection_update(NodeSet("red[54-249]")) self.assertEqual(str(nodeset), "red[54-55,76-249]") nodeset = NodeSet(nsstr) nodeset.intersection_update(NodeSet("red[55-249]")) self.assertEqual(str(nodeset), "red[55,76-249]") nodeset = NodeSet(nsstr) nodeset.intersection_update(NodeSet("red[55-100]")) self.assertEqual(str(nodeset), "red[55,76-100]") nodeset = NodeSet(nsstr) nodeset.intersection_update(NodeSet("red[55-76]")) self.assertEqual(str(nodeset), "red[55,76]") nodeset = NodeSet(nsstr) nodeset.intersection_update(NodeSet("red[55,76]")) self.assertEqual(str(nodeset), "red[55,76]") nodeset = NodeSet(nsstr) nodeset.intersection_update(NodeSet("red55,red76")) self.assertEqual(str(nodeset), "red[55,76]") # single nodes test nodeset = NodeSet("red,blue,yellow") nodeset.intersection_update("blue,green,yellow") self.assertEqual(len(nodeset), 2) self.assertEqual(str(nodeset), "blue,yellow") def testIntersectSelf(self): """test Nodeset.intersection_update(self)""" nodeset = NodeSet("red4955") self.assertEqual(len(nodeset), 1) nodeset.intersection_update(nodeset) self.assertEqual(len(nodeset), 1) self.assertEqual(str(nodeset), "red4955") nodeset = NodeSet("red") 
self.assertEqual(len(nodeset), 1) nodeset.intersection_update(nodeset) self.assertEqual(len(nodeset), 1) self.assertEqual(str(nodeset), "red") nodeset = NodeSet("red") self.assertEqual(len(nodeset), 1) nodeset.intersection_update("red") self.assertEqual(len(nodeset), 1) self.assertEqual(str(nodeset), "red") nodeset = NodeSet("red") self.assertEqual(len(nodeset), 1) nodeset.intersection_update("blue") self.assertEqual(len(nodeset), 0) nodeset = NodeSet("red[78-149]") self.assertEqual(len(nodeset), 72) nodeset.intersection_update(nodeset) self.assertEqual(len(nodeset), 72) self.assertEqual(str(nodeset), "red[78-149]") def testIntersectReturnNothing(self): """test NodeSet intersect that returns empty NodeSet""" nodeset = NodeSet("blue43") self.assertEqual(len(nodeset), 1) nodeset.intersection_update("blue42") self.assertEqual(len(nodeset), 0) def testDifference(self): """test NodeSet.difference()""" nsstr = "red[34-55,76-249,300-403],blue,green" nodeset = NodeSet(nsstr) self.assertEqual(str(nodeset), "blue,green,red[34-55,76-249,300-403]") self.assertEqual(len(nodeset), 302) nsstr2 = "red[32-57,72-249,300-341],blue,yellow" nodeset2 = NodeSet(nsstr2) self.assertEqual(str(nodeset2), "blue,red[32-57,72-249,300-341],yellow") self.assertEqual(len(nodeset2), 248) inodeset = nodeset.difference(nodeset2) # originals should not change self.assertEqual(str(nodeset), "blue,green,red[34-55,76-249,300-403]") self.assertEqual(str(nodeset2), "blue,red[32-57,72-249,300-341],yellow") self.assertEqual(len(nodeset), 302) self.assertEqual(len(nodeset2), 248) # result self.assertEqual(str(inodeset), "green,red[342-403]") self.assertEqual(len(inodeset), 63) inodeset = nodeset.difference("") self.assertEqual(str(inodeset), str(nodeset)) self.assertEqual(inodeset, nodeset) def testDifferenceUpdate(self): """test NodeSet.difference_update()""" # nodeset-based subs nodeset = NodeSet("yellow120") self.assertEqual(len(nodeset), 1) nodeset.difference_update(NodeSet("yellow120")) self.assertEqual(len(nodeset), 0) nodeset = NodeSet("yellow") self.assertEqual(len(nodeset), 1) nodeset.difference_update(NodeSet("yellow")) self.assertEqual(len(nodeset), 0) nodeset = NodeSet("yellow") self.assertEqual(len(nodeset), 1) nodeset.difference_update(NodeSet("blue")) self.assertEqual(len(nodeset), 1) self.assertEqual(str(nodeset), "yellow") nodeset = NodeSet("yellow[45-240,570-764,800]") self.assertEqual(len(nodeset), 392) nodeset.difference_update(NodeSet("yellow[45-240,570-764,800]")) self.assertEqual(len(nodeset), 0) # same with string-based subs nodeset = NodeSet("yellow120") self.assertEqual(len(nodeset), 1) nodeset.difference_update("yellow120") self.assertEqual(len(nodeset), 0) nodeset = NodeSet("yellow") self.assertEqual(len(nodeset), 1) nodeset.difference_update("yellow") self.assertEqual(len(nodeset), 0) nodeset = NodeSet("yellow") self.assertEqual(len(nodeset), 1) nodeset.difference_update("blue") self.assertEqual(len(nodeset), 1) self.assertEqual(str(nodeset), "yellow") nodeset = NodeSet("yellow[45-240,570-764,800]") self.assertEqual(len(nodeset), 392) nodeset.difference_update("yellow[45-240,570-764,800]") self.assertEqual(len(nodeset), 0) def testSubSelf(self): """test NodeSet.difference_update() method (self)""" nodeset = NodeSet("yellow[120-148,167]") nodeset.difference_update(nodeset) self.assertEqual(len(nodeset), 0) def testSubMore(self): """test NodeSet.difference_update() method (more)""" nodeset = NodeSet("yellow[120-160]") self.assertEqual(len(nodeset), 41) for i in range(120, 161): 
nodeset.difference_update(NodeSet("yellow%d" % i)) self.assertEqual(len(nodeset), 0) def testSubsAndAdds(self): """test NodeSet.update() and difference_update() together""" nodeset = NodeSet("yellow[120-160]") self.assertEqual(len(nodeset), 41) for i in range(120, 131): nodeset.difference_update(NodeSet("yellow%d" % i)) self.assertEqual(len(nodeset), 30) for i in range(1940, 2040): nodeset.update(NodeSet("yellow%d" % i)) self.assertEqual(len(nodeset), 130) def testSubsAndAddsMore(self): """test NodeSet.update() and difference_update() together (more)""" nodeset = NodeSet("yellow[120-160]") self.assertEqual(len(nodeset), 41) for i in range(120, 131): nodeset.difference_update(NodeSet("yellow%d" % i)) nodeset.update(NodeSet("yellow%d" % (i + 1000))) self.assertEqual(len(nodeset), 41) for i in range(1120, 1131): nodeset.difference_update(NodeSet("yellow%d" % i)) nodeset.difference_update(NodeSet("yellow[131-160]")) self.assertEqual(len(nodeset), 0) def testSubsAndAddsMoreDigit(self): """ test NodeSet.update() and difference_update() together (with other digit in prefix) """ nodeset = NodeSet("clu-3-[120-160]") self.assertEqual(len(nodeset), 41) for i in range(120, 131): nodeset.difference_update(NodeSet("clu-3-[%d]" % i)) nodeset.update(NodeSet("clu-3-[%d]" % (i + 1000))) self.assertEqual(len(nodeset), 41) for i in range(1120, 1131): nodeset.difference_update(NodeSet("clu-3-[%d]" % i)) nodeset.difference_update(NodeSet("clu-3-[131-160]")) self.assertEqual(len(nodeset), 0) def testSubUnknownNodes(self): """test NodeSet.difference_update() with unknown nodes""" nodeset = NodeSet("yellow[120-160]") self.assertEqual(len(nodeset), 41) nodeset.difference_update("red[35-49]") self.assertEqual(len(nodeset), 41) self.assertEqual(str(nodeset), "yellow[120-160]") def testSubMultiplePrefix(self): """test NodeSet.difference_update() with multiple prefixes""" nodeset = NodeSet("yellow[120-160],red[32-147],blue3,green," "white[2-3940],blue4,blue303") self.assertEqual(len(nodeset), 4100) for i in range(120, 131): nodeset.difference_update(NodeSet("red%d" % i)) nodeset.update(NodeSet("red%d" % (i + 1000))) nodeset.update(NodeSet("yellow%d" % (i + 1000))) self.assertEqual(len(nodeset), 4111) for i in range(1120, 1131): nodeset.difference_update(NodeSet("red%d" % i)) nodeset.difference_update(NodeSet("white%d" %i)) nodeset.difference_update(NodeSet("yellow[131-160]")) self.assertEqual(len(nodeset), 4059) nodeset.difference_update(NodeSet("green")) self.assertEqual(len(nodeset), 4058) def test_getitem(self): """test NodeSet.__getitem__()""" nodeset = NodeSet("yeti[30,34-51,59-60]") self.assertEqual(len(nodeset), 21) self.assertEqual(nodeset[0], "yeti30") self.assertEqual(nodeset[1], "yeti34") self.assertEqual(nodeset[2], "yeti35") self.assertEqual(nodeset[3], "yeti36") self.assertEqual(nodeset[18], "yeti51") self.assertEqual(nodeset[19], "yeti59") self.assertEqual(nodeset[20], "yeti60") self.assertRaises(IndexError, nodeset.__getitem__, 21) # negative indices self.assertEqual(nodeset[-1], "yeti60") for n in range(1, len(nodeset)): self.assertEqual(nodeset[-n], nodeset[len(nodeset)-n]) self.assertRaises(IndexError, nodeset.__getitem__, -100) # test getitem with some nodes without range nodeset = NodeSet("abc,cde[3-9,11],fgh") self.assertEqual(len(nodeset), 10) self.assertEqual(nodeset[0], "abc") self.assertEqual(nodeset[1], "cde3") self.assertEqual(nodeset[2], "cde4") self.assertEqual(nodeset[3], "cde5") self.assertEqual(nodeset[7], "cde9") self.assertEqual(nodeset[8], "cde11") self.assertEqual(nodeset[9], "fgh") 
self.assertRaises(IndexError, nodeset.__getitem__, 10) # test getitem with rangeset padding nodeset = NodeSet("prune[003-034,349-353/2]") self.assertEqual(len(nodeset), 35) self.assertEqual(nodeset[0], "prune003") self.assertEqual(nodeset[1], "prune004") self.assertEqual(nodeset[31], "prune034") self.assertEqual(nodeset[32], "prune349") self.assertEqual(nodeset[33], "prune351") self.assertEqual(nodeset[34], "prune353") self.assertRaises(IndexError, nodeset.__getitem__, 35) def test_getslice(self): """test NodeSet getitem() with slice""" nodeset = NodeSet("yeti[30,34-51,59-60]") self.assertEqual(len(nodeset), 21) self.assertEqual(len(nodeset[0:2]), 2) self.assertEqual(str(nodeset[0:2]), "yeti[30,34]") self.assertEqual(len(nodeset[1:3]), 2) self.assertEqual(str(nodeset[1:3]), "yeti[34-35]") self.assertEqual(len(nodeset[19:21]), 2) self.assertEqual(str(nodeset[19:21]), "yeti[59-60]") self.assertEqual(len(nodeset[20:22]), 1) self.assertEqual(str(nodeset[20:22]), "yeti60") self.assertEqual(len(nodeset[21:24]), 0) self.assertEqual(str(nodeset[21:24]), "") # negative indices self.assertEqual(str(nodeset[:-1]), "yeti[30,34-51,59]") self.assertEqual(str(nodeset[:-2]), "yeti[30,34-51]") self.assertEqual(str(nodeset[1:-2]), "yeti[34-51]") self.assertEqual(str(nodeset[2:-2]), "yeti[35-51]") self.assertEqual(str(nodeset[9:-3]), "yeti[42-50]") self.assertEqual(str(nodeset[10:-9]), "yeti[43-44]") self.assertEqual(str(nodeset[10:-10]), "yeti43") self.assertEqual(str(nodeset[11:-10]), "") self.assertEqual(str(nodeset[11:-11]), "") self.assertEqual(str(nodeset[::-2]), "yeti[30,35,37,39,41,43,45,47,49,51,60]") self.assertEqual(str(nodeset[::-3]), "yeti[35,38,41,44,47,50,60]") # advanced self.assertEqual(str(nodeset[0:10:2]), "yeti[30,35,37,39,41]") self.assertEqual(str(nodeset[1:11:2]), "yeti[34,36,38,40,42]") self.assertEqual(str(nodeset[:11:3]), "yeti[30,36,39,42]") self.assertEqual(str(nodeset[11::4]), "yeti[44,48,59]") self.assertEqual(str(nodeset[14:]), "yeti[47-51,59-60]") self.assertEqual(str(nodeset[:]), "yeti[30,34-51,59-60]") self.assertEqual(str(nodeset[::5]), "yeti[30,38,43,48,60]") # with unindexed nodes nodeset = NodeSet("foo,bar,bur") self.assertEqual(len(nodeset), 3) self.assertEqual(len(nodeset[0:2]), 2) self.assertEqual(str(nodeset[0:2]), "bar,bur") self.assertEqual(str(nodeset[1:2]), "bur") self.assertEqual(str(nodeset[1:3]), "bur,foo") self.assertEqual(str(nodeset[2:4]), "foo") nodeset = NodeSet("foo,bar,bur3,bur1") self.assertEqual(len(nodeset), 4) self.assertEqual(len(nodeset[0:2]), 2) self.assertEqual(len(nodeset[1:3]), 2) self.assertEqual(len(nodeset[2:4]), 2) self.assertEqual(len(nodeset[3:5]), 1) self.assertEqual(str(nodeset[2:3]), "bur3") self.assertEqual(str(nodeset[3:4]), "foo") self.assertEqual(str(nodeset[0:2]), "bar,bur1") self.assertEqual(str(nodeset[1:3]), "bur[1,3]") # using range step nodeset = NodeSet("yeti[10-98/2]") self.assertEqual(str(nodeset[1:9:3]), "yeti[12,18,24]") self.assertEqual(str(nodeset[::17]), "yeti[10,44,78]") nodeset = NodeSet("yeti[10-98/2]", autostep=2) self.assertEqual(str(nodeset[22:29]), "yeti[54-66/2]") self.assertEqual(nodeset._autostep, 2) # stepping scalability nodeset = NodeSet("yeti[10-9800/2]", autostep=2) self.assertEqual(str(nodeset[22:2900]), "yeti[54-5808/2]") self.assertEqual(str(nodeset[22:2900:3]), "yeti[54-5808/6]") nodeset = NodeSet("yeti[10-14,20-26,30-33]") self.assertEqual(str(nodeset[2:6]), "yeti[12-14,20]") # multiple patterns nodeset = NodeSet("stone[1-9],wood[1-9]") self.assertEqual(str(nodeset[:]), "stone[1-9],wood[1-9]") 
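        # Added note (illustrative): slicing also spans multiple patterns in
        # sorted order, so a slice may start in "stone[...]" and end in
        # "wood[...]", e.g. nodeset[8:10] -> "stone9,wood1" just below.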
self.assertEqual(str(nodeset[1:2]), "stone2") self.assertEqual(str(nodeset[8:9]), "stone9") self.assertEqual(str(nodeset[8:10]), "stone9,wood1") self.assertEqual(str(nodeset[9:10]), "wood1") self.assertEqual(str(nodeset[9:]), "wood[1-9]") nodeset = NodeSet("stone[1-9],water[10-12],wood[1-9]") self.assertEqual(str(nodeset[8:10]), "stone9,water10") self.assertEqual(str(nodeset[11:15]), "water12,wood[1-3]") nodeset = NodeSet("stone[1-9],water,wood[1-9]") self.assertEqual(str(nodeset[8:10]), "stone9,water") self.assertEqual(str(nodeset[8:11]), "stone9,water,wood1") self.assertEqual(str(nodeset[9:11]), "water,wood1") self.assertEqual(str(nodeset[9:12]), "water,wood[1-2]") def test_bad_slices(self): nodeset = NodeSet("cluster[1-30]c[1-2]") self.assertRaises(TypeError, nodeset.__getitem__, "zz") self.assertRaises(TypeError, nodeset.__getitem__, slice(1, 'foo')) def testSplit(self): """test NodeSet split()""" # Empty nodeset nodeset = NodeSet() self.assertEqual((), tuple(nodeset.split(2))) # Not enough element nodeset = NodeSet("foo[1]") self.assertEqual((NodeSet("foo[1]"),), tuple(nodeset.split(2))) # Exact number of elements nodeset = NodeSet("foo[1-6]") self.assertEqual((NodeSet("foo[1-2]"), NodeSet("foo[3-4]"), NodeSet("foo[5-6]")), tuple(nodeset.split(3))) # Check limit results nodeset = NodeSet("bar[2-4]") for i in (3, 4): self.assertEqual((NodeSet("bar2"), NodeSet("bar3"), NodeSet("bar4")), tuple(nodeset.split(i))) def testAdd(self): """test NodeSet add()""" nodeset = NodeSet() nodeset.add("green") self.assertEqual(len(nodeset), 1) self.assertEqual(str(nodeset), "green") self.assertEqual(nodeset[0], "green") nodeset = NodeSet() nodeset.add("green35") self.assertEqual(len(nodeset), 1) self.assertEqual(str(nodeset), "green35") self.assertEqual(nodeset[0], "green35") nodeset = NodeSet() nodeset.add("green[3,5-46]") self.assertEqual(len(nodeset), 43) self.assertEqual(nodeset[0], "green3") nodeset = NodeSet() nodeset.add("green[3,5-46],black64,orange[045-148]") self.assertEqual(len(nodeset), 148) self.assertTrue("green5" in nodeset) self.assertTrue("black64" in nodeset) self.assertTrue("orange046" in nodeset) def testAddAdjust(self): """test NodeSet adjusting add()""" # autostep OFF nodeset = NodeSet() nodeset.add("green[1-8/2]") self.assertEqual(str(nodeset), "green[1,3,5,7]") self.assertEqual(len(nodeset), 4) nodeset.add("green[6-17/2]") self.assertEqual(str(nodeset), "green[1,3,5-8,10,12,14,16]") self.assertEqual(len(nodeset), 10) # autostep ON nodeset = NodeSet(autostep=2) nodeset.add("green[1-8/2]") self.assertEqual(str(nodeset), "green[1-7/2]") self.assertEqual(len(nodeset), 4) nodeset.add("green[6-17/2]") #self.assertEqual(str(nodeset), "green[1-5/2,6-7,8-16/2]") # <1.9 self.assertEqual(str(nodeset), "green[1-5/2,6-8,10-16/2]") # 1.9+ self.assertEqual(len(nodeset), 10) def testRemove(self): """test NodeSet remove()""" # from empty nodeset nodeset = NodeSet() self.assertEqual(len(nodeset), 0) self.assertRaises(KeyError, nodeset.remove, "tintin23") self.assertRaises(KeyError, nodeset.remove, "tintin[35-36]") nodeset.update("milou36") self.assertEqual(len(nodeset), 1) self.assertRaises(KeyError, nodeset.remove, "tintin23") self.assertTrue("milou36" in nodeset) nodeset.remove("milou36") self.assertEqual(len(nodeset), 0) nodeset.update("milou[36-60,76,95],haddock[1-12],tournesol") self.assertEqual(len(nodeset), 40) nodeset.remove("milou76") self.assertEqual(len(nodeset), 39) nodeset.remove("milou[36-39]") self.assertEqual(len(nodeset), 35) self.assertRaises(KeyError, nodeset.remove, 
"haddock13") self.assertEqual(len(nodeset), 35) self.assertRaises(KeyError, nodeset.remove, "haddock[1-15]") self.assertEqual(len(nodeset), 35) self.assertRaises(KeyError, nodeset.remove, "tutu") self.assertEqual(len(nodeset), 35) nodeset.remove("tournesol") self.assertEqual(len(nodeset), 34) nodeset.remove("haddock[1-12]") self.assertEqual(len(nodeset), 22) nodeset.remove("milou[40-60,95]") self.assertEqual(len(nodeset), 0) self.assertRaises(KeyError, nodeset.remove, "tournesol") self.assertRaises(KeyError, nodeset.remove, "milou40") # from non-empty nodeset nodeset = NodeSet("haddock[16-3045]") self.assertEqual(len(nodeset), 3030) self.assertRaises(KeyError, nodeset.remove, "haddock15") self.assertTrue("haddock16" in nodeset) self.assertEqual(len(nodeset), 3030) nodeset.remove("haddock[16,18-3044]") self.assertEqual(len(nodeset), 2) self.assertRaises(KeyError, nodeset.remove, "haddock3046") self.assertRaises(KeyError, nodeset.remove, "haddock[16,3060]") self.assertRaises(KeyError, nodeset.remove, "haddock[3045-3046]") self.assertRaises(KeyError, nodeset.remove, "haddock[3045,3049-3051/2]") nodeset.remove("haddock3045") self.assertEqual(len(nodeset), 1) self.assertRaises(KeyError, nodeset.remove, "haddock[3045]") self.assertEqual(len(nodeset), 1) nodeset.remove("haddock17") self.assertEqual(len(nodeset), 0) def testClear(self): """test NodeSet clear()""" nodeset = NodeSet("purple[35-39]") self.assertEqual(len(nodeset), 5) nodeset.clear() self.assertEqual(len(nodeset), 0) def test_contains(self): """test NodeSet contains()""" nodeset = NodeSet() self.assertEqual(len(nodeset), 0) self.assertTrue("foo" not in nodeset) nodeset.update("bar") self.assertEqual(len(nodeset), 1) self.assertEqual(str(nodeset), "bar") self.assertTrue("bar" in nodeset) nodeset.update("foo[20-40]") self.assertTrue("foo" not in nodeset) self.assertTrue("foo39" in nodeset) for node in nodeset: self.assertTrue(node in nodeset) nodeset.update("dark[2000-4000/4]") self.assertTrue("dark3000" in nodeset) self.assertTrue("dark3002" not in nodeset) for node in nodeset: self.assertTrue(node in nodeset) nodeset = NodeSet("scale[0-10000]") self.assertTrue("black64" not in nodeset) self.assertTrue("scale9346" in nodeset) nodeset = NodeSet("scale[0-10000]", autostep=2) self.assertTrue("scale9346" in nodeset[::2]) self.assertTrue("scale9347" not in nodeset[::2]) # nD nodeset = NodeSet("scale[0-1000]p[1,3]") self.assertTrue("black300p2" not in nodeset) self.assertTrue("scale333p3" in nodeset) self.assertTrue("scale333p1" in nodeset) nodeset = NodeSet("scale[0-1000]p[1,3]", autostep=2) self.assertEqual(str(nodeset), "scale[0-1000]p[1-3/2]") nhalf = nodeset[::2] self.assertEqual(str(nhalf), "scale[0-1000]p1") self.assertTrue("scale242p1" in nhalf) self.assertTrue("scale346p1" in nhalf) def testContainsUsingPadding(self): """test NodeSet contains() when using padding""" nodeset = NodeSet("white[001,030]") nodeset.add("white113") self.assertFalse(NodeSet("white30") in nodeset) self.assertTrue(NodeSet("white030") in nodeset) # case: nodeset without padding info is compared to a # padding-initialized range self.assertTrue(NodeSet("white113") in nodeset) self.assertTrue(NodeSet("white[001,113]") in nodeset) self.assertTrue(NodeSet("gene113") in NodeSet("gene[001,030,113]")) self.assertFalse(NodeSet("gene0113") in NodeSet("gene[001,030,113]")) self.assertTrue(NodeSet("gene0113") in NodeSet("gene[0001,0030,0113]")) self.assertTrue(NodeSet("gene113") in NodeSet("gene[098-113]")) self.assertFalse(NodeSet("gene0113") in NodeSet("gene[098-113]")) 
self.assertTrue(NodeSet("gene0113") in NodeSet("gene[0098-0113]")) # case: len(str(ielem)) >= rgpad nodeset = NodeSet("white[001,099]") nodeset.add("white100") nodeset.add("white1000") self.assertTrue(NodeSet("white100") in nodeset) self.assertFalse(NodeSet("white0100") in nodeset) self.assertTrue(NodeSet("white1000") in nodeset) def test_issuperset(self): """test NodeSet issuperset()""" nodeset = NodeSet("tronic[0036-1630]") self.assertEqual(len(nodeset), 1595) self.assertTrue(nodeset.issuperset("tronic[0036-1630]")) self.assertTrue(nodeset.issuperset("tronic[0140-0200]")) self.assertTrue(nodeset.issuperset(NodeSet("tronic[0140-0200]"))) self.assertTrue(nodeset.issuperset("tronic0070")) self.assertFalse(nodeset.issuperset("tronic0034")) # check padding issue - fixed since 1.9 self.assertFalse(nodeset.issuperset("tronic36")) # used to be true < 1.9 self.assertFalse(nodeset.issuperset("tronic[36-40]")) # same self.assertFalse(nodeset.issuperset(NodeSet("tronic[36-40]"))) # same self.assertTrue(nodeset.issuperset(NodeSet("tronic[0036-0040]"))) # check gt self.assertTrue(nodeset > NodeSet("tronic[0100-0200]")) self.assertFalse(nodeset > NodeSet("tronic[0036-1630]")) self.assertFalse(nodeset > NodeSet("tronic[0036-1631]")) self.assertTrue(nodeset >= NodeSet("tronic[0100-0200]")) self.assertTrue(nodeset >= NodeSet("tronic[0036-1630]")) self.assertFalse(nodeset >= NodeSet("tronic[0036-1631]")) # multiple patterns case nodeset = NodeSet("tronic[0036-1630],lounge[20-660/2]") self.assertTrue(nodeset > NodeSet("tronic[0100-0200]")) self.assertTrue(nodeset > NodeSet("lounge[36-400/2]")) self.assertTrue(nodeset.issuperset(NodeSet("lounge[36-400/2]," "tronic[0100-0660]"))) self.assertTrue(nodeset > NodeSet("lounge[36-400/2],tronic[0100-0660]")) self._assertNS("lounge[36-400/2],tronic[0100-660]", NodeSetParseRangeError) def test_issubset(self): """test NodeSet issubset()""" nodeset = NodeSet("artcore[3-999]") self.assertEqual(len(nodeset), 997) self.assertTrue(nodeset.issubset("artcore[3-999]")) self.assertTrue(nodeset.issubset("artcore[1-1000]")) self.assertFalse(nodeset.issubset("artcore[350-427]")) # check lt self.assertTrue(nodeset < NodeSet("artcore[2-32000]")) self.assertTrue(nodeset < NodeSet("artcore[2-32000],lounge[35-65/2]")) self.assertFalse(nodeset < NodeSet("artcore[3-999]")) self.assertFalse(nodeset < NodeSet("artcore[3-980]")) self.assertFalse(nodeset < NodeSet("artcore[2-998]")) self.assertTrue(nodeset <= NodeSet("artcore[2-32000]")) self.assertTrue(nodeset <= NodeSet("artcore[2-32000],lounge[35-65/2]")) self.assertTrue(nodeset <= NodeSet("artcore[3-999]")) self.assertFalse(nodeset <= NodeSet("artcore[3-980]")) self.assertFalse(nodeset <= NodeSet("artcore[2-998]")) self.assertEqual(len(nodeset), 997) # check padding issues - fixed since 1.9 self.assertFalse(nodeset.issubset("artcore[0001-1000]")) # was true < 1.9 self.assertFalse(nodeset.issubset("artcore30")) self.assertFalse(nodeset.issubset("artcore030")) self.assertFalse(nodeset.issubset("artcore0030")) # multiple patterns case nodeset = NodeSet("tronic[0036-1630],lounge[20-660/2]") self.assertTrue(nodeset < NodeSet("tronic[0036-1630],lounge[20-662/2]")) self.assertTrue(nodeset < NodeSet("tronic[0035-1630],lounge[20-660/2]")) self.assertFalse(nodeset < NodeSet("tronic[0035-1630],lounge[22-660/2]")) self.assertTrue(nodeset < NodeSet("tronic[0036-1630],lounge[20-660/2],artcore[034-070]")) self.assertTrue(nodeset < NodeSet("tronic[0032-1880],lounge[2-700/2],artcore[039-040]")) 
self.assertTrue(nodeset.issubset("tronic[0032-1880],lounge[2-700/2],artcore[039-040]")) self.assertTrue(nodeset.issubset(NodeSet("tronic[0032-1880],lounge[2-700/2],artcore[039-040]"))) def testSymmetricDifference(self): """test NodeSet symmetric_difference()""" nsstr = "red[34-55,76-249,300-403],blue,green" nodeset = NodeSet(nsstr) self.assertEqual(len(nodeset), 302) nsstr2 = "red[32-57,72-249,300-341],blue,yellow" nodeset2 = NodeSet(nsstr2) self.assertEqual(len(nodeset2), 248) inodeset = nodeset.symmetric_difference(nodeset2) # originals should not change self.assertEqual(len(nodeset), 302) self.assertEqual(len(nodeset2), 248) self.assertEqual(str(nodeset), "blue,green,red[34-55,76-249,300-403]") self.assertEqual(str(nodeset2), "blue,red[32-57,72-249,300-341],yellow") # result self.assertEqual(len(inodeset), 72) self.assertEqual(str(inodeset), "green,red[32-33,56-57,72-75,342-403],yellow") def testSymmetricDifferenceUpdate(self): """test NodeSet symmetric_difference_update()""" nodeset = NodeSet("artcore[3-999]") self.assertEqual(len(nodeset), 997) nodeset.symmetric_difference_update("artcore[1-2000]") self.assertEqual(len(nodeset), 1003) self.assertEqual(str(nodeset), "artcore[1-2,1000-2000]") nodeset = NodeSet("artcore[3-999],lounge") self.assertEqual(len(nodeset), 998) nodeset.symmetric_difference_update("artcore[1-2000]") self.assertEqual(len(nodeset), 1004) self.assertEqual(str(nodeset), "artcore[1-2,1000-2000],lounge") nodeset = NodeSet("artcore[3-999],lounge") self.assertEqual(len(nodeset), 998) nodeset.symmetric_difference_update("artcore[1-2000],lounge") self.assertEqual(len(nodeset), 1003) self.assertEqual(str(nodeset), "artcore[1-2,1000-2000]") nodeset = NodeSet("artcore[3-999],lounge") self.assertEqual(len(nodeset), 998) nodeset2 = NodeSet("artcore[1-2000],lounge") nodeset.symmetric_difference_update(nodeset2) self.assertEqual(len(nodeset), 1003) self.assertEqual(str(nodeset), "artcore[1-2,1000-2000]") self.assertEqual(len(nodeset2), 2001) # check const argument nodeset.symmetric_difference_update("artcore[1-2000],lounge") self.assertEqual(len(nodeset), 998) self.assertEqual(str(nodeset), "artcore[3-999],lounge") def testOperatorSymmetricDifference(self): """test NodeSet symmetric_difference() and ^ operator""" nodeset = NodeSet("artcore[3-999]") self.assertEqual(len(nodeset), 997) result = nodeset.symmetric_difference("artcore[1-2000]") self.assertEqual(len(result), 1003) self.assertEqual(str(result), "artcore[1-2,1000-2000]") self.assertEqual(len(nodeset), 997) # test ^ operator nodeset = NodeSet("artcore[3-999]") self.assertEqual(len(nodeset), 997) nodeset2 = NodeSet("artcore[1-2000]") result = nodeset ^ nodeset2 self.assertEqual(len(result), 1003) self.assertEqual(str(result), "artcore[1-2,1000-2000]") self.assertEqual(len(nodeset), 997) self.assertEqual(len(nodeset2), 2000) # check that n ^ n returns empty NodeSet nodeset = NodeSet("lounge[3-999]") self.assertEqual(len(nodeset), 997) result = nodeset ^ nodeset self.assertEqual(len(result), 0) def testBinarySanityCheck(self): """test NodeSet binary sanity check""" ns1 = NodeSet("1-5") ns2 = "4-6" self.assertRaises(TypeError, ns1.__gt__, ns2) self.assertRaises(TypeError, ns1.__lt__, ns2) def testBinarySanityCheckNotImplementedSubtle(self): """test NodeSet binary sanity check (NotImplemented subtle)""" ns1 = NodeSet("1-5") ns2 = "4-6" self.assertEqual(ns1.__and__(ns2), NotImplemented) self.assertEqual(ns1.__or__(ns2), NotImplemented) self.assertEqual(ns1.__sub__(ns2), NotImplemented) self.assertEqual(ns1.__xor__(ns2), 
NotImplemented) # Should implicitly raises TypeError if the real operator # version is invoked. To test that, we perform a manual check # as an additional function would be needed to check with # assertRaises(): good_error = False try: ns3 = ns1 & ns2 except TypeError: good_error = True self.assertTrue(good_error, "TypeError not raised for &") good_error = False try: ns3 = ns1 | ns2 except TypeError: good_error = True self.assertTrue(good_error, "TypeError not raised for |") good_error = False try: ns3 = ns1 - ns2 except TypeError: good_error = True self.assertTrue(good_error, "TypeError not raised for -") good_error = False try: ns3 = ns1 ^ ns2 except TypeError: good_error = True self.assertTrue(good_error, "TypeError not raised for ^") def testIsSubSetError(self): """test NodeSet issubset type error""" ns1 = NodeSet("1-5") ns2 = 4 self.assertRaises(TypeError, ns1.issubset, ns2) def testExpandFunction(self): """test NodeSet expand() utility function""" self.assertEqual(expand("purple[1-3]"), ["purple1", "purple2", "purple3"]) def testFoldFunction(self): """test NodeSet fold() utility function""" self.assertEqual(fold("purple1,purple2,purple3"), "purple[1-3]") def testEquality(self): """test NodeSet equality""" ns0_1 = NodeSet() ns0_2 = NodeSet() self.assertEqual(ns0_1, ns0_2) ns1 = NodeSet("roma[50-99]-ipmi,cors[113,115-117,130,166-172]," "cws-tigrou,tigrou3") ns2 = NodeSet("roma[50-99]-ipmi,cors[113,115-117,130,166-172]," "cws-tigrou,tigrou3") self.assertEqual(ns1, ns2) ns3 = NodeSet("cws-tigrou,tigrou3,cors[113,115-117,166-172]," "roma[50-99]-ipmi,cors130") self.assertEqual(ns1, ns3) ns4 = NodeSet("roma[50-99]-ipmi,cors[113,115-117,130,166-171]," "cws-tigrou,tigrou[3-4]") self.assertNotEqual(ns1, ns4) def testIterOrder(self): """test NodeSet nodes name order in iter and str""" ns_b = NodeSet("bcluster25") ns_c = NodeSet("ccluster12") ns_a1 = NodeSet("acluster4") ns_a2 = NodeSet("acluster39") ns_a3 = NodeSet("acluster41") ns = ns_c | ns_a1 | ns_b | ns_a2 | ns_a3 self.assertEqual(str(ns), "acluster[4,39,41],bcluster25,ccluster12") nodelist = list(iter(ns)) self.assertEqual(nodelist, ['acluster4', 'acluster39', 'acluster41', 'bcluster25', 'ccluster12']) def test_nsiter(self): """test NodeSet.nsiter() iterator""" ns1 = NodeSet("roma[50-61]-ipmi,cors[113,115-117,130,166-169]," "cws-tigrou,tigrou3") self.assertEqual(list(ns1), ['cors113', 'cors115', 'cors116', 'cors117', 'cors130', 'cors166', 'cors167', 'cors168', 'cors169', 'cws-tigrou', 'roma50-ipmi', 'roma51-ipmi', 'roma52-ipmi', 'roma53-ipmi', 'roma54-ipmi', 'roma55-ipmi', 'roma56-ipmi', 'roma57-ipmi', 'roma58-ipmi', 'roma59-ipmi', 'roma60-ipmi', 'roma61-ipmi', 'tigrou3']) self.assertEqual(list(ns1), [str(ns) for ns in ns1.nsiter()]) # Ticket #286 - broken nsiter() in 1.7 with nD + 0-padding ns1 = NodeSet("n0c01") self.assertEqual(list(ns1), ['n0c01']) self.assertEqual(list(ns1), [str(ns) for ns in ns1.nsiter()]) ns1 = NodeSet("n0c01,n1c01") self.assertEqual(list(ns1), ['n0c01', 'n1c01']) self.assertEqual(list(ns1), [str(ns) for ns in ns1.nsiter()]) def test_contiguous(self): """test NodeSet.contiguous() iterator""" ns1 = NodeSet("cors,roma[50-61]-ipmi,cors[113,115-117,130,166-169]," "cws-tigrou,tigrou3") self.assertEqual(['cors', 'cors113', 'cors[115-117]', 'cors130', 'cors[166-169]', 'cws-tigrou', 'roma[50-61]-ipmi', 'tigrou3'], [str(ns) for ns in ns1.contiguous()]) # check if NodeSet instances returned by contiguous() iterator are not # the same testlist = list(ns1.contiguous()) for i in range(len(testlist)): for j in range(i + 1, 
len(testlist)): self.assertNotEqual(testlist[i], testlist[j]) self.assertNotEqual(id(testlist[i]), id(testlist[j])) def testEqualityMore(self): """test NodeSet equality (more)""" self.assertEqual(NodeSet(), NodeSet()) ns1 = NodeSet("nodealone") ns2 = NodeSet("nodealone") self.assertEqual(ns1, ns2) ns1 = NodeSet("clu3,clu[4-9],clu11") ns2 = NodeSet("clu[3-9,11]") self.assertEqual(ns1, ns2) if ns1 == None: self.fail("ns1 == None succeeded") if ns1 != None: pass else: self.fail("ns1 != None failed") def testNodeSetNone(self): """test NodeSet methods behavior with None argument""" nodeset = NodeSet(None) self.assertEqual(len(nodeset), 0) self.assertEqual(list(nodeset), []) nodeset.update(None) self.assertEqual(list(nodeset), []) nodeset.intersection_update(None) self.assertEqual(list(nodeset), []) nodeset.difference_update(None) self.assertEqual(list(nodeset), []) nodeset.symmetric_difference_update(None) self.assertEqual(list(nodeset), []) n = nodeset.union(None) self.assertEqual(list(n), []) self.assertEqual(len(n), 0) n = nodeset.intersection(None) self.assertEqual(list(n), []) n = nodeset.difference(None) self.assertEqual(list(n), []) n = nodeset.symmetric_difference(None) self.assertEqual(list(n), []) nodeset = NodeSet("abc[3,6-89],def[3-98,104,128-133]") self.assertEqual(len(nodeset), 188) nodeset.update(None) self.assertEqual(len(nodeset), 188) nodeset.intersection_update(None) self.assertEqual(len(nodeset), 0) self.assertEqual(list(nodeset), []) nodeset = NodeSet("abc[3,6-89],def[3-98,104,128-133]") self.assertEqual(len(nodeset), 188) nodeset.difference_update(None) self.assertEqual(len(nodeset), 188) nodeset.symmetric_difference_update(None) self.assertEqual(len(nodeset), 188) n = nodeset.union(None) self.assertEqual(len(nodeset), 188) n = nodeset.intersection(None) self.assertEqual(list(n), []) self.assertEqual(len(n), 0) n = nodeset.difference(None) self.assertEqual(len(n), 188) n = nodeset.symmetric_difference(None) self.assertEqual(len(n), 188) self.assertFalse(n.issubset(None)) self.assertTrue(n.issuperset(None)) n = NodeSet(None) n.clear() self.assertEqual(len(n), 0) def testCopy(self): """test NodeSet.copy()""" nodeset = NodeSet("zclu[115-117,130,166-170],glycine[68,4780-4999]") self.assertEqual(str(nodeset), "glycine[68,4780-4999],zclu[115-117,130,166-170]") nodeset2 = nodeset.copy() nodeset3 = nodeset.copy() self.assertEqual(nodeset, nodeset2) # content equality self.assertTrue(isinstance(nodeset, NodeSet)) self.assertTrue(isinstance(nodeset2, NodeSet)) self.assertTrue(isinstance(nodeset3, NodeSet)) nodeset2.remove("glycine68") self.assertEqual(len(nodeset), len(nodeset2) + 1) self.assertNotEqual(nodeset, nodeset2) self.assertEqual(str(nodeset2), "glycine[4780-4999],zclu[115-117,130,166-170]") self.assertEqual(str(nodeset), "glycine[68,4780-4999],zclu[115-117,130,166-170]") nodeset2.add("glycine68") self.assertEqual(str(nodeset2), "glycine[68,4780-4999],zclu[115-117,130,166-170]") self.assertEqual(nodeset, nodeset3) nodeset3.update(NodeSet("zclu118")) self.assertNotEqual(nodeset, nodeset3) self.assertEqual(len(nodeset) + 1, len(nodeset3)) self.assertEqual(str(nodeset), "glycine[68,4780-4999],zclu[115-117,130,166-170]") self.assertEqual(str(nodeset3), "glycine[68,4780-4999],zclu[115-118,130,166-170]") # test copy with single nodes nodeset = NodeSet("zclu[115-117,130,166-170],foo,bar," "glycine[68,4780-4999]") nodeset2 = nodeset.copy() self.assertEqual(nodeset, nodeset2) # content equality # same with NodeSetBase nodeset = NodeSetBase("foobar", None) nodeset2 = nodeset.copy() 
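        # Added note (illustrative): copy() produces an independent NodeSet;
        # as asserted earlier in this test, mutating the copy (remove/add)
        # never affects the original instance.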
self.assertEqual(nodeset, nodeset2) # content equality # unpickling tests; generate data with: # ns = NodeSet("bar[050-150,502-599],foo[1,4-50,80-100]") # print(binascii.b2a_base64(pickle.dumps(ns))) def test_unpickle_v1_3_py24(self): """test NodeSet unpickling (against v1.3/py24)""" nodeset = pickle.loads(binascii.a2b_base64("gAJjQ2x1c3RlclNoZWxsLk5vZGVTZXQKTm9kZVNldApxACmBcQF9cQIoVQdfbGVuZ3RocQNLAFUJX3BhdHRlcm5zcQR9cQUoVQh5ZWxsb3clc3EGKGNDbHVzdGVyU2hlbGwuTm9kZVNldApSYW5nZVNldApxB29xCH1xCShoA0sBVQlfYXV0b3N0ZXBxCkdUskmtJZTDfVUHX3Jhbmdlc3ELXXEMKEsESwRLAUsAdHENYXViVQZibHVlJXNxDihoB29xD31xEChoA0sIaApHVLJJrSWUw31oC11xESgoSwZLCksBSwB0cRIoSw1LDUsBSwB0cRMoSw9LD0sBSwB0cRQoSxFLEUsBSwB0cRVldWJVB2dyZWVuJXNxFihoB29xF31xGChoA0tlaApHVLJJrSWUw31oC11xGShLAEtkSwFLAHRxGmF1YlUDcmVkcRtOdWgKTnViLg==")) self.assertEqual(nodeset, NodeSet("blue[6-10,13,15,17],green[0-100],red,yellow4")) self.assertEqual(str(nodeset), "blue[6-10,13,15,17],green[0-100],red,yellow4") self.assertEqual(len(nodeset), 111) self.assertEqual(nodeset[0], "blue6") self.assertEqual(nodeset[1], "blue7") self.assertEqual(nodeset[-1], "yellow4") # unpickle_v1_4_py24 : unpickling fails as v1.4 does not have slice # pickling workaround def test_unpickle_v1_3_py26(self): """test NodeSet unpickling (against v1.3/py26)""" nodeset = pickle.loads(binascii.a2b_base64("gAJjQ2x1c3RlclNoZWxsLk5vZGVTZXQKTm9kZVNldApxACmBcQF9cQIoVQdfbGVuZ3RocQNLAFUJX3BhdHRlcm5zcQR9cQUoVQh5ZWxsb3clc3EGKGNDbHVzdGVyU2hlbGwuTm9kZVNldApSYW5nZVNldApxB29xCH1xCShoA0sBVQlfYXV0b3N0ZXBxCkdUskmtJZTDfVUHX3Jhbmdlc3ELXXEMKEsESwRLAUsAdHENYXViVQZibHVlJXNxDihoB29xD31xEChoA0sIaApHVLJJrSWUw31oC11xESgoSwZLCksBSwB0cRIoSw1LDUsBSwB0cRMoSw9LD0sBSwB0cRQoSxFLEUsBSwB0cRVldWJVB2dyZWVuJXNxFihoB29xF31xGChoA0tlaApHVLJJrSWUw31oC11xGShLAEtkSwFLAHRxGmF1YlUDcmVkcRtOdWgKTnViLg==")) self.assertEqual(nodeset, NodeSet("blue[6-10,13,15,17],green[0-100],red,yellow4")) self.assertEqual(str(nodeset), "blue[6-10,13,15,17],green[0-100],red,yellow4") self.assertEqual(len(nodeset), 111) self.assertEqual(nodeset[0], "blue6") self.assertEqual(nodeset[1], "blue7") self.assertEqual(nodeset[-1], "yellow4") # unpickle_v1_4_py24 : unpickling fails as v1.4 does not have slice # pickling workaround def test_unpickle_v1_4_py26(self): """test NodeSet unpickling (against v1.4/py26)""" nodeset = pickle.loads(binascii.a2b_base64("gAJjQ2x1c3RlclNoZWxsLk5vZGVTZXQKTm9kZVNldApxACmBcQF9cQIoVQdfbGVuZ3RocQNLAFUJX3BhdHRlcm5zcQR9cQUoVQh5ZWxsb3clc3EGKGNDbHVzdGVyU2hlbGwuTm9kZVNldApSYW5nZVNldApxB29xCH1xCihoA0sBVQlfYXV0b3N0ZXBxC0dUskmtJZTDfVUHX3Jhbmdlc3EMXXENY19fYnVpbHRpbl9fCnNsaWNlCnEOSwRLBUsBh3EPUnEQSwCGcRFhVQhfdmVyc2lvbnESSwJ1YlUGYmx1ZSVzcRMoaAdvcRR9cRUoaANLCGgLR1SySa0llMN9aAxdcRYoaA5LBksLSwGHcRdScRhLAIZxGWgOSw1LDksBh3EaUnEbSwCGcRxoDksPSxBLAYdxHVJxHksAhnEfaA5LEUsSSwGHcSBScSFLAIZxImVoEksCdWJVB2dyZWVuJXNxIyhoB29xJH1xJShoA0tlaAtHVLJJrSWUw31oDF1xJmgOSwBLZUsBh3EnUnEoSwCGcSlhaBJLAnViVQNyZWRxKk51aAtOdWIu")) self.assertEqual(nodeset, NodeSet("blue[6-10,13,15,17],green[0-100],red,yellow4")) self.assertEqual(str(nodeset), "blue[6-10,13,15,17],green[0-100],red,yellow4") self.assertEqual(len(nodeset), 111) self.assertEqual(nodeset[0], "blue6") self.assertEqual(nodeset[1], "blue7") self.assertEqual(nodeset[-1], "yellow4") def test_unpickle_v1_5_py24(self): """test NodeSet unpickling (against v1.5/py24)""" nodeset = 
pickle.loads(binascii.a2b_base64("gAJjQ2x1c3RlclNoZWxsLk5vZGVTZXQKTm9kZVNldApxACmBcQF9cQIoVQdfbGVuZ3RocQNLAFUJX3BhdHRlcm5zcQR9cQUoVQh5ZWxsb3clc3EGKGNDbHVzdGVyU2hlbGwuTm9kZVNldApSYW5nZVNldApxB29xCH1xCihoA0sBVQlfYXV0b3N0ZXBxC0dUskmtJZTDfVUHX3Jhbmdlc3EMXXENSwRLBUsBh3EOSwCGcQ9hVQhfdmVyc2lvbnEQSwJ1YlUGYmx1ZSVzcREoaAdvcRJ9cRMoaANLCGgLR1SySa0llMN9aAxdcRQoSwZLC0sBh3EVSwCGcRZLDUsOSwGHcRdLAIZxGEsPSxBLAYdxGUsAhnEaSxFLEksBh3EbSwCGcRxlaBBLAnViVQdncmVlbiVzcR0oaAdvcR59cR8oaANLZWgLR1SySa0llMN9aAxdcSBLAEtlSwGHcSFLAIZxImFoEEsCdWJVA3JlZHEjTnVoC051Yi4=")) self.assertEqual(nodeset, NodeSet("blue[6-10,13,15,17],green[0-100],red,yellow4")) self.assertEqual(str(nodeset), "blue[6-10,13,15,17],green[0-100],red,yellow4") self.assertEqual(len(nodeset), 111) self.assertEqual(nodeset[0], "blue6") self.assertEqual(nodeset[1], "blue7") self.assertEqual(nodeset[-1], "yellow4") def test_unpickle_v1_5_py26(self): """test NodeSet unpickling (against v1.5/py26)""" nodeset = pickle.loads(binascii.a2b_base64("gAJjQ2x1c3RlclNoZWxsLk5vZGVTZXQKTm9kZVNldApxACmBcQF9cQIoVQdfbGVuZ3RocQNLAFUJX3BhdHRlcm5zcQR9cQUoVQh5ZWxsb3clc3EGKGNDbHVzdGVyU2hlbGwuTm9kZVNldApSYW5nZVNldApxB29xCH1xCihoA0sBVQlfYXV0b3N0ZXBxC0dUskmtJZTDfVUHX3Jhbmdlc3EMXXENY19fYnVpbHRpbl9fCnNsaWNlCnEOSwRLBUsBh3EPUnEQSwCGcRFhVQhfdmVyc2lvbnESSwJ1YlUGYmx1ZSVzcRMoaAdvcRR9cRUoaANLCGgLR1SySa0llMN9aAxdcRYoaA5LBksLSwGHcRdScRhLAIZxGWgOSw1LDksBh3EaUnEbSwCGcRxoDksPSxBLAYdxHVJxHksAhnEfaA5LEUsSSwGHcSBScSFLAIZxImVoEksCdWJVB2dyZWVuJXNxIyhoB29xJH1xJShoA0tlaAtHVLJJrSWUw31oDF1xJmgOSwBLZUsBh3EnUnEoSwCGcSlhaBJLAnViVQNyZWRxKk51aAtOdWIu")) self.assertEqual(nodeset, NodeSet("blue[6-10,13,15,17],green[0-100],red,yellow4")) self.assertEqual(str(nodeset), "blue[6-10,13,15,17],green[0-100],red,yellow4") self.assertEqual(len(nodeset), 111) self.assertEqual(nodeset[0], "blue6") self.assertEqual(nodeset[1], "blue7") self.assertEqual(nodeset[-1], "yellow4") def test_unpickle_v1_6_py24(self): """test NodeSet unpickling (against v1.6/py24)""" nodeset = pickle.loads(binascii.a2b_base64("gAJjQ2x1c3RlclNoZWxsLk5vZGVTZXQKTm9kZVNldApxACmBcQF9cQIoVQdfbGVuZ3RocQNLAFUJX3BhdHRlcm5zcQR9cQUoVQh5ZWxsb3clc3EGY0NsdXN0ZXJTaGVsbC5SYW5nZVNldApSYW5nZVNldApxB1UBNHEIhXEJUnEKfXELKFUHcGFkZGluZ3EMTlUJX2F1dG9zdGVwcQ1HVLJJrSWUw31VCF92ZXJzaW9ucQ5LA3ViVQZibHVlJXNxD2gHVQ02LTEwLDEzLDE1LDE3cRCFcRFScRJ9cRMoaAxOaA1HVLJJrSWUw31oDksDdWJVB2dyZWVuJXNxFGgHVQUwLTEwMHEVhXEWUnEXfXEYKGgMTmgNR1SySa0llMN9aA5LA3ViVQNyZWRxGU51aA1OdWIu")) self.assertEqual(nodeset, NodeSet("blue[6-10,13,15,17],green[0-100],red,yellow4")) self.assertEqual(str(nodeset), "blue[6-10,13,15,17],green[0-100],red,yellow4") self.assertEqual(len(nodeset), 111) self.assertEqual(nodeset[0], "blue6") self.assertEqual(nodeset[1], "blue7") self.assertEqual(nodeset[-1], "yellow4") def test_unpickle_v1_6_py26(self): """test NodeSet unpickling (against v1.6/py26)""" nodeset = pickle.loads(binascii.a2b_base64("gAJjQ2x1c3RlclNoZWxsLk5vZGVTZXQKTm9kZVNldApxACmBcQF9cQIoVQdfbGVuZ3RocQNLAFUJX3BhdHRlcm5zcQR9cQUoVQh5ZWxsb3clc3EGY0NsdXN0ZXJTaGVsbC5SYW5nZVNldApSYW5nZVNldApxB1UBNHEIhXEJUnEKfXELKFUHcGFkZGluZ3EMTlUJX2F1dG9zdGVwcQ1HVLJJrSWUw31VCF92ZXJzaW9ucQ5LA3ViVQZibHVlJXNxD2gHVQ02LTEwLDEzLDE1LDE3cRCFcRFScRJ9cRMoaAxOaA1HVLJJrSWUw31oDksDdWJVB2dyZWVuJXNxFGgHVQUwLTEwMHEVhXEWUnEXfXEYKGgMTmgNR1SySa0llMN9aA5LA3ViVQNyZWRxGU51aA1OdWIu")) self.assertEqual(nodeset, NodeSet("blue[6-10,13,15,17],green[0-100],red,yellow4")) self.assertEqual(str(nodeset), "blue[6-10,13,15,17],green[0-100],red,yellow4") self.assertEqual(len(nodeset), 111) self.assertEqual(nodeset[0], "blue6") self.assertEqual(nodeset[1], "blue7") 
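        # Illustrative sketch (not from the original suite): how the base64
        # fixtures used by these compatibility tests are produced, following
        # the "generate data with:" recipe noted above:
        #
        #   >>> import binascii, pickle
        #   >>> from ClusterShell.NodeSet import NodeSet
        #   >>> ns = NodeSet("blue[6-10,13,15,17],green[0-100],red,yellow4")
        #   >>> blob = binascii.b2a_base64(pickle.dumps(ns))
        #   >>> pickle.loads(binascii.a2b_base64(blob)) == ns
        #   True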
self.assertEqual(nodeset[-1], "yellow4") def test_unpickle_v1_7_3_py27(self): """test NodeSet unpickling (against v1.7.3/py27)""" nodeset = pickle.loads(binascii.a2b_base64("Y2NvcHlfcmVnCl9yZWNvbnN0cnVjdG9yCnAwCihjQ2x1c3RlclNoZWxsLk5vZGVTZXQKTm9kZVNldApwMQpjX19idWlsdGluX18Kb2JqZWN0CnAyCk50cDMKUnA0CihkcDUKUydmb2xkX2F4aXMnCnA2Ck5zUydfbGVuZ3RoJwpwNwpJMApzUydfcGF0dGVybnMnCnA4CihkcDkKUydmb28lcycKcDEwCmNDbHVzdGVyU2hlbGwuUmFuZ2VTZXQKUmFuZ2VTZXQKcDExCihTJzEsNC01MCw4MC0xMDAnCnAxMgp0cDEzClJwMTQKKGRwMTUKUydwYWRkaW5nJwpwMTYKTnNTJ19hdXRvc3RlcCcKcDE3CkYxZSsxMDAKc1MnX3ZlcnNpb24nCnAxOApJMwpzYnNTJ2JhciVzJwpwMTkKZzExCihTJzA1MC0xNTAsNTAyLTU5OScKcDIwCnRwMjEKUnAyMgooZHAyMwpnMTYKSTMKc2cxNwpGMWUrMTAwCnNnMTgKSTMKc2Jzc2cxNwpOc2cxOApJMgpzYi4=")) self.assertEqual(nodeset, NodeSet("foo[1,4-50,80-100],bar[050-150,502-599]")) self.assertEqual(str(nodeset), "bar[050-150,502-599],foo[1,4-50,80-100]") self.assertEqual(len(nodeset), 268) self.assertEqual(nodeset[0], "bar050") self.assertEqual(nodeset[1], "bar051") self.assertEqual(nodeset[-1], "foo100") def test_unpickle_v1_8_4_py27(self): """test NodeSet unpickling (against v1.8.4/py27)""" nodeset = pickle.loads(binascii.a2b_base64("Y2NvcHlfcmVnCl9yZWNvbnN0cnVjdG9yCnAwCihjQ2x1c3RlclNoZWxsLk5vZGVTZXQKTm9kZVNldApwMQpjX19idWlsdGluX18Kb2JqZWN0CnAyCk50cDMKUnA0CihkcDUKUydmb2xkX2F4aXMnCnA2Ck5zUydfbGVuZ3RoJwpwNwpJMApzUydfcGF0dGVybnMnCnA4CihkcDkKUydmb28lcycKcDEwCmNDbHVzdGVyU2hlbGwuUmFuZ2VTZXQKUmFuZ2VTZXQKcDExCihTJzEsNC01MCw4MC0xMDAnCnAxMgp0cDEzClJwMTQKKGRwMTUKUydwYWRkaW5nJwpwMTYKTnNTJ19hdXRvc3RlcCcKcDE3CkYxZSsxMDAKc1MnX3ZlcnNpb24nCnAxOApJMwpzYnNTJ2JhciVzJwpwMTkKZzExCihTJzA1MC0xNTAsNTAyLTU5OScKcDIwCnRwMjEKUnAyMgooZHAyMwpnMTYKSTMKc2cxNwpGMWUrMTAwCnNnMTgKSTMKc2Jzc2cxNwpOc2cxOApJMgpzYi4=")) self.assertEqual(nodeset, NodeSet("foo[1,4-50,80-100],bar[050-150,502-599]")) self.assertEqual(str(nodeset), "bar[050-150,502-599],foo[1,4-50,80-100]") self.assertEqual(len(nodeset), 268) self.assertEqual(nodeset[0], "bar050") self.assertEqual(nodeset[1], "bar051") self.assertEqual(nodeset[-1], "foo100") def test_pickle_current(self): """test NodeSet pickling (current version)""" dump = pickle.dumps(NodeSet("foo[1-100]")) self.assertNotEqual(dump, None) nodeset = pickle.loads(dump) self.assertEqual(nodeset, NodeSet("foo[1-100]")) self.assertEqual(str(nodeset), "foo[1-100]") self.assertEqual(nodeset[0], "foo1") self.assertEqual(nodeset[1], "foo2") self.assertEqual(nodeset[-1], "foo100") def test_nd_unpickle_v1_6_py26(self): """test NodeSet nD unpickling (against v1.6/py26)""" # Use cases that will test conversion required when using # NodeSet nD (see NodeSet.__setstate__()): # TEST FROM v1.6: NodeSet("foo[1-100]bar[1-10]") nodeset = pickle.loads(binascii.a2b_base64("Y2NvcHlfcmVnCl9yZWNvbnN0cnVjdG9yCnAwCihjQ2x1c3RlclNoZWxsLk5vZGVTZXQKTm9kZVNldApwMQpjX19idWlsdGluX18Kb2JqZWN0CnAyCk50cDMKUnA0CihkcDUKUydfbGVuZ3RoJwpwNgpJMApzUydfcGF0dGVybnMnCnA3CihkcDgKUydmb28lc2JhclsxLTEwXScKcDkKY0NsdXN0ZXJTaGVsbC5SYW5nZVNldApSYW5nZVNldApwMTAKKFMnMS0xMDAnCnAxMQp0cDEyClJwMTMKKGRwMTQKUydwYWRkaW5nJwpwMTUKTnNTJ19hdXRvc3RlcCcKcDE2CkYxZSsxMDAKc1MnX3ZlcnNpb24nCnAxNwpJMwpzYnNzZzE2Ck5zYi4=\n")) self.assertEqual(str(nodeset), str(NodeSet("foo[1-100]bar[1-10]"))) self.assertEqual(nodeset, NodeSet("foo[1-100]bar[1-10]")) self.assertEqual(len(nodeset), 1000) self.assertEqual(nodeset[0], "foo1bar1") self.assertEqual(nodeset[1], "foo1bar2") self.assertEqual(nodeset[-1], "foo100bar10") # TEST FROM v1.6: NodeSet("foo[1-100]bar3,foo[1-100]bar7,foo[1-100]bar12") nodeset = 
pickle.loads(binascii.a2b_base64("Y2NvcHlfcmVnCl9yZWNvbnN0cnVjdG9yCnAwCihjQ2x1c3RlclNoZWxsLk5vZGVTZXQKTm9kZVNldApwMQpjX19idWlsdGluX18Kb2JqZWN0CnAyCk50cDMKUnA0CihkcDUKUydfbGVuZ3RoJwpwNgpJMApzUydfcGF0dGVybnMnCnA3CihkcDgKUydmb28lc2JhcjEyJwpwOQpjQ2x1c3RlclNoZWxsLlJhbmdlU2V0ClJhbmdlU2V0CnAxMAooUycxLTEwMCcKcDExCnRwMTIKUnAxMwooZHAxNApTJ3BhZGRpbmcnCnAxNQpOc1MnX2F1dG9zdGVwJwpwMTYKRjFlKzEwMApzUydfdmVyc2lvbicKcDE3CkkzCnNic1MnZm9vJXNiYXIzJwpwMTgKZzEwCihTJzEtMTAwJwpwMTkKdHAyMApScDIxCihkcDIyCmcxNQpOc2cxNgpGMWUrMTAwCnNnMTcKSTMKc2JzUydmb28lc2JhcjcnCnAyMwpnMTAKKFMnMS0xMDAnCnAyNAp0cDI1ClJwMjYKKGRwMjcKZzE1Ck5zZzE2CkYxZSsxMDAKc2cxNwpJMwpzYnNzZzE2Ck5zYi4=\n")) self.assertEqual(str(nodeset), str(NodeSet("foo[1-100]bar[3,7,12]"))) self.assertEqual(nodeset, NodeSet("foo[1-100]bar[3,7,12]")) self.assertEqual(len(nodeset), 300) self.assertEqual(nodeset[0], "foo1bar3") self.assertEqual(nodeset[1], "foo1bar7") self.assertEqual(nodeset[-1], "foo100bar12") # TEST FROM v1.6: NodeSet("foo1bar3,foo2bar4,foo[6-20]bar3") nodeset = pickle.loads(binascii.a2b_base64("Y2NvcHlfcmVnCl9yZWNvbnN0cnVjdG9yCnAwCihjQ2x1c3RlclNoZWxsLk5vZGVTZXQKTm9kZVNldApwMQpjX19idWlsdGluX18Kb2JqZWN0CnAyCk50cDMKUnA0CihkcDUKUydfbGVuZ3RoJwpwNgpJMApzUydfcGF0dGVybnMnCnA3CihkcDgKUydmb28lc2JhcjMnCnA5CmNDbHVzdGVyU2hlbGwuUmFuZ2VTZXQKUmFuZ2VTZXQKcDEwCihTJzEsNi0yMCcKcDExCnRwMTIKUnAxMwooZHAxNApTJ3BhZGRpbmcnCnAxNQpOc1MnX2F1dG9zdGVwJwpwMTYKRjFlKzEwMApzUydfdmVyc2lvbicKcDE3CkkzCnNic1MnZm9vJXNiYXI0JwpwMTgKZzEwCihTJzInCnAxOQp0cDIwClJwMjEKKGRwMjIKZzE1Ck5zZzE2CkYxZSsxMDAKc2cxNwpJMwpzYnNzZzE2Ck5zYi4=\n")) self.assertEqual(str(nodeset), str(NodeSet("foo[1,6-20]bar3,foo2bar4"))) self.assertEqual(nodeset, NodeSet("foo[1,6-20]bar3,foo2bar4")) self.assertEqual(len(nodeset), 17) self.assertEqual(nodeset[0], "foo1bar3") self.assertEqual(nodeset[1], "foo6bar3") self.assertEqual(nodeset[-1], "foo2bar4") # TEST FROM v1.6: NodeSet("foo[1-100]bar4,foo[1-100]bar,foo[1-20],bar,foo101bar4") nodeset = pickle.loads(binascii.a2b_base64("Y2NvcHlfcmVnCl9yZWNvbnN0cnVjdG9yCnAwCihjQ2x1c3RlclNoZWxsLk5vZGVTZXQKTm9kZVNldApwMQpjX19idWlsdGluX18Kb2JqZWN0CnAyCk50cDMKUnA0CihkcDUKUydfbGVuZ3RoJwpwNgpJMApzUydfcGF0dGVybnMnCnA3CihkcDgKUydmb28lcycKcDkKY0NsdXN0ZXJTaGVsbC5SYW5nZVNldApSYW5nZVNldApwMTAKKFMnMS0yMCcKcDExCnRwMTIKUnAxMwooZHAxNApTJ3BhZGRpbmcnCnAxNQpOc1MnX2F1dG9zdGVwJwpwMTYKRjFlKzEwMApzUydfdmVyc2lvbicKcDE3CkkzCnNic1MnZm9vJXNiYXInCnAxOApnMTAKKFMnMS0xMDAnCnAxOQp0cDIwClJwMjEKKGRwMjIKZzE1Ck5zZzE2CkYxZSsxMDAKc2cxNwpJMwpzYnNTJ2ZvbyVzYmFyNCcKcDIzCmcxMAooUycxLTEwMScKcDI0CnRwMjUKUnAyNgooZHAyNwpnMTUKTnNnMTYKRjFlKzEwMApzZzE3CkkzCnNic1MnYmFyJwpwMjgKTnNzZzE2Ck5zYi4=\n")) self.assertEqual(str(nodeset), str(NodeSet("bar,foo[1-20],foo[1-100]bar," "foo[1-101]bar4"))) self.assertEqual(nodeset, NodeSet("bar,foo[1-20],foo[1-100]bar,foo[1-101]bar4")) self.assertEqual(len(nodeset), 222) self.assertEqual(nodeset[0], "bar") self.assertEqual(nodeset[1], "foo1") self.assertEqual(nodeset[-1], "foo101bar4") def test_nd_unpickle_v1_7_3_py27(self): """test NodeSet nD unpickling (against v1.7.3/py27)""" # TEST FROM v1.7.3: NodeSet("foo[1-100]bar[00001-00010]") nodeset = 
pickle.loads(binascii.a2b_base64("Y2NvcHlfcmVnCl9yZWNvbnN0cnVjdG9yCnAwCihjQ2x1c3RlclNoZWxsLk5vZGVTZXQKTm9kZVNldApwMQpjX19idWlsdGluX18Kb2JqZWN0CnAyCk50cDMKUnA0CihkcDUKUydmb2xkX2F4aXMnCnA2Ck5zUydfbGVuZ3RoJwpwNwpJMApzUydfcGF0dGVybnMnCnA4CihkcDkKUydmb28lc2JhciVzJwpwMTAKZzAKKGNDbHVzdGVyU2hlbGwuUmFuZ2VTZXQKUmFuZ2VTZXRORApwMTEKZzIKTnRwMTIKUnAxMwooZHAxNApTJ19tdWx0aXZhcl9oaW50JwpwMTUKSTAwCnNTJ192ZWNsaXN0JwpwMTYKKGxwMTcKKGxwMTgKY0NsdXN0ZXJTaGVsbC5SYW5nZVNldApSYW5nZVNldApwMTkKKFMnMS0xMDAnCnAyMAp0cDIxClJwMjIKKGRwMjMKUydwYWRkaW5nJwpwMjQKTnNTJ19hdXRvc3RlcCcKcDI1CkYxZSsxMDAKc1MnX3ZlcnNpb24nCnAyNgpJMwpzYmFnMTkKKFMnMDAwMDEtMDAwMTAnCnAyNwp0cDI4ClJwMjkKKGRwMzAKZzI0Ckk1CnNnMjUKRjFlKzEwMApzZzI2CkkzCnNiYWFzZzI1CkYxZSsxMDAKc1MnX2RpcnR5JwpwMzEKSTAwCnNic3NnMjUKTnNnMjYKSTIKc2Iu")) self.assertEqual(str(nodeset), str(NodeSet("foo[1-100]bar[00001-00010]"))) self.assertEqual(nodeset, NodeSet("foo[1-100]bar[00001-00010]")) self.assertEqual(len(nodeset), 1000) self.assertEqual(nodeset[0], "foo1bar00001") self.assertEqual(nodeset[1], "foo1bar00002") self.assertEqual(nodeset[-1], "foo100bar00010") def test_nd_pickle_current(self): """test NodeSet nD pickling (current version)""" dump = pickle.dumps(NodeSet("foo[1-100]bar[1-10]")) self.assertNotEqual(dump, None) nodeset = pickle.loads(dump) self.assertEqual(nodeset, NodeSet("foo[1-100]bar[1-10]")) self.assertEqual(str(nodeset), "foo[1-100]bar[1-10]") self.assertEqual(nodeset[0], "foo1bar1") self.assertEqual(nodeset[1], "foo1bar2") self.assertEqual(nodeset[-1], "foo100bar10") dump = pickle.dumps(NodeSet("foo[1-100]bar4,foo[1-100]bar,foo[1-20]," "bar,foo101bar4")) self.assertNotEqual(dump, None) nodeset = pickle.loads(dump) self.assertEqual(nodeset, NodeSet("bar,foo[1-20],foo[1-100]bar,foo[1-101]bar4")) self.assertEqual(str(nodeset), "bar,foo[1-20],foo[1-100]bar,foo[1-101]bar4") self.assertEqual(nodeset[0], "bar") self.assertEqual(nodeset[1], "foo1") self.assertEqual(nodeset[-1], "foo101bar4") def testNodeSetBase(self): """test underlying NodeSetBase class""" rset = RangeSet("1-100,200") self.assertEqual(len(rset), 101) nsb = NodeSetBase("foo%sbar", rset) self.assertEqual(len(nsb), len(rset)) self.assertEqual(str(nsb), "foo[1-100,200]bar") nsbcpy = nsb.copy() self.assertEqual(len(nsbcpy), 101) self.assertEqual(str(nsbcpy), "foo[1-100,200]bar") other = NodeSetBase("foo%sbar", RangeSet("201")) nsbcpy.add(other) self.assertEqual(len(nsb), 101) self.assertEqual(str(nsb), "foo[1-100,200]bar") self.assertEqual(len(nsbcpy), 102) self.assertEqual(str(nsbcpy), "foo[1-100,200-201]bar") def test_nd_simple(self): ns1 = NodeSet("da3c1") ns2 = NodeSet("da3c2") self.assertEqual(str(ns1 | ns2), "da3c[1-2]") ns1 = NodeSet("da3c1-ipmi") ns2 = NodeSet("da3c2-ipmi") self.assertEqual(str(ns1 | ns2), "da3c[1-2]-ipmi") ns1 = NodeSet("da[2-3]c1") ns2 = NodeSet("da[2-3]c2") self.assertEqual(str(ns1 | ns2), "da[2-3]c[1-2]") ns1 = NodeSet("da[2-3]c1") ns2 = NodeSet("da[2-3]c1") self.assertEqual(str(ns1 | ns2), "da[2-3]c1") def test_nd_multiple(self): nodeset = NodeSet("da[30,34-51,59-60]p[1-2]") self.assertEqual(len(nodeset), 42) nodeset = NodeSet("da[30,34-51,59-60]p[1-2],da[70-77]p3") self.assertEqual(len(nodeset), 42+8) self.assertEqual(str(nodeset), "da[30,34-51,59-60]p[1-2],da[70-77]p3") # advanced parsing checks nodeset = NodeSet("da[1-10]c[1-2]") self.assertEqual(len(nodeset), 20) self.assertEqual(str(nodeset), "da[1-10]c[1-2]") nodeset = NodeSet("da[1-10]c[1-2]p") self.assertEqual(len(nodeset), 20) self.assertEqual(str(nodeset), "da[1-10]c[1-2]p") nodeset = NodeSet("da[1-10]c[1-2]p0") 
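        # Illustrative sketch (not from the original suite; rack/node names
        # are hypothetical): each bracketed dimension multiplies the set
        # size, and unions fold dimension by dimension:
        #
        #   >>> from ClusterShell.NodeSet import NodeSet
        #   >>> len(NodeSet("rack[1-2]node[1-3]"))   # 2 racks x 3 nodes
        #   6
        #   >>> str(NodeSet("rack1node1") | NodeSet("rack2node1"))
        #   'rack[1-2]node1'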
self.assertEqual(len(nodeset), 20) self.assertEqual(str(nodeset), "da[1-10]c[1-2]p0") nodeset = NodeSet("da[1-10]c[1-2,8]p0") self.assertEqual(len(nodeset), 30) self.assertEqual(str(nodeset), "da[1-10]c[1-2,8]p0") nodeset = NodeSet("da[1-10]c3p0x3") self.assertEqual(len(nodeset), 10) self.assertEqual(str(nodeset), "da[1-10]c3p0x3") nodeset = NodeSet("[1-7,10]xpc[3,4]p40_3,9xpc[3,4]p40_3,8xpc[3,4]p[40]_[3]") self.assertEqual(len(nodeset), 20) self.assertEqual(str(nodeset), "[1-10]xpc[3-4]p40_3") def test_nd_len(self): ns1 = NodeSet("da3c1") ns2 = NodeSet("da3c2") self.assertEqual(len(ns1 | ns2), 2) ns1 = NodeSet("da[2-3]c1") self.assertEqual(len(ns1), 2) ns2 = NodeSet("da[2-3]c2") self.assertEqual(len(ns2), 2) self.assertEqual(len(ns1) + len(ns2), 4) ns1 = NodeSet("da[1-1000]c[1-2]p[0-1]") self.assertEqual(len(ns1), 4000) ns1 = NodeSet("tronic[0036-1630]c[3-4]") self.assertEqual(len(ns1), 3190) ns1 = NodeSet("tronic[0036-1630]c[3-400]") self.assertEqual(len(ns1), 634810) # checking length of overlapping union ns1 = NodeSet("da[2-3]c[0-1]") self.assertEqual(len(ns1), 4) ns2 = NodeSet("da[2-3]c[1-2]") self.assertEqual(len(ns2), 4) self.assertEqual(len(ns1) + len(ns2), 8) self.assertEqual(len(ns1 | ns2), 6) # da[2-3]c[0-2] # checking length of nD + 1D ns1 = NodeSet("da[2-3]c[0-1]") self.assertEqual(len(ns1), 4) ns2 = NodeSet("node[1-1000]") self.assertEqual(len(ns2), 1000) self.assertEqual(len(ns1) + len(ns2), 1004) self.assertEqual(len(ns1 | ns2), 1004) # checking length of nD + single node ns1 = NodeSet("da[2-3]c[0-1]") self.assertEqual(len(ns1), 4) ns2 = NodeSet("single") self.assertEqual(len(ns2), 1) self.assertEqual(len(ns1) + len(ns2), 5) self.assertEqual(len(ns1 | ns2), 5) def test_nd_iter(self): ns1 = NodeSet("da[2-3]c[0-1]") result = list(iter(ns1)) self.assertEqual(result, ['da2c0', 'da2c1', 'da3c0', 'da3c1']) def test_nd_nsiter(self): ns1 = NodeSet("da[2-3]c[0-1]") result = list(ns1.nsiter()) self.assertEqual(result, [NodeSet('da2c0'), NodeSet('da2c1'), NodeSet('da3c0'), NodeSet('da3c1')]) def test_nd_getitem(self): nodeset = NodeSet("da[30,34-51,59-60]p[1-2]") self.assertEqual(len(nodeset), 42) self.assertEqual(nodeset[0], "da30p1") self.assertEqual(nodeset[1], "da30p2") self.assertEqual(nodeset[2], "da34p1") self.assertEqual(nodeset[-1], "da60p2") nodeset = NodeSet("da[30,34-51,59-60]p[1-2],da[70-77]p2") self.assertEqual(len(nodeset), 42+8) # OLD FOLD #self.assertEqual(str(nodeset), # "da[30,34-51,59-60,70-77]p2,da[30,34-51,59-60]p1") # NEW FOLD self.assertEqual(str(nodeset), "da[30,34-51,59-60]p[1-2],da[70-77]p2") #self.assertEqual(nodeset[0], "da30p2") # OLD FOLD self.assertEqual(nodeset[0], "da30p1") # NEW FOLD def test_nd_split(self): nodeset = NodeSet("foo[1-3]bar[2-4]") self.assertEqual((NodeSet("foo1bar[2-4]"), NodeSet("foo2bar[2-4]"), NodeSet("foo3bar[2-4]")), tuple(nodeset.split(3))) nodeset = NodeSet("foo[1-3]bar[2-4]") self.assertEqual((NodeSet("foo1bar[2-4],foo2bar[2-3]"), NodeSet("foo[2-3]bar4,foo3bar[2-3]")), tuple(nodeset.split(2))) def test_nd_contiguous(self): ns1 = NodeSet("foo[3-100]bar[4-30]") self.assertEqual(str(ns1), "foo[3-100]bar[4-30]") self.assertEqual(len(ns1), 98*27) ns1 = NodeSet("foo[3-100,200]bar4") self.assertEqual(['foo[3-100]bar4', 'foo200bar4'], [str(ns) for ns in ns1.contiguous()]) self.assertEqual(str(ns1), "foo[3-100,200]bar4") ns1 = NodeSet("foo[3-100,102-500]bar[4-30]") self.assertEqual(['foo[3-100]bar[4-30]', 'foo[102-500]bar[4-30]'], [str(ns) for ns in ns1.contiguous()]) self.assertEqual(str(ns1), "foo[3-100,102-500]bar[4-30]") ns1 = 
NodeSet("foo[3-100,102-500]bar[4-30,37]") self.assertEqual(['foo[3-100]bar[4-30]', 'foo[3-100]bar37', 'foo[102-500]bar[4-30]', 'foo[102-500]bar37'], [str(ns) for ns in ns1.contiguous()]) self.assertEqual(str(ns1), "foo[3-100,102-500]bar[4-30,37]") def test_nd_fold(self): ns = NodeSet("da[2-3]c[1-2],da[3-4]c[3-4]") self.assertEqual(ns, NodeSet("da[2-3]c[1-2],da[3-4]c[3-4]")) self.assertEqual(str(ns), "da3c[1-4],da2c[1-2],da4c[3-4]") ns = NodeSet("da[2-3]c[1-2],da[3-4]c[2-3]") self.assertEqual(ns, NodeSet("da[2-3]c[1-2],da[3-4]c[2-3]")) self.assertEqual(str(ns), "da3c[1-3],da2c[1-2],da4c[2-3]") ns = NodeSet("da[2-3]c[1-2],da[3-4]c[1-2]") self.assertEqual(ns, NodeSet("da[2-3]c[1-2],da[3-4]c[1-2]")) self.assertEqual(str(ns), "da[2-4]c[1-2]") ns = NodeSet("da[2-3]c[1-2]p3,da[3-4]c[1-3]p3") self.assertEqual(ns, NodeSet("da[2-3]c[1-2]p3,da[3-4]c[1-3]p3")) self.assertEqual(str(ns), "da[3-4]c[1-3]p3,da2c[1-2]p3") ns = NodeSet("da[2-3]c[1-2],da[2,5]c[2-3]") self.assertEqual(ns, NodeSet("da[2-3]c[1-2],da[2,5]c[2-3]")) self.assertEqual(str(ns), "da2c[1-3],da3c[1-2],da5c[2-3]") def test_nd_issuperset(self): ns1 = NodeSet("da[2-3]c[1-2]") ns2 = NodeSet("da[1-10]c[1-2]") self.assertTrue(ns2.issuperset(ns1)) self.assertFalse(ns1.issuperset(ns2)) ns1 = NodeSet("da[2-3]c[1-2]") ns1.add("da5c2") self.assertTrue(ns2.issuperset(ns1)) self.assertFalse(ns1.issuperset(ns2)) ns1 = NodeSet("da[2-3]c[1-2]") ns1.add("da5c[1-2]") self.assertTrue(ns2.issuperset(ns1)) self.assertFalse(ns1.issuperset(ns2)) ns1 = NodeSet("da[2-3]c[1-2]") ns1.add("da5c[2-3]") self.assertFalse(ns2.issuperset(ns1)) self.assertFalse(ns1.issuperset(ns2)) # large ranges nodeset = NodeSet("tronic[1-5000]c[1-2]") self.assertEqual(len(nodeset), 10000) self.assertTrue(nodeset.issuperset("tronic[1-5000]c1")) self.assertFalse(nodeset.issuperset("tronic[1-5000]c3")) nodeset = NodeSet("tronic[1-5000]c[1-200]p3") self.assertEqual(len(nodeset), 1000000) self.assertTrue(nodeset.issuperset("tronic[1-5000]c200p3")) self.assertFalse(nodeset.issuperset("tronic[1-5000]c[200-300]p3")) self.assertFalse(nodeset.issuperset("tronic[1-5000/2]c[200-300/2]p3")) def test_nd_issubset(self): nodeset = NodeSet("artcore[3-999]-ib0") self.assertEqual(len(nodeset), 997) self.assertTrue(nodeset.issubset("artcore[3-999]-ib[0-1]")) self.assertTrue(nodeset.issubset("artcore[1-1000]-ib0")) self.assertTrue(nodeset.issubset("artcore[1-1000]-ib[0,2]")) self.assertFalse(nodeset.issubset("artcore[350-427]-ib0")) # check lt self.assertTrue(nodeset < NodeSet("artcore[2-32000]-ib0")) self.assertFalse(nodeset > NodeSet("artcore[2-32000]-ib0")) self.assertTrue(nodeset < NodeSet("artcore[2-32000]-ib0,lounge[35-65/2]")) self.assertFalse(nodeset < NodeSet("artcore[3-999]-ib0")) self.assertFalse(nodeset < NodeSet("artcore[3-980]-ib0")) self.assertFalse(nodeset < NodeSet("artcore[2-998]-ib0")) self.assertTrue(nodeset <= NodeSet("artcore[2-32000]-ib0")) self.assertTrue(nodeset <= NodeSet("artcore[2-32000]-ib0,lounge[35-65/2]")) self.assertTrue(nodeset <= NodeSet("artcore[3-999]-ib0")) self.assertFalse(nodeset <= NodeSet("artcore[3-980]-ib0")) self.assertFalse(nodeset <= NodeSet("artcore[2-998]-ib0")) self.assertEqual(len(nodeset), 997) # check padding issue - fixed in 1.9 self.assertFalse(nodeset.issubset("artcore[0001-1000]-ib0")) # used to be true < 1.9 self.assertFalse(nodeset.issubset("artcore030-ib0")) # multiple patterns case nodeset = NodeSet("tronic[0036-1630],lounge[20-660/2]") self.assertTrue(nodeset < NodeSet("tronic[0036-1630],lounge[20-662/2]")) self.assertTrue(nodeset < 
NodeSet("tronic[0035-1630],lounge[20-660/2]")) self.assertFalse(nodeset < NodeSet("tronic[0035-1630],lounge[22-660/2]")) self.assertTrue(nodeset < NodeSet("tronic[0036-1630],lounge[20-660/2]," "artcore[034-070]")) self.assertTrue(nodeset < NodeSet("tronic[0032-1880],lounge[2-700/2]," "artcore[039-040]")) self.assertTrue(nodeset.issubset("tronic[0032-1880],lounge[2-700/2],artcore[039-040]")) self.assertTrue(nodeset.issubset(NodeSet("tronic[0032-1880],lounge[2-700/2],artcore[039-040]"))) def test_nd_intersection(self): ns1 = NodeSet("a0b[1-2]") ns2 = NodeSet("a0b1") self.assertEqual(ns1.intersection(ns2), ns2) self.assertEqual(ns1.intersection(ns2), NodeSet("a0b1")) self.assertEqual(len(ns1.intersection(ns2)), 1) ns1 = NodeSet("a0b[1-2]") ns2 = NodeSet("a3b0,a0b1") self.assertEqual(ns1.intersection(ns2), NodeSet("a0b1")) self.assertEqual(len(ns1.intersection(ns2)), 1) ns1 = NodeSet("a[0-100]b[1-2]") ns2 = NodeSet("a[50-150]b[2]") self.assertEqual(ns1.intersection(ns2), NodeSet("a[50-100]b2")) self.assertEqual(len(ns1.intersection(ns2)), 51) def test_nd_nonoverlap(self): ns1 = NodeSet("a[0-2]b[1-3]c[4]") ns1.add("a[0-1]b[2-3]c[4-5]") self.assertEqual(ns1, NodeSet("a[0-1]b[2-3]c[4-5],a[0-2]b1c4,a2b[2-3]c4")) self.assertEqual(ns1, NodeSet("a2b[1-3]c4,a0b[1-2]c4,a0b3c[4-5],a1b[1-2]c4,a1b3c[4-5],a0b2c5,a1b2c5")) self.assertEqual(str(ns1), "a[0-1]b[1-2]c4,a[0-1]b3c[4-5],a2b[1-3]c4,a[0-1]b2c5") self.assertEqual(len(ns1), 13) ns1 = NodeSet("a[0-1]b[2-3]c[4-5]") ns1.add("a[0-2]b[1-3]c[4]") self.assertEqual(ns1, NodeSet("a[0-1]b[2-3]c[4-5],a[0-2]b1c4,a2b[2-3]c4")) self.assertEqual(ns1, NodeSet("a2b[1-3]c4,a0b[1-2]c4,a0b3c[4-5],a1b[1-2]c4,a1b3c[4-5],a0b2c5,a1b2c5")) self.assertEqual(str(ns1), "a[0-1]b[1-2]c4,a[0-1]b3c[4-5],a2b[1-3]c4,a[0-1]b2c5") self.assertEqual(len(ns1), 13) ns1 = NodeSet("a[0-2]b[1-3]c[4],a[0-1]b[2-3]c[4-5]") self.assertEqual(ns1, NodeSet("a[0-2]b[1-3]c[4],a[0-1]b[2-3]c[4-5]")) self.assertEqual(ns1, NodeSet("a2b[1-3]c4,a0b[1-2]c4,a0b3c[4-5],a1b[1-2]c4,a1b3c[4-5],a0b2c5,a1b2c5")) self.assertEqual(ns1, NodeSet("a[0-2]b[1-3]c4,a[0-1]b[2-3]c5")) self.assertEqual(str(ns1), "a[0-1]b[1-2]c4,a[0-1]b3c[4-5],a2b[1-3]c4,a[0-1]b2c5") self.assertEqual(len(ns1), 13) ns1 = NodeSet("a[0-2]b[1-3]c[4-6],a[0-1]b[2-3]c[4-5]") self.assertEqual(ns1, NodeSet("a[0-2]b[1-3]c[4-6],a[0-1]b[2-3]c[4-5]")) self.assertEqual(ns1, NodeSet("a[0-2]b[1-3]c[4-6]")) self.assertEqual(str(ns1), "a[0-2]b[1-3]c[4-6]") self.assertEqual(len(ns1), 3*3*3) ns1 = NodeSet("a[0-2]b[2-3]c[4-6],a[0-1]b[1-3]c[4-5]") self.assertEqual(str(ns1), "a[0-2]b[2-3]c[4-6],a[0-1]b1c[4-5]") self.assertEqual(ns1, NodeSet("a[0-1]b[1-3]c[4-5],a[0-2]b[2-3]c6,a2b[2-3]c[4-5]")) self.assertEqual(ns1, NodeSet("a[0-2]b[2-3]c[4-6],a[0-1]b1c[4-5]")) self.assertEqual(len(ns1), (3*2*3)+(2*1*2)) ns1 = NodeSet("a[0-2]b[2-3]c[4-6],a[0-1]b[1-3]c[4-5]") self.assertEqual(str(ns1), "a[0-2]b[2-3]c[4-6],a[0-1]b1c[4-5]") self.assertEqual(NodeSet("a[0-1]b[1-3]c[4-5],a[0-2]b[2-3]c6,a2b[2-3]c[4-5]"), NodeSet("a[0-2]b[2-3]c[4-6],a[0-1]b1c[4-5]")) self.assertEqual(ns1, NodeSet("a[0-2]b[2-3]c[4-6],a[0-1]b1c[4-5]")) self.assertEqual(ns1, NodeSet("a[0-1]b[1-3]c[4-5],a[0-2]b[2-3]c6,a2b[2-3]c[4-5]")) self.assertEqual(len(ns1), (3*2*3)+(2*1*2)) ns1 = NodeSet("a[0-2]b[2-3]c[4-6],a[0-1]b[1-3]c[4-5],a2b1c[4-6]") self.assertEqual(str(ns1), "a[0-1]b[2-3]c[4-6],a2b[1-3]c[4-6],a[0-1]b1c[4-5]") self.assertEqual(ns1, NodeSet("a[0-1]b[1-3]c[4-5],a[0-2]b[2-3]c6,a2b[2-3]c[4-5],a2b1c[4-6]")) self.assertEqual(ns1, NodeSet("a[0-2]b[2-3]c[4-6],a[0-1]b1c[4-5],a2b1c[4-6]")) 
self.assertEqual(len(ns1), (3*3*2)+1+(3*2*1)) ns1.add("a1b1c6") self.assertEqual(str(ns1), "a[1-2]b[1-3]c[4-6],a0b[2-3]c[4-6],a0b1c[4-5]") self.assertEqual(ns1, NodeSet("a[0-2]b[2-3]c[4-6],a[0-1]b1c[4-5],a2b1c[4-6],a1b1c6")) self.assertEqual(ns1, NodeSet("a[1-2]b[1-3]c[4-6],a0b[2-3]c[4-6],a0b1c[4-5]")) ns1.add("a0b1c6") self.assertEqual(str(ns1), "a[0-2]b[1-3]c[4-6]") self.assertEqual(ns1, NodeSet("a[0-2]b[1-3]c[4-6]")) self.assertEqual(ns1, NodeSet("a[0-1]b[1-3]c[4-5],a[0-2]b[2-3]c6,a2b[2-3]c[4-5],a2b1c[4-6],a[0-1]b1c6")) self.assertEqual(len(ns1), 3*3*3) def test_nd_difference(self): ns1 = NodeSet("a0b[1-2]") ns2 = NodeSet("a0b1") self.assertEqual(ns1.difference(ns2), NodeSet("a0b2")) self.assertEqual(len(ns1.difference(ns2)), 1) ns1 = NodeSet("a[0-2]b[1-3]c[4-5]") ns2 = NodeSet("a[0-2]b[1-3]c4") self.assertEqual(str(ns1.difference(ns2)), "a[0-2]b[1-3]c5") self.assertEqual(ns1.difference(ns2), NodeSet("a[0-2]b[1-3]c5")) self.assertEqual(len(ns1.difference(ns2)), 9) ns1 = NodeSet("a[0-2]b[1-3]c[4]") ns2 = NodeSet("a[0-3]b[1]c[4-5]") self.assertEqual(ns1.difference(ns2), NodeSet("a[0-2]b[2-3]c4")) self.assertEqual(len(ns1.difference(ns2)), 6) ns1 = NodeSet("a[0-2]b[1-3]c[4],a[0-1]b[2-3]c[4-5]") self.assertEqual(str(ns1), "a[0-1]b[1-2]c4,a[0-1]b3c[4-5],a2b[1-3]c4,a[0-1]b2c5") self.assertEqual(ns1, NodeSet("a[0-2]b[1-3]c[4],a[0-1]b[2-3]c[4-5]")) self.assertEqual(ns1, NodeSet("a[0-1]b[2-3]c[4-5],a[0-2]b1c4,a2b[2-3]c4")) self.assertEqual(ns1, NodeSet("a2b[1-3]c4,a0b[1-2]c4,a0b3c[4-5],a1b[1-2]c4,a1b3c[4-5],a0b2c5,a1b2c5")) self.assertEqual(len(ns1), 3*3 + 2*2) ns2 = NodeSet("a[0-3]b[1]c[4-5]") self.assertEqual(len(ns2), 4*2) self.assertEqual(str(ns1.difference(ns2)), "a[0-1]b[2-3]c[4-5],a2b[2-3]c4") # compare object with different str repr self.assertNotEqual(str(ns1.difference(ns2)), "a[0-2]b[2-3]c4,a[0-1]b[2-3]c5") self.assertEqual(ns1.difference(ns2), NodeSet("a[0-2]b[2-3]c4,a[0-1]b[2-3]c5")) self.assertEqual(len(ns1.difference(ns2)), 3*2+2*2) ns1 = NodeSet("a[0-3]b[1-5]c5") ns2 = NodeSet("a[0-2]b[2-4]c5") self.assertEqual(str(ns1.difference(ns2)), "a[0-2]b[1,5]c5,a3b[1-5]c5") ns1 = NodeSet("a[0-3]b2c5") ns2 = NodeSet("a[0-2]b1c5") self.assertEqual(str(ns1.difference(ns2)), "a[0-3]b2c5") ns1 = NodeSet("a[0-3]b[1-4]c[5]") ns2 = NodeSet("a[0-2]b1c5") self.assertEqual(str(ns1.difference(ns2)), "a[0-2]b[2-4]c5,a3b[1-4]c5") ns1 = NodeSet("a[0-2]b[1-4]c5") ns2 = NodeSet("a[0-3]b[2-3]c5") self.assertEqual(str(ns1.difference(ns2)), "a[0-2]b[1,4]c5") ns1 = NodeSet("a[0-2]b1c5") ns2 = NodeSet("a[0-3]b[1-4]c[5]") self.assertEqual(str(ns1.difference(ns2)), "") ns1 = NodeSet("a[1-4]b1c5") ns2 = NodeSet("a[0-3]b1c5") self.assertEqual(str(ns1.difference(ns2)), "a4b1c5") ns1 = NodeSet("a[0-2]b1c[5-6]") ns2 = NodeSet("a[0-3]b[1-4]c[5]") self.assertEqual(str(ns1.difference(ns2)), "a[0-2]b1c6") ns1 = NodeSet("a[0-2]b[1-3]c[5]") ns2 = NodeSet("a[0-3]b[1-4]c[5]") self.assertEqual(ns1.difference(ns2), NodeSet()) self.assertEqual(len(ns1.difference(ns2)), 0) def test_nd_difference_test(self): #ns1 = NodeSet("a2b4") #ns2 = NodeSet("a2b6") #nsdiff = ns1.difference(ns2) #self.assertEqual(str(nsdiff), "a2b4") #self.assertEqual(nsdiff, NodeSet("a2b4")) ns1 = NodeSet("a[1-10]b[1-10]") ns2 = NodeSet("a[5-20]b[5-20]") nsdiff = ns1.difference(ns2) self.assertEqual(str(nsdiff), "a[1-4]b[1-10],a[5-10]b[1-4]") self.assertEqual(nsdiff, NodeSet("a[1-4]b[1-10],a[1-10]b[1-4]")) # manually checked with overlap # node[1-100]x[1-10] -x node4x4 def test_nd_difference_m(self): ns1 = NodeSet("a[2-3,5]b[1,4],a6b5") ns2 = 
NodeSet("a5b4,a6b5") nsdiff = ns1.difference(ns2) self.assertEqual(str(nsdiff), "a[2-3]b[1,4],a5b1") self.assertEqual(nsdiff, NodeSet("a[2-3]b[1,4],a5b1")) self.assertEqual(nsdiff, NodeSet("a[2-3,5]b1,a[2-3]b4")) # same with difference_update: ns1 = NodeSet("a[2-3,5]b[1,4],a6b5") ns2 = NodeSet("a5b4,a6b5") ns1.difference_update(ns2) self.assertEqual(str(ns1), "a[2-3]b[1,4],a5b1") self.assertEqual(ns1, NodeSet("a[2-3]b[1,4],a5b1")) self.assertEqual(ns1, NodeSet("a[2-3,5]b1,a[2-3]b4")) ns1 = NodeSet("a[2-3,5]b[1,4]p1,a6b5p1") ns2 = NodeSet("a5b4p1,a6b5p1") nsdiff = ns1.difference(ns2) self.assertEqual(str(nsdiff), "a[2-3]b[1,4]p1,a5b1p1") self.assertEqual(nsdiff, NodeSet("a[2-3]b[1,4]p1,a5b1p1")) self.assertEqual(nsdiff, NodeSet("a[2-3,5]b1p1,a[2-3]b4p1")) # manually checked ns1 = NodeSet("a[2-3]b[0,3-4],a[6-10]b[0-2]") ns2 = NodeSet("a[3-6]b[2-3]") nsdiff = ns1.difference(ns2) self.assertEqual(str(nsdiff), "a[7-10]b[0-2],a2b[0,3-4],a3b[0,4],a6b[0-1]") self.assertEqual(nsdiff, NodeSet("a[7-10]b[0-2],a[2-3]b[0,4],a6b[0-1],a2b3")) self.assertEqual(nsdiff, NodeSet("a[2-3,6-10]b0,a[6-10]b1,a[7-10]b2,a2b3,a[2-3]b4")) # manually checked ns1 = NodeSet("a[2-3,5]b4c[1,4],a6b4c5") ns2 = NodeSet("a5b4c4,a6b4c5") nsdiff = ns1.difference(ns2) self.assertEqual(str(nsdiff), "a[2-3]b4c[1,4],a5b4c1") self.assertEqual(nsdiff, NodeSet("a[2-3]b4c[1,4],a5b4c1")) self.assertEqual(nsdiff, NodeSet("a[2-3,5]b4c1,a[2-3]b4c4")) ns1 = NodeSet("a[1-6]b4") ns2 = NodeSet("a5b[2-5]") nsdiff = ns1.difference(ns2) self.assertEqual(str(nsdiff), "a[1-4,6]b4") self.assertEqual(nsdiff, NodeSet("a[1-4,6]b4")) def test_nd_xor(self): nodeset = NodeSet("artcore[3-999]p1") self.assertEqual(len(nodeset), 997) nodeset.symmetric_difference_update("artcore[1-2000]p1") self.assertEqual(str(nodeset), "artcore[1-2,1000-2000]p1") self.assertEqual(len(nodeset), 1003) nodeset = NodeSet("artcore[3-999]p1,lounge") self.assertEqual(len(nodeset), 998) nodeset.symmetric_difference_update("artcore[1-2000]p1") self.assertEqual(len(nodeset), 1004) self.assertEqual(str(nodeset), "artcore[1-2,1000-2000]p1,lounge") nodeset = NodeSet("artcore[3-999]p1,lounge") self.assertEqual(len(nodeset), 998) nodeset.symmetric_difference_update("artcore[1-2000]p1,lounge") self.assertEqual(len(nodeset), 1003) self.assertEqual(str(nodeset), "artcore[1-2,1000-2000]p1") nodeset = NodeSet("artcore[3-999]p1,lounge") self.assertEqual(len(nodeset), 998) nodeset2 = NodeSet("artcore[1-2000]p1,lounge") nodeset.symmetric_difference_update(nodeset2) self.assertEqual(len(nodeset), 1003) self.assertEqual(str(nodeset), "artcore[1-2,1000-2000]p1") self.assertEqual(len(nodeset2), 2001) # check const argument nodeset.symmetric_difference_update("artcore[1-2000]p1,lounge") self.assertEqual(len(nodeset), 998) self.assertEqual(str(nodeset), "artcore[3-999]p1,lounge") # first = NodeSet("a[2-3,5]b[1,4],a6b5") second = NodeSet("a[4-6]b[3-6]") first.symmetric_difference_update(second) self.assertEqual(str(first), "a[2-3]b[1,4],a4b[3-6],a5b[1,3,5-6],a6b[3-4,6]") self.assertEqual(first, NodeSet("a[2-3]b[1,4],a4b[3-6],a5b[1,3,5-6],a6b[3-4,6]")) self.assertEqual(first, NodeSet("a[4-6]b[3,6],a[2-3]b[1,4],a4b[4-5],a5b[1,5],a6b4")) first = NodeSet("a[1-50]b[1-20]") second = NodeSet("a[40-60]b[10-30]") first.symmetric_difference_update(second) self.assertEqual(str(first), "a[1-39]b[1-20],a[51-60]b[10-30],a[40-50]b[1-9,21-30]") self.assertEqual(first, NodeSet("a[1-39]b[1-20],a[40-60]b[21-30],a[51-60]b[10-20],a[40-50]b[1-9]")) first = NodeSet("a[1-2]p[1-2]") second = NodeSet("a[2-3]p[2-3]") 
first.symmetric_difference_update(second) self.assertEqual(str(first), "a1p[1-2],a2p[1,3],a3p[2-3]") self.assertEqual(first, NodeSet("a1p1,a1p2,a2p1,a2p3,a3p2,a3p3")) first = NodeSet("a[3-29]p[1-9,50-58]") second = NodeSet("a[1-110]p[4-56]") first.symmetric_difference_update(second) self.assertEqual(str(first), "a[1-2,30-110]p[4-56],a[3-29]p[1-3,10-49,57-58]") self.assertEqual(first, NodeSet("a[1-2,30-110]p[4-56],a[3-29]p[1-3,10-49,57-58]")) ns1 = NodeSet("a[1-6]b4") ns2 = NodeSet("a5b[2-5]") ns1.symmetric_difference_update(ns2) self.assertEqual(str(ns1), "a[1-4,6]b4,a5b[2-3,5]") self.assertEqual(ns1, NodeSet("a[1-4]b4,a5b[2-3,5],a6b4")) self.assertEqual(ns1, NodeSet("a[1-4,6]b4,a5b[2-3,5]")) def test_autostep(self): """test NodeSet autostep (1D)""" n1 = NodeSet("n1,n3,n5") # autostep arg does override origin autostep n2 = NodeSet(n1, autostep=3) self.assertEqual(str(n2), "n[1-5/2]") n2.update("p2,p5,p8") self.assertEqual(str(n2), "n[1-5/2],p[2-8/3]") n3 = NodeSet(n2, autostep=AUTOSTEP_DISABLED) self.assertEqual(str(n2), "n[1-5/2],p[2-8/3]") self.assertEqual(str(n3), "n[1,3,5],p[2,5,8]") # test xor, the other operation that can add nodes n4 = NodeSet() n4.symmetric_difference_update(n2) self.assertEqual(str(n2), "n[1-5/2],p[2-8/3]") self.assertEqual(str(n4), "n[1-5/2],p[2-8/3]") n5 = NodeSet(autostep=AUTOSTEP_DISABLED) n5.symmetric_difference_update(n2) self.assertEqual(str(n2), "n[1-5/2],p[2-8/3]") self.assertEqual(str(n5), "n[1,3,5],p[2,5,8]") n4 = NodeSet() n4b = n4.symmetric_difference(n2) self.assertEqual(str(n2), "n[1-5/2],p[2-8/3]") self.assertEqual(str(n4), "") self.assertEqual(str(n4b), "n[1-5/2],p[2-8/3]") n5 = NodeSet(autostep=AUTOSTEP_DISABLED) n5b = n5.symmetric_difference(n2) self.assertEqual(str(n2), "n[1-5/2],p[2-8/3]") self.assertEqual(str(n5), "") self.assertEqual(str(n5b), "n[1,3,5],p[2,5,8]") def test_autostep_property(self): """test NodeSet autostep property (1D)""" n1 = NodeSet("n1,n3,n5,p04,p07,p10,p13") self.assertEqual(str(n1), "n[1,3,5],p[04,07,10,13]") self.assertEqual(len(n1), 7) self.assertEqual(n1.autostep, None) n1.autostep = 2 self.assertEqual(str(n1), "n[1-5/2],p[04-13/3]") self.assertEqual(n1.autostep, 2) self.assertEqual(len(n1), 7) n1.autostep = 5 self.assertEqual(str(n1), "n[1,3,5],p[04,07,10,13]") n1.autostep = 4 self.assertEqual(str(n1), "n[1,3,5],p[04-13/3]") n1.autostep = 3 self.assertEqual(str(n1), "n[1-5/2],p[04-13/3]") self.assertEqual(len(n1), 7) n1.autostep = None self.assertEqual(str(n1), "n[1,3,5],p[04,07,10,13]") self.assertEqual(n1.autostep, None) self.assertEqual(len(n1), 7) # check change + init/copy n1.autostep = 4 n2 = NodeSet(n1) self.assertEqual(n1.autostep, 4) # autostep set as 'inherit' self.assertEqual(n2.autostep, None) # check that self.assertEqual(str(n2), "n[1,3,5],p[04-13/3]") n2.autostep = 2 self.assertEqual(str(n2), "n[1-5/2],p[04-13/3]") self.assertEqual(n1.autostep, 4) # no change self.assertEqual(n2.autostep, 2) n1.autostep = 4 n2 = NodeSet(n1, autostep=2) self.assertEqual(n1.autostep, 4) self.assertEqual(n2.autostep, 2) self.assertEqual(str(n2), "n[1-5/2],p[04-13/3]") n1.autostep = 4 n2 = NodeSet(n1, autostep=AUTOSTEP_DISABLED) self.assertEqual(n1.autostep, 4) self.assertEqual(n2.autostep, AUTOSTEP_DISABLED) self.assertEqual(str(n2), "n[1,3,5],p[04,07,10,13]") n1.autostep = 3 self.assertEqual(n1.copy().autostep, 3) n1 = NodeSet("n09,n11") self.assertEqual(str(n1), "n[09,11]") self.assertEqual(len(n1), 2) self.assertEqual(n1.autostep, None) n1.autostep = 2 self.assertEqual(str(n1), "n[09-11/2]") n1.autostep = 3 
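        # Illustrative sketch (not from the original suite): autostep=N only
        # folds a stepped range when at least N indices fit it, so the two
        # nodes above stay expanded with autostep=3, while for example:
        #
        #   >>> from ClusterShell.NodeSet import NodeSet
        #   >>> str(NodeSet("n2,n4,n6,n8", autostep=4))
        #   'n[2-8/2]'
        #   >>> str(NodeSet("n2,n4,n6,n8", autostep=5))
        #   'n[2,4,6,8]'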
self.assertEqual(str(n1), "n[09,11]") n1 = NodeSet("n1,n3,n5,p03,p06,p09,p012,p015") self.assertEqual(str(n1), "n[1,3,5],p[03,06,09,012,015]") self.assertEqual(len(n1), 8) self.assertEqual(n1.autostep, None) n1.autostep = 2 self.assertEqual(str(n1), "n[1-5/2],p[03-09/3,012-015/3]") n1.autostep = 3 self.assertEqual(str(n1), "n[1-5/2],p[03-09/3,012,015]") n1 = NodeSet("n1,n3,n5,p03,p06,p09") self.assertEqual(str(n1), "n[1,3,5],p[03,06,09]") self.assertEqual(len(n1), 6) self.assertEqual(n1.autostep, None) n1.autostep = 2 self.assertEqual(str(n1), "n[1-5/2],p[03-09/3]") self.assertEqual(n1.autostep, 2) n1 = NodeSet("n1,n03,n05") self.assertEqual(str(n1), "n[1,03,05]") self.assertEqual(len(n1), 3) self.assertEqual(n1.autostep, None) n1.autostep = 3 self.assertEqual(str(n1), "n[1,03,05]") n1 = NodeSet("n1,n03,n05,n07") self.assertEqual(str(n1), "n[1,03,05,07]") self.assertEqual(len(n1), 4) self.assertEqual(n1.autostep, None) n1.autostep = 2 self.assertEqual(str(n1), "n[1,03-07/2]") def test_nd_autostep(self): """test NodeSet autostep (nD)""" n1 = NodeSet("p2n1,p2n3,p2n5") # autostep arg does override origin autostep n2 = NodeSet(n1, autostep=3) self.assertEqual(str(n1), "p2n[1,3,5]") # no change! self.assertEqual(str(n2), "p2n[1-5/2]") # test multi-pattern nD n2.update("p2p2,p2p4,p2p6") self.assertEqual(str(n1), "p2n[1,3,5]") # no change! self.assertEqual(str(n2), "p2n[1-5/2],p2p[2-6/2]") n3 = NodeSet("p2x1,p2x4,p2x7") n2.update(n3) self.assertEqual(str(n3), "p2x[1,4,7]") # no change! self.assertEqual(str(n2), "p2n[1-5/2],p2p[2-6/2],p2x[1-7/3]") # add nodes to same pattern (but not the first one) n4 = NodeSet("p2p8,p2p14,p2p20") n2.update(n4) self.assertEqual(str(n4), "p2p[8,14,20]") # no change! self.assertEqual(str(n2), "p2n[1-5/2],p2p[2-8/2,14,20],p2x[1-7/3]") n4 = NodeSet(n2, autostep=AUTOSTEP_DISABLED) # no change on n2... self.assertEqual(str(n2), "p2n[1-5/2],p2p[2-8/2,14,20],p2x[1-7/3]") # explicitly disabled on n4 n4_noautostep_str = "p2n[1,3,5],p2p[2,4,6,8,14,20],p2x[1,4,7]" self.assertEqual(str(n4), n4_noautostep_str) # test xor, the other operation that can add nodes n5 = NodeSet() n5.symmetric_difference_update(n2) self.assertEqual(str(n5), "p2n[1-5/2],p2p[2-8/2,14,20],p2x[1-7/3]") n6 = NodeSet(autostep=AUTOSTEP_DISABLED) n6.symmetric_difference_update(n2) self.assertEqual(str(n6), n4_noautostep_str) n5 = NodeSet() n5b = n5.symmetric_difference(n2) # no change on n2... self.assertEqual(str(n2), "p2n[1-5/2],p2p[2-8/2,14,20],p2x[1-7/3]") self.assertEqual(str(n5), "") self.assertEqual(str(n5b), "p2n[1-5/2],p2p[2-8/2,14,20],p2x[1-7/3]") n6 = NodeSet(autostep=AUTOSTEP_DISABLED) n6b = n6.symmetric_difference(n2) # no change on n2... self.assertEqual(str(n2), "p2n[1-5/2],p2p[2-8/2,14,20],p2x[1-7/3]") self.assertEqual(str(n6), "") self.assertEqual(str(n6b), n4_noautostep_str) def test_nd_autostep_property(self): """test NodeSet autostep property (nD)""" n1 = NodeSet("p1n4,p2x011,p1n6,p2x015,p1n2,p2x019,p1n0,p2x003") self.assertEqual(str(n1), "p1n[0,2,4,6],p2x[003,011,015,019]") self.assertEqual(len(n1), 8) self.assertEqual(n1.autostep, None) n1.autostep = 2 # 2 is really a too small value for autostep, but well... 
self.assertEqual(str(n1), "p1n[0-6/2],p2x[003-011/8,015-019/4]") self.assertEqual(n1.autostep, 2) self.assertEqual(len(n1), 8) n1.autostep = 5 self.assertEqual(str(n1), "p1n[0,2,4,6],p2x[003,011,015,019]") n1.autostep = 4 self.assertEqual(str(n1), "p1n[0-6/2],p2x[003,011,015,019]") n1.autostep = 3 self.assertEqual(str(n1), "p1n[0-6/2],p2x[003,011-019/4]") self.assertEqual(len(n1), 8) n1.autostep = None self.assertEqual(str(n1), "p1n[0,2,4,6],p2x[003,011,015,019]") self.assertEqual(n1.autostep, None) self.assertEqual(len(n1), 8) # check change + init/copy n1.autostep = 4 n2 = NodeSet(n1) self.assertEqual(n1.autostep, 4) # autostep set as 'inherit' self.assertEqual(n2.autostep, None) # check that self.assertEqual(str(n2), "p1n[0-6/2],p2x[003,011,015,019]") n2.autostep = 2 self.assertEqual(str(n2), "p1n[0-6/2],p2x[003-011/8,015-019/4]") self.assertEqual(n1.autostep, 4) # no change self.assertEqual(n2.autostep, 2) n1.autostep = 4 n2 = NodeSet(n1, autostep=2) self.assertEqual(n1.autostep, 4) self.assertEqual(n2.autostep, 2) self.assertEqual(str(n2), "p1n[0-6/2],p2x[003-011/8,015-019/4]") n1.autostep = 4 n2 = NodeSet(n1, autostep=AUTOSTEP_DISABLED) self.assertEqual(n1.autostep, 4) self.assertEqual(n2.autostep, AUTOSTEP_DISABLED) self.assertEqual(str(n2), "p1n[0,2,4,6],p2x[003,011,015,019]") n1.autostep = 3 self.assertEqual(n1.copy().autostep, 3) def test_nd_fold_axis(self): """test NodeSet fold_axis feature""" n1 = NodeSet("a3b2c0,a2b3c1,a2b4c1,a1b2c0,a1b2c1,a3b2c1,a2b5c1") # default dim is unlimited self.assertEqual(str(n1), "a[1,3]b2c[0-1],a2b[3-5]c1") self.assertEqual(len(n1), 7) # fold along three axis n1.fold_axis = (0, 1, 2) self.assertEqual(str(n1), "a[1,3]b2c[0-1],a2b[3-5]c1") self.assertEqual(len(n1), 7) # fold along one axis n1.fold_axis = [0] self.assertEqual(str(n1), "a[1,3]b2c0,a[1,3]b2c1,a2b3c1,a2b4c1,a2b5c1") self.assertEqual(len(n1), 7) n1.fold_axis = [1] self.assertEqual(str(n1), "a1b2c0,a3b2c0,a1b2c1,a3b2c1,a2b[3-5]c1") self.assertEqual(len(n1), 7) n1.fold_axis = [2] self.assertEqual(str(n1), "a1b2c[0-1],a3b2c[0-1],a2b3c1,a2b4c1,a2b5c1") self.assertEqual(len(n1), 7) # reverse n1.fold_axis = [-1] self.assertEqual(str(n1), "a1b2c[0-1],a3b2c[0-1],a2b3c1,a2b4c1,a2b5c1") self.assertEqual(len(n1), 7) n1.fold_axis = [-2] self.assertEqual(str(n1), "a1b2c0,a3b2c0,a1b2c1,a3b2c1,a2b[3-5]c1") self.assertEqual(len(n1), 7) n1.fold_axis = [-3] self.assertEqual(str(n1), "a[1,3]b2c0,a[1,3]b2c1,a2b3c1,a2b4c1,a2b5c1") self.assertEqual(len(n1), 7) # out of bound silently re-expand everything n1.fold_axis = [3] self.assertEqual(str(n1), "a1b2c0,a3b2c0,a1b2c1,a3b2c1,a2b3c1,a2b4c1,a2b5c1") n1.fold_axis = [-4] self.assertEqual(str(n1), "a1b2c0,a3b2c0,a1b2c1,a3b2c1,a2b3c1,a2b4c1,a2b5c1") # fold along two axis n1.fold_axis = [0, 1] self.assertEqual(str(n1), "a[1,3]b2c0,a[1,3]b2c1,a2b[3-5]c1") self.assertEqual(len(n1), 7) n1.fold_axis = [0, 2] self.assertEqual(str(n1), "a[1,3]b2c[0-1],a2b3c1,a2b4c1,a2b5c1") self.assertEqual(len(n1), 7) n1.fold_axis = [1, 2] self.assertEqual(str(n1), "a1b2c[0-1],a3b2c[0-1],a2b[3-5]c1") self.assertEqual(len(n1), 7) # reset fold_axis n1.fold_axis = None self.assertEqual(str(n1), "a[1,3]b2c[0-1],a2b[3-5]c1") self.assertEqual(len(n1), 7) # fold_axis: constructor and copy n1.fold_axis = (0, 2) n2 = NodeSet(n1) self.assertEqual(n1.fold_axis, (0, 2)) self.assertTrue(n2.fold_axis is None) n2 = NodeSet(n1, fold_axis=n1.fold_axis) self.assertEqual(n1.fold_axis, (0, 2)) self.assertEqual(n2.fold_axis, (0, 2)) self.assertEqual(str(n2), "a[1,3]b2c[0-1],a2b3c1,a2b4c1,a2b5c1") # 
# fold_axis is kept when using copy()
        n2 = n1.copy()
        self.assertEqual(n1.fold_axis, (0, 2))
        self.assertEqual(n2.fold_axis, (0, 2))
        self.assertEqual(str(n2), "a[1,3]b2c[0-1],a2b3c1,a2b4c1,a2b5c1")

    def test_nd_fold_axis_multi(self):
        """test NodeSet fold_axis feature (ultimate)"""
        # A single variable-nD nodeset
        n1 = NodeSet("master,slave,ln0,ln1,da1c1,da1c2,da2c1,da2c2,"
                     "x1y1z1,x1y1z2,x1y2z1,x1y2z2,"
                     "x2y1z1,x2y1z2,x2y2z1,x2y2z2")
        # default is unlimited
        self.assertEqual(str(n1),
                         "da[1-2]c[1-2],ln[0-1],master,slave,x[1-2]y[1-2]z[1-2]")
        self.assertEqual(len(n1), 16)
        # fold along one axis
        n1.fold_axis = [0]
        self.assertEqual(str(n1), "da[1-2]c1,da[1-2]c2,ln[0-1],master,slave,x[1-2]y1z1,x[1-2]y2z1,x[1-2]y1z2,x[1-2]y2z2")
        self.assertEqual(len(n1), 16)
        n1.fold_axis = [1]
        self.assertEqual(str(n1), "da1c[1-2],da2c[1-2],ln0,ln1,master,slave,x1y[1-2]z1,x2y[1-2]z1,x1y[1-2]z2,x2y[1-2]z2")
        self.assertEqual(len(n1), 16)
        n1.fold_axis = [2]
        self.assertEqual(str(n1), "da1c1,da2c1,da1c2,da2c2,ln0,ln1,master,slave,x1y1z[1-2],x2y1z[1-2],x1y2z[1-2],x2y2z[1-2]")
        self.assertEqual(len(n1), 16)
        # reverse
        n1.fold_axis = [-1]  # first indices from the end
        self.assertEqual(str(n1), "da1c[1-2],da2c[1-2],ln[0-1],master,slave,x1y1z[1-2],x2y1z[1-2],x1y2z[1-2],x2y2z[1-2]")
        self.assertEqual(len(n1), 16)
        n1.fold_axis = [-2]  # second indices from the end
        self.assertEqual(str(n1), "da[1-2]c1,da[1-2]c2,ln0,ln1,master,slave,x1y[1-2]z1,x2y[1-2]z1,x1y[1-2]z2,x2y[1-2]z2")
        self.assertEqual(len(n1), 16)
        n1.fold_axis = [-3]  # etc.
        self.assertEqual(str(n1), "da1c1,da2c1,da1c2,da2c2,ln0,ln1,master,slave,x[1-2]y1z1,x[1-2]y2z1,x[1-2]y1z2,x[1-2]y2z2")
        self.assertEqual(len(n1), 16)
        # out-of-bound axes silently re-expand everything
        n1.fold_axis = [3]
        self.assertEqual(str(n1), "da1c1,da2c1,da1c2,da2c2,ln0,ln1,master,slave,x1y1z1,x2y1z1,x1y2z1,x2y2z1,x1y1z2,x2y1z2,x1y2z2,x2y2z2")
        n1.fold_axis = [-4]
        self.assertEqual(str(n1), "da1c1,da2c1,da1c2,da2c2,ln0,ln1,master,slave,x1y1z1,x2y1z1,x1y2z1,x2y2z1,x1y1z2,x2y1z2,x1y2z2,x2y2z2")
        # fold along two axes
        n1.fold_axis = [0, 1]
        self.assertEqual(str(n1), "da[1-2]c[1-2],ln[0-1],master,slave,x[1-2]y[1-2]z1,x[1-2]y[1-2]z2")
        self.assertEqual(len(n1), 16)
        n1.fold_axis = [0, 2]
        self.assertEqual(str(n1), "da[1-2]c1,da[1-2]c2,ln[0-1],master,slave,x[1-2]y1z[1-2],x[1-2]y2z[1-2]")
        self.assertEqual(len(n1), 16)
        n1.fold_axis = [1, 2]
        self.assertEqual(str(n1), "da1c[1-2],da2c[1-2],ln0,ln1,master,slave,x1y[1-2]z[1-2],x2y[1-2]z[1-2]")
        self.assertEqual(len(n1), 16)
        # fold along three axes
        n1.fold_axis = range(3)
        self.assertEqual(str(n1), "da[1-2]c[1-2],ln[0-1],master,slave,x[1-2]y[1-2]z[1-2]")
        self.assertEqual(len(n1), 16)

    def test_unicode(self):
        """test NodeSet with unicode string"""
        nodeset = NodeSet(u"node1")
        self._assertNode(nodeset, "node1")
        if sys.version_info < (3, 0, 0):
            # unicode cannot work in Python 2 as we use str() internally
            self.assertRaises(UnicodeEncodeError, NodeSet, u"\u0ad0[000-042]")
        else:
            # unicode is supported in Python 3
            self.assertEqual(str(NodeSet(u"\u0ad0[000-042]")),
                             u"\u0ad0[000-042]")
            self.assertEqual(str(NodeSet(u"\u0ad0[000-042]")), "ૐ[000-042]")

    def test_nd_fold_padding(self):
        """test NodeSet nD heuristic folding with padding"""
        # Ticket #286 - not broken in 1.7
        n1 = NodeSet("n1c01,n1c02,n1c03,n1c04,n1c05,n1c06,n1c07,n1c08,n1c09,n2c01,n2c02,n2c03,n2c04,n2c05,n2c06,n2c07,n2c08,n2c09,n3c01,n3c02,n3c03,n3c04,n3c05,n3c06,n3c07,n3c08,n3c09,n4c01,n4c02,n4c03,n4c04,n4c05,n4c06,n4c07")
        self.assertEqual(str(n1), "n[1-3]c[01-09],n4c[01-07]")
        self.assertEqual(len(n1), 34)
        # Ticket #286 - broken in 1.7 -
        # trigger RangeSetND._fold_multivariate_expand full expand
        n1 = NodeSet("n1c01,n1c02,n1c03,n1c04,n1c05,n1c06,n1c07,n1c08,n1c09,n2c01,n2c02,n2c03,n2c04,n2c05,n2c06,n2c07,n2c08,n2c09,n3c01,n3c02,n3c03,n3c04,n3c05,n3c06,n3c07,n3c08,n3c09,n4c01,n4c02,n4c03,n4c04,n4c05,n4c06,n4c07,n4c08,n4c09")
        self.assertEqual(str(n1), "n[1-4]c[01-09]")
        self.assertEqual(len(n1), 36)

    def test_fully_numeric(self):
        # supported from 1.8 (#338)
        n1 = NodeSet("3,5,[7-10,40]")
        self.assertEqual(str(n1), "[3,5,7-10,40]")
        self.assertEqual(len(n1), 7)
        n1 = NodeSet("nova3,nova4,5,nova6")
        self.assertEqual(str(n1), "5,nova[3-4,6]")
        self.assertEqual(len(n1), 4)
        n1 = NodeSet("nova3,nova4,[5-8],nova6")
        self.assertEqual(str(n1), "[5-8],nova[3-4,6]")
        self.assertEqual(len(n1), 7)
        n1 = NodeSet("[0-10]")
        self.assertEqual(str(n1), "[0-10]")
        self.assertEqual(len(n1), 11)
        # leading 0 along with mixed lengths padding
        self._assertEqual("[07-09,010]")
        self._assertNS("0[7-10]", NodeSetParseRangeError)
        self._assertNS("0[7-9,10]", NodeSetParseRangeError)  # expanded to 7-10 first
        self._assertEqual("0[7-9,010]", "[07-09,0010]")
        self._assertEqual("0[07-09,10]", "[007-010]")
        self._assertEqual("0[07-09,010]", "[007-009,0010]")
        self._assertNS("0[0-10]", NodeSetParseRangeError)
        # trailing 0 along with mixed lengths padding
        self._assertNS("[0-10]0", NodeSetParseRangeError)
        self._assertNS("0[0-10]0", NodeSetParseRangeError)

    def test_negative_ranges(self):
        # supported from 1.9.1 (#515)
        n1 = NodeSet("n[-5]")
        self.assertEqual(str(n1), "n-5")
        self.assertEqual(len(n1), 1)
        n1 = NodeSet("n[-1-0]")
        self.assertEqual(str(n1), "n[-1-0]")
        self.assertEqual(len(n1), 2)
        n1 = NodeSet("n[-5-5]")
        self.assertEqual(str(n1), "n[-5-5]")
        self.assertEqual(len(n1), 2*5+1)
        n1 = NodeSet("n[-12-12]")
        self.assertEqual(str(n1), "n[-12-12]")
        self.assertEqual(len(n1), 12*2+1)
        n1 = NodeSet("n[-12,-10--9,-5--1,1-5]")
        self.assertEqual(str(n1), "n[-12,-10--9,-5--1,1-5]")
        self.assertEqual(len(n1), 13)
        n1 = NodeSet("n[1-5,-12,-5--1,-10--9]")
        self.assertEqual(str(n1), "n[-12,-10--9,-5--1,1-5]")
        self.assertEqual(len(n1), 13)
        n1 = NodeSet("n[1,-10,2-4,-12,5,-5,-4,-3,-9,-2--1]")
        self.assertEqual(str(n1), "n[-12,-10--9,-5--1,1-5]")
        self.assertEqual(len(n1), 13)
        n1 = NodeSet("n[1,-10,2-4,-12],n[5,-5,-4,-3,-9,-2--1]")
        self.assertEqual(str(n1), "n[-12,-10--9,-5--1,1-5]")
        self.assertEqual(len(n1), 13)
        # padding in negative range is not supported
        self._assertNS("n[-001]", NodeSetParseRangeError)

ClusterShell-1.9.2/tests/RangeSetNDTest.py

# ClusterShell.RangeSet.RangeSetND test suite
# Written by S.
Thiell """Unit test for RangeSetND""" import sys import unittest import warnings from ClusterShell.RangeSet import RangeSet, RangeSetND class RangeSetNDTest(unittest.TestCase): def setUp(self): warnings.simplefilter("always") def _testRS(self, test, res, length): r1 = RangeSetND(test, autostep=3) self.assertEqual(str(r1), res) self.assertEqual(len(r1), length) def test_simple(self): # Test constructors self._testRS(None, "", 0) self._testRS([["0-10"], ["40-60"]], "0-10,40-60\n", 32) self._testRS([["0-2", "1-2"], ["10", "3-5"]], "0-2; 1-2\n10; 3-5\n", 9) self._testRS([[0, 1], [0, 2], [2, 2], [2, 1], [1, 1], [1, 2], [10, 4], [10, 5], [10, 3]], "0-2; 1-2\n10; 3-5\n", 9) self._testRS([(0, 4), (0, 5), (1, 4), (1, 5)], "0-1; 4-5\n", 4) # construct with copy_rangeset=False r0 = RangeSet("0-10,30-40,50") r1 = RangeSet("200-202") rn = RangeSetND([[r0, r1]], copy_rangeset=False) self.assertEqual(str(rn), "0-10,30-40,50; 200-202\n") self.assertEqual(len(rn), 69) def test_vectors(self): rn = RangeSetND([["0-10", "1-2"], ["5-60", "2"]]) # vectors() should perform automatic folding self.assertEqual([[RangeSet("11-60"), RangeSet("2")], [RangeSet("0-10"), RangeSet("1-2")],], list(rn.vectors())) self.assertEqual(str(rn), "11-60; 2\n0-10; 1-2\n") self.assertEqual(len(rn), 72) def test_nonzero(self): r0 = RangeSetND() if r0: self.assertFalse("nonzero failed") r1 = RangeSetND([["0-10"], ["40-60"]]) if not r1: self.assertFalse("nonzero failed") def test_eq(self): r0 = RangeSetND() r1 = RangeSetND() r2 = RangeSetND([["0-10", "1-2"], ["40-60", "1-3"]]) r3 = RangeSetND([["0-10"], ["40-60"]]) self.assertEqual(r0, r1) self.assertNotEqual(r0, r2) self.assertNotEqual(r0, r3) self.assertNotEqual(r2, r3) self.assertFalse(r3 == "foobar") # NotImplemented => object address comparison def test_dim(self): r0 = RangeSetND() self.assertEqual(r0.dim(), 0) r1 = RangeSetND([["0-10", "1-2"], ["40-60", "1-3"]]) self.assertEqual(r1.dim(), 2) def test_fold(self): r1 = RangeSetND([["0-10", "1-2"], ["5-15,40-60", "1-3"], ["0-4", "3"]]) r1.fold() self.assertEqual(str(r1._veclist), "[[0-15,40-60, 1-3]]") self.assertEqual(str(r1), "0-15,40-60; 1-3\n") def test_contains(self): r1 = RangeSetND([["0-10"], ["40-60"]]) r2 = RangeSetND() self.assertTrue(r2 in r1) # <=> issubset() r1 = RangeSetND() r2 = RangeSetND([["0-10"], ["40-60"]]) self.assertFalse(r2 in r1) r1 = RangeSetND([["0-10"], ["40-60"]]) r2 = RangeSetND([["4-8"], ["10,40-41"]]) self.assertTrue(r2 in r1) r1 = RangeSetND([["0-10", "1-2"], ["40-60", "2-5"]]) r2 = RangeSetND([["4-8", "1"], ["10,40-41", "2"]]) self.assertTrue(r2 in r1) r1 = RangeSetND([["0-10", "1-2"], ["40-60", "2-5"]]) r2 = RangeSetND([["4-8", "5"], ["10,40-41", "2"]]) self.assertTrue(r2 not in r1) r1 = RangeSetND([["0-10"], ["40-60"]]) self.assertTrue("10" in r1) self.assertTrue(10 in r1) self.assertFalse(11 in r1) def test_subset_superset(self): r1 = RangeSetND([["0-10"], ["40-60"]]) self.assertTrue(r1.issubset(r1)) self.assertTrue(r1.issuperset(r1)) r2 = RangeSetND([["0-10"], ["40-60"]]) self.assertTrue(r2.issubset(r1)) self.assertTrue(r1.issubset(r2)) self.assertTrue(r2.issuperset(r1)) self.assertTrue(r1.issuperset(r2)) r1 = RangeSetND([["0-10"], ["40-60"]]) r2 = RangeSetND() self.assertTrue(r2.issubset(r1)) self.assertFalse(r1.issubset(r2)) self.assertTrue(r1.issuperset(r2)) self.assertFalse(r2.issuperset(r1)) r1 = RangeSetND([["0-10"], ["40-60"]]) r2 = RangeSetND([["4"], ["10,40-41"]]) self.assertFalse(r1.issubset(r2)) self.assertFalse(r1 < r2) self.assertTrue(r2.issubset(r1)) self.assertTrue(r2 < r1) 
self.assertTrue(r1.issuperset(r2)) self.assertTrue(r1 > r2) self.assertFalse(r2.issuperset(r1)) self.assertFalse(r2 > r1) r1 = RangeSetND([["0-10", "1-2"], ["40-60", "2-5"]]) r2 = RangeSetND([["4-8", "1"], ["10,40-41", "2"]]) self.assertFalse(r1.issubset(r2)) self.assertFalse(r1 < r2) self.assertTrue(r2.issubset(r1)) self.assertTrue(r2 < r1) self.assertTrue(r1.issuperset(r2)) self.assertTrue(r1 > r2) self.assertFalse(r2.issuperset(r1)) self.assertFalse(r2 > r1) def test_sorting(self): # Test internal sorting algo # sorting condition (1) -- see RangeSetND._sort() self._testRS([["40-60", "5"], ["10-12", "6"]], "40-60; 5\n10-12; 6\n", 24) # sorting condition (2) self._testRS([["40-42", "5,7"], ["10-12", "6"]], "40-42; 5,7\n10-12; 6\n", 9) self._testRS([["40-42", "5"], ["10-12", "6-7"]], "10-12; 6-7\n40-42; 5\n", 9) # sorting condition (3) self._testRS([["40-60", "5"], ["10-30", "6"]], "10-30; 6\n40-60; 5\n", 42) self._testRS([["10-30", "3", "5"], ["10-30", "2", "6"]], "10-30; 2; 6\n10-30; 3; 5\n", 42) self._testRS([["10-30", "2", "6"], ["10-30", "3", "5"]], "10-30; 2; 6\n10-30; 3; 5\n", 42) # sorting condition (4) self._testRS([["10-30", "2,6", "6"], ["10-30", "2-3", "5"]], "10-30; 2; 5-6\n10-30; 3; 5\n10-30; 6; 6\n", 84) # the following test triggers folding loop protection self._testRS([["40-60", "5"], ["30-50", "6"]], "40-50; 5-6\n30-39; 6\n51-60; 5\n", 42) # 1D self._testRS([["40-60"], ["10-12"]], "10-12,40-60\n", 24) def test_folding(self): self._testRS([["0-10"], ["11-60"]], "0-60\n", 61) self._testRS([["0-2", "1-2"], ["3", "1-2"]], "0-3; 1-2\n", 8) self._testRS([["3", "1-3"], ["0-2", "1-2"]], "0-2; 1-2\n3; 1-3\n", 9) self._testRS([["0-2", "1-2"], ["3", "1-3"]], "0-2; 1-2\n3; 1-3\n", 9) self._testRS([["0-2", "1-2"], ["1-3", "1-3"]], "1-3; 1-3\n0; 1-2\n", 11) self._testRS([["0-2", "1-2", "0-4"], ["3", "1-2", "0-5"]], "0-2; 1-2; 0-4\n3; 1-2; 0-5\n", 42) self._testRS([["0-2", "1-2", "0-4"], ["1-3", "1-3", "0-4"]], "1-3; 1-3; 0-4\n0; 1-2; 0-4\n", 55) # triggers full expand heuristic veclist = [item for x in range(0, 22, 2) for item in [(x,0), (x,1)]] self._testRS(veclist, "0-20/2; 0-1\n", 22) # the following test triggers folding loop protection self._testRS([["0-100", "50-200"], ["2-101", "49"]], "2-100; 49-200\n0-1; 50-200\n101; 49\n", 15351) # the following test triggers full expand veclist = [] for v1, v2, v3 in zip(range(30), range(5, 35), range(10, 40)): veclist.append((v1, v2, v3)) self._testRS(veclist, "0; 5; 10\n1; 6; 11\n10; 15; 20\n11; 16; 21\n12; 17; 22\n13; 18; 23\n14; 19; 24\n15; 20; 25\n16; 21; 26\n17; 22; 27\n18; 23; 28\n19; 24; 29\n2; 7; 12\n20; 25; 30\n21; 26; 31\n22; 27; 32\n23; 28; 33\n24; 29; 34\n25; 30; 35\n26; 31; 36\n27; 32; 37\n28; 33; 38\n29; 34; 39\n3; 8; 13\n4; 9; 14\n5; 10; 15\n6; 11; 16\n7; 12; 17\n8; 13; 18\n9; 14; 19\n", 30) def test_union(self): rn1 = RangeSetND([["10-100", "1-3"], ["1100-1300", "2-3"]]) self.assertEqual(str(rn1), "1100-1300; 2-3\n10-100; 1-3\n") self.assertEqual(len(rn1), 675) rn2 = RangeSetND([["1100-1200", "1"], ["10-49", "1,3"]]) self.assertEqual(str(rn2), "12-13,1100-1200; 1\n10-11,14-49; 1,3\n12-13; 3\n") self.assertEqual(len(rn2), 181) rnu = rn1.union(rn2) self.assertEqual(str(rnu), "10-100,1100-1200; 1-3\n1201-1300; 2-3\n") self.assertEqual(len(rnu), 776) rnu2 = rn1 | rn2 self.assertEqual(str(rnu2), "10-100,1100-1200; 1-3\n1201-1300; 2-3\n") self.assertEqual(len(rnu2), 776) self.assertEqual(rnu, rnu2) # btw test __eq__ self.assertNotEqual(rnu, rn1) # btw test __eq__ self.assertNotEqual(rnu, rn2) # btw test __eq__ try: dummy = rn1 | 
"foobar" self.assertFalse("TypeError not raised") except TypeError: pass # binary error if sys.version_info >= (2, 5, 0): rn1 = RangeSetND([["10", "10-13"], ["10", "9-12"]]) rn2 = RangeSetND([["1100-1200", "1"], ["10-49", "1,3"]]) rn1 |= rn2 self.assertEqual(str(rn2), "12-13,1100-1200; 1\n10-11,14-49; 1,3\n12-13; 3\n") self.assertEqual(len(rn2), 181) rn2 = set([3, 5]) self.assertRaises(TypeError, rn1.__ior__, rn2) def test_difference(self): rn1 = RangeSetND([["10", "10-13"], ["0-3", "1-2"]]) rn2 = RangeSetND([["10", "12"]]) self.assertEqual(len(rn1), 12) rnres = rn1.difference(rn2) self.assertEqual(str(rnres), "0-3; 1-2\n10; 10-11,13\n") self.assertEqual(len(rnres), 11) rn1 = RangeSetND([["0-2", "1-3", "4-5"]]) rn2 = RangeSetND([["0-2", "1-3", "4"]]) rnres = rn1.difference(rn2) self.assertEqual(str(rnres), "0-2; 1-3; 5\n") self.assertEqual(len(rnres), 9) rn1 = RangeSetND([["0-2", "1-3", "4-5"]]) rn2 = RangeSetND([["10-40", "20-120", "0-100"]]) rnres = rn1.difference(rn2) self.assertEqual(str(rnres), "0-2; 1-3; 4-5\n") self.assertEqual(len(rnres), 18) rn1 = RangeSetND([["0-2", "1-3", "4-5"]]) rn2 = RangeSetND([["10-40", "20-120", "100-200"]]) rnres = rn1.difference(rn2) self.assertEqual(str(rnres), "0-2; 1-3; 4-5\n") self.assertEqual(len(rnres), 18) rnres2 = rn1 - rn2 self.assertEqual(str(rnres2), "0-2; 1-3; 4-5\n") self.assertEqual(len(rnres2), 18) self.assertEqual(rnres, rnres2) # btw test __eq__ try: dummy = rn1 - "foobar" self.assertFalse("TypeError not raised") except TypeError: pass def test_difference_update(self): rn1 = RangeSetND([["10", "10-13"], ["10", "9-12"]]) rn2 = RangeSetND([["10", "10"]]) rn1.difference_update(rn2) self.assertEqual(len(rn1), 4) self.assertEqual(str(rn1), "10; 9,11-13\n") rn1 = RangeSetND([["10", "10-13"], ["10", "9-12"], ["8-9", "12-15"]]) rn2 = RangeSetND([["10", "10"], ["9", "12-15"]]) rn1.difference_update(rn2) self.assertEqual(len(rn1), 8) self.assertEqual(str(rn1), "10; 9,11-13\n8; 12-15\n") rn1 = RangeSetND([["10", "10-13"], ["10", "9-12"], ["8-9", "12-15"]]) rn2 = RangeSetND([["10", "10"], ["9", "12-15"], ["10-12", "11-15"], ["11", "14"]]) rn1.difference_update(rn2) self.assertEqual(len(rn1), 5) self.assertEqual(str(rn1), "8; 12-15\n10; 9\n") rn1 = RangeSetND([["10", "10-13"], ["10", "9-12"], ["8-9", "12-15"], ["10", "10-13"], ["10", "12-16"], ["9", "13-16"]]) rn2 = RangeSetND([["10", "10"], ["9", "12-15"], ["10-12", "11-15"], ["11", "14"]]) rn1.difference_update(rn2) self.assertEqual(len(rn1), 7) self.assertEqual(str(rn1), "8; 12-15\n10; 9,16\n9; 16\n") # strict mode rn1 = RangeSetND([["10", "10-13"], ["10", "9-12"], ["8-9", "12-15"]]) rn2 = RangeSetND([["10", "10"], ["9", "12-15"], ["10-12", "11-15"], ["11", "14"]]) self.assertRaises(KeyError, rn1.difference_update, rn2, strict=True) if sys.version_info >= (2, 5, 0): rn1 = RangeSetND([["10", "10-13"], ["10", "9-12"]]) rn2 = RangeSetND([["10", "10"]]) rn1 -= rn2 self.assertEqual(str(rn1), "10; 9,11-13\n") self.assertEqual(len(rn1), 4) # binary error rn2 = set([3, 5]) self.assertRaises(TypeError, rn1.__isub__, rn2) def test_intersection(self): rn1 = RangeSetND([["10", "10-13"], ["10", "9-12"], ["8-9", "12-15"]]) self.assertEqual(len(rn1), 13) self.assertEqual(str(rn1), "8-9; 12-15\n10; 9-13\n") rn2 = RangeSetND([["10", "10"], ["9", "12-15"]]) self.assertEqual(len(rn2), 5) self.assertEqual(str(rn2), "9; 12-15\n10; 10\n") rni = rn1.intersection(rn2) self.assertEqual(len(rni), 5) self.assertEqual(str(rni), "9; 12-15\n10; 10\n") self.assertEqual(len(rn1), 13) self.assertEqual(str(rn1), "8-9; 12-15\n10; 
9-13\n") self.assertEqual(len(rn2), 5) self.assertEqual(str(rn2), "9; 12-15\n10; 10\n") # test __and__ rni2 = rn1 & rn2 self.assertEqual(len(rni2), 5) self.assertEqual(str(rni2), "9; 12-15\n10; 10\n") self.assertEqual(len(rn1), 13) self.assertEqual(str(rn1), "8-9; 12-15\n10; 9-13\n") self.assertEqual(len(rn2), 5) self.assertEqual(str(rn2), "9; 12-15\n10; 10\n") self.assertEqual(rni, rni2) # btw test __eq__ try: dummy = rn1 & "foobar" self.assertFalse("TypeError not raised") except TypeError: pass def test_intersection_update(self): rn1 = RangeSetND([["10", "10-13"], ["10", "9-12"]]) self.assertEqual(len(rn1), 5) self.assertEqual(str(rn1), "10; 9-13\n") # self test: rn1.intersection_update(rn1) self.assertEqual(len(rn1), 5) self.assertEqual(str(rn1), "10; 9-13\n") # rn2 = RangeSetND([["10", "10"]]) rn1.intersection_update(rn2) self.assertEqual(len(rn1), 1) self.assertEqual(str(rn1), "10; 10\n") rn1 = RangeSetND([["10", "10-13"], ["10", "9-12"], ["8-9", "12-15"]]) rn2 = RangeSetND([["10", "10"], ["9", "12-15"]]) rn1.intersection_update(rn2) self.assertEqual(len(rn1), 5) self.assertEqual(str(rn1), "9; 12-15\n10; 10\n") rn1 = RangeSetND([["10", "10-13"], ["10", "9-12"], ["8-9", "12-15"]]) rn2 = RangeSetND([["10", "10"], ["9", "12-15"], ["10-12", "11-15"], ["11", "14"]]) rn1.intersection_update(rn2) self.assertEqual(len(rn1), 8) self.assertEqual(str(rn1), "10; 10-13\n9; 12-15\n") rn1 = RangeSetND([["10", "10-13"], ["10", "9-12"], ["8-9", "12-15"], ["10", "10-13"], ["10", "12-16"], ["9", "13-16"]]) rn2 = RangeSetND([["10", "10"], ["9", "12-15"], ["10-12", "11-15"], ["11", "14"]]) rn1.intersection_update(rn2) self.assertEqual(len(rn1), 10) self.assertEqual(str(rn1), "10; 10-15\n9; 12-15\n") rn1 = RangeSetND([["10", "10-13"], ["10", "9-12"], ["8-9", "12-15"], ["10", "10-13"], ["10", "12-16"], ["9", "13-16"]]) rn2 = RangeSetND([["10", "10"], ["9", "12-16"], ["10-12", "11-15"], ["11", "14"], ["8", "10-20"]]) rn1.intersection_update(rn2) self.assertEqual(len(rn1), 15) # no pre-fold (self._veclist) self.assertEqual(str(rn1), "10; 10-15\n9; 12-16\n8; 12-15\n") # pre-fold (self.veclist) #self.assertEqual(str(rn1), "8-9; 12-15\n10; 10-15\n9; 16\n") # binary error if sys.version_info >= (2, 5, 0): rn1 = RangeSetND([["10", "10-13"], ["10", "9-12"]]) rn2 = RangeSetND([["10", "10"]]) rn1 &= rn2 self.assertEqual(len(rn1), 1) self.assertEqual(str(rn1), "10; 10\n") rn2 = set([3, 5]) self.assertRaises(TypeError, rn1.__iand__, rn2) def test_xor(self): rn1 = RangeSetND([["10", "10-13"], ["10", "9-12"]]) rn2 = RangeSetND([["10", "10"]]) rnx = rn1.symmetric_difference(rn2) self.assertEqual(len(rnx), 4) self.assertEqual(str(rnx), "10; 9,11-13\n") rnx2 = rn1 ^ rn2 self.assertEqual(len(rnx2), 4) self.assertEqual(str(rnx2), "10; 9,11-13\n") self.assertEqual(rnx, rnx2) try: dummy = rn1 ^ "foobar" self.assertFalse("TypeError not raised") except TypeError: pass # binary error if sys.version_info >= (2, 5, 0): rn1 = RangeSetND([["10", "10-13"], ["10", "9-12"]]) rn2 = RangeSetND([["10", "10"]]) rn1 ^= rn2 self.assertEqual(len(rnx), 4) self.assertEqual(str(rnx), "10; 9,11-13\n") rn2 = set([3, 5]) self.assertRaises(TypeError, rn1.__ixor__, rn2) def test_getitem(self): rn1 = RangeSetND([["10", "10-13"], ["0-3", "1-2"]]) self.assertEqual(len(rn1), 12) self.assertEqual(rn1[0], ('0', '1')) self.assertEqual(rn1[1], ('0', '2')) self.assertEqual(rn1[2], ('1', '1')) self.assertEqual(rn1[3], ('1', '2')) self.assertEqual(rn1[4], ('2', '1')) self.assertEqual(rn1[5], ('2', '2')) self.assertEqual(rn1[6], ('3', '1')) 
        self.assertEqual(rn1[7], ('3', '2'))
        self.assertEqual(rn1[8], ('10', '10'))
        self.assertEqual(rn1[9], ('10', '11'))
        self.assertEqual(rn1[10], ('10', '12'))
        self.assertEqual(rn1[11], ('10', '13'))
        self.assertRaises(IndexError, rn1.__getitem__, 12)
        # negative indices
        self.assertEqual(rn1[-1], ('10', '13'))
        self.assertEqual(rn1[-2], ('10', '12'))
        self.assertEqual(rn1[-3], ('10', '11'))
        self.assertEqual(rn1[-4], ('10', '10'))
        self.assertEqual(rn1[-5], ('3', '2'))
        self.assertEqual(rn1[-12], ('0', '1'))
        self.assertRaises(IndexError, rn1.__getitem__, -13)
        self.assertRaises(TypeError, rn1.__getitem__, "foo")

    def test_getitem_slices(self):
        rn1 = RangeSetND([["10", "10-13"], ["0-3", "1-2"]])
        # slices
        self.assertEqual(str(rn1[0:2]), "0; 1-2\n")
        self.assertEqual(str(rn1[0:4]), "0-1; 1-2\n")
        self.assertEqual(str(rn1[0:5]), "0-1; 1-2\n2; 1\n")
        self.assertEqual(str(rn1[0:6]), "0-2; 1-2\n")
        self.assertEqual(str(rn1[0:7]), "0-2; 1-2\n3; 1\n")
        self.assertEqual(str(rn1[0:8]), "0-3; 1-2\n")
        self.assertEqual(str(rn1[0:9]), "0-3; 1-2\n10; 10\n")
        self.assertEqual(str(rn1[0:10]), "0-3; 1-2\n10; 10-11\n")
        self.assertEqual(str(rn1[0:11]), "0-3; 1-2\n10; 10-12\n")
        self.assertEqual(str(rn1[0:12]), "0-3; 1-2\n10; 10-13\n")
        self.assertEqual(str(rn1[0:13]), "0-3; 1-2\n10; 10-13\n")
        # steps
        self.assertEqual(str(rn1[0:12:2]), "0-3; 1\n10; 10,12\n")
        self.assertEqual(str(rn1[1:12:2]), "0-3; 2\n10; 11,13\n")
        # GitHub #429
        rn1 = RangeSetND([["110", "15-16"], ["107", "06"]])
        self.assertEqual(str(rn1[0:3:2]), "107; 06\n110; 15\n")

    def test_contiguous(self):
        rn0 = RangeSetND()
        self.assertEqual([], [str(ns) for ns in rn0.contiguous()])
        rn1 = RangeSetND([["10", "10-13,15"], ["0-3,5-6", "1-2"]])
        self.assertEqual(str(rn1), "0-3,5-6; 1-2\n10; 10-13,15\n")
        self.assertEqual(['0-3; 1-2\n', '5-6; 1-2\n', '10; 10-13\n', '10; 15\n'],
                         [str(ns) for ns in rn1.contiguous()])
        self.assertEqual(str(rn1), "0-3,5-6; 1-2\n10; 10-13,15\n")

    def test_iter(self):
        rn0 = RangeSetND([['1-2', '3'], ['1-2', '4'], ['2-6', '6-9,11']])
        self.assertEqual(len([r for r in rn0]), len(rn0))
        # at this time, iter nD is not sorted
        self.assertEqual([('3', '6'), ('3', '7'), ('3', '8'), ('3', '9'),
                          ('3', '11'), ('4', '6'), ('4', '7'), ('4', '8'),
                          ('4', '9'), ('4', '11'), ('5', '6'), ('5', '7'),
                          ('5', '8'), ('5', '9'), ('5', '11'), ('6', '6'),
                          ('6', '7'), ('6', '8'), ('6', '9'), ('6', '11'),
                          ('2', '3'), ('2', '4'), ('2', '6'), ('2', '7'),
                          ('2', '8'), ('2', '9'), ('2', '11'), ('1', '3'),
                          ('1', '4')],
                         [r for r in rn0])

    def test_pads(self):
        rn0 = RangeSetND()
        self.assertEqual(str(rn0), "")
        self.assertEqual(len(rn0), 0)
        self.assertEqual(rn0.pads(), ())
        rn1 = RangeSetND([['01-02', '003'], ['01-02', '004'],
                          ['02-06', '006-009,411']])
        self.assertEqual(str(rn1), "03-06; 006-009,411\n02; 003-004,006-009,411\n01; 003-004\n")
        self.assertEqual(len(rn1), 29)
        self.assertEqual(rn1.pads(), (2, 3))
        # Note: mixed lengths zero-padding supported in ClusterShell v1.9
        rn1 = RangeSetND([['01-02', '003'], ['01-02', '0101'],
                          ['02-06', '006-009,411']])
        # before v1.9: 0101 padding was changed to 101
        self.assertEqual(str(rn1), '03-06; 006-009,411\n02; 003,006-009,411,0101\n01; 003,0101\n')
        self.assertEqual(len(rn1), 29)
        self.assertEqual(rn1.pads(), (2, 4))
        rn1 = RangeSetND([['01-02', '0003'], ['01-02', '004'],
                          ['02-06', '006-009,411']])
        # before v1.9: 004 padding was wrongly changed to 0004
        self.assertEqual(str(rn1), '03-06; 006-009,411\n02; 004,006-009,411,0003\n01; 004,0003\n')
        self.assertEqual(len(rn1), 29)
        self.assertEqual(rn1.pads(), (2, 4))
        # pads() returns max padding length by axis
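    def test_pads_example_sketch(self):
        # Added illustration, NOT part of the original suite: a minimal
        # standalone sketch of the pads() behavior exercised by test_pads()
        # above, using only the RangeSetND API already imported by this
        # module. pads() reports the maximum zero-padding width per axis;
        # the expected values below are taken from assertions in this file.
        rn = RangeSetND([['01-02', '003'], ['01-02', '004'],
                         ['02-06', '006-009,411']])
        self.assertEqual(rn.pads(), (2, 3))  # axis 0: 2 digits, axis 1: 3
        self.assertEqual(len(rn), 29)        # 29 n-dimensional elements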
    def test_mutability_1(self):
        rs0 = RangeSet("2-5")
        rs1 = RangeSet("0-1")
        rn0 = RangeSetND([[rs0, rs1]]) #, copy_rangeset=False)
        self.assertEqual(str(rn0), "2-5; 0-1\n")
        rs2 = RangeSet("6-7")
        rs3 = RangeSet("2-3")
        rn1 = RangeSetND([[rs2, rs3]]) #, copy_rangeset=False)
        rn0.update(rn1)
        self.assertEqual(str(rn0), "2-5; 0-1\n6-7; 2-3\n")
        # check mutability safety
        self.assertEqual(str(rs0), "2-5")
        self.assertEqual(str(rs1), "0-1")
        self.assertEqual(str(rs2), "6-7")
        self.assertEqual(str(rs3), "2-3")
        # reverse check
        rs1.add(2)
        self.assertEqual(str(rs1), "0-2")
        rs3.add(4)
        self.assertEqual(str(rs3), "2-4")
        self.assertEqual(str(rn0), "2-5; 0-1\n6-7; 2-3\n")
        self.assertEqual(str(rn1), "6-7; 2-3\n")
        rn1.update([[rs2, rs3]])
        self.assertEqual(str(rn1), "6-7; 2-4\n")
        self.assertEqual(str(rn0), "2-5; 0-1\n6-7; 2-3\n")

    def test_mutability_2(self):
        rs0 = RangeSet("2-5")
        rs1 = RangeSet("0-1")
        rn0 = RangeSetND([[rs0, rs1]]) #, copy_rangeset=False)
        self.assertEqual(str(rn0), "2-5; 0-1\n")
        rs2 = RangeSet("6-7")
        rs3 = RangeSet("2-3")
        rn0.update([[rs2, rs3]])
        self.assertEqual(str(rn0), "2-5; 0-1\n6-7; 2-3\n")
        rs3.add(4)
        self.assertEqual(str(rs3), "2-4")
        self.assertEqual(str(rn0), "2-5; 0-1\n6-7; 2-3\n")

ClusterShell-1.9.2/tests/RangeSetTest.py

# ClusterShell.NodeSet.RangeSet test suite
# Written by S. Thiell

"""Unit test for RangeSet"""

import binascii
import pickle
import unittest
import warnings

from ClusterShell.RangeSet import RangeSet, RangeSetParseError


class RangeSetTest(unittest.TestCase):

    def setUp(self):
        warnings.simplefilter("always")

    def _testRS(self, test, res, length):
        r1 = RangeSet(test, autostep=3)
        self.assertEqual(str(r1), res)
        self.assertEqual(len(r1), length)

    def testSimple(self):
        """test RangeSet simple ranges"""
        self._testRS("0", "0", 1)
        self._testRS("1", "1", 1)
        self._testRS("0-2", "0-2", 3)
        self._testRS("1-3", "1-3", 3)
        self._testRS("1-3,4-6", "1-6", 6)
        self._testRS("1-3,4-6,7-10", "1-10", 10)

    def testStepSimple(self):
        """test RangeSet simple step usages"""
        self._testRS("0-4/2", "0-4/2", 3)
        self._testRS("1-4/2", "1,3", 2)
        self._testRS("1-4/3", "1,4", 2)
        self._testRS("1-4/4", "1", 1)

    def testStepAdvanced(self):
        """test RangeSet advanced step usages"""
        self._testRS("1-4/4,2-6/2", "1-2,4,6", 4)  # 1.9 small behavior change
        self._testRS("6-24/6,9-21/6", "6-24/3", 7)
        self._testRS("0-24/2,9-21/2", "0-8/2,9-22,24", 20)
        self._testRS("0-24/2,9-21/2,100", "0-8/2,9-22,24,100", 21)
        self._testRS("0-24/2,9-21/2,100-101", "0-8/2,9-22,24,100-101", 22)
        self._testRS("3-21/9,6-24/9,9-27/9", "3-27/3", 9)
        self._testRS("101-121/4,1-225/112", "1,101-121/4,225", 8)
        self._testRS("1-32/3,13-28/9", "1-31/3", 11)
        self._testRS("1-32/3,13-22/9", "1-31/3", 11)
        self._testRS("1-32/3,13-31/9", "1-31/3", 11)
        self._testRS("1-32/3,13-40/9", "1-31/3,40", 12)
        self._testRS("1-16/3,13-28/6", "1-19/3,25", 8)
        self._testRS("1-16/3,1-16/6", "1-16/3", 6)
        self._testRS("1-16/6,1-16/3", "1-16/3", 6)
        self._testRS("1-16/3,3-19/6", "1,3-4,7,9-10,13,15-16", 9)
        #self._testRS("1-16/3,3-19/4", "1,3-4,7,10-11,13,15-16,19", 10) # <1.6
        #self._testRS("1-16/3,3-19/4", "1,3,4-10/3,11-15/2,16,19", 10) # 1.6+
        self._testRS("1-16/3,3-19/4", "1,3-4,7,10-11,13,15-16,19", 10) # 1.9+
        self._testRS("1-17/2,2-18/2", "1-18", 18)
        self._testRS("1-17/2,33-41/2,2-18/2", "1-18,33-41/2", 23)
        self._testRS("1-17/2,33-41/2,2-20/2", "1-18,20,33-41/2", 24)
        self._testRS("1-17/2,33-41/2,2-19/2", "1-18,33-41/2", 23)
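        # Added note (illustrative, not from the original suite): autostep
        # sets the minimum number of equally spaced values required before
        # a range is folded into the "start-end/step" syntax. A standalone
        # sketch using only values already checked in this test:
        #
        #   from ClusterShell.RangeSet import RangeSet
        #   rs = RangeSet("6-24/6,9-21/6", autostep=3)
        #   str(rs)  # -> "6-24/3": the two interleaved /6 ranges merge
        #   len(rs)  # -> 7 values (6, 9, 12, 15, 18, 21, 24)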
self._testRS("1968-1970,1972,1975,1978-1981,1984-1989", "1968-1970,1972-1978/3,1979-1981,1984-1989", 15) # use of 0-padding in the step number is ignored self._testRS("1-17/01", "1-17", 17) self._testRS("1-17/02", "1-17/2", 9) self._testRS("1-17/03", "1-16/3", 6) def test_bad_syntax(self): """test parse errors""" self.assertRaises(RangeSetParseError, RangeSet, "") self.assertRaises(RangeSetParseError, RangeSet, "-") self.assertRaises(RangeSetParseError, RangeSet, "A") self.assertRaises(RangeSetParseError, RangeSet, "2-5/a") self.assertRaises(RangeSetParseError, RangeSet, "3/2") self.assertRaises(RangeSetParseError, RangeSet, "3-/2") self.assertRaises(RangeSetParseError, RangeSet, "-/2") self.assertRaises(RangeSetParseError, RangeSet, "4-a/2") self.assertRaises(RangeSetParseError, RangeSet, "4-3/2") self.assertRaises(RangeSetParseError, RangeSet, "4-5/-2") self.assertRaises(RangeSetParseError, RangeSet, "4-2/-2") self.assertRaises(RangeSetParseError, RangeSet, "004-002") self.assertRaises(RangeSetParseError, RangeSet, "3-59/2,102a") def testIntersectSimple(self): """test RangeSet with simple intersections of ranges""" r1 = RangeSet("4-34") r2 = RangeSet("27-42") r1.intersection_update(r2) self.assertEqual(str(r1), "27-34") self.assertEqual(len(r1), 8) r1 = RangeSet("2-450,654-700,800") r2 = RangeSet("500-502,690-820,830-840,900") r1.intersection_update(r2) self.assertEqual(str(r1), "690-700,800") self.assertEqual(len(r1), 12) r1 = RangeSet("2-450,654-700,800") r3 = r1.intersection(r2) self.assertEqual(str(r3), "690-700,800") self.assertEqual(len(r3), 12) r1 = RangeSet("2-450,654-700,800") r3 = r1 & r2 self.assertEqual(str(r3), "690-700,800") self.assertEqual(len(r3), 12) r1 = RangeSet() r3 = r1.intersection(r2) self.assertEqual(str(r3), "") self.assertEqual(len(r3), 0) def testIntersectStep(self): """test RangeSet with more intersections of ranges""" r1 = RangeSet("4-34/2") r2 = RangeSet("28-42/2") r1.intersection_update(r2) self.assertEqual(str(r1), "28,30,32,34") self.assertEqual(len(r1), 4) r1 = RangeSet("4-34/2") r2 = RangeSet("27-42/2") r1.intersection_update(r2) self.assertEqual(str(r1), "") self.assertEqual(len(r1), 0) r1 = RangeSet("2-60/3", autostep=3) r2 = RangeSet("3-50/2", autostep=3) r1.intersection_update(r2) self.assertEqual(str(r1), "5-47/6") self.assertEqual(len(r1), 8) def testSubSimple(self): """test RangeSet with simple difference of ranges""" r1 = RangeSet("4,7-33") r2 = RangeSet("8-33") r1.difference_update(r2) self.assertEqual(str(r1), "4,7") self.assertEqual(len(r1), 2) r1 = RangeSet("4,7-33") r3 = r1.difference(r2) self.assertEqual(str(r3), "4,7") self.assertEqual(len(r3), 2) r3 = r1 - r2 self.assertEqual(str(r3), "4,7") self.assertEqual(len(r3), 2) # bounds checking r1 = RangeSet("1-10,39-41,50-60") r2 = RangeSet("1-10,38-39,50-60") r1.difference_update(r2) self.assertEqual(len(r1), 2) self.assertEqual(str(r1), "40-41") r1 = RangeSet("1-20,39-41") r2 = RangeSet("1-20,41-42") r1.difference_update(r2) self.assertEqual(len(r1), 2) self.assertEqual(str(r1), "39-40") # difference(self) issue r1 = RangeSet("1-20,39-41") r1.difference_update(r1) self.assertEqual(len(r1), 0) self.assertEqual(str(r1), "") # strict mode r1 = RangeSet("4,7-33") r2 = RangeSet("8-33") r1.difference_update(r2, strict=True) self.assertEqual(str(r1), "4,7") self.assertEqual(len(r1), 2) r3 = RangeSet("4-5") self.assertRaises(KeyError, r1.difference_update, r3, True) def testSymmetricDifference(self): """test RangeSet.symmetric_difference_update()""" r1 = RangeSet("4,7-33") r2 = RangeSet("8-34") 
r1.symmetric_difference_update(r2) self.assertEqual(str(r1), "4,7,34") self.assertEqual(len(r1), 3) r1 = RangeSet("4,7-33") r3 = r1.symmetric_difference(r2) self.assertEqual(str(r3), "4,7,34") self.assertEqual(len(r3), 3) r3 = r1 ^ r2 self.assertEqual(str(r3), "4,7,34") self.assertEqual(len(r3), 3) r1 = RangeSet("5,7,10-12,33-50") r2 = RangeSet("8-34") r1.symmetric_difference_update(r2) self.assertEqual(str(r1), "5,7-9,13-32,35-50") self.assertEqual(len(r1), 40) r1 = RangeSet("8-34") r2 = RangeSet("5,7,10-12,33-50") r1.symmetric_difference_update(r2) self.assertEqual(str(r1), "5,7-9,13-32,35-50") self.assertEqual(len(r1), 40) r1 = RangeSet("8-30") r2 = RangeSet("31-40") r1.symmetric_difference_update(r2) self.assertEqual(str(r1), "8-40") self.assertEqual(len(r1), 33) r1 = RangeSet("8-30") r2 = RangeSet("8-30") r1.symmetric_difference_update(r2) self.assertEqual(str(r1), "") self.assertEqual(len(r1), 0) r1 = RangeSet("8-30") r2 = RangeSet("10-13,31-40") r1.symmetric_difference_update(r2) self.assertEqual(str(r1), "8-9,14-40") self.assertEqual(len(r1), 29) r1 = RangeSet("10-13,31-40") r2 = RangeSet("8-30") r1.symmetric_difference_update(r2) self.assertEqual(str(r1), "8-9,14-40") self.assertEqual(len(r1), 29) r1 = RangeSet("1,3,5,7") r2 = RangeSet("4-8") r1.symmetric_difference_update(r2) self.assertEqual(str(r1), "1,3-4,6,8") self.assertEqual(len(r1), 5) r1 = RangeSet("1-1000") r2 = RangeSet("0-40,60-100/4,300,1000,1002") r1.symmetric_difference_update(r2) self.assertEqual(str(r1), "0,41-59,61-63,65-67,69-71,73-75,77-79,81-83,85-87,89-91,93-95,97-99,101-299,301-999,1002") self.assertEqual(len(r1), 949) r1 = RangeSet("25,27,29-31,33-35,41-43,48,50-52,55-60,63,66-68,71-78") r2 = RangeSet("27-30,35,37-39,42,45-48,50,52-54,56,61,67,69-79,81-82") r1.symmetric_difference_update(r2) self.assertEqual(str(r1), "25,28,31,33-34,37-39,41,43,45-47,51,53-55,57-61,63,66,68-70,79,81-82") self.assertEqual(len(r1), 30) r1 = RangeSet("986-987,989,991-992,994-995,997,1002-1008,1010-1011,1015-1018,1021") r2 = RangeSet("989-990,992-994,997-1000") r1.symmetric_difference_update(r2) self.assertEqual(str(r1), "986-987,990-991,993,995,998-1000,1002-1008,1010-1011,1015-1018,1021") self.assertEqual(len(r1), 23) def testSubStep(self): """test RangeSet with more sub of ranges (with step)""" # case 1 no sub r1 = RangeSet("4-34/2", autostep=3) r2 = RangeSet("3-33/2", autostep=3) self.assertEqual(r1.autostep, 3) self.assertEqual(r2.autostep, 3) r1.difference_update(r2) self.assertEqual(str(r1), "4-34/2") self.assertEqual(len(r1), 16) # case 2 diff left r1 = RangeSet("4-34/2", autostep=3) r2 = RangeSet("2-14/2", autostep=3) r1.difference_update(r2) self.assertEqual(str(r1), "16-34/2") self.assertEqual(len(r1), 10) # case 3 diff right r1 = RangeSet("4-34/2", autostep=3) r2 = RangeSet("28-52/2", autostep=3) r1.difference_update(r2) self.assertEqual(str(r1), "4-26/2") self.assertEqual(len(r1), 12) # case 4 diff with ranges split r1 = RangeSet("4-34/2", autostep=3) r2 = RangeSet("12-18/2", autostep=3) r1.difference_update(r2) self.assertEqual(str(r1), "4-10/2,20-34/2") self.assertEqual(len(r1), 12) # case 5+ more tricky diffs r1 = RangeSet("4-34/2", autostep=3) r2 = RangeSet("28-55", autostep=3) r1.difference_update(r2) self.assertEqual(str(r1), "4-26/2") self.assertEqual(len(r1), 12) r1 = RangeSet("4-34/2", autostep=3) r2 = RangeSet("27-55", autostep=3) r1.difference_update(r2) self.assertEqual(str(r1), "4-26/2") self.assertEqual(len(r1), 12) r1 = RangeSet("1-100", autostep=3) r2 = RangeSet("2-98/2", autostep=3) 
r1.difference_update(r2) self.assertEqual(str(r1), "1-99/2,100") self.assertEqual(len(r1), 51) r1 = RangeSet("1-100,102,105-242,800", autostep=3) r2 = RangeSet("1-1000/3", autostep=3) r1.difference_update(r2) self.assertEqual(str(r1), "2-3,5-6,8-9,11-12,14-15,17-18,20-21,23-24,26-27,29-30,32-33,35-36,38-39,41-42,44-45,47-48,50-51,53-54,56-57,59-60,62-63,65-66,68-69,71-72,74-75,77-78,80-81,83-84,86-87,89-90,92-93,95-96,98-99,102,105,107-108,110-111,113-114,116-117,119-120,122-123,125-126,128-129,131-132,134-135,137-138,140-141,143-144,146-147,149-150,152-153,155-156,158-159,161-162,164-165,167-168,170-171,173-174,176-177,179-180,182-183,185-186,188-189,191-192,194-195,197-198,200-201,203-204,206-207,209-210,212-213,215-216,218-219,221-222,224-225,227-228,230-231,233-234,236-237,239-240,242,800") self.assertEqual(len(r1), 160) r1 = RangeSet("1-1000", autostep=3) r2 = RangeSet("2-999/2", autostep=3) r1.difference_update(r2) self.assertEqual(str(r1), "1-999/2,1000") self.assertEqual(len(r1), 501) r1 = RangeSet("1-100/3,40-60/3", autostep=3) r2 = RangeSet("31-61/3", autostep=3) r1.difference_update(r2) self.assertEqual(str(r1), "1-28/3,64-100/3") self.assertEqual(len(r1), 23) r1 = RangeSet("1-100/3,40-60/3", autostep=3) r2 = RangeSet("30-80/5", autostep=3) r1.difference_update(r2) self.assertEqual(str(r1), "1-37/3,43-52/3,58-67/3,73-100/3") self.assertEqual(len(r1), 31) def testContains(self): """test RangeSet.__contains__()""" r1 = RangeSet("1-100,102,105-242,800") self.assertEqual(len(r1), 240) self.assertTrue(99 in r1) self.assertTrue("99" in r1) self.assertFalse("099" in r1) # fixed in 1.9+ self.assertFalse(object() in r1) self.assertTrue(101 not in r1) self.assertEqual(len(r1), 240) r2 = RangeSet("1-100/3,40-60/3", autostep=3) self.assertEqual(len(r2), 34) self.assertTrue(1 in r2) self.assertTrue(4 in r2) self.assertTrue(2 not in r2) self.assertTrue(3 not in r2) self.assertTrue(40 in r2) self.assertTrue(101 not in r2) r3 = RangeSet("0003-0143,0360-1000") self.assertFalse(360 in r3) # fixed in 1.9+ self.assertFalse("360" in r3) # fixed in 1.9+ self.assertTrue("0360" in r3) r4 = RangeSet("00-02") self.assertTrue("00" in r4) self.assertFalse(0 in r4) # changed in 1.9+ self.assertFalse("0" in r4) # fixed in 1.9+ self.assertTrue("01" in r4) self.assertFalse(1 in r4) # changed in 1.9+ self.assertFalse("1" in r4) # fixed in 1.9+ self.assertTrue("02" in r4) self.assertFalse("03" in r4) # r1 = RangeSet("115-117,130,132,166-170,4780-4999") self.assertEqual(len(r1), 230) r2 = RangeSet("116-117,130,4781-4999") self.assertEqual(len(r2), 222) self.assertTrue(r2 in r1) self.assertFalse(r1 in r2) r2 = RangeSet("5000") self.assertFalse(r2 in r1) r2 = RangeSet("4999") self.assertTrue(r2 in r1) def testIsSuperSet(self): """test RangeSet.issuperset()""" r1 = RangeSet("1-100,102,105-242,800") self.assertEqual(len(r1), 240) r2 = RangeSet("3-98,140-199,800") self.assertEqual(len(r2), 157) self.assertTrue(r1.issuperset(r1)) self.assertTrue(r1.issuperset(r2)) self.assertTrue(r1 >= r1) self.assertTrue(r1 > r2) self.assertFalse(r2 > r1) r2 = RangeSet("3-98,140-199,243,800") self.assertEqual(len(r2), 158) self.assertFalse(r1.issuperset(r2)) self.assertFalse(r1 > r2) def testIsSubSet(self): """test RangeSet.issubset()""" r1 = RangeSet("1-100,102,105-242,800-900/2") self.assertTrue(r1.issubset(r1)) self.assertTrue(r1.issuperset(r1)) r2 = RangeSet() self.assertTrue(r2.issubset(r1)) self.assertTrue(r1.issuperset(r2)) self.assertFalse(r1.issubset(r2)) self.assertFalse(r2.issuperset(r1)) r1 = 
RangeSet("1-100,102,105-242,800-900/2") r2 = RangeSet("3,800,802,804,888") self.assertTrue(r2.issubset(r2)) self.assertTrue(r2.issubset(r1)) self.assertTrue(r2 <= r1) self.assertTrue(r2 < r1) self.assertTrue(r1 > r2) self.assertFalse(r1 < r2) self.assertFalse(r1 <= r2) self.assertFalse(r2 >= r1) # fixed in v1.9 where mixed padding is now supported r1 = RangeSet("1-100") r2 = RangeSet("001-100") self.assertFalse(r1.issubset(r2)) # used to be true < v1.9 def testGetItem(self): """test RangeSet.__getitem__()""" r1 = RangeSet("1-100,102,105-242,800") self.assertEqual(len(r1), 240) self.assertEqual(r1[0], '1') self.assertEqual(r1[1], '2') self.assertEqual(r1[2], '3') self.assertEqual(r1[99], '100') self.assertEqual(r1[100], '102') self.assertEqual(r1[101], '105') self.assertEqual(r1[102], '106') self.assertEqual(r1[103], '107') self.assertEqual(r1[237], '241') self.assertEqual(r1[238], '242') self.assertEqual(r1[239], '800') self.assertRaises(IndexError, r1.__getitem__, 240) self.assertRaises(IndexError, r1.__getitem__, 241) # negative indices self.assertEqual(r1[-1], '800') self.assertEqual(r1[-240], '1') for n in range(1, len(r1)): self.assertEqual(r1[-n], r1[len(r1)-n]) self.assertRaises(IndexError, r1.__getitem__, -len(r1)-1) self.assertRaises(IndexError, r1.__getitem__, -len(r1)-2) r2 = RangeSet("1-37/3,43-52/3,58-67/3,73-100/3,102-106/2") self.assertEqual(len(r2), 34) self.assertEqual(r2[0], '1') self.assertEqual(r2[1], '4') self.assertEqual(r2[2], '7') self.assertEqual(r2[12], '37') self.assertEqual(r2[13], '43') self.assertEqual(r2[14], '46') self.assertEqual(r2[16], '52') self.assertEqual(r2[17], '58') self.assertEqual(r2[29], '97') self.assertEqual(r2[30], '100') self.assertEqual(r2[31], '102') self.assertEqual(r2[32], '104') self.assertEqual(r2[33], '106') self.assertRaises(TypeError, r2.__getitem__, "foo") def testGetSlice(self): """test RangeSet.__getitem__() with slice""" r0 = RangeSet("1-12") self.assertEqual(r0[0:3], RangeSet("1-3")) self.assertEqual(r0[2:7], RangeSet("3-7")) # negative indices - sl_start self.assertEqual(r0[-1:0], RangeSet()) self.assertEqual(r0[-2:0], RangeSet()) self.assertEqual(r0[-11:0], RangeSet()) self.assertEqual(r0[-12:0], RangeSet()) self.assertEqual(r0[-13:0], RangeSet()) self.assertEqual(r0[-1000:0], RangeSet()) self.assertEqual(r0[-1:], RangeSet("12")) self.assertEqual(r0[-2:], RangeSet("11-12")) self.assertEqual(r0[-11:], RangeSet("2-12")) self.assertEqual(r0[-12:], RangeSet("1-12")) self.assertEqual(r0[-13:], RangeSet("1-12")) self.assertEqual(r0[-1000:], RangeSet("1-12")) self.assertEqual(r0[-13:1], RangeSet("1")) self.assertEqual(r0[-13:2], RangeSet("1-2")) self.assertEqual(r0[-13:11], RangeSet("1-11")) self.assertEqual(r0[-13:12], RangeSet("1-12")) self.assertEqual(r0[-13:13], RangeSet("1-12")) # negative indices - sl_stop self.assertEqual(r0[0:-1], RangeSet("1-11")) self.assertEqual(r0[:-1], RangeSet("1-11")) self.assertEqual(r0[0:-2], RangeSet("1-10")) self.assertEqual(r0[:-2], RangeSet("1-10")) self.assertEqual(r0[1:-2], RangeSet("2-10")) self.assertEqual(r0[4:-4], RangeSet("5-8")) self.assertEqual(r0[5:-5], RangeSet("6-7")) self.assertEqual(r0[6:-5], RangeSet("7")) self.assertEqual(r0[6:-6], RangeSet()) self.assertEqual(r0[7:-6], RangeSet()) self.assertEqual(r0[0:-1000], RangeSet()) r1 = RangeSet("10-14,16-20") self.assertEqual(r1[2:6], RangeSet("12-14,16")) self.assertEqual(r1[2:7], RangeSet("12-14,16-17")) r1 = RangeSet("1-2,4,9,10-12") self.assertEqual(r1[0:3], RangeSet("1-2,4")) self.assertEqual(r1[0:4], RangeSet("1-2,4,9")) 
self.assertEqual(r1[2:6], RangeSet("4,9,10-11")) self.assertEqual(r1[2:4], RangeSet("4,9")) self.assertEqual(r1[5:6], RangeSet("11")) self.assertEqual(r1[6:7], RangeSet("12")) self.assertEqual(r1[4:7], RangeSet("10-12")) # Slice indices are silently truncated to fall in the allowed range self.assertEqual(r1[2:100], RangeSet("4,9-12")) self.assertEqual(r1[9:10], RangeSet()) # Slice stepping self.assertEqual(r1[0:1:2], RangeSet("1")) self.assertEqual(r1[0:2:2], RangeSet("1")) self.assertEqual(r1[0:3:2], RangeSet("1,4")) self.assertEqual(r1[0:4:2], RangeSet("1,4")) self.assertEqual(r1[0:5:2], RangeSet("1,4,10")) self.assertEqual(r1[0:6:2], RangeSet("1,4,10")) self.assertEqual(r1[0:7:2], RangeSet("1,4,10,12")) self.assertEqual(r1[0:8:2], RangeSet("1,4,10,12")) self.assertEqual(r1[0:9:2], RangeSet("1,4,10,12")) self.assertEqual(r1[0:10:2], RangeSet("1,4,10,12")) self.assertEqual(r1[0:7:3], RangeSet("1,9,12")) self.assertEqual(r1[0:7:4], RangeSet("1,10")) self.assertEqual(len(r1[1:1:2]), 0) self.assertEqual(r1[1:2:2], RangeSet("2")) self.assertEqual(r1[1:3:2], RangeSet("2")) self.assertEqual(r1[1:4:2], RangeSet("2,9")) self.assertEqual(r1[1:5:2], RangeSet("2,9")) self.assertEqual(r1[1:6:2], RangeSet("2,9,11")) self.assertEqual(r1[1:7:2], RangeSet("2,9,11")) # negative indices - sl_step self.assertEqual(r1[::-2], RangeSet("1,4,10,12")) r2 = RangeSet("1-2,4,9,10-13") self.assertEqual(r2[::-2], RangeSet("2,9,11,13")) self.assertEqual(r2[::-3], RangeSet("2,10,13")) self.assertEqual(r2[::-4], RangeSet("9,13")) self.assertEqual(r2[::-5], RangeSet("4,13")) self.assertEqual(r2[::-6], RangeSet("2,13")) self.assertEqual(r2[::-7], RangeSet("1,13")) self.assertEqual(r2[::-8], RangeSet("13")) self.assertEqual(r2[::-9], RangeSet("13")) # Partial slices self.assertEqual(r1[2:], RangeSet("4,9-12")) self.assertEqual(r1[:3], RangeSet("1-2,4")) self.assertEqual(r1[:3:2], RangeSet("1,4")) # Twisted r2 = RangeSet("1-9/2,12-32/4") self.assertEqual(r2[5:10:2], RangeSet("12-28/8")) self.assertEqual(r2[5:10:2], RangeSet("12-28/8", autostep=2)) self.assertEqual(r2[1:12:3], RangeSet("3,9,20,32")) # FIXME: use nosetests/@raises to do that... 
self.assertRaises(TypeError, r1.__getitem__, slice('foo', 'bar')) self.assertRaises(TypeError, r1.__getitem__, slice(1, 3, 'bar')) r3 = RangeSet("0-600") self.assertEqual(r3[30:389], RangeSet("30-388")) r3 = RangeSet("0-6000") self.assertEqual(r3[30:389:2], RangeSet("30-389/2")) self.assertEqual(r3[30:389:2], RangeSet("30-389/2", autostep=2)) def testSplit(self): """test RangeSet.split()""" # Empty rangeset rangeset = RangeSet() self.assertEqual(len(list(rangeset.split(2))), 0) # Not enough element rangeset = RangeSet("1") self.assertEqual((RangeSet("1"),), tuple(rangeset.split(2))) # Exact number of elements rangeset = RangeSet("1-6") self.assertEqual((RangeSet("1-2"), RangeSet("3-4"), RangeSet("5-6")), \ tuple(rangeset.split(3))) # Check limit results rangeset = RangeSet("0-3") for i in (4, 5): self.assertEqual((RangeSet("0"), RangeSet("1"), \ RangeSet("2"), RangeSet("3")), \ tuple(rangeset.split(i))) def testAdd(self): """test RangeSet.add()""" r1 = RangeSet("1-100,102,105-242,800") self.assertEqual(len(r1), 240) r1.add(801) self.assertEqual(len(r1), 241) self.assertEqual(r1[0], '1') self.assertEqual(r1[240], '801') r1.add(788) self.assertEqual(str(r1), "1-100,102,105-242,788,800-801") self.assertEqual(len(r1), 242) self.assertEqual(r1[0], '1') self.assertEqual(r1[239], '788') self.assertEqual(r1[240], '800') r1.add(812) self.assertEqual(len(r1), 243) # test forced padding r1 = RangeSet("1-100,102,105-242,800") r1.add(801, pad=3) self.assertEqual(len(r1), 241) self.assertEqual(str(r1), "1-100,102,105-242,800-801") r1.padding = 4 # 1.8-1.9 compat: adjust padding of the whole set self.assertEqual(len(r1), 241) self.assertEqual(str(r1), "0001-0100,0102,0105-0242,0800-0801") def testUpdate(self): """test RangeSet.update()""" r1 = RangeSet("1-100,102,105-242,800") self.assertEqual(len(r1), 240) r2 = RangeSet("243-799,1924-1984") self.assertEqual(len(r2), 618) r1.update(r2) self.assertEqual(type(r1), RangeSet) self.assertEqual(r1.padding, None) self.assertEqual(len(r1), 240+618) self.assertEqual(str(r1), "1-100,102,105-800,1924-1984") r1 = RangeSet("1-100,102,105-242,800") r1.union_update(r2) self.assertEqual(len(r1), 240+618) self.assertEqual(str(r1), "1-100,102,105-800,1924-1984") def testUnion(self): """test RangeSet.union()""" r1 = RangeSet("1-100,102,105-242,800") self.assertEqual(len(r1), 240) r2 = RangeSet("243-799,1924-1984") self.assertEqual(len(r2), 618) r3 = r1.union(r2) self.assertEqual(type(r3), RangeSet) self.assertEqual(r3.padding, None) self.assertEqual(len(r3), 240+618) self.assertEqual(str(r3), "1-100,102,105-800,1924-1984") r4 = r1 | r2 self.assertEqual(len(r4), 240+618) self.assertEqual(str(r4), "1-100,102,105-800,1924-1984") # test with overlap r2 = RangeSet("200-799") r3 = r1.union(r2) self.assertEqual(len(r3), 797) self.assertEqual(str(r3), "1-100,102,105-800") r4 = r1 | r2 self.assertEqual(len(r4), 797) self.assertEqual(str(r4), "1-100,102,105-800") def testRemove(self): """test RangeSet.remove()""" r1 = RangeSet("1-100,102,105-242,800") self.assertEqual(len(r1), 240) r1.remove(100) self.assertEqual(len(r1), 239) self.assertEqual(str(r1), "1-99,102,105-242,800") self.assertRaises(KeyError, r1.remove, 101) self.assertRaises(KeyError, r1.remove, "101") r1.remove("106") self.assertRaises(KeyError, r1.remove, "foo") def testDiscard(self): """test RangeSet.discard()""" r1 = RangeSet("1-100,102,105-242,800") self.assertEqual(len(r1), 240) r1.discard(100) self.assertEqual(len(r1), 239) self.assertEqual(str(r1), "1-99,102,105-242,800") r1.discard(101) # should not raise 
KeyError r1.discard('105') self.assertEqual(len(r1), 238) self.assertEqual(str(r1), "1-99,102,106-242,800") r1.discard("foo") def testClear(self): """test RangeSet.clear()""" r1 = RangeSet("1-100,102,105-242,800") self.assertEqual(len(r1), 240) self.assertEqual(str(r1), "1-100,102,105-242,800") r1.clear() self.assertEqual(len(r1), 0) self.assertEqual(str(r1), "") def testConstructorIterate(self): """test RangeSet(iterable) constructor""" # from list rgs = RangeSet([3,5,6,7,8,1]) self.assertEqual(str(rgs), "1,3,5-8") self.assertEqual(len(rgs), 6) rgs.add(10) self.assertEqual(str(rgs), "1,3,5-8,10") self.assertEqual(len(rgs), 7) # from set rgs = RangeSet(set([3,5,6,7,8,1])) self.assertEqual(str(rgs), "1,3,5-8") self.assertEqual(len(rgs), 6) # from RangeSet r1 = RangeSet("1,3,5-8") rgs = RangeSet(r1) self.assertEqual(str(rgs), "1,3,5-8") self.assertEqual(len(rgs), 6) def testFromListConstructor(self): """test RangeSet.fromlist() constructor""" rgs = RangeSet.fromlist([ "3", "5-8", "1" ]) self.assertEqual(str(rgs), "1,3,5-8") self.assertEqual(len(rgs), 6) rgs = RangeSet.fromlist([ RangeSet("3"), RangeSet("5-8"), RangeSet("1") ]) self.assertEqual(str(rgs), "1,3,5-8") self.assertEqual(len(rgs), 6) rgs = RangeSet.fromlist([set([3,5,6,7,8,1])]) self.assertEqual(str(rgs), "1,3,5-8") self.assertEqual(len(rgs), 6) def testFromOneConstructor(self): """test RangeSet.fromone() constructor""" rgs = RangeSet.fromone(42) self.assertEqual(str(rgs), "42") self.assertEqual(len(rgs), 1) # also support slice object (v1.6+) rgs = RangeSet.fromone(slice(42)) self.assertEqual(str(rgs), "0-41") self.assertEqual(len(rgs), 42) self.assertRaises(ValueError, RangeSet.fromone, slice(12, None)) rgs = RangeSet.fromone(slice(42, 43)) self.assertEqual(str(rgs), "42") self.assertEqual(len(rgs), 1) rgs = RangeSet.fromone(slice(42, 48)) self.assertEqual(str(rgs), "42-47") self.assertEqual(len(rgs), 6) rgs = RangeSet.fromone(slice(42, 57, 2)) self.assertEqual(str(rgs), "42,44,46,48,50,52,54,56") rgs.autostep = 3 self.assertEqual(str(rgs), "42-56/2") self.assertEqual(len(rgs), 8) def testIterator(self): """test RangeSet iterator""" matches = ['1', '3', '4', '5', '6', '7', '8', '11'] rgs = RangeSet.fromlist([ "11", "3", "5-8", "1", "4" ]) cnt = 0 for rg in rgs: self.assertEqual(rg, matches[cnt]) cnt += 1 self.assertEqual(cnt, len(matches)) # with padding matches = ['001', '003', '004', '005', '006', '007', '008', '011'] rgs = RangeSet.fromlist([ "011", "003", "005-008", "001", "004" ]) cnt = 0 for rg in rgs: self.assertFalse(isinstance(rg, int)) # true prior to v1.9 self.assertTrue(isinstance(rg, str)) # true since v1.9 self.assertEqual(rg, matches[cnt]) cnt += 1 self.assertEqual(cnt, len(matches)) def testStringIterator(self): """test RangeSet string iterator striter()""" matches = [ 1, 3, 4, 5, 6, 7, 8, 11 ] rgs = RangeSet.fromlist([ "11", "3", "5-8", "1", "4" ]) cnt = 0 for rg in rgs.striter(): self.assertEqual(rg, str(matches[cnt])) cnt += 1 self.assertEqual(cnt, len(matches)) # with padding rgs = RangeSet.fromlist([ "011", "003", "005-008", "001", "004" ]) cnt = 0 for rg in rgs.striter(): self.assertTrue(isinstance(rg, str)) self.assertEqual(rg, "%0*d" % (3, matches[cnt])) cnt += 1 self.assertEqual(cnt, len(matches)) def testBinarySanityCheck(self): """test RangeSet binary sanity check""" rg1 = RangeSet("1-5") rg2 = "4-6" self.assertRaises(TypeError, rg1.__gt__, rg2) self.assertRaises(TypeError, rg1.__lt__, rg2) def testBinarySanityCheckNotImplementedSubtle(self): """test RangeSet binary sanity check (NotImplemented 
subtle)""" rg1 = RangeSet("1-5") rg2 = "4-6" self.assertEqual(rg1.__and__(rg2), NotImplemented) self.assertEqual(rg1.__or__(rg2), NotImplemented) self.assertEqual(rg1.__sub__(rg2), NotImplemented) self.assertEqual(rg1.__xor__(rg2), NotImplemented) # Should implicitly raises TypeError if the real operator # version is invoked. To test that, we perform a manual check # as an additional function would be needed to check with # assertRaises(): good_error = False try: rg3 = rg1 & rg2 except TypeError: good_error = True self.assertTrue(good_error, "TypeError not raised for &") good_error = False try: rg3 = rg1 | rg2 except TypeError: good_error = True self.assertTrue(good_error, "TypeError not raised for |") good_error = False try: rg3 = rg1 - rg2 except TypeError: good_error = True self.assertTrue(good_error, "TypeError not raised for -") good_error = False try: rg3 = rg1 ^ rg2 except TypeError: good_error = True self.assertTrue(good_error, "TypeError not raised for ^") def testIsSubSetError(self): """test RangeSet.issubset() error""" rg1 = RangeSet("1-5") rg2 = "4-6" self.assertRaises(TypeError, rg1.issubset, rg2) def testEquality(self): """test RangeSet equality""" rg0_1 = RangeSet() rg0_2 = RangeSet() self.assertEqual(rg0_1, rg0_2) rg1 = RangeSet("1-4") rg2 = RangeSet("1-4") self.assertEqual(rg1, rg2) rg3 = RangeSet("2-5") self.assertNotEqual(rg1, rg3) rg4 = RangeSet("1,2,3,4") self.assertEqual(rg1, rg4) rg5 = RangeSet("1,2,4") self.assertNotEqual(rg1, rg5) if rg1 == None: self.fail("rg1 == None succeeded") if rg1 != None: pass else: self.fail("rg1 != None failed") def testAddRange(self): """test RangeSet.add_range()""" r1 = RangeSet() r1.add_range(1, 100, 1) self.assertEqual(len(r1), 99) self.assertEqual(str(r1), "1-99") r1.add_range(40, 101, 1) self.assertEqual(len(r1), 100) self.assertEqual(str(r1), "1-100") r1.add_range(399, 423, 2) self.assertEqual(len(r1), 112) self.assertEqual(str(r1), "1-100,399,401,403,405,407,409,411,413,415,417,419,421") # With autostep... 
r1 = RangeSet(autostep=3) r1.add_range(1, 100, 1) self.assertEqual(r1.autostep, 3) self.assertEqual(len(r1), 99) self.assertEqual(str(r1), "1-99") r1.add_range(40, 101, 1) self.assertEqual(len(r1), 100) self.assertEqual(str(r1), "1-100") r1.add_range(399, 423, 2) self.assertEqual(len(r1), 112) self.assertEqual(str(r1), "1-100,399-421/2") # Bound checks r1 = RangeSet("1-30", autostep=2) self.assertEqual(len(r1), 30) self.assertEqual(str(r1), "1-30") self.assertEqual(r1.autostep, 2) r1.add_range(32, 35, 1) self.assertEqual(len(r1), 33) self.assertEqual(str(r1), "1-30,32-34") r1.add_range(31, 32, 1) self.assertEqual(len(r1), 34) self.assertEqual(str(r1), "1-34") r1 = RangeSet("1-30/4") self.assertEqual(len(r1), 8) self.assertEqual(str(r1), "1,5,9,13,17,21,25,29") r1.add_range(30, 32, 1) self.assertEqual(len(r1), 10) self.assertEqual(str(r1), "1,5,9,13,17,21,25,29-31") r1.add_range(40, 65, 10) self.assertEqual(len(r1), 13) self.assertEqual(str(r1), "1,5,9,13,17,21,25,29-31,40,50,60") r1 = RangeSet("1-30", autostep=3) r1.add_range(40, 65, 10) self.assertEqual(r1.autostep, 3) self.assertEqual(len(r1), 33) self.assertEqual(str(r1), "1-30,40-60/10") # One r1.add_range(103, 104) self.assertEqual(len(r1), 34) self.assertEqual(str(r1), "1-30,40-60/10,103") # Zero self.assertRaises(AssertionError, r1.add_range, 103, 103) def testSlices(self): """test RangeSet.slices()""" r1 = RangeSet() self.assertEqual(len(r1), 0) self.assertEqual(len(list(r1.slices())), 0) # Without autostep r1 = RangeSet("1-7/2,8-12,3000-3019") self.assertEqual(r1.autostep, None) self.assertEqual(len(r1), 29) self.assertEqual(list(r1.slices()), [slice(1, 2, 1), slice(3, 4, 1), \ slice(5, 6, 1), slice(7, 13, 1), slice(3000, 3020, 1)]) # With autostep r1 = RangeSet("1-7/2,8-12,3000-3019", autostep=2) self.assertEqual(len(r1), 29) self.assertEqual(r1.autostep, 2) self.assertEqual(list(r1.slices()), [slice(1, 8, 2), slice(8, 13, 1), \ slice(3000, 3020, 1)]) def testCopy(self): """test RangeSet.copy()""" rangeset = RangeSet("115-117,130,166-170,4780-4999") self.assertEqual(len(rangeset), 229) self.assertEqual(str(rangeset), "115-117,130,166-170,4780-4999") r1 = rangeset.copy() r2 = rangeset.copy() self.assertEqual(rangeset, r1) # content equality r1.remove(166) self.assertEqual(len(rangeset), len(r1) + 1) self.assertNotEqual(rangeset, r1) self.assertEqual(str(rangeset), "115-117,130,166-170,4780-4999") self.assertEqual(str(r1), "115-117,130,167-170,4780-4999") r2.update(RangeSet("118")) self.assertNotEqual(rangeset, r2) self.assertNotEqual(r1, r2) self.assertEqual(len(rangeset) + 1, len(r2)) self.assertEqual(str(rangeset), "115-117,130,166-170,4780-4999") self.assertEqual(str(r1), "115-117,130,167-170,4780-4999") self.assertEqual(str(r2), "115-118,130,166-170,4780-4999") # unpickling tests; generate data with: # rs = RangeSet("5,7-102,104,106-107") # print(binascii.b2a_base64(pickle.dumps(rs))) def test_unpickle_v1_3_py24(self): """test RangeSet unpickling (against v1.3/py24)""" rngset = pickle.loads(binascii.a2b_base64("gAIoY0NsdXN0ZXJTaGVsbC5Ob2RlU2V0ClJhbmdlU2V0CnEAb3EBfXECKFUHX2xlbmd0aHEDS2RVCV9hdXRvc3RlcHEER1SySa0llMN9VQdfcmFuZ2VzcQVdcQYoKEsFSwVLAUsAdHEHKEsHS2ZLAUsAdHEIKEtoS2hLAUsAdHEJKEtqS2tLAUsAdHEKZXViLg==")) self.assertEqual(rngset, RangeSet("5,7-102,104,106-107")) self.assertEqual(str(rngset), "5,7-102,104,106-107") self.assertEqual(len(rngset), 100) self.assertEqual(rngset[0], '5') self.assertEqual(rngset[1], '7') self.assertEqual(rngset[-1], '107') def test_unpickle_v1_3_py26(self): """test RangeSet unpickling (against 
v1.3/py26)""" rngset = pickle.loads(binascii.a2b_base64("gAIoY0NsdXN0ZXJTaGVsbC5Ob2RlU2V0ClJhbmdlU2V0CnEAb3EBfXECKFUHX2xlbmd0aHEDS2RVCV9hdXRvc3RlcHEER1SySa0llMN9VQdfcmFuZ2VzcQVdcQYoKEsFSwVLAUsAdHEHKEsHS2ZLAUsAdHEIKEtoS2hLAUsAdHEJKEtqS2tLAUsAdHEKZXViLg==")) self.assertEqual(rngset, RangeSet("5,7-102,104,106-107")) self.assertEqual(str(rngset), "5,7-102,104,106-107") self.assertEqual(len(rngset), 100) self.assertEqual(rngset[0], '5') self.assertEqual(rngset[1], '7') self.assertEqual(rngset[-1], '107') # unpickle_v1_4_py24 : unpickling fails as v1.4 does not have slice pickling workaround def test_unpickle_v1_4_py26(self): """test RangeSet unpickling (against v1.4/py26)""" rngset = pickle.loads(binascii.a2b_base64("gAIoY0NsdXN0ZXJTaGVsbC5Ob2RlU2V0ClJhbmdlU2V0CnEAb3EBfXEDKFUHX2xlbmd0aHEES2RVCV9hdXRvc3RlcHEFR1SySa0llMN9VQdfcmFuZ2VzcQZdcQcoY19fYnVpbHRpbl9fCnNsaWNlCnEISwVLBksBh3EJUnEKSwCGcQtoCEsHS2dLAYdxDFJxDUsAhnEOaAhLaEtpSwGHcQ9ScRBLAIZxEWgIS2pLbEsBh3ESUnETSwCGcRRlVQhfdmVyc2lvbnEVSwJ1Yi4=")) self.assertEqual(rngset, RangeSet("5,7-102,104,106-107")) self.assertEqual(str(rngset), "5,7-102,104,106-107") self.assertEqual(len(rngset), 100) self.assertEqual(rngset[0], '5') self.assertEqual(rngset[1], '7') self.assertEqual(rngset[-1], '107') def test_unpickle_v1_5_py24(self): """test RangeSet unpickling (against v1.5/py24)""" rngset = pickle.loads(binascii.a2b_base64("gAIoY0NsdXN0ZXJTaGVsbC5Ob2RlU2V0ClJhbmdlU2V0CnEAb3EBfXEDKFUHX2xlbmd0aHEES2RVCV9hdXRvc3RlcHEFR1SySa0llMN9VQdfcmFuZ2VzcQZdcQcoSwVLBksBh3EISwCGcQlLB0tnSwGHcQpLAIZxC0toS2lLAYdxDEsAhnENS2pLbEsBh3EOSwCGcQ9lVQhfdmVyc2lvbnEQSwJ1Yi4=")) self.assertEqual(rngset, RangeSet("5,7-102,104,106-107")) self.assertEqual(str(rngset), "5,7-102,104,106-107") self.assertEqual(len(rngset), 100) self.assertEqual(rngset[0], '5') self.assertEqual(rngset[1], '7') self.assertEqual(rngset[-1], '107') def test_unpickle_v1_5_py26(self): """test RangeSet unpickling (against v1.5/py26)""" rngset = pickle.loads(binascii.a2b_base64("gAIoY0NsdXN0ZXJTaGVsbC5Ob2RlU2V0ClJhbmdlU2V0CnEAb3EBfXEDKFUHX2xlbmd0aHEES2RVCV9hdXRvc3RlcHEFR1SySa0llMN9VQdfcmFuZ2VzcQZdcQcoY19fYnVpbHRpbl9fCnNsaWNlCnEISwVLBksBh3EJUnEKSwCGcQtoCEsHS2dLAYdxDFJxDUsAhnEOaAhLaEtpSwGHcQ9ScRBLAIZxEWgIS2pLbEsBh3ESUnETSwCGcRRlVQhfdmVyc2lvbnEVSwJ1Yi4=")) self.assertEqual(rngset, RangeSet("5,7-102,104,106-107")) self.assertEqual(str(rngset), "5,7-102,104,106-107") self.assertEqual(len(rngset), 100) self.assertEqual(rngset[0], '5') self.assertEqual(rngset[1], '7') self.assertEqual(rngset[-1], '107') def test_unpickle_v1_6_py24(self): """test RangeSet unpickling (against v1.6/py24)""" rngset = pickle.loads(binascii.a2b_base64("gAJjQ2x1c3RlclNoZWxsLlJhbmdlU2V0ClJhbmdlU2V0CnEAVRM1LDctMTAyLDEwNCwxMDYtMTA3cQGFcQJScQN9cQQoVQdwYWRkaW5ncQVOVQlfYXV0b3N0ZXBxBkdUskmtJZTDfVUIX3ZlcnNpb25xB0sDdWIu")) self.assertEqual(rngset, RangeSet("5,7-102,104,106-107")) self.assertEqual(str(rngset), "5,7-102,104,106-107") self.assertEqual(len(rngset), 100) self.assertEqual(rngset[0], '5') self.assertEqual(rngset[1], '7') self.assertEqual(rngset[-1], '107') def test_unpickle_v1_6_py26(self): """test RangeSet unpickling (against v1.6/py26)""" rngset = pickle.loads(binascii.a2b_base64("gAJjQ2x1c3RlclNoZWxsLlJhbmdlU2V0ClJhbmdlU2V0CnEAVRM1LDctMTAyLDEwNCwxMDYtMTA3cQGFcQJScQN9cQQoVQdwYWRkaW5ncQVOVQlfYXV0b3N0ZXBxBkdUskmtJZTDfVUIX3ZlcnNpb25xB0sDdWIu")) self.assertEqual(rngset, RangeSet("5,7-102,104,106-107")) self.assertEqual(str(rngset), "5,7-102,104,106-107") self.assertEqual(len(rngset), 100) self.assertEqual(rngset[0], '5') self.assertEqual(rngset[1], 
'7') self.assertEqual(rngset[-1], '107') def test_unpickle_v1_8_4_py27(self): """test RangeSet unpickling (against v1.8.4/py27)""" rngset = pickle.loads(binascii.a2b_base64("Y0NsdXN0ZXJTaGVsbC5SYW5nZVNldApSYW5nZVNldApwMAooUyc1LDctMTAyLDEwNCwxMDYtMTA3JwpwMQp0cDIKUnAzCihkcDQKUydwYWRkaW5nJwpwNQpOc1MnX2F1dG9zdGVwJwpwNgpGMWUrMTAwCnNTJ192ZXJzaW9uJwpwNwpJMwpzYi4=")) self.assertEqual(rngset, RangeSet("5,7-102,104,106-107")) self.assertEqual(str(rngset), "5,7-102,104,106-107") self.assertEqual(len(rngset), 100) self.assertEqual(rngset[0], '5') self.assertEqual(rngset[1], '7') self.assertEqual(rngset[-1], '107') rngset = pickle.loads(binascii.a2b_base64("Y0NsdXN0ZXJTaGVsbC5SYW5nZVNldApSYW5nZVNldApwMAooUycwMDAzLTAwNDAsMDA1OS0xNDAwJwpwMQp0cDIKUnAzCihkcDQKUydwYWRkaW5nJwpwNQpJNApzUydfYXV0b3N0ZXAnCnA2CkYxZSsxMDAKc1MnX3ZlcnNpb24nCnA3CkkzCnNiLg==")) self.assertEqual(rngset, RangeSet("0003-0040,0059-1400")) self.assertEqual(str(rngset), "0003-0040,0059-1400") self.assertEqual(len(rngset), 1380) self.assertEqual(rngset[0], '0003') self.assertEqual(rngset[1], '0004') self.assertEqual(rngset[-1], '1400') def test_pickle_current(self): """test RangeSet pickling (current version)""" dump = pickle.dumps(RangeSet("1-100")) self.assertNotEqual(dump, None) rngset = pickle.loads(dump) self.assertEqual(rngset, RangeSet("1-100")) self.assertEqual(str(rngset), "1-100") self.assertEqual(rngset[0], '1') self.assertEqual(rngset[1], '2') self.assertEqual(rngset[-1], '100') def testIntersectionLength(self): """test RangeSet intersection/length""" r1 = RangeSet("115-117,130,166-170,4780-4999") self.assertEqual(len(r1), 229) r2 = RangeSet("116-117,130,4781-4999") self.assertEqual(len(r2), 222) res = r1.intersection(r2) self.assertEqual(len(res), 222) r1 = RangeSet("115-200") self.assertEqual(len(r1), 86) r2 = RangeSet("116-117,119,123-131,133,149,199") self.assertEqual(len(r2), 15) res = r1.intersection(r2) self.assertEqual(len(res), 15) # StopIteration test r1 = RangeSet("115-117,130,166-170,4780-4999,5003") self.assertEqual(len(r1), 230) r2 = RangeSet("116-117,130,4781-4999") self.assertEqual(len(r2), 222) res = r1.intersection(r2) self.assertEqual(len(res), 222) # StopIteration test2 r1 = RangeSet("130,166-170,4780-4999") self.assertEqual(len(r1), 226) r2 = RangeSet("116-117") self.assertEqual(len(r2), 2) res = r1.intersection(r2) self.assertEqual(len(res), 0) def testFolding(self): """test RangeSet folding conditions""" r1 = RangeSet("112,114-117,119,121,130,132,134,136,138,139-141,144,147-148", autostep=6) self.assertEqual(str(r1), "112,114-117,119,121,130,132,134,136,138-141,144,147-148") r1.autostep = 5 self.assertEqual(str(r1), "112,114-117,119,121,130-138/2,139-141,144,147-148") r1 = RangeSet("1,3-4,6,8") self.assertEqual(str(r1), "1,3-4,6,8") r1 = RangeSet("1,3-4,6,8", autostep=4) self.assertEqual(str(r1), "1,3-4,6,8") r1 = RangeSet("1,3-4,6,8", autostep=2) self.assertEqual(str(r1), "1-3/2,4,6-8/2") r1 = RangeSet("1,3-4,6,8", autostep=3) self.assertEqual(str(r1), "1,3-4,6,8") # empty set r1 = RangeSet(autostep=3) self.assertEqual(str(r1), "") def test_ior(self): """test RangeSet.__ior__()""" r1 = RangeSet("1,3-9,14-21,30-39,42") r2 = RangeSet("2-5,10-32,35,40-41") r1 |= r2 self.assertEqual(len(r1), 42) self.assertEqual(str(r1), "1-42") def test_iand(self): """test RangeSet.__iand__()""" r1 = RangeSet("1,3-9,14-21,30-39,42") r2 = RangeSet("2-5,10-32,35,40-41") r1 &= r2 self.assertEqual(len(r1), 15) self.assertEqual(str(r1), "3-5,14-21,30-32,35") def test_ixor(self): """test RangeSet.__ixor__()""" r1 = 
RangeSet("1,3-9,14-21,30-39,42") r2 = RangeSet("2-5,10-32,35,40-41") r1 ^= r2 self.assertEqual(len(r1), 27) self.assertEqual(str(r1), "1-2,6-13,22-29,33-34,36-42") def test_isub(self): """test RangeSet.__isub__()""" r1 = RangeSet("1,3-9,14-21,30-39,42") r2 = RangeSet("2-5,10-32,35,40-41") r1 -= r2 self.assertEqual(len(r1), 12) self.assertEqual(str(r1), "1,6-9,33-34,36-39,42") def test_contiguous(self): r0 = RangeSet() self.assertEqual([], [str(ns) for ns in r0.contiguous()]) r1 = RangeSet("1,3-9,14-21,30-39,42") self.assertEqual(['1', '3-9', '14-21', '30-39', '42'], [str(ns) for ns in r1.contiguous()]) def test_dim(self): r0 = RangeSet() self.assertEqual(r0.dim(), 0) r1 = RangeSet("1-10,15-20") self.assertEqual(r1.dim(), 1) def test_intiter(self): matches = [ 1, 3, 4, 5, 6, 7, 8, 11 ] rgs = RangeSet.fromlist([ "11", "3", "5-8", "1", "4" ]) cnt = 0 for rg in rgs.intiter(): self.assertEqual(rg, matches[cnt]) cnt += 1 self.assertEqual(cnt, len(matches)) # with padding rgs = RangeSet.fromlist([ "011", "003", "005-008", "001", "004" ]) cnt = 0 for rg in rgs.intiter(): self.assertTrue(isinstance(rg, int)) self.assertEqual(rg, matches[cnt]) cnt += 1 self.assertEqual(cnt, len(matches)) # with mixed length padding (add 01, 09 and 0001): not supported until 1.9 matches = [ 1, 9 ] + matches + [ 1 ] rgs = RangeSet.fromlist([ "011", "01", "003", "005-008", "001", "0001", "09", "004" ]) cnt = 0 for rg in rgs.intiter(): self.assertTrue(isinstance(rg, int)) self.assertEqual(rg, matches[cnt]) cnt += 1 self.assertEqual(cnt, len(matches)) def test_mixed_padding(self): r0 = RangeSet("030-031,032-100/2,101-489", autostep=3) self.assertEqual(str(r0), "030-032,034-100/2,101-489") r1 = RangeSet("030-032,033-100/3,102", autostep=3) self.assertEqual(str(r1), "030-033,036-102/3") r2 = RangeSet("030-032,033-100/3,101", autostep=3) self.assertEqual(str(r2), "030-033,036-099/3,101") r3 = RangeSet("030-032,033-100/3,100", autostep=3) self.assertEqual(str(r3), "030-033,036-099/3,100") r5 = RangeSet("030-032,033-100/3,99-105/3,0001", autostep=3) self.assertEqual(str(r5), "99,030-033,036-105/3,0001") def test_mixed_padding_mismatch(self): self.assertRaises(RangeSetParseError, RangeSet, "1-044") self.assertRaises(RangeSetParseError, RangeSet, "01-044") self.assertRaises(RangeSetParseError, RangeSet, "001-44") self.assertRaises(RangeSetParseError, RangeSet, "0-9,1-044") self.assertRaises(RangeSetParseError, RangeSet, "0-9,01-044") self.assertRaises(RangeSetParseError, RangeSet, "0-9,001-44") self.assertRaises(RangeSetParseError, RangeSet, "030-032,033-99/3,100") def test_padding_property_compat(self): r0 = RangeSet("0-10,15-20") self.assertEqual(r0.padding, None) r0.padding = 1 self.assertEqual(r0.padding, None) self.assertEqual(str(r0), "0-10,15-20") r0.padding = 2 self.assertEqual(r0.padding, 2) self.assertEqual(str(r0), "00-10,15-20") r0.padding = 3 self.assertEqual(r0.padding, 3) self.assertEqual(str(r0), "000-010,015-020") # reset padding using None is allowed r0.padding = None self.assertEqual(r0.padding, None) self.assertEqual(str(r0), "0-10,15-20") def test_strip_whitespaces(self): r0 = RangeSet(" 1,5,39-42,100") self._testRS(" 1,5,39-42,100", "1,5,39-42,100", 7) self._testRS("1 ,5,39-42,100", "1,5,39-42,100", 7) self._testRS("1, 5,39-42,100", "1,5,39-42,100", 7) self._testRS("1,5 ,39-42,100", "1,5,39-42,100", 7) self._testRS("1,5, 39-42,100", "1,5,39-42,100", 7) self._testRS("1,5,39-42 ,100", "1,5,39-42,100", 7) self._testRS("1,5,39-42, 100", "1,5,39-42,100", 7) self._testRS("1,5,39-42,100 ", "1,5,39-42,100", 7) 
self._testRS(" 1 ,5,39-42,100", "1,5,39-42,100", 7) self._testRS("1 , 5 , 39-42 , 100", "1,5,39-42,100", 7) self._testRS(" 1 , 5 , 39-42 , 100 ", "1,5,39-42,100", 7) # whitespaces within ranges self._testRS("1 - 2", "1-2", 2) self._testRS("01 - 02", "01-02", 2) self._testRS("01- 02", "01-02", 2) self._testRS("01 -02", "01-02", 2) self._testRS(" 01-02", "01-02", 2) self._testRS(" 01 -02", "01-02", 2) self._testRS(" 01 - 02", "01-02", 2) self._testRS(" 01 - 02 ", "01-02", 2) self._testRS("01 - 02 ", "01-02", 2) self._testRS("01- 02 ", "01-02", 2) self._testRS("01-02 ", "01-02", 2) self._testRS("0-8/2", "0-8/2", 5) self._testRS("1-7/2", "1-7/2", 4) self._testRS("0-8 /2", "0-8/2", 5) self._testRS("1-7 /2", "1-7/2", 4) self._testRS("0-8/ 2", "0-8/2", 5) self._testRS("1-7/ 2", "1-7/2", 4) self._testRS("0-8 / 2", "0-8/2", 5) self._testRS("1-7 / 2", "1-7/2", 4) self._testRS("0 -8 / 2", "0-8/2", 5) self._testRS("1 -7 / 2", "1-7/2", 4) self._testRS("0 - 8 / 2", "0-8/2", 5) self._testRS("1 - 7 / 2", "1-7/2", 4) self._testRS("00-08/2", "00-08/2", 5) self._testRS("01-07/2", "01-07/2", 4) self._testRS("00-08 /2", "00-08/2", 5) self._testRS("01-07 /2", "01-07/2", 4) self._testRS("00-08/ 2", "00-08/2", 5) self._testRS("01-07/ 2", "01-07/2", 4) self._testRS("00-08 / 2", "00-08/2", 5) self._testRS("01-07 / 2", "01-07/2", 4) # invalid patterns self.assertRaises(RangeSetParseError, RangeSet, " 0 0") self.assertRaises(RangeSetParseError, RangeSet, " 1 2") self.assertRaises(RangeSetParseError, RangeSet, "0 1") self.assertRaises(RangeSetParseError, RangeSet, "0 1 ") self.assertRaises(RangeSetParseError, RangeSet, "1,5,39-42,10 0") self.assertRaises(RangeSetParseError, RangeSet, "1,5,39-42,12 3,300") self.assertRaises(RangeSetParseError, RangeSet, "1,5,") self.assertRaises(RangeSetParseError, RangeSet, "1,5,") self.assertRaises(RangeSetParseError, RangeSet, "1,5, ") self.assertRaises(RangeSetParseError, RangeSet, "1,5,, ") def test_init_ranges(self): """test RangeSet initialization with a range""" r1 = RangeSet(range(5,7)) self.assertEqual(str(r1), "5-6") self.assertEqual(len(r1), 2) def test_init_negative_ranges(self): """test RangeSet initialization with with negative ranges""" # negative ranges (GH#515) r1 = RangeSet(range(-1,1)) self.assertEqual(str(r1), "-1-0") self.assertEqual(len(r1), 2) r1 = RangeSet("-1-0") self.assertEqual(str(r1), "-1-0") self.assertEqual(len(r1), 2) r1 = RangeSet(range(-5,7)) self.assertEqual(str(r1), "-5-6") self.assertEqual(len(r1), 12) r1 = RangeSet("-5-6") self.assertEqual(str(r1), "-5-6") self.assertEqual(len(r1), 12) r1 = RangeSet(range(-9,-5)) self.assertEqual(str(r1), "-9--6") self.assertEqual(len(r1), 4) r1 = RangeSet("-9--6") self.assertEqual(str(r1), "-9--6") self.assertEqual(len(r1), 4) r1 = RangeSet(range(-12,-5)) self.assertEqual(str(r1), "-12--6") self.assertEqual(len(r1), 7) r1 = RangeSet(range(-100,-30)) self.assertEqual(str(r1), "-100--31") self.assertEqual(len(r1), 70) r1 = RangeSet(range(-10,-2,2)) self.assertEqual(str(r1), "-10,-8,-6,-4") self.assertEqual(len(r1), 4) r1 = RangeSet(range(-3, -2)) self.assertEqual(str(r1), "-3") self.assertEqual(len(r1), 1) r1 = RangeSet("-3/2") self.assertEqual(str(r1), "-3") self.assertEqual(len(r1), 1) r1 = RangeSet(range(-10,-2,2), autostep=3) self.assertEqual(str(r1), "-10--4/2") self.assertEqual(len(r1), 4) r1 = RangeSet(['-30', '-20', '-28', '-29']) self.assertEqual(str(r1), "-30--28,-20") self.assertEqual(len(r1), 4) r1 = RangeSet(['-31', '-20', '-27', '-29']) self.assertEqual(str(r1), "-31,-29,-27,-20") 
        self.assertEqual(len(r1), 4)
        # parsing of negative range with padding is not supported
        self.assertRaises(RangeSetParseError, RangeSet, "-001")
        self.assertRaises(RangeSetParseError, RangeSet, "-009")
        self.assertRaises(RangeSetParseError, RangeSet, "-01-00")
        self.assertRaises(RangeSetParseError, RangeSet, "-003--001")

ClusterShell-1.9.2/tests/StreamWorkerTest.py

"""
Unit test for StreamWorker
"""

import os
import unittest

from ClusterShell.Worker.Worker import StreamWorker, WorkerError
from ClusterShell.Task import task_self
from ClusterShell.Event import EventHandler


class StreamTest(unittest.TestCase):

    def run_worker(self, worker):
        """helper method to schedule and run a worker"""
        task_self().schedule(worker)
        task_self().run()

    def test_001_empty(self):
        """test empty StreamWorker"""
        # that makes no sense but well...
        # handler=None is supported by base Worker class
        worker = StreamWorker(handler=None)
        self.run_worker(worker)
        # GH Issue #488:
        # An unconfigured engine client does not abort by itself...
        worker.abort()
        # Check that we are in a clean state now
        self.assertEqual(len(task_self()._engine._clients), 0)

    def test_002_pipe_readers(self):
        """test StreamWorker bound to several pipe readers"""
        streams = {
            "pipe1_reader": b"Some data to read from a pipe",
            "stderr": b"Error data to read using special keyword stderr",
            "pipe2_reader": b"Other data to read from another pipe",
            "pipe3_reader": b"Cool data to read from a third pipe"
        }

        class TestH(EventHandler):
            def __init__(self, testcase):
                self.snames = set()
                self.testcase = testcase

            def ev_read(self, worker, node, sname, msg):
                self.recv_msg(sname, msg)

            def recv_msg(self, sname, msg):
                self.testcase.assertTrue(len(self.snames) < len(streams))
                self.testcase.assertEqual(streams[sname], msg)
                self.snames.add(sname)
                if len(self.snames) == len(streams):
                    # before finishing, try to add another pipe at
                    # runtime: this is NOT allowed
                    rfd, wfd = os.pipe()
                    self.testcase.assertRaises(WorkerError, worker.set_reader,
                                               "pipe4_reader", rfd)
                    self.testcase.assertRaises(WorkerError, worker.set_writer,
                                               "pipe4_writer", wfd)
                    os.close(rfd)
                    os.close(wfd)

        # create a StreamWorker instance bound to several pipes
        hdlr = TestH(self)
        worker = StreamWorker(handler=hdlr)
        for sname in streams.keys():
            rfd, wfd = os.pipe()
            worker.set_reader(sname, rfd)
            os.write(wfd, streams[sname])
            os.close(wfd)
        self.run_worker(worker)
        # check that all ev_read have been received
        self.assertEqual(set(("pipe1_reader", "pipe2_reader", "pipe3_reader",
                              "stderr")), hdlr.snames)

    def test_003_io_pipes(self):
        """test StreamWorker bound to pipe readers and writers"""
        # os.write -> pipe1 -> worker -> pipe2 -> os.read

        class TestH(EventHandler):
            def __init__(self, testcase):
                self.testcase = testcase
                self.worker = None
                self.pickup_count = 0
                self.hup_count = 0

            def ev_pickup(self, worker, node):
                self.pickup_count += 1

            def ev_read(self, worker, node, sname, msg):
                self.testcase.assertEqual(sname, "pipe1")
                worker.write(msg, "pipe2")

            def ev_timer(self, timer):
                # call set_write_eof on specific stream after some delay
                worker = self.worker
                self.worker = 'DONE'
                worker.set_write_eof("pipe2")

            def ev_hup(self, worker, node, rc):
                # ev_hup called at the end (after set_write_eof is called)
                self.hup_count += 1
                self.testcase.assertEqual(self.worker, 'DONE')
                # no rc code should be set
                self.testcase.assertEqual(rc, None)

        # create a StreamWorker instance bound to several pipes
        hdlr = TestH(self)
        worker = StreamWorker(handler=hdlr)
        hdlr.worker = worker
        rfd1, wfd1 = os.pipe()
        worker.set_reader("pipe1", rfd1)
        os.write(wfd1, b"Some data\n")
        os.close(wfd1)
        rfd2, wfd2 = os.pipe()
        worker.set_writer("pipe2", wfd2)
        timer1 = task_self().timer(1.0, handler=hdlr)
        self.run_worker(worker)
        self.assertEqual(os.read(rfd2, 1024), b"Some data")
        os.close(rfd2)
        # wfd2 should be closed by CS
        self.assertRaises(OSError, os.close, wfd2)
        # rfd1 should be closed by CS
        self.assertRaises(OSError, os.close, rfd1)
        # check pickup/hup
        self.assertEqual(hdlr.hup_count, 1)
        self.assertEqual(hdlr.pickup_count, 1)
        self.assertTrue(task_self().max_retcode() is None)

    def test_004_timeout_on_open_stream(self):
        """test StreamWorker with timeout set on open stream"""
        # Create worker with timeout set
        worker = StreamWorker(handler=None, timeout=0.5)
        # Create pipe stream
        rfd1, wfd1 = os.pipe()
        worker.set_reader("pipe1", rfd1, closefd=False)
        # Write some chars without line break (worst case)
        os.write(wfd1, b"Some data")
        # TEST: Do not close wfd1 to simulate open stream
        # Need to enable pipe1_msgtree
        task_self().set_default("pipe1_msgtree", True)
        self.run_worker(worker)
        # Timeout occurred - read buffer should have been flushed
        self.assertEqual(worker.read(sname="pipe1"), b"Some data")
        # closefd=False was set, so we should still be able to close
        # the pipe fds ourselves
        os.close(rfd1)
        os.close(wfd1)

    def test_005_timeout_events(self):
        """test StreamWorker with timeout set (event based)"""

        class TestH(EventHandler):
            def __init__(self, testcase):
                self.testcase = testcase
                self.ev_pickup_called = False
                self.ev_read_called = False
                self.ev_hup_called = False
                self.ev_timeout_called = False

            def ev_pickup(self, worker, node):
                self.ev_pickup_called = True

            def ev_read(self, worker, node, sname, msg):
                self.ev_read_called = True
                self.testcase.assertEqual(sname, "pipe1")
                self.testcase.assertEqual(msg, b"Some data")

            def ev_hup(self, worker, node, rc):
                # ev_hup is called but no rc code should be set
                self.ev_hup_called = True
                self.testcase.assertEqual(rc, None)

            def ev_close(self, worker, timedout):
                if timedout:
                    self.ev_timeout_called = True

        hdlr = TestH(self)
        worker = StreamWorker(handler=hdlr, timeout=0.5)
        # Create pipe stream with closefd set (default)
        rfd1, wfd1 = os.pipe()
        worker.set_reader("pipe1", rfd1)
        # Write some chars without line break (worst case)
        os.write(wfd1, b"Some data")
        # TEST: Do not close wfd1 to simulate open stream
        self.run_worker(worker)
        self.assertTrue(hdlr.ev_timeout_called)
        self.assertTrue(hdlr.ev_read_called)
        self.assertTrue(hdlr.ev_pickup_called)
        self.assertTrue(hdlr.ev_hup_called)
        # rfd1 should be already closed by CS
        self.assertRaises(OSError, os.close, rfd1)
        os.close(wfd1)

    def test_006_worker_abort_on_written(self):
        """test StreamWorker abort on ev_written"""
        # This test creates a writable StreamWorker that will abort after
        # the first write, to check whether ev_written is generated in the
        # right place.
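# ----------------------------------------------------------------------
# Aside: a condensed sketch of the read-then-write "echo" pattern from
# test_003 above, with EOF set directly from ev_read instead of from a
# timer. EchoHandler and the "inpipe"/"outpipe" names are illustrative,
# not part of the original suite.

def demo_echo():
    """forward one message from one pipe to another through a worker"""

    class EchoHandler(EventHandler):
        def ev_read(self, worker, node, sname, msg):
            if sname == "inpipe":
                worker.write(msg, "outpipe")     # forward to writer stream
                worker.set_write_eof("outpipe")  # single message in sketch

    worker = StreamWorker(handler=EchoHandler())
    rfd1, wfd1 = os.pipe()
    worker.set_reader("inpipe", rfd1)
    rfd2, wfd2 = os.pipe()
    worker.set_writer("outpipe", wfd2)
    os.write(wfd1, b"ping\n")
    os.close(wfd1)                               # EOF on the input side
    task_self().schedule(worker)
    task_self().run()
    data = os.read(rfd2, 1024)                   # b"ping" expected
    os.close(rfd2)                               # rfd1/wfd2 closed by CS
    return data
# ----------------------------------------------------------------------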
        class TestH(EventHandler):
            def __init__(self, testcase, rfd):
                self.testcase = testcase
                self.rfd = rfd
                self.check_written = 0

            def ev_written(self, worker, node, sname, size):
                self.check_written += 1
                self.testcase.assertEqual(os.read(self.rfd, 1024), b"initial")
                worker.abort()
                worker.abort()  # safe but no effect

        rfd, wfd = os.pipe()
        hdlr = TestH(self, rfd)
        worker = StreamWorker(handler=hdlr)
        worker.set_writer("test", wfd)  # closefd=True
        worker.write(b"initial", "test")
        self.run_worker(worker)
        self.assertEqual(hdlr.check_written, 1)
        os.close(rfd)

    def test_007_worker_abort_on_written_eof(self):
        """test StreamWorker abort on ev_written (with EOF)"""
        # This test is similar to test_006 above but does
        # write() + set_write_eof().

        class TestH(EventHandler):
            def __init__(self, testcase, rfd):
                self.testcase = testcase
                self.rfd = rfd
                self.check_written = 0

            def ev_written(self, worker, node, sname, size):
                self.check_written += 1
                self.testcase.assertEqual(os.read(self.rfd, 1024), b"initial")
                worker.abort()
                worker.abort()  # safe but no effect

        rfd, wfd = os.pipe()
        hdlr = TestH(self, rfd)
        worker = StreamWorker(handler=hdlr)
        worker.set_writer("test", wfd)  # closefd=True
        worker.write(b"initial", "test")
        worker.set_write_eof()
        self.run_worker(worker)
        self.assertEqual(hdlr.check_written, 1)
        os.close(rfd)

    def test_008_broken_pipe_on_write(self):
        """test StreamWorker with broken pipe on write()"""
        # This test creates a writable StreamWorker that will close the
        # read side of the pipe just after the first write to generate a
        # broken pipe error.

        class TestH(EventHandler):
            def __init__(self, testcase, rfd):
                self.testcase = testcase
                self.rfd = rfd
                self.check_hup = 0
                self.check_written = 0

            def ev_hup(self, worker, node, rc):
                self.check_hup += 1

            def ev_written(self, worker, node, sname, size):
                self.check_written += 1
                self.testcase.assertEqual(os.read(self.rfd, 1024), b"initial")
                # close the reader, which will stop the StreamWorker
                os.close(self.rfd)
                # The following write call used to raise broken pipe before
                # version 1.7.2.
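# ----------------------------------------------------------------------
# Aside: a minimal sketch of the writable-stream pattern behind
# test_006/test_007 above, counting bytes acknowledged via ev_written.
# WrittenCounter and demo_writer are illustrative names, not part of
# the original suite.

def demo_writer():
    """write one payload to a pipe and count acknowledged bytes"""

    class WrittenCounter(EventHandler):
        def __init__(self):
            self.bytes_written = 0

        def ev_written(self, worker, node, sname, size):
            # called once data passed to write() has hit stream sname
            self.bytes_written += size

    hdlr = WrittenCounter()
    worker = StreamWorker(handler=hdlr)
    rfd, wfd = os.pipe()
    worker.set_writer("out", wfd)    # closefd=True: worker closes wfd
    worker.write(b"payload", "out")
    worker.set_write_eof()           # no more data: let the worker finish
    task_self().schedule(worker)
    task_self().run()
    os.read(rfd, 1024)               # drain b"payload"
    os.close(rfd)
    return hdlr.bytes_written        # 7 expected
# ----------------------------------------------------------------------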
                worker.write(b"final")

        rfd, wfd = os.pipe()
        hdlr = TestH(self, rfd)
        worker = StreamWorker(handler=hdlr)
        worker.set_writer("test", wfd)  # closefd=True
        worker.write(b"initial", "test")
        self.run_worker(worker)
        self.assertEqual(hdlr.check_hup, 1)
        self.assertEqual(hdlr.check_written, 1)

    def test_009_worker_abort_on_close(self):
        """test StreamWorker abort() on closing worker"""

        class TestH(EventHandler):
            def __init__(self, testcase, rfd):
                self.testcase = testcase
                self.rfd = rfd
                self.check_close = 0

            def ev_close(self, worker, timedout):
                self.check_close += 1
                self.testcase.assertFalse(timedout)
                os.close(self.rfd)
                worker.abort()
                worker.abort()  # safe but no effect

        rfd, wfd = os.pipe()
        hdlr = TestH(self, rfd)
        worker = StreamWorker(handler=hdlr)
        worker.set_writer("test", wfd)  # closefd=True
        worker.write(b"initial", "test")
        worker.set_write_eof()
        self.run_worker(worker)
        self.assertEqual(hdlr.check_close, 1)

#
# ClusterShell-1.9.2/tests/TLib.py
#

"""
Unit test library
"""

import os
import socket
import sys
import time

from tempfile import mkstemp
from tempfile import TemporaryFile, NamedTemporaryFile, TemporaryDirectory

try:
    import configparser
except ImportError:
    import ConfigParser as configparser

from io import BytesIO, StringIO

__all__ = ['HOSTNAME', 'load_cfg', 'make_temp_filename', 'make_temp_file',
           'make_temp_dir', 'CLI_main']

# Get machine short hostname
HOSTNAME = socket.gethostname().split('.', 1)[0]


class TBytesIO(BytesIO):
    """Standard stream of in-memory bytes for testing purposes."""

    def __init__(self, initial_bytes=None):
        if initial_bytes and type(initial_bytes) is not bytes:
            initial_bytes = initial_bytes.encode()
        BytesIO.__init__(self, initial_bytes)

    def write(self, s):
        BytesIO.write(self, s.encode())

    def isatty(self):
        return False


def load_cfg(name):
    """Load test configuration file as a new ConfigParser"""
    cfgparser = configparser.ConfigParser()
    cfgparser.read([os.path.expanduser('~/.clustershell/tests/%s' % name),
                    '/etc/clustershell/tests/%s' % name])
    return cfgparser

#
# Temp files and directories
#

def make_temp_filename(suffix=''):
    """Return a temporary name for a file."""
    if len(suffix) > 0 and suffix[0] != '-':
        suffix = '-' + suffix
    fd, name = mkstemp(suffix, prefix='cs-test-')
    os.close(fd)  # don't leak open fd
    return name

def make_temp_file(text, suffix='', dir=None):
    """Create a temporary file with the provided text."""
    assert type(text) is bytes
    tmp = NamedTemporaryFile(prefix='cs-test-', suffix=suffix, dir=dir)
    tmp.write(text)
    tmp.flush()
    return tmp

def make_temp_dir(suffix=''):
    """Create a temporary directory."""
    if len(suffix) > 0 and suffix[0] != '-':
        suffix = '-' + suffix
    return TemporaryDirectory(suffix, prefix='cs-test-')

#
# CLI tests
#

def CLI_main(test, main, args, stdin, expected_stdout, expected_rc=0,
             expected_stderr=None):
    """Generic CLI main() direct calling function that allows code
    coverage checks."""
    rc = -1
    saved_stdin = sys.stdin
    saved_stdout = sys.stdout
    saved_stderr = sys.stderr
    # Capture standard streams
    # Input: if defined, the stdin argument specifies input data
    if stdin is not None:
        if type(stdin) is bytes:
            # Use a temporary file in Python 2, or one with a buffer
            # (bytes) in Python 3
            sys.stdin = TemporaryFile()
            sys.stdin.write(stdin)
            sys.stdin.seek(0)  # ready to be read
        else:
            # If stdin is a string in Python 3, use StringIO as sys.stdin
            # should be read in text mode for some tests (eg. Nodeset).
            sys.stdin = StringIO(stdin)
    # Output: ClusterShell writes to stdout/stderr using strings, but the
    # tests expect bytes. TBytesIO is a wrapper that does the conversion
    # until we migrate all tests to string.
    sys.stdout = out = TBytesIO()
    sys.stderr = err = TBytesIO()
    sys.argv = args
    try:
        main()
    except SystemExit as exc:
        rc = int(str(exc))
    finally:
        sys.stdout = saved_stdout
        sys.stderr = saved_stderr
        # close temporary file if we used one for stdin
        if saved_stdin != sys.stdin:
            sys.stdin.close()
        sys.stdin = saved_stdin
    try:
        if expected_stdout is not None:
            # expected_stdout might be a compiled regexp or a string
            try:
                if not expected_stdout.search(out.getvalue()):
                    # search failed; use assertEqual() to display
                    # expected/output
                    test.assertEqual(out.getvalue(), expected_stdout.pattern)
            except AttributeError:
                # not a regexp
                test.assertEqual(out.getvalue(), expected_stdout)
        if expected_stderr is not None:
            # expected_stderr might be a compiled regexp or a string
            try:
                if not expected_stderr.match(err.getvalue()):
                    # match failed; use assertEqual() to display
                    # expected/output
                    test.assertEqual(err.getvalue(), expected_stderr.pattern)
            except AttributeError:
                # check the end as stderr messages are often prefixed with
                # argv[0]
                test.assertTrue(err.getvalue().endswith(expected_stderr),
                                err.getvalue() + b' != ' + expected_stderr)
        if expected_rc is not None:
            test.assertEqual(rc, expected_rc,
                             "rc=%d err=%s" % (rc, err.getvalue()))
    finally:
        out.close()
        err.close()

#
# ClusterShell-1.9.2/tests/TaskDistantMixin.py
#

# ClusterShell (distant) test suite
# Written by S. Thiell

"""Unit test for ClusterShell Task (distant)"""

import pwd
import unittest
import warnings

from TLib import HOSTNAME, make_temp_filename, make_temp_dir
from ClusterShell.Event import EventHandler
from ClusterShell.Task import *
from ClusterShell.Worker.Ssh import WorkerSsh
from ClusterShell.Worker.EngineClient import *
from ClusterShell.Worker.Worker import FANOUT_UNLIMITED, WorkerBadArgumentError

import socket

# TEventHandlerChecker 'received event' flags
EV_START = 0x01
EV_PICKUP = 0x02
EV_READ = 0x04
EV_WRITTEN = 0x08
EV_HUP = 0x10
EV_TIMEOUT = 0x20
EV_CLOSE = 0x40


class TaskDistantMixin(object):

    def setUp(self):
        self._task = task_self()

    def testLocalhostCommand(self):
        # init worker
        worker = self._task.shell("/bin/hostname", nodes=HOSTNAME)
        # run task
        self._task.resume()

    def testLocalhostCommand2(self):
        # init workers
        worker = self._task.shell("/bin/hostname", nodes=HOSTNAME)
        worker = self._task.shell("/bin/uname -r", nodes=HOSTNAME)
        # run task
        self._task.resume()

    def testTaskShellWorkerGetCommand(self):
        worker1 = self._task.shell("/bin/hostname", nodes=HOSTNAME)
        worker2 = self._task.shell("/bin/uname -r", nodes=HOSTNAME)
        self._task.resume()
        self.assertTrue(hasattr(worker1, 'command'))
        self.assertTrue(hasattr(worker2, 'command'))
        self.assertEqual(worker1.command, "/bin/hostname")
        self.assertEqual(worker2.command, "/bin/uname -r")

    def testTaskShellRunDistant(self):
        wrk = task_self().run("false", nodes=HOSTNAME)
        self.assertEqual(wrk.node_retcode(HOSTNAME), 1)

    def testLocalhostCopy(self):
        dests = []
        try:
            for i in range(5):
                dest = make_temp_filename(suffix='LocalhostCopy')
                dests.append(dest)
                worker = self._task.copy("/etc/hosts", dest, nodes=HOSTNAME)
            self._task.resume()
        finally:
            for dest in dests:
                os.unlink(dest)

    def testCopyNodeFailure(self):
        # == stderr merged ==
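# ----------------------------------------------------------------------
# Aside: a minimal sketch of driving a CLI entry point through the
# CLI_main() helper defined above. demo_cli_main is a hypothetical
# stand-in; real tests pass a tool's actual main() instead.

import sys
import unittest

from TLib import CLI_main

def demo_cli_main():
    """hypothetical CLI under test: upper-case stdin to stdout"""
    sys.stdout.write(sys.stdin.read().upper())
    sys.exit(0)

class DemoCLITest(unittest.TestCase):
    def test_upper(self):
        # args plays the role of sys.argv; stdin may be bytes, str or
        # None; expected stdout/stderr are bytes or compiled regexps
        CLI_main(self, demo_cli_main, ['demo'], "abc", b"ABC", 0)
# ----------------------------------------------------------------------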
self._task.set_default("stderr", False) dest = make_temp_filename(suffix='LocalhostCopyF') worker = self._task.copy("/etc/hosts", dest, nodes='unlikely-node,%s' % HOSTNAME) self._task.resume() self.assertEqual(worker.node_error_buffer("unlikely-node"), None) self.assertTrue(len(worker.node_buffer("unlikely-node")) > 2) os.unlink(dest) # == stderr separated == self._task.set_default("stderr", True) try: dest = make_temp_filename(suffix='LocalhostCopyF2') worker = self._task.copy("/etc/hosts", dest, nodes='unlikely-node,%s' % HOSTNAME) # run task self._task.resume() self.assertTrue(worker.node_buffer("unlikely-node") is None) self.assertTrue(len(worker.node_error_buffer("unlikely-node")) > 2) os.unlink(dest) finally: self._task.set_default("stderr", False) def testLocalhostCopyDir(self): dtmp_src = make_temp_dir('src') dtmp_dst = make_temp_dir('testLocalhostCopyDir') try: os.mkdir(os.path.join(dtmp_src.name, "lev1_a")) os.mkdir(os.path.join(dtmp_src.name, "lev1_b")) os.mkdir(os.path.join(dtmp_src.name, "lev1_a", "lev2")) worker = self._task.copy(dtmp_src.name, dtmp_dst.name, nodes=HOSTNAME) self._task.resume() self.assertTrue(os.path.exists(os.path.join(dtmp_dst.name, os.path.basename(dtmp_src.name), "lev1_a", "lev2"))) finally: dtmp_dst.cleanup() dtmp_src.cleanup() def testLocalhostExplicitSshCopy(self): dest = make_temp_filename('testLocalhostExplicitSshCopy') srcsz = os.path.getsize("/etc/hosts") try: worker = WorkerSsh(HOSTNAME, source="/etc/hosts", dest=dest, handler=None, timeout=10) self._task.schedule(worker) self._task.resume() self.assertEqual(srcsz, os.path.getsize(dest)) finally: os.remove(dest) def testLocalhostExplicitSshCopyWithOptions(self): dest = make_temp_dir('testLocalhostExplicitSshCopyWithOptions') self._task.set_info("scp_path", "/usr/bin/scp -l 10") self._task.set_info("scp_options", "-oLogLevel=QUIET") try: worker = WorkerSsh(HOSTNAME, source="/etc/hosts", dest=dest.name, handler=None) self._task.schedule(worker) self._task.resume() self.assertEqual(self._task.max_retcode(), 0) self.assertTrue(os.path.exists(os.path.join(dest.name, "hosts"))) finally: os.unlink(os.path.join(dest.name, "hosts")) dest.cleanup() # clear options after test task_cleanup() self.assertEqual(task_self().info("scp_path"), None) def testLocalhostExplicitSshCopyDir(self): dtmp_src = make_temp_dir('src') dtmp_dst = make_temp_dir('testLocalhostExplicitSshCopyDir') try: os.mkdir(os.path.join(dtmp_src.name, "lev1_a")) os.mkdir(os.path.join(dtmp_src.name, "lev1_b")) os.mkdir(os.path.join(dtmp_src.name, "lev1_a", "lev2")) worker = WorkerSsh(HOSTNAME, source=dtmp_src.name, dest=dtmp_dst.name, handler=None) self._task.schedule(worker) self._task.resume() path = os.path.join(dtmp_dst.name, os.path.basename(dtmp_src.name), "lev1_a", "lev2") self.assertTrue(os.path.exists(path)) finally: dtmp_dst.cleanup() dtmp_src.cleanup() def testLocalhostExplicitSshCopyDirPreserve(self): dtmp_src = make_temp_dir('src') dtmp_dst = make_temp_dir('testLocalhostExplicitSshCopyDirPreserve') try: os.mkdir(os.path.join(dtmp_src.name, "lev1_a")) os.mkdir(os.path.join(dtmp_src.name, "lev1_b")) os.mkdir(os.path.join(dtmp_src.name, "lev1_a", "lev2")) worker = WorkerSsh(HOSTNAME, source=dtmp_src.name, dest=dtmp_dst.name, handler=None, timeout=10, preserve=True) self._task.schedule(worker) self._task.resume() self.assertTrue(os.path.exists(os.path.join(dtmp_dst.name, os.path.basename(dtmp_src.name), "lev1_a", "lev2"))) finally: dtmp_dst.cleanup() dtmp_src.cleanup() def testExplicitSshWorker(self): # init worker worker = 
WorkerSsh(HOSTNAME, command="/bin/echo alright", handler=None) self._task.schedule(worker) # run task self._task.resume() # test output self.assertEqual(worker.node_buffer(HOSTNAME), b"alright") def testExplicitSshWorkerWithOptions(self): self._task.set_info("ssh_path", "/usr/bin/ssh -C") self._task.set_info("ssh_options", "-oLogLevel=QUIET") worker = WorkerSsh(HOSTNAME, command="/bin/echo alright", handler=None) self._task.schedule(worker) # run task self._task.resume() # test output self.assertEqual(worker.node_buffer(HOSTNAME), b"alright") # clear options after test task_cleanup() self.assertEqual(task_self().info("ssh_path"), None) def testExplicitSshWorkerStdErr(self): # init worker worker = WorkerSsh(HOSTNAME, command="/bin/echo alright 1>&2", handler=None, stderr=True) self._task.schedule(worker) # run task self._task.resume() # test output self.assertEqual(worker.node_error_buffer(HOSTNAME), b"alright") # Re-test with stderr=False worker = WorkerSsh(HOSTNAME, command="/bin/echo alright 1>&2", handler=None, stderr=False) self._task.schedule(worker) # run task self._task.resume() # test output self.assertEqual(worker.node_error_buffer(HOSTNAME), None) class TEventHandlerChecker(EventHandler): """simple event trigger validator""" def __init__(self, test): self.test = test self.flags = 0 self.read_count = 0 self.written_count = 0 def ev_start(self, worker): self.test.assertEqual(self.flags, 0) self.flags |= EV_START def ev_pickup(self, worker, node): self.test.assertTrue(self.flags & EV_START) self.flags |= EV_PICKUP self.last_node = node def ev_read(self, worker, node, sname, msg): self.test.assertEqual(self.flags, EV_START | EV_PICKUP) self.flags |= EV_READ self.last_node = node self.last_read = msg def ev_written(self, worker, node, sname, size): self.test.assertTrue(self.flags & (EV_START | EV_PICKUP)) self.flags |= EV_WRITTEN def ev_hup(self, worker, node, rc): self.test.assertTrue(self.flags & (EV_START | EV_PICKUP)) self.flags |= EV_HUP self.last_node = node self.last_rc = rc def ev_close(self, worker, timedout): self.test.assertTrue(self.flags & EV_START) self.test.assertTrue(self.flags & EV_CLOSE == 0) if timedout: self.flags |= EV_TIMEOUT self.flags |= EV_CLOSE def testShellEvents(self): # init worker test_eh = self.__class__.TEventHandlerChecker(self) worker = self._task.shell("/bin/hostname", nodes=HOSTNAME, handler=test_eh) # run task self._task.resume() # test events received: start, read, hup, close self.assertEqual(test_eh.flags, EV_START | EV_PICKUP | EV_READ | EV_HUP | EV_CLOSE) def testShellEventsWithTimeout(self): # init worker test_eh = self.__class__.TEventHandlerChecker(self) worker = self._task.shell("/bin/echo alright && /bin/sleep 10", nodes=HOSTNAME, handler=test_eh, timeout=2) self.assertTrue(worker != None) # run task self._task.resume() # test events received: start, read, timeout, close self.assertEqual(test_eh.flags, EV_START | EV_PICKUP | EV_READ | EV_TIMEOUT | EV_CLOSE) self.assertEqual(worker.node_buffer(HOSTNAME), b"alright") self.assertEqual(worker.num_timeout(), 1) self.assertEqual(self._task.num_timeout(), 1) count = 0 for node in self._task.iter_keys_timeout(): count += 1 self.assertEqual(node, HOSTNAME) self.assertEqual(count, 1) count = 0 for node in worker.iter_keys_timeout(): count += 1 self.assertEqual(node, HOSTNAME) self.assertEqual(count, 1) def testShellEventsWithTimeout2(self): # init worker test_eh1 = self.__class__.TEventHandlerChecker(self) worker1 = self._task.shell("/bin/echo alright && /bin/sleep 10", nodes=HOSTNAME, 
handler=test_eh1, timeout=2) test_eh2 = self.__class__.TEventHandlerChecker(self) worker2 = self._task.shell("/bin/echo okay && /bin/sleep 10", nodes=HOSTNAME, handler=test_eh2, timeout=3) # run task self._task.resume() # test events received: start, read, timeout, close self.assertEqual(test_eh1.flags, EV_START | EV_PICKUP | EV_READ | EV_TIMEOUT | EV_CLOSE) self.assertEqual(test_eh2.flags, EV_START | EV_PICKUP | EV_READ | EV_TIMEOUT | EV_CLOSE) self.assertEqual(worker1.node_buffer(HOSTNAME), b"alright") self.assertEqual(worker2.node_buffer(HOSTNAME), b"okay") self.assertEqual(worker1.num_timeout(), 1) self.assertEqual(worker2.num_timeout(), 1) self.assertEqual(self._task.num_timeout(), 2) def testShellEventsReadNoEOL(self): # init worker test_eh = self.__class__.TEventHandlerChecker(self) worker = self._task.shell("/bin/echo -n okay", nodes=HOSTNAME, handler=test_eh) # run task self._task.resume() # test events received: start, close self.assertEqual(test_eh.flags, EV_START | EV_PICKUP | EV_READ | EV_HUP | EV_CLOSE) self.assertEqual(worker.node_buffer(HOSTNAME), b"okay") def testShellEventsNoReadNoTimeout(self): # init worker test_eh = self.__class__.TEventHandlerChecker(self) worker = self._task.shell("/bin/sleep 2", nodes=HOSTNAME, handler=test_eh) # run task self._task.resume() # test events received: start, close self.assertEqual(test_eh.flags, EV_START | EV_PICKUP | EV_HUP | EV_CLOSE) self.assertEqual(worker.node_buffer(HOSTNAME), None) def testLocalhostCommandFanout(self): fanout = self._task.info("fanout") self._task.set_info("fanout", 2) # init worker for i in range(0, 10): worker = self._task.shell("/bin/echo %d" % i, nodes=HOSTNAME) # run task self._task.resume() # restore fanout value self._task.set_info("fanout", fanout) def testWorkerBuffers(self): # Warning: if you modify this test, please also modify testWorkerErrorBuffers() task = task_self() worker = task.shell("/usr/bin/printf 'foo\nbar\nxxx\n'", nodes=HOSTNAME) task.resume() # test iter_buffers() by worker... cnt = 2 for buf, nodes in worker.iter_buffers(): cnt -= 1 if buf == b"foo\nbar\nxxx\n": self.assertEqual(len(nodes), 1) self.assertEqual(str(nodes), HOSTNAME) self.assertEqual(cnt, 1) # new check in 1.7 to ensure match_keys is not a string testgen = worker.iter_buffers(HOSTNAME) # cast to list to effectively iterate self.assertRaises(TypeError, list, testgen) # and also fixed an issue when match_keys was an empty list for buf, nodes in worker.iter_buffers([]): self.assertFalse("Found buffer with empty match_keys?!") for buf, nodes in worker.iter_buffers([HOSTNAME]): cnt -= 1 if buf == b"foo\nbar\nxxx\n": self.assertEqual(len(nodes), 1) self.assertEqual(str(nodes), HOSTNAME) self.assertEqual(cnt, 0) # test flushing buffers by worker worker.flush_buffers() self.assertEqual(list(worker.iter_buffers()), []) def testWorkerErrorBuffers(self): # Warning: if you modify this test, please also modify testWorkerBuffers() task = task_self() worker = task.shell("/usr/bin/printf 'foo\nbar\nxxx\n' 1>&2", nodes=HOSTNAME, stderr=True) task.resume() # test iter_errors() by worker... 
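# ----------------------------------------------------------------------
# Aside: a minimal sketch of the output-gathering pattern verified by
# testWorkerBuffers above; like these tests, it assumes passwordless
# ssh to HOSTNAME. demo_gather is an illustrative name.

from TLib import HOSTNAME
from ClusterShell.Task import task_self

def demo_gather():
    task = task_self()
    worker = task.shell("echo hello", nodes=HOSTNAME)
    task.resume()
    # iter_buffers() yields (buffer, nodes) pairs, grouping nodes that
    # produced identical output
    return [(buf, str(nodes)) for buf, nodes in worker.iter_buffers()]
# ----------------------------------------------------------------------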
cnt = 2 for buf, nodes in worker.iter_errors(): cnt -= 1 if buf == b"foo\nbar\nxxx\n": self.assertEqual(len(nodes), 1) self.assertEqual(str(nodes), HOSTNAME) self.assertEqual(cnt, 1) # new check in 1.7 to ensure match_keys is not a string testgen = worker.iter_errors(HOSTNAME) # cast to list to effectively iterate self.assertRaises(TypeError, list, testgen) # and also fixed an issue when match_keys was an empty list for buf, nodes in worker.iter_errors([]): self.assertFalse("Found error buffer with empty match_keys?!") for buf, nodes in worker.iter_errors([HOSTNAME]): cnt -= 1 if buf == b"foo\nbar\nxxx\n": self.assertEqual(len(nodes), 1) self.assertEqual(str(nodes), HOSTNAME) self.assertEqual(cnt, 0) # test flushing error buffers by worker worker.flush_errors() self.assertEqual(list(worker.iter_errors()), []) def testWorkerNodeBuffers(self): task = task_self() worker = task.shell("/usr/bin/printf 'foo\nbar\nxxx\n'", nodes=HOSTNAME) task.resume() cnt = 1 for node, buf in worker.iter_node_buffers(): cnt -= 1 if buf == b"foo\nbar\nxxx\n": self.assertEqual(node, HOSTNAME) self.assertEqual(cnt, 0) def testWorkerNodeErrors(self): task = task_self() worker = task.shell("/usr/bin/printf 'foo\nbar\nxxx\n' 1>&2", nodes=HOSTNAME, stderr=True) task.resume() cnt = 1 for node, buf in worker.iter_node_errors(): cnt -= 1 if buf == b"foo\nbar\nxxx\n": self.assertEqual(node, HOSTNAME) self.assertEqual(cnt, 0) def testWorkerRetcodes(self): task = task_self() worker = task.shell("/bin/sh -c 'exit 3'", nodes=HOSTNAME) task.resume() cnt = 2 for rc, keys in worker.iter_retcodes(): cnt -= 1 self.assertEqual(rc, 3) self.assertEqual(len(keys), 1) self.assertEqual(keys[0], HOSTNAME) self.assertEqual(cnt, 1) for rc, keys in worker.iter_retcodes(HOSTNAME): cnt -= 1 self.assertEqual(rc, 3) self.assertEqual(len(keys), 1) self.assertEqual(keys[0], HOSTNAME) self.assertEqual(cnt, 0) # test node_retcode self.assertEqual(worker.node_retcode(HOSTNAME), 3) # 1.2.91+ self.assertEqual(worker.node_rc(HOSTNAME), 3) # test node_retcode failure self.assertRaises(KeyError, worker.node_retcode, "dummy") # test max retcode API self.assertEqual(task.max_retcode(), 3) def testWorkerNodeRetcodes(self): task = task_self() worker = task.shell("/bin/sh -c 'exit 3'", nodes=HOSTNAME) task.resume() cnt = 1 for node, rc in worker.iter_node_retcodes(): cnt -= 1 self.assertEqual(rc, 3) self.assertEqual(node, HOSTNAME) self.assertEqual(cnt, 0) def testEscape(self): cmd = r"export CSTEST=foobar; /bin/echo \$CSTEST | sed 's/\ foo/bar/'" worker = self._task.shell(cmd, nodes=HOSTNAME) # execute self._task.resume() # read result self.assertEqual(worker.node_buffer(HOSTNAME), b"$CSTEST") def testEscape2(self): cmd = r"export CSTEST=foobar; /bin/echo $CSTEST | sed 's/\ foo/bar/'" worker = self._task.shell(cmd, nodes=HOSTNAME) # execute self._task.resume() # read result self.assertEqual(worker.node_buffer(HOSTNAME), b"foobar") def testSshUserOption(self): ssh_user_orig = self._task.info("ssh_user") self._task.set_info("ssh_user", pwd.getpwuid(os.getuid())[0]) worker = self._task.shell("/bin/echo foobar", nodes=HOSTNAME) self._task.resume() # restore original ssh_user (None) self.assertEqual(ssh_user_orig, None) self._task.set_info("ssh_user", ssh_user_orig) def testSshUserOptionForScp(self): ssh_user_orig = self._task.info("ssh_user") self._task.set_info("ssh_user", pwd.getpwuid(os.getuid())[0]) dest = make_temp_filename('testLocalhostCopyU') worker = self._task.copy("/etc/hosts", dest, nodes=HOSTNAME) self._task.resume() # restore original ssh_user 
(None) self.assertEqual(ssh_user_orig, None) self._task.set_info("ssh_user", ssh_user_orig) os.unlink(dest) def testSshOptionsOption(self): ssh_options_orig = self._task.info("ssh_options") try: self._task.set_info("ssh_options", "-oLogLevel=QUIET") worker = self._task.shell("/bin/echo foobar", nodes=HOSTNAME) self._task.resume() self.assertEqual(worker.node_buffer(HOSTNAME), b"foobar") # test 3 options self._task.set_info("ssh_options", \ "-oLogLevel=QUIET -oStrictHostKeyChecking=no -oVerifyHostKeyDNS=no") worker = self._task.shell("/bin/echo foobar3", nodes=HOSTNAME) self._task.resume() self.assertEqual(worker.node_buffer(HOSTNAME), b"foobar3") finally: # restore original ssh_user (None) self.assertEqual(ssh_options_orig, None) self._task.set_info("ssh_options", ssh_options_orig) def testSshOptionsOptionForScp(self): ssh_options_orig = self._task.info("ssh_options") testfile = None try: testfile = make_temp_filename('testLocalhostCopyO') if os.path.exists(testfile): os.remove(testfile) self._task.set_info("ssh_options", \ "-oLogLevel=QUIET -oStrictHostKeyChecking=no -oVerifyHostKeyDNS=no") worker = self._task.copy("/etc/hosts", testfile, nodes=HOSTNAME) self._task.resume() self.assertTrue(os.path.exists(testfile)) finally: os.unlink(testfile) # restore original ssh_user (None) self.assertEqual(ssh_options_orig, None) self._task.set_info("ssh_options", ssh_options_orig) def testShellStderrWithHandler(self): class StdErrHandler(EventHandler): def ev_read(self, worker, node, sname, msg): if sname == worker.SNAME_STDERR: assert msg == b"something wrong" worker = self._task.shell("echo something wrong 1>&2", nodes=HOSTNAME, handler=StdErrHandler(), stderr=True) self._task.resume() for buf, nodes in worker.iter_errors(): self.assertEqual(buf, b"something wrong") for buf, nodes in worker.iter_errors([HOSTNAME]): self.assertEqual(buf, b"something wrong") def testShellWriteSimple(self): worker = self._task.shell("cat", nodes=HOSTNAME) worker.write(b"this is a test\n") worker.set_write_eof() self._task.resume() self.assertEqual(worker.node_buffer(HOSTNAME), b"this is a test") def testShellWriteHandler(self): class WriteOnReadHandler(EventHandler): def __init__(self, target_worker): self.target_worker = target_worker def ev_read(self, worker, node, sname, msg): self.target_worker.write(node.encode() + b':' + msg + b'\n') self.target_worker.set_write_eof() reader = self._task.shell("cat", nodes=HOSTNAME) worker = self._task.shell("sleep 1; echo foobar", nodes=HOSTNAME, handler=WriteOnReadHandler(reader)) self._task.resume() res = "%s:foobar" % HOSTNAME self.assertEqual(reader.node_buffer(HOSTNAME), res.encode()) def testSshBadArgumentOption(self): # Check code < 1.4 compatibility self.assertRaises(WorkerBadArgumentError, WorkerSsh, HOSTNAME, None, None) # As of 1.4, ValueError is raised for missing parameter self.assertRaises(ValueError, WorkerSsh, HOSTNAME, None, None) # 1.4+ def testCopyEvents(self): test_eh = self.__class__.TEventHandlerChecker(self) dest = make_temp_filename('testLocalhostCopyEvents') worker = self._task.copy("/etc/hosts", dest, nodes=HOSTNAME, handler=test_eh) # run task self._task.resume() os.unlink(dest) self.assertEqual(test_eh.flags, EV_START | EV_PICKUP | EV_HUP | EV_CLOSE) def testWorkerAbort(self): task = task_self() # Test worker.abort() in an event handler. 
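# ----------------------------------------------------------------------
# Aside: the save/set/restore task-info pattern used by the ssh_user and
# ssh_options tests above, condensed into one helper; assumes
# passwordless ssh to HOSTNAME. demo_ssh_options is an illustrative name.

from TLib import HOSTNAME
from ClusterShell.Task import task_self

def demo_ssh_options():
    task = task_self()
    saved = task.info("ssh_options")
    task.set_info("ssh_options", "-oLogLevel=QUIET")
    try:
        worker = task.shell("/bin/echo ok", nodes=HOSTNAME)
        task.resume()
        return worker.node_buffer(HOSTNAME)  # b"ok" expected
    finally:
        task.set_info("ssh_options", saved)  # always restore
# ----------------------------------------------------------------------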
        class AbortOnTimer(EventHandler):
            def __init__(self, worker):
                EventHandler.__init__(self)
                self.ext_worker = worker
                self.testtimer = False

            def ev_timer(self, timer):
                self.ext_worker.abort()
                self.ext_worker.abort()  # safe but no effect
                self.testtimer = True

        aot = AbortOnTimer(task.shell("sleep 10", nodes=HOSTNAME))
        self.assertEqual(aot.testtimer, False)
        task.timer(1.5, handler=aot)
        task.resume()
        self.assertEqual(aot.testtimer, True)

    def testWorkerAbortSanity(self):
        task = task_self()
        worker = task.shell("sleep 1", nodes=HOSTNAME)
        worker.abort()
        # test noop abort() on unscheduled worker
        worker = WorkerSsh(HOSTNAME, command="sleep 1", handler=None,
                           timeout=None)
        worker.abort()

    def testLocalhostRCopy(self):
        try:
            dest = make_temp_dir('testLocalhostRCopy')
            # use fake node 'aaa' to test rank > 0
            worker = self._task.rcopy("/etc/hosts", dest.name,
                                      "aaa,%s" % HOSTNAME,
                                      handler=None, timeout=10)
            self._task.resume()
            self.assertEqual(worker.source, "/etc/hosts")
            self.assertEqual(worker.dest, dest.name)
            self.assertTrue(os.path.exists(os.path.join(
                dest.name, "hosts.%s" % HOSTNAME)))
        finally:
            dest.cleanup()

    def testLocalhostExplicitSshReverseCopy(self):
        dest = make_temp_dir('testLocalhostExplicitSshRCopy')
        try:
            worker = WorkerSsh(HOSTNAME, source="/etc/hosts", dest=dest.name,
                               handler=None, timeout=10, reverse=True)
            self._task.schedule(worker)
            self._task.resume()
            self.assertEqual(worker.source, "/etc/hosts")
            self.assertEqual(worker.dest, dest.name)
            self.assertTrue(os.path.exists(os.path.join(
                dest.name, "hosts.%s" % HOSTNAME)))
        finally:
            dest.cleanup()

    def testLocalhostExplicitSshReverseCopyDir(self):
        dtmp_src = make_temp_dir('src')
        dtmp_dst = make_temp_dir('testLocalhostExplicitSshReverseCopyDir')
        try:
            os.mkdir(os.path.join(dtmp_src.name, "lev1_a"))
            os.mkdir(os.path.join(dtmp_src.name, "lev1_b"))
            os.mkdir(os.path.join(dtmp_src.name, "lev1_a", "lev2"))
            worker = WorkerSsh(HOSTNAME, source=dtmp_src.name,
                               dest=dtmp_dst.name, handler=None, timeout=30,
                               reverse=True)
            self._task.schedule(worker)
            self._task.resume()
            self.assertTrue(os.path.exists(os.path.join(
                dtmp_dst.name,
                "%s.%s" % (os.path.basename(dtmp_src.name), HOSTNAME),
                "lev1_a", "lev2")))
        finally:
            dtmp_dst.cleanup()
            dtmp_src.cleanup()

    def testLocalhostExplicitSshReverseCopyDirPreserve(self):
        dtmp_src = make_temp_dir('src')
        dtmp_dst = make_temp_dir('testLocalhostExplicitSshReverseCpDirPreserve')
        try:
            os.mkdir(os.path.join(dtmp_src.name, "lev1_a"))
            os.mkdir(os.path.join(dtmp_src.name, "lev1_b"))
            os.mkdir(os.path.join(dtmp_src.name, "lev1_a", "lev2"))
            worker = WorkerSsh(HOSTNAME, source=dtmp_src.name,
                               dest=dtmp_dst.name, handler=None, timeout=30,
                               preserve=True, reverse=True)
            self._task.schedule(worker)
            self._task.resume()
            self.assertTrue(os.path.exists(os.path.join(
                dtmp_dst.name,
                "%s.%s" % (os.path.basename(dtmp_src.name), HOSTNAME),
                "lev1_a", "lev2")))
        finally:
            dtmp_dst.cleanup()
            dtmp_src.cleanup()

    def testErroneousSshPath(self):
        try:
            self._task.set_info("ssh_path", "/wrong/path/to/ssh")
            # init worker
            worker = self._task.shell("/bin/echo ok", nodes=HOSTNAME)
            # run task
            self._task.resume()
            self.assertEqual(self._task.max_retcode(), 255)
        finally:
            # restore ssh_path value
            self._task.set_info("ssh_path", None)

    class TEventHandlerEvCountChecker(EventHandler):
        """simple event count validator"""

        def __init__(self):
            self.start_count = 0
            self.pickup_count = 0
            self.hup_count = 0
            self.close_count = 0

        def ev_start(self, worker):
            self.start_count += 1

        def ev_pickup(self, worker, node):
            self.pickup_count += 1

        def ev_hup(self, worker, node, rc):
            self.hup_count += 1

        def ev_close(self,
worker, timedout): self.close_count += 1 @unittest.skipIf(HOSTNAME == 'localhost', "does not work with hostname set to 'localhost'") def testWorkerEventCount(self): test_eh = self.__class__.TEventHandlerEvCountChecker() nodes = "localhost,%s" % HOSTNAME worker = self._task.shell("/bin/hostname", nodes=nodes, handler=test_eh) self._task.resume() # test event count self.assertEqual(test_eh.pickup_count, 2) self.assertEqual(test_eh.hup_count, 2) self.assertEqual(test_eh.start_count, 1) self.assertEqual(test_eh.close_count, 1) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696019509.0 ClusterShell-1.9.2/tests/TaskDistantPdshMixin.py0000644104717000001440000005205414505632065021373 0ustar00sthiellusers# ClusterShell (distant, pdsh worker) test suite # Written by S. Thiell """Unit test for ClusterShell Task (distant, pdsh worker)""" from TLib import HOSTNAME, make_temp_filename, make_temp_dir from ClusterShell.Event import EventHandler from ClusterShell.Task import * from ClusterShell.Worker.Worker import WorkerBadArgumentError from ClusterShell.Worker.Pdsh import WorkerPdsh from ClusterShell.Worker.EngineClient import * import shutil import socket import unittest # TEventHandlerChecker 'received event' flags EV_START = 0x01 EV_PICKUP = 0x02 EV_READ = 0x04 EV_WRITTEN = 0x08 EV_HUP = 0x10 EV_TIMEOUT = 0x20 EV_CLOSE = 0x40 class TaskDistantPdshMixin(object): def setUp(self): self._task = task_self() def testWorkerPdshGetCommand(self): # test worker.command with WorkerPdsh worker1 = WorkerPdsh(HOSTNAME, command="/bin/echo foo bar fuu", handler=None, timeout=5) self._task.schedule(worker1) worker2 = WorkerPdsh(HOSTNAME, command="/bin/echo blah blah foo", handler=None, timeout=5) self._task.schedule(worker2) # run task self._task.resume() # test output self.assertEqual(worker1.node_buffer(HOSTNAME), b"foo bar fuu") self.assertEqual(worker1.command, "/bin/echo foo bar fuu") self.assertEqual(worker2.node_buffer(HOSTNAME), b"blah blah foo") self.assertEqual(worker2.command, "/bin/echo blah blah foo") def testLocalhostExplicitPdshCopy(self): # test simple localhost copy with explicit pdsh worker dest = make_temp_filename(suffix='LocalhostExplicitPdshCopy') try: worker = WorkerPdsh(HOSTNAME, source="/etc/hosts", dest=dest, handler=None, timeout=10) self._task.schedule(worker) self._task.resume() self.assertEqual(worker.source, "/etc/hosts") self.assertEqual(worker.dest, dest) finally: os.unlink(dest) def testLocalhostExplicitPdshCopyWithOptions(self): dest = make_temp_dir('testLocalhostExplicitPdshCopyWithOptions') self._task.set_info("pdcp_path", "pdcp -p") try: worker = WorkerPdsh(HOSTNAME, source="/etc/hosts", dest=dest.name, handler=None) self._task.schedule(worker) self._task.resume() self.assertEqual(self._task.max_retcode(), 0) self.assertTrue(os.path.exists(os.path.join(dest.name, "hosts"))) finally: os.unlink(os.path.join(dest.name, "hosts")) dest.cleanup() # clear options after test task_cleanup() self.assertEqual(task_self().info("pdcp_path"), None) def testLocalhostExplicitPdshCopyDir(self): # test simple localhost copy dir with explicit pdsh worker dtmp_src = make_temp_dir('src') # pdcp worker doesn't create custom destination directory dtmp_dst = make_temp_dir('testLocalhostExplicitPdshCopyDir') try: os.mkdir(os.path.join(dtmp_src.name, "lev1_a")) os.mkdir(os.path.join(dtmp_src.name, "lev1_b")) os.mkdir(os.path.join(dtmp_src.name, "lev1_a", "lev2")) worker = WorkerPdsh(HOSTNAME, source=dtmp_src.name, dest=dtmp_dst.name, handler=None, timeout=10) 
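# ----------------------------------------------------------------------
# Aside: the explicit worker scheduling pattern used throughout this
# mixin, condensed; assumes pdsh is installed and HOSTNAME is reachable.
# demo_pdsh is an illustrative name, not part of the original suite.

from TLib import HOSTNAME
from ClusterShell.Task import task_self
from ClusterShell.Worker.Pdsh import WorkerPdsh

def demo_pdsh():
    task = task_self()
    worker = WorkerPdsh(HOSTNAME, command="echo ok", handler=None, timeout=5)
    task.schedule(worker)  # explicit workers must be scheduled by hand
    task.resume()
    return worker.node_buffer(HOSTNAME)  # b"ok" expected
# ----------------------------------------------------------------------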
self._task.schedule(worker) self._task.resume() self.assertTrue(os.path.exists(os.path.join( \ dtmp_dst.name, os.path.basename(dtmp_src.name), "lev1_a", "lev2"))) finally: dtmp_dst.cleanup() dtmp_src.cleanup() def testLocalhostExplicitPdshCopyDirPreserve(self): # test simple localhost preserve copy dir with explicit pdsh worker dtmp_src = make_temp_dir('src') # pdcp worker doesn't create custom destination directory dtmp_dst = make_temp_dir('testLocalhostExplicitPdshCopyDirPreserve') try: os.mkdir(os.path.join(dtmp_src.name, "lev1_a")) os.mkdir(os.path.join(dtmp_src.name, "lev1_b")) os.mkdir(os.path.join(dtmp_src.name, "lev1_a", "lev2")) worker = WorkerPdsh(HOSTNAME, source=dtmp_src.name, dest=dtmp_dst.name, handler=None, timeout=10, preserve=True) self._task.schedule(worker) self._task.resume() self.assertTrue(os.path.exists(os.path.join( \ dtmp_dst.name, os.path.basename(dtmp_src.name), "lev1_a", "lev2"))) finally: dtmp_dst.cleanup() dtmp_src.cleanup() def testExplicitPdshWorker(self): # test simple localhost command with explicit pdsh worker # init worker worker = WorkerPdsh(HOSTNAME, command="echo alright", handler=None) self._task.schedule(worker) # run task self._task.resume() # test output self.assertEqual(worker.node_buffer(HOSTNAME), b"alright") def testExplicitPdshWorkerWithOptions(self): self._task.set_info("pdsh_path", "/usr/bin/pdsh -S") worker = WorkerPdsh(HOSTNAME, command="echo alright", handler=None) self._task.schedule(worker) # run task self._task.resume() # test output self.assertEqual(worker.node_buffer(HOSTNAME), b"alright") # clear options after test task_cleanup() self.assertEqual(task_self().info("pdsh_path"), None) def testExplicitPdshWorkerStdErr(self): # test simple localhost command with explicit pdsh worker (stderr) worker = WorkerPdsh(HOSTNAME, command="echo alright 1>&2", handler=None, stderr=True) self._task.schedule(worker) # run task self._task.resume() # test output self.assertEqual(worker.node_error_buffer(HOSTNAME), b"alright") # Re-test with stderr=False worker = WorkerPdsh(HOSTNAME, command="echo alright 1>&2", handler=None, stderr=False) self._task.schedule(worker) # run task self._task.resume() # test output self.assertEqual(worker.node_error_buffer(HOSTNAME), None) def testPdshWorkerWriteNotSupported(self): # test that write is reported as not supported with pdsh worker = WorkerPdsh(HOSTNAME, command="uname -r", handler=None, timeout=5) self.assertRaises(EngineClientNotSupportedError, worker.write, b"toto") class TEventHandlerChecker(EventHandler): """simple event trigger validator""" def __init__(self, test): self.test = test self.flags = 0 self.read_count = 0 self.written_count = 0 def ev_start(self, worker): self.test.assertEqual(self.flags, 0) self.flags |= EV_START def ev_pickup(self, worker, node): self.test.assertTrue(self.flags & EV_START) self.flags |= EV_PICKUP self.last_node = node def ev_read(self, worker, node, sname, msg): self.test.assertEqual(self.flags, EV_START | EV_PICKUP) self.flags |= EV_READ self.last_node = node self.last_read = msg def ev_written(self, worker, node, sname, size): self.test.assertTrue(self.flags & (EV_START | EV_PICKUP)) self.flags |= EV_WRITTEN def ev_hup(self, worker, node, rc): self.test.assertTrue(self.flags & (EV_START | EV_PICKUP)) self.flags |= EV_HUP self.last_node = node self.last_rc = rc def ev_close(self, worker, timedout): self.test.assertTrue(self.flags & EV_START) self.test.assertTrue(self.flags & EV_CLOSE == 0) if timedout: self.flags |= EV_TIMEOUT self.flags |= EV_CLOSE def 
testExplicitWorkerPdshShellEvents(self): # test triggered events with explicit pdsh worker test_eh = self.__class__.TEventHandlerChecker(self) worker = WorkerPdsh(HOSTNAME, command="hostname", handler=test_eh, timeout=None) self._task.schedule(worker) # run task self._task.resume() # test events received: start, read, hup, close self.assertEqual(test_eh.flags, EV_START | EV_PICKUP | EV_READ | EV_HUP | EV_CLOSE) def testExplicitWorkerPdshShellEventsWithTimeout(self): # test triggered events (with timeout) with explicit pdsh worker test_eh = self.__class__.TEventHandlerChecker(self) worker = WorkerPdsh(HOSTNAME, command="echo alright && sleep 10", handler=test_eh, timeout=2) self._task.schedule(worker) # run task self._task.resume() # test events received: start, read, timeout, close self.assertEqual(test_eh.flags, EV_START | EV_PICKUP | EV_READ | EV_TIMEOUT | EV_CLOSE) self.assertEqual(worker.node_buffer(HOSTNAME), b"alright") def testShellPdshEventsNoReadNoTimeout(self): # test triggered events (no read, no timeout) with explicit pdsh worker test_eh = self.__class__.TEventHandlerChecker(self) worker = WorkerPdsh(HOSTNAME, command="sleep 2", handler=test_eh, timeout=None) self._task.schedule(worker) # run task self._task.resume() # test events received: start, close self.assertEqual(test_eh.flags, EV_START | EV_PICKUP | EV_HUP | EV_CLOSE) self.assertEqual(worker.node_buffer(HOSTNAME), None) def testWorkerPdshBuffers(self): # test buffers at pdsh worker level worker = WorkerPdsh(HOSTNAME, command="printf 'foo\nbar\nxxx\n'", handler=None, timeout=None) self._task.schedule(worker) self._task.resume() cnt = 2 for buf, nodes in worker.iter_buffers(): cnt -= 1 if buf == b"foo\nbar\nxxx\n": self.assertEqual(len(nodes), 1) self.assertEqual(str(nodes), HOSTNAME) self.assertEqual(cnt, 1) # new check in 1.7 to ensure match_keys is not a string testgen = worker.iter_buffers(HOSTNAME) # cast to list to effectively iterate self.assertRaises(TypeError, list, testgen) # and also fixed an issue when match_keys was an empty list for buf, nodes in worker.iter_buffers([]): self.assertFalse("Found buffer with empty match_keys?!") for buf, nodes in worker.iter_buffers([HOSTNAME]): cnt -= 1 if buf == b"foo\nbar\nxxx\n": self.assertEqual(len(nodes), 1) self.assertEqual(str(nodes), HOSTNAME) self.assertEqual(cnt, 0) def testWorkerPdshNodeBuffers(self): # test iter_node_buffers on distant pdsh workers worker = WorkerPdsh(HOSTNAME, command="/usr/bin/printf 'foo\nbar\nxxx\n'", handler=None, timeout=None) self._task.schedule(worker) self._task.resume() cnt = 1 for node, buf in worker.iter_node_buffers(): cnt -= 1 if buf == b"foo\nbar\nxxx\n": self.assertEqual(node, HOSTNAME) self.assertEqual(cnt, 0) def testWorkerPdshNodeErrors(self): # test iter_node_errors on distant pdsh workers worker = WorkerPdsh(HOSTNAME, command="/usr/bin/printf 'foo\nbar\nxxx\n' 1>&2", handler=None, timeout=None, stderr=True) self._task.schedule(worker) self._task.resume() cnt = 1 for node, buf in worker.iter_node_errors(): cnt -= 1 if buf == b"foo\nbar\nxxx\n": self.assertEqual(node, HOSTNAME) self.assertEqual(cnt, 0) def testWorkerPdshRetcodes(self): # test retcodes on distant pdsh workers worker = WorkerPdsh(HOSTNAME, command="/bin/sh -c 'exit 3'", handler=None, timeout=None) self._task.schedule(worker) self._task.resume() cnt = 2 for rc, keys in worker.iter_retcodes(): cnt -= 1 self.assertEqual(rc, 3) self.assertEqual(len(keys), 1) self.assertEqual(keys[0], HOSTNAME) self.assertEqual(cnt, 1) for rc, keys in worker.iter_retcodes(HOSTNAME): 
cnt -= 1 self.assertEqual(rc, 3) self.assertEqual(len(keys), 1) self.assertEqual(keys[0], HOSTNAME) self.assertEqual(cnt, 0) # test node_retcode self.assertEqual(worker.node_retcode(HOSTNAME), 3) # 1.2.91+ self.assertEqual(worker.node_rc(HOSTNAME), 3) # test node_retcode failure self.assertRaises(KeyError, worker.node_retcode, "dummy") # test max retcode API self.assertEqual(self._task.max_retcode(), 3) def testWorkerNodeRetcodes(self): # test iter_node_retcodes on distant pdsh workers worker = WorkerPdsh(HOSTNAME, command="/bin/sh -c 'exit 3'", handler=None, timeout=None) self._task.schedule(worker) self._task.resume() cnt = 1 for node, rc in worker.iter_node_retcodes(): cnt -= 1 self.assertEqual(rc, 3) self.assertEqual(node, HOSTNAME) self.assertEqual(cnt, 0) def testEscapePdsh(self): # test distant worker (pdsh) cmd with escaped variable cmd = r"export CSTEST=foobar; /bin/echo \$CSTEST | sed 's/\ foo/bar/'" worker = WorkerPdsh(HOSTNAME, command=cmd, handler=None, timeout=None) #task.set_info("debug", True) self._task.schedule(worker) # execute self._task.resume() # read result self.assertEqual(worker.node_buffer(HOSTNAME), b"$CSTEST") def testEscapePdsh2(self): # test distant worker (pdsh) cmd with non-escaped variable cmd = r"export CSTEST=foobar; /bin/echo $CSTEST | sed 's/\ foo/bar/'" worker = WorkerPdsh(HOSTNAME, command=cmd, handler=None, timeout=None) self._task.schedule(worker) # execute self._task.resume() # read result self.assertEqual(worker.node_buffer(HOSTNAME), b"foobar") def testShellPdshStderrWithHandler(self): # test reading stderr of distant pdsh worker on event handler class StdErrHandler(EventHandler): def ev_error(self, worker): assert worker.last_error() == b"something wrong" worker = WorkerPdsh(HOSTNAME, command="echo something wrong 1>&2", handler=StdErrHandler(), timeout=None) self._task.schedule(worker) self._task.resume() for buf, nodes in worker.iter_errors(): self.assertEqual(buf, b"something wrong") for buf, nodes in worker.iter_errors([HOSTNAME]): self.assertEqual(buf, b"something wrong") def testCommandTimeoutOption(self): # test pdsh shell with command_timeout set command_timeout_orig = self._task.info("command_timeout") self._task.set_info("command_timeout", 1) worker = WorkerPdsh(HOSTNAME, command="sleep 10", handler=None, timeout=None) self._task.schedule(worker) self._task.resume() # restore original command_timeout (0) self.assertEqual(command_timeout_orig, 0) self._task.set_info("command_timeout", command_timeout_orig) def testPdshBadArgumentOption(self): # test WorkerPdsh constructor bad argument # Check code < 1.4 compatibility self.assertRaises(WorkerBadArgumentError, WorkerPdsh, HOSTNAME, None, None) # As of 1.4, ValueError is raised for missing parameter self.assertRaises(ValueError, WorkerPdsh, HOSTNAME, None, None) # 1.4+ def testCopyEvents(self): test_eh = self.__class__.TEventHandlerChecker(self) dest = "/tmp/cs-test_testLocalhostPdshCopyEvents" try: worker = WorkerPdsh(HOSTNAME, source="/etc/hosts", dest=dest, handler=test_eh, timeout=10) self._task.schedule(worker) self._task.resume() self.assertEqual(test_eh.flags, EV_START | EV_PICKUP | EV_HUP | EV_CLOSE) finally: os.remove(dest) def testWorkerAbort(self): # test WorkerPdsh abort() on timer class AbortOnTimer(EventHandler): def __init__(self, worker): EventHandler.__init__(self) self.ext_worker = worker self.testtimer = False def ev_timer(self, timer): self.ext_worker.abort() self.testtimer = True worker = WorkerPdsh(HOSTNAME, command="sleep 10", handler=None, timeout=None) 
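# ----------------------------------------------------------------------
# Aside: a condensed sketch of the timer-driven abort pattern from
# testWorkerAbort; assumes a reachable HOSTNAME. AbortTimer and
# demo_abort are illustrative names, not part of the original suite.

from TLib import HOSTNAME
from ClusterShell.Event import EventHandler
from ClusterShell.Task import task_self

class AbortTimer(EventHandler):
    def __init__(self, worker):
        EventHandler.__init__(self)
        self.worker = worker

    def ev_timer(self, timer):
        self.worker.abort()  # stop the long-running command early

def demo_abort():
    task = task_self()
    worker = task.shell("sleep 60", nodes=HOSTNAME)
    task.timer(1.0, handler=AbortTimer(worker))  # fire after ~1 second
    task.resume()  # returns once the worker has been aborted
# ----------------------------------------------------------------------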
self._task.schedule(worker) aot = AbortOnTimer(worker) self.assertEqual(aot.testtimer, False) self._task.timer(2.0, handler=aot) self._task.resume() self.assertEqual(aot.testtimer, True) def testWorkerAbortSanity(self): # test WorkerPdsh abort() (sanity) # test noop abort() on unscheduled worker worker = WorkerPdsh(HOSTNAME, command="sleep 1", handler=None, timeout=None) worker.abort() def testLocalhostExplicitPdshReverseCopy(self): # test simple localhost rcopy with explicit pdsh worker dest = "/tmp/cs-test_testLocalhostExplicitPdshRCopy" shutil.rmtree(dest, ignore_errors=True) try: os.mkdir(dest) worker = WorkerPdsh(HOSTNAME, source="/etc/hosts", dest=dest, handler=None, timeout=10, reverse=True) self._task.schedule(worker) self._task.resume() self.assertEqual(worker.source, "/etc/hosts") self.assertEqual(worker.dest, dest) self.assertTrue(os.path.exists(os.path.join(dest, "hosts.%s" % HOSTNAME))) finally: shutil.rmtree(dest, ignore_errors=True) def testLocalhostExplicitPdshReverseCopyDir(self): # test simple localhost rcopy dir with explicit pdsh worker dtmp_src = make_temp_dir('src') dtmp_dst = make_temp_dir('testLocalhostExplicitPdshReverseCopyDir') try: os.mkdir(os.path.join(dtmp_src.name, "lev1_a")) os.mkdir(os.path.join(dtmp_src.name, "lev1_b")) os.mkdir(os.path.join(dtmp_src.name, "lev1_a", "lev2")) worker = WorkerPdsh(HOSTNAME, source=dtmp_src.name, dest=dtmp_dst.name, handler=None, timeout=30, reverse=True) self._task.schedule(worker) self._task.resume() tgt = os.path.join(dtmp_dst.name, "%s.%s" % \ (os.path.basename(dtmp_src.name), HOSTNAME), "lev1_a", "lev2") self.assertTrue(os.path.exists(tgt)) finally: dtmp_dst.cleanup() dtmp_src.cleanup() def testLocalhostExplicitPdshReverseCopyDirPreserve(self): # test simple localhost preserve rcopy dir with explicit pdsh worker dtmp_src = make_temp_dir('src') dtmp_dst = make_temp_dir('testLocalhostExplicitPdshRevCpDirPreserve') try: os.mkdir(os.path.join(dtmp_src.name, "lev1_a")) os.mkdir(os.path.join(dtmp_src.name, "lev1_b")) os.mkdir(os.path.join(dtmp_src.name, "lev1_a", "lev2")) worker = WorkerPdsh(HOSTNAME, source=dtmp_src.name, dest=dtmp_dst.name, handler=None, timeout=30, preserve=True, reverse=True) self._task.schedule(worker) self._task.resume() tgt = os.path.join(dtmp_dst.name, "%s.%s" % \ (os.path.basename(dtmp_src.name), HOSTNAME), "lev1_a", "lev2") self.assertTrue(os.path.exists(tgt)) finally: dtmp_dst.cleanup() dtmp_src.cleanup() class TEventHandlerEvCountChecker(EventHandler): """simple event count validator""" def __init__(self): self.start_count = 0 self.pickup_count = 0 self.hup_count = 0 self.close_count = 0 def ev_start(self, worker): self.start_count += 1 def ev_pickup(self, worker, node): self.pickup_count += 1 def ev_hup(self, worker, node, rc): self.hup_count += 1 def ev_close(self, worker, timedout): self.close_count += 1 @unittest.skipIf(HOSTNAME == 'localhost', "does not work with hostname set to 'localhost'") def testWorkerEventCount(self): test_eh = self.__class__.TEventHandlerEvCountChecker() nodes = "localhost,%s" % HOSTNAME worker = WorkerPdsh(nodes, command="/bin/hostname", handler=test_eh) self._task.schedule(worker) self._task.resume() # test event count self.assertEqual(test_eh.pickup_count, 2) self.assertEqual(test_eh.hup_count, 2) self.assertEqual(test_eh.start_count, 1) self.assertEqual(test_eh.close_count, 1) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/tests/TaskDistantPdshTest.py0000644104717000001440000000415514501416555021225 
0ustar00sthiellusers""" Unit test for ClusterShell Task with all engines (pdsh distant worker) """ import sys import unittest from ClusterShell.Defaults import DEFAULTS from ClusterShell.Engine.Select import EngineSelect from ClusterShell.Engine.Poll import EnginePoll from ClusterShell.Engine.EPoll import EngineEPoll from ClusterShell.Task import * from TaskDistantPdshMixin import TaskDistantPdshMixin ENGINE_SELECT_ID = EngineSelect.identifier ENGINE_POLL_ID = EnginePoll.identifier ENGINE_EPOLL_ID = EngineEPoll.identifier class TaskDistantPdshEngineSelectTest(TaskDistantPdshMixin, unittest.TestCase): def setUp(self): task_terminate() self.engine_id_save = DEFAULTS.engine DEFAULTS.engine = ENGINE_SELECT_ID # select should be supported anywhere... self.assertEqual(task_self().info('engine'), ENGINE_SELECT_ID) TaskDistantPdshMixin.setUp(self) def tearDown(self): DEFAULTS.engine = self.engine_id_save task_terminate() class TaskDistantPdshEnginePollTest(TaskDistantPdshMixin, unittest.TestCase): def setUp(self): task_terminate() self.engine_id_save = DEFAULTS.engine DEFAULTS.engine = ENGINE_POLL_ID if task_self().info('engine') != ENGINE_POLL_ID: self.skipTest("engine %s not supported on this host" % ENGINE_POLL_ID) TaskDistantPdshMixin.setUp(self) def tearDown(self): DEFAULTS.engine = self.engine_id_save task_terminate() # select.epoll is only available with Python 2.6 (if condition to be # removed once we only support Py2.6+) if sys.version_info >= (2, 6, 0): class TaskDistantPdshEngineEPollTest(TaskDistantPdshMixin, unittest.TestCase): def setUp(self): task_terminate() self.engine_id_save = DEFAULTS.engine DEFAULTS.engine = ENGINE_EPOLL_ID if task_self().info('engine') != ENGINE_EPOLL_ID: self.skipTest("engine %s not supported on this host" % ENGINE_EPOLL_ID) TaskDistantPdshMixin.setUp(self) def tearDown(self): DEFAULTS.engine = self.engine_id_save task_terminate() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/tests/TaskDistantTest.py0000644104717000001440000000407414501416555020406 0ustar00sthiellusers""" Unit test for ClusterShell Task with all engines (distant worker) """ import sys import unittest from ClusterShell.Defaults import DEFAULTS from ClusterShell.Engine.Select import EngineSelect from ClusterShell.Engine.Poll import EnginePoll from ClusterShell.Engine.EPoll import EngineEPoll from ClusterShell.Task import * from TaskDistantMixin import TaskDistantMixin ENGINE_SELECT_ID = EngineSelect.identifier ENGINE_POLL_ID = EnginePoll.identifier ENGINE_EPOLL_ID = EngineEPoll.identifier class TaskDistantEngineSelectTest(TaskDistantMixin, unittest.TestCase): def setUp(self): task_terminate() self.engine_id_save = DEFAULTS.engine DEFAULTS.engine = ENGINE_SELECT_ID # select should be supported anywhere... 
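# ----------------------------------------------------------------------
# Aside: the engine-forcing pattern shared by the test classes in this
# file, condensed into one helper; demo_force_engine is an illustrative
# name. DEFAULTS.engine only affects tasks created afterwards, hence
# the task_terminate() calls.

from ClusterShell.Defaults import DEFAULTS
from ClusterShell.Task import task_self, task_terminate

def demo_force_engine(engine_id):
    task_terminate()  # drop any existing task first
    saved = DEFAULTS.engine
    DEFAULTS.engine = engine_id  # e.g. EngineSelect.identifier
    try:
        return task_self().info('engine')  # engine actually in use
    finally:
        DEFAULTS.engine = saved
        task_terminate()
# ----------------------------------------------------------------------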
self.assertEqual(task_self().info('engine'), ENGINE_SELECT_ID) TaskDistantMixin.setUp(self) def tearDown(self): DEFAULTS.engine = self.engine_id_save task_terminate() class TaskDistantEnginePollTest(TaskDistantMixin, unittest.TestCase): def setUp(self): task_terminate() self.engine_id_save = DEFAULTS.engine DEFAULTS.engine = ENGINE_POLL_ID if task_self().info('engine') != ENGINE_POLL_ID: self.skipTest("engine %s not supported on this host" % ENGINE_POLL_ID) TaskDistantMixin.setUp(self) def tearDown(self): DEFAULTS.engine = self.engine_id_save task_terminate() # select.epoll is only available with Python 2.6 (if condition to be # removed once we only support Py2.6+) if sys.version_info >= (2, 6, 0): class TaskDistantEngineEPollTest(TaskDistantMixin, unittest.TestCase): def setUp(self): task_terminate() self.engine_id_save = DEFAULTS.engine DEFAULTS.engine = ENGINE_EPOLL_ID if task_self().info('engine') != ENGINE_EPOLL_ID: self.skipTest("engine %s not supported on this host" % ENGINE_EPOLL_ID) TaskDistantMixin.setUp(self) def tearDown(self): DEFAULTS.engine = self.engine_id_save task_terminate() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/tests/TaskEventTest.py0000644104717000001440000004707514501416555020071 0ustar00sthiellusers# ClusterShell (local) test suite # Written by S. Thiell """Unit test for ClusterShell Task (event-based mode)""" import unittest import warnings from ClusterShell.Task import * from ClusterShell.Event import EventHandler class BaseAssertTestHandler(EventHandler): """Base Assert Test Handler""" def __init__(self): self.reset_asserts() def do_asserts_read_notimeout(self): assert self.did_start, "ev_start not called" assert self.cnt_pickup > 0, "ev_pickup not called" assert self.did_read, "ev_read not called" assert not self.did_readerr, "ev_error called" assert self.cnt_written == 0, "ev_written called" assert self.cnt_hup > 0, "ev_hup not called" assert self.did_close, "ev_close not called" assert not self.did_timeout, "ev_timeout called" def do_asserts_timeout(self): assert self.did_start, "ev_start not called" assert self.cnt_pickup > 0, "ev_pickup not called" assert not self.did_read, "ev_read called" assert not self.did_readerr, "ev_error called" assert self.cnt_written == 0, "ev_written called" assert self.cnt_hup == 0, "ev_hup called" assert self.did_close, "ev_close not called" assert self.did_timeout, "ev_timeout not called" def do_asserts_noread_notimeout(self): assert self.did_start, "ev_start not called" assert self.cnt_pickup > 0, "ev_pickup not called" assert not self.did_read, "ev_read called" assert not self.did_readerr, "ev_error called" assert self.cnt_written == 0, "ev_written called" assert self.cnt_hup > 0, "ev_hup not called" assert self.did_close, "ev_close not called" assert not self.did_timeout, "ev_timeout called" def do_asserts_read_write_notimeout(self): assert self.did_start, "ev_start not called" assert self.cnt_pickup > 0, "ev_pickup not called" assert self.did_read, "ev_read not called" assert not self.did_readerr, "ev_error called" assert self.cnt_written > 0, "ev_written not called" assert self.cnt_hup > 0, "ev_hup not called" assert self.did_close, "ev_close not called" assert not self.did_timeout, "ev_timeout called" def reset_asserts(self): self.did_start = False self.cnt_pickup = 0 self.did_read = False self.did_readerr = False self.cnt_written = 0 self.bytes_written = 0 self.cnt_hup = 0 self.did_close = False self.did_timeout = False class 
LegacyTestHandler(BaseAssertTestHandler): """Legacy Test Handler (deprecated as of 1.8)""" def ev_start(self, worker): self.did_start = True def ev_pickup(self, worker): self.cnt_pickup += 1 def ev_read(self, worker): self.did_read = True assert worker.current_msg == b"abcdefghijklmnopqrstuvwxyz" assert worker.current_errmsg != b"abcdefghijklmnopqrstuvwxyz" def ev_error(self, worker): self.did_readerr = True assert worker.current_errmsg == b"errerrerrerrerrerrerrerr" assert worker.current_msg != b"errerrerrerrerrerrerrerr" def ev_written(self, worker, node, sname, size): self.cnt_written += 1 self.bytes_written += size def ev_hup(self, worker): self.cnt_hup += 1 def ev_close(self, worker): self.did_close = True if worker.read(): assert worker.read().startswith(b"abcdefghijklmnopqrstuvwxyz") def ev_timeout(self, worker): self.did_timeout = True class TestHandler(BaseAssertTestHandler): """New Test Handler (1.8+)""" def ev_start(self, worker): self.did_start = True def ev_pickup(self, worker, node): assert node is not None self.cnt_pickup += 1 def ev_read(self, worker, node, sname, msg): if sname == 'stdout': self.did_read = True assert msg == b"abcdefghijklmnopqrstuvwxyz" elif sname == 'stderr': self.did_readerr = True assert msg == b"errerrerrerrerrerrerrerr" def ev_written(self, worker, node, sname, size): self.cnt_written += 1 self.bytes_written += size def ev_hup(self, worker, node, rc): assert node is not None self.cnt_hup += 1 def ev_close(self, worker, did_timeout): self.did_timeout = did_timeout self.did_close = True if worker.read(): assert worker.read().startswith(b"abcdefghijklmnopqrstuvwxyz") class TaskEventTest(unittest.TestCase): def run_task_and_catch_warnings(self, task, expected_warn_cnt=0, category=DeprecationWarning, task_timeout=None): """helper to run task and catch+test issued warnings""" with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always") task.run(timeout=task_timeout) if len(w) != expected_warn_cnt: self.fail("Expected %d warnings, got %d: %s" % (expected_warn_cnt, len(w), '\n'.join(str(ex.message) for ex in w))) if len(w) > 0: self.assertTrue(issubclass(w[-1].category, category)) def test_simple_event_handler_legacy(self): """test simple event handler (legacy)""" task = task_self() eh = LegacyTestHandler() # init worker worker = task.shell("echo abcdefghijklmnopqrstuvwxyz", handler=eh) # warnings: pickup + read + hup + close self.run_task_and_catch_warnings(task, 4) eh.do_asserts_read_notimeout() eh.reset_asserts() # test again worker = task.shell("echo abcdefghijklmnopqrstuvwxyz", handler=eh) # warnings: pickup + read + hup + close self.run_task_and_catch_warnings(task, 4) eh.do_asserts_read_notimeout() def test_simple_event_handler(self): """test simple event handler (1.8+)""" task = task_self() eh = TestHandler() worker = task.shell("echo abcdefghijklmnopqrstuvwxyz", handler=eh) self.run_task_and_catch_warnings(task) eh.do_asserts_read_notimeout() eh.reset_asserts() # test again worker = task.shell("echo abcdefghijklmnopqrstuvwxyz", handler=eh) self.run_task_and_catch_warnings(task) eh.do_asserts_read_notimeout() def test_simple_event_handler_with_timeout(self): """test simple event handler with timeout""" task = task_self() eh = TestHandler() task.shell("/bin/sleep 3", handler=eh, timeout=2) # verify that no warnings are generated self.run_task_and_catch_warnings(task, 0) eh.do_asserts_timeout() def test_simple_event_handler_with_timeout_legacy(self): """test simple event handler with timeout (legacy)""" task = task_self() eh = 
    def test_simple_event_handler_with_timeout_legacy(self):
        """test simple event handler with timeout (legacy)"""
        task = task_self()
        eh = LegacyTestHandler()
        task.shell("/bin/sleep 3", handler=eh, timeout=2)
        # warnings: pickup + timeout + close
        self.run_task_and_catch_warnings(task, 3)
        eh.do_asserts_timeout()

    def test_simple_event_handler_with_task_timeout_legacy(self):
        """test simple event handler with task timeout (legacy)"""
        task = task_self()
        eh = LegacyTestHandler()
        task.shell("/bin/sleep 3", handler=eh)
        try:
            # warnings: pickup + timeout + close
            self.run_task_and_catch_warnings(task, 3, task_timeout=2)
        except TimeoutError:
            pass
        else:
            self.fail("did not detect timeout")
        eh.do_asserts_timeout()

    def test_simple_event_handler_with_task_timeout(self):
        """test simple event handler with task timeout (1.8+)"""
        task = task_self()
        eh = TestHandler()
        # init worker
        worker = task.shell("/bin/sleep 3", handler=eh)
        try:
            self.run_task_and_catch_warnings(task, task_timeout=2)
        except TimeoutError:
            pass
        else:
            self.fail("did not detect timeout")
        eh.do_asserts_timeout()

    def test_popen_specific_behaviour_legacy(self):
        """test WorkerPopen events specific behaviour (legacy)"""

        class LegacyWorkerPopenEH(LegacyTestHandler):
            def __init__(self, testcase):
                LegacyTestHandler.__init__(self)
                self.testcase = testcase

            def ev_start(self, worker):
                LegacyTestHandler.ev_start(self, worker)
                self.testcase.assertEqual(worker.current_node, None)

            def ev_read(self, worker):
                LegacyTestHandler.ev_read(self, worker)
                self.testcase.assertEqual(worker.current_node, None)

            def ev_error(self, worker):
                LegacyTestHandler.ev_error(self, worker)
                self.testcase.assertEqual(worker.current_node, None)

            def ev_written(self, worker, node, sname, size):
                LegacyTestHandler.ev_written(self, worker, node, sname, size)
                self.testcase.assertEqual(worker.current_node, None)

            def ev_pickup(self, worker):
                LegacyTestHandler.ev_pickup(self, worker)
                self.testcase.assertEqual(worker.current_node, None)

            def ev_hup(self, worker):
                LegacyTestHandler.ev_hup(self, worker)
                self.testcase.assertEqual(worker.current_node, None)

            def ev_close(self, worker):
                LegacyTestHandler.ev_close(self, worker)
                self.testcase.assertEqual(worker.current_node, None)

        task = task_self()
        eh = LegacyWorkerPopenEH(self)
        worker = task.shell("cat", handler=eh)
        content = b"abcdefghijklmnopqrstuvwxyz\n"
        worker.write(content)
        worker.set_write_eof()
        # warnings: 1 x pickup + 1 x read + 1 x hup + 1 x close
        self.run_task_and_catch_warnings(task, 4)
        eh.do_asserts_read_write_notimeout()

    def test_popen_specific_behaviour(self):
        """test WorkerPopen events specific behaviour (1.8+)"""

        class WorkerPopenEH(TestHandler):
            def __init__(self, testcase):
                TestHandler.__init__(self)
                self.testcase = testcase
                self.worker = None

            def ev_start(self, worker):
                TestHandler.ev_start(self, worker)
                self.testcase.assertEqual(worker, self.worker)

            def ev_read(self, worker, node, sname, msg):
                TestHandler.ev_read(self, worker, node, sname, msg)
                self.testcase.assertEqual(worker, self.worker)
                self.testcase.assertEqual(worker, node)

            def ev_written(self, worker, node, sname, size):
                TestHandler.ev_written(self, worker, node, sname, size)
                self.testcase.assertEqual(worker, self.worker)
                self.testcase.assertEqual(worker, node)

            def ev_pickup(self, worker, node):
                TestHandler.ev_pickup(self, worker, node)
                self.testcase.assertEqual(worker, self.worker)
                self.testcase.assertEqual(worker, node)

            def ev_hup(self, worker, node, rc):
                TestHandler.ev_hup(self, worker, node, rc)
                self.testcase.assertEqual(worker, self.worker)
                self.testcase.assertEqual(worker, node)
            def ev_close(self, worker, did_timeout):
                TestHandler.ev_close(self, worker, did_timeout)
                self.testcase.assertEqual(worker.current_node, None)  # XXX

        task = task_self()
        eh = WorkerPopenEH(self)
        worker = task.shell("cat", handler=eh)
        eh.worker = worker
        content = b"abcdefghijklmnopqrstuvwxyz\n"
        worker.write(content)
        worker.set_write_eof()
        self.run_task_and_catch_warnings(task)
        eh.do_asserts_read_write_notimeout()

    class LegacyTOnTheFlyLauncher(EventHandler):
        """Legacy Test Event handler to schedule commands on the fly"""

        def ev_read(self, worker):
            assert worker.task.running()
            # in-fly workers addition
            other1 = worker.task.shell("/bin/sleep 0.1", handler=self)
            assert other1 != None
            other2 = worker.task.shell("/bin/sleep 0.1", handler=self)
            assert other2 != None

        def ev_pickup(self, worker):
            """legacy ev_pickup signature to check for warnings"""

        def ev_hup(self, worker):
            """legacy ev_hup signature to check for warnings"""

        def ev_close(self, worker):
            """legacy ev_close signature to check for warnings"""

    def test_engine_on_the_fly_launch_legacy(self):
        """test client add on the fly while running (legacy)"""
        task = task_self()
        eh = self.__class__.LegacyTOnTheFlyLauncher()
        worker = task.shell("/bin/uname", handler=eh)
        self.assertNotEqual(worker, None)
        # warnings: 1 x pickup + 1 x read + 2 x pickup + 3 x hup + 3 x close
        self.run_task_and_catch_warnings(task, 10)

    class TOnTheFlyLauncher(EventHandler):
        """CS v1.8 Test Event handler to schedule commands on the fly"""

        def ev_read(self, worker, node, sname, msg):
            assert worker.task.running()
            # in-fly workers addition
            other1 = worker.task.shell("/bin/sleep 0.1")
            assert other1 != None
            other2 = worker.task.shell("/bin/sleep 0.1")
            assert other2 != None

    def test_engine_on_the_fly_launch(self):
        """test client add on the fly while running (1.8+)"""
        task = task_self()
        eh = self.__class__.TOnTheFlyLauncher()
        worker = task.shell("/bin/uname", handler=eh)
        self.assertNotEqual(worker, None)
        self.run_task_and_catch_warnings(task)

    class LegacyTWriteOnStart(EventHandler):
        def ev_start(self, worker):
            assert worker.task.running()
            worker.write(b"foo bar\n")

        def ev_read(self, worker):
            assert worker.current_msg == b"foo bar"
            worker.abort()

    def test_write_on_ev_start_legacy(self):
        """test write on ev_start (legacy)"""
        task = task_self()
        task.shell("cat", handler=self.__class__.LegacyTWriteOnStart())
        self.run_task_and_catch_warnings(task, 1)  # ev_read

    class TWriteOnStart(EventHandler):
        def ev_start(self, worker):
            assert worker.task.running()
            worker.write(b"foo bar\n")

        def ev_read(self, worker, node, sname, msg):
            assert msg == b"foo bar"
            worker.abort()

    def test_write_on_ev_start(self):
        """test write on ev_start"""
        task = task_self()
        task.shell("cat", handler=self.__class__.TWriteOnStart())
        self.run_task_and_catch_warnings(task)

    class LegacyAbortOnReadHandler(EventHandler):
        def ev_read(self, worker):
            worker.abort()

    def test_engine_may_reuse_fd_legacy(self):
        """test write + worker.abort() on read to reuse FDs (legacy)"""
        task = task_self()
        fanout = task.info("fanout")
        try:
            task.set_info("fanout", 1)
            eh = self.__class__.LegacyAbortOnReadHandler()
            for i in range(10):
                worker = task.shell("echo ok; sleep 1", handler=eh)
                self.assertTrue(worker is not None)
                worker.write(b"OK\n")
            # warnings: 10 x read
            self.run_task_and_catch_warnings(task, 10)
        finally:
            task.set_info("fanout", fanout)

    class AbortOnReadHandler(EventHandler):
        def ev_read(self, worker, node, sname, msg):
            worker.abort()
    def test_engine_may_reuse_fd(self):
        """test write + worker.abort() on read to reuse FDs"""
        task = task_self()
        fanout = task.info("fanout")
        try:
            task.set_info("fanout", 1)
            eh = self.__class__.AbortOnReadHandler()
            for i in range(10):
                worker = task.shell("echo ok; sleep 1", handler=eh)
                self.assertTrue(worker is not None)
                worker.write(b"OK\n")
            self.run_task_and_catch_warnings(task)
        finally:
            task.set_info("fanout", fanout)

    def test_ev_pickup_legacy(self):
        """test ev_pickup event (legacy)"""
        task = task_self()
        eh = LegacyTestHandler()
        task.shell("/bin/sleep 0.4", handler=eh)
        task.shell("/bin/sleep 0.5", handler=eh)
        task.shell("/bin/sleep 0.5", handler=eh)
        # warnings: 3 x pickup + 3 x hup + 3 x close
        self.run_task_and_catch_warnings(task, 9)
        eh.do_asserts_noread_notimeout()
        self.assertEqual(eh.cnt_pickup, 3)
        self.assertEqual(eh.cnt_hup, 3)

    def test_ev_pickup(self):
        """test ev_pickup event (1.8+)"""
        task = task_self()
        eh = TestHandler()
        task.shell("/bin/sleep 0.4", handler=eh)
        task.shell("/bin/sleep 0.5", handler=eh)
        task.shell("/bin/sleep 0.5", handler=eh)
        self.run_task_and_catch_warnings(task)
        eh.do_asserts_noread_notimeout()
        self.assertEqual(eh.cnt_pickup, 3)
        self.assertEqual(eh.cnt_hup, 3)

    def test_ev_pickup_fanout_legacy(self):
        """test ev_pickup event with fanout (legacy)"""
        task = task_self()
        fanout = task.info("fanout")
        try:
            task.set_info("fanout", 1)
            eh = LegacyTestHandler()
            task.shell("/bin/sleep 0.4", handler=eh, key="n1")
            task.shell("/bin/sleep 0.5", handler=eh, key="n2")
            task.shell("/bin/sleep 0.5", handler=eh, key="n3")
            # warnings: 3 x pickup + 3 x hup + 3 x close
            self.run_task_and_catch_warnings(task, 9)
            eh.do_asserts_noread_notimeout()
            self.assertEqual(eh.cnt_pickup, 3)
            self.assertEqual(eh.cnt_hup, 3)
        finally:
            task.set_info("fanout", fanout)

    def test_ev_pickup_fanout(self):
        """test ev_pickup event with fanout"""
        task = task_self()
        fanout = task.info("fanout")
        try:
            task.set_info("fanout", 1)
            eh = TestHandler()
            task.shell("/bin/sleep 0.4", handler=eh, key="n1")
            task.shell("/bin/sleep 0.5", handler=eh, key="n2")
            task.shell("/bin/sleep 0.5", handler=eh, key="n3")
            self.run_task_and_catch_warnings(task)
            eh.do_asserts_noread_notimeout()
            self.assertEqual(eh.cnt_pickup, 3)
            self.assertEqual(eh.cnt_hup, 3)
        finally:
            task.set_info("fanout", fanout)

    def test_ev_written_legacy(self):
        """test ev_written event (legacy)"""
        task = task_self()
        eh = LegacyTestHandler()
        worker = task.shell("cat", handler=eh)
        content = b"abcdefghijklmnopqrstuvwxyz\n"
        worker.write(content)
        worker.set_write_eof()
        # warnings: pickup + read + hup + close
        self.run_task_and_catch_warnings(task, 4)
        eh.do_asserts_read_write_notimeout()
        self.assertEqual(eh.cnt_written, 1)
        self.assertEqual(eh.bytes_written, len(content))

    def test_ev_written(self):
        """test ev_written event"""
        task = task_self()
        # ev_written itself is using the same signature but it is just for
        # the sake of completeness...
        eh = TestHandler()
        worker = task.shell("cat", handler=eh)
        content = b"abcdefghijklmnopqrstuvwxyz\n"
        worker.write(content)
        worker.set_write_eof()
        self.run_task_and_catch_warnings(task)
        eh.do_asserts_read_write_notimeout()
        self.assertEqual(eh.cnt_written, 1)
        self.assertEqual(eh.bytes_written, len(content))
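
# Illustrative sketch (not from the original suite): how the warning counts
# asserted above can be observed directly. The handler and function names
# here are ours.
def _demo_legacy_signature_warning():  # pragma: no cover - documentation aid
    class OldStyle(EventHandler):
        def ev_hup(self, worker):  # pre-1.8 signature (no node/rc arguments)
            pass

    task = task_self()
    task.shell("echo hello", handler=OldStyle())
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        task.run()
    # each legacy-signature event that fired produced one DeprecationWarning
    print(len(caught), [str(w.message) for w in caught])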
Thiell """Unit test for ClusterShell Task (local)""" import copy import os import signal import socket import threading import time import warnings from ClusterShell.Defaults import DEFAULTS from ClusterShell.Event import EventHandler from ClusterShell.Task import * from ClusterShell.Worker.Exec import ExecWorker from ClusterShell.Worker.Worker import StreamWorker, WorkerSimple from ClusterShell.Worker.Worker import WorkerBadArgumentError from ClusterShell.Worker.Worker import FANOUT_UNLIMITED def _test_print_debug(task, s): # Use custom task info (prefix 'user_' is recommended) task.set_info("user_print_debug_last", s) class TaskLocalMixin(object): """Mixin test case class: should be overridden and used in multiple inheritance with unittest.TestCase""" def setUp(self): warnings.simplefilter("once") # save original fanout value self.fanout_orig = task_self().info("fanout") def tearDown(self): # restore original fanout value task_self().set_info("fanout", self.fanout_orig) warnings.resetwarnings() def testSimpleCommand(self): task = task_self() # init worker worker = task.shell("/bin/hostname") # run task task.resume() def testSimpleDualTask(self): task0 = task_self() worker1 = task0.shell("/bin/hostname") worker2 = task0.shell("/bin/uname -a") task0.resume() b1 = copy.copy(worker1.read()) b2 = copy.copy(worker2.read()) task1 = task_self() self.assertTrue(task1 is task0) worker1 = task1.shell("/bin/hostname") worker2 = task1.shell("/bin/uname -a") task1.resume() self.assertEqual(worker2.read(), b2) self.assertEqual(worker1.read(), b1) def testSimpleCommandNoneArgs(self): task = task_self() # init worker worker = task.shell("/bin/hostname", nodes=None, handler=None) # run task task.resume() def testSimpleMultipleCommands(self): task = task_self() # run commands workers = [] for i in range(0, 100): workers.append(task.shell("/bin/hostname")) task.resume() # verify results hn = socket.gethostname() for i in range(0, 100): t_hn = workers[i].read().splitlines()[0] self.assertEqual(t_hn.decode('utf-8'), hn) def testHugeOutputCommand(self): task = task_self() # init worker worker = task.shell("for i in $(seq 1 100000); do echo -n ' huge! 
    def testHugeOutputCommand(self):
        task = task_self()
        # init worker
        worker = task.shell("for i in $(seq 1 100000); do echo -n ' huge! '; done")
        # run task
        task.resume()
        self.assertEqual(worker.retcode(), 0)
        self.assertEqual(len(worker.read()), 700000)

    # task configuration
    def testTaskInfo(self):
        task = task_self()
        fanout = task.info("fanout")
        self.assertEqual(fanout, DEFAULTS.fanout)

    def testSimpleCommandTimeout(self):
        task = task_self()
        # init worker
        worker = task.shell("/bin/sleep 30")
        # run task
        self.assertRaises(TimeoutError, task.resume, 1)

    def testSimpleCommandNoTimeout(self):
        task = task_self()
        # init worker
        worker = task.shell("/bin/sleep 1")
        try:
            # run task
            task.resume(3)
        except TimeoutError:
            self.fail("did detect timeout")

    def testWorkersTimeout(self):
        task = task_self()
        # init worker
        worker = task.shell("/bin/sleep 6", timeout=1)
        worker = task.shell("/bin/sleep 6", timeout=0.5)
        try:
            # run task
            task.resume()
        except TimeoutError:
            self.fail("did detect timeout")
        self.assertTrue(worker.did_timeout())

    def testWorkersTimeout2(self):
        task = task_self()
        worker = task.shell("/bin/sleep 10", timeout=1)
        worker = task.shell("/bin/sleep 10", timeout=0.5)
        try:
            # run task
            task.resume()
        except TimeoutError:
            self.fail("did detect task timeout")

    def testWorkersAndTaskTimeout(self):
        task = task_self()
        worker = task.shell("/bin/sleep 10", timeout=5)
        worker = task.shell("/bin/sleep 10", timeout=3)
        self.assertRaises(TimeoutError, task.resume, 1)

    def testLocalEmptyBuffer(self):
        task = task_self()
        task.shell("true", key="empty")
        task.resume()
        self.assertEqual(task.key_buffer("empty"), b'')
        for buf, keys in task.iter_buffers():
            self.assertTrue(False)

    def testLocalEmptyError(self):
        task = task_self()
        task.shell("true", key="empty")
        task.resume()
        self.assertEqual(task.key_error("empty"), b'')
        for buf, keys in task.iter_errors():
            self.assertTrue(False)

    def testTaskKeyErrors(self):
        task = task_self()
        task.shell("true", key="dummy")
        task.resume()
        # task.key_retcode raises KeyError
        self.assertRaises(KeyError, task.key_retcode, "not_known")
        # unlike task.key_buffer/error
        self.assertEqual(task.key_buffer("not_known"), b'')
        self.assertEqual(task.key_error("not_known"), b'')

    def testLocalSingleLineBuffers(self):
        task = task_self()
        task.shell("/bin/echo foo", key="foo")
        task.shell("/bin/echo bar", key="bar")
        task.shell("/bin/echo bar", key="bar2")
        task.shell("/bin/echo foobar", key="foobar")
        task.shell("/bin/echo foobar", key="foobar2")
        task.shell("/bin/echo foobar", key="foobar3")
        task.resume()
        self.assertEqual(task.key_buffer("foobar"), b"foobar")
        cnt = 3
        for buf, keys in task.iter_buffers():
            cnt -= 1
            if buf == b"foo":
                self.assertEqual(len(keys), 1)
                self.assertEqual(keys[0], "foo")
            elif buf == b"bar":
                self.assertEqual(len(keys), 2)
                self.assertTrue(keys[0] == "bar" or keys[1] == "bar")
            elif buf == b"foobar":
                self.assertEqual(len(keys), 3)
        self.assertEqual(cnt, 0)

    def testLocalBuffers(self):
        task = task_self()
        task.shell("/usr/bin/printf 'foo\nbar\n'", key="foobar")
        task.shell("/usr/bin/printf 'foo\nbar\n'", key="foobar2")
        task.shell("/usr/bin/printf 'foo\nbar\n'", key="foobar3")
        task.shell("/usr/bin/printf 'foo\nbar\nxxx\n'", key="foobarX")
        task.shell("/usr/bin/printf 'foo\nfuu\n'", key="foofuu")
        task.shell("/usr/bin/printf 'faa\nber\n'", key="faaber")
        task.shell("/usr/bin/printf 'foo\nfuu\n'", key="foofuu2")
        task.resume()
        cnt = 4
        for buf, keys in task.iter_buffers():
            cnt -= 1
            if buf == b"faa\nber\n":
                self.assertEqual(len(keys), 1)
                self.assertTrue(keys[0].startswith("faaber"))
            elif buf == b"foo\nfuu\n":
                self.assertEqual(len(keys), 2)
                self.assertTrue(keys[0].startswith("foofuu"))
            elif buf == b"foo\nbar\n":
                self.assertEqual(len(keys), 3)
== b"foo\nbar\nxxx\n": self.assertEqual(len(keys), 1) self.assertTrue(keys[0].startswith("foobarX")) self.assertTrue(keys[0].startswith("foobar")) elif buf == b"foo\nbar\nxxx\n": self.assertEqual(len(keys), 1) self.assertTrue(keys[0].startswith("foobarX")) self.assertEqual(cnt, 0) def testLocalRetcodes(self): task = task_self() # 0 ['worker0'] # 1 ['worker1'] # 2 ['worker2'] # 3 ['worker3bis', 'worker3'] # 4 ['worker4'] # 5 ['worker5bis', 'worker5'] task.shell("true", key="worker0") task.shell("false", key="worker1") task.shell("/bin/sh -c 'exit 1'", key="worker1bis") task.shell("/bin/sh -c 'exit 2'", key="worker2") task.shell("/bin/sh -c 'exit 3'", key="worker3") task.shell("/bin/sh -c 'exit 3'", key="worker3bis") task.shell("/bin/sh -c 'exit 4'", key="worker4") task.shell("/bin/sh -c 'exit 1'", key="worker4") task.shell("/bin/sh -c 'exit 5'", key="worker5") task.shell("/bin/sh -c 'exit 5'", key="worker5bis") task.resume() # test key_retcode(key) self.assertEqual(task.key_retcode("worker2"), 2) # single self.assertEqual(task.key_retcode("worker4"), 4) # multiple self.assertRaises(KeyError, task.key_retcode, "worker9") # error cnt = 6 for rc, keys in task.iter_retcodes(): cnt -= 1 if rc == 0: self.assertEqual(len(keys), 1) self.assertEqual(keys[0], "worker0" ) elif rc == 1: self.assertEqual(len(keys), 3) self.assertTrue(keys[0] in ("worker1", "worker1bis", "worker4")) elif rc == 2: self.assertEqual(len(keys), 1) self.assertEqual(keys[0], "worker2" ) elif rc == 3: self.assertEqual(len(keys), 2) self.assertTrue(keys[0] in ("worker3", "worker3bis")) elif rc == 4: self.assertEqual(len(keys), 1) self.assertEqual(keys[0], "worker4" ) elif rc == 5: self.assertEqual(len(keys), 2) self.assertTrue(keys[0] in ("worker5", "worker5bis")) self.assertEqual(cnt, 0) # test max retcode API self.assertEqual(task.max_retcode(), 5) def testCustomPrintDebug(self): task = task_self() # first test that simply changing print_debug doesn't enable debug default_print_debug = task.info("print_debug") try: task.set_info("print_debug", _test_print_debug) task.shell("true") task.resume() self.assertEqual(task.info("user_print_debug_last"), None) # with debug enabled, it should work task.set_info("debug", True) task.shell("true") task.resume() self.assertEqual(task.info("user_print_debug_last"), "POPEN: true") # remove debug task.set_info("debug", False) # re-run for default print debug callback code coverage task.shell("true") task.resume() finally: # restore default print_debug task.set_info("debug", False) task.set_info("print_debug", default_print_debug) def testLocalRCBufferGathering(self): task = task_self() task.shell("/usr/bin/printf 'foo\nbar\n' && exit 1", key="foobar5") task.shell("/usr/bin/printf 'foo\nbur\n' && exit 1", key="foobar2") task.shell("/usr/bin/printf 'foo\nbar\n' && exit 1", key="foobar3") task.shell("/usr/bin/printf 'foo\nfuu\n' && exit 5", key="foofuu") task.shell("/usr/bin/printf 'foo\nbar\n' && exit 4", key="faaber") task.shell("/usr/bin/printf 'foo\nfuu\n' && exit 1", key="foofuu2") task.resume() foobur = b"foo\nbur" cnt = 5 for rc, keys in task.iter_retcodes(): for buf, keys in task.iter_buffers(keys): cnt -= 1 if buf == b"foo\nbar": self.assertTrue(rc == 1 or rc == 4) elif foobur == buf: self.assertEqual(rc, 1) elif b"foo\nfuu" == buf: self.assertTrue(rc == 1 or rc == 5) else: self.fail("invalid buffer returned") self.assertEqual(cnt, 0) def testLocalBufferRCGathering(self): task = task_self() task.shell("/usr/bin/printf 'foo\nbar\n' && exit 1", key="foobar5") task.shell("/usr/bin/printf 
    def testLocalBufferRCGathering(self):
        task = task_self()
        task.shell("/usr/bin/printf 'foo\nbar\n' && exit 1", key="foobar5")
        task.shell("/usr/bin/printf 'foo\nbur\n' && exit 1", key="foobar2")
        task.shell("/usr/bin/printf 'foo\nbar\n' && exit 1", key="foobar3")
        task.shell("/usr/bin/printf 'foo\nfuu\n' && exit 5", key="foofuu")
        task.shell("/usr/bin/printf 'foo\nbar\n' && exit 4", key="faaber")
        task.shell("/usr/bin/printf 'foo\nfuu\n' && exit 1", key="foofuu2")
        task.resume()
        cnt = 9
        for buf, keys in task.iter_buffers():
            for rc, keys in task.iter_retcodes(keys):
                # same checks as testLocalRCBufferGathering
                cnt -= 1
                if buf == b"foo\nbar\n":
                    self.assertTrue(rc == 1 or rc == 4)
                elif buf == b"foo\nbur\n":
                    self.assertEqual(rc, 1)
                elif buf == b"foo\nfuu\n":
                    self.assertTrue(rc == 1 or rc == 5)
        self.assertEqual(cnt, 0)

    def testLocalWorkerWrites(self):
        # Simple test: we write to a cat process and see if read matches.
        task = task_self()
        worker = task.shell("cat")
        # write first line
        worker.write(b"foobar\n")
        # write second line
        worker.write(b"deadbeaf\n")
        worker.set_write_eof()
        task.resume()
        self.assertEqual(worker.read(), b"foobar\ndeadbeaf")

    def testLocalWorkerWritesBcExample(self):
        # Other test: write a math statement to a bc process and check
        # for the result.
        task = task_self()
        worker = task.shell("bc -q")
        # write statement
        worker.write(b"2+2\n")
        worker.set_write_eof()
        # execute
        task.resume()
        # read result
        self.assertEqual(worker.read(), b"4")

    def testLocalWorkerWritesWithLateEOF(self):
        class LateEOFHandler(EventHandler):
            def ev_start(self, worker):
                worker.set_write_eof()

        task = task_self()
        worker = task.shell("(sleep 1; cat)", handler=LateEOFHandler())
        worker.write(b"cracoucasse\n")
        task.resume()
        # read result
        self.assertEqual(worker.read(), b"cracoucasse")

    def testEscape(self):
        task = task_self()
        worker = task.shell(r"export CSTEST=foobar; /bin/echo \$CSTEST | sed 's/\ foo/bar/'")
        # execute
        task.resume()
        # read result
        self.assertEqual(worker.read(), b"$CSTEST")

    def testEscape2(self):
        task = task_self()
        worker = task.shell(r"export CSTEST=foobar; /bin/echo $CSTEST | sed 's/\ foo/bar/'")
        # execute
        task.resume()
        # read result
        self.assertEqual(worker.read(), b"foobar")

    def testEngineClients(self):
        # private EngineClient stream basic tests
        class StartHandler(EventHandler):
            def __init__(self, test):
                self.test = test

            def ev_start(self, worker):
                if len(streams) == 2:
                    for streamd in streams:
                        for name, stream in streamd.items():
                            self.test.assertTrue(name in ['stdin', 'stdout', 'stderr'])
                            if name == 'stdin':
                                self.test.assertTrue(stream.writable())
                                self.test.assertFalse(stream.readable())
                            else:
                                self.test.assertTrue(stream.readable())
                                self.test.assertFalse(stream.writable())

        task = task_self()
        shdl = StartHandler(self)
        worker1 = task.shell("/bin/hostname", handler=shdl)
        worker2 = task.shell("echo ok", handler=shdl)
        engine = task._engine
        clients = engine.clients()
        self.assertEqual(len(clients), 2)
        streams = [client.streams for client in clients]
        task.resume()

    def testEnginePorts(self):
        task = task_self()
        worker = task.shell("/bin/hostname")
        self.assertEqual(len(task._engine.ports()), 1)
        task.resume()

    def testSimpleCommandAutoclose(self):
        task = task_self()
        worker = task.shell("/bin/sleep 3; /bin/uname", autoclose=True)
        task.resume()
        self.assertEqual(worker.read(), None)

    def testTwoSimpleCommandsAutoclose(self):
        task = task_self()
        worker1 = task.shell("/bin/sleep 2; /bin/echo ok")
        worker2 = task.shell("/bin/sleep 3; /bin/uname", autoclose=True)
        task.resume()
        self.assertEqual(worker1.read(), b"ok")
        self.assertEqual(worker2.read(), None)
task.shell("/bin/sleep 3; /bin/uname", autoclose=True) # the following leads to a call to unregister_stream() with autoclose flag set worker3 = task.shell("sleep 1; echo blah | cat", autoclose=True) task.resume() self.assertEqual(worker1.read(), b"ok") self.assertEqual(worker2.read(), None) def testLocalWorkerErrorBuffers(self): task = task_self() w1 = task.shell("/usr/bin/printf 'foo bar\n' 1>&2", key="foobar", stderr=True) w2 = task.shell("/usr/bin/printf 'foo\nbar\n' 1>&2", key="foobar2", stderr=True) task.resume() self.assertEqual(w1.error(), b'foo bar') self.assertEqual(w2.error(), b'foo\nbar') def testLocalErrorBuffers(self): task = task_self() task.shell("/usr/bin/printf 'foo\nbar\n' 1>&2", key="foobar", stderr=True) task.shell("/usr/bin/printf 'foo\nbar\n' 1>&2", key="foobar2", stderr=True) task.shell("/usr/bin/printf 'foo\nbar\n 1>&2'", key="foobar3", stderr=True) task.shell("/usr/bin/printf 'foo\nbar\nxxx\n' 1>&2", key="foobarX", stderr=True) task.shell("/usr/bin/printf 'foo\nfuu\n' 1>&2", key="foofuu", stderr=True) task.shell("/usr/bin/printf 'faa\nber\n' 1>&2", key="faaber", stderr=True) task.shell("/usr/bin/printf 'foo\nfuu\n' 1>&2", key="foofuu2", stderr=True) task.resume() cnt = 4 for buf, keys in task.iter_errors(): cnt -= 1 if buf == b"faa\nber\n": self.assertEqual(len(keys), 1) self.assertTrue(keys[0].startswith("faaber")) elif buf == b"foo\nfuu\n": self.assertEqual(len(keys), 2) self.assertTrue(keys[0].startswith("foofuu")) elif buf == b"foo\nbar\n": self.assertEqual(len(keys), 3) self.assertTrue(keys[0].startswith("foobar")) elif buf == b"foo\nbar\nxxx\n": self.assertEqual(len(keys), 1) self.assertTrue(keys[0].startswith("foobarX")) self.assertEqual(cnt, 0) def testTaskPrintDebug(self): task = task_self() # simple test, just run a task with debug on to improve test # code coverage task.set_info("debug", True) worker = task.shell("/bin/echo test") task.resume() task.set_info("debug", False) def testTaskAbortSelf(self): task = task_self() # abort(False) keeps current task_self() object task.abort() self.assertEqual(task, task_self()) # abort(True) unbinds current task_self() object task.abort(True) self.assertNotEqual(task, task_self()) # retry task = task_self() worker = task.shell("/bin/echo should not see that") task.abort() self.assertEqual(task, task_self()) def testTaskAbortHandler(self): class AbortOnReadTestHandler(EventHandler): def ev_read(self, worker, node, sname, msg): self.has_ev_read = True worker.task.abort() assert False, "Shouldn't reach this line" task = task_self() eh = AbortOnReadTestHandler() eh.has_ev_read = False task.shell("/bin/echo test", handler=eh) task.resume() self.assertTrue(eh.has_ev_read) def testWorkerSetKey(self): task = task_self() task.shell("/bin/echo foo", key="foo") worker = task.shell("/bin/echo foobar") worker.set_key("bar") task.resume() self.assertEqual(task.key_buffer("bar"), b"foobar") def testWorkerSimplePipeStdout(self): task = task_self() rfd, wfd = os.pipe() os.write(wfd, b"test\n") os.close(wfd) worker = WorkerSimple(os.fdopen(rfd), None, None, "pipe", None, stderr=True, timeout=-1, autoclose=False, closefd=False) self.assertEqual(worker.reader_fileno(), rfd) task.schedule(worker) task.resume() self.assertEqual(task.key_buffer("pipe"), b'test') dummy = os.fstat(rfd) # just to check that rfd is still valid here # (worker keeps a reference of file object) # rfd will be closed when associated file is released def testWorkerSimplePipeStdErr(self): task = task_self() rfd, wfd = os.pipe() os.write(wfd, b"test\n") os.close(wfd) 
        # be careful, stderr is arg #3
        worker = WorkerSimple(None, None, os.fdopen(rfd), "pipe", None,
                              stderr=True, timeout=-1, autoclose=False,
                              closefd=False)
        self.assertEqual(worker.error_fileno(), rfd)
        task.schedule(worker)
        task.resume()
        self.assertEqual(task.key_error("pipe"), b'test')
        dummy = os.fstat(rfd)  # just to check that rfd is still valid here
        # rfd will be closed when associated file is released

    def testWorkerSimplePipeStdin(self):
        task = task_self()
        rfd, wfd = os.pipe()
        # be careful, stdin is arg #2
        worker = WorkerSimple(None, os.fdopen(wfd, "w"), None, "pipe", None,
                              stderr=True, timeout=-1, autoclose=False,
                              closefd=False)
        self.assertEqual(worker.writer_fileno(), wfd)
        worker.write(b"write to stdin test\n")
        worker.set_write_eof()  # close stream after write!
        task.schedule(worker)
        task.resume()
        self.assertEqual(os.read(rfd, 1024), b"write to stdin test\n")
        os.close(rfd)
        # wfd will be closed when associated file is released

    # FIXME: reconsider this kind of test (which now must fail) especially
    # when using epoll engine, as soon as testsuite is improved (#95).
    #def testWorkerSimpleFile(self):
    #    """test WorkerSimple (file)"""
    #    task = task_self()
    #    # use tempfile
    #    tmpfile = tempfile.TemporaryFile()
    #    tmpfile.write("one line without EOL")
    #    tmpfile.seek(0)
    #    worker = WorkerSimple(tmpfile, None, None, "file", None, 0, True)
    #    task.schedule(worker)
    #    task.resume()
    #    self.assertEqual(worker.read(), "one line without EOL")

    def testInterruptEngine(self):
        class KillerThread(threading.Thread):
            def run(self):
                time.sleep(1)
                os.kill(self.pidkill, signal.SIGUSR1)
                task_wait()

        kth = KillerThread()
        kth.pidkill = os.getpid()
        task = task_self()
        signal.signal(signal.SIGUSR1, lambda x, y: None)
        task.shell("/bin/sleep 2", timeout=5)
        kth.start()
        task.resume()

    def testSignalWorker(self):
        class TestSignalHandler(EventHandler):
            def ev_read(self, worker, node, sname, msg):
                pid = int(worker.current_msg)
                os.kill(pid, signal.SIGTERM)

        task = task_self()
        wrk = task.shell("echo $$; /bin/sleep 2", handler=TestSignalHandler())
        task.resume()
        self.assertEqual(wrk.retcode(), 128 + signal.SIGTERM)

    def testShellDelayedIO(self):
        class TestDelayedHandler(EventHandler):
            def __init__(self, target_worker=None):
                self.target_worker = target_worker
                self.counter = 0

            def ev_read(self, worker, node, sname, msg):
                self.counter += 1
                if self.counter == 100:
                    worker.write(b"another thing to read\n")
                    worker.set_write_eof()

            def ev_timer(self, timer):
                self.target_worker.write(b"something to read\n" * 300)

        task = task_self()
        hdlr = TestDelayedHandler()
        reader = task.shell("cat", handler=hdlr)
        timer = task.timer(0.6, handler=TestDelayedHandler(reader))
        task.resume()
        self.assertEqual(hdlr.counter, 301)

    def testSimpleCommandReadNoEOL(self):
        task = task_self()
        # init worker
        worker = task.shell("echo -n okay")
        # run task
        task.resume()
        self.assertEqual(worker.read(), b"okay")

    def testLocalFanout(self):
        task = task_self()
        task.set_info("fanout", 3)

        # Test #1: simple
        for i in range(0, 10):
            worker = task.shell("echo test %d" % i)
        task.resume()

        # Test #2: fanout change during run
        class TestFanoutChanger(EventHandler):
            def ev_timer(self, timer):
                task_self().set_info("fanout", 1)

        timer = task.timer(2.0, handler=TestFanoutChanger())
        for i in range(0, 10):
            worker = task.shell("sleep 0.5")
        task.resume()

    def testLocalWorkerFanout(self):
        class TestRunCountChecker(EventHandler):
            def __init__(self):
                self.workers = []
                self.max_run_cnt = 0

            def ev_start(self, worker):
                self.workers.append(worker)
            def ev_read(self, worker, node, sname, msg):
                run_cnt = sum(e.registered for w in self.workers
                              for e in w._engine_clients())
                self.max_run_cnt = max(self.max_run_cnt, run_cnt)

        task = task_self()
        TEST_FANOUT = 3
        task.set_info("fanout", TEST_FANOUT)

        # TEST 1 - default worker fanout
        eh = TestRunCountChecker()
        for i in range(10):
            task.shell("echo foo", handler=eh)
        task.resume()
        # Engine fanout should be enforced
        self.assertTrue(eh.max_run_cnt <= TEST_FANOUT)

        # TEST 1bis - default worker fanout with ExecWorker
        eh = TestRunCountChecker()
        worker = ExecWorker(nodes='foo[0-9]', handler=eh, command='echo bar')
        task.schedule(worker)
        task.resume()
        # Engine fanout should be enforced
        self.assertTrue(eh.max_run_cnt <= TEST_FANOUT)

        # TEST 2 - create n x workers using worker.fanout
        eh = TestRunCountChecker()
        for i in range(10):
            task.shell("echo foo", handler=eh)._fanout = 1
        task.resume()
        # max_run_cnt should reach the total number of workers
        self.assertEqual(eh.max_run_cnt, 10)

        # TEST 2bis - create ExecWorker with multiple clients [larger fanout]
        eh = TestRunCountChecker()
        worker = ExecWorker(nodes='foo[0-9]', handler=eh, command='echo bar')
        worker._fanout = 5
        task.schedule(worker)
        task.resume()
        # max_run_cnt should reach worker._fanout
        self.assertEqual(eh.max_run_cnt, 5)

        # TEST 2ter - create ExecWorker with multiple clients [smaller fanout]
        eh = TestRunCountChecker()
        worker = ExecWorker(nodes='foo[0-9]', handler=eh, command='echo bar')
        worker._fanout = 1
        task.schedule(worker)
        task.resume()
        # max_run_cnt should reach worker._fanout
        self.assertEqual(eh.max_run_cnt, 1)

        # TEST 4 - create workers using unlimited fanout
        eh = TestRunCountChecker()
        for i in range(10):
            w = task.shell("echo foo", handler=eh)
            w._fanout = FANOUT_UNLIMITED
        task.resume()
        # max_run_cnt should reach the total number of workers
        self.assertEqual(eh.max_run_cnt, 10)

        # TEST 4bis - create ExecWorker with unlimited fanout
        eh = TestRunCountChecker()
        worker = ExecWorker(nodes='foo[0-9]', handler=eh, command='echo bar')
        worker._fanout = FANOUT_UNLIMITED
        task.schedule(worker)
        task.resume()
        # max_run_cnt should reach the total number of clients (10)
        self.assertEqual(eh.max_run_cnt, 10)

    def testPopenBadArgumentOption(self):
        # Check code < 1.4 compatibility
        self.assertRaises(WorkerBadArgumentError, WorkerPopen, None, None)
        # As of 1.4, ValueError is raised for missing parameter
        self.assertRaises(ValueError, WorkerPopen, None, None)  # 1.4+

    def testWorkerAbort(self):
        task = task_self()

        class AbortOnTimer(EventHandler):
            def __init__(self, worker):
                EventHandler.__init__(self)
                self.ext_worker = worker
                self.testtimer = False

            def ev_timer(self, timer):
                self.ext_worker.abort()
                self.ext_worker.abort()  # safe but no effect
                self.testtimer = True

        aot = AbortOnTimer(task.shell("sleep 10"))
        self.assertEqual(aot.testtimer, False)
        task.timer(1.0, handler=aot)
        task.resume()
        self.assertEqual(aot.testtimer, True)

    def testWorkerAbortSanity(self):
        task = task_self()
        worker = task.shell("sleep 1")
        worker.abort()
        # test noop abort() on unscheduled worker
        worker = WorkerPopen("sleep 1")
        worker.abort()

    def testKBI(self):
        class TestKBI(EventHandler):
            def ev_read(self, worker, node, sname, msg):
                raise KeyboardInterrupt

        task = task_self()
        ok = False
        try:
            task.run("echo test; sleep 5", handler=TestKBI())
        except KeyboardInterrupt:
            ok = True
            # We want to test here if engine clients are not properly
            # cleaned, or results are not cleaned on re-run()
            #
            # cannot assert on task.iter_retcodes() as we are not sure in
            # what order the interpreter will proceed
            #self.assertEqual(len(list(task.iter_retcodes())), 1)
            self.assertEqual(len(list(task.iter_buffers())), 1)
            # hard to test without really checking the number of clients
            # of engine
            self.assertEqual(len(task._engine._clients), 0)
            task.run("echo newrun")
            self.assertEqual(len(task._engine._clients), 0)
            self.assertEqual(len(list(task.iter_retcodes())), 1)
            self.assertEqual(len(list(task.iter_buffers())), 1)
            self.assertEqual(bytes(list(task.iter_buffers())[0][0]), b"newrun")
        self.assertTrue(ok, "KeyboardInterrupt not raised")

    # From old TaskAdvancedTest.py:

    def testTaskRun(self):
        wrk = task_self().shell("true")
        task_self().run()

    def testTaskRunTimeout(self):
        wrk = task_self().shell("sleep 1")
        self.assertRaises(TimeoutError, task_self().run, 0.3)
        wrk = task_self().shell("sleep 1")
        self.assertRaises(TimeoutError, task_self().run, timeout=0.3)

    def testTaskShellRunLocal(self):
        wrk = task_self().run("false")
        self.assertTrue(wrk)
        self.assertEqual(task_self().max_retcode(), 1)
        # Timeout in shell() fashion way.
        wrk = task_self().run("sleep 1", timeout=0.3)
        self.assertTrue(wrk)
        self.assertEqual(task_self().num_timeout(), 1)

    def testTaskEngineUserSelection(self):
        task_terminate()
        try:
            DEFAULTS.engine = 'select'
            self.assertEqual(task_self().info('engine'), 'select')
            task_terminate()
        finally:
            DEFAULTS.engine = 'auto'

    def testTaskEngineWrongUserSelection(self):
        try:
            task_terminate()
            DEFAULTS.engine = 'foobar'
            # Check for KeyError in case of wrong engine request
            self.assertRaises(KeyError, task_self)
        finally:
            DEFAULTS.engine = 'auto'
            task_terminate()

    def testTaskNewThread1(self):
        # create a task in a new thread
        task = Task()
        match = "test"
        # schedule a command in that task
        worker = task.shell("/bin/echo %s" % match)
        # run this task
        task.resume()
        # wait for the task to complete
        task_wait()
        # verify that the worker has completed
        self.assertEqual(worker.read(), match.encode('ascii'))
        # stop task
        task.abort()

    def testTaskInNewThread2(self):
        # create a task in a new thread
        task = Task()
        match = "again"
        # schedule a command in that task
        worker = task.shell("/bin/echo %s" % match)
        # run this task
        task.resume()
        # wait for the task to complete
        task_wait()
        # verify that the worker has completed
        self.assertEqual(worker.read(), match.encode('ascii'))
        # stop task
        task.abort()

    def testTaskInNewThread3(self):
        # create a task in a new thread
        task = Task()
        match = "once again"
        # schedule a command in that task
        worker = task.shell("/bin/echo %s" % match)
        # run this task
        task.resume()
        # wait for the task to complete
        task_wait()
        # verify that the worker has completed
        self.assertEqual(worker.read(), match.encode('ascii'))
        # stop task
        task.abort()

    def testLocalPickupHup(self):
        class PickupHupCounter(EventHandler):
            def __init__(self):
                self.pickup_count = 0
                self.hup_count = 0

            def ev_pickup(self, worker, node):
                self.pickup_count += 1

            def ev_hup(self, worker, node, rc):
                self.hup_count += 1

        task = task_self()
        fanout = task.info("fanout")
        try:
            task.set_info("fanout", 3)

            # Test #1: simple
            chdlr = PickupHupCounter()
            for i in range(0, 10):
                task.shell("/bin/echo test %d" % i, handler=chdlr)
            task.resume()
            self.assertEqual(chdlr.pickup_count, 10)
            self.assertEqual(chdlr.hup_count, 10)

            # Test #2: fanout change during run
            chdlr = PickupHupCounter()

            class TestFanoutChanger(EventHandler):
                def ev_timer(self, timer):
                    task_self().set_info("fanout", 1)

            timer = task.timer(2.0, handler=TestFanoutChanger())
            for i in range(0, 10):
                task.shell("sleep 0.5", handler=chdlr)
            task.resume()
            self.assertEqual(chdlr.pickup_count, 10)
            self.assertEqual(chdlr.hup_count, 10)
        finally:
            # restore original fanout value
            task.set_info("fanout", fanout)
    def test_shell_nostdin(self):
        # this shouldn't block when we prevent the use of stdin
        task = task_self()
        task.shell("cat", stdin=False)
        task.resume()
        # same thing with run()
        task.run("cat", stdin=False)

    def test_mixed_worker_retcodes(self):
        """test Task retcode handling with mixed workers"""
        # This test case failed with CS <= 1.7.3
        # Conditions: task.max_retcode() set during runtime (not None)
        # and then a StreamWorker closing, thus calling Task._set_rc(rc=None)
        # To reproduce, we start a StreamWorker on first read of a ExecWorker.

        class TestH(EventHandler):
            def __init__(self, worker2):
                self.worker2 = worker2

            def ev_read(self, worker, node, sname, msg):
                worker.task.schedule(self.worker2)

        worker2 = StreamWorker(handler=None)
        worker1 = ExecWorker(nodes='localhost', handler=TestH(worker2),
                             command="echo ok")
        # Create pipe stream
        rfd1, wfd1 = os.pipe()
        worker2.set_reader("pipe1", rfd1, closefd=False)
        os.write(wfd1, b"test\n")
        os.close(wfd1)
        # Enable pipe1_msgtree
        task_self().set_default("pipe1_msgtree", True)
        task_self().schedule(worker1)
        task_self().run()
        self.assertEqual(worker1.node_buffer('localhost'), b"ok")
        self.assertEqual(worker1.node_retcode('localhost'), 0)
        self.assertEqual(worker2.read(sname="pipe1"), b"test")
        self.assertEqual(task_self().max_retcode(), 0)

    def testWorkerPopenKeyCompat(self):
        """test WorkerPopen.key attribute (compat with 1.6)"""
        # Was broken in 1.7 to 1.7.3 after StreamWorker changes
        task = task_self()
        worker = task.shell("echo ok", key="ok")
        self.assertEqual(worker.key, "ok")
        worker = WorkerPopen("echo foo", key="foo")
        self.assertEqual(worker.key, "foo")
        task.run()
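
# Illustrative sketch (not from the original suite): bounding concurrency
# with the task "fanout" setting, as the fanout tests above do; the command
# and counts here are ours.
def _demo_fanout():  # pragma: no cover - documentation aid
    task = task_self()
    saved = task.info("fanout")
    try:
        task.set_info("fanout", 2)   # at most 2 commands run at once
        for i in range(6):
            task.shell("sleep 0.1; echo done %d" % i, key=i)
        task.resume()                # 6 workers, executed 2 by 2
    finally:
        task.set_info("fanout", saved)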

# ==== ClusterShell-1.9.2/tests/TaskLocalTest.py ====

"""
Unit test for ClusterShell Task with all engines (local worker)
"""

import sys
import unittest

from ClusterShell.Defaults import DEFAULTS
from ClusterShell.Engine.Select import EngineSelect
from ClusterShell.Engine.Poll import EnginePoll
from ClusterShell.Engine.EPoll import EngineEPoll
from ClusterShell.Task import *

from TaskLocalMixin import TaskLocalMixin

ENGINE_SELECT_ID = EngineSelect.identifier
ENGINE_POLL_ID = EnginePoll.identifier
ENGINE_EPOLL_ID = EngineEPoll.identifier


class TaskLocalEngineSelectTest(TaskLocalMixin, unittest.TestCase):

    def setUp(self):
        # switch Engine
        task_terminate()
        self.engine_id_save = DEFAULTS.engine
        DEFAULTS.engine = ENGINE_SELECT_ID
        # select should be supported anywhere...
        self.assertEqual(task_self().info('engine'), ENGINE_SELECT_ID)
        # call base class setUp()
        TaskLocalMixin.setUp(self)

    def tearDown(self):
        # call base class tearDown()
        TaskLocalMixin.tearDown(self)
        # restore Engine
        DEFAULTS.engine = self.engine_id_save
        task_terminate()


class TaskLocalEnginePollTest(TaskLocalMixin, unittest.TestCase):

    def setUp(self):
        # switch Engine
        task_terminate()
        self.engine_id_save = DEFAULTS.engine
        DEFAULTS.engine = ENGINE_POLL_ID
        if task_self().info('engine') != ENGINE_POLL_ID:
            self.skipTest("engine %s not supported on this host"
                          % ENGINE_POLL_ID)
        # call base class setUp()
        TaskLocalMixin.setUp(self)

    def tearDown(self):
        # call base class tearDown()
        TaskLocalMixin.tearDown(self)
        # restore Engine
        DEFAULTS.engine = self.engine_id_save
        task_terminate()


# select.epoll is only available with Python 2.6 (if condition to be
# removed once we only support Py2.6+)
if sys.version_info >= (2, 6, 0):

    class TaskLocalEngineEPollTest(TaskLocalMixin, unittest.TestCase):

        def setUp(self):
            # switch Engine
            task_terminate()
            self.engine_id_save = DEFAULTS.engine
            DEFAULTS.engine = ENGINE_EPOLL_ID
            if task_self().info('engine') != ENGINE_EPOLL_ID:
                self.skipTest("engine %s not supported on this host"
                              % ENGINE_EPOLL_ID)
            # call base class setUp()
            TaskLocalMixin.setUp(self)

        def tearDown(self):
            # call base class tearDown()
            TaskLocalMixin.tearDown(self)
            # restore Engine
            DEFAULTS.engine = self.engine_id_save
            task_terminate()
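
# Illustrative sketch (not from the original suite): forcing an I/O engine
# through DEFAULTS.engine, as the fixtures above do.
def _demo_engine_selection():  # pragma: no cover - documentation aid
    saved = DEFAULTS.engine
    try:
        task_terminate()                   # drop any existing task first
        DEFAULTS.engine = 'select'         # e.g. 'auto', 'select', 'poll', 'epoll'
        print(task_self().info('engine'))  # -> 'select'
    finally:
        DEFAULTS.engine = saved
        task_terminate()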

# ==== ClusterShell-1.9.2/tests/TaskMsgTreeTest.py ====

# ClusterShell test suite
# Written by S. Thiell

"""Unit test for ClusterShell TaskMsgTree variants"""

import unittest

from ClusterShell.Task import TaskMsgTreeError
from ClusterShell.Task import task_cleanup, task_self
from ClusterShell.Event import EventHandler


class TaskMsgTreeTest(unittest.TestCase):
    """Task/MsgTree test case class"""

    def tearDown(self):
        # cleanup task_self between tests to restore defaults
        task_cleanup()

    def testEnabledMsgTree(self):
        """test TaskMsgTree enabled"""
        task = task_self()
        # init worker
        worker = task.shell("echo foo bar")
        task.set_default('stdout_msgtree', True)
        # run task
        task.resume()
        # should not raise
        for buf, keys in task.iter_buffers():
            pass

    def testEmptyMsgTree(self):
        """test TaskMsgTree empty"""
        task = task_self()
        worker = task.shell("/bin/true")
        # should not raise nor returns anything
        self.assertEqual(list(task.iter_buffers()), [])

    def testDisabledMsgTree(self):
        """test TaskMsgTree disabled"""
        task = task_self()
        worker = task.shell("echo foo bar2")
        task.set_default('stdout_msgtree', False)
        task.resume()
        self.assertRaises(TaskMsgTreeError, task.iter_buffers)
        # can be re-enabled (cold)
        task.set_default('stdout_msgtree', True)
        # but no messages should be found
        self.assertEqual(list(task.iter_buffers()), [])

    def testHotEnablingMsgTree(self):
        """test TaskMsgTree enabling at runtime (v1.7)"""

        class HotEH2(EventHandler):
            def ev_read(self, worker, node, sname, msg):
                worker.task.set_default("stdout_msgtree", True)
                worker.task.shell("echo foo bar2")  # default EH

        task = task_self()
        task.set_default("stdout_msgtree", False)
        self.assertEqual(task.default("stdout_msgtree"), False)
        worker = task.shell("echo foo bar", handler=HotEH2())
        task.resume()
        # only second message has been recorded
        for buf, keys in task.iter_buffers():
            self.assertEqual(buf, b"foo bar2")

    def testHotDisablingMsgTree(self):
        """test TaskMsgTree disabling at runtime (v1.7)"""

        class HotEH2(EventHandler):
            def ev_read(self, worker, node, sname, msg):
                worker.task.set_default("stdout_msgtree", False)
                worker.task.shell("echo foo bar2")  # default EH

        task = task_self()
        self.assertEqual(task.default("stdout_msgtree"), True)
        worker = task.shell("echo foo bar", handler=HotEH2())
        task.resume()
        # only first message has been recorded
        for buf, keys in task.iter_buffers():
            self.assertEqual(buf, b"foo bar")

    def testEnabledMsgTreeStdErr(self):
        """test TaskMsgTree enabled for stderr"""
        task = task_self()
        worker = task.shell("echo foo bar 1>&2", stderr=True)
        worker = task.shell("echo just foo bar", stderr=True)
        task.set_default('stderr_msgtree', True)
        # run task
        task.resume()
        # should not raise:
        for buf, keys in task.iter_errors():
            pass
        # this neither:
        for buf, keys in task.iter_buffers():
            pass

    def testDisabledMsgTreeStdErr(self):
        """test TaskMsgTree disabled for stderr"""
        task = task_self()
        worker = task.shell("echo foo bar2 1>&2", stderr=True)
        worker = task.shell("echo just foo bar2", stderr=True)
        task.set_default('stderr_msgtree', False)
        # run task
        task.resume()
        # iter_errors() should raise
        self.assertRaises(TaskMsgTreeError, task.iter_errors)
        # but stdout should not
        for buf, keys in task.iter_buffers():
            pass
        # can be re-enabled (cold)
        task.set_default('stderr_msgtree', True)
        # but no messages should be found
        self.assertEqual(list(task.iter_errors()), [])

    def testTaskFlushBuffers(self):
        """test Task.flush_buffers"""
        task = task_self()
        worker = task.shell("echo foo bar")
        task.set_default('stdout_msgtree', True)
        # run task
        task.resume()
        task.flush_buffers()
        self.assertEqual(len(list(task.iter_buffers())), 0)

    def testTaskFlushErrors(self):
        """test Task.flush_errors"""
        task = task_self()
        worker = task.shell("echo foo bar 1>&2")
        task.set_default('stderr_msgtree', True)
        # run task
        task.resume()
        task.flush_errors()
        self.assertEqual(len(list(task.iter_errors())), 0)

    def testTaskModifyCommonStreams(self):
        """test worker common stream names change"""
        task = task_self()
        worker = task.shell("echo foo 1>&2; echo bar", stderr=True)
        worker.SNAME_STDOUT = 'dummy-stdout'  # disable buffering on stdout only
        task.resume()
        # only stderr should have been buffered at task level
        self.assertEqual(len(list(task.iter_buffers())), 0)
        self.assertEqual(len(list(task.iter_errors())), 1)
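
# Illustrative sketch (not from the original suite): gathering identical
# output lines with the stdout msgtree; the command and keys here are ours.
def _demo_msgtree_gather():  # pragma: no cover - documentation aid
    task = task_self()
    task.set_default('stdout_msgtree', True)  # buffer stdout at task level
    for key in ('a', 'b', 'c'):
        task.shell("echo same line", key=key)
    task.resume()
    for buf, keys in task.iter_buffers():
        # identical buffers are merged; keys lists the workers that
        # produced them, e.g. b"same line" <- ['a', 'b', 'c']
        print(buf, keys)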
Thiell """Unit test for ClusterShell inter-Task msg""" import threading import time import unittest from ClusterShell.Task import * from ClusterShell.Event import EventHandler class TaskPortTest(unittest.TestCase): def tearDown(self): task_cleanup() def testPortMsg1(self): """test port msg from main thread to task""" TaskPortTest.got_msg = False TaskPortTest.started = 0 # create task in new thread task = Task() class PortHandler(EventHandler): def ev_port_start(self, port): TaskPortTest.started += 1 def ev_msg(self, port, msg): # receive msg assert msg == "toto" assert task_self().thread == threading.current_thread() TaskPortTest.got_msg = True task_self().abort() # create non-autoclosing port port = task.port(handler=PortHandler()) task.resume() # send msg from main thread port.msg("toto") task_wait() self.assertEqual(TaskPortTest.started, 1) self.assertTrue(TaskPortTest.got_msg) def testPortRemove(self): """test remove_port()""" class PortHandler(EventHandler): def ev_msg(self, port, msg): pass task = Task() # new thread port = task.port(handler=PortHandler(), autoclose=True) task.resume() task.remove_port(port) task_wait() def testPortClosed(self): """test port msg on closed port""" # test sending message to "stillborn" port self.port_msg_result = None # thread will wait a bit and send a port message def test_thread_start(port, test): time.sleep(0.5) test.port_msg_result = port.msg('foobar') class TestHandler(EventHandler): pass task = task_self() test_handler = TestHandler() task.timer(0.2, handler=test_handler, autoclose=False) port = task.port(handler=test_handler, autoclose=True) thread = threading.Thread(None, test_thread_start, args=(port, self)) thread.daemon = True thread.start() task.resume() task.abort(kill=True) # will remove_port() thread.join() self.assertEqual(self.port_msg_result, False) # test vs. None and True ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/tests/TaskRLimitsTest.py0000644104717000001440000000453714501416555020367 0ustar00sthiellusers# ClusterShell task resource consumption/limits test suite # Written by S. 
Thiell """Unit test for ClusterShell Task (resource limits)""" import resource import unittest from TLib import HOSTNAME from ClusterShell.Task import * from ClusterShell.Worker.Pdsh import WorkerPdsh class TaskRLimitsTest(unittest.TestCase): def setUp(self): """set soft nofile resource limit to 100""" self.soft, self.hard = resource.getrlimit(resource.RLIMIT_NOFILE) resource.setrlimit(resource.RLIMIT_NOFILE, (100, self.hard)) def tearDown(self): """restore original resource limits""" resource.setrlimit(resource.RLIMIT_NOFILE, (self.soft, self.hard)) def _testPopen(self, stderr): task = task_self() task.set_info("fanout", 10) for i in range(2000): worker = task.shell("/bin/hostname", stderr=stderr) # run task task.resume() def testPopen(self): """test resource usage with local task.shell(stderr=False)""" self._testPopen(False) def testPopenStderr(self): """test resource usage with local task.shell(stderr=True)""" self._testPopen(True) def _testRemote(self, stderr): task = task_self() task.set_info("fanout", 10) for i in range(400): worker = task.shell("/bin/hostname", nodes=HOSTNAME, stderr=stderr) # run task task.resume() def testRemote(self): """test resource usage with remote task.shell(stderr=False)""" self._testRemote(False) def testRemoteStderr(self): """test resource usage with remote task.shell(stderr=True)""" self._testRemote(True) def _testRemotePdsh(self, stderr): task = task_self() task.set_info("fanout", 10) for i in range(200): worker = WorkerPdsh(HOSTNAME, handler=None, timeout=0, command="/bin/hostname", stderr=stderr) task.schedule(worker) # run task task.resume() def testRemotePdsh(self): """test resource usage with WorkerPdsh(stderr=False)""" self._testRemotePdsh(False) def testRemotePdshStderr(self): """test resource usage with WorkerPdsh(stderr=True)""" self._testRemotePdsh(True) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/tests/TaskThreadJoinTest.py0000644104717000001440000000746114501416555021032 0ustar00sthiellusers# ClusterShell test suite # Written by S. 

# ==== ClusterShell-1.9.2/tests/TaskThreadJoinTest.py ====

# ClusterShell test suite
# Written by S. Thiell 2010-01-16

"""
Unit test for ClusterShell task's join feature in multithreaded
environments
"""

import time
import unittest

from ClusterShell.Task import *
from ClusterShell.Event import EventHandler


class TaskThreadJoinTest(unittest.TestCase):

    def tearDown(self):
        task_cleanup()

    def testThreadTaskWaitWhenRunning(self):
        """test task_wait() when workers are running"""
        for i in range(1, 5):
            task = Task()
            task.shell("sleep %d" % i)
            task.resume()
        task_wait()

    def testThreadTaskWaitWhenSomeFinished(self):
        """test task_wait() when some workers finished"""
        for i in range(1, 5):
            task = Task()
            task.shell("sleep %d" % i)
            task.resume()
        time.sleep(2)
        task_wait()

    def testThreadTaskWaitWhenAllFinished(self):
        """test task_wait() when all workers finished"""
        for i in range(1, 3):
            task = Task()
            task.shell("sleep %d" % i)
            task.resume()
        time.sleep(4)
        task_wait()

    def testThreadSimpleTaskSupervisor(self):
        """test task methods from another thread"""
        #print "PASS 1"
        task = Task()
        task.shell("sleep 3")
        task.shell("echo testing", key=1)
        task.resume()
        task.join()
        self.assertEqual(task.key_buffer(1), b"testing")
        #print "PASS 2"
        task.shell("echo ok", key=2)
        task.resume()
        task.join()
        #print "PASS 3"
        self.assertEqual(task.key_buffer(2), b"ok")
        task.shell("sleep 1 && echo done", key=3)
        task.resume()
        task.join()
        #print "PASS 4"
        self.assertEqual(task.key_buffer(3), b"done")
        task.abort()

    def testThreadTaskBuffers(self):
        """test task data access methods after join()"""
        task = Task()
        # test data access from main thread

        # test stderr separated
        task.set_default("stderr", True)
        task.shell("echo foobar", key="OUT")
        task.shell("echo raboof 1>&2", key="ERR")
        task.resume()
        task.join()
        self.assertEqual(task.key_buffer("OUT"), b"foobar")
        self.assertEqual(task.key_error("OUT"), b"")
        self.assertEqual(task.key_buffer("ERR"), b"")
        self.assertEqual(task.key_error("ERR"), b"raboof")

        # test stderr merged
        task.set_default("stderr", False)
        task.shell("echo foobar", key="OUT")
        task.shell("echo raboof 1>&2", key="ERR")
        task.resume()
        task.join()
        self.assertEqual(task.key_buffer("OUT"), b"foobar")
        self.assertEqual(task.key_error("OUT"), b"")
        self.assertEqual(task.key_buffer("ERR"), b"raboof")
        self.assertEqual(task.key_error("ERR"), b"")

    def testThreadTaskUnhandledException(self):
        """test task unhandled exception in thread"""

        class TestUnhandledException(Exception):
            """test exception"""

        class RaiseOnRead(EventHandler):
            def ev_read(self, worker, node, sname, msg):
                raise TestUnhandledException("you should see this exception")

        task = Task()
        # test data access from main thread
        task.shell("echo raisefoobar", key=1, handler=RaiseOnRead())
        task.resume()
        task.join()
        self.assertEqual(task.key_buffer(1), b"raisefoobar")
        time.sleep(1)  # for pretty display, because unhandled exception
                       # traceback may be sent to stderr after the join()
        self.assertFalse(task.running())

    def testThreadTaskWaitWhenNotStarted(self):
        """test task_wait() when workers not started"""
        for i in range(1, 5):
            task = Task()
            task.shell("sleep %d" % i)
            task_wait()
            task.resume()
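
# Illustrative sketch (not from the original suite): driving a task thread
# from the main thread with resume()/join(), as the tests above do.
def _demo_thread_join():  # pragma: no cover - documentation aid
    task = Task()                    # runs in its own thread
    task.shell("echo hello", key="greet")
    task.resume()                    # non-blocking from this thread
    task.join()                      # block until the task run completes
    print(task.key_buffer("greet"))  # safe to read results after join()
    task.abort()                     # terminate the task thread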
Thiell """Unit test for ClusterShell in multithreaded environments""" import random import time import threading import unittest from ClusterShell.Task import * from ClusterShell.Event import EventHandler class TaskThreadSuspendTest(unittest.TestCase): def tearDown(self): task_cleanup() def testSuspendMiscTwoTasks(self): """test task suspend/resume (2 tasks)""" task = task_self() task2 = Task() task2.shell("sleep 4 && echo thr1") task2.resume() w = task.shell("sleep 1 && echo thr0", key=0) task.resume() self.assertEqual(task.key_buffer(0), b"thr0") self.assertEqual(w.read(), b"thr0") assert task2 != task task2.suspend() time.sleep(10) task2.resume() task_wait() task2.shell("echo suspend_test", key=1) task2.resume() task_wait() self.assertEqual(task2.key_buffer(1), b"suspend_test") def _thread_delayed_unsuspend_func(self, task): """thread used to unsuspend task during task_wait()""" time_th = int(random.random()*6+5) #print "TIME unsuspend thread=%d" % time_th time.sleep(time_th) self.resumed = True task.resume() def testThreadTaskWaitWithSuspend(self): """test task_wait() with suspended tasks""" task = Task() self.resumed = False threading.Thread(None, self._thread_delayed_unsuspend_func, args=(task,)).start() time_sh = int(random.random()*4) #print "TIME shell=%d" % time_sh task.shell("sleep %d" % time_sh) task.resume() time.sleep(1) suspended = task.suspend() for i in range(1, 4): task = Task() task.shell("sleep %d" % i) task.resume() time.sleep(1) task_wait() self.assertTrue(self.resumed or suspended == False) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/tests/TaskTimeoutTest.py0000644104717000001440000000133014501416555020416 0ustar00sthiellusers# ClusterShell (local) test suite # Written by S. Thiell """Unit test for ClusterShell Task/Worker timeout support""" import unittest from ClusterShell.Task import task_self class TaskTimeoutTest(unittest.TestCase): def testWorkersTimeoutBuffers(self): """test worker buffers with timeout""" task = task_self() worker = task.shell('echo some buffer; echo here...; sleep 10', timeout=4) task.resume() self.assertEqual(worker.read(), b"""some buffer here...""") test = 1 for buf, keys in task.iter_buffers(): test = 0 self.assertEqual(buf, b"""some buffer here...""") self.assertEqual(test, 0, "task.iter_buffers() did not work") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/tests/TaskTimerTest.py0000644104717000001440000004262214501416555020061 0ustar00sthiellusers# ClusterShell timer test suite # Written by S. 
Thiell """Unit test for ClusterShell Task's timer""" import copy import threading from time import sleep, time import unittest from TLib import HOSTNAME from ClusterShell.Engine.Engine import EngineTimer, EngineIllegalOperationError from ClusterShell.Event import EventHandler from ClusterShell.Task import * EV_START=0x01 EV_READ=0x02 EV_WRITTEN=0x04 EV_HUP=0x08 EV_TIMEOUT=0x10 EV_CLOSE=0x20 EV_TIMER=0x40 class TaskTimerTest(unittest.TestCase): class TSimpleTimerChecker(EventHandler): def __init__(self): self.count = 0 def ev_timer(self, timer): self.count += 1 def testSimpleTimer(self): """test simple timer""" task = task_self() # init event handler for timer's callback test_handler = self.__class__.TSimpleTimerChecker() timer1 = task.timer(0.5, handler=test_handler) self.assertTrue(timer1 is not None) # run task task.resume() self.assertEqual(test_handler.count, 1) def testSimpleTimer2(self): """test simple 2 timers with same fire_date""" task = task_self() test_handler = self.__class__.TSimpleTimerChecker() timer1 = task.timer(0.5, handler=test_handler) timer2 = task.timer(0.5, handler=test_handler) task.resume() self.assertEqual(test_handler.count, 2) def testSimpleTimerImmediate(self): """test simple immediate timer""" task = task_self() test_handler = self.__class__.TSimpleTimerChecker() timer1 = task.timer(0.0, handler=test_handler) task.resume() self.assertEqual(test_handler.count, 1) def testSimpleTimerImmediate2(self): """test simple immediate timers""" task = task_self() test_handler = self.__class__.TSimpleTimerChecker() for i in range(10): timer1 = task.timer(0.0, handler=test_handler) task.resume() self.assertEqual(test_handler.count, 10) class TRepeaterTimerChecker(EventHandler): def __init__(self): self.count = 0 def ev_timer(self, timer): self.count += 1 timer.set_nextfire(0.2) if self.count > 4: timer.invalidate() def testSimpleRepeater(self): """test simple repeater timer""" task = task_self() # init event handler for timer's callback test_handler = self.__class__.TRepeaterTimerChecker() timer1 = task.timer(0.5, interval=0.2, handler=test_handler) # run task task.resume() self.assertEqual(test_handler.count, 5) def testRepeaterInvalidatedTwice(self): """test repeater timer invalidated two times""" task = task_self() # init event handler for timer's callback test_handler = self.__class__.TRepeaterTimerChecker() timer1 = task.timer(0.5, interval=0.2, handler=test_handler) # run task task.resume() self.assertEqual(test_handler.count, 5) # force invalidation again (2d time), this should do nothing timer1.invalidate() # call handler one more time directly: set_nextfire should raise an error self.assertRaises(EngineIllegalOperationError, test_handler.ev_timer, timer1) # force invalidation again (3th), this should do nothing timer1.invalidate() def launchSimplePrecisionTest(self, delay): task = task_self() # init event handler for timer's callback test_handler = self.__class__.TSimpleTimerChecker() timer1 = task.timer(delay, handler=test_handler) t1 = time() # run task task.resume() t2 = time() check_precision = 0.05 self.assertTrue(abs((t2 - t1) - delay) < check_precision, "%f >= %f" % (abs((t2 - t1) - delay), check_precision)) self.assertEqual(test_handler.count, 1) def testPrecision1(self): """test simple timer precision (0.1s)""" self.launchSimplePrecisionTest(0.1) def testPrecision2(self): """test simple timer precision (1.0s)""" self.launchSimplePrecisionTest(1.0) def testWorkersAndTimer(self): """test task with timer and local jobs""" task0 = task_self() worker1 = 
task0.shell("/bin/hostname") worker2 = task0.shell("/bin/uname -a") test_handler = self.__class__.TSimpleTimerChecker() timer1 = task0.timer(1.0, handler=test_handler) task0.resume() self.assertEqual(test_handler.count, 1) b1 = copy.copy(worker1.read()) b2 = copy.copy(worker2.read()) worker1 = task0.shell("/bin/hostname") worker2 = task0.shell("/bin/uname -a") timer1 = task0.timer(1.0, handler=test_handler) task0.resume() self.assertEqual(test_handler.count, 2) # same handler, called 2 times self.assertEqual(worker2.read(), b2) self.assertEqual(worker1.read(), b1) def testNTimers(self): """test multiple timers""" task = task_self() # init event handler for timer's callback test_handler = self.__class__.TSimpleTimerChecker() for i in range(0, 30): timer1 = task.timer(1.0 + 0.2 * i, handler=test_handler) # run task task.resume() self.assertEqual(test_handler.count, 30) class TEventHandlerTimerInvalidate(EventHandler): """timer operations event handler simulator""" def __init__(self, test): self.test = test self.timer = None self.timer_count = 0 self.flags = 0 def ev_start(self, worker): self.flags |= EV_START def ev_read(self, worker, node, sname, msg): self.test.assertEqual(self.flags, EV_START) self.flags |= EV_READ def ev_written(self, worker, node, sname, size): self.test.assertTrue(self.flags & EV_START) self.flags |= EV_WRITTEN def ev_hup(self, worker, node, rc): self.test.assertTrue(self.flags & EV_START) self.flags |= EV_HUP def ev_close(self, worker, timedout): self.test.assertTrue(self.flags & EV_START) if timedout: self.flags |= EV_TIMEOUT self.flags |= EV_CLOSE def ev_timer(self, timer): self.flags |= EV_TIMER self.timer_count += 1 self.timer.invalidate() def testTimerInvalidateInHandler(self): """test timer invalidate in event handler""" task = task_self() test_eh = self.__class__.TEventHandlerTimerInvalidate(self) # init worker worker = task.shell("/bin/sleep 1", handler=test_eh) worker = task.shell("/bin/sleep 3", nodes=HOSTNAME, handler=test_eh) # init timer timer = task.timer(1.5, interval=0.5, handler=test_eh) test_eh.timer = timer # run task task.resume() # test timer did fire once self.assertEqual(test_eh.timer_count, 1) class TEventHandlerTimerSetNextFire(EventHandler): def __init__(self, test): self.test = test self.timer = None self.timer_count = 0 self.flags = 0 def ev_start(self, worker): self.flags |= EV_START def ev_read(self, worker, node, sname, msg): self.test.assertEqual(self.flags, EV_START) self.flags |= EV_READ def ev_written(self, worker): self.test.assertTrue(self.flags & EV_START) self.flags |= EV_WRITTEN def ev_hup(self, worker, node, rc): self.test.assertTrue(self.flags & EV_START) self.flags |= EV_HUP def ev_close(self, worker, timedout): self.test.assertTrue(self.flags & EV_START) if timedout: self.flags |= EV_TIMEOUT self.flags |= EV_CLOSE def ev_timer(self, timer): self.flags |= EV_TIMER if self.timer_count < 4: self.timer.set_nextfire(0.5) # else invalidate automatically as timer does not repeat self.timer_count += 1 def testTimerSetNextFireInHandler(self): """test timer set_nextfire in event handler""" task = task_self() test_eh = self.__class__.TEventHandlerTimerSetNextFire(self) # init worker worker = task.shell("/bin/sleep 3", nodes=HOSTNAME, handler=test_eh) # init timer timer = task.timer(1.0, interval=0.2, handler=test_eh) test_eh.timer = timer # run task task.resume() # test timer did fire one time self.assertEqual(test_eh.timer_count, 5) class TEventHandlerTimerOtherInvalidate(EventHandler): """timer operations event handler simulator""" 
    class TEventHandlerTimerOtherInvalidate(EventHandler):
        """timer operations event handler simulator"""

        def __init__(self, test):
            self.test = test
            self.timer = None
            self.flags = 0

        def ev_start(self, worker):
            self.flags |= EV_START

        def ev_read(self, worker, node, sname, msg):
            self.flags |= EV_READ
            self.timer.invalidate()

        def ev_written(self, worker):
            self.test.assertTrue(self.flags & EV_START)
            self.flags |= EV_WRITTEN

        def ev_hup(self, worker, node, rc):
            self.test.assertTrue(self.flags & EV_START)
            self.flags |= EV_HUP

        def ev_close(self, worker, timedout):
            self.test.assertTrue(self.flags & EV_START)
            if timedout:
                self.flags |= EV_TIMEOUT
            self.flags |= EV_CLOSE

        def ev_timer(self, timer):
            self.flags |= EV_TIMER

    def testTimerInvalidateInOtherHandler(self):
        """test timer invalidate in other event handler"""
        task = task_self()
        test_eh = self.__class__.TEventHandlerTimerOtherInvalidate(self)
        # init worker
        worker = task.shell("/bin/uname -r", handler=test_eh)
        worker = task.shell("/bin/sleep 2", nodes=HOSTNAME, handler=test_eh)
        # init timer
        timer = task.timer(1.0, interval=0.5, handler=test_eh)
        test_eh.timer = timer
        # run task
        task.resume()
        # test timer didn't fire, invalidated in a worker's event handler
        self.assertTrue(test_eh.flags & EV_READ)
        self.assertFalse(test_eh.flags & EV_TIMER)

    class TEventHandlerTimerInvalidateSameRunloop(EventHandler):
        """timer operations event handler simulator"""

        def __init__(self, test):
            self.timer1 = None
            self.timer2 = None
            self.count = 0

        def ev_timer(self, timer):
            self.count += 1
            # Invalidate both timers; the other one was expected to fire
            # during the same runloop, but now it should not.
            self.timer1.invalidate()
            self.timer2.invalidate()

    def testTimerInvalidateSameRunloop(self):
        """test timer invalidate by other timer in same runloop"""
        task = task_self()
        test_eh = self.__class__.TEventHandlerTimerInvalidateSameRunloop(self)
        timer1 = task.timer(0.5, interval=0.5, handler=test_eh)
        test_eh.timer1 = timer1
        timer2 = task.timer(0.5, interval=0.5, handler=test_eh)
        test_eh.timer2 = timer2
        task.resume()
        # check that only one timer fired
        self.assertEqual(test_eh.count, 1)

    class TEventHandlerTimerOtherSetNextFire(EventHandler):
        def __init__(self, test):
            self.test = test
            self.timer = None
            self.timer_count = 0
            self.flags = 0

        def ev_start(self, worker):
            self.flags |= EV_START

        def ev_read(self, worker, node, sname, msg):
            self.test.assertEqual(self.flags, EV_START)
            self.flags |= EV_READ

        def ev_written(self, worker):
            self.test.assertTrue(self.flags & EV_START)
            self.flags |= EV_WRITTEN

        def ev_hup(self, worker, node, rc):
            self.test.assertTrue(self.flags & EV_START)
            self.flags |= EV_HUP

        def ev_close(self, worker, timedout):
            self.test.assertTrue(self.flags & EV_START)
            if timedout:
                self.flags |= EV_TIMEOUT
            self.flags |= EV_CLOSE
            # set next fire delay, also disable previously setup interval
            # (timer will not repeat anymore)
            self.timer.set_nextfire(0.5)

        def ev_timer(self, timer):
            self.flags |= EV_TIMER
            self.timer_count += 1

    def testTimerSetNextFireInOtherHandler(self):
        """test timer set_nextfire in other event handler"""
        task = task_self()
        test_eh = self.__class__.TEventHandlerTimerOtherSetNextFire(self)
        # init worker
        worker = task.shell("/bin/sleep 1", handler=test_eh)
        # init timer
        timer = task.timer(10.0, interval=0.5, handler=test_eh)
        test_eh.timer = timer
        # run task
        task.resume()
        # test timer did fire one time
        self.assertEqual(test_eh.timer_count, 1)
    def testAutocloseTimer(self):
        """test timer autoclose (one autoclose timer)"""
        task = task_self()
        # Task should return immediately
        test_handler = self.__class__.TSimpleTimerChecker()
        timer_ac = task.timer(10.0, handler=test_handler, autoclose=True)
        # run task
        task.resume()
        self.assertEqual(test_handler.count, 0)

    def testAutocloseWithTwoTimers(self):
        """test timer autoclose (two timers)"""
        task = task_self()
        # build 2 timers, one of 10 secs with autoclose,
        # and one of 1 sec without autoclose.
        # Task should return after 1 sec.
        test_handler = self.__class__.TSimpleTimerChecker()
        timer_ac = task.timer(10.0, handler=test_handler, autoclose=True)
        timer_noac = task.timer(1.0, handler=test_handler, autoclose=False)
        # run task
        task.resume()
        self.assertEqual(test_handler.count, 1)

    class TForceDelayedRepeaterChecker(EventHandler):
        def __init__(self):
            self.count = 0

        def ev_timer(self, timer):
            self.count += 1
            if self.count == 1:
                # force delay timer (NOT a best practice!)
                sleep(2)
                # do not invalidate the first time
            else:
                # invalidate next time to stop the repeater
                timer.invalidate()

    def testForceDelayedRepeater(self):
        """test repeater being forcibly delayed"""
        task = task_self()
        test_handler = self.__class__.TForceDelayedRepeaterChecker()
        repeater1 = task.timer(0.5, interval=0.25, handler=test_handler)
        task.resume()
        self.assertEqual(test_handler.count, 2)

    class TForceDelayedRepeaterAutoCloseChecker(EventHandler):

        INTERVAL = 0.25

        def __init__(self):
            self.count = 0

        def ev_timer(self, timer):
            self.count += 1
            sleep(self.INTERVAL + 0.1)

    def testForceDelayedRepeaterAutoClose(self):
        """test repeater being forcibly delayed (w/ autoclose)"""
        # Test GitHub issue #254
        INTERVAL = 0.25
        task = task_self()
        teh = self.__class__.TForceDelayedRepeaterAutoCloseChecker()
        bootstrap = task.shell("sleep %f" % (2 * INTERVAL))
        repeater1 = task.timer(INTERVAL, teh, INTERVAL, autoclose=True)
        repeater2 = task.timer(INTERVAL, teh, INTERVAL, autoclose=True)
        task.resume()
        # Expected behavior: both timers fire after INTERVAL; the first one
        # blocks the thread for INTERVAL+0.1, then the second one blocks for
        # INTERVAL+0.1 more. At the next runloop, the engine sees that our
        # shell command has terminated and unregisters the associated worker
        # client. At that point, only non-autoclosing timers remain
        # registered, so timer firing is skipped and the runloop exits.
        #
        # Why two possible values below? This changed due to #399:
        # - count is 1 if one timer fired first (most likely, without the
        #   epsilon batching effect fixed in #399)
        # - count is 2 if both timers fired during the same runloop
        self.assertTrue(teh.count in (1, 2))

    def testMultipleAddSameTimerPrivate(self):
        """test multiple add() of same timer [private]"""
        task = task_self()
        test_handler = self.__class__.TSimpleTimerChecker()
        timer = EngineTimer(1.0, -1.0, False, test_handler)
        task._engine.add_timer(timer)
        self.assertRaises(EngineIllegalOperationError,
                          task._engine.add_timer, timer)
        task_terminate()

    def testRemoveTimerPrivate(self):
        """test engine.remove_timer() [private]"""
        # [private] because engine methods are currently private;
        # users should use timer.invalidate() instead
        task = task_self()
        test_handler = self.__class__.TSimpleTimerChecker()
        timer = EngineTimer(1.0, -1.0, False, test_handler)
        task._engine.add_timer(timer)
        task._engine.remove_timer(timer)
        task_terminate()

    def _thread_timer_create_func(self, task):
        """thread used to create a timer for another task; hey why not?"""
        timer = task.timer(0.5, self.__class__.TSimpleTimerChecker())

    def testTimerAddFromAnotherThread(self):
        """test timer creation from another thread"""
        task = task_self()
        threading.Thread(None, self._thread_timer_create_func,
                         args=(task,)).start()
        task.resume()
        task_wait()
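# Illustrative sketch (not part of the test suite): the repeater pattern
# exercised above, written as a standalone snippet. A timer handler can
# rearm itself with set_nextfire() and stop for good with invalidate().
# Standard ClusterShell API; the 3-tick limit is an arbitrary choice.
from ClusterShell.Event import EventHandler
from ClusterShell.Task import task_self

class Ticker(EventHandler):
    def __init__(self, max_ticks=3):
        self.ticks = 0
        self.max_ticks = max_ticks

    def ev_timer(self, timer):
        self.ticks += 1
        if self.ticks >= self.max_ticks:
            timer.invalidate()   # stop the repeater; the runloop may then exit

ticker = Ticker()
task = task_self()
task.timer(0.1, interval=0.1, handler=ticker)
task.resume()                    # returns once the timer has been invalidated
assert ticker.ticks == 3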
""" def __init__(self): """init Gateway bound objects""" self.task = Task() self.channel = GatewayChannel(self.task) self.worker = StreamWorker(handler=self.channel) # create communication pipes self.pipe_stdin = os.pipe() self.pipe_stdout = os.pipe() # avoid nonblocking flag as we want recv/read() to block self.worker.set_reader(self.channel.SNAME_READER, self.pipe_stdin[0]) self.worker.set_writer(self.channel.SNAME_WRITER, self.pipe_stdout[1], retain=False) self.task.schedule(self.worker) self.task.resume() def send(self, msg): """send msg (bytes) to pseudo stdin""" os.write(self.pipe_stdin[1], msg + b'\n') def send_str(self, msgstr): """send msg (string) to pseudo stdin""" self.send(msgstr.encode()) def recv(self): """recv buf from pseudo stdout (blocking call)""" return os.read(self.pipe_stdout[0], 4096) def wait(self): """wait for task/thread termination""" # can be blocked indefinitely if StreamWorker doesn't complete self.task.join() def close(self): """close parent fds""" os.close(self.pipe_stdout[0]) os.close(self.pipe_stdin[1]) def destroy(self): """abort task/thread""" self.task.abort(kill=True) class TreeGatewayBaseTest(unittest.TestCase): """base test class""" def setUp(self): """setup gateway and topology for each test""" # gateway self.gateway = Gateway() self.chan = self.gateway.channel # topology graph = TopologyGraph() graph.add_route(NodeSet(HOSTNAME), NodeSet('n[1-2]')) graph.add_route(NodeSet('n1'), NodeSet('n[10-49]')) graph.add_route(NodeSet('n2'), NodeSet('n[50-89]')) self.topology = graph.to_tree(HOSTNAME) # xml parser with Communication.XMLReader as content handler self.xml_reader = XMLReader() self.parser = xml.sax.make_parser(["IncrementalParser"]) self.parser.setContentHandler(self.xml_reader) def tearDown(self): """destroy gateway after each test""" self.gateway.destroy() self.gateway = None # # Send to GW # def channel_send_start(self): """send starting channel tag""" self.gateway.send_str('' % __version__) def channel_send_stop(self): """send channel ending tag""" self.gateway.send_str("") def channel_send_cfg(self, gateway): """send configuration part of channel""" # code snippet from PropagationChannel.start() cfg = ConfigurationMessage(gateway) cfg.data_encode(self.topology) self.gateway.send(cfg.xml()) # # Receive from GW # def assert_isinstance(self, msg, msg_class): """helper to check a message instance""" self.assertTrue(isinstance(msg, msg_class), "%s is not a %s" % (type(msg), msg_class)) def _recvxml(self): while not self.xml_reader.msg_available(): xml_msg = self.gateway.recv() if len(xml_msg) == 0: return None self.assertTrue(type(xml_msg) is bytes) self.parser.feed(xml_msg) return self.xml_reader.pop_msg() def recvxml(self, expected_msg_class=None): msg = self._recvxml() if expected_msg_class is None: self.assertEqual(msg, None) else: self.assert_isinstance(msg, expected_msg_class) return msg class TreeGatewayTest(TreeGatewayBaseTest): def test_basic_noop(self): """test gateway channel open/close""" self.channel_send_start() self.recvxml(StartMessage) self.assertEqual(self.chan.opened, True) self.assertEqual(self.chan.setup, False) self.channel_send_stop() self.recvxml(EndMessage) # ending tag should abort gateway worker without delay self.gateway.wait() self.gateway.close() def test_channel_err_dup(self): """test gateway channel duplicate tags""" self.channel_send_start() msg = self.recvxml(StartMessage) self.assertEqual(self.chan.opened, True) self.assertEqual(self.chan.setup, False) # send an unexpected second channel tag 
    def test_channel_err_dup(self):
        """test gateway channel duplicate tags"""
        self.channel_send_start()
        msg = self.recvxml(StartMessage)
        self.assertEqual(self.chan.opened, True)
        self.assertEqual(self.chan.setup, False)
        # send an unexpected second channel tag
        self.channel_send_start()
        msg = self.recvxml(ErrorMessage)
        self.assertEqual(msg.type, 'ERR')
        reason = 'unexpected message: Message CHA '
        self.assertEqual(msg.reason[:len(reason)], reason)
        # gateway should terminate channel session
        msg = self.recvxml(EndMessage)
        self.gateway.wait()
        self.gateway.close()

    def _check_channel_err(self, sendmsg, errback, openchan=True,
                           setupchan=False):
        """helper to ease test of erroneous messages sent to gateway"""
        if openchan:
            self.channel_send_start()
            msg = self.recvxml(StartMessage)
            self.assertEqual(self.chan.opened, True)
            self.assertEqual(self.chan.setup, False)
        if setupchan:
            # send channel configuration
            self.channel_send_cfg('n1')
            msg = self.recvxml(ACKMessage)
            self.assertEqual(self.chan.setup, True)
        # send the erroneous message and test gateway reply
        self.gateway.send_str(sendmsg)
        msg = self.recvxml(ErrorMessage)
        self.assertEqual(msg.type, 'ERR')
        try:
            if not errback.search(msg.reason):
                self.assertFalse(msg.reason)
        except AttributeError:
            # not a regex
            self.assertEqual(msg.reason, errback)
        # gateway should terminate channel session
        if openchan:
            msg = self.recvxml(EndMessage)
            self.assertEqual(msg.type, 'END')
        else:
            self.recvxml()
        # gateway task should exit properly
        self.gateway.wait()
        self.gateway.close()

    def test_err_start_with_ending_tag(self):
        """test gateway missing opening channel tag"""
        self._check_channel_err('',
                                'Parse error: not well-formed (invalid token)',
                                openchan=False)

    def test_err_channel_end_msg(self):
        """test gateway channel missing opening message tag"""
        self._check_channel_err('', 'Parse error: mismatched tag')

    def test_err_channel_end_msg_setup(self):
        """test gateway channel missing opening message tag (setup)"""
        self._check_channel_err('', 'Parse error: mismatched tag',
                                setupchan=True)

    def test_err_unknown_tag(self):
        """test gateway unknown tag"""
        self._check_channel_err('', 'Invalid starting tag foobar',
                                openchan=False)

    def test_channel_err_unknown_tag(self):
        """test gateway unknown tag in channel"""
        self._check_channel_err('', 'Invalid starting tag foo')

    def test_channel_err_unknown_tag_setup(self):
        """test gateway unknown tag in channel (setup)"""
        self._check_channel_err('', 'Invalid starting tag foo',
                                setupchan=True)

    def test_err_unknown_msg(self):
        """test gateway unknown message"""
        self._check_channel_err('', 'Unknown message type', openchan=False)

    def test_channel_err_unknown_msg(self):
        """test gateway channel unknown message"""
        self._check_channel_err('', 'Unknown message type')

    def test_err_xml_malformed(self):
        """test gateway malformed xml message"""
        self._check_channel_err('',
                                'Parse error: not well-formed (invalid token)',
                                openchan=False)

    def test_channel_err_xml_malformed(self):
        """test gateway channel malformed xml message"""
        self._check_channel_err('',
                                'Parse error: not well-formed (invalid token)')

    def test_channel_err_xml_malformed_setup(self):
        """test gateway channel malformed xml message"""
        self._check_channel_err('',
                                'Parse error: not well-formed (invalid token)',
                                setupchan=True)

    def test_channel_err_xml_bad_char(self):
        """test gateway channel malformed xml message (bad chars)"""
        self._check_channel_err('\x11',
                                'Parse error: not well-formed (invalid token)')

    def test_channel_err_missingattr(self):
        """test gateway channel message bad attributes"""
        self._check_channel_err(
            '',
            'Invalid "message" attributes: missing key "srcid"')

    def test_channel_err_unexpected(self):
        """test gateway channel unexpected message"""
        self._check_channel_err(
            '',
            re.compile(r'unexpected message: Message ACK \(.*ack: 2.*\)'))
    def test_channel_err_cfg_missing_gw(self):
        """test gateway channel message missing gateway nodename"""
        self._check_channel_err(
            'DUMMY',
            'Invalid "message" attributes: missing key "gateway"')

    def test_channel_err_missing_pl(self):
        """test gateway channel message missing payload"""
        self._check_channel_err(
            '', 'Message CFG has an invalid payload')

    def test_channel_err_unexpected_pl(self):
        """test gateway channel message unexpected payload"""
        self._check_channel_err(
            'FOO', 'Got unexpected payload for Message ERR',
            setupchan=True)

    def test_channel_err_badenc_b2a_pl(self):
        """test gateway channel message badly encoded payload (base64)"""
        # Generate TypeError (py2) or binascii.Error (py3)
        self._check_channel_err(
            'bar', 'Message CFG has an invalid payload')

    def test_channel_err_badenc_pickle_pl(self):
        """test gateway channel message badly encoded payload (pickle)"""
        # Generate pickle error
        self._check_channel_err(
            'barm', 'Message CFG has an invalid payload')

    def test_channel_basic_abort(self):
        """test gateway channel aborted while opened"""
        self.channel_send_start()
        self.recvxml(StartMessage)
        self.assertEqual(self.chan.opened, True)
        self.assertEqual(self.chan.setup, False)
        self.gateway.close()
        self.gateway.wait()

    def _check_channel_ctl_shell(self, command, target, stderr, remote,
                                 reply_msg_class, reply_pattern,
                                 write_buf=None, timeout=-1, replycnt=1,
                                 reply_rc=0):
        """helper to check channel shell action"""
        self.channel_send_start()
        msg = self.recvxml(StartMessage)
        self.channel_send_cfg('n1')
        msg = self.recvxml(ACKMessage)
        # prepare a remote shell command request...
        workertree = TreeWorker(nodes=target, handler=None, timeout=timeout,
                                command=command)
        # code snippet from PropagationChannel.shell()
        ctl = ControlMessage(id(workertree))
        ctl.action = 'shell'
        ctl.target = NodeSet(target)
        info = task_self()._info.copy()
        info['debug'] = False
        ctl_data = {
            'cmd': command,
            'invoke_gateway': workertree.invoke_gateway,
            'taskinfo': info,
            'stderr': stderr,
            'timeout': timeout,
            'remote': remote,
        }
        ctl.data_encode(ctl_data)
        self.gateway.send(ctl.xml())
        self.recvxml(ACKMessage)

        if write_buf:
            ctl = ControlMessage(id(workertree))
            ctl.action = 'write'
            ctl.target = NodeSet(target)
            ctl_data = {
                'buf': write_buf,
            }
            # Send write message
            ctl.data_encode(ctl_data)
            self.gateway.send(ctl.xml())
            self.recvxml(ACKMessage)
            # Send EOF message
            ctl = ControlMessage(id(workertree))
            ctl.action = 'eof'
            ctl.target = NodeSet(target)
            self.gateway.send(ctl.xml())
            self.recvxml(ACKMessage)

        while replycnt > 0:
            msg = self.recvxml(reply_msg_class)
            replycnt -= len(NodeSet(msg.nodes))
            self.assertTrue(msg.nodes in ctl.target)
            if msg.has_payload or reply_pattern:
                msg_data = msg.data_decode()
                try:
                    if not reply_pattern.search(msg_data):
                        self.assertEqual(msg.data, reply_pattern,
                                         'Pattern "%s" not found in data="%s"'
                                         % (reply_pattern.pattern, msg_data))
                except AttributeError:
                    # not a regexp
                    self.assertEqual(msg_data, reply_pattern)

        if timeout <= 0:
            msg = self.recvxml(RetcodeMessage)
            self.assertEqual(msg.retcode, reply_rc)

        self.channel_send_stop()
        self.gateway.wait()
        self.gateway.close()

    def test_channel_ctl_shell_local1(self):
        """test gateway channel shell stdout (stderr=False remote=False)"""
        self._check_channel_ctl_shell("echo ok", "n10", False, False,
                                      StdOutMessage, b"ok")

    def test_channel_ctl_shell_local2(self):
        """test gateway channel shell stdout (stderr=True remote=False)"""
        self._check_channel_ctl_shell("echo ok", "n10", True, False,
                                      StdOutMessage, b"ok")
remote=False)""" self._check_channel_ctl_shell("echo ok >&2", "n10", True, False, StdErrMessage, b"ok") def test_channel_ctl_shell_mlocal1(self): """test gateway channel shell multi (remote=False)""" self._check_channel_ctl_shell("echo ok", "n[10-49]", True, False, StdOutMessage, b"ok", replycnt=40) def test_channel_ctl_shell_mlocal2(self): """test gateway channel shell multi stderr (remote=False)""" self._check_channel_ctl_shell("echo ok 1>&2", "n[10-49]", True, False, StdErrMessage, b"ok", replycnt=40) def test_channel_ctl_shell_mlocal3(self): """test gateway channel shell multi placeholder (remote=False)""" self._check_channel_ctl_shell('echo node %h rank %n', "n[10-29]", True, False, StdOutMessage, re.compile(br"node n\d+ rank \d+"), replycnt=20) def test_channel_ctl_shell_remote1(self): """test gateway channel shell stdout (stderr=False remote=True)""" self._check_channel_ctl_shell("echo ok", "n10", False, True, StdOutMessage, re.compile(b"(Could not resolve hostname|" b"Name or service not known)"), reply_rc=255) def test_channel_ctl_shell_remote2(self): """test gateway channel shell stdout (stderr=True remote=True)""" self._check_channel_ctl_shell("echo ok", "n10", True, True, StdErrMessage, re.compile(b"(Could not resolve hostname|" b"Name or service not known)"), reply_rc=255) def test_channel_ctl_shell_timeo1(self): """test gateway channel shell timeout""" self._check_channel_ctl_shell("sleep 10", "n10", False, False, TimeoutMessage, None, timeout=0.5) def test_channel_ctl_shell_wrloc1(self): """test gateway channel write (stderr=False remote=False)""" self._check_channel_ctl_shell("cat", "n10", False, False, StdOutMessage, b"ok", write_buf=b"ok\n") def test_channel_ctl_shell_wrloc2(self): """test gateway channel write (stderr=True remote=False)""" self._check_channel_ctl_shell("cat", "n10", True, False, StdOutMessage, b"ok", write_buf=b"ok\n") def test_channel_ctl_shell_mwrloc1(self): """test gateway channel write multi (remote=False)""" self._check_channel_ctl_shell("cat", "n[10-49]", True, False, StdOutMessage, b"ok", write_buf=b"ok\n") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1694899565.0 ClusterShell-1.9.2/tests/TreeTaskTest.py0000644104717000001440000000417614501416555017702 0ustar00sthiellusers""" Unit test for ClusterShell.Task in tree mode """ import logging import os from textwrap import dedent import unittest from ClusterShell.Propagation import RouteResolvingError from ClusterShell.Task import task_self from ClusterShell.Topology import TopologyError from TLib import HOSTNAME, make_temp_file # live logging with nosetests --nologcapture logging.basicConfig(level=logging.DEBUG) class TreeTaskTest(unittest.TestCase): """Test cases for Tree-related Task methods""" def tearDown(self): """clear task topology""" task_self().topology = None def test_shell_auto_tree_dummy(self): """test task shell auto tree""" # initialize a dummy topology.conf file topofile = make_temp_file(dedent(""" [Main] %s: dummy-gw dummy-gw: dummy-node"""% HOSTNAME).encode()) task = task_self() task.set_default("auto_tree", True) task.TOPOLOGY_CONFIGS = [topofile.name] self.assertRaises(RouteResolvingError, task.run, "/bin/hostname", nodes="dummy-node", stderr=True) self.assertEqual(task.max_retcode(), None) def test_shell_auto_tree_noconf(self): """test task shell auto tree [no topology.conf]""" task = task_self() task.set_default("auto_tree", True) dummyfile = "/some/dummy/path/topo.conf" self.assertFalse(os.path.exists(dummyfile)) task.TOPOLOGY_CONFIGS = 
# ==== ClusterShell-1.9.2/tests/TreeTaskTest.py ====

"""
Unit test for ClusterShell.Task in tree mode
"""

import logging
import os
from textwrap import dedent
import unittest

from ClusterShell.Propagation import RouteResolvingError
from ClusterShell.Task import task_self
from ClusterShell.Topology import TopologyError

from TLib import HOSTNAME, make_temp_file

# live logging with nosetests --nologcapture
logging.basicConfig(level=logging.DEBUG)


class TreeTaskTest(unittest.TestCase):
    """Test cases for Tree-related Task methods"""

    def tearDown(self):
        """clear task topology"""
        task_self().topology = None

    def test_shell_auto_tree_dummy(self):
        """test task shell auto tree"""
        # initialize a dummy topology.conf file
        topofile = make_temp_file(dedent("""
            [Main]
            %s: dummy-gw
            dummy-gw: dummy-node""" % HOSTNAME).encode())
        task = task_self()
        task.set_default("auto_tree", True)
        task.TOPOLOGY_CONFIGS = [topofile.name]
        self.assertRaises(RouteResolvingError, task.run, "/bin/hostname",
                          nodes="dummy-node", stderr=True)
        self.assertEqual(task.max_retcode(), None)

    def test_shell_auto_tree_noconf(self):
        """test task shell auto tree [no topology.conf]"""
        task = task_self()
        task.set_default("auto_tree", True)
        dummyfile = "/some/dummy/path/topo.conf"
        self.assertFalse(os.path.exists(dummyfile))
        task.TOPOLOGY_CONFIGS = [dummyfile]
        # do not raise exception
        task.run("/bin/hostname", nodes="dummy-node")

    def test_shell_auto_tree_error(self):
        """test task shell auto tree [TopologyError]"""
        # initialize an erroneous topology.conf file
        topofile = make_temp_file(dedent("""
            [Main]
            %s: dummy-gw
            dummy-gw: dummy-gw""" % HOSTNAME).encode())
        task = task_self()
        task.set_default("auto_tree", True)
        task.TOPOLOGY_CONFIGS = [topofile.name]
        self.assertRaises(TopologyError, task.run, "/bin/hostname",
                          nodes="dummy-node")
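# Illustrative sketch (not part of the test suite): the TopologyGraph API
# that the tests below exercise. Routes go from a source nodeset to a
# destination nodeset, and to_tree() folds the graph into a propagation
# tree rooted at the given node. Node names here are placeholders.
from ClusterShell.NodeSet import NodeSet
from ClusterShell.Topology import TopologyGraph

g = TopologyGraph()
g.add_route(NodeSet('admin'), NodeSet('gw[0-1]'))
g.add_route(NodeSet('gw[0-1]'), NodeSet('node[0-99]'))
tree = g.to_tree('admin')     # root must be a node of the graph
print(str(tree))              # ASCII rendering, as testPrintingTree checks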
# ==== ClusterShell-1.9.2/tests/TreeTopologyTest.py ====
# ClusterShell.Topology test suite
# Written by H. Doreau

"""Unit test for Topology"""

import unittest
from tempfile import NamedTemporaryFile
from textwrap import dedent

# profiling imports
#import cProfile
#from guppy import hpy
# ---

from ClusterShell.Topology import *
from ClusterShell.NodeSet import NodeSet, set_std_group_resolver
from ClusterShell.NodeSet import set_std_group_resolver_config
from ClusterShell.NodeUtils import GroupResolverConfig

from TLib import make_temp_file


class TopologyTest(unittest.TestCase):

    def testInvalidConfigurationFile(self):
        """test detecting invalid configuration file"""
        parser = TopologyParser()
        self.assertRaises(TopologyError, parser.load,
                          '/invalid/path/for/testing')
        self.assertRaises(TopologyError, TopologyParser,
                          '/invalid/path/for/testing')

    def testTopologyGraphGeneration(self):
        """test graph generation"""
        g = TopologyGraph()
        ns1 = NodeSet('nodes[0-5]')
        ns2 = NodeSet('nodes[6-10]')
        g.add_route(ns1, ns2)
        self.assertEqual(g.dest(ns1), ns2)

    def testAddingSeveralRoutes(self):
        """test adding several valid routes"""
        g = TopologyGraph()
        admin = NodeSet('admin')
        ns0 = NodeSet('nodes[0-9]')
        ns1 = NodeSet('nodes[10-19]')
        g.add_route(admin, ns0)
        g.add_route(ns0, ns1)
        # Connect a new dst nodeset to an existing src
        ns2 = NodeSet('nodes[20-29]')
        g.add_route(ns0, ns2)
        # Add the same dst nodeset twice (no error)
        g.add_route(ns0, ns2)
        self.assertEqual(g.dest(admin), ns0)
        self.assertEqual(g.dest(ns0), ns1 | ns2)

    def testBadLink(self):
        """test detecting bad links in graph"""
        g = TopologyGraph()
        admin = NodeSet('admin')
        ns0 = NodeSet('nodes[0-9]')
        ns1 = NodeSet('nodes[10-19]')
        g.add_route(admin, ns0)
        g.add_route(ns0, ns1)
        # Add a known src nodeset as a dst nodeset (error!)
        self.assertRaises(TopologyError, g.add_route, ns1, ns0)

    def testOverlappingRoutes(self):
        """test overlapping routes detection"""
        g = TopologyGraph()
        admin = NodeSet('admin')
        # Add the same nodeset twice
        ns0 = NodeSet('nodes[0-9]')
        ns1 = NodeSet('nodes[10-19]')
        ns1_overlap = NodeSet('nodes[5-29]')
        self.assertRaises(TopologyError, g.add_route, ns0, ns0)
        g.add_route(ns0, ns1)
        self.assertRaises(TopologyError, g.add_route, ns0, ns1_overlap)

    def testBadTopologies(self):
        """test detecting invalid topologies"""
        g = TopologyGraph()
        admin = NodeSet('admin')
        # Add the same nodeset twice
        ns0 = NodeSet('nodes[0-9]')
        ns1 = NodeSet('nodes[10-19]')
        ns2 = NodeSet('nodes[20-29]')
        g.add_route(admin, ns0)
        g.add_route(ns0, ns1)
        g.add_route(ns0, ns2)
        # add a superset of a known destination as source
        ns2_sup = NodeSet('somenode[0-10]')
        ns2_sup.add(ns2)
        self.assertRaises(TopologyError, g.add_route, ns2_sup,
                          NodeSet('foo1'))
        # Add a known dst nodeset as a src nodeset
        ns3 = NodeSet('nodes[30-39]')
        g.add_route(ns1, ns3)
        # Add a subset of a known src nodeset as src
        ns0_sub = NodeSet(','.join(ns0[:3:]))
        ns4 = NodeSet('nodes[40-49]')
        g.add_route(ns0_sub, ns4)
        # Add a subset of a known dst nodeset as src
        ns1_sub = NodeSet(','.join(ns1[:3:]))
        self.assertRaises(TopologyError, g.add_route, ns4, ns1_sub)
        # Add a subset of a known src nodeset as dst
        self.assertRaises(TopologyError, g.add_route, ns4, ns0_sub)
        # Add a subset of a known dst nodeset as dst
        self.assertRaises(TopologyError, g.add_route, ns4, ns1_sub)
        # src <- subset of -> dst
        ns5 = NodeSet('nodes[50-59]')
        ns5_sub = NodeSet(','.join(ns5[:3:]))
        self.assertRaises(TopologyError, g.add_route, ns5, ns5_sub)
        self.assertRaises(TopologyError, g.add_route, ns5_sub, ns5)
        self.assertEqual(g.dest(ns0), (ns1 | ns2))
        self.assertEqual(g.dest(ns1), ns3)
        self.assertEqual(g.dest(ns2), None)
        self.assertEqual(g.dest(ns3), None)
        self.assertEqual(g.dest(ns4), None)
        self.assertEqual(g.dest(ns5), None)
        self.assertEqual(g.dest(ns0_sub), (ns1 | ns2 | ns4))

        g = TopologyGraph()
        root = NodeSet('root')
        ns01 = NodeSet('nodes[0-1]')
        ns23 = NodeSet('nodes[2-3]')
        ns45 = NodeSet('nodes[4-5]')
        ns67 = NodeSet('nodes[6-7]')
        ns89 = NodeSet('nodes[8-9]')
        g.add_route(root, ns01)
        g.add_route(root, ns23 | ns45)
        self.assertRaises(TopologyError, g.add_route, ns23, ns23)
        self.assertRaises(TopologyError, g.add_route, ns45, root)
        g.add_route(ns23, ns67)
        g.add_route(ns67, ns89)
        self.assertRaises(TopologyError, g.add_route, ns89, ns67)
        self.assertRaises(TopologyError, g.add_route, ns89, ns89)
        self.assertRaises(TopologyError, g.add_route, ns89, ns23)
        ns_all = NodeSet('root,nodes[0-9]')
        for nodegroup in g.to_tree('root'):
            ns_all.difference_update(nodegroup.nodeset)
        self.assertEqual(len(ns_all), 0)

    def testInvalidRootNode(self):
        """test invalid root node specification"""
        g = TopologyGraph()
        ns0 = NodeSet('node[0-9]')
        ns1 = NodeSet('node[10-19]')
        g.add_route(ns0, ns1)
        self.assertRaises(TopologyError, g.to_tree, 'admin1')
    def testMultipleAdminGroups(self):
        """test topology with several admin groups"""
        ## -------------------
        # TODO: uncommenting the following lines should not produce an
        # error, as this is a valid topology!
        # ----------
        with NamedTemporaryFile() as tmpfile:
            tmpfile.write(b'[routes]\n')
            tmpfile.write(b'admin0: nodes[0-1]\n')
            #tmpfile.write(b'admin1: nodes[0-1]\n')
            tmpfile.write(b'admin2: nodes[2-3]\n')
            #tmpfile.write(b'admin3: nodes[2-3]\n')
            tmpfile.write(b'nodes[0-1]: nodes[10-19]\n')
            tmpfile.write(b'nodes[2-3]: nodes[20-29]\n')
            tmpfile.flush()
            parser = TopologyParser(tmpfile.name)

            ns_all = NodeSet('admin2,nodes[2-3,20-29]')
            ns_tree = NodeSet()
            tree = parser.tree('admin2')
            self.assertEqual(tree.inner_node_count(), 3)
            self.assertEqual(tree.leaf_node_count(), 10)
            for nodegroup in tree:
                ns_tree.add(nodegroup.nodeset)
            self.assertEqual(str(ns_all), str(ns_tree))

    def testTopologyGraphBigGroups(self):
        """test adding huge nodegroups in routes"""
        g = TopologyGraph()
        ns0 = NodeSet('nodes[0-10000]')
        ns1 = NodeSet('nodes[12000-23000]')
        g.add_route(ns0, ns1)
        self.assertEqual(g.dest(ns0), ns1)
        ns2 = NodeSet('nodes[30000-35000]')
        ns3 = NodeSet('nodes[35001-45000]')
        g.add_route(ns2, ns3)
        self.assertEqual(g.dest(ns2), ns3)

    def testNodeString(self):
        """test loading a linear string topology"""
        with NamedTemporaryFile() as tmpfile:
            tmpfile.write(b'[routes]\n')
            # TODO: increase the size
            ns = NodeSet('node[0-10]')
            prev = 'admin'
            for n in ns:
                line = '%s: %s\n' % (prev, str(n))
                tmpfile.write(line.encode())
                prev = n
            tmpfile.flush()
            parser = TopologyParser(tmpfile.name)
            tree = parser.tree('admin')
            ns.add('admin')
            ns_tree = NodeSet()
            for nodegroup in tree:
                ns_tree.add(nodegroup.nodeset)
            self.assertEqual(ns, ns_tree)

    def testConfigurationParser(self):
        """test configuration parsing"""
        with NamedTemporaryFile() as tmpfile:
            tmpfile.write(b'# this is a comment\n')
            tmpfile.write(b'[routes]\n')
            tmpfile.write(b'admin: nodes[0-1]\n')
            tmpfile.write(b'nodes[0-1]: nodes[2-5]\n')
            tmpfile.write(b'nodes[4-5]: nodes[6-9]\n')
            tmpfile.flush()
            parser = TopologyParser(tmpfile.name)
            parser.tree('admin')
            ns_all = NodeSet('admin,nodes[0-9]')
            ns_tree = NodeSet()
            for nodegroup in parser.tree('admin'):
                ns_tree.add(nodegroup.nodeset)
            self.assertEqual(str(ns_all), str(ns_tree))

    def testConfigurationParserCompatMain(self):
        """test configuration parsing (Main section compat)"""
        with NamedTemporaryFile() as tmpfile:
            tmpfile.write(b'# this is a comment\n')
            tmpfile.write(b'[Main]\n')
            tmpfile.write(b'admin: nodes[0-1]\n')
            tmpfile.write(b'nodes[0-1]: nodes[2-5]\n')
            tmpfile.write(b'nodes[4-5]: nodes[6-9]\n')
            tmpfile.flush()
            parser = TopologyParser(tmpfile.name)
            parser.tree('admin')
            ns_all = NodeSet('admin,nodes[0-9]')
            ns_tree = NodeSet()
            for nodegroup in parser.tree('admin'):
                ns_tree.add(nodegroup.nodeset)
            self.assertEqual(str(ns_all), str(ns_tree))

    def testConfigurationShortSyntax(self):
        """test short topology specification syntax"""
        with NamedTemporaryFile() as tmpfile:
            tmpfile.write(b'# this is a comment\n')
            tmpfile.write(b'[routes]\n')
            tmpfile.write(b'admin: nodes[0-9]\n')
            tmpfile.write(b'nodes[0-3,5]: nodes[10-19]\n')
            tmpfile.write(b'nodes[4,6-9]: nodes[30-39]\n')
            tmpfile.flush()
            parser = TopologyParser()
            parser.load(tmpfile.name)
            ns_all = NodeSet('admin,nodes[0-19,30-39]')
            ns_tree = NodeSet()
            for nodegroup in parser.tree('admin'):
                ns_tree.add(nodegroup.nodeset)
            self.assertEqual(str(ns_all), str(ns_tree))
    def testConfigurationLongSyntax(self):
        """test detailed topology description syntax"""
        with NamedTemporaryFile() as tmpfile:
            tmpfile.write(b'# this is a comment\n')
            tmpfile.write(b'[routes]\n')
            tmpfile.write(b'admin: proxy\n')
            tmpfile.write(b'proxy: STA[0-1]\n')
            tmpfile.write(b'STA0: STB[0-1]\n')
            tmpfile.write(b'STB0: nodes[0-2]\n')
            tmpfile.write(b'STB1: nodes[3-5]\n')
            tmpfile.write(b'STA1: STB[2-3]\n')
            tmpfile.write(b'STB2: nodes[6-7]\n')
            tmpfile.write(b'STB3: nodes[8-10]\n')
            tmpfile.flush()
            parser = TopologyParser()
            parser.load(tmpfile.name)
            ns_all = NodeSet('admin,proxy,STA[0-1],STB[0-3],nodes[0-10]')
            ns_tree = NodeSet()
            tree = parser.tree('admin')
            self.assertEqual(tree.inner_node_count(), 8)
            self.assertEqual(tree.leaf_node_count(), 11)
            for nodegroup in tree:
                ns_tree.add(nodegroup.nodeset)
            self.assertEqual(str(ns_all), str(ns_tree))

    def testConfigurationParserDeepTree(self):
        """test a configuration that generates a deep tree"""
        with NamedTemporaryFile() as tmpfile:
            tmpfile.write(b'# this is a comment\n')
            tmpfile.write(b'[routes]\n')
            tmpfile.write(b'admin: nodes[0-9]\n')
            levels = 15  # how deep do you want the tree to be?
            for i in range(0, levels * 10, 10):
                line = 'nodes[%d-%d]: nodes[%d-%d]\n' % (i, i + 9,
                                                         i + 10, i + 19)
                tmpfile.write(line.encode())
            tmpfile.flush()
            parser = TopologyParser()
            parser.load(tmpfile.name)
            ns_all = NodeSet('admin,nodes[0-159]')
            ns_tree = NodeSet()
            tree = parser.tree('admin')
            self.assertEqual(tree.inner_node_count(), 151)
            self.assertEqual(tree.leaf_node_count(), 10)
            for nodegroup in tree:
                ns_tree.add(nodegroup.nodeset)
            self.assertEqual(str(ns_all), str(ns_tree))

    def testConfigurationParserBigTree(self):
        """test configuration parser against big propagation tree"""
        with NamedTemporaryFile() as tmpfile:
            tmpfile.write(b'# this is a comment\n')
            tmpfile.write(b'[routes]\n')
            tmpfile.write(b'admin: ST[0-4]\n')
            tmpfile.write(b'ST[0-4]: STA[0-49]\n')
            tmpfile.write(b'STA[0-49]: nodes[0-10000]\n')
            tmpfile.flush()
            parser = TopologyParser()
            parser.load(tmpfile.name)
            ns_all = NodeSet('admin,ST[0-4],STA[0-49],nodes[0-10000]')
            ns_tree = NodeSet()
            tree = parser.tree('admin')
            self.assertEqual(tree.inner_node_count(), 56)
            self.assertEqual(tree.leaf_node_count(), 10001)
            for nodegroup in tree:
                ns_tree.add(nodegroup.nodeset)
            self.assertEqual(str(ns_all), str(ns_tree))

    def testConfigurationParserConvergentPaths(self):
        """convergent paths detection"""
        with NamedTemporaryFile() as tmpfile:
            tmpfile.write(b'# this is a comment\n')
            tmpfile.write(b'[routes]\n')
            tmpfile.write(b'fortoy32: fortoy[33-34]\n')
            tmpfile.write(b'fortoy33: fortoy35\n')
            tmpfile.write(b'fortoy34: fortoy36\n')
            tmpfile.write(b'fortoy[35-36]: fortoy37\n')
            tmpfile.flush()
            parser = TopologyParser()
            self.assertRaises(TopologyError, parser.load, tmpfile.name)

    def testPrintingTree(self):
        """test printing tree"""
        with NamedTemporaryFile() as tmpfile:
            tmpfile.write(b'[routes]\n')
            tmpfile.write(b'n0: n[1-2]\n')
            tmpfile.write(b'n1: n[10-49]\n')
            tmpfile.write(b'n2: n[50-89]\n')
            tmpfile.flush()
            parser = TopologyParser()
            parser.load(tmpfile.name)
            tree = parser.tree('n0')
            # In fact it looks like this:
            # ---------------------------
            # n0
            # |_ n1
            # |  |_ n[10-49]
            # |_ n2
            #    |_ n[50-89]
            # ---------------------------
            display_ref1 = 'n0\n|- n1\n| `- n[10-49]\n`- n2\n `- n[50-89]\n'
            display_ref2 = 'n0\n|- n2\n| `- n[50-89]\n`- n1\n `- n[10-49]\n'
            display = str(tree)
            self.assertTrue(display == display_ref1 or
                            display == display_ref2)
            self.assertEqual(str(TopologyTree()), '')

    def testAddingInvalidChildren(self):
        """test detecting invalid children"""
        t0 = TopologyNodeGroup(NodeSet('node[0-9]'))
        self.assertRaises(AssertionError, t0.add_child, 'foobar')
        t1 = TopologyNodeGroup(NodeSet('node[10-19]'))
        t0.add_child(t1)
        self.assertEqual(t0.children_ns(), t1.nodeset)
        t0.add_child(t1)
        self.assertEqual(t0.children_ns(), t1.nodeset)
operation""" t0 = TopologyNodeGroup(NodeSet('node[0-9]')) t1 = TopologyNodeGroup(NodeSet('node[10-19]')) t0.add_child(t1) self.assertEqual(t0.children_ns(), t1.nodeset) t0.clear_child(t1) self.assertEqual(t0.children_ns(), None) t0.clear_child(t1) # error discarded self.assertRaises(ValueError, t0.clear_child, t1, strict=True) t2 = TopologyNodeGroup(NodeSet('node[20-29]')) t0.add_child(t1) t0.add_child(t2) self.assertEqual(t0.children_ns(), t1.nodeset | t2.nodeset) t0.clear_children() self.assertEqual(t0.children_ns(), None) self.assertEqual(t0.children_len(), 0) def testStrConversions(self): """test str() casts""" t = TopologyNodeGroup(NodeSet('admin0')) self.assertEqual(str(t), '') t = TopologyRoutingTable() r0 = TopologyRoute(NodeSet('src[0-9]'), NodeSet('dst[5-8]')) r1 = TopologyRoute(NodeSet('src[10-19]'), NodeSet('dst[15-18]')) self.assertEqual(str(r0), 'src[0-9] -> dst[5-8]') t.add_route(r0) t.add_route(r1) self.assertEqual(str(t), 'src[0-9] -> dst[5-8]\nsrc[10-19] -> dst[15-18]') g = TopologyGraph() # XXX: Actually if g is not empty other things will be printed out... self.assertEqual(str(g), '\n') class TopologyWithGroupsTest(unittest.TestCase): def setUp(self): """set default group resolver""" self.grpf = make_temp_file(dedent(""" [Main] default: test [test] map: echo Controller-vm[1,30] list: echo gw """).encode()) set_std_group_resolver_config(self.grpf.name) def tearDown(self): """restore default group resolver""" set_std_group_resolver(None) # restore std resolver self.grpf.close() def testWildcardsValid(self): """test topology with node groups and wildcards (valid)""" # make sure groups are set up self.assertEqual(str(NodeSet("@gw")), "Controller-vm[1,30]") # 1. Test valid NodeSet with wildcards in topology.conf with NamedTemporaryFile() as tmpfile: tmpfile.write(b'[routes]\n') # (ab)use of valid wildcards tmpfile.write(b'Controller-?m1:Controller-vm2,?ontroller-vm30\n') tmpfile.write(b'Controller-vm3?:Computer101\n') tmpfile.flush() parser = TopologyParser() parser.load(tmpfile.name) tree = parser.tree('Controller-vm1') # Controller-vm1 # |- Controller-vm2 # `- Controller-vm30 # `- Computer101 display_ref1 = 'Controller-vm1\n|- Controller-vm2\n' \ '`- Controller-vm30\n `- Computer101\n' self.assertEqual(str(tree), display_ref1) # 2. 
        # 2. Test valid node groups in topology.conf
        with NamedTemporaryFile() as tmpfile:
            tmpfile.write(b'[routes]\n')
            # use valid wildcards
            tmpfile.write(b'@gw:Controller-vm2,Computer101\n')
            tmpfile.flush()
            parser = TopologyParser()
            parser.load(tmpfile.name)
            tree = parser.tree('Controller-vm1')
            # Controller-vm[1,30]
            # `- Computer101,Controller-vm2
            display_ref1 = 'Controller-vm[1,30]\n`' \
                           '- Computer101,Controller-vm2\n'
            self.assertEqual(str(tree), display_ref1)

    def testWildcardsUnresolvedRouters(self):
        """test topology with node groups and wildcards (unresolved routers)"""
        with NamedTemporaryFile() as tmpfile:
            tmpfile.write(b'[routes]\n')
            tmpfile.write(b'Controller-vm1:Controller-vm2,Controller-vm30\n')
            tmpfile.write(b'Controller-vm2*:Computer101\n')
            tmpfile.flush()
            parser = TopologyParser()
            parser.load(tmpfile.name)
            tree = parser.tree('Controller-vm1')
            # Controller-vm1
            # `- Controller-vm[2,30]
            display_ref1 = 'Controller-vm1\n`' \
                           '- Controller-vm[2,30]\n'
            self.assertEqual(str(tree), display_ref1)

    def testWildcardsUnresolvedDestination(self):
        """test topology with node groups and wildcards (unresolved dest)"""
        with NamedTemporaryFile() as tmpfile:
            tmpfile.write(b'[routes]\n')
            tmpfile.write(b'Controller-vm1:Un*resolved3\n')
            tmpfile.write(b'Controller-vm30:Computer101\n')
            tmpfile.flush()
            parser = TopologyParser()
            parser.load(tmpfile.name)
            tree = parser.tree('Controller-vm30')
            # Controller-vm30
            # `- Computer101
            display_ref1 = 'Controller-vm30\n`' \
                           '- Computer101\n'
            self.assertEqual(str(tree), display_ref1)

    def testWildcardsOverlap(self):
        """test topology with node groups and wildcards (overlap)"""
        with NamedTemporaryFile() as tmpfile:
            tmpfile.write(b'[routes]\n')
            tmpfile.write(b'Controller-vm1:@gwController-vm2,Controller-vm30\n')
            tmpfile.write(b'Controller-vm30:Computer101\n')
            tmpfile.flush()
            parser = TopologyParser()
            self.assertRaises(TopologyError, parser.load, tmpfile.name)
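# Illustrative sketch (not part of the test suite): running a command
# through a gateway by assigning a propagation tree to the current task,
# the same setup TreeWorkerTest.setUp() performs below. 'gw1' and
# 'node[1-4]' are placeholder names; real ssh connectivity is required.
import socket

from ClusterShell.NodeSet import NodeSet
from ClusterShell.Task import task_self
from ClusterShell.Topology import TopologyGraph

head = socket.gethostname().split('.')[0]
graph = TopologyGraph()
graph.add_route(NodeSet(head), NodeSet('gw1'))
graph.add_route(NodeSet('gw1'), NodeSet('node[1-4]'))

task = task_self()
task.topology = graph.to_tree(head)
task.run('uname -r', nodes='node[1-4]')    # routed through gw1
for buf, nodes in task.iter_buffers():
    print(NodeSet.fromlist(nodes), buf.decode())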
# ==== ClusterShell-1.9.2/tests/TreeWorkerTest.py ====

"""
Unit test for ClusterShell.Worker.TreeWorker

This unit test requires working ssh connections to the following
local addresses: $HOSTNAME, localhost, 127.0.0.[2-4]

You can use the following options in ~/.ssh/config:

    Host your_hostname localhost 127.0.0.*
        StrictHostKeyChecking no
        LogLevel ERROR
"""

import os
from os.path import basename, join
import unittest
import warnings

from ClusterShell.NodeSet import NodeSet
from ClusterShell.Task import task_self, task_terminate, task_wait
from ClusterShell.Task import Task, task_cleanup
from ClusterShell.Topology import TopologyGraph
from ClusterShell.Worker.Tree import TreeWorker, WorkerTree

from TLib import HOSTNAME, make_temp_dir, make_temp_file, make_temp_filename

NODE_HEAD = HOSTNAME
NODE_GATEWAY = 'localhost'
NODE_DISTANT = '127.0.0.2'
NODE_DIRECT = '127.0.0.3'
NODE_FOREIGN = '127.0.0.4'


class TEventHandlerBase(object):
    """Base Test class for EventHandler"""

    def __init__(self):
        self.ev_start_cnt = 0
        self.ev_pickup_cnt = 0
        self.ev_read_cnt = 0
        self.ev_written_cnt = 0
        self.ev_written_sz = 0
        self.ev_hup_cnt = 0
        self.ev_close_cnt = 0
        self.ev_timedout_cnt = 0
        self.last_read = None


class TEventHandlerLegacy(TEventHandlerBase):
    """Test Legacy Event Handler (< 1.8)"""

    def ev_start(self, worker):
        self.ev_start_cnt += 1

    def ev_pickup(self, worker):
        self.ev_pickup_cnt += 1

    def ev_read(self, worker):
        self.ev_read_cnt += 1
        self.last_read = worker.current_msg

    def ev_written(self, worker, node, sname, size):
        self.ev_written_cnt += 1
        self.ev_written_sz += size

    def ev_hup(self, worker):
        self.ev_hup_cnt += 1

    def ev_timeout(self, worker):
        self.ev_timedout_cnt += 1

    def ev_close(self, worker):
        self.ev_close_cnt += 1


class TEventHandler(TEventHandlerBase):
    """Test Event Handler (1.8+)"""

    def ev_start(self, worker):
        self.ev_start_cnt += 1

    def ev_pickup(self, worker, node):
        self.ev_pickup_cnt += 1

    def ev_read(self, worker, node, sname, msg):
        self.ev_read_cnt += 1
        self.last_read = msg

    def ev_written(self, worker, node, sname, size):
        self.ev_written_cnt += 1
        self.ev_written_sz += size

    def ev_hup(self, worker, node, rc):
        self.ev_hup_cnt += 1

    def ev_close(self, worker, timedout):
        self.ev_close_cnt += 1
        if timedout:
            self.ev_timedout_cnt += 1


@unittest.skipIf(HOSTNAME == 'localhost',
                 "does not work with hostname set to 'localhost'")
class TreeWorkerTest(unittest.TestCase):
    """
    TreeWorkerTest: test TreeWorker

    NODE_HEAD -> NODE_GATEWAY -> NODE_DISTANT
              -> NODE_DIRECT   [defined in topology]
              -> NODE_FOREIGN  [not defined in topology]

    Connections are really established to the target and command results
    are tested.
    """

    def setUp(self):
        """setup test environment topology"""
        task_terminate()  # ideally shouldn't be needed...
        self.task = task_self()
        # set task topology
        graph = TopologyGraph()
        graph.add_route(NodeSet(HOSTNAME), NodeSet(NODE_GATEWAY))
        graph.add_route(NodeSet(NODE_GATEWAY), NodeSet(NODE_DISTANT))
        graph.add_route(NodeSet(HOSTNAME), NodeSet(NODE_DIRECT))
        # NODE_FOREIGN is not included
        self.task.topology = graph.to_tree(HOSTNAME)

    def tearDown(self):
        """clean up test environment"""
        task_terminate()
        self.task = None

    def test_tree_run_event_legacy(self):
        """test simple tree run with legacy EventHandler"""
        teh = TEventHandlerLegacy()
        with warnings.catch_warnings(record=True) as w:
            warnings.simplefilter("always")
            self.task.run('echo Lorem Ipsum', nodes=NODE_DISTANT, handler=teh)
            self.assertEqual(len(w), 4)
        self.assertEqual(teh.ev_start_cnt, 1)
        self.assertEqual(teh.ev_pickup_cnt, 1)
        self.assertEqual(teh.ev_read_cnt, 1)
        self.assertEqual(teh.ev_written_cnt, 0)
        self.assertEqual(teh.ev_hup_cnt, 1)
        self.assertEqual(teh.ev_timedout_cnt, 0)
        self.assertEqual(teh.ev_close_cnt, 1)
        self.assertEqual(teh.last_read, b'Lorem Ipsum')

    def test_tree_run_event_legacy_timeout(self):
        """test simple tree run with legacy EventHandler with timeout"""
        teh = TEventHandlerLegacy()
        with warnings.catch_warnings(record=True) as w:
            warnings.simplefilter("always")
            self.task.run('sleep 10', nodes=NODE_DISTANT, handler=teh,
                          timeout=0.5)
            self.assertEqual(len(w), 2)
        self.assertEqual(teh.ev_start_cnt, 1)
        self.assertEqual(teh.ev_pickup_cnt, 1)
        self.assertEqual(teh.ev_read_cnt, 0)      # nothing to read
        self.assertEqual(teh.ev_written_cnt, 0)
        self.assertEqual(teh.ev_hup_cnt, 0)       # no hup event if timed out
        self.assertEqual(teh.ev_timedout_cnt, 1)  # command timed out
        self.assertEqual(teh.ev_close_cnt, 1)

    def test_tree_run_event(self):
        """test simple tree run with EventHandler (1.8+)"""
        teh = TEventHandler()
        self.task.run('echo Lorem Ipsum', nodes=NODE_DISTANT, handler=teh)
        self.assertEqual(teh.ev_start_cnt, 1)
        self.assertEqual(teh.ev_pickup_cnt, 1)
        self.assertEqual(teh.ev_read_cnt, 1)
        self.assertEqual(teh.ev_written_cnt, 0)
        self.assertEqual(teh.ev_hup_cnt, 1)
        self.assertEqual(teh.ev_timedout_cnt, 0)
        self.assertEqual(teh.ev_close_cnt, 1)
        self.assertEqual(teh.last_read, b'Lorem Ipsum')
    def test_tree_run_event_timeout(self):
        """test simple tree run with EventHandler (1.8+) with timeout"""
        teh = TEventHandler()
        self.task.run('sleep 10', nodes=NODE_DISTANT, handler=teh,
                      timeout=0.5)
        self.assertEqual(teh.ev_start_cnt, 1)
        self.assertEqual(teh.ev_pickup_cnt, 1)
        self.assertEqual(teh.ev_read_cnt, 0)      # nothing to read
        self.assertEqual(teh.ev_written_cnt, 0)
        self.assertEqual(teh.ev_hup_cnt, 0)       # no hup event if timed out
        self.assertEqual(teh.ev_timedout_cnt, 1)  # command timed out
        self.assertEqual(teh.ev_close_cnt, 1)

    def test_tree_run_noremote(self):
        """test tree run with remote=False"""
        teh = TEventHandler()
        self.task.run('echo %h', nodes=NODE_DISTANT, handler=teh,
                      remote=False)
        self.assertEqual(teh.ev_start_cnt, 1)
        self.assertEqual(teh.ev_pickup_cnt, 1)
        self.assertEqual(teh.ev_read_cnt, 1)
        self.assertEqual(teh.ev_written_cnt, 0)
        self.assertEqual(teh.ev_hup_cnt, 1)
        self.assertEqual(teh.ev_timedout_cnt, 0)
        self.assertEqual(teh.ev_close_cnt, 1)
        self.assertEqual(teh.last_read, NODE_DISTANT.encode('ascii'))

    def test_tree_run_noremote_alt_localworker(self):
        """test tree run with remote=False and a non-exec localworker"""
        teh = TEventHandler()
        self.task.set_info('tree_default:local_workername', 'ssh')
        self.task.run('echo %h', nodes=NODE_DISTANT, handler=teh,
                      remote=False)
        self.assertEqual(teh.ev_start_cnt, 1)
        self.assertEqual(teh.ev_pickup_cnt, 1)
        self.assertEqual(teh.ev_read_cnt, 1)
        self.assertEqual(teh.ev_written_cnt, 0)
        self.assertEqual(teh.ev_hup_cnt, 1)
        self.assertEqual(teh.ev_timedout_cnt, 0)
        self.assertEqual(teh.ev_close_cnt, 1)
        # The exec worker would expand %h to the host, but ssh will just
        # echo '%h'
        self.assertEqual(teh.last_read, '%h'.encode('ascii'))
        del self.task._info['tree_default:local_workername']

    def test_tree_run_direct(self):
        """test tree run with direct target, in topology"""
        teh = TEventHandler()
        self.task.run('echo Lorem Ipsum', nodes=NODE_DIRECT, handler=teh)
        self.assertEqual(teh.ev_start_cnt, 1)
        self.assertEqual(teh.ev_pickup_cnt, 1)
        self.assertEqual(teh.ev_read_cnt, 1)
        self.assertEqual(teh.ev_written_cnt, 0)
        self.assertEqual(teh.ev_hup_cnt, 1)
        self.assertEqual(teh.ev_timedout_cnt, 0)
        self.assertEqual(teh.ev_close_cnt, 1)
        self.assertEqual(teh.last_read, b'Lorem Ipsum')

    def test_tree_run_foreign(self):
        """test tree run with direct target, not in topology"""
        teh = TEventHandler()
        self.task.run('echo Lorem Ipsum', nodes=NODE_FOREIGN, handler=teh)
        self.assertEqual(teh.ev_start_cnt, 1)
        self.assertEqual(teh.ev_pickup_cnt, 1)
        self.assertEqual(teh.ev_read_cnt, 1)
        self.assertEqual(teh.ev_written_cnt, 0)
        self.assertEqual(teh.ev_hup_cnt, 1)
        self.assertEqual(teh.ev_timedout_cnt, 0)
        self.assertEqual(teh.ev_close_cnt, 1)
        self.assertEqual(teh.last_read, b'Lorem Ipsum')

    def _tree_run_write(self, target, separate_thread=False):
        if separate_thread:
            task = Task()
        else:
            task = self.task
        teh = TEventHandler()
        worker = task.shell('cat', nodes=target, handler=teh)
        worker.write(b'Lorem Ipsum')
        worker.set_write_eof()
        task.run()
        if separate_thread:
            task_wait()
            task_cleanup()
        self.assertEqual(teh.ev_start_cnt, 1)
        self.assertEqual(teh.ev_pickup_cnt, 1)
        self.assertEqual(teh.ev_read_cnt, 1)
        self.assertEqual(teh.ev_written_cnt, 1)
        self.assertEqual(teh.ev_written_sz, len('Lorem Ipsum'))
        self.assertEqual(teh.ev_hup_cnt, 1)
        self.assertEqual(teh.ev_timedout_cnt, 0)
        self.assertEqual(teh.ev_close_cnt, 1)
        self.assertEqual(teh.last_read, b'Lorem Ipsum')

    def test_tree_run_write_distant(self):
        """test tree run with write(), distant target"""
        self._tree_run_write(NODE_DISTANT)

    def test_tree_run_write_direct(self):
        """test tree run with write(), direct target, in topology"""
        self._tree_run_write(NODE_DIRECT)
"""test tree run with write(), direct target, not in topology""" self._tree_run_write(NODE_FOREIGN) def test_tree_run_write_gateway(self): """test tree run with write(), gateway is target, not in topology""" self._tree_run_write(NODE_GATEWAY) def test_tree_run_write_distant_mt(self): """test tree run with write(), distant target, separate thread""" self._tree_run_write(NODE_DISTANT, separate_thread=True) def test_tree_run_write_direct_mt(self): """test tree run with write(), direct target, in topology, separate thread""" self._tree_run_write(NODE_DIRECT, separate_thread=True) def test_tree_run_write_foreign_mt(self): """test tree run with write(), direct target, not in topology, separate thread""" self._tree_run_write(NODE_FOREIGN, separate_thread=True) def test_tree_run_write_gateway_mt(self): """test tree run with write(), gateway is target, not in topology, separate thread""" self._tree_run_write(NODE_GATEWAY, separate_thread=True) def _tree_copy_file(self, target): teh = TEventHandler() srcf = make_temp_file(b'Lorem Ipsum', 'test_tree_copy_file_src') dest = make_temp_filename('test_tree_copy_file_dest') try: worker = self.task.copy(srcf.name, dest, nodes=target, handler=teh) self.task.run() self.assertEqual(teh.ev_start_cnt, 1) self.assertEqual(teh.ev_pickup_cnt, 1) self.assertEqual(teh.ev_read_cnt, 0) #self.assertEqual(teh.ev_written_cnt, 0) # FIXME self.assertEqual(teh.ev_hup_cnt, 1) self.assertEqual(teh.ev_timedout_cnt, 0) self.assertEqual(teh.ev_close_cnt, 1) with open(dest, 'r') as destf: self.assertEqual(destf.read(), 'Lorem Ipsum') finally: os.remove(dest) def test_tree_copy_file_distant(self): """test tree copy: file, distant target""" self._tree_copy_file(NODE_DISTANT) def test_tree_copy_file_direct(self): """test tree copy: file, direct target, in topology""" self._tree_copy_file(NODE_DIRECT) def test_tree_copy_file_foreign(self): """test tree copy: file, direct target, not in topology""" self._tree_copy_file(NODE_FOREIGN) def test_tree_copy_file_gateway(self): """test tree copy: file, gateway is target""" self._tree_copy_file(NODE_GATEWAY) def _tree_copy_dir(self, target): teh = TEventHandler() srcdir = make_temp_dir() destdir = make_temp_dir() file1 = make_temp_file(b'Lorem Ipsum Unum', suffix=".txt", dir=srcdir.name) file2 = make_temp_file(b'Lorem Ipsum Duo', suffix=".txt", dir=srcdir.name) try: # add '/' to dest so that distant does like the others worker = self.task.copy(srcdir.name, destdir.name + '/', nodes=target, handler=teh) self.task.run() self.assertEqual(teh.ev_start_cnt, 1) self.assertEqual(teh.ev_pickup_cnt, 1) self.assertEqual(teh.ev_read_cnt, 0) #self.assertEqual(teh.ev_written_cnt, 0) # FIXME self.assertEqual(teh.ev_hup_cnt, 1) self.assertEqual(teh.ev_timedout_cnt, 0) self.assertEqual(teh.ev_close_cnt, 1) # copy successful? 
            # copy successful?
            copy_dest = join(destdir.name, basename(srcdir.name))
            with open(join(copy_dest, basename(file1.name)), 'rb') as rfile1:
                self.assertEqual(rfile1.read(), b'Lorem Ipsum Unum')
            with open(join(copy_dest, basename(file2.name)), 'rb') as rfile2:
                self.assertEqual(rfile2.read(), b'Lorem Ipsum Duo')
        finally:
            file1.close()
            file2.close()
            srcdir.cleanup()
            destdir.cleanup()

    def test_tree_copy_dir_distant(self):
        """test tree copy: directory, distant target"""
        self._tree_copy_dir(NODE_DISTANT)

    def test_tree_copy_dir_direct(self):
        """test tree copy: directory, direct target, in topology"""
        self._tree_copy_dir(NODE_DIRECT)

    def test_tree_copy_dir_foreign(self):
        """test tree copy: directory, direct target, not in topology"""
        self._tree_copy_dir(NODE_FOREIGN)

    def test_tree_copy_dir_gateway(self):
        """test tree copy: directory, gateway is target"""
        self._tree_copy_dir(NODE_GATEWAY)

    def _tree_rcopy_dir(self, target, dirsuffix=None):
        teh = TEventHandler()
        srcdir = make_temp_dir()
        destdir = make_temp_dir()
        file1 = make_temp_file(b'Lorem Ipsum Unum', suffix=".txt",
                               dir=srcdir.name)
        file2 = make_temp_file(b'Lorem Ipsum Duo', suffix=".txt",
                               dir=srcdir.name)
        try:
            worker = self.task.rcopy(srcdir.name, destdir.name, nodes=target,
                                     handler=teh)
            self.task.run()
            self.assertEqual(teh.ev_start_cnt, 1)
            self.assertEqual(teh.ev_pickup_cnt, 1)
            self.assertEqual(teh.ev_read_cnt, 0)
            #self.assertEqual(teh.ev_written_cnt, 0)  # FIXME
            self.assertEqual(teh.ev_hup_cnt, 1)
            self.assertEqual(teh.ev_timedout_cnt, 0)
            self.assertEqual(teh.ev_close_cnt, 1)
            # rcopy successful?
            if not dirsuffix:
                dirsuffix = target
            rcopy_dest = join(destdir.name,
                              basename(srcdir.name) + '.' + dirsuffix)
            with open(join(rcopy_dest, basename(file1.name)), 'rb') as rfile1:
                self.assertEqual(rfile1.read(), b'Lorem Ipsum Unum')
            with open(join(rcopy_dest, basename(file2.name)), 'rb') as rfile2:
                self.assertEqual(rfile2.read(), b'Lorem Ipsum Duo')
        finally:
            file1.close()
            file2.close()
            srcdir.cleanup()
            destdir.cleanup()

    def test_tree_rcopy_dir_distant(self):
        """test tree rcopy: directory, distant target"""
        # In distant tree mode, the returned result will include the
        # hostname of the distant host, not the target name
        self._tree_rcopy_dir(NODE_DISTANT, dirsuffix=HOSTNAME)

    def test_tree_rcopy_dir_direct(self):
        """test tree rcopy: directory, direct target, in topology"""
        self._tree_rcopy_dir(NODE_DIRECT)

    def test_tree_rcopy_dir_foreign(self):
        """test tree rcopy: directory, direct target, not in topology"""
        self._tree_rcopy_dir(NODE_FOREIGN)

    def test_tree_rcopy_dir_gateway(self):
        """test tree rcopy: directory, gateway is target"""
        self._tree_rcopy_dir(NODE_GATEWAY)

    def test_tree_worker_missing_arguments(self):
        """test TreeWorker with missing arguments"""
        teh = TEventHandler()
        # no command nor source
        self.assertRaises(ValueError, TreeWorker, NODE_DISTANT, teh, 10)

    def test_tree_worker_name_compat(self):
        """test TreeWorker former name (WorkerTree)"""
        self.assertEqual(TreeWorker, WorkerTree)
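# Illustrative sketch (not part of the test suite): the copy/rcopy task
# API the tests above drive through a tree topology. Paths and node names
# are placeholders; rcopy appends a per-node suffix to collected files,
# which is why _tree_rcopy_dir() computes 'dirsuffix' above.
from ClusterShell.Task import task_self

task = task_self()
# push a local file out to the target nodes
task.copy('/tmp/payload.txt', '/tmp/payload.txt', nodes='node[1-2]')
task.resume()
# pull a remote file back; files land as <dest>/<basename>.<node>
task.rcopy('/tmp/payload.txt', '/tmp/collected', nodes='node[1-2]')
task.resume()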
# ---- File: ClusterShell-1.9.2/tests/WorkerExecTest.py ----

# ClusterShell.Worker.ExecWorker test suite
# First version by A. Degremont 2014-07-10

"""Unit test for ExecWorker"""

import os
import unittest

from TLib import HOSTNAME, make_temp_file, make_temp_filename, make_temp_dir
from ClusterShell.Event import EventHandler
from ClusterShell.Worker.Exec import ExecWorker, WorkerError
from ClusterShell.Task import task_self


class ExecTest(unittest.TestCase):

    def execw(self, **kwargs):
        """helper method to spawn and run ExecWorker"""
        worker = ExecWorker(**kwargs)
        task_self().schedule(worker)
        task_self().run()
        return worker

    def test_no_nodes(self):
        """test ExecWorker with a simple command without nodes"""
        self.execw(nodes=None, handler=None, command="echo ok")
        self.assertEqual(task_self().max_retcode(), None)

    def test_shell_syntax(self):
        """test ExecWorker with a command using shell syntax"""
        cmd = "echo -n 1; echo -n 2"
        self.execw(nodes='localhost', handler=None, command=cmd)
        self.assertEqual(task_self().max_retcode(), 0)
        self.assertEqual(task_self().node_buffer('localhost'), b'12')

    def test_one_node(self):
        """test ExecWorker with a simple command on localhost"""
        self.execw(nodes='localhost', handler=None, command="echo ok")
        self.assertEqual(task_self().max_retcode(), 0)
        self.assertEqual(task_self().node_buffer('localhost'), b'ok')

    def test_one_node_error(self):
        """test ExecWorker with an error command on localhost"""
        self.execw(nodes='localhost', handler=None, command="false")
        self.assertEqual(task_self().max_retcode(), 1)
        self.assertEqual(task_self().node_buffer('localhost'), b'')

    @unittest.skipIf(HOSTNAME == 'localhost',
                     "does not work with hostname set to 'localhost'")
    def test_timeout(self):
        """test ExecWorker with a timeout"""
        nodes = "localhost,%s" % HOSTNAME
        self.execw(nodes=nodes, handler=None, command="sleep 1", timeout=0.2)
        self.assertEqual(task_self().max_retcode(), None)
        self.assertEqual(task_self().num_timeout(), 2)

    def test_node_placeholder(self):
        """test ExecWorker with several nodes and %h (host)"""
        nodes = "localhost,%s" % HOSTNAME
        self.execw(nodes=nodes, handler=None, command="echo %h")
        self.assertEqual(task_self().max_retcode(), 0)
        self.assertEqual(task_self().node_buffer('localhost'), b'localhost')
        self.assertEqual(task_self().node_buffer(HOSTNAME),
                         HOSTNAME.encode('utf-8'))

    def test_bad_placeholder(self):
        """test ExecWorker with unknown placeholder pattern"""
        self.assertRaises(WorkerError, self.execw, nodes="localhost",
                          handler=None, command="echo %x")
        self.assertRaises(WorkerError, self.execw, nodes="localhost",
                          handler=None, command="echo %")

    @unittest.skipIf(HOSTNAME == 'localhost',
                     "does not work with hostname set to 'localhost'")
    def test_rank_placeholder(self):
        """test ExecWorker with several nodes and %n (rank)"""
        nodes = "localhost,%s" % HOSTNAME
        self.execw(nodes=nodes, handler=None, command="echo %n")
        self.assertEqual(task_self().max_retcode(), 0)
        self.assertEqual(set(bytes(msg) for msg, _ in
                             task_self().iter_buffers()),
                         set([b'0', b'1']))

    def test_copy(self):
        """test copying with an ExecWorker and host placeholder"""
        src = make_temp_file(b"data")
        dstdir = make_temp_dir()
        dstpath = os.path.join(dstdir.name, os.path.basename(src.name))
        try:
            pattern = dstpath + ".%h"
            self.execw(nodes='localhost', handler=None, source=src.name,
                       dest=pattern)
            self.assertEqual(task_self().max_retcode(), 0)
            self.assertTrue(os.path.isfile(dstpath + '.localhost'))
        finally:
            os.unlink(dstpath + '.localhost')
            dstdir.cleanup()
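    # test_copy relies on the %h placeholder in dest expanding to each
    # target node name, so a single ExecWorker fans out to per-node
    # destination files; a minimal sketch with placeholder paths:
    #
    #     worker = ExecWorker(nodes='localhost', handler=None,
    #                         source='/tmp/src.txt',
    #                         dest='/tmp/dst.txt.%h')
    #     task_self().schedule(worker)
    #     task_self().run()
    #     # => /tmp/src.txt copied to /tmp/dst.txt.localhost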
    def test_copy_preserve(self):
        """test copying with an ExecWorker (preserve=True)"""
        src = make_temp_file(b"data")
        past_time = 443757600
        os.utime(src.name, (past_time, past_time))
        dstpath = make_temp_filename()
        try:
            self.execw(nodes='localhost', handler=None, source=src.name,
                       dest=dstpath, preserve=True)
            self.assertEqual(task_self().max_retcode(), 0)
            # assertEqual, not assertTrue: a second assertTrue argument is
            # only a message and would never check the preserved mtime
            self.assertEqual(os.stat(dstpath).st_mtime, past_time)
        finally:
            os.unlink(dstpath)

    def test_copy_directory(self):
        """test copying directory with an ExecWorker"""
        srcdir = make_temp_dir()
        dstdir = make_temp_dir()
        ref1 = make_temp_file(b"data1", dir=srcdir.name)
        pathdstsrcdir = os.path.join(dstdir.name,
                                     os.path.basename(srcdir.name))
        pathdst1 = os.path.join(pathdstsrcdir, os.path.basename(ref1.name))
        try:
            self.execw(nodes='localhost', handler=None, source=srcdir.name,
                       dest=dstdir.name)
            self.assertEqual(task_self().max_retcode(), 0)
            self.assertTrue(os.path.isdir(pathdstsrcdir))
            self.assertTrue(os.path.isfile(pathdst1))
            with open(pathdst1) as dst1:
                self.assertEqual(dst1.readlines()[0], "data1")
        finally:
            os.unlink(pathdst1)
            os.rmdir(pathdstsrcdir)
            ref1.close()
            dstdir.cleanup()
            srcdir.cleanup()

    def test_copy_wrong_directory(self):
        """test copying wrong directory with an ExecWorker"""
        srcdir = make_temp_dir()
        dst = make_temp_file(b"data")
        ref1 = make_temp_file(b"data1", dir=srcdir.name)
        try:
            self.execw(nodes='localhost', handler=None, source=srcdir.name,
                       dest=dst.name, stderr=True)
            self.assertEqual(task_self().max_retcode(), 1)
            self.assertTrue(len(task_self().node_error("localhost")) > 0)
            self.assertTrue(os.path.isfile(ref1.name))
        finally:
            ref1.close()
            srcdir.cleanup()

    def test_rcopy_wrong_directory(self):
        """test ExecWorker reverse copying with wrong directory"""
        with make_temp_dir() as dstbasedirname:
            dstdir = os.path.join(dstbasedirname, "wrong")
            src = make_temp_file(b"data")
            self.assertRaises(ValueError, self.execw, nodes='localhost',
                              handler=None, source=src.name, dest=dstdir,
                              stderr=True, reverse=True)

    def test_abort_on_read(self):
        """test ExecWorker.abort() on read"""
        class TestH(EventHandler):
            def ev_read(self, worker, node, sname, msg):
                worker.abort()
                worker.abort()  # safe but no effect

        self.execw(nodes='localhost', handler=TestH(),
                   command="echo ok; tail -f /dev/null")
        self.assertEqual(task_self().max_retcode(), None)
        self.assertEqual(task_self().node_buffer('localhost'), b'ok')

    def test_abort_on_close(self):
        """test ExecWorker.abort() on close"""
        class TestH(EventHandler):
            def ev_close(self, worker, timedout):
                worker.abort()
                worker.abort()  # safe but no effect

        self.execw(nodes='localhost', handler=TestH(),
                   command="echo ok; sleep .1")
        self.assertEqual(task_self().max_retcode(), 0)
        self.assertEqual(task_self().node_buffer('localhost'), b'ok')
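# The two abort tests above share one pattern: calling worker.abort() from
# inside an event handler. A minimal standalone sketch (the handler name and
# command are placeholders):
#
#     class StopOnFirstRead(EventHandler):
#         def ev_read(self, worker, node, sname, msg):
#             worker.abort()  # stop the worker after the first line read
#
#     worker = ExecWorker(nodes='localhost', handler=StopOnFirstRead(),
#                         command='echo ok; tail -f /dev/null')
#     task_self().schedule(worker)
#     task_self().run()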