pcs-0.9.149/.gitignore:

*.pyc
*.swp
/MANIFEST
/dist/
/pcs/bash_completion.d.pcs
/pcsd/pcs_settings.conf
/pcsd/pcs_users.conf

pcs-0.9.149/COPYING:

                    GNU GENERAL PUBLIC LICENSE
                       Version 2, June 1991

 Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.

To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. 
(Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. 
c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. 
You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. 
If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. 
If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. 
The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. Also add information on how to contact you by electronic and paper mail. 
If the program is interactive, make it output a short notice like this when it starts in an interactive mode:

 Gnomovision version 69, Copyright (C) year name of author
 Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
 This is free software, and you are welcome to redistribute it
 under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program.

You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names:

 Yoyodyne, Inc., hereby disclaims all copyright interest in the program
 `Gnomovision' (which makes passes at compilers) written by James Hacker.

 , 1 April 1989
 Ty Coon, President of Vice

This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License.

pcs-0.9.149/MANIFEST.in:

include Makefile
include COPYING
include pcs/pcs.8
include pcs/bash_completion.d.pcs
include pcsd/.bundle/config
graft pcsd
graft pcsd/vendor/cache
prune pcsd/vendor/bundle
prune pcsd/test
recursive-exclude pcsd .gitignore

pcs-0.9.149/Makefile:

# Compatibility with GNU/Linux [i.e. Debian] based distros
UNAME_OS_GNU := $(shell if uname -o | grep -q "GNU/Linux" ; then echo true; else echo false; fi)
DISTRO_DEBIAN := $(shell if [ -e /etc/debian_version ] ; then echo true; else echo false; fi)
IS_DEBIAN=false
DISTRO_DEBIAN_VER_8=false

ifeq ($(UNAME_OS_GNU),true)
ifeq ($(DISTRO_DEBIAN),true)
IS_DEBIAN=true
DISTRO_DEBIAN_VER_8 := $(shell if grep -q -i "^8\|jessie" /etc/debian_version ; then echo true; else echo false; fi)
# dpkg-architecture is in the optional dpkg-dev package, unfortunately.
#DEB_HOST_MULTIARCH := $(shell dpkg-architecture -qDEB_HOST_MULTIARCH)
# TODO: Use lsb_architecture to get the multiarch tuple if/when it becomes available in distributions.
DEB_HOST_MULTIARCH := $(shell dpkg -L libc6 | sed -nr 's|^/etc/ld\.so\.conf\.d/(.*)\.conf$$|\1|p')
endif
endif

ifndef PYTHON_SITELIB
PYTHON_SITELIB=$(shell python -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")
endif
ifeq ($(PYTHON_SITELIB), /usr/lib/python2.6/dist-packages)
EXTRA_SETUP_OPTS="--install-layout=deb"
endif
ifeq ($(PYTHON_SITELIB), /usr/lib/python2.7/dist-packages)
EXTRA_SETUP_OPTS="--install-layout=deb"
endif

# Check for systemd presence, add compatibility with Debian based distros
IS_SYSTEMCTL=false
ifeq ($(IS_DEBIAN),true)
IS_SYSTEMCTL = $(shell if [ -d /var/run/systemd/system ] ; then echo true ; else echo false; fi)
ifeq ($(IS_SYSTEMCTL),false)
ifeq ($(SYSTEMCTL_OVERRIDE),true)
IS_SYSTEMCTL=true
endif
endif
else
ifeq ("$(wildcard /usr/bin/systemctl)","/usr/bin/systemctl")
IS_SYSTEMCTL=true
else
ifeq ("$(wildcard /bin/systemctl)","/bin/systemctl")
IS_SYSTEMCTL=true
endif
endif
endif

# Check for an override for building gems
ifndef BUILD_GEMS
BUILD_GEMS=true
endif

MANDIR=/usr/share/man
ifndef PREFIX
PREFIX=$(shell prefix=`python -c "import sys; print(sys.prefix)"` || prefix="/usr"; echo $$prefix)
endif
ifndef systemddir
systemddir=/usr/lib/systemd
endif
ifndef initdir
initdir=/etc/init.d
endif
ifndef install_settings
ifeq ($(IS_DEBIAN),true)
install_settings=true
else
install_settings=false
endif
endif

install: bash_completion
	python setup.py install --prefix ${DESTDIR}${PREFIX} ${EXTRA_SETUP_OPTS}
	mkdir -p ${DESTDIR}${PREFIX}/sbin/
	chmod 755 ${DESTDIR}${PYTHON_SITELIB}/pcs/pcs.py
	ln -fs ${PYTHON_SITELIB}/pcs/pcs.py ${DESTDIR}${PREFIX}/sbin/pcs
	install -D pcs/bash_completion.d.pcs ${DESTDIR}/etc/bash_completion.d/pcs
	install -m644 -D pcs/pcs.8 ${DESTDIR}/${MANDIR}/man8/pcs.8
ifeq ($(IS_DEBIAN),true)
ifeq ($(install_settings),true)
	rm -f ${DESTDIR}${PYTHON_SITELIB}/pcs/settings.py
	tmp_settings=`mktemp`; \
	sed s/DEB_HOST_MULTIARCH/${DEB_HOST_MULTIARCH}/g pcs/settings.py.debian > $$tmp_settings; \
	install -m644 $$tmp_settings ${DESTDIR}${PYTHON_SITELIB}/pcs/settings.py; \
	rm -f $$tmp_settings
	python -m compileall -fl ${DESTDIR}${PYTHON_SITELIB}/pcs/settings.py
endif
endif

install_pcsd:
ifeq ($(BUILD_GEMS),true)
	make -C pcsd build_gems
endif
	mkdir -p ${DESTDIR}/var/log/pcsd
ifeq ($(IS_DEBIAN),true)
	mkdir -p ${DESTDIR}/usr/share/
	cp -r pcsd ${DESTDIR}/usr/share/
	install -m 644 -D pcsd/pcsd.conf ${DESTDIR}/etc/default/pcsd
	install -d ${DESTDIR}/etc/pam.d
	install pcsd/pcsd.pam.debian ${DESTDIR}/etc/pam.d/pcsd
ifeq ($(install_settings),true)
	rm -f ${DESTDIR}/usr/share/pcsd/settings.rb
	tmp_settings_pcsd=`mktemp`; \
	sed s/DEB_HOST_MULTIARCH/${DEB_HOST_MULTIARCH}/g pcsd/settings.rb.debian > $$tmp_settings_pcsd; \
	install -m644 $$tmp_settings_pcsd ${DESTDIR}/usr/share/pcsd/settings.rb; \
	rm -f $$tmp_settings_pcsd
endif
ifeq ($(IS_SYSTEMCTL),true)
	install -d ${DESTDIR}/${systemddir}/system/
	install -m 644 pcsd/pcsd.service.debian ${DESTDIR}/${systemddir}/system/pcsd.service
else
	install -m 755 -D pcsd/pcsd.debian ${DESTDIR}/${initdir}/pcsd
endif
else
	mkdir -p ${DESTDIR}${PREFIX}/lib/
	cp -r pcsd ${DESTDIR}${PREFIX}/lib/
	install -m 644 -D pcsd/pcsd.conf ${DESTDIR}/etc/sysconfig/pcsd
	install -d ${DESTDIR}/etc/pam.d
	install pcsd/pcsd.pam ${DESTDIR}/etc/pam.d/pcsd
ifeq ($(IS_SYSTEMCTL),true)
	install -d ${DESTDIR}/${systemddir}/system/
	install -m 644 pcsd/pcsd.service ${DESTDIR}/${systemddir}/system/
else
	install -m 755 -D pcsd/pcsd ${DESTDIR}/${initdir}/pcsd
endif
endif
	install -m 700 -d ${DESTDIR}/var/lib/pcsd
	install -m 644 -D pcsd/pcsd.logrotate ${DESTDIR}/etc/logrotate.d/pcsd

uninstall:
	rm -f ${DESTDIR}${PREFIX}/sbin/pcs
	rm -rf ${DESTDIR}${PYTHON_SITELIB}/pcs
ifeq ($(IS_DEBIAN),true)
	rm -rf ${DESTDIR}/usr/share/pcsd
else
	rm -rf ${DESTDIR}${PREFIX}/lib/pcsd
endif
ifeq ($(IS_SYSTEMCTL),true)
	rm -f ${DESTDIR}/${systemddir}/system/pcsd.service
else
	rm -f ${DESTDIR}/${initdir}/pcsd
endif
	rm -f ${DESTDIR}/etc/pam.d/pcsd
	rm -rf ${DESTDIR}/var/lib/pcsd

tarball: bash_completion
	python setup.py sdist --formats=tar
	python maketarballs.py

newversion:
	python newversion.py

bash_completion:
	cd pcs ; python -c 'import usage; usage.sub_generate_bash_completion()' > bash_completion.d.pcs ; cd ..

pcs-0.9.149/README:

PCS - Pacemaker/Corosync configuration system

Quick start

To install pcs, run the following in a terminal:

# tar -xzvf pcs-0.9.143.tar.gz
# cd pcs-0.9.143
# make install

This will install pcs into /usr/sbin/pcs.

To create a cluster, run the following commands on all nodes (replacing
node1, node2, node3 with a list of nodes in the cluster):

# pcs cluster setup --local --name cluster_name node1 node2 node3

Then run the following command on all nodes:

# pcs cluster start

After a few moments the cluster should start up and you can get the status
of the cluster:

# pcs status

After this you can add resources and stonith agents:

# pcs resource help

and

# pcs stonith help

You can also install pcsd, which operates as a GUI and remote server for
pcs. pcsd may also be necessary in order to follow the guides on the
clusterlabs.org website.

To install pcsd, run the following commands from the root of your pcs
directory.
(You must have the ruby bundler gem installed, rubygem-bundler in Fedora,
and development packages installed.)

# cd pcsd ; make get_gems ; cd ..
# make install_pcsd

If you are using GNU/Linux, it's now time to:

# systemctl daemon-reload

Currently this is built into Fedora (other distributions to follow). You
can see the current Fedora .spec in the fedora package git repositories
here: http://pkgs.fedoraproject.org/cgit/pcs.git/

Current Fedora 23 .spec:
http://pkgs.fedoraproject.org/cgit/pcs.git/tree/pcs.spec?h=f23

If you have any questions or concerns, please feel free to email
cfeist@redhat.com or open a GitHub issue on the pcs project.

pcs-0.9.149/README.md:

## PCS - Pacemaker/Corosync configuration system

### Quick Start
***

- **PCS Installation from Source**

  Run the following in a terminal:

  ```shell
  # tar -xzvf pcs-0.9.143.tar.gz
  # cd pcs-0.9.143
  # make install
  ```

  This will install pcs into `/usr/sbin/pcs`.
- **Create and Start a Basic Cluster**

  To create a cluster, run the following commands on all nodes (replacing node1, node2, node3 with a list of nodes in the cluster):

  ```shell
  # pcs cluster setup --local --name cluster_name node1 node2 node3
  ```

  Then run the following command on all nodes:

  ```shell
  # pcs cluster start
  ```
- **Check the Cluster Status**

  After a few moments the cluster should start up and you can get the status of the cluster:

  ```shell
  # pcs status
  ```
- **Add Cluster Resources**

  After this you can add resources and stonith agents:

  ```shell
  # pcs resource help
  ```

  and

  ```shell
  # pcs stonith help
  ```
- **PCSD Installation from Source**

  You can also install pcsd, which operates as a GUI and remote server for pcs. pcsd may also be necessary in order to follow the guides on the clusterlabs.org website.

  To install pcsd, run the following commands from the root of your pcs directory. (You must have the ruby bundler gem installed, rubygem-bundler in Fedora, and development packages installed.)

  ```shell
  # cd pcsd ; make get_gems ; cd ..
  # make install_pcsd
  ```

  If you are using GNU/Linux, it's now time to:

  ```shell
  # systemctl daemon-reload
  ```
### Packages
***

Currently this is built into Fedora (other distributions to follow). You can see the current Fedora .spec in the fedora package git repositories here: http://pkgs.fedoraproject.org/cgit/pcs.git/

Current Fedora 23 .spec: http://pkgs.fedoraproject.org/cgit/pcs.git/tree/pcs.spec?h=f23
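The release tarballs packaged for Fedora are named after the pcs version; `newversion.py`, included later in this tarball, bumps the patch component of that version before committing and tagging. The core string manipulation it performs can be sketched as follows (`bump_patch` is a hypothetical helper for illustration, not part of pcs; the real script rewrites `setup.py`, `pcs/settings.py`, and friends via `sed`):

```python
# Sketch of the patch-version bump performed by newversion.py.
def bump_patch(version):
    """Increment the last (patch) component of an X.Y.Z version string."""
    parts = version.split(".")
    parts[2] = str(int(parts[2]) + 1)
    return ".".join(parts)

print(bump_patch("0.9.149"))  # -> 0.9.150
```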
### Inquiries
***

If you have any questions or concerns, please feel free to email cfeist@redhat.com or open a GitHub issue on the pcs project.

pcs-0.9.149/maketarballs.py:

#!/usr/bin/python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import sys
import os

sys.path.insert(
    0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "pcs")
)
import settings

pcs_version = settings.pcs_version
print(os.system("cp dist/pcs-"+pcs_version+".tar dist/pcs-withgems-"+pcs_version+".tar"))
print(os.system("tar --delete -f dist/pcs-"+pcs_version+".tar '*/pcsd/vendor'"))
print(os.system("gzip dist/pcs-"+pcs_version+".tar"))
print(os.system("gzip dist/pcs-withgems-"+pcs_version+".tar"))

pcs-0.9.149/newversion.py:

#!/usr/bin/python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import sys
import os
import locale
import datetime

sys.path.insert(
    0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "pcs")
)
import settings

locale.setlocale(locale.LC_ALL, ("en_US", "UTF-8"))

# Get the current version, increment by 1, verify changes, git commit & tag
pcs_version_split = settings.pcs_version.split('.')
pcs_version_split[2] = str(int(pcs_version_split[2]) + 1)
new_version = ".".join(pcs_version_split)

print(os.system("sed -i 's/"+settings.pcs_version+"/"+new_version+"/' setup.py"))
print(os.system("sed -i 's/"+settings.pcs_version+"/"+new_version+"/' pcs/settings.py"))
print(os.system("sed -i 's/"+settings.pcs_version+"/"+new_version+"/' pcs/settings.py.debian"))
print(os.system("sed -i 's/"+settings.pcs_version+"/"+new_version+"/' pcsd/bootstrap.rb"))

manpage_head = '.TH PCS "8" "{date}" "pcs {version}" "System Administration Utilities"'.format(
    date=datetime.date.today().strftime('%B %Y'),
    version=new_version
)
print(os.system("sed -i '1c " + manpage_head + "' pcs/pcs.8"))

print(os.system("git diff"))
print("Look good? (y/n)")
choice = sys.stdin.read(1)
if choice != "y":
    print("Ok, exiting")
    sys.exit(0)
print(os.system("git commit -a -m 'Bumped to "+new_version+"'"))
print(os.system("git tag "+new_version))

pcs-0.9.149/pcs/COPYING: (verbatim copy of the GNU GPL v2 text from the top-level COPYING above)
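The `acl.py` source that follows accepts ACL permissions on the command line as repeated `<permission> <scope-type> <scope>` triplets (see `argv_to_permission_info_list` below). As a standalone sketch of that parsing rule — the function name and the use of `ValueError` here are illustrative, not part of pcs:

```python
def parse_permission_triplets(argv):
    """Parse repeated (permission, scope-type, scope) triplets, pcs-style."""
    if len(argv) % 3 != 0:
        raise ValueError("permissions must come in triplets")
    # Every 3rd argument starting at 0 is the permission, at 1 the scope
    # type, at 2 the scope itself; permission and scope type are
    # case-insensitive, mirroring the lower() calls in acl.py.
    triplets = list(zip(
        [permission.lower() for permission in argv[::3]],
        [scope_type.lower() for scope_type in argv[1::3]],
        argv[2::3],
    ))
    for permission, scope_type, _scope in triplets:
        if (
            permission not in ("read", "write", "deny")
            or scope_type not in ("xpath", "id")
        ):
            raise ValueError("invalid permission triplet")
    return triplets

print(parse_permission_triplets(["read", "xpath", "/cib", "WRITE", "id", "dummy"]))
# → [('read', 'xpath', '/cib'), ('write', 'id', 'dummy')]
```

This corresponds to command lines such as `pcs acl permission add <role> read xpath /cib`, where the triplets follow the role id.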
pcs-0.9.149/pcs/__init__.py

pcs-0.9.149/pcs/acl.py

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import sys

import usage
import utils
import prop
from errors import CmdLineInputError
from errors import ReportItem
from errors import ReportItemSeverity
from errors import error_codes
from library_acl import LibraryError
from library_acl import create_role
from library_acl import provide_role
from library_acl import add_permissions_to_role

def exit_on_cmdline_input_errror(usage_name):
    usage.acl([usage_name])
    sys.exit(1)

def acl_cmd(argv):
    if len(argv) == 0:
        argv = ["show"]

    sub_cmd = argv.pop(0)

    # If we're using help or show we don't upgrade, otherwise upgrade if necessary
    if sub_cmd not in ["help","show"]:
        utils.checkAndUpgradeCIB(2,0,0)

    if (sub_cmd == "help"):
        usage.acl(argv)
    elif (sub_cmd == "show"):
        acl_show(argv)
    elif (sub_cmd == "enable"):
        acl_enable(argv)
    elif (sub_cmd == "disable"):
        acl_disable(argv)
    elif (sub_cmd == "role"):
        acl_role(argv)
    elif (sub_cmd == "target" or sub_cmd == "user"):
        acl_target(argv)
    elif sub_cmd == "group":
        acl_target(argv, True)
    elif sub_cmd == "permission":
        acl_permission(argv)
    else:
        usage.acl()
        sys.exit(1)

def acl_show(argv):
    dom = utils.get_cib_dom()

    properties = prop.get_set_properties(defaults=prop.get_default_properties())
    acl_enabled = properties.get("enable-acl", "").lower()
    if utils.is_cib_true(acl_enabled):
        print("ACLs are enabled")
    else:
        print("ACLs are disabled, run 'pcs acl enable' to enable")
    print()

    print_targets(dom)
    print_groups(dom)
    print_roles(dom)

def acl_enable(argv):
    prop.set_property(["enable-acl=true"])

def acl_disable(argv):
    prop.set_property(["enable-acl=false"])

def acl_role(argv):
    if len(argv) < 2:
        usage.acl(["role"])
        sys.exit(1)

    command = argv.pop(0)
    if command == "create":
        try:
            run_create_role(argv)
        except CmdLineInputError as e:
            exit_on_cmdline_input_errror('role create')
        except LibraryError as e:
            utils.process_library_reports(e.args)
    elif command == "delete":
        run_role_delete(argv)
    elif command == "assign":
        run_role_assign(argv)
    elif command == "unassign":
        run_role_unassign(argv)
    else:
        usage.acl(["role"])
        sys.exit(1)

def acl_target(argv, group=False):
    if len(argv) < 2:
        if group:
            usage.acl(["group"])
            sys.exit(1)
        else:
            usage.acl(["user"])
            sys.exit(1)

    dom = utils.get_cib_dom()
    acls = utils.get_acls(dom)

    command = argv.pop(0)
    tug_id = argv.pop(0)
    if command == "create":
        # pcsd parses the error message in order to determine whether the id is
        # assigned to user/group or some other cib element
        if group and utils.dom_get_element_with_id(dom, "acl_group", tug_id):
            utils.err("group %s already exists" % tug_id)
        if not group and utils.dom_get_element_with_id(dom, "acl_target", tug_id):
            utils.err("user %s already exists" % tug_id)
        if utils.does_id_exist(dom, tug_id):
            utils.err(tug_id + " already exists")

        if group:
            element = dom.createElement("acl_group")
        else:
            element = dom.createElement("acl_target")
        element.setAttribute("id", tug_id)

        acls.appendChild(element)
        for role in argv:
            if not utils.dom_get_element_with_id(acls, "acl_role", role):
                utils.err("cannot find acl role: %s" % role)
            r = dom.createElement("role")
            r.setAttribute("id", role)
            element.appendChild(r)

        utils.replace_cib_configuration(dom)
    elif command == "delete":
        found = False
        if group:
            elist = dom.getElementsByTagName("acl_group")
        else:
            elist = dom.getElementsByTagName("acl_target")

        for elem in elist:
            if elem.getAttribute("id") == tug_id:
                found = True
                elem.parentNode.removeChild(elem)
                break
        if not found:
            if group:
                utils.err("unable to find acl group: %s" % tug_id)
            else:
                utils.err("unable to find acl target/user: %s" % tug_id)
        utils.replace_cib_configuration(dom)
    else:
        if group:
            usage.acl(["group"])
        else:
            usage.acl(["user"])
        sys.exit(1)

def acl_permission(argv):
    if len(argv) < 1:
        usage.acl(["permission"])
        sys.exit(1)

    command = argv.pop(0)
    if command == "add":
        try:
            run_permission_add(argv)
        except CmdLineInputError as e:
            exit_on_cmdline_input_errror('permission add')
        except LibraryError as e:
            utils.process_library_reports(e.args)
    elif command == "delete":
        run_permission_delete(argv)
    else:
        usage.acl(["permission"])
        sys.exit(1)

def print_groups(dom):
    for elem in dom.getElementsByTagName("acl_group"):
        print("Group: " + elem.getAttribute("id"))
        role_list = []
        for role in elem.getElementsByTagName("role"):
            role_list.append(role.getAttribute("id"))
        print(" ".join(["  Roles:"] + role_list))

def print_targets(dom):
    for elem in dom.getElementsByTagName("acl_target"):
        print("User: " + elem.getAttribute("id"))
        role_list = []
        for role in elem.getElementsByTagName("role"):
            role_list.append(role.getAttribute("id"))
        print(" ".join(["  Roles:"] + role_list))

def print_roles(dom):
    for elem in dom.getElementsByTagName("acl_role"):
        print("Role: " + elem.getAttribute("id"))
        if elem.getAttribute("description"):
            print("  Description: " + elem.getAttribute("description"))
        for perm in elem.getElementsByTagName("acl_permission"):
            perm_name = "  Permission: " + perm.getAttribute("kind")
            if "xpath" in perm.attributes.keys():
                perm_name += " xpath " + perm.getAttribute("xpath")
            elif "reference" in perm.attributes.keys():
                perm_name += " id " + perm.getAttribute("reference")
            perm_name += " (" + perm.getAttribute("id") + ")"
            print(perm_name)

def argv_to_permission_info_list(argv):
    if len(argv) % 3 != 0:
        raise CmdLineInputError()

    # materialize the zip so it can be both validated below and returned
    # (on python3 a plain zip iterator would be exhausted by the loop)
    permission_info_list = list(zip(
        [permission.lower() for permission in argv[::3]],
        [scope_type.lower() for scope_type in argv[1::3]],
        argv[2::3]
    ))
    for permission, scope_type, scope in permission_info_list:
        if (
            permission not in ['read', 'write', 'deny']
            or scope_type not in ['xpath', 'id']
        ):
            raise CmdLineInputError()

    return permission_info_list

def run_create_role(argv):
    if len(argv) < 1:
        raise CmdLineInputError()
    role_id = argv.pop(0)
    description = ""
    desc_key = 'description='
    if argv and argv[0].startswith(desc_key) and len(argv[0]) > len(desc_key):
        description = argv.pop(0)[len(desc_key):]
    permission_info_list = argv_to_permission_info_list(argv)

    dom = utils.get_cib_dom()
    create_role(dom, role_id, description)
    add_permissions_to_role(dom, role_id, permission_info_list)
    utils.replace_cib_configuration(dom)

def run_role_delete(argv):
    if len(argv) < 1:
        usage.acl(["role delete"])
        sys.exit(1)

    role_id = argv.pop(0)
    dom = utils.get_cib_dom()
    found = False
    for elem in dom.getElementsByTagName("acl_role"):
        if elem.getAttribute("id") == role_id:
            found = True
            elem.parentNode.removeChild(elem)
            break
    if not found:
        utils.err("unable to find acl role: %s" % role_id)

    # Remove any references to this role in acl_target or acl_group
    for elem in dom.getElementsByTagName("role"):
        if elem.getAttribute("id") == role_id:
            user_group = elem.parentNode
            user_group.removeChild(elem)
            if "--autodelete" in utils.pcs_options:
                if not user_group.getElementsByTagName("role"):
                    user_group.parentNode.removeChild(user_group)

    utils.replace_cib_configuration(dom)

def run_role_assign(argv):
    if len(argv) < 2:
        usage.acl(["role assign"])
        sys.exit(1)

    if len(argv) == 2:
        role_id = argv[0]
        ug_id = argv[1]
    elif len(argv) > 2 and argv[1] == "to":
        role_id = argv[0]
        ug_id = argv[2]
    else:
        usage.acl(["role assign"])
        sys.exit(1)

    dom = utils.get_cib_dom()
    found = False
    for role in dom.getElementsByTagName("acl_role"):
        if role.getAttribute("id") == role_id:
            found = True
            break
    if not found:
        utils.err("cannot find role: %s" % role_id)

    found = False
    for ug in dom.getElementsByTagName("acl_target") + dom.getElementsByTagName("acl_group"):
        if ug.getAttribute("id") == ug_id:
            found = True
            break
    if not found:
        utils.err("cannot find user or group: %s" % ug_id)

    for current_role in ug.getElementsByTagName("role"):
        if current_role.getAttribute("id") == role_id:
            utils.err(role_id + " is already assigned to " + ug_id)

    new_role = dom.createElement("role")
    new_role.setAttribute("id", role_id)
    ug.appendChild(new_role)
    utils.replace_cib_configuration(dom)

def run_role_unassign(argv):
    if len(argv) < 2:
        usage.acl(["role unassign"])
        sys.exit(1)

    role_id = argv.pop(0)
    if len(argv) > 1 and argv[0] == "from":
        ug_id = argv[1]
    else:
        ug_id = argv[0]

    dom = utils.get_cib_dom()
    found = False
    for ug in dom.getElementsByTagName("acl_target") + dom.getElementsByTagName("acl_group"):
        if ug.getAttribute("id") == ug_id:
            found = True
            break
    if not found:
        utils.err("cannot find user or group: %s" % ug_id)

    found = False
    for current_role in ug.getElementsByTagName("role"):
        if current_role.getAttribute("id") == role_id:
            found = True
            current_role.parentNode.removeChild(current_role)
            break
    if not found:
        utils.err(
            "cannot find role: %s, assigned to user/group: %s" % (role_id, ug_id)
        )

    if "--autodelete" in utils.pcs_options:
        if not ug.getElementsByTagName("role"):
            ug.parentNode.removeChild(ug)

    utils.replace_cib_configuration(dom)

def run_permission_add(argv):
    if len(argv) < 4:
        raise CmdLineInputError()
    role_id = argv.pop(0)
    permission_info_list = argv_to_permission_info_list(argv)

    dom = utils.get_cib_dom()
    provide_role(dom, role_id)
    add_permissions_to_role(dom, role_id, permission_info_list)
    utils.replace_cib_configuration(dom)

def run_permission_delete(argv):
    dom = utils.get_cib_dom()
    if len(argv) < 1:
        usage.acl(["permission delete"])
        sys.exit(1)

    perm_id = argv.pop(0)
    found = False
    for elem in dom.getElementsByTagName("acl_permission"):
        if elem.getAttribute("id") == perm_id:
            elem.parentNode.removeChild(elem)
            found = True
    if not found:
        utils.err("Unable to find permission with id: %s" % perm_id)

    utils.replace_cib_configuration(dom)

pcs-0.9.149/pcs/cluster.py

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import os
import subprocess
import re
import sys
import socket
import tempfile import datetime import json import xml.dom.minidom import threading try: # python2 from commands import getstatusoutput except ImportError: # python3 from subprocess import getstatusoutput import settings import usage import utils import corosync_conf as corosync_conf_utils import pcsd import status import prop import resource import stonith import constraint from errors import ReportItem from errors import error_codes pcs_dir = os.path.dirname(os.path.realpath(__file__)) def cluster_cmd(argv): if len(argv) == 0: usage.cluster() exit(1) sub_cmd = argv.pop(0) if (sub_cmd == "help"): usage.cluster(argv) elif (sub_cmd == "setup"): if "--name" in utils.pcs_options: cluster_setup([utils.pcs_options["--name"]] + argv) else: utils.err( "A cluster name (--name ) is required to setup a cluster" ) elif (sub_cmd == "sync"): sync_nodes(utils.getNodesFromCorosyncConf(),utils.getCorosyncConf()) elif (sub_cmd == "status"): status.cluster_status(argv) elif (sub_cmd == "pcsd-status"): cluster_gui_status(argv) elif (sub_cmd == "certkey"): cluster_certkey(argv) elif (sub_cmd == "auth"): cluster_auth(argv) elif (sub_cmd == "token"): cluster_token(argv) elif (sub_cmd == "token-nodes"): cluster_token_nodes(argv) elif (sub_cmd == "start"): if "--all" in utils.pcs_options: start_cluster_all() else: start_cluster(argv) elif (sub_cmd == "stop"): if "--all" in utils.pcs_options: stop_cluster_all() else: stop_cluster(argv) elif (sub_cmd == "kill"): kill_cluster(argv) elif (sub_cmd == "standby"): node_standby(argv) elif (sub_cmd == "unstandby"): node_standby(argv, False) elif (sub_cmd == "enable"): if "--all" in utils.pcs_options: enable_cluster_all() else: enable_cluster(argv) elif (sub_cmd == "disable"): if "--all" in utils.pcs_options: disable_cluster_all() else: disable_cluster(argv) elif (sub_cmd == "remote-node"): cluster_remote_node(argv) elif (sub_cmd == "cib"): get_cib(argv) elif (sub_cmd == "cib-push"): cluster_push(argv) elif (sub_cmd == "cib-upgrade"): 
cluster_upgrade() elif (sub_cmd == "edit"): cluster_edit(argv) elif (sub_cmd == "node"): cluster_node(argv) elif (sub_cmd == "localnode"): cluster_localnode(argv) elif (sub_cmd == "uidgid"): cluster_uidgid(argv) elif (sub_cmd == "corosync"): cluster_get_corosync_conf(argv) elif (sub_cmd == "reload"): cluster_reload(argv) elif (sub_cmd == "destroy"): cluster_destroy(argv) elif (sub_cmd == "verify"): cluster_verify(argv) elif (sub_cmd == "report"): cluster_report(argv) elif (sub_cmd == "quorum"): if argv and argv[0] == "unblock": cluster_quorum_unblock(argv[1:]) else: usage.cluster(["quorum"]) sys.exit(1) else: usage.cluster() sys.exit(1) def sync_nodes(nodes,config): for node in nodes: utils.setCorosyncConfig(node,config) def cluster_auth(argv): if len(argv) == 0: auth_nodes(utils.getNodesFromCorosyncConf()) else: auth_nodes(argv) def cluster_token(argv): if len(argv) > 1: utils.err("Must specify only one node") elif len(argv) == 0: utils.err("Must specify a node to get authorization token from") node = argv[0] tokens = utils.readTokens() if node in tokens: print(tokens[node]) else: utils.err("No authorization token for: %s" % (node)) def cluster_token_nodes(argv): print("\n".join(sorted(utils.readTokens().keys()))) def auth_nodes(nodes): if "-u" in utils.pcs_options: username = utils.pcs_options["-u"] else: username = None if "-p" in utils.pcs_options: password = utils.pcs_options["-p"] else: password = None set_nodes = set(nodes) need_auth = "--force" in utils.pcs_options or (username or password) if not need_auth: for node in set_nodes: status = utils.checkAuthorization(node) if status[0] == 3: need_auth = True break mutually_authorized = False if status[0] == 0: try: auth_status = json.loads(status[1]) if auth_status["success"]: if set_nodes.issubset(set(auth_status["node_list"])): mutually_authorized = True except (ValueError, KeyError): pass if not mutually_authorized: need_auth = True break if need_auth: if username == None: username = 
utils.get_terminal_input('Username: ') if password == None: password = utils.get_terminal_password() auth_nodes_do( set_nodes, username, password, '--force' in utils.pcs_options, '--local' in utils.pcs_options ) else: for node in set_nodes: print(node + ": Already authorized") def auth_nodes_do(nodes, username, password, force, local): pcsd_data = { 'nodes': list(set(nodes)), 'username': username, 'password': password, 'force': force, 'local': local, } output, retval = utils.run_pcsdcli('auth', pcsd_data) if retval == 0 and output['status'] == 'access_denied': utils.err('Access denied') if retval == 0 and output['status'] == 'ok' and output['data']: failed = False try: if not output['data']['sync_successful']: utils.err( "Some nodes had a newer tokens than the local node. " + "Local node's tokens were updated. " + "Please repeat the authentication if needed." ) for node, result in output['data']['auth_responses'].items(): if result['status'] == 'ok': print("{0}: Authorized".format(node)) elif result['status'] == 'already_authorized': print("{0}: Already authorized".format(node)) elif result['status'] == 'bad_password': utils.err( "{0}: Username and/or password is incorrect".format(node), False ) failed = True elif result['status'] == 'noresponse': utils.err("Unable to communicate with {0}".format(node), False) failed = True else: utils.err("Unexpected response from {0}".format(node), False) failed = True if output['data']['sync_nodes_err']: utils.err( ( "Unable to synchronize and save tokens on nodes: {0}. " + "Are they authorized?" 
).format( ", ".join(output['data']['sync_nodes_err']) ), False ) failed = True except: utils.err('Unable to communicate with pcsd') if failed: sys.exit(1) return utils.err('Unable to communicate with pcsd') # If no arguments get current cluster node status, otherwise get listed # nodes status def cluster_gui_status(argv,dont_exit = False): bad_nodes = False if len(argv) == 0: nodes = utils.getNodesFromCorosyncConf() if len(nodes) == 0: if utils.is_rhel6(): utils.err("no nodes found in cluster.conf") else: utils.err("no nodes found in corosync.conf") bad_nodes = check_nodes(nodes, " ") else: bad_nodes = check_nodes(argv, " ") if bad_nodes and not dont_exit: sys.exit(2) def cluster_certkey(argv): return pcsd.pcsd_certkey(argv) # Check and see if pcsd is running on the nodes listed def check_nodes(nodes, prefix = ""): bad_nodes = False if not utils.is_rhel6(): pm_nodes = utils.getPacemakerNodesID(True) cs_nodes = utils.getCorosyncNodesID(True) for node in nodes: status = utils.checkAuthorization(node) if not utils.is_rhel6(): if node not in pm_nodes.values(): for n_id, n in cs_nodes.items(): if node == n and n_id in pm_nodes: real_node_name = pm_nodes[n_id] if real_node_name == "(null)": real_node_name = "*Unknown*" node = real_node_name + " (" + node + ")" break if status[0] == 0: print(prefix + node + ": Online") elif status[0] == 3: print(prefix + node + ": Unable to authenticate") bad_nodes = True else: print(prefix + node + ": Offline") bad_nodes = True return bad_nodes def cluster_setup(argv): if len(argv) < 2: usage.cluster(["setup"]) sys.exit(1) is_rhel6 = utils.is_rhel6() cluster_name = argv[0] # get nodes' addresses udpu_rrp = False node_list = [] primary_addr_list = [] all_addr_list = [] for node in argv[1:]: addr_list = utils.parse_multiring_node(node) primary_addr_list.append(addr_list[0]) all_addr_list.append(addr_list[0]) node_options = { "ring0_addr": addr_list[0], } if addr_list[1]: udpu_rrp = True all_addr_list.append(addr_list[1]) 
node_options["ring1_addr"] = addr_list[1] node_list.append(node_options) # special case of ring1 address on cman if is_rhel6 and not udpu_rrp and "--addr1" in utils.pcs_options: for node in node_list: node["ring1_addr"] = utils.pcs_options["--addr1"] # verify addresses if udpu_rrp: for node_options in node_list: if "ring1_addr" not in node_options: utils.err( "if one node is configured for RRP, " + "all nodes must be configured for RRP" ) nodes_unresolvable = False for node_addr in all_addr_list: try: socket.getaddrinfo(node_addr, None) except socket.error: print("Warning: Unable to resolve hostname: {0}".format(node_addr)) nodes_unresolvable = True if nodes_unresolvable and "--force" not in utils.pcs_options: utils.err("Unable to resolve all hostnames, use --force to override") # parse, validate and complete options if is_rhel6: options, messages = cluster_setup_parse_options_cman(utils.pcs_options) else: options, messages = cluster_setup_parse_options_corosync( utils.pcs_options ) if udpu_rrp and "rrp_mode" not in options["transport_options"]: options["transport_options"]["rrp_mode"] = "passive" utils.process_library_reports(messages) # prepare config file if is_rhel6: config, messages = cluster_setup_create_cluster_conf( cluster_name, node_list, options["transport_options"], options["totem_options"] ) else: config, messages = cluster_setup_create_corosync_conf( cluster_name, node_list, options["transport_options"], options["totem_options"], options["quorum_options"] ) utils.process_library_reports(messages) # setup on the local node if "--local" in utils.pcs_options: # Config path can be overriden by --corosync_conf or --cluster_conf # command line options. If it is overriden we do not touch any cluster # which may be set up on the local node. 
        if is_rhel6:
            config_path = settings.cluster_conf_file
        else:
            config_path = settings.corosync_conf_file
        config_path_overridden = (
            (is_rhel6 and "--cluster_conf" in utils.pcs_options)
            or
            (not is_rhel6 and "--corosync_conf" in utils.pcs_options)
        )

        # verify and ensure no cluster is set up on the host
        if "--force" not in utils.pcs_options and os.path.exists(config_path):
            utils.err("{0} already exists, use --force to overwrite".format(
                config_path
            ))
        if not config_path_overridden:
            cib_path = os.path.join(settings.cib_dir, "cib.xml")
            if "--force" not in utils.pcs_options and os.path.exists(cib_path):
                utils.err("{0} already exists, use --force to overwrite".format(
                    cib_path
                ))
            cluster_destroy([])

        # set up the cluster
        utils.setCorosyncConf(config)
        if "--start" in utils.pcs_options:
            start_cluster([])
        if "--enable" in utils.pcs_options:
            enable_cluster([])

    # setup on remote nodes
    else:
        # verify and ensure no cluster is set up on the nodes
        # checks that nodes are authenticated as well
        if "--force" not in utils.pcs_options:
            all_nodes_available = True
            for node in primary_addr_list:
                available, message = utils.canAddNodeToCluster(node)
                if not available:
                    all_nodes_available = False
                    utils.err("{0}: {1}".format(node, message), False)
            if not all_nodes_available:
                utils.err(
                    "nodes availability check failed, use --force to override. "
                    + "WARNING: This will destroy existing cluster on the nodes."
                )

        print("Destroying cluster on nodes: {0}...".format(
            ", ".join(primary_addr_list)
        ))
        destroy_cluster(primary_addr_list)
        print()

        # send local cluster pcsd configs to the new nodes
        print("Sending cluster config files to the nodes...")
        pcsd_data = {
            "nodes": primary_addr_list,
            "force": True,
            "clear_local_cluster_permissions": True,
        }
        err_msgs = []
        output, retval = utils.run_pcsdcli("send_local_configs", pcsd_data)
        if retval == 0 and output["status"] == "ok" and output["data"]:
            try:
                for node in primary_addr_list:
                    node_response = output["data"][node]
                    if node_response["status"] == "notauthorized":
                        err_msgs.append(
                            "Unable to authenticate to " + node
                            + ", try running 'pcs cluster auth'"
                        )
                    if node_response["status"] not in ["ok", "not_supported"]:
                        err_msgs.append(
                            "Unable to set pcsd configs on {0}".format(node)
                        )
            except Exception:
                err_msgs.append("Unable to communicate with pcsd")
        else:
            err_msgs.append("Unable to set pcsd configs")
        for err_msg in err_msgs:
            print("Warning: {0}".format(err_msg))

        # send the cluster config
        for node in primary_addr_list:
            utils.setCorosyncConfig(node, config)

        # start and enable the cluster if requested
        if "--start" in utils.pcs_options:
            print("\nStarting cluster on nodes: {0}...".format(
                ", ".join(primary_addr_list)
            ))
            start_cluster_nodes(primary_addr_list)
        if "--enable" in utils.pcs_options:
            enable_cluster(primary_addr_list)

        # sync certificates as the last step because it restarts pcsd
        print()
        pcsd.pcsd_sync_certs([], exit_after_error=False)

def cluster_setup_parse_options_corosync(options):
    messages = []
    parsed = {
        "transport_options": {
            "rings_options": [],
        },
        "totem_options": {},
        "quorum_options": {},
    }

    transport = "udpu"
    if "--transport" in options:
        transport = options["--transport"]
        if transport not in ("udp", "udpu"):
            messages.append(ReportItem.error(
                error_codes.UNKNOWN_TRANSPORT,
                "unknown transport '{transport}'",
                info={'transport': transport},
                forceable=True,
            ))
    parsed["transport_options"]["transport"] = transport

    if transport == "udpu" and ("--addr0" in options or "--addr1" in options):
        messages.append(ReportItem.error(
            error_codes.NON_UDP_TRANSPORT_ADDR_MISMATCH,
            '--addr0 and --addr1 can only be used with --transport=udp',
        ))

    rrpmode = None
    if "--rrpmode" in options or "--addr0" in options:
        rrpmode = "passive"
        if "--rrpmode" in options:
            rrpmode = options["--rrpmode"]
        if rrpmode not in ("passive", "active"):
            messages.append(ReportItem.error(
                error_codes.UNKNOWN_RRP_MODE,
                '{rrpmode} is an unknown RRP mode',
                info={'rrpmode': rrpmode},
                forceable=True,
            ))
        if rrpmode == "active":
            messages.append(ReportItem.error(
                error_codes.RRP_ACTIVE_NOT_SUPPORTED,
                "using a RRP mode of 'active' is not supported or tested",
                forceable=True,
            ))
    if rrpmode:
        parsed["transport_options"]["rrp_mode"] = rrpmode

    totem_options_names = {
        "--token": "token",
        "--token_coefficient": "token_coefficient",
        "--join": "join",
        "--consensus": "consensus",
        "--miss_count_const": "miss_count_const",
        "--fail_recv_const": "fail_recv_const",
    }
    for opt_name, parsed_name in totem_options_names.items():
        if opt_name in options:
            parsed["totem_options"][parsed_name] = options[opt_name]

    if transport == "udp":
        interface_ids = []
        if "--addr0" in options:
            interface_ids.append(0)
        if "--addr1" in options:
            interface_ids.append(1)
        for interface in interface_ids:
            ring_options = {}
            ring_options["addr"] = options["--addr{0}".format(interface)]
            if "--broadcast{0}".format(interface) in options:
                ring_options["broadcast"] = True
            else:
                if "--mcast{0}".format(interface) in options:
                    mcastaddr = options["--mcast{0}".format(interface)]
                else:
                    mcastaddr = "239.255.{0}.1".format(interface + 1)
                ring_options["mcastaddr"] = mcastaddr
                if "--mcastport{0}".format(interface) in options:
                    mcastport = options["--mcastport{0}".format(interface)]
                else:
                    mcastport = "5405"
                ring_options["mcastport"] = mcastport
                if "--ttl{0}".format(interface) in options:
                    ring_options["ttl"] = options["--ttl{0}".format(interface)]
            parsed["transport_options"]["rings_options"].append(ring_options)

    if "--ipv6" in options:
        parsed["transport_options"]["ip_version"] = "ipv6"

    quorum_options_names = {
        "--wait_for_all": "wait_for_all",
        "--auto_tie_breaker": "auto_tie_breaker",
        "--last_man_standing": "last_man_standing",
        "--last_man_standing_window": "last_man_standing_window",
    }
    for opt_name, parsed_name in quorum_options_names.items():
        if opt_name in options:
            parsed["quorum_options"][parsed_name] = options[opt_name]

    for opt_name in (
        "--wait_for_all", "--auto_tie_breaker", "--last_man_standing"
    ):
        allowed_values = ('0', '1')
        if opt_name in options and options[opt_name] not in allowed_values:
            messages.append(ReportItem.error(
                error_codes.INVALID_OPTION_VALUE,
                "'{option_value}' is not a valid value for {option_name}, "
                + "use {allowed_values}",
                info={
                    'option_name': opt_name,
                    'option_value': options[opt_name],
                    'allowed_values_raw': allowed_values,
                    'allowed_values': ' or '.join(allowed_values),
                },
            ))

    return parsed, messages

def cluster_setup_parse_options_cman(options):
    messages = []
    parsed = {
        "transport_options": {
            "rings_options": [],
        },
        "totem_options": {},
    }

    broadcast = ("--broadcast0" in options) or ("--broadcast1" in options)
    if broadcast:
        transport = "udpb"
        parsed["transport_options"]["broadcast"] = True
        ring_missing_broadcast = None
        if "--broadcast0" not in options:
            ring_missing_broadcast = "0"
        if "--broadcast1" not in options:
            ring_missing_broadcast = "1"
        if ring_missing_broadcast:
            messages.append(ReportItem.warning(
                error_codes.CMAN_BROADCAST_ALL_RINGS,
                'Enabling broadcast for ring {ring_missing_broadcast}'
                + ' as CMAN does not support broadcast in only one ring',
                info={'ring_missing_broadcast': ring_missing_broadcast}
            ))
    else:
        transport = "udp"
        if "--transport" in options:
            transport = options["--transport"]
            if transport not in ("udp", "udpu"):
                messages.append(ReportItem.error(
                    error_codes.UNKNOWN_TRANSPORT,
                    "unknown transport '{transport}'",
                    info={'transport': transport},
                    forceable=True,
                ))
    parsed["transport_options"]["transport"] = transport

    if transport == "udpu":
        messages.append(ReportItem.warning(
            error_codes.CMAN_UDPU_RESTART_REQUIRED,
            "Using udpu transport on a CMAN cluster, "
            + "cluster restart is required after node add or remove",
        ))
    if transport == "udpu" and ("--addr0" in options or "--addr1" in options):
        messages.append(ReportItem.error(
            error_codes.NON_UDP_TRANSPORT_ADDR_MISMATCH,
            '--addr0 and --addr1 can only be used with --transport=udp',
        ))

    rrpmode = None
    if "--rrpmode" in options or "--addr0" in options:
        rrpmode = "passive"
        if "--rrpmode" in options:
            rrpmode = options["--rrpmode"]
        if rrpmode not in ("passive", "active"):
            messages.append(ReportItem.error(
                error_codes.UNKNOWN_RRP_MODE,
                '{rrpmode} is an unknown RRP mode',
                info={'rrpmode': rrpmode},
                forceable=True,
            ))
        if rrpmode == "active":
            messages.append(ReportItem.error(
                error_codes.RRP_ACTIVE_NOT_SUPPORTED,
                "using a RRP mode of 'active' is not supported or tested",
                forceable=True,
            ))
    if rrpmode:
        parsed["transport_options"]["rrp_mode"] = rrpmode

    totem_options_names = {
        "--token": "token",
        "--join": "join",
        "--consensus": "consensus",
        "--miss_count_const": "miss_count_const",
        "--fail_recv_const": "fail_recv_const",
    }
    for opt_name, parsed_name in totem_options_names.items():
        if opt_name in options:
            parsed["totem_options"][parsed_name] = options[opt_name]

    if not broadcast:
        for interface in (0, 1):
            if "--addr{0}".format(interface) not in options:
                continue
            ring_options = {}
            if "--mcast{0}".format(interface) in options:
                mcastaddr = options["--mcast{0}".format(interface)]
            else:
                mcastaddr = "239.255.{0}.1".format(interface + 1)
            ring_options["mcastaddr"] = mcastaddr
            if "--mcastport{0}".format(interface) in options:
                ring_options["mcastport"] = options[
                    "--mcastport{0}".format(interface)
                ]
            if "--ttl{0}".format(interface) in options:
                ring_options["ttl"] = options["--ttl{0}".format(interface)]
            parsed["transport_options"]["rings_options"].append(ring_options)

    ignored_options_names = (
        "--wait_for_all",
        "--auto_tie_breaker",
        "--last_man_standing",
        "--last_man_standing_window",
        "--token_coefficient",
        "--ipv6",
    )
    for opt_name in ignored_options_names:
        if opt_name in options:
            messages.append(ReportItem.warning(
                error_codes.IGNORED_CMAN_UNSUPPORTED_OPTION,
                '{option_name} ignored as it is not supported on CMAN clusters',
                info={'option_name': opt_name}
            ))

    return parsed, messages

def cluster_setup_create_corosync_conf(
    cluster_name, node_list, transport_options, totem_options, quorum_options
):
    messages = []

    corosync_conf = corosync_conf_utils.Section("")
    totem_section = corosync_conf_utils.Section("totem")
    nodelist_section = corosync_conf_utils.Section("nodelist")
    quorum_section = corosync_conf_utils.Section("quorum")
    logging_section = corosync_conf_utils.Section("logging")
    corosync_conf.add_section(totem_section)
    corosync_conf.add_section(nodelist_section)
    corosync_conf.add_section(quorum_section)
    corosync_conf.add_section(logging_section)

    totem_section.add_attribute("version", "2")
    totem_section.add_attribute("secauth", "off")
    totem_section.add_attribute("cluster_name", cluster_name)

    transport_options_names = (
        "transport",
        "rrp_mode",
        "ip_version",
    )
    for opt_name in transport_options_names:
        if opt_name in transport_options:
            totem_section.add_attribute(opt_name, transport_options[opt_name])

    totem_options_names = (
        "token",
        "token_coefficient",
        "join",
        "consensus",
        "miss_count_const",
        "fail_recv_const",
    )
    for opt_name in totem_options_names:
        if opt_name in totem_options:
            totem_section.add_attribute(opt_name, totem_options[opt_name])

    transport = None
    if "transport" in transport_options:
        transport = transport_options["transport"]
    if transport == "udp":
        if "rings_options" in transport_options:
            for ring_number, ring_options in enumerate(
                transport_options["rings_options"]
            ):
                interface_section = corosync_conf_utils.Section("interface")
                totem_section.add_section(interface_section)
                interface_section.add_attribute("ringnumber", ring_number)
                if "addr" in ring_options:
                    interface_section.add_attribute(
                        "bindnetaddr",
                        ring_options["addr"]
                    )
                if "broadcast" in ring_options and ring_options["broadcast"]:
                    interface_section.add_attribute("broadcast", "yes")
                else:
                    for opt_name in ("mcastaddr", "mcastport", "ttl"):
                        if opt_name in ring_options:
                            interface_section.add_attribute(
                                opt_name, ring_options[opt_name]
                            )

    for node_id, node_options in enumerate(node_list, 1):
        node_section = corosync_conf_utils.Section("node")
        nodelist_section.add_section(node_section)
        for opt_name in ("ring0_addr", "ring1_addr"):
            if opt_name in node_options:
                node_section.add_attribute(opt_name, node_options[opt_name])
        node_section.add_attribute("nodeid", node_id)

    quorum_section.add_attribute("provider", "corosync_votequorum")
    quorum_options_names = (
        "wait_for_all",
        "auto_tie_breaker",
        "last_man_standing",
        "last_man_standing_window",
    )
    for opt_name in quorum_options_names:
        if opt_name in quorum_options:
            quorum_section.add_attribute(opt_name, quorum_options[opt_name])
    auto_tie_breaker = (
        "auto_tie_breaker" in quorum_options
        and quorum_options["auto_tie_breaker"] == "1"
    )
    if len(node_list) == 2 and not auto_tie_breaker:
        quorum_section.add_attribute("two_node", "1")

    logging_section.add_attribute("to_logfile", "yes")
    logging_section.add_attribute("logfile", "/var/log/cluster/corosync.log")
    logging_section.add_attribute("to_syslog", "yes")

    return str(corosync_conf), messages

def cluster_setup_create_cluster_conf(
    cluster_name, node_list, transport_options, totem_options
):
    broadcast = (
        "broadcast" in transport_options and transport_options["broadcast"]
    )

    commands = []
    commands.append({
        "cmd": ["-i", "--createcluster", cluster_name],
        "err": "error creating cluster: {0}".format(cluster_name),
    })
    commands.append({
        "cmd": ["-i", "--addfencedev", "pcmk-redirect", "agent=fence_pcmk"],
        "err": "error creating fence dev: {0}".format(cluster_name),
    })

    cman_opts = []
    if "transport" in transport_options:
        cman_opts.append("transport=" + transport_options["transport"])
    cman_opts.append("broadcast=" + ("yes" if broadcast else "no"))
    if len(node_list) == 2:
        cman_opts.append("two_node=1")
        cman_opts.append("expected_votes=1")
    commands.append({
        "cmd": ["--setcman"] + cman_opts,
        "err": "error setting cman options",
    })

    for node_options in node_list:
        if "ring0_addr" in node_options:
            ring0_addr = node_options["ring0_addr"]
            commands.append({
                "cmd": ["--addnode", ring0_addr],
                "err": "error adding node: {0}".format(ring0_addr),
            })
            if "ring1_addr" in node_options:
                ring1_addr = node_options["ring1_addr"]
                commands.append({
                    "cmd": ["--addalt", ring0_addr, ring1_addr],
                    "err": (
                        "error adding alternative address for node: {0}".format(
                            ring0_addr
                        )
                    ),
                })
            commands.append({
                "cmd": ["-i", "--addmethod", "pcmk-method", ring0_addr],
                "err": "error adding fence method: {0}".format(ring0_addr),
            })
            commands.append({
                "cmd": [
                    "-i", "--addfenceinst", "pcmk-redirect", ring0_addr,
                    "pcmk-method", "port=" + ring0_addr
                ],
                "err": "error adding fence instance: {0}".format(ring0_addr),
            })

    if not broadcast:
        if "rings_options" in transport_options:
            for ring_number, ring_options in enumerate(
                transport_options["rings_options"]
            ):
                mcast_options = []
                if "mcastaddr" in ring_options:
                    mcast_options.append(ring_options["mcastaddr"])
                if "mcastport" in ring_options:
                    mcast_options.append("port=" + ring_options["mcastport"])
                if "ttl" in ring_options:
                    mcast_options.append("ttl=" + ring_options["ttl"])
                if ring_number == 0:
                    cmd_name = "--setmulticast"
                else:
                    cmd_name = "--setaltmulticast"
                commands.append({
                    "cmd": [cmd_name] + mcast_options,
                    "err": "error adding ring{0} settings".format(ring_number),
                })

    totem_options_names = (
        "token",
        "join",
        "consensus",
        "miss_count_const",
        "fail_recv_const",
    )
    totem_cmd_options = []
    for opt_name in totem_options_names:
        if opt_name in totem_options:
            totem_cmd_options.append(
                "{0}={1}".format(opt_name, totem_options[opt_name])
            )
    if "rrp_mode" in transport_options:
        totem_cmd_options.append(
            "rrp_mode={0}".format(transport_options["rrp_mode"])
        )
    if totem_cmd_options:
        commands.append({
            "cmd": ["--settotem"] + totem_cmd_options,
            "err": "error setting totem options",
        })

    messages = []
    conf_temp = tempfile.NamedTemporaryFile(mode="w+", suffix=".pcs")
    conf_path = conf_temp.name
    cmd_prefix = ["ccs", "-f", conf_path]
    for cmd_item in commands:
        output, retval = utils.run(cmd_prefix + cmd_item["cmd"])
        if retval != 0:
            if output:
                messages.append(
                    ReportItem.info(error_codes.COMMON_INFO, output)
                )
            messages.append(
                ReportItem.error(error_codes.COMMON_ERROR, cmd_item["err"])
            )
            conf_temp.close()
            return "", messages
    conf_temp.seek(0)
    cluster_conf = conf_temp.read()
    conf_temp.close()
    return cluster_conf, messages

def get_local_network():
    args = ["/sbin/ip", "route"]
    # universal_newlines makes stdout a str so the regex below can match it
    p = subprocess.Popen(args, stdout=subprocess.PIPE, universal_newlines=True)
    iproute_out = p.stdout.read()
    network_addr = re.search(r"^([0-9\.]+)", iproute_out)
    if network_addr:
        return network_addr.group(1)
    else:
        utils.err("unable to determine network address, is interface up?")

def start_cluster(argv):
    if len(argv) > 0:
        start_cluster_nodes(argv)
        return

    print("Starting Cluster...")
    if utils.is_rhel6():
        # Verify that CMAN_QUORUM_TIMEOUT is set, if not, then we set it to 0
        retval, output = getstatusoutput(
            'source /etc/sysconfig/cman ; [ -z "$CMAN_QUORUM_TIMEOUT" ]'
        )
        if retval == 0:
            with open("/etc/sysconfig/cman", "a") as cman_conf_file:
                cman_conf_file.write("\nCMAN_QUORUM_TIMEOUT=0\n")

        output, retval = utils.run(["service", "cman", "start"])
        if retval != 0:
            print(output)
            utils.err("unable to start cman")
    else:
        output, retval = utils.run(["service", "corosync", "start"])
        if retval != 0:
            print(output)
            utils.err("unable to start corosync")
    output, retval = utils.run(["service", "pacemaker", "start"])
    if retval != 0:
        print(output)
        utils.err("unable to start pacemaker")

def start_cluster_all():
    start_cluster_nodes(utils.getNodesFromCorosyncConf())

def start_cluster_nodes(nodes):
    threads = dict()
    for node in nodes:
        threads[node] = NodeStartThread(node)
    error_list = utils.run_node_threads(threads)
    if error_list:
        utils.err("unable to start all nodes\n" + "\n".join(error_list))

def stop_cluster_all():
    stop_cluster_nodes(utils.getNodesFromCorosyncConf())

def stop_cluster_nodes(nodes):
    all_nodes = utils.getNodesFromCorosyncConf()
    unknown_nodes = set(nodes) - set(all_nodes)
    if unknown_nodes:
        utils.err(
            "nodes '%s' do not appear to exist in configuration"
            % "', '".join(unknown_nodes)
        )

    stopping_all = set(nodes) >= set(all_nodes)
    if "--force" not in utils.pcs_options and not stopping_all:
        error_list = []
        for node in nodes:
            retval, data = utils.get_remote_quorumtool_output(node)
            if retval != 0:
                error_list.append(node + ": " + data)
                continue
            # we are sure whether we are on cman cluster or not because only
            # nodes from a local cluster can be stopped (see nodes validation
            # above)
            if utils.is_rhel6():
                quorum_info = utils.parse_cman_quorum_info(data)
            else:
                quorum_info = utils.parse_quorumtool_output(data)
            if quorum_info:
                if not quorum_info["quorate"]:
                    continue
                if utils.is_node_stop_cause_quorum_loss(
                    quorum_info, local=False, node_list=nodes
                ):
                    utils.err(
                        "Stopping the node(s) will cause a loss of the quorum"
                        + ", use --force to override"
                    )
                else:
                    # We have the info, no need to print errors
                    error_list = []
                    break
            if not utils.is_node_offline_by_quorumtool_output(data):
                error_list.append("Unable to get quorum status")
            # else the node seems to be stopped already
        if error_list:
            utils.err(
                "Unable to determine whether stopping the nodes will cause "
                + "a loss of the quorum, use --force to override\n"
                + "\n".join(error_list)
            )

    threads = dict()
    for node in nodes:
        threads[node] = NodeStopPacemakerThread(node)
    error_list = utils.run_node_threads(threads)
    if error_list:
        utils.err("unable to stop all nodes\n" + "\n".join(error_list))

    threads = dict()
    for node in nodes:
        threads[node] = NodeStopCorosyncThread(node)
    error_list = utils.run_node_threads(threads)
    if error_list:
        utils.err("unable to stop all nodes\n" + "\n".join(error_list))

def node_standby(argv, standby=True):
    if len(argv) > 1:
        if standby:
            usage.cluster(["standby"])
        else:
            usage.cluster(["unstandby"])
        sys.exit(1)

    nodes = utils.getNodesFromPacemaker()

    if "--all" not in utils.pcs_options:
        options_node = []
        if argv:
            if argv[0] not in nodes:
                utils.err(
                    "node '%s' does not appear to exist in configuration"
                    % argv[0]
                )
            else:
                options_node = ["-N", argv[0]]
        if standby:
            utils.run(["crm_standby", "-v", "on"] + options_node)
        else:
            utils.run(["crm_standby", "-D"] + options_node)
    else:
        for node in nodes:
            if standby:
                utils.run(["crm_standby", "-v", "on", "-N", node])
            else:
                utils.run(["crm_standby", "-D", "-N", node])

def enable_cluster(argv):
    if len(argv) > 0:
        enable_cluster_nodes(argv)
        return
    utils.enableServices()

def disable_cluster(argv):
    if len(argv) > 0:
        disable_cluster_nodes(argv)
        return
    utils.disableServices()

def enable_cluster_all():
    enable_cluster_nodes(utils.getNodesFromCorosyncConf())

def disable_cluster_all():
    disable_cluster_nodes(utils.getNodesFromCorosyncConf())

def enable_cluster_nodes(nodes):
    error_list = utils.map_for_error_list(utils.enableCluster, nodes)
    if len(error_list) > 0:
        utils.err("unable to enable all nodes\n" + "\n".join(error_list))

def disable_cluster_nodes(nodes):
    error_list = utils.map_for_error_list(utils.disableCluster, nodes)
    if len(error_list) > 0:
        utils.err("unable to disable all nodes\n" + "\n".join(error_list))

def destroy_cluster(argv):
    if len(argv) > 0:
        # stop pacemaker and resources while cluster is still quorate
        threads = dict()
        for node in argv:
            threads[node] = NodeStopPacemakerThread(node)
        error_list = utils.run_node_threads(threads)
        # proceed with destroy regardless of errors
        # destroy will stop any remaining cluster daemons
        threads = dict()
        for node in argv:
            threads[node] = NodeDestroyThread(node)
        error_list = utils.run_node_threads(threads)
        if error_list:
            utils.err("unable to destroy cluster\n" + "\n".join(error_list))

def stop_cluster(argv):
    if len(argv) > 0:
        stop_cluster_nodes(argv)
        return

    if "--force" not in utils.pcs_options:
        if utils.is_rhel6():
            output_status, retval = utils.run(["cman_tool", "status"])
            output_nodes, retval = utils.run([
                "cman_tool", "nodes", "-F", "id,type,votes,name"
            ])
            if output_status == output_nodes:
                # when both commands return the same error
                output = output_status
            else:
                output = output_status + "\n---Votes---\n" + output_nodes
            quorum_info = utils.parse_cman_quorum_info(output)
        else:
            output, retval = utils.run(["corosync-quorumtool", "-p", "-s"])
            # retval is 0 on success if node is not in partition with quorum
            # retval is 1 on error OR on success if node has quorum
            quorum_info = utils.parse_quorumtool_output(output)
        if quorum_info:
            if utils.is_node_stop_cause_quorum_loss(quorum_info, local=True):
                utils.err(
                    "Stopping the node will cause a loss of the quorum"
                    + ", use --force to override"
                )
        elif not utils.is_node_offline_by_quorumtool_output(output):
            utils.err(
                "Unable to determine whether stopping the node will cause "
                + "a loss of the quorum, use --force to override"
            )
        # else the node seems to be stopped already, proceed to be sure

    stop_all = (
        "--pacemaker" not in utils.pcs_options
        and "--corosync" not in utils.pcs_options
    )
    if stop_all or "--pacemaker" in utils.pcs_options:
        stop_cluster_pacemaker()
    if stop_all or "--corosync" in utils.pcs_options:
        stop_cluster_corosync()

def stop_cluster_pacemaker():
    print("Stopping Cluster (pacemaker)...")
    output, retval = utils.run(["service", "pacemaker", "stop"])
    if retval != 0:
        print(output)
        utils.err("unable to stop pacemaker")

def stop_cluster_corosync():
    if utils.is_rhel6():
        print("Stopping Cluster (cman)...")
        output, retval = utils.run(["service", "cman", "stop"])
        if retval != 0:
            print(output)
            utils.err("unable to stop cman")
    else:
        print("Stopping Cluster (corosync)...")
        output, retval = utils.run(["service", "corosync", "stop"])
        if retval != 0:
            print(output)
            utils.err("unable to stop corosync")

def kill_cluster(argv):
    daemons = [
        "crmd", "pengine", "attrd", "lrmd", "stonithd", "cib",
        "pacemakerd", "corosync",
    ]
    output, retval = utils.run(["killall", "-9"] + daemons)
#    if retval != 0:
#        print "Error: unable to execute killall -9"
#        print output
#        sys.exit(1)

def cluster_push(argv):
    if len(argv) > 2:
        usage.cluster(["cib-push"])
        sys.exit(1)

    filename = None
    scope = None
    for arg in argv:
        if "=" not in arg:
            filename = arg
        else:
            arg_name, arg_value = arg.split("=", 1)
            if arg_name == "scope" and "--config" not in utils.pcs_options:
                if not utils.is_valid_cib_scope(arg_value):
                    utils.err("invalid CIB scope '%s'" % arg_value)
                else:
                    scope = arg_value
            else:
                usage.cluster(["cib-push"])
                sys.exit(1)
    if "--config" in utils.pcs_options:
        scope = "configuration"
    if not filename:
        usage.cluster(["cib-push"])
        sys.exit(1)

    try:
        new_cib_dom = xml.dom.minidom.parse(filename)
        if scope and not new_cib_dom.getElementsByTagName(scope):
            utils.err(
                "unable to push cib, scope '%s' not present in new cib"
                % scope
            )
    except (EnvironmentError, xml.parsers.expat.ExpatError) as e:
        utils.err("unable to parse new cib: %s" % e)

    command = ["cibadmin", "--replace", "--xml-file", filename]
    if scope:
        command.append("--scope=%s" % scope)
    output, retval = utils.run(command)
    if retval != 0:
        utils.err("unable to push cib\n" + output)
    else:
        print("CIB updated")

def cluster_upgrade():
    output, retval = utils.run(["cibadmin", "--upgrade", "--force"])
    if retval != 0:
        utils.err("unable to upgrade cluster: %s" % output)
    print("Cluster CIB has been upgraded to latest version")

def cluster_edit(argv):
    if 'EDITOR' in os.environ:
        if len(argv) > 1:
            usage.cluster(["edit"])
            sys.exit(1)

        scope = None
        scope_arg = ""
        for arg in argv:
            if "=" not in arg:
                usage.cluster(["edit"])
                sys.exit(1)
            else:
                arg_name, arg_value = arg.split("=", 1)
                if arg_name == "scope" and "--config" not in utils.pcs_options:
                    if not utils.is_valid_cib_scope(arg_value):
                        utils.err("invalid CIB scope '%s'" % arg_value)
                    else:
                        scope_arg = arg
                        scope = arg_value
                else:
                    usage.cluster(["edit"])
                    sys.exit(1)
        if "--config" in utils.pcs_options:
            scope = "configuration"
            # Leave scope_arg empty as cluster_push will pick up a --config
            # option from utils.pcs_options
            scope_arg = ""

        editor = os.environ['EDITOR']
        tempcib = tempfile.NamedTemporaryFile(mode="w+", suffix=".pcs")
        cib = utils.get_cib(scope)
        tempcib.write(cib)
        tempcib.flush()
        try:
            subprocess.call([editor, tempcib.name])
        except OSError:
            utils.err("unable to open file with $EDITOR: " + editor)

        tempcib.seek(0)
        newcib = "".join(tempcib.readlines())
        if newcib == cib:
            print("CIB not updated, no changes detected")
        else:
            cluster_push([arg for arg in [tempcib.name, scope_arg] if arg])
    else:
        utils.err("$EDITOR environment variable is not set")

def get_cib(argv):
    if len(argv) > 2:
        usage.cluster(["cib"])
        sys.exit(1)

    filename = None
    scope = None
    for arg in argv:
        if "=" not in arg:
            filename = arg
        else:
            arg_name, arg_value = arg.split("=", 1)
            if arg_name == "scope" and "--config" not in utils.pcs_options:
                if not utils.is_valid_cib_scope(arg_value):
                    utils.err("invalid CIB scope '%s'" % arg_value)
                else:
                    scope = arg_value
            else:
                usage.cluster(["cib"])
                sys.exit(1)
    if "--config" in utils.pcs_options:
        scope = "configuration"

    if not filename:
        print(utils.get_cib(scope), end="")
    else:
        try:
            with open(filename, 'w') as f:
                output = utils.get_cib(scope)
                if output != "":
                    f.write(output)
                else:
                    utils.err("No data in the CIB")
        except IOError as e:
            utils.err(
                "Unable to write to file '%s', %s" % (filename, e.strerror)
            )

def cluster_node(argv):
    if len(argv) != 2:
        usage.cluster()
        sys.exit(1)

    if argv[0] == "add":
        add_node = True
    elif argv[0] in ["remove", "delete"]:
        add_node = False
    else:
        usage.cluster()
        sys.exit(1)

    node = argv[1]
    node0, node1 = utils.parse_multiring_node(node)
    if not node0:
        utils.err("missing ring 0 address of the node")

    status, output = utils.checkAuthorization(node0)
    if status == 2:
        utils.err("pcsd is not running on %s" % node0)
    elif status == 3:
        utils.err(
            "%s is not yet authenticated (try pcs cluster auth %s)"
            % (node0, node0)
        )
    elif status != 0:
        utils.err(output)

    if add_node:
        need_ring1_address = utils.need_ring1_address(utils.getCorosyncConf())
        if not node1 and need_ring1_address:
            utils.err(
                "cluster is configured for RRP, "
                "you have to specify ring 1 address for the node"
            )
        elif node1 and not need_ring1_address:
            utils.err(
                "cluster is not configured for RRP, "
                "you must not specify ring 1 address for the node"
            )

        corosync_conf = None
        (canAdd, error) = utils.canAddNodeToCluster(node0)
        if not canAdd:
            utils.err("Unable to add '%s' to cluster: %s" % (node0, error))

        for my_node in utils.getNodesFromCorosyncConf():
            retval, output = utils.addLocalNode(my_node, node0, node1)
            if retval != 0:
                utils.err(
                    "unable to add %s on %s - %s"
                    % (node0, my_node, output.strip()),
                    False
                )
            else:
                print("%s: Corosync updated" % my_node)
                corosync_conf = output

        if corosync_conf != None:
            # send local cluster pcsd configs to the new node
            # may be used for sending corosync config as well in future
            pcsd_data = {
                'nodes': [node0],
                'force': True,
            }
            output, retval = utils.run_pcsdcli('send_local_configs', pcsd_data)
            if retval != 0:
                utils.err("Unable to set pcsd configs")
            if output['status'] == 'notauthorized':
                utils.err(
                    "Unable to authenticate to " + node0
                    + ", try running 'pcs cluster auth'"
                )
            if output['status'] == 'ok' and output['data']:
                try:
                    node_response = output['data'][node0]
                    if node_response['status'] not in ['ok', 'not_supported']:
                        utils.err("Unable to set pcsd configs")
                except Exception:
                    utils.err('Unable to communicate with pcsd')

            utils.setCorosyncConfig(node0, corosync_conf)
            if "--enable" in utils.pcs_options:
                retval, err = utils.enableCluster(node0)
                if retval != 0:
                    print("Warning: enable cluster - {0}".format(err))
            if "--start" in utils.pcs_options or utils.is_rhel6():
                # always start new node on cman cluster
                # otherwise it will get fenced
                retval, err = utils.startCluster(node0)
                if retval != 0:
                    print("Warning: start cluster - {0}".format(err))

            pcsd.pcsd_sync_certs([node0], exit_after_error=False)
        else:
            utils.err("Unable to update any nodes")
        output, retval = utils.reloadCorosync()
        if utils.is_cman_with_udpu_transport():
            print(
                "Warning: Using udpu transport on a CMAN cluster, "
                + "cluster restart is required to apply node addition"
            )
    else:
        if node0 not in utils.getNodesFromCorosyncConf():
            utils.err(
                "node '%s' does not appear to exist in configuration" % node0
            )
        if "--force" not in utils.pcs_options:
            retval, data = utils.get_remote_quorumtool_output(node0)
            if retval != 0:
                utils.err(
                    "Unable to determine whether removing the node will cause "
                    + "a loss of the quorum, use --force to override\n"
                    + data
                )
            # we are sure whether we are on cman cluster or not because only
            # nodes from a local cluster can be stopped (see nodes validation
            # above)
            if utils.is_rhel6():
                quorum_info = utils.parse_cman_quorum_info(data)
            else:
                quorum_info = utils.parse_quorumtool_output(data)
            if quorum_info:
                if utils.is_node_stop_cause_quorum_loss(
                    quorum_info, local=False, node_list=[node0]
                ):
                    utils.err(
                        "Removing the node will cause a loss of the quorum"
                        + ", use --force to override"
                    )
            elif not utils.is_node_offline_by_quorumtool_output(data):
                utils.err(
                    "Unable to determine whether removing the node will cause "
                    + "a loss of the quorum, use --force to override\n"
                    + data
                )
            # else the node seems to be stopped already, we're ok to proceed

        nodesRemoved = False
        c_nodes = utils.getNodesFromCorosyncConf()
        destroy_cluster([node0])
        for my_node in c_nodes:
            if my_node == node0:
                continue
            retval, output = utils.removeLocalNode(my_node, node0)
            if retval != 0:
                utils.err(
                    "unable to remove %s on %s - %s"
                    % (node0, my_node, output.strip()),
                    False
                )
            else:
                if output[0] == 0:
                    print("%s: Corosync updated" % my_node)
                    nodesRemoved = True
                else:
                    utils.err(
                        "%s: Error occurred executing command: %s"
                        % (my_node, "".join(output[1])),
                        False
                    )
        if nodesRemoved == False:
            utils.err("Unable to update any nodes")

        output, retval = utils.reloadCorosync()
        output, retval = utils.run(["crm_node", "--force", "-R", node0])
        if utils.is_cman_with_udpu_transport():
            print(
                "Warning: Using udpu transport on a CMAN cluster, "
                + "cluster restart is required to apply node removal"
            )

def cluster_localnode(argv):
    if len(argv) != 2:
        usage.cluster()
        exit(1)
    elif argv[0] == "add":
        node = argv[1]
        if not utils.is_rhel6():
            success = utils.addNodeToCorosync(node)
        else:
            success = utils.addNodeToClusterConf(node)
        if success:
            print("%s: successfully added!" % node)
        else:
            utils.err("unable to add %s" % node)
    elif argv[0] in ["remove", "delete"]:
        node = argv[1]
        if not utils.is_rhel6():
            success = utils.removeNodeFromCorosync(node)
        else:
            success = utils.removeNodeFromClusterConf(node)
        if success:
            print("%s: successfully removed!" % node)
        else:
            utils.err("unable to remove %s" % node)
    else:
        usage.cluster()
        exit(1)

def cluster_uidgid_rhel6(argv, silent_list=False):
    if not os.path.isfile(settings.cluster_conf_file):
        utils.err(
            "the file %s doesn't exist on this machine, "
            "create a cluster before running this command"
            % settings.cluster_conf_file
        )
    if len(argv) == 0:
        found = False
        output, retval = utils.run(
            ["ccs", "-f", settings.cluster_conf_file, "--lsmisc"]
        )
        if retval != 0:
            utils.err("error running ccs\n" + output)
        lines = output.split('\n')
        for line in lines:
            if line.startswith('UID/GID: '):
                print(line)
                found = True
        if not found and not silent_list:
            print("No uidgids configured in cluster.conf")
        return

    command = argv.pop(0)
    uid = ""
    gid = ""
    if (command == "add" or command == "rm") and len(argv) > 0:
        for arg in argv:
            if arg.find('=') == -1:
                utils.err(
                    "uidgid options must be of the form uid=<uid> gid=<gid>"
                )
            (k, v) = arg.split('=', 1)
            if k != "uid" and k != "gid":
                utils.err("%s is not a valid key, you must use uid or gid" % k)
            if k == "uid":
                uid = v
            if k == "gid":
                gid = v
        if uid == "" and gid == "":
            utils.err("you must set either uid or gid")

        if command == "add":
            output, retval = utils.run([
                "ccs", "-f", settings.cluster_conf_file,
                "--setuidgid", "uid=" + uid, "gid=" + gid
            ])
            if retval != 0:
                utils.err("unable to add uidgid\n" + output.rstrip())
        elif command == "rm":
            output, retval = utils.run([
                "ccs", "-f", settings.cluster_conf_file,
                "--rmuidgid", "uid=" + uid, "gid=" + gid
            ])
            if retval != 0:
utils.err("unable to remove uidgid\n" + output.rstrip()) # If we make a change, we sync out the changes to all nodes unless we're using -f if not utils.usefile: sync_nodes(utils.getNodesFromCorosyncConf(), utils.getCorosyncConf()) else: usage.cluster(["uidgid"]) exit(1) def cluster_uidgid(argv, silent_list = False): if utils.is_rhel6(): cluster_uidgid_rhel6(argv, silent_list) return if len(argv) == 0: found = False uid_gid_files = os.listdir(settings.corosync_uidgid_dir) for ug_file in uid_gid_files: uid_gid_dict = utils.read_uid_gid_file(ug_file) if "uid" in uid_gid_dict or "gid" in uid_gid_dict: line = "UID/GID: uid=" if "uid" in uid_gid_dict: line += uid_gid_dict["uid"] line += " gid=" if "gid" in uid_gid_dict: line += uid_gid_dict["gid"] print(line) found = True if not found and not silent_list: print("No uidgids configured in cluster.conf") return command = argv.pop(0) uid="" gid="" if (command == "add" or command == "rm") and len(argv) > 0: for arg in argv: if arg.find('=') == -1: utils.err("uidgid options must be of the form uid= gid=") (k,v) = arg.split('=',1) if k != "uid" and k != "gid": utils.err("%s is not a valid key, you must use uid or gid" %k) if k == "uid": uid = v if k == "gid": gid = v if uid == "" and gid == "": utils.err("you must set either uid or gid") if command == "add": utils.write_uid_gid_file(uid,gid) elif command == "rm": retval = utils.remove_uid_gid_file(uid,gid) if retval == False: utils.err("no uidgid files with uid=%s and gid=%s found" % (uid,gid)) else: usage.cluster(["uidgid"]) exit(1) def cluster_get_corosync_conf(argv): if utils.is_rhel6(): utils.err("corosync.conf is not supported on CMAN clusters") if len(argv) > 1: usage.cluster() exit(1) if len(argv) == 0: print(utils.getCorosyncConf(), end="") return node = argv[0] retval, output = utils.getCorosyncConfig(node) if retval != 0: utils.err(output) else: print(output, end="") def cluster_reload(argv): if len(argv) != 1 or argv[0] != "corosync": usage.cluster(["reload"]) 
exit(1) output, retval = utils.reloadCorosync() if retval != 0 or "invalid option" in output: utils.err(output.rstrip()) print("Corosync reloaded") # Completely tear down the cluster & remove config files # Code taken from cluster-clean script in pacemaker def cluster_destroy(argv): if "--all" in utils.pcs_options: destroy_cluster(utils.getNodesFromCorosyncConf()) else: print("Shutting down pacemaker/corosync services...") os.system("service pacemaker stop") os.system("service corosync stop") print("Killing any remaining services...") os.system("killall -q -9 corosync aisexec heartbeat pacemakerd ccm stonithd ha_logd lrmd crmd pengine attrd pingd mgmtd cib fenced dlm_controld gfs_controld") utils.disableServices() print("Removing all cluster configuration files...") if utils.is_rhel6(): os.system("rm -f /etc/cluster/cluster.conf") else: os.system("rm -f /etc/corosync/corosync.conf") state_files = ["cib.xml*", "cib-*", "core.*", "hostcache", "cts.*", "pe*.bz2","cib.*"] for name in state_files: os.system("find /var/lib -name '"+name+"' -exec rm -f \{\} \;") def cluster_verify(argv): nofilename = True if len(argv) == 1: filename = argv.pop(0) nofilename = False elif len(argv) > 1: usage.cluster("verify") options = [] if "-V" in utils.pcs_options: options.append("-V") if nofilename: options.append("--live-check") else: options.append("--xml-file") options.append(filename) output, retval = utils.run([settings.crm_verify] + options) if output != "": print(output) stonith.stonith_level_verify() return retval def cluster_report(argv): if len(argv) != 1: usage.cluster(["report"]) sys.exit(1) outfile = argv[0] dest_outfile = outfile + ".tar.bz2" if os.path.exists(dest_outfile): if "--force" not in utils.pcs_options: utils.err(dest_outfile + " already exists, use --force to overwrite") else: try: os.remove(dest_outfile) except OSError as e: utils.err("Unable to remove " + dest_outfile + ": " + e.strerror) crm_report_opts = [] crm_report_opts.append("-f") if "--from" in 
utils.pcs_options: crm_report_opts.append(utils.pcs_options["--from"]) if "--to" in utils.pcs_options: crm_report_opts.append("-t") crm_report_opts.append(utils.pcs_options["--to"]) else: yesterday = datetime.datetime.now() - datetime.timedelta(1) crm_report_opts.append(yesterday.strftime("%Y-%m-%d %H:%M")) crm_report_opts.append(outfile) output, retval = utils.run([settings.crm_report] + crm_report_opts) newoutput = "" for line in output.split("\n"): if line.startswith("cat:") or line.startswith("grep") or line.startswith("tail"): continue if "We will attempt to remove" in line: continue if "-p option" in line: continue if "However, doing" in line: continue if "to diagnose" in line: continue newoutput = newoutput + line + "\n" if retval != 0: utils.err(newoutput) print(newoutput) def cluster_remote_node(argv): if len(argv) < 1: usage.cluster(["remote-node"]) sys.exit(1) command = argv.pop(0) if command == "add": if len(argv) < 2: usage.cluster(["remote-node"]) sys.exit(1) hostname = argv.pop(0) rsc = argv.pop(0) if not utils.dom_get_resource(utils.get_cib_dom(), rsc): utils.err("unable to find resource '%s'" % rsc) resource.resource_update(rsc, ["meta", "remote-node="+hostname] + argv) elif command in ["remove","delete"]: if len(argv) < 1: usage.cluster(["remote-node"]) sys.exit(1) hostname = argv.pop(0) dom = utils.get_cib_dom() nvpairs = dom.getElementsByTagName("nvpair") nvpairs_to_remove = [] for nvpair in nvpairs: if nvpair.getAttribute("name") == "remote-node" and nvpair.getAttribute("value") == hostname: for np in nvpair.parentNode.getElementsByTagName("nvpair"): if np.getAttribute("name").startswith("remote-"): nvpairs_to_remove.append(np) if len(nvpairs_to_remove) == 0: utils.err("unable to remove: cannot find remote-node '%s'" % hostname) for nvpair in nvpairs_to_remove[:]: nvpair.parentNode.removeChild(nvpair) dom = constraint.remove_constraints_containing_node(dom, hostname) utils.replace_cib_configuration(dom) else:
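The noise-filtering loop in `cluster_report` above can be isolated into a small helper for clarity; this is a sketch (`filter_crm_report_output` is not a real pcs function) that keeps the same prefix and substring checks:

```python
def filter_crm_report_output(output):
    """Drop known-noise lines from crm_report output, as cluster_report does."""
    skip_prefixes = ("cat:", "grep", "tail")
    skip_substrings = (
        "We will attempt to remove",
        "-p option",
        "However, doing",
        "to diagnose",
    )
    kept = []
    for line in output.split("\n"):
        # str.startswith accepts a tuple, collapsing the chained conditions
        if line.startswith(skip_prefixes):
            continue
        if any(snippet in line for snippet in skip_substrings):
            continue
        kept.append(line)
    return "\n".join(kept) + "\n"
```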
usage.cluster(["remote-node"]) sys.exit(1) def cluster_quorum_unblock(argv): if len(argv) > 0: usage.cluster(["quorum", "unblock"]) sys.exit(1) if utils.is_rhel6(): utils.err("operation is not supported on CMAN clusters") output, retval = utils.run( ["corosync-cmapctl", "-g", "runtime.votequorum.wait_for_all_status"] ) if retval != 0: utils.err("unable to check quorum status") if output.split("=")[-1].strip() != "1": utils.err("cluster is not waiting for nodes to establish quorum") unjoined_nodes = ( set(utils.getNodesFromCorosyncConf()) - set(utils.getCorosyncActiveNodes()) ) if not unjoined_nodes: utils.err("no unjoined nodes found") for node in unjoined_nodes: stonith.stonith_confirm([node]) output, retval = utils.run( ["corosync-cmapctl", "-s", "quorum.cancel_wait_for_all", "u8", "1"] ) if retval != 0: utils.err("unable to cancel waiting for nodes") print("Quorum unblocked") startup_fencing = prop.get_set_properties().get("startup-fencing", "") utils.set_cib_property( "startup-fencing", "false" if startup_fencing.lower() != "false" else "true" ) utils.set_cib_property("startup-fencing", startup_fencing) print("Waiting for nodes cancelled") class NodeActionThread(threading.Thread): def __init__(self, node): super(NodeActionThread, self).__init__() self.node = node self.retval = 0 self.output = "" class NodeStartThread(NodeActionThread): def run(self): self.retval, self.output = utils.startCluster(self.node, quiet=True) class NodeStopPacemakerThread(NodeActionThread): def run(self): self.retval, self.output = utils.stopCluster( self.node, quiet=True, pacemaker=True, corosync=False ) class NodeStopCorosyncThread(NodeActionThread): def run(self): self.retval, self.output = utils.stopCluster( self.node, quiet=True, pacemaker=False, corosync=True ) class NodeDestroyThread(NodeActionThread): def run(self): self.retval, self.output = utils.destroyCluster(self.node, quiet=True) 
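The `NodeActionThread` subclasses above all follow one pattern: run a blocking utility call on a worker thread and stash its `(retval, output)` pair for the caller to inspect after `join()`. A generic, self-contained sketch of that pattern (the `CommandThread` name is hypothetical):

```python
import threading

class CommandThread(threading.Thread):
    """Run func(*args) on a worker thread and keep its (retval, output)
    result, mirroring how NodeStartThread wraps utils.startCluster()."""
    def __init__(self, func, *args):
        super(CommandThread, self).__init__()
        self.func = func
        self.args = args
        self.retval = 0
        self.output = ""

    def run(self):
        # store the result on the instance so the caller can read it
        # after join() -- Thread.run() itself cannot return a value
        self.retval, self.output = self.func(*self.args)
```

The caller starts one such thread per node, joins them all, then checks each thread's `retval`, which is what lets pcs start or stop many cluster nodes in parallel.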
# File: pcs-0.9.149/pcs/config.py
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import sys import os import os.path import re import datetime from io import BytesIO import tarfile import json from xml.dom.minidom import parse import logging import pwd import grp import time logging.basicConfig() # clufter needs logging set before imported try: import clufter.format_manager import clufter.filter_manager import clufter.command_manager no_clufter = False except ImportError: no_clufter = True import settings import utils import cluster import constraint import prop import resource import status import stonith import usage def config_cmd(argv): if len(argv) == 0: config_show(argv) return sub_cmd = argv.pop(0) if sub_cmd == "help": usage.config(argv) elif sub_cmd == "show": config_show(argv) elif sub_cmd == "backup": config_backup(argv) elif sub_cmd == "restore": config_restore(argv) elif sub_cmd == "checkpoint": if not argv: config_checkpoint_list() elif argv[0] == "view": config_checkpoint_view(argv[1:]) elif argv[0] == "restore": config_checkpoint_restore(argv[1:]) else: usage.config(["checkpoint"]) sys.exit(1) elif sub_cmd == "import-cman": config_import_cman(argv) elif sub_cmd == "export": if not argv: usage.config(["export"]) sys.exit(1) elif argv[0] == "pcs-commands": config_export_pcs_commands(argv[1:]) elif argv[0] == "pcs-commands-verbose": config_export_pcs_commands(argv[1:], True) else: usage.config(["export"]) sys.exit(1) else: usage.config() sys.exit(1) def config_show(argv): print("Cluster Name: %s" % utils.getClusterName()) status.nodes_status(["config"]) print() config_show_cib() cluster.cluster_uidgid([], True) def config_show_cib(): print("Resources:") utils.pcs_options["--all"] = 1 utils.pcs_options["--full"] = 1 resource.resource_show([]) print() print("Stonith Devices:")
resource.resource_show([], True) print("Fencing Levels:") print() stonith.stonith_level_show() constraint.location_show([]) constraint.order_show([]) constraint.colocation_show([]) print() del utils.pcs_options["--all"] print("Resources Defaults:") resource.show_defaults("rsc_defaults", indent=" ") print("Operations Defaults:") resource.show_defaults("op_defaults", indent=" ") print() prop.list_property([]) def config_backup(argv): if len(argv) > 1: usage.config(["backup"]) sys.exit(1) outfile_name = None if argv: outfile_name = argv[0] if not outfile_name.endswith(".tar.bz2"): outfile_name += ".tar.bz2" tar_data = config_backup_local() if outfile_name: ok, message = utils.write_file( outfile_name, tar_data, permissions=0o600, binary=True ) if not ok: utils.err(message) else: # in python3 stdout accepts str so we need to use buffer if hasattr(sys.stdout, "buffer"): sys.stdout.buffer.write(tar_data) else: sys.stdout.write(tar_data) def config_backup_local(): file_list = config_backup_path_list() tar_data = BytesIO() try: tarball = tarfile.open(fileobj=tar_data, mode="w|bz2") config_backup_add_version_to_tarball(tarball) for tar_path, path_info in file_list.items(): if ( not os.path.exists(path_info["path"]) and not path_info["required"] ): continue tarball.add(path_info["path"], tar_path) tarball.close() except (tarfile.TarError, EnvironmentError) as e: utils.err("unable to create tarball: %s" % e) tar = tar_data.getvalue() tar_data.close() return tar def config_restore(argv): if len(argv) > 1: usage.config(["restore"]) sys.exit(1) infile_name = infile_obj = None if argv: infile_name = argv[0] if not infile_name: # in python3 stdin returns str so we need to use buffer if hasattr(sys.stdin, "buffer"): infile_obj = BytesIO(sys.stdin.buffer.read()) else: infile_obj = BytesIO(sys.stdin.read()) if os.getuid() == 0: if "--local" in utils.pcs_options: config_restore_local(infile_name, infile_obj) else: config_restore_remote(infile_name, infile_obj) else: new_argv = 
['config', 'restore'] new_stdin = None if '--local' in utils.pcs_options: new_argv.append('--local') if infile_name: new_argv.append(os.path.abspath(infile_name)) else: new_stdin = infile_obj.read() err_msgs, exitcode, std_out, std_err = utils.call_local_pcsd( new_argv, True, new_stdin ) if err_msgs: for msg in err_msgs: utils.err(msg, False) sys.exit(1) print(std_out) sys.stderr.write(std_err) sys.exit(exitcode) def config_restore_remote(infile_name, infile_obj): extracted = { "version.txt": "", "corosync.conf": "", "cluster.conf": "", } try: tarball = tarfile.open(infile_name, "r|*", infile_obj) while True: # next(tarball) does not work in python2.6 tar_member_info = tarball.next() if tar_member_info is None: break if tar_member_info.name in extracted: tar_member = tarball.extractfile(tar_member_info) extracted[tar_member_info.name] = tar_member.read() tar_member.close() tarball.close() except (tarfile.TarError, EnvironmentError) as e: utils.err("unable to read the tarball: %s" % e) config_backup_check_version(extracted["version.txt"]) node_list = utils.getNodesFromCorosyncConf( extracted["cluster.conf" if utils.is_rhel6() else "corosync.conf"].decode("utf-8") ) if not node_list: utils.err("no nodes found in the tarball") err_msgs = [] for node in node_list: try: retval, output = utils.checkStatus(node) if retval != 0: err_msgs.append(output) continue status = json.loads(output) if status["corosync"] or status["pacemaker"] or status["cman"]: err_msgs.append( "Cluster is currently running on node %s. You need to stop " "the cluster in order to restore the configuration." % node ) continue except (ValueError, NameError): err_msgs.append("unable to determine status of the node %s" % node) if err_msgs: for msg in err_msgs: utils.err(msg, False) sys.exit(1) # Temporarily disable config files syncing thread in pcsd so it will not # rewrite restored files. 10 minutes should be enough time to restore. # If node returns HTTP 404 it does not support config syncing at all. 
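The `tarball.next()` loop in `config_restore_remote` (written that way because `next(tarball)` did not work on python2.6) can be demonstrated against an in-memory archive; `extract_named_members` is an illustrative name only, not part of pcs:

```python
import io
import tarfile

def extract_named_members(tar_bytes, wanted):
    """Stream through a tarball with tarball.next(), collecting the data of
    the named members, as config_restore_remote does for version.txt etc."""
    extracted = dict.fromkeys(wanted, b"")
    tarball = tarfile.open(fileobj=io.BytesIO(tar_bytes), mode="r|*")
    while True:
        member = tarball.next()  # next(tarball) does not work in python2.6
        if member is None:
            break
        if member.name in extracted:
            member_file = tarball.extractfile(member)
            extracted[member.name] = member_file.read()
            member_file.close()
    tarball.close()
    return extracted
```

The `"r|*"` mode matters: it reads the archive as a non-seekable stream with auto-detected compression, which is why the code must walk members in order rather than calling `getmember()` by name.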
for node in node_list: retval, output = utils.pauseConfigSyncing(node, 10 * 60) if not (retval == 0 or output.endswith("(HTTP error: 404)")): utils.err(output) if infile_obj: infile_obj.seek(0) tarball_data = infile_obj.read() else: with open(infile_name, "rb") as tarball: tarball_data = tarball.read() error_list = [] for node in node_list: retval, error = utils.restoreConfig(node, tarball_data) if retval != 0: error_list.append(error) if error_list: utils.err("unable to restore all nodes\n" + "\n".join(error_list)) def config_restore_local(infile_name, infile_obj): if ( status.is_cman_running() or status.is_corosyc_running() or status.is_pacemaker_running() ): utils.err( "Cluster is currently running on this node. You need to stop " "the cluster in order to restore the configuration." ) file_list = config_backup_path_list(with_uid_gid=True) tarball_file_list = [] version = None try: tarball = tarfile.open(infile_name, "r|*", infile_obj) while True: # next(tarball) does not work in python2.6 tar_member_info = tarball.next() if tar_member_info is None: break if tar_member_info.name == "version.txt": version_data = tarball.extractfile(tar_member_info) version = version_data.read() version_data.close() continue tarball_file_list.append(tar_member_info.name) tarball.close() required_file_list = [ tar_path for tar_path, path_info in file_list.items() if path_info["required"] ] missing = set(required_file_list) - set(tarball_file_list) if missing: utils.err( "unable to restore the cluster, missing files in backup: %s" % ", ".join(missing) ) config_backup_check_version(version) if infile_obj: infile_obj.seek(0) tarball = tarfile.open(infile_name, "r|*", infile_obj) while True: # next(tarball) does not work in python2.6 tar_member_info = tarball.next() if tar_member_info is None: break extract_info = None path = tar_member_info.name while path: if path in file_list: extract_info = file_list[path] break path = os.path.dirname(path) if not extract_info: continue path_extract 
= os.path.dirname(extract_info["path"]) tarball.extractall(path_extract, [tar_member_info]) path_full = os.path.join(path_extract, tar_member_info.name) file_attrs = extract_info["attrs"] os.chmod(path_full, file_attrs["mode"]) os.chown(path_full, file_attrs["uid"], file_attrs["gid"]) tarball.close() except (tarfile.TarError, EnvironmentError) as e: utils.err("unable to restore the cluster: %s" % e) try: sig_path = os.path.join(settings.cib_dir, "cib.xml.sig") if os.path.exists(sig_path): os.remove(sig_path) except EnvironmentError as e: utils.err("unable to remove %s: %s" % (sig_path, e)) def config_backup_path_list(with_uid_gid=False, force_rhel6=None): rhel6 = utils.is_rhel6() if force_rhel6 is None else force_rhel6 corosync_attrs = { "mtime": int(time.time()), "mode": 0o644, "uname": "root", "gname": "root", "uid": 0, "gid": 0, } cib_attrs = { "mtime": int(time.time()), "mode": 0o600, "uname": settings.pacemaker_uname, "gname": settings.pacemaker_gname, } if with_uid_gid: try: cib_attrs["uid"] = pwd.getpwnam(cib_attrs["uname"]).pw_uid except KeyError: utils.err( "Unable to determine uid of user '%s'" % cib_attrs["uname"] ) try: cib_attrs["gid"] = grp.getgrnam(cib_attrs["gname"]).gr_gid except KeyError: utils.err( "Unable to determine gid of group '%s'" % cib_attrs["gname"] ) file_list = { "cib.xml": { "path": os.path.join(settings.cib_dir, "cib.xml"), "required": True, "attrs": dict(cib_attrs), }, } if rhel6: file_list["cluster.conf"] = { "path": settings.cluster_conf_file, "required": True, "attrs": dict(corosync_attrs), } else: file_list["corosync.conf"] = { "path": settings.corosync_conf_file, "required": True, "attrs": dict(corosync_attrs), } file_list["uidgid.d"] = { "path": settings.corosync_uidgid_dir.rstrip("/"), "required": False, "attrs": dict(corosync_attrs), } file_list["pcs_settings.conf"] = { "path": settings.pcsd_settings_conf_location, "required": False, "attrs": { "mtime": int(time.time()), "mode": 0o644, "uname": "root", "gname": "root", 
"uid": 0, "gid": 0, }, } return file_list def config_backup_check_version(version): try: version_number = int(version) supported_version = config_backup_version() if version_number > supported_version: utils.err( "Unsupported version of the backup, " "supported version is %d, backup version is %d" % (supported_version, version_number) ) if version_number < supported_version: print( "Warning: restoring from the backup version %d, " "current supported version is %d" % (version_number, supported_version) ) except (TypeError, ValueError): utils.err("Cannot determine version of the backup") def config_backup_add_version_to_tarball(tarball, version=None): ver = version if version is not None else str(config_backup_version()) return utils.tar_add_file_data(tarball, ver.encode("utf-8"), "version.txt") def config_backup_version(): return 1 def config_checkpoint_list(): try: file_list = os.listdir(settings.cib_dir) except OSError as e: utils.err("unable to list checkpoints: %s" % e) cib_list = [] cib_name_re = re.compile(r"^cib-(\d+)\.raw$") for filename in file_list: match = cib_name_re.match(filename) if not match: continue file_path = os.path.join(settings.cib_dir, filename) try: if os.path.isfile(file_path): cib_list.append( (float(os.path.getmtime(file_path)), match.group(1)) ) except OSError: pass cib_list.sort() if not cib_list: print("No checkpoints available") return for cib_info in cib_list: print( "checkpoint %s: date %s" % (cib_info[1], datetime.datetime.fromtimestamp(round(cib_info[0]))) ) def config_checkpoint_view(argv): if len(argv) != 1: usage.config(["checkpoint", "view"]) sys.exit(1) utils.usefile = True utils.filename = os.path.join(settings.cib_dir, "cib-%s.raw" % argv[0]) if not os.path.isfile(utils.filename): utils.err("unable to read the checkpoint") config_show_cib() def config_checkpoint_restore(argv): if len(argv) != 1: usage.config(["checkpoint", "restore"]) sys.exit(1) cib_path = os.path.join(settings.cib_dir, "cib-%s.raw" % argv[0]) try: snapshot_dom =
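The checkpoint file matching done by `config_checkpoint_list` can be sketched on its own: pacemaker names CIB backups `cib-<N>.raw` in the cib directory, and the helper name below is hypothetical:

```python
import re

# same pattern config_checkpoint_list compiles for each listing
CIB_NAME_RE = re.compile(r"^cib-(\d+)\.raw$")

def checkpoint_number(filename):
    """Return the checkpoint number for a cib-<N>.raw backup file, else None."""
    match = CIB_NAME_RE.match(filename)
    return match.group(1) if match else None
```

Anchoring the pattern with `^` and `$` is what keeps signature files such as `cib-12.raw.sig` out of the checkpoint list.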
parse(cib_path) except Exception as e: utils.err("unable to read the checkpoint: %s" % e) utils.replace_cib_configuration(snapshot_dom) def config_import_cman(argv): if no_clufter: utils.err("Unable to perform a CMAN cluster conversion due to missing python-clufter package") # prepare convertor options cluster_conf = settings.cluster_conf_file dry_run_output = None output_format = "cluster.conf" if utils.is_rhel6() else "corosync.conf" invalid_args = False for arg in argv: if "=" in arg: name, value = arg.split("=", 1) if name == "input": cluster_conf = value elif name == "output": dry_run_output = value elif name == "output-format": if value in ( "cluster.conf", "corosync.conf", "pcs-commands", "pcs-commands-verbose", ): output_format = value else: invalid_args = True else: invalid_args = True else: invalid_args = True if ( output_format not in ("pcs-commands", "pcs-commands-verbose") and (dry_run_output and not dry_run_output.endswith(".tar.bz2")) ): dry_run_output += ".tar.bz2" if invalid_args or not dry_run_output: usage.config(["import-cman"]) sys.exit(1) debug = "--debug" in utils.pcs_options force = "--force" in utils.pcs_options interactive = "--interactive" in utils.pcs_options clufter_args = { "input": str(cluster_conf), "cib": {"passin": "bytestring"}, "nocheck": force, "batch": True, "sys": "linux", # Make it work on RHEL6 as well for sure "color": "always" if sys.stdout.isatty() else "never" } if interactive: if "EDITOR" not in os.environ: utils.err("$EDITOR environment variable is not set") clufter_args["batch"] = False clufter_args["editor"] = os.environ["EDITOR"] if debug: logging.getLogger("clufter").setLevel(logging.DEBUG) if output_format == "cluster.conf": clufter_args["ccs_pcmk"] = {"passin": "bytestring"} clufter_args["dist"] = "redhat,6.7,Santiago" cmd_name = "ccs2pcs-flatiron" elif output_format == "corosync.conf": clufter_args["coro"] = {"passin": "struct"} clufter_args["dist"] = "redhat,7.1,Maipo" cmd_name = "ccs2pcs-needle" elif 
output_format in ("pcs-commands", "pcs-commands-verbose"): clufter_args["output"] = {"passin": "bytestring"} clufter_args["start_wait"] = "60" clufter_args["tmp_cib"] = "tmp-cib.xml" clufter_args["force"] = force clufter_args["text_width"] = "80" clufter_args["silent"] = True clufter_args["noguidance"] = True if output_format == "pcs-commands-verbose": clufter_args["text_width"] = "-1" clufter_args["silent"] = False clufter_args["noguidance"] = False cmd_name = "ccs2pcscmd-flatiron" clufter_args_obj = type(str("ClufterOptions"), (object, ), clufter_args) # run convertor run_clufter( cmd_name, clufter_args_obj, debug, force, "Error: unable to import cluster configuration" ) # save commands if output_format in ("pcs-commands", "pcs-commands-verbose"): ok, message = utils.write_file( dry_run_output, clufter_args_obj.output["passout"] ) if not ok: utils.err(message) return # put new config files into tarball file_list = config_backup_path_list( force_rhel6=(output_format == "cluster.conf") ) for file_item in file_list.values(): file_item["attrs"]["uname"] = "root" file_item["attrs"]["gname"] = "root" file_item["attrs"]["uid"] = 0 file_item["attrs"]["gid"] = 0 file_item["attrs"]["mode"] = 0o600 tar_data = BytesIO() try: tarball = tarfile.open(fileobj=tar_data, mode="w|bz2") config_backup_add_version_to_tarball(tarball) utils.tar_add_file_data( tarball, clufter_args_obj.cib["passout"].encode("utf-8"), "cib.xml", **file_list["cib.xml"]["attrs"] ) if output_format == "cluster.conf": utils.tar_add_file_data( tarball, clufter_args_obj.ccs_pcmk["passout"].encode("utf-8"), "cluster.conf", **file_list["cluster.conf"]["attrs"] ) else: # put uidgid into separate files fmt_simpleconfig = clufter.format_manager.FormatManager.init_lookup( 'simpleconfig' ).plugins['simpleconfig'] corosync_struct = [] uidgid_list = [] for section in clufter_args_obj.coro["passout"][2]: if section[0] == "uidgid": uidgid_list.append(section[1]) else: corosync_struct.append(section) corosync_conf_data = 
fmt_simpleconfig( "struct", ("corosync", (), corosync_struct) )("bytestring") utils.tar_add_file_data( tarball, corosync_conf_data.encode("utf-8"), "corosync.conf", **file_list["corosync.conf"]["attrs"] ) for uidgid in uidgid_list: uid = "" gid = "" for item in uidgid: if item[0] == "uid": uid = item[1] if item[0] == "gid": gid = item[1] filename = utils.get_uid_gid_file_name(uid, gid) uidgid_data = fmt_simpleconfig( "struct", ("corosync", (), [("uidgid", uidgid, None)]) )("bytestring") utils.tar_add_file_data( tarball, uidgid_data.encode("utf-8"), "uidgid.d/" + filename, **file_list["uidgid.d"]["attrs"] ) tarball.close() except (tarfile.TarError, EnvironmentError) as e: utils.err("unable to create tarball: %s" % e) tar_data.seek(0) #save tarball / remote restore if dry_run_output: ok, message = utils.write_file( dry_run_output, tar_data.read(), permissions=0o600, binary=True ) if not ok: utils.err(message) else: config_restore_remote(None, tar_data) tar_data.close() def config_export_pcs_commands(argv, verbose=False): if no_clufter: utils.err( "Unable to perform export due to missing python-clufter package" ) # parse options debug = "--debug" in utils.pcs_options force = "--force" in utils.pcs_options interactive = "--interactive" in utils.pcs_options invalid_args = False output_file = None for arg in argv: if "=" in arg: name, value = arg.split("=", 1) if name == "output": output_file = value else: invalid_args = True else: invalid_args = True if invalid_args or not output_file: usage.config(["export", "pcs-commands"]) sys.exit(1) # prepare convertor options clufter_args = { "nocheck": force, "batch": True, "sys": "linux", # Make it work on RHEL6 as well for sure "color": "always" if sys.stdout.isatty() else "never", "coro": settings.corosync_conf_file, "ccs": settings.cluster_conf_file, "output": {"passin": "bytestring"}, "start_wait": "60", "tmp_cib": "tmp-cib.xml", "force": force, "text_width": "80", "silent": True, "noguidance": True, } if interactive: if 
"EDITOR" not in os.environ: utils.err("$EDITOR environment variable is not set") clufter_args["batch"] = False clufter_args["editor"] = os.environ["EDITOR"] if debug: logging.getLogger("clufter").setLevel(logging.DEBUG) if utils.usefile: clufter_args["cib"] = os.path.abspath(utils.filename) else: clufter_args["cib"] = ("bytestring", utils.get_cib()) if verbose: clufter_args["text_width"] = "-1" clufter_args["silent"] = False clufter_args["noguidance"] = False clufter_args_obj = type(str("ClufterOptions"), (object, ), clufter_args) cmd_name = "pcs2pcscmd-flatiron" if utils.is_rhel6() else "pcs2pcscmd-needle" # run convertor run_clufter( cmd_name, clufter_args_obj, debug, force, "Error: unable to export cluster configuration" ) # save commands ok, message = utils.write_file( output_file, clufter_args_obj.output["passout"] ) if not ok: utils.err(message) def run_clufter(cmd_name, cmd_args, debug, force, err_prefix): try: result = None cmd_manager = clufter.command_manager.CommandManager.init_lookup( cmd_name ) result = cmd_manager.commands[cmd_name](cmd_args) error_message = "" except Exception as e: error_message = str(e) if error_message or result != 0: hints = [] hints.append("--interactive to solve the issues manually") if not debug: hints.append("--debug to get more information") if not force: hints.append("--force to override") hints_string = "\nTry using %s." 
% ", ".join(hints) if hints else "" sys.stderr.write( err_prefix + (": %s" % error_message if error_message else "") + hints_string + "\n" ) sys.exit(1 if result is None else result)
# File: pcs-0.9.149/pcs/constraint.py
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import sys import xml.dom.minidom from xml.dom.minidom import parseString from collections import defaultdict import usage import utils import resource import rule as rule_utils OPTIONS_ACTION = ("start", "promote", "demote", "stop") DEFAULT_ACTION = "start" OPTIONS_ROLE = ("Stopped", "Started", "Master", "Slave") DEFAULT_ROLE = "Started" OPTIONS_KIND = ("Optional", "Mandatory", "Serialize") OPTIONS_SYMMETRICAL = ("true", "false") def constraint_cmd(argv): if len(argv) == 0: argv = ["list"] sub_cmd = argv.pop(0) if (sub_cmd == "help"): usage.constraint(argv) elif (sub_cmd == "location"): if len(argv) == 0: sub_cmd2 = "show" else: sub_cmd2 = argv.pop(0) if (sub_cmd2 == "add"): location_add(argv) elif (sub_cmd2 in ["remove","delete"]): location_add(argv,True) elif (sub_cmd2 == "show"): location_show(argv) elif len(argv) >= 2: if argv[0] == "rule": location_rule([sub_cmd2] + argv) else: location_prefer([sub_cmd2] + argv) else: usage.constraint() sys.exit(1) elif (sub_cmd == "order"): if (len(argv) == 0): sub_cmd2 = "show" else: sub_cmd2 = argv.pop(0) if (sub_cmd2 == "set"): order_set(argv) elif (sub_cmd2 in ["remove","delete"]): order_rm(argv) elif (sub_cmd2 == "show"): order_show(argv) else: order_start([sub_cmd2] + argv) elif (sub_cmd == "colocation"): if (len(argv) == 0): sub_cmd2 = "show" else: sub_cmd2 = argv.pop(0) if (sub_cmd2 == "add"): colocation_add(argv) elif (sub_cmd2 in ["remove","delete"]): colocation_rm(argv) elif (sub_cmd2 == "set"): colocation_set(argv) elif (sub_cmd2 == "show"): colocation_show(argv) else:
usage.constraint() sys.exit(1) elif (sub_cmd in ["remove","delete"]): constraint_rm(argv) elif (sub_cmd == "show" or sub_cmd == "list"): location_show(argv) order_show(argv) colocation_show(argv) elif (sub_cmd == "ref"): constraint_ref(argv) elif (sub_cmd == "rule"): constraint_rule(argv) else: usage.constraint() sys.exit(1) def colocation_show(argv): if "--full" in utils.pcs_options: showDetail = True else: showDetail = False (dom,constraintsElement) = getCurrentConstraints() resource_colocation_sets = [] print("Colocation Constraints:") for co_loc in constraintsElement.getElementsByTagName('rsc_colocation'): if not co_loc.getAttribute("rsc"): resource_colocation_sets.append(co_loc) else: print(" " + colocation_el_to_string(co_loc, showDetail)) print_sets(resource_colocation_sets, showDetail) def colocation_el_to_string(co_loc, showDetail=False): co_resource1 = co_loc.getAttribute("rsc") co_resource2 = co_loc.getAttribute("with-rsc") co_id = co_loc.getAttribute("id") co_score = co_loc.getAttribute("score") score_text = "(score:" + co_score + ")" attrs_list = [ "(%s:%s)" % (attr[0], attr[1]) for attr in co_loc.attributes.items() if attr[0] not in ("rsc", "with-rsc", "id", "score") ] if showDetail: attrs_list.append("(id:%s)" % co_id) return " ".join( [co_resource1, "with", co_resource2, score_text] + attrs_list ) def colocation_rm(argv): elementFound = False if len(argv) < 2: usage.constraint() sys.exit(1) (dom,constraintsElement) = getCurrentConstraints() resource1 = argv[0] resource2 = argv[1] for co_loc in constraintsElement.getElementsByTagName('rsc_colocation')[:]: if co_loc.getAttribute("rsc") == resource1 and co_loc.getAttribute("with-rsc") == resource2: constraintsElement.removeChild(co_loc) elementFound = True if co_loc.getAttribute("rsc") == resource2 and co_loc.getAttribute("with-rsc") == resource1: constraintsElement.removeChild(co_loc) elementFound = True if elementFound == True: utils.replace_cib_configuration(dom) else: print("No matching resources 
found in colocation list") # When passed an array of arguments, if the first argument doesn't have an '=' # then it's the score, otherwise they're all arguments # Return a tuple with the score and array of name,value pairs def parse_score_options(argv): if len(argv) == 0: return "INFINITY",[] arg_array = [] first = argv[0] if first.find('=') != -1: score = "INFINITY" else: score = argv.pop(0) for arg in argv: args = arg.split('=') if (len(args) != 2): continue arg_array.append(args) return (score, arg_array) # There are two acceptable syntaxes # Deprecated - colocation add [score] [options] # Supported - colocation add [role] with [role] [score] [options] def colocation_add(argv): if len(argv) < 2: usage.constraint() sys.exit(1) role1 = "" role2 = "" if len(argv) > 2: if not utils.is_score_or_opt(argv[2]): if argv[2] == "with": role1 = argv.pop(0).lower().capitalize() resource1 = argv.pop(0) else: resource1 = argv.pop(0) argv.pop(0) # Pop 'with' if len(argv) == 1: resource2 = argv.pop(0) else: if utils.is_score_or_opt(argv[1]): resource2 = argv.pop(0) else: role2 = argv.pop(0).lower().capitalize() resource2 = argv.pop(0) else: resource1 = argv.pop(0) resource2 = argv.pop(0) else: resource1 = argv.pop(0) resource2 = argv.pop(0) cib_dom = utils.get_cib_dom() resource_valid, resource_error, correct_id \ = utils.validate_constraint_resource(cib_dom, resource1) if "--autocorrect" in utils.pcs_options and correct_id: resource1 = correct_id elif not resource_valid: utils.err(resource_error) resource_valid, resource_error, correct_id \ = utils.validate_constraint_resource(cib_dom, resource2) if "--autocorrect" in utils.pcs_options and correct_id: resource2 = correct_id elif not resource_valid: utils.err(resource_error) score,nv_pairs = parse_score_options(argv) id_in_nvpairs = None for name, value in nv_pairs: if name == "id": id_valid, id_error = utils.validate_xml_id(value, 'constraint id') if not id_valid: utils.err(id_error) if utils.does_id_exist(cib_dom, value):
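The convention described in the comment above `parse_score_options` — a first argument without `=` is the score, everything else is `name=value` options — can be exercised standalone. This re-states the function as a pure sketch that does not mutate the caller's list:

```python
def parse_score_options(argv):
    """First argument without '=' is the score (default INFINITY);
    remaining 'name=value' arguments become [name, value] pairs."""
    if not argv:
        return "INFINITY", []
    argv = list(argv)  # work on a copy instead of popping the caller's list
    score = "INFINITY" if "=" in argv[0] else argv.pop(0)
    arg_array = []
    for arg in argv:
        parts = arg.split("=")
        if len(parts) == 2:  # malformed pairs are silently skipped, as upstream
            arg_array.append(parts)
    return score, arg_array
```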
utils.err( "id '%s' is already in use, please specify another one" % value ) id_in_nvpairs = True if not id_in_nvpairs: nv_pairs.append(( "id", utils.find_unique_id( cib_dom, "colocation-%s-%s-%s" % (resource1, resource2, score) ) )) (dom,constraintsElement) = getCurrentConstraints(cib_dom) # If one role is specified, the other should default to "started" if role1 != "" and role2 == "": role2 = DEFAULT_ROLE if role2 != "" and role1 == "": role1 = DEFAULT_ROLE element = dom.createElement("rsc_colocation") element.setAttribute("rsc",resource1) element.setAttribute("with-rsc",resource2) element.setAttribute("score",score) if role1 != "": element.setAttribute("rsc-role", role1) if role2 != "": element.setAttribute("with-rsc-role", role2) for nv_pair in nv_pairs: element.setAttribute(nv_pair[0], nv_pair[1]) if "--force" not in utils.pcs_options: duplicates = colocation_find_duplicates(constraintsElement, element) if duplicates: utils.err( "duplicate constraint already exists, use --force to override\n" + "\n".join([ " " + colocation_el_to_string(dup, True) for dup in duplicates ]) ) constraintsElement.appendChild(element) utils.replace_cib_configuration(dom) def colocation_find_duplicates(dom, constraint_el): def normalize(const_el): return ( const_el.getAttribute("rsc"), const_el.getAttribute("with-rsc"), const_el.getAttribute("rsc-role").capitalize() or DEFAULT_ROLE, const_el.getAttribute("with-rsc-role").capitalize() or DEFAULT_ROLE, ) normalized_el = normalize(constraint_el) return [ other_el for other_el in dom.getElementsByTagName("rsc_colocation") if not other_el.getElementsByTagName("resource_set") and constraint_el is not other_el and normalized_el == normalize(other_el) ] def colocation_set(argv): setoptions = [] for i in range(len(argv)): if argv[i] == "setoptions": setoptions = argv[i+1:] argv[i:] = [] break argv.insert(0, "set") resource_sets = set_args_into_array(argv) if not check_empty_resource_sets(resource_sets): usage.constraint(["colocation set"]) 
sys.exit(1) cib, constraints = getCurrentConstraints(utils.get_cib_dom()) attributes = [] score_options = ("score", "score-attribute", "score-attribute-mangle") score_specified = False id_specified = False for opt in setoptions: if "=" not in opt: utils.err("missing value of '%s' option" % opt) name, value = opt.split("=", 1) if name == "id": id_valid, id_error = utils.validate_xml_id(value, 'constraint id') if not id_valid: utils.err(id_error) if utils.does_id_exist(cib, value): utils.err( "id '%s' is already in use, please specify another one" % value ) id_specified = True attributes.append((name, value)) elif name in score_options: if score_specified: utils.err("you cannot specify multiple score options") if name == "score" and not utils.is_score(value): utils.err( "invalid score '%s', use integer or INFINITY or -INFINITY" % value ) score_specified = True attributes.append((name, value)) else: utils.err( "invalid option '%s', allowed options are: %s" % (name, ", ".join(score_options + ("id",))) ) if not score_specified: attributes.append(("score", "INFINITY")) if not id_specified: colocation_id = "pcs_rsc_colocation" for a in argv: if "=" not in a: colocation_id += "_" + a attributes.append(("id", utils.find_unique_id(cib, colocation_id))) rsc_colocation = cib.createElement("rsc_colocation") for name, value in attributes: rsc_colocation.setAttribute(name, value) set_add_resource_sets(rsc_colocation, resource_sets, cib) constraints.appendChild(rsc_colocation) utils.replace_cib_configuration(cib) def order_show(argv): if "--full" in utils.pcs_options: showDetail = True else: showDetail = False (dom,constraintsElement) = getCurrentConstraints() resource_order_sets = [] print("Ordering Constraints:") for ord_loc in constraintsElement.getElementsByTagName('rsc_order'): if not ord_loc.getAttribute("first"): resource_order_sets.append(ord_loc) else: print(" " + order_el_to_string(ord_loc, showDetail)) print_sets(resource_order_sets,showDetail) def 
order_el_to_string(ord_loc, showDetail=False): oc_resource1 = ord_loc.getAttribute("first") oc_resource2 = ord_loc.getAttribute("then") first_action = ord_loc.getAttribute("first-action") then_action = ord_loc.getAttribute("then-action") oc_id = ord_loc.getAttribute("id") oc_score = ord_loc.getAttribute("score") oc_kind = ord_loc.getAttribute("kind") oc_sym = "" oc_id_out = "" oc_options = "" if ( ord_loc.getAttribute("symmetrical") and not utils.is_cib_true(ord_loc.getAttribute("symmetrical")) ): oc_sym = "(non-symmetrical)" if oc_kind != "": score_text = "(kind:" + oc_kind + ")" elif oc_kind == "" and oc_score == "": score_text = "(kind:Mandatory)" else: score_text = "(score:" + oc_score + ")" if showDetail: oc_id_out = "(id:"+oc_id+")" already_processed_attrs = ( "first", "then", "first-action", "then-action", "id", "score", "kind", "symmetrical" ) oc_options = " ".join([ "{0}={1}".format(name, value) for name, value in ord_loc.attributes.items() if name not in already_processed_attrs ]) if oc_options: oc_options = "(Options: " + oc_options + ")" return " ".join([arg for arg in [ first_action, oc_resource1, "then", then_action, oc_resource2, score_text, oc_sym, oc_options, oc_id_out ] if arg]) def print_sets(sets,showDetail): if len(sets) != 0: print(" Resource Sets:") for ro in sets: print(" " + set_constraint_el_to_string(ro, showDetail)) def set_constraint_el_to_string(constraint_el, showDetail=False): set_list = [] for set_el in constraint_el.getElementsByTagName("resource_set"): set_list.append("set " + " ".join( [ res_el.getAttribute("id") for res_el in set_el.getElementsByTagName("resource_ref") ] + utils.dom_attrs_to_list(set_el, showDetail) )) constraint_opts = utils.dom_attrs_to_list(constraint_el, False) if constraint_opts: constraint_opts.insert(0, "setoptions") if showDetail: constraint_opts.append("(id:%s)" % constraint_el.getAttribute("id")) return " ".join(set_list + constraint_opts) def set_args_into_array(argv): all_sets = [] current_set = None 
for elem in argv: if "set" == elem: if current_set is not None: all_sets.append(current_set) current_set = [] else: current_set.append(elem) if current_set is not None: all_sets.append(current_set) return all_sets def check_empty_resource_sets(sets): if not sets: return False return all(sets) def set_add_resource_sets(elem, sets, cib): allowed_options = { "sequential": ("true", "false"), "require-all": ("true", "false"), "action" : OPTIONS_ACTION, "role" : OPTIONS_ROLE, } for o_set in sets: set_id = "pcs_rsc_set" res_set = cib.createElement("resource_set") elem.appendChild(res_set) for opts in o_set: if opts.find("=") != -1: key,val = opts.split("=") if key not in allowed_options: utils.err( "invalid option '%s', allowed options are: %s" % (key, ", ".join(allowed_options.keys())) ) if val not in allowed_options[key]: utils.err( "invalid value '%s' of option '%s', allowed values are: %s" % (val, key, ", ".join(allowed_options[key])) ) res_set.setAttribute(key, val) else: res_valid, res_error, correct_id \ = utils.validate_constraint_resource(cib, opts) if "--autocorrect" in utils.pcs_options and correct_id: opts = correct_id elif not res_valid: utils.err(res_error) se = cib.createElement("resource_ref") res_set.appendChild(se) se.setAttribute("id", opts) set_id = set_id + "_" + opts res_set.setAttribute("id", utils.find_unique_id(cib, set_id)) if "--force" not in utils.pcs_options: duplicates = set_constraint_find_duplicates(cib, elem) if duplicates: utils.err( "duplicate constraint already exists, use --force to override\n" + "\n".join([ " " + set_constraint_el_to_string(dup, True) for dup in duplicates ]) ) def set_constraint_find_duplicates(dom, constraint_el): def normalize(constraint_el): return [ [ ref_el.getAttribute("id") for ref_el in set_el.getElementsByTagName("resource_ref") ] for set_el in constraint_el.getElementsByTagName("resource_set") ] normalized_el = normalize(constraint_el) return [ other_el for other_el in 
dom.getElementsByTagName(constraint_el.tagName) if other_el.getElementsByTagName("resource_set") and constraint_el is not other_el and normalized_el == normalize(other_el) ] def order_set(argv): setoptions = [] for i in range(len(argv)): if argv[i] == "setoptions": setoptions = argv[i+1:] argv[i:] = [] break argv.insert(0, "set") resource_sets = set_args_into_array(argv) if not check_empty_resource_sets(resource_sets): usage.constraint(["order set"]) sys.exit(1) cib, constraints = getCurrentConstraints(utils.get_cib_dom()) attributes = [] id_specified = False for opt in setoptions: if "=" not in opt: utils.err("missing value of '%s' option" % opt) name, value = opt.split("=", 1) if name == "id": id_valid, id_error = utils.validate_xml_id(value, 'constraint id') if not id_valid: utils.err(id_error) if utils.does_id_exist(cib, value): utils.err( "id '%s' is already in use, please specify another one" % value ) id_specified = True attributes.append((name, value)) elif name == "kind": normalized_value = value.lower().capitalize() if normalized_value not in OPTIONS_KIND: utils.err( "invalid kind value '%s', allowed values are: %s" % (value, ", ".join(OPTIONS_KIND)) ) attributes.append((name, normalized_value)) elif name == "symmetrical": if value.lower() not in OPTIONS_SYMMETRICAL: utils.err( "invalid symmetrical value '%s', allowed values are: %s" % (value, ", ".join(OPTIONS_SYMMETRICAL)) ) attributes.append((name, value.lower())) else: utils.err( "invalid option '%s', allowed options are: %s" % (name, "kind, symmetrical, id") ) if not id_specified: order_id = "pcs_rsc_order" for a in argv: if "=" not in a: order_id += "_" + a attributes.append(("id", utils.find_unique_id(cib, order_id))) rsc_order = cib.createElement("rsc_order") for name, value in attributes: rsc_order.setAttribute(name, value) set_add_resource_sets(rsc_order, resource_sets, cib) constraints.appendChild(rsc_order) utils.replace_cib_configuration(cib) def order_rm(argv): if len(argv) == 0: 
usage.constraint() sys.exit(1) elementFound = False (dom,constraintsElement) = getCurrentConstraints() for resource in argv: for ord_loc in constraintsElement.getElementsByTagName('rsc_order')[:]: if ord_loc.getAttribute("first") == resource or ord_loc.getAttribute("then") == resource: constraintsElement.removeChild(ord_loc) elementFound = True resource_refs_to_remove = [] for ord_set in constraintsElement.getElementsByTagName('resource_ref'): if ord_set.getAttribute("id") == resource: resource_refs_to_remove.append(ord_set) elementFound = True for res_ref in resource_refs_to_remove: res_set = res_ref.parentNode res_order = res_set.parentNode res_ref.parentNode.removeChild(res_ref) if len(res_set.getElementsByTagName('resource_ref')) <= 0: res_set.parentNode.removeChild(res_set) if len(res_order.getElementsByTagName('resource_set')) <= 0: res_order.parentNode.removeChild(res_order) if elementFound == True: utils.replace_cib_configuration(dom) else: utils.err("No matching resources found in ordering list") def order_start(argv): if len(argv) < 3: usage.constraint() sys.exit(1) first_action = DEFAULT_ACTION then_action = DEFAULT_ACTION action = argv[0] if action in OPTIONS_ACTION: first_action = action argv.pop(0) resource1 = argv.pop(0) if argv.pop(0) != "then": usage.constraint() sys.exit(1) if len(argv) == 0: usage.constraint() sys.exit(1) action = argv[0] if action in OPTIONS_ACTION: then_action = action argv.pop(0) if len(argv) == 0: usage.constraint() sys.exit(1) resource2 = argv.pop(0) order_options = [] if len(argv) != 0: order_options = order_options + argv[:] order_options.append("first-action="+first_action) order_options.append("then-action="+then_action) order_add([resource1, resource2] + order_options) def order_add(argv,returnElementOnly=False): if len(argv) < 2: usage.constraint() sys.exit(1) resource1 = argv.pop(0) resource2 = argv.pop(0) cib_dom = utils.get_cib_dom() resource_valid, resource_error, correct_id \ = 
utils.validate_constraint_resource(cib_dom, resource1) if "--autocorrect" in utils.pcs_options and correct_id: resource1 = correct_id elif not resource_valid: utils.err(resource_error) resource_valid, resource_error, correct_id \ = utils.validate_constraint_resource(cib_dom, resource2) if "--autocorrect" in utils.pcs_options and correct_id: resource2 = correct_id elif not resource_valid: utils.err(resource_error) order_options = [] id_specified = False sym = None for arg in argv: if arg == "symmetrical": sym = "true" elif arg == "nonsymmetrical": sym = "false" elif "=" in arg: name, value = arg.split("=", 1) if name == "id": id_valid, id_error = utils.validate_xml_id(value, 'constraint id') if not id_valid: utils.err(id_error) if utils.does_id_exist(cib_dom, value): utils.err( "id '%s' is already in use, please specify another one" % value ) id_specified = True order_options.append((name, value)) elif name == "symmetrical": if value.lower() in OPTIONS_SYMMETRICAL: sym = value.lower() else: utils.err( "invalid symmetrical value '%s', allowed values are: %s" % (value, ", ".join(OPTIONS_SYMMETRICAL)) ) else: order_options.append((name, value)) if sym: order_options.append(("symmetrical", sym)) options = "" if order_options: options = " (Options: %s)" % " ".join([ "%s=%s" % (name, value) for name, value in order_options if name not in ("kind", "score") ]) scorekind = "kind: Mandatory" id_suffix = "mandatory" for opt in order_options: if opt[0] == "score": scorekind = "score: " + opt[1] id_suffix = opt[1] break if opt[0] == "kind": scorekind = "kind: " + opt[1] id_suffix = opt[1] break if not id_specified: order_id = "order-" + resource1 + "-" + resource2 + "-" + id_suffix order_id = utils.find_unique_id(cib_dom, order_id) order_options.append(("id", order_id)) (dom,constraintsElement) = getCurrentConstraints() element = dom.createElement("rsc_order") element.setAttribute("first",resource1) element.setAttribute("then",resource2) for order_opt in order_options: 
element.setAttribute(order_opt[0], order_opt[1]) constraintsElement.appendChild(element) if "--force" not in utils.pcs_options: duplicates = order_find_duplicates(constraintsElement, element) if duplicates: utils.err( "duplicate constraint already exists, use --force to override\n" + "\n".join([ " " + order_el_to_string(dup, True) for dup in duplicates ]) ) print( "Adding " + resource1 + " " + resource2 + " ("+scorekind+")" + options ) if returnElementOnly == False: utils.replace_cib_configuration(dom) else: return element.toxml() def order_find_duplicates(dom, constraint_el): def normalize(constraint_el): return ( constraint_el.getAttribute("first"), constraint_el.getAttribute("then"), constraint_el.getAttribute("first-action").lower() or DEFAULT_ACTION, constraint_el.getAttribute("then-action").lower() or DEFAULT_ACTION, ) normalized_el = normalize(constraint_el) return [ other_el for other_el in dom.getElementsByTagName("rsc_order") if not other_el.getElementsByTagName("resource_set") and constraint_el is not other_el and normalized_el == normalize(other_el) ] # Show the currently configured location constraints by node or resource def location_show(argv): if (len(argv) != 0 and argv[0] == "nodes"): byNode = True showDetail = False elif "--full" in utils.pcs_options: byNode = False showDetail = True else: byNode = False showDetail = False if len(argv) > 1: valid_noderes = argv[1:] else: valid_noderes = [] (dom,constraintsElement) = getCurrentConstraints() nodehashon = {} nodehashoff = {} rschashon = {} rschashoff = {} ruleshash = defaultdict(list) all_loc_constraints = constraintsElement.getElementsByTagName('rsc_location') print("Location Constraints:") for rsc_loc in all_loc_constraints: lc_node = rsc_loc.getAttribute("node") lc_rsc = rsc_loc.getAttribute("rsc") lc_id = rsc_loc.getAttribute("id") lc_score = rsc_loc.getAttribute("score") lc_role = rsc_loc.getAttribute("role") lc_name = "Resource: " + lc_rsc lc_resource_discovery = 
            rsc_loc.getAttribute("resource-discovery")
        for child in rsc_loc.childNodes:
            if child.nodeType == child.ELEMENT_NODE and child.tagName == "rule":
                ruleshash[lc_name].append(child)

        # NEED TO FIX FOR GROUP LOCATION CONSTRAINTS (where there are children
        # of rsc_location)
        if lc_score == "":
            lc_score = "0"
        if lc_score == "INFINITY":
            positive = True
        elif lc_score == "-INFINITY":
            positive = False
        elif int(lc_score) >= 0:
            positive = True
        else:
            positive = False

        if positive == True:
            nodeshash = nodehashon
            rschash = rschashon
        else:
            nodeshash = nodehashoff
            rschash = rschashoff

        if lc_node in nodeshash:
            nodeshash[lc_node].append((lc_id, lc_rsc, lc_score, lc_role, lc_resource_discovery))
        else:
            nodeshash[lc_node] = [(lc_id, lc_rsc, lc_score, lc_role, lc_resource_discovery)]
        if lc_rsc in rschash:
            rschash[lc_rsc].append((lc_id, lc_node, lc_score, lc_role, lc_resource_discovery))
        else:
            rschash[lc_rsc] = [(lc_id, lc_node, lc_score, lc_role, lc_resource_discovery)]

    nodelist = list(set(list(nodehashon.keys()) + list(nodehashoff.keys())))
    rsclist = list(set(list(rschashon.keys()) + list(rschashoff.keys())))

    if byNode == True:
        for node in nodelist:
            if len(valid_noderes) != 0:
                if node not in valid_noderes:
                    continue
            print("  Node: " + node)

            nodehash_label = (
                (nodehashon, "    Allowed to run:"),
                (nodehashoff, "    Not allowed to run:"),
            )
            for nodehash, label in nodehash_label:
                if node in nodehash:
                    print(label)
                    for options in nodehash[node]:
                        line_parts = [
                            "      " + options[1] + " (" + options[0] + ")",
                        ]
                        if options[3]:
                            line_parts.append("(role: {0})".format(options[3]))
                        if options[4]:
                            line_parts.append(
                                "(resource-discovery={0})".format(options[4])
                            )
                        line_parts.append("Score: " + options[2])
                        print(" ".join(line_parts))
        show_location_rules(ruleshash, showDetail)
    else:
        rsclist.sort()
        for rsc in rsclist:
            if len(valid_noderes) != 0:
                if rsc not in valid_noderes:
                    continue
            print("  Resource: " + rsc)
            rschash_label = (
                (rschashon, "    Enabled on:"),
                (rschashoff, "    Disabled on:"),
            )
            for rschash, label in rschash_label:
                if rsc in rschash:
                    for options in rschash[rsc]:
                        if not options[1]:
                            continue
                        line_parts = [
                            label,
                            options[1],
                            "(score:{0})".format(options[2]),
                        ]
                        if options[3]:
                            line_parts.append("(role: {0})".format(options[3]))
                        if options[4]:
                            line_parts.append(
                                "(resource-discovery={0})".format(options[4])
                            )
                        if showDetail:
                            line_parts.append("(id:{0})".format(options[0]))
                        print(" ".join(line_parts))
            miniruleshash = {}
            miniruleshash["Resource: " + rsc] = ruleshash["Resource: " + rsc]
            show_location_rules(miniruleshash, showDetail, True)

def show_location_rules(ruleshash, showDetail, noheader=False):
    constraint_options = {}
    for rsc in ruleshash:
        constrainthash = defaultdict(list)
        if not noheader:
            print("  " + rsc)
        for rule in ruleshash[rsc]:
            constraint_id = rule.parentNode.getAttribute("id")
            constrainthash[constraint_id].append(rule)
            constraint_options[constraint_id] = []
            if rule.parentNode.getAttribute("resource-discovery"):
                constraint_options[constraint_id].append(
                    "resource-discovery=%s"
                    % rule.parentNode.getAttribute("resource-discovery")
                )

        for constraint_id in constrainthash.keys():
            if constraint_id in constraint_options and len(constraint_options[constraint_id]) > 0:
                constraint_option_info = " (" + " ".join(constraint_options[constraint_id]) + ")"
            else:
                constraint_option_info = ""
            print("    Constraint: " + constraint_id + constraint_option_info)
            for rule in constrainthash[constraint_id]:
                print(rule_utils.ExportDetailed().get_string(
                    rule, showDetail, "      "
                ))

def location_prefer(argv):
    rsc = argv.pop(0)
    prefer_option = argv.pop(0)
    if prefer_option == "prefers":
        prefer = True
    elif prefer_option == "avoids":
        prefer = False
    else:
        usage.constraint()
        sys.exit(1)

    for nodeconf in argv:
        nodeconf_a = nodeconf.split("=", 1)
        if len(nodeconf_a) == 1:
            node = nodeconf_a[0]
            if prefer:
                score = "INFINITY"
            else:
                score = "-INFINITY"
        else:
            score = nodeconf_a[1]
            if not utils.is_score(score):
                utils.err("invalid score '%s', use integer or INFINITY or -INFINITY" % score)
            if not prefer:
                if score[0] == "-":
                    score =
score[1:] else: score = "-" + score node = nodeconf_a[0] location_add(["location-" +rsc+"-"+node+"-"+score,rsc,node,score]) def location_add(argv,rm=False): if len(argv) < 4 and (rm == False or len(argv) < 1): usage.constraint() sys.exit(1) constraint_id = argv.pop(0) # If we're removing, we only care about the id if (rm == True): resource_name = "" node = "" score = "" else: id_valid, id_error = utils.validate_xml_id(constraint_id, 'constraint id') if not id_valid: utils.err(id_error) resource_name = argv.pop(0) node = argv.pop(0) score = argv.pop(0) options = [] # For now we only allow setting resource-discovery if len(argv) > 0: for arg in argv: if '=' in arg: options.append(arg.split('=',1)) else: print("Error: bad option '%s'" % arg) usage.constraint(["location add"]) sys.exit(1) if options[-1][0] != "resource-discovery" and "--force" not in utils.pcs_options: utils.err("bad option '%s', use --force to override" % options[-1][0]) resource_valid, resource_error, correct_id \ = utils.validate_constraint_resource( utils.get_cib_dom(), resource_name ) if "--autocorrect" in utils.pcs_options and correct_id: resource_name = correct_id elif not resource_valid: utils.err(resource_error) if not utils.is_score(score): utils.err("invalid score '%s', use integer or INFINITY or -INFINITY" % score) # Verify current constraint doesn't already exist # If it does we replace it with the new constraint (dom,constraintsElement) = getCurrentConstraints() elementsToRemove = [] # If the id matches, or the rsc & node match, then we replace/remove for rsc_loc in constraintsElement.getElementsByTagName('rsc_location'): if (constraint_id == rsc_loc.getAttribute("id")) or \ (rsc_loc.getAttribute("rsc") == resource_name and \ rsc_loc.getAttribute("node") == node and not rm): elementsToRemove.append(rsc_loc) for etr in elementsToRemove: constraintsElement.removeChild(etr) if (rm == True and len(elementsToRemove) == 0): utils.err("resource location id: " + constraint_id + " not found.") if 
(not rm): element = dom.createElement("rsc_location") element.setAttribute("id",constraint_id) element.setAttribute("rsc",resource_name) element.setAttribute("node",node) element.setAttribute("score",score) for option in options: element.setAttribute(option[0], option[1]) constraintsElement.appendChild(element) utils.replace_cib_configuration(dom) def location_rule(argv): if len(argv) < 3: usage.constraint(["location", "rule"]) sys.exit(1) res_name = argv.pop(0) resource_valid, resource_error, correct_id \ = utils.validate_constraint_resource(utils.get_cib_dom(), res_name) if "--autocorrect" in utils.pcs_options and correct_id: res_name = correct_id elif not resource_valid: utils.err(resource_error) argv.pop(0) # pop "rule" options, rule_argv = rule_utils.parse_argv(argv, {"constraint-id": None, "resource-discovery": None,}) # If resource-discovery is specified, we use it with the rsc_location # element not the rule if "resource-discovery" in options and options["resource-discovery"]: utils.checkAndUpgradeCIB(2,2,0) cib, constraints = getCurrentConstraints(utils.get_cib_dom()) lc = cib.createElement("rsc_location") lc.setAttribute("resource-discovery", options.pop("resource-discovery")) else: cib, constraints = getCurrentConstraints(utils.get_cib_dom()) lc = cib.createElement("rsc_location") constraints.appendChild(lc) if options.get("constraint-id"): id_valid, id_error = utils.validate_xml_id( options["constraint-id"], 'constraint id' ) if not id_valid: utils.err(id_error) if utils.does_id_exist(cib, options["constraint-id"]): utils.err( "id '%s' is already in use, please specify another one" % options["constraint-id"] ) lc.setAttribute("id", options["constraint-id"]) del options["constraint-id"] else: lc.setAttribute("id", utils.find_unique_id(cib, "location-" + res_name)) lc.setAttribute("rsc", res_name) rule_utils.dom_rule_add(lc, options, rule_argv) location_rule_check_duplicates(constraints, lc) utils.replace_cib_configuration(cib) def 
location_rule_check_duplicates(dom, constraint_el): if "--force" not in utils.pcs_options: duplicates = location_rule_find_duplicates(dom, constraint_el) if duplicates: lines = [] for dup in duplicates: lines.append(" Constraint: %s" % dup.getAttribute("id")) for dup_rule in utils.dom_get_children_by_tag_name(dup, "rule"): lines.append(rule_utils.ExportDetailed().get_string( dup_rule, True, " " )) utils.err( "duplicate constraint already exists, use --force to override\n" + "\n".join(lines) ) def location_rule_find_duplicates(dom, constraint_el): def normalize(constraint_el): return ( constraint_el.getAttribute("rsc"), [ rule_utils.ExportAsExpression().get_string(rule_el, True) for rule_el in constraint_el.getElementsByTagName("rule") ] ) normalized_el = normalize(constraint_el) return [ other_el for other_el in dom.getElementsByTagName("rsc_location") if other_el.getElementsByTagName("rule") and constraint_el is not other_el and normalized_el == normalize(other_el) ] # Grabs the current constraints and returns the dom and constraint element def getCurrentConstraints(passed_dom=None): if passed_dom: dom = passed_dom else: current_constraints_xml = utils.get_cib_xpath('//constraints') if current_constraints_xml == "": utils.err("unable to process cib") # Verify current constraint doesn't already exist # If it does we replace it with the new constraint dom = parseString(current_constraints_xml) constraintsElement = dom.getElementsByTagName('constraints')[0] return (dom, constraintsElement) # If returnStatus is set, then we don't error out, we just print the error # and return false def constraint_rm(argv,returnStatus=False, constraintsElement=None, passed_dom=None): if len(argv) < 1: usage.constraint() sys.exit(1) bad_constraint = False if len(argv) != 1: for arg in argv: if not constraint_rm([arg],True, passed_dom=passed_dom): bad_constraint = True if bad_constraint: sys.exit(1) return else: c_id = argv.pop(0) elementFound = False if not constraintsElement: (dom, 
constraintsElement) = getCurrentConstraints(passed_dom) use_cibadmin = True else: use_cibadmin = False for co in constraintsElement.childNodes[:]: if co.nodeType != xml.dom.Node.ELEMENT_NODE: continue if co.getAttribute("id") == c_id: constraintsElement.removeChild(co) elementFound = True if not elementFound: for rule in constraintsElement.getElementsByTagName("rule")[:]: if rule.getAttribute("id") == c_id: elementFound = True parent = rule.parentNode parent.removeChild(rule) if len(parent.getElementsByTagName("rule")) == 0: parent.parentNode.removeChild(parent) if elementFound == True: if passed_dom: return dom if use_cibadmin: utils.replace_cib_configuration(dom) if returnStatus: return True else: utils.err("Unable to find constraint - '%s'" % c_id, False) if returnStatus: return False sys.exit(1) def constraint_ref(argv): if len(argv) == 0: usage.constraint() sys.exit(1) for arg in argv: print("Resource: %s" % arg) constraints,set_constraints = find_constraints_containing(arg) if len(constraints) == 0 and len(set_constraints) == 0: print(" No Matches.") else: for constraint in constraints: print(" " + constraint) for constraint in set_constraints: print(" " + constraint) def remove_constraints_containing(resource_id,output=False,constraints_element = None, passed_dom=None): constraints,set_constraints = find_constraints_containing(resource_id, passed_dom) for c in constraints: if output == True: print("Removing Constraint - " + c) if constraints_element != None: constraint_rm([c], True, constraints_element, passed_dom=passed_dom) else: constraint_rm([c], passed_dom=passed_dom) if len(set_constraints) != 0: (dom, constraintsElement) = getCurrentConstraints(passed_dom) for c in constraintsElement.getElementsByTagName("resource_ref")[:]: # If resource id is in a set, remove it from the set, if the set # is empty, then we remove the set, if the parent of the set # is empty then we remove it if c.getAttribute("id") == resource_id: pn = c.parentNode pn.removeChild(c) 
if output == True: print("Removing %s from set %s" % (resource_id,pn.getAttribute("id"))) if pn.getElementsByTagName("resource_ref").length == 0: print("Removing set %s" % pn.getAttribute("id")) pn2 = pn.parentNode pn2.removeChild(pn) if pn2.getElementsByTagName("resource_set").length == 0: pn2.parentNode.removeChild(pn2) print("Removing constraint %s" % pn2.getAttribute("id")) if passed_dom: return dom utils.replace_cib_configuration(dom) def find_constraints_containing(resource_id, passed_dom=None): if passed_dom: dom = passed_dom else: dom = utils.get_cib_dom() constraints_found = [] set_constraints = [] resources = dom.getElementsByTagName("primitive") resource_match = None for res in resources: if res.getAttribute("id") == resource_id: resource_match = res break if resource_match: if resource_match.parentNode.tagName == "master" or resource_match.parentNode.tagName == "clone": constraints_found,set_constraints = find_constraints_containing(resource_match.parentNode.getAttribute("id"), dom) constraints = dom.getElementsByTagName("constraints") if len(constraints) == 0: return [],[] else: constraints = constraints[0] myConstraints = constraints.getElementsByTagName("rsc_colocation") myConstraints += constraints.getElementsByTagName("rsc_location") myConstraints += constraints.getElementsByTagName("rsc_order") attr_to_match = ["rsc", "first", "then", "with-rsc", "first", "then"] for c in myConstraints: for attr in attr_to_match: if c.getAttribute(attr) == resource_id: constraints_found.append(c.getAttribute("id")) break setConstraints = constraints.getElementsByTagName("resource_ref") for c in setConstraints: if c.getAttribute("id") == resource_id: set_constraints.append(c.parentNode.parentNode.getAttribute("id")) # Remove duplicates set_constraints = list(set(set_constraints)) return constraints_found,set_constraints def remove_constraints_containing_node(dom, node, output=False): for constraint in find_constraints_containing_node(dom, node): if output: 
print("Removing Constraint - %s" % constraint.getAttribute("id")) constraint.parentNode.removeChild(constraint) return dom def find_constraints_containing_node(dom, node): return [ constraint for constraint in dom.getElementsByTagName("rsc_location") if constraint.getAttribute("node") == node ] # Re-assign any constraints referencing a resource to its parent (a clone # or master) def constraint_resource_update(old_id, passed_dom=None): dom = utils.get_cib_dom() if passed_dom is None else passed_dom new_id = None clone_ms_parent = utils.dom_get_resource_clone_ms_parent(dom, old_id) if clone_ms_parent: new_id = clone_ms_parent.getAttribute("id") if new_id: constraints = dom.getElementsByTagName("rsc_location") constraints += dom.getElementsByTagName("rsc_order") constraints += dom.getElementsByTagName("rsc_colocation") attrs_to_update=["rsc","first","then", "with-rsc"] for constraint in constraints: for attr in attrs_to_update: if constraint.getAttribute(attr) == old_id: constraint.setAttribute(attr, new_id) if passed_dom is None: utils.replace_cib_configuration(dom) if passed_dom: return dom def constraint_rule(argv): if len(argv) < 2: usage.constraint("rule") sys.exit(1) found = False command = argv.pop(0) constraint_id = None rule_id = None if command == "add": constraint_id = argv.pop(0) cib = utils.get_cib_dom() constraint = utils.dom_get_element_with_id( cib.getElementsByTagName("constraints")[0], "rsc_location", constraint_id ) if not constraint: utils.err("Unable to find constraint: " + constraint_id) options, rule_argv = rule_utils.parse_argv(argv) rule_utils.dom_rule_add(constraint, options, rule_argv) location_rule_check_duplicates(cib, constraint) utils.replace_cib_configuration(cib) elif command in ["remove","delete"]: cib = utils.get_cib_etree() temp_id = argv.pop(0) constraints = cib.find('.//constraints') loc_cons = cib.findall(str('.//rsc_location')) rules = cib.findall(str('.//rule')) for loc_con in loc_cons: for rule in loc_con: if rule.get("id") 
== temp_id:
                    if len(loc_con) > 1:
                        print("Removing Rule: {0}".format(rule.get("id")))
                        loc_con.remove(rule)
                        found = True
                        break
                    else:
                        print(
                            "Removing Constraint: {0}".format(loc_con.get("id"))
                        )
                        constraints.remove(loc_con)
                        found = True
                        break
            if found == True:
                break
        if found:
            utils.replace_cib_configuration(cib)
        else:
            utils.err("unable to find rule with id: %s" % temp_id)
    else:
        usage.constraint("rule")
        sys.exit(1)

# ===========================================================================
# pcs-0.9.149/pcs/corosync_conf.py
# ===========================================================================

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

class Section(object):
    def __init__(self, name):
        self._parent = None
        self._attr_list = []
        self._section_list = []
        self._name = name

    @property
    def parent(self):
        return self._parent

    @property
    def name(self):
        return self._name

    def export(self, indent="    "):
        lines = []
        for attr in self._attr_list:
            lines.append("{0}: {1}".format(*attr))
        if self._attr_list and self._section_list:
            lines.append("")
        section_count = len(self._section_list)
        for index, section in enumerate(self._section_list, 1):
            lines.extend(str(section).split("\n"))
            if not lines[-1].strip():
                del lines[-1]
            if index < section_count:
                lines.append("")
        if self.parent:
            lines = [indent + x if x else x for x in lines]
            lines.insert(0, self.name + " {")
            lines.append("}")
        final = "\n".join(lines)
        if final:
            final += "\n"
        return final

    def get_root(self):
        parent = self
        while parent.parent:
            parent = parent.parent
        return parent

    def get_attributes(self, name=None):
        return [
            attr for attr in self._attr_list
            if name is None or attr[0] == name
        ]

    def add_attribute(self, name, value):
        self._attr_list.append([name, value])
        return self

    def del_attribute(self, attribute):
        self._attr_list = [
            attr for attr in self._attr_list if attr != attribute
        ]
        return self

    def del_attributes_by_name(self, name, value=None):
        self._attr_list = [
            attr for attr in self._attr_list
            if
not(attr[0] == name and (value is None or attr[1] == value)) ] return self def set_attribute(self, name, value): found = False new_attr_list = [] for attr in self._attr_list: if attr[0] != name: new_attr_list.append(attr) elif not found: found = True attr[1] = value new_attr_list.append(attr) self._attr_list = new_attr_list if not found: self.add_attribute(name, value) return self def get_sections(self, name=None): return [ section for section in self._section_list if name is None or section.name == name ] def add_section(self, section): parent = self while parent: if parent == section: raise CircularParentshipException() parent = parent.parent if section.parent: section.parent.del_section(section) section._parent = self self._section_list.append(section) return self def del_section(self, section): self._section_list.remove(section) # don't set parent to None if the section was not found in the list # thanks to remove raising a ValueError in that case section._parent = None return self def __str__(self): return self.export() def parse_string(conf_text): root = Section("") _parse_section(conf_text.split("\n"), root) return root def _parse_section(lines, section): # parser is trying to work the same way as an original corosync parser while lines: current_line = lines.pop(0).strip() if not current_line or current_line[0] == "#": continue if "{" in current_line: section_name, junk = current_line.rsplit("{", 1) new_section = Section(section_name.strip()) section.add_section(new_section) _parse_section(lines, new_section) elif "}" in current_line: if not section.parent: raise ParseErrorException("Unexpected closing brace") return elif ":" in current_line: section.add_attribute( *[x.strip() for x in current_line.split(":", 1)] ) if section.parent: raise ParseErrorException("Missing closing brace") class CorosyncConfException(Exception): pass class CircularParentshipException(CorosyncConfException): pass class ParseErrorException(CorosyncConfException): pass 
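The `_parse_section` routine above is a small recursive-descent parser for corosync.conf's brace-delimited format: sections open with `name {`, close with `}`, and attributes are `key: value` lines. The following standalone sketch mirrors that approach using plain dicts so it runs without the pcs modules; the helper names and data shape are illustrative, not the pcs API.

```python
# Standalone sketch of the corosync.conf parsing approach used above.
# Not the pcs module itself; re-implemented for illustration.

def parse_conf(text):
    root = {"name": "", "attrs": [], "sections": []}
    _parse(text.split("\n"), root, is_root=True)
    return root

def _parse(lines, section, is_root=False):
    while lines:
        line = lines.pop(0).strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments, like the original parser
        if "{" in line:
            name = line.rsplit("{", 1)[0].strip()
            new = {"name": name, "attrs": [], "sections": []}
            section["sections"].append(new)
            _parse(lines, new)  # recurse; consumes lines up to "}"
        elif "}" in line:
            if is_root:
                raise ValueError("Unexpected closing brace")
            return
        elif ":" in line:
            key, value = [x.strip() for x in line.split(":", 1)]
            section["attrs"].append((key, value))
    if not is_root:
        raise ValueError("Missing closing brace")

conf = """\
totem {
    version: 2
    cluster_name: test99
}
"""
parsed = parse_conf(conf)
totem = parsed["sections"][0]
print(totem["name"], totem["attrs"])
```

Like the original, the sketch consumes its input list destructively, so each recursive call naturally resumes where the nested section ended.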
pcs-0.9.149/pcs/error_codes.py

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

ACL_ROLE_ALREADY_EXISTS = 'ACL_ROLE_ALREADY_EXISTS'
ACL_ROLE_NOT_FOUND = 'ACL_ROLE_NOT_FOUND'
BAD_ACL_PERMISSION = 'BAD_ACL_PERMISSION'
BAD_ACL_SCOPE_TYPE = 'BAD_ACL_SCOPE_TYPE'
CMAN_BROADCAST_ALL_RINGS = 'CMAN_BROADCAST_ALL_RINGS'
CMAN_UDPU_RESTART_REQUIRED = 'CMAN_UDPU_RESTART_REQUIRED'
COMMON_ERROR = 'COMMON_ERROR'
COMMON_INFO = 'COMMON_INFO'
ID_ALREADY_EXISTS = 'ID_ALREADY_EXISTS'
ID_IS_NOT_VALID = 'ID_IS_NOT_VALID'
ID_NOT_FOUND = 'ID_NOT_FOUND'
IGNORED_CMAN_UNSUPPORTED_OPTION = 'IGNORED_CMAN_UNSUPPORTED_OPTION'
INVALID_OPTION_VALUE = 'INVALID_OPTION_VALUE'
NON_UDP_TRANSPORT_ADDR_MISMATCH = 'NON_UDP_TRANSPORT_ADDR_MISMATCH'
RRP_ACTIVE_NOT_SUPPORTED = 'RRP_ACTIVE_NOT_SUPPORTED'
UNKNOWN_COMMAND = 'UNKNOWN_COMMAND'
UNKNOWN_RRP_MODE = 'UNKNOWN_RRP_MODE'
UNKNOWN_TRANSPORT = 'UNKNOWN_TRANSPORT'

pcs-0.9.149/pcs/errors.py

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import usage
import error_codes


class CmdLineInputError(Exception):
    pass


class ReportItemSeverity(object):
    ERROR = 'ERROR'
    WARNING = 'WARNING'
    INFO = 'INFO'


class ReportItem(object):
    @classmethod
    def error(cls, code, message_pattern, **kwargs):
        return cls(code, ReportItemSeverity.ERROR, message_pattern, **kwargs)

    @classmethod
    def warning(cls, code, message_pattern, **kwargs):
        return cls(code, ReportItemSeverity.WARNING, message_pattern, **kwargs)

    @classmethod
    def info(cls, code, message_pattern, **kwargs):
        return cls(code, ReportItemSeverity.INFO, message_pattern, **kwargs)

    def __init__(
        self, code, severity, message_pattern, forceable=False, info=None
    ):
        self.code = code
        self.severity = severity
        self.forceable = forceable
        self.message_pattern = message_pattern
        self.info = info if info else dict()
        self.message = self.message_pattern.format(**self.info)

pcs-0.9.149/pcs/library_acl.py

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import utils
from errors import ReportItem
from errors import ReportItemSeverity
from errors import error_codes


class LibraryError(Exception):
    pass

class AclRoleNotFound(LibraryError):
    pass


def __validate_role_id_for_create(dom, role_id):
    id_valid, message = utils.validate_xml_id(role_id, 'ACL role')
    if not id_valid:
        raise LibraryError(ReportItem.error(
            error_codes.ID_IS_NOT_VALID,
            message,
            info={'id': role_id}
        ))
    if utils.dom_get_element_with_id(dom, "acl_role", role_id):
        raise LibraryError(ReportItem.error(
            error_codes.ACL_ROLE_ALREADY_EXISTS,
            'role {id} already exists',
            info={'id': role_id}
        ))
    if utils.does_id_exist(dom, role_id):
        raise LibraryError(ReportItem.error(
            error_codes.ID_ALREADY_EXISTS,
            '{id} already exists',
            info={'id': role_id}
        ))

def __validate_permissions(dom, permission_info_list):
    report = []
    allowed_permissions = ["read", "write", "deny"]
    allowed_scopes = ["xpath", "id"]
    for permission, scope_type, scope in permission_info_list:
        if not permission in allowed_permissions:
            report.append(ReportItem.error(
                error_codes.BAD_ACL_PERMISSION,
                'bad permission "{permission}", expected {allowed_values}',
                info={
                    'permission': permission,
                    'allowed_values_raw': allowed_permissions,
                    'allowed_values': ' or '.join(allowed_permissions)
                },
            ))
        if not scope_type in allowed_scopes:
            report.append(ReportItem.error(
                error_codes.BAD_ACL_SCOPE_TYPE,
                'bad scope type "{scope_type}", expected {allowed_values}',
                info={
                    'scope_type': scope_type,
                    'allowed_values_raw': allowed_scopes,
                    'allowed_values': ' or '.join(allowed_scopes)
                },
            ))
        if scope_type == 'id' and not utils.does_id_exist(dom, scope):
            report.append(ReportItem.error(
                error_codes.ID_NOT_FOUND,
                'id "{id}" does not exist.',
                info={'id': scope},
            ))
    if report:
        raise LibraryError(*report)

def __find_role(dom, role_id):
    for role in dom.getElementsByTagName("acl_role"):
        if role.getAttribute("id") == role_id:
            return role
    raise AclRoleNotFound(ReportItem.error(
        error_codes.ACL_ROLE_NOT_FOUND,
        'role id "{role_id}" does not exist.',
        info={'role_id': role_id},
    ))

def create_role(dom, role_id, description=''):
    """
    role_id id of desired role
    description role description
    """
    __validate_role_id_for_create(dom, role_id)
    role = dom.createElement("acl_role")
    role.setAttribute("id", role_id)
    if description != "":
        role.setAttribute("description", description)
    acls = utils.get_acls(dom)
    acls.appendChild(role)

def provide_role(dom, role_id):
    """
    role_id id of desired role
    description role description
    """
    try:
        __find_role(dom, role_id)
    except AclRoleNotFound:
        create_role(dom, role_id)

def add_permissions_to_role(dom, role_id, permission_info_list):
    """
    dom document node
    role_id value of attribute id, which exists in dom
    permission_info_list list of tuples,
        each contains (permission, scope_type, scope)
    """
    __validate_permissions(dom, permission_info_list)
    area_type_attribute_map = {
        'xpath': 'xpath',
        'id': 'reference',
    }
    for permission, scope_type, scope in permission_info_list:
        se = dom.createElement("acl_permission")
        se.setAttribute(
            "id", utils.find_unique_id(dom, role_id + "-" + permission)
        )
        se.setAttribute("kind", permission)
        se.setAttribute(area_type_attribute_map[scope_type], scope)
        __find_role(dom, role_id).appendChild(se)

def remove_permissions_referencing(dom, reference):
    for permission in dom.getElementsByTagName("acl_permission"):
        if permission.getAttribute("reference") == reference:
            permission.parentNode.removeChild(permission)

pcs-0.9.149/pcs/node.py

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import sys

import usage
import utils


def node_cmd(argv):
    if len(argv) == 0:
        usage.node()
        sys.exit(1)

    sub_cmd = argv.pop(0)
    if sub_cmd == "help":
        usage.node(argv)
    elif sub_cmd == "maintenance":
        node_maintenance(argv)
    elif sub_cmd == "unmaintenance":
        node_maintenance(argv, False)
    elif sub_cmd == "utilization":
        if len(argv) == 0:
            print_nodes_utilization()
        elif len(argv) == 1:
            print_node_utilization(argv.pop(0))
        else:
            set_node_utilization(argv.pop(0), argv)
    else:
        usage.node()
        sys.exit(1)

def node_maintenance(argv, on=True):
    action = ["-v", "on"] if on else ["-D"]
    cluster_nodes = utils.getNodesFromPacemaker()
    nodes = []
    failed_count = 0
    if "--all" in utils.pcs_options:
        nodes = cluster_nodes
    elif argv:
        for node in argv:
            if node not in cluster_nodes:
                utils.err(
                    "Node '%s' does not appear to exist in configuration"
                    % node,
                    False
                )
                failed_count += 1
            else:
                nodes.append(node)
    else:
        nodes.append("")

    for node in nodes:
        node_attr = ["-N", node] if node else []
        output, retval = utils.run(
            ["crm_attribute", "-t", "nodes", "-n", "maintenance"] + action
            + node_attr
        )
        if retval != 0:
            node_name = ("node '%s'" % node) if argv else "current node"
            failed_count += 1
            if on:
                utils.err(
                    "Unable to put %s to maintenance mode.\n%s"
                    % (node_name, output),
                    False
                )
            else:
                utils.err(
                    "Unable to remove %s from maintenance mode.\n%s"
                    % (node_name, output),
                    False
                )
    if failed_count > 0:
        sys.exit(1)

def set_node_utilization(node, argv):
    cib = utils.get_cib_dom()
    node_el = utils.dom_get_node(cib, node)
    if node_el is None:
        utils.err("Unable to find a node: {0}".format(node))
    utils.dom_update_utilization(
        node_el, utils.convert_args_to_tuples(argv), "nodes-"
    )
    utils.replace_cib_configuration(cib)

def print_node_utilization(node):
    cib = utils.get_cib_dom()
    node_el = utils.dom_get_node(cib, node)
    if node_el is None:
        utils.err("Unable to find a node: {0}".format(node))
    utilization = utils.get_utilization_str(node_el)
    print("Node Utilization:")
    print(" {0}: {1}".format(node, utilization))

def print_nodes_utilization():
    cib = utils.get_cib_dom()
    utilization = {}
    for node_el in cib.getElementsByTagName("node"):
        u = utils.get_utilization_str(node_el)
        if u:
            utilization[node_el.getAttribute("uname")] = u
    print("Node Utilization:")
    for node in sorted(utilization):
        print(" {0}: {1}".format(node, utilization[node]))

pcs-0.9.149/pcs/pcs (symlink to pcs.py)

pcs-0.9.149/pcs/pcs.8

.TH PCS "8" "February 2016" "pcs 0.9.149" "System Administration Utilities"
.SH NAME
pcs \- pacemaker/corosync configuration system
.SH SYNOPSIS
.B pcs
[\fI\-f file\fR] [\fI\-h\fR] [\fIcommands\fR]...
.SH DESCRIPTION
Control and configure pacemaker and corosync.
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Display usage and exit
.TP
\fB\-f\fR file
Perform actions on file instead of active CIB
.TP
\fB\-\-debug\fR
Print all network traffic and external commands run
.TP
\fB\-\-version\fR
Print pcs version information
.SS "Commands:"
.TP
cluster
Configure cluster options and nodes
.TP
resource
Manage cluster resources
.TP
stonith
Configure fence devices
.TP
constraint
Set resource constraints
.TP
property
Set pacemaker properties
.TP
acl
Set pacemaker access control lists
.TP
status
View cluster status
.TP
config
View and manage cluster configuration
.TP
pcsd
Manage pcs daemon
.TP
node
Manage cluster nodes
.SS "resource"
.TP
show [resource id] [\fB\-\-full\fR] [\fB\-\-groups\fR]
Show all currently configured resources or if a resource is specified show the options for the configured resource. If \fB\-\-full\fR is specified all configured resource options will be displayed. If \fB\-\-groups\fR is specified, only show groups (and their resources).
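The `ReportItem` class in errors.py above pairs a machine-readable error code with a message template that is rendered from the `info` dict at construction time, so callers can react to `code` programmatically while users see a formatted message. A condensed standalone sketch of that pattern (re-implemented here for illustration; it is not importable pcs code):

```python
# Condensed sketch of the ReportItem pattern from errors.py: a report
# carries a code, a severity, and a message rendered from 'info'.

class Report(object):
    ERROR = "ERROR"
    WARNING = "WARNING"

    def __init__(self, code, severity, message_pattern, info=None):
        self.code = code
        self.severity = severity
        self.info = info if info else dict()
        # the human-readable message is produced once, up front
        self.message = message_pattern.format(**self.info)

    @classmethod
    def error(cls, code, message_pattern, **kwargs):
        return cls(code, cls.ERROR, message_pattern, **kwargs)

item = Report.error(
    "ACL_ROLE_ALREADY_EXISTS",
    "role {id} already exists",
    info={"id": "observer"},
)
print(item.severity, item.code, item.message)
```

This is the shape library_acl.py relies on when it collects multiple validation failures and raises `LibraryError(*report)` with all of them at once.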
.TP
list [] [\fB\-\-nodesc\fR]
Show list of all available resources, optionally filtered by specified type, standard or provider. If \fB\-\-nodesc\fR is used then descriptions of resources are not printed.
.TP
describe
Show options for the specified resource
.TP
create [resource options] [op [ ]...] [meta ...] [\fB\-\-clone\fR | \fB\-\-master\fR | \fB\-\-group\fR [\fB\-\-before\fR | \fB\-\-after\fR ]] [\fB\-\-disabled\fR] [\fB\-\-wait\fR[=n]]
Create specified resource. If \fB\-\-clone\fR is used a clone resource is created. If \fB\-\-master\fR is specified a master/slave resource is created. If \fB\-\-group\fR is specified the resource is added to the group named. You can use \fB\-\-before\fR or \fB\-\-after\fR to specify the position of the added resource relative to some resource already existing in the group. If \fB\-\-disabled\fR is specified the resource is not started automatically. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the resource to start and then return 0 if the resource is started, or 1 if the resource has not yet started. If 'n' is not specified it defaults to 60 minutes.

Example: Create a new resource called 'VirtualIP' with IP address 192.168.0.99, netmask of 32, monitored every 30 seconds, on eth2:

pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor interval=30s
.TP
delete
Deletes the resource, group, master or clone (and all resources within the group/master/clone).
.TP
enable [\fB\-\-wait\fR[=n]]
Allow the cluster to start the resource. Depending on the rest of the configuration (constraints, options, failures, etc), the resource may remain stopped. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the resource to start and then return 0 if the resource is started, or 1 if the resource has not yet started. If 'n' is not specified it defaults to 60 minutes.
.TP
disable [\fB\-\-wait\fR[=n]]
Attempt to stop the resource if it is running and forbid the cluster from starting it again. Depending on the rest of the configuration (constraints, options, failures, etc), the resource may remain started. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the resource to stop and then return 0 if the resource is stopped or 1 if the resource has not stopped. If 'n' is not specified it defaults to 60 minutes.
.TP
restart [node] [\fB\-\-wait\fR=n]
Restart the resource specified. If a node is specified and if the resource is a clone or master/slave it will be restarted only on the node specified. If \fB\-\-wait\fR is specified, then we will wait up to 'n' seconds for the resource to be restarted and return 0 if the restart was successful or 1 if it was not.
.TP
debug\-start [\fB\-\-full\fR]
This command will force the specified resource to start on this node ignoring the cluster recommendations and print the output from starting the resource. Using \fB\-\-full\fR will give more detailed output. This is mainly used for debugging resources that fail to start.
.TP
debug\-stop [\fB\-\-full\fR]
This command will force the specified resource to stop on this node ignoring the cluster recommendations and print the output from stopping the resource. Using \fB\-\-full\fR will give more detailed output. This is mainly used for debugging resources that fail to stop.
.TP
debug\-promote [\fB\-\-full\fR]
This command will force the specified resource to be promoted on this node ignoring the cluster recommendations and print the output from promoting the resource. Using \fB\-\-full\fR will give more detailed output. This is mainly used for debugging resources that fail to promote.
.TP
debug\-demote [\fB\-\-full\fR]
This command will force the specified resource to be demoted on this node ignoring the cluster recommendations and print the output from demoting the resource. Using \fB\-\-full\fR will give more detailed output. This is mainly used for debugging resources that fail to demote.
.TP
debug\-monitor [\fB\-\-full\fR]
This command will force the specified resource to be monitored on this node ignoring the cluster recommendations and print the output from monitoring the resource. Using \fB\-\-full\fR will give more detailed output. This is mainly used for debugging resources that fail to be monitored.
.TP
move [destination node] [\fB\-\-master\fR] [lifetime=] [\fB\-\-wait\fR[=n]]
Move the resource off the node it is currently running on by creating a \-INFINITY location constraint to ban the node. If destination node is specified the resource will be moved to that node by creating an INFINITY location constraint to prefer the destination node. If \fB\-\-master\fR is used the scope of the command is limited to the master role and you must use the master id (instead of the resource id). If lifetime is specified then the constraint will expire after that time, otherwise it defaults to infinity and the constraint can be cleared manually with 'pcs resource clear' or 'pcs constraint delete'. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the resource to move and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes. If you want the resource to preferably avoid running on some nodes but be able to failover to them use 'pcs location avoids'.
.TP
ban [node] [\fB\-\-master\fR] [lifetime=] [\fB\-\-wait\fR[=n]]
Prevent the resource id specified from running on the node (or on the current node it is running on if no node is specified) by creating a \-INFINITY location constraint. If \fB\-\-master\fR is used the scope of the command is limited to the master role and you must use the master id (instead of the resource id). If lifetime is specified then the constraint will expire after that time, otherwise it defaults to infinity and the constraint can be cleared manually with 'pcs resource clear' or 'pcs constraint delete'. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the resource to move and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes. If you want the resource to preferably avoid running on some nodes but be able to failover to them use 'pcs location avoids'.
.TP
clear [node] [\fB\-\-master\fR] [\fB\-\-wait\fR[=n]]
Remove constraints created by move and/or ban on the specified resource (and node if specified). If \fB\-\-master\fR is used the scope of the command is limited to the master role and you must use the master id (instead of the resource id). If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including starting and/or moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
.TP
standards
List available resource agent standards supported by this installation. (OCF, LSB, etc.)
.TP
providers
List available OCF resource agent providers
.TP
agents [standard[:provider]]
List available agents optionally filtered by standard and provider
.TP
update [resource options] [op [ ]...] [meta ...] [\fB\-\-wait\fR[=n]]
Add/Change options to specified resource, clone or multi\-state resource. If an operation (op) is specified it will update the first found operation with the same action on the specified resource, if no operation with that action exists then a new operation will be created. (WARNING: all existing options on the updated operation will be reset if not specified.) If you want to create multiple monitor operations you should use the 'op add' & 'op remove' commands. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the changes to take effect and then return 0 if the changes have been processed or 1 otherwise. If 'n' is not specified it defaults to 60 minutes.
.TP
op add [operation properties]
Add operation for specified resource
.TP
op remove [...]
Remove specified operation (note: you must specify the exact operation properties to properly remove an existing operation).
.TP
op remove
Remove the specified operation id
.TP
op defaults [options]
Set default values for operations, if no options are passed, lists currently configured defaults
.TP
meta [\fB\-\-wait\fR[=n]]
Add specified options to the specified resource, group, master/slave or clone. Meta options should be in the format of name=value, options may be removed by setting an option without a value. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the changes to take effect and then return 0 if the changes have been processed or 1 otherwise. If 'n' is not specified it defaults to 60 minutes.

Example: pcs resource meta TestResource failure\-timeout=50 stickiness=
.TP
group add [resource id] ... [resource id] [\fB\-\-before\fR | \fB\-\-after\fR ] [\fB\-\-wait\fR[=n]]
Add the specified resource to the group, creating the group if it does not exist. If the resource is present in another group it is moved to the new group. You can use \fB\-\-before\fR or \fB\-\-after\fR to specify the position of the added resources relative to some resource already existing in the group. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
.TP
group remove [resource id] ... [resource id] [\fB\-\-wait\fR[=n]]
Remove the specified resource(s) from the group, removing the group if no resources remain. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
.TP
ungroup [resource id] ... [resource id] [\fB\-\-wait\fR[=n]]
Remove the group (Note: this does not remove any resources from the cluster) or if resources are specified, remove the specified resources from the group. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
.TP
clone [clone options]... [\fB\-\-wait\fR[=n]]
Set up the specified resource or group as a clone. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including starting clone instances if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
.TP
unclone [\fB\-\-wait\fR[=n]]
Remove the clone which contains the specified group or resource (the resource or group will not be removed). If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including stopping clone instances if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
.TP
master [] [options] [\fB\-\-wait\fR[=n]]
Configure a resource or group as a multi\-state (master/slave) resource. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including starting and promoting resource instances if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes. Note: to remove a master you must remove the resource/group it contains.
.TP
manage ... [resource n]
Set resources listed to managed mode (default)
.TP
unmanage ... [resource n]
Set resources listed to unmanaged mode
.TP
defaults [options]
Set default values for resources, if no options are passed, lists currently configured defaults
.TP
cleanup []
Cleans up the resource in the lrmd (useful to reset the resource status and failcount). This tells the cluster to forget the operation history of a resource and re\-detect its current state. This can be useful to purge knowledge of past failures that have since been resolved. If a resource id is not specified then all resources/stonith devices will be cleaned up.
.TP
failcount show [node]
Show current failcount for specified resource from all nodes or only on specified node
.TP
failcount reset [node]
Reset failcount for specified resource on all nodes or only on specified node. This tells the cluster to forget how many times a resource has failed in the past. This may allow the resource to be started or moved to a more preferred location.
.TP
relocate dry\-run [resource1] [resource2] ...
The same as 'relocate run' but has no effect on the cluster.
.TP
relocate run [resource1] [resource2] ...
Relocate specified resources to their preferred nodes. If no resources are specified, relocate all resources. This command calculates the preferred node for each resource while ignoring resource stickiness. Then it creates location constraints which will cause the resources to move to their preferred nodes. Once the resources have been moved the constraints are deleted automatically. Note that the preferred node is calculated based on current cluster status, constraints, location of resources and other settings and thus it might change over time.
.TP
relocate show
Display current status of resources and their optimal node ignoring resource stickiness.
.TP
relocate clear
Remove all constraints created by the 'relocate run' command.
.TP
utilization [ [= ...]]
Add specified utilization options to specified resource. If resource is not specified, shows utilization of all resources. If utilization options are not specified, shows utilization of specified resource. Utilization option should be in format name=value, value has to be integer. Options may be removed by setting an option without a value.
Example: pcs resource utilization TestResource cpu= ram=20
.SS "cluster"
.TP
auth [node] [...] [\fB\-u\fR username] [\fB\-p\fR password] [\fB\-\-force\fR] [\fB\-\-local\fR]
Authenticate pcs to pcsd on nodes specified, or on all nodes configured in corosync.conf if no nodes are specified (authorization tokens are stored in ~/.pcs/tokens or /var/lib/pcsd/tokens for root). By default all nodes are also authenticated to each other, using \fB\-\-local\fR only authenticates the local node (and does not authenticate the remote nodes with each other). Using \fB\-\-force\fR forces re-authentication to occur.
.TP
setup [\fB\-\-start\fR] [\fB\-\-local\fR] [\fB\-\-enable\fR] \fB\-\-name\fR [node2[,node2\-altaddr]] [..] [\fB\-\-transport\fR ] [\fB\-\-rrpmode\fR active|passive] [\fB\-\-addr0\fR [[[\fB\-\-mcast0\fR ] [\fB\-\-mcastport0\fR ] [\fB\-\-ttl0\fR ]] | [\fB\-\-broadcast0\fR]] [\fB\-\-addr1\fR [[[\fB\-\-mcast1\fR ] [\fB\-\-mcastport1\fR ] [\fB\-\-ttl1\fR ]] | [\fB\-\-broadcast1\fR]]]] [\fB\-\-wait_for_all\fR=<0|1>] [\fB\-\-auto_tie_breaker\fR=<0|1>] [\fB\-\-last_man_standing\fR=<0|1> [\fB\-\-last_man_standing_window\fR=
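The `utilization` commands described earlier accept name=value pairs where the value must be an integer and an empty value removes the option. A standalone sketch of that argument convention (the helper name is illustrative, not the pcs implementation, which uses `utils.convert_args_to_tuples`):

```python
# Sketch of the name=value convention used by 'pcs resource/node
# utilization': integer values set an option, an empty value removes it.
# Illustrative helper, not pcs code.

def parse_utilization_args(argv):
    options = {}
    for arg in argv:
        if "=" not in arg:
            raise ValueError("invalid option: {0}".format(arg))
        name, value = arg.split("=", 1)
        # a non-empty value must be an integer per the man page
        if value != "" and not value.lstrip("-").isdigit():
            raise ValueError("utilization value must be an integer")
        options[name] = value  # "" marks the option for removal
    return options

# mirrors the man page example: remove 'cpu', set 'ram' to 20
print(parse_utilization_args(["cpu=", "ram=20"]))
```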