ceph-iscsi-3.9/.gitignore:

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
non-git/
*.tar.gz
*.rpm

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# IPython Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# dotenv
.env

# virtualenv
venv/
ENV/

# Spyder project settings
.spyderproject

# Rope project settings
.ropeproject

# pycharm
.idea/

# vscode
.vscode/

ceph-iscsi-3.9/COPYING:

Format-Specification: http://anonscm.debian.org/viewvc/dep/web/deps/dep5/copyright-format.xml?revision=279&view=markup
Name: ceph-iscsi
Maintainer: Jason Dillaman
Source: http://github.com/ceph/ceph-iscsi
Files: *
License: GPL-3.0-or-later (see LICENSE)

ceph-iscsi-3.9/LICENSE:

                    GNU GENERAL PUBLIC LICENSE
                       Version 3, 29 June 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. 
For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. 
To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. 
The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. 
The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. 
You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. 
A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. 
d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. 
A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. 
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. 
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. 
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. 
Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. 
The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. 
"Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. 
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. 
If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. 
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. <one line to give the program's name and a brief idea of what it does.> Copyright (C) <year> <name of author> This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>. Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: <program> Copyright (C) <year> <name of author> This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <https://www.gnu.org/licenses/>. The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <https://www.gnu.org/philosophy/why-not-lgpl.html>. ceph-iscsi-3.9/Makefile000066400000000000000000000006041470665154300150670ustar00rootroot00000000000000DIST ?= "el9" TAG := $(shell git describe --tags --abbrev=0) VERSION := $(shell echo $(TAG) | sed 's/^v//') dist: git archive --format=tar.gz --prefix=ceph-iscsi-$(VERSION)/ HEAD > ceph-iscsi-$(VERSION).tar.gz srpm: dist rpmbuild -bs ceph-iscsi.spec \ --define "_topdir ." \ --define "_sourcedir ." \ --define "_srcrpmdir ." \ --define "dist .$(DIST)" .PHONY: dist srpm ceph-iscsi-3.9/README000066400000000000000000000157541470665154300143170ustar00rootroot00000000000000This project provides the common logic and CLI tools for creating and managing LIO gateways for Ceph. It includes the ```rbd-target-api``` daemon which is responsible for restoring the state of LIO following a gateway reboot/outage and replaces the existing 'target' service. It also includes the CLI tool ```gwcli``` which can be used to configure and manage the Ceph iSCSI gateway, which replaces the existing ```targetcli``` CLI tool. This CLI tool utilizes the ```rbd-target-api``` server daemon to configure multiple gateways concurrently.
Here's an example of the shell interface the gwcli tool provides: [ceph-iscsi]$ gwcli ls o- / .................................................................................. [...] o- cluster .................................................................. [Clusters: 1] | o- ceph ..................................................................... [HEALTH_OK] | o- pools ................................................................... [Pools: 3] | | o- ec ........................................ [(2+1), Commit: 0b/40G (0%), Used: 0b] | | o- iscsi ..................................... [(x3), Commit: 0b/20G (0%), Used: 18b] | | o- rbd ....................................... [(x3), Commit: 8G/20G (40%), Used: 5K] | o- topology ......................................................... [OSDs: 3,MONs: 3] o- disks ................................................................... [8G, Disks: 5] | o- rbd ....................................................................... [rbd (8G)] | o- disk_1 ............................................................... [disk_1 (1G)] | o- disk_2 ............................................................... [disk_2 (2G)] | o- disk_3 ............................................................... [disk_3 (2G)] | o- disk_4 ............................................................... [disk_4 (1G)] | o- disk_5 ............................................................... [disk_5 (2G)] o- iscsi-targets ............................................................. [Targets: 1] o- iqn.2003-01.com.redhat.iscsi-gw:ceph-gw1 ................... [Auth: CHAP, Gateways: 2] | o- disks ................................................................... [Disks: 1] | | o- rbd/disk_1 .............................................. [Owner: rh7-gw2, Lun: 0] | o- gateways ..................................................... [Up: 2/2, Portals: 2] | | o- rh7-gw1 .................................................... 
[192.168.122.69 (UP)] | | o- rh7-gw2 .................................................... [192.168.122.14 (UP)] o- host-groups ........................................................... [Groups : 0] o- hosts ................................................ [Auth: ACL_ENABLED, Hosts: 1] | o- iqn.1994-05.com.redhat:rh7-client .......... [LOGGED-IN, Auth: CHAP, Disks: 1(2G)] | o- lun 0 ......................................... [rbd.disk_1(2G), Owner: rh7-gw2] o- iqn.2003-01.com.redhat.iscsi-gw:ceph-gw2 ................... [Auth: None, Gateways: 2] o- disks ................................................................... [Disks: 1] | o- rbd/disk_2 .............................................. [Owner: rh7-gw1, Lun: 0] o- gateways ..................................................... [Up: 2/2, Portals: 2] | o- rh7-gw1 ................................................... [2006:ac81::1103 (UP)] | o- rh7-gw2 ................................................... [2006:ac81::1104 (UP)] o- host-groups ........................................................... [Groups : 0] o- hosts ................................................ [Auth: ACL_ENABLED, Hosts: 1] o- iqn.1994-05.com.redhat:rh7-client .......... [LOGGED-IN, Auth: None, Disks: 1(2G)] o- lun 0 ......................................... [rbd.disk_2(2G), Owner: rh7-gw1] The rbd-target-api daemon utilises Flask's internal development server to provide the REST API. Flask's server is not normally used in a production context, but for this specific use case it offers a simple way to expose an admin interface - at least for the first release! The API has been tested with the Firefox RESTclient add-on over https (based on a common self-signed certificate).
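The same PUT request can also be driven from a script. The Python sketch below only assembles the request pieces (URL, basic-auth header and urlencoded body) using the standard library; the `build_gateway_request` helper name is made up for illustration, and the admin:admin credentials plus the host/IQN values are just the example values used in this README, not fixed defaults of the ceph-iscsi API.

```python
# Sketch: assemble the pieces of the gateway-creation PUT request described
# in this README. Nothing here is ceph-iscsi specific beyond the URL layout
# and the ip_address form variable; names and values are examples.
import base64
from urllib.parse import urlencode

def build_gateway_request(host, target_iqn, gw_name, ip_address,
                          user="admin", password="admin", port=5000):
    """Return the (url, headers, body) triple for a PUT /api/gateway call."""
    url = "https://{}:{}/api/gateway/{}/{}".format(host, port, target_iqn, gw_name)
    token = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    headers = {
        # the API expects form-encoded variables in the request body
        "Content-Type": "application/x-www-form-urlencoded",
        "Authorization": "Basic {}".format(token),
    }
    body = urlencode({"ip_address": ip_address})
    return url, headers, body

# Example values taken from this README
url, headers, body = build_gateway_request(
    "192.168.122.69",
    "iqn.2003-01.com.redhat.iscsi-gw:ceph-gw1",
    "rh7-gw1",
    "192.168.122.69")
```

The returned pieces can then be submitted with any HTTP client - http.client, curl, or the RESTclient add-on described below.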
With the certificate in place on each gateway you can add basic auth credentials to match the local api configuration in the RESTclient and use the client as follows: Add a Header content type for application/x-www-form-urlencoded METHOD: PUT URL: https://192.168.122.69:5000/api/gateway/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw1/rh7-gw1 Select the urlencoded content type and the basic auth credentials, add the required variables to the body section in the client UI, e.g. ip_address=192.168.122.69, and click 'SEND' Curl Examples: If the UI is not your thing, curl probably is! Here's an example of using curl to create a gateway node. curl --user admin:admin -d ip_address=192.168.122.14 \ -X PUT http://192.168.122.14:5000/api/gateway/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw1/rh7-gw2 IPv6 Support: Make sure the IPv6 addresses used are global unicast addresses; the URL then takes the form: curl --user admin:admin -d ip_address=2006:ac81::1104 \ -X PUT http://[2006:ac81::1104]:5000/api/gateway/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw2/rh7-gw2 Mixing IPv4 and IPv6 addresses is also allowed, as below: curl --user admin:admin -d ip_address=192.168.122.14 \ -X PUT http://[2006:ac81::1104]:5000/api/gateway/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw1/rh7-gw2 curl --user admin:admin -d ip_address=2006:ac81::1104 \ -X PUT http://192.168.122.14:5000/api/gateway/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw2/rh7-gw2 NOTE: please make sure both the IPv4 and IPv6 addresses are in the trusted ip list in iscsi-gateway.cfg. ## Installation ### Via RPM Simply install the provided rpm with ```rpm -ivh ceph-iscsi-.el7.noarch.rpm``` ### Manually The following packages are required by ceph-iscsi and must be installed before starting the rbd-target-api service: python-requests python-flask python-rados python-rbd python-netifaces python-rtslib python-configshell python-cryptography pyOpenSSL To install the python package that provides the application logic, run the provided setup.py script i.e.
```> python setup.py install``` For the management daemon (rbd-target-api), simply copy the following files into their equivalent places on each gateway - /usr/lib/systemd/system/rbd-target-gw.service --> /lib/systemd/system - /usr/lib/systemd/system/rbd-target-api.service --> /lib/systemd/system - /usr/bin/rbd-target-gw --> /usr/bin - /usr/bin/rbd-target-api --> /usr/bin - /usr/bin/gwcli --> /usr/bin ## Configuration Once the package is installed, the Ceph ceph-iscsi instructions found here: http://docs.ceph.com/docs/master/rbd/iscsi-target-cli/ can be used to create an iscsi-gateway.cfg and create a target. ceph-iscsi-3.9/README.md000066400000000000000000000066671470665154300147230ustar00rootroot00000000000000# ceph-iscsi This project provides the common logic and CLI tools for creating and managing LIO gateways for Ceph. It includes the ```rbd-target-api``` daemon which is responsible for restoring the state of LIO following a gateway reboot/outage and exporting a REST API to configure the system using tools like gwcli. It replaces the existing 'target' service. There is also a second daemon ```rbd-target-gw``` which exports a REST API to gather statistics. It also includes the CLI tool ```gwcli``` which can be used to configure and manage the Ceph iSCSI gateway, which replaces the existing ```targetcli``` CLI tool. This CLI tool utilizes the ```rbd-target-api``` server daemon to configure multiple gateways concurrently. ## Usage This package should be installed on each node that is intended to be an iSCSI gateway. The Python ```ceph_iscsi_config``` modules are used by: * the **rbd-target-api** daemon to restore LIO state at boot time * **API/CLI** configuration tools ## Installation ### Repository A YUM repository is available with the latest releases. The repository is available at `https://download.ceph.com/ceph-iscsi/{version}/rpm/{distribution}/noarch/`.
For example, https://download.ceph.com/ceph-iscsi/latest/rpm/el7/noarch/ Alternatively, you may download the YUM repo description at https://download.ceph.com/ceph-iscsi/latest/rpm/el7/ceph-iscsi.repo Packages are signed with the following key: https://download.ceph.com/keys/release.asc ### Via RPM Simply install the provided rpm with: ```rpm -ivh ceph-iscsi-.el7.noarch.rpm``` ### Manually The following packages are required by ceph-iscsi-config and must be installed before starting the rbd-target-api and rbd-target-gw services: python-rados python-rbd python-netifaces python-rtslib python-configshell python-cryptography python-flask To install the python package that provides the CLI tool, daemons and application logic, run the provided setup.py script i.e. ```> python setup.py install``` If using systemd, copy the following unit files into their equivalent places on each gateway: - /usr/lib/systemd/system/rbd-target-gw.service --> /lib/systemd/system - /usr/lib/systemd/system/rbd-target-api.service --> /lib/systemd/system Once the unit files are in place, reload the configuration with ``` systemctl daemon-reload systemctl enable rbd-target-api systemctl enable rbd-target-gw systemctl start rbd-target-api systemctl start rbd-target-gw ``` ## Features The functionality provided by each module in the python package is summarised below: | Module | Description | | --- | --- | | **client** | logic handling the creation/update and removal of a NodeACL from a gateway | | **config** | common code handling the creation and update mechanisms for the rados configuration object | | **gateway** | definition of the iSCSI gateway (target plus target portal groups) | | **lun** | rbd image management (create/resize), combined with mapping to the OS and LIO instance | | **utils** | common code called by multiple modules | The rbd-target-api daemon performs the following tasks: 1. At startup, remove any osd blocklist entry that may apply to the running host 2.
Read the configuration object from Rados 3. Process the configuration 3.1 map rbd's to the host 3.2 add rbd's to LIO 3.3 Create the iscsi target, TPG's and port IP's 3.4 Define clients (NodeACL's) 3.5 add the required rbd images to clients 4. Export a REST API for system configuration. ceph-iscsi-3.9/ceph-iscsi.spec000066400000000000000000000145241470665154300163400ustar00rootroot00000000000000# # spec file for package ceph-iscsi # # Copyright (C) 2017-2018 The Ceph iSCSI Project Developers. See # COPYING file at the top-level directory of this distribution and at # https://github.com/ceph/ceph-iscsi/blob/master/COPYING # # All modifications and additions to the file contributed by third parties # remain the property of their copyright owners, unless otherwise agreed # upon. # # This file is under the GNU General Public License, version 3 or any # later version. # # Please submit bugfixes or comments via http://tracker.ceph.com/ # %if 0%{?rhel} == 7 %global with_python2 1 %endif Name: ceph-iscsi Version: 3.9 Release: 1%{?dist} Group: System/Filesystems Summary: Python modules for Ceph iSCSI gateway configuration management %if 0%{?suse_version} License: GPL-3.0-or-later %else License: GPLv3+ %endif URL: https://github.com/ceph/ceph-iscsi Source0: https://github.com/ceph/ceph-iscsi/archive/%{version}/%{name}-%{version}.tar.gz BuildArch: noarch Obsoletes: ceph-iscsi-config Obsoletes: ceph-iscsi-cli Obsoletes: ceph-iscsi-tools Requires: tcmu-runner >= 1.4.0 Requires: ceph-common >= 10.2.2 %if 0%{?with_python2} BuildRequires: python2-devel BuildRequires: python2-setuptools Requires: python2-distro Requires: python-rados >= 10.2.2 Requires: python-rbd >= 10.2.2 Requires: python-netifaces >= 0.10.4 Requires: python-rtslib >= 2.1.fb68 Requires: python-cryptography Requires: python-flask >= 0.10.1 Requires: python-configshell >= 1.1.fb25 %if 0%{?rhel} == 7 Requires: pyOpenSSL Requires: python-requests %else Requires: python-pyOpenSSL Requires: python2-requests %endif %else 
BuildRequires: python3-devel BuildRequires: python3-setuptools Requires: python3-distro Requires: python3-rados >= 10.2.2 Requires: python3-rbd >= 10.2.2 Requires: python3-netifaces >= 0.10.4 Requires: python3-rtslib >= 2.1.fb68 Requires: python3-cryptography Requires: python3-pyOpenSSL Requires: python3-requests %if 0%{?suse_version} BuildRequires: python-rpm-macros BuildRequires: fdupes Requires: python3-Flask >= 0.10.1 Requires: python3-configshell-fb >= 1.1.25 %else Requires: python3-flask >= 0.10.1 Requires: python3-configshell >= 1.1.fb25 %endif %endif %if 0%{?rhel} BuildRequires: systemd %else BuildRequires: systemd-rpm-macros %{?systemd_requires} %endif %description Python package providing the modules used to handle the configuration of an iSCSI gateway, backed by Ceph RBD. The RPM installs configuration management logic (ceph_iscsi_config modules), an rbd-target-gw systemd service, and a CLI-based management tool 'gwcli', replacing the 'targetcli' tool. The configuration management modules may be consumed by custom Ansible playbooks and the rbd-target-gw daemon. The rbd-target-gw service is responsible for startup and shutdown actions, replacing the 'target' service used in standalone LIO implementations. In addition, rbd-target-gw also provides a REST API utilized by the Ceph dashboard and gwcli tool, and a prometheus exporter for gateway LIO performance statistics, supporting monitoring and visualisation tools like Grafana.
%prep %autosetup -p1 %build %if 0%{?with_python2} %{__python2} setup.py build %else %if 0%{?suse_version} %python3_build %else %{py3_build} %endif %endif %install %if 0%{?with_python2} %{__python2} setup.py install -O1 --skip-build --root %{buildroot} --install-scripts %{_bindir} %else %if 0%{?suse_version} %python3_install %python_expand %fdupes %{buildroot}%{$python_sitelib} %else %{py3_install} %endif %endif mkdir -p %{buildroot}%{_unitdir} install -m 0644 .%{_unitdir}/rbd-target-gw.service %{buildroot}%{_unitdir} install -m 0644 .%{_unitdir}/rbd-target-api.service %{buildroot}%{_unitdir} mkdir -p %{buildroot}%{_mandir}/man8 install -m 0644 gwcli.8 %{buildroot}%{_mandir}/man8/ gzip %{buildroot}%{_mandir}/man8/gwcli.8 mkdir -p %{buildroot}%{_unitdir}/rbd-target-gw.service.d # When rtslib is fixed drop this and update the dep. Note that we add both # /etc and /var, because the kernel and userspace developers keep switching # the dir they want to use. mkdir -p %{buildroot}%{_sysconfdir}/target/pr mkdir -p %{buildroot}%{_sysconfdir}/target/alua mkdir -p %{buildroot}%{_localstatedir}/target/pr mkdir -p %{buildroot}%{_localstatedir}/target/alua %if 0%{?suse_version} mkdir -p %{buildroot}%{_sbindir} ln -s service %{buildroot}%{_sbindir}/rcrbd-target-gw ln -s service %{buildroot}%{_sbindir}/rcrbd-target-api %endif %pre %if 0%{?suse_version} %service_add_pre rbd-target-gw.service rbd-target-api.service %endif %post %if 0%{?rhel} == 7 /bin/systemctl --system daemon-reload &> /dev/null || : /bin/systemctl --system enable rbd-target-gw &> /dev/null || : /bin/systemctl --system enable rbd-target-api &> /dev/null || : %endif %if 0%{?fedora} || 0%{?rhel} >= 8 %systemd_post rbd-target-gw.service %systemd_post rbd-target-api.service %endif %if 0%{?suse_version} %service_add_post rbd-target-gw.service rbd-target-api.service %endif %preun %if 0%{?fedora} || 0%{?rhel} >= 8 %systemd_preun rbd-target-gw.service %systemd_preun rbd-target-api.service %endif %if 0%{?suse_version} 
%service_del_preun rbd-target-gw.service rbd-target-api.service %endif %postun %if 0%{?rhel} == 7 /bin/systemctl --system daemon-reload &> /dev/null || : %endif %if 0%{?fedora} || 0%{?rhel} >= 8 %systemd_postun rbd-target-gw.service %systemd_postun rbd-target-api.service %endif %if 0%{?suse_version} %service_del_postun rbd-target-gw.service rbd-target-api.service %endif %files %license LICENSE %license COPYING %doc README %doc iscsi-gateway.cfg_sample %if 0%{?with_python2} %{python2_sitelib}/* %else %{python3_sitelib}/* %endif %{_bindir}/gwcli %{_bindir}/rbd-target-gw %{_bindir}/rbd-target-api %{_unitdir}/rbd-target-gw.service %{_unitdir}/rbd-target-api.service %{_mandir}/man8/gwcli.8.gz %attr(0770,root,root) %dir %{_localstatedir}/log/rbd-target-gw %attr(0770,root,root) %dir %{_localstatedir}/log/rbd-target-api %dir %{_unitdir}/rbd-target-gw.service.d %dir %{_sysconfdir}/target %dir %{_sysconfdir}/target/pr %dir %{_sysconfdir}/target/alua %dir %{_localstatedir}/target %dir %{_localstatedir}/target/pr %dir %{_localstatedir}/target/alua %if 0%{?suse_version} %{_sbindir}/rcrbd-target-gw %{_sbindir}/rcrbd-target-api %endif %changelog ceph-iscsi-3.9/ceph_iscsi_config/000077500000000000000000000000001470665154300170655ustar00rootroot00000000000000ceph-iscsi-3.9/ceph_iscsi_config/__init__.py000066400000000000000000000000001470665154300211640ustar00rootroot00000000000000ceph-iscsi-3.9/ceph_iscsi_config/alua.py000066400000000000000000000047021470665154300203640ustar00rootroot00000000000000from rtslib_fb.alua import ALUATargetPortGroup import ceph_iscsi_config.settings as settings from ceph_iscsi_config.utils import CephiSCSIInval def alua_format_group_name(tpg, failover_type, is_owner): if is_owner: return "ao" if failover_type == "explicit": return "standby{}".format(tpg.tag) else: return "ano{}".format(tpg.tag) def alua_create_ao_group(so, tpg, group_name): alua_tpg = ALUATargetPortGroup(so, group_name, tpg.tag) alua_tpg.alua_support_active_optimized = 1 
alua_tpg.alua_access_state = 0 return alua_tpg def alua_create_implicit_group(tpg, so, group_name, is_owner): if is_owner: alua_tpg = alua_create_ao_group(so, tpg, group_name) else: alua_tpg = ALUATargetPortGroup(so, group_name, tpg.tag) alua_tpg.alua_access_state = 1 alua_tpg.alua_support_active_nonoptimized = 1 alua_tpg.alua_access_type = 1 # Just make sure we get to at least attempt one op for the failover # process. alua_tpg.implicit_trans_secs = settings.config.osd_op_timeout + 15 return alua_tpg def alua_create_explicit_group(tpg, so, group_name, is_owner): if is_owner: alua_tpg = alua_create_ao_group(so, tpg, group_name) alua_tpg.preferred = 1 else: alua_tpg = ALUATargetPortGroup(so, group_name, tpg.tag) alua_tpg.alua_support_standby = 1 # Use Explicit but also set the Implicit bit so we can # update the kernel from configfs. alua_tpg.alua_access_type = 3 # start ports in Standby, and let the initiator drive the initial # transition to AO. alua_tpg.alua_access_state = 2 return alua_tpg def alua_create_group(failover_type, tpg, so, is_owner): group_name = alua_format_group_name(tpg, failover_type, is_owner) if failover_type == "explicit": alua_tpg = alua_create_explicit_group(tpg, so, group_name, is_owner) elif failover_type == "implicit": # tmp drop down to implicit. Next patch will check for "implicit" # and add error handling up the stack if the failover_type is invalid. 
alua_tpg = alua_create_implicit_group(tpg, so, group_name, is_owner) else: raise CephiSCSIInval("Invalid failover type {}".format(failover_type)) alua_tpg.alua_support_active_optimized = 1 alua_tpg.alua_support_offline = 0 alua_tpg.alua_support_unavailable = 0 alua_tpg.alua_support_transitioning = 1 alua_tpg.nonop_delay_msecs = 0 return alua_tpg ceph-iscsi-3.9/ceph_iscsi_config/backstore.py000066400000000000000000000014041470665154300214130ustar00rootroot00000000000000from rtslib_fb import UserBackedStorageObject from rtslib_fb.utils import RTSLibError from ceph_iscsi_config.utils import CephiSCSIError USER_RBD = 'user:rbd' def lookup_storage_object_by_disk(config, disk): backstore = config.config["disks"][disk]["backstore"] backstore_object_name = config.config["disks"][disk]["backstore_object_name"] try: return lookup_storage_object(backstore_object_name, backstore) except (RTSLibError, CephiSCSIError): return None def lookup_storage_object(name, backstore): if backstore == USER_RBD: return UserBackedStorageObject(name=name) else: raise CephiSCSIError("Could not lookup storage object - " "Unsupported backstore {}".format(backstore)) ceph-iscsi-3.9/ceph_iscsi_config/client.py000066400000000000000000001043501470665154300207200ustar00rootroot00000000000000from cryptography.hazmat.backends import default_backend from cryptography.hazmat.primitives import serialization, hashes from cryptography.hazmat.primitives.asymmetric import padding from base64 import b64encode, b64decode import os import rtslib_fb.root as lio_root from rtslib_fb.target import NodeACL, Target, TPG from rtslib_fb.fabric import ISCSIFabricModule from rtslib_fb.utils import RTSLibError, RTSLibNotInCFS, normalize_wwn import ceph_iscsi_config.settings as settings from ceph_iscsi_config.gateway_setting import CLIENT_SETTINGS from ceph_iscsi_config.common import Config from ceph_iscsi_config.utils import encryption_available, CephiSCSIError, this_host from ceph_iscsi_config.gateway_object import 
GWObject class GWClient(GWObject): """ This class holds a representation of a client connecting to LIO """ SETTINGS = CLIENT_SETTINGS seed_metadata = {"auth": {"username": '', "password": '', "password_encryption_enabled": False, "mutual_username": '', "mutual_password": '', "mutual_password_encryption_enabled": False}, "luns": {}, "group_name": "" } def __init__(self, logger, client_iqn, image_list, username, password, mutual_username, mutual_password, target_iqn): """ Instantiate an instance of an LIO client :param client_iqn: (str) iscsi iqn string :param image_list: (list) list of rbd images (pool/image) to attach to this client or list of tuples (disk, lunid) :param username: (str) chap username :param password: (str) chap password :param mutual_username: (str) chap mutual username :param mutual_password: (str) chap mutual password :param target_iqn: (str) target iqn string :return: """ self.target_iqn = target_iqn self.lun_lookup = {} # only used for hostgroup based definitions self.requested_images = [] self.username = username self.password = password self.mutual_username = mutual_username self.mutual_password = mutual_password self.mutual = '' self.tpgauth = '' self.metadata = {} self.acl = None self.client_luns = {} self.tpg = None self.tpg_luns = {} self.lun_id_list = list(range(256)) # available LUN ids 0..255 self.change_count = 0 # enable commit to the config for changes by default self.commit_enabled = True self.logger = logger self.current_config = {} self.error = False self.error_msg = '' try: client_iqn, iqn_type = normalize_wwn(['iqn'], client_iqn) except RTSLibError as err: self.error = True self.error_msg = "Invalid iSCSI client name - {}".format(err) self.iqn = client_iqn # Validate the images list doesn't contain duplicate entries dup_images = set([rbd for rbd in image_list if image_list.count(rbd) >= 2]) if len(dup_images) > 0: self.error = True dup_string = ','.join(dup_images) self.error_msg = ("Client's image list contains duplicate 
rbd's" ": {}".format(dup_string)) try: super(GWClient, self).__init__('targets', target_iqn, logger, GWClient.SETTINGS) except CephiSCSIError as err: self.error = True self.error_msg = err # image_list is normally a list of strings (pool/image_name) but # group processing forces a specific lun id allocation to masked disks # in this scenario the image list is a tuple if image_list: if isinstance(image_list[0], tuple): # tuple format ('disk_name', {'lun_id': 0})... for disk_item in image_list: disk_name = disk_item[0] lun_id = disk_item[1].get('lun_id') self.requested_images.append(disk_name) self.lun_lookup[disk_name] = lun_id else: target_config = self.config.config['targets'][self.target_iqn] used_lun_ids = self._get_lun_ids(target_config['clients']) for disk_name in image_list: disk_lun_id = target_config['disks'][disk_name]['lun_id'] if disk_lun_id not in used_lun_ids: self.lun_lookup[disk_name] = disk_lun_id self.requested_images = image_list def _get_lun_ids(self, clients_config): lun_ids = [] for client_config in clients_config.values(): for lun_config in client_config['luns'].values(): lun_ids.append(lun_config['lun_id']) return lun_ids def setup_luns(self, disks_config): """ Add the requested LUNs to the node ACL definition. The image list defined for the client is compared to the current runtime settings, resulting in new images being added, or images removed. """ # first drop the current lunid's used from the candidate list # this allows luns to be added/removed, and new id's to occupy free lun-id # slots rather than simply tag on the end. 
In a high churn environment, # adding new lun(s) at highest lun +1 could lead to exhausting the # 255 lun limit per target self.client_luns = self.get_images(self.acl) for image_name in self.client_luns: lun_id = self.client_luns[image_name]['lun_id'] self.lun_id_list.remove(lun_id) self.logger.debug("(Client.setup_luns) {} has id of " "{}".format(image_name, lun_id)) self.tpg_luns = self.get_images(self.tpg) current_map = dict(self.client_luns) for image in self.requested_images: backstore_object_name = disks_config[image]['backstore_object_name'] if backstore_object_name in self.client_luns: del current_map[backstore_object_name] continue else: rc = self._add_lun(image, self.tpg_luns[backstore_object_name]) if rc != 0: self.error = True self.error_msg = ("{} is missing from the tpg - unable " "to map".format(image)) self.logger.debug("(Client.setup) tpg luns " "{}".format(self.tpg_luns)) self.logger.error("(Client.setup) missing image '{}' from " "the tpg".format(image)) return # 'current_map' should be empty, if not the remaining images need # to be removed from the client if current_map: for backstore_object_name in current_map: self._del_lun_map(backstore_object_name, disks_config) if self.error: self.logger.error("(Client.setup) unable to delete {} from" " {}".format(self.iqn, backstore_object_name)) return def update_acl_controls(self): self.logger.debug("(update_acl_controls) controls: {}".format(self.controls)) self.acl.set_attribute('dataout_timeout', str(self.dataout_timeout)) # Try to detect network problems so we can kill connections # and cleanup before the initiator has begun recovery and # failed over. 
# LIO default 30 self.acl.set_attribute('nopin_response_timeout', str(self.nopin_response_timeout)) # LIO default 15 self.acl.set_attribute('nopin_timeout', str(self.nopin_timeout)) # LIO default 64 self.acl.tcq_depth = self.cmdsn_depth def define_client(self): """ Establish the links for this object to the corresponding ACL and TPG objects from LIO :return: """ iscsi_fabric = ISCSIFabricModule() target = Target(iscsi_fabric, self.target_iqn, 'lookup') # NB. this will check all tpg's for a matching iqn for tpg in target.tpgs: if tpg.enable: for client in tpg.node_acls: if client.node_wwn == self.iqn: self.acl = client self.tpg = client.parent_tpg try: self.update_acl_controls() except RTSLibError as err: self.logger.error("(Client.define_client) FAILED to update " "{}".format(self.iqn)) self.error = True self.error_msg = err self.logger.debug("(Client.define_client) - {} already " "defined".format(self.iqn)) return # at this point the client does not exist, so create it # The configuration only has one active tpg, so pick that one for any # acl definitions for tpg in target.tpgs: if tpg.enable: self.tpg = tpg try: self.acl = NodeACL(self.tpg, self.iqn) self.update_acl_controls() except RTSLibError as err: self.logger.error("(Client.define_client) FAILED to define " "{}".format(self.iqn)) self.logger.debug("(Client.define_client) failure msg " "{}".format(err)) self.error = True self.error_msg = err else: self.logger.info("(Client.define_client) {} added " "successfully".format(self.iqn)) self.change_count += 1 @staticmethod def get_client_info(target_iqn, client_iqn): result = { "alias": '', "state": '', "ip_address": [] } iscsi_fabric = ISCSIFabricModule() try: target = Target(iscsi_fabric, target_iqn, 'lookup') except RTSLibNotInCFS: return result for tpg in target.tpgs: if tpg.enable: for client in tpg.node_acls: if client.node_wwn != client_iqn: continue session = client.session if session is None: break result['alias'] = session.get('alias') state = 
session.get('state').upper() result['state'] = state ips = set() if state == 'LOGGED_IN': for conn in session.get('connections'): ips.add(conn.get('address')) result['ip_address'] = list(ips) break return result @staticmethod def define_clients(logger, config, target_iqn): """ define the clients (nodeACLs) to the gateway definition :param logger: logger object to print to :param config: configuration dict from the rados pool :raises CephiSCSIError. """ # Client configurations (NodeACL's) target_config = config.config['targets'][target_iqn] for client_iqn in target_config['clients']: client_metadata = target_config['clients'][client_iqn] client_chap = CHAP(client_metadata['auth']['username'], client_metadata['auth']['password'], client_metadata['auth']['password_encryption_enabled']) client_chap_mutual = CHAP(client_metadata['auth']['mutual_username'], client_metadata['auth']['mutual_password'], client_metadata['auth'][ 'mutual_password_encryption_enabled']) image_list = list(client_metadata['luns'].keys()) if client_chap.error: raise CephiSCSIError("Unable to decode password for {}. " "CHAP error: {}".format(client_iqn, client_chap.error_msg)) if client_chap_mutual.error: raise CephiSCSIError("Unable to decode password for {}. " "CHAP_MUTUAL error: {}".format(client_iqn, client_chap_mutual.error_msg)) client = GWClient(logger, client_iqn, image_list, client_chap.user, client_chap.password, client_chap_mutual.user, client_chap_mutual.password, target_iqn) client.manage('present') # ensure the client exists @staticmethod def try_disable_auth(tpg): """ Disable authentication (enable ACL mode) if this is the last CHAP user. LIO doesn't allow us to mix and match ACLs and auth under a tpg. We only allow ACL mode if there are no CHAP users.
""" for client in tpg.node_acls: if client.chap_userid or client.chap_password: return if tpg.chap_userid or tpg.chap_password: return tpg.set_attribute('authentication', '0') def configure_auth(self, username, password, mutual_username, mutual_password, target_config): """ Attempt to configure authentication for the client :return: """ auth_enabled = (username and password) self.logger.debug("configuring auth username={}, password={}, mutual_username={}, " "mutual_password={}".format(username, password, mutual_username, mutual_password)) acl_chap_userid = self.acl.chap_userid acl_chap_password = self.acl.chap_password acl_chap_mutual_userid = self.acl.chap_mutual_userid acl_chap_mutual_password = self.acl.chap_mutual_password try: self.logger.debug("Updating the ACL") if username != acl_chap_userid or \ password != acl_chap_password: self.acl.chap_userid = username self.acl.chap_password = password new_chap = CHAP(username, password, False) self.logger.debug("chap object set to: {},{},{}".format( new_chap.user, new_chap.password, new_chap.password_str)) if new_chap.error: self.error = True self.error_msg = new_chap.error_msg return if mutual_username != acl_chap_mutual_userid or \ mutual_password != acl_chap_mutual_password: self.acl.chap_mutual_userid = mutual_username self.acl.chap_mutual_password = mutual_password new_chap_mutual = CHAP(mutual_username, mutual_password, False) self.logger.debug("chap mutual object set to: {},{},{}".format( new_chap_mutual.user, new_chap_mutual.password, new_chap_mutual.password_str)) if new_chap_mutual.error: self.error = True self.error_msg = new_chap_mutual.error_msg return if auth_enabled: self.tpg.set_attribute('authentication', '1') else: GWClient.try_disable_auth(self.tpg) self.logger.debug("Updating config object meta data") encryption_enabled = encryption_available() if username != acl_chap_userid: self.metadata['auth']['username'] = new_chap.user if password != acl_chap_password: self.metadata['auth']['password'] = 
new_chap.encrypted_password(encryption_enabled) self.metadata['auth']['password_encryption_enabled'] = encryption_enabled if mutual_username != acl_chap_mutual_userid: self.metadata['auth']['mutual_username'] = new_chap_mutual.user if mutual_password != acl_chap_mutual_password: self.metadata['auth']['mutual_password'] = \ new_chap_mutual.encrypted_password(encryption_enabled) self.metadata['auth']['mutual_password_encryption_enabled'] = encryption_enabled except RTSLibError as err: self.error = True self.error_msg = ("Unable to configure authentication " "for {} - {}".format(self.iqn, err)) self.logger.error("(Client.configure_auth) failed to set " "credentials for {}".format(self.iqn)) else: self.change_count += 1 self._update_acl(target_config) def _update_acl(self, target_config): if self.tpg.node_acls: self.tpg.set_attribute('generate_node_acls', 0) self.tpg.set_attribute('demo_mode_write_protect', 1) if not target_config['acl_enabled']: target_config['acl_enabled'] = True self.change_count += 1 def _add_lun(self, image, lun): """ Add a given image to the client ACL :param image: rbd image name of the form pool/image (str) :param lun: rtslib lun object :return: """ rc = 0 # get the tpg lun to map this client to tpg_lun = lun['tpg_lun'] # lunid allocated from the current config object setting, or if this is # a new device from the target disk lun id or next free lun id 'position' # if target disk lun id is already in use if image in self.metadata['luns'].keys(): lun_id = self.metadata['luns'][image]['lun_id'] else: if image in self.lun_lookup: lun_id = self.lun_lookup[image] else: lun_id = self.lun_id_list[0] # pick lowest available lun ID self.logger.debug("(Client._add_lun) Adding {} to {} at " "id {}".format(image, self.iqn, lun_id)) try: m_lun = self.acl.mapped_lun(lun_id, tpg_lun=tpg_lun) except RTSLibError as err: self.logger.error("Client.add_lun RTSLibError for lun id {} -" " {}".format(lun_id, err)) rc = 12 else: self.client_luns[image] = {"lun_id": 
lun_id, "mapped_lun": m_lun, "tpg_lun": tpg_lun} self.metadata['luns'][image] = {"lun_id": lun_id} self.lun_id_list.remove(lun_id) self.logger.info("(Client.add_lun) added image '{}' to " "{}".format(image, self.iqn)) self.change_count += 1 return rc def _del_lun_map(self, backstore_object_name, disks_config): """ Delete a lun from the client's ACL :param backstore_object_name: rbd image name to remove :return: """ lun = self.client_luns[backstore_object_name]['mapped_lun'] try: lun.delete() except RTSLibError as err: self.error = True self.error_msg = err else: self.change_count += 1 disk_id = [disk_id for disk_id, disk in disks_config.items() if disk['backstore_object_name'] == backstore_object_name][0] # the lun entry could have been deleted by another host, so before # we try and delete - make sure it's in our local copy of the # metadata! if disk_id in self.metadata['luns']: del self.metadata['luns'][disk_id] def delete(self): """ Delete the client definition from LIO :return: """ try: self.acl.delete() GWClient.try_disable_auth(self.tpg) self.change_count += 1 self.logger.info("(Client.delete) deleted NodeACL for " "{}".format(self.iqn)) except RTSLibError as err: self.error = True self.error_msg = "RTS NodeACL delete failure" self.logger.error("(Client.delete) failed to delete client {} " "- error: {}".format(self.iqn, err)) def exists(self): """ This function determines whether this instances iqn is already defined to LIO :return: Boolean """ r = lio_root.RTSRoot() client_list = [client.node_wwn for client in r.node_acls] return self.iqn in client_list def seed_config(self, config): """ function to seed the config object with a new client definition """ target_config = config.config["targets"][self.target_iqn] target_config['clients'][self.iqn] = GWClient.seed_metadata config.update_item("targets", self.target_iqn, target_config) # persist the config update, and leave the connection to the ceph # object open since adding just the iqn is only the start of 
the # definition config.commit("retain") def manage(self, rqst_type, committer=None): """ Manage the allocation or removal of this client :param rqst_type is either 'present' (try and create the nodeACL), or 'absent' - delete the nodeACL :param committer is the host responsible for any commits to the configuration - this is not needed for Ansible management, but is used by the CLI->API->GWClient interaction """ # Build a local object representing the rados configuration object config_object = Config(self.logger) if config_object.error: self.error = True self.error_msg = config_object.error_msg return # use current config to hold a copy of the current rados config # object (dict) self.current_config = config_object.config target_config = self.current_config['targets'][self.target_iqn] update_host = committer self.logger.debug("GWClient.manage) update host to handle any config " "update is {}".format(update_host)) if rqst_type == "present": ################################################################### # Ensure the client exists in LIO # ################################################################### # first look at the request to see if it matches the settings # already in the config object - if so this is just a rerun, or a # reboot so config object updates are not needed when we change # the LIO environment if self.iqn in target_config['clients'].keys(): self.metadata = target_config['clients'][self.iqn] config_image_list = sorted(self.metadata['luns'].keys()) # # Does the request match the current config? 
auth_config = self.metadata['auth'] config_chap = CHAP(auth_config['username'], auth_config['password'], auth_config['password_encryption_enabled']) if config_chap.error: self.error = True self.error_msg = config_chap.error_msg return # extract the chap_mutual_str from the config object entry config_chap_mutual = CHAP(auth_config['mutual_username'], auth_config['mutual_password'], auth_config['mutual_password_encryption_enabled']) if config_chap_mutual.error: self.error = True self.error_msg = config_chap_mutual.error_msg return if self.username == config_chap.user and \ self.password == config_chap.password and \ self.mutual_username == config_chap_mutual.user and \ self.mutual_password == config_chap_mutual.password and \ config_image_list == sorted(self.requested_images): self.commit_enabled = False else: # requested iqn is not in the config object self.seed_config(config_object) self.metadata = GWClient.seed_metadata self.logger.debug("(manage) config updates to be applied from " "this host: {}".format(self.commit_enabled)) client_exists = self.exists() self.define_client() if self.error: # unable to define the client! return if client_exists and self.metadata["group_name"]: # bypass setup_luns for existing clients that have an # associated host group pass else: # either the client didn't exist (new or boot time), or the # group_name is not defined so run setup_luns for this client disks_config = self.current_config['disks'] bad_images = self.validate_images(disks_config) if not bad_images: self.setup_luns(disks_config) if self.error: return else: # request for images to map to this client that haven't # been added to LIO yet! 
self.error = True self.error_msg = ("Non-existent images {} requested " "for {}".format(bad_images, self.iqn)) return if not self.username and not self.password and \ not self.mutual_username and not self.mutual_password: self.logger.warning("(main) client '{}' configured without" " security".format(self.iqn)) self.configure_auth(self.username, self.password, self.mutual_username, self.mutual_password, target_config) if self.error: return # check the client object's change count, and update the config # object if this is the updating host if self.change_count > 0: if self.commit_enabled: if update_host == this_host(): # update the config object with this clients settings self.logger.debug("Updating config object metadata " "for '{}'".format(self.iqn)) target_config['clients'][self.iqn] = self.metadata config_object.update_item("targets", self.target_iqn, target_config) # persist the config update config_object.commit() elif rqst_type == 'reconfigure': self.define_client() else: ################################################################### # Remove the requested client from the config object and LIO # ################################################################### if self.exists(): self.define_client() # grab the client and parent tpg objects self.delete() # deletes from the local LIO instance if self.error: return else: # remove this client from the config if update_host == this_host(): self.logger.debug("Removing {} from the config " "object".format(self.iqn)) target_config['clients'].pop(self.iqn) config_object.update_item("targets", self.target_iqn, target_config) config_object.commit() else: # desired state is absent, but the client does not exist # in LIO - Nothing to do! 
self.logger.info("(main) client {} removal request, but it's" "not in LIO...skipping".format(self.iqn)) def validate_images(self, disks_config): """ Confirm that the images listed are actually allocated to the tpg and can therefore be used by a client :return: a list of images that are NOT in the tpg ... should be empty! """ bad_images = [] tpg_lun_list = self.get_images(self.tpg).keys() self.logger.debug("tpg images: {}".format(tpg_lun_list)) self.logger.debug("request images: {}".format(self.requested_images)) backstore_object_names = [disk['backstore_object_name'] for disk_id, disk in disks_config.items() if disk_id in self.requested_images] self.logger.debug("backstore object names: {}".format(backstore_object_names)) for backstore_object_name in backstore_object_names: if backstore_object_name not in tpg_lun_list: bad_images.append(backstore_object_name) return bad_images @staticmethod def get_update_host(config): """ decide which gateway host should be responsible for any config object updates :param config: configuration dict from the rados pool :return: a suitable gateway host that is online """ ptr = 0 potential_hosts = [host_name for host_name in config["gateways"].keys() if isinstance(config["gateways"][host_name], dict)] # Assume the 1st element from the list is OK for now # TODO check the potential hosts are online/available return potential_hosts[ptr] def get_images(self, rts_object): """ Funtion to return a dict of luns mapped to either a node ACL or the TPG, based on the passed object type :param rts_object: rtslib object - either NodeACL or TPG :return: dict indexed by image name of LUN object attributes """ luns_mapped = {} if isinstance(rts_object, NodeACL): # return a dict of images assigned to this client for m_lun in rts_object.mapped_luns: key = m_lun.tpg_lun.storage_object.name luns_mapped[key] = {"lun_id": m_lun.mapped_lun, "mapped_lun": m_lun, "tpg_lun": m_lun.tpg_lun} elif isinstance(rts_object, TPG): # return a dict of *all* images 
available to this tpg for m_lun in rts_object.luns: key = m_lun.storage_object.name luns_mapped[key] = {"lun_id": m_lun.lun, "mapped_lun": None, "tpg_lun": m_lun} return luns_mapped class CHAP(object): def __init__(self, user, password_str, encryption_enabled): self.error = False self.error_msg = '' self.user = user self.password_str = password_str if len(self.password_str) > 0 and encryption_enabled: self.password = self._decrypt() else: self.password = self.password_str def encrypted_password(self, encryption_enabled): if encryption_enabled and len(self.password_str) > 0: return self._encrypt() return self.password def _decrypt(self): key_path = os.path.join(settings.config.ceph_config_dir, settings.config.priv_key) try: with open(key_path, 'rb') as keyf: key = serialization.load_pem_private_key(keyf.read(), None, default_backend()) try: plain_pw = key.decrypt(b64decode(self.password_str), padding.OAEP( mgf=padding.MGF1(algorithm=hashes.SHA256()), algorithm=hashes.SHA256(), label=None)).decode('utf-8') except ValueError: # decrypting a password that was encrypted with python-crypto? 
plain_pw = key.decrypt(b64decode(self.password_str), padding.OAEP( mgf=padding.MGF1(algorithm=hashes.SHA1()), algorithm=hashes.SHA1(), label=None)).decode('utf-8') except Exception as ex: print(ex) self.error = True self.error_msg = 'Problems decoding the encrypted password' return None else: return plain_pw def _encrypt(self): key_path = os.path.join(settings.config.ceph_config_dir, settings.config.pub_key) try: with open(key_path, 'rb') as keyf: key = serialization.load_pem_public_key(keyf.read(), default_backend()) encrypted_pw = b64encode(key.encrypt(self.password_str.encode('utf-8'), padding.OAEP( mgf=padding.MGF1(algorithm=hashes.SHA256()), algorithm=hashes.SHA256(), label=None))).decode('utf-8') except Exception: self.error = True self.error_msg = 'Encoding password failed' return None else: return encrypted_pw ceph-iscsi-3.9/ceph_iscsi_config/common.py000066400000000000000000000635021470665154300207350ustar00rootroot00000000000000import rados import socket import time import json import traceback from ceph_iscsi_config.backstore import USER_RBD import ceph_iscsi_config.settings as settings from ceph_iscsi_config.utils import encryption_available, get_time class ConfigTransaction(object): def __init__(self, cfg_type, element_name, txn_action='add', initial_value=None): self.type = cfg_type self.action = txn_action self.item_name = element_name init_state = {} if initial_value is None else initial_value self.item_content = init_state def __repr__(self): return str(self.__dict__) class CephCluster(object): def __init__(self): self.error = False self.error_msg = '' self.cluster = None conf = settings.config.cephconf try: self.cluster = rados.Rados(conffile=conf, name=settings.config.cluster_client_name) except rados.Error as err: self.error = True self.error_msg = "Invaid cluster_client_name or setting in {} - {}".format(conf, err) return try: self.cluster.connect() except rados.Error as err: self.error = True self.error_msg = "Unable to connect to the cluster 
(keyring missing?) - {}".format(err) def __del__(self): if self.cluster: self.cluster.shutdown() def shutdown(self): self.cluster.shutdown() class Config(object): seed_config = {"disks": {}, "gateways": {}, "targets": {}, "discovery_auth": {'username': '', 'password': '', 'password_encryption_enabled': False, 'mutual_username': '', 'mutual_password': '', 'mutual_password_encryption_enabled': False}, "version": 11, "epoch": 0, "created": '', "updated": '' } lock_time_limit = 30 def __init__(self, logger, cfg_name=None, pool=None): self.logger = logger self.config_name = cfg_name if self.config_name is None: self.config_name = settings.config.gateway_conf if pool is None: pool = settings.config.pool self.pool = pool self.ceph = None self.error = False self.reset = False self.error_msg = "" self.txn_list = [] self.config_locked = False self.ceph = CephCluster() if self.ceph.error: self.error = True self.error_msg = self.ceph.error_msg return if self.init_config(): self.config = self.get_config() self._upgrade_config() self.changed = False def _read_config_object(self, ioctx): """ Return config string from the config object. The string is checked to see if it's valid json. If it's not the read is likely to be a against the object while it's being updated by another host - if this happens, we wait and reread until we get valid json. :param ioctx: rados ioctx :return: (str) current string. 
""" try: size, mtime = ioctx.stat(self.config_name) except rados.ObjectNotFound: self.logger.error("_read_config_object object not found") raise else: self.logger.debug("_read_config_object reading the config object") size += 1 cfg_str = ioctx.read(self.config_name, length=size) if cfg_str: valid = False while not valid: try: json.loads(cfg_str) except ValueError: # self.logger.debug("_read_config_object not valid json, rereading") time.sleep(1) size, mtime = ioctx.stat(self.config_name) cfg_str = ioctx.read(self.config_name, length=size) else: valid = True return cfg_str def _open_ioctx(self): try: self.logger.debug("(_open_ioctx) Opening connection to {} pool".format(self.pool)) ioctx = self.ceph.cluster.open_ioctx(self.pool) except rados.ObjectNotFound: self.error = True self.error_msg = "'{}' pool does not exist!".format(self.pool) self.logger.error("(_open_ioctx) {} does not exist".format(self.pool)) raise self.logger.debug("(_open_ioctx) connection opened") return ioctx def _get_ceph_config(self): cfg_dict = {} ioctx = self._open_ioctx() cfg_data = self._read_config_object(ioctx) ioctx.close() if not cfg_data: # attempt to read the object got nothing which means it's empty # so we seed the object self.logger.debug("(_get_rbd_config) config object is empty..seeding it") self._seed_rbd_config() if self.error: self.logger.error("(Config._get_rbd_config) Unable to seed the config object") return {} else: cfg_data = json.dumps(Config.seed_config) self.logger.debug("(_get_rbd_config) config object contains '{}'".format(cfg_data)) cfg_dict = json.loads(cfg_data) return cfg_dict def needs_hostname_update(self): if self.config['version'] == 9: # No gateway has been updated yet. 
return True updated = self.config.get('gateways_upgraded') if updated is None: # Everything has been updated or we are < 9 return False if socket.getfqdn() in updated: return False return True def _upgrade_config(self): update_hostname = self.needs_hostname_update() if self.config['version'] >= Config.seed_config['version'] and not update_hostname: return if self.config['version'] <= 2: self.add_item("groups", element_name=None, initial_value={}) self.update_item("version", element_name=None, element_value=3) if self.config['version'] == 3: iqn = self.config['gateways'].get('iqn', None) gateways = {} portals = {} self.add_item("targets", None, {}) self.add_item('discovery_auth', None, { 'chap': '', 'chap_mutual': '' }) if iqn: for host, gateway_v3 in self.config['gateways'].items(): if isinstance(gateway_v3, dict): portal = gateway_v3 portal.pop('iqn') active_luns = portal.pop('active_luns') updated = portal.pop('updated', None) created = portal.pop('created', None) gateway = { 'active_luns': active_luns } if created: gateway['created'] = created if updated: gateway['updated'] = updated gateways[host] = gateway portals[host] = portal for _, client in self.config['clients'].items(): client.pop('created', None) client.pop('updated', None) client['auth']['chap_mutual'] = '' for _, group in self.config['groups'].items(): group.pop('created', None) group.pop('updated', None) target = { 'disks': list(self.config['disks'].keys()), 'clients': self.config['clients'], 'portals': portals, 'groups': self.config['groups'], 'controls': self.config.get('controls', {}), 'ip_list': self.config['gateways']['ip_list'] } self.add_item("targets", iqn, target) self.update_item("targets", iqn, target) self.update_item("gateways", None, gateways) if 'controls' in self.config: self.del_item('controls', None) self.del_item('clients', None) self.del_item('groups', None) self.update_item("version", None, 4) if self.config['version'] == 4: for disk_id, disk in self.config['disks'].items(): 
disk['backstore'] = USER_RBD self.update_item("disks", disk_id, disk) self.update_item("version", None, 5) if self.config['version'] == 5: for target_iqn, target in self.config['targets'].items(): target['acl_enabled'] = True self.update_item("targets", target_iqn, target) self.update_item("version", None, 6) if self.config['version'] == 6: new_disks = {} old_disks = [] for disk_id, disk in self.config['disks'].items(): disk['backstore_object_name'] = disk_id new_disk_id = disk_id.replace('.', '/') new_disks[new_disk_id] = disk old_disks.append(disk_id) for old_disk_id in old_disks: self.del_item('disks', old_disk_id) for new_disk_id, new_disk in new_disks.items(): self.add_item("disks", new_disk_id, new_disk) for iqn, target in self.config['targets'].items(): new_disk_ids = [] for disk_id in target['disks']: new_disk_id = disk_id.replace('.', '/') new_disk_ids.append(new_disk_id) target['disks'] = new_disk_ids for _, client in target['clients'].items(): new_luns = {} for lun_id, lun in client['luns'].items(): new_lun_id = lun_id.replace('.', '/') new_luns[new_lun_id] = lun client['luns'] = new_luns for _, group in target['groups'].items(): new_group_disks = {} for group_disk_id, group_disk in group['disks'].items(): new_group_disk_id = group_disk_id.replace('.', '/') new_group_disks[new_group_disk_id] = group_disk group['disks'] = new_group_disks self.update_item("targets", iqn, target) self.update_item("version", None, 7) if self.config['version'] == 7: if '/' in self.config['discovery_auth']['chap']: duser, dpassword = self.config['discovery_auth']['chap'].split('/', 1) else: duser = '' dpassword = '' self.config['discovery_auth']['username'] = duser self.config['discovery_auth']['password'] = dpassword self.config['discovery_auth']['password_encryption_enabled'] = False self.config['discovery_auth'].pop('chap', None) if '/' in self.config['discovery_auth']['chap_mutual']: dmuser, dmpassword = self.config['discovery_auth']['chap_mutual'].split('/', 1) else: 
dmuser = '' dmpassword = '' self.config['discovery_auth']['mutual_username'] = dmuser self.config['discovery_auth']['mutual_password'] = dmpassword self.config['discovery_auth']['mutual_password_encryption_enabled'] = False self.config['discovery_auth'].pop('chap_mutual', None) self.update_item("discovery_auth", None, self.config['discovery_auth']) for target_iqn, target in self.config['targets'].items(): for _, client in target['clients'].items(): if '/' in client['auth']['chap']: user, password = client['auth']['chap'].split('/', 1) else: user = '' password = '' client['auth']['username'] = user client['auth']['password'] = password client['auth']['password_encryption_enabled'] = \ (len(password) > 16 and encryption_available()) client['auth'].pop('chap', None) if '/' in client['auth']['chap_mutual']: muser, mpassword = client['auth']['chap_mutual'].split('/', 1) else: muser = '' mpassword = '' client['auth']['mutual_username'] = muser client['auth']['mutual_password'] = mpassword client['auth']['mutual_password_encryption_enabled'] = \ (len(mpassword) > 16 and encryption_available()) client['auth'].pop('chap_mutual', None) self.update_item("targets", target_iqn, target) self.update_item("version", None, 8) if self.config['version'] == 8: for target_iqn, target in self.config['targets'].items(): for _, portal in target['portals'].items(): portal['portal_ip_addresses'] = [portal['portal_ip_address']] portal.pop('portal_ip_address') self.update_item("targets", target_iqn, target) self.update_item("version", None, 9) if self.config['version'] == 9 or update_hostname: # temporary field to store the gateways already upgraded from v9 to v10 gateways_upgraded = self.config.get('gateways_upgraded') if not gateways_upgraded: gateways_upgraded = [] self.add_item('gateways_upgraded', None, gateways_upgraded) this_shortname = socket.gethostname().split('.')[0] this_fqdn = socket.getfqdn() if this_fqdn not in gateways_upgraded: gateways_config = self.config['gateways'] 
gateway_config = gateways_config.get(this_shortname) if gateway_config: gateways_config.pop(this_shortname) gateways_config[this_fqdn] = gateway_config self.update_item("gateways", None, gateways_config) for target_iqn, target in self.config['targets'].items(): portals_config = target['portals'] portal_config = portals_config.get(this_shortname) if portal_config: portals_config.pop(this_shortname) portals_config[this_fqdn] = portal_config self.update_item("targets", target_iqn, target) for disk_id, disk in self.config['disks'].items(): if disk.get('allocating_host') == this_shortname: disk['allocating_host'] = this_fqdn if disk.get('owner') == this_shortname: disk['owner'] = this_fqdn self.update_item("disks", disk_id, disk) gateways_upgraded.append(this_fqdn) self.update_item("gateways_upgraded", None, gateways_upgraded) if any(gateway_name not in gateways_upgraded for gateway_name in self.config['gateways'].keys()): self.logger.debug("gateways upgraded to 10: {}". format(gateways_upgraded)) else: self.del_item("gateways_upgraded", None) if self.config['version'] == 9: # Upgrade from v9 to v10 is still in progress. Update the # version now, so we can update the other config fields and # setup the target to execute IO while the other gws upgrade. self.update_item("version", None, 10) # Currently, the versions below do not rely on fields being updated # in the 9->10 upgrade which needs to execute on every node before # completing. If this changes, we will need to fix how we handle # rolling upgrades, so new versions have access to the updated fields # on all gws before completing the upgrade. 
if self.config['version'] == 10: for target_iqn, target in self.config['targets'].items(): target['auth'] = { 'username': '', 'password': '', 'password_encryption_enabled': False, 'mutual_username': '', 'mutual_password': '', 'mutual_password_encryption_enabled': False } disks = {} for disk_index, disk in enumerate(sorted(target['disks'])): disks[disk] = { 'lun_id': disk_index } target['disks'] = disks self.update_item("targets", target_iqn, target) self.update_item("version", None, 11) self.commit("retain") def init_config(self): try: ioctx = self._open_ioctx() except rados.ObjectNotFound: return False try: with rados.WriteOpCtx(ioctx) as op: # try to exclusively create the config object op.new(rados.LIBRADOS_CREATE_EXCLUSIVE) ioctx.operate_write_op(op, self.config_name) self.logger.debug("(init_config) created empty config object") except rados.ObjectExists: self.logger.debug("(init_config) using pre existing config object") ioctx.close() return True def get_config(self): return self._get_ceph_config() def lock(self): ioctx = self._open_ioctx() secs = 0 self.logger.debug("config.lock attempting to acquire lock on {}".format(self.config_name)) while secs < Config.lock_time_limit: try: ioctx.lock_exclusive(self.config_name, 'lock', 'config') self.config_locked = True break except (rados.ObjectBusy, rados.ObjectExists): self.logger.debug("(Config.lock) waiting for excl lock on " "{} object".format(self.config_name)) time.sleep(1) secs += 1 if secs >= Config.lock_time_limit: self.error = True self.error_msg = ("Timed out ({}s) waiting for excl " "lock on {} object".format(Config.lock_time_limit, self.config_name)) self.logger.error("(Config.lock) {}".format(self.error_msg)) ioctx.close() def unlock(self): ioctx = self._open_ioctx() self.logger.debug("config.unlock releasing lock on {}".format(self.config_name)) try: ioctx.unlock(self.config_name, 'lock', 'config') self.config_locked = False except Exception: self.error = True self.error_msg = ("Unable to unlock {} - 
{}".format(self.config_name, traceback.format_exc())) self.logger.error("(Config.unlock) {}".format(self.error_msg)) ioctx.close() def _seed_rbd_config(self): ioctx = self._open_ioctx() self.lock() if self.error: return # if the config object is empty, seed it - if not just leave as is cfg_data = self._read_config_object(ioctx) if not cfg_data: self.logger.debug("_seed_rbd_config found empty config object") seed_now = Config.seed_config seed_now['created'] = get_time() seed = json.dumps(seed_now, sort_keys=True, indent=4, separators=(',', ': ')) ioctx.write_full(self.config_name, seed.encode('utf-8')) ioctx.set_xattr(self.config_name, "epoch", "0".encode('utf-8')) self.changed = True self.unlock() def refresh(self): self.logger.debug("config refresh - current config is {}".format(self.config)) self.config = self.get_config() self._upgrade_config() def add_item(self, cfg_type, element_name=None, initial_value=None): now = get_time() if element_name: # ensure the initial state for this item has a 'created' date/time value if isinstance(initial_value, dict): if 'created' not in initial_value: initial_value['created'] = now if initial_value is None: init_state = {"created": now} else: init_state = initial_value self.config[cfg_type][element_name] = init_state if isinstance(init_state, str) and 'created' not in self.config[cfg_type]: self.config[cfg_type]['created'] = now # add a separate transaction to capture the creation date to the section txn = ConfigTransaction(cfg_type, 'created', initial_value=now) self.txn_list.append(txn) else: # new section being added to the config object self.config[cfg_type] = initial_value init_state = initial_value txn = ConfigTransaction(cfg_type, None, initial_value=initial_value) self.txn_list.append(txn) self.logger.debug("(Config.add_item) config updated to {}".format(self.config)) self.changed = True txn = ConfigTransaction(cfg_type, element_name, initial_value=init_state) self.txn_list.append(txn) def del_item(self, cfg_type, 
element_name): self.changed = True if element_name: del self.config[cfg_type][element_name] else: del self.config[cfg_type] self.logger.debug("(Config.del_item) config updated to {}".format(self.config)) txn = ConfigTransaction(cfg_type, element_name, 'delete') self.txn_list.append(txn) def update_item(self, cfg_type, element_name, element_value): now = get_time() if element_name: current_values = self.config[cfg_type][element_name] self.logger.debug("prior to update, item contains {}".format(current_values)) if isinstance(element_value, dict): merged = current_values.copy() new_dict = element_value new_dict['updated'] = now merged.update(new_dict) element_value = merged.copy() self.config[cfg_type][element_name] = element_value else: # update to a root level config element, like version self.config[cfg_type] = element_value self.logger.debug("(Config.update_item) config is {}".format(self.config)) self.changed = True self.logger.debug("update_item: type={}, item={}, update={}".format( cfg_type, element_name, element_value)) txn = ConfigTransaction(cfg_type, element_name, 'add') txn.item_content = element_value self.txn_list.append(txn) def set_item(self, cfg_type, element_name, element_value): self.logger.debug("(Config.update_item) config is {}".format(self.config)) self.changed = True self.logger.debug("update_item: type={}, item={}, update={}".format( cfg_type, element_name, element_value)) txn = ConfigTransaction(cfg_type, element_name, 'add') txn.item_content = element_value self.txn_list.append(txn) def _commit_rbd(self, post_action): ioctx = self._open_ioctx() if not self.config_locked: self.lock() if self.error: return # reread the config to account for updates made by other systems # then apply this hosts update(s) current_config = json.loads(self._read_config_object(ioctx)) for txn in self.txn_list: self.logger.debug("_commit_rbd transaction shows {}".format(txn)) if txn.action == 'add': # add's and updates if txn.item_name: 
current_config[txn.type][txn.item_name] = txn.item_content else: current_config[txn.type] = txn.item_content elif txn.action == 'delete': if txn.item_name: del current_config[txn.type][txn.item_name] else: del current_config[txn.type] else: self.error = True self.error_msg = "Unknown transaction type ({}) encountered in " \ "_commit_rbd".format(txn.action) if not self.error: if self.reset: current_config["epoch"] = 0 else: # Python will switch from plain to long int automagically current_config["epoch"] += 1 now = get_time() current_config['updated'] = now config_str = json.dumps(current_config) self.logger.debug("_commit_rbd updating config to {}".format(config_str)) config_str_fmtd = json.dumps(current_config, sort_keys=True, indent=4, separators=(',', ': ')) ioctx.write_full(self.config_name, config_str_fmtd.encode('utf-8')) ioctx.set_xattr(self.config_name, "epoch", str(current_config["epoch"]).encode('utf-8')) del self.txn_list[:] # empty the list of transactions self.unlock() ioctx.close() if post_action == 'close': self.ceph.shutdown() def commit(self, post_action='close'): self._commit_rbd(post_action) def main(): pass if __name__ == '__main__': main() ceph-iscsi-3.9/ceph_iscsi_config/device_status.py000077500000000000000000000131231470665154300223040ustar00rootroot00000000000000import json import rados import threading import time from datetime import datetime import ceph_iscsi_config.settings as settings class StatusCounter(object): def __init__(self, name, cnt): self.name = name self.cnt = cnt self.last_cnt = cnt class TcmuDevStatusTracker(object): def __init__(self, image_name): self.image_name = image_name self.gw_counter_lookup = {} self.lock_owner = "" self.lock_owner_timestamp = None self.state = "Online" self.changed_state = False self.stable_cnt = 0 def get_status_dict(self): status = {} status['state'] = self.state status['lock_owner'] = self.lock_owner status['gateways'] = {} for gw, stat_cnt_dict in self.gw_counter_lookup.items(): 
            status['gateways'][gw] = {}
            for name, stat_cnt in stat_cnt_dict.items():
                status['gateways'][gw][name] = stat_cnt.cnt
        return status

    def check_for_degraded_state(self, stat_cnt):
        if stat_cnt.name in ["cmd_timed_out_cnt", "conn_lost_cnt"]:
            if abs(stat_cnt.cnt - stat_cnt.last_cnt) >= 1:
                self.state = "Degraded - cluster access failure"
                self.changed_state = True
                self.stable_cnt = 0
            return

        if stat_cnt.name == "lock_lost_cnt" and \
                abs(stat_cnt.cnt - stat_cnt.last_cnt) >= \
                settings.config.lock_lost_cnt_threshhold:
            self.state = "Degraded - excessive failovers"
            self.changed_state = True
            self.stable_cnt = 0
            return

    def update_status(self, gw, status, status_stamp):
        if status is None:
            # Sometimes status calls will return empty statuses even though
            # there is valid data. We might not see it until the Nth call.
            return

        counter_dict = self.gw_counter_lookup.get(gw)
        if counter_dict is None:
            counter_dict = {}

        for name, val in status.items():
            if name == "lock_owner" and val == "true":
                dt = datetime.strptime(status_stamp, "%Y-%m-%dT%H:%M:%S.%f%z")
                if self.lock_owner_timestamp is None or dt > self.lock_owner_timestamp:
                    self.lock_owner_timestamp = dt
                    self.lock_owner = gw
                    self.stable_cnt = 0
                continue

            if name not in ["cmd_timed_out_cnt", "conn_lost_cnt", "lock_lost_cnt"]:
                continue

            stat_cnt = counter_dict.get(name)
            if stat_cnt is None:
                stat_cnt = StatusCounter(name, int(val))
            stat_cnt.cnt = int(val)

            # TODO:
            # If we detect a degraded state, we can throttle the path here.
            self.check_for_degraded_state(stat_cnt)

            stat_cnt.last_cnt = stat_cnt.cnt
            counter_dict[name] = stat_cnt

        self.gw_counter_lookup[gw] = counter_dict


class DeviceStatusWatcher(threading.Thread):
    def __init__(self, logger):
        threading.Thread.__init__(self)
        self.logger = logger
        self.daemon = True
        self.cluster = None
        self.status_lookup = {}

    def get_dev_status(self, image_name):
        return self.status_lookup.get(image_name)

    def exit(self):
        if self.cluster:
            self.cluster.shutdown()

    def run(self):
        self.cluster = rados.Rados(conffile=settings.config.cephconf,
                                   name=settings.config.cluster_client_name)
        self.cluster.connect()

        while True:
            time.sleep(settings.config.status_check_interval)

            cmd = json.dumps({"prefix": "service status", "format": "json"})
            ret, outb, outs = self.cluster.mgr_command(cmd, b'')
            if ret != 0:
                self.logger.error("mgr command failed {}".format(ret))
                continue

            svc = json.loads(outb).get('tcmu-runner')
            if svc is None:
                self.logger.warning("there is no tcmu-runner data available")
                continue

            image_names_dict = {}
            for daemon, daemon_info in svc.items():
                gw, image_name = daemon.split(":", 1)
                image_names_dict[image_name] = image_name

                dev_status = self.get_dev_status(image_name)
                if dev_status is None:
                    dev_status = TcmuDevStatusTracker(image_name)
                    self.status_lookup[image_name] = dev_status

                dev_status.update_status(gw, daemon_info.get('status'),
                                         daemon_info.get('status_stamp'))

            # cleanup stale entries and try to move to online if a dev
            # did not see any errors on any gateway for a while
            for image_name in list(self.status_lookup):
                if image_names_dict.get(image_name) is None:
                    del self.status_lookup[image_name]
                else:
                    dev_status = self.status_lookup[image_name]
                    if dev_status.changed_state is False:
                        dev_status.stable_cnt += 1
                        if dev_status.stable_cnt > settings.config.stable_state_reset_count:
                            dev_status.stable_cnt = 0
                            dev_status.state = "Online"
                    else:
                        dev_status.changed_state = False

            # debugging info
            status_dict = {k: v.get_status_dict() for k, v in self.status_lookup.items()}
            self.logger.debug("device status {}".format(status_dict))


# ==== File: ceph-iscsi-3.9/ceph_iscsi_config/discovery.py ====

from rtslib_fb.fabric import ISCSIFabricModule

from ceph_iscsi_config.client import CHAP
from ceph_iscsi_config.utils import encryption_available


class Discovery(object):

    @staticmethod
    def set_discovery_auth_lio(username, password, password_encryption_enabled,
                               mutual_username, mutual_password,
                               mutual_password_encryption_enabled):
        iscsi_fabric = ISCSIFabricModule()
        if username == '':
            iscsi_fabric.clear_discovery_auth_settings()
        else:
            chap = CHAP(username, password, password_encryption_enabled)
            chap_mutual = CHAP(mutual_username, mutual_password,
                               mutual_password_encryption_enabled)
            iscsi_fabric.discovery_userid = chap.user
            iscsi_fabric.discovery_password = chap.password
            iscsi_fabric.discovery_mutual_userid = chap_mutual.user
            iscsi_fabric.discovery_mutual_password = chap_mutual.password
            iscsi_fabric.discovery_enable_auth = True

    @staticmethod
    def set_discovery_auth_config(username, password, mutual_username,
                                  mutual_password, config):
        encryption_enabled = encryption_available()
        discovery_auth_config = {
            'username': '',
            'password': '',
            'password_encryption_enabled': encryption_enabled,
            'mutual_username': '',
            'mutual_password': '',
            'mutual_password_encryption_enabled': encryption_enabled
        }
        if username != '':
            chap = CHAP(username, password, encryption_enabled)
            chap_mutual = CHAP(mutual_username, mutual_password, encryption_enabled)
            discovery_auth_config['username'] = chap.user
            discovery_auth_config['password'] = chap.encrypted_password(encryption_enabled)
            discovery_auth_config['mutual_username'] = chap_mutual.user
            discovery_auth_config['mutual_password'] = \
                chap_mutual.encrypted_password(encryption_enabled)
        config.update_item('discovery_auth', '', discovery_auth_config)
# ==== File: ceph-iscsi-3.9/ceph_iscsi_config/gateway.py ====

import subprocess
import netifaces

from rtslib_fb.utils import RTSLibError
from rtslib_fb.fabric import ISCSIFabricModule
from rtslib_fb.target import Target

import ceph_iscsi_config.settings as settings
from ceph_iscsi_config.target import GWTarget
from ceph_iscsi_config.lun import LUN
from ceph_iscsi_config.client import GWClient
from ceph_iscsi_config.lio import LIO
from ceph_iscsi_config.utils import this_host, CephiSCSIError

__author__ = 'pcuzner@redhat.com'


class CephiSCSIGateway(object):
    def __init__(self, logger, config, name=None):
        self.logger = logger
        self.config = config
        if name:
            self.hostname = name
        else:
            self.hostname = this_host()

    def _run_ceph_cmd(self, cmd, stderr=None, shell=True):
        if not stderr:
            stderr = subprocess.STDOUT
        try:
            result = subprocess.check_output(cmd, stderr=stderr, shell=shell)
        except subprocess.CalledProcessError as err:
            return None, err

        return result, None

    def ceph_rm_blocklist(self, blocklisted_ip):
        """
        Issue a ceph osd blocklist rm command for a given IP on this host
        :param blocklisted_ip: IP address (str - dotted quad)
        :return: boolean for success of the rm operation
        """

        self.logger.info("Removing blocklisted entry for this host : "
                         "{}".format(blocklisted_ip))

        conf = settings.config
        result, err = self._run_ceph_cmd(
            "ceph -n {client_name} --conf {cephconf} osd blocklist rm "
            "{blocklisted_ip}".format(blocklisted_ip=blocklisted_ip,
                                      client_name=conf.cluster_client_name,
                                      cephconf=conf.cephconf))
        if err:
            result, err = self._run_ceph_cmd(
                "ceph -n {client_name} --conf {cephconf} osd blacklist rm "
                "{blocklisted_ip}".format(blocklisted_ip=blocklisted_ip,
                                          client_name=conf.cluster_client_name,
                                          cephconf=conf.cephconf))
            if err:
                self.logger.critical("blocklist removal failed: {}. Run"
                                     " 'ceph -n {client_name} --conf {cephconf} "
                                     "osd blocklist rm {blocklisted_ip}'".
                                     format(err.output.decode('utf-8').strip(),
                                            blocklisted_ip=blocklisted_ip,
                                            client_name=conf.cluster_client_name,
                                            cephconf=conf.cephconf))
                return False

        self.logger.info("Successfully removed blocklist entry")
        return True

    def osd_blocklist_cleanup(self):
        """
        Process the osd's to see if there are any blocklist entries for this
        node
        :return: True, blocklist entries removed OK, False - problems removing
                 a blocklist
        """

        self.logger.info("Processing osd blocklist entries for this node")

        cleanup_state = True
        conf = settings.config

        # NB. Need to use the stderr override to catch the output from
        # the command
        blocklist, err = self._run_ceph_cmd(
            "ceph -n {client_name} --conf {cephconf} osd blocklist ls".
            format(client_name=conf.cluster_client_name, cephconf=conf.cephconf))
        if err:
            blocklist, err = self._run_ceph_cmd(
                "ceph -n {client_name} --conf {cephconf} osd blacklist ls".
                format(client_name=conf.cluster_client_name, cephconf=conf.cephconf))
        if err:
            self.logger.critical(
                "Failed to run 'ceph -n {client_name} --conf {cephconf} "
                "osd blocklist ls'. Please resolve manually..."
                .format(client_name=conf.cluster_client_name, cephconf=conf.cephconf))
            cleanup_state = False
        else:

            blocklist_output = blocklist.decode('utf-8').split('\n')[:-1]
            if len(blocklist_output) > 1:

                # We have entries to look for, so first build a list of ipv4
                # addresses on this node
                ipv4_list = []
                for iface in netifaces.interfaces():
                    dev_info = netifaces.ifaddresses(iface).get(netifaces.AF_INET, [])
                    ipv4_list += [dev['addr'] for dev in dev_info]

                # process the entries (the last entry is just null)
                for blocklist_entry in blocklist_output:
                    # blocklist_output is not guaranteed to be in the order
                    # returned from the ceph command. The 'listed N entries'
                    # line could be at any index.
                    if "listed" in blocklist_entry:
                        continue

                    # valid entries to process look like -
                    # 192.168.122.101:0/3258528596 2016-09-28 18:23:15.307227
                    blocklisted_ip = blocklist_entry.split(':')[0]
                    # Look for this hosts ipv4 address in the blocklist

                    if blocklisted_ip in ipv4_list:
                        # pass in the ip:port/nonce
                        rm_ok = self.ceph_rm_blocklist(blocklist_entry.split(' ')[0])
                        if not rm_ok:
                            cleanup_state = False
                            break
            else:
                self.logger.info("No OSD blocklist entries found")

        return cleanup_state

    def get_tpgs(self, target_iqn):
        """
        determine the number of tpgs in the current target
        :return: count of the defined tpgs
        """

        try:
            target = Target(ISCSIFabricModule(), target_iqn, "lookup")

            return len([tpg.tag for tpg in target.tpgs])
        except RTSLibError:
            return 0

    def portals_active(self, target_iqn):
        """
        use the get_tpgs function to determine whether there are tpg's defined
        :return: (bool) indicating whether there are tpgs defined
        """
        return self.get_tpgs(target_iqn) > 0

    def redefine_target(self, target_iqn):
        self.delete_target(target_iqn)

        target_config = self.config.config['targets'][target_iqn]
        self.define_target(target_iqn, target_config['ip_list'])

    def define_target(self, target_iqn, gw_ip_list, target_only=False):
        """
        define the iSCSI target and tpgs
        :param target_iqn: (str) target iqn
        :param gw_ip_list: (list) gateway ip list
        :param target_only: (bool) if True only setup target
        :return: (object) GWTarget object
        """

        # GWTarget Definition : Handle the creation of the Target/TPG(s) and
        # Portals. Although we create the tpgs, we flick the enable_portal flag
        # off so the enabled tpg will not have an outside IP address. This
        # prevents clients from logging in too early, failing and giving up
        # because the nodeACL hasn't been defined yet (yes Windows I'm looking
        # at you!)

        # first check if there are tpgs already in LIO (True) - this would
        # indicate a restart or reload call has been made.
        # If the tpg count is 0, this is a boot time request

        self.logger.info("Setting up {}".format(target_iqn))

        target = GWTarget(self.logger, target_iqn, gw_ip_list,
                          enable_portal=self.portals_active(target_iqn))
        if target.error:
            raise CephiSCSIError("Error initializing iSCSI target: "
                                 "{}".format(target.error_msg))

        target.manage('target')
        if target.error:
            raise CephiSCSIError("Error creating the iSCSI target (target, "
                                 "TPGs, Portals): {}".format(target.error_msg))

        if not target_only:
            self.logger.info("Processing LUN configuration")
            try:
                LUN.define_luns(self.logger, self.config, target)
            except CephiSCSIError as err:
                self.logger.error("{} - Could not define LUNs: "
                                  "{}".format(target.iqn, err))
                raise

            self.logger.info("{} - Processing client configuration".
                             format(target.iqn))
            try:
                GWClient.define_clients(self.logger, self.config, target.iqn)
            except CephiSCSIError as err:
                self.logger.error("Could not define clients: {}".format(err))
                raise

        if not target.enable_portal:
            # The tpgs, luns and clients are all defined, but the active tpg
            # doesn't have an IP bound to it yet (due to the
            # enable_portals=False setting above)
            self.logger.info("{} - Adding the IP to the enabled tpg, "
                             "allowing iSCSI logins".format(target.iqn))

            target.enable_active_tpg(self.config)
            if target.error:
                raise CephiSCSIError("{} - Error enabling the IP with the "
                                     "active TPG: {}".format(target.iqn,
                                                             target.error_msg))
        return target

    def define_targets(self):
        """
        define the list of iSCSI targets and tpgs
        :return: (list) GWTarget objects
        """
        targets = []
        for iqn, target in self.config.config['targets'].items():
            if self.hostname in target['portals']:
                target = self.define_target(iqn, target.get('ip_list', {}))
                targets.append(target)
        return targets

    def define(self):
        """
        processing logic that orchestrates the creation of the iSCSI gateway
        to LIO.
        """

        self.logger.info("Reading the configuration object to update local LIO "
                         "configuration")

        # first check to see if we have any entries to handle - if not, there is
        # no work to do..
        if "targets" not in self.config.config:
            self.logger.info("Configuration is empty - nothing to define to LIO")
            return
        if self.hostname not in self.config.config['gateways']:
            self.logger.info("Configuration does not have an entry for this host({}) - "
                             "nothing to define to LIO".format(self.hostname))
            return

        # at this point we have a gateway entry that applies to the running host

        self.logger.info("Processing Gateway configuration")
        self.define_targets()

        self.logger.info("Ceph iSCSI Gateway configuration load complete")

    def delete_target(self, target_iqn):

        target = GWTarget(self.logger, target_iqn, {})
        if target.error:
            raise CephiSCSIError("Could not initialize target: {}".
                                 format(target.error_msg))

        target.load_config()
        if target.error:
            self.logger.debug("Could not find target {}: {}".
                              format(target_iqn, target.error_msg))
            # Target might not be setup on this node. Ignore.
            return

        try:
            target.delete(self.config)
        except RTSLibError as err:
            err_msg = "Could not remove target {}: {}".format(target_iqn, err)
            raise CephiSCSIError(err_msg)

    def delete_targets(self):

        err_msg = None

        if self.hostname not in self.config.config['gateways']:
            return

        # Clear the current config, based on the config objects settings.
        # This will fail incoming IO, but wait on outstanding IO to
        # complete normally. We rely on the initiator multipath layer
        # to handle retries like a normal path failure.
        self.logger.info("Removing iSCSI target from LIO")

        for target_iqn, target_config in self.config.config['targets'].items():
            try:
                self.delete_target(target_iqn)
            except CephiSCSIError as err:
                if err_msg is None:
                    err_msg = err
                continue

        if err_msg:
            raise CephiSCSIError(err_msg)

    def delete(self):
        """
        Clear the LIO configuration of the settings defined by the config
        object. We could simply call the clear_existing method of rtsroot -
        but if the admin has defined additional non ceph iscsi exports
        they'd lose everything
        :return: (int) 0 = LIO configuration removed/not-required
                       4 = LUN removal problem encountered
                       8 = Gateway (target/tpgs) removal failed
        """

        self.logger.debug("delete received, refreshing local state")
        self.config.refresh()
        if self.config.error:
            self.logger.critical("Problems accessing config object"
                                 " - {}".format(self.config.error_msg))
            return 8

        if "gateways" in self.config.config:
            if self.hostname not in self.config.config["gateways"]:
                self.logger.info("No gateway configuration to remove on this "
                                 "host ({})".format(self.hostname))
                return 0
        else:
            self.logger.info("Configuration object does not hold any gateway "
                             "metadata - nothing to do")
            return 0

        ret = 0

        try:
            self.delete_targets()
        except CephiSCSIError:
            ret = 8

        # unload disks not yet added to targets
        lio = LIO()
        lio.drop_lun_maps(self.config, False)
        if lio.error:
            self.logger.error("failed to remove LUN objects")
            if ret != 0:
                ret = 4

        if ret == 0:
            self.logger.info("Active Ceph iSCSI gateway configuration removed")
        return ret

    def remove_from_config(self, target_iqn):
        has_changed = False

        target_config = self.config.config["targets"].get(target_iqn)
        if target_config:
            local_gw = target_config['portals'].get(self.hostname)
            if local_gw:
                local_gw_ips = local_gw['portal_ip_addresses']

                target_config['portals'].pop(self.hostname)

                ip_list = target_config['ip_list']
                for local_gw_ip in local_gw_ips:
                    ip_list.remove(local_gw_ip)

                for _, remote_gw_config in target_config['portals'].items():
                    for local_gw_ip in local_gw_ips:
                        remote_gw_config["gateway_ip_list"].remove(local_gw_ip)
                        remote_gw_config["inactive_portal_ips"].remove(local_gw_ip)
                    tpg_count = remote_gw_config["tpgs"]
                    remote_gw_config["tpgs"] = tpg_count - 1

                if not target_config['portals']:
                    # Last gw for the target so delete everything that lives
                    # under the tpg in LIO since we can't create it
                    target_config['disks'] = {}
                    target_config['clients'] = {}
                    target_config['controls'] = {}
                    target_config['groups'] = {}

                has_changed = True
                self.config.update_item('targets', target_iqn, target_config)

        remove_gateway = True
        for _, target in self.config.config["targets"].items():
            if self.hostname in target['portals']:
                remove_gateway = False
                break

        if remove_gateway:
            # gateway is no longer used, so delete it
            has_changed = True
            self.config.del_item('gateways', self.hostname)

        LUN.reassign_owners(self.logger, self.config)

        if has_changed:
            self.config.commit("retain")
            if self.config.error:
                raise CephiSCSIError(self.config.error_msg)


# ==== File: ceph-iscsi-3.9/ceph_iscsi_config/gateway_object.py ====

import ceph_iscsi_config.settings as settings

from ceph_iscsi_config.common import Config
from ceph_iscsi_config.utils import CephiSCSIError


class GWObject(object):
    def __init__(self, cfg_type, cfg_type_key, logger, control_settings):
        self.cfg_type = cfg_type
        self.cfg_type_key = cfg_type_key
        self.logger = logger

        self.config = Config(self.logger)
        if self.config.error:
            raise CephiSCSIError(self.config.error_msg)

        # Copy of controls that will not be written until commit is called.
        # To update the kernel call the child object's update function.
        self.controls = self._get_config_controls().copy()
        self._add_properies(control_settings)

    def _set_config_controls(self, config, controls):
        config.config[self.cfg_type][self.cfg_type_key]['controls'] = controls

    def _get_config_controls(self):
        # This might be the initial creation so it will not be in the
        # config yet
        if self.cfg_type_key in self.config.config[self.cfg_type]:
            return self.config.config[self.cfg_type][self.cfg_type_key].get('controls', {})
        else:
            return {}

    def _get_control(self, key, setting):
        value = self.controls.get(key, None)
        if value is None:
            return getattr(settings.config, key)
        return setting.normalize(value)

    def _set_control(self, key, value):
        if value is None or value == getattr(settings.config, key):
            self.controls.pop(key, None)
        else:
            self.controls[key] = value

    def _add_properies(self, control_settings):
        for k, setting in control_settings.items():
            setattr(GWObject, k,
                    property(lambda self, k=k, s=setting: self._get_control(k, s),
                             lambda self, v, k=k: self._set_control(k, v)))

    def update_controls(self):
        committed_controls = self._get_config_controls()
        if self.controls != committed_controls:
            # update our config
            self._set_config_controls(self.config, self.controls)

            updated_obj = self.config.config[self.cfg_type][self.cfg_type_key]
            self.config.update_item(self.cfg_type, self.cfg_type_key, updated_obj)

    def commit_controls(self):
        self.update_controls()
        self.config.commit()
        if self.config.error:
            raise CephiSCSIError(self.config.error_msg)


# ==== File: ceph-iscsi-3.9/ceph_iscsi_config/gateway_setting.py ====

import logging


def convert_str_to_bool(value):
    """
    Convert true/false/yes/no/1/0 to boolean
    """
    if isinstance(value, bool):
        return value

    value = str(value).lower()
    if value in ['1', 'true', 'yes']:
        return True
    elif value in ['0', 'false', 'no']:
        return False

    raise ValueError(value)


class Setting(object):
    def __init__(self, name, type_str, def_val):
        self.name = name
        self.type_str = type_str
        self.def_val = def_val

    def __contains__(self, key):
        return key == self.def_val


class BoolSetting(Setting):
    def __init__(self, name, def_val):
        super(BoolSetting, self).__init__(name, "bool", def_val)

    def to_str(self, norm_val):
        if norm_val:
            return "true"
        else:
            return "false"

    def normalize(self, raw_val):
        try:
            # for compat we also support Yes/No and 1/0
            return convert_str_to_bool(raw_val)
        except ValueError:
            raise ValueError("expected true or false for {}".format(self.name))


class LIOBoolSetting(BoolSetting):
    def __init__(self, name, def_val):
        super(LIOBoolSetting, self).__init__(name, def_val)

    def to_str(self, norm_val):
        if norm_val:
            return "yes"
        else:
            return "no"

    def normalize(self, raw_val):
        try:
            # for compat we also support True/False and 1/0
            return convert_str_to_bool(raw_val)
        except ValueError:
            raise ValueError("expected yes or no for {}".format(self.name))


class ListSetting(Setting):
    def __init__(self, name, def_val):
        super(ListSetting, self).__init__(name, "list", def_val)

    def to_str(self, norm_val):
        return str(norm_val)

    def normalize(self, raw_val):
        return [r.strip() for r in raw_val.split(',')] if raw_val else []


class StrSetting(Setting):
    def __init__(self, name, def_val):
        super(StrSetting, self).__init__(name, "str", def_val)

    def to_str(self, norm_val):
        return str(norm_val)

    def normalize(self, raw_val):
        return str(raw_val)


class IntSetting(Setting):
    def __init__(self, name, min_val, max_val, def_val):
        self.min_val = min_val
        self.max_val = max_val
        super(IntSetting, self).__init__(name, "int", def_val)

    def to_str(self, norm_val):
        return str(norm_val)

    def normalize(self, raw_val):
        try:
            val = int(raw_val)
        except ValueError:
            raise ValueError("expected integer for {}".format(self.name))

        if val < self.min_val:
            raise ValueError("expected integer >= {} for {}".
                             format(self.min_val, self.name))
        if val > self.max_val:
            raise ValueError("expected integer <= {} for {}".
                             format(self.max_val, self.name))

        return val


class EnumSetting(Setting):
    def __init__(self, name, valid_vals, def_val):
        if len(valid_vals) == 0:
            raise ValueError("Invalid enum. There must be at least one valid value.")

        valid_type = type(valid_vals[0])
        if valid_type is not int and valid_type is not str:
            raise ValueError("Invalid enum. Items must be str or int. Got {}".
                             format(valid_type))

        for i in valid_vals:
            if valid_type != type(i):
                raise ValueError("Invalid enum. All items must be the same type. "
                                 "Found {} and {}".format(type(i), valid_type))

        self.valid_vals = valid_vals
        super(EnumSetting, self).__init__(name, "enum", def_val)

    def to_str(self, norm_val):
        return str(norm_val)

    def normalize(self, raw_val):
        if isinstance(self.valid_vals[0], str):
            val = str(raw_val)
        else:
            val = int(raw_val)

        if val not in self.valid_vals:
            raise ValueError("expected {} for {} found {}".
                             format(self.valid_vals, self.name, raw_val))
        return val


CLIENT_SETTINGS = {
    "dataout_timeout": IntSetting("dataout_timeout", 2, 60, 20),
    "nopin_response_timeout": IntSetting("nopin_response_timeout", 3, 60, 5),
    "nopin_timeout": IntSetting("nopin_timeout", 3, 60, 5),
    "cmdsn_depth": IntSetting("cmdsn_depth", 1, 512, 128)}

TGT_SETTINGS = {
    # client settings you can also set at the ceph-iscsi target level
    "dataout_timeout": IntSetting("dataout_timeout", 2, 60, 20),
    "nopin_response_timeout": IntSetting("nopin_response_timeout", 3, 60, 5),
    "nopin_timeout": IntSetting("nopin_timeout", 3, 60, 5),
    "cmdsn_depth": IntSetting("cmdsn_depth", 1, 512, 128),
    # lio tpg settings
    "immediate_data": LIOBoolSetting("immediate_data", True),
    "initial_r2t": LIOBoolSetting("initial_r2t", True),
    "max_outstanding_r2t": IntSetting("max_outstanding_r2t", 1, 65535, 1),
    "first_burst_length": IntSetting("first_burst_length", 512, 16777215, 262144),
    "max_burst_length": IntSetting("max_burst_length", 512, 16777215, 524288),
    "max_recv_data_segment_length": IntSetting("max_recv_data_segment_length",
                                               512, 16777215, 262144),
    "max_xmit_data_segment_length": IntSetting("max_xmit_data_segment_length",
                                               512, 16777215, 262144)}

SYS_SETTINGS = {
    "cluster_name": StrSetting("cluster_name", "ceph"),
    "pool": StrSetting("pool", "rbd"),
    "cluster_client_name": StrSetting("cluster_client_name", "client.admin"),
    "time_out": IntSetting("time_out", 1, 600, 30),
    "api_host": StrSetting("api_host", "::"),
    "api_port": IntSetting("api_port", 1, 65535, 5000),
    "api_secure": BoolSetting("api_secure", True),
    "api_ssl_verify": BoolSetting("api_ssl_verify", False),
    "loop_delay": IntSetting("loop_delay", 1, 60, 2),
    "trusted_ip_list": ListSetting("trusted_ip_list", []),  # comma separated list of IPs
    "api_user": StrSetting("api_user", "admin"),
    "api_password": StrSetting("api_password", "admin"),
    "ceph_user": StrSetting("ceph_user", "admin"),
    "debug": BoolSetting("debug", False),
    "minimum_gateways": IntSetting("minimum_gateways", 1, 9999, 2),
    "ceph_config_dir": StrSetting("ceph_config_dir", '/etc/ceph'),
    "gateway_conf": StrSetting("gateway_conf", 'gateway.conf'),
    "priv_key": StrSetting("priv_key", 'iscsi-gateway.key'),
    "pub_key": StrSetting("pub_key", 'iscsi-gateway-pub.key'),
    "prometheus_exporter": BoolSetting("prometheus_exporter", True),
    "prometheus_port": IntSetting("prometheus_port", 1, 65535, 9287),
    "prometheus_host": StrSetting("prometheus_host", "::"),
    "logger_level": IntSetting("logger_level", logging.DEBUG, logging.CRITICAL,
                               logging.DEBUG),
    "log_to_stderr": BoolSetting("log_to_stderr", False),
    "log_to_stderr_prefix": StrSetting("log_to_stderr_prefix", ""),
    "log_to_file": BoolSetting("log_to_file", True),
    # TODO: This is under sys for compat. It is not settable per device/backend
    # type yet.
    "alua_failover_type": EnumSetting("alua_failover_type",
                                      ["implicit", "explicit"], "implicit")}

TCMU_SETTINGS = {
    "max_data_area_mb": IntSetting("max_data_area_mb", 1, 2048, 8),
    "qfull_timeout": IntSetting("qfull_timeout", 0, 600, 5),
    "osd_op_timeout": IntSetting("osd_op_timeout", 0, 600, 30),
    "hw_max_sectors": IntSetting("hw_max_sectors", 1, 8192, 1024)}

TCMU_DEV_STATUS_SETTINGS = {
    "lock_lost_cnt_threshhold": IntSetting("lock_lost_cnt_threshhold", 1,
                                           1000000, 12),
    "status_check_interval": IntSetting("status_check_interval", 1, 600, 10),
    "stable_state_reset_count": IntSetting("stable_state_reset_count", 1, 600, 3)}


# ==== File: ceph-iscsi-3.9/ceph_iscsi_config/group.py ====

# import ceph_iscsi_config.settings as settings

import json

from ceph_iscsi_config.common import Config
from ceph_iscsi_config.client import GWClient
from ceph_iscsi_config.utils import ListComparison


class Group(object):

    def __init__(self, logger, target_iqn, group_name, members=[], disks=[]):
        """
        Manage a host group definition.
    The input for the group object is the desired state of the group
    where the logic enforced produces an idempotent group definition
    across API/CLI and more importantly Ansible

    :param logger: (logging object) used for centralised logging
    :param target_iqn: (str) target iqn
    :param group_name: (str) group name
    :param members: (list) iscsi IQN's of the clients
    :param disks: (list) disk names of the format pool/image
    """

        self.logger = logger
        self.error = False
        self.error_msg = ''
        self.num_changes = 0

        self.config = Config(logger)
        if self.config.error:
            self.error = self.config.error
            self.error_msg = self.config.error_msg
            return

        self.target_iqn = target_iqn
        self.group_name = group_name
        self.group_members = members
        self.disks = disks

        target_config = self.config.config['targets'][self.target_iqn]
        if group_name in target_config['groups']:
            self.new_group = False
        else:
            self.new_group = True

        self.logger.debug("Group : name={}".format(self.group_name))
        self.logger.debug("Group : members={}".format(self.group_members))
        self.logger.debug("Group : disks={}".format(self.disks))

    def __str__(self):
        return ("Group: {}\n- Members: {}\n- "
                "Disks: {}".format(self.group_name,
                                   self.group_members,
                                   self.disks))

    def _set_error(self, error_msg):
        self.error = True
        self.error_msg = error_msg
        self.logger.debug("Error: {}".format(self.error_msg))

    def _valid_client(self, action, client_iqn):
        """
        validate the addition of a specific client
        :param action: (str) add or remove request
        :param client_iqn: (str) iqn of the client to add to the group
        :return: (bool) true/false whether the client should be accepted
        """

        target_config = self.config.config['targets'][self.target_iqn]
        self.logger.debug("checking '{}'".format(client_iqn))

        # to validate the request, pass through a 'negative' filter
        if action == 'add':
            client = target_config['clients'].get(client_iqn, {})
            if not client:
                self._set_error("client '{}' doesn't exist".format(client_iqn))
                return False
            elif client.get('luns'):
                self._set_error("Client '{}' already has luns. "
                                "Only clients without prior lun maps "
                                "can be added to a group".format(client_iqn))
                return False
            elif client.get('group_name'):
                self._set_error("Client already assigned to {} - a client "
                                "can only belong to one host "
                                "group".format(client.get('group_name')))
                return False
        else:
            # client_iqn must exist in the group
            if client_iqn not in target_config['groups'][self.group_name].get('members'):
                self._set_error("client '{}' is not a member of "
                                "{}".format(client_iqn, self.group_name))
                return False

        # to reach here the request is considered valid
        self.logger.debug("'{}' client '{}' for group '{}'"
                          " is valid".format(action, client_iqn,
                                             self.group_name))
        return True

    def _valid_disk(self, action, disk):
        self.logger.debug("checking disk '{}'".format(disk))
        target_config = self.config.config['targets'][self.target_iqn]
        if action == 'add':
            if disk not in target_config['disks']:
                self._set_error("disk '{}' doesn't exist".format(disk))
                return False
        else:
            if disk not in target_config['groups'][self.group_name]['disks']:
                self._set_error("disk '{}' is not in the group".format(disk))
                return False

        return True

    def _next_lun(self, preferred_lun_id):
        """
        Look at the disk list for the group and return the 1st available
        free LUN id used for adding disks to the group
        :return: (int) lun Id
        """

        lun_range = list(range(0, 256, 1))    # 0->255
        lun_range.remove(preferred_lun_id)
        lun_range.insert(0, preferred_lun_id)

        target_config = self.config.config['targets'][self.target_iqn]
        group = target_config['groups'][self.group_name]
        group_disks = group.get('disks')
        for d in group_disks:
            lun_range.remove(group_disks[d].get('lun_id'))

        return lun_range[0]

    def apply(self):
        """
        setup/manage the group definition
        :return: NULL
        """

        group_seed = {
            "members": [],
            "disks": {}
        }

        target_config = self.config.config['targets'][self.target_iqn]
        if self.new_group:
            # New Group definition, so seed it
            self.logger.debug("Processing request for new group "
                              "'{}'".format(self.group_name))
            if len(set(self.group_members)) != len(self.group_members):
                self._set_error("Member must contain unique clients - no "
                                "duplication")
                return

            self.logger.debug("New group definition required")
            # new_group = True
            target_config['groups'][self.group_name] = group_seed

        # Now the group definition is at least seeded, so let's look at the
        # member and disk information passed
        this_group = target_config['groups'][self.group_name]
        members = ListComparison(this_group.get('members'),
                                 self.group_members)
        disks = ListComparison(list(this_group.get('disks').keys()),
                               self.disks)

        if set(self.disks) != set(this_group.get('disks')) or \
                set(self.group_members) != set(this_group.get('members')):
            group_changed = True
        else:
            group_changed = False

        if group_changed or self.new_group:
            if self.valid_request(members, disks):
                self.update_metadata(members, disks)
            else:
                self._set_error("Group request failed validation")
                return
        else:
            # no changes required
            self.logger.info("Current group definition matches request")

        self.enforce_policy()

    def valid_request(self, members, disks):
        self.logger.info("Validating client membership")
        for mbr in members.added:
            if not self._valid_client('add', mbr):
                self.logger.error("'{}' failed checks".format(mbr))
                return False
        for mbr in members.removed:
            if not self._valid_client('remove', mbr):
                self.logger.error("'{}' failed checks".format(mbr))
                return False

        self.logger.debug("Client membership checks passed")
        self.logger.debug("clients to add : {}".format(members.added))
        self.logger.debug("clients to remove : {}".format(members.removed))

        # client membership is valid, check disks
        self.logger.info("Validating disk membership")
        for disk_name in disks.added:
            if not self._valid_disk('add', disk_name):
                self.logger.error("'{}' failed checks".format(disk_name))
                return False
        for disk_name in disks.removed:
            if not self._valid_disk('remove', disk_name):
                self.logger.error("'{}' failed checks".format(disk_name))
                return False

        self.logger.info("Disk membership checks passed")
        self.logger.debug("disks to add : {}".format(disks.added))
        self.logger.debug("disks to remove : {}".format(disks.removed))

        return True

    def update_metadata(self, members, disks):
        target_config = self.config.config['targets'][self.target_iqn]
        this_group = target_config['groups'].get(self.group_name, {})
        group_disks = this_group.get('disks', {})
        if disks.added:
            # update the groups disk list
            for disk in disks.added:
                lun_seq = self._next_lun(target_config['disks'][disk]['lun_id'])
                group_disks[disk] = {"lun_id": lun_seq}
                self.logger.debug("- adding '{}' to group '{}' @ "
                                  "lun id {}".format(disk,
                                                     self.group_name,
                                                     lun_seq))

        if disks.removed:
            # remove disk from the group definition
            for disk in disks.removed:
                del group_disks[disk]
                self.logger.debug("- removed '{}' from group "
                                  "{}".format(disk, self.group_name))

        if disks.added or disks.removed:
            # update each clients meta data
            self.logger.debug("updating clients LUN masking with "
                              "{}".format(json.dumps(group_disks)))
            for client_iqn in self.group_members:
                self.update_disk_md(client_iqn, group_disks)

        # handle client membership
        if members.changed:
            for client_iqn in members.added:
                self.add_client(client_iqn)
                self.update_disk_md(client_iqn, group_disks)
            for client_iqn in members.removed:
                self.remove_client(client_iqn)

        this_group['members'] = self.group_members
        this_group['disks'] = group_disks
        self.logger.debug("Group '{}' updated to "
                          "{}".format(self.group_name,
                                      json.dumps(this_group)))

        target_config['groups'][self.group_name] = this_group
        self.config.update_item('targets', self.target_iqn, target_config)
        self.config.commit()

    def enforce_policy(self):
        target_config = self.config.config['targets'][self.target_iqn]
        this_group = target_config['groups'][self.group_name]
        group_disks = this_group.get('disks')
        host_group = this_group.get('members')

        image_list = sorted(group_disks.items(),
                            key=lambda v: v[1]['lun_id'])

        for client_iqn in host_group:
            self.update_client(client_iqn, image_list)
            if self.error:
                # Applying the policy failed, so report and abort
                self.logger.error("Unable to apply policy to {} "
                                  ": {}".format(client_iqn,
                                                self.error_msg))
                return

    def add_client(self, client_iqn):
        target_config = self.config.config['targets'][self.target_iqn]
        client_metadata = target_config['clients'][client_iqn]
        client_metadata['group_name'] = self.group_name
        self.config.update_item('targets', self.target_iqn, target_config)
        self.logger.info("Added {} to group {}".format(client_iqn,
                                                       self.group_name))

    def update_disk_md(self, client_iqn, group_disks):
        target_config = self.config.config['targets'][self.target_iqn]
        md = target_config['clients'].get(client_iqn)
        md['luns'] = group_disks
        self.config.update_item('targets', self.target_iqn, target_config)
        self.logger.info("updated {} disk map to "
                         "{}".format(client_iqn,
                                     json.dumps(group_disks)))

    def update_client(self, client_iqn, image_list):
        client = GWClient(self.logger, client_iqn, image_list, '', '', '', '',
                          self.target_iqn)
        client.manage('reconfigure')

        # grab the client's metadata from the config (needed by setup_luns)
        target_config = self.config.config['targets'][self.target_iqn]
        client.metadata = target_config['clients'][client_iqn]
        client.setup_luns(self.config.config['disks'])

        if client.error:
            self._set_error(client.error_msg)

    def remove_client(self, client_iqn):
        target_config = self.config.config['targets'][self.target_iqn]
        client_md = target_config["clients"][client_iqn]

        # remove the group_name setting from the client
        client_md['group_name'] = ''
        self.config.update_item('targets', self.target_iqn, target_config)
        self.logger.info("Removed {} from group {}".format(client_iqn,
                                                           self.group_name))

    def purge(self):
        # act on the group name
        # get the members from the current definition
        target_config = self.config.config['targets'][self.target_iqn]
        groups = target_config['groups']
        if self.group_name in groups:

            for mbr in groups[self.group_name]["members"]:
                self.remove_client(mbr)

            # issue a del_item to the config object for this group_name
            groups.pop(self.group_name)
            self.config.update_item('targets', self.target_iqn, target_config)
            self.config.commit()
            self.logger.info("Group {} removed".format(self.group_name))
        else:
            self._set_error("Group name requested does not exist")
            return

ceph-iscsi-3.9/ceph_iscsi_config/lio.py

from rtslib_fb import root
from rtslib_fb.utils import RTSLibError

__author__ = 'Paul Cuzner, Michael Christie'


class LIO(object):

    def __init__(self):
        self.lio_root = root.RTSRoot()
        self.error = False
        self.error_msg = ''
        self.changed = False

    def drop_lun_maps(self, config, update_config):
        disks_config = config.config['disks']
        backstore_object_names = [disk['backstore_object_name']
                                  for _, disk in disks_config.items()]

        for stg_object in self.lio_root.storage_objects:
            if stg_object.name in backstore_object_names:
                # this is an rbd device that's in the config object,
                # so remove it
                try:
                    stg_object.delete()
                except RTSLibError as err:
                    self.error = True
                    self.error_msg = err
                else:
                    self.changed = True

                    if update_config:
                        # update the disk item to remove the wwn info
                        image_metadata = [disk for _, disk in config.config['disks'].items()
                                          if disk['backstore_object_name'] == stg_object.name][0]
                        image_metadata['wwn'] = ''
                        config.update_item("disks", stg_object.name,
                                           image_metadata)


class Gateway(LIO):

    def __init__(self, config_object):
        LIO.__init__(self)

        self.config = config_object

    def session_count(self):
        return len(list(self.lio_root.sessions))

    def drop_target(self, this_host):
        if this_host in self.config.config['gateways']:
            lio_root = root.RTSRoot()
            for tgt in lio_root.targets:
                if tgt.wwn in self.config.config['targets'] \
                        and this_host in self.config.config['targets'][tgt.wwn]['portals']:
                    tgt.delete()
                    self.changed = True

ceph-iscsi-3.9/ceph_iscsi_config/lun.py

import rados
import rbd
import re

from time import sleep

from rtslib_fb import UserBackedStorageObject, root
from rtslib_fb.utils import RTSLibError

import ceph_iscsi_config.settings as settings

from ceph_iscsi_config.gateway_setting import TCMU_SETTINGS
from ceph_iscsi_config.backstore import USER_RBD
from ceph_iscsi_config.utils import (convert_2_bytes, gen_control_string,
                                     valid_size, get_pool_id, ip_addresses,
                                     get_pools, get_rbd_size, this_host,
                                     human_size, CephiSCSIError)
from ceph_iscsi_config.gateway_object import GWObject
from ceph_iscsi_config.target import GWTarget
from ceph_iscsi_config.client import GWClient, CHAP
from ceph_iscsi_config.group import Group
from ceph_iscsi_config.backstore import lookup_storage_object

__author__ = 'pcuzner@redhat.com'


class RBDDev(object):

    unsupported_features_list = {
        USER_RBD: []
    }

    default_features_list = {
        USER_RBD: [
            'RBD_FEATURE_LAYERING',
            'RBD_FEATURE_EXCLUSIVE_LOCK',
            'RBD_FEATURE_OBJECT_MAP',
            'RBD_FEATURE_FAST_DIFF',
            'RBD_FEATURE_DEEP_FLATTEN'
        ]
    }

    required_features_list = {
        USER_RBD: [
            'RBD_FEATURE_EXCLUSIVE_LOCK'
        ]
    }

    def __init__(self, image, size, backstore, pool=None):
        self.image = image
        self.size_bytes = convert_2_bytes(size)
        self.backstore = backstore
        if pool is None:
            pool = settings.config.pool
        self.pool = pool
        self.pool_id = get_pool_id(pool_name=self.pool)
        self.error = False
        self.error_msg = ''
        self.changed = False

    def create(self):
        """
        Create an rbd image compatible with exporting through LIO to multiple
        clients
        :return: status code and msg
        """

        with rados.Rados(conffile=settings.config.cephconf,
                         name=settings.config.cluster_client_name) as cluster:
            with cluster.open_ioctx(self.pool) as ioctx:
                rbd_inst = rbd.RBD()
                try:
                    rbd_inst.create(ioctx,
                                    self.image,
                                    self.size_bytes,
                                    features=RBDDev.default_features(self.backstore),
                                    old_format=False)

                except (rbd.ImageExists, rbd.InvalidArgument) as err:
                    self.error = True
                    self.error_msg = ("Failed to create rbd image {} in "
                                      "pool {} : {}".format(self.image,
                                                            self.pool,
                                                            err))

    def delete(self):
        """
        Delete the current rbd image
        :return: nothing, but the objects error attribute is set if there
        are problems
        """

        rbd_deleted = False
        extra_error_info = ''

        with rados.Rados(conffile=settings.config.cephconf,
                         name=settings.config.cluster_client_name) as cluster:
            with cluster.open_ioctx(self.pool) as ioctx:
                rbd_inst = rbd.RBD()

                ctr = 0
                while ctr < settings.config.time_out:

                    try:
                        rbd_inst.remove(ioctx, self.image)
                    except rbd.ImageNotFound:
                        rbd_deleted = True
                        break
                    except rbd.ImageBusy:
                        # catch and ignore the busy state - rbd probably still mapped on
                        # another gateway, so we keep trying
                        pass
                    except rbd.ImageHasSnapshots:
                        extra_error_info = " - Image has snapshots"
                        break
                    else:
                        rbd_deleted = True
                        break

                    sleep(settings.config.loop_delay)
                    ctr += settings.config.loop_delay

                if rbd_deleted:
                    return
                else:
                    self.error = True
                    self.error_msg = ("Unable to delete the underlying rbd "
                                      "image {}".format(self.image))
                    if extra_error_info:
                        self.error_msg += extra_error_info

    def rbd_size(self):
        """
        Confirm that the existing rbd image size, matches the requirement
        passed in the request - if the required size is > than current, resize
        the rbd image to match
        :return: boolean value reflecting whether the rbd image was resized
        """

        with rados.Rados(conffile=settings.config.cephconf,
                         name=settings.config.cluster_client_name) as cluster:
            with cluster.open_ioctx(self.pool) as ioctx:
                with rbd.Image(ioctx, self.image) as rbd_image:

                    # get the current size in bytes
                    current_bytes = rbd_image.size()

                    if self.size_bytes > current_bytes:

                        # resize method, doesn't document potential exceptions
                        # so using a generic catch all (Yuk!)
                        try:
                            rbd_image.resize(self.size_bytes)
                        except Exception:
                            self.error = True
                            self.error_msg = ("rbd image resize failed for "
                                              "{}".format(self.image))
                        else:
                            self.changed = True

    def _get_size_bytes(self):
        """
        Return the current size of the rbd image
        :return: (int) rbd image size in bytes
        """

        with rados.Rados(conffile=settings.config.cephconf,
                         name=settings.config.cluster_client_name) as cluster:
            with cluster.open_ioctx(self.pool) as ioctx:
                with rbd.Image(ioctx, self.image) as rbd_image:
                    image_size = rbd_image.size()

        return image_size

    @staticmethod
    def rbd_list(conf=None, pool=None):
        """
        return a list of rbd images in a given pool
        :param pool: pool name (str) to return a list of rbd image names for
        :return: list of rbd image names (list)
        """

        if conf is None:
            conf = settings.config.cephconf
        if pool is None:
            pool = settings.config.pool

        with rados.Rados(conffile=conf,
                         name=settings.config.cluster_client_name) as cluster:
            with cluster.open_ioctx(pool) as ioctx:
                rbd_inst = rbd.RBD()
                rbd_names = rbd_inst.list(ioctx)
        return rbd_names

    @staticmethod
    def rbd_lock_cleanup(logger, local_ips, rbd_image):
        """
        cleanup locks left if this node crashed and was not able to release
        them
        :param logger: logger object to print to
        :param local_ips: list of local ip addresses.
        :rbd_image: rbd image to clean up locking for
        :raise CephiSCSIError.
        """

        lock_info = rbd_image.list_lockers()
        if not lock_info:
            return

        lockers = lock_info.get("lockers")
        for holder in lockers:
            for ip in local_ips:
                if ip in holder[2]:
                    logger.info("Cleaning up stale local lock for {} {}".format(
                        holder[0], holder[1]))
                    try:
                        rbd_image.break_lock(holder[0], holder[1])
                    except Exception as err:
                        raise CephiSCSIError("Could not break lock for {}. "
                                             "Error {}".format(rbd_image, err))

    def _valid_rbd(self):

        valid_state = True
        with rados.Rados(conffile=settings.config.cephconf,
                         name=settings.config.cluster_client_name) as cluster:
            ioctx = cluster.open_ioctx(self.pool)
            with rbd.Image(ioctx, self.image) as rbd_image:

                if rbd_image.features() & RBDDev.required_features(self.backstore) != \
                        RBDDev.required_features(self.backstore):
                    valid_state = False

        return valid_state

    @classmethod
    def unsupported_features(cls, backstore):
        """
        Return an int representing the unsupported features for LIO export
        :return: int
        """
        # build the required feature settings into an int
        feature_int = 0
        for feature in RBDDev.unsupported_features_list[backstore]:
            feature_int += getattr(rbd, feature)

        return feature_int

    @classmethod
    def default_features(cls, backstore):
        """
        Return an int representing the default features for image creation
        :return: int
        """
        # build the required feature settings into an int
        feature_int = 0
        for feature in RBDDev.default_features_list[backstore]:
            feature_int += getattr(rbd, feature)

        return feature_int

    @classmethod
    def required_features(cls, backstore):
        """
        Return an int representing the required features for LIO export
        :return: int
        """
        # build the required feature settings into an int
        feature_int = 0
        for feature in RBDDev.required_features_list[backstore]:
            feature_int += getattr(rbd, feature)

        return feature_int

    current_size = property(_get_size_bytes,
                            doc="return the current size of the rbd(bytes)")

    valid = property(_valid_rbd,
                     doc="check the rbd is valid for export through LIO"
                         " (boolean)")


class LUN(GWObject):
    BACKSTORES = [
        USER_RBD
    ]

    DEFAULT_BACKSTORE = USER_RBD

    SETTINGS = {
        USER_RBD: TCMU_SETTINGS
    }

    def __init__(self, logger, pool, image, size, allocating_host,
                 backstore, backstore_object_name):
        self.logger = logger
        self.image = image
        self.pool = pool
        self.pool_id = 0
        self.size_bytes = convert_2_bytes(size)
        self.config_key = '{}/{}'.format(self.pool, self.image)

        self.allocating_host = allocating_host
        self.backstore = backstore
        self.backstore_object_name = backstore_object_name

        self.error = False
        self.error_msg = ''
        self.num_changes = 0

        try:
            super(LUN, self).__init__('disks', self.config_key, logger,
                                      LUN.SETTINGS[self.backstore])
        except CephiSCSIError as err:
            self.error = True
            self.error_msg = err

        self._validate_request()

    def _validate_request(self):

        if not rados_pool(pool=self.pool):
            # Could create the pool, but a fat finger moment in the config
            # file would mean rbd images get created and mapped, and then need
            # correcting. Better to exit if the pool doesn't exist
            self.error = True
            self.error_msg = ("Pool '{}' does not exist. Unable to "
                              "continue".format(self.pool))

    def remove_lun(self, preserve_image):
        local_gw = this_host()
        self.logger.info("LUN deletion request received, rbd removal to be "
                         "performed by {}".format(self.allocating_host))

        # First ensure the LUN is not in use
        for target_iqn, target in self.config.config['targets'].items():
            if self.config_key in target['disks']:
                self.error = True
                self.error_msg = ("Unable to delete {} - allocated to "
                                  "{}".format(self.config_key,
                                              target_iqn))
                self.logger.warning(self.error_msg)
                return

        # Check that the LUN is in LIO - if not there is nothing to do for
        # this request
        lun = self.lio_stg_object()
        if lun:
            # Now we know the request is for a LUN in LIO, and it's not masked
            # to a client
            self.remove_dev_from_lio()
            if self.error:
                return

        rbd_image = RBDDev(self.image, '0G', self.backstore, self.pool)

        if local_gw == self.allocating_host:
            # by using the allocating host we ensure the delete is not
            # issue by several hosts when initiated through ansible
            if not preserve_image:
                rbd_image.delete()
                if rbd_image.error:
                    self.error = True
                    self.error_msg = rbd_image.error_msg
                    return

            # remove the definition from the config object
            self.config.del_item('disks', self.config_key)

            self.config.commit()

    def unmap_lun(self, target_iqn):
        local_gw = this_host()
        self.logger.info("LUN unmap request received, config commit to be "
                         "performed by {}".format(self.allocating_host))

        target_config = self.config.config['targets'][target_iqn]

        # First ensure the LUN is not in use
        clients = target_config['clients']
        for client_iqn in clients:
            client_luns = clients[client_iqn]['luns'].keys()
            if self.config_key in client_luns:
                self.error = True
                self.error_msg = ("Unable to delete {} - allocated to {}"
                                  .format(self.config_key, client_iqn))
                self.logger.warning(self.error_msg)
                return

        # Check that the LUN is in LIO - if not there is nothing to do for
        # this request
        lun = self.lio_stg_object()
        if not lun:
            return

        # Now we know the request is for a LUN in LIO, and it's not masked
        # to a client
        self.remove_dev_from_lio()
        if self.error:
            return

        if local_gw == self.allocating_host:
            # by using the allocating host we ensure the delete is not
            # issue by several hosts when initiated through ansible
            target_config['disks'].pop(self.config_key)
            self.config.update_item("targets", target_iqn, target_config)

            # determine which host was the path owner
            disk_owner = self.config.config['disks'][self.config_key].get('owner')
            if disk_owner:
                # update the active_luns count for gateway that owned this
                # lun
                gw_metadata = self.config.config['gateways'][disk_owner]
                if gw_metadata['active_luns'] > 0:
                    gw_metadata['active_luns'] -= 1

                    self.config.update_item('gateways',
                                            disk_owner,
                                            gw_metadata)

            disk_metadata = self.config.config['disks'][self.config_key]
            if 'owner' in disk_metadata:
                del disk_metadata['owner']
                self.logger.debug("{} owner deleted".format(self.config_key))
            self.config.update_item("disks", self.config_key, disk_metadata)

            self.config.commit()

    def _get_next_lun_id(self, target_disks):
        lun_ids_in_use = [t['lun_id'] for t in target_disks.values()]
        lun_id_candidate = 0
        while lun_id_candidate in lun_ids_in_use:
            lun_id_candidate += 1
        return lun_id_candidate

    def map_lun(self, gateway, owner, disk, lun_id=None):
        target_config = self.config.config['targets'][gateway.iqn]

        disk_metadata = self.config.config['disks'][disk]
        disk_metadata['owner'] = owner
        self.config.update_item("disks", disk, disk_metadata)

        target_disk_config = target_config['disks'].get(disk)
        if not target_disk_config:
            if lun_id is None:
                lun_id = self._get_next_lun_id(target_config['disks'])
            target_config['disks'][disk] = {
                'lun_id': lun_id
            }
            self.config.update_item("targets", gateway.iqn, target_config)

            gateway_dict = self.config.config['gateways'][owner]
            gateway_dict['active_luns'] += 1
            self.config.update_item('gateways', owner, gateway_dict)

        so = self.allocate()
        if self.error:
            raise CephiSCSIError(self.error_msg)

        gateway.map_lun(self.config, so, target_config['disks'][disk])
        if gateway.error:
            raise CephiSCSIError(gateway.error_msg)

    def manage(self, desired_state):

        self.logger.debug("LUN.manage request for {}, desired state "
                          "{}".format(self.image, desired_state))

        if desired_state == 'present':

            self.allocate()

        elif desired_state == 'absent':

            self.remove_lun()

    def deactivate(self):
        so = self.lio_stg_object()
        if not so:
            # Could be due to a restart after failure. Just log and ignore.
            self.logger.warning("LUN {} already deactivated".format(self.image))
            return

        for alun in so.attached_luns:
            for mlun in alun.mapped_luns:
                node_acl = mlun.parent_nodeacl
                if node_acl.session and \
                        node_acl.session.get('state', '').upper() == 'LOGGED_IN':
                    raise CephiSCSIError("LUN {} in-use".format(self.image))

        self.remove_dev_from_lio()
        if self.error:
            raise CephiSCSIError("LUN deactivate failure - {}".format(self.error_msg))

    def activate(self):
        disk = self.config.config['disks'].get(self.config_key, None)
        if not disk:
            raise CephiSCSIError("Image {} not found.".format(self.image))

        wwn = disk.get('wwn', None)
        if not wwn:
            raise CephiSCSIError("LUN {} missing wwn".format(self.image))

        # re-add backend storage object
        so = self.lio_stg_object()
        if not so:
            so = self.add_dev_to_lio(wwn)

        if self.error:
            raise CephiSCSIError("LUN activate failure - {}".format(self.error_msg))

        # re-add LUN to target
        local_gw = this_host()
        targets_items = [item for item in self.config.config['targets'].items()
                         if self.config_key in item[1]['disks'] and local_gw in item[1]['portals']]
        for target_iqn, target in targets_items:
            ip_list = target['ip_list']

            # Add the mapping for the lun to ensure the block device is
            # present on all TPG's
            gateway = GWTarget(self.logger, target_iqn, ip_list)

            gateway.map_lun(self.config, so, target['disks'][self.config_key])
            if gateway.error:
                raise CephiSCSIError("LUN mapping failed - {}".format(gateway.error_msg))

            # re-map LUN to hosts
            client_err = ''
            for client_iqn in target['clients']:
                client_metadata = target['clients'][client_iqn]
                if client_metadata.get('group_name', ''):
                    continue

                image_list = list(client_metadata['luns'].keys())
                if self.config_key not in image_list:
                    continue

                client_auth_config = client_metadata['auth']

                client_chap = CHAP(client_auth_config['username'],
                                   client_auth_config['password'],
                                   client_auth_config['password_encryption_enabled'])
                if client_chap.error:
                    raise CephiSCSIError("Password decode issue : "
                                         "{}".format(client_chap.error_msg))
                client_chap_mutual = CHAP(client_auth_config['mutual_username'],
                                          client_auth_config['mutual_password'],
                                          client_auth_config['mutual_password_encryption_enabled'])
                if client_chap_mutual.error:
                    raise CephiSCSIError("Password decode issue : "
                                         "{}".format(client_chap_mutual.error_msg))

                client = GWClient(self.logger, client_iqn, image_list,
                                  client_chap.user, client_chap.password,
                                  client_chap_mutual.user,
                                  client_chap_mutual.password, target_iqn)
                client.manage('present')
                if client.error:
                    client_err = "LUN mapping failed {} - {}".format(
                        client_iqn, client.error_msg)

            # re-map LUN to host groups
            for group_name in target['groups']:
                host_group = target['groups'][group_name]
                members = host_group.get('members')
                disks = host_group.get('disks').keys()
                if self.config_key not in disks:
                    continue

                group = Group(self.logger, target_iqn, group_name,
                              members, disks)
                group.apply()
                if group.error:
                    client_err = "LUN mapping failed {} - {}".format(
                        group_name, group.error_msg)

            if client_err:
                raise CephiSCSIError(client_err)

    def add_disk_item(self, wwn, pool_id):
        # rbd image is OK to use, so ensure it's in the config
        # object
        if self.config_key not in self.config.config['disks']:
            self.config.add_item('disks', self.config_key)

        # update the other items
        disk_attr = {"wwn": wwn,
                     "image": self.image,
                     "pool": self.pool,
                     "allocating_host": self.allocating_host,
                     "pool_id": pool_id,
                     "controls": self.controls,
                     "backstore": self.backstore,
                     "backstore_object_name": self.backstore_object_name}

        self.config.update_item('disks', self.config_key, disk_attr)

        self.logger.debug("(LUN.allocate) registered '{}/{}' with "
                          "wwn '{}' with the config "
                          "object".format(self.pool, self.image, wwn))

    def allocate(self, keep_dev_in_lio=True, in_wwn=None):
        """
        Create image and add to LIO and config.

        :param keep_dev_in_lio: (bool) false if the LIO so should be removed
                                after allocating the wwn.
        :return: LIO storage object if successful and keep_dev_in_lio=True
                 else None.
""" self.logger.debug("LUN.allocate starting, listing rbd devices") disk_list = RBDDev.rbd_list(pool=self.pool) self.logger.debug("rados pool '{}' contains the following - " "{}".format(self.pool, disk_list)) local_gw = this_host() self.logger.debug("Hostname Check - this host is {}, target host for " "allocations is {}".format(local_gw, self.allocating_host)) rbd_image = RBDDev(self.image, self.size_bytes, self.backstore, self.pool) self.pool_id = rbd_image.pool_id # if the image required isn't defined, create it! if self.image not in disk_list: # create the requested disk if this is the 'owning' host if local_gw == self.allocating_host: rbd_image.create() if not rbd_image.error: self.logger.info("(LUN.allocate) created {}/{} " "successfully".format(self.pool, self.image)) self.num_changes += 1 else: self.error = True self.error_msg = rbd_image.error_msg return None else: # the image isn't there, and this isn't the 'owning' host # so wait until the disk arrives waiting = 0 while self.image not in disk_list: sleep(settings.config.loop_delay) disk_list = RBDDev.rbd_list(pool=self.pool) waiting += settings.config.loop_delay if waiting >= settings.config.time_out: self.error = True self.error_msg = ("(LUN.allocate) timed out waiting " "for rbd to show up") return None else: # requested image is already defined to ceph if not rbd_image.valid: # rbd image is not valid for export, so abort self.error = True features = ','.join(RBDDev.unsupported_features_list[self.backstore]) self.error_msg = ("(LUN.allocate) rbd '{}' is not compatible " "with LIO\nImage features {} are not" " supported".format(self.image, features)) self.logger.error(self.error_msg) return None self.logger.debug("Check the rbd image size matches the request") # if updates_made is not set, the disk pre-exists so on the owning # host see if it needs to be resized if self.num_changes == 0 and local_gw == self.allocating_host: # check the size, and update if needed rbd_image.rbd_size() if rbd_image.error: 
self.logger.critical(rbd_image.error_msg) self.error = True self.error_msg = rbd_image.error_msg return None if rbd_image.changed: self.logger.info("rbd image {} resized " "to {}".format(self.config_key, self.size_bytes)) self.num_changes += 1 else: self.logger.debug("rbd image {} size matches the configuration" " file request".format(self.config_key)) self.logger.debug("Begin processing LIO mapping") # now see if we need to add this rbd image to LIO so = self.lio_stg_object() if not so: # this image has not been defined to this hosts LIO, so check the # config for the details and if it's missing define the # wwn/alua_state and update the config if local_gw == self.allocating_host: # first check to see if the device needs adding try: wwn = self.config.config['disks'][self.config_key]['wwn'] except KeyError: wwn = '' if wwn == '' or in_wwn is not None: # disk hasn't been defined to LIO yet, it' not been defined # to the config yet and this is the allocating host so = self.add_dev_to_lio(in_wwn) if self.error: return None # lun is now in LIO, time for some housekeeping :P wwn = so._get_wwn() if not keep_dev_in_lio: self.remove_dev_from_lio() if self.error: return None self.add_disk_item(wwn, rbd_image.pool_id) self.logger.info("(LUN.allocate) added '{}/{}' to LIO and" " config object".format(self.pool, self.image)) else: # config object already had wwn for this rbd image so = self.add_dev_to_lio(wwn) if self.error: return None self.update_controls() self.logger.debug("(LUN.allocate) registered '{}' to LIO " "with wwn '{}' from the config " "object".format(self.image, wwn)) self.num_changes += 1 else: # lun is not already in LIO, but this is not the owning node # that defines the wwn we need the wwn from the config # (placed by the allocating host), so we wait! 
waiting = 0 while waiting < settings.config.time_out: self.config.refresh() if self.config_key in self.config.config['disks']: if 'wwn' in self.config.config['disks'][self.config_key]: if self.config.config['disks'][self.config_key]['wwn']: wwn = self.config.config['disks'][self.config_key]['wwn'] break sleep(settings.config.loop_delay) waiting += settings.config.loop_delay self.logger.debug("(LUN.allocate) waiting for config object" " to show {} with it's wwn".format(self.image)) if waiting >= settings.config.time_out: self.error = True self.error_msg = ("(LUN.allocate) waited too long for the " "wwn information on image {} to " "arrive".format(self.image)) return None # At this point we have a wwn from the config for this rbd # image, so just add to LIO so = self.add_dev_to_lio(wwn) if self.error: return None self.logger.info("(LUN.allocate) added {} to LIO using wwn " "'{}' defined by {}".format(self.image, wwn, self.allocating_host)) self.num_changes += 1 else: # lun exists in LIO, check the size is correct if not self.lio_size_ok(rbd_image, so): self.error = True self.error_msg = "Unable to sync the rbd device size with LIO" self.logger.critical(self.error_msg) return None # wwn is empty means there disk config is empty and we need # recover it and currently we do not support size changing disk_config = self.config.config['disks'].get(self.config_key, None) if disk_config is None and self.lio_size_ok(rbd_image, so, True): # try to recovery the config from LIO if local_gw == self.allocating_host: # lun is now in LIO, time for some housekeeping :P wwn = so._get_wwn() self.add_disk_item(wwn, rbd_image.pool_id) self.num_changes += 1 self.logger.debug("config meta data for this disk is " "{}".format(self.config.config['disks'].get(self.config_key, None))) # the owning host for an image is the only host that commits to the # config if local_gw == self.allocating_host and self.config.changed: self.logger.debug("(LUN.allocate) Committing change(s) to the " "config 
object in pool {}".format(self.pool)) self.config.commit() self.error = self.config.error self.error_msg = self.config.error_msg if self.error: return None return so def lio_size_ok(self, rbd_object, stg_object, equal_size=False): """ Check that the SO in LIO matches the current size of the rbd. if the size requested < current size, just return. Downsizing an rbd is not supported by this code and problematic for client filesystems anyway! :return boolean indicating whether the size matches """ tmr = 0 size_ok = False rbd_size_ok = False # dm_path_found = False # We have to wait for the rbd size to match, since the rbd could have # been resized on another gateway host while tmr < settings.config.time_out: if self.size_bytes <= rbd_object.current_size: rbd_size_ok = True break sleep(settings.config.loop_delay) tmr += settings.config.loop_delay # we have the right size for the rbd - check that LIO dev size matches if rbd_size_ok: if equal_size: return stg_object.size == self.size_bytes # If the LIO size is not right, poke it with the new value if stg_object.size < self.size_bytes: self.logger.info("Resizing {} in LIO " "to {}".format(self.config_key, self.size_bytes)) stg_object.set_attribute("dev_size", self.size_bytes) size_ok = stg_object.size == self.size_bytes else: size_ok = True return size_ok def lio_stg_object(self): try: return lookup_storage_object(self.backstore_object_name, self.backstore) except RTSLibError as err: self.logger.debug("lio stg lookup failed {}".format(err)) return None def add_dev_to_lio(self, in_wwn=None): """ Add an rbd device to the LIO configuration :param in_wwn: optional wwn identifying the rbd image to clients (must match across gateways) :return: LIO LUN object """ self.logger.info("(LUN.add_dev_to_lio) Adding image " "'{}' to LIO backstore {}".format(self.config_key, self.backstore)) new_lun = None if self.backstore == USER_RBD: new_lun = self._add_dev_to_lio_user_rbd(in_wwn) else: raise CephiSCSIError("Error adding device to lio 
- " "Unsupported backstore {}".format(self.backstore)) if new_lun: self.logger.info("(LUN.add_dev_to_lio) Successfully added {}" " to LIO".format(self.config_key)) return new_lun def _add_dev_to_lio_user_rbd(self, in_wwn=None): """ Add an rbd device to the LIO configuration (`USER_RBD`) :param in_wwn: optional wwn identifying the rbd image to clients (must match across gateways) :return: LIO LUN object """ # extract control parameter overrides (if any) or use default controls = {} for k in ['max_data_area_mb', 'hw_max_sectors']: controls[k] = getattr(self, k) control_string = gen_control_string(controls) if control_string: self.logger.debug("control=\"{}\"".format(control_string)) new_lun = None try: # config string = rbd identifier / config_key (pool/image) / # optional osd timeout cfgstring = "rbd/{}/{};osd_op_timeout={}".format(self.pool, self.image, self.osd_op_timeout) if (settings.config.cephconf != '/etc/ceph/ceph.conf'): cfgstring += ";conf={}".format(settings.config.cephconf) if (settings.config.cluster_client_name != 'client.admin'): client_id = settings.config.cluster_client_name.split('.', 1)[1] cfgstring += ";id={}".format(client_id) new_lun = UserBackedStorageObject(name=self.backstore_object_name, config=cfgstring, size=self.size_bytes, wwn=in_wwn, control=control_string) except (RTSLibError, IOError) as err: self.error = True self.error_msg = ("failed to add {} to LIO - " "error({})".format(self.config_key, str(err))) self.logger.error(self.error_msg) return None try: new_lun.set_attribute("cmd_time_out", 0) new_lun.set_attribute("qfull_time_out", self.qfull_timeout) except RTSLibError as err: self.error = True self.error_msg = ("Could not set LIO device attribute " "cmd_time_out/qfull_time_out for device: {}. " "Kernel not supported. 
- " "error({})".format(self.config_key, str(err))) self.logger.error(self.error_msg) new_lun.delete() return None return new_lun def remove_dev_from_lio(self): lio_root = root.RTSRoot() # remove the device from all tpgs for t in lio_root.tpgs: for lun in t.luns: if lun.storage_object.name == self.backstore_object_name: try: lun.delete() except Exception as e: self.error = True self.error_msg = ("Delete from LIO/TPG failed - " "{}".format(e)) return else: break # continue to the next tpg so = self.lio_stg_object() if not so: self.error = True self.error_msg = ("Removal failed. Could not find LIO object.") return try: so.delete() except Exception as err: self.error = True self.error_msg = ("Delete from LIO/backstores failed - " "{}".format(err)) return @staticmethod def valid_disk(ceph_iscsi_config, logger, **kwargs): """ determine whether the given image info is valid for a disk operation :param ceph_iscsi_config: Config object :param logger: logger object :param image_id: (str) . format :return: (str) either 'ok' or an error description """ # create can also pass optional controls dict mode_vars = {"create": ['pool', 'image', 'size', 'count'], "resize": ['pool', 'image', 'size'], "reconfigure": ['pool', 'image', 'controls'], "delete": ['pool', 'image']} if 'mode' in kwargs.keys(): mode = kwargs['mode'] else: mode = None backstore = kwargs['backstore'] if backstore not in LUN.BACKSTORES: return "Invalid '{}' backstore - Supported backstores: " \ "{}".format(backstore, ','.join(LUN.BACKSTORES)) if mode in mode_vars: if not all(x in kwargs for x in mode_vars[mode]): return ("{} request must contain the following " "variables: {}".format(mode, ','.join(mode_vars[mode]))) else: return "disk operation mode '{}' is invalid".format(mode) config = ceph_iscsi_config.config disk_key = "{}/{}".format(kwargs['pool'], kwargs['image']) if mode in ['create', 'resize']: if kwargs['pool'] not in get_pools(): return "pool name is invalid" if mode == 'create': if kwargs['size'] and 
not valid_size(kwargs['size']): return "Size is invalid" if len(config['disks']) >= 256: return "Disk limit of 256 reached." disk_regex = re.compile(r"^[a-zA-Z0-9\-_\.]+$") if not disk_regex.search(kwargs['pool']): return "Invalid pool name (use alphanumeric, '_', '.', or '-' characters)" if not disk_regex.search(kwargs['image']): return "Invalid image name (use alphanumeric, '_', '.', or '-' characters)" if kwargs['wwn'] is not None: for disk_id, disk_config in config['disks'].items(): if disk_config['wwn'] == kwargs['wwn']: return "WWN {} is already in use by {}".format(kwargs['wwn'], disk_id) if kwargs['count'].isdigit(): if not 1 <= int(kwargs['count']) <= 10: return "invalid count specified, must be an integer (1-10)" if int(kwargs['count']) > 1 and kwargs['wwn'] is not None: return "WWN cannot be specified when count > 1" else: return "invalid count specified, must be an integer (1-10)" if kwargs['count'] == '1': new_disks = {disk_key} else: limit = int(kwargs['count']) + 1 new_disks = set(['{}{}'.format(disk_key, ctr) for ctr in range(1, limit)]) if any(new_disk in config['disks'] for new_disk in new_disks): return ("at least one rbd image(s) with that name/prefix is " "already defined") if mode in ["resize", "delete", "reconfigure"]: # disk must exist in the config if disk_key not in config['disks']: return ("rbd {}/{} is not defined to the " "configuration".format(kwargs['pool'], kwargs['image'])) if mode == 'resize': if not valid_size(kwargs['size']): return "Size is invalid" size = kwargs['size'].upper() try: current_size = get_rbd_size(kwargs['pool'], kwargs['image']) except Exception as err: return "failed to get current size: {}".format(err) if convert_2_bytes(size) <= current_size: return ("resize value must be larger than the " "current size ({}/{})".format(human_size(current_size), current_size)) if mode in ['create', 'reconfigure']: try: settings.Settings.normalize_controls(kwargs['controls'], LUN.SETTINGS[backstore]) except ValueError as err: 
return "Unexpected or invalid controls: {}".format(err) if mode == 'delete': # disk must *not* be allocated to a client in the config mapped_list = [] allocation_list = [] for target_iqn, target in config['targets'].items(): if disk_key in target['disks']: mapped_list.append(target_iqn) for client_iqn in target['clients']: client_metadata = target['clients'][client_iqn] if disk_key in client_metadata['luns']: allocation_list.append(client_iqn) if allocation_list: return ("Unable to delete {}. Allocated " "to: {}".format(disk_key, ','.join(allocation_list))) if mapped_list: return ("Unable to delete {}. Mapped " "to: {}".format(disk_key, ','.join(mapped_list))) return 'ok' @staticmethod def get_owner(gateways, portals): """ Determine the gateway in the configuration with the lowest number of active LUNs. This gateway is then selected as the owner for the primary path of the current LUN being processed :param gateways: gateway dict returned from the RADOS configuration object :param portals: portal dict returned from the RADOS configuration object :return: specific gateway hostname (str) that should provide the active path for the next LUN """ return sorted(portals.keys(), key=lambda x: (gateways[x]['active_luns']))[0] @staticmethod def _backstore_object_name_exists(disks_config, backstore_object_name_exists): return len([disk for _, disk in disks_config.items() if disk['backstore_object_name'] == backstore_object_name_exists]) > 0 @staticmethod def get_backstore_object_name(pool, image, disks_config): """ Determine the backstore storage object name based on the pool name, image name, and existing storage object names to avoid conflicts. 
Example of how name conflict resolution will work: - Add disk `a.b/c` will create the storage object `a.b.c` - Add disk `a/b.c` will create the storage object `a.b.c.1` :param pool: pool name :param image: image name :param disks_config: disks configuration from `gateway.conf` :return: the backstore storage object name to be used """ base_name = '{}.{}'.format(pool, image) candidate = base_name counter = 0 while LUN._backstore_object_name_exists(disks_config, candidate): counter += 1 candidate = '{}.{}'.format(base_name, counter) return candidate @staticmethod def find_first_mapped_target(config, disk): for target, target_config in config.config['targets'].items(): if disk in target_config['disks']: return target return None @staticmethod def reassign_owners(logger, config): """ Reassign disks across gateways after a gw deletion. :param logger: logger object to print to :param config: configuration dict from the rados pool :raises CephiSCSIError. """ updated = False gateways = config.config['gateways'] for disk, disk_config in config.config['disks'].items(): owner = disk_config.get('owner', None) if owner is None: continue gw = gateways.get(owner, None) if gw is None: target = LUN.find_first_mapped_target(config, disk) if not gateways or target is None: disk_config.pop('owner') else: target_config = config.config['targets'][target] new_owner = LUN.get_owner(gateways, target_config['portals']) logger.info("Changing {}'s owner from {} to {}". format(disk, owner, new_owner)) disk_config['owner'] = new_owner gw_config = config.config['gateways'][new_owner] active_cnt = gw_config['active_luns'] gw_config['active_luns'] = active_cnt + 1 config.update_item("gateways", new_owner, gw_config) config.update_item("disks", disk, disk_config) updated = True if updated: config.commit("retain") if config.error: raise CephiSCSIError("Could not update LUN owners: {}". 
format(config.error_msg)) @staticmethod def define_luns(logger, config, target): """ define the disks in the config to LIO and map to a LUN :param logger: logger object to print to :param config: configuration dict from the rados pool :param target: (object) gateway object - used for mapping :raises CephiSCSIError. """ ips = ip_addresses() local_gw = this_host() target_disks = config.config["targets"][target.iqn]['disks'] if not target_disks: logger.info("No LUNs to export") return disks = {} for disk in target_disks.keys(): disks[disk] = config.config['disks'][disk] # sort the disks dict keys, so the disks are registered in a specific # sequence srtd_disks = sorted(disks) pools = {disks[disk_key]['pool'] for disk_key in srtd_disks} ips = ip_addresses() with rados.Rados(conffile=settings.config.cephconf, name=settings.config.cluster_client_name) as cluster: for pool in pools: logger.debug("Processing rbd's in '{}' pool".format(pool)) with cluster.open_ioctx(pool) as ioctx: pool_disks = [disk_key for disk_key in srtd_disks if disk_key.startswith(pool + '/')] for disk_key in pool_disks: pool, image_name = disk_key.split('/') try: with rbd.Image(ioctx, image_name) as rbd_image: disk_config = config.config['disks'][disk_key] backstore = disk_config['backstore'] backstore_object_name = disk_config['backstore_object_name'] lun = LUN(logger, pool, image_name, rbd_image.size(), local_gw, backstore, backstore_object_name) if lun.error: raise CephiSCSIError("Error defining rbd image {}" .format(disk_key)) so = lun.allocate() if lun.error: raise CephiSCSIError("Unable to register {} " "with LIO: {}" .format(disk_key, lun.error_msg)) # If not in use by another target on this gw # clean up stale locks. 
                                if so.status != 'activated':
                                    RBDDev.rbd_lock_cleanup(logger, ips,
                                                            rbd_image)

                                target._map_lun(config, so,
                                                target_disks[disk_key])
                                if target.error:
                                    raise CephiSCSIError("Mapping for {} "
                                                         "failed: {}".format(disk_key,
                                                                             target.error_msg))
                        except rbd.ImageNotFound:
                            logger.error("Error defining rbd image {}, not found"
                                         .format(disk_key))


def rados_pool(conf=None, pool=None):
    """
    determine if a given pool name is defined within the ceph cluster
    :param pool: pool name to check for (str)
    :return: Boolean representing the pool's existence
    """
    if conf is None:
        conf = settings.config.cephconf
    if pool is None:
        pool = settings.config.pool

    with rados.Rados(conffile=conf,
                     name=settings.config.cluster_client_name) as cluster:
        pool_list = cluster.list_pools()

    return pool in pool_list
ceph-iscsi-3.9/ceph_iscsi_config/metrics.py
import threading
import time
import os

import rtslib_fb.tcm as tcm

from rtslib_fb.root import RTSRoot
from rtslib_fb.utils import fread

from ceph_iscsi_config.utils import this_host, CephiSCSIInval


class Metric(object):
    """ Metric object used to hold the metric, labels and value """

    def __init__(self, vhelp, vtype):
        self.var_help = vhelp
        self.var_type = vtype
        self.data = []

    def add(self, labels, value):
        _d = dict(labels=labels,
                  value=value)
        self.data.append(_d)


class TPGMapper(threading.Thread):
    """ thread which builds a list of LUNs mapped to a given TPG """

    def __init__(self, tpg):
        self.tpg = tpg
        self.tpg_id = tpg.tag
        self.portal_ip = next(tpg.network_portals).ip_address
        self.owned_luns = dict()
        threading.Thread.__init__(self)

    def run(self):
        for lun in self.tpg.luns:
            if lun.alua_tg_pt_gp_name == 'ao':
                lun_name = lun.storage_object.name
                self.owned_luns[lun_name] = self.portal_ip


class GatewayStats(object):
    """ Gather and format gateway related performance data """

    def __init__(self):
        self.metrics = {}
        self._root = RTSRoot()

        # use utils.this_host
        self.gw_name = this_host()

    def formatted(self):
        s = ''
        for m_name in sorted(self.metrics.keys()):
            metric = self.metrics[m_name]
            s += "#HELP: {} {}\n".format(m_name, metric.var_help)
            s += "#TYPE: {} {}\n".format(m_name, metric.var_type)
            for v in metric.data:
                labels = []
                for n in v['labels'].items():
                    label_name = '{}='.format(n[0])
                    label_value = '"{}"'.format(n[1])
                    labels.append('{}{}'.format(label_name, label_value))
                s += "{}{{{}}} {}\n".format(m_name,
                                            ','.join(labels),
                                            v["value"])
        return s.rstrip()

    def collect(self):
        # the tcm module uses a global called bs_cache and performs lookups
        # against this to verify a storage object exists. However, if a change
        # is made the local copy of bs_cache in the rbd-target-gw scope is not
        # changed, so we reset it here to ensure it always starts empty
        tcm.bs_cache = {}

        stime = time.time()
        self._get_tpg()
        self._get_mapping()
        self._get_lun_sizes()
        self._get_lun_stats()
        self._get_client_details()
        etime = time.time()

        summary = Metric("time taken to scrape iscsi stats (secs)",
                         "gauge")
        labels = {"gw_name": self.gw_name}
        summary.add(labels, etime - stime)
        self.metrics['ceph_iscsi_scrape_duration_seconds'] = summary

    def _get_tpg(self):
        stat = Metric("target portal groups defined within gateway group",
                      "gauge")
        tgt = next(self._root.targets, None)
        if tgt is None:
            raise CephiSCSIInval("No targets setup.")
        labels = {"gw_iqn": tgt.wwn}
        v = len([tpg for tpg in self._root.tpgs])
        stat.add(labels, v)
        self.metrics["ceph_iscsi_gateway_tpg_total"] = stat

    def _get_mapping(self):
        mapping = Metric("LUN mapping state 0=unmapped, 1=mapped",
                         "gauge")
        mapped_devices = [lun.tpg_lun.storage_object.name
                          for lun in self._root.mapped_luns]

        tpg_mappers = []
        for tpg in self._root.tpgs:
            mapper = TPGMapper(tpg)
            mapper.start()
            tpg_mappers.append(mapper)

        for mapper in tpg_mappers:
            mapper.join()

        if not tpg_mappers:
            raise CephiSCSIInval("Target not mapped to gateway.")

        # merge the tpg lun maps
        all_devs = tpg_mappers[0].owned_luns.copy()
        for mapper in tpg_mappers[1:]:
all_devs.update(mapper.owned_luns) for so in self._root.storage_objects: so_state = 1 if so.name in mapped_devices else 0 owner = all_devs[so.name] mapping.add({"lun_name": so.name, "gw_name": self.gw_name, "gw_owner": owner}, so_state) self.metrics["ceph_iscsi_lun_mapped"] = mapping def _get_lun_sizes(self): size_bytes = Metric("LUN size (bytes)", "gauge") for so in self._root.storage_objects: labels = {"lun_name": so.name, "gw_name": self.gw_name} lun_size = so.size size_bytes.add(labels, lun_size) self.metrics["ceph_iscsi_lun_size_bytes"] = size_bytes def _get_lun_stats(self): iops = Metric("IOPS per LUN per client", "counter") read_bytes = Metric("read bytes per LUN per client", "counter") write_bytes = Metric("write bytes per LUN client", "counter") for node_acl in self._root.node_acls: for lun in node_acl.mapped_luns: lun_path = lun.path lun_name = lun.tpg_lun.storage_object.name perf_labels = {"gw_name": self.gw_name, "client_iqn": node_acl.node_wwn, "lun_name": lun_name} lun_iops = int(fread( os.path.join(lun_path, "statistics/scsi_auth_intr/num_cmds"))) mbytes_read = int(fread( os.path.join(lun_path, "statistics/scsi_auth_intr/read_mbytes"))) mbytes_write = int(fread( os.path.join(lun_path, "statistics/scsi_auth_intr/write_mbytes"))) iops.add(perf_labels, lun_iops) read_bytes.add(perf_labels, mbytes_read * (1024 ** 2)) write_bytes.add(perf_labels, mbytes_write * (1024 ** 2)) self.metrics["ceph_iscsi_lun_iops"] = iops self.metrics["ceph_iscsi_lun_read_bytes"] = read_bytes self.metrics["ceph_iscsi_lun_write_bytes"] = write_bytes def _get_client_details(self): logins = Metric("iscsi client session active (0=No, 1=Yes)", "gauge") lun_map = Metric("LUN ID by client", "gauge") logged_in_clients = [client['parent_nodeacl'].node_wwn for client in self._root.sessions if client['state'] == 'LOGGED_IN'] for client in self._root.node_acls: login_labels = {"gw_name": self.gw_name, "client_iqn": client.node_wwn } v = 1 if client.node_wwn in logged_in_clients else 0 
            logins.add(login_labels, v)

            for lun in client.mapped_luns:
                lun_labels = {"gw_name": self.gw_name,
                              "client_iqn": client.node_wwn,
                              "lun_name": lun.tpg_lun.storage_object.name}
                v = lun.mapped_lun
                lun_map.add(lun_labels, v)

        self.metrics["ceph_iscsi_client_login"] = logins
        self.metrics["ceph_iscsi_client_lun"] = lun_map
ceph-iscsi-3.9/ceph_iscsi_config/settings.py
__author__ = 'pcuzner@redhat.com'

try:
    from ConfigParser import ConfigParser
except ImportError:
    from configparser import ConfigParser
import hashlib
import json
import rados
import re

from ceph_iscsi_config.gateway_setting import (TGT_SETTINGS, SYS_SETTINGS,
                                               TCMU_SETTINGS,
                                               TCMU_DEV_STATUS_SETTINGS)

# this module when imported preserves the global values
# defined by the init method allowing other classes to
# access common configuration settings


def init():
    global config
    config = Settings()


MON_CONFIG_PREFIX = 'config://'


class Settings(object):

    _float_regex = re.compile(r"^[0-9]*\.{1}[0-9]$")
    _int_regex = re.compile(r"^[0-9]+$")

    @staticmethod
    def normalize_controls(raw_controls, settings_list):
        """
        Convert a controls dictionary from a json converted or a user input
        dictionary where the values are strings.
        """
        controls = {}
        for key, raw_value in raw_controls.items():
            setting = settings_list.get(key)
            if setting is None:
                raise ValueError("Supported controls: "
                                 "{}".format(",".join(settings_list.keys())))

            if raw_value in [None, '']:
                # Use the default/reset.
controls[key] = None continue controls[key] = setting.normalize(raw_value) return controls exclude_from_hash = ["cluster_client_name", "logger_level" ] def __init__(self, conffile='/etc/ceph/iscsi-gateway.cfg'): self.error = False self.error_msg = '' config = ConfigParser() dataset = config.read(conffile) self._add_attrs_from_defs(SYS_SETTINGS) self._add_attrs_from_defs(TGT_SETTINGS) self._add_attrs_from_defs(TCMU_SETTINGS) self._add_attrs_from_defs(TCMU_DEV_STATUS_SETTINGS) if len(dataset) != 0: # If we have a file use it to override the defaults if config.has_section("config"): self._override_attrs_from_conf(config.items("config"), SYS_SETTINGS) if config.has_section("device_status"): self._override_attrs_from_conf(config.items("device_status"), TCMU_DEV_STATUS_SETTINGS) if config.has_section("target"): all_settings = TGT_SETTINGS.copy() all_settings.update(TCMU_SETTINGS) self._override_attrs_from_conf(config.items("target"), all_settings) if self.api_secure: self.api_ssl_verify = False if self.api_secure else None @property def cephconf(self): return '{}/{}.conf'.format(self.ceph_config_dir, self.cluster_name) def __repr__(self): s = '' for k in self.__dict__: s += "{} = {}\n".format(k, self.__dict__[k]) return s def _add_attrs_from_defs(self, def_settings): """ receive a settings dict and apply those key/value to the current instance, settings that look like numbers are converted :param settings: array of setting objects :return: None """ for k, setting in def_settings.items(): self.__setattr__(k, setting.def_val) def pull_from_mon_config(self, v): if not self.cluster_client_name or not self.cephconf: return '' with rados.Rados(conffile=self.cephconf, name=self.cluster_client_name) as cluster: if v.startswith(MON_CONFIG_PREFIX): v = v[len(MON_CONFIG_PREFIX):] cmd = {"prefix": "config-key get", "key": "{}".format(v)} ret, v_data, outs = cluster.mon_command(json.dumps(cmd), b'') if ret: return '' return v_data.decode('utf-8') def _override_attrs(self, 
override_attrs, def_settings): for k, v in override_attrs.items(): if hasattr(self, k): setting = def_settings[k] try: self.__setattr__(k, setting.normalize(v)) except ValueError: # We do not even have the logger up yet, so just ignore # so the deamons can still start pass def _override_attrs_from_conf(self, config, def_settings): """ receive a settings dict and apply those key/value to the current instance, settings that look like numbers are converted :param settings: dict of settings :return: None """ mon_config_items = { k: v for k, v in config if isinstance(v, str) and v.startswith(MON_CONFIG_PREFIX)} config_items = {k: v for k, v in config if k not in mon_config_items} # first process non mon config items because we need the # cluster_client_name and ceph_conf in order to talk to the mon config # store self._override_attrs(config_items, def_settings) if mon_config_items: # Now let's attempt to pull these from the config store for k, v in mon_config_items.items(): mon_config_items[k] = self.pull_from_mon_config(v) self._override_attrs(mon_config_items, def_settings) def _hash_settings(self, def_settings, sync_settings): for setting in def_settings: if setting not in Settings.exclude_from_hash: sync_settings[setting] = getattr(self, setting) def hash(self): """ Generates a sha256 hash of the settings that are required to be in sync between gateways. 
        :return: checksum (str)
        """
        sync_settings = {}
        self._hash_settings(SYS_SETTINGS.keys(), sync_settings)
        self._hash_settings(TGT_SETTINGS.keys(), sync_settings)
        self._hash_settings(TCMU_SETTINGS.keys(), sync_settings)
        self._hash_settings(TCMU_DEV_STATUS_SETTINGS.keys(), sync_settings)

        h = hashlib.sha256()
        h.update(json.dumps(sync_settings).encode('utf-8'))
        return h.hexdigest()
ceph-iscsi-3.9/ceph_iscsi_config/target.py
import os

from rtslib_fb.target import Target, TPG, NetworkPortal, LUN
from rtslib_fb.fabric import ISCSIFabricModule
from rtslib_fb.utils import RTSLibError, normalize_wwn
from rtslib_fb.alua import ALUATargetPortGroup

import ceph_iscsi_config.settings as settings

from ceph_iscsi_config.gateway_setting import TGT_SETTINGS
from ceph_iscsi_config.utils import (normalize_ip_address,
                                     normalize_ip_literal,
                                     ip_addresses, this_host,
                                     format_lio_yes_no,
                                     CephiSCSIError, CephiSCSIInval)
from ceph_iscsi_config.common import Config
from ceph_iscsi_config.discovery import Discovery
from ceph_iscsi_config.alua import alua_create_group, alua_format_group_name
from ceph_iscsi_config.client import GWClient, CHAP
from ceph_iscsi_config.gateway_object import GWObject
from ceph_iscsi_config.backstore import lookup_storage_object_by_disk

__author__ = 'pcuzner@redhat.com'


class GWTarget(GWObject):
    """
    Class representing the state of the local LIO environment
    """

    # Settings for all transport/fabric objects. Using this allows apps like
    # gwcli to get/set all tpgs/clients under the target instead of per obj.
SETTINGS = TGT_SETTINGS def __init__(self, logger, iqn, gateway_ip_list, enable_portal=True): """ Instantiate the class :param iqn: iscsi iqn name for the gateway :param gateway_ip_list: list of IP addresses to be defined as portals to LIO :return: gateway object """ self.error = False self.error_msg = '' self.enable_portal = enable_portal # boolean to trigger portal IP creation self.logger = logger # logger object try: iqn, iqn_type = normalize_wwn(['iqn'], iqn) except RTSLibError as err: self.error = True self.error_msg = "Invalid iSCSI target name - {}".format(err) self.iqn = iqn # Ensure IPv6 addresses are in the normalized address (not literal) format gateway_ip_list = [normalize_ip_address(x) for x in gateway_ip_list] # If the ip list received has data in it, this is a target we need to # act on the IP's provided, otherwise just set to null if gateway_ip_list: # if the ip list provided doesn't match any ip of this host, abort # the assumption here is that we'll only have one matching ip in # the list! 
matching_ip = set(gateway_ip_list).intersection(ip_addresses()) if len(list(matching_ip)) == 0: self.error = True self.error_msg = ("gateway IP addresses provided do not match" " any ip on this host") return self.active_portal_ips = list(matching_ip) self.logger.debug("active portal will use " "{}".format(self.active_portal_ips)) self.gateway_ip_list = gateway_ip_list self.logger.debug("tpg's will be defined in this order" " - {}".format(self.gateway_ip_list)) else: # without gateway_ip_list passed in this is a 'init' or # 'clearconfig' request self.gateway_ip_list = [] self.active_portal_ips = [] self.changes_made = False self.config_updated = False # self.portal = None self.target = None self.tpg = None self.tpg_list = [] self.tpg_tag_by_gateway_name = {} try: super(GWTarget, self).__init__('targets', iqn, logger, GWTarget.SETTINGS) except CephiSCSIError as err: self.error = True self.error_msg = err def exists(self): """ Basic check to see whether this iqn already exists in kernel's configFS directory :return: boolean """ return GWTarget._exists(self.iqn) @staticmethod def _exists(target_iqn): return os.path.exists('/sys/kernel/config/target/iscsi/' '{}'.format(target_iqn)) def _get_portals(self, tpg): """ return a list of network portal IPs allocated to a specfic tpg :param tpg: tpg to check (object) :return: list of IP's this tpg has (list) """ return [normalize_ip_address(portal.ip_address) for portal in tpg.network_portals] def check_tpgs(self): # process the portal IP's in order to preserve the tpg sequence # across gateways requested_tpg_ips = list(self.gateway_ip_list) current_tpgs = list(self.tpg_list) for portal_ip in self.gateway_ip_list: for tpg in current_tpgs: if portal_ip in self._get_portals(tpg): # portal requested is defined, so remove from the list requested_tpg_ips.remove(portal_ip) current_tpgs.remove(tpg) break # if the requested_tpg_ips list has entries, we need to add new tpg's if requested_tpg_ips: self.logger.info("An additional {} tpg's 
are " "required".format(len(requested_tpg_ips))) for ip in requested_tpg_ips: self.create_tpg(ip) try: self.update_tpg_controls() except RTSLibError as err: self.error = True self.error_msg = "Failed to update TPG control parameters - {}".format(err) def update_tpg_controls(self): self.logger.debug("(GWGateway.update_tpg_controls) {}".format(self.controls)) for tpg in self.tpg_list: tpg.set_parameter('ImmediateData', format_lio_yes_no(self.immediate_data)) tpg.set_parameter('InitialR2T', format_lio_yes_no(self.initial_r2t)) tpg.set_parameter('MaxOutstandingR2T', str(self.max_outstanding_r2t)) tpg.set_parameter('FirstBurstLength', str(self.first_burst_length)) tpg.set_parameter('MaxBurstLength', str(self.max_burst_length)) tpg.set_parameter('MaxRecvDataSegmentLength', str(self.max_recv_data_segment_length)) tpg.set_parameter('MaxXmitDataSegmentLength', str(self.max_xmit_data_segment_length)) def enable_active_tpg(self, config): """ Add the relevant ip to the active/enabled tpg within the target and bind the tpg's luns to an ALUA group. 
:return: None """ index = 0 for tpg in self.tpg_list: if tpg._get_enable(): for lun in tpg.luns: try: self.bind_alua_group_to_lun(config, lun, tpg_ip_address=self.active_portal_ips[index]) except CephiSCSIInval as err: self.error = True self.error_msg = err return try: NetworkPortal(tpg, normalize_ip_literal(self.active_portal_ips[index])) except RTSLibError as e: self.error = True self.error_msg = e index += 1 def clear_config(self, config): """ Remove the target definition form LIO :return: None """ # check that there aren't any disks or clients in the configuration clients = [] disks = set() for tpg in self.tpg_list: tpg_clients = [node for node in tpg._list_node_acls()] clients += tpg_clients disks.update([lun.storage_object.name for lun in tpg.luns]) client_count = len(clients) disk_count = len(disks) if disk_count > 0 or client_count > 0: self.error = True self.error_msg = ("Clients({}) and disks({}) must be removed " "before the gateways".format(client_count, disk_count)) return self.logger.debug("Clients defined :{}".format(client_count)) self.logger.debug("Disks defined :{}".format(disk_count)) self.logger.info("Removing target configuration") try: self.delete(config) except RTSLibError as err: self.error = True self.error_msg = "Unable to delete target - {}".format(err) def update_acl(self, acl_enabled): for tpg in self.tpg_list: if acl_enabled: tpg.set_attribute('generate_node_acls', 0) tpg.set_attribute('demo_mode_write_protect', 1) else: tpg.set_attribute('generate_node_acls', 1) tpg.set_attribute('demo_mode_write_protect', 0) def _get_gateway_name(self, ip): if ip in self.active_portal_ips: return this_host() target_config = self.config.config['targets'][self.iqn] for portal_name, portal_config in target_config['portals'].items(): if ip in portal_config['portal_ip_addresses']: return portal_name return None def get_tpg_by_gateway_name(self, gateway_name): tpg_tag = self.tpg_tag_by_gateway_name.get(gateway_name) if tpg_tag: for tpg_item in 
self.tpg_list: if tpg_item.tag == tpg_tag: return tpg_item return None def update_auth(self, tpg, username=None, password=None, mutual_username=None, mutual_password=None): tpg.chap_userid = username tpg.chap_password = password tpg.chap_mutual_userid = mutual_username tpg.chap_mutual_password = mutual_password auth_enabled = (username and password) if auth_enabled: tpg.set_attribute('authentication', '1') else: GWClient.try_disable_auth(tpg) def create_tpg(self, ip): try: gateway_name = self._get_gateway_name(ip) tpg = self.get_tpg_by_gateway_name(gateway_name) if not tpg: tpg = TPG(self.target) # Use initiator name based ACL by default. tpg.set_attribute('authentication', '0') self.logger.debug("(Gateway.create_tpg) Added tpg for portal " "ip {}".format(ip)) if ip in self.active_portal_ips: target_config = self.config.config['targets'][self.iqn] auth_config = target_config['auth'] config_chap = CHAP(auth_config['username'], auth_config['password'], auth_config['password_encryption_enabled']) if config_chap.error: self.error = True self.error_msg = config_chap.error_msg return config_chap_mutual = CHAP(auth_config['mutual_username'], auth_config['mutual_password'], auth_config['mutual_password_encryption_enabled']) if config_chap_mutual.error: self.error = True self.error_msg = config_chap_mutual.error_msg return self.update_auth(tpg, config_chap.user, config_chap.password, config_chap_mutual.user, config_chap_mutual.password) if self.enable_portal: NetworkPortal(tpg, normalize_ip_literal(ip)) tpg.enable = True self.logger.debug("(Gateway.create_tpg) Added tpg for " "portal ip {} is enabled".format(ip)) else: NetworkPortal(tpg, normalize_ip_literal(ip)) # disable the tpg on this host tpg.enable = False # by disabling tpg_enabled_sendtargets, discovery to just one # node will return all portals (default is 1) tpg.set_attribute('tpg_enabled_sendtargets', '0') self.logger.debug("(Gateway.create_tpg) Added tpg for " "portal ip {} as disabled".format(ip)) 
self.tpg_list.append(tpg) self.tpg_tag_by_gateway_name[gateway_name] = tpg.tag except RTSLibError as err: self.error_msg = err self.error = True else: self.changes_made = True self.logger.info("(Gateway.create_tpg) created TPG '{}' " "for target iqn '{}'".format(tpg.tag, self.iqn)) def create_target(self): """ Add an iSCSI target to LIO with this objects iqn name, and bind to the IP that aligns with the given iscsi_network """ try: iscsi_fabric = ISCSIFabricModule() self.target = Target(iscsi_fabric, wwn=self.iqn) self.logger.debug("(Gateway.create_target) Added iscsi target - " "{}".format(self.iqn)) # tpg's are defined in the sequence provide by the gateway_ip_list, # so across multiple gateways the same tpg number will be # associated with the same IP - however, only the tpg with an IP on # the host will be in an enabled state. The other tpgs are # necessary for systems like ESX who issue a rtpg scsi inquiry # only to one of the gateways - so that gateway must provide # details for the whole configuration self.logger.debug("Creating tpgs") for ip in self.gateway_ip_list: self.create_tpg(ip) if self.error: self.logger.critical("Unable to create the TPG for {} " "- {}".format(ip, self.error_msg)) self.update_tpg_controls() except RTSLibError as err: self.error_msg = err self.logger.critical("Unable to create the Target definition " "- {}".format(self.error_msg)) self.error = True if self.error: if self.target: self.target.delete() else: self.changes_made = True self.logger.info("(Gateway.create_target) created an iscsi target " "with iqn of '{}'".format(self.iqn)) def load_config(self): """ Grab the target, tpg and portal objects from LIO and store in this Gateway object """ try: self.target = Target(ISCSIFabricModule(), self.iqn, "lookup") # clear list so we can rebuild with the current values below if self.tpg_list: del self.tpg_list[:] if self.tpg_tag_by_gateway_name: self.tpg_tag_by_gateway_name = {} # there could/should be multiple tpg's for the target for 
tpg in self.target.tpgs: self.tpg_list.append(tpg) network_portals = list(tpg.network_portals) if network_portals: ip_address = network_portals[0].ip_address gateway_name = self._get_gateway_name(ip_address) if gateway_name: self.tpg_tag_by_gateway_name[gateway_name] = tpg.tag else: self.logger.info("No available network portal for target " "with iqn of '{}'".format(self.iqn)) # self.portal = self.tpg.network_portals.next() except RTSLibError as err: self.error_msg = err self.error = True self.logger.info("(Gateway.load_config) successfully loaded existing " "target definition") def bind_alua_group_to_lun(self, config, lun, tpg_ip_address=None): """ bind lun to one of the alua groups. Query the config to see who 'owns' the primary path for this LUN. Then either bind the LUN to the ALUA 'AO' group if the host matches, or default to the 'ANO'/'Standby' alua group param config: Config object param lun: lun object on the tpg param tpg_ip: IP of Network Portal for the lun's tpg. """ stg_object = lun.storage_object disk_config = [disk for _, disk in config.config['disks'].items() if disk['backstore_object_name'] == stg_object.name][0] owning_gw = disk_config['owner'] tpg = lun.parent_tpg if not tpg_ip_address: # just need to check one portal for ip in tpg.network_portals: tpg_ip_address = normalize_ip_address(ip.ip_address) break if tpg_ip_address is None: # this is being run during boot so the NP is not setup yet. return target_config = config.config["targets"][self.iqn] is_owner = False gw_config = target_config['portals'].get(owning_gw, None) # If the user has exported a disk through multiple targets but # they do not have a common gw the owning gw may not exist here. # The LUN will just have all ANO paths then. 
        if gw_config:
            if tpg_ip_address in gw_config["portal_ip_addresses"]:
                is_owner = True

        try:
            alua_tpg = alua_create_group(settings.config.alua_failover_type,
                                         tpg, stg_object, is_owner)
        except CephiSCSIInval:
            raise
        except RTSLibError:
            self.logger.info("ALUA group id {} for stg obj {} lun {} "
                             "already made".format(tpg.tag, stg_object, lun))
            group_name = alua_format_group_name(tpg,
                                                settings.config.alua_failover_type,
                                                is_owner)
            # someone mapped a LU then unmapped it without deleting the
            # stg_object, or we are reloading the config.
            alua_tpg = ALUATargetPortGroup(stg_object, group_name)
            if alua_tpg.tg_pt_gp_id != tpg.tag:
                # ports and owner were rearranged. Not sure we support that.
                raise CephiSCSIInval("Existing ALUA group tag for group {} "
                                     "in invalid state.\n".format(group_name))

            # drop down in case we are restarting due to error and we
            # were not able to bind to a lun last time.

        self.logger.info("Setup group {} for {} on tpg {} (state {}, owner {}, "
                         "failover type {})".format(alua_tpg.name,
                                                    stg_object.name,
                                                    tpg.tag,
                                                    alua_tpg.alua_access_state,
                                                    is_owner,
                                                    alua_tpg.alua_access_type))

        self.logger.debug("Setting Luns tg_pt_gp to {}".format(alua_tpg.name))
        lun.alua_tg_pt_gp_name = alua_tpg.name
        self.logger.debug("Bound {} on tpg{} to {}".format(stg_object.name,
                                                           tpg.tag,
                                                           alua_tpg.name))

    def _map_lun(self, config, stg_object, target_disk_config):
        for tpg in self.tpg_list:
            self.logger.debug("processing tpg{}".format(tpg.tag))

            lun_id = target_disk_config['lun_id']
            try:
                mapped_lun = LUN(tpg, lun=lun_id, storage_object=stg_object)
                self.changes_made = True
            except RTSLibError as err:
                if "already exists in configFS" not in str(err):
                    self.logger.error("LUN mapping failed: {}".format(err))
                    self.error = True
                    self.error_msg = err
                    return
                # Already created. Ignore and loop to the next tpg.
continue try: self.bind_alua_group_to_lun(config, mapped_lun) except CephiSCSIInval as err: self.logger.error("Could not bind LUN to ALUA group: " "{}".format(err)) self.error = True self.error_msg = err return def map_lun(self, config, stg_object, target_disk_config): self.load_config() self._map_lun(config, stg_object, target_disk_config) def map_luns(self, config): """ LIO will have objects already defined by the lun module, so this method, brings those objects into the gateways TPG """ target_config = config.config["targets"][self.iqn] for disk_id, disk in target_config['disks'].items(): stg_object = lookup_storage_object_by_disk(config, disk_id) if stg_object is None: err_msg = "Could not map {} to LUN. Disk not found".format(disk_id) self.logger.error(err_msg) self.error = True self.error_msg = err_msg return self._map_lun(config, stg_object, disk) if self.error: return def delete(self, config): saved_err = None if self.target is None: self.load_config() # Ignore errors. Target was probably not setup. Try to clean up # disks. if self.target: try: self.target.delete() except RTSLibError as err: self.logger.error("lio target deletion failed {}".format(err)) saved_err = err # drop down and try to delete disks for disk in config.config['targets'][self.iqn]['disks'].keys(): so = lookup_storage_object_by_disk(config, disk) if so is None: self.logger.debug("lio disk lookup failed {}") # SO may not have got setup. Ignore. continue if so.status == 'activated': # Still mapped so ignore. continue try: so.delete() except RTSLibError as err: self.logger.error("lio disk deletion failed {}".format(err)) if saved_err is None: saved_err = err # Try the other disks. if saved_err: raise RTSLibError(saved_err) def manage(self, mode): """ Manage the definition of the gateway, given a mode of 'target', 'map', 'init' or 'clearconfig'. 
In 'target' mode the LIO TPG is defined, whereas in map mode, the required LUNs are added to the existing TPG :param mode: run mode - target, map, init or clearconfig (str) :return: None - but sets the objects error flags to be checked by the caller """ config = Config(self.logger) if config.error: self.error = True self.error_msg = config.error_msg return local_gw = this_host() if mode == 'target': if self.exists(): self.load_config() self.check_tpgs() else: self.create_target() if self.error: # return to caller, with error state set return target_config = config.config["targets"][self.iqn] self.update_acl(target_config['acl_enabled']) discovery_auth_config = config.config['discovery_auth'] Discovery.set_discovery_auth_lio(discovery_auth_config['username'], discovery_auth_config['password'], discovery_auth_config['password_encryption_enabled'], discovery_auth_config['mutual_username'], discovery_auth_config['mutual_password'], discovery_auth_config[ 'mutual_password_encryption_enabled']) gateway_group = config.config["gateways"].keys() if "ip_list" not in target_config: target_config['ip_list'] = self.gateway_ip_list config.update_item("targets", self.iqn, target_config) self.config_updated = True if self.controls != target_config.get('controls', {}): target_config['controls'] = self.controls.copy() config.update_item("targets", self.iqn, target_config) self.config_updated = True if local_gw not in gateway_group: gateway_metadata = {"active_luns": 0} config.add_item("gateways", local_gw) config.update_item("gateways", local_gw, gateway_metadata) self.config_updated = True if local_gw not in target_config['portals']: # Update existing gws with the new gw for remote_gw, remote_gw_config in target_config['portals'].items(): if remote_gw_config['gateway_ip_list'] == self.gateway_ip_list: continue inactive_portal_ip = list(self.gateway_ip_list) for portal_ip_address in remote_gw_config["portal_ip_addresses"]: inactive_portal_ip.remove(portal_ip_address) 
remote_gw_config['gateway_ip_list'] = self.gateway_ip_list remote_gw_config['tpgs'] = len(self.tpg_list) remote_gw_config['inactive_portal_ips'] = inactive_portal_ip target_config['portals'][remote_gw] = remote_gw_config # Add the new gw inactive_portal_ip = list(self.gateway_ip_list) for active_portal_ip in self.active_portal_ips: inactive_portal_ip.remove(active_portal_ip) portal_metadata = {"tpgs": len(self.tpg_list), "gateway_ip_list": self.gateway_ip_list, "portal_ip_addresses": self.active_portal_ips, "inactive_portal_ips": inactive_portal_ip} target_config['portals'][local_gw] = portal_metadata target_config['ip_list'] = self.gateway_ip_list config.update_item("targets", self.iqn, target_config) self.config_updated = True if self.config_updated: config.commit() elif mode == 'map': if self.exists(): self.load_config() self.map_luns(config) target_config = config.config["targets"][self.iqn] self.update_acl(target_config['acl_enabled']) else: self.error = True self.error_msg = ("Attempted to map to a gateway '{}' that " "hasn't been defined yet...out of order " "steps?".format(self.iqn)) elif mode == 'init': # init mode just creates the iscsi target definition and updates # the config object. 
It is used by the CLI only if self.exists(): self.logger.info("GWTarget init request skipped - target " "already exists") else: # create the target self.create_target() # if error happens, we should never store this target to config if self.error: return seed_target = { 'disks': {}, 'clients': {}, 'acl_enabled': True, 'auth': { 'username': '', 'password': '', 'password_encryption_enabled': False, 'mutual_username': '', 'mutual_password': '', 'mutual_password_encryption_enabled': False}, 'portals': {}, 'groups': {}, 'controls': {} } config.add_item("targets", self.iqn, seed_target) config.commit() discovery_auth_config = config.config['discovery_auth'] Discovery.set_discovery_auth_lio(discovery_auth_config['username'], discovery_auth_config['password'], discovery_auth_config[ 'password_encryption_enabled'], discovery_auth_config['mutual_username'], discovery_auth_config['mutual_password'], discovery_auth_config[ 'mutual_password_encryption_enabled']) elif mode == 'clearconfig': # Called by API from CLI clearconfig command if self.exists(): self.load_config() self.clear_config(config) if self.error: return target_config = config.config["targets"][self.iqn] if len(target_config['portals']) == 0: config.del_item('targets', self.iqn) else: gw_ips = target_config['portals'][local_gw]['portal_ip_addresses'] target_config['portals'].pop(local_gw) ip_list = target_config['ip_list'] for gw_ip in gw_ips: ip_list.remove(gw_ip) if len(ip_list) > 0 and len(target_config['portals'].keys()) > 0: config.update_item('targets', self.iqn, target_config) else: # no more portals in the list, so delete the target config.del_item('targets', self.iqn) remove_gateway = True for _, target in config.config["targets"].items(): if local_gw in target['portals']: remove_gateway = False break if remove_gateway: # gateway is no longer used, so delete it config.del_item('gateways', local_gw) config.commit() @staticmethod def get_num_sessions(target_iqn): if not GWTarget._exists(target_iqn): return 0 
        with open('/sys/kernel/config/target/iscsi/{}/fabric_statistics/iscsi_instance'
                  '/sessions'.format(target_iqn)) as sessions_file:
            return int(sessions_file.read().rstrip('\n'))

ceph-iscsi-3.9/ceph_iscsi_config/utils.py

import socket
import netifaces
import subprocess
import rados
import rbd
import re
import datetime
import os

import ceph_iscsi_config.settings as settings

__author__ = 'pcuzner@redhat.com'

size_suffixes = ['M', 'G', 'T']


class CephiSCSIError(Exception):
    '''
    Generic Ceph iSCSI config error.
    '''
    pass


class CephiSCSIInval(CephiSCSIError):
    '''
    Invalid setting/param.
    '''
    pass


def shellcommand(command_string):

    try:
        response = subprocess.check_output(command_string, shell=True)
    except subprocess.CalledProcessError:
        return None
    else:
        return response


def normalize_ip_address(ip_address):
    """
    IPv6 addresses should not include the square brackets utilized by
    IPv6 literals (RFC 3986)
    """
    address_regex = re.compile(r"^\[(.*)\]$")
    match = address_regex.match(ip_address)
    if match:
        return match.group(1)
    return ip_address


def normalize_ip_literal(ip_address):
    """
    rtslib expects IPv4 addresses as a dotted-quad string, and IPv6
    addresses surrounded by brackets.
""" ip_address = normalize_ip_address(ip_address) try: socket.inet_pton(socket.AF_INET6, ip_address) return "[" + ip_address + "]" except Exception: pass return ip_address def resolve_ip_addresses(addr): """ return list of IPv4/IPv6 address for the given address - could be an ip or name passed in :param addr: name or ip address (dotted quad) :return: list of IPv4/IPv6 addresses """ families = [socket.AF_INET, socket.AF_INET6] normalized_addr = normalize_ip_address(addr) for family in families: try: socket.inet_pton(family, normalized_addr) return [normalized_addr] except Exception: pass addrs = set() try: infos = socket.getaddrinfo(addr, 0) for info in infos: if info[0] in families: addrs.add(info[4][0]) except Exception: pass return list(addrs) def valid_ip(ip, port=22): """ Validate either a single IP or a list of IPs. An IP is valid if I can reach port 22 - since that's a common :param args: :return: Boolean """ if isinstance(ip, str): ip_list = list([ip]) elif isinstance(ip, list): ip_list = ip else: return False ip_ok = True families = [socket.AF_INET, socket.AF_INET6] for addr in ip_list: addr_ok = False for family in families: sock = socket.socket(family, socket.SOCK_STREAM) sock.settimeout(1) try: sock.connect((addr, port)) except socket.error: pass else: sock.close() addr_ok = True break if not addr_ok: ip_ok = False break return ip_ok def valid_size(size): valid = True unit = size[-1] if unit.upper() not in size_suffixes: valid = False else: try: int(size[:-1]) except ValueError: valid = False return valid def format_lio_yes_no(value): if value: return "Yes" return "No" def ip_addresses(): """ return a list of IPv4/IPv6 addresses on the system (excluding 127.0.0.1/::1) :return: IP address list """ ip_list = set() for iface in netifaces.interfaces(): if netifaces.AF_INET in netifaces.ifaddresses(iface): for link in netifaces.ifaddresses(iface)[netifaces.AF_INET]: ip_list.add(link['addr']) if netifaces.AF_INET6 in netifaces.ifaddresses(iface): for link in 
netifaces.ifaddresses(iface)[netifaces.AF_INET6]: if '%' in link['addr']: continue ip_list.add(link['addr']) ip_list.discard('::1') ip_list.discard('127.0.0.1') return list(ip_list) def human_size(num): for unit, precision in [('b', 0), ('K', 0), ('M', 0), ('G', 0), ('T', 1), ('P', 1), ('E', 2), ('Z', 2)]: if num % 1024 != 0: return "{0:.{1}f}{2}".format(num, precision, unit) num /= 1024.0 return "{0:.2f}{1}".format(num, "Y") def convert_2_bytes(disk_size): try: # If it's already an integer or a string with no suffix then assume # it's already in bytes. return int(disk_size) except ValueError: pass power = [2, 3, 4] unit = disk_size[-1].upper() offset = size_suffixes.index(unit) value = int(disk_size[:-1]) # already validated, so no need for try/except clause _bytes = value * (1024 ** power[offset]) return _bytes def get_pool_id(conf=None, pool_name=None): """ Query Rados to get the pool id of a given pool name :param conf: ceph configuration file :param pool_name: pool name (str) :return: pool id (int) """ if conf is None: conf = settings.config.cephconf if pool_name is None: pool_name = settings.config.pool with rados.Rados(conffile=conf, name=settings.config.cluster_client_name) as cluster: pool_id = cluster.pool_lookup(pool_name) return pool_id def get_pool_name(conf=None, pool_id=0): """ Query Rados to get the pool name of a given pool_id :param conf: ceph configuration file :param pool_name: pool id number (int) :return: pool name (str) """ if conf is None: conf = settings.config.cephconf with rados.Rados(conffile=conf, name=settings.config.cluster_client_name) as cluster: pool_name = cluster.pool_reverse_lookup(pool_id) return pool_name def get_rbd_size(pool, image, conf=None): """ return the size of a given rbd from the local ceph cluster :param pool: (str) pool name :param image: (str) rbd image name :return: (int) size in bytes of the rbd """ if conf is None: conf = settings.config.cephconf with rados.Rados(conffile=conf, 
name=settings.config.cluster_client_name) as cluster: with cluster.open_ioctx(pool) as ioctx: with rbd.Image(ioctx, image) as rbd_image: size = rbd_image.size() return size def get_pools(conf=None): """ return a list of pools in the local ceph cluster :param conf: (str) or None :return: (list) of pool names """ if conf is None: conf = settings.config.cephconf with rados.Rados(conffile=conf, name=settings.config.cluster_client_name) as cluster: pool_list = cluster.list_pools() return pool_list def get_time(): utc = datetime.datetime.utcnow() return utc.strftime('%Y/%m/%d %H:%M:%S') def this_host(): """ return the local machine's fqdn """ return socket.getfqdn() def encryption_available(): """ Determine whether encryption is available by looking for the relevant keys :return: (bool) True if all keys are present, else False """ encryption_keys = list([settings.config.priv_key, settings.config.pub_key]) config_dir = settings.config.ceph_config_dir keys = [os.path.join(config_dir, key_name) for key_name in encryption_keys] return all([os.path.exists(key) for key in keys]) def read_os_release(): os_release_file = '/etc/os-release' d = {} if not os.path.exists(os_release_file): return d with open(os_release_file) as f: for line in f: rs = line.rstrip() if rs: k, v = rs.split("=") d[k] = v.strip('"') return d def gen_control_string(controls): """ Generate a kernel control string from a given dictionary of control arguments. 
    :return: control string (str)
    """
    control = ''
    for key, value in controls.items():
        if value is not None:
            control += "{}={},".format(key, value)
    return None if control == '' else control[:-1]


class ListComparison(object):

    def __init__(self, current_list, new_list):
        """
        compare two lists to identify changes
        :param current_list : list of current values (existing state)
        :param new_list: list of new values (desired state)
        """
        self.current = current_list
        self.new = new_list
        self.changed = False

    @property
    def added(self):
        """
        provide a list of added items
        :return: (list) in the sequence provided
        """
        additions = set(self.new) - set(self.current)
        if len(additions) > 0:
            self.changed = True
        # simply returning the result of the set comparison does not preserve
        # the list item sequence. By iterating over the new list we can
        # return the expected sequence
        return [item for item in self.new if item in additions]

    @property
    def removed(self):
        """
        calculate the removed items between two lists using set comparisons
        :return: (list) removed items
        """
        removals = set(self.current) - set(self.new)
        if len(removals) > 0:
            self.changed = True
        return list(removals)

ceph-iscsi-3.9/gwcli.8

.\" Manpage for gwcli
.\" Contact pcuzner@redhat.com to correct errors or typos.
.TH gwcli 8 "Ceph iSCSI Gateway Tools" "23 Jul 2017" "Ceph iSCSI Gateway Tools"
.SH NAME
\fBgwcli\fR \- manage iscsi gateway configuration from the command line
.SH DESCRIPTION
\fBgwcli\fR is a configuration shell interface used for viewing, editing and
saving the configuration of a ceph/iSCSI gateway environment. It enables the
administrator to define rbd devices, map them across gateway nodes and export
them to various clients over iSCSI.

In addition to managing the iSCSI related elements of the configuration, the
shell provides an overview of the ceph cluster, describing the available
pools and the capacity they provide.
Since rbd images are thin provisioned, the capacity information also
indicates the capacity over-commit of the pools, enabling the admin to make
more informed choices when allocating new rbd images.
.PP
iSCSI services are implemented by the kernel's LIO target subsystem layer,
with iSCSI settings enforced by the rbd-target-gw daemon. The targetcli
command may still be used to view lower level detail of the LIO environment,
but all changes \fBmust\fR be made using gwcli.
.PP
The gwcli shell is similar to the targetcli interface, and is also based on
'configshell'. The layout of the UI is a tree format, and is navigated in
much the same way as a filesystem.
.SH USAGE
\fBgwcli\fR [-d | --debug]

The -d option provides additional verbosity within the shell

\fBgwcli [cmd]\fR

Invoke gwcli as root to enter the interactive shell, or supply a command to
execute without entering the shell. Within the shell, use \fBls\fR to list
nodes beneath the current path. Moving around the tree is done using the
\fBcd\fR command, or by simply entering the 'path' of the new location/node
directly. Use \fBhelp \fR for specific help information. The shell provides
tab completion for commands and command arguments.
.PP
Configuration state is persisted within a rados object stored in the 'rbd'
pool. gwcli orchestrates changes across all iscsi gateways via the
rbd-target-api service running on each gateway. Once the change to the local
LIO subsystem is complete, the change is committed to the rados
configuration object. Although 'targetcli' is available, it can only really
provide a view of the local LIO configuration.
.SH QUICKSTART
gwcli interacts with an API service provided by each iSCSI gateway's
rbd-target-api daemon. The API service is installed with the cli, and can be
configured by updating the api_* related settings in
'/etc/ceph/iscsi-gateway.cfg'.
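Put together, the site specific api_* settings and the optional trusted IP
list described below might look like the following iscsi-gateway.cfg
fragment. All values here are illustrative placeholders (the IPs are taken
from the two-gateway example later in this page), not shipped defaults:

```ini
# /etc/ceph/iscsi-gateway.cfg -- illustrative values only
[config]
api_user = admin
api_password = admin
api_port = 5000
api_secure = true
# extra hosts allowed to reach the rbd-target-api service
trusted_ip_list = 192.168.122.69,192.168.122.104
```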
.PP Typically, the following options are regarded as site specific; .PP .PD 0.4 .RS 3 \fBapi_user = \fR .PP \fBapi_password = \fR .PP \fBapi_port = \fR .PP \fBapi_secure = \fR .RE .PD 1 .PP \fBNB.\fR An example iscsi-gateway.cfg file is provided under /usr/share/doc/ceph-iscsi-config* .PP Access to the API is normally restricted to the IP's of the gateway nodes, but you may also define other IP addresses that should be granted access to the API by adding the following entry to the configuration file; .PP .RS 3 \fBtrusted_ip_list = \fR .RE .PP By default the API service is not running with TLS, so for a more secure environment ensure iscsi-gateway.cfg has "api_secure = true" defined. When using secure mode you will need to create the appropriate certificate and private key files, and place them in /etc/ceph as 'iscsi-gateway.crt' and 'iscsi-gateway.key' on \fBeach\fR gateway node. .PP Once these files are inplace across the nodes, the rbd-target-api service can be started. Check that the API service is enabled and in the correct mode by looking at the output of 'systemctl status rbd-target-api'. You should see a message similar to .PP .RS 3 \fB* Running on https://0.0.0.0:5000/\fR. .RE .PP The example gwcli output below shows a small two-gateway configuration, supporting 2 iSCSI clients .PP .PD 0.4 $ sudo gwcli /> ls .PP .nf /> ls o- / ................................................................... [...] o- clusters .................................................. [Clusters: 1] | o- ceph ...................................................... [HEALTH_OK] | o- pools .................................................... [Pools: 3] | | o- ec ......................... [(2+1), Commit: 0b/40G (0%), Used: 0b] | | o- iscsi ...................... [(x3), Commit: 0b/20G (0%), Used: 18b] | | o- rbd ........................ [(x3), Commit: 8G/20G (40%), Used: 5K] | o- topology .......................................... 
[OSDs: 3,MONs: 3] o- disks .................................................... [8G, Disks: 5] | o- rbd ........................................................ [rbd (8G)] | o- disk_1 ............................................ [rbd/disk_1 (1G)] | o- disk_2 ............................................ [rbd/disk_2 (2G)] | o- disk_3 ............................................ [rbd/disk_3 (2G)] | o- disk_4 ............................................ [rbd/disk_4 (1G)] | o- disk_5 ............................................ [rbd/disk_5 (2G)] o- iscsi-targets .............................................. [Targets: 1] o- iqn.2003-01.com.redhat.iscsi-gw:ceph-gw ................. [Gateways: 2] o- disks .................................................... [Disks: 5] | o- rbd/disk_1 ....................................... [Owner: rh7-gw1] | o- rbd/disk_5 ....................................... [Owner: rh7-gw2] o- gateways ...................................... [Up: 2/2, Portals: 2] | o- rh7-gw1 ..................................... [192.168.122.69 (UP)] | o- rh7-gw2 .................................... [192.168.122.104 (UP)] o- host-groups ............................................ [Groups : 1] | o- group1 ....................................... [Hosts: 1, Disks: 1] | o- iqn.1994-05.com.redhat:rh7-client ........................ [host] | o- rbd/disk_5 ............................................... [disk] o- hosts .................................................... [Hosts: 2] o- iqn.1994-05.com.redhat:myhost1 ......... [Auth: None, Disks: 1(1G)] | o- lun 0 .......................... [rbd/disk_1(1G), \fBOwner: rh7-gw2]\fR] o- iqn.1994-05.com.redhat:rh7-client [LOGGED-IN, Auth: CHAP, Disks: 1(2G)] o- lun 0 .......................... [rbd/disk_5(2G), \fBOwner: rh7-gw2]\fR] .fi .PD 1 .PP Disks exported through the gateways use ALUA attributes to provide ActiveOptimised and ActiveNonOptimised access to the rbd images. 
Each disk is assigned a primary owner at creation/import time - shown above
with the \fBowner\fR attribute.
.SH DISKS
In order to manage rbd images (disks) within the environment there are
several commands that enable you to create, resize and delete rbd's from the
ceph cluster. When an rbd image is created, it is registered with all
gateways. Part of this registration process defines the gateway that will
provide the active I/O path to the LUN (disk) for any/all clients. This
means that the iscsi-target definition \fIand\fR the gateway hosts must be
defined prior to any disks being created (added to the gateways).

It's also important to note that for an rbd image to be compatible with the
iSCSI environment, it must have specific image features enabled
(exclusive_lock, layering). The easiest way to create new disks is using the
\fB/disks create\fR command.
.PP
.TP
\fB/disks/ create pool= image= size=G\fR
Using the create command ensures the image features are applied correctly.
You can also choose to create your rbd images by some other means, in which
case the 'create' command will effectively 'import' the rbd into the
configuration, leaving any data already on the device intact.
.PP
.TP
.PD 0
\fB/disks// resize g\fR
.TP
\fB/disks resize \fR
Use the resize command to increase the capacity of a specific rbd image.
.PD 1
.PP
.TP
\fB/disks/ delete \fR
The delete command allows you to remove the rbd from the LIO and ceph
cluster. Prior to the delete being actioned, the current configuration is
checked to ensure that the requested rbd image is not masked to any iSCSI
client. Once this check is successful, the rbd image will be purged from the
LIO environment on each gateway and deleted from the ceph cluster.
.SH ISCSI-TARGETS
The iscsi-target provides the end-point name that clients will know the
iSCSI 'cluster' as. The target IQN will be created across all gateways
within the configuration.
Once the target is defined, the iscsi-target sub-tree is populated with entries for \fBgateways\fR and \fBhosts\fR. .PP .TP \fB/iscsi-targets/ create \fR The IQN provided will be validated and defined to the configuration object. Adding gateway nodes will then pick up the configuration's IQN and apply it to their local LIO instance. .TP \fB/iscsi-targets/ clearconfig confirm=true\fR The clearconfig command provides the ability to return each of the gateways to their undefined state. However, since this is a disruptive command you must remove the clients and disks first, before issuing a clearconfig. .SH GATEWAYS Gateways provide the access points for rbd images over iSCSI, so there should be a minimum of 2 defined to provide fault tolerance. .PP .TP \fB/iscsi-targets// create Gateways are defined by a node name (preferably a shortname, but it must resolve), and an IPv4/IPv6 address that the iSCSI 'service' will be bound to (i.e. the iSCSI portal IP address). When adding a gateway, the candidate machine will be checked to ensure the relevant files and daemons are in place. .SH HOST-GROUPS Host groups provide a more convenient way of managing multiple servers that must share the same disk masking configuration. For example in a RHV/oVirt or Vmware environment, each host needs access to the same LUNs. Host groups allow you to create a logical group which contains the hosts and the disks that each host in the group should have access to. Please note that sharing devices across hosts needs a cluster aware filesystem or equivalent locking to avoid data corruption. .PP .TP \fB/iscsi-targets//host-groups/ create | delete Create or delete a given group name. Deleting a group definition does \fBnot\fR remove the hosts or LUN masking, it simply removes the logical grouping used for management purposes. .PP .TP \fB/iscsi-targets//host-groups// host add | remove The host subcommand within a group definition allows you to add and remove hosts from the group. 
When adding a host, it must not have existing LUN masking in place - this
restriction ensures lun id consistency across all hosts within the host
group. Removing a host from a group does \fBnot\fR automatically remove its
LUN masking.
.TP
\fB/iscsi-targets//host-groups// disk add | remove
The disk subcommand enables you to add and remove disks to/from all members
of the host group.
.PP
.RS
\fBNB.\fR Once a client is a member of a host group, its disks \fBcan
only\fR be managed at the group level.
.RE
.SH HOSTS
The 'hosts' section defines the iSCSI client definitions (NodeACLs) that
provide access to the rbd images. The CLI provides the ability to create and
delete clients, define/update chap authentication and add and remove rbd
images for the client.
.PP
.TP
\fB/iscsi-targets//hosts/ create
The create command will define the client IQN to all gateways within the
configuration. At creation time, the client IQN is added to an ACL that
allows normal iSCSI session logins for all clients with the IQN. To enable
CHAP authentication use the \fBauth\fR command described below.
gwcli does not support mixing CHAP clients with IQN ACL clients.
.TP
.nf
\fB/iscsi-targets//hosts// disk add | remove \fR
.fi
rbd images defined to the iscsi gateway become LUNs within the LIO
environment. These LUNs can be masked to, or masked from, specific clients
using the \fBdisk\fR command. When a disk is masked to a client, the disk is
automatically assigned a LUN id. The disk->LUN id relationship is persisted
in the rados configuration object to ensure that the disk always appears on
the client's SCSI interface at the same point. It is the Administrator's
responsibility to ensure that any disk shared between clients uses a
cluster-aware filesystem to prevent data corruption.
.SH EXAMPLES
.PP
.SS CREATING ISCSI GATEWAYS
.TP
\fB>/iscsi-targets create iqn.2003-01.com.redhat.iscsi-gw:ceph-igw\fR
Create an iscsi target name of 'iqn.2003-01.com.redhat.iscsi-gw:ceph-igw',
that will be used by each gateway node added to the configuration
.PP
\fB>cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/gateways\fR
.PD 0
.PP
\fB>create ceph-gw-1 10.172.19.21\fR
.TP
\fB>create ceph-gw-2 10.172.19.22\fR
Create 2 gateways, using servers ceph-gw-1 and ceph-gw-2. The iSCSI portals
will be bound to the IP addresses provided. During the registration of a
gateway a check is performed to ensure the candidate machine has the
required IP address available.
.PD 1
.SS ADDING AN RBD
.TP
\fB>/disks/ create pool=rbd image=disk_1 size=50g\fR
Create/import a 50g rbd image and register it with each gateway node
.SS CREATING A CLIENT
.PD 0
\fB>cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/hosts\fR
.PP
.TP
\fB>create iqn.1994-05.com.redhat:rh7-client\fR
Create an iscsi client called 'iqn.1994-05.com.redhat:rh7-client'. The
initial client definition will not have CHAP authentication enabled,
resulting in red highlighting against this client's summary information in
the output of the \fBls\fR command.
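The "size=50g" value in the ADDING AN RBD example above is interpreted with
binary (1024-based) units. A minimal sketch of that suffix-to-bytes
conversion, mirroring the convert_2_bytes helper in
ceph_iscsi_config/utils.py shown earlier (a bare integer string is treated
as a byte count, and the suffix is case-insensitive):

```python
SIZE_SUFFIXES = ['M', 'G', 'T']


def convert_2_bytes(disk_size):
    # A plain integer (or numeric string) is already a byte count.
    try:
        return int(disk_size)
    except ValueError:
        pass

    # Map the suffix to a power of 1024: M=1024^2, G=1024^3, T=1024^4
    power = [2, 3, 4]
    unit = disk_size[-1].upper()
    offset = SIZE_SUFFIXES.index(unit)
    value = int(disk_size[:-1])
    return value * (1024 ** power[offset])


print(convert_2_bytes('50g'))  # 53687091200
```

So "size=50g" requests 50 GiB (53687091200 bytes), not 50 GB; the suffix is
validated separately by the valid_size helper before conversion.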
.PD 1
.PP
.SS ADDING DISKS TO A CLIENT
.PP
.PD 0
.TP
\fB>/iscsi-target..eph-igw/hosts> cd iqn.1994-05.com.redhat:rh7-client\fR
.PP
.TP
\fB>disk add rbd/disk_1\fR
The first command navigates to the client's entry in the UI, at which point
the \fBdisk\fR or \fBauth\fR sub-commands may be used. In this example the
disk subcommand is used to mask \fIdisk_1\fR in the \fIrbd\fR pool to the
iSCSI client. The LUN id associated with this device is automatically
assigned and maintained by the system.
.PD 1
.SH OTHER COMMANDS
.TP
\fBexport mode=[ copy ]\fR
With the export command a copy of the current configuration can be exported
as a backup (mode=copy). The resulting output is written to stdout.
.TP
\fB/ceph refresh\fR
Refreshes the ceph information present in the UI
.TP
\fBinfo\fR
When run at the root of the shell (/), info will show you configuration
settings such as http mode, API port, local ceph cluster name and secondary
API trusted IP addresses.
.TP
\fBgoto [ gateways | hosts | host-groups | 'bookmark']\fR
To ease navigation within the UI, gwcli automatically creates bookmarks for
hosts and gateways. This allows you to switch to those sub-trees in the UI
by simply using '\fBgoto hosts\fR'. The 'goto' command will also work for
any other bookmarks you create.
.PP
.SH FILES
.TP
\fB~/gwcli.log\fR
Log file maintained by gwcli, recording all changes made via the shell
interface in a timestamped format.
.TP
\fB~/.gwcli/history.txt\fR
Log containing a record of all commands executed within the gwcli shell on
this system.
.SH AUTHOR Written by Paul Cuzner (pcuzner@redhat.com) .SH REPORTING BUGS Report bugs via ceph-iscsi-3.9/gwcli.py000077500000000000000000000133201470665154300151100ustar00rootroot00000000000000#!/usr/bin/python # prep for python 3 from __future__ import print_function # requires python2-requests/python3-requests import logging import os import sys import argparse import signal from configshell_fb import ConfigShell, ExecutionError from gwcli.gateway import ISCSIRoot import ceph_iscsi_config.settings as settings __author__ = 'Paul Cuzner' __version__ = '2.7' class GatewayCLI(ConfigShell): default_prefs = {'color_path': 'magenta', 'color_command': 'cyan', 'color_parameter': 'magenta', 'color_keyword': 'cyan', 'completions_in_columns': True, 'logfile': None, 'loglevel_console': 'info', 'loglevel_file': 'debug9', 'color_mode': True, 'prompt_length': 30, 'tree_max_depth': 0, 'tree_status_mode': True, 'tree_round_nodes': True, 'tree_show_root': True, } def exception_handler(exception_type, exception, traceback, debug_hook=sys.excepthook): if options.debug: debug_hook(exception_type, exception, traceback) else: color_red = '\x1b[31;1m' color_off = '\x1b[0m' print("{}{}: {}{}".format(color_red, exception_type.__name__, exception, color_off)) def get_options(): # Set up the runtime overrides, any of these could be provided # by the cfg file(s) parser = argparse.ArgumentParser(prog='gwcli', description='Manage iSCSI gateways') parser.add_argument('-c', '--config-object', type=str, help='pool and object name holding the iSCSI config' ' object (pool/object_name)') parser.add_argument('-d', '--debug', action='store_true', default=False, help='run with additional debug') parser.add_argument('-t', '--threads', type=int, default=8, help='threads used for rbd scanning (default is 8)') parser.add_argument('-v', '--version', action='version', version='%(prog)s - {}'.format(__version__)) parser.add_argument('cli_command', type=str, nargs=argparse.REMAINDER) # create the opts object 
opts = parser.parse_args() # establish defaults, just in case they're missing from the config # file(s) AND run time call if not opts.config_object: opts.config_object = 'rbd/gateway.conf' opts.cli_command = ' '.join(opts.cli_command) return opts def kbd_handler(*args): pass def main(): is_root = True if os.getuid() == 0 else False if not is_root: print("CLI only supports root level access") sys.exit(-1) shell = GatewayCLI('~/.gwcli') root_node = ISCSIRoot(shell, scan_threads=options.threads) root_node.interactive = False if options.cli_command else True settings.config.interactive = False if options.cli_command else True # Load the config to populate the object model root_node.refresh() if root_node.error: print("Unable to contact the local API endpoint " "({})".format(root_node.local_api)) sys.exit(-1) # Account for invocation which includes a command to run i.e. batch mode if options.cli_command: try: shell.run_cmdline(options.cli_command) except Exception as e: print(str(e), file=sys.stderr) sys.exit(-1) sys.exit(0) # Main loop - run the interactive shell, until the user exits while not shell._exit: try: shell.run_interactive() except ExecutionError as msg: shell.log.error(str(msg)) def log_in_color(fn): def new(*args): colour_off = '\x1b[0m' levelno = args[0].levelno if levelno >= logging.CRITICAL: color = '\x1b[31;1m' elif levelno >= logging.ERROR: color = '\x1b[31;1m' elif levelno >= logging.WARNING: color = '\x1b[33;1m' elif levelno >= logging.INFO: color = '\x1b[32;1m' elif levelno >= logging.DEBUG: color = '\x1b[34;1m' else: color = '\x1b[0m' args[0].msg = "{}{}{}".format(color, args[0].msg, colour_off) return fn(*args) return new if __name__ == "__main__": options = get_options() # Setup logging log_path = os.path.join(os.path.expanduser("~"), "gwcli.log") logger = logging.getLogger('gwcli') logger.setLevel(logging.DEBUG) file_handler = logging.FileHandler(log_path, mode='a') file_format = logging.Formatter('%(asctime)s %(levelname)-8s ' 
'[%(filename)s:%(lineno)s:%(funcName)s()]' ' %(message)s') file_handler.setFormatter(file_format) file_handler.setLevel(logging.DEBUG) logger.addHandler(file_handler) if not options.cli_command: stream_handler = logging.StreamHandler(stream=sys.stdout) if options.debug: stream_handler.setLevel(logging.DEBUG) else: stream_handler.setLevel(logging.INFO) stream_handler.emit = log_in_color(stream_handler.emit) logger.addHandler(stream_handler) # Override the default exception handler to only show back traces # in debug mode sys.excepthook = exception_handler # Intercept ctrl-c and ctrl-z events to stop the user exiting signal.signal(signal.SIGTSTP, kbd_handler) signal.signal(signal.SIGINT, kbd_handler) settings.init() main() ceph-iscsi-3.9/gwcli/000077500000000000000000000000001470665154300145345ustar00rootroot00000000000000ceph-iscsi-3.9/gwcli/__init__.py000066400000000000000000000000241470665154300166410ustar00rootroot00000000000000__author__ = 'paul' ceph-iscsi-3.9/gwcli/ceph.py000066400000000000000000000315431470665154300160330ustar00rootroot00000000000000from .node import UIGroup, UINode import json import rados from gwcli.utils import console_message, os_cmd import ceph_iscsi_config.settings as settings from ceph_iscsi_config.utils import human_size class CephGroup(UIGroup): """ define an object to represent the ceph cluster. The methods use librados which means the host will need a valid ceph.conf and a valid keyring. """ help_intro = ''' The ceph component of the shell is intended to provide you with an overview of the ceph cluster. Information is initially gathered when you start this shell, but can be refreshed later using the 'refresh' subcommand. Data is shown that covers the health of the ceph cluster(s), together with an overview of the rados pools and overall topology. The pools section is useful when performing allocation tasks since it provides the current state of available space within the pool(s), together with the current over-commit percentage. 
''' def __init__(self, parent): UIGroup.__init__(self, 'cluster', parent) self.logger.debug("Adding ceph cluster '{}' to the UI" .format(settings.config.cluster_name)) self.cluster = CephCluster(self, settings.config.cluster_name, settings.config.cephconf, settings.config.cluster_client_name) @staticmethod def valid_conf(config_file): """ Dummy check function :param config_file: (str) config file path :return: TRUE """ return True def ui_command_refresh(self): """ refresh command updates the health and capacity state of the ceph meta data shown within the interface """ self.refresh() self.logger.info("ok") def refresh(self): for cluster in self.children: cluster.update_state() cluster.pools.refresh() def summary(self): """ return the number of clusters :return: """ return "Clusters: {}".format(len(self.children)), None class CephCluster(UIGroup): def __init__(self, parent, cluster_name, conf_file, client_name): self.conf = conf_file self.client_name = client_name self.cluster_name = cluster_name UIGroup.__init__(self, cluster_name, parent) self.ceph_status = {} self.health_status = '' self.health_list = [] self.version = self.cluster_version self.pools = CephPools(self) self.update_state() self.topology = CephTopology(self) def ui_command_refresh(self): """ refresh command updates the health and capacity state of the ceph meta data shown within the interface """ self.refresh() self.logger.info("ok") def ui_command_info(self): color = {"HEALTH_OK": "green", "HEALTH_WARN": "yellow", "HEALTH_ERR": "red"} status = self.ceph_status # backward compatibility (< octopus) if 'osdmap' in status['osdmap']: osdmap = status['osdmap']['osdmap'] else: osdmap = status['osdmap'] output = "Cluster name: {}\n".format(self.cluster_name) output += "Ceph version: {}\n".format(self.version) output += "Health : {}\n".format(self.health_status) if self.health_status != 'HEALTH_OK': output += " - {}\n".format(','.join(self.health_list)) # backward compatibility (< octopus) if 'mons' in 
status['monmap']: num_mons = len(status['monmap']['mons']) else: num_mons = status['monmap']['num_mons'] num_quorum = len(status['quorum_names']) num_out_of_quorum = num_mons - num_quorum q_str = "quorum {}, out of quorum: {}".format(num_quorum, num_out_of_quorum) output += "\nMONs : {:>4} ({})\n".format(self.topology.num_mons, q_str) output += "OSDs : {:>4} ({} up, {} in)\n".format(self.topology.num_osds, osdmap['num_up_osds'], osdmap['num_in_osds']) output += "Pools: {:>4}\n".format(self.num_pools) raw = status['pgmap']['bytes_total'] output += "Raw capacity: {}\n".format(human_size(raw)) output += "\nConfig : {}\n".format(self.conf) output += "Client Name: {}\n".format(self.client_name) console_message(output, color=color[self.health_status]) @property def cluster_version(self): vers_out = os_cmd("ceph -c {} -n {} version".format(self.conf, self.client_name)) # RHEL packages include additional info, that we don't need version_str = vers_out.split()[2].decode('utf-8') return '.'.join(version_str.split('.')[:3]) def update_state(self): self.logger.debug("Querying ceph for state information") with rados.Rados(conffile=self.conf, name=self.client_name) as cluster: cluster.wait_for_latest_osdmap() cmd = {'prefix': 'status', 'format': 'json'} ret, buf_s, out = cluster.mon_command(json.dumps(cmd), b'') status = json.loads(buf_s) self.ceph_status = status # assume a luminous or above ceph cluster self.health_status = status['health'].get('status', status['health'].get('overall_status')) self.health_list = [check.get('summary').get('message') for key, check in status['health'].get('checks', {}).items() if check.get('summary', {}).get('message')] @property def num_pools(self): return len([pool for pool in self.pools.children]) def refresh(self): self.update_state() self.pools.refresh() def summary(self): color_lookup = {"HEALTH_OK": True, "HEALTH_WARN": None, "HEALTH_ERR": False} return self.health_status, color_lookup[self.health_status] @property def healthy_mon(self): 
""" Return the first mon in quorum state :return: (str) name of the 1st monitor in a healthy state """ quorum_mons = self.ceph_status.get('quorum_names', []) if quorum_mons: return quorum_mons[0] else: return "UNKNOWN" class CephPools(UIGroup): help_intro = ''' Each pool within the ceph cluster is shown with the following metrics; - Commit .... this is a total of the logical space that has been requested for all rbd images defined to the gateways - Avail ..... 'avail' shows the actual space that is available for allocation after taking into account the protection scheme of the pool (e.g. replication level) - Used ...... shows the physical space that has been consumed within the pool - Commit% ... is a ratio of the logical space allocated to clients over the amount of space that can be allocated. So when this value is <=100% the physical backing store capacity is available. However, if this ratio is > 100%, you are overcommiting capacity. Being able to overcommit is a benefit of Ceph's thin provisioning - BUT you must keep an eye on the capacity to protect against out of space scenarios. ''' def __init__(self, parent): UIGroup.__init__(self, 'pools', parent) self.pool_lookup = {} # pool_name -> pool object hash self.populate() def populate(self): # existing_pools = [pool.name for pool in self.children] # get a breakdown of the osd's to retrieve the pool types # i.e. 
is it replica 3 or EC 4+2 etc # This is overkill and hacky, but the bindings don't provide # pool type information...so it's SLEDGEHAMMER meets NUT time self.logger.debug("Fetching ceph osd information") with rados.Rados(conffile=self.parent.conf, name=self.parent.client_name) as cluster: cluster.wait_for_latest_osdmap() cmd = {'prefix': 'osd dump', 'format': 'json'} rc, buf_s, out = cluster.mon_command(json.dumps(cmd), b'') pools = {} osd_dump = json.loads(buf_s) for pool in osd_dump['pools']: name = pool['pool_name'] pools[name] = pool for pool_name in pools: # if pool_name not in existing_pools: new_pool = RadosPool(self, pool_name, pools[pool_name], osd_dump['erasure_code_profiles']) self.pool_lookup[pool_name] = new_pool def refresh(self): self.logger.debug("Gathering pool stats for cluster " "'{}'".format(self.parent.name)) # unfortunately the rados python api does not expose all the needed # metrics through an ioctx call - specifically pool size is missing, # so stats need to be gathered at this level through the mon_command # interface, and pushed down to the child objects. Having a refresh # method within the child object would have been preferred! 
with rados.Rados(conffile=self.parent.conf, name=self.parent.client_name) as cluster: cluster.wait_for_latest_osdmap() cmd = {'prefix': 'df', 'format': 'json'} rc, buf_s, out = cluster.mon_command(json.dumps(cmd), b'') if rc == 0: pool_info = json.loads(buf_s) for pool_data in pool_info['pools']: pool_name = pool_data['name'] self.pool_lookup[pool_name].update(pool_data) def summary(self): return "Pools: {}".format(self.parent.num_pools), True class RadosPool(UINode): display_attributes = ["name", "commit", "overcommit_PCT", "max_bytes", "used_bytes", "type", "desc"] def __init__(self, parent, pool_name, pool_md, ec_profiles): UINode.__init__(self, pool_name, parent) type_id = pool_md['type'] if type_id == 1: self.desc = "x{}".format(pool_md['size']) self.type = "replicated" elif type_id == 3: ec_profile = ec_profiles[pool_md['erasure_code_profile']] self.desc = "{}+{}".format(ec_profile['k'], ec_profile['m']) self.type = "erasure" def _calc_overcommit(self): root = self.parent.parent.parent.parent potential_demand = 0 for pool_child in root.disks.children: if pool_child.name == self.name: for image_child in pool_child.children: potential_demand += image_child.size self.commit = potential_demand if self.max_bytes == 0: self.overcommit_PCT = 0 else: self.overcommit_PCT = int( (potential_demand / float(self.max_bytes)) * 100) def update(self, pool_metadata): self.max_bytes = pool_metadata['stats']['max_avail'] self.used_bytes = pool_metadata['stats']['bytes_used'] self._calc_overcommit() def summary(self): msg = ["({})".format(self.desc)] msg.append("Commit: {}/{} ({}%)".format(human_size(self.commit), human_size(self.max_bytes), self.overcommit_PCT)) msg.append("Used: {}".format(human_size(self.used_bytes))) return ', '.join(msg), True class CephTopology(UINode): def __init__(self, parent): UINode.__init__(self, 'topology', parent) # backward compatibility (< octopus) if 'osdmap' in self.parent.ceph_status['osdmap']: self.num_osds = 
self.parent.ceph_status['osdmap']['osdmap']['num_osds'] else: self.num_osds = self.parent.ceph_status['osdmap']['num_osds'] # backward compatibility (< octopus) if 'mons' in self.parent.ceph_status['monmap']: self.num_mons = len(self.parent.ceph_status['monmap']['mons']) else: self.num_mons = self.parent.ceph_status['monmap']['num_mons'] def summary(self): msg = ["OSDs: {}".format(self.num_osds)] msg.append("MONs: {}".format(self.num_mons)) return ','.join(msg), True ceph-iscsi-3.9/gwcli/client.py000066400000000000000000000655071470665154300164010ustar00rootroot00000000000000from gwcli.node import UIGroup, UINode from gwcli.utils import response_message, APIRequest, get_config from ceph_iscsi_config.client import CHAP, GWClient import ceph_iscsi_config.settings as settings from ceph_iscsi_config.utils import human_size, this_host from rtslib_fb.utils import normalize_wwn, RTSLibError # this ignores the warning issued when verify=False is used from requests.packages import urllib3 urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning) class Clients(UIGroup): help_intro = ''' The host section shows the clients that have been defined across each of the gateways in the configuration. Clients may be created or deleted, but once defined they can not be renamed. e.g. create iqn.1994-05.com.redhat:rh7-client4 ''' def __init__(self, parent): UIGroup.__init__(self, 'hosts', parent) # lun_map dict is indexed by the rbd name, pointing to a list # of clients that have that rbd allocated. self.lun_map = {} self.client_map = {} # record the shortcut shortcut = self.shell.prefs['bookmarks'].get('hosts', None) if not shortcut or shortcut is not self.path: self.shell.prefs['bookmarks']['hosts'] = self.path self.shell.prefs.save() self.shell.log.debug("Bookmarked %s as %s." 
% (self.path, 'hosts')) self.config = get_config() def load(self, client_info): for client_iqn, client_settings in client_info.items(): Client(self, client_iqn, client_settings) def ui_command_create(self, client_iqn): """ Clients may be created using the 'create' sub-command. The initial definition will be added to each gateway without any authentication set. Once a client is created the admin is automatically placed in the context of the new client definition for auth and disk configuration operations. e.g. > create """ self.logger.debug("CMD: ../hosts/ create {}".format(client_iqn)) cli_seed = {"luns": {}, "auth": {}} # is the IQN usable? try: client_iqn, iqn_type = normalize_wwn(['iqn'], client_iqn) except RTSLibError: self.logger.error("IQN name '{}' is not valid for " "iSCSI".format(client_iqn)) return target_iqn = self.parent.name # Issue the API call to create the client client_api = ('{}://localhost:{}/api/' 'client/{}/{}'.format(self.http_mode, settings.config.api_port, target_iqn, client_iqn)) self.logger.debug("Client CREATE for {}".format(client_iqn)) api = APIRequest(client_api) api.put() if api.response.status_code == 200: Client(self, client_iqn, cli_seed) self.config = get_config() self.logger.debug("- Client '{}' added".format(client_iqn)) self.logger.info('ok') else: self.logger.error("Failed: {}".format(response_message(api.response, self.logger))) return # switch the current directory to the new client for auth or disk # definitions as part of the users workflow return self.ui_command_cd(client_iqn) def ui_command_delete(self, client_iqn): """ You may delete a client from the configuration, but you must ensure the client has logged out of the iscsi gateways. Attempting to delete a client that has an open session will fail the request e.g. delete """ self.logger.debug("CMD: ../hosts/ delete {}".format(client_iqn)) self.logger.debug("Client DELETE for {}".format(client_iqn)) # is the IQN usable? 
try: client_iqn, iqn_type = normalize_wwn(['iqn'], client_iqn) except RTSLibError: self.logger.error("IQN name '{}' is not valid for " "iSCSI".format(client_iqn)) return target_iqn = self.parent.name client_api = ('{}://{}:{}/api/' 'client/{}/{}'.format(self.http_mode, "localhost", settings.config.api_port, target_iqn, client_iqn)) api = APIRequest(client_api) api.delete() if api.response.status_code == 200: # Delete successful across all gateways self.logger.debug("- '{}' removed and configuration " "updated".format(client_iqn)) client = [client for client in self.children if client.name == client_iqn][0] # remove any rbd maps from the lun_map for this client rbds_mapped = [lun.rbd_name for lun in client.children] for rbd in rbds_mapped: self.update_lun_map('remove', rbd, client_iqn) self.delete(client) self.logger.info('ok') else: # client delete request failed self.logger.error(response_message(api.response, self.logger)) def update_lun_map(self, action, rbd_path, client_iqn): """ Update the lun_map lookup dict, each element points to a 'set' of clients that have the lun mapped :param action: add or remove :param rbd_path: disk name (str) i.e. . :param client_iqn: client IQN (str) """ if action == 'add': if rbd_path in self.lun_map: self.lun_map[rbd_path].add(client_iqn) else: self.lun_map[rbd_path] = {client_iqn} else: if rbd_path in self.lun_map: try: self.lun_map[rbd_path].remove(client_iqn) except KeyError: # client not in set pass else: if len(self.lun_map[rbd_path]) == 0: del self.lun_map[rbd_path] else: # delete requested for an rbd that is not in lun_map? # twilight zone moment raise ValueError("Clients.update_lun_map : Attempt to delete " "rbd from lun_map that is not defined") def delete(self, child): del self.client_map[child.client_iqn] self.remove_child(child) def ui_command_auth(self, action=None): """ Disable/enable ACL authentication or clear CHAP settings for all clients on the target. - disable_acl ... 
Disable initiator name based ACL authentication. - enable_acl .... Enable initiator name based ACL authentication. - nochap ........ Remove chap authentication for all clients across all gateways. Initiator name based authentication will then be used. e.g. auth disable_acl """ if not action: self.logger.error("Missing auth argument. Use 'auth nochap|disable_acl|enable_acl'") return if action not in ['nochap', 'enable_acl', 'disable_acl']: self.logger.error("Invalid auth argument. Use 'auth nochap|disable_acl|enable_acl'") return if action == 'nochap': for client in self.children: client.set_auth(action, None, None, None) else: target_iqn = self.parent.name api_vars = {'action': action} targetauth_api = ('{}://localhost:{}/api/' 'targetauth/{}'.format(self.http_mode, settings.config.api_port, target_iqn)) api = APIRequest(targetauth_api, data=api_vars) api.put() if api.response.status_code == 200: self.config = get_config() self.logger.info('ok') else: self.logger.error("Failed to {}: " "{}".format(action, response_message(api.response, self.logger))) return def summary(self): chap_enabled = False chap_disabled = False target_iqn = self.parent.name target_auth = self.parent.auth target_auth_enabled = target_auth['username'] and target_auth['password'] if self.config['targets'][target_iqn]['acl_enabled']: auth_stat_str = "ACL_ENABLED" status = True else: auth_stat_str = "ACL_DISABLED" status = None if target_auth_enabled else False if not target_auth_enabled: for client in self.children: if not client.auth['username']: chap_disabled = True else: chap_enabled = True if chap_enabled and chap_disabled: auth_stat_str = "MISCONFIG" status = False break return "Auth: {}, Hosts: {}".format(auth_stat_str, len(self.children)), \ status class Client(UINode): help_intro = ''' Client definitions can be managed through two sub-commands; 'auth' and 'disk'. 
These commands allow you to manage the CHAP authentication for the client (1-WAY) and change the rbd images masked to a specific client. LUN masking automatically associates a specific LUN id with an rbd image, simplifying the workflow. The LUN ids assigned can be seen by running the 'info' command. This will show all the client's details that are stored within the iscsi gateway configuration. ''' display_attributes = ["client_iqn", "ip_address", "alias", "logged_in", "auth", "group_name", "luns"] def __init__(self, parent, client_iqn, client_settings): UINode.__init__(self, client_iqn, parent) self.client_iqn = client_iqn self.parent.client_map[client_iqn] = self self.group_name = '' self.ip_address = '' self.alias = '' for k, v in client_settings.items(): self.__setattr__(k, v) # decode the chap password if necessary if 'username' in self.auth and 'password' in self.auth: self.chap = CHAP(self.auth['username'], self.auth['password'], self.auth['password_encryption_enabled']) self.auth['username'] = self.chap.user self.auth['password'] = self.chap.password else: self.auth['username'] = '' self.auth['password'] = '' # decode the chap_mutual password if necessary if 'mutual_username' in self.auth and 'mutual_password' in self.auth: self.chap_mutual = CHAP(self.auth['mutual_username'], self.auth['mutual_password'], self.auth['mutual_password_encryption_enabled']) self.auth['mutual_username'] = self.chap_mutual.user self.auth['mutual_password'] = self.chap_mutual.password else: self.auth['mutual_username'] = '' self.auth['mutual_password'] = '' self.refresh_luns() def drop_luns(self): luns = self.children.copy() for lun in luns: self.remove_lun(lun) def refresh_luns(self): for rbd_path in self.luns.keys(): lun_id = self.luns[rbd_path]['lun_id'] self.parent.update_lun_map('add', rbd_path, self.client_iqn) MappedLun(self, rbd_path, lun_id) def __str__(self): return self.get_info() def summary(self): all_pools = self.parent.parent.parent.parent.disks.children
all_disks = [] for pool in all_pools: for disk in pool.children: all_disks.append(disk) total_bytes = 0 client_luns = [lun.rbd_name for lun in self.children] for disk in all_disks: if disk.image_id in client_luns: total_bytes += disk.size msg = ['LOGGED-IN'] if self.logged_in else [] auth_text = "Auth: None" status = False if self.auth.get('mutual_username'): auth_text = "Auth: CHAP_MUTUAL" status = True elif self.auth.get('username'): auth_text = "Auth: CHAP" status = True msg.append(auth_text) msg.append("Disks: {}({})".format(len(client_luns), human_size(total_bytes))) return ", ".join(msg), status def set_auth(self, username=None, password=None, mutual_username=None, mutual_password=None): self.logger.debug("username={}, password={}, mutual_username={}, " "mutual_password={}".format(username, password, mutual_username, mutual_password)) self.logger.debug("CMD: ../hosts/ auth *") if not username: self.logger.error("To set or reset authentication, specify either " "username= password= " "[mutual_username]= [mutual_password]= " "or nochap") return if username == 'nochap': username = '' password = '' mutual_username = '' mutual_password = '' self.logger.debug("auth to be set to username='{}', password='{}', mutual_username='{}', " "mutual_password='{}' for '{}'".format(username, password, mutual_username, mutual_password, self.client_iqn)) target_iqn = self.parent.parent.name api_vars = { "username": username, "password": password, "mutual_username": mutual_username, "mutual_password": mutual_password } clientauth_api = ('{}://localhost:{}/api/' 'clientauth/{}/{}'.format(self.http_mode, settings.config.api_port, target_iqn, self.client_iqn)) api = APIRequest(clientauth_api, data=api_vars) api.put() if api.response.status_code == 200: self.logger.debug("- client credentials updated") self.auth['username'] = username self.auth['password'] = password self.auth['mutual_username'] = mutual_username self.auth['mutual_password'] = mutual_password self.logger.info('ok') 
else: self.logger.error("Failed to update the client's auth: " "{}".format(response_message(api.response, self.logger))) return def ui_command_auth(self, username=None, password=None, mutual_username=None, mutual_password=None): """ Client authentication can be set to use CHAP/CHAP_MUTUAL by supplying username, password, mutual_username, mutual_password e.g. auth username= password= mutual_username= mutual_password= username / mutual_username ... the username is 8-64 character string. Each character may either be an alphanumeric or use one of the following special characters .,:,-,@. Consider using the hosts 'shortname' or the initiators IQN value as the username password / mutual_password ... the password must be between 12-16 chars in length containing alphanumeric characters, plus the following special characters @,_,-,/ WARNING1: Using unsupported special characters may result in truncation, resulting in failed logins. WARNING2: If there are multiple clients, CHAP must be enabled for all clients or disabled for all clients. gwcli does not support mixing CHAP clients with IQN ACL clients. """ self.logger.debug("CMD: ../hosts/ auth *") if not username: self.logger.error("To set authentication, specify " "username= password= " "[mutual_username]= [mutual_password]= " "or nochap") return self.set_auth(username, password, mutual_username, mutual_password) @staticmethod def get_srtd_names(lun_list): """ sort the supplied list of luns (tuples - [('disk1',1),('disk2',0)]) :return: list of LUN names, in lun_id sequence """ srtd_luns = sorted(lun_list, key=lambda field: field[1]) return [rbd_name for rbd_name, lun_id in srtd_luns] def ui_command_disk(self, action='add', disk=None, size=None): """ Disks can be added or removed from the client one at a time using the 'disk' sub-command. Note that if the disk does not currently exist in the configuration, the cli will attempt to create it for you. e.g. 
disk add disk remove Adding a disk will result in the disk occupying the client's next available lun id. Once allocated removing a LUN will not change the LUN id associations for the client. Note that if the client is a member of a host group, disk management *must* be performed at the group level. Attempting to add/remove disks at the client level will fail. """ self.logger.debug("CMD: ../hosts/ disk action={}" " disk={}".format(action, disk)) valid_actions = ['add', 'remove'] if not disk: self.logger.critical("You must supply a disk name to add/remove " "for this client") return if action not in valid_actions: self.logger.error("you can only add and remove disks - {} is " "invalid ".format(action)) return lun_list = [(lun.rbd_name, lun.lun_id) for lun in self.children] current_luns = Client.get_srtd_names(lun_list) if action == 'add': if disk not in current_luns: ui_root = self.get_ui_root() all_pools = ui_root.disks.children all_disks = [] for current_pool in all_pools: for current_disk in current_pool.children: all_disks.append(current_disk) valid_disk_names = [defined_disk.image_id for defined_disk in all_disks] else: # disk provided is already mapped, so remind the user self.logger.error("Disk {} already mapped".format(disk)) return else: valid_disk_names = current_luns if disk not in valid_disk_names: # if this is an add operation, we can create the disk on-the-fly # for the admin if action == 'add': ui_root = self.get_ui_root() ui_disks = ui_root.disks # a disk given here would be of the form pool.image try: pool, image = disk.split('/') except ValueError: self.logger.error("Invalid format. 
Use pool_name/disk_name") return rc = ui_disks.create_disk(pool=pool, image=image, size=size) if rc == 0: self.logger.debug("disk auto-define successful") else: self.logger.error("disk auto-define failed({}), try " "using the /disks create " "command".format(rc)) return else: self.logger.error("disk '{}' is not mapped to this " "client ".format(disk)) return mapped_disks = [mapped_disk.name for mapped_disk in self.parent.parent.target_disks.children] if disk not in mapped_disks: rc = self.parent.parent.target_disks.add_disk(disk, None, None) if rc == 0: self.logger.debug("disk auto-map successful") else: self.logger.error("disk auto-map failed({}), try " "using the /iscsi-targets//disks add " "command".format(rc)) return # At this point we are either in add/remove mode, with a valid disk # to act upon self.logger.debug("Client '{}' update - {} disk " "{}".format(self.client_iqn, action, disk)) target_iqn = self.parent.parent.name api_vars = {"disk": disk} clientlun_api = ('{}://localhost:{}/api/' 'clientlun/{}/{}'.format(self.http_mode, settings.config.api_port, target_iqn, self.client_iqn)) api = APIRequest(clientlun_api, data=api_vars) if action == 'add': api.put() else: api.delete() if api.response.status_code == 200: self.logger.debug("disk mapping updated successfully") if action == 'add': # The addition of the lun will get a lun id assigned so # we need to query the api server to get the new configuration # to be able to set the local cli entry correctly get_api_vars = {"disk": disk} clientlun_api = clientlun_api.replace('/clientlun/', '/_clientlun/') self.logger.debug("Querying API to get mapped LUN information") api = APIRequest(clientlun_api, data=get_api_vars) api.get() if api.response.status_code == 200: try: lun_dict = api.response.json()['message'] except Exception: self.logger.error("Malformed REST API response") return # now update the UI lun_id = lun_dict[disk]['lun_id'] self.add_lun(disk, lun_id) else: self.logger.error("Query for disk '{}' meta 
                                      data "
                                      "failed".format(disk))
                    return
            else:
                # this was a remove request, so simply delete the child
                # MappedLun object corresponding to this rbd name
                mlun = [lun for lun in self.children
                        if lun.rbd_name == disk][0]
                self.remove_lun(mlun)

            self.logger.debug("configuration update successful")
            self.logger.info('ok')
        else:
            # the request to add/remove the disk for the client failed
            self.logger.error("disk {} for '{}' against {} failed"
                              "\n{}".format(action, disk, self.client_iqn,
                                            response_message(api.response,
                                                             self.logger)))
            return

    def add_lun(self, disk, lun_id):

        MappedLun(self, disk, lun_id)

        # update the objects lun list (so ui info cmd picks
        # up the change)
        self.luns[disk] = {'lun_id': lun_id}

        self.parent.update_lun_map('add', disk, self.client_iqn)
        active_maps = len(self.parent.lun_map[disk]) - 1
        if active_maps > 0:
            self.logger.warning("Warning: '{}' mapped to {} other "
                                "client(s)".format(disk, active_maps))

    def remove_lun(self, lun):
        self.remove_child(lun)
        del self.luns[lun.rbd_name]
        self.parent.update_lun_map('remove', lun.rbd_name, self.client_iqn)

    @property
    def logged_in(self):
        target_iqn = self.parent.parent.name
        gateways = self.parent.parent.get_child('gateways')
        local_gw = this_host()
        is_local_target = len([child for child in gateways.children
                               if child.name == local_gw]) > 0
        if is_local_target:
            client_info = GWClient.get_client_info(target_iqn,
                                                   self.client_iqn)
            self.alias = client_info['alias']
            self.ip_address = ','.join(client_info['ip_address'])
            return client_info['state']
        else:
            self.alias = ''
            self.ip_address = ''
            return ''


class MappedLun(UINode):

    display_attributes = ["rbd_name", "owner", "size", "size_h", "lun_id"]

    def __init__(self, parent, name, lun_id):
        self.rbd_name = name
        UINode.__init__(self, 'lun {}'.format(lun_id), parent)

        # navigate back through the object model to pick up the disks
        ui_root = self.get_ui_root()
        disk_lookup = ui_root.disks.disk_lookup
        self.disk = disk_lookup[name]
        self.owner = self.disk.owner
        self.size = self.disk.size
        self.size_h = self.disk.size_h
        self.lun_id = lun_id

    def summary(self):
        self.owner = self.disk.owner
        self.size_h = self.disk.size_h
        return "{}({}), Owner: {}".format(self.rbd_name, self.size_h,
                                          self.owner), True


ceph-iscsi-3.9/gwcli/gateway.py

import json
import threading

from gwcli.node import UIGroup, UINode, UIRoot
from gwcli.hostgroup import HostGroups
from gwcli.storage import Disks, TargetDisks
from gwcli.client import Clients
from gwcli.utils import (response_message, GatewayAPIError, GatewayError,
                         APIRequest, console_message, get_config,
                         refresh_control_values)

import ceph_iscsi_config.settings as settings
from ceph_iscsi_config.utils import (normalize_ip_address, this_host)
from ceph_iscsi_config.target import GWTarget
from ceph_iscsi_config.client import CHAP

from gwcli.ceph import CephGroup

from rtslib_fb.utils import normalize_wwn, RTSLibError

# FIXME - code is using a self signed cert common across all gateways
# the embedded urllib3 package will issue warnings when ssl cert validation is
# disabled - so this disable_warnings stops the user interface from being
# bombed
from requests.packages import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


class ISCSIRoot(UIRoot):

    def __init__(self, shell, scan_threads=1, endpoint=None):
        UIRoot.__init__(self, shell)

        self.error = False
        self.interactive = True           # default interactive mode
        self.scan_threads = scan_threads

        if settings.config.api_secure:
            self.http_mode = 'https'
        else:
            self.http_mode = 'http'

        if endpoint is None:
            self.local_api = ('{}://localhost:{}/'
                              'api'.format(self.http_mode,
                                           settings.config.api_port))
        else:
            self.local_api = endpoint

        self.config = {}

        # Establish the root nodes within the UI, for the different components
        self.disks = Disks(self)
        self.ceph = CephGroup(self)
        self.target = ISCSITargets(self)

    def refresh(self):
        self.config = self._get_config()

        if not self.error:
            if 'disks' in self.config:
                self.disks.refresh(self.config['disks'])
            else:
                self.disks.refresh({})

            if 'gateways' in self.config:
                self.target.gateway_group = self.config['gateways']
            else:
                self.target.gateway_group = {}

            self.target.refresh(self.config['targets'],
                                self.config['discovery_auth'])

            self.ceph.refresh()
        else:
            # Unable to get the config, tell the user and exit the cli
            self.logger.critical("Unable to access the configuration object")

    def _get_config(self, endpoint=None):
        if not endpoint:
            endpoint = self.local_api

        api = APIRequest(endpoint + "/config")
        api.get()

        if api.response.status_code == 200:
            try:
                return api.response.json()
            except Exception:
                self.error = True
                self.logger.error("Malformed REST API response")
                return {}
        else:
            # 403 maybe due to the ip address is not in the iscsi
            # gateway trusted ip list
            self.error = True
            self.logger.error("REST API failure, code : "
                              "{}".format(api.response.status_code))
            return {}

    def export_copy(self, config):
        fmtd_config = json.dumps(config, sort_keys=True,
                                 indent=4, separators=(',', ': '))
        print(fmtd_config)

    def ui_command_export(self, mode='copy'):
        """
        Print the configuration in a format that can be used as a backup.

        The export command supports the following mode:

        copy - This prints the internal configuration. It can be used
               for backup or for support requests.
        """
        valid_modes = ['copy']

        self.logger.debug("CMD: export mode={}".format(mode))

        if mode not in valid_modes:
            self.logger.error("Invalid export mode requested - supported "
                              "modes are: {}".format(','.join(valid_modes)))
            return

        current_config = self._get_config()
        if not current_config:
            self.logger.error("Unable to refresh local config.")
            return

        if mode == 'copy':
            self.export_copy(current_config)

    def ui_command_info(self):
        self.logger.debug("CMD: info")

        if settings.config.trusted_ip_list:
            display_ips = ','.join(settings.config.trusted_ip_list)
        else:
            display_ips = 'None'

        console_message("HTTP mode : {}".format(self.http_mode))
        console_message("Rest API port : {}".format(settings.config.api_port))
        console_message("Local endpoint : {}".format(self.local_api))
        console_message("Local Ceph Cluster : {}".format(
            settings.config.cluster_name))
        console_message("2ndary API IP's : {}".format(display_ips))


class ISCSITargets(UIGroup):

    help_intro = '''
        The iscsi-targets section shows the iSCSI targets that the groups
        of gateways will be known as by iSCSI initiators (clients).

        Each iSCSI target can consist of 2-4 gateway nodes. Multiple
        gateways are needed to deliver high availability storage to the
        iSCSI client.
        '''

    def __init__(self, parent):
        UIGroup.__init__(self, 'iscsi-targets', parent)
        self.gateway_group = {}
        self.auth = None

    def ui_command_create(self, target_iqn):
        """
        Create an iSCSI target. This target is defined across all gateway
        nodes, providing the client with a single 'image' for iscsi
        discovery.
        """

        self.logger.debug("CMD: /iscsi create {}".format(target_iqn))

        # is the IQN usable?
        try:
            target_iqn, iqn_type = normalize_wwn(['iqn'], target_iqn)
        except RTSLibError:
            self.logger.error("IQN name '{}' is not valid for "
                              "iSCSI".format(target_iqn))
            return

        # 'safe' to continue with the definition
        self.logger.debug("Create an iscsi target definition in the UI")

        local_api = ('{}://localhost:{}/api/'
                     'target/{}'.format(self.http_mode,
                                        settings.config.api_port,
                                        target_iqn))

        api = APIRequest(local_api)
        api.put()

        if api.response.status_code == 200:
            self.logger.info('ok')

            # create the target entry in the UI tree
            target_exists = len([target for target in self.children
                                 if target.name == target_iqn]) > 0
            if not target_exists:
                Target(target_iqn, self)
        else:
            self.logger.error("Failed to create the target on the local node "
                              "{}".format(response_message(api.response,
                                                           self.logger)))

    def ui_command_delete(self, target_iqn):
        """
        Delete an iSCSI target.
        """

        self.logger.debug("CMD: /iscsi delete {}".format(target_iqn))

        try:
            target_iqn, iqn_type = normalize_wwn(['iqn'], target_iqn)
        except RTSLibError:
            self.logger.error("IQN name '{}' is not valid for "
                              "iSCSI".format(target_iqn))
            return

        gw_api = ('{}://localhost:{}/api/'
                  'target/{}'.format(self.http_mode,
                                     settings.config.api_port,
                                     target_iqn))
        api = APIRequest(gw_api)
        api.delete()

        if api.response.status_code == 200:
            self.logger.info('ok')
            # delete the target entry from the UI tree
            target_object = [target for target in self.children
                             if target.name == target_iqn][0]
            self.remove_child(target_object)
        else:
            self.logger.error("Failed - "
                              "{}".format(response_message(api.response,
                                                           self.logger)))

    def ui_command_clearconfig(self, confirm=None):
        """
        The 'clearconfig' command allows you to return the configuration to
        an unused state: LIO on each gateway will be cleared, and gateway
        definitions in the configuration object will be removed.

        > clearconfig confirm=true

        In order to run the clearconfig command, all clients and disks
        *must* already have been removed.
        """

        self.logger.debug("CMD: clearconfig confirm={}".format(confirm))

        confirm = self.ui_eval_param(confirm, 'bool', False)
        if not confirm:
            self.logger.error("To clear the configuration you must specify "
                              "confirm=true")
            return

        # get a new copy of the config dict over the local API
        # check that there aren't any disks or client listed
        current_config = get_config()
        for target_iqn, target in current_config['targets'].items():
            num_clients = len(target['clients'].keys())
            if num_clients > 0:
                self.logger.error("{} - Clients({}) must be removed first"
                                  " before clearing the gateway "
                                  "configuration".format(target_iqn,
                                                         num_clients))
                return

        num_disks = len(current_config['disks'].keys())
        if num_disks > 0:
            self.logger.error("Disks({}) must be removed first"
                              " before clearing the gateway "
                              "configuration".format(num_disks))
            return

        for target_iqn, target in current_config['targets'].items():
            target_config = current_config['targets'][target_iqn]
            self.clear_config(target_config['portals'], target_iqn)

    def clear_config(self, gw_list, target_iqn):
        for gw_name in gw_list:
            gw_api = ('{}://{}:{}/api/'
                      '_target/{}'.format(self.http_mode,
                                          gw_name,
                                          settings.config.api_port,
                                          target_iqn))

            api = APIRequest(gw_api)
            api.delete()
            if api.response.status_code != 200:
                msg = response_message(api.response, self.logger)
                self.logger.error("Delete of {} failed : {}".format(gw_name,
                                                                    msg))
                return
            else:
                self.logger.debug("- deleted {}".format(gw_name))

        # gateways removed, so lets delete the objects from the UI tree
        self.reset()

        # remove any bookmarks stored in the prefs.bin file
        if 'bookmarks' in self.shell.prefs:
            del self.shell.prefs['bookmarks']

        self.logger.info('ok')

    def ui_command_discovery_auth(self, username=None, password=None,
                                  mutual_username=None, mutual_password=None):
        """
        Discovery authentication can be set to use CHAP/CHAP_MUTUAL by
        supplying username, password, mutual_username, mutual_password

        Specifying 'nochap' will remove discovery authentication.

        e.g.
        auth username=<user> password=<pass> mutual_username=<m_user> mutual_password=<m_pass>
        """
        self.logger.warn("discovery username={}, password={}, "
                         "mutual_username={}, mutual_password="
                         "{}".format(username, password,
                                     mutual_username, mutual_password))

        self.logger.debug("CMD: /iscsi discovery_auth")

        if not username:
            self.logger.error("To set or reset discovery authentication, "
                              "specify either "
                              "username=<user> password=<password> "
                              "[mutual_username]=<m_user> "
                              "[mutual_password]=<m_pass> or nochap")
            return

        if username == 'nochap':
            username = ''
            password = ''
            mutual_username = ''
            mutual_password = ''

        self.logger.debug("discovery auth to be set to username='{}', "
                          "password='{}', mutual_username='{}', "
                          "mutual_password='{}'".format(username, password,
                                                        mutual_username,
                                                        mutual_password))

        api_vars = {
            "username": username,
            "password": password,
            "mutual_username": mutual_username,
            "mutual_password": mutual_password
        }
        discoveryauth_api = ('{}://localhost:{}/api/'
                             'discoveryauth'.format(self.http_mode,
                                                    settings.config.api_port))
        api = APIRequest(discoveryauth_api, data=api_vars)
        api.put()

        if api.response.status_code == 200:
            self._set_auth(username, password, mutual_username,
                           mutual_password)
            self.logger.info('ok')
        else:
            self.logger.error("Error: {}".format(
                response_message(api.response, self.logger)))

    def _set_auth(self, username, password, mutual_username, mutual_password):
        if mutual_username != '' and mutual_password != '':
            self.auth = "CHAP_MUTUAL"
        elif username != '' and password != '':
            self.auth = "CHAP"
        else:
            self.auth = None

    def refresh(self, targets, discovery_auth):
        self.logger.debug("Refreshing gateway & client information")
        self.reset()
        self._set_auth(discovery_auth['username'],
                       discovery_auth['password'],
                       discovery_auth['mutual_username'],
                       discovery_auth['mutual_password'])

        for target_iqn, target in targets.items():
            tgt = Target(target_iqn, self)
            tgt.controls = target['controls']

            tgt.gateway_group.load(target['portals'])
            tgt.target_disks.load(target['disks'])
            tgt.client_group.load(target['clients'])

    def summary(self):
        return "DiscoveryAuth: {}, Targets: {}".format(self.auth,
                                                       len(self.children)), \
            None


class Target(UINode):

    display_attributes = ["target_iqn", "auth", "control_values"]

    help_intro = '''
        The iscsi target is the name that the group of gateways are
        known as by the iscsi initiators (clients).
        '''

    def __init__(self, target_iqn, parent):
        UIGroup.__init__(self, target_iqn, parent)
        self.target_iqn = target_iqn
        self.control_values = []
        self.controls = {}

        self.gateway_group = GatewayGroup(self)
        self.client_group = Clients(self)
        self.host_groups = HostGroups(self)
        self.target_disks = TargetDisks(self)

        config = self.parent.parent._get_config()
        if not config:
            self.logger.error("Unable to refresh local config")
            raise GatewayError

        self.auth = config['targets'][target_iqn]['auth']

        # decode the chap password if necessary
        if 'username' in self.auth and 'password' in self.auth:
            self.chap = CHAP(self.auth['username'],
                             self.auth['password'],
                             self.auth['password_encryption_enabled'])
            self.auth['username'] = self.chap.user
            self.auth['password'] = self.chap.password
        else:
            self.auth['username'] = ''
            self.auth['password'] = ''

        # decode the chap_mutual password if necessary
        if 'mutual_username' in self.auth and 'mutual_password' in self.auth:
            self.chap_mutual = CHAP(
                self.auth['mutual_username'],
                self.auth['mutual_password'],
                self.auth['mutual_password_encryption_enabled'])
            self.auth['mutual_username'] = self.chap_mutual.user
            self.auth['mutual_password'] = self.chap_mutual.password
        else:
            self.auth['mutual_username'] = ''
            self.auth['mutual_password'] = ''

    def _get_controls(self):
        return self._controls.copy()

    def _set_controls(self, controls):
        self._controls = controls.copy()
        self.control_values = {}
        refresh_control_values(self.control_values, self.controls,
                               GWTarget.SETTINGS)

    controls = property(_get_controls, _set_controls)

    def summary(self):
        msg = []
        auth_text = "Auth: None"
        status = None

        if self.auth.get('mutual_username'):
            auth_text = "Auth: CHAP_MUTUAL"
            status = True
        elif self.auth.get('username'):
            auth_text = "Auth: CHAP"
            status = True

        msg.append(auth_text)
        msg.append("Gateways: {}".format(len(self.gateway_group.children)))
        return ", ".join(msg), status

    def ui_command_reconfigure(self, attribute, value):
        """
        The reconfigure command allows you to tune various gateway
        attributes. An empty value for an attribute resets the lun
        attribute to its default.

        attribute : attribute to reconfigure. supported attributes:
          - cmdsn_depth : integer 1 - 512
          - dataout_timeout : integer 2 - 60
          - nopin_response_timeout : integer 3 - 60
          - nopin_timeout : integer 3 - 60
          - immediate_data : [Yes|No]
          - initial_r2t : [Yes|No]
          - first_burst_length : integer 512 - 16777215
          - max_burst_length : integer 512 - 16777215
          - max_outstanding_r2t : integer 1 - 65535
          - max_recv_data_segment_length : integer 512 - 16777215
          - max_xmit_data_segment_length : integer 512 - 16777215

        value : value of the attribute to reconfigure

        e.g.
        set cmdsn_depth
        - reconfigure attribute=cmdsn_depth value=128
        reset cmdsn_depth
        - reconfigure attribute=cmdsn_depth value=
        """
        if not GWTarget.SETTINGS.get(attribute):
            self.logger.error("supported attributes: {}".format(",".join(
                sorted(GWTarget.SETTINGS.keys()))))
            return

        # Issue the api request for the reconfigure
        gateways_api = ('{}://localhost:{}/api/'
                        'target/{}'.format(self.http_mode,
                                           settings.config.api_port,
                                           self.target_iqn))

        controls = {attribute: value}
        controls_json = json.dumps(controls)
        api_vars = {'mode': 'reconfigure', 'controls': controls_json}

        self.logger.debug("Issuing reconfigure request: "
                          "controls={}".format(controls_json))
        api = APIRequest(gateways_api, data=api_vars)
        api.put()

        if api.response.status_code != 200:
            self.logger.error("Failed to reconfigure : "
                              "{}".format(response_message(api.response,
                                                           self.logger)))
            return

        config = self.parent.parent._get_config()
        if not config:
            self.logger.error("Unable to refresh local config.")
        else:
            self.controls = config['targets'][self.target_iqn]['controls']

        self.logger.info('ok')

    def ui_command_auth(self, username=None, password=None,
                        mutual_username=None, mutual_password=None):
        """
        Target authentication can be set to use CHAP/CHAP_MUTUAL by
        supplying username, password, mutual_username, mutual_password

        e.g.
        auth username=<user> password=<pass> mutual_username=<m_user> mutual_password=<m_pass>

        username / mutual_username ... the username is an 8-64 character
        string. Each character may either be an alphanumeric or use one of
        the following special characters: . , : , - , @.
        Consider using the host's 'shortname' or the initiator's IQN value
        as the username

        password / mutual_password ... the password must be between 12-16
        chars in length, containing alphanumeric characters plus the
        following special characters: @ , _ , - , /
        """
        self.logger.debug("CMD: /iscsi-targets/ auth *")

        if not username:
            self.logger.error("To set authentication, specify "
                              "username=<user> password=<password> "
                              "[mutual_username]=<m_user> "
                              "[mutual_password]=<m_pass> "
                              "or nochap")
            return

        if username == 'nochap':
            username = ''
            password = ''
            mutual_username = ''
            mutual_password = ''

        self.logger.debug("auth to be set to username='{}', password='{}', "
                          "mutual_username='{}', mutual_password="
                          "'{}'".format(username, password, mutual_username,
                                        mutual_password))

        target_iqn = self.name
        api_vars = {
            "username": username,
            "password": password,
            "mutual_username": mutual_username,
            "mutual_password": mutual_password
        }
        targetauth_api = ('{}://localhost:{}/api/'
                          'targetauth/{}'.format(self.http_mode,
                                                 settings.config.api_port,
                                                 target_iqn))
        api = APIRequest(targetauth_api, data=api_vars)
        api.put()

        if api.response.status_code == 200:
            self.logger.debug("- target credentials updated")
            self.auth['username'] = username
            self.auth['password'] = password
            self.auth['mutual_username'] = mutual_username
            self.auth['mutual_password'] = mutual_password
            self.logger.info('ok')
        else:
            self.logger.error("Failed to update target auth: "
                              "{}".format(response_message(api.response,
                                                           self.logger)))


class GatewayGroup(UIGroup):

    help_intro = '''
        The gateway-group shows you the high level details of the iscsi
        gateway nodes that have been configured. It also allows you to add
        further gateways to the configuration, but this requires the API
        service instance to be started on the new gateway host

        If in doubt, use Ansible :)
        '''

    def __init__(self, parent):
        UIGroup.__init__(self, 'gateways', parent)

        self.thread_lock = threading.Lock()
        self.check_interval = 10          # check gateway state every 'n' secs
        self.last_state = 0

        # record the shortcut
        shortcut = self.shell.prefs['bookmarks'].get('gateways', None)
        if not shortcut or shortcut is not self.path:
            self.shell.prefs['bookmarks']['gateways'] = self.path
            self.shell.prefs.save()
            self.shell.log.debug("Bookmarked %s as %s." % (self.path,
                                                           'gateways'))

    @property
    def gateways_down(self):
        return len([gw for gw in self.children
                    if gw.state != 'UP'])

    def load(self, portal_group):
        for gateway_name in portal_group:
            Gateway(self, gateway_name, portal_group[gateway_name])

        self.check_gateways()

    def ui_command_info(self):
        '''
        List configured gateways.
        '''
        self.logger.debug("CMD: ../gateways/ info")
        for child in self.children:
            console_message(child)

    def check_gateways(self):
        check_thread = threading.Timer(self.check_interval,
                                       self.check_gateways)
        check_thread.daemon = True
        check_thread.start()
        self.refresh()

    def refresh(self, mode='auto'):
        self.thread_lock.acquire()

        if len(self.children) > 0:
            for gw in self.children:
                gw.refresh(mode)
        else:
            pass

        gateways_down = self.gateways_down
        if gateways_down != self.last_state:
            if gateways_down == 0:
                self.logger.info("\nAll gateways accessible")
            else:
                err_str = ("gateway is" if gateways_down == 1
                           else "gateways are")
                self.logger.warning("\n{} {} inaccessible - updates will "
                                    "be disabled".format(gateways_down,
                                                         err_str))
            self.last_state = gateways_down

        self.thread_lock.release()

    def ui_command_refresh(self):
        """
        refresh allows you to refresh the connection status of each of the
        configured gateways (i.e. check the up/down state).
        """
        num_gw = len(self.children)
        if num_gw > 0:
            self.logger.debug("{} gateways to refresh".format(num_gw))
            self.refresh(mode='interactive')
        else:
            self.logger.error("No gateways to refresh")

    def ui_command_delete(self, gateway_name, confirm=None):
        """
        Delete a gateway from the group. This will stop and delete the
        target running on the gateway. If this is the last gateway the
        target is mapped to, all objects added to it will be removed, and
        confirm=True is required.
        """
        self.logger.debug("CMD: ../gateways/ delete {} confirm {}".
                          format(gateway_name, confirm))

        self.logger.info("Deleting gateway, {}".format(gateway_name))

        confirm = self.ui_eval_param(confirm, 'bool', False)

        config = self.parent.parent.parent._get_config()
        if not config:
            self.logger.error("Unable to refresh local config over API - "
                              "sync aborted, restart rbd-target-api on {0} "
                              "to sync".format(gateway_name))
            return

        target_iqn = self.parent.name
        gw_cnt = len(config['targets'][target_iqn]['portals'])
        if gw_cnt == 0:
            self.logger.error("Target is not mapped to any gateways.")
            return

        if gw_cnt == 1:
            if not confirm:
                self.logger.error("Deleting the last gateway will remove "
                                  "all objects on this target. Use "
                                  "confirm=true")
                return

        gw_api = '{}://{}:{}/api'.format(self.http_mode,
                                         "localhost",
                                         settings.config.api_port)
        gw_rqst = gw_api + '/gateway/{}/{}'.format(target_iqn, gateway_name)

        if confirm:
            gw_vars = {"force": 'true'}
        else:
            gw_vars = {"force": 'false'}

        api = APIRequest(gw_rqst, data=gw_vars)
        api.delete()

        msg = response_message(api.response, self.logger)
        if api.response.status_code != 200:
            if "unavailable:" + gateway_name in msg:
                self.logger.error("Could not contact {}. If the gateway is "
                                  "permanently down, use confirm=true to "
                                  "force removal. WARNING: Forcing removal "
                                  "of a gateway that can still be reached "
                                  "by an initiator may result in data "
                                  "corruption.".format(gateway_name))
            else:
                self.logger.error("Failed : {}".format(msg))
            return

        self.logger.debug("{}".format(msg))
        self.logger.debug("Removing gw from UI")

        self.thread_lock.acquire()
        gw_object = self.get_child(gateway_name)
        self.remove_child(gw_object)
        self.thread_lock.release()

        config = self.parent.parent.parent._get_config()
        if not config:
            self.logger.error("Could not refresh display. Restart gwcli.")
            return
        elif not config['targets'][target_iqn]['portals']:
            # no more gws so everything but the target is dropped.
            disks_object = self.parent.get_child("disks")
            disks_object.reset()

            hosts_grp_object = self.parent.get_child("host-groups")
            hosts_grp_object.reset()

            hosts_object = self.parent.get_child("hosts")
            hosts_object.reset()

    def ui_command_create(self, gateway_name, ip_addresses, nosync=False,
                          skipchecks='false'):
        """
        Define a gateway to the gateway group for this iscsi target. The
        first host added should be the gateway running the command

        gateway_name ... should resolve to the hostname of the gateway

        ip_addresses ... are the IPv4/IPv6 addresses of the interfaces the
                         iSCSI portals should use

        nosync ......... by default new gateways are sync'd with the
                         existing configuration by cli. By specifying
                         nosync the sync step is bypassed - so the new
                         gateway will need to have its rbd-target-api
                         daemon restarted to apply the current
                         configuration (default = False)

        skipchecks ..... set this to true to force gateway validity checks
                         to be bypassed (default = false). This is a
                         developer option ONLY. Skipping these checks has
                         the potential to result in an unstable
                         configuration.
        """
        ip_addresses = [normalize_ip_address(ip_address)
                        for ip_address in ip_addresses.split(',')]
        self.logger.debug("CMD: ../gateways/ create {} {} "
                          "nosync={} skipchecks={}".format(gateway_name,
                                                           ip_addresses,
                                                           nosync,
                                                           skipchecks))

        local_gw = this_host()
        current_gateways = [tgt.name for tgt in self.children]

        if gateway_name != local_gw and len(current_gateways) == 0:
            # the first gateway defined must be the local machine. By doing
            # this the initial create uses localhost, and places its portal
            # IP in the gateway ip list. Once the gateway ip list is defined,
            # the api server can resolve against the gateways - until the
            # list is defined only a request from localhost is acceptable to
            # the api
            self.logger.error("The first gateway defined must be the local "
                              "machine")
            return

        if skipchecks not in ['true', 'false']:
            self.logger.error("skipchecks must be either true or false")
            return

        if local_gw in current_gateways:
            current_gateways.remove(local_gw)

        config = self.parent.parent.parent._get_config()
        if not config:
            self.logger.error("Unable to refresh local config"
                              " over API - sync aborted, restart"
                              " rbd-target-api on {0} to"
                              " sync".format(gateway_name))
            return

        target_iqn = self.parent.name
        target_config = config['targets'][target_iqn]
        if nosync:
            sync_text = "sync skipped"
        else:
            sync_text = ("sync'ing {} disk(s) and "
                         "{} client(s)".format(len(target_config['disks']),
                                               len(target_config['clients'])))

        if skipchecks == 'true':
            self.logger.warning("OS version/package checks have been "
                                "bypassed")

        # Check if we can get hostname from
        # the new gw endpoint
        new_gw_endpoint = ('{}://{}:{}/'
                           'api'.format(self.http_mode,
                                        gateway_name,
                                        settings.config.api_port))
        api = APIRequest('{}/sysinfo/hostname'.format(new_gw_endpoint))
        api.get()
        if api.response.status_code != 200:
            msg = response_message(api.response, self.logger)
            self.logger.error("Get gateway hostname failed : {}\n"
                              "Please check api_host setting and make sure "
                              "host {} IP is listening on port {}"
                              "".format(msg, gateway_name,
                                        settings.config.api_port))
            return
        gateway_hostname = api.response.json()['data']

        self.logger.info("Adding gateway, {}".format(sync_text))

        gw_api = '{}://{}:{}/api'.format(self.http_mode,
                                         "localhost",
                                         settings.config.api_port)
        gw_rqst = gw_api + '/gateway/{}/{}'.format(target_iqn, gateway_name)
        gw_vars = {"nosync": nosync,
                   "skipchecks": skipchecks,
                   "ip_address": ','.join(ip_addresses)}

        api = APIRequest(gw_rqst, data=gw_vars)
        api.put()

        msg = response_message(api.response, self.logger)
        if api.response.status_code != 200:
            self.logger.error("Failed : {}".format(msg))
            return

        self.logger.debug("{}".format(msg))
        self.logger.debug("Adding gw to UI")

        # Target created OK, get the details back from the gateway and
        # add to the UI. We have to use the new gateway to ensure what
        # we get back is current (the other gateways will lag until they see
        # epoch xattr change on the config object)
        config = self.parent.parent.parent._get_config(
            endpoint=new_gw_endpoint)
        if not config:
            self.logger.error("Unable to refresh local config"
                              " over API - sync aborted, restart"
                              " rbd-target-api on {0} to"
                              " sync".format(gateway_name))
            return

        target_config = config['targets'][target_iqn]
        portal_config = target_config['portals'][gateway_hostname]
        Gateway(self, gateway_hostname, portal_config)

        self.logger.info('ok')

    def summary(self):
        up_count = len([gw.state for gw in self.children
                        if gw.state == 'UP'])
        gw_count = len(self.children)
        return ("Up: {}/{}, Portals: {}".format(up_count,
                                                gw_count,
                                                gw_count),
                up_count == gw_count)

    @property
    def interactive(self):
        """determine whether the cli is running in interactive mode"""
        return self.parent.parent.parent.interactive


class Gateway(UINode):

    display_attributes = ["name",
                          "gateway_ip_list",
                          "portal_ip_addresses",
                          "inactive_portal_ips",
                          "tpgs",
                          "service_state"]

    TCP_PORT = 3260

    def __init__(self, parent, gateway_name, gateway_config):
        """
        Create the LIO element
        :param parent: parent object - the gateway group object
        :param gateway_config: dict holding the fields that define the
               gateway
        :return:
        """

        UINode.__init__(self, gateway_name, parent)

        for k, v in gateway_config.items():
            self.__setattr__(k, v)

        self.state = "DOWN"
        self.service_state = {"iscsi": "DOWN",
                              "api": "DOWN"}

        self.refresh()

    def ui_command_refresh(self):
        """
        The refresh command will initiate a check against the gateway node,
        checking that the API is available, and that the iscsi port is
        listening
        """
        self.refresh()

    def refresh(self, mode="interactive"):
        if mode == 'interactive':
            self.logger.debug("- checking iSCSI/API ports on "
                              "{}".format(self.name))
        self._get_state()

    def _get_state(self):
        """
        Determine iSCSI and gateway API service state using the _ping api
        endpoint
        :return:
        """
        lookup = {200: {"status": "UP",
                        "iscsi": "UP", "api": "UP"},
                  401: {"status": "UNAUTHORIZED",
                        "iscsi": "UNKNOWN", "api": "UP"},
                  403: {"status": "FORBIDDEN",
                        "iscsi": "UNKNOWN", "api": "UP"},
                  500: {"status": "UNKNOWN",
                        "iscsi": "UNKNOWN", "api": "UNKNOWN"},
                  503: {"status": "PARTIAL",
                        "iscsi": "DOWN", "api": "UP"},
                  999: {"status": "UNKNOWN",
                        "iscsi": "UNKNOWN", "api": "UNKNOWN"},
                  }

        gw_api = '{}://{}:{}/api/_ping'.format(self.http_mode,
                                               self.name,
                                               settings.config.api_port)
        api = APIRequest(gw_api)
        try:
            api.get()
            rc = api.response.status_code
            if rc not in lookup:
                rc = 999
        except GatewayAPIError:
            rc = 999

        self.state = lookup[rc].get('status')
        self.service_state['iscsi'] = lookup[rc].get('iscsi')
        self.service_state['api'] = lookup[rc].get('api')

    def summary(self):
        state = self.state
        return "{} ({})".format(','.join(self.portal_ip_addresses),
                                state), (state == "UP")


ceph-iscsi-3.9/gwcli/hostgroup.py

import re

from gwcli.node import UIGroup, UINode
from gwcli.utils import response_message, APIRequest

import ceph_iscsi_config.settings as settings


class HostGroups(UIGroup):

    help_intro = '''
        Hosts groups provide a more convenient way of managing multiple
        hosts that require access to the same set of LUNs.
        The host group 'policy' defines the clients and the LUNs (rbd
        images) that should be associated together.

        There are two commands used to manage the host group

        create
        delete

        Since the same disks will be seen by multiple systems, you should
        only use this feature for hosts that are cluster aware. Failing to
        adhere to this constraint is likely to result in **data loss**

        Once a group has been created you can associate clients and LUNs
        to the group with the 'host' and 'disk' sub-commands. Note that a
        client can only belong to a single group definition, but a disk
        can be defined across several groups.
        '''

    group_name_length = 32

    def __init__(self, parent):
        UIGroup.__init__(self, 'host-groups', parent)

        # record the shortcut
        shortcut = self.shell.prefs['bookmarks'].get('host-groups', None)
        if not shortcut or shortcut is not self.path:
            self.shell.prefs['bookmarks']['host-groups'] = self.path
            self.shell.prefs.save()

        self.load()

    def load(self):
        target_iqn = self.parent.name
        for child in self.children:
            self.delete(child)
        if target_iqn in self.get_ui_root().config['targets']:
            target_config = self.get_ui_root().config['targets'][target_iqn]
            groups = target_config['groups']
            for group_name in groups:
                HostGroup(self, group_name, groups[group_name])

    @property
    def groups(self):
        return [child.name for child in self.children]

    def summary(self):
        return "Groups : {}".format(len(self.children)), None

    def ui_command_create(self, group_name):
        """
        Create a host group definition. Group names can use up to 32
        alphanumeric characters, including '_', '-' and '@'.

        Note that once a group is created it can not be renamed.
        """
        self.logger.debug("CMD: ../host-groups/ create {}".format(group_name))

        if group_name in self.groups:
            self.logger.error("Group {} already defined".format(group_name))
            return

        grp_regex = re.compile(
            r"^[\w\@\-\_]{{1,{}}}$".format(HostGroups.group_name_length))
        if not grp_regex.search(group_name):
            self.logger.error("Invalid group name - max of {} chars of "
                              "alphanumeric and -,_,@ "
                              "characters".format(
                                  HostGroups.group_name_length))
            return

        target_iqn = self.parent.name

        # this is a new group
        group_api = ('{}://{}:{}/api/hostgroup/'
                     '{}/{}'.format(self.http_mode,
                                    "localhost",
                                    settings.config.api_port,
                                    target_iqn,
                                    group_name))

        api = APIRequest(group_api)
        api.put()

        if api.response.status_code != 200:
            self.logger.error("Failed : "
                              "{}".format(response_message(api.response,
                                                           self.logger)))
            return

        self.logger.debug('Adding group to the UI')
        HostGroup(self, group_name)
        self.logger.info('ok')

        # Switch to the new group
        return self.ui_command_cd(group_name)

    def ui_command_delete(self, group_name):
        """
        Delete a host group definition. The delete process will remove the
        group definition and remove the group name association within any
        client.

        Note the deletion of a group will not remove the lun masking
        already defined to clients. If this is desired, it will need to be
        performed manually once the group is deleted.
        """
        self.logger.debug("CMD: ../host-groups/ delete {}".format(group_name))

        if group_name not in self.groups:
            self.logger.error("Group '{}' does not exist".format(group_name))
            return

        target_iqn = self.parent.name

        # OK, so the group exists...
        group_api = ('{}://{}:{}/api/hostgroup/'
                     '{}/{}'.format(self.http_mode,
                                    "localhost",
                                    settings.config.api_port,
                                    target_iqn,
                                    group_name))

        api = APIRequest(group_api)
        api.delete()

        if api.response.status_code != 200:
            self.logger.error("failed to delete group "
                              "'{}'".format(group_name))
            return

        self.logger.debug("removing group from the UI")
        child = [child for child in self.children
                 if child.name == group_name][0]
        self.delete(child)

        self.logger.info('ok')

    def delete(self, child):
        target_iqn = self.parent.name
        client_group = child._get_client_group(target_iqn)
        client_map = client_group.client_map
        group_clients = [client_iqn for client_iqn in client_map
                         if client_map[client_iqn].group_name == child.name]
        for iqn in group_clients:
            self.logger.debug("removing group name from {}".format(iqn))
            client_map[iqn].group_name = ''

        self.remove_child(child)


class HostGroup(UIGroup):

    help_intro = '''
        A host group provides a simple way to manage the LUN masking of a
        number of iscsi clients as a single unit.

        The host group contains hosts (iscsi clients) and disks (rbd
        images). Once a host is defined to a group, its lun masking must
        be managed through the group. In fact, attempts to manage the
        disks of a client directly are blocked.

        The following commands enable you to manage the membership of the
        host group.

        e.g.
        host add|remove iqn.1994-05.com.redhat:rh7-client
        disk add|remove rbd/disk_1
        '''

    valid_actions = ['add', 'remove']

    def __init__(self, parent, group_name, group_settings={}):
        UIGroup.__init__(self, group_name, parent)
        self.name = group_name

        for disk in group_settings.get('disks', []):
            HostGroupMember(self, 'disk', disk)
        for member in group_settings.get('members', []):
            HostGroupMember(self, 'host', member)

    def ui_command_host(self, action, client_iqn):
        """
        use the 'host' sub-command to add and remove hosts from a host
        group. Adding a host will automatically map the host group's disks
        to that specific host.
Removing a host however, does not change the hosts disk masking - it simply removes the host from group. e.g. host add|remove iqn.1994-05.com.redhat:rh7-client """ if action not in HostGroup.valid_actions: self.logger.error("Invalid request - must be " "host add|remove ") return target_iqn = self.parent.parent.name # basic checks client_group = self._get_client_group(target_iqn) client_map = client_group.client_map if client_iqn not in client_map: self.logger.error("'{}' is not managed by a " "group".format(client_iqn)) return current_group = client_map[client_iqn].group_name if action == 'add' and current_group: self.logger.error("'{}' already belongs to " "'{}'".format(client_iqn, current_group)) return elif action == 'remove' and current_group != self.name: self.logger.error("'{}' does not belong to this " "group".format(client_iqn)) return # Basic checks passed, hand-off to the API now group_api = ('{}://{}:{}/api/hostgroup/' '{}/{}'.format(self.http_mode, "localhost", settings.config.api_port, target_iqn, self.name)) api_vars = {"action": action, "members": client_iqn} api = APIRequest(group_api, data=api_vars) api.put() self.logger.debug("- api call responded " "{}".format(api.response.status_code)) if api.response.status_code != 200: self.logger.error("Failed :" "{}".format(response_message(api.response, self.logger))) return # group updated, so update the UI self.logger.debug("Updating the UI") if action == 'add': HostGroupMember(self, 'host', client_iqn) self.update_clients_UI([client_iqn], target_iqn) elif action == 'remove': child = [child for child in self.children if child.name == client_iqn][0] self.delete(child) self.logger.info('ok') def delete(self, child): target_iqn = self.parent.parent.name if child.member_type == 'host': client_group = self._get_client_group(target_iqn) client = client_group.client_map[child.name] client.group_name = '' self.remove_child(child) def ui_command_disk(self, action, disk_name): """ use the 'disk' sub-command to add 
or remove a disk from a specific host group. Removing disks should be done with care, as the remove operation will be executed across all hosts defined to the host group. e.g. disk add|remove rbd/disk_1 """ if action not in HostGroup.valid_actions: self.logger.error("Invalid request - must be " "disk add|remove ") return target_iqn = self.parent.parent.name # simple sanity checks # 1. does the disk exist in the configuration ui_root = self.get_ui_root() all_pools = ui_root.disks.children all_disks = [] for current_pool in all_pools: for current_disk in current_pool.children: all_disks.append(current_disk) if disk_name not in [disk.image_id for disk in all_disks]: self.logger.error("Disk '{}' is not defined within the " "configuration".format(disk_name)) return # 2. For an 'add' request, the disk must not already be in the host # group. Whereas, for a remove request the disk must exist. if action == 'add': if disk_name in self.disks: self.logger.error("'{}' is already defined to this " "host-group".format(disk_name)) return else: if disk_name not in self.disks: self.logger.error("'{}' is not a member of this " "group".format(disk_name)) return mapped_disks = [mapped_disk.name for mapped_disk in self.parent.parent.target_disks.children] if disk_name not in mapped_disks: rc = self.parent.parent.target_disks.add_disk(disk_name, None, None) if rc == 0: self.logger.debug("disk auto-map successful") else: self.logger.error("disk auto-map failed({}), try " "using the /iscsi-targets//disks add " "command".format(rc)) return # Basic checks passed, hand-off to the API group_api = ('{}://{}:{}/api/hostgroup/' '{}/{}'.format(self.http_mode, "localhost", settings.config.api_port, target_iqn, self.name)) api_vars = {"action": action, "disks": disk_name} api = APIRequest(group_api, data=api_vars) api.put() self.logger.debug("- api call responded {}".format(api.response.status_code)) if api.response.status_code != 200: self.logger.error("Failed: " 
"{}".format(response_message(api.response, self.logger))) return # group updated, so update the host-groups UI elements self.logger.debug("Updating the UI") if action == 'add': HostGroupMember(self, 'disk', disk_name) elif action == 'remove': child = [child for child in self.children if child.name == disk_name][0] self.delete(child) self.update_clients_UI(self.members, target_iqn) self.logger.info('ok') @property def members(self): return [child.name for child in self.children if child.member_type == 'host'] @property def disks(self): return [child.name for child in self.children if child.member_type == 'disk'] def _get_client_group(self, target_iqn): ui_root = self.get_ui_root() iscsi_target = [t for t in list(ui_root.target.children) if t.name == target_iqn][0] return iscsi_target.client_group def update_clients_UI(self, client_list, target_iqn): self.logger.debug("rereading the config object") root = self.get_ui_root() config = root._get_config() clients = config['targets'][target_iqn]['clients'] client_group = self._get_client_group(target_iqn) # Clients Object clients_to_update = [client for client in client_group.children if client.name in client_list] # refresh the client with the new config self.logger.debug("resync'ing client lun maps") client_map = client_group.client_map for client in clients_to_update: client_map[client.client_iqn].group_name = self.name client.drop_luns() client.luns = clients[client.client_iqn].get('luns', {}) client.refresh_luns() def summary(self): counts = {'disk': 0, 'host': 0} for child in self.children: counts[child.member_type] += 1 return "Hosts: {}, Disks: {}".format(counts['host'], counts['disk']), \ None class HostGroupMember(UINode): help_intro = ''' The entries here show the hosts and disks that are held within a specific host group definition. Care should be taken when removing disks from a host group, as the remove operation will be performed across each client within the group. 
''' def __init__(self, parent, member_type, name): UINode.__init__(self, name, parent) self.member_type = member_type def summary(self): return "{}".format(self.member_type), True ceph-iscsi-3.9/gwcli/node.py000066400000000000000000000101371470665154300160350ustar00rootroot00000000000000from configshell_fb import ConfigNode from gwcli.utils import console_message import logging __author__ = 'Paul Cuzner' class UICommon(ConfigNode): def __init__(self, name, parent=None, shell=None): ConfigNode.__init__(self, name, parent, shell) self.logger = logging.getLogger('gwcli') def ui_command_goto(self, shortcut='/'): ''' cd to the bookmark at shortcut. See 'help bookmarks' for more info on bookmarks. ''' if shortcut in self.shell.prefs['bookmarks']: return self.ui_command_cd(self.shell.prefs['bookmarks'][shortcut]) else: pass def get_ui_root(self): found = False obj = self while not found: if obj.__class__.__name__ == 'ISCSIRoot': break obj = obj.parent return obj class UIGroup(UICommon): def __init__(self, name, parent=None, shell=None): UICommon.__init__(self, name, parent, shell) self.http_mode = self.parent.http_mode def reset(self): children = set(self.children) # set of child objects for child in children: self.remove_child(child) class UINode(UIGroup): display_attributes = None def __init__(self, name, parent): UIGroup.__init__(self, name, parent) self.http_mode = self.parent.http_mode def ui_command_info(self): """ Show the attributes of the current object. 
""" text = self.get_info() console_message(text) def get_info(self): """ extract the relevant display fields from the object and format ready for printing :return: (str) object meta data based on object's display_attributes list """ display_text = '' if not self.display_attributes: return "'info' not available for this item" field_list = self.display_attributes max_field_size = len(max(field_list, key=len)) for k in field_list: attr_label = k.replace('_', ' ').title() attr_value = getattr(self, k) if isinstance(attr_value, dict): if attr_value: display_text += "{}\n".format(attr_label) max_dict_field = len(max(attr_value.keys(), key=len)) for dict_key in sorted(attr_value): if isinstance(attr_value[dict_key], dict): inner_dict = attr_value[dict_key] display_value = ", ".join(["=".join( [key, str(val)]) for key, val in inner_dict.items()]) display_text += ("- {:<{}} .. {}\n".format(dict_key, max_dict_field, display_value)) else: display_text += ("- {} .. {}\n".format(dict_key, attr_value[dict_key])) continue else: attr_value = 'UNDEFINED\n' if isinstance(attr_value, list): item_1 = True attr_string = '' for item in attr_value: if item_1: attr_string = "{}\n".format(str(item)) item_1 = False else: attr_string += "{}{}\n".format(" " * (max_field_size + 4), str(item)) attr_value = attr_string[:-1] display_text += ("{:<{}} .. {}\n".format(attr_label, max_field_size, attr_value)) return display_text class UIRoot(UICommon): """ The gwcli hierarchy root node. 
""" def __init__(self, shell, as_root=False): UICommon.__init__(self, '/', shell=shell) self.as_root = as_root ceph-iscsi-3.9/gwcli/storage.py000066400000000000000000001226741470665154300165660ustar00rootroot00000000000000import json import time try: import Queue except ImportError: import queue as Queue import threading import rados import rbd from gwcli.node import UIGroup, UINode from gwcli.client import Clients from gwcli.utils import (console_message, response_message, GatewayAPIError, APIRequest, valid_snapshot_name, get_config, refresh_control_values) from ceph_iscsi_config.utils import valid_size, convert_2_bytes, human_size, this_host from ceph_iscsi_config.lun import LUN import ceph_iscsi_config.settings as settings # FIXME - this ignores the warning issued when verify=False is used from requests.packages import urllib3 urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning) class Disks(UIGroup): scan_interval = 0.02 help_intro = ''' The disks section provides a summary of the rbd images that have been defined and added to the gateway nodes. Each disk listed will provide a view of it's capacity, and you can use the 'info' subcommand to retrieve lower level information about the rbd image. The capacity shown against each disk is the logical size of the rbd image, not the physical space the image is consuming within rados. ''' def __init__(self, parent): UIGroup.__init__(self, 'disks', parent) self.disk_info = {} self.disk_lookup = {} self.scan_threads = self.get_ui_root().scan_threads self.scan_queue = None self.scan_mutex = None def _get_disk_meta(self, cluster_ioctx, disk_meta): """ Use the provided cluster context to take an rbd image name from the queue and extract size and feature code. 
        The resulting data is then stored in a shared dict accessible by
        all scan threads

        :param cluster_ioctx: cluster io context object
        :param disk_meta: dict of rbd images, holding metadata
        :return: None
        """
        while True:
            time.sleep(Disks.scan_interval)
            try:
                rbd_name = self.scan_queue.get(block=False)
            except Queue.Empty:
                break
            else:
                pool, image = rbd_name.split('/')
                disk_meta[rbd_name] = {}

                with cluster_ioctx.open_ioctx(pool) as ioctx:
                    try:
                        with rbd.Image(ioctx, image) as rbd_image:
                            size = rbd_image.size()
                            features = rbd_image.features()
                            snapshots = list(rbd_image.list_snaps())
                            self.scan_mutex.acquire()
                            disk_meta[rbd_name] = {
                                "size": size,
                                "features": features,
                                "snapshots": snapshots
                            }
                            self.scan_mutex.release()
                    except rbd.ImageNotFound:
                        pass

    def refresh(self, disk_info):
        """
        refresh the disk information by triggering a rescan of the rbd
        images defined in the config object. Scanning uses a common queue
        object, and multiple rbd scan threads to reduce the rescan time
        for larger environments.

        :param disk_info: dict corresponding to the disk subtree of the
               config object
        :return: None
        """
        self.logger.debug("Refreshing disk information from the config "
                          "object")
        self.disk_info = disk_info
        self.logger.debug("- Scanning will use {} scan "
                          "threads".format(self.scan_threads))

        self.scan_queue = Queue.Queue()
        self.scan_mutex = threading.Lock()
        disk_meta = dict()

        # Load the queue
        for disk_name in disk_info.keys():
            self.scan_queue.put(disk_name)

        start_time = int(time.time())
        scan_threads = []

        # Open a connection to the cluster
        with rados.Rados(conffile=settings.config.cephconf,
                         name=settings.config.cluster_client_name) as cluster:

            # Initiate the scan threads
            for _t in range(0, self.scan_threads, 1):
                _thread = threading.Thread(target=self._get_disk_meta,
                                           args=(cluster, disk_meta))
                _thread.start()
                scan_threads.append(_thread)

            for _t in scan_threads:
                _t.join()

        end_time = int(time.time())
        self.logger.debug("- rbd image scan complete: "
                          "{}s".format(end_time - start_time))

        # Load the disk
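The `refresh`/`_get_disk_meta` pair implements a simple fan-out scan: worker threads drain a shared queue and publish their results into a dict under a lock. A self-contained sketch of that pattern, with a trivial stand-in for the rbd lookups (which need a live cluster):

```python
import queue
import threading


def scan(items, num_threads=4):
    """Fan-out scan: workers drain a shared queue; results guarded by a lock."""
    work = queue.Queue()
    for item in items:
        work.put(item)

    results = {}
    lock = threading.Lock()

    def worker():
        while True:
            try:
                name = work.get(block=False)
            except queue.Empty:
                break  # queue drained - worker exits
            # stand-in for the rbd size/features/snapshots lookup
            meta = {"size": len(name)}
            with lock:
                results[name] = meta

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results


meta = scan(["rbd/disk_1", "rbd/disk_2"])
```

Draining with `get(block=False)` and catching `queue.Empty` lets each worker terminate naturally once the queue is exhausted, which is the same shutdown mechanism `_get_disk_meta` relies on.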
        # configuration
        disk_info_by_pool = self._group_disks_by_pool(disk_info)
        for pool, pool_disks_config in disk_info_by_pool.items():
            DiskPool(self, pool, pool_disks_config, disk_meta)

    def _group_disks_by_pool(self, disks_config):
        result = {}
        for disk_id, disk_config in disks_config.items():
            pool, image = disk_id.split('/')
            if pool not in result:
                result[pool] = []
            result[pool].append(disk_config)
        return result

    def reset(self):
        children = set(self.children)  # set of child objects
        for child in children:
            self.remove_child(child)

    def ui_command_attach(self, pool=None, image=None, backstore=None,
                          wwn=None):
        """
        Assign a previously created RBD image to the gateway(s)

        The attach command supports two request formats;
        Long format  : attach pool=<pool_name> image=<image_name>
        Short format : attach <pool_name>/<image_name>

        e.g.
        attach pool=rbd image=testimage
        attach rbd/testimage

        The syntax of each parameter is as follows;
        pool  : Pool and image name may contain a-z, A-Z, 0-9, '_', or '-'
        image   characters.
        """

        if pool and '/' in pool:
            # shorthand version of the command
            self.logger.debug("user provided pool/image format request")
            pool, image = pool.split('/')
        else:
            # long format request
            if not pool or not image:
                self.logger.error("Invalid create: pool and image "
                                  "parameters are needed")
                return

        self.logger.debug("CMD: /disks/ attach pool={} "
                          "image={}".format(pool, image))

        self.create_disk(pool=pool, image=image, create_image=False,
                         backstore=backstore, wwn=wwn)

    def ui_command_create(self, pool=None, image=None, size=None,
                          backstore=None, wwn=None, count=1):
        """
        Create a RBD image and assign to the gateway(s).

        The create command supports two request formats;
        Long format  : create pool=<pool_name> image=<image_name> size=<size>
        Short format : create <pool_name>/<image_name> <size>

        e.g.
        create pool=rbd image=testimage size=100g
        create rbd/testimage 100g

        The syntax of each parameter is as follows;
        pool  : Pool and image name may contain a-z, A-Z, 0-9, '_', or '-'
        image   characters.
        size  : integer, suffixed by the allocation unit - either m/M, g/G
                or t/T representing the MB/GB/TB [1]
        backstore : lio backstore
        wwn   : unit serial number
        count : integer (default is 1)[2]. If the request provides a count=
                parameter the image name will be used as a prefix, and the
                count used as a suffix to create multiple images from the
                same request.

        e.g.
        create rbd/test 1g count=5
        -> create 5 images called test1..test5 each of 1GB in size from the
           rbd pool

        Notes.
        1) size does not support decimal representations
        2) Using a count to create multiple images will lock the CLI until
           all images have been created
        """
        # NB the text above is shown on a help create request in the CLI

        if pool and '/' in pool:
            # shorthand version of the command
            self.logger.debug("user provided pool/image format request")
            if image:
                if size:
                    try:
                        count = int(size)
                    except ValueError:
                        self.logger.error("Invalid count provided "
                                          "({} ?)".format(size))
                        return
                size = image
            pool, image = pool.split('/')
        else:
            # long format request
            if not pool or not image:
                self.logger.error("Invalid create: pool and image "
                                  "parameters are needed")
                return

        if size and not valid_size(size):
            self.logger.error("Invalid size requested. Must be an integer, "
                              "suffixed by M, G or T. See help for more "
                              "info")
            return

        if count:
            if not str(count).isdigit():
                self.logger.error("invalid count format, must be an "
                                  "integer")
                return

        self.logger.debug("CMD: /disks/ create pool={} "
                          "image={} size={} "
                          "count={} ".format(pool, image, size, count))

        self.create_disk(pool=pool, image=image, size=size, count=count,
                         backstore=backstore, wwn=wwn)

    def _valid_pool(self, pool=None):
        """
        ensure the requested pool is ok to use.
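The size= syntax described above (an integer suffixed with M, G or T, no decimals) is actually validated and converted by `valid_size` and `convert_2_bytes` from `ceph_iscsi_config.utils`, which sit outside this excerpt. A rough sketch of how such a suffix parser can work (my own approximation, not the library code):

```python
import re

# Binary multipliers for the three documented suffixes
UNITS = {'M': 1024 ** 2, 'G': 1024 ** 3, 'T': 1024 ** 4}


def parse_size(text):
    """Parse '100G' style sizes into bytes; return None when invalid."""
    m = re.match(r'^(\d+)([MGT])$', text.upper())
    if not m:
        return None  # decimals and unknown suffixes are rejected
    count, unit = m.groups()
    return int(count) * UNITS[unit]


print(parse_size('100G'))  # 107374182400
print(parse_size('1.5G'))  # None - decimal sizes are not supported
```

Upper-casing the input first is what lets the CLI accept both `100g` and `100G`, matching the m/M, g/G, t/T wording in the help text.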
currently this is just a pool type check, but could also include checks against freespace in the pool, it's overcommit ratio etc etc :param pool: (str) pool name :return: (bool) showing whether the pool is acceptable for a new disk """ # first check that the intended pool is compatible with rbd images root = self.get_ui_root() pools = root.ceph.cluster.pools pool_object = pools.pool_lookup.get(pool, None) if pool_object: if pool_object.type == 'replicated': self.logger.debug("pool '{}' is ok to use".format(pool)) return True self.logger.error("Invalid pool ({}). Must already exist and " "be replicated".format(pool)) return False def create_disk(self, pool=None, image=None, size=None, count=1, parent=None, create_image=True, backstore=None, wwn=None): rc = 0 if not parent: parent = self local_gw = this_host() disk_key = "{}/{}".format(pool, image) if not self._valid_pool(pool): return self.logger.debug("Creating/mapping disk {}/{}".format(pool, image)) # make call to local api server's disk endpoint disk_api = '{}://localhost:{}/api/disk/{}'.format(self.http_mode, settings.config.api_port, disk_key) api_vars = {'pool': pool, 'owner': local_gw, 'count': count, 'mode': 'create', 'create_image': 'true' if create_image else 'false', 'backstore': backstore, 'wwn': wwn} if size: api_vars['size'] = size.upper() self.logger.debug("Issuing disk create request") api = APIRequest(disk_api, data=api_vars) api.put() if api.response.status_code == 200: # rbd create and map successful across all gateways so request # it's details and add to the UI self.logger.debug("- LUN(s) ready on all gateways") self.logger.info("ok") self.logger.debug("Updating UI for the new disk(s)") for n in range(1, (int(count) + 1), 1): if int(count) > 1: disk_key = "{}/{}{}".format(pool, image, n) else: disk_key = "{}/{}".format(pool, image) disk_api = ('{}://localhost:{}/api/disk/' '{}'.format(self.http_mode, settings.config.api_port, disk_key)) api = APIRequest(disk_api) api.get() if 
api.response.status_code == 200: try: image_config = api.response.json() except Exception: raise GatewayAPIError("Malformed REST API response") disk_pool = None for current_disk_pool in self.children: if current_disk_pool.name == pool: disk_pool = current_disk_pool break if disk_pool: Disk(disk_pool, disk_key, image_config) else: DiskPool(parent, pool, [image_config]) self.logger.debug("{} added to the UI".format(disk_key)) else: raise GatewayAPIError("Unable to retrieve disk details " "for '{}' from the API".format(disk_key)) ceph_pools = self.parent.ceph.cluster.pools ceph_pools.refresh() else: self.logger.error("Failed : {}".format(response_message(api.response, self.logger))) rc = 8 return rc def find_hosts(self): hosts = [] tgt_group = self.parent.target.children for tgt in tgt_group: for tgt_child in tgt.children: if isinstance(tgt_child, Clients): hosts += list(tgt_child.children) return hosts def disk_in_use(self, image_id): """ determine if a given disk image is mapped to any of the defined clients @param: image_id ... rbd image name (. format) :return: either an empty list or a list of clients using the disk image """ disk_users = [] client_list = self.find_hosts() for client in client_list: client_disks = [mlun.rbd_name for mlun in client.children] if image_id in client_disks: disk_users.append(client.name) return disk_users def ui_command_resize(self, image_id, size): """ The resize command allows you to increase the size of an existing rbd image. Attempting to decrease the size of an rbd will be ignored. image_id: disk name (pool/image format) size: new size including unit suffix e.g. 
300G """ self.logger.debug("CMD: /disks/ resize {} {}".format(image_id, size)) if image_id not in self.disk_lookup: self.logger.error("the disk '{}' does not exist in this " "configuration".format(image_id)) return disk = self.disk_lookup[image_id] disk.resize(size) def ui_command_reconfigure(self, image_id, attribute, value): """ The reconfigure command allows you to tune various lun attributes. An empty value for an attribute resets the lun attribute to its default. image_id : disk name (pool/image format) attribute : attribute to reconfigure. supported attributes: value : value of the attribute to reconfigure See the create command help for a list of attributes that can be reconfigured. e.g. set max_data_area_mb - reconfigure image_id=rbd.disk_1 attribute=max_data_area_mb value=128 reset max_data_area_mb to default - reconfigure image_id=rbd.disk_1 attribute=max_data_area_mb value= """ if image_id in self.disk_lookup: disk = self.disk_lookup[image_id] disk.reconfigure(attribute, value) else: self.logger.error("the disk '{}' does not exist in this " "configuration".format(image_id)) def ui_command_info(self, image_id): """ Provide disk configuration information (rbd and LIO details are provided) """ self.logger.debug("CMD: /disks/ info {}".format(image_id)) if image_id in self.disk_lookup: disk = self.disk_lookup[image_id] text = disk.get_info() console_message(text) else: self.logger.error("disk name provided does not exist") def ui_command_detach(self, image_id): """ Delete a given rbd image from the configuration but not from ceph. > detach e.g. > detach rbd/disk_1 "disk_name" refers to the name of the disk as shown in the UI, for example rbd/disk_1. """ self.delete_disk(image_id, True) def ui_command_delete(self, image_id): """ Delete a given rbd image from the configuration and ceph. This is a destructive action that could lead to data loss, so please ensure the rbd image name is correct! > delete e.g. 
> delete rbd/disk_1 "disk_name" refers to the name of the disk as shown in the UI, for example rbd/disk_1. Also note that the delete process is a synchronous task, so the larger the rbd image is, the longer the delete will take to run. """ self.delete_disk(image_id, False) def delete_disk(self, image_id, preserve_image): all_disks = [] for pool in self.children: for disk in pool.children: all_disks.append(disk) # Perform a quick 'sniff' test on the request if image_id not in [disk.image_id for disk in all_disks]: self.logger.error("Disk '{}' is not defined to the " "configuration".format(image_id)) return self.logger.debug("CMD: /disks delete {}".format(image_id)) self.logger.debug("Starting delete for rbd {}".format(image_id)) local_gw = this_host() api_vars = { 'purge_host': local_gw, 'preserve_image': 'true' if preserve_image else 'false' } disk_api = '{}://{}:{}/api/disk/{}'.format(self.http_mode, local_gw, settings.config.api_port, image_id) api = APIRequest(disk_api, data=api_vars) api.delete() if api.response.status_code == 200: self.logger.debug("- rbd removed from all gateways, and deleted") disk_object = [disk for disk in all_disks if disk.image_id == image_id][0] pool, _ = image_id.split('/') pool_object = [pool_object for pool_object in self.children if pool_object.name == pool][0] pool_object.remove_child(disk_object) if len(pool_object.children) == 0: self.remove_child(pool_object) del self.disk_info[image_id] del self.disk_lookup[image_id] else: self.logger.debug("delete request failed - " "{}".format(api.response.status_code)) self.logger.error("{}".format(response_message(api.response, self.logger))) return ceph_pools = self.parent.ceph.cluster.pools ceph_pools.refresh() self.logger.info('ok') def _valid_request(self, pool, image, size): """ Validate the parameters of a create request :param pool: rados pool name :param image: rbd image name :param size: size of the rbd (unit suffixed e.g. 
20G) :return: boolean, indicating whether the parameters may be used or not """ ui_root = self.get_ui_root() state = True discovered_pools = [rados_pool.name for rados_pool in ui_root.ceph.cluster.pools.children] existing_rbds = self.disk_info.keys() storage_key = "{}/{}".format(pool, image) if not size: self.logger.error("Size parameter is missing") state = False elif not valid_size(size): self.logger.error("Size is invalid") state = False elif pool not in discovered_pools: self.logger.error("pool name is invalid") state = False elif storage_key in existing_rbds: self.logger.error("image of that name already defined") state = False return state def summary(self): total_bytes = 0 total_disks = 0 for pool in self.children: total_disks += len(pool.children) for disk in pool.children: total_bytes += disk.size return '{}, Disks: {}'.format(human_size(total_bytes), total_disks), None class DiskPool(UIGroup): help_intro = ''' Disks within a pool. The capacity shown against each pool is the logical size of the rbd images, not the physical space the images are consuming within rados. 
''' def __init__(self, parent, pool, pool_disks_config, disks_meta=None): UIGroup.__init__(self, pool, parent) self.pool_disks_config = pool_disks_config self.disks_meta = disks_meta self.refresh() def refresh(self): for pool_disk_config in self.pool_disks_config: disk_id = '{}/{}'.format(pool_disk_config['pool'], pool_disk_config['image']) size = self.disks_meta[disk_id].get('size', 0) if self.disks_meta else None features = self.disks_meta[disk_id].get('features', 0) if self.disks_meta else None snapshots = self.disks_meta[disk_id].get('snapshots', []) if self.disks_meta else None Disk(self, image_id=disk_id, image_config=pool_disk_config, size=size, features=features, snapshots=snapshots) def summary(self): total_bytes = 0 for disk in self.children: total_bytes += disk.size return '{} ({})'.format(self.name, human_size(total_bytes)), None class Disk(UINode): display_attributes = ["image", "ceph_cluster", "pool", "wwn", "size_h", "feature_list", "snapshots", "owner", "lock_owner", "state", "backstore", "backstore_object_name", "control_values"] def __init__(self, parent, image_id, image_config, size=None, features=None, snapshots=None): """ Create a disk entry under the Disks subtree :param parent: parent object (instance of the Disks class) :param image_id: key used in the config object for this rbd image (pool/image_name) - str :param image_config: meta data for this image :return: """ self.pool, self.rbd_image = image_id.split('/', 1) UINode.__init__(self, self.rbd_image, parent) self.image_id = image_id self.size = 0 self.size_h = '' self.features = 0 self.feature_list = [] self.snapshots = [] self.backstore = image_config['backstore'] self.backstore_object_name = image_config['backstore_object_name'] self.controls = {} self.control_values = {} self.ceph_cluster = self.parent.parent.parent.ceph.cluster.name disk_map = self.parent.parent.disk_info if image_id not in disk_map: disk_map[image_id] = {} if image_id not in self.parent.parent.disk_lookup: 
self.parent.parent.disk_lookup[image_id] = self self._apply_config(image_config) self._apply_status() if not size: # Size/features are not stored in the config, since it can be changed # outside of this tool-chain, so we get them dynamically self._refresh_config() else: # size and features have been passed in from the Disks.refresh # method self.exists = True self.size = size self.size_h = human_size(self.size) self.features = features self.feature_list = self._get_features() self._parse_snapshots(snapshots) # update the parent's disk info map disk_map = self.parent.parent.disk_info disk_map[self.image_id]['size'] = self.size disk_map[self.image_id]['size_h'] = self.size_h def _apply_status(self): disk_api = ('{}://localhost:{}/api/' 'disk/{}'.format(self.http_mode, settings.config.api_port, self.image_id)) self.logger.debug("disk GET status for {}".format(self.image_id)) api = APIRequest(disk_api) api.get() # set both the 'lock_owner' and 'state' to Unknown as default in # case if the api response fails the gwcli command will fail too self.__setattr__('lock_owner', 'Unknown') self.__setattr__('state', 'Unknown') if api.response.status_code == 200: info = api.response.json() status = info.get("status") if status is None: return state = status.get('state') if (state): self.__setattr__('state', state) owner = status.get('lock_owner') if (owner): self.__setattr__('lock_owner', owner) def _apply_config(self, image_config): # set the remaining attributes based on the fields in the dict disk_map = self.parent.parent.disk_info if 'owner' not in image_config: self.__setattr__('owner', '') for k, v in image_config.items(): disk_map[self.image_id][k] = v self.__setattr__(k, v) refresh_control_values(self.control_values, self.controls, LUN.SETTINGS[image_config['backstore']]) def summary(self): if not self.exists: return 'NOT FOUND', False status = True disk_api = ('{}://localhost:{}/api/' 'disk/{}'.format(self.http_mode, settings.config.api_port, self.image_id)) 
self.logger.debug("disk GET status for {}".format(self.image_id)) api = APIRequest(disk_api) api.get() state = "State unknown" if api.response.status_code == 200: info = api.response.json() disk_status = info.get("status") if disk_status: state = disk_status.get("state") if state != "Online": status = False msg = [self.image_id, "({}, {})".format(state, self.size_h)] return " ".join(msg), status def _parse_snapshots(self, snapshots): self.snapshots = ["{name} ({size})".format(name=s['name'], size=human_size(s['size'])) for s in snapshots] self.snapshot_names = [s['name'] for s in snapshots] def _get_features(self): """ return a human readable list of features for this rbd :return: (list) of feature names from the feature code """ rbd_features = {getattr(rbd, f): f for f in rbd.__dict__ if 'RBD_FEATURE_' in f} feature_idx = sorted(rbd_features) disk_features = [] b_num = bin(self.features).replace('0b', '') ptr = len(b_num) - 1 key_ptr = 0 while ptr >= 0: if b_num[ptr] == '1': disk_features.append(rbd_features[feature_idx[key_ptr]]) key_ptr += 1 ptr -= 1 return disk_features def _refresh_config(self): self._get_meta_data_tcmu() self._get_meta_data_config() def _get_meta_data_config(self): config = get_config() if not config: return self._apply_config(config['disks'][self.image_id]) self._apply_status() def _get_meta_data_tcmu(self): """ query the rbd to get the features and size of the rbd :return: """ self.logger.debug("Refreshing image metadata") with rados.Rados(conffile=settings.config.cephconf, name=settings.config.cluster_client_name) as cluster: with cluster.open_ioctx(self.pool) as ioctx: try: with rbd.Image(ioctx, self.image) as rbd_image: self.exists = True self.size = rbd_image.size() self.size_h = human_size(self.size) self.features = rbd_image.features() self.feature_list = self._get_features() self._parse_snapshots(list(rbd_image.list_snaps())) except rbd.ImageNotFound: self.exists = False # def get_meta_data_krbd(self): # """ # KRBD based method to 
get size and rbd features information # :return: # """ # # image_path is a symlink to the actual /dev/rbdX file # image_path = "/dev/rbd/{}/{}".format(self.pool, self.rbd_image) # dev_id = os.path.realpath(image_path)[8:] # rbd_path = "/sys/devices/rbd/{}".format(dev_id) # # try: # self.features = readcontents(os.path.join(rbd_path, 'features')) # self.size = int(readcontents(os.path.join(rbd_path, 'size'))) # except IOError: # # if we get an ioError here, it means the config object passed # # back from the API is out of step with the physical configuration # # this can happen after a purge_gateways ansible playbook run if # # the gateways do not have there rbd-target-gw daemons reloaded # error_msg = "The API has returned disks that are not on this " \ # "server...reload rbd-target-api?" # # self.logger.critical(error_msg) # raise GatewayError(error_msg) # else: # # self.size_h = human_size(self.size) # # # update the parent's disk info map # disk_map = self.parent.disk_info # # disk_map[self.image_id]['size'] = self.size # disk_map[self.image_id]['size_h'] = self.size_h def reconfigure(self, attribute, value): controls = {attribute: value} controls_json = json.dumps(controls) ui_root = self.get_ui_root() disk = ui_root.disks.disk_lookup[self.image_id] if not disk.owner: self.logger.error("Cannot reconfigure until disk assigned to target") return local_gw = this_host() # Issue the api request for reconfigure disk_api = ('{}://localhost:{}/api/' 'disk/{}'.format(self.http_mode, settings.config.api_port, self.image_id)) api_vars = {'pool': self.pool, 'owner': local_gw, 'controls': controls_json, 'mode': 'reconfigure'} self.logger.debug("Issuing reconfigure request: attribute={}, " "value={}".format(attribute, value)) api = APIRequest(disk_api, data=api_vars) api.put() if api.response.status_code == 200: self.logger.info('ok') self._refresh_config() else: self.logger.error("Failed to reconfigure : " "{}".format(response_message(api.response, self.logger))) def 
resize(self, size): """ Perform the resize operation, and sync the disk size across each of the gateways :param size: (int) new size for the rbd image :return: """ # resize is actually managed by the same lun and api endpoint as # create so this logic is very similar to a 'create' request size_rqst = size.upper() if not valid_size(size_rqst): self.logger.error("Size is invalid") return # At this point the size request needs to be honoured self.logger.debug("Resizing {} to {}".format(self.image_id, size_rqst)) local_gw = this_host() # Issue the api request for the resize disk_api = ('{}://localhost:{}/api/' 'disk/{}'.format(self.http_mode, settings.config.api_port, self.image_id)) api_vars = {'pool': self.pool, 'size': size_rqst, 'owner': local_gw, 'mode': 'resize'} self.logger.debug("Issuing resize request") api = APIRequest(disk_api, data=api_vars) api.put() if api.response.status_code == 200: # at this point the resize request was successful, so we need to # update the ceph pool meta data (%commit etc) self._update_pool() self.size_h = size_rqst self.size = convert_2_bytes(size_rqst) self.logger.info('ok') else: self.logger.error("Failed to resize : " "{}".format(response_message(api.response, self.logger))) def snapshot(self, action, name): self.logger.debug("CMD: /disks/{} snapshot action={} " "name={}".format(self.image_id, action, name)) valid_actions = ['create', 'delete', 'rollback'] if action not in valid_actions: self.logger.error("you can only create, delete, or rollback - " "{} is invalid ".format(action)) return if action == 'create': if name in self.snapshot_names: self.logger.error("Snapshot {} already exists".format(name)) return if not valid_snapshot_name(name): self.logger.error("Snapshot {} contains invalid characters".format(name)) return else: if name not in self.snapshot_names: self.logger.error("Snapshot {} does not exist".format(name)) return if action == 'rollback': self.logger.warning("Please be patient, rollback might take time") 
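The validation above gates snapshot requests before they reach the REST API. As a standalone sketch (hypothetical helper names, mirroring these checks and the `^[^/@]+$` rule that `valid_snapshot_name` applies in gwcli/utils.py):

```python
import re

VALID_ACTIONS = {'create', 'delete', 'rollback'}

def check_snapshot_request(action, name, existing_snapshots):
    """Return an error string describing the first failed check,
    or None when the request looks valid."""
    if action not in VALID_ACTIONS:
        return "you can only create, delete, or rollback - {} is invalid".format(action)
    if action == 'create':
        if name in existing_snapshots:
            return "Snapshot {} already exists".format(name)
        # '/' and '@' are rbd path separators, so they may not appear in a name
        if not re.match(r"^[^/@]+$", name):
            return "Snapshot {} contains invalid characters".format(name)
    elif name not in existing_snapshots:
        return "Snapshot {} does not exist".format(name)
    return None
```

A request that passes these checks is then issued as a PUT (or DELETE) against the disksnap endpoint, as the code below does.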
self.logger.debug("Issuing snapshot {} request".format(action)) disk_api = ('{}://localhost:{}/api/' 'disksnap/{}/{}/{}'.format(self.http_mode, settings.config.api_port, self.pool, self.rbd_image, name)) if action == 'delete': api = APIRequest(disk_api) api.delete() else: api_vars = {'mode': action} api = APIRequest(disk_api, data=api_vars) api.put() if api.response.status_code == 200: if action == 'create' or action == 'delete': self._refresh_config() self.logger.info('ok') else: self.logger.error("Failed to {} snapshot: " "{}".format(action, response_message(api.response, self.logger))) def _update_pool(self): """ use the object model to track back from the disk to the relevant pool in the local ceph cluster and update the commit stats """ root = self.parent.parent.parent ceph_group = root.ceph cluster = ceph_group.cluster pool = cluster.pools.pool_lookup.get(self.pool) if pool: # update the pool commit numbers pool._calc_overcommit() def ui_command_resize(self, size): """ The resize command allows you to increase the size of an existing rbd image. Attempting to decrease the size of an rbd will be ignored. size: new size including unit suffix e.g. 300G """ self.resize(size) def ui_command_reconfigure(self, attribute, value): """ The reconfigure command allows you to tune various lun attributes. An empty value for an attribute resets the lun attribute to its default. attribute : attribute to reconfigure. supported attributes: value : value of the attribute to reconfigure See the create command help for a list of attributes that can be reconfigured. e.g. set max_data_area_mb - reconfigure attribute=max_data_area_mb value=128 reset max_data_area_mb to default - reconfigure attribute=max_data_area_mb value= """ self.reconfigure(attribute, value) def ui_command_snapshot(self, action, name): """ The snapshot command allows you to create, delete, and rollback snapshots on an existing rbd image. e.g.
snapshot create snap1 snapshot delete snap1 snapshot rollback snap1 action: create, delete, or rollback name: snapshot name """ self.snapshot(action, name) class TargetDisks(UIGroup): help_intro = ''' The target disks section shows the disks that are mapped to this target. Disks may be added or deleted using the add/delete command, but the same disk cannot be mapped to multiple targets. ''' def __init__(self, parent): UIGroup.__init__(self, 'disks', parent) self.http_mode = self.parent.http_mode self.target_iqn = self.parent.name def load(self, disks): for image_id, image in disks.items(): TargetDisk(self, image_id, image['lun_id']) def ui_command_add(self, disk, lun_id=None): self.add_disk(disk, lun_id) def add_disk(self, disk, lun_id, success_msg='ok'): rc = 0 api_vars = {"disk": disk, "lun_id": lun_id} targetdisk_api = ('{}://localhost:{}/api/' 'targetlun/{}'.format(self.http_mode, settings.config.api_port, self.target_iqn)) api = APIRequest(targetdisk_api, data=api_vars) api.put() if api.response.status_code == 200: config = get_config() owner = config['disks'][disk]['owner'] ui_root = self.get_ui_root() disk = ui_root.disks.disk_lookup[disk] disk.owner = owner self.logger.debug("- Disk '{}' owner updated to {}" .format(disk.image_id, owner)) target_config = config['targets'][self.target_iqn] lun_id = target_config['disks'][disk.image_id]['lun_id'] TargetDisk(self, disk.image_id, lun_id) self.logger.debug("- TargetDisk '{}' added".format(disk.image_id)) if success_msg: self.logger.info(success_msg) else: self.logger.error("Failed - {}".format(response_message(api.response, self.logger))) rc = 1 return rc def ui_command_delete(self, disk): self.delete_disk(disk) def delete_disk(self, disk): rc = 0 api_vars = {"disk": disk} targetdisk_api = ('{}://localhost:{}/api/' 'targetlun/{}'.format(self.http_mode, settings.config.api_port, self.target_iqn)) api = APIRequest(targetdisk_api, data=api_vars) api.delete() if api.response.status_code == 200: target_disk = 
[target_disk for target_disk in self.children if target_disk.name == disk][0] self.remove_child(target_disk) self.logger.debug("- TargetDisk '{}' deleted".format(disk)) ui_root = self.get_ui_root() disk = ui_root.disks.disk_lookup[disk] disk.owner = '' self.logger.debug("- Disk '{}' owner deleted".format(disk)) self.logger.info('ok') else: self.logger.error("Failed - {}".format(response_message(api.response, self.logger))) rc = 1 return rc def summary(self): return "Disks: {}".format(len(self.children)), None class TargetDisk(UINode): help_intro = ''' Represents a disk that is mapped to the target. ''' display_attributes = ['name', 'owner'] def __init__(self, parent, name, lun_id): UINode.__init__(self, name, parent) ui_root = self.get_ui_root() disk = ui_root.disks.disk_lookup[name] self.owner = disk.owner self.lun_id = lun_id def summary(self): return "Owner: {}, Lun: {}".format(self.owner, self.lun_id), True ceph-iscsi-3.9/gwcli/utils.py000066400000000000000000000430341470665154300162520ustar00rootroot00000000000000import requests from requests import Response import sys import re import os import subprocess from rtslib_fb.utils import normalize_wwn, RTSLibError from ceph_iscsi_config.client import GWClient import ceph_iscsi_config.settings as settings from ceph_iscsi_config.utils import (resolve_ip_addresses, CephiSCSIError, this_host) __author__ = 'Paul Cuzner' class Colors(object): map = {'green': '\x1b[32;1m', 'red': '\x1b[31;1m', 'yellow': '\x1b[33;1m', 'blue': '\x1b[34;1m'} def readcontents(filename): with open(filename, 'r') as input_file: content = input_file.read().rstrip() return content def get_config(): """ use the /config api to return the current gateway configuration :return: (dict) of the config object """ http_mode = "https" if settings.config.api_secure else "http" api_rqst = "{}://localhost:{}/api/config".format(http_mode, settings.config.api_port) api = APIRequest(api_rqst) api.get() if api.response.status_code == 200: try: return 
api.response.json() except Exception: pass return {} def valid_gateway(target_iqn, gw_name, gw_ips, config): """ validate the request for a new gateway :param target_iqn: (str) iqn of the target the gateway will serve :param gw_name: (str) host (shortname) of the gateway :param gw_ips: (list) ip addresses on the gw that will be used for iSCSI :param config: (dict) current config :return: (str) "ok" or error description """ http_mode = 'https' if settings.config.api_secure else "http" # if the gateway request already exists in the config, computer says "no" target_config = config['targets'][target_iqn] if gw_name in target_config['portals']: return "Gateway name {} already defined".format(gw_name) for gw_ip in gw_ips: if gw_ip in target_config.get('ip_list', []): return "IP address already defined in the configuration" # validate the gateway name is resolvable if not resolve_ip_addresses(gw_name): return ("Gateway '{}' is not resolvable to an IP address".format(gw_name)) # validate the ip_address is a valid ip for gw_ip in gw_ips: if not resolve_ip_addresses(gw_ip): return ("IP address provided is not usable (name doesn't" " resolve, or not a valid IPv4/IPv6 address)") # At this point the request seems reasonable, so let's check a bit deeper gw_api = '{}://{}:{}/api'.format(http_mode, gw_name, settings.config.api_port) # check the intended host actually has the requested IP available api = APIRequest(gw_api + '/sysinfo/ip_addresses') api.get() if api.response.status_code != 200: return ("ip_addresses query to {} failed - check " "rbd-target-api log. Is the API server " "running and in the right mode (http/https)?".format(gw_name)) try: target_ips = api.response.json()['data'] except Exception: return "Malformed REST API response" for gw_ip in gw_ips: if gw_ip not in target_ips: return ("IP address of {} is not available on {}.
Valid " "IPs are: {}".format(gw_ip, gw_name, ','.join(target_ips))) # check that config file on the new gateway matches the local machine api = APIRequest(gw_api + '/sysinfo/checkconf') api.get() if api.response.status_code != 200: return ("checkconf API call to {} failed with " "code {}".format(gw_name, api.response.status_code)) # compare the hash of the new gateway's conf file with the local one local_hash = settings.config.hash() try: remote_hash = str(api.response.json()['data']) except Exception: remote_hash = None if local_hash != remote_hash: return ("/etc/ceph/iscsi-gateway.cfg on {} does " "not match the local version. Correct and " "retry request".format(gw_name)) # Check for package version dependencies api = APIRequest(gw_api + '/sysinfo/checkversions') api.get() if api.response.status_code != 200: try: errors = api.response.json()['data'] except Exception: return "Malformed REST API response" return ("{} failed package validation checks - " "{}".format(gw_name, ','.join(errors))) # At this point the gateway seems valid return "ok" def get_remote_gateways(config, logger, local_gw_required=True): """ Return the list of remote gateways. :param config: Config object with gws setup. :param logger: Logger object :param local_gw_required: Check if local_gw is defined within gateways configuration :return: A list of gateway names; raises CephiSCSIError if not run on a gateway in the config """ local_gw = this_host() logger.debug("this host is {}".format(local_gw)) gateways = [key for key in config if isinstance(config[key], dict)] logger.debug("all gateways - {}".format(gateways)) if local_gw_required and local_gw not in gateways: raise CephiSCSIError("{} cannot be used to perform this operation " "because it is not defined within the gateways " "configuration".format(local_gw)) if local_gw in gateways: gateways.remove(local_gw) logger.debug("remote gateways: {}".format(gateways)) return gateways def valid_credentials(username, password, mutual_username, mutual_password): """ Returns `None` if credentials are acceptable, otherwise return an error message username / mutual_username is 8-64 chars long containing any alphanumeric in [0-9a-zA-Z] and '.' ':' '@' '_' '-' password / mutual_password is 12-16 chars long containing any alphanumeric in [0-9a-zA-Z] and '@' '-' '_' '/' """ usr_regex = re.compile(r"^[\w\\.\:\@\_\-]{8,64}$") pw_regex = re.compile(r"^[\w\@\-\_\/]{12,16}$") if username and not password: return 'Password is required' if not username and (password or mutual_username): return 'Username is required' if mutual_username and not mutual_password: return 'Mutual password is required' if not mutual_username and mutual_password: return 'Mutual username is required' if username and len(username) < 8: return 'Minimum length of username is 8 characters' if username and len(username) > 64: return 'Maximum length of username is 64 characters' if username and not usr_regex.search(username): return 'Invalid username' if mutual_username and len(mutual_username) < 8: return 'Minimum length of mutual username is 8 characters' if mutual_username and len(mutual_username) > 64: return 'Maximum length of mutual username is 64 characters' if mutual_username and not
usr_regex.search(mutual_username): return 'Invalid mutual username' if password and len(password) < 12: return 'Minimum length of password is 12 characters' if password and len(password) > 16: return 'Maximum length of password is 16 characters' if password and not pw_regex.search(password): return 'Invalid password' if mutual_password and len(mutual_password) < 12: return 'Minimum length of mutual password is 12 characters' if mutual_password and len(mutual_password) > 16: return 'Maximum length of mutual password is 16 characters' if mutual_password and not pw_regex.search(mutual_password): return 'Invalid mutual password' return None def valid_client(**kwargs): """ validate a client create or update request, based on mode. :param kwargs: 'mode' is the key field used to determine process flow :return: 'ok' or an error description (str) """ valid_modes = ['create', 'delete', 'auth', 'disk'] parms_passed = set(kwargs.keys()) if 'mode' in kwargs: if kwargs['mode'] not in valid_modes: return ("Invalid client validation mode request - " "asked for {}, available {}".format(kwargs['mode'], valid_modes)) else: return "Invalid call to valid_client - mode is needed" # at this point we have a mode to work with mode = kwargs['mode'] client_iqn = kwargs['client_iqn'] target_iqn = kwargs['target_iqn'] config = get_config() if not config: return "Unable to query the local API for the current config" target_config = config['targets'][target_iqn] if mode == 'create': # iqn must be valid try: normalize_wwn(['iqn'], client_iqn) except RTSLibError: return ("Invalid IQN name for iSCSI") # iqn must not already exist if client_iqn in target_config['clients']: return ("A client with the name '{}' is " "already defined".format(client_iqn)) # Mixing TPG/target auth with ACL is not supported target_username = target_config['auth']['username'] target_password = target_config['auth']['password'] target_auth_enabled = (target_username and target_password) if target_auth_enabled: return 
"Cannot create client because target CHAP authentication is enabled" # Creates can only be done with a minimum number of gw's in place num_gws = len([gw_name for gw_name in config['gateways'] if isinstance(config['gateways'][gw_name], dict)]) if num_gws < settings.config.minimum_gateways: return ("Clients cannot be defined until an HA configuration " "has been defined " "(>{} gateways)".format(settings.config.minimum_gateways)) # at this point pre-req's look good return 'ok' elif mode == 'delete': # client must exist in the configuration if client_iqn not in target_config['clients']: return ("{} is not defined yet - nothing to " "delete".format(client_iqn)) this_client = target_config['clients'].get(client_iqn) if this_client.get('group_name', None): return ("Unable to delete '{}' - it belongs to " "group {}".format(client_iqn, this_client.get('group_name'))) # client to delete must not be logged in - we're just checking locally, # since *all* nodes are set up the same, and a client login request # would normally login to each gateway client_info = GWClient.get_client_info(target_iqn, client_iqn) if client_info['state'] == 'LOGGED_IN': return ("Client '{}' is logged in to {} - unable to delete until" " it's logged out".format(client_iqn, target_iqn)) # at this point, the client looks ok for a DELETE operation return 'ok' elif mode == 'auth': # client iqn must exist if client_iqn not in target_config['clients']: return ("Client '{}' does not exist".format(client_iqn)) username = kwargs['username'] password = kwargs['password'] mutual_username = kwargs['mutual_username'] mutual_password = kwargs['mutual_password'] error_msg = valid_credentials(username, password, mutual_username, mutual_password) if error_msg: return error_msg return 'ok' elif mode == 'disk': this_client = target_config['clients'].get(client_iqn) if this_client.get('group_name', None): return ("Unable to manage disks for '{}' - it belongs to " "group {}".format(client_iqn,
this_client.get('group_name'))) if 'image_list' not in parms_passed: return ("Disk changes require 'image_list' to be set, containing" " a comma separated str of rbd images (pool/image)") rqst_disks = set(kwargs['image_list'].split(',')) mapped_disks = set(target_config['clients'][client_iqn]['luns'].keys()) current_disks = set(config['disks'].keys()) if len(rqst_disks) > len(mapped_disks): # this is an add operation # ensure the image list is 'complete' not just a single disk if not mapped_disks.issubset(rqst_disks): return ("Invalid image list - it must contain existing " "disks AND any additions") # ensure new disk(s) exist - must yield a result since rqst>mapped new_disks = rqst_disks.difference(mapped_disks) if not new_disks.issubset(current_disks): # disks provided are not currently defined return ("Invalid image list - it defines new disks that do " "not currently exist") return 'ok' else: # this is a disk removal operation if kwargs['image_list']: if not rqst_disks.issubset(mapped_disks): return ("Invalid image list ({})".format(rqst_disks)) return 'ok' return 'Unknown error in valid_client function' def valid_snapshot_name(name): regex = re.compile("^[^/@]+$") if not regex.search(name): return False return True def refresh_control_values(control_values, controls, def_settings): for key, setting in def_settings.items(): val = controls.get(setting.name) if val is not None: # config values may be normalized or raw val = setting.to_str(setting.normalize(val)) def_val = setting.to_str(getattr(settings.config, key)) if val is None or val == def_val: control_values[setting.name] = def_val else: control_values[setting.name] = "{} (override)".format(val) class GatewayError(Exception): pass class GatewayAPIError(GatewayError): pass class GatewayLIOError(GatewayError): pass class APIRequest(object): def __init__(self, *args, **kwargs): self.args = args self.kwargs = kwargs # Establish defaults for the API connection if 'auth' not in self.kwargs: self.kwargs['auth'] =
(settings.config.api_user, settings.config.api_password) if 'verify' not in self.kwargs: self.kwargs['verify'] = settings.config.api_ssl_verify self.http_methods = ['get', 'put', 'delete'] self.data = None def _get_response(self): return self.data def __getattr__(self, name): if name in self.http_methods: request_method = getattr(requests, name) try: self.data = request_method(*self.args, **self.kwargs) except requests.ConnectionError: msg = ("Unable to connect to api endpoint @ " "{}".format(self.args[0])) self.data = Response() self.data.status_code = 500 self.data._content = '{{"message": "{}" }}'.format(msg).encode('utf-8') return self._get_response except Exception: raise GatewayAPIError("Unknown error connecting to " "{}".format(self.args[0])) else: # since the attribute is a callable, we must return with # a callable return self._get_response raise AttributeError() response = property(_get_response, doc="get http response output") def progress_message(text, color='green'): sys.stdout.write("{}{}{}\r".format(Colors.map[color], text, '\x1b[0m')) sys.stdout.flush() def console_message(text, color='green'): color_needed = getattr(settings.config, 'interactive', True) if color_needed: print("{}{}{}".format(Colors.map[color], text, '\x1b[0m')) else: print(text) def cmd_exists(command): return any( os.access(os.path.join(path, command), os.X_OK) for path in os.environ["PATH"].split(os.pathsep) ) def os_cmd(command): """ Issue a command to the OS and return the output. NB. check_output default is shell=False :param command: (str) OS command :return: (str) command response (lines terminated with \n) """ cmd_list = command.split(' ') if cmd_exists(cmd_list[0]): cmd_output = subprocess.check_output(cmd_list, stderr=subprocess.STDOUT).rstrip() return cmd_output else: return '' def response_message(response, logger=None): """ Attempts to retrieve the "message" value from a JSON-encoded response message. If the JSON fails to parse, the response will be returned as-is. 
:param response: (requests.Response) response :param logger: optional logger :return: (str) response message """ try: return response.json()['message'] except Exception: if logger: logger.debug("Failed API request: {} {}\n{}".format(response.request.method, response.request.url, response.text)) return "{} {}".format(response.status_code, response.reason) ceph-iscsi-3.9/iscsi-gateway.cfg_sample000066400000000000000000000021701470665154300202220ustar00rootroot00000000000000[config] # name of the *.conf file. A suitable conf file allowing access to the ceph # cluster from the gateway node is required. cluster_name = # Pool name where internal `gateway.conf` object is stored # pool = rbd # CephX client name # cluster_client_name = client. # E.g.: client.admin # gateway_conf = gateway.conf # API settings. # The api supports a number of options that allow you to tailor it to your # local environment. If you want to run the api under https, you will need to # create crt/key files that are compatible for each gateway node (i.e. not # locked to a specific node). SSL crt and key files *must* be called # iscsi-gateway.crt and iscsi-gateway.key and placed in /etc/ceph on *each* # gateway node. With the SSL files in place, you can use api_secure = true # to switch to https mode. 
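The settings above switch the API between http and https. A minimal sketch of how a caller derives the endpoint URL from these settings (hypothetical helper; port 5000 and admin/admin are the documented defaults):

```python
def api_url(host, port=5000, secure=False):
    """Build the rbd-target-api base URL for the configured transport."""
    scheme = 'https' if secure else 'http'
    return '{}://{}:{}/api'.format(scheme, host, port)

# e.g. with the requests library:
#   requests.get(api_url('localhost') + '/config', auth=('admin', 'admin'))
```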
# To support the api, the bare minimum settings are: api_secure = false # Additional API configuration options are as follows (defaults shown); # api_user = admin # api_password = admin # api_port = 5000 # trusted_ip_list = IP,IP # Refer to the ceph-iscsi-config/settings module for more options ceph-iscsi-3.9/rbd-target-api.py000066400000000000000000003431301470665154300166070ustar00rootroot00000000000000#!/usr/bin/env python import sys import os import signal import json import logging import logging.handlers from logging.handlers import RotatingFileHandler import ssl import operator import OpenSSL import tempfile import threading import time import inspect import copy from functools import (reduce, wraps) import rados import rbd import werkzeug from flask import Flask, jsonify, request from rtslib_fb.utils import RTSLibError, normalize_wwn import ceph_iscsi_config.settings as settings from ceph_iscsi_config.gateway_setting import IntSetting, EnumSetting from ceph_iscsi_config.gateway import CephiSCSIGateway from ceph_iscsi_config.discovery import Discovery from ceph_iscsi_config.target import GWTarget from ceph_iscsi_config.group import Group from ceph_iscsi_config.lun import RBDDev, LUN from ceph_iscsi_config.client import GWClient, CHAP from ceph_iscsi_config.common import Config from ceph_iscsi_config.utils import (normalize_ip_literal, resolve_ip_addresses, ip_addresses, read_os_release, encryption_available, CephiSCSIError, this_host) from ceph_iscsi_config.device_status import DeviceStatusWatcher from gwcli.utils import (APIRequest, valid_gateway, valid_client, valid_credentials, get_remote_gateways, valid_snapshot_name, GatewayAPIError) app = Flask(__name__) # workaround for https://github.com/pallets/flask/issues/2549 app.config['JSONIFY_PRETTYPRINT_REGULAR'] = False def requires_basic_auth(f): """ wrapper function to check authentication credentials are valid """ @wraps(f) def decorated(*args, **kwargs): # check credentials supplied in the http request
are valid auth = request.authorization if not auth: return jsonify(message="Missing credentials"), 401 if auth.username != settings.config.api_user or \ auth.password != settings.config.api_password: return jsonify(message="username/password mismatch with the " "configuration file"), 401 return f(*args, **kwargs) return decorated def requires_restricted_auth(f): """ Wrapper function which checks both auth credentials and source IP address to validate the request """ @wraps(f) def decorated(*args, **kwargs): # First check that the source of the request is actually valid gw_names = [gw for gw in config.config['gateways'] if isinstance(config.config['gateways'][gw], dict)] gw_names.append('localhost') for _, target in config.config['targets'].items(): gw_names += target.get('ip_list', []) gw_ips = reduce(operator.concat, [resolve_ip_addresses(gw_name) for gw_name in gw_names]) + \ settings.config.trusted_ip_list # remove interface scope suffix and IPv4-over-IPv6 prefix remote_addr = request.remote_addr.rsplit('%', 1)[0] remote_addr = remote_addr.split('::ffff:', 1)[-1] if remote_addr not in gw_ips: return jsonify(message="API access not available to " "{}".format(remote_addr)), 403 # check credentials supplied in the http request are valid auth = request.authorization if not auth: return jsonify(message="Missing credentials"), 401 if auth.username != settings.config.api_user or \ auth.password != settings.config.api_password: return jsonify(message="username/password mismatch with the " "configuration file"), 401 return f(*args, **kwargs) return decorated @app.errorhandler(Exception) def unhandled_exception(e): logger.exception("Unhandled Exception") return jsonify(message="Unhandled exception: {}".format(e)), 500 @app.route('/api', methods=['GET']) def get_api_info(): """ Display all the available API endpoints **UNRESTRICTED** Examples: curl --user admin:admin -X GET http://192.168.122.69:5000/api """ links = [] sorted_rules = sorted(app.url_map.iter_rules(), 
key=lambda x: x.rule, reverse=False) for rule in sorted_rules: url = rule.rule if rule.endpoint == 'static': continue else: func_doc = inspect.getdoc(globals()[rule.endpoint]) if func_doc: doc = func_doc.split('\n') if any(path_entry.startswith('_') for path_entry in url.split('/')): continue else: url_desc = "{} : {}".format(url, doc[0]) doc = doc[1:] else: url_desc = "{} : {}".format(url, "Missing description - FIXME!") doc = [] callable_methods = [method for method in rule.methods if method not in ['OPTIONS', 'HEAD']] api_methods = "Methods: {}".format(','.join(callable_methods)) links.append((url_desc, api_methods, doc)) return jsonify(api=links), 200 @app.route('/api/sysinfo/', methods=['GET']) @requires_basic_auth def get_sys_info(query_type=None): """ Provide system information based on the query_type Valid query types are: ip_addresses, checkconf and checkversions **RESTRICTED** Examples: curl --user admin:admin -X GET http://192.168.122.69:5000/api/sysinfo/ip_addresses """ if query_type == 'ip_addresses': return jsonify(data=ip_addresses()), 200 if query_type == 'hostname': return jsonify(data=this_host()), 200 elif query_type == 'checkconf': local_hash = settings.config.hash() return jsonify(data=local_hash), 200 elif query_type == 'checkversions': config_errors = pre_reqs_errors() if config_errors: return jsonify(data=config_errors), 500 else: return jsonify(data='checks passed'), 200 else: # Request Unknown return jsonify(message="Unknown /sysinfo query"), 404 def _parse_controls(controls_json, settings_list): return settings.Settings.normalize_controls(json.loads(controls_json), settings_list) def parse_target_controls(request): tpg_controls = {} client_controls = {} if 'controls' not in request.form: return tpg_controls, client_controls controls = _parse_controls(request.form['controls'], GWTarget.SETTINGS) for k, v in controls.items(): if GWClient.SETTINGS.get(k): client_controls[k] = v else: tpg_controls[k] = v logger.debug("controls tpg {} acl 
                 "{}".format(tpg_controls, client_controls))
    return tpg_controls, client_controls


@app.route('/api/targets', methods=['GET'])
@requires_restricted_auth
def get_targets():
    """
    List targets defined in the configuration.
    **RESTRICTED**
    Examples:
    curl --user admin:admin -X GET http://192.168.122.69:5000/api/targets
    """
    return jsonify({'targets': list(config.config['targets'].keys())}), 200


@app.route('/api/target/<target_iqn>', methods=['PUT', 'DELETE'])
@requires_restricted_auth
def target(target_iqn=None):
    """
    Handle the definition of the iscsi target name
    The target is added to the configuration object, seeding the
    configuration for ALL gateways
    :param target_iqn: IQN of the target each gateway will use
    :param mode: (str) 'reconfigure'
    :param controls: (JSON dict) valid control overrides
    **RESTRICTED**
    Examples:
    curl --user admin:admin -X PUT
        http://192.168.122.69:5000/api/target/iqn.2003-01.com.redhat.iscsi-gw0
    curl --user admin:admin -d mode=reconfigure -d controls='{cmdsn_depth=128}'
        -X PUT http://192.168.122.69:5000/api/target/iqn.2003-01.com.redhat.iscsi-gw0
    """
    try:
        target_iqn, iqn_type = normalize_wwn(['iqn'], target_iqn)
    except RTSLibError as err:
        err_str = "Invalid iqn {} - {}".format(target_iqn, err)
        return jsonify(message=err_str), 500

    if request.method == 'PUT':
        mode = request.form.get('mode', None)
        if mode not in [None, 'reconfigure']:
            logger.error("Unexpected mode provided")
            return jsonify(message="Unexpected mode provided for {} - "
                                   "{}".format(target_iqn, mode)), 500

        try:
            tpg_controls, client_controls = parse_target_controls(request)
        except ValueError as err:
            logger.error("Unexpected or invalid controls")
            return jsonify(message="Unexpected or invalid controls - "
                                   "{}".format(err)), 500

        if mode == 'reconfigure':
            target_config = config.config['targets'].get(target_iqn, None)
            if target_config is None:
                return jsonify(message="Target: {} is not defined."
                                       "".format(target_iqn)), 400

        gateway_ip_list = []
        target = GWTarget(logger, str(target_iqn), gateway_ip_list)
        if target.error:
            logger.error("Unable to create an instance of the GWTarget class")
            return jsonify(message="GWTarget problem - "
                                   "{}".format(target.error_msg)), 500

        orig_tpg_controls = {}
        orig_client_controls = {}
        for k, v in tpg_controls.items():
            orig_tpg_controls[k] = getattr(target, k)
            setattr(target, k, v)
        for k, v in client_controls.items():
            orig_client_controls[k] = getattr(target, k)
            setattr(target, k, v)

        target.manage('init')
        if target.error:
            logger.error("Failure during gateway 'init' processing")
            return jsonify(message="iscsi target 'init' process failed "
                                   "for {} - {}".format(target_iqn,
                                                        target.error_msg)), 500

        if mode is None:
            config.refresh()
            return jsonify(message="Target defined successfully"), 200

        if not tpg_controls and not client_controls:
            return jsonify(message="Target reconfigured."), 200

        # This is a reconfigure operation, so first confirm the gateways
        # are in place (we need defined gateways)
        target_config = config.config['targets'][target_iqn]
        try:
            gateways = get_remote_gateways(target_config['portals'], logger)
        except CephiSCSIError as err:
            logger.warning("target operation request failed: {}".format(err))
            return jsonify(message="{}".format(err)), 400

        # We perform the reconfigure locally here to make sure the values
        # are valid and simplify error cleanup
        resp_text = local_target_reconfigure(target_iqn, tpg_controls,
                                             client_controls)
        if "ok" != resp_text:
            reset_resp = local_target_reconfigure(target_iqn,
                                                  orig_tpg_controls,
                                                  orig_client_controls)
            if "ok" != reset_resp:
                logger.error("Failed to reset target controls - "
                             "{}".format(reset_resp))
            return jsonify(message="{}".format(resp_text)), 500

        resp_text, resp_code = call_api(gateways, '_target', target_iqn,
                                        http_method='put',
                                        api_vars=request.form)
        if resp_code != 200:
            return jsonify(message="{}".format(resp_text)), resp_code

        try:
            target.commit_controls()
        except CephiSCSIError as err:
            logger.error("Control commit failed during gateway 'reconfigure'")
            return jsonify(message="Could not commit controls - "
                                   "{}".format(err)), 500

        config.refresh()
        return jsonify(message="Target reconfigured."), 200

    else:
        # DELETE target request
        config.refresh()
        hostnames = None
        if target_iqn in config.config['targets']:
            target_config = config.config['targets'][target_iqn]
            hostnames = target_config['portals'].keys()
        if not hostnames:
            hostnames = [this_host()]

        resp_text, resp_code = call_api(hostnames, '_target',
                                        '{}'.format(target_iqn),
                                        http_method='delete')
        if resp_code != 200:
            return jsonify(message="{}".format(resp_text)), resp_code

        return jsonify(message="Target deleted."), 200


def local_target_reconfigure(target_iqn, tpg_controls, client_controls):
    config.refresh()

    target = GWTarget(logger, str(target_iqn), [])
    if target.error:
        logger.error("Unable to create an instance of the GWTarget class")
        return target.error_msg

    for k, v in tpg_controls.items():
        setattr(target, k, v)

    if target.exists():
        target.load_config()
        if target.error:
            logger.error("Unable to refresh tpg state")
            return target.error_msg

    try:
        target.update_tpg_controls()
    except RTSLibError as err:
        logger.error("Unable to update tpg control - {}".format(err))
        return "Unable to update tpg control - {}".format(err)

    # re-apply client control overrides
    error_msg = "ok"
    target_config = config.config['targets'][target_iqn]
    for client_iqn in target_config['clients']:
        client_metadata = target_config['clients'][client_iqn]
        image_list = list(client_metadata['luns'].keys())
        client_auth_config = client_metadata['auth']

        client_chap = CHAP(client_auth_config['username'],
                           client_auth_config['password'],
                           client_auth_config['password_encryption_enabled'])
        if client_chap.error:
            logger.debug("Password decode issue : "
                         "{}".format(client_chap.error_msg))
            halt("Unable to decode password for {}".format(client_iqn))

        client_chap_mutual = CHAP(client_auth_config['mutual_username'],
                                  client_auth_config['mutual_password'],
                                  client_auth_config['mutual_password_encryption_enabled'])
        if client_chap_mutual.error:
            logger.debug("Password decode issue : "
                         "{}".format(client_chap_mutual.error_msg))
            halt("Unable to decode password for {}".format(client_iqn))

        client = GWClient(logger, client_iqn, image_list,
                          client_chap.user, client_chap.password,
                          client_chap_mutual.user, client_chap_mutual.password,
                          target_iqn)
        if client.error:
            logger.error("Could not create client. Control override failed "
                         "{} - {}".format(client_iqn, client.error_msg))
            error_msg = client.error_msg
            continue

        for k, v in client_controls.items():
            setattr(client, k, v)

        client.manage('reconfigure')
        if client.error:
            logger.error("Unable to update client control - "
                         "{} - {}".format(client_iqn, client.error_msg))
            error_msg = client.error_msg
            if "Invalid argument" in client.error_msg:
                # Kernel/rtslib reported EINVAL so immediately fail
                return client.error_msg

    if error_msg != "ok":
        return "Unable to update client control - {}".format(error_msg)

    return "ok"


def delete_gateway(gateway_name, target_iqn):
    ceph_gw = CephiSCSIGateway(logger, config)

    if gateway_name is None or gateway_name == ceph_gw.hostname:
        ceph_gw.delete_target(target_iqn)
        ceph_gw.remove_from_config(target_iqn)
    else:
        # To maintain the tpg ordering completely tear down the target
        # and rebuild it with the new ordering.
        ceph_gw.redefine_target(target_iqn)


@app.route('/api/_target/<target_iqn>', methods=['PUT', 'DELETE'])
@requires_restricted_auth
def _target(target_iqn=None):
    if request.method == 'PUT':
        mode = request.form.get('mode', None)
        if mode not in ['reconfigure']:
            logger.error("Unexpected mode provided")
            return jsonify(message="Unexpected mode provided for {} - "
                                   "{}".format(target_iqn, mode)), 500

        try:
            tpg_controls, client_controls = parse_target_controls(request)
        except ValueError as err:
            logger.error("Unexpected or invalid controls")
            return jsonify(message="Unexpected or invalid controls - "
                                   "{}".format(err)), 500

        resp_text = local_target_reconfigure(target_iqn, tpg_controls,
                                             client_controls)
        if "ok" != resp_text:
            return jsonify(message="{}".format(resp_text)), 500

        return jsonify(message="Target reconfigured successfully"), 200
    else:
        # DELETE target request
        target = GWTarget(logger, target_iqn, '')
        if target.error:
            return jsonify(message="Failed to access target"), 500

        target.manage('clearconfig')
        if target.error:
            logger.error("clearconfig failed: "
                         "{}".format(target.error_msg))
            return jsonify(message=target.error_msg), 400
        else:
            config.refresh()
            return jsonify(message="Gateway removed successfully"), 200


@app.route('/api/config', methods=['GET'])
@requires_restricted_auth
def get_config():
    """
    Return the complete config object to the caller (must be authenticated)
    WARNING: Contents will include any defined CHAP credentials
    :param decrypt_passwords: (bool) if true, passwords will be decrypted
    **RESTRICTED**
    Examples:
    curl --user admin:admin -X GET http://192.168.122.69:5000/api/config
    """
    if request.method == 'GET':
        config.refresh()

        decrypt_passwords = request.args.get('decrypt_passwords', 'false')
        result_config = copy.deepcopy(config.config)
        if decrypt_passwords.lower() == 'true':
            discovery_auth_config = result_config['discovery_auth']
            chap = CHAP(discovery_auth_config['username'],
                        discovery_auth_config['password'],
                        discovery_auth_config['password_encryption_enabled'])
            discovery_auth_config['password'] = chap.password
            chap = CHAP(discovery_auth_config['mutual_username'],
                        discovery_auth_config['mutual_password'],
                        discovery_auth_config['mutual_password_encryption_enabled'])
            discovery_auth_config['mutual_password'] = chap.password

            for _, target in result_config['targets'].items():
                target_auth_config = target['auth']
                chap = CHAP(target_auth_config['username'],
                            target_auth_config['password'],
                            target_auth_config['password_encryption_enabled'])
                target_auth_config['password'] = chap.password
                chap = CHAP(target_auth_config['mutual_username'],
                            target_auth_config['mutual_password'],
                            target_auth_config['mutual_password_encryption_enabled'])
                target_auth_config['mutual_password'] = chap.password

                for _, client in target['clients'].items():
                    auth_config = client['auth']
                    chap = CHAP(auth_config['username'],
                                auth_config['password'],
                                auth_config['password_encryption_enabled'])
                    auth_config['password'] = chap.password
                    chap = CHAP(auth_config['mutual_username'],
                                auth_config['mutual_password'],
                                auth_config['mutual_password_encryption_enabled'])
                    auth_config['mutual_password'] = chap.password

        return jsonify(result_config), 200


@app.route('/api/gateways/<target_iqn>', methods=['GET'])
@requires_restricted_auth
def gateways(target_iqn=None):
    """
    Return the gateway subsection of the config object to the caller
    **RESTRICTED**
    Examples:
    curl --user admin:admin -X GET
        http://192.168.122.69:5000/api/gateways/iqn.2003-01.com.redhat.iscsi-gw
    """
    try:
        target_iqn, iqn_type = normalize_wwn(['iqn'], target_iqn)
    except RTSLibError as err:
        err_str = "Invalid iqn {} - {}".format(target_iqn, err)
        return jsonify(message=err_str), 500

    target_config = config.config['targets'][target_iqn]
    if request.method == 'GET':
        return jsonify(target_config['portals']), 200


@app.route('/api/gateway/<target_iqn>/<gateway_name>',
           methods=['PUT', 'DELETE'])
@requires_restricted_auth
def gateway(target_iqn=None, gateway_name=None):
    """
    Define (PUT) or delete (DELETE) iscsi gateway(s) across node(s), adding
    TPGs, disks and clients. gateway_name and target_iqn are required by all
    calls. The rest are required for PUT only.
    :param target_iqn: (str) target iqn
    :param gateway_name: (str) gateway name
    :param ip_address: (str) IPv4/IPv6 addresses iSCSI should use
    :param nosync: (bool) whether to sync the LIO objects to the new gateway
           default: FALSE
    :param skipchecks: (bool) whether to skip OS/software versions checks
           default: FALSE
    :param force: (bool) if True will force removal of gateway.
    **RESTRICTED**
    Examples:
    curl --user admin:admin -d ip_address=192.168.122.69 -X PUT
        http://192.168.122.69:5000/api/gateway/iqn.2003-01.com.redhat.iscsi-gw/gateway1
    curl --user admin:admin -X DELETE
        http://192.168.122.69:5000/api/gateway/iqn.2003-01.com.redhat.iscsi-gw/gateway1
    """
    # the definition of a gateway into an existing configuration can apply the
    # running config to the new host. The downside is that this sync task
    # could take a while if there are 100's of disks/clients. Future work
    # should aim to make this synchronisation of the new gateway an async task

    try:
        target_iqn, iqn_type = normalize_wwn(['iqn'], target_iqn)
    except RTSLibError as err:
        err_str = "Invalid iqn {} - {}".format(target_iqn, err)
        return jsonify(message=err_str), 500

    # first confirm that the request is actually valid, if not return a 400
    # error with the error description
    config.refresh()
    current_config = config.config
    target_config = config.config['targets'][target_iqn]

    if request.method == 'PUT':
        if gateway_name in target_config['portals']:
            err_str = "Gateway already exists in configuration"
            logger.error(err_str)
            return jsonify(message=err_str), 400

        ip_address = request.form.get('ip_address').split(',')
        nosync = request.form.get('nosync', 'false')
        skipchecks = request.form.get('skipchecks', 'false')

        if skipchecks.lower() == 'true':
            logger.warning("Gateway request received, with validity checks "
                           "disabled")
            gateway_usable = 'ok'
        else:
            logger.info("gateway validation needed for {}".format(gateway_name))
            gateway_usable = \
                valid_gateway(target_iqn, gateway_name, ip_address,
                              current_config)
        if gateway_usable != 'ok':
            return jsonify(message=gateway_usable), 400

        current_disks = target_config['disks']
        current_clients = target_config['clients']
        total_objects = len(current_disks) + len(current_clients.keys())

        # if the config is empty, it doesn't matter what nosync is set to
        if total_objects == 0:
            nosync = 'true'

        gateway_ip_list = target_config.get('ip_list', [])
        gateway_ip_list += ip_address

        op = 'creation'
        api_vars = {"gateway_ip_list": ",".join(gateway_ip_list),
                    "nosync": nosync}

    elif request.method == 'DELETE':
        if gateway_name not in current_config['gateways']:
            err_str = "Gateway does not exist in configuration"
            logger.error(err_str)
            return jsonify(message=err_str), 404

        op = 'deletion'
        api_vars = None
    else:
        return jsonify(message="Unsupported request type."), 400

    gateways = list(target_config['portals'].keys())
    first_gateway = (len(gateways) == 0)
    if first_gateway:
        gateways = ['localhost']
    elif request.method == 'DELETE':
        gateways.remove(gateway_name)

        if gateway_name != this_host() and \
                request.form.get('force', 'false').lower() == 'true':
            # The gw we want to delete is down and the user has decided to
            # force the deletion, so we do the config modification locally
            # then only tell the other gws to update their state.
            try:
                ceph_gw = CephiSCSIGateway(logger, config, gateway_name)
                ceph_gw.remove_from_config(target_iqn)
            except CephiSCSIError as err:
                return jsonify(message="Could not update config: "
                                       "{}.".format(err)), 400
        else:
            # Update the deleted gw first, so the other gws see the updated
            # portal list
            gateways.insert(0, gateway_name)
    else:
        # Update the new gw first, so other gws see the updated gateways list.
        gateways.insert(0, gateway_name)

    resp_text, resp_code = call_api(gateways, '_gateway',
                                    '{}/{}'.format(target_iqn, gateway_name),
                                    http_method=request.method.lower(),
                                    api_vars=api_vars)

    config.refresh()

    return jsonify(message="Gateway {} {}".format(op, resp_text)), resp_code


@app.route('/api/_gateway/<target_iqn>/<gateway_name>',
           methods=['GET', 'PUT', 'DELETE'])
@requires_restricted_auth
def _gateway(target_iqn=None, gateway_name=None):
    """
    Manage the local iSCSI gateway definition
    Internal Use ONLY
    Gateways may be added (PUT), queried (GET) or deleted (DELETE) from
    the configuration
    :param target_iqn: (str) target iqn
    :param gateway_name: (str) gateway name, normally the DNS name
    **RESTRICTED**
    """
    config.refresh()
    target_config = config.config['targets'][target_iqn]

    if request.method == 'GET':
        if gateway_name in target_config['portals']:
            return jsonify(target_config['portals'][gateway_name]), 200
        else:
            return jsonify(message="Gateway doesn't exist in the "
                                   "configuration"), 404

    elif request.method == 'DELETE':
        try:
            delete_gateway(gateway_name, target_iqn)
        except CephiSCSIError as err:
            return jsonify(message="Gateway deletion failed: "
                                   "{}.".format(err)), 400

        return jsonify(message="Gateway deleted."), 200

    elif request.method == 'PUT':
        # the parameters need to be cast to str for compatibility
        # with the comparison logic in common.config.add_item
        logger.debug("Attempting create of gateway {}".format(gateway_name))

        gateway_ips = str(request.form['gateway_ip_list'])
        nosync = str(request.form.get('nosync', 'false'))

        gateway_ip_list = gateway_ips.split(',')

        target_only = False
        if nosync.lower() == 'true':
            target_only = True

        try:
            ceph_gw = CephiSCSIGateway(logger, config)
            ceph_gw.define_target(target_iqn, gateway_ip_list, target_only)
        except CephiSCSIError as err:
            err_msg = "Could not create target on gateway: {}".format(err)
            logger.error(err_msg)
            return jsonify(message=err_msg), 500

        return jsonify(message="Gateway defined/mapped"), 200


@app.route('/api/targetlun/<target_iqn>', methods=['PUT', 'DELETE'])
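# NOTE (illustrative summary, inferred from the handler below, not upstream
# documentation): LUN mapping requests fan out to every portal of the target
# via call_api(). On PUT the local gateway is updated first
# (gateways.insert(0, local_gw)) so peers replay an already-validated change;
# on DELETE it is updated last (gateways.append(local_gw)). For example, with
# portals [gw1, gw2] and the request landing on gw1:
#   PUT    -> _targetlun runs on gw1, then gw2
#   DELETE -> _targetlun runs on gw2, then gw1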
@requires_restricted_auth
def target_disk(target_iqn=None):
    """
    Coordinate the addition (PUT) and removal (DELETE) of a disk for a target
    :param target_iqn: (str) IQN of the target
    :param disk: (str) rbd image name in the format pool/image
    **RESTRICTED**
    Examples:
    curl --user admin:admin -d disk=rbd/new2_1 -X PUT
        http://192.168.122.69:5000/api/targetlun/iqn.2003-01.com.redhat.iscsi-gw
    curl --user admin:admin -d disk=rbd/new2_1 -X DELETE
        http://192.168.122.69:5000/api/targetlun/iqn.2003-01.com.redhat.iscsi-gw
    """
    try:
        target_iqn, iqn_type = normalize_wwn(['iqn'], target_iqn)
    except RTSLibError as err:
        err_str = "Invalid iqn {} - {}".format(target_iqn, err)
        return jsonify(message=err_str), 500

    target_config = config.config['targets'][target_iqn]
    portals = [key for key in target_config['portals']]

    # Any disk operation needs at least 2 gateways to be present
    if len(portals) < settings.config.minimum_gateways:
        msg = "at least {} gateways must exist before disk mapping " \
              "operations are permitted".format(settings.config.minimum_gateways)
        logger.warning("disk add request failed: {}".format(msg))
        return jsonify(message=msg), 400

    try:
        gateways = get_remote_gateways(target_config['portals'], logger)
    except CephiSCSIError as err:
        return jsonify(message="{}".format(err)), 400

    local_gw = this_host()
    disk = request.form.get('disk')

    if request.method == 'PUT':
        if disk not in config.config['disks']:
            return jsonify(message="Disk {} is not defined in the "
                                   "configuration".format(disk)), 400
        for iqn, target in config.config['targets'].items():
            if disk in target['disks']:
                return jsonify(message="Disk {} cannot be used because it is "
                                       "already mapped on target "
                                       "{}".format(disk, iqn)), 400

        pool, image_name = disk.split('/')
        try:
            backstore = config.config['disks'][disk]
            rbd_image = RBDDev(image_name, 0, backstore, pool)
            size = rbd_image.current_size
            logger.debug("{} size is {}".format(disk, size))
        except rbd.ImageNotFound:
            return jsonify(message="Image {} not found".format(disk)), 400

        owner = LUN.get_owner(config.config['gateways'],
                              target_config['portals'])
        logger.debug("{} owner will be {}".format(disk, owner))

        lun_id = request.form.get('lun_id')
        if lun_id is not None:
            try:
                lun_id_int = int(lun_id)
            except ValueError:
                return jsonify(message="Lun id must be a number"), 400

            for target_disk in target_config['disks'].values():
                if lun_id_int == target_disk['lun_id']:
                    return jsonify(message="Lun id {} already in "
                                           "use".format(lun_id)), 400

        api_vars = {
            'disk': disk,
            'lun_id': lun_id,
            'owner': owner,
            'allocating_host': local_gw
        }

        # process local gateway first
        gateways.insert(0, local_gw)

        resp_text, resp_code = call_api(gateways, '_targetlun',
                                        '{}'.format(target_iqn),
                                        http_method='put',
                                        api_vars=api_vars)
        if resp_code != 200:
            return jsonify(message="Add target LUN mapping failed - "
                                   "{}".format(resp_text)), resp_code
    else:
        # this is a DELETE request
        if disk not in config.config['disks']:
            return jsonify(message="Disk {} is not defined in the "
                                   "configuration".format(disk)), 400
        if disk not in target_config['disks']:
            return jsonify(message="Disk {} is not defined in target "
                                   "{}".format(disk, target_iqn)), 400
        for group_name, group in target_config['groups'].items():
            if disk in group['disks']:
                return jsonify(message="Disk {} belongs to group "
                                       "{}".format(disk, group_name)), 400

        api_vars = {
            'disk': disk,
            'purge_host': local_gw
        }

        # process other gateways first
        gateways.append(local_gw)

        resp_text, resp_code = call_api(gateways, '_targetlun',
                                        '{}'.format(target_iqn),
                                        http_method='delete',
                                        api_vars=api_vars)
        if resp_code != 200:
            return jsonify(message="Delete target LUN mapping failed - "
                                   "{}".format(resp_text)), resp_code

    return jsonify(message="Target LUN mapping updated successfully"), 200


@app.route('/api/_targetlun/<target_iqn>', methods=['PUT', 'DELETE'])
@requires_restricted_auth
def _target_disk(target_iqn=None):
    """
    Manage the addition/removal of disks from a target on the local gateway
    Internal Use ONLY
    **RESTRICTED**
    """
    config.refresh()

    disk = \
        request.form.get('disk')
    pool, image = disk.split('/', 1)
    disk_config = config.config['disks'][disk]
    backstore = disk_config['backstore']
    backstore_object_name = disk_config['backstore_object_name']

    if request.method == 'PUT':
        target_config = config.config['targets'][target_iqn]
        ip_list = target_config.get('ip_list', [])
        gateway = GWTarget(logger, target_iqn, ip_list)
        if gateway.error:
            logger.error("LUN mapping failed : "
                         "{}".format(gateway.error_msg))
            return jsonify(message="LUN map failed"), 500

        owner = request.form.get('owner')
        allocating_host = request.form.get('allocating_host')

        rbd_image = RBDDev(image, 0, backstore, pool)
        size = rbd_image.current_size

        lun = LUN(logger, pool, image, size, allocating_host,
                  backstore, backstore_object_name)
        if lun.error:
            logger.error("Error initializing the LUN : "
                         "{}".format(lun.error_msg))
            return jsonify(message="Error establishing LUN instance"), 500

        lun_id = request.form.get('lun_id')
        if lun_id is not None:
            lun_id = int(lun_id)

        try:
            lun.map_lun(gateway, owner, disk, lun_id)
        except CephiSCSIError as err:
            status_code = 400 if str(err) else 500
            logger.error("LUN add failed : {}".format(err))
            return jsonify(message="Failed to add the LUN - "
                                   "{}".format(err)), status_code
    else:
        # DELETE gateway request
        purge_host = request.form['purge_host']
        logger.debug("delete request for disk image '{}'".format(disk))

        lun = LUN(logger, pool, image, 0, purge_host, backstore,
                  backstore_object_name)
        if lun.error:
            logger.error("Error initializing the LUN : "
                         "{}".format(lun.error_msg))
            return jsonify(message="Error establishing LUN instance"), 500

        lun.unmap_lun(target_iqn)
        if lun.error:
            status_code = 400 if lun.error_msg else 500
            logger.error("LUN remove failed : {}".format(lun.error_msg))
            return jsonify(message="Failed to remove the LUN - "
                                   "{}".format(lun.error_msg)), status_code

    config.refresh()

    return jsonify(message="LUN mapped"), 200


@app.route('/api/disks')
@requires_restricted_auth
def get_disks():
    """
    Show the rbd disks defined to the gateways
    :param config: (str) 'yes' to list the config info of all disks,
           default is 'no'
    **RESTRICTED**
    Examples:
    curl --user admin:admin -d config=yes -X GET
        http://192.168.122.69:5000/api/disks
    """
    conf = request.form.get('config', 'no')
    if conf.lower() == "yes":
        disk_names = config.config['disks']
        response = {"disks": disk_names}
    else:
        disk_names = list(config.config['disks'].keys())
        response = {"disks": disk_names}

    return jsonify(response), 200


@app.route('/api/disk/<pool>/<image>', methods=['GET', 'PUT', 'DELETE'])
@requires_restricted_auth
def disk(pool, image):
    """
    Coordinate the create/delete of rbd images across the gateway nodes
    This method calls the corresponding disk api entrypoints across each
    gateway. Processing is done serially: creation is done locally first,
    then other gateways - whereas, rbd deletion is performed first against
    remote gateways and then the local machine is used to perform the
    actual rbd delete.
    :param pool: (str) pool name
    :param image: (str) rbd image name
    :param mode: (str) 'create' or 'resize' the rbd image
    :param size: (str) the size of the rbd image
    :param pool: (str) the pool name the rbd image will be in
    :param count: (str) the number of images to be created
    :param owner: (str) the owner of the rbd image
    :param controls: (JSON dict) valid control overrides
    :param preserve_image: (bool, 'true/false') do NOT delete RBD image
    :param create_image: (bool, 'true/false') create RBD image if not exists,
           true as default
    :param backstore: (str) lio backstore
    :param wwn: (str) unit serial number
    **RESTRICTED**
    Examples:
    curl --user admin:admin -d mode=create -d size=1g -d pool=rbd -d count=5
        -X PUT http://192.168.122.69:5000/api/disk/rbd/new0_
    curl --user admin:admin -d mode=create -d size=10g -d pool=rbd
        -d create_image=false -X PUT http://192.168.122.69:5000/api/disk/rbd/new1
    curl --user admin:admin -X GET http://192.168.122.69:5000/api/disk/rbd/new2
    curl --user admin:admin -X DELETE http://192.168.122.69:5000/api/disk/rbd/new3
    """
    local_gw = \
        this_host()
    logger.debug("this host is {}".format(local_gw))
    image_id = '{}/{}'.format(pool, image)

    config.refresh()

    if request.method == 'GET':
        if image_id in config.config['disks']:
            disk_dict = config.config["disks"][image_id]
            global dev_status_watcher
            disk_status = dev_status_watcher.get_dev_status(image_id)
            if disk_status:
                disk_dict['status'] = disk_status.get_status_dict()
            else:
                disk_dict['status'] = {'state': 'Unknown'}
            return jsonify(disk_dict), 200
        else:
            return jsonify(message="rbd image {} not "
                                   "found".format(image_id)), 404

    # Initial disk creation is done only on local host and this host
    mode = request.form.get('mode')
    if mode == 'create':
        backstore = request.form.get('backstore', LUN.DEFAULT_BACKSTORE)
    else:
        backstore = config.config['disks'][image_id]['backstore']

    gateways = []
    if mode != 'create':
        try:
            gateways = get_remote_gateways(config.config['gateways'],
                                           logger, False)
        except CephiSCSIError as err:
            logger.warning("disk operation request failed: {}".format(err))
            return jsonify(message="{}".format(err)), 400

    if request.method == 'PUT':
        # at this point we have a disk request, and the gateways are
        # available for the LUN masking operations

        # pool = request.form.get('pool')
        size = request.form.get('size')
        count = request.form.get('count', '1')

        controls = {}
        if 'controls' in request.form:
            try:
                controls = _parse_controls(request.form['controls'],
                                           LUN.SETTINGS[backstore])
            except ValueError as err:
                logger.error("Unexpected or invalid {} "
                             "controls".format(mode))
                return jsonify(message="Unexpected or invalid controls - "
                                       "{}".format(err)), 500
            logger.debug("{} controls {}".format(mode, controls))

        wwn = request.form.get('wwn')
        disk_usable = LUN.valid_disk(config, logger, pool=pool,
                                     image=image, size=size, mode=mode,
                                     count=count, controls=controls,
                                     backstore=backstore, wwn=wwn)
        if disk_usable != 'ok':
            return jsonify(message=disk_usable), 400

        create_image = request.form.get('create_image', 'true')
        if create_image not in ['true', 'false']:
            logger.error("Invalid 'create_image' "
                         "value {}".format(create_image))
            return jsonify(message="Invalid 'create_image' "
                                   "value {}".format(create_image)), 400

        if mode == 'create' and (create_image == 'false' or not size):
            try:
                # no size implies no intention to create an image, try to
                # check whether it exists
                rbd_image = RBDDev(image, 0, backstore, pool)
                size = rbd_image.current_size
            except rbd.ImageNotFound:
                # the create_image=true will be implied if size is specified
                # by default
                if create_image == 'true':
                    # the size must be specified when creating an image
                    return jsonify(message="Size parameter is required when "
                                           "creating a new image"), 400
                elif create_image == 'false':
                    return jsonify(message="Image {} does not "
                                           "exist".format(image_id)), 400

        if mode == 'reconfigure':
            resp_text, resp_code = lun_reconfigure(image_id, controls,
                                                   backstore)
            if resp_code == 200:
                return jsonify(message="lun reconfigured: "
                                       "{}".format(resp_text)), resp_code
            else:
                return jsonify(message=resp_text), resp_code

        suffixes = [n for n in range(1, int(count) + 1)]
        # make call to local api server first!
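        # Illustrative example (taken from the curl samples in this handler's
        # docstring): a create request with count=5 against
        # /api/disk/rbd/new0_ expands via the suffix list into images
        # rbd/new0_1 .. rbd/new0_5, and each one is defined on every gateway
        # in turn, localhost first:
        #   curl --user admin:admin -d mode=create -d size=1g -d pool=rbd \
        #        -d count=5 -X PUT http://192.168.122.69:5000/api/disk/rbd/new0_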
        gateways.insert(0, 'localhost')

        for sfx in suffixes:
            image_name = image if count == '1' else "{}{}".format(image, sfx)

            api_vars = {'pool': pool, 'image': image, 'size': size,
                        'owner': local_gw, 'mode': mode,
                        'backstore': backstore, 'wwn': wwn}
            if 'controls' in request.form:
                api_vars['controls'] = request.form['controls']

            resp_text, resp_code = call_api(gateways, '_disk',
                                            '{}/{}'.format(pool, image_name),
                                            http_method='put',
                                            api_vars=api_vars)
            if resp_code != 200:
                return jsonify(message="disk create/update "
                                       "{}".format(resp_text)), resp_code

        return jsonify(message="disk create/update {}".format(resp_text)), \
            resp_code

    else:
        # this is a DELETE request
        disk_usable = LUN.valid_disk(config, logger, mode='delete',
                                     pool=pool, image=image,
                                     backstore=backstore)

        if disk_usable != 'ok':
            return jsonify(message=disk_usable), 400

        api_vars = {
            'purge_host': local_gw,
            'preserve_image': request.form.get('preserve_image'),
            'backstore': backstore
        }

        # process other gateways first
        gateways.append(local_gw)

        resp_text, resp_code = call_api(gateways, '_disk',
                                        '{}/{}'.format(pool, image),
                                        http_method='delete',
                                        api_vars=api_vars)

        return jsonify(message="disk map deletion {}".format(resp_text)), \
            resp_code


@app.route('/api/_disk/<pool>/<image>', methods=['GET', 'PUT', 'DELETE'])
@requires_restricted_auth
def _disk(pool, image):
    """
    Manage a disk definition on the local gateway
    Internal Use ONLY
    Disks can be created and added to each gateway, or deleted through
    this call
    :param pool: (str) pool name
    :param image: (str) image name
    **RESTRICTED**
    """
    image_id = '{}/{}'.format(pool, image)

    config.refresh()

    if request.method == 'GET':
        if image_id in config.config['disks']:
            return jsonify(config.config["disks"][image_id]), 200
        else:
            return jsonify(message="rbd image {} not "
                                   "found".format(image_id)), 404

    elif request.method == 'PUT':
        # A put is for either a create or a resize
        # put('http://localhost:5000/api/disk/rbd.ansible3',
        #     data={'pool': 'rbd', 'size': '3G', 'owner': 'ceph-1'})

        mode = request.form['mode']
        if mode == 'create':
            backstore = request.form['backstore']
            backstore_object_name = LUN.get_backstore_object_name(
                str(request.form['pool']), image, config.config['disks'])
        else:
            disk_config = config.config['disks'][image_id]
            backstore = disk_config['backstore']
            backstore_object_name = disk_config['backstore_object_name']

        controls = {}
        if 'controls' in request.form:
            try:
                controls = _parse_controls(request.form['controls'],
                                           LUN.SETTINGS[backstore])
            except ValueError as err:
                logger.error("Unexpected or invalid {} "
                             "controls".format(mode))
                return jsonify(message="Unexpected or invalid controls - "
                                       "{}".format(err)), 500
            logger.debug("{} controls {}".format(mode, controls))

        if mode in ['create', 'resize']:
            rqst_fields = set(request.form.keys())
            if not rqst_fields.issuperset(("pool", "size", "owner", "mode")):
                # this is an invalid request
                return jsonify(message="Invalid Request - need to provide "
                                       "pool, size and owner"), 400

            lun = LUN(logger, str(request.form['pool']), image,
                      str(request.form['size']), str(request.form['owner']),
                      backstore, backstore_object_name)
            if lun.error:
                logger.error("Unable to create a LUN instance"
                             " : {}".format(lun.error_msg))
                return jsonify(message="Unable to establish LUN "
                                       "instance"), 500

            lun.allocate(False, request.form.get('wwn'))
            if lun.error:
                logger.error("LUN alloc problem - {}".format(lun.error_msg))
                return jsonify(message="LUN allocation failure"), 500

            if mode == 'create':
                # new disk is allocated, so refresh the local config object
                config.refresh()
                return jsonify(message="LUN created"), 200
            elif mode == 'resize':
                return jsonify(message="LUN resized"), 200

        elif mode in ['activate', 'deactivate']:
            disk = config.config['disks'].get(image_id, None)
            if not disk:
                return jsonify(message="rbd image {} not "
                                       "found".format(image_id)), 404
            backstore = disk['backstore']
            backstore_object_name = disk['backstore_object_name']

            # calculate required values for LUN object
            rbd_image = RBDDev(image, 0, backstore, pool)
            size = \
                rbd_image.current_size
            if not size:
                logger.error("LUN size unknown - {}".format(image_id))
                return jsonify(message="LUN {} failure".format(mode)), 500

            if 'owner' not in disk:
                msg = "Disk {}/{} must be assigned to a " \
                      "target".format(disk['pool'], disk['image'])
                logger.error("LUN owner not defined - {}".format(msg))
                return jsonify(message="LUN {} failure - "
                                       "{}".format(mode, msg)), 400

            lun = LUN(logger, pool, image, size, disk['owner'],
                      backstore, backstore_object_name)

            if mode == 'deactivate':
                try:
                    lun.deactivate()
                except CephiSCSIError as err:
                    return jsonify(message="deactivate failed - "
                                           "{}".format(err)), 500

                return jsonify(message="LUN deactivated"), 200
            elif mode == 'activate':
                for k, v in controls.items():
                    setattr(lun, k, v)

                try:
                    lun.activate()
                except CephiSCSIError as err:
                    return jsonify(message="activate failed - "
                                           "{}".format(err)), 500

                return jsonify(message="LUN activated"), 200

    else:
        # DELETE request
        # let's assume that the request has been validated by the caller
        # if valid_request(request.remote_addr):
        purge_host = request.form['purge_host']
        preserve_image = request.form.get('preserve_image') == 'true'
        logger.debug("delete request for disk image '{}'".format(image_id))

        pool, image = image_id.split('/', 1)
        disk_config = config.config['disks'][image_id]
        backstore = disk_config['backstore']
        backstore_object_name = disk_config['backstore_object_name']

        lun = LUN(logger, pool, image, 0, purge_host, backstore,
                  backstore_object_name)
        if lun.error:
            # problem defining the LUN instance
            logger.error("Error initializing the LUN : "
                         "{}".format(lun.error_msg))
            return jsonify(message="Error establishing LUN instance"), 500

        lun.remove_lun(preserve_image)
        if lun.error:
            if 'allocated to' in lun.error_msg:
                # attempted to remove rbd that is still allocated to a client
                status_code = 400
            else:
                status_code = 500

            error_msg = "Failed to remove the LUN - {}".format(lun.error_msg)
            logger.error(error_msg)
            return jsonify(message=error_msg), status_code

        config.refresh()

    return jsonify(message="LUN removed"), 200


def lun_reconfigure(image_id, controls, backstore):
    logger.debug("lun reconfigure request")

    config.refresh()

    disk = config.config['disks'].get(image_id, None)
    if not disk:
        return "rbd image {} not found".format(image_id), 404

    try:
        gateways = get_remote_gateways(config.config['gateways'], logger)
    except CephiSCSIError as err:
        return "{}".format(err), 400

    gateways.insert(0, 'localhost')

    # deactivate disk
    api_vars = {'mode': 'deactivate'}
    logger.debug("deactivating disk")

    resp_text, resp_code = call_api(gateways, '_disk', image_id,
                                    http_method='put', api_vars=api_vars)
    if resp_code != 200:
        return "failed to deactivate disk: {}".format(resp_text), resp_code

    pool_name, image_name = image_id.split('/', 1)

    rbd_image = RBDDev(image_name, 0, backstore, pool_name)
    size = rbd_image.current_size

    lun = LUN(logger, pool_name, image_name, size, disk['owner'],
              disk['backstore'], disk['backstore_object_name'])

    for k, v in controls.items():
        setattr(lun, k, v)

    try:
        lun.activate()
    except CephiSCSIError as err:
        logger.error("local LUN activation failed - {}".format(err))
        resp_code = 500
        resp_text = "{}".format(err)
    else:
        # We already activated this local node, so skip it
        gateways.remove('localhost')

        api_vars['controls'] = json.dumps(controls)

        # activate disk
        api_vars['mode'] = 'activate'
        logger.debug("activating disk")

        activate_resp_text, activate_resp_code = call_api(
            gateways, '_disk', image_id, http_method='put',
            api_vars=api_vars)
        if resp_code == 200 and activate_resp_code != 200:
            resp_text = activate_resp_text
            resp_code = activate_resp_code

    if resp_code == 200:
        try:
            lun.commit_controls()
        except CephiSCSIError as err:
            resp_text = "Could not commit controls: {}".format(err)
            resp_code = 500
        else:
            config.refresh()

    return resp_text, resp_code


@app.route('/api/disksnap/<pool>/<image>/<name>',
           methods=['PUT', 'DELETE'])
@requires_restricted_auth
def disksnap(pool, image, name):
    """
    Coordinate the management of rbd image snapshots across the gateway
    nodes.
    This method calls the corresponding disk api entrypoints across each
    gateway. Processing is done serially: rollback is done locally first,
    then other gateways. Other actions are only performed locally.
    :param image_id: (str) rbd image name in the format pool/image
    :param name: (str) rbd snapshot name
    :param mode: (str) 'create' or 'rollback' the rbd snapshot
    **RESTRICTED**
    Examples:
    curl --user admin:admin -d mode=create -X PUT
        http://192.168.122.69:5000/api/disksnap/rbd.image/new1
    curl --user admin:admin -X DELETE
        http://192.168.122.69:5000/api/disksnap/rbd.image/new1
    """
    if not valid_snapshot_name(name):
        logger.debug("snapshot request rejected due to invalid snapshot name")
        return jsonify(message="snapshot name is invalid"), 400

    image_id = '{}/{}'.format(pool, image)
    if image_id not in config.config['disks']:
        return jsonify(message="rbd image {} not "
                               "found".format(image_id)), 404

    if request.method == 'PUT':
        mode = request.form.get('mode')
        if mode == 'create':
            resp_text, resp_code = _disksnap_create(pool, image, name)
        elif mode == 'rollback':
            resp_text, resp_code = _disksnap_rollback(image_id, pool, image,
                                                      name)
        else:
            logger.debug("snapshot request rejected due to invalid mode")
            resp_text = "mode is invalid"
            resp_code = 400
    else:
        resp_text, resp_code = _disksnap_delete(pool, image, name)

    if resp_code == 200:
        return jsonify(message="disk snapshot {}".format(resp_text)), \
            resp_code
    else:
        return jsonify(message=resp_text), resp_code


def _disksnap_create(pool_name, image_name, name):
    logger.debug("snapshot create request")
    try:
        with rados.Rados(conffile=settings.config.cephconf,
                         name=settings.config.cluster_client_name) as cluster, \
                cluster.open_ioctx(pool_name) as ioctx, \
                rbd.Image(ioctx, image_name) as image:
            image.create_snap(name)
            resp_text = "snapshot created"
            resp_code = 200

    except rbd.ImageExists:
        resp_text = "snapshot {} already exists".format(name)
        resp_code = 400
    except Exception as err:
        resp_text = "failed to create snapshot: {}".format(err)
        resp_code = 400

    return resp_text, resp_code


def _disksnap_delete(pool_name, image_name, name):
    logger.debug("snapshot delete request")
    try:
        with rados.Rados(conffile=settings.config.cephconf,
                         name=settings.config.cluster_client_name) as cluster, \
                cluster.open_ioctx(pool_name) as ioctx, \
                rbd.Image(ioctx, image_name) as image:
            try:
                image.remove_snap(name)
                resp_text = "snapshot deleted"
                resp_code = 200
            except rbd.ImageNotFound:
                resp_text = "snapshot {} does not exist".format(name)
                resp_code = 404

    except Exception as err:
        resp_text = "failed to delete snapshot: {}".format(err)
        resp_code = 400

    return resp_text, resp_code


def _disksnap_rollback(image_id, pool_name, image_name, name):
    logger.debug("snapshot rollback request")

    disk = config.config['disks'].get(image_id, None)
    if not disk:
        return "rbd image {} not found".format(image_id), 404

    try:
        gateways = get_remote_gateways(config.config['gateways'], logger)
    except CephiSCSIError as err:
        return "{}".format(err), 400

    gateways.append(this_host())

    api_vars = {'mode': 'deactivate'}
    logger.debug("deactivating disk")
    resp_text, resp_code = call_api(gateways, '_disk',
                                    image_id,
                                    http_method='put',
                                    api_vars=api_vars)
    if resp_code == 200:
        try:
            with rados.Rados(conffile=settings.config.cephconf,
                             name=settings.config.cluster_client_name) as cluster, \
                    cluster.open_ioctx(pool_name) as ioctx, \
                    rbd.Image(ioctx, image_name) as image:
                try:
                    logger.debug("rolling back to snapshot")
                    image.rollback_to_snap(name)
                    resp_text = "rolled back to snapshot"
                    resp_code = 200
                except rbd.ImageNotFound:
                    resp_text = "snapshot {} does not exist".format(name)
                    resp_code = 404

        except Exception as err:
            resp_text = "failed to rollback snapshot: {}".format(err)
            resp_code = 400
    else:
        resp_text = "failed to deactivate disk: {}".format(resp_text)

    logger.debug("activating disk")
    api_vars['mode'] = 'activate'
    activate_resp_text, activate_resp_code = call_api(gateways, '_disk',
                                                      image_id,
                                                      http_method='put',
                                                      api_vars=api_vars)
    if resp_code == 200 and activate_resp_code != 200:
resp_text = activate_resp_text resp_code = activate_resp_code return resp_text, resp_code @app.route('/api/discoveryauth', methods=['PUT']) @requires_restricted_auth def discoveryauth(): """ Coordinate discovery authentication changes across each gateway node The following parameters are needed to manage discovery auth :param username: (str) username string is 8-64 chars long containing any alphanumeric in [0-9a-zA-Z] and '.' ':' '@' '_' '-' :param password: (str) password string is 12-16 chars long containing any alphanumeric in [0-9a-zA-Z] and '@' '-' '_' '/' :param mutual_username: (str) mutual_username string is 8-64 chars long containing any alphanumeric in [0-9a-zA-Z] and '.' ':' '@' '_' '-' :param mutual_password: (str) mutual_password string is 12-16 chars long containing any alphanumeric in [0-9a-zA-Z] and '@' '-' '_' '/' **RESTRICTED** Example: curl --user admin:admin -d username=myiscsiusername -d password=myiscsipassword -d mutual_username=myiscsiusername -d mutual_password=myiscsipassword -X PUT http://192.168.122.69:5000/api/discoveryauth """ username = request.form.get('username', '') password = request.form.get('password', '') mutual_username = request.form.get('mutual_username', '') mutual_password = request.form.get('mutual_password', '') # Validate request error_msg = valid_credentials(username, password, mutual_username, mutual_password) if error_msg: logger.error("BAD discovery auth request from {} - {}".format( request.remote_addr, error_msg)) return jsonify(message=error_msg), 400 # Apply to all gateways api_vars = {"username": username, "password": password, "mutual_username": mutual_username, "mutual_password": mutual_password} gateways = config.config['gateways'].keys() resp_text, resp_code = call_api(gateways, '_discoveryauth', '', http_method='put', api_vars=api_vars) # Update the configuration Discovery.set_discovery_auth_config(username, password, mutual_username, mutual_password, config) config.commit("retain") return 
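# The status handling in _disksnap_rollback above layers three results:
# a failed deactivate wins outright, a failed rollback is reported even
# if re-activation succeeds, and a failed re-activation only masks an
# otherwise successful rollback. A standalone sketch of that precedence
# rule (combine_rollback_status is an illustrative helper, not part of
# the API; each argument is a (text, code) pair like the returns above):

```python
def combine_rollback_status(deactivate, rollback, activate):
    d_text, d_code = deactivate
    r_text, r_code = rollback
    a_text, a_code = activate
    if d_code != 200:
        # deactivate failed: rollback never ran, surface that error
        return "failed to deactivate disk: {}".format(d_text), d_code
    if r_code != 200:
        # rollback failed: its error takes priority over activate
        return r_text, r_code
    if a_code != 200:
        # rollback worked but re-activation did not
        return a_text, a_code
    return r_text, 200
```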
@app.route('/api/_discoveryauth/', methods=['PUT'])
@requires_restricted_auth
def _discoveryauth():
    """
    Manage discovery authentication credentials on the local gateway
    Internal Use ONLY
    **RESTRICTED**
    """
    username = request.form.get('username', '')
    password = request.form.get('password', '')
    mutual_username = request.form.get('mutual_username', '')
    mutual_password = request.form.get('mutual_password', '')

    Discovery.set_discovery_auth_lio(username, password, False,
                                     mutual_username, mutual_password, False)

    return jsonify(message='OK'), 200


@app.route('/api/targetauth/<target_iqn>', methods=['PUT'])
@requires_restricted_auth
def targetauth(target_iqn=None):
    """
    Coordinate the gen-acls or CHAP/MUTUAL_CHAP across each gateway node
    :param target_iqn: (str) IQN of the target
    :param action: (str) action to be performed
    :param username: (str) username string is 8-64 chars long containing
           any alphanumeric in [0-9a-zA-Z] and '.' ':' '@' '_' '-'
    :param password: (str) password string is 12-16 chars long containing
           any alphanumeric in [0-9a-zA-Z] and '@' '-' '_' '/'
    :param mutual_username: (str) mutual_username string is 8-64 chars long
           containing any alphanumeric in [0-9a-zA-Z] and '.' ':' '@' '_' '-'
    :param mutual_password: (str) mutual_password string is 12-16 chars long
           containing any alphanumeric in [0-9a-zA-Z] and '@' '-' '_' '/'
    **RESTRICTED**
    Examples:
    curl --user admin:admin -d action='disable_acl'
         -X PUT http://192.168.122.69:5000/api/targetauth/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
    """
    action = request.form.get('action')
    if action and action not in ['disable_acl', 'enable_acl']:
        return jsonify(message='Invalid auth {}'.format(action)), 400

    target_config = config.config['targets'][target_iqn]
    if action == 'disable_acl' and target_config['clients'].keys():
        return jsonify(message='Cannot disable ACL authentication '
                               'because target has clients'), 400

    # Mixing TPG/target auth with ACL is not supported
    if action == 'enable_acl':
        target_username = target_config['auth']['username']
        target_password = target_config['auth']['password']
        target_auth_enabled = (target_username and target_password)
        if target_auth_enabled:
            return jsonify(message="Cannot enable ACL authentication "
                                   "because target CHAP authentication "
                                   "is enabled"), 400

    username = request.form.get('username', '')
    password = request.form.get('password', '')
    mutual_username = request.form.get('mutual_username', '')
    mutual_password = request.form.get('mutual_password', '')

    # Mixing TPG/target auth with ACL is not supported
    auth_enabled = (username and password)
    if auth_enabled and target_config['acl_enabled']:
        return jsonify(message="Cannot enable target CHAP authentication "
                               "because ACL authentication is enabled"), 400

    error_msg = valid_credentials(username, password, mutual_username,
                                  mutual_password)
    if error_msg:
        return jsonify(message=error_msg), 400

    try:
        gateways = get_remote_gateways(target_config['portals'], logger)
    except CephiSCSIError as err:
        return jsonify(message="{}".format(err)), 400

    local_gw = this_host()
    gateways.insert(0, local_gw)

    # Apply to all gateways
    api_vars = {
        "committing_host": local_gw,
        "action": action,
        "username": username,
        "password": password,
        "mutual_username": mutual_username,
        "mutual_password": mutual_password,
    }
    resp_text, resp_code = call_api(gateways, '_targetauth', target_iqn,
                                    http_method='put',
                                    api_vars=api_vars)

    return jsonify(message="target auth {} - {}".format(action, resp_text)), resp_code


@app.route('/api/_targetauth/<target_iqn>', methods=['PUT'])
@requires_restricted_auth
def _targetauth(target_iqn=None):
    """
    Apply gen-acls or CHAP/MUTUAL_CHAP on the local gateway
    Internal Use ONLY
    **RESTRICTED**
    """
    config.refresh()

    local_gw = this_host()
    committing_host = request.form['committing_host']
    action = request.form.get('action')
    username = request.form['username']
    password = request.form['password']
    mutual_username = request.form['mutual_username']
    mutual_password = request.form['mutual_password']

    target = GWTarget(logger, target_iqn, [])
    target_config = config.config['targets'][target_iqn]
    if target.exists():
        target.load_config()
        if action in ['disable_acl', 'enable_acl']:
            acl_enabled = (action == 'enable_acl')
            target.update_acl(acl_enabled)
        else:
            tpg = target.get_tpg_by_gateway_name(local_gw)
            target.update_auth(tpg, username, password, mutual_username,
                               mutual_password)

    if committing_host == local_gw:
        if action in ['disable_acl', 'enable_acl']:
            acl_enabled = (action == 'enable_acl')
            target_config['acl_enabled'] = acl_enabled
        else:
            encryption_enabled = encryption_available()
            auth_config = {
                'username': '',
                'password': '',
                'password_encryption_enabled': encryption_enabled,
                'mutual_username': '',
                'mutual_password': '',
                'mutual_password_encryption_enabled': encryption_enabled
            }
            if username != '':
                chap = CHAP(username, password, encryption_enabled)
                chap_mutual = CHAP(mutual_username, mutual_password,
                                   encryption_enabled)
                auth_config['username'] = chap.user
                auth_config['password'] = \
                    chap.encrypted_password(encryption_enabled)
                auth_config['mutual_username'] = chap_mutual.user
                auth_config['mutual_password'] = \
                    chap_mutual.encrypted_password(encryption_enabled)
            target_config['auth'] = auth_config

        config.update_item('targets', target_iqn, target_config)
        config.commit("retain")

    return jsonify(message='OK'), 200
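# The docstrings above spell out the CHAP credential rules: usernames
# are 8-64 chars drawn from [0-9a-zA-Z] plus '.' ':' '@' '_' '-', and
# passwords are 12-16 chars from [0-9a-zA-Z] plus '@' '-' '_' '/'.
# The real enforcement lives in valid_credentials(); the regex pair
# below is only a hedged sketch of those documented character classes
# and lengths, not the actual implementation.

```python
import re

# character classes and lengths quoted from the endpoint docstrings
USERNAME_RE = re.compile(r'^[0-9a-zA-Z.:@_-]{8,64}$')
PASSWORD_RE = re.compile(r'^[0-9a-zA-Z@\-_/]{12,16}$')


def credentials_look_valid(username, password):
    # True when both fields match the documented shape
    return bool(USERNAME_RE.match(username) and PASSWORD_RE.match(password))
```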
@app.route('/api/targetinfo/<target_iqn>', methods=['GET'])
@requires_restricted_auth
def targetinfo(target_iqn):
    """
    Returns the total number of active sessions for <target_iqn>
    **RESTRICTED**
    Examples:
    curl --user admin:admin
         -X GET http://192.168.122.69:5000/api/targetinfo/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
    """
    if target_iqn not in config.config['targets']:
        return jsonify(message="Target {} does not exist".format(target_iqn)), 404

    target_config = config.config['targets'][target_iqn]
    gateways = target_config['portals']
    num_sessions = 0

    for gateway in gateways.keys():
        target_state = target_ready([gateway])
        if target_state.get('status_api') == 'UP' and \
                target_state.get('status_iscsi') == 'DOWN':
            # If API is 'up' and iSCSI is 'down', there are no active
            # sessions to count
            continue

        resp_text, resp_code = call_api([gateway], '_targetinfo', target_iqn,
                                        http_method='get')
        if resp_code != 200:
            return jsonify(message="{}".format(resp_text)), resp_code

        gateway_response = json.loads(resp_text)
        num_sessions += gateway_response['num_sessions']

    return jsonify({
        "num_sessions": num_sessions
    }), 200


@app.route('/api/_targetinfo/<target_iqn>', methods=['GET'])
@requires_restricted_auth
def _targetinfo(target_iqn):
    """
    Returns the number of active sessions for <target_iqn> on local gateway
    **RESTRICTED**
    Examples:
    curl --user admin:admin
         -X GET http://192.168.122.69:5000/api/_targetinfo/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
    """
    if target_iqn not in config.config['targets']:
        return jsonify(message="Target {} does not exist".format(target_iqn)), 404

    num_sessions = GWTarget.get_num_sessions(target_iqn)

    return jsonify({
        "num_sessions": num_sessions
    }), 200


@app.route('/api/gatewayinfo', methods=['GET'])
@requires_restricted_auth
def gatewayinfo():
    """
    Returns the number of active sessions on local gateway
    **RESTRICTED**
    Examples:
    curl --user admin:admin
         -X GET http://192.168.122.69:5000/api/gatewayinfo
    """
    local_gw = this_host()
    if local_gw not in config.config['gateways']:
        return jsonify(message="Gateway {} does not exist in "
                               "configuration".format(local_gw)), 404

    num_sessions = 0
    for target_iqn, target in config.config['targets'].items():
        if local_gw in target['portals']:
            num_sessions += GWTarget.get_num_sessions(target_iqn)

    return jsonify({
        "num_sessions": num_sessions
    }), 200


@app.route('/api/clients/<target_iqn>', methods=['GET'])
@requires_restricted_auth
def get_clients(target_iqn=None):
    """
    List clients defined to the configuration.
    This information will include auth information, hence the
    restricted_auth wrapper
    **RESTRICTED**
    Examples:
    curl --user admin:admin
         -X GET http://192.168.122.69:5000/api/clients/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
    """
    try:
        target_iqn, iqn_type = normalize_wwn(['iqn'], target_iqn)
    except RTSLibError as err:
        err_str = "Invalid iqn {} - {}".format(target_iqn, err)
        return jsonify(message=err_str), 500

    target_config = config.config['targets'][target_iqn]
    client_list = list(target_config['clients'].keys())
    response = {"clients": client_list}

    return jsonify(response), 200


def _update_client(**kwargs):
    """
    Handler function to apply the changes to a specific client definition
    :param args:
    """
    # convert the comma separated image_list string into a list for GWClient
    if kwargs['images']:
        image_list = str(kwargs['images']).split(',')
    else:
        image_list = []

    client = GWClient(logger,
                      kwargs['client_iqn'],
                      image_list,
                      kwargs['username'],
                      kwargs['password'],
                      kwargs['mutual_username'],
                      kwargs['mutual_password'],
                      kwargs['target_iqn'])

    if client.error:
        logger.error("Invalid client request - {}".format(client.error_msg))
        return 400, "Invalid client request"

    client.manage('present', committer=kwargs['committing_host'])
    if client.error:
        logger.error("client update failed on {} : "
                     "{}".format(kwargs['client_iqn'],
                                 client.error_msg))
        return 500, "Client update failed - {}".format(client.error_msg)
    else:
        config.refresh()
        return 200, "Client configured successfully"
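# Why _update_client above guards with `if kwargs['images']` before
# splitting: in Python, ''.split(',') yields [''] rather than [], so an
# empty image string would otherwise produce a phantom one-element LUN
# list. A minimal sketch of the comma-string round-trip these endpoints
# rely on (helper names are illustrative only):

```python
def to_image_list(images):
    # empty/None input must map to an empty list, not ['']
    return str(images).split(',') if images else []


def to_image_string(lun_list):
    # inverse direction used when building api_vars for call_api()
    return ','.join(lun_list)
```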
@app.route('/api/clientauth/<target_iqn>/<client_iqn>', methods=['PUT'])
@requires_restricted_auth
def clientauth(target_iqn, client_iqn):
    """
    Coordinate client authentication changes across each gateway node
    The following parameters are needed to manage client auth
    :param target_iqn: (str) target IQN name
    :param client_iqn: (str) client IQN name
    :param username: (str) username string is 8-64 chars long containing
           any alphanumeric in [0-9a-zA-Z] and '.' ':' '@' '_' '-'
    :param password: (str) password string is 12-16 chars long containing
           any alphanumeric in [0-9a-zA-Z] and '@' '-' '_' '/'
    :param mutual_username: (str) mutual_username string is 8-64 chars long
           containing any alphanumeric in [0-9a-zA-Z] and '.' ':' '@' '_' '-'
    :param mutual_password: (str) mutual_password string is 12-16 chars long
           containing any alphanumeric in [0-9a-zA-Z] and '@' '-' '_' '/'
    **RESTRICTED**
    Example:
    curl --user admin:admin -d username=myiscsiusername
         -d password=myiscsipassword -d mutual_username=myiscsiusername
         -d mutual_password=myiscsipassword
         -X PUT http://192.168.122.69:5000/api/clientauth/iqn.2017-08.org.ceph:iscsi-gw0
    """
    try:
        target_iqn, iqn_type = normalize_wwn(['iqn'], target_iqn)
    except RTSLibError as err:
        err_str = "Invalid iqn {} - {}".format(target_iqn, err)
        return jsonify(message=err_str), 500

    try:
        client_iqn, iqn_type = normalize_wwn(['iqn'], client_iqn)
    except RTSLibError as err:
        err_str = "Invalid iqn {} - {}".format(client_iqn, err)
        return jsonify(message=err_str), 500

    # http_mode = 'https' if settings.config.api_secure else 'http'
    target_config = config.config['targets'][target_iqn]
    try:
        gateways = get_remote_gateways(target_config['portals'], logger)
    except CephiSCSIError as err:
        return jsonify(message="{}".format(err)), 400

    lun_list = target_config['clients'][client_iqn]['luns'].keys()
    image_list = ','.join(lun_list)
    username = request.form.get('username', '')
    password = request.form.get('password', '')
    mutual_username = request.form.get('mutual_username', '')
    mutual_password = request.form.get('mutual_password', '')

    client_usable = valid_client(mode='auth',
                                 client_iqn=client_iqn,
                                 username=username,
                                 password=password,
                                 mutual_username=mutual_username,
                                 mutual_password=mutual_password,
                                 target_iqn=target_iqn)
    if client_usable != 'ok':
        logger.error("BAD auth request from {}".format(request.remote_addr))
        return jsonify(message=client_usable), 400

    api_vars = {"committing_host": this_host(),
                "image_list": image_list,
                "username": username,
                "password": password,
                "mutual_username": mutual_username,
                "mutual_password": mutual_password}

    gateways.insert(0, 'localhost')

    resp_text, resp_code = call_api(gateways, '_clientauth',
                                    '{}/{}'.format(target_iqn, client_iqn),
                                    http_method='put',
                                    api_vars=api_vars)

    return jsonify(message="client auth {}".format(resp_text)), \
        resp_code


@app.route('/api/_clientauth/<target_iqn>/<client_iqn>', methods=['PUT'])
@requires_restricted_auth
def _clientauth(target_iqn, client_iqn):
    """
    Manage client authentication credentials on the local gateway
    Internal Use ONLY
    :param target_iqn: IQN of the target
    :param client_iqn: IQN of the client
    **RESTRICTED**
    """
    # PUT request to define/change authentication
    image_list = request.form['image_list']
    username = request.form.get('username', '')
    password = request.form.get('password', '')
    mutual_username = request.form.get('mutual_username', '')
    mutual_password = request.form.get('mutual_password', '')
    committing_host = request.form['committing_host']

    status_code, status_text = _update_client(client_iqn=client_iqn,
                                              images=image_list,
                                              username=username,
                                              password=password,
                                              mutual_username=mutual_username,
                                              mutual_password=mutual_password,
                                              committing_host=committing_host,
                                              target_iqn=target_iqn)

    return jsonify(message=status_text), status_code
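# The fan-out order matters in these coordinated endpoints: create and
# update paths do gateways.insert(0, 'localhost') so the change is
# applied locally first, while the client delete path appends
# 'localhost' so remote gateways are unwound before the local one.
# A tiny sketch of both orderings (helper names are illustrative):

```python
def update_order(remote_gateways):
    # create/update: apply locally first so errors surface early,
    # before the change is pushed to the remaining portals
    return ['localhost'] + list(remote_gateways)


def delete_order(remote_gateways):
    # delete: remote gateways first, local gateway (and the config
    # object commit) last
    return list(remote_gateways) + ['localhost']
```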
@app.route('/api/clientlun/<target_iqn>/<client_iqn>', methods=['PUT', 'DELETE'])
@requires_restricted_auth
def clientlun(target_iqn, client_iqn):
    """
    Coordinate the addition(PUT) and removal(DELETE) of a disk for a client
    :param client_iqn: (str) IQN of the client
    :param disk: (str) rbd image name of the format pool/image
    **RESTRICTED**
    Examples:
    TARGET_IQN = iqn.2017-08.org.ceph:iscsi-gw
    CLIENT_IQN = iqn.1994-05.com.redhat:myhost4
    curl --user admin:admin -d disk=rbd/new2_1
         -X PUT http://192.168.122.69:5000/api/clientlun/$TARGET_IQN/$CLIENT_IQN
    curl --user admin:admin -d disk=rbd/new2_1
         -X DELETE http://192.168.122.69:5000/api/clientlun/$TARGET_IQN/$CLIENT_IQN
    """
    try:
        target_iqn, iqn_type = normalize_wwn(['iqn'], target_iqn)
    except RTSLibError as err:
        err_str = "Invalid iqn {} - {}".format(target_iqn, err)
        return jsonify(message=err_str), 500

    try:
        client_iqn, iqn_type = normalize_wwn(['iqn'], client_iqn)
    except RTSLibError as err:
        err_str = "Invalid iqn {} - {}".format(client_iqn, err)
        return jsonify(message=err_str), 500

    # http_mode = 'https' if settings.config.api_secure else 'http'
    target_config = config.config['targets'][target_iqn]
    try:
        gateways = get_remote_gateways(target_config['portals'], logger)
    except CephiSCSIError as err:
        return jsonify(message="{}".format(err)), 400

    disk = request.form.get('disk')

    lun_list = list(target_config['clients'][client_iqn]['luns'].keys())

    if request.method == 'PUT':
        lun_list.append(disk)
    else:
        # this is a delete request
        if disk in lun_list:
            lun_list.remove(disk)
        else:
            return jsonify(message="disk not mapped to client"), 400

    auth_config = target_config['clients'][client_iqn]['auth']
    chap_obj = CHAP(auth_config['username'],
                    auth_config['password'],
                    auth_config['password_encryption_enabled'])
    chap_mutual_obj = CHAP(auth_config['mutual_username'],
                           auth_config['mutual_password'],
                           auth_config['mutual_password_encryption_enabled'])
    image_list = ','.join(lun_list)

    client_usable = valid_client(mode='disk', client_iqn=client_iqn,
                                 image_list=image_list,
                                 target_iqn=target_iqn)
    if client_usable != 'ok':
        logger.error("Bad disk request for client {} : "
                     "{}".format(client_iqn,
                                 client_usable))
        return jsonify(message=client_usable), 400

    # committing host is the local LIO node
    api_vars = {"committing_host": this_host(),
                "image_list": image_list,
                "username": chap_obj.user,
                "password": chap_obj.password,
                "mutual_username": chap_mutual_obj.user,
                "mutual_password": chap_mutual_obj.password}

    gateways.insert(0, 'localhost')

    resp_text, resp_code = call_api(gateways, '_clientlun',
                                    '{}/{}'.format(target_iqn, client_iqn),
                                    http_method='put',
                                    api_vars=api_vars)

    return jsonify(message="client masking update {}".format(resp_text)), \
        resp_code


@app.route('/api/_clientlun/<target_iqn>/<client_iqn>', methods=['GET', 'PUT'])
@requires_restricted_auth
def _clientlun(target_iqn, client_iqn):
    """
    Manage the addition/removal of disks from a client on the local gateway
    Internal Use ONLY
    **RESTRICTED**
    """
    target_config = config.config['targets'][target_iqn]
    if request.method == 'GET':
        if client_iqn in target_config['clients']:
            lun_config = target_config['clients'][client_iqn]['luns']
            return jsonify(message=lun_config), 200
        else:
            return jsonify(message="Client does not exist"), 404

    else:
        # PUT request = new/updated disks for this client
        image_list = request.form['image_list']
        username = request.form.get('username', '')
        password = request.form.get('password', '')
        mutual_username = request.form.get('mutual_username', '')
        mutual_password = request.form.get('mutual_password', '')
        committing_host = request.form['committing_host']

        status_code, status_text = _update_client(client_iqn=client_iqn,
                                                  images=image_list,
                                                  username=username,
                                                  password=password,
                                                  mutual_username=mutual_username,
                                                  mutual_password=mutual_password,
                                                  committing_host=committing_host,
                                                  target_iqn=target_iqn)

        return jsonify(message=status_text), status_code
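# A standalone sketch of the LUN-list edit the clientlun endpoint
# performs: PUT appends the disk, DELETE removes it, and a DELETE for a
# disk that was never mapped is rejected (the endpoint returns 400 with
# "disk not mapped to client"). The (new_list, error) return shape is
# illustrative, not the endpoint's actual signature:

```python
def edit_lun_list(lun_list, disk, method):
    luns = list(lun_list)  # copy, as the endpoint does with list(...keys())
    if method == 'PUT':
        luns.append(disk)
        return luns, None
    # DELETE path: only remove disks that are actually mapped
    if disk in luns:
        luns.remove(disk)
        return luns, None
    return luns, "disk not mapped to client"
```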
@app.route('/api/client/<target_iqn>/<client_iqn>', methods=['PUT', 'DELETE'])
@requires_restricted_auth
def client(target_iqn, client_iqn):
    """
    Handle the client create/delete actions across gateways
    :param target_iqn: (str) IQN of the target
    :param client_iqn: (str) IQN of the client to create or delete
    **RESTRICTED**
    Examples:
    TARGET_IQN = iqn.2017-08.org.ceph:iscsi-gw
    CLIENT_IQN = iqn.1994-05.com.redhat:myhost4
    curl --user admin:admin
         -X PUT http://192.168.122.69:5000/api/client/$TARGET_IQN/$CLIENT_IQN
    curl --user admin:admin
         -X DELETE http://192.168.122.69:5000/api/client/$TARGET_IQN/$CLIENT_IQN
    """
    method = {"PUT": 'create',
              "DELETE": 'delete'}

    try:
        target_iqn, iqn_type = normalize_wwn(['iqn'], target_iqn)
    except RTSLibError as err:
        err_str = "Invalid iqn {} - {}".format(target_iqn, err)
        return jsonify(message=err_str), 500

    try:
        client_iqn, iqn_type = normalize_wwn(['iqn'], client_iqn)
    except RTSLibError as err:
        err_str = "Invalid iqn {} - {}".format(client_iqn, err)
        return jsonify(message=err_str), 500

    # http_mode = 'https' if settings.config.api_secure else 'http'
    target_config = config.config['targets'][target_iqn]
    try:
        gateways = get_remote_gateways(target_config['portals'], logger)
    except CephiSCSIError as err:
        return jsonify(message="{}".format(err)), 400

    # validate the PUT/DELETE request first
    client_usable = valid_client(mode=method[request.method],
                                 client_iqn=client_iqn,
                                 target_iqn=target_iqn)
    if client_usable != 'ok':
        return jsonify(message=client_usable), 400

    # committing host is the node responsible for updating the config object
    api_vars = {"committing_host": this_host()}

    if request.method == 'PUT':
        # creating a client is done locally first, then applied to the
        # other gateways
        gateways.insert(0, 'localhost')

        resp_text, resp_code = call_api(gateways, '_client',
                                        '{}/{}'.format(target_iqn, client_iqn),
                                        http_method='put',
                                        api_vars=api_vars)

        return jsonify(message="client create/update {}".format(resp_text)), \
            resp_code
    else:
        # DELETE client request
        # Process flow: remote gateways > local > delete config object entry
        gateways.append('localhost')

        resp_text, resp_code = call_api(gateways, '_client',
                                        '{}/{}'.format(target_iqn, client_iqn),
                                        http_method='delete',
                                        api_vars=api_vars)

        return jsonify(message="client delete {}".format(resp_text)), \
            resp_code


@app.route('/api/_client/<target_iqn>/<client_iqn>',
           methods=['GET', 'PUT', 'DELETE'])
@requires_restricted_auth
def _client(target_iqn, client_iqn):
    """
    Manage a client definition on the local gateway
    Internal Use ONLY
    :param target_iqn: iscsi name for the target
    :param client_iqn: iscsi name for the client
    **RESTRICTED**
    """
    if request.method == 'GET':
        target_config = config.config['targets'][target_iqn]
        if client_iqn in target_config['clients']:
            return jsonify(target_config["clients"][client_iqn]), 200
        else:
            return jsonify(message="Client does not exist"), 404

    elif request.method == 'PUT':
        try:
            normalize_wwn(['iqn'], client_iqn)
        except RTSLibError:
            return jsonify(message="'{}' is not a valid name for "
                                   "iSCSI".format(client_iqn)), 400

        committing_host = request.form['committing_host']

        image_list = request.form.get('image_list', '')

        username = request.form.get('username', '')
        password = request.form.get('password', '')
        mutual_username = request.form.get('mutual_username', '')
        mutual_password = request.form.get('mutual_password', '')

        status_code, status_text = _update_client(client_iqn=client_iqn,
                                                  images=image_list,
                                                  username=username,
                                                  password=password,
                                                  mutual_username=mutual_username,
                                                  mutual_password=mutual_password,
                                                  committing_host=committing_host,
                                                  target_iqn=target_iqn)

        logger.debug("client create: {}".format(status_code))
        logger.debug("client create: {}".format(status_text))

        return jsonify(message=status_text), status_code

    else:
        # DELETE request
        committing_host = request.form['committing_host']

        # Make sure the delete request is for a client we have defined
        target_config = config.config['targets'][target_iqn]
        if client_iqn in target_config['clients'].keys():
            client = GWClient(logger, client_iqn, '', '', '', '', '', target_iqn)
            client.manage('absent', committer=committing_host)

            if client.error:
                logger.error("Failed to remove client : "
                             "{}".format(client.error_msg))
                return jsonify(message="Failed to remove client"), 500
            else:
                if committing_host == this_host():
                    config.refresh()

                return jsonify(message="Client deleted ok"), 200
        else:
            logger.error("Delete request for non existent client!")
            return jsonify(message="Client does not exist!"), 404


@app.route('/api/clientinfo/<target_iqn>/<client_iqn>', methods=['GET'])
@requires_restricted_auth
def clientinfo(target_iqn, client_iqn):
    """
    Returns client alias, ip_address and state for each connected portal
    **RESTRICTED**
    Examples:
    curl --user admin:admin
         -X GET http://192.168.122.69:5000/api/clientinfo/
         iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw-client
    """
    if target_iqn not in config.config['targets']:
        return jsonify(message="Target {} does not exist".format(target_iqn)), 404

    target_config = config.config['targets'][target_iqn]
    if client_iqn not in target_config['clients']:
        return jsonify(message="Client {} does not exist".format(client_iqn)), 404

    gateways = target_config['portals']
    response = {
        "alias": '',
        "state": {},
        "ip_address": []
    }

    for gateway in gateways.keys():
        resp_text, resp_code = call_api([gateway], '_clientinfo',
                                        '{}/{}'.format(target_iqn, client_iqn),
                                        http_method='get')
        if resp_code != 200:
            return jsonify(message="{}".format(resp_text)), resp_code

        gateway_response = json.loads(resp_text)
        alias = gateway_response['alias']
        if alias:
            response['alias'] = gateway_response['alias']
        state = gateway_response['state']
        if state:
            if state not in response['state']:
                response['state'][state] = []
            response['state'][state].append(gateway)
        response['ip_address'].extend(gateway_response['ip_address'])

    response['ip_address'] = list(set(response['ip_address']))

    return jsonify(response), 200
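# A self-contained sketch of the merge the clientinfo endpoint performs
# across per-gateway responses: gateways are bucketed by connection
# state, IP addresses are de-duplicated, and the last non-empty alias
# wins. The endpoint itself uses list(set(...)) (unordered); this sketch
# sorts the addresses purely for deterministic output.

```python
def merge_clientinfo(per_gateway):
    # per_gateway: {gateway_name: {'alias': str, 'state': str,
    #                              'ip_address': [str, ...]}}
    response = {"alias": '', "state": {}, "ip_address": []}
    for gateway, info in per_gateway.items():
        if info['alias']:
            response['alias'] = info['alias']
        if info['state']:
            response['state'].setdefault(info['state'], []).append(gateway)
        response['ip_address'].extend(info['ip_address'])
    response['ip_address'] = sorted(set(response['ip_address']))
    return response
```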
@app.route('/api/_clientinfo/<target_iqn>/<client_iqn>', methods=['GET'])
@requires_restricted_auth
def _clientinfo(target_iqn, client_iqn):
    """
    Returns client alias, ip_address and state for local gateway
    **RESTRICTED**
    Examples:
    curl --user admin:admin
         -X GET http://192.168.122.69:5000/api/_clientinfo/
         iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw-client
    """
    if target_iqn not in config.config['targets']:
        return jsonify(message="Target {} does not exist".format(target_iqn)), 404

    target_config = config.config['targets'][target_iqn]
    if client_iqn not in target_config['clients']:
        return jsonify(message="Client {} does not exist".format(client_iqn)), 404

    logged_in = GWClient.get_client_info(target_iqn, client_iqn)

    return jsonify(logged_in), 200


@app.route('/api/hostgroups/<target_iqn>', methods=['GET'])
@requires_restricted_auth
def hostgroups(target_iqn=None):
    """
    Return the hostgroup names defined to the configuration
    **RESTRICTED**
    Examples:
    curl --user admin:admin
         -X GET http://192.168.122.69:5000/api/hostgroups/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
    """
    try:
        target_iqn, iqn_type = normalize_wwn(['iqn'], target_iqn)
    except RTSLibError as err:
        err_str = "Invalid iqn {} - {}".format(target_iqn, err)
        return jsonify(message=err_str), 500

    target_config = config.config['targets'][target_iqn]
    if request.method == 'GET':
        return jsonify({"groups": list(target_config['groups'].keys())}), 200


@app.route('/api/hostgroup/<target_iqn>/<group_name>',
           methods=['GET', 'PUT', 'DELETE'])
@requires_restricted_auth
def hostgroup(target_iqn, group_name):
    """
    co-ordinate the management of host groups across iSCSI gateway hosts
    **RESTRICTED**
    :param group_name: (str) group name
    :param: members (list) list of client iqn's that are members of this group
    :param: disks (list) list of disks that each member should have masked
    :param: action (str) 'add'/'remove' group's client members/disks,
            default is 'add'
    :return:
    Examples:
    curl --user admin:admin
         -X GET http://192.168.122.69:5000/api/hostgroup/group_name
    curl --user admin:admin -d members=iqn.1994-05.com.redhat:myhost4
         -d disks=rbd.disk1
         -X PUT http://192.168.122.69:5000/api/hostgroup/group_name
    curl --user admin:admin -d action=remove -d disks=rbd.disk1
         -X PUT http://192.168.122.69:5000/api/hostgroup/group_name
    curl --user admin:admin
         -X DELETE http://192.168.122.69:5000/api/hostgroup/group_name
    """
    http_mode = 'https' if settings.config.api_secure else 'http'
    valid_hostgroup_actions = ['add', 'remove']

    try:
        target_iqn, iqn_type = normalize_wwn(['iqn'], target_iqn)
    except RTSLibError as err:
        err_str = "Invalid iqn {} - {}".format(target_iqn, err)
        return jsonify(message=err_str), 500

    target_config = config.config['targets'][target_iqn]
    try:
        gateways = get_remote_gateways(target_config['portals'], logger)
    except CephiSCSIError as err:
        return jsonify(message="{}".format(err)), 400

    action = request.form.get('action', 'add')
    if action.lower() not in valid_hostgroup_actions:
        return jsonify(message="Invalid hostgroup action specified"), 405

    target_config = config.config['targets'][target_iqn]
    if request.method == 'GET':
        # return the requested definition
        if group_name in target_config['groups'].keys():
            return jsonify(target_config['groups'].get(group_name)), 200
        else:
            # group name does not exist
            return jsonify(message="Group name does not exist"), 404

    elif request.method == 'PUT':

        if group_name in target_config['groups']:
            host_group = target_config['groups'].get(group_name)
            current_members = host_group.get('members')
            current_disks = list(host_group.get('disks').keys())
        else:
            current_members = []
            current_disks = []

        changed_members = request.form.get('members', '')
        if changed_members == '':
            changed_members = []
        else:
            changed_members = changed_members.split(',')
        changed_disks = request.form.get('disks', '')
        if changed_disks == '':
            changed_disks = []
        else:
            changed_disks = changed_disks.split(',')

        if action.lower() == 'add':
            group_members = set(current_members + changed_members)
            group_disks = set(current_disks + changed_disks)
        else:
            # remove members
            group_members = [mbr for mbr in current_members
                             if mbr not in changed_members]
            group_disks = [disk for disk in current_disks
                           if disk not in changed_disks]

        api_vars = {"members": ','.join(group_members),
                    "disks": ','.join(group_disks)}

        # updated = []
        gateways.insert(0, 'localhost')
        logger.debug("gateway update order is {}".format(','.join(gateways)))

        resp_text, resp_code = call_api(gateways, '_hostgroup',
                                        '{}/{}'.format(target_iqn, group_name),
                                        http_method='put',
                                        api_vars=api_vars)

        return jsonify(message="hostgroup create/update {}".format(resp_text)), \
            resp_code

    else:
        # Delete request just purges the entry from the config, so we only
        # need to run against the local gateway
        if not target_config['groups'].get(group_name, None):
            return jsonify(message="Group name '{}' not "
                                   "found".format(group_name)), 404

        # At this point the group name is valid, so go ahead and remove it
        api_endpoint = ("{}://{}:{}/api/"
                        "_hostgroup/{}/{}".format(http_mode,
                                                  'localhost',
                                                  settings.config.api_port,
                                                  target_iqn,
                                                  group_name
                                                  ))
        api = APIRequest(api_endpoint)
        api.delete()

        if api.response.status_code == 200:
            logger.debug("Group definition {} removed".format(group_name))
            return jsonify(message="Group definition '{}' "
                                   "deleted".format(group_name)), 200
        else:
            return jsonify(message="Delete of group '{}'"
                                   " failed : {}".format(group_name,
                                                         api.response.json()['message'])), 400


def fill_settings_dict(def_settings):
    defaults = {}
    limits = {}
    for k, setting in def_settings.items():
        # Return normalized value to match get_config()'s format
        defaults[k] = getattr(settings.config, k)
        if isinstance(setting, IntSetting):
            limits[k] = {'min': setting.min_val,
                         'max': setting.max_val,
                         'type': setting.type_str}
        elif isinstance(setting, EnumSetting):
            limits[k] = {'values': setting.valid_vals,
                         'type': setting.type_str}
        else:
            limits[k] = {'type': setting.type_str}
    return defaults, limits
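# A self-contained sketch of the defaults/limits split produced by
# fill_settings_dict above. IntSetting and EnumSetting here are local
# stand-in stubs, not the real ceph-iscsi setting classes; only the
# dispatch on setting type mirrors the function above.

```python
class IntSetting:
    # stub: the real class carries more state
    def __init__(self, min_val, max_val):
        self.min_val, self.max_val, self.type_str = min_val, max_val, 'int'


class EnumSetting:
    # stub: the real class carries more state
    def __init__(self, valid_vals):
        self.valid_vals, self.type_str = valid_vals, 'enum'


def settings_limits(def_settings):
    # build the per-setting limits dict the /api/settings payload exposes
    limits = {}
    for k, setting in def_settings.items():
        if isinstance(setting, IntSetting):
            limits[k] = {'min': setting.min_val, 'max': setting.max_val,
                         'type': setting.type_str}
        elif isinstance(setting, EnumSetting):
            limits[k] = {'values': setting.valid_vals,
                         'type': setting.type_str}
        else:
            limits[k] = {'type': setting.type_str}
    return limits
```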
**RESTRICTED** Examples: curl --user admin:admin -X GET http://192.168.122.69:5000/api/settings """ target_default_controls, target_controls_limits = fill_settings_dict(GWTarget.SETTINGS) disk_default_controls = {} disk_controls_limits = {} required_rbd_features = {} unsupported_rbd_features = {} for bs, bs_settings in LUN.SETTINGS.items(): disk_default_controls[bs], disk_controls_limits[bs] = fill_settings_dict(bs_settings) required_rbd_features[bs] = RBDDev.required_features(bs) unsupported_rbd_features[bs] = RBDDev.unsupported_features(bs) return jsonify({ 'target_default_controls': target_default_controls, 'target_controls_limits': target_controls_limits, 'disk_default_controls': disk_default_controls, 'disk_controls_limits': disk_controls_limits, 'unsupported_rbd_features': unsupported_rbd_features, 'required_rbd_features': required_rbd_features, 'backstores': LUN.BACKSTORES, 'default_backstore': LUN.DEFAULT_BACKSTORE, 'config': { 'minimum_gateways': settings.config.minimum_gateways }, 'api_version': 2 }), 200 @app.route('/api/_hostgroup//', methods=['GET', 'PUT', 'DELETE']) @requires_restricted_auth def _hostgroup(target_iqn, group_name): """ Manage a hostgroup definition on the local iscsi gateway Internal Use ONLY **RESTRICTED** :param group_name: :return: """ target_config = config.config['targets'][target_iqn] if request.method == 'GET': # return the requested definition if group_name in target_config['groups'].keys(): return jsonify(target_config['groups'].get(group_name)), 200 else: # group name does not exist return jsonify(message="Group name does not exist"), 404 elif request.method == 'PUT': members = request.form.get('members', []) if members == '': members = [] else: members = members.split(',') disks = request.form.get('disks', []) if disks == '': disks = [] else: disks = disks.split(',') # create/update a host group definition grp = Group(logger, target_iqn, group_name, members, disks) grp.apply() if not grp.error: config.refresh() return 
jsonify(message="Group created/updated"), 200 else: return jsonify(message="{}".format(grp.error_msg)), 400 else: # request is for a delete of a host group grp = Group(logger, target_iqn, group_name) grp.purge() if not grp.error: config.refresh() return jsonify(message="Group '{}' removed".format(group_name)), 200 else: return jsonify(message=grp.error_msg), 400 def iscsi_active(): for x in ['/proc/net/tcp', '/proc/net/tcp6']: try: with open(x) as tcp_data: for con in tcp_data: field = con.split() if '0CBC' in field[1] and field[3] == '0A': # iscsi port is up (x'0cbc' = 3260), and listening (x'0a') return True except Exception: pass return False @app.route('/api/_ping', methods=['GET']) @requires_restricted_auth def _ping(): """ Simple "is alive" ping responder. """ if request.method == 'GET': gw_config = config.config['gateways'] if this_host() in gw_config: if iscsi_active(): rc = 200 else: rc = 503 else: # host is not yet defined, which means the port check would fail # so just return a 200 OK back to the caller rc = 200 return jsonify(message='pong'), rc def target_ready(gateway_list): """ function which determines whether all gateways in the configuration are up and ready to process commands :param gateway_list: (list) list of gateway names/IP addresses :return: (str) either 'ok' or an error description """ http_mode = 'https' if settings.config.api_secure else 'http' target_state = {"status": 'OK', "status_iscsi": 'UP', "status_api": 'UP', "summary": ''} for gw in gateway_list: api_endpoint = ("{}://{}:{}/api/_ping".format(http_mode, normalize_ip_literal(gw), settings.config.api_port)) try: api = APIRequest(api_endpoint) api.get() except GatewayAPIError: target_state['status'] = 'NOTOK' target_state['status_iscsi'] = 'UNKNOWN' target_state['status_api'] = 'DOWN' target_state['summary'] += ',{}(iscsi Unknown, API down)'.format(gw) else: if api.response.status_code == 200: continue elif api.response.status_code == 503: target_state['status'] = 'NOTOK' 
target_state['status_iscsi'] = 'DOWN' target_state['status_api'] = 'UP' target_state['summary'] += ',{}(iscsi down, API up)'.format(gw) else: target_state['status'] = 'NOTOK' target_state['status_iscsi'] = 'UNKNOWN' target_state['status_api'] = 'UNKNOWN' target_state['summary'] += ',{}(UNKNOWN state)'.format(gw) target_state['summary'] = target_state['summary'][1:] # ignore 1st char return target_state def call_api(gateway_list, endpoint, element, http_method='put', api_vars=None): """ Generic API handler to process a given request across multiple gateways :param gateway_list: (list) :param endpoint: (str) http api endpoint name to call :param element: (str) object to act upon :param http_method: (str) put or get http method :param api_vars: (dict) variables to pass to the api call :return: (str, int) string description and http status code """ target_state = target_ready(gateway_list) if target_state.get('status') != 'OK': return ('failed, gateway(s) unavailable:' '{}'.format(target_state.get('summary'))), 503 http_mode = 'https' if settings.config.api_secure else 'http' updated = [] logger.debug("gateway update order is {}".format(','.join(gateway_list))) for gw in gateway_list: logger.debug("processing GW '{}'".format(gw)) api_endpoint = ("{}://{}:{}/api/" "{}/{}".format(http_mode, normalize_ip_literal(gw), settings.config.api_port, endpoint, element )) api = APIRequest(api_endpoint, data=api_vars) api_method = getattr(api, http_method) api_method() if api.response.status_code == 200: updated.append(gw) logger.info("{} update on {}, successful".format(endpoint, gw)) continue else: logger.error("{} change on {} failed with " "{}".format(endpoint, gw, api.response.status_code)) if gw == 'localhost': gw = this_host() if len(updated) > 0: aborted = [gw_name for gw_name in gateway_list if gw_name not in updated] fail_msg = ("failed on {}, " "applied to {}, " "aborted {}. ".format(gw, ','.join(updated), ','.join(aborted))) else: fail_msg = "failed on {}. 
".format(gw) try: fail_msg += api.response.json()['message'] except Exception: logger.debug(api.response.text) fail_msg += "unknown failure" logger.debug(fail_msg) return fail_msg, api.response.status_code return api.response.text if http_method == 'get' else 'successful', 200 def pre_reqs_errors(): """ function to check pre-reqs are installed and at the relevant versions :return: list of configuration errors detected """ dist_translations = { "centos": "rhel", "opensuse-leap": "suse"} valid_dists = { "rhel": 7.4, "suse": 15.1, "debian": 10, "ubuntu": 18.04} errors_found = [] os_release = read_os_release() dist = os_release.get('ID', '') rel = os_release.get('VERSION_ID') dist = dist.lower() dist = dist_translations.get(dist, dist) if dist in valid_dists: if dist == 'rhel': import distro rel = distro.version() # CentOS formats a release similar 7.4.1708 rel = float(".".join(rel.split('.')[:2])) if rel < valid_dists[dist]: errors_found.append("OS version is unsupported") else: errors_found.append("OS is unsupported") return errors_found def halt(message): logger.critical(message) sys.exit(16) class ConfigWatcher(threading.Thread): """ A ConfigWatcher checks the epoc xattr of the rados config object every 'n' seconds to determine if a change has been made. 
If a change has been made the local copy of the config object is refreshed """ def __init__(self, interval=1): threading.Thread.__init__(self) self.interval = interval self.daemon = True def run(self): logger.info("Started the configuration object watcher") logger.info("Checking for config object changes every {}s".format( self.interval)) cluster = rados.Rados(conffile=settings.config.cephconf, name=settings.config.cluster_client_name) cluster.connect() ioctx = cluster.open_ioctx(settings.config.pool) while True: time.sleep(self.interval) # look at the internal config object epoch (it could be refreshed # within an api call) current_epoch = config.config['epoch'] # get the epoch from the xattr of the config object try: obj_epoch = int(ioctx.get_xattr('gateway.conf', 'epoch')) except rados.ObjectNotFound: raise else: # if it's changed, refresh the local config to ensure a query # to this node will return current state if obj_epoch != current_epoch: logger.info("Change detected - internal {} / xattr {} " "refreshing".format(current_epoch, obj_epoch)) config.refresh() def get_ssl_files_from_mon(): client_name = settings.config.cluster_client_name temp_files = [] crt_data = settings.config.pull_from_mon_config( "iscsi/{}/iscsi-gateway.crt".format(client_name)) if not crt_data: return temp_files key_data = settings.config.pull_from_mon_config( "iscsi/{}/iscsi-gateway.key".format(client_name)) if not key_data: return temp_files for data in crt_data, key_data: # NOTE: Annoyingly SSLContext.load_cert_chain can only take # paths to files and not file like objects.. yet. So we need to # create tempfiles for the SSL context to read. Once # https://bugs.python.org/issue16487 is resolved, we should be able # to simply use file-like objects and makes this much nicer. 
tmp_f = tempfile.NamedTemporaryFile(mode='w') tmp_f.write(data) tmp_f.flush() temp_files.append(tmp_f) return temp_files def get_ssl_context(): # Use these self-signed crt and key files cert_files = ['/etc/ceph/iscsi-gateway.crt', '/etc/ceph/iscsi-gateway.key'] temp_files = [] if not all([os.path.exists(crt_file) for crt_file in cert_files]): # attempt to pull out the crt and key data from global mon config-key # storage, we need to return the tempfiles so they're not gc'ed. temp_files = get_ssl_files_from_mon() cert_files = [f.name for f in temp_files] if not cert_files or not all([os.path.exists(crt_file) for crt_file in cert_files]): return None ver, rel, mod = werkzeug.__version__.split('.') if int(ver) > 0 or int(rel) > 9: logger.info("API server using TLSv1.2") context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2) context.load_cert_chain(cert_files[0], cert_files[1]) else: logger.info("API server using TLSv1 (older version of werkzeug)") context = OpenSSL.SSL.Context(OpenSSL.SSL.TLSv1_METHOD) try: context.use_certificate_file(cert_files[0]) context.use_privatekey_file(cert_files[1]) except OpenSSL.SSL.Error as err: logger.critical("SSL Error : {}".format(err)) return None # If we have loaded the certs into tempfiles we can clean them up now. # This should happen when we return, but let's be explicit. 
for f in temp_files: f.close() return context def main(): config_watcher = ConfigWatcher() config_watcher.start() log = logging.getLogger('werkzeug') log.setLevel(logging.DEBUG) # Attach the werkzeug log to the handlers defined in the outer scope if settings.config.log_to_file: log.addHandler(file_handler) log.addHandler(syslog_handler) global dev_status_watcher dev_status_watcher = DeviceStatusWatcher(logger) dev_status_watcher.start() ceph_gw = CephiSCSIGateway(logger, config) osd_state_ok = ceph_gw.osd_blocklist_cleanup() if not osd_state_ok: sys.exit(16) try: ceph_gw.define() except (CephiSCSIError, RTSLibError) as err: err_str = "Could not load gateway: {}".format(err) logger.error(err_str) ceph_gw.delete() halt(err_str) if settings.config.api_secure: context = get_ssl_context() if context is None: logger.critical( "Secure API requested but the crt/key files " "missing/incompatible?") logger.critical("Unable to start") sys.exit(16) else: context = None # Start the API server. threaded is enabled to prevent deadlocks when one # request makes further api requests app.run(host=settings.config.api_host, port=settings.config.api_port, debug=settings.config.debug, use_evalex=False, threaded=True, use_reloader=False, ssl_context=context) def signal_stop(*args): logger.info("Shutdown received") ceph_gw = CephiSCSIGateway(logger, config) sys.exit(ceph_gw.delete()) def signal_reload(*args): logger.info("Refreshing local copy of the Gateway configuration") config.refresh() ceph_gw = CephiSCSIGateway(logger, config) ceph_gw.define() if __name__ == '__main__': settings.init() logger_level = logging.getLevelName(settings.config.logger_level) # Setup signal handlers for interaction with systemd signal.signal(signal.SIGTERM, signal_stop) signal.signal(signal.SIGHUP, signal_reload) # setup syslog handler to help diagnostics logger = logging.getLogger('rbd-target-api') logger.setLevel(logging.DEBUG) # syslog (systemctl/journalctl messages) syslog_format = 
logging.Formatter("%(message)s") if settings.config.log_to_stderr: syslog_handler = logging.StreamHandler(sys.stderr) if settings.config.log_to_stderr_prefix: syslog_format = \ logging.Formatter("{} %(message)s".format( settings.config.log_to_stderr_prefix)) else: syslog_handler = logging.handlers.SysLogHandler(address='/dev/log') syslog_handler.setLevel(logging.INFO) syslog_handler.setFormatter(syslog_format) logger.addHandler(syslog_handler) if settings.config.log_to_file: # file target - more verbose logging for diagnostics file_handler = RotatingFileHandler('/var/log/rbd-target-api/rbd-target-api.log', maxBytes=5242880, backupCount=7) file_handler.setLevel(logger_level) file_format = logging.Formatter( "%(asctime)s %(levelname)8s [%(filename)s:%(lineno)s:%(funcName)s()] " "- %(message)s") file_handler.setFormatter(file_format) logger.addHandler(file_handler) # config is set in the outer scope, so it's easily accessible to all # api functions config = Config(logger) if config.error: logger.error(config.error_msg) halt("Unable to open/read the configuration object") else: main() ceph-iscsi-3.9/rbd-target-gw.py000077500000000000000000000055361470665154300164630ustar00rootroot00000000000000#!/usr/bin/python import logging import logging.handlers from logging.handlers import RotatingFileHandler from flask import Flask, Response, jsonify from ceph_iscsi_config.metrics import GatewayStats import ceph_iscsi_config.settings as settings from ceph_iscsi_config.utils import CephiSCSIInval # Create a flask instance app = Flask(__name__) # workaround for https://github.com/pallets/flask/issues/2549 app.config['JSONIFY_PRETTYPRINT_REGULAR'] = False @app.route("/", methods=["GET"]) def prom_root(): """ handle the '/' endpoint - just redirect point the user at /metrics""" return ''' Ceph/iSCSI Prometheus Exporter

        <h1>Ceph/iSCSI Prometheus Exporter</h1>
        <p><a href='/metrics'>Metrics</a></p>

''' @app.route("/metrics", methods=["GET"]) def prom_metrics(): """ Collect the stats and send back to the caller""" stats = GatewayStats() try: stats.collect() except CephiSCSIInval as err: return jsonify(message="Could not get metrics: {}".format(err)), 404 return Response(stats.formatted(), content_type="text/plain") def main(): if settings.config.prometheus_exporter: logger.info("Integrated Prometheus exporter is enabled") # starting a flask instance will occupy the main thread # Attach the werkzeug log to the handlers defined in the outer scope log = logging.getLogger('werkzeug') log.setLevel(logging.DEBUG) log.addHandler(file_handler) log.addHandler(syslog_handler) app.run(host=settings.config.prometheus_host, port=settings.config.prometheus_port, debug=False, threaded=True) else: logger.info("Integrated Prometheus exporter is disabled") if __name__ == '__main__': settings.init() logger_level = logging.getLevelName(settings.config.logger_level) # setup syslog handler to help diagnostics logger = logging.getLogger('rbd-target-gw') logger.setLevel(logging.DEBUG) # syslog (systemctl/journalctl messages) syslog_handler = logging.handlers.SysLogHandler(address='/dev/log') syslog_handler.setLevel(logging.INFO) syslog_format = logging.Formatter("%(message)s") syslog_handler.setFormatter(syslog_format) # file target - more verbose logging for diagnostics file_handler = RotatingFileHandler('/var/log/rbd-target-gw/rbd-target-gw.log', maxBytes=5242880, backupCount=7) file_handler.setLevel(logger_level) file_format = logging.Formatter("%(asctime)s [%(levelname)8s] - %(message)s") file_handler.setFormatter(file_format) logger.addHandler(syslog_handler) logger.addHandler(file_handler) main() ceph-iscsi-3.9/setup.py000066400000000000000000000026261470665154300151470ustar00rootroot00000000000000#!/usr/bin/python from setuptools import setup import distutils.command.install_scripts import shutil import os if os.path.exists('README'): with open('README') as readme_file: 
long_description = readme_file.read().strip() else: long_description = '' # idea from http://stackoverflow.com/a/11400431/2139420 class StripExtension(distutils.command.install_scripts.install_scripts): """ Class to handle the stripping of .py extensions in for executable file names making them more user friendly """ def run(self): distutils.command.install_scripts.install_scripts.run(self) for script in self.get_outputs(): if script.endswith(".py"): shutil.move(script, script[:-3]) setup( name="ceph_iscsi", version="3.9", description="Common classes/functions and CLI tools used to configure iSCSI " "gateways backed by Ceph RBD", long_description=long_description, author="Paul Cuzner", author_email="pcuzner@redhat.com", url="http://github.com/pcuzner/ceph-iscsi", license="GPLv3", packages=[ "ceph_iscsi_config", "gwcli" ], scripts=[ "rbd-target-gw.py", 'gwcli.py', 'rbd-target-api.py' ], data_files=[("/var/log/rbd-target-gw", []), ("/var/log/rbd-target-api", [])], cmdclass={ "install_scripts": StripExtension } ) ceph-iscsi-3.9/test/000077500000000000000000000000001470665154300144065ustar00rootroot00000000000000ceph-iscsi-3.9/test/test_chap.py000066400000000000000000000100241470665154300167270ustar00rootroot00000000000000# -*- coding: utf-8 -*- from __future__ import absolute_import import tempfile import sys import unittest try: from unittest import mock except ImportError: import mock from cryptography.hazmat.backends import default_backend from cryptography.hazmat.primitives.asymmetric import rsa from cryptography.hazmat.primitives import serialization, hashes from cryptography.hazmat.primitives.asymmetric import padding # We need to mock ceph libs python bindings because there's no # updated package in pypy sys.modules['rados'] = mock.Mock() sys.modules['rbd'] = mock.Mock() from ceph_iscsi_config.client import CHAP # noqa: E402 import ceph_iscsi_config.settings as settings # noqa: E402 from base64 import b64encode # noqa: E402 class ChapTest(unittest.TestCase): 
def setUp(self): settings.init() def test_chap_no_encryption(self): chap = CHAP("username", "password", False) self.assertEqual(chap.user, "username") self.assertEqual(chap.password, "password") self.assertEqual(chap.password_str, "password") def test_chap_encryption(self): private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048, backend=default_backend()) priv_pem = private_key.private_bytes( encoding=serialization.Encoding.PEM, format=serialization.PrivateFormat.TraditionalOpenSSL, encryption_algorithm=serialization.NoEncryption()) priv_key_file = tempfile.mkstemp() with open(priv_key_file[1], "wb") as kf: kf.write(priv_pem) pub_pem = private_key.public_key().public_bytes( encoding=serialization.Encoding.PEM, format=serialization.PublicFormat.SubjectPublicKeyInfo) pub_key_file = tempfile.mkstemp() with open(pub_key_file[1], "wb") as kf: kf.write(pub_pem) settings.config.priv_key = priv_key_file[1] settings.config.pub_key = pub_key_file[1] settings.config.ceph_config_dir = "" chap = CHAP("username", "passwordverylonglong", False) encrypted_password = chap.encrypted_password(True) chap2 = CHAP(chap.user, encrypted_password, True) self.assertEqual(chap2.user, "username") self.assertEqual(chap2.password, "passwordverylonglong") self.assertEqual(chap2.password_str, encrypted_password) self.assertNotEqual(encrypted_password, "passwordverylonglong") def test_chap_upgrade(self): private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048, backend=default_backend()) priv_pem = private_key.private_bytes( encoding=serialization.Encoding.PEM, format=serialization.PrivateFormat.TraditionalOpenSSL, encryption_algorithm=serialization.NoEncryption()) priv_key_file = tempfile.mkstemp() with open(priv_key_file[1], "wb") as kf: kf.write(priv_pem) pub_pem = private_key.public_key().public_bytes( encoding=serialization.Encoding.PEM, format=serialization.PublicFormat.SubjectPublicKeyInfo) pub_key_file = tempfile.mkstemp() with open(pub_key_file[1], 
"wb") as kf: kf.write(pub_pem) settings.config.priv_key = priv_key_file[1] settings.config.pub_key = pub_key_file[1] settings.config.ceph_config_dir = "" key = private_key.public_key() encrypted_pw = b64encode(key.encrypt("passwordverylonglong".encode('utf-8'), padding.OAEP( mgf=padding.MGF1(algorithm=hashes.SHA1()), algorithm=hashes.SHA1(), label=None))).decode('utf-8') chap2 = CHAP("username", encrypted_pw, True) self.assertEqual(chap2.user, "username") self.assertEqual(chap2.password, "passwordverylonglong") ceph-iscsi-3.9/test/test_common.py000066400000000000000000000236531470665154300173200ustar00rootroot00000000000000# -*- coding: utf-8 -*- from __future__ import absolute_import import json import logging import sys import unittest try: from unittest import mock except ImportError: import mock # We need to mock ceph libs python bindings because there's no # updated package in pypy sys.modules['rados'] = mock.Mock() sys.modules['rbd'] = mock.Mock() import ceph_iscsi_config.settings as settings # noqa: E402 from ceph_iscsi_config.common import Config # noqa: E402 class ChapTest(unittest.TestCase): def setUp(self): self.logger = logging.getLogger() settings.init() def test_upgrade_config(self): gateway_conf_initial = json.dumps(self.gateway_conf_initial) # First, the upgrade is executed on node1 with mock.patch.object(Config, 'init_config', return_value=True), \ mock.patch.object(Config, '_read_config_object', return_value=gateway_conf_initial), \ mock.patch.object(Config, 'commit'), \ mock.patch("socket.gethostname", return_value='node1'), \ mock.patch("socket.getfqdn", return_value='node1.ceph.local'): config = Config(self.logger) # And then, the upgrade is executed on node2 current_config = json.dumps(config.config) with mock.patch.object(Config, 'init_config', return_value=True), \ mock.patch.object(Config, '_read_config_object', return_value=current_config), \ mock.patch.object(Config, 'commit'), \ mock.patch("socket.gethostname", return_value='node2'), \ 
mock.patch("socket.getfqdn", return_value='node2.ceph.local'): config = Config(self.logger) self.maxDiff = None iqn = 'iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw' self.assertGreater(config.config['targets'][iqn]['created'], self.gateway_conf_latest['targets'][iqn]['created']) self.assertGreater(config.config['targets'][iqn]['updated'], self.gateway_conf_latest['targets'][iqn]['updated']) config.config['targets'][iqn]['created'] = '2018/12/07 09:19:01' config.config['targets'][iqn]['updated'] = '2018/12/07 09:19:02' disk = 'rbd/disk_1' self.assertGreater(config.config['disks'][disk]['updated'], self.gateway_conf_latest['disks'][disk]['updated']) config.config['disks'][disk]['updated'] = '2018/12/07 09:19:03' self.assertDictEqual(config.config, self.gateway_conf_latest) gateway_conf_initial = { "clients": { "iqn.1994-05.com.redhat:rh7-client": { "auth": { "chap": "myiscsiusername/myiscsipassword" }, "created": "2018/12/07 09:18:01", "group_name": "mygroup", "luns": { "rbd.disk_1": { "lun_id": 0 } }, "updated": "2018/12/07 09:18:02" } }, "controls": { "immediate_data": False, "nopin_response_timeout": 17 }, "created": "2018/12/07 09:18:03", "disks": { "rbd.disk_1": { "controls": { "qfull_timeout": 18 }, "created": "2018/12/07 09:18:04", "image": "disk_1", "owner": "node1", "pool": "rbd", "pool_id": 7, "updated": "2018/12/07 09:18:05", "wwn": "4fc1071d-7e2f-4df0-95c8-925a617e2d62" } }, "epoch": 19, "gateways": { "created": "2018/12/07 09:18:06", "ip_list": [ "192.168.100.201", "192.168.100.202" ], "iqn": "iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw", "node1": { "active_luns": 1, "created": "2018/12/07 09:18:07", "gateway_ip_list": [ "192.168.100.201", "192.168.100.202" ], "inactive_portal_ips": [ "192.168.100.202" ], "iqn": "iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw", "portal_ip_address": "192.168.100.201", "tpgs": 2, "updated": "2018/12/07 09:18:08" }, "node2": { "active_luns": 0, "created": "2018/12/07 09:18:09", "gateway_ip_list": [ "192.168.100.201", "192.168.100.202" 
], "inactive_portal_ips": [ "192.168.100.201" ], "iqn": "iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw", "portal_ip_address": "192.168.100.202", "tpgs": 2, "updated": "2018/12/07 09:18:10" } }, "groups": { "mygroup": { "created": "2018/12/07 09:18:11", "disks": { "rbd.disk_1": { "lun_id": 0 } }, "members": [ "iqn.1994-05.com.redhat:rh7-client" ], "updated": "2018/12/07 09:18:12" } }, "updated": "2018/12/07 09:18:13", "version": 3 } gateway_conf_latest = { "created": "2018/12/07 09:18:03", "disks": { "rbd/disk_1": { "controls": { "qfull_timeout": 18 }, "created": "2018/12/07 09:18:04", "image": "disk_1", "owner": "node1.ceph.local", "backstore": "user:rbd", "backstore_object_name": "rbd.disk_1", "pool": "rbd", "pool_id": 7, "updated": "2018/12/07 09:19:03", "wwn": "4fc1071d-7e2f-4df0-95c8-925a617e2d62" } }, "discovery_auth": { "username": "", "password": "", "password_encryption_enabled": False, "mutual_username": "", "mutual_password": "", "mutual_password_encryption_enabled": False }, "epoch": 19, "gateways": { "node1.ceph.local": { "active_luns": 1, "created": "2018/12/07 09:18:07", "updated": "2018/12/07 09:18:08" }, "node2.ceph.local": { "active_luns": 0, "created": "2018/12/07 09:18:09", "updated": "2018/12/07 09:18:10" } }, "targets": { "iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw": { "clients": { "iqn.1994-05.com.redhat:rh7-client": { "auth": { "username": "myiscsiusername", "password": "myiscsipassword", "password_encryption_enabled": False, "mutual_username": "", "mutual_password": "", "mutual_password_encryption_enabled": False }, "group_name": "mygroup", "luns": { "rbd/disk_1": { "lun_id": 0 } } } }, "acl_enabled": True, "auth": { "username": "", "password": "", "password_encryption_enabled": False, "mutual_username": "", "mutual_password": "", "mutual_password_encryption_enabled": False }, "controls": { "immediate_data": False, "nopin_response_timeout": 17 }, "created": "2018/12/07 09:19:01", "disks": { "rbd/disk_1": { "lun_id": 0 } }, "groups": { "mygroup": 
{ "disks": { "rbd/disk_1": { "lun_id": 0 } }, "members": [ "iqn.1994-05.com.redhat:rh7-client" ] } }, "ip_list": [ "192.168.100.201", "192.168.100.202" ], "portals": { "node1.ceph.local": { "gateway_ip_list": [ "192.168.100.201", "192.168.100.202" ], "inactive_portal_ips": [ "192.168.100.202" ], "portal_ip_addresses": ["192.168.100.201"], "tpgs": 2 }, "node2.ceph.local": { "gateway_ip_list": [ "192.168.100.201", "192.168.100.202" ], "inactive_portal_ips": [ "192.168.100.201" ], "portal_ip_addresses": ["192.168.100.202"], "tpgs": 2 } }, "updated": "2018/12/07 09:19:02" } }, "updated": "2018/12/07 09:18:13", "version": 11 } ceph-iscsi-3.9/test/test_group.py000066400000000000000000000032401470665154300171520ustar00rootroot00000000000000#!/usr/bin/env python import sys import logging from ceph_iscsi_config.group import Group import ceph_iscsi_config.settings as settings settings.init() # Pre-reqs # 1. You need a working ceph iscsi environment # 2. target, disks and clients need to pre-exist log = logging.getLogger() log.setLevel(logging.DEBUG) ch = logging.StreamHandler(sys.stdout) log.addHandler(ch) # 1. 
Create a new group definition target_iqn = 'iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw' new_group = Group(log, target_iqn, "mygroup", ['iqn.1994-05.com.redhat:my-esx-1', 'iqn.1994-05.com.redhat:my-esx-2'], ['rbd.disk_2', 'rbd.disk_1']) new_group.apply() assert not new_group.error, "Error caught when creating the group" target_config = new_group.config.config['targets'][target_iqn] assert "mygroup" in target_config["groups"], \ "Group did not create/commit correctly to the configuration" update_group = Group(log, target_iqn, "mygroup", ['iqn.1994-05.com.redhat:my-esx-1', 'iqn.1994-05.com.redhat:my-esx-2', 'iqn.1994-05.com.redhat:my-esx-3'], ['rbd.disk_2', 'rbd.disk_1', 'rbd.disk_3']) update_group.apply() target_config = update_group.config.config['targets'][target_iqn] assert len(target_config['groups']['mygroup']['members']) == 3, \ "mygroup doesn't contain 3 members" # ?. Delete the group, just created old_group = Group(log, target_iqn, "mygroup") old_group.purge() # target_config = old_group.config.config['targets'][target_iqn] assert "mygroup" not in target_config["groups"], \ "Group did not get removed from the config object" ceph-iscsi-3.9/test/test_settings.py000066400000000000000000000074671470665154300176750ustar00rootroot00000000000000# -*- coding: utf-8 -*- from __future__ import absolute_import import unittest from ceph_iscsi_config.settings import Settings from ceph_iscsi_config.target import GWTarget from ceph_iscsi_config.gateway_setting import SYS_SETTINGS class SettingsTest(unittest.TestCase): @staticmethod def _normalize(controls, settings=None): if not settings: settings = GWTarget.SETTINGS return Settings.normalize_controls(controls, settings) def test_normalize_controls_int(self): self.assertEqual( SettingsTest._normalize({'dataout_timeout': 3}), {'dataout_timeout': 3}) self.assertEqual( SettingsTest._normalize({'dataout_timeout': '3'}), {'dataout_timeout': 3}) with self.assertRaises(ValueError) as cm: SettingsTest._normalize({'dataout_timeout': 
1}) self.assertEqual('expected integer >= 2 for dataout_timeout', str(cm.exception)) with self.assertRaises(ValueError) as cm: SettingsTest._normalize({'dataout_timeout': 64}) self.assertEqual('expected integer <= 60 for dataout_timeout', str(cm.exception)) with self.assertRaises(ValueError) as cm: SettingsTest._normalize({'dataout_timeout': '64'}) self.assertEqual('expected integer <= 60 for dataout_timeout', str(cm.exception)) with self.assertRaises(ValueError) as cm: SettingsTest._normalize({'dataout_timeout': 'abc'}) self.assertEqual('expected integer for dataout_timeout', str(cm.exception)) def test_normalize_controls_yes_no(self): self.assertEqual( SettingsTest._normalize({'immediate_data': 'Yes'}), {'immediate_data': True}) self.assertEqual( SettingsTest._normalize({'immediate_data': 'yes'}), {'immediate_data': True}) self.assertEqual( SettingsTest._normalize({'immediate_data': True}), {'immediate_data': True}) self.assertEqual( SettingsTest._normalize({'immediate_data': 'True'}), {'immediate_data': True}) self.assertEqual( SettingsTest._normalize({'immediate_data': 'true'}), {'immediate_data': True}) self.assertEqual( SettingsTest._normalize({'immediate_data': '1'}), {'immediate_data': True}) self.assertEqual( SettingsTest._normalize({'immediate_data': 'No'}), {'immediate_data': False}) self.assertEqual( SettingsTest._normalize({'immediate_data': 'no'}), {'immediate_data': False}) self.assertEqual( SettingsTest._normalize({'immediate_data': False}), {'immediate_data': False}) self.assertEqual( SettingsTest._normalize({'immediate_data': 'False'}), {'immediate_data': False}) self.assertEqual( SettingsTest._normalize({'immediate_data': 'false'}), {'immediate_data': False}) self.assertEqual( SettingsTest._normalize({'immediate_data': '0'}), {'immediate_data': False}) with self.assertRaises(ValueError) as cm: SettingsTest._normalize({'immediate_data': 'abc'}) self.assertEqual('expected yes or no for immediate_data', str(cm.exception)) with 
self.assertRaises(ValueError) as cm:
            SettingsTest._normalize({'immediate_data': 123})
        self.assertEqual('expected yes or no for immediate_data',
                         str(cm.exception))

    def test_normalise_list(self):
        self.assertDictEqual(
            SettingsTest._normalize(
                {'trusted_ip_list': '10.1.1.1,10.1.1.2,10.1.1.3'},
                SYS_SETTINGS),
            {'trusted_ip_list': ['10.1.1.1', '10.1.1.2', '10.1.1.3']})
        self.assertDictEqual(
            SettingsTest._normalize(
                {'trusted_ip_list': '10.1.1.1, 10.1.1.2 , 10.1.1.3'},
                SYS_SETTINGS),
            {'trusted_ip_list': ['10.1.1.1', '10.1.1.2', '10.1.1.3']})

# ceph-iscsi-3.9/tox.ini

[tox]
envlist = py27, py3, flake8

[testenv:flake8]
deps=flake8
commands=flake8 --ignore=C901

[flake8]
max-line-length = 99
max-complexity = 10
filename = *.py
exclude = .git,
          .tox,
          __pycache__
          venv

[testenv]
deps =
    pytest
    mock
    cryptography
    rtslib_fb
    netifaces
commands=
    {envbindir}/py.test --ignore=test/test_group.py test/

# ceph-iscsi-3.9/usr/lib/systemd/system/rbd-target-api.service

[Unit]
Description=Ceph iscsi target configuration API
Requires=sys-kernel-config.mount
After=sys-kernel-config.mount network-online.target tcmu-runner.service
Wants=network-online.target tcmu-runner.service

[Service]
LimitNOFILE=1048576
LimitNPROC=1048576
EnvironmentFile=-/etc/sysconfig/ceph
Type=simple
User=root
Group=root
ExecStart=/usr/bin/rbd-target-api
ExecReload=/usr/bin/kill -HUP $MAINPID
PrivateDevices=yes
ProtectHome=true
ProtectSystem=full
PrivateTmp=true
Restart=on-failure
StartLimitInterval=30min
StartLimitBurst=3
TimeoutStopSec=600

[Install]
WantedBy=multi-user.target

# ceph-iscsi-3.9/usr/lib/systemd/system/rbd-target-gw.service

[Unit]
Description=Setup system to export rbd images through LIO
Requires=sys-kernel-config.mount
After=sys-kernel-config.mount network-online.target rbd-target-api.service
Wants=network-online.target

[Service]
LimitNOFILE=1048576
LimitNPROC=1048576
Environment=PYTHONUNBUFFERED=TRUE
EnvironmentFile=-/etc/sysconfig/ceph
Type=simple
User=root
Group=root
ExecStart=/usr/bin/rbd-target-gw
ExecReload=/usr/bin/kill -HUP $MAINPID
PrivateDevices=yes
ProtectHome=true
ProtectSystem=full
PrivateTmp=true
Restart=on-failure
StartLimitInterval=30min
StartLimitBurst=3

[Install]
WantedBy=multi-user.target
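As an appendix to the sources above: `iscsi_active()` in rbd-target-api.py decides whether the iSCSI port is up by scanning `/proc/net/tcp` for a socket on port 3260 (hex `0CBC`) in the LISTEN state (hex `0A`). The following is a minimal, self-contained sketch of that parsing, not the shipped function; the sample socket-table lines are fabricated for illustration, and it matches the port with `endswith` rather than the looser substring test used in the original.

```python
# Sketch of the /proc/net/tcp parsing idea behind iscsi_active():
# port 3260 is 0x0CBC, and TCP state 0x0A means LISTEN.

ISCSI_PORT_HEX = format(3260, '04X')   # '0CBC'
TCP_LISTEN = '0A'


def is_iscsi_listener(proc_net_tcp_lines):
    """Return True if any socket-table line shows a listener on 3260."""
    for line in proc_net_tcp_lines:
        fields = line.split()
        if len(fields) < 4:
            continue  # header or malformed line
        local_address, state = fields[1], fields[3]
        # local_address is '<hex ip>:<hex port>'
        if local_address.endswith(':' + ISCSI_PORT_HEX) and state == TCP_LISTEN:
            return True
    return False


# Fabricated sample resembling /proc/net/tcp output
sample = [
    "  sl  local_address rem_address   st tx_queue rx_queue",
    "   0: 00000000:0CBC 00000000:0000 0A 00000000:00000000",
]
print(is_iscsi_listener(sample))  # True
```

The real code additionally checks `/proc/net/tcp6`, so a gateway listening only on IPv6 is still reported as active.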